SCSI misc on 20250529

Updates to the usual drivers (smartpqi, ufs, lpfc, scsi_debug, target,
 hisi_sas) with the only substantive core change being the removal of
 the stream_status member from the scsi_stream_status_header (to get
 rid of flex array members).
 
 Signed-off-by: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCaDiQ2CYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishWtDAP9p0Jd/
 H4VMpYT5iETyq3TeAXTm1jVXL9Gnux5JMfskGwEA9kST8O6gorVOVKck+Eq0Hc9r
 w8NDnBK91hknIai5kE8=
 =/1L9
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "Updates to the usual drivers (smartpqi, ufs, lpfc, scsi_debug, target,
  hisi_sas) with the only substantive core change being the removal of
  the stream_status member from the scsi_stream_status_header (to get
  rid of flex array members)"
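
For context, a rough sketch of that core change (field layout per include/scsi/scsi_proto.h; illustrative, not the verbatim patch):

struct scsi_stream_status_header {
	__be32 len;	/* length in bytes of the stream status payload */
	u16 reserved;
	__be16 number_of_open_streams;
	/*
	 * Removed by this series:
	 *	DECLARE_FLEX_ARRAY(struct scsi_stream_status, stream_status);
	 * so the header no longer ends in a flexible array member; callers
	 * now handle the descriptor array separately.
	 */
};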

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (77 commits)
  scsi: target: core: Constify struct target_opcode_descriptor
  scsi: target: core: Constify enabled() in struct target_opcode_descriptor
  scsi: hisi_sas: Fix warning detected by sparse
  scsi: mpt3sas: Fix _ctl_get_mpt_mctp_passthru_adapter() to return IOC pointer
  scsi: sg: Remove unnecessary NULL check before unregister_sysctl_table()
  scsi: ufs: mcq: Delete ufshcd_release_scsi_cmd() in ufshcd_mcq_abort()
  scsi: ufs: qcom: dt-bindings: Document the SM8750 UFS Controller
  scsi: mvsas: Fix typos in SAS/SATA VSP register comments
  scsi: fnic: Replace memset() with eth_zero_addr()
  scsi: ufs: core: Support updating device command timeout
  scsi: ufs: core: Change hwq_id type and value
  scsi: ufs: core: Increase the UIC command timeout further
  scsi: zfcp: Simplify workqueue allocation
  scsi: ufs: core: Print error value as hex format in ufshcd_err_handler()
  scsi: sd: Remove the stream_status member from scsi_stream_status_header
  scsi: docs: Clean up some style in scsi_mid_low_api
  scsi: core: Remove unused scsi_dev_info_list_del_keyed()
  scsi: isci: Remove unused sci_remote_device_reset()
  scsi: scsi_debug: Reduce DEF_ATOMIC_WR_MAX_LENGTH
  scsi: smartpqi: Delete a stray tab in pqi_is_parity_write_stream()
  ...
Commit f66bc387ef by Linus Torvalds, 2025-05-29 22:17:52 -07:00
63 changed files with 1996 additions and 1829 deletions


@ -1636,3 +1636,52 @@ Description:
attribute value.
The attribute is read only.
What: /sys/bus/platform/drivers/ufshcd/*/wb_resize_enable
What: /sys/bus/platform/devices/*.ufs/wb_resize_enable
Date: April 2025
Contact: Huan Tang <tanghuan@vivo.com>
Description:
The host can enable a WriteBooster buffer resize by writing one of
the following values to this attribute.
======== ======================================
idle     There is no resize operation
decrease Decrease WriteBooster buffer size
increase Increase WriteBooster buffer size
======== ======================================
The file is write only.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_resize_hint
What: /sys/bus/platform/devices/*.ufs/attributes/wb_resize_hint
Date: April 2025
Contact: Huan Tang <tanghuan@vivo.com>
Description:
wb_resize_hint indicates which type of WriteBooster buffer resize is
recommended by the device.
========= ======================================
keep      Recommend keeping the buffer size
decrease  Recommend decreasing the buffer size
increase  Recommend increasing the buffer size
========= ======================================
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_resize_status
What: /sys/bus/platform/devices/*.ufs/attributes/wb_resize_status
Date: April 2025
Contact: Huan Tang <tanghuan@vivo.com>
Description:
The host can check the resize operation status of the WriteBooster
buffer by reading this attribute.
================ ========================================
idle             Resize operation is not issued
in_progress      Resize operation in progress
complete_success Resize operation completed successfully
general_failure  Resize operation general failure
================ ========================================
The file is read only.
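
Taken together, the three attributes form a simple poll/act loop. A minimal userspace sketch of that flow, assuming the documented paths; the "1d84000.ufs" device name is a placeholder for whatever the platform device is called on a given board:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define UFS_DEV "/sys/bus/platform/devices/1d84000.ufs"	/* placeholder */

int main(void)
{
	char hint[32] = "";
	int fd, n;

	/* 1. Read the device's recommendation (read-only file). */
	fd = open(UFS_DEV "/attributes/wb_resize_hint", O_RDONLY);
	if (fd < 0)
		return 1;
	n = read(fd, hint, sizeof(hint) - 1);
	close(fd);

	/* 2. If a shrink is recommended, trigger it (write-only file). */
	if (n > 0 && !strncmp(hint, "decrease", 8)) {
		fd = open(UFS_DEV "/wb_resize_enable", O_WRONLY);
		if (fd < 0)
			return 1;
		if (write(fd, "decrease", 8) != 8)
			perror("wb_resize_enable");
		close(fd);
	}

	/* 3. Poll attributes/wb_resize_status until "complete_success". */
	return 0;
}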


@ -43,6 +43,7 @@ properties:
- qcom,sm8450-ufshc
- qcom,sm8550-ufshc
- qcom,sm8650-ufshc
- qcom,sm8750-ufshc
- const: qcom,ufshc
- const: jedec,ufs-2.0
@ -158,6 +159,7 @@ allOf:
- qcom,sm8450-ufshc
- qcom,sm8550-ufshc
- qcom,sm8650-ufshc
- qcom,sm8750-ufshc
then:
properties:
clocks:


@ -37,7 +37,7 @@ ISA adapters).]
The SCSI mid level isolates an LLD from other layers such as the SCSI
upper layer drivers and the block layer.
This version of the document roughly matches linux kernel version 2.6.8 .
This version of the document roughly matches Linux kernel version 2.6.8 .
Documentation
=============
@ -48,7 +48,7 @@ found in that directory. A more recent copy of this document may be found
at https://docs.kernel.org/scsi/scsi_mid_low_api.html. Many LLDs are
documented in Documentation/scsi (e.g. aic7xxx.rst). The SCSI mid-level is
briefly described in scsi.rst which contains a URL to a document describing
the SCSI subsystem in the Linux Kernel 2.4 series. Two upper level
the SCSI subsystem in the Linux kernel 2.4 series. Two upper level
drivers have documents in that directory: st.rst (SCSI tape driver) and
scsi-generic.rst (for the sg driver).
@ -75,7 +75,7 @@ It is probably best to study how existing LLDs are organized.
As the 2.5 series development kernels evolve into the 2.6 series
production series, changes are being introduced into this interface. An
example of this is driver initialization code where there are now 2 models
available. The older one, similar to what was found in the lk 2.4 series,
available. The older one, similar to what was found in the Linux 2.4 series,
is based on hosts that are detected at HBA driver load time. This will be
referred to the "passive" initialization model. The newer model allows HBAs
to be hot plugged (and unplugged) during the lifetime of the LLD and will
@ -1026,7 +1026,7 @@ initialized from the driver's struct scsi_host_template instance. Members
of interest:
host_no
- system wide unique number that is used for identifying
- system-wide unique number that is used for identifying
this host. Issued in ascending order from 0.
can_queue
- must be greater than 0; do not send more than can_queue
@ -1053,7 +1053,7 @@ of interest:
- pointer to driver's struct scsi_host_template from which
this struct Scsi_Host instance was spawned
hostt->proc_name
- name of LLD. This is the driver name that sysfs uses
- name of LLD. This is the driver name that sysfs uses.
transportt
- pointer to driver's struct scsi_transport_template instance
(if any). FC and SPI transports currently supported.
@ -1067,7 +1067,7 @@ The scsi_host structure is defined in include/scsi/scsi_host.h
struct scsi_device
------------------
Generally, there is one instance of this structure for each SCSI logical unit
on a host. Scsi devices connected to a host are uniquely identified by a
on a host. SCSI devices connected to a host are uniquely identified by a
channel number, target id and logical unit number (lun).
The structure is defined in include/scsi/scsi_device.h
@ -1091,7 +1091,7 @@ Members of interest:
- should be set by LLD prior to calling 'done'. A value
of 0 implies a successfully completed command (and all
data (if any) has been transferred to or from the SCSI
target device). 'result' is a 32 bit unsigned integer that
target device). 'result' is a 32-bit unsigned integer that
can be viewed as 2 related bytes. The SCSI status value is
in the LSB. See include/scsi/scsi.h status_byte() and
host_byte() macros and related constants.
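For illustration, a hedged sketch of pulling those two fields out of 'result' (scp is assumed to be the struct scsi_cmnd pointer; host_byte() is the real macro, spelled out here by hand):

u32 result = scp->result;
u8 scsi_status = result & 0xff;		/* SCSI status byte, in the LSB */
u8 host_status = (result >> 16) & 0xff;	/* what host_byte() returns */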
@ -1180,8 +1180,8 @@ may get out of synchronization. This is why it is best for the LLD
to perform autosense.
Changes since lk 2.4 series
===========================
Changes since Linux kernel 2.4 series
=====================================
io_request_lock has been replaced by several finer grained locks. The lock
relevant to LLDs is struct Scsi_Host::host_lock and there is
one per SCSI host.


@ -1882,6 +1882,11 @@ static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
if (IS_ERR_OR_NULL(ice))
return PTR_ERR_OR_ZERO(ice);
if (qcom_ice_get_supported_key_type(ice) != BLK_CRYPTO_KEY_TYPE_RAW) {
dev_warn(dev, "Wrapped keys not supported. Disabling inline encryption support.\n");
return 0;
}
msm_host->ice = ice;
/* Initialize the blk_crypto_profile */
@ -1962,16 +1967,7 @@ static int sdhci_msm_ice_keyslot_program(struct blk_crypto_profile *profile,
struct sdhci_msm_host *msm_host =
sdhci_msm_host_from_crypto_profile(profile);
/* Only AES-256-XTS has been tested so far. */
if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
return -EOPNOTSUPP;
return qcom_ice_program_key(msm_host->ice,
QCOM_ICE_CRYPTO_ALG_AES_XTS,
QCOM_ICE_CRYPTO_KEY_SIZE_256,
key->bytes,
key->crypto_cfg.data_unit_size / 512,
slot);
return qcom_ice_program_key(msm_host->ice, slot, key);
}
static int sdhci_msm_ice_keyslot_evict(struct blk_crypto_profile *profile,


@ -312,15 +312,13 @@ static void zfcp_print_sl(struct seq_file *m, struct service_level *sl)
static int zfcp_setup_adapter_work_queue(struct zfcp_adapter *adapter)
{
char name[TASK_COMM_LEN];
adapter->work_queue =
alloc_ordered_workqueue("zfcp_q_%s", WQ_MEM_RECLAIM,
dev_name(&adapter->ccw_device->dev));
if (!adapter->work_queue)
return -ENOMEM;
snprintf(name, sizeof(name), "zfcp_q_%s",
dev_name(&adapter->ccw_device->dev));
adapter->work_queue = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
if (adapter->work_queue)
return 0;
return -ENOMEM;
return 0;
}
static void zfcp_destroy_adapter_work_queue(struct zfcp_adapter *adapter)
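
The simplification works because alloc_ordered_workqueue() accepts a printf-style format string plus varargs (it is a macro wrapping alloc_workqueue()), so the temporary name buffer goes away entirely. A minimal sketch of the resulting pattern, which also avoids silently truncating the queue name to TASK_COMM_LEN bytes:

adapter->work_queue =
	alloc_ordered_workqueue("zfcp_q_%s", WQ_MEM_RECLAIM,
				dev_name(&adapter->ccw_device->dev));
if (!adapter->work_queue)
	return -ENOMEM;
return 0;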

(File diff suppressed because it is too large.)


@ -3804,7 +3804,7 @@ sli_cmd_common_write_object(struct sli4 *sli4, void *buf, u16 noc,
wr_obj->desired_write_len_dword = cpu_to_le32(dwflags);
wr_obj->write_offset = cpu_to_le32(offset);
strncpy(wr_obj->object_name, obj_name, sizeof(wr_obj->object_name) - 1);
strscpy(wr_obj->object_name, obj_name);
wr_obj->host_buffer_descriptor_count = cpu_to_le32(1);
bde = (struct sli4_bde *)wr_obj->host_buffer_descriptor;
@ -3833,7 +3833,7 @@ sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, char *obj_name)
SLI4_SUBSYSTEM_COMMON, CMD_V0,
SLI4_RQST_PYLD_LEN(cmn_delete_object));
strncpy(req->object_name, obj_name, sizeof(req->object_name) - 1);
strscpy(req->object_name, obj_name);
return 0;
}
@ -3856,7 +3856,7 @@ sli_cmd_common_read_object(struct sli4 *sli4, void *buf, u32 desired_read_len,
cpu_to_le32(desired_read_len & SLI4_REQ_DESIRE_READLEN);
rd_obj->read_offset = cpu_to_le32(offset);
strncpy(rd_obj->object_name, obj_name, sizeof(rd_obj->object_name) - 1);
strscpy(rd_obj->object_name, obj_name);
rd_obj->host_buffer_descriptor_count = cpu_to_le32(1);
bde = (struct sli4_bde *)rd_obj->host_buffer_descriptor;
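
The strncpy() → strscpy() conversions above follow the tree-wide move away from strncpy(). A short sketch of the difference, assuming a fixed-size array destination:

char name[16];

/* Old: does not NUL-terminate if src fills the buffer, hence the
 * manual "sizeof(...) - 1" in the removed lines.
 */
strncpy(name, src, sizeof(name) - 1);

/* New: copies at most sizeof(name) - 1 bytes, always terminates, and
 * returns -E2BIG on truncation. The two-argument form infers the
 * destination size from the array type.
 */
strscpy(name, src);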


@ -200,7 +200,7 @@ void fnic_fcoe_start_fcf_discovery(struct fnic *fnic)
return;
}
memset(iport->selected_fcf.fcf_mac, 0, ETH_ALEN);
eth_zero_addr(iport->selected_fcf.fcf_mac);
pdisc_sol = (struct fip_discovery *) frame;
*pdisc_sol = (struct fip_discovery) {
@ -588,12 +588,12 @@ void fnic_common_fip_cleanup(struct fnic *fnic)
if (!is_zero_ether_addr(iport->fpma))
vnic_dev_del_addr(fnic->vdev, iport->fpma);
memset(iport->fpma, 0, ETH_ALEN);
eth_zero_addr(iport->fpma);
iport->fcid = 0;
iport->r_a_tov = 0;
iport->e_d_tov = 0;
memset(fnic->iport.fcfmac, 0, ETH_ALEN);
memset(iport->selected_fcf.fcf_mac, 0, ETH_ALEN);
eth_zero_addr(fnic->iport.fcfmac);
eth_zero_addr(iport->selected_fcf.fcf_mac);
iport->selected_fcf.fcf_priority = 0;
iport->selected_fcf.fka_adv_period = 0;
iport->selected_fcf.ka_disabled = 0;
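
eth_zero_addr() is a trivial helper from <linux/etherdevice.h>, so these conversions are behavior-preserving and simply state the intent ("clear a MAC address") directly:

/* From include/linux/etherdevice.h (abridged): */
static inline void eth_zero_addr(u8 *addr)
{
	memset(addr, 0x00, ETH_ALEN);	/* ETH_ALEN == 6 */
}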


@ -46,6 +46,13 @@
#define HISI_SAS_IOST_ITCT_CACHE_DW_SZ 10
#define HISI_SAS_FIFO_DATA_DW_SIZE 32
#define HISI_SAS_REG_MEM_SIZE 4
#define HISI_SAS_MAX_CDB_LEN 16
#define HISI_SAS_BLK_QUEUE_DEPTH 64
#define BYTE_TO_DW 4
#define BYTE_TO_DDW 8
#define HISI_SAS_STATUS_BUF_SZ (sizeof(struct hisi_sas_status_buffer))
#define HISI_SAS_COMMAND_TABLE_SZ (sizeof(union hisi_sas_command_table))
@ -92,6 +99,8 @@
#define HISI_SAS_WAIT_PHYUP_TIMEOUT (30 * HZ)
#define HISI_SAS_CLEAR_ITCT_TIMEOUT (20 * HZ)
#define HISI_SAS_DELAY_FOR_PHY_DISABLE 100
#define NAME_BUF_SIZE 256
struct hisi_hba;
@ -167,6 +176,8 @@ struct hisi_sas_debugfs_fifo {
u32 rd_data[HISI_SAS_FIFO_DATA_DW_SIZE];
};
#define FRAME_RCVD_BUF 32
#define SAS_PHY_RESV_SIZE 2
struct hisi_sas_phy {
struct work_struct works[HISI_PHYES_NUM];
struct hisi_hba *hisi_hba;
@ -178,10 +189,10 @@ struct hisi_sas_phy {
spinlock_t lock;
u64 port_id; /* from hw */
u64 frame_rcvd_size;
u8 frame_rcvd[32];
u8 frame_rcvd[FRAME_RCVD_BUF];
u8 phy_attached;
u8 in_reset;
u8 reserved[2];
u8 reserved[SAS_PHY_RESV_SIZE];
u32 phy_type;
u32 code_violation_err_count;
enum sas_linkrate minimum_linkrate;
@ -348,7 +359,8 @@ struct hisi_sas_hw {
const struct scsi_host_template *sht;
};
#define HISI_SAS_MAX_DEBUGFS_DUMP (50)
#define HISI_SAS_MAX_DEBUGFS_DUMP 50
#define HISI_SAS_DEFAULT_DEBUGFS_DUMP 1
struct hisi_sas_debugfs_cq {
struct hisi_sas_cq *cq;
@ -448,12 +460,12 @@ struct hisi_hba {
dma_addr_t sata_breakpoint_dma;
struct hisi_sas_slot *slot_info;
unsigned long flags;
const struct hisi_sas_hw *hw; /* Low level hw interface */
const struct hisi_sas_hw *hw; /* Low level hw interface */
unsigned long sata_dev_bitmap[BITS_TO_LONGS(HISI_SAS_MAX_DEVICES)];
struct work_struct rst_work;
u32 phy_state;
u32 intr_coal_ticks; /* Time of interrupt coalesce in us */
u32 intr_coal_count; /* Interrupt count to coalesce */
u32 intr_coal_ticks; /* Time of interrupt coalesce in us */
u32 intr_coal_count; /* Interrupt count to coalesce */
int cq_nvecs;
@ -528,12 +540,13 @@ struct hisi_sas_cmd_hdr {
__le64 dif_prd_table_addr;
};
#define ITCT_RESV_DDW 12
struct hisi_sas_itct {
__le64 qw0;
__le64 sas_addr;
__le64 qw2;
__le64 qw3;
__le64 qw4_15[12];
__le64 qw4_15[ITCT_RESV_DDW];
};
struct hisi_sas_iost {
@ -543,22 +556,26 @@ struct hisi_sas_iost {
__le64 qw3;
};
#define ERROR_RECORD_BUF_DW 4
struct hisi_sas_err_record {
u32 data[4];
u32 data[ERROR_RECORD_BUF_DW];
};
#define FIS_RESV_DW 3
struct hisi_sas_initial_fis {
struct hisi_sas_err_record err_record;
struct dev_to_host_fis fis;
u32 rsvd[3];
u32 rsvd[FIS_RESV_DW];
};
#define BREAKPOINT_DATA_SIZE 128
struct hisi_sas_breakpoint {
u8 data[128];
u8 data[BREAKPOINT_DATA_SIZE];
};
#define BREAKPOINT_TAG_NUM 32
struct hisi_sas_sata_breakpoint {
struct hisi_sas_breakpoint tag[32];
struct hisi_sas_breakpoint tag[BREAKPOINT_TAG_NUM];
};
struct hisi_sas_sge {
@ -569,13 +586,15 @@ struct hisi_sas_sge {
__le32 data_off;
};
#define SMP_CMD_TABLE_SIZE 44
struct hisi_sas_command_table_smp {
u8 bytes[44];
u8 bytes[SMP_CMD_TABLE_SIZE];
};
#define DUMMY_BUF_SIZE 12
struct hisi_sas_command_table_stp {
struct host_to_dev_fis command_fis;
u8 dummy[12];
u8 dummy[DUMMY_BUF_SIZE];
u8 atapi_cdb[ATAPI_CDB_LEN];
};
@ -589,12 +608,13 @@ struct hisi_sas_sge_dif_page {
struct hisi_sas_sge sge[HISI_SAS_SGE_DIF_PAGE_CNT];
} __aligned(16);
#define PROT_BUF_SIZE 7
struct hisi_sas_command_table_ssp {
struct ssp_frame_hdr hdr;
union {
struct {
struct ssp_command_iu task;
u32 prot[7];
u32 prot[PROT_BUF_SIZE];
};
struct ssp_tmf_iu ssp_task;
struct xfer_rdy_iu xfer_rdy;
@ -608,9 +628,10 @@ union hisi_sas_command_table {
struct hisi_sas_command_table_stp stp;
} __aligned(16);
#define IU_BUF_SIZE 1024
struct hisi_sas_status_buffer {
struct hisi_sas_err_record err;
u8 iu[1024];
u8 iu[IU_BUF_SIZE];
} __aligned(16);
struct hisi_sas_slot_buf_table {


@ -7,6 +7,16 @@
#include "hisi_sas.h"
#define DRV_NAME "hisi_sas"
#define LINK_RATE_BIT_MASK 2
#define FIS_BUF_SIZE 20
#define WAIT_CMD_COMPLETE_DELAY 100
#define WAIT_CMD_COMPLETE_TMROUT 5000
#define DELAY_FOR_LINK_READY 2000
#define BLK_CNT_OPTIMIZE_MARK 64
#define HZ_TO_MHZ 1000000
#define DELAY_FOR_SOFTRESET_MAX 1000
#define DELAY_FOR_SOFTRESET_MIN 900
#define DEV_IS_GONE(dev) \
((!dev) || (dev->dev_type == SAS_PHY_UNUSED))
@ -114,12 +124,10 @@ u8 hisi_sas_get_ata_protocol(struct sas_task *task)
}
default:
{
if (direction == DMA_NONE)
return HISI_SAS_SATA_PROTOCOL_NONDATA;
return hisi_sas_get_ata_protocol_from_tf(qc);
}
}
}
EXPORT_SYMBOL_GPL(hisi_sas_get_ata_protocol);
@ -131,7 +139,7 @@ void hisi_sas_sata_done(struct sas_task *task,
struct hisi_sas_status_buffer *status_buf =
hisi_sas_status_buf_addr_mem(slot);
u8 *iu = &status_buf->iu[0];
struct dev_to_host_fis *d2h = (struct dev_to_host_fis *)iu;
struct dev_to_host_fis *d2h = (struct dev_to_host_fis *)iu;
resp->frame_len = sizeof(struct dev_to_host_fis);
memcpy(&resp->ending_fis[0], d2h, sizeof(struct dev_to_host_fis));
@ -151,7 +159,7 @@ u8 hisi_sas_get_prog_phy_linkrate_mask(enum sas_linkrate max)
max -= SAS_LINK_RATE_1_5_GBPS;
for (i = 0; i <= max; i++)
rate |= 1 << (i * 2);
rate |= 1 << (i * LINK_RATE_BIT_MASK);
return rate;
}
EXPORT_SYMBOL_GPL(hisi_sas_get_prog_phy_linkrate_mask);
@ -900,7 +908,7 @@ int hisi_sas_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
if (ret)
return ret;
if (!dev_is_sata(dev))
sas_change_queue_depth(sdev, 64);
sas_change_queue_depth(sdev, HISI_SAS_BLK_QUEUE_DEPTH);
return 0;
}
@ -1262,7 +1270,7 @@ static int hisi_sas_phy_set_linkrate(struct hisi_hba *hisi_hba, int phy_no,
sas_phy->phy->minimum_linkrate = min;
hisi_sas_phy_enable(hisi_hba, phy_no, 0);
msleep(100);
msleep(HISI_SAS_DELAY_FOR_PHY_DISABLE);
hisi_hba->hw->phy_set_linkrate(hisi_hba, phy_no, &_r);
hisi_sas_phy_enable(hisi_hba, phy_no, 1);
@ -1292,7 +1300,7 @@ static int hisi_sas_control_phy(struct asd_sas_phy *sas_phy, enum phy_func func,
case PHY_FUNC_LINK_RESET:
hisi_sas_phy_enable(hisi_hba, phy_no, 0);
msleep(100);
msleep(HISI_SAS_DELAY_FOR_PHY_DISABLE);
hisi_sas_phy_enable(hisi_hba, phy_no, 1);
break;
@ -1347,7 +1355,7 @@ static void hisi_sas_fill_ata_reset_cmd(struct ata_device *dev,
static int hisi_sas_softreset_ata_disk(struct domain_device *device)
{
u8 fis[20] = {0};
u8 fis[FIS_BUF_SIZE] = {0};
struct ata_port *ap = device->sata_dev.ap;
struct ata_link *link;
int rc = TMF_RESP_FUNC_FAILED;
@ -1364,7 +1372,7 @@ static int hisi_sas_softreset_ata_disk(struct domain_device *device)
}
if (rc == TMF_RESP_FUNC_COMPLETE) {
usleep_range(900, 1000);
usleep_range(DELAY_FOR_SOFTRESET_MIN, DELAY_FOR_SOFTRESET_MAX);
ata_for_each_link(link, ap, EDGE) {
int pmp = sata_srst_pmp(link);
@ -1494,7 +1502,7 @@ static void hisi_sas_send_ata_reset_each_phy(struct hisi_hba *hisi_hba,
struct device *dev = hisi_hba->dev;
int rc = TMF_RESP_FUNC_FAILED;
struct ata_link *link;
u8 fis[20] = {0};
u8 fis[FIS_BUF_SIZE] = {0};
int i;
for (i = 0; i < hisi_hba->n_phy; i++) {
@ -1561,7 +1569,9 @@ void hisi_sas_controller_reset_prepare(struct hisi_hba *hisi_hba)
hisi_hba->phy_state = hisi_hba->hw->get_phys_state(hisi_hba);
scsi_block_requests(shost);
hisi_hba->hw->wait_cmds_complete_timeout(hisi_hba, 100, 5000);
hisi_hba->hw->wait_cmds_complete_timeout(hisi_hba,
WAIT_CMD_COMPLETE_DELAY,
WAIT_CMD_COMPLETE_TMROUT);
/*
* hisi_hba->timer is only used for v1/v2 hw, and check hw->sht
@ -1862,7 +1872,7 @@ static int hisi_sas_debug_I_T_nexus_reset(struct domain_device *device)
rc = ata_wait_after_reset(link, jiffies + HISI_SAS_WAIT_PHYUP_TIMEOUT,
smp_ata_check_ready_type);
} else {
msleep(2000);
msleep(DELAY_FOR_LINK_READY);
}
return rc;
@ -1885,33 +1895,14 @@ static int hisi_sas_I_T_nexus_reset(struct domain_device *device)
}
hisi_sas_dereg_device(hisi_hba, device);
rc = hisi_sas_debug_I_T_nexus_reset(device);
if (rc == TMF_RESP_FUNC_COMPLETE && dev_is_sata(device)) {
struct sas_phy *local_phy;
if (dev_is_sata(device)) {
rc = hisi_sas_softreset_ata_disk(device);
switch (rc) {
case -ECOMM:
rc = -ENODEV;
break;
case TMF_RESP_FUNC_FAILED:
case -EMSGSIZE:
case -EIO:
local_phy = sas_get_local_phy(device);
rc = sas_phy_enable(local_phy, 0);
if (!rc) {
local_phy->enabled = 0;
dev_err(dev, "Disabled local phy of ATA disk %016llx due to softreset fail (%d)\n",
SAS_ADDR(device->sas_addr), rc);
rc = -ENODEV;
}
sas_put_local_phy(local_phy);
break;
default:
break;
}
if (rc == TMF_RESP_FUNC_FAILED)
dev_err(dev, "ata disk %016llx reset (%d)\n",
SAS_ADDR(device->sas_addr), rc);
}
rc = hisi_sas_debug_I_T_nexus_reset(device);
if ((rc == TMF_RESP_FUNC_COMPLETE) || (rc == -ENODEV))
hisi_sas_release_task(hisi_hba, device);
@ -1934,12 +1925,9 @@ static int hisi_sas_lu_reset(struct domain_device *device, u8 *lun)
hisi_sas_dereg_device(hisi_hba, device);
if (dev_is_sata(device)) {
struct sas_phy *phy;
phy = sas_get_local_phy(device);
struct sas_phy *phy = sas_get_local_phy(device);
rc = sas_phy_reset(phy, true);
if (rc == 0)
hisi_sas_release_task(hisi_hba, device);
sas_put_local_phy(phy);
@ -2123,7 +2111,7 @@ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy,
hisi_sas_bytes_dmaed(hisi_hba, phy_no, gfp_flags);
hisi_sas_port_notify_formed(sas_phy);
} else {
struct hisi_sas_port *port = phy->port;
struct hisi_sas_port *port = phy->port;
if (test_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags) ||
phy->in_reset) {
@ -2296,12 +2284,14 @@ int hisi_sas_alloc(struct hisi_hba *hisi_hba)
goto err_out;
/* roundup to avoid overly large block size */
max_command_entries_ru = roundup(max_command_entries, 64);
max_command_entries_ru = roundup(max_command_entries,
BLK_CNT_OPTIMIZE_MARK);
if (hisi_hba->prot_mask & HISI_SAS_DIX_PROT_MASK)
sz_slot_buf_ru = sizeof(struct hisi_sas_slot_dif_buf_table);
else
sz_slot_buf_ru = sizeof(struct hisi_sas_slot_buf_table);
sz_slot_buf_ru = roundup(sz_slot_buf_ru, 64);
sz_slot_buf_ru = roundup(sz_slot_buf_ru, BLK_CNT_OPTIMIZE_MARK);
s = max(lcm(max_command_entries_ru, sz_slot_buf_ru), PAGE_SIZE);
blk_cnt = (max_command_entries_ru * sz_slot_buf_ru) / s;
slots_per_blk = s / sz_slot_buf_ru;
@ -2466,7 +2456,8 @@ int hisi_sas_get_fw_info(struct hisi_hba *hisi_hba)
if (IS_ERR(refclk))
dev_dbg(dev, "no ref clk property\n");
else
hisi_hba->refclk_frequency_mhz = clk_get_rate(refclk) / 1000000;
hisi_hba->refclk_frequency_mhz = clk_get_rate(refclk) /
HZ_TO_MHZ;
if (device_property_read_u32(dev, "phy-count", &hisi_hba->n_phy)) {
dev_err(dev, "could not get property phy-count\n");
@ -2588,7 +2579,7 @@ int hisi_sas_probe(struct platform_device *pdev,
shost->max_id = HISI_SAS_MAX_DEVICES;
shost->max_lun = ~0;
shost->max_channel = 1;
shost->max_cmd_len = 16;
shost->max_cmd_len = HISI_SAS_MAX_CDB_LEN;
if (hisi_hba->hw->slot_index_alloc) {
shost->can_queue = HISI_SAS_MAX_COMMANDS;
shost->cmd_per_lun = HISI_SAS_MAX_COMMANDS;


@ -1759,7 +1759,7 @@ static const struct scsi_host_template sht_v1_hw = {
.sg_tablesize = HISI_SAS_SGE_PAGE_CNT,
.sdev_init = hisi_sas_sdev_init,
.shost_groups = host_v1_hw_groups,
.host_reset = hisi_sas_host_reset,
.host_reset = hisi_sas_host_reset,
};
static const struct hisi_sas_hw hisi_sas_v1_hw = {


@ -2771,7 +2771,7 @@ static irqreturn_t int_phy_updown_v2_hw(int irq_no, void *p)
irq_msk = (hisi_sas_read32(hisi_hba, HGC_INVLD_DQE_INFO)
>> HGC_INVLD_DQE_INFO_FB_CH0_OFF) & 0x1ff;
while (irq_msk) {
if (irq_msk & 1) {
if (irq_msk & 1) {
u32 reg_value = hisi_sas_phy_read32(hisi_hba, phy_no,
CHL_INT0);
@ -3111,7 +3111,7 @@ static irqreturn_t fatal_axi_int_v2_hw(int irq_no, void *p)
return IRQ_HANDLED;
}
static irqreturn_t cq_thread_v2_hw(int irq_no, void *p)
static irqreturn_t cq_thread_v2_hw(int irq_no, void *p)
{
struct hisi_sas_cq *cq = p;
struct hisi_hba *hisi_hba = cq->hisi_hba;
@ -3499,7 +3499,7 @@ static int write_gpio_v2_hw(struct hisi_hba *hisi_hba, u8 reg_type,
* numbered drive in the fourth byte.
* See SFF-8485 Rev. 0.7 Table 24.
*/
void __iomem *reg_addr = hisi_hba->sgpio_regs +
void __iomem *reg_addr = hisi_hba->sgpio_regs +
reg_index * 4 + phy_no;
int data_idx = phy_no + 3 - (phy_no % 4) * 2;


@ -466,6 +466,12 @@
#define ITCT_HDR_RTOLT_OFF 48
#define ITCT_HDR_RTOLT_MSK (0xffffULL << ITCT_HDR_RTOLT_OFF)
/*debugfs*/
#define TWO_PARA_PER_LINE 2
#define FOUR_PARA_PER_LINE 4
#define DUMP_BUF_SIZE 8
#define BIST_BUF_SIZE 16
struct hisi_sas_protect_iu_v3_hw {
u32 dw0;
u32 lbrtcv;
@ -536,6 +542,43 @@ struct hisi_sas_err_record_v3 {
#define BASE_VECTORS_V3_HW 16
#define MIN_AFFINE_VECTORS_V3_HW (BASE_VECTORS_V3_HW + 1)
#define IRQ_PHY_UP_DOWN_INDEX 1
#define IRQ_CHL_INDEX 2
#define IRQ_AXI_INDEX 11
#define DELAY_FOR_RESET_HW 100
#define HDR_SG_MOD 0x2
#define LUN_SIZE 8
#define ATTR_PRIO_REGION 9
#define CDB_REGION 12
#define PRIO_OFF 3
#define TMF_REGION 10
#define TAG_MSB 12
#define TAG_LSB 13
#define SMP_FRAME_TYPE 2
#define SMP_CRC_SIZE 4
#define HDR_TAG_OFF 3
#define HOST_NO_OFF 6
#define PHY_NO_OFF 7
#define IDENTIFY_REG_READ 6
#define LINK_RESET_TIMEOUT_OFF 4
#define DECIMALISM_FLAG 10
#define WAIT_RETRY 100
#define WAIT_TMROUT 5000
#define ID_DWORD0_INDEX 0
#define ID_DWORD1_INDEX 1
#define ID_DWORD2_INDEX 2
#define ID_DWORD3_INDEX 3
#define ID_DWORD4_INDEX 4
#define ID_DWORD5_INDEX 5
#define TICKS_BIT_INDEX 24
#define COUNT_BIT_INDEX 8
#define PORT_REG_LENGTH 0x100
#define GLOBAL_REG_LENGTH 0x800
#define AXI_REG_LENGTH 0x61
#define RAS_REG_LENGTH 0x10
#define CHNL_INT_STS_MSK 0xeeeeeeee
#define CHNL_INT_STS_PHY_MSK 0xe
@ -811,17 +854,17 @@ static void config_id_frame_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
identify_buffer = (u32 *)(&identify_frame);
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD0,
__swab32(identify_buffer[0]));
__swab32(identify_buffer[ID_DWORD0_INDEX]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD1,
__swab32(identify_buffer[1]));
__swab32(identify_buffer[ID_DWORD1_INDEX]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD2,
__swab32(identify_buffer[2]));
__swab32(identify_buffer[ID_DWORD2_INDEX]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD3,
__swab32(identify_buffer[3]));
__swab32(identify_buffer[ID_DWORD3_INDEX]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD4,
__swab32(identify_buffer[4]));
__swab32(identify_buffer[ID_DWORD4_INDEX]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD5,
__swab32(identify_buffer[5]));
__swab32(identify_buffer[ID_DWORD5_INDEX]));
}
static void setup_itct_v3_hw(struct hisi_hba *hisi_hba,
@ -941,7 +984,7 @@ static int reset_hw_v3_hw(struct hisi_hba *hisi_hba)
/* Disable all of the PHYs */
hisi_sas_stop_phys(hisi_hba);
udelay(50);
udelay(HISI_SAS_DELAY_FOR_PHY_DISABLE);
/* Ensure axi bus idle */
ret = hisi_sas_read32_poll_timeout(AXI_CFG, val, !val,
@ -981,7 +1024,7 @@ static int hw_init_v3_hw(struct hisi_hba *hisi_hba)
return rc;
}
msleep(100);
msleep(DELAY_FOR_RESET_HW);
init_reg_v3_hw(hisi_hba);
if (guid_parse("D5918B4B-37AE-4E10-A99F-E5E8A6EF4C1F", &guid)) {
@ -1030,7 +1073,7 @@ static void disable_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
cfg &= ~PHY_CFG_ENA_MSK;
hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg);
mdelay(50);
mdelay(HISI_SAS_DELAY_FOR_PHY_DISABLE);
state = hisi_sas_read32(hisi_hba, PHY_STATE);
if (state & BIT(phy_no)) {
@ -1066,7 +1109,7 @@ static void phy_hard_reset_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO,
txid_auto | TX_HARDRST_MSK);
}
msleep(100);
msleep(HISI_SAS_DELAY_FOR_PHY_DISABLE);
hisi_sas_phy_enable(hisi_hba, phy_no, 1);
}
@ -1111,7 +1154,8 @@ static int get_wideport_bitmap_v3_hw(struct hisi_hba *hisi_hba, int port_id)
for (i = 0; i < hisi_hba->n_phy; i++)
if (phy_state & BIT(i))
if (((phy_port_num_ma >> (i * 4)) & 0xf) == port_id)
if (((phy_port_num_ma >> (i * HISI_SAS_REG_MEM_SIZE)) & 0xf) ==
port_id)
bitmap |= BIT(i);
return bitmap;
@ -1308,10 +1352,10 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
/* map itct entry */
dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF;
dw2 = (((sizeof(struct ssp_command_iu) + sizeof(struct ssp_frame_hdr)
+ 3) / 4) << CMD_HDR_CFL_OFF) |
((HISI_SAS_MAX_SSP_RESP_SZ / 4) << CMD_HDR_MRFL_OFF) |
(2 << CMD_HDR_SG_MOD_OFF);
dw2 = (((sizeof(struct ssp_command_iu) + sizeof(struct ssp_frame_hdr) +
3) / BYTE_TO_DW) << CMD_HDR_CFL_OFF) |
((HISI_SAS_MAX_SSP_RESP_SZ / BYTE_TO_DW) << CMD_HDR_MRFL_OFF) |
(HDR_SG_MOD << CMD_HDR_SG_MOD_OFF);
hdr->dw2 = cpu_to_le32(dw2);
hdr->transfer_tags = cpu_to_le32(slot->idx);
@ -1331,18 +1375,19 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
buf_cmd = hisi_sas_cmd_hdr_addr_mem(slot) +
sizeof(struct ssp_frame_hdr);
memcpy(buf_cmd, &task->ssp_task.LUN, 8);
memcpy(buf_cmd, &task->ssp_task.LUN, LUN_SIZE);
if (!tmf) {
buf_cmd[9] = ssp_task->task_attr;
memcpy(buf_cmd + 12, scsi_cmnd->cmnd, scsi_cmnd->cmd_len);
buf_cmd[ATTR_PRIO_REGION] = ssp_task->task_attr;
memcpy(buf_cmd + CDB_REGION, scsi_cmnd->cmnd,
scsi_cmnd->cmd_len);
} else {
buf_cmd[10] = tmf->tmf;
buf_cmd[TMF_REGION] = tmf->tmf;
switch (tmf->tmf) {
case TMF_ABORT_TASK:
case TMF_QUERY_TASK:
buf_cmd[12] =
buf_cmd[TAG_MSB] =
(tmf->tag_of_task_to_be_managed >> 8) & 0xff;
buf_cmd[13] =
buf_cmd[TAG_LSB] =
tmf->tag_of_task_to_be_managed & 0xff;
break;
default:
@ -1375,7 +1420,8 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
unsigned int interval = scsi_prot_interval(scsi_cmnd);
unsigned int ilog2_interval = ilog2(interval);
len = (task->total_xfer_len >> ilog2_interval) * 8;
len = (task->total_xfer_len >> ilog2_interval) *
BYTE_TO_DDW;
}
}
@ -1395,6 +1441,7 @@ static void prep_smp_v3_hw(struct hisi_hba *hisi_hba,
struct hisi_sas_device *sas_dev = device->lldd_dev;
dma_addr_t req_dma_addr;
unsigned int req_len;
u32 cfl;
/* req */
sg_req = &task->smp_task.smp_req;
@ -1405,7 +1452,7 @@ static void prep_smp_v3_hw(struct hisi_hba *hisi_hba,
/* dw0 */
hdr->dw0 = cpu_to_le32((port->id << CMD_HDR_PORT_OFF) |
(1 << CMD_HDR_PRIORITY_OFF) | /* high pri */
(2 << CMD_HDR_CMD_OFF)); /* smp */
(SMP_FRAME_TYPE << CMD_HDR_CMD_OFF)); /* smp */
/* map itct entry */
hdr->dw1 = cpu_to_le32((sas_dev->device_id << CMD_HDR_DEV_ID_OFF) |
@ -1413,8 +1460,9 @@ static void prep_smp_v3_hw(struct hisi_hba *hisi_hba,
(DIR_NO_DATA << CMD_HDR_DIR_OFF));
/* dw2 */
hdr->dw2 = cpu_to_le32((((req_len - 4) / 4) << CMD_HDR_CFL_OFF) |
(HISI_SAS_MAX_SMP_RESP_SZ / 4 <<
cfl = (req_len - SMP_CRC_SIZE) / BYTE_TO_DW;
hdr->dw2 = cpu_to_le32((cfl << CMD_HDR_CFL_OFF) |
(HISI_SAS_MAX_SMP_RESP_SZ / BYTE_TO_DW <<
CMD_HDR_MRFL_OFF));
hdr->transfer_tags = cpu_to_le32(slot->idx << CMD_HDR_IPTT_OFF);
@ -1479,12 +1527,13 @@ static void prep_ata_v3_hw(struct hisi_hba *hisi_hba,
struct ata_queued_cmd *qc = task->uldd_task;
hdr_tag = qc->tag;
task->ata_task.fis.sector_count |= (u8) (hdr_tag << 3);
task->ata_task.fis.sector_count |=
(u8)(hdr_tag << HDR_TAG_OFF);
dw2 |= hdr_tag << CMD_HDR_NCQ_TAG_OFF;
}
dw2 |= (HISI_SAS_MAX_STP_RESP_SZ / 4) << CMD_HDR_CFL_OFF |
2 << CMD_HDR_SG_MOD_OFF;
dw2 |= (HISI_SAS_MAX_STP_RESP_SZ / BYTE_TO_DW) << CMD_HDR_CFL_OFF |
HDR_SG_MOD << CMD_HDR_SG_MOD_OFF;
hdr->dw2 = cpu_to_le32(dw2);
/* dw3 */
@ -1544,9 +1593,9 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_PHY_ENA_MSK, 1);
port_id = hisi_sas_read32(hisi_hba, PHY_PORT_NUM_MA);
port_id = (port_id >> (4 * phy_no)) & 0xf;
port_id = (port_id >> (HISI_SAS_REG_MEM_SIZE * phy_no)) & 0xf;
link_rate = hisi_sas_read32(hisi_hba, PHY_CONN_RATE);
link_rate = (link_rate >> (phy_no * 4)) & 0xf;
link_rate = (link_rate >> (phy_no * HISI_SAS_REG_MEM_SIZE)) & 0xf;
if (port_id == 0xf) {
dev_err(dev, "phyup: phy%d invalid portid\n", phy_no);
@ -1579,8 +1628,8 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
sas_phy->oob_mode = SATA_OOB_MODE;
attached_sas_addr[0] = 0x50;
attached_sas_addr[6] = shost->host_no;
attached_sas_addr[7] = phy_no;
attached_sas_addr[HOST_NO_OFF] = shost->host_no;
attached_sas_addr[PHY_NO_OFF] = phy_no;
memcpy(sas_phy->attached_sas_addr,
attached_sas_addr,
SAS_ADDR_SIZE);
@ -1596,7 +1645,7 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
(struct sas_identify_frame *)frame_rcvd;
dev_info(dev, "phyup: phy%d link_rate=%d\n", phy_no, link_rate);
for (i = 0; i < 6; i++) {
for (i = 0; i < IDENTIFY_REG_READ; i++) {
u32 idaf = hisi_sas_phy_read32(hisi_hba, phy_no,
RX_IDAF_DWORD0 + (i * 4));
frame_rcvd[i] = __swab32(idaf);
@ -1701,7 +1750,7 @@ static irqreturn_t int_phy_up_down_bcast_v3_hw(int irq_no, void *p)
irq_msk = hisi_sas_read32(hisi_hba, CHNL_INT_STATUS)
& 0x11111111;
while (irq_msk) {
if (irq_msk & 1) {
if (irq_msk & 1) {
u32 irq_value = hisi_sas_phy_read32(hisi_hba, phy_no,
CHL_INT0);
u32 phy_state = hisi_sas_read32(hisi_hba, PHY_STATE);
@ -1866,7 +1915,7 @@ static void handle_chl_int2_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
dev_warn(dev, "phy%d stp link timeout (0x%x)\n",
phy_no, reg_value);
if (reg_value & BIT(4))
if (reg_value & BIT(LINK_RESET_TIMEOUT_OFF))
hisi_sas_notify_phy_event(phy, HISI_PHYE_LINK_RESET);
}
@ -1924,8 +1973,7 @@ static irqreturn_t int_chnl_int_v3_hw(int irq_no, void *p)
u32 irq_msk;
int phy_no = 0;
irq_msk = hisi_sas_read32(hisi_hba, CHNL_INT_STATUS)
& CHNL_INT_STS_MSK;
irq_msk = hisi_sas_read32(hisi_hba, CHNL_INT_STATUS) & CHNL_INT_STS_MSK;
while (irq_msk) {
if (irq_msk & (CHNL_INT_STS_INT0_MSK << (phy_no * CHNL_WIDTH)))
@ -2570,7 +2618,6 @@ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
if (vectors < 0)
return -ENOENT;
hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW - hisi_hba->iopoll_q_cnt;
shost->nr_hw_queues = hisi_hba->cq_nvecs + hisi_hba->iopoll_q_cnt;
@ -2583,7 +2630,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
struct pci_dev *pdev = hisi_hba->pci_dev;
int rc, i;
rc = devm_request_irq(dev, pci_irq_vector(pdev, 1),
rc = devm_request_irq(dev, pci_irq_vector(pdev, IRQ_PHY_UP_DOWN_INDEX),
int_phy_up_down_bcast_v3_hw, 0,
DRV_NAME " phy", hisi_hba);
if (rc) {
@ -2591,7 +2638,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
return -ENOENT;
}
rc = devm_request_irq(dev, pci_irq_vector(pdev, 2),
rc = devm_request_irq(dev, pci_irq_vector(pdev, IRQ_CHL_INDEX),
int_chnl_int_v3_hw, 0,
DRV_NAME " channel", hisi_hba);
if (rc) {
@ -2599,7 +2646,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
return -ENOENT;
}
rc = devm_request_irq(dev, pci_irq_vector(pdev, 11),
rc = devm_request_irq(dev, pci_irq_vector(pdev, IRQ_AXI_INDEX),
fatal_axi_int_v3_hw, 0,
DRV_NAME " fatal", hisi_hba);
if (rc) {
@ -2612,7 +2659,8 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
for (i = 0; i < hisi_hba->cq_nvecs; i++) {
struct hisi_sas_cq *cq = &hisi_hba->cq[i];
int nr = hisi_sas_intr_conv ? 16 : 16 + i;
int nr = hisi_sas_intr_conv ? BASE_VECTORS_V3_HW :
BASE_VECTORS_V3_HW + i;
unsigned long irqflags = hisi_sas_intr_conv ? IRQF_SHARED :
IRQF_ONESHOT;
@ -2670,14 +2718,14 @@ static void interrupt_disable_v3_hw(struct hisi_hba *hisi_hba)
struct pci_dev *pdev = hisi_hba->pci_dev;
int i;
synchronize_irq(pci_irq_vector(pdev, 1));
synchronize_irq(pci_irq_vector(pdev, 2));
synchronize_irq(pci_irq_vector(pdev, 11));
synchronize_irq(pci_irq_vector(pdev, IRQ_PHY_UP_DOWN_INDEX));
synchronize_irq(pci_irq_vector(pdev, IRQ_CHL_INDEX));
synchronize_irq(pci_irq_vector(pdev, IRQ_AXI_INDEX));
for (i = 0; i < hisi_hba->queue_count; i++)
hisi_sas_write32(hisi_hba, OQ0_INT_SRC_MSK + 0x4 * i, 0x1);
for (i = 0; i < hisi_hba->cq_nvecs; i++)
synchronize_irq(pci_irq_vector(pdev, i + 16));
synchronize_irq(pci_irq_vector(pdev, i + BASE_VECTORS_V3_HW));
hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK1, 0xffffffff);
hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK2, 0xffffffff);
@ -2709,7 +2757,7 @@ static int disable_host_v3_hw(struct hisi_hba *hisi_hba)
hisi_sas_stop_phys(hisi_hba);
mdelay(10);
mdelay(HISI_SAS_DELAY_FOR_PHY_DISABLE);
reg_val = hisi_sas_read32(hisi_hba, AXI_MASTER_CFG_BASE +
AM_CTRL_GLOBAL);
@ -2846,13 +2894,13 @@ static ssize_t intr_coal_ticks_v3_hw_store(struct device *dev,
u32 intr_coal_ticks;
int ret;
ret = kstrtou32(buf, 10, &intr_coal_ticks);
ret = kstrtou32(buf, DECIMALISM_FLAG, &intr_coal_ticks);
if (ret) {
dev_err(dev, "Input data of interrupt coalesce unmatch\n");
return -EINVAL;
}
if (intr_coal_ticks >= BIT(24)) {
if (intr_coal_ticks >= BIT(TICKS_BIT_INDEX)) {
dev_err(dev, "intr_coal_ticks must be less than 2^24!\n");
return -EINVAL;
}
@ -2885,13 +2933,13 @@ static ssize_t intr_coal_count_v3_hw_store(struct device *dev,
u32 intr_coal_count;
int ret;
ret = kstrtou32(buf, 10, &intr_coal_count);
ret = kstrtou32(buf, DECIMALISM_FLAG, &intr_coal_count);
if (ret) {
dev_err(dev, "Input data of interrupt coalesce unmatch\n");
return -EINVAL;
}
if (intr_coal_count >= BIT(8)) {
if (intr_coal_count >= BIT(COUNT_BIT_INDEX)) {
dev_err(dev, "intr_coal_count must be less than 2^8!\n");
return -EINVAL;
}
@ -3023,7 +3071,7 @@ static const struct hisi_sas_debugfs_reg_lu debugfs_port_reg_lu[] = {
static const struct hisi_sas_debugfs_reg debugfs_port_reg = {
.lu = debugfs_port_reg_lu,
.count = 0x100,
.count = PORT_REG_LENGTH,
.base_off = PORT_BASE,
};
@ -3097,7 +3145,7 @@ static const struct hisi_sas_debugfs_reg_lu debugfs_global_reg_lu[] = {
static const struct hisi_sas_debugfs_reg debugfs_global_reg = {
.lu = debugfs_global_reg_lu,
.count = 0x800,
.count = GLOBAL_REG_LENGTH,
};
static const struct hisi_sas_debugfs_reg_lu debugfs_axi_reg_lu[] = {
@ -3110,7 +3158,7 @@ static const struct hisi_sas_debugfs_reg_lu debugfs_axi_reg_lu[] = {
static const struct hisi_sas_debugfs_reg debugfs_axi_reg = {
.lu = debugfs_axi_reg_lu,
.count = 0x61,
.count = AXI_REG_LENGTH,
.base_off = AXI_MASTER_CFG_BASE,
};
@ -3127,7 +3175,7 @@ static const struct hisi_sas_debugfs_reg_lu debugfs_ras_reg_lu[] = {
static const struct hisi_sas_debugfs_reg debugfs_ras_reg = {
.lu = debugfs_ras_reg_lu,
.count = 0x10,
.count = RAS_REG_LENGTH,
.base_off = RAS_BASE,
};
@ -3136,7 +3184,7 @@ static void debugfs_snapshot_prepare_v3_hw(struct hisi_hba *hisi_hba)
struct Scsi_Host *shost = hisi_hba->shost;
scsi_block_requests(shost);
wait_cmds_complete_timeout_v3_hw(hisi_hba, 100, 5000);
wait_cmds_complete_timeout_v3_hw(hisi_hba, WAIT_RETRY, WAIT_TMROUT);
set_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
hisi_sas_sync_cqs(hisi_hba);
@ -3177,7 +3225,7 @@ static void read_iost_itct_cache_v3_hw(struct hisi_hba *hisi_hba,
return;
}
memset(buf, 0, cache_dw_size * 4);
memset(buf, 0, cache_dw_size * BYTE_TO_DW);
buf[0] = val;
for (i = 1; i < cache_dw_size; i++)
@ -3224,7 +3272,7 @@ static void hisi_sas_bist_test_restore_v3_hw(struct hisi_hba *hisi_hba)
reg_val = hisi_sas_phy_read32(hisi_hba, phy_no, PROG_PHY_LINK_RATE);
/* init OOB link rate as 1.5 Gbits */
reg_val &= ~CFG_PROG_OOB_PHY_LINK_RATE_MSK;
reg_val |= (0x8 << CFG_PROG_OOB_PHY_LINK_RATE_OFF);
reg_val |= (SAS_LINK_RATE_1_5_GBPS << CFG_PROG_OOB_PHY_LINK_RATE_OFF);
hisi_sas_phy_write32(hisi_hba, phy_no, PROG_PHY_LINK_RATE, reg_val);
/* enable PHY */
@ -3233,6 +3281,9 @@ static void hisi_sas_bist_test_restore_v3_hw(struct hisi_hba *hisi_hba)
#define SAS_PHY_BIST_CODE_INIT 0x1
#define SAS_PHY_BIST_CODE1_INIT 0X80
#define SAS_PHY_BIST_INIT_DELAY 100
#define SAS_PHY_BIST_LOOP_TEST_0 1
#define SAS_PHY_BIST_LOOP_TEST_1 2
static int debugfs_set_bist_v3_hw(struct hisi_hba *hisi_hba, bool enable)
{
u32 reg_val, mode_tmp;
@ -3251,12 +3302,13 @@ static int debugfs_set_bist_v3_hw(struct hisi_hba *hisi_hba, bool enable)
ffe[FFE_SATA_1_5_GBPS], ffe[FFE_SATA_3_0_GBPS],
ffe[FFE_SATA_6_0_GBPS], fix_code[FIXED_CODE],
fix_code[FIXED_CODE_1]);
mode_tmp = path_mode ? 2 : 1;
mode_tmp = path_mode ? SAS_PHY_BIST_LOOP_TEST_1 :
SAS_PHY_BIST_LOOP_TEST_0;
if (enable) {
/* some preparations before bist test */
hisi_sas_bist_test_prep_v3_hw(hisi_hba);
/* set linkrate of bit test*/
/* set linkrate of bit test */
reg_val = hisi_sas_phy_read32(hisi_hba, phy_no,
PROG_PHY_LINK_RATE);
reg_val &= ~CFG_PROG_OOB_PHY_LINK_RATE_MSK;
@ -3294,13 +3346,13 @@ static int debugfs_set_bist_v3_hw(struct hisi_hba *hisi_hba, bool enable)
SAS_PHY_BIST_CODE1_INIT);
}
mdelay(100);
mdelay(SAS_PHY_BIST_INIT_DELAY);
reg_val |= (CFG_RX_BIST_EN_MSK | CFG_TX_BIST_EN_MSK);
hisi_sas_phy_write32(hisi_hba, phy_no, SAS_PHY_BIST_CTRL,
reg_val);
/* clear error bit */
mdelay(100);
mdelay(SAS_PHY_BIST_INIT_DELAY);
hisi_sas_phy_read32(hisi_hba, phy_no, SAS_BIST_ERR_CNT);
} else {
/* disable bist test and recover it */
@ -3354,7 +3406,7 @@ static const struct scsi_host_template sht_v3_hw = {
.shost_groups = host_v3_hw_groups,
.sdev_groups = sdev_groups_v3_hw,
.tag_alloc_policy_rr = true,
.host_reset = hisi_sas_host_reset,
.host_reset = hisi_sas_host_reset,
.host_tagset = 1,
.mq_poll = queue_complete_v3_hw,
};
@ -3496,7 +3548,7 @@ static void debugfs_snapshot_port_reg_v3_hw(struct hisi_hba *hisi_hba)
for (phy_cnt = 0; phy_cnt < hisi_hba->n_phy; phy_cnt++) {
databuf = hisi_hba->debugfs_port_reg[dump_index][phy_cnt].data;
for (i = 0; i < port->count; i++, databuf++) {
offset = port->base_off + 4 * i;
offset = port->base_off + HISI_SAS_REG_MEM_SIZE * i;
*databuf = hisi_sas_phy_read32(hisi_hba, phy_cnt,
offset);
}
@ -3510,7 +3562,8 @@ static void debugfs_snapshot_global_reg_v3_hw(struct hisi_hba *hisi_hba)
int i;
for (i = 0; i < debugfs_global_reg.count; i++, databuf++)
*databuf = hisi_sas_read32(hisi_hba, 4 * i);
*databuf = hisi_sas_read32(hisi_hba,
HISI_SAS_REG_MEM_SIZE * i);
}
static void debugfs_snapshot_axi_reg_v3_hw(struct hisi_hba *hisi_hba)
@ -3521,7 +3574,9 @@ static void debugfs_snapshot_axi_reg_v3_hw(struct hisi_hba *hisi_hba)
int i;
for (i = 0; i < axi->count; i++, databuf++)
*databuf = hisi_sas_read32(hisi_hba, 4 * i + axi->base_off);
*databuf = hisi_sas_read32(hisi_hba,
HISI_SAS_REG_MEM_SIZE * i +
axi->base_off);
}
static void debugfs_snapshot_ras_reg_v3_hw(struct hisi_hba *hisi_hba)
@ -3532,7 +3587,9 @@ static void debugfs_snapshot_ras_reg_v3_hw(struct hisi_hba *hisi_hba)
int i;
for (i = 0; i < ras->count; i++, databuf++)
*databuf = hisi_sas_read32(hisi_hba, 4 * i + ras->base_off);
*databuf = hisi_sas_read32(hisi_hba,
HISI_SAS_REG_MEM_SIZE * i +
ras->base_off);
}
static void debugfs_snapshot_itct_reg_v3_hw(struct hisi_hba *hisi_hba)
@ -3595,12 +3652,11 @@ static void debugfs_print_reg_v3_hw(u32 *regs_val, struct seq_file *s,
int i;
for (i = 0; i < reg->count; i++) {
int off = i * 4;
int off = i * HISI_SAS_REG_MEM_SIZE;
const char *name;
name = debugfs_to_reg_name_v3_hw(off, reg->base_off,
reg->lu);
if (name)
seq_printf(s, "0x%08x 0x%08x %s\n", off,
regs_val[i], name);
@ -3673,9 +3729,9 @@ static void debugfs_show_row_64_v3_hw(struct seq_file *s, int index,
/* completion header size not fixed per HW version */
seq_printf(s, "index %04d:\n\t", index);
for (i = 1; i <= sz / 8; i++, ptr++) {
for (i = 1; i <= sz / BYTE_TO_DDW; i++, ptr++) {
seq_printf(s, " 0x%016llx", le64_to_cpu(*ptr));
if (!(i % 2))
if (!(i % TWO_PARA_PER_LINE))
seq_puts(s, "\n\t");
}
@ -3689,9 +3745,9 @@ static void debugfs_show_row_32_v3_hw(struct seq_file *s, int index,
/* completion header size not fixed per HW version */
seq_printf(s, "index %04d:\n\t", index);
for (i = 1; i <= sz / 4; i++, ptr++) {
for (i = 1; i <= sz / BYTE_TO_DW; i++, ptr++) {
seq_printf(s, " 0x%08x", le32_to_cpu(*ptr));
if (!(i % 4))
if (!(i % FOUR_PARA_PER_LINE))
seq_puts(s, "\n\t");
}
seq_puts(s, "\n");
@ -3776,7 +3832,7 @@ static int debugfs_iost_cache_v3_hw_show(struct seq_file *s, void *p)
struct hisi_sas_debugfs_iost_cache *debugfs_iost_cache = s->private;
struct hisi_sas_iost_itct_cache *iost_cache =
debugfs_iost_cache->cache;
u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * 4;
u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * BYTE_TO_DW;
int i, tab_idx;
__le64 *iost;
@ -3824,7 +3880,7 @@ static int debugfs_itct_cache_v3_hw_show(struct seq_file *s, void *p)
struct hisi_sas_debugfs_itct_cache *debugfs_itct_cache = s->private;
struct hisi_sas_iost_itct_cache *itct_cache =
debugfs_itct_cache->cache;
u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * 4;
u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * BYTE_TO_DW;
int i, tab_idx;
__le64 *itct;
@ -3853,12 +3909,12 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba, int index)
u64 *debugfs_timestamp;
struct dentry *dump_dentry;
struct dentry *dentry;
char name[256];
char name[NAME_BUF_SIZE];
int p;
int c;
int d;
snprintf(name, 256, "%d", index);
snprintf(name, NAME_BUF_SIZE, "%d", index);
dump_dentry = debugfs_create_dir(name, hisi_hba->debugfs_dump_dentry);
@ -3874,7 +3930,7 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba, int index)
/* Create port dir and files */
dentry = debugfs_create_dir("port", dump_dentry);
for (p = 0; p < hisi_hba->n_phy; p++) {
snprintf(name, 256, "%d", p);
snprintf(name, NAME_BUF_SIZE, "%d", p);
debugfs_create_file(name, 0400, dentry,
&hisi_hba->debugfs_port_reg[index][p],
@ -3884,7 +3940,7 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba, int index)
/* Create CQ dir and files */
dentry = debugfs_create_dir("cq", dump_dentry);
for (c = 0; c < hisi_hba->queue_count; c++) {
snprintf(name, 256, "%d", c);
snprintf(name, NAME_BUF_SIZE, "%d", c);
debugfs_create_file(name, 0400, dentry,
&hisi_hba->debugfs_cq[index][c],
@ -3894,7 +3950,7 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba, int index)
/* Create DQ dir and files */
dentry = debugfs_create_dir("dq", dump_dentry);
for (d = 0; d < hisi_hba->queue_count; d++) {
snprintf(name, 256, "%d", d);
snprintf(name, NAME_BUF_SIZE, "%d", d);
debugfs_create_file(name, 0400, dentry,
&hisi_hba->debugfs_dq[index][d],
@ -3931,9 +3987,9 @@ static ssize_t debugfs_trigger_dump_v3_hw_write(struct file *file,
size_t count, loff_t *ppos)
{
struct hisi_hba *hisi_hba = file->f_inode->i_private;
char buf[8];
char buf[DUMP_BUF_SIZE];
if (count > 8)
if (count > DUMP_BUF_SIZE)
return -EFAULT;
if (copy_from_user(buf, user_buf, count))
@ -3997,7 +4053,7 @@ static ssize_t debugfs_bist_linkrate_v3_hw_write(struct file *filp,
{
struct seq_file *m = filp->private_data;
struct hisi_hba *hisi_hba = m->private;
char kbuf[16] = {}, *pkbuf;
char kbuf[BIST_BUF_SIZE] = {}, *pkbuf;
bool found = false;
int i;
@ -4014,7 +4070,7 @@ static ssize_t debugfs_bist_linkrate_v3_hw_write(struct file *filp,
for (i = 0; i < ARRAY_SIZE(debugfs_loop_linkrate_v3_hw); i++) {
if (!strncmp(debugfs_loop_linkrate_v3_hw[i].name,
pkbuf, 16)) {
pkbuf, BIST_BUF_SIZE)) {
hisi_hba->debugfs_bist_linkrate =
debugfs_loop_linkrate_v3_hw[i].value;
found = true;
@ -4072,7 +4128,7 @@ static ssize_t debugfs_bist_code_mode_v3_hw_write(struct file *filp,
{
struct seq_file *m = filp->private_data;
struct hisi_hba *hisi_hba = m->private;
char kbuf[16] = {}, *pkbuf;
char kbuf[BIST_BUF_SIZE] = {}, *pkbuf;
bool found = false;
int i;
@ -4089,7 +4145,7 @@ static ssize_t debugfs_bist_code_mode_v3_hw_write(struct file *filp,
for (i = 0; i < ARRAY_SIZE(debugfs_loop_code_mode_v3_hw); i++) {
if (!strncmp(debugfs_loop_code_mode_v3_hw[i].name,
pkbuf, 16)) {
pkbuf, BIST_BUF_SIZE)) {
hisi_hba->debugfs_bist_code_mode =
debugfs_loop_code_mode_v3_hw[i].value;
found = true;
@ -4204,7 +4260,7 @@ static ssize_t debugfs_bist_mode_v3_hw_write(struct file *filp,
{
struct seq_file *m = filp->private_data;
struct hisi_hba *hisi_hba = m->private;
char kbuf[16] = {}, *pkbuf;
char kbuf[BIST_BUF_SIZE] = {}, *pkbuf;
bool found = false;
int i;
@ -4220,7 +4276,8 @@ static ssize_t debugfs_bist_mode_v3_hw_write(struct file *filp,
pkbuf = strstrip(kbuf);
for (i = 0; i < ARRAY_SIZE(debugfs_loop_modes_v3_hw); i++) {
if (!strncmp(debugfs_loop_modes_v3_hw[i].name, pkbuf, 16)) {
if (!strncmp(debugfs_loop_modes_v3_hw[i].name, pkbuf,
BIST_BUF_SIZE)) {
hisi_hba->debugfs_bist_mode =
debugfs_loop_modes_v3_hw[i].value;
found = true;
@ -4499,8 +4556,9 @@ static int debugfs_fifo_data_v3_hw_show(struct seq_file *s, void *p)
debugfs_read_fifo_data_v3_hw(phy);
debugfs_show_row_32_v3_hw(s, 0, HISI_SAS_FIFO_DATA_DW_SIZE * 4,
(__le32 *)phy->fifo.rd_data);
debugfs_show_row_32_v3_hw(s, 0,
HISI_SAS_FIFO_DATA_DW_SIZE * HISI_SAS_REG_MEM_SIZE,
(__le32 *)phy->fifo.rd_data);
return 0;
}
@ -4632,14 +4690,14 @@ static int debugfs_alloc_v3_hw(struct hisi_hba *hisi_hba, int dump_index)
struct hisi_sas_debugfs_regs *regs =
&hisi_hba->debugfs_regs[dump_index][r];
sz = debugfs_reg_array_v3_hw[r]->count * 4;
sz = debugfs_reg_array_v3_hw[r]->count * HISI_SAS_REG_MEM_SIZE;
regs->data = devm_kmalloc(dev, sz, GFP_KERNEL);
if (!regs->data)
goto fail;
regs->hisi_hba = hisi_hba;
}
sz = debugfs_port_reg.count * 4;
sz = debugfs_port_reg.count * HISI_SAS_REG_MEM_SIZE;
for (p = 0; p < hisi_hba->n_phy; p++) {
struct hisi_sas_debugfs_port *port =
&hisi_hba->debugfs_port_reg[dump_index][p];
@ -4749,11 +4807,11 @@ static void debugfs_phy_down_cnt_init_v3_hw(struct hisi_hba *hisi_hba)
{
struct dentry *dir = debugfs_create_dir("phy_down_cnt",
hisi_hba->debugfs_dir);
char name[16];
char name[NAME_BUF_SIZE];
int phy_no;
for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
snprintf(name, 16, "%d", phy_no);
snprintf(name, NAME_BUF_SIZE, "%d", phy_no);
debugfs_create_file(name, 0600, dir,
&hisi_hba->phy[phy_no],
&debugfs_phy_down_cnt_v3_hw_fops);
@ -4938,7 +4996,7 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
shost->max_id = HISI_SAS_MAX_DEVICES;
shost->max_lun = ~0;
shost->max_channel = 1;
shost->max_cmd_len = 16;
shost->max_cmd_len = HISI_SAS_MAX_CDB_LEN;
shost->can_queue = HISI_SAS_UNRESERVED_IPTT;
shost->cmd_per_lun = HISI_SAS_UNRESERVED_IPTT;
if (hisi_hba->iopoll_q_cnt)
@ -5016,12 +5074,13 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba)
{
int i;
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 1), hisi_hba);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 2), hisi_hba);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 11), hisi_hba);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, IRQ_PHY_UP_DOWN_INDEX), hisi_hba);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, IRQ_CHL_INDEX), hisi_hba);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, IRQ_AXI_INDEX), hisi_hba);
for (i = 0; i < hisi_hba->cq_nvecs; i++) {
struct hisi_sas_cq *cq = &hisi_hba->cq[i];
int nr = hisi_sas_intr_conv ? 16 : 16 + i;
int nr = hisi_sas_intr_conv ? BASE_VECTORS_V3_HW :
BASE_VECTORS_V3_HW + i;
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, nr), cq);
}
@ -5051,9 +5110,11 @@ static void hisi_sas_reset_prepare_v3_hw(struct pci_dev *pdev)
{
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct hisi_hba *hisi_hba = sha->lldd_ha;
struct Scsi_Host *shost = hisi_hba->shost;
struct device *dev = hisi_hba->dev;
int rc;
wait_event(shost->host_wait, !scsi_host_in_recovery(shost));
dev_info(dev, "FLR prepare\n");
down(&hisi_hba->sem);
set_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags);


@ -392,36 +392,6 @@ enum sci_status sci_remote_device_stop(struct isci_remote_device *idev,
}
}
enum sci_status sci_remote_device_reset(struct isci_remote_device *idev)
{
struct sci_base_state_machine *sm = &idev->sm;
enum sci_remote_device_states state = sm->current_state_id;
switch (state) {
case SCI_DEV_INITIAL:
case SCI_DEV_STOPPED:
case SCI_DEV_STARTING:
case SCI_SMP_DEV_IDLE:
case SCI_SMP_DEV_CMD:
case SCI_DEV_STOPPING:
case SCI_DEV_FAILED:
case SCI_DEV_RESETTING:
case SCI_DEV_FINAL:
default:
dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
__func__, dev_state_name(state));
return SCI_FAILURE_INVALID_STATE;
case SCI_DEV_READY:
case SCI_STP_DEV_IDLE:
case SCI_STP_DEV_CMD:
case SCI_STP_DEV_NCQ:
case SCI_STP_DEV_NCQ_ERROR:
case SCI_STP_DEV_AWAIT_RESET:
sci_change_state(sm, SCI_DEV_RESETTING);
return SCI_SUCCESS;
}
}
enum sci_status sci_remote_device_frame_handler(struct isci_remote_device *idev,
u32 frame_index)
{


@ -159,21 +159,6 @@ enum sci_status sci_remote_device_stop(
struct isci_remote_device *idev,
u32 timeout);
/**
* sci_remote_device_reset() - This method will reset the device making it
* ready for operation. This method must be called anytime the device is
* reset either through a SMP phy control or a port hard reset request.
* @remote_device: This parameter specifies the device to be reset.
*
* This method does not actually cause the device hardware to be reset. This
* method resets the software object so that it will be operational after a
* device hardware reset completes. An indication of whether the device reset
* was accepted. SCI_SUCCESS This value is returned if the device reset is
* started.
*/
enum sci_status sci_remote_device_reset(
struct isci_remote_device *idev);
/**
* enum sci_remote_device_states - This enumeration depicts all the states
* for the common remote device state machine.


@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -291,6 +291,138 @@ buffer_done:
return len;
}
static ssize_t
lpfc_vmid_info_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(dev);
struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
struct lpfc_hba *phba = vport->phba;
struct lpfc_vmid *vmp;
int len = 0, i, j, k, cpu;
char hxstr[LPFC_MAX_VMID_SIZE * 3] = {0};
struct timespec64 curr_tm;
struct lpfc_vmid_priority_range *vr;
u64 *lta, rct_acc = 0, max_lta = 0;
struct tm tm_val;
ktime_get_ts64(&curr_tm);
len += scnprintf(buf + len, PAGE_SIZE - len, "Key 'vmid':\n");
/* if enabled continue, else return */
if (lpfc_is_vmid_enabled(phba)) {
len += scnprintf(buf + len, PAGE_SIZE - len,
"lpfc VMID Page: ON\n\n");
} else {
len += scnprintf(buf + len, PAGE_SIZE - len,
"lpfc VMID Page: OFF\n\n");
return len;
}
/* if using priority tagging */
if (vport->phba->pport->vmid_flag & LPFC_VMID_TYPE_PRIO) {
len += scnprintf(buf + len, PAGE_SIZE - len,
"VMID priority ranges:\n");
vr = vport->vmid_priority.vmid_range;
for (i = 0; i < vport->vmid_priority.num_descriptors; ++i) {
len += scnprintf(buf + len, PAGE_SIZE - len,
"\t[x%x - x%x], qos: x%x\n",
vr->low, vr->high, vr->qos);
vr++;
}
}
for (i = 0; i < phba->cfg_max_vmid; i++) {
vmp = &vport->vmid[i];
max_lta = 0;
/* only if the slot is used */
if (!(vmp->flag & LPFC_VMID_SLOT_USED) ||
!(vmp->flag & LPFC_VMID_REGISTERED))
continue;
/* if using priority tagging */
if (vport->phba->pport->vmid_flag & LPFC_VMID_TYPE_PRIO) {
len += scnprintf(buf + len, PAGE_SIZE - len,
"VEM ID: %02x:%02x:%02x:%02x:"
"%02x:%02x:%02x:%02x:%02x:%02x:"
"%02x:%02x:%02x:%02x:%02x:%02x\n",
vport->lpfc_vmid_host_uuid[0],
vport->lpfc_vmid_host_uuid[1],
vport->lpfc_vmid_host_uuid[2],
vport->lpfc_vmid_host_uuid[3],
vport->lpfc_vmid_host_uuid[4],
vport->lpfc_vmid_host_uuid[5],
vport->lpfc_vmid_host_uuid[6],
vport->lpfc_vmid_host_uuid[7],
vport->lpfc_vmid_host_uuid[8],
vport->lpfc_vmid_host_uuid[9],
vport->lpfc_vmid_host_uuid[10],
vport->lpfc_vmid_host_uuid[11],
vport->lpfc_vmid_host_uuid[12],
vport->lpfc_vmid_host_uuid[13],
vport->lpfc_vmid_host_uuid[14],
vport->lpfc_vmid_host_uuid[15]);
}
/* IO stats */
len += scnprintf(buf + len, PAGE_SIZE - len,
"ID00 READs:%llx WRITEs:%llx\n",
vmp->io_rd_cnt,
vmp->io_wr_cnt);
for (j = 0, k = 0; j < strlen(vmp->host_vmid); j++, k += 3)
sprintf((char *)(hxstr + k), "%2x ", vmp->host_vmid[j]);
/* UUIDs */
len += scnprintf(buf + len, PAGE_SIZE - len, "UUID:\n");
len += scnprintf(buf + len, PAGE_SIZE - len, "%s\n", hxstr);
len += scnprintf(buf + len, PAGE_SIZE - len, "String (%s)\n",
vmp->host_vmid);
if (vport->phba->pport->vmid_flag & LPFC_VMID_TYPE_PRIO)
len += scnprintf(buf + len, PAGE_SIZE - len,
"CS_CTL VMID: 0x%x\n",
vmp->un.cs_ctl_vmid);
else
len += scnprintf(buf + len, PAGE_SIZE - len,
"Application id: 0x%x\n",
vmp->un.app_id);
/* calculate the last access time */
for_each_possible_cpu(cpu) {
lta = per_cpu_ptr(vmp->last_io_time, cpu);
if (!lta)
continue;
/* if last access time is less than timeout */
if (time_after((unsigned long)*lta, jiffies))
continue;
if (*lta > max_lta)
max_lta = *lta;
}
rct_acc = jiffies_to_msecs(jiffies - max_lta) / 1000;
/* current time */
time64_to_tm(ktime_get_real_seconds(),
-(sys_tz.tz_minuteswest * 60) - rct_acc, &tm_val);
len += scnprintf(buf + len, PAGE_SIZE - len,
"Last Access Time :"
"%ld-%d-%dT%02d:%02d:%02d\n\n",
1900 + tm_val.tm_year, tm_val.tm_mon + 1,
tm_val.tm_mday, tm_val.tm_hour,
tm_val.tm_min, tm_val.tm_sec);
if (len >= PAGE_SIZE)
return len;
memset(hxstr, 0, LPFC_MAX_VMID_SIZE * 3);
}
return len;
}
/**
* lpfc_drvr_version_show - Return the Emulex driver string with version number
* @dev: class unused variable.
@ -3011,6 +3143,7 @@ static DEVICE_ATTR(protocol, S_IRUGO, lpfc_sli4_protocol_show, NULL);
static DEVICE_ATTR(lpfc_xlane_supported, S_IRUGO, lpfc_oas_supported_show,
NULL);
static DEVICE_ATTR(cmf_info, 0444, lpfc_cmf_info_show, NULL);
static DEVICE_ATTR_RO(lpfc_vmid_info);
#define WWN_SZ 8
/**
@ -6117,6 +6250,7 @@ static struct attribute *lpfc_hba_attrs[] = {
&dev_attr_lpfc_vmid_inactivity_timeout.attr,
&dev_attr_lpfc_vmid_app_header.attr,
&dev_attr_lpfc_vmid_priority_tagging.attr,
&dev_attr_lpfc_vmid_info.attr,
NULL,
};


@ -2687,8 +2687,7 @@ static int lpfcdiag_loop_get_xri(struct lpfc_hba *phba, uint16_t rpi,
evt->wait_time_stamp = jiffies;
time_left = wait_event_interruptible_timeout(
evt->wq, !list_empty(&evt->events_to_see),
msecs_to_jiffies(1000 *
((phba->fc_ratov * 2) + LPFC_DRVR_TIMEOUT)));
secs_to_jiffies(phba->fc_ratov * 2 + LPFC_DRVR_TIMEOUT));
if (list_empty(&evt->events_to_see))
ret_val = (time_left) ? -EINTR : -ETIMEDOUT;
else {
@ -3258,8 +3257,7 @@ lpfc_bsg_diag_loopback_run(struct bsg_job *job)
evt->waiting = 1;
time_left = wait_event_interruptible_timeout(
evt->wq, !list_empty(&evt->events_to_see),
msecs_to_jiffies(1000 *
((phba->fc_ratov * 2) + LPFC_DRVR_TIMEOUT)));
secs_to_jiffies(phba->fc_ratov * 2 + LPFC_DRVR_TIMEOUT));
evt->waiting = 0;
if (list_empty(&evt->events_to_see)) {
rc = (time_left) ? -EINTR : -ETIMEDOUT;
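
The two hunks above fold msecs_to_jiffies(1000 * secs) into secs_to_jiffies(secs), dropping the intermediate millisecond scaling. A sketch with simplified models of the two helpers (these bodies approximate, not reproduce, the kernel implementations), showing both spellings agree:

#include <stdio.h>

#define HZ 250	/* illustrative; the kernel value is config-dependent */

/* simplified models of the kernel helpers, not their actual sources */
static unsigned long msecs_to_jiffies(unsigned int m)
{
	return (unsigned long)m * HZ / 1000;
}

static unsigned long secs_to_jiffies(unsigned long s)
{
	return s * HZ;
}

int main(void)
{
	unsigned long fc_ratov = 10, drvr_tmo = 30;

	/* old spelling: scale to milliseconds first, then convert */
	printf("%lu\n", msecs_to_jiffies(1000 * (fc_ratov * 2 + drvr_tmo)));
	/* new spelling: one direct conversion, same result */
	printf("%lu\n", secs_to_jiffies(fc_ratov * 2 + drvr_tmo));
	return 0;
}
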

View File

@ -161,7 +161,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
struct lpfc_hba *phba;
struct lpfc_work_evt *evtp;
unsigned long iflags;
bool nvme_reg = false;
bool drop_initial_node_ref = false;
ndlp = ((struct lpfc_rport_data *)rport->dd_data)->pnode;
if (!ndlp)
@ -188,8 +188,13 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
spin_lock_irqsave(&ndlp->lock, iflags);
ndlp->rport = NULL;
if (ndlp->fc4_xpt_flags & NVME_XPT_REGD)
nvme_reg = true;
/* Only 1 thread can drop the initial node reference.
* If not registered for NVME and NLP_DROPPED flag is
* clear, remove the initial reference.
*/
if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
if (!test_and_set_bit(NLP_DROPPED, &ndlp->nlp_flag))
drop_initial_node_ref = true;
/* The scsi_transport is done with the rport so lpfc cannot
* call to unregister.
@ -200,13 +205,16 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
/* If NLP_XPT_REGD was cleared in lpfc_nlp_unreg_node,
* unregister calls were made to the scsi and nvme
* transports and refcnt was already decremented. Clear
* the NLP_XPT_REGD flag only if the NVME Rport is
* the NLP_XPT_REGD flag only if the NVME nrport is
* confirmed unregistered.
*/
if (!nvme_reg && ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
if (ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
spin_unlock_irqrestore(&ndlp->lock, iflags);
lpfc_nlp_put(ndlp); /* may free ndlp */
/* Release scsi transport reference */
lpfc_nlp_put(ndlp);
} else {
spin_unlock_irqrestore(&ndlp->lock, iflags);
}
@ -214,14 +222,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
spin_unlock_irqrestore(&ndlp->lock, iflags);
}
/* Only 1 thread can drop the initial node reference. If
* another thread has set NLP_DROPPED, this thread is done.
*/
if (nvme_reg || test_bit(NLP_DROPPED, &ndlp->nlp_flag))
return;
set_bit(NLP_DROPPED, &ndlp->nlp_flag);
lpfc_nlp_put(ndlp);
if (drop_initial_node_ref)
lpfc_nlp_put(ndlp);
return;
}
@ -4695,9 +4697,7 @@ lpfc_nlp_unreg_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
if (ndlp->fc4_xpt_flags & NVME_XPT_REGD) {
vport->phba->nport_event_cnt++;
if (vport->phba->nvmet_support == 0) {
/* Start devloss if target. */
if (ndlp->nlp_type & NLP_NVME_TARGET)
lpfc_nvme_unregister_port(vport, ndlp);
lpfc_nvme_unregister_port(vport, ndlp);
} else {
/* NVMET has no upcall. */
lpfc_nlp_put(ndlp);
@ -5053,7 +5053,7 @@ lpfc_check_sli_ndlp(struct lpfc_hba *phba,
case CMD_GEN_REQUEST64_CR:
if (iocb->ndlp == ndlp)
return 1;
fallthrough;
break;
case CMD_ELS_REQUEST64_CR:
if (remote_id == ndlp->nlp_DID)
return 1;
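
The rework above routes the "drop the initial node reference exactly once" decision through test_and_set_bit(NLP_DROPPED, ...): only the first thread to set the flag performs the final put. A minimal sketch of the idiom using C11 atomics in place of the kernel bitops (the flag and refcount below are illustrative models, not lpfc state):

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag dropped = ATOMIC_FLAG_INIT;	/* models NLP_DROPPED */
static atomic_int refcount = 1;			/* models the node kref */

/* Called from any number of teardown paths; only one caller wins. */
static void drop_initial_ref(const char *who)
{
	/* like !test_and_set_bit(): true only for the first caller */
	if (!atomic_flag_test_and_set(&dropped)) {
		atomic_fetch_sub(&refcount, 1);
		printf("%s dropped the initial reference\n", who);
	}
}

int main(void)
{
	drop_initial_ref("dev_loss callback");
	drop_initial_ref("unreg path");	/* no-op: flag already set */
	printf("refcount now %d\n", atomic_load(&refcount));
	return 0;
}
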

View File

@ -1907,6 +1907,9 @@ lpfc_sli4_port_sta_fn_reset(struct lpfc_hba *phba, int mbx_action,
uint32_t intr_mode;
LPFC_MBOXQ_t *mboxq;
/* Notifying the transport that the targets are going offline. */
lpfc_scsi_dev_block(phba);
if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >=
LPFC_SLI_INTF_IF_TYPE_2) {
/*

View File

@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -2508,7 +2508,10 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
"6031 RemotePort Registration failed "
"err: %d, DID x%06x ref %u\n",
ret, ndlp->nlp_DID, kref_read(&ndlp->kref));
lpfc_nlp_put(ndlp);
/* Only release reference if one was taken for this request */
if (!oldrport)
lpfc_nlp_put(ndlp);
}
return ret;
@ -2614,7 +2617,8 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
* clear any rport state until the transport calls back.
*/
if (ndlp->nlp_type & NLP_NVME_TARGET) {
if ((ndlp->nlp_type & NLP_NVME_TARGET) ||
(remoteport->port_role & FC_PORT_ROLE_NVME_TARGET)) {
/* No concern about the role change on the nvme remoteport.
* The transport will update it.
*/

View File

@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -3926,12 +3926,19 @@ void lpfc_poll_eratt(struct timer_list *t)
uint64_t sli_intr, cnt;
phba = from_timer(phba, t, eratt_poll);
if (!test_bit(HBA_SETUP, &phba->hba_flag))
return;
if (test_bit(FC_UNLOADING, &phba->pport->load_flag))
return;
if (phba->sli_rev == LPFC_SLI_REV4 &&
!test_bit(HBA_SETUP, &phba->hba_flag)) {
lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"0663 HBA still initializing 0x%lx, restart "
"timer\n",
phba->hba_flag);
goto restart_timer;
}
/* Here we will also keep track of interrupts per sec of the hba */
sli_intr = phba->sli.slistat.sli_intr;
@ -3950,13 +3957,16 @@ void lpfc_poll_eratt(struct timer_list *t)
/* Check chip HA register for error event */
eratt = lpfc_sli_check_eratt(phba);
if (eratt)
if (eratt) {
/* Tell the worker thread there is work to do */
lpfc_worker_wake_up(phba);
else
/* Restart the timer for next eratt poll */
mod_timer(&phba->eratt_poll,
jiffies + secs_to_jiffies(phba->eratt_poll_interval));
return;
}
restart_timer:
/* Restart the timer for next eratt poll */
mod_timer(&phba->eratt_poll,
jiffies + secs_to_jiffies(phba->eratt_poll_interval));
return;
}
@ -6003,9 +6013,9 @@ lpfc_sli4_get_ctl_attr(struct lpfc_hba *phba)
phba->sli4_hba.flash_id = bf_get(lpfc_cntl_attr_flash_id, cntl_attr);
phba->sli4_hba.asic_rev = bf_get(lpfc_cntl_attr_asic_rev, cntl_attr);
memset(phba->BIOSVersion, 0, sizeof(phba->BIOSVersion));
strlcat(phba->BIOSVersion, (char *)cntl_attr->bios_ver_str,
memcpy(phba->BIOSVersion, cntl_attr->bios_ver_str,
sizeof(phba->BIOSVersion));
phba->BIOSVersion[sizeof(phba->BIOSVersion) - 1] = '\0';
lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"3086 lnk_type:%d, lnk_numb:%d, bios_ver:%s, "

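The lpfc_sli4_get_ctl_attr hunk above swaps strlcat() for memcpy() plus an explicit terminator: bios_ver_str is a fixed-size firmware field with no guaranteed NUL, so string routines could read past it. A sketch of the safe pattern (field contents and sizes are invented for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* fixed-size firmware field, deliberately not NUL-terminated */
	char bios_ver_str[8] = { 'V', '1', '4', '.', '4', '.', '0', '9' };
	char BIOSVersion[8 + 1];

	/* copy the raw bytes, then terminate explicitly */
	memcpy(BIOSVersion, bios_ver_str, sizeof(bios_ver_str));
	BIOSVersion[sizeof(BIOSVersion) - 1] = '\0';

	printf("bios_ver: %s\n", BIOSVersion);
	return 0;
}
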
View File

@ -20,7 +20,7 @@
* included with this package. *
*******************************************************************/
#define LPFC_DRIVER_VERSION "14.4.0.8"
#define LPFC_DRIVER_VERSION "14.4.0.9"
#define LPFC_DRIVER_NAME "lpfc"
/* Used for SLI 2/3 */

View File

@ -505,7 +505,7 @@ lpfc_send_npiv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
wait_event_timeout(waitq,
!test_bit(NLP_WAIT_FOR_LOGO,
&ndlp->save_flags),
msecs_to_jiffies(phba->fc_ratov * 2000));
secs_to_jiffies(phba->fc_ratov * 2));
if (!test_bit(NLP_WAIT_FOR_LOGO, &ndlp->save_flags))
goto logo_cmpl;
@ -703,7 +703,7 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
wait_event_timeout(waitq,
!test_bit(NLP_WAIT_FOR_DA_ID,
&ndlp->save_flags),
msecs_to_jiffies(phba->fc_ratov * 2000));
secs_to_jiffies(phba->fc_ratov * 2));
}
lpfc_printf_vlog(vport, KERN_INFO, LOG_VPORT | LOG_ELS,

View File

@ -985,6 +985,10 @@ static int mpi3mr_report_tgtdev_to_host(struct mpi3mr_ioc *mrioc,
goto out;
}
}
dprint_event_bh(mrioc,
"exposed target device with handle(0x%04x), perst_id(%d)\n",
tgtdev->dev_handle, perst_id);
goto out;
} else
mpi3mr_report_tgtdev_to_sas_transport(mrioc, tgtdev);
out:
@ -1344,9 +1348,9 @@ static void mpi3mr_devstatuschg_evt_bh(struct mpi3mr_ioc *mrioc,
(struct mpi3_event_data_device_status_change *)fwevt->event_data;
dev_handle = le16_to_cpu(evtdata->dev_handle);
ioc_info(mrioc,
"%s :device status change: handle(0x%04x): reason code(0x%x)\n",
__func__, dev_handle, evtdata->reason_code);
dprint_event_bh(mrioc,
"processing device status change event bottom half for handle(0x%04x), rc(0x%02x)\n",
dev_handle, evtdata->reason_code);
switch (evtdata->reason_code) {
case MPI3_EVENT_DEV_STAT_RC_HIDDEN:
delete = 1;
@ -1365,8 +1369,13 @@ static void mpi3mr_devstatuschg_evt_bh(struct mpi3mr_ioc *mrioc,
}
tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle);
if (!tgtdev)
if (!tgtdev) {
dprint_event_bh(mrioc,
"processing device status change event bottom half,\n"
"cannot identify target device for handle(0x%04x), rc(0x%02x)\n",
dev_handle, evtdata->reason_code);
goto out;
}
if (uhide) {
tgtdev->is_hidden = 0;
if (!tgtdev->host_exposed)
@ -1406,12 +1415,17 @@ static void mpi3mr_devinfochg_evt_bh(struct mpi3mr_ioc *mrioc,
perst_id = le16_to_cpu(dev_pg0->persistent_id);
dev_handle = le16_to_cpu(dev_pg0->dev_handle);
ioc_info(mrioc,
"%s :Device info change: handle(0x%04x): persist_id(0x%x)\n",
__func__, dev_handle, perst_id);
dprint_event_bh(mrioc,
"processing device info change event bottom half for handle(0x%04x), perst_id(%d)\n",
dev_handle, perst_id);
tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle);
if (!tgtdev)
if (!tgtdev) {
dprint_event_bh(mrioc,
"cannot identify target device for device info\n"
"change event handle(0x%04x), perst_id(%d)\n",
dev_handle, perst_id);
goto out;
}
mpi3mr_update_tgtdev(mrioc, tgtdev, dev_pg0, false);
if (!tgtdev->is_hidden && !tgtdev->host_exposed)
mpi3mr_report_tgtdev_to_host(mrioc, perst_id);
@ -2012,8 +2026,11 @@ static void mpi3mr_fwevt_bh(struct mpi3mr_ioc *mrioc,
mpi3mr_fwevt_del_from_list(mrioc, fwevt);
mrioc->current_event = fwevt;
if (mrioc->stop_drv_processing)
if (mrioc->stop_drv_processing) {
dprint_event_bh(mrioc, "ignoring event(0x%02x) in the bottom half handler\n"
"due to stop_drv_processing\n", fwevt->event_id);
goto out;
}
if (mrioc->unrecoverable) {
dprint_event_bh(mrioc,
@ -2025,6 +2042,9 @@ static void mpi3mr_fwevt_bh(struct mpi3mr_ioc *mrioc,
if (!fwevt->process_evt)
goto evt_ack;
dprint_event_bh(mrioc, "processing event(0x%02x) in the bottom half handler\n",
fwevt->event_id);
switch (fwevt->event_id) {
case MPI3_EVENT_DEVICE_ADDED:
{
@ -2763,6 +2783,9 @@ static void mpi3mr_devstatuschg_evt_th(struct mpi3mr_ioc *mrioc,
goto out;
dev_handle = le16_to_cpu(evtdata->dev_handle);
dprint_event_th(mrioc,
"device status change event top half with rc(0x%02x) for handle(0x%04x)\n",
evtdata->reason_code, dev_handle);
switch (evtdata->reason_code) {
case MPI3_EVENT_DEV_STAT_RC_INT_DEVICE_RESET_STRT:
@ -2786,8 +2809,12 @@ static void mpi3mr_devstatuschg_evt_th(struct mpi3mr_ioc *mrioc,
}
tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle);
if (!tgtdev)
if (!tgtdev) {
dprint_event_th(mrioc,
"processing device status change event could not identify device for handle(0x%04x)\n",
dev_handle);
goto out;
}
if (hide)
tgtdev->is_hidden = hide;
if (tgtdev->starget && tgtdev->starget->hostdata) {
@ -2863,13 +2890,13 @@ static void mpi3mr_energypackchg_evt_th(struct mpi3mr_ioc *mrioc,
u16 shutdown_timeout = le16_to_cpu(evtdata->shutdown_timeout);
if (shutdown_timeout <= 0) {
ioc_warn(mrioc,
dprint_event_th(mrioc,
"%s :Invalid Shutdown Timeout received = %d\n",
__func__, shutdown_timeout);
return;
}
ioc_info(mrioc,
dprint_event_th(mrioc,
"%s :Previous Shutdown Timeout Value = %d New Shutdown Timeout Value = %d\n",
__func__, mrioc->facts.shutdown_timeout, shutdown_timeout);
mrioc->facts.shutdown_timeout = shutdown_timeout;
@ -2945,9 +2972,9 @@ void mpi3mr_add_event_wait_for_device_refresh(struct mpi3mr_ioc *mrioc)
* @mrioc: Adapter instance reference
* @event_reply: event data
*
* Identify whteher the event has to handled and acknowledged
* and either process the event in the tophalf and/or schedule a
* bottom half through mpi3mr_fwevt_worker.
* Identifies whether the event has to be handled and acknowledged,
* and either processes the event in the top-half and/or schedule a
* bottom-half through mpi3mr_fwevt_worker().
*
* Return: Nothing
*/
@ -2974,9 +3001,11 @@ void mpi3mr_os_handle_events(struct mpi3mr_ioc *mrioc,
struct mpi3_device_page0 *dev_pg0 =
(struct mpi3_device_page0 *)event_reply->event_data;
if (mpi3mr_create_tgtdev(mrioc, dev_pg0))
ioc_err(mrioc,
"%s :Failed to add device in the device add event\n",
__func__);
dprint_event_th(mrioc,
"failed to process device added event for handle(0x%04x),\n"
"perst_id(%d) in the event top half handler\n",
le16_to_cpu(dev_pg0->dev_handle),
le16_to_cpu(dev_pg0->persistent_id));
else
process_evt_bh = 1;
break;
@ -3039,11 +3068,15 @@ void mpi3mr_os_handle_events(struct mpi3mr_ioc *mrioc,
break;
}
if (process_evt_bh || ack_req) {
dprint_event_th(mrioc,
"scheduling bottom half handler for event(0x%02x),ack_required=%d\n",
evt_type, ack_req);
sz = event_reply->event_data_length * 4;
fwevt = mpi3mr_alloc_fwevt(sz);
if (!fwevt) {
ioc_info(mrioc, "%s :failure at %s:%d/%s()!\n",
__func__, __FILE__, __LINE__, __func__);
dprint_event_th(mrioc,
"failed to schedule bottom half handler for\n"
"event(0x%02x), ack_required=%d\n", evt_type, ack_req);
return;
}
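
The mpi3mr changes above demote unconditional ioc_info()/ioc_err() event chatter to dprint_event_th()/dprint_event_bh(), which only emit when the corresponding bit is set in the adapter's logging level. A sketch of that kind of mask-gated print macro (the mask bits and struct layout are assumptions, not mpi3mr's actual definitions):

#include <stdio.h>

#define DEBUG_EVENT_TH	0x01	/* illustrative mask bits */
#define DEBUG_EVENT_BH	0x02

struct ioc {
	const char *name;
	unsigned int logging_level;
};

/* print only when bottom-half event debugging is enabled */
#define dprint_event_bh(ioc, fmt, ...)					\
	do {								\
		if ((ioc)->logging_level & DEBUG_EVENT_BH)		\
			printf("%s: " fmt, (ioc)->name, ##__VA_ARGS__);	\
	} while (0)

int main(void)
{
	struct ioc mrioc = { "mrioc0", DEBUG_EVENT_BH };

	dprint_event_bh(&mrioc, "processing event(0x%02x)\n", 0x1f);
	mrioc.logging_level = 0;
	dprint_event_bh(&mrioc, "suppressed\n");	/* prints nothing */
	return 0;
}
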

View File

@ -2869,8 +2869,9 @@ _ctl_get_mpt_mctp_passthru_adapter(int dev_index)
if (ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) {
if (count == dev_index) {
spin_unlock(&gioc_lock);
return 0;
return ioc;
}
count++;
}
}
spin_unlock(&gioc_lock);
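
The one-line mpt3sas fix above matters because _ctl_get_mpt_mctp_passthru_adapter() returns an adapter pointer, yet the match arm returned 0 (NULL), making every successful lookup look like a miss. A reduced sketch of the indexed-lookup pattern with the corrected return (types and locking stripped for brevity; names simplified):

#include <stdio.h>
#include <stddef.h>

struct ioc { int id; int has_mctp; };

static struct ioc ioc_list[] = { { 0, 0 }, { 1, 1 }, { 2, 1 } };

/* return the dev_index'th MCTP-capable adapter, or NULL */
static struct ioc *get_mctp_adapter(int dev_index)
{
	int count = 0;
	size_t i;

	for (i = 0; i < sizeof(ioc_list) / sizeof(ioc_list[0]); i++) {
		if (!ioc_list[i].has_mctp)
			continue;
		if (count == dev_index)
			return &ioc_list[i];	/* was "return 0" before the fix */
		count++;
	}
	return NULL;
}

int main(void)
{
	struct ioc *ioc = get_mctp_adapter(1);

	printf("found ioc %d\n", ioc ? ioc->id : -1);
	return 0;
}
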

View File

@ -101,8 +101,8 @@ enum sas_sata_vsp_regs {
VSR_PHY_MODE9 = 0x09, /* Test */
VSR_PHY_MODE10 = 0x0A, /* Power */
VSR_PHY_MODE11 = 0x0B, /* Phy Mode */
VSR_PHY_VS0 = 0x0C, /* Vednor Specific 0 */
VSR_PHY_VS1 = 0x0D, /* Vednor Specific 1 */
VSR_PHY_VS0 = 0x0C, /* Vendor Specific 0 */
VSR_PHY_VS1 = 0x0D, /* Vendor Specific 1 */
};
enum chip_register_bits {

View File

@ -644,7 +644,7 @@ static DEVICE_ATTR(gsm_log, S_IRUGO, pm8001_ctl_gsm_log_show, NULL);
#define FLASH_CMD_SET_NVMD 0x02
struct flash_command {
u8 command[8];
u8 command[8] __nonstring;
int code;
};
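
The pm8001 hunk above marks command[8] as __nonstring because the array holds fixed-width command names without a trailing NUL, and the annotation tells newer GCC's -Wunterminated-string-initialization that this is intentional. A compile-and-run sketch, guarded so it also builds where the attribute is unavailable:

#include <stdio.h>

#ifndef __nonstring
# if defined(__GNUC__) && __GNUC__ >= 8
#  define __nonstring __attribute__((nonstring))
# else
#  define __nonstring
# endif
#endif

struct flash_command {
	char command[8] __nonstring;	/* 8 bytes, no NUL terminator */
	int code;
};

/* exactly 8 characters: the terminating NUL is legally dropped */
static const struct flash_command cmd = { "SET_NVMD", 0x02 };

int main(void)
{
	/* print with an explicit width; a bare %s would run off the array */
	printf("cmd: %.8s code: 0x%02x\n", cmd.command, cmd.code);
	return 0;
}
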

View File

@ -103,25 +103,3 @@ qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
ret:
va_end(va);
}
int
qedi_create_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
{
int ret = 0;
for (; iter->name; iter++) {
ret = sysfs_create_bin_file(&shost->shost_gendev.kobj,
iter->attr);
if (ret)
pr_err("Unable to create sysfs %s attr, err(%d).\n",
iter->name, ret);
}
return ret;
}
void
qedi_remove_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
{
for (; iter->name; iter++)
sysfs_remove_bin_file(&shost->shost_gendev.kobj, iter->attr);
}

View File

@ -87,18 +87,6 @@ void qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
void qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
u32 info, const char *fmt, ...);
struct Scsi_Host;
struct sysfs_bin_attrs {
char *name;
const struct bin_attribute *attr;
};
int qedi_create_sysfs_attr(struct Scsi_Host *shost,
struct sysfs_bin_attrs *iter);
void qedi_remove_sysfs_attr(struct Scsi_Host *shost,
struct sysfs_bin_attrs *iter);
/* DebugFS related code */
struct qedi_list_of_funcs {
char *oper_str;

View File

@ -45,7 +45,6 @@ int qedi_iscsi_cleanup_task(struct iscsi_task *task,
void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd);
void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt,
struct qedi_cmd *qedi_cmd);
void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt);
void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, int16_t *tid);
void qedi_process_iscsi_error(struct qedi_endpoint *ep,
struct iscsi_eqe_data *data);

View File

@ -1877,14 +1877,6 @@ void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, s16 *tid)
WARN_ON(1);
}
void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt)
{
*proto_itt = qedi->itt_map[tid].itt;
QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
"Get itt map tid [0x%x with proto itt[0x%x]",
tid, *proto_itt);
}
struct qedi_cmd *qedi_get_cmd_from_tid(struct qedi_ctx *qedi, u32 tid)
{
struct qedi_cmd *cmd = NULL;

View File

@ -2705,59 +2705,6 @@ ql_dump_buffer(uint level, scsi_qla_host_t *vha, uint id, const void *buf,
}
}
/*
* This function is for formatting and logging log messages.
* It is to be used when vha is available. It formats the message
* and logs it to the messages file. All the messages will be logged
* irrespective of value of ql2xextended_error_logging.
* parameters:
* level: The level of the log messages to be printed in the
* messages file.
* vha: Pointer to the scsi_qla_host_t
* id: This is a unique id for the level. It identifies the
* part of the code from where the message originated.
* msg: The message to be displayed.
*/
void
ql_log_qp(uint32_t level, struct qla_qpair *qpair, int32_t id,
const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char pbuf[128];
if (level > ql_errlev)
return;
ql_ktrace(0, level, pbuf, NULL, qpair ? qpair->vha : NULL, id, fmt);
if (!pbuf[0]) /* set by ql_ktrace */
ql_dbg_prefix(pbuf, ARRAY_SIZE(pbuf), NULL,
qpair ? qpair->vha : NULL, id);
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
switch (level) {
case ql_log_fatal: /* FATAL LOG */
pr_crit("%s%pV", pbuf, &vaf);
break;
case ql_log_warn:
pr_err("%s%pV", pbuf, &vaf);
break;
case ql_log_info:
pr_warn("%s%pV", pbuf, &vaf);
break;
default:
pr_info("%s%pV", pbuf, &vaf);
break;
}
va_end(va);
}
/*
* This function is for formatting and logging debug information.
* It is to be used when vha is available. It formats the message

View File

@ -334,9 +334,6 @@ ql_log(uint, scsi_qla_host_t *vha, uint, const char *fmt, ...);
void __attribute__((format (printf, 4, 5)))
ql_log_pci(uint, struct pci_dev *pdev, uint, const char *fmt, ...);
void __attribute__((format (printf, 4, 5)))
ql_log_qp(uint32_t, struct qla_qpair *, int32_t, const char *fmt, ...);
/* Debug Levels */
/* The 0x40000000 is the max value any debug level can have
* as ql2xextended_error_logging is of type signed int

View File

@ -164,10 +164,8 @@ extern int ql2xsmartsan;
extern int ql2xallocfwdump;
extern int ql2xextended_error_logging;
extern int ql2xextended_error_logging_ktrace;
extern int ql2xiidmaenable;
extern int ql2xmqsupport;
extern int ql2xfwloadbin;
extern int ql2xetsenable;
extern int ql2xshiftctondsd;
extern int ql2xdbwr;
extern int ql2xasynctmfenable;
@ -720,7 +718,6 @@ extern void *qla2x00_prep_ms_fdmi_iocb(scsi_qla_host_t *, uint32_t, uint32_t);
extern void *qla24xx_prep_ms_fdmi_iocb(scsi_qla_host_t *, uint32_t, uint32_t);
extern int qla2x00_fdmi_register(scsi_qla_host_t *);
extern int qla2x00_gfpn_id(scsi_qla_host_t *, sw_info_t *);
extern int qla2x00_gpsc(scsi_qla_host_t *, sw_info_t *);
extern size_t qla2x00_get_sym_node_name(scsi_qla_host_t *, uint8_t *, size_t);
extern int qla2x00_chk_ms_status(scsi_qla_host_t *, ms_iocb_entry_t *,
struct ct_sns_rsp *, const char *);
@ -822,7 +819,6 @@ extern int qlafx00_rescan_isp(scsi_qla_host_t *);
/* PCI related functions */
extern int qla82xx_pci_config(struct scsi_qla_host *);
extern int qla82xx_pci_mem_read_2M(struct qla_hw_data *, u64, void *, int);
extern int qla82xx_pci_region_offset(struct pci_dev *, int);
extern int qla82xx_iospace_config(struct qla_hw_data *);
/* Initialization related functions */
@ -866,7 +862,6 @@ extern int qla82xx_rd_32(struct qla_hw_data *, ulong);
/* ISP 8021 IDC */
extern void qla82xx_clear_drv_active(struct qla_hw_data *);
extern uint32_t qla82xx_wait_for_state_change(scsi_qla_host_t *, uint32_t);
extern int qla82xx_idc_lock(struct qla_hw_data *);
extern void qla82xx_idc_unlock(struct qla_hw_data *);
extern int qla82xx_device_state_handler(scsi_qla_host_t *);

View File

@ -2625,96 +2625,6 @@ qla2x00_port_speed_capability(uint16_t speed)
}
}
/**
* qla2x00_gpsc() - FCS Get Port Speed Capabilities (GPSC) query.
* @vha: HA context
* @list: switch info entries to populate
*
* Returns 0 on success.
*/
int
qla2x00_gpsc(scsi_qla_host_t *vha, sw_info_t *list)
{
int rval;
uint16_t i;
struct qla_hw_data *ha = vha->hw;
ms_iocb_entry_t *ms_pkt;
struct ct_sns_req *ct_req;
struct ct_sns_rsp *ct_rsp;
struct ct_arg arg;
if (!IS_IIDMA_CAPABLE(ha))
return QLA_FUNCTION_FAILED;
if (!ha->flags.gpsc_supported)
return QLA_FUNCTION_FAILED;
rval = qla2x00_mgmt_svr_login(vha);
if (rval)
return rval;
arg.iocb = ha->ms_iocb;
arg.req_dma = ha->ct_sns_dma;
arg.rsp_dma = ha->ct_sns_dma;
arg.req_size = GPSC_REQ_SIZE;
arg.rsp_size = GPSC_RSP_SIZE;
arg.nport_handle = vha->mgmt_svr_loop_id;
for (i = 0; i < ha->max_fibre_devices; i++) {
/* Issue GFPN_ID */
/* Prepare common MS IOCB */
ms_pkt = qla24xx_prep_ms_iocb(vha, &arg);
/* Prepare CT request */
ct_req = qla24xx_prep_ct_fm_req(ha->ct_sns, GPSC_CMD,
GPSC_RSP_SIZE);
ct_rsp = &ha->ct_sns->p.rsp;
/* Prepare CT arguments -- port_name */
memcpy(ct_req->req.gpsc.port_name, list[i].fabric_port_name,
WWN_SIZE);
/* Execute MS IOCB */
rval = qla2x00_issue_iocb(vha, ha->ms_iocb, ha->ms_iocb_dma,
sizeof(ms_iocb_entry_t));
if (rval != QLA_SUCCESS) {
/*EMPTY*/
ql_dbg(ql_dbg_disc, vha, 0x2059,
"GPSC issue IOCB failed (%d).\n", rval);
} else if ((rval = qla2x00_chk_ms_status(vha, ms_pkt, ct_rsp,
"GPSC")) != QLA_SUCCESS) {
/* FM command unsupported? */
if (rval == QLA_INVALID_COMMAND &&
(ct_rsp->header.reason_code ==
CT_REASON_INVALID_COMMAND_CODE ||
ct_rsp->header.reason_code ==
CT_REASON_COMMAND_UNSUPPORTED)) {
ql_dbg(ql_dbg_disc, vha, 0x205a,
"GPSC command unsupported, disabling "
"query.\n");
ha->flags.gpsc_supported = 0;
rval = QLA_FUNCTION_FAILED;
break;
}
rval = QLA_FUNCTION_FAILED;
} else {
list->fp_speed = qla2x00_port_speed_capability(
be16_to_cpu(ct_rsp->rsp.gpsc.speed));
ql_dbg(ql_dbg_disc, vha, 0x205b,
"GPSC ext entry - fpn "
"%8phN speeds=%04x speed=%04x.\n",
list[i].fabric_port_name,
be16_to_cpu(ct_rsp->rsp.gpsc.speeds),
be16_to_cpu(ct_rsp->rsp.gpsc.speed));
}
/* Last device exit. */
if (list[i].d_id.b.rsvd_1 != 0)
break;
}
return (rval);
}
/**
* qla2x00_gff_id() - SNS Get FC-4 Features (GFF_ID) query.
*

View File

@ -1099,11 +1099,6 @@ qla82xx_pinit_from_rom(scsi_qla_host_t *vha)
unsigned offset, n;
struct qla_hw_data *ha = vha->hw;
struct crb_addr_pair {
long addr;
long data;
};
/* Halt all the individual PEGs and other blocks of the ISP */
qla82xx_rom_lock(ha);
@ -1595,25 +1590,6 @@ qla82xx_get_fw_offs(struct qla_hw_data *ha)
return (u8 *)&ha->hablob->fw->data[offset];
}
/* PCI related functions */
int qla82xx_pci_region_offset(struct pci_dev *pdev, int region)
{
unsigned long val = 0;
u32 control;
switch (region) {
case 0:
val = 0;
break;
case 1:
pci_read_config_dword(pdev, QLA82XX_PCI_REG_MSIX_TBL, &control);
val = control + QLA82XX_MSIX_TBL_SPACE;
break;
}
return val;
}
int
qla82xx_iospace_config(struct qla_hw_data *ha)
{
@ -2934,32 +2910,6 @@ qla82xx_need_qsnt_handler(scsi_qla_host_t *vha)
}
}
/*
* qla82xx_wait_for_state_change
* Wait for device state to change from given current state
*
* Note:
* IDC lock must not be held upon entry
*
* Return:
* Changed device state.
*/
uint32_t
qla82xx_wait_for_state_change(scsi_qla_host_t *vha, uint32_t curr_state)
{
struct qla_hw_data *ha = vha->hw;
uint32_t dev_state;
do {
msleep(1000);
qla82xx_idc_lock(ha);
dev_state = qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE);
qla82xx_idc_unlock(ha);
} while (dev_state == curr_state);
return dev_state;
}
void
qla8xxx_dev_failed_handler(scsi_qla_host_t *vha)
{

View File

@ -176,12 +176,6 @@ MODULE_PARM_DESC(ql2xenablehba_err_chk,
" 1 -- Error isolation enabled only for DIX Type 0\n"
" 2 -- Error isolation enabled for all Types\n");
int ql2xiidmaenable = 1;
module_param(ql2xiidmaenable, int, S_IRUGO);
MODULE_PARM_DESC(ql2xiidmaenable,
"Enables iIDMA settings "
"Default is 1 - perform iIDMA. 0 - no iIDMA.");
int ql2xmqsupport = 1;
module_param(ql2xmqsupport, int, S_IRUGO);
MODULE_PARM_DESC(ql2xmqsupport,
@ -199,12 +193,6 @@ MODULE_PARM_DESC(ql2xfwloadbin,
" 1 -- load firmware from flash.\n"
" 0 -- use default semantics.\n");
int ql2xetsenable;
module_param(ql2xetsenable, int, S_IRUGO);
MODULE_PARM_DESC(ql2xetsenable,
"Enables firmware ETS burst."
"Default is 0 - skip ETS enablement.");
int ql2xdbwr = 1;
module_param(ql2xdbwr, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(ql2xdbwr,

View File

@ -1454,50 +1454,6 @@ static struct fc_port *qlt_create_sess(
return sess;
}
/*
* max_gen - specifies maximum session generation
* at which this deletion request is still valid
*/
void
qlt_fc_port_deleted(struct scsi_qla_host *vha, fc_port_t *fcport, int max_gen)
{
struct qla_tgt *tgt = vha->vha_tgt.qla_tgt;
struct fc_port *sess = fcport;
unsigned long flags;
if (!vha->hw->tgt.tgt_ops)
return;
if (!tgt)
return;
spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
if (tgt->tgt_stop) {
spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
return;
}
if (!sess->se_sess) {
spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
return;
}
if (max_gen - sess->generation < 0) {
spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
ql_dbg(ql_dbg_tgt_mgt, vha, 0xf092,
"Ignoring stale deletion request for se_sess %p / sess %p"
" for port %8phC, req_gen %d, sess_gen %d\n",
sess->se_sess, sess, sess->port_name, max_gen,
sess->generation);
return;
}
ql_dbg(ql_dbg_tgt_mgt, vha, 0xf008, "qla_tgt_fc_port_deleted %p", sess);
sess->local = 1;
spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
qlt_schedule_sess_for_deletion(sess);
}
static inline int test_tgt_sess_count(struct qla_tgt *tgt)
{
struct qla_hw_data *ha = tgt->ha;
@ -5539,81 +5495,6 @@ qlt_alloc_qfull_cmd(struct scsi_qla_host *vha,
spin_unlock_irqrestore(&vha->hw->tgt.q_full_lock, flags);
}
int
qlt_free_qfull_cmds(struct qla_qpair *qpair)
{
struct scsi_qla_host *vha = qpair->vha;
struct qla_hw_data *ha = vha->hw;
unsigned long flags;
struct qla_tgt_cmd *cmd, *tcmd;
struct list_head free_list, q_full_list;
int rc = 0;
if (list_empty(&ha->tgt.q_full_list))
return 0;
INIT_LIST_HEAD(&free_list);
INIT_LIST_HEAD(&q_full_list);
spin_lock_irqsave(&vha->hw->tgt.q_full_lock, flags);
if (list_empty(&ha->tgt.q_full_list)) {
spin_unlock_irqrestore(&vha->hw->tgt.q_full_lock, flags);
return 0;
}
list_splice_init(&vha->hw->tgt.q_full_list, &q_full_list);
spin_unlock_irqrestore(&vha->hw->tgt.q_full_lock, flags);
spin_lock_irqsave(qpair->qp_lock_ptr, flags);
list_for_each_entry_safe(cmd, tcmd, &q_full_list, cmd_list) {
if (cmd->q_full)
/* cmd->state is a borrowed field to hold status */
rc = __qlt_send_busy(qpair, &cmd->atio, cmd->state);
else if (cmd->term_exchg)
rc = __qlt_send_term_exchange(qpair, NULL, &cmd->atio);
if (rc == -ENOMEM)
break;
if (cmd->q_full)
ql_dbg(ql_dbg_io, vha, 0x3006,
"%s: busy sent for ox_id[%04x]\n", __func__,
be16_to_cpu(cmd->atio.u.isp24.fcp_hdr.ox_id));
else if (cmd->term_exchg)
ql_dbg(ql_dbg_io, vha, 0x3007,
"%s: Term exchg sent for ox_id[%04x]\n", __func__,
be16_to_cpu(cmd->atio.u.isp24.fcp_hdr.ox_id));
else
ql_dbg(ql_dbg_io, vha, 0x3008,
"%s: Unexpected cmd in QFull list %p\n", __func__,
cmd);
list_move_tail(&cmd->cmd_list, &free_list);
/* piggy back on hardware_lock for protection */
vha->hw->tgt.num_qfull_cmds_alloc--;
}
spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
cmd = NULL;
list_for_each_entry_safe(cmd, tcmd, &free_list, cmd_list) {
list_del(&cmd->cmd_list);
/* This cmd was never sent to TCM. There is no need
* to schedule free or call free_cmd
*/
qlt_free_cmd(cmd);
}
if (!list_empty(&q_full_list)) {
spin_lock_irqsave(&vha->hw->tgt.q_full_lock, flags);
list_splice(&q_full_list, &vha->hw->tgt.q_full_list);
spin_unlock_irqrestore(&vha->hw->tgt.q_full_lock, flags);
}
return rc;
}
static void
qlt_send_busy(struct qla_qpair *qpair, struct atio_from_isp *atio,
uint16_t status)
@ -7090,16 +6971,6 @@ qlt_81xx_config_nvram_stage2(struct scsi_qla_host *vha,
}
}
void
qlt_83xx_iospace_config(struct qla_hw_data *ha)
{
if (!QLA_TGT_MODE_ENABLED())
return;
ha->msix_count += 1; /* For ATIO Q */
}
void
qlt_modify_vp_config(struct scsi_qla_host *vha,
struct vp_config_entry_24xx *vpmod)

View File

@ -1014,7 +1014,6 @@ extern int qlt_lport_register(void *, u64, u64, u64,
extern void qlt_lport_deregister(struct scsi_qla_host *);
extern void qlt_unreg_sess(struct fc_port *);
extern void qlt_fc_port_added(struct scsi_qla_host *, fc_port_t *);
extern void qlt_fc_port_deleted(struct scsi_qla_host *, fc_port_t *, int);
extern int __init qlt_init(void);
extern void qlt_exit(void);
extern void qlt_free_session_done(struct work_struct *);
@ -1082,8 +1081,6 @@ extern void qlt_mem_free(struct qla_hw_data *);
extern int qlt_stop_phase1(struct qla_tgt *);
extern void qlt_stop_phase2(struct qla_tgt *);
extern irqreturn_t qla83xx_msix_atio_q(int, void *);
extern void qlt_83xx_iospace_config(struct qla_hw_data *);
extern int qlt_free_qfull_cmds(struct qla_qpair *);
extern void qlt_logo_completion_handler(fc_port_t *, int);
extern void qlt_do_generation_tick(struct scsi_qla_host *, int *);

View File

@ -973,11 +973,6 @@ qla4_82xx_pinit_from_rom(struct scsi_qla_host *ha, int verbose)
unsigned long off;
unsigned offset, n;
struct crb_addr_pair {
long addr;
long data;
};
/* Halt all the individual PEGs and other blocks of the ISP */
qla4_82xx_rom_lock(ha);

View File

@ -162,7 +162,7 @@ static const char *sdebug_version_date = "20210520";
#define DEF_VPD_USE_HOSTNO 1
#define DEF_WRITESAME_LENGTH 0xFFFF
#define DEF_ATOMIC_WR 0
#define DEF_ATOMIC_WR_MAX_LENGTH 8192
#define DEF_ATOMIC_WR_MAX_LENGTH 128
#define DEF_ATOMIC_WR_ALIGN 2
#define DEF_ATOMIC_WR_GRAN 2
#define DEF_ATOMIC_WR_MAX_LENGTH_BNDRY (DEF_ATOMIC_WR_MAX_LENGTH)
@ -294,6 +294,14 @@ struct tape_block {
#define FF_SA (F_SA_HIGH | F_SA_LOW)
#define F_LONG_DELAY (F_SSU_DELAY | F_SYNC_DELAY)
/* Device selection bit mask */
#define DS_ALL 0xffffffff
#define DS_SBC (1 << TYPE_DISK)
#define DS_SSC (1 << TYPE_TAPE)
#define DS_ZBC (1 << TYPE_ZBC)
#define DS_NO_SSC (DS_ALL & ~DS_SSC)
#define SDEBUG_MAX_PARTS 4
#define SDEBUG_MAX_CMD_LEN 32
@ -472,6 +480,7 @@ struct opcode_info_t {
/* for terminating element */
u8 opcode; /* if num_attached > 0, preferred */
u16 sa; /* service action */
u32 devsel; /* device type mask for this definition */
u32 flags; /* OR-ed set of SDEB_F_* */
int (*pfp)(struct scsi_cmnd *, struct sdebug_dev_info *);
const struct opcode_info_t *arrp; /* num_attached elements or NULL */
@ -519,7 +528,8 @@ enum sdeb_opcode_index {
SDEB_I_WRITE_FILEMARKS = 35,
SDEB_I_SPACE = 36,
SDEB_I_FORMAT_MEDIUM = 37,
SDEB_I_LAST_ELEM_P1 = 38, /* keep this last (previous + 1) */
SDEB_I_ERASE = 38,
SDEB_I_LAST_ELEM_P1 = 39, /* keep this last (previous + 1) */
};
@ -530,7 +540,7 @@ static const unsigned char opcode_ind_arr[256] = {
SDEB_I_READ, 0, SDEB_I_WRITE, 0, 0, 0, 0, 0,
SDEB_I_WRITE_FILEMARKS, SDEB_I_SPACE, SDEB_I_INQUIRY, 0, 0,
SDEB_I_MODE_SELECT, SDEB_I_RESERVE, SDEB_I_RELEASE,
0, 0, SDEB_I_MODE_SENSE, SDEB_I_START_STOP, 0, SDEB_I_SEND_DIAG,
0, SDEB_I_ERASE, SDEB_I_MODE_SENSE, SDEB_I_START_STOP, 0, SDEB_I_SEND_DIAG,
SDEB_I_ALLOW_REMOVAL, 0,
/* 0x20; 0x20->0x3f: 10 byte cdbs */
0, 0, 0, 0, 0, SDEB_I_READ_CAPACITY, 0, 0,
@ -585,7 +595,9 @@ static int resp_mode_select(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_log_sense(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_readcap(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_read_dt0(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_read_tape(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_write_dt0(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_write_tape(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_write_scat(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_start_stop(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_readcap16(struct scsi_cmnd *, struct sdebug_dev_info *);
@ -613,8 +625,10 @@ static int resp_read_blklimits(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_locate(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_write_filemarks(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_space(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_read_position(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_rewind(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_format_medium(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_erase(struct scsi_cmnd *, struct sdebug_dev_info *);
static int sdebug_do_add_host(bool mk_new_store);
static int sdebug_add_host_helper(int per_host_idx);
@ -629,113 +643,121 @@ static void sdebug_erase_all_stores(bool apart_from_first);
* should be placed in opcode_info_arr[], the others should be placed here.
*/
static const struct opcode_info_t msense_iarr[] = {
{0, 0x1a, 0, F_D_IN, NULL, NULL,
{0, 0x1a, 0, DS_ALL, F_D_IN, NULL, NULL,
{6, 0xe8, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
};
static const struct opcode_info_t mselect_iarr[] = {
{0, 0x15, 0, F_D_OUT, NULL, NULL,
{0, 0x15, 0, DS_ALL, F_D_OUT, NULL, NULL,
{6, 0xf1, 0, 0, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
};
static const struct opcode_info_t read_iarr[] = {
{0, 0x28, 0, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL,/* READ(10) */
{0, 0x28, 0, DS_NO_SSC, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL,/* READ(10) */
{10, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 0,
0, 0, 0, 0} },
{0, 0x8, 0, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL, /* READ(6) */
{0, 0x8, 0, DS_NO_SSC, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL, /* READ(6) disk */
{6, 0xff, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0xa8, 0, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL,/* READ(12) */
{0, 0x8, 0, DS_SSC, F_D_IN | FF_MEDIA_IO, resp_read_tape, NULL, /* READ(6) tape */
{6, 0x03, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0xa8, 0, DS_NO_SSC, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL,/* READ(12) */
{12, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf,
0xc7, 0, 0, 0, 0} },
};
static const struct opcode_info_t write_iarr[] = {
{0, 0x2a, 0, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(10) */
{0, 0x2a, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(10) */
NULL, {10, 0xfb, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7,
0, 0, 0, 0, 0, 0} },
{0, 0xa, 0, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(6) */
{0, 0xa, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(6) disk */
NULL, {6, 0xff, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0} },
{0, 0xaa, 0, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(12) */
{0, 0xa, 0, DS_SSC, F_D_OUT | FF_MEDIA_IO, resp_write_tape, /* WRITE(6) tape */
NULL, {6, 0x01, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0} },
{0, 0xaa, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(12) */
NULL, {12, 0xfb, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xbf, 0xc7, 0, 0, 0, 0} },
};
static const struct opcode_info_t verify_iarr[] = {
{0, 0x2f, 0, F_D_OUT_MAYBE | FF_MEDIA_IO, resp_verify,/* VERIFY(10) */
{0, 0x2f, 0, DS_NO_SSC, F_D_OUT_MAYBE | FF_MEDIA_IO, resp_verify,/* VERIFY(10) */
NULL, {10, 0xf7, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xff, 0xff, 0xc7,
0, 0, 0, 0, 0, 0} },
};
static const struct opcode_info_t sa_in_16_iarr[] = {
{0, 0x9e, 0x12, F_SA_LOW | F_D_IN, resp_get_lba_status, NULL,
{0, 0x9e, 0x12, DS_NO_SSC, F_SA_LOW | F_D_IN, resp_get_lba_status, NULL,
{16, 0x12, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0, 0xc7} }, /* GET LBA STATUS(16) */
{0, 0x9e, 0x16, F_SA_LOW | F_D_IN, resp_get_stream_status, NULL,
{0, 0x9e, 0x16, DS_NO_SSC, F_SA_LOW | F_D_IN, resp_get_stream_status, NULL,
{16, 0x16, 0, 0, 0xff, 0xff, 0, 0, 0, 0, 0xff, 0xff, 0xff, 0xff,
0, 0} }, /* GET STREAM STATUS */
};
static const struct opcode_info_t vl_iarr[] = { /* VARIABLE LENGTH */
{0, 0x7f, 0xb, F_SA_HIGH | F_D_OUT | FF_MEDIA_IO, resp_write_dt0,
{0, 0x7f, 0xb, DS_NO_SSC, F_SA_HIGH | F_D_OUT | FF_MEDIA_IO, resp_write_dt0,
NULL, {32, 0xc7, 0, 0, 0, 0, 0x3f, 0x18, 0x0, 0xb, 0xfa,
0, 0xff, 0xff, 0xff, 0xff} }, /* WRITE(32) */
{0, 0x7f, 0x11, F_SA_HIGH | F_D_OUT | FF_MEDIA_IO, resp_write_scat,
{0, 0x7f, 0x11, DS_NO_SSC, F_SA_HIGH | F_D_OUT | FF_MEDIA_IO, resp_write_scat,
NULL, {32, 0xc7, 0, 0, 0, 0, 0x3f, 0x18, 0x0, 0x11, 0xf8,
0, 0xff, 0xff, 0x0, 0x0} }, /* WRITE SCATTERED(32) */
};
static const struct opcode_info_t maint_in_iarr[] = { /* MAINT IN */
{0, 0xa3, 0xc, F_SA_LOW | F_D_IN, resp_rsup_opcodes, NULL,
{0, 0xa3, 0xc, DS_ALL, F_SA_LOW | F_D_IN, resp_rsup_opcodes, NULL,
{12, 0xc, 0x87, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0,
0xc7, 0, 0, 0, 0} }, /* REPORT SUPPORTED OPERATION CODES */
{0, 0xa3, 0xd, F_SA_LOW | F_D_IN, resp_rsup_tmfs, NULL,
{0, 0xa3, 0xd, DS_ALL, F_SA_LOW | F_D_IN, resp_rsup_tmfs, NULL,
{12, 0xd, 0x80, 0, 0, 0, 0xff, 0xff, 0xff, 0xff, 0, 0xc7, 0, 0,
0, 0} }, /* REPORTED SUPPORTED TASK MANAGEMENT FUNCTIONS */
};
static const struct opcode_info_t write_same_iarr[] = {
{0, 0x93, 0, F_D_OUT_MAYBE | FF_MEDIA_IO, resp_write_same_16, NULL,
{0, 0x93, 0, DS_NO_SSC, F_D_OUT_MAYBE | FF_MEDIA_IO, resp_write_same_16, NULL,
{16, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* WRITE SAME(16) */
};
static const struct opcode_info_t reserve_iarr[] = {
{0, 0x16, 0, F_D_OUT, NULL, NULL, /* RESERVE(6) */
{0, 0x16, 0, DS_ALL, F_D_OUT, NULL, NULL, /* RESERVE(6) */
{6, 0x1f, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
};
static const struct opcode_info_t release_iarr[] = {
{0, 0x17, 0, F_D_OUT, NULL, NULL, /* RELEASE(6) */
{0, 0x17, 0, DS_ALL, F_D_OUT, NULL, NULL, /* RELEASE(6) */
{6, 0x1f, 0xff, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
};
static const struct opcode_info_t sync_cache_iarr[] = {
{0, 0x91, 0, F_SYNC_DELAY | F_M_ACCESS, resp_sync_cache, NULL,
{0, 0x91, 0, DS_NO_SSC, F_SYNC_DELAY | F_M_ACCESS, resp_sync_cache, NULL,
{16, 0x6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* SYNC_CACHE (16) */
};
static const struct opcode_info_t pre_fetch_iarr[] = {
{0, 0x90, 0, F_SYNC_DELAY | FF_MEDIA_IO, resp_pre_fetch, NULL,
{0, 0x90, 0, DS_NO_SSC, F_SYNC_DELAY | FF_MEDIA_IO, resp_pre_fetch, NULL,
{16, 0x2, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* PRE-FETCH (16) */
{0, 0x34, 0, DS_SSC, F_SYNC_DELAY | FF_MEDIA_IO, resp_read_position, NULL,
{10, 0x1f, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xc7, 0, 0,
0, 0, 0, 0} }, /* READ POSITION (10) */
};
static const struct opcode_info_t zone_out_iarr[] = { /* ZONE OUT(16) */
{0, 0x94, 0x1, F_SA_LOW | F_M_ACCESS, resp_close_zone, NULL,
{0, 0x94, 0x1, DS_NO_SSC, F_SA_LOW | F_M_ACCESS, resp_close_zone, NULL,
{16, 0x1, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0, 0, 0xff, 0xff, 0x1, 0xc7} }, /* CLOSE ZONE */
{0, 0x94, 0x2, F_SA_LOW | F_M_ACCESS, resp_finish_zone, NULL,
{0, 0x94, 0x2, DS_NO_SSC, F_SA_LOW | F_M_ACCESS, resp_finish_zone, NULL,
{16, 0x2, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0, 0, 0xff, 0xff, 0x1, 0xc7} }, /* FINISH ZONE */
{0, 0x94, 0x4, F_SA_LOW | F_M_ACCESS, resp_rwp_zone, NULL,
{0, 0x94, 0x4, DS_NO_SSC, F_SA_LOW | F_M_ACCESS, resp_rwp_zone, NULL,
{16, 0x4, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0, 0, 0xff, 0xff, 0x1, 0xc7} }, /* RESET WRITE POINTER */
};
static const struct opcode_info_t zone_in_iarr[] = { /* ZONE IN(16) */
{0, 0x95, 0x6, F_SA_LOW | F_D_IN | F_M_ACCESS, NULL, NULL,
{0, 0x95, 0x6, DS_NO_SSC, F_SA_LOW | F_D_IN | F_M_ACCESS, NULL, NULL,
{16, 0x6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* REPORT ZONES */
};
@ -746,130 +768,132 @@ static const struct opcode_info_t zone_in_iarr[] = { /* ZONE IN(16) */
* REPORT SUPPORTED OPERATION CODES. */
static const struct opcode_info_t opcode_info_arr[SDEB_I_LAST_ELEM_P1 + 1] = {
/* 0 */
{0, 0, 0, F_INV_OP | FF_RESPOND, NULL, NULL, /* unknown opcodes */
{0, 0, 0, DS_ALL, F_INV_OP | FF_RESPOND, NULL, NULL, /* unknown opcodes */
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0x12, 0, FF_RESPOND | F_D_IN, resp_inquiry, NULL, /* INQUIRY */
{0, 0x12, 0, DS_ALL, FF_RESPOND | F_D_IN, resp_inquiry, NULL, /* INQUIRY */
{6, 0xe3, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0xa0, 0, FF_RESPOND | F_D_IN, resp_report_luns, NULL,
{0, 0xa0, 0, DS_ALL, FF_RESPOND | F_D_IN, resp_report_luns, NULL,
{12, 0xe3, 0xff, 0, 0, 0, 0xff, 0xff, 0xff, 0xff, 0, 0xc7, 0, 0,
0, 0} }, /* REPORT LUNS */
{0, 0x3, 0, FF_RESPOND | F_D_IN, resp_requests, NULL,
{0, 0x3, 0, DS_ALL, FF_RESPOND | F_D_IN, resp_requests, NULL,
{6, 0xe1, 0, 0, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0x0, 0, F_M_ACCESS | F_RL_WLUN_OK, NULL, NULL,/* TEST UNIT READY */
{0, 0x0, 0, DS_ALL, F_M_ACCESS | F_RL_WLUN_OK, NULL, NULL,/* TEST UNIT READY */
{6, 0, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
/* 5 */
{ARRAY_SIZE(msense_iarr), 0x5a, 0, F_D_IN, /* MODE SENSE(10) */
{ARRAY_SIZE(msense_iarr), 0x5a, 0, DS_ALL, F_D_IN, /* MODE SENSE(10) */
resp_mode_sense, msense_iarr, {10, 0xf8, 0xff, 0xff, 0, 0, 0,
0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0} },
{ARRAY_SIZE(mselect_iarr), 0x55, 0, F_D_OUT, /* MODE SELECT(10) */
{ARRAY_SIZE(mselect_iarr), 0x55, 0, DS_ALL, F_D_OUT, /* MODE SELECT(10) */
resp_mode_select, mselect_iarr, {10, 0xf1, 0, 0, 0, 0, 0, 0xff,
0xff, 0xc7, 0, 0, 0, 0, 0, 0} },
{0, 0x4d, 0, F_D_IN, resp_log_sense, NULL, /* LOG SENSE */
{0, 0x4d, 0, DS_NO_SSC, F_D_IN, resp_log_sense, NULL, /* LOG SENSE */
{10, 0xe3, 0xff, 0xff, 0, 0xff, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0,
0, 0, 0} },
{0, 0x25, 0, F_D_IN, resp_readcap, NULL, /* READ CAPACITY(10) */
{0, 0x25, 0, DS_NO_SSC, F_D_IN, resp_readcap, NULL, /* READ CAPACITY(10) */
{10, 0xe1, 0xff, 0xff, 0xff, 0xff, 0, 0, 0x1, 0xc7, 0, 0, 0, 0,
0, 0} },
{ARRAY_SIZE(read_iarr), 0x88, 0, F_D_IN | FF_MEDIA_IO, /* READ(16) */
{ARRAY_SIZE(read_iarr), 0x88, 0, DS_NO_SSC, F_D_IN | FF_MEDIA_IO, /* READ(16) */
resp_read_dt0, read_iarr, {16, 0xfe, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc7} },
/* 10 */
{ARRAY_SIZE(write_iarr), 0x8a, 0, F_D_OUT | FF_MEDIA_IO,
{ARRAY_SIZE(write_iarr), 0x8a, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO,
resp_write_dt0, write_iarr, /* WRITE(16) */
{16, 0xfa, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xc7} },
{0, 0x1b, 0, F_SSU_DELAY, resp_start_stop, NULL,/* START STOP UNIT */
{0, 0x1b, 0, DS_ALL, F_SSU_DELAY, resp_start_stop, NULL,/* START STOP UNIT */
{6, 0x1, 0, 0xf, 0xf7, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{ARRAY_SIZE(sa_in_16_iarr), 0x9e, 0x10, F_SA_LOW | F_D_IN,
{ARRAY_SIZE(sa_in_16_iarr), 0x9e, 0x10, DS_NO_SSC, F_SA_LOW | F_D_IN,
resp_readcap16, sa_in_16_iarr, /* SA_IN(16), READ CAPACITY(16) */
{16, 0x10, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0x1, 0xc7} },
{0, 0x9f, 0x12, F_SA_LOW | F_D_OUT | FF_MEDIA_IO, resp_write_scat,
{0, 0x9f, 0x12, DS_NO_SSC, F_SA_LOW | F_D_OUT | FF_MEDIA_IO, resp_write_scat,
NULL, {16, 0x12, 0xf9, 0x0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xc7} }, /* SA_OUT(16), WRITE SCAT(16) */
{ARRAY_SIZE(maint_in_iarr), 0xa3, 0xa, F_SA_LOW | F_D_IN,
{ARRAY_SIZE(maint_in_iarr), 0xa3, 0xa, DS_ALL, F_SA_LOW | F_D_IN,
resp_report_tgtpgs, /* MAINT IN, REPORT TARGET PORT GROUPS */
maint_in_iarr, {12, 0xea, 0, 0, 0, 0, 0xff, 0xff, 0xff,
0xff, 0, 0xc7, 0, 0, 0, 0} },
/* 15 */
{0, 0, 0, F_INV_OP | FF_RESPOND, NULL, NULL, /* MAINT OUT */
{0, 0, 0, DS_ALL, F_INV_OP | FF_RESPOND, NULL, NULL, /* MAINT OUT */
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{ARRAY_SIZE(verify_iarr), 0x8f, 0,
{ARRAY_SIZE(verify_iarr), 0x8f, 0, DS_NO_SSC,
F_D_OUT_MAYBE | FF_MEDIA_IO, resp_verify, /* VERIFY(16) */
verify_iarr, {16, 0xf6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} },
{ARRAY_SIZE(vl_iarr), 0x7f, 0x9, F_SA_HIGH | F_D_IN | FF_MEDIA_IO,
{ARRAY_SIZE(vl_iarr), 0x7f, 0x9, DS_NO_SSC, F_SA_HIGH | F_D_IN | FF_MEDIA_IO,
resp_read_dt0, vl_iarr, /* VARIABLE LENGTH, READ(32) */
{32, 0xc7, 0, 0, 0, 0, 0x3f, 0x18, 0x0, 0x9, 0xfe, 0, 0xff, 0xff,
0xff, 0xff} },
{ARRAY_SIZE(reserve_iarr), 0x56, 0, F_D_OUT,
{ARRAY_SIZE(reserve_iarr), 0x56, 0, DS_ALL, F_D_OUT,
NULL, reserve_iarr, /* RESERVE(10) <no response function> */
{10, 0xff, 0xff, 0xff, 0, 0, 0, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0,
0} },
{ARRAY_SIZE(release_iarr), 0x57, 0, F_D_OUT,
{ARRAY_SIZE(release_iarr), 0x57, 0, DS_ALL, F_D_OUT,
NULL, release_iarr, /* RELEASE(10) <no response function> */
{10, 0x13, 0xff, 0xff, 0, 0, 0, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0,
0} },
/* 20 */
{0, 0x1e, 0, 0, NULL, NULL, /* ALLOW REMOVAL */
{0, 0x1e, 0, DS_ALL, 0, NULL, NULL, /* ALLOW REMOVAL */
{6, 0, 0, 0, 0x3, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0x1, 0, 0, resp_rewind, NULL,
{0, 0x1, 0, DS_SSC, 0, resp_rewind, NULL,
{6, 0x1, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0, 0, F_INV_OP | FF_RESPOND, NULL, NULL, /* ATA_PT */
{0, 0, 0, DS_NO_SSC, F_INV_OP | FF_RESPOND, NULL, NULL, /* ATA_PT */
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0x1d, F_D_OUT, 0, NULL, NULL, /* SEND DIAGNOSTIC */
{0, 0x1d, 0, DS_ALL, F_D_OUT, NULL, NULL, /* SEND DIAGNOSTIC */
{6, 0xf7, 0, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0x42, 0, F_D_OUT | FF_MEDIA_IO, resp_unmap, NULL, /* UNMAP */
{0, 0x42, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_unmap, NULL, /* UNMAP */
{10, 0x1, 0, 0, 0, 0, 0x3f, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0} },
/* 25 */
{0, 0x3b, 0, F_D_OUT_MAYBE, resp_write_buffer, NULL,
{0, 0x3b, 0, DS_NO_SSC, F_D_OUT_MAYBE, resp_write_buffer, NULL,
{10, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc7, 0, 0,
0, 0, 0, 0} }, /* WRITE_BUFFER */
{ARRAY_SIZE(write_same_iarr), 0x41, 0, F_D_OUT_MAYBE | FF_MEDIA_IO,
{ARRAY_SIZE(write_same_iarr), 0x41, 0, DS_NO_SSC, F_D_OUT_MAYBE | FF_MEDIA_IO,
resp_write_same_10, write_same_iarr, /* WRITE SAME(10) */
{10, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0,
0, 0, 0, 0, 0} },
{ARRAY_SIZE(sync_cache_iarr), 0x35, 0, F_SYNC_DELAY | F_M_ACCESS,
{ARRAY_SIZE(sync_cache_iarr), 0x35, 0, DS_NO_SSC, F_SYNC_DELAY | F_M_ACCESS,
resp_sync_cache, sync_cache_iarr,
{10, 0x7, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 0,
0, 0, 0, 0} }, /* SYNC_CACHE (10) */
{0, 0x89, 0, F_D_OUT | FF_MEDIA_IO, resp_comp_write, NULL,
{0, 0x89, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_comp_write, NULL,
{16, 0xf8, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0, 0,
0, 0xff, 0x3f, 0xc7} }, /* COMPARE AND WRITE */
{ARRAY_SIZE(pre_fetch_iarr), 0x34, 0, F_SYNC_DELAY | FF_MEDIA_IO,
{ARRAY_SIZE(pre_fetch_iarr), 0x34, 0, DS_NO_SSC, F_SYNC_DELAY | FF_MEDIA_IO,
resp_pre_fetch, pre_fetch_iarr,
{10, 0x2, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 0,
0, 0, 0, 0} }, /* PRE-FETCH (10) */
/* READ POSITION (10) */
/* 30 */
{ARRAY_SIZE(zone_out_iarr), 0x94, 0x3, F_SA_LOW | F_M_ACCESS,
{ARRAY_SIZE(zone_out_iarr), 0x94, 0x3, DS_NO_SSC, F_SA_LOW | F_M_ACCESS,
resp_open_zone, zone_out_iarr, /* ZONE_OUT(16), OPEN ZONE) */
{16, 0x3 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0x0, 0x0, 0xff, 0xff, 0x1, 0xc7} },
{ARRAY_SIZE(zone_in_iarr), 0x95, 0x0, F_SA_LOW | F_M_ACCESS,
{ARRAY_SIZE(zone_in_iarr), 0x95, 0x0, DS_NO_SSC, F_SA_LOW | F_M_ACCESS,
resp_report_zones, zone_in_iarr, /* ZONE_IN(16), REPORT ZONES) */
{16, 0x0 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xc7} },
/* 32 */
{0, 0x0, 0x0, F_D_OUT | FF_MEDIA_IO,
{0, 0x9c, 0x0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO,
resp_atomic_write, NULL, /* ATOMIC WRITE 16 */
{16, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff} },
{0, 0x05, 0, F_D_IN, resp_read_blklimits, NULL, /* READ BLOCK LIMITS (6) */
{0, 0x05, 0, DS_SSC, F_D_IN, resp_read_blklimits, NULL, /* READ BLOCK LIMITS (6) */
{6, 0, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0x2b, 0, F_D_UNKN, resp_locate, NULL, /* LOCATE (10) */
{0, 0x2b, 0, DS_SSC, F_D_UNKN, resp_locate, NULL, /* LOCATE (10) */
{10, 0x07, 0, 0xff, 0xff, 0xff, 0xff, 0, 0xff, 0xc7, 0, 0,
0, 0, 0, 0} },
{0, 0x10, 0, F_D_IN, resp_write_filemarks, NULL, /* WRITE FILEMARKS (6) */
{0, 0x10, 0, DS_SSC, F_D_IN, resp_write_filemarks, NULL, /* WRITE FILEMARKS (6) */
{6, 0x01, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0x11, 0, F_D_IN, resp_space, NULL, /* SPACE (6) */
{0, 0x11, 0, DS_SSC, F_D_IN, resp_space, NULL, /* SPACE (6) */
{6, 0x07, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
{0, 0x4, 0, 0, resp_format_medium, NULL, /* FORMAT MEDIUM (6) */
{0, 0x4, 0, DS_SSC, 0, resp_format_medium, NULL, /* FORMAT MEDIUM (6) */
{6, 0x3, 0x7, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
/* 38 */
{0, 0x19, 0, DS_SSC, F_D_IN, resp_erase, NULL, /* ERASE (6) */
{6, 0x03, 0x33, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
/* 39 */
/* sentinel */
{0xff, 0, 0, 0, NULL, NULL, /* terminating element */
{0xff, 0, 0, 0, 0, NULL, NULL, /* terminating element */
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
};
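
The table above now threads a devsel mask through every opcode_info_t entry, so one opcode (e.g. 0x08) can map to different handlers for disk and tape; matching reduces to ANDing the device's bit against the entry's mask, as the sdebug_get_devsel() hunk below computes. A standalone sketch of that filter (the table rows are invented; the DS_* values mirror the defines above):

#include <stdio.h>

#define TYPE_DISK 0x00
#define TYPE_TAPE 0x01

#define DS_ALL    0xffffffff
#define DS_SBC    (1 << TYPE_DISK)
#define DS_SSC    (1 << TYPE_TAPE)
#define DS_NO_SSC (DS_ALL & ~DS_SSC)

struct op {
	unsigned char opcode;
	unsigned int devsel;
	const char *name;
};

/* invented rows in the spirit of opcode_info_arr */
static const struct op ops[] = {
	{ 0x08, DS_NO_SSC, "READ(6) disk" },
	{ 0x08, DS_SSC,    "READ(6) tape" },
	{ 0x12, DS_ALL,    "INQUIRY" },
};

static unsigned int devsel_for(unsigned char type)
{
	return type < 32 ? 1u << type : DS_ALL;	/* like sdebug_get_devsel() */
}

int main(void)
{
	unsigned int devsel = devsel_for(TYPE_TAPE);
	size_t i;

	for (i = 0; i < sizeof(ops) / sizeof(ops[0]); i++)
		if (ops[i].devsel & devsel)	/* entry applies to this device */
			printf("0x%02x -> %s\n", ops[i].opcode, ops[i].name);
	return 0;
}
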
@ -1015,6 +1039,19 @@ static const int condition_met_result = SAM_STAT_CONDITION_MET;
static struct dentry *sdebug_debugfs_root;
static ASYNC_DOMAIN_EXCLUSIVE(sdebug_async_domain);
static u32 sdebug_get_devsel(struct scsi_device *sdp)
{
unsigned char devtype = sdp->type;
u32 devsel;
if (devtype < 32)
devsel = (1 << devtype);
else
devsel = DS_ALL;
return devsel;
}
static void sdebug_err_free(struct rcu_head *head)
{
struct sdebug_err_inject *inject =
@ -2032,13 +2069,19 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
unsigned char *cmd = scp->cmnd;
u32 alloc_len, n;
int ret;
bool have_wlun, is_disk, is_zbc, is_disk_zbc;
bool have_wlun, is_disk, is_zbc, is_disk_zbc, is_tape;
alloc_len = get_unaligned_be16(cmd + 3);
arr = kzalloc(SDEBUG_MAX_INQ_ARR_SZ, GFP_ATOMIC);
if (! arr)
return DID_REQUEUE << 16;
is_disk = (sdebug_ptype == TYPE_DISK);
if (scp->device->type >= 32) {
is_disk = (sdebug_ptype == TYPE_DISK);
is_tape = (sdebug_ptype == TYPE_TAPE);
} else {
is_disk = (scp->device->type == TYPE_DISK);
is_tape = (scp->device->type == TYPE_TAPE);
}
is_zbc = devip->zoned;
is_disk_zbc = (is_disk || is_zbc);
have_wlun = scsi_is_wlun(scp->device->lun);
@ -2047,7 +2090,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
else if (sdebug_no_lun_0 && (devip->lun == SDEBUG_LUN_0_VAL))
pq_pdt = 0x7f; /* not present, PQ=3, PDT=0x1f */
else
pq_pdt = (sdebug_ptype & 0x1f);
pq_pdt = ((scp->device->type >= 32 ?
sdebug_ptype : scp->device->type) & 0x1f);
arr[0] = pq_pdt;
if (0x2 & cmd[1]) { /* CMDDT bit set */
mk_sense_invalid_fld(scp, SDEB_IN_CDB, 1, 1);
@ -2170,7 +2214,7 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
if (is_disk) { /* SBC-4 no version claimed */
put_unaligned_be16(0x600, arr + n);
n += 2;
} else if (sdebug_ptype == TYPE_TAPE) { /* SSC-4 rev 3 */
} else if (is_tape) { /* SSC-4 rev 3 */
put_unaligned_be16(0x525, arr + n);
n += 2;
} else if (is_zbc) { /* ZBC BSR INCITS 536 revision 05 */
@ -2279,7 +2323,7 @@ static int resp_start_stop(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
changing = (stopped_state != want_stop);
if (changing)
atomic_xchg(&devip->stopped, want_stop);
if (sdebug_ptype == TYPE_TAPE && !want_stop) {
if (scp->device->type == TYPE_TAPE && !want_stop) {
int i;
set_bit(SDEBUG_UA_NOT_READY_TO_READY, devip->uas_bm); /* not legal! */
@ -2454,11 +2498,12 @@ static int resp_rsup_opcodes(struct scsi_cmnd *scp,
u8 reporting_opts, req_opcode, sdeb_i, supp;
u16 req_sa, u;
u32 alloc_len, a_len;
int k, offset, len, errsts, count, bump, na;
int k, offset, len, errsts, bump, na;
const struct opcode_info_t *oip;
const struct opcode_info_t *r_oip;
u8 *arr;
u8 *cmd = scp->cmnd;
u32 devsel = sdebug_get_devsel(scp->device);
rctd = !!(cmd[2] & 0x80);
reporting_opts = cmd[2] & 0x7;
@ -2481,34 +2526,30 @@ static int resp_rsup_opcodes(struct scsi_cmnd *scp,
}
switch (reporting_opts) {
case 0: /* all commands */
/* count number of commands */
for (count = 0, oip = opcode_info_arr;
oip->num_attached != 0xff; ++oip) {
if (F_INV_OP & oip->flags)
continue;
count += (oip->num_attached + 1);
}
bump = rctd ? 20 : 8;
put_unaligned_be32(count * bump, arr);
for (offset = 4, oip = opcode_info_arr;
oip->num_attached != 0xff && offset < a_len; ++oip) {
if (F_INV_OP & oip->flags)
continue;
if ((devsel & oip->devsel) != 0) {
arr[offset] = oip->opcode;
put_unaligned_be16(oip->sa, arr + offset + 2);
if (rctd)
arr[offset + 5] |= 0x2;
if (FF_SA & oip->flags)
arr[offset + 5] |= 0x1;
put_unaligned_be16(oip->len_mask[0], arr + offset + 6);
if (rctd)
put_unaligned_be16(0xa, arr + offset + 8);
offset += bump;
}
na = oip->num_attached;
arr[offset] = oip->opcode;
put_unaligned_be16(oip->sa, arr + offset + 2);
if (rctd)
arr[offset + 5] |= 0x2;
if (FF_SA & oip->flags)
arr[offset + 5] |= 0x1;
put_unaligned_be16(oip->len_mask[0], arr + offset + 6);
if (rctd)
put_unaligned_be16(0xa, arr + offset + 8);
r_oip = oip;
for (k = 0, oip = oip->arrp; k < na; ++k, ++oip) {
if (F_INV_OP & oip->flags)
continue;
offset += bump;
if ((devsel & oip->devsel) == 0)
continue;
arr[offset] = oip->opcode;
put_unaligned_be16(oip->sa, arr + offset + 2);
if (rctd)
@ -2516,14 +2557,15 @@ static int resp_rsup_opcodes(struct scsi_cmnd *scp,
if (FF_SA & oip->flags)
arr[offset + 5] |= 0x1;
put_unaligned_be16(oip->len_mask[0],
arr + offset + 6);
arr + offset + 6);
if (rctd)
put_unaligned_be16(0xa,
arr + offset + 8);
offset += bump;
}
oip = r_oip;
offset += bump;
}
put_unaligned_be32(offset - 4, arr);
break;
case 1: /* one command: opcode only */
case 2: /* one command: opcode plus service action */
@ -2549,13 +2591,15 @@ static int resp_rsup_opcodes(struct scsi_cmnd *scp,
return check_condition_result;
}
if (0 == (FF_SA & oip->flags) &&
req_opcode == oip->opcode)
(devsel & oip->devsel) != 0 &&
req_opcode == oip->opcode)
supp = 3;
else if (0 == (FF_SA & oip->flags)) {
na = oip->num_attached;
for (k = 0, oip = oip->arrp; k < na;
++k, ++oip) {
if (req_opcode == oip->opcode)
if (req_opcode == oip->opcode &&
(devsel & oip->devsel) != 0)
break;
}
supp = (k >= na) ? 1 : 3;
@ -2563,7 +2607,8 @@ static int resp_rsup_opcodes(struct scsi_cmnd *scp,
na = oip->num_attached;
for (k = 0, oip = oip->arrp; k < na;
++k, ++oip) {
if (req_sa == oip->sa)
if (req_sa == oip->sa &&
(devsel & oip->devsel) != 0)
break;
}
supp = (k >= na) ? 1 : 3;
@ -2914,9 +2959,9 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
subpcode = cmd[3];
msense_6 = (MODE_SENSE == cmd[0]);
llbaa = msense_6 ? false : !!(cmd[1] & 0x10);
is_disk = (sdebug_ptype == TYPE_DISK);
is_disk = (scp->device->type == TYPE_DISK);
is_zbc = devip->zoned;
is_tape = (sdebug_ptype == TYPE_TAPE);
is_tape = (scp->device->type == TYPE_TAPE);
if ((is_disk || is_zbc || is_tape) && !dbd)
bd_len = llbaa ? 16 : 8;
else
@ -3131,7 +3176,7 @@ static int resp_mode_select(struct scsi_cmnd *scp,
md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2);
bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6);
off = (mselect6 ? 4 : 8);
if (sdebug_ptype == TYPE_TAPE) {
if (scp->device->type == TYPE_TAPE) {
int blksize;
if (bd_len != 8) {
@ -3196,7 +3241,7 @@ static int resp_mode_select(struct scsi_cmnd *scp,
}
break;
case 0xf: /* Compression mode page */
if (sdebug_ptype != TYPE_TAPE)
if (scp->device->type != TYPE_TAPE)
goto bad_pcode;
if ((arr[off + 2] & 0x40) != 0) {
devip->tape_dce = (arr[off + 2] & 0x80) != 0;
@ -3204,7 +3249,7 @@ static int resp_mode_select(struct scsi_cmnd *scp,
}
break;
case 0x11: /* Medium Partition Mode Page (tape) */
if (sdebug_ptype == TYPE_TAPE) {
if (scp->device->type == TYPE_TAPE) {
int fld;
fld = process_medium_part_m_pg(devip, &arr[off], pg_len);
@ -3563,6 +3608,30 @@ is_eop:
return check_condition_result;
}
enum {SDEBUG_READ_POSITION_ARR_SZ = 20};
static int resp_read_position(struct scsi_cmnd *scp,
struct sdebug_dev_info *devip)
{
u8 *cmd = scp->cmnd;
int all_length;
unsigned char arr[20];
unsigned int pos;
all_length = get_unaligned_be16(cmd + 7);
if ((cmd[1] & 0xfe) != 0 ||
all_length != 0) { /* only short form */
mk_sense_invalid_fld(scp, SDEB_IN_CDB,
all_length ? 7 : 1, 0);
return check_condition_result;
}
memset(arr, 0, SDEBUG_READ_POSITION_ARR_SZ);
arr[1] = devip->tape_partition;
pos = devip->tape_location[devip->tape_partition];
put_unaligned_be32(pos, arr + 4);
put_unaligned_be32(pos, arr + 8);
return fill_from_dev_buffer(scp, arr, SDEBUG_READ_POSITION_ARR_SZ);
}
static int resp_rewind(struct scsi_cmnd *scp,
struct sdebug_dev_info *devip)
{
@ -3604,10 +3673,6 @@ static int resp_format_medium(struct scsi_cmnd *scp,
int res = 0;
unsigned char *cmd = scp->cmnd;
if (sdebug_ptype != TYPE_TAPE) {
mk_sense_invalid_fld(scp, SDEB_IN_CDB, 0, -1);
return check_condition_result;
}
if (cmd[2] > 2) {
mk_sense_invalid_fld(scp, SDEB_IN_DATA, 2, -1);
return check_condition_result;
@ -3631,6 +3696,19 @@ static int resp_format_medium(struct scsi_cmnd *scp,
return 0;
}
static int resp_erase(struct scsi_cmnd *scp,
struct sdebug_dev_info *devip)
{
int partition = devip->tape_partition;
int pos = devip->tape_location[partition];
struct tape_block *blp;
blp = devip->tape_blocks[partition] + pos;
blp->fl_size = TAPE_BLOCK_EOD_FLAG;
return 0;
}
static inline bool sdebug_dev_is_zoned(struct sdebug_dev_info *devip)
{
return devip->nr_zones != 0;
@ -4467,9 +4545,6 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
u8 *cmd = scp->cmnd;
bool meta_data_locked = false;
if (sdebug_ptype == TYPE_TAPE)
return resp_read_tape(scp, devip);
switch (cmd[0]) {
case READ_16:
ei_lba = 0;
@ -4839,9 +4914,6 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
u8 *cmd = scp->cmnd;
bool meta_data_locked = false;
if (sdebug_ptype == TYPE_TAPE)
return resp_write_tape(scp, devip);
switch (cmd[0]) {
case WRITE_16:
ei_lba = 0;
@ -5573,7 +5645,6 @@ static int resp_sync_cache(struct scsi_cmnd *scp,
*
* The pcode 0x34 is also used for READ POSITION by tape devices.
*/
enum {SDEBUG_READ_POSITION_ARR_SZ = 20};
static int resp_pre_fetch(struct scsi_cmnd *scp,
struct sdebug_dev_info *devip)
{
@ -5585,31 +5656,6 @@ static int resp_pre_fetch(struct scsi_cmnd *scp,
struct sdeb_store_info *sip = devip2sip(devip, true);
u8 *fsp = sip->storep;
if (sdebug_ptype == TYPE_TAPE) {
if (cmd[0] == PRE_FETCH) { /* READ POSITION (10) */
int all_length;
unsigned char arr[20];
unsigned int pos;
all_length = get_unaligned_be16(cmd + 7);
if ((cmd[1] & 0xfe) != 0 ||
all_length != 0) { /* only short form */
mk_sense_invalid_fld(scp, SDEB_IN_CDB,
all_length ? 7 : 1, 0);
return check_condition_result;
}
memset(arr, 0, SDEBUG_READ_POSITION_ARR_SZ);
arr[1] = devip->tape_partition;
pos = devip->tape_location[devip->tape_partition];
put_unaligned_be32(pos, arr + 4);
put_unaligned_be32(pos, arr + 8);
return fill_from_dev_buffer(scp, arr,
SDEBUG_READ_POSITION_ARR_SZ);
}
mk_sense_invalid_opcode(scp);
return check_condition_result;
}
if (cmd[0] == PRE_FETCH) { /* 10 byte cdb */
lba = get_unaligned_be32(cmd + 2);
nblks = get_unaligned_be16(cmd + 7);
@ -6645,7 +6691,7 @@ static void scsi_debug_sdev_destroy(struct scsi_device *sdp)
debugfs_remove(devip->debugfs_entry);
if (sdp->type == TYPE_TAPE) {
kfree(devip->tape_blocks[0]);
devip->tape_blocks[0] = NULL;
}
@ -6833,18 +6879,16 @@ static int sdebug_fail_lun_reset(struct scsi_cmnd *cmnd)
static void scsi_tape_reset_clear(struct sdebug_dev_info *devip)
{
int i;
devip->tape_blksize = TAPE_DEF_BLKSIZE;
devip->tape_density = TAPE_DEF_DENSITY;
devip->tape_partition = 0;
devip->tape_dce = 0;
for (i = 0; i < TAPE_MAX_PARTITIONS; i++)
devip->tape_location[i] = 0;
devip->tape_pending_nbr_partitions = -1;
/* Don't reset partitioning? */
}
static int scsi_debug_device_reset(struct scsi_cmnd *SCpnt)
@ -6862,7 +6906,8 @@ static int scsi_debug_device_reset(struct scsi_cmnd *SCpnt)
scsi_debug_stop_all_queued(sdp);
if (devip) {
set_bit(SDEBUG_UA_POR, devip->uas_bm);
if (SCpnt->device->type == TYPE_TAPE)
scsi_tape_reset_clear(devip);
}
if (sdebug_fail_lun_reset(SCpnt)) {
@ -6901,7 +6946,8 @@ static int scsi_debug_target_reset(struct scsi_cmnd *SCpnt)
list_for_each_entry(devip, &sdbg_host->dev_info_list, dev_list) {
if (devip->target == sdp->id) {
set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm);
if (SCpnt->device->type == TYPE_TAPE)
scsi_tape_reset_clear(devip);
++k;
}
}
@ -6933,7 +6979,8 @@ static int scsi_debug_bus_reset(struct scsi_cmnd *SCpnt)
list_for_each_entry(devip, &sdbg_host->dev_info_list, dev_list) {
set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm);
if (SCpnt->device->type == TYPE_TAPE)
scsi_tape_reset_clear(devip);
++k;
}
@ -6957,7 +7004,8 @@ static int scsi_debug_host_reset(struct scsi_cmnd *SCpnt)
list_for_each_entry(devip, &sdbg_host->dev_info_list,
dev_list) {
set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm);
if (SCpnt->device->type == TYPE_TAPE)
scsi_tape_reset_clear(devip);
++k;
}
}
@ -9173,6 +9221,7 @@ static int scsi_debug_queuecommand(struct Scsi_Host *shost,
u32 flags;
u16 sa;
u8 opcode = cmd[0];
u32 devsel = sdebug_get_devsel(scp->device);
bool has_wlun_rl;
bool inject_now;
int ret = 0;
@ -9252,12 +9301,14 @@ static int scsi_debug_queuecommand(struct Scsi_Host *shost,
else
sa = get_unaligned_be16(cmd + 8);
for (k = 0; k <= na; oip = r_oip->arrp + k++) {
if (opcode == oip->opcode && sa == oip->sa &&
(devsel & oip->devsel) != 0)
break;
}
} else { /* since no service action only check opcode */
for (k = 0; k <= na; oip = r_oip->arrp + k++) {
if (opcode == oip->opcode &&
(devsel & oip->devsel) != 0)
break;
}
}
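The devsel checks added throughout this file reduce to one rule: an opcode table entry matches only if the issuing device's type bit is set in the entry's devsel mask. A minimal sketch of that rule (the constants and the entry layout are illustrative assumptions, not the scsi_debug definitions):
#include <stdbool.h>
#include <stdint.h>
#define DS_DISK (1u << 0)	/* hypothetical device-type bits */
#define DS_TAPE (1u << 1)
struct opcode_entry {
	uint8_t opcode;
	uint32_t devsel;	/* device types this opcode serves */
};
static bool entry_matches(const struct opcode_entry *e,
			  uint8_t opcode, uint32_t devsel)
{
	return e->opcode == opcode && (devsel & e->devsel) != 0;
}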

View File

@ -485,33 +485,6 @@ static struct scsi_dev_info_list *scsi_dev_info_list_find(const char *vendor,
return ERR_PTR(-ENOENT);
}
/**
* scsi_dev_info_list_del_keyed - remove one dev_info list entry.
* @vendor: vendor string
* @model: model (product) string
* @key: specify list to use
*
* Description:
* Remove and destroy one dev_info entry for @vendor, @model
* in list specified by @key.
*
* Returns: 0 OK, -error on failure.
**/
int scsi_dev_info_list_del_keyed(char *vendor, char *model,
enum scsi_devinfo_key key)
{
struct scsi_dev_info_list *found;
found = scsi_dev_info_list_find(vendor, model, key);
if (IS_ERR(found))
return PTR_ERR(found);
list_del(&found->dev_info_list);
kfree(found);
return 0;
}
EXPORT_SYMBOL(scsi_dev_info_list_del_keyed);
/**
* scsi_dev_info_list_add_str - parse dev_list and add to the scsi_dev_info_list.
* @dev_list: string of device flags to add

View File

@ -79,8 +79,6 @@ extern int scsi_dev_info_list_add_keyed(int compatible, char *vendor,
char *model, char *strflags,
blist_flags_t flags,
enum scsi_devinfo_key key);
extern int scsi_dev_info_list_del_keyed(char *vendor, char *model,
enum scsi_devinfo_key key);
extern int scsi_dev_info_add_list(enum scsi_devinfo_key key, const char *name);
extern int scsi_dev_info_remove_list(enum scsi_devinfo_key key);

View File

@ -3509,7 +3509,7 @@ fc_remote_port_rolechg(struct fc_rport *rport, u32 roles)
* state as the LLDD would not have had an rport
* reference to pass us.
*
* Take no action on the timer_delete() failure as the state
* machine state change will validate the
* transaction.
*/

View File

@ -3215,7 +3215,7 @@ static bool sd_is_perm_stream(struct scsi_disk *sdkp, unsigned int stream_id)
return false;
if (get_unaligned_be32(&buf.h.len) < sizeof(struct scsi_stream_status))
return false;
return buf.s.perm;
}
static void sd_read_io_hints(struct scsi_disk *sdkp, unsigned char *buffer)

View File

@ -1658,8 +1658,7 @@ static void register_sg_sysctls(void)
static void unregister_sg_sysctls(void)
{
unregister_sysctl_table(hdr);
}
#else
#define register_sg_sysctls() do { } while (0)

View File

@ -33,11 +33,11 @@
#define BUILD_TIMESTAMP
#endif
#define DRIVER_VERSION "2.1.30-031"
#define DRIVER_VERSION "2.1.34-035"
#define DRIVER_MAJOR 2
#define DRIVER_MINOR 1
#define DRIVER_RELEASE 30
#define DRIVER_REVISION 31
#define DRIVER_RELEASE 34
#define DRIVER_REVISION 35
#define DRIVER_NAME "Microchip SmartPQI Driver (v" \
DRIVER_VERSION BUILD_TIMESTAMP ")"
@ -68,6 +68,7 @@ static struct pqi_cmd_priv *pqi_cmd_priv(struct scsi_cmnd *cmd)
static void pqi_verify_structures(void);
static void pqi_take_ctrl_offline(struct pqi_ctrl_info *ctrl_info,
enum pqi_ctrl_shutdown_reason ctrl_shutdown_reason);
static void pqi_take_ctrl_devices_offline(struct pqi_ctrl_info *ctrl_info);
static void pqi_ctrl_offline_worker(struct work_struct *work);
static int pqi_scan_scsi_devices(struct pqi_ctrl_info *ctrl_info);
static void pqi_scan_start(struct Scsi_Host *shost);
@ -2011,18 +2012,31 @@ static void pqi_dev_info(struct pqi_ctrl_info *ctrl_info,
PQI_DEV_INFO_BUFFER_LENGTH - count,
"-:-");
if (pqi_is_logical_device(device)) {
count += scnprintf(buffer + count,
PQI_DEV_INFO_BUFFER_LENGTH - count,
" %08x%08x",
*((u32 *)&device->scsi3addr),
*((u32 *)&device->scsi3addr[4]));
} else if (ctrl_info->rpl_extended_format_4_5_supported) {
if (device->device_type == SA_DEVICE_TYPE_NVME)
count += scnprintf(buffer + count,
PQI_DEV_INFO_BUFFER_LENGTH - count,
" %016llx%016llx",
get_unaligned_be64(&device->wwid[0]),
get_unaligned_be64(&device->wwid[8]));
else
count += scnprintf(buffer + count,
PQI_DEV_INFO_BUFFER_LENGTH - count,
" %016llx",
get_unaligned_be64(&device->wwid[0]));
} else {
count += scnprintf(buffer + count,
PQI_DEV_INFO_BUFFER_LENGTH - count,
" %016llx",
get_unaligned_be64(&device->wwid[0]));
}
count += scnprintf(buffer + count, PQI_DEV_INFO_BUFFER_LENGTH - count,
" %s %.8s %.16s ",
@ -5990,7 +6004,7 @@ static bool pqi_is_parity_write_stream(struct pqi_ctrl_info *ctrl_info,
pqi_stream_data->next_lba = rmd.first_block +
rmd.block_cnt;
pqi_stream_data->last_accessed = jiffies;
per_cpu_ptr(device->raid_io_stats, raw_smp_processor_id())->write_stream_cnt++;
return true;
}
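Both counters move from smp_processor_id() to raw_smp_processor_id() because the increments can run in preemptible context, where the checked variant triggers a CONFIG_DEBUG_PREEMPT warning; for a statistic, an increment occasionally landing on a neighbouring CPU's counter is acceptable. The pattern in isolation (struct and function names illustrative):
#include <linux/percpu.h>
#include <linux/smp.h>
struct demo_io_stats {
	u64 write_stream_cnt;
};
static void bump_write_stream(struct demo_io_stats __percpu *stats)
{
	/* no preemption guard needed: a stale CPU id only skews stats */
	per_cpu_ptr(stats, raw_smp_processor_id())->write_stream_cnt++;
}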
@ -6069,7 +6083,7 @@ static int pqi_scsi_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scm
rc = pqi_raid_bypass_submit_scsi_cmd(ctrl_info, device, scmd, queue_group);
if (rc == 0 || rc == SCSI_MLQUEUE_HOST_BUSY) {
raid_bypassed = true;
per_cpu_ptr(device->raid_io_stats, raw_smp_processor_id())->raid_bypass_cnt++;
}
}
if (!raid_bypassed)
@ -9129,6 +9143,7 @@ static void pqi_take_ctrl_offline_deferred(struct pqi_ctrl_info *ctrl_info)
pqi_ctrl_wait_until_quiesced(ctrl_info);
pqi_fail_all_outstanding_requests(ctrl_info);
pqi_ctrl_unblock_requests(ctrl_info);
pqi_take_ctrl_devices_offline(ctrl_info);
}
static void pqi_ctrl_offline_worker(struct work_struct *work)
@ -9203,6 +9218,27 @@ static void pqi_take_ctrl_offline(struct pqi_ctrl_info *ctrl_info,
schedule_work(&ctrl_info->ctrl_offline_work);
}
static void pqi_take_ctrl_devices_offline(struct pqi_ctrl_info *ctrl_info)
{
int rc;
unsigned long flags;
struct pqi_scsi_dev *device;
spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
list_for_each_entry(device, &ctrl_info->scsi_device_list, scsi_device_list_entry) {
rc = list_is_last(&device->scsi_device_list_entry, &ctrl_info->scsi_device_list);
if (rc)
continue;
/*
* Is the sdev pointer NULL?
*/
if (device->sdev)
scsi_device_set_state(device->sdev, SDEV_OFFLINE);
}
spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
}
static void pqi_print_ctrl_info(struct pci_dev *pci_dev,
const struct pci_device_id *id)
{
@ -9709,6 +9745,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1bd4, 0x0089)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1bd4, 0x00a3)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1ff9, 0x00a1)
@ -10045,6 +10085,30 @@ static const struct pci_device_id pqi_pci_id_table[] = {
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_ADAPTEC2, 0x14f0)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x207d, 0x4044)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x207d, 0x4054)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x207d, 0x4084)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x207d, 0x4094)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x207d, 0x4140)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x207d, 0x4240)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_ADVANTECH, 0x8312)
@ -10261,6 +10325,14 @@ static const struct pci_device_id pqi_pci_id_table[] = {
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1cc4, 0x0201)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1018, 0x8238)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1f3f, 0x0610)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0220)
@ -10269,10 +10341,30 @@ static const struct pci_device_id pqi_pci_id_table[] = {
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0221)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0222)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0223)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0224)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0225)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0520)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0521)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0522)
@ -10293,6 +10385,26 @@ static const struct pci_device_id pqi_pci_id_table[] = {
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0623)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0624)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0625)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0626)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0627)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_LENOVO, 0x0628)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1014, 0x0718)
@ -10321,6 +10433,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1137, 0x0300)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1ded, 0x3301)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1ff9, 0x0045)
@ -10469,6 +10585,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1f51, 0x100a)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1f51, 0x100b)
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1f51, 0x100e)

View File

@ -21,20 +21,63 @@
#include <soc/qcom/ice.h>
#define AES_256_XTS_KEY_SIZE 64 /* for raw keys only */
#define QCOM_ICE_HWKM_WRAPPED_KEY_SIZE 100 /* assuming HWKM v2 */
/* QCOM ICE registers */
#define QCOM_ICE_REG_CONTROL 0x0000
#define QCOM_ICE_LEGACY_MODE_ENABLED BIT(0)
#define QCOM_ICE_REG_VERSION 0x0008
#define QCOM_ICE_REG_FUSE_SETTING 0x0010
#define QCOM_ICE_FUSE_SETTING_MASK BIT(0)
#define QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK BIT(1)
#define QCOM_ICE_FORCE_HW_KEY1_SETTING_MASK BIT(2)
#define QCOM_ICE_REG_BIST_STATUS 0x0070
#define QCOM_ICE_BIST_STATUS_MASK GENMASK(31, 28)
#define QCOM_ICE_REG_ADVANCED_CONTROL 0x1000
#define QCOM_ICE_REG_CRYPTOCFG_BASE 0x4040
#define QCOM_ICE_REG_CRYPTOCFG_SIZE 0x80
#define QCOM_ICE_REG_CRYPTOCFG(slot) (QCOM_ICE_REG_CRYPTOCFG_BASE + \
QCOM_ICE_REG_CRYPTOCFG_SIZE * (slot))
union crypto_cfg {
__le32 regval;
struct {
u8 dusize;
u8 capidx;
u8 reserved;
#define QCOM_ICE_HWKM_CFG_ENABLE_VAL BIT(7)
u8 cfge;
};
};
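The union lets the driver compose the per-slot crypto configuration byte by byte and issue it as one 32-bit register write; roughly (the 4096-byte data unit size below is an assumed example, not a fixed value):
union crypto_cfg cfg = {
	.dusize = 4096 / 512,	/* data unit size in 512-byte units */
	.capidx = QCOM_SCM_ICE_CIPHER_AES_256_XTS,
	.cfge = QCOM_ICE_HWKM_CFG_ENABLE_VAL,	/* wrapped-key slot */
};
/* qcom_ice_writel(ice, le32_to_cpu(cfg.regval), QCOM_ICE_REG_CRYPTOCFG(slot)); */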
/* QCOM ICE HWKM (Hardware Key Manager) registers */
#define HWKM_OFFSET 0x8000
#define QCOM_ICE_REG_HWKM_TZ_KM_CTL (HWKM_OFFSET + 0x1000)
#define QCOM_ICE_HWKM_DISABLE_CRC_CHECKS_VAL (BIT(1) | BIT(2))
#define QCOM_ICE_REG_HWKM_TZ_KM_STATUS (HWKM_OFFSET + 0x1004)
#define QCOM_ICE_HWKM_KT_CLEAR_DONE BIT(0)
#define QCOM_ICE_HWKM_BOOT_CMD_LIST0_DONE BIT(1)
#define QCOM_ICE_HWKM_BOOT_CMD_LIST1_DONE BIT(2)
#define QCOM_ICE_HWKM_CRYPTO_BIST_DONE_V2 BIT(7)
#define QCOM_ICE_HWKM_BIST_DONE_V2 BIT(9)
#define QCOM_ICE_REG_HWKM_BANK0_BANKN_IRQ_STATUS (HWKM_OFFSET + 0x2008)
#define QCOM_ICE_HWKM_RSP_FIFO_CLEAR_VAL BIT(3)
#define QCOM_ICE_REG_HWKM_BANK0_BBAC_0 (HWKM_OFFSET + 0x5000)
#define QCOM_ICE_REG_HWKM_BANK0_BBAC_1 (HWKM_OFFSET + 0x5004)
#define QCOM_ICE_REG_HWKM_BANK0_BBAC_2 (HWKM_OFFSET + 0x5008)
#define QCOM_ICE_REG_HWKM_BANK0_BBAC_3 (HWKM_OFFSET + 0x500C)
#define QCOM_ICE_REG_HWKM_BANK0_BBAC_4 (HWKM_OFFSET + 0x5010)
#define qcom_ice_writel(engine, val, reg) \
writel((val), (engine)->base + (reg))
@ -42,11 +85,18 @@
#define qcom_ice_readl(engine, reg) \
readl((engine)->base + (reg))
static bool qcom_ice_use_wrapped_keys;
module_param_named(use_wrapped_keys, qcom_ice_use_wrapped_keys, bool, 0660);
MODULE_PARM_DESC(use_wrapped_keys,
"Support wrapped keys instead of raw keys, if available on the platform");
struct qcom_ice {
struct device *dev;
void __iomem *base;
struct clk *core_clk;
bool use_hwkm;
bool hwkm_init_complete;
};
static bool qcom_ice_check_supported(struct qcom_ice *ice)
@ -76,6 +126,35 @@ static bool qcom_ice_check_supported(struct qcom_ice *ice)
return false;
}
/*
* Check for HWKM support and decide whether to use it or not. ICE
* v3.2.1 and later have HWKM v2. ICE v3.2.0 has HWKM v1. Earlier ICE
* versions don't have HWKM at all. However, for HWKM to be fully
* usable by Linux, the TrustZone software also needs to support certain
* SCM calls including the ones to generate and prepare keys. That
* effectively makes the earliest supported SoC be SM8650, which has
* HWKM v2. Therefore, this driver doesn't include support for HWKM v1,
* and it checks for the SCM call support before it decides to use HWKM.
*
* Also, since HWKM and legacy mode are mutually exclusive, and
* ICE-capable storage driver(s) need to know early on whether to
* advertise support for raw keys or wrapped keys, HWKM cannot be used
* unconditionally. A module parameter is used to opt into using it.
*/
if ((major >= 4 ||
(major == 3 && (minor >= 3 || (minor == 2 && step >= 1)))) &&
qcom_scm_has_wrapped_key_support()) {
if (qcom_ice_use_wrapped_keys) {
dev_info(dev, "Using HWKM. Supporting wrapped keys only.\n");
ice->use_hwkm = true;
} else {
dev_info(dev, "Not using HWKM. Supporting raw keys only.\n");
}
} else if (qcom_ice_use_wrapped_keys) {
dev_warn(dev, "A supported HWKM is not present. Ignoring qcom_ice.use_wrapped_keys=1.\n");
} else {
dev_info(dev, "A supported HWKM is not present. Supporting raw keys only.\n");
}
return true;
}
@ -123,17 +202,71 @@ static int qcom_ice_wait_bist_status(struct qcom_ice *ice)
err = readl_poll_timeout(ice->base + QCOM_ICE_REG_BIST_STATUS,
regval, !(regval & QCOM_ICE_BIST_STATUS_MASK),
50, 5000);
if (err) {
dev_err(ice->dev, "Timed out waiting for ICE self-test to complete\n");
return err;
}
if (ice->use_hwkm &&
qcom_ice_readl(ice, QCOM_ICE_REG_HWKM_TZ_KM_STATUS) !=
(QCOM_ICE_HWKM_KT_CLEAR_DONE |
QCOM_ICE_HWKM_BOOT_CMD_LIST0_DONE |
QCOM_ICE_HWKM_BOOT_CMD_LIST1_DONE |
QCOM_ICE_HWKM_CRYPTO_BIST_DONE_V2 |
QCOM_ICE_HWKM_BIST_DONE_V2)) {
dev_err(ice->dev, "HWKM self-test error!\n");
/*
* Too late to revoke use_hwkm here, as it was already
* propagated up the stack into the crypto capabilities.
*/
}
return 0;
}
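readl_poll_timeout() does the register polling here: it re-reads the BIST status every 50us and gives up with -ETIMEDOUT after 5ms. Condensed to its essentials, mirroring the defines earlier in this file (the helper name is illustrative):
#include <linux/iopoll.h>
static int wait_bist_clear(void __iomem *base)
{
	u32 regval;

	/* poll every 50us, time out after 5000us */
	return readl_poll_timeout(base + QCOM_ICE_REG_BIST_STATUS, regval,
				  !(regval & QCOM_ICE_BIST_STATUS_MASK),
				  50, 5000);
}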
static void qcom_ice_hwkm_init(struct qcom_ice *ice)
{
u32 regval;
if (!ice->use_hwkm)
return;
BUILD_BUG_ON(QCOM_ICE_HWKM_WRAPPED_KEY_SIZE >
BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE);
/*
* When ICE is in HWKM mode, it only supports wrapped keys.
* When ICE is in legacy mode, it only supports raw keys.
*
* Put ICE in HWKM mode. ICE defaults to legacy mode.
*/
regval = qcom_ice_readl(ice, QCOM_ICE_REG_CONTROL);
regval &= ~QCOM_ICE_LEGACY_MODE_ENABLED;
qcom_ice_writel(ice, regval, QCOM_ICE_REG_CONTROL);
/* Disable CRC checks. This HWKM feature is not used. */
qcom_ice_writel(ice, QCOM_ICE_HWKM_DISABLE_CRC_CHECKS_VAL,
QCOM_ICE_REG_HWKM_TZ_KM_CTL);
/*
* Allow the HWKM slave to read and write the keyslots in the ICE HWKM
* slave. Without this, TrustZone cannot program keys into ICE.
*/
qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_0);
qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_1);
qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_2);
qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_3);
qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_4);
/* Clear the HWKM response FIFO. */
qcom_ice_writel(ice, QCOM_ICE_HWKM_RSP_FIFO_CLEAR_VAL,
QCOM_ICE_REG_HWKM_BANK0_BANKN_IRQ_STATUS);
ice->hwkm_init_complete = true;
}
int qcom_ice_enable(struct qcom_ice *ice)
{
qcom_ice_low_power_mode_enable(ice);
qcom_ice_optimization_enable(ice);
qcom_ice_hwkm_init(ice);
return qcom_ice_wait_bist_status(ice);
}
EXPORT_SYMBOL_GPL(qcom_ice_enable);
@ -149,7 +282,7 @@ int qcom_ice_resume(struct qcom_ice *ice)
err);
return err;
}
qcom_ice_hwkm_init(ice);
return qcom_ice_wait_bist_status(ice);
}
EXPORT_SYMBOL_GPL(qcom_ice_resume);
@ -157,15 +290,58 @@ EXPORT_SYMBOL_GPL(qcom_ice_resume);
int qcom_ice_suspend(struct qcom_ice *ice)
{
clk_disable_unprepare(ice->core_clk);
ice->hwkm_init_complete = false;
return 0;
}
EXPORT_SYMBOL_GPL(qcom_ice_suspend);
static unsigned int translate_hwkm_slot(struct qcom_ice *ice, unsigned int slot)
{
return slot * 2;
}
static int qcom_ice_program_wrapped_key(struct qcom_ice *ice, unsigned int slot,
const struct blk_crypto_key *bkey)
{
struct device *dev = ice->dev;
union crypto_cfg cfg = {
.dusize = bkey->crypto_cfg.data_unit_size / 512,
.capidx = QCOM_SCM_ICE_CIPHER_AES_256_XTS,
.cfge = QCOM_ICE_HWKM_CFG_ENABLE_VAL,
};
int err;
if (!ice->use_hwkm) {
dev_err_ratelimited(dev, "Got wrapped key when not using HWKM\n");
return -EINVAL;
}
if (!ice->hwkm_init_complete) {
dev_err_ratelimited(dev, "HWKM not yet initialized\n");
return -EINVAL;
}
/* Clear CFGE before programming the key. */
qcom_ice_writel(ice, 0x0, QCOM_ICE_REG_CRYPTOCFG(slot));
/* Call into TrustZone to program the wrapped key using HWKM. */
err = qcom_scm_ice_set_key(translate_hwkm_slot(ice, slot), bkey->bytes,
bkey->size, cfg.capidx, cfg.dusize);
if (err) {
dev_err_ratelimited(dev,
"qcom_scm_ice_set_key failed; err=%d, slot=%u\n",
err, slot);
return err;
}
/* Set CFGE after programming the key. */
qcom_ice_writel(ice, le32_to_cpu(cfg.regval),
QCOM_ICE_REG_CRYPTOCFG(slot));
return 0;
}
int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
const struct blk_crypto_key *blk_key)
{
struct device *dev = ice->dev;
union {
@ -176,15 +352,26 @@ int qcom_ice_program_key(struct qcom_ice *ice,
int err;
/* Only AES-256-XTS has been tested so far. */
if (blk_key->crypto_cfg.crypto_mode !=
BLK_ENCRYPTION_MODE_AES_256_XTS) {
dev_err_ratelimited(dev, "Unsupported crypto mode: %d\n",
blk_key->crypto_cfg.crypto_mode);
return -EINVAL;
}
if (blk_key->crypto_cfg.key_type == BLK_CRYPTO_KEY_TYPE_HW_WRAPPED)
return qcom_ice_program_wrapped_key(ice, slot, blk_key);
if (ice->use_hwkm) {
dev_err_ratelimited(dev, "Got raw key when using HWKM\n");
return -EINVAL;
}
if (blk_key->size != AES_256_XTS_KEY_SIZE) {
dev_err_ratelimited(dev, "Incorrect key size\n");
return -EINVAL;
}
memcpy(key.bytes, blk_key->bytes, AES_256_XTS_KEY_SIZE);
/* The SCM call requires that the key words are encoded in big endian */
for (i = 0; i < ARRAY_SIZE(key.words); i++)
@ -192,7 +379,7 @@ int qcom_ice_program_key(struct qcom_ice *ice,
err = qcom_scm_ice_set_key(slot, key.bytes, AES_256_XTS_KEY_SIZE,
QCOM_SCM_ICE_CIPHER_AES_256_XTS,
blk_key->crypto_cfg.data_unit_size / 512);
memzero_explicit(&key, sizeof(key));
@ -202,10 +389,131 @@ EXPORT_SYMBOL_GPL(qcom_ice_program_key);
int qcom_ice_evict_key(struct qcom_ice *ice, int slot)
{
if (ice->hwkm_init_complete)
slot = translate_hwkm_slot(ice, slot);
return qcom_scm_ice_invalidate_key(slot);
}
EXPORT_SYMBOL_GPL(qcom_ice_evict_key);
/**
* qcom_ice_get_supported_key_type() - Get the supported key type
* @ice: ICE driver data
*
* Return: the blk-crypto key type that the ICE driver is configured to use.
* This is the key type that ICE-capable storage drivers should advertise as
* supported in the crypto capabilities of any disks they register.
*/
enum blk_crypto_key_type qcom_ice_get_supported_key_type(struct qcom_ice *ice)
{
if (ice->use_hwkm)
return BLK_CRYPTO_KEY_TYPE_HW_WRAPPED;
return BLK_CRYPTO_KEY_TYPE_RAW;
}
EXPORT_SYMBOL_GPL(qcom_ice_get_supported_key_type);
/**
* qcom_ice_derive_sw_secret() - Derive software secret from wrapped key
* @ice: ICE driver data
* @eph_key: an ephemerally-wrapped key
* @eph_key_size: size of @eph_key in bytes
* @sw_secret: output buffer for the software secret
*
* Use HWKM to derive the "software secret" from a hardware-wrapped key that is
* given in ephemerally-wrapped form.
*
* Return: 0 on success; -EBADMSG if the given ephemerally-wrapped key is
* invalid; or another -errno value.
*/
int qcom_ice_derive_sw_secret(struct qcom_ice *ice,
const u8 *eph_key, size_t eph_key_size,
u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
{
int err = qcom_scm_derive_sw_secret(eph_key, eph_key_size,
sw_secret,
BLK_CRYPTO_SW_SECRET_SIZE);
if (err == -EIO || err == -EINVAL)
err = -EBADMSG; /* probably invalid key */
return err;
}
EXPORT_SYMBOL_GPL(qcom_ice_derive_sw_secret);
/**
* qcom_ice_generate_key() - Generate a wrapped key for inline encryption
* @ice: ICE driver data
* @lt_key: output buffer for the long-term wrapped key
*
* Use HWKM to generate a new key and return it as a long-term wrapped key.
*
* Return: the size of the resulting wrapped key on success; -errno on failure.
*/
int qcom_ice_generate_key(struct qcom_ice *ice,
u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
{
int err;
err = qcom_scm_generate_ice_key(lt_key, QCOM_ICE_HWKM_WRAPPED_KEY_SIZE);
if (err)
return err;
return QCOM_ICE_HWKM_WRAPPED_KEY_SIZE;
}
EXPORT_SYMBOL_GPL(qcom_ice_generate_key);
/**
* qcom_ice_prepare_key() - Prepare a wrapped key for inline encryption
* @ice: ICE driver data
* @lt_key: a long-term wrapped key
* @lt_key_size: size of @lt_key in bytes
* @eph_key: output buffer for the ephemerally-wrapped key
*
* Use HWKM to re-wrap a long-term wrapped key with the per-boot ephemeral key.
*
* Return: the size of the resulting wrapped key on success; -EBADMSG if the
* given long-term wrapped key is invalid; or another -errno value.
*/
int qcom_ice_prepare_key(struct qcom_ice *ice,
const u8 *lt_key, size_t lt_key_size,
u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
{
int err;
err = qcom_scm_prepare_ice_key(lt_key, lt_key_size,
eph_key, QCOM_ICE_HWKM_WRAPPED_KEY_SIZE);
if (err == -EIO || err == -EINVAL)
err = -EBADMSG; /* probably invalid key */
if (err)
return err;
return QCOM_ICE_HWKM_WRAPPED_KEY_SIZE;
}
EXPORT_SYMBOL_GPL(qcom_ice_prepare_key);
/**
* qcom_ice_import_key() - Import a raw key for inline encryption
* @ice: ICE driver data
* @raw_key: the raw key to import
* @raw_key_size: size of @raw_key in bytes
* @lt_key: output buffer for the long-term wrapped key
*
* Use HWKM to import a raw key and return it as a long-term wrapped key.
*
* Return: the size of the resulting wrapped key on success; -errno on failure.
*/
int qcom_ice_import_key(struct qcom_ice *ice,
const u8 *raw_key, size_t raw_key_size,
u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
{
int err;
err = qcom_scm_import_ice_key(raw_key, raw_key_size,
lt_key, QCOM_ICE_HWKM_WRAPPED_KEY_SIZE);
if (err)
return err;
return QCOM_ICE_HWKM_WRAPPED_KEY_SIZE;
}
EXPORT_SYMBOL_GPL(qcom_ice_import_key);
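Taken together, these exports give an ICE-capable storage driver a full wrapped-key lifecycle. A hedged sketch of the expected call order, as a reading aid rather than a literal consumer (error handling and the blk_crypto_key construction are elided):
/*
 *   u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE];
 *   u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE];
 *   int lt_len, eph_len;
 *
 *   lt_len = qcom_ice_generate_key(ice, lt_key);    // or qcom_ice_import_key()
 *   eph_len = qcom_ice_prepare_key(ice, lt_key, lt_len, eph_key);
 *   // build a BLK_CRYPTO_KEY_TYPE_HW_WRAPPED blk_crypto_key around eph_key
 *   qcom_ice_program_key(ice, slot, &bkey);
 *   // ...I/O...
 *   qcom_ice_evict_key(ice, slot);
 */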
static struct qcom_ice *qcom_ice_create(struct device *dev,
void __iomem *base)
{

View File

@ -673,12 +673,10 @@ static ssize_t emulate_model_alias_store(struct config_item *item,
return ret;
BUILD_BUG_ON(sizeof(dev->t10_wwn.model) != INQUIRY_MODEL_LEN + 1);
if (flag)
dev_set_t10_wwn_model_alias(dev);
else
strscpy(dev->t10_wwn.model, dev->transport->inquiry_prod);
da->emulate_model_alias = flag;
return count;
}
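This file's strscpy() calls move to the two-argument form, which takes the bound from the destination's array type at compile time; it is equivalent to the explicit three-argument call and only applies when the destination really is a fixed-size array, e.g.:
char model[INQUIRY_MODEL_LEN + 1];
strscpy(model, src);	/* same as strscpy(model, src, sizeof(model)) */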
@ -1433,7 +1431,7 @@ static ssize_t target_wwn_vendor_id_store(struct config_item *item,
ssize_t len;
ssize_t ret;
len = strscpy(buf, page);
if (len > 0) {
/* Strip any newline added from userspace. */
stripped = strstrip(buf);
@ -1464,7 +1462,7 @@ static ssize_t target_wwn_vendor_id_store(struct config_item *item,
}
BUILD_BUG_ON(sizeof(dev->t10_wwn.vendor) != INQUIRY_VENDOR_LEN + 1);
strscpy(dev->t10_wwn.vendor, stripped);
pr_debug("Target_Core_ConfigFS: Set emulated T10 Vendor Identification:"
" %s\n", dev->t10_wwn.vendor);
@ -1489,7 +1487,7 @@ static ssize_t target_wwn_product_id_store(struct config_item *item,
ssize_t len;
ssize_t ret;
len = strscpy(buf, page);
if (len > 0) {
/* Strip any newline added from userspace. */
stripped = strstrip(buf);
@ -1520,7 +1518,7 @@ static ssize_t target_wwn_product_id_store(struct config_item *item,
}
BUILD_BUG_ON(sizeof(dev->t10_wwn.model) != INQUIRY_MODEL_LEN + 1);
strscpy(dev->t10_wwn.model, stripped);
pr_debug("Target_Core_ConfigFS: Set emulated T10 Model Identification: %s\n",
dev->t10_wwn.model);
@ -1545,7 +1543,7 @@ static ssize_t target_wwn_revision_store(struct config_item *item,
ssize_t len;
ssize_t ret;
len = strscpy(buf, page);
if (len > 0) {
/* Strip any newline added from userspace. */
stripped = strstrip(buf);
@ -1576,7 +1574,7 @@ static ssize_t target_wwn_revision_store(struct config_item *item,
}
BUILD_BUG_ON(sizeof(dev->t10_wwn.revision) != INQUIRY_REVISION_LEN + 1);
strscpy(dev->t10_wwn.revision, stripped);
pr_debug("Target_Core_ConfigFS: Set emulated T10 Revision: %s\n",
dev->t10_wwn.revision);

View File

@ -55,14 +55,14 @@ transport_lookup_cmd_lun(struct se_cmd *se_cmd)
rcu_read_lock();
deve = target_nacl_find_deve(nacl, se_cmd->orig_fe_lun);
if (deve) {
this_cpu_inc(deve->stats->total_cmds);
if (se_cmd->data_direction == DMA_TO_DEVICE)
this_cpu_add(deve->stats->write_bytes,
se_cmd->data_length);
else if (se_cmd->data_direction == DMA_FROM_DEVICE)
this_cpu_add(deve->stats->read_bytes,
se_cmd->data_length);
if ((se_cmd->data_direction == DMA_TO_DEVICE) &&
deve->lun_access_ro) {
@ -126,14 +126,14 @@ out_unlock:
* target_core_fabric_configfs.c:target_fabric_port_release
*/
se_cmd->se_dev = rcu_dereference_raw(se_lun->lun_se_dev);
this_cpu_inc(se_cmd->se_dev->stats->total_cmds);
if (se_cmd->data_direction == DMA_TO_DEVICE)
this_cpu_add(se_cmd->se_dev->stats->write_bytes,
se_cmd->data_length);
else if (se_cmd->data_direction == DMA_FROM_DEVICE)
this_cpu_add(se_cmd->se_dev->stats->read_bytes,
se_cmd->data_length);
return ret;
}
@ -322,6 +322,7 @@ int core_enable_device_list_for_node(
struct se_portal_group *tpg)
{
struct se_dev_entry *orig, *new;
int ret = 0;
new = kzalloc(sizeof(*new), GFP_KERNEL);
if (!new) {
@ -329,6 +330,12 @@ int core_enable_device_list_for_node(
return -ENOMEM;
}
new->stats = alloc_percpu(struct se_dev_entry_io_stats);
if (!new->stats) {
ret = -ENOMEM;
goto free_deve;
}
spin_lock_init(&new->ua_lock);
INIT_LIST_HEAD(&new->ua_list);
INIT_LIST_HEAD(&new->lun_link);
@ -351,8 +358,8 @@ int core_enable_device_list_for_node(
" for dynamic -> explicit NodeACL conversion:"
" %s\n", nacl->initiatorname);
mutex_unlock(&nacl->lun_entry_mutex);
ret = -EINVAL;
goto free_stats;
}
if (orig->se_lun_acl != NULL) {
pr_warn_ratelimited("Detected existing explicit"
@ -360,8 +367,8 @@ int core_enable_device_list_for_node(
" mapped_lun: %llu, failing\n",
nacl->initiatorname, mapped_lun);
mutex_unlock(&nacl->lun_entry_mutex);
ret = -EINVAL;
goto free_stats;
}
new->se_lun = lun;
@ -394,6 +401,20 @@ int core_enable_device_list_for_node(
target_luns_data_has_changed(nacl, new, true);
return 0;
free_stats:
free_percpu(new->stats);
free_deve:
kfree(new);
return ret;
}
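The rewritten error handling follows the usual reverse-order goto unwind, so the newly added percpu stats are released on every failure path. The generic shape of that pattern (names and sizes illustrative):
#include <linux/percpu.h>
#include <linux/slab.h>
static int demo_setup(void **objp, u64 __percpu **statsp)
{
	void *obj;
	u64 __percpu *stats;

	obj = kzalloc(64, GFP_KERNEL);
	if (!obj)
		return -ENOMEM;

	stats = alloc_percpu(u64);
	if (!stats)
		goto free_obj;	/* undo in reverse order of setup */

	*objp = obj;
	*statsp = stats;
	return 0;

free_obj:
	kfree(obj);
	return -ENOMEM;
}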
static void target_free_dev_entry(struct rcu_head *head)
{
struct se_dev_entry *deve = container_of(head, struct se_dev_entry,
rcu_head);
free_percpu(deve->stats);
kfree(deve);
}
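This callback exists because kfree_rcu() can only kfree() the object embedding the rcu_head; once the entry also owns a percpu allocation, teardown needs a real RCU callback so both are freed after the grace period:
/*
 *   kfree_rcu(deve, rcu_head);                        // can only free 'deve'
 *   call_rcu(&deve->rcu_head, target_free_dev_entry); // frees stats + deve
 */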
void core_disable_device_list_for_node(
@ -443,7 +464,7 @@ void core_disable_device_list_for_node(
kref_put(&orig->pr_kref, target_pr_kref_release);
wait_for_completion(&orig->pr_comp);
call_rcu(&orig->rcu_head, target_free_dev_entry);
core_scsi3_free_pr_reg_from_nacl(dev, nacl);
target_luns_data_has_changed(nacl, NULL, false);
@ -679,6 +700,18 @@ static void scsi_dump_inquiry(struct se_device *dev)
pr_debug(" Type: %s ", scsi_device_type(device_type));
}
static void target_non_ordered_release(struct percpu_ref *ref)
{
struct se_device *dev = container_of(ref, struct se_device,
non_ordered);
unsigned long flags;
spin_lock_irqsave(&dev->delayed_cmd_lock, flags);
if (!list_empty(&dev->delayed_cmd_list))
schedule_work(&dev->delayed_cmd_work);
spin_unlock_irqrestore(&dev->delayed_cmd_lock, flags);
}
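target_non_ordered_release() fires when the last in-flight non-ordered command drops its reference, which is the point where a queued ordered command may proceed. A minimal percpu_ref lifecycle sketch (the API names are real; the kill/resurrect usage is inferred from this series, not shown in the hunks):
#include <linux/percpu-refcount.h>
static void demo_release(struct percpu_ref *ref)
{
	/* last reference gone; safe to dispatch the ordered command */
}
static int demo_init(struct percpu_ref *ref)
{
	/* ALLOW_REINIT permits re-arming the ref after it is killed */
	return percpu_ref_init(ref, demo_release,
			       PERCPU_REF_ALLOW_REINIT, GFP_KERNEL);
}
/*
 *   percpu_ref_kill(ref);       stop new users, drop the initial ref
 *   percpu_ref_resurrect(ref);  re-arm once the release has run
 *   percpu_ref_exit(ref);       final teardown
 */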
struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
{
struct se_device *dev;
@ -689,11 +722,13 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
if (!dev)
return NULL;
dev->stats = alloc_percpu(struct se_dev_io_stats);
if (!dev->stats)
goto free_device;
dev->queues = kcalloc(nr_cpu_ids, sizeof(*dev->queues), GFP_KERNEL);
if (!dev->queues)
goto free_stats;
dev->queue_cnt = nr_cpu_ids;
for (i = 0; i < dev->queue_cnt; i++) {
@ -707,6 +742,10 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
INIT_WORK(&q->sq.work, target_queued_submit_work);
}
if (percpu_ref_init(&dev->non_ordered, target_non_ordered_release,
PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
goto free_queues;
dev->se_hba = hba;
dev->transport = hba->backend->ops;
dev->transport_flags = dev->transport->transport_flags_default;
@ -791,6 +830,14 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
sizeof(dev->t10_wwn.revision));
return dev;
free_queues:
kfree(dev->queues);
free_stats:
free_percpu(dev->stats);
free_device:
hba->backend->ops->free_device(dev);
return NULL;
}
/*
@ -980,6 +1027,9 @@ void target_free_device(struct se_device *dev)
WARN_ON(!list_empty(&dev->dev_sep_list));
percpu_ref_exit(&dev->non_ordered);
cancel_work_sync(&dev->delayed_cmd_work);
if (target_dev_configured(dev)) {
dev->transport->destroy_device(dev);
@ -1001,6 +1051,7 @@ void target_free_device(struct se_device *dev)
dev->transport->free_prot(dev);
kfree(dev->queues);
free_percpu(dev->stats);
dev->transport->free_device(dev);
}

View File

@ -1325,7 +1325,7 @@ static void set_dpofua_usage_bits32(u8 *usage_bits, struct se_device *dev)
usage_bits[10] |= 0x18;
}
static const struct target_opcode_descriptor tcm_opcode_read6 = {
.support = SCSI_SUPPORT_FULL,
.opcode = READ_6,
.cdb_size = 6,
@ -1333,7 +1333,7 @@ static struct target_opcode_descriptor tcm_opcode_read6 = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_read10 = {
.support = SCSI_SUPPORT_FULL,
.opcode = READ_10,
.cdb_size = 10,
@ -1343,7 +1343,7 @@ static struct target_opcode_descriptor tcm_opcode_read10 = {
.update_usage_bits = set_dpofua_usage_bits,
};
static const struct target_opcode_descriptor tcm_opcode_read12 = {
.support = SCSI_SUPPORT_FULL,
.opcode = READ_12,
.cdb_size = 12,
@ -1353,7 +1353,7 @@ static struct target_opcode_descriptor tcm_opcode_read12 = {
.update_usage_bits = set_dpofua_usage_bits,
};
static const struct target_opcode_descriptor tcm_opcode_read16 = {
.support = SCSI_SUPPORT_FULL,
.opcode = READ_16,
.cdb_size = 16,
@ -1364,7 +1364,7 @@ static struct target_opcode_descriptor tcm_opcode_read16 = {
.update_usage_bits = set_dpofua_usage_bits,
};
static const struct target_opcode_descriptor tcm_opcode_write6 = {
.support = SCSI_SUPPORT_FULL,
.opcode = WRITE_6,
.cdb_size = 6,
@ -1372,7 +1372,7 @@ static struct target_opcode_descriptor tcm_opcode_write6 = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_write10 = {
.support = SCSI_SUPPORT_FULL,
.opcode = WRITE_10,
.cdb_size = 10,
@ -1382,7 +1382,7 @@ static struct target_opcode_descriptor tcm_opcode_write10 = {
.update_usage_bits = set_dpofua_usage_bits,
};
static const struct target_opcode_descriptor tcm_opcode_write_verify10 = {
.support = SCSI_SUPPORT_FULL,
.opcode = WRITE_VERIFY,
.cdb_size = 10,
@ -1392,7 +1392,7 @@ static struct target_opcode_descriptor tcm_opcode_write_verify10 = {
.update_usage_bits = set_dpofua_usage_bits,
};
static const struct target_opcode_descriptor tcm_opcode_write12 = {
.support = SCSI_SUPPORT_FULL,
.opcode = WRITE_12,
.cdb_size = 12,
@ -1402,7 +1402,7 @@ static struct target_opcode_descriptor tcm_opcode_write12 = {
.update_usage_bits = set_dpofua_usage_bits,
};
static const struct target_opcode_descriptor tcm_opcode_write16 = {
.support = SCSI_SUPPORT_FULL,
.opcode = WRITE_16,
.cdb_size = 16,
@ -1413,7 +1413,7 @@ static struct target_opcode_descriptor tcm_opcode_write16 = {
.update_usage_bits = set_dpofua_usage_bits,
};
static const struct target_opcode_descriptor tcm_opcode_write_verify16 = {
.support = SCSI_SUPPORT_FULL,
.opcode = WRITE_VERIFY_16,
.cdb_size = 16,
@ -1424,7 +1424,7 @@ static struct target_opcode_descriptor tcm_opcode_write_verify16 = {
.update_usage_bits = set_dpofua_usage_bits,
};
static bool tcm_is_ws_enabled(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd)
{
struct exec_cmd_ops *ops = cmd->protocol_data;
@ -1434,7 +1434,7 @@ static bool tcm_is_ws_enabled(struct target_opcode_descriptor *descr,
!!ops->execute_write_same;
}
static const struct target_opcode_descriptor tcm_opcode_write_same32 = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = VARIABLE_LENGTH_CMD,
@ -1452,7 +1452,7 @@ static struct target_opcode_descriptor tcm_opcode_write_same32 = {
.update_usage_bits = set_dpofua_usage_bits32,
};
static bool tcm_is_caw_enabled(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
@ -1460,7 +1460,7 @@ static bool tcm_is_caw_enabled(struct target_opcode_descriptor *descr,
return dev->dev_attrib.emulate_caw;
}
static const struct target_opcode_descriptor tcm_opcode_compare_write = {
.support = SCSI_SUPPORT_FULL,
.opcode = COMPARE_AND_WRITE,
.cdb_size = 16,
@ -1472,7 +1472,7 @@ static struct target_opcode_descriptor tcm_opcode_compare_write = {
.update_usage_bits = set_dpofua_usage_bits,
};
static const struct target_opcode_descriptor tcm_opcode_read_capacity = {
.support = SCSI_SUPPORT_FULL,
.opcode = READ_CAPACITY,
.cdb_size = 10,
@ -1481,7 +1481,7 @@ static struct target_opcode_descriptor tcm_opcode_read_capacity = {
0x01, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_read_capacity16 = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = SERVICE_ACTION_IN_16,
@ -1493,7 +1493,7 @@ static struct target_opcode_descriptor tcm_opcode_read_capacity16 = {
0xff, 0xff, 0x00, SCSI_CONTROL_MASK},
};
static bool tcm_is_rep_ref_enabled(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
@ -1507,7 +1507,7 @@ static bool tcm_is_rep_ref_enabled(struct target_opcode_descriptor *descr,
return true;
}
static const struct target_opcode_descriptor tcm_opcode_read_report_refferals = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = SERVICE_ACTION_IN_16,
@ -1520,7 +1520,7 @@ static struct target_opcode_descriptor tcm_opcode_read_report_refferals = {
.enabled = tcm_is_rep_ref_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_sync_cache = {
.support = SCSI_SUPPORT_FULL,
.opcode = SYNCHRONIZE_CACHE,
.cdb_size = 10,
@ -1529,7 +1529,7 @@ static struct target_opcode_descriptor tcm_opcode_sync_cache = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_sync_cache16 = {
.support = SCSI_SUPPORT_FULL,
.opcode = SYNCHRONIZE_CACHE_16,
.cdb_size = 16,
@ -1539,7 +1539,7 @@ static struct target_opcode_descriptor tcm_opcode_sync_cache16 = {
0xff, 0xff, SCSI_GROUP_NUMBER_MASK, SCSI_CONTROL_MASK},
};
static bool tcm_is_unmap_enabled(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd)
{
struct exec_cmd_ops *ops = cmd->protocol_data;
@ -1548,7 +1548,7 @@ static bool tcm_is_unmap_enabled(struct target_opcode_descriptor *descr,
return ops->execute_unmap && dev->dev_attrib.emulate_tpu;
}
static const struct target_opcode_descriptor tcm_opcode_unmap = {
.support = SCSI_SUPPORT_FULL,
.opcode = UNMAP,
.cdb_size = 10,
@ -1558,7 +1558,7 @@ static struct target_opcode_descriptor tcm_opcode_unmap = {
.enabled = tcm_is_unmap_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_write_same = {
.support = SCSI_SUPPORT_FULL,
.opcode = WRITE_SAME,
.cdb_size = 10,
@ -1568,7 +1568,7 @@ static struct target_opcode_descriptor tcm_opcode_write_same = {
.enabled = tcm_is_ws_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_write_same16 = {
.support = SCSI_SUPPORT_FULL,
.opcode = WRITE_SAME_16,
.cdb_size = 16,
@ -1579,7 +1579,7 @@ static struct target_opcode_descriptor tcm_opcode_write_same16 = {
.enabled = tcm_is_ws_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_verify = {
.support = SCSI_SUPPORT_FULL,
.opcode = VERIFY,
.cdb_size = 10,
@ -1588,7 +1588,7 @@ static struct target_opcode_descriptor tcm_opcode_verify = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_verify16 = {
.support = SCSI_SUPPORT_FULL,
.opcode = VERIFY_16,
.cdb_size = 16,
@ -1598,7 +1598,7 @@ static struct target_opcode_descriptor tcm_opcode_verify16 = {
0xff, 0xff, SCSI_GROUP_NUMBER_MASK, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_start_stop = {
.support = SCSI_SUPPORT_FULL,
.opcode = START_STOP,
.cdb_size = 6,
@ -1606,7 +1606,7 @@ static struct target_opcode_descriptor tcm_opcode_start_stop = {
0x01, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_mode_select = {
.support = SCSI_SUPPORT_FULL,
.opcode = MODE_SELECT,
.cdb_size = 6,
@ -1614,7 +1614,7 @@ static struct target_opcode_descriptor tcm_opcode_mode_select = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_mode_select10 = {
.support = SCSI_SUPPORT_FULL,
.opcode = MODE_SELECT_10,
.cdb_size = 10,
@ -1623,7 +1623,7 @@ static struct target_opcode_descriptor tcm_opcode_mode_select10 = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_mode_sense = {
.support = SCSI_SUPPORT_FULL,
.opcode = MODE_SENSE,
.cdb_size = 6,
@ -1631,7 +1631,7 @@ static struct target_opcode_descriptor tcm_opcode_mode_sense = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_mode_sense10 = {
.support = SCSI_SUPPORT_FULL,
.opcode = MODE_SENSE_10,
.cdb_size = 10,
@ -1640,7 +1640,7 @@ static struct target_opcode_descriptor tcm_opcode_mode_sense10 = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_pri_read_keys = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_IN,
@ -1651,7 +1651,7 @@ static struct target_opcode_descriptor tcm_opcode_pri_read_keys = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_pri_read_resrv = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_IN,
@ -1662,7 +1662,7 @@ static struct target_opcode_descriptor tcm_opcode_pri_read_resrv = {
0xff, SCSI_CONTROL_MASK},
};
static bool tcm_is_pr_enabled(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
@ -1704,7 +1704,7 @@ static bool tcm_is_pr_enabled(struct target_opcode_descriptor *descr,
return true;
}
static const struct target_opcode_descriptor tcm_opcode_pri_read_caps = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_IN,
@ -1716,7 +1716,7 @@ static struct target_opcode_descriptor tcm_opcode_pri_read_caps = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pri_read_full_status = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_IN,
@ -1728,7 +1728,7 @@ static struct target_opcode_descriptor tcm_opcode_pri_read_full_status = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pro_register = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_OUT,
@ -1740,7 +1740,7 @@ static struct target_opcode_descriptor tcm_opcode_pro_register = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pro_reserve = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_OUT,
@ -1752,7 +1752,7 @@ static struct target_opcode_descriptor tcm_opcode_pro_reserve = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pro_release = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_OUT,
@ -1764,7 +1764,7 @@ static struct target_opcode_descriptor tcm_opcode_pro_release = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pro_clear = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_OUT,
@ -1776,7 +1776,7 @@ static struct target_opcode_descriptor tcm_opcode_pro_clear = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pro_preempt = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_OUT,
@ -1788,7 +1788,7 @@ static struct target_opcode_descriptor tcm_opcode_pro_preempt = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pro_preempt_abort = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_OUT,
@ -1800,7 +1800,7 @@ static struct target_opcode_descriptor tcm_opcode_pro_preempt_abort = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pro_reg_ign_exist = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_OUT,
@ -1814,7 +1814,7 @@ static struct target_opcode_descriptor tcm_opcode_pro_reg_ign_exist = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_pro_register_move = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = PERSISTENT_RESERVE_OUT,
@ -1826,7 +1826,7 @@ static struct target_opcode_descriptor tcm_opcode_pro_register_move = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_release = {
.support = SCSI_SUPPORT_FULL,
.opcode = RELEASE_6,
.cdb_size = 6,
@ -1835,7 +1835,7 @@ static struct target_opcode_descriptor tcm_opcode_release = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_release10 = {
.support = SCSI_SUPPORT_FULL,
.opcode = RELEASE_10,
.cdb_size = 10,
@ -1845,7 +1845,7 @@ static struct target_opcode_descriptor tcm_opcode_release10 = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_reserve = {
.support = SCSI_SUPPORT_FULL,
.opcode = RESERVE_6,
.cdb_size = 6,
@ -1854,7 +1854,7 @@ static struct target_opcode_descriptor tcm_opcode_reserve = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_reserve10 = {
.support = SCSI_SUPPORT_FULL,
.opcode = RESERVE_10,
.cdb_size = 10,
@ -1864,7 +1864,7 @@ static struct target_opcode_descriptor tcm_opcode_reserve10 = {
.enabled = tcm_is_pr_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_request_sense = {
.support = SCSI_SUPPORT_FULL,
.opcode = REQUEST_SENSE,
.cdb_size = 6,
@ -1872,7 +1872,7 @@ static struct target_opcode_descriptor tcm_opcode_request_sense = {
0xff, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_inquiry = {
.support = SCSI_SUPPORT_FULL,
.opcode = INQUIRY,
.cdb_size = 6,
@ -1880,7 +1880,7 @@ static struct target_opcode_descriptor tcm_opcode_inquiry = {
0xff, SCSI_CONTROL_MASK},
};
static bool tcm_is_3pc_enabled(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
@ -1888,7 +1888,7 @@ static bool tcm_is_3pc_enabled(struct target_opcode_descriptor *descr,
return dev->dev_attrib.emulate_3pc;
}
static const struct target_opcode_descriptor tcm_opcode_extended_copy_lid1 = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = EXTENDED_COPY,
@ -1900,7 +1900,7 @@ static struct target_opcode_descriptor tcm_opcode_extended_copy_lid1 = {
.enabled = tcm_is_3pc_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_rcv_copy_res_op_params = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = RECEIVE_COPY_RESULTS,
@ -1914,7 +1914,7 @@ static struct target_opcode_descriptor tcm_opcode_rcv_copy_res_op_params = {
.enabled = tcm_is_3pc_enabled,
};
static const struct target_opcode_descriptor tcm_opcode_report_luns = {
.support = SCSI_SUPPORT_FULL,
.opcode = REPORT_LUNS,
.cdb_size = 12,
@ -1923,7 +1923,7 @@ static struct target_opcode_descriptor tcm_opcode_report_luns = {
0xff, 0xff, 0x00, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_test_unit_ready = {
.support = SCSI_SUPPORT_FULL,
.opcode = TEST_UNIT_READY,
.cdb_size = 6,
@ -1931,7 +1931,7 @@ static struct target_opcode_descriptor tcm_opcode_test_unit_ready = {
0x00, SCSI_CONTROL_MASK},
};
static const struct target_opcode_descriptor tcm_opcode_report_target_pgs = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = MAINTENANCE_IN,
@ -1942,7 +1942,7 @@ static struct target_opcode_descriptor tcm_opcode_report_target_pgs = {
0xff, 0xff, 0x00, SCSI_CONTROL_MASK},
};
static bool spc_rsoc_enabled(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
@ -1950,7 +1950,7 @@ static bool spc_rsoc_enabled(struct target_opcode_descriptor *descr,
return dev->dev_attrib.emulate_rsoc;
}
static const struct target_opcode_descriptor tcm_opcode_report_supp_opcodes = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = MAINTENANCE_IN,
@ -1963,7 +1963,7 @@ static struct target_opcode_descriptor tcm_opcode_report_supp_opcodes = {
.enabled = spc_rsoc_enabled,
};
static bool tcm_is_set_tpg_enabled(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd)
{
struct t10_alua_tg_pt_gp *l_tg_pt_gp;
@ -1984,7 +1984,7 @@ static bool tcm_is_set_tpg_enabled(struct target_opcode_descriptor *descr,
return true;
}
static const struct target_opcode_descriptor tcm_opcode_set_tpg = {
.support = SCSI_SUPPORT_FULL,
.serv_action_valid = 1,
.opcode = MAINTENANCE_OUT,
@ -1996,7 +1996,7 @@ static struct target_opcode_descriptor tcm_opcode_set_tpg = {
.enabled = tcm_is_set_tpg_enabled,
};
static const struct target_opcode_descriptor *tcm_supported_opcodes[] = {
&tcm_opcode_read6,
&tcm_opcode_read10,
&tcm_opcode_read12,
@ -2053,7 +2053,7 @@ static struct target_opcode_descriptor *tcm_supported_opcodes[] = {
static int
spc_rsoc_encode_command_timeouts_descriptor(unsigned char *buf, u8 ctdp,
const struct target_opcode_descriptor *descr)
{
if (!ctdp)
return 0;
@ -2068,7 +2068,7 @@ spc_rsoc_encode_command_timeouts_descriptor(unsigned char *buf, u8 ctdp,
static int
spc_rsoc_encode_command_descriptor(unsigned char *buf, u8 ctdp,
const struct target_opcode_descriptor *descr)
{
int td_size = 0;
@ -2087,7 +2087,7 @@ spc_rsoc_encode_command_descriptor(unsigned char *buf, u8 ctdp,
static int
spc_rsoc_encode_one_command_descriptor(unsigned char *buf, u8 ctdp,
struct target_opcode_descriptor *descr,
const struct target_opcode_descriptor *descr,
struct se_device *dev)
{
int td_size = 0;
@@ -2110,9 +2110,9 @@ spc_rsoc_encode_one_command_descriptor(unsigned char *buf, u8 ctdp,
}
static sense_reason_t
spc_rsoc_get_descr(struct se_cmd *cmd, struct target_opcode_descriptor **opcode)
spc_rsoc_get_descr(struct se_cmd *cmd, const struct target_opcode_descriptor **opcode)
{
struct target_opcode_descriptor *descr;
const struct target_opcode_descriptor *descr;
struct se_session *sess = cmd->se_sess;
unsigned char *cdb = cmd->t_task_cdb;
u8 opts = cdb[2] & 0x3;
@@ -2199,7 +2199,7 @@ static sense_reason_t
spc_emulate_report_supp_op_codes(struct se_cmd *cmd)
{
int descr_num = ARRAY_SIZE(tcm_supported_opcodes);
struct target_opcode_descriptor *descr = NULL;
const struct target_opcode_descriptor *descr = NULL;
unsigned char *cdb = cmd->t_task_cdb;
u8 rctd = (cdb[2] >> 7) & 0x1;
unsigned char *buf = NULL;
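For illustration, a minimal sketch of what the constification buys: once the descriptor tables live in .rodata, every pointer along the lookup path, including the enabled() callback parameter above, must be const-qualified or the build fails. demo_find_descr() is hypothetical, not part of this series:

/* Hypothetical lookup over the now-const descriptor table. */
static const struct target_opcode_descriptor *demo_find_descr(u8 opcode)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(tcm_supported_opcodes); i++) {
		const struct target_opcode_descriptor *descr =
			tcm_supported_opcodes[i];

		if (descr->opcode == opcode)
			return descr;
	}
	return NULL;
}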

View File

@@ -280,30 +280,51 @@ static ssize_t target_stat_lu_num_cmds_show(struct config_item *item,
char *page)
{
struct se_device *dev = to_stat_lu_dev(item);
struct se_dev_io_stats *stats;
unsigned int cpu;
u32 cmds = 0;
for_each_possible_cpu(cpu) {
stats = per_cpu_ptr(dev->stats, cpu);
cmds += stats->total_cmds;
}
/* scsiLuNumCommands */
return snprintf(page, PAGE_SIZE, "%lu\n",
atomic_long_read(&dev->num_cmds));
return snprintf(page, PAGE_SIZE, "%u\n", cmds);
}
static ssize_t target_stat_lu_read_mbytes_show(struct config_item *item,
char *page)
{
struct se_device *dev = to_stat_lu_dev(item);
struct se_dev_io_stats *stats;
unsigned int cpu;
u32 bytes = 0;
for_each_possible_cpu(cpu) {
stats = per_cpu_ptr(dev->stats, cpu);
bytes += stats->read_bytes;
}
/* scsiLuReadMegaBytes */
return snprintf(page, PAGE_SIZE, "%lu\n",
atomic_long_read(&dev->read_bytes) >> 20);
return snprintf(page, PAGE_SIZE, "%u\n", bytes >> 20);
}
static ssize_t target_stat_lu_write_mbytes_show(struct config_item *item,
char *page)
{
struct se_device *dev = to_stat_lu_dev(item);
struct se_dev_io_stats *stats;
unsigned int cpu;
u32 bytes = 0;
for_each_possible_cpu(cpu) {
stats = per_cpu_ptr(dev->stats, cpu);
bytes += stats->write_bytes;
}
/* scsiLuWrittenMegaBytes */
return snprintf(page, PAGE_SIZE, "%lu\n",
atomic_long_read(&dev->write_bytes) >> 20);
return snprintf(page, PAGE_SIZE, "%u\n", bytes >> 20);
}
static ssize_t target_stat_lu_resets_show(struct config_item *item, char *page)
@@ -1019,8 +1040,11 @@ static ssize_t target_stat_auth_num_cmds_show(struct config_item *item,
{
struct se_lun_acl *lacl = auth_to_lacl(item);
struct se_node_acl *nacl = lacl->se_lun_nacl;
struct se_dev_entry_io_stats *stats;
struct se_dev_entry *deve;
unsigned int cpu;
ssize_t ret;
u32 cmds = 0;
rcu_read_lock();
deve = target_nacl_find_deve(nacl, lacl->mapped_lun);
@@ -1028,9 +1052,14 @@ static ssize_t target_stat_auth_num_cmds_show(struct config_item *item,
rcu_read_unlock();
return -ENODEV;
}
for_each_possible_cpu(cpu) {
stats = per_cpu_ptr(deve->stats, cpu);
cmds += stats->total_cmds;
}
/* scsiAuthIntrOutCommands */
ret = snprintf(page, PAGE_SIZE, "%lu\n",
atomic_long_read(&deve->total_cmds));
ret = snprintf(page, PAGE_SIZE, "%u\n", cmds);
rcu_read_unlock();
return ret;
}
@@ -1040,8 +1069,11 @@ static ssize_t target_stat_auth_read_mbytes_show(struct config_item *item,
{
struct se_lun_acl *lacl = auth_to_lacl(item);
struct se_node_acl *nacl = lacl->se_lun_nacl;
struct se_dev_entry_io_stats *stats;
struct se_dev_entry *deve;
unsigned int cpu;
ssize_t ret;
u32 bytes = 0;
rcu_read_lock();
deve = target_nacl_find_deve(nacl, lacl->mapped_lun);
@@ -1049,9 +1081,14 @@ static ssize_t target_stat_auth_read_mbytes_show(struct config_item *item,
rcu_read_unlock();
return -ENODEV;
}
for_each_possible_cpu(cpu) {
stats = per_cpu_ptr(deve->stats, cpu);
bytes += stats->read_bytes;
}
/* scsiAuthIntrReadMegaBytes */
ret = snprintf(page, PAGE_SIZE, "%u\n",
(u32)(atomic_long_read(&deve->read_bytes) >> 20));
ret = snprintf(page, PAGE_SIZE, "%u\n", bytes >> 20);
rcu_read_unlock();
return ret;
}
@@ -1061,8 +1098,11 @@ static ssize_t target_stat_auth_write_mbytes_show(struct config_item *item,
{
struct se_lun_acl *lacl = auth_to_lacl(item);
struct se_node_acl *nacl = lacl->se_lun_nacl;
struct se_dev_entry_io_stats *stats;
struct se_dev_entry *deve;
unsigned int cpu;
ssize_t ret;
u32 bytes = 0;
rcu_read_lock();
deve = target_nacl_find_deve(nacl, lacl->mapped_lun);
@@ -1070,9 +1110,14 @@ static ssize_t target_stat_auth_write_mbytes_show(struct config_item *item,
rcu_read_unlock();
return -ENODEV;
}
for_each_possible_cpu(cpu) {
stats = per_cpu_ptr(deve->stats, cpu);
bytes += stats->write_bytes;
}
/* scsiAuthIntrWrittenMegaBytes */
ret = snprintf(page, PAGE_SIZE, "%u\n",
(u32)(atomic_long_read(&deve->write_bytes) >> 20));
ret = snprintf(page, PAGE_SIZE, "%u\n", bytes >> 20);
rcu_read_unlock();
return ret;
}
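The readers above sum the per-CPU slots with for_each_possible_cpu(); the increment side is not part of this excerpt, but presumably follows the usual lockless pattern, with the stats blocks allocated via alloc_percpu() and released with free_percpu(). A sketch, with demo_account_read() purely illustrative:

#include <linux/percpu.h>

/* Hypothetical hot-path accounting; the real sites live in the I/O
 * submission path, which is not shown in this diff. */
static inline void demo_account_read(struct se_device *dev, u32 bytes)
{
	this_cpu_inc(dev->stats->total_cmds);
	this_cpu_add(dev->stats->read_bytes, bytes);
}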

View File

@@ -2213,6 +2213,7 @@ static int target_write_prot_action(struct se_cmd *cmd)
static bool target_handle_task_attr(struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
unsigned long flags;
if (dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return false;
@@ -2225,13 +2226,10 @@ static bool target_handle_task_attr(struct se_cmd *cmd)
*/
switch (cmd->sam_task_attr) {
case TCM_HEAD_TAG:
atomic_inc_mb(&dev->non_ordered);
pr_debug("Added HEAD_OF_QUEUE for CDB: 0x%02x\n",
cmd->t_task_cdb[0]);
return false;
case TCM_ORDERED_TAG:
atomic_inc_mb(&dev->delayed_cmd_count);
pr_debug("Added ORDERED for CDB: 0x%02x to ordered list\n",
cmd->t_task_cdb[0]);
break;
@@ -2239,29 +2237,29 @@ static bool target_handle_task_attr(struct se_cmd *cmd)
/*
* For SIMPLE and UNTAGGED Task Attribute commands
*/
atomic_inc_mb(&dev->non_ordered);
if (atomic_read(&dev->delayed_cmd_count) == 0)
retry:
if (percpu_ref_tryget_live(&dev->non_ordered))
return false;
break;
}
if (cmd->sam_task_attr != TCM_ORDERED_TAG) {
atomic_inc_mb(&dev->delayed_cmd_count);
/*
* We will account for this when we dequeue from the delayed
* list.
*/
atomic_dec_mb(&dev->non_ordered);
spin_lock_irqsave(&dev->delayed_cmd_lock, flags);
if (cmd->sam_task_attr == TCM_SIMPLE_TAG &&
!percpu_ref_is_dying(&dev->non_ordered)) {
spin_unlock_irqrestore(&dev->delayed_cmd_lock, flags);
/* We raced with the last ordered completion so retry. */
goto retry;
} else if (!percpu_ref_is_dying(&dev->non_ordered)) {
percpu_ref_kill(&dev->non_ordered);
}
spin_lock_irq(&cmd->t_state_lock);
spin_lock(&cmd->t_state_lock);
cmd->transport_state &= ~CMD_T_SENT;
spin_unlock_irq(&cmd->t_state_lock);
spin_unlock(&cmd->t_state_lock);
spin_lock(&dev->delayed_cmd_lock);
list_add_tail(&cmd->se_delayed_node, &dev->delayed_cmd_list);
spin_unlock(&dev->delayed_cmd_lock);
spin_unlock_irqrestore(&dev->delayed_cmd_lock, flags);
pr_debug("Added CDB: 0x%02x Task Attr: 0x%02x to delayed CMD listn",
cmd->t_task_cdb[0], cmd->sam_task_attr);
@@ -2313,41 +2311,52 @@ void target_do_delayed_work(struct work_struct *work)
while (!dev->ordered_sync_in_progress) {
struct se_cmd *cmd;
if (list_empty(&dev->delayed_cmd_list))
/*
* We can be woken up early/late due to races or the
* extra wake up we do when adding commands to the list.
* We check for both cases here.
*/
if (list_empty(&dev->delayed_cmd_list) ||
!percpu_ref_is_zero(&dev->non_ordered))
break;
cmd = list_entry(dev->delayed_cmd_list.next,
struct se_cmd, se_delayed_node);
if (cmd->sam_task_attr == TCM_ORDERED_TAG) {
/*
* Check if we started with:
* [ordered] [simple] [ordered]
* and we are now at the last ordered so we have to wait
* for the simple cmd.
*/
if (atomic_read(&dev->non_ordered) > 0)
break;
dev->ordered_sync_in_progress = true;
}
list_del(&cmd->se_delayed_node);
atomic_dec_mb(&dev->delayed_cmd_count);
spin_unlock(&dev->delayed_cmd_lock);
if (cmd->sam_task_attr != TCM_ORDERED_TAG)
atomic_inc_mb(&dev->non_ordered);
cmd->se_cmd_flags |= SCF_TASK_ORDERED_SYNC;
cmd->transport_state |= CMD_T_SENT;
__target_execute_cmd(cmd, true);
dev->ordered_sync_in_progress = true;
list_del(&cmd->se_delayed_node);
spin_unlock(&dev->delayed_cmd_lock);
__target_execute_cmd(cmd, true);
spin_lock(&dev->delayed_cmd_lock);
}
spin_unlock(&dev->delayed_cmd_lock);
}
static void transport_complete_ordered_sync(struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
unsigned long flags;
spin_lock_irqsave(&dev->delayed_cmd_lock, flags);
dev->dev_cur_ordered_id++;
pr_debug("Incremented dev_cur_ordered_id: %u for type %d\n",
dev->dev_cur_ordered_id, cmd->sam_task_attr);
dev->ordered_sync_in_progress = false;
if (list_empty(&dev->delayed_cmd_list))
percpu_ref_resurrect(&dev->non_ordered);
else
schedule_work(&dev->delayed_cmd_work);
spin_unlock_irqrestore(&dev->delayed_cmd_lock, flags);
}
/*
* Called from I/O completion to determine which dormant/delayed
* and ordered cmds need to have their tasks added to the execution queue.
@@ -2360,30 +2369,24 @@ static void transport_complete_task_attr(struct se_cmd *cmd)
return;
if (!(cmd->se_cmd_flags & SCF_TASK_ATTR_SET))
goto restart;
return;
if (cmd->sam_task_attr == TCM_SIMPLE_TAG) {
atomic_dec_mb(&dev->non_ordered);
dev->dev_cur_ordered_id++;
} else if (cmd->sam_task_attr == TCM_HEAD_TAG) {
atomic_dec_mb(&dev->non_ordered);
dev->dev_cur_ordered_id++;
pr_debug("Incremented dev_cur_ordered_id: %u for HEAD_OF_QUEUE\n",
dev->dev_cur_ordered_id);
} else if (cmd->sam_task_attr == TCM_ORDERED_TAG) {
spin_lock(&dev->delayed_cmd_lock);
dev->ordered_sync_in_progress = false;
spin_unlock(&dev->delayed_cmd_lock);
dev->dev_cur_ordered_id++;
pr_debug("Incremented dev_cur_ordered_id: %u for ORDERED\n",
dev->dev_cur_ordered_id);
}
cmd->se_cmd_flags &= ~SCF_TASK_ATTR_SET;
restart:
if (atomic_read(&dev->delayed_cmd_count) > 0)
schedule_work(&dev->delayed_cmd_work);
if (cmd->se_cmd_flags & SCF_TASK_ORDERED_SYNC) {
transport_complete_ordered_sync(cmd);
return;
}
switch (cmd->sam_task_attr) {
case TCM_SIMPLE_TAG:
percpu_ref_put(&dev->non_ordered);
break;
case TCM_ORDERED_TAG:
/* All ordered should have been executed as sync */
WARN_ON(1);
break;
}
}
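One detail this hunk relies on but does not show: percpu_ref_resurrect() is only valid on a ref initialized with PERCPU_REF_ALLOW_REINIT, and killing the ref arranges for the final percpu_ref_put() to invoke a release callback that can restart the delayed list. A sketch of the matching setup, with demo_non_ordered_release() an assumed name:

#include <linux/percpu-refcount.h>

static void demo_non_ordered_release(struct percpu_ref *ref)
{
	struct se_device *dev = container_of(ref, struct se_device,
					     non_ordered);

	/* Last in-flight non-ordered command drained; run the list. */
	schedule_work(&dev->delayed_cmd_work);
}

/* At device setup (illustrative): */
static int demo_init_non_ordered(struct se_device *dev)
{
	return percpu_ref_init(&dev->non_ordered, demo_non_ordered_release,
			       PERCPU_REF_ALLOW_REINIT, GFP_KERNEL);
}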
static void transport_complete_qf(struct se_cmd *cmd)

View File

@@ -674,7 +674,6 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
int tag = scsi_cmd_to_rq(cmd)->tag;
struct ufshcd_lrb *lrbp = &hba->lrb[tag];
struct ufs_hw_queue *hwq;
unsigned long flags;
int err;
/* Skip task abort in case previous aborts failed and report failure */
@@ -713,10 +712,5 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
return FAILED;
}
spin_lock_irqsave(&hwq->cq_lock, flags);
if (ufshcd_cmd_inflight(lrbp->cmd))
ufshcd_release_scsi_cmd(hba, lrbp);
spin_unlock_irqrestore(&hwq->cq_lock, flags);
return SUCCESS;
}

View File

@@ -57,6 +57,36 @@ static const char *ufs_hs_gear_to_string(enum ufs_hs_gear_tag gear)
}
}
static const char *ufs_wb_resize_hint_to_string(enum wb_resize_hint hint)
{
switch (hint) {
case WB_RESIZE_HINT_KEEP:
return "keep";
case WB_RESIZE_HINT_DECREASE:
return "decrease";
case WB_RESIZE_HINT_INCREASE:
return "increase";
default:
return "unknown";
}
}
static const char *ufs_wb_resize_status_to_string(enum wb_resize_status status)
{
switch (status) {
case WB_RESIZE_STATUS_IDLE:
return "idle";
case WB_RESIZE_STATUS_IN_PROGRESS:
return "in_progress";
case WB_RESIZE_STATUS_COMPLETE_SUCCESS:
return "complete_success";
case WB_RESIZE_STATUS_GENERAL_FAILURE:
return "general_failure";
default:
return "unknown";
}
}
static const char *ufshcd_uic_link_state_to_string(
enum uic_link_state state)
{
@@ -411,6 +441,44 @@ static ssize_t wb_flush_threshold_store(struct device *dev,
return count;
}
static const char * const wb_resize_en_mode[] = {
[WB_RESIZE_EN_IDLE] = "idle",
[WB_RESIZE_EN_DECREASE] = "decrease",
[WB_RESIZE_EN_INCREASE] = "increase",
};
static ssize_t wb_resize_enable_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
int mode;
ssize_t res;
if (!ufshcd_is_wb_allowed(hba) || !hba->dev_info.wb_enabled
|| !hba->dev_info.b_presrv_uspc_en
|| !(hba->dev_info.ext_wb_sup & UFS_DEV_WB_BUF_RESIZE))
return -EOPNOTSUPP;
mode = sysfs_match_string(wb_resize_en_mode, buf);
if (mode < 0)
return -EINVAL;
down(&hba->host_sem);
if (!ufshcd_is_user_access_allowed(hba)) {
res = -EBUSY;
goto out;
}
ufshcd_rpm_get_sync(hba);
res = ufshcd_wb_set_resize_en(hba, mode);
ufshcd_rpm_put_sync(hba);
out:
up(&hba->host_sem);
return res < 0 ? res : count;
}
/**
* pm_qos_enable_show - sysfs handler to show pm qos enable value
* @dev: device associated with the UFS controller
@@ -526,6 +594,7 @@ static DEVICE_ATTR_RW(auto_hibern8);
static DEVICE_ATTR_RW(wb_on);
static DEVICE_ATTR_RW(enable_wb_buf_flush);
static DEVICE_ATTR_RW(wb_flush_threshold);
static DEVICE_ATTR_WO(wb_resize_enable);
static DEVICE_ATTR_RW(rtc_update_ms);
static DEVICE_ATTR_RW(pm_qos_enable);
static DEVICE_ATTR_RO(critical_health);
@@ -543,6 +612,7 @@ static struct attribute *ufs_sysfs_ufshcd_attrs[] = {
&dev_attr_wb_on.attr,
&dev_attr_enable_wb_buf_flush.attr,
&dev_attr_wb_flush_threshold.attr,
&dev_attr_wb_resize_enable.attr,
&dev_attr_rtc_update_ms.attr,
&dev_attr_pm_qos_enable.attr,
&dev_attr_critical_health.attr,
@@ -1549,6 +1619,67 @@ static inline bool ufshcd_is_wb_attrs(enum attr_idn idn)
idn <= QUERY_ATTR_IDN_CURR_WB_BUFF_SIZE;
}
static int wb_read_resize_attrs(struct ufs_hba *hba,
enum attr_idn idn, u32 *attr_val)
{
u8 index = 0;
int ret;
if (!ufshcd_is_wb_allowed(hba) || !hba->dev_info.wb_enabled
|| !hba->dev_info.b_presrv_uspc_en
|| !(hba->dev_info.ext_wb_sup & UFS_DEV_WB_BUF_RESIZE))
return -EOPNOTSUPP;
down(&hba->host_sem);
if (!ufshcd_is_user_access_allowed(hba)) {
up(&hba->host_sem);
return -EBUSY;
}
index = ufshcd_wb_get_query_index(hba);
ufshcd_rpm_get_sync(hba);
ret = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_READ_ATTR,
idn, index, 0, attr_val);
ufshcd_rpm_put_sync(hba);
up(&hba->host_sem);
return ret;
}
static ssize_t wb_resize_hint_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
int ret;
u32 value;
ret = wb_read_resize_attrs(hba,
QUERY_ATTR_IDN_WB_BUF_RESIZE_HINT, &value);
if (ret)
return ret;
return sysfs_emit(buf, "%s\n", ufs_wb_resize_hint_to_string(value));
}
static DEVICE_ATTR_RO(wb_resize_hint);
static ssize_t wb_resize_status_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
int ret;
u32 value;
ret = wb_read_resize_attrs(hba,
QUERY_ATTR_IDN_WB_BUF_RESIZE_STATUS, &value);
if (ret)
return ret;
return sysfs_emit(buf, "%s\n", ufs_wb_resize_status_to_string(value));
}
static DEVICE_ATTR_RO(wb_resize_status);
#define UFS_ATTRIBUTE(_name, _uname) \
static ssize_t _name##_show(struct device *dev, \
struct device_attribute *attr, char *buf) \
@@ -1622,6 +1753,8 @@ static struct attribute *ufs_sysfs_attributes[] = {
&dev_attr_wb_avail_buf.attr,
&dev_attr_wb_life_time_est.attr,
&dev_attr_wb_cur_buf.attr,
&dev_attr_wb_resize_hint.attr,
&dev_attr_wb_resize_status.attr,
NULL,
};

View File

@@ -53,7 +53,7 @@
/* UIC command timeout, unit: ms */
enum {
UIC_CMD_TIMEOUT_DEFAULT = 500,
UIC_CMD_TIMEOUT_MAX = 2000,
UIC_CMD_TIMEOUT_MAX = 5000,
};
/* NOP OUT retries waiting for NOP IN response */
#define NOP_OUT_RETRIES 10
@@ -63,7 +63,11 @@ enum {
/* Query request retries */
#define QUERY_REQ_RETRIES 3
/* Query request timeout */
#define QUERY_REQ_TIMEOUT 1500 /* 1.5 seconds */
enum {
QUERY_REQ_TIMEOUT_MIN = 1,
QUERY_REQ_TIMEOUT_DEFAULT = 1500,
QUERY_REQ_TIMEOUT_MAX = 30000
};
/* Advanced RPMB request timeout */
#define ADVANCED_RPMB_REQ_TIMEOUT 3000 /* 3 seconds */
@@ -133,7 +137,24 @@ static const struct kernel_param_ops uic_cmd_timeout_ops = {
module_param_cb(uic_cmd_timeout, &uic_cmd_timeout_ops, &uic_cmd_timeout, 0644);
MODULE_PARM_DESC(uic_cmd_timeout,
"UFS UIC command timeout in milliseconds. Defaults to 500ms. Supported values range from 500ms to 2 seconds inclusively");
"UFS UIC command timeout in milliseconds. Defaults to 500ms. Supported values range from 500ms to 5 seconds inclusively");
static unsigned int dev_cmd_timeout = QUERY_REQ_TIMEOUT_DEFAULT;
static int dev_cmd_timeout_set(const char *val, const struct kernel_param *kp)
{
return param_set_uint_minmax(val, kp, QUERY_REQ_TIMEOUT_MIN,
QUERY_REQ_TIMEOUT_MAX);
}
static const struct kernel_param_ops dev_cmd_timeout_ops = {
.set = dev_cmd_timeout_set,
.get = param_get_uint,
};
module_param_cb(dev_cmd_timeout, &dev_cmd_timeout_ops, &dev_cmd_timeout, 0644);
MODULE_PARM_DESC(dev_cmd_timeout,
"UFS Device command timeout in milliseconds. Defaults to 1.5s. Supported values range from 1ms to 30 seconds inclusively");
#define ufshcd_toggle_vreg(_dev, _vreg, _on) \
({ \
@@ -432,7 +453,7 @@ static void ufshcd_add_command_trace(struct ufs_hba *hba, unsigned int tag,
u8 opcode = 0, group_id = 0;
u32 doorbell = 0;
u32 intr;
int hwq_id = -1;
u32 hwq_id = 0;
struct ufshcd_lrb *lrbp = &hba->lrb[tag];
struct scsi_cmnd *cmd = lrbp->cmd;
struct request *rq = scsi_cmd_to_rq(cmd);
@@ -644,9 +665,6 @@ static void ufshcd_print_host_state(struct ufs_hba *hba)
"last_hibern8_exit_tstamp at %lld us, hibern8_exit_cnt=%d\n",
div_u64(hba->ufs_stats.last_hibern8_exit_tstamp, 1000),
hba->ufs_stats.hibern8_exit_cnt);
dev_err(hba->dev, "last intr at %lld us, last intr status=0x%x\n",
div_u64(hba->ufs_stats.last_intr_ts, 1000),
hba->ufs_stats.last_intr_status);
dev_err(hba->dev, "error handling flags=0x%x, req. abort count=%d\n",
hba->eh_flags, hba->req_abort_count);
dev_err(hba->dev, "hba->ufs_version=0x%x, Host capabilities=0x%x, caps=0x%x\n",
@@ -3365,7 +3383,7 @@ int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,
struct ufs_query_req *request = NULL;
struct ufs_query_res *response = NULL;
int err, selector = 0;
int timeout = QUERY_REQ_TIMEOUT;
int timeout = dev_cmd_timeout;
BUG_ON(!hba);
@@ -3462,7 +3480,7 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode,
goto out_unlock;
}
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, dev_cmd_timeout);
if (err) {
dev_err(hba->dev, "%s: opcode 0x%.2x for idn %d failed, index %d, err = %d\n",
@@ -3558,7 +3576,7 @@ static int __ufshcd_query_descriptor(struct ufs_hba *hba,
goto out_unlock;
}
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, dev_cmd_timeout);
if (err) {
dev_err(hba->dev, "%s: opcode 0x%.2x for idn %d failed, index %d, err = %d\n",
@@ -6020,7 +6038,7 @@ int ufshcd_read_device_lvl_exception_id(struct ufs_hba *hba, u64 *exception_id)
request->query_func = UPIU_QUERY_FUNC_STANDARD_READ_REQUEST;
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, dev_cmd_timeout);
if (err) {
dev_err(hba->dev, "%s: failed to read device level exception %d\n",
@@ -6107,6 +6125,21 @@ int ufshcd_wb_toggle_buf_flush(struct ufs_hba *hba, bool enable)
return ret;
}
int ufshcd_wb_set_resize_en(struct ufs_hba *hba, enum wb_resize_en en_mode)
{
int ret;
u8 index;
index = ufshcd_wb_get_query_index(hba);
ret = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_WRITE_ATTR,
QUERY_ATTR_IDN_WB_BUF_RESIZE_EN, index, 0, &en_mode);
if (ret)
dev_err(hba->dev, "%s: Enable WB buf resize operation failed %d\n",
__func__, ret);
return ret;
}
static bool ufshcd_wb_curr_buff_threshold_check(struct ufs_hba *hba,
u32 avail_buf)
{
@@ -6572,7 +6605,7 @@ static void ufshcd_err_handler(struct work_struct *work)
hba = container_of(work, struct ufs_hba, eh_work);
dev_info(hba->dev,
"%s started; HBA state %s; powered %d; shutting down %d; saved_err = %d; saved_uic_err = %d; force_reset = %d%s\n",
"%s started; HBA state %s; powered %d; shutting down %d; saved_err = 0x%x; saved_uic_err = 0x%x; force_reset = %d%s\n",
__func__, ufshcd_state_name[hba->ufshcd_state],
hba->is_powered, hba->shutting_down, hba->saved_err,
hba->saved_uic_err, hba->force_reset,
@@ -7001,7 +7034,7 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
}
/**
* ufshcd_intr - Main interrupt service routine
* ufshcd_threaded_intr - Threaded interrupt service routine
* @irq: irq number
* @__hba: pointer to adapter instance
*
@@ -7009,16 +7042,14 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
* IRQ_HANDLED - If interrupt is valid
* IRQ_NONE - If invalid interrupt
*/
static irqreturn_t ufshcd_intr(int irq, void *__hba)
static irqreturn_t ufshcd_threaded_intr(int irq, void *__hba)
{
u32 intr_status, enabled_intr_status = 0;
u32 last_intr_status, intr_status, enabled_intr_status = 0;
irqreturn_t retval = IRQ_NONE;
struct ufs_hba *hba = __hba;
int retries = hba->nutrs;
intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
hba->ufs_stats.last_intr_status = intr_status;
hba->ufs_stats.last_intr_ts = local_clock();
last_intr_status = intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
/*
* There could be max of hba->nutrs reqs in flight and in worst case
@@ -7042,7 +7073,7 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x (0x%08x, 0x%08x)\n",
__func__,
intr_status,
hba->ufs_stats.last_intr_status,
last_intr_status,
enabled_intr_status);
ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
}
@@ -7050,6 +7081,29 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
return retval;
}
/**
* ufshcd_intr - Main interrupt service routine
* @irq: irq number
* @__hba: pointer to adapter instance
*
* Return:
* IRQ_HANDLED - If interrupt is valid
* IRQ_WAKE_THREAD - If handling is moved to the threaded handler
* IRQ_NONE - If invalid interrupt
*/
static irqreturn_t ufshcd_intr(int irq, void *__hba)
{
struct ufs_hba *hba = __hba;
/* Offload handling to the threaded handler unless both MCQ and ESI are enabled */
if (!hba->mcq_enabled || !hba->mcq_esi_enabled)
return IRQ_WAKE_THREAD;
/* Directly handle interrupts since the MCQ ESI handlers do the hard work */
return ufshcd_sl_intr(hba, ufshcd_readl(hba, REG_INTERRUPT_STATUS) &
ufshcd_readl(hba, REG_INTERRUPT_ENABLE));
}
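For context, the shape of the registration this split depends on: the primary handler runs in hard-IRQ context and either completes the work or returns IRQ_WAKE_THREAD, while IRQF_ONESHOT keeps the line masked until the threaded handler finishes. A minimal sketch; the demo_* names and fast_path_possible() are illustrative:

#include <linux/interrupt.h>

static irqreturn_t demo_hardirq(int irq, void *data)
{
	/* Hard-IRQ context: finish here or defer to the thread. */
	return fast_path_possible(data) ? IRQ_HANDLED : IRQ_WAKE_THREAD;
}

static irqreturn_t demo_thread_fn(int irq, void *data)
{
	/* Process context: may sleep. */
	return IRQ_HANDLED;
}

static int demo_register(struct device *dev, int irq, void *data)
{
	return devm_request_threaded_irq(dev, irq, demo_hardirq,
					 demo_thread_fn,
					 IRQF_ONESHOT | IRQF_SHARED,
					 "demo", data);
}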
static int ufshcd_clear_tm_cmd(struct ufs_hba *hba, int tag)
{
int err = 0;
@@ -7245,7 +7299,7 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
* bound to fail since dev_cmd.query and dev_cmd.type were left empty.
* read the response directly ignoring all errors.
*/
ufshcd_issue_dev_cmd(hba, lrbp, tag, QUERY_REQ_TIMEOUT);
ufshcd_issue_dev_cmd(hba, lrbp, tag, dev_cmd_timeout);
/* just copy the upiu response as it is */
memcpy(rsp_upiu, lrbp->ucd_rsp_ptr, sizeof(*rsp_upiu));
@@ -8107,6 +8161,9 @@ static void ufshcd_wb_probe(struct ufs_hba *hba, const u8 *desc_buf)
*/
dev_info->wb_buffer_type = desc_buf[DEVICE_DESC_PARAM_WB_TYPE];
dev_info->ext_wb_sup = get_unaligned_be16(desc_buf +
DEVICE_DESC_PARAM_EXT_WB_SUP);
dev_info->b_presrv_uspc_en =
desc_buf[DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN];
@@ -8678,7 +8735,7 @@ static void ufshcd_set_timestamp_attr(struct ufs_hba *hba)
put_unaligned_be64(ktime_get_real_ns(), &upiu_data->osf3);
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, dev_cmd_timeout);
if (err)
dev_err(hba->dev, "%s: failed to set timestamp %d\n",
@@ -8793,6 +8850,7 @@ static void ufshcd_config_mcq(struct ufs_hba *hba)
u32 intrs;
ret = ufshcd_mcq_vops_config_esi(hba);
hba->mcq_esi_enabled = !ret;
dev_info(hba->dev, "ESI %sconfigured\n", ret ? "is not " : "");
intrs = UFSHCD_ENABLE_MCQ_INTRS;
@@ -10654,7 +10712,8 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
/* IRQ registration */
err = devm_request_irq(dev, irq, ufshcd_intr, IRQF_SHARED, UFSHCD, hba);
err = devm_request_threaded_irq(dev, irq, ufshcd_intr, ufshcd_threaded_intr,
IRQF_ONESHOT | IRQF_SHARED, UFSHCD, hba);
if (err) {
dev_err(hba->dev, "request irq failed\n");
goto out_disable;

View File

@@ -5,6 +5,7 @@
#include <linux/acpi.h>
#include <linux/clk.h>
#include <linux/cleanup.h>
#include <linux/delay.h>
#include <linux/devfreq.h>
#include <linux/gpio/consumer.h>
@@ -102,6 +103,24 @@ static const struct __ufs_qcom_bw_table {
[MODE_MAX][0][0] = { 7643136, 819200 },
};
static const struct {
int nminor;
char *prefix;
} testbus_info[TSTBUS_MAX] = {
[TSTBUS_UAWM] = {32, "TSTBUS_UAWM"},
[TSTBUS_UARM] = {32, "TSTBUS_UARM"},
[TSTBUS_TXUC] = {32, "TSTBUS_TXUC"},
[TSTBUS_RXUC] = {32, "TSTBUS_RXUC"},
[TSTBUS_DFC] = {32, "TSTBUS_DFC"},
[TSTBUS_TRLUT] = {32, "TSTBUS_TRLUT"},
[TSTBUS_TMRLUT] = {32, "TSTBUS_TMRLUT"},
[TSTBUS_OCSC] = {32, "TSTBUS_OCSC"},
[TSTBUS_UTP_HCI] = {32, "TSTBUS_UTP_HCI"},
[TSTBUS_COMBINED] = {32, "TSTBUS_COMBINED"},
[TSTBUS_WRAPPER] = {32, "TSTBUS_WRAPPER"},
[TSTBUS_UNIPRO] = {256, "TSTBUS_UNIPRO"},
};
static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host);
static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, unsigned long freq);
@@ -173,7 +192,7 @@ static int ufs_qcom_ice_init(struct ufs_qcom_host *host)
profile->ll_ops = ufs_qcom_crypto_ops;
profile->max_dun_bytes_supported = 8;
profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
profile->key_types_supported = qcom_ice_get_supported_key_type(ice);
profile->dev = dev;
/*
@@ -221,17 +240,8 @@ static int ufs_qcom_ice_keyslot_program(struct blk_crypto_profile *profile,
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
int err;
/* Only AES-256-XTS has been tested so far. */
if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
return -EOPNOTSUPP;
ufshcd_hold(hba);
err = qcom_ice_program_key(host->ice,
QCOM_ICE_CRYPTO_ALG_AES_XTS,
QCOM_ICE_CRYPTO_KEY_SIZE_256,
key->bytes,
key->crypto_cfg.data_unit_size / 512,
slot);
err = qcom_ice_program_key(host->ice, slot, key);
ufshcd_release(hba);
return err;
}
@@ -250,9 +260,53 @@ static int ufs_qcom_ice_keyslot_evict(struct blk_crypto_profile *profile,
return err;
}
static int ufs_qcom_ice_derive_sw_secret(struct blk_crypto_profile *profile,
const u8 *eph_key, size_t eph_key_size,
u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
{
struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
return qcom_ice_derive_sw_secret(host->ice, eph_key, eph_key_size,
sw_secret);
}
static int ufs_qcom_ice_import_key(struct blk_crypto_profile *profile,
const u8 *raw_key, size_t raw_key_size,
u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
{
struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
return qcom_ice_import_key(host->ice, raw_key, raw_key_size, lt_key);
}
static int ufs_qcom_ice_generate_key(struct blk_crypto_profile *profile,
u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
{
struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
return qcom_ice_generate_key(host->ice, lt_key);
}
static int ufs_qcom_ice_prepare_key(struct blk_crypto_profile *profile,
const u8 *lt_key, size_t lt_key_size,
u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
{
struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
return qcom_ice_prepare_key(host->ice, lt_key, lt_key_size, eph_key);
}
static const struct blk_crypto_ll_ops ufs_qcom_crypto_ops = {
.keyslot_program = ufs_qcom_ice_keyslot_program,
.keyslot_evict = ufs_qcom_ice_keyslot_evict,
.derive_sw_secret = ufs_qcom_ice_derive_sw_secret,
.import_key = ufs_qcom_ice_import_key,
.generate_key = ufs_qcom_ice_generate_key,
.prepare_key = ufs_qcom_ice_prepare_key,
};
#else
@@ -1609,6 +1663,85 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host)
return 0;
}
static void ufs_qcom_dump_testbus(struct ufs_hba *hba)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
int i, j, nminor = 0, testbus_len = 0;
u32 *testbus __free(kfree) = NULL;
char *prefix;
testbus = kmalloc_array(256, sizeof(u32), GFP_KERNEL);
if (!testbus)
return;
for (j = 0; j < TSTBUS_MAX; j++) {
nminor = testbus_info[j].nminor;
prefix = testbus_info[j].prefix;
host->testbus.select_major = j;
testbus_len = nminor * sizeof(u32);
for (i = 0; i < nminor; i++) {
host->testbus.select_minor = i;
ufs_qcom_testbus_config(host);
testbus[i] = ufshcd_readl(hba, UFS_TEST_BUS);
}
print_hex_dump(KERN_ERR, prefix, DUMP_PREFIX_OFFSET,
16, 4, testbus, testbus_len, false);
}
}
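The u32 *testbus __free(kfree) = NULL; declaration above uses the scope-based cleanup helpers from <linux/cleanup.h> (newly included by this diff): the buffer is kfree()d automatically on every exit path. A toy illustration, with demo_scoped_alloc() hypothetical:

#include <linux/cleanup.h>
#include <linux/slab.h>

static int demo_scoped_alloc(size_t len)
{
	u8 *buf __free(kfree) = kzalloc(len, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	/* ... use buf ... */
	return 0;	/* buf is freed here automatically */
}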
static int ufs_qcom_dump_regs(struct ufs_hba *hba, size_t offset, size_t len,
const char *prefix, enum ufshcd_res id)
{
u32 *regs __free(kfree) = NULL;
size_t pos;
if (offset % 4 != 0 || len % 4 != 0)
return -EINVAL;
regs = kzalloc(len, GFP_ATOMIC);
if (!regs)
return -ENOMEM;
for (pos = 0; pos < len; pos += 4)
regs[pos / 4] = readl(hba->res[id].base + offset + pos);
print_hex_dump(KERN_ERR, prefix,
len > 4 ? DUMP_PREFIX_OFFSET : DUMP_PREFIX_NONE,
16, 4, regs, len, false);
return 0;
}
static void ufs_qcom_dump_mcq_hci_regs(struct ufs_hba *hba)
{
struct dump_info {
size_t offset;
size_t len;
const char *prefix;
enum ufshcd_res id;
};
struct dump_info mcq_dumps[] = {
{0x0, 256 * 4, "MCQ HCI-0 ", RES_MCQ},
{0x400, 256 * 4, "MCQ HCI-1 ", RES_MCQ},
{0x0, 5 * 4, "MCQ VS-0 ", RES_MCQ_VS},
{0x0, 256 * 4, "MCQ SQD-0 ", RES_MCQ_SQD},
{0x400, 256 * 4, "MCQ SQD-1 ", RES_MCQ_SQD},
{0x800, 256 * 4, "MCQ SQD-2 ", RES_MCQ_SQD},
{0xc00, 256 * 4, "MCQ SQD-3 ", RES_MCQ_SQD},
{0x1000, 256 * 4, "MCQ SQD-4 ", RES_MCQ_SQD},
{0x1400, 256 * 4, "MCQ SQD-5 ", RES_MCQ_SQD},
{0x1800, 256 * 4, "MCQ SQD-6 ", RES_MCQ_SQD},
{0x1c00, 256 * 4, "MCQ SQD-7 ", RES_MCQ_SQD},
};
for (int i = 0; i < ARRAY_SIZE(mcq_dumps); i++) {
ufs_qcom_dump_regs(hba, mcq_dumps[i].offset, mcq_dumps[i].len,
mcq_dumps[i].prefix, mcq_dumps[i].id);
cond_resched();
}
}
static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
{
u32 reg;
@@ -1616,6 +1749,15 @@ static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
host = ufshcd_get_variant(hba);
dev_err(hba->dev, "HW_H8_ENTER_CNT=%d\n", ufshcd_readl(hba, REG_UFS_HW_H8_ENTER_CNT));
dev_err(hba->dev, "HW_H8_EXIT_CNT=%d\n", ufshcd_readl(hba, REG_UFS_HW_H8_EXIT_CNT));
dev_err(hba->dev, "SW_H8_ENTER_CNT=%d\n", ufshcd_readl(hba, REG_UFS_SW_H8_ENTER_CNT));
dev_err(hba->dev, "SW_H8_EXIT_CNT=%d\n", ufshcd_readl(hba, REG_UFS_SW_H8_EXIT_CNT));
dev_err(hba->dev, "SW_AFTER_HW_H8_ENTER_CNT=%d\n",
ufshcd_readl(hba, REG_UFS_SW_AFTER_HW_H8_ENTER_CNT));
ufshcd_dump_regs(hba, REG_UFS_SYS1CLK_1US, 16 * 4,
"HCI Vendor Specific Registers ");
@@ -1658,6 +1800,23 @@ static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
reg = ufs_qcom_get_debug_reg_offset(host, UFS_DBG_RD_REG_TMRLUT);
ufshcd_dump_regs(hba, reg, 9 * 4, "UFS_DBG_RD_REG_TMRLUT ");
if (hba->mcq_enabled) {
reg = ufs_qcom_get_debug_reg_offset(host, UFS_RD_REG_MCQ);
ufshcd_dump_regs(hba, reg, 64 * 4, "HCI MCQ Debug Registers ");
}
/* Ensure the dumps below occur only in task context, as they make blocking calls. */
if (in_task()) {
/* Dump MCQ Host Vendor Specific Registers */
if (hba->mcq_enabled)
ufs_qcom_dump_mcq_hci_regs(hba);
/* voluntarily yield the CPU as we are dumping too much data */
ufshcd_dump_regs(hba, UFS_TEST_BUS, 4, "UFS_TEST_BUS ");
cond_resched();
ufs_qcom_dump_testbus(hba);
}
}
/**

View File

@@ -50,6 +50,8 @@ enum {
*/
UFS_AH8_CFG = 0xFC,
UFS_RD_REG_MCQ = 0xD00,
REG_UFS_MEM_ICE_CONFIG = 0x260C,
REG_UFS_MEM_ICE_NUM_CORE = 0x2664,
@@ -75,6 +77,15 @@ enum {
UFS_UFS_DBG_RD_EDTL_RAM = 0x1900,
};
/* QCOM UFS HC vendor specific Hibern8 count registers */
enum {
REG_UFS_HW_H8_ENTER_CNT = 0x2700,
REG_UFS_SW_H8_ENTER_CNT = 0x2704,
REG_UFS_SW_AFTER_HW_H8_ENTER_CNT = 0x2708,
REG_UFS_HW_H8_EXIT_CNT = 0x270C,
REG_UFS_SW_H8_EXIT_CNT = 0x2710,
};
enum {
UFS_MEM_CQIS_VS = 0x8,
};

View File

@@ -346,10 +346,9 @@ static_assert(sizeof(struct scsi_stream_status) == 8);
/* GET STREAM STATUS parameter data */
struct scsi_stream_status_header {
__be32 len; /* length in bytes of stream_status[] array. */
__be32 len; /* length in bytes of following payload */
u16 reserved;
__be16 number_of_open_streams;
DECLARE_FLEX_ARRAY(struct scsi_stream_status, stream_status);
};
static_assert(sizeof(struct scsi_stream_status_header) == 8);
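With the flex-array member gone, callers presumably address the variable-length descriptors immediately after the fixed 8-byte header rather than through stream_status[]; a hedged sketch (demo_stream_status() is not from this series):

/* Illustrative accessor: descriptors follow the 8-byte header. */
static struct scsi_stream_status *
demo_stream_status(struct scsi_stream_status_header *h, u32 i)
{
	return (struct scsi_stream_status *)(h + 1) + i;
}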

View File

@@ -6,33 +6,29 @@
#ifndef __QCOM_ICE_H__
#define __QCOM_ICE_H__
#include <linux/blk-crypto.h>
#include <linux/types.h>
struct qcom_ice;
enum qcom_ice_crypto_key_size {
QCOM_ICE_CRYPTO_KEY_SIZE_INVALID = 0x0,
QCOM_ICE_CRYPTO_KEY_SIZE_128 = 0x1,
QCOM_ICE_CRYPTO_KEY_SIZE_192 = 0x2,
QCOM_ICE_CRYPTO_KEY_SIZE_256 = 0x3,
QCOM_ICE_CRYPTO_KEY_SIZE_512 = 0x4,
};
enum qcom_ice_crypto_alg {
QCOM_ICE_CRYPTO_ALG_AES_XTS = 0x0,
QCOM_ICE_CRYPTO_ALG_BITLOCKER_AES_CBC = 0x1,
QCOM_ICE_CRYPTO_ALG_AES_ECB = 0x2,
QCOM_ICE_CRYPTO_ALG_ESSIV_AES_CBC = 0x3,
};
int qcom_ice_enable(struct qcom_ice *ice);
int qcom_ice_resume(struct qcom_ice *ice);
int qcom_ice_suspend(struct qcom_ice *ice);
int qcom_ice_program_key(struct qcom_ice *ice,
u8 algorithm_id, u8 key_size,
const u8 crypto_key[], u8 data_unit_size,
int slot);
int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
const struct blk_crypto_key *blk_key);
int qcom_ice_evict_key(struct qcom_ice *ice, int slot);
enum blk_crypto_key_type qcom_ice_get_supported_key_type(struct qcom_ice *ice);
int qcom_ice_derive_sw_secret(struct qcom_ice *ice,
const u8 *eph_key, size_t eph_key_size,
u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]);
int qcom_ice_generate_key(struct qcom_ice *ice,
u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
int qcom_ice_prepare_key(struct qcom_ice *ice,
const u8 *lt_key, size_t lt_key_size,
u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
int qcom_ice_import_key(struct qcom_ice *ice,
const u8 *raw_key, size_t raw_key_size,
u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
struct qcom_ice *devm_of_qcom_ice_get(struct device *dev);
#endif /* __QCOM_ICE_H__ */
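Caller-side, the reworked interface folds what used to be explicit algorithm, key-size and data-unit-size arguments into the blk_crypto_key itself, as the UFS glue earlier in this diff shows. A minimal consumer sketch; the demo_* names are hypothetical:

static int demo_setup_key(struct qcom_ice *ice, unsigned int slot,
			  const struct blk_crypto_key *key)
{
	/* The ICE core derives algorithm and key size from *key. */
	return qcom_ice_program_key(ice, slot, key);
}

static void demo_drop_key(struct qcom_ice *ice, int slot)
{
	qcom_ice_evict_key(ice, slot);
}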

View File

@@ -157,6 +157,7 @@ enum se_cmd_flags_table {
SCF_USE_CPUID = (1 << 16),
SCF_TASK_ATTR_SET = (1 << 17),
SCF_TREAT_READ_AS_NORMAL = (1 << 18),
SCF_TASK_ORDERED_SYNC = (1 << 19),
};
/*
@@ -669,15 +670,19 @@ struct se_lun_acl {
struct se_ml_stat_grps ml_stat_grps;
};
struct se_dev_entry_io_stats {
u32 total_cmds;
u32 read_bytes;
u32 write_bytes;
};
struct se_dev_entry {
u64 mapped_lun;
u64 pr_res_key;
u64 creation_time;
bool lun_access_ro;
u32 attach_count;
atomic_long_t total_cmds;
atomic_long_t read_bytes;
atomic_long_t write_bytes;
struct se_dev_entry_io_stats __percpu *stats;
/* Used for PR SPEC_I_PT=1 and REGISTER_AND_MOVE */
struct kref pr_kref;
struct completion pr_comp;
@@ -800,6 +805,12 @@ struct se_device_queue {
struct se_cmd_queue sq;
};
struct se_dev_io_stats {
u32 total_cmds;
u32 read_bytes;
u32 write_bytes;
};
struct se_device {
/* Used for SAM Task Attribute ordering */
u32 dev_cur_ordered_id;
@@ -821,13 +832,10 @@ struct se_device {
atomic_long_t num_resets;
atomic_long_t aborts_complete;
atomic_long_t aborts_no_task;
atomic_long_t num_cmds;
atomic_long_t read_bytes;
atomic_long_t write_bytes;
struct se_dev_io_stats __percpu *stats;
/* Active commands on this virtual SE device */
atomic_t non_ordered;
struct percpu_ref non_ordered;
bool ordered_sync_in_progress;
atomic_t delayed_cmd_count;
atomic_t dev_qf_count;
u32 export_count;
spinlock_t delayed_cmd_lock;
@@ -890,7 +898,7 @@ struct target_opcode_descriptor {
u8 specific_timeout;
u16 nominal_timeout;
u16 recommended_timeout;
bool (*enabled)(struct target_opcode_descriptor *descr,
bool (*enabled)(const struct target_opcode_descriptor *descr,
struct se_cmd *cmd);
void (*update_usage_bits)(u8 *usage_bits,
struct se_device *dev);

View File

@@ -182,6 +182,9 @@ enum attr_idn {
QUERY_ATTR_IDN_CURR_WB_BUFF_SIZE = 0x1F,
QUERY_ATTR_IDN_TIMESTAMP = 0x30,
QUERY_ATTR_IDN_DEV_LVL_EXCEPTION_ID = 0x34,
QUERY_ATTR_IDN_WB_BUF_RESIZE_HINT = 0x3C,
QUERY_ATTR_IDN_WB_BUF_RESIZE_EN = 0x3D,
QUERY_ATTR_IDN_WB_BUF_RESIZE_STATUS = 0x3E,
};
/* Descriptor idn for Query requests */
@@ -290,6 +293,7 @@ enum device_desc_param {
DEVICE_DESC_PARAM_PRDCT_REV = 0x2A,
DEVICE_DESC_PARAM_HPB_VER = 0x40,
DEVICE_DESC_PARAM_HPB_CONTROL = 0x42,
DEVICE_DESC_PARAM_EXT_WB_SUP = 0x4D,
DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP = 0x4F,
DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN = 0x53,
DEVICE_DESC_PARAM_WB_TYPE = 0x54,
@@ -384,6 +388,11 @@ enum {
UFSHCD_AMP = 3,
};
/* Possible values for wExtendedWriteBoosterSupport */
enum {
UFS_DEV_WB_BUF_RESIZE = BIT(0),
};
/* Possible values for dExtendedUFSFeaturesSupport */
enum {
UFS_DEV_HIGH_TEMP_NOTIF = BIT(4),
@@ -457,6 +466,28 @@ enum ufs_ref_clk_freq {
REF_CLK_FREQ_INVAL = -1,
};
/* bWriteBoosterBufferResizeEn attribute */
enum wb_resize_en {
WB_RESIZE_EN_IDLE = 0,
WB_RESIZE_EN_DECREASE = 1,
WB_RESIZE_EN_INCREASE = 2,
};
/* bWriteBoosterBufferResizeHint attribute */
enum wb_resize_hint {
WB_RESIZE_HINT_KEEP = 0,
WB_RESIZE_HINT_DECREASE = 1,
WB_RESIZE_HINT_INCREASE = 2,
};
/* bWriteBoosterBufferResizeStatus attribute */
enum wb_resize_status {
WB_RESIZE_STATUS_IDLE = 0,
WB_RESIZE_STATUS_IN_PROGRESS = 1,
WB_RESIZE_STATUS_COMPLETE_SUCCESS = 2,
WB_RESIZE_STATUS_GENERAL_FAILURE = 3,
};
/* Query response result code */
enum {
QUERY_RESULT_SUCCESS = 0x00,
@@ -581,6 +612,7 @@ struct ufs_dev_info {
bool wb_buf_flush_enabled;
u8 wb_dedicated_lu;
u8 wb_buffer_type;
u16 ext_wb_sup;
bool b_rpm_dev_flush_capable;
u8 b_presrv_uspc_en;

View File

@@ -501,8 +501,6 @@ struct ufs_event_hist {
/**
* struct ufs_stats - keeps usage/err statistics
* @last_intr_status: record the last interrupt status.
* @last_intr_ts: record the last interrupt timestamp.
* @hibern8_exit_cnt: Counter to keep track of number of exits,
* reset this after link-startup.
* @last_hibern8_exit_tstamp: Set time after the hibern8 exit.
@@ -510,9 +508,6 @@ struct ufs_event_hist {
* @event: array with event history.
*/
struct ufs_stats {
u32 last_intr_status;
u64 last_intr_ts;
u32 hibern8_exit_cnt;
u64 last_hibern8_exit_tstamp;
struct ufs_event_hist event[UFS_EVT_CNT];
@@ -959,6 +954,7 @@ enum ufshcd_mcq_opr {
* ufshcd_resume_complete()
* @mcq_sup: is mcq supported by UFSHC
* @mcq_enabled: is mcq ready to accept requests
* @mcq_esi_enabled: is mcq ESI configured
* @res: array of resource info of MCQ registers
* @mcq_base: Multi circular queue registers base address
* @uhq: array of supported hardware queues
@@ -1130,6 +1126,7 @@ struct ufs_hba {
bool mcq_sup;
bool lsdb_sup;
bool mcq_enabled;
bool mcq_esi_enabled;
struct ufshcd_res_info res[RES_MAX];
void __iomem *mcq_base;
struct ufs_hw_queue *uhq;
@@ -1476,6 +1473,7 @@ int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *r
struct scatterlist *sg_list, enum dma_data_direction dir);
int ufshcd_wb_toggle(struct ufs_hba *hba, bool enable);
int ufshcd_wb_toggle_buf_flush(struct ufs_hba *hba, bool enable);
int ufshcd_wb_set_resize_en(struct ufs_hba *hba, enum wb_resize_en en_mode);
int ufshcd_suspend_prepare(struct device *dev);
int __ufshcd_suspend_prepare(struct device *dev, bool rpm_ok_for_spm);
void ufshcd_resume_complete(struct device *dev);