Merge branch 'pci/controller/dwc'

- Move struct dwc_pcie_vsec_id to include/linux/pcie-dwc.h, where it can be
  shared by debugfs, perf, sysfs, etc. (Manivannan Sadhasivam)

- Add dw_pcie_find_vsec_capability() to locate Vendor Specific Extended
  Capabilities (Shradha Todi)

- Add debugfs-based Silicon Debug, Error Injection, and Statistical Counter
  support for DWC (Shradha Todi)

- Add debugfs property to expose LTSSM status of DWC PCIe link (Hans Zhang)

- Add Rockchip Vendor ID and Vendor Specific ID of RAS DES Capability so
  the DWC debugfs features work for Rockchip as well (Niklas Cassel)

* pci/controller/dwc:
  PCI: dw-rockchip: Hide broken ATS capability for RK3588 running in EP mode
  PCI: dwc: ep: Add dw_pcie_ep_hide_ext_capability()
  PCI: dwc: ep: Return -ENOMEM for allocation failures
  PCI: dwc: Add Rockchip to the RAS DES allowed vendor list
  PCI: Add Rockchip Vendor ID
  PCI: dwc: Add debugfs property to provide LTSSM status of the PCIe link
  PCI: dwc: Add debugfs based Statistical Counter support for DWC
  PCI: dwc: Add debugfs based Error Injection support for DWC
  PCI: dwc: Add debugfs based Silicon Debug support for DWC
  PCI: dwc: Add helper to find the Vendor Specific Extended Capability (VSEC)
  perf/dwc_pcie: Move common DWC struct definitions to 'pcie-dwc.h'
Bjorn Helgaas 2025-03-27 13:14:49 -05:00
commit ba4751ae1a
16 changed files with 1070 additions and 25 deletions


@@ -0,0 +1,157 @@
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_debug/lane_detect
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: (RW) Write the lane number to be checked for detection. Read
will return whether the PHY indicates receiver detection on the
selected lane. The default selected lane is Lane0.
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_debug/rx_valid
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: (RW) Write the lane number to be checked as valid or invalid.
Read will return the status of the PIPE RXVALID signal of the
selected lane. The default selected lane is Lane0.
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_err_inj/<error>
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: The "rasdes_err_inj" directory can be used to inject errors
into the system. The possible errors that can be injected are:
1) tx_lcrc - TLP LCRC error injection TX Path
2) b16_crc_dllp - 16b CRC error injection of ACK/NAK DLLP
3) b16_crc_upd_fc - 16b CRC error injection of Update-FC DLLP
4) tx_ecrc - TLP ECRC error injection TX Path
5) fcrc_tlp - TLP FCRC error injection TX Path
6) parity_tsos - Parity error of TSOS
7) parity_skpos - Parity error of SKPOS
8) rx_lcrc - LCRC error injection RX Path
9) rx_ecrc - ECRC error injection RX Path
10) tlp_err_seq - TLP SEQ# error
11) ack_nak_dllp_seq - ACK/NAK DLLP SEQ# error
12) ack_nak_dllp - Block transmission of ACK/NAK DLLPs
13) upd_fc_dllp - Block transmission of UpdateFC DLLPs
14) nak_dllp - Always transmit NAK DLLP
15) inv_sync_hdr_sym - Invert SYNC header
16) com_pad_ts1 - COM/PAD TS1 ordered set
17) com_pad_ts2 - COM/PAD TS2 ordered set
18) com_fts - COM/FTS FTS ordered set
19) com_idl - COM/IDL E-idle ordered set
20) end_edb - END/EDB symbol
21) stp_sdp - STP/SDP symbol
22) com_skp - COM/SKP SKP ordered set
23) posted_tlp_hdr - Posted TLP Header credit value control
24) non_post_tlp_hdr - Non-Posted TLP Header credit value control
25) cmpl_tlp_hdr - Completion TLP Header credit value control
26) posted_tlp_data - Posted TLP Data credit value control
27) non_post_tlp_data - Non-Posted TLP Data credit value control
28) cmpl_tlp_data - Completion TLP Data credit value control
29) duplicate_tlp - Generates duplicate TLPs
30) nullified_tlp - Generates Nullified TLPs
(WO) Writing to the attribute will prepare the controller to
inject the respective error in the next transmission of data.
The parameters required for the write vary as follows:
- Errors 10 and 11 are sequence errors. The write command:
echo <count> <diff> > /sys/kernel/debug/dwc_pcie_<dev>/rasdes_err_inj/<error>
<count>
Number of errors to be injected
<diff>
The difference to add to or subtract from the natural
sequence number to generate the sequence error.
Allowed range is -4095 to 4095
- Errors 23 to 28 are credit value error insertions. The write
command:
echo <count> <diff> <vc> > /sys/kernel/debug/dwc_pcie_<dev>/rasdes_err_inj/<error>
<count>
Number of errors to be injected
<diff>
The difference to add to or subtract from the UpdateFC
credit value. Allowed range is -4095 to 4095
<vc>
Target VC number
- All other errors. The write command:
echo <count> > /sys/kernel/debug/dwc_pcie_<dev>/rasdes_err_inj/<error>
<count>
Number of errors to be injected
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_event_counters/<event>/counter_enable
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: The "rasdes_event_counters" directory can be used to collect
statistical data about the number of times a certain event has
occurred in the controller. The possible events are:
1) EBUF Overflow
2) EBUF Underrun
3) Decode Error
4) Running Disparity Error
5) SKP OS Parity Error
6) SYNC Header Error
7) Rx Valid De-assertion
8) CTL SKP OS Parity Error
9) 1st Retimer Parity Error
10) 2nd Retimer Parity Error
11) Margin CRC and Parity Error
12) Detect EI Infer
13) Receiver Error
14) RX Recovery Req
15) N_FTS Timeout
16) Framing Error
17) Deskew Error
18) Framing Error In L0
19) Deskew Uncompleted Error
20) Bad TLP
21) LCRC Error
22) Bad DLLP
23) Replay Number Rollover
24) Replay Timeout
25) Rx Nak DLLP
26) Tx Nak DLLP
27) Retry TLP
28) FC Timeout
29) Poisoned TLP
30) ECRC Error
31) Unsupported Request
32) Completer Abort
33) Completion Timeout
34) EBUF SKP Add
35) EBUF SKP Del
(RW) Write 1 to enable the event counter and 0 to disable it.
Read will return whether the counter is currently enabled or
disabled. The counter is disabled by default.
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_event_counters/<event>/counter_value
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: (RO) Read will return the current value of the event counter.
To reset the counter, disable it and then re-enable it using
the "counter_enable" attribute.
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_event_counters/<event>/lane_select
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: (RW) Some events in the event list are lane-specific events.
These include events 1 through 11, as well as 34 and 35. Write
the lane number for which you wish the counter to be enabled,
disabled, or its value dumped. Read will return the currently
selected lane number. Lane0 is selected by default.
What: /sys/kernel/debug/dwc_pcie_<dev>/ltssm_status
Date: February 2025
Contact: Hans Zhang <18255117159@163.com>
Description: (RO) Read will return the current PCIe LTSSM state as both a
string and a raw value.


@@ -18123,6 +18123,7 @@ S: Maintained
F: Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml
F: Documentation/devicetree/bindings/pci/snps,dw-pcie.yaml
F: drivers/pci/controller/dwc/*designware*
F: include/linux/pcie-dwc.h
PCI DRIVER FOR TI DRA7XX/J721E
M: Vignesh Raghavendra <vigneshr@ti.com>


@@ -86,7 +86,6 @@
#define PCI_DEVICE_ID_RENESAS_R8A774E1 0x0025
#define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031
#define PCI_VENDOR_ID_ROCKCHIP 0x1d87
#define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588
static DEFINE_IDA(pci_endpoint_test_ida);


@@ -6,6 +6,16 @@ menu "DesignWare-based PCIe controllers"
config PCIE_DW
bool
config PCIE_DW_DEBUGFS
bool "DesignWare PCIe debugfs entries"
depends on DEBUG_FS
depends on PCIE_DW_HOST || PCIE_DW_EP
help
Say Y here to enable debugfs entries for the PCIe controller. These
entries provide various debug features related to the controller and
expose the RAS DES capabilities such as Silicon Debug, Error Injection
and Statistical Counters.
config PCIE_DW_HOST
bool
select PCIE_DW


@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PCIE_DW) += pcie-designware.o
obj-$(CONFIG_PCIE_DW_DEBUGFS) += pcie-designware-debugfs.o
obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o


@@ -0,0 +1,677 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Synopsys DesignWare PCIe controller debugfs driver
*
* Copyright (C) 2025 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Shradha Todi <shradha.t@samsung.com>
*/
#include <linux/debugfs.h>
#include "pcie-designware.h"
#define SD_STATUS_L1LANE_REG 0xb0
#define PIPE_RXVALID BIT(18)
#define PIPE_DETECT_LANE BIT(17)
#define LANE_SELECT GENMASK(3, 0)
#define ERR_INJ0_OFF 0x34
#define EINJ_VAL_DIFF GENMASK(28, 16)
#define EINJ_VC_NUM GENMASK(14, 12)
#define EINJ_TYPE_SHIFT 8
#define EINJ0_TYPE GENMASK(11, 8)
#define EINJ1_TYPE BIT(8)
#define EINJ2_TYPE GENMASK(9, 8)
#define EINJ3_TYPE GENMASK(10, 8)
#define EINJ4_TYPE GENMASK(10, 8)
#define EINJ5_TYPE BIT(8)
#define EINJ_COUNT GENMASK(7, 0)
#define ERR_INJ_ENABLE_REG 0x30
#define RAS_DES_EVENT_COUNTER_DATA_REG 0xc
#define RAS_DES_EVENT_COUNTER_CTRL_REG 0x8
#define EVENT_COUNTER_GROUP_SELECT GENMASK(27, 24)
#define EVENT_COUNTER_EVENT_SELECT GENMASK(23, 16)
#define EVENT_COUNTER_LANE_SELECT GENMASK(11, 8)
#define EVENT_COUNTER_STATUS BIT(7)
#define EVENT_COUNTER_ENABLE GENMASK(4, 2)
#define PER_EVENT_ON 0x3
#define PER_EVENT_OFF 0x1
#define DWC_DEBUGFS_BUF_MAX 128
/**
* struct dwc_pcie_rasdes_info - Stores controller common information
* @ras_cap_offset: RAS DES vendor specific extended capability offset
* @reg_event_lock: Mutex used for RAS DES shadow event registers
*
* Any parameter constant to all files of the debugfs hierarchy for a single
* controller will be stored in this struct. It is allocated and assigned to
* controller specific struct dw_pcie during initialization.
*/
struct dwc_pcie_rasdes_info {
u32 ras_cap_offset;
struct mutex reg_event_lock;
};
/**
* struct dwc_pcie_rasdes_priv - Stores file specific private data information
* @pci: Reference to the dw_pcie structure
* @idx: Index of specific file related information in array of structs
*
* All debugfs files will have this struct as its private data.
*/
struct dwc_pcie_rasdes_priv {
struct dw_pcie *pci;
int idx;
};
/**
* struct dwc_pcie_err_inj - Store details about each error injection
* supported by DWC RAS DES
* @name: Name of the error that can be injected
* @err_inj_group: Group number to which the error belongs. The value
* can range from 0 to 5
* @err_inj_type: Each group can have multiple types of error
*/
struct dwc_pcie_err_inj {
const char *name;
u32 err_inj_group;
u32 err_inj_type;
};
static const struct dwc_pcie_err_inj err_inj_list[] = {
{"tx_lcrc", 0x0, 0x0},
{"b16_crc_dllp", 0x0, 0x1},
{"b16_crc_upd_fc", 0x0, 0x2},
{"tx_ecrc", 0x0, 0x3},
{"fcrc_tlp", 0x0, 0x4},
{"parity_tsos", 0x0, 0x5},
{"parity_skpos", 0x0, 0x6},
{"rx_lcrc", 0x0, 0x8},
{"rx_ecrc", 0x0, 0xb},
{"tlp_err_seq", 0x1, 0x0},
{"ack_nak_dllp_seq", 0x1, 0x1},
{"ack_nak_dllp", 0x2, 0x0},
{"upd_fc_dllp", 0x2, 0x1},
{"nak_dllp", 0x2, 0x2},
{"inv_sync_hdr_sym", 0x3, 0x0},
{"com_pad_ts1", 0x3, 0x1},
{"com_pad_ts2", 0x3, 0x2},
{"com_fts", 0x3, 0x3},
{"com_idl", 0x3, 0x4},
{"end_edb", 0x3, 0x5},
{"stp_sdp", 0x3, 0x6},
{"com_skp", 0x3, 0x7},
{"posted_tlp_hdr", 0x4, 0x0},
{"non_post_tlp_hdr", 0x4, 0x1},
{"cmpl_tlp_hdr", 0x4, 0x2},
{"posted_tlp_data", 0x4, 0x4},
{"non_post_tlp_data", 0x4, 0x5},
{"cmpl_tlp_data", 0x4, 0x6},
{"duplicate_tlp", 0x5, 0x0},
{"nullified_tlp", 0x5, 0x1},
};
static const u32 err_inj_type_mask[] = {
EINJ0_TYPE,
EINJ1_TYPE,
EINJ2_TYPE,
EINJ3_TYPE,
EINJ4_TYPE,
EINJ5_TYPE,
};
/**
* struct dwc_pcie_event_counter - Store details about each event counter
* supported in DWC RAS DES
* @name: Name of the event counter
* @group_no: Group number that the event belongs to. The value can range
* from 0 to 4
* @event_no: Event number of the particular event. The value ranges are:
* Group 0: 0 - 10
* Group 1: 5 - 13
* Group 2: 0 - 7
* Group 3: 0 - 5
* Group 4: 0 - 1
*/
struct dwc_pcie_event_counter {
const char *name;
u32 group_no;
u32 event_no;
};
static const struct dwc_pcie_event_counter event_list[] = {
{"ebuf_overflow", 0x0, 0x0},
{"ebuf_underrun", 0x0, 0x1},
{"decode_err", 0x0, 0x2},
{"running_disparity_err", 0x0, 0x3},
{"skp_os_parity_err", 0x0, 0x4},
{"sync_header_err", 0x0, 0x5},
{"rx_valid_deassertion", 0x0, 0x6},
{"ctl_skp_os_parity_err", 0x0, 0x7},
{"retimer_parity_err_1st", 0x0, 0x8},
{"retimer_parity_err_2nd", 0x0, 0x9},
{"margin_crc_parity_err", 0x0, 0xA},
{"detect_ei_infer", 0x1, 0x5},
{"receiver_err", 0x1, 0x6},
{"rx_recovery_req", 0x1, 0x7},
{"n_fts_timeout", 0x1, 0x8},
{"framing_err", 0x1, 0x9},
{"deskew_err", 0x1, 0xa},
{"framing_err_in_l0", 0x1, 0xc},
{"deskew_uncompleted_err", 0x1, 0xd},
{"bad_tlp", 0x2, 0x0},
{"lcrc_err", 0x2, 0x1},
{"bad_dllp", 0x2, 0x2},
{"replay_num_rollover", 0x2, 0x3},
{"replay_timeout", 0x2, 0x4},
{"rx_nak_dllp", 0x2, 0x5},
{"tx_nak_dllp", 0x2, 0x6},
{"retry_tlp", 0x2, 0x7},
{"fc_timeout", 0x3, 0x0},
{"poisoned_tlp", 0x3, 0x1},
{"ecrc_error", 0x3, 0x2},
{"unsupported_request", 0x3, 0x3},
{"completer_abort", 0x3, 0x4},
{"completion_timeout", 0x3, 0x5},
{"ebuf_skp_add", 0x4, 0x0},
{"ebuf_skp_del", 0x4, 0x1},
};
static ssize_t lane_detect_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dw_pcie *pci = file->private_data;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG);
val = FIELD_GET(PIPE_DETECT_LANE, val);
if (val)
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Lane Detected\n");
else
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Lane Undetected\n");
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static ssize_t lane_detect_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct dw_pcie *pci = file->private_data;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
u32 lane, val;
val = kstrtou32_from_user(buf, count, 0, &lane);
if (val)
return val;
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG);
val &= ~(LANE_SELECT);
val |= FIELD_PREP(LANE_SELECT, lane);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG, val);
return count;
}
static ssize_t rx_valid_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dw_pcie *pci = file->private_data;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG);
val = FIELD_GET(PIPE_RXVALID, val);
if (val)
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "RX Valid\n");
else
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "RX Invalid\n");
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static ssize_t rx_valid_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
return lane_detect_write(file, buf, count, ppos);
}
static ssize_t err_inj_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
u32 val, counter, vc_num, err_group, type_mask;
int val_diff = 0;
char *kern_buf;
err_group = err_inj_list[pdata->idx].err_inj_group;
type_mask = err_inj_type_mask[err_group];
kern_buf = memdup_user_nul(buf, count);
if (IS_ERR(kern_buf))
return PTR_ERR(kern_buf);
if (err_group == 4) {
val = sscanf(kern_buf, "%u %d %u", &counter, &val_diff, &vc_num);
if ((val != 3) || (val_diff < -4095 || val_diff > 4095)) {
kfree(kern_buf);
return -EINVAL;
}
} else if (err_group == 1) {
val = sscanf(kern_buf, "%u %d", &counter, &val_diff);
if ((val != 2) || (val_diff < -4095 || val_diff > 4095)) {
kfree(kern_buf);
return -EINVAL;
}
} else {
val = kstrtou32(kern_buf, 0, &counter);
if (val) {
kfree(kern_buf);
return val;
}
}
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + ERR_INJ0_OFF + (0x4 * err_group));
val &= ~(type_mask | EINJ_COUNT);
val |= ((err_inj_list[pdata->idx].err_inj_type << EINJ_TYPE_SHIFT) & type_mask);
val |= FIELD_PREP(EINJ_COUNT, counter);
if (err_group == 1 || err_group == 4) {
val &= ~(EINJ_VAL_DIFF);
val |= FIELD_PREP(EINJ_VAL_DIFF, val_diff);
}
if (err_group == 4) {
val &= ~(EINJ_VC_NUM);
val |= FIELD_PREP(EINJ_VC_NUM, vc_num);
}
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + ERR_INJ0_OFF + (0x4 * err_group), val);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + ERR_INJ_ENABLE_REG, (0x1 << err_group));
kfree(kern_buf);
return count;
}
static void set_event_number(struct dwc_pcie_rasdes_priv *pdata,
struct dw_pcie *pci, struct dwc_pcie_rasdes_info *rinfo)
{
u32 val;
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
val &= ~EVENT_COUNTER_ENABLE;
val &= ~(EVENT_COUNTER_GROUP_SELECT | EVENT_COUNTER_EVENT_SELECT);
val |= FIELD_PREP(EVENT_COUNTER_GROUP_SELECT, event_list[pdata->idx].group_no);
val |= FIELD_PREP(EVENT_COUNTER_EVENT_SELECT, event_list[pdata->idx].event_no);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG, val);
}
static ssize_t counter_enable_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
mutex_unlock(&rinfo->reg_event_lock);
val = FIELD_GET(EVENT_COUNTER_STATUS, val);
if (val)
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Counter Enabled\n");
else
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Counter Disabled\n");
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static ssize_t counter_enable_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
u32 val, enable;
val = kstrtou32_from_user(buf, count, 0, &enable);
if (val)
return val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
if (enable)
val |= FIELD_PREP(EVENT_COUNTER_ENABLE, PER_EVENT_ON);
else
val |= FIELD_PREP(EVENT_COUNTER_ENABLE, PER_EVENT_OFF);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG, val);
/*
* While enabling the counter, always read the status back to check if
* it is enabled or not. Return error if it is not enabled to let the
* users know that the counter is not supported on the platform.
*/
if (enable) {
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset +
RAS_DES_EVENT_COUNTER_CTRL_REG);
if (!FIELD_GET(EVENT_COUNTER_STATUS, val)) {
mutex_unlock(&rinfo->reg_event_lock);
return -EOPNOTSUPP;
}
}
mutex_unlock(&rinfo->reg_event_lock);
return count;
}
static ssize_t counter_lane_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
mutex_unlock(&rinfo->reg_event_lock);
val = FIELD_GET(EVENT_COUNTER_LANE_SELECT, val);
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Lane: %d\n", val);
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static ssize_t counter_lane_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
u32 val, lane;
val = kstrtou32_from_user(buf, count, 0, &lane);
if (val)
return val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
val &= ~(EVENT_COUNTER_LANE_SELECT);
val |= FIELD_PREP(EVENT_COUNTER_LANE_SELECT, lane);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG, val);
mutex_unlock(&rinfo->reg_event_lock);
return count;
}
static ssize_t counter_value_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_DATA_REG);
mutex_unlock(&rinfo->reg_event_lock);
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Counter value: %d\n", val);
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static const char *ltssm_status_string(enum dw_pcie_ltssm ltssm)
{
const char *str;
switch (ltssm) {
#define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3);
default:
str = "DW_PCIE_LTSSM_UNKNOWN";
break;
}
return str + strlen("DW_PCIE_LTSSM_");
}
static int ltssm_status_show(struct seq_file *s, void *v)
{
struct dw_pcie *pci = s->private;
enum dw_pcie_ltssm val;
val = dw_pcie_get_ltssm(pci);
seq_printf(s, "%s (0x%02x)\n", ltssm_status_string(val), val);
return 0;
}
static int ltssm_status_open(struct inode *inode, struct file *file)
{
return single_open(file, ltssm_status_show, inode->i_private);
}
#define dwc_debugfs_create(name) \
debugfs_create_file(#name, 0644, rasdes_debug, pci, \
&dbg_ ## name ## _fops)
#define DWC_DEBUGFS_FOPS(name) \
static const struct file_operations dbg_ ## name ## _fops = { \
.open = simple_open, \
.read = name ## _read, \
.write = name ## _write \
}
DWC_DEBUGFS_FOPS(lane_detect);
DWC_DEBUGFS_FOPS(rx_valid);
static const struct file_operations dwc_pcie_err_inj_ops = {
.open = simple_open,
.write = err_inj_write,
};
static const struct file_operations dwc_pcie_counter_enable_ops = {
.open = simple_open,
.read = counter_enable_read,
.write = counter_enable_write,
};
static const struct file_operations dwc_pcie_counter_lane_ops = {
.open = simple_open,
.read = counter_lane_read,
.write = counter_lane_write,
};
static const struct file_operations dwc_pcie_counter_value_ops = {
.open = simple_open,
.read = counter_value_read,
};
static const struct file_operations dwc_pcie_ltssm_status_ops = {
.open = ltssm_status_open,
.read = seq_read,
};
static void dwc_pcie_rasdes_debugfs_deinit(struct dw_pcie *pci)
{
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
mutex_destroy(&rinfo->reg_event_lock);
}
static int dwc_pcie_rasdes_debugfs_init(struct dw_pcie *pci, struct dentry *dir)
{
struct dentry *rasdes_debug, *rasdes_err_inj;
struct dentry *rasdes_event_counter, *rasdes_events;
struct dwc_pcie_rasdes_info *rasdes_info;
struct dwc_pcie_rasdes_priv *priv_tmp;
struct device *dev = pci->dev;
int ras_cap, i, ret;
/*
* If a given SoC has no RAS DES capability, the following call is
* bound to return an error, breaking some existing platforms. So,
* return 0 here, as this is not necessarily an error.
*/
ras_cap = dw_pcie_find_rasdes_capability(pci);
if (!ras_cap) {
dev_dbg(dev, "no RAS DES capability available\n");
return 0;
}
rasdes_info = devm_kzalloc(dev, sizeof(*rasdes_info), GFP_KERNEL);
if (!rasdes_info)
return -ENOMEM;
/* Create subdirectories for Debug, Error Injection, Statistics. */
rasdes_debug = debugfs_create_dir("rasdes_debug", dir);
rasdes_err_inj = debugfs_create_dir("rasdes_err_inj", dir);
rasdes_event_counter = debugfs_create_dir("rasdes_event_counters", dir);
mutex_init(&rasdes_info->reg_event_lock);
rasdes_info->ras_cap_offset = ras_cap;
pci->debugfs->rasdes_info = rasdes_info;
/* Create debugfs files for Debug subdirectory. */
dwc_debugfs_create(lane_detect);
dwc_debugfs_create(rx_valid);
/* Create debugfs files for Error Injection subdirectory. */
for (i = 0; i < ARRAY_SIZE(err_inj_list); i++) {
priv_tmp = devm_kzalloc(dev, sizeof(*priv_tmp), GFP_KERNEL);
if (!priv_tmp) {
ret = -ENOMEM;
goto err_deinit;
}
priv_tmp->idx = i;
priv_tmp->pci = pci;
debugfs_create_file(err_inj_list[i].name, 0200, rasdes_err_inj, priv_tmp,
&dwc_pcie_err_inj_ops);
}
/* Create debugfs files for Statistical Counter subdirectory. */
for (i = 0; i < ARRAY_SIZE(event_list); i++) {
priv_tmp = devm_kzalloc(dev, sizeof(*priv_tmp), GFP_KERNEL);
if (!priv_tmp) {
ret = -ENOMEM;
goto err_deinit;
}
priv_tmp->idx = i;
priv_tmp->pci = pci;
rasdes_events = debugfs_create_dir(event_list[i].name, rasdes_event_counter);
if (event_list[i].group_no == 0 || event_list[i].group_no == 4) {
debugfs_create_file("lane_select", 0644, rasdes_events,
priv_tmp, &dwc_pcie_counter_lane_ops);
}
debugfs_create_file("counter_value", 0444, rasdes_events, priv_tmp,
&dwc_pcie_counter_value_ops);
debugfs_create_file("counter_enable", 0644, rasdes_events, priv_tmp,
&dwc_pcie_counter_enable_ops);
}
return 0;
err_deinit:
dwc_pcie_rasdes_debugfs_deinit(pci);
return ret;
}
static void dwc_pcie_ltssm_debugfs_init(struct dw_pcie *pci, struct dentry *dir)
{
debugfs_create_file("ltssm_status", 0444, dir, pci,
&dwc_pcie_ltssm_status_ops);
}
void dwc_pcie_debugfs_deinit(struct dw_pcie *pci)
{
if (!pci->debugfs)
return;
dwc_pcie_rasdes_debugfs_deinit(pci);
debugfs_remove_recursive(pci->debugfs->debug_dir);
}
void dwc_pcie_debugfs_init(struct dw_pcie *pci)
{
char dirname[DWC_DEBUGFS_BUF_MAX];
struct device *dev = pci->dev;
struct debugfs_info *debugfs;
struct dentry *dir;
int err;
/* Create main directory for each platform driver. */
snprintf(dirname, DWC_DEBUGFS_BUF_MAX, "dwc_pcie_%s", dev_name(dev));
dir = debugfs_create_dir(dirname, NULL);
debugfs = devm_kzalloc(dev, sizeof(*debugfs), GFP_KERNEL);
if (!debugfs)
return;
debugfs->debug_dir = dir;
pci->debugfs = debugfs;
err = dwc_pcie_rasdes_debugfs_init(pci, dir);
if (err)
dev_err(dev, "failed to initialize RAS DES debugfs, err=%d\n",
err);
dwc_pcie_ltssm_debugfs_init(pci, dir);
}
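
To make the field packing in err_inj_write() above concrete, here is a small standalone sketch (plain masks and shifts, not kernel code) for one assumed case: writing "3 -2" to the group 1 error "ack_nak_dllp_seq" (type 0x1). It prints the values the driver would write to the ERR_INJ1 and ERR_INJ_ENABLE registers inside the RAS DES capability.

#include <stdint.h>
#include <stdio.h>

#define EINJ_VAL_DIFF_MASK	0x1fff0000u	/* GENMASK(28, 16) */
#define EINJ1_TYPE_MASK		0x00000100u	/* BIT(8) */
#define EINJ_COUNT_MASK		0x000000ffu	/* GENMASK(7, 0) */
#define EINJ_TYPE_SHIFT		8

int main(void)
{
	uint32_t group = 1, type = 0x1, count = 3;	/* ack_nak_dllp_seq */
	int32_t diff = -2;				/* SEQ# difference */
	uint32_t val = 0;

	val |= (type << EINJ_TYPE_SHIFT) & EINJ1_TYPE_MASK;	/* error type */
	val |= count & EINJ_COUNT_MASK;				/* error count */
	val |= ((uint32_t)diff << 16) & EINJ_VAL_DIFF_MASK;	/* diff field */

	/* ERR_INJ<group> lives at 0x34 + 4 * group within the RAS DES cap */
	printf("ERR_INJ1 (cap + 0x%x) = 0x%08x\n", 0x34 + 4 * group, val);
	/* Writing BIT(group) to ERR_INJ_ENABLE (cap + 0x30) arms the injection */
	printf("ERR_INJ_ENABLE (cap + 0x30) = 0x%x\n", 1u << group);

	return 0;
}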


@@ -102,6 +102,45 @@ static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap)
return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
}
/**
* dw_pcie_ep_hide_ext_capability - Hide a capability from the linked list
* @pci: DWC PCI device
* @prev_cap: Capability preceding the capability that should be hidden
* @cap: Capability that should be hidden
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap)
{
u16 prev_cap_offset, cap_offset;
u32 prev_cap_header, cap_header;
prev_cap_offset = dw_pcie_find_ext_capability(pci, prev_cap);
if (!prev_cap_offset)
return -EINVAL;
prev_cap_header = dw_pcie_readl_dbi(pci, prev_cap_offset);
cap_offset = PCI_EXT_CAP_NEXT(prev_cap_header);
cap_header = dw_pcie_readl_dbi(pci, cap_offset);
/* cap must immediately follow prev_cap. */
if (PCI_EXT_CAP_ID(cap_header) != cap)
return -EINVAL;
/* Clear next ptr. */
prev_cap_header &= ~GENMASK(31, 20);
/* Set next ptr to next ptr of cap. */
prev_cap_header |= cap_header & GENMASK(31, 20);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, prev_cap_offset, prev_cap_header);
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_hide_ext_capability);
static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_header *hdr)
{
@@ -796,6 +835,7 @@ void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
dwc_pcie_debugfs_deinit(pci);
dw_pcie_edma_remove(pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_cleanup);
@@ -907,6 +947,7 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
if (ret)
return ret;
ret = -ENOMEM;
if (!ep->ib_window_map) {
ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows,
GFP_KERNEL);
@@ -971,6 +1012,8 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
dw_pcie_ep_init_non_sticky_registers(pci);
dwc_pcie_debugfs_init(pci);
return 0;
err_remove_edma:


@@ -548,6 +548,8 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (pp->ops->post_init)
pp->ops->post_init(pp);
dwc_pcie_debugfs_init(pci);
return 0;
err_stop_link:
@@ -572,6 +574,8 @@ void dw_pcie_host_deinit(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
dwc_pcie_debugfs_deinit(pci);
pci_stop_root_bus(pp->bridge->bus);
pci_remove_root_bus(pp->bridge->bus);


@@ -16,6 +16,7 @@
#include <linux/gpio/consumer.h>
#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/pcie-dwc.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <linux/types.h>
@@ -283,6 +284,51 @@ u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
}
EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id,
u16 vsec_id)
{
u16 vsec = 0;
u32 header;
if (vendor_id != dw_pcie_readw_dbi(pci, PCI_VENDOR_ID))
return 0;
while ((vsec = dw_pcie_find_next_ext_capability(pci, vsec,
PCI_EXT_CAP_ID_VNDR))) {
header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER);
if (PCI_VNDR_HEADER_ID(header) == vsec_id)
return vsec;
}
return 0;
}
static u16 dw_pcie_find_vsec_capability(struct dw_pcie *pci,
const struct dwc_pcie_vsec_id *vsec_ids)
{
const struct dwc_pcie_vsec_id *vid;
u16 vsec;
u32 header;
for (vid = vsec_ids; vid->vendor_id; vid++) {
vsec = __dw_pcie_find_vsec_capability(pci, vid->vendor_id,
vid->vsec_id);
if (vsec) {
header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER);
if (PCI_VNDR_HEADER_REV(header) == vid->vsec_rev)
return vsec;
}
}
return 0;
}
u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci)
{
return dw_pcie_find_vsec_capability(pci, dwc_pcie_rasdes_vsec_ids);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_rasdes_capability);
int dw_pcie_read(void __iomem *addr, int size, u32 *val)
{
if (!IS_ALIGNED((uintptr_t)addr, size)) {

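A minimal sketch of how a caller might use the new lookup helper, assuming DWC core/glue context with the pcie-designware.h declarations and DBI access available: dw_pcie_find_rasdes_capability() returns the RAS DES capability offset (or 0 when absent), after which registers inside the capability can be read with the usual DBI accessors. The function name below is hypothetical, and 0xb0 is simply the Silicon Debug status register (SD_STATUS_L1LANE_REG) used by the debugfs code earlier.

static void example_probe_rasdes(struct dw_pcie *pci)
{
	u16 ras_cap;
	u32 val;

	ras_cap = dw_pcie_find_rasdes_capability(pci);
	if (!ras_cap) {
		dev_dbg(pci->dev, "no RAS DES capability\n");
		return;
	}

	/* SD_STATUS_L1LANE_REG (0xb0) from the debugfs driver above */
	val = dw_pcie_readl_dbi(pci, ras_cap + 0xb0);
	dev_dbg(pci->dev, "RAS DES at 0x%x, SD status 0x%08x\n", ras_cap, val);
}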

@@ -330,9 +330,40 @@ enum dw_pcie_ltssm {
/* Need to align with PCIE_PORT_DEBUG0 bits 0:5 */
DW_PCIE_LTSSM_DETECT_QUIET = 0x0,
DW_PCIE_LTSSM_DETECT_ACT = 0x1,
DW_PCIE_LTSSM_POLL_ACTIVE = 0x2,
DW_PCIE_LTSSM_POLL_COMPLIANCE = 0x3,
DW_PCIE_LTSSM_POLL_CONFIG = 0x4,
DW_PCIE_LTSSM_PRE_DETECT_QUIET = 0x5,
DW_PCIE_LTSSM_DETECT_WAIT = 0x6,
DW_PCIE_LTSSM_CFG_LINKWD_START = 0x7,
DW_PCIE_LTSSM_CFG_LINKWD_ACEPT = 0x8,
DW_PCIE_LTSSM_CFG_LANENUM_WAI = 0x9,
DW_PCIE_LTSSM_CFG_LANENUM_ACEPT = 0xa,
DW_PCIE_LTSSM_CFG_COMPLETE = 0xb,
DW_PCIE_LTSSM_CFG_IDLE = 0xc,
DW_PCIE_LTSSM_RCVRY_LOCK = 0xd,
DW_PCIE_LTSSM_RCVRY_SPEED = 0xe,
DW_PCIE_LTSSM_RCVRY_RCVRCFG = 0xf,
DW_PCIE_LTSSM_RCVRY_IDLE = 0x10,
DW_PCIE_LTSSM_L0 = 0x11,
DW_PCIE_LTSSM_L0S = 0x12,
DW_PCIE_LTSSM_L123_SEND_EIDLE = 0x13,
DW_PCIE_LTSSM_L1_IDLE = 0x14,
DW_PCIE_LTSSM_L2_IDLE = 0x15,
DW_PCIE_LTSSM_L2_WAKE = 0x16,
DW_PCIE_LTSSM_DISABLED_ENTRY = 0x17,
DW_PCIE_LTSSM_DISABLED_IDLE = 0x18,
DW_PCIE_LTSSM_DISABLED = 0x19,
DW_PCIE_LTSSM_LPBK_ENTRY = 0x1a,
DW_PCIE_LTSSM_LPBK_ACTIVE = 0x1b,
DW_PCIE_LTSSM_LPBK_EXIT = 0x1c,
DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT = 0x1d,
DW_PCIE_LTSSM_HOT_RESET_ENTRY = 0x1e,
DW_PCIE_LTSSM_HOT_RESET = 0x1f,
DW_PCIE_LTSSM_RCVRY_EQ0 = 0x20,
DW_PCIE_LTSSM_RCVRY_EQ1 = 0x21,
DW_PCIE_LTSSM_RCVRY_EQ2 = 0x22,
DW_PCIE_LTSSM_RCVRY_EQ3 = 0x23,
DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF,
};
@@ -437,6 +468,11 @@ struct dw_pcie_ops {
void (*stop_link)(struct dw_pcie *pcie);
};
struct debugfs_info {
struct dentry *debug_dir;
void *rasdes_info;
};
struct dw_pcie {
struct device *dev;
void __iomem *dbi_base;
@@ -465,6 +501,7 @@ struct dw_pcie {
struct reset_control_bulk_data core_rsts[DW_PCIE_NUM_CORE_RSTS];
struct gpio_desc *pe_rst;
bool suspended;
struct debugfs_info *debugfs;
};
#define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
@@ -478,6 +515,7 @@ void dw_pcie_version_detect(struct dw_pcie *pci);
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci);
int dw_pcie_read(void __iomem *addr, int size, u32 *val);
int dw_pcie_write(void __iomem *addr, int size, u32 val);
@@ -743,6 +781,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
u16 interrupt_num);
void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar);
int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap);
struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no);
#else
@@ -800,10 +839,29 @@ static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
{
}
static inline int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci,
u8 prev_cap, u8 cap)
{
return 0;
}
static inline struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no)
{
return NULL;
}
#endif
#ifdef CONFIG_PCIE_DW_DEBUGFS
void dwc_pcie_debugfs_init(struct dw_pcie *pci);
void dwc_pcie_debugfs_deinit(struct dw_pcie *pci);
#else
static inline void dwc_pcie_debugfs_init(struct dw_pcie *pci)
{
}
static inline void dwc_pcie_debugfs_deinit(struct dw_pcie *pci)
{
}
#endif
#endif /* _PCIE_DESIGNWARE_H */


@@ -240,6 +240,34 @@ static const struct dw_pcie_host_ops rockchip_pcie_host_ops = {
.init = rockchip_pcie_host_init,
};
/*
* ATS does not work on RK3588 when running in EP mode.
*
* After the host has enabled ATS on the EP side, it will send an IOTLB
* invalidation request to the EP side. However, the RK3588 will never send
* a completion back and eventually the host will print an IOTLB_INV_TIMEOUT
* error, and the EP will not be operational. If we hide the ATS capability,
* things work as expected.
*/
static void rockchip_pcie_ep_hide_broken_ats_cap_rk3588(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct device *dev = pci->dev;
/* Only hide the ATS capability for RK3588 running in EP mode. */
if (!of_device_is_compatible(dev->of_node, "rockchip,rk3588-pcie-ep"))
return;
if (dw_pcie_ep_hide_ext_capability(pci, PCI_EXT_CAP_ID_SECPCI,
PCI_EXT_CAP_ID_ATS))
dev_err(dev, "failed to hide ATS capability\n");
}
static void rockchip_pcie_ep_pre_init(struct dw_pcie_ep *ep)
{
rockchip_pcie_ep_hide_broken_ats_cap_rk3588(ep);
}
static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -314,6 +342,7 @@
static const struct dw_pcie_ep_ops rockchip_pcie_ep_ops = {
.init = rockchip_pcie_ep_init,
.pre_init = rockchip_pcie_ep_pre_init,
.raise_irq = rockchip_pcie_raise_irq,
.get_features = rockchip_pcie_get_features,
};


@@ -367,7 +367,7 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
}
}
rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
rockchip_pcie_write(rockchip, PCI_VENDOR_ID_ROCKCHIP,
PCIE_CORE_CONFIG_VENDOR);
rockchip_pcie_write(rockchip,
PCI_CLASS_BRIDGE_PCI_NORMAL << 8,


@@ -200,7 +200,6 @@
#define AXI_WRAPPER_NOR_MSG 0xc
#define PCIE_RC_SEND_PME_OFF 0x11960
#define ROCKCHIP_VENDOR_ID 0x1d87
#define PCIE_LINK_IS_L2(x) \
(((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2)
#define PCIE_LINK_TRAINING_DONE(x) \


@@ -13,6 +13,7 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/pcie-dwc.h>
#include <linux/perf_event.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
@@ -99,26 +100,6 @@ struct dwc_pcie_dev_info {
struct list_head dev_node;
};
struct dwc_pcie_pmu_vsec_id {
u16 vendor_id;
u16 vsec_id;
u8 vsec_rev;
};
/*
* VSEC IDs are allocated by the vendor, so a given ID may mean different
* things to different vendors. See PCIe r6.0, sec 7.9.5.2.
*/
static const struct dwc_pcie_pmu_vsec_id dwc_pcie_pmu_vsec_ids[] = {
{ .vendor_id = PCI_VENDOR_ID_ALIBABA,
.vsec_id = 0x02, .vsec_rev = 0x4 },
{ .vendor_id = PCI_VENDOR_ID_AMPERE,
.vsec_id = 0x02, .vsec_rev = 0x4 },
{ .vendor_id = PCI_VENDOR_ID_QCOM,
.vsec_id = 0x02, .vsec_rev = 0x4 },
{} /* terminator */
};
static ssize_t cpumask_show(struct device *dev,
struct device_attribute *attr,
char *buf)
@@ -529,14 +510,14 @@ static void dwc_pcie_unregister_pmu(void *data)
static u16 dwc_pcie_des_cap(struct pci_dev *pdev)
{
const struct dwc_pcie_pmu_vsec_id *vid;
const struct dwc_pcie_vsec_id *vid;
u16 vsec;
u32 val;
if (!pci_is_pcie(pdev) || !(pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT))
return 0;
for (vid = dwc_pcie_pmu_vsec_ids; vid->vendor_id; vid++) {
for (vid = dwc_pcie_rasdes_vsec_ids; vid->vendor_id; vid++) {
vsec = pci_find_vsec_capability(pdev, vid->vendor_id,
vid->vsec_id);
if (vsec) {


@@ -2610,6 +2610,8 @@
#define PCI_VENDOR_ID_ZHAOXIN 0x1d17
#define PCI_VENDOR_ID_ROCKCHIP 0x1d87
#define PCI_VENDOR_ID_HYGON 0x1d94
#define PCI_VENDOR_ID_META 0x1d9b

include/linux/pcie-dwc.h (new file, 38 lines added)

@@ -0,0 +1,38 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2021-2023 Alibaba Inc.
* Copyright (C) 2025 Linaro Ltd.
*
* Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
*/
#ifndef LINUX_PCIE_DWC_H
#define LINUX_PCIE_DWC_H
#include <linux/pci_ids.h>
struct dwc_pcie_vsec_id {
u16 vendor_id;
u16 vsec_id;
u8 vsec_rev;
};
/*
* VSEC IDs are allocated by the vendor, so a given ID may mean different
* things to different vendors. See PCIe r6.0, sec 7.9.5.2.
*/
static const struct dwc_pcie_vsec_id dwc_pcie_rasdes_vsec_ids[] = {
{ .vendor_id = PCI_VENDOR_ID_ALIBABA,
.vsec_id = 0x02, .vsec_rev = 0x4 },
{ .vendor_id = PCI_VENDOR_ID_AMPERE,
.vsec_id = 0x02, .vsec_rev = 0x4 },
{ .vendor_id = PCI_VENDOR_ID_QCOM,
.vsec_id = 0x02, .vsec_rev = 0x4 },
{ .vendor_id = PCI_VENDOR_ID_ROCKCHIP,
.vsec_id = 0x02, .vsec_rev = 0x4 },
{ .vendor_id = PCI_VENDOR_ID_SAMSUNG,
.vsec_id = 0x02, .vsec_rev = 0x4 },
{}
};
#endif /* LINUX_PCIE_DWC_H */
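
Since the vsec_id and vsec_rev values in this table are matched against the Vendor-Specific Extended Capability header in config space, a short standalone decode of that header (PCIe r6.0, sec 7.9.5.2) may help: bits 15:0 carry the VSEC ID, bits 19:16 the VSEC Rev, and bits 31:20 the VSEC Length. The header value below is an assumed example chosen to match the RAS DES entries above (ID 0x02, Rev 0x4); the length is arbitrary.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t hdr = 0x10040002;	/* assumed example VSEC header */

	printf("VSEC ID:  0x%04x\n", hdr & 0xffff);		/* 0x0002 */
	printf("VSEC Rev: 0x%x\n", (hdr >> 16) & 0xf);		/* 0x4 */
	printf("VSEC Len: 0x%03x\n", (hdr >> 20) & 0xfff);	/* 0x100 */

	return 0;
}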