Merge tag 'coresight-next-v6.16' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/coresight/linux into char-misc-next

Suzuki writes:

coresight: updates for Linux v6.16

CoreSight self-hosted trace driver subsystem updates for Linux v6.16 include:
 - Clear CLAIM tags on device probe if self-hosted tags are set.
 - Support for perf AUX pause/resume in the CoreSight ETM PMU driver, with
   trace collection at pause.
 - Miscellaneous fixes for the subsystem.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>

* tag 'coresight-next-v6.16' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/coresight/linux: (27 commits)
  coresight: prevent deactivate active config while enabling the config
  coresight: holding cscfg_csdev_lock while removing cscfg from csdev
  coresight/etm4: fix missing disable active config
  coresight: etm4x: Fix timestamp bit field handling
  coresight: tmc: fix failure to disable/enable ETF after reading
  Documentation: coresight: Document AUX pause and resume
  coresight: perf: Update buffer on AUX pause
  coresight: tmc: Re-enable sink after buffer update
  coresight: perf: Support AUX trace pause and resume
  coresight: etm4x: Hook pause and resume callbacks
  coresight: Introduce pause and resume APIs for source
  coresight: etm4x: Extract the trace unit controlling
  coresight: cti: Replace inclusion by struct fwnode_handle forward declaration
  coresight: Disable MMIO logging for coresight stm driver
  coresight: replicator: Fix panic for clearing claim tag
  coresight: Add a KUnit test for coresight_find_default_sink()
  coresight: Remove extern from function declarations
  coresight: Remove inlines from static function definitions
  coresight: Clear self hosted claim tag on probe
  coresight: etm3x: Convert raw base pointer to struct coresight access
  ...
Committed by Greg Kroah-Hartman on 2025-05-22 18:04:43 +02:00 as commit bdc319f1c9.
33 changed files with 624 additions and 225 deletions.
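
A condensed sketch of the new AUX pause path may help orient the per-file
hunks that follow (this is an illustration, not part of the diff): when perf
stops an event with PERF_EF_PAUSE because an aux-action=pause event fired,
the ETM PMU stops the tracer through the new source op and harvests whatever
was traced up to that point. The function and parameter names below are
illustrative; locking, the per-CPU sink special case and error handling are
omitted (see the coresight-etm-perf.c hunk for the real etm_event_pause()).

	/* Illustrative sketch; assumes the CoreSight driver-internal headers. */
	#include <linux/coresight.h>
	#include <linux/perf_event.h>
	#include "coresight-priv.h"	/* coresight_pause_source() */

	static void aux_pause_sketch(struct perf_event *event,
				     struct coresight_device *source,
				     struct coresight_device *sink,
				     struct perf_output_handle *handle,
				     void *sink_config)
	{
		unsigned long size;

		/* Stop the tracer via the new pause_perf source op. */
		coresight_pause_source(source);

		/* Collect the trace generated up to the pause point... */
		size = sink_ops(sink)->update_buffer(sink, handle, sink_config);

		/* ...and hand it to the perf ring buffer, re-arming the handle. */
		if (size) {
			perf_aux_output_end(handle, size);
			perf_aux_output_begin(handle, event);
		}
	}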


@ -30,6 +30,19 @@ properties:
power-domains:
maxItems: 1
clocks:
minItems: 1
maxItems: 3
clock-names:
oneOf:
- items:
- enum: [apb_pclk, atclk]
- items: # Zynq-700
- const: apb_pclk
- const: dbg_trc
- const: dbg_apb
in-ports:
$ref: /schemas/graph.yaml#/properties/ports
additionalProperties: false


@ -78,6 +78,37 @@ enabled like::
Please refer to the kernel configuration help for more information.
Fine-grained tracing with AUX pause and resume
----------------------------------------------
Arm CoreSight may generate a large amount of hardware trace data, which
will lead to overhead in recording and distract users when reviewing
profiling result. To mitigate the issue of excessive trace data, Perf
provides AUX pause and resume functionality for fine-grained tracing.
The AUX pause and resume can be triggered by associated events. These
events can be ftrace tracepoints (including static and dynamic
tracepoints) or PMU events (e.g. CPU PMU cycle event). To create a perf
session with AUX pause / resume, three configuration terms are
introduced:
- "aux-action=start-paused": it is specified for the cs_etm PMU event to
launch in a paused state.
- "aux-action=pause": an associated event is specified with this term
to pause AUX trace.
- "aux-action=resume": an associated event is specified with this term
to resume AUX trace.
Example for triggering AUX pause and resume with ftrace tracepoints::
perf record -e cs_etm/aux-action=start-paused/k,syscalls:sys_enter_openat/aux-action=resume/,syscalls:sys_exit_openat/aux-action=pause/ ls
Example for triggering AUX pause and resume with PMU event::
perf record -a -e cs_etm/aux-action=start-paused/k \
-e cycles/aux-action=pause,period=10000000/ \
-e cycles/aux-action=resume,period=1050000/ -- sleep 1
Perf test - Verify kernel and userspace perf CoreSight work
-----------------------------------------------------------


@ -259,4 +259,13 @@ config CORESIGHT_DUMMY
To compile this driver as a module, choose M here: the module will be
called coresight-dummy.
config CORESIGHT_KUNIT_TESTS
tristate "Enable Coresight unit tests"
depends on KUNIT
default KUNIT_ALL_TESTS
help
Enable Coresight unit tests. Only useful for development and not
intended for production.
endif


@ -22,6 +22,8 @@ condflags := \
$(call cc-option, -Wstringop-truncation)
subdir-ccflags-y += $(condflags)
CFLAGS_coresight-stm.o := -D__DISABLE_TRACE_MMIO__
obj-$(CONFIG_CORESIGHT) += coresight.o
coresight-y := coresight-core.o coresight-etm-perf.o coresight-platform.o \
coresight-sysfs.o coresight-syscfg.o coresight-config.o \
@ -53,3 +55,4 @@ obj-$(CONFIG_ULTRASOC_SMB) += ultrasoc-smb.o
obj-$(CONFIG_CORESIGHT_DUMMY) += coresight-dummy.o
obj-$(CONFIG_CORESIGHT_CTCU) += coresight-ctcu.o
coresight-ctcu-y := coresight-ctcu-core.o
obj-$(CONFIG_CORESIGHT_KUNIT_TESTS) += coresight-kunit-tests.o


@ -113,9 +113,8 @@ typedef u64 cate_t;
* containing the data page pointer for @offset. If @daddrp is not NULL,
* @daddrp points the DMA address of the beginning of the table.
*/
static inline cate_t *catu_get_table(struct tmc_sg_table *catu_table,
unsigned long offset,
dma_addr_t *daddrp)
static cate_t *catu_get_table(struct tmc_sg_table *catu_table, unsigned long offset,
dma_addr_t *daddrp)
{
unsigned long buf_size = tmc_sg_table_buf_size(catu_table);
unsigned int table_nr, pg_idx, pg_offset;
@ -165,12 +164,12 @@ static void catu_dump_table(struct tmc_sg_table *catu_table)
}
#else
static inline void catu_dump_table(struct tmc_sg_table *catu_table)
static void catu_dump_table(struct tmc_sg_table *catu_table)
{
}
#endif
static inline cate_t catu_make_entry(dma_addr_t addr)
static cate_t catu_make_entry(dma_addr_t addr)
{
return addr ? CATU_VALID_ENTRY(addr) : 0;
}
@ -390,7 +389,7 @@ static const struct attribute_group *catu_groups[] = {
};
static inline int catu_wait_for_ready(struct catu_drvdata *drvdata)
static int catu_wait_for_ready(struct catu_drvdata *drvdata)
{
struct csdev_access *csa = &drvdata->csdev->access;
@ -458,12 +457,17 @@ static int catu_enable_hw(struct catu_drvdata *drvdata, enum cs_mode cs_mode,
static int catu_enable(struct coresight_device *csdev, enum cs_mode mode,
void *data)
{
int rc;
int rc = 0;
struct catu_drvdata *catu_drvdata = csdev_to_catu_drvdata(csdev);
CS_UNLOCK(catu_drvdata->base);
rc = catu_enable_hw(catu_drvdata, mode, data);
CS_LOCK(catu_drvdata->base);
guard(raw_spinlock_irqsave)(&catu_drvdata->spinlock);
if (csdev->refcnt == 0) {
CS_UNLOCK(catu_drvdata->base);
rc = catu_enable_hw(catu_drvdata, mode, data);
CS_LOCK(catu_drvdata->base);
}
if (!rc)
csdev->refcnt++;
return rc;
}
@ -486,12 +490,15 @@ static int catu_disable_hw(struct catu_drvdata *drvdata)
static int catu_disable(struct coresight_device *csdev, void *__unused)
{
int rc;
int rc = 0;
struct catu_drvdata *catu_drvdata = csdev_to_catu_drvdata(csdev);
CS_UNLOCK(catu_drvdata->base);
rc = catu_disable_hw(catu_drvdata);
CS_LOCK(catu_drvdata->base);
guard(raw_spinlock_irqsave)(&catu_drvdata->spinlock);
if (--csdev->refcnt == 0) {
CS_UNLOCK(catu_drvdata->base);
rc = catu_disable_hw(catu_drvdata);
CS_LOCK(catu_drvdata->base);
}
return rc;
}
@ -550,6 +557,7 @@ static int __catu_probe(struct device *dev, struct resource *res)
dev->platform_data = pdata;
drvdata->base = base;
raw_spin_lock_init(&drvdata->spinlock);
catu_desc.access = CSDEV_ACCESS_IOMEM(base);
catu_desc.pdata = pdata;
catu_desc.dev = dev;
@ -558,6 +566,7 @@ static int __catu_probe(struct device *dev, struct resource *res)
catu_desc.subtype.helper_subtype = CORESIGHT_DEV_SUBTYPE_HELPER_CATU;
catu_desc.ops = &catu_ops;
coresight_clear_self_claim_tag(&catu_desc.access);
drvdata->csdev = coresight_register(&catu_desc);
if (IS_ERR(drvdata->csdev))
ret = PTR_ERR(drvdata->csdev);
@ -702,7 +711,7 @@ static int __init catu_init(void)
{
int ret;
ret = coresight_init_driver("catu", &catu_driver, &catu_platform_driver);
ret = coresight_init_driver("catu", &catu_driver, &catu_platform_driver, THIS_MODULE);
tmc_etr_set_catu_ops(&etr_catu_buf_ops);
return ret;
}


@ -65,6 +65,7 @@ struct catu_drvdata {
void __iomem *base;
struct coresight_device *csdev;
int irq;
raw_spinlock_t spinlock;
};
#define CATU_REG32(name, offset) \


@ -228,7 +228,7 @@ struct cscfg_feature_csdev {
* @feats_csdev:references to the device features to enable.
*/
struct cscfg_config_csdev {
const struct cscfg_config_desc *config_desc;
struct cscfg_config_desc *config_desc;
struct coresight_device *csdev;
bool enabled;
struct list_head node;


@ -129,34 +129,36 @@ coresight_find_out_connection(struct coresight_device *csdev,
return ERR_PTR(-ENODEV);
}
static inline u32 coresight_read_claim_tags(struct coresight_device *csdev)
static u32 coresight_read_claim_tags_unlocked(struct coresight_device *csdev)
{
return csdev_access_relaxed_read32(&csdev->access, CORESIGHT_CLAIMCLR);
return FIELD_GET(CORESIGHT_CLAIM_MASK,
csdev_access_relaxed_read32(&csdev->access, CORESIGHT_CLAIMCLR));
}
static inline bool coresight_is_claimed_self_hosted(struct coresight_device *csdev)
{
return coresight_read_claim_tags(csdev) == CORESIGHT_CLAIM_SELF_HOSTED;
}
static inline bool coresight_is_claimed_any(struct coresight_device *csdev)
{
return coresight_read_claim_tags(csdev) != 0;
}
static inline void coresight_set_claim_tags(struct coresight_device *csdev)
static void coresight_set_self_claim_tag_unlocked(struct coresight_device *csdev)
{
csdev_access_relaxed_write32(&csdev->access, CORESIGHT_CLAIM_SELF_HOSTED,
CORESIGHT_CLAIMSET);
isb();
}
static inline void coresight_clear_claim_tags(struct coresight_device *csdev)
void coresight_clear_self_claim_tag(struct csdev_access *csa)
{
csdev_access_relaxed_write32(&csdev->access, CORESIGHT_CLAIM_SELF_HOSTED,
if (csa->io_mem)
CS_UNLOCK(csa->base);
coresight_clear_self_claim_tag_unlocked(csa);
if (csa->io_mem)
CS_LOCK(csa->base);
}
EXPORT_SYMBOL_GPL(coresight_clear_self_claim_tag);
void coresight_clear_self_claim_tag_unlocked(struct csdev_access *csa)
{
csdev_access_relaxed_write32(csa, CORESIGHT_CLAIM_SELF_HOSTED,
CORESIGHT_CLAIMCLR);
isb();
}
EXPORT_SYMBOL_GPL(coresight_clear_self_claim_tag_unlocked);
/*
* coresight_claim_device_unlocked : Claim the device for self-hosted usage
@ -170,18 +172,41 @@ static inline void coresight_clear_claim_tags(struct coresight_device *csdev)
*/
int coresight_claim_device_unlocked(struct coresight_device *csdev)
{
int tag;
struct csdev_access *csa;
if (WARN_ON(!csdev))
return -EINVAL;
if (coresight_is_claimed_any(csdev))
csa = &csdev->access;
tag = coresight_read_claim_tags_unlocked(csdev);
switch (tag) {
case CORESIGHT_CLAIM_FREE:
coresight_set_self_claim_tag_unlocked(csdev);
if (coresight_read_claim_tags_unlocked(csdev) == CORESIGHT_CLAIM_SELF_HOSTED)
return 0;
/* There was a race setting the tag, clean up and fail */
coresight_clear_self_claim_tag_unlocked(csa);
dev_dbg(&csdev->dev, "Busy: Couldn't set self claim tag");
return -EBUSY;
coresight_set_claim_tags(csdev);
if (coresight_is_claimed_self_hosted(csdev))
return 0;
/* There was a race setting the tags, clean up and fail */
coresight_clear_claim_tags(csdev);
return -EBUSY;
case CORESIGHT_CLAIM_EXTERNAL:
/* External debug is an expected state, so log and report BUSY */
dev_dbg(&csdev->dev, "Busy: Claimed by external debugger");
return -EBUSY;
default:
case CORESIGHT_CLAIM_SELF_HOSTED:
case CORESIGHT_CLAIM_INVALID:
/*
* Warn here because we clear a lingering self hosted tag
* on probe, so other tag combinations are impossible.
*/
dev_err_once(&csdev->dev, "Invalid claim tag state: %x", tag);
return -EBUSY;
}
}
EXPORT_SYMBOL_GPL(coresight_claim_device_unlocked);
@ -201,7 +226,7 @@ int coresight_claim_device(struct coresight_device *csdev)
EXPORT_SYMBOL_GPL(coresight_claim_device);
/*
* coresight_disclaim_device_unlocked : Clear the claim tags for the device.
* coresight_disclaim_device_unlocked : Clear the claim tag for the device.
* Called with CS_UNLOCKed for the component.
*/
void coresight_disclaim_device_unlocked(struct coresight_device *csdev)
@ -210,15 +235,15 @@ void coresight_disclaim_device_unlocked(struct coresight_device *csdev)
if (WARN_ON(!csdev))
return;
if (coresight_is_claimed_self_hosted(csdev))
coresight_clear_claim_tags(csdev);
if (coresight_read_claim_tags_unlocked(csdev) == CORESIGHT_CLAIM_SELF_HOSTED)
coresight_clear_self_claim_tag_unlocked(&csdev->access);
else
/*
* The external agent may have not honoured our claim
* and has manipulated it. Or something else has seriously
* gone wrong in our driver.
*/
WARN_ON_ONCE(1);
dev_WARN_ONCE(&csdev->dev, 1, "External agent took claim tag");
}
EXPORT_SYMBOL_GPL(coresight_disclaim_device_unlocked);
@ -367,6 +392,28 @@ void coresight_disable_source(struct coresight_device *csdev, void *data)
}
EXPORT_SYMBOL_GPL(coresight_disable_source);
void coresight_pause_source(struct coresight_device *csdev)
{
if (!coresight_is_percpu_source(csdev))
return;
if (source_ops(csdev)->pause_perf)
source_ops(csdev)->pause_perf(csdev);
}
EXPORT_SYMBOL_GPL(coresight_pause_source);
int coresight_resume_source(struct coresight_device *csdev)
{
if (!coresight_is_percpu_source(csdev))
return -EOPNOTSUPP;
if (!source_ops(csdev)->resume_perf)
return -EOPNOTSUPP;
return source_ops(csdev)->resume_perf(csdev);
}
EXPORT_SYMBOL_GPL(coresight_resume_source);
/*
* coresight_disable_path_from : Disable components in the given path beyond
* @nd in the list. If @nd is NULL, all the components, except the SOURCE are
@ -465,7 +512,7 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
/* Enable all helpers adjacent to the path first */
ret = coresight_enable_helpers(csdev, mode, path);
if (ret)
goto err;
goto err_disable_path;
/*
* ETF devices are tricky... They can be a link or a sink,
* depending on how they are configured. If an ETF has been
@ -486,8 +533,10 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
* that need disabling. Disabling the path here
* would mean we could disrupt an existing session.
*/
if (ret)
if (ret) {
coresight_disable_helpers(csdev, path);
goto out;
}
break;
case CORESIGHT_DEV_TYPE_SOURCE:
/* sources are enabled from either sysFS or Perf */
@ -497,16 +546,19 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
child = list_next_entry(nd, link)->csdev;
ret = coresight_enable_link(csdev, parent, child, source);
if (ret)
goto err;
goto err_disable_helpers;
break;
default:
goto err;
ret = -EINVAL;
goto err_disable_helpers;
}
}
out:
return ret;
err:
err_disable_helpers:
coresight_disable_helpers(csdev, path);
err_disable_path:
coresight_disable_path_from(path, nd);
goto out;
}
@ -579,7 +631,7 @@ struct coresight_device *coresight_get_sink_by_id(u32 id)
* Return true in successful case and power up the device.
* Return false when failed to get reference of module.
*/
static inline bool coresight_get_ref(struct coresight_device *csdev)
static bool coresight_get_ref(struct coresight_device *csdev)
{
struct device *dev = csdev->dev.parent;
@ -598,7 +650,7 @@ static inline bool coresight_get_ref(struct coresight_device *csdev)
*
* @csdev: The coresight device to decrement a reference from.
*/
static inline void coresight_put_ref(struct coresight_device *csdev)
static void coresight_put_ref(struct coresight_device *csdev)
{
struct device *dev = csdev->dev.parent;
@ -821,7 +873,7 @@ void coresight_release_path(struct coresight_path *path)
}
/* return true if the device is a suitable type for a default sink */
static inline bool coresight_is_def_sink_type(struct coresight_device *csdev)
static bool coresight_is_def_sink_type(struct coresight_device *csdev)
{
/* sink & correct subtype */
if (((csdev->type == CORESIGHT_DEV_TYPE_SINK) ||
@ -959,6 +1011,7 @@ coresight_find_default_sink(struct coresight_device *csdev)
}
return csdev->def_sink;
}
EXPORT_SYMBOL_GPL(coresight_find_default_sink);
static int coresight_remove_sink_ref(struct device *dev, void *data)
{
@ -1385,8 +1438,8 @@ EXPORT_SYMBOL_GPL(coresight_unregister);
*
* Returns the index of the entry, when found. Otherwise, -ENOENT.
*/
static inline int coresight_search_device_idx(struct coresight_dev_list *dict,
struct fwnode_handle *fwnode)
static int coresight_search_device_idx(struct coresight_dev_list *dict,
struct fwnode_handle *fwnode)
{
int i;
@ -1585,17 +1638,17 @@ module_init(coresight_init);
module_exit(coresight_exit);
int coresight_init_driver(const char *drv, struct amba_driver *amba_drv,
struct platform_driver *pdev_drv)
struct platform_driver *pdev_drv, struct module *owner)
{
int ret;
ret = amba_driver_register(amba_drv);
ret = __amba_driver_register(amba_drv, owner);
if (ret) {
pr_err("%s: error registering AMBA driver\n", drv);
return ret;
}
ret = platform_driver_register(pdev_drv);
ret = __platform_driver_register(pdev_drv, owner);
if (!ret)
return 0;


@ -774,7 +774,8 @@ static struct platform_driver debug_platform_driver = {
static int __init debug_init(void)
{
return coresight_init_driver("debug", &debug_driver, &debug_platform_driver);
return coresight_init_driver("debug", &debug_driver, &debug_platform_driver,
THIS_MODULE);
}
static void __exit debug_exit(void)


@ -931,6 +931,8 @@ static int cti_probe(struct amba_device *adev, const struct amba_id *id)
cti_desc.ops = &cti_ops;
cti_desc.groups = drvdata->ctidev.con_groups;
cti_desc.dev = dev;
coresight_clear_self_claim_tag(&cti_desc.access);
drvdata->csdev = coresight_register(&cti_desc);
if (IS_ERR(drvdata->csdev)) {
ret = PTR_ERR(drvdata->csdev);


@ -9,7 +9,6 @@
#include <linux/coresight.h>
#include <linux/device.h>
#include <linux/fwnode.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/sysfs.h>
@ -17,6 +16,8 @@
#include "coresight-priv.h"
struct fwnode_handle;
/*
* Device registers
* 0x000 - 0x144: CTI programming and status


@ -95,7 +95,7 @@ struct etb_drvdata {
static int etb_set_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle);
static inline unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata)
static unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata)
{
return readl_relaxed(drvdata->base + ETB_RAM_DEPTH_REG);
}
@ -772,6 +772,8 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
desc.pdata = pdata;
desc.dev = dev;
desc.groups = coresight_etb_groups;
coresight_clear_self_claim_tag(&desc.access);
drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev))
return PTR_ERR(drvdata->csdev);


@ -365,6 +365,18 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
continue;
}
/*
* If AUX pause feature is enabled but the ETM driver does not
* support the operations, clear this CPU from the mask and
* continue to next one.
*/
if (event->attr.aux_start_paused &&
(!source_ops(csdev)->pause_perf || !source_ops(csdev)->resume_perf)) {
dev_err_once(&csdev->dev, "AUX pause is not supported.\n");
cpumask_clear_cpu(cpu, mask);
continue;
}
/*
* No sink provided - look for a default sink for all the ETMs,
* where this event can be scheduled.
@ -450,6 +462,15 @@ err:
goto out;
}
static int etm_event_resume(struct coresight_device *csdev,
struct etm_ctxt *ctxt)
{
if (!ctxt->event_data)
return 0;
return coresight_resume_source(csdev);
}
static void etm_event_start(struct perf_event *event, int flags)
{
int cpu = smp_processor_id();
@ -463,6 +484,14 @@ static void etm_event_start(struct perf_event *event, int flags)
if (!csdev)
goto fail;
if (flags & PERF_EF_RESUME) {
if (etm_event_resume(csdev, ctxt) < 0) {
dev_err(&csdev->dev, "Failed to resume ETM event.\n");
goto fail;
}
return;
}
/* Have we messed up our tracking ? */
if (WARN_ON(ctxt->event_data))
goto fail;
@ -545,6 +574,55 @@ fail:
return;
}
static void etm_event_pause(struct perf_event *event,
struct coresight_device *csdev,
struct etm_ctxt *ctxt)
{
int cpu = smp_processor_id();
struct coresight_device *sink;
struct perf_output_handle *handle = &ctxt->handle;
struct coresight_path *path;
unsigned long size;
if (!ctxt->event_data)
return;
/* Stop tracer */
coresight_pause_source(csdev);
path = etm_event_cpu_path(ctxt->event_data, cpu);
sink = coresight_get_sink(path);
if (WARN_ON_ONCE(!sink))
return;
/*
* The per CPU sink has own interrupt handling, it might have
* race condition with updating buffer on AUX trace pause if
* it is invoked from NMI. To avoid the race condition,
* disallows updating buffer for the per CPU sink case.
*/
if (coresight_is_percpu_sink(sink))
return;
if (WARN_ON_ONCE(handle->event != event))
return;
if (!sink_ops(sink)->update_buffer)
return;
size = sink_ops(sink)->update_buffer(sink, handle,
ctxt->event_data->snk_config);
if (READ_ONCE(handle->event)) {
if (!size)
return;
perf_aux_output_end(handle, size);
perf_aux_output_begin(handle, event);
} else {
WARN_ON_ONCE(size);
}
}
static void etm_event_stop(struct perf_event *event, int mode)
{
int cpu = smp_processor_id();
@ -555,6 +633,9 @@ static void etm_event_stop(struct perf_event *event, int mode)
struct etm_event_data *event_data;
struct coresight_path *path;
if (mode & PERF_EF_PAUSE)
return etm_event_pause(event, csdev, ctxt);
/*
* If we still have access to the event_data via handle,
* confirm that we haven't messed up the tracking.
@ -899,7 +980,8 @@ int __init etm_perf_init(void)
int ret;
etm_pmu.capabilities = (PERF_PMU_CAP_EXCLUSIVE |
PERF_PMU_CAP_ITRACE);
PERF_PMU_CAP_ITRACE |
PERF_PMU_CAP_AUX_PAUSE);
etm_pmu.attr_groups = etm_pmu_attr_groups;
etm_pmu.task_ctx_nr = perf_sw_context;


@ -229,7 +229,7 @@ struct etm_config {
* @config: structure holding configuration parameters.
*/
struct etm_drvdata {
void __iomem *base;
struct csdev_access csa;
struct clk *atclk;
struct coresight_device *csdev;
spinlock_t spinlock;
@ -260,7 +260,7 @@ static inline void etm_writel(struct etm_drvdata *drvdata,
"invalid CP14 access to ETM reg: %#x", off);
}
} else {
writel_relaxed(val, drvdata->base + off);
writel_relaxed(val, drvdata->csa.base + off);
}
}
@ -274,7 +274,7 @@ static inline unsigned int etm_readl(struct etm_drvdata *drvdata, u32 off)
"invalid CP14 access to ETM reg: %#x", off);
}
} else {
val = readl_relaxed(drvdata->base + off);
val = readl_relaxed(drvdata->csa.base + off);
}
return val;


@ -86,9 +86,9 @@ static void etm_set_pwrup(struct etm_drvdata *drvdata)
{
u32 etmpdcr;
etmpdcr = readl_relaxed(drvdata->base + ETMPDCR);
etmpdcr = readl_relaxed(drvdata->csa.base + ETMPDCR);
etmpdcr |= ETMPDCR_PWD_UP;
writel_relaxed(etmpdcr, drvdata->base + ETMPDCR);
writel_relaxed(etmpdcr, drvdata->csa.base + ETMPDCR);
/* Ensure pwrup completes before subsequent cp14 accesses */
mb();
isb();
@ -101,9 +101,9 @@ static void etm_clr_pwrup(struct etm_drvdata *drvdata)
/* Ensure pending cp14 accesses complete before clearing pwrup */
mb();
isb();
etmpdcr = readl_relaxed(drvdata->base + ETMPDCR);
etmpdcr = readl_relaxed(drvdata->csa.base + ETMPDCR);
etmpdcr &= ~ETMPDCR_PWD_UP;
writel_relaxed(etmpdcr, drvdata->base + ETMPDCR);
writel_relaxed(etmpdcr, drvdata->csa.base + ETMPDCR);
}
/**
@ -365,7 +365,7 @@ static int etm_enable_hw(struct etm_drvdata *drvdata)
struct etm_config *config = &drvdata->config;
struct coresight_device *csdev = drvdata->csdev;
CS_UNLOCK(drvdata->base);
CS_UNLOCK(drvdata->csa.base);
rc = coresight_claim_device_unlocked(csdev);
if (rc)
@ -427,7 +427,7 @@ static int etm_enable_hw(struct etm_drvdata *drvdata)
etm_clr_prog(drvdata);
done:
CS_LOCK(drvdata->base);
CS_LOCK(drvdata->csa.base);
dev_dbg(&drvdata->csdev->dev, "cpu: %d enable smp call done: %d\n",
drvdata->cpu, rc);
@ -549,7 +549,7 @@ static void etm_disable_hw(void *info)
struct etm_config *config = &drvdata->config;
struct coresight_device *csdev = drvdata->csdev;
CS_UNLOCK(drvdata->base);
CS_UNLOCK(drvdata->csa.base);
etm_set_prog(drvdata);
/* Read back sequencer and counters for post trace analysis */
@ -561,7 +561,7 @@ static void etm_disable_hw(void *info)
etm_set_pwrdwn(drvdata);
coresight_disclaim_device_unlocked(csdev);
CS_LOCK(drvdata->base);
CS_LOCK(drvdata->csa.base);
dev_dbg(&drvdata->csdev->dev,
"cpu: %d disable smp call done\n", drvdata->cpu);
@ -574,7 +574,7 @@ static void etm_disable_perf(struct coresight_device *csdev)
if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
return;
CS_UNLOCK(drvdata->base);
CS_UNLOCK(drvdata->csa.base);
/* Setting the prog bit disables tracing immediately */
etm_set_prog(drvdata);
@ -586,7 +586,7 @@ static void etm_disable_perf(struct coresight_device *csdev)
etm_set_pwrdwn(drvdata);
coresight_disclaim_device_unlocked(csdev);
CS_LOCK(drvdata->base);
CS_LOCK(drvdata->csa.base);
/*
* perf will release trace ids when _free_aux()
@ -733,7 +733,7 @@ static void etm_init_arch_data(void *info)
/* Make sure all registers are accessible */
etm_os_unlock(drvdata);
CS_UNLOCK(drvdata->base);
CS_UNLOCK(drvdata->csa.base);
/* First dummy read */
(void)etm_readl(drvdata, ETMPDSR);
@ -764,9 +764,10 @@ static void etm_init_arch_data(void *info)
drvdata->nr_ext_out = BMVAL(etmccr, 20, 22);
drvdata->nr_ctxid_cmp = BMVAL(etmccr, 24, 25);
coresight_clear_self_claim_tag_unlocked(&drvdata->csa);
etm_set_pwrdwn(drvdata);
etm_clr_pwrup(drvdata);
CS_LOCK(drvdata->base);
CS_LOCK(drvdata->csa.base);
}
static int __init etm_hp_setup(void)
@ -827,8 +828,7 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
if (IS_ERR(base))
return PTR_ERR(base);
drvdata->base = base;
desc.access = CSDEV_ACCESS_IOMEM(base);
desc.access = drvdata->csa = CSDEV_ACCESS_IOMEM(base);
spin_lock_init(&drvdata->spinlock);


@ -50,11 +50,11 @@ static ssize_t etmsr_show(struct device *dev,
pm_runtime_get_sync(dev->parent);
spin_lock_irqsave(&drvdata->spinlock, flags);
CS_UNLOCK(drvdata->base);
CS_UNLOCK(drvdata->csa.base);
val = etm_readl(drvdata, ETMSR);
CS_LOCK(drvdata->base);
CS_LOCK(drvdata->csa.base);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
pm_runtime_put(dev->parent);
@ -949,9 +949,9 @@ static ssize_t seq_curr_state_show(struct device *dev,
pm_runtime_get_sync(dev->parent);
spin_lock_irqsave(&drvdata->spinlock, flags);
CS_UNLOCK(drvdata->base);
CS_UNLOCK(drvdata->csa.base);
val = (etm_readl(drvdata, ETMSQR) & ETM_SQR_MASK);
CS_LOCK(drvdata->base);
CS_LOCK(drvdata->csa.base);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
pm_runtime_put(dev->parent);


@ -84,7 +84,7 @@ static int etm4_probe_cpu(unsigned int cpu);
* TRCIDR4.NUMPC > 0b0000 .
* TRCSSCSR<n>.PC == 0b1
*/
static inline bool etm4x_sspcicrn_present(struct etmv4_drvdata *drvdata, int n)
static bool etm4x_sspcicrn_present(struct etmv4_drvdata *drvdata, int n)
{
return (n < drvdata->nr_ss_cmp) &&
drvdata->nr_pe &&
@ -185,7 +185,7 @@ static void etm_write_os_lock(struct etmv4_drvdata *drvdata,
isb();
}
static inline void etm4_os_unlock_csa(struct etmv4_drvdata *drvdata,
static void etm4_os_unlock_csa(struct etmv4_drvdata *drvdata,
struct csdev_access *csa)
{
WARN_ON(drvdata->cpu != smp_processor_id());
@ -431,6 +431,44 @@ static int etm4x_wait_status(struct csdev_access *csa, int pos, int val)
return coresight_timeout(csa, TRCSTATR, pos, val);
}
static int etm4_enable_trace_unit(struct etmv4_drvdata *drvdata)
{
struct coresight_device *csdev = drvdata->csdev;
struct device *etm_dev = &csdev->dev;
struct csdev_access *csa = &csdev->access;
/*
* ETE mandates that the TRCRSR is written to before
* enabling it.
*/
if (etm4x_is_ete(drvdata))
etm4x_relaxed_write32(csa, TRCRSR_TA, TRCRSR);
etm4x_allow_trace(drvdata);
/* Enable the trace unit */
etm4x_relaxed_write32(csa, 1, TRCPRGCTLR);
/* Synchronize the register updates for sysreg access */
if (!csa->io_mem)
isb();
/* wait for TRCSTATR.IDLE to go back down to '0' */
if (etm4x_wait_status(csa, TRCSTATR_IDLE_BIT, 0)) {
dev_err(etm_dev,
"timeout while waiting for Idle Trace Status\n");
return -ETIME;
}
/*
* As recommended by section 4.3.7 ("Synchronization when using the
* memory-mapped interface") of ARM IHI 0064D
*/
dsb(sy);
isb();
return 0;
}
static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
{
int i, rc;
@ -539,33 +577,8 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
etm4x_relaxed_write32(csa, trcpdcr | TRCPDCR_PU, TRCPDCR);
}
/*
* ETE mandates that the TRCRSR is written to before
* enabling it.
*/
if (etm4x_is_ete(drvdata))
etm4x_relaxed_write32(csa, TRCRSR_TA, TRCRSR);
etm4x_allow_trace(drvdata);
/* Enable the trace unit */
etm4x_relaxed_write32(csa, 1, TRCPRGCTLR);
/* Synchronize the register updates for sysreg access */
if (!csa->io_mem)
isb();
/* wait for TRCSTATR.IDLE to go back down to '0' */
if (etm4x_wait_status(csa, TRCSTATR_IDLE_BIT, 0))
dev_err(etm_dev,
"timeout while waiting for Idle Trace Status\n");
/*
* As recommended by section 4.3.7 ("Synchronization when using the
* memory-mapped interface") of ARM IHI 0064D
*/
dsb(sy);
isb();
if (!drvdata->paused)
rc = etm4_enable_trace_unit(drvdata);
done:
etm4_cs_lock(drvdata, csa);
@ -808,6 +821,9 @@ static int etm4_enable_perf(struct coresight_device *csdev,
drvdata->trcid = path->trace_id;
/* Populate pause state */
drvdata->paused = !!READ_ONCE(event->hw.aux_paused);
/* And enable it */
ret = etm4_enable_hw(drvdata);
@ -834,6 +850,9 @@ static int etm4_enable_sysfs(struct coresight_device *csdev, struct coresight_pa
drvdata->trcid = path->trace_id;
/* Tracer will never be paused in sysfs mode */
drvdata->paused = false;
/*
* Executing etm4_enable_hw on the cpu whose ETM is being enabled
* ensures that register writes occur when cpu is powered.
@ -884,25 +903,12 @@ static int etm4_enable(struct coresight_device *csdev, struct perf_event *event,
return ret;
}
static void etm4_disable_hw(void *info)
static void etm4_disable_trace_unit(struct etmv4_drvdata *drvdata)
{
u32 control;
struct etmv4_drvdata *drvdata = info;
struct etmv4_config *config = &drvdata->config;
struct coresight_device *csdev = drvdata->csdev;
struct device *etm_dev = &csdev->dev;
struct csdev_access *csa = &csdev->access;
int i;
etm4_cs_unlock(drvdata, csa);
etm4_disable_arch_specific(drvdata);
if (!drvdata->skip_power_up) {
/* power can be removed from the trace unit now */
control = etm4x_relaxed_read32(csa, TRCPDCR);
control &= ~TRCPDCR_PU;
etm4x_relaxed_write32(csa, control, TRCPDCR);
}
control = etm4x_relaxed_read32(csa, TRCPRGCTLR);
@ -943,6 +949,28 @@ static void etm4_disable_hw(void *info)
* of ARM IHI 0064H.b.
*/
isb();
}
static void etm4_disable_hw(void *info)
{
u32 control;
struct etmv4_drvdata *drvdata = info;
struct etmv4_config *config = &drvdata->config;
struct coresight_device *csdev = drvdata->csdev;
struct csdev_access *csa = &csdev->access;
int i;
etm4_cs_unlock(drvdata, csa);
etm4_disable_arch_specific(drvdata);
if (!drvdata->skip_power_up) {
/* power can be removed from the trace unit now */
control = etm4x_relaxed_read32(csa, TRCPDCR);
control &= ~TRCPDCR_PU;
etm4x_relaxed_write32(csa, control, TRCPDCR);
}
etm4_disable_trace_unit(drvdata);
/* read the status of the single shot comparators */
for (i = 0; i < drvdata->nr_ss_cmp; i++) {
@ -1020,6 +1048,9 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
smp_call_function_single(drvdata->cpu, etm4_disable_hw, drvdata, 1);
raw_spin_unlock(&drvdata->spinlock);
cscfg_csdev_disable_active_config(csdev);
cpus_read_unlock();
/*
@ -1059,10 +1090,43 @@ static void etm4_disable(struct coresight_device *csdev,
coresight_set_mode(csdev, CS_MODE_DISABLED);
}
static int etm4_resume_perf(struct coresight_device *csdev)
{
struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct csdev_access *csa = &csdev->access;
if (coresight_get_mode(csdev) != CS_MODE_PERF)
return -EINVAL;
etm4_cs_unlock(drvdata, csa);
etm4_enable_trace_unit(drvdata);
etm4_cs_lock(drvdata, csa);
drvdata->paused = false;
return 0;
}
static void etm4_pause_perf(struct coresight_device *csdev)
{
struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct csdev_access *csa = &csdev->access;
if (coresight_get_mode(csdev) != CS_MODE_PERF)
return;
etm4_cs_unlock(drvdata, csa);
etm4_disable_trace_unit(drvdata);
etm4_cs_lock(drvdata, csa);
drvdata->paused = true;
}
static const struct coresight_ops_source etm4_source_ops = {
.cpu_id = etm4_cpu_id,
.enable = etm4_enable,
.disable = etm4_disable,
.resume_perf = etm4_resume_perf,
.pause_perf = etm4_pause_perf,
};
static const struct coresight_ops etm4_cs_ops = {
@ -1070,7 +1134,7 @@ static const struct coresight_ops etm4_cs_ops = {
.source_ops = &etm4_source_ops,
};
static inline bool cpu_supports_sysreg_trace(void)
static bool cpu_supports_sysreg_trace(void)
{
u64 dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
@ -1176,7 +1240,7 @@ static void cpu_detect_trace_filtering(struct etmv4_drvdata *drvdata)
* tracing at the kernel EL and EL0, forcing to use the
* virtual time as the timestamp.
*/
trfcr = (TRFCR_EL1_TS_VIRTUAL |
trfcr = (FIELD_PREP(TRFCR_EL1_TS_MASK, TRFCR_EL1_TS_VIRTUAL) |
TRFCR_EL1_ExTRE |
TRFCR_EL1_E0TRE);
@ -1372,11 +1436,13 @@ static void etm4_init_arch_data(void *info)
drvdata->nrseqstate = FIELD_GET(TRCIDR5_NUMSEQSTATE_MASK, etmidr5);
/* NUMCNTR, bits[30:28] number of counters available for tracing */
drvdata->nr_cntr = FIELD_GET(TRCIDR5_NUMCNTR_MASK, etmidr5);
coresight_clear_self_claim_tag_unlocked(csa);
etm4_cs_lock(drvdata, csa);
cpu_detect_trace_filtering(drvdata);
}
static inline u32 etm4_get_victlr_access_type(struct etmv4_config *config)
static u32 etm4_get_victlr_access_type(struct etmv4_config *config)
{
return etm4_get_access_type(config) << __bf_shf(TRCVICTLR_EXLEVEL_MASK);
}


@ -2320,11 +2320,11 @@ static ssize_t ts_source_show(struct device *dev,
goto out;
}
switch (drvdata->trfcr & TRFCR_EL1_TS_MASK) {
val = FIELD_GET(TRFCR_EL1_TS_MASK, drvdata->trfcr);
switch (val) {
case TRFCR_EL1_TS_VIRTUAL:
case TRFCR_EL1_TS_GUEST_PHYSICAL:
case TRFCR_EL1_TS_PHYSICAL:
val = FIELD_GET(TRFCR_EL1_TS_MASK, drvdata->trfcr);
break;
default:
val = -1;
@ -2440,7 +2440,7 @@ static u32 etmv4_cross_read(const struct etmv4_drvdata *drvdata, u32 offset)
return reg.data;
}
static inline u32 coresight_etm4x_attr_to_offset(struct device_attribute *attr)
static u32 coresight_etm4x_attr_to_offset(struct device_attribute *attr)
{
struct dev_ext_attribute *eattr;
@ -2464,7 +2464,7 @@ static ssize_t coresight_etm4x_reg_show(struct device *dev,
return scnprintf(buf, PAGE_SIZE, "0x%x\n", val);
}
static inline bool
static bool
etm4x_register_implemented(struct etmv4_drvdata *drvdata, u32 offset)
{
switch (offset) {


@ -983,6 +983,7 @@ struct etmv4_save_state {
* @state_needs_restore: True when there is context to restore after PM exit
* @skip_power_up: Indicates if an implementation can skip powering up
* the trace unit.
* @paused: Indicates if the trace unit is paused.
* @arch_features: Bitmap of arch features of etmv4 devices.
*/
struct etmv4_drvdata {
@ -1036,6 +1037,7 @@ struct etmv4_drvdata {
struct etmv4_save_state *save_state;
bool state_needs_restore;
bool skip_power_up;
bool paused;
DECLARE_BITMAP(arch_features, ETM4_IMPDEF_FEATURE_MAX);
};


@ -255,6 +255,7 @@ static int funnel_probe(struct device *dev, struct resource *res)
drvdata->base = base;
desc.groups = coresight_funnel_groups;
desc.access = CSDEV_ACCESS_IOMEM(base);
coresight_clear_self_claim_tag(&desc.access);
}
dev_set_drvdata(dev, drvdata);
@ -433,7 +434,8 @@ static struct amba_driver dynamic_funnel_driver = {
static int __init funnel_init(void)
{
return coresight_init_driver("funnel", &dynamic_funnel_driver, &funnel_driver);
return coresight_init_driver("funnel", &dynamic_funnel_driver, &funnel_driver,
THIS_MODULE);
}
static void __exit funnel_exit(void)


@ -0,0 +1,74 @@
// SPDX-License-Identifier: GPL-2.0
#include <kunit/test.h>
#include <kunit/device.h>
#include <linux/coresight.h>
#include "coresight-priv.h"
static struct coresight_device *coresight_test_device(struct device *dev)
{
struct coresight_device *csdev = devm_kcalloc(dev, 1,
sizeof(struct coresight_device),
GFP_KERNEL);
csdev->pdata = devm_kcalloc(dev, 1,
sizeof(struct coresight_platform_data),
GFP_KERNEL);
return csdev;
}
static void test_default_sink(struct kunit *test)
{
/*
* Source -> ETF -> ETR -> CATU
* ^
* | default
*/
struct device *dev = kunit_device_register(test, "coresight_kunit");
struct coresight_device *src = coresight_test_device(dev),
*etf = coresight_test_device(dev),
*etr = coresight_test_device(dev),
*catu = coresight_test_device(dev);
struct coresight_connection conn = {};
src->type = CORESIGHT_DEV_TYPE_SOURCE;
/*
* Don't use CORESIGHT_DEV_SUBTYPE_SOURCE_PROC, that would always return
* a TRBE sink if one is registered.
*/
src->subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_BUS;
etf->type = CORESIGHT_DEV_TYPE_LINKSINK;
etf->subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
etr->type = CORESIGHT_DEV_TYPE_SINK;
etr->subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_SYSMEM;
catu->type = CORESIGHT_DEV_TYPE_HELPER;
conn.src_dev = src;
conn.dest_dev = etf;
coresight_add_out_conn(dev, src->pdata, &conn);
conn.src_dev = etf;
conn.dest_dev = etr;
coresight_add_out_conn(dev, etf->pdata, &conn);
conn.src_dev = etr;
conn.dest_dev = catu;
coresight_add_out_conn(dev, etr->pdata, &conn);
KUNIT_ASSERT_PTR_EQ(test, coresight_find_default_sink(src), etr);
}
static struct kunit_case coresight_testcases[] = {
KUNIT_CASE(test_default_sink),
{}
};
static struct kunit_suite coresight_test_suite = {
.name = "coresight_test_suite",
.test_cases = coresight_testcases,
};
kunit_test_suites(&coresight_test_suite);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("James Clark <james.clark@linaro.org>");
MODULE_DESCRIPTION("Arm CoreSight KUnit tests");


@ -139,7 +139,7 @@ coresight_find_csdev_by_fwnode(struct fwnode_handle *r_fwnode)
EXPORT_SYMBOL_GPL(coresight_find_csdev_by_fwnode);
#ifdef CONFIG_OF
static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep)
static bool of_coresight_legacy_ep_is_input(struct device_node *ep)
{
return of_property_read_bool(ep, "slave-mode");
}
@ -159,7 +159,7 @@ static struct device_node *of_coresight_get_port_parent(struct device_node *ep)
return parent;
}
static inline struct device_node *
static struct device_node *
of_coresight_get_output_ports_node(const struct device_node *node)
{
return of_get_child_by_name(node, "out-ports");
@ -327,14 +327,14 @@ static int of_get_coresight_platform_data(struct device *dev,
return 0;
}
#else
static inline int
static int
of_get_coresight_platform_data(struct device *dev,
struct coresight_platform_data *pdata)
{
return -ENOENT;
}
static inline int of_coresight_get_cpu(struct device *dev)
static int of_coresight_get_cpu(struct device *dev)
{
return -ENODEV;
}
@ -356,7 +356,7 @@ static const guid_t coresight_graph_uuid = GUID_INIT(0x3ecbc8b6, 0x1d0e, 0x4fb3,
#define ACPI_CORESIGHT_LINK_SLAVE 0
#define ACPI_CORESIGHT_LINK_MASTER 1
static inline bool is_acpi_guid(const union acpi_object *obj)
static bool is_acpi_guid(const union acpi_object *obj)
{
return (obj->type == ACPI_TYPE_BUFFER) && (obj->buffer.length == 16);
}
@ -365,24 +365,24 @@ static inline bool is_acpi_guid(const union acpi_object *obj)
* acpi_guid_matches - Checks if the given object is a GUID object and
* that it matches the supplied the GUID.
*/
static inline bool acpi_guid_matches(const union acpi_object *obj,
static bool acpi_guid_matches(const union acpi_object *obj,
const guid_t *guid)
{
return is_acpi_guid(obj) &&
guid_equal((guid_t *)obj->buffer.pointer, guid);
}
static inline bool is_acpi_dsd_graph_guid(const union acpi_object *obj)
static bool is_acpi_dsd_graph_guid(const union acpi_object *obj)
{
return acpi_guid_matches(obj, &acpi_graph_uuid);
}
static inline bool is_acpi_coresight_graph_guid(const union acpi_object *obj)
static bool is_acpi_coresight_graph_guid(const union acpi_object *obj)
{
return acpi_guid_matches(obj, &coresight_graph_uuid);
}
static inline bool is_acpi_coresight_graph(const union acpi_object *obj)
static bool is_acpi_coresight_graph(const union acpi_object *obj)
{
const union acpi_object *graphid, *guid, *links;
@ -469,7 +469,7 @@ static inline bool is_acpi_coresight_graph(const union acpi_object *obj)
* }, // End of ACPI Graph Property
* })
*/
static inline bool acpi_validate_dsd_graph(const union acpi_object *graph)
static bool acpi_validate_dsd_graph(const union acpi_object *graph)
{
int i, n;
const union acpi_object *rev, *nr_graphs;
@ -553,7 +553,7 @@ acpi_get_dsd_graph(struct acpi_device *adev, struct acpi_buffer *buf)
return NULL;
}
static inline bool
static bool
acpi_validate_coresight_graph(const union acpi_object *cs_graph)
{
int nlinks;
@ -794,14 +794,14 @@ acpi_get_coresight_platform_data(struct device *dev,
#else
static inline int
static int
acpi_get_coresight_platform_data(struct device *dev,
struct coresight_platform_data *pdata)
{
return -ENOENT;
}
static inline int acpi_coresight_get_cpu(struct device *dev)
static int acpi_coresight_get_cpu(struct device *dev)
{
return -ENODEV;
}


@ -35,7 +35,11 @@ extern const struct device_type coresight_dev_type[];
* Coresight device CLAIM protocol.
* See PSCI - ARM DEN 0022D, Section: 6.8.1 Debug and Trace save and restore.
*/
#define CORESIGHT_CLAIM_SELF_HOSTED BIT(1)
#define CORESIGHT_CLAIM_MASK GENMASK(1, 0)
#define CORESIGHT_CLAIM_FREE 0
#define CORESIGHT_CLAIM_EXTERNAL 1
#define CORESIGHT_CLAIM_SELF_HOSTED 2
#define CORESIGHT_CLAIM_INVALID 3
#define TIMEOUT_US 100
#define BMVAL(val, lsb, msb) ((val & GENMASK(msb, lsb)) >> lsb)
@ -56,10 +60,8 @@ struct cs_off_attribute {
u32 off;
};
extern ssize_t coresight_simple_show32(struct device *_dev,
struct device_attribute *attr, char *buf);
extern ssize_t coresight_simple_show_pair(struct device *_dev,
struct device_attribute *attr, char *buf);
ssize_t coresight_simple_show32(struct device *_dev, struct device_attribute *attr, char *buf);
ssize_t coresight_simple_show_pair(struct device *_dev, struct device_attribute *attr, char *buf);
#define coresight_simple_reg32(name, offset) \
(&((struct cs_off_attribute[]) { \
@ -156,8 +158,8 @@ void coresight_path_assign_trace_id(struct coresight_path *path,
enum cs_mode mode);
#if IS_ENABLED(CONFIG_CORESIGHT_SOURCE_ETM3X)
extern int etm_readl_cp14(u32 off, unsigned int *val);
extern int etm_writel_cp14(u32 off, u32 val);
int etm_readl_cp14(u32 off, unsigned int *val);
int etm_writel_cp14(u32 off, u32 val);
#else
static inline int etm_readl_cp14(u32 off, unsigned int *val) { return 0; }
static inline int etm_writel_cp14(u32 off, u32 val) { return 0; }
@ -168,8 +170,8 @@ struct cti_assoc_op {
void (*remove)(struct coresight_device *csdev);
};
extern void coresight_set_cti_ops(const struct cti_assoc_op *cti_op);
extern void coresight_remove_cti_ops(void);
void coresight_set_cti_ops(const struct cti_assoc_op *cti_op);
void coresight_remove_cti_ops(void);
/*
* Macros and inline functions to handle CoreSight UCI data and driver
@ -249,5 +251,7 @@ void coresight_add_helper(struct coresight_device *csdev,
void coresight_set_percpu_sink(int cpu, struct coresight_device *csdev);
struct coresight_device *coresight_get_percpu_sink(int cpu);
void coresight_disable_source(struct coresight_device *csdev, void *data);
void coresight_pause_source(struct coresight_device *csdev);
int coresight_resume_source(struct coresight_device *csdev);
#endif


@ -63,7 +63,7 @@ static void dynamic_replicator_reset(struct replicator_drvdata *drvdata)
/*
* replicator_reset : Reset the replicator configuration to sane values.
*/
static inline void replicator_reset(struct replicator_drvdata *drvdata)
static void replicator_reset(struct replicator_drvdata *drvdata)
{
if (drvdata->base)
dynamic_replicator_reset(drvdata);
@ -262,6 +262,7 @@ static int replicator_probe(struct device *dev, struct resource *res)
drvdata->base = base;
desc.groups = replicator_groups;
desc.access = CSDEV_ACCESS_IOMEM(base);
coresight_clear_self_claim_tag(&desc.access);
}
if (fwnode_property_present(dev_fwnode(dev),
@ -438,7 +439,8 @@ static struct amba_driver dynamic_replicator_driver = {
static int __init replicator_init(void)
{
return coresight_init_driver("replicator", &dynamic_replicator_driver, &replicator_driver);
return coresight_init_driver("replicator", &dynamic_replicator_driver, &replicator_driver,
THIS_MODULE);
}
static void __exit replicator_exit(void)


@ -301,7 +301,7 @@ static const struct coresight_ops stm_cs_ops = {
.source_ops = &stm_source_ops,
};
static inline bool stm_addr_unaligned(const void *addr, u8 write_bytes)
static bool stm_addr_unaligned(const void *addr, u8 write_bytes)
{
return ((unsigned long)addr & (write_bytes - 1));
}
@ -685,7 +685,7 @@ static int of_stm_get_stimulus_area(struct device *dev, struct resource *res)
return of_address_to_resource(np, index, res);
}
#else
static inline int of_stm_get_stimulus_area(struct device *dev,
static int of_stm_get_stimulus_area(struct device *dev,
struct resource *res)
{
return -ENOENT;
@ -729,7 +729,7 @@ static int acpi_stm_get_stimulus_area(struct device *dev, struct resource *res)
return rc;
}
#else
static inline int acpi_stm_get_stimulus_area(struct device *dev,
static int acpi_stm_get_stimulus_area(struct device *dev,
struct resource *res)
{
return -ENOENT;
@ -1058,7 +1058,7 @@ static struct platform_driver stm_platform_driver = {
static int __init stm_init(void)
{
return coresight_init_driver("stm", &stm_driver, &stm_platform_driver);
return coresight_init_driver("stm", &stm_driver, &stm_platform_driver, THIS_MODULE);
}
static void __exit stm_exit(void)


@ -10,7 +10,7 @@
#include "coresight-syscfg-configfs.h"
/* create a default ci_type. */
static inline struct config_item_type *cscfg_create_ci_type(void)
static struct config_item_type *cscfg_create_ci_type(void)
{
struct config_item_type *ci_type;


@ -395,6 +395,8 @@ static void cscfg_remove_owned_csdev_configs(struct coresight_device *csdev, voi
if (list_empty(&csdev->config_csdev_list))
return;
guard(raw_spinlock_irqsave)(&csdev->cscfg_csdev_lock);
list_for_each_entry_safe(config_csdev, tmp, &csdev->config_csdev_list, node) {
if (config_csdev->config_desc->load_owner == load_owner)
list_del(&config_csdev->node);
@ -867,6 +869,25 @@ unlock_exit:
}
EXPORT_SYMBOL_GPL(cscfg_csdev_reset_feats);
static bool cscfg_config_desc_get(struct cscfg_config_desc *config_desc)
{
if (!atomic_fetch_inc(&config_desc->active_cnt)) {
/* must ensure that config cannot be unloaded in use */
if (unlikely(cscfg_owner_get(config_desc->load_owner))) {
atomic_dec(&config_desc->active_cnt);
return false;
}
}
return true;
}
static void cscfg_config_desc_put(struct cscfg_config_desc *config_desc)
{
if (!atomic_dec_return(&config_desc->active_cnt))
cscfg_owner_put(config_desc->load_owner);
}
/*
* This activate configuration for either perf or sysfs. Perf can have multiple
* active configs, selected per event, sysfs is limited to one.
@ -890,22 +911,17 @@ static int _cscfg_activate_config(unsigned long cfg_hash)
if (config_desc->available == false)
return -EBUSY;
/* must ensure that config cannot be unloaded in use */
err = cscfg_owner_get(config_desc->load_owner);
if (err)
if (!cscfg_config_desc_get(config_desc)) {
err = -EINVAL;
break;
}
/*
* increment the global active count - control changes to
* active configurations
*/
atomic_inc(&cscfg_mgr->sys_active_cnt);
/*
* mark the descriptor as active so enable config on a
* device instance will use it
*/
atomic_inc(&config_desc->active_cnt);
err = 0;
dev_dbg(cscfg_device(), "Activate config %s.\n", config_desc->name);
break;
@ -920,9 +936,8 @@ static void _cscfg_deactivate_config(unsigned long cfg_hash)
list_for_each_entry(config_desc, &cscfg_mgr->config_desc_list, item) {
if ((unsigned long)config_desc->event_ea->var == cfg_hash) {
atomic_dec(&config_desc->active_cnt);
atomic_dec(&cscfg_mgr->sys_active_cnt);
cscfg_owner_put(config_desc->load_owner);
cscfg_config_desc_put(config_desc);
dev_dbg(cscfg_device(), "Deactivate config %s.\n", config_desc->name);
break;
}
@ -1047,7 +1062,7 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
unsigned long cfg_hash, int preset)
{
struct cscfg_config_csdev *config_csdev_active = NULL, *config_csdev_item;
const struct cscfg_config_desc *config_desc;
struct cscfg_config_desc *config_desc;
unsigned long flags;
int err = 0;
@ -1062,8 +1077,8 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
raw_spin_lock_irqsave(&csdev->cscfg_csdev_lock, flags);
list_for_each_entry(config_csdev_item, &csdev->config_csdev_list, node) {
config_desc = config_csdev_item->config_desc;
if ((atomic_read(&config_desc->active_cnt)) &&
((unsigned long)config_desc->event_ea->var == cfg_hash)) {
if (((unsigned long)config_desc->event_ea->var == cfg_hash) &&
cscfg_config_desc_get(config_desc)) {
config_csdev_active = config_csdev_item;
csdev->active_cscfg_ctxt = (void *)config_csdev_active;
break;
@ -1097,7 +1112,11 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
err = -EBUSY;
raw_spin_unlock_irqrestore(&csdev->cscfg_csdev_lock, flags);
}
if (err)
cscfg_config_desc_put(config_desc);
}
return err;
}
EXPORT_SYMBOL_GPL(cscfg_csdev_enable_active_config);
@ -1136,8 +1155,10 @@ void cscfg_csdev_disable_active_config(struct coresight_device *csdev)
raw_spin_unlock_irqrestore(&csdev->cscfg_csdev_lock, flags);
/* true if there was an enabled active config */
if (config_csdev)
if (config_csdev) {
cscfg_csdev_disable_config(config_csdev);
cscfg_config_desc_put(config_csdev->config_desc);
}
}
EXPORT_SYMBOL_GPL(cscfg_csdev_disable_active_config);


@ -287,8 +287,8 @@ static int tmc_open(struct inode *inode, struct file *file)
return 0;
}
static inline ssize_t tmc_get_sysfs_trace(struct tmc_drvdata *drvdata,
loff_t pos, size_t len, char **bufpp)
static ssize_t tmc_get_sysfs_trace(struct tmc_drvdata *drvdata, loff_t pos, size_t len,
char **bufpp)
{
switch (drvdata->config_type) {
case TMC_CONFIG_TYPE_ETB:
@ -591,7 +591,7 @@ static const struct attribute_group *coresight_etr_groups[] = {
NULL,
};
static inline bool tmc_etr_can_use_sg(struct device *dev)
static bool tmc_etr_can_use_sg(struct device *dev)
{
int ret;
u8 val_u8;
@ -621,7 +621,7 @@ static inline bool tmc_etr_can_use_sg(struct device *dev)
return false;
}
static inline bool tmc_etr_has_non_secure_access(struct tmc_drvdata *drvdata)
static bool tmc_etr_has_non_secure_access(struct tmc_drvdata *drvdata)
{
u32 auth = readl_relaxed(drvdata->base + TMC_AUTHSTATUS);
@ -869,6 +869,7 @@ static int __tmc_probe(struct device *dev, struct resource *res)
dev->platform_data = pdata;
desc.pdata = pdata;
coresight_clear_self_claim_tag(&desc.access);
drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev)) {
ret = PTR_ERR(drvdata->csdev);
@ -1060,7 +1061,7 @@ static struct platform_driver tmc_platform_driver = {
static int __init tmc_init(void)
{
return coresight_init_driver("tmc", &tmc_driver, &tmc_platform_driver);
return coresight_init_driver("tmc", &tmc_driver, &tmc_platform_driver, THIS_MODULE);
}
static void __exit tmc_exit(void)


@ -482,6 +482,7 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
unsigned long offset, to_read = 0, flags;
struct cs_buffers *buf = sink_config;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct perf_event *event = handle->event;
if (!buf)
return 0;
@ -586,6 +587,14 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
* is expected by the perf ring buffer.
*/
CS_LOCK(drvdata->base);
/*
* If the event is active, it is triggered during an AUX pause.
* Re-enable the sink so that it is ready when AUX resume is invoked.
*/
if (!event->hw.state)
__tmc_etb_enable_hw(drvdata);
out:
raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
@ -747,7 +756,6 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
char *buf = NULL;
enum tmc_mode mode;
unsigned long flags;
int rc = 0;
/* config types are set a boot time and never change */
if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB &&
@ -773,11 +781,11 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
* can't be NULL.
*/
memset(drvdata->buf, 0, drvdata->size);
rc = __tmc_etb_enable_hw(drvdata);
if (rc) {
raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
return rc;
}
/*
* Ignore failures to enable the TMC to make sure, we don't
* leave the TMC in a "reading" state.
*/
__tmc_etb_enable_hw(drvdata);
} else {
/*
* The ETB/ETF is not tracing and the buffer was just read.


@ -125,7 +125,7 @@ struct etr_sg_table {
* If we spill over to a new page for mapping 1 entry, we could as
* well replace the link entry of the previous page with the last entry.
*/
static inline unsigned long __attribute_const__
static unsigned long __attribute_const__
tmc_etr_sg_table_entries(int nr_pages)
{
unsigned long nr_sgpages = nr_pages * ETR_SG_PAGES_PER_SYSPAGE;
@ -239,13 +239,13 @@ err:
return -ENOMEM;
}
static inline long
static long
tmc_sg_get_data_page_offset(struct tmc_sg_table *sg_table, dma_addr_t addr)
{
return tmc_pages_get_offset(&sg_table->data_pages, addr);
}
static inline void tmc_free_table_pages(struct tmc_sg_table *sg_table)
static void tmc_free_table_pages(struct tmc_sg_table *sg_table)
{
if (sg_table->table_vaddr)
vunmap(sg_table->table_vaddr);
@ -481,7 +481,7 @@ static void tmc_etr_sg_table_dump(struct etr_sg_table *etr_table)
dev_dbg(sg_table->dev, "******* End of Table *****\n");
}
#else
static inline void tmc_etr_sg_table_dump(struct etr_sg_table *etr_table) {}
static void tmc_etr_sg_table_dump(struct etr_sg_table *etr_table) {}
#endif
/*
@ -886,10 +886,8 @@ void tmc_etr_remove_catu_ops(void)
}
EXPORT_SYMBOL_GPL(tmc_etr_remove_catu_ops);
static inline int tmc_etr_mode_alloc_buf(int mode,
struct tmc_drvdata *drvdata,
struct etr_buf *etr_buf, int node,
void **pages)
static int tmc_etr_mode_alloc_buf(int mode, struct tmc_drvdata *drvdata, struct etr_buf *etr_buf,
int node, void **pages)
{
int rc = -EINVAL;
@ -1009,7 +1007,7 @@ static ssize_t tmc_etr_buf_get_data(struct etr_buf *etr_buf,
return etr_buf->ops->get_data(etr_buf, (u64)offset, len, bufpp);
}
static inline s64
static s64
tmc_etr_buf_insert_barrier_packet(struct etr_buf *etr_buf, u64 offset)
{
ssize_t len;
@ -1636,6 +1634,7 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct etr_perf_buffer *etr_perf = config;
struct etr_buf *etr_buf = etr_perf->etr_buf;
struct perf_event *event = handle->event;
raw_spin_lock_irqsave(&drvdata->spinlock, flags);
@ -1705,6 +1704,15 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
*/
smp_wmb();
/*
* If the event is active, it is triggered during an AUX pause.
* Re-enable the sink so that it is ready when AUX resume is invoked.
*/
raw_spin_lock_irqsave(&drvdata->spinlock, flags);
if (csdev->refcnt && !event->hw.state)
__tmc_etr_enable_hw(drvdata);
raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
out:
/*
* Don't set the TRUNCATED flag in snapshot mode because 1) the


@ -318,7 +318,7 @@ static struct platform_driver tpiu_platform_driver = {
static int __init tpiu_init(void)
{
return coresight_init_driver("tpiu", &tpiu_driver, &tpiu_platform_driver);
return coresight_init_driver("tpiu", &tpiu_driver, &tpiu_platform_driver, THIS_MODULE);
}
static void __exit tpiu_exit(void)


@ -160,22 +160,22 @@ static void trbe_check_errata(struct trbe_cpudata *cpudata)
}
}
static inline bool trbe_has_erratum(struct trbe_cpudata *cpudata, int i)
static bool trbe_has_erratum(struct trbe_cpudata *cpudata, int i)
{
return (i < TRBE_ERRATA_MAX) && test_bit(i, cpudata->errata);
}
static inline bool trbe_may_overwrite_in_fill_mode(struct trbe_cpudata *cpudata)
static bool trbe_may_overwrite_in_fill_mode(struct trbe_cpudata *cpudata)
{
return trbe_has_erratum(cpudata, TRBE_WORKAROUND_OVERWRITE_FILL_MODE);
}
static inline bool trbe_may_write_out_of_range(struct trbe_cpudata *cpudata)
static bool trbe_may_write_out_of_range(struct trbe_cpudata *cpudata)
{
return trbe_has_erratum(cpudata, TRBE_WORKAROUND_WRITE_OUT_OF_RANGE);
}
static inline bool trbe_needs_drain_after_disable(struct trbe_cpudata *cpudata)
static bool trbe_needs_drain_after_disable(struct trbe_cpudata *cpudata)
{
/*
* Errata affected TRBE implementation will need TSB CSYNC and
@ -185,7 +185,7 @@ static inline bool trbe_needs_drain_after_disable(struct trbe_cpudata *cpudata)
return trbe_has_erratum(cpudata, TRBE_NEEDS_DRAIN_AFTER_DISABLE);
}
static inline bool trbe_needs_ctxt_sync_after_enable(struct trbe_cpudata *cpudata)
static bool trbe_needs_ctxt_sync_after_enable(struct trbe_cpudata *cpudata)
{
/*
* Errata affected TRBE implementation will need an additional
@ -196,7 +196,7 @@ static inline bool trbe_needs_ctxt_sync_after_enable(struct trbe_cpudata *cpudat
return trbe_has_erratum(cpudata, TRBE_NEEDS_CTXT_SYNC_AFTER_ENABLE);
}
static inline bool trbe_is_broken(struct trbe_cpudata *cpudata)
static bool trbe_is_broken(struct trbe_cpudata *cpudata)
{
return trbe_has_erratum(cpudata, TRBE_IS_BROKEN);
}
@ -208,13 +208,13 @@ static int trbe_alloc_node(struct perf_event *event)
return cpu_to_node(event->cpu);
}
static inline void trbe_drain_buffer(void)
static void trbe_drain_buffer(void)
{
tsb_csync();
dsb(nsh);
}
static inline void set_trbe_enabled(struct trbe_cpudata *cpudata, u64 trblimitr)
static void set_trbe_enabled(struct trbe_cpudata *cpudata, u64 trblimitr)
{
/*
* Enable the TRBE without clearing LIMITPTR which
@ -231,7 +231,7 @@ static inline void set_trbe_enabled(struct trbe_cpudata *cpudata, u64 trblimitr)
isb();
}
static inline void set_trbe_disabled(struct trbe_cpudata *cpudata)
static void set_trbe_disabled(struct trbe_cpudata *cpudata)
{
u64 trblimitr = read_sysreg_s(SYS_TRBLIMITR_EL1);


@ -398,6 +398,8 @@ struct coresight_ops_link {
* is associated to.
* @enable: enables tracing for a source.
* @disable: disables tracing for a source.
* @resume_perf: resumes tracing for a source in perf session.
* @pause_perf: pauses tracing for a source in perf session.
*/
struct coresight_ops_source {
int (*cpu_id)(struct coresight_device *csdev);
@ -405,6 +407,8 @@ struct coresight_ops_source {
enum cs_mode mode, struct coresight_path *path);
void (*disable)(struct coresight_device *csdev,
struct perf_event *event);
int (*resume_perf)(struct coresight_device *csdev);
void (*pause_perf)(struct coresight_device *csdev);
};
/**
@ -671,27 +675,27 @@ static inline void coresight_set_mode(struct coresight_device *csdev,
local_set(&csdev->mode, new_mode);
}
extern struct coresight_device *
coresight_register(struct coresight_desc *desc);
extern void coresight_unregister(struct coresight_device *csdev);
extern int coresight_enable_sysfs(struct coresight_device *csdev);
extern void coresight_disable_sysfs(struct coresight_device *csdev);
extern int coresight_timeout(struct csdev_access *csa, u32 offset,
int position, int value);
struct coresight_device *coresight_register(struct coresight_desc *desc);
void coresight_unregister(struct coresight_device *csdev);
int coresight_enable_sysfs(struct coresight_device *csdev);
void coresight_disable_sysfs(struct coresight_device *csdev);
int coresight_timeout(struct csdev_access *csa, u32 offset, int position, int value);
typedef void (*coresight_timeout_cb_t) (struct csdev_access *, u32, int, int);
extern int coresight_timeout_action(struct csdev_access *csa, u32 offset,
int position, int value,
coresight_timeout_cb_t cb);
int coresight_timeout_action(struct csdev_access *csa, u32 offset, int position, int value,
coresight_timeout_cb_t cb);
int coresight_claim_device(struct coresight_device *csdev);
int coresight_claim_device_unlocked(struct coresight_device *csdev);
extern int coresight_claim_device(struct coresight_device *csdev);
extern int coresight_claim_device_unlocked(struct coresight_device *csdev);
extern void coresight_disclaim_device(struct coresight_device *csdev);
extern void coresight_disclaim_device_unlocked(struct coresight_device *csdev);
extern char *coresight_alloc_device_name(struct coresight_dev_list *devs,
int coresight_claim_device(struct coresight_device *csdev);
int coresight_claim_device_unlocked(struct coresight_device *csdev);
void coresight_clear_self_claim_tag(struct csdev_access *csa);
void coresight_clear_self_claim_tag_unlocked(struct csdev_access *csa);
void coresight_disclaim_device(struct coresight_device *csdev);
void coresight_disclaim_device_unlocked(struct coresight_device *csdev);
char *coresight_alloc_device_name(struct coresight_dev_list *devs,
struct device *dev);
extern bool coresight_loses_context_with_cpu(struct device *dev);
bool coresight_loses_context_with_cpu(struct device *dev);
u32 coresight_relaxed_read32(struct coresight_device *csdev, u32 offset);
u32 coresight_read32(struct coresight_device *csdev, u32 offset);
@ -704,8 +708,8 @@ void coresight_relaxed_write64(struct coresight_device *csdev,
u64 val, u32 offset);
void coresight_write64(struct coresight_device *csdev, u64 val, u32 offset);
extern int coresight_get_cpu(struct device *dev);
extern int coresight_get_static_trace_id(struct device *dev, u32 *id);
int coresight_get_cpu(struct device *dev);
int coresight_get_static_trace_id(struct device *dev, u32 *id);
struct coresight_platform_data *coresight_get_platform_data(struct device *dev);
struct coresight_connection *
@ -723,7 +727,7 @@ coresight_find_output_type(struct coresight_platform_data *pdata,
union coresight_dev_subtype subtype);
int coresight_init_driver(const char *drv, struct amba_driver *amba_drv,
struct platform_driver *pdev_drv);
struct platform_driver *pdev_drv, struct module *owner);
void coresight_remove_driver(struct amba_driver *amba_drv,
struct platform_driver *pdev_drv);
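
The include/linux/coresight.h hunk above adds two optional callbacks to
struct coresight_ops_source. A minimal sketch of how a source driver could
opt in (the foo_* names are hypothetical; the op names, signatures and the
CS_MODE_PERF check mirror the etm4x hunk earlier in this series):

	#include <linux/coresight.h>
	#include <linux/errno.h>

	static int foo_resume_perf(struct coresight_device *csdev)
	{
		/* Pause/resume is only meaningful while driven by perf. */
		if (coresight_get_mode(csdev) != CS_MODE_PERF)
			return -EINVAL;
		/* Re-enable the trace unit here (cf. etm4_enable_trace_unit()). */
		return 0;
	}

	static void foo_pause_perf(struct coresight_device *csdev)
	{
		if (coresight_get_mode(csdev) != CS_MODE_PERF)
			return;
		/* Stop the trace unit here without tearing down the session. */
	}

	static const struct coresight_ops_source foo_source_ops = {
		/* .cpu_id, .enable and .disable omitted for brevity. */
		.resume_perf	= foo_resume_perf,
		.pause_perf	= foo_pause_perf,
	};

A source that fills in these two ops, together with the PERF_PMU_CAP_AUX_PAUSE
capability set on the PMU as done for cs_etm above, can honour the
aux-action=start-paused/pause/resume terms documented earlier in this diff.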