Merge tag 'net-6.16-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from can and xfrm.

  The TI regression reported last week is actually in our net-next tree;
  it does not affect 6.16.

  We are investigating a virtio regression which is quite hard to
  reproduce - currently only our CI hits it sporadically. Hopefully it
  is not critical, and I'm not sure that an additional week would be
  enough to solve it.

  Current release - fix to a fix:

   - sched: sch_qfq: avoid sleeping in atomic context in qfq_delete_class

  Previous releases - regressions:

   - xfrm:
      - set transport header to fix UDP GRO handling
      - delete x->tunnel as we delete x

   - eth:
      - mlx5: fix memory leak in cmd_exec()
      - i40e: when removing VF MAC filters, avoid losing PF-set MAC
      - gve: fix stuck TX queue for DQ queue format

  Previous releases - always broken:

   - can: fix NULL pointer deref of struct can_priv::do_set_mode

   - eth:
      - ice: fix a null pointer dereference in ice_copy_and_init_pkg()
      - ism: fix concurrency management in ism_cmd()
      - dpaa2: fix device reference count leak in MAC endpoint handling
      - icssg-prueth: fix buffer allocation for ICSSG

  Misc:

   - selftests: mptcp: increase code coverage"

* tag 'net-6.16-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (34 commits)
  net: hns3: default enable tx bounce buffer when smmu enabled
  net: hns3: fixed vf get max channels bug
  net: hns3: disable interrupt when ptp init failed
  net: hns3: fix concurrent setting vlan filter issue
  s390/ism: fix concurrency management in ism_cmd()
  selftests: drv-net: wait for iperf client to stop sending
  MAINTAINERS: Add in6.h to MAINTAINERS
  selftests: netfilter: tone-down conntrack clash test
  can: netlink: can_changelink(): fix NULL pointer deref of struct can_priv::do_set_mode
  net/sched: sch_qfq: Avoid triggering might_sleep in atomic context in qfq_delete_class
  gve: Fix stuck TX queue for DQ queue format
  net: appletalk: Fix use-after-free in AARP proxy probe
  net: bcmasp: Restore programming of TX map vector register
  selftests: mptcp: connect: also cover checksum
  selftests: mptcp: connect: also cover alt modes
  e1000e: ignore uninitialized checksum word on tgp
  e1000e: disregard NVM checksum on tgp when valid checksum bit is not set
  ice: Fix a null pointer dereference in ice_copy_and_init_pkg()
  i40e: When removing VF MAC filters, only check PF-set MAC
  i40e: report VF tx_dropped with tx_errors instead of tx_discards
  ...
Linus Torvalds, 2025-07-24 08:44:42 -07:00
commit 407c114c98
47 changed files with 550 additions and 307 deletions

@@ -17383,6 +17383,7 @@ F: include/linux/ethtool.h
 F: include/linux/framer/framer-provider.h
 F: include/linux/framer/framer.h
 F: include/linux/in.h
+F: include/linux/in6.h
 F: include/linux/indirect_call_wrapper.h
 F: include/linux/inet.h
 F: include/linux/inet_diag.h

@@ -943,6 +943,7 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
         struct fsl_mc_obj_desc endpoint_desc = {{ 0 }};
         struct dprc_endpoint endpoint1 = {{ 0 }};
         struct dprc_endpoint endpoint2 = {{ 0 }};
+        struct fsl_mc_bus *mc_bus;
         int state, err;

         mc_bus_dev = to_fsl_mc_device(mc_dev->dev.parent);
@@ -966,6 +967,8 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
         strcpy(endpoint_desc.type, endpoint2.type);
         endpoint_desc.id = endpoint2.id;
         endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev);
+        if (endpoint)
+                return endpoint;

         /*
          * We know that the device has an endpoint because we verified by
@@ -973,17 +976,13 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
          * yet discovered by the fsl-mc bus, thus the lookup returned NULL.
          * Force a rescan of the devices in this container and retry the lookup.
          */
-        if (!endpoint) {
-                struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
-
-                if (mutex_trylock(&mc_bus->scan_mutex)) {
-                        err = dprc_scan_objects(mc_bus_dev, true);
-                        mutex_unlock(&mc_bus->scan_mutex);
-                }
-
-                if (err < 0)
-                        return ERR_PTR(err);
+        mc_bus = to_fsl_mc_bus(mc_bus_dev);
+        if (mutex_trylock(&mc_bus->scan_mutex)) {
+                err = dprc_scan_objects(mc_bus_dev, true);
+                mutex_unlock(&mc_bus->scan_mutex);
         }
+        if (err < 0)
+                return ERR_PTR(err);

         endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev);
         /*

@@ -145,13 +145,16 @@ void can_change_state(struct net_device *dev, struct can_frame *cf,
 EXPORT_SYMBOL_GPL(can_change_state);

 /* CAN device restart for bus-off recovery */
-static void can_restart(struct net_device *dev)
+static int can_restart(struct net_device *dev)
 {
         struct can_priv *priv = netdev_priv(dev);
         struct sk_buff *skb;
         struct can_frame *cf;
         int err;

+        if (!priv->do_set_mode)
+                return -EOPNOTSUPP;
+
         if (netif_carrier_ok(dev))
                 netdev_err(dev, "Attempt to restart for bus-off recovery, but carrier is OK?\n");
@@ -173,10 +176,14 @@ static void can_restart(struct net_device *dev)
         if (err) {
                 netdev_err(dev, "Restart failed, error %pe\n", ERR_PTR(err));
                 netif_carrier_off(dev);
+
+                return err;
         } else {
                 netdev_dbg(dev, "Restarted\n");
                 priv->can_stats.restarts++;
         }
+
+        return 0;
 }

 static void can_restart_work(struct work_struct *work)
@@ -201,9 +208,8 @@ int can_restart_now(struct net_device *dev)
                 return -EBUSY;

         cancel_delayed_work_sync(&priv->restart_work);
-        can_restart(dev);

-        return 0;
+        return can_restart(dev);
 }

 /* CAN bus-off

@@ -285,6 +285,12 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
         }

         if (data[IFLA_CAN_RESTART_MS]) {
+                if (!priv->do_set_mode) {
+                        NL_SET_ERR_MSG(extack,
+                                       "Device doesn't support restart from Bus Off");
+                        return -EOPNOTSUPP;
+                }
+
                 /* Do not allow changing restart delay while running */
                 if (dev->flags & IFF_UP)
                         return -EBUSY;
@@ -292,6 +298,12 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
         }

         if (data[IFLA_CAN_RESTART]) {
+                if (!priv->do_set_mode) {
+                        NL_SET_ERR_MSG(extack,
+                                       "Device doesn't support restart from Bus Off");
+                        return -EOPNOTSUPP;
+                }
+
                 /* Do not allow a restart while not running */
                 if (!(dev->flags & IFF_UP))
                         return -EINVAL;
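
The guards added in both branches above mirror the one in can_restart(): a driver that does not implement the optional do_set_mode callback must be rejected before the pointer is ever called. As a rough, compilable userspace sketch of this bug class (invented names; this is not the kernel code):

/* Minimal model: invoking an optional callback without checking
 * that the driver actually provides it dereferences NULL.
 */
#include <errno.h>
#include <stdio.h>

struct fake_priv {
        /* Drivers without bus-off restart support leave this NULL */
        int (*do_set_mode)(struct fake_priv *priv, int mode);
};

static int fake_restart(struct fake_priv *priv)
{
        /* The fix: bail out before the NULL pointer is dereferenced */
        if (!priv->do_set_mode)
                return -EOPNOTSUPP;

        return priv->do_set_mode(priv, 1 /* CAN_MODE_START-like value */);
}

int main(void)
{
        struct fake_priv no_restart_support = { .do_set_mode = NULL };

        /* Without the guard this call would crash; with it we get -EOPNOTSUPP */
        printf("restart: %d\n", fake_restart(&no_restart_support));
        return 0;
}

Rejecting the netlink request up front also means the deferred restart work that later calls the callback is never queued.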

@@ -818,6 +818,9 @@ static void bcmasp_init_tx(struct bcmasp_intf *intf)
         /* Tx SPB */
         tx_spb_ctrl_wl(intf, ((intf->channel + 8) << TX_SPB_CTRL_XF_BID_SHIFT),
                        TX_SPB_CTRL_XF_CTRL2);
+
+        if (intf->parent->tx_chan_offset)
+                tx_pause_ctrl_wl(intf, (1 << (intf->channel + 8)), TX_PAUSE_MAP_VECTOR);
         tx_spb_top_wl(intf, 0x1e, TX_SPB_TOP_BLKOUT);

         tx_spb_dma_wq(intf, intf->tx_spb_dma_addr, TX_SPB_DMA_READ);

@@ -4666,12 +4666,19 @@ static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
                 return PTR_ERR(dpmac_dev);
         }

-        if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
+        if (IS_ERR(dpmac_dev))
                 return 0;

+        if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) {
+                err = 0;
+                goto out_put_device;
+        }
+
         mac = kzalloc(sizeof(struct dpaa2_mac), GFP_KERNEL);
-        if (!mac)
-                return -ENOMEM;
+        if (!mac) {
+                err = -ENOMEM;
+                goto out_put_device;
+        }

         mac->mc_dev = dpmac_dev;
         mac->mc_io = priv->mc_io;
@@ -4705,6 +4712,8 @@ err_close_mac:
         dpaa2_mac_close(mac);
 err_free_mac:
         kfree(mac);
+out_put_device:
+        put_device(&dpmac_dev->dev);
         return err;
 }

@@ -1448,12 +1448,19 @@ static int dpaa2_switch_port_connect_mac(struct ethsw_port_priv *port_priv)
         if (PTR_ERR(dpmac_dev) == -EPROBE_DEFER)
                 return PTR_ERR(dpmac_dev);

-        if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
+        if (IS_ERR(dpmac_dev))
                 return 0;

+        if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) {
+                err = 0;
+                goto out_put_device;
+        }
+
         mac = kzalloc(sizeof(*mac), GFP_KERNEL);
-        if (!mac)
-                return -ENOMEM;
+        if (!mac) {
+                err = -ENOMEM;
+                goto out_put_device;
+        }

         mac->mc_dev = dpmac_dev;
         mac->mc_io = port_priv->ethsw_data->mc_io;
@@ -1483,6 +1490,8 @@ err_close_mac:
         dpaa2_mac_close(mac);
 err_free_mac:
         kfree(mac);
+out_put_device:
+        put_device(&dpmac_dev->dev);
         return err;
 }
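
Both dpaa2 hunks apply the same error-unwind idiom: the endpoint lookup returns a device with a reference held, so every early exit now funnels through out_put_device. A minimal standalone sketch of the idiom, with invented types (this is not the driver code):

/* Once a reference is taken, every early return must go through a
 * label that drops it; only the success path keeps the reference.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_device { int refcnt; };

static void fake_get_device(struct fake_device *d) { d->refcnt++; }
static void fake_put_device(struct fake_device *d) { d->refcnt--; }

static int connect_mac(struct fake_device *d, int wrong_type)
{
        void *mac;
        int err;

        fake_get_device(d);        /* lookup returned a held reference */

        if (wrong_type) {
                err = 0;                /* "not for us" is not an error ... */
                goto out_put_device;    /* ... but the reference still drops */
        }

        mac = malloc(64);
        if (!mac) {
                err = -ENOMEM;
                goto out_put_device;
        }

        free(mac);
        return 0;        /* success path keeps the device reference */

out_put_device:
        fake_put_device(d);
        return err;
}

int main(void)
{
        struct fake_device d = { .refcnt = 0 };

        connect_mac(&d, 1);
        printf("refcnt after early exit: %d (0 means no leak)\n", d.refcnt);
        return 0;
}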

@@ -1917,49 +1917,56 @@ static void gve_turnup_and_check_status(struct gve_priv *priv)
         gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
 }

-static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
+static struct gve_notify_block *gve_get_tx_notify_block(struct gve_priv *priv,
+                                                        unsigned int txqueue)
 {
-        struct gve_notify_block *block;
-        struct gve_tx_ring *tx = NULL;
-        struct gve_priv *priv;
-        u32 last_nic_done;
-        u32 current_time;
         u32 ntfy_idx;

-        netdev_info(dev, "Timeout on tx queue, %d", txqueue);
-        priv = netdev_priv(dev);
         if (txqueue > priv->tx_cfg.num_queues)
-                goto reset;
+                return NULL;

         ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue);
         if (ntfy_idx >= priv->num_ntfy_blks)
-                goto reset;
+                return NULL;

-        block = &priv->ntfy_blocks[ntfy_idx];
-        tx = block->tx;
+        return &priv->ntfy_blocks[ntfy_idx];
+}
+
+static bool gve_tx_timeout_try_q_kick(struct gve_priv *priv,
+                                      unsigned int txqueue)
+{
+        struct gve_notify_block *block;
+        u32 current_time;
+
+        block = gve_get_tx_notify_block(priv, txqueue);
+
+        if (!block)
+                return false;

         current_time = jiffies_to_msecs(jiffies);
-        if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
-                goto reset;
+        if (block->tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
+                return false;

-        /* Check to see if there are missed completions, which will allow us to
-         * kick the queue.
-         */
-        last_nic_done = gve_tx_load_event_counter(priv, tx);
-        if (last_nic_done - tx->done) {
-                netdev_info(dev, "Kicking queue %d", txqueue);
-                iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
-                napi_schedule(&block->napi);
-                tx->last_kick_msec = current_time;
-                goto out;
-        } // Else reset.
+        netdev_info(priv->dev, "Kicking queue %d", txqueue);
+        napi_schedule(&block->napi);
+        block->tx->last_kick_msec = current_time;
+        return true;
+}

-reset:
-        gve_schedule_reset(priv);
+static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
+{
+        struct gve_notify_block *block;
+        struct gve_priv *priv;

-out:
-        if (tx)
-                tx->queue_timeout++;
+        netdev_info(dev, "Timeout on tx queue, %d", txqueue);
+        priv = netdev_priv(dev);
+
+        if (!gve_tx_timeout_try_q_kick(priv, txqueue))
+                gve_schedule_reset(priv);
+
+        block = gve_get_tx_notify_block(priv, txqueue);
+        if (block)
+                block->tx->queue_timeout++;
         priv->tx_timeo_cnt++;
 }
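
The refactor splits lookup and kick into helpers so the timeout handler reads as kick-or-reset. A toy userspace model of that control flow (hypothetical names, simplified timekeeping; not the gve driver):

#include <stdbool.h>
#include <stdio.h>

struct fake_queue { unsigned long last_kick; unsigned long now; };

static bool try_queue_kick(struct fake_queue *q, unsigned long min_gap)
{
        if (q->last_kick + min_gap > q->now)
                return false;        /* kicked too recently: caller resets */

        q->last_kick = q->now;        /* "kick": reschedule polling, no reset */
        return true;
}

int main(void)
{
        struct fake_queue q = { .last_kick = 0, .now = 1000 };

        if (!try_queue_kick(&q, 100))
                printf("falling back to full device reset\n");
        else
                printf("queue kicked, reset avoided\n");
        return 0;
}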

@@ -11,6 +11,7 @@
 #include <linux/irq.h>
 #include <linux/ip.h>
 #include <linux/ipv6.h>
+#include <linux/iommu.h>
 #include <linux/module.h>
 #include <linux/pci.h>
 #include <linux/skbuff.h>
@@ -1039,6 +1040,8 @@ static bool hns3_can_use_tx_sgl(struct hns3_enet_ring *ring,
 static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring)
 {
         u32 alloc_size = ring->tqp->handle->kinfo.tx_spare_buf_size;
+        struct net_device *netdev = ring_to_netdev(ring);
+        struct hns3_nic_priv *priv = netdev_priv(netdev);
         struct hns3_tx_spare *tx_spare;
         struct page *page;
         dma_addr_t dma;
@@ -1080,6 +1083,7 @@ static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring)
         tx_spare->buf = page_address(page);
         tx_spare->len = PAGE_SIZE << order;
         ring->tx_spare = tx_spare;
+        ring->tx_copybreak = priv->tx_copybreak;
         return;

 dma_mapping_error:
@@ -4874,6 +4878,30 @@ static void hns3_nic_dealloc_vector_data(struct hns3_nic_priv *priv)
         devm_kfree(&pdev->dev, priv->tqp_vector);
 }

+static void hns3_update_tx_spare_buf_config(struct hns3_nic_priv *priv)
+{
+#define HNS3_MIN_SPARE_BUF_SIZE (2 * 1024 * 1024)
+#define HNS3_MAX_PACKET_SIZE (64 * 1024)
+
+        struct iommu_domain *domain = iommu_get_domain_for_dev(priv->dev);
+        struct hnae3_ae_dev *ae_dev = hns3_get_ae_dev(priv->ae_handle);
+        struct hnae3_handle *handle = priv->ae_handle;
+
+        if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3)
+                return;
+
+        if (!(domain && iommu_is_dma_domain(domain)))
+                return;
+
+        priv->min_tx_copybreak = HNS3_MAX_PACKET_SIZE;
+        priv->min_tx_spare_buf_size = HNS3_MIN_SPARE_BUF_SIZE;
+
+        if (priv->tx_copybreak < priv->min_tx_copybreak)
+                priv->tx_copybreak = priv->min_tx_copybreak;
+        if (handle->kinfo.tx_spare_buf_size < priv->min_tx_spare_buf_size)
+                handle->kinfo.tx_spare_buf_size = priv->min_tx_spare_buf_size;
+}
+
 static void hns3_ring_get_cfg(struct hnae3_queue *q, struct hns3_nic_priv *priv,
                               unsigned int ring_type)
 {
@@ -5107,6 +5135,7 @@ int hns3_init_all_ring(struct hns3_nic_priv *priv)
         int i, j;
         int ret;

+        hns3_update_tx_spare_buf_config(priv);
         for (i = 0; i < ring_num; i++) {
                 ret = hns3_alloc_ring_memory(&priv->ring[i]);
                 if (ret) {
@@ -5311,6 +5340,8 @@ static int hns3_client_init(struct hnae3_handle *handle)
         priv->ae_handle = handle;
         priv->tx_timeout_count = 0;
         priv->max_non_tso_bd_num = ae_dev->dev_specs.max_non_tso_bd_num;
+        priv->min_tx_copybreak = 0;
+        priv->min_tx_spare_buf_size = 0;
         set_bit(HNS3_NIC_STATE_DOWN, &priv->state);

         handle->msg_enable = netif_msg_init(debug, DEFAULT_MSG_LEVEL);

@@ -596,6 +596,8 @@ struct hns3_nic_priv {
         struct hns3_enet_coalesce rx_coal;
         u32 tx_copybreak;
         u32 rx_copybreak;
+        u32 min_tx_copybreak;
+        u32 min_tx_spare_buf_size;
 };

 union l3_hdr_info {
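
For context on why a minimum tx_copybreak helps here: copybreak copies small TX packets into one preallocated spare buffer so the device does not DMA-map many tiny scattered buffers, which is costly behind an SMMU. A simplified, compilable illustration of the idea (hypothetical names; not the hns3 implementation):

#include <stdio.h>
#include <string.h>

#define SPARE_SIZE 4096

static char tx_spare[SPARE_SIZE];        /* stands in for the DMA-mapped pool */

static const void *tx_prepare(const void *pkt, size_t len, size_t copybreak)
{
        if (len <= copybreak && len <= SPARE_SIZE) {
                memcpy(tx_spare, pkt, len);        /* bounce: one linear buffer */
                return tx_spare;
        }
        return pkt;        /* large packet: map in place */
}

int main(void)
{
        const char pkt[] = "small packet";
        const void *dma = tx_prepare(pkt, sizeof(pkt), 1024);

        printf("bounced: %s\n", dma == (const void *)tx_spare ? "yes" : "no");
        return 0;
}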

@@ -9576,33 +9576,36 @@ static bool hclge_need_enable_vport_vlan_filter(struct hclge_vport *vport)
         return false;
 }

-int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
+static int __hclge_enable_vport_vlan_filter(struct hclge_vport *vport,
+                                            bool request_en)
 {
-        struct hclge_dev *hdev = vport->back;
         bool need_en;
         int ret;

-        mutex_lock(&hdev->vport_lock);
-
-        vport->req_vlan_fltr_en = request_en;
-
         need_en = hclge_need_enable_vport_vlan_filter(vport);
-        if (need_en == vport->cur_vlan_fltr_en) {
-                mutex_unlock(&hdev->vport_lock);
+        if (need_en == vport->cur_vlan_fltr_en)
                 return 0;
-        }

         ret = hclge_set_vport_vlan_filter(vport, need_en);
-        if (ret) {
-                mutex_unlock(&hdev->vport_lock);
+        if (ret)
                 return ret;
-        }

         vport->cur_vlan_fltr_en = need_en;

+        return 0;
+}
+
+int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
+{
+        struct hclge_dev *hdev = vport->back;
+        int ret;
+
+        mutex_lock(&hdev->vport_lock);
+        vport->req_vlan_fltr_en = request_en;
+        ret = __hclge_enable_vport_vlan_filter(vport, request_en);
         mutex_unlock(&hdev->vport_lock);

-        return 0;
+        return ret;
 }

 static int hclge_enable_vlan_filter(struct hnae3_handle *handle, bool enable)
@@ -10623,16 +10626,19 @@ static void hclge_sync_vlan_fltr_state(struct hclge_dev *hdev)
                                         &vport->state))
                         continue;

-                ret = hclge_enable_vport_vlan_filter(vport,
-                                                     vport->req_vlan_fltr_en);
+                mutex_lock(&hdev->vport_lock);
+                ret = __hclge_enable_vport_vlan_filter(vport,
+                                                       vport->req_vlan_fltr_en);
                 if (ret) {
                         dev_err(&hdev->pdev->dev,
                                 "failed to sync vlan filter state for vport%u, ret = %d\n",
                                 vport->vport_id, ret);
                         set_bit(HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE,
                                 &vport->state);
+                        mutex_unlock(&hdev->vport_lock);
                         return;
                 }
+                mutex_unlock(&hdev->vport_lock);
         }
 }
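
The vlan-filter fix above is the classic locked-wrapper refactor: a double-underscore helper that assumes the lock is held, plus a public wrapper that takes it, so the sync path can hold the lock across the whole update. A tiny userspace sketch of the pattern (a pthread mutex stands in for the kernel mutex; names are invented):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t vport_lock = PTHREAD_MUTEX_INITIALIZER;
static int filter_state;

/* Caller must hold vport_lock */
static int __set_filter(int want)
{
        if (filter_state == want)
                return 0;
        filter_state = want;        /* the part needing mutual exclusion */
        return 0;
}

/* Public entry point: takes the lock around the unlocked helper */
static int set_filter(int want)
{
        int ret;

        pthread_mutex_lock(&vport_lock);
        ret = __set_filter(want);
        pthread_mutex_unlock(&vport_lock);
        return ret;
}

int main(void)
{
        printf("set_filter: %d, state %d\n", set_filter(1), filter_state);
        return 0;
}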

@@ -497,14 +497,14 @@ int hclge_ptp_init(struct hclge_dev *hdev)
                 if (ret) {
                         dev_err(&hdev->pdev->dev,
                                 "failed to init freq, ret = %d\n", ret);
-                        goto out;
+                        goto out_clear_int;
                 }

                 ret = hclge_ptp_set_ts_mode(hdev, &hdev->ptp->ts_cfg);
                 if (ret) {
                         dev_err(&hdev->pdev->dev,
                                 "failed to init ts mode, ret = %d\n", ret);
-                        goto out;
+                        goto out_clear_int;
                 }

                 ktime_get_real_ts64(&ts);
@@ -512,7 +512,7 @@ int hclge_ptp_init(struct hclge_dev *hdev)
                 if (ret) {
                         dev_err(&hdev->pdev->dev,
                                 "failed to init ts time, ret = %d\n", ret);
-                        goto out;
+                        goto out_clear_int;
                 }

         set_bit(HCLGE_STATE_PTP_EN, &hdev->state);
@@ -520,6 +520,9 @@ int hclge_ptp_init(struct hclge_dev *hdev)

         return 0;

+out_clear_int:
+        clear_bit(HCLGE_PTP_FLAG_EN, &hdev->ptp->flags);
+        hclge_ptp_int_en(hdev, false);
 out:
         hclge_ptp_destroy_clock(hdev);

@@ -3094,11 +3094,7 @@ static void hclgevf_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)

 static u32 hclgevf_get_max_channels(struct hclgevf_dev *hdev)
 {
-        struct hnae3_handle *nic = &hdev->nic;
-        struct hnae3_knic_private_info *kinfo = &nic->kinfo;
-
-        return min_t(u32, hdev->rss_size_max,
-                     hdev->num_tqps / kinfo->tc_info.num_tc);
+        return min(hdev->rss_size_max, hdev->num_tqps);
 }

 /**

@@ -638,6 +638,9 @@
 /* For checksumming, the sum of all words in the NVM should equal 0xBABA. */
 #define NVM_SUM 0xBABA

+/* Uninitialized ("empty") checksum word value */
+#define NVM_CHECKSUM_UNINITIALIZED 0xFFFF
+
 /* PBA (printed board assembly) number words */
 #define NVM_PBA_OFFSET_0 8
 #define NVM_PBA_OFFSET_1 9

@@ -4274,6 +4274,8 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
                         ret_val = e1000e_update_nvm_checksum(hw);
                         if (ret_val)
                                 return ret_val;
+                } else if (hw->mac.type == e1000_pch_tgp) {
+                        return 0;
                 }
         }

@@ -558,6 +558,12 @@ s32 e1000e_validate_nvm_checksum_generic(struct e1000_hw *hw)
                 checksum += nvm_data;
         }

+        if (hw->mac.type == e1000_pch_tgp &&
+            nvm_data == NVM_CHECKSUM_UNINITIALIZED) {
+                e_dbg("Uninitialized NVM Checksum on TGP platform - ignoring\n");
+                return 0;
+        }
+
         if (checksum != (u16)NVM_SUM) {
                 e_dbg("NVM Checksum Invalid\n");
                 return -E1000_ERR_NVM;
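
A compilable model of the NVM checksum rule these two e1000e hunks implement: every word, including the stored checksum word, must sum to NVM_SUM (0xBABA), and on TGP a checksum word of 0xFFFF is treated as never-initialized rather than invalid. Only the two constants come from the patch; the rest is illustrative:

#include <stdint.h>
#include <stdio.h>

#define NVM_SUM                 0xBABA
#define NVM_CHECKSUM_UNINIT     0xFFFF

static int validate_nvm(const uint16_t *words, int n, int is_tgp)
{
        uint16_t checksum = 0;
        int i;

        for (i = 0; i < n; i++)
                checksum += words[i];        /* last word is the stored checksum */

        if (is_tgp && words[n - 1] == NVM_CHECKSUM_UNINIT)
                return 0;        /* uninitialized checksum: ignore on TGP */

        return checksum == NVM_SUM ? 0 : -1;
}

int main(void)
{
        uint16_t good[]  = { 0x1111, 0x2222, NVM_SUM - 0x1111 - 0x2222 };
        uint16_t blank[] = { 0x1111, 0x2222, 0xFFFF };

        printf("good image: %d\n", validate_nvm(good, 3, 0));
        printf("blank checksum on TGP: %d\n", validate_nvm(blank, 3, 1));
        return 0;
}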

@@ -3137,10 +3137,10 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
                 const u8 *addr = al->list[i].addr;

                 /* Allow to delete VF primary MAC only if it was not set
-                 * administratively by PF or if VF is trusted.
+                 * administratively by PF.
                  */
                 if (ether_addr_equal(addr, vf->default_lan_addr.addr)) {
-                        if (i40e_can_vf_change_mac(vf))
+                        if (!vf->pf_set_mac)
                                 was_unimac_deleted = true;
                         else
                                 continue;
@@ -5006,7 +5006,7 @@ int i40e_get_vf_stats(struct net_device *netdev, int vf_id,
         vf_stats->broadcast = stats->rx_broadcast;
         vf_stats->multicast = stats->rx_multicast;
         vf_stats->rx_dropped = stats->rx_discards + stats->rx_discards_other;
-        vf_stats->tx_dropped = stats->tx_discards;
+        vf_stats->tx_dropped = stats->tx_errors;

         return 0;
 }

@@ -2301,6 +2301,8 @@ enum ice_ddp_state ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf,
                 return ICE_DDP_PKG_ERR;

         buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL);
+        if (!buf_copy)
+                return ICE_DDP_PKG_ERR;

         state = ice_init_pkg(hw, buf_copy, len);
         if (!ice_is_init_pkg_successful(state)) {

@@ -1947,8 +1947,8 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out,
         err = mlx5_cmd_invoke(dev, inb, outb, out, out_size, callback, context,
                               pages_queue, token, force_polling);
-        if (callback)
-                return err;
+        if (callback && !err)
+                return 0;

         if (err > 0) /* Failed in FW, command didn't execute */
                 err = deliv_status_to_err(err);
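
Sketch of why the old test leaked: with "if (callback) return err;", a failed asynchronous submit returned before the cleanup below ran, yet the callback that would normally free the buffers never fires. An illustrative userspace model (invented names; not the mlx5 code):

#include <stdio.h>
#include <stdlib.h>

static void dummy_cb(void *ctx)
{
        (void)ctx;        /* would run later and free the buffers on success */
}

static int cmd_exec_model(void *in, void *out, void (*cb)(void *), int invoke_err)
{
        int err = invoke_err;

        /* Fixed logic: return early only when the async path really
         * took ownership, i.e. the submit itself succeeded.
         */
        if (cb && !err)
                return 0;

        free(in);        /* sync path, or an async submit that failed */
        free(out);
        return err;
}

int main(void)
{
        int err = cmd_exec_model(malloc(16), malloc(16), dummy_cb, -1);

        printf("failed async submit: %d (buffers freed, no leak)\n", err);
        return 0;
}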

@@ -1182,19 +1182,19 @@ static void esw_set_peer_miss_rule_source_port(struct mlx5_eswitch *esw,
 static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
                                        struct mlx5_core_dev *peer_dev)
 {
+        struct mlx5_eswitch *peer_esw = peer_dev->priv.eswitch;
         struct mlx5_flow_destination dest = {};
         struct mlx5_flow_act flow_act = {0};
         struct mlx5_flow_handle **flows;
-        /* total vports is the same for both e-switches */
-        int nvports = esw->total_vports;
         struct mlx5_flow_handle *flow;
+        struct mlx5_vport *peer_vport;
         struct mlx5_flow_spec *spec;
-        struct mlx5_vport *vport;
         int err, pfindex;
         unsigned long i;
         void *misc;

-        if (!MLX5_VPORT_MANAGER(esw->dev) && !mlx5_core_is_ecpf_esw_manager(esw->dev))
+        if (!MLX5_VPORT_MANAGER(peer_dev) &&
+            !mlx5_core_is_ecpf_esw_manager(peer_dev))
                 return 0;

         spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
@@ -1203,7 +1203,7 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
         peer_miss_rules_setup(esw, peer_dev, spec, &dest);

-        flows = kvcalloc(nvports, sizeof(*flows), GFP_KERNEL);
+        flows = kvcalloc(peer_esw->total_vports, sizeof(*flows), GFP_KERNEL);
         if (!flows) {
                 err = -ENOMEM;
                 goto alloc_flows_err;
@@ -1213,10 +1213,10 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
         misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
                             misc_parameters);
-        if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
-                vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
-                esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch,
-                                                   spec, MLX5_VPORT_PF);
+        if (mlx5_core_is_ecpf_esw_manager(peer_dev)) {
+                peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF);
+                esw_set_peer_miss_rule_source_port(esw, peer_esw, spec,
+                                                   MLX5_VPORT_PF);

                 flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw),
                                            spec, &flow_act, &dest, 1);
@@ -1224,11 +1224,11 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
                         err = PTR_ERR(flow);
                         goto add_pf_flow_err;
                 }
-                flows[vport->index] = flow;
+                flows[peer_vport->index] = flow;
         }

-        if (mlx5_ecpf_vport_exists(esw->dev)) {
-                vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
+        if (mlx5_ecpf_vport_exists(peer_dev)) {
+                peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF);
                 MLX5_SET(fte_match_set_misc, misc, source_port, MLX5_VPORT_ECPF);
                 flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw),
                                            spec, &flow_act, &dest, 1);
@@ -1236,13 +1236,14 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
                         err = PTR_ERR(flow);
                         goto add_ecpf_flow_err;
                 }
-                flows[vport->index] = flow;
+                flows[peer_vport->index] = flow;
         }

-        mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) {
+        mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport,
+                                   mlx5_core_max_vfs(peer_dev)) {
                 esw_set_peer_miss_rule_source_port(esw,
-                                                   peer_dev->priv.eswitch,
-                                                   spec, vport->vport);
+                                                   peer_esw,
+                                                   spec, peer_vport->vport);

                 flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw),
                                            spec, &flow_act, &dest, 1);
@@ -1250,22 +1251,22 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
                         err = PTR_ERR(flow);
                         goto add_vf_flow_err;
                 }
-                flows[vport->index] = flow;
+                flows[peer_vport->index] = flow;
         }

-        if (mlx5_core_ec_sriov_enabled(esw->dev)) {
-                mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) {
-                        if (i >= mlx5_core_max_ec_vfs(peer_dev))
-                                break;
-                        esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch,
-                                                           spec, vport->vport);
+        if (mlx5_core_ec_sriov_enabled(peer_dev)) {
+                mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport,
+                                              mlx5_core_max_ec_vfs(peer_dev)) {
+                        esw_set_peer_miss_rule_source_port(esw, peer_esw,
+                                                           spec,
+                                                           peer_vport->vport);
                         flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb,
                                                    spec, &flow_act, &dest, 1);
                         if (IS_ERR(flow)) {
                                 err = PTR_ERR(flow);
                                 goto add_ec_vf_flow_err;
                         }
-                        flows[vport->index] = flow;
+                        flows[peer_vport->index] = flow;
                 }
         }
@@ -1282,25 +1283,27 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
         return 0;

 add_ec_vf_flow_err:
-        mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) {
-                if (!flows[vport->index])
+        mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport,
+                                      mlx5_core_max_ec_vfs(peer_dev)) {
+                if (!flows[peer_vport->index])
                         continue;
-                mlx5_del_flow_rules(flows[vport->index]);
+                mlx5_del_flow_rules(flows[peer_vport->index]);
         }
 add_vf_flow_err:
-        mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) {
-                if (!flows[vport->index])
+        mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport,
+                                   mlx5_core_max_vfs(peer_dev)) {
+                if (!flows[peer_vport->index])
                         continue;
-                mlx5_del_flow_rules(flows[vport->index]);
+                mlx5_del_flow_rules(flows[peer_vport->index]);
         }
-        if (mlx5_ecpf_vport_exists(esw->dev)) {
-                vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
-                mlx5_del_flow_rules(flows[vport->index]);
+        if (mlx5_ecpf_vport_exists(peer_dev)) {
+                peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF);
+                mlx5_del_flow_rules(flows[peer_vport->index]);
         }
 add_ecpf_flow_err:
-        if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
-                vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
-                mlx5_del_flow_rules(flows[vport->index]);
+        if (mlx5_core_is_ecpf_esw_manager(peer_dev)) {
+                peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF);
+                mlx5_del_flow_rules(flows[peer_vport->index]);
         }
 add_pf_flow_err:
         esw_warn(esw->dev, "FDB: Failed to add peer miss flow rule err %d\n", err);
@@ -1313,37 +1316,34 @@ alloc_flows_err:
 static void esw_del_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
                                         struct mlx5_core_dev *peer_dev)
 {
+        struct mlx5_eswitch *peer_esw = peer_dev->priv.eswitch;
         u16 peer_index = mlx5_get_dev_index(peer_dev);
         struct mlx5_flow_handle **flows;
-        struct mlx5_vport *vport;
+        struct mlx5_vport *peer_vport;
         unsigned long i;

         flows = esw->fdb_table.offloads.peer_miss_rules[peer_index];
         if (!flows)
                 return;

-        if (mlx5_core_ec_sriov_enabled(esw->dev)) {
-                mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) {
-                        /* The flow for a particular vport could be NULL if the other ECPF
-                         * has fewer or no VFs enabled
-                         */
-                        if (!flows[vport->index])
-                                continue;
-                        mlx5_del_flow_rules(flows[vport->index]);
-                }
+        if (mlx5_core_ec_sriov_enabled(peer_dev)) {
+                mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport,
+                                              mlx5_core_max_ec_vfs(peer_dev))
+                        mlx5_del_flow_rules(flows[peer_vport->index]);
         }

-        mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev))
-                mlx5_del_flow_rules(flows[vport->index]);
+        mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport,
+                                   mlx5_core_max_vfs(peer_dev))
+                mlx5_del_flow_rules(flows[peer_vport->index]);

-        if (mlx5_ecpf_vport_exists(esw->dev)) {
-                vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
-                mlx5_del_flow_rules(flows[vport->index]);
+        if (mlx5_ecpf_vport_exists(peer_dev)) {
+                peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF);
+                mlx5_del_flow_rules(flows[peer_vport->index]);
         }

-        if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
-                vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
-                mlx5_del_flow_rules(flows[vport->index]);
+        if (mlx5_core_is_ecpf_esw_manager(peer_dev)) {
+                peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF);
+                mlx5_del_flow_rules(flows[peer_vport->index]);
         }

         kvfree(flows);

@@ -288,8 +288,12 @@ static int prueth_fw_offload_buffer_setup(struct prueth_emac *emac)
         int i;

         addr = lower_32_bits(prueth->msmcram.pa);
-        if (slice)
-                addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE;
+        if (slice) {
+                if (prueth->pdata.banked_ms_ram)
+                        addr += MSMC_RAM_BANK_SIZE;
+                else
+                        addr += PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE;
+        }

         if (addr % SZ_64K) {
                 dev_warn(prueth->dev, "buffer pool needs to be 64KB aligned\n");
@@ -297,43 +301,66 @@ static int prueth_fw_offload_buffer_setup(struct prueth_emac *emac)
         }

         bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET;
-        /* workaround for f/w bug. bpool 0 needs to be initialized */
-        for (i = 0; i < PRUETH_NUM_BUF_POOLS; i++) {
+
+        /* Configure buffer pools for forwarding buffers
+         *  - used by firmware to store packets to be forwarded to other port
+         *  - 8 total pools per slice
+         */
+        for (i = 0; i < PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE; i++) {
                 writel(addr, &bpool_cfg[i].addr);
-                writel(PRUETH_EMAC_BUF_POOL_SIZE, &bpool_cfg[i].len);
-                addr += PRUETH_EMAC_BUF_POOL_SIZE;
+                writel(PRUETH_SW_FWD_BUF_POOL_SIZE, &bpool_cfg[i].len);
+                addr += PRUETH_SW_FWD_BUF_POOL_SIZE;
         }

-        if (!slice)
-                addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE;
-        else
-                addr += PRUETH_SW_NUM_BUF_POOLS_HOST * PRUETH_SW_BUF_POOL_SIZE_HOST;
-
-        for (i = PRUETH_NUM_BUF_POOLS;
-             i < 2 * PRUETH_SW_NUM_BUF_POOLS_HOST + PRUETH_NUM_BUF_POOLS;
-             i++) {
-                /* The driver only uses first 4 queues per PRU so only initialize them */
-                if (i % PRUETH_SW_NUM_BUF_POOLS_HOST < PRUETH_SW_NUM_BUF_POOLS_PER_PRU) {
-                        writel(addr, &bpool_cfg[i].addr);
-                        writel(PRUETH_SW_BUF_POOL_SIZE_HOST, &bpool_cfg[i].len);
-                        addr += PRUETH_SW_BUF_POOL_SIZE_HOST;
+        /* Configure buffer pools for Local Injection buffers
+         *  - used by firmware to store packets received from host core
+         *  - 16 total pools per slice
+         */
+        for (i = 0; i < PRUETH_NUM_LI_BUF_POOLS_PER_SLICE; i++) {
+                int cfg_idx = i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE;
+
+                /* The driver only uses first 4 queues per PRU,
+                 * so only initialize buffer for them
+                 */
+                if ((i % PRUETH_NUM_LI_BUF_POOLS_PER_PORT_PER_SLICE)
+                    < PRUETH_SW_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE) {
+                        writel(addr, &bpool_cfg[cfg_idx].addr);
+                        writel(PRUETH_SW_LI_BUF_POOL_SIZE,
+                               &bpool_cfg[cfg_idx].len);
+                        addr += PRUETH_SW_LI_BUF_POOL_SIZE;
                 } else {
-                        writel(0, &bpool_cfg[i].addr);
-                        writel(0, &bpool_cfg[i].len);
+                        writel(0, &bpool_cfg[cfg_idx].addr);
+                        writel(0, &bpool_cfg[cfg_idx].len);
                 }
         }

-        if (!slice)
-                addr += PRUETH_SW_NUM_BUF_POOLS_HOST * PRUETH_SW_BUF_POOL_SIZE_HOST;
-        else
-                addr += PRUETH_EMAC_RX_CTX_BUF_SIZE;
+        /* Express RX buffer queue
+         *  - used by firmware to store express packets to be transmitted
+         *    to the host core
+         */
+        rxq_ctx = emac->dram.va + HOST_RX_Q_EXP_CONTEXT_OFFSET;
+        for (i = 0; i < 3; i++)
+                writel(addr, &rxq_ctx->start[i]);
+
+        addr += PRUETH_SW_HOST_EXP_BUF_POOL_SIZE;
+        writel(addr, &rxq_ctx->end);

+        /* Pre-emptible RX buffer queue
+         *  - used by firmware to store preemptible packets to be transmitted
+         *    to the host core
+         */
         rxq_ctx = emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET;
         for (i = 0; i < 3; i++)
                 writel(addr, &rxq_ctx->start[i]);

-        addr += PRUETH_EMAC_RX_CTX_BUF_SIZE;
-        writel(addr - SZ_2K, &rxq_ctx->end);
+        addr += PRUETH_SW_HOST_PRE_BUF_POOL_SIZE;
+        writel(addr, &rxq_ctx->end);
+
+        /* Set pointer for default dropped packet write
+         *  - used by firmware to temporarily store packet to be dropped
+         */
+        rxq_ctx = emac->dram.va + DEFAULT_MSMC_Q_OFFSET;
+        writel(addr, &rxq_ctx->start[0]);

         return 0;
 }
@@ -347,13 +374,13 @@ static int prueth_emac_buffer_setup(struct prueth_emac *emac)
         u32 addr;
         int i;

-        /* Layout to have 64KB aligned buffer pool
-         * |BPOOL0|BPOOL1|RX_CTX0|RX_CTX1|
-         */
         addr = lower_32_bits(prueth->msmcram.pa);
-        if (slice)
-                addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE;
+        if (slice) {
+                if (prueth->pdata.banked_ms_ram)
+                        addr += MSMC_RAM_BANK_SIZE;
+                else
+                        addr += PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE;
+        }

         if (addr % SZ_64K) {
                 dev_warn(prueth->dev, "buffer pool needs to be 64KB aligned\n");
@@ -361,39 +388,66 @@ static int prueth_emac_buffer_setup(struct prueth_emac *emac)
         }

         bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET;
-        /* workaround for f/w bug. bpool 0 needs to be initilalized */
-        writel(addr, &bpool_cfg[0].addr);
-        writel(0, &bpool_cfg[0].len);

-        for (i = PRUETH_EMAC_BUF_POOL_START;
-             i < PRUETH_EMAC_BUF_POOL_START + PRUETH_NUM_BUF_POOLS;
-             i++) {
-                writel(addr, &bpool_cfg[i].addr);
-                writel(PRUETH_EMAC_BUF_POOL_SIZE, &bpool_cfg[i].len);
-                addr += PRUETH_EMAC_BUF_POOL_SIZE;
+        /* Configure buffer pools for forwarding buffers
+         *  - in mac mode - no forwarding so initialize all pools to 0
+         *  - 8 total pools per slice
+         */
+        for (i = 0; i < PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE; i++) {
+                writel(0, &bpool_cfg[i].addr);
+                writel(0, &bpool_cfg[i].len);
         }

-        if (!slice)
-                addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE;
-        else
-                addr += PRUETH_EMAC_RX_CTX_BUF_SIZE * 2;
+        /* Configure buffer pools for Local Injection buffers
+         *  - used by firmware to store packets received from host core
+         *  - 16 total pools per slice
+         */
+        bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET;
+        for (i = 0; i < PRUETH_NUM_LI_BUF_POOLS_PER_SLICE; i++) {
+                int cfg_idx = i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE;
+
+                /* In EMAC mode, only first 4 buffers are used,
+                 * as 1 slice needs to handle only 1 port
+                 */
+                if (i < PRUETH_EMAC_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE) {
+                        writel(addr, &bpool_cfg[cfg_idx].addr);
+                        writel(PRUETH_EMAC_LI_BUF_POOL_SIZE,
+                               &bpool_cfg[cfg_idx].len);
+                        addr += PRUETH_EMAC_LI_BUF_POOL_SIZE;
+                } else {
+                        writel(0, &bpool_cfg[cfg_idx].addr);
+                        writel(0, &bpool_cfg[cfg_idx].len);
+                }
+        }

-        /* Pre-emptible RX buffer queue */
-        rxq_ctx = emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET;
+        /* Express RX buffer queue
+         *  - used by firmware to store express packets to be transmitted
+         *    to host core
+         */
+        rxq_ctx = emac->dram.va + HOST_RX_Q_EXP_CONTEXT_OFFSET;
         for (i = 0; i < 3; i++)
                 writel(addr, &rxq_ctx->start[i]);

-        addr += PRUETH_EMAC_RX_CTX_BUF_SIZE;
+        addr += PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE;
         writel(addr, &rxq_ctx->end);

-        /* Express RX buffer queue */
-        rxq_ctx = emac->dram.va + HOST_RX_Q_EXP_CONTEXT_OFFSET;
+        /* Pre-emptible RX buffer queue
+         *  - used by firmware to store preemptible packets to be transmitted
+         *    to host core
+         */
+        rxq_ctx = emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET;
         for (i = 0; i < 3; i++)
                 writel(addr, &rxq_ctx->start[i]);

-        addr += PRUETH_EMAC_RX_CTX_BUF_SIZE;
+        addr += PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE;
         writel(addr, &rxq_ctx->end);
+
+        /* Set pointer for default dropped packet write
+         *  - used by firmware to temporarily store packet to be dropped
+         */
+        rxq_ctx = emac->dram.va + DEFAULT_MSMC_Q_OFFSET;
+        writel(addr, &rxq_ctx->start[0]);

         return 0;
 }

@@ -26,21 +26,71 @@ struct icssg_flow_cfg {
 #define PRUETH_MAX_RX_FLOWS 1 /* excluding default flow */
 #define PRUETH_RX_FLOW_DATA 0

-#define PRUETH_EMAC_BUF_POOL_SIZE SZ_8K
-#define PRUETH_EMAC_POOLS_PER_SLICE 24
-#define PRUETH_EMAC_BUF_POOL_START 8
-#define PRUETH_NUM_BUF_POOLS 8
-#define PRUETH_EMAC_RX_CTX_BUF_SIZE SZ_16K /* per slice */
-#define MSMC_RAM_SIZE \
-        (2 * (PRUETH_EMAC_BUF_POOL_SIZE * PRUETH_NUM_BUF_POOLS + \
-         PRUETH_EMAC_RX_CTX_BUF_SIZE * 2))
-
-#define PRUETH_SW_BUF_POOL_SIZE_HOST SZ_4K
-#define PRUETH_SW_NUM_BUF_POOLS_HOST 8
-#define PRUETH_SW_NUM_BUF_POOLS_PER_PRU 4
-#define MSMC_RAM_SIZE_SWITCH_MODE \
-        (MSMC_RAM_SIZE + \
-        (2 * PRUETH_SW_BUF_POOL_SIZE_HOST * PRUETH_SW_NUM_BUF_POOLS_HOST))
+/* Defines for forwarding path buffer pools:
+ *   - used by firmware to store packets to be forwarded to other port
+ *   - 8 total pools per slice
+ *   - only used in switch mode (as no forwarding in mac mode)
+ */
+#define PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE 8
+#define PRUETH_SW_FWD_BUF_POOL_SIZE (SZ_8K)
+
+/* Defines for local injection path buffer pools:
+ *   - used by firmware to store packets received from host core
+ *   - 16 total pools per slice
+ *   - 8 pools per port per slice and each slice handles both ports
+ *   - only 4 out of 8 pools used per port (as only 4 real QoS levels in ICSSG)
+ *   - switch mode: 8 total pools used
+ *   - mac mode: 4 total pools used
+ */
+#define PRUETH_NUM_LI_BUF_POOLS_PER_SLICE 16
+#define PRUETH_NUM_LI_BUF_POOLS_PER_PORT_PER_SLICE 8
+#define PRUETH_SW_LI_BUF_POOL_SIZE SZ_4K
+#define PRUETH_SW_USED_LI_BUF_POOLS_PER_SLICE 8
+#define PRUETH_SW_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE 4
+#define PRUETH_EMAC_LI_BUF_POOL_SIZE SZ_8K
+#define PRUETH_EMAC_USED_LI_BUF_POOLS_PER_SLICE 4
+#define PRUETH_EMAC_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE 4
+
+/* Defines for host egress path - express and preemptible buffers
+ *   - used by firmware to store express and preemptible packets
+ *     to be transmitted to host core
+ *   - used by both mac/switch modes
+ */
+#define PRUETH_SW_HOST_EXP_BUF_POOL_SIZE SZ_16K
+#define PRUETH_SW_HOST_PRE_BUF_POOL_SIZE (SZ_16K - SZ_2K)
+#define PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE PRUETH_SW_HOST_EXP_BUF_POOL_SIZE
+#define PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE PRUETH_SW_HOST_PRE_BUF_POOL_SIZE
+
+/* Buffer used by firmware to temporarily store packet to be dropped */
+#define PRUETH_SW_DROP_PKT_BUF_SIZE SZ_2K
+#define PRUETH_EMAC_DROP_PKT_BUF_SIZE PRUETH_SW_DROP_PKT_BUF_SIZE
+
+/* Total switch mode memory usage for buffers per slice */
+#define PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE \
+        (PRUETH_SW_FWD_BUF_POOL_SIZE * PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE + \
+         PRUETH_SW_LI_BUF_POOL_SIZE * PRUETH_SW_USED_LI_BUF_POOLS_PER_SLICE + \
+         PRUETH_SW_HOST_EXP_BUF_POOL_SIZE + \
+         PRUETH_SW_HOST_PRE_BUF_POOL_SIZE + \
+         PRUETH_SW_DROP_PKT_BUF_SIZE)
+
+/* Total switch mode memory usage for all buffers */
+#define PRUETH_SW_TOTAL_BUF_SIZE \
+        (2 * PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE)
+
+/* Total mac mode memory usage for buffers per slice */
+#define PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE \
+        (PRUETH_EMAC_LI_BUF_POOL_SIZE * \
+         PRUETH_EMAC_USED_LI_BUF_POOLS_PER_SLICE + \
+         PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE + \
+         PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE + \
+         PRUETH_EMAC_DROP_PKT_BUF_SIZE)
+
+/* Total mac mode memory usage for all buffers */
+#define PRUETH_EMAC_TOTAL_BUF_SIZE \
+        (2 * PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE)
+
+/* Size of 1 bank of MSMC/OC_SRAM memory */
+#define MSMC_RAM_BANK_SIZE SZ_256K

 #define PRUETH_SWITCH_FDB_MASK ((SIZE_OF_FDB / NUMBER_OF_FDB_BUCKET_ENTRIES) - 1)
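
As a sanity check on these defines (arithmetic only; the numbers follow directly from the constants above): switch mode uses 8 x 8 KiB forwarding pools + 8 x 4 KiB used local-injection pools + 16 KiB express + 14 KiB preemptible + 2 KiB drop buffer = 128 KiB per slice, so 256 KiB for both slices - exactly one MSMC_RAM_BANK_SIZE. MAC mode uses 4 x 8 KiB + 16 KiB + 14 KiB + 2 KiB = 64 KiB per slice, 128 KiB total. A tiny self-checking program (userspace, illustrative):

#include <stdio.h>

#define SZ_2K  0x800
#define SZ_4K  0x1000
#define SZ_8K  0x2000
#define SZ_16K 0x4000

int main(void)
{
        unsigned int sw = 8 * SZ_8K          /* forwarding pools */
                        + 8 * SZ_4K          /* used local-injection pools */
                        + SZ_16K             /* express RX queue */
                        + (SZ_16K - SZ_2K)   /* preemptible RX queue */
                        + SZ_2K;             /* drop buffer */
        unsigned int emac = 4 * SZ_8K + SZ_16K + (SZ_16K - SZ_2K) + SZ_2K;

        /* Expect 0x20000 (128 KiB) and 0x10000 (64 KiB); two switch-mode
         * slices fill exactly one 256 KiB MSMC bank.
         */
        printf("switch per slice: %#x, emac per slice: %#x\n", sw, emac);
        return 0;
}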

@@ -1764,10 +1764,15 @@ static int prueth_probe(struct platform_device *pdev)
                 goto put_mem;
         }

-        msmc_ram_size = MSMC_RAM_SIZE;
         prueth->is_switchmode_supported = prueth->pdata.switch_mode;
-        if (prueth->is_switchmode_supported)
-                msmc_ram_size = MSMC_RAM_SIZE_SWITCH_MODE;
+        if (prueth->pdata.banked_ms_ram) {
+                /* Reserve 2 MSMC RAM banks for buffers to avoid arbitration */
+                msmc_ram_size = (2 * MSMC_RAM_BANK_SIZE);
+        } else {
+                msmc_ram_size = PRUETH_EMAC_TOTAL_BUF_SIZE;
+                if (prueth->is_switchmode_supported)
+                        msmc_ram_size = PRUETH_SW_TOTAL_BUF_SIZE;
+        }

         /* NOTE: FW bug needs buffer base to be 64KB aligned */
         prueth->msmcram.va =
@@ -1924,7 +1929,8 @@ put_iep0:
 free_pool:
         gen_pool_free(prueth->sram_pool,
-                      (unsigned long)prueth->msmcram.va, msmc_ram_size);
+                      (unsigned long)prueth->msmcram.va,
+                      prueth->msmcram.size);

 put_mem:
         pruss_release_mem_region(prueth->pruss, &prueth->shram);
@@ -1976,8 +1982,8 @@ static void prueth_remove(struct platform_device *pdev)
         icss_iep_put(prueth->iep0);

         gen_pool_free(prueth->sram_pool,
                       (unsigned long)prueth->msmcram.va,
-                      MSMC_RAM_SIZE);
+                      prueth->msmcram.size);

         pruss_release_mem_region(prueth->pruss, &prueth->shram);
@@ -1994,12 +2000,14 @@ static const struct prueth_pdata am654_icssg_pdata = {
         .fdqring_mode = K3_RINGACC_RING_MODE_MESSAGE,
         .quirk_10m_link_issue = 1,
         .switch_mode = 1,
+        .banked_ms_ram = 0,
 };

 static const struct prueth_pdata am64x_icssg_pdata = {
         .fdqring_mode = K3_RINGACC_RING_MODE_RING,
         .quirk_10m_link_issue = 1,
         .switch_mode = 1,
+        .banked_ms_ram = 1,
 };

 static const struct of_device_id prueth_dt_match[] = {

@@ -251,11 +251,13 @@ struct prueth_emac {
  * @fdqring_mode: Free desc queue mode
  * @quirk_10m_link_issue: 10M link detect errata
  * @switch_mode: switch firmware support
+ * @banked_ms_ram: banked memory support
  */
 struct prueth_pdata {
         enum k3_ring_mode fdqring_mode;
         u32 quirk_10m_link_issue:1;
         u32 switch_mode:1;
+        u32 banked_ms_ram:1;
 };

 struct icssg_firmwares {

@@ -180,6 +180,9 @@
 /* Used to notify the FW of the current link speed */
 #define PORT_LINK_SPEED_OFFSET 0x00A8

+/* 2k memory pointer reserved for default writes by PRU0*/
+#define DEFAULT_MSMC_Q_OFFSET 0x00AC
+
 /* TAS gate mask for windows list0 */
 #define TAS_GATE_MASK_LIST0 0x0100

@@ -130,6 +130,7 @@ static int ism_cmd(struct ism_dev *ism, void *cmd)
         struct ism_req_hdr *req = cmd;
         struct ism_resp_hdr *resp = cmd;

+        spin_lock(&ism->cmd_lock);
         __ism_write_cmd(ism, req + 1, sizeof(*req), req->len - sizeof(*req));
         __ism_write_cmd(ism, req, 0, sizeof(*req));
@@ -143,6 +144,7 @@ static int ism_cmd(struct ism_dev *ism, void *cmd)
         }
         __ism_read_cmd(ism, resp + 1, sizeof(*resp), resp->len - sizeof(*resp));
 out:
+        spin_unlock(&ism->cmd_lock);
         return resp->ret;
 }
@@ -606,6 +608,7 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
                 return -ENOMEM;

         spin_lock_init(&ism->lock);
+        spin_lock_init(&ism->cmd_lock);
         dev_set_drvdata(&pdev->dev, ism);
         ism->pdev = pdev;
         ism->dev.parent = &pdev->dev;

@@ -28,6 +28,7 @@ struct ism_dmb {

 struct ism_dev {
         spinlock_t lock; /* protects the ism device */
+        spinlock_t cmd_lock; /* serializes cmds */
         struct list_head list;
         struct pci_dev *pdev;
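
The new cmd_lock closes a plain request/response race: ism_cmd() writes a request and then reads the response from one shared command area, so two unserialized callers can consume each other's responses. A minimal userspace model of the fix (a pthread mutex stands in for the spinlock; names are invented):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cmd_lock = PTHREAD_MUTEX_INITIALIZER;
static int mailbox;        /* stands in for the device command area */

static int do_cmd(int request)
{
        int response;

        pthread_mutex_lock(&cmd_lock);        /* the fix: one command at a time */
        mailbox = request;                    /* "write command" step */
        response = mailbox + 1;               /* "read response" step */
        pthread_mutex_unlock(&cmd_lock);

        return response;
}

int main(void)
{
        printf("cmd(41) -> %d\n", do_cmd(41));
        return 0;
}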

@@ -441,7 +441,6 @@ int xfrm_input_register_afinfo(const struct xfrm_input_afinfo *afinfo);
 int xfrm_input_unregister_afinfo(const struct xfrm_input_afinfo *afinfo);

 void xfrm_flush_gc(void);
-void xfrm_state_delete_tunnel(struct xfrm_state *x);

 struct xfrm_type {
         struct module *owner;
@@ -474,7 +473,7 @@ struct xfrm_type_offload {

 int xfrm_register_type_offload(const struct xfrm_type_offload *type, unsigned short family);
 void xfrm_unregister_type_offload(const struct xfrm_type_offload *type, unsigned short family);
-void xfrm_set_type_offload(struct xfrm_state *x);
+void xfrm_set_type_offload(struct xfrm_state *x, bool try_load);
 static inline void xfrm_unset_type_offload(struct xfrm_state *x)
 {
         if (!x->type_offload)
@@ -916,7 +915,7 @@ static inline void xfrm_pols_put(struct xfrm_policy **pols, int npols)
                 xfrm_pol_put(pols[i]);
 }

-void __xfrm_state_destroy(struct xfrm_state *, bool);
+void __xfrm_state_destroy(struct xfrm_state *);

 static inline void __xfrm_state_put(struct xfrm_state *x)
 {
@@ -926,13 +925,7 @@ static inline void __xfrm_state_put(struct xfrm_state *x)
 static inline void xfrm_state_put(struct xfrm_state *x)
 {
         if (refcount_dec_and_test(&x->refcnt))
-                __xfrm_state_destroy(x, false);
-}
-
-static inline void xfrm_state_put_sync(struct xfrm_state *x)
-{
-        if (refcount_dec_and_test(&x->refcnt))
-                __xfrm_state_destroy(x, true);
+                __xfrm_state_destroy(x);
 }

 static inline void xfrm_state_hold(struct xfrm_state *x)
@@ -1770,7 +1763,7 @@ struct xfrmk_spdinfo {
 struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num);
 int xfrm_state_delete(struct xfrm_state *x);
-int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync);
+int xfrm_state_flush(struct net *net, u8 proto, bool task_valid);
 int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid);
 int xfrm_dev_policy_flush(struct net *net, struct net_device *dev,
                           bool task_valid);

@@ -35,6 +35,7 @@
 #include <linux/seq_file.h>
 #include <linux/export.h>
 #include <linux/etherdevice.h>
+#include <linux/refcount.h>

 int sysctl_aarp_expiry_time = AARP_EXPIRY_TIME;
 int sysctl_aarp_tick_time = AARP_TICK_TIME;
@@ -44,6 +45,7 @@ int sysctl_aarp_resolve_time = AARP_RESOLVE_TIME;
 /* Lists of aarp entries */

 /**
  * struct aarp_entry - AARP entry
+ * @refcnt: Reference count
  * @last_sent: Last time we xmitted the aarp request
  * @packet_queue: Queue of frames wait for resolution
  * @status: Used for proxy AARP
@@ -55,6 +57,7 @@ int sysctl_aarp_resolve_time = AARP_RESOLVE_TIME;
  * @next: Next entry in chain
  */
 struct aarp_entry {
+        refcount_t refcnt;
         /* These first two are only used for unresolved entries */
         unsigned long last_sent;
         struct sk_buff_head packet_queue;
@@ -79,6 +82,17 @@ static DEFINE_RWLOCK(aarp_lock);
 /* Used to walk the list and purge/kick entries. */
 static struct timer_list aarp_timer;

+static inline void aarp_entry_get(struct aarp_entry *a)
+{
+        refcount_inc(&a->refcnt);
+}
+
+static inline void aarp_entry_put(struct aarp_entry *a)
+{
+        if (refcount_dec_and_test(&a->refcnt))
+                kfree(a);
+}
+
 /*
  *      Delete an aarp queue
  *
@@ -87,7 +101,7 @@ static struct timer_list aarp_timer;
 static void __aarp_expire(struct aarp_entry *a)
 {
         skb_queue_purge(&a->packet_queue);
-        kfree(a);
+        aarp_entry_put(a);
 }

 /*
@@ -380,9 +394,11 @@ static void aarp_purge(void)
 static struct aarp_entry *aarp_alloc(void)
 {
         struct aarp_entry *a = kmalloc(sizeof(*a), GFP_ATOMIC);
+        if (!a)
+                return NULL;

-        if (a)
-                skb_queue_head_init(&a->packet_queue);
+        refcount_set(&a->refcnt, 1);
+        skb_queue_head_init(&a->packet_queue);
         return a;
 }
@@ -477,6 +493,7 @@ int aarp_proxy_probe_network(struct atalk_iface *atif, struct atalk_addr *sa)
         entry->dev = atif->dev;

         write_lock_bh(&aarp_lock);
+        aarp_entry_get(entry);

         hash = sa->s_node % (AARP_HASH_SIZE - 1);
         entry->next = proxies[hash];
@@ -502,6 +519,7 @@ int aarp_proxy_probe_network(struct atalk_iface *atif, struct atalk_addr *sa)
                 retval = 1;
         }

+        aarp_entry_put(entry);
         write_unlock_bh(&aarp_lock);
 out:
         return retval;
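
The AARP fix is a standard refcount lifecycle: the proxy prober and the hash table each hold a reference, and whoever drops the last one frees the entry, which removes the use-after-free window. A self-contained C11 sketch of that lifecycle (illustrative; not the AppleTalk code):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct entry { atomic_int refcnt; };

static struct entry *entry_alloc(void)
{
        struct entry *e = malloc(sizeof(*e));

        if (!e)
                return NULL;
        atomic_init(&e->refcnt, 1);        /* caller's reference */
        return e;
}

static void entry_get(struct entry *e)
{
        atomic_fetch_add(&e->refcnt, 1);
}

static void entry_put(struct entry *e)
{
        if (atomic_fetch_sub(&e->refcnt, 1) == 1)
                free(e);        /* last reference dropped */
}

int main(void)
{
        struct entry *e = entry_alloc();

        if (!e)
                return 1;
        entry_get(e);        /* second user, e.g. the hash table */
        entry_put(e);        /* first user done; entry survives */
        entry_put(e);        /* last user done; entry freed here */
        printf("no use-after-free: both users dropped safely\n");
        return 0;
}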

@@ -54,6 +54,7 @@ static int ipcomp4_err(struct sk_buff *skb, u32 info)
 }

 /* We always hold one tunnel user reference to indicate a tunnel */
+static struct lock_class_key xfrm_state_lock_key;
 static struct xfrm_state *ipcomp_tunnel_create(struct xfrm_state *x)
 {
         struct net *net = xs_net(x);
@@ -62,6 +63,7 @@ static struct xfrm_state *ipcomp_tunnel_create(struct xfrm_state *x)
         t = xfrm_state_alloc(net);
         if (!t)
                 goto out;
+        lockdep_set_class(&t->lock, &xfrm_state_lock_key);

         t->id.proto = IPPROTO_IPIP;
         t->id.spi = x->props.saddr.a4;

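A note on the lock_class_key: the inner tunnel state t is created and torn down while its parent state x is held, and by default both locks belong to the same lock class, so lockdep would flag the nesting as possible recursive locking. lockdep_set_class() moves t->lock into its own class. A hedged kernel-side sketch of the pattern under hypothetical names (struct node, node_pair_init; module code, not the ipcomp fix itself):

    #include <linux/spinlock.h>

    static struct lock_class_key inner_key;

    struct node {
        spinlock_t lock;
        struct node *inner;
    };

    static void node_pair_init(struct node *outer, struct node *inner)
    {
        spin_lock_init(&outer->lock);
        spin_lock_init(&inner->lock);
        /* Give the inner lock its own class: without this, taking
         * outer->lock and then inner->lock nests two locks of the
         * same class and lockdep reports possible recursive locking.
         */
        lockdep_set_class(&inner->lock, &inner_key);
        outer->inner = inner;
    }

    static void node_pair_use(struct node *outer)
    {
        spin_lock(&outer->lock);
        spin_lock(&outer->inner->lock);    /* distinct class: no splat */
        spin_unlock(&outer->inner->lock);
        spin_unlock(&outer->lock);
    }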

@@ -202,6 +202,9 @@ struct sk_buff *xfrm4_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
 	if (len <= sizeof(struct ip_esp_hdr) || udpdata32[0] == 0)
 		goto out;

+	/* set the transport header to ESP */
+	skb_set_transport_header(skb, offset);
+
 	NAPI_GRO_CB(skb)->proto = IPPROTO_UDP;

 	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);

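For context: skb_set_transport_header(skb, offset) records the transport header as an offset from the current data pointer, so the ESP GRO handler reached through call_gro_receive() finds the ESP header inside the UDP payload rather than whatever the transport header pointed at before. The toy below mimics only that offset bookkeeping; mini_skb and the 20 + 8 arithmetic are illustrative assumptions, not kernel code:

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical mini-skb: headers tracked as offsets into one
     * linear buffer, like sk_buff's network/transport header fields.
     */
    struct mini_skb {
        size_t data;              /* current parse position */
        size_t network_header;
        size_t transport_header;
    };

    static void set_transport_header(struct mini_skb *skb, size_t offset)
    {
        /* like skb_set_transport_header(): record data + offset */
        skb->transport_header = skb->data + offset;
    }

    int main(void)
    {
        struct mini_skb skb = { .data = 0, .network_header = 0 };

        /* assume a 20-byte IPv4 header + 8-byte UDP header: ESP at 28 */
        set_transport_header(&skb, 20 + 8);
        printf("transport header assumed at offset %zu\n",
               skb.transport_header);
        return 0;
    }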

@@ -71,6 +71,7 @@ static int ipcomp6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
 	return 0;
 }

+static struct lock_class_key xfrm_state_lock_key;
 static struct xfrm_state *ipcomp6_tunnel_create(struct xfrm_state *x)
 {
 	struct net *net = xs_net(x);
@@ -79,6 +80,7 @@ static struct xfrm_state *ipcomp6_tunnel_create(struct xfrm_state *x)
 	t = xfrm_state_alloc(net);
 	if (!t)
 		goto out;
+	lockdep_set_class(&t->lock, &xfrm_state_lock_key);

 	t->id.proto = IPPROTO_IPV6;
 	t->id.spi = xfrm6_tunnel_alloc_spi(net, (xfrm_address_t *)&x->props.saddr);


@@ -202,6 +202,9 @@ struct sk_buff *xfrm6_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
 	if (len <= sizeof(struct ip_esp_hdr) || udpdata32[0] == 0)
 		goto out;

+	/* set the transport header to ESP */
+	skb_set_transport_header(skb, offset);
+
 	NAPI_GRO_CB(skb)->proto = IPPROTO_UDP;

 	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);


@@ -334,8 +334,8 @@ static void __net_exit xfrm6_tunnel_net_exit(struct net *net)
 	struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
 	unsigned int i;

+	xfrm_state_flush(net, IPSEC_PROTO_ANY, false);
 	xfrm_flush_gc();
-	xfrm_state_flush(net, 0, false, true);

 	for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++)
 		WARN_ON_ONCE(!hlist_empty(&xfrm6_tn->spi_byaddr[i]));


@@ -1766,7 +1766,7 @@ static int pfkey_flush(struct sock *sk, struct sk_buff *skb, const struct sadb_m
 	if (proto == 0)
 		return -EINVAL;

-	err = xfrm_state_flush(net, proto, true, false);
+	err = xfrm_state_flush(net, proto, true);
 	err2 = unicast_flush_resp(sk, hdr);
 	if (err || err2) {
 		if (err == -ESRCH) /* empty table - go quietly */


@@ -536,9 +536,6 @@ destroy_class:

 static void qfq_destroy_class(struct Qdisc *sch, struct qfq_class *cl)
 {
-	struct qfq_sched *q = qdisc_priv(sch);
-
-	qfq_rm_from_agg(q, cl);
 	gen_kill_estimator(&cl->rate_est);
 	qdisc_put(cl->qdisc);
 	kfree(cl);
@@ -559,10 +556,11 @@ static int qfq_delete_class(struct Qdisc *sch, unsigned long arg,

 	qdisc_purge_queue(cl->qdisc);
 	qdisc_class_hash_remove(&q->clhash, &cl->common);
-	qfq_destroy_class(sch, cl);
+	qfq_rm_from_agg(q, cl);

 	sch_tree_unlock(sch);
+
+	qfq_destroy_class(sch, cl);
 	return 0;
 }
@@ -1503,6 +1501,7 @@ static void qfq_destroy_qdisc(struct Qdisc *sch)
 	for (i = 0; i < q->clhash.hashsize; i++) {
 		hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i],
 					  common.hnode) {
+			qfq_rm_from_agg(q, cl);
 			qfq_destroy_class(sch, cl);
 		}
 	}

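The reordering matters because sch_tree_lock() takes the qdisc tree lock with bottom halves disabled, i.e. atomic context, while qfq_destroy_class() ends in qdisc_put(), which may sleep. The fix keeps only the aggregate removal under the lock and destroys the class after sch_tree_unlock(). A hedged sketch of the resulting shape (some_class, unlink_class() and destroy_class() are hypothetical stand-ins, not the qfq functions):

    /* sketch only: the point is the ordering around the tree lock */
    static int delete_class_fixed(struct Qdisc *sch, struct some_class *cl)
    {
        sch_tree_lock(sch);      /* BH-disabled spinlock: atomic context */

        unlink_class(sch, cl);   /* only lock-protected bookkeeping here */

        sch_tree_unlock(sch);

        destroy_class(sch, cl);  /* may sleep (e.g. qdisc_put()), so it
                                  * must run outside the tree lock */
        return 0;
    }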

@@ -305,7 +305,6 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 		return -EINVAL;
 	}

-	xfrm_set_type_offload(x);
 	if (!x->type_offload) {
 		NL_SET_ERR_MSG(extack, "Type doesn't support offload");
 		dev_put(dev);


@@ -875,7 +875,7 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
 		return -EINVAL;
 	}

-	if (p.collect_md) {
+	if (p.collect_md || xi->p.collect_md) {
 		NL_SET_ERR_MSG(extack, "collect_md can't be changed");
 		return -EINVAL;
 	}
@@ -886,11 +886,6 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
 	} else {
 		if (xi->dev != dev)
 			return -EEXIST;
-		if (xi->p.collect_md) {
-			NL_SET_ERR_MSG(extack,
-				       "device can't be changed to collect_md");
-			return -EINVAL;
-		}
 	}

 	return xfrmi_update(xi, &p);


@@ -97,7 +97,7 @@ static int ipcomp_input_done2(struct sk_buff *skb, int err)
 	struct ip_comp_hdr *ipch = ip_comp_hdr(skb);
 	const int plen = skb->len;

-	skb_reset_transport_header(skb);
+	skb->transport_header = skb->network_header + sizeof(*ipch);

 	return ipcomp_post_acomp(skb, err, 0) ?:
 	       skb->len < (plen + sizeof(ip_comp_hdr)) ? -EINVAL :
@@ -313,7 +313,6 @@ void ipcomp_destroy(struct xfrm_state *x)
 	struct ipcomp_data *ipcd = x->data;
 	if (!ipcd)
 		return;
-	xfrm_state_delete_tunnel(x);
 	ipcomp_free_data(ipcd);
 	kfree(ipcd);
 }


@@ -424,11 +424,10 @@ void xfrm_unregister_type_offload(const struct xfrm_type_offload *type,
 }
 EXPORT_SYMBOL(xfrm_unregister_type_offload);

-void xfrm_set_type_offload(struct xfrm_state *x)
+void xfrm_set_type_offload(struct xfrm_state *x, bool try_load)
 {
 	const struct xfrm_type_offload *type = NULL;
 	struct xfrm_state_afinfo *afinfo;
-	bool try_load = true;

 retry:
 	afinfo = xfrm_state_get_afinfo(x->props.family);
@@ -593,7 +592,7 @@ void xfrm_state_free(struct xfrm_state *x)
 }
 EXPORT_SYMBOL(xfrm_state_free);

-static void ___xfrm_state_destroy(struct xfrm_state *x)
+static void xfrm_state_gc_destroy(struct xfrm_state *x)
 {
 	if (x->mode_cbs && x->mode_cbs->destroy_state)
 		x->mode_cbs->destroy_state(x);
@@ -607,6 +606,7 @@ static void ___xfrm_state_destroy(struct xfrm_state *x)
 	kfree(x->coaddr);
 	kfree(x->replay_esn);
 	kfree(x->preplay_esn);
+	xfrm_unset_type_offload(x);
 	if (x->type) {
 		x->type->destructor(x);
 		xfrm_put_type(x->type);
@@ -631,7 +631,7 @@ static void xfrm_state_gc_task(struct work_struct *work)
 	synchronize_rcu();

 	hlist_for_each_entry_safe(x, tmp, &gc_list, gclist)
-		___xfrm_state_destroy(x);
+		xfrm_state_gc_destroy(x);
 }

 static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me)
@@ -780,8 +780,6 @@ void xfrm_dev_state_free(struct xfrm_state *x)
 	struct xfrm_dev_offload *xso = &x->xso;
 	struct net_device *dev = READ_ONCE(xso->dev);

-	xfrm_unset_type_offload(x);
-
 	if (dev && dev->xfrmdev_ops) {
 		spin_lock_bh(&xfrm_state_dev_gc_lock);
 		if (!hlist_unhashed(&x->dev_gclist))
@@ -797,22 +795,18 @@ void xfrm_dev_state_free(struct xfrm_state *x)
 }
 #endif

-void __xfrm_state_destroy(struct xfrm_state *x, bool sync)
+void __xfrm_state_destroy(struct xfrm_state *x)
 {
 	WARN_ON(x->km.state != XFRM_STATE_DEAD);

-	if (sync) {
-		synchronize_rcu();
-		___xfrm_state_destroy(x);
-	} else {
-		spin_lock_bh(&xfrm_state_gc_lock);
-		hlist_add_head(&x->gclist, &xfrm_state_gc_list);
-		spin_unlock_bh(&xfrm_state_gc_lock);
-		schedule_work(&xfrm_state_gc_work);
-	}
+	spin_lock_bh(&xfrm_state_gc_lock);
+	hlist_add_head(&x->gclist, &xfrm_state_gc_list);
+	spin_unlock_bh(&xfrm_state_gc_lock);
+	schedule_work(&xfrm_state_gc_work);
 }
 EXPORT_SYMBOL(__xfrm_state_destroy);

+static void xfrm_state_delete_tunnel(struct xfrm_state *x);
+
 int __xfrm_state_delete(struct xfrm_state *x)
 {
 	struct net *net = xs_net(x);
@@ -840,6 +834,8 @@ int __xfrm_state_delete(struct xfrm_state *x)

 		xfrm_dev_state_delete(x);

+		xfrm_state_delete_tunnel(x);
+
 		/* All xfrm_state objects are created by xfrm_state_alloc.
 		 * The xfrm_state_alloc call gives a reference, and that
 		 * is what we are dropping here.
@@ -921,7 +917,7 @@ xfrm_dev_state_flush_secctx_check(struct net *net, struct net_device *dev, bool
 }
 #endif

-int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync)
+int xfrm_state_flush(struct net *net, u8 proto, bool task_valid)
 {
 	int i, err = 0, cnt = 0;
@@ -943,10 +939,7 @@ restart:
 			err = xfrm_state_delete(x);
 			xfrm_audit_state_delete(x, err ? 0 : 1,
 						task_valid);
-			if (sync)
-				xfrm_state_put_sync(x);
-			else
-				xfrm_state_put(x);
+			xfrm_state_put(x);

 			if (!err)
 				cnt++;
@@ -1307,14 +1300,8 @@ static void xfrm_hash_grow_check(struct net *net, int have_hash_collision)
 static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
 			       const struct flowi *fl, unsigned short family,
 			       struct xfrm_state **best, int *acq_in_progress,
-			       int *error)
+			       int *error, unsigned int pcpu_id)
 {
-	/* We need the cpu id just as a lookup key,
-	 * we don't require it to be stable.
-	 */
-	unsigned int pcpu_id = get_cpu();
-	put_cpu();
-
 	/* Resolution logic:
 	 * 1. There is a valid state with matching selector. Done.
 	 * 2. Valid state with inappropriate selector. Skip.
@@ -1381,14 +1368,15 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
 	/* We need the cpu id just as a lookup key,
 	 * we don't require it to be stable.
 	 */
-	pcpu_id = get_cpu();
-	put_cpu();
+	pcpu_id = raw_smp_processor_id();

 	to_put = NULL;

 	sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);

 	rcu_read_lock();
+	xfrm_hash_ptrs_get(net, &state_ptrs);
+
 	hlist_for_each_entry_rcu(x, &pol->state_cache_list, state_cache) {
 		if (x->props.family == encap_family &&
 		    x->props.reqid == tmpl->reqid &&
@@ -1400,7 +1388,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
 		    tmpl->id.proto == x->id.proto &&
 		    (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
 			xfrm_state_look_at(pol, x, fl, encap_family,
-					   &best, &acquire_in_progress, &error);
+					   &best, &acquire_in_progress, &error, pcpu_id);
 	}

 	if (best)
@@ -1417,7 +1405,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
 		    tmpl->id.proto == x->id.proto &&
 		    (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
 			xfrm_state_look_at(pol, x, fl, family,
-					   &best, &acquire_in_progress, &error);
+					   &best, &acquire_in_progress, &error, pcpu_id);
 	}

 cached:
@@ -1429,8 +1417,6 @@ cached:
 	else if (acquire_in_progress) /* XXX: acquire_in_progress should not happen */
 		WARN_ON(1);

-	xfrm_hash_ptrs_get(net, &state_ptrs);
-
 	h = __xfrm_dst_hash(daddr, saddr, tmpl->reqid, encap_family, state_ptrs.hmask);
 	hlist_for_each_entry_rcu(x, state_ptrs.bydst + h, bydst) {
 #ifdef CONFIG_XFRM_OFFLOAD
@@ -1460,7 +1446,7 @@ cached:
 		    tmpl->id.proto == x->id.proto &&
 		    (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
 			xfrm_state_look_at(pol, x, fl, family,
-					   &best, &acquire_in_progress, &error);
+					   &best, &acquire_in_progress, &error, pcpu_id);
 	}
 	if (best || acquire_in_progress)
 		goto found;
@@ -1495,7 +1481,7 @@ cached:
 		    tmpl->id.proto == x->id.proto &&
 		    (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
 			xfrm_state_look_at(pol, x, fl, family,
-					   &best, &acquire_in_progress, &error);
+					   &best, &acquire_in_progress, &error, pcpu_id);
 	}

 found:
@@ -3077,20 +3063,17 @@ void xfrm_flush_gc(void)
 }
 EXPORT_SYMBOL(xfrm_flush_gc);

-/* Temporarily located here until net/xfrm/xfrm_tunnel.c is created */
-void xfrm_state_delete_tunnel(struct xfrm_state *x)
+static void xfrm_state_delete_tunnel(struct xfrm_state *x)
 {
 	if (x->tunnel) {
 		struct xfrm_state *t = x->tunnel;

-		if (atomic_read(&t->tunnel_users) == 2)
+		if (atomic_dec_return(&t->tunnel_users) == 1)
 			xfrm_state_delete(t);
-		atomic_dec(&t->tunnel_users);
-		xfrm_state_put_sync(t);
+		xfrm_state_put(t);
 		x->tunnel = NULL;
 	}
 }
-EXPORT_SYMBOL(xfrm_state_delete_tunnel);

 u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
 {
@@ -3295,8 +3278,8 @@ void xfrm_state_fini(struct net *net)
 	unsigned int sz;

 	flush_work(&net->xfrm.state_hash_work);
+	xfrm_state_flush(net, IPSEC_PROTO_ANY, false);
 	flush_work(&xfrm_state_gc_work);
-	xfrm_state_flush(net, 0, false, true);

 	WARN_ON(!list_empty(&net->xfrm.state_all));

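The tunnel_users change closes a race: the old read-then-decrement could let two CPUs both observe the count at 2 (or both miss the threshold), whereas atomic_dec_return() folds the test and the decrement into one atomic step, so exactly one caller observes the transition and deletes the inner tunnel state. A runnable userspace sketch of that single-winner pattern, with C11 atomics standing in for atomic_t (names hypothetical):

    #include <stdatomic.h>
    #include <stdio.h>

    /* Exactly one of N callers sees the counter cross the threshold. */
    static atomic_int tunnel_users = 3;    /* e.g. three states share a tunnel */

    static void drop_tunnel_user(int id)
    {
        /* fetch_sub returns the old value; old == 2 means new == 1,
         * mirroring atomic_dec_return(&t->tunnel_users) == 1
         */
        if (atomic_fetch_sub(&tunnel_users, 1) == 2)
            printf("caller %d deletes the tunnel state\n", id);
        else
            printf("caller %d just drops its reference\n", id);
    }

    int main(void)
    {
        for (int id = 0; id < 3; id++)
            drop_tunnel_user(id);
        return 0;
    }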

@@ -977,6 +977,7 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
 	/* override default values from above */
 	xfrm_update_ae_params(x, attrs, 0);

+	xfrm_set_type_offload(x, attrs[XFRMA_OFFLOAD_DEV]);
 	/* configure the hardware if offload is requested */
 	if (attrs[XFRMA_OFFLOAD_DEV]) {
 		err = xfrm_dev_state_add(net, x,
@@ -2634,7 +2635,7 @@ static int xfrm_flush_sa(struct sk_buff *skb, struct nlmsghdr *nlh,
 	struct xfrm_usersa_flush *p = nlmsg_data(nlh);
 	int err;

-	err = xfrm_state_flush(net, p->proto, true, false);
+	err = xfrm_state_flush(net, p->proto, true);
 	if (err) {
 		if (err == -ESRCH) /* empty table */
 			return 0;


@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0

+import re
 import time

 from lib.py import ksft_pr, cmd, ip, rand_port, wait_port_listen
@@ -10,12 +11,11 @@ class GenerateTraffic:

         self.env = env

-        if port is None:
-            port = rand_port()
-        self._iperf_server = cmd(f"iperf3 -s -1 -p {port}", background=True)
-        wait_port_listen(port)
+        self.port = rand_port() if port is None else port
+        self._iperf_server = cmd(f"iperf3 -s -1 -p {self.port}", background=True)
+        wait_port_listen(self.port)
         time.sleep(0.1)
-        self._iperf_client = cmd(f"iperf3 -c {env.addr} -P 16 -p {port} -t 86400",
+        self._iperf_client = cmd(f"iperf3 -c {env.addr} -P 16 -p {self.port} -t 86400",
                                  background=True, host=env.remote)

         # Wait for traffic to ramp up
@@ -56,3 +56,16 @@ class GenerateTraffic:
             ksft_pr(">> Server:")
             ksft_pr(self._iperf_server.stdout)
             ksft_pr(self._iperf_server.stderr)
+        self._wait_client_stopped()
+
+    def _wait_client_stopped(self, sleep=0.005, timeout=5):
+        end = time.monotonic() + timeout
+        live_port_pattern = re.compile(fr":{self.port:04X} 0[^6] ")
+
+        while time.monotonic() < end:
+            data = cmd("cat /proc/net/tcp*", host=self.env.remote).stdout
+            if not live_port_pattern.search(data):
+                return
+            time.sleep(sleep)
+
+        raise Exception(f"Waiting for client to stop timed out after {timeout}s")

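The regex in _wait_client_stopped() leans on the /proc/net/tcp format: ports are printed as four uppercase hex digits and the socket state follows the peer address as two hex digits, with 06 being TCP_TIME_WAIT. So ":PORT 0[^6] " matches an entry talking to the iperf3 port in any state other than TIME_WAIT, i.e. a client connection that is still alive. A small C sketch of the same match (the sample row fragments are made up):

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        /* fragments: peer address 0100007F:1F90, then the state field */
        const char *live = " 0100007F:1F90 01 ";    /* 01 = ESTABLISHED */
        const char *done = " 0100007F:1F90 06 ";    /* 06 = TIME_WAIT   */
        char pattern[64];
        regex_t re;

        /* same shape as the Python f-string: ":{port:04X} 0[^6] " */
        snprintf(pattern, sizeof(pattern), ":%04X 0[^6] ", 0x1F90);
        if (regcomp(&re, pattern, REG_EXTENDED))
            return 1;

        printf("live row matches: %d\n", regexec(&re, live, 0, NULL, 0) == 0);
        printf("time-wait row matches: %d\n", regexec(&re, done, 0, NULL, 0) == 0);
        regfree(&re);
        return 0;
    }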

@@ -4,7 +4,8 @@ top_srcdir = ../../../../..

 CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)

-TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
+TEST_PROGS := mptcp_connect.sh mptcp_connect_mmap.sh mptcp_connect_sendfile.sh \
+	      mptcp_connect_checksum.sh pm_netlink.sh mptcp_join.sh diag.sh \
 	      simult_flows.sh mptcp_sockopt.sh userspace_pm.sh

 TEST_GEN_FILES = mptcp_connect pm_nl_ctl mptcp_sockopt mptcp_inq mptcp_diag


@@ -0,0 +1,5 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
+	"$(dirname "${0}")/mptcp_connect.sh" -C "${@}"


@@ -0,0 +1,5 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
+	"$(dirname "${0}")/mptcp_connect.sh" -m mmap "${@}"


@@ -0,0 +1,5 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
+	"$(dirname "${0}")/mptcp_connect.sh" -m sendfile "${@}"


@@ -93,32 +93,28 @@ ping_test()

 run_one_clash_test()
 {
 	local ns="$1"
-	local daddr="$2"
-	local dport="$3"
+	local ctns="$2"
+	local daddr="$3"
+	local dport="$4"
 	local entries
 	local cre

 	if ! ip netns exec "$ns" ./udpclash $daddr $dport;then
-		echo "FAIL: did not receive expected number of replies for $daddr:$dport"
-		ret=1
-		return 1
+		echo "INFO: did not receive expected number of replies for $daddr:$dport"
+		ip netns exec "$ctns" conntrack -S
+		# don't fail: check if clash resolution triggered after all.
 	fi

-	entries=$(conntrack -S | wc -l)
-	cre=$(conntrack -S | grep -v "clash_resolve=0" | wc -l)
+	entries=$(ip netns exec "$ctns" conntrack -S | wc -l)
+	cre=$(ip netns exec "$ctns" conntrack -S | grep "clash_resolve=0" | wc -l)

-	if [ "$cre" -ne "$entries" ] ;then
+	if [ "$cre" -ne "$entries" ];then
 		clash_resolution_active=1
 		return 0
 	fi

-	# 1 cpu -> parallel insertion impossible
-	if [ "$entries" -eq 1 ]; then
-		return 0
-	fi
-
-	# not a failure: clash resolution logic did not trigger, but all replies
-	# were received. With right timing, xmit completed sequentially and
+	# not a failure: clash resolution logic did not trigger.
+	# With right timing, xmit completed sequentially and
 	# no parallel insertion occurs.
 	return $ksft_skip
 }
@@ -126,20 +122,23 @@ run_one_clash_test()

 run_clash_test()
 {
 	local ns="$1"
-	local daddr="$2"
-	local dport="$3"
+	local ctns="$2"
+	local daddr="$3"
+	local dport="$4"
+	local softerr=0

 	for i in $(seq 1 10);do
-		run_one_clash_test "$ns" "$daddr" "$dport"
+		run_one_clash_test "$ns" "$ctns" "$daddr" "$dport"
 		local rv=$?
 		if [ $rv -eq 0 ];then
 			echo "PASS: clash resolution test for $daddr:$dport on attempt $i"
 			return 0
-		elif [ $rv -eq 1 ];then
-			echo "FAIL: clash resolution test for $daddr:$dport on attempt $i"
-			return 1
+		elif [ $rv -eq $ksft_skip ]; then
+			softerr=1
 		fi
 	done
+
+	[ $softerr -eq 1 ] && echo "SKIP: clash resolution for $daddr:$dport did not trigger"
 }

 ip link add veth0 netns "$nsclient1" type veth peer name veth0 netns "$nsrouter"
@@ -161,11 +160,11 @@ spawn_servers "$nsclient2"

 # exercise clash resolution with nat:
 # nsrouter is supposed to dnat to 10.0.2.1:900{0,1,2,3}.
-run_clash_test "$nsclient1" 10.0.1.99 "$dport"
+run_clash_test "$nsclient1" "$nsrouter" 10.0.1.99 "$dport"

 # exercise clash resolution without nat.
 load_simple_ruleset "$nsclient2"
-run_clash_test "$nsclient2" 127.0.0.1 9001
+run_clash_test "$nsclient2" "$nsclient2" 127.0.0.1 9001

 if [ $clash_resolution_active -eq 0 ];then
 	[ "$ret" -eq 0 ] && ret=$ksft_skip