--- idxd.h (da5a11d75d6837c9c5ef40810f66ce9d2db6ca5e)
+++ idxd.h (700af3a0a26cbac87e4a0ae1dfa79597d0056d5f)

 /* SPDX-License-Identifier: GPL-2.0 */
 /* Copyright(c) 2019 Intel Corporation. All rights rsvd. */
 #ifndef _IDXD_H_
 #define _IDXD_H_
 
 #include <linux/sbitmap.h>
 #include <linux/dmaengine.h>
 #include <linux/percpu-rwsem.h>
 #include <linux/wait.h>
 #include <linux/cdev.h>
 #include <linux/idr.h>
 #include <linux/pci.h>
 #include <linux/perf_event.h>
 #include "registers.h"
 
 #define IDXD_DRIVER_VERSION "1.00"
 
 extern struct kmem_cache *idxd_desc_pool;
 
-struct idxd_device;
 struct idxd_wq;
+struct idxd_dev;
 
+enum idxd_dev_type {
+	IDXD_DEV_NONE = -1,
+	IDXD_DEV_DSA = 0,
+	IDXD_DEV_IAX,
+	IDXD_DEV_WQ,
+	IDXD_DEV_GROUP,
+	IDXD_DEV_ENGINE,
+	IDXD_DEV_CDEV,
+	IDXD_DEV_MAX_TYPE,
+};
+
+struct idxd_dev {
+	struct device conf_dev;
+	enum idxd_dev_type type;
+};
+
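Wrapping the bare struct device together with a type tag means code that is handed only a struct device pointer can work out which kind of idxd object it belongs to. A minimal hypothetical sketch of that idea (the helper below is illustrative only, not part of this diff):

/* Hypothetical helper, illustrative only: recover the wrapper from its
 * embedded conf_dev and check which kind of idxd object it represents.
 */
static inline bool example_dev_is_wq(struct device *dev)
{
	struct idxd_dev *idxd_dev = container_of(dev, struct idxd_dev, conf_dev);

	return idxd_dev->type == IDXD_DEV_WQ;
}
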
 #define IDXD_REG_TIMEOUT 50
 #define IDXD_DRAIN_TIMEOUT 5000
 
 enum idxd_type {
 	IDXD_TYPE_UNKNOWN = -1,
 	IDXD_TYPE_DSA = 0,
 	IDXD_TYPE_IAX,
 	IDXD_TYPE_MAX,
--- 16 unchanged lines hidden ---
 	/*
 	 * Lock to protect access between irq thread process descriptor
 	 * and irq thread processing error descriptor.
 	 */
 	spinlock_t list_lock;
 };
 
 struct idxd_group {
-	struct device conf_dev;
+	struct idxd_dev idxd_dev;
 	struct idxd_device *idxd;
 	struct grpcfg grpcfg;
 	int id;
 	int num_engines;
 	int num_wqs;
 	bool use_token_limit;
 	u8 tokens_allowed;
 	u8 tokens_reserved;
--- 42 unchanged lines hidden ---
 	IDXD_WQT_NONE = 0,
 	IDXD_WQT_KERNEL,
 	IDXD_WQT_USER,
 };
 
 struct idxd_cdev {
 	struct idxd_wq *wq;
 	struct cdev cdev;
-	struct device dev;
+	struct idxd_dev idxd_dev;
 	int minor;
 };
 
 #define IDXD_ALLOCATED_BATCH_SIZE 128U
 #define WQ_NAME_SIZE 1024
 #define WQ_TYPE_SIZE 10
 
 enum idxd_op_type {
--- 11 unchanged lines hidden ---
 	struct dma_chan chan;
 	struct idxd_wq *wq;
 };
 
 struct idxd_wq {
 	void __iomem *portal;
 	struct percpu_ref wq_active;
 	struct completion wq_dead;
-	struct device conf_dev;
+	struct idxd_dev idxd_dev;
 	struct idxd_cdev *idxd_cdev;
 	struct wait_queue_head err_queue;
 	struct idxd_device *idxd;
 	int id;
 	enum idxd_wq_type type;
 	struct idxd_group *group;
 	int client_count;
 	struct mutex wq_lock;	/* mutex for workqueue */
--- 18 unchanged lines hidden ---
 	struct idxd_dma_chan *idxd_chan;
 	char name[WQ_NAME_SIZE + 1];
 	u64 max_xfer_bytes;
 	u32 max_batch_size;
 	bool ats_dis;
 };
 
 struct idxd_engine {
-	struct device conf_dev;
+	struct idxd_dev idxd_dev;
 	int id;
 	struct idxd_group *group;
 	struct idxd_device *idxd;
 };
 
 /* shadow registers */
 struct idxd_hw {
 	u32 version;
--- 27 unchanged lines hidden ---
 	const char *name_prefix;
 	enum idxd_type type;
 	struct device_type *dev_type;
 	int compl_size;
 	int align;
 };
 
 struct idxd_device {
-	struct device conf_dev;
+	struct idxd_dev idxd_dev;
 	struct idxd_driver_data *data;
 	struct list_head list;
 	struct idxd_hw hw;
 	enum idxd_device_state state;
 	unsigned long flags;
 	int id;
 	int major;
 	u8 cmd_status;
--- 66 unchanged lines hidden ---
 /*
  * This is software defined error for the completion status. We overload the error code
  * that will never appear in completion status and only SWERR register.
  */
 enum idxd_completion_status {
 	IDXD_COMP_DESC_ABORT = 0xff,
 };
 
-#define confdev_to_idxd(dev) container_of(dev, struct idxd_device, conf_dev)
-#define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev)
+#define idxd_confdev(idxd) &idxd->idxd_dev.conf_dev
+#define wq_confdev(wq) &wq->idxd_dev.conf_dev
+#define engine_confdev(engine) &engine->idxd_dev.conf_dev
+#define group_confdev(group) &group->idxd_dev.conf_dev
+#define cdev_dev(cdev) &cdev->idxd_dev.conf_dev
 
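The two old macros mapped a struct device back to its container; their replacements go the other way, handing out the conf_dev embedded in the wrapper. A minimal sketch of how a call site changes under the new layout (the registration function below is illustrative, not quoted from the driver):

/* Illustrative only: registering the config device before and after. */
static int example_register_idxd(struct idxd_device *idxd)
{
	/* old layout: the struct device was embedded directly */
	/* return device_register(&idxd->conf_dev); */

	/* new layout: reach it through the wrapper via the accessor macro,
	 * which expands to &idxd->idxd_dev.conf_dev
	 */
	return device_register(idxd_confdev(idxd));
}
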
+#define confdev_to_idxd_dev(dev) container_of(dev, struct idxd_dev, conf_dev)
+
+static inline struct idxd_device *confdev_to_idxd(struct device *dev)
+{
+	struct idxd_dev *idxd_dev = confdev_to_idxd_dev(dev);
+
+	return container_of(idxd_dev, struct idxd_device, idxd_dev);
+}
+
+static inline struct idxd_wq *confdev_to_wq(struct device *dev)
+{
+	struct idxd_dev *idxd_dev = confdev_to_idxd_dev(dev);
+
+	return container_of(idxd_dev, struct idxd_wq, idxd_dev);
+}
+
+static inline struct idxd_engine *confdev_to_engine(struct device *dev)
+{
+	struct idxd_dev *idxd_dev = confdev_to_idxd_dev(dev);
+
+	return container_of(idxd_dev, struct idxd_engine, idxd_dev);
+}
+
+static inline struct idxd_group *confdev_to_group(struct device *dev)
+{
+	struct idxd_dev *idxd_dev = confdev_to_idxd_dev(dev);
+
+	return container_of(idxd_dev, struct idxd_group, idxd_dev);
+}
+
+static inline struct idxd_cdev *dev_to_cdev(struct device *dev)
+{
+	struct idxd_dev *idxd_dev = confdev_to_idxd_dev(dev);
+
+	return container_of(idxd_dev, struct idxd_cdev, idxd_dev);
+}
+
+static inline void idxd_dev_set_type(struct idxd_dev *idev, int type)
+{
+	if (type >= IDXD_DEV_MAX_TYPE) {
+		idev->type = IDXD_DEV_NONE;
+		return;
+	}
+
+	idev->type = type;
+}
+
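Each of these helpers undoes the embedding in two container_of() steps: from the struct device the driver core hands back, to the idxd_dev wrapper, to the owning object. A minimal sketch of the kind of caller they serve, assuming a sysfs attribute whose name is made up purely for illustration:

/* Hypothetical sysfs callback, illustrative only: the device core passes in
 * the embedded conf_dev, and the helper recovers the owning wq.
 */
static ssize_t example_wq_id_show(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
	struct idxd_wq *wq = confdev_to_wq(dev);

	return sysfs_emit(buf, "%d\n", wq->id);
}

At allocation time each object's wrapper would presumably also be tagged, e.g. idxd_dev_set_type(&wq->idxd_dev, IDXD_DEV_WQ), so that code holding only a struct idxd_dev can tell the object kinds apart.
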
 extern struct bus_type dsa_bus_type;
 extern struct bus_type iax_bus_type;
 
 extern bool support_enqcmd;
 extern struct ida idxd_ida;
 extern struct device_type dsa_device_type;
 extern struct device_type iax_device_type;
 extern struct device_type idxd_wq_device_type;
--- 189 unchanged lines hidden ---