Lines matching "0", "job", "ring"
28 #include <linux/dma-fence.h>
36 * DRM_SCHED_FENCE_DONT_PIPELINE - Prevent dependency pipelining
45 * DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT - A fence deadline hint has been set
63 * to an array, and as such should start at 0.
75 * struct drm_sched_entity - A wrapper around a job queue (typically
79 * ring, and the scheduler will alternate between entities based on
96 * Lock protecting the run-queue (@rq) to which this entity belongs,
173 * The dependency fence of the job which is on the top of the job queue.
194 * Points to the finished fence of the last scheduled job. Only written
201 * @last_user: last group leader pushing a job into the entity.
225 * Marks earliest job waiting in SW queue
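The entity excerpts above describe a per-client job queue that feeds one or more schedulers, with the scheduler alternating between entities on a run-queue according to its policy. As a minimal sketch of the usual setup, assuming a hypothetical foo driver and a single-scheduler list (neither comes from this header):

#include <drm/gpu_scheduler.h>

/* Hypothetical per-client context; the "foo" names are placeholders. */
struct foo_context {
        struct drm_sched_entity entity;
};

static int foo_context_init(struct foo_context *ctx,
                            struct drm_gpu_scheduler *sched)
{
        /* A single scheduler here; entities may also load-balance across several. */
        struct drm_gpu_scheduler *sched_list[] = { sched };

        return drm_sched_entity_init(&ctx->entity, DRM_SCHED_PRIORITY_NORMAL,
                                     sched_list, ARRAY_SIZE(sched_list),
                                     NULL /* no shared guilty counter */);
}

static void foo_context_fini(struct foo_context *ctx)
{
        /* Flushes queued jobs and tears the entity down. */
        drm_sched_entity_destroy(&ctx->entity);
}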
239 * struct drm_sched_rq - queue of entities to be scheduled.
248 * one specific ring. It implements the scheduling policy that selects
262 * struct drm_sched_fence - fences corresponding to the scheduling of a job.
267 * when the job is scheduled.
273 * when the job is completed.
275 * When setting up an out fence for the job, you should use
291 * when scheduling the job on hardware. We signal the
296 * @sched: the scheduler instance to which the job having this struct
305 * @owner: job owner for debugging
312 * The client_id of the drm_file which owns the job.
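The drm_sched_fence excerpts above name the two scheduler fences: @scheduled signals when the job is picked up, @finished when it completes, and @finished is the fence to expose as the job's out fence. A hedged sketch of attaching it to a buffer's reservation object after drm_sched_job_arm(); the write usage and the assumption that the caller holds the dma_resv lock are simplifications, not taken from this header:

#include <linux/dma-resv.h>
#include <drm/gpu_scheduler.h>

/*
 * Attach the job's "finished" fence to a buffer's reservation object so that
 * later users of the buffer wait for this job. The caller is assumed to hold
 * the dma_resv lock, and write access is assumed.
 */
static int foo_attach_out_fence(struct drm_sched_job *job, struct dma_resv *resv)
{
        int ret;

        ret = dma_resv_reserve_fences(resv, 1);
        if (ret)
                return ret;

        dma_resv_add_fence(resv, &job->s_fence->finished, DMA_RESV_USAGE_WRITE);
        return 0;
}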
320 * struct drm_sched_job - A job to be run by an entity.
323 * @list: a job participates in the "pending" and "done" lists.
324 * @sched: the scheduler instance on which this job is scheduled.
325 * @s_fence: contains the fences for the scheduling of the job.
327 * @credits: the number of credits this job contributes to the scheduler
328 * @work: Helper to reschedule job kill to different context.
329 * @karma: increment on every hang caused by this job. If this exceeds the hang
330 * limit of the scheduler then the job is marked guilty and will not
332 * @s_priority: the priority of the job.
333 * @entity: the entity to which this job belongs.
336 * A job is created by the driver using drm_sched_job_init(), and
338 * to schedule the job.
344 * When the job was pushed into the entity queue.
351 * The scheduler this job is or will be scheduled on. Gets set by
383 * Contains the dependencies as struct dma_fence for this job, see
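The drm_sched_job excerpts above outline the lifecycle: the driver creates the job with drm_sched_job_init(), arms it with drm_sched_job_arm(), and pushes it to an entity to schedule it. A hedged end-to-end sketch; the drm_sched_job_init() argument list has changed across kernel versions, and the credit count and client id used here are assumptions:

#include <drm/gpu_scheduler.h>

/* Hypothetical job wrapper; only the embedded base is required by the API. */
struct foo_job {
        struct drm_sched_job base;
        struct dma_fence *hw_fence;     /* assumed: set once the ring runs the job */
        /* ... command stream, BO list, etc. ... */
};

static int foo_submit(struct foo_job *fjob, struct drm_sched_entity *entity,
                      void *owner, u64 client_id)
{
        int ret;

        /* Recent kernels also take a credit count and a drm_file client id. */
        ret = drm_sched_job_init(&fjob->base, entity, 1, owner, client_id);
        if (ret)
                return ret;

        /* Dependencies would be added here, before arming (see the helpers below). */

        drm_sched_job_arm(&fjob->base);

        /* From this point the scheduler owns the job's lifetime. */
        drm_sched_entity_push_job(&fjob->base);
        return 0;
}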
391 * enum drm_gpu_sched_stat - the scheduler's status
407 * struct drm_sched_backend_ops - Define the backend operations
416 * Called when the scheduler is considering scheduling this job next, to
417 * get another struct dma_fence for this job to block on. Once it
427 * @run_job: Called to execute the job once all of the dependencies
430 * @sched_job: the job to run
442 * This method is called in a workqueue context - either from the
455 * completed the job ("hardware fence").
461 * @timedout_job: Called when a job has taken too long to execute,
464 * @sched_job: The job that has timed out
470 * For a FIRMWARE SCHEDULER, each ring has one scheduler, and each
476 * that nothing is queued while the ring is being removed.
477 * 2. Remove the ring. The firmware will make sure that the
484 * one or more entities to one ring. This implies that all entities
493 * 2. Kill the entity the faulty job stems from.
494 * 3. Issue a GPU reset on all faulty rings (driver-specific).
495 * 4. Re-submit jobs on all impacted schedulers by re-submitting them to
514 * @free_job: Called once the job's finished fence has been signaled
527 * Drivers need to signal the passed job's hardware fence with an
528 * appropriate error code (e.g., -ECANCELED) in this callback. They
529 * must not free the job.
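The drm_sched_backend_ops excerpts above form the driver contract: run_job() starts the job on the hardware and returns the hardware fence, timedout_job() handles a hang, free_job() runs once the finished fence has signaled, and cancel_job() must signal the hardware fence with an error (e.g. -ECANCELED) without freeing the job. A hedged sketch, reusing the hypothetical foo_job wrapper from the submission sketch above and treating foo_ring_run() as a placeholder; the set of available callbacks and the timeout status names vary between kernel versions:

#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <drm/gpu_scheduler.h>

/* Placeholder: starts the job on the ring and returns its hardware fence. */
struct dma_fence *foo_ring_run(struct foo_job *fjob);

static struct dma_fence *foo_run_job(struct drm_sched_job *sched_job)
{
        struct foo_job *fjob = container_of(sched_job, struct foo_job, base);

        return foo_ring_run(fjob);
}

static enum drm_gpu_sched_stat foo_timedout_job(struct drm_sched_job *sched_job)
{
        /* Driver-specific reset/recovery as outlined in the steps above. */
        return DRM_GPU_SCHED_STAT_RESET; /* DRM_GPU_SCHED_STAT_NOMINAL on older kernels */
}

static void foo_free_job(struct drm_sched_job *sched_job)
{
        struct foo_job *fjob = container_of(sched_job, struct foo_job, base);

        drm_sched_job_cleanup(sched_job);
        kfree(fjob);
}

static void foo_cancel_job(struct drm_sched_job *sched_job)
{
        struct foo_job *fjob = container_of(sched_job, struct foo_job, base);

        /* Signal the hardware fence with an error; do not free the job here. */
        dma_fence_set_error(fjob->hw_fence, -ECANCELED);
        dma_fence_signal(fjob->hw_fence);
}

static const struct drm_sched_backend_ops foo_sched_ops = {
        .run_job        = foo_run_job,
        .timedout_job   = foo_timedout_job,
        .free_job       = foo_free_job,
        .cancel_job     = foo_cancel_job,
};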
539 * struct drm_gpu_scheduler - scheduler instance-specific data
544 * @timeout: the time after which a job is removed from the scheduler.
545 * @name: name of the ring for which this scheduler is being used.
546 * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
547 * as there's usually one run-queue per priority, but could be less.
548 * @sched_rq: An allocated array of run-queues of size @num_rqs;
552 * @job_id_count: used to assign a unique id to each job.
559 * @pending_list: the list of jobs which are currently in the job queue.
561 * @hang_limit: once the number of hangs caused by a job crosses this limit, it is marked
566 * @free_guilty: A hint to the timeout handler to free the guilty job.
571 * One scheduler is implemented for each hardware ring.
601 * struct drm_sched_init_args - parameters for initializing a DRM GPU scheduler
606 * @num_rqs: Number of run-queues. This may be at most DRM_SCHED_PRIORITY_COUNT,
607 * as there's usually one run-queue per priority, but may be less.
609 * @hang_limit: number of times to allow a job to hang before dropping it.
610 * This mechanism is DEPRECATED. Set it to 0.
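The drm_sched_init_args excerpts above describe the parameter struct that recent kernels pass to drm_sched_init(), with one scheduler instance per hardware (or firmware) ring. A hedged sketch, reusing the hypothetical foo_sched_ops from the backend-ops sketch; the credit limit, timeout, and name are arbitrary assumptions:

#include <drm/gpu_scheduler.h>

static int foo_ring_sched_init(struct drm_gpu_scheduler *sched,
                               struct device *dev)
{
        const struct drm_sched_init_args args = {
                .ops = &foo_sched_ops,          /* backend ops from the sketch above */
                .submit_wq = NULL,              /* let the scheduler allocate its own */
                .num_rqs = DRM_SCHED_PRIORITY_COUNT,
                .credit_limit = 64,             /* assumed ring capacity in credits */
                .hang_limit = 0,                /* deprecated, keep at 0 */
                .timeout = msecs_to_jiffies(500),
                .timeout_wq = NULL,
                .score = NULL,
                .name = "foo-ring0",
                .dev = dev,
        };

        return drm_sched_init(sched, &args);
}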
655 int drm_sched_job_init(struct drm_sched_job *job,
659 void drm_sched_job_arm(struct drm_sched_job *job);
661 int drm_sched_job_add_dependency(struct drm_sched_job *job,
663 int drm_sched_job_add_syncobj_dependency(struct drm_sched_job *job,
667 int drm_sched_job_add_resv_dependencies(struct drm_sched_job *job,
670 int drm_sched_job_add_implicit_dependencies(struct drm_sched_job *job,
673 bool drm_sched_job_has_dependency(struct drm_sched_job *job,
675 void drm_sched_job_cleanup(struct drm_sched_job *job);
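The declarations above are the job-dependency helpers; they are meant to be called after drm_sched_job_init() and before drm_sched_job_arm(). A hedged sketch that pulls dependencies from a reservation object, from a GEM object's implicit-sync fences, and from a binary syncobj; the handle, the usage value, and the error-handling policy are assumptions for illustration:

#include <drm/drm_gem.h>
#include <drm/gpu_scheduler.h>

static int foo_add_job_deps(struct drm_sched_job *job, struct drm_file *file,
                            struct drm_gem_object *obj, u32 syncobj_handle)
{
        int ret;

        /* Every fence currently attached to the BO's reservation object. */
        ret = drm_sched_job_add_resv_dependencies(job, obj->resv,
                                                  DMA_RESV_USAGE_BOOKKEEP);
        if (ret)
                goto err;

        /* Alternatively, only the implicit-sync fences, treating the job as a writer. */
        ret = drm_sched_job_add_implicit_dependencies(job, obj, true);
        if (ret)
                goto err;

        /* Wait on a binary syncobj (timeline point 0). */
        ret = drm_sched_job_add_syncobj_dependency(job, file, syncobj_handle, 0);
        if (ret)
                goto err;

        return 0;

err:
        /* The job was initialized but never armed, so it must be cleaned up. */
        drm_sched_job_cleanup(job);
        return ret;
}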
681 return s_job && atomic_inc_return(&s_job->karma) > threshold; /* in drm_sched_invalidate_job() */