Searched full:inference (Results 1 – 10 of 10) sorted by relevance
48 * @LP_WITHIN_INF: are we within inference?
71 * @inference: current inference
87 u32 inference; member
110 lp->inference = 0; in tcp_lp_init()
118 * Will only call newReno CA when away from inference.
281 /* calc inference */ in tcp_lp_pkts_acked()
284 lp->inference = 3 * delta; in tcp_lp_pkts_acked()
286 /* test if within inference */ in tcp_lp_pkts_acked()
287 if (lp->last_drop && (now - lp->last_drop < lp->inference)) in tcp_lp_pkts_acked()
308 * and will usually within threshold when within inference */ in tcp_lp_pkts_acked()
[all …]
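The tcp_lp_pkts_acked() matches above sketch TCP-LP's inference window: after a loss, the sender sets the window to 3 * delta and considers itself "within inference" until that much time has elapsed since the last drop. Below is a minimal, hedged C sketch of that check, written to compile outside the kernel; only the inference = 3 * delta assignment and the last_drop comparison come from the matches, the surrounding struct and function names are illustrative.

#include <stdint.h>
#include <stdbool.h>

/* Simplified TCP-LP-style state, limited to the fields visible in the matches. */
struct lp_state {
	uint32_t inference;  /* length of the inference window */
	uint32_t last_drop;  /* timestamp of the most recent loss event */
};

/* Recompute the inference window and report whether we are still inside it:
 * inference = 3 * delta, then compare the time since the last drop against it. */
static bool lp_within_inference(struct lp_state *lp, uint32_t now, uint32_t delta)
{
	/* calc inference */
	lp->inference = 3 * delta;

	/* test if within inference */
	return lp->last_drop && (now - lp->last_drop < lp->inference);
}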
19 - Edge AI - doing inference at an edge device. It can be an embedded ASIC/FPGA,
23 - Inference data-center - single/multi user devices in a large server. This
32 - Training data-center - Similar to Inference data-center cards, but typically
70 /// Type inference helper function.
80 /// inference help as `HasPinData`.
100 /// Type inference helper function.
233 //! // Ensure that `data` really is of type `PinData` and help with type inference:
1223 // Ensure that `data` really is of type `$data` and help with type inference:
1482 // get the correct type inference here:
1486 // We have to use type inference here to make zeroed have the correct type. This does
1510 // We abuse `slot` to get the correct type inference here:
14 designed to accelerate Deep Learning inference workloads.
169 * Device-unique inference ID (read-only)
310 * Performs Deep Learning Neural Compute Inference Operations
38 MODULE_PARM_DESC(inference_timeout_ms, "Inference maximum duration, in milliseconds, 0 - default");
199 vdev->timeout.inference; in ivpu_job_timeout_work()
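The matches above show a module parameter bounding how long an inference job may run before the driver's timeout worker intervenes. As a hedged illustration of that pattern, a declaration like the following would pair with the MODULE_PARM_DESC line; the parameter's type, permissions, and variable placement are assumptions, only the name and description string come from the match.

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Per the description string, 0 selects the driver's built-in default timeout. */
static unsigned long inference_timeout_ms;
module_param(inference_timeout_ms, ulong, 0644);
MODULE_PARM_DESC(inference_timeout_ms,
		 "Inference maximum duration, in milliseconds, 0 - default");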
74 /* Job status returned when the job was preempted mid-inference */
267 * past and which do not follow the string inference scheme that libbpf uses. in hashmap__empty()
628 * State | vT1CH | VIA_TIMER_1_INT | inference drawn in mac_read_clk()