==========================================
Xillybus driver for generic FPGA interface
==========================================

 - Introduction
  -- Background
  -- Xillybus Overview

 - Usage
  -- User interface
  -- Synchronization
  -- Seekable pipes

 - Internals
  -- Source code organization
  -- Pipe attributes
  -- Host never reads from the FPGA
  -- Channels, pipes, and the message channel
  -- Data streaming
  -- Data granularity
  -- Probing
  -- Buffer allocation
  -- The "nonempty" message (supporting poll)

Introduction
============

Background
----------

An FPGA (Field Programmable Gate Array) is a piece of logic hardware, which
can be programmed to become virtually anything that is usually found as a
dedicated chipset: for instance, a display adapter, a network interface card,
or even a processor with its peripherals. FPGAs are the LEGO of hardware:
designs are assembled from generic building blocks. It's usually pointless to
reimplement something that is already available on the market as a chipset,
so FPGAs are mostly used when special functionality is needed and the
production volume is relatively low (hence not justifying the development of
an ASIC).

To allow FPGA designers to focus on their specific project rather than
reinventing the wheel over and over again, pre-designed building blocks, IP
cores, are often used. These are the FPGA parallels of library functions. IP
cores may implement certain mathematical functions, a functional unit (e.g. a
USB interface), an entire processor (e.g. ARM) or anything else that might
come in handy. Think of them as building blocks, with electrical wires
dangling on the sides for connection to other blocks.

One of the daunting tasks in FPGA design is communicating with a full-blown
operating system (actually, with the processor running it): implementing the
low-level bus protocol and the somewhat higher-level interface with the host
(registers, interrupts, DMA etc.) is a project in itself. When the FPGA's
function is a well-known one (e.g. a video adapter card, or a NIC), it can
make sense to design the FPGA's interface logic specifically for the project.
A special driver is then written to present the FPGA as a well-known interface
to the kernel and/or user space, and the FPGA is treated like any other
device on the bus.

It's common, however, that the desired data communication doesn't fit any
well-known peripheral function, and that the effort of designing an elegant
abstraction for the data exchange is considered too big. In those cases, a
quicker solution is to keep the project-specific logic in user space. This
still requires designing some interface logic for the FPGA, and writing a
simple ad hoc driver for the kernel.

Xillybus Overview
-----------------

Xillybus is an IP core and a Linux driver. Together, they form a kit for
elementary data transport between an FPGA and the host, providing pipe-like
data streams with a straightforward user interface. It's intended as a
low-effort solution for mixed FPGA-host projects, for which it makes sense to
have the project-specific part of the driver running in a user-space program.

Since the communication requirements may vary significantly from one
project to another (the number of data pipes needed in each direction and
their attributes), there isn't one specific chunk of logic being the Xillybus
IP core. Rather, the IP core is configured and built according to each
project's specification.

Xillybus presents independent data streams, which resemble pipes or TCP/IP
streams to the user. At the host side, a character device file is used just
like any pipe file. On the FPGA side, hardware FIFOs are used to stream
the data. This is contrary to a common method of communicating through
fixed-sized buffers (even though such buffers are used by Xillybus under the
hood). There may be more than a hundred of these streams on a single IP core,
but also as few as one, depending on the configuration.

In order to ease the deployment of the Xillybus IP core, it contains a simple
data structure which completely defines the core's configuration. The Linux
driver fetches this data structure during its initialization, and sets up the
DMA buffers and character devices accordingly. As a result, a single
driver works out of the box with any Xillybus IP core.

Usage
=====

User interface
--------------

On the host, all interface with Xillybus is done through /dev/xillybus_*
device files, which are generated automatically as the driver loads. The
names of these files depend on the IP core that is loaded in the FPGA (see
Probing below). To communicate with the FPGA, open the device file that
corresponds to the hardware FIFO you want to send data to or receive data
from, and use plain write() or read() calls, just like with a regular pipe.
In particular, it makes perfect sense to go:

    $ cat mydata > /dev/xillybus_thisfifo

    $ cat /dev/xillybus_thatfifo > hisdata

possibly pressing CTRL-C at some stage, even though the xillybus_* pipes have
the capability to send an EOF (but may not use it).

The driver and hardware are designed to behave sensibly as pipes, including:

* Supporting non-blocking I/O (by setting O_NONBLOCK on open()).

* Supporting poll() and select().

* Being bandwidth efficient under load (using DMA), but also handling small
  pieces of data sent across (like TCP/IP) by autoflushing.
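
The same calls apply from a program. As a minimal sketch in C, assuming a
hypothetical FPGA-to-host device file named /dev/xillybus_thatfifo:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        int fd = open("/dev/xillybus_thatfifo", O_RDONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* read() blocks until at least one byte arrives, like a pipe */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);

        if (n < 0)
            perror("read");

        close(fd);
        return 0;
    }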

Synchronization
---------------

Xillybus pipes are configured (on the IP core) to be either synchronous or
asynchronous. For a synchronous pipe, write() returns successfully only after
the data has been submitted to and acknowledged by the FPGA; an asynchronous
write() returns as soon as the data has been copied into the DMA buffers.
Either way, write() blocks (or fails, if O_NONBLOCK is set) when there is no
room in the buffers to store any of the data.

When a pipe is configured asynchronous, data flows to or from the FPGA
as soon as the respective device file is opened, regardless of whether the
data is consumed at the other end. On a synchronous pipe, by contrast, data
is transferred only on demand: in particular, only the amount
of data requested by a read() call is transmitted.
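
Because read() and write() may return having transferred fewer bytes than
requested (see the allowpartial attribute under Pipe attributes below),
robust code loops until the whole buffer has been handled. A minimal sketch:

    #include <unistd.h>
    #include <errno.h>

    /* Write a whole buffer to a Xillybus pipe, looping because write()
     * may legitimately return with fewer bytes than requested. */
    static int write_all(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t sent = write(fd, buf, len);

            if (sent < 0) {
                if (errno == EINTR)
                    continue; /* interrupted by a signal; retry */
                return -1;
            }
            buf += sent;
            len -= sent;
        }
        return 0;
    }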

Seekable pipes
--------------

A synchronous pipe can be configured to have the stream's position exposed
to the user logic at the FPGA. Such a pipe is also seekable on the host API.
With this feature, a memory or register interface can be attached on the
FPGA side to the seekable stream. Reading or writing at a certain address in
the attached memory is done by seeking to the desired address, and calling
read() or write() as required.
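
A minimal sketch of this access pattern; the device file name and the
register address below are hypothetical:

    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    int read_fpga_reg(uint32_t *val)
    {
        int fd = open("/dev/xillybus_mem", O_RDONLY);

        if (fd < 0)
            return -1;

        /* Seek to the desired address, then read as usual */
        if (lseek(fd, 0x20, SEEK_SET) < 0 ||
            read(fd, val, sizeof(*val)) != sizeof(*val)) {
            close(fd);
            return -1;
        }

        close(fd);
        return 0;
    }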

Internals
=========

Source code organization
------------------------

The Xillybus driver consists of a core module, xillybus_core.c, and modules
that depend on the specific bus interface (xillybus_of.c and xillybus_pcie.c).
The bus-specific modules register themselves on their respective buses and,
when probed, hand control over to the core module, supplying it with a set of
functions which execute the DMA-related operations on the bus.
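
The exact interface is best read from the source; purely as an illustration
of the idea (the structure, names and signatures below are made up, not the
driver's actual ones), the set of bus operations could look like:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: each bus module fills this in at probe time,
     * so the core never needs to know which bus it runs on. */
    struct bus_dma_ops {
        int  (*map_buffer)(void *dev, void *buf, size_t len,
                           int to_device, uint64_t *bus_addr);
        void (*sync_for_cpu)(void *dev, uint64_t bus_addr, size_t len);
        void (*sync_for_device)(void *dev, uint64_t bus_addr, size_t len);
    };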

Pipe attributes
---------------

Each pipe has a number of attributes which are set when the FPGA component
is built, and which the driver fetches from the IP core's configuration data
structure during initialization:

* is_writebuf: The pipe's direction. A non-zero value means it's an FPGA to
  host pipe (the FPGA "writes").

* channelnum: The pipe's identification number in communication between the
  host and the FPGA.

* allowpartial: A non-zero value means that a read() or write() (whichever
  applies) may return with less than the requested number of bytes. The
  common choice is a non-zero value, to match standard UNIX behavior.

* synchronous: A non-zero value means that the pipe is synchronous. See
  Synchronization above.

* bufsize: Each DMA buffer's size. Always a power of two.

* bufnum: The number of buffers allocated for this pipe. Always a power of
  two.

* exclusive_open: A non-zero value forces exclusive opening of the associated
  device file: while it's open, further attempts to open it fail.

* seekable: A non-zero value indicates that the pipe is seekable. See
  Seekable pipes above.

* supports_nonempty: A non-zero value (which is typical) indicates that the
  FPGA sends the messages that are necessary to support poll() and select()
  on this pipe. See The "nonempty" message below.
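
Purely as an illustration (this is not the driver's internal representation),
the attributes above can be pictured as one record per pipe:

    #include <stdbool.h>

    struct pipe_attrs {
        bool     is_writebuf;       /* nonzero: FPGA-to-host pipe */
        unsigned channelnum;        /* ID used on the host-FPGA link */
        bool     allowpartial;      /* short read()/write() allowed */
        bool     synchronous;       /* see Synchronization above */
        unsigned bufsize;           /* DMA buffer size, power of two */
        unsigned bufnum;            /* number of buffers, power of two */
        bool     exclusive_open;    /* one opener at a time */
        bool     seekable;          /* see Seekable pipes above */
        bool     supports_nonempty; /* FPGA sends "nonempty" messages */
    };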

Host never reads from the FPGA
------------------------------

Even though PCI Express is hotpluggable in general, a typical motherboard
doesn't expect a card to go away all of a sudden. But since the PCIe card
is based on reprogrammable logic, a sudden disappearance from the bus is
quite likely as a result of an accidental reprogramming of the FPGA while the
host is up. If the host then attempts to read from an address mapped to the
device, that leads to an immediate freeze of the system on some motherboards,
depending on how the BIOS handles the resulting bus error.

As a precaution, the host never reads from the FPGA: all data from the FPGA
arrives through DMA buffers that the FPGA writes into host memory. In
particular, the driver
doesn't follow the common practice of checking a status register when it's
notified of an event by an interrupt; instead, the FPGA writes the relevant
information into a dedicated message buffer in host memory.
This mechanism is used on non-PCIe buses as well for the sake of uniformity.
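
As a conceptual sketch of this scheme (made-up names, not the driver's actual
code), an interrupt handler only ever touches host memory:

    #include <stdint.h>

    void handle_interrupt(volatile uint32_t *msg_buf, unsigned nwords)
    {
        for (unsigned i = 0; i < nwords; i++) {
            uint32_t msg = msg_buf[i]; /* a plain host memory read */
            /* ... decode opcode and channel number from msg ... */
            (void)msg;
        }
        /* No MMIO read of the FPGA takes place anywhere in this path */
    }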

Channels, pipes, and the message channel
----------------------------------------

Each of the (possibly bidirectional) pipes presented to the user is allocated
a data channel between the FPGA and the host. The distinction between channels
and pipes is necessary only because of channel 0, which is used for interrupt-
related messages, and is not exposed to the user.

Data streaming
--------------

Even though a non-segmented data stream is presented to the user at both
sides, the implementation relies on a set of DMA buffers which is allocated
for each channel. For the sake of illustration, let's take the FPGA to host
direction: as data streams from the user logic into the hardware FIFO in the
FPGA, the Xillybus IP core writes it to one of the DMA buffers. When the
buffer fills up, it's handed over to the host, which makes its content
available through read() on the character device, and eventually returns the
buffer to the FPGA for reuse.

This is not good enough for creating a TCP/IP-like stream: if the data flow
stops before a DMA buffer has filled up completely, the data that is already
in the buffer should arrive at the reader anyhow. For that purpose, the IP
core flushes a partially filled buffer after a certain time has passed with
no new data arriving. The timeout is chosen so that it
balances between bus bandwidth efficiency (preventing a lot of partially
filled buffers being sent) and a latency held fairly low for tails of data.

A similar setting is used in the host to FPGA direction. The handling of
partially filled buffers is different, though: the driver autoflushes them
after a short while, so that a program can trickle small pieces of data
through write() and yet enjoy a stream-like interface.

Note that the issue of partial buffer flushing is irrelevant for pipes having
the synchronous attribute, since no data is left lying in the DMA buffers
between I/O calls on such pipes in the first place.
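
As a conceptual sketch of the rule just described (the names and the timeout
figure below are made up, not the IP core's actual values):

    #include <stdbool.h>
    #include <stdint.h>

    #define AUTOFLUSH_MS 10 /* illustrative figure only */

    /* Hand a buffer over when it's full, or when partial data has been
     * sitting in it for longer than the autoflush timeout. */
    bool should_flush(unsigned bytes_in_buf, unsigned bufsize,
                      uint64_t now_ms, uint64_t first_byte_ms)
    {
        if (bytes_in_buf == bufsize)
            return true; /* buffer full: send it */
        return bytes_in_buf > 0 &&
               (now_ms - first_byte_ms) >= AUTOFLUSH_MS; /* data tail */
    }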

Data granularity
----------------

Each pipe is configured with a word granularity of 8, 16, 32 or 64 bits.
Data is handed to the user logic in whole words only on the
FPGA, so the transmission of up to one word may be held until it's fully
occupied with data.

This somewhat complicates the handling of host to FPGA streams, because
when write() is called with a byte count that isn't a multiple of the word
size, the last bytes don't form a whole word on the side of
the FPGA, and hence can't be sent. To prevent loss of data, these leftover
bytes are kept by the driver, and are submitted along with the data of the
following write() call.
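
For example, with a 32-bit (4-byte) word width, a write() of 10 bytes can
submit only 8 bytes as whole words, and 2 bytes are kept for the next call.
A sketch of the arithmetic:

    #include <stddef.h>

    /* Split a byte count into the part that forms whole words and the
     * leftover bytes that must wait for more data. */
    size_t whole_words_bytes(size_t count, size_t word_size,
                             size_t *leftover)
    {
        *leftover = count % word_size; /* e.g. 10 % 4 == 2 */
        return count - *leftover;      /* e.g. 8 bytes go out now */
    }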

Probing
-------

As mentioned earlier, the number of pipes that are created when the driver
loads, and their attributes, depend on the Xillybus IP core in the FPGA. This
information is conveyed in the Interface Description Table (IDT), which the
driver fetches from the FPGA during its initialization. The bootstrap is done
in three phases:

1. Acquire the length of the IDT, so a buffer can be allocated for it. This
   is done with a dedicated command, whose acknowledgement by the FPGA
   carries the IDT's length.

2. Acquire the IDT itself into the allocated buffer.

3. Create the device files and set up the DMA buffers according to the IDT.
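
In rough outline (the three helper functions below are hypothetical
placeholders, declared here only so the sketch compiles):

    extern int   get_idt_length(unsigned *len);      /* phase 1 */
    extern void *alloc_and_fetch_idt(unsigned len);  /* phase 2 */
    extern int   create_devices_from_idt(void *idt); /* phase 3 */

    int discover_pipes(void)
    {
        unsigned len;
        void *idt;

        if (get_idt_length(&len))
            return -1;
        idt = alloc_and_fetch_idt(len);
        if (!idt)
            return -1;
        return create_devices_from_idt(idt);
    }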

Buffer allocation
-----------------

In order to simplify the logic that prevents illegal boundary crossings of
DMA transactions on the bus, a buffer is never allowed to cross an address
boundary of its own size, i.e. it's naturally aligned to its size. Since
all buffers' sizes are powers of two, it's possible to pack any set of such
buffers into allocated memory chunks, with a maximal waste of one page of
memory.

The allocation of buffer memory takes place in the same order they appear in
the IDT. When a buffer doesn't fit into the memory that has already been
allocated, the necessary number of pages is requested from the kernel, and
these are used for this buffer and the ones that follow it.
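
The packing arithmetic boils down to aligning each buffer's offset up to its
own (power-of-two) size. A sketch; for instance, placing a 1024-byte buffer
at a current offset of 0x900 yields offset 0xc00:

    #include <stdint.h>

    /* Return the placement offset for a buffer of the given size
     * (a power of two), so it never crosses a boundary of that size. */
    uint64_t place_buffer(uint64_t offset, uint64_t size)
    {
        return (offset + size - 1) & ~(size - 1); /* align up to size */
    }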

The "nonempty" message (supporting poll)
----------------------------------------

To support poll() and select() on FPGA to host pipes, the host must know not
only about data that has already arrived in the DMA buffers, but also about
data that has begun to queue in the FPGA's FIFOs. For that purpose, the FPGA
sends a "nonempty" message on the message channel when data starts waiting on
a channel. Since these messages consume some bus bandwidth, the IP core can
be configured not to send them for a slight reduction of bandwidth (see the
supports_nonempty attribute above).
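
A minimal user-space sketch of what these messages enable; the device file
name is hypothetical:

    #include <poll.h>
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        struct pollfd pfd;
        char buf[1024];

        pfd.fd = open("/dev/xillybus_thatfifo", O_RDONLY | O_NONBLOCK);
        if (pfd.fd < 0) {
            perror("open");
            return 1;
        }
        pfd.events = POLLIN;

        /* Sleep until the FPGA has data for us, then read it */
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
            ssize_t n = read(pfd.fd, buf, sizeof(buf));
            if (n > 0)
                fprintf(stderr, "got %zd bytes\n", n);
        }

        close(pfd.fd);
        return 0;
    }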