                Driver Theory of Operation Manual

1. Introduction

This is a short text document that will describe the background, goals
for, and current theory of operation for the joint Fibre Channel/SCSI
HBA driver for QLogic hardware.

Because this driver is an ongoing project, do not expect this manual
to remain entirely up to date. Like a lot of software engineering, the
ultimate documentation is the driver source. However, this manual should
serve as a solid basis for attempting to understand where the driver
started and what is trying to be accomplished with the current source.

The reader is expected to understand the basics of SCSI and Fibre Channel
and to be familiar with the range of platforms that Solaris, Linux and
the variant "BSD" Open Source systems are available on. A glossary and
a few references will be placed at the end of the document.

There will be references to functions and structures within the body of
this document. These can be easily found within the source using editor
tags or grep. There will be few code examples here as the code already
exists where the reader can easily find it.

2. A Brief History for this Driver

This driver originally started as part of work funded by NASA Ames
Research Center's Numerical Aerodynamic Simulation center ("NAS" for
short) for the QLogic PCI 1020 and 1040 SCSI Host Adapters as part of my
work at porting the NetBSD Operating System to the Alpha architectures
(specifically the AlphaServer 8200 and 8400 platforms).  In short, it
started as just a simple single SCSI HBA driver for the purpose of
running off a SCSI disk. This work took place starting in January, 1997.

Because the first implementation was for NetBSD, which runs on a very
large number of platforms, and because NetBSD supported both systems with
SBus cards (e.g., Sun SPARC systems) as well as systems with PCI cards,
and because the QLogic SCSI cards came in both SBus and PCI versions, the
initial implementation followed the very thoughtful NetBSD design tenet
of splitting drivers into what are called MI (for Machine Independent)
and MD (Machine Dependent) portions. The original design therefore
started from the premise that the driver would drive both SBus and PCI
card variants. These busses are similar but have quite different
constraints, and while the QLogic SBus and PCI cards are very similar,
there are some significant differences.

After this initial goal had been met, there began to be some talk about
looking into implementing Fibre Channel mass storage at NAS. At this time
the QLogic 2100 FC/AL HBA was about to become available. After looking at
the way it was designed I concluded that it was so darned close to being
just like the SCSI HBAs that it would be insane to *not* leverage off of
the existing driver. So, we ended up with a driver for NetBSD that drove
PCI and SBus SCSI cards, and now also drove the QLogic 2100 FC-AL HBA.

After this, ports to non-NetBSD platforms became interesting as well.
This took the driver beyond NAS's original interest and attracted
support from a number of other places. Since the original NetBSD
development, the driver has been ported to FreeBSD, OpenBSD, Linux,
Solaris, and two proprietary systems. Following from the original MI/MD
design of NetBSD, a rather successful attempt has been made to keep the
Operating System Platform differences segregated and to a minimum.

Along the way, support for the 2200 as well as full fabric and target
mode support has been added, and 2300 support as well as an FC-IP stack
are planned.

3. Driver Design Goals

This driver did not start out as one normally would start such an effort.
Normally you design via top-down methodologies, set an initial goal,
and meet it. This driver has had a design goal that has changed almost
from the very first. This has been an extremely peculiar, if not risque,
experience. As a consequence, this section of the document contains
a bit of "reconstruction after the fact" in that the design goals are
as I perceive them to be now- not necessarily what they started as.

The primary design goal now is to have a driver that can run both the
SCSI and Fibre Channel SCSI protocols on multiple OS platforms with
as little OS platform support code as possible.

The intended support targets for SCSI HBAs are the single and
dual channel PCI Ultra2 and PCI Ultra3 cards as well as the older PCI
Ultra single channel cards and SBus cards.

The intended support targets for Fibre Channel HBAs are the 2100, 2200
and 2300 PCI cards.

Fibre Channel support should include complete fabric and public loop
as well as private loop and private loop direct-attach topologies.
FC-IP support is also a goal.

For both SCSI and Fibre Channel, simultaneous target/initiator mode support
is a goal.

Pure, raw performance is not a primary goal of this design. This design,
because it has a tremendous amount of code common across multiple
platforms, will undoubtedly never be able to beat the performance of a
driver that is specifically designed for a single platform and a single
card. However, it is a good strong secondary goal to make the performance
penalties in this design as small as possible.

Another primary aim, which almost need not be stated, is that the
implementation of platform differences must not clutter up the common
code with platform specific defines. Instead, some reasonable layering
semantics are defined such that platform specifics can be kept in the
platform specific code.

4. QLogic Hardware Architecture

In order to make the design of this driver more intelligible, some
description of the QLogic hardware architecture is in order. This will
not be an exhaustive description of how this card works, but will
note enough of the important features so that the driver design is
hopefully clearer.

4.1 Basic QLogic hardware

The QLogic HBA cards all contain a tiny 16-bit RISC-like processor and
varying sizes of SRAM. Each card contains a Bus Interface Unit (BIU)
as appropriate for the host bus (SBus or PCI).  The BIUs allow access
to a set of dual-ranked 16 bit incoming and outgoing mailbox registers
as well as access to control registers that control the RISC or access
other portions of the card (e.g., Flash BIOS). The term 'dual-ranked'
means that a write to a given host visible mailbox address stores into
one (incoming, to the HBA) mailbox register, while a read from the same
address returns a different (outgoing, from the HBA) mailbox register
with completely different data. Each HBA also then has core and auxiliary
logic which either is used to interface to a SCSI bus (or to external
bus drivers that connect to a SCSI bus), or to connect to a Fibre
Channel bus.

4.2 Basic Control Interface

There are two principal I/O control mechanisms by which the driver
communicates with and controls the QLogic HBA. The first mechanism is to
use the incoming mailbox registers to interrupt and issue commands to
the RISC processor (with results usually, but not always, ending up in
the outgoing mailbox registers). The second mechanism is to establish,
via mailbox commands, circular request and response queues in system
memory that are then shared between the QLogic and the driver. The
request queue is used to queue requests (e.g., I/O requests) for the
QLogic HBA's RISC engine to copy into the HBA memory and process. The
response queue is used by the QLogic HBA's RISC engine to place results of
requests read from the request queue, as well as to place notification
of asynchronous events (e.g., incoming commands in target mode).

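To make the first mechanism concrete, here is a minimal sketch of issuing
a mailbox command by polling. It is illustrative only: the register offsets
(MAILBOX0, HOST_CONTROL, INTERRUPT_STATUS), the bit names and the
ISP_READ/ISP_WRITE/USEC_DELAY helpers are assumed stand-ins for whatever
the platform's mdvec register accessors and ispreg.h actually provide.

	/*
	 * Illustrative only: issue a mailbox command and poll for completion.
	 * Register names, bit names and the ISP_WRITE/ISP_READ/USEC_DELAY
	 * helpers are assumed, not the actual ispreg.h definitions.
	 */
	static int
	isp_mbox_cmd_sketch(struct ispsoftc *isp, uint16_t opcode, uint16_t parm,
	    uint16_t *result)
	{
		int i;

		/* Load the incoming (to the HBA) mailbox registers. */
		ISP_WRITE(isp, MAILBOX0, opcode);
		ISP_WRITE(isp, MAILBOX1, parm);

		/* Tell the RISC processor that a command is waiting. */
		ISP_WRITE(isp, HOST_CONTROL, HC_SET_HOST_INTERRUPT);

		/* Poll for the mailbox completion condition. */
		for (i = 0; i < 100000; i++) {
			if (ISP_READ(isp, INTERRUPT_STATUS) & MBOX_COMPLETION) {
				*result = ISP_READ(isp, MAILBOX0);
				ISP_WRITE(isp, INTERRUPT_STATUS, 0);	/* ack */
				return (0);
			}
			USEC_DELAY(100);	/* platform delay service */
		}
		return (-1);	/* timed out */
	}
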
To give a bit more precise scale to the preceding description, the QLogic
HBA has 8 dual-ranked 16 bit mailbox registers, mostly for out-of-band
control purposes. The QLogic HBA then utilizes a circular request queue
of 64 byte fixed size Queue Entries to receive normal initiator mode
I/O commands (or continue target mode requests). The request queue may
be up to 256 elements for the QLogic 1020 and 1040 chipsets, but may be
considerably larger for the QLogic 12X0/12160 SCSI and QLogic 2X00 Fibre
Channel chipsets.

In addition to synchronously initiated usage of mailbox commands by
the host system, the QLogic may also deliver asynchronous notifications
solely in outgoing mailbox registers. These asynchronous notifications in
mailboxes may be things like notification of SCSI Bus resets, or that the
Fabric Name server has sent a change notification, or even that a specific
I/O command completed without error (this is called 'Fast Posting'
and saves the QLogic HBA from having to write a response queue entry).

The QLogic HBA is an interrupting card, and when servicing an interrupt
you really only have to check for either a mailbox interrupt or an
interrupt notification that the response queue has an entry to
be dequeued.

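Under the same assumptions about accessor and bit names as the previous
sketch, the shape of that interrupt service check might look roughly as
follows; in the real driver this work is done by isp_intr, and the
isp_handle_*/isp_response_* helpers below are invented placeholders.

	/*
	 * Illustrative interrupt service outline; the real work is done by
	 * isp_intr.  The helpers and status bits here are invented.
	 */
	static void
	isp_intr_sketch(struct ispsoftc *isp)
	{
		uint16_t isr = ISP_READ(isp, INTERRUPT_STATUS);

		if (isr & MBOX_COMPLETION) {
			/* Mailbox command done, or an asynchronous mailbox event. */
			isp_handle_mbox_sketch(isp, ISP_READ(isp, MAILBOX0));
		}
		if (isr & RESPONSE_QUEUE_UPDATE) {
			/* Drain every response queue entry posted so far. */
			while (isp_response_pending_sketch(isp))
				isp_handle_response_sketch(isp);
		}
		ISP_WRITE(isp, INTERRUPT_STATUS, 0);	/* acknowledge */
	}
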
4.3 Fibre Channel SCSI out of SCSI

QLogic took the approach in introducing the 2X00 cards to just treat
FC-AL as a 'fat' SCSI bus (a SCSI bus with more than 15 targets). All
of the things that you really need to do with Fibre Channel with respect
to providing FC-4 services on top of a Class 3 connection are performed
by the RISC engine on the QLogic card itself. This means that from
an HBA driver point of view, very little needs to change that would
distinguish addressing a Fibre Channel disk from addressing a plain
old SCSI disk.

However, in the details it's not *quite* that simple. For example, in
order to manage Fabric Connections, the HBA driver has to do explicit
binding of entities it has queried from the name server to specific
'target' ids (targets, in this case, being a virtual entity).

Still, the HBA firmware really does nearly all of the tedious management
of Fibre Channel login state. The corollary to this sometimes is the
lack of ability to say why a particular login connection to a Fibre
Channel disk is not working well.

There are clear limits with the QLogic card in managing fabric devices.
The QLogic manages local loop devices (Loop ID or Target 0..126) itself,
but for the management of fabric devices, it has an absolute limit of
253 simultaneous connections (256 entries less 3 reserved entries).

5. Driver Architecture

5.1 Driver Assumptions

The first basic assumption for this driver is that the requirements for
a SCSI HBA driver for any system are those of a 2 or 3 layer model where
there are SCSI target device drivers (drivers which drive SCSI disks,
SCSI tapes, and so on), possibly a middle services layer, and a bottom
layer that manages the transport of SCSI CDBs out a SCSI bus (or across
Fibre Channel) to a SCSI device. It's assumed that each SCSI command is
a separate structure (or pointer to a structure) that contains the SCSI
CDB and a place to store SCSI Status and SCSI Sense Data.

This turns out to be a pretty good assumption. All of the Open Source
systems (*BSD and Linux) and most of the proprietary systems have this
kind of structure. This has been the way to manage SCSI subsystems for
at least ten years.

There are some additional basic assumptions that this driver makes,
primarily in the arena of basic simple services like memory zeroing,
memory copying, delay, sleep, and microtime functions. It doesn't assume
much more than this.

5.2 Overall Driver Architecture

The driver is split into a core (machine independent) module and platform
and bus specific outer modules (machine dependent).

The core code (in the files isp.c, isp_inline.h, ispvar.h, ispreg.h and
ispmbox.h) handles:

 + Chipset recognition and reset and firmware download (isp_reset)
 + Board Initialization (isp_init)
 + First level interrupt handling (response retrieval) (isp_intr)
 + A SCSI command queueing entry point (isp_start)
 + A set of control services accessed either via local requirements within
   the core module or via an externally visible control entry point
   (isp_control).

The platform/bus specific modules (and definitions) depend on each
platform, and they provide both definitions and functions for the core
module's use.  Generally a platform module set is split into a bus
dependent module (where configuration is begun from and bus specific
support functions reside) and a relatively thin platform specific layer
which serves as the interconnect with the rest of this platform's SCSI
subsystem.

For ease of bus specific access issues, a centralized soft state
structure is maintained for each HBA instance (struct ispsoftc). This
soft state structure contains a machine/bus dependent vector (mdvec)
for functions that read and write hardware registers, set up DMA for the
request/response queues and fibre channel scratch area, set up and tear
down DMA mappings for a SCSI command, provide a pointer to firmware to
load, and other minor things.

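To give a feel for the shape of this vector, here is a hedged sketch of
an mdvec-like structure. The field names are invented for illustration;
the authoritative declaration is struct ispmdvec in ispvar.h and it
differs in detail.

	/*
	 * Illustrative mdvec-like operations vector.  Field names are
	 * invented; see struct ispmdvec in ispvar.h for the real thing.
	 */
	struct isp_mdvec_sketch {
		uint16_t (*rd_reg)(struct ispsoftc *, int);
		void	(*wr_reg)(struct ispsoftc *, int, uint16_t);
		int	(*queue_dma_setup)(struct ispsoftc *);	/* queues, FC scratch */
		int	(*cmd_dma_setup)(struct ispsoftc *, XS_T *, void *);
		void	(*cmd_dma_teardown)(struct ispsoftc *, XS_T *, uint32_t);
		const uint16_t *fw_image;	/* firmware to download, if any */
	};
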
The machine dependent outer module must provide functional entry points
for the core module:

 + A SCSI command completion handoff point (isp_done)
 + An asynchronous event handler (isp_async)
 + A logging/printing function (isp_prt)

The machine dependent outer module code must also provide a set of
abstracting definitions which the core module utilizes heavily
to do its job. These are discussed in detail in the comments in the
file ispvar.h, but to give a sense of the range of what is required,
let's illustrate two basic classes of these defines.

The first class is the "structure definition/access" class. Examples
of these would be:

	XS_T            Platform SCSI transaction type (i.e., command for HBA)
	..
	XS_TGT(xs)      gets the target from an XS_T
	..
	XS_TAG_TYPE(xs) which type of tag to use
	..

The second class consists of 'functional' definitions. Some examples of
this class are:

	MEMZERO(dst, len)                       platform zeroing function
	..
	MBOX_WAIT_COMPLETE(struct ispsoftc *)   wait for mailbox cmd to be done

Note that the former is likely to be a simple replacement with bzero or
memset on most systems, while the latter could be quite complex.

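For a sense of what a platform might actually supply, here is an
illustrative sketch of such definitions. It is not copied from any port:
the XS_T type chosen, the field names used and the busy flag tested in
MBOX_WAIT_COMPLETE are all assumptions (FreeBSD, for instance, builds its
XS_T around CAM structures).

	/* Illustrative platform-side definitions; not copied from any port. */
	#define	XS_T			struct scsi_xfer	/* assumed command type */
	#define	XS_TGT(xs)		((xs)->target)		/* assumed field name */
	#define	XS_TAG_TYPE(xs)		((xs)->tag_type)	/* assumed field name */

	#define	MEMZERO(dst, len)	memset((dst), 0, (len))
	#define	MBOX_WAIT_COMPLETE(isp)				\
		while ((isp)->isp_mbox_busy)			\
			USEC_DELAY(100)	/* assumed flag and delay service */
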
This soft state structure also contains different parameter information
based upon whether this is a SCSI HBA or a Fibre Channel HBA (which is
filled in by the core module).

In order to clear up what is undoubtedly a seeming confusion of
interconnects, a description of the typical flow of code that performs
board initialization and command transactions may help.

5.3 Initialization Code Flow

Typically a bus specific module for a platform (e.g., one that wants
to configure a PCI card) is entered via that platform's configuration
methods. If this module recognizes a card and can utilize or construct the
space for the HBA instance softc, it does so, and initializes the machine
dependent vector as well as any other platform specific information that
can be hidden in or associated with this structure.

Configuration at this point usually involves mapping in board registers
and registering an interrupt. It's quite possible that the core module's
isp_intr function is adequate to be the interrupt entry point, but often
it's more useful to have a bus specific wrapper module that calls isp_intr.

After mapping and interrupt registry is done, isp_reset is called.
Part of the isp_reset call may cause callbacks out to the bus dependent
module to perform allocation and/or mapping of Request and Response
queues (as well as a Fibre Channel scratch area if this is a Fibre
Channel HBA).  The reason this is considered 'bus dependent' is that
only the bus dependent module may have the information that says how
one could perform I/O mapping for the Request and Response queues and
what that mapping depends upon (e.g., on a Solaris system). Another
callback can enable the *use* of interrupts should this platform be
able to finish configuration in interrupt driven mode.

If isp_reset is successful at resetting the QLogic chipset and downloading
new firmware (if available) and setting it running, isp_init is called. If
isp_init is successful in doing initial board setups (including reading
NVRAM from the QLogic card), then this bus specific module will call the
platform dependent module that takes the appropriate steps to 'register'
this HBA with this platform's SCSI subsystem.  Examining either the
OpenBSD or the NetBSD isp_pci.c or isp_sbus.c files may assist the reader
here in clarifying some of this.

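Put together, a bus specific attach path might be sketched as follows.
Only isp_reset, isp_init and the isp_state/ISP_RESETSTATE/ISP_INITSTATE
checks are taken from the core module's interface; map_registers,
establish_interrupt, isp_platform_intr_wrapper and platform_scsi_attach
are invented placeholders for the platform work described above.

	/*
	 * Illustrative bus specific attach outline.  The non-isp_* helpers
	 * are placeholders for platform work, not real functions.
	 */
	static int
	isp_pci_attach_sketch(struct ispsoftc *isp)
	{
		if (map_registers(isp) != 0)
			return (-1);
		if (establish_interrupt(isp, isp_platform_intr_wrapper) != 0)
			return (-1);

		isp_reset(isp);				/* reset chip, download firmware */
		if (isp->isp_state != ISP_RESETSTATE)
			return (-1);

		isp_init(isp);				/* NVRAM and ICB based setup */
		if (isp->isp_state != ISP_INITSTATE)
			return (-1);

		return (platform_scsi_attach(isp));	/* register with SCSI subsystem */
	}
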
5.4 Initiator Mode Command Code Flow

A successful execution of isp_init will lead to the driver 'registering'
itself with this platform's SCSI subsystem. One assumed action for this
is the registry of a function that the SCSI subsystem for this platform
will call when it has a SCSI command to run.

The platform specific module function that receives this will do whatever
it needs to prepare this command for execution in the core module. This
sounds vague, but it's also very flexible. In principle, this could be
a complete marshalling/demarshalling of this platform's SCSI command
structure (should it be impossible to represent in an XS_T). In addition,
this function can also block commands from running (if, e.g., Fibre
Channel loop state would preclude successful starting of the command).

When it's ready to do so, the function isp_start is called with this
command. This core module function tries to allocate request queue space
for this command. It also calls through the machine dependent vector
function to make sure any DMA mapping for this command is done.

Now, DMA mapping here is possibly a misnomer, as more than just
DMA mapping can be done in this bus dependent function. This is
also the place where any endian byte-swizzling will be done. At any
rate, this function is called last because the process of establishing
DMA addresses for any command may in fact consume more Request Queue
entries than there are currently available. If the mapping and other
functions are successful, the QLogic mailbox inbox pointer register
is updated to indicate to the QLogic that it has a new request to
read.

If this function is unsuccessful, policy as to what to do at this point is
left to the machine dependent platform function which called isp_start. In
some platforms, temporary resource shortages can be handled by the main
SCSI subsystem. In other platforms, the machine dependent code has to
handle this.

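A hedged sketch of such a platform entry point is shown below. It assumes
the CMD_QUEUED/CMD_EAGAIN/CMD_COMPLETE return conventions described in
ispvar.h (the exact set should be checked against the source), and the
ISP_LOCK/ISP_UNLOCK macros and platform_* helpers are placeholders.

	/*
	 * Illustrative platform "run this command" entry point.  The locking
	 * macros and platform_* helpers are placeholders.
	 */
	static void
	platform_scsi_cmd_sketch(struct ispsoftc *isp, XS_T *xs)
	{
		int result;

		ISP_LOCK(isp);				/* platform lock, see 5.7 */
		result = isp_start(xs);
		ISP_UNLOCK(isp);

		switch (result) {
		case CMD_QUEUED:
			break;				/* now owned by the HBA */
		case CMD_EAGAIN:
			platform_requeue_later(xs);	/* temporary resource shortage */
			break;
		case CMD_COMPLETE:
		default:
			platform_command_done(xs);	/* completed (or failed) inline */
			break;
		}
	}
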
In order to keep track of commands that are in progress, the soft state
structure contains an array of 'handles' that are associated with each
active command. When you send a command to the QLogic firmware, a portion
of the Request Queue entry can contain a non-zero handle identifier so
that later, when reading either a Response Queue entry or a Fast Posting
mailbox completion interrupt, you can take this handle and find the
command you were waiting on. It should be noted that this is probably
one of the most dangerous areas of this driver. Corrupted handles will
lead to system panics.

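The idea can be illustrated with a simplified handle table. The real
implementation lives in isp_inline.h (isp_save_xs and friends) and also
encodes validity information in the handle value, which this sketch
omits; the table size and names here are invented.

	/*
	 * Simplified handle table.  Index 0 is never used so that a zero
	 * handle can mean "no command"; real handles also carry extra bits.
	 */
	#define	N_HANDLES	64			/* assumed table size */

	static XS_T *xs_table[N_HANDLES + 1];

	static uint32_t
	save_xs_sketch(XS_T *xs)
	{
		uint32_t h;

		for (h = 1; h <= N_HANDLES; h++) {
			if (xs_table[h] == NULL) {
				xs_table[h] = xs;
				return (h);		/* stored in the queue entry */
			}
		}
		return (0);				/* no free handle */
	}

	static XS_T *
	find_xs_sketch(uint32_t handle)
	{
		if (handle == 0 || handle > N_HANDLES)
			return (NULL);			/* corrupted handle */
		return (xs_table[handle]);
	}
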
At some later point in time an interrupt will occur. Eventually,
isp_intr will be called. This core module function will determine the
cause of the interrupt, and whether it is for a completing command. That
is, it'll determine the handle and fetch the pointer to the command out of
storage within the soft state structure. Skipping over a lot of details,
the machine dependent code supplied function isp_done is called with the
pointer to the completing command. This would then be the glue layer that
informs the SCSI subsystem for this platform that a command is complete.

5.5 Asynchronous Events

Interrupts occur for events other than commands (mailbox or request queue
started commands) completing. These are called Asynchronous Mailbox
interrupts. When some external event causes the SCSI bus to be reset,
or when a Fibre Channel loop changes state (e.g., a LIP is observed),
this generates such an asynchronous event.

Each platform module has to provide an isp_async entry point that will
handle a set of these. This isp_async entry point also handles things
which aren't properly async events but are simply natural outgrowths
of code flow for another core function (see discussion on fabric device
management below).

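A hedged sketch of such an entry point follows. ISPASYNC_CHANGE_NOTIFY
and ISPASYNC_FABRIC_DEV are the codes referenced later in this document;
the bus reset code name, the exact argument convention and the platform_*
helpers are assumptions to be checked against ispvar.h.

	/*
	 * Illustrative platform isp_async handler.  The argument convention,
	 * the ISPASYNC_BUS_RESET name and the platform_* helpers are assumed.
	 */
	int
	isp_async(struct ispsoftc *isp, ispasync_t cmd, void *arg)
	{
		switch (cmd) {
		case ISPASYNC_BUS_RESET:
			platform_note_bus_reset(isp);		/* SCSI bus was reset */
			break;
		case ISPASYNC_CHANGE_NOTIFY:
			platform_schedule_fc_rescan(isp);	/* loop/fabric changed */
			break;
		case ISPASYNC_FABRIC_DEV:
			platform_consider_fabric_device(isp, arg); /* log in or ignore */
			break;
		default:
			break;
		}
		return (0);
	}
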
5.6 Target Mode Code Flow

This section could use a lot of expansion, but this covers the basics.

The QLogic cards, when operating in target mode, follow a code flow that is
essentially the inverse of that for initiator mode described above. In this
scenario, an interrupt occurs, and present on the Response Queue is a
queue entry element defining a new command arriving from an initiator.

This is passed to a possibly external target mode handler. This driver
provides some handling for this in a core module, but also leaves
things open enough that a completely different target mode handler
may accept this incoming queue entry.

The external target mode handler then turns around and forms a response
to this 'response' that just arrived, which is then placed on the Request
Queue and handled very much like an initiator mode command (i.e., calling
the bus dependent DMA mapping function). If this entry completes the
command, no more need occur. But often this handles only part of the
requested command, so the QLogic firmware will rewrite the response
to the initial 'response' again onto the Response Queue, whereupon the
target mode handler will respond to that, and so on until the command
is completely handled.

Because almost no platform provides basic SCSI Subsystem target mode
support, this design has been left extremely open ended, and as such
it's a bit hard to describe in more detail than this.

5.7 Locking Assumptions

The observant reader is likely by now to have asked the question, "but
what about locking? Or interrupt masking?"

The basic assumption about this is that the core module does not know
anything directly about locking or interrupt masking. It may assume that
upon entry (e.g., via isp_start, isp_control, isp_intr) appropriate
locking and interrupt masking has been done.

The platform dependent code may also therefore assume that if it is
called (e.g., isp_done or isp_async), any locking or masking that
was in place upon the entry to the core module is still there. It is up
to the platform dependent code to worry about avoiding any lock nesting
issues. As an example of this, the Linux implementation simply queues
up commands completed via the callout to isp_done, and then pushes them
out to the SCSI subsystem after its call to isp_intr returns (with locks
dropped appropriately, as well as avoidance of deep interrupt stacks).

Recent changes in the design have eased what had been an original
requirement that, while in the core module, no locks or interrupt
masking could be dropped. It's now up to each platform to figure out how
to implement this. This is principally used in the execution of mailbox
commands (which are mostly used for Loop and Fabric management via
the isp_control function).

5.8 SCSI Specifics

The driver core or platform dependent architecture issues that are specific
to SCSI are few. There is a basic assumption that the QLogic firmware
supported Automatic Request Sense will work; there is no particular
provision for disabling its usage on a per-command basis.

5.9 Fibre Channel Specifics

Fibre Channel presents an interesting challenge here. The QLogic firmware
architecture for dealing with Fibre Channel as just a 'fat' SCSI bus
is fine on the face of it, but there are some subtle and not so subtle
problems here.

5.9.1 Firmware State

Part of the initialization (isp_init) for Fibre Channel HBAs involves
sending a command (Initialize Control Block) that establishes Node
and Port WWNs as well as topology preferences. After this occurs,
the QLogic firmware tries to traverse through several states:

	FW_CONFIG_WAIT
	FW_WAIT_AL_PA
	FW_WAIT_LOGIN
	FW_READY
	FW_LOSS_OF_SYNC
	FW_ERROR
	FW_REINIT
	FW_NON_PART

It starts with FW_CONFIG_WAIT, attempts to get an AL_PA (if on an FC-AL
loop instead of being connected as an N-port), waits to log into all
FC-AL loop entities and then hopefully transitions to FW_READY state.

Clearly, no command should be attempted before the FW_READY state has
been achieved. The core internal function isp_fclink_test (reachable via
isp_control with the ISPCTL_FCLINK_TEST function code) checks for this.
This function also determines connection topology (i.e., whether we're
attached to a fabric or not).

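As an illustration, a platform might wait for that state with something
like the sketch below. The use of a microsecond budget as the
ISPCTL_FCLINK_TEST argument and the SLEEP service are assumptions; check
isp_control's implementation for the real contract.

	/*
	 * Illustrative wait-for-link-ready loop built on ISPCTL_FCLINK_TEST.
	 * The argument convention and SLEEP service are assumptions.
	 */
	static int
	wait_for_fw_ready_sketch(struct ispsoftc *isp, int tries)
	{
		int i, usdelay = 250000;

		for (i = 0; i < tries; i++) {
			if (isp_control(isp, ISPCTL_FCLINK_TEST, &usdelay) == 0)
				return (0);	/* firmware reached FW_READY */
			SLEEP(isp, 1);		/* platform sleep service */
		}
		return (-1);
	}
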
5.9.2. Loop State Transitions- From Nil to Ready

Once the firmware has transitioned to a ready state, the state of the
connection to either arbitrated loop or to a fabric has to be ascertained,
and the identity of all loop members (and fabric members) validated.

This can be very complicated, and it isn't made easy in that the QLogic
firmware manages PLOGI and PRLI to devices that are on a local loop, but
it is the driver that must manage PLOGI/PRLI with devices on the fabric.

In order to manage this state, a staging of the current "Loop" state
(where "Loop" is taken to mean FC-AL or N- or F-port connections) is
maintained, in the following ascending order:

	LOOP_NIL
	LOOP_LIP_RCVD
	LOOP_PDB_RCVD
	LOOP_SCANNING_FABRIC
	LOOP_FSCAN_DONE
	LOOP_SCANNING_LOOP
	LOOP_LSCAN_DONE
	LOOP_SYNCING_PDB
	LOOP_READY

When the core code initializes the QLogic firmware, it sets the loop
state to LOOP_NIL. The first 'LIP Received' asynchronous event sets the
state to LOOP_LIP_RCVD. This should be followed by a "Port Database
Changed" asynchronous event which will set the state to LOOP_PDB_RCVD.
Each of these states, when entered, causes an isp_async event call to the
machine dependent layers with the ISPASYNC_CHANGE_NOTIFY code.

After the state of LOOP_PDB_RCVD is reached, the internal core function
isp_scan_fabric (reachable via isp_control(..ISPCTL_SCAN_FABRIC)) will,
if the connection is to a fabric, use Simple Name Server mailbox mediated
commands to dump the entire fabric contents. For each new entity, an
isp_async event will be generated that says a Fabric device has arrived
(ISPASYNC_FABRIC_DEV). The function that isp_async must perform in this
step is to insert (or possibly remove) the devices that it wants the
QLogic firmware to log into (at the LOOP_SYNCING_PDB state level).

After this has occurred, the state LOOP_FSCAN_DONE is set, and then the
internal function isp_scan_loop (isp_control(...ISPCTL_SCAN_LOOP)) can
be called, which will then scan for any local (FC-AL) entries by asking
the QLogic firmware for a Port Database entry for each possible local
loop id. It's at this level that some locally cached entries are purged
or shifting Loop IDs are managed (see section 5.9.4).

The final step after this is to call the internal function isp_pdb_sync
(isp_control(..ISPCTL_PDB_SYNC)). The purpose of this function is to
perform the PLOGI/PRLI functions for fabric devices. The next state
entered after this is LOOP_READY, which means that the driver is ready
to process commands to send to Fibre Channel devices.

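In other words, once LOOP_PDB_RCVD has been seen, the remaining states
can be walked with a sequence of isp_control calls roughly like the
hedged sketch below (this is essentially what isp_fc_runstate, mentioned
in the next section, does). The argument conventions are simplified
assumptions; error handling and re-entry on further loop changes are
omitted.

	/*
	 * Illustrative loop bring-up sequence using the ISPCTL_* codes named
	 * above; argument conventions are simplified assumptions.
	 */
	static int
	fc_bringup_sketch(struct ispsoftc *isp)
	{
		int usdelay = 250000;		/* assumed link test budget */

		if (isp_control(isp, ISPCTL_FCLINK_TEST, &usdelay) != 0)
			return (-1);		/* firmware never became ready */
		if (isp_control(isp, ISPCTL_SCAN_FABRIC, NULL) != 0)
			return (-1);		/* fabric scan failed */
		if (isp_control(isp, ISPCTL_SCAN_LOOP, NULL) != 0)
			return (-1);		/* local loop scan failed */
		if (isp_control(isp, ISPCTL_PDB_SYNC, NULL) != 0)
			return (-1);		/* fabric PLOGI/PRLI failed */
		return (0);			/* loop state should be LOOP_READY */
	}
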
5.9.3 Fibre Channel variants of Initiator Mode Code Flow

The code flow in isp_start for Fibre Channel devices is the same as it is
for SCSI devices, but with a notable exception.

Maintained within the fibre channel specific portion of the driver soft
state structure is a distillation of the existing population of both
local loop and fabric devices. Because Loop IDs can shift on a local
loop but we wish to retain a 'constant' Target ID (see 5.9.4), this
is indexed directly via the Target ID for the command (XS_TGT(xs)).

If there is a valid entry for this Target ID, the command is started
(with the stored 'Loop ID'). If not, the command is completed with
an error that is just like a SCSI Selection Timeout error.

This code is currently somewhat in transition. Some platforms do
firmware and loop state management (as described above) at this
point. Other platforms manage this from the machine dependent layers. The
important function to watch in this respect is isp_fc_runstate (in
isp_inline.h).

5.9.4 "Target" in Fibre Channel is a fixed virtual construct

Very few systems can cope with the notion that "Target" for a disk
device can change while you're using it. But one of the properties of
arbitrated loop is that the physical bus address for a loop member
(the AL_PA) can change depending on when and how things are inserted in
the loop.

To illustrate this, let's take an example. Let's say you start with a
loop that has 5 disks in it. At boot time, the system will likely find
them and see them in this order:

disk#   Loop ID         Target ID
disk0   0               0
disk1   1               1
disk2   2               2
disk3   3               3
disk4   4               4

The driver uses 'Loop ID' when it forms requests to send a command to
each disk. However, it reports to NetBSD that things exist as 'Target
ID'. As you can see here, there is perfect correspondence between disk,
Loop ID and Target ID.

Let's say you add a new disk between disk2 and disk3 while things are
running. You don't really often see this, but you *could* see this where
the loop has to renegotiate, and you end up with:

disk#   Loop ID         Target ID
disk0   0               0
disk1   1               1
disk2   2               2
diskN   3               ?
disk3   4               ?
disk4   5               ?

Clearly, you don't want disk3 and disk4's "Target ID" to change while you're
running since currently mounted filesystems will get trashed.

What the driver is supposed to do (this is the function of isp_scan_loop)
is regenerate things such that the following then occurs:

disk#   Loop ID         Target ID
disk0   0               0
disk1   1               1
disk2   2               2
diskN   3               5
disk3   4               3
disk4   5               4

So, "Target" is a virtual entity that is maintained while you're running.

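The remapping idea can be sketched as follows: after a loop scan, each
discovered device is matched by its Port WWN against the previously known
set, keeping its old Target ID while recording whatever Loop ID it now
has. The structure and names below are invented for illustration; the
real logic (and the real port database structures) live in isp_scan_loop
and ispvar.h.

	/*
	 * Illustrative WWN-keyed remapping; the real port database and logic
	 * are in ispvar.h and isp_scan_loop.
	 */
	#define	MAX_FC_TARGETS	126			/* assumed table size */

	struct fc_dev_sketch {
		uint64_t port_wwn;	/* stable identity of the device */
		int	loop_id;	/* physical address, may change */
		int	valid;
	};

	/* Indexed by virtual Target ID, which is what the OS gets to see. */
	static struct fc_dev_sketch known[MAX_FC_TARGETS];

	static void
	remap_sketch(const struct fc_dev_sketch *scanned, int nscanned)
	{
		int i, t;

		for (i = 0; i < nscanned; i++) {
			/* Look for a device already known by its Port WWN. */
			for (t = 0; t < MAX_FC_TARGETS; t++) {
				if (known[t].valid &&
				    known[t].port_wwn == scanned[i].port_wwn)
					break;
			}
			if (t == MAX_FC_TARGETS) {
				/* New device: claim the first unused Target ID. */
				for (t = 0; t < MAX_FC_TARGETS; t++)
					if (!known[t].valid)
						break;
				if (t == MAX_FC_TARGETS)
					continue;	/* table full; skip it */
				known[t].valid = 1;
				known[t].port_wwn = scanned[i].port_wwn;
			}
			/* Old or new, record its current physical Loop ID. */
			known[t].loop_id = scanned[i].loop_id;
		}
	}
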
6. Glossary

HBA - Host Bus Adapter

SCSI - Small Computer System Interface

7. References

Various URLs of interest:

http://www.netbsd.org		-	NetBSD's Web Page
http://www.openbsd.org		-	OpenBSD's Web Page
https://www.freebsd.org		-	FreeBSD's Web Page

http://www.t10.org		-	ANSI SCSI Committee's Web Page
					(SCSI Specs)
http://www.t11.org		-	NCITS Device Interface Web Page
					(Fibre Channel Specs)