/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

#include <io/xdf_shell.h>
#include <sys/dkio.h>
#include <sys/scsi/scsi_types.h>

/*
 * General Notes
 *
 * We don't support disks with bad block mappings. We have this
 * limitation because the underlying xdf driver doesn't support
 * bad block remapping. If there is a need to support this feature
 * it should be added directly to the xdf driver and we should just
 * pass requests straight on through and let it handle the remapping.
 * Also, it's probably worth pointing out that most modern disks do bad
 * block remapping internally in the hardware, so there's actually less
 * of a chance of us ever discovering bad blocks. Also, in most cases
 * this driver (and the xdf driver) will only be used with virtualized
 * devices, so one might wonder why a virtual device would ever actually
 * experience bad blocks. To wrap this up, you might be wondering how
 * these bad block mappings get created and how they are managed. Well,
 * there are two tools for managing bad block mappings, format(1M) and
 * addbadsec(1M). Format(1M) can be used to do a surface scan of a disk
 * to attempt to find bad blocks and create mappings for them. Format(1M)
 * and addbadsec(1M) can also be used to edit existing mappings that may
 * be saved on the disk.
 *
 * The underlying PV driver that this driver passes on requests to is the
 * xdf driver. Since in most cases the xdf driver doesn't deal with
 * physical disks it has its own algorithm for assigning a physical
 * geometry to a virtual disk (ie, cylinder count, head count, etc.)
 * The default values chosen by the xdf driver may not match those
 * assigned to a disk by a hardware disk emulator in an HVM environment.
 * This is a problem since these physical geometry attributes affect
 * things like the partition table, backup label location, etc. So
 * to emulate disk devices correctly we need to know the physical geometry
 * that was assigned to a disk at the time of its initialization.
 * Normally in an HVM environment this information will be passed to
 * the BIOS and operating system from the hardware emulator that is
 * emulating the disk devices. In the case of a Solaris dom0+xvm
 * this would be qemu. So to work around this issue, this driver will
 * query the emulated hardware to get the assigned physical geometry
 * and then pass this geometry onto the xdf driver so that it can use it.
 * But really, this information is essentially metadata about the disk
 * that should be kept with the disk image itself. (Assuming of course
 * that a disk image is the actual backing store for this emulated device.)
 * This metadata should also be made available to PV drivers via a common
 * mechanism, probably the xenstore. The fact that this metadata isn't
 * available outside of HVM domains means that it's difficult to move
 * disks between HVM and PV domains, since a fully PV domain will have no
 * way of knowing what the correct geometry of the target device is.
 * (Short of reading the disk, looking for things like partition tables
 * and labels, and taking a best guess at what the geometry was when
 * the disk was initialized. Unsurprisingly, qemu actually does this.)
 *
 * This driver has to map xdf shell device instances into their corresponding
 * xdf device instances. We have to do this to ensure that when a user
 * accesses an emulated xdf shell device we map those accesses to the proper
 * paravirtualized device. Basically what we need to know is how multiple
 * 'disk' entries in a domU configuration file get mapped to emulated
 * xdf shell devices and to xdf devices. The 'disk' entry to xdf instance
 * mappings we know because those are done within the Solaris xvdi code
 * and the xpvd nexus driver. But the config to emulated devices mappings
 * are handled entirely within the xen management tool chain and the
 * hardware emulator. Since all the tools that establish these mappings
 * live in dom0, dom0 should really supply us with this information,
 * probably via the xenstore.
 * Unfortunately it doesn't, so since there's
 * no good way to determine this mapping dynamically, this driver uses
 * a hard-coded set of static mappings. These mappings are hardware
 * emulator specific because each different hardware emulator could have
 * a different device tree with different xdf shell device paths. This
 * means that if we want to continue to use this static mapping approach
 * to allow Solaris to run on different hardware emulators we'll have
 * to analyze each of those emulators to determine what paths they
 * use and hard code those paths into this driver. Yech. This metadata
 * really needs to be supplied to us by dom0.
 *
 * This driver accesses underlying xdf nodes. Unfortunately, devices
 * must create minor nodes during attach, and for disk devices to create
 * minor nodes, they have to look at the label on the disk, so this means
 * that disk drivers must be able to access the disk contents during
 * attach. That means that this disk driver must be able to access
 * underlying xdf nodes during attach. Unfortunately, due to device tree
 * locking restrictions, we cannot have an attach operation occurring on
 * this device and then attempt to access another device which may
 * cause another attach to occur in a different device tree branch
 * since this could result in deadlock.
 * Hence, this driver can only
 * access xdf device nodes that we know are attached, and it can't use
 * any ddi interfaces to access those nodes if those interfaces could
 * trigger an attach of the xdf device. So this driver works around
 * these restrictions by talking directly to xdf devices via
 * xdf_hvm_hold(). This interface takes a pathname to an xdf device,
 * and if that device is already attached then it returns a held dip
 * pointer for that device node. This prevents us from getting into
 * deadlock situations, but now we need a mechanism to ensure that all
 * the xdf device nodes this driver might access are attached before
 * this driver tries to access them. This is accomplished via the
 * hvmboot_rootconf() callback which is invoked just before root is
 * mounted. hvmboot_rootconf() will attach xpvd and tell it to configure
 * all xdf devices visible to the system. All these xdf device nodes
 * will also be marked with the "ddi-no-autodetach" property so that
 * once they are configured, they will not be automatically unconfigured.
 * The only way that they could be unconfigured is if the administrator
 * explicitly attempts to unload required modules via rem_drv(1M)
 * or modunload(1M).
 */

/*
 * 16 partitions + fdisk (see xdf.h)
 */
#define	XDFS_DEV2UNIT(dev)	XDF_INST((getminor((dev))))
#define	XDFS_DEV2PART(dev)	XDF_PART((getminor((dev))))

#define	OTYP_VALID(otyp)	((otyp == OTYP_BLK) ||	\
				    (otyp == OTYP_CHR) ||	\
				    (otyp == OTYP_LYR))

#define	XDFS_NODES		4

#define	XDFS_HVM_MODE(sp)	(XDFS_HVM_STATE(sp)->xdfs_hs_mode)
#define	XDFS_HVM_DIP(sp)	(XDFS_HVM_STATE(sp)->xdfs_hs_dip)
#define	XDFS_HVM_PATH(sp)	(XDFS_HVM_STATE(sp)->xdfs_hs_path)
#define	XDFS_HVM_STATE(sp)				\
		((xdfs_hvm_state_t *)(&((char *)(sp))[XDFS_HVM_STATE_OFFSET]))
#define	XDFS_HVM_STATE_OFFSET	(xdfs_ss_size - sizeof (xdfs_hvm_state_t))
#define	XDFS_HVM_SANE(sp)				\
		ASSERT(XDFS_HVM_MODE(sp));		\
		ASSERT(XDFS_HVM_DIP(sp) != NULL);	\
		ASSERT(XDFS_HVM_PATH(sp) != NULL);


typedef struct xdfs_hvm_state {
	boolean_t	xdfs_hs_mode;
	dev_info_t	*xdfs_hs_dip;
	char		*xdfs_hs_path;
} xdfs_hvm_state_t;

/* local function and structure prototypes */
static int xdfs_iodone(struct buf *);
static boolean_t xdfs_isopen_part(xdfs_state_t *, int);
static boolean_t xdfs_isopen(xdfs_state_t *);
static cmlb_tg_ops_t xdfs_lb_ops;

/*
 * Globals
 */
major_t xdfs_major;
#define	xdfs_hvm_dev_ops	(xdfs_c_hvm_dev_ops)
#define	xdfs_hvm_cb_ops		(xdfs_hvm_dev_ops->devo_cb_ops)

/*
 * Private globals
 */
volatile boolean_t xdfs_pv_disable = B_FALSE;
static void *xdfs_ssp;
static size_t xdfs_ss_size;

/*
 * Private helper functions
 */
static boolean_t
xdfs_tgt_hold(xdfs_state_t *xsp)
{
	mutex_enter(&xsp->xdfss_mutex);
	ASSERT(xsp->xdfss_tgt_holds >= 0);
	if (!xsp->xdfss_tgt_attached) {
		mutex_exit(&xsp->xdfss_mutex);
		return (B_FALSE);
	}
	xsp->xdfss_tgt_holds++;
	mutex_exit(&xsp->xdfss_mutex);
	return (B_TRUE);
}

static void
xdfs_tgt_release(xdfs_state_t *xsp)
{
	mutex_enter(&xsp->xdfss_mutex);
	ASSERT(xsp->xdfss_tgt_attached);
	ASSERT(xsp->xdfss_tgt_holds > 0);
	if (--xsp->xdfss_tgt_holds == 0)
		cv_broadcast(&xsp->xdfss_cv);
	mutex_exit(&xsp->xdfss_mutex);
}

/*ARGSUSED*/
static int
xdfs_lb_getinfo(dev_info_t *dip, int cmd, void *arg, void *tg_cookie)
{
	int		instance = ddi_get_instance(dip);
	xdfs_state_t	*xsp = ddi_get_soft_state(xdfs_ssp, instance);
	int		rv;

	if (xsp == NULL)
		return (ENXIO);

	if (!xdfs_tgt_hold(xsp))
		return (ENXIO);

	if (cmd == TG_GETVIRTGEOM) {
		cmlb_geom_t	pgeom, *vgeomp;
		diskaddr_t	capacity;

		/*
		 * The native xdf driver doesn't support this ioctl.
		 * Instead of passing it on, emulate it here so that the
		 * results look the same as what we get for a real xdf
		 * shell device.
		 *
		 * Get the real size of the device.
		 */
		if ((rv = xdf_lb_getinfo(xsp->xdfss_tgt_dip,
		    TG_GETPHYGEOM, &pgeom, tg_cookie)) != 0)
			goto out;
		capacity = pgeom.g_capacity;

		/*
		 * If the controller returned us something that doesn't
		 * really fit into an Int 13/function 8 geometry
		 * result, just fail the ioctl. See PSARC 1998/313.
		 */
		if (capacity >= (63 * 254 * 1024)) {
			rv = EINVAL;
			goto out;
		}

		vgeomp = (cmlb_geom_t *)arg;
		vgeomp->g_capacity	= capacity;
		vgeomp->g_nsect		= 63;
		vgeomp->g_nhead		= 254;
		vgeomp->g_ncyl		= capacity / (63 * 254);
		vgeomp->g_acyl		= 0;
		vgeomp->g_secsize	= 512;
		vgeomp->g_intrlv	= 1;
		vgeomp->g_rpm		= 3600;
		rv = 0;
		goto out;
	}

	rv = xdf_lb_getinfo(xsp->xdfss_tgt_dip, cmd, arg, tg_cookie);

out:
	xdfs_tgt_release(xsp);
	return (rv);
}

static boolean_t
xdfs_isopen_part(xdfs_state_t *xsp,
    int part)
{
	int otyp;

	ASSERT(MUTEX_HELD(&xsp->xdfss_mutex));
	for (otyp = 0; (otyp < OTYPCNT); otyp++) {
		if (xsp->xdfss_otyp_count[otyp][part] != 0) {
			ASSERT(xsp->xdfss_tgt_attached);
			ASSERT(xsp->xdfss_tgt_holds >= 0);
			return (B_TRUE);
		}
	}
	return (B_FALSE);
}

static boolean_t
xdfs_isopen(xdfs_state_t *xsp)
{
	int part;

	ASSERT(MUTEX_HELD(&xsp->xdfss_mutex));
	for (part = 0; part < XDF_PEXT; part++) {
		if (xdfs_isopen_part(xsp, part))
			return (B_TRUE);
	}
	return (B_FALSE);
}

static int
xdfs_iodone(struct buf *bp)
{
	struct buf *bp_orig = bp->b_chain;

	/* Propagate back the io results */
	bp_orig->b_resid = bp->b_resid;
	bioerror(bp_orig, geterror(bp));
	biodone(bp_orig);

	freerbuf(bp);
	return (0);
}

static int
xdfs_cmlb_attach(xdfs_state_t *xsp)
{
	return (cmlb_attach(xsp->xdfss_dip, &xdfs_lb_ops,
	    xsp->xdfss_tgt_is_cd ? DTYPE_RODIRECT : DTYPE_DIRECT,
	    xdf_is_rm(xsp->xdfss_tgt_dip),
	    B_TRUE,
	    xdfs_c_cmlb_node_type(xsp),
	    xdfs_c_cmlb_alter_behavior(xsp),
	    xsp->xdfss_cmlbhandle, 0));
}

static boolean_t
xdfs_tgt_probe(xdfs_state_t *xsp, dev_info_t *tgt_dip)
{
	cmlb_geom_t	pgeom;
	int		tgt_instance = ddi_get_instance(tgt_dip);

	ASSERT(MUTEX_HELD(&xsp->xdfss_mutex));
	ASSERT(!xdfs_isopen(xsp));
	ASSERT(!xsp->xdfss_tgt_attached);

	xsp->xdfss_tgt_dip = tgt_dip;
	xsp->xdfss_tgt_holds = 0;
	xsp->xdfss_tgt_dev = makedevice(ddi_driver_major(tgt_dip),
	    XDF_MINOR(tgt_instance, 0));
	ASSERT((xsp->xdfss_tgt_dev & XDF_PMASK) == 0);
	xsp->xdfss_tgt_is_cd = xdf_is_cd(tgt_dip);

	/*
	 * GROSS HACK ALERT!  GROSS HACK ALERT!
	 *
	 * Before we can initialize the cmlb layer, we have to tell the
	 * underlying xdf device what its physical geometry should be.
	 * See the block comments at the top of this file for more info.
	 */
	if (!xsp->xdfss_tgt_is_cd &&
	    ((xdfs_c_getpgeom(xsp->xdfss_dip, &pgeom) != 0) ||
	    (xdf_hvm_setpgeom(xsp->xdfss_tgt_dip, &pgeom) != 0)))
		return (B_FALSE);

	/*
	 * Force the xdf front end driver to connect to the backend. From
	 * the Solaris device tree perspective, the xdf driver devinfo node
	 * is already in the ATTACHED state. (Otherwise xdf_hvm_hold()
	 * would not have returned a dip.) But this doesn't mean that the
	 * xdf device has actually established a connection to its back
	 * end driver. For us to be able to access the xdf device it needs
	 * to be connected.
	 */
	if (!xdf_hvm_connect(xsp->xdfss_tgt_dip)) {
		cmn_err(CE_WARN, "pv driver failed to connect: %s",
		    xsp->xdfss_pv);
		return (B_FALSE);
	}

	if (xsp->xdfss_tgt_is_cd && !xdf_media_req_supported(tgt_dip)) {
		/*
		 * Unfortunately, the dom0 backend driver doesn't support
		 * important media request operations like eject, so fail
		 * the probe (this should cause us to fall back to emulated
		 * hvm device access, which does support things like eject).
		 */
		return (B_FALSE);
	}

	/* create kstat for iostat(1M) */
	if (xdf_kstat_create(xsp->xdfss_tgt_dip, (char *)xdfs_c_name,
	    tgt_instance) != 0)
		return (B_FALSE);

	/*
	 * Now we need to mark ourselves as attached and drop xdfss_mutex.
	 * We do this because the final steps in the attach process will
	 * need to access the underlying disk to read the label and
	 * possibly the devid.
	 */
	xsp->xdfss_tgt_attached = B_TRUE;
	mutex_exit(&xsp->xdfss_mutex);

	if (!xsp->xdfss_tgt_is_cd && xdfs_c_bb_check(xsp)) {
		cmn_err(CE_WARN, "pv disks with bad blocks are unsupported: %s",
		    xsp->xdfss_hvm);
		mutex_enter(&xsp->xdfss_mutex);
		xdf_kstat_delete(xsp->xdfss_tgt_dip);
		xsp->xdfss_tgt_attached = B_FALSE;
		return (B_FALSE);
	}

	/*
	 * Initialize cmlb. Note that for partition information cmlb
	 * will access the underlying xdf disk device directly via
	 * xdfs_lb_rdwr() and xdfs_lb_getinfo().
	 * There are no
	 * layered driver handles associated with this access because
	 * it is a direct disk access that doesn't go through
	 * any of the device nodes exported by the xdf device (since
	 * all exported device nodes only reflect the portion of
	 * the device visible via the partition/slice that the node
	 * is associated with.) So while not observable via the LDI,
	 * this direct disk access is ok since we're actually holding
	 * the target device.
	 */
	if (xdfs_cmlb_attach(xsp) != 0) {
		mutex_enter(&xsp->xdfss_mutex);
		xdf_kstat_delete(xsp->xdfss_tgt_dip);
		xsp->xdfss_tgt_attached = B_FALSE;
		return (B_FALSE);
	}

	/* setup devid string */
	xsp->xdfss_tgt_devid = NULL;
	if (!xsp->xdfss_tgt_is_cd)
		xdfs_c_devid_setup(xsp);

	(void) cmlb_validate(xsp->xdfss_cmlbhandle, 0, 0);

	/* Have the system report any newly created device nodes */
	ddi_report_dev(xsp->xdfss_dip);

	mutex_enter(&xsp->xdfss_mutex);
	return (B_TRUE);
}

static boolean_t
xdfs_tgt_detach(xdfs_state_t *xsp)
{
	ASSERT(MUTEX_HELD(&xsp->xdfss_mutex));
	ASSERT(xsp->xdfss_tgt_attached);
	ASSERT(xsp->xdfss_tgt_holds >= 0);

	if ((xdfs_isopen(xsp)) || (xsp->xdfss_tgt_holds != 0))
		return (B_FALSE);

	ddi_devid_unregister(xsp->xdfss_dip);
	if (xsp->xdfss_tgt_devid != NULL)
		ddi_devid_free(xsp->xdfss_tgt_devid);

	xdf_kstat_delete(xsp->xdfss_tgt_dip);
	xsp->xdfss_tgt_attached = B_FALSE;
	return (B_TRUE);
}

/*
 * Xdf_shell interfaces that may be called from outside this file.
 */
void
xdfs_minphys(struct buf *bp)
{
	xdfmin(bp);
}

/*
 * Cmlb ops vector, allows the cmlb module to directly access the entire
 * xdf disk device without going through any partitioning layers.
 */
int
xdfs_lb_rdwr(dev_info_t *dip, uchar_t cmd, void *bufaddr,
    diskaddr_t start, size_t count, void *tg_cookie)
{
	int		instance = ddi_get_instance(dip);
	xdfs_state_t	*xsp = ddi_get_soft_state(xdfs_ssp, instance);
	int		rv;

	if (xsp == NULL)
		return (ENXIO);

	if (!xdfs_tgt_hold(xsp))
		return (ENXIO);

	rv = xdf_lb_rdwr(xsp->xdfss_tgt_dip,
	    cmd, bufaddr, start, count, tg_cookie);

	xdfs_tgt_release(xsp);
	return (rv);
}

/*
 * Driver PV and HVM cb_ops entry points
 */
/*ARGSUSED*/
static int
xdfs_open(dev_t *dev_p, int flag, int otyp, cred_t *credp)
{
	ldi_ident_t	li;
	dev_t		dev = *dev_p;
	int		instance = XDFS_DEV2UNIT(dev);
	int		part = XDFS_DEV2PART(dev);
	xdfs_state_t	*xsp = ddi_get_soft_state(xdfs_ssp, instance);
	dev_t		tgt_devt = xsp->xdfss_tgt_dev | part;
	int		err = 0;

	if ((otyp < 0) || (otyp >= OTYPCNT))
		return (EINVAL);

	if (XDFS_HVM_MODE(xsp)) {
		if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL))
			return (ENOTSUP);
		return (xdfs_hvm_cb_ops->cb_open(dev_p, flag, otyp, credp));
	}

	/* allocate an ldi handle */
	VERIFY(ldi_ident_from_dev(*dev_p, &li) == 0);

	mutex_enter(&xsp->xdfss_mutex);

	/*
	 * We translate all device opens (chr, blk, and lyr) into
	 * block device opens. Why? Because for all the opens that
	 * come through this driver, we only keep around one LDI handle.
	 * So that handle can only be of one open type. The reason
	 * that we choose the block interface for this is that to use
	 * the block interfaces for a device the system needs to allocate
	 * buf_ts, which are associated with system memory which can act
	 * as a cache for device data. So normally when a block device
	 * is closed the system will ensure that all these pages get
	 * flushed out of memory. But if we were to open the device
	 * as a character device, then when we went to close the underlying
	 * device (even if we had invoked the block interfaces) any data
	 * remaining in memory wouldn't necessarily be flushed out
	 * before the device was closed.
5337f0b8309SEdward Pilatowicz */ 5347f0b8309SEdward Pilatowicz if (xsp->xdfss_tgt_lh[part] == NULL) { 5357f0b8309SEdward Pilatowicz ASSERT(!xdfs_isopen_part(xsp, part)); 5367f0b8309SEdward Pilatowicz 5377f0b8309SEdward Pilatowicz err = ldi_open_by_dev(&tgt_devt, OTYP_BLK, flag, credp, 5387f0b8309SEdward Pilatowicz &xsp->xdfss_tgt_lh[part], li); 5397f0b8309SEdward Pilatowicz 5407f0b8309SEdward Pilatowicz if (err != 0) { 5417f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 5427f0b8309SEdward Pilatowicz ldi_ident_release(li); 5437f0b8309SEdward Pilatowicz return (err); 5447f0b8309SEdward Pilatowicz } 5457f0b8309SEdward Pilatowicz 5467f0b8309SEdward Pilatowicz /* Disk devices really shouldn't clone */ 5477f0b8309SEdward Pilatowicz ASSERT(tgt_devt == (xsp->xdfss_tgt_dev | part)); 5487f0b8309SEdward Pilatowicz } else { 5497f0b8309SEdward Pilatowicz ldi_handle_t lh_tmp; 5507f0b8309SEdward Pilatowicz 5517f0b8309SEdward Pilatowicz ASSERT(xdfs_isopen_part(xsp, part)); 5527f0b8309SEdward Pilatowicz 5537f0b8309SEdward Pilatowicz /* do ldi open/close to get flags and cred check */ 5547f0b8309SEdward Pilatowicz err = ldi_open_by_dev(&tgt_devt, OTYP_BLK, flag, credp, 5557f0b8309SEdward Pilatowicz &lh_tmp, li); 5567f0b8309SEdward Pilatowicz if (err != 0) { 5577f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 5587f0b8309SEdward Pilatowicz ldi_ident_release(li); 5597f0b8309SEdward Pilatowicz return (err); 5607f0b8309SEdward Pilatowicz } 5617f0b8309SEdward Pilatowicz 5627f0b8309SEdward Pilatowicz /* Disk devices really shouldn't clone */ 5637f0b8309SEdward Pilatowicz ASSERT(tgt_devt == (xsp->xdfss_tgt_dev | part)); 5647f0b8309SEdward Pilatowicz (void) ldi_close(lh_tmp, flag, credp); 5657f0b8309SEdward Pilatowicz } 5667f0b8309SEdward Pilatowicz ldi_ident_release(li); 5677f0b8309SEdward Pilatowicz 5687f0b8309SEdward Pilatowicz xsp->xdfss_otyp_count[otyp][part]++; 5697f0b8309SEdward Pilatowicz 5707f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 
5717f0b8309SEdward Pilatowicz return (0); 5727f0b8309SEdward Pilatowicz } 5737f0b8309SEdward Pilatowicz 5747f0b8309SEdward Pilatowicz /*ARGSUSED*/ 5757f0b8309SEdward Pilatowicz static int 5767f0b8309SEdward Pilatowicz xdfs_close(dev_t dev, int flag, int otyp, cred_t *credp) 5777f0b8309SEdward Pilatowicz { 5787f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 5797f0b8309SEdward Pilatowicz int part = XDFS_DEV2PART(dev); 5807f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 5817f0b8309SEdward Pilatowicz int err = 0; 5827f0b8309SEdward Pilatowicz 5837f0b8309SEdward Pilatowicz ASSERT((otyp >= 0) && otyp < OTYPCNT); 5847f0b8309SEdward Pilatowicz 5857f0b8309SEdward Pilatowicz /* Sanity check the dev_t associated with this request. */ 5867f0b8309SEdward Pilatowicz ASSERT(getmajor(dev) == xdfs_major); 5877f0b8309SEdward Pilatowicz if (getmajor(dev) != xdfs_major) 5887f0b8309SEdward Pilatowicz return (ENXIO); 5897f0b8309SEdward Pilatowicz 5907f0b8309SEdward Pilatowicz if (XDFS_HVM_MODE(xsp)) { 5917f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL)) 5927f0b8309SEdward Pilatowicz return (ENOTSUP); 5937f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_close(dev, flag, otyp, credp)); 5947f0b8309SEdward Pilatowicz } 5957f0b8309SEdward Pilatowicz 5967f0b8309SEdward Pilatowicz /* 5977f0b8309SEdward Pilatowicz * Sanity check that that the device is actually open. On debug 5987f0b8309SEdward Pilatowicz * kernels we'll panic and on non-debug kernels we'll return failure. 
5997f0b8309SEdward Pilatowicz */ 6007f0b8309SEdward Pilatowicz mutex_enter(&xsp->xdfss_mutex); 6017f0b8309SEdward Pilatowicz ASSERT(xdfs_isopen_part(xsp, part)); 6027f0b8309SEdward Pilatowicz if (!xdfs_isopen_part(xsp, part)) { 6037f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 6047f0b8309SEdward Pilatowicz return (ENXIO); 6057f0b8309SEdward Pilatowicz } 6067f0b8309SEdward Pilatowicz 6077f0b8309SEdward Pilatowicz ASSERT(xsp->xdfss_tgt_lh[part] != NULL); 6087f0b8309SEdward Pilatowicz ASSERT(xsp->xdfss_otyp_count[otyp][part] > 0); 6097f0b8309SEdward Pilatowicz if (otyp == OTYP_LYR) { 6107f0b8309SEdward Pilatowicz xsp->xdfss_otyp_count[otyp][part]--; 6117f0b8309SEdward Pilatowicz } else { 6127f0b8309SEdward Pilatowicz xsp->xdfss_otyp_count[otyp][part] = 0; 6137f0b8309SEdward Pilatowicz } 6147f0b8309SEdward Pilatowicz 6157f0b8309SEdward Pilatowicz if (!xdfs_isopen_part(xsp, part)) { 6167f0b8309SEdward Pilatowicz err = ldi_close(xsp->xdfss_tgt_lh[part], flag, credp); 6177f0b8309SEdward Pilatowicz xsp->xdfss_tgt_lh[part] = NULL; 6187f0b8309SEdward Pilatowicz } 6197f0b8309SEdward Pilatowicz 6207f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 6217f0b8309SEdward Pilatowicz 6227f0b8309SEdward Pilatowicz return (err); 6237f0b8309SEdward Pilatowicz } 6247f0b8309SEdward Pilatowicz 6257f0b8309SEdward Pilatowicz int 6267f0b8309SEdward Pilatowicz xdfs_strategy(struct buf *bp) 6277f0b8309SEdward Pilatowicz { 6287f0b8309SEdward Pilatowicz dev_t dev = bp->b_edev; 6297f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 6307f0b8309SEdward Pilatowicz int part = XDFS_DEV2PART(dev); 6317f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 6327f0b8309SEdward Pilatowicz dev_t tgt_devt; 6337f0b8309SEdward Pilatowicz struct buf *bp_clone; 6347f0b8309SEdward Pilatowicz 6357f0b8309SEdward Pilatowicz /* Sanity check the dev_t associated with this request. 
*/ 6367f0b8309SEdward Pilatowicz ASSERT(getmajor(dev) == xdfs_major); 6377f0b8309SEdward Pilatowicz if (getmajor(dev) != xdfs_major) 6387f0b8309SEdward Pilatowicz goto err; 6397f0b8309SEdward Pilatowicz 6407f0b8309SEdward Pilatowicz if (XDFS_HVM_MODE(xsp)) { 6417f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL)) 6427f0b8309SEdward Pilatowicz return (ENOTSUP); 6437f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_strategy(bp)); 6447f0b8309SEdward Pilatowicz } 6457f0b8309SEdward Pilatowicz 6467f0b8309SEdward Pilatowicz /* 6477f0b8309SEdward Pilatowicz * Sanity checks that the dev_t associated with the buf we were 6487f0b8309SEdward Pilatowicz * passed corresponds to an open partition. On debug kernels we'll 6497f0b8309SEdward Pilatowicz * panic and on non-debug kernels we'll return failure. 6507f0b8309SEdward Pilatowicz */ 6517f0b8309SEdward Pilatowicz mutex_enter(&xsp->xdfss_mutex); 6527f0b8309SEdward Pilatowicz ASSERT(xdfs_isopen_part(xsp, part)); 6537f0b8309SEdward Pilatowicz if (!xdfs_isopen_part(xsp, part)) { 6547f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 6557f0b8309SEdward Pilatowicz goto err; 6567f0b8309SEdward Pilatowicz } 6577f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 6587f0b8309SEdward Pilatowicz 6597f0b8309SEdward Pilatowicz /* clone this buffer */ 6607f0b8309SEdward Pilatowicz tgt_devt = xsp->xdfss_tgt_dev | part; 6617f0b8309SEdward Pilatowicz bp_clone = bioclone(bp, 0, bp->b_bcount, tgt_devt, bp->b_blkno, 6627f0b8309SEdward Pilatowicz xdfs_iodone, NULL, KM_SLEEP); 6637f0b8309SEdward Pilatowicz bp_clone->b_chain = bp; 6647f0b8309SEdward Pilatowicz 6657f0b8309SEdward Pilatowicz /* 6667f0b8309SEdward Pilatowicz * If we're being invoked on behalf of the physio() call in 6677f0b8309SEdward Pilatowicz * xdfs_dioctl_rwcmd() then b_private will be set to 6687f0b8309SEdward Pilatowicz * XB_SLICE_NONE and we need to propegate this flag into the 6697f0b8309SEdward Pilatowicz * cloned buffer so that 
the xdf driver will see it. 6707f0b8309SEdward Pilatowicz */ 6717f0b8309SEdward Pilatowicz if (bp->b_private == (void *)XB_SLICE_NONE) 6727f0b8309SEdward Pilatowicz bp_clone->b_private = (void *)XB_SLICE_NONE; 6737f0b8309SEdward Pilatowicz 6747f0b8309SEdward Pilatowicz /* 6757f0b8309SEdward Pilatowicz * Pass on the cloned buffer. Note that we don't bother to check 6767f0b8309SEdward Pilatowicz * for failure because the xdf strategy routine will have to 6777f0b8309SEdward Pilatowicz * invoke biodone() if it wants to return an error, which means 6787f0b8309SEdward Pilatowicz * that the xdfs_iodone() callback will get invoked and it 6797f0b8309SEdward Pilatowicz * will propegate the error back up the stack and free the cloned 6807f0b8309SEdward Pilatowicz * buffer. 6817f0b8309SEdward Pilatowicz */ 6827f0b8309SEdward Pilatowicz ASSERT(xsp->xdfss_tgt_lh[part] != NULL); 6837f0b8309SEdward Pilatowicz return (ldi_strategy(xsp->xdfss_tgt_lh[part], bp_clone)); 6847f0b8309SEdward Pilatowicz 6857f0b8309SEdward Pilatowicz err: 6867f0b8309SEdward Pilatowicz bioerror(bp, ENXIO); 6877f0b8309SEdward Pilatowicz bp->b_resid = bp->b_bcount; 6887f0b8309SEdward Pilatowicz biodone(bp); 6897f0b8309SEdward Pilatowicz return (0); 6907f0b8309SEdward Pilatowicz } 6917f0b8309SEdward Pilatowicz 6927f0b8309SEdward Pilatowicz static int 6937f0b8309SEdward Pilatowicz xdfs_dump(dev_t dev, caddr_t addr, daddr_t blkno, int nblk) 6947f0b8309SEdward Pilatowicz { 6957f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 6967f0b8309SEdward Pilatowicz int part = XDFS_DEV2PART(dev); 6977f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 6987f0b8309SEdward Pilatowicz 6997f0b8309SEdward Pilatowicz if (!XDFS_HVM_MODE(xsp)) 7007f0b8309SEdward Pilatowicz return (ldi_dump(xsp->xdfss_tgt_lh[part], addr, blkno, nblk)); 7017f0b8309SEdward Pilatowicz 7027f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL)) 7037f0b8309SEdward Pilatowicz 
return (ENOTSUP); 7047f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_dump(dev, addr, blkno, nblk)); 7057f0b8309SEdward Pilatowicz } 7067f0b8309SEdward Pilatowicz 7077f0b8309SEdward Pilatowicz /*ARGSUSED*/ 7087f0b8309SEdward Pilatowicz static int 7097f0b8309SEdward Pilatowicz xdfs_read(dev_t dev, struct uio *uio, cred_t *credp) 7107f0b8309SEdward Pilatowicz { 7117f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 7127f0b8309SEdward Pilatowicz int part = XDFS_DEV2PART(dev); 7137f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 7147f0b8309SEdward Pilatowicz 7157f0b8309SEdward Pilatowicz if (!XDFS_HVM_MODE(xsp)) 7167f0b8309SEdward Pilatowicz return (ldi_read(xsp->xdfss_tgt_lh[part], uio, credp)); 7177f0b8309SEdward Pilatowicz 7187f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL)) 7197f0b8309SEdward Pilatowicz return (ENOTSUP); 7207f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_read(dev, uio, credp)); 7217f0b8309SEdward Pilatowicz } 7227f0b8309SEdward Pilatowicz 7237f0b8309SEdward Pilatowicz /*ARGSUSED*/ 7247f0b8309SEdward Pilatowicz static int 7257f0b8309SEdward Pilatowicz xdfs_write(dev_t dev, struct uio *uio, cred_t *credp) 7267f0b8309SEdward Pilatowicz { 7277f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 7287f0b8309SEdward Pilatowicz int part = XDFS_DEV2PART(dev); 7297f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 7307f0b8309SEdward Pilatowicz 7317f0b8309SEdward Pilatowicz if (!XDFS_HVM_MODE(xsp)) 7327f0b8309SEdward Pilatowicz return (ldi_write(xsp->xdfss_tgt_lh[part], uio, credp)); 7337f0b8309SEdward Pilatowicz 7347f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL)) 7357f0b8309SEdward Pilatowicz return (ENOTSUP); 7367f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_write(dev, uio, credp)); 7377f0b8309SEdward Pilatowicz } 7387f0b8309SEdward Pilatowicz 7397f0b8309SEdward Pilatowicz 
/*ARGSUSED*/ 7407f0b8309SEdward Pilatowicz static int 7417f0b8309SEdward Pilatowicz xdfs_aread(dev_t dev, struct aio_req *aio, cred_t *credp) 7427f0b8309SEdward Pilatowicz { 7437f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 7447f0b8309SEdward Pilatowicz int part = XDFS_DEV2PART(dev); 7457f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 7467f0b8309SEdward Pilatowicz 7477f0b8309SEdward Pilatowicz if (!XDFS_HVM_MODE(xsp)) 7487f0b8309SEdward Pilatowicz return (ldi_aread(xsp->xdfss_tgt_lh[part], aio, credp)); 7497f0b8309SEdward Pilatowicz 7507f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL) || 7517f0b8309SEdward Pilatowicz (xdfs_hvm_cb_ops->cb_strategy == NULL) || 7527f0b8309SEdward Pilatowicz (xdfs_hvm_cb_ops->cb_strategy == nodev) || 7537f0b8309SEdward Pilatowicz (xdfs_hvm_cb_ops->cb_aread == NULL)) 7547f0b8309SEdward Pilatowicz return (ENOTSUP); 7557f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_aread(dev, aio, credp)); 7567f0b8309SEdward Pilatowicz } 7577f0b8309SEdward Pilatowicz 7587f0b8309SEdward Pilatowicz /*ARGSUSED*/ 7597f0b8309SEdward Pilatowicz static int 7607f0b8309SEdward Pilatowicz xdfs_awrite(dev_t dev, struct aio_req *aio, cred_t *credp) 7617f0b8309SEdward Pilatowicz { 7627f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 7637f0b8309SEdward Pilatowicz int part = XDFS_DEV2PART(dev); 7647f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 7657f0b8309SEdward Pilatowicz 7667f0b8309SEdward Pilatowicz if (!XDFS_HVM_MODE(xsp)) 7677f0b8309SEdward Pilatowicz return (ldi_awrite(xsp->xdfss_tgt_lh[part], aio, credp)); 7687f0b8309SEdward Pilatowicz 7697f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL) || 7707f0b8309SEdward Pilatowicz (xdfs_hvm_cb_ops->cb_strategy == NULL) || 7717f0b8309SEdward Pilatowicz (xdfs_hvm_cb_ops->cb_strategy == nodev) || 7727f0b8309SEdward Pilatowicz 
(xdfs_hvm_cb_ops->cb_awrite == NULL)) 7737f0b8309SEdward Pilatowicz return (ENOTSUP); 7747f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_awrite(dev, aio, credp)); 7757f0b8309SEdward Pilatowicz } 7767f0b8309SEdward Pilatowicz 7777f0b8309SEdward Pilatowicz static int 7787f0b8309SEdward Pilatowicz xdfs_ioctl(dev_t dev, int cmd, intptr_t arg, int flag, cred_t *credp, 7797f0b8309SEdward Pilatowicz int *rvalp) 7807f0b8309SEdward Pilatowicz { 7817f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 7827f0b8309SEdward Pilatowicz int part = XDFS_DEV2PART(dev); 7837f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 7847f0b8309SEdward Pilatowicz int rv; 7857f0b8309SEdward Pilatowicz boolean_t done; 7867f0b8309SEdward Pilatowicz 7877f0b8309SEdward Pilatowicz if (XDFS_HVM_MODE(xsp)) { 7887f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL)) 7897f0b8309SEdward Pilatowicz return (ENOTSUP); 7907f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_ioctl( 7917f0b8309SEdward Pilatowicz dev, cmd, arg, flag, credp, rvalp)); 7927f0b8309SEdward Pilatowicz } 7937f0b8309SEdward Pilatowicz 7947f0b8309SEdward Pilatowicz rv = xdfs_c_ioctl(xsp, dev, part, cmd, arg, flag, credp, rvalp, &done); 7957f0b8309SEdward Pilatowicz if (done) 7967f0b8309SEdward Pilatowicz return (rv); 797*aa1b14e7SSheshadri Vasudevan rv = ldi_ioctl(xsp->xdfss_tgt_lh[part], cmd, arg, flag, credp, rvalp); 798*aa1b14e7SSheshadri Vasudevan if (rv == 0) { 799*aa1b14e7SSheshadri Vasudevan /* Force Geometry Validation */ 800*aa1b14e7SSheshadri Vasudevan (void) cmlb_invalidate(xsp->xdfss_cmlbhandle, 0); 801*aa1b14e7SSheshadri Vasudevan (void) cmlb_validate(xsp->xdfss_cmlbhandle, 0, 0); 802*aa1b14e7SSheshadri Vasudevan } 803*aa1b14e7SSheshadri Vasudevan return (rv); 8047f0b8309SEdward Pilatowicz } 8057f0b8309SEdward Pilatowicz 8067f0b8309SEdward Pilatowicz static int 8077f0b8309SEdward Pilatowicz xdfs_hvm_prop_op(dev_t dev, dev_info_t *dip, 
ddi_prop_op_t prop_op, 8087f0b8309SEdward Pilatowicz int flags, char *name, caddr_t valuep, int *lengthp) 8097f0b8309SEdward Pilatowicz { 8107f0b8309SEdward Pilatowicz int instance = ddi_get_instance(dip); 8117f0b8309SEdward Pilatowicz void *xsp = ddi_get_soft_state(xdfs_ssp, instance); 8127f0b8309SEdward Pilatowicz 8137f0b8309SEdward Pilatowicz ASSERT(XDFS_HVM_MODE(xsp)); 8147f0b8309SEdward Pilatowicz 8157f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || (xdfs_hvm_cb_ops == NULL) || 8167f0b8309SEdward Pilatowicz (xdfs_hvm_cb_ops->cb_prop_op == NULL) || 8177f0b8309SEdward Pilatowicz (xdfs_hvm_cb_ops->cb_prop_op == nodev) || 8187f0b8309SEdward Pilatowicz (xdfs_hvm_cb_ops->cb_prop_op == nulldev)) 8197f0b8309SEdward Pilatowicz return (DDI_PROP_NOT_FOUND); 8207f0b8309SEdward Pilatowicz 8217f0b8309SEdward Pilatowicz return (xdfs_hvm_cb_ops->cb_prop_op(dev, dip, prop_op, 8227f0b8309SEdward Pilatowicz flags, name, valuep, lengthp)); 8237f0b8309SEdward Pilatowicz } 8247f0b8309SEdward Pilatowicz 8257f0b8309SEdward Pilatowicz static int 8267f0b8309SEdward Pilatowicz xdfs_prop_op(dev_t dev, dev_info_t *dip, ddi_prop_op_t prop_op, 8277f0b8309SEdward Pilatowicz int flags, char *name, caddr_t valuep, int *lengthp) 8287f0b8309SEdward Pilatowicz { 8297f0b8309SEdward Pilatowicz int instance = ddi_get_instance(dip); 8307f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 8317f0b8309SEdward Pilatowicz int rv; 8327f0b8309SEdward Pilatowicz dev_info_t *tgt_dip; 8337f0b8309SEdward Pilatowicz dev_t tgt_devt; 8347f0b8309SEdward Pilatowicz 8357f0b8309SEdward Pilatowicz /* 8367f0b8309SEdward Pilatowicz * Sanity check that if a dev_t or dip were specified that they 8377f0b8309SEdward Pilatowicz * correspond to this device driver. On debug kernels we'll 8387f0b8309SEdward Pilatowicz * panic and on non-debug kernels we'll return failure. 
8397f0b8309SEdward Pilatowicz */ 8407f0b8309SEdward Pilatowicz ASSERT(ddi_driver_major(dip) == xdfs_major); 8417f0b8309SEdward Pilatowicz ASSERT((dev == DDI_DEV_T_ANY) || (getmajor(dev) == xdfs_major)); 8427f0b8309SEdward Pilatowicz if ((ddi_driver_major(dip) != xdfs_major) || 8437f0b8309SEdward Pilatowicz ((dev != DDI_DEV_T_ANY) && (getmajor(dev) != xdfs_major))) 8447f0b8309SEdward Pilatowicz return (DDI_PROP_NOT_FOUND); 8457f0b8309SEdward Pilatowicz 8467f0b8309SEdward Pilatowicz /* 8477f0b8309SEdward Pilatowicz * This property lookup might be associated with a device node 8487f0b8309SEdward Pilatowicz * that is not yet attached, if so pass it onto ddi_prop_op(). 8497f0b8309SEdward Pilatowicz */ 8507f0b8309SEdward Pilatowicz if (xsp == NULL) 8517f0b8309SEdward Pilatowicz return (ddi_prop_op(dev, dip, prop_op, flags, 8527f0b8309SEdward Pilatowicz name, valuep, lengthp)); 8537f0b8309SEdward Pilatowicz 8547f0b8309SEdward Pilatowicz /* If we're accessing the device in hvm mode, pass this request on */ 8557f0b8309SEdward Pilatowicz if (XDFS_HVM_MODE(xsp)) 8567f0b8309SEdward Pilatowicz return (xdfs_hvm_prop_op(dev, dip, prop_op, 8577f0b8309SEdward Pilatowicz flags, name, valuep, lengthp)); 8587f0b8309SEdward Pilatowicz 8597f0b8309SEdward Pilatowicz /* 8607f0b8309SEdward Pilatowicz * Make sure we only lookup static properties. 8617f0b8309SEdward Pilatowicz * 8627f0b8309SEdward Pilatowicz * If there are static properties of the underlying xdf driver 8637f0b8309SEdward Pilatowicz * that we want to mirror, then we'll have to explicity look them 8647f0b8309SEdward Pilatowicz * up and define them during attach. There are a few reasons 8657f0b8309SEdward Pilatowicz * for this. 
Most importantly, most static properties are typed 8667f0b8309SEdward Pilatowicz * and all dynamic properties are untyped, ie, for dynamic 8677f0b8309SEdward Pilatowicz * properties the caller must know the type of the property and 8687f0b8309SEdward Pilatowicz * how to interpret the value of the property. the prop_op drivedr 8697f0b8309SEdward Pilatowicz * entry point is only designed for returning dynamic/untyped 8707f0b8309SEdward Pilatowicz * properties, so if we were to attempt to lookup and pass back 8717f0b8309SEdward Pilatowicz * static properties of the underlying device here then we would 8727f0b8309SEdward Pilatowicz * be losing the type information for those properties. Another 8737f0b8309SEdward Pilatowicz * reason we don't want to pass on static property requests is that 8747f0b8309SEdward Pilatowicz * static properties are enumerable in the device tree, where as 8757f0b8309SEdward Pilatowicz * dynamic ones are not. 8767f0b8309SEdward Pilatowicz */ 8777f0b8309SEdward Pilatowicz flags |= DDI_PROP_DYNAMIC; 8787f0b8309SEdward Pilatowicz 8797f0b8309SEdward Pilatowicz /* 8807f0b8309SEdward Pilatowicz * We can't use the ldi here to access the underlying device because 8817f0b8309SEdward Pilatowicz * the ldi actually opens the device, and that open might fail if the 8827f0b8309SEdward Pilatowicz * device has already been opened with the FEXCL flag. If we used 8837f0b8309SEdward Pilatowicz * the ldi here, it would also be possible for some other caller to 8847f0b8309SEdward Pilatowicz * try open the device with the FEXCL flag and get a failure back 8857f0b8309SEdward Pilatowicz * because we have it open to do a property query. Instad we'll 8867f0b8309SEdward Pilatowicz * grab a hold on the target dip. 
8877f0b8309SEdward Pilatowicz */ 8887f0b8309SEdward Pilatowicz if (!xdfs_tgt_hold(xsp)) 8897f0b8309SEdward Pilatowicz return (DDI_PROP_NOT_FOUND); 8907f0b8309SEdward Pilatowicz 8917f0b8309SEdward Pilatowicz /* figure out dip the dev_t we're going to pass on down */ 8927f0b8309SEdward Pilatowicz tgt_dip = xsp->xdfss_tgt_dip; 8937f0b8309SEdward Pilatowicz if (dev == DDI_DEV_T_ANY) { 8947f0b8309SEdward Pilatowicz tgt_devt = DDI_DEV_T_ANY; 8957f0b8309SEdward Pilatowicz } else { 8967f0b8309SEdward Pilatowicz tgt_devt = xsp->xdfss_tgt_dev | XDFS_DEV2PART(dev); 8977f0b8309SEdward Pilatowicz } 8987f0b8309SEdward Pilatowicz 8997f0b8309SEdward Pilatowicz /* 9007f0b8309SEdward Pilatowicz * Cdev_prop_op() is not a public interface, and normally the caller 9017f0b8309SEdward Pilatowicz * is required to make sure that the target driver actually implements 9027f0b8309SEdward Pilatowicz * this interface before trying to invoke it. In this case we know 9037f0b8309SEdward Pilatowicz * that we're always accessing the xdf driver and it does have this 9047f0b8309SEdward Pilatowicz * interface defined, so we can skip the check. 
9057f0b8309SEdward Pilatowicz */ 9067f0b8309SEdward Pilatowicz rv = cdev_prop_op(tgt_devt, tgt_dip, 9077f0b8309SEdward Pilatowicz prop_op, flags, name, valuep, lengthp); 9087f0b8309SEdward Pilatowicz 9097f0b8309SEdward Pilatowicz xdfs_tgt_release(xsp); 9107f0b8309SEdward Pilatowicz return (rv); 9117f0b8309SEdward Pilatowicz } 9127f0b8309SEdward Pilatowicz 9137f0b8309SEdward Pilatowicz /* 9147f0b8309SEdward Pilatowicz * Driver PV and HVM dev_ops entry points 9157f0b8309SEdward Pilatowicz */ 9167f0b8309SEdward Pilatowicz /*ARGSUSED*/ 9177f0b8309SEdward Pilatowicz static int 9187f0b8309SEdward Pilatowicz xdfs_getinfo(dev_info_t *dip, ddi_info_cmd_t infocmd, void *arg, 9197f0b8309SEdward Pilatowicz void **result) 9207f0b8309SEdward Pilatowicz { 9217f0b8309SEdward Pilatowicz dev_t dev = (dev_t)arg; 9227f0b8309SEdward Pilatowicz int instance = XDFS_DEV2UNIT(dev); 9237f0b8309SEdward Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 9247f0b8309SEdward Pilatowicz 9257f0b8309SEdward Pilatowicz switch (infocmd) { 9267f0b8309SEdward Pilatowicz case DDI_INFO_DEVT2DEVINFO: 9277f0b8309SEdward Pilatowicz if (xsp == NULL) 9287f0b8309SEdward Pilatowicz return (DDI_FAILURE); 9297f0b8309SEdward Pilatowicz if (XDFS_HVM_MODE(xsp)) 9307f0b8309SEdward Pilatowicz *result = XDFS_HVM_DIP(xsp); 9317f0b8309SEdward Pilatowicz else 9327f0b8309SEdward Pilatowicz *result = (void *)xsp->xdfss_dip; 9337f0b8309SEdward Pilatowicz break; 9347f0b8309SEdward Pilatowicz case DDI_INFO_DEVT2INSTANCE: 9357f0b8309SEdward Pilatowicz *result = (void *)(intptr_t)instance; 9367f0b8309SEdward Pilatowicz break; 9377f0b8309SEdward Pilatowicz default: 9387f0b8309SEdward Pilatowicz return (DDI_FAILURE); 9397f0b8309SEdward Pilatowicz } 9407f0b8309SEdward Pilatowicz return (DDI_SUCCESS); 9417f0b8309SEdward Pilatowicz } 9427f0b8309SEdward Pilatowicz 9437f0b8309SEdward Pilatowicz static int 9447f0b8309SEdward Pilatowicz xdfs_hvm_probe(dev_info_t *dip, char *path) 9457f0b8309SEdward Pilatowicz { 
9467f0b8309SEdward Pilatowicz int instance = ddi_get_instance(dip); 9477f0b8309SEdward Pilatowicz int rv = DDI_PROBE_SUCCESS; 9487f0b8309SEdward Pilatowicz void *xsp; 9497f0b8309SEdward Pilatowicz 9507f0b8309SEdward Pilatowicz ASSERT(path != NULL); 9517f0b8309SEdward Pilatowicz cmn_err(CE_WARN, "PV access to device disabled: %s", path); 9527f0b8309SEdward Pilatowicz 9537f0b8309SEdward Pilatowicz (void) ddi_soft_state_zalloc(xdfs_ssp, instance); 9547f0b8309SEdward Pilatowicz VERIFY((xsp = ddi_get_soft_state(xdfs_ssp, instance)) != NULL); 9557f0b8309SEdward Pilatowicz 9567f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || 9577f0b8309SEdward Pilatowicz (xdfs_hvm_dev_ops->devo_probe == NULL) || 9587f0b8309SEdward Pilatowicz ((rv = xdfs_hvm_dev_ops->devo_probe(dip)) == DDI_PROBE_FAILURE)) { 9597f0b8309SEdward Pilatowicz ddi_soft_state_free(xdfs_ssp, instance); 9607f0b8309SEdward Pilatowicz cmn_err(CE_WARN, "HVM probe of device failed: %s", path); 9617f0b8309SEdward Pilatowicz kmem_free(path, MAXPATHLEN); 9627f0b8309SEdward Pilatowicz return (DDI_PROBE_FAILURE); 9637f0b8309SEdward Pilatowicz } 9647f0b8309SEdward Pilatowicz 9657f0b8309SEdward Pilatowicz XDFS_HVM_MODE(xsp) = B_TRUE; 9667f0b8309SEdward Pilatowicz XDFS_HVM_DIP(xsp) = dip; 9677f0b8309SEdward Pilatowicz XDFS_HVM_PATH(xsp) = path; 9687f0b8309SEdward Pilatowicz 9697f0b8309SEdward Pilatowicz return (rv); 9707f0b8309SEdward Pilatowicz } 9717f0b8309SEdward Pilatowicz 9727f0b8309SEdward Pilatowicz static int 9737f0b8309SEdward Pilatowicz xdfs_probe(dev_info_t *dip) 9747f0b8309SEdward Pilatowicz { 9757f0b8309SEdward Pilatowicz int instance = ddi_get_instance(dip); 9767f0b8309SEdward Pilatowicz xdfs_state_t *xsp; 9777f0b8309SEdward Pilatowicz dev_info_t *tgt_dip; 9787f0b8309SEdward Pilatowicz char *path; 9797f0b8309SEdward Pilatowicz int i, pv_disable; 9807f0b8309SEdward Pilatowicz 9817f0b8309SEdward Pilatowicz /* if we've already probed the device then there's nothing todo */ 9827f0b8309SEdward Pilatowicz if 
(ddi_get_soft_state(xdfs_ssp, instance)) 9837f0b8309SEdward Pilatowicz return (DDI_PROBE_PARTIAL); 9847f0b8309SEdward Pilatowicz 9857f0b8309SEdward Pilatowicz /* Figure out our pathname */ 9867f0b8309SEdward Pilatowicz path = kmem_alloc(MAXPATHLEN, KM_SLEEP); 9877f0b8309SEdward Pilatowicz (void) ddi_pathname(dip, path); 9887f0b8309SEdward Pilatowicz 9897f0b8309SEdward Pilatowicz /* see if we should disable pv access mode */ 9907f0b8309SEdward Pilatowicz pv_disable = ddi_prop_get_int(DDI_DEV_T_ANY, 9917f0b8309SEdward Pilatowicz dip, DDI_PROP_NOTPROM, "pv_disable", 0); 9927f0b8309SEdward Pilatowicz 9937f0b8309SEdward Pilatowicz if (xdfs_pv_disable || pv_disable) 9947f0b8309SEdward Pilatowicz return (xdfs_hvm_probe(dip, path)); 9957f0b8309SEdward Pilatowicz 9967f0b8309SEdward Pilatowicz /* 9977f0b8309SEdward Pilatowicz * This xdf shell device layers on top of an xdf device. So the first 9987f0b8309SEdward Pilatowicz * thing we need to do is determine which xdf device instance this 9997f0b8309SEdward Pilatowicz * xdf shell instance should be layered on top of. 10007f0b8309SEdward Pilatowicz */ 10017f0b8309SEdward Pilatowicz for (i = 0; xdfs_c_h2p_map[i].xdfs_h2p_hvm != NULL; i++) { 10027f0b8309SEdward Pilatowicz if (strcmp(xdfs_c_h2p_map[i].xdfs_h2p_hvm, path) == 0) 10037f0b8309SEdward Pilatowicz break; 10047f0b8309SEdward Pilatowicz } 10057f0b8309SEdward Pilatowicz 10067f0b8309SEdward Pilatowicz if ((xdfs_c_h2p_map[i].xdfs_h2p_hvm == NULL) || 10077f0b8309SEdward Pilatowicz ((tgt_dip = xdf_hvm_hold(xdfs_c_h2p_map[i].xdfs_h2p_pv)) == NULL)) { 10087f0b8309SEdward Pilatowicz /* 10097f0b8309SEdward Pilatowicz * UhOh. We either don't know what xdf instance this xdf 10107f0b8309SEdward Pilatowicz * shell device should be mapped to or the xdf node assocaited 10117f0b8309SEdward Pilatowicz * with this instance isnt' attached. in either case fall 10127f0b8309SEdward Pilatowicz * back to hvm access. 
10137f0b8309SEdward Pilatowicz */ 10147f0b8309SEdward Pilatowicz return (xdfs_hvm_probe(dip, path)); 10157f0b8309SEdward Pilatowicz } 10167f0b8309SEdward Pilatowicz 10177f0b8309SEdward Pilatowicz /* allocate and initialize our state structure */ 10187f0b8309SEdward Pilatowicz (void) ddi_soft_state_zalloc(xdfs_ssp, instance); 10197f0b8309SEdward Pilatowicz xsp = ddi_get_soft_state(xdfs_ssp, instance); 10207f0b8309SEdward Pilatowicz mutex_init(&xsp->xdfss_mutex, NULL, MUTEX_DRIVER, NULL); 10217f0b8309SEdward Pilatowicz cv_init(&xsp->xdfss_cv, NULL, CV_DEFAULT, NULL); 10227f0b8309SEdward Pilatowicz mutex_enter(&xsp->xdfss_mutex); 10237f0b8309SEdward Pilatowicz 10247f0b8309SEdward Pilatowicz xsp->xdfss_dip = dip; 10257f0b8309SEdward Pilatowicz xsp->xdfss_pv = xdfs_c_h2p_map[i].xdfs_h2p_pv; 10267f0b8309SEdward Pilatowicz xsp->xdfss_hvm = xdfs_c_h2p_map[i].xdfs_h2p_hvm; 10277f0b8309SEdward Pilatowicz xsp->xdfss_tgt_attached = B_FALSE; 10287f0b8309SEdward Pilatowicz cmlb_alloc_handle((cmlb_handle_t *)&xsp->xdfss_cmlbhandle); 10297f0b8309SEdward Pilatowicz 10307f0b8309SEdward Pilatowicz if (!xdfs_tgt_probe(xsp, tgt_dip)) { 10317f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 10327f0b8309SEdward Pilatowicz cmlb_free_handle(&xsp->xdfss_cmlbhandle); 10337f0b8309SEdward Pilatowicz ddi_soft_state_free(xdfs_ssp, instance); 10347f0b8309SEdward Pilatowicz ddi_release_devi(tgt_dip); 10357f0b8309SEdward Pilatowicz return (xdfs_hvm_probe(dip, path)); 10367f0b8309SEdward Pilatowicz } 10377f0b8309SEdward Pilatowicz mutex_exit(&xsp->xdfss_mutex); 10387f0b8309SEdward Pilatowicz 10397f0b8309SEdward Pilatowicz /* 10407f0b8309SEdward Pilatowicz * Add a zero-length attribute to tell the world we support 10417f0b8309SEdward Pilatowicz * kernel ioctls (for layered drivers). 
10427f0b8309SEdward Pilatowicz */ 10437f0b8309SEdward Pilatowicz (void) ddi_prop_create(DDI_DEV_T_NONE, dip, DDI_PROP_CANSLEEP, 10447f0b8309SEdward Pilatowicz DDI_KERNEL_IOCTL, NULL, 0); 10457f0b8309SEdward Pilatowicz 1046a10fbc55SEdward Pilatowicz kmem_free(path, MAXPATHLEN); 10477f0b8309SEdward Pilatowicz return (DDI_PROBE_SUCCESS); 10487f0b8309SEdward Pilatowicz } 10497f0b8309SEdward Pilatowicz 10507f0b8309SEdward Pilatowicz static int 10517f0b8309SEdward Pilatowicz xdfs_hvm_attach(dev_info_t *dip, ddi_attach_cmd_t cmd) 10527f0b8309SEdward Pilatowicz { 10537f0b8309SEdward Pilatowicz int instance = ddi_get_instance(dip); 10547f0b8309SEdward Pilatowicz void *xsp = ddi_get_soft_state(xdfs_ssp, instance); 10557f0b8309SEdward Pilatowicz int rv = DDI_FAILURE; 10567f0b8309SEdward Pilatowicz 10577f0b8309SEdward Pilatowicz XDFS_HVM_SANE(xsp); 10587f0b8309SEdward Pilatowicz 10597f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || 10607f0b8309SEdward Pilatowicz (xdfs_hvm_dev_ops->devo_attach == NULL) || 10617f0b8309SEdward Pilatowicz ((rv = xdfs_hvm_dev_ops->devo_attach(dip, cmd)) != DDI_SUCCESS)) { 10627f0b8309SEdward Pilatowicz cmn_err(CE_WARN, "HVM attach of device failed: %s", 10637f0b8309SEdward Pilatowicz XDFS_HVM_PATH(xsp)); 10647f0b8309SEdward Pilatowicz kmem_free(XDFS_HVM_PATH(xsp), MAXPATHLEN); 10657f0b8309SEdward Pilatowicz ddi_soft_state_free(xdfs_ssp, instance); 10667f0b8309SEdward Pilatowicz return (rv); 10677f0b8309SEdward Pilatowicz } 10687f0b8309SEdward Pilatowicz 10697f0b8309SEdward Pilatowicz return (DDI_SUCCESS); 10707f0b8309SEdward Pilatowicz } 10717f0b8309SEdward Pilatowicz 10727f0b8309SEdward Pilatowicz /* 10737f0b8309SEdward Pilatowicz * Autoconfiguration Routines 10747f0b8309SEdward Pilatowicz */ 10757f0b8309SEdward Pilatowicz static int 10767f0b8309SEdward Pilatowicz xdfs_attach(dev_info_t *dip, ddi_attach_cmd_t cmd) 10777f0b8309SEdward Pilatowicz { 10787f0b8309SEdward Pilatowicz int instance = ddi_get_instance(dip); 10797f0b8309SEdward 
Pilatowicz xdfs_state_t *xsp = ddi_get_soft_state(xdfs_ssp, instance); 10807f0b8309SEdward Pilatowicz 10817f0b8309SEdward Pilatowicz if (xsp == NULL) 10827f0b8309SEdward Pilatowicz return (DDI_FAILURE); 10837f0b8309SEdward Pilatowicz if (XDFS_HVM_MODE(xsp)) 10847f0b8309SEdward Pilatowicz return (xdfs_hvm_attach(dip, cmd)); 10857f0b8309SEdward Pilatowicz if (cmd != DDI_ATTACH) 10867f0b8309SEdward Pilatowicz return (DDI_FAILURE); 10877f0b8309SEdward Pilatowicz 10887f0b8309SEdward Pilatowicz xdfs_c_attach(xsp); 10897f0b8309SEdward Pilatowicz return (DDI_SUCCESS); 10907f0b8309SEdward Pilatowicz } 10917f0b8309SEdward Pilatowicz 10927f0b8309SEdward Pilatowicz static int 10937f0b8309SEdward Pilatowicz xdfs_hvm_detach(dev_info_t *dip, ddi_detach_cmd_t cmd) 10947f0b8309SEdward Pilatowicz { 10957f0b8309SEdward Pilatowicz int instance = ddi_get_instance(dip); 10967f0b8309SEdward Pilatowicz void *xsp = ddi_get_soft_state(xdfs_ssp, instance); 10977f0b8309SEdward Pilatowicz int rv; 10987f0b8309SEdward Pilatowicz 10997f0b8309SEdward Pilatowicz XDFS_HVM_SANE(xsp); 11007f0b8309SEdward Pilatowicz 11017f0b8309SEdward Pilatowicz if ((xdfs_hvm_dev_ops == NULL) || 11027f0b8309SEdward Pilatowicz (xdfs_hvm_dev_ops->devo_detach == NULL)) 11037f0b8309SEdward Pilatowicz return (DDI_FAILURE); 11047f0b8309SEdward Pilatowicz 11057f0b8309SEdward Pilatowicz if ((rv = xdfs_hvm_dev_ops->devo_detach(dip, cmd)) != DDI_SUCCESS) 11067f0b8309SEdward Pilatowicz return (rv); 11077f0b8309SEdward Pilatowicz 11087f0b8309SEdward Pilatowicz kmem_free(XDFS_HVM_PATH(xsp), MAXPATHLEN); 11097f0b8309SEdward Pilatowicz ddi_soft_state_free(xdfs_ssp, instance); 11107f0b8309SEdward Pilatowicz return (DDI_SUCCESS); 11117f0b8309SEdward Pilatowicz } 11127f0b8309SEdward Pilatowicz 11137f0b8309SEdward Pilatowicz static int 11147f0b8309SEdward Pilatowicz xdfs_detach(dev_info_t *dip, ddi_detach_cmd_t cmd) 11157f0b8309SEdward Pilatowicz { 11167f0b8309SEdward Pilatowicz int instance = ddi_get_instance(dip); 11177f0b8309SEdward 
	xdfs_state_t	*xsp = ddi_get_soft_state(xdfs_ssp, instance);

	if (XDFS_HVM_MODE(xsp))
		return (xdfs_hvm_detach(dip, cmd));
	if (cmd != DDI_DETACH)
		return (DDI_FAILURE);

	mutex_enter(&xsp->xdfss_mutex);
	if (!xdfs_tgt_detach(xsp)) {
		mutex_exit(&xsp->xdfss_mutex);
		return (DDI_FAILURE);
	}
	mutex_exit(&xsp->xdfss_mutex);

	cmlb_detach(xsp->xdfss_cmlbhandle, 0);
	cmlb_free_handle(&xsp->xdfss_cmlbhandle);
	ddi_release_devi(xsp->xdfss_tgt_dip);
	ddi_soft_state_free(xdfs_ssp, instance);
	ddi_prop_remove_all(dip);
	return (DDI_SUCCESS);
}

static int
xdfs_hvm_power(dev_info_t *dip, int component, int level)
{
	int		instance = ddi_get_instance(dip);
	void		*xsp = ddi_get_soft_state(xdfs_ssp, instance);

	XDFS_HVM_SANE(xsp);

	if ((xdfs_hvm_dev_ops == NULL) ||
	    (xdfs_hvm_dev_ops->devo_power == NULL))
		return (DDI_FAILURE);
	return (xdfs_hvm_dev_ops->devo_power(dip, component, level));
}

static
int
xdfs_power(dev_info_t *dip, int component, int level)
{
	int		instance = ddi_get_instance(dip);
	xdfs_state_t	*xsp = ddi_get_soft_state(xdfs_ssp, instance);

	if (XDFS_HVM_MODE(xsp))
		return (xdfs_hvm_power(dip, component, level));
	return (nodev());
}

/*
 * Cmlb ops vector
 */
static cmlb_tg_ops_t xdfs_lb_ops = {
	TG_DK_OPS_VERSION_1,
	xdfs_lb_rdwr,
	xdfs_lb_getinfo
};

/*
 * Device driver ops vector
 */
static struct cb_ops xdfs_cb_ops = {
	xdfs_open,		/* open */
	xdfs_close,		/* close */
	xdfs_strategy,		/* strategy */
	nodev,			/* print */
	xdfs_dump,		/* dump */
	xdfs_read,		/* read */
	xdfs_write,		/* write */
	xdfs_ioctl,		/* ioctl */
	nodev,			/* devmap */
	nodev,			/* mmap */
	nodev,			/* segmap */
	nochpoll,		/* poll */
	xdfs_prop_op,		/* cb_prop_op */
	0,			/* streamtab */
	D_64BIT | D_MP | D_NEW,	/* Driver compatibility flag */
	CB_REV,			/* cb_rev */
	xdfs_aread,		/* async read */
	xdfs_awrite		/* async write */
};

struct dev_ops xdfs_ops = {
	DEVO_REV,		/* devo_rev */
	0,			/* refcnt */
	xdfs_getinfo,		/* info */
	nulldev,		/* identify */
	xdfs_probe,		/* probe */
	xdfs_attach,		/* attach */
	xdfs_detach,		/* detach */
	nodev,			/* reset */
	&xdfs_cb_ops,		/* driver operations */
	NULL,			/* bus operations */
	xdfs_power,		/* power */
	ddi_quiesce_not_supported,	/* devo_quiesce */
};

/*
 * Module linkage information for the kernel.
 */
static struct modldrv modldrv = {
	&mod_driverops,		/* Type of module.  This one is a driver. */
	NULL,			/* Module description.  Set by _init() */
	&xdfs_ops,		/* Driver ops.
 */
};

static struct modlinkage modlinkage = {
	MODREV_1, (void *)&modldrv, NULL
};

int
_init(void)
{
	int rval;

	xdfs_major = ddi_name_to_major((char *)xdfs_c_name);
	if (xdfs_major == (major_t)-1)
		return (EINVAL);

	/*
	 * Determine the size of our soft state structure.  The base
	 * size of the structure is the larger of the hvm client's state
	 * structure or our shell state structure.  Then we'll align
	 * the end of the structure to a pointer boundary and append
	 * a xdfs_hvm_state_t structure.  This way the xdfs_hvm_state_t
	 * structure is always present and we can use it to determine the
	 * current device access mode (hvm or shell).
	 */
	xdfs_ss_size = MAX(xdfs_c_hvm_ss_size, sizeof (xdfs_state_t));
	xdfs_ss_size = P2ROUNDUP(xdfs_ss_size, sizeof (uintptr_t));
	xdfs_ss_size += sizeof (xdfs_hvm_state_t);

	/*
	 * In general, ide supports at most 4 disk devices, and this same
	 * limitation also applies to software emulating ide devices,
	 * so by default we pre-allocate 4 xdf shell soft state structures.
	 */
	if ((rval = ddi_soft_state_init(&xdfs_ssp,
	    xdfs_ss_size, XDFS_NODES)) != 0)
		return (rval);
	*xdfs_c_hvm_ss = xdfs_ssp;

	/* Install our module */
	if (modldrv.drv_linkinfo == NULL)
		modldrv.drv_linkinfo = (char *)xdfs_c_linkinfo;
	if ((rval = mod_install(&modlinkage)) != 0) {
		ddi_soft_state_fini(&xdfs_ssp);
		return (rval);
	}

	return (0);
}

int
_info(struct modinfo *modinfop)
{
	if (modldrv.drv_linkinfo == NULL)
		modldrv.drv_linkinfo = (char *)xdfs_c_linkinfo;
	return (mod_info(&modlinkage, modinfop));
}

int
_fini(void)
{
	int	rval;

	if ((rval = mod_remove(&modlinkage)) != 0)
		return (rval);
	ddi_soft_state_fini(&xdfs_ssp);
	return (0);
}