1.\" Copyright (c) 1998 2.\" The Regents of the University of California. All rights reserved. 3.\" 4.\" Redistribution and use in source and binary forms, with or without 5.\" modification, are permitted provided that the following conditions 6.\" are met: 7.\" 1. Redistributions of source code must retain the above copyright 8.\" notice, this list of conditions and the following disclaimer. --- 26 unchanged lines hidden (view full) --- 35.Dt BUF 9 36.Os 37.Sh NAME 38.Nm buf 39.Nd "kernel buffer I/O scheme used in FreeBSD VM system" 40.Sh DESCRIPTION 41The kernel implements a KVM abstraction of the buffer cache which allows it 42to map potentially disparate vm_page's into contiguous KVM for use by | 1.\" Copyright (c) 1998 2.\" The Regents of the University of California. All rights reserved. 3.\" 4.\" Redistribution and use in source and binary forms, with or without 5.\" modification, are permitted provided that the following conditions 6.\" are met: 7.\" 1. Redistributions of source code must retain the above copyright 8.\" notice, this list of conditions and the following disclaimer. --- 26 unchanged lines hidden (view full) --- 35.Dt BUF 9 36.Os 37.Sh NAME 38.Nm buf 39.Nd "kernel buffer I/O scheme used in FreeBSD VM system" 40.Sh DESCRIPTION 41The kernel implements a KVM abstraction of the buffer cache which allows it 42to map potentially disparate vm_page's into contiguous KVM for use by |
(mainly file system) devices and device I/O.
This abstraction supports
block sizes from DEV_BSIZE (usually 512 bytes) to upwards of several pages.
It also supports a relatively primitive byte-granular valid range and dirty
range currently hardcoded for use by NFS.
The code implementing the
VM Buffer abstraction is mostly concentrated in
.Pa /usr/src/sys/kern/vfs_bio.c .
.Pp
One of the most important things to remember when dealing with buffer pointers
(struct buf) is that the underlying pages are mapped directly from the buffer
cache.
No data copying occurs in the scheme proper, though some file systems
such as UFS do have to copy a little when dealing with file fragments.
The second most important thing to remember is that due to the underlying page
mapping, the b_data base pointer in a buf is always *page* aligned, not
*block* aligned.
When you have a VM buffer representing some b_offset and b_size, the actual
start of the buffer is (b_data + (b_offset & PAGE_MASK)) and not just b_data.
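For example, a minimal sketch of locating the start of a buffer's data,
assuming bp is a pointer to an instantiated, mapped struct buf (illustrative
code, not taken from the kernel sources):
.Bd -literal -offset indent
/*
 * bp->b_data is page aligned; the buffer's contents actually begin
 * at b_offset's byte offset within the first page.
 */
caddr_t data = bp->b_data + (bp->b_offset & PAGE_MASK);
.Ed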
Finally, the VM system's core buffer cache supports valid and dirty bits
(m->valid, m->dirty) for pages in DEV_BSIZE chunks.
Thus a platform with a hardware page size of 4096 bytes has 8 valid and
8 dirty bits per page.
These bits are generally set and cleared in groups based on the block size
of the device backing the page.
A complete page's worth of these bits is often referred to using the
VM_PAGE_BITS_ALL bitmask (i.e., 0xFF if the hardware page size is 4096).
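As a quick illustration of the arithmetic, assuming a 4096 byte page and a
512 byte DEV_BSIZE (this snippet is illustrative only):
.Bd -literal -offset indent
/*
 * One valid bit and one dirty bit per DEV_BSIZE chunk of a page:
 * 4096 / 512 = 8 bits, so the "whole page" mask is
 * (1 << 8) - 1 == 0xFF, which is what VM_PAGE_BITS_ALL covers.
 */
int chunks = PAGE_SIZE / DEV_BSIZE;	/* 8 */
int all_bits = (1 << chunks) - 1;	/* 0xFF */
.Ed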
.Pp
VM buffers also keep track of a byte-granular dirty range and valid range.
This feature is normally only used by the NFS subsystem.
I'm not sure why it is used at all, actually, since we have DEV_BSIZE
valid/dirty granularity within the VM buffer.
If a buffer dirty operation creates a 'hole', the dirty range will extend
to cover the hole.
If a buffer validation operation creates a 'hole', the byte-granular valid
range is left alone and will not take into account the new extension.
Thus the whole byte-granular abstraction is considered a bad hack and it
would be nice if we could get rid of it completely.
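For what it is worth, a sketch of the dirty-range extension in the style of
the NFS write path (b_dirtyoff and b_dirtyend are real struct buf fields;
the variables on and n, the byte offset and length of the newly dirtied
region, are hypothetical):
.Bd -literal -offset indent
/*
 * Illustrative only: grow [b_dirtyoff, b_dirtyend) to cover the
 * newly dirtied bytes [on, on + n), including any hole in between.
 */
if (bp->b_dirtyend > 0) {
	if (on < bp->b_dirtyoff)
		bp->b_dirtyoff = on;
	if (on + n > bp->b_dirtyend)
		bp->b_dirtyend = on + n;
} else {
	bp->b_dirtyoff = on;
	bp->b_dirtyend = on + n;
}
.Ed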
.Pp
A VM buffer is capable of mapping the underlying VM cache pages into KVM in
order to allow the kernel to directly manipulate the data associated with
the (vnode,b_offset,b_size).
The kernel typically unmaps VM buffers the moment they are no longer needed,
but often keeps the 'struct buf' structure, and even the bp->b_pages array,
instantiated despite having unmapped them from KVM.
If a page making up a VM buffer is about to undergo I/O, the
system typically unmaps it from KVM and replaces the page in the b_pages[]
array with a place-marker called bogus_page.
The place-marker forces any kernel subsystems referencing the associated
struct buf to re-lookup the associated page.
I believe the place-marker hack is used to allow sophisticated devices
such as file system devices to remap underlying pages in order to deal
with, for example, re-mapping a file fragment into a file block.
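A sketch of the re-lookup, loosely in the spirit of what vfs_bio.c does after
I/O completes (vm_page_lookup() and OFF_TO_IDX() are real, but this loop is
illustrative and obj, the backing VM object, is assumed to be in hand):
.Bd -literal -offset indent
/*
 * Illustrative only: any b_pages[] slot holding bogus_page must be
 * refreshed from the backing VM object before the data is touched.
 */
vm_page_t m;
int i;

for (i = 0; i < bp->b_npages; i++) {
	m = bp->b_pages[i];
	if (m == bogus_page)
		bp->b_pages[i] = vm_page_lookup(obj,
		    OFF_TO_IDX(bp->b_offset) + i);
}
.Ed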
.Pp
VM buffers are used to track I/O operations within the kernel.
Unfortunately, the I/O implementation is also somewhat of a hack because the
kernel wants to clear the dirty bit on the underlying pages the moment it
queues the I/O to the VFS device, not when the physical I/O is actually
initiated.
This can create confusion within file system devices that use delayed writes,
because you wind up with pages marked clean that are actually still dirty.
If not treated carefully, these pages could be thrown away!
Indeed, a number of serious bugs related to this hack were not fixed until
the 2.2.8/3.0 release.
The kernel uses an instantiated VM buffer (i.e., struct buf) to place-mark
pages in this special state.
The buffer is typically flagged B_DELWRI.
When a device no longer needs a buffer it typically flags it as B_RELBUF.
Due to the underlying pages being marked clean, the B_DELWRI|B_RELBUF
combination must be interpreted to mean that the buffer is still actually
dirty and must be written to its backing store before it can actually be
released.
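A minimal sketch of honoring that rule (B_DELWRI, B_RELBUF, bwrite(), and
brelse() are real, but the fragment itself is illustrative):
.Bd -literal -offset indent
/*
 * Illustrative only: a buffer carrying both B_DELWRI and B_RELBUF
 * still holds the only record that its pages are dirty, so flush
 * it to backing store rather than just releasing it.
 */
if ((bp->b_flags & (B_DELWRI | B_RELBUF)) == (B_DELWRI | B_RELBUF))
	bwrite(bp);
else
	brelse(bp);
.Ed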
In the case where B_DELWRI is not set, the underlying dirty pages are still
properly marked as dirty and the buffer can be completely freed without
losing that clean/dirty state information.
(XXX do we have to check other flags in regard to this situation???)
.Pp
The kernel reserves a portion of its KVM space to hold VM buffers' data maps.
Even though this is virtual space (since the buffers are mapped from the
buffer cache), we cannot make it arbitrarily large because instantiated
VM buffers (struct buf's) prevent their underlying pages in the buffer
cache from being freed.
This can complicate the life of the paging system.
.Pp
.\" .Sh SEE ALSO
.\" .Xr <fillmein> 9
.Sh HISTORY
The
.Nm
manual page was originally written by
.An Matthew Dillon
and first appeared in
.Fx 3.1 ,
December 1998.