We use two different functions for validating bkeys, bch_ptr_invalid() and bch_ptr_bad().

Keys of size 0 - keys that have been completely overwritten - would be safe to delete in memory while leaving them on disk; that's just unnecessary work, so we filter them out when resorting instead.

We can't filter out stale keys when we're resorting, because garbage collection needs to find them to ensure bucket generations don't wrap around; unless we're rewriting the btree node, those stale keys still exist on disk.

We also implement functions here for removing some number of sectors from the front or the back of a bkey - this is mainly used for fixing overlapping extents.

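A minimal sketch of the front trim, assuming only that an extent key records the offset one past its end plus a size; the struct and function below are made-up stand-ins, and the real bch_cut_front() must also adjust the key's data pointers, which this omits:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative extent key: covers sectors [offset - size, offset). */
struct extent {
        uint64_t offset;        /* sector just past the end */
        uint64_t size;          /* length in sectors */
};

/* Drop every sector before `where`; the end offset stays put. */
static bool cut_front(uint64_t where, struct extent *e)
{
        uint64_t start = e->offset - e->size;

        if (where <= start)
                return false;   /* no overlap, key unchanged */

        e->size = where < e->offset ? e->offset - where : 0;
        return true;
}
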
There could be many bsets on disk, but we never allow there to be more than 4 in memory - we lazily resort as needed.

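For reference, bcache spells the limit out as a constant in drivers/md/bcache/bset.h:

#define MAX_BSETS       4U
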
We implement code here for creating and maintaining auxiliary search trees (described below) for searching an individual bset, and on top of that we implement a btree iterator.

Since keys are variable length, we can't use a binary search on a bset - we'd have no way to find the start of a key at an arbitrary offset in the middle of the set.

So we need to construct some sort of lookup table. Since we only insert keys into the last (unwritten) set, most of the keys within a given btree node are usually in sets that are mostly constant. We use two different types of lookup tables to take advantage of this.

For sets that are no longer being inserted into, we construct a binary search tree in an array - traversing a binary search tree laid out in an array gives excellent locality of reference, since a node's two children sit next to each other in memory, as do its four grandchildren, and so on. It's quite useful, performance wise, to keep these nodes small - not just because they're more likely to be in L2, but also because we can prefetch more effectively.

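A sketch of such a descent, with an illustrative node type standing in for the compressed nodes described next:

#include <stdint.h>

struct tnode {
        uint64_t key;           /* stand-in; the real nodes are 4 bytes */
};

/*
 * Root at index 1: node j's children sit side by side at 2j and
 * 2j + 1, its grandchildren at 4j..4j + 3, so we can issue a prefetch
 * two levels ahead of every comparison. Returns where the descent
 * fell off the bottom; the caller maps that back into the set.
 */
static unsigned tree_search(const struct tnode *tree, unsigned size,
                            uint64_t search)
{
        unsigned j = 1;

        while (j < size) {
                __builtin_prefetch(&tree[j * 4]);

                j = j * 2 + (search > tree[j].key);
        }

        return j;
}
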
Nodes in the auxiliary search tree must contain both a key to compare against (we don't want to fetch the key from the set, that would defeat the purpose) and a pointer to the key. We use a few tricks to compress both of these.

To compress the pointer, we take advantage of the fact that one node in the search tree corresponds to precisely BSET_CACHELINE bytes in the set. We have a function that takes a node's index in the binary tree and returns what its index would be in an inorder traversal, so we only have to store the key's offset within that cacheline.

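A simplified version of that mapping, assuming a perfect tree of `depth` levels with the root at index 1 (the kernel's to_inorder() does this with heavier bit tricks and also copes with a partially full bottom row):

static unsigned to_inorder(unsigned j, unsigned depth)
{
        unsigned d = 31 - __builtin_clz(j);     /* node j's depth */
        unsigned below = depth - 1 - d;         /* levels beneath it */

        /* Swap the leading 1 bit for a trailing 1, padded with zeroes. */
        return (((j - (1u << d)) << 1) | 1) << below;
}

The node at inorder position i then corresponds to cacheline i of the set, so the pointer stored in a node shrinks to the key's offset within a single cacheline.
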
The key itself is too big to store whole. To compress it, we take advantage of the fact that when we're traversing the search tree, at every iteration we know that both our search key and the key we're looking for lie within some range - bounded by our previous comparisons. (We special case the start of a search so that this is true even at the root of the tree.)

So if we know the key we're looking for is between a and b, and a and b don't differ above bit 50, we don't need to compare anything above bit 50.

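The cutoff falls straight out of an xor; a hypothetical helper over plain 64-bit keys makes it concrete:

#include <stdint.h>

/*
 * Bits above the highest bit where a and b differ are a shared prefix
 * of every key between them, and can never decide a comparison.
 */
static unsigned highest_useful_bit(uint64_t a, uint64_t b)
{
        uint64_t diff = a ^ b;

        return diff ? 63 - (unsigned)__builtin_clzll(diff) : 0;
}

If a and b agree above bit 50, a ^ b has no bits set above 50, so comparisons may safely start at bit 50.
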
We don't usually need the rest of the bits, either; we only need enough bits to partition the key range we're currently checking. Consider key n - the key our auxiliary search tree node corresponds to - and key p, the key immediately preceding n. The lowest bit we need to store in the auxiliary search tree is the highest bit that differs between n and p: with anything less, the node couldn't tell the two apart.

Note that this could be bit 0 - we might sometimes need all 80 bits to do the comparison. But we'd really like our nodes in the auxiliary search tree to be of fixed size.

The solution is to make the nodes fixed size anyway and, when constructing a node, check whether p and n actually differ within the bits we were able to store. If they don't, we flag that node, and when doing lookups we fall back to comparing against the original key.

As long as that failure is rare (it tends to happen a bit less than 1% of the time), we win - even on failures, that key is then more likely to be in cache than if we were doing binary searches all the way, since we're touching so much less memory.

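A sketch of that construction-time check over plain 64-bit keys, with made-up names and widths; bcache's make_bfloat() plays the same trick, reserving one exponent value to mark a failed node:

#include <stdint.h>

#define MANTISSA_BITS   22
#define EXPONENT_FAILED 127     /* reserved value: node is flagged */

/* The window of MANTISSA_BITS bits of `key` starting at bit `exponent`. */
static uint32_t mantissa(uint64_t key, unsigned exponent)
{
        return (uint32_t)(key >> exponent) & ((1u << MANTISSA_BITS) - 1);
}

/*
 * If the stored window can't tell the node's key n apart from its
 * predecessor p, flag the node; lookups then compare against the
 * original key in the set instead.
 */
static unsigned make_node(uint64_t p, uint64_t n, unsigned exponent)
{
        if (mantissa(p, exponent) == mantissa(n, exponent))
                return EXPONENT_FAILED;

        return exponent;
}
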
We need 7 bits for the exponent and 3 bits for the key's offset (since keys are 8 byte aligned); that leaves 22 bits for the mantissa, so a node is exactly 4 bytes. We need one node per 128 bytes in the btree node, which means the auxiliary search trees take up only about 3% (4/128 = 3.125%) as much memory as the btree itself.

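Packed into a bitfield, that budget is roughly the node bcache uses (field names follow the kernel's struct bkey_float; treat the details as a sketch of the layout rather than a verbatim copy):

/* 7 + 3 + 22 = 32 bits: one 4-byte node per 128-byte cacheline of keys. */
struct bkey_float {
        unsigned        exponent:7;     /* where the mantissa window starts */
        unsigned        m:3;            /* key's offset within its cacheline */
        unsigned        mantissa:22;    /* stored window of key bits */
} __attribute__((packed));
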
Constructing these auxiliary search trees is moderately expensive, and we don't want to be constantly rebuilding the search tree for the last set whenever we insert another key into it. For the unwritten set, we use a much simpler lookup table: a flat array with one entry per BSET_CACHELINE-sized range of the set, pointing at the first key in that range; a lookup indexes into it and scans linearly from there.

These are much easier to keep up to date when we insert a key - we do it somewhat lazily; when we shift a key up we usually just increment the pointer to it, and only when it would overflow do we go to the trouble of finding the first key in that range of bytes again.

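A sketch of that lazy fixup, assuming each table entry is a small per-cacheline offset; the names are hypothetical, with bch_bset_fix_lookup_table() being the real version in bcache:

#include <stddef.h>
#include <stdint.h>

#define BSET_CACHELINE  128

/*
 * table[i] is the offset, in 8-byte words from the start of cacheline
 * i, of the first indexed key in that cacheline. Inserting a key of
 * `shift` words moves every later key up, so we usually just bump the
 * stored offsets; only an entry that overflows its cacheline forces a
 * rescan.
 */
static void fix_table_after_insert(uint8_t *table, size_t entries,
                                   size_t first_affected, unsigned shift)
{
        for (size_t i = first_affected; i < entries; i++) {
                table[i] += shift;

                if (table[i] >= BSET_CACHELINE / 8) {
                        /*
                         * Rare slow path: the key no longer starts in
                         * cacheline i - rescan the set to find the first
                         * key in this range again (omitted here).
                         */
                }
        }
}
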
A few notes on the data structures themselves: we construct the binary tree in an array as if the array started at 1, so that things line up on the same cachelines better.

Conceptually, the record of each node's preceding key is a member of struct bkey_float, but we want to keep bkey_float to 4 bytes, so it lives in its own array.

And set[0] is special: its tree, prev and data pointers point directly to the memory we have allocated for this btree node.