Lines Matching full:we

4 We need to evolve the API surface of BIO which is relevant to BIO_dgram (and the
15 - We use a sendmmsg/recvmmsg-like API. The alternative API was not considered
19 - We define our own structures rather than using the OS's `struct mmsghdr`.
28 - We do not have to expose functionality which we cannot guarantee
29 we can support on all platforms (for example, arbitrary control messages).
35 - For OSes which do not support `sendmmsg`, we emulate it using repeated
36 calls to `sendmsg`. For OSes which do not support `sendmsg`, we emulate it
40 - We do not define any flags at this time, as the flags previously considered
44 - We ensure the extensibility of our `BIO_MSG` structure in a way that preserves
61 - We do not support iovecs. The motivations for this are:
65 - The only way we could emulate iovecs on platforms which don't support
71 - We do not believe iovecs are needed to meet our performance requirements
78 - Even if we did support iovecs, we would have to impose a limit
79 on the number of iovecs supported, because we translate from our own
86 be used for a send operation. We support this, but require this functionality
186 We can either define `BIO_mmsghdr` as a typedef of `struct mmsghdr` or redefine
187 an equivalent structure. The former has the advantage that we can just pass the
190 Note that in `BIO_mem_dgram` we will have to process and therefore understand
191 the contents of `struct mmsghdr` ourselves. Therefore, initially we define a
195 The flags argument is defined by us. Initially we can support something like
200 If we go with this, there are some issues that arise:
206 - If we use OS-provided structures:
208 - We would need to include the OS headers which provide these
211 - If we choose to support these functions when OS support is not available
212 (see discussion below), We would need to define our own structures in this
215 - If we use our own structures:
217 - We would need to translate these structures during every call.
219 But we would need to have storage inside the BIO_dgram for *m* `struct
220 msghdr`, *m\*v* iovecs, etc. Since we want to support multithreaded use
225 of messages sent, so the existing semantics we are trying to match
226 lets us just send or receive fewer messages than we were asked to.
228 However, it does seem like we will need to limit *v*, the number of iovecs
229 per message. So what limit should we give to *v*, the number of iovecs? We
230 will need a fixed stack allocation of OS iovec structures and we can
231 allocate from this stack allocation as we iterate through the `BIO_msghdr`
232 we have been given. So in practice we could just only send messages
233 until we reach our iovec limit, and then return.
235 For example, suppose we allocate 64 iovecs internally:
248 So the only important thing we would need to document in this API
251 guarantee is to be made. e.g. if we allocate 64 iovecs internally,
256 iovecs are small, so we can afford to set the limit high enough
257 that it shouldn't cause any problems in practice. We can increase
258 the limit later without a breaking API change, but we cannot decrease
259 it later. So we might want to start with something small, like 8.
261 - We also need to decide what to do for OSes which don't support at least
275 - However there is a question here as to how we implement
276 the iovec arguments on platforms without `sendmsg`/`recvmsg`. (We cannot
277 use `writev`/`readv` because we need peer address information.) Logically
291 platforms. Possibly we could have an “iovec limit” variable in the
299 Could we use a simplified API? For example, could we have an API that returns
303 The problem here is we want to support “single-copy” (where the data is only
327 all read buffers being dequeued; see below.) For convenience we could have an
356 For send/receive addresses, we import the `BIO_(set|get)_dgram_(origin|dest)`
372 like setters and getters of the same variables from the name.) We probably want
374 destination, for example, and by separating these we allow the possibility of
376 we should choose less confusing names for these functions. Maybe
385 We probably want this for our own QUIC implementation built on top of this
386 anyway. Otherwise we will need another piece to do basically the same thing
387 and agglomerate multiple datagrams into a single BIO call. Unless we only
388 want use `sendmmsg` constructively in trivial cases (e.g. where we send two
398 QUIC implementation so probably not a big deal. We can always support
410 For (1) we have two options
413 `BIO_read_dequeue` path. We use an OpenSSL-provided default allocator
420 b. Use `recvfrom` directly. This means we have a `recvmmsg` path and a
423 The disadvantage of (a) is it yields an extra copy relative to what we have now,
425 syscall and we do not have to copy anything.
427 Since we will probably need to support platforms without
437 For (3) we have a legacy `BIO_read` but we have several datagrams still in the
438 RX queue. In this case we do have to copy - we have no choice. However this only
442 Subsequently for (3) we have to free the buffer using the free callback. This is
445 But since this seems a very strange API usage pattern, we may just want to fail
448 Probably not worth supporting this. So we can have the following rule:
457 We will also implement from scratch a BIO_dgram_pair. This will be provided as a
463 It is a functional assumption of the above design that we would never want to
467 If we did ever want to do this, multiple BIOs on the same FD is one possibility
472 If we wanted to support multithreaded use of the same FD using the same BIO, we
481 fill. We might want to have a way to specify how many buffers it should offer to