#
d5269a63 |
| 08-Dec-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Add an API for jumbo mbuf cluster allocation and also provide 4k clusters in addition to 9k and 16k ones.
struct mbuf *m_getjcl(int how, short type, int flags, int size)
void *m_cljget(struct mbuf *m, int how, int size)
m_getjcl() returns an mbuf with a cluster of the specified size attached, like m_getcl() does for 2k clusters.
m_cljget() is different from m_clget() as it can allocate clusters without attaching them to an mbuf. In that case the return value is the pointer to the cluster of the requested size. If an mbuf was specified, it gets the cluster attached to it and the return value can be safely ignored.
For size both take MCLBYTES, MJUM4BYTES, MJUM9BYTES, MJUM16BYTES.
Reviewed by: glebius Tested by: glebius Sponsored by: TCP/IP Optimization Fundraise 2005
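As an illustration, here is a minimal sketch of how a driver might use the two new calls. The helper names and the receive-buffer scenario are made up for this example; M_DONTWAIT is the mbuf allocation flag of this era.

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Allocate an mbuf with a 9k jumbo cluster already attached,
     * just as m_getcl() does for 2k clusters. */
    static struct mbuf *
    alloc_rx_mbuf(void)
    {
            struct mbuf *m;

            m = m_getjcl(M_DONTWAIT, MT_DATA, M_PKTHDR, MJUM9BYTES);
            if (m == NULL)
                    return (NULL);
            m->m_len = m->m_pkthdr.len = MJUM9BYTES;
            return (m);
    }

    /* With a NULL mbuf argument m_cljget() returns a bare cluster;
     * a driver can let the hardware fill it and wrap it in an mbuf
     * afterwards.  The caller must check for NULL. */
    static void *
    alloc_rx_cluster(void)
    {
            return (m_cljget(NULL, M_DONTWAIT, MJUM16BYTES));
    }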
|
#
cd5bb63b |
| 05-Nov-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Free only those mbuf+clusters back to the packet zone that were allocated from there. All others get broken up and freed individually to the mbuf and cluster zones.
The packet zone is a secondary zone to the mbuf zone. There is currently a limitation in UMA which prevents decreasing the packet zone stock when the mbuf and cluster zones are drained and all their members are part of packets. When this is fixed this change may be reverted.
|
#
a5f77087 |
| 04-Nov-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Fix a logic error introduced with mandatory mbuf cluster refcounting and freeing of mbufs+clusters back to the packet zone.
|
Revision tags: release/6.0.0_cvs, release/6.0.0 |
|
#
56a4e45a |
| 02-Nov-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Mandatory mbuf cluster reference counting and groundwork for UMA based jumbo 9k and jumbo 16k cluster support.
All mbufs with external storage attached are mandatorily reference counted. For clusters and jumbo clusters UMA provides the refcnt storage directly; it does not have to be separately allocated. Any other type of external storage gets its own refcnt allocated from a UMA mbuf refcnt zone instead of normal kernel malloc.
The refcount API MEXT_ADD_REF() and MEXT_REM_REF() is no longer publicly accessible. The proper m_* functions have to be used.
mb_ctor_clust() and mb_dtor_clust() both handle normal 2k as well as 9k and 16k clusters.
Clusters and jumbo clusters may be obtained without attaching them immediately to an mbuf. This is for high-performance cluster allocation in network drivers where mbufs are attached after the cluster has been filled.
Tested by: rwatson Sponsored by: TCP/IP Optimizations Fundraise 2005
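A hedged sketch of what mandatory refcounting buys callers (the helper is illustrative, not from the commit): m_copym() can hand out a read-only copy that shares the underlying cluster, and the refcount keeps the storage alive until both chains are freed.

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Share the data of chain m with a second, read-only chain.
     * Both chains reference the same cluster(s); the UMA-provided
     * refcount makes m_freem() on either chain safe. */
    static struct mbuf *
    share_chain(struct mbuf *m)
    {
            return (m_copym(m, 0, M_COPYALL, M_DONTWAIT));
    }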
|
#
fdcc028d |
| 30-Aug-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Changes and cleanups to m_sanity():
o for() instead of while() looping over the mbuf chain
o parens around all flag checks
o more verbose function and purpose description
o some more style changes
Based on feedback from: sam
|
#
e0068c3a |
| 30-Aug-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Unbreak m_demote() and put back the 'all' flag. Without it we cannot correctly test for m_nextpkt in an mbuf chain.
|
#
fbe81638 |
| 30-Aug-2005 |
Andre Oppermann <andre@FreeBSD.org> |
o Remove the 'all' flag from m_demote(). Users can simply call it with m_demote(m->m_next) if they wish to start at the second mbuf in the chain.
o Test m_type with == instead of &.
o Check m_nextpkt against NULL instead of implicit 0.
Based on feedback from: sam
|
#
4da84431 |
| 29-Aug-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Add m_copymdata(struct mbuf *m, struct mbuf *n, int off, int len, int prep, int how).
Copies the data portion of mbuf (chain) n starting from offset off for length len to mbuf (chain) m. Depending on prep the copied data will be appended or prepended. The function ensures that the mbuf (chain) m will be fully writeable by making real (not refcnt) copies of mbuf clusters. When prepending, the function returns a pointer to the new start of mbuf chain m and leaves as much leading space as possible in the new first mbuf.
Reviewed by: glebius
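For example, prepending one chain's payload to another might look like this sketch. The helper is hypothetical; per the description above, the returned pointer is the new chain head when prepending, and NULL is assumed to signal allocation failure.

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Prepend len bytes of chain n (from offset 0) to chain m.
     * prep = 1 selects prepending; the result is fully writeable. */
    static struct mbuf *
    prepend_copy(struct mbuf *m, struct mbuf *n, int len)
    {
            return (m_copymdata(m, n, 0, len, 1, M_DONTWAIT));
    }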
|
#
a048affb |
| 29-Aug-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Add m_sanity(struct mbuf *m, int sanitize) to do some heavy sanity checking on mbufs and mbuf chains. Set sanitize to 1 to garble illegal things and have them blow up later when used/accessed.
m_sanity()'s main purpose is for KASSERT()s and debugging of non-kosher mbuf manipulation (of which we have a number).
Reviewed by: glebius
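A typical KASSERT() use, as the description suggests (sketch; assumes m_sanity() returns non-zero for a sane chain):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    /* Assert chain sanity in debug kernels; sanitize = 0 only checks,
     * 1 additionally garbles offending fields so later use blows up. */
    static void
    check_chain(struct mbuf *m)
    {
            KASSERT(m_sanity(m, 0),
                ("%s: insane mbuf chain %p", __func__, m));
    }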
|
#
ed111688 |
| 29-Aug-2005 |
Andre Oppermann <andre@FreeBSD.org> |
Add m_demote(struct mbuf *m, int all) to clean up an mbuf (chain), removing any tags and packet headers. If "all" is set then the first mbuf in the chain will be cleaned too.
This function is used before an mbuf that arrived as a packet with m->m_flags & M_PKTHDR is appended to an mbuf chain using m->m_next (not m->m_nextpkt).
Reviewed by: glebius
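The use case described above, as a sketch (hypothetical helper; m_cat() is used here to do the m_next linking):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Append packet n to chain m via m_next.  n arrived with
     * M_PKTHDR set, so demote it first: all = 1 cleans the first
     * mbuf (n itself) too, stripping its header and tags. */
    static void
    append_packet(struct mbuf *m, struct mbuf *n)
    {
            m_demote(n, 1);
            m_cat(m, n);
    }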
|
#
ab8ab90c |
| 30-Jul-2005 |
Sam Leffler <sam@FreeBSD.org> |
add m_align, a function to align any type of mbuf (i.e. it is a superset of M_ALIGN and MH_ALIGN)
Reviewed by: several
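A sketch of the difference in practice (hypothetical helper): M_ALIGN and MH_ALIGN each handle only one mbuf layout, while m_align() can be called on any newly allocated mbuf, cluster-backed or not.

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Position m_data so that len bytes of payload end flush with
     * the end of the buffer, whatever storage backs the mbuf. */
    static void
    align_payload(struct mbuf *m, int len)
    {
            m_align(m, len);
            m->m_len = len;
    }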
|
Revision tags: release/5.4.0_cvs, release/5.4.0 |
|
#
75ae2570 |
| 04-May-2005 |
Maksim Yevmenkin <emax@FreeBSD.org> |
Change m_uiotombuf so it will accept an offset at which data should be copied to the mbuf. The offset cannot exceed MHLEN bytes. This is currently used to fix the Ethernet header alignment problem on alpha and sparc64. Also change all users of m_uiotombuf to pass the proper offset.
Reviewed by: jmg, sam Tested by: Sten Spans "sten AT blinkenlights DOT nl" MFC after: 1 week
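For instance, a caller on a strict-alignment machine can now shift the copied data by ETHER_ALIGN (2 bytes) so the IP header behind a 14-byte Ethernet header lands on a 4-byte boundary. A sketch; the helper name is made up.

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/uio.h>
    #include <net/ethernet.h>       /* ETHER_ALIGN */

    /* Copy a uio into a fresh mbuf chain, offset by ETHER_ALIGN. */
    static struct mbuf *
    uio_to_aligned_mbuf(struct uio *uio)
    {
            return (m_uiotombuf(uio, M_DONTWAIT, uio->uio_resid,
                ETHER_ALIGN));
    }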
|
#
7ac139a9 |
| 17-Mar-2005 |
John-Mark Gurney <jmg@FreeBSD.org> |
add m_copyup function. This can be used to help make our IP stack less alignment-restrictive, and help performance on some Ethernet cards which currently copy the entire packet a couple of bytes to get the packet aligned properly...
Wordsmithing by: dwhite Obtained from: NetBSD (code only) I'll clean it up later: rwatson
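A sketch of the common pattern (borrowed, like the function, from NetBSD; the helper name is made up and NULL is assumed to signal failure):

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    /* Make the IP header contiguous and 4-byte aligned without
     * copying the whole packet; m_copyup() moves only the leading
     * bytes into a new mbuf when needed. */
    static struct mbuf *
    pullup_ip_header(struct mbuf *m)
    {
            if (m->m_len < (int)sizeof(struct ip) ||
                ((uintptr_t)mtod(m, caddr_t) & 3) != 0)
                    m = m_copyup(m, sizeof(struct ip),
                        (max_linkhdr + 3) & ~3);
            return (m);
    }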
|
#
a4e71429 |
| 08-Mar-2005 |
Sam Leffler <sam@FreeBSD.org> |
allow the destination of m_move_pkthdr to have external storage (e.g. a cluster)
Glanced at by: rwatson, silby
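A sketch of what this enables (hypothetical helper): the packet header, including its m_tag chain, can now be moved onto a cluster-backed mbuf.

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Move the packet header from 'from' onto a fresh cluster mbuf;
     * 'from' loses M_PKTHDR in the process. */
    static struct mbuf *
    header_to_cluster(struct mbuf *from)
    {
            struct mbuf *to;

            to = m_getcl(M_DONTWAIT, MT_DATA, 0);   /* mbuf + 2k cluster */
            if (to != NULL)
                    m_move_pkthdr(to, from);
            return (to);
    }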
|
#
2b2c7a6b |
| 06-Mar-2005 |
Alan Cox <alc@FreeBSD.org> |
The m_ext reference counts are potentially shared and modified asynchronously by different threads. Thus, declare as volatile the reference count that is accessed through m_ext's pointer, ref_cnt. Revert the previous change, revision 1.144, that casts as volatile a single dereference of ref_cnt.
Reviewed by: bmilekic, dwhite Problem reported by: kris MFC after: 3 days
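The class of bug being addressed can be shown with a generic sketch (illustrative only, not the actual mbuf code):

    #include <sys/types.h>

    /* With a plain u_int *, the compiler may hoist the load of
     * *ref_cnt out of the loop and spin on a stale value; volatile
     * forces a fresh load on every iteration. */
    static void
    wait_for_last_ref(volatile u_int *ref_cnt)
    {
            while (*ref_cnt > 1)
                    ;       /* spin until we hold the only reference */
    }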
|
#
a1d0c3f2 |
| 03-Mar-2005 |
Doug White <dwhite@FreeBSD.org> |
Insert a volatile cast to discourage gcc from optimizing the read outside of the while loop.
Suggested by: alc MFC after: 1 day
|
#
59d8b310 |
| 24-Feb-2005 |
Sam Leffler <sam@FreeBSD.org> |
change m_adj to reclaim unused mbufs instead of zeroing m_len when trimming space off the back of a chain; this is an indirect solution to a potential null ptr deref
Noticed by: Coverity Prevent analysis tool (null ptr deref) Reviewed by: dg, rwatson
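For reference, trimming from the tail is the negative-length case of m_adj() (sketch, with a hypothetical wrapper):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Trim 'trim' bytes off the tail of the chain.  After this
     * change, mbufs whose contents are trimmed away entirely are
     * reclaimed instead of being left with m_len == 0. */
    static void
    trim_tail(struct mbuf *m, int trim)
    {
            m_adj(m, -trim);
    }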
|
#
9d8993bb |
| 23-Feb-2005 |
Sam Leffler <sam@FreeBSD.org> |
remove dead code
Noticed by: Coverity Prevent analysis tool Reviewed by: silby
|
#
3d2a3ff2 |
| 10-Feb-2005 |
Bosko Milekic <bmilekic@FreeBSD.org> |
Optimize the way reference counting is performed with mbufs. We do not need to perform an extra memory fetch in the Packet (Mbuf+Cluster) constructor to initialize the reference counter anymore. The reference counts are located in a separate memory region (in the slab header, because this zone is UMA_ZONE_REFCNT), so the memory fetch very often resulted in a cache miss. Additionally, and perhaps more significantly, optimize the free mbuf+cluster (packet) case, which is very common, to no longer require an atomic operation on free (to verify the reference counter) if the reference on the cluster has never been increased (also very common). On average this saves an atomic operation per mbuf free.
Original patch submitted by: Gerrit Nagelhout <gnagelhout@sandvine.com>
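The free-path idea reduces to the following pattern (a simplified sketch of the idea, not the literal zone code; atomic_fetchadd_int is used here as a stand-in for whatever primitive the real code employs):

    #include <sys/param.h>
    #include <machine/atomic.h>

    /* If the count is still 1 the cluster was never shared, so no
     * other thread can be racing us and the atomic decrement can be
     * skipped.  Returns 1 when the caller held the last reference. */
    static int
    drop_ref(volatile u_int *ref_cnt)
    {
            if (*ref_cnt == 1 || atomic_fetchadd_int(ref_cnt, -1) == 1)
                    return (1);
            return (0);
    }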
|
#
c711aea6 |
| 10-Feb-2005 |
Poul-Henning Kamp <phk@FreeBSD.org> |
Make a bunch of malloc types static.
Found by: src/tools/tools/kernxref
|
Revision tags: release/4.11.0_cvs, release/4.11.0 |
|
#
9454b2d8 |
| 07-Jan-2005 |
Warner Losh <imp@FreeBSD.org> |
/* -> /*- for copyright notices, minor format tweaks as necessary
|
#
a37c415e |
| 15-Dec-2004 |
Sam Leffler <sam@FreeBSD.org> |
fix m_append for the case where additional mbufs are required
|
#
4873d175 |
| 08-Dec-2004 |
Sam Leffler <sam@FreeBSD.org> |
add m_append utility function to be used in forthcoming changes
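Sketch of the intended use (hypothetical caller; m_append() returns 1 on success and 0 if an allocation failed):

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/errno.h>

    /* Copy len bytes from cp onto the end of the chain, growing it
     * with additional mbufs as needed. */
    static int
    append_bytes(struct mbuf *m, const char *cp, int len)
    {
            if (m_append(m, len, cp) == 0)
                    return (ENOBUFS);
            return (0);
    }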
|
Revision tags: release/5.3.0_cvs, release/5.3.0 |
|
#
7b125090 |
| 28-Sep-2004 |
John-Mark Gurney <jmg@FreeBSD.org> |
improve the mbuf m_print function: only pull the length from the pkthdr if there is one, detect mbuf loops and stop, add an extra arg so you can print only the first x bytes of the data per mbuf (print all if the arg is -1), and print flags using %b (bitmask)...
No code in the tree appears to use m_print, and it's just a matter of adding -1 as an additional arg to m_print to restore the original behavior.
MFC after: 4 days
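Usage after the change (sketch, assuming the new second argument is the per-mbuf byte limit described above):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    static void
    debug_dump(struct mbuf *m)
    {
            m_print(m, 32);         /* dump first 32 bytes per mbuf */
            m_print(m, -1);         /* dump everything, old behavior */
    }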
|
#
01e9ccbd |
| 21-Jul-2004 |
Bosko Milekic <bmilekic@FreeBSD.org> |
Back out just a portion of Alfred's last commit. Remove the MBUF_CHECK (WITNESS) for code paths that always call uma_zalloc_arg() shortly after where the check was, because uma_zalloc_arg() already does a similar check.
No objections from Alfred. Thanks Alfred.
|