# b78b6b3a | 29-Jul-2013 | Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge 3.11-rc3 into driver-core-next
We want these fixes in this branch.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# 9c5891bd | 29-Jul-2013 | Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge 3.11-rc3 into char-misc-next
This resolves a merge issue with: drivers/misc/mei/init.c
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# 78283dd2 | 29-Jul-2013 | Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge 3.11-rc3 into usb-next
# 01731cf2 | 29-Jul-2013 | Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge 3.11-rc3 into staging-next
We want these fixes here.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Revision tags: v3.11-rc3
# cb54b53a | 25-Jul-2013 | Daniel Vetter <daniel.vetter@ffwll.ch>
Merge commit 'Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux'
This backmerges Linus' merge commit of the latest drm-fixes pull:
commit 549f3a1218ba18fcde11ef0e22b07e6365645788
Merge: 42577ca 058ca4a
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Tue Jul 23 15:47:08 2013 -0700
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
We've accrued a few too many conflicts, but the real reason is that I want to merge the 100% solution for Haswell concurrent register writes into drm-intel-next. But that depends upon the 90% bandaid merged into -fixes:
commit a7cd1b8fea2f341b626b255d9898a5ca5fabbf0a
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Fri Jul 19 20:36:51 2013 +0100
drm/i915: Serialize almost all register access
Also, we can roll up the accrued conflicts.
Usually I'd backmerge a tagged -rc, but I want to get this done before heading off to vacations next week ;-)
Conflicts:
    drivers/gpu/drm/i915/i915_dma.c
    drivers/gpu/drm/i915/i915_gem.c
v2: For added hilarity we have an init sequence conflict around the gt_lock, so we need to move that one, too. Spotted by Jani Nikula.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
# a3f86127 | 25-Jul-2013 | Jiri Kosina <jkosina@suse.cz>
Merge branch 'master' into for-next
Sync with Linus' master to be able to apply trivial patches to newer code.
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
# d4c90b1b | 23-Jul-2013 | Linus Torvalds <torvalds@linux-foundation.org>
Merge branch 'for-3.11/drivers' of git://git.kernel.dk/linux-block
Pull block IO driver bits from Jens Axboe: "As I mentioned in the core block pull request, due to real life circumstances the driver pull request would be late. Now it looks like -rc2 late... On the plus side, apart from the rsxx update, these are all things that I could argue could go in later in the cycle as they are fixes and not features. So even though things are late, it's not ALL bad.
The pull request contains:
- Updates to bcache, all bug fixes, from Kent.
- A pile of drbd bug fixes (no big features this time!).
- xen blk front/back fixes.
- rsxx driver updates, some of them deferred from 3.10. So should be well cooked by now"
* 'for-3.11/drivers' of git://git.kernel.dk/linux-block: (63 commits)
  bcache: Allocation kthread fixes
  bcache: Fix GC_SECTORS_USED() calculation
  bcache: Journal replay fix
  bcache: Shutdown fix
  bcache: Fix a sysfs splat on shutdown
  bcache: Advertise that flushes are supported
  bcache: check for allocation failures
  bcache: Fix a dumb race
  bcache: Use standard utility code
  bcache: Update email address
  bcache: Delete fuzz tester
  bcache: Document shrinker reserve better
  bcache: FUA fixes
  drbd: Allow online change of al-stripes and al-stripe-size
  drbd: Constants should be UPPERCASE
  drbd: Ignore the exit code of a fence-peer handler if it returns too late
  drbd: Fix rcu_read_lock balance on error path
  drbd: fix error return code in drbd_init()
  drbd: Do not sleep inside rcu
  bcache: Refresh usage docs
  ...
Revision tags: v3.11-rc2, v3.11-rc1, v3.10
# f35546e0 | 28-Jun-2013 | Jens Axboe <axboe@kernel.dk>
Merge branch 'stable/for-jens-3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen into for-3.11/drivers
Konrad writes:
It has the 'feature-max-indirect-segments' implemented in both backend and frontend. The current problem with the backend and frontend is that the segment size is limited to 11 pages. It means we can at most squeeze in 44kB per request. The ring can hold 32 (next power of two below 36) requests, meaning we can do 1.4M of outstanding requests. Nowadays that is not enough.
The problem in the past was addressed in two ways - but neither one went upstream. The first solution to this proposed by Justin from Spectralogic was to negotiate the segment size. This means that the ‘struct blkif_sring_entry’ is now a variable size. It can expand from 112 bytes (cover 11 pages of data - 44kB) to 1580 bytes (256 pages of data - so 1MB). It is a simple extension by just making the array in the request expand from 11 to a variable size negotiated. But it had limits: this extension still limits the number of segments per request to 255 (as the total number must be specified in the request, which only has an 8-bit field for that purpose).
The other solution (from Intel - Ronghui) was to create one extra ring that only has the ‘struct blkif_request_segment’ in them. The ‘struct blkif_request’ would be changed to have an index in said ‘segment ring’. There is only one segment ring. This means that the size of the initial ring is still the same. The requests would point to the segment and enumerate out how many of the indexes it wants to use. The limit is of course the size of the segment. If one assumes a one-page segment this means we can in one request cover ~4MB.
Those patches were posted as RFC and the author never followed up on the ideas on changing it to be a bit more flexible.
There is yet another mechanism that could be employed (which these patches implement) - and it borrows from the VirtIO protocol. And that is the ‘indirect descriptors’. This is very similar to what Intel suggests, but with a twist. The twist is to negotiate how many of these 'segment' pages (aka indirect descriptor pages) we want to support (in reality we negotiate how many entries in the segment we want to cover, and we modulo the number if it is bigger than the segment size).
This means that with the existing 36 slots in the ring (single page) we can cover: 32 slots * 512 segments per blkif_request_indirect * 4096 bytes ~= 64M. Since we have ample space in the blkif_request_indirect to span more than one indirect page, that number (64M) can also be multiplied by eight = 512MB.
Roger Pau Monne took the idea and implemented it in these patches. They work great and the corner cases (migration between backends with and without this extension) work nicely. The backend has a limit right now on how many indirect entries it can handle: one indirect page, and at maximum 256 entries (out of 512 - so 50% of the page is used). That comes out to 32 slots * 256 entries in an indirect page * 1 indirect page per request * 4096 = 32MB.
This is a conservative number that can change in the future. Right now it strikes a good balance between giving excellent performance, memory usage in the backend, and balancing the needs of many guests.
In the patchset there is also the split of the blkback structure to be per-VBD. This means that the spinlock contention we had with many guests trying to do I/O and all the blkback threads hitting the same lock has been eliminated.
Also there are bug-fixes to deal with oddly sized sectors, insane amounts on the ring, and also a security fix (posted earlier).
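As a quick sanity check of the capacity arithmetic quoted above, the figures can be reproduced with a few multiplications. The constants below (32 ring slots, 11 segments per classic request, 512 segments per indirect request on the frontend, 256 on the backend, 4096-byte pages) are taken from the description; the program itself is only an illustrative sketch, not driver code.

    #include <stdio.h>

    /* Constants quoted from the pull-request text above. */
    #define RING_SLOTS            32   /* requests the single-page ring can hold   */
    #define SEGS_CLASSIC          11   /* segments per classic blkif request       */
    #define SEGS_INDIRECT_FRONT  512   /* segments per indirect request (frontend) */
    #define SEGS_INDIRECT_BACK   256   /* segments the backend accepts per request */
    #define PAGE_SIZE_BYTES     4096   /* bytes per segment page                   */

    int main(void)
    {
        /* Classic ring: 32 * 11 * 4096 ~= 1.4 MB of outstanding I/O */
        printf("classic ring:       %lu bytes\n",
               (unsigned long)RING_SLOTS * SEGS_CLASSIC * PAGE_SIZE_BYTES);
        /* Indirect descriptors, frontend maximum: 32 * 512 * 4096 = 64 MB */
        printf("indirect, frontend: %lu bytes\n",
               (unsigned long)RING_SLOTS * SEGS_INDIRECT_FRONT * PAGE_SIZE_BYTES);
        /* Indirect descriptors, backend default: 32 * 256 * 4096 = 32 MB */
        printf("indirect, backend:  %lu bytes\n",
               (unsigned long)RING_SLOTS * SEGS_INDIRECT_BACK * PAGE_SIZE_BYTES);
        return 0;
    }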
Revision tags: v3.10-rc7, v3.10-rc6, v3.10-rc5, v3.10-rc4, v3.10-rc3, v3.10-rc2
# 2d5dc3ba | 15-May-2013 | Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
xen-blkfront: Introduce a 'max' module parameter to alter the amount of indirect segments.
The max module parameter (by default 32) is the maximum number of segments that the frontend will negotiate with the backend for indirect descriptors. Higher value means more potential throughput but more memory usage. The backend picks the minimum of the frontend and its default backend value.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
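For illustration, a module parameter like the 'max' knob described above is typically wired up along these lines; the parameter name and the default of 32 come from the commit message, but the variable and helper names here are hypothetical, not necessarily those used in xen-blkfront.

    #include <linux/kernel.h>
    #include <linux/module.h>

    /* Hypothetical sketch; names are illustrative, not the driver's own. */
    static unsigned int blkfront_max_indirect_segments = 32;  /* default per the commit text */
    module_param_named(max, blkfront_max_indirect_segments, uint, S_IRUGO);
    MODULE_PARM_DESC(max, "Maximum number of indirect segments to negotiate (default 32)");

    /*
     * The backend picks the minimum of what the frontend requests and its own
     * default, so the negotiated value ends up as:
     */
    static unsigned int negotiated_segments(unsigned int backend_max)
    {
        return min(blkfront_max_indirect_segments, backend_max);
    }

    MODULE_LICENSE("GPL");

Assuming the module is named xen-blkfront, such a parameter would be passed as max=<n> on the modprobe line, or as xen_blkfront.max=<n> on the kernel command line when the driver is built in.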