Searched full:would (Results 1 – 25 of 2636) sorted by relevance

/linux/Documentation/filesystems/
directory-locking.rst
193 we would have Dn a parent of D1, which is a parent of D2, which is
196 so they would all hold simultaneously at the deadlock time and
197 we would have a loop.
215 Dn and D1 would have to be among those, with Dn locked before D1.
219 it would be the first parent to be locked. Therefore at least one of the
221 of another - otherwise the operation would not have progressed past
224 It can't be a parent and its child; otherwise we would've had
226 would have to be a descendent of its child.
229 Otherwise the child of the parent in question would've been a descendent
240 rename is crucial - without it a deadlock would be possible. Indeed,
[all …]
idmappings.rst
66 idmapping would map ``u1000`` down to ``k21000``. The third idmapping would map
75 we would fail to translate as the sets aren't order isomorphic over the full
143 how userspace would specify them.
182 simply the identity idmapping. This would mean id ``u1000`` read from disk
183 would be mapped to id ``k1000``. So an inode's ``i_uid`` and ``i_gid`` field
184 would contain ``k1000``.
187 then ``u1000`` read from disk would be mapped to ``k11000``. So an inode's
188 ``i_uid`` and ``i_gid`` would contain ``k11000``.
241 according to the filesystem's idmapping as this would give the wrong owner if
246 ``u3000:k20000:r10000`` then ``k21000`` would map back up to ``u4000``.
[all …]
fsverity.rst
35 subject to the caveat that reads which would violate the hash will
54 accessed on a particular device. It would be slow and wasteful to
304 to be authenticated against the file digest that would be returned by
363 would circumvent the data verification.
532 fs-verity from being used in cases where it would be helpful.
662 they are marked Uptodate. Merely hooking ``->read_iter()`` would be
727 direct I/O would bypass fs-verity.
796 then you could simply do sha256(file) instead. That would be much
801 first read. However, it would be inefficient because every time a
828 :A: Write support would be very difficult and would require a
[all …]
/linux/Documentation/w1/masters/
ds2490.rst
32 was added to the API. The name is just a suggestion. It would take
52 clear the entire bulk in buffer. It would be possible to read the
60 with a OHCI controller, ds2490 running in the guest would operate
64 would fail. qemu sets a 50ms timeout and the bulk in would timeout
65 even when the status shows data available. A bulk out write would
66 show a successful completion, but the ds2490 status register would
68 reattaching would clear the problem. usbmon output in the guest and
/linux/tools/memory-model/Documentation/
README
8 depending on what you know and what you would like to learn. Please note
16 o You have some background in Linux-kernel concurrency, and would
32 o You are familiar with Linux-kernel concurrency, and would
36 o You would like a detailed understanding of what your compiler can
39 o You would like to mark concurrent normal accesses to shared
45 LKMM, and would like a quick reference: cheatsheet.txt
48 of LKMM, and would like to learn about LKMM's requirements,
/linux/Documentation/networking/
snmp_counter.rst
44 multicast packets, and would always be updated together with
137 would be increased even if the ICMP packet has an invalid type. The
139 IcmpOutMsgs would still be updated if the IP header is constructed by
207 IcmpMsgOutType8 would increase 1. And if kernel gets an ICMP Echo Reply
208 packet, IcmpMsgInType0 would increase 1.
215 IcmpInMsgs would be updated but none of IcmpMsgInType[N] would be updated.
225 counters would be updated. The receiving packet path use IcmpInErrors
227 is increased, IcmpInErrors would always be increased too.
263 packets would be delivered to the TCP layer, but the TCP layer will discard
266 counter would only increase 1.
[all …]
x25.rst
18 implementation of LAPB. Therefore the LAPB modules would be called by
19 unintelligent X.25 card drivers and not by intelligent ones, this would
24 conform with the JNT "Pink Book", this would have a different interface to
25 the Packet Layer but there would be no confusion since the class of device
26 being served by the LLC would be completely separate from LAPB.
representors.rst
83 netdevice, while in hardware offload it would apply to packets transmitted by
133 access that the IP stack "sees" would then be configurable through tc rules;
136 networking entity, would not be appropriate for the representor and would
140 terminates IP traffic in software; in that case the DMA traffic would *not*
191 Similarly, since a TC mirred egress action targeting the representor would (in
204 would mean that all IPv4 packets from the VF are sent out the physical port, and
207 the VF would get two copies, as the packet reception on ``PORT_DEV`` would
210 On devices without separate port and uplink representors, ``PORT_DEV`` would
/linux/Documentation/bpf/
ringbuf.rst
27 would solve the second problem automatically.
36 One way would be to, similar to ``BPF_MAP_TYPE_PERF_EVENT_ARRAY``, make
38 enforce "same CPU only" rule. This would be more familiar interface compatible
39 with existing perf buffer use in BPF, but would fail if application needed more
42 Additionally, given the performance of BPF ringbuf, many use cases would just
44 approach would be an overkill.
48 with lookup/update/delete operations. This approach would add a lot of extra
50 would also add another concept that BPF developers would have to familiarize
51 themselves with, new syntax in libbpf, etc. But then would really provide no
60 ring buffer for all CPUs, it's as simple and straightforward, as would be with
[all …]
/linux/Documentation/RCU/
UP.rst
26 from softirq, the list scan would find itself referencing a newly freed
47 its arguments would cause it to fail to make the fundamental guarantee
61 call_rcu() were to directly invoke the callback, the result would
65 In some cases, it would possible to restructure to code so that
70 the same critical section, then the code would need to create
82 or API changes would be required.
136 the process-context critical section. This would result in
150 simply immediately returned, it would prematurely signal the
151 end of the grace period, which would come as a nasty shock to
/linux/Documentation/firmware-guide/acpi/
osi.rst
70 The ACPI BIOS flow would include an evaluation of _OS, and the AML
71 interpreter in the kernel would return to it a string identifying the OS:
83 of every possible version of the OS that would run on it, and needed to know
84 all the quirks of those OS's. Certainly it would make more sense
91 that anybody would install those old operating systems
104 eg. _OSI("3.0 Thermal Model") would return TRUE if the OS knows how
106 An old OS that doesn't know about those extensions would answer FALSE,
121 and its successors. To do otherwise would virtually guarantee breaking
156 which would increment, based on the version of the spec supported.
158 Unfortunately, _REV was also misused. eg. some BIOS would check
/linux/tools/perf/pmu-events/arch/x86/amdzen3/
other.json
22 …Stall. Also counts cycles when the thread is not selected to dispatch but would have been stalled …
28 …Stall. Also counts cycles when the thread is not selected to dispatch but would have been stalled …
34 …Stall. Also counts cycles when the thread is not selected to dispatch but would have been stalled …
40 …Stall. Also counts cycles when the thread is not selected to dispatch but would have been stalled …
52 …Stall. Also counts cycles when the thread is not selected to dispatch but would have been stalled …
58 …Stall. Also counts cycles when the thread is not selected to dispatch but would have been stalled …
64 …Stall. Also counts cycles when the thread is not selected to dispatch but would have been stalled …
/linux/Documentation/admin-guide/device-mapper/
log-writes.rst
31 The log would show the following:
36 cases where a power failure at a particular point in time would create an
42 Any REQ_OP_DISCARD requests are treated like WRITE requests. Otherwise we would
48 If we logged DISCARD when it completed, the replay would look like this:
82 we're fsck'ing something reasonable, you would do something like
89 This would allow you to replay the log up to the mkfs mark and
104 Say you want to test fsync on your file system. You would do something like
/linux/Documentation/scsi/
lpfc.rst
36 the LLDD would simply be queued for a short duration, allowing the device
38 to the system. If the driver did not hide these conditions, i/o would be
39 errored by the driver, the mid-layer would exhaust its retries, and the
40 device would be taken offline. Manual intervention would be required to
/linux/Documentation/security/
ipe.rst
16 of a locked-down system. This system would be born-secure, and have
19 specific data files would not be readable unless they passed integrity
20 policy. A mandatory access control system would be present, and
21 as a result, xattrs would have to be protected. This lead to a selection
22 of what would provide the integrity claims. At the time, there were two
44 policy would indicate what labels required integrity verification, which
45 presented an issue: EVM would protect the label, but if an attacker could
47 including the SELinux labels that would be used to determine whether the
95 1. Regression risk; many of these changes would result in
104 whose responsibility would be only the local integrity policy enforcement.
[all …]
/linux/Documentation/core-api/
unaligned-memory-access.rst
25 reading 4 bytes of data from address 0x10005 would be an unaligned memory
56 to architecture. It would be easy to write a whole document on the differences
94 starting at address 0x10000. With a basic level of understanding, it would
95 not be unreasonable to expect that accessing field2 would cause an unaligned
101 above case it would insert 2 bytes of padding in between field1 and field2.
116 where padding would otherwise be inserted, and hence reduce the overall
126 For a natural alignment scheme, the compiler would only have to add a single
172 Think about what would happen if addr1 was an odd address such as 0x10003.
218 To avoid the unaligned memory access, you would rewrite it as follows::
/linux/Documentation/
Kconfig
20 written, it would be possible that some of those files would
21 have errors that would break them for being parsed by
/linux/drivers/leds/
TODO
50 RGB LEDs are quite common, and it would be good to be able to turn LED
67 It would be also nice to have useful listing mode -- name, type,
70 In future, it would be good to be able to set rgb led to particular
74 ethernet interface would be nice.
/linux/Documentation/userspace-api/media/
gen-errors.rst
25 is also returned when the ioctl would need to wait for an event,
36 change something that would affect the stream, or would require
67 that this request would overcommit the usb bandwidth reserved for
/linux/Documentation/gpu/
drm-compute.rst
33 to this flag, it would prevent cooperating userspace from forced terminating
41 driver-allocated CPU memory would be accounted to the correct cgroup, and
42 eviction would be made cgroup aware. This allows the GPU to be partitioned
46 The interface to the cgroup would be similar to the current CPU memory
/linux/Documentation/block/
biovecs.rst
11 More specifically, old code that needed to partially complete a bio would
13 ended up partway through a biovec, it would increment bv_offset and decrement
87 It used to be the case that submitting a partially completed bio would work
89 norm, not all drivers would respect bi_idx and those would break. Now,
98 where previously you would have used bi_idx you'd now use a bvec_iter,
/linux/virt/kvm/
binary_stats.c
38 * as in the limit) from any position, the typical usage would follow below
78 * The pos is 0 and the copylen and remain would be the size of header. in kvm_stats_read()
79 * The copy of the header would be skipped if offset is larger than the in kvm_stats_read()
116 * The descriptors copy would be skipped in the typical case that in kvm_stats_read()
117 * userspace periodically read stats data, since the pos would be in kvm_stats_read()
/linux/Documentation/scheduler/
sched-nice-design.rst
19 rule so that nice +19 level would be _exactly_ 1 jiffy. To better
39 So that if someone wanted to really renice tasks, +19 would give a much
40 bigger hit than the normal linear rule would do. (The solution of
47 millisec) rescheduling. (and would thus trash the cache, etc. Remember,
78 and another task with +2, the CPU split between the two tasks would
/linux/Documentation/locking/
futex-requeue-pi.rst
7 left without an owner if it has waiters; doing so would break the PI
20 implementation would wake the highest-priority waiter, and leave the
55 upon a successful futex_wait system call, the caller would return to
57 would be modified as follows::
93 acquire the rt_mutex as it would open a race window between the
/linux/drivers/gpu/drm/exynos/
exynos_drm_gem.h
22 * - a new handle to this gem object would be created
35 * P.S. this object would be transferred to user as kms_bo.handle so
73 * with this function call, gem object reference count would be increased.
80 * gem object reference count would be decreased.
