Lines Matching +full:three +full:- +full:conversion +full:- +full:cycles
51 on one machine, a VAX-11/780 with eight megabytes of memory.\**
58 person-to-person telephone messages to per-organization distribution
59 lists. After conversion to 4.2BSD, it was
82 For example, a remote login uses three processes and a
83 pseudo-terminal handler in addition to the local hardware terminal
97 This test was run on several occasions over a three-month period.
113 Micro-operation benchmarks
117 programs was constructed and run on a VAX-11/750 with 4.5 megabytes
159 pipeself4 send 10,000 4-byte messages to yourself
160 pipeself512 send 10,000 512-byte messages to yourself
161 pipediscard4 send 10,000 4-byte messages to child who discards
162 pipediscard512 send 10,000 512-byte messages to child who discards
163 pipeback4 exchange 10,000 4-byte messages with child
164 pipeback512 exchange 10,000 512-byte messages with child
165 forks0 fork-exit-wait 1,000 times
166 forks1k sbrk(1024), fault page, fork-exit-wait 1,000 times
167 forks100k sbrk(102400), fault pages, fork-exit-wait 1,000 times
168 vforks0 vfork-exit-wait 1,000 times
169 vforks1k sbrk(1024), fault page, vfork-exit-wait 1,000 times
170 vforks100k sbrk(102400), fault pages, vfork-exit-wait 1,000 times
171 execs0null fork-exec ``null job''-exit-wait 1,000 times
173 execs1knull sbrk(1024), fault page, fork-exec ``null job''-exit-wait 1,000 times
175 execs100knull sbrk(102400), fault pages, fork-exec ``null job''-exit-wait 1,000 times
176 vexecs0null vfork-exec ``null job''-exit-wait 1,000 times
177 vexecs1knull sbrk(1024), fault page, vfork-exec ``null job''-exit-wait 1,000 times
178 vexecs100knull sbrk(102400), fault pages, vfork-exec ``null job''-exit-wait 1,000 times
179 execs0big fork-exec ``big job''-exit-wait 1,000 times
180 execs1kbig sbrk(1024), fault page, fork-exec ``big job''-exit-wait 1,000 times
181 execs100kbig sbrk(102400), fault pages, fork-exec ``big job''-exit-wait 1,000 times
182 vexecs0big vfork-exec ``big job''-exit-wait 1,000 times
183 vexecs1kbig sbrk(1024), fault page, vfork-exec ``big job''-exit-wait 1,000 times
184 vexecs100kbig sbrk(102400), fault pages, vfork-exec ``big job''-exit-wait 1,000 times
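
The original test programs are not part of this listing; the following is a
minimal sketch, in modern C rather than the authors' code, of what two of the
benchmarks above presumably measure, based only on their one-line
descriptions: pipeself4 (send 10,000 4-byte messages to yourself) and forks0
(fork-exit-wait 1,000 times).

    #include <sys/wait.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define NMSG   10000    /* pipeself4: 10,000 messages */
    #define MSGSZ  4        /* ... of 4 bytes each */
    #define NFORK  1000     /* forks0: 1,000 fork-exit-wait cycles */

    int
    main(void)
    {
        char buf[MSGSZ];
        int fd[2], i;

        /* pipeself4: write and read back small messages on a pipe
         * held by a single process. */
        if (pipe(fd) < 0) {
            perror("pipe");
            exit(1);
        }
        memset(buf, 'x', MSGSZ);
        for (i = 0; i < NMSG; i++)
            if (write(fd[1], buf, MSGSZ) != MSGSZ ||
                read(fd[0], buf, MSGSZ) != MSGSZ) {
                perror("pipe i/o");
                exit(1);
            }

        /* forks0: fork a child that exits immediately, then wait for it. */
        for (i = 0; i < NFORK; i++) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                exit(1);
            }
            if (pid == 0)
                _exit(0);
            while (wait(NULL) != pid)
                continue;
        }
        return 0;
    }

Each loop corresponds to one table entry; timing code and the larger-memory
variants (the sbrk plus page-faulting setup) are omitted for brevity.
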
192 are scaled to reflect their being run on a VAX-11/750, they
195 \** We assume that a VAX-11/750 runs at 60% of the speed of a VAX-11/780
328 that is 19% of the total cycles in the kernel,
329 or 11% of all cycles executed on the machine.
370 is 3% of the machine cycles,
387 This allows high input rates without the cost of per-character interrupts
412 in \fIwait\fP when searching for \fB\s-2ZOMBIE\s+2\fP and
413 \fB\s-2STOPPED\s+2\fP processes;
444 system (a VAX-11/780) can be spent in \fIschedcpu\fP and, on average,
445 5-10% of the kernel time is spent in \fIschedcpu\fP.
455 cache and the read-ahead policies.
458 that large amounts of read-ahead might be performed without
463 The tracing package was run for three hours during
464 a peak mid-afternoon period on a VAX-11/780 with four megabytes
520 In addition, 5 read-ahead requests were made each second
522 Despite the policies to rapidly reuse read-ahead buffers
523 that remain unclaimed, more than 90% of the read-ahead
528 of the buffer cache may be reduced significantly on memory-poor
558 The performance of TCP over slower long-haul networks
560 The first problem was a bug that prevented round-trip timing measurements
563 that was well-tuned for Ethernet, but was poorly chosen for
597 seqpage-v as above, but first make \fIvadvise\fP\|(2) call
599 randpage-v as above, but first make \fIvadvise\fP call
653 seqpage-v 579 812 3.8 5.3 216.0 237.7 8394 8351
655 randpage-v 572 562 6.1 7.3 62.2 77.5 8126 9852
682 randpage-v 8126 9852 765 786
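
The seqpage-v and randpage-v variants differ from seqpage and randpage only in
making a vadvise(2) call before paging through memory. The listing does not
show which advice value each test passes; the sketch below is an assumption,
using the VA_SEQL (sequential) and VA_ANOM (anomalous, i.e. random) hints and
the <sys/vadvise.h> header as they appeared in 4.2BSD, with a hypothetical
8-megabyte arena and the 512-byte VAX page size.

    #include <sys/vadvise.h>
    #include <stdlib.h>

    #define ARENA  (8 * 1024 * 1024)   /* hypothetical region to page through */
    #define PGSZ   512                 /* VAX page size */

    char arena[ARENA];

    /* seqpage-v: declare sequential reference behavior, then touch
     * every page in order. */
    void
    seqpage_v(void)
    {
        int i;

        vadvise(VA_SEQL);
        for (i = 0; i < ARENA; i += PGSZ)
            arena[i]++;
    }

    /* randpage-v: declare anomalous (random) reference behavior, then
     * touch pages in a random order. */
    void
    randpage_v(void)
    {
        int i;

        vadvise(VA_ANOM);
        for (i = 0; i < ARENA / PGSZ; i++)
            arena[(rand() % (ARENA / PGSZ)) * PGSZ]++;
    }
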