=====================
Booting AArch64 Linux
=====================

Author: Will Deacon <will.deacon@arm.com>

Date  : 07 September 2012

This document is based on the ARM booting document by Russell King and
is relevant to all public releases of the AArch64 Linux kernel.

The AArch64 exception model is made up of a number of exception levels
(EL0 - EL3), with EL0, EL1 and EL2 having a secure and a non-secure
counterpart.  EL2 is the hypervisor level; EL3 is the highest priority
level and exists only in secure mode. Both are architecturally optional.

For the purposes of this document, we will use the term `boot loader`
simply to define all software that executes on the CPU(s) before control
is passed to the Linux kernel.  This may include secure monitor and
hypervisor code, or it may just be a handful of instructions for
preparing a minimal boot environment.

Essentially, the boot loader should provide (as a minimum) the
following:

1. Setup and initialise the RAM
2. Setup the device tree
3. Decompress the kernel image
4. Call the kernel image


1. Setup and initialise RAM
---------------------------

Requirement: MANDATORY

The boot loader is expected to find and initialise all RAM that the
kernel will use for volatile data storage in the system.  It performs
this in a machine dependent manner.  (It may use internal algorithms
to automatically locate and size all RAM, or it may use knowledge of
the RAM in the machine, or any other method the boot loader designer
sees fit.)


2. Setup the device tree
------------------------

Requirement: MANDATORY

The device tree blob (dtb) must be placed on an 8-byte boundary and must
not exceed 2 megabytes in size. Since the dtb will be mapped cacheable
using blocks of up to 2 megabytes in size, it must not be placed within
any 2M region which must be mapped with any specific attributes.

NOTE: versions prior to v4.2 also require that the DTB be placed within
the 512 MB region starting at text_offset bytes below the kernel Image.
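
As an illustration only, a boot loader might sanity-check its chosen dtb
location against the constraints above before handing over. This is a
hedged sketch, not kernel code; the function and parameter names are
hypothetical::

  #include <stdbool.h>
  #include <stdint.h>

  #define SZ_2M  (2 * 1024 * 1024)

  /*
   * Hypothetical helper: true if dtb_base/dtb_size satisfy the placement
   * rules above (8-byte aligned, no more than 2 MB).  Whether the
   * surrounding 2M region needs special mapping attributes is platform
   * knowledge the boot loader must check separately.
   */
  static bool dtb_placement_ok(uint64_t dtb_base, uint64_t dtb_size)
  {
          if (dtb_base & 0x7)        /* must sit on an 8-byte boundary */
                  return false;
          if (dtb_size > SZ_2M)      /* must not exceed 2 megabytes */
                  return false;
          return true;
  }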
57
583. Decompress the kernel image
59------------------------------
60
61Requirement: OPTIONAL
62
63The AArch64 kernel does not currently provide a decompressor and
64therefore requires decompression (gzip etc.) to be performed by the boot
65loader if a compressed Image target (e.g. Image.gz) is used.  For
66bootloaders that do not implement this requirement, the uncompressed
67Image target is available instead.
68
69
704. Call the kernel image
71------------------------
72
73Requirement: MANDATORY
74
75The decompressed kernel image contains a 64-byte header as follows::
76
77  u32 code0;			/* Executable code */
78  u32 code1;			/* Executable code */
79  u64 text_offset;		/* Image load offset, little endian */
80  u64 image_size;		/* Effective Image size, little endian */
81  u64 flags;			/* kernel flags, little endian */
82  u64 res2	= 0;		/* reserved */
83  u64 res3	= 0;		/* reserved */
84  u64 res4	= 0;		/* reserved */
85  u32 magic	= 0x644d5241;	/* Magic number, little endian, "ARM\x64" */
86  u32 res5;			/* reserved (used for PE COFF offset) */
87
88
89Header notes:
90
91- As of v3.17, all fields are little endian unless stated otherwise.
92
93- code0/code1 are responsible for branching to stext.
94
95- when booting through EFI, code0/code1 are initially skipped.
96  res5 is an offset to the PE header and the PE header has the EFI
97  entry point (efi_stub_entry).  When the stub has done its work, it
98  jumps to code0 to resume the normal boot process.
99
- Prior to v3.17, the endianness of text_offset was not specified.  In
  these cases image_size is zero and text_offset is 0x80000 in the
  endianness of the kernel.  Where image_size is non-zero, image_size is
  little-endian and must be respected.  Where image_size is zero,
  text_offset can be assumed to be 0x80000.

- The flags field (introduced in v3.17) is a little-endian 64-bit field
  composed as follows:

  ============= ===============================================================
  Bit 0         Kernel endianness.  1 if BE, 0 if LE.
  Bits 1-2      Kernel Page size.

                        * 0 - Unspecified.
                        * 1 - 4K
                        * 2 - 16K
                        * 3 - 64K
  Bit 3         Kernel physical placement

                        0
                          2MB aligned base should be as close as possible
                          to the base of DRAM, since memory below it is not
                          accessible via the linear mapping
                        1
                          2MB aligned base such that all image_size bytes
                          counted from the start of the image are within
                          the 48-bit addressable range of physical memory
  Bits 4-63     Reserved.
  ============= ===============================================================

- When image_size is zero, a bootloader should attempt to keep as much
  memory as possible free for use by the kernel immediately after the
  end of the kernel image. The amount of space required will vary
  depending on selected features, and is effectively unbound.

The Image must be placed text_offset bytes from a 2MB aligned base
address anywhere in usable system RAM and called there. The region
between the 2 MB aligned base address and the start of the image has no
special significance to the kernel, and may be used for other purposes.
At least image_size bytes from the start of the image must be free for
use by the kernel.

NOTE: versions prior to v4.6 cannot make use of memory below the
physical offset of the Image so it is recommended that the Image be
placed as close as possible to the start of system RAM.

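To make the layout rules above concrete, the following sketch shows how a
boot loader might validate the header and derive a load address. It is an
illustrative sketch only: the helper names and the dram_base parameter are
hypothetical, and it assumes the raw Image bytes are already accessible in
memory::

  #include <stdint.h>

  #define SZ_2M  (2 * 1024 * 1024)

  /* Read little-endian fields regardless of the boot loader's endianness. */
  static uint64_t get_le64(const uint8_t *p)
  {
          uint64_t v = 0;

          for (int i = 7; i >= 0; i--)
                  v = (v << 8) | p[i];
          return v;
  }

  static uint32_t get_le32(const uint8_t *p)
  {
          return p[0] | ((uint32_t)p[1] << 8) |
                 ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
  }

  /*
   * Hypothetical helper: validate the header and compute a load address for
   * the Image.  'dram_base' is the base of usable system RAM.  Field offsets
   * follow the 64-byte header above: text_offset at 8, image_size at 16,
   * flags at 24, magic at 56.  Returns 0 on a bad magic value.
   */
  static uint64_t arm64_image_load_addr(const uint8_t *image, uint64_t dram_base)
  {
          uint64_t text_offset, image_size;

          if (get_le32(image + 56) != 0x644d5241)   /* "ARM\x64" */
                  return 0;

          text_offset = get_le64(image + 8);
          image_size  = get_le64(image + 16);

          /* Pre-v3.17 images: image_size is zero, text_offset is 0x80000. */
          if (image_size == 0)
                  text_offset = 0x80000;

          /*
           * Place the Image text_offset bytes above a 2 MB aligned base.
           * Using the lowest such base in RAM follows the recommendation
           * for flags bit 3 == 0 and for pre-v4.6 kernels.
           */
          return ((dram_base + SZ_2M - 1) & ~(uint64_t)(SZ_2M - 1)) + text_offset;
  }
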
If an initrd/initramfs is passed to the kernel at boot, it must reside
entirely within a 1 GB aligned physical memory window of up to 32 GB in
size that fully covers the kernel Image as well.

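That window constraint can be checked arithmetically: the only candidate
worth testing is the 1 GB aligned window starting at or below whichever of
the Image and the initrd begins lower. A hedged sketch, with hypothetical
names::

  #include <stdbool.h>
  #include <stdint.h>

  #define SZ_1G   (1ULL << 30)
  #define SZ_32G  (32 * SZ_1G)

  /*
   * Hypothetical check: can a 1 GB aligned window of at most 32 GB cover
   * both the kernel Image and the initrd?
   */
  static bool initrd_window_ok(uint64_t image_base, uint64_t image_size,
                               uint64_t initrd_base, uint64_t initrd_size)
  {
          uint64_t lo = image_base < initrd_base ? image_base : initrd_base;
          uint64_t image_end  = image_base + image_size;
          uint64_t initrd_end = initrd_base + initrd_size;
          uint64_t hi = image_end > initrd_end ? image_end : initrd_end;

          /* Best candidate window: 1 GB aligned, starting at or below 'lo'. */
          return hi <= (lo & ~(SZ_1G - 1)) + SZ_32G;
  }
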
Any memory described to the kernel (even that below the start of the
image) which is not marked as reserved from the kernel (e.g., with a
memreserve region in the device tree) will be considered as available to
the kernel.

Before jumping into the kernel, the following conditions must be met:

- Quiesce all DMA capable devices so that memory does not get
  corrupted by bogus network packets or disk data.  This will save
  you many hours of debug.

- Primary CPU general-purpose register settings:

    - x0 = physical address of device tree blob (dtb) in system RAM.
    - x1 = 0 (reserved for future use)
    - x2 = 0 (reserved for future use)
    - x3 = 0 (reserved for future use)

- CPU mode

  All forms of interrupts must be masked in PSTATE.DAIF (Debug, SError,
  IRQ and FIQ).
  The CPU must be in non-secure state, either in EL2 (RECOMMENDED in order
  to have access to the virtualisation extensions), or in EL1.

- Caches, MMUs

  The MMU must be off.

  The instruction cache may be on or off, and must not hold any stale
  entries corresponding to the loaded kernel image.

  The address range corresponding to the loaded kernel image must be
  cleaned to the PoC. In the presence of a system cache or other
  coherent masters with caches enabled, this will typically require
  cache maintenance by VA rather than set/way operations.
  System caches which respect the architected cache maintenance by VA
  operations must be configured and may be enabled.
  System caches which do not respect architected cache maintenance by VA
  operations (not recommended) must be configured and disabled.

- Architected timers

  CNTFRQ must be programmed with the timer frequency and CNTVOFF must
  be programmed with a consistent value on all CPUs.  If entering the
  kernel at EL1, CNTHCTL_EL2 must have EL1PCTEN (bit 0) set where
  available.

- Coherency

  All CPUs to be booted by the kernel must be part of the same coherency
  domain on entry to the kernel.  This may require IMPLEMENTATION DEFINED
  initialisation to enable the receiving of maintenance operations on
  each CPU.

- System registers

  All writable architected system registers at or below the exception
  level where the kernel image will be entered must be initialised by
  software at a higher exception level to prevent execution in an UNKNOWN
  state.

  For all systems:

  - If EL3 is present:

    - SCR_EL3.FIQ must have the same value across all CPUs the kernel is
      executing on.
    - The value of SCR_EL3.FIQ must be the same as the one present at boot
      time whenever the kernel is executing.

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HCE (bit 8) must be initialised to 0b1.

  For systems with a GICv3 interrupt controller to be used in v3 mode:

  - If EL3 is present:

      - ICC_SRE_EL3.Enable (bit 3) must be initialised to 0b1.
      - ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b1.
      - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
        all CPUs the kernel is executing on, and must stay constant
        for the lifetime of the kernel.

  - If the kernel is entered at EL1:

      - ICC_SRE_EL2.Enable (bit 3) must be initialised to 0b1.
      - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1.

  - The DT or ACPI tables must describe a GICv3 interrupt controller.

  For systems with a GICv3 interrupt controller to be used in
  compatibility (v2) mode:

  - If EL3 is present:

      ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b0.

  - If the kernel is entered at EL1:

      ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.

  - The DT or ACPI tables must describe a GICv2 interrupt controller.

  For CPUs with pointer authentication functionality:

  - If EL3 is present:

    - SCR_EL3.APK (bit 16) must be initialised to 0b1
    - SCR_EL3.API (bit 17) must be initialised to 0b1

  - If the kernel is entered at EL1:

    - HCR_EL2.APK (bit 40) must be initialised to 0b1
    - HCR_EL2.API (bit 41) must be initialised to 0b1

  For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:

  - If EL3 is present:

    - CPTR_EL3.TAM (bit 30) must be initialised to 0b0
    - CPTR_EL2.TAM (bit 30) must be initialised to 0b0
    - AMCNTENSET0_EL0 must be initialised to 0b1111
    - AMCNTENSET1_EL0 must be initialised to a platform specific value
      having 0b1 set for the corresponding bit for each of the auxiliary
      counters present.

  - If the kernel is entered at EL1:

    - AMCNTENSET0_EL0 must be initialised to 0b1111
    - AMCNTENSET1_EL0 must be initialised to a platform specific value
      having 0b1 set for the corresponding bit for each of the auxiliary
      counters present.

  For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.

  For CPUs with support for HCRX_EL2 (FEAT_HCX) present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HXEn (bit 38) must be initialised to 0b1.

  For CPUs with Advanced SIMD and floating point support:

  - If EL3 is present:

    - CPTR_EL3.TFP (bit 10) must be initialised to 0b0.

  - If EL2 is present and the kernel is entered at EL1:

    - CPTR_EL2.TFP (bit 10) must be initialised to 0b0.

  For CPUs with the Scalable Vector Extension (FEAT_SVE) present:

  - If EL3 is present:

    - CPTR_EL3.EZ (bit 8) must be initialised to 0b1.

    - ZCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel is executed on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TZ (bit 8) must be initialised to 0b0.

    - CPTR_EL2.ZEN (bits 17:16) must be initialised to 0b11.

    - ZCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  For CPUs with the Scalable Matrix Extension (FEAT_SME):

  - If EL3 is present:

    - CPTR_EL3.ESM (bit 12) must be initialised to 0b1.

    - SCR_EL3.EnTP2 (bit 41) must be initialised to 0b1.

    - SMCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TSM (bit 12) must be initialised to 0b0.

    - CPTR_EL2.SMEN (bits 25:24) must be initialised to 0b11.

    - SCTLR_EL2.EnTP2 (bit 60) must be initialised to 0b1.

    - SMCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

    - HFGRTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.

    - HFGWTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.

    - HFGRTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.

    - HFGWTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.

  For CPUs with the Scalable Matrix Extension FA64 feature (FEAT_SME_FA64):

  - If EL3 is present:

    - SMCR_EL3.FA64 (bit 31) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - SMCR_EL2.FA64 (bit 31) must be initialised to 0b1.

  For CPUs with the Memory Tagging Extension feature (FEAT_MTE2):

  - If EL3 is present:

    - SCR_EL3.ATA (bit 26) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HCR_EL2.ATA (bit 56) must be initialised to 0b1.

  For CPUs with the Scalable Matrix Extension version 2 (FEAT_SME2):

  - If EL3 is present:

    - SMCR_EL3.EZT0 (bit 30) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.

  For CPUs with Memory Copy and Memory Set instructions (FEAT_MOPS):

  - If the kernel is entered at EL1 and EL2 is present:

    - HCRX_EL2.MSCEn (bit 11) must be initialised to 0b1.

  For CPUs with the Extended Translation Control Register feature (FEAT_TCR2):

  - If EL3 is present:

    - SCR_EL3.TCR2En (bit 43) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HCRX_EL2.TCR2En (bit 14) must be initialised to 0b1.

  For CPUs with the Stage 1 Permission Indirection Extension feature (FEAT_S1PIE):

  - If EL3 is present:

    - SCR_EL3.PIEn (bit 45) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HFGRTR_EL2.nPIR_EL1 (bit 58) must be initialised to 0b1.

    - HFGWTR_EL2.nPIR_EL1 (bit 58) must be initialised to 0b1.

    - HFGRTR_EL2.nPIRE0_EL1 (bit 57) must be initialised to 0b1.

    - HFGWTR_EL2.nPIRE0_EL1 (bit 57) must be initialised to 0b1.

The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs.  All CPUs must
enter the kernel in the same exception level.  Where the values documented
disable traps, it is permissible for these traps to be enabled so long as
those traps are handled transparently by higher exception levels as though
the values documented were set.

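For the primary CPU, once the Image is at its load address and the
conditions above hold, handing over control can be as simple as a call
through a function pointer. The sketch below is illustrative only: it
relies on the AAPCS64 calling convention placing the first four arguments
in x0-x3, assumes the boot loader itself already runs with the MMU off and
the required cache maintenance done, and uses hypothetical names::

  #include <stdint.h>

  /* The kernel entry point takes the dtb address in x0; x1-x3 must be 0. */
  typedef void (*arm64_kernel_entry_t)(uint64_t dtb_pa, uint64_t x1,
                                       uint64_t x2, uint64_t x3);

  /*
   * Hypothetical hand-over helper.  'entry_pa' is the address the Image was
   * loaded at (the 2 MB aligned base plus text_offset) and 'dtb_pa' is the
   * physical address of the device tree blob that ends up in x0.
   */
  static void __attribute__((noreturn))
  boot_arm64_kernel(uint64_t entry_pa, uint64_t dtb_pa)
  {
          arm64_kernel_entry_t entry = (arm64_kernel_entry_t)entry_pa;

          /* AAPCS64: the first four integer arguments are passed in x0-x3. */
          entry(dtb_pa, 0, 0, 0);

          __builtin_unreachable();        /* the kernel does not return */
  }
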
The boot loader is expected to enter the kernel on each CPU in the
following manner:

- The primary CPU must jump directly to the first instruction of the
  kernel image.  The device tree blob passed by this CPU must contain
  an 'enable-method' property for each cpu node.  The supported
  enable-methods are described below.

  It is expected that the bootloader will generate these device tree
  properties and insert them into the blob prior to kernel entry.

- CPUs with a "spin-table" enable-method must have a 'cpu-release-addr'
  property in their cpu node.  This property identifies a
  naturally-aligned 64-bit zero-initialised memory location.

  These CPUs should spin outside of the kernel in a reserved area of
  memory (communicated to the kernel by a /memreserve/ region in the
  device tree) polling their cpu-release-addr location, which must be
  contained in the reserved region.  A wfe instruction may be inserted
  to reduce the overhead of the busy-loop and a sev will be issued by
  the primary CPU.  When a read of the location pointed to by the
  cpu-release-addr returns a non-zero value, the CPU must jump to this
  value.  The value will be written as a single 64-bit little-endian
  value, so CPUs must convert the read value to their native endianness
  before jumping to it (a boot-loader-side release sketch appears at the
  end of this document).

- CPUs with a "psci" enable method should remain outside of
  the kernel (i.e. outside of the regions of memory described to the
  kernel in the memory node, or in a reserved area of memory described
  to the kernel by a /memreserve/ region in the device tree).  The
  kernel will issue CPU_ON calls as described in ARM document number ARM
  DEN 0022A ("Power State Coordination Interface System Software on ARM
  processors") to bring CPUs into the kernel.

  The device tree should contain a 'psci' node, as described in
  Documentation/devicetree/bindings/arm/psci.yaml.

- Secondary CPU general-purpose register settings

  - x0 = 0 (reserved for future use)
  - x1 = 0 (reserved for future use)
  - x2 = 0 (reserved for future use)
  - x3 = 0 (reserved for future use)

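As an illustration of the spin-table hand-off from the boot loader side,
the sketch below releases one secondary CPU. It is a hedged sketch only:
it assumes a little-endian AArch64 boot loader built with GCC/Clang inline
assembly, and the function and parameter names are hypothetical::

  #include <stdint.h>

  /*
   * Hypothetical release of a secondary CPU held in a spin-table loop.
   * 'release_addr' is the zero-initialised cpu-release-addr advertised in
   * that CPU's device tree node (inside a /memreserve/ region), and
   * 'entry_pa' is the address the secondary must jump to.
   */
  static void spin_table_release(volatile uint64_t *release_addr,
                                 uint64_t entry_pa)
  {
          /*
           * The value must appear in memory as a single 64-bit
           * little-endian quantity; a little-endian boot loader can
           * store it directly.  Depending on how the secondaries poll
           * (caches on or off), a cache clean of this location to the
           * point of coherency may also be needed.
           */
          *release_addr = entry_pa;

          /* Make the store visible, then wake CPUs waiting in wfe. */
          __asm__ volatile("dsb sy" ::: "memory");
          __asm__ volatile("sev");
  }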