.. SPDX-License-Identifier: GPL-2.0

============================
Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes
 * High availability and reliability.  No single point of failure.
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

Also,

 * Flexible snapshots (on any directory)
 * Recursive accounting (nested files, directories, bytes)

In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre.  Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons.  File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughput.  When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads.  In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation.  The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes or
go through the tedious process of migrating data between servers.
When the file system approaches capacity, new nodes can be easily added
and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to create
a snapshot on any subdirectory (and its nested contents) in the
system.  Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.

Snapshot names have two limitations:

* They cannot start with an underscore ('_'), as these names are reserved
  for internal usage by the MDS.
* They cannot exceed 240 characters in size.  This is because the MDS makes
  use of long snapshot names internally, which follow the format:
  `_<SNAPSHOT-NAME>_<INODE-NUMBER>`.  Since filenames in general cannot have
  more than 255 characters, and `<INODE-NUMBER>` takes 13 characters, the long
  snapshot names can take as much as 255 - 1 - 1 - 13 = 240.

Ceph also provides some recursive accounting on directories for nested
files and bytes.  That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes.  This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.

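For example, with a CephFS mount assumed to be at /mnt/ceph, the recursive
statistics can also be queried as virtual extended attributes in the
'ceph.dir.*' namespace (attribute names as defined by Ceph; the path below
is just a placeholder)::

 # number of regular files nested beneath the directory
 getfattr -n ceph.dir.rfiles /mnt/ceph/some/dir
 # number of nested subdirectories
 getfattr -n ceph.dir.rsubdirs /mnt/ceph/some/dir
 # total size of all nested files, in bytes
 getfattr -n ceph.dir.rbytes /mnt/ceph/some/dir
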
Finally, Ceph also allows quotas to be set on any directory in the system.
The quota can restrict the number of bytes or the number of files stored
beneath that point in the directory hierarchy.  Quotas can be set using
the extended attributes 'ceph.quota.max_files' and 'ceph.quota.max_bytes',
e.g.::

 setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
 getfattr -n ceph.quota.max_bytes /some/dir

A limitation of the current quotas implementation is that it relies on the
cooperation of the client mounting the file system to stop writers when a
limit is reached.  A modified or adversarial client cannot be prevented
from writing as much data as it needs.

Mount Syntax
============

The basic mount syntax is::

 # mount -t ceph user@fsid.fs_name=/[subdir] mnt -o mon_addr=monip1[:port][/monip2[:port]]

You only need to specify a single monitor, as the client will get the
full list when it connects.  (However, if the monitor you specify
happens to be down, the mount won't succeed.)  The port can be left
off if the monitor is using the default.  So if the monitor is at
1.2.3.4::

 # mount -t ceph cephuser@07fe3187-00d9-42a3-814b-72a4d5e7d5be.cephfs=/ /mnt/ceph -o mon_addr=1.2.3.4

is sufficient.  If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address and the cluster FSID can be left out
(as the mount helper will fill it in by reading the ceph configuration
file)::

 # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=mon-addr

Multiple monitor addresses can be passed by separating each address with a slash (`/`)::

 # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=192.168.1.100/192.168.1.101

When using the mount helper, the monitor address can also be read from the
ceph configuration file if available.  Note that the cluster FSID (passed as
part of the device string) is validated by checking it against the FSID
reported by the monitor.

Mount Options
=============

  mon_addr=ip_address[:port][/ip_address[:port]]
    Monitor address to the cluster.  This is used to bootstrap the
    connection to the cluster.  Once the connection is established, the
    monitor addresses in the monitor map are followed.

  fsid=cluster-id
    FSID of the cluster (from the `ceph fsid` command).

  ip=A.B.C.D[:N]
    Specify the IP and/or port the client should bind to locally.
    There is normally not much reason to do this.  If the IP is not
    specified, the client's IP address is determined by looking at the
    address its connection to the monitor originates from.

  wsize=X
    Specify the maximum write size in bytes.  Default: 64 MB.

  rsize=X
    Specify the maximum read size in bytes.  Default: 64 MB.

  rasize=X
    Specify the maximum readahead size in bytes.  Default: 8 MB.

  mount_timeout=X
    Specify the timeout value for mount (in seconds), in the case
    of a non-responsive Ceph file system.  The default is 60
    seconds.

  caps_max=X
    Specify the maximum number of caps to hold.  Unused caps are released
    when the number of caps exceeds the limit.  The default is 0 (no limit).

  rbytes
    When stat() is called on a directory, set st_size to 'rbytes',
    the summation of file sizes over all files nested beneath that
    directory.  This is the default.

  norbytes
    When stat() is called on a directory, set st_size to the
    number of entries in that directory.

  nocrc
    Disable CRC32C calculation for data writes.  If set, the storage node
    must rely on TCP's error correction to detect data corruption
    in the data payload.

  dcache
    Use the dcache contents to perform negative lookups and
    readdir when the client has the entire directory contents in
    its cache.  (This does not change correctness; the client uses
    cached metadata only when a lease or capability ensures it is
    valid.)

  nodcache
    Do not use the dcache as above.  This avoids a significant amount of
    complex code, sacrificing performance without affecting correctness,
    and is useful for tracking down bugs.

  noasyncreaddir
    Do not use the dcache as above for readdir.

  noquotadf
    Report overall filesystem usage in statfs instead of using the root
    directory quota.

  nocopyfrom
    Don't use the RADOS 'copy-from' operation to perform remote object
    copies.  Currently, it's only used in copy_file_range, which will
    revert to the default VFS implementation if this option is used.

  recover_session=<no|clean>
    Set the auto reconnect mode for the case where the client is
    blocklisted.  The available modes are "no" and "clean".  The default
    is "no".

    * no: never attempt to reconnect when the client detects that it has
      been blocklisted.  Operations will generally fail after being
      blocklisted.

    * clean: the client reconnects to the Ceph cluster automatically when
      it detects that it has been blocklisted.  During reconnect, the
      client drops dirty data/metadata, and invalidates page caches and
      writable file handles.  After reconnect, file locks become stale
      because the MDS loses track of them.  If an inode contains any stale
      file locks, read/write on the inode is not allowed until applications
      release all stale file locks.

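As an illustrative example of combining the options above, assuming
/sbin/mount.ceph is installed (so the cluster FSID can be omitted) and
using placeholder names and addresses, multiple options are passed
comma-separated in the usual mount(8) style::

 # mount -t ceph cephuser@cephfs=/ /mnt/ceph \
     -o mon_addr=192.168.1.100,rasize=16777216,recover_session=clean

Note that multiple monitor addresses within mon_addr are separated by '/'
rather than ',', since ',' already delimits mount options.
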
More Information
================

For more information on Ceph, see the home page at
  https://ceph.com/

The Linux kernel client source tree is available at
  - https://github.com/ceph/ceph-client.git

and the source for the full system is at
  https://github.com/ceph/ceph.git