|Original author(s)||Inktank Storage (Sage Weil, Yehuda Sadeh Weinraub, Gregory Farnum, Josh Durgin, Samuel Just, Wido den Hollander)|
|Developer(s)||Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE|
|Stable release||13.2.0 "Mimic" / June 1, 2018|
|Preview release||13.1.0 "Mimic" / May 11, 2018|
|Written in||C++, Python|
|Operating system||Linux, FreeBSD|
|Type||Distributed object store|
In computing, Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability.
Ceph replicates data and makes it fault-tolerant, using commodity hardware and requiring no specific hardware support. As a result of its design, the system is both self-healing and self-managing, aiming to minimize administration time and other costs.
On April 21, 2016, the Ceph development team released "Jewel", the first Ceph release in which CephFS is considered stable. The CephFS repair and disaster-recovery tools are feature-complete, but some functionality, such as snapshots and multiple active metadata servers, is disabled by default.
The August 2017 release (codename "Luminous") introduced the production-ready BlueStore storage format, which avoids many shortcomings of the older filesystem-based Filestore, providing better latency and additional storage features.
Ceph's daemons (cluster monitors, object storage daemons, metadata servers, and HTTP gateways) are all fully distributed and may run on the same set of servers. Clients interact directly with all of them.
Ceph stripes individual files across multiple nodes to achieve higher throughput, similarly to how RAID 0 stripes data across multiple hard drives. Adaptive load balancing is supported, whereby frequently accessed objects are replicated over more nodes. As of September 2017, BlueStore is the default and recommended storage backend for production environments. BlueStore is Ceph's own storage implementation; it provides better latency and configurability than the Filestore backend and avoids the shortcomings of filesystem-based storage, which involves additional processing and caching layers. The Filestore backend is still considered useful and very stable; XFS is the recommended underlying filesystem type for production environments, while Btrfs is recommended only for non-production environments. ext4 filesystems are not recommended because of the resulting limitations on the maximum RADOS object length.
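The storage backend is selected per OSD. As a minimal sketch, a ceph.conf fragment along the following lines chooses BlueStore for newly created OSDs; the option is osd objectstore, and the value filestore selects the older backend:

    [osd]
    # Backend for newly created OSDs: "bluestore" is the default
    # since Luminous; "filestore" selects the filesystem-based backend.
    osd objectstore = bluestore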
Ceph implements distributed object storage. Ceph's software libraries provide client applications with direct access to the reliable autonomic distributed object store (RADOS) object-based storage system, and also provide a foundation for some of Ceph's features, including RADOS Block Device (RBD), RADOS Gateway, and the Ceph File System.
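As an illustration of direct RADOS access, the following minimal sketch uses the librados Python binding (the rados module); the pool name "data" and the configuration path are assumptions:

    import rados

    # Connect to the cluster using the local Ceph configuration (assumed path).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Open an I/O context on a pool (assumed to already exist).
        ioctx = cluster.open_ioctx('data')
        try:
            # Store an object in RADOS, then read it back.
            ioctx.write_full('hello-object', b'Hello, RADOS!')
            print(ioctx.read('hello-object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()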
The "librados" software libraries provide access in C, C++, Java, PHP, and Python. The RADOS Gateway also exposes the object store as a RESTful interface which can present as both native Amazon S3 and OpenStack Swift APIs.
Ceph's object storage system allows users to mount Ceph as a thin-provisioned block device. When an application writes data to Ceph using a block device, Ceph automatically stripes and replicates the data across the cluster. Ceph's RADOS Block Device (RBD) also integrates with Kernel-based Virtual Machines (KVMs).
Ceph RBD interfaces with the same Ceph object storage system that provides the librados interface and the CephFS file system, and it stores block device images as objects. Since RBD is built on librados, RBD inherits librados's abilities, including read-only snapshots and revert to snapshot. By striping images across the cluster, Ceph improves read access performance for large block device images.
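A minimal sketch of creating and writing an RBD image through the Python binding (the rbd module); the pool name "rbd", the image name, and the size are illustrative:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')
        try:
            # Create a 1 GiB image; RBD images are thin-provisioned.
            rbd.RBD().create(ioctx, 'demo-image', 1024 ** 3)
            with rbd.Image(ioctx, 'demo-image') as image:
                # Write at offset 0 of the block device image.
                image.write(b'Hello, RBD!', 0)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()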
The block device can be virtualized, providing block storage to virtual machines in virtualization platforms such as Apache CloudStack, OpenStack, OpenNebula, Ganeti, and Proxmox Virtual Environment.
Ceph's file system (CephFS) runs on top of the same object storage system that provides object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and file names of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.
Clients mount the POSIX-compatible file system using a Linux kernel client. On March 19, 2010, Linus Torvalds merged the Ceph client into Linux kernel version 2.6.34 which was released on May 16, 2010. An older FUSE-based client is also available. The servers run as regular Unix daemons.
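CephFS can also be accessed programmatically rather than through a mount; the following minimal sketch uses the libcephfs Python binding (the cephfs module), with the file path and contents illustrative:

    import os
    import cephfs

    # Attach to the file system using the local Ceph configuration (assumed path).
    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()
    try:
        # Create a file in the root of the file system and write to it.
        fd = fs.open('/hello.txt', os.O_CREAT | os.O_WRONLY, 0o644)
        fs.write(fd, b'Hello, CephFS!', 0)
        fs.close(fd)
    finally:
        fs.shutdown()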
Ceph was initially created by Sage Weil (developer of the Webring concept and co-founder of DreamHost) for his doctoral dissertation, advised by Professor Scott A. Brandt at the Jack Baskin School of Engineering of the University of California, Santa Cruz. The work was sponsored by the United States Department of Energy (DOE), Oak Ridge National Laboratory (ORNL), Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Intel Corporation, Microsoft Corporation, SAP Laboratories, and others.
After graduating in fall 2007, Weil continued to work on Ceph full-time, and the core development team expanded to include Yehuda Sadeh Weinraub and Gregory Farnum. In 2012, Weil created Inktank Storage to provide professional services and support for Ceph.
In October 2015, the Ceph Community Advisory Board was formed to assist the community in driving the direction of open source software-defined storage technology. The charter advisory board includes Ceph community members from global IT organizations that are committed to the Ceph project, including individuals from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.
The name "Ceph" is an abbreviation of "cephalopod", a class of molluscs that includes the octopus. The name (emphasized by the logo) suggests the highly parallel behavior of an octopus and was chosen to connect the file system with UCSC's mascot, a banana slug called "Sammy". Both cephalopods and banana slugs are molluscs.