Many older operating systems support only a single "native" file system, which has no name apart from the name of the operating system itself.
Disk file systems are usually block-oriented. Files in a block-oriented file system are sequences of blocks, often featuring fully random-access read, write, and modify operations.
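As a minimal illustration of this random-access model (a generic POSIX sketch, not tied to any particular file system; the file name is hypothetical), a program can overwrite and re-read an arbitrary byte range in place without rewriting the rest of the file:

```python
import os

# Create a sample file one typical block in size, then modify a range
# in the middle in place -- the kind of operation a block-oriented
# file system supports directly.
path = "demo.bin"
with open(path, "wb") as f:
    f.write(b"A" * 4096)

fd = os.open(path, os.O_RDWR)
os.pwrite(fd, b"PATCH", 1000)   # overwrite 5 bytes at offset 1000
chunk = os.pread(fd, 5, 1000)   # read the same range back
os.close(fd)
os.remove(path)

print(chunk)  # b'PATCH'
```

Note that `os.pread`/`os.pwrite` are POSIX-only; on other platforms the same effect is achieved with `seek` followed by `read`/`write`.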
These file systems have built-in checksumming and either mirroring or parity for extra redundancy on one or several block devices:
Solid-state media, such as flash memory, are similar to disks in their interfaces but present different problems. At a low level, they require special handling such as wear leveling and different error detection and correction algorithms. Typically a device such as a solid-state drive handles these operations internally, so a regular file system can be used. However, certain specialized installations (embedded systems, industrial applications) need a file system optimized for raw flash memory.
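To give a feel for why wear leveling matters, here is a toy allocator (purely illustrative; real flash translation layers also remap logical addresses, collect garbage, and store per-block ECC) that always directs the next write/erase cycle at the least-worn block:

```python
# Toy wear-leveling sketch: pick the block with the fewest erase
# cycles for each new write, so wear spreads evenly instead of
# burning out one "hot" block.
NUM_BLOCKS = 8
erase_counts = [0] * NUM_BLOCKS

def allocate_block():
    # The least-worn block absorbs the next erase cycle.
    victim = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[victim] += 1
    return victim

for _ in range(80):      # 80 writes across 8 blocks...
    allocate_block()

print(erase_counts)      # ...end up evenly worn: [10, 10, ..., 10]
```

Without such leveling, a naive allocator that reuses the same free block would exhaust that block's limited erase endurance while the rest of the device sits idle.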
In record-oriented file systems files are stored as a collection of records. They are typically associated with mainframe and minicomputer operating systems. Programs read and write whole records, rather than bytes or arbitrary byte ranges, and can seek to a record boundary but not within records. The more sophisticated record-oriented file systems have more in common with simple databases than with other file systems.
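For the simplest case of fixed-length records (record-oriented systems also support variable-length and indexed records), "seek to record n" reduces to multiplying by the record size. A hypothetical sketch, using an in-memory buffer in place of a real record-oriented file:

```python
import io

RECORD_SIZE = 16  # fixed-length records: a simplifying assumption

def read_record(f, n):
    # Seeking is only meaningful at record boundaries, so the byte
    # offset is always a whole multiple of the record size.
    f.seek(n * RECORD_SIZE)
    return f.read(RECORD_SIZE)

f = io.BytesIO()
for i in range(4):
    f.write(f"record {i}".encode().ljust(RECORD_SIZE))

print(read_record(f, 2))  # b'record 2        '
```

The key contrast with byte-oriented file systems is that the record, not the byte, is the unit of addressing: offsets inside a record are not visible to the program.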
Shared-disk file systems (also called shared-storage file systems, SAN file systems, or clustered file systems) are primarily used in a storage area network, where all nodes directly access the block storage on which the file system resides. This allows nodes to fail without affecting access to the file system from the other nodes. Shared-disk file systems are normally used in a high-availability cluster together with storage on hardware RAID, and normally do not scale beyond 64 or 128 nodes.
Distributed file systems are also called network file systems. Many implementations have been made; unless otherwise stated below, they are location-dependent and have access control lists (ACLs).
Distributed file systems that are also parallel and fault-tolerant stripe and replicate data over multiple servers for high performance and to maintain data integrity: even if a server fails, no data is lost. Such file systems are used in both high-performance computing (HPC) and high-availability clusters.
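The striping-plus-replication idea can be sketched abstractly (the server names, chunk size, and round-robin placement here are hypothetical, not any particular file system's layout):

```python
# Stripe a byte string into fixed-size chunks placed round-robin
# across servers, and replicate each chunk on the next server as
# well, so losing any single server leaves every chunk recoverable.
CHUNK = 4
SERVERS = ["s0", "s1", "s2"]

def place(data):
    layout = {}  # chunk index -> servers holding a replica
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for idx in range(len(chunks)):
        primary = idx % len(SERVERS)
        backup = (primary + 1) % len(SERVERS)
        layout[idx] = [SERVERS[primary], SERVERS[backup]]
    return chunks, layout

chunks, layout = place(b"hello distributed world!")
print(layout[0])  # first chunk lives on two distinct servers

# Every chunk has two distinct replicas, so any one server can fail.
assert all(len(set(srvs)) == 2 for srvs in layout.values())
```

Striping across servers is what delivers aggregate bandwidth (reads and writes proceed in parallel), while the replica on a second server is what preserves the data through a failure.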
| Name | Developer | License | Operating systems | Notes |
| --- | --- | --- | --- | --- |
| Alluxio (formerly Tachyon) | Alluxio Open Foundation | Apache License 2.0 | Linux, macOS | A memory-centric virtual distributed storage system that unifies data access and bridges computation frameworks and underlying storage systems; applications connect to Alluxio to access data in any underlying store. |
| BeeGFS (formerly FhGFS) | Fraunhofer Society | Open source (GPLv2 and BeeGFS EULA) | Linux | A free-to-use file system with optional professional support, designed for ease of use and high performance; used on some of the fastest computer clusters in the world. Supports replication of storage volumes with automatic failover and self-healing. |
| Ceph | Inktank Storage (acquired by Red Hat) | LGPL | Linux kernel | A massively scalable object store; CephFS was merged into the Linux kernel in 2010. Built on the Reliable Autonomic Distributed Object Store (RADOS), which provides object storage via a programmatic interface and S3 or Swift REST APIs, block storage to QEMU/KVM/Linux hosts, and POSIX file system storage mountable by Linux kernel and FUSE clients. |
| Chiron FS | | | | A FUSE-based transparent replication file system that layers on an existing file system, implementing at the file-system level what RAID 1 does at the device level. Individual target directories can be replicated without replicating entire partitions. (No visible project activity since 2008; a status request in the chironfs forum in October 2009 went unanswered.) |
| CloudStore | Kosmix | Apache License 2.0 | | A Google File System workalike; replaced by the Quantcast File System (QFS). |
| Cosmos | Microsoft (internal) | Internal software | | Focused on fault tolerance, high throughput, and scalability; designed for terabyte- and petabyte-sized data sets processed with Dryad. |
| dCache | DESY and others | | | A write-once file system, accessible via various protocols. |
| FS-Manager | CDNetworks | Proprietary | Linux | Focused on content delivery networks. |
| General Parallel File System (GPFS) | IBM | Proprietary | AIX, Linux, Windows | Supports replication between attached block storage; symmetric or asymmetric (configurable). |
| Gfarm file system | Asia Pacific Grid | X11 License | Linux, macOS, FreeBSD, NetBSD, Solaris | Uses OpenLDAP or PostgreSQL for metadata and FUSE or LUFS for mounting. |
| GlusterFS | Gluster (acquired by Red Hat) | GPLv3 | Linux, NetBSD, FreeBSD, OpenSolaris | A general-purpose distributed file system for scalable storage; aggregates storage bricks over InfiniBand RDMA or TCP/IP into one large parallel network file system. The main component of Red Hat Storage Server. |
| Google File System (GFS) | Google | Internal software | | Focused on fault tolerance, high throughput, and scalability. |
| Hadoop Distributed File System (HDFS) | Apache Software Foundation | Apache License 2.0 | Cross-platform | An open-source file system modeled on the Google File System. |
| IBRIX Fusion | IBRIX | Proprietary | | |
| Infinit | Infinit International, Inc. | Proprietary | Cross-platform | A POSIX-compliant file system for both local and wide area networks; replicates blocks of data across the storage resources composing the infrastructure (local or via cloud APIs) to guarantee data reliability (durability and availability) through Byzantine fault tolerance and data rebalancing (self-healing). |
| LizardFS | Skytechnology | GPL | Cross-platform | An open-source, highly available, POSIX-compliant file system that supports Windows clients. |
| Lustre | Originally developed by Cluster File Systems; currently supported by OpenSFS | GPL | Linux | A POSIX-compliant, high-performance file system used on a majority of the TOP500 HPC systems; provides high availability via storage failover. |
| MapR FS | MapR | Proprietary | Linux | A highly scalable, POSIX-compliant, fault-tolerant read/write file system with a distributed, fault-tolerant metadata service; provides HDFS and NFS interfaces to clients, a NoSQL table interface, and an Apache Kafka-compatible messaging system. |
| MogileFS | Danga Interactive | GPL | Linux (but may be ported) | Not POSIX-compliant; an application-level system with a flat namespace, using MySQL or PostgreSQL for metadata and HTTP for transport. |
| MooseFS | Core Technology | GPLv2 / proprietary | Linux, NetBSD, FreeBSD, macOS, OpenSolaris | A fault-tolerant, highly available, high-performance scale-out distributed file system; spreads data over commodity x86 servers that appear to the user as a single namespace, and behaves like other Unix-like file systems for standard file operations. |
| ObjectiveFS | Objective Security Corporation | Proprietary | Linux, macOS | A POSIX-compliant shared distributed file system backed by an object store; runs on AWS S3, GCS, and object-store devices. |
| OneFS distributed file system | Isilon | | FreeBSD | A BSD-based OS on dedicated Intel-based hardware, serving NFS v3 and SMB/CIFS to Windows, macOS, Linux, and other UNIX clients under proprietary software. |
| Panasas ActiveScale File System (PanFS) | Panasas | Proprietary | Linux | Uses object storage devices. |
| PeerFS | Radiant Data Corporation | Proprietary | Linux | Focused on high availability and high performance; uses peer-to-peer replication with multiple sources and targets. |
| Quobyte | Quobyte | Proprietary | Linux | An all-in-one data-center file system (file, block, and object storage); the commercial successor of XtreemFS, founded by the XtreemFS development team. |
| RozoFS | Rozo Systems | GPLv2 | Linux | A POSIX distributed file system focused on fault tolerance and high performance, based on the Mojette erasure code to significantly reduce the amount of redundancy compared with plain replication. |
| Scality | Scality ring | Proprietary | Linux | A POSIX file system focused on high availability and performance; also provides S3, REST, and NFS interfaces. |
| Tahoe-LAFS | Tahoe-LAFS Software Foundation | GNU GPL 2+ and other | Windows, Linux, macOS | A secure, decentralized, fault-tolerant, peer-to-peer distributed data store and distributed file system. |
| TerraGrid Cluster File System | Terrascale Technologies Inc. | Proprietary | Linux | Implements on-demand cache coherency; uses industry-standard iSCSI and a modified version of the XFS file system. |
| XtreemFS | Contrail E.U. project, the German MoSGrid project, and the German project "First We Take Berlin" | Open source (BSD) | Linux, Solaris, macOS, Windows | A cross-platform file system for wide area networks; replicates data for fault tolerance and caches metadata and data to improve performance over high-latency links. SSL and X.509 certificate support makes it usable over public networks; also supports striping for use in a cluster. |
Some of these may be called cooperative storage clouds.
Strictly speaking, these are not file systems; rather, they provide access to file systems from the operating system's point of view.