Distributed File System (DFS) Overview | HDFS in Hadoop
The Hadoop Distributed File System (HDFS)
The Hadoop Distributed File System (HDFS), a subproject of the Apache Hadoop project, is a distributed, highly fault-tolerant file system designed to run on commodity hardware. HDFS provides high-throughput access to application data and is suited to applications with large data sets. This video covers the essential components of HDFS and gives a high-level view of the HDFS architecture.
HDFS shares many similarities with other distributed file systems but differs in several respects. One notable difference is HDFS's write-once-read-many model, which relaxes concurrency-control requirements, simplifies data coherency, and enables high-throughput access.
Another distinctive characteristic of HDFS is its design principle that it is usually better to move processing logic close to the data than to move the data to where the application runs.
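The data-locality principle above can be sketched as a toy scheduler that prefers to run a task on a node already holding a replica of the block it needs. This is a minimal conceptual model, not the Hadoop API; the names (`block_locations`, `schedule`) are illustrative assumptions.

```python
# Hypothetical block-to-replica map: block id -> nodes holding a replica.
block_locations = {
    "blk_001": ["node1", "node2"],
    "blk_002": ["node2", "node3"],
    "blk_003": ["node1", "node3"],
}

def schedule(block_id, busy_nodes=()):
    """Prefer a node that stores the block locally; fall back to any node."""
    for node in block_locations[block_id]:
        if node not in busy_nodes:
            return node, "local"   # task runs where the data already is
    return "node1", "remote"       # data must be shipped over the network

node, placement = schedule("blk_002")
print(node, placement)
```

In real Hadoop, the MapReduce scheduler applies the same preference using block locations reported by the NameNode, which is why moving computation is cheaper than moving multi-gigabyte blocks.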
HDFS strictly restricts file writes to one writer at a time. Bytes are always appended to the end of a stream, and byte streams are guaranteed to be stored in the order written.
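The single-writer, append-only semantics can be modeled in a few lines: one writer holds the lease at a time, bytes land in write order, and any number of readers see the accumulated data. This is a conceptual simulation under those assumptions, not the real HDFS client API.

```python
class WriteOnceFile:
    """Toy model of HDFS single-writer, append-only semantics."""

    def __init__(self):
        self._data = bytearray()
        self._writer = None  # at most one writer holds the lease

    def open_for_append(self, writer):
        if self._writer is not None:
            raise IOError("lease already held by another writer")
        self._writer = writer

    def append(self, writer, chunk: bytes):
        if writer != self._writer:
            raise IOError("only the lease holder may write")
        self._data.extend(chunk)  # appends only; no in-place edits

    def close(self, writer):
        if writer == self._writer:
            self._writer = None

    def read(self) -> bytes:
        return bytes(self._data)  # readers never block writers in this model

f = WriteOnceFile()
f.open_for_append("client-A")
f.append("client-A", b"hello ")
f.append("client-A", b"world")
f.close("client-A")
print(f.read())  # b'hello world'
```

HDFS enforces the real version of this with leases granted by the NameNode: a second client attempting to open the same file for writing is rejected until the lease is released.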
So watch the video till the end to learn about HDFS.
For more updates on courses and tips, follow us on:
- Facebook: https://www.facebook.com/Simplilearn
- Twitter: https://twitter.com/simplilearn
Get the android app: http://bit.ly/1WlVo4u
Get the iOS app: http://apple.co/1HIO5J0