Alluxio holds a unique place in the big data ecosystem, residing between storage systems (such as Amazon S3, Apache HDFS or OpenStack Swift) and computation frameworks and applications (such as Apache Spark or Hadoop MapReduce) to provide a central point of access with a memory-centric design. Alluxio works best when computation frameworks and distributed storage are decoupled and Alluxio is deployed alongside a cluster’s computation framework.

For user applications and computation frameworks, Alluxio is the underlying storage layer, usually co-located with the computation frameworks. This allows Alluxio to provide fast storage and to facilitate data sharing and locality between jobs, regardless of whether they run on the same computation engine. As a result, Alluxio serves data at memory speed when the data is local, and at the computation cluster's network speed when the data resides elsewhere in Alluxio. Data is read from the under storage system only the first time it is accessed, so data access can be significantly accelerated when access to the under storage is slow.
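The read-once behavior described above can be sketched as a read-through cache. This is an illustrative model, not Alluxio code; the class and backend here are made up for the example.

```python
# Illustrative sketch (not Alluxio code): a read-through cache modeling how
# Alluxio serves data. The first read falls through to the (slow) under
# storage; subsequent reads are served from the (fast) local store.

class ReadThroughCache:
    def __init__(self, under_storage):
        self.under_storage = under_storage  # hypothetical dict-like backend
        self.cache = {}
        self.under_storage_reads = 0

    def read(self, path):
        if path not in self.cache:
            # Cold read: fetch from under storage once, then keep it local.
            self.cache[path] = self.under_storage[path]
            self.under_storage_reads += 1
        return self.cache[path]

cache = ReadThroughCache({"/data/file1": b"hello"})
cache.read("/data/file1")         # cold: hits under storage
cache.read("/data/file1")         # warm: served locally
print(cache.under_storage_reads)  # -> 1
```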

For under storage systems, Alluxio bridges the gap between big data applications and traditional storage systems, and expands the set of workloads that can utilize the data. Since Alluxio hides the integration of under storage systems from applications, any under storage can back all the applications and frameworks running on top of Alluxio. Also, when mounting multiple under storage systems simultaneously, Alluxio can serve as a unifying layer for any number of varied data sources.
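The unifying-layer idea can be sketched as a mount table that maps paths in a single namespace to different under storage systems. This is an illustrative model, not Alluxio's implementation; the mount points and URIs are made up.

```python
# Illustrative sketch (not Alluxio code): one namespace unifying multiple
# under storage systems via a mount table. Paths and schemes are invented.

MOUNT_TABLE = {
    "/s3-data": "s3://bucket/data",
    "/hdfs-data": "hdfs://namenode:9000/data",
}

def resolve(alluxio_path):
    """Translate a path in the unified namespace to its backing URI."""
    # Longest-prefix match so nested mount points would resolve correctly.
    for mount_point in sorted(MOUNT_TABLE, key=len, reverse=True):
        if alluxio_path.startswith(mount_point):
            return MOUNT_TABLE[mount_point] + alluxio_path[len(mount_point):]
    raise FileNotFoundError(alluxio_path)

print(resolve("/s3-data/part-0"))  # -> s3://bucket/data/part-0
```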



Alluxio’s design uses a single primary master and multiple workers. At a high level, Alluxio can be divided into three components: the master, workers, and clients. The master and workers together make up the Alluxio servers, which are the components a system admin would maintain and manage. Applications, such as Spark or MapReduce jobs, the Alluxio command-line interface, or the FUSE layer, use clients to communicate with the Alluxio servers.

Masters

The Alluxio master service can be deployed as one primary master and several standby masters for fault tolerance. When the primary master goes down, a standby master is elected as the new primary master.


Primary Master

There is only one primary master in an Alluxio cluster. The primary master is responsible for managing the global metadata of the system. This includes file system metadata (e.g. the namespace tree), block metadata (e.g. block locations), and worker capacity metadata (free and used space). Alluxio clients interact with the primary master to read or modify this metadata. In addition, all workers periodically send heartbeat information to the primary master to maintain their participation in the cluster. The primary master does not initiate communication with other components; it only responds to requests via RPC services. Additionally, the primary master writes journals to a distributed persistent storage to allow for recovery of master state information.
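The heartbeat-based membership described above can be sketched as follows. This is an illustrative model, not Alluxio code; the field names and timeout value are assumptions for the example.

```python
# Illustrative sketch (not Alluxio code): a master that passively tracks
# worker capacity via periodic heartbeats and considers a worker lost once
# heartbeats stop. The timeout and field names are made up.

HEARTBEAT_TIMEOUT = 10.0  # seconds; invented value for illustration

class Master:
    def __init__(self):
        self.workers = {}  # worker_id -> (used_bytes, free_bytes, last_seen)

    def on_heartbeat(self, worker_id, used_bytes, free_bytes, now):
        # The master only responds to incoming requests; it never initiates.
        self.workers[worker_id] = (used_bytes, free_bytes, now)

    def live_workers(self, now):
        return [w for w, (_, _, seen) in self.workers.items()
                if now - seen <= HEARTBEAT_TIMEOUT]

m = Master()
m.on_heartbeat("worker-1", used_bytes=512, free_bytes=1024, now=0.0)
m.on_heartbeat("worker-2", used_bytes=0, free_bytes=2048, now=0.0)
print(m.live_workers(now=5.0))   # -> ['worker-1', 'worker-2']
print(m.live_workers(now=20.0))  # -> []
```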

Standby Masters

Standby masters read journals written by the primary master to keep their own copies of master state up-to-date. They also write journal checkpoints for faster recovery in the future. They do not process any requests from other Alluxio components.

Workers

Alluxio workers are responsible for managing user-configurable local resources allocated to Alluxio (e.g. memory, SSDs, HDDs, etc.). Alluxio workers store data as blocks and serve client requests that read or write data by reading or creating new blocks within their local resources. Workers are only responsible for managing blocks; the actual mapping from files to blocks is stored only by the master.
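The division of labor above can be sketched as follows: the master holds the file-to-block mapping, while workers only see opaque blocks. This is an illustrative model, not Alluxio code; the tiny block size is chosen purely for demonstration.

```python
# Illustrative sketch (not Alluxio code): only the master knows which blocks
# make up a file; workers just store opaque blocks. The block size here is
# tiny for illustration (real block sizes are configurable, typically MBs).

BLOCK_SIZE = 4  # bytes; invented value for the example

def split_into_blocks(data):
    """Master-side view: a file is a sequence of fixed-size blocks."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

blocks = split_into_blocks(b"hello world!")
print(blocks)  # -> [b'hell', b'o wo', b'rld!']
```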

Also, Alluxio workers perform data operations on the under store (e.g. data transfer or under store metadata operations). This brings two important benefits: the data read from the under store can be stored in the worker and made immediately available to other clients, and the client can remain lightweight, with no dependency on the under storage connectors.

Because RAM usually offers limited capacity, blocks in a worker can be evicted when worker storage is full. Workers employ eviction policies to decide which data to keep in the Alluxio space. For more on this topic, please check out the documentation for Tiered Storage.
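To make the eviction idea concrete, here is a minimal least-recently-used (LRU) block store. This is an illustrative sketch, not one of Alluxio's actual (pluggable) eviction policies; the class and capacity are invented for the example.

```python
from collections import OrderedDict

# Illustrative sketch (not Alluxio code): an LRU policy like a worker might
# use when its storage fills up. Alluxio's real eviction policies are
# pluggable; see the Tiered Storage documentation.

class LRUBlockStore:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> data, oldest first

    def put(self, block_id, data):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            # Evict the least recently used block to free space.
            evicted, _ = self.blocks.popitem(last=False)
            return evicted
        return None

    def get(self, block_id):
        self.blocks.move_to_end(block_id)  # mark as recently used
        return self.blocks[block_id]

store = LRUBlockStore(capacity_blocks=2)
store.put(1, b"a")
store.put(2, b"b")
store.get(1)               # touch block 1 so block 2 becomes the LRU block
print(store.put(3, b"c"))  # -> 2 (block 2 is evicted)
```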


Client

The Alluxio client provides users with a gateway to interact with the Alluxio servers. It initiates communication with the primary master for metadata operations and with workers to read and write data stored in Alluxio. Alluxio provides a native filesystem API in Java and supports additional client languages, including Go and Python, as well as a REST API. In addition, Alluxio exposes APIs compatible with the HDFS API as well as the Amazon S3 API.
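The client-side read path implied by the architecture (metadata from the master, data from the workers) can be sketched as follows. This is an illustrative model, not the Alluxio client API; every name and data structure here is invented.

```python
# Illustrative sketch (not Alluxio code): a client first asks the master
# where a file's blocks live, then fetches each block directly from the
# workers. All identifiers and structures are made up.

def read_file(path, master, workers):
    block_ids = master["files"][path]      # file -> blocks (master metadata)
    locations = master["block_locations"]  # block_id -> worker_id
    # Fetch each block from the worker that holds it, in order.
    return b"".join(workers[locations[b]][b] for b in block_ids)

master = {
    "files": {"/data/f": [10, 11]},
    "block_locations": {10: "w1", 11: "w2"},
}
workers = {"w1": {10: b"foo"}, "w2": {11: b"bar"}}
print(read_file("/data/f", master, workers))  # -> b'foobar'
```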
