If you're looking for Apache Flume interview questions and answers for experienced candidates or freshers, you are at the right place. There are a lot of opportunities from many reputed companies in the world. According to research, Apache Flume has a market share of about 70.37%. So you still have the opportunity to move ahead in your career in Apache Flume development. Mindmajix offers advanced Apache Flume interview questions (2018) that help you crack your interview and acquire your dream career as an Apache Flume developer.
1. What is Flume?
Flume is a reliable, distributed service for collecting and aggregating large amounts of streaming data into HDFS. Most big data analysts use Apache Flume to push data from sources such as Twitter, Facebook, and LinkedIn into Hadoop, Storm, Solr, Kafka, and Spark.
2. Why do we use Flume?
Hadoop developers most often use this tool to collect log data from social media sites. It was originally developed by Cloudera for aggregating and moving very large amounts of data. Its primary use is to gather log files from different sources and asynchronously persist them in the Hadoop cluster.
3. What is Flume Agent?
A Flume agent is a JVM process that hosts the Flume core components (source, channel, sink) through which events flow from an external source, such as a web server, to a destination such as HDFS. The agent is the heart of Apache Flume.
4. What is Flume event?
A unit of data with a set of string attributes is called a Flume event. An external source, such as a web server, sends events to the Flume source. Flume has built-in functionality to understand the source format; for example, an Avro client sends events to an Avro source in Flume.
Each log entry is treated as an event. Each event has a header section and a body: the headers are key-value pairs of metadata, and the body carries the actual payload.
5. What are Flume Core components?
Source, channel, and sink are the core components of Apache Flume.
When a Flume source receives an event from an external source, it stores the event in one or more channels.
A Flume channel temporarily stores the event and keeps it until it is consumed by a Flume sink; it acts as the Flume repository.
A Flume sink removes the event from the channel and puts it into an external repository such as HDFS, or forwards it to the next Flume agent.
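As a minimal sketch of how the three components are declared and wired together, assuming an agent named agent1 and placeholder component names src1, ch1, and snk1:

```properties
# Name the components of this agent
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = snk1

# A netcat source listening on a TCP port (illustrative choice)
agent1.sources.src1.type = netcat
agent1.sources.src1.bind = localhost
agent1.sources.src1.port = 44444

# A memory channel that buffers events between source and sink
agent1.channels.ch1.type = memory

# A logger sink that writes events to the agent's log
agent1.sinks.snk1.type = logger

# Wiring: a source may feed multiple channels; a sink drains exactly one
agent1.sources.src1.channels = ch1
agent1.sinks.snk1.channel = ch1
```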
6. Does Flume provide 100% reliability to the data flow?
Yes, it provides end-to-end reliability of the flow. By default, Flume uses a transactional approach: sources and sinks operate within transactions provided by the channels, and the channels are responsible for passing events reliably from end to end. That is how Flume provides 100% reliability to the data flow.
7. Can you explain the configuration files?
The agent configuration is stored in a local configuration file. It contains each agent's source, sink, and channel information.
Each core component (source, sink, or channel) has a name, a type, and a set of type-specific properties.
For example, an Avro source needs a hostname and port number to receive data from an external client.
A memory channel should have a maximum queue size in the form of its capacity.
An HDFS sink needs the file system URI, the path at which to create files, the frequency of file rotation, and other settings.
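The properties listed above can be sketched in one configuration file; the agent name, bind address, port, capacity, and HDFS path below are illustrative assumptions:

```properties
a1.sources = avroSrc
a1.channels = memCh
a1.sinks = hdfsSink

# Avro source: needs a bind address and port to receive data
a1.sources.avroSrc.type = avro
a1.sources.avroSrc.bind = 0.0.0.0
a1.sources.avroSrc.port = 4141
a1.sources.avroSrc.channels = memCh

# Memory channel: capacity is the maximum queue size in events
a1.channels.memCh.type = memory
a1.channels.memCh.capacity = 10000

# HDFS sink: file system URI/path and file-rotation frequency
a1.sinks.hdfsSink.type = hdfs
a1.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/flume/events
a1.sinks.hdfsSink.hdfs.rollInterval = 300
a1.sinks.hdfsSink.channel = memCh
```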
8. What are the complicated steps in Flume configuration?
Flume processes streaming data, so once started, there is no natural stop/end to the process: it asynchronously flows data from the source to HDFS via the agent. First of all, the agent must know how the individual components are connected in order to load data, so the configuration is the trigger that starts loading streaming data. For example, consumerKey, consumerSecret, accessToken, and accessTokenSecret are the key credentials needed to download data from Twitter.
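As a rough sketch, those four Twitter credentials appear in the source definition. The class name below is the experimental Twitter source shipped with recent Apache Flume releases (verify it against your Flume version), and the placeholder values must be replaced with your own keys:

```properties
a1.sources = twitter
a1.sources.twitter.type = org.apache.flume.source.twitter.TwitterSource
a1.sources.twitter.consumerKey = YOUR_CONSUMER_KEY
a1.sources.twitter.consumerSecret = YOUR_CONSUMER_SECRET
a1.sources.twitter.accessToken = YOUR_ACCESS_TOKEN
a1.sources.twitter.accessTokenSecret = YOUR_ACCESS_TOKEN_SECRET
a1.sources.twitter.channels = memCh
```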
9. What are the important rules in the configuration?
The configuration file is the heart of a Flume agent.
Every source must have at least one channel.
Every sink must have exactly one channel.
Every component must have a specific type.
10. Does Apache Flume support third-party plugins?
Yes, Flume has a 100% plugin-based architecture. It can load custom components packaged separately from Flume and ship data from external sources to external destinations. That is why most big data analysts use this tool for streaming data.
11. Can you explain consolidation in Flume?
The beauty of Flume is consolidation: it can collect data from many sources, even from different Flume agents. A consolidation (collector) agent receives all of those data flows through its source, passes them through its channel and sink, and finally sends the data to HDFS or another target destination.
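A common consolidation pattern is several web-tier agents, each with an Avro sink pointing at one collector agent's Avro source. A sketch of the two sides, where the hostnames and port are assumptions:

```properties
# --- On each web-tier agent: forward events to the collector ---
web.sinks.avroOut.type = avro
web.sinks.avroOut.hostname = collector.example.com
web.sinks.avroOut.port = 4545
web.sinks.avroOut.channel = ch1

# --- On the collector agent: receive from all web-tier agents ---
collector.sources.avroIn.type = avro
collector.sources.avroIn.bind = 0.0.0.0
collector.sources.avroIn.port = 4545
collector.sources.avroIn.channels = ch1
```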
12. Can Flume distribute data to multiple destinations?
Yes, it supports multiplexing flows: an event can flow from one source into multiple channels and on to multiple destinations. This is achieved by defining a flow multiplexer.
For example, the data can be replicated so that one sink writes to HDFS, another sink writes to a second destination, and a third flow becomes the input to another agent.
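One way to sketch this fan-out, assuming two channels, an HDFS sink, and an Avro sink that forwards to the next agent (the replicating selector copies every event into both channels; names and addresses are illustrative):

```properties
a1.sources.src1.channels = hdfsCh avroCh
a1.sources.src1.selector.type = replicating

# Sink 1: write a copy of the stream to HDFS
a1.sinks.hdfsSink.type = hdfs
a1.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/flume/events
a1.sinks.hdfsSink.channel = hdfsCh

# Sink 2: forward a copy to the next agent in the chain
a1.sinks.toNextAgent.type = avro
a1.sinks.toNextAgent.hostname = agent2.example.com
a1.sinks.toNextAgent.port = 4141
a1.sinks.toNextAgent.channel = avroCh
```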
13. Do agents communicate with other agents?
No, each agent runs independently. Flume can easily scale horizontally, and as a result there is no single point of failure.
14. What are interceptors?
This is one of the most frequently asked Flume interview questions. Interceptors are used to inspect, modify, or filter events between the source and the channel. They can drop unnecessary events or pick out targeted log entries, and depending on your requirements you can chain any number of interceptors.
15. What are channel selectors?
Channel selectors control which channel or channels each event is written to. The default is the replicating channel selector, which replicates every event into all of the configured channels.
The multiplexing channel selector instead routes events to specific channels based on information in the event's headers, so each event reaches only the sink intended for it.
For example, if one sink is connected to Hadoop, another to S3, and another to HBase, a multiplexing channel selector can separate the events and route each one to the appropriate channel and sink.
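A sketch of a multiplexing selector, assuming events carry a header named datatype and three channels feeding HDFS, S3, and HBase sinks respectively (header name, values, and channel names are all placeholders):

```properties
a1.sources.src1.channels = hdfsCh s3Ch hbaseCh
a1.sources.src1.selector.type = multiplexing
a1.sources.src1.selector.header = datatype

# Route by the value of the "datatype" header
a1.sources.src1.selector.mapping.clicks = hdfsCh
a1.sources.src1.selector.mapping.archives = s3Ch
a1.sources.src1.selector.mapping.profiles = hbaseCh

# Fallback channel for unmapped header values
a1.sources.src1.selector.default = hdfsCh
```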
16. What are sink processors?
Sink processors are the mechanism by which you group sinks together to provide failover and load balancing.
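Sink processors are configured on a sink group. A sketch of a failover group, where events go to the highest-priority healthy sink and fall back to the other on failure (sink names and values are assumptions):

```properties
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = primarySink backupSink

# Failover: use primarySink while healthy, backupSink otherwise
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.primarySink = 10
a1.sinkgroups.g1.processor.priority.backupSink = 5
a1.sinkgroups.g1.processor.maxpenalty = 10000

# For load balancing across the group instead, use:
# a1.sinkgroups.g1.processor.type = load_balance
# a1.sinkgroups.g1.processor.selector = round_robin
```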