Contents
Architecture
Control flow
Modes Of Deployment
Entity Management actions
Instance Management actions
Retention
Replication
Cross entity validations
Updating process and feed definition
Handling late input data
Idempotency
Falcon EL Expressions
Lineage
Security
Recipes
Monitoring
Backwards Compatibility Instructions
Architecture
Introduction
Falcon is a feed and process management platform over Hadoop. Falcon essentially transforms users' feed and process configurations into repeated
actions through a standard workflow engine. Falcon by itself doesn't do any heavy lifting: all the functions and workflow state management
requirements are delegated to the workflow scheduler. The only things that Falcon maintains are the dependencies and relationships between these
entities. This is adequate to provide an integrated and seamless experience to developers using the Falcon platform.
Scheduler
Falcon has picked Oozie as the default scheduler; however, the system is open to integration with other schedulers. A lot of the data
processing in Hadoop requires scheduling based on both data availability and time. Oozie supports these capabilities off the
shelf, hence the choice.
Control flow
Though the actual responsibility for the workflow lies with the scheduler (Oozie), Falcon remains in the execution path by subscribing to messages that
each of the workflows may generate. When Falcon generates a workflow in Oozie, it does so after instrumenting the workflow with additional steps,
which include messaging via JMS. The Falcon system itself subscribes to these control messages and can perform actions such as retries, handling late
input arrival, etc.
Feed Schedule flow
Modes Of Deployment
There are two basic components in a Falcon setup: Falcon Prism and Falcon Server. As the name suggests, Falcon Prism splits the requests it gets
across the Falcon Servers. More details below:
Distributed Mode
Distributed mode is the mode you will likely be using most of the time. It is meant for organisations that have multiple Hadoop clusters,
and multiple workflow schedulers to handle them. Here we have two components: Prism and Server. Both Prism and Server have their own setup
(runtime and startup properties) and their own config locations. In this mode Prism acts as the contact point for the Falcon servers. Below are the requests that
can be sent to Prism and Server in this mode:
Prism: submit, schedule, submitAndSchedule, suspend, resume, kill, instance management
Server: schedule, suspend, resume, instance management
As observed above, submit and kill are kept exclusively as Prism operations in order to keep all the config stores in sync and to support
idempotency. A request may also be sent through Prism but directed to a specific server using the "-colo" option from the CLI, or by appending the same to the web
request, if using the API.
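For illustration, a hedged sketch of directing a scheduling request to one specific colo from the CLI (the entity name and colo name are hypothetical):

    falcon entity -type process -name sample-process -schedule -colo colo-1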
When a cluster is submitted, it is by default sent to all the servers configured in Prism. When a feed is submitted or scheduled, the request is sent only
to the servers specified in the feed/process definition. Servers are referenced in the feed/process via CLUSTER tags in the xml definition.
Communication between Prism and a Falcon server (for the submit/update entity functions) is secured over https.
The Prism server needs to present a valid client certificate for the Falcon server to accept the action.
The startup property file in both the Falcon and Prism servers needs to be configured with the following properties if TLS is enabled: keystore.file,
keystore.password
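A minimal sketch of the corresponding startup property entries; the keystore path is a hypothetical placeholder:

    keystore.file=/path/to/keystore.jks
    keystore.password=<keystore password>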
Prism Setup
Configuration Store
The configuration store is a file-system-based store in which the Falcon system maintains the entity definitions. The file system used for the
configuration store can be either a local file system or HDFS. It is recommended that the store be maintained outside of the system where
Falcon is deployed; this helps in handling disk failures or other permanent failures of the system where Falcon is deployed.
The configuration store also maintains an archive location where prior versions of the configurations, as well as deleted configurations, are kept. These are
never accessed by the Falcon system; they merely serve to track historical changes to the entity definitions.
Atomic Actions
Often, when Falcon performs entity management actions, it may need to carry out several individual actions. If one of those actions were to fail, the
system could be left in an inconsistent state. To avoid this, all individual operations performed are recorded into a transaction journal. This journal is then
used to undo the overall user action. In some cases it is not possible to undo the action; in such cases, Falcon attempts to keep the system in a
consistent state.
Storage
Falcon introduces a new abstraction to encapsulate the storage for a given feed, which can be expressed either as a path on the file system (File
System Storage) or as a table in a catalog such as Hive (Catalog Storage).
A feed should contain one of the two storage options: locations on a file system, or a table in a catalog.
File System Storage
This is expressed as a location on the file system. A location specifies where the feed is available on a given cluster. A location tag specifies the type of
location (data, meta, stats) and the corresponding path for it. A feed should at least define a location of type data, which specifies the HDFS
path pattern where the feed is generated periodically, e.g. type="data" path="/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic". The
granularity of the date pattern in the path should be at least that of the frequency of the feed.
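For instance, a hedged sketch of the locations block inside a feed definition (the data path is the one from the example above; the stats path is hypothetical):

    <locations>
        <location type="data" path="/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic"/>
        <location type="stats" path="/projects/TrafficHourly/stats"/>
    </locations>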
Catalog Storage (Table)
This is modeled as a URI (similar to an ISBN URI). It does not have any reference to Hive or HCatalog; it is quite generic, so it can be tied to other
implementations of a catalog registry. The catalog implementation specified in the startup config provides the implementation for the catalog URI.
The top-level partition has to be a dated pattern, and the granularity of the date pattern should be at least that of the frequency of the feed.
Examples:
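The example URIs from the original page are missing here; a hedged sketch of the general shape catalog:database:table#partition-spec (the database, table and partition names are hypothetical):

    catalog:logs-db:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}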
Entity Management actions
All of the following operations can also be performed using Falcon's RESTful API.
Submit
The entity submit action allows a new cluster/feed/process to be set up within Falcon. A submitted entity is not scheduled; it simply sits in the
configuration store within Falcon. Besides validating against the schema for the corresponding entity type, the Falcon system also
performs inter-field validations within the configuration file and validations across dependent entities.
List
Lists all the entities within the Falcon config store for the requested entity type. The listing includes both scheduled and submitted entity
configurations.
Dependency
Returns the dependencies of the requested entity. The dependency list includes both forward and backward dependencies (depends on & is dependent
on). For example, a feed would show the processes that are dependent on the feed and the clusters that it depends on.
Schedule
Feeds or processes that are already submitted and present in the config store can be scheduled. Upon schedule, the Falcon system wraps the required
repeatable action as a bundle of Oozie coordinators and executes them on the Oozie scheduler. (It is possible to extend Falcon to use an alternate
workflow engine other than Oozie.) Falcon overrides the workflow instance's external id in Oozie to reflect the process/feed and the nominal time.
This external id can then be used for instance management functions.
The schedule copies the user-specified workflow and library to a staging path, and the scheduler references the workflow and lib from the staging
path.
Suspend
This action is applicable only to scheduled entities. It triggers suspend on the Oozie bundle that was scheduled earlier through the schedule
function. No further instances are executed on a suspended process/feed.
Resume
Puts a suspended process/feed back into the active state, which in turn resumes the applicable Oozie bundle.
Status
Gets the current status of the entity.
Definition
Gets the current entity definition as stored in the configuration store. Please note that user documentation in the entity will not be retained.
Delete
The delete operation removes any scheduled activity for the entity on the workflow engine, besides removing the entity from the Falcon configuration store.
A delete operation on an entity succeeds only if there are no entities dependent on it.
Update
The update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently not allowed. A feed update can cause a
cascading update to all the processes already scheduled. A process update triggers an update in Falcon if the entity definition or the user-specified workflow/lib
is updated. The following set of actions is performed in Oozie to realize an update:
1. Suspend the previously scheduled Oozie coordinator. This is to prevent any new action from being triggered.
2. Update the coordinator to set the end time to "now".
3. Resume the suspended coordinators.
4. Schedule as per the new process/feed definition with the start time as "now".
Update optionally takes an effective time as a parameter, which is used as the end time of the previously scheduled coordinator; the updated
configuration then takes effect from the given timestamp.
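The entity actions above map onto the Falcon CLI; a hedged sketch of a typical sequence (the entity and file names are hypothetical):

    falcon entity -type cluster -submit -file primary-cluster.xml
    falcon entity -type feed -submit -file clicks-feed.xml
    falcon entity -type feed -name clicks-feed -schedule
    falcon entity -type feed -name clicks-feed -status
    falcon entity -type feed -name clicks-feed -suspend
    falcon entity -type feed -name clicks-feed -resume
    falcon entity -type feed -name clicks-feed -delete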
Instance Management actions
3. rerun: -rerun is the option you will use most often in instance management. As the name suggests, this option is used to rerun a particular
instance or instances of a process. The rerun option reruns the parent workflow for each instance, which in turn reruns all of its sub-workflows.
This option is valid for any instance in a terminal state, i.e. KILLED, SUCCEEDED or FAILED. The user can also set properties in the request to
control which types of actions should be rerun, e.g. only the failed ones, or all of them. These properties depend on the workflow engine being used
along with Falcon.
4. suspend: -suspend is used to suspend an instance or instances of the given process. This option pauses the parent workflow at the state
it was in at the time this command was executed. It is similar in functionality to the SUSPEND process command; the only difference is that
SUSPEND process suspends all instances, whereas suspend instance suspends only the given instance or the instances in the given range.
5. resume: -resume is used to resume any instance that is in the suspended state. (Note: due to a bug in Oozie, the resume option in some
cases may not actually resume the suspended instance or instances.)
6. kill: -kill can be used to kill an instance or multiple instances.
7. summary: -summary via the CLI can be used to get the consolidated status of the instances in the specified time period. Each status,
along with the corresponding instance count, is listed for each of the applicable colos.
In all the cases where your request is syntactically correct but logically not, the instance(s) are returned with the same status as before.
For example, trying to resume a KILLED / SUCCEEDED instance returns the instance with status KILLED / SUCCEEDED without actually performing any
operation, because only an instance in the SUSPENDED state can be resumed. The same holds for attempting to rerun a SUSPENDED or RUNNING
instance, etc.
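A hedged sketch of instance management from the CLI (the process name and the time range are hypothetical):

    falcon instance -type process -name sample-process -rerun \
        -start "2010-01-02T01:00Z" -end "2010-01-02T02:00Z"
    falcon instance -type process -name sample-process -suspend \
        -start "2010-01-02T01:00Z" -end "2010-01-02T02:00Z"
    falcon instance -type process -name sample-process -summary \
        -start "2010-01-02T00:00Z" -end "2010-01-03T00:00Z"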
Retention
In coherence with its feed lifecycle management philosophy, Falcon allows the user to retain data in the system for a specific period of time for a
scheduled feed. The user can specify the retention period in the respective feed/data xml, for each cluster the feed can belong to, in the following
manner:
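The XML snippet itself is missing from this copy; a minimal hedged sketch of a retention tag inside a feed's cluster element (the cluster name and validity window are hypothetical):

    <cluster name="testCluster">
        <validity start="2012-01-01T00:00Z" end="2013-01-01T00:00Z"/>
        <retention limit="hours(10)" action="DELETE"/>
    </cluster>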
The 'limit' attribute can be specified in units of minutes/hours/days/months, with a corresponding numeric value attached to it. It essentially
instructs the system to retain data spanning from the current moment back to the time specified in the attribute. Any data
beyond the limit (past/future) is erased from the system.
With the integration of Hive, Falcon also provides retention for tables in the Hive catalog.
Example:
If the retention period is 10 hours, and the policy kicks in at time 't', the data retained by the system is essentially that in the range [t-10h, t]. Any data before
t-10h and after t is removed from the system.
The 'action' attribute can take the values DELETE or ARCHIVE. Based on the tag's value, the data eligible for removal is either deleted or archived.
Replication
Falcon's feed lifecycle management also supports feed replication across different clusters out of the box. Multiple source clusters and target
clusters can be defined in the feed definition. Falcon replicates the data across the clusters using Hadoop's DistCp version 2 whenever the feed is
scheduled.
The frequency at which the data is replicated is governed by the frequency specified in the feed definition. Ideally, the feed's data path should have
the same granularity as the frequency of the feed, i.e. if the frequency of the feed is hours(3), then the data path should go down to the level /${YEAR}
/${MONTH}/${DAY}/${HOUR}.
If more than one source cluster is defined, then a partition expression is compulsory; a partition can also contain a constant. The expression is required to
avoid copying data from different source locations to the same target location. Also, only the data in the partition is considered for replication, if it is
present. The number of partitions defined in the cluster should be less than or equal to the number of partitions declared in the feed definition.
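The feed XML that the next paragraph refers to as "the above example" is missing from this copy; a hedged sketch consistent with it, with two source clusters and one target cluster (the validity windows and partition expressions are hypothetical; the cluster names are the ones referenced below):

    <clusters>
        <cluster name="sourceCluster1" type="source" partition="${cluster.colo}">
            <validity start="2012-01-01T00:00Z" end="2013-01-01T00:00Z"/>
        </cluster>
        <cluster name="sourceCluster2" type="source" partition="${cluster.colo}">
            <validity start="2012-01-01T00:00Z" end="2013-01-01T00:00Z"/>
        </cluster>
        <cluster name="backupCluster" type="target">
            <validity start="2012-01-01T00:00Z" end="2013-01-01T00:00Z"/>
        </cluster>
    </clusters>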
Falcon uses a pull-based replication mechanism: on every target cluster, for a given source cluster, a coordinator is scheduled which pulls the
data from the source cluster using DistCp. So in the above example, two coordinators are scheduled in backupCluster: one which pulls the data from
sourceCluster1 and another from sourceCluster2. Also, for every feed instance that is replicated, Falcon sends a JMS message on success or
failure of the replication instance.
Replication can be scheduled with a past date; the time frame considered for replication is the overlapping window of the start and end times of the
source and target clusters. E.g. if s1 and e1 are the start and end times of the source cluster, and s2 and e2 those of the target cluster, then the coordinator
is scheduled in the target cluster with start time max(s1,s2) and end time min(e1,e2).
A feed can also optionally specify a delay for replication instances in the cluster tag; the delay governs when each replication instance runs. If the
frequency of the feed is hours(2) and the delay is hours(1), then a replication instance runs every 2 hours and replicates data with an offset of 1 hour:
at 09:00 UTC, the feed instance eligible for replication is the 08:00 one; at 11:00 UTC, the 10:00 UTC feed instance is eligible; and so on.
Archival as Replication
Falcon allows users to archive data from on-premises storage to the cloud, either Azure WASB or S3. It uses the underlying replication mechanism for archiving data from
source to target. The archival URI is specified as the overridden location for the target cluster.
Example:
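The example from the original page is missing; a hedged sketch of a target cluster whose data location is overridden with a cloud URI (the container, account and path are hypothetical):

    <cluster name="cloudCluster" type="target">
        <validity start="2012-01-01T00:00Z" end="2013-01-01T00:00Z"/>
        <locations>
            <location type="data" path="wasb://myfeed@mystorageaccount.blob.core.windows.net/data/${YEAR}-${MONTH}-${DAY}"/>
        </locations>
    </cluster>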
Relation between a feed's retention limit and its late arrival cut-off period:
For reasons that are obvious, Falcon has an external validation that ensures the user always specifies the feed retention limit to be more than the
feed's allowed late arrival period. If this rule is violated, the feed submission call itself returns an error.
Cross entity validations
The above schematic shows the dependencies between entities in Falcon. The arrows in the diagram point from a dependency to its dependent.
Let's state one simple rule here, which we will keep referring to time and again while talking about entities: a dependency in the system
cannot be removed unless all of its dependents are removed first. This holds true for all transitive dependencies as well.
Now, let's follow that up with a simple illustration of a Falcon job:
Let's consider a process P that refers to feed F1 as an input feed and generates feed F2 as an output feed. These feeds/processes are
associated with a cluster C1.
The order of submission of this job would be:
C1 -> F1/F2 (in any order) -> P
The order of removal of this job from the system is exactly the opposite, i.e.:
P -> F1/F2 (in any order) -> C1
Please note that there might be multiple processes referring to a particular feed, or a single feed belonging to multiple clusters. In that event, none of the
dependencies can be removed unless ALL of their dependents are removed first. Attempting to do so results in an error message and a 400
Bad Request response.
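As illustrated, a hedged sketch of the submission order from the CLI (the file names are hypothetical); removal with -delete would run in the exact opposite order:

    falcon entity -type cluster -submit -file C1.xml
    falcon entity -type feed -submit -file F1.xml
    falcon entity -type feed -submit -file F2.xml
    falcon entity -type process -submit -file P.xml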
A feed or process refers to a cluster by name; the key for the same is the 'name' attribute of the individual cluster. Likewise, a process refers to its
input and output feeds by the feeds' 'name' attributes, and any referenced entity must already be submitted.
Example:
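The Feed XML and Process XML bodies from the original page are lost; a hedged sketch of how the name-based references line up (all entity names are hypothetical):

    Feed XML:
        <feed name="clicksFeed" ...>
            <clusters>
                <cluster name="testCluster" type="source"> ... </cluster>
            </clusters>
        </feed>

    Process XML:
        <process name="clicksProcess" ...>
            <inputs>
                <input name="input" feed="clicksFeed" start="today(0,0)" end="now(0,0)"/>
            </inputs>
        </process>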
* The time interpretation of the tags indicating the start and end instances for a particular input feed in the process xml should lie well
within the time span specified in the <validity> tag of that feed.
Example:
1. In the following scenario, process submission will result in an error:
Process XML:
Feed XML:
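The XML bodies are missing from this copy; a hedged sketch consistent with the explanation below (the feed name is hypothetical; the instance range and validity follow from the explanation):

    Process XML (input):
        <input name="input" feed="sampleFeed" start="now(0,-60)" end="now(0,-20)"/>

    Feed XML (validity):
        <validity start="2009-01-01T00:00Z" end="2009-12-31T23:59Z"/>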
Explanation: the process input for the feed spans a 40-minute interval, [-60m, -20m], relative to the current timestamp (which, let's
assume, is 'today' as per the 'now' directive). However, the feed validity is a 1-year period in 2009, which makes the reference anachronistic.
2. The following example would work just fine:
Process XML:
Feed XML:
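Again, the XML bodies are missing; a hedged sketch in which the feed validity covers the document's date of 03/03/2012 (names are hypothetical):

    Process XML (input):
        <input name="input" feed="sampleFeed" start="now(0,-60)" end="now(0,-20)"/>

    Feed XML (validity):
        <validity start="2009-01-01T00:00Z" end="2012-12-31T00:00Z"/>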
since at the time of writing this document (03/03/2012), the feed validity encapsulates the process input's start and end instances.
Failure to follow any of the above rules results in a process submission failure.
NOTE: Even though the above check ensures that the timelines are not anachronistic, if the input data is not present in the system for the specified
time period, the process can still be submitted and scheduled, but all instances created will remain in a WAITING state until the data is actually provided
on the cluster.
NOTE: Feeds configured with table storage do not support late input data handling at this point. This will be made available in the near future.
Idempotency
All operations in Falcon are idempotent. That is, if you make the same request to the Falcon server / Prism again, you will get a SUCCESSFUL response if
it was SUCCESSFUL on the first attempt. For example, you submit a new process / feed and get a SUCCESSFUL message back. Now if you run the
same command / API request on the same entity, you will again get a SUCCESSFUL message. The same is true for other operations like schedule, kill,
suspend and resume. Idempotency also takes care of the situation where a request is sent through Prism and fails on one or more servers. For
example, suppose Prism is configured to send requests to 3 servers. First the user sends a request to SUBMIT a process on all 3 of them, and receives a
SUCCESSFUL response from each. Then, due to some issue, one of the servers goes down, and the user sends a request to schedule the submitted process.
This time they will receive a response with PARTIAL status and a FAILURE message from the server that has gone down. If the user checks, they will find
the process started and running on the 2 SUCCESSFUL servers. Later, the issue with the failed server is resolved and it is brought back up.
Sending the SCHEDULE request again through Prism will then result in a SUCCESSFUL response from Prism as well as from all three servers, but this time
the process will be SCHEDULED only on the server that had failed earlier; the other two will keep running as before.
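A hedged sketch of that retry from the CLI (the entity name is hypothetical, and the response annotations are illustrative, not verbatim server output):

    falcon entity -type process -name sample-process -schedule
    (returns PARTIAL: one of the three configured servers is down)
    falcon entity -type process -name sample-process -schedule
    (returns SUCCESSFUL: the recovered server schedules now; the other two are untouched)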
Falcon EL Expressions
The Falcon expression language can be used in the process definition for giving the start and end instances for various feeds.
Before going into how to use Falcon EL expressions, it is necessary to understand what instance and instance start time refer to with respect to
Falcon.
Let's consider the part of a process definition below:
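The snippet itself is missing from this copy; a hedged sketch consistent with the description that follows (the process name is hypothetical; the cluster name corp, the validity window and the 30-minute frequency come from the text):

    <process name="sample-process" ...>
        <clusters>
            <cluster name="corp">
                <validity start="2010-01-02T01:00Z" end="2011-01-03T03:00Z"/>
            </cluster>
        </clusters>
        <frequency>minutes(30)</frequency>
        ...
    </process>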
The above definition says that the process will start on the 2nd of Jan 2010 at 1 am and will end on the 3rd of Jan 2011 at 3 am on cluster corp. The process
will also start a user-defined workflow (which we will call an instance) every 30 minutes.
This means that starting at 2010-01-02T01:00Z, every 30 minutes an instance will start and run the user-defined workflow. If this workflow needs some input data
and produces some output, the user specifies that in the <inputs> and <outputs> tags. Since the inputs that the process takes can be distributed over a
wide range, we bound them by giving a "start" and "end" instance for the input. The output is only one location, so only a single instance is given. The timeout
specifies how long a given instance should wait for input data before being terminated by the workflow engine.
Coming back to instance start time: since an instance will start every 30 minutes beginning at 2010-01-02T01:00Z, the time it is scheduled to start is called its
instance time. For example, the first few instance times for the above example are:
2010-01-02T01:00Z
2010-01-02T01:30Z
2010-01-02T02:00Z
2010-01-02T02:30Z
and so on.
Now let's get to how to use the expression language. The one thing to keep in mind is that all EL evaluations are done based on the start time of the instance, and
every instance will have different inputs / outputs based on the feed instances given in the process definition.
The parameters in the various EL expressions can be positive, zero or negative. Positive values indicate that many units in the future, zero means the base
time the EL is resolved against, and negative values indicate that many units in the past.
Note: if no instance is created at the resolved time, then the instance immediately before it is considered.
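A hedged sketch of an EL expression in a process input (the feed name is hypothetical; now(hours,minutes) is resolved relative to the instance time):

    <input name="input" feed="sampleFeed" start="now(0,-90)" end="now(0,0)"/>

For the 2010-01-02T01:00Z instance above, now(0,-90) would resolve to 2010-01-01T23:30Z, i.e. 90 minutes before the instance time; per the note above, if no feed instance exists exactly at that time, the instance immediately before it is considered.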
Lineage
Falcon adds the ability to capture lineage for both entities and their associated instances. It also captures the metadata tags associated with each of the
entities as relationships. The following relationships are captured:
owner of entities - User
data classification tags
groups defined in feeds
Relationships between entities
Clusters associated with Feed and Process entity
Input and Output feeds for a Process
Instances refer to corresponding entities
Lineage is exposed in 3 ways:
REST API
CLI
Dashboard - Interactive lineage for Process instances
This feature is enabled by default but can be disabled by removing the following from the startup properties:
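The properties snippet is missing from this copy; assuming the 0.6 startup.properties layout, it presumably refers to the metadata/lineage service registered in the application services list, something like:

    *.application.services=...,\
        org.apache.falcon.metadata.MetadataMappingService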
Lineage is only captured for Process executions. A future release will capture lineage for lifecycle policies such as replication and retention.
Security
Security is detailed in Security.
Recipes
Recipes are detailed in Recipes.
Monitoring
Monitoring and Operationalizing Falcon is detailed in Operability.
Backwards Compatibility
Backwards compatibility instructions are detailed here.