Writers
A RecordWriter writes the payload of a record to a data sink. Easy Batch comes with common record writers to write data to a variety of data sinks:
- Databases
- Files
- JMS queues
- The standard output/error
- etc.
Record writers write the payload of records to the data sink, except the BlockingQueueRecordWriter, which writes the record itself (and not its payload) to the target queue.
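For reference, a record writer boils down to three lifecycle methods: open the sink, write a batch of records, and close the sink. Here is a minimal sketch of a custom writer that appends record payloads to an in-memory buffer (the class name is hypothetical, and the open/writeRecords/close contract and Batch iteration are assumed from recent versions):

```java
import org.easybatch.core.record.Batch;
import org.easybatch.core.record.Record;
import org.easybatch.core.writer.RecordWriter;

// Hypothetical custom writer: appends record payloads to an in-memory buffer.
public class StringBuilderRecordWriter implements RecordWriter {

    private final StringBuilder buffer = new StringBuilder();

    @Override
    public void open() throws Exception {
        // nothing to open for an in-memory sink
    }

    @Override
    public void writeRecords(Batch batch) throws Exception {
        for (Record record : batch) {
            buffer.append(record.getPayload()).append(System.lineSeparator());
        }
    }

    @Override
    public void close() throws Exception {
        // nothing to close for an in-memory sink
    }

    public String getContent() {
        return buffer.toString();
    }
}
```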
Here is a table of built-in writers and how to use them:
| Data sink | Writer | Module |
|---|---|---|
| Output stream | OutputStreamRecordWriter | easybatch-core |
| Standard output | StandardOutputRecordWriter | easybatch-core |
| Standard error | StandardErrorRecordWriter | easybatch-core |
| File | FileRecordWriter | easybatch-core |
| MS Excel file | MsExcelRecordWriter | easybatch-msexcel |
| String | StringRecordWriter | easybatch-core |
| Collection | CollectionRecordWriter | easybatch-core |
| Relational database | JdbcRecordWriter | easybatch-jdbc |
| Relational database | JpaRecordWriter | easybatch-jpa |
| Relational database | HibernateRecordWriter | easybatch-hibernate |
| MongoDB | MongoDBRecordWriter | easybatch-mongodb |
| BlockingQueue | BlockingQueueRecordWriter | easybatch-core |
| BlockingQueue | RoundRobinBlockingQueueRecordWriter | easybatch-core |
| BlockingQueue | ContentBasedBlockingQueueRecordWriter | easybatch-core |
| BlockingQueue | RandomBlockingQueueRecordWriter | easybatch-core |
| JMS queue | JmsQueueRecordWriter | easybatch-jms |
| JMS queue | RoundRobinJmsQueueRecordWriter | easybatch-integration |
| JMS queue | ContentBasedJmsQueueRecordWriter | easybatch-integration |
| JMS queue | RandomJmsQueueRecordWriter | easybatch-integration |
| JMS queue | BroadcastJmsQueueRecordWriter | easybatch-integration |
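Registering one of these writers is just a matter of passing it to the job builder. Here is a minimal sketch (the output file name is a placeholder, and the FileRecordWriter constructor taking a java.io.FileWriter is an assumption to adapt to your version):

```java
Job job = new JobBuilder()
        .writer(new FileRecordWriter(new FileWriter("tweets-out.csv"))) // assumed constructor; adapt to your version
        .build();
```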
- The JdbcRecordWriter handles database transactions: a transaction will be created and committed/rolled back after each batch.
- The JpaRecordWriter expects a Java object as input, not a Record. Make sure to map records to your domain object type before passing them to the JpaRecordWriter (see the sketch after this list). The JpaRecordWriter also handles database transactions: a transaction will be created and committed/rolled back after each batch.
- The HibernateRecordWriter expects a Java object as input, not a Record. Make sure to map records to your domain object type before passing them to the HibernateRecordWriter. This writer also handles database transactions: a transaction will be created and committed/rolled back after each batch.
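For example, writing mapped records through JPA could look like the following sketch (the persistence unit name and batch size are placeholders, and the JpaRecordWriter is assumed to take an EntityManagerFactory):

```java
// Records must already be mapped to your JPA entity type by an upstream mapper
// before they reach the writer.
EntityManagerFactory entityManagerFactory =
        Persistence.createEntityManagerFactory("tweet-pu"); // hypothetical persistence unit name

Job job = new JobBuilder()
        .writer(new JpaRecordWriter(entityManagerFactory))
        .batchSize(50) // one transaction created and committed/rolled back per batch of 50 records
        .build();
```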
Sometimes, the data sink may be temporarily unavailable. In this case, the record writer will fail to write data and the job will be aborted. The RetryableRecordWriter can be used to retry writing data with a delegate RecordWriter and a RetryPolicy:
```java
Job job = new JobBuilder()
        .writer(new RetryableRecordWriter(unreliableDataSinkWriter, new RetryPolicy(5, 1, SECONDS)))
        .build();
```
This makes the writer retry at most 5 times, waiting one second between attempts. If the data sink is still unreachable after 5 attempts, the job will be aborted.
When activated, batch scanning kicks in when an exception occurs during batch writing. The records of the failed batch are then re-written one by one, each as a singleton batch. This makes it possible to skip faulty records and continue the job execution instead of failing the entire job at the first failed batch.
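Assuming batch scanning is toggled through the job builder (the enableBatchScanning option below is an assumption; check the JobBuilder of your version), activation could look like this sketch:

```java
// transactionalWriter is any transactional RecordWriter (e.g. JdbcRecordWriter), declared elsewhere
Job job = new JobBuilder()
        .writer(transactionalWriter)
        .enableBatchScanning(true) // assumption: batch scanning is enabled via this builder option
        .build();
```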
Heads up❗️: This feature works well with transactional writers, where a failed write operation can be re-executed without side effects. A known limitation is that, when used with a non-transactional writer, items might be written twice (for example with a file writer whose output stream is flushed before the exception occurs). To prevent this, a manual rollback action should be performed in the BatchListener#onBatchWritingException method.