Commit 3aa5d90 (parent: ff1184f)

Update What's new section for 5.2.0-M2

1 file changed: spring-batch-docs/modules/ROOT/pages/whatsnew.adoc (+110, -0)

@@ -6,6 +6,11 @@ This section highlights the major changes in Spring Batch 5.2. For the complete

Spring Batch 5.2 includes the following features:

* xref:whatsnew.adoc#dependencies-upgrade[Dependencies upgrade]
* xref:whatsnew.adoc#mongodb-job-repository-support[MongoDB job repository support]
* xref:whatsnew.adoc#new-resourceless-job-repository[New resourceless job repository]
* xref:whatsnew.adoc#composite-item-reader-implementation[Composite Item Reader implementation]
* xref:whatsnew.adoc#new-adapters-for-java-util-function-apis[New adapters for java.util.function APIs]
* xref:whatsnew.adoc#concurrent-steps-with-blocking-queue-item-reader-and-writer[Concurrent steps with blocking queue item reader and writer]
* xref:whatsnew.adoc#query-hints-support[Query hints support in JPA item readers]
* xref:whatsnew.adoc#data-class-support[Data class support in JDBC item readers]
* xref:whatsnew.adoc#configurable-line-separator-in-recursivecollectionlineaggregator[Configurable line separator in RecursiveCollectionLineAggregator]
@@ -25,6 +30,111 @@ In this release, the Spring dependencies are upgraded to the following versions:

* Spring Kafka 3.3.0
* Micrometer 1.14.0

[[mongodb-job-repository-support]]
== MongoDB job repository support

This release introduces the first NoSQL job repository implementation, backed by MongoDB.
Similar to the relational job repository implementations, Spring Batch comes with a script to create the
necessary collections in MongoDB in order to save and retrieve batch meta-data.

This implementation requires MongoDB version 4 or later and is based on Spring Data MongoDB.
To use this job repository, you only need to define a `MongoTemplate` and a
`MongoTransactionManager`, which are required by the newly added `MongoJobRepositoryFactoryBean`:

```
@Bean
public JobRepository jobRepository(MongoTemplate mongoTemplate, MongoTransactionManager transactionManager) throws Exception {
    MongoJobRepositoryFactoryBean jobRepositoryFactoryBean = new MongoJobRepositoryFactoryBean();
    jobRepositoryFactoryBean.setMongoOperations(mongoTemplate);
    jobRepositoryFactoryBean.setTransactionManager(transactionManager);
    jobRepositoryFactoryBean.afterPropertiesSet();
    return jobRepositoryFactoryBean.getObject();
}
```

Once the MongoDB job repository is defined, you can inject it into any job or step as a regular job repository.
You can find a complete example in the https://github.com/spring-projects/spring-batch/blob/main/spring-batch-core/src/test/java/org/springframework/batch/core/repository/support/MongoDBJobRepositoryIntegrationTests.java[MongoDBJobRepositoryIntegrationTests].
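Wiring the repository into a job then follows the usual Spring Batch 5 builder style. The following is a minimal sketch, not an excerpt from the official docs; the job and step names are made up, and it assumes the `MongoTransactionManager` defined alongside the repository also drives the step:

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.context.annotation.Bean;
import org.springframework.transaction.PlatformTransactionManager;

// Hypothetical wiring: the MongoDB-backed JobRepository is injected like any other.
@Bean
public Job job(JobRepository jobRepository, PlatformTransactionManager transactionManager) {
    Step step = new StepBuilder("step", jobRepository)
            .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED, transactionManager)
            .build();
    return new JobBuilder("job", jobRepository)
            .start(step)
            .build();
}
```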
[[new-resourceless-job-repository]]
== New resourceless job repository

In v5, the in-memory Map-based job repository implementation was removed for several reasons.
The only job repository implementation left in Spring Batch was the JDBC implementation, which requires a data source.
While this works well with in-memory databases like H2 or HSQLDB, requiring a data source was a strong constraint
for many members of our community who used the Map-based repository without any additional dependency.

In this release, we introduce a `JobRepository` implementation that does not use or store batch meta-data in any form
(not even in memory). It is a "NoOp" implementation that throws away batch meta-data and does not interact with any resource
(hence the name "resourceless job repository", named after the "resourceless transaction manager").

This implementation is intended for use cases where restartability is not required and where the execution context is not involved
in any way (like sharing data between steps through the execution context, or partitioned steps where partition meta-data is
shared between the manager and workers through the execution context, and so on).

This implementation is suitable for one-time jobs executed in their own JVM. It works with transactional steps (configured with
a `DataSourceTransactionManager`, for instance) as well as non-transactional steps (configured with a `ResourcelessTransactionManager`).
The implementation is not thread-safe and should not be used in any concurrent environment.
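Because it holds no state and needs no resource, declaring it takes a single line. This is a sketch under the assumption that the new implementation is exposed as a `ResourcelessJobRepository` class in the `org.springframework.batch.core.repository.support` package, instantiated directly rather than through a factory bean:

```java
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.ResourcelessJobRepository;
import org.springframework.context.annotation.Bean;

// No data source, no schema, no clean-up: batch meta-data is simply discarded.
@Bean
public JobRepository jobRepository() {
    return new ResourcelessJobRepository();
}
```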
[[composite-item-reader-implementation]]
== Composite Item Reader implementation

Similar to the `CompositeItemProcessor` and `CompositeItemWriter`, we introduce a new `CompositeItemReader` implementation
that is designed to read data sequentially from several sources having the same format. This is useful when data is spread
over different resources and writing a custom reader is not an option.

A `CompositeItemReader` works like other composite artifacts, by delegating the reading operation to regular item readers,
in order. Here is a quick example showing a composite reader that reads person data from a flat file, then from a database table:

```
@Bean
public FlatFileItemReader<Person> itemReader1() {
    return new FlatFileItemReaderBuilder<Person>()
            .name("personFileItemReader")
            .resource(new FileSystemResource("persons.csv"))
            .delimited()
            .names("id", "name")
            .targetType(Person.class)
            .build();
}

@Bean
public JdbcCursorItemReader<Person> itemReader2() {
    String sql = "select * from persons";
    return new JdbcCursorItemReaderBuilder<Person>()
            .name("personTableItemReader")
            .dataSource(dataSource())
            .sql(sql)
            .beanRowMapper(Person.class)
            .build();
}

@Bean
public CompositeItemReader<Person> itemReader() {
    return new CompositeItemReader<>(Arrays.asList(itemReader1(), itemReader2()));
}
```
[[new-adapters-for-java-util-function-apis]]
== New adapters for java.util.function APIs

Similar to the `FunctionItemProcessor` that adapts a `java.util.function.Function` to an item processor, this release
introduces several new adapters for other `java.util.function` interfaces like `Supplier`, `Consumer` and `Predicate`.

The newly added adapters are: `SupplierItemReader`, `ConsumerItemWriter` and `PredicateFilteringItemProcessor`.
For more details about these new adapters, please refer to the https://github.com/spring-projects/spring-batch/tree/main/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/function[org.springframework.batch.item.function] package.
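The adaptation these classes perform can be sketched in plain `java.util.function` terms. The following is illustrative only and does not use the Spring Batch adapter classes: a `Supplier` stands in for a reader, a `Predicate` for a filtering processor, and a `Consumer` for a writer:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionAdaptersSketch {

    // Drives a reader/processor/writer "chunk" built from plain functional interfaces.
    public static List<String> runPipeline(List<String> items) {
        Iterator<String> source = items.iterator();
        // A Supplier plays the role of an item reader: each call yields one item, null when exhausted.
        Supplier<String> reader = () -> source.hasNext() ? source.next() : null;
        // A Predicate plays the role of a filtering item processor.
        Predicate<String> filter = item -> item.startsWith("b");
        // A Consumer plays the role of an item writer.
        List<String> written = new ArrayList<>();
        Consumer<String> writer = written::add;

        for (String item = reader.get(); item != null; item = reader.get()) {
            if (filter.test(item)) {
                writer.accept(item);
            }
        }
        return written;
    }

    public static void main(String[] args) {
        System.out.println(runPipeline(List.of("foo", "bar", "baz"))); // prints [bar, baz]
    }
}
```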
[[concurrent-steps-with-blocking-queue-item-reader-and-writer]]
== Concurrent steps with blocking queue item reader and writer

The https://en.wikipedia.org/wiki/Staged_event-driven_architecture[staged event-driven architecture] (SEDA) is a
powerful architecture style to process data in stages connected by queues. This style is directly applicable to data
pipelines and easily implemented in Spring Batch thanks to the ability to design jobs as a sequence of steps.

The only missing piece here is how to read and write data to intermediate queues. This release introduces an item reader
and item writer to read data from, and write it to, a `BlockingQueue`. With these two new classes, one can design a first step
that prepares data in a queue and a second step that consumes data from the same queue. This way, both steps can run concurrently
to process data efficiently in a non-blocking, event-driven fashion.
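The queue hand-off that the new reader and writer wrap can be shown with a plain `java.util.concurrent.BlockingQueue`. The sketch below does not use the Spring Batch classes; it only illustrates the pattern of one thread producing into a shared queue while another drains it, with a poll timeout so an empty queue eventually ends the consuming side:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueHandOffSketch {

    // "Step 1" produces items into the shared queue; "step 2" drains it concurrently.
    public static List<String> run() throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                queue.add("item-" + i); // the writer side: put items on the queue
            }
        });
        producer.start();

        List<String> consumed = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            // the reader side: poll with a timeout so an empty queue can signal the end
            String item = queue.poll(1, TimeUnit.SECONDS);
            if (item != null) {
                consumed.add(item);
            }
        }
        producer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints [item-1, item-2, item-3]
    }
}
```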
[[query-hints-support]]
== Query hints support in JPA item readers
