#################################################################################################
# WINDFLOW ACCEPTED SIGNATURES #
#################################################################################################
This file lists all the possible signatures that can be used to create WindFlow operators. If you
provide a functional logic with a wrong signature during the creation of an operator (through its
builder), you receive a specific error message at compile time (through some static asserts).
For basic and window-based operators, the functional logic, as well as the key extractor and the
closing logic, can be provided as functions, lambdas, or functor objects exposing an operator()
with the right signature.
For GPU operators, the functional logic and the key extractor logic must be provided as a
__host__ __device__ lambda, or through a functor object exposing a __host__ __device__ operator()
method with the right signature.
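
As a minimal sketch of how a logic is passed to a builder (tuple_t and result_t are hypothetical
types reused by all the sketches in this file, which also assume that <windflow.hpp> and the
standard headers they need are included):

    struct tuple_t { int key; int value; };
    struct result_t { int key; int sum; };

    // in-place Map logic matching the signature void(tuple_t &)
    auto map_logic = [] (tuple_t &t) {
        t.value = t.value * 2;
    };

    wf::Map map = wf::Map_Builder(map_logic)
                      .withParallelism(2)
                      .withName("map")
                      .build();

Passing a logic whose signature is not listed below makes the corresponding static assert fire
at compile time.
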
SOURCE
------
void(Source_Shipper<tuple_t> &);
void(Source_Shipper<tuple_t> &, RuntimeContext &);
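
A sketch of a Source logic matching the first signature, generating a finite stream through the
shipper (the stream ends when the logic returns):

    auto source_logic = [] (wf::Source_Shipper<tuple_t> &shipper) {
        for (int i = 0; i < 1000; i++) {
            tuple_t t{i % 4, i};        // hypothetical stream of 1000 tuples over 4 keys
            shipper.push(std::move(t)); // emit the tuple downstream
        }
    };
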
KAFKA_SOURCE
------------
bool(std::optional<std::reference_wrapper<RdKafka::Message>>, Source_Shipper<result_t> &);
bool(std::optional<std::reference_wrapper<RdKafka::Message>>, Source_Shipper<result_t> &, KafkaRuntimeContext &);
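
A sketch of the deserialization logic of a Kafka_Source. The payload is assumed here to carry a
textual integer, and the boolean return value is assumed to tell the source whether to keep
consuming messages (true) or to terminate (false):

    auto kafka_source_logic = [] (std::optional<std::reference_wrapper<RdKafka::Message>> msg,
                                  wf::Source_Shipper<result_t> &shipper) -> bool {
        if (msg) { // an empty optional means no message was received within the timeout
            std::string payload(static_cast<const char *>(msg->get().payload()),
                                msg->get().len());
            result_t r{0, std::stoi(payload)}; // hypothetical deserialization
            shipper.push(std::move(r));
        }
        return true; // keep consuming
    };
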
FILTER
------
bool(tuple_t &);
bool(tuple_t &, RuntimeContext &);
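
A sketch of a Filter predicate (tuples for which the logic returns false are dropped):

    auto filter_logic = [] (tuple_t &t) -> bool {
        return t.value % 2 == 0; // keep even values, drop odd ones
    };
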
FILTER_GPU
----------
__host__ __device__ bool(tuple_t &);
__host__ __device__ bool(tuple_t &, state_t &);
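
For the GPU variant, the logic must be callable on the device. A sketch using a functor with a
__host__ __device__ operator():

    struct Filter_GPU_Functor {
        __host__ __device__ bool operator()(tuple_t &t) {
            return t.value > 0; // keep positive values only
        }
    };
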
P_FILTER
--------
bool(tuple_t &, state_t &);
bool(tuple_t &, state_t &, RuntimeContext &);
MAP
---
void(tuple_t &);
void(tuple_t &, RuntimeContext &);
result_t(const tuple_t &);
result_t(const tuple_t &, RuntimeContext &);
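
A Map logic can modify tuples in place (first two signatures) or produce results of a different
type (last two). A sketch of the non in-place variant:

    auto map_logic = [] (const tuple_t &t) -> result_t {
        return result_t{t.key, t.value * 2}; // build a new result from the input tuple
    };
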
MAP_GPU
-------
__host__ __device__ void(tuple_t &);
__host__ __device__ void(tuple_t &, state_t &);
P_MAP
-----
void(tuple_t &, state_t &);
void(tuple_t &, state_t &, RuntimeContext &);
result_t(const tuple_t &, state_t &);
result_t(const tuple_t &, state_t &, RuntimeContext &);
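
In the P_Map signatures, state_t is the user-defined state associated with the key of the input
tuple. A sketch using a hypothetical counter as per-key state:

    struct state_t { int count = 0; }; // hypothetical per-key state

    auto pmap_logic = [] (tuple_t &t, state_t &s) {
        s.count++;          // update the state of this tuple's key
        t.value += s.count; // enrich the tuple with the running count
    };
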
FLATMAP
-------
void(const tuple_t &, Shipper<result_t> &);
void(const tuple_t &, Shipper<result_t> &, RuntimeContext &);
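
A FlatMap logic emits zero or more results per input tuple through the shipper. A sketch:

    auto flatmap_logic = [] (const tuple_t &t, wf::Shipper<result_t> &shipper) {
        for (int i = 0; i < t.value; i++) {
            result_t r{t.key, i};
            shipper.push(std::move(r)); // each push emits one result downstream
        }
    };
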
P_FLATMAP
---------
void(const tuple_t &, state_t &, Shipper<result_t> &);
void(const tuple_t &, state_t &, Shipper<result_t> &, RuntimeContext &);
REDUCE
------
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
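
A Reduce logic folds each input tuple into the result accumulated for its key. A sketch computing
a per-key running sum:

    auto reduce_logic = [] (const tuple_t &t, result_t &r) {
        r.sum += t.value; // r holds the running result of the tuple's key
    };
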
REDUCE_GPU
----------
__host__ __device__ tuple_t(const tuple_t &, const tuple_t &);
P_REDUCE
--------
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
KEYED_WINDOWS
-------------
void(const Iterable<tuple_t> &, result_t &);
void(const Iterable<tuple_t> &, result_t &, RuntimeContext &);
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
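
The first two signatures are the non-incremental form, where the whole window content is provided
as an Iterable; the last two are the incremental form, invoked once per tuple entering the window.
A sketch of both, computing a per-window sum:

    // non-incremental: process the whole window content at once
    auto win_logic = [] (const wf::Iterable<tuple_t> &win, result_t &r) {
        r.sum = 0;
        for (const auto &t : win) {
            r.sum += t.value;
        }
    };

    // incremental: fold one tuple at a time into the window result
    auto win_logic_inc = [] (const tuple_t &t, result_t &r) {
        r.sum += t.value;
    };
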
P_KEYED_WINDOWS
---------------
void(const Iterable<tuple_t> &, result_t &);
void(const Iterable<tuple_t> &, result_t &, RuntimeContext &);
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
PARALLEL_WINDOWS
----------------
void(const Iterable<tuple_t> &, result_t &);
void(const Iterable<tuple_t> &, result_t &, RuntimeContext &);
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
PANED_WINDOWS
-------------
The corresponding builder needs two parameters (for the PLQ and WLQ logics) with the following accepted signatures:
* PLQ
void(const Iterable<tuple_t> &, tuple_t &);
void(const Iterable<tuple_t> &, tuple_t &, RuntimeContext &);
void(const tuple_t &, tuple_t &);
void(const tuple_t &, tuple_t &, RuntimeContext &);
* WLQ
void(const Iterable<tuple_t> &, result_t &);
void(const Iterable<tuple_t> &, result_t &, RuntimeContext &);
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
MAPREDUCE_WINDOWS
-----------------
The corresponding builder needs two parameters (for the MAP and REDUCE logics) with the following accepted signatures:
* MAP
void(const Iterable<tuple_t> &, tuple_t &);
void(const Iterable<tuple_t> &, tuple_t &, RuntimeContext &);
void(const tuple_t &, tuple_t &);
void(const tuple_t &, tuple_t &, RuntimeContext &);
* REDUCE
void(const Iterable<tuple_t> &, result_t &);
void(const Iterable<tuple_t> &, result_t &, RuntimeContext &);
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
FFAT_Windows
------------
The corresponding builder needs two parameters (for the lift and combine logics) with the following accepted signatures:
* Lift
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
* Combine
void(const result_t &, const result_t &, result_t &);
void(const result_t &, const result_t &, result_t &, RuntimeContext &);
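
A sketch of the two logics: lift maps a single tuple into a partial result, while combine merges
two partial results (the combine is expected to be an associative operation):

    // Lift: turn one tuple into a partial window result
    auto lift_logic = [] (const tuple_t &t, result_t &r) {
        r.sum = t.value;
    };

    // Combine: merge two partial results (associative)
    auto combine_logic = [] (const result_t &r1, const result_t &r2, result_t &out) {
        out.sum = r1.sum + r2.sum;
    };
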
FFAT_Windows_GPU
----------------
The corresponding builder needs two parameters (for the lift and combine logics) with the following accepted signatures:
* Lift
__host__ __device__ void(const tuple_t &, result_t &);
* Combine
__host__ __device__ void(const result_t &, const result_t &, result_t &);
SINK
----
void(std::optional<tuple_t> &);
void(std::optional<tuple_t> &, RuntimeContext &);
void(std::optional<std::reference_wrapper<tuple_t>>);
void(std::optional<std::reference_wrapper<tuple_t>>, RuntimeContext &);
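
A sketch of a Sink logic matching the first signature, where the empty optional is assumed to
mark the end of the stream:

    auto sink_logic = [] (std::optional<tuple_t> &t) {
        if (t) {
            std::cout << "received tuple with value " << t->value << std::endl;
        }
        else {
            // empty optional: the stream is over
        }
    };
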
P_SINK
------
void(std::optional<tuple_t> &, state_t &);
void(std::optional<tuple_t> &, state_t &, RuntimeContext &);
void(std::optional<std::reference_wrapper<tuple_t>>, state_t &);
void(std::optional<std::reference_wrapper<tuple_t>>, state_t &, RuntimeContext &);
KAFKA_SINK
----------
wf::wf_kafka_sink_msg(tuple_t &);
wf::wf_kafka_sink_msg(tuple_t &, KafkaRuntimeContext &);
CLOSING LOGIC
-------------
void(RuntimeContext &);
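
A sketch of a closing logic, invoked when a replica of the operator terminates (assumed here to
be set through the builder's withClosingFunction method):

    auto closing_logic = [] (wf::RuntimeContext &ctx) {
        // release per-replica resources (files, connections, ...) at shutdown
    };
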
CLOSING LOGIC (Kafka Operators)
-------------------------------
void(KafkaRuntimeContext &);
KEY EXTRACTOR
-------------
key_t(const tuple_t &); // for CPU operators
__host__ __device__ key_t(const tuple_t &); // for GPU operators
Nota bene: for GPU operators, the user must provide __device__ implementations of the
operator== and operator< functions for the key type key_t.
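
A sketch of a key extractor for the hypothetical tuple_t (assumed to be attached to an operator
through its builder, e.g., with withKeyBy):

    auto key_extractor = [] (const tuple_t &t) -> int {
        return t.key; // key_t is deduced as int here
    };
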
SPLITTING LOGIC
---------------
integral_t(const tuple_t &);
std::vector<integral_t>(const tuple_t &);
integral_t(tuple_t &);
std::vector<integral_t>(tuple_t &);
Nota bene: integral_t is any C++ integral type (e.g., short, int, long, and so forth).
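
The splitting logic routes each input tuple to one or more of the branches created by splitting a
stream. A sketch of both forms:

    // single destination: route by the parity of the value
    auto split_logic = [] (const tuple_t &t) -> int {
        return t.value % 2; // 0 -> first branch, 1 -> second branch
    };

    // multiple destinations: broadcast the tuple to both branches
    auto split_multi_logic = [] (const tuple_t &t) {
        return std::vector<int>{0, 1};
    };
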