.output chapter3.wd
++ Chapter Three - Advanced Request-Reply Patterns
In Chapter Two we worked through the basics of using 0MQ by developing a series of small applications, each time exploring new aspects of 0MQ. We'll continue this approach in this chapter, as we explore advanced patterns built on top of 0MQ's core request-reply pattern.
We'll cover:
* How to create and use message envelopes for request-reply.
* How to use the REQ, REP, DEALER, and ROUTER sockets.
* How to set manual reply addresses using identities.
* How to do custom random scatter routing.
* How to do custom least-recently used routing.
* How to build a higher-level message class.
* How to build a basic request-reply broker.
* How to choose good names for sockets.
* How to simulate a cluster of clients and workers.
* How to build a scalable cloud of request-reply clusters.
* How to use pipeline sockets for monitoring threads.
+++ Request-Reply Envelopes
In the request-reply pattern, the envelope holds the return address for replies. It is how a 0MQ network with no state can create round-trip request-reply dialogs.
You don't, in fact, need to understand how request-reply envelopes work to use them for common cases. When you use REQ and REP, your sockets build and use envelopes automatically. When you write a device (as we covered in the last chapter), you just need to read and write all the parts of a message. 0MQ implements envelopes using multipart data, so if you copy multipart data safely, you implicitly copy envelopes too.
However, getting under the hood and playing with request-reply envelopes is necessary for advanced request-reply work. It's time to explain how ROUTER works, in terms of envelopes:
* When you receive a message from a ROUTER socket, it shoves a brown paper envelope around the message and scribbles on it in indelible ink, "This came from Lucy". Then it gives that to you. That is, the ROUTER socket gives you what came off the wire, wrapped up in an envelope with the reply address on it.
* When you send a message to a ROUTER socket, it rips off that brown paper envelope, tries to read its own handwriting, and if it knows who "Lucy" is, sends the contents back to Lucy. That is the reverse process of receiving a message.
If you leave the brown envelope alone, and then pass that message to another ROUTER socket (e.g. by sending to a DEALER connected to a ROUTER), the second ROUTER socket will in turn stick another brown envelope on it, and scribble the name of that DEALER on it.
The whole point of this is that each ROUTER knows how to send replies back to the right place. All you need to do, in your application, is respect the brown envelopes. Now the REP socket makes sense. It carefully slices open the brown envelopes, one by one, keeps them safely aside, and gives you (the application code that owns the REP socket) the original message. When you send the reply, it re-wraps the reply in the brown paper envelopes, so it can hand the resulting brown package back to the ROUTER sockets down the chain.
Which lets you insert ROUTER-DEALER devices into a request-reply pattern like this:
[[code]]
[REQ] <--> [REP]
[REQ] <--> [ROUTER--DEALER] <--> [REP]
[REQ] <--> [ROUTER--DEALER] <--> [ROUTER--DEALER] <--> [REP]
...etc.
[[/code]]
If you connect a REQ socket to a ROUTER socket, and send one request message, this is what you get when you receive from the ROUTER socket:
[[code type="textdiagram"]]
          +---------------+
Frame 1   | Reply address |        <----- Envelope
          +---+-----------+
Frame 2   |   |                    <------ Empty message part
          +---+-------------------------------------+
Frame 3   | Data                                    |
          +-----------------------------------------+
Figure # - Single-hop request-reply envelope
[[/code]]
Breaking this down:
* The data in frame 3 is what the sending application sends to the REQ socket.
* The empty message part in frame 2 is prepended by the REQ socket when it sends the message to the ROUTER socket.
* The reply address in frame 1 is prepended by the ROUTER before it passes the message to the receiving application.
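As a minimal sketch (assuming a {{router}} socket that a REQ client has connected to, and the s_* string helpers from zhelpers.h), reading these three frames off the ROUTER, and sending a reply back, looks like this:
[[code language="C"]]
char *address = s_recv (router);    //  Frame 1: reply address (can be binary)
char *empty = s_recv (router);      //  Frame 2: empty delimiter added by REQ
assert (*empty == 0);
free (empty);
char *data = s_recv (router);       //  Frame 3: application data
//  To reply, resend the envelope, then the reply data
s_sendmore (router, address);
s_sendmore (router, "");
s_send (router, "World");
free (address);
free (data);
[[/code]]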
Now if we extend this with a chain of devices, we get envelope on envelope, with the newest envelope always stuck at the beginning of the stack:
[[code type="textdiagram"]]
          (Next envelope will go here)
          +---------------+
Frame 1   | Reply address |        <----- Envelope (ROUTER)
          +---------------+
Frame 2   | Reply address |        <----- Envelope (ROUTER)
          +---------------+
Frame 3   | Reply address |        <----- Envelope (ROUTER)
          +---+-----------+
Frame 4   |   |                    <------ Empty message part (REQ)
          +---+-------------------------------------+
Frame 5   | Data                                    |
          +-----------------------------------------+
Figure # - Multihop request-reply envelope
[[/code]]
Here now is a more detailed explanation of the four socket types we use for request-reply patterns:
* DEALER just load-balances (deals out) the messages you send to all connected peers, and fair-queues (deals in) the messages it receives. It is exactly like a PUSH and PULL socket combined.
* REQ prepends an empty message part to every message you send, and removes the empty message part from each message you receive. It then works like DEALER (and in fact is built on DEALER) except it also imposes a strict send / receive cycle.
* ROUTER prepends an envelope with reply address to each message it receives, before passing it to the application. It also chops off the envelope (the first message part) from each message it sends, and uses that reply address to decide which peer the message should go to.
* REP stores all the message parts up to the first empty message part when you receive a message, and passes the rest (the data) to your application. When you send a reply, REP prepends the saved envelopes to the message and sends it back using the same semantics as ROUTER (and in fact REP is built on top of ROUTER) but, matching REQ, imposes a strict receive / send cycle.
REP requires that the envelopes end with an empty message part. If you're not using REQ at the other end of the chain then you must add the empty message part yourself.
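For example, here is a minimal sketch (assuming a {{dealer}} socket already connected to a REP service, and the s_* helpers from zhelpers.h) of talking to REP from a DEALER by adding the delimiter by hand:
[[code language="C"]]
s_sendmore (dealer, "");            //  The empty delimiter REQ would add
s_send (dealer, "Hello");           //  The request data
char *empty = s_recv (dealer);      //  The reply starts with the delimiter
assert (*empty == 0);
free (empty);
char *reply = s_recv (dealer);      //  Followed by the reply data
printf ("Reply: %s\n", reply);
free (reply);
[[/code]]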
So the obvious question about ROUTER is, where does it get the reply addresses from? And the obvious answer is, it uses the socket's identity. As we already learned, a socket can be transient, in which case the //other// socket (ROUTER in this case) generates an identity that it can associate with the socket. Or the socket can be durable, in which case it explicitly tells the other socket (ROUTER, again) its identity, and ROUTER can use that rather than generating a temporary label.
This is what it looks like for transient sockets:
[[code type="textdiagram"]]
+-----------+
|           |
|  Client   |
|           |
+-----------+     +---------+
|    REQ    |     |  Data   |    Client sends this
\-----+-----/     +---------+
      |
      |  "My identity is empty"
      v
/-----------\     +---------+
|  ROUTER   |     |  UUID   |    ROUTER invents UUID to
+-----------+     +-+-------+      use as reply address
|           |       |       |
|  Service  |     +-+-------+
|           |     | Data    |
+-----------+     +---------+
Figure # - ROUTER invents a UUID for transient sockets
[[/code]]
This is what it looks like for durable sockets:
[[code type="textdiagram"]]
+-----------+
|           |    zmq_setsockopt (socket,
|  Client   |        ZMQ_IDENTITY, "Lucy", 4);
|           |
+-----------+     +---------+
|    REQ    |     |  Data   |    Client sends this
\-----+-----/     +---------+
      |
      |  "Hi, my name is Lucy"
      v
/-----------\     +---------+
|  ROUTER   |     | 'Lucy'  |    ROUTER uses identity of
+-----------+     +-+-------+      client as reply address
|           |       |       |
|  Service  |     +-+-------+
|           |     | Data    |
+-----------+     +---------+
Figure # - ROUTER uses identity if it knows it
[[/code]]
Let's observe the above two cases in practice. This program dumps the contents of the message parts that a ROUTER socket receives from two REP sockets, one not using identities, and one using an identity 'Hello':
[[code type="example" title="Identity check" name="identity"]]
[[/code]]
Here is what the dump function prints:
[[code]]
----------------------------------------
[017] 00314F043F46C441E28DD0AC54BE8DA727
[000]
[026] ROUTER uses a generated UUID
----------------------------------------
[005] Hello
[000]
[038] ROUTER socket uses REQ's socket identity
[[/code]]
+++ Custom Request-Reply Routing
We already saw that ROUTER uses the message envelope to decide which client to route a reply back to. Now let me express that in another way: //ROUTER will route messages asynchronously to any peer connected to it, if you provide the correct routing address via a properly constructed envelope.//
So ROUTER is really a fully controllable router. We'll dig into this magic in detail.
But first, and because we're going to go off-road into some rough and possibly illegal terrain now, let's look closer at REQ and REP. Few people know this, but despite their kindergarten approach to messaging, REQ and REP are actually colorful characters:
* REQ is a **mama** socket: it doesn't listen, but always expects an answer. Mamas are strictly synchronous, and if you use them they are always the 'request' end of a chain.
* REP is a **papa** socket: it always answers, but never starts a conversation. Papas are strictly synchronous, and if you use them, they are always the 'reply' end of a chain.
The thing about mama sockets is, as we all learned as kids, you can't speak until spoken to. Mamas do not have the simple open-mindedness of papas, nor the ambiguous "sure, whatever" shrugged-shoulder aloofness of a dealer. So to speak to a mama socket, you have to get the mama socket to talk to you first. The good part is mamas don't care if you reply now, or much later. Just bring a good sob story and a bag of laundry.
Papa sockets on the other hand are strong and silent, and pedantic. They do just one thing, which is to give you an answer to whatever you ask, perfectly framed and precise. Don't expect a papa socket to be chatty, or to pass a message on to someone else; this is just not going to happen.
While we usually think of request-reply as a to-and-fro pattern, in fact it can be fully asynchronous, as long as we understand that any mamas or papas will be at the end of a chain, never in the middle of it, and always synchronous. All we need to know is the address of the peer we want to talk to, and then we can send it messages asynchronously, via a router. The router is the one and only 0MQ socket type capable of being told "send this message to X", where X is the address of a connected peer.
These are the ways we can know the address to send a message to, and you'll see most of these used in the examples of custom request-reply routing:
* If it's a transient socket, i.e. one that did not set any identity, the router will generate a UUID and use that to refer to the connection when it delivers you an incoming request envelope.
* If it's a durable socket, the router will give you the peer's identity when it delivers you an incoming request envelope.
* Peers with explicit identities can send them via some other mechanism, e.g. via some other sockets.
* Peers can have prior knowledge of each others' identities, e.g. via configuration files or some other magic.
There are four custom routing patterns, one for each of the socket types we can connect to a router:
* Router-to-dealer.
* Router-to-mama (REQ).
* Router-to-papa (REP).
* Router-to-router.
In each of these cases we have total control over how we route messages, but the different patterns cover different use-cases and message flows. Let's break it down over the next sections with examples of different routing algorithms.
But first some warnings about custom routing:
* This goes against a fairly solid 0MQ rule: //delegate peer addressing to the socket//. The only reason we do it is because 0MQ lacks a wide range of routing algorithms.
* Future versions of 0MQ will probably do some of the routing we're going to build here. That means the code we design now may break, or become redundant in the future.
* While the built-in routing has certain guarantees of scalability, such as being friendly to devices, custom routing doesn't. You will need to make your own devices.
So overall, custom routing is more expensive and more fragile than delegating this to 0MQ. Only do it if you need it. Having said that, let's jump in, the water's great!
+++ Router-to-Dealer Routing
The router-to-dealer pattern is the simplest. You connect one router to many dealers, and then distribute messages to the dealers using any algorithm you like. The dealers can be sinks (process the messages without any response), proxies (send the messages on to other nodes), or services (send back replies).
If you expect the dealer to reply, there should only be one router talking to it. Dealers have no idea how to reply to a specific peer, so if they have multiple peers, they will load-balance between them, which would be weird. If the dealer is a sink, any number of routers can talk to it.
What kind of routing can you do with a router-to-dealer pattern? If the dealers talk back to the router, e.g. telling the router when they finished a task, you can use that knowledge to route depending on how fast a dealer is. Since both router and dealer are asynchronous, it can get a little tricky. You'd need to use zmq_poll[3] at least.
We'll make an example where the dealers don't talk back, they're pure sinks. Our routing algorithm will be a weighted random scatter: we have two dealers and we send twice as many messages to one as to the other.
[[code type="textdiagram"]]
       +-------------+
       |             |
       |   Client    |    Send to "A" or "B"
       |             |
       +-------------+
       |   ROUTER    |
       \------+------/
              |
              |
      +-------+-------+
      |               |
      |               |
      v               v
/-----------\   /-----------\
|  DEALER   |   |  DEALER   |
|    "A"    |   |    "B"    |
+-----------+   +-----------+
|           |   |           |
|  Worker   |   |  Worker   |
|           |   |           |
+-----------+   +-----------+
Figure # - Router to dealer custom routing
[[/code]]
Here's code that shows how this works:
[[code type="example" title="Router-to-dealer" name="rtdealer"]]
[[/code]]
Some comments on this code:
* The router doesn't know when the dealers are ready, and it would be distracting for our example to add in the signaling to do that. So the router just does a "sleep (1)" after starting the dealer threads. Without this sleep, the router will send out messages that can't be routed, and 0MQ will discard them.
* Note that this behavior is specific to ROUTER sockets. PUB sockets will also discard messages if there are no subscribers, but all other socket types will queue sent messages until there's a peer to receive them.
To route to a dealer, we create an envelope like this:
[[code type="textdiagram"]]
+-------------+
Frame 1 | Address |
+-------------+-------------------------+
Frame 2 | Data |
+---------------------------------------+
Figure # - Routing envelope for dealer
[[/code]]
The router socket removes the first frame, and sends the second frame, which the dealer gets as-is. When the dealer sends a message to the router, it sends one frame. The router prepends the dealer's address and gives us back a similar envelope in two parts.
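As a minimal sketch (assuming a {{router}} socket, and a dealer that set its identity to "A" before connecting), routing a message to that dealer looks like this:
[[code language="C"]]
s_sendmore (router, "A");                   //  Frame 1: dealer's address
s_send (router, "This is the workload");    //  Frame 2: data
[[/code]]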
Something to note: if you use an invalid address, the router discards the message silently. There is not much else it can do usefully. In normal cases this either means the peer has gone away, or that there is a programming error somewhere and you're using a bogus address. In any case you cannot ever assume a message will be routed successfully until and unless you get a reply of some sort from the destination node. We'll come to creating reliable patterns later on.
Dealers in fact work exactly like PUSH and PULL combined. It is, however, illegal and pointless to connect PULL or PUSH to a request-reply socket.
+++ Least-Recently Used Routing (LRU Pattern)
Like we said, mamas (REQ sockets, if you really insist on it) don't listen to you, and if you try to speak out of turn they'll ignore you. You have to wait for them to say something, //then// you can give a sarcastic answer. This is very useful for routing because it means we can keep a bunch of mamas waiting for answers. In effect, mamas tell us when they're ready.
You can connect one router to many mamas, and distribute messages as you would to dealers. Mamas will usually want to reply, but they will let you have the last word. However it's one thing at a time:
* Mama speaks to router
* Router replies to mama
* Mama speaks to router
* Router replies to mama
* etc.
Like dealers, mamas can only talk to one router and since mamas always start by talking to the router, you should never connect one mama to more than one router unless you are doing sneaky stuff like multi-pathway redundant routing. I'm not even going to explain that now, and hopefully the jargon is complex enough to stop you trying this until you need it.
[[code type="textdiagram"]]
       +-------------+
       |             |
       |   Client    |    Send to "A" or "B"
       |             |
       +-------------+
       |   ROUTER    |
       \-------------/
              ^
              |   (1) Mama says Hi
              |
      +-------+-------+
      |               |
      |               |   (2) Router gives laundry
      v               v
/-----------\   /-----------\
|    REQ    |   |    REQ    |
|    "A"    |   |    "B"    |
+-----------+   +-----------+
|           |   |           |
|  Worker   |   |  Worker   |
|           |   |           |
+-----------+   +-----------+
Figure # - Router to mama custom routing
[[/code]]
What kind of routing can you do with a router-to-mama pattern? Probably the most obvious is "least-recently-used" (LRU), where we always route to the mama that's been waiting longest. Here is an example that does LRU routing to a set of mamas:
[[code type="example" title="Router-to-mama" name="rtmama"]]
[[/code]]
For this example the LRU doesn't need any particular data structures above what 0MQ gives us (message queues) because we don't need to synchronize the workers with anything. A more realistic LRU algorithm would have to collect workers as they become ready into a queue, and then use this queue when routing client requests. We'll do this in a later example.
To prove that the LRU is working as expected, the mamas print the total tasks they each did. Since the mamas do random work, and we're not load balancing, we expect each mama to do approximately the same amount but with random variation. And that is indeed what we see:
[[code]]
Processed: 8 tasks
Processed: 8 tasks
Processed: 11 tasks
Processed: 7 tasks
Processed: 9 tasks
Processed: 11 tasks
Processed: 14 tasks
Processed: 11 tasks
Processed: 11 tasks
Processed: 10 tasks
[[/code]]
Some comments on this code:
* We don't need any settle time, since the mamas explicitly tell the router when they are ready.
* We're generating our own identities here, as printable strings, using the zhelpers.h s_set_id function. That's just to make our life a little simpler. In a realistic application the mamas would be fully anonymous and then you'd call zmq_recv[3] and zmq_send[3] directly instead of the zhelpers s_recv() and s_send() functions, which can only handle strings.
* Worse, we're using //random// identities. Don't do this in real code, please. Randomized durable sockets are not good in real life: they exhaust and eventually kill nodes.
* If you copy and paste example code without understanding it, you deserve what you get. It's like watching Spiderman leap off the roof and then trying that yourself.
To route to a mama, we must create a mama-friendly envelope like this:
[[code type="textdiagram"]]
          +-------------+
Frame 1   | Address     |
          +---+---------+
Frame 2   |   |              <------ Empty message part
          +---+-----------------------------------+
Frame 3   | Data                                  |
          +---------------------------------------+
Figure # - Routing envelope for mama (REQ)
[[/code]]
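As a minimal sketch (assuming a {{router}} socket, and a worker address we saved when the mama last spoke to us), routing a request to that mama looks like this:
[[code language="C"]]
s_sendmore (router, worker_address);        //  Frame 1: address of the REQ
s_sendmore (router, "");                    //  Frame 2: empty delimiter
s_send (router, "This is the workload");    //  Frame 3: data
[[/code]]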
+++ Address-based Routing
Papas are, if we care about them at all, only there to answer questions. And to pay the bills, fix the car when mama drives it into the garage wall, put up shelves, and walk the dog when it's raining. But apart from that, papas are only there to answer questions.
In a classic request-reply pattern a router wouldn't talk to a papa socket at all, but rather would get a dealer to do the job for it. That's what dealers are for: to pass questions on to random papas and come back with their answers. Routers are generally more comfortable talking to mamas. OK, dear reader, you may stop the psychoanalysis. These are analogies, not life stories.
It's worth remembering with 0MQ that the classic patterns are the ones that work best, that the beaten path is there for a reason, and that when we go off-road we take the risk of falling off cliffs and getting eaten by zombies. Having said that, let's plug a router into a papa and see what the heck emerges.
The special thing about papas, all joking aside, is actually two things:
* One, they are strictly lockstep request-reply.
* Two, they accept an envelope stack of any size and will return that intact.
In the normal request-reply pattern, papas are anonymous and replaceable (wow, these analogies //are// scary), but we're learning about custom routing. So, in our use-case we have reason to send a request to papa A rather than papa B. This is essential if you want to keep some kind of a conversation going between you, at one end of a large network, and a papa sitting somewhere far away.
A core philosophy of 0MQ is that the edges are smart and many, and the middle is vast and dumb. This does mean the edges can address each other, and this also means we want to know how to reach a given papa. Doing routing across multiple hops is something we'll look at later but for now we'll look just at the final step: a router talking to a specific papa:
[[code type="textdiagram"]]
       +-------------+
       |             |
       |   Client    |    Send to "A" or "B"
       |             |
       +-------------+
       |   ROUTER    |
       \-------------/
              ^
              |
              |
      +-------+-------+
      |               |
      |               |
      v               v
/-----------\   /-----------\
|    REP    |   |    REP    |
|    "A"    |   |    "B"    |
+-----------+   +-----------+
|           |   |           |
|  Worker   |   |  Worker   |
|           |   |           |
+-----------+   +-----------+
Figure # - Router to papa custom routing
[[/code]]
This example shows a very specific chain of events:
* The client has a message that it expects to route back (via another router) to some node. The message has two addresses (a stack), an empty part, and a body.
* The client passes that to the router but specifies a papa address first.
* The router removes the papa address and uses that to decide which papa to send the message to.
* The papa receives the addresses, empty part, and body.
* It removes the addresses, saves them, and passes the body to the worker.
* The worker sends a reply back to the papa.
* The papa recreates the envelope stack and sends that back with the worker's reply to the router.
* The router prepends the papa's address and provides that to the client along with the rest of the address stack, empty part, and the body.
It's complex but worth working through until you understand it. Just remember a papa is garbage in, garbage out.
[[code type="example" title="Router-to-papa" name="rtpapa"]]
[[/code]]
Run this program and it should show you this:
[[code]]
----------------------------------------
[020] This is the workload
----------------------------------------
[001] A
[009] address 3
[009] address 2
[009] address 1
[000]
[017] This is the reply
[[/code]]
Some comments on this code:
* In reality we'd have the papa and router in separate nodes. This example does it all in one thread because it makes the sequence of events really clear.
* zmq_connect[3] doesn't happen instantly. When the papa socket connects to the router, that takes a certain time and happens in the background. In a realistic application the router wouldn't even know the papa existed until there had been some previous dialog. In our toy example we'll just {{sleep (1);}} to make sure the connection's done. If you remove the sleep, the papa socket won't get the message. (Try it.)
* We're routing using the papa's identity. Just to convince yourself this really is happening, try sending to a wrong address, like "B". The papa won't get the message.
* The s_dump and other utility functions (in the C code) come from the zhelpers.h header file. It becomes clear that we do the same work over and over on sockets, and there are interesting layers we can build on top of the 0MQ API. We'll come back to this later when we make a real application rather than these toy examples.
To route to a papa, we must create a papa-friendly envelope like this:
[[code type="textdiagram"]]
          +-------------+
Frame 1   | Address     |        <--- Zero or more of these
          +---+---------+
Frame 2   |   |                  <------ Exactly one empty message part
          +---+-----------------------------------+
Frame 3   | Data                                  |
          +---------------------------------------+
Figure # - Routing envelope for papa aka REP
[[/code]]
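As a minimal sketch (mirroring the rtpapa example above: a {{router}} socket, a connected papa with identity "A", and an illustrative stack of reply addresses), building that envelope looks like this:
[[code language="C"]]
s_sendmore (router, "A");           //  Identity of the target papa
s_sendmore (router, "address 3");   //  Zero or more reply addresses...
s_sendmore (router, "address 2");
s_sendmore (router, "address 1");
s_sendmore (router, "");            //  Exactly one empty message part
s_send (router, "This is the workload");
[[/code]]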
+++ A Request-Reply Message Broker
We'll recap the knowledge we have so far about doing weird stuff with 0MQ message envelopes, and build the core of a generic custom routing queue device that we can properly call a //message broker//. Sorry for all the buzzwords. What we'll make is a //queue device// that connects a bunch of //clients// to a bunch of //workers//, and lets you use //any routing algorithm// you want. What we'll do is //least-recently used//, since it's the most obvious use-case apart from load-balancing.
To start with, let's look back at the classic request-reply pattern and then see how it extends over a larger and larger service-oriented network. The basic pattern is:
[[code type="textdiagram"]]
            +--------+
            | Client |
            +--------+
            |  REQ   |
            +---+----+
                |
                |
    +-----------+-----------+
    |           |           |
    |           |           |
+---+----+  +---+----+  +---+----+
|  REP   |  |  REP   |  |  REP   |
+--------+  +--------+  +--------+
| Worker |  | Worker |  | Worker |
+--------+  +--------+  +--------+
Figure # - Basic request-reply
[[/code]]
This extends to multiple papas, but if we want to handle multiple mamas as well we need a device in the middle, which normally consists of a router and a dealer back to back, connected by a classic ZMQ_QUEUE device that just copies message parts between the two sockets as fast as it can:
[[code type="textdiagram"]]
+--------+  +--------+  +--------+
| Client |  | Client |  | Client |
+--------+  +--------+  +--------+
|  REQ   |  |  REQ   |  |  REQ   |
+---+----+  +---+----+  +---+----+
    |           |           |
    +-----------+-----------+
                |
            +---+----+
            | ROUTER |
            +--------+
            | Device |
            +--------+
            | DEALER |
            +---+----+
                |
    +-----------+-----------+
    |           |           |
+---+----+  +---+----+  +---+----+
|  REP   |  |  REP   |  |  REP   |
+--------+  +--------+  +--------+
| Worker |  | Worker |  | Worker |
+--------+  +--------+  +--------+
Figure # - Stretched request-reply
[[/code]]
The key here is that the router stores the originating mama address in the request envelope, the dealer and papas don't touch that, and so the router knows which mama to send the reply back to. Papas are anonymous and not addressed in this pattern; all papas are assumed to provide the same service.
In the above design, we're using the built-in load balancing routing that the dealer socket provides. However, we want our broker to use a least-recently-used algorithm, so we take the router-mama pattern we learned, and apply that:
[[code type="textdiagram"]]
+--------+  +--------+  +--------+
| Client |  | Client |  | Client |
+--------+  +--------+  +--------+
|  REQ   |  |  REQ   |  |  REQ   |
+---+----+  +---+----+  +---+----+
    |           |           |
    +-----------+-----------+
                |
            +---+----+
            | ROUTER |    Frontend
            +--------+
            | Device |    LRU queue
            +--------+
            | ROUTER |    Backend
            +---+----+
                |
    +-----------+-----------+
    |           |           |
+---+----+  +---+----+  +---+----+
|  REQ   |  |  REQ   |  |  REQ   |
+--------+  +--------+  +--------+
| Worker |  | Worker |  | Worker |
+--------+  +--------+  +--------+
Figure # - Stretched request-reply with LRU
[[/code]]
Our broker - a router-to-router LRU queue - can't simply copy message parts blindly. Here is the code; it's fairly complex, but the core logic is reusable in any request-reply broker that wants to do LRU routing:
[[code type="example" title="LRU queue broker" name="lruqueue"]]
[[/code]]
The difficult part of this program is (a) the envelopes that each socket reads and writes, and (b) the LRU algorithm. We'll take these in turn, starting with the message envelope formats.
First, recall that a mama REQ socket always adds an empty part (the envelope delimiter) when sending, and removes this empty part on reception. The reason for this isn't important; it's just part of the 'normal' request-reply pattern. What we care about here is just keeping mama happy by doing precisely what she needs. Second, the router always adds an envelope with the address of whomever the message came from.
We can now walk through a full request-reply chain from client to worker and back. In the code we set the identity of client and worker sockets to make it easier to print the message frames if we want to. Let's assume the client's identity is "CLIENT" and the worker's identity is "WORKER". The client sends a single frame:
[[code type="textdiagram"]]
          +---+-------+
Frame 1   | 5 | HELLO |    Data part
          +---+-------+
Figure # - Message that client sends
[[/code]]
What the queue gets, when reading off the router frontend socket, is this:
[[code type="textdiagram"]]
          +---+--------+
Frame 1   | 6 | CLIENT |    Identity of client
          +---+--------+
Frame 2   | 0 |             Empty message part
          +---+-------+
Frame 3   | 5 | HELLO |     Data part
          +---+-------+
Figure # - Message coming in on frontend
[[/code]]
The broker sends this to the worker, prefixed by the address of the worker, taken from the LRU queue, plus an additional empty part to keep the mama at the other end happy:
[[code type="textdiagram"]]
          +---+--------+
Frame 1   | 6 | WORKER |    Identity of worker
          +---+--------+
Frame 2   | 0 |             Empty message part
          +---+--------+
Frame 3   | 6 | CLIENT |    Identity of client
          +---+--------+
Frame 4   | 0 |             Empty message part
          +---+-------+
Frame 5   | 5 | HELLO |     Data part
          +---+-------+
Figure # - Message sent to backend
[[/code]]
This complex envelope stack gets chewed up first by the backend router socket, which removes the first frame. Then the mama socket in the worker removes the empty part, and provides the rest to the worker:
[[code type="textdiagram"]]
          +---+--------+
Frame 1   | 6 | CLIENT |    Identity of client
          +---+--------+
Frame 2   | 0 |             Empty message part
          +---+-------+
Frame 3   | 5 | HELLO |     Data part
          +---+-------+
+---+-------+
Figure # - Message delivered to worker
[[/code]]
This is exactly the same as what the queue received on its frontend router socket. The worker has to save the envelope (which is all the parts up to and including the empty message part) and then it can do what's needed with the data part.
On the return path the messages are the same as when they come in, i.e. the backend socket gives the queue a message in five parts, and the queue sends the frontend socket a message in three parts, and the client gets a message in one part.
Now let's look at the LRU algorithm. It requires that both clients and workers use mama sockets, and that workers correctly store and replay the envelope on messages they get. The algorithm is:
* Create a pollset which polls the backend always, and the frontend only if there are one or more workers available.
* Poll for activity with infinite timeout.
* If there is activity on the backend, we either have a "ready" message or a reply for a client. In either case we store the worker address (the first part) on our LRU queue, and if the rest is a client reply we send it back to that client via the frontend.
* If there is activity on the frontend, we take the client request, pop the next worker (which is the least-recently used), and send the request to the backend. This means sending the worker address, empty part, and then the three parts of the client request.
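Here is a minimal sketch of that loop (assuming {{frontend}} and {{backend}} ROUTER sockets, the s_* helpers from zhelpers.h, and a simple array-based queue of worker addresses; the full version is the lruqueue example above):
[[code language="C"]]
char *worker_queue [10];            //  Queue of available worker addresses
int available_workers = 0;

while (1) {
    zmq_pollitem_t items [] = {
        { backend,  0, ZMQ_POLLIN, 0 },
        { frontend, 0, ZMQ_POLLIN, 0 }
    };
    //  Poll frontend only if we have available workers
    zmq_poll (items, available_workers? 2: 1, -1);

    if (items [0].revents & ZMQ_POLLIN) {
        //  Queue worker address for LRU routing
        worker_queue [available_workers++] = s_recv (backend);
        free (s_recv (backend));            //  Empty delimiter
        char *client_addr = s_recv (backend);
        if (strcmp (client_addr, "READY") != 0) {
            //  Not a "ready" message, so a client reply to route back
            free (s_recv (backend));        //  Empty delimiter
            char *reply = s_recv (backend);
            s_sendmore (frontend, client_addr);
            s_sendmore (frontend, "");
            s_send (frontend, reply);
            free (reply);
        }
        free (client_addr);
    }
    if (items [1].revents & ZMQ_POLLIN) {
        //  Client request is [address][empty][request]
        char *client_addr = s_recv (frontend);
        free (s_recv (frontend));           //  Empty delimiter
        char *request = s_recv (frontend);

        //  Pop the least-recently used worker (front of the queue)
        char *worker_addr = worker_queue [0];
        memmove (worker_queue, worker_queue + 1,
            --available_workers * sizeof (char *));

        s_sendmore (backend, worker_addr);
        s_sendmore (backend, "");
        s_sendmore (backend, client_addr);
        s_sendmore (backend, "");
        s_send (backend, request);
        free (worker_addr);
        free (client_addr);
        free (request);
    }
}
[[/code]]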
You should now see that you can reuse and extend the LRU algorithm with variations based on the information the worker provides in its initial "ready" message. For example, workers might start up and do a performance self-test, then tell the broker how fast they are. The broker can then choose the fastest available worker rather than LRU or round-robin.
+++ A High-Level API for 0MQ
Reading and writing multipart messages using the native 0MQ API is like eating a bowl of hot noodle soup, with fried chicken and extra vegetables, using a toothpick. Look at the core of the worker thread from our LRU queue broker:
[[code language="C"]]
while (1) {
    //  Read and save all frames until we get an empty frame
    //  In this example there is only 1 but it could be more
    char *address = s_recv (worker);
    char *empty = s_recv (worker);
    assert (*empty == 0);
    free (empty);

    //  Get request, send reply
    char *request = s_recv (worker);
    printf ("Worker: %s\n", request);
    free (request);

    s_sendmore (worker, address);
    s_sendmore (worker, "");
    s_send (worker, "OK");
    free (address);
}
[[/code]]
That code isn't even reusable, because it can only handle one envelope. And this code already does some wrapping around the 0MQ API. If we used the libzmq API directly this is what we'd have to write:
[[code language="C"]]
while (1) {
    //  Read and save all frames until we get an empty frame
    //  In this example there is only 1 but it could be more
    zmq_msg_t address;
    zmq_msg_init (&address);
    zmq_recv (worker, &address, 0);

    zmq_msg_t empty;
    zmq_msg_init (&empty);
    zmq_recv (worker, &empty, 0);

    //  Get request, send reply
    zmq_msg_t payload;
    zmq_msg_init (&payload);
    zmq_recv (worker, &payload, 0);

    int char_nbr;
    printf ("Worker: ");
    for (char_nbr = 0; char_nbr < zmq_msg_size (&payload); char_nbr++)
        printf ("%c", *(char *) (zmq_msg_data (&payload) + char_nbr));
    printf ("\n");

    zmq_msg_close (&payload);
    zmq_msg_init_size (&payload, 2);
    memcpy (zmq_msg_data (&payload), "OK", 2);

    zmq_send (worker, &address, ZMQ_SNDMORE);
    zmq_msg_close (&address);
    zmq_send (worker, &empty, ZMQ_SNDMORE);
    zmq_msg_close (&empty);
    zmq_send (worker, &payload, 0);
    zmq_msg_close (&payload);
}
[[/code]]
What we want is an API that lets us receive and send an entire message in one shot, including all envelopes. One that lets us do what we want with the absolute least lines of code. The 0MQ core API itself doesn't aim to do this, but nothing prevents us making layers on top, and part of learning to use 0MQ intelligently is to do exactly that.
Making a good message API is fairly difficult, especially if we want to avoid copying data around too much. We have a problem of terminology: 0MQ uses "message" to describe both multipart messages, and individual parts of a message. We have a problem of semantics: sometimes it's natural to see message content as printable string data, sometimes as binary blobs.
So one solution is to use three concepts: //string// (already the basis for s_send and s_recv), //frame// (a message part), and //message// (a list of one or more frames). Here is the worker code, rewritten onto an API using these concepts:
[[code language="C"]]
while (1) {
    zmsg_t *zmsg = zmsg_recv (worker);
    zframe_print (zmsg_last (zmsg), "Worker: ");
    zframe_reset (zmsg_last (zmsg), "OK", 2);
    zmsg_send (&zmsg, worker);
}
[[/code]]
Replacing 22 lines of code with four is a good deal, especially since the results are easy to read and understand. We can continue this process for other aspects of working with 0MQ. Let's make a wishlist of things we would like in a higher-level API:
* //Automatic handling of sockets.// I find it really annoying to have to close sockets manually, and to have to explicitly define the linger timeout in some but not all cases. It'd be great to have a way to close sockets automatically when I close the context.
* //Portable thread management.// Every non-trivial 0MQ application uses threads, but POSIX threads aren't portable. So a decent high-level API should hide this under a portable layer.
* //Portable clocks.// Even getting the time to a millisecond resolution, or sleeping for some milliseconds, is not portable. Realistic 0MQ applications need portable clocks, so our API should provide them.
* //A reactor to replace zmq_poll[3].// The poll loop is simple but clumsy. Writing a lot of these, we end up doing the same work over and over: calculating timers, and calling code when sockets are ready. A simple reactor with socket readers, and timers, would save a lot of repeated work.
* //Proper handling of Ctrl-C.// We already saw how to catch an interrupt. It would be useful if this happened in all applications.
Turning this wishlist into reality gives us [http://zero.mq/c czmq], a high-level C API for 0MQ. This high-level binding in fact developed out of earlier versions of the Guide. It combines nicer semantics for working with 0MQ with some portability layers, and (importantly for C but less for other languages) containers like hashes and lists.
Here is the LRU queue broker rewritten to use czmq:
[[code type="example" title="LRU queue broker using czmq" name="lruqueue2"]]
[[/code]]
One thing czmq provides is clean interrupt handling. This means that Ctrl-C will cause any blocking 0MQ call to exit with a return code of -1, with errno set to EINTR. The czmq message recv methods return NULL in such a case. So, you can cleanly exit a loop like this:
[[code language="C"]]
while (1) {
    zstr_send (client, "HELLO");
    char *reply = zstr_recv (client);
    if (!reply)
        break;              //  Interrupted
    printf ("Client: %s\n", reply);
    free (reply);
    sleep (1);
}
[[/code]]
Or, if you're doing zmq_poll, test on the return code:
[[code language="C"]]
int rc = zmq_poll (items, zlist_size (workers)? 2: 1, -1);
if (rc == -1)
    break;              //  Interrupted
[[/code]]
The previous example still uses zmq_poll[3]. So how about reactors? The czmq {{zloop}} reactor is simple but functional. It lets you:
* Set a reader on any socket, i.e. code that is called whenever the socket has input.
* Cancel a reader on a socket.
* Set a timer that goes off once or multiple times at specific intervals.
{{zloop}} of course uses zmq_poll[3] internally. It rebuilds its poll set each time you add or remove readers, and it calculates the poll timeout to match the next timer. Then, it calls the reader and timer handlers for each socket and timer that needs attention.
When we use a reactor pattern, our code turns inside out. The main logic looks like this:
[[code language="C"]]
zloop_t *reactor = zloop_new ();
zloop_reader (reactor, self->backend, s_handle_backend, self);
zloop_start (reactor);
zloop_destroy (&reactor);
[[/code]]
The actual handling of messages sits inside dedicated functions or methods. You may not like the style; it's a matter of taste. What it does help with is mixing timers and socket activity. In the rest of this text we'll use zmq_poll[3] in simpler cases, and {{zloop}} in more complex examples.
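For example, a reader handler matching the snippet above might look like this sketch (the handler signature follows the zloop_reader call shown; s_handle_backend and the server_t state type are illustrative names, not a fixed API):
[[code language="C"]]
//  Called by zloop whenever the backend socket has input.
//  Returning -1 ends the reactor; returning 0 keeps it running.
static int
s_handle_backend (zloop_t *loop, void *socket, void *arg)
{
    server_t *self = (server_t *) arg;  //  Illustrative state structure
    zmsg_t *zmsg = zmsg_recv (socket);
    if (!zmsg)
        return -1;                      //  Interrupted
    //  ... queue the worker address, forward any client reply ...
    zmsg_destroy (&zmsg);
    return 0;
}
[[/code]]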
Here is the LRU queue broker rewritten once again, this time to use {{zloop}}:
[[code type="example" title="LRU queue broker using zloop" name="lruqueue3"]]
[[/code]]
Getting applications to properly shut down when you send them Ctrl-C can be tricky. If you use the zctx class it'll automatically set up signal handling, but your code still has to cooperate. You must break any loop if zmq_poll returns -1 or if any of the recv methods (zstr_recv, zframe_recv, zmsg_recv) return NULL. If you have nested loops, it can be useful to make the outer ones conditional on {{!zctx_interrupted}}.
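For example, here is a minimal sketch of an outer loop that cooperates with zctx's signal handling (assuming a {{socket}} created via the zctx-managed context):
[[code language="C"]]
while (!zctx_interrupted) {
    zmsg_t *msg = zmsg_recv (socket);
    if (!msg)
        break;              //  Interrupted inside the blocking recv
    //  ... process the message and send any reply ...
    zmsg_destroy (&msg);
}
[[/code]]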
+++ Asynchronous Client-Server
In the router-to-dealer example we saw a 1-to-N use case where one client talks asynchronously to multiple workers. We can turn this upside-down to get a very useful N-to-1 architecture where various clients talk to a single server, and do this asynchronously:
[[code type="textdiagram"]]
+-----------+   +-----------+
|           |   |           |
|  Client   |   |  Client   |
|           |   |           |
+-----------+   +-----------+
|  DEALER   |   |  DEALER   |
\-----------/   \-----------/
      ^               ^
      |               |
      |               |
      +-------+-------+
              |
              |
              v
       /------+------\
       |   ROUTER    |
       +-------------+
       |             |
       |   Server    |
       |             |
       +-------------+
Figure # - Asynchronous Client-Server
[[/code]]
Here's how it works:
* Clients connect to the server and send requests.
* For each request, the server sends 0 to N replies.
* Clients can send multiple requests without waiting for a reply.
* Servers can send multiple replies without waiting for new requests.
Here's code that shows how this works:
[[code type="example" title="Asynchronous client-server" name="asyncsrv"]]
[[/code]]
Just run that example by itself. Like other multi-task examples, it runs in a single process but each task has its own context and conceptually acts as a separate process. You will see three clients (each with a random ID), printing out the replies they get from the server. Look carefully and you'll see each client task gets 0 or more replies per request.
Some comments on this code:
* The clients send a request once per second, and get zero or more replies back. To make this work using zmq_poll[3], we can't simply poll with a 1-second timeout, or we'd end up sending a new request only one second //after we received the last reply//. So we poll at a high frequency (100 times at 1/100th of a second per poll), which is approximately accurate; there's a sketch of this loop just after these comments. This means the server could use requests as a form of heartbeat, i.e. detecting when clients are present or disconnected.
* The server uses a pool of worker threads, each processing one request synchronously. It connects these to its frontend socket using an internal queue. To help debug this, the code implements its own queue device logic. In the C code, you can uncomment the zmsg_dump() calls to get debugging output.
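To illustrate that first point, here is a sketch of the client's poll loop (assuming a {{client}} DEALER socket; note that in 0MQ/2.x the zmq_poll timeout is in microseconds):
[[code language="C"]]
//  Poll 100 times at 10 msecs each, i.e. roughly once per second
//  between requests, while picking up replies as soon as they arrive
int centitick;
for (centitick = 0; centitick < 100; centitick++) {
    zmq_pollitem_t items [] = { { client, 0, ZMQ_POLLIN, 0 } };
    zmq_poll (items, 1, 10 * 1000);     //  10 msecs, in microseconds
    if (items [0].revents & ZMQ_POLLIN) {
        char *reply = s_recv (client);
        printf ("Client: %s\n", reply);
        free (reply);
    }
}
s_send (client, "request");             //  Then send the next request
[[/code]]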
The socket logic in the server is fairly wicked. This is the detailed architecture of the server:
[[code type="textdiagram"]]
   +---------+   +---------+   +---------+
   |         |   |         |   |         |
   | Client  |   | Client  |   | Client  |
   |         |   |         |   |         |
   +---------+   +---------+   +---------+
   | DEALER  |   | DEALER  |   | DEALER  |
   \---------/   \---------/   \---------/
     connect       connect       connect
        |             |             |
        |             |             |
        +-------------+-------------+
                      |
 /--------------------|---------------------\
 :                    v                     :
 :                  bind                    :
 :             /-----------\                :
 :             |  ROUTER   |                :
 :             +-----------+                :
 :             |           |                :
 :             |  Server   |                :
 :             |           |                :
 :             +-----------+                :
 :             |  DEALER   |                :
 :             \-----------/                :
 :                  bind                    :
 :                    |                     :
 :      +-------------+-------------+       :
 :      |             |             |       :
 :      v             v             v       :
 :   connect       connect       connect    :
 : /---------\   /---------\   /---------\  :
 : | DEALER  |   | DEALER  |   | DEALER  |  :
 : +---------+   +---------+   +---------+  :
 : |         |   |         |   |         |  :
 : | Worker  |   | Worker  |   | Worker  |  :
 : |         |   |         |   |         |  :
 : +---------+   +---------+   +---------+  :
 :                                          :
 \------------------------------------------/
Figure # - Detail of asynchronous server
[[/code]]
Note that we're doing a dealer-to-router dialog between client and server, but internally between the server main thread and workers we're doing dealer-to-dealer. If the workers were strictly synchronous, we'd use REP. But since we want to send multiple replies we need an async socket. We do //not// want to route replies; they always go to the single server thread that sent us the request.
Let's think about the routing envelope. The client sends a simple message. The server thread receives a two-part message (real message prefixed by client identity). We have two possible designs for the server-to-worker interface:
* Workers get unaddressed messages, and we manage the connections from server thread to worker threads explicitly using a router socket as backend. This would require that workers start by telling the server they exist, which can then route requests to workers and track which client is 'connected' to which worker. This is the LRU pattern we already covered.
* Workers get addressed messages, and they return addressed replies. This requires that workers can properly decode and recode envelopes but it doesn't need any other mechanisms.
The second design is much simpler, so that's what we use:
[[code]]
   client         server       frontend       worker
[ DEALER ]<---->[ ROUTER <----> DEALER <----> DEALER ]
          1 part         2 parts       2 parts
[[/code]]
When you build servers that maintain stateful conversations with clients, you will run into a classic problem. If the server keeps some state per client, and clients keep coming and going, eventually it will run out of resources. Even if the same clients keep connecting, if you're using transient sockets (no explicit identity), each connection will look like a new one.
We cheat in the above example by keeping state only for a very short time (the time it takes a worker to process a request) and then throwing away the state. But that's not practical for many cases.
To properly manage client state in a stateful asynchronous server you must:
* Do heartbeating from client to server. In our example we send a request once per second, which can reliably be used as a heartbeat.
* Store state using the client identity as key. This works for both durable and transient sockets.
* Detect a stopped heartbeat. If there's no request from a client within, say, two seconds, the server can detect this and destroy any state it's holding for that client.
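As a sketch of the last two points (assuming czmq's zhash and zclock_time, and an illustrative client_t type that is not part of the example code):
[[code language="C"]]
typedef struct {
    int64_t expires;                //  Expires at this time, in msecs
    //  ... other per-client state ...
} client_t;

//  On any request from a client, create or refresh its state
client_t *client = zhash_lookup (clients, identity);
if (!client) {
    client = (client_t *) zmalloc (sizeof (client_t));
    zhash_insert (clients, identity, client);
}
client->expires = zclock_time () + 2000;    //  Two-second timeout

//  Periodically, destroy state for any client whose heartbeat stopped
if (client->expires <= zclock_time ())
    zhash_delete (clients, identity);
[[/code]]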
+++ Router-to-Router (N-to-N) Routing
We've seen router sockets talking to dealers, mamas, and papas. The last case is routers talking to routers. One use-case for this is a web farm that has redundant HTTP front-ends talking to an array of asynchronous back-end workers. Each worker accepts requests from any of the front-end HTTP servers, and processes them asynchronously, sending asynchronous replies back. A fully asynchronous worker has some internal concurrency but we don't really care about that here. What interests us is how N workers can talk to N front-ends.
[[code type="textdiagram"]]
 +-----------+     +-----------+
 |           |     |           |
 |   HTTP    |     |   HTTP    |
 | Front-end |     | Front-end |
 |           |     |           |
 +-----------+     +-----------+
 |  ROUTER   |     |  ROUTER   |
 \-----------/     \-----------/
    connect           connect
       ^                 ^
       |                 |
       +--------+--------+
                |
                |
       +--------+--------+
       |                 |
       v                 v
     bind               bind
 /-----------\     /-----------\
 |  ROUTER   |     |  ROUTER   |
 +-----------+     +-----------+
 |           |     |           |
 |  Worker   |     |  Worker   |
 |           |     |           |
 +-----------+     +-----------+
Figure # - N-to-N routing
[[/code]]
Here's a simplified example with a single front-end and a single worker, cross connected and routing to each other. We just send a message each way, and dump the message envelopes:
[[code type="example" title="Cross-connected routers" name="rtrouter"]]
[[/code]]
The program produces this output:
[[code]]
----------------------------------------
[008] SERVER
[000]
[014] send to worker
----------------------------------------
[006] WORKER
[000]
[014] send to server
[[/code]]
Some comments on this code:
* We need to give the two sockets time to connect and exchange identities. If we don't, they will discard the messages we try to send, not recognizing the address. Try commenting out the sleep (1), and running the example again.
* We can set and use identities on both bound and connected sockets, as this example shows.
Although the router-to-router pattern looks ideal for asynchronous N-to-N routing, it has some pitfalls. First, any design with N-to-N connections will not scale beyond a small number of clients and servers. You should really create a device in the middle that turns it into two 1-to-N patterns. This gives you a structure like the LRU queue broker, though you would use DEALER at the front-end and worker sides to get streaming.
Second, it may become confusing if you try to put two ROUTER sockets at the same logical level. One must bind, one must connect, and request-reply is inherently asymmetric. However, the next point takes care of this.
Third, one side of the connection has to know the identity of the other, //up front//. You cannot do router-to-router flows between two transient sockets. In practice this means you need a name service, configuration data, or some other magic to define and share the identities of one of the peers. It's convenient therefore to treat the more static side of the flow as 'server', and give it a fixed, known identity, and then treat the dynamic side as 'client'. The client will have to connect to the server, then send it a message using the server's known identity as address, and then the server can respond to the client.
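As a minimal sketch (mirroring the rtrouter example above: the static 'server' router sets a known identity and binds, while the dynamic 'client' router connects and addresses it by that identity; the endpoint is illustrative):
[[code language="C"]]
//  Server side: fixed, known identity, binds
zmq_setsockopt (server, ZMQ_IDENTITY, "SERVER", 6);
zmq_bind (server, "tcp://*:5555");

//  Client side: connects, then addresses the server by its identity
zmq_setsockopt (client, ZMQ_IDENTITY, "CLIENT", 6);
zmq_connect (client, "tcp://localhost:5555");
sleep (1);                      //  Allow connect and identity exchange

s_sendmore (client, "SERVER");  //  Address frame: server's known identity
s_send (client, "send to server");
[[/code]]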
+++ Worked Example: Inter-Broker Routing
Let's take everything we've seen so far, and scale things up. Our best client calls us urgently and asks for a design of a large cloud computing facility. He has this vision of a cloud that spans many data centers, each a cluster of clients and workers, and that works together as a whole.
Because we're smart enough to know that practice always beats theory, we propose to make a working simulation using 0MQ. Our client, eager to lock down the budget before his own boss changes his mind, and having read great things about 0MQ on Twitter, agrees.
++++ Establishing the Details
Several espressos later, we want to jump into writing code, but a little voice tells us to get more details before making a sensational solution to entirely the wrong problem. "What kind of work is the cloud doing?", we ask. The client explains:
* Workers run on various kinds of hardware, but they are all able to handle any task. There are several hundred workers per cluster, and as many as a dozen clusters in total.
* Clients create tasks for workers. Each task is an independent unit of work and all the client wants is to find an available worker, and send it the task, as soon as possible. There will be a lot of clients and they'll come and go arbitrarily.
* The real difficulty is to be able to add and remove clusters at any time. A cluster can leave or join the cloud instantly, bringing all its workers and clients with it.
* If there are no workers in their own cluster, clients' tasks will go off to other available workers in the cloud.