...
```
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
41908 root      20   0 5074516   3180   2344 R 324.5  0.0   0:31.75 apples_receiver
42192 root      20   0 1072372   2928   2104 R 162.6  0.0   0:17.33 apples_sender2
42266 root      20   0 1072372   2900   2076 R 160.6  0.0   0:14.36 apples_sender2
```
O/S Scheduling and/or CPU Affinity
While no effort was made to pin any of the application threads to specific CPUs, the single sender/receiver tests made it obvious that either O/S scheduling or CPU "luck" affected overall throughput. As the results below illustrate, some runs produced message rates that exceeded the pure NNG application's rate.
```
sender_1 | <A2SEND> finished attempted: 1000000 good: 999070 bad: 0 drops: 930 rate: 43478
app0_1   | =app0= finished received: 999070 rate: 34450 msg/sec
sender_1 | <A2SEND> finished attempted: 1000000 good: 998847 bad: 0 drops: 1153 rate: 47619
app0_1   | =app0= finished received: 998847 rate: 39953 msg/sec
sender_1 | <A2SEND> rmr timeout value was set to 1
sender_1 | <A2SEND> finished attempted: 1000000 good: 998681 bad: 0 drops: 1319 rate: 50000
app0_1   | =app0= finished received: 998681 rate: 41611 msg/sec
sender_1 | <A2SEND> finished attempted: 1000000 good: 998596 bad: 0 drops: 1404 rate: 50000
app0_1   | =app0= finished received: 998596 rate: 39943 msg/sec
sender_1 | <A2SEND> finished attempted: 1000000 good: 999093 bad: 0 drops: 907 rate: 43478
app0_1   | =app0= finished received: 999093 rate: 35681 msg/sec
sender_1 | <A2SEND> finished attempted: 1000000 good: 998339 bad: 0 drops: 1661 rate: 52631
app0_1   | =app0= finished received: 998339 rate: 43406 msg/sec
sender_1 | <A2SEND> finished attempted: 1000000 good: 999116 bad: 0 drops: 884 rate: 38461
app0_1   | =app0= finished received: 999116 rate: 33303 msg/sec
```
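If scheduling variability were a concern for future runs, the test processes could be pinned to disjoint cores on Linux with `taskset`. This is a minimal sketch only; the core numbers are illustrative and would need to match the host's topology, and the binary names are taken from the `top` output above:

```shell
# Pin each test process to its own core set so the scheduler cannot
# migrate them onto each other mid-run (core choices are illustrative).
taskset -c 0-3 ./apples_receiver &
taskset -c 4   ./apples_sender2  &
taskset -c 5   ./apples_sender2  &
wait   # block until all pinned test processes finish
```

The same effect can be achieved after launch with `taskset -cp <cores> <pid>`, which repins an already-running process.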
Conclusions
The impact on message rate when using RMR is caused by a couple of factors:
...