IPP efficiency #873
-
I would like to discuss IPP efficiency. This is triggered by … Some years ago when I developed "/usr/lib/cups/backend/monitor" … I think the main difference why IPP needs much more resources … In particular IPP implements … to tell the client … In contrast, in automated environments … The CUPS "ipp" backend is an IPP client program … I am wondering if those more resources are unavoidable. To avoid misunderstandings: …
Replies: 3 comments 1 reply
-
OK, so I'll start with the protocol concerns - yes, IPP has more overhead than LPD or AppSocket. AppSocket is just a bare, raw TCP/IP socket and LPD is a very simple command-based protocol, while IPP is a fully-functional message-based protocol with encryption and authentication. WRT the number of attributes and values that are passed around, that depends on the request and response, and we do try to optimize things for the standard commands, IPP backend, and web interface. Overall, the size of the print data normally dwarfs any protocol overhead.

The communication that the IPP backend does during submission is necessary for knowing that the job was submitted successfully, reporting any issues in the printer, etc. None of that is possible with the older protocols...

WRT specific performance issues, part of it is the current single-threaded design of cupsd and part of it is the "one job at a time" despooling from clients. Both are being addressed in CUPS 3.0.
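To make the framing overhead concrete, here is a rough, illustrative sketch (not CUPS code) of the minimum operation attributes an IPP request carries per RFC 8010. Note this is only the IPP message body - on the wire it is additionally wrapped in an HTTP POST - whereas AppSocket sends the print data with no framing at all:

```python
# Rough sketch of the minimal IPP Print-Job request framing per RFC 8010,
# to illustrate the per-message overhead. Illustrative only, not CUPS code.
import struct

def ipp_attr(value_tag: int, name: str, value: str) -> bytes:
    """Encode one IPP attribute: value-tag, name-length, name, value-length, value."""
    n = name.encode("utf-8")
    v = value.encode("utf-8")
    return struct.pack(">Bh", value_tag, len(n)) + n + struct.pack(">h", len(v)) + v

def minimal_print_job_request(printer_uri: str, request_id: int = 1) -> bytes:
    # version 2.0 (two single bytes), operation-id 0x0002 = Print-Job, request-id
    header = struct.pack(">BBHI", 2, 0, 0x0002, request_id)
    body = b"\x01"  # operation-attributes-tag
    body += ipp_attr(0x47, "attributes-charset", "utf-8")        # charset tag
    body += ipp_attr(0x48, "attributes-natural-language", "en")  # naturalLanguage tag
    body += ipp_attr(0x45, "printer-uri", printer_uri)           # uri tag
    body += b"\x03"  # end-of-attributes-tag; the print data would follow here
    return header + body

req = minimal_print_job_request("ipp://localhost/printers/rawtofile")
print(len(req))  # 122 bytes of framing before any print data
```

Even this bare minimum is overhead that LPD and AppSocket never pay per job - but as noted above, it is normally dwarfed by the print data itself.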
-
@michaelrsweet Mostly out of curiosity, I will re-do my tests …
-
FYI, here are some first test results. I run CUPS 2.4.7 on my home-office laptop. I have two queues for the test:
The 'tofile' backend is the … The 'rawtofile' queue is a raw queue that was set up via …
The 'monitoripptorawtofile' queue is another raw queue …
The 'monitor' backend is the one from …
The 'monitor' backend calls the actual backend 'ipp' with device URI …
So … A complication was that I run Gnome on my home-office laptop,
and that 'gsd-print' does additional IPP communication. … To keep the actual print data minimal …
Some data from the resulting /tmp/monitor.pcap:
So the IPP communication was 285 TCP packets. … The 10 biggest packets, with payload length 2694, were all of the form …
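For anyone who wants to reproduce such packet counts: a tool like Wireshark or tshark does this properly, but as an illustration, the record structure of a classic pcap file such as /tmp/monitor.pcap can be walked with the Python stdlib alone. This sketch assumes the common little-endian classic format (pcapng files are structured differently):

```python
# Minimal sketch: count packet records and collect packet lengths from a
# classic little-endian pcap file using only the stdlib.
import struct

def pcap_packet_lengths(path: str) -> list[int]:
    lengths = []
    with open(path, "rb") as f:
        global_header = f.read(24)  # magic, version, tz, sigfigs, snaplen, linktype
        magic = struct.unpack("<I", global_header[:4])[0]
        if magic != 0xA1B2C3D4:
            raise ValueError("not a little-endian classic pcap file")
        while True:
            rec = f.read(16)  # per-record header: ts_sec, ts_usec, incl_len, orig_len
            if len(rec) < 16:
                break
            _, _, incl_len, orig_len = struct.unpack("<IIII", rec)
            lengths.append(orig_len)
            f.seek(incl_len, 1)  # skip over the captured bytes
    return lengths

# lens = pcap_packet_lengths("/tmp/monitor.pcap")
# print(len(lens), sorted(lens)[-10:])  # packet count and the 10 biggest packets
```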
I don't know any details here, but at first glance this looks …
Adding … makes the IPP communication a lot smaller. With …
With 'ipp://localhost/printers/rawtofile?waitjob=false&waitprinter=false' …
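For context on why those options shrink the capture: waitjob=false stops the ipp backend from polling until the job completes, and waitprinter=false stops it from waiting for the printer to return to idle. A small stdlib-only sketch of composing such a device URI (the helper name ipp_device_uri is mine, not a CUPS API):

```python
# Sketch: composing an ipp backend device URI with options that reduce
# status polling after submission. Helper name is illustrative, not a CUPS API.
from urllib.parse import urlencode

def ipp_device_uri(host: str, queue: str, **options: str) -> str:
    uri = f"ipp://{host}/printers/{queue}"
    if options:
        uri += "?" + urlencode(options)
    return uri

uri = ipp_device_uri("localhost", "rawtofile", waitjob="false", waitprinter="false")
print(uri)  # ipp://localhost/printers/rawtofile?waitjob=false&waitprinter=false
```

The resulting URI is what one would pass to `lpadmin -v` when setting up the queue.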