Remote Debugging Protocol Stream Transport
The Mozilla debugging protocol is specified in terms of packets exchanged between a client and server, where each packet is either a JSON text or a block of bytes (a "bulk data" packet). The protocol does not specify any particular mechanism for carrying packets from one party to the other. Implementations may choose whatever transport they like, as long as packets arrive reliably, undamaged, and in order.
This page describes the Mozilla Remote Debugging Protocol Stream Transport, a transport layer suitable for carrying Mozilla debugging protocol packets over a reliable, ordered byte stream, like a TCP/IP stream or a pipe. Debugger user interfaces can use it to exchange packets with debuggees in other processes (say, for debugging Firefox chrome code), or on other machines (say, for debugging Firefox OS apps running on a phone or tablet).
(The Stream Transport is not the only transport used by Mozilla. For example, when using Firefox's built-in script debugger, the client and server are in the same process, so for efficiency they use a transport that simply exchanges the JavaScript objects corresponding to the JSON texts specified by the protocol, and avoid serializing packets altogether.)
Packets
Once the underlying byte stream is established, transport participants may immediately begin sending packets, using the forms described here. The transport requires no initial handshake or setup, and no shutdown exchange: the first bytes on the stream in each direction are those of the first packet, if any; the last bytes on the stream in each direction are the final bytes of the last packet sent, if any.
The transport defines two types of packets: JSON and bulk data.
JSON Packets
A JSON packet has the form:
length:JSON
where length is a series of decimal ASCII digits, JSON is a well-formed JSON text (as defined in RFC 4627) encoded in UTF-8, and length, interpreted as a number, is the length of JSON in bytes.
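For illustration, here is a rough sketch of the sending side in TypeScript, assuming a Node.js-style socket; the helper name sendJSONPacket is made up for this example and is not part of the transport's actual API:

 import { Socket } from "net";
 
 // Illustrative helper: frame an object as a JSON packet of the form
 // "length:JSON" and write it to the stream.
 function sendJSONPacket(stream: Socket, packet: object): void {
   // The length prefix counts UTF-8 bytes, not characters, so serialize to
   // a Buffer first and use its byte length.
   const json = Buffer.from(JSON.stringify(packet), "utf8");
   stream.write(`${json.length}:`);
   stream.write(json);
 }

For example, passing the object { to: "root", type: "listTabs" } produces 31 bytes of JSON, so the bytes written to the stream are 31:{"to":"root","type":"listTabs"}.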
Bulk Data Packets
A bulk data packet has the form:
bulk actor length:data
where:
- bulk is the four ASCII characters 'b', 'u', 'l', and 'k';
- there is exactly one space character (the single byte 0x20) between bulk and actor, and between actor and length;
- actor is a sequence of Unicode characters, encoded in UTF-8, containing no spaces or colons;
- length is a sequence of decimal ASCII digits; and
- data is a sequence of bytes whose length is length interpreted as a number.
The actor field is the name of the actor sending or receiving the packet. (Actors are server-side entities, so if the packet was sent by the client, actor names the recipient; and if the packet was sent by the server, actor names the sender.) The protocol imposes the same syntactic restrictions on actor names that we require here.
Which actor names are valid at any given point in an exchange is established by the remote debugging protocol.
The content of a bulk data packet is exactly the sequence of bytes appearing as data; unlike a JSON packet's body, the data is not interpreted as UTF-8 text.
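To make the framing concrete, here is a rough TypeScript sketch of how a receiver might recognize a complete bulk data packet header at the front of its input buffer; the BulkHeader shape and the parseBulkHeader name are invented for this example, and the payload bytes themselves would still be read separately:

 // Hypothetical helper: parse a header of the form "bulk actor length:"
 // from the front of a received buffer.
 interface BulkHeader {
   actor: string;      // actor sending or receiving the payload
   length: number;     // number of payload bytes following the colon
   headerSize: number; // bytes consumed by the header, including the colon
 }
 
 function parseBulkHeader(buf: Buffer): BulkHeader | null {
   // The colon byte (0x3a) cannot appear inside a UTF-8 multi-byte sequence,
   // so searching for it byte-wise is safe even with non-ASCII actor names.
   const colon = buf.indexOf(0x3a);
   if (colon === -1) {
     return null; // header not fully received yet; wait for more bytes
   }
   const header = buf.toString("utf8", 0, colon);
   // "bulk", one space, an actor name with no spaces or colons, one space,
   // and a run of decimal ASCII digits.
   const match = /^bulk ([^ :]+) (\d+)$/.exec(header);
   if (match === null) {
     throw new Error("malformed bulk data packet header");
   }
   return {
     actor: match[1],
     length: parseInt(match[2], 10),
     headerSize: colon + 1,
   };
 }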
Stream Requirements
The Stream Transport requires the underlying stream to have the following properties:
- It must be transparent: each transmitted byte is carried to the recipient without modification. Bytes whose values are ASCII control characters or fall outside the range of ASCII altogether must be carried unchanged; line terminators are left alone.
- It must be reliable: every transmitted byte makes it to the recipient, or else the connection is dropped altogether. Errors introduced by hardware, say, must be detected and corrected, or at least reported (and the connection dropped). The Stream Transport includes no checksums of its own; those are the stream's responsibility. (So, for example, a plain serial line is not suitable for use as an underlying stream.)
- It must be ordered: bytes are received in the same order they are transmitted, and bytes are not duplicated. (UDP packets, for example, may be duplicated or arrive out of order.)
TCP/IP streams and USB streams meet these requirements.
Implementation Notes
Zero-copy Bulk Data
Mozilla added bulk data packets to the protocol to download profiling data from devices with limited memory more efficiently. Profiling data sets need to be as large as possible, as larger sets can cover a longer period of time or more frequent samples. However, converting a large data set to a JavaScript object, converting the object to a JSON text, and sending the text over the connection entails making several temporary copies of the data, and thus limits the amount that can be collected. We wanted to let small devices transmit profile data while making as few temporary copies as possible. Since it seemed likely that other sorts of tools would need to exchange large binary blocks efficiently, we wanted this capability to be usable by any protocol participant, rather than being tailored to the profiler's specific case.
In our implementation of this Stream Transport, when a participant wishes to transmit a bulk data packet, it provides the data's length in bytes, and a callback function. When data can be sent, the transport passes the callback function the underlying nsIOutputStream, and the callback writes the data directly to the stream. Thus, the transport itself requires no intermediate copies of the data; the packet can be sent as efficiently as the underlying tool can manage. Similarly, when a participant receives a bulk data packet, the transport passes the actor name and the transport's underlying nsIInputStream directly to a callback function registered for the purpose. The callback function can then consume the data directly, and again, the transport itself requires no intermediate copies.
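The shape of that arrangement might look roughly like the following TypeScript sketch, written against a Node.js Writable stream rather than an nsIOutputStream; the function names, the actor name, and the file path below are illustrative, not the transport's actual API:

 import { Writable } from "stream";
 import { createReadStream } from "fs";
 
 // Illustrative sketch of the sending side: the transport writes only the
 // header, then hands the raw output stream to the caller's callback, which
 // streams the payload itself, so the transport never buffers a copy.
 function startBulkSend(
   stream: Writable,
   actor: string,
   length: number,
   writeData: (output: Writable) => void
 ): void {
   stream.write(`bulk ${actor} ${length}:`);
   writeData(stream); // the callback must write exactly `length` bytes
 }
 
 // Example: stream a profile file straight from disk into the transport,
 // using a hypothetical actor name and file path.
 function sendProfileData(socket: Writable, sizeInBytes: number): void {
   startBulkSend(socket, "profiler7", sizeInBytes, (output) => {
     createReadStream("/tmp/profile.bin").pipe(output, { end: false });
   });
 }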