TCPlp: System Design and Analysis of Full-Scale TCP in Low-Power Networks

Sam Kumar, Michael P Andersen, Hyung-Sin Kim, and David E. Culler
UC Berkeley
Abstract

Low power and lossy networks (LLNs) enable diverse applications integrating many embedded devices, often requiring interconnectivity between LLNs and existing TCP/IP networks. However, the sensor network community has been reluctant to adopt TCP, providing only highly simplified TCP implementations on sensor platforms and, instead, developing LLN-specific protocols to provide connectivity. We present a full-scale TCP implementation, based on the TCP protocol logic in FreeBSD, capable of operating over IEEE 802.15.4 within the memory constraints of Cortex-M0+ based platforms. We systematically investigate the behavior of a full-featured TCP implementation in the LLN setting. Our implementation achieves a 5x to 40x improvement in throughput over the simplified TCP stacks used in prior studies. Moreover, we find that TCP is more robust in LLNs than studies of TCP over traditional WLANs would suggest. We empirically demonstrate that, in a lossy environment typical of LLNs, TCP can achieve power consumption comparable to CoAP, a representative LLN-specific reliability protocol. We discuss the potential role of TCP in sensor networks, observing that gateway-free, retransmission-based reliable transport would be an asset to sensor network applications. We conclude that TCP should have a place in the LLN architecture moving forward.

1 Introduction

The use of IP and 6LoWPAN [80] in low power and lossy networks (LLNs) has become commonplace, along with standard IPv6 protocols for LLNs, such as RPL [105] and CoAP [25]. Most wireless sensor network (WSN) operating systems, such as TinyOS [75], RIOT OS [15], and Contiki OS [35], ship with implementations of these protocols enabled and configured. This extends into industry, with major vendors offering branded and supported 6LoWPAN stacks (TI’s SimpleLink, Atmel’s SmartConnect), and a consortium forming around 6LoWPAN-based interoperability (the Thread Group [48]).

Despite the wide use of IP, TCP has received little adoption. Many embedded IP stacks (e.g., OpenThread [81]) do not even support TCP, and those that do implement only a subset of the features (Table 1). The widely-held perception is that IP holds merit, but TCP is ill-suited to WSNs.

The research community has proposed a plethora of alternative WSN-specialized protocols and systems [85, 92, 104] for transporting data reliably, such as PSFQ [102], STCP [57], RCRT [84], Flush [67], RMST [99], Wisden [108], CRRT [9], and CoAP [21], or for transporting data without reliability, like CODA [103], ESRT [93], Fusion [54], CentRoute [100], Surge [74], and RBC [110]. In justifying these protocols, the research community has claimed that TCP is a poor choice for reasons such as:

  • “TCP is not light weight … and may not be suitable for implementation in low-cost sensor nodes with limited processing, memory and energy resources.” [85] (Similar argument in [32, 57].)

  • That “TCP is a connection-oriented protocol” makes it a poor match for WSNs, “where actual data might be only in the order of a few bytes.” [89] (Similar argument in [85].)

  • “TCP uses a single packet drop to infer that the network is congested.” This “can result in extremely poor transport performance because wireless links tend to exhibit relatively high packet loss rates.” [84] (Similar argument in [33], [34], [57].)

  • “TCP provides 100% reliability. It is not only costly in terms of energy consumption, but also not required by many applications in WSNs.” [85] (Similar argument in [102].)

Although existing studies are quick to dismiss TCP, citing one or more of the above reasons, they do little, if anything, to establish these potential shortcomings as legitimate. Some assert the above statements as fact, or cite other papers that do; others refer to the literature about TCP in WLANs, which, although similar to LLNs, have higher throughput and fewer resource constraints. It is easy to rely on intuition, or on studies of TCP in WLANs, to expect TCP to behave a certain way in LLNs, but it is important for the research community to empirically validate such expectations. Moreover, low-power networking has matured substantially over the past two decades, with the adoption of IP in LLNs, the emergence of more capable embedded hardware, and the rise of the Internet of Things (IoT). In this context, this paper seeks to determine: Do these assumptions actually hold in modern WSNs? Is TCP still unsuitable for use in low power and lossy networks?

Our results in this paper indicate the contrary. We leverage the full-featured TCP implementation in the FreeBSD Operating System, which has received more than three decades of tuning and testing, and refactor it to work with the Berkeley Low Power IP Stack (BLIP) in TinyOS, the Generic Network Stack (GNRC) in RIOT OS, and the OpenThread network stack, on two modern WSN platforms (§4). Using the resulting TCP implementation, which we call TCPlp [71], we systematically investigate how TCP behaves in an IEEE 802.15.4-based network and explore how it can be used in WSNs.

We find that full-scale TCP fits well within the CPU and memory constraints of modern WSN platforms (§4, §6). Owing to the low bandwidth of a low power wireless link, a small window size (≈2 KiB) is sufficient to fill the bandwidth-delay product and achieve good TCP performance. This translates into small send/receive buffers that fit comfortably within the memory of modern WSN hardware. As a result, full-scale TCP operates well in LLNs, with 5–40 times higher throughput compared to the existing (simplified) embedded TCP stacks.

Although some findings from the wireless TCP literature carry over to our setting, TCP in low power wireless networks does not conform to existing performance models for TCP in traditional networks or WLANs (§8). Given that a small window size is sufficient for good performance due to the low bandwidth of LLNs, TCP is much more resilient to spurious packet losses, as the congestion window can recover to a full window very quickly after loss (§7). Ironically, low bandwidth makes it easier to mask link loss, which is arguably the primary challenge to achieving good wireless TCP performance.

Finally, we evaluate TCP in the context of a real IoT sensor application, demonstrating that TCP is capable of operating at low power, comparable to alternatives tailored specifically for WSNs (§9). We find this significant because, as history shows, application characteristics must offer substantial opportunity for optimization to justify reliable transport protocols other than TCP.

We conclude that TCP is very capable of running on IEEE 802.15.4 networks and low-cost embedded devices in WSN application scenarios, and that using a fully-featured TCP stack yields considerable benefit.

2 Background: Embedded TCP

Since the introduction of the Transmission Control Protocol (TCP) in 1980, there has been a large body of work focusing on improving it as the Internet evolved. This work includes congestion control algorithms [8, 39, 46], performance on wireless links [17], header extensions to improve performance [59], and performance tuning of full-scale implementations [27], among other factors.

In the late 1990s and early 2000s, developers attempted to bring TCP/IP to embedded and resource-constrained systems to connect them to the Internet, usually over serial or Ethernet. Such systems [23, 61] were often designed with a specific application—often, a web server—in mind. These TCP/IP stacks were tailored to the specific applications at hand and were not suitable for general use.

uIP (“micro IP”) [32], introduced in 2002, was a standalone general TCP/IP stack optimized for 8-bit microcontrollers and Ethernet. To minimize resource consumption to run on such platforms, uIP omits standard features of TCP; for example, it only allows a single outstanding (unACKed) TCP segment per connection, rather than supporting a sliding window of unacknowledged data.

Since the introduction of uIP, embedded networks have changed substantially. With wireless sensor networks and IEEE 802.15.4, various lightweight/low-power networking protocols have been developed to overcome lossy links with strict energy and resource constraints, from B-MAC [87], X-MAC [24], and A-MAC [38], to Trickle [76] and CTP [44]. Researchers have viewed TCP as unsuitable, however, questioning end-to-end recovery, loss-triggered congestion control, and bi-directional data flow in low power and lossy networks (LLNs) [34], often building application-specific transport solutions [93, 99, 102].

In 2007, the 6LoWPAN adaptation layer [80] was introduced, enabling IPv6 over IEEE 802.15.4. Since then, IPv6 has been incorporated into LLNs, bringing forth the Internet of Things (IoT) [53]. IPv6-based routing protocols such as RPL [105] and application protocols such as CoAP [25] were designed to support LLNs. Representative operating systems such as TinyOS [75] and Contiki OS [35] implement UDP/RPL/IPv6/6LoWPAN network stacks with IEEE 802.15.4-compatible link layer protocols on 16-bit microcontroller-based platforms like TelosB [88].

Despite this acceptance of IPv6, TCP is not widely adopted as a transport solution in LLNs. The few LLN studies that use TCP [37, 50, 53, 55, 66, 112] generally use a highly simplified TCP stack (Table 1), such as uIP. One study [37] uses the Linux TCP stack, but does not adequately capture the resource constraints of LLNs—it uses traditional computers (PCs) for the end hosts—and does not consider the effects of hidden terminals. Furthermore, much of the existing work [37, 53] uses TCP as a workload to evaluate a new link- or network-layer protocol for LLNs, instead of evaluating TCP in its own right.

Feature              uIP   BLIP   GNRC   TCPlp
Flow Control         Yes   Yes    Yes    Yes
Congestion Control   N/A   No     Yes    Yes
RTT Estimation       Yes   No     Yes    Yes
MSS Option           Yes   No     Yes    Yes
TCP Timestamps       No    No     No     Yes
OOO Reassembly       No    No     Yes    Yes
Selective ACKs       No    No     No     Yes
Delayed ACKs         No    No     No     Yes
Table 1: Feature comparison among different TCP stacks for low-power embedded devices (uIP in Contiki, BLIP in TinyOS, GNRC in RIOT, and FreeBSD-based TCPlp)

Recently, low power 32-bit microcontrollers with more processing power and memory space have become readily available, e.g., platforms built around ARM’s Cortex-M series [11, 12, 68]. This prompts us to re-evaluate TCP’s viability in LLNs.

3 Example Application: Anemometry

In order to ground our study of TCP, we consider a specific WSN application in which it might be used: the deployment of anemometers, sensors that measure air velocity.

3.1 TCP for Anemometry

Anemometers may be deployed in a building to diagnose problems with the Heating, Ventilation, and Air Conditioning (HVAC) system, and also to collect air flow measurements for improved HVAC control. This requires placing anemometers in difficult-to-reach locations, such as inside air flow ducts, where it is infeasible to run wires. The anemometers, therefore, must be battery-powered and must transmit readings wirelessly, making LLNs attractive.

We used anemometers based on the Hamilton platform [12, 63], each consisting of four ultrasonic transceivers arranged as vertices of a tetrahedron. To measure the air velocity, each transceiver, in turn, emits a burst of ultrasound, and the impulse is measured by the other three transceivers. This process results in a total of 12 measurements.

Calculating the air velocity from these measurements is computationally infeasible on the anemometer itself, because Hamilton does not have hardware floating point support and the computations require complex trigonometry. Measurements must be transmitted over the network to a server that processes the data. Furthermore, a specific property of the analytics is that it requires a contiguous stream of data to maintain calibration. Thus, the application requires a high sample rate (1 Hz), and is sensitive to data loss. (Given the higher data rate requirements of this application, we plan to use a higher-capacity battery than the standard AA batteries used in most motes; the higher cost of such a battery is justified by the higher cost of the anemometer transducers.) A protocol for reliable delivery, like TCP, is therefore necessary.

3.2 Design Choice: Thread Network Stack

Given that placing a border router with LAN connectivity in the vicinity of every low-power sensor node is a significant deployment burden, it is common to transmit sensor readings over multiple wireless LLN hops [69]. Therefore, we needed to choose a routing protocol to form a multihop wireless topology.

Note that, while the anemometer itself must be battery-powered and wireless, it is reasonable to have a powered wireless LLN router node within range of it. (The assumption of powered “core routers” is reasonable for most IoT use cases, which are typically indoors [69]; recent IoT protocols, such as Thread [48] and BLEmesh [47], take advantage of powered core routers.) This motivates Thread [48], a recently developed protocol standard that constructs a multihop wireless network with powered, always-on router nodes and battery-powered, duty-cycled leaf nodes. (Thread has a large amount of industry support, with a consortium already consisting of over 100 members [6], and is used in real IoT products sold by Nest/Google [7]; given this industry trend, using Thread makes our work timely. As for duty cycling: low-power radios consume almost as much energy listening for a packet as they do when actually sending or receiving [13], so it is customary to duty-cycle the radio, keeping it in a low-power sleep state most of the time [87, 53].) Thread decouples routing management from energy efficiency, providing a full-mesh topology among routers, frequent route updates, and asymmetric bidirectional routing for reliability. Each leaf node duty cycles its radio, and simply chooses a core router with good link quality, called its parent, as its next hop to all other nodes.

The duty cycling uses listen-after-send [95]. A leaf node’s parent stores downstream packets destined for that leaf node, until the leaf node sends it a data request message. A leaf node, therefore, can keep its radio powered off most of the time; infrequently, it sends a data request message to its parent, and turns on its radio for a short interval afterward to listen for downstream packets queued at its parent. Leaf nodes may send upstream traffic at any time.
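To make the mechanism concrete, the following sketch shows the control flow of a leaf node under listen-after-send. All function names and timing constants are hypothetical placeholders for illustration, not actual OpenThread APIs.

```c
/* Sketch of a leaf node's listen-after-send duty cycle. All helpers
 * and constants are hypothetical placeholders, not OpenThread APIs. */
#include <stdbool.h>
#include <stdint.h>

#define POLL_PERIOD_MS   (4u * 60u * 1000u)  /* default data request interval */
#define LISTEN_WINDOW_MS 100u                /* brief listen interval after polling */

void radio_sleep(void);
void radio_listen(void);
void send_data_request(void);        /* poll parent for queued downstream packets */
bool downstream_frame_pending(void);
void receive_downstream_frames(void);
void sleep_ms(uint32_t ms);

void leaf_node_loop(void) {
    for (;;) {
        radio_sleep();               /* radio off most of the time */
        sleep_ms(POLL_PERIOD_MS);

        send_data_request();         /* parent holds our downstream packets */
        radio_listen();              /* listen briefly for the response */
        sleep_ms(LISTEN_WINDOW_MS);
        if (downstream_frame_pending()) {
            receive_downstream_frames();
        }
        /* Upstream traffic can be sent at any time, outside this loop. */
    }
}
```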

4 Implementation of a Full-Scale TCP Stack in Embedded Operating Systems

We develop a TCP stack for LLNs, called TCPlp, which we plan to open-source upon publication. We implemented TCPlp on two embedded platforms: Hamilton [12] and Firestorm [11]. Hamilton has a 48 MHz Cortex-M0+ with 256KB of ROM and 32KB of RAM. For this platform, our software uses RIOT OS [15], which provides the GNRC IP stack. For multihop experiments, we use OpenThread [81], an open-source implementation of the Thread [48] protocol, with RIOT OS. Firestorm is built around a SAM4L 48 MHz Cortex-M4 with 512KB of ROM and 64KB of RAM. For this, we use TinyOS [75], which provides the BLIP IP stack.

Both platforms use the AT86RF233 radio, which supports IEEE 802.15.4. It can perform link-layer retransmissions and CSMA-CA automatically, without interaction from the microcontroller. However, the AT86RF233 automatically enters low power mode during CSMA backoff, during which it does not listen for incoming frames [13]. This behavior, which we call deaf listening, interacts poorly with TCP, because TCP requires bidirectional flow of packets—data in one direction and ACKs in the other. Therefore, we do not make use of the radio’s capability for hardware CSMA and link retries. Instead, these operations are performed in software, making sure to put the radio in “listen” mode between CSMA attempts and link retries.
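The following sketch illustrates this software approach: CSMA backoff and link retries are driven by the microcontroller, and the radio is explicitly returned to listen mode between attempts. The helper functions and retry limits are hypothetical placeholders, not the actual driver interface.

```c
/* Sketch of software CSMA with link-layer retries that keeps the
 * radio listening between attempts, avoiding "deaf listening".
 * Helper functions and constants are hypothetical placeholders. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_CSMA_ATTEMPTS 4
#define MAX_LINK_RETRIES  4

bool channel_clear(void);            /* CCA check */
bool transmit_and_wait_for_ack(const uint8_t *frame, size_t len);
void radio_listen(void);
void sleep_ms(uint32_t ms);
uint32_t csma_backoff_ms(int attempt);   /* randomized exponential backoff */

bool send_frame(const uint8_t *frame, size_t len) {
    for (int retry = 0; retry <= MAX_LINK_RETRIES; retry++) {
        for (int attempt = 0; attempt < MAX_CSMA_ATTEMPTS; attempt++) {
            if (channel_clear()) {
                if (transmit_and_wait_for_ack(frame, len)) {
                    return true;     /* link-layer ACK received */
                }
                break;               /* no ACK: fall through to a link retry */
            }
            /* Unlike the radio's hardware CSMA, stay in listen mode during
             * backoff so incoming frames (e.g., TCP ACKs) are not missed. */
            radio_listen();
            sleep_ms(csma_backoff_ms(attempt));
        }
        radio_listen();              /* also listen between link retries */
    }
    return false;
}
```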

Platform    CPU Arch.         ROM       RAM
TelosB      16-bit, 8 MHz     48 KiB    10 KiB
Hamilton    32-bit, 48 MHz    256 KiB   32 KiB
Firestorm   32-bit, 48 MHz    512 KiB   64 KiB
Rasp. Pi    32-bit, 700 MHz   SD Card   256 MB
Table 2: Comparison of the platforms we used (Hamilton and Firestorm) to TelosB and Raspberry Pi. Compare these values to the memory footprint of TCPlp in Tables 3 and 4.

Table 2 compares the platforms used in this study to the TelosB [88], an older LLN platform widely used in past studies. While these platforms have more code and data memory than the TelosB, they are heavily resource-constrained compared to a Raspberry Pi. Nevertheless, these extra resources make it feasible, for the first time, to run a full TCP stack on low-power embedded hardware.

4.1 Implementation

That only a few full-scale TCP stacks exist, with a body of literature covering decades of tweaking and refining, demonstrates that developing a feature-complete implementation of TCP is complex and error-prone [86]. Therefore, we leverage the TCP implementation in FreeBSD 10.3 [42] for TCPlp. This allows us to demonstrate that modern WSN systems are capable of running a full-scale TCP stack. Simplified TCP implementations such as those in BLIP and uIP are no longer necessary. Furthermore, using a robust implementation helps ground our measurement study in Sections 6 to 9. Using the well-tested FreeBSD implementation makes us more confident that our observations are due to the TCP protocol, not an artifact of the TCP implementation we used.

We did have to make major adaptations to the FreeBSD implementation for correct and performant operation on each embedded operating system. First, we adapted its concurrency model. TinyOS does not have threads, but instead supports fully event-driven execution. RIOT OS supports threads, but uses a separate thread for each layer of its GNRC network stack, passing packets between them using Inter-Process Communication (IPC). The OpenThread port for RIOT OS uses two threads, one for the send path and another for the receive path. We adapted FreeBSD to work with each of these concurrency models. Second, we use tickless timers provided by TinyOS and RIOT OS for TCP, as opposed to the Callout subsystem [56] used by FreeBSD. Third, we adapted TCP’s buffering (§4.3). Fourth, unlike FreeBSD, we distinguish at the protocol level between active sockets, which are used to send and receive bytes on a TCP connection, and passive sockets, which are used to listen for incoming connections, because passive sockets require less memory than active sockets.

Based on the principles above, we implemented full-scale TCP features for TCPlp. As shown in Table 1, TCPlp includes features from FreeBSD that improve standard bi-directional communication, like a sliding window, segment reassembly, New Reno congestion control, zero-window probes, delayed ACKs, selective ACKs, TCP timestamps, and header prediction [27].

TCPlp, however, omits some features of FreeBSD’s TCP/IP stack. We omit dynamic window scaling, as buffer sizes large enough to necessitate it (> 64 KiB) would not even fit in memory. We omit support for the urgent pointer, as it is not recommended for use [45] and would only complicate buffering. Security-related features and optimizations, such as the host cache, TCP signatures, SYN cache, and SYN cookies, are outside the scope of this work. We do, however, retain challenge acknowledgments [90].

4.2 Memory Usage: Connection State

Tables 3 and 4 depict the memory footprints of TCPlp on TinyOS and RIOT OS. The memory required for the protocol and application state of an active TCP socket fits in a few hundred bytes, less than 1% of the available RAM on the Cortex-M4 (Firestorm) and 2% of that on the Cortex-M0+ (Hamilton). Even though we chose to implement relatively heavyweight features not traditionally included in embedded TCP stacks, connection state fits well within available RAM.

               Protocol   Event Sched.   User Library
ROM            21352 B    1696 B         5384 B
RAM (Active)   488 B      40 B           36 B
RAM (Passive)  16 B       16 B           36 B
Table 3: Memory usage of TCPlp on TinyOS. Our implementation of TCPlp spans three modules: (1) protocol implementation, (2) event scheduler that injects callbacks into userspace, (3) userland library to make TCP system calls.
               Protocol   Socket Layer   posix_sockets
ROM            19972 B    6216 B         5468 B
RAM (Active)   364 B      88 B           48 B
RAM (Passive)  12 B       88 B           48 B
Table 4: Memory usage of TCPlp on RIOT OS. We also include the memory footprint of RIOT’s posix_sockets module, used by TCPlp to provide a Unix-like interface.

4.3 Memory Usage: Data Buffering

Existing embedded TCP stacks, such as uIP and BLIP, allow only one TCP packet in the air, eschewing careful implementation of send and receive buffers [66]. These buffers, however, are key to supporting TCP’s sliding window functionality. We observe in §6.2 that TCPlp performs well with only 2-3 KiB send and receive buffers, which comfortably fit in memory even when naïvely pre-allocated at compile time. Given that buffers dominate TCPlp’s memory usage, however, we discuss techniques to optimize their memory usage.

4.3.1 Send Buffer: Zero-Copy

Zero-copy techniques [18, 30, 62, 77, 78] were devised for low-latency and high-throughput situations where the time for the CPU to copy memory was a significant bottleneck. Our situation is very different; the radio, not the CPU, is the bottleneck, owing to the low bandwidth of IEEE 802.15.4. By using a zero-copy send buffer, however, we can avoid allocating memory to intermediate buffers that would otherwise be needed to copy data, thereby reducing the network stack’s total memory usage.

In TinyOS, for example, the BLIP network stack supports vectored I/O; an outgoing packet passed to the IPv6 layer is specified as an iovec. Instead of allocating memory in the packet heap for each outgoing packet and then copying data out of the send buffer when sending out packets, TCPlp simply creates iovecs that point to existing data in the send buffer. This decreases the size of the packet heap, without affecting performance.

Furthermore, the Firestorm platform supports an embedded Lua runtime for applications on top of a TinyOS kernel. Data is provided to TCP by the application as Lua strings, which are immutable. We leverage zero-copy again, this time for the transfer of data from the application to TCP. The TCPlp send buffer is a linked list of nodes, each with a pointer to a data buffer. When the string provided by the application is sufficiently large, each node simply points to the memory backing the Lua string, instead of allocating a separate buffer for that data. The result is that the memory allocated to the send buffer can be very small—it only needs to contain a few nodes of a linked list.
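A minimal sketch of such a zero-copy send buffer appears below, assuming a linked list whose nodes either alias large application-owned buffers or copy small ones. The types, threshold, and helper are illustrative, not TCPlp's actual definitions.

```c
/* Sketch of a zero-copy send buffer: a linked list of nodes that
 * point into application-owned data (e.g., immutable Lua strings),
 * rather than copying bytes into a dedicated buffer. */
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define COPY_THRESHOLD 64  /* small payloads are copied; large ones aliased */

struct sendbuf_node {
    const char *data;          /* points into app memory or an owned copy */
    size_t len;
    int owned;                 /* 1 if we allocated data and must free it */
    struct sendbuf_node *next;
};

/* Append application data to the send buffer; 'tail' points at the
 * next-pointer of the last node (or at the head pointer if empty). */
struct sendbuf_node *sendbuf_append(struct sendbuf_node **tail,
                                    const char *data, size_t len) {
    struct sendbuf_node *n = malloc(sizeof(*n));
    if (n == NULL) return NULL;
    if (len >= COPY_THRESHOLD) {
        n->data = data;        /* zero-copy: alias the application's bytes */
        n->owned = 0;
    } else {
        char *copy = malloc(len);
        if (copy == NULL) { free(n); return NULL; }
        memcpy(copy, data, len);
        n->data = copy;        /* small payload: copy to limit fragmentation */
        n->owned = 1;
    }
    n->len = len;
    n->next = NULL;
    *tail = n;
    return n;
}
```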

Unfortunately, such zero-copy optimizations were not possible for the RIOT/OpenThread implementation, because (1) we provided a C API as opposed to the Lua API, meaning that buffers are mutable and cannot be safely aliased, and (2) OpenThread does not support vectored I/O for sending packets at the IPv6 layer. The result is that the TCPlp implementation requires a few kilobytes of additional memory for the send buffer on this platform.

4.3.2 Receive Buffer: In-Place Reassembly Queue

Not all zero-copy optimizations are useful in the embedded setting. When a TCP packet arrives at a FreeBSD system, it is passed to the TCP implementation as an mbuf [107]. The receive buffer and reassembly buffer are mbuf chains, so data need not be copied out of mbufs in order to add them to either buffer or to recover from out-of-order delivery. Furthermore, buffer sizes are chosen dynamically [97], and act merely as a limit on the buffers’ maximum sizes. In our memory-constrained setting, this approach is dangerous because (1) the memory usage is nondeterministic, in that there is additional overhead due to headers if the data is delivered in many small packets as opposed to a few large packets, and (2) adding packets to the receive buffer reserves space in the packet heap, potentially causing future packet allocations to fail, leading to deadlock.

(a) Depiction of a naive fixed-size receive buffer. Note the relationship between the size of the advertised window, size of the buffered data, and size of the receive buffer.
(b) Depiction of the final receive buffer with in-place reassembly queue. In-sequence buffered data (yellow) is kept in a circular buffer, and out-of-order segments (red) are written in the space past the received data. A bitmap (not shown) records out-of-order data.
Figure 1: Comparison of naïve and final receive buffers

We opted for a flat array-based circular buffer for the receive buffer in TCPlp, primarily owing to its determinism in a limited-memory environment: buffer space is reserved at compile time. To reduce memory consumption, however, we observe that out-of-order data can be stored in the same buffer as the in-sequence data, by simply adding a bitmap to record which bytes correspond to out-of-order data. We call this an in-place reassembly queue and depict it in Figure 1.
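The following is a minimal sketch of an in-place reassembly queue, assuming a compile-time-sized circular buffer and a one-bit-per-byte bitmap; field names and sizes are illustrative, not TCPlp's actual layout.

```c
/* Minimal sketch of an in-place reassembly queue: out-of-order bytes
 * are written directly into the circular receive buffer, past the
 * in-sequence data, and a bitmap records which bytes are valid. */
#include <stddef.h>
#include <stdint.h>

#define RECV_BUF_SIZE 2048     /* fixed at compile time: deterministic memory */

struct recvbuf {
    uint8_t data[RECV_BUF_SIZE];
    uint8_t ooo_bitmap[RECV_BUF_SIZE / 8]; /* 1 bit per buffered byte */
    size_t  start;             /* index of first in-sequence byte */
    size_t  in_seq_len;        /* bytes ready for the application */
};

/* Store one byte arriving 'offset' bytes past the next expected byte. */
static void recvbuf_put_ooo(struct recvbuf *rb, size_t offset, uint8_t byte) {
    size_t i = (rb->start + rb->in_seq_len + offset) % RECV_BUF_SIZE;
    rb->data[i] = byte;
    rb->ooo_bitmap[i / 8] |= (uint8_t)(1u << (i % 8));
}

/* Once a gap is filled, promote contiguous out-of-order bytes to
 * in-sequence data by scanning the bitmap forward. */
static void recvbuf_promote(struct recvbuf *rb) {
    while (rb->in_seq_len < RECV_BUF_SIZE) {
        size_t i = (rb->start + rb->in_seq_len) % RECV_BUF_SIZE;
        if (!(rb->ooo_bitmap[i / 8] & (1u << (i % 8)))) {
            break;             /* next byte still missing */
        }
        rb->ooo_bitmap[i / 8] &= (uint8_t)~(1u << (i % 8));
        rb->in_seq_len++;
    }
}
```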

5 Roadmap and Methodology

We use TCPlp to explore the interactions between full-scale TCP and a low-power link, seeking to characterize the behavior of TCP in the LLN setting. Sections 6, 7, and 9, and Appendices A and C, comprise a measurement study with extensive experiments. Our experiments are performed using the Hamilton/RIOT OS platform; results from Firestorm/TinyOS are occasionally provided for comparison. Although the AT86RF233 radio supports high, non-standard data rates up to 2 Mb/s, we use the standard 250 kb/s data rate for fair comparison with prior work.

Our experimental study is built step-by-step as follows:

Preliminary study. We begin our measurement study in §6, by characterizing how TCP interacts with a low-power network stack, resource-constrained hardware, and a low-bandwidth wireless link. To isolate and measure these interactions, we conduct our experiments in a low-loss single-hop environment, using an always-on MAC protocol. As depicted in Figure 2, an embedded TCP endpoint (Hamilton) communicates using TCP with a conventional Linux endpoint (computer) via a border router (combination of a Hamilton and a Raspberry Pi). The embedded endpoint is placed at a distance of about 5.5 meters from the border router. IEEE 802.15.4 frames were sent on channel 26, which is the farthest channel from those used by WiFi routers.

Figure 2: Our experimental setup

Multi-hop study. We continue our study in §7 and §8 by characterizing the behavior of TCP over multiple wireless hops, using our findings to develop a simple model for TCP performance in LLNs. We use a testbed of Hamilton nodes deployed in an office (Figure 3). Although our focus is on TCP behavior, not routing behavior, it is important to use a real routing protocol, to ensure that the topologies used in our experiments are realistic. We use OpenThread [81], an open-source implementation of Thread. We did not interfere in OpenThread’s routing decisions, except where explicitly mentioned for experimental consistency.

Figure 3: Snapshot of uplink routes in OpenThread topology at transmission power of -8 dBm. Node 1 is the border router through which packets enter and leave the LLN.

Application study. We complete our study in §9 by evaluating TCPlp in the context of a real IoT application, namely the deployment of anemometers (§3). Using our multi-hop testbed (Figure 3), we compare TCPlp to CoAP [21], a representative LLN-specialized reliability protocol.

For the Preliminary and Multi-Hop Studies, throughput is our primary performance metric, because (1) it is important that a transport-layer protocol efficiently uses the limited bandwidth provided by a low power link, and (2) characterizing TCP performance, especially as it relates to congestion control, is most meaningful in a high-throughput scenario. For the Application Study, we additionally focus on power consumption, because power consumption is most meaningful to characterize in the context of an application.

6 TCP in a Low Power Embedded Network

In this section, we characterize how full-scale TCP interacts with a low power network stack, resource-constrained hardware, and a low-bandwidth link.

6.1 Impact of Maximum Segment Size

Physical Layer     Bandwidth   Frame Size   Tx Time
Gigabit Ethernet   1 Gb/s      1500 B       0.012 ms
Fast Ethernet      100 Mb/s    1500 B       0.12 ms
WiFi               54 Mb/s     1500 B       0.22 ms
Ethernet           10 Mb/s     1500 B       1.2 ms
IEEE 802.15.4      250 kb/s    127 B        4.1 ms
Table 5: Comparison of IEEE 802.15.4 with traditional TCP/IP links.
Header          First Frame     Other Frames
IEEE 802.15.4   23 B            23 B
6LoWPAN Frag.   5 B             5 B to 12 B
IPv6            2 B to 28 B     0 B
TCP             20 B to 44 B    0 B
Total           50 B to 107 B   28 B to 35 B
Table 6: Impact of 6LoWPAN fragmentation on header overhead for protocols used. Header overhead is highly variable in the first frame, and significantly less in subsequent frames.
Figure 4: Effect of varying the Maximum Segment Size

In traditional TCP/IP networks, it is customary to set the Maximum Segment Size (MSS) as large as is supported by the links used, in order to minimize header overhead. IEEE 802.15.4 frames, however, are an order of magnitude smaller than frames in traditional networks, as demonstrated in Table 5. Therefore, low power networks use an adaptation layer called 6LoWPAN [80] that allows IP packets to be fragmented into multiple 802.15.4 frames. 6LoWPAN also provides a mechanism to compress the large IPv6 header.

Choosing a small MSS incurs heavy overhead due to TCP/IP headers, which are significant compared to the maximum 802.15.4 frame size (Table 6). Using a large MSS, however, relies on 6LoWPAN fragmentation, which decreases reliability because the loss of one frame results in the loss of an entire packet. Existing work [14] has identified this tradeoff and investigated it in simulation in the context of power consumption. We investigate the tradeoff in the context of achievable throughput in a live network.

Figure 4 shows the bandwidth as the MSS varies. We could not perform the experiment for MSS = 1 frame, because the Linux TCP stack does not respect the negotiated MSS when it is too small. As expected, we see poor performance at small MSS due to header overhead. Given that performance gains diminish when the MSS becomes larger than 5 frames, we use MSS = 5 frames (408 bytes in RIOT’s GNRC network stack) for our additional experiments. Despite the small frame size of IEEE 802.15.4, we can effectively amortize header overhead for TCP using 6LoWPAN fragmentation and TCP’s MSS.
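For a rough sense of this amortization, we can estimate the per-segment header overhead implied by Table 6 (a back-of-the-envelope calculation; exact figures depend on header compression):

```latex
% With MSS = 1 frame: at least 50 B of headers in a 127 B frame.
\text{overhead}_{1\,\mathrm{frame}} \geq \frac{50\ \mathrm{B}}{127\ \mathrm{B}} \approx 39\%
% With MSS = 5 frames: headers total between 50 + 4(28) = 162 B and
% 107 + 4(35) = 247 B, out of 5 x 127 = 635 B on the air.
\text{overhead}_{5\,\mathrm{frames}} \approx \frac{162\ \mathrm{B}\ \text{to}\ 247\ \mathrm{B}}{635\ \mathrm{B}} \approx 26\%\ \text{to}\ 39\%
```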

6.2 Impact of Buffer Size

(a) Goodput of TCP (downlink)
(b) RTT of TCP (downlink)
Figure 5: Effect of varying window (receive buffer) size
                      [112]        [50]       [66]       [53, 52]     [37]         This Paper (Hamilton Platform)
TCP Stack             uIP          uIP        BLIP       Arch Rock    Linux        TCPlp (RIOT OS, OpenThread)
Max. Seg. Size        1 Frame      4 Frames   1 Frame    1024 bytes   Not stated   5 Frames
Window Size           1 Seg.       1 Seg.     1 Seg.     1 Seg.       Variable     1848 bytes (4 Seg.)
Goodput (One Hop)     1.5 kb/s     12 kb/s    4.8 kb/s   15 kb/s      Not stated   75 kb/s
Goodput (Multi-Hop)   0.55 kb/s*   12 kb/s    2.4 kb/s   9.6 kb/s     16 kb/s      20 kb/s
Table 7: Comparison of TCPlp to existing TCP implementations used in network studies over 802.15.4 networks. (* Number not stated in the paper; inferred from graph, for three hops.)

Whereas primitive embedded TCP implementations, like uIP, allow only one in-flight segment, full-scale TCP requires buffers that occupy a significant amount of memory (§4.3). In this section, we vary the size of the buffers, and thereby the window size, to study how it affects the bandwidth. We expect throughput to increase with the window size, with diminishing returns once the window size exceeds the bandwidth-delay product (BDP). The result is shown in Figure 5(a). Because we solved the deaf-listening problem by implementing CSMA in software (§4), this graph matches the expectation. Goodput levels off at approximately 1.5 KiB, indicating that the buffer size needed to fill the BDP fits comfortably in memory. Indeed, the BDP in this case is consistent with this leveling-off point when computed using the effective link bandwidth, which is roughly half the nominal 250 kb/s owing to the SPI transfer overhead identified in §6.4.

Goodput at a window size of two packets (816 bytes) is unusually high. Upon investigation, we found that this is an artifact of RIOT’s network stack’s concurrency architecture and of having only one TCP flow in this controlled study.

6.3 Impact of Network Stack Design

We consider TCP throughput between two embedded nodes connected directly over the IEEE 802.15.4 link, over a single hop without any border router. In this setup, we achieve 63 kb/s of throughput over a TCP connection between two Hamilton motes using RIOT’s GNRC network stack. For comparison, we achieve 71 kb/s using the BLIP stack on Firestorm, and 75 kb/s using the OpenThread network stack on Hamilton. This suggests that our results are reproducible across multiple platforms and embedded network stacks. The minor performance degradation in GNRC is partially explained by its greater header overhead due to implementation differences, and by its IPC-based thread-per-layer concurrency architecture [26], indicating that the implementation of the underlying network stack can itself affect TCP performance.

6.4 Upper Bound on Single-Hop Goodput

The delivered goodput over TCP (≈70 kb/s) is substantially higher than in prior work (Table 7). It is also, however, substantially less than the 250 kb/s link capacity afforded by IEEE 802.15.4. Overhead from the hardware platform and operating system, mostly SPI transfer to the radio [82], explains this discrepancy. Although transmitting a full-sized 127 byte packet takes 4.1 ms in the air, we measure the actual time to transmit the packet, including the additional overhead, to be 8.2 ms. Sending a single five-frame TCP segment conveys 462 bytes of application-layer data, and takes 41 ms (at minimum, assuming no link-layer retries or additional CSMA attempts are required). With delayed acknowledgments, half of these segments require a TCP ACK to be sent by the receiver, adding on average another 4.1 ms of overhead to each segment. Therefore, an upper bound on achievable goodput is approximately 82 kb/s for a single hop. Our empirical figures are near this upper bound (up to 75 kb/s), indicating that TCP-layer CPU processing does not limit throughput.
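Concretely, the bound follows from the measured transmit times (a back-of-the-envelope calculation using the figures above):

```latex
% 5 frames x 8.2 ms = 41 ms per segment, plus one delayed ACK
% (~8.2 ms) per two segments, i.e., 4.1 ms per segment on average.
B_{\max} = \frac{462\ \mathrm{B} \times 8\ \mathrm{bits/B}}{41\ \mathrm{ms} + 4.1\ \mathrm{ms}} \approx 82\ \mathrm{kb/s}
```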


7 TCP Over Multiple Wireless Hops

(a) TCP goodput, one hop
(b) TCP goodput, three hops
(c) RTT, three hops
(d) Frames sent, three hops
Figure 6: Effect of varying time between link-layer retransmissions. Reported “segment loss” is the loss rate of TCP segments, not individual IEEE 802.15.4 frames. It includes only losses not masked by link-layer retries.

This section investigates the behavior of TCP over multiple IEEE 802.15.4 hops.

7.1 Hidden Terminals and Link Retries

Prior work over traditional WLANs has shown that hidden terminals are an obstacle to achieving good TCP performance over multiple wireless hops [43]. Data packets and acknowledgments may collide, degrading performance even for a single flow in isolation. Using RTS/CTS for hidden terminal avoidance has been shown to be effective in WLANs. However, this technique has an unacceptably high overhead in LLNs [106] because data frames are small, comparable in size to the additional control frames required.

In this section, we show how to avoid hidden terminals by adding a delay between link-layer retries, in addition to CSMA backoff. After a failed link-layer transmission, a node waits for a random duration, bounded by a maximum retry delay R, before attempting to re-transmit the frame. The idea is that if two frames collide due to a hidden terminal, they are likely to be re-transmitted at different times, avoiding future collisions.
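A sketch of this retry-delay logic is shown below. The uniform distribution over [0, R) and the helper functions are illustrative assumptions; the exact distribution used in our modified OpenThread may differ.

```c
/* Sketch of a randomized delay between link-layer retries, layered on
 * top of CSMA backoff. Helpers are placeholders, not real APIs. */
#include <stdint.h>

void sleep_ms(uint32_t ms);
uint32_t rand_uniform(uint32_t lo, uint32_t hi);  /* uniform in [lo, hi) */

/* Called after a failed transmission, before the next link retry. */
void wait_before_link_retry(uint32_t retry_delay_max_ms /* "R" */) {
    /* Two colliding hidden-terminal transmissions draw independent
     * delays, so their retries are unlikely to collide again. */
    sleep_ms(rand_uniform(0, retry_delay_max_ms));
}
```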

We modified OpenThread, which previously had no delay between link retries, to implement this behavior. Then, we measured the performance of TCP, by allowing OpenThread to choose a topology and running TCP over a path with the desired number of hops. As expected, single-hop performance (Figure 6(a)) decreases somewhat as the delay between link retries increases; hidden terminals are not an issue in that setting. Packet loss is high for the multihop experiment (Figure 6(b)) when the link retry delay is 0, as is to be expected from hidden terminals. Adding a small delay between link retries, however, is effective at reducing this packet loss. Making the delay too large raises the RTT (Figure 6(c)), degrading performance as the BDP exceeds the window size (4 segments, or 1848 bytes).

Figure 6(d) shows that using a larger link retry delay makes more efficient use of the network, as fewer frames are transmitted in the network in total (fewer link retries were needed, on average, to send each frame). This suggests that a moderate retry delay is preferable to a small one, even though both provide the same throughput.

7.2 Upper Bound on Multi-Hop Goodput

Comparing Figures 6(a) and 6(b), the reader may notice that the goodput we are able to achieve over three wireless hops is substantially smaller than the goodput we are able to achieve over a single hop. Prior work has observed similar throughput reductions over multiple hops [66, 82]. This is due to radio scheduling constraints inherent in the multihop setting, which we describe in this section. Our analysis ignores overhead due to TCP, and considers only the transfer of frames along a path of nodes n1, n2, n3, and so on. Let B be the bandwidth achievable over a single hop.

Consider a two-hop setup: n1 → n2 → n3. n2 cannot receive a frame from n1 while sending a frame to n3, because its radio cannot transmit and receive simultaneously. Thus, the maximum achievable bandwidth over two hops is B/2.

Now consider a three-hop setup: n1 → n2 → n3 → n4. By the same argument, if a frame is being transferred over the hop n2 → n3, then neither n1 → n2 nor n3 → n4 can be active. Furthermore, if a frame is being transferred over n1 → n2, then n3 cannot transfer a frame at the same time; if it did, its frame would collide, at n2, with the frame being transferred over n1 → n2. So, at most one of the three hops can transfer a frame at a time. Therefore, the maximum achievable bandwidth is B/3.

In setups with more than three hops, every set of three adjacent hops is subject to this constraint. However, the first hop and fourth hop could transfer frames simultaneously. Therefore, the maximum bandwidth is still B/3.

To validate this analysis, we varied the number of hops from 1 to 4, and measured the achievable throughput, keeping the delay between link retries fixed. Goodput over one hop was 64.1 kb/s; over two hops, 28.3 kb/s; over three hops, 19.5 kb/s; and over four hops, 17.5 kb/s. (For the four-hop experiment, we decreased the transmission power used by the radio, so that OpenThread would select a route with more hops.) This roughly fits the model.
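As a sanity check, we can compare these measurements against the bounds from the analysis, taking the measured single-hop goodput as B; the measurements fall slightly below the bounds, as expected, since the bounds ignore TCP overhead:

```latex
\begin{aligned}
B   &= 64.1\ \mathrm{kb/s} \\
B/2 &\approx 32.1\ \mathrm{kb/s} && \text{vs. } 28.3\ \mathrm{kb/s} \text{ measured (two hops)} \\
B/3 &\approx 21.4\ \mathrm{kb/s} && \text{vs. } 19.5\ \mathrm{kb/s} \text{ measured (three hops)} \\
B/3 &\approx 21.4\ \mathrm{kb/s} && \text{vs. } 17.5\ \mathrm{kb/s} \text{ measured (four hops)}
\end{aligned}
```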

This analysis justifies why the same window size works well for both the one-hop experiments and the three-hop experiments in Section 7.1. Although the RTT is three times higher, the bandwidth-delay product is approximately the same. Crucially, this means that the 2 KiB buffer size we determined in §6.2, which fits comfortably in memory, remains applicable for up to three wireless hops. Increasing the window size used for three-hop experiments did not appreciably affect the achievable goodput. In contrast, we had to increase the window size in order to achieve 17.5 kb/s for four hops (in the previous paragraph’s experiment), which is consistent with our analysis.

7.3 TCP Congestion Control in LLNs

Recall that, due to memory constraints and low achievable throughput, we use a small window size of only 1848 bytes (4 TCP segments). This profoundly impacts TCP’s congestion control mechanism. For example, consider Figure 6(b). It is remarkable that throughput is almost the same at the smallest and largest retry delays we tested, despite a segment loss rate of 6% in the first case and less than 1% in the second.

(a) TCP cwnd over time, three hops
(b) TCP loss recovery, three hops
Figure 7: Congestion behavior of TCP over IEEE 802.15.4

Figure 7(a) depicts the congestion window over a 100 second interval during the experiment with the smallest retry delay (6% segment loss). (All congestion events in Figure 7(a) were fast retransmissions, except for one timeout. cwnd is temporarily set to 1 MSS during fast retransmissions due to an artifact of FreeBSD’s implementation of SACK recovery; to make the graph easier to read, we removed fluctuations in cwnd that resulted from “bad retransmissions” which the FreeBSD implementation, in the course of its normal execution, discovered and corrected.) Interestingly, the cwnd graph is far from the popular saw-tooth shape (e.g., Figure 11(b) in [16]); cwnd is almost always maxed out even though losses are frequent (6%). This behavior is specific to small buffers. In traditional environments, where links have higher throughput and buffers are large, it takes longer for cwnd to recover after packet loss, greatly limiting the sending rate when packet losses are frequent. In contrast, in LLNs, where send/receive buffers are small, the congestion window can recover to its maximum size quickly after packet loss, making TCP performance robust to packet loss.

Congestion control behavior also provides additional insight into loss patterns, as shown in Figure 7(b). Fast retransmissions (used for isolated losses) become less frequent as the retry delay R increases, suggesting that they are primarily induced by hidden-terminal-related losses. On the other hand, TCP timeouts do not become less frequent as R is increased, suggesting that they are induced by another source of packet loss, such as route changes or wireless interference.

7.4 Simultaneous TCP Flows

Given that TCP’s congestion control mechanisms behave differently in LLNs than in traditional networks, it is natural to ask if they are still effective. To study this question, we set up two TCP flows that have different source nodes but both transmit to the border router, and measured whether the throughput was shared fairly between the two flows. We found that, initially, throughput sharing with a large window size was not fair due to tail drops at one of the relay nodes. Implementing Random Early Detection (RED), however, solved this problem. Thus, TCP congestion control is effective for multiple TCP flows in LLNs. Because the results are similar to observations in traditional networks, we relegate our data and experiments to Appendix A.

8 Model of TCP Performance

Models for TCP performance have been developed in traditional networks [73, 79, 83]. One accepted model of TCP’s macroscopic behavior [72, 79] is

\[ B = \frac{\mathrm{MSS}}{\mathrm{RTT}} \cdot \frac{C}{\sqrt{p}} \tag{1} \]

where B, the TCP goodput, is written in terms of the maximum segment size MSS, the round-trip time RTT, and the packet loss rate p (C is a constant, conventionally taken to be \(\sqrt{3/2}\)). Some existing work examining TCP in LLNs makes use of this formula to ground new algorithms [55]. However, this model assumes that cwnd is limited by packet loss, not by the buffer size. Our findings in Section 7.3 suggest that this assumption does not hold in LLNs, because the window size is much smaller.

This motivates us to develop a new model of TCP performance according to our observations. We note that comprehensive models of TCP, which take window size limitations into account, already exist [83]. However, our goal here is not comprehensiveness, but to develop a simple model that, like Equation 1, provides clear insights into TCP behavior, albeit in LLNs instead of traditional networks.

The intuition behind our model is as follows. Observations in Section 7.3 suggest that we can neglect the time it takes the congestion window to recover after packet loss. We model a TCP connection as binary: either it is sending data with a full window, or it is not sending new data. The periods where TCP does not send new data are caused by packet loss, after which time is spent performing retransmissions.

Based on this assumption, and approximating each packet loss as stalling the connection for roughly one additional round-trip time of retransmission and recovery, we derive the following model (see Appendix B for the full derivation),

\[ B = \frac{\mathrm{MSS}}{\mathrm{RTT}} \cdot \frac{W}{1 + pW} \tag{2} \]

where W is the window size in packets, sized to the bandwidth-delay product. Figures 6(a) and 6(b) include the predicted goodput, calculated according to Equation 2 using the empirical RTT and segment loss rate for each experiment, as dotted lines. Our model of TCP goodput closely matches the empirical results. In contrast, Equation 1, unaware that buffer sizes are small, predicts goodput to be hundreds of kb/s for the single hop experiment, and over 50 kb/s for the three-hop experiment, given the empirical packet loss rates.

We can compare Equations 1 and 2 to understand, at a macroscopic level, the differences in TCP’s behavior in the LLN setting compared to traditional networks. Our primary observation is that TCP in LLNs is more robust to small amounts of packet loss than TCP in traditional WLANs. This is due to the additive constant in the denominator of Equation 2, which makes B less sensitive to p when p is small. Furthermore, because the window W is small in LLNs, the pW term stays small at the loss rates we observe, so the reduction in bandwidth due to small amounts of packet loss is lower in Equation 2 than in Equation 1.

9 TCP in a WSN Application

We evaluate TCPlp in the context of the anemometer application described in §3. Our goal is to demonstrate that TCP is practical for real IoT use cases.

9.1 Constrained Application Protocol

To provide context for the performance of TCPlp, we compare it to an existing LLN protocol that provides reliability, namely the Constrained Application Protocol (CoAP) [98]. We compare against CoAP because it is gaining momentum in both academia [28, 101, 70, 94, 20, 96] and industry [3, 60], with adoption by Cisco [31, 5], Nest/Google [4], and Arm [2, 1]. CoAP was designed to benefit not only resource-constrained devices that cannot run TCP, but also more powerful IoT devices by allowing them to “use less power and fewer network resources” [21].

We ported the CoAP implementation in Contiki OS, which has a mature network stack and is commonly used in LLN studies, to run in our OpenThread/RIOT environment. For the server-side implementation of CoAP, we use Californium [70], a feature-rich Java implementation of CoAP. To send data in batches, we use CoAP’s blockwise transfer feature [22]. We discovered, however, that Californium’s implementation of blockwise transfer drops the entire batch if even one packet in the batch is not successfully transferred (which could happen if, e.g., the client gives up on that block after four retries). To present CoAP in the best possible light and make the experiments comparable, we implement our own blockwise transfer that does not have this issue.

We also include CoCoA [20], a research proposal that augments CoAP with RTT estimation, in our evaluation. We use an open-source implementation [19] of CoCoA written for an older version of Contiki OS, and adapt it to work with our port of Contiki’s CoAP implementation.

9.2 Experimental Setup

We use the same testbed as in the previous sections, and set the radio transmission power to -8 dBm, so that OpenThread formed a 3-to-5 hop topology. A snapshot of the topology is shown in Figure 3. In our experiments, nodes sent data to a server running on Amazon EC2. The RTT from the border router to the cloud server was much smaller than the RTT within the low-power mesh itself (where uplink packets each consist of 5 link-layer frames).

For the anemometer, each sensor reading is 82 bytes. Nodes 12–15 each generate one reading every 1 second, and send it to the cloud server using either TCP or CoAP. We use most of the remaining RAM as an application-layer queue to prevent data from being lost if CoAP or TCP is in backoff after packet loss and cannot send out new data immediately. For TCP, the application-layer queue is configured to store up to 64 readings; for CoAP, it can store 104 readings. We use a larger queue for CoAP because an additional 40 readings fit in TCP’s send buffer.

By default, leaf nodes in OpenThread (§3.2) poll their parent for downstream packets once every four minutes. Both TCP and CoAP, however, require acknowledgments to every message. Therefore, we decrease the data request interval for a leaf node to 100 ms when it is expecting a TCP ACK or CoAP response, and use the default four minutes otherwise. (The shorter interval must be significantly smaller than the time between data samples, so that a CoAP node can “catch up” if it falls behind due to loss; our choice of 100 ms is comparable to ContikiMAC’s default sleep interval of 125 ms. We can apply this idea even when all nodes are duty cycled; see Appendix C.)
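A sketch of this adaptive polling policy follows; the constants and helper functions are illustrative, not OpenThread's actual API.

```c
/* Sketch of the adaptive data-request (poll) interval: poll rapidly
 * only while a transport-layer response is outstanding. */
#include <stdbool.h>
#include <stdint.h>

#define POLL_FAST_MS    100u                  /* expecting TCP ACK / CoAP response */
#define POLL_DEFAULT_MS (4u * 60u * 1000u)    /* OpenThread default: 4 minutes */

bool transport_response_expected(void);  /* unACKed TCP data or pending CoAP request? */
void set_poll_interval_ms(uint32_t ms);

void update_poll_interval(void) {
    if (transport_response_expected()) {
        set_poll_interval_ms(POLL_FAST_MS);
    } else {
        set_poll_interval_ms(POLL_DEFAULT_MS);
    }
}
```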

Data may be lost due to overflow at the application-layer queue. We measure a solution’s reliability as the proportion of generated sensor readings that are delivered to the server. Measured this way, reliability depends not only on the transport’s efficiency (TCP vs. CoAP), but also on the rate at which readings are generated. A solution with only 50% reliability may very well achieve 100% reliability if the time between new data samples is sufficiently increased.

We instrumented RIOT’s radio driver to measure the radio duty cycle, the proportion of time during which the radio was not in its low-power sleep mode, and we instrumented RIOT’s scheduler to measure the CPU duty cycle, the proportion of time during which a thread was executing on the CPU. These measurements reflect the power consumption.

9.3 Performance in Favorable Conditions

We begin by performing experiments in our testbed at night; there is still a non-negligible amount of interference in our testbed at night, but it is much less and more predictable than during the day. We compare three setups: (1) default CoAP, (2) CoAP with CoCoA, and (3) TCPlp. We also compare two sending scenarios: (1) sending each sensor reading right away (“No Batching”), and (2) sending sensor readings in batches (“Batching”). For the “Batching” scenario, each sensor puts readings into the application-layer queue as they are collected, and only invokes the transport layer (TCP or CoAP) to drain the queue once it contains 64 readings. (We could have used flash storage to provide a deeper queue and larger batches, but flash storage has implications for power consumption that are outside the scope of this paper.) We size each packet in a CoAP batch to be the same size as segments in TCP (five frames).
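The following sketch shows the batching logic we describe; the types, queue helpers, and transport hook are illustrative placeholders.

```c
/* Sketch of the "Batching" workload: readings accumulate in an
 * application-layer queue, and the transport is invoked only once
 * BATCH_SIZE readings are queued. */
#include <stddef.h>
#include <stdint.h>

#define READING_SIZE 82   /* bytes per sensor reading */
#define BATCH_SIZE   64   /* readings per batch */

size_t queue_depth(void);                       /* readings currently queued */
void   queue_push(const uint8_t *reading);      /* drops oldest on overflow */
size_t queue_pop_batch(uint8_t *buf, size_t max_readings);
void   transport_send(const uint8_t *buf, size_t len);  /* TCP write or CoAP block */

void on_sensor_reading(const uint8_t reading[READING_SIZE]) {
    queue_push(reading);
    if (queue_depth() >= BATCH_SIZE) {
        static uint8_t batch[BATCH_SIZE * READING_SIZE];
        size_t n = queue_pop_batch(batch, BATCH_SIZE);
        transport_send(batch, n * READING_SIZE);
    }
}
```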

(a) Radio duty cycle
(b) CPU duty cycle
Figure 8: Effect of batching on power consumption

All setups achieved 100% reliability due to end-to-end acknowledgements (figures are omitted for brevity). Figures 8(a) and 8(b) show that all three protocols consume similar power; TCP is comparable to the LLN-specific solutions.

On the other hand, both the radio and CPU duty cycle are significantly smaller with batching than without batching. In the “No Batching” case, each 82-byte sensor reading requires two frames to transmit with both TCP and CoAP, even though most of the second frame is unused, resulting in inefficiency. In contrast, by sending data in batches, nodes can amortize the cost of sending data and waiting for a response. Given this result, batching sensor readings is the more realistic workload, so we use it to continue our evaluation of TCPlp and CoAP.

9.4 Resilience to Packet Loss

In this section, we inject (uniformly random) packet loss at the border router and measure how well each solution performs in the presence of packet loss. The result is shown in Figure 9. Note that the injected loss rate corresponds to the packet-level loss rate after link retries and 6LoWPAN reassembly. Although we plot loss rates up to 21%, we consider loss rates above 15% exceptional; we focus on loss rates up to 15%. A number of WSN studies have already achieved nearly 100% end-to-end packet delivery using only link/routing layer techniques (not transport) [36, 64, 65]. In our testbed environment, we have not observed the loss rate exceed 15% for an extended time, even with wireless interference.

(a) Reliability
(b) Transport-layer retransmissions
(c) Radio duty cycle
(d) CPU duty cycle
Figure 9: Performance with injected packet loss

All three protocols perform well at loss rates up to 10%. As the loss rate increases to 15%, however, CoCoA performs poorly, significantly worse than CoAP or TCP. The reason is that CoCoA attempts to measure the RTT for retransmitted packets, and conservatively calculates the RTT relative to the first transmission. This results in an inflated RTT value that causes CoCoA to delay unnecessarily when recovering from packet loss, causing the application-layer queue to overflow. In contrast, full-scale TCP is immune to this problem despite measuring the RTT, because the TCP timestamp option allows TCP to unambiguously determine the RTT even for retransmitted segments. As a result, both CoAP and TCP achieve nearly 100% reliability at packet loss rates less than 15%, as shown in Figure 9(a).

Figures 9(c) and 9(d) also show that, overall, TCP and CoAP perform comparably in terms of radio and CPU duty cycle. TCPlp appears to have a slightly lower radio/CPU duty cycle at moderate packet loss. This may be due to TCP’s sliding window, which allows it to tolerate some ACK losses, and therefore perform fewer retransmissions (Figure 9(b)). TCP can also trigger retransmissions based on duplicate ACKs and Selective ACKs, perhaps resulting in additional efficiency. Figure 9(b) does indeed show that, although most of TCP’s retransmissions are explained by timeouts, a significant portion were triggered in other ways. In contrast, CoAP must rely on a timeout to detect every loss, which has intrinsic limitations [111].

With exceptionally high packet loss rates (above 15%), CoAP achieves higher reliability than TCP, because it “gives up” after just 4 retransmissions; it exponentially increases the wait time between those retransmissions, but then resets its RTO to 3 seconds when giving up and moving on to the next packet. In contrast, TCP performs up to 12 retransmissions with exponential backoff. The result is that TCP backs off further than CoAP upon consecutive packet losses, as witnessed by the smaller retransmission count in Figure 9(b), causing the application-layer queue to overflow more. We note that this performance gap could be closed by parameter tuning.

9.5 Performance in Lossy Conditions

Next, we compare TCPlp and CoAP in an environment with real wireless interference and link dynamics. To this end, we performed experiments over the course of a full day to include regular human activity in an office. We run the same workload as above, using batching. To ensure that TCPlp and CoAP are subject to similar interference patterns, we run them simultaneously. We hardcoded the first hop from each TCPlp or CoAP node to be the same, to keep the interference pattern seen by TCP and CoAP as similar as possible.

Figure 10: Radio duty cycle of TCP and CoAP flows in a lossy wireless environment, in one representative trial

Initially, radio duty cycle was dominated by the time a leaf node spends listening for a downstream packet (called an “indirect message”) after sending a data request message to its parent. Therefore, we made several improvements to OpenThread: (1) we prioritized indirect messages over the current packet being sent, to minimize the time leaf nodes spend waiting for a downstream packet, (2) we enabled link-layer retries for indirect messages, to decrease the frequency of data request timeouts at leaf nodes, and (3) we decreased the data request timeout and performed link-layer retries more rapidly for indirect messages, to deliver them to leaves more quickly. Given the high level of daytime interference, we decreased the MSS from five frames to three frames. These changes improved performance of both TCP and CoAP.

Figure 10 depicts the radio duty cycle of TCP and CoAP for a trial representative of our overall results. CoAP maintains a lower duty cycle than TCPlp outside of working hours, when there is less interference; TCPlp has a slightly lower duty cycle than CoAP during working hours, when there is more wireless interference. TCPlp’s better performance at a higher loss rate is explained by our results from §9.4. At a lower packet loss rate, TCP performs worse than CoAP due to hidden terminal losses; more retries, on average, are required for indirect messages, causing leaf nodes to stay awake longer after sending a data request message. Overall, CoAP and TCPlp perform similarly (Table 8).

Protocol             Reliability   Radio DC   CPU DC
TCPlp                99.3%         2.29%      0.973%
CoAP                 99.5%         1.84%      0.834%
Unrel., no batch     93.4%         1.13%      0.52%
Unrel., with batch   95.3%         0.734%     0.30%
Table 8: Performance of TCPlp and CoAP in the testbed for a full day, averaged over multiple trials

9.6 The Cost of Reliability

Not all protocols used in the LLN literature provide reliability. Even CoAP has the option to send “nonconfirmable” messages that are delivered unreliably—they are simply sent as UDP packets without any transport-layer acknowledgment or retransmissions. Note that, for unreliable UDP-based solutions, such as nonconfirmable messages in CoAP, the data request interval can remain at OpenThread’s four-minute default, because no downstream packets are expected at the transport layer.

For completeness, we measured the performance of CoAP, in the same setups as before, without reliability. The results are shown in the last two rows of Table 8. For this setup, using a reliability protocol increases the radio/CPU duty cycle by 3x compared to the unreliable alternative, in exchange for nearly 100% reliability. (Neither protocol achieves exactly 100% reliability because we are sampling at a high rate, 82 B/s; a WSN application with a lower sample rate may see 100% reliability with both CoAP and TCP.) Although the duty cycle increases by approximately 3x when using a reliability protocol, the decrease in battery life will actually be less than 3x, because the power consumed to obtain data samples, and the power consumed when idle, would likely contribute significantly to the total power consumption.
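The following back-of-the-envelope calculation illustrates this point; the current draws are assumptions chosen only for illustration, while the duty cycles are taken from Table 8.

    % Average current decomposes into a fixed baseline (sleep current,
    % sensor sampling) plus a term proportional to radio duty cycle d:
    \[ I_{\mathrm{avg}} = I_{\mathrm{base}} + d \cdot I_{\mathrm{radio}} \]
    % Assume I_base = 50 uA and I_radio = 10 mA. Then:
    \[
      \begin{aligned}
        \text{unreliable, with batching } (d = 0.734\%)&:\;
          50\,\mu\mathrm{A} + 73\,\mu\mathrm{A} \approx 123\,\mu\mathrm{A}\\
        \text{TCPlp } (d = 2.29\%)&:\;
          50\,\mu\mathrm{A} + 229\,\mu\mathrm{A} \approx 279\,\mu\mathrm{A}
      \end{aligned}
    \]
    % The duty cycle roughly triples, but battery life falls only by a
    % factor of about 279/123, i.e., roughly 2.3x.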

10 Discussion

TCP is the de facto reliability protocol in the Internet. Over the past 30 years, new physical-, datalink-, and application-layer protocols have evolved alongside TCP, and supporting good TCP performance was often a consideration in their design. TCP serves as an obvious performance benchmark for new transport-layer proposals, and such proposals are often based on TCP.

In contrast, when LLN research began nearly two decades ago, running TCP was infeasible due to the severe resource constraints of early LLN hardware. The original system architecture for networked sensors [51], for example, targeted an 8-bit MCU with only 512 bytes of memory. It naturally came to be taken for granted that TCP is too heavy for LLNs. As LLN research progressed, canonical LLN protocols were designed without taking TCP into account, resulting in incompatibilities with TCP, such as deaf listening in low-power radios (§4) and scheduling problems in OpenThread’s duty cycling (§9.5).

Our work has demonstrated that, after two decades of hardware evolution, TCP is now viable in LLNs and performs comparably to LLN-specialized protocols like CoAP. Given these results, TCP may benefit LLNs in several ways:

Interoperability. Given that Internet services have been built on TCP/IP for decades, using TCP in LLNs strengthens their interoperability as part of the IoT. Other approaches usually require application-layer gateways, which constrain applications to a specific application protocol and encourage vertically integrated silos [109]. Enabling TCP end-to-end, we believe, is a step toward realizing the IoT vision of diverse, intelligent devices automatically discovering and using the resources provided by one another.

Versatility. While LLN-specific protocols like CoAP are designed to support a small set of well-defined use cases, TCP implements a duplex bytestream abstraction that is more general than LLN-specialized protocols. For example, TCP could be used for an interactive shell for configuration/debugging; this is not possible with existing LLN protocols designed for transfer of sensor readings.

Layering. By contributing a sophisticated transport layer, TCP brings more structure to the LLN stack, which is valuable in its own right [29]. Furthermore, the transport layer has unique insight into traffic patterns (e.g., whether an ACK is expected) that can inform the link-layer duty cycling protocol. A duty-cycling protocol like the one we used in §9 could be further optimized using TCP-layer information, such as TCP’s RTT estimate, timeout setting, and header information (e.g., the “PSH” bit). Using an informative transport layer like TCP helps to frame the problem.

Although TCP may bring value to LLNs in these ways, it is not a panacea. History in the traditional Internet tells us, however, that despite its shortcomings, TCP remains the de facto reliability protocol, except for specific use cases where another protocol provides a substantial benefit (e.g., video conferencing). We propose that TCP serve both as the default reliability protocol in LLNs and as a benchmark to validate new transport proposals in LLNs, much as it does in the traditional Internet. Even where new LLN protocols are needed, TCP may be a valuable starting point for designing them. For example, we saw in §9 that CoAP has slightly lower power consumption at night, whereas TCP performs comparably or better during the day, in the presence of more interference. Once we understand the advantages and disadvantages of each protocol, can we create a meaningful synergy of their techniques?

11 Conclusion

This paper presented the first systematic study of full-scale TCP’s behavior in LLNs. To this end, we provided TCPlp, a full-scale TCP implementation for LLNs based on the protocol logic in the FreeBSD Operating System, and extensively evaluated TCPlp in various scenarios. Our findings are that:

  • Send/receive buffers large enough to support performant TCP in LLNs fit comfortably in the available memory on LLN platforms.

  • Adding a random delay between link retries is an effective solution to the hidden terminal problem, for TCP.

  • Owing to smaller buffer sizes, TCP’s congestion control mechanism behaves differently in LLNs, making TCP more resilient to packet loss.

  • TCP’s power consumption, in the context of a real IoT application, is comparable to that of LLN-specialized protocols like CoAP.

These findings lead us to conclude that commodity LLN hardware has crossed a critical resource threshold: it can now run a full-scale TCP stack, and TCP is well-suited to LLNs after all. We hope that, as part of the LLN architecture, full-scale TCP can both benefit LLNs as a versatile transport layer and provide seamless interoperability for the IoT as low-power networks become mainstream.

References

  • [1] Device management connect. https://www.arm.com/products/iot/pelion-iot-platform/device-management/connect. Accessed: 2018-09-09.
  • [2] Java speaks coap. https://community.arm.com/iot/b/blog/posts/java-speaks-coap. Accessed: 2018-09-09.
  • [3] Mqtt and coap, iot protocols. https://www.eclipse.org/community/eclipse_newsletter/2014/february/article2.php. Accessed: 2018-09-09.
  • [4] Openthread. https://openthread.io/. Accessed: 2018-09-09.
  • [5] Software configuration guide, cisco ios release 15.2(5)ex (catalyst digital building series switches). https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst_digital_building_series_switches/software/15-2_5_ex/configuration_guide/b_1525ex_consolidated_cdb_cg/b_1525ex_consolidated_cdb_cg_chapter_0111101.html. Accessed: 2018-09-09.
  • [6] Thread group. https://www.threadgroup.org/thread-group#OurMembers. Accessed: 2018-09-11.
  • [7] What is thread. https://www.threadgroup.org/What-is-Thread#threadready. Accessed: 2018-09-12.
  • [8] Afanasyev, A., Tilley, N., Reiher, P., and Kleinrock, L. Host-to-host congestion control for tcp. IEEE Communications surveys & tutorials 12, 3 (2010), 304–342.
  • [9] Alam, M. M., and Hong, C. S. Crrt: congestion-aware and rate-controlled reliable transport in wireless sensor networks. IEICE transactions on communications 92, 1 (2009), 184–199.
  • [10] Allman, M., Paxson, V., and Blanton, E. Tcp congestion control. RFC 5681, 2009.
  • [11] Andersen, M. P., Fierro, G., and Culler, D. E. System design for a synergistic, low power mote/ble embedded platform. In 2016 15th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN) (2016), IEEE, pp. 1–12.
  • [12] Andersen, M. P., Kim, H.-S., and Culler, D. E. Hamilton: a cost-effective, low power networked sensor for indoor environment monitoring. In Proceedings of the 4th ACM International Conference on Systems for Energy-Efficient Built Environments (2017), ACM, p. 36.
  • [13] Atmel Corporation. Low Power, 2.4GHz Transceiver for ZigBee, RF4CE, IEEE 802.15.4, 6LoWPAN, and ISM Applications, July 2014. Preliminary Datasheet.
  • [14] Ayadi, A., Maillé, P., and Ros, D. Tcp over low-power and lossy networks: tuning the segment size to minimize energy consumption. In New Technologies, Mobility and Security (NTMS), 2011 4th IFIP International Conference on (2011), IEEE, pp. 1–5.
  • [15] Baccelli, E., Hahm, O., Gunes, M., Wahlisch, M., and Schmidt, T. C. Riot os: Towards an os for the internet of things. In Computer Communications Workshops (INFOCOM WKSHPS), 2013 IEEE Conference on (2013), IEEE, pp. 79–80.
  • [16] Balakrishnan, H., Padmanabhan, V. N., Seshan, S., and Katz, R. H. A comparison of mechanisms for improving tcp performance over wireless links. IEEE/ACM transactions on networking 5, 6 (1997), 756–769.
  • [17] Balakrishnan, H., Seshan, S., Amir, E., and Katz, R. H. Improving tcp/ip performance over wireless networks. In Proceedings of the 1st annual international conference on Mobile computing and networking (1995), ACM, pp. 2–11.
  • [18] Bershad, B., Anderson, T., Lazowska, E., and Levy, H. Lightweight remote procedure call. In Proceedings of the Twelfth ACM Symposium on Operating Systems Principles (New York, NY, USA, 1989), SOSP ’89, ACM, pp. 102–113.
  • [19] Betzler, A. er-cocoa, 2018. https://github.com/abetzler/er-cocoa.
  • [20] Betzler, A., Gomez, C., Demirkol, I., and Paradells, J. Coap congestion control for the internet of things. IEEE Communications Magazine 54, 7 (July 2016), 154–160.
  • [21] Bormann, C., Castellani, A. P., and Shelby, Z. Coap: An application protocol for billions of tiny internet nodes. IEEE Internet Computing 16, 2 (March 2012), 62–67.
  • [22] Bormann, C., and Shelby, Z. Block-wise transfers in the constrained application protocol (coap). RFC 7959, 2016.
  • [23] Borriello, G., and Want, R. Embedded computation meets the world wide web. Communications of the ACM 43, 5 (2000), 59–66.
  • [24] Buettner, M., Yee, G. V., Anderson, E., and Han, R. X-mac: a short preamble mac protocol for duty-cycled wireless sensor networks. In Proceedings of the 4th international conference on Embedded networked sensor systems (2006), ACM, pp. 307–320.
  • [25] Castellani, A. P., Gheda, M., Bui, N., Rossi, M., and Zorzi, M. Web services for the internet of things through coap and exi. In 2011 IEEE International Conference on Communications Workshops (ICC) (2011), IEEE, pp. 1–6.
  • [26] Clark, D. D. The structuring of systems using upcalls. In Proceedings of the Tenth ACM Symposium on Operating Systems Principles (New York, NY, USA, 1985), SOSP ’85, ACM, pp. 171–180.
  • [27] Clark, D. D., Jacobson, V., Romkey, J., and Salwen, H. An analysis of tcp processing overhead. IEEE Communications magazine 27, 6 (1989), 23–29.
  • [28] Colitti, W., Steenhaut, K., Caro, N. D., Buta, B., and Dobrota, V. Evaluation of constrained application protocol for wireless sensor networks. In 2011 18th IEEE Workshop on Local Metropolitan Area Networks (LANMAN) (Oct 2011), pp. 1–6.
  • [29] Culler, D., Dutta, P., Ee, C. T., Fonseca, R., Hui, J., Levis, P., Polastre, J., Shenker, S., Stoica, I., Tolle, G., and Zhao, J. Towards a sensor network architecture: Lowering the waistline. In Proceedings of the 10th Conference on Hot Topics in Operating Systems - Volume 10 (Berkeley, CA, USA, 2005), HOTOS’05, USENIX Association, pp. 24–24.
  • [30] Druschel, P., and Peterson, L. L. Fbufs: A high-bandwidth cross-domain transfer facility. In Proceedings of the Fourteenth ACM Symposium on Operating Systems Principles (New York, NY, USA, 1993), SOSP ’93, ACM, pp. 189–202.
  • [31] Duffy, P. Beyond mqtt: A cisco view on iot protocols. https://blogs.cisco.com/digital/beyond-mqtt-a-cisco-view-on-iot-protocols. Accessed: 2018-09-09.
  • [32] Dunkels, A. Full tcp/ip for 8-bit architectures. In Proceedings of the 1st international conference on Mobile systems, applications and services (2003), ACM, pp. 85–98.
  • [33] Dunkels, A., Alonso, J., and Voigt, T. Making tcp/ip viable for wireless sensor networks. SICS Research Report (2003).
  • [34] Dunkels, A., Alonso, J., Voigt, T., Ritter, H., and Schiller, J. Connecting wireless sensornets with tcp/ip networks. In International Conference on Wired/Wireless Internet Communications (2004), Springer, pp. 143–152.
  • [35] Dunkels, A., Gronvall, B., and Voigt, T. Contiki-a lightweight and flexible operating system for tiny networked sensors. In Local Computer Networks, 2004. 29th Annual IEEE International Conference on (2004), IEEE, pp. 455–462.
  • [36] Duquennoy, S., Al Nahas, B., Landsiedel, O., and Watteyne, T. Orchestra: Robust mesh networks through autonomously scheduled tsch. In Proceedings of the 13th ACM conference on embedded networked sensor systems (2015), ACM, pp. 337–350.
  • [37] Duquennoy, S., Österlind, F., and Dunkels, A. Lossy links, low power, high throughput. In Proceedings of the 9th ACM Conference on Embedded Networked Sensor Systems (2011), ACM, pp. 12–25.
  • [38] Dutta, P., Dawson-Haggerty, S., Chen, Y., Liang, C.-J. M., and Terzis, A. Design and evaluation of a versatile and efficient receiver-initiated link layer for low-power wireless. In Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2010), SenSys ’10, ACM, pp. 1–14.
  • [39] Fall, K., and Floyd, S. Simulation-based comparisons of tahoe, reno and sack tcp. ACM SIGCOMM Computer Communication Review 26, 3 (1996), 5–21.
  • [40] Floyd, S. Tcp and explicit congestion notification. ACM SIGCOMM Computer Communication Review 24, 5 (1994), 8–23.
  • [41] Floyd, S., and Jacobson, V. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on networking 1, 4 (1993), 397–413.
  • [42] The FreeBSD Foundation. FreeBSD 10.3, 2016. https://www.freebsd.org/releases/10.3R/announce.html.
  • [43] Gerla, M., Tang, K., and Bagrodia, R. Tcp performance in wireless multi-hop networks. In Mobile Computing Systems and Applications, 1999. Proceedings. WMCSA’99. Second IEEE Workshop on (1999), IEEE, pp. 41–50.
  • [44] Gnawali, O., Fonseca, R., Jamieson, K., Moss, D., and Levis, P. Collection tree protocol. In Proceedings of the 7th ACM conference on embedded networked sensor systems (2009), ACM, pp. 1–14.
  • [45] Gont, F., and Yourtchenko, A. On the implementation of the tcp urgent mechanism. Tech. rep., 2011.
  • [46] Grieco, L. A., and Mascolo, S. Performance evaluation and comparison of westwood+, new reno, and vegas tcp congestion control. ACM SIGCOMM Computer Communication Review 34, 2 (2004), 25–38.
  • [47] Bluetooth Mesh Working Group. Mesh Profile v1.0, 2017.
  • [48] Thread Group. Thread, 2016. https://threadgroup.org.
  • [49] Henderson, T., Floyd, S., Gurtov, A., and Nishida, Y. The newreno modification to tcp’s fast recovery algorithm. RFC 6582, 2012.
  • [50] Hewage, K., Duquennoy, S., Iyer, V., and Voigt, T. Enabling tcp in mobile cyber-physical systems. In Mobile Ad Hoc and Sensor Systems (MASS), 2015 IEEE 12th International Conference on (2015), IEEE, pp. 289–297.
  • [51] Hill, J., Szewczyk, R., Woo, A., Hollar, S., Culler, D., and Pister, K. System architecture directions for networked sensors. In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (New York, NY, USA, 2000), ASPLOS IX, ACM, pp. 93–104.
  • [52] Hui, J. personal communication.
  • [53] Hui, J. W., and Culler, D. E. Ip is dead, long live ip for wireless sensor networks. In Proceedings of the 6th ACM conference on Embedded network sensor systems (2008), ACM, pp. 15–28.
  • [54] Hull, B., Jamieson, K., and Balakrishnan, H. Mitigating congestion in wireless sensor networks. In Proceedings of the 2Nd International Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2004), SenSys ’04, ACM, pp. 134–147.
  • [55] Im, H. TCP Performance Enhancement in Wireless Networks. PhD thesis, Seoul National University, 2015.
  • [56] Italiano, D., and Motin, A. Calloutng: a new infrastructure for timer facilities in the freebsd kernel.
  • [57] Iyer, Y. G., Gandham, S., and Venkatesan, S. Stcp: a generic transport layer protocol for wireless sensor networks. In Proceedings. 14th International Conference on Computer Communications and Networks, 2005. ICCCN 2005. (Oct 2005), pp. 449–454.
  • [58] Jacobson, V. Congestion avoidance and control. In ACM SIGCOMM computer communication review (1988), vol. 18, ACM, pp. 314–329.
  • [59] Jacobson, V., Braden, R., and Borman, D. Tcp extensions for high performance. Tech. rep., 1992.
  • [60] Johnson, S. Constrained application protocol: Coap is iot’s ’modern’ protocol. https://www.omaspecworks.org/constrained-application-protocol-coap-is-iots-modern-protocol/, https://internetofthingsagenda.techtarget.com/feature/Constrained-Application-Protocol-CoAP-is-IoTs-modern-protocol. Accessed: 2018-09-09.
  • [61] Ju, H.-T., Choi, M.-J., and Hong, J. W. An efficient and lightweight embedded web server for web-based network element management. Int. Journal of Network Management 10, 5 (2000), 261–275.
  • [62] Khalidi, Y. A., and Thadani, M. N. An efficient zero-copy i/o framework for unix. Tech. rep., Mountain View, CA, USA, 1995.
  • [63] Kim, H.-S., Andersen, M. P., Chen, K., Kumar, S., Zhao, W. J., Ma, K., and Culler, D. E. System architecture directions for post-soc/32-bit networked sensors. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2018), SenSys ’18, ACM, pp. 264–277.
  • [64] Kim, H.-S., Cho, H., Kim, H., and Bahk, S. Dt-rpl: Diverse bidirectional traffic delivery through rpl routing protocol in low power and lossy networks. Computer Networks 126 (2017), 150–161.
  • [65] Kim, H.-S., Cho, H., Lee, M.-S., Paek, J., Ko, J., and Bahk, S. Marketnet: An asymmetric transmission power-based wireless system for managing e-price tags in markets. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems (2015), ACM, pp. 281–294.
  • [66] Kim, H.-S., Im, H., Lee, M.-S., Paek, J., and Bahk, S. A measurement study of tcp over rpl in low-power and lossy networks. Journal of Communications and Networks 17, 6 (2015), 647–655.
  • [67] Kim, S., Fonseca, R., Dutta, P., Tavakoli, A., Culler, D., Levis, P., Shenker, S., and Stoica, I. Flush: A reliable bulk transport protocol for multihop wireless networks. In Proceedings of the 5th International Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2007), SenSys ’07, ACM, pp. 351–365.
  • [68] Ko, J., Klues, K., Richter, C., Hofer, W., Kusy, B., Bruenig, M., Schmid, T., Wang, Q., Dutta, P., and Terzis, A. Low power or high performance? a tradeoff whose time has come (and nearly gone). In European Conference on Wireless Sensor Networks (2012), Springer, pp. 98–114.
  • [69] Ko, J., Lim, J. H., Chen, Y., Musvaloiu-E, R., Terzis, A., Masson, G. M., Gao, T., Destler, W., Selavo, L., and Dutton, R. P. Medisn: Medical emergency detection in sensor networks. ACM Trans. Embed. Comput. Syst. 10, 1 (Aug. 2010), 11:1–11:29.
  • [70] Kovatsch, M., Lanter, M., and Shelby, Z. Californium: Scalable cloud services for the internet of things with coap. In 2014 International Conference on the Internet of Things (IOT) (Oct 2014), pp. 1–6.
  • [71] Kumar, S., Andersen, M. P., Kim, H.-S., and Culler, D. E. Bringing full-scale tcp to low-power networks. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2018), SenSys ’18, ACM, pp. 386–387.
  • [72] Kurose, J., and Ross, K. Computer Networking: A Top-Down Approach, 6th ed. Pearson, 2013, ch. 3, pp. 278–279.
  • [73] Lakshman, T., and Madhow, U. The performance of tcp/ip for networks with high bandwidth-delay products and random loss. IEEE/ACM Transactions on Networking (ToN) 5, 3 (1997), 336–350.
  • [74] Levis, P., Lee, N., Welsh, M., and Culler, D. Tossim: Accurate and scalable simulation of entire tinyos applications. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2003), SenSys ’03, ACM, pp. 126–137.
  • [75] Levis, P., Madden, S., Polastre, J., Szewczyk, R., Whitehouse, K., Woo, A., Gay, D., Hill, J., Welsh, M., Brewer, E., et al. Tinyos: An operating system for sensor networks. In Ambient intelligence. Springer, 2005, pp. 115–148.
  • [76] Levis, P., Patel, N., Culler, D., and Shenker, S. Trickle: A self-regulating algorithm for code propagation and maintenance in wireless sensor networks. In Proc. of the 1st USENIX/ACM Symp. on Networked Systems Design and Implementation (2004).
  • [77] Li, Y.-C., and Chiang, M.-L. Lyranet: a zero-copy tcp/ip protocol stack for embedded operating systems. In 11th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA’05) (Aug 2005), pp. 123–128.
  • [78] Maeda, C., and Bershad, B. N. Protocol service decomposition for high-performance networking. In Proceedings of the Fourteenth ACM Symposium on Operating Systems Principles (New York, NY, USA, 1993), SOSP ’93, ACM, pp. 244–255.
  • [79] Mathis, M., Semke, J., Mahdavi, J., and Ott, T. The macroscopic behavior of the tcp congestion avoidance algorithm. ACM SIGCOMM Computer Communication Review 27, 3 (1997), 67–82.
  • [80] Montenegro, G., Kushalnagar, N., Hui, J., and Culler, D. Transmission of ipv6 packets over ieee 802.15.4 networks. IETF RFC 4944 (2007).
  • [81] Google Nest. OpenThread, 2017. https://github.com/openthread/openthread.
  • [82] Österlind, F., and Dunkels, A. Approaching the maximum 802.15. 4 multi-hop throughput. In The Fifth ACM Workshop on Embedded Networked Sensors (HotEmNets 2008), 2-3 June 2008, Charlottesville, Virginia, USA (2008).
  • [83] Padhye, J., Firoiu, V., Towsley, D., and Kurose, J. Modeling tcp throughput: A simple model and its empirical validation. ACM SIGCOMM Computer Communication Review 28, 4 (1998), 303–314.
  • [84] Paek, J., and Govindan, R. Rcrt: Rate-controlled reliable transport for wireless sensor networks. In Proceedings of the 5th International Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2007), SenSys ’07, ACM, pp. 305–319.
  • [85] Pang, Q., Wong, V. W. S., and Leung, V. C. M. Reliable data transport and congestion control in wireless sensor networks. Int. J. Sen. Netw. 3, 1 (Dec. 2008), 16–24.
  • [86] Paxson, V., Allman, M., Dawson, S., Fenner, W., Griner, J., Heavens, I., Lahey, K., Semke, J., and Volz, B. Known tcp implementation problems. RFC 2525, 1999.
  • [87] Polastre, J., Hill, J., and Culler, D. Versatile low power media access for wireless sensor networks. In Proceedings of the 2nd international conference on Embedded networked sensor systems (2004), ACM, pp. 95–107.
  • [88] Polastre, J., Szewczyk, R., and Culler, D. Telos: enabling ultra-low power wireless research. In Proceedings of the 4th international symposium on Information processing in sensor networks (2005), IEEE Press, p. 48.
  • [89] Rahman, M. A., Saddik, A. E., and Gueaieb, W. Wireless Sensor Network Transport Layer: State of the Art. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008, pp. 221–245.
  • [90] Ramaiah, A., Stewart, M., and Dalal, M. Improving tcp’s robustness to blind in-window attacks. RFC 5961, 2010.
  • [91] Ramakrishnan, K., Floyd, S., and Black, D. The addition of explicit congestion notification (ecn) to ip. RFC 3168, 2001.
  • [92] Rathnayaka, A., and Potdar, V. M. Wireless sensor network transport protocol: A critical review. Journal of Network and Computer Applications 36, 1 (2013), 134 – 146.
  • [93] Sankarasubramaniam, Y., Akan, Ö. B., and Akyildiz, I. F. Esrt: event-to-sink reliable transport in wireless sensor networks. In Proceedings of the 4th ACM international symposium on Mobile ad hoc networking & computing (2003), ACM, pp. 177–188.
  • [94] Santos, D. F., Almeida, H. O., and Perkusich, A. A personal connected health system for the internet of things based on the constrained application protocol. Computers & Electrical Engineering 44 (2015), 122 – 136.
  • [95] Schmid, T., Shea, R., Srivastava, M. B., and Dutta, P. Disentangling wireless sensing from mesh networking. In Proceedings of the 6th Workshop on Hot Topics in Embedded Networked Sensors (2010), ACM, p. 3.
  • [96] Seitz, K., Serth, S., Krentz, K.-F., and Meinel, C. Enabling en-route filtering for end-to-end encrypted coap messages. In Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems (New York, NY, USA, 2017), SenSys ’17, ACM, pp. 33:1–33:2.
  • [97] Semke, J., Mahdavi, J., and Mathis, M. Automatic tcp buffer tuning. In Proceedings of the ACM SIGCOMM ’98 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (New York, NY, USA, 1998), SIGCOMM ’98, ACM, pp. 315–323.
  • [98] Shelby, Z., Hartke, K., and Bormann, C. The constrained application protocol (coap). RFC 7252, 2014.
  • [99] Stann, F., and Heidemann, J. Rmst: Reliable data transport in sensor networks. In Sensor Network Protocols and Applications, 2003. Proceedings of the First IEEE. 2003 IEEE International Workshop on (2003), IEEE, pp. 102–112.
  • [100] Stathopoulos, T., Girod, L., Heidemann, J., and Estrin, D. Mote herding for tiered wireless sensor networks. Tech. Rep. 58, University of California, Los Angeles, Center for Embedded Networked Computing, Dec. 2005.
  • [101] Villaverde, B. C., Pesch, D., Alberola, R. D. P., Fedor, S., and Boubekeur, M. Constrained application protocol for low power embedded networks: A survey. In 2012 Sixth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (July 2012), pp. 702–707.
  • [102] Wan, C.-Y., Campbell, A. T., and Krishnamurthy, L. Psfq: a reliable transport protocol for wireless sensor networks. In Proceedings of the 1st ACM international workshop on Wireless sensor networks and applications (2002), ACM, pp. 1–11.
  • [103] Wan, C.-Y., Eisenman, S. B., and Campbell, A. T. Coda: Congestion detection and avoidance in sensor networks. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2003), SenSys ’03, ACM, pp. 266–279.
  • [104] Wang, C., Sohraby, K., Hu, Y., Li, B., and Tang, W. Issues of transport control protocols for wireless sensor networks. In Proceedings. 2005 International Conference on Communications, Circuits and Systems, 2005. (May 2005), vol. 1, pp. 422–426 Vol. 1.
  • [105] Winter, T., Thubert, P., Brandt, A., Hui, J., Kelsey, R., Levis, P., Pister, K., Struik, R., Vasseur, J., and Alexander, R. Rpl: Ipv6 routing protocol for low-power and lossy networks. RFC 6550, 2012.
  • [106] Woo, A., and Culler, D. E. A transmission control scheme for media access in sensor networks. In Proceedings of the 7th annual international conference on Mobile computing and networking (2001), ACM, pp. 221–235.
  • [107] Wright, G. R., and Stevens, W. R. TCP/IP Illustrated, vol. 2. Addison-Wesley Publishing Company, 1995, ch. 2.
  • [108] Xu, N., Rangwala, S., Chintalapudi, K. K., Ganesan, D., Broad, A., Govindan, R., and Estrin, D. A wireless sensor network for structural monitoring. In Proceedings of the 2Nd International Conference on Embedded Networked Sensor Systems (New York, NY, USA, 2004), SenSys ’04, ACM, pp. 13–24.
  • [109] Zachariah, T., Klugman, N., Campbell, B., Adkins, J., Jackson, N., and Dutta, P. The internet of things has a gateway problem. In Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications (New York, NY, USA, 2015), HotMobile ’15, ACM, pp. 27–32.
  • [110] Zhang, H., Arora, A., Choi, Y.-r., and Gouda, M. G. Reliable bursty convergecast in wireless sensor networks. In Proceedings of the 6th ACM International Symposium on Mobile Ad Hoc Networking and Computing (New York, NY, USA, 2005), MobiHoc ’05, ACM, pp. 266–276.
  • [111] Zhang, L. Why tcp timers don’t work well. In Proceedings of the ACM SIGCOMM Conference on Communications Architectures &Amp; Protocols (New York, NY, USA, 1986), SIGCOMM ’86, ACM, pp. 397–405.
  • [112] Zheng, T., Ayadi, A., and Jiang, X. Tcp over 6lowpan for industrial applications: An experimental study. In New Technologies, Mobility and Security (NTMS), 2011 4th IFIP International Conference on (2011), IEEE, pp. 1–4.
Hops | Flows | Goodput        | Packet Loss   | Median RTT
1    | A     | 69.6 kb/s      | 0.00%         | 213 ms
1    | B     | 55.4 kb/s      | 0.0439%       | 284 ms
1/1  | A/B   | 41.7/35.2 kb/s | 0.0437/0.219% | 356/416 ms
3    | A     | 19.3 kb/s      | 0.441%        | 772 ms
3    | B     | 19.8 kb/s      | 0.436%        | 755 ms
3/3  | A/B   | 10.9/9.43 kb/s | 1.95/3.00%    | 995/1075 ms

Table 9: Fairness among multiple TCP flows

Appendix A Simultaneous TCP Flows

In Sections 7 and 8, we found that cwnd behaves differently in LLNs, owing to the small buffer sizes used, in a way that makes TCP more resilient to packet loss. In this appendix, we examine TCP’s behavior when multiple flows compete for limited bandwidth, to evaluate whether TCP’s congestion control mechanisms remain effective despite this behavior.

In our experiments, multiple nodes in the testbed simultaneously transfer data to the border router for five minutes. Our two objectives are to achieve fairness and efficiency. Fairness means that all flows get approximately equal shares of goodput. Efficiency means that the aggregate throughput is not much worse than that of a single flow.

Our first experiment does this for a single hop. Two nodes, both one hop away from the border router, transfer data upstream. Our second experiment does this for three hops; two nodes, both three hops away from the border router with all but the first hop in common, transfer data upstream. Our results are shown in Table 9. Bandwidth is shared efficiently and fairly between the two flows, for both the single-hop and multi-hop experiments.

Sharing is less fair when we increase the buffer size of the two senders from 4 segments (1848 bytes) to 7 segments (3234 bytes). Results varied: some trials were fair and efficient, but others were inefficient or unfair. However, implementing Random Early Detection (RED) [41] on the relay nodes, and using it in conjunction with Explicit Congestion Notification (ECN) [40, 91], mostly alleviated these issues. This solution also required us to modify OpenThread to reassemble 6LoWPAN fragments into IP packets at each hop, instead of performing the reassembly end-to-end. Using RED/ECN helped keep the RTT small despite the larger buffer size.
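A minimal sketch of the RED marking logic on a relay node follows. The thresholds and EWMA weight are illustrative assumptions, and the names are ours; the text above states only that RED and ECN were used, not the exact parameters.

    /* Sketch: RED with ECN marking at a relay node. Operates on
     * reassembled IP packets (per the modification described above).
     * Thresholds and the EWMA weight are assumed values. */
    #include <stdbool.h>
    #include <stdlib.h>

    #define RED_MIN_TH  2.0   /* avg queue depth where marking begins   */
    #define RED_MAX_TH  6.0   /* avg queue depth where marking is certain */
    #define RED_WEIGHT  0.25  /* EWMA weight for the average queue size */

    static double avg_queue = 0.0;

    /* Returns true if the packet should be ECN-marked (or dropped,
     * if the flow did not negotiate ECN). */
    bool red_should_mark(unsigned instant_queue_len) {
        avg_queue = (1.0 - RED_WEIGHT) * avg_queue
                  + RED_WEIGHT * (double)instant_queue_len;
        if (avg_queue < RED_MIN_TH) return false;
        if (avg_queue >= RED_MAX_TH) return true;
        /* Marking probability rises linearly between the thresholds. */
        double p = (avg_queue - RED_MIN_TH) / (RED_MAX_TH - RED_MIN_TH);
        return ((double)rand() / (double)RAND_MAX) < p;
    }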

(a) Basic listen-after-send protocol with sparse traffic
(b) TCP-friendly duty-cycling protocol (adaptive sleep interval control) with bursty TCP traffic
Figure 11: Examples of the duty-cycling protocols discussed in Section C. ‘B’ denotes a link layer beacon, ‘A’ denotes a link-layer ACK, ‘D’ denotes a frame carrying data, ‘TD’ denotes a frame carrying TCP data, and ‘TA’ denotes a TCP ACK. When a duty-cycled node receives a frame with the pending bit set, it continues listening to receive the next frame.

Appendix B Derivation of TCP Model

This appendix provides the derivation of Equation 2, the model of TCP performance proposed in §8.

We think of a TCP flow as a sequence of bursts. A burst is a sequence of full windows of data successfully transferred, which ends in a packet loss. After this loss, the next burst begins. Let $w$ be the size of TCP’s flow window, measured in segments (for our experiments in §7.3, we would have $w = 4$). Define $b$ as the average number of windows sent in a burst. The goodput $B$ of TCP is the number of bytes sent in each burst, which is $b \cdot w \cdot \mathrm{MSS}$, divided by the duration of each burst. A burst lasts for the time to transmit $b$ windows of data, plus the time to recover from the packet loss that ended the burst. The time to transmit $b$ windows is $b \cdot \mathrm{RTT}$. We define $c$ to be the time to recover from the packet loss. Then we have

\[ B = \frac{b \cdot w \cdot \mathrm{MSS}}{b \cdot \mathrm{RTT} + c} \qquad (3) \]

The value of $b$ depends on the packet loss rate. We define a new variable, $p_w$, which denotes the probability that at least one packet in a window is lost. Then $b = 1/p_w$.

To complete the model, we must estimate $c$ and $p_w$.

The value of $c$ depends on whether a fast retransmission is performed or the retransmission timer expires (called an RTO). After a fast retransmission, TCP enters a “fast recovery” state [10, 49]. However, the lost packet, and the three packets afterward that resulted in duplicate ACKs, account for the entire send buffer, which can hold only four segments. Therefore, TCP cannot send new data during fast recovery, and instead stalls for one RTT, until the ACK for the fast retransmission is received. (Choosing a sufficiently larger window size would alleviate this problem [97].) If an RTO occurs, the total time lost is the excess time budgeted to the retransmit timer beyond one RTT, plus the time to retransmit the lost segments. We denote the time budgeted to the retransmit timer as ETO. So the total time lost due to a timeout, assuming it takes about 2 RTTs to recover the lost segments, is $(\mathrm{ETO} - \mathrm{RTT}) + 2 \cdot \mathrm{RTT} = \mathrm{ETO} + \mathrm{RTT}$.

Based on our discussion in Section 7.3, these two types of losses may be caused by different factors. Therefore, we do not attempt to distinguish them on the basis of probability. Instead, we use a very simple model: $c = \mathrm{ETO}$. This overestimates the recovery time for fast retransmissions but underestimates the recovery time for RTOs; our hope is that these differences average out over many packet losses.
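Summarizing the two recovery costs just described:

    \[
      c_{\mathrm{fast}} \approx \mathrm{RTT}, \qquad
      c_{\mathrm{RTO}} \approx (\mathrm{ETO} - \mathrm{RTT}) + 2 \cdot \mathrm{RTT}
                      = \mathrm{ETO} + \mathrm{RTT}
    \]

The simple model $c = \mathrm{ETO}$ lies between these two values.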

To model $p_w$, we assume that, in each window, segment losses are independent. This gives us $p_w = 1 - (1 - p)^w$, where $p$ is the probability of an individual segment being lost (after link retries). Because $p$ is likely to be small (less than 20%), we apply the approximation $(1 - p)^w \approx 1 - wp$ for small $p$. This gives us $p_w \approx wp$.

Applying these expressions for $c$ and $p_w$, along with some minor algebraic manipulation to put our equation in a form similar to Equation 1, we obtain our model for TCP performance in LLNs, valid for small $p$ and $p_w$:

\[ B = \frac{w \cdot \mathrm{MSS}}{\mathrm{RTT} + wp \cdot \mathrm{ETO}} = \frac{\mathrm{MSS}}{\mathrm{RTT}} \cdot \frac{w}{1 + wp \cdot \frac{\mathrm{ETO}}{\mathrm{RTT}}} \qquad (4) \]
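For concreteness, the sketch below evaluates Equation 4 numerically. The function name is ours, and the example parameter values, aside from $w = 4$ (the four-segment buffer discussed above), are assumptions rather than measured values.

    /* Sketch: evaluate the goodput model of Equation 4.
     * w: window size in segments; p: per-segment loss probability
     * (after link retries); rtt, eto: seconds; returns bytes/second. */
    #include <stdio.h>

    double model_goodput(double w, double mss_bytes,
                         double rtt, double p, double eto) {
        return (w * mss_bytes) / (rtt + w * p * eto);  /* Equation 4 */
    }

    int main(void) {
        /* Assumed example inputs: w = 4, MSS = 500 B, RTT = 0.3 s,
         * p = 5%, ETO = 1 s. */
        printf("predicted goodput: %.0f B/s\n",
               model_goodput(4.0, 500.0, 0.3, 0.05, 1.0));
        return 0;
    }

With these assumed inputs, the model predicts $(4 \cdot 500) / (0.3 + 4 \cdot 0.05 \cdot 1.0) = 4000$ B/s, i.e., 32 kb/s.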

Appendix C Adaptive Sleep Interval for Bursty Traffic

In §9, we operated TCP over a duty-cycled link by decreasing the data request interval when there is unACKed data in the send buffer, since an ACK is expected at that time. While effective for the anemometer application, which only sends data and duty-cycles only the last hop, that protocol does not easily generalize to applications that also receive data, or to settings where all links are duty-cycled.

This section investigates a different approach to TCP over a duty-cycled link. First, we characterize the performance of TCP over a duty-cycled link that is not adaptive. This shows the limit of what can be achieved by, for example, statically reducing the sleep interval of OpenThread’s duty-cycling protocol to the order of seconds rather than minutes. (Reducing the sleep interval to the order of milliseconds would solve the problem, but would result in a high idle duty cycle and power consumption, due to sending data request messages so often.) Then, we show how the sleep interval can be adapted to traffic patterns, allowing high-throughput traffic in both the sending and receiving directions. Although we evaluate this using a listen-after-send duty-cycling protocol similar to Thread’s, the technique we use to adapt the sleep interval does not require knowledge of TCP connection state, so it generalizes even to settings where all nodes are duty-cycled. (Sampled-listening and scheduling techniques for low-power radio operation also have a concept of a sleep interval, so our technique is not limited to listen-after-send protocols; intermediate relay nodes can choose to decrease their sleep interval based on the packets that they choose to forward.)

As depicted in Figure 11(a) below, a duty-cycling node sleeps during a time called the sleep interval. Then it wakes up and sends a beacon (IEEE 802.15.4 data request command) to an always-on router to check if the router has a packet to send to it. If the router has a packet to send, it sends an IEEE 802.15.4 ACK in which the “pending bit” of the header is set. After receiving the ACK with the pending bit set, the duty-cycled node listens on the channel for a time called the wakeup interval to receive the data from the router. If the pending bit is not set, it goes to sleep immediately. When a duty-cycled node has a frame to send upstream, it may send the frame at any time during the sleep interval.

OpenThread [81], an open-source implementation of Thread, allows the router to send only one packet per beacon reception. To better support bursty, high-throughput traffic, we design the router to also set the pending bit for a data packet if it has more packets to send, as in [37]. When a duty-cycled node receives a data frame with the pending bit set, it continues listening. This enables the router to send all of the packets in its queue for a particular duty-cycled node once it receives a beacon from that node.
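The router-side behavior is sketched below: upon receiving a beacon, the router drains the whole queue for that child, setting the pending bit on every frame except the last. All names here are hypothetical placeholders, not OpenThread’s actual API.

    /* Sketch: drain a sleepy child's queue in one wakeup, using the
     * IEEE 802.15.4 pending bit to keep the child listening. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int  payload;       /* stand-in for frame contents */
        bool pending_bit;   /* "more frames follow" MAC header flag */
    } frame_t;

    static frame_t queue[8];   /* frames queued for one sleepy child */
    static int     queue_len = 0;

    static void radio_send(const frame_t *f) {
        printf("sent frame %d, pending=%d\n", f->payload, f->pending_bit);
    }

    /* Router side: called when a beacon (data request) arrives. */
    void on_data_request(void) {
        for (int i = 0; i < queue_len; i++) {
            queue[i].pending_bit = (i + 1 < queue_len);
            radio_send(&queue[i]);
        }
        queue_len = 0;
    }

    int main(void) {
        for (int i = 0; i < 3; i++)
            queue[queue_len++] = (frame_t){ .payload = i };
        on_data_request();   /* child listens until pending bit clears */
        return 0;
    }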

C.1 Experimental Study

(a) RTT of TCP, duty-cycled link
(b) TCP goodput, duty-cycled link
Figure 12: Effect of varying sleep interval of the embedded endpoint

We evaluate the performance of TCP over our duty-cycling protocol in the same setup used in Section 6, with the embedded endpoint duty-cycled and the border router always-on.

Figure 12(a) shows the average RTT of TCP over our duty-cycling protocol. Interestingly, the average RTT is approximately equal to the length of the sleep interval in the uplink case, not one-half of the sleep interval as one might expect. This is because of TCP’s self-clocking behavior [58]. The duty-cycled node transmits the next segment when it receives an ACK for an in-flight segment. This ACK is delivered during the wakeup interval, causing the next data-containing segment to be sent at the beginning of the sleep interval. This causes the RTT of TCP segments to be approximately the same as the duration of the sleep interval.

For downlink data transfer, Figure 13(b) shows that the RTT is generally close to a multiple of the sleep interval duration. One explanation is asymmetry in our duty-cycling protocol. For packets sent downlink, the duty-cycled node receives packets until the queue at the border router is empty. But in the uplink direction, the duty-cycled node stops sending packets at the end of the sleep interval, even if packets remain in its queue, and starts listening. A continuous stream of data packets sent downlink can last longer than the sleep interval, causing TCP ACKs to wait in the uplink queue for multiple duty-cycle periods before being sent. This also explains the difference between uplink and downlink goodput in Figure 12(b).

Figure 12(b) shows the performance of full-scale TCP over this link. At a sleep interval of 20 ms, the throughput is similar to the results in Section 6. However, the throughput drops substantially as the sleep interval increases.

The reason is that the size of the send and receive buffers limits the performance of TCP when the sleep interval is large. To achieve high throughput over such a link, TCP must be able to keep enough bytes in flight to fill the bandwidth-delay product. For example, if the sleep interval duration is $T$, the RTT is approximately $T$ on average (Figure 12(a)), so TCP must be able to keep $B \cdot T$ unacknowledged bytes in flight to achieve a bandwidth of $B$. However, the limited size of the send/receive buffers restricts how many unacknowledged bytes can be in flight, degrading bandwidth. To achieve 64 kb/s of throughput, similar to what we achieved with an always-on link in Section 6, with a sleep interval of 2 seconds, the send and receive buffers used by embedded TCP would need to be at least 16 kilobytes in size. This is not feasible within the memory constraints of our platform.
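As a sanity check on these numbers:

    % Bandwidth-delay product for the figures quoted in the text:
    \[
      B \cdot T = 64\ \mathrm{kb/s} \times 2\ \mathrm{s}
                = 128\ \mathrm{kb} = 16\ \mathrm{kB}
    \]
    % i.e., about 16 kB must remain in flight (and hence buffered at
    % each endpoint) to sustain 64 kb/s when the RTT is roughly one
    % 2-second sleep interval.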

(a) Uplink experiment
(b) Downlink experiment
Figure 13: RTT of TCP over a duty-cycled link with a fixed sleep interval duration of two seconds

C.2 TCP-Friendly Duty-Cycling Protocol

The results in the previous section indicate that a duty-cycled link significantly degrades TCP performance when the sleep interval is large. This motivates us to design a TCP-friendly duty-cycling protocol that provides a long sleep interval most of the time, but decreases it during TCP bursts to achieve high throughput.

(a) Uplink experiment
(b) Downlink experiment
Figure 14: RTT distribution of TCP over a duty-cycled link with an adaptive sleep interval

Consider a sensor node that periodically collects sensor readings, stores them in an internal log, and sends them in a TCP burst. The burst may be triggered by an HTTP request. Most of the time, no network I/O is needed, and the node can be duty-cycled with a long sleep interval. However, while data is being transferred, the sleep interval duration can be temporarily decreased to achieve high throughput.

Our implementation of this adaptive behavior draws inspiration from the Trickle algorithm [76], which is used to achieve both low overhead and fast recovery in modern WSN routing protocols such as CTP [44] and RPL [105]. When the duty-cycled node receives a packet from the border router, it decreases the duration of the sleep interval to a minimum value $I_{\min}$. If it does not receive a packet during a sleep interval, the duration of the sleep interval is doubled, clamped at a maximum value $I_{\max}$. We believe that this design would quickly respond to bursty flows to provide high throughput, and quickly return to the maximum sleep interval duration in the absence of any flows, providing low energy consumption when idle.
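A minimal sketch of this adaptation follows; the two interval constants are placeholders (the 5 s maximum matches the idle duty-cycle calculation below, while the 100 ms minimum is purely illustrative).

    /* Sketch: Trickle-inspired adaptation of the sleep interval. */
    #include <stdint.h>

    #define I_MIN_MS  100     /* placeholder minimum sleep interval */
    #define I_MAX_MS  5000    /* maximum sleep interval (5 s) */

    static uint32_t sleep_interval_ms = I_MAX_MS;

    /* A packet arrived from the border router: a burst is likely,
     * so poll as fast as allowed. */
    void on_packet_received(void) {
        sleep_interval_ms = I_MIN_MS;
    }

    /* A sleep interval elapsed with no downstream packet: back off
     * exponentially, clamped at the maximum. */
    void on_idle_interval(void) {
        sleep_interval_ms *= 2;
        if (sleep_interval_ms > I_MAX_MS)
            sleep_interval_ms = I_MAX_MS;
    }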

We use this simple implementation to evaluate the efficacy of a Trickle-based algorithm for adjusting the sleep interval duration. We increased the size of the send and receive buffers so that each could hold 6 full-sized packets. The maximum sleep interval $I_{\max}$ was 5 seconds; given that sending a beacon to the border router and receiving a response takes approximately 5 ms, the link operates at a 0.1% duty cycle when idle. Under these conditions, we were able to achieve an uplink throughput of 68.6 kb/s and a downlink throughput of 55.6 kb/s. These results indicate that a Trickle-based algorithm is indeed effective at supporting both high-throughput TCP and an extremely low idle duty cycle.

Interestingly, the uplink throughput on a duty-cycled link is higher than the throughput on an always-on link. We believe that this is because the duty-cycling protocol schedules use of the channel more efficiently than the always-on link protocol: either the duty-cycled node is sending but not listening, or it is listening but not sending, so there is less contention for the channel than in the always-on case, where both nodes can send at any time.

Figure 14 shows a disparity in the RTT between the uplink and downlink experiments. While the RTT in the uplink experiment is usually less than 200 ms, the RTT in the downlink experiment is often longer. We believe that this is due to the same phenomenon described at the end of §C.1.

Additional optimizations are possible via cross-layer hints. For example, one could use the push bit (PSH) in the TCP header of a received packet as an indication of whether more data is expected. Similarly, one could use TCP connection state, such as the RTT estimate or the number of bytes in flight, to provide context-aware duty cycling. We did not perform these optimizations in our study, but they may be useful for handling more elaborate traffic patterns.
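As one example, the sketch below keeps the node polling rapidly only while a response is still expected; the helper function and its parameters are hypothetical, not part of our implementation.

    /* Sketch: PSH-bit cross-layer hint for duty cycling. */
    #include <stdbool.h>
    #include <stdint.h>

    #define TH_PUSH 0x08   /* PSH flag in the TCP header */

    /* Keep the short sleep interval while ACKs are outstanding, or
     * while the peer has not flushed its send buffer (no PSH seen),
     * since more segments are then likely to follow. */
    bool keep_polling_fast(uint8_t tcp_flags, uint32_t unacked_bytes) {
        if (unacked_bytes > 0)
            return true;
        return (tcp_flags & TH_PUSH) == 0;
    }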
