The objective of the IoT Benchmark consortium is to raise the bar in the quality of experimental data, and provide researchers and engineers in both academia and industry with an objective view of the strengths and weaknesses among existing protocols.

The challenge

Evaluation and comparison of low-power wireless protocols is a complex endeavor.

  • There is a wide variety of experimental settings: physical setup, definitions of metrics, sets of metrics used, traffic patterns, etc. As a result, published results are often not comparable, even when they appear to be.
  • Comparing against baseline protocols is challenging, as existing implementations are not always available.
  • The literature contains both comparisons between protocols alone (software) and between complete solutions (i.e., hardware platforms plus protocol software), which are different kinds of evaluation.

Our vision

The IoT Benchmark consortium was built bottom-up, driven by the low-power wireless networking academic community. Our objective is to design a comprehensive benchmark consisting not only of problem sets, but also of tools and methodologies for the performance evaluation of low-power networking solutions.

To feed the process, we have been interacting with other research communities that already use benchmarking (robotics, databases, etc.). Together, we co-organized CPSBench 2018, the 1st Workshop on Benchmarking Cyber-Physical Networks and Systems (a satellite workshop of CPSWeek).
Discussions with various IoT companies have also triggered a lot of interest: they face similar problems when evaluating their products and comparing them against competitors'. A standardized benchmark is thus also called for by industry.

It is still not clear, however, how strictly the benchmark problems should be defined: there is a fundamental trade-off between accuracy and generality in the benchmark design space.

The more precisely benchmark problems are defined, the fairer the comparisons; but the stricter the definition, the less practical and usable the benchmark becomes. It is therefore paramount to balance the benchmark design carefully to ensure its usability and, ultimately, its adoption by the community.

Thus, an ideal benchmark would:

  • Provide a set of tools and practices for performance evaluation,
  • Enable fair comparisons between new and existing approaches,
    even when code is not openly available
  • Enable repeatability of experimental results

Ultimately, the benchmark would serve as a reference for the evaluation not only of academic research work but also of existing and future products from the IoT industry.


Timeline

  • 2016, June
    • A small group discusses the idea of a benchmark
  • 2016, August
    • Poster at SenSys (11 unique affiliations)
    • Goals and challenges drafted
  • 2017, February
    • Ad-hoc meeting at EWSN, Uppsala
    • Group expands
  • 2017, May
    • Plenary meeting in Milan
    • Group expands
  • 2017, October
    • Plenary meeting in Stockholm
    • Group expands
  • 2017, December-onwards
    • Bi-monthly teleconferences
  • 2018, February