“To those who believe that science is a lifestyle and not only a job: for your time, for your efforts and for your trust in the edition of this book.” “To the community of researchers and technologists who believed that their efforts and experience could be collected in a book for the new generations.” Martin Serrano
Martin Serrano, Nikolaos Isaris, Hans Schaffers, John Domingue, Michael Boniface, Thanasis Korakis
European federated testbeds ecosystem. | |
The NITOS Indoor deployment. | |
The NITOS testbed portal. | |
NITOS distribution in Europe. | |
FIRE evolution longer term vision 2020. | |
X as a Service. | |
FIRE scenarios for 2020. | |
Overall strategic direction of FIRE. | |
Benefits for experimenters to use, and for testbeds to join, the federation of testbeds – overview. | |
Testbeds involved in Fed4FIRE federation of testbeds. | |
Fed4FIRE architecture components. | |
Workflow for testbeds joining the federation. | |
Overview of the proposals and accepted experiments through the open call mechanism. | |
Utilization of Fed4FIRE testbeds by experiments. | |
Number of simultaneously used testbeds in experiments. | |
Number of simultaneously used testbed technologies in experiments. | |
One-stop shop approach in Fed4FIRE federation. | |
Financial flow within federation of testbeds. | |
The FLEX testbed federation in Europe. | |
The OMF6 architecture. | |
The OMF-5.4 architecture. | |
OML measurement library architecture. | |
The LTErf service architecture; single northbound interfaces are mapped to several southbound depending on the type of the equipment. | |
Mobile operator android tools for monitoring in FLEX. | |
The Inter-RAT cross-technology handover framework. | |
OHC comparison against other technologies for seamless handovers. | |
CoordSS Network model for semantic based coordination. | |
CoordSS experimental setup. | |
FLOW offloading framework. | |
FLOW PGW extensions for FLEX. | |
FLOW experiment setup. | |
Throughput per each (offloaded) client. | |
Geographical distribution of MONROE Nodes. MONROE builds on the existing NorNet Edge (NNE) infrastructure, consisting of 200 dedicated operational nodes spread across Norway. | |
Building blocks of the MONROE system. | |
Experiment creation and deployment phases. | |
Resources availability in MONROE. | |
MONROE experiment status. | |
Scheduling system. | |
MONROE visualization GUI snapshot for RTT and signal strength monitoring in near real time. | |
Violin plots of the RTT measurements for different operators in Spain (ES), Norway (NO) and Sweden (SE). | |
CDF of the download goodput. | |
Web performance results: the Average Time to First Byte and the Complete Page Load Time for operators in Spain (ES), Norway (NO) and Sweden (SE) for two target websites www.bbc.com and en.wikipedia.org. | |
Coverage reading from MONROE nodes operating aboard trains in Oslo, NO. | |
PerformNetworks architecture. | |
PerformNetworks orchestration architecture. | |
Reference architecture and experiments. | |
Integrated prototype architecture. | |
Smart Environmental Monitoring use-case architecture. | |
Smart Environmental Monitoring dashboard. | |
Smart Energy use-case architecture. | |
Energy mobile application. | |
Energy web dashboard. | |
Smart environmental monitoring trial use case. | |
Routes followed by the bus and location of the sensor units: low-power wake-up (green) and Airbase air quality (yellow) devices. | |
Sensor devices. Barometric wake-up device. | |
Delay Tolerant Network devices. Bus passing close to a low-power wake-up sensor device installed in a bus stop. Detail of the equipment (gateway) installed in the bus. | |
Temperature measurements captured by a low power wake-up device in Sant Vicenç dels Horts. | |
NO | |
Average wake-up range of the DTN and wake-up based solution (in meters). | |
Battery evolution of the low power wake-up devices. | |
Smart Energy trial use case. | |
Active devices (ActiveDIN on the | |
No appliance control. | |
Controlled appliance. | |
Average uptime per house. | |
LoWPAN. | |
The BonFIRE infrastructure. | |
The BonFIRE architecture. | |
BonFIRE cloud and network infrastructure adapters. | |
SFA-BonFIRE mapping. | |
Transitions in governance and experimenter relationships. | |
Four foundation elements of a multi-venue media experimentation service. | |
EXPERIMEDIA High-level technical architecture. | |
EXPERIMEDIA smart venues. | |
User centric observation and benefits model. | |
FIESTA interoperability model for heterogeneous IoT testbed experimentation. | |
FIESTA evolution towards an ecosystem for global IoT experimentation. | |
FIESTA EaaS experimental infrastructure overview. | |
FIESTA functional blocks architecture. | |
Overview of the IoT Lab architecture defining the federation strategy and showing the modular architecture. | |
IoT Lab platform deployment. | |
IoT Lab – network architecture with all its components. | |
IoT Lab IPv6-based network integration representing the four main testbed profiles. | |
A sensing trigger message. | |
Sequence diagram of the Crowdsensing steps. | |
Crowd participation in TBaaS. | |
The IoT Lab Incentive Model. | |
IoT Lab Leaderboards. | |
Heineken factory in Patras, Greece. | |
Crowd-driven Research Model enabling anonymous end-users to trigger and drive experimentation process in cooperation with researchers. | |
The federation of heterogeneous resources (provided by testbeds). | |
NITOS and PlanetLab Europe federation via SFAWrap. | |
OMF 6 architecture. | |
MySlice and manifold architecture. | |
Extensions of the NITOS testbed: Icarus nodes on the left and directional antennas on the right. | |
WiMAX/LTE Base Station in NITOS testbed. | |
Mobile robots in w-ilab.t testbed. | |
Demonstration of a complex experiment controlling heterogeneous resources with a single script. | |
The OneLab Portal. | |
Traffic-aware 802.11 airtime management scenario. | |
Example illustrating two co-located wireless networks of different technology. | |
The proposed co-existence scheme for avoiding interference between Wi-Fi and TSCH. | |
Deployment of a single local program across several platforms. | |
Sequence diagram for the deployment of the portable testbed. | |
Conceptual diagram of WiSHFUL architecture. | |
WiSHFUL architecture, UPIs, and supported platforms. | |
WiSHFUL adaptation modules. | |
Example illustrating a hidden node scenario. As nodes A and B are outside their carrier sensing range the packet transmissions from A and B would collide at node C. | |
UML class diagram showing the hybrid MAC relevant configuration. | |
Illustration of exclusive slots allocation in TDMA. | |
Overview of the components on the wireless node in the Linux-Wi-Fi prototype. | |
IO graph illustrating the number of packets sent over time. The color indicates a particular flow. | |
TCP/IP performance. | |
Live performance statistics showing the average network throughput (kbits/sec) over time. | |
Live capture of RSSI (dBm) measured by the USRP over time. | |
Throughput performance of 6 wireless nodes executing CSMA with exponential backoff. | |
Throughput performance with 3 stations employing MEDCA backoff. | |
Switching from CSMA to TDMA. The upper graph shows the overall percentage of packet loss, the lower graph illustrates the overall throughput. The X marks the switch. | |
Portable testbed overview. | |
LIVEstats platform main use case scenario. | |
Infrastructure characterization in a Stadium. | |
LIVEstats platform general architecture diagram. | |
General setup of the experiments – jFed tool. | |
Setup of the experiments – jFed tool. | |
Components for the general set up of the experiments. | |
The content assist functionality of the EDL textual editor. | |
The RAWFIE Web portal and the authoring tool. | |
The editors’ toolbars and buttons (above: the textual editor – below: the visual editor). | |
An example of inserting code templates in the textual editor. | |
The content assist functionality. | |
Error identification by the parser. | |
Waypoints definition for two USVs. | |
Node selection. | |
The addition of a node in the visual editor. | |
A part of the custom validation script. | |
Launching an experiment. | |
Illustration of the RINA structure: DIFs and internal organisation of IPC Processes (IPCPs). | |
Relation between protocol machines. | |
Environment of the shim DIF for Hypervisors. | |
A simple example of link-state routing. | |
Inputs and outputs of the Link-state based PDU Forwarding Table generator policy. | |
Major design choices of the IRATI implementation. | |
IPC Process split between the user-space and kernel. | |
Software architecture of the IRATI RINA implementation. | |
VM to Host goodput experiment results. | |
Host to Host goodput experiment results. | |
Physical connectivity graph for the routing experiments. | |
Results of the link-state routing experiments. | |
Scenario for the performance evaluation of prototype 2. | |
Performance evaluation results. | |
Experimental setup for application-location independence. | |
Creation of an IRATI VM with the OFELIA Control Framework. | |
VLAN box in the OFELIA testbed. | |
eLearning, FORGE and FIRE research facilities. | |
Learning design visual representation (Lockyer et al., 2013). | |
The instructional triangle of learning designs. | |
TPACK model of educational practice. | |
FORGE architectural approach. | |
Reference architecture for a FORGE widget. | |
The FORGE methodology flowchart. | |
The widget-based FORGE architecture for learning analytics. | |
Screenshot of a web-based learner interface at the iMinds’ WLAN and LTE lab. | |
iMinds’ iLab.t testing facilities used for the WLAN and LTE labs. | |
Interaction of different components between learner and FIRE facilities for the iMinds’ WLAN and LTE labs. | |
Average score for the qualitative survey questions. | |
Preference of students (in 2016) for using the WLAN course as online home assignment versus teaching this via traditional in-classroom lectures. | |
Preference of students (in 2016) for using the LTE course as online home assignment versus teaching this via traditional in-classroom lectures. | |
The TCP congestion control widget. | |
UPMC lab course questionnaire average score (2014 and 2015). | |
Experiments in queue before sending to PLE. | |
Experiment sent to PLE. | |
Experiment completed and result available. | |
TCD’s IRIS testbed. | |
Responses from survey of the first version of TCD's OFDM course. | |
Real-time constellation measurements. | |
Load balancer with multiple experiment instances and graceful degradation via hot-standby. | |
Problem statement. | |
Triangle high level testing framework. | |
High level approach. | |
Testing framework architecture. | |
An example of the RINA architecture, with the same type of layer (DIF) repeated as required over different scopes. Different sets of policies customise each layer to its operational range. | |
An example of a RINA operator network. | |
An all-in-one smart city mobility solution. | |
Positioning at the start of the EMBERS project. | |
The EMBERS project and its result. | |
Multiple testing schemes. | |
Antenna experiments overview. | |
Scheduler overview. | |
EPC architecture overview. | |
PaaS deployment extended for IoT in WAZIUP. | |
Functional overview of WAZIUP. | |
Components of the WAZIUP platform. | |
Deployment of sensor nodes around a gateway use case integration. | |
WAZIUP deployment scenarios. | |
FUTEBOL consortium geographically distributed in Brazil and Europe. | |
FUTEBOL approach. | |
FESTIVAL architecture. | |
FESTIVAL EaaS platform modules. | |
Reference architecture of TRESCIMO. | |
Current TRESCIMO federated testbed architecture (based on). | |
Testbed interconnections. | |
Message flow between FITeagle, OpenSDNCore and devices. | |
FITeagle resource adapters. | |
OpenBaton functional architecture. | |
Architecture used in the PoC to illustrate the cooperation of the M2M frameworks. | |
Detailed prototype architecture version 2. | |
Example of DR related messages sent to a resident. | |
Example of an experimental topology created using jFed client. | |
SmartFIRE testbeds. | |
OMF6 architecture. | |
MySlice and SFA. | |
ICN-OMF framework. | |
MOFI-OMF framework. | |
OMF6 extensions for FlowVisor. | |
Seamless mobility scenario. | |
Experimentation messages for handover. | |
Seamless service mobility scenario as multi-screen service. | |
Topology of the experimentation on a content-based communication network. | |
End-to-end delay in the content-based communication network. |
FIRE Roadmap Milestones 2014–2020 | |
Brief description of Fed4FIRE facilities per testbed category | |
Functionalities of Fed4FIRE lifecycle | |
Coordinated and uncoordinated shared spectrum access with Wi-Fi stations | |
Shared spectrum access with coordinated Wi-Fi networks and (un)coordinated LTE eNodeBs | |
SLA setup for the FLOW offloading experiment | |
Pre-trial questionnaire summary | |
Post-trial questionnaire summary | |
Example open access experiments | |
Benefits and opportunities for experimenters | |
Supported platforms, OSs and drivers | |
Battery of preparatory tests with NITOS and iMinds WiLab2 testbeds |
Pilot use cases | |
Functional domains | |
AM Software used by SmartFIRE testbeds |
Source: Fed4FIRE Project.
2014–2016 | 2016–2018 | 2018–2020 | |
FIRE Resources Solutions | Testbeds will be established in the domain of software services (2016); gradual implementation of converged federation (2016) | Cutting-edge FIRE testbeds are established in key areas such as 5G, IoT, Big Data (2016–2017); a converged set of resources is aligned with 5G architectures (2017–2018) | Continuing to establish cutting-edge FIRE testbeds in key areas such as 5G, IoT, Big Data (2018–2020); a converged set of resources is aligned with 5G architectures (2018–2020) |
FIRE Services and Access Solutions | Open Access is implemented as a requirement (2015–2016); projects are funded that develop services supporting reproducibility (M16); EaaS solutions will get harmonized and interoperable (2016–2017); all FIRE Open Access projects get integrated into one single portal offering a coherent package of services (2015–2016) | Mechanisms are set in place that support cross-facility experimentation through a central experimentation facility (2016); a FIRE Broker initiative is implemented providing broker services across the FIRE portfolio (2017) | Implementation of a new financing model to ensure sustainability of resources (2019); FIRE legal entity enables pay-per-use services (2018–2019); FIRE facilities implement secure and trustworthy resource capabilities (2019) |
FIRE Experimenters Solutions | Alignment of EC units leads to cross-domain access to facilities and services (2016–2017); FIRE is made accessible to wider communities by offering community APIs (2015–2016) | Alignment of FIRE and 5G in terms of facilities, services and experimentation actions (2016–2017); introduction of accelerator functionality for a “technology accelerator” | SMEs are a key target group of FIRE, with Open Calls specifically dedicated to SMEs (2018); professionalisation of FIRE services marketing; introduction of startup funding as part of a “full-service accelerator” |
FIRE Framing Conditions Solutions | Professionalization of FIRE's internal organization (2015); collaboration agreements in place between FIRE and large initiatives such as 5G PPP (2015) | A Network of Future Internet Initiatives is established (2016–2017); cross-initiative collaboration in the Future Internet domain is implemented to enable seamless interconnection | FIRE, within NFII, operates as a legal entity to ensure sustainability and professionalisation |
Wired Testbeds: | |
Virtual Wall (iMinds) | Emulation environment with 100 nodes interconnected via a non-blocking 1.5 Tb/s Ethernet switch and a display wall for experiment visualization |
PlanetLab Europe (UPMC) | European arm of the global PlanetLab system, providing access to Internet-connected Linux virtual machines world-wide |
Ultra Access (UC3M, Stanford) | Next Generation of Optical Access research testbed |
10G Trace Tester (UAM) | 10 Gbps Trace Reproduction Testbed for Testing Software-Defined Networks |
PL-LAB (PSNC) | Distributed laboratory in Poland focusing on Parallel Internet paradigms |
Wireless Testbeds: | |
Norbit (NICTA) | Indoor Wi-Fi testbed located in Sydney, Australia |
w-iLab.t (iMinds) | For Wi-Fi and sensor networking experimentation |
NITOS (UTH) | Outdoor testbed featuring Wi-Fi, WiMAX, and LTE |
Netmode (NTUA) | Wi-Fi testbed with indoor facilities |
SmartSantander (UC) | Large scale smart city deployment in the Spanish city of Santander |
FuSeCo (FOKUS) | Future Seamless Communication Playground, integrating various state of the art wireless broadband networks |
PerformLTE (UMA) | Realistic environment composed of radio access equipment, commercial user equipment, and core networks connected to Internet |
C-Lab (UPC) | Community Network Lab involving people and technology to create digital social environments for experimentation |
IRIS (TCD) | Implementing Radio In Software, a virtual computation platform for advanced wireless research |
LOG-a-TEC (JSI) | Cognitive radio testbed for spectrum sensing in TV whitespaces and applications in sensor networks |
Open Flow Testbeds: | |
UBristol OFELIA island | Testbed for Future Internet technologies, specifically Software Defined Networking (SDN)/OpenFlow and virtualization |
i2CAT OFELIA island | Testbed for Future Internet technologies, specifically Software Defined Networking (SDN)/OpenFlow and virtualization |
Koren (NIA) | High-speed research network in Korea interconnecting six nodes with OpenFlow and DCN switches |
NITOS (UTH) | Outdoor testbed featuring Wi-Fi, WiMAX, and LTE |
Cloud Computing Testbeds: | |
BonFIRE (EPCC, Inria) | Multi-cloud testbed for services experimentation |
Virtual Wall (iMinds) | Emulation environment with 100 nodes interconnected via a non-blocking 1.5 Tb/s Ethernet switch and a display wall for experiment visualization |
Other Technologies: | |
FIONA (Adele Robots) | Cloud platform for creating, improving and using virtual robots |
Tengu (iMinds) | Big data analysis |
Function | Description | |
Resource discovery | Finding available resources across all testbeds, and acquiring the necessary information to match required specifications. | |
Resource specification | Specification of the resources required during the experiment, including compute, network, storage and software libraries. | |
Resource reservation | Allocation of a time slot in which exclusive access and control of particular resources is granted. | |
Resource provisioning | Direct (API) | Instantiation of specific resources directly through the testbed API, responsibility of the experimenter to select individual resources. |
Orchestrated | Instantiation of resources through a functional component, which automatically chooses resources that best fit the experimenter’s requirements. | |
Experiment control | Control of the testbed resources and experimenter scripts during experiment execution through predefined or real-time interactions and commands. | |
Monitoring | Facility monitoring | Instrumentation of resources to supervise the behavior and performance of testbeds, allowing system administrators or first level support operators to verify testbed performance. |
Infrastructure monitoring | Instrumentation by the testbed itself of resources to collect data on the behavior and performance of services, technologies, and protocols. | |
Measuring | Experiment measuring | Collection of experimental data generated by frameworks or services that the experimenter can deploy on its own. |
Permanent storage | Storage of experiment related information beyond the experiment lifetime, such as experiment description, disk images and measurements. | |
Resource release | Release of experiment resources after deletion or expiration of the experiment. |
Wi-Fi Throughput (Mb/s) | |||
Min | Average | Max | |
Uncoordinated | 11.5 | 19.6 | 22.8 |
Coordinated | 22.8 | 22.8 | 22.8 |
Wi-Fi Throughput (Mb/s) | |||
Min | Average | Max | |
Uncoordinated | 10.6 | 16.7 | 22.8 |
Coordinated | 22.8 | 22.8 | 22.8 |
NITOS LTE Node | SLA for DL Bandwidth |
Node054 | 15 Mbps |
Node058 | 20 Mbps |
Node074 | 10 Mbps |
Node076 | 30 Mbps |
Node077 | 7.5 Mbps |
Node083 | 5 Mbps |
| Yes | No |
Awareness of energy consumption: | 62% | 38% |
Response to behaviour change request: | 80% | 20% |
Willingness to change behaviour: | 85% | 15% |
Device control: | 71% | 29% |
Control preference: | 86% | 14% |
| Yes | No |
Energy consciousness: | 69% | 31% |
Change in consumption: | Reduction: 54%; Increase: 0% | No change: 46% |
Motive for change: | Financial: 31%; Security of supply: 46%; Social: 8%; Security of supply and financial: 46% | |
Control preference: | 85% | 15% |
Communication medium: | 100% | 0% |
Experiment | Description |
MODA Clouds Alladin (Atos) | Atos Research and Innovation, Slovakia, are investigating a multi-Cloud application in BonFIRE that delivers telemedicine health care for patients at home. The application provides an integrated online clinical, educational and social support network for mild to moderate dementia sufferers and their caregivers. The aim of the experiment is to analyse the application behaviour in a multi-Cloud environment and to improve its robustness and flexibility for peak load usage. |
Sensor Cloud (Deri) | Digital Enterprise Research Institute (DERI) at the National University of Ireland, Galway, came to BonFIRE for testing scalability and stability of a stream middleware platform called Linked Stream Middleware (LSM, developed for the EC-FP7 OpenIoT and Vital projects). The experiment in BonFIRE utilises multiple sites with sensors generating up to 100,000 streaming items per second consumed by up to 100,000 clients. The data processing modules such as data acquisition and stream processing engines are run on the BonFIRE cloud infrastructure. |
SWAN (SCC) | This is an experiment conducted by SSC Services to analyse how one of their software solutions, SWAN, can handle large amounts of data transferred between business partners under different networking conditions. SSC Services have utilised the iMinds Virtual Wall site to achieve fine-grained control of the networking conditions in order to identify critical Quality of Service (QoS) thresholds for their application when varying latency and bandwidth. Moreover, they are investigating possible actions and optimisations to the SWAN components to deal with worsening conditions, so as to deliver the expected QoS to the business partners. |
ERNET | ERNET India are developing software for moving e-learning services into the Cloud and are using BonFIRE to analyse the benefits of Cloud delivery models, including multi-site deployment. In particular, they investigate fault tolerance. |
JUNIPER | BonFIRE also facilitates other research projects, giving access to multiple partners to perform an experiment. One of these projects is the EC-FP7 project JUNIPER (Java Platform for High-Performance and Real-Time Large Scale Data), which deals with efficient and real-time exploitation of large streaming data from unstructured data sources. The JUNIPER platform helps Big Data analytic applications meet requirements of performance, guarantees, and scalability by enabling access to large scale computing infrastructures, such as Cloud Computing and HPC. In JUNIPER, the BonFIRE Cloud premises are used to initially port pilot applications to a production-like Cloud infrastructure. The JUNIPER experiment benefits from the availability of geographically distributed, heterogeneous sites and the availability of fine grained monitoring information (at the infrastructure level) to test and benchmark the developed software stack. Another important advantage of BonFIRE to JUNIPER is that some of the sites owning HPC facilities, e.g., HLRS (Stuttgart), provide transparent access (a bridge) from Cloud to HPC, which is of great importance for JUNIPER experiments. |
Socio-Technical Benefits for Experimenters: Testing Opportunities | Economic Benefits for Experimenters: Exploitation Opportunities |
observation of individual and community behaviours; experience of scaling for large-scale short-lived communities; adaptation to the environment, considering physical, social and ethical constraints; adaptation of content according to individual and/or group preferences; real-time orchestration allowing for adaptive narratives; sensors and devices for detection and tracking of feature points; device capabilities both remote and at a venue; cooperative or collaborative frameworks including dealing with selfish or malicious users | access to a potential market, direct sales; working with a customer's customers; creation of high impact showcases, indirect sales; engagement and collaboration with stakeholders, potential partners/suppliers |
Supported Platforms, Operating Systems and Drivers | |||
Technology | Operating System | Platform | Driver |
IEEE-802.11 | Linux, Windows | Atheros, Broadcom | Ath9k, NDIS driver, WMP |
IEEE-802.15.4 | Contiki, TinyOS | MSP430, CC2x20, CC283x | Contiki/TinyOS drivers, TAISC |
SDR | Linux, Windows | USRP, Xilinx ZedBoard | Iris, LabVIEW, GNU Radio |
Testbed Infrastructure Used | Tests Description |
iMinds WiLab2 | Reserving and accessing nodes (jFed, SSH); mounting images on nodes; testing connectivity |
NITOS lab: NODE001–040 (NITOS Outdoor Testbed: Grid, Orbit); NODE041–049 (NITOS Office Testbed: Icarus); NODE050–085 (NITOS Indoor RF Isolated Testbed) | Reserving and accessing nodes (jFed, SSH); mounting images on nodes; testing connectivity |
Results | Section B.2.1.1 |
Identifier | WiFi 001 | WiFi 002 | WiFi 003 |
NITOS lab infrastructure used | WiFi nodes | WiFi nodes | WiFi nodes |
No. of server instances | >30 | >20 | >40 |
Number of Clients | 15–360 | 15–360 | 15–600 |
Repetitions | 2 | 3 | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together | ||
Results | Section B.2.1.2.1 | Section B.2.1.2.2 | Section B.2.1.2.3 |
Identifier | LTE 001 | LTE 002 | LTE 003 |
NITOS lab infrastructure used | LTE nodes | LTE nodes | LTE nodes |
No. of server instances | 30 | 10 | 40 |
Number of Clients | 15–360 | 15–360 | 15–600 |
Repetitions | 3 | 3 | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together | ||
Results | Section B.2.1.3.1 | Section B.2.1.3.2 | Section B.2.1.3.3 |
Identifier | WiMAX 001 | WiMAX 002 | WiMAX 003 |
NITOS lab infrastructure used | WiMAX nodes | WiMAX nodes | WiMAX nodes |
No. of server instances | 30 | 30 | 30 |
Number of Clients | 15–360 | 15–360 | 15–600 |
Repetitions | 3 | 3 | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together | ||
Results | Section B.2.1.4.1 | Section B.2.1.4.2 | Section B.2.1.4.3 |
Virtual Wall (iMinds) | At first, we used iMinds because we had to test the experiment somewhere, and there was some confusion regarding which testbed we had to use. After speaking to Donatos Stravopoulos, we began using NITOS. |
w-iLab.t (iMinds) | This testbed was used at first, while we were still learning how to interact with the platform via the jFed application. Once we had learned how to use the platform, we began using SSH to access it. |
NITOS (UTH) | This is the testbed we mainly used. The nodes we used are mainly the following: Grid nodes in the “Outdoor Testbed” (node16 to node35). The Orbit nodes seemed to work quite well (node02 to node09), but we were advised not to rely on them fully, since they are not very modern and there were apparently some errors associated with these nodes. In the “Indoor RF Isolated Testbed”, we mainly used the LTE nodes (node054 to node058), which were amazing in response time and speed. |
Fed4FIRE portal | The reservation system works really well, allowing us to see beforehand which nodes are available and specifying the kind of node in each case (this last bit was really useful). |
jFed | We started off using jFed, but its inability to interact correctly with the scheduling functionality made it a bit cumbersome after a while, so in time we came to prefer SSH and other console commands to access the gateway and nodes. |
OMF | OMF was used to create and mount images on the nodes. It was really useful: once the node was reserved and the image had been created, with all of the tools (and even source code) that we were going to use, it was a no-fuss procedure. Really smooth, and much appreciated for maintaining the homogeneity of the distinct environments. |
jFed timeline | At first, we used jFed almost exclusively, so we consulted the availability of nodes via this tool. Later on, we developed a series of command-line aliases and tools that, together with the web portal reservation system, allowed us to be more efficient. |
For the experiment we have mainly been using the following technologies: NodeJS, WebSockets, MySQL and SSH, the last being the main way to communicate between the client machine and the gateway, and then between the gateway and the node itself. |
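The gateway-hop access pattern described above (client machine → gateway → node) can be captured once in the SSH client configuration instead of being typed on every connection. A minimal sketch, in which the host names (`testbed-gw`, `gateway.example.org`, `node054`) and user names are purely illustrative placeholders, not the testbed's actual addresses:

```
# ~/.ssh/config — host and user names are hypothetical
Host testbed-gw
    HostName gateway.example.org    # public testbed gateway (illustrative)
    User experimenter

Host node054
    User root
    ProxyJump testbed-gw            # tunnel through the gateway to reach the node
```

With this in place, `ssh node054` reaches the node through the gateway in a single step; on OpenSSH versions older than 7.3, the equivalent is `ProxyCommand ssh -W %h:%p testbed-gw`.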
Identifier | TESTBED | RESOURCE | EXPERIMENT | RESULT |
Prep_001 | NITOS | Nodes 005, 006, 007, 014, 015, 016, 021, 024, 046 | Creating an image | Resource reservation failed |
Prep_002 | NITOS | Node 029, 033, 035, 005, 007 | Creating an image | Resource reservation failed |
Prep_003 | NITOS | Node 035, 033, 014, 015, 021, 023 | Creating an image | Resource reservation failed |
Prep_004 | NITOS | Node 005, 052, 085, 062,095 | Creating an image | Resource reservation failed |
Prep_005 | NITOS | Node 033 | Creating an image | Resource reservation failed |
Prep_006 | NITOS | Node 007 | Creating an image | Reservation OK → SSH → Connection closed by remote host (KO) |
Prep_007 | NITOS | Node 006 | Creating an image | Reservation OK → SSH → Connection closed by remote host (KO) |
Prep_008 | iMinds WiLab2 | Internet | Creating an image | Reservation of resources failed |
Prep_009 | iMinds WiLab2 | Airswitch | Creating an image | Reservation of resources failed |
Prep_010 | iMinds WiLab2 | Coreswitch | Creating an image | Reservation of resources failed |
Prep_011 | iMinds WiLab2 | Poeswitch | Creating an image | Reservation of resources failed |
Prep_012 | NITOS | Node 005 + Channel 2 (wireless) | Creating an image | Reservation OK → SSH → Connection closed by remote host (KO) |
Identifier | WiFi 001 | |
NITOS lab infrastructure used | Servers hub: Node 068Clients hub: Node 064 | |
No. of server instances | 31 | |
Number of Clients | 15–360 | |
Repetitions | 2 | |
Objective | Time to serve all clients. Independent result and median value for the three of them together | |
Results | Completed |
Identifier | WiFi 002 | |
NITOS lab infrastructure used | Servers hub: Node 054Clients hub: Node 058 | |
No. of server instances | 23 | |
Number of Clients | 15–360 | |
Repetitions | 3 | |
Objective | Time to serve all clients. Independent result and median value for the three of them together | |
Status | Completed |
Identifier | WiFi 003 | |
NITOS lab infrastructure used | Servers hub: Node 054Clients hub: Node 058 | |
No. of server instances | 40 | |
Number of Clients | 15–600 | |
Repetitions | 3 | |
Objective | Time to serve all clients. Independent result and median value for the three of them together | |
Status | Failed |
Identifier | LTE 001 |
NITOS lab infrastructure used | Servers hub: Node 050Clients hub: Node 054 |
No. of server instances | 30 |
Number of Clients | 15–360 |
Repetitions | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together |
Results | Completed |
Identifier | LTE 002 |
NITOS lab infrastructure used | Servers hub: Node 050Clients hub: Node 054 |
No. of server instances | 10 |
Number of Clients | 15–360 |
Repetitions | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together |
Results | Completed |
Identifier | LTE 003 |
NITOS lab infrastructure used | Servers hub: Node 050Clients hub: Node 054 |
No. of server instances | 40 |
Number of Clients | 15–600 |
Repetitions | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together |
Results | Failed |
Identifier | WiMAX 001 |
NITOS lab infrastructure used | Servers hub: Node 041Clients hub: Node 044 |
No. of server instances | 30 |
Number of Clients | 15–360 |
Repetitions | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together |
Results | Completed |
Identifier | WiMAX 002 |
NITOS lab infrastructure used | Servers hub: Node 047Clients hub: Node 048 |
No. of server instances | 30 |
Number of Clients | 15–360 |
Repetitions | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together |
Results | Completed |
Identifier | WiMAX 003 |
NITOS lab infrastructure used | Servers hub: Node 047Clients hub: Node 048 |
No. of server instances | 30 |
Number of Clients | 15–600 |
Repetitions | 3 |
Objective | Time to serve all clients. Independent result and median value for the three of them together |
Results | Failed |
Application Domain | Use Cases |
Precision agriculture | |
Cattle rustling | |
Logistics and transportation | |
Fish farming | |
Environment and Urban agriculture | |
Functional Domains | Description |
Application platform | Application writing, deployment, hosting and execution. |
IoT platform | The connectivity of IoT devices, the sensor data and metadata. |
Stream and data analytics | Data brokering, stream processing and data analytics. |
Security and privacy | Management of the identification, roles and connections of users. Also includes anonymisation of the data and securing of the transmissions. |
Platform Management | Status of the components, deployment of the platform. |
Institution | Testbed | AM Software |
UTH | NITOS | OMF-SFA Broker |
UMU | GAIA | OMF-SFA Broker |
SNU | ICN-OMF | OMF-SFA Broker |
iMinds | w-iLab.t | OMF-SFA Broker |
KISTI | KISTI-Emulab | ProtoGeni |
KAIST | Open WiFi+ | OMF-SFA Broker |
ETRI | MOFI | OMF-SFA Broker |
GIST | OF@TEIN | SFA Wrap |