<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="9788793237049.xsl"?>
<book id="home" xmlns:xlink="http://www.w3.org/1999/xlink">
<bookinfo>
<title>Advances in Intelligent Robotics and Collaborative Automation</title>
<affiliation><emphasis role="strong">Editors</emphasis></affiliation>
<authorgroup>
<author><firstname>Richard</firstname>
<surname>Duro</surname>
</author>
</authorgroup>
<authorgroup>
<author><firstname>Yuriy</firstname>
<surname>Kondratenko</surname>
</author>
</authorgroup>
<publisher>
<publishername>River Publishers</publishername>
</publisher>
<isbn>9788793237049</isbn>
</bookinfo>
<preface class="preface" id="preface01">
<title>RIVER PUBLISHERS SERIES IN AUTOMATION, CONTROL AND ROBOTICS</title>
<para>Volume 1</para>
<para><emphasis>Series Editors</emphasis></para>
<para><emphasis role="strong">Tarek Sobh</emphasis><?lb?><emphasis>University of Bridgeport,</emphasis><?lb?><emphasis>USA</emphasis></para>
<para><emphasis role="strong">Andr&#x00E9; Veltman</emphasis><?lb?><emphasis>PIAK and TU Eindhoven,</emphasis><?lb?><emphasis>The Netherlands</emphasis></para>
<para>The &#x0201C;River Publishers Series in Automation, Control and Robotics&#x0201D; is a series of comprehensive academic and professional books which focus on the theory and applications of automation, control and robotics. The series covers topics ranging from the theory and use of control systems to automation engineering, robotics and intelligent machines.</para>
<para>Books published in the series include research monographs, edited volumes, handbooks and textbooks. The books provide professionals, researchers, educators, and advanced students in the field with an invaluable insight into the latest research and developments.</para>
<para>Topics covered in the series include, but are by no means restricted to the following:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Robots and Intelligent Machines</para></listitem>
<listitem>
<para>Robotics</para></listitem>
<listitem>
<para>Control Systems</para></listitem>
<listitem>
<para>Control Theory</para></listitem>
<listitem>
<para>Automation Engineering</para></listitem>
</itemizedlist>
<para>For a list of other books in this series, visit www.riverpublishers.com<break/>http://www.riverpublishers.com/series.php?msg=Automation,_Control_and_Robotics</para>
</preface>
<preface class="preface" id="preface02">
<title>Preface</title>
<para>This book provides an overview of a series of advanced trends in robotics as well as of design and development methodologies for intelligent robots and their intelligent components.</para>
<para>All the contributions were discussed at the International IEEE Conference IDAACS-2013 (Berlin, Germany, 12&#x02013;14 June, 2013). The IDAACS workshop series is established as a forum for high-quality reports on state-of-the-art theory, technology and applications of intelligent data acquisition and advanced computer systems. All of these techniques and applications have experienced a rapid expansion in the last few years, resulting in more intelligent, sensitive, and accurate methods for the acquisition of data and its processing, applied to manufacturing process control and inspection, environmental and medical monitoring and diagnostics. One of the most interesting paradigms that encompasses much of the research presented at IDAACS is Intelligent Robotic Systems, and this is the area this book concentrates on.</para>
<para>The success of IDAACS arises not only from the importance of the topics it focuses on, but also from its nature as a unique forum for establishing scientific contacts between research teams and scientists from different countries. This has become one of the main reasons for the rapid success of IDAACS, as it turns out to be one of the few events in this area of research where Western and former Eastern European scientists can discuss and exchange ideas and information, allowing them to identify common research interests and creating the environment for establishing joint research collaborations. It provides an opportunity for all the participants to discuss topics with colleagues from different spheres such as academia, industry, and public and private research institutions. Even though this book concentrates on providing insights into what is being done in the area of robotics and intelligent systems, the papers that were selected reflect the variety of research presented during the workshop as well as the very diverse fields that may benefit from these techniques.</para>
<para>In terms of structure, the 13 chapters of the book are grouped into four sections: &#x0201C;Robots&#x0201D;, &#x0201C;Control and Intelligence&#x0201D;, &#x0201C;Sensing&#x0201D; and &#x0201C;Collaborative Automation&#x0201D;. The chapters have been thought out to provide an easy-to-follow introduction to the topics that are addressed, including the most relevant references, so that anyone interested in these topics can begin exploring them through these references. At the same time, all of them correspond to different aspects of work in progress being carried out in various laboratories throughout the world and, therefore, provide information on the state of the art of some of these topics.</para>
<para>The first part, &#x0201C;Robots&#x0201D;, includes three contributions:</para>
<para>&#x0201C;A Modular Architecture for Developing Robots for Industrial Applications&#x0201D;, by A. Fa&#x00ED;&#x00F1;a, F. Orjales, D. Souto, F. Bellas and R. J. Duro, considers ways to make the use of robots feasible in the many sectors characterized by dynamic and unstructured environments. The authors propose a new approach, based on modular robotics, to allow the fast deployment of robots to solve specific tasks. In this approach, the authors start by defining the industrial settings the architecture is aimed at and then extract the main features that would be required from a modular robotic architecture to operate successfully in this context. Finally, a particular heterogeneous modular robotic architecture is designed from these requirements, and a laboratory implementation of it is built in order to test its capabilities and show its versatility using a set of different configurations including manipulators, climbers and walkers.</para>
<para>S. Osadchy, V. Zozulya and A. Timoshenko, in &#x0201C;The Dynamic Characteristics of a Manipulator with Parallel Kinematic Structure Based on Experimental Data&#x0201D;, study two identification techniques which the authors found most useful in examining the dynamic characteristics of a manipulator with a parallel kinematic structure as an object of control. These techniques emphasize a frequency-domain approach. If all input/output signals of an object can be measured, the first technique may be used for identification; when not all disturbances can be measured, the second identification technique may be used.</para>
<para>In &#x0201C;An Autonomous Scale Ship Model for Parametric Rolling Towing Tank Testing&#x0201D;, M. M&#x00ED;guez Gonz&#x00E1;lez, A. Deibe, F. Orjales, B. Priego and F. L&#x00F3;pez Pe&#x00F1;a analyze a special kind of robotic system, in particular a self-propelled scale ship model for towing tank testing, whose main characteristic is that it has no material link to a towing device for carrying out the tests. The model has been fully instrumented in order to acquire all the significant raw data, process them onboard and communicate with an inshore station.</para>
<para>The second part, &#x0201C;Control and Intelligence&#x0201D;, includes four contributions. In &#x0201C;Autonomous Knowledge Discovery Based on Artificial Curiosity Driven Learning by Interaction&#x0201D;, K. Madani, D. M. Ramik and C. Sabourin investigate the development of a real-time intelligent system allowing a robot to discover its surrounding world and to learn new knowledge about it autonomously by semantically interacting with humans. The learning is performed by observation and by interaction with a human. The authors provide experimental results both using simulated environments and implementing the approach on a humanoid robot in a real-world environment including everyday objects. The proposed approach allows a humanoid robot to learn without negative input and from a small number of samples.</para>
<para>F. Kulakov and S. Chernakova, in &#x0201C;Information Technology for Interactive Robot Task Training through Demonstration of Movement&#x0201D;, consider the problem of remote robot control, which includes the solution of the following routine problems: surveillance of the remote working area, remote operation of the robot situated in that area, and pre-training of the robot. The authors propose a new technique for robot control using intelligent multimodal human-machine interfaces (HMI). The application of the new training technology is very promising for space robots as well as for modern assembly plants, including the use of micro- and nanorobots.</para>
<para>In &#x0201C;A Multi-Agent Reinforcement Learning Approach for the Efficient Control of Mobile Robots&#x0201D;, U. Dziomin, A. Kabysh, R. Stetter and V. Golovko present a multi-agent control architecture for the efficient control of a multi-wheeled mobile platform. The proposed control architecture is based on the decomposition of the platform into a holonic, homogenous, multi-agent system. The multi-agent system incorporates multiple Q-learning agents, which permits them to effectively control every wheel relative to the other wheels. The learning process consists of two stages: module positioning, where the agents learn to minimize the orientation error, and cooperative movement, where the agents learn to adjust the desired velocity in order to conform to the desired position in formation. Experiments with a simulation model and the real robot are discussed in detail.</para>
<para>D. Oskin, A. Dyda, S. Longhi and A. Monteri&#x00F9;, in &#x0201C;Underwater Robot Intelligent Control Based on Multilayer Neural Network&#x0201D;, analyze the design of an intelligent neural-network-based control system for underwater robots. A new algorithm for intelligent controller learning is derived using the speed gradient method. The proposed systems provide robot dynamics close to the reference ones. Simulation results of neural network control systems for underwater robot dynamics with parameter and partial structural uncertainty have confirmed the promise and effectiveness of the developed approach.</para>
<para>The third part, &#x0201C;Sensing&#x0201D;, includes four contributions:</para>
<para>&#x0201C;Advanced Trends in Design of Slip Displacement Sensors for Intelligent Robots&#x0201D;, by Y. Kondratenko and V. Kondratenko, discusses advanced trends in the design of modern tactile sensors and sensor systems for intelligent robots. The detection of slip displacement signals supports three approaches to their use: the correction of the clamping force, the identification of the manipulated object&#x02019;s mass, and the correction of the robot control algorithm. The chapter presents the analysis of different methods for the detection of slip displacement signals, as well as new sensor schemes, mathematical models and correction methods.</para>
<para>T. Happek, U. Lang, T. Bockmeier, D. Neubauer and A. Kuznietsov, in &#x0201C;Distributed Data Acquisition and Control Systems for a Sized Autonomous Vehicle&#x0201D;, present an autonomous car with distributed data processing. The car is controlled by a multitude of independent sensors. For lane detection, a camera is used, which detects the lane marks using a Hough transformation. Once the camera detects the lane marks, one lane is selected for the car to follow; this lane is verified by the other sensors of the car. These sensors check the route for obstructions, or allow the car to scan a parking space and to park on the roadside if the gap is large enough.</para>
<para>In &#x0201C;Polymetric Sensing in Intelligent Systems&#x0201D;, Yu. Zhukov, B. Gordeev, A. Zivenko and A. Nakonechniy examine the current relationship between the theory of polymetric measurements and the state of the art in intelligent system sensing. The chapter discusses the concepts of polymetric measurements, the corresponding monitoring information systems used in different technologies, and some prospects for polymetric sensing in intelligent systems and robots. The application of the described concepts in technological processes ready to be controlled by intelligent systems is illustrated.</para>
<para>D. Popescu, G. Stamatescu, A. Maciuca and M. Strutu, in &#x0201C;Design and Implementation of Wireless Sensor Network Based on Multilevel Femtocells for Home Monitoring&#x0201D;, propose an intelligent femtocell-based sensor network for home monitoring of elderly people or people with chronic diseases. The femtocell is defined as a small sensor network which is placed in the patient&#x02019;s house and consists of both mobile and fixed sensors disposed on three layers. The first layer contains body sensors attached to the patient that monitor different health parameters, patient location, position and possible falls. The second layer is dedicated to ambient sensors and routing inside the cell. The third layer contains emergency ambient sensors, distributed as needed, that cover burglary events or toxic gas concentrations. Cell implementation is based on the IRIS family of motes running TinyOS, the embedded software for resource-constrained devices. Experimental results within the system architecture are presented for a detailed analysis and validation.</para>
<para>The fourth part, &#x0201C;Collaborative Automation&#x0201D;, includes two contributions:</para>
<para>In &#x0201C;Common Framework Model for Multi-purpose Underwater Data Collection Devices Deployed with Remotely Operated Vehicles&#x0201D;, M. Caraivan, V. Dache and V. Sgarciu present a common framework model for multi-purpose underwater sensors used for offshore exploration. The development of real-time applications for marine operations is discussed, focusing on modern modeling and simulation methods and addressing the deployment challenges of underwater sensor networks (&#x0201C;Safe-Nets&#x0201D;) using Remotely Operated Vehicles.</para>
<para>Finally, S. Gansemer, J. Sell, U. Grossmann, E. Eren, B. Horster, T. Horster-M&#x00F6;ller and C. Rusch, in &#x0201C;M2M in Agriculture - Business Models and Security Issues&#x0201D;, consider machine-to-machine (M2M) communication, one of the major ICT innovations. A concept for process optimization in agricultural business using M2M technologies is presented through three application scenarios. Within that concept, standardization and communication, as well as security aspects, are discussed.</para>
<para>The papers selected for this book are extended and significantly improved versions of those presented at the workshop. Obviously, this set of papers is just a sample of the dozens of presentations and results that were seen at IDAACS-2013, but we do believe that they provide an overview of some of the problems in the area of robotic systems and intelligent automation, and of the approaches and techniques that relevant research groups within this area are employing to try to solve them. We would like to express our appreciation to all authors for their contributions as well as to the reviewers for their timely and interesting comments and suggestions. We certainly look forward to working with you again.</para>
<blockquote role="flushright">
<para>Yuriy Kondratenko</para>
<para>Richard Duro</para></blockquote>
</preface>
<preface class="preface" id="preface03">
<title>List of Figures</title>
<table-wrap position="float" id="T1">
<table cellspacing="5" cellpadding="5" frame="none" rules="none">
<tbody>
<tr><td valign="top" width="15%"><emphasis role="strong"><link linkend="F1-1">Figure <xref linkend="F1-1" remap="1.1"/></link></emphasis></td><td valign="top" align="left">Diagram of the selected missions, tasks and sub-tasks considered, and the required actuators and effectors.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F1-2">Figure <xref linkend="F1-2" remap="1.2"/></link></emphasis></td><td valign="top" align="left">Different types of modules developed in this project: three effectors on the left part, a linker on the top, a slider on the right, and in the middle there is a rotational module, a hinge module and a telescopic module.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F1-3">Figure <xref linkend="F1-3" remap="1.3"/></link></emphasis></td><td valign="top" align="left">Control board for the slider module and its main components.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F1-4">Figure <xref linkend="F1-4" remap="1.4"/></link></emphasis></td><td valign="top" align="left">Spherical manipulator moving a load from one place to another.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F1-5">Figure <xref linkend="F1-5" remap="1.5"/></link></emphasis></td><td valign="top" align="left">Cartesian manipulators for static missions.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F1-6">Figure <xref linkend="F1-6" remap="1.6"/></link></emphasis></td><td valign="top" align="left">A snake robot that can inspect inside a pipe.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F1-7">Figure <xref linkend="F1-7" remap="1.7"/></link></emphasis></td><td valign="top" align="left">Climber and walker robots for linear and surface missions.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F1-8">Figure <xref linkend="F1-8" remap="1.8"/></link></emphasis></td><td valign="top" align="left">A biped robot able to overpass obstacles.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link></emphasis></td><td valign="top" align="left">Kinematic diagram of single-section<break/>mechanism.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link></emphasis></td><td valign="top" align="left">Block diagram of the mechanism with parallel kinematics.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-3">Figure <xref linkend="F2-3" remap="2.3"/></link></emphasis></td><td valign="top" align="left">Graphs of changes in the length of the rods.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link></emphasis></td><td valign="top" align="left">Graphs of changes in the projections of the resistance moments.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-5">Figure <xref linkend="F2-5" remap="2.5"/></link></emphasis></td><td valign="top" align="left">Graphs of changes in the coordinates of the platform&#x02019;s center of mass.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-6">Figure <xref linkend="F2-6" remap="2.6"/></link></emphasis></td><td valign="top" align="left">Bode diagrams of the mechanism with a parallel structure.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-7">Figure <xref linkend="F2-7" remap="2.7"/></link></emphasis></td><td valign="top" align="left">Block diagram of the mechanism with parallel kinematics.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-8">Figure <xref linkend="F2-8" remap="2.8"/></link></emphasis></td><td valign="top" align="left">Manipulator with a controlled diode motor-operated drive.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-9">Figure <xref linkend="F2-9" remap="2.9"/></link></emphasis></td><td valign="top" align="left">Graphs of the vector u component changes.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-10">Figure <xref linkend="F2-10" remap="2.10"/></link></emphasis></td><td valign="top" align="left">Graphs of the vector x component changes.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-11">Figure <xref linkend="F2-11" remap="2.11"/></link></emphasis></td><td valign="top" align="left">The simulation model scheme of the mechanism with parallel kinematics.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-12">Figure <xref linkend="F2-12" remap="2.12"/></link></emphasis></td><td valign="top" align="left">Graphs of the changes of the X coordinate of the center of mass of the platform.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F2-13">Figure <xref linkend="F2-13" remap="2.13"/></link></emphasis></td><td valign="top" align="left">Graphs of the changes of the Y coordinate of the center of mass of the platform.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-1">Figure <xref linkend="F3-1" remap="3.1"/></link></emphasis></td><td valign="top" align="left">Ship scale model overview.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-2">Figure <xref linkend="F3-2" remap="3.2"/></link></emphasis></td><td valign="top" align="left">Connectivity between Mini-PC and sensors<break/>onboard.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-3">Figure <xref linkend="F3-3" remap="3.3"/></link></emphasis></td><td valign="top" align="left">Graphical user interface to monitor/control the<break/>model from an external workstation.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-4">Figure <xref linkend="F3-4" remap="3.4"/></link></emphasis></td><td valign="top" align="left">Double PID speed control.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-5">Figure <xref linkend="F3-5" remap="3.5"/></link></emphasis></td><td valign="top" align="left">Speed control complementary filter.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-6">Figure <xref linkend="F3-6" remap="3.6"/></link></emphasis></td><td valign="top" align="left">Propulsion system block.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-7">Figure <xref linkend="F3-7" remap="3.7"/></link></emphasis></td><td valign="top" align="left">Roll and pitch motions in parametric roll resonance. Conventional carriage-towed model.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-8">Figure <xref linkend="F3-8" remap="3.8"/></link></emphasis></td><td valign="top" align="left">Conventional carriage-towed model during<break/>testing.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-9">Figure <xref linkend="F3-9" remap="3.9"/></link></emphasis></td><td valign="top" align="left">Proposed model during testing.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-10">Figure <xref linkend="F3-10" remap="3.10"/></link></emphasis></td><td valign="top" align="left">Roll and pitch motions in parametric roll resonance. Proposed model.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-11">Figure <xref linkend="F3-11" remap="3.11"/></link></emphasis></td><td valign="top" align="left">Comparison between experimental and numerical data. Resonant case.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-12">Figure <xref linkend="F3-12" remap="3.12"/></link></emphasis></td><td valign="top" align="left">Comparison between experimental and numerical data. Non-Resonant case.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-13">Figure <xref linkend="F3-13" remap="3.13"/></link></emphasis></td><td valign="top" align="left">Experimental stability diagrams. Fn = 0.1 (left), Fn = 0.15 (right), GM = 0.370 m.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-14">Figure <xref linkend="F3-14" remap="3.14"/></link></emphasis></td><td valign="top" align="left">Comparison between experimental and numerical stability diagrams. Fn = 0.1 (left), Fn = 0.15 (right), GM = 0.370 m.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-15">Figure <xref linkend="F3-15" remap="3.15"/></link></emphasis></td><td valign="top" align="left">Forecast results. 30-neuron, 3-layer MP. 10-second prediction. Resonant case.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F3-16">Figure <xref linkend="F3-16" remap="3.16"/></link></emphasis></td><td valign="top" align="left">Forecast results. 30-neuron, 3-layer MP. 10-second prediction. Non-resonant case.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-1">Figure <xref linkend="F4-1" remap="4.1"/></link></emphasis></td><td valign="top" align="left">General block diagram of the proposed curiosity-driven architecture (left) and principle of the curiosity-based stimulation-satisfaction mechanism for knowledge acquisition (right).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-2">Figure <xref linkend="F4-2" remap="4.2"/></link></emphasis></td><td valign="top" align="left">A human would describe this apple as &#x0201C;red&#x0201D; in spite of the fact that this is not the only visible<break/>color.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-3">Figure <xref linkend="F4-3" remap="4.3"/></link></emphasis></td><td valign="top" align="left">A human would describe this toy frog as green in spite of the fact that this is not the only visible<break/>color.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-4">Figure <xref linkend="F4-4" remap="4.4"/></link></emphasis></td><td valign="top" align="left">Block diagram of relations between observations, features, beliefs and utterances in the sense of the terms defined in the text.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-5">Figure <xref linkend="F4-5" remap="4.5"/></link></emphasis></td><td valign="top" align="left">Upper: the WCS color table. Lower: the WCS color table interpreted by a robot taught to distinguish warm (marked in red), cool (blue) and neutral (white)<break/>colors.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-6">Figure <xref linkend="F4-6" remap="4.6"/></link></emphasis></td><td valign="top" align="left">Evolution of the number of correctly described objects with an increasing number of exposures of each color to the simulated robot.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-7">Figure <xref linkend="F4-7" remap="4.7"/></link></emphasis></td><td valign="top" align="left">Examples of obtained visual color interpretations (lower images) and corresponding original images (upper images) for several testing objects from the COIL database.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-8">Figure <xref linkend="F4-8" remap="4.8"/></link></emphasis></td><td valign="top" align="left">Block diagram of the implementation&#x02019;s<break/>architecture.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-9">Figure <xref linkend="F4-9" remap="4.9"/></link></emphasis></td><td valign="top" align="left">Example of an English phrase and the corresponding syntactic analysis output generated by<break/>TreeTagger.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-10">Figure <xref linkend="F4-10" remap="4.10"/></link></emphasis></td><td valign="top" align="left">Flow diagram of communication between a robot and a human which is used in this work.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-11">Figure <xref linkend="F4-11" remap="4.11"/></link></emphasis></td><td valign="top" align="left">Everyday objects used in the experiments in this<break/>work.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-12">Figure <xref linkend="F4-12" remap="4.12"/></link></emphasis></td><td valign="top" align="left">Tutor pointing at an aid kit detected by the robot and describing its name and color to the robot (left-side picture). Pointing in the same way at other visible objects detected by the robot, the tutor describes them to the robot (right-side picture).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-13">Figure <xref linkend="F4-13" remap="4.13"/></link></emphasis></td><td valign="top" align="left">Tutor pointing at a yellow chocolate box which has been seen, interpreted and learned (by the robot) in terms of colors, then asking the robot to describe the chosen object (left-side picture). Tutor pointing at an unseen white teddy bear, asking the robot to describe the chosen object (right-side picture).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F4-14">Figure <xref linkend="F4-14" remap="4.14"/></link></emphasis></td><td valign="top" align="left">Images from a video sequence showing the robot searching for the book (left-side picture) and the robot&#x02019;s camera view with a visualization of the color interpretation of the searched object (right-side picture).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-1">Figure <xref linkend="F5-1" remap="5.1"/></link></emphasis></td><td valign="top" align="left">Images of the Space Station for two positions: &#x0201C;Convenient for observation&#x0201D; and &#x0201C;Convenient for grabbing&#x0201D; objects with the virtual manipulator.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-2">Figure <xref linkend="F5-2" remap="5.2"/></link></emphasis></td><td valign="top" align="left">&#x0201C;Sensitized Glove&#x0201D; with a camera and the process of training the robot by means of demonstration.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-3">Figure <xref linkend="F5-3" remap="5.3"/></link></emphasis></td><td valign="top" align="left">Formation of images of 3 characteristic points of the object.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-4">Figure <xref linkend="F5-4" remap="5.4"/></link></emphasis></td><td valign="top" align="left">Functional diagram of robot task training regarding survey motions and object grabbing motions using THM and RCS.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-5">Figure <xref linkend="F5-5" remap="5.5"/></link></emphasis></td><td valign="top" align="left">Robot task training to execute survey movements, based on the movements of the operator&#x02019;s head: Training the robot to execute survey motions to inspect the surroundings (a); Training process (b); Reproduction of earlier trained movements (c).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-6">Figure <xref linkend="F5-6" remap="5.6"/></link></emphasis></td><td valign="top" align="left">Variations of the &#x0201C;Sensitized Glove&#x0201D;<break/>construction.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-7">Figure <xref linkend="F5-7" remap="5.7"/></link></emphasis></td><td valign="top" align="left">Using the special glove for training the robot manipulator.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-8">Figure <xref linkend="F5-8" remap="5.8"/></link></emphasis></td><td valign="top" align="left">Stand for teaching robots to execute motions of surveillance and grabbing objects.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-9">Figure <xref linkend="F5-9" remap="5.9"/></link></emphasis></td><td valign="top" align="left">Training of motion coordination of two robot manipulators by natural movements of human head and hand.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F5-10">Figure <xref linkend="F5-10" remap="5.10"/></link></emphasis></td><td valign="top" align="left">Training with the use of a system for the recognition of hand movements and gestures without &#x0201C;Sensitized Gloves&#x0201D; against the real background of the operator&#x02019;s work station.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link></emphasis></td><td valign="top" align="left">Production mobile robot: Production mobile platform (a); Driving module (b).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-2">Figure <xref linkend="F6-2" remap="6.2"/></link></emphasis></td><td valign="top" align="left">Organizational structure of a Holonic Multi-Agent System. Lines indicate the communication<break/>channels.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-3">Figure <xref linkend="F6-3" remap="6.3"/></link></emphasis></td><td valign="top" align="left">Model of Influence Based Multi-Agent Reinforcement Learning in the Case of a Holonic Homogenous Multi-Agent System.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-4">Figure <xref linkend="F6-4" remap="6.4"/></link></emphasis></td><td valign="top" align="left">The Maneuverability of one module.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-5">Figure <xref linkend="F6-5" remap="6.5"/></link></emphasis></td><td valign="top" align="left">Mobile Robot Trajectory Decomposition.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-6">Figure <xref linkend="F6-6" remap="6.6"/></link></emphasis></td><td valign="top" align="left">Holonic Decomposition of the Mobile Platform. Dashed lines represent the boundary of a Multi-Agent System (the Holon). Introduction of the Head Agent leads to a reduction of communication<break/>costs.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-7">Figure <xref linkend="F6-7" remap="6.7"/></link></emphasis></td><td valign="top" align="left">Virtual Structure with a Virtual Coordinate Frame composed of Four Modules with a known Virtual Center.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-8">Figure <xref linkend="F6-8" remap="6.8"/></link></emphasis></td><td valign="top" align="left">A unified view of the control architecture for a Mobile Platform.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-9">Figure <xref linkend="F6-9" remap="6.9"/></link></emphasis></td><td valign="top" align="left">State of the Module with Respect to Reference<break/>Beacon.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-10">Figure <xref linkend="F6-10" remap="6.10"/></link></emphasis></td><td valign="top" align="left">A Decision tree of the reward function.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-11">Figure <xref linkend="F6-11" remap="6.11"/></link></emphasis></td><td valign="top" align="left">Result Topology of the Q-Function.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-12">Figure <xref linkend="F6-12" remap="6.12"/></link></emphasis></td><td valign="top" align="left">Initial and Final Agent Positions.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-13">Figure <xref linkend="F6-13" remap="6.13"/></link></emphasis></td><td valign="top" align="left">Execution of a Learned Control System to turn modules to the center, which is placed on the rear right relative to the platform.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-14">Figure <xref linkend="F6-14" remap="6.14"/></link></emphasis></td><td valign="top" align="left">Agents Team Driving Process.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-15">Figure <xref linkend="F6-15" remap="6.15"/></link></emphasis></td><td valign="top" align="left">The Experiment of modules turning as in the Car Kinematics Scheme (1&#x02013;6 screenshots) and movement around a White Beacon (7&#x02013;9).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F6-16">Figure <xref linkend="F6-16" remap="6.16"/></link></emphasis></td><td valign="top" align="left">The Experiment shows that the radius doesn&#x02019;t change during movement.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-1">Figure <xref linkend="F7-1" remap="7.1"/></link></emphasis></td><td valign="top" align="left">Neural network structure.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-2">Figure <xref linkend="F7-2" remap="7.2"/></link></emphasis></td><td valign="top" align="left">Transient processes in NN control system (&#x003B1; = 0.01, &#x003B3; = 250).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-3">Figure <xref linkend="F7-3" remap="7.3"/></link></emphasis></td><td valign="top" align="left">Forces and Torque in NN control system (&#x003B1; = 0.01, &#x003B3; = 250).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-4">Figure <xref linkend="F7-4" remap="7.4"/></link></emphasis></td><td valign="top" align="left">Examples of hidden layer weight coefficients evolution (&#x003B1; = 0.01, &#x003B3; = 250).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-5">Figure <xref linkend="F7-5" remap="7.5"/></link></emphasis></td><td valign="top" align="left">Examples of output layer weight coefficients evolution (&#x003B1; = 0.01, &#x003B3; = 250).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-6">Figure <xref linkend="F7-6" remap="7.6"/></link></emphasis></td><td valign="top" align="left">Transient processes in NN control system (&#x003B1; = 0.01, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-7">Figure <xref linkend="F7-7" remap="7.7"/></link></emphasis></td><td valign="top" align="left">Forces and Torque in NN control system (&#x003B1; = 0.01, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-8">Figure <xref linkend="F7-8" remap="7.8"/></link></emphasis></td><td valign="top" align="left">Examples of hidden layer weight coefficients evolution (&#x003B1; = 0.01, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-9">Figure <xref linkend="F7-9" remap="7.9"/></link></emphasis></td><td valign="top" align="left">Examples of output layer weight coefficients evolution (&#x003B1; = 0.01, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-10">Figure <xref linkend="F7-10" remap="7.10"/></link></emphasis></td><td valign="top" align="left">Scheme of the NN control system.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-11">Figure <xref linkend="F7-11" remap="7.11"/></link></emphasis></td><td valign="top" align="left">Transient processes with modified NN-control<break/>(&#x003B1; = 0, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-12">Figure <xref linkend="F7-12" remap="7.12"/></link></emphasis></td><td valign="top" align="left">Forces and torque with modified NN control (&#x003B1; = 0, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-13">Figure <xref linkend="F7-13" remap="7.13"/></link></emphasis></td><td valign="top" align="left">Examples of hidden layer weight coefficients evolution (&#x003B1; = 0, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-14">Figure <xref linkend="F7-14" remap="7.14"/></link></emphasis></td><td valign="top" align="left">Examples of output layer weight coefficients evolution (&#x003B1; = 0, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-15">Figure <xref linkend="F7-15" remap="7.15"/></link></emphasis></td><td valign="top" align="left">Transient processes with modified NN control<break/>(&#x003B1; = 0.001, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-16">Figure <xref linkend="F7-16" remap="7.16"/></link></emphasis></td><td valign="top" align="left">Forces and Torque with modified NN control<break/>(&#x003B1; = 0.001, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-17">Figure <xref linkend="F7-17" remap="7.17"/></link></emphasis></td><td valign="top" align="left">Examples of hidden layer weight coefficients evolution (&#x003B1; = 0.001, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F7-18">Figure <xref linkend="F7-18" remap="7.18"/></link></emphasis></td><td valign="top" align="left">Examples of output layer weight coefficients evolution (&#x003B1; = 0.001, &#x003B3; = 200).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link></emphasis></td><td valign="top" align="left">Grasping and lifting an object with the robot&#x02019;s arm: Initial positions of the gripper fingers (1,2) and object (3) (a); Creating the required clamping force <emphasis>F<subscript>ob</subscript></emphasis> by the gripper fingers during object slippage in the lifting process (b).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link></emphasis></td><td valign="top" align="left">Series of trial motions with increasing clamping force <emphasis>F</emphasis> of gripper fingers based on object<break/>slippage.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-3">Figure <xref linkend="F8-3" remap="8.3"/></link></emphasis></td><td valign="top" align="left">The algorithm for solving different robot tasks based on slip signal detection.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link></emphasis></td><td valign="top" align="left">Magnetic SDS: 1&#x02013; Rod; 2&#x02013; Head; 3&#x02013; Permanent magnet; 4&#x02013; Hall sensor.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-5">Figure <xref linkend="F8-5" remap="8.5"/></link></emphasis></td><td valign="top" align="left">Model of magnetic sensitive element.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-6">Figure <xref linkend="F8-6" remap="8.6"/></link></emphasis></td><td valign="top" align="left">Simulation results for <emphasis>B<subscript>y</subscript></emphasis> (<emphasis>P</emphasis>) based on the mathematical model (8.2).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-7">Figure <xref linkend="F8-7" remap="8.7"/></link></emphasis></td><td valign="top" align="left">Comparative analysis of modeling and experimental results.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-8">Figure <xref linkend="F8-8" remap="8.8"/></link></emphasis></td><td valign="top" align="left">The ball as sensitive element of SDS: 1&#x02013; Finger of robot&#x02019;s gripper; 2&#x02013; Cavity for SDS installation; 3&#x02013; Guides; 4&#x02013; Sensitive element (a ball); 5&#x02013; Spring; 6&#x02013; Conductive rubber; 7, 8&#x02013; Fiber optic light guides; 9&#x02013; Sleeve; 10&#x02013; Light; 11&#x02013; Photodetector; 13&#x02013; Cover; 14&#x02013; Screw; 15&#x02013; Hole.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-9">Figure <xref linkend="F8-9" remap="8.9"/></link></emphasis></td><td valign="top" align="left">Light-reflecting surface of the sensitive ball with reflecting and absorbing portions (12) for light<break/>signal.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link></emphasis></td><td valign="top" align="left">Capacitated SDS for the detection of object slippage in different directions: 1&#x02013; Main cavity of robot&#x02019;s gripper; 2&#x02013; Additional cavity; 3&#x02013; Gripper&#x02019;s finger; 4&#x02013; Rod; 5&#x02013; Tip; 6&#x02013; Elastic working surface; 7&#x02013; Spring; 8&#x02013; Resilient element; 9, 10&#x02013; Capacitor<break/>plates.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-11">Figure <xref linkend="F8-11" remap="8.11"/></link></emphasis></td><td valign="top" align="left">Intelligent sensor system for identification of object slippage direction: 3&#x02013; Gripper&#x02019;s finger; 4&#x02013; Rod; 9, 10&#x02013; Capacitor plates; 11&#x02013; Converter &#x0201C;capacitance-voltage&#x0201D;; 12&#x02013; Delay element; 13, 18, 23&#x02013; Adders; 14, 15, 21, 26&#x02013; Threshold elements; 16&#x02013; Multi-input OR element; 17&#x02013; Computer information-control system; 19, 20, 24, 25&#x02013; Channels for sensor information processing; 22, 27&#x02013; NOT elements; 28&#x02013;39&#x02013; AND elements.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-12">Figure <xref linkend="F8-12" remap="8.12"/></link></emphasis></td><td valign="top" align="left">Self-adjusting gripper of an intelligent robot with angle movement of clamping rollers: 1, 2&#x02013; Finger; 3, 4&#x02013; Guide groove; 5, 6&#x02013; Roller; 7, 8&#x02013; Roller axis; 9, 15, 20&#x02013; Spring; 10&#x02013; Object; 11, 18&#x02013; Elastic working surface; 12&#x02013; Clamping force sensor; 13, 14&#x02013; Electroconductive contacts; 16, 19&#x02013; Fixator; 17&#x02013; Stock; 21&#x02013; Adjusting screw; 22&#x02013; Deepening; 23&#x02013; Finger&#x02019;s drive.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-13">Figure <xref linkend="F8-13" remap="8.13"/></link></emphasis></td><td valign="top" align="left">Self-clamping gripper of an intelligent robot with plane-parallel displacement of the clamping roller: Front view (a); Top view (b).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-14">Figure <xref linkend="F8-14" remap="8.14"/></link></emphasis></td><td valign="top" align="left">Experimental self-clamping gripper with plane-parallel displacement of the clamping rollers.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F8-15">Figure <xref linkend="F8-15" remap="8.15"/></link></emphasis></td><td valign="top" align="left">Intelligent robot with 4 degrees of freedom for experimental investigations of SDS.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-1">Figure <xref linkend="F9-1" remap="9.1"/></link></emphasis></td><td valign="top" align="left">Dimensions of the test track.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-2">Figure <xref linkend="F9-2" remap="9.2"/></link></emphasis></td><td valign="top" align="left">Schematic base of the model car.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-3">Figure <xref linkend="F9-3" remap="9.3"/></link></emphasis></td><td valign="top" align="left">The infrared sensors distributed alongside the car.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-4">Figure <xref linkend="F9-4" remap="9.4"/></link></emphasis></td><td valign="top" align="left">Position of the camera.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-5">Figure <xref linkend="F9-5" remap="9.5"/></link></emphasis></td><td valign="top" align="left">Schematic signal flow of the vehicle.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-6">Figure <xref linkend="F9-6" remap="9.6"/></link></emphasis></td><td valign="top" align="left">Overview of the data processing system.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-7">Figure <xref linkend="F9-7" remap="9.7"/></link></emphasis></td><td valign="top" align="left">Comparison between original and in-range<break/>image.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-8">Figure <xref linkend="F9-8" remap="9.8"/></link></emphasis></td><td valign="top" align="left">Original image without and with Hough-lines.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-9">Figure <xref linkend="F9-9" remap="9.9"/></link></emphasis></td><td valign="top" align="left">Hough-Lines and sorted points along the<break/>Hough-Lines.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-10">Figure <xref linkend="F9-10" remap="9.10"/></link></emphasis></td><td valign="top" align="left">Sorted Points and Least-Square Parable.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-11">Figure <xref linkend="F9-11" remap="9.11"/></link></emphasis></td><td valign="top" align="left">Parables and driving lane.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-12">Figure <xref linkend="F9-12" remap="9.12"/></link></emphasis></td><td valign="top" align="left">Detection of stop lines.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F9-13">Figure <xref linkend="F9-13" remap="9.13"/></link></emphasis></td><td valign="top" align="left">Driving along a set path: Track model (a); Lateral deviation and heading angle (b).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-1">Figure <xref linkend="F10-1" remap="10.1"/></link></emphasis></td><td valign="top" align="left">The main idea of the replacement of the distributed multi-sensor system by a polymetric perceptive<break/>agent.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-2">Figure <xref linkend="F10-2" remap="10.2"/></link></emphasis></td><td valign="top" align="left">TDR Coaxial probe immersed into the liquid and the corresponding polymetric signal.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-3">Figure <xref linkend="F10-3" remap="10.3"/></link></emphasis></td><td valign="top" align="left">Time diagrams for polymetric signal formation using additional reference time intervals.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-4">Figure <xref linkend="F10-4" remap="10.4"/></link></emphasis></td><td valign="top" align="left">Disposition of the measuring probe in the tank, position of the reflector and corresponding signals for the cases with air and vapour.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link></emphasis></td><td valign="top" align="left">Example of the general structure of LASCOS hardware components and elements.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link></emphasis></td><td valign="top" align="left">The general structure of LASCOS software elements and functions.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-7">Figure <xref linkend="F10-7" remap="10.7"/></link></emphasis></td><td valign="top" align="left">The general structure of LASCOS holonic agencies functions.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-8">Figure <xref linkend="F10-8" remap="10.8"/></link></emphasis></td><td valign="top" align="left">Safe Storming Diagram Interface of LASCOS.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F10-9">Figure <xref linkend="F10-9" remap="10.9"/></link></emphasis></td><td valign="top" align="left">The Main Window of Ballast System Operations Interface.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-1">Figure <xref linkend="F11-1" remap="11.1"/></link></emphasis></td><td valign="top" align="left">Hybrid femtocell configuration.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-2">Figure <xref linkend="F11-2" remap="11.2"/></link></emphasis></td><td valign="top" align="left">Data and information flow within and outside the femtocell.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-3">Figure <xref linkend="F11-3" remap="11.3"/></link></emphasis></td><td valign="top" align="left">System architecture.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-4">Figure <xref linkend="F11-4" remap="11.4"/></link></emphasis></td><td valign="top" align="left">Sensor deployment example.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-5">Figure <xref linkend="F11-5" remap="11.5"/></link></emphasis></td><td valign="top" align="left">Embedded carbon dioxide sensor node<break/>module.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-6">Figure <xref linkend="F11-6" remap="11.6"/></link></emphasis></td><td valign="top" align="left">Accelerometer node placement on the patient&#x02019;s<break/>body.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-7">Figure <xref linkend="F11-7" remap="11.7"/></link></emphasis></td><td valign="top" align="left">X Axis experiment acceleration with<break/>thresholding.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-8">Figure <xref linkend="F11-8" remap="11.8"/></link></emphasis></td><td valign="top" align="left">Y Axis experiment acceleration with<break/>thresholding.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-9">Figure <xref linkend="F11-9" remap="11.9"/></link></emphasis></td><td valign="top" align="left">Measurement data: humidity (a); temperature (b).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F11-10">Figure <xref linkend="F11-10" remap="11.10"/></link></emphasis></td><td valign="top" align="left">Measurement data: barometric pressure (a);<break/>light (b).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-1">Figure <xref linkend="F12-1" remap="12.1"/></link></emphasis></td><td valign="top" align="left">Pelamis wave converter Orkney, U.K.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-2">Figure <xref linkend="F12-2" remap="12.2"/></link></emphasis></td><td valign="top" align="left">Axial symmetric absorption buoy.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-3">Figure <xref linkend="F12-3" remap="12.3"/></link></emphasis></td><td valign="top" align="left">Wave Dragon &#x02013; Overtopping devices principle.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-4">Figure <xref linkend="F12-4" remap="12.4"/></link></emphasis></td><td valign="top" align="left">Archimedes Waveswing (AWS).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-5">Figure <xref linkend="F12-5" remap="12.5"/></link></emphasis></td><td valign="top" align="left">Wind farms in North Sea.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-6">Figure <xref linkend="F12-6" remap="12.6"/></link></emphasis></td><td valign="top" align="left">Possible underwater sensor network deployment nearby Jack-up rig.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-7">Figure <xref linkend="F12-7" remap="12.7"/></link></emphasis></td><td valign="top" align="left">Sound pressure diagram: 1&#x02013; Equilibrium; 2&#x02013; Sound; 3&#x02013; Environment Pressure; 4&#x02013; Instantaneous Pressure of Sound.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-8">Figure <xref linkend="F12-8" remap="12.8"/></link></emphasis></td><td valign="top" align="left">Signal-Noise Ratio (SNR).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-9">Figure <xref linkend="F12-9" remap="12.9"/></link></emphasis></td><td valign="top" align="left">Most important underwater data &#x00026; voice cables<break/>(2008).</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-10">Figure <xref linkend="F12-10" remap="12.10"/></link></emphasis></td><td valign="top" align="left">Graphical representation of actuators&#x02019; supports.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-11">Figure <xref linkend="F12-11" remap="12.11"/></link></emphasis></td><td valign="top" align="left">Illustration of the geometrical support and spatial distribution of an actuator.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-12">Figure <xref linkend="F12-12" remap="12.12"/></link></emphasis></td><td valign="top" align="left">Graphical representation of the sensor supports.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-13">Figure <xref linkend="F12-13" remap="12.13"/></link></emphasis></td><td valign="top" align="left">Titan 4 Manipulator 7-F.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-14">Figure <xref linkend="F12-14" remap="12.14"/></link></emphasis></td><td valign="top" align="left">Master Arm Control.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-15">Figure <xref linkend="F12-15" remap="12.15"/></link></emphasis></td><td valign="top" align="left">Titan 4 &#x02013; Stow dimensions.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-16">Figure <xref linkend="F12-16" remap="12.16"/></link></emphasis></td><td valign="top" align="left">RigMaster 5-F.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-17">Figure <xref linkend="F12-17" remap="12.17"/></link></emphasis></td><td valign="top" align="left">RigMaster range of motion &#x02013; Side view.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-18">Figure <xref linkend="F12-18" remap="12.18"/></link></emphasis></td><td valign="top" align="left">Triton XLS ROV in simulation scenario.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-19">Figure <xref linkend="F12-19" remap="12.19"/></link></emphasis></td><td valign="top" align="left">Triton XLS schilling robotics 7-Function arm<break/>in scenario.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-20">Figure <xref linkend="F12-20" remap="12.20"/></link></emphasis></td><td valign="top" align="left">Typical hierarchical file layout.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-21">Figure <xref linkend="F12-21" remap="12.21"/></link></emphasis></td><td valign="top" align="left">Spherical-shaped model designed for common framework; a >= b; c is Tether/Cable entry point<break/>diameter.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-22">Figure <xref linkend="F12-22" remap="12.22"/></link></emphasis></td><td valign="top" align="left">Underwater multi-purpose devices prototypes<break/>01 &#x02013; 05.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-23">Figure <xref linkend="F12-23" remap="12.23"/></link></emphasis></td><td valign="top" align="left">Grouping method for multiple simultaneous sensor deployment.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F12-24">Figure <xref linkend="F12-24" remap="12.24"/></link></emphasis></td><td valign="top" align="left">Basic sensor device holder designed for<break/>simulation.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-1">Figure <xref linkend="F13-1" remap="13.1"/></link></emphasis></td><td valign="top" align="left">Synchronization of standards.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-2">Figure <xref linkend="F13-2" remap="13.2"/></link></emphasis></td><td valign="top" align="left">M2M teledesk framework.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-3">Figure <xref linkend="F13-3" remap="13.3"/></link></emphasis></td><td valign="top" align="left">Femtocell communication in agriculture.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-4">Figure <xref linkend="F13-4" remap="13.4"/></link></emphasis></td><td valign="top" align="left">Information and control flow of the scenario<break/>harvesting process.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-5">Figure <xref linkend="F13-5" remap="13.5"/></link></emphasis></td><td valign="top" align="left">Value chain of M2M-Teledesk.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-6">Figure <xref linkend="F13-6" remap="13.6"/></link></emphasis></td><td valign="top" align="left">Service-links between delivering and receiving services.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-7">Figure <xref linkend="F13-7" remap="13.7"/></link></emphasis></td><td valign="top" align="left">Relation matrix A, transfer price vector B and sales price vector D.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-8">Figure <xref linkend="F13-8" remap="13.8"/></link></emphasis></td><td valign="top" align="left">Vector of variable costs C and vector of marginal return per unit M.</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="F13-9">Figure <xref linkend="F13-9" remap="13.9"/></link></emphasis></td><td valign="top" align="left">Large file encryption.</td></tr>
</tbody>
</table>
</table-wrap>
</preface>
<preface class="preface" id="preface04">
<title>List of Tables</title>
<table-wrap position="float" id="T1">
<table cellspacing="5" cellpadding="5" frame="none" rules="none">
<tbody>
<tr><td valign="top"><emphasis role="strong"><link linkend="T1-1">Table <xref linkend="T1-1" remap="1.1"/></link></emphasis></td><td valign="top" align="left">Actuator Modules</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T3-1">Table <xref linkend="T3-1" remap="3.1"/></link></emphasis></td><td valign="top" align="left">Measured Parameters</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T6-1">Table <xref linkend="T6-1" remap="6.1"/></link></emphasis></td><td valign="top" align="left">The Environment Information</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T6-2">Table <xref linkend="T6-2" remap="6.2"/></link></emphasis></td><td valign="top" align="left">Agent Actions</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T8-1">Table <xref linkend="T8-1" remap="8.1"/></link></emphasis></td><td valign="top" align="left">The base of production rules &#x0201C;IF-THEN&#x0201D; for<break/>identification of the slip displacement direction</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T10-1">Table <xref linkend="T10-1" remap="10.1"/></link></emphasis></td><td valign="top" align="left">Quantity and Sensor Types for the Traditional Cargo Sensory System (Example)</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T11-1">Table <xref linkend="T11-1" remap="11.1"/></link></emphasis></td><td valign="top" align="left">Main characteristics of IEEE 802.15.1 and<break/>802.15.4</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T11-2">Table <xref linkend="T11-2" remap="11.2"/></link></emphasis></td><td valign="top" align="left">XMesh power configuration matrix</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T11-3">Table <xref linkend="T11-3" remap="11.3"/></link></emphasis></td><td valign="top" align="left">XMesh performance summary table</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T11-4">Table <xref linkend="T11-4" remap="11.4"/></link></emphasis></td><td valign="top" align="left">Ambient Monitoring Summary</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T12-1">Table <xref linkend="T12-1" remap="12.1"/></link></emphasis></td><td valign="top" align="left">Symbols Definition and Corresponding I.S.<break/>Measurement Units</td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="T13-1">Table <xref linkend="T13-1" remap="13.1"/></link></emphasis></td><td valign="top" align="left">Revenue, costs and business potential for partners along M2M value chain</td></tr>
</tbody>
</table>
</table-wrap>
</preface>
<preface class="preface" id="preface05">
<title>List of Abbreviations</title>
<table-wrap position="float" id="T13-1">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups"><tbody>
<tr>
<td valign="top" align="left">ABS</td>
<td valign="top" align="left">Acrylonitrile butadiene styrene</td>
</tr>
<tr>
<td valign="top" align="left">ABS</td>
<td valign="top" align="left">American Bureau of Shipping</td>
</tr>
<tr>
<td valign="top" align="left">ADC</td>
<td valign="top" align="left">analog-to-digital converter</td>
</tr>
<tr>
<td valign="top" align="left">AES</td>
<td valign="top" align="left">Advanced Encryption Standard</td>
</tr>
<tr>
<td valign="top" align="left">AI</td>
<td valign="top" align="left">Artificial Intelligence</td>
</tr>
<tr>
<td valign="top" align="left">ANN</td>
<td valign="top" align="left">Artificial Neural Network</td></tr>
<tr>
<td valign="top" align="left">AquaRET</td>
<td valign="top" align="left">Aquatic Renewable Energy Technologies</td>
</tr>
<tr>
<td valign="top" align="left">ASN</td>
<td valign="top" align="left">Ambient sensor network</td></tr>
<tr>
<td valign="top" align="left">AUV</td>
<td valign="top" align="left">Autonomous Underwater Vehicle</td>
</tr>
<tr>
<td valign="top" align="left">AWS</td>
<td valign="top" align="left">Archimedes Waveswing</td>
</tr>
<tr>
<td valign="top" align="left">BCU</td>
<td valign="top" align="left">Behavior Control Unit</td></tr>
<tr>
<td valign="top" align="left">BOP</td>
<td valign="top" align="left">Blow-Out Preventer</td>
</tr>
<tr>
<td valign="top" align="left">BSN</td>
<td valign="top" align="left">Body sensor network</td>
</tr>
<tr>
<td valign="top" align="left">CAN</td>
<td valign="top" align="left">Controller Area Network</td></tr>
<tr>
<td valign="top" align="left">CCD</td>
<td valign="top" align="left">charge-coupled device</td>
</tr>
<tr>
<td valign="top" align="left">CCF</td>
<td valign="top" align="left">Conscious Cognitive Functions</td>
</tr>
<tr>
<td valign="top" align="left">CCMP</td>
<td valign="top" align="left">Counter Mode/CBC-MAC Protocol</td>
</tr>
<tr>
<td valign="top" align="left">CCSDBS</td>
<td valign="top" align="left">computer-aided floating dock ballasting process control and monitoring system</td>
</tr>
<tr>
<td valign="top" align="left">CMF</td>
<td valign="top" align="left">continuous max-flow algorithm</td>
</tr>
<tr>
<td valign="top" align="left">CO</td>
<td valign="top" align="left">Carbon monoxide</td>
</tr>
<tr>
<td valign="top" align="left">CO<subscript>2</subscript></td>
<td valign="top" align="left">Carbon dioxide</td>
</tr>
<tr>
<td valign="top" align="left">COIL</td>
<td valign="top" align="left">Columbia Object Image Library</td>
</tr>
<tr>
<td valign="top" align="left">CP</td>
<td valign="top" align="left">characteristic points</td>
</tr>
<tr>
<td valign="top" align="left">CRL</td>
<td valign="top" align="left">Certificate Revocation List</td>
</tr>
<tr>
<td valign="top" align="left">CS(SC)</td>
<td valign="top" align="left">coordinate system</td>
</tr>
<tr>
<td valign="top" align="left">CSMA/CA</td>
<td valign="top" align="left">Carrier sense multiple access/collision avoidance</td>
</tr>
<tr>
<td valign="top" align="left">CU</td>
<td valign="top" align="left">Communication Unit</td>
</tr>
<tr>
<td valign="top" align="left">DB</td>
<td valign="top" align="left">database</td>
</tr>
<tr>
<td valign="top" align="left">DC</td>
<td valign="top" align="left">Direct Current</td>
</tr>
<tr>
<td valign="top" align="left">DHCP</td>
<td valign="top" align="left">Dynamic Host Configuration Protocol</td>
</tr>
<tr>
<td valign="top" align="left">DHSS</td>
<td valign="top" align="left">Direct hopping spread spectrum</td>
</tr>
<tr>
<td valign="top" align="left">DMA</td>
<td valign="top" align="left">decision-making agency</td>
</tr>
<tr>
<td valign="top" align="left">DMP</td>
<td valign="top" align="left">decision-making person</td></tr>
<tr>
<td valign="top" align="left">DNS</td>
<td valign="top" align="left">Domain Name System</td>
</tr>
<tr>
<td valign="top" align="left">ECG</td>
<td valign="top" align="left">Electrocardiogram</td>
</tr>
<tr>
<td valign="top" align="left">EE</td>
<td valign="top" align="left">external environment</td></tr>
<tr>
<td valign="top" align="left">ESC</td>
<td valign="top" align="left">Electronic Speed Control</td>
</tr>
<tr>
<td valign="top" align="left">ESN</td>
<td valign="top" align="left">Emergency sensor network</td>
</tr>
<tr>
<td valign="top" align="left">EU</td>
<td valign="top" align="left">European Union</td>
</tr>
<tr>
<td valign="top" align="left">FHSS</td>
<td valign="top" align="left">Frequency hopping spread spectrum</td>
</tr>
<tr>
<td valign="top" align="left">FIM</td>
<td valign="top" align="left">Fisher Information Matrix</td></tr>
<tr>
<td valign="top" align="left">FMN</td>
<td valign="top" align="left">Femtocell management node</td>
</tr>
<tr>
<td valign="top" align="left">FPSO</td>
<td valign="top" align="left">Floating production storage and offloading</td>
</tr>
<tr>
<td valign="top" align="left">FSM</td>
<td valign="top" align="left">frame-structured model</td></tr>
<tr>
<td valign="top" align="left">GIS</td>
<td valign="top" align="left">geographic information systems</td>
</tr>
<tr>
<td valign="top" align="left">GM</td>
<td valign="top" align="left">graphical model</td>
</tr>
<tr>
<td valign="top" align="left">GM</td>
<td valign="top" align="left">Ship Metacentric Height</td>
</tr>
<tr>
<td valign="top" align="left">GND</td>
<td valign="top" align="left">Ground</td></tr>
<tr>
<td valign="top" align="left">GPS</td>
<td valign="top" align="left">Global Positioning System</td>
</tr>
<tr>
<td valign="top" align="left">GSM</td>
<td valign="top" align="left">Global System for Mobile Communications</td>
</tr>
<tr>
<td valign="top" align="left">GUI</td>
<td valign="top" align="left">Graphical User Interface</td>
</tr>
<tr>
<td valign="top" align="left">H<superscript>2</superscript>MAS</td>
<td valign="top" align="left">holonic homogenous multi-agent system</td>
</tr>
<tr>
<td valign="top" align="left">HLAU</td>
<td valign="top" align="left">High-level Knowledge Acquisition Unit</td>
</tr>
<tr>
<td valign="top" align="left">HMI</td>
<td valign="top" align="left">human-machine interfaces</td>
</tr>
<tr>
<td valign="top" align="left">HTPS</td>
<td valign="top" align="left">hand position tracking system</td>
</tr>
<tr>
<td valign="top" align="left">HTTPS</td>
<td valign="top" align="left">Hypertext Transfer Protocol Secure</td>
</tr>
<tr>
<td valign="top" align="left">ICT</td>
<td valign="top" align="left">Information and Communication Technology</td>
</tr>
<tr>
<td valign="top" align="left">IE</td>
<td valign="top" align="left">internal environment</td>
</tr>
<tr>
<td valign="top" align="left">IEEE</td>
<td valign="top" align="left">Institute of Electrical and Electronics Engineers</td>
</tr>
<tr>
<td valign="top" align="left">IMI</td>
<td valign="top" align="left">intelligent multimodal interface</td>
</tr>
<tr>
<td valign="top" align="left">IMM</td>
<td valign="top" align="left">intelligent multimodal system</td>
</tr>
<tr>
<td valign="top" align="left">IMU</td>
<td valign="top" align="left">Inertial Measurement Unit</td>
</tr>
<tr>
<td valign="top" align="left">INE</td>
<td valign="top" align="left">information environment agency</td>
</tr>
<tr>
<td valign="top" align="left">IP</td>
<td valign="top" align="left">Internet Protocol</td>
</tr>
<tr>
<td valign="top" align="left">IRR</td>
<td valign="top" align="left">ideal rational robot</td>
</tr>
<tr>
<td valign="top" align="left">IS</td>
<td valign="top" align="left">International System (Measuring)</td>
</tr>
<tr>
<td valign="top" align="left">ISM</td>
<td valign="top" align="left">Industrial, Scientific and Medical</td>
</tr>
<tr>
<td valign="top" align="left">ISO</td>
<td valign="top" align="left">International Organization for Standardization</td>
</tr>
<tr>
<td valign="top" align="left">ITTC</td>
<td valign="top" align="left">International Towing Tank Conference</td>
</tr>
<tr>
<td valign="top" align="left">JRE</td>
<td valign="top" align="left">Java Runtime Environment</td>
</tr>
<tr>
<td valign="top" align="left">KB</td>
<td valign="top" align="left">knowledge base</td>
</tr>
<tr>
<td valign="top" align="left">LASCOS</td>
<td valign="top" align="left">loading and safety control system</td>
</tr>
<tr>
<td valign="top" align="left">LHP</td>
<td valign="top" align="left">left half-plane</td>
</tr>
<tr>
<td valign="top" align="left">LKAU</td>
<td valign="top" align="left">Low-level Knowledge Acquisition Unit</td>
</tr>
<tr>
<td valign="top" align="left">LNG</td>
<td valign="top" align="left">liquefied natural gas</td></tr>
<tr>
<td valign="top" align="left">LPG</td>
<td valign="top" align="left">liquefied petroleum gas</td>
</tr>
<tr>
<td valign="top" align="left">M2M</td>
<td valign="top" align="left">Machine To Machine</td></tr>
<tr>
<td valign="top" align="left">MAS</td>
<td valign="top" align="left">multi-agent system</td>
</tr>
<tr>
<td valign="top" align="left">MEE</td>
<td valign="top" align="left">combination of the EE</td></tr>
<tr>
<td valign="top" align="left">MEMS</td>
<td valign="top" align="left">Microelectromechanical Systems</td>
</tr>
<tr>
<td valign="top" align="left">MFM</td>
<td valign="top" align="left">motion shape models</td>
</tr>
<tr>
<td valign="top" align="left">MIE</td>
<td valign="top" align="left">particular IE</td>
</tr>
<tr>
<td valign="top" align="left">MIT</td>
<td valign="top" align="left">Massachusetts Institute of Technology</td>
</tr>
<tr>
<td valign="top" align="left">MiWi</td>
<td valign="top" align="left">Wireless protocol designed by Microchip Technology</td>
</tr>
<tr>
<td valign="top" align="left">MM</td>
<td valign="top" align="left">mathematical model</td></tr>
<tr>
<td valign="top" align="left">MMI</td>
<td valign="top" align="left">man-machine interface</td>
</tr>
<tr>
<td valign="top" align="left">MOE</td>
<td valign="top" align="left">model of the OE</td></tr>
<tr>
<td valign="top" align="left">MP</td>
<td valign="top" align="left">Multilayer Perceptron</td>
</tr>
<tr>
<td valign="top" align="left">MPNN</td>
<td valign="top" align="left">Multilayer Perceptron Neural Network</td>
</tr>
<tr>
<td valign="top" align="left">NEM</td>
<td valign="top" align="left">navigation environment monitoring</td>
</tr>
<tr>
<td valign="top" align="left">NN</td>
<td valign="top" align="left">Neural Network</td></tr>
<tr>
<td valign="top" align="left">NU</td>
<td valign="top" align="left">Navigation Unit</td>
</tr>
<tr>
<td valign="top" align="left">OCSP</td>
<td valign="top" align="left">Online Certificate Status Protocol</td></tr>
<tr>
<td valign="top" align="left">ODA</td>
<td valign="top" align="left">Operations Data Acquisition</td>
</tr>
<tr>
<td valign="top" align="left">OE</td>
<td valign="top" align="left">objects of the environment</td>
</tr>
<tr>
<td valign="top" align="left">OpenCV</td>
<td valign="top" align="left">Open Computer Vision</td>
</tr>
<tr>
<td valign="top" align="left">OPI</td>
<td valign="top" align="left">operator interface agency</td>
</tr>
<tr>
<td valign="top" align="left">OWC</td>
<td valign="top" align="left">Oscillating Water Column</td>
</tr>
<tr>
<td valign="top" align="left">PC</td>
<td valign="top" align="left">Personal computer</td>
</tr>
<tr>
<td valign="top" align="left">PCB</td>
<td valign="top" align="left">printed circuit board</td>
</tr>
<tr>
<td valign="top" align="left">PCBs</td>
<td valign="top" align="left">Printed Circuit Boards</td>
</tr>
<tr>
<td valign="top" align="left">PID</td>
<td valign="top" align="left">Proportional-Integral-Derivative</td>
</tr>
<tr>
<td valign="top" align="left">PPA</td>
<td valign="top" align="left">polymetric perceptive agency</td>
</tr>
<tr>
<td valign="top" align="left">PT</td>
<td valign="top" align="left">Process Transparency</td>
</tr>
<tr>
<td valign="top" align="left">RADIUS</td>
<td valign="top" align="left">Remote Authentication Dial-In User Service</td>
</tr>
<tr>
<td valign="top" align="left">RC</td>
<td valign="top" align="left">Radio Control</td>
</tr>
<tr>
<td valign="top" align="left">RCS</td>
<td valign="top" align="left">robot control system</td>
</tr>
<tr>
<td valign="top" align="left">RF</td>
<td valign="top" align="left">Radio Frequency</td>
</tr>
<tr>
<td valign="top" align="left">RHP</td>
<td valign="top" align="left">right half-plane</td>
</tr>
<tr>
<td valign="top" align="left">RMS</td>
<td valign="top" align="left">Root Mean Square</td>
</tr>
<tr>
<td valign="top" align="left">ROV</td>
<td valign="top" align="left">Remotely Operated (Underwater) Vehicle</td>
</tr>
<tr>
<td valign="top" align="left">RSN</td>
<td valign="top" align="left">Robust Secure Network</td>
</tr>
<tr>
<td valign="top" align="left">RSU</td>
<td valign="top" align="left">Remote Software Update</td>
</tr>
<tr>
<td valign="top" align="left">SADCO</td>
<td valign="top" align="left">systems for automated distant (remote) control</td>
</tr>
<tr>
<td valign="top" align="left">SDS</td>
<td valign="top" align="left">slip displacement sensor</td>
</tr>
<tr>
<td valign="top" align="left">SIFT</td>
<td valign="top" align="left">Scale-invariant feature transform</td>
</tr>
<tr>
<td valign="top" align="left">SIM</td>
<td valign="top" align="left">Subscriber Identity Module</td></tr>
<tr>
<td valign="top" align="left">SMA</td>
<td valign="top" align="left">sensory monitoring agency</td>
</tr>
<tr>
<td valign="top" align="left">SNR</td>
<td valign="top" align="left">Signal to Noise Ratio</td>
</tr>
<tr>
<td valign="top" align="left">SPM</td>
<td valign="top" align="left">ship state parameters monitoring</td>
</tr>
<tr>
<td valign="top" align="left">SSM</td>
<td valign="top" align="left">sea state monitoring</td>
</tr>
<tr>
<td valign="top" align="left">Tbps</td>
<td valign="top" align="left">Terabits per second</td>
</tr>
<tr>
<td valign="top" align="left">TDR</td>
<td valign="top" align="left">time domain reflectometry</td>
</tr>
<tr>
<td valign="top" align="left">THM</td>
<td valign="top" align="left">tracking the head motion</td>
</tr>
<tr>
<td valign="top" align="left">TMS</td>
<td valign="top" align="left">Tether Management System</td>
</tr>
<tr>
<td valign="top" align="left">TSHP</td>
<td valign="top" align="left">system for tracking hand movements</td>
</tr>
<tr>
<td valign="top" align="left">TSP</td>
<td valign="top" align="left">telesensor programming</td>
</tr>
<tr>
<td valign="top" align="left">UART</td>
<td valign="top" align="left">Universal Asynchronous Receiver Transmitter</td>
</tr>
<tr>
<td valign="top" align="left">UAV</td>
<td valign="top" align="left">Unmanned Aerial Vehicle</td></tr>
<tr>
<td valign="top" align="left">UCF</td>
<td valign="top" align="left">Unconscious Cognitive Functions</td>
</tr>
<tr>
<td valign="top" align="left">UMTS</td>
<td valign="top" align="left">Universal Mobile Telecommunications System</td>
</tr>
<tr>
<td valign="top" align="left">UPEC</td>
<td valign="top" align="left">University Paris-Est Creteil</td>
</tr>
<tr>
<td valign="top" align="left">UR</td>
<td valign="top" align="left">Underwater Robot</td>
</tr>
<tr>
<td valign="top" align="left">USART</td>
<td valign="top" align="left">Universal Synchronous/Asynchronous Receiver Transmitter</td>
</tr>
<tr>
<td valign="top" align="left">USB</td>
<td valign="top" align="left">Universal Serial Bus</td>
</tr>
<tr>
<td valign="top" align="left">USD</td>
<td valign="top" align="left">United States Dollars</td>
</tr>
<tr>
<td valign="top" align="left">UUV</td>
<td valign="top" align="left">Unmanned Underwater Vehicle</td>
</tr>
<tr>
<td valign="top" align="left">VB.NET</td>
<td valign="top" align="left">Visual Basic .NET</td>
</tr>
<tr>
<td valign="top" align="left">VSM</td>
<td valign="top" align="left">virtual ship model</td>
</tr>
<tr>
<td valign="top" align="left">WCM</td>
<td valign="top" align="left">weather conditions model</td>
</tr>
<tr>
<td valign="top" align="left">WCS</td>
<td valign="top" align="left">Windows Color System</td>
</tr>
<tr>
<td valign="top" align="left">WIFI</td>
<td valign="top" align="left">Wireless Fidelity</td>
</tr>
<tr>
<td valign="top" align="left">WLAN</td>
<td valign="top" align="left">Wireless Local Area Network</td></tr>
</tbody>
</table>
</table-wrap>
</preface>
<preface class="preface" id="preface06">
<title>Contents</title>
<table-wrap position="float">
<table cellspacing="5" cellpadding="5" frame="none" rules="none">
<tbody>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch01">1 A Modular Architecture for Developing Robots for Industrial Applications</link></emphasis><?lb?>A. Fa&#x00ED;&#x00F1;a, F. Orjales, D. Souto, F. Bellas and R. J. Duro</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/01_Chapter_01.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch02">2 The Dynamic Characteristics of a Manipulator with Parallel Kinematic Structure Based on Experimental Data</link></emphasis><?lb?>S. Osadchy, V. Zozulya and A. Timoshenko</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/02_Chapter_02.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch03">3 An Autonomous Scale Ship Model for Parametric Rolling Towing Tank Testing</link></emphasis><?lb?>M. M&#x00ED;guez Gonz&#x00E1;lez, A. Deibe, F. Orjales, B. Priego and F. L&#x00F3;pez Pe&#x00F1;a</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/03_Chapter_03.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch04">4 Autonomous Knowledge Discovery Based on Artificial Curiosity-Driven Learning by Interaction</link></emphasis><?lb?>K. Madani, D. M. Ramik and C. Sabourin</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/04_Chapter_04.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch05">5 Information Technology for Interactive Robot Task Training Through Demonstration of Movement</link></emphasis><?lb?>F. Kulakov and S. Chernakova</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/05_Chapter_05.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch06">6 A Multi-Agent Reinforcement Learning Approach for the Efficient Control of Mobile Robots</link></emphasis><?lb?>U. Dziomin, A. Kabysh, R. Stetter and V. Golovko</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/06_Chapter_06.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch07">7 Underwater Robot Intelligent Control Based on Multilayer Neural Network</link></emphasis><?lb?>D. A. Oskin, A. A. Dyda, S. Longhi and A. Monteri&#x00F9;</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/07_Chapter_07.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch08">8 Advanced Trends in Design of Slip Displacement Sensors for Intelligent Robots</link></emphasis><?lb?>Y. P. Kondratenko and V. Y. Kondratenko</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/08_Chapter_08.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch09">9 Distributed Data Acquisition and Control Systems for a Sized Autonomous Vehicle</link></emphasis><?lb?>T. Happek, U. Lang, T. Bockmeier, D. Neubauer and A. Kuznietsov</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/09_Chapter_09.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch10">10 Polymetric Sensing in Intelligent Systems</link></emphasis><?lb?>Yu. Zhukov, B. Gordeev, A. Zivenko and A. Nakonechniy</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/10_Chapter_10.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch11">11 Design and Implementation of Wireless Sensor Network Based on Multilevel Femtocells for Home Monitoring</link></emphasis><?lb?>D. Popescu, G. Stamatescu, A. M&#x01CE;ciuc&#x01CE; and M. Stru&#x00163;<emphasis>u</emphasis></td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/11_Chapter_11.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch12">12 Common Framework Model for Multi-Purpose Underwater Data Collection Devices Deployed with Remotely Operated Vehicles</link></emphasis><?lb?>M.C. Caraivan, V. Dache and V. Sgarciu</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/12_Chapter_12.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch13">13 M2M in Agriculture &#x02013; Business Models and Security Issues</link></emphasis><?lb?>S. Gansemer, J. Sell, U. Grossmann, E. Eren, B. Horster, T. Horster-M&#x00F6;ller and C. Rusch</td><td valign="top" align="left" width="20%"><ulink url="http://riverpublishers.com/dissertations_xml/9788793237049/pdf/13_Chapter_13.pdf">Download As PDF</ulink></td></tr>
</tbody>
</table>
</table-wrap>
</preface>
<chapter class="chapter" id="ch01" label="1" xreflabel="1">
<title>A Modular Architecture for Developing Robots for Industrial Applications</title>
<para><emphasis role="strong">A. Fa&#x00ED;&#x00F1;a<superscript><emphasis role="strong">1</emphasis></superscript>, F. Orjales<superscript><emphasis role="strong">2</emphasis></superscript>, D. Souto<superscript><emphasis role="strong">2</emphasis></superscript>, F. Bellas<superscript><emphasis role="strong">2</emphasis></superscript> and R.J. Duro<superscript><emphasis role="strong">2</emphasis></superscript></emphasis></para>
<para><superscript>1</superscript>IT University of Copenhagen, Denmark</para>
<para><superscript>2</superscript>Integrated Group for Engineering Research, University of Coruna, Spain</para>
<para>Corresponding author: A. Fa&#x00ED;&#x00F1;a &lt;anfv@itu.dk&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>This chapter proposes ways to make the use of robots feasible in the many sectors characterized by dynamic and unstructured environments. In particular, we address the problem through a new approach, based on modular robotics, that allows the fast deployment of robots to solve specific tasks. A series of authors have previously proposed modular architectures, albeit mostly in laboratory settings. For this reason, their designs were usually focused on what could be built rather than on what was necessary for industrial operations. The approach presented here addresses the problem the other way around: we start by defining the industrial settings the architecture is aimed at and then extract the main features a modular robotic architecture would require in order to operate successfully in this context. Finally, a particular heterogeneous modular robotic architecture is designed from these requirements, and a laboratory implementation of it is built in order to test its capabilities and show its versatility using a set of different configurations, including manipulators, climbers and walkers.</para>
<para><emphasis role="strong">Keywords:</emphasis> Modular robots, industrial automation, multi-robot systems</para>
</section>
<section class="lev1" id="sec1-1">
<title>1.1 Introduction</title>
<para>There are several industrial sectors, such as shipyards or construction, where the use of robots is still very low. These sectors are characterized by dynamic and unstructured work environments where the work is not carried out on a production line; rather, the workers have to move to the structures being built, and these structures change during the construction process. These are the main reasons for the low level of automation in these sectors. Despite this, there are some cases in which robot systems have been considered in order to increase automation in these areas. However, they were developed for very specific tasks, that is, as specialists. Some examples are robots for operations such as grit-blasting [1, 2], welding [3], painting [4, 5], installation of structures [6, 7] or inspection [8, 9]. Nevertheless, their global impact on the sector is still low [10]. The main reason for this low penetration is the high cost of developing a robot for a specialized task, combined with the large number of different types of tasks that must be carried out in these industries. In other words, it is not practical to have a large group of expensive robots, each of which will only be used for a particular task and will be doing nothing the rest of the time.</para>
<para>In the last few years, in order to increase the level of automation in the aforementioned environments, several approaches have been proposed based on multi-component robotics systems as an alternative to the use of one robot for each task [11&#x02013;13]. These approaches seek to obtain simple robotic systems capable of adapting, easily and quickly, to different environments and tasks according to the requirements of the situation.</para>
<para>Multi-component robotic systems can be classified into three categories: distributed, linked and modular robots [14]; however, in this work, only the last category will be taken into account. Thus, we explore an approach based on modular robotics, which basically seeks the re-utilization of pre-designed robotic modules. We want to develop an architecture that with a small set of modules can lead to many different types of robots for performing different tasks.</para>
<para>In the last two decades, several proposals of modular architectures for autonomous robots have been made [15, 16]. An early approach to modular architectures resulted in what was called &#x02018;modular mobile robotic systems.&#x02019; These robots can move around the environment, and they can connect to one another to form complex structures for performing tasks that cannot be carried out by a single unit. Examples are CEBOT [17] or SMC-Rover [18]. Another type of modular architecture is lattice robots. These robots can form compact three-dimensional structures or lattices over which one module or a set of them can move. Atron [19] and Tetrobot systems [20] are examples of this architecture.</para>
<para>A different approach to modularity is provided by the so-called chain-based architecture, examples of which are modular robots such as Polybot [21], M-TRAN [22] or Superbot [23]. This kind of architecture has shown its versatility in several tasks such as carrying or handling payloads, climbing staircases or ropes or locomotion in long tests or in sandy terrains [24&#x02013;26]. In addition, some of them were designed specifically for dynamic and unstructured environments. This is the case of the Superbot system, which was developed for unsupervised operation in real environments, resisting abrasion and physical impacts, and including enhanced sensing and communications capabilities.</para>
<para>However, and despite the emphasis on real environments, these are mostly laboratory concept-testing approaches, with an emphasis on autonomous robots and self-reconfigurable systems rather than on industrial applicability. That is, these architectures were not designed to work in industrial settings and, consequently, their components and characteristics were not derived from an analysis of the needs and particularities of these environments. In fact, they are mostly based on the use of a single type of module to simplify their implementation. Additionally, these homogeneous architectures lead to the need for a large number of modules to perform even very simple tasks.</para>
<para>On the other hand, we can find another expression of modular robotics, which appears as a result of the addition of modularity to robot architectures. An example is modular manipulators which have mostly been studied for their use in industrial environments. These types of manipulators can be re-coupled to achieve, for example, larger load capacities or to extend their workspace. Most of them can obtain a representation of their morphology or configuration and automatically obtain their direct and inverse kinematics and dynamics. There are homogeneous architectures and there are also architectures with modules specialized in different movements but mainly with rotational joints. Nevertheless, they are usually aimed at static tasks [27, 28] and are much less versatile than real complete modular architectures. In this line, companies such as Schunk Intec Inc or Robotics Design Inc. are commercializing products inspired by this last approach. Both companies have developed modular robotic manipulators with mechanical connections, but these manipulators still need an external control unit configured with the arm&#x02019;s topology.</para>
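The configuration-to-kinematics idea described above can be illustrated with a short sketch. It is not taken from any of the cited systems: the module representation, link lengths and joint convention (one rotational joint per module, rotating about its local z-axis, followed by a rigid link along x) are illustrative assumptions. The point is that once a modular manipulator knows its own chain of modules, its forward kinematics follow mechanically by composing one homogeneous transform per module, with no per-robot derivation.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous transform: rotation about the module's joint axis (z)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans_x(length):
    """Homogeneous transform: translation along the module body (x)."""
    t = np.eye(4)
    t[0, 3] = length
    return t

def forward_kinematics(module_lengths, joint_angles):
    """Chain per-module transforms to obtain the end-effector pose.

    Each assembled module contributes one rotational joint followed by
    a rigid link, so the kinematics are read directly off the list of
    modules the robot detects in its own configuration.
    """
    pose = np.eye(4)
    for length, angle in zip(module_lengths, joint_angles):
        pose = pose @ rot_z(angle) @ trans_x(length)
    return pose

# Two identical 0.1 m modules with both joints at 90 degrees:
pose = forward_kinematics([0.1, 0.1], [np.pi / 2, np.pi / 2])
print(np.round(pose[:3, 3], 3))
```

Adding or removing a module only changes the two input lists, which is the practical payoff of self-describing modular manipulators.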
<para>Currently, new research lines have emerged proposing models that combine characteristics of the two areas discussed above. For example, some research groups have begun to propose complete, versatile, heterogeneous modular systems designed with industrial applications in mind. An example of this approach is the work of [29]. These authors propose a heterogeneous architecture, but in its development they concentrate on spherical actuators with 3 degrees of freedom and a small number of attachment faces in each module. Similarly, other authors have proposed the use of a modular methodology to build robots flexibly and quickly at low cost [30]. This architecture is based on two different rotational modules and several end-effectors such as grippers, suckers and wheels or feet. It has shown its strong potential in a wall-climbing robot application [31]. These approaches are quite interesting, but they still lack some of the features that would be desirable in a truly industrially usable heterogeneous modular architecture. For instance, the actuator modules in the first architecture are not independent; they need a power and communications module in order to work. The second system only allows serial chain topologies, which reduces its versatility, and in both architectures the robot is unable to recognize its own configuration.</para>
<para>In this chapter, we are going to address in a top-down manner the main features a modular robotic system or architecture needs to display in order to be adequate for operation in dynamic and unstructured industrial environments. From these features, we will propose a particular architecture and will implement a reduced-scale prototype of it. To provide an idea of its appropriateness and versatility, we will finally present some practical applications using the prototype modules.</para>
<para>The rest of the chapter is structured as follows: Section 2 is devoted to the definition of the main characteristics the proposed architecture should have to operate in industrial environments and what design decisions will be taken. Section 3 contains different solutions we have adopted through the presentation of a prototype implementation. Section 4 shows different configurations that the architecture can adopt. Finally, Sections 5 and 6 correspond to the introduction of this architecture in real environments and the main conclusions of the chapter, respectively.</para>
</section>
<section class="lev1" id="sec1-2">
<title>1.2 Main Characteristics for Industrial Operation and Design Decisions</title>
<para>Different aspects need to be kept in mind when deciding on a modular robotic architecture for operation in a set of industrial environments. It is necessary to determine the types of environments the architecture is designed for and their principal characteristics, the missions the robots will need to perform in these environments and the implications these have for the motion and actuation capabilities of the robots. Obviously, there is also a series of general characteristics that should be fulfilled when considering industrial operation in general. Consequently, we will start by identifying the main features a modular architecture should display in order to handle a general dynamic and unstructured industrial environment. This provides the requirements to be met by the architecture, so that we can then address the problem of providing a set of solutions that comply with them. An initial list of required features is the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Versatility: The system has to make it easy to build a large number of different configurations in order to adapt to specific tasks;</para></listitem>
<listitem>
<para>Fast deployment: The change of configuration or morphology has to be performed easily and in a short time so that robot operation is not disrupted;</para></listitem>
<listitem>
<para>Fault tolerance: In case of the total failure of a module, the robot has to be able to continue operating, minimizing the effects of this loss;</para></listitem>
<listitem>
<para>Robustness: The modules have to be robust enough to work in dirty environments and to resist external forces;</para></listitem>
<listitem>
<para>Reduced cost: The system has to be cheap in terms of manufacturing and operating costs to achieve an economically feasible solution;</para></listitem>
<listitem>
<para>Scalability: The system has to be able to operate with a large number of modules. In fact, limits on the number of modules should be avoided.</para></listitem></itemizedlist>
<para>To fulfil these requirements, a series of decisions were made. Firstly, the new architecture will be based on a modular chain architecture made up of heterogeneous modules. This type of architecture was selected because it is the general architecture that maximizes versatility. Using homogeneous modules is the most common option in modular systems [15, 16, 21&#x02013;23] because it facilitates module reuse. However, it also limits the range of possible configurations and makes the control of the robot much more complex. In the types of tasks we are considering here, there are several situations that would require a very simple module (e.g., a linear displacement actuator) but that would be very difficult (requiring a complex morphology), or in some cases even impossible, to handle using any of the homogeneous architectures presented. Thus, for the sake of flexibility and versatility, we have chosen to use a set of heterogeneous modules (modules specialized for each type of movement). This solution makes it easier for the resulting robots to perform complex movements, as complex kinematic chains can be easily built by joining a small set of different types of modules. Moreover, each module was designed to perform a single basic movement, that is, only one degree of freedom is allowed. This permits using simple mechanisms within the modules, which increases the robustness of the system and reduces the operating and manufacturing costs.</para>
<para>Having decided on the nature of the architecture, the problem is now to determine the ideal set of modules, that is, the smallest set that covers all possible tasks in a domain. In addition, the number of different types of modules needs to be low in order to meet the scalability and reduced-cost requirements. To do this, we followed a top-down design strategy: we studied some typical unstructured industrial environments (shipyards) and defined a set of general missions that needed automation. These missions were then subdivided into tasks, and these into the necessary operations or sub-tasks. From these we deduced the kinematic pairs and, finally, a simple set of actuator and end-effector modules that covers the whole domain. This approach differentiates the architecture presented here from other systems, which are usually designed with a bottom-up strategy (the modules are designed first and the authors then try to figure out how they can be applied).</para>
<para>We have only considered five general types of modules in the architecture:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Actuators: Modules with motors to generate the robot&#x02019;s motions;</para></listitem>
<listitem>
<para>Effectors: Modules to interact with the environment such as magnets, suckers or grippers;</para></listitem>
<listitem>
<para>Expansion: Modules that increase computational capabilities, memory or autonomy through batteries;</para></listitem>
<listitem>
<para>Sensors: Modules to measure and obtain data from the environment such as cameras and infrared or ultrasonic sensors;</para></listitem>
<listitem>
<para>Linkers: Modules used to join other modules mechanically.</para></listitem></itemizedlist>
<para>The architecture incorporates these five types of modules, but in this work, we have focused only on the actuator modules. They are the ones around which the morphological aspects of the robots gravitate, and we only employ other modules when strictly necessary to show application examples. Each actuator module includes a processing unit, one motor, a battery, the means to communicate with other modules and the sensors necessary to control its motions. This self-containment permits the fast deployment of functional robots and increases their versatility compared to systems that require external control units.</para>
<fig id="F1-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-1">Figure <xref linkend="F1-1" remap="1.1"/></link></label>
<caption><para>Diagram of the selected missions, tasks and sub-tasks considered, and the required actuators and effectors.</para></caption>
<graphic xlink:href="graphics/ch01_fig1.jpg"/>
</fig>
<para>The process followed to decide on the different actuator modules corresponds with a top-down design process, as presented in <link linkend="F1-1">Figure <xref linkend="F1-1" remap="1.1"/></link>. As a first step, we have considered three basic kinds of general missions the modular robot could accomplish: surface, linear and static missions (top layer). Surface missions are those related to tasks requiring covering some kind of surface (like cleaning a tank). Linear missions are those implying a linear displacement (like weld inspection), and static missions are those where the robotic unit has a fixed position (like an industrial manipulator). The next layer shows the set of particular tasks we consider necessary according to the previous types of missions, such as grit-blasting, tank cleaning, etc. The sub-task layer represents the low-level operations the modular system must carry out to accomplish the tasks of the previous layer. The next layer represents the kinematic pairs that can be used to perform all the sub-tasks of the previous layer. As mentioned above, these pairs have only one degree of freedom. In this case, we have chosen only two kinds of kinematic pairs: prismatic and revolute joints. Nevertheless, each joint was implemented in two different modules in order to specialize the modules for different motion primitives. For the prismatic joint, we have defined a telescopic module with a contraction/expansion motion and a slider module with a linear motion along its structure. The revolute joint also leads to two specialized modules: a rotational module, where the rotation axis goes through the two parts of the module, as in wheels or pulleys, and a hinge module. Finally, in the last layer we can see five examples of different effector modules.</para>
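<para>The layered decomposition just described can be sketched as a simple data structure. This is only an illustration of the mapping from missions down to modules; the mission and module names follow the text, while the task lists are abbreviated examples, not the complete sets of Figure 1.1.</para>

```python
# Sketch of the top-down decomposition described in the text
# (mission -> tasks -> kinematic pairs -> actuator modules).
# Task lists are illustrative examples from the chapter, not exhaustive.

MISSIONS = {
    "surface": ["tank cleaning", "grit-blasting"],   # cover a surface
    "linear":  ["weld inspection"],                  # follow a line
    "static":  ["manipulation"],                     # fixed-base work
}

# Only two one-degree-of-freedom kinematic pairs are used,
# each implemented by two specialized modules.
KINEMATIC_PAIRS = {
    "prismatic": ["telescopic", "slider"],
    "revolute":  ["rotational", "hinge"],
}

def modules_for_pair(pair):
    """Return the actuator modules implementing a kinematic pair."""
    return KINEMATIC_PAIRS[pair]

print(modules_for_pair("revolute"))  # ['rotational', 'hinge']
```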
<para>Once the actuator modules have been defined, we have to specify the shape or morphology and the connecting faces of each module. Again, to increase the versatility of the architecture, each module has been endowed with a large number of attachment faces. This also reduces the number of mechanical adapters needed to build different structures. The attachment faces are distributed over cubic nodes, or connection bays, within each module. This solution allows creating complex configurations, even closed chains with perpendicular modules, again increasing the versatility of the architecture.</para>
<para>These mechanical connections have to be easily operated in order to allow for the speedy deployment of different configurations. To this end, each attachment face has been provided with mechanisms for transmitting energy and communications between modules in order to avoid external wires. We have also included mechanisms (proprioceptors) that allow the robot to know its morphology or configuration, that is, what module is attached to what face. This last feature is important because it allows the robot to calculate its direct and inverse kinematics and dynamics in order to control its motion in response to high-level commands from an operator.</para>
<para>The robots developed have to be connected to an external power supply with a single cable that guarantees the energy needed by all the actuators, effectors and sensors. The energy is shared among the modules to avoid wires from module to module. In addition, each module contains a small battery to prevent failures due to a sudden loss of power. These batteries, combined with the energy bus between the modules, allow the robot to place itself in a secure state, maximizing the fault tolerance and the robustness of the system.</para>
<para>Finally, for the sake of robustness, we decided that the communications between modules should allow three different communication paths: a fast and global channel of communications between all the modules that make up a robot, a local channel of communications between two attached modules and a global and wireless communication method. These three redundant channels allow efficient and redundant communications, even between modules that are not physically connected or when a module in the communications path has failed.</para>
<para>In summary, the general structure of a heterogeneous modular robotic architecture has been obtained from the set of requirements imposed by operation in an industrial environment and the tasks the robots must perform within it. Given the complexity of the shipyard environments on which the design was based, the design decisions that were made have led to an architecture that is quite versatile and adequate for many other tasks and environments. In the following section, we provide a more in-depth description of the components of the architecture and their characteristics as they were implemented for tests.</para>
</section>
<section class="lev1" id="sec1-3">
<title>1.3 Implementation of a Heterogeneous Modular Architecture Prototype</title>
<para>In the previous section, the main features and components of the developed architecture were presented. Here we describe the different actuator module solutions we have adopted through the presentation of a prototype implementation. Throughout this section, the design and morphology of the modules will be explained, as well as the different systems needed for them to operate, such as the energy supply system, communications, the control system, etc.</para>
<fig id="F1-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-2">Figure <xref linkend="F1-2" remap="1.2"/></link></label>
<caption><para>Different types of modules developed in this project: three effectors on the left part, a linker on the top, a slider on the right, and in the middle there is a rotational module, a hinge module and a telescopic module.</para></caption>
<graphic xlink:href="graphics/ch01_fig02.jpg"/>
</fig>
<para><link linkend="F1-2">Figure <xref linkend="F1-2" remap="1.2"/></link> displays some of the different types of modules that were developed: some of the effectors on the left, a linker at the top, a slider on the right and a rotational module, a telescopic module and a hinge in the center. The different modules (actuators, linkers and effectors) have been fully designed, and a prototype implementation has been built for each of them. They all comprise nodes built from milled glass-fiber printed circuit boards (PCBs). These parts are soldered together to achieve a solid but lightweight structure. Each module is characterized by having one or more nodes, which act as connection bays. The shape of the nodes varies depending on the type of module (e.g., it is a cube for the nodes of the slider and telescopic modules). All of the free sides of these nodes provide a connection mechanism that allows connecting them to other modules. The size of the nodes is 48 x 48 x 48 mm without the connection mechanism and 54 x 54 x 54 mm including the connectors.</para>
<section class="lev2" id="sec1-3-1">
<title>1.3.1 Actuator Modules</title>
<para>To develop the prototype of the architecture, four different types of actuator modules have been built in accordance with the main features of the architecture described in the previous section. The modules present only one degree of freedom each in order to increase robustness, and they provide different types of joints so that most of the kinematic chains used by real industrial robotic systems can be built easily. To this end, two linear actuators (the slider and telescopic modules) and two rotational actuators (the rotational and hinge modules) have been developed. Among the linear actuators, the slider module has a central node capable of a linear displacement between its end nodes; any other module can be connected to this central node. The telescopic module has only two nodes, and the distance between them can be modified.</para>
<para>The rotational modules, in turn, have two nodes and allow their relative rotation. These modules are differentiated by the position of the rotation shaft: whereas the rotation axis of the rotational module goes through the center of both nodes, in the hinge module it is placed at the union of the two nodes, perpendicular to the line connecting their centers. The main characteristics of the actuator modules are described in <link linkend="T1-1">Table <xref linkend="T1-1" remap="1.1"/></link>.</para>
<table-wrap position="float" id="T1-1">
<label><link linkend="T1-1">Table <xref linkend="T1-1" remap="1.1"/></link></label>
<caption><para>Actuator Modules</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="bottom" align="left"></td>
<td valign="bottom" align="left">Slider</td>
<td valign="bottom" align="left">Telescopic</td>
<td valign="bottom" align="left">Rotational</td>
<td valign="bottom" align="left">Hinge</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Type of movement</td>
<td valign="top" align="left">Linear</td>
<td valign="top" align="left">Linear</td>
<td valign="top" align="left">Rotational</td>
<td valign="top" align="left">Rotational</td>
</tr>
<tr>
<td valign="top" align="left">Stroke</td>
<td valign="top" align="left">189mm</td>
<td valign="top" align="left">98mm</td>
<td valign="top" align="left">360&#x00BA; (1 turn)</td>
<td valign="top" align="left">200&#x00BA;</td>
</tr>
<tr>
<td valign="top" align="left">N&#x00BA; nodes</td>
<td valign="top" align="left">3</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">2</td>
</tr>
<tr>
<td valign="top" align="left">N&#x00BA; connection faces per node</td>
<td valign="top" align="left">5&#x02013;4-5</td>
<td valign="top" align="left">5&#x02013;5</td>
<td valign="top" align="left">5&#x02013;5</td>
<td valign="top" align="left">1&#x02013;1</td>
</tr>
<tr>
<td valign="top" align="left">Weight</td>
<td valign="top" align="left">360g</td>
<td valign="top" align="left">345g</td>
<td valign="top" align="left">250g</td>
<td valign="top" align="left">140g</td></tr>
</tbody>
</table>
</table-wrap>
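<para>The figures in Table 1.1 can be collected in a small catalogue, for example to estimate the mass of a configuration before it is assembled. The following sketch uses the table's values; the field and function names are ours, not part of the architecture's software.</para>

```python
from dataclasses import dataclass

@dataclass
class ActuatorSpec:
    movement: str        # "linear" or "rotational"
    stroke: str          # travel (mm) or angular range
    nodes: int           # number of connection bays
    faces_per_node: str  # connection faces on each node
    weight_g: int        # module weight in grams

# Values taken from Table 1.1.
CATALOGUE = {
    "slider":     ActuatorSpec("linear", "189 mm", 3, "5-4-5", 360),
    "telescopic": ActuatorSpec("linear", "98 mm", 2, "5-5", 345),
    "rotational": ActuatorSpec("rotational", "360 deg", 2, "5-5", 250),
    "hinge":      ActuatorSpec("rotational", "200 deg", 2, "1-1", 140),
}

def configuration_weight(module_names):
    """Total weight (g) of the listed actuator modules (effectors excluded)."""
    return sum(CATALOGUE[name].weight_g for name in module_names)

# E.g., the actuators of a spherical manipulator (Section 1.4.1):
print(configuration_weight(["rotational", "hinge", "telescopic"]))  # 735
```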
<section class="lev3" id="sec1-3-1-1">
<title>1.3.1.1 Slider module</title>
<para>This module has two end nodes that are joined together using three carbon fiber tubes and an additional node that slides along the tubes between the end nodes. The distance between the end nodes is 249 mm and the stroke of the slider node is 189 mm. One of the end nodes has a servo with a pulley, which moves a drive belt. The node on the other end has the return pulley and the slider node is fixed to the drive belt. The central node contains the electronics of the module, with power and data wires connecting it to one of the end nodes. There is also a mechanism that coils the wires to adapt them to the position of the slider node.</para>
</section>
<section class="lev3" id="sec1-3-1-2">
<title>1.3.1.2 Telescopic module</title>
<para>The telescopic module has two nodes and the distance between them can increase or decrease. Each node has two carbon fiber tubes attached to it. There is an ABS plastic part at the end of the tubes. These parts have two holes with plain bearings to fit the tubes of the other node. One node contains a servo with a drive pulley and the return pulley is in the ABS part of this node. The drive belt that runs in these pulleys is connected to the ABS part of the opposite node. The other node has the electronic board.</para>
</section>
<section class="lev3" id="sec1-3-1-3">
<title>1.3.1.3 Rotational module</title>
<para>This module has two nodes that can rotate with respect to each other. A low-friction washer between the nodes and a shaft prevent misalignments. One node carries a servo with a gear that engages another gear coupled to the shaft; the reduction ratio is 15:46. The servo is modified so that its potentiometer is mounted externally on a shaft that turns at a 1:2 ratio with respect to the main shaft. This configuration permits module rotations of a full 360&#x00BA;.</para>
</section>
<section class="lev3" id="sec1-3-1-4">
<title>1.3.1.4 Hinge module</title>
<para>This module does not have any connection bay in its structure, only one connection mechanism on each main block. A shaft joins the two main parts, which are built from milled PCBs and rotate relative to each other. The reduction ratio from the servo to the shaft is 1:3. The potentiometer of the servo is joined to the shaft to sense the real position of the module.</para>
</section>
</section>
<section class="lev2" id="sec1-3-2">
<title>1.3.2 Connection Mechanism</title>
<para>Different types of physical couplings between modules can be found in the literature, including magnetic couplings, mechanical couplings or even shape-memory wires. In this work, we have decided to use a mechanical connection due to the high force requirements of some tasks and to the power consumption of other options, such as magnetic couplings.</para>
<para>Several mechanical connectors have been developed for modular robots, but most designers focus their efforts on the mechanical aspects, paying less attention to power transmission and communications between modules. Here, we have designed a connection mechanism that joins two modules mechanically and, at the same time, transmits power and communications. Currently, the connector is manually operated, but its automation is under development.</para>
<para>The connector design can be seen in <link linkend="F1-2">Figure <xref linkend="F1-2" remap="1.2"/></link>. It has two main parts: a printed circuit board and a resin structure. The resin structure has four pins and four sockets that allow four connections at multiples of 90 degrees, as in [16] and [27]. Inside the resin structure there is a PCB that can rotate 15 degrees. The PCB has to be forced to fit inside the resin structure, so it remains fixed. When two connectors face each other, the rotation of the PCB of one connector blocks the pins of the other, and vice versa. The space between the pins of the two connectors equals the thickness of the two connector PCBs.</para>
<para>The PCB has four concentric copper tracks on its top side. These tracks are milled to create cantilevers, and a small quantity of solder is deposited at the end of each cantilever track. When two connectors are attached, this solder forces the cantilever tracks to bend, generating a force that keeps the electrical contacts in place even under vibrations.</para>
<para>Two of the tracks are wider than the other two because they are used to transmit power (GND and +24V). The other two tracks transmit data: a CAN bus and local asynchronous communications. The local asynchronous communications track of each connector is directly connected to the microcontroller, while the other tracks are shared among all the connectors of the module. To share these tracks within the node, we chose a surface-mount insulation displacement connector placed at the bottom of the PCB. This connector serially joins the PCBs of the node in a long string, and it allows two modules on the same robot to communicate even if a module in the path of the message fails.</para>
</section>
<section class="lev2" id="sec1-3-3">
<title>1.3.3 Energy</title>
<para>Requiring a wire or tether to obtain power or to communicate would limit the motions and the independence of the resulting robots. Therefore, one aim of this work is for the architecture to allow fully autonomous modular robots. This is achieved by installing batteries in each module; when the robot needs more power, expansion modules with additional batteries can be attached to it. However, in industrial environments the tools the robots need to use often require cables and hoses to feed them (welding equipment, sandblasting heads, etc.) and, for the sake of simplicity and of the length of time the robot can operate, it makes sense to use external power supplies. For this reason, the architecture also allows tethered operation when this is more convenient; the power line reaches just one of the modules and power is then internally distributed among the rest of the modules.</para>
<para>The modules developed in this work are powered at 24V, but each module has its own dc converter to reduce the voltage to 5V to power the servomotors and the different electronic systems embedded in each module.</para>
</section>
<section class="lev2" id="sec1-3-4">
<title>1.3.4 Sensors</title>
<para>All of the modules contain specific sensors to measure the position of their actuator. To this end, the linear modules have a quadrature encoder with 0.32 mm position accuracy. The rotational modules are servo controlled, so sensing the position of the module is not strictly necessary. However, in order to improve the precision of the system, we have added a circuit that senses the value of the potentiometer after applying a low-pass filter.</para>
<para>Furthermore, all the modules have an accelerometer to provide their spatial orientation. In addition, the local communications established in each attachment face permit identifying the type and the face of the module that is connected to it. This feature, combined with the accelerometer, allows determining the morphology and attitude of the robot without any external help.</para>
<para>All the above-mentioned sensors are particular to each individual module, that is, they only provide data about their own module. Nevertheless, to perform some tasks (welding, inspection, measuring, etc.), it is necessary to provide the robot with specific sensors, such as cameras or ultrasonic sensors. These specific sensor modules are attached to the actuator module that requires them. They are basically nodes (morphologically similar to the rest of the nodes in most modules) equipped with the particular sensor and the processing capabilities to acquire and communicate its data.</para>
</section>
<section class="lev2" id="sec1-3-5">
<title>1.3.5 Communications</title>
<para>One of the most difficult tasks in modular robotics is the design of the communication systems (local and global). On the one hand, they have to ensure adequate coordination between modules; on the other hand, they have to respond quickly to possible changes in the robot&#x02019;s morphology. That is, they have to adapt when a module is attached or detached, or even when a module fails. The robot&#x02019;s general morphology has to be detected through the aggregation of the values of the local sensing elements in each module, as well as the information they have about the modules they are linked to. For this, we use an asynchronous local communications line for inter-module identification (morphological proprioception).</para>
<para>A CAN bus is used for global communications; it allows performing tasks that require critical temporal coordination between remote modules. In addition, a MiWi wireless communication system is implemented as a redundant channel, used when there are isolated robotic units or when the CAN bus is saturated.</para>
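<para>The three redundant channels can be combined with a simple fallback policy. The sketch below is our own illustration of such a policy: the channel names follow the text, but the availability flags and the selection logic are hypothetical, not the prototype's actual arbitration scheme.</para>

```python
# Channel fallback sketch: prefer the local link for directly attached
# neighbours, then the CAN bus, and fall back to the MiWi wireless link
# when the bus is saturated or the destination cannot be reached by wire.

def pick_channel(is_neighbour, can_reachable, can_saturated):
    """Choose a communication channel given the link status flags."""
    if is_neighbour:
        return "local-async"      # direct face-to-face link
    if can_reachable and not can_saturated:
        return "can-bus"          # fast global wired channel
    return "miwi-wireless"        # redundant wireless channel

print(pick_channel(True, True, False))    # local-async
print(pick_channel(False, True, False))   # can-bus
print(pick_channel(False, True, True))    # miwi-wireless (bus saturated)
print(pick_channel(False, False, False))  # miwi-wireless (isolated unit)
```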
<para>Additionally, all the modules except the rotational one have a micro-USB connection for communicating with an external computer. This feature, together with a boot loader, allows us to load programs from a USB memory without using a dedicated microcontroller programmer. <link linkend="F1-3">Figure <xref linkend="F1-3" remap="1.3"/></link> shows the printed circuit board (PCB) of the slider module, containing all the communications elements.</para>
<fig id="F1-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-3">Figure <xref linkend="F1-3" remap="1.3"/></link></label>
<caption><para>Control board for the slider module and its main components.</para></caption>
<graphic xlink:href="graphics/ch01_fig3.jpg"/>
</fig>
</section>
<section class="lev2" id="sec1-3-6">
<title>1.3.6 Control</title>
<para>The control system is responsible for controlling and coordinating all the local tasks within each module, as well as the behaviour of the robot. To this end, each module carries its own electronics board with a micro-controller (PIC32MX575F512) and a DC/DC converter for the power supply. The micro-controller is responsible for the low-level tasks of the module: controlling the actuator, managing the communications stacks and reading the values of its sensors. As each actuator module has its own characteristics (number of connection faces, encoder type, etc.) and the space available inside the modules is very limited, we have developed a specific PCB for each kind of actuator module. As an example, <link linkend="F1-3">Figure <xref linkend="F1-3" remap="1.3"/></link> shows the top and bottom sides of the control board of the slider module.</para>
<para>Besides the low-level tasks, this solution permits choosing the type of control to be implemented: centralized or distributed. In a distributed control scheme, each module contributes to the final behaviour by controlling its own actions based on its sensors and on communications with other modules. In a centralized control scheme, one of the modules is in charge of controlling the actions of all the other modules, with the advantage of having redundant units in case of failure. In either case, all modules employ the CAN bus to coordinate their actions and to synchronize their clocks. Naturally, this architecture also allows any intermediate type of control scheme.</para>
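<para>The centralized variant can be sketched in a few lines: one master broadcasts setpoints over the shared bus and each module's controller accepts the frames addressed to it. The class and message layout below are hypothetical stand-ins, not the prototype's firmware API.</para>

```python
# Centralized control sketch: a master module broadcasts setpoints,
# every module sees each frame (as on a CAN bus) and keeps only the
# ones addressed to it. Low-level servo control stays inside the module.

class Module:
    def __init__(self, name):
        self.name = name
        self.setpoint = 0.0

    def on_can_message(self, msg):
        # Accept only the setpoint addressed to this module.
        if msg["dest"] == self.name:
            self.setpoint = msg["setpoint"]

class Master:
    """One module coordinates all the others (centralized scheme)."""
    def __init__(self, modules):
        self.modules = modules

    def command(self, targets):
        for name, value in targets.items():
            msg = {"dest": name, "setpoint": value}
            for m in self.modules:   # broadcast: every node sees the frame
                m.on_can_message(msg)

mods = [Module("rot1"), Module("hinge1"), Module("tel1")]
Master(mods).command({"hinge1": 45.0, "tel1": 60.0})
print([(m.name, m.setpoint) for m in mods])
```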
</section>
</section>
<section class="lev1" id="sec1-4">
<title>1.4 Some Configurations for Practical Applications</title>
<para>In this section, we will implement some example configurations using the architecture to show how easy it is to build different types of robots as well as how versatile the architecture is. For the sake of clarity and in order to show the benefits of a heterogeneous architecture, we will show simple configurations developed with only a few modules (videos of these configurations can be found in vimeo.com/afaina/ad-hoc-morphologies).</para>
<para>All the experiments were carried out with the same setup. First, the modules were manually assembled into the configuration to be tested, and we connected one cable for the power supply and a USB cable linking one module to a laptop. After powering up the system, the module that communicates with the laptop is selected as the master module. This master module uses the CAN bus to find the other connected modules. Then, it uses the asynchronous communications and the orientation of each module to discover the topology of the robot.</para>
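<para>The start-up sequence just described can be sketched as follows. All the primitives here (the bus object, module names, face labels) are hypothetical stand-ins used only to illustrate the two discovery steps: CAN enumeration, then face-by-face queries over the local asynchronous links.</para>

```python
# Start-up sketch: the module connected to the laptop acts as master,
# finds the other modules over the CAN bus, then uses the local
# asynchronous links to recover which module is attached to which face.

class FakeBus:
    """Stand-in for the CAN bus enumeration step."""
    def __init__(self, module_ids):
        self.module_ids = module_ids

    def enumerate(self):
        return list(self.module_ids)

def discover_topology(bus, face_links):
    """face_links: {module: {face: (neighbour, neighbour_face)}} as
    reported by each module's local asynchronous links."""
    topology = {}
    for mod in bus.enumerate():                  # step 1: CAN enumeration
        topology[mod] = face_links.get(mod, {})  # step 2: local queries
    return topology

# Hypothetical three-module chain: rotational -> hinge -> telescopic.
bus = FakeBus(["rot1", "hinge1", "tel1"])
links = {
    "rot1":   {"top": ("hinge1", "base")},
    "hinge1": {"base": ("rot1", "top"), "tip": ("tel1", "node0")},
    "tel1":   {"node0": ("hinge1", "tip")},
}
print(discover_topology(bus, links)["hinge1"])
```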
<section class="lev2" id="sec1-4-1">
<title>1.4.1 Manipulators</title>
<para>Manipulators are one of the most important pillars of industrial automation. Traditional manipulators present a rigid architecture, which complicates their use in different tasks, and they are too heavy and large to be transported through dynamic and unstructured environments. Modular manipulators, in contrast, can be very flexible, as they can be entirely reconfigured to adapt to a specific task, and their modules can be transported easily across complex environments and assembled directly at the workplace.</para>
<fig id="F1-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-4">Figure <xref linkend="F1-4" remap="1.4"/></link></label>
<caption><para>Spherical manipulator moving a load from one place to another.</para></caption>
<graphic xlink:href="graphics/ch01_fig04.jpg"/>
</fig>
<para>The choice of manipulator configuration is highly application dependent; it is mostly determined by the shape and size of the workspace, as well as by other factors such as the load to be lifted, the required speed, etc. For instance, the different types of modules in the architecture can be used to easily implement spherical or polar manipulators. These manipulators present a rotational joint at their base, a linear joint for the radial movements and another rotational joint to control their height. Thus, a spherical manipulator can be constructed using just five modules, as shown in the pictures of <link linkend="F1-4">Figure <xref linkend="F1-4" remap="1.4"/></link>. This robot has a magnetic effector to adhere to the metal surface; a rotational module, a hinge module and a prismatic module for motion; and a final magnetic effector to manipulate metal pieces. We can see how the robot is able to take an iron part using the electromagnet placed at the end of the manipulator and carry it to another place. The whole process takes around 10 seconds.</para>
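<para>The geometry of this spherical configuration can be illustrated with an idealized forward kinematics: the rotational module sets the azimuth, the hinge sets the elevation and the prismatic module sets the radial extension. Link offsets between the modules are neglected, so this is only a geometric sketch under our own simplifying assumptions, not the robot's actual kinematic model.</para>

```python
from math import cos, sin, radians

# Idealized forward kinematics of a spherical manipulator built from a
# rotational module (azimuth), a hinge module (elevation) and a
# prismatic module (radial extension). Offsets between modules are
# ignored, so the end-effector lies on a sphere of radius `radius_mm`.

def spherical_fk(azimuth_deg, elevation_deg, radius_mm):
    """End-effector position (x, y, z) in mm for the three joint values."""
    t1, t2 = radians(azimuth_deg), radians(elevation_deg)
    x = radius_mm * cos(t2) * cos(t1)
    y = radius_mm * cos(t2) * sin(t1)
    z = radius_mm * sin(t2)
    return x, y, z

# With zero azimuth and elevation, the effector lies on the x axis:
print(spherical_fk(0.0, 0.0, 200.0))   # (200.0, 0.0, 0.0)
# Rotating the base 90 deg swings it onto the y axis (approximately):
print(spherical_fk(90.0, 0.0, 200.0))
```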
<para>Another very common type of manipulator is the cartesian robot. Cartesian robots are constructed using only linear joints and are characterized by a cubic workspace. Their major advantages are the ease with which speed and position control mechanisms can be produced for them, their ability to move large loads and their great stability.</para>
<para>An example of a very simple and fully functional cartesian robot is displayed in the left image of <link linkend="F1-5">Figure <xref linkend="F1-5" remap="1.5"/></link>. It is constructed using only two linear modules and a telescopic module to implement its motions, two magnetic effectors to adhere to the metal surface and a smaller magnet that is used as the final effector. The two large magnets that adhere the robot to the metal surface provide better stability than in the previous spherical robot and reduce the vibrations of the small magnetic end-effector. In addition, a gantry-style manipulator can be implemented, as shown in the right image of <link linkend="F1-5">Figure <xref linkend="F1-5" remap="1.5"/></link>. This gantry manipulator is very stable, as it uses four magnets to adhere to the surface, and its structure allows high-accuracy positioning of the end-effector. Furthermore, this implementation can lift and move heavier loads, as it has two pairs of modules working in parallel.</para>
<fig id="F1-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-5">Figure <xref linkend="F1-5" remap="1.5"/></link></label>
<caption><para>Cartesian manipulators for static missions.</para></caption>
<graphic xlink:href="graphics/ch01_fig05.jpg"/>
</fig>
</section>
<section class="lev2" id="sec1-4-2">
<title>1.4.2 Climber and Walker Robots</title>
<para>The most appropriate configurations for carrying small loads or sensors, and for moving the robots themselves to the workplace, are the so-called climber or walker configurations. Modular robots should be able to reach hard-to-access places and, more importantly, their architecture should allow reconfiguration into morphologies appropriate for moving through different types of terrain, through tunnels of different sizes or over obstacles. This reconfigurability allows reaching and working in areas where it would be impossible for other kinds of robots to operate. Consequently, obtaining simple modular configurations that allow these walking or climbing operations is important, and in this section we describe three such configurations built with our architecture.</para>
<para>One of the most prototypical configurations in modular robotics is a serial chain of hinge modules. It is called a worm configuration and can be employed for inspection tasks inside pipes using a camera, or to pass through narrow passages. <link linkend="F1-6">Figure <xref linkend="F1-6" remap="1.6"/></link> shows that a working worm-type robot can be achieved using two hinge modules of our architecture. The whole sequence takes around 8 seconds, but the robot&#x02019;s speed could be increased by using a worm configuration with more modules.</para>
<fig id="F1-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-6">Figure <xref linkend="F1-6" remap="1.6"/></link></label>
<caption><para>A snake robot that can inspect the inside of a pipe.</para></caption>
<graphic xlink:href="graphics/ch01_fig06.jpg"/>
</fig>
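<para>Worm-type locomotion of a hinge chain is commonly generated by propagating a phase-shifted sine wave along the joints. The sketch below illustrates the idea; the amplitude, frequency and phase lag are assumed values, not the ones used by the two-module prototype.</para>

```python
import math

def worm_gait(n_modules, t, amplitude=0.6, freq=0.5, phase_lag=math.pi / 2):
    """Target joint angles (rad) for a chain of hinge modules at time t.

    Each module follows the same sine wave, delayed by a fixed phase lag
    per module, so a bending wave travels down the body and pushes the
    robot forward.
    """
    return [amplitude * math.sin(2 * math.pi * freq * t + i * phase_lag)
            for i in range(n_modules)]

# Two-module chain, sampled at t = 0 s:
print(worm_gait(2, 0.0))  # -> [0.0, 0.6]
```

<para>Adding modules lets the wave complete more of a period along the body, which is one reason a longer chain can move faster, as noted above.</para>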
<para>Another example of how a functional climbing robot can be constructed with just a few modules of this architecture is the linear wall climber. This robot consists of a slider module for motion combined with two magnetic effectors to stick to the metal surface. This simple robot, which is displayed in <link linkend="F1-7">Figure <xref linkend="F1-7" remap="1.7"/></link> (left), can be used for tasks like measuring ship rib thickness or inspecting a linear weld.</para>
<fig id="F1-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-7">Figure <xref linkend="F1-7" remap="1.7"/></link></label>
<caption><para>Climber and walker robots for linear and surface missions.</para></caption>
<graphic xlink:href="graphics/ch01_fig07.jpg"/>
</fig>
<para>Obviously, the linear climber is unable to avoid obstacles or to turn. One way to achieve configurations with greater capabilities is therefore to use a few more modules. A wall climber robot is shown in <link linkend="F1-7">Figure <xref linkend="F1-7" remap="1.7"/></link> (right). It is constructed by combining two slider modules, each with two magnetic effectors to adhere to the metal surface, joined by a linear module and a hinge module. This configuration allows the robot to move and turn, making it useful for surface inspection tasks performed with an ultrasonic sensor or other final effectors.</para>
<para>More complex configurations with better locomotion capabilities can be created using other sets of modules. For example, a well-known way to move through an environment is walking, which also allows stepping over small obstacles or irregularities. A very simple implementation of a walking robot is shown in <link linkend="F1-8">Figure <xref linkend="F1-8" remap="1.8"/></link>. This configuration is made up of two hinge modules, each with a magnetic effector, joined together by a rotational module. This biped robot is capable of walking over irregular surfaces, stepping over small obstacles and even moving from a horizontal to a slanted surface.</para>
<fig id="F1-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-8">Figure <xref linkend="F1-8" remap="1.8"/></link></label>
<caption><para>A biped robot able to step over obstacles.</para></caption>
<graphic xlink:href="graphics/ch01_fig08.jpg"/>
</fig>
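<para>The walking cycle of such a biped can be described as a short, endlessly repeating sequence of module commands. The stride below is a hypothetical sketch: the command names, module names and ordering are our assumptions, not the prototype&#x02019;s actual control sequence.</para>

```python
from itertools import cycle, islice

# One stride of the two-hinge biped: free the swing foot, lift it,
# swing it forward with the rotational module, plant it, then exchange
# the roles of the two legs.  All names are illustrative.
STRIDE = [
    ("release", "swing_foot_magnet"),
    ("lift",    "swing_hinge"),
    ("swing",   "rotation_module"),
    ("lower",   "swing_hinge"),
    ("attach",  "swing_foot_magnet"),
    ("swap",    "support_leg"),
]

def gait(n_commands):
    """Return the first n_commands of the endlessly repeating gait."""
    return list(islice(cycle(STRIDE), n_commands))

print(gait(7)[-1])  # -> ('release', 'swing_foot_magnet')
```

<para>Keeping one magnet attached at all times is what lets the robot transfer between a horizontal and a slanted ferromagnetic surface.</para>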
</section>
</section>
<section class="lev1" id="sec1-5">
<title>1.5 Towards Industrial Applications</title>
<para>In this work, we have analyzed the main features that a modular architecture should display in order to handle a general dynamic and unstructured industrial environment: versatility, fast deployment, fault tolerance, robustness, reduced cost and scalability. Current commercial modular systems have achieved good fault tolerance, robustness and reduced cost, but they still lack the versatility to operate in dynamic industrial environments, and their deployment requires at least several hours.</para>
<para>Here, we have developed a new modular architecture designed with these dynamic environments in mind. An initial analysis has shown that some important features for an architecture of this type are that it should be a heterogeneous modular architecture with a high number of standardized connection faces, different communication channels, common power buses, autonomous and independent control for each module, and the robot&#x02019;s ability to discover its own morphology. In order to test the architecture on useful tasks, we have implemented several modular prototypes. The results show that complex robots for specific tasks can be deployed in a few minutes and easily controlled through a GUI on a laptop. Furthermore, different configurations can be deployed for similar tasks, and the stability and accuracy of the robot&#x02019;s end-effector can be increased by using parallel configurations.</para>
<para>An industrial implementation of this architecture is still at the development stage, but it will allow working reliably in dynamic and unstructured environments. It will have the same features as our architecture, but with an industrially oriented implementation. The main changes will affect the robustness of the modules and the connectors. First, the modules will be able to support the loads and moments generated by the most typical configurations. In addition, they will be ruggedized to work in real environments, which may present dust or humidity. Regarding the connectors, they will be able to support high loads and, at the same time, they will allow the fast deployment of robot configurations. A connector with these characteristics can be found in [32]; additionally, however, our connectors will have to distribute the power and communication buses. As the robots have to work in environments with a high presence of ferromagnetic material, such as shipyards, we cannot use magnetometer readings to calculate the relative orientation of a module. Therefore, we will include a sensor to measure the relative orientation between modules. Finally, one important issue to address is the safety of the operators who work near the robots. Most industrial robots are designed to work in closed environments behind a shield for the workers&#x02019; protection. Our modules will therefore have to be compliant for safety reasons. This solution is currently used by some companies that sell compliant robots able to work in the presence of humans [33].</para>
</section>
<section class="lev1" id="sec1-6">
<title>1.6 Conclusions</title>
<para>A new heterogeneous modular robotic architecture has been presented that permits building robots quickly and easily. The design of the architecture is based on the main features that, in a top-down fashion, we consider a modular robotic system must have in order to work in industrial environments. A prototype implementation of the architecture was created by building a basic set of modules that allows the construction of different types of robots. The modules provide autonomous processing and control, one-degree-of-freedom actuation and a set of communication capabilities so that, through their cooperation, different functional robot structures can be achieved. To demonstrate the versatility of the architecture, a set of robots was built and tested in simple operations such as manipulation, climbing and walking. Obviously, this prototype implementation is not designed to work in real industrial environments. Nevertheless, the high level of flexibility achieved with very few modules shows that this approach is very promising. We are now addressing the implementation of the architecture in more rugged modules that will allow testing in realistic environments.</para>
</section>
<section class="lev1" id="sec1-7">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>C. Fernandez-Andres, A. Iborra, B. Alvarez, J. Pastor, P. Sanchez, J. Fernandez-Merono and N. Ortega, &#x02018;Ship shape in Europe: cooperative robots in the ship repair industry&#x02019;, Robotics &#x00026; Automation Magazine, IEEE, vol. 12, no. 3, pp. 65&#x02013;77, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Fernandez-Andres%2C+A%2E+Iborra%2C+B%2E+Alvarez%2C+J%2E+Pastor%2C+P%2E+Sanchez%2C+J%2E+Fernandez-Merono+and+N%2E+Ortega%2C+%27Ship+shape+in+Europe%3A+cooperative+robots+in+the+ship+repair+industry%27%2C+Robotics+%26+Automation+Magazine%2C+IEEE%2C+vol%2E+12%2C+no%2E+3%2C+pp%2E+65-77%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Souto, A. Fai&#x00F1;a, A. Deibe, F. Lopez-Pe&#x00F1;a and R. J. Duro, &#x02018;A robot for the unsupervised grit-blasting of ship hulls&#x02019;, International Journal of Advanced Robotic Systems, vol. 9, no. 82, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Souto%2C+A%2E+Fai%F1a%2C+A%2E+Deibe%2C+F%2E+Lopez-Pe%F1a+and+R%2E+J%2E+Duro%2C+%27A+robot+for+the+unsupervised+grit-blasting+of+ship+hulls%27%2C+International+Journal+of+Advanced+Robotic+Systems%2C+vol%2E+9%2C+no%2E+82%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. de Santos, M. Armada and M. Jimenez, &#x02018;Ship building with rower&#x02019;, IEEE Robotics &#x00026; Automation Magazine, vol. 7, no. 4, pp. 35&#x02013;43, 2000. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+de+Santos%2C+M%2E+Armada+and+M%2E+Jimenez%2C+%27Ship+building+with+rower%27%2C+IEEE+Robotics+%26+Automation+Magazine%2C+vol%2E+7%2C+no%2E+4%2C+pp%2E+35-43%2C+2000%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>B. Naticchia, A. Giretti and A. Carbonari, &#x02018;Set up of a robotized system for interior wall painting&#x02019;, in Proceedings of the 23rd International Symposium on Automation and Robotics in Construction (ISARC), pp. 3&#x02013;5, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=B%2E+Naticchia%2C+A%2E+Giretti+and+A%2E+Carbonari%2C+%27Set+up+of+a+robotized+system+for+interior+wall+painting%27%2C+in+Proceedings+of+the+23rd+International+Symposium+on+Automation+and+Robotics+in+Construction+%28ISARC%29%2C+pp%2E+3-5%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Kim, M. Jung, Y. Cho, J. Lee and U. Jung, &#x02018;Conceptual design and feasibility analyses of a robotic system for automated exterior wall painting&#x02019;, International Journal of Advanced Robotic Systems, vol. 4, no. 4, pp. 417&#x02013;430, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Kim%2C+M%2E+Jung%2C+Y%2E+Cho%2C+J%2E+Lee+and+U%2E+Jung%2C+%27Conceptual+design+and+feasibility+analyses+of+a+robotic+system+for+automated+exterior+wall+painting%27%2C+International+Journal+of+Advanced+Robotic+Systems%2C+vol%2E+4%2C+no%2E+4%2C+pp%2E+417-430%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Yu, S. Lee, C. Han, K. Lee and S. Lee, &#x02018;Development of the curtain wall installation robot: Performance and efficiency tests at a construction site&#x02019;, Autonomous Robots, vol. 22, no. 3, pp. 281&#x02013;291, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Yu%2C+S%2E+Lee%2C+C%2E+Han%2C+K%2E+Lee+and+S%2E+Lee%2C+%27Development+of+the+curtain+wall+installation+robot%3A+Performance+and+efficiency+tests+at+a+construction+site%27%2C+Autonomous+Robots%2C+vol%2E+22%2C+no%2E+3%2C+pp%2E+281-291%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>P. Gonzalez de Santos, J. Estremera, E. Garcia and M. Armada, &#x02018;Power assist devices for installing plaster panels in construction&#x02019;, Automation in Construction, vol. 17, no. 4, pp. 459&#x02013;466, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=P%2E+Gonzalez+de+Santos%2C+J%2E+Estremera%2C+E%2E+Garcia+and+M%2E+Armada%2C+%27Power+assist+devices+for+installing+plaster+panels+in+construction%27%2C+Automation+in+Construction%2C+vol%2E+17%2C+no%2E+4%2C+pp%2E+459-466%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Balaguer, A. Gimenez and CM Abderrahim. &#x02018;Roma robots for inspection of steel based infrastructures&#x02019;, Industrial Robot: An International Journal, vol. 29, no. 3, pp. 246&#x02013;251, 2002. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Balaguer%2C+A%2E+Gimenez+and+CM+Abderrahim%2E+%27Roma+robots+for+inspection+of+steel+based+infrastructures%27%2C+Industrial+Robot%3A+An+International+Journal%2C+vol%2E+29%2C+no%2E+3%2C+pp%2E+246-251%2C+2002%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Shang, T. Sattar, S. Chen and B. Bridge. &#x02018;Design of a climbing robot for inspecting aircraft wings and fuselage&#x02019;, Industrial Robot: An International Journal, vol. 34, no. 6, pp. 495&#x02013;502, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Shang%2C+T%2E+Sattar%2C+S%2E+Chen+and+B%2E+Bridge%2E+%27Design+of+a+climbing+robot+for+inspecting+aircraft+wings+and+fuselage%27%2C+Industrial+Robot%3A+An+International+Journal%2C+vol%2E+34%2C+no%2E+6%2C+pp%2E+495-502%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>T. Yoshida, &#x02018;A short history of construction robots research &#x00026; development in a Japanese company&#x02019;, in Proceedings of the International Symposium on Automation and Robotics in Construction, 2006, pp. 188&#x02013;193. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=T%2E+Yoshida%2C+%27A+short+history+of+construction+robots+research+%26+development+in+a+Japanese+company%27%2C+in+Proceedings+of+the+International+Symposium+on+Automation+and+Robotics+in+Construction%2C+2006%2C+pp%2E+188-193%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. A. C. Parker, H. Zhang and C. R. Kube, &#x02018;Blind bulldozing. Multiple robot nest construction&#x02019;, In IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS 2003), vol. 2, pp. 2010&#x02013;2015, 2003. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+A%2E+C%2E+Parker%2C+H%2E+Zhang+and+C%2E+R%2E+Kube%2C+%27Blind+bulldozing%2E+Multiple+robot+nest+construction%27%2C+In+IEEE%2FRSJ+International+Conference+on+Intelligent+Robots+and+Systems%2C+%28IROS+2003%29%2C+vol%2E+2%2C+pp%2E+2010-2015%2C+2003%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>N. Correll and A. Martinoli, &#x02018;Multirobot inspection of industrial machinery&#x02019;, Robotics &#x00026; Automation Magazine, IEEE, vol. 16, no. 1, pp. 103&#x02013;112, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=N%2E+Correll+and+A%2E+Martinoli%2C+%27Multirobot+inspection+of+industrial+machinery%27%2C+Robotics+%26+Automation+Magazine%2C+IEEE%2C+vol%2E+16%2C+no%2E+1%2C+pp%2E+103-112%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Breitenmoser, F. T&#x00E2;che, G. Caprari, R. Siegwart and R. Moser, &#x02018;Magnebike: toward multi climbing robots for power plant inspection&#x02019;, In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: Industry track, pp. 1713&#x02013;1720. International Foundation for Autonomous Agents and Multiagent Systems, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Breitenmoser%2C+F%2E+T%E2che%2C+G%2E+Caprari%2C+R%2E+Siegwart+and+R%2E+Moser%2C+%27Magnebike%3A+toward+multi+climbing+robots+for+power+plant+inspection%27%2C+In+Proceedings+of+the+9th+International+Conference+on+Autonomous+Agents+and+Multiagent+Systems%3A+Industry+track%2C+pp%2E+1713-1720%2E+International+Foundation+for+Autonomous+Agents+and+Multiagent+Systems%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R. J. Duro, M. Gra&#x00F1;a and J. de Lope, &#x02018;On the potential contributions of hybrid intelligent approaches to multicomponent robotic system development&#x02019;, Information Sciences, vol. 180 no. 14, pp. 2635&#x02013;2648, 2010. Special Section on Hybrid Intelligent Algorithms and Applications. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2E+J%2E+Duro%2C+M%2E+Gra%F1a+and+J%2E+de+Lope%2C+%27On+the+potential+contributions+of+hybrid+intelligent+approaches+to+multicomponent+robotic+system+development%27%2C+Information+Sciences%2C+vol%2E+180+no%2E+14%2C+pp%2E+2635-2648%2C+2010%2E+Special+Section+on+Hybrid+Intelligent+Algorithms+and+Applications%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Yim, W. M. Shen, B. Salemi, D. Rus, M. Moll, H. Lipson et al., &#x02018;Modular self-reconfigurable robot systems&#x02019;, IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 43&#x02013;52, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Yim%2C+W%2E+M%2E+Shen%2C+B%2E+Salemi%2C+D%2E+Rus%2C+M%2E+Moll%2C+H%2E+Lipson+et+al%2E%2C+%27Modular+self-reconfigurable+robot+systems%27%2C+IEEE+Robotics+and+Automation+Magazine%2C+vol%2E+14%2C+no%2E+1%2C+pp%2E+43-52%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>K. Stoy, D. Brandt and D. J. Christensen, &#x02018;Self-reconfigurable robots: An Introduction&#x02019;, MIT Press, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+Stoy%2C+D%2E+Brandt+and+D%2E+J%2E+Christensen%2C+%27Self-reconfigurable+robots%3A+An+Introduction%27%2C+MIT+Press%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>T. Fukuda, S. Nakagawa, Y. Kawauchi and M. Buss, &#x02018;Self-organizing robots based on cell structures-cebot&#x02019;, In IEEE International Workshop on Intelligent Robots, pp. 145&#x02013;150, 1988. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=T%2E+Fukuda%2C+S%2E+Nakagawa%2C+Y%2E+Kawauchi+and+M%2E+Buss%2C+%27Self-organizing+robots+based+on+cell+structures-cebot%27%2C+In+IEEE+International+Workshop+on+Intelligent+Robots%2C+pp%2E+145-150%2C+1988%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Kawakami, A. Torii, K. Motomura and S. Hirose, &#x02018;Smc rover: planetary rover with transformable wheels&#x02019;, In SICE 2002. Proceedings of the 41st SICE Annual Conference, vol. 1, pp. 157&#x02013;162 vol.1, 2002. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Kawakami%2C+A%2E+Torii%2C+K%2E+Motomura+and+S%2E+Hirose%2C+%27Smc+rover%3A+planetary+rover+with+transformable+wheels%27%2C+In+SICE+2002%2E+Proceedings+of+the+41st+SICE+Annual+Conference%2C+vol%2E+1%2C+pp%2E+157-162+vol%2E1%2C+2002%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. W. Jorgensen, E. H. Ostergaard and H. H. Lund, &#x02018;Modular atron: Modules for a self-reconfigurable robot&#x02019;, In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2068&#x02013;2073, 2004. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+W%2E+Jorgensen%2C+E%2E+H%2E+Ostergaard+and+H%2E+H%2E+Lund%2C+%27Modular+atron%3A+Modules+for+a+self-reconfigurable+robot%27%2C+In+IEEE%2FRSJ+International+Conference+on+Intelligent+Robots+and+Systems%2C+pp%2E+2068-2073%2C+2004%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. J. Hamlin and A. C. Sanderson, &#x02018;Tetrobot modular robotics: prototype and experiments&#x02019;. In Intelligent Robots and Systems &#x02019;96, IROS 96, Proceedings of the 1996 IEEE/RSJ International Conference on, vol. 2, pp. 390&#x02013;395, 1996. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+J%2E+Hamlin+and+A%2E+C%2E+Sanderson%2C+%27Tetrobot+modular+robotics%3A+prototype+and+experiments%27%2E+In+Intelligent+Robots+and+Systems+%2796%2C+IROS+96%2C+Proceedings+of+the+1996+IEEE%2FRSJ+International+Conference+on%2C+vol%2E+2%2C+pp%2E+390-395%2C+1996%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Yim, D. Duff and K. Roufas, &#x02018;Polybot: a modular reconfigurable robot&#x02019;, in IEEE International Conference on Robotics and Automation, vol. 1. IEEE; pp. 514&#x02013;520, 2000. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Yim%2C+D%2E+Duff+and+K%2E+Roufas%2C+%27Polybot%3A+a+modular+reconfigurable+robot%27%2C+in+IEEE+International+Conference+on+Robotics+and+Automation%2C+vol%2E+1%2E+IEEE%3B+pp%2E+514-520%2C+2000%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Murata, E. Yoshida, A. Kamimura, H. Kurokawa, K. Tomita and S. Kokaji, &#x02018;M-tran: Self-reconfigurable modular robotic system&#x02019;, IEEE/ASME transactions on mechatronics, vol. 7, no. 4, pp. 431&#x02013;441, 2002. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Murata%2C+E%2E+Yoshida%2C+A%2E+Kamimura%2C+H%2E+Kurokawa%2C+K%2E+Tomita+and+S%2E+Kokaji%2C+%27M-tran%3A+Self-reconfigurable+modular+robotic+system%27%2C+IEEE%2FASME+transactions+on+mechatronics%2C+vol%2E+7%2C+no%2E+4%2C+pp%2E+431-441%2C+2002%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>B. Salemi, M. Moll and W. Shen, &#x02018;Superbot: A deployable, multifunctional, and modular self-reconfigurable robotic system&#x02019;, in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3636&#x02013;3641, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=B%2E+Salemi%2C+M%2E+Moll+and+W%2E+Shen%2C+%27Superbot%3A+A+deployable%2C+multifunctional%2C+and+modular+self-reconfigurable+robotic+system%27%2C+in+IEEE%2FRSJ+International+Conference+on+Intelligent+Robots+and+Systems+%28IROS%29%2C+pp%2E+3636-3641%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Brandt, D. Christensen and H. Lund, &#x02018;Atron robots: Versatility from self-reconfigurable modules&#x02019;, in International Conference on Mechatronics and Automation, 2007. (ICMA), pp. 26&#x02013;32, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Brandt%2C+D%2E+Christensen+and+H%2E+Lund%2C+%27Atron+robots%3A+Versatility+from+self-reconfigurable+modules%27%2C+in+International+Conference+on+Mechatronics+and+Automation%2C+2007%2E+%28ICMA%29%2C+pp%2E+26-32%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>N. Ranasinghe, J. Everist and W.-M. Shen, &#x02018;Modular robot climbers&#x02019;, in Workshop on Self-Reconfigurable Robots, Systems &#x00026; Applications in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=N%2E+Ranasinghe%2C+J%2E+Everist+and+W%2E-M%2E+Shen%2C+%27Modular+robot+climbers%27%2C+in+Workshop+on+Self-Reconfigurable+Robots%2C+Systems+%26+Applications+in+IEEE%2FRSJ+International+Conference+on+Intelligent+Robots+and+Systems+%28IROS%29%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. Hou, N. Ranasinghe, B. Salemi and W. Shen, &#x02018;Wheeled locomotion for payload carrying with modular robot&#x02019;, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1331&#x02013;1337, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+Hou%2C+N%2E+Ranasinghe%2C+B%2E+Salemi+and+W%2E+Shen%2C+%27Wheeled+locomotion+for+payload+carrying+with+modular+robot%27%2C+IEEE%2FRSJ+International+Conference+on+Intelligent+Robots+and+Systems+%28IROS%29%2C+pp%2E+1331-1337%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>T. Matsumaru, &#x02018;Design and control of the modular robot system: Tomms&#x02019;, in IEEE International Conference on Robotics and Automation (ICRA), pp. 2125&#x02013;2131, 1995. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=T%2E+Matsumaru%2C+%27Design+and+control+of+the+modular+robot+system%3A+Tomms%27%2C+in+IEEE+International+Conference+on+Robotics+and+Automation+%28ICRA%29%2C+pp%2E+2125-2131%2C+1995%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. J. Paredis, H. B. Brown and P. K. Khosla, &#x02018;A rapidly deployable manipulator system&#x02019;, in IEEE International Conference on Robotics and Automation, pp. 1434&#x02013;1439, 1996. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+J%2E+Paredis%2C+H%2E+B%2E+Brown+and+P%2E+K%2E+Khosla%2C+%27A+rapidly+deployable+manipulator+system%27%2C+in+IEEE+International+Conference+on+Robotics+and+Automation%2C+pp%2E+1434-1439%2C+1996%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Baca, M. Ferre and R. Aracil, &#x02018;A heterogeneous modular robotic design for fast response to a diversity of tasks&#x02019;, Robotics and Autonomous Systems, vol. 60, no. 4, pp. 522&#x02013;531, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Baca%2C+M%2E+Ferre+and+R%2E+Aracil%2C+%27A+heterogeneous+modular+robotic+design+for+fast+response+to+a+diversity+of+tasks%27%2C+Robotics+and+Autonomous+Systems%2C+vol%2E+60%2C+no%2E+4%2C+pp%2E+522-531%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Guan, L. Jiang, X. Zhang, H. Zhang and X. Zhou, &#x02018;Development of novel robots with modular methodology&#x02019;, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2385&#x02013;2390, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Guan%2C+L%2E+Jiang%2C+X%2E+Zhang%2C+H%2E+Zhang+and+X%2E+Zhou%2C+%27Development+of+novel+robots+with+modular+methodology%27%2C+IEEE%2FRSJ+International+Conference+on+Intelligent+Robots+and+Systems+%28IROS%29%2C+pp%2E+2385-2390%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Guan, H. Zhu, W. Wu, X. Zhou, Li Jiang, C. Cai et al., &#x02018;A Modular Biped Wall-Climbing Robot With High Mobility and Manipulating Function&#x02019;, IEEE/ASME Transactions on Mechatronics, vol. 18, no. 6, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Guan%2C+H%2E+Zhu%2C+W%2E+Wu%2C+X%2E+Zhou%2C+Li+Jiang%2C+C%2E+Cai+et+al%2E%2C+%27A+Modular+Biped+Wall-Climbing+Robot+With+High+Mobility+and+Manipulating+Function%27%2C+IEEE%2FASME+Transactions+on+Mechatronics%2C+vol%2E+18%2C+no%2E+6%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Nilsson, &#x02018;Connectors for self-reconfiguring robots&#x02019;, IEEE/ASME Transactions on Mechatronics, vol. 7, no. 4, pp. 473&#x02013;474, 2002. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Nilsson%2C+%27Connectors+for+self-reconfiguring+robots%27%2C+IEEE%2FASME+Transactions+on+Mechatronics%2C+vol%2E+7%2C+no%2E+4%2C+pp%2E+473-474%2C+2002%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>E. Guizzo and E. Ackerman, &#x02018;The rise of the robot worker&#x02019;, IEEE Spectrum, vol. 49, no. 10, pp. 34&#x02013;41, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=E%2E+Guizzo+and+E%2E+Ackerman%2C+%27The+rise+of+the+robot+worker%27%2C+IEEE+Spectrum%2C+vol%2E+49%2C+no%2E+10%2C+pp%2E+34-41%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch02" label="2" xreflabel="2">
<title>The Dynamic Characteristics of a Manipulator with Parallel Kinematic Structure Based on Experimental Data</title>
<para><emphasis role="strong">S. Osadchy<superscript><emphasis role="strong">1</emphasis></superscript>, V. Zozulya<superscript><emphasis role="strong">2</emphasis></superscript> and A. Timoshenko<superscript><emphasis role="strong">3</emphasis></superscript></emphasis></para>
<para><superscript>1,2</superscript>Faculty of Automation and Energy, Kirovograd National Technical<break/>University, Ukraine</para>
<para><superscript>3</superscript>Faculty of Automation and Energy, Kirovograd Flight Academy<break/>National Aviation University, Ukraine</para>
<para>Corresponding author: S. Osadchy &lt;srg2005@ukr.net&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>This chapter presents two identification techniques which the authors found most useful in examining the dynamic characteristics of a manipulator with a parallel kinematic structure as an object of control. These techniques emphasize a frequency-domain approach. If all input/output signals of an object can be measured, the first technique may be used for identification. When not all disturbances can be measured, the second identification technique may be used.</para>
<para><emphasis role="strong">Keywords:</emphasis> Manipulator with parallel kinematics, structural identification, control system</para>
</section>
<section class="lev1" id="sec2-1">
<title>2.1 Introduction</title>
<para>Mechanisms with parallel kinematics [1, 2] form the basis for the construction of single-stage and multi-stage manipulators. A single-stage manipulator consists of an immobile base, a mobile platform and six guide rods. Each rod can be represented as two semi-rods A<subscript>ij</subscript> and an active kinematic pair B<subscript>ij</subscript> (<link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link>).</para>
<fig id="F2-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link></label>
<caption><para>Kinematic diagram of single-section mechanism.</para></caption>
<graphic xlink:href="graphics/ch02_fig001.jpg"/>
</fig>
<para>We will consider two systems of coordinates: an inertial system O<subscript>0</subscript>X<subscript>0</subscript>Y<subscript>0</subscript>Z<subscript>0</subscript> with its origin at the center of the base O<subscript>0</subscript>, and a mobile system O<subscript>1</subscript>X<subscript>1</subscript>Y<subscript>1</subscript>Z<subscript>1</subscript> with its origin O<subscript>1</subscript> at the platform&#x02019;s center of mass. From <link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link> it is evident that such a mechanism consists of thirteen mobile links and eighteen kinematic pairs. That is why, in accordance with [2], the number of its possible motions equals six.</para>
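<para>The mobility figure quoted from [2] can be reproduced with the Kutzbach&#x02013;Gr&#x000FC;bler criterion, M = 6(n &#x02212; j) + &#x02211;f<subscript>i</subscript>, for n mobile links and j kinematic pairs with f<subscript>i</subscript> freedoms each. The per-pair freedoms used below (a spherical, a universal and an actuated prismatic pair in each rod) are an illustrative assumption; the chapter itself does not enumerate them.</para>

```python
def kutzbach_mobility(mobile_links, joint_dofs):
    """Kutzbach-Gruebler mobility criterion for a spatial mechanism:
    M = 6 * (n - j) + sum(f_i), with n mobile links and one entry in
    joint_dofs (its degrees of freedom f_i) per kinematic pair."""
    return 6 * (mobile_links - len(joint_dofs)) + sum(joint_dofs)

# Thirteen mobile links and eighteen pairs, as in Figure 2.1.  Assuming
# each of the six rods carries a spherical (3 DOF), a universal (2 DOF)
# and an actuated prismatic (1 DOF) pair -- an illustrative breakdown:
dofs = [3, 2, 1] * 6                # 18 pairs in total
print(kutzbach_mobility(13, dofs))  # -> 6
```

<para>With 36 total pair freedoms the criterion gives 6(13 &#x02212; 18) + 36 = 6, matching the six possible motions of the platform.</para>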
<para>Let us introduce the following definitions (<emphasis>j&#x02208;</emphasis>[1:6]): <emphasis>l<subscript>1j</subscript></emphasis> &#x02013; length of rod number <emphasis>j</emphasis>; <emphasis>M<subscript>x</subscript>,M<subscript>y</subscript>,M<subscript>z</subscript></emphasis> &#x02013; projections of the net resistance moment vector on the axes of the co-ordinate system <emphasis>O<subscript>0</subscript>X<subscript>0</subscript>Y<subscript>0</subscript>Z<subscript>0</subscript></emphasis>.</para>
<para>Obviously, as the lengths <emphasis>l</emphasis><subscript>1,<emphasis>j</emphasis></subscript> change, the co-ordinates of the platform&#x02019;s center of mass and the projections of the resistance moment vector change as well.</para>
<para>From the point of view of automatic control theory, the mechanism with parallel kinematics belongs to the class of mobile control objects with two multidimensional inputs (control signals and disturbances) and one output vector (platform co-ordinates).</para>
<fig id="F2-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link></label>
<caption><para>Block diagram of the mechanism with parallel kinematics.</para></caption>
<graphic xlink:href="graphics/ch02_fig002.jpg"/>
</fig>
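<para>The superposition implied by this block diagram can be illustrated with a minimal numerical sketch. The gain matrices below are purely illustrative placeholders, not the identified operators <emphasis>W</emphasis><subscript><emphasis>u</emphasis></subscript> and <emphasis>W</emphasis><subscript>&#x003C8;</subscript>; the sketch only shows how the control and disturbance channels add in the output:</para>

```python
import numpy as np

# Static-gain sketch of the two-input, one-output structure x = W_u u + W_psi psi.
# All numbers are illustrative, not the identified operators of the mechanism.
W_u = np.array([[0.2, 0.1, 0.0],
                [0.0, 0.3, 0.1],
                [0.1, 0.0, 0.2]])   # control channel
W_psi = 0.05 * np.eye(3)            # disturbance channel
u = np.array([1.0, 0.0, 0.0])       # control signal vector
psi = np.array([0.0, 1.0, 0.0])     # disturbance vector
x = W_u @ u + W_psi @ psi           # output: platform co-ordinates
```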
</section>
<section class="lev1" id="sec2-2">
<title>2.2 Purpose and Task of Research</title>
<para>The main purpose of this research is to construct, on the basis of experimental data, a mathematical model which characterizes the interrelation between the control signals, the disturbances and the co-ordinates of the platform&#x02019;s center of mass.</para>
<para>If one assembles the rod length changes into the control signal vector u<subscript>1</subscript>, the changes of the resistance moment projections into the disturbance vector &#x003C8; and the changes of the platform center of mass co-ordinates into the output vector x</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-1.jpg"/></para>
<para>then the block diagram of the mechanism with parallel kinematics can be represented as shown in <link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link>, where <emphasis>W</emphasis><subscript><emphasis>u</emphasis></subscript> is an operator which characterizes the influence of the control signal vector <emphasis>u</emphasis> on the output vector <emphasis>x</emphasis> and <emphasis>W</emphasis><subscript>&#x003C8;</subscript> is an operator which describes the influence of the disturbance vector <emphasis>&#x003C8;</emphasis> on the output vector <emphasis>x</emphasis>. In this case, in order to find the mathematical model, it is necessary to define these operators. If we want to find such operators based on experimental data, then two variants of the research task can be formulated.</para>
<para>The first variant applies if the components of vectors u<subscript>1</subscript>, x and &#x003C8; can be measured fully (the complete data). The second variant applies when only the components of vectors u<subscript>1</subscript> and x can be measured (the incomplete data).</para>
<para>So the research task on the dynamics of the mechanism with parallel kinematics can be formulated as follows: to find the transfer function matrices W<subscript><emphasis>u</emphasis></subscript>, W<subscript>&#x003C8;</subscript> and also to estimate the influence of vectors u<subscript>1</subscript> and &#x003C8; on vector x on the basis of the known complete or incomplete experimental data.</para>
<para>The solution of this problem has been found as a result of three stages:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>The development of algorithms for the structural identification of a multivariable dynamic object with the help of complete or incomplete data;</para>
<listitem>
<para>Collecting and processing experimental data about vectors <emphasis>u</emphasis><subscript>1</subscript><emphasis>, x</emphasis> and <emphasis>&#x003C8;</emphasis>;</para></listitem>
<listitem>
<para>The verification of the results of the structural identification.</para></listitem></itemizedlist>
</section>
<section class="lev1" id="sec2-3">
<title>2.3 Algorithm for the Structural Identification of the Multivariable Dynamic Object with the Help of the Complete Data</title>
<para>Let us suppose that the dynamics of the identification object is characterized by a transfer function matrix <emphasis>W<subscript>ob</subscript></emphasis> (<link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link>), which may have unstable poles. Suppose that, as a result of processing the regular components of vectors <emphasis>u</emphasis> and <emphasis>x</emphasis>, the Laplace transformations <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inu.jpg"/> and <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inx.jpg"/> are defined</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-2.jpg"/></para>
<para>Thus, the Laplace transformation of the output vector <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inx.jpg"/> contains the unstable poles of vector <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inu.jpg"/> and the unstable poles of matrix <emphasis>W<subscript>ob</subscript></emphasis>. Therefore, it is possible to remove from <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inx.jpg"/> [3] all the unstable poles which differ from the unstable poles of <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inu.jpg"/> and to define a diagonal polynomial matrix <emphasis>W<subscript>2</subscript></emphasis> such that</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-3.jpg"/></para>
<para>In this case, the interdependence between vectors <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/iny.jpg"/> and <emphasis>U<subscript>p</subscript></emphasis> is expressed with the help of equation</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-4.jpg"/></para>
<para>where <emphasis>F<subscript>1p</subscript></emphasis> is a transfer function matrix all the poles of which are located in the left half-plane (LHP) of the complex plane. It is equal to</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-5.jpg"/></para>
<para>Consequently, the identification problem consists in determining a physically realizable matrix F<emphasis><subscript>1p</subscript></emphasis> that minimizes the quality functional</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-6.jpg"/></para>
<para>where <emphasis>&#x1D700;</emphasis> is the identification error, which is equal to</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-7.jpg"/></para>
<para><emphasis>A</emphasis> is a positive definite polynomial weight matrix.</para>
<para>To solve this problem, expression (2.7) must be written in vector-matrix form</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-8.jpg"/></para>
<para>The Hermitian conjugated vector <emphasis>&#x1D700;<subscript>&#x02217;</subscript></emphasis> from Equation (2.6) is equal to</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-9.jpg"/></para>
<para>After introducing the expressions (2.8) and (2.9) into the Equation (2.6), the quality functional can be shown as follows:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-10.jpg"/></para>
<para>Thus, the problem of structural identification is reduced to the minimization of the functional (2.10) over the class of stable transfer function matrices F<emphasis><subscript>1p</subscript></emphasis>. This minimization has been carried out by applying the Wiener-Kolmogorov procedure. In accordance with this procedure [5], the first variation of the quality functional (2.10) has been defined as</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-11.jpg"/></para>
<para>where A<subscript>0</subscript> is a result of the factorization [4] of the matrix <emphasis>A</emphasis>, the determinant of which has zeros with negative real parts</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-12.jpg"/></para>
<para><emphasis>D</emphasis> is a fraction-rational matrix with singularities in the left half-plane (LHP), which is defined on the basis of the algorithms in articles [3, 4] from the following equation</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-13.jpg"/></para>
<para>where <emphasis>L</emphasis> is a singular matrix, each element of which is equal to one; the bottom index &#x0002A; designates the Hermitian conjugation operation; H<subscript>0</subscript>+H<subscript>+</subscript>+H<subscript>-</subscript> is a fraction-rational matrix which is equal to</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-14.jpg"/></para>
<para><emphasis>L<superscript>+</superscript></emphasis> is the pseudo-inverse of matrix <emphasis>L</emphasis> [5]; matrix <emphasis>H<subscript>0</subscript></emphasis> is the result of the division; <emphasis>H<subscript>+</subscript></emphasis> is a proper fractional-rational matrix which is analytic in the right half-plane (RHP); <emphasis>H<subscript>-</subscript></emphasis> is a proper fractional-rational matrix which is analytic in the LHP. In accordance with the chosen minimization procedure, a stable and physically realizable matrix <emphasis>F<subscript>1p</subscript></emphasis> which delivers a minimum to the functional (2.10) is equal to</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-15.jpg"/></para>
<para>If one takes into account the matrices <emphasis>W<subscript>2</subscript></emphasis> and <emphasis>F</emphasis><subscript>1p</subscript> from Equations (2.3) and (2.15), then the unknown object transfer function matrix <emphasis>W<subscript>ob</subscript></emphasis> can be identified with the help of the following expression</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-17.jpg"/></para>
<para>where <emphasis>W<subscript>-</subscript></emphasis> is a fraction-rational matrix with singularities in the RHP.</para>
<para>An algorithm for the structural identification of the multivariable dynamic object with an unstable part on the basis of the vectors u and x implies the implementation of the following operations:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Search for the matrix <emphasis>W<subscript>2</subscript></emphasis> as a result of the left-hand removal of the unstable poles from <emphasis>X<subscript>p</subscript></emphasis> which differ from the poles of <emphasis>U<subscript>p</subscript></emphasis>;</para></listitem>
<listitem>
<para>Factorization of the weight matrix <emphasis>A</emphasis> from (2.12);</para></listitem>
<listitem>
<para>Identification of the matrix <emphasis>D</emphasis>, analytic in the complex variable, from Equation (2.13);</para></listitem>
<listitem>
<para>Calculation of <emphasis>H<subscript>0</subscript> + H<subscript>+</subscript></emphasis> as a result of the separation (2.14);</para></listitem>
<listitem>
<para>Calculation of <emphasis>F<subscript>1p</subscript></emphasis> on the basis of the Equation (2.15);</para></listitem>
<listitem>
<para>Identification of <emphasis>W<subscript>ob2</subscript></emphasis> by the separation of the product (2.16).</para></listitem></itemizedlist>
<para>In this way, we have substantiated the algorithm for the structural identification of the multivariable dynamic object with the help of the complete experimental data.</para>
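<para>A drastically simplified, scalar analogue of identification from complete data can be sketched in Python. The first-order ARX model and all numeric values below are assumptions for illustration only; the chapter&#x02019;s actual procedure works with transfer function matrices and the Wiener-Kolmogorov minimization:</para>

```python
import numpy as np

def identify_arx(u, y):
    """Least-squares estimate of a first-order ARX model
    y[k+1] = a*y[k] + b*u[k] from complete input/output records."""
    Phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta                                   # [a, b]

# Simulate a known object, then recover its parameters from the records.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(499):
    y[k + 1] = a_true * y[k] + b_true * u[k]
a_hat, b_hat = identify_arx(u, y)
```

<para>In this noise-free setting the least-squares solution recovers the true parameters essentially exactly; with measurement noise the estimate only converges as the record length grows.</para>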
</section>
<section class="lev1" id="sec2-4">
<title>2.4 Algorithm for the Structural Identification of the Multivariable Dynamic Object with the Help of Incomplete Data</title>
<para>Let us suppose that the identification object dynamics is characterized by a system of ordinary differential equations with constant coefficients. The Fourier transformation of such a system, subject to zero initial conditions, can be shown as follows:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-18.jpg"/></para>
<para>where <emphasis>P</emphasis> and <emphasis>M</emphasis> are polynomial matrices of the appropriate dimensions; <emphasis>&#x003C8;</emphasis> is the Fourier image of a centered multivariable stationary random process with the unknown spectral densities matrix <emphasis>S<subscript>&#x003C8;&#x003C8;</subscript></emphasis>. Let us also assume that vectors <emphasis>u</emphasis> and <emphasis>x</emphasis> are centered multivariable stationary random processes with the matrices of the spectral and cross-spectral densities <emphasis>S<subscript>xx</subscript>, S<subscript>uu</subscript>, S<subscript>xu</subscript>, S<subscript>ux</subscript></emphasis> known as a result of the experimental data processing. It is considered that the random process <emphasis>&#x003C8;</emphasis> is formed by a filter with the transfer function matrix <emphasis>&#x003A8;</emphasis> and is equal to</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-19.jpg"/></para>
<para>where &#x00394; is the vector of unit &#x003B4;-correlated &#x0201C;white&#x0201D; noises.</para>
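<para>The shaping-filter idea of Equation (2.19) can be illustrated by a scalar sketch. The first-order filter and its coefficient below are illustrative assumptions; they merely show how unit white noise &#x00394; acquires the correlation structure of a disturbance &#x003C8;:</para>

```python
import numpy as np

# Scalar sketch of Eq. (2.19): a first-order shaping filter turns unit
# "white" noise delta into a correlated disturbance psi (a = 0.95 is an
# illustrative coefficient, not a value from the chapter).
rng = np.random.default_rng(1)
delta = rng.standard_normal(20000)
psi = np.zeros_like(delta)
a = 0.95
for k in range(1, len(delta)):
    psi[k] = a * psi[k - 1] + delta[k]
# The lag-1 autocorrelation of psi approaches the filter coefficient a.
r1 = np.corrcoef(psi[:-1], psi[1:])[0, 1]
```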
<para>If one takes into account expression (2.19), then Equation (2.18) can be rewritten as follows:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-20.jpg"/></para>
<para>and a transfer function matrix which must be identified can be defined as the expression</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-21.jpg"/></para>
<para>So, the Equation (2.20) can be simplified to the form</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-22.jpg"/></para>
<para>where y is an extended vector of the external influences</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-23.jpg"/></para>
<para>Thus, the identification problem can be formulated as follows. Using the records of the vectors <emphasis>x</emphasis> and <emphasis>y</emphasis>, choose the block matrix row <emphasis>&#x003D5;</emphasis> (2.21) that provides a minimum to the following quality functional</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-24.jpg"/></para>
<para>where <emphasis>J</emphasis> is equal to the sum of the variances of the elements of the identification error vector &#x1D700;</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-25.jpg"/></para>
<para><emphasis>S</emphasis>&#x02032;<subscript>&#x1D700;&#x1D700;</subscript> is a transposed matrix of the identification errors spectral densities</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-26.jpg"/></para>
<para><emphasis>S&#x02032;<subscript>xx</subscript>, S&#x02032;<subscript>uu</subscript>, S&#x02032;<subscript>yy</subscript></emphasis> are the transposed spectral density matrices of the vectors <emphasis>x</emphasis>, <emphasis>u</emphasis>, <emphasis>y</emphasis>; <emphasis>S&#x02032;<subscript>xy</subscript>, S&#x02032;<subscript>yx</subscript>, S&#x02032;<subscript>ux</subscript></emphasis> are the transposed cross-spectral density matrices between vectors <emphasis>x</emphasis> and <emphasis>y</emphasis>, <emphasis>y</emphasis> and <emphasis>x</emphasis>, <emphasis>u</emphasis> and <emphasis>x</emphasis>; <emphasis>S&#x02032;<subscript>&#x00394;x</subscript></emphasis> is a transposed cross-spectral density matrix which is found on the basis of Wiener&#x02019;s factorization [3] of the additional connection equation</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-29.jpg"/></para>
<para><emphasis>R</emphasis> is a positive definite polynomial weight matrix.</para>
<para>An algorithm for solving the stated problem, which is substantiated in [8] and defines the sought matrix <emphasis>&#x003D5;</emphasis> minimizing the functional (2.24), has the following form</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-30.jpg"/></para>
<para>in which the matrices <emphasis>R<subscript>0</subscript></emphasis> and <emphasis>D</emphasis> are the results of Wiener&#x02019;s factorization [3] of the matrices <emphasis>R</emphasis> and <emphasis>S&#x02032;<subscript>yy</subscript></emphasis> so that</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-31.jpg"/></para>
<para><emphasis>K<subscript>0</subscript>+K<subscript>+</subscript></emphasis> is a transfer function matrix with stable poles, which is defined by the separation of the right part of the following equation [7]</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-32.jpg"/></para>
<para>An algorithm for the structural identification of a multivariable dynamic object with the help of the stochastic stationary components of vectors <emphasis>u<subscript>1</subscript></emphasis> and <emphasis>x</emphasis> implies the following operations:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Search for the spectral and cross-spectral densities matrices <emphasis>S<subscript>xx</subscript>, S<subscript>uu</subscript>, S<subscript>yy</subscript>, S<subscript>ux</subscript>, S<subscript>xu</subscript></emphasis> on the base of the experimental data processing;</para></listitem>
<listitem>
<para>Factorization of the weight matrix <emphasis>R</emphasis> from (2.31);</para></listitem>
<listitem>
<para>Factorization of the additional connection Equation (2.29);</para></listitem>
<listitem>
<para>Factorization of the transposed spectral densities matrix (2.28);</para></listitem>
<listitem>
<para>The Wiener&#x02019;s separation of the matrix (2.32);</para></listitem>
<listitem>
<para>Calculation of matrix &#x003D5; based on Equation (2.30);</para></listitem>
<listitem>
<para>Identification of matrices <emphasis>&#x003D5;<subscript>11</subscript></emphasis> and <emphasis>&#x003D5;<subscript>12</subscript></emphasis> with the help of Equation (2.21).</para></listitem></itemizedlist>
<para>In this way, we have substantiated the algorithm for the structural identification of the multivariable object on the basis of the incomplete experimental data.</para>
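<para>A scalar analogue of the first steps of this algorithm can be sketched as follows. Averaged periodograms estimate the spectral densities, and their ratio gives a Wiener-type frequency-response estimate; the one-sample-delay object, the 0.6 gain and the segment length are illustrative assumptions, not quantities from the chapter:</para>

```python
import numpy as np

# Hypothetical SISO analogue: estimate S_uu and S_xu by averaged
# periodograms and form the frequency-response estimate Phi = S_xu / S_uu.
rng = np.random.default_rng(2)
N, L = 8192, 256
u = rng.standard_normal(N)              # measured input record
x = np.zeros(N)
x[1:] = 0.6 * u[:-1]                    # object: gain 0.6, one-sample delay

segs = N // L
U = np.fft.rfft(u.reshape(segs, L), axis=1)
X = np.fft.rfft(x.reshape(segs, L), axis=1)
S_uu = np.mean(np.abs(U) ** 2, axis=0)          # auto-spectral density of u
S_xu = np.mean(X * np.conj(U), axis=0)          # cross-spectral density
Phi_hat = S_xu / S_uu                           # estimated frequency response
gain_hat = np.abs(Phi_hat[len(Phi_hat) // 2])   # magnitude at a mid frequency
```

<para>The estimated magnitude stays close to the true gain at every frequency; segment-edge effects and the finite number of averaged segments account for the small residual error.</para>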
</section>
<section class="lev1" id="sec2-5">
<title>2.5 The Dynamics of the Mechanism with a Parallel Structure Obtained by Means of the Complete Data Identification</title>
<para>To identify the dynamic models, we used records of the changes in the components of vectors <emphasis>u</emphasis>, <emphasis>&#x003C8;</emphasis> and <emphasis>x</emphasis>, obtained by simulating the platform&#x02019;s behavior with a virtual model. The case was considered when the center of mass <emphasis>O<subscript>1</subscript></emphasis> of the moving platform remained in the manipulator&#x02019;s plane of symmetry <emphasis>O<subscript>0</subscript>X<subscript>0</subscript>Y<subscript>0</subscript></emphasis>. It is evident that in this case only three of the six rods (<link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link>) need to be considered, and the dimension of vector <emphasis>u</emphasis> (2.1) equals 3.</para>
<fig id="F2-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-3">Figure <xref linkend="F2-3" remap="2.3"/></link></label>
<caption><para>Graphs of changes in the length of the rods.</para></caption>
<graphic xlink:href="graphics/ch02_fig003.jpg"/>
</fig>
<fig id="F2-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link></label>
<caption><para>Graphs of changes in the projections of the resistance moments.</para></caption>
<graphic xlink:href="graphics/ch02_fig004.jpg"/>
</fig>
<fig id="F2-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-5">Figure <xref linkend="F2-5" remap="2.5"/></link></label>
<caption><para>Graphs of changes in the coordinates of the platform&#x02019;s center of mass.</para></caption>
<graphic xlink:href="graphics/ch02_fig005.jpg"/>
</fig>
<para>As a result of the computational experiment, all the above vectors&#x02019; components were obtained and all graphs of their changes (Figures 2.3&#x02013;2.5) were built.</para>
<para>To solve the identification problem, the control and disturbance vectors were combined into a single vector <emphasis>u</emphasis> of the input signals</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-33.jpg"/></para>
<para>In this case, the identification problem is to estimate the order and the parameters of a differential equations system which characterizes the mechanism motion.</para>
<para><emphasis role="strong">The state space dynamic model</emphasis> of the mechanism is defined with the help of the System Identification Toolbox of the Matlab environment. Taking into account the structure of vector <emphasis>u</emphasis> defined by (2.33) allows obtaining the hexapod&#x02019;s state equation as follows:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-34.jpg"/></para>
<para>where the matrices <emphasis>B<subscript>u</subscript></emphasis>, <emphasis>B<subscript>&#x003C8;</subscript></emphasis>, <emphasis>D<subscript>u</subscript></emphasis> and <emphasis>D<subscript>&#x003C8;</subscript></emphasis> are easily determined.</para>
<para>The analysis of the obtained model of dynamics shows that the moving object is fully controllable and observable.</para>
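<para>Controllability and observability can be verified with the standard rank tests. The matrices below are an illustrative second-order oscillator (chosen to have a natural frequency near 11 s<superscript>-1</superscript>), not the identified hexapod model:</para>

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Illustrative oscillator with natural frequency 11 rad/s (11^2 = 121).
A = np.array([[0.0, 1.0], [-121.0, -2.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
controllable = np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0]
observable = np.linalg.matrix_rank(obsv(A, C)) == A.shape[0]
```

<para>Full rank of both matrices confirms that every state can be driven by the input and reconstructed from the output, which is the property claimed for the identified model.</para>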
<para>Applying the Laplace transformation to the left and right parts of Equations (2.34) yields the following relations</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-35.jpg"/></para>
<para>where <emphasis>y(s)</emphasis>, <emphasis>x(s)</emphasis>, <emphasis>u(s)</emphasis>, <emphasis>&#x003C8;(s)</emphasis> are the Laplace images of the corresponding vectors, <emphasis>E<subscript>n</subscript></emphasis> is the identity matrix and <emphasis>s</emphasis> is the independent complex variable.</para>
<para>After solving the system of Equations (2.35) with respect to the vector of the initial coordinates of the mechanism <emphasis>x,</emphasis> the following matrices of the transfer functions <emphasis>W<subscript>u</subscript></emphasis> and <emphasis>W<subscript>&#x003C8;</subscript></emphasis> (<link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link>) were obtained</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-36.jpg"/></para>
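<para>A transfer function matrix of the form (2.36) can be evaluated numerically at any complex frequency. The helper below and the first-order example matrices are assumptions for illustration; they are not the identified matrices of the hexapod:</para>

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate W(s) = C (s*E - A)^(-1) B + D at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

# Illustrative first-order object with static gain 1.
A = np.array([[-2.0]])
B = np.array([[2.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])
W0 = transfer_matrix(A, B, C, D, 0.0)   # static gain W(0)
```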
<para>Substituting the appropriate numerical matrices <emphasis>C</emphasis>, <emphasis>B<subscript>u</subscript></emphasis>, <emphasis>B<subscript>&#x003C8;</subscript></emphasis>, <emphasis>D<subscript>u</subscript></emphasis>, <emphasis>D<subscript>&#x003C8;</subscript></emphasis>, in expressions (2.36), (2.37) allowed determining that</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-38.jpg"/></para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-39.jpg"/></para>
<para>The analysis of the structure of matrices (2.38) and (2.39) and of the Bode diagrams (<link linkend="F2-6">Figure <xref linkend="F2-6" remap="2.6"/></link>) shows that this mechanism can be classified as a multi-resonant mechanical filter whose input signal and disturbance energy bands lie within the filter&#x02019;s passband. The eigenfrequency of such a filter is close to <emphasis>11 s<superscript>-1</superscript></emphasis> and depends on the moments of inertia and the mass of the moving elements of the mechanism (<link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link>).</para>
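<para>The resonant-filter interpretation can be reproduced with a short frequency sweep. The second-order transfer function and the damping ratio below are illustrative assumptions; only the eigenfrequency of 11 s<superscript>-1</superscript> is taken from the analysis above:</para>

```python
import numpy as np

# Magnitude response of an illustrative second-order resonant filter
# with natural frequency wn = 11 rad/s and light damping (zeta assumed).
wn, zeta = 11.0, 0.1
w = np.linspace(1.0, 30.0, 2901)            # frequency grid, rad/s
s = 1j * w
H = wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)
w_peak = w[np.argmax(np.abs(H))]            # resonance peak location
```

<para>For light damping the magnitude peak sits just below the natural frequency, so disturbance energy near 11 s<superscript>-1</superscript> is strongly amplified rather than attenuated.</para>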
<para><emphasis role="strong">The ordinary differential equations model of the system dynamics</emphasis> can be obtained if one represents the transfer function matrices W<subscript><emphasis>u</emphasis></subscript> and W<subscript>&#x003C8;</subscript> as products of the polynomial matrices P, M and M<subscript>&#x003C8;</subscript> with the minimum order polynomials:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-40.jpg"/></para>
<para>To find the polynomial matrices <emphasis>P</emphasis> and <emphasis>M</emphasis> with the minimum order polynomials, we propose the following algorithm:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>By moving the poles to the right [3], the transfer function matrix <emphasis>W<subscript>u</subscript></emphasis> should be introduced as follows:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-42.jpg"/></para>
</listitem></itemizedlist>
<para>and the diagonal polynomial matrix <emphasis>D<subscript><emphasis>R</emphasis></subscript></emphasis> should be found;</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>On the basis of the polynomial matrices <emphasis>N<subscript>R</subscript></emphasis> and <emphasis>D<subscript>R</subscript></emphasis> found by the CMFR algorithm, substantiated in [7], the unknown matrices <emphasis>P</emphasis> and <emphasis>M</emphasis> should be identified</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-43.jpg"/></para>
</listitem>
<listitem>
<para>From Equation (2.41) and the known matrices <emphasis>P</emphasis> and <emphasis>W<subscript>&#x003C8;</subscript></emphasis> the polynomial matrix <emphasis>M<subscript>&#x003C8;</subscript></emphasis> should be found</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-44.jpg"/></para>
</listitem></itemizedlist>
<fig id="F2-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-6">Figure <xref linkend="F2-6" remap="2.6"/></link></label>
<caption><para>Bode diagrams of the mechanism with a parallel structure.</para></caption>
<graphic xlink:href="graphics/ch02_fig006.jpg"/>
</fig>
<para>The application of the algorithms (2.42) and (2.43) to the original data represented by expressions (2.38), (2.39) made it possible to obtain the polynomial matrices <emphasis>P</emphasis>, <emphasis>M</emphasis>, <emphasis>M<subscript>&#x003C8;</subscript></emphasis>. Then, application of the inverse Laplace transform under the zero initial conditions allowed determining the following system of ordinary differential equations</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-45.jpg"/></para>
<para>where <emphasis>P<subscript>i</subscript></emphasis>, <emphasis>M<subscript>i</subscript></emphasis>, <emphasis>&#x003C8;<subscript>i</subscript></emphasis> &#x02013; the numeric matrices.</para>
<para>The representation of the dynamic model (2.45) allowed reconstructing a block diagram of the parallel kinematics mechanism in the standard form (<link linkend="F2-7">Figure <xref linkend="F2-7" remap="2.7"/></link>), where <emphasis>P<superscript>-1</superscript></emphasis> is the inverse of matrix <emphasis>P</emphasis>.</para>
<fig id="F2-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-7">Figure <xref linkend="F2-7" remap="2.7"/></link></label>
<caption><para>Block diagram of the mechanism with a parallel kinematics.</para></caption>
<graphic xlink:href="graphics/ch02_fig007.jpg"/>
</fig>
</section>
<section class="lev1" id="sec2-6">
<title>2.6 The Dynamics of the Mechanism with a Parallel Structure Obtained by Means of the Incomplete Data Identification</title>
<para>The incomplete experimental data arise when not all input signals of vector <emphasis>u</emphasis> (<link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link>) can be measured and recorded. Such a situation appears during the dynamics identification of a manipulator with a controlled diode motor-operated drive (<link linkend="F2-8">Figure <xref linkend="F2-8" remap="2.8"/></link>). In this case, only the signals of the platform&#x02019;s center of mass set position, which form vector <emphasis>u<subscript>1</subscript></emphasis>, and the signals of the platform&#x02019;s center of mass current position, which form vector <emphasis>x</emphasis>, are accessible for measuring. Thus, the task of the identification is to define the matrices <emphasis>&#x003D5;<subscript>11</subscript></emphasis>, <emphasis>&#x003D5;<subscript>12</subscript></emphasis> from Equation (2.21) by the records of vectors <emphasis>u<subscript>1</subscript></emphasis> and <emphasis>x</emphasis>.</para>
<fig id="F2-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-8">Figure <xref linkend="F2-8" remap="2.8"/></link></label>
<caption><para>Manipulator with a controlled diode motor-operated drive.</para></caption>
<graphic xlink:href="graphics/ch02_fig008.jpg"/>
</fig>
<para>This task is solved by applying algorithm (2.30&#x02013;2.32) to the estimates of the spectral and cross-spectral density matrices <emphasis>S<subscript>uu</subscript>, S<subscript>xx</subscript>, S<subscript>ux</subscript></emphasis> and <emphasis>S<subscript>&#x00394;x</subscript></emphasis>. To illustrate the application of this algorithm, we used the records of the &#x00AB;input-output&#x00BB; vectors <emphasis>u, x</emphasis> with the following structure</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-46.jpg"/></para>
<para>where <emphasis>u<subscript>1</subscript></emphasis> is the required value of the projection of the manipulator&#x02019;s platform center of mass <emphasis>O<subscript>1</subscript></emphasis> on the axis <emphasis>O<subscript>0</subscript>X<subscript>0</subscript></emphasis> (<link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link>); <emphasis>u<subscript>2</subscript></emphasis> is the required value of the projection of <emphasis>O<subscript>1</subscript></emphasis> on the axis <emphasis>O<subscript>0</subscript>Y<subscript>0</subscript></emphasis>; <emphasis>u<subscript>3</subscript></emphasis> is the required value of the projection of <emphasis>O<subscript>1</subscript></emphasis> on the axis <emphasis>O<subscript>0</subscript>Z<subscript>0</subscript></emphasis> (<link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link>).</para>
<para>As a result of the experiment, all the above vectors&#x02019; (2.46) components were obtained and all the graphs of their changes (Figures 2.9, 2.10) were built.</para>
<fig id="F2-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-9">Figure <xref linkend="F2-9" remap="2.9"/></link></label>
<caption><para>Graphs of the vector u component changes.</para></caption>
<graphic xlink:href="graphics/ch02_fig009.jpg"/>
</fig>
<fig id="F2-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-10">Figure <xref linkend="F2-10" remap="2.10"/></link></label>
<caption><para>Graphs of the vector x component changes.</para></caption>
<graphic xlink:href="graphics/ch02_fig0010.jpg"/>
</fig>
<para>In accordance with algorithm (2.26&#x02013;2.32), at the first stage of the calculations the estimations of the matrices <emphasis>S<subscript>uu</subscript>, S<subscript>xx</subscript>, S<subscript>ux</subscript></emphasis> were found.</para>
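<para>As a sketch of this first stage (in Python with NumPy; the function name and segment parameters are illustrative, not part of the original method), Welch-type estimates of the matrices <emphasis>S<subscript>uu</subscript>, S<subscript>xx</subscript>, S<subscript>ux</subscript></emphasis> can be computed from the recorded vectors <emphasis>u, x</emphasis> as follows:</para>

```python
import numpy as np

def cross_spectral_matrix(a, b, fs=1.0, nperseg=256):
    """Welch-type estimate of the cross-spectral density matrix S_ab(f)
    between vector records a (n_a, N) and b (n_b, N).
    Returns frequencies f (n_f,) and S of shape (n_f, n_a, n_b)."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    win = np.hanning(nperseg)
    scale = 1.0 / (fs * (win ** 2).sum())      # one-sided PSD scaling
    n_seg = a.shape[1] // nperseg
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    S = np.zeros((f.size, a.shape[0], b.shape[0]), dtype=complex)
    for k in range(n_seg):                     # average over segments
        sl = slice(k * nperseg, (k + 1) * nperseg)
        A = np.fft.rfft(win * (a[:, sl] - a[:, sl].mean(axis=1, keepdims=True)))
        B = np.fft.rfft(win * (b[:, sl] - b[:, sl].mean(axis=1, keepdims=True)))
        S += scale * np.einsum('if,jf->fij', A.conj(), B)
    return f, S / n_seg

# For records u, x of shape (3, N):
#   f, S_uu = cross_spectral_matrix(u, u);  f, S_ux = cross_spectral_matrix(u, x)
```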
<para>Approximation of these estimations made it possible to construct the following spectral density matrices with the help of the logarithmic frequency characteristics method [4]</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-47.jpg"/></para>
<para>The introduction of the found matrices (2.47) into Equation (2.29) and its factorization made it possible to find the cross-spectral density <emphasis>S<subscript>&#x00394;x</subscript></emphasis></para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-48.jpg"/></para>
<para>The factorization [4] of the transposed spectral density matrix <emphasis>S&#x02032;<subscript>yy</subscript></emphasis> from expression (2.31) allowed finding the following matrix <emphasis>D</emphasis></para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-49.jpg"/></para>
<para>Taking into account the dimension of the output co-ordinates vector <emphasis>x</emphasis> (2.46), we have accepted that matrix <emphasis>R</emphasis> is equal to the identity matrix. In that case, matrix <emphasis>R<subscript>0</subscript></emphasis> also equals 1. Substitution of the results (2.47&#x02013;2.49) in expression (2.32) and its separation allowed defining that</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-50.jpg"/></para>
<para>Substitution of matrices (2.49), (2.50) in expression (2.30) and taking into account the vectors <emphasis>u</emphasis> and <emphasis>x</emphasis> made it possible to solve the problem and to find such matrices <emphasis>&#x003D5;<subscript>11</subscript></emphasis> and <emphasis>&#x003D5;<subscript>12</subscript></emphasis> as</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-51.jpg"/></para>
<para>Taking into account the flow diagram on <link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link> and the physical sense of matrices <emphasis>&#x003D5;<subscript>11</subscript></emphasis> and <emphasis>&#x003D5;<subscript>12</subscript></emphasis> made it possible to formulate the equation</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-53.jpg"/></para>
<para>For the definition of the incomplete data identification error, Equation (2.26) is used and the error spectral density is found in the form shown below:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-54.jpg"/></para>
<para>The mathematical mean of the identification error is equal to zero and its relative variance is equal to</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq2-55.jpg"/></para>
<para>It is clear that the main part of the power density of the error <emphasis>&#x1D700;</emphasis> oscillations is concentrated in the area of infrasonic frequencies. The presence of such an error is explained by the limited duration of the experiment.</para>
</section>
<section class="lev1" id="sec2-7">
<title>2.7 Verification of the Structural Identification Results</title>
<para><emphasis role="strong"><emphasis>Verification of the identification results</emphasis></emphasis> was implemented with the help of the modeling tool SIMULINK from Matlab. The principle of verifying the exactness of the identification results was based on the comparison of the vector <emphasis>x</emphasis> records to the results of the platform center of mass position simulation.</para>
<fig id="F2-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-11">Figure <xref linkend="F2-11" remap="2.11"/></link></label>
<caption><para>Scheme of the simulation model of the mechanism with parallel kinematics.</para></caption>
<graphic xlink:href="graphics/ch02_fig0011.jpg"/>
</fig>
<fig id="F2-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-12">Figure <xref linkend="F2-12" remap="2.12"/></link></label>
<caption><para>Graphs of the change of the X coordinate of the center of mass of the platform.</para></caption>
<graphic xlink:href="graphics/ch02_fig0012.jpg"/>
</fig>
<fig id="F2-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-13">Figure <xref linkend="F2-13" remap="2.13"/></link></label>
<caption><para>Graphs of the change of the Y coordinate of the center of mass of the platform.</para></caption>
<graphic xlink:href="graphics/ch02_fig0013.jpg"/>
</fig>
<para>In Figure 2.11, the block u of the simulation model is designed to create the set of the input signals. The block m generates a set of projections of the net resistance moment vector on the axes of the co-ordinate system O<subscript>0</subscript>X<subscript>0</subscript>Y<subscript>0</subscript>Z<subscript>0</subscript>. The output of the block xy forms the vector x of records. The blocks Wu and Wpsi are designed for storing the matrices of the transfer functions W<subscript><emphasis>u</emphasis></subscript> and W<subscript>&#x003C8;</subscript>.</para>
<para>According to the simulation results, the graphs (Figures 2.12 and 2.13) were built. Analysis of these graphs shows that they are close enough.</para>
</section>
<section class="lev1" id="sec2-8">
<title>2.8 Conclusions</title>
<para>The conducted research on the mechanism with a parallel structure dynamics made it possible to obtain the following scientific and practical results:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Two new algorithms for the structural identification of the dynamic models of multivariable moving objects were substantiated. The first of them allows defining the structure and parameters of a transfer function matrix of an object with unstable poles using the regular &#x0201C;input-output&#x0201D; vectors. The second allows identifying not only the model of a mobile object but also the model of the non-observed stationary stochastic disturbance;</para>
<listitem>
<para>Three types of models which characterize the dynamics of the manipulator with parallel kinematics were identified. This allows using different modern multidimensional optimal control system synthesis methods for designing the optimal mechatronic system;</para>
<listitem>
<para>It is shown that the mechanism with parallel kinematics, as an object of control, is a multi-resistant mechanical filter.</para></listitem></itemizedlist>
</section>
<section class="lev1" id="sec2-9">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>J. P. Merlet, &#x02018;Parallel robots. Solid mechanics and its application&#x02019;, V.74 &#x02013; Kluwer Academic Publishers, 2000, pp. 394 <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+P%2E+Merlet%2C+%27Parallel+robots%2E+Solid+mechanics+and+its+application%27%2C+V%2E74+-+Kluwer+Academic+Publishers%2C+2000%2C+pp%2E+394" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. V. Volkomorov, J. P. Hagan, A. P. Karpenko, &#x02018;Modeling and optimization of some parallel mechanisms&#x02019;, Information Technology. Application 2010, No. 5. pp.1&#x02013;32 (in Russian). <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+V%2E+Volkomorov%2C+J%2E+P%2E+Hagan%2C+A%2E+P%2E+Karpenko%2C+%27Modeling+and+optimization+of+some+parallel+mechanisms%27%2C+Information+Technology%2E+Application+2010%2C+No%2E+5%2E+pp%2E1-32+%28in+Russian%29%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. C. Davis, &#x02018;Factoring the spectral matrix&#x02019;, IEEE Trans. Automat. Contr., 1963, AC-8, No. 4, pp. 296&#x02013;305. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+C%2E+Davis%2C+%27Factoring+the+spectral+matrix%27%2C+IEEE+Trans%2E+Automat%2E+Cointr%2E+-+1963%2E-+AC-8%2C+N+4%2C+pp%2E+296-305%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>V. N. Azarskov, L. N. Blokhin, L. S. Zhitetsky, &#x02018;The methodology of constructing optimal systems stochastic stabilization&#x02019;, Monograph, Ed. Blokhin L. N. - K.: NAU Book Publishers, 2006, pp. 440 - Bibliography: pp. 416&#x02013;428 (in Russian). <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=V%2E+N%2E+Azarskov%2C+L%2E+N%2E+Blokhin%2C+L%2E+S%2E+Zhitetsky%2C+%27The+methodology+of+constructing+optimal+systems+stochastic+stabilization%27%2C+Monograph%2C+Ed%2E+Blokhin+L%2E+N%2E+-+K%2E%3A+NAU+Book+Publishers%2C+2006%2C+pp%2E+440+-+Bibliography%3A+pp%2E+416-428+%28in+Russian%29%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. R. Gantmakher, &#x02018;Theory matrits.-4th ed&#x02019;, Nauka, 1988. p. 552 (in Russian). <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+R%2E+Gantmakher%2C+%27Theory+matrits%2E-4th+ed%27%2C+Nauka%2C+1988%2E+p%2E+552+%28in+Russian%29%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>V. Kucera, &#x02018;Discrete linear control: the polynomial equation approach&#x02019;. Praha: Akademia, 1979, pp. 206 <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=V%2E+Kucera+%27Discrete+line+control%3A+the+polynomial+equation+approach%27%2E+Praha%3A+Akademia%2C+1979%2C+pp%2E+206" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. A. Aliev, V. A. Bordyug, V. B. Larin, &#x02018;Time and frequency methods for the synthesis of optimal regulators&#x02019;, Baku: Institute of Physics of the Academy of Sciences, 1988, pp. 46 (in Russian). <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+A%2E+Aliev%2C+V%2E+A%2E+Bordyug%2C+V%2E+B%2E+Larin%2C+%27Time+and+frequency+methods+for+the+synthesis+of+optimal+regulators%27%2C+Baku%3A+Institute+of+Physics+of+the+Academy+of+Sciences%2C+1988%2C+pp%2E+46+%28in+Russian%29%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch03" label="3" xreflabel="3">
<title>An Autonomous Scale Ship Model for Parametric Rolling Towing Tank Testing</title>
<para><emphasis role="strong">M. M&#x00ED;guez Gonz&#x00E1;lez, A. Deibe, F. Orjales, B. Priegoand F. L&#x00F3;pez Pe&#x00F1;a</emphasis></para>
<para>Integrated Group for Engineering Research, University of A Coru&#x00F1;a, Spain<break/>Corresponding author: M. M&#x00ED;guez Gonz&#x00E1;lez &lt;mmiguez@udc.es&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>This chapter presents the work carried out for developing a self-propelled scale ship model for model testing, with the main characteristic of not having any material link to a towing device to carry out the tests. This model has been fully instrumented in order to acquire all the significant raw data, process them onboard and communicate with an inshore station.</para>
<para><emphasis role="strong">Keywords:</emphasis> Ship scale model, autonomous systems, data acquisition, parametric resonance, towing tank tests</para>
</section>
<section class="lev1" id="sec3-1">
<title>3.1 Introduction</title>
<para>Ship scale model testing has traditionally been the only way to accurately determine ship resistance, propulsion, maneuvering and seakeeping characteristics. These tests are usually carried out in complex facilities, where a large carriage, to which the model is attached, moves it following a desired path along a water tank.</para>
<para>Ship model testing can be broadly divided into four main types of tests: resistance tests, either in waves or in still water, intended to obtain the resistance of the ship without taking the effects of its propulsion system into consideration; propulsion tests, aimed at analyzing the performance of the ship propeller when it is in operation together with the ship itself; maneuvering tests, the objective of which is to analyze the capability of the ship to carry out a set of defined maneuvers; and finally, seakeeping tests, aimed at studying the behavior of the ship while sailing in waves [1].</para>
<para>The former two tests are carried out in towing tanks, which are slender water channels where the model is attached to the carriage that tows it along the center of the tank. The latter two, on the other hand, are usually performed in the so-called ocean basins where the scale model can be either attached to a carriage or radio controlled with no mechanical connection to it.</para>
<para>However, there exist certain kinds of phenomena in the field of seakeeping that can be studied in towing tanks (which are cheaper and have more availability than ocean basins) and that are characterized by showing very large amplitude nonlinear motions. The interactions between the carriage and the model due to these motions, which are usually limited to a maximum in most towing tanks, and the lack of space under the carriage, reduce the applicability of slender channels for these types of tests.</para>
<para>One of these phenomena is that of ship parametric roll resonance, also known as parametric rolling. This is a well-known dynamical issue affecting ships, especially containerships, fishing vessels and cruise ships, and it can generate very large amplitude roll motions in a very sudden way, reaching the largest amplitudes in just a few rolling cycles. Parametric roll is due to the periodic alternation of wave crests and troughs along the ship, which produce the changes in ship transverse stability that lead to the aforementioned roll motions.</para>
<para>Resonance is most likely to happen when the ship sails in longitudinal seas and when a certain set of conditions are present, which include a wave encounter frequency ranging twice the ship&#x02019;s natural roll frequency, a wavelength almost equal to the ship length and a wave amplitude larger than a given threshold [2].</para>
<para>Traditionally, parametric roll tests in towing tanks have been carried out by using a carriage towed model, where the model is free to move in just some of the 6 degrees of freedom (typically heave, roll and pitch, the ones most heavily influencing the phenomenon) [3, 4]. However, this arrangement limits the possibility of analyzing the influence of the restrained degrees of freedom [5], which may also be of interest while analyzing parametric roll resonance, or it may interfere on its development.</para>
<para>The main objective of the present work is to overcome the described difficulties for carrying out scale tests in slender towing tanks where large amplitude motions are involved, while preserving the model capabilities for being used in propulsion (free-running), maneuvering and seakeeping tests.</para>
<para>This has been done by using a self-propelled and self-controlled scale model, able to freely move in the six degrees of freedom and to measure, store and process all the necessary data without direct need of the towing carriage. In addition, the model could be used for any other tests, both in towing tanks or ocean basins, with the advantage of being independent of the aforementioned carriage.</para>
<para>The type and amount of data to be collected is defined after taking into consideration the typology of tests to be carried out. Taking into account that all three, propulsion, maneuvering and seakeeping tests should be considered, all the data related with the motion of the ship, together with those monitoring the propulsion system and the heading control system, have to be collected. Moreover, a processing, storage and communication with shore system was also implemented and installed onboard.</para>
<para>Finally, the results obtained by using the model in a test campaign aimed at predicting the appearance of parametric roll, where a detection and prevention algorithm has been implemented in the onboard control system, are presented.</para>
</section>
<section class="lev1" id="sec3-2">
<title>3.2 System Architecture</title>
<para>The first implementation of the proposed system has been on a scale ship model that has been machined from high-density (250 kg/m<superscript>3</superscript>) polyurethane blocks to a scale of 1/15th, painted and internally covered by epoxy reinforced fiberglass. Mobile lead weights have been installed on supporting elements, allowing longitudinal, transverse and vertical displacements for a correct mass arrangement. Moreover, two small weights have been fitted into a transverse slider for fast and fine-tuning of both the transverse position of the center of gravity and the longitudinal moment of inertia.</para>
<para>The propulsion system consists of a 650W brushless, outrunner, three-phase motor, an electronic speed control (ESC) and a two-stage planetary gearbox, which move a four-bladed propeller. The ESC electronically generates a three-phase low voltage electric power source to the brushless motor. With this ESC, the speed and direction of the brushless motor can be controlled. It can even be forced to work as a dynamic brake if need be. The rudder is linked to a radio control servo with a direct push-pull linkage, so that the rudder deflection angle can be directly controlled with the servo command.</para>
<para>Both the brushless motor and the rudder may be controlled either by using a radio link or by the onboard model control system, which is the default behavior. The user can choose between both control methods at any time; there is always a human operator at the external transmitter to ensure maximum safety during tests.</para>
<section class="lev2" id="sec3-2-1">
<title>3.2.1 Data Acquisition</title>
<para>In order to ensure that the model can be used for propulsion, maneuvering and seakeeping tests and is in accordance with the ITTC (International Towing Tank Conference) requirements, the following parameters have been measured (<link linkend="T3-1">Table <xref linkend="T3-1" remap="3.1"/></link>):</para>
<table-wrap position="float" id="T3-1">
<label><link linkend="T3-1">Table <xref linkend="T3-1" remap="3.1"/></link></label>
<caption><para>Measured Parameters</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="bottom" align="left">Type of Test</td>
<td valign="bottom" align="left">ITTC Guideline</td>
<td valign="bottom" align="left">Measured Parameters</td>
<td valign="bottom" align="left"></td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Propulsion</td>
<td valign="top" align="left">7.5-02-03-01.1</td>
<td valign="top" align="left">Model speed<?lb?>Resistance/External tow force<?lb?>Propeller thrust and torque<?lb?>Propeller rate of revolution<?lb?>Model trim and sinkage</td>
<td valign="top" align="left">&#x0221A;<?lb?>&#x0221A;<?lb?>&#x0221A;<?lb?>&#x0221A;<?lb?>&#x0221A;</td>
</tr>
<tr>
<td valign="top" align="left">Maneuverability<?lb?>(additional to<?lb?>Propulsion)</td>
<td valign="top" align="left">7.5-02-06-01</td>
<td valign="top" align="left">Time<?lb?>Position<?lb?>Heading<?lb?>Yaw rate<?lb?>Roll angle<?lb?>Rudder angle<?lb?>Force and moment on steering devices</td>
<td valign="top" align="left">&#x0221A;<?lb?>&#x0221A;<?lb?>&#x0221A;<?lb?>&#x0221A;<?lb?>&#x0221A;<?lb?>&#x0221A;<?lb?>x</td>
</tr>
<tr>
<td valign="top" align="left">Seakeeping<?lb?>(additional to<?lb?>Maneuverability)</td>
<td valign="top" align="left">7.5-02-07-02.1</td>
<td valign="top" align="left">Model motions (6 d.o.f.)<?lb?>Model motion rates (6 d.o.f.)<?lb?>Model accelerations (6 d.o.f.)<?lb?>Impact pressures<?lb?>Water on deck<?lb?>Drift angle</td>
<td valign="top" align="left">&#x0221A;<?lb?>&#x0221A;<?lb?>&#x0221A;<?lb?>x<?lb?>x<?lb?>x</td>
</tr>
</tbody>
</table>
</table-wrap>
<para>In order to obtain data for all the representative parameters, the following sensors have been installed onboard:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Inertial Measurement Unit (IMU): having nine MEMS embedded sensors; including 3 axis accelerometers, 3 axis gyros and 3 axis magnetometers. The IMU has an internal processor that provides information on accelerations in the OX, OY and OZ axis, angular rates around these three axes and quaternion based orientation vectors (roll, pitch, yaw), both in RAW format and filtered by using Kalman techniques. This sensor has been placed near the ship&#x02019;s center of gravity with the objective of improving its performance;</para></listitem>
<listitem>
<para>Thrust sensor: a thrust gauge has been installed to measure the thrust generated by the propeller at the thrust bearing;</para></listitem>
<listitem>
<para>Revolution and torque sensor: in order to measure the propeller revolutions and the torque generated by the engine, a torque and rpm sensor has been installed between both elements;</para></listitem>
<listitem>
<para>Sonars: intended to measure the distance to the towing tank walls and feed an automatic heading control system;</para></listitem>
<listitem>
<para>Not directly devoted to tested magnitudes, there are also a temperature sensor, battery voltage sensors and current sensors.</para></listitem></itemizedlist>
<para>Data acquisition is achieved through an onboard mounted PC, placed forward on the bottom of the model. The software in charge of the data acquisition and processing and ship control is written in Microsoft .Net, and installed in this PC. This software is described in the following section. There is a Wi-Fi antenna at the ship&#x02019;s bow, connected to the onboard PC that enables a Wi-Fi link to an external, inshore workstation. This workstation is used to monitor the ship operation.</para>
<para>An overview of the model where its main components are shown is included in <link linkend="F3-1">Figure <xref linkend="F3-1" remap="3.1"/></link>.</para>
<fig id="F3-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-1">Figure <xref linkend="F3-1" remap="3.1"/></link></label>
<caption><para>Ship scale model overview.</para></caption>
<graphic xlink:href="graphics/ch03_fig01.jpg"/>
</fig>
</section>
<section class="lev2" id="sec3-2-2">
<title>3.2.2 Software Systems</title>
<para>In this section, the software designed and implemented to control the ship is described. Regarding the operational part of the developed software, there are three main tasks that have to be performed in real time in order to monitor and control the ship: acquisition, computation and actuation. In every time step, once the system is working, all the sensor measurements are collected. The indispensable sensors that need to be connected to the system are: the Inertial Measurement Unit (IMU) which provides the acceleration, magnetic, angular rate and attitude measurements, the sonars and thrust gauge, in this case connected to a data acquisition board, and the revolution and torque sensor (<link linkend="F3-2">Figure <xref linkend="F3-2" remap="3.2"/></link>).</para>
<fig id="F3-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-2">Figure <xref linkend="F3-2" remap="3.2"/></link></label>
<caption><para>Connectivity between Mini-PC and sensors onboard.</para></caption>
<graphic xlink:href="graphics/ch03_fig02.jpg"/>
</fig>
<para>Once the sensor data are acquired, the system computes the proper signals to modify the rudder servo and the motor ESC commands. These signals can be set manually (using the software interface from the Wi-Fi linked workstation, or the external RC transmitter) or automatically. Applying a controller over the rudder servo, using the information from the sonar signals, it is possible to keep the model centered and in course along the towing tank. Another controller algorithm is used to control the ship speed.</para>
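<para>The acquisition&#x02013;computation&#x02013;actuation cycle described above can be sketched as follows (a minimal Python illustration only; the onboard software is actually written in .NET, and all function names here are hypothetical):</para>

```python
import time

def run_control_loop(read_sensors, compute, actuate, dt=0.05, n_steps=None):
    """One acquisition -> computation -> actuation cycle per time step.
    read_sensors(): collect IMU, sonar, thrust and rpm/torque readings.
    compute(data): return (rudder_cmd, esc_cmd), manual or automatic.
    actuate(rudder_cmd, esc_cmd): drive the rudder servo and the ESC."""
    step = 0
    while n_steps is None or step < n_steps:
        t0 = time.monotonic()
        data = read_sensors()                 # acquisition
        rudder_cmd, esc_cmd = compute(data)   # computation
        actuate(rudder_cmd, esc_cmd)          # actuation
        step += 1
        # keep the nominal period dt, accounting for the cycle's own cost
        time.sleep(max(0.0, dt - (time.monotonic() - t0)))
```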
<para>The software is based on VB. NET and to interact with the system from an external workstation, a simple graphical user interface has been implemented (<link linkend="F3-3">Figure <xref linkend="F3-3" remap="3.3"/></link>).</para>
<fig id="F3-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-3">Figure <xref linkend="F3-3" remap="3.3"/></link></label>
<caption><para>Graphical user interface to monitor/control the model from an external workstation.</para></caption>
<graphic xlink:href="graphics/ch03_fig03.jpg"/>
</fig>
<para>From an external workstation, the user can start and finish the test, activate the sensors to measure, monitor the data sensors in real time, control the rudder servo and the motor manually using a slider or stop the motor and finish the test. All the acquired data measurements are saved in a file for a future analysis of the test.</para>
</section>
<section class="lev2" id="sec3-2-3">
<title>3.2.3 Speed Control</title>
<para>The speed control of the model is done by setting an rpm command, which keeps engine revolutions constant by using a dedicated controller programmed in the governing software. Alternatively, a servo command may be used for setting a constant power input for the engine. In calm waters and for a given configuration of the ship model, there is a relationship between ship speed and propeller revolutions. By performing some preliminary tests at different speeds, this relation can be adjusted and used thereafter for testing within a simple, open loop controller. However, in case of testing with waves, these waves introduce an additional and strong drag component on the ship forward movement, and there is no practical way of establishing a similar sort of relationship. For these cases, the towing carriage is used as a reference and the speed is maintained by keeping the ship model in a steady relative position to the carriage.</para>
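<para>The calm-water speed-to-revolutions relationship mentioned above can be captured, for example, by fitting a low-order polynomial to the preliminary runs (a Python sketch; the data points below are purely illustrative, not measured values):</para>

```python
import numpy as np

# Preliminary calm-water runs: (ship speed [m/s], propeller rpm) pairs.
# These numbers are illustrative placeholders, not test-campaign data.
speeds = np.array([0.4, 0.6, 0.8, 1.0, 1.2])
rpms   = np.array([420.0, 610.0, 820.0, 1040.0, 1280.0])

# Fit rpm as a low-order polynomial in speed; the fit then serves as an
# open-loop lookup: target speed -> rpm command.
coeffs = np.polyfit(speeds, rpms, deg=2)

def rpm_command(v):
    """Open-loop rpm setpoint for a target calm-water speed v [m/s]."""
    return float(np.polyval(coeffs, v))
```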
<para>The speed control strategy to cope with this composed speed was initially tested as shown in <link linkend="F3-4">Figure <xref linkend="F3-4" remap="3.4"/></link>. It was initially done by means of a double PID controller; the upper section of the controller tries to match the ship speed with a set point selected by the user, <emphasis>c<subscript>v</subscript></emphasis>. This portion of the controller uses the derivative of the ship position along the tank, <emphasis>x</emphasis>, as an estimation of the ship speed, <emphasis>e<subscript>vx</subscript></emphasis>. The position <emphasis>x</emphasis> is measured through the Laser Range Finder sensor placed at the beginning of the towing tank, facing the ship poop, to capture the ship position along the tank, and send this information through a dedicated RF modem pair, to the Mini-PC onboard. The bottom section, on the other hand, uses the integral of the ship acceleration in its local x-axis from the onboard IMU, <emphasis>v<subscript>a</subscript></emphasis>, as an estimation of the ship speed, <emphasis>e<subscript>va</subscript></emphasis>. Each branch has its own PID controller, and the sum of both outputs is used to command the motor.</para>
<fig id="F3-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-4">Figure <xref linkend="F3-4" remap="3.4"/></link></label>
<caption><para>Double PID Speed control.</para></caption>
<graphic xlink:href="graphics/ch03_fig004.jpg"/>
</fig>
<para>Both speed estimations come from different sensors, in different coordinate systems, with different noise perturbations and, over all, they have different natures. The estimation based on the derivative of the position along the tank has little or zero drift over time, and its mean value matches the real speed on the tank <emphasis>x</emphasis> axis, and changes slowly. On the other hand, the estimation based on the acceleration along the ship&#x02019;s local <emphasis>x</emphasis>-axis is computed by the onboard IMU, from its MEMS sensors, and is prone to severe noises, drift over time and changes quickly. Furthermore, the former estimation catches the slow behavior of the ship speed, and the latter its quick changes. This is the reason to use different PID controllers with both estimations. The resulting controller follows the user-selected speed setpoint, with the upper branch eliminating any steady-state speed error, while minimizing quick speed changes with the lower branch.</para>
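<para>A minimal sketch of this double-PID arrangement follows (Python; the gains and names are placeholders, not the tuned values used in the model):</para>

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None
    def step(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def motor_command(c_v, e_vx, e_va, pid_slow, pid_fast):
    """Sum of both branches: e_vx is the speed from the derivative of the
    tank position (slow, drift-free), e_va the speed from the integrated
    IMU acceleration (fast, drifting); c_v is the user setpoint."""
    return pid_slow.step(c_v - e_vx) + pid_fast.step(c_v - e_va)
```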
<para>Later on, a different speed control approach was introduced in order to improve its performance. Since the Laser Range Finder output is an absolute measure of ship position, the speed obtained from its derivative is significantly more robust than the one estimated from the IMU in the first control scheme, and has no drift over time or with temperature. The new speed control algorithm is based on a complementary filter [6]. This filter estimates the ship speed from two different speed estimations, with different and complementary frequency components, as shown in <link linkend="F3-5">Figure <xref linkend="F3-5" remap="3.5"/></link>.</para>
<fig id="F3-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-5">Figure <xref linkend="F3-5" remap="3.5"/></link></label>
<caption><para>Speed control: complementary filter.</para></caption>
<graphic xlink:href="graphics/ch03_fig005.jpg"/>
</fig>
<para>This two-signal complementary filtering is based upon the use and availability of two independent noisy measurements of the ship speed, <emphasis>v</emphasis>(<emphasis>s</emphasis>): the one from the derivative of the range finder position estimation (<emphasis>v</emphasis>(<emphasis>s</emphasis>)<emphasis>+n</emphasis><subscript>1</subscript>(<emphasis>s</emphasis>)) and the one from the integration of IMU acceleration (<emphasis>v</emphasis>(<emphasis>s</emphasis>)<emphasis>+n</emphasis><subscript>2</subscript>(<emphasis>s</emphasis>)). Each of these signals has its own spectral characteristics, here modeled by their different noise levels, <emphasis>n</emphasis><subscript>1</subscript>(<emphasis>s</emphasis>) and <emphasis>n</emphasis><subscript>2</subscript>(<emphasis>s</emphasis>). If both signals have complementary spectral characteristics, transfer functions may be chosen in such a way as to minimize speed estimation error. The general requirement is that one of the transfer functions complement the other. Thus, for both measurements of the speed signal [7]:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg57.jpg"/></para>
<para>This will allow the signal component to pass through the system undistorted, since the two transfer functions always sum to one. In this case, <emphasis>n</emphasis><subscript>1</subscript> is predominantly high-frequency noise and <emphasis>n</emphasis><subscript>2</subscript> is low-frequency noise; these two noise sources have complementary spectral characteristics. Choosing <emphasis>H</emphasis><subscript>1</subscript>(<emphasis>s</emphasis>) to be a low-pass filter, and <emphasis>H</emphasis><subscript>2</subscript>(<emphasis>s</emphasis>) a high-pass filter, both with the same, suitably chosen cutoff frequency, the output <emphasis>v</emphasis> will not suffer from any delay in dynamic response due to low-pass filtering, and will be free of both high- and low-frequency noise.</para>
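<para>A common discrete-time realization of such a complementary filter (a Python sketch under the stated assumptions; the cutoff frequency and class name are illustrative) propagates the estimate with the fast, drifting IMU path and corrects it toward the slow, drift-free range-finder path:</para>

```python
import math

class ComplementaryFilter:
    """v_hat = H1(s) v1 + H2(s) v2 with H1(s) + H2(s) = 1:
    a first-order low-pass on the range-finder-derived speed plus the
    complementary high-pass path fed by the integrated IMU acceleration."""
    def __init__(self, cutoff_hz, dt):
        tau = 1.0 / (2.0 * math.pi * cutoff_hz)
        self.alpha = dt / (tau + dt)   # low-pass blend factor
        self.dt = dt
        self.v_hat = 0.0
    def update(self, v_pos, a_imu):
        # propagate with the fast (drifting) measurement, then pull the
        # estimate toward the slow (drift-free) one
        pred = self.v_hat + a_imu * self.dt
        self.v_hat = (1.0 - self.alpha) * pred + self.alpha * v_pos
        return self.v_hat
```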
</section>
<section class="lev2" id="sec3-2-4">
<title>3.2.4 Track-Keeping Control</title>
<para>Regarding heading control, IMU and sonar data are used to keep the model centered and on course along the towing tank. In case these values are not accurate enough, heading control may be switched to a manual mode, in which an external RC transmitter is used for course keeping. Initially, the sonar signals were used to keep the model centered in the tank, while a Kalman filter taking data from the IMU was used to keep the course, with the magnetometers&#x02019; signals being of primary importance in this Kalman filter.</para>
<para>During testing, this arrangement proved not very effective, because the steel rails of the carriage, which run along both sides of the tank, induced a shift in the magnetometer signals whenever the ship model was not perfectly centered in the tank. In addition, the magnetometers were also sensitive to the electrical power lines crossing the tank. For these reasons, only the sonar signals were used to keep both course and position, with the help of the position relative to the carriage, which was in any case being used to keep the speed constant.</para>
</section>
<section class="lev2" id="sec3-2-5">
<title>3.2.5 Other Components</title>
<para>Power for all the elements is provided by two 12 V DC batteries, placed aft in the ship, with enough room at their locations for longitudinal and transverse mass adjustment. These batteries have enough capacity for a whole day of operation.</para>
<para>The main propulsion system block, consisting of the already mentioned brushless motor (1), electronic speed control (ESC, not in view), two-stage planetary gearbox (2), rotational speed and torque sensor (3), elastic couplings (4), and an output shaft with Kardan couplings (5), is represented in <link linkend="F3-6">Figure <xref linkend="F3-6" remap="3.6"/></link>. The disposition of these elements in a single block allows a modular implementation of the whole system and simplifies the operations needed to install and uninstall it from a given ship model.</para>
<fig id="F3-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-6">Figure <xref linkend="F3-6" remap="3.6"/></link></label>
<caption><para>Propulsion system block.</para></caption>
<graphic xlink:href="graphics/ch03_fig06.jpg"/>
</fig>
<para>Finally, two adjustable weights have been placed on both sides of the ship model, and another one has been placed forward on the centerline. In both cases, enough room has been left to allow transverse and vertical mass adjustment. Moreover, two sliders with 0.5 kg weights have been installed for fine tuning of the mass distribution.</para>
</section>
</section>
<section class="lev1" id="sec3-3">
<title>3.3 Testing</title>
<para>As mentioned above, the proposed model is mainly intended to be used in seakeeping tests in towing tanks, where the carriage may interfere with the motion of the ship. The application of the proposed model to one of these tests, aimed at predicting and preventing the appearance of parametric roll resonance, will be presented in this section.</para>
<para>As has already been described, parametric roll resonance can generate large roll motions and lead to fatal consequences. The need for a detection system, and even for a guidance system, has been repeatedly stated by the maritime sector [8]. The main goal is to obtain systems that can, in a first stage, detect the onset of parametric roll resonance and, in a second stage, prevent it from developing.</para>
<para>As mentioned, some specific conditions regarding both ship and wave characteristics have to be present for parametric roll to appear. The wave encounter frequency should be around twice the ship&#x02019;s natural roll frequency, the wave amplitude should be over a certain threshold that depends on the ship characteristics, and the wavelength should be close to the ship&#x02019;s length. Ship roll damping should be small enough not to dissipate all the energy generated by the parametric excitation. Finally, the variations of the ship&#x02019;s restoring arm due to the wave passing along the hull should be large enough to counteract the effect of roll damping.</para>
<para>If parametric roll resonance is to be prevented, there are three main approaches that can be used, all of which consist in suppressing at least one of the aforementioned conditions.</para>
<para>The first alternative consists in acting on the ship damping components, or in using stabilizing forces that oppose the heeling arms generated during the resonance process. The second alternative, aimed at reducing the amplitude of the restoring arm variations, necessarily implies introducing modifications in the hull forms. Finally, the third alternative focuses on preventing the wave encounter frequency from being twice the ship&#x02019;s natural roll frequency. If we consider a ship sailing in a given loading condition (and so with a constant natural roll frequency) in a specific seaway (frequency and amplitude), the only way of acting on the frequency ratio is by modifying the encounter frequency. This alternative is therefore based on ship speed and heading variations that take the vessel out of the risk area defined by <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in60.jpg"/>. This last approach is the one adopted in this work.</para>
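<para>The risk-area condition can be made concrete with the standard expression for the encounter frequency (in head seas the encounter frequency is higher than the wave frequency). The sketch below is illustrative only: the function names are ours, and the 1.8&#x02013;2.3 ratio band is taken from the test conditions reported later in this chapter, not from a general criterion.</para>

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def encounter_frequency(omega_wave, speed, heading_deg):
    """Standard encounter frequency omega_e = omega - omega^2 * U * cos(mu) / g,
    with mu = 180 degrees for head seas (so omega_e > omega)."""
    mu = math.radians(heading_deg)
    return omega_wave - omega_wave ** 2 * speed * math.cos(mu) / G

def in_risk_area(omega_wave, speed, heading_deg, omega_roll, band=(1.8, 2.3)):
    """Parametric-roll risk: the encounter frequency is close to twice the
    natural roll frequency; the band matches the ratios tested in this work."""
    ratio = encounter_frequency(omega_wave, speed, heading_deg) / omega_roll
    return band[0] <= ratio <= band[1]
```

<para>For instance, with the 7.48 s natural roll period reported below, a 1.4 rad/s head wave gives a frequency ratio inside the band at 1 m/s but below it at zero speed (illustrative numbers), which is exactly the kind of speed variation this approach exploits.</para>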
<section class="lev2" id="sec3-3-1">
<title>3.3.1 Prediction System</title>
<para>Regarding the prediction system, Artificial Neural Networks (ANN) have been used as a roll motion forecaster, exploiting their capabilities as nonlinear time series estimators, in order to subsequently detect any parametric roll event through the analysis of the forecasted time series [9]. In particular, multilayer perceptron neural networks (MPNNs) have been the structures selected to predict ship roll motion in different situations. The base neural network architecture used in this work is a multilayer perceptron network with two hidden layers, 30 neurons per layer, and one output layer. This initial structure is modified by increasing the number of neurons per layer, or the number of layers, in order to compare the prediction performance of different architectures and according to the complexity of the cases under analysis.</para>
<para>The network is fed with time series of roll motion, which are 20 seconds long and sampled at a frequency F<subscript>s</subscript> = 2 Hz; hence the input vector <emphasis role="strong">x</emphasis><superscript>0</superscript> has 40 components. The network has only one output, which is the one-step-ahead prediction. By substituting part of the input vector with the network output values and recursively executing the system, predictions extending beyond the initial 20 seconds can be obtained. However, as the number of iterations grows, the prediction performance deteriorates accordingly.</para>
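<para>The recursive execution described above can be sketched as follows. Here predict_one stands in for the trained multilayer perceptron (it is a placeholder of ours, not the trained network); the sliding-window feedback logic is the point of the sketch.</para>

```python
def recursive_forecast(predict_one, window, n_steps):
    """Multi-step roll forecast: feed each one-step-ahead prediction back
    into the 40-sample input window (20 s of roll at Fs = 2 Hz).
    `predict_one` stands in for the trained multilayer perceptron."""
    window = list(window)              # the most recent 40 roll samples
    forecast = []
    for _ in range(n_steps):
        y = predict_one(window)        # one-step-ahead prediction
        forecast.append(y)
        window = window[1:] + [y]      # slide: drop oldest, append prediction
    return forecast
```

<para>A 10-second forecast at 2 Hz corresponds to 20 recursive steps; as noted above, errors accumulate as predictions replace measured samples in the window.</para>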
<para>The length of the roll time series has been selected taking two factors into account. On the one hand, the natural roll period of the vessel chosen for the testing, which is 7.48 seconds in the loading conditions considered for these tests. On the other hand, the fact that parametric roll fully develops in a short period of time, usually no more than four rolling cycles [10].</para>
<para>The training of the artificial neural network forecaster has been achieved by using the roll time series obtained in the towing tank tests that will be described in the following section.</para>
<para>These algorithms have been implemented within the ship onboard control system, and their performance analyzed in some of the aforementioned towing tank tests.</para>
</section>
<section class="lev2" id="sec3-3-2">
<title>3.3.2 Prevention System</title>
<para>Once parametric resonance has been detected, the control system should modify the model speed or heading in order to prevent the development of the phenomenon. To do so, the concept of stability diagrams has been applied. For a given loading condition and for different forward speeds, these diagrams display the areas in which parametric roll takes place, as a function of wave height and of the ratio between encounter frequency and natural roll frequency. From the analysis of these regions, the risk state of the ship at every moment can be determined and its speed and heading modified accordingly to take the model out of the risk area.</para>
<para>In order to set up these stability diagrams, a mathematical model of the ship behavior has been developed, validated, and implemented within the ship control system in order to compute the stability areas for the different loading conditions tested.</para>
<para>This model is a one degree of freedom nonlinear model of the ship roll motion:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg61.jpg"/></para>
<para>where <emphasis>I<subscript>xx</subscript></emphasis> and <emphasis>A</emphasis><subscript>44</subscript> are, respectively, the mass and added mass moments of inertia in roll, <emphasis>B</emphasis><subscript>44, <emphasis>T</emphasis></subscript> represents the nonlinear damping term and <emphasis>C</emphasis><subscript>44</subscript>(&#x003D5;,<emphasis>t</emphasis>) is the time-varying nonlinear restoring coefficient. In this case, the moment and the added moment of inertia have been obtained by measuring the natural roll frequency in a zero-speed roll decay test, carried out with the developed ship scale model.</para>
<para>While modeling parametric roll resonance, it has been observed that both heave and pitch motions have a clear influence on the appearance of the phenomenon, together with the effects of the wave moving along the hull. Including the influence of these factors is therefore of paramount importance for a good performance of the mathematical model. In this work, they have been taken into account while computing the restoring term, by adopting the quasi-static &#x0201C;look-up table&#x0201D; approach described in [11] and required by the ABS Guidelines [12] for modelling the variation of the ship restoring capabilities in longitudinal waves.</para>
<para>Moreover, regarding roll damping, it has been shown that treating this parameter as highly nonlinear is essential for a good simulation of large amplitude roll motion. To account for this fact, roll damping has been decomposed into two terms: a linear component, which is dominant at small roll angles, and a quadratic one, which is necessary to model the effects of damping at large roll angles. This approach has also been applied, with accurate results, by many authors for modelling parametric roll, e.g. [13]. In order to determine the two damping coefficients (<emphasis>B</emphasis><subscript>44, <emphasis>a</emphasis></subscript>, <emphasis>B</emphasis><subscript>44, <emphasis>b</emphasis></subscript>) as accurately as possible, roll decay tests at different forward speeds have been carried out in still water for the loading condition under analysis. The procedure followed for determining the damping coefficients from these tests is described in [14].</para>
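<para>A time-domain integration of the one-degree-of-freedom roll equation can be sketched as below. This is not the validated model of the text: the restoring term is simplified here to a Mathieu-type variation <emphasis>C</emphasis><subscript>0</subscript>(1 + <emphasis>h</emphasis> cos &#x003C9;<subscript>e</subscript><emphasis>t</emphasis>)&#x003D5; instead of the quasi-static look-up table, and all coefficient values are illustrative.</para>

```python
import numpy as np

def simulate_roll(I44, B44a, B44b, C0, h, omega_e, t_end, dt=0.01, phi0=0.02):
    """Integrate (Ixx + A44) phi'' + B44a phi' + B44b phi'|phi'| + C44(phi,t) = 0
    with I44 = Ixx + A44 and a Mathieu-type restoring variation
    C44(phi, t) = C0 * (1 + h * cos(omega_e * t)) * phi, a stand-in for the
    quasi-static look-up-table restoring of the full model."""
    n = int(t_end / dt)
    phi = np.empty(n)
    phi[0] = phi0
    p = 0.0                                     # roll rate
    for k in range(n - 1):
        restoring = C0 * (1.0 + h * np.cos(omega_e * k * dt)) * phi[k]
        damping = B44a * p + B44b * p * abs(p)  # linear + quadratic damping
        p += -(damping + restoring) / I44 * dt  # semi-implicit Euler step
        phi[k + 1] = phi[k] + p * dt
    return phi
```

<para>With the parametric excitation tuned to twice the natural roll frequency and small damping, the simulated roll grows from a small initial disturbance until the quadratic damping saturates it; detuning the excitation makes the motion decay, qualitatively reproducing the resonant and non-resonant behavior discussed above.</para>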
<para>Once the model was correctly set up and validated, it was executed for different combinations of wave height and frequency; the maximum roll amplitude of each resulting time series was computed and the stability diagrams were developed.</para>
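<para>The diagram construction itself reduces to a grid sweep. In this sketch, max_roll stands in for one run of the validated mathematical model, and the roll-amplitude threshold separating resonant from non-resonant runs is an assumption of ours, not a value from the text.</para>

```python
def stability_diagram(max_roll, wave_heights, freq_ratios, threshold_deg=5.0):
    """Classify each (wave height, encounter/natural frequency ratio) grid
    point as resonant (True) or non-resonant (False), based on the maximum
    roll amplitude returned by one model run, `max_roll(H, ratio)`."""
    return {(H, r): max_roll(H, r) > threshold_deg
            for H in wave_heights for r in freq_ratios}
```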
</section>
<section class="lev2" id="sec3-3-3">
<title>3.3.3 Towing Tank Tests and Results</title>
<para>The proposed system has been used to perform different tests, some of which have been published elsewhere [15]. The main objective of these tests was to analyze, predict and prevent the phenomenon of parametric roll resonance. It is in this sort of test, characterized by large amplitude oscillations in both roll and pitch motions, that the proposed system performs best, as it can record information on board without disturbing the free motion of the ship model.</para>
<fig id="F3-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-7">Figure <xref linkend="F3-7" remap="3.7"/></link></label>
<caption><para>Roll and pitch motions in parametric roll resonance. Conventional carriage-towed model.</para></caption>
<graphic xlink:href="graphics/ch03_fig08.jpg"/>
</fig>
<para>To illustrate the influence of the towing device on the measurements obtained in this kind of test, <link linkend="F3-7">Figure <xref linkend="F3-7" remap="3.7"/></link> presents the pitch and roll motions of a conventional carriage-towed model (<link linkend="F3-8">Figure <xref linkend="F3-8" remap="3.8"/></link>) in a similar parametric rolling test.</para>
<fig id="F3-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-8">Figure <xref linkend="F3-8" remap="3.8"/></link></label>
<caption><para>Conventional carriage-towed model during testing.</para></caption>
<graphic xlink:href="graphics/ch03_fig07.jpg"/>
</fig>
<para>As can be observed, the ship pitch motion presents a series of peaks (the most relevant at seconds 140, 180, 220 and 300), which are due to the interference of the towing device. These interferences not only influence the model pitch motion, but could also affect the development of parametric roll and thus the reliability of the test.</para>
<para>The test campaign has been carried out in the towing tank of the Escuela T&#x00E9;cnica Superior de Ingenieros Navales of the Technical University of Madrid. This tank is 100 meters long, 3.8 meters wide and 2.2 meters deep. It is equipped with a screen-type wave generator, directed by wave generation software and capable of generating longitudinal regular and irregular waves according to a broad set of parameters and spectra. The basin is also equipped with a towing carriage able to reach a speed of up to 4.5 m/s. As already mentioned, in this test campaign trials at different forward speeds and also at zero speed have been carried out.</para>
<para>Regarding the zero-speed runs, in order to keep the model in position while avoiding as much as possible any interference of the restraining devices with the ship motions, two fixing ropes with two springs have been attached between the sides of the basin and a rotary element fixed to the model bow. Moreover, another restraining rope has been fitted between the stern of the model and the towing carriage, positioned just behind it. However, this last rope has been kept loose and partially immersed, which was enough to keep the model head to seas without producing a major influence on its motion. In the forward-speed test runs, the model has been left sailing completely free, with the exception of a security rope that would be used only in case control of the ship was lost.</para>
<para>In order to set the adequate speed for each test run, a prior calibration for different wave conditions has been carried out to establish the engine output power (engine servo command) needed to reach the desired speed as a function of wave height and frequency. The exact speed reached in each test run has been measured by following the model with the towing carriage, which provides the instantaneous speed along the run. The calibration curve has been updated as the different runs were completed, providing more information for subsequent tests. However, considering that the model is free to move in its six degrees of freedom, the instantaneous speed of the model may be affected by surge motion, especially in the conditions with the highest waves.</para>
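<para>The calibration procedure can be pictured as a table of (command, achieved speed) pairs kept per wave condition, interpolated to pick the command for a target speed and refined after each completed run. The class below is purely illustrative; the names and structure are ours, not those of the onboard software.</para>

```python
import numpy as np

class SpeedCalibration:
    """Engine servo command vs. achieved speed, stored per wave condition
    (height, frequency) and updated after every completed run."""

    def __init__(self):
        self.runs = {}  # (height, freq) -> list of (command, speed) pairs

    def record(self, height, freq, command, speed):
        """Add the outcome of one run to the calibration table."""
        self.runs.setdefault((height, freq), []).append((command, speed))

    def command_for(self, height, freq, target_speed):
        """Interpolate the servo command needed for a target speed."""
        pts = sorted(self.runs[(height, freq)], key=lambda cs: cs[1])
        speeds = [s for _, s in pts]
        commands = [c for c, _ in pts]
        # linear interpolation of the command over the measured speeds
        return float(np.interp(target_speed, speeds, commands))
```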
<fig id="F3-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-9">Figure <xref linkend="F3-9" remap="3.9"/></link></label>
<caption><para>Proposed model during testing.</para></caption>
<graphic xlink:href="graphics/ch03_fig09.jpg"/>
</fig>
<para>A total of 105 test runs have been carried out in regular head waves. Different combinations of wave height (ranging from 0.255 m to 1.245 m at model scale) and of the ratio between encounter frequency and natural roll frequency (from 1.80 to 2.30) have been considered for three different values of forward speed (Froude numbers 0.1, 0.15 and 0.2) and for zero speed, and for two different values of metacentric height (0.370 m and 0.436 m). From the whole set of test runs, 55 correspond to the 0.370 m GM case, while 50 correspond to a GM of 0.436 m.</para>
<para>The results obtained from the different test runs have been used for determining the ship roll damping coefficients, for validating the performance of the mathematical model described in the previous subsection and the correctness of the stability diagrams computed with it, and for training and testing the ANN detection system.</para>
<fig id="F3-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-10">Figure <xref linkend="F3-10" remap="3.10"/></link></label>
<caption><para>Roll and pitch motions in parametric roll resonance. Proposed model.</para></caption>
<graphic xlink:href="graphics/ch03_fig10.jpg"/>
</fig>
<para>In addition, the pitch and roll motions obtained with the proposed model are presented in <link linkend="F3-10">Figure <xref linkend="F3-10" remap="3.10"/></link>, for the sake of comparison with the results obtained with the conventional model (<link linkend="F3-7">Figure <xref linkend="F3-7" remap="3.7"/></link>). As can be seen, the pitch time series does not present the peaks observed in the conventional model measurements, as no interference between model and carriage occurs in this case.</para>
<fig id="F3-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-11">Figure <xref linkend="F3-11" remap="3.11"/></link></label>
<caption><para>Comparison between experimental and numerical data. Resonant case.</para></caption>
<graphic xlink:href="graphics/ch03_fig11.jpg"/>
</fig>
<fig id="F3-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-12">Figure <xref linkend="F3-12" remap="3.12"/></link></label>
<caption><para>Comparison between experimental and numerical data. Non-Resonant case.</para></caption>
<graphic xlink:href="graphics/ch03_fig12.jpg"/>
</fig>
<section class="lev3" id="sec3-3-3-1">
<title>3.3.3.1 Mathematical model validation</title>
<para>As a first step, the results obtained from the towing tank tests have been used to validate the ability of the nonlinear roll motion mathematical model to predict the amplitude of the resulting roll motion.</para>
<para>In most cases, the results obtained from the model are quite accurate, and correctly simulate the roll motion of the ship both in resonant and non-resonant conditions.</para>
<para>Examples of both situations can be seen in Figures 3.11 and 3.12.</para>
</section>
<section class="lev3" id="sec3-3-3-2">
<title>3.3.3.2 Validation of stability diagrams</title>
<para>Once the model has been validated, it has been recursively executed for a set of combinations of different wave frequencies and heights, covering the frequency ratio range from 1.70 to 2.40 and wave heights from 0.20 to 1.20 m, at each operational condition under consideration (four speeds), in order to set up the stability diagrams. Once computed, the results have been compared to those obtained in the towing tank tests. To illustrate this comparison, the results obtained from these experiments (<link linkend="F3-13">Figure <xref linkend="F3-13" remap="3.13"/></link>) have been superimposed on the corresponding mathematical plots (<link linkend="F3-14">Figure <xref linkend="F3-14" remap="3.14"/></link>); light dots represent non-resonant conditions, while resonance is shown by the dark dots.</para>
<fig id="F3-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-13">Figure <xref linkend="F3-13" remap="3.13"/></link></label>
<caption><para>Experimental stability diagrams. Fn = 0.1 (left), Fn = 0.15 (right), GM = 0.370 m.</para></caption>
<graphic xlink:href="graphics/ch03_fig0013.jpg"/>
</fig>
<fig id="F3-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-14">Figure <xref linkend="F3-14" remap="3.14"/></link></label>
<caption><para>Comparison between experimental and numerical stability diagrams. Fn = 0.1 (left), Fn = 0.15 (right), GM = 0.370 m.</para></caption>
<graphic xlink:href="graphics/ch03_fig14.jpg"/>
</fig>
<para>As can be seen in <link linkend="F3-14">Figure <xref linkend="F3-14" remap="3.14"/></link>, the shape of the unstable region matches the results obtained in the towing tank tests, which, together with the accurate results obtained while computing roll amplitude, shows a good performance of the mathematical model.</para>
</section>
<section class="lev3" id="sec3-3-3-3">
<title>3.3.3.3 Prediction system tests</title>
<para>To forecast the onset and development of the parametric roll phenomenon, standard multilayer perceptron ANNs have been used. Several ANN architectures were tested, and the overall best results were obtained with 3 layers of 30 neurons each.</para>
<para>The training cases for the ANNs have been obtained from the experiments carried out with different values of wave frequency and amplitude at a Froude number of 0.2, consisting of 14 time series averaging a full-scale length of 420 seconds. IMU output data was sampled by the on-board computer at a rate of 50 data sets per second. As the roll period of this particular ship model is of several seconds, the data used for training the ANNs was under-sampled to 2 Hz, which was more than enough to capture the events and allowed the size of the data set to be reduced. This resulted in a total of 11169 training cases.</para>
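<para>The construction of the training set from the raw measurements can be sketched as follows: under-sample each 50 Hz roll series to 2 Hz, then slide a 40-sample window over it, taking the sample that follows each window as the one-step-ahead target. With 14 series averaging 420 seconds, this yields on the order of the 11169 cases reported. Function and variable names are illustrative.</para>

```python
import numpy as np

def build_training_set(series_50hz, decimation=25, window=40):
    """Turn raw 50 Hz roll time series into (input, target) training pairs:
    every 25th sample gives the 2 Hz rate; each 40-sample window (20 s)
    is an input vector, and the next sample is the prediction target."""
    X, y = [], []
    for series in series_50hz:
        roll = np.asarray(series)[::decimation]  # 50 Hz -> 2 Hz
        for i in range(len(roll) - window):
            X.append(roll[i:i + window])         # 40-component input vector
            y.append(roll[i + window])           # one-step-ahead target
    return np.array(X), np.array(y)
```

<para>A single 420-second series sampled at 50 Hz (21000 samples) reduces to 840 samples at 2 Hz and thus yields 800 training pairs.</para>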
<para>The encounter frequency &#x02013; natural roll frequency ratio ranged from 1.8 to 2.3, implying that there were cases where parametric roll was not present. Thanks to this, the performance of the system could be evaluated in conditions where only small roll amplitudes appear, due to small transversal excitations, or where roll motions decrease (cases that were not present in the mathematical model tests, as no excitation other than head waves was present).</para>
<para>The tests have been performed in both regular and irregular waves, in cases ranging from small to heavy parametric roll. In regular waves, the RMS error when predicting 10 seconds ahead has been of the order of 10<superscript>-3</superscript> in cases presenting large roll amplitudes, reducing to 10<superscript>-4</superscript> in cases with small amplitudes.</para>
<fig id="F3-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-15">Figure <xref linkend="F3-15" remap="3.15"/></link></label>
<caption><para>Forecast results. 30-neuron, 3-layer MP. 10-second prediction. Resonant case.</para></caption>
<graphic xlink:href="graphics/ch03_fig15.jpg"/>
</fig>
<fig id="F3-16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-16">Figure <xref linkend="F3-16" remap="3.16"/></link></label>
<caption><para>Forecast results. 30-neuron, 3-layer MP. 10-second prediction. Non-resonant case.</para></caption>
<graphic xlink:href="graphics/ch03_fig16.jpg"/>
</fig>
<para>Two examples of these predictions are shown in Figures 3.15 and 3.16, which include one case with fully developed parametric roll and another one without any resonant motions. In these figures, the ANN forecast is represented by the dotted line, while the real data is represented by the solid line. <link linkend="F3-15">Figure <xref linkend="F3-15" remap="3.15"/></link> is a typical example of a case presenting large amplitudes, while in <link linkend="F3-16">Figure <xref linkend="F3-16" remap="3.16"/></link> no resonant motion takes place. As can be observed, the results are very good in both cases.</para>
<para>Further details of the characteristics and performance of the forecasting ANN system have been presented by the authors in [16]. There, the forecasting system has been implemented on a ship model instrumented with accelerometers and tested by using standard towing tank methods. The data used for the <link linkend="F3-7">Figure <xref linkend="F3-7" remap="3.7"/></link> plot has been obtained during this testing campaign.</para>
</section>
</section></section>
<section class="lev1" id="sec3-4">
<title>3.4 Conclusions and Future Work</title>
<para>The development and implementation of an autonomous scale ship model for towing tank testing has been presented, as well as some of the results obtained with it during a real towing tank test campaign. The system is intended to be installed on board self-propelled models, acting as an autopilot that controls speed and track, the latter by maintaining course and keeping the model centered in the tank. It also has an IMU with a 3-axis accelerometer, a gyroscope and a magnetometer and, in addition, it measures the torque, rotational speed and propulsive force at the propeller. A model ship so instrumented is able to move without any restriction in any of its six degrees of freedom; consequently, the system produces optimal measurements even in test cases presenting motions of large amplitude.</para>
<para>At its present development stage, the system only needs to use the towing carriage as a reference for speed and position. A more advanced version that could eliminate the use of this carriage is under development. The towing carriage, together with its rails, propulsion and instrumentation, is a very costly piece of hardware. The final version of the system could be constructed at a fraction of this cost, and it would turn the facility into a true towless towing tank, as it would allow performing any standard towing tank test without the need for an actual tow.</para>
</section>
<section class="lev1" id="sec3-5">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>T. I. Fossen, &#x02018;Handbook of Marine Craft Hydrodynamics and Motion Control&#x02019;, John Wiley &#x00026; Sons, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=T%2E+I%2E+Fossen%2C+%27Handbook+of+Marine+Craft+Hydrodynamics+and+Motion+Control%27%2C+John+Wiley+%26+Sons%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>W. N. France, M. Levadou, T. W. Treakle, J. R. Paulling, R. K. Michel and C. Moore, &#x02018;An investigation of head-sea parametric rolling and its influence on container lashing systems&#x02019;, Marine Technology, vol. 40(1), pp. 1&#x02013;19, 2003. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=W%2E+N%2E+France%2C+M%2E+Levadou%2C+T%2E+W%2E+Treakle%2C+J%2E+R%2E+Paulling%2C+R%2E+K%2E+Michel+and+C%2E+Moore%2C+%27An+investigation+of+head-sea+parametric+rolling+and+its+influence+on+container+lashing+systems%27%2C+Marine+Technology%2C+vol%2E+40%281%29%2C+pp%2E+1-19%2C+2003%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Francescutto, &#x02018;An experimental investigation of parametric rolling in head waves&#x02019;, Journal of Offshore Mechanics and Arctic Engineering, vol. 123, pp. 65&#x02013;69, 2001. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Francescutto%2C+%27An+experimental+investigation+of+parametric+rolling+in+head+waves%27%2C+Journal+of+Offshore+Mechanics+and+Arctic+Engineering%2C+vol%2E+123%2C+pp%2E+65-69%2C+2001%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>I. Drummen, &#x02018;Experimental and Numerical Investigation of Nonlinear Wave-Induced Load Effects in Containerships considering Hydroelasticity&#x02019;, PhD Thesis, Norwegian University of Science and Technology, Norway, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=I%2E+Drummen%2C+%27Experimental+and+Numerical+Investigation+of+Nonlinear+Wave-Induced+Load+Effects+in+Containerships+considering+Hydroelasticity%27%2C+PhD+Thesis%2C+Norwegian+University+of+Science+and+Technology%2C+Norway%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>International Towing Tank Conference (ITTC), &#x02018;Testing and Extrapolation Methods. Loads and Responses, Stability. Model Tests on Intact Stability&#x02019;, ITTC 7.5&#x02013;02-07&#x02013;04.1. 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=International+Towing+Tank+Conference+%28ITTC%29%2C+%27Testing+and+Extrapolation+Methods%2E+Loads+and+Responses%2C+Stability%2E+Model+Tests+on+Intact+Stability%27%2C+ITTC+7%2E5-02-07-04%2E1%2E+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R. Brown, P. Hwang, &#x02018;Introduction to Random Signals and Applied Kalman Filtering, Second Edition&#x02019;, John Wiley and Sons Inc., 1992. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2E+Brown%2C+P%2E+Hwang%2C+%27Introduction+to+Random+Signals+and+Applied+Kalman+Filtering%2C+Second+Edition%27%2C+John+Wiley+and+Sons+Inc%2E%2C+1992%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>E. R. Bachmann, &#x02018;Inertial and Magnetic Tracking of Limb Segment Orientation for Inserting Humans into Synthetic Environments&#x02019;, PhD Thesis, Naval Postgraduate School, USA, 2000. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=E%2E+R%2E+Bachmann%2C+%27Inertial+and+Magnetic+Tracking+of+Limb+Segment+Orientation+for+Inserting+Humans+into+Synthetic+Environments%27%2C+PhD+Thesis%2C+Naval+Posgraduate+School%2C+USA%2C+2000%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>K. D&#x00F8;hlie, &#x02018;Parametric Roll - a problem solved?&#x02019;, DNV Container Ship Update, 1, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+D%F8hlie%2C+%27Parametric+Roll+-+a+problem+solved%B4%27%2C+DNV+Container+Ship+Update%2C+1%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R. M. Golden, &#x02018;Mathematical methods for neural network analysis and design&#x02019;, The MIT Press, 1996. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2E+M%2E+Golden%2C+%27Mathematical+methods+for+neural+network+analysis+and+design%27%2C+The+MIT+Press%2C+1996%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>International Maritime Organization (IMO), &#x02018;Revised Guidance to the Master for Avoiding Dangerous Situations in Adverse Weather and Sea Conditions (Vol. IMO MSC.1/Circ. 1228)&#x02019;, IMO Maritime Safety Committee, 82<superscript>nd</superscript> session, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=International+Maritime+Organization+%28IMO%29%2C+%27Revised+Guidance+to+the+Master+for+Avoiding+Dangerous+Situations+in+Adverse+Weather+and+Sea+Conditions+%28Vol%2E+IMO+MSC%2E1%2FCirc%2E+1228%29%27%2C+IMO+Maritime+Safety+Committee%2C+82nd+session%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. Bulian, &#x02018;Nonlinear parametric rolling in regular waves - a general procedure for the analytical approximation of the GZ curve and its use in time domain simulations&#x02019;, Ocean Engineering, 32 (3&#x02013;4), pp. 309&#x02013;330, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+Bulian%2C+%27Nonlinear+parametric+rolling+in+regular+waves+-+a+general+procedure+for+the+analytical+approximation+of+the+GZ+curve+and+its+use+in+time+domain+simulations%27%2C+Ocean+Engineering%2C+32+%283-4%29%2C+pp%2E+309-330%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>American Bureau of Shipping (ABS), &#x02018;Guide for the Assessment of Parametric Roll Resonance in the Design of Container Carriers&#x02019;, 2004. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=American+Bureau+of+Shipping+%28ABS%29%2C+%27Guide+for+the+Assessment+of+Parametric+Roll+Resonance+in+the+Design+of+Container+Carriers%27%2C+2004%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. A. S. Neves, C. A. Rodr&#x00ED;guez, &#x02018;On unstable ship motions resulting from strong non-linear coupling&#x02019;, Ocean Engineering, 33 (14, 15), 1853&#x02013;1883, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+A%2E+S%2E+Neves%2C+C%2E+A%2E+Rodr%EDguez%2C+%27On+unstable+ship+motions+resulting+from+strong+non-linear+coupling%27%2C+Ocean+Engineering%2C+33+%2814%2C+15%29%2C+1853-1883%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Himeno, &#x02018;Prediction of Ship Roll Damping. A State of the Art&#x02019;, Department of Naval Architecture and Marine Engineering, The University of Michigan College of Engineering, USA, 1981. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Himeno%2C+%27Prediction+of+Ship+Roll+Damping%2E+A+State+of+the+Art%27%2C+Department+of+Naval+Architecture+and+Marine+Engineering%2C+The+University+of+Michigan+College+of+Engineering%2C+USA%2C+1981%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. M&#x00ED;guez Gonz&#x00E1;lez, F. L&#x00F3;pez Pe&#x00F1;a, V. D&#x00ED;az Cas&#x00E1;s, L. P&#x00E9;rez Rojas, &#x02018;Experimental Parametric Roll Resonance Characterization of a Stern Trawler in Head Seas&#x02019;, Proceedings of the 11<superscript>th</superscript> International Conference on the Stability of Ships and Ocean Vehicles, Athens, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+M%EDguez+Gonz%E1lez%2C+F%2E+L%F3pez+Pe%F1a%2C+V%2E+D%EDaz+Cas%E1s%2C+L%2E+P%E9rez+Rojas%2C+%27Experimental+Parametric+Roll+Resonance+Characterization+of+a+Stern+Trawler+in+Head+Seas%27%2C+Proceedings+of+the+11th+International+Conference+on+the+Stability+of+Ships+and+Ocean+Vehicles%2C+Athens%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. L&#x00F3;pez Pe&#x00F1;a, M. M&#x00ED;guez Gonz&#x00E1;lez, V. D&#x00ED;az Cas&#x00E1;s, R. J. Duro, D. Pena Agras, &#x02018;An ANN Based System for Forecasting Ship Roll Motion&#x02019;, Proceedings of the 2013 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, Milano, Italy, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+L%F3pez+Pe%F1a%2C+M%2E+M%EDguez+Gonz%E1lez%2C+V%2E+D%EDaz+Cas%E1s%2C+R%2E+J%2E+Duro%2C+D%2E+Pena+Agras%2C+%27An+ANN+Based+System+for+Forecasting+Ship+Roll+Motion%27%2C+Proceedings+of+the+2013+IEEE+International+Conference+on+Computational+Intelligence+and+Virtual+Environments+for+Measurement+Systems+and+Applications%2C+Milano%2C+Italy%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch04" label="4" xreflabel="4">
<title>Autonomous Knowledge Discovery Based on Artificial Curiosity-Driven Learning by Interaction</title>
<para><emphasis role="strong">K. Madani, D. M. Ramik and C. Sabourin</emphasis></para>
<para>Images, Signals &#x00026; Intelligent Systems Lab. (LISSI / EA 3956), University PARIS-EST Cr&#x00E9;teil (UPEC) &#x02013; S&#x00E9;nart-FB Institute of Technology, Lieusaint, France</para>
<para>Corresponding author: K. Madani &lt;madani@u-pec.fr&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>In this work, we investigate the development of a real-time intelligent system allowing a robot to discover its surrounding world and to learn autonomously new knowledge about it by semantically interacting with humans. The learning is performed by observation and by interaction with a human. We describe the system in a general manner, and then we apply it to autonomous learning of objects and their colors. We provide experimental results both using simulated environments and implementing the approach on a humanoid robot in a real-world environment including every-day objects. We show that our approach allows a humanoid robot to learn without negative input and from a small number of samples.</para>
<para><emphasis role="strong">Keywords:</emphasis> Visual saliency, autonomous learning, intelligent system, artificial curiosity, automated interpretation, semantic robot-human interaction</para>
</section>
<section class="lev1" id="sec4-1">
<title>4.1 Introduction</title>
<para>In recent years, there has been substantial progress in robotic systems able to robustly recognize objects in the real world using a large database of pre-collected knowledge (see [1] for a notable example). There has been, however, comparatively less progress in the autonomous acquisition of such knowledge: while contemporary robots are often fully automatic, they are rarely fully autonomous in their knowledge acquisition. The former progress is unsurprising given the last decades&#x02019; significant developments in methodological and algorithmic approaches to visual information processing, pattern recognition and artificial intelligence; the slow progress in autonomous knowledge acquisition is equally understandable given the complexity of the additional skills needed to achieve such a &#x0201C;cognitive&#x0201D;, rather than merely algorithmic, task.</para>
<para>The emergence of cognitive phenomena in machines has been and remains an active focus of research since the rise of Artificial Intelligence (AI) in the middle of the last century, but the fact that human-like machine cognition is still beyond the reach of contemporary science only shows how difficult the problem is. Nowadays there are many systems, such as sensors, computers or robotic bodies, that outperform human capacities; nonetheless, none of the existing robots can be called truly intelligent. In other words, robots sharing everyday life with humans are still far away. Partly, this is because we are still far from fully understanding the human cognitive system; partly, it is because it is not easy to emulate human cognitive skills and the complex mechanisms underlying those skills. Nevertheless, the concepts of bio-inspired or human-like machine cognition remain the foremost sources of inspiration for achieving intelligent systems (intelligent machines, intelligent robots, etc.). This is the path we have taken (i.e. inspiration from biological and human knowledge-acquisition mechanisms) to design the investigated human-like cognition-based system able to acquire high-level semantic knowledge from visual information (i.e. from observation). It is important to emphasize that the term &#x0201C;cognitive system&#x0201D; means here that the characteristics of such a system tend toward those of human cognitive systems. A system that could comprehend the surrounding world on its own, but whose comprehension was non-human, would be incapable of communicating about it with its human counterparts. Human-inspired knowledge representation and human-like (namely semantic) communication about the acquired knowledge thus become key requirements for such a system. 
To achieve the aforementioned capabilities, such a cognitive system should be able to develop its own high-level representation of facts from low-level visual information (such as images). In accordance with the expected autonomy, the processing from the &#x0201C;sensory level&#x0201D; (namely the visual level) to the &#x0201C;semantic level&#x0201D; should be performed solely by the robot, without human supervision. However, this does not mean excluding interaction with humans, which is, on the contrary, vital for any cognitive system, be it human or machine. Thus, the investigated system has to share its perceptual high-level knowledge of the world with the human by interacting with him. The human in turn shares with the cognitive robot his knowledge about the world using natural speech (utterances), completing the observations made by the robot.</para>
<para>In fact, if a humanoid robot is required to learn to share the living space with its human counterparts and to reason about it in &#x0201C;human terms&#x0201D;, it has to face at least two important challenges. One, coming from the world itself, is the vast number of objects and situations the robot may encounter in the real world. The other comes from the richness of the ways humans use natural language to refer to those objects and situations. Moreover, the way we perceive the world and speak about it is strongly culturally dependent. This is shown in [2] regarding the usage of color terms by different people around the world, and in [3] regarding cultural differences in the description of spatial relations. A robot expected to overcome those challenges cannot rely solely on a priori knowledge given to it by a human expert. On the contrary, it should be able to learn on-line, within the environment in which it evolves and by interaction with the people it encounters in that environment (see [4] for a survey on human-robot interaction and learning and [5] for an overview of the problem of anchoring). This learning should be completely autonomous, but still able to benefit from interaction with humans in order to acquire their way of describing the world. This inherently requires that the robot be able to learn without explicit negative evidence or a &#x0201C;negative training set&#x0201D;, and from a relatively small number of samples. This important capacity is observed in children learning language [6]. The problem has been addressed to different degrees in various works. For example, in [7] a computational model of word-meaning acquisition by interaction is presented. In [8], the authors present a computational model for the acquisition of a lexicon describing simple objects. In [9], a humanoid robot is taught to associate simple shapes with a human lexicon. 
In [10], a humanoid robot is taught through dialog with untrained users with the aim of learning different objects and grasping them properly. More advanced works on robots&#x02019; autonomous learning and dialog are given in [11, 12].</para>
<para>In this chapter, we describe an intelligent system allowing robots (for example, humanoid robots) to learn and to interpret the world in which they evolve using appropriate terms from human language, while not making use of a priori knowledge. This is done by word-meaning anchoring based on learning by observation and by interaction with a human counterpart. Our model is closely inspired by human infants&#x02019; early-age learning behaviour (e.g. see [13, 14]). The goal of this system is to allow a humanoid robot to anchor the heard terms to its sensory-motor experience and to flexibly shape this anchoring according to its growing knowledge about the world. The described system can play a key role in linking existing object extraction and learning techniques (e.g. SIFT matching or salient-object extraction) on one side, and ontologies on the other. The former are closely related to perceptual reality but are unaware of the meaning of the objects they treat, while the latter can represent complex semantic knowledge about the world but are unaware of the perceptual reality of the concepts they handle.</para>
<para>The rest of this chapter is structured as follows. Section 4.2 describes the architecture of the proposed approach. In this section, we detail our approach by outlining its architecture and principles, we explain how beliefs about the world are generated and evaluated by the robot and we describe the role of human-robot interaction in the learning process. Validation of the presented system on colors learning and interpretation, using simulation facilities, is reported in Section 4.3. Section 4.4 focuses on the implementation and validation of the proposed approach on a real robot in a real-world environment. Finally, Section 4.5 discusses the achieved results and outlines future work.</para>
</section>
<section class="lev1" id="sec4-2">
<title>4.2 Proposed System and Role of Curiosity</title>
<para>Curiosity is a key skill for human cognition, and thus it is an appealing concept for conceiving artificial systems that gather knowledge, especially when they are supposed to gather it autonomously. According to Berlyne&#x02019;s theory of human curiosity [15], two kinds of curiosity stimulate the human cognitive mechanism. The first is the so-called &#x0201C;perceptual curiosity&#x0201D;, which leads to increased perception of stimuli. It is a lower-level function, related to the perception of new, surprising or unusual sensory input, and to reflexive or repetitive perceptual experiences. The other is &#x0201C;epistemic curiosity&#x0201D;, which is more related to the &#x0201C;desire for knowledge that motivates individuals to learn new ideas, to eliminate information-gaps, and to solve intellectual problems&#x0201D;.</para>
<para>According to [16] and [17], the general concept of the presented architecture includes an unconscious visual level, which may contain a number of Unconscious Cognitive Functions (UCF), and a conscious visual level, which may contain a number of Conscious Cognitive Functions (CCF). In line with the aforementioned two kinds of curiosity, an example of knowledge extraction from visual perception involving both kinds is shown in <link linkend="F4-1">Figure <xref linkend="F4-1" remap="4.1"/></link>. Perceptual curiosity stimulates what we call low-level knowledge acquisition and concerns the &#x0201C;reflexive&#x0201D; (unconscious) processing level: it seeks &#x0201C;surprising&#x0201D; or &#x0201C;attention-drawing&#x0201D; information in the given visual data, a task realized by perceptual saliency-detection mechanisms. This provides the basis for high-level knowledge acquisition, which is stimulated by epistemic curiosity: the drive to &#x0201C;learn new ideas, eliminate information-gaps, and solve intellectual problems&#x0201D;, such as interpreting visual information or generating beliefs about the observed objects.</para>
<fig id="F4-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-1">Figure <xref linkend="F4-1" remap="4.1"/></link></label>
<caption><para>General block diagram of the proposed curiosity-driven architecture (left) and principle of the curiosity-based stimulation-satisfaction mechanism for knowledge acquisition (right).</para></caption>
<graphic xlink:href="graphics/ch04_fig001.jpg"/>
</fig>
<para>Learning brings with it the inherent problem of distinguishing pertinent sensory information from impertinent information. The solution is not obvious even if joint attention is achieved in the robot, as illustrated in <link linkend="F4-2">Figure <xref linkend="F4-2" remap="4.2"/></link>. If a human points to one object (e.g. an apple) among many others and describes it as &#x0201C;red&#x0201D;, the robot still has to distinguish which of the detected colors and shades of the object the human is referring to.</para>
<fig id="F4-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-2">Figure <xref linkend="F4-2" remap="4.2"/></link></label>
<caption><para>A human would describe this apple as &#x0201C;red&#x0201D; despite the fact that this is not the only visible color.</para></caption>
<graphic xlink:href="graphics/ch04_fig002.jpg"/>
</fig>
<para>To achieve correct anchoring in spite of such uncertainty, we adopt the following strategy. The robot extracts features from the important objects found in the scene, along with the words the tutor used to describe those objects. The robot then generates beliefs about which word could describe which feature. The beliefs are used as organisms in a genetic algorithm, in which the choice of fitness function is of major importance. To calculate the fitness, we train a classifier based on each belief and use it to interpret the objects the robot has already seen. We compare the utterances pronounced by the human tutor in the presence of each such object with the utterances the robot would use to describe it under the current belief. The closer the robot&#x02019;s description is to the one given by the human, the higher the fitness. Once the evolution has finished, the belief with the highest fitness is adopted by the robot and is used to interpret occurrences of new (unseen) objects. <link linkend="F4-3">Figure <xref linkend="F4-3" remap="4.3"/></link> depicts the important parts of the system proposed in this chapter.</para>
<fig id="F4-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-3">Figure <xref linkend="F4-3" remap="4.3"/></link></label>
<caption><para>A human would describe this toy frog as green despite the fact that this is not the only visible color.</para></caption>
<graphic xlink:href="graphics/ch04_fig003.jpg"/>
</fig>
<section class="lev2" id="sec4-2-1">
<title>4.2.1 Interpretation from Observation</title>
<para>Let us suppose a robot equipped with a sensor observing the surrounding world. The world is represented as a set of features <emphasis>I = {i<subscript>1</subscript>, i<subscript>2</subscript>, ..., i<subscript>k</subscript>}</emphasis>, which can be acquired by this sensor [18]. Each time the robot makes an observation <emphasis>o,</emphasis> a human tutor gives it a set of utterances <emphasis>U<subscript>m</subscript></emphasis> describing the important (e.g. salient) objects found. Let us denote the set of all utterances ever given about the world as <emphasis>U.</emphasis> The observation <emphasis>o</emphasis> is defined as an ordered pair <emphasis>o = {I<subscript>l</subscript>, U<subscript>m</subscript>}</emphasis>, where <emphasis>I<subscript>l</subscript> &#x02286; I</emphasis>, expressed by Equation <emphasis role="up">( <xref rid="#x1-4001r1"><!--ref: GrindEQ__4_1_--></xref>)</emphasis>, stands for the set of features obtained from the observation and <emphasis>U<subscript>m</subscript> &#x02286; U</emphasis> is the set of utterances (describing <emphasis>o</emphasis>) given in the context of that observation. <emphasis>i<subscript>p</subscript></emphasis> denotes the information pertinent to a given <emphasis>u</emphasis> (i.e. features that can be described semantically as <emphasis>u</emphasis> in the language used for communication between the human and the robot), <emphasis>i<subscript>i</subscript></emphasis> the impertinent information (i.e. features that are not described by the given <emphasis>u,</emphasis> but might be described by another <emphasis>u<subscript>i</subscript> &#x02208; U</emphasis>), and &#x1D700; the sensor noise. The goal for the robot is to distinguish the pertinent information present in the observation from the impertinent one and to correctly map the utterances to the appropriate perceived stimuli (features). In other words, the robot is required to establish a word-meaning relationship between the uttered words and its own perception of the world. The robot is further allowed to interact with the human in order to clarify or verify its interpretations.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq4-1.jpg"/></para>
<para>Let us define an interpretation <emphasis>X(u) = {u, I<subscript>j</subscript>}</emphasis> of an utterance <emphasis>u</emphasis> as an ordered pair, where <emphasis>I<subscript>j</subscript> &#x02286; I</emphasis> is a set of features from <emphasis>I</emphasis>. The belief <emphasis>B</emphasis> is then defined according to Equation <emphasis role="up">( <xref rid="#x1-4002r2"><!--ref: GrindEQ__4_2_--></xref>)</emphasis> as an ordered set of interpretations <emphasis>X(u)</emphasis> of the utterances <emphasis>u</emphasis> from <emphasis>U.</emphasis></para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq4-2.jpg"/></para>
<para>According to the criterion expressed by <emphasis role="up">( <xref rid="#x1-4003r3"><!--ref: GrindEQ__4_3_--></xref>)</emphasis>, one can calculate the belief <emphasis>B</emphasis> that interprets the observations made so far in the most coherent way: in other words, one looks for the belief that minimizes, across all observations <emphasis>o<subscript>q</subscript> &#x02208; O</emphasis>, the difference between the utterances <emphasis>U<subscript>Hq</subscript></emphasis> made by the human and the utterances <emphasis>U<subscript>Bq</subscript></emphasis> made by the system using the belief <emphasis>B.</emphasis> Thus, <emphasis>B</emphasis> is a mapping from the set <emphasis>U</emphasis> to <emphasis>I</emphasis>: every member of <emphasis>U</emphasis> maps to one or more members of <emphasis>I</emphasis>, and no two members of <emphasis>U</emphasis> map to the same member of <emphasis>I.</emphasis></para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq4-3.jpg"/></para>
<para><link linkend="F4-4">Figure <xref linkend="F4-4" remap="4.4"/></link> gives, by way of example, a schematic view of the defined notions and their relationships. It depicts a scenario in which two observations <emphasis>o</emphasis><subscript>1</subscript> and <emphasis>o</emphasis><subscript>2</subscript> are made, corresponding to two descriptions <emphasis>U</emphasis><subscript>1</subscript> and <emphasis>U</emphasis><subscript>2</subscript> of those observations, respectively.</para>
<fig id="F4-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-4">Figure <xref linkend="F4-4" remap="4.4"/></link></label>
<caption><para>Block diagram of the relations between observations, features, beliefs and utterances, in the sense of the terms defined in the text.</para></caption>
<graphic xlink:href="graphics/ch04_fig04.jpg"/>
</fig>
<para>On the first observation, features <emphasis>i</emphasis><subscript>1</subscript> and <emphasis>i</emphasis><subscript>2</subscript> were obtained along with utterances <emphasis>u</emphasis><subscript>1</subscript> and <emphasis>u</emphasis><subscript>2</subscript>, respectively. Likewise, for the second observation, features <emphasis>i</emphasis><subscript>3</subscript>, <emphasis>i</emphasis><subscript>4</subscript> and <emphasis>i</emphasis><subscript>5</subscript> were obtained along with utterance <emphasis>u</emphasis><subscript>3</subscript>. In this example, it is easy to see that the entire set of features <emphasis>I = {i<subscript>1</subscript>, ..., i<subscript>5</subscript>}</emphasis> contains two sub-sets <emphasis>I</emphasis><subscript>1</subscript> and <emphasis>I</emphasis><subscript>2</subscript>. Similarly, the whole ensemble of utterances <emphasis>{u<subscript>1</subscript>, u<subscript>2</subscript>, u<subscript>3</subscript>}</emphasis> gives the set <emphasis>U<subscript>H</subscript></emphasis>, whose sub-sets <emphasis>U</emphasis><subscript>1</subscript> and <emphasis>U</emphasis><subscript>2</subscript> refer to the corresponding observations (i.e. <emphasis>q</emphasis> &#x02208; {1,2}). In this view, an interpretation <emphasis>X</emphasis>(<emphasis>u</emphasis><subscript>1</subscript>) is a relation of <emphasis>u</emphasis><subscript>1</subscript> with a set of features from <emphasis>I</emphasis> (namely <emphasis>I</emphasis><subscript>1</subscript>). A belief <emphasis>B</emphasis> is then a mapping (relation) from the set <emphasis>U</emphasis> to <emphasis>I</emphasis>: all members of <emphasis>U</emphasis> map to one or more members of <emphasis>I</emphasis>, and no two members of <emphasis>U</emphasis> are associated with the same member of <emphasis>I</emphasis>.</para>
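<para>To make these notions concrete, the worked example above can be written down as plain data structures. The sketch below is purely illustrative (the names <emphasis>observations</emphasis>, <emphasis>belief</emphasis> and <emphasis>is_valid_belief</emphasis> are ours, not part of the described system): an observation pairs a feature set with an utterance set, and a belief maps each utterance to a non-empty set of features, with no feature shared between two utterances.</para>

```python
# Two observations mirroring the example above: o1 yields features i1, i2
# described by utterances u1, u2; o2 yields i3, i4, i5 described by u3.
observations = [
    ({"i1", "i2"}, {"u1", "u2"}),
    ({"i3", "i4", "i5"}, {"u3"}),
]

# One candidate belief B: an interpretation X(u) = {u, I_j} per utterance.
belief = {
    "u1": {"i1"},
    "u2": {"i2"},
    "u3": {"i3", "i4", "i5"},
}

def is_valid_belief(belief):
    """Mapping constraint from the text: every utterance maps to at least
    one feature, and no two utterances share a feature."""
    claimed = []
    for features in belief.values():
        if not features:
            return False
        claimed.extend(features)
    return len(claimed) == len(set(claimed))

print(is_valid_belief(belief))  # → True
```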
</section>
<section class="lev2" id="sec4-2-2">
<title>4.2.2 Search for the Most Coherent Interpretation</title>
<para>The system has to look for a belief <emphasis>B</emphasis> that would make the robot describe a particular scene with utterances as close and as coherent as possible to those a human would make about the same scene. For this purpose, instead of performing an exhaustive search over all possible beliefs, we search for a suboptimal belief by means of a genetic algorithm. Each organism&#x02019;s genome is constituted by a belief, which results in genomes of equal size |<emphasis>U</emphasis>| containing interpretations <emphasis>X(u)</emphasis> of all utterances from <emphasis>U.</emphasis> The task of belief generation is then to generate beliefs that are coherent with the observed reality.</para>
<para>In our genetic algorithm, genome generation is a belief-generation process that produces genomes (i.e. beliefs) as follows. For each interpretation <emphasis>X(u)</emphasis> the process explores the whole set <emphasis>O</emphasis>. For each observation <emphasis>o<subscript>q</subscript></emphasis> &#x02208; <emphasis>O</emphasis>, if <emphasis>u &#x02208; U<subscript>Hq</subscript></emphasis>, then features <emphasis>i<subscript>q</subscript> &#x02208; I<subscript>j</subscript></emphasis> (with <emphasis>I<subscript>j</subscript> &#x02286; I</emphasis>) are extracted. As described in (1), the extracted set contains pertinent as well as impertinent features. Coherent belief generation consists in deciding which features <emphasis>i<subscript>q</subscript> &#x02208; I<subscript>j</subscript></emphasis> may be the pertinent ones. The decision is driven by two principles. The first is the principle of &#x0201C;proximity&#x0201D;, stating that a feature <emphasis>i</emphasis> is more likely to be selected as pertinent in the context of a given <emphasis>u</emphasis> if its distance to other already-selected features is comparatively small. The second is &#x0201C;coherence&#x0201D; with all the observations in <emphasis>O</emphasis>: every observation <emphasis>o<subscript>q</subscript> &#x02208; O</emphasis> corresponding to <emphasis>u &#x02208; U<subscript>Hq</subscript></emphasis> must have at least one feature <emphasis>i</emphasis> assigned to the <emphasis>I<subscript>j</subscript></emphasis> of the current <emphasis>X(u) = {u, I<subscript>j</subscript>}</emphasis> [19]. Thus, both the similarity of features and the combination of utterances describing observations from <emphasis>O</emphasis> (characterized by certain features) guide the belief-generation process. These beliefs may be seen as &#x0201C;informed guesses&#x0201D; about the interpretation of the world as perceived by the robot.</para>
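<para>The generation step just described can be sketched as follows. This is a hedged illustration under strong simplifications (features are single numbers, so &#x0201C;proximity&#x0201D; reduces to absolute difference, and all names are ours, not the system&#x02019;s): for each utterance, every observation described by that utterance contributes at least one feature (coherence), chosen as the candidate closest to the features already selected (proximity).</para>

```python
import random

# Toy observations: each pairs named scalar features with tutor utterances.
observations = [
    ({"i1": 0.9, "i2": 0.1}, {"red", "round"}),
    ({"i3": 0.85, "i4": 0.5}, {"red"}),
]

def generate_belief(observations, utterances, rng):
    """Generate one genome (belief): an interpretation X(u) per utterance."""
    belief = {}
    for u in utterances:
        selected = {}
        for feats, described_as in observations:
            if u not in described_as:
                continue
            # Coherence: this observation must contribute at least one feature.
            if selected:
                # Proximity: prefer the candidate closest to the mean of
                # the features already selected for this utterance.
                anchor = sum(selected.values()) / len(selected)
                name = min(feats, key=lambda n: abs(feats[n] - anchor))
            else:
                name = rng.choice(sorted(feats))  # unbiased first pick
            selected[name] = feats[name]
        belief[u] = set(selected)
    return belief

b = generate_belief(observations, ["red", "round"], random.Random(0))
print(b)  # "red" gets one feature from each observation it describes
```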
<para>To evaluate a given organism, a classifier is trained whose classes are the utterances from <emphasis>U</emphasis> and whose training data for each class <emphasis>u &#x02208; U</emphasis> are those corresponding to <emphasis>X(u) = {u, I<subscript>j</subscript>}</emphasis>, i.e. the features associated with the given <emphasis>u</emphasis> in the genome. This classifier is applied over the whole set <emphasis>O</emphasis> of observations, producing the utterances <emphasis>u &#x02208; U</emphasis> describing each <emphasis>o<subscript>q</subscript> &#x02208; O</emphasis> according to its extracted features. Such a classification results in the set of utterances <emphasis>U<subscript>Bq</subscript></emphasis> (meaning that the belief <emphasis>B</emphasis> is tested on the <emphasis>q</emphasis><superscript>th</superscript> observation). The fitness of each above-mentioned organism is defined via the &#x0201C;disparity&#x0201D; between <emphasis>U<subscript>Bq</subscript></emphasis> and <emphasis>U<subscript>Hq</subscript></emphasis> (defined in the previous subsection), computed according to Equation (<xref rid="#x1-5001r4"><!--ref: GrindEQ__4_4_--></xref>), where <emphasis>v</emphasis>, given by Equation (<xref rid="#x1-5002r5"><!--ref: GrindEQ__4_5_--></xref>), represents the number of utterances that are not present in both sets <emphasis>U<subscript>Bq</subscript></emphasis> and <emphasis>U<subscript>Hq</subscript></emphasis>, i.e. utterances that are either missing or superfluous in the interpretation of the given features.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq4-4.jpg"/></para>
<para>At the end of the above-described genetic evolution process, the globally best fitting organism is chosen as the belief that best explains the observations <emphasis>O</emphasis> made (by the robot) so far about the surrounding world.</para>
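<para>The disparity-based evaluation can be sketched as follows. Note that this is an illustrative reading, not necessarily the exact form of Equation (4.4): <emphasis>v</emphasis> is taken, as in Equation (4.5), to be the number of utterances present in only one of <emphasis>U<subscript>Bq</subscript></emphasis> and <emphasis>U<subscript>Hq</subscript></emphasis>, while the normalization by the size of their union is our own choice for this sketch.</para>

```python
def disparity(u_b, u_h):
    """Fraction of utterances present in only one of the two sets
    (missed or superfluous interpretations)."""
    v = len(u_b.symmetric_difference(u_h))  # the v of Equation (4.5)
    total = len(u_b | u_h)
    return v / total if total else 0.0

def fitness(robot_utterances, human_utterances):
    """Higher fitness means lower average disparity over all observations."""
    d = sum(disparity(b, h) for b, h in zip(robot_utterances, human_utterances))
    return 1.0 - d / len(human_utterances)

# Robot's descriptions under a belief B vs. the tutor's, for two observations:
u_b = [{"red", "round"}, {"red", "shiny"}]
u_h = [{"red", "round"}, {"red"}]
print(fitness(u_b, u_h))  # → 0.75 (perfect on o1, one superfluous term on o2)
```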
</section>
<section class="lev2" id="sec4-2-3">
<title>4.2.3 Human-Robot Interaction</title>
<para>Human beings learn both by observation and by interaction with the world and with other human beings. The former is captured in our system by the &#x0201C;best interpretation search&#x0201D; outlined in the previous subsections. The latter type of learning requires that the robot be able to communicate with its environment and is facilitated by learning by observation, which may serve as its bootstrap. In our approach, learning by interaction is carried out through two kinds of interaction: human-to-robot and robot-to-human. The human-to-robot interaction is activated any time the robot interprets the world wrongly. When the human receives a wrong response from the robot, he provides the robot with a new observation by uttering the desired interpretation. The robot takes this corrective knowledge into account and searches for a new interpretation according to this new observation. The robot-to-human interaction may be activated when the robot attempts to interpret a particular feature. If the classifier trained with the current belief classifies the given feature with very low confidence, this may be a sign that the feature is a borderline example, and it may be beneficial to clarify its true nature. Thus, led by epistemic curiosity, the robot asks its human counterpart to make an utterance about the uncertain observation. If the robot&#x02019;s interpretation does not match the utterance given by the human (i.e. the interpretation was wrong), the observation is recorded as new knowledge and a search for a new interpretation is started.</para>
<para>Using these two modes of interactive learning, the robot&#x02019;s interpretation of the world evolves both in extent, covering more and more phenomena as they are encountered, and in quality, shaping the meaning of words (utterances) to conform with the perceived world.</para>
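<para>The robot-to-human interaction mode can be summarized by the following skeleton; every name in it (the confidence threshold, the helper callables) is an assumption of this sketch, not an interface of the described system.</para>

```python
CONFIDENCE_THRESHOLD = 0.6  # assumed value; below it, epistemic curiosity asks the tutor

def interaction_step(classify, ask_human, relearn, observations, feature):
    """One perception step: interpret a feature, querying the tutor when unsure."""
    utterance, confidence = classify(feature)
    if confidence >= CONFIDENCE_THRESHOLD:
        return utterance                       # confident: no interaction needed
    truth = ask_human(feature)                 # robot-to-human: clarify borderline case
    if truth != utterance:
        observations.append((feature, truth))  # record corrective knowledge
        relearn(observations)                  # search for a new best belief
    return truth

# Toy stand-ins for the classifier, the tutor and the belief search:
def toy_classify(x):
    return ("red", 0.9) if x > 0.7 else ("red", 0.3)

def toy_tutor(x):
    return "red" if x > 0.5 else "blue"

relearn_calls, obs = [], []
print(interaction_step(toy_classify, toy_tutor, relearn_calls.append, obs, 0.9))  # → red
print(interaction_step(toy_classify, toy_tutor, relearn_calls.append, obs, 0.2))  # → blue
```

<para>The human-to-robot mode (correcting a wrong response) follows the same path in this sketch: the corrective utterance is stored as a new observation and triggers a new search for the best interpretation.</para>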
<fig id="F4-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-5">Figure <xref linkend="F4-5" remap="4.5"/></link></label>
<caption><para>Upper: the WCS color table. Lower: the WCS color table as interpreted by a robot taught to distinguish warm (marked in red), cool (blue) and neutral (white) colors.</para></caption>
<graphic xlink:href="graphics/ch04_fig005.jpg"/>
</fig>
</section>
</section>
<section class="lev1" id="sec4-3">
<title>4.3 Validation Results by Simulation</title>
<para>In the simulated environment, images of real-world objects were presented to the system along with textual tags describing the colors present on each object. The images were taken from the Columbia Object Image Library database (COIL: it contains 1000 color images of different views of 100 objects). Five fluent English speakers were asked to describe each object in terms of colors. We restricted the choice of colors to &#x0201C;Black&#x0201D;, &#x0201C;Gray&#x0201D;, &#x0201C;White&#x0201D;, &#x0201C;Red&#x0201D;, &#x0201C;Green&#x0201D;, &#x0201C;Blue&#x0201D; and &#x0201C;Yellow&#x0201D;, based on the opponent-process theory of color [20]. The tagging of the entire set of images was highly coherent across the subjects. In each run of the experiment, we randomly chose one tagged set. The utterances were given in the form of text extracted from the descriptions. An object was accepted as correctly interpreted if the system&#x02019;s and the human&#x02019;s interpretations were equal.</para>
<para>The rate of correctly described objects in the test set was approximately 91% after the robot had fully learned. <link linkend="F4-5">Figure <xref linkend="F4-5" remap="4.5"/></link> shows the system&#x02019;s interpretation of the colors of the WCS table in terms of &#x0201C;Warm&#x0201D; and &#x0201C;Cool&#x0201D; colors.</para>
<fig id="F4-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-6">Figure <xref linkend="F4-6" remap="4.6"/></link></label>
<caption><para>Evolution of number of correctly described objects with increasing number of exposures of each color to the simulated robot.</para></caption>
<graphic xlink:href="graphics/ch04_fig06.jpg"/>
</fig>
<fig id="F4-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-7">Figure <xref linkend="F4-7" remap="4.7"/></link></label>
<caption><para>Examples of obtained visual colors&#x02019; interpretations (lower images) and corresponding original images (upper images) for several testing objects from COIL database.</para></caption>
<graphic xlink:href="graphics/ch04_fig07.jpg"/>
</fig>
<para><link linkend="F4-6">Figure <xref linkend="F4-6" remap="4.6"/></link> shows the learning rate as a function of the number of exposures of each color. It is worth emphasizing how few learned examples are required to reach a correct recognition rate of 91%. Finally, <link linkend="F4-7">Figure <xref linkend="F4-7" remap="4.7"/></link> gives an example of the system&#x02019;s interpretation of objects&#x02019; colors.</para>
</section>
<section class="lev1" id="sec4-4">
<title>4.4 Implementation on Real Robot and Validation Results</title>
<para>The proposed system has been validated both by simulation and by an implementation on a real humanoid robot <footnote id="fn4_1" label="1"> <para>A video capturing different parts of the experiment may be found online on: http://youtu.be/W5FD6zXihOo</para></footnote>. As the real robot, we used the NAO (a small humanoid robot from Aldebaran Robotics), which provides a number of built-in facilities such as an onboard camera (vision), communication devices and an onboard speech generator. The availability of these facilities saved considerable time, even if they remain quite basic on this kind of robot.</para>
<para>Although the usage of the presented system is not specifically bound to humanoid robots, it is pertinent to state two main reasons why a humanoid robot has been used for the system&#x02019;s validation. The first reason is that, by the very definition of the term &#x0201C;humanoid&#x0201D;, a humanoid robot aspires to make its perception close to a human&#x02019;s, entailing a more human-like experience of the world. This is an important aspect to consider in the context of sharing knowledge between a human and a robot; some aspects of this problem are discussed in [21]. The second reason is that humanoid robots are specifically designed to interact with humans in a &#x0201C;natural&#x0201D; way through a loudspeaker and microphone set. Thus, the facilities required for bi-directional communication with humans through speech synthesis and speech recognition are already available on such robots. This is of major importance, since speech is central to natural human-robot interaction.</para>
<section class="lev2" id="sec4-4-1">
<title>4.4.1 Implementation</title>
<para>The core of the implementation&#x02019;s architecture is split into five main units: the Communication Unit (CU), the Navigation Unit (NU), the Low-level Knowledge Acquisition Unit (LKAU), the High-level Knowledge Acquisition Unit (HKAU) and the Behavior Control Unit (BCU). <link linkend="F4-8">Figure <xref linkend="F4-8" remap="4.8"/></link> illustrates the block diagram of the implementation&#x02019;s architecture. These units control the NAO robot (represented by its sensors, actuators and interfaces in <link linkend="F4-8">Figure <xref linkend="F4-8" remap="4.8"/></link>) through its already available hardware and software facilities; in other words, this architecture controls the robot&#x02019;s whole behavior.</para>
<fig id="F4-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-8">Figure <xref linkend="F4-8" remap="4.8"/></link></label>
<caption><para>Block diagram of the implementation&#x02019;s architecture.</para></caption>
<graphic xlink:href="graphics/ch04_fig008.jpg"/>
</fig>
<para>The purpose of the NU is to allow the robot to position itself in space with respect to surrounding objects and to use this knowledge to navigate within its environment. The capacities needed in this context are obstacle avoidance and the determination of distances to objects. Its sub-unit handling spatial orientation receives its inputs from the camera and from the LKAU. To address the obstacle avoidance problem, we adopted a technique based on ground color modeling. Inspired by the work presented in [22], a color model of the ground helps the robot distinguish free space from obstacles. The assumption is made that obstacles rest on the ground (i.e. overhanging and floating objects are not taken into account). Under this assumption, the distance of obstacles can be inferred from monocular camera data. In [23], some aspects of distance estimation from a static monocular camera are discussed, giving the robot the capacity to infer distances and sizes of surrounding objects.</para>
<para>The LKAU is responsible for gathering visual knowledge, such as the detection of salient objects, their learning (by the sub-unit in charge of salient object detection) and their sub-recognition (see [18, 24]). These activities are carried out mostly in an &#x0201C;unconscious&#x0201D; manner; that is, they run as an automatism in the &#x0201C;background&#x0201D;, collecting salient objects and learning them. The learned knowledge is stored in Long-term Memory for further use.</para>
<para>The HKAU is the center where the intellectual behavior of the robot is constructed. Receiving its features from the LKAU (visual features) and from the CU (linguistic features), this unit performs belief generation, lets the most coherent beliefs emerge, and constructs the high-level semantic representation of the acquired visual knowledge. Unlike the LKAU, this unit represents conscious and intentional cognitive activity. In some way, it operates like a baby who learns from observation and from verbal interaction with adults about what he observes, developing in this way his own representation of, and opinion about, the observed world [25].</para>
<fig id="F4-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-9">Figure <xref linkend="F4-9" remap="4.9"/></link></label>
<caption><para>Example of English phrase and the corresponding syntactic analysis output generated by treetagger.</para></caption>
<graphic xlink:href="graphics/ch04_fig009.jpg"/>
</fig>
<para>The CU is in charge of the robot&#x02019;s communications. It includes an output communication channel and an input communication channel. The output channel is composed of a Text-To-Speech engine, which generates a human voice through the loudspeakers; it receives its text from the BCU. The input channel takes its input from a microphone and, through an Automated Speech Recognition engine (available on NAO) followed by syntax and semantic analysis (designed and incorporated in the BCU), provides the BCU with a labeled chain of strings representing the heard speech. As mentioned, syntax analysis is not available on NAO, so it has been incorporated in the BCU. To perform syntax analysis, the TreeTagger tool is used. Developed at the Institute for Computational Linguistics of the University of Stuttgart, TreeTagger annotates text with part-of-speech and lemma information. <link linkend="F4-9">Figure <xref linkend="F4-9" remap="4.9"/></link> shows, through a simple example of an English phrase, the operating principle of the syntactic analysis performed by this tool. The &#x0201C;Part-of-speech&#x0201D; row explains the tokens, and the &#x0201C;Lemma&#x0201D; row shows the lemma output, i.e. the neutral form of each word in the phrase. This information, along with the known grammatical rules for forming English phrases, may further serve to determine the nature of the phrase as declarative (for example: &#x0201C;This is a Box&#x0201D;), interrogative (for example: &#x0201C;What is the name of this object?&#x0201D;) or imperative (for example: &#x0201C;Go to the office&#x0201D;). It can also be used to extract the subject, the verb and other parts of speech, which are further processed in order to trigger the appropriate action by the robot. <link linkend="F4-10">Figure <xref linkend="F4-10" remap="4.10"/></link> gives the flow diagram of the communication between the robot and a human as implemented in this work.</para>
<para>The BCU plays the role of coordinator of the robot&#x02019;s behavior. It handles data flows and issues command signals to the other units, controlling the behavior of the robot and its reactions to external events (including interaction with humans). The BCU receives its inputs from all other units and returns its outputs to each concerned unit, including the robot&#x02019;s devices (e.g. sensors, actuators and interfaces) [25].</para>
<fig id="F4-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-10">Figure <xref linkend="F4-10" remap="4.10"/></link></label>
<caption><para>Flow diagram of communication between a robot and a human which is used in this work.</para></caption>
<graphic xlink:href="graphics/ch04_fig10.jpg"/>
</fig>
</section>
<section class="lev2" id="sec4-4-2">
<title>4.4.2 Validation Results</title>
<para>A total of 25 everyday objects was collected for the experiments (<link linkend="F4-11">Figure <xref linkend="F4-11" remap="4.11"/></link>). They were randomly divided into a training set and a testing set. The training-set objects were placed around the robot, and a human tutor then pointed to each of them, calling it by its name. Using its 640&#x000D7;480 monocular color camera, the robot discovered and learned the objects in the surrounding environment containing objects from the above-mentioned set.</para>
<fig id="F4-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-11">Figure <xref linkend="F4-11" remap="4.11"/></link></label>
<caption><para>Everyday objects used in the experiments in this work.</para></caption>
<graphic xlink:href="graphics/ch04_fig11.jpg"/>
</fig>
<para>The first validation involving the robot aimed at verifying the learning, color interpretation, human interaction and description abilities of the proposed system. To do this, the robot was asked to learn a subset of the 25 objects, in terms of associating the name of each detected object with that object. At the same time, a second learning process was performed involving interaction with the tutor, who successively pointed to the above-learned objects, describing (i.e. telling) to the robot the color of each object. An example of this Human-Robot interactive learning is reported below:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para><emphasis role="strong">Human</emphasis>: [pointing a red aid-kit] &#x0201C;This is a first-aid-kit!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Robot</emphasis>: &#x0201C;I will remember that this is a first-aid-kit.&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Human</emphasis>: &#x0201C;It is red and white.&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Robot</emphasis>: &#x0201C;OK, the first-aid-kit is red and the white.&#x0201D;</para></listitem></itemizedlist>
<para>After learning the names and colors of the observed objects, the robot was asked to describe a number of objects, including some of the already learned objects in a different posture (for example, the yellow chocolate box presented in a reversed posture) as well as a number of still unseen objects (for example, a red apple or a white teddy-bear). The robot successfully described, in coherent language, the presented seen and unseen objects. Below is an example of Human-Robot interaction during the recognition phase:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para><emphasis role="strong">Human</emphasis>: [pointing the unseen white teddy-bear]: &#x0201C;Describe this!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Robot</emphasis>: &#x0201C;It is white!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Human</emphasis>: [pointing the already seen, but reversed, yellow chocolate box]: &#x0201C;Describe this!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Robot</emphasis>: &#x0201C;It is yellow!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Human</emphasis>: [pointing the unseen apple]: &#x0201C;Describe this!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Robot</emphasis>: &#x0201C;It is red!&#x0201D;</para></listitem></itemizedlist>
<para><link linkend="F4-12">Figure <xref linkend="F4-12" remap="4.12"/></link> shows two photographs of the above-reported experimental validation, where the robot completes its knowledge by interacting with a human and learning from him. <link linkend="F4-13">Figure <xref linkend="F4-13" remap="4.13"/></link> shows another two photographs where the robot describes different objects it could detect within its surrounding environment.</para>
<fig id="F4-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-12">Figure <xref linkend="F4-12" remap="4.12"/></link></label>
<caption><para>The tutor points to an aid-kit detected by the robot, describing its name and color to the robot (left picture). Pointing, in the same way, to other visible objects detected by the robot, the tutor describes them to the robot (right picture).</para></caption>
<graphic xlink:href="graphics/ch04_fig012.jpg"/>
</fig>
<fig id="F4-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-13">Figure <xref linkend="F4-13" remap="4.13"/></link></label>
<caption><para>The tutor points to a yellow chocolate box which has been seen, interpreted and learned (by the robot) in terms of colors, and asks the robot to describe the chosen object (left picture). The tutor points to an unseen white teddy-bear and asks the robot to describe the chosen object (right picture).</para></caption>
<graphic xlink:href="graphics/ch04_fig13.jpg"/>
</fig>
<fig id="F4-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-14">Figure <xref linkend="F4-14" remap="4.14"/></link></label>
<caption><para>Images from a video sequence showing the robot searching for the book (left-side picture) and robot&#x02019;s camera view and visualization of color interpretation of the searched object (right-side picture).</para></caption>
<graphic xlink:href="graphics/ch04_fig14.jpg"/>
</fig>
<para>In another experiment, building on the previously learned objects, among which the robot had detected and learned a black book (the only book in the learning set), the robot was asked to search for the book in a new (i.e. unseen) environment. There, a number of different objects, including the above-mentioned black book, had been placed together with two new (unseen) books as well as other objects looking similar to a book. The robot&#x02019;s behavior and the Human-Robot interaction during this experiment are given below:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para><emphasis role="strong">Human</emphasis>: &#x0201C;Go find the book!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Robot</emphasis>: &#x0201C;OK, I am searching for the book!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Robot</emphasis>: [after visually exploring the new surrounding environment]: &#x0201C;I can see three books!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Human</emphasis>: &#x0201C;Get the red one!&#x0201D;</para></listitem>
<listitem>
<para><emphasis role="strong">Robot</emphasis>: [after turning toward the &#x0201C;red and grey&#x0201D; book and after walking and reaching the aforementioned book&#x02019;s location] &#x0201C;I am near the red book!&#x0201D;</para></listitem></itemizedlist>
<para><link linkend="F4-14">Figure <xref linkend="F4-14" remap="4.14"/></link> shows two photographs of the robot while searching for the required &#x0201C;red&#x0201D; book. Besides the correct recognition of the desired object (asked for by the tutor), it is pertinent to note that the robot also found the two other unseen books. It is also worth emphasizing that, even though there was no &#x0201C;red&#x0201D; book in that environment, the robot correctly inferred that the red book requested by the human was the &#x0201C;red and grey&#x0201D; book: the only book that could coherently be considered &#x0201C;red&#x0201D; by the human. A video showing the experimental validation may be found at http://youtu.be/W5FD6zXihOo. More details of the presented work, with complementary results, can be found in [19, 25].</para>
</section>
</section>
<section class="lev1" id="sec4-5">
<title>4.5 Conclusions</title>
<para>This chapter has presented, discussed and validated a cognitive system for high-level knowledge acquisition from visual perception based on the notion of artificial curiosity. Driving both the lower and the higher levels of the presented cognitive system, the emergent artificial curiosity allows the system to learn new knowledge about the unknown surrounding world in an autonomous manner and to complete (enrich or correct) its knowledge by interacting with a human. Experimental results, obtained both on a simulation platform and on the NAO robot, show the pertinence of the investigated concepts as well as the effectiveness of the designed system. Although a precise comparison is difficult due to differing experimental protocols, the results obtained show that our system is able to learn faster, and from significantly fewer examples, than most comparable implementations.</para>
<para>Based on the results obtained, it is justified to say that a robot endowed with such artificial curiosity-based intelligence will necessarily include autonomous cognitive capabilities. Future work on the autonomous cognitive robot presented in this chapter will therefore focus on integrating the investigated concepts into other kinds of robots, such as mobile robots, where they will play the role of an underlying system for machine cognition and knowledge acquisition. This knowledge will subsequently be available as the basis for tasks proper to machine intelligence, such as reasoning, decision making and overall autonomy.</para>
</section>
<section class="lev1" id="sec4-6">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>D. Meger, P. E. Forss&#x00E9;n, K. Lai, S. Helmer, S. McCann, T. Southey, M. Baumann, J. J. Little and D. G. Lowe, &#x02018;Curious George: An attentive semantic robot&#x02019;, Robot. Auton. Syst., vol. 56, no. 6, pp. 503&#x02013;511, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Meger%2C+P%2E+E%2E+Forss%E9n%2C+K%2E+Lai%2C+S%2E+Helmer%2C+S%2E+McCann%2C+T%2E+Southey%2C+M%2E+Baumann%2C+J%2E+J%2E+Little+and+D%2E+G%2E+Lowe%2C+%27Curious+George%3A+An+attentive+semantic+robot%27%2C+Robot%2E+Auton%2E+Syst%2E%2C+vol%2E+56%2C+no%2E+6%2C+pp%2E+503-511%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>P. Kay, B. Berlin and W. Merrifield, &#x02018;Biocultural Implications of Systems of Color Naming&#x02019;, Journal of Linguistic Anthropology, vol. 1, no. 1, pp. 12&#x02013;25, 1991. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=P%2E+Kay%2C+B%2E+Berlin+and+W%2E+Merrifield%2C+%27Biocultural+Implications+of+Systems+of+Color+Naming%27%2C+Journal+of+Linguistic+Anthropology%2C+vol%2E+1%2C+no%2E+1%2C+pp%2E+12-25%2C+1991%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Bowerman, &#x02018;How Do Children Avoid Constructing an Overly General Grammar in the Absence of Feedback about What is Not a Sentence?&#x02019;, Papers and Reports on Child Language Development, 1983. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Bowerman%2C+%27How+Do+Children+Avoid+Constructing+an+Overly+General+Grammar+in+the+Absence+of+Feedback+about+What+is+Not+a+Sentence%B4%27%2C+Papers+and+Reports+on+Child+Language+Development%2C+1983%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. A. Goodrich and A. C. Schultz, &#x02018;Human-robot interaction: a survey&#x02019;, Found. Trends Hum.-Comput. Interact., vol. 1, no. 3, pp. 203&#x02013;275, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+A%2E+Goodrich+and+A%2E+C%2E+Schultz%2C+%27Human-robot+interaction%3A+a+survey%27%2C+Found%2E+Trends+Hum%2E-Comput%2E+Interact%2E%2C+vol%2E+1%2C+no%2E+3%2C+pp%2E+203-275%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Coradeschi and A. Saffiotti, &#x02018;An introduction to the anchoring problem&#x02019;, Robotics &#x00026; Autonomous Sys., vol. 43, pp. 85&#x02013;96, 2003. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Coradeschi+and+A%2E+Saffiotti%2C+%27An+introduction+to+the+anchoring+problem%27%2C+Robotics+%26+Autonomous+Sys%2E%2C+vol%2E+43%2C+pp%2E+85-96%2C+2003%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>T. Regier, &#x02018;A Model of the Human Capacity for Categorizing Spatial Relations&#x02019;, Cognitive Linguistics, vol. 6, no. 1, pp. 63&#x02013;88, 1995. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=T%2E+Regier%2C+%27A+Model+of+the+Human+Capacity+for+Categorizing+Spatial+Relations%27%2C+Cognitive+Linguistics%2C+vol%2E+6%2C+no%2E+1%2C+pp%2E+63-88%2C+1995%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. de Greeff, F. Delaunay and T. Belpaeme, &#x02018;Human-robot interaction in concept acquisition: a computational model&#x02019;, Proc. of Int. Conf. on Development and Learning, vol. 0, pp. 1&#x02013;6, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+de+Greeff%2C+F%2E+Delaunay+and+T%2E+Belpaeme%2C+%27Human-robot+interaction+in+concept+acquisition%3A+a+computational+model%27%2C+Proc%2E+of+Int%2E+Conf%2E+on+Development+and+Learning%2C+vol%2E+0%2C+pp%2E+1-6%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>P. Wellens, M. Loetzsch and L. Steels, &#x02018;Flexible word meaning in embodied agents&#x02019;, Connection Science, vol. 20, no. 2&#x02013;3, pp. 173&#x02013;191, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=P%2E+Wellens%2C+M%2E+Loetzsch+and+L%2E+Steels%2C+%27Flexible+word+meaning+in+embodied+agents%27%2C+Connection+Science%2C+vol%2E+20%2C+no%2E+2-3%2C+pp%2E+173-191%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Saunders, C. L. Nehaniv and C. Lyon, &#x02018;Robot learning of lexical semantics from sensorimotor interaction and the unrestricted speech of human tutors&#x02019;, Proc. of 2nd International Symposium on New Frontiers in Human-Robot Interaction, Leicester, pp. 95&#x02013;102, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Saunders%2C+C%2E+L%2E+Nehaniv+and+C%2E+Lyon%2C+%27Robot+learning+of+lexical+semantics+from+sensorimotor+interaction+and+the+unrestricted+speech+of+human+tutors%27%2C+Proc%2E+of+2nd+International+Symposium+on+New+Frontiers+in+Human-Robot+Interaction%2C+Leicester%2C+pp%2E+95-102%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>I. L&#x00FC;tkebohle, J. Peltason, L. Schillingmann, B. Wrede, S. Wachsmuth, C. Elbrechter and R. Haschke, &#x02018;The curious robot - structuring interactive robot learning&#x02019;, Proc. of the 2009 IEEE international conference on Robotics and Automation, Kobe, pp. 2154&#x02013;2160, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=L%FCtkebohle%2C+J%2E+Peltason%2C+L%2E+Schillingmann%2C+B%2E+Wrede%2C+S%2E+Wachsmuth%2C+C%2E+Elbrechter+and+R%2E+Haschke%2C+%27The+curious+robot+-+structuring+interactive+robot+learning%27%2C+Proc%2E+of+the+2009+IEEE+international+conference+on+Robotics+and+Automation%2C+Kobe%2C+pp%2E+2154-2160%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>T. Araki, T. Nakamura, T. Nagai, K. Funakoshi, M. Nakano and N. Iwahashi, &#x02018;Autonomous acquisition of multimodal information for online object concept formation by robots&#x02019;, Proc. of IEEE/ IROS, pp. 1540&#x02013;1547, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=T%2E+Araki%2C+T%2E+Nakamura%2C+T%2E+Nagai%2C+K%2E+Funakoshi%2C+M%2E+Nakano+and+N%2E+Iwahashi%2C+%27Autonomous+acquisition+of+multimodal+information+for+online+object+concept+formation+by+robots%27%2C+Proc%2E+of+IEEE%2F+IROS%2C+pp%2E+1540-1547%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Skocaj, M. Kristan, A. Vrecko, M. Mahnic, M. Janicek, G.-J. M. Kruijff, M. Hanheide, N. Hawes, T. Keller, M. Zillich and K. Zhou, &#x02018;A system for interactive learning in dialogue with a tutor&#x02019;, Proc.of IEEE/ IROS, pp. 3387&#x02013;3394, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Skocaj%2C+M%2E+Kristan%2C+A%2E+Vrecko%2C+M%2E+Mahnic%2C+M%2E+Janicek%2C+G%2E-J%2E+M%2E+Kruijff%2C+M%2E+Hanheide%2C+N%2E+Hawes%2C+T%2E+Keller%2C+M%2E+Zillich+and+K%2E+Zhou%2C+%27A+system+for+interactive+learning+in+dialogue+with+a+tutor%27%2C+Proc%2Eof+IEEE%2F+IROS%2C+pp%2E+3387-3394%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Yu, &#x02018;The emergence of links between lexical acquisition and object categorization: a computational study&#x02019;, Connection Science, vol. 17, 3&#x02013;4, pp. 381&#x02013;397, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Yu%2C+%27The+emergence+of+links+between+lexical+acquisition+and+object+categorization%3A+a+computational+study%27%2C+Connection+Science%2C+vol%2E+17%2C+3-4%2C+pp%2E+381-397%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. R. Waxman and S. A. Gelman, &#x02018;Early word-learning entails reference, not merely associations&#x02019;, Trends in cognitive science, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+R%2E+Waxman+and+S%2E+A%2E+Gelman%2C+%27Early+word-learning+entails+reference%2C+not+merely+associations%27%2C+Trends+in+cognitive+science%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. E. Berlyne, &#x02018;A theory of human curiosity&#x02019;, British Journal of Psychology, vol. 45, no. 3, August, pp. 180&#x02013;191, 1954. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+E%2E+Berlyne%2C+%27A+theory+of+human+curiosity%27%2C+British+Journal+of+Psychology%2C+vol%2E+45%2C+no%2E+3%2C+August%2C+pp%2E+180-191%2C+1954%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>K. Madani, C. Sabourin, &#x02018;Multi-level cognitive machine-learning based concept for human-like artificial walking: Application to autonomous stroll of humanoid robots&#x02019;, Neurocomputing, S.I. on Linking of phenomenological data and cognition, pp. 1213&#x02013;1228, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+Madani%2C+C%2E+Sabourin%2C+%27Multi-level+cognitive+machine-learning+based+concept+for+human-like+artificial+walking%3A+Application+to+autonomous+stroll+of+humanoid+robots%27%2C+Neurocomputing%2C+S%2EI%2E+on+Linking+of+phenomenological+data+and+cognition%2C+pp%2E+1213-1228%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>K. Madani, D. Ramik, C. Sabourin, &#x02018;Multi-level cognitive machine-learning based concept for Artificial Awareness: application to humanoid robot&#x02019;s awareness using visual saliency&#x02019;, J. of Applied Computational Intelligence and Soft Computing,. DOI: 10.1155/2012/354785, 2012. (available on: http://dx.doi.org/10.1155/2012/354785). <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+Madani%2C+D%2E+Ramik%2C+C%2E+Sabourin%2C+%27Multi-level+cognitive+machine-learning+based+concept+for+Artificial+Awareness%3A+application+to+humanoid+robot%27s+awareness+using+visual+saliency%27%2C+J%2E+of+Applied+Computational+Intelligence+and+Soft+Computing%2C%2E+DOI%3A+10%2E1155%2F2012%2F354785%2C+2012%2E+%28available+on%3A+http%3A%2F%2Fdx%2Edoi%2Eorg%2F10%2E1155%2F2012%2F354785%29%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. M. Ramik, C. Sabourin, K. Madani, &#x02018;A Machine Learning based Intelligent Vision System for Autonomous Object Detection and Recognition&#x02019;, J. of Applied Intelligence, Springer, Vol. 40, Issue 2, pp. 358&#x02013;374, 2014. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+M%2E+Ramik%2C+C%2E+Sabourin%2C+K%2E+Madani%2C+%27A+Machine+Learning+based+Intelligent+Vision+System+for+Autonomous+Object+Detection+and+Recognition%27%2C+J%2E+of+Applied+Intelligence%2C+Springer%2C+Vol%2E+40%2C+Issue+2%2C+pp%2E+358-374%2C+2014%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D.-M. Ramik, C. Sabourin, K. Madani, &#x02018;From Visual Patterns to Semantic Description: a Cognitive Approach Using Artificial Curiosity as the Foundation&#x02019;, Pattern Recognition Letters, Elsevier, vol. 34, no. 14, pp. 1577&#x02013;1588, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D-M%2E+Ramik%2C+C%2E+Sabourin%2E+K%2E+Madani%2C+%27From+Visual+Patterns+to+Semantic+Description%3A+a+Cognitive+Approach+Using+Artificial+Curiosity+as+the+Foundation%27%2C+Pattern+Rgognition+Letters%2C+Elsevier%2C+vol%2E+34%2C+no%2E+14%2C+pp%2E+1577-1588%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Schindler and J. W. v. Goethe, &#x02018;Goethe&#x02019;s theory of colour applied by Maria Schindler&#x02019;, New Knowledge Books, East Grinstead, Eng., 1964. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Schindler+and+J%2E+W%2E+v%2E+Goethe%2C+%27Goethe%27s+theory+of+colour+applied+by+Maria+Schindler%27%2C+New+Knowledge+Books%2C+East+Grinstead%2C+Eng%2E%2C+1964%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>V. Klingspor, J. Demiris, M. Kaiser, &#x02018;Human-Robot-Communication and Machine Learning&#x02019;, Applied Artificial Intelligence, pp. 719&#x02013;746, 1997. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=V%2E+Klingspor%2C+J%2E+Demiris%2C+M%2E+Kaiser%2C+%27Human-Robot-Communication+and+Machine+Learning%27%2C+Applied+Artificial+Intelligence%2C+pp%2E+719-746%2C+1997%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Hofmann, M. J&#x00FC;ngel, M. L&#x00F6;tzsch, &#x02018;A vision based system for goal-directed obstacle avoidance used in the rc&#x02019;03 obstacle avoidance challenge&#x02019;, Lecture Notes in Artificial Intelligence, Proc. of 8th International Workshop on RoboCup, pp. 418&#x02013;425, 2004. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Hofmann%2C+M%2E+Jngel%2C+M%2E+Ltzsch%2C+%27A+vision+based+system+for+goal-directed+obstacle+avoidance+used+in+the+rc%2703+obstacle+avoidance+challenge%27%2C+Lecture+Notes+in+Artificial+Intelligence%2C+Proc%2E+of+8th+International+Workshop+on+RoboCup%2C+pp%2E+418-425%2C+2004%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. M. Ramik, C. Sabourin, K. Madani, &#x02018;On human inspired semantic slam&#x02019;s feasibility&#x02019;, Proc. of the 6th International Workshop on Artificial Neural Networks and Intelligent Information Processing (ANNIIP 2010), ICINCO 2010, INSTICC Press, Funchal, pp. 99&#x02013;108, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+M%2E+Ramik%2C+C%2E+Sabourin%2C+K%2E+Madani%2C+%27On+human+inspired+semantic+slam%27s+feasibility%27%2C+Proc%2E+of+the+6th+International+Workshop+on+Artificial+Neural+Networks+and+Intelligent+Information+Processing+%28ANNIIP+2010%29%2C+ICINCO+2010%2C+INSTICC+Press%2C+Funchal%2C+pp%2E+99-108%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R. Moreno, D. M. Ramik, M. Gra&#x00F1;a, K. Madani, &#x02018;Image Segmentation on the Spherical Coordinate Representation of the RGB Color Space&#x02019;, IET Image Processing, vol. 6, no. 9, pp. 1275&#x02013;1283, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2E+Moreno%2C+D%2E+M%2E+Ramik%2C+M%2E+Gra%F1a%2C+K%2E+Madani%2C+%27Image+Segmentation+on+the+Spherical+Coordinate+Representation+of+the+RGB+Color+Space%27%2C+IET+Image+Processing%2C+vol%2E+6%2C+no%2E+9%2C+pp%2E+1275-1283%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. M. Ramik, C. Sabourin, K. Madani, &#x02018;Autonomous Knowledge Acquisition based on Artificial Curiosity: Application to Mobile Robots in Indoor Environment&#x02019;, J. of Robotics and Autonomous Systems, Elsevier, Vol. 61, no. 12, pp. 1680&#x02013;1695, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+M%2E+Ramik%2C+C%2E+Sabourin%2C+K%2E+Madani%2C+%27Autonomous+Knowledge+Acquisition+based+on+Artificial+Curiosity%3A+Application+to+Mobile+Robots+in+Indoor+Environment%27%2C+J%2E+of+Robotics+and+Autonomous+Systems%2C+Elsevier%2C+Vol%2E+61%2C+no%2E+12%2C+pp%2E+1680-1695%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch05" label="5" xreflabel="5">
<title>Information Technology for Interactive Robot Task Training Through Demonstration of Movement</title>
<para><footnote id="fn5_1" label="1"><para>The paper is published with financial support from the Russian Foundation for Basic Research, projects 14-08-01225-a, 15-07-04415-a, 15-01-02021-a.</para></footnote></para>
<para><emphasis role="strong">F. Kulakov and S. Chernakova</emphasis></para>
<para>Laboratory of Informational Technologies for Control and Robots,<break/>St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, Russia</para>
<para>Corresponding author: F. Kulakov &lt;kufelix@yandex.ru&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>Remote robot control (telecontrol) involves the solution of the following routine problems: surveillance of the remote working area, remote operation of the robot situated in that area, and pre-training of the robot. The current paper describes a new technique for robot control using intelligent multimodal human-machine interfaces (HMI). The first part of the paper explains the robot control algorithms, including testing of the results of learning and of movement reproduction by the robot. The application of the new training technology is very promising for space robots as well as for modern assembly plants, including the use of micro- and nano-robots.</para>
<para><emphasis role="strong">Keywords:</emphasis> Robot, telecontrol, task training by demonstration, human-machine interfaces</para>
</section>
<section class="lev1" id="sec5-1">
<title>5.1 Introduction</title>
<para>The concept of telesensor programming (TSP) and relevant task-oriented robot control techniques for use in space robotics was first proposed by G. Hirzinger [1].</para>
<para>Within the framework of the ROTEX program, flown in April 1993 as part of the Spacelab mission, a simulation environment for multisensory semiautonomous robot systems, with powerful man-machine interfaces (laser range finders, 3D stereo graphics and force/torque reflection), was developed. This allowed the space robot manipulator to be remotely programmed (teleprogrammed) from Earth.</para>
<para>The solution for the problem of remote control under non-deterministic delays in the communications channel is based on the use of TSP with training by demonstration for the sensitized robot.</para>
<para>Tasks such as assembly, joining of connectors and catching flying objects were practiced. Actually, it was the first time that a human remotely trained a robot through direct movement demonstration using a graphic model with robot sensor simulation.</para>
<para>The effectiveness of interactive control (demonstration training) is highlighted in all cases of the application of pre-training technology to space and medical robots, as the most natural way to transfer the operator&#x02019;s experience (SKILL TRANSFER) in order to ensure autonomous robot manipulator operation in a complex non-deterministic environment [2&#x02013;5].</para>
<para>However, in these studies it was only possible to conduct training with the immediate recording of the movement trajectory positioning data and the possibility of motion correction as per the signals from the robot&#x02019;s sensors.</para>
<para>These studies did not solve the problem of complex robot motion representation as a certain data structure that is easily adjustable by humans, or &#x0201C;independently&#x0201D; modified by the autonomous robot, depending on changes in the remote environment.</para>
<para>The current paper describes a new information technology-based approach for interactive training by demonstration of the human operator&#x02019;s natural hand movements based on motion representation in the form of a frame-structured model (FSM).</para>
<para>Here, a frame means a description of the shape of a motion, with indications of its metric characteristics and of the methods and sequence of execution of the separate parts of the movement. Training by demonstration means intelligent robot manipulator programming aimed at preparing the robot for autonomous work with the objects (among the objects) without point-to-point trajectory recording: only separate fragments of movement are provided in the training stage, and they are then executed sequentially, depending on the task.</para>
<para>In order to train a robot manipulator to move among objects, it was suggested to use a remotely operated camera fixed to the so-called &#x0201C;sensitized glove&#x0201D;. This allows the registration not only of the position and orientation of the hand in space, but also of the positions of the characteristic points of the objects (experimental models) relative to the camera on the hand.</para>
</section>
<section class="lev1" id="sec5-2">
<title>5.2 Conception and Principles of Motion Modeling</title>
<section class="lev2" id="sec5-2-1">
<title>5.2.1 Generalized Model of Motion</title>
<para>A variety of robot motion types in the external environment (EE), including manipulation of items (objects and tools) as well as the complexity and variability of EE configurations, are typical for aerospace, medical, industrial, technological and assembly operations.</para>
<para>Let us consider the problem of training the robot manipulator to perform motion relative to EE objects in two cases: examination motion and manipulative motion. The main issue in forming the motion patterns, which are set in this case by the motions of the operator&#x02019;s head and arm, is to have a method for recording and reproducing the three-dimensional trajectories of the robot manipulator grip relative to EE objects.</para>
<para>The problem of the alignment of the topology and the semantics of objects, well known in geographic information systems (GIS), is basically close to the problem of motion modeling and route planning in robotics.</para>
<para>In the case of navigational routing tasks using intelligent GIS, the authors basically consider motion along a plane (on the surface of the sphere) or several planes (echeloned layers). Moreover, in most cases, the moving object is treated as a mathematical point without its own orientation in space.</para>
<para>The motion path configuration in space often does not matter, so routing is carried out over the shortest distance. Thus, while following the curvature of the relief, the motion tracks its shape.</para>
<para>For object shape modeling and motion formation, we propose using a common structured description language, which considers that the object shape model is defined and described by a frame of its elements, and the motion trajectory model is described by a frame of descriptions of the elementary motions. It is important to note that the elementary motions (fragments) can be given appropriate names and be considered to be the language operators, providing the possibility of describing robot actions in a rather compact manner.</para>
<para>For interactive motion demonstration robot training, we propose using a combination of the EE (MEE) objects&#x02019; shape models and the motion shape models (MFM). In this case, the generalized frame-structured model (FSM) is defined as a method for storing information not only about the shape of the EE objects, but also about the shape of the motion trajectory.</para>
<para>The description language used in FSM is a multilevel hierarchical system of frames, similar to M. Minski frames [6], containing a description of the shape elements, metric characteristics and methods and procedures for working with these objects. MFM, as one of the FSM components, stores the structure of the shape of motion trajectories demonstrated by the human operator during the process of training the robot to perform specified movements [7, 8].</para>
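<para>Such a multilevel frame system can be illustrated with a minimal sketch in Python (the class and field names here are illustrative assumptions, not the authors&#x02019; actual FSM implementation): each frame stores a shape element, its metric characteristics and its sub-frames, and the elementary fragments are recovered by a depth-first traversal.</para>

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Minsky-style frame for the motion shape model (MFM).
# Field names are assumptions for illustration, not the authors' data structure.
@dataclass
class MotionFrame:
    name: str                                     # name of the motion fragment
    shape: str                                    # qualitative shape, e.g. "arc"
    metric: dict = field(default_factory=dict)    # metric characteristics
    procedure: str = ""                           # method of execution
    children: list = field(default_factory=list)  # lower-level sub-frames

    def flatten(self):
        """Depth-first list of the elementary (leaf) motion fragments."""
        if not self.children:
            return [self]
        leaves = []
        for child in self.children:
            leaves.extend(child.flatten())
        return leaves

# A two-level frame: a composite "approach" motion built from two fragments.
approach = MotionFrame("approach", "composite", children=[
    MotionFrame("transfer", "arc", {"length_m": 0.4}),
    MotionFrame("tracking", "line", {"length_m": 0.1}),
])
print([f.name for f in approach.flatten()])  # ['transfer', 'tracking']
```

<para>In the same spirit, a frame for an object shape would hold its elements and metric data, so that the MEE and the MFM share one structured description language.</para>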
<para>The generalized FSM of the remotely operated robot IE includes:</para>
<fig id="F5-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-1">Figure <xref linkend="F5-1" remap="5.1"/></link></label>
<caption><para>Images of the Space Station for two positions: &#x0201C;Convenient for observation&#x0201D; and &#x0201C;Convenient for grabbing&#x0201D; objects with the virtual manipulator.</para></caption>
<graphic xlink:href="graphics/ch05_fig0001.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Object models, models of the objects&#x02019; topology (location) in a particular IE (MIE);</para></listitem>
<listitem>
<para>Models of different typical motions and topology models (interrelations, locations) of these movements in a particular IE (MIE).</para></listitem></itemizedlist>
<para>It is also proposed to store, in the MIE, the coordinates and images of objects from positions convenient both for remote-camera observation (which enables the most accurate measurement of the coordinates of the characteristic features of object images) and for grabbing objects with the robot gripper (<link linkend="F5-1">Figure <xref linkend="F5-1" remap="5.1"/></link>) [9].</para>
<para>Training of motion can be regarded as a transfer of knowledge of motor, sensory, and behavioral skills from a human operator to the robot control system (RCS), which in this case should be a multimodal man-machine interface (MMI) developed to the greatest possible extent (i.e. intelligent), so as to provide adequate and effective perception of human actions. Consequently, it is assumed that a generalized model of the robot&#x02019;s knowledge of the EE, based on the FSM, will be created, including the robot itself and its possible (necessary) actions within the EE.</para>
<para>The preliminary results of the research on algorithms and technologies for the robot manipulator task training by demonstration, using the motion description in the form of MFM, are presented below.</para>
</section>
<section class="lev2" id="sec5-2-2">
<title>5.2.2 Algorithm for Robot Task Training by Demonstration</title>
<para>In order to task-train the robot by demonstration, a special device, the so-called &#x0201C;sensitized glove,&#x0201D; is put on the hand of the trainer. It is equipped with a television camera and check points (markers) [10].</para>
<para>This allows the execution of two functions simultaneously (<link linkend="F5-2">Figure <xref linkend="F5-2" remap="5.2"/></link>):</para>
<fig id="F5-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-2">Figure <xref linkend="F5-2" remap="5.2"/></link></label>
<caption><para>&#x0201C;Sensitized Glove&#x0201D; with a camera and the process of training the robot by means of demonstration.</para></caption>
<graphic xlink:href="graphics/ch05_fig002.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Using the television camera on the glove, record the image and determine the coordinates of the objects&#x02019; characteristic points, over which the hand of the human operator moves;</para></listitem>
<listitem>
<para>Using the sensors of the intelligent MMI system, determine the spatial position and orientation of the hand in the work location by means of 3&#x02013;4 check points (markers) on the glove.</para></listitem></itemizedlist>
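<para>The second function, determining the glove pose from 3&#x02013;4 markers, can be sketched as a least-squares rigid fit between the known marker layout on the glove and the measured marker positions (an assumed method for illustration, not necessarily the MMI system&#x02019;s actual algorithm):</para>

```python
import numpy as np

# Sketch of glove pose estimation from check-point (marker) positions via the
# Kabsch (orthogonal Procrustes) fit; an assumed method, for illustration only.
def rigid_fit(model_pts, measured_pts):
    """Return R, t such that measured ≈ R @ model + t (least squares)."""
    cm = model_pts.mean(axis=0)
    cs = measured_pts.mean(axis=0)
    H = (model_pts - cm).T @ (measured_pts - cs)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cs - R @ cm
    return R, t

# Synthetic check: 4 markers rotated 90 degrees about z and displaced.
markers = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
measured = markers @ Rz.T + np.array([0.2, 0.1, 0.3])
R, t = rigid_fit(markers, measured)
print(np.allclose(R, Rz), np.allclose(t, [0.2, 0.1, 0.3]))  # True True
```

<para>With at least three non-collinear markers the fit is unique; a fourth marker adds redundancy against measurement noise.</para>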
<para>Consideration of the processes of task-training a robot to perform elementary operations, and of reproducing these operations, reveals an important feature: the algorithms for training and reproduction contain fragments that are used in different operations without modification or with very minor changes, and that may also be repeated several times within a single operation.</para>
<para>Most movements of the robot manipulator can be represented as a sequence of a limited number of elementary motions (motion fragments), for example:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Transfer motion of the gripper along an arbitrary complex trajectory <emphasis>g = g(l)</emphasis> from the current position to a certain final position;</para></listitem>
<listitem>
<para>Correction motion, using the sequence of characteristic points (CP) of the EE objects&#x02019; images, as input information;</para></listitem>
<listitem>
<para>Surveillance movement in the process by which the following are sequentially created: matrices of the gripper position <emphasis>T<subscript>b</subscript>, T<subscript>b1</subscript>, T<subscript>b2</subscript></emphasis>, joint coordinate vectors <emphasis>g<subscript>b</subscript>, g<subscript>b1</subscript>, g<subscript>b2</subscript></emphasis>, and geometric trajectory <emphasis>g = g(l)</emphasis>;</para></listitem>
<listitem>
<para>Movement to a convenient position for surveillance;</para></listitem>
<listitem>
<para>Movement to a convenient position for grabbing;</para></listitem>
<listitem>
<para>Movement for &#x0201C;tracking&#x0201D; the object (approaching the object);</para></listitem>
<listitem>
<para>Movement to grab the object.</para></listitem></itemizedlist>
<para>In traditional training systems using one or the other method, a sequence of points of the motion trajectory of the robot gripper is obtained. It can be represented as a function of some parameter <emphasis>l</emphasis>, which can be considered as the preliminary result of training the robot to perform the fragment of the gripper movement from one position to the other:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg100.jpg"/></para>
<para>where: <emphasis>l<subscript>b</subscript></emphasis> &#x02013; parameter of the trajectory in the initial position, <emphasis>l</emphasis>&#x02013; parameter of the trajectory in the current position, <emphasis>l<subscript>e</subscript></emphasis>&#x02013; parameter of the trajectory in the final position.</para>
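<para>This parameterization can be sketched as follows (an illustrative reconstruction in which <emphasis>l</emphasis> is taken as the arc length accumulated along the recorded points): the recorded positions define <emphasis>g(l)</emphasis> by interpolation between <emphasis>l<subscript>b</subscript></emphasis> and <emphasis>l<subscript>e</subscript></emphasis>.</para>

```python
import numpy as np

# Sketch (assumed representation): the recorded gripper positions define a
# geometric trajectory g(l), with l the arc length along the recorded points,
# l_b = 0 at the initial position and l_e = total length at the final one.
def make_trajectory(points):
    pts = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # segment lengths
    l = np.concatenate([[0.0], np.cumsum(seg)])         # parameter values
    def g(s):
        """Linearly interpolate the trajectory at parameter value s."""
        s = np.clip(s, l[0], l[-1])
        return np.array([np.interp(s, l, pts[:, k]) for k in range(pts.shape[1])])
    return g, l[0], l[-1]

g, l_b, l_e = make_trajectory([[0, 0, 0], [1, 0, 0], [1, 1, 0]])
print(l_e)     # 2.0 (two unit-length segments)
print(g(1.5))  # midpoint of the second segment, (1, 0.5, 0)
```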
<para>In this case, the training algorithm for performing motions ensures the formation of geometric trajectory <emphasis>g(l)</emphasis> and includes the following:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Formation of a sequence of triplets of the two-dimensional vectors <emphasis>x<subscript>imb</subscript><superscript>(1)</superscript>, x<subscript>imb</subscript><superscript>(2)</superscript>, x<subscript>imb</subscript><superscript>(3)</superscript>; x<subscript>imI</subscript><superscript>(1)</superscript>, x<subscript>imI</subscript><superscript>(2)</superscript>, x<subscript>imI</subscript><superscript>(3)</superscript>; &#x02026;; x<subscript>ime</subscript><superscript>(1)</superscript>, x<subscript>ime</subscript><superscript>(2)</superscript>, x<subscript>ime</subscript><superscript>(3)</superscript></emphasis>, conforming to the image positions of the 3 CP on the object during training;</para></listitem>
<listitem>
<para>Formation of the sequence <emphasis>T<subscript>b</subscript>, T<subscript>I</subscript>, T<subscript>II</subscript>, &#x02026;, T<subscript>e</subscript></emphasis> of the matrices of the glove position;</para></listitem>
<listitem>
<para>Solution of the systems of Equations (5.1):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq5-1.jpg"/></para>
</listitem></itemizedlist>
<para>where: <emphasis>k<superscript>(i)</superscript></emphasis> is a variable scale, defined as <emphasis>k<superscript>(i)</superscript> = f/(d<superscript>(i)</superscript> &#x02212; f)</emphasis>, where <emphasis>d<superscript>(i)</superscript></emphasis> is the distance from the point to the TV camera and <emphasis>f</emphasis> is the focal distance of the lens; <emphasis role="strong">&#x0005E;T</emphasis> is a (2&#x000D7;4) matrix made up of the first two rows of the matrix <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in101.jpg"/>, characterizing the rotation and displacement of the system of coordinates (CS) attached to the camera on the glove relative to the object CS, where <emphasis>&#x003B1;</emphasis> is the direction cosine matrix of the reference CS rotation angle and <emphasis>X<subscript>n</subscript></emphasis> is the displacement vector of the origin of the CS; and <emphasis>X<superscript>(i)</superscript></emphasis> are the two-dimensional vectors of the position of the images of the characteristic points of the object in the image plane.</para>
<para>This data is sufficient to construct a sequence of matrices of the gripper positions <emphasis>T<subscript>b</subscript>, T<subscript>I</subscript>, T<subscript>II</subscript>, &#x02026;, T<subscript>e</subscript></emphasis> during movement. The orientation blocks in these matrices are the matrices <emphasis>&#x003B1;<subscript>b</subscript></emphasis>, <emphasis>&#x003B1;<subscript>I</subscript></emphasis>, <emphasis>&#x003B1;<subscript>II</subscript></emphasis>, &#x02026;, <emphasis>&#x003B1;<subscript>e</subscript></emphasis>. The block of the gripper pole position corresponds to the initial position of the gripper. From this sequence, the geometric, and, in line with it, the temporal motion trajectory of the gripper can be built.</para>
<para>When teaching this action, the operator must move his hand with the glove on it in the manner in which the gripper should move during the process of the surveillance motion, whereas the position of the operator&#x02019;s wrist can be arbitrary and convenient for the operator.</para>
<para>Furthermore, for each case of teaching a new motion, it is necessary to memorize a new volume of motion information in the form of several sets of coordinates mentioned above.</para>
<para>When teaching the motions, e.g. IE surveillance, it is necessary to store a considerable amount of information in the memory of the robot control system (RCS), including:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Values of matrix <emphasis>T,</emphasis> which characterize the position and orientation of the glove in the coordinate system of the operator&#x02019;s workstation, corresponding to initial <emphasis>T<subscript>b</subscript></emphasis>, final <emphasis>T<subscript>e</subscript></emphasis> and several intermediate <emphasis>T<subscript>1</subscript>, T<subscript>II</subscript>, &#x02026;</emphasis> gripper positions, which it must take when performing movements;</para></listitem>
<listitem>
<para>Several images of the object from the glove-mounted TV camera, corresponding to the various gripper positions, to control the accuracy of training;</para></listitem>
<listitem>
<para>Characteristic identification signs, characteristic points (CP) of the different images of the object, at different glove positions during the training process;</para></listitem>
<listitem>
<para>Coordinates of the CP of the images of the object in the base coordinate system;</para></listitem>
<listitem>
<para>Parameters of gripper opening and the permissible compressive force applied to the subject.</para></listitem></itemizedlist>
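<para>The volume of this stored information can be made concrete with a small container sketch (the field names are illustrative assumptions, not the RCS schema):</para>

```python
from dataclasses import dataclass, field
import numpy as np

# Illustrative container for the information stored after teaching one motion,
# following the list above; field names are assumptions, not the RCS schema.
@dataclass
class TaughtMotion:
    name: str
    glove_poses: list = field(default_factory=list)       # matrices T_b, ..., T_e
    reference_images: list = field(default_factory=list)  # accuracy-check images
    image_cps: list = field(default_factory=list)         # 2-D CPs per image
    base_cps: np.ndarray = None                           # object CPs in base CS
    grip_opening: float = 0.0                             # gripper opening
    max_force: float = 0.0                                # permissible grip force

m = TaughtMotion("survey", glove_poses=[np.eye(4), np.eye(4)],
                 base_cps=np.zeros((3, 3)), grip_opening=0.05, max_force=10.0)
print(m.name, len(m.glove_poses))  # survey 2
```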
<para>To reduce the amount of information and to present the motion trajectory in a language close to natural language, it is suggested to use a frame-structured description in the motion shape model (MFM), the basic principles of which are described in the previous papers by the authors [11, 12].</para>
</section>
<section class="lev2" id="sec5-2-3">
<title>5.2.3 Algorithm for Motion Reproduction after Task Training by Demonstration</title>
<para>The specific feature of the robot&#x02019;s motion reproduction in a real IE is that the fragments of elementary movements stored during task training can follow a different sequence, depending on the external conditions at the time of reproduction. Due to this feature, it appears reasonable to teach the robot to perform the different fragments of motion in various combinations.</para>
<para>The number of applied elementary motions (fragments) increases along with the number of reproduced operations. However, this increase will be much smaller than the increase in the number of operations for which the robot is used. It is important to note that proper names can be assigned to the given elementary motions and they can be considered to be operators of the language with the help of which the robot&#x02019;s actions can be described in a sufficiently compact manner.</para>
<para>On the basis of the frame-structured description of the MFM, obtained during task training, the so-called &#x0201C;tuning of the MFM&#x0201D; for a specific task is performed before starting the reproduction of motion by the robot in a particular IE situation.</para>
<para>Practically, this is done by masking, or selection of only those descriptions of motion in the MFM that satisfy the conditions of the task and the external conditions of the situation in the IE, according to their purpose and shapes (semantic and topological features). The selected movements are automatically converted into a sequence of elementary movements <emphasis>g = g(l)</emphasis>.</para>
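<para>The masking step can be sketched as a simple filter over the fragment descriptions (the fragment attributes and the selection predicate below are illustrative assumptions):</para>

```python
# Sketch of "tuning the MFM": keeping only those motion fragments whose
# purpose and shape fit the task and the current IE situation.
# Attribute names and the selection predicate are illustrative assumptions.
fragments = [
    {"name": "transfer",   "purpose": "approach", "shape": "arc",  "safe": True},
    {"name": "tracking",   "purpose": "approach", "shape": "line", "safe": True},
    {"name": "grab",       "purpose": "grasp",    "shape": "line", "safe": True},
    {"name": "wide_swing", "purpose": "approach", "shape": "arc",  "safe": False},
]

def tune(frags, purpose, situation_allows):
    """Mask out fragments that do not fit the task or the IE situation."""
    return [f["name"] for f in frags
            if f["purpose"] == purpose and situation_allows(f)]

# In a cluttered IE, only fragments flagged as safe are selected.
plan = tune(fragments, "approach", lambda f: f["safe"])
print(plan)  # ['transfer', 'tracking']
```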
<para>In the case of the reproduction of the elementary motion along the trained trajectory <emphasis>g = g(l)</emphasis> in systems without sensor offsetting it is necessary to construct a parameter change function <emphasis>l(t)</emphasis> in the area <emphasis>l<subscript>b</subscript> &#x02264; l &#x02264; l<subscript>e</subscript></emphasis>. Typically, the initial and final velocities <emphasis>l(t)</emphasis> are known, and they are most commonly equal to zero:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg102.jpg"/></para>
<para>In the simplest case of the formation of <emphasis>l(t),</emphasis> three intervals can be singled out in it: the &#x0201C;acceleration&#x0201D; interval from the initial velocity (<emphasis>l&#x02032;<subscript>b</subscript></emphasis>) to some permissible speed (<emphasis>l&#x02032;<subscript>d</subscript></emphasis>), the interval of motion at a predetermined speed and the deceleration interval from the predetermined velocity to zero (<emphasis>l&#x02032;<subscript>e</subscript></emphasis>).</para>
<para>During acceleration and deceleration a constant acceleration (<emphasis>l&#x02032;&#x02032;<subscript>d</subscript></emphasis>) must take place. Its value should be such that the value of the velocity <emphasis>g</emphasis>&#x02032; and acceleration <emphasis>g&#x02032;&#x02032;</emphasis> vectors can be physically implementable under the existing restrictions of the control vector <emphasis>(U)</emphasis> of the robot manipulator&#x02019;s motors.</para>
<para>The values of these limitations can be determined based on the consideration of the dynamic model <emphasis>(R)</emphasis> of the robot manipulator, which connects the control vector <emphasis>(U)</emphasis> to the motion dynamics vectors <emphasis>(g, g&#x02032;, g&#x02032;&#x02032;)</emphasis>:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg103.jpg"/></para>
<para>In the case of the motion reproduction transfer of function <emphasis>l = l(t)</emphasis>, it is defined by the following ratio:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>During the acceleration interval (<emphasis>0 &#x02264; t &#x02264; t<subscript>1</subscript></emphasis>), where <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in103.jpg"/></para></listitem>
<listitem>
<para>During the interval of motion at a constant velocity (<emphasis>t<subscript>1</subscript> &#x02264; t &#x02264; t<subscript>2</subscript></emphasis>), where <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in103-1.jpg"/></para></listitem>
<listitem>
<para>During the deceleration interval (<emphasis>t<subscript>2</subscript> &#x02264; t &#x02264; t<subscript>3</subscript></emphasis>), where</para>
<para><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in103-2.jpg"/></para>
</listitem></itemizedlist>
<para>The reproduction of movement over time by the robot is carried out as per the implementation of the obtained function <emphasis>l(t)</emphasis> in the motion trajectory <emphasis>g(l)</emphasis>:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg103-1.jpg"/></para>
<para>To determine the drives&#x02019; control vector <emphasis>U = R (g, g&#x02032;, g&#x02032;&#x02032;)</emphasis> the substitution of values <emphasis>g, g&#x02032;, g&#x02032;&#x02032;</emphasis> by the values of function <emphasis>g = g (l (t))</emphasis> is carried out. This results in the formation of the control function of the motors of the robot manipulator over time.</para>
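<para>The three-interval profile described above can be sketched as follows (assuming zero initial and final velocities and a path long enough for the permissible speed to be reached, as in the simplest case):</para>

```python
# Sketch of the three-interval (trapezoidal) profile l(t) described above:
# constant acceleration up to a permissible speed, cruise, then constant
# deceleration to zero. Assumes l'(0) = l'(t3) = 0 and a path long enough
# that the cruise speed is actually reached (no triangular-profile case).
def trapezoid_l(t, l_b, l_e, v_max, a_max):
    t1 = v_max / a_max                 # end of acceleration
    cruise = (l_e - l_b) - v_max * t1  # distance covered at v_max
    t2 = t1 + cruise / v_max           # end of cruise
    t3 = t2 + t1                       # end of deceleration
    if t <= t1:
        return l_b + 0.5 * a_max * t * t
    if t <= t2:
        return l_b + 0.5 * v_max * t1 + v_max * (t - t1)
    t = min(t, t3)
    return l_e - 0.5 * a_max * (t3 - t) ** 2

# A 2 m path at v_max = 1 m/s, a_max = 2 m/s^2: t1 = 0.5 s, t3 = 2.5 s.
samples = [trapezoid_l(t, 0.0, 2.0, 1.0, 2.0) for t in (0.0, 0.5, 1.25, 2.5)]
print(samples)  # [0.0, 0.25, 1.0, 2.0]
```

<para>Substituting this <emphasis>l(t)</emphasis> into the geometric trajectory then yields <emphasis>g(l(t))</emphasis>, from which <emphasis>g&#x02032;</emphasis>, <emphasis>g&#x02032;&#x02032;</emphasis> and, via the dynamic model <emphasis>R</emphasis>, the control function of the drives are formed, as stated above.</para>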
<para>It should be noted that a human performs natural motions with constant acceleration, in contrast to the robot manipulator, whose motions are characterized by a constant rate (speed). Therefore, the robot has to perform its motions according to its own dynamics, which differ from the dynamic properties of the human operator.</para>
</section>
<section class="lev2" id="sec5-2-4">
<title>5.2.4 Verification of Results for the Task of Training the Telecontrolled (Remote Controlled) Robot</title>
<para>Remotely operated robots must be sufficiently autonomous and trainable to perform operations efficiently in remote environments distant from the human operator. Naturally, task training for space robots must be performed in advance on Earth, while medical robots should be trained outside the operating theater.</para>
<para>At the same time, a possibility for the remote correction of training outcomes must be provided, for possible additional training by the human operator, located at a considerable distance from the robot in space or from a remotely controlled medical robot.</para>
<para>For greater reproduction reliability, it is necessary to implement an automated process control over motion reproduction by the remotely controlled robot using copies of the MFM and MEE from the RCS. Remote control over the robot movements by a human operator must be carried out using prediction, taking into consideration the potential interference and time delays in the communications channel.</para>
<para>Actual remote control of the space robot or the remotely operated medical robot must be carried out as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>With some time advance (prediction), simultaneously with working motion execution by the robot, control over the current robot motion is performed on the simulator, which includes the MEE, MFM and the intelligent MMI system;</para></listitem>
<listitem>
<para>Data from the RCS, arriving with delay, is reflected on the MEE and MFM and is compared to the predicted movement in order to provide the possibility of emergency correction;</para></listitem>
<listitem>
<para>The trajectory of motion relative to the real location of the MEEs is automatically adjusted by the RCS as per sensor signals;</para></listitem>
<listitem>
<para>By human command (or automatically by the RCS) correction of parameters and operational replacement of the motion fragments are carried out in accordance to the pre-trained alternative variants of the working motion.</para></listitem></itemizedlist>
<para>After the robot executes the regular working movement, the actual motion trajectories, in the form of a description in the language of the MFM compiled after an automatic motion analysis in the RCS, are transferred from the RCS to the human operator in the modeling environment. This information, together with the results of the real EE scanning performed by the robot during the execution of the working movement, is used for correction of the MFM and MEE.</para>
<para>In the absence of significant corrections in the process of executing the working movement, the training is considered to be correct, understanding between the human operator and the RCS is considered to be adequate, and the results of the robot task training can be used in the future.</para>
</section>
<section class="lev2" id="sec5-2-5">
<title>5.2.5 Major Advantages of Task Training by Demonstration</title>
<para>The proposed algorithm for task training by demonstration of the motion has a number of advantages over conventional methods of programming trajectories or motion copying, in which the operator, for example, moves the manipulator&#x02019;s gripper through space along the desired trajectory with continuous recording of the current coordinates in the memory of the robot. Let us list the main ones.</para>
<para><emphasis role="strong">Using the professional skills and intuitive experience of the human being.</emphasis> The human being, using his professional skills and intuitive experience, demonstrates motions by hand; these are automatically analyzed by the MMI (for configuration acceptability and safety) and conveyed to the robot in the form of a generalized MFM description. This further develops conventional supervisory control, in which a remote control or a joystick is used to issue generalized commands.</para>
<para><emphasis role="strong">Simplicity and operational efficiency of training.</emphasis> Training is performed by simple movements of the human hand, without any programming of complex spatial displacements. It is more natural and easier for a human being to control the position of his hand during the execution of a movement than to do the same using buttons, a mouse or a joystick. Experiments have shown that practically anyone can learn to control a robot through hand motion, and in just a few hours. The time and cost of training personnel to control and teach robots are thereby significantly reduced.</para>
<para><emphasis role="strong">Relaxation in the requirements for motion accuracy.</emphasis> Instead of exactly copying and recording arrays of coordinates of the motion trajectory during robot manipulator training, the operator specifies only the assignment (name) and shape of the spatial motion trajectory, including the manipulation of items, tools and EE objects. The free movement is set by the human being and is reproduced by the robot at a guaranteed safe distance from the objects; therefore, high accuracy of such movements is not required. When the robot gripper approaches the object, the motion is automatically adjusted according to information from the sensors, including force-torque sensing. There is no need for exact motion copying by remotely operated robots, which are commonly used in a partially nondeterministic EE, where there is no precise information about the location of the robot and obstacles.</para>
<para><emphasis role="strong">Reliability of control over the autonomous robot.</emphasis> One of the advantages lies in the fact that the operator does not need to be close to or present in the working area of the remotely operated robot, for example, inside the space station or on the outer surface of the orbital station, for operational intervention in the robot&#x02019;s actions, thereby avoiding delays and interference in the communications channels. Based on the descriptions of the MFM and MEE, the intelligent RCS can automatically adjust the shape, and even the sequence, of the trained motions of the robot.</para>
<para><emphasis role="strong">Ease of control, correction and transfer of motion experience.</emphasis> The visual presentation of motion in an MFM and the proximity of the frame-structured movement description to natural language allow reliable checking and on-the-fly changes of the composition, sequence and shape of complex working movements directly in the motion description text, using the graphical model of the robot manipulator as well as a human model (&#x0201C;avatar&#x0201D;) [13].</para>
</section>
</section>
<section class="lev1" id="sec5-3">
<title>5.3 Algorithms and Models for Teaching Movements</title>
<section class="lev2" id="sec5-3-1">
<title>5.3.1 Task Training by Demonstration of Movement among<break/>the Objects of the Environment</title>
<para>Robot task training and remote control is performed using the modeling environment, which contains an EE model (MEE), a model of the shape of motion (MFM) and an intelligent system for the multimodal interface (IMI), creating the so-called human &#x0201C;presence effect&#x0201D; in a remote EE using three-dimensional virtual models and images of real items. Using the IMI, the operator can observe 3-D images from different sides, as with holograms, and can touch or move the virtual objects, feeling with his hand the tactile and force impact through simulations of the object&#x02019;s weight or its weight in a zero-gravity environment [14, 16].</para>
<para>Instead of real EE models, a virtual MEE image can be used, as well as a computer hand model controlled by the hand position tracking system (HPTS) included in the IMI [17].</para>
<para>The process of training the robot to move among OE objects implies that the operator&#x02019;s hand, wearing a sensitized glove, executes in OE space the motion that must subsequently be performed by the manipulator gripper. To do this, it is necessary to perform the following operations (in on-line mode) in the training stage:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Demonstrate a fragment of the operator&#x02019;s hand motion among objects of the OE model or of the virtual hand model in the graphical model of the OE (MOE);</para></listitem>
<listitem>
<para>Register through the IMI system and store in memory the fragment of motion containing the timestamps and the corresponding six-dimensional vector <emphasis>(X)</emphasis> of position (<emphasis>x = x(l), y = y(l), z = z(l)</emphasis>) and orientation <emphasis>(</emphasis>&#x003C6;<emphasis><subscript>x</subscript> =</emphasis> &#x003C6;<emphasis><subscript>x</subscript>(l),</emphasis> &#x003C6;<emphasis><subscript>y</subscript> =</emphasis> &#x003C6;<emphasis><subscript>y</subscript>(l),</emphasis> &#x003C6;<emphasis><subscript>z</subscript> =</emphasis> &#x003C6;<emphasis><subscript>z</subscript>(l))</emphasis> of the operator&#x02019;s hand;</para></listitem>
<listitem>
<para>Recognize the motion shape through the IMI system and record the results in the form of a frame-based description in MFM;</para></listitem>
<listitem>
<para>Record the images of objects, obtained through the TV camera on the glove in the process of moving the hand and carry out recognition, identification and measurement of the coordinates of the characteristic points of the objects&#x02019; images and enter this data into the MOE;</para></listitem>
<listitem>
<para>Add to the MFM the information about the location of MOE objects relative to the glove at the moment of execution of the fragment of movement.</para></listitem></itemizedlist>
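<para>The registration step above can be sketched in code; a minimal illustration, with class and field names assumed rather than taken from the source:</para>

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PoseSample:
    """One registered fragment of the demonstrated hand motion."""
    t: float                                   # timestamp, s
    x: float; y: float; z: float               # hand position in OE space
    phi_x: float; phi_y: float; phi_z: float   # hand orientation angles

class MotionRecorder:
    """Accumulates the timestamped 6-D pose samples delivered by the
    IMI tracking system during a demonstration."""
    def __init__(self) -> None:
        self.samples: List[PoseSample] = []

    def register(self, t, x, y, z, phi_x, phi_y, phi_z) -> None:
        self.samples.append(PoseSample(t, x, y, z, phi_x, phi_y, phi_z))

    def duration(self) -> float:
        """Length of the recorded fragment in seconds."""
        if len(self.samples) > 1:
            return self.samples[-1].t - self.samples[0].t
        return 0.0
```

<para>Each stored sample corresponds to one position-and-orientation vector of the glove; the shape-recognition step would then operate on this sequence.</para>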
<para>The position of the sensitized glove relative to objects of the OE model is determined during the process of training by demonstration by solving the so-called &#x0201C;navigation task&#x0201D;. This research offers an original solution of the navigation task for the given case [18].</para>
<para>While training the robot by means of demonstration, the objects (models) of the OE come into view of the TV camera fixed on the sensitized glove. These can be objects of manipulation or foreign objects (obstacles).</para>
<para>For the industrial and aerospace application of robots, the objects generally have regular geometric shapes, angles and edges, which may be used as characteristic features and characteristic points.</para>
<para>Characteristic points (CP) can be small-sized (point) details of objects, which can be easily distinguished on the image, as well as special marks or pointed sources of light. These points are the easiest way to determine the position and orientation of the camera on the sensitized glove relative to the OE objects, that is, to solve the so-called &#x0201C;navigation problem&#x0201D; during the process of robot task training.</para>
<para>Let us consider a case where the position vectors of the object&#x02019;s CPs <emphasis>X<superscript>(i)</superscript></emphasis><break/>(<emphasis>i</emphasis> = 1, 2, 3 is the number of the CP) in a coordinate system associated with the object (CS) are known beforehand. Images of the 3 CPs of the object (<emphasis>X<subscript>im</subscript><superscript>(1)</superscript>, X<subscript>im</subscript><superscript>(2)</superscript>, X<subscript>im</subscript><superscript>(3)</superscript></emphasis>) on the TV camera&#x02019;s image surface are projections of the real points (CP1 ... CP3) on this plane at a variable scale <emphasis>k<superscript>(i)</superscript> = f/(d<superscript>(i)</superscript> &#x02212; f)</emphasis>, inversely proportional to the distance <emphasis>d<superscript>(i)</superscript></emphasis> from the point to the imaging plane of the lens, where <emphasis>f</emphasis> is the focal length of the lens.</para>
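<para>The variable-scale relation <emphasis>k<superscript>(i)</superscript> = f/(d<superscript>(i)</superscript> &#x02212; f)</emphasis> can be illustrated with a short numeric sketch (function names are illustrative, not from the source):</para>

```python
def image_scale(f: float, d: float) -> float:
    """Scale k = f / (d - f) relating a point at distance d from the
    imaging plane of the lens to its image (f: focal length)."""
    assert d - f > 0.0, "point must lie beyond the focal plane"
    return f / (d - f)

def project(point, f: float):
    """Project a 3-D point (x1, x2, x3), given in the camera CS, onto
    the image plane; x3 plays the role of the distance d^(i)."""
    x1, x2, x3 = point
    k = image_scale(f, x3)
    return (k * x1, k * x2)
```

<para>For instance, a point at d = 0.03 m seen through a lens with f = 0.01 m is imaged at scale k = 0.5.</para>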
<para>Let us assume that the CS, associated with the camera lens, and, therefore, with the glove, is located as shown in <link linkend="F5-3">Figure <xref linkend="F5-3" remap="5.3"/></link>, i.e. axes <emphasis>x<subscript>1</subscript></emphasis> and <emphasis>x<subscript>2</subscript></emphasis> of the CS are located in the image plane, <emphasis>x<subscript>3</subscript></emphasis> is perpendicular to them and is directed away from the lens towards the object. In <link linkend="F5-3">Figure <xref linkend="F5-3" remap="5.3"/></link>: <emphasis>x<subscript>1</subscript>, x<subscript>2</subscript>, x<subscript>3</subscript></emphasis> are the axes of the coordinate system associated with the object; <emphasis>x<superscript>(1)</superscript></emphasis>, <emphasis>x<superscript>(2)</superscript></emphasis>, <emphasis>x<superscript>(3)</superscript></emphasis> are the vectors defining the position of characteristic points in the coordinate system of the camera lens; <emphasis>x<subscript>im</subscript><superscript>(2)</superscript><subscript>1</subscript>, x<subscript>im</subscript><superscript>(2)</superscript><subscript>2</subscript></emphasis> are 2 projections of the vector from the center of the CCD matrix to the image of point 2 (this can also be shown for points 1 and 3).</para>
<fig id="F5-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-3">Figure <xref linkend="F5-3" remap="5.3"/></link></label>
<caption><para>Formation of images of 3 characteristic points of the object.</para></caption>
<graphic xlink:href="graphics/ch05_fig003.jpg"/>
</fig>
<para>Then, distance <emphasis>d<superscript>(i)</superscript></emphasis> is equal to the projection of the <emphasis>i</emphasis>-th CP on the third axis of the CS associated with the camera: <emphasis>d<superscript>(i)</superscript> = x<subscript>im3</subscript><superscript>(i)</superscript></emphasis>, and the location of the object <emphasis>X<superscript>(i)</superscript></emphasis> in the image plane will be represented by two-dimensional vectors:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq5-2.jpg"/></para>
<para>where <emphasis role="strong">&#x0005E;T</emphasis> is a (2&#x000D7;4) matrix made up of the first two rows of matrix <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in109.jpg"/> characterizing the rotation and displacement of the CS associated with the camera on the glove relative to the CS of the object, where a is the direction cosine matrix for the rotation angles of the CS and <emphasis>X<subscript>n</subscript> = (X<subscript>n1</subscript>, X<subscript>n2</subscript>, X<subscript>n3</subscript>)</emphasis> is the displacement vector of the CS&#x02019;s origin,</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg109.jpg"/></para>
<para>Then: <emphasis>d<superscript>(i)</superscript> = x<subscript>im3</subscript><superscript>(i)</superscript> = T<subscript>3</subscript> &#x000B7; x<superscript>(i)</superscript>,</emphasis> where <emphasis>T<subscript>3</subscript></emphasis> is the third row of matrix <emphasis>T</emphasis>.</para>
<para>It is obvious that matrix <emphasis>T</emphasis> completely determines the spatial position of the glove in the CS associated with the object, and its elements can be found by solving the abovementioned navigation problem of determining the spatial position of the glove during training.</para>
<para>During the CP image processing, vectors <emphasis>x<subscript>im</subscript><superscript>(i)</superscript>, i</emphasis> = 1, 2, 3 are determined, so the left side of Equations <emphasis role="up">( <xref rid="#x1-10002r2"><!--ref: GrindEQ__5_2_--></xref>)</emphasis> is known, and these equations represent a system of six equations concerning 12 unknown elements of matrix <emphasis>T</emphasis>, which are the three components of vector <emphasis>X<subscript>n</subscript></emphasis> and nine elements of matrix a.</para>
<para>Since the elements of matrix a are linked by six more equations of orthogonality and orthonormality, there are a total of 12 equations, that is, as many as the unknowns. These are obviously sufficient to determine the desired matrix <emphasis>T.</emphasis></para>
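<para>One way to realize this solution numerically, though not necessarily the method used by the authors, is to stack the six projection equations and the six orthogonality/orthonormality constraints on matrix a into a 12-residual nonlinear least-squares problem in the 12 unknowns; a sketch, assuming NumPy/SciPy and the coordinate conventions above:</para>

```python
import numpy as np
from scipy.optimize import least_squares

def solve_navigation(x_obj, x_im, f):
    """Recover the glove pose (direction-cosine matrix a, shift X_n)
    from the images of 3 characteristic points: 6 projection equations
    plus 6 orthonormality constraints, 12 equations for 12 unknowns.
    x_obj: (3,3) CP coordinates in the object CS (one CP per row).
    x_im:  (3,2) measured two-dimensional image-plane vectors.
    f: focal length. Returns (a, X_n)."""
    def residuals(p):
        a = p[:9].reshape(3, 3)   # 9 direction cosines
        xn = p[9:]                # 3 components of the shift X_n
        res = []
        for X, xi in zip(x_obj, x_im):
            xc = a @ X + xn            # CP expressed in the camera CS
            k = f / (xc[2] - f)        # variable scale k = f/(d - f)
            res.extend([k * xc[0] - xi[0], k * xc[1] - xi[1]])
        # orthonormality of the direction-cosine matrix: a a^T = I
        g = a @ a.T - np.eye(3)
        res.extend([g[0, 0], g[1, 1], g[2, 2], g[0, 1], g[0, 2], g[1, 2]])
        return res
    # start from an unrotated camera at a nominal standoff (assumed)
    p0 = np.concatenate([np.eye(3).ravel(), np.array([0.0, 0.0, 0.5])])
    sol = least_squares(residuals, p0)
    return sol.x[:9].reshape(3, 3), sol.x[9:]
```

<para>A closed-form or recursive solver could equally be substituted; the sketch only shows that the 12 equations suffice to pin down the pose.</para>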
<para>During the &#x0201C;training&#x0201D; motion of the operator&#x02019;s hand at a given frequency, a procedure involving an operation for the selection of the object&#x02019;s CP image and an operation for calculating the values of two-dimensional vectors <emphasis>x<subscript>im</subscript><superscript>(i)</superscript>, i</emphasis> = 1, 2, 3 and their position in the image plane must be performed.</para>
<para>As a result of these actions, a sequence of vector triplets, from the starting one <emphasis>x<subscript>imb</subscript><superscript>(i=1,2,3)</superscript></emphasis> to the finishing one <emphasis>x<subscript>ime</subscript><superscript>(i=1,2,3)</superscript></emphasis>: (<emphasis>x<subscript>imb</subscript><superscript>(1)</superscript>, x<subscript>imb</subscript><superscript>(2)</superscript>, x<subscript>imb</subscript><superscript>(3)</superscript></emphasis>); (<emphasis>x<subscript>imI</subscript><superscript>(1)</superscript>, x<subscript>imI</subscript><superscript>(2)</superscript>, x<subscript>imI</subscript><superscript>(3)</superscript></emphasis>); (<emphasis>x<subscript>imII</subscript><superscript>(1)</superscript>, x<subscript>imII</subscript><superscript>(2)</superscript>, x<subscript>imII</subscript><superscript>(3)</superscript></emphasis>); &#x02026; (<emphasis>x<subscript>ime</subscript><superscript>(1)</superscript>, x<subscript>ime</subscript><superscript>(2)</superscript>, x<subscript>ime</subscript><superscript>(3)</superscript></emphasis>), is accumulated in the IMI database, corresponding to the sequence of the glove&#x02019;s positions during the movement of the operator&#x02019;s hand, which will later be reproduced by the robot. Each element of this sequence carries enough information to solve the navigation task, that is, to obtain the sequence <emphasis>T<subscript>b</subscript>, T<subscript>I</subscript>, T<subscript>II</subscript>, &#x02026;, T<subscript>e</subscript></emphasis> of matrix values, which is the result of training.</para>
<para>After training, the robot reproduces a gripper motion based on sequence <emphasis>T<subscript>b</subscript>, T<subscript>I</subscript>, T<subscript>II</subscript>, &#x02026;, T<subscript>e</subscript></emphasis> using a motion correction algorithm based on the signals from the camera, located in the gripper, by solving the so-called &#x0201C;correction task&#x0201D; of the gripper relative to real OE objects.</para>
</section>
<section class="lev2" id="sec5-3-2">
<title>5.3.2 Basic Algorithms for Robot Task Training by Demonstration</title>
<para>The most typical example of the robot&#x02019;s interaction with OE objects is the manipulation of arbitrarily oriented objects. In practice, the task of grabbing objects has several cases. The simplest case is when there is one known object. The robot must be trained to perform an operation of grabbing this object irrespective of any minor changes in its position and orientation.</para>
<para>A more complicated case is when the position and orientation of a known object are not known beforehand. The most typical case is when there are several known objects with a priori unknown positions and orientations. And an even more complex task is when among the known objects there are unknown objects and obstacles that may hinder the grabbing procedure.</para>
</section>
<section class="lev2" id="sec5-3-3">
<title>5.3.3 Training Algorithm for the Environmental Survey Motion</title>
<para>During the training to perform the environmental survey motion, the operator&#x02019;s hand executes one or more types of search movements: rotation of the hand at two angles, zigzag motion, etc. Information about the typical search motions is recorded in the MFM. Survey motion may consist of several fragments of different movements. The sequence and shape of these motions, dependent on the task, are determined during the training phase and stored in the MFM. After the execution of separate motion fragments, a break can be taken for further analysis of the OE objects&#x02019; images.</para>
<para>Any OE object recognition algorithm suitable for a particular purpose can be used, including a structural recognition algorithm [19].</para>
<para>It is necessary to note that object image analysis must include the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Recognition of the target subject through a set of characteristic features (XT<emphasis>1</emphasis>, XT<emphasis>2</emphasis>, &#x02026;XT<emphasis>k</emphasis>) that are sufficient to identify it using the description stored in the MOE;</para></listitem>
<listitem>
<para>Selection of a set of reference points (XT<emphasis>1</emphasis>, XT<emphasis>2</emphasis>, &#x02026; XT<emphasis>n</emphasis>), sufficient for navigating the robot gripper in the space of the real OE, from among the set of points (XT<emphasis>1</emphasis>, XT<emphasis>2</emphasis>, &#x02026; XT<emphasis>k</emphasis>); usually no more than <emphasis>n</emphasis> = 4&#x02013;6 points are needed, depending on their location.</para></listitem></itemizedlist>
<para>If the number of CPs observed on the object is insufficient <emphasis>(k &#x0003C; n),</emphasis> it is necessary to perform the following search motions:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Change the position or orientation of the camera on the glove so that <emphasis>k = n</emphasis> or change the CP filter parameters in the recognition algorithm;</para></listitem>
<listitem>
<para>Change the observation conditions or camera parameters for reliable detection of CPs, such as the lighting of the operator workstation or the focus of the camera lens;</para></listitem>
<listitem>
<para>Add artificial (contrasting, color) marks on the graphical model or on the object model, so that the use of such marks on real objects can be recommended.</para></listitem></itemizedlist>
<para>If <emphasis>k &#x02265; n,</emphasis> it is possible to proceed directly to the calculation of the spatial position and orientation of the glove relative to the object, in accordance with the algorithm of the &#x0201C;navigation task&#x0201D; (see above).</para>
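<para>The fallback logic for an insufficient number of observed CPs can be summarized in a short sketch; the action names are illustrative, not from the source:</para>

```python
def plan_cp_search(k_observed, n_required=4):
    """Decide the next training actions from the number of observed
    characteristic points (k) versus the number required for the
    navigation task (n); action names are assumed for illustration."""
    if k_observed >= n_required:
        return ["solve_navigation_task"]
    return [
        "reposition_or_reorient_camera",   # bring more CPs into the view
        "adjust_cp_filter_parameters",     # in the recognition algorithm
        "change_lighting_or_focus",        # observation / camera settings
        "add_artificial_marks",            # contrasting or color marks
    ]
```
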
<para>Once the specified object is detected and identified and the position and orientation of the hand (glove) relative to this object is determined, the training to execute the first survey motion is deemed finished.</para>
<para>The purpose of the next motion the robot is trained to execute involves a gripper motion to the so-called &#x0201C;convenient for observation&#x0201D; position. In this position, the maximum identification reliability and measurement accuracy of the gripper position relative to the object are achieved.</para>
<para>The variants of the shift from the starting point of object detection to the position which is &#x0201C;convenient for observation&#x0201D; must be shown by the movement of the operator&#x02019;s wrist using his intuitive experience.</para>
<para>There is also an option of task training by demonstration, for survey movement, performed through natural head movements. In this case, the camera with a reference device is fixed on the operator&#x02019;s head.</para>
<para>The training process ends automatically, for example, upon a signal from the IMI system after reaching the specified recognition reliability and measurement accuracy parameters for the position of a hand or a head relative to the OE object. A training halt signal can be given by voice (using the speech recognition system of the IMI) or by pressing the button on the glove. In this case, the object coordinates defined by its image are recorded in the MFM as a vector of coordinates (<emphasis>X<subscript>0</subscript></emphasis>).</para>
</section>
<section class="lev2" id="sec5-3-4">
<title>5.3.4 Training Algorithm for Grabbing a Single Object</title>
<para>In this case, the grabbing process consists of three movements:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Gripper motion from the initial position to the position &#x0201C;convenient for grabbing&#x0201D; the object;</para></listitem>
<listitem>
<para>Object gripping motion, for example, a simple translational motion along the gripper axis;</para></listitem>
<listitem>
<para>Object grabbing motion, such as a simple closing of the gripper, exerting a given compression force.</para></listitem></itemizedlist>
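<para>The three movements can be represented as an ordered plan; a minimal sketch with assumed names:</para>

```python
from enum import Enum, auto

class GrabPhase(Enum):
    """The three trained movements of the grabbing process."""
    APPROACH = auto()  # to the position "convenient for grabbing"
    GRIP = auto()      # translational motion along the gripper axis
    CLOSE = auto()     # closing with a given compression force

def grab_plan():
    """Return the grabbing movements in the order they are trained."""
    return [GrabPhase.APPROACH, GrabPhase.GRIP, GrabPhase.CLOSE]
```
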
<para>Let us consider the task training to perform only the first action, where the operator freely moves his hand, with the sensitized glove on, to the object (model) from the initial position, approaching from the side most convenient for grabbing, and stops it at a short distance from the object with the desired orientation. Information about the motion path and the hand position relative to the object, at least at the end point of the motion, is memorized in the MFM through the IMI system; this is necessary for adjusting the robot&#x02019;s gripper position relative to the object during motion reproduction. It is also desirable that at least 1 or 2 CPs of the object&#x02019;s image get into the camera&#x02019;s view in the &#x0201C;convenient for grabbing&#x0201D; gripper position, so that the grabbing process can be supervised.</para>
<para>Training of the grabbing motion is performed along the simplest path, in order to reduce guidance inaccuracy. If the gripper is equipped with object detection sensors, the movement ends upon receiving signals from these sensors.</para>
<para>During training to grab objects, it is necessary to memorize the transverse dimensions of the object at the grabbing spot and the gripper compression force sufficient for holding the object without damaging it. This operation can be implemented using additional force-torque and tactile sensors in the robot gripper.</para>
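<para>A force-limited closing loop of this kind might look as follows; the gripper interface, units and thresholds are assumptions for illustration, not the authors&#x02019; implementation:</para>

```python
def close_gripper(gripper, step=0.5, target_force=2.0,
                  object_width=None, max_steps=200):
    """Force-limited closing sketch. The assumed `gripper` interface:
    .force and .width readings from the tactile / force-torque sensors,
    .close(step) commanding one closing increment (mm). Stops when the
    memorized compression force is reached; reports failure if the jaws
    pass the memorized transverse dimension without registering contact."""
    for _ in range(max_steps):
        if gripper.force >= target_force:
            return "holding"   # force sufficient to hold without damage
        if object_width is not None and object_width >= gripper.width:
            return "missed"    # jaws narrower than the object: no contact
        gripper.close(step)
    return "timeout"
```

<para>Passing the memorized transverse dimension as <code>object_width</code> lets the loop detect a misaligned grasp instead of crushing air.</para>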
<para>When multiple OE objects are present, training implies a more complex process of identifying the objects&#x02019; images, and requires training additional motions, such as obstacle avoidance or a change to an altitude convenient for survey in case of shading, glare or other interference with image recognition by the camera on the glove, as well as by the camera in the robot manipulator&#x02019;s gripper during the reproduction of movements.</para>
</section>
<section class="lev2" id="sec5-3-5">
<title>5.3.5 Special Features of the Algorithm for Reproduction of Movements</title>
<para>As a result of performing the required number of training movements by the human hand, &#x0201C;motion experience&#x0201D; is formed, which is accumulated in the form of a frame-structured description in the MFM, stored in the memory of the intelligent IMI system.</para>
<para>The transfer of the &#x0201C;motion experience&#x0201D; from the human to the robot occurs, for example, by simply copying the MFM and MEE from the IMI memory to the RCS memory or even by transferring this data to the remotely controlled robot over communications channels. Of course, preliminary checking of training outcomes is performed, for example, on a graphical model of the robot.</para>
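<para>The copying of the &#x0201C;motion experience&#x0201D; can be sketched as serializing frame-structured MFM entries for transfer into the RCS memory or over a communications channel; the field names below are assumed for illustration:</para>

```python
import json

# A frame-structured MFM entry as a plain mapping (field names assumed):
# each trained motion fragment carries its purpose, trajectory shape and
# the object context recorded during demonstration.
mfm_frame = {
    "name": "approach_object",
    "purpose": "convenient_for_grabbing",
    "trajectory_shape": "free_curve",
    "object_context": {"object": "container", "cp_count": 3},
}

def transfer_mfm(frames):
    """'Motion experience' transfer sketch: serialize the MFM frames and
    restore them on the receiving (RCS) side unchanged."""
    payload = json.dumps(frames)   # what would travel over the channel
    return json.loads(payload)
```
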
<para>Before the robot performs the trained movements, in accordance with the assigned task and the EE conditions, the MFM is tuned, as already mentioned (in Part I of the current paper), for example, by masking (searching) among the total volume of MFM data for the required types of motions. Descriptions of motions, selected according to the intended purpose and shape of the trajectory, are converted by the RCS into typical motion trajectories for their reproduction by the robot in the real EE.</para>
<para>When the robot-manipulator reproduces motions in a real EE, after training by demonstration, it is possible to execute, for example, the following typical motion trajectories:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Survey movement in combination with EE image analysis in order to identify the object to be taken;</para></listitem>
<listitem>
<para>Shifting of the gripper into the &#x0201C;convenient for observation&#x0201D; position;</para></listitem>
<listitem>
<para>Corrective gripper movement relative to the object based on the signals from the robot&#x02019;s sensors;</para></listitem>
<listitem>
<para>Shifting of the gripper to the &#x0201C;convenient for taking&#x0201D; position;</para></listitem>
<listitem>
<para>Motion for grabbing the real object;</para></listitem>
<listitem>
<para>Motion for taking the object.</para></listitem></itemizedlist>
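<para>The reproduction sequence above can be sketched as a simple step runner; step and handler names are illustrative, not from the source:</para>

```python
REPRODUCTION_SEQUENCE = [
    "survey_with_image_analysis",         # identify the object to be taken
    "move_to_convenient_for_observation",
    "corrective_motion_from_sensors",
    "move_to_convenient_for_taking",
    "grab_object",
    "take_object",
]

def reproduce(handlers):
    """Run the typical trajectories in order; `handlers` maps each step
    name to a callable returning True on success. Returns the name of
    the first failing step, or "done"."""
    for step in REPRODUCTION_SEQUENCE:
        if not handlers[step]():
            return step
    return "done"
```
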
<para>Before the work starts, a complete check of the TSHP and the telecontrol system is carried out. The operation of the TSHP is first checked using a graphical model (GM) at the control station, without using the robot or exchanging information over the communications lines. The training outcomes are checked using the surveillance MFM or the manipulation MFM located in the RCS, without switching on the robot at this stage. If necessary, additional adjustment of the MFM is performed through task training by demonstration of natural human-hand movements and their storage in the MFM of the RCS.</para>
<para>The robot is then switched on and executes the motions of the initial EE inspection, the selection of objects, and the selection of positions convenient for grabbing or for visual control over object grabbing and manipulation, as well as safe obstacle avoidance, before the transition to remote control mode is performed.</para>
<para>The human operator sits in front of the monitor screen, which displays a graphical model or a real object image, and controls the robot through natural movements of the head and hand with the glove.</para>
</section>
<section class="lev2" id="sec5-3-6">
<title>5.3.6 Some Results of Experimental Studies</title>
<para>The effectiveness of the proposed training technology using demonstrations of the movements, the algorithms and theoretical calculations was tested on the basis of the &#x0201C;virtual reality&#x0201D; environment at the laboratory of Information Technology in Robotics, SPIIRAS (St. Petersburg) [20].</para>
<para>The hardware-software environment includes:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Two six-degree-of-freedom robotic manipulators of the &#x00AB;Puma&#x00BB; class, equipped with remotely operated stereo cameras and force-torque sensing;</para></listitem>
<listitem>
<para>Models of the fragment of the space station surface, two graphic stations to work with three-dimensional models of the external environment (MEE);</para></listitem>
<listitem>
<para>Intelligent multimodal interface (IMI) with a system for tracking hand movements (TSHP) and a system for tracking the head motions (THM) of the human operator.</para></listitem></itemizedlist>
<para>The &#x0201C;Virtual reality&#x0201D; environment enables the performance of experimental studies of various information technology approaches for remote control and task training of robots:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>&#x0201C;Immersion technologies&#x0201D; of the human operator in the remote environment using the robot-like device that moves surveillance stereo cameras in the room with models of the fragment of the space station surface and containers for scientific equipment;</para></listitem>
<listitem>
<para>&#x0201C;Virtual observer&#x0201D; technologies using the model of the freely flying machine (equipped with the surveillance camera, which allows the examination of the three-dimensional graphical model of the space station), as well as the simulation of an astronaut&#x02019;s work in outer space;</para></listitem>
<listitem>
<para>Technologies for training and remotely controlling a space (medical) robot manipulator with a force-torque sensing system, which provides operational safety during manipulative operations, reflection of forces and torques on the control handle, including when working with virtual objects.</para></listitem></itemizedlist>
<para>Experimental studies were performed on some algorithms for training by demonstration and remote control of a robot manipulator, including:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Training of the robot manipulator to execute survey motions through motions of the human head;</para></listitem>
<listitem>
<para>Scanning the surroundings by the robot and remotely operated camera on the glove;</para></listitem>
<listitem>
<para>Using the IMI for training by demonstration of hand movements and human voice commands;</para></listitem>
<listitem>
<para>Training the robot manipulator to grab items by demonstration of the operator&#x02019;s hand movements.</para></listitem></itemizedlist>
<para>The motion reproduction of the robot manipulator among the real EE objects based on the use of the virtual graphical models of the EE and the robot manipulator with force-torque sensing system was also practiced in the experimental environment.</para>
</section>
<section class="lev2" id="sec5-3-7">
<title>5.3.7 Overview of the Environment for Task Training by Demonstration of the Movements of the Human Head</title>
<para>A functional diagram of the equipment for remotely monitoring the EE is shown on <link linkend="F5-4">Figure <xref linkend="F5-4" remap="5.4"/></link>.</para>
<fig id="F5-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-4">Figure <xref linkend="F5-4" remap="5.4"/></link></label>
<caption><para>Functional diagram of robot task training regarding survey motions and object grabbing motions using THM and RCS.</para></caption>
<graphic xlink:href="graphics/ch05_fig004.jpg"/>
</fig>
<para>The operator, located in the control room, sets the coordinates and orientation of the manipulator gripper, and of the remotely operated camera on it, using the head tracking system (THM), and observes the obtained EE image on the monitor screen.</para>
<para>Before starting, the human operator must be able to verify the THM in an off-line mode. For this purpose, a graphical model (GM) and a special communications module for controlling 6 coordinates were developed. Training the robot manipulator to execute EE surveillance motions by demonstration is carried out in the following way (<link linkend="F5-5">Figure <xref linkend="F5-5" remap="5.5"/></link>).</para>
<fig id="F5-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-5">Figure <xref linkend="F5-5" remap="5.5"/></link></label>
<caption><para>Robot task training to execute survey movements, based on the movements of the operator&#x02019;s head: Training the robot to execute survey motions to inspect the surroundings (a); Training process (b); Reproduction of earlier trained movements (c).</para></caption>
<graphic xlink:href="graphics/ch05_fig0005.jpg"/>
</fig>
<para>The human operator performs the EE inspection based on his personal experience in object surveillance. The robot repeats the action, using the surveillance procedure and shape of the trajectory of the human head movement. In this case, the cursor can first be moved around the obtained panoramic image, increasing (decreasing) the scale of separate fragments, and then, after accuracy validation, the actual motion of the robot manipulator can be executed.</para>
</section>
<section class="lev2" id="sec5-3-8">
<title>5.3.8 Training the Robot to Grab Objects by Demonstration of Operator Hand Movements</title>
<para>There are several variations of the implementation of the &#x0201C;sensitized glove&#x0201D; (<link linkend="F5-6">Figure <xref linkend="F5-6" remap="5.6"/></link>): a remotely operated camera in a bracelet with control points and laser pointers, a bracelet with active control points (infrared diodes), and a manipulation object (a stick with active control points) [21].</para>
<fig id="F5-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-6">Figure <xref linkend="F5-6" remap="5.6"/></link></label>
<caption><para>Variations of the &#x0201C;Sensitized Glove&#x0201D; construction.</para></caption>
<graphic xlink:href="graphics/ch05_fig0006.jpg"/>
</fig>
<para>When training by demonstration of human hand movements through a sensitized glove with a camera and control points, a greater range and closeness to natural movements is achieved than with joysticks or a handle such as the &#x0201C;Master-Arm&#x0201D; (<link linkend="F5-7">Figure <xref linkend="F5-7" remap="5.7"/></link>).</para>
<fig id="F5-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-7">Figure <xref linkend="F5-7" remap="5.7"/></link></label>
<caption><para>Using the special glove for training the robot manipulator.</para></caption>
<graphic xlink:href="graphics/ch05_fig0007.jpg"/>
</fig>
<fig id="F5-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-8">Figure <xref linkend="F5-8" remap="5.8"/></link></label>
<caption><para>Stand for teaching robots to execute motions of surveillance and grabbing objects.</para></caption>
<graphic xlink:href="graphics/ch05_fig008.jpg"/>
</fig>
<para>This provides for the natural coordination of movements of the hand and head of the human operator. Using the head, the human controls the movement of the remotely operated surveillance camera, fixed, for example, on an additional manipulator, and with the hand he controls the position and orientation of the main robot gripper (<link linkend="F5-8">Figure <xref linkend="F5-8" remap="5.8"/></link>).</para>
<fig id="F5-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-9">Figure <xref linkend="F5-9" remap="5.9"/></link></label>
<caption><para>Training of motion coordination of two robot manipulators by natural movements of human head and hand.</para></caption>
<graphic xlink:href="graphics/ch05_fig0009.jpg"/>
</fig>
<fig id="F5-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-10">Figure <xref linkend="F5-10" remap="5.10"/></link></label>
<caption><para>Training with the use of a system for the recognition of hand movements and gestures without &#x0201C;Sensitized Gloves&#x0201D; against the real background of the operator&#x02019;s work station.</para></caption>
<graphic xlink:href="graphics/ch05_fig0010.jpg"/>
</fig>
<para>The coordination of simultaneous control using the operator&#x02019;s hand and head during training, and remote control through natural human operator movements, was put into practice in order to control complex objects (<link linkend="F5-9">Figure <xref linkend="F5-9" remap="5.9"/></link>).</para>
<para>A new prototype of the intelligent IMI equipment, with markerless recognition of the operator&#x02019;s hand during manual control and training by demonstration of natural hand movements, was studied experimentally (<link linkend="F5-10">Figure <xref linkend="F5-10" remap="5.10"/></link>). In the future, it is planned to continue research on new algorithms for the training and remote control of intelligent mechatronic systems, based on advanced intelligent multimodal human-machine interface systems and on new motion modeling principles using frame-structured MFM descriptions, including for medical robots, mechatronic systems and telemedicine [22].</para>
</section>
</section>
<section class="lev1" id="sec5-4">
<title>5.4 Conclusions</title>
<para>A new information technology for training robots (mechatronic systems) by demonstration of movements is based on a frame-structured data representation in the models of the shape of the movements, which makes it easy to adjust the movement&#x02019;s semantics and topology both for the human operator and for the autonomous sensitized robot.</para>
<para>Algorithms for training by demonstration of the natural movements of the human operator&#x02019;s hand, using a television camera fixed on the so-called &#x0201C;sensitized glove&#x0201D;, allow the application during the training process not only of graphical models of the surrounding objects but also of full-scale models, which gives the operator the possibility to practice optimal motions of the remote-controlled robots under real conditions.</para>
<para>It is sufficient to demonstrate the shape of a human hand movement to the intelligent system of the IMI and to enter it into the MFM, and then this movement can be executed automatically, for example, by a robot manipulator with adjustment and navigation among the surrounding objects based on the signals from the sensors.</para>
</section>
<section class="lev1" id="sec5-5">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>G. Herzinger, G. Grunwald, B. Brunner and J Heindl, &#x02018;A sensor-based telerobot system for the space robot experiment ROTEX&#x02019;, Proc. 2<superscript>nd</superscript> Internat. Symp. on Experimental Robots (ISER). Toulouse. France. 1991. June 25&#x02013;27. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+Herzinger%2C+G%2E+Grunwald%2C+B%2E+Brunner+and+J+Heindl%2C+%27A+sensor-based+telerobot+system+for+the+space+robot+experiment+ROTEX%27%2C+Proc%2E+2nd+Internat%2E+Symp%2E+on+Experimental+Robots+%28ISER%29%2E+Toulouse%2E+France%2E+1991%2E+June+25-27%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. Herzinger, J. Heindl, K. Landzettel and B. Brunner, &#x02018;Multisensory shared autonomy &#x02013; a key issue in the space robot technology experiment ROTEX&#x02019;, Proc. IEEE Conf. on Intelligent Robots and Systems (IROS). Raleigh. 1992. July 7&#x02013;10. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+Herzinger%2C+J%2E+Heindl%2C+K%2E+Landzettel+and+B%2E+Brunner%2C+%27Multisensory+shared+autonomy+-+a+key+issue+in+the+space+robot+technology+experiment+ROTEX%27%2C+Proc%2E+IEEE+Conf%2E+on+Intelligent+Robots+and+Systems+%28IROS%29%2E+Raleigh%2E+1992%2E+July+7-10%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Klaus H. Strobl, Wolfgang Sepp, Eric Wahl, Tim Bodenm&#x00FC;ller, Michael Suppa, Javier F. Seara, Gerd Hirzinger, &#x02018;The DLR Multisensory Hand-Guided Device: the Laser Stripe Profiler&#x02019;, ICRA 2004: 1927&#x02013;1932. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Klaus+H%2E+Strobl%2C+Wolfgang+Sepp%2C+Eric+Wahl%2C+Tim+Bodenm%FCller%2C+Michael+Suppa%2C+Javier+F%2E+Seara%2C+Gerd+Hirzinger%2C+%27The+DLR+Multisensory+Hand-Guided+Device%3A+the+Laser+Stripe+Profiler%27%2C+ICRA+2004%3A+1927-1932%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Michael Pardowitz, Steffen Knoop, R&#x00FC;diger Dillmann, R. D. Zollner, &#x02018;Incremental Learning of Tasks From User Demonstrations, Past Experiences, and Vocal Comments&#x02019;, IEEE Transactions on Systems, Man, and Cybernetics, Part B 37(2): 322&#x02013;332 (2007). <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Michael+Pardowitz%2C+Steffen+Knoop%2C+R%FCdiger+Dillmann%2C+R%2E+D%2E+Zollner%2C+%27Incremental+Learning+of+Tasks+From+User+Demonstrations%2C+Past+Experiences%2C+and+Vocal+Comments%27%2C+IEEE+Transactions+on+Systems%2C+Man%2C+and+Cybernetics%2C+Part+B+37%282%29%3A+322-332+%282007%29%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R. Dillmann, O. Rogalla, M. Ehrenmann, R. Zoellner and M. Bordegoni, &#x02018;Learning robot behavior and skills based on human demonstration and advice&#x02019;, &#x02018;The machine learning paradigm&#x02019;, 9th International Symposium of Robots Research (ISSR), pp. 229&#x02013;238, Oct., 1999. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2E+Dillmann%2C+O%2E+Rogalla%2C+M%2E+Ehrenmann%2C+R%2E+Zoellner+and+M%2E+Bordegoni%2C+%27Learning+robot+behavior+and+skills+based+on+human+demonstration+and+advice%27%2C+%27The+machine+learning+paradigm%27%2C+9th+International+Symposium+of+Robots+Research+%28ISSR%29%2C+pp%2E+229-238%2C+Oct%2E%2C+1999%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Minsky, &#x02018;Frames for knowledge representation&#x02019;, M.: &#x0201C;Energia&#x0201D;, 1979. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Minsky%2C+%27Frames+for+knowledge+representation%27%2C+M%2E%3A+%22Energia%22%2C+1979%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. E. Chernakova, F. M. Kulakov, A. I. Nechayev, &#x02018;Simulation of the outdoor environment for training by demonstration&#x02019;, Collected works of the SPII RAS. Issue No. 1.: St. Petersburg: SPIIRAS, 2001. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+E%2E+Chernakova%2C+F%2E+M%2E+Kulakov%2C+A%2E+I%2E+Nechayev%2C+%27Simulation+of+the+outdoor+environment+for+training+by+demonstration%27%2C+Collected+works+of+the+SPII+RAS%2E+Issue+No%2E+1%2E%3A+St%2E+Petersburg%3A+SPIIRAS%2C+2001%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. E. Chernakova, F. M. Kulakov, A. I. Nechayev, A. I. Burdygin, &#x02018;Multiphase method and algorithm for measuring the spatial coordinates of objects for training the assembly robots&#x02019;, Collected works of SPIIRAS. Issue No. 1.: St. Petersburg: SPIIRAS, 2001. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+E%2E+Chernakova%2C+F%2E+M%2E+Kulakov%2C+A%2E+I%2E+Nechayev%2C+A%2E+I%2E+Burdygin%2C+%27Multiphase+method+and+algorithm+for+measuring+the+spatial+coordinates+of+objects+for+training+the+assembly+robots%27%2C+Collected+works+of+SPIIRAS%2E+Issue+No%2E+1%2E%3A+St%2E+Petersburg%3A+SPIIRAS%2C+2001%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. M. Kulakov, &#x02018;Technology for the Creation of Virtual Objects in the Real World&#x02019;, Workshop Conference, Binghamton University, NY, 4&#x02013;7 March, 2002. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+M%2E+Kulakov%2C+%27Technology+for+the+Creation+of+Virtual+Objects+in+the+Real+World%27%2C+Workshop+Conference%2C+Binghamton+University%2C+NY%2C+4-7+March%2C+2002%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Advanced System for Learning and Optimal Control of Assembly Robots, edited by F. M. Kulakov. SPIIRAS, St. Petersburg, 1999, pp. 76 <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Advanced+System+for+Learning+and+Optimal+Control+of+Assembly+Robots%2C+edited+by+F%2E+M%2E+Kulakov%2E+SPIIRAS%2C+St%2E+Petersburg%2C+1999%2C+pp%2E+76" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. M. Kulakov, V. B. Naumov, &#x02018;Application of fuzzy logic and sensitized glove for programming robots&#x02019;, Collected works of the III International Conference &#x0201C;Current Problems of Informatics-98&#x0201D;. Voronezh, pp. 59&#x02013;61. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+M%2E+Kulakov%2C+V%2E+B%2E+Naumov%2C+%27Application+of+fuzzy+logic+and+sensitized+glove+for+programming+robots%27%2C+Collected+works+of+the+III+International+Conference+%22Current+Problems+of+Informatics-98%22%2E+Voronezh%2C+pp%2E+59-61%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. E. Chernakova, F. M. Kulakov, A. I. Nechayev, &#x02018;Training robot by method of demonstration with the use of &#x0201C;sensitized&#x0201D; glove&#x02019;, Works of the First International Conference on Mechatronics and Robots. St. Petersburg, May 29 &#x02013; June 2, 2000, pp. 155&#x02013;164. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+E%2E+Chernakova%2C+F%2E+M%2E+Kulakov%2C+A%2E+I%2E+Nechayev%2C+%27Training+robot+by+method+of+demonstration+with+the+use+of+%22sensitized%22+glove%27%2C+Works+of+the+First+International+Conference+on+Mechatronics+and+Robots%2E+St%2E+Petersburg%2C+May+29+-+June+2%2C+2000%2C+pp%2E+155-164%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. E. Chernakova, A. V. Timofeev, A. I. Nechaev, M. V. Litvinov, I. E. Gulenko, V. A. Andreev, &#x02018;Multimodal Man-Machine Interface and Virtual Reality for Assistive Medical Systems&#x02019;, International Journal &#x0201C;Information Theories and Applications&#x0201D; (iTECH-2006). Varna, Bulgaria, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Chernakova+S%2E+E%2E%2C+Timofeev+A%2E+V%2E%2C+Nechaev+A%2E+I%2E%2C+Litvinov+M%2E+V%2E%2C+Gulenko+I%2E+E%2E%2C+Andreev+V%2E+A%2E%2C+%27Multimodal+Man-Machine+Interface+and+Virtual+Reality+for+Assistive+Medical+Systems%27%2C+International+Journal+%22Information+Theories+and+Applications%22+%28iTECH-2006%29%2E+Varna%2C+Bulgaria%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. M. Kulakov, S. E. Chernakova, &#x02018;Information technology of task training robots by demonstration of motions&#x02019;, Part I &#x0201C;Concept, principles of modeling movements&#x0201D;, &#x0201C;Mechatronics, Automation, Control&#x0201D;. Moscow, No. 6, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+M%2E+Kulakov%2C+S%2E+E%2E+Chernakova%2C+%27Information+technology+of+task+training+robots+by+demonstration+of+motions%27%2C+Part+I+%22Concept%2C+principles+of+modeling+movements%22%2C+%22Mechatronics%2C+Automation%2C+Control%22%2E+Moscow%2C+No%2E+6%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. M. Kulakov, &#x02018;Potential techniques of management supple movement of robots and their virtual models Part I&#x02019;, &#x0201C;Mechatronics, Automation, Control&#x0201D;. Moscow. No. 11. 2003. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+M%2E+Kulakov%2C+%27Potential+techniques+of+management+supple+movement+of+robots+and+their+virtual+models+Part+I%27%2C+%22Mechatronics%2C+Automation%2C+Control%22%2E+Moscow%2E+No%2E+11%2E+2003%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. M. Kulakov, &#x02018;Potential management techniques supple movement of robots and their virtual models Part II&#x02019;, &#x0201C;Mechatronics, Automation, Control&#x0201D;. No. 1. Moscow, 2004. pp. 15&#x02013;21. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+M%2E+Kulakov%2C+%27Potential+management+techniques+supple+movement+of+robots+and+their+virtual+models+Part+II%27%2C+%22Mechatronics%2C+Automation%2C+Control%22%2E+No%2E+1%2E+Moscow%2C+2004%2E+pp%2E+15-21%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Chernakova, A. Nechaev, A. Karpov, A. Ronzhin, &#x02018;Multimodal system for hands-free PC control&#x02019;, 13<superscript>th</superscript> European signal Processing Conference EUSIPCO-2005. Turkey. 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Chernakova%2C+A%2E+Nechaev%2C+A%2E+Karpov%2C+A%2E+Ronzhin%2C+%27Multimodal+system+for+hands-free+PC+control%27%2C+13th+European+signal+Processing+Conference+EUSIPCO-2005%2E+Turkey%2E+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. M. Kulakov, A. I. Nechaev, S. E. Chernakova et al., &#x02018;Eye Tracking and Head-Mounted Display/Tracking Computer Systems for the Remote Control of Robots and Manipulators&#x02019;, Project # 1992P, Task 5 with EOARD. St. Petersburg, 2002. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+M%2E+Kulakov%2C+A%2E+I%2E+Nechaev%2C+S%2E+E%2E+Chernakova+and+oth%2E%2C+%27Eye+Tracking+and+Head-Mounted+Display%2FTracking+Computer+Systems+for+the+Remote+Control+of+Robots+and+Manipulators%27%2C+Project+%23+1992P%2C+Task+5+with+EOARD%2E+St%2E+Petersburg%2C+2002%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. I. Nechaev, J. S. Vorobjev, I. N. Corobkov, M. S. Olkov, V. N. Javnov, &#x02018;Structural Methods of recognition for real time systems&#x02019;, International Conference on Modeling Problems in Bionics &#x0201C;BIOMOD-92&#x0201D;. St.-Petersburg, 1992. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+I%2E+Nechaev%2C+J%2E+S%2E+Vorobjev%2C+I%2E+N%2E+Corobkov%2C+M%2E+S%2E+Olkov%2C+V%2E+N%2E+Javnov%2C+%27Structural+Methods+of+recognition+for+real+time+systems%27%2C+International+Conference+on+Modeling+Problems+in+Bionics+%22BIOMOD-92%22%2E+St%2E-Petersburg%2C+1992%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. M. Kulakov, &#x02018;Technology of immersion of the virtual object into real world&#x02019;, Supplement to magazine &#x0201C;Informatsionnyie Tekhnologii&#x0201D;, No. 10, 2004, pp. 1&#x02013;32. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+M%2E+Kulakov%2C+%27Technology+of+immersion+of+the+virtual+object+into+real+world%27%2C+Supplement+to+magazine+%22Informatsionnyie+Tekhnologii%22%2C+No%2E+10%2C+2004%2C+pp%2E+1-32%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. E. Chernakova, F. M. Kulakov, A. I. Nechayev, &#x02018;Hardware and software means of the HMI for remote operated robots with the application of systems for tracking human operator&#x02019;s movements&#x02019;, Works of the III International Conf. &#x0201C;Cybernetics and technologies of XXI century&#x0201D;, Voronezh, 2002. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+E%2E+Chernakova%2C+F%2E+M%2E+Kulakov%2C+A%2E+I%2E+Nechayev%2C+%27Hardware+and+software+means+of+the+HMI+for+remote+operated+robots+with+the+application+of+systems+for+tracking+human+operator%27s+movements%27%2C+Works+of+the+III+International+Conf%2E+%22Cybernetics+and+technologies+of+XXI+century%22%2C+Voronezh%2C+2002%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. E. Chernakova, A. I. Nechayev, V. P. Nazaruk, &#x02018;Method of recording and visualization of three-dimensional X-ray images in real-time module for tasks of nondestructive testing and medical diagnostics&#x02019;, Magazine &#x0201C;Informatsionnyie Tekhnologii&#x0201D;, No. 11, 2005, pp. 28&#x02013;37. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+E%2E+Chernakova%2C+A%2E+I%2E+Nechayev%2C+V%2E+P%2E+Nazaruk%2C+%27Method+of+recording+and+visualization+of+three-dimensional+X-ray+images+in+real-time+module+for+tasks+of+nondestructive+testing+and+medical+diagnostics%27%2C+Magazine+%22Informatsionnyie+Tekhnologii%22%2C+No%2E+11%2C+2005%2C+pp%2E+28-37%2E" target="_blank">Google Scholar</ulink></para></listitem>
</section>
</chapter>
<chapter class="chapter" id="ch06" label="6" xreflabel="6">
<title>A Multi-Agent Reinforcement Learning Approach for the Efficient Control of Mobile Robots</title>
<para><emphasis role="strong">U. Dziomin<superscript><emphasis role="strong">1</emphasis></superscript>, A. Kabysh<superscript><emphasis role="strong">1</emphasis></superscript>, R. Stetter<superscript><emphasis role="strong">2</emphasis></superscript> and V. Golovko<superscript><emphasis role="strong">1</emphasis></superscript></emphasis></para>
<para><superscript>1</superscript>Brest State Technical University, Belarus</para>
<para><superscript>2</superscript>University of Ravensburg-Weingarten, Germany</para>
<para>Corresponding author: R. Stetter &lt;stetter@hs-weingarten.de&gt;</para>
<section class="lev2" id="sec6-5-1">
<title>Abstract</title>
<para>This paper presents a multi-agent control architecture for the efficient control of a multi-wheeled mobile platform. The control architecture is based on the decomposition of the platform into a holonic, homogenous, multi-agent system. The multi-agent system incorporates multiple <emphasis>Q-</emphasis>learning agents, which allows every wheel to be controlled effectively relative to the other wheels. The learning process was divided into two steps: <emphasis>module positioning</emphasis>, where the agents learn to minimize the orientation error, and <emphasis>cooperative movement</emphasis>, where the agents learn to adjust the desired velocity in order to hold the desired position in the formation. As a result of this decomposition, every module agent has two control policies, for forward and angular velocity, respectively. Experiments were carried out with a simulation model and with the real robot. Our results indicate a successful application of the proposed control architecture both in simulation and on the real robot.</para>
<para><emphasis role="strong">Keywords:</emphasis> control architecture, holonic homogenous multi-agent system, reinforcement learning, <emphasis>Q-</emphasis>Learning, efficient robot control</para>
</section>
<section class="lev1" id="sec6-1">
<title>6.1 Introduction</title>
<para>Efficient robot control is an important task for the application of mobile robots in production. The most important control tasks are power consumption optimization and optimal trajectory planning. The control subsystems should provide energy consumption optimization within the robot control system. Four levels of robot power consumption optimization can be distinguished:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para><emphasis>Motor power consumption optimization</emphasis>. These approaches are based on energy-efficient technologies of motor development that produce substantial electricity savings and improve the life of the motor drive components [1, 2];</para></listitem>
<listitem>
<para><emphasis>Efficient robot motion.</emphasis> Commonly, this is a task of inverse kinematics calculation. However, the dynamic model is usually far more complex than the kinematic model [3]. Therefore, intelligent algorithms are relevant for the optimization of robot motion [4];</para></listitem>
<listitem>
<para><emphasis>Efficient path planning.</emphasis> Such algorithms build a trajectory and divide it into parts, which are reproduced by circles and straight lines. The robot control subsystem should provide movement along these trajectory parts. For example, Y. Mei et al. show how to create an efficient trajectory using knowledge of the energy consumption of robot motions [5]; S. Ogunniyi and M. S. Tsoeu continue this work, using reinforcement learning for path search [6];</para></listitem>
<listitem>
<para><emphasis>Efficient robot exploration</emphasis>. When a robot performs path planning between its current position and its next target in an uncertain environment, the goal is to reduce repeated coverage [7].</para></listitem></itemizedlist>
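<para>The reinforcement-learning approach to efficient path search mentioned above can be illustrated with a minimal sketch. The following tabular <emphasis>Q-</emphasis>learning fragment on a small occupancy grid is our own illustration (the function name, grid encoding and reward values are assumptions, not the method of [6]); the step cost of &#x02212;1 stands in for the energy consumed by each motion.</para>

```python
import random

def q_learning_path(grid, start, goal, episodes=2000,
                    alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Q-learning on a 4-connected grid; 1 marks an obstacle cell."""
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    Q = {}  # (cell, action-index) -> estimated value

    def step(cell, a):
        r, c = cell[0] + actions[a][0], cell[1] + actions[a][1]
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1:
            return cell, -5.0          # bumping a wall is penalized
        if (r, c) == goal:
            return (r, c), 100.0       # reaching the target is rewarded
        return (r, c), -1.0            # every move costs "energy"

    for _ in range(episodes):
        cell = start
        while cell != goal:            # epsilon-greedy action selection
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda x: Q.get((cell, x), 0.0)))
            nxt, reward = step(cell, a)
            best_next = max(Q.get((nxt, x), 0.0) for x in range(4))
            q = Q.get((cell, a), 0.0)
            Q[(cell, a)] = q + alpha * (reward + gamma * best_next - q)
            cell = nxt

    # greedy rollout of the learned policy
    path, cell = [start], start
    while cell != goal and len(path) < rows * cols:
        a = max(range(4), key=lambda x: Q.get((cell, x), 0.0))
        cell, _ = step(cell, a)
        path.append(cell)
    return path
```

<para>On an obstacle-free grid the learned greedy policy traces a shortest (least-cost) path from start to goal, which is the sense in which the learned trajectory is energy-efficient.</para>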
<para>The transportation of cargo is a topical task in modern production, and multi-wheeled mobile platforms are increasingly being used for the autonomous transportation of heavy components. One of these platforms is <emphasis>a production mobile robot</emphasis>, which was developed and assembled at the University of Ravensburg-Weingarten, Germany [3]. The robot is illustrated in <link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link>(a). The platform is 1200 mm long and 800 mm wide, the manufacturer&#x02019;s maximum payload is 500 kg, the battery capacity is 52 Ah, and all modules are driven independently.</para>
<para>The platform is based on four vehicle steering modules [3]. The steering module (<link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link>(b)) consists of two wheels powered by separate motors and behaves like a differential drive.</para>
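<para>The differential-drive behaviour of a steering module can be made concrete with the standard kinematic relations between the two wheel speeds and the module&#x02019;s forward and angular velocity. The following is a generic textbook sketch (the function names and the <code>track_width</code> parameter are our assumptions), not code from the platform&#x02019;s controller:</para>

```python
def diff_drive_kinematics(v_left, v_right, track_width):
    """Forward kinematics of a differential drive:
    wheel rim speeds -> (forward velocity, angular velocity)."""
    v = (v_right + v_left) / 2.0              # speed of the module centre
    omega = (v_right - v_left) / track_width  # yaw rate about the centre
    return v, omega

def inverse_diff_drive(v, omega, track_width):
    """Inverse mapping: desired (v, omega) -> individual wheel speeds."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right
```

<para>Equal wheel speeds give pure translation (zero yaw rate), while opposite wheel speeds let the module turn in place, which is what allows each module to reorient independently of the others.</para>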
<para>In this paper, we explore the problems of <emphasis>formation control</emphasis> and <emphasis>efficient motion control</emphasis> of multiple autonomous vehicle modules in circular trajectory motion. The goal is to achieve a circular motion of a mobile platform around a virtual reference beacon with optimal forward and angular speeds.</para>
<para>One solution to this problem [8&#x02013;10] is to calculate the kinematics of a one-wheeled robot driving in a circle and then to generalize it to multi-vehicle systems. This approach has shown promising modeling results, but its disadvantages are low flexibility and high computational complexity.</para>
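<para>The geometric idea behind such kinematic solutions can be stated briefly: when a rigid platform circles a virtual beacon, all modules must share one angular rate, so each module&#x02019;s speed scales with its own radius from the beacon and its heading is tangential to its own circle. The following is a minimal sketch of this relation under our own naming assumptions, not the formulation of [8&#x02013;10]:</para>

```python
import math

def circular_setpoints(beacon, module_positions, v_platform):
    """For a platform circling a virtual beacon, return a (speed, heading)
    setpoint per module: speed proportional to the module's radius so that
    all modules share one common angular rate."""
    radii = [math.hypot(x - beacon[0], y - beacon[1])
             for x, y in module_positions]
    r_ref = max(radii)               # outermost module sets the pace
    omega = v_platform / r_ref       # common angular rate (rad/s)
    setpoints = []
    for (x, y), r in zip(module_positions, radii):
        # tangent direction: 90 degrees ahead of the radial direction
        heading = math.atan2(y - beacon[1], x - beacon[0]) + math.pi / 2
        setpoints.append((omega * r, heading))
    return setpoints
```

<para>A module twice as far from the beacon must therefore drive twice as fast; in the learning formulation described later, the agents acquire this speed adjustment instead of computing it in closed form.</para>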
<fig id="F6-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link></label>
<caption><para>Production mobile robot: Production mobile platform (a); Driving module (b).</para></caption>
<graphic xlink:href="graphics/ch06_fig01.jpg"/>
</fig>
<para>An alternative approach is to use machine learning to obtain an optimal control policy. In robotics, the problem of multi-agent control is usually considered as a problem of formation control, trajectory planning, distributed control, and so on. In this paper, we use techniques from multi-agent systems theory and reinforcement learning to create the desired control policy.</para>
<para>This paper is organized as follows: Section 6.2 gives a short introduction to the theory of holonic, homogenous, multi-agent systems and reinforcement learning. Section 6.3 describes the steering of the mobile platform in detail. Section 6.4 describes the multi-agent decomposition of the mobile platform; using this decomposition, we propose a multi-agent control architecture based on the model described in Section 6.2. Section 6.5 contains a detailed description of the multi-agent control architecture. The Conclusion highlights important aspects of the presented work.</para>
</section>
<section class="lev1" id="sec6-2">
<title>6.2 Holonic Homogenous Multi-Agent Systems</title>
<para>A <emphasis>multi-agent system</emphasis> (MAS) consists of a collection of individual agents, each of which displays a certain amount of autonomy with respect to its actions and its perception of the domain, and communicates with the other agents via message passing [11, 12]. Agents act in organized structures which encapsulate the complexity of subsystems and thereby modularize their functionality. Organizations are social structures with means of conflict resolution through coordination mechanisms [13]. The overall <emphasis>emergent behavior</emphasis> of a multi-agent system is composed of a combination of individual agent behaviors, determined by autonomous computation within each agent and by communication among agents [14]. The field of MAS is a part of distributed AI, where each agent has a distinct problem solver for a specific task [12, 14].</para>
<section class="lev2" id="sec6-2-1">
<title>6.2.1 Holonic, Multi-Agent Systems</title>
<para>An agent (or MAS) that appears as a single entity to the outside world but is in fact composed of many sub-agents with the same inherent structure is called a <emphasis>holon</emphasis>, and such sub-agents are called <emphasis>holonic agents</emphasis> [11, 14]. The transformation of a single entity into a set of interacting sub-agents is called <emphasis>holonic decomposition</emphasis>. Holonic decomposition is an isomorphic transformation: Gerber et al. [15] show that an environment containing multiple holonic agents can be isomorphically mapped to an environment in which exactly one agent is represented explicitly, and vice versa.</para>
<para>For the purposes of this paper and without loss of generality, we use the terms <emphasis>holon</emphasis> and <emphasis>holonic multi-agent system</emphasis> (Holonic MAS) interchangeably, meaning that the MAS contains <emphasis>exactly one</emphasis> holon. In the general case, a holonic, multi-agent system (called a <emphasis>holarchy</emphasis>) is a self-organized, hierarchical structure composed of holons [14].</para>
<para>A holon is always represented as a single entity to the outside world. From the perspective of the environment, a holon behaves as an autonomous agent; only a closer inspection reveals that it is constructed from a set of cooperating agents. It is possible to communicate with a holon simply by sending messages to it from the environment. The most challenging problem in this design is the distribution of the individual and overall computation of the holonic MAS [15].</para>
<para>Although it is possible to organize holonic structures in a completely decentralized manner, it is more efficient to use an individual agent to represent a holon. Representatives are called the <emphasis>head of the holon</emphasis>; the other agents in the holon are called the <emphasis>body</emphasis> [11]. In some cases, one of the already existing agents is selected as the representative of the holon. In other cases, a new agent is explicitly introduced to represent the holon during its lifetime.</para>
<para>The head agent represents the shared intentions of the holon and negotiates these intentions with the agents in the holon&#x02019;s environment, as well as with the internal agents of the holon. Only the head agent communicates with the entities outside of the holon.</para>
<para>The organizational structure of a holonic, multi-agent system is depicted in <link linkend="F6-2">Figure <xref linkend="F6-2" remap="6.2"/></link>.</para>
<fig id="F6-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-2">Figure <xref linkend="F6-2" remap="6.2"/></link></label>
<caption><para>Organizational structure of a Holonic Multi-Agent System. Lines indicate the communication channels.</para></caption>
<graphic xlink:href="graphics/ch06_fig002.jpg"/>
</fig>
<para>When agents join the holon, they surrender some of their autonomy to the head agent. The binding force that keeps the head and body of a holon together can be called a <emphasis>commitment</emphasis> [16]. It should be explicitly noted that agents are not directly controlled by the head agent: the agents remain autonomous entities within the holon, but they align their individual behavior with the goals of the holon.</para>
</section>
<section class="lev2" id="sec6-2-2">
<title>6.2.2 Homogenous, Multi-Agent Systems</title>
<para>For the purposes of this paper, we will consider the case when <emphasis>all body</emphasis> agents are <emphasis>homogenous</emphasis>. In a general multi-agent scenario with homogeneous agents, there are several agents with an identical structure (sensors, effectors, domain knowledge, and decision functions) [17]. The only differences among the agents are their sensory inputs and the actual actions they take, as they are situated differently in the world [18]. Having different effector outputs is a necessary condition for a MAS; if the agents all act together as a unit, then they are essentially a single agent. In order to realize this difference in output, homogeneous agents must have different sensor inputs as well; otherwise, they would act identically.</para>
<para>Thus, the formal definition of <emphasis>holonic, homogenous, multi-agent system</emphasis> (H<superscript>2</superscript>MAS) is a tuple &#x0210B; = <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in128.jpg"/>:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>&#x0212C; = {<emphasis>M</emphasis><subscript>1</subscript>,<emphasis>M</emphasis><subscript>2</subscript>,...,<emphasis>M</emphasis><subscript><emphasis>n</emphasis></subscript>}&#x02013; is the set of homogenous <emphasis>body</emphasis> agents. Each agent is described by a tuple <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in128-1.jpg"/>, where:</para></listitem>
<listitem>
<para><emphasis>s</emphasis> &#x02013; is the set of possible agent states, where <emphasis>s<subscript>i</subscript></emphasis> &#x02208; s is the <emphasis>i</emphasis>-th agent current state;</para></listitem>
<listitem>
<para>&#x003B1;&#x02013; is the set of possible agent actions, where <emphasis>a<subscript>i</subscript></emphasis> &#x02208; &#x003B1; current action of the <emphasis>i</emphasis>-th agent;</para></listitem>
<listitem>
<para>&#x003C0; : s &#x02192; &#x003B1; is the behavior policy (decision function) which maps it&#x02019;s state to actions;</para></listitem>
<listitem>
<para><emphasis>h</emphasis> &#x02013; is the <emphasis>head agent</emphasis> representing the holon to the environment and responsible for coordinating the actions inside the holon:</para></listitem>
<listitem>
<para>&#x1D4AE; = s<superscript>&#x000D7;<emphasis>n</emphasis></superscript> = {(<emphasis>s</emphasis><subscript>1</subscript>, <emphasis>s</emphasis><subscript>2</subscript>, ..., <emphasis>s<subscript>n</subscript></emphasis>) | <emphasis>s<subscript>i</subscript></emphasis> &#x2208; s for all 1 &#x2264; <emphasis>i</emphasis> &#x2264; <emphasis>n</emphasis>} &#x02013; is the set of <emphasis>joint states</emphasis> of the holon;</para></listitem>
<listitem>
<para>&#x1D49C; = &#x003B1;<superscript>&#x000D7;<emphasis>n</emphasis></superscript> = {(<emphasis>a</emphasis><subscript>1</subscript>, <emphasis>a</emphasis><subscript>2</subscript>, ..., <emphasis>a<subscript>n</subscript></emphasis>) | <emphasis>a<subscript>i</subscript></emphasis> &#x2208; &#x003B1; for all 1 &#x2264; <emphasis>i</emphasis> &#x2264; <emphasis>n</emphasis>} &#x02013; is the set of <emphasis>joint actions</emphasis> of the holon;</para></listitem>
<listitem>
<para>&#x003C0; : &#x1D4AE; &#x02192; &#x1D49C; &#x02013; is the global behavior policy of the holon;</para></listitem>
<listitem>
<para>&#x1D49E; &#x02013; is the <emphasis>commitment</emphasis> that defines the agreement to be inside the holon.</para></listitem></itemizedlist>
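<para>To make the definition above concrete, the H<superscript>2</superscript>MAS tuple can be sketched as Python data structures. This is a minimal illustration; the class and variable names are ours, not from the chapter.</para>

```python
# A minimal sketch of the H^2MAS tuple; names are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple

State = int      # an element of the agent state set s
Action = int     # an element of the agent action set alpha

@dataclass
class BodyAgent:
    """One homogeneous body agent M_i = (s, alpha, pi)."""
    state: State
    policy: Callable[[State], Action]   # pi : s -> alpha, shared by all agents

    def act(self) -> Action:
        return self.policy(self.state)

@dataclass
class Holon:
    """H = (B, h, S, A, Pi, C): body agents plus a coordinating head."""
    body: List[BodyAgent]

    def joint_state(self) -> Tuple[State, ...]:   # element of S = s^(x n)
        return tuple(m.state for m in self.body)

    def joint_action(self) -> Tuple[Action, ...]: # element of A = alpha^(x n)
        return tuple(m.act() for m in self.body)

# Homogeneity: every agent shares the same policy object, so agents
# situated in the same state always select the same action.
shared_policy = lambda s: s % 3
holon = Holon([BodyAgent(0, shared_policy), BodyAgent(4, shared_policy)])
print(holon.joint_state())   # (0, 4)
print(holon.joint_action())  # (0, 1)
```

<para>The two agents differ only in their situation (their states), exactly as required by the homogeneity condition above.</para>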
<para>The learning of multi-agent systems composed of homogeneous agents has a few important properties which affect the usage of such systems.</para>
</section>
<section class="lev2" id="sec6-2-3">
<title>6.2.3 Approach to Commitment and Coordination in H<superscript><emphasis role="strong">2</emphasis></superscript>MAS</title>
<para>The holon is realized exclusively through cooperation among the constituent agents. The head agent is required to coordinate the work of the body agents to achieve the desired global behavior of H<superscript>2</superscript>MAS by combining individual behaviors, resolving collisions, etc. In this way, the head agent serves as a <emphasis>coordination strategy</emphasis> among agents. The head is aware of the goals of the holon and has access to important environmental information, which allows it to act as a central point of coordination for the body agents.</para>
<para>Since a body agent has some degree of autonomy, it may perform an unexpected action, which can lead to uncoordinated behavior within the holon. The head agent can observe the states and actions of all subordinate agents and can correct undesired behavior using a simple coordination rule: if the current behavior of <emphasis>M<subscript>i</subscript></emphasis> is inconsistent with the head agent&#x02019;s vision, then the head sends a correction message to <emphasis>M<subscript>i</subscript></emphasis>. This action by the head is known as an <emphasis>influence</emphasis> on the body. When the <emphasis>body</emphasis> M<subscript>i</subscript> accepts the influence, this is called making a <emphasis>commitment</emphasis> to the holon.</para>
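<para>The coordination rule above can be sketched as follows. The message format and function names are assumptions for illustration only; the chapter does not prescribe a concrete protocol.</para>

```python
# Sketch of the head agent's coordination rule: if the observed behavior
# of body agent M_i is inconsistent with the head's vision of the holon,
# send it a correction message (an "influence").
def coordinate(head_vision, bodies, send):
    """head_vision(i, state, action) -> the action the head expects of M_i;
    send(i, message) delivers an influence message to M_i."""
    for i, m in enumerate(bodies):
        expected = head_vision(i, m["state"], m["action"])
        if m["action"] != expected:
            send(i, {"type": "correction", "action": expected})

# Example: the head expects every module to take action 1.
sent = []
bodies = [{"state": 0, "action": 1}, {"state": 0, "action": 2}]
coordinate(lambda i, s, a: 1, bodies, lambda i, msg: sent.append((i, msg)))
print(sent)  # only the inconsistent agent M_1 receives an influence
```

<para>A body agent that applies the received correction is, in the terminology above, making a commitment to the holon.</para>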
</section>
<section class="lev2" id="sec6-2-4">
<title>6.2.4 Learning to Coordinate Through Interaction</title>
<para>The basic idea of the selected approach for coordination is to use influences between the head and the body to determine the sequence of correct actions to coordinate behavior within the holon. The core design question is how to determine such influences in terms of received messages and how received messages affect changes of individual policies.</para>
<para>To answer this question we postulate that interacting agents should constantly learn optimal coordination from scratch. To achieve this, we can use <emphasis>influence-based, multi-agent reinforcement learning</emphasis> [18&#x02013;20]. In this approach, agents learn to coordinate using reinforcement learning by exchanging rewards with each other.</para>
<para>In reinforcement learning, the <emphasis>i</emphasis>-th agent executes an action <emphasis>a<subscript>i</subscript></emphasis> at the current state <emphasis>s<subscript>i</subscript></emphasis>. It then moves to the next state <emphasis>s&#x02032;<subscript>i</subscript></emphasis> and receives a numerical reward <emphasis>r</emphasis> as feedback for the recent action [21], where <emphasis>s<subscript>i</subscript></emphasis>, <emphasis>s&#x02032;<subscript>i</subscript></emphasis> &#x02208; <emphasis>s, a<subscript>i</subscript></emphasis> &#x02208; &#x003B1;, <emphasis>r</emphasis> &#x02208; <emphasis>R</emphasis>. Ideally, agents should explore the state space (interact with the environment) to build an optimal policy &#x003C0;<superscript>&#x02217;</superscript>.</para>
<para>Let <emphasis>Q(s, a)</emphasis> represent a <emphasis>Q-function</emphasis> that reflects the quality of the specified action <emphasis>a</emphasis> in state <emphasis>s</emphasis>. The optimal policy can be expressed in terms of the <emphasis>optimal Q-function <emphasis>Q</emphasis><superscript>&#x02217;</superscript></emphasis>:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq6-1.jpg"/></para>
<para>The initial values of the <emphasis>Q</emphasis>-function are unknown and are initialized to zero. The learning goal is to approximate the <emphasis>Q-function</emphasis> (i.e., to find the <emphasis>true Q</emphasis>-values for each action in every state using the received sequences of rewards).</para>
<para>A model of influence-based multi-agent reinforcement learning is depicted in <link linkend="F6-3">Figure <xref linkend="F6-3" remap="6.3"/></link>.</para>
<fig id="F6-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-3">Figure <xref linkend="F6-3" remap="6.3"/></link></label>
<caption><para>Model of Influence Based Multi-Agent Reinforcement Learning in the Case of a Holonic Homogenous Multi-Agent System.</para></caption>
<graphic xlink:href="graphics/ch06_fig003.jpg"/>
</fig>
<para>In this model, a set of body agents with identical policies &#x003C0; acts in a common, shared environment. The <emphasis>i</emphasis>-th body agent <emphasis>M<subscript>i</subscript></emphasis> in the state <emphasis>s<subscript>i</subscript></emphasis> selects an action <emphasis>a<subscript>i</subscript></emphasis> using the current policy &#x003C0;, and then moves to the next state <emphasis>s&#x02032;<subscript>i</subscript></emphasis>. The head agent observes the changes resulting from the executed action and then calculates and assigns a reward <emphasis>r&#x02032;<subscript>i</subscript></emphasis> to the agent as evaluative feedback.</para>
<para>Equation <emphasis role="up">( <xref rid="#x1-7003r2"><!--ref: GrindEQ__6_2_--></xref>)</emphasis> is a variation of the <emphasis>Q</emphasis>-learning update rule [21] used to update the values of the <emphasis>Q</emphasis>-function, in which learning homogeneity and parallelism are applied. Learning homogeneity means that all agents build the same <emphasis>Q</emphasis>-function, and parallelism requires that they can do so in parallel. The following learning rule executes <emphasis>N</emphasis> times per step, once for each agent in parallel, over a single shared <emphasis>Q</emphasis>-function:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq6-2.jpg"/></para>
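<para>As a hedged sketch of this scheme (the exact update is the equation above), the standard <emphasis>Q</emphasis>-learning rule applied once per agent over a single shared <emphasis>Q</emphasis>-table looks as follows. The table layout and action set are illustrative; for clarity the updates run sequentially rather than in parallel.</para>

```python
from collections import defaultdict

Q = defaultdict(float)     # single shared Q-function, initialized to zero
ALPHA, GAMMA = 0.4, 0.7    # learning rate / discount (values from Section 6.5)
ACTIONS = range(3)

def q_update(s, a, r, s_next):
    """One standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def holon_step(transitions):
    """Apply the rule once per body agent over the shared table; with
    homogeneity, every agent's experience improves the same policy."""
    for (s, a, r, s_next) in transitions:
        q_update(s, a, r, s_next)

holon_step([(0, 1, 1.0, 2), (0, 1, 1.0, 2)])  # two agents, same transition
print(round(Q[(0, 1)], 3))                    # 0.64 after two updates
```

<para>Because both agents write to the same table, the second agent's update starts from the value the first agent already learned, which is the speed-up that learning homogeneity provides.</para>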
</section>
</section>
<section class="lev1" id="sec6-3">
<title>6.3 Vehicle Steering Module</title>
<para>The platform is based on four vehicle steering modules. The steering module consists of two wheels powered by separate motors and behaves as a differential drive. It is mounted to the platform by a bearing that allows unlimited rotation of the module with respect to the platform (<link linkend="F6-4">Figure <xref linkend="F6-4" remap="6.4"/></link>). The platform may be equipped with three or more modules.</para>
<fig id="F6-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-4">Figure <xref linkend="F6-4" remap="6.4"/></link></label>
<caption><para>The Maneuverability of one module.</para></caption>
<graphic xlink:href="graphics/ch06_fig004.jpg"/>
</fig>
<para>The conventional approach to platform control is kinematics calculation and inverse kinematics modeling [3]. The inverse kinematics calculation is known for the common schemes: the differential scheme, the car scheme, and the bicycle scheme. In the case of production module platforms, the four modules are controlled independently. As a consequence, the control system can only perform symmetric turning; hence, the platform has limited maneuverability [3]. The other problem is the limitation of the robot configuration: previous systems require recalculations if modules are added to or removed from the platform, and these recalculations require a qualified engineer.</para>
<para>The problem of steering the robot along the trajectory is illustrated in <link linkend="F6-5">Figure <xref linkend="F6-5" remap="6.5"/></link>. This trajectory consists of four segments:</para>
<fig id="F6-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-5">Figure <xref linkend="F6-5" remap="6.5"/></link></label>
<caption><para>Mobile Robot Trajectory Decomposition.</para></caption>
<graphic xlink:href="graphics/ch06_fig005.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>A turn of radius <emphasis>R<subscript>1</subscript></emphasis> about the center of rotation (<emphasis>x<subscript>1</subscript></emphasis>, <emphasis>y<subscript>1</subscript></emphasis>);</para></listitem>
<listitem>
<para>A straight segment;</para></listitem>
<listitem>
<para>A turn of radius <emphasis>R<subscript>2</subscript></emphasis> about the center of rotation (<emphasis>x<subscript>2</subscript></emphasis>, <emphasis>y<subscript>2</subscript></emphasis>);</para></listitem>
<listitem>
<para>A straight segment.</para></listitem></itemizedlist>
<para>The steering of the robot also fulfills the following specifications:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>At the starting point, the robot rotates all modules in the direction of the trajectory;</para></listitem>
<listitem>
<para>The robot cannot stop at any point along the trajectory, so the trajectory always has smooth transitions from one segment to another.</para></listitem></itemizedlist>
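<para>The segment structure and the smooth-transition requirement can be illustrated with a short sketch. The representation and the tangency test are our assumptions: a transition is smooth when the heading at the end of one segment equals the heading at the start of the next.</para>

```python
import math

# Sketch: an arc/line trajectory as in Figure 6.5. A transition is
# "smooth" when the heading is continuous, i.e. the straight segment
# leaving an arc is tangent to it.
def arc_exit(cx, cy, R, theta, ccw=True):
    """Point and heading when leaving a circle of radius R about (cx, cy)
    at polar angle theta (counter-clockwise travel by default)."""
    x, y = cx + R * math.cos(theta), cy + R * math.sin(theta)
    heading = theta + (math.pi / 2 if ccw else -math.pi / 2)
    return (x, y), heading

def smooth(h_prev, h_next, tol=1e-9):
    """Headings must match (modulo 2*pi) at the segment boundary."""
    d = (h_next - h_prev + math.pi) % (2 * math.pi) - math.pi
    return abs(d) < tol

(p, h) = arc_exit(0.0, 0.0, 1.0, 0.0)   # leave the unit circle at (1, 0)
print(p, smooth(h, math.pi / 2))        # the straight segment heads "up"
```

<para>A four-segment trajectory like the one above is then just a list of arcs and lines whose consecutive boundary headings all pass this check.</para>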
</section>
<section class="lev1" id="sec6-4">
<title>6.4 A Decomposition of Mobile Platform</title>
<para>Since the platform is composed of identical modules attached to it in the same way, a <emphasis>multi-agent decomposition</emphasis> is a natural way to develop a distributed control strategy for such platforms. A mobile platform with four identical independent driving modules can be represented as a holonic, homogeneous, multi-agent system as described in Section 6.2. The driving modules are represented as body agents (or module agents), and the head agent (or platform agent) represents the whole platform. The process of multi-agent decomposition described above is shown in <link linkend="F6-6">Figure <xref linkend="F6-6" remap="6.6"/></link>.</para>
<fig id="F6-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-6">Figure <xref linkend="F6-6" remap="6.6"/></link></label>
<caption><para>Holonic Decomposition of the Mobile Platform. Dashed lines represent the boundary of the multi-agent system (the holon). Introduction of the head agent leads to a reduction of communication costs.</para></caption>
<graphic xlink:href="graphics/ch06_fig006.jpg"/>
</fig>
<para>The platform as a whole carries global information, such as the shape of the platform and the required module topology, including the desired module positions relative to the centroid of the platform. To capture this information, we can attach a <emphasis>virtual coordinate frame</emphasis> to the centroid of the platform, creating a virtual structure.</para>
<fig id="F6-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-7">Figure <xref linkend="F6-7" remap="6.7"/></link></label>
<caption><para>Virtual Structure with a Virtual Coordinate Frame composed of Four Modules with a known Virtual Center.</para></caption>
<graphic xlink:href="graphics/ch06_fig007.jpg"/>
</fig>
<para><link linkend="F6-7">Figure <xref linkend="F6-7" remap="6.7"/></link> shows an illustrative example of the virtual structure approach with a formation composed of four vehicles capable of planar motion, where <emphasis>C</emphasis><subscript>0</subscript> represents the beacon frame and <emphasis>C</emphasis> represents a virtual coordinate frame located at a virtual center (<emphasis>x<subscript>vc</subscript>,y<subscript>vc</subscript></emphasis>) with an orientation &#x003C6;<subscript><emphasis>vc</emphasis></subscript> relative to <emphasis>C</emphasis><subscript>0</subscript>. The values &#x003C1;<subscript><emphasis>i</emphasis></subscript> = [<emphasis>x<subscript>i</subscript>,y<subscript>i</subscript></emphasis>]<superscript><emphasis>T</emphasis></superscript> and <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133.jpg"/> represent, respectively, the <emphasis>i</emphasis>-th vehicle&#x02019;s actual and desired position. The values &#x003C6;<subscript><emphasis>i</emphasis></subscript> and &#x003C6;<subscript><emphasis>i</emphasis></subscript><superscript><emphasis>d</emphasis></superscript> represent the actual and desired orientation, respectively, of the <emphasis>i</emphasis>-th vehicle. Each module&#x02019;s desired position <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133-1.jpg"/> can be defined relative to the virtual coordinate frame.</para>
<para>For a formation stabilization with a static formation centroid, if each vehicle in a group can reach a consensus on the center point of the desired formation and specify a corresponding desired deviation from the center point, then the desired motion can be achieved [22]. If each vehicle can track its desired position accurately, then the desired formation shape can be preserved accurately.</para>
<para>The vectors <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133-2.jpg"/> and <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133-3.jpg"/> represent, respectively, the <emphasis>i</emphasis>-th vehicle&#x02019;s desired and actual deviation relative to <emphasis>C</emphasis>. The deviation vector <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133-4.jpg"/> of the <emphasis>i</emphasis>-th module relative to the desired position is defined as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq6-3.jpg"/></para>
<para>Each module&#x02019;s desired position can be defined relative to the virtual coordinate frame. Once the desired dynamics of the virtual structure are defined, the desired motion for each agent can be derived. As a result, path planning and trajectory generation techniques can be employed for the centroid while trajectory tracking strategies can be automatically derived for each module [23].</para>
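<para>The virtual-structure computation described above can be sketched as follows. Equation (6.3) is given as an image, so we assume the common form of the deviation vector, rho_i_tilde = rho_i_d - rho_i; the function names are ours.</para>

```python
import math

# Sketch: each module's desired position is fixed in the virtual frame C
# and mapped into the beacon frame C_0 from the virtual-center pose
# (x_vc, y_vc, phi_vc); the deviation is desired minus actual position.
def desired_position(offset, x_vc, y_vc, phi_vc):
    """Rotate the module offset (given in frame C) by phi_vc and
    translate by the virtual center to get the desired position in C_0."""
    ox, oy = offset
    c, s = math.cos(phi_vc), math.sin(phi_vc)
    return (x_vc + c * ox - s * oy, y_vc + s * ox + c * oy)

def deviation(actual, desired):
    """Assumed form of Equation (6.3): rho_i^d - rho_i."""
    return (desired[0] - actual[0], desired[1] - actual[1])

# Square platform: module 1 sits at (1, 0) in C; the frame is rotated 90 deg.
d = desired_position((1.0, 0.0), x_vc=2.0, y_vc=2.0, phi_vc=math.pi / 2)
print([round(v, 6) for v in d])                          # [2.0, 3.0]
print([round(v, 6) for v in deviation((2.0, 2.5), d)])   # [0.0, 0.5]
```

<para>Path planning can then be done for the virtual center alone, with each module tracking the desired position derived from the center's pose, as stated in the text.</para>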
</section>
<section class="lev1" id="sec6-5">
<title>6.5 The Robot Control System Learning</title>
<para>The main goal of the control system is to provide the movement of the robot along the desired circular trajectory. The objective is to create a cooperative control strategy for any configuration of <emphasis>N</emphasis> modules so that all the modules within the platform achieve circular motion around the beacon. The circular motions should have a prescribed radius of rotation &#x003C1;<subscript><emphasis>C</emphasis></subscript> defined by the center of the platform and the distance between neighbors. Further requirements are that module positioning before movement must be taken into account and that the angular and linear speeds adapt to optimal values during circular movement.</para>
<para>We divide the process of learning into two steps:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para><emphasis>Module positioning</emphasis> &#x02013; learning to rotate a module toward the trajectory direction (Section 6.5.1);</para></listitem>
<listitem>
<para><emphasis>Cooperative movement</emphasis> &#x02013; learning cooperative motion of the modules within the platform (Section 6.5.2).</para></listitem>
<para>The overall control architecture is depicted in <link linkend="F6-8">Figure <xref linkend="F6-8" remap="6.8"/></link>.</para>
<fig id="F6-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-8">Figure <xref linkend="F6-8" remap="6.8"/></link></label>
<caption><para>A unified view of the control architecture for a Mobile Platform.</para></caption>
<graphic xlink:href="graphics/ch06_fig008.jpg"/>
</fig>
<para>From this decomposition, every module agent has two control policies, &#x003C0;<subscript><emphasis>v</emphasis></subscript> and &#x003C0;<subscript>&#x003C9;</subscript>, for forward and angular velocity, respectively. Policy &#x003C0;<subscript>&#x003C9;</subscript> is responsible for correct module orientation around the beacon; each module follows this policy before the platform starts moving. Policy &#x003C0;<subscript><emphasis>v</emphasis></subscript> is used during circular motion of the platform along curves. Both policies are created via reinforcement learning, which allows for generalization.</para>
<para>In the simulation phase, the head agent interacts with the modeling environment. In experiments with real robots, the head agent interacts with the planning subsystem. The environment/planning subsystem provides information about the desired speed of the platform <emphasis>v<superscript>d</superscript></emphasis> and the global state of the multi-agent system <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in134.jpg"/>, where <emphasis>s<subscript>i</subscript> &#x02208; s</emphasis> is the state of the <emphasis>i</emphasis>-th module, defined by the values in <link linkend="T6-1">Table <xref linkend="T6-1" remap="6.1"/></link>, and <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in135.jpg"/> is the state of the head agent, which describes the virtual coordinate frame.</para>
<table-wrap position="float" id="T6-1">
<label><link linkend="T6-1">Table <xref linkend="T6-1" remap="6.1"/></link></label>
<caption><para>The Environment Information</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="bottom" align="left">No</td>
<td valign="bottom" align="left">Robot Get</td>
<td valign="bottom" align="left">Value</td></tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left">X robot position, <emphasis>x</emphasis></td>
<td valign="top" align="left">Coordinate, m</td>
</tr>
<tr>
<td valign="top" align="left">2</td>
<td valign="top" align="left">Y robot position, <emphasis>y</emphasis></td>
<td valign="top" align="left">Coordinate, m</td>
</tr>
<tr>
<td valign="top" align="left">3</td>
<td valign="top" align="left">X of beacon center, <emphasis>x<subscript>b</subscript></emphasis></td>
<td valign="top" align="left">Coordinate, m</td>
</tr>
<tr>
<td valign="top" align="left">4</td>
<td valign="top" align="left">Y of beacon center, <emphasis>y<subscript>b</subscript></emphasis></td>
<td valign="top" align="left">Coordinate, m</td>
</tr>
<tr>
<td valign="top" align="left">5</td>
<td valign="top" align="left">Robot orientation angle, &#x003C6;<subscript><emphasis>i</emphasis></subscript></td>
<td valign="top" align="left">Float number, radians<?lb?>-&#x003C0; &#x0003C; &#x003C6;<subscript><emphasis>i</emphasis></subscript> &#x02264; &#x003C0;</td>
</tr>
<tr>
<td valign="top" align="left">6</td>
<td valign="top" align="left">Desired orientation angle relative to robot, &#x003C6;<subscript><emphasis>i</emphasis></subscript><superscript><emphasis>d</emphasis></superscript></td>
<td valign="top" align="left">Float number, radians<?lb?>-&#x003C0; &#x0003C; &#x003C6;<subscript><emphasis>i</emphasis></subscript><superscript><emphasis>d</emphasis></superscript> &#x02264; &#x003C0;</td>
</tr>
<tr>
<td valign="top" align="left">7</td>
<td valign="top" align="left">The radius size, <emphasis>r</emphasis></td>
<td valign="top" align="left">Float number, m</td>
</tr>
<tr>
<td valign="top" align="left">8</td>
<td valign="top" align="left">The desired radius size, <emphasis>r<superscript>d</superscript></emphasis></td>
<td valign="top" align="left">Float number, m</td></tr>
</tbody>
</table>
</table-wrap>
<section class="lev2" id="sec6-5-1">
<title>6.5.1 Learning of the Turning of a Module-Agent</title>
<para>This subsection describes the model for producing an efficient control rule for the positioning of a module, based on the relative position of the module with respect to the beacon. This control rule can be used for every module, since the steering module agents are homogeneous.</para>
<para>The agent operates in a physical, 2-D environment with a reference beacon, as shown in <link linkend="F6-9">Figure <xref linkend="F6-9" remap="6.9"/></link>. The beacon position is defined by the coordinates (<emphasis>x<subscript>0</subscript></emphasis>, <emphasis>y<subscript>0</subscript></emphasis>). The rotation radius &#x003C1; is the distance from the center of the module to the beacon.</para>
<fig id="F6-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-9">Figure <xref linkend="F6-9" remap="6.9"/></link></label>
<caption><para>State of the Module with Respect to Reference Beacon.</para></caption>
<graphic xlink:href="graphics/ch06_fig009.jpg"/>
</fig>
<para>The angle error is calculated using the following equations:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq6-4.jpg"/></para>
<para>Here, &#x003C6;<subscript><emphasis>i</emphasis></subscript><superscript><emphasis>d</emphasis></superscript> and &#x003C6;<subscript><emphasis>i</emphasis></subscript> are known from the environment.</para>
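<para>Equation (6.4) is given as an image above; as a hedged reconstruction, for circular motion around the beacon the desired heading can be taken perpendicular to the radius vector toward the beacon, with the error wrapped into (-&#x003C0;, &#x003C0;]. The function name and this exact construction are our assumptions.</para>

```python
import math

# Hedged sketch of the angle-error computation: phi_d is assumed to be
# the tangent direction of the circle about the beacon (x0, y0), and the
# error is the wrapped difference phi_d - phi.
def angle_error(x, y, x0, y0, phi):
    phi_d = math.atan2(y0 - y, x0 - x) + math.pi / 2   # tangent direction
    err = phi_d - phi
    return math.atan2(math.sin(err), math.cos(err))    # wrap to (-pi, pi]

# Module at (1, 0), beacon at the origin, currently heading along +x:
# the tangent direction is -pi/2, so the error is about -1.5708 rad.
print(round(angle_error(1.0, 0.0, 0.0, 0.0, 0.0), 6))
```

<para>The wrapping step matters because the raw difference of two angles in (-&#x003C0;, &#x003C0;] can exceed &#x003C0; in magnitude, which would make the agent turn the long way around.</para>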
<para>In the simulated model environment, all necessary information about an agent and a beacon is provided. In a real robotic environment, this information is taken from wheel odometers and a module angle sensor. The environment information states are illustrated in <link linkend="T6-1">Table <xref linkend="T6-1" remap="6.1"/></link>.</para>
<para>The full set of actions available to the agent is presented in <link linkend="T6-2">Table <xref linkend="T6-2" remap="6.2"/></link>. The agent with actions <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in136.jpg"/> can change the angular speed via actions <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in136-1.jpg"/> and the linear speed via actions <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in136-2.jpg"/>. To turn, an agent controls the angular speed <emphasis>A</emphasis><subscript>&#x003C9;</subscript>.</para>
<table-wrap position="float" id="T6-2">
<label><link linkend="T6-2">Table <xref linkend="T6-2" remap="6.2"/></link></label>
<caption><para>Agent Actions</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="bottom" align="left">No</td>
<td valign="bottom" align="left">Robot Action</td>
<td valign="bottom" align="left">Value</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left">Increase speed, <emphasis>v</emphasis>+</td>
<td valign="top" align="left">+0.1, m/s</td>
</tr>
<tr>
<td valign="top" align="left">2</td>
<td valign="top" align="left">Reduce speed, <emphasis>v</emphasis>-</td>
<td valign="top" align="left">&#x02013;0.1, m/s</td>
</tr>
<tr>
<td valign="top" align="left">3</td>
<td valign="top" align="left">Increase turning left, &#x003C9;+</td>
<td valign="top" align="left">+0.1, rad/s</td>
</tr>
<tr>
<td valign="top" align="left">4</td>
<td valign="top" align="left">Increase turning right, &#x003C9;-</td>
<td valign="top" align="left">&#x02013;0.1, rad/s</td>
</tr>
<tr>
<td valign="top" align="left">5</td>
<td valign="top" align="left">Do nothing, &#x02205;</td>
<td valign="top" align="left">+0 m/s, +0 rad/s</td></tr>
</tbody>
</table>
</table-wrap>
<fig id="F6-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-10">Figure <xref linkend="F6-10" remap="6.10"/></link></label>
<caption><para>A Decision tree of the reward function.</para></caption>
<graphic xlink:href="graphics/ch06_fig0010.jpg"/>
</fig>
<para>The learning system is given a positive reward when the robot orientation moves closer to the goal orientation <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in137.jpg"/> and the robot uses the optimal speed &#x003C9;<subscript><emphasis>opt</emphasis></subscript>. A penalty is received when the orientation of the robot deviates from the goal orientation or the selected action is not optimal for the given position. The value of the reward is defined as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq6-6.jpg"/></para>
<para>where <emphasis>R</emphasis><subscript>&#x003C9;</subscript> is the reward function, represented by the decision tree depicted in <link linkend="F6-10">Figure <xref linkend="F6-10" remap="6.10"/></link>. Here, &#x003C6;<subscript><emphasis>stop</emphasis></subscript> represents the value of the angle at which the robot reduces speed to stop at the correct orientation, and &#x003C9;<subscript><emphasis>opt</emphasis></subscript> &#x02208; [0.6, 0.8] rad/s is the optimal speed that minimizes module power consumption. The parameter &#x003C6;<subscript><emphasis>stop</emphasis></subscript> is used to decrease the search space for the agent: when the agent&#x02019;s angle error becomes smaller than &#x003C6;<subscript><emphasis>stop</emphasis></subscript>, an action that reduces the speed receives the highest reward. The parameter &#x003C9;<subscript><emphasis>opt</emphasis></subscript> enables power optimization through the value function: if the agent&#x02019;s angle error is greater than &#x003C6;<subscript><emphasis>stop</emphasis></subscript> and &#x003C9;<subscript><emphasis>opt</emphasis></subscript><superscript><emphasis>min</emphasis></superscript> &#x0003C; &#x003C9; &#x0003C; &#x003C9;<subscript><emphasis>opt</emphasis></subscript><superscript><emphasis>max</emphasis></superscript>, then the agent&#x02019;s reward is increased by a coefficient in the range [0, 1]. This optimization favors the preferred speed with the lowest power consumption.</para>
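<para>The decision tree of Figure 6.10 can be approximated in code as follows. Only the thresholds come from the text; the branch rewards and the bonus shaping are our assumptions.</para>

```python
# Sketch of the positioning reward R_omega as a decision tree; the
# thresholds follow Section 6.5.1, the branch values are illustrative.
PHI_STOP = 0.16                  # stop angle, rad
W_OPT_MIN, W_OPT_MAX = 0.6, 0.8  # optimal speed band, rad/s

def reward(angle_err, omega, d_omega, bonus=0.5):
    """angle_err: current wrapped angle error; omega: current angular
    speed; d_omega: speed change chosen by the agent; bonus in [0, 1]."""
    if abs(angle_err) <= PHI_STOP:
        # Near the goal: reward actions that reduce speed, penalize others.
        return 1.0 if abs(omega + d_omega) < abs(omega) else -1.0
    # Far from the goal: reward turning toward the goal ...
    r = 1.0 if angle_err * (omega + d_omega) > 0 else -1.0
    # ... with a bonus coefficient for staying in the optimal speed band.
    if W_OPT_MIN < abs(omega + d_omega) < W_OPT_MAX:
        r += bonus
    return r

print(reward(1.0, 0.6, 0.1))    # turning toward the goal at optimal speed
print(reward(0.1, 0.7, -0.1))   # near the goal, slowing down
```

<para>The bonus term is what steers learning toward the power-efficient speed band without overriding the primary orientation objective.</para>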
<fig id="F6-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-11">Figure <xref linkend="F6-11" remap="6.11"/></link></label>
<caption><para>Result Topology of the Q-Function.</para></caption>
<graphic xlink:href="graphics/ch06_fig0011.jpg"/>
</fig>
<section class="lev3" id="sec6-5-1-1">
<title>6.5.1.1 Simulation</title>
<para>The first task of the robot control is to learn robot positioning through simulation. This step is done once for an individual module before any cooperative simulation sessions. The learned policy is stored and copied to the other modules via knowledge transfer. The topology of the <emphasis>Q</emphasis>-function trained during 720 epochs is shown in <link linkend="F6-11">Figure <xref linkend="F6-11" remap="6.11"/></link>.</para>
<para>The external parameters of a simulation are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Learning rate &#x003B1; = 0.4;</para></listitem>
<listitem>
<para>Discount factor &#x003B3; = 0.7;</para></listitem>
<listitem>
<para>Minimal optimal speed &#x003C9;<subscript><emphasis>opt</emphasis></subscript><superscript><emphasis>min</emphasis></superscript> = 0.6 rad/s;</para></listitem>
<listitem>
<para>Maximum optimal speed &#x003C9;<subscript><emphasis>opt</emphasis></subscript><superscript><emphasis>max</emphasis></superscript> = 0.8 rad/s;</para></listitem>
<listitem>
<para>Stop angle &#x003C6;<subscript><emphasis>stop</emphasis></subscript> = 0.16 radians.</para></listitem></itemizedlist>
<para><link linkend="F6-12">Figure <xref linkend="F6-12" remap="6.12"/></link> shows the platform&#x02019;s initial state (left) and the positioning auto-adjustment (right) using the learned policy [23].</para>
<fig id="F6-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-12">Figure <xref linkend="F6-12" remap="6.12"/></link></label>
<caption><para>Initial and Final Agent Positions.</para></caption>
<graphic xlink:href="graphics/ch06_fig012.jpg"/>
</fig>
</section>
<section class="lev3" id="sec6-5-1-2">
<title>6.5.1.2 Verification</title>
<para>The learning of the agent was executed on the real robot after a simulation with the same external parameters. The learning process took 1440 iterations. A real learning process takes more iterations on average because the real system has noise and sensor errors. <link linkend="F6-13">Figure <xref linkend="F6-13" remap="6.13"/></link> illustrates the result of executing the learned control system used to turn the modules toward the center, which is on the rear right side of the images [24].</para>
<fig id="F6-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-13">Figure <xref linkend="F6-13" remap="6.13"/></link></label>
<caption><para>Execution of a Learned Control System to turn modules to the center, which is placed on the rear right relative to the platform.</para></caption>
<graphic xlink:href="graphics/ch06_fig013.jpg"/>
</fig>
</section>
</section>
<section class="lev2" id="sec6-5-2">
<title>6.5.2 Learning of Cooperative Movement of Module-Agents</title>
<para>This subsection describes multi-agent learning for producing an efficient control law for cooperative motion using each module&#x02019;s individual speed. The module&#x02019;s desired linear-speed actions <emphasis>A<subscript>v</subscript></emphasis> should be derived through the learning process relative to the head agent so that the whole platform moves in a circular motion.</para>
<para>Let the state of the module be represented by <emphasis>s<subscript>t</subscript></emphasis> = &#x0007B;<emphasis>v<subscript>t</subscript></emphasis>, <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133-4.jpg"/>&#x0007D;, where <emphasis>v<subscript>t</subscript></emphasis> is the current value of the linear speed, and <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133-4.jpg"/> is the error vector calculated by <emphasis role="up">( <xref rid="#x1-14001r7"><!--ref: GrindEQ__6_7_--></xref>)</emphasis>. The action set <emphasis>A<subscript>v</subscript></emphasis> = &#x0007B;&#x02205;, <emphasis>v<subscript>+</subscript></emphasis>, <emphasis>v<subscript>-</subscript></emphasis>&#x0007D; represents increasing or decreasing the linear speed from <link linkend="T6-2">Table <xref linkend="T6-2" remap="6.2"/></link>, and an action <emphasis>a<subscript>t</subscript></emphasis> &#x02208; <emphasis>A<subscript>v</subscript></emphasis> is a change of forward speed &#x00394;<emphasis>v</emphasis><superscript><emphasis>t</emphasis></superscript> at a given moment in time <emphasis>t</emphasis>.</para>
<para>The head agent receives error information for each module and calculates the displacement error. This error can be positive (the module is ahead of the platform) or negative (the module is behind the platform). The learning process progresses toward the minimization of the error <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133-4.jpg"/> for every module. The maximum reward is given when <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in133-4.jpg"/> &#x02192; 0, and a penalty is given when the position of the module deviates from the predefined position.</para>
<para>The value of the reward is defined as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq6-7.jpg"/></para>
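The reward behaviour just described can be sketched as follows. This is a minimal Python illustration, not the chapter's actual reward of Equation (6.7) (which is given only as a figure): the tolerance threshold and the reward/penalty magnitudes are assumptions chosen purely to mirror the stated behaviour, i.e. maximum reward as the error approaches zero and a penalty when the module deviates further from its predefined position.

```python
def reward(error, prev_error, tolerance=0.01):
    """Hypothetical reward shaping for a module's displacement error.

    error, prev_error: signed displacement errors (positive = module
    ahead of the platform, negative = behind it).
    """
    if abs(error) < tolerance:        # error -> 0: maximum reward
        return 1.0
    if abs(error) < abs(prev_error):  # moving toward the predefined position
        return 0.1
    return -1.0                       # deviating further: penalty
```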
<section class="lev3" id="sec6-5-2-1">
<title>6.5.2.1 Simulation</title>
<para><link linkend="F6-14">Figure <xref linkend="F6-14" remap="6.14"/></link> shows the experimental results of cooperative movement after positioning has been learned [23]. Learning takes 11,000 epochs on average. The external parameters of the simulation are:</para>
<fig id="F6-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-14">Figure <xref linkend="F6-14" remap="6.14"/></link></label>
<caption><para>Agents Team Driving Process.</para></caption>
<graphic xlink:href="graphics/ch06_fig014.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Learning rate &#x003B1; = 0.4;</para></listitem>
<listitem>
<para>Discount factor &#x003B3; = 0.7.</para></listitem></itemizedlist>
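With these parameters, one step of the tabular Q-learning used by each module agent can be sketched as below. Only the learning rate α = 0.4, the discount factor γ = 0.7, and the three-element action set {Ø, v+, v−} come from the text; the state and action encodings are illustrative assumptions.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.4, 0.7                  # learning rate and discount factor
ACTIONS = ("hold", "v_plus", "v_minus")  # A_v = {0, v+, v-}

Q = defaultdict(float)                   # Q[(state, action)] -> value

def q_update(state, action, reward, next_state):
    """Standard tabular Q-learning step toward the speed-control policy."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```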
<para>During module learning, the control system did not use any stabilization of the driving direction, because the virtual environment has an ideal, flat surface. On the real platform, stabilization is provided by the internal controllers of the low-level module software. This allows us to consider only linear speed control.</para>
<fig id="F6-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-15">Figure <xref linkend="F6-15" remap="6.15"/></link></label>
<caption><para>The experiment of modules turning as in the car kinematics scheme (screenshots 1&#x02013;6) and movement around a white beacon (screenshots 7&#x02013;9).</para></caption>
<graphic xlink:href="graphics/ch06_fig015.jpg"/>
</fig>
<fig id="F6-16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-16">Figure <xref linkend="F6-16" remap="6.16"/></link></label>
<caption><para>The experiment shows that the radius does not change during movement.</para></caption>
<graphic xlink:href="graphics/ch06_fig016.jpg"/>
</fig>
</section>
<section class="lev3" id="sec6-5-2-2">
<title>6.5.2.2 Verification</title>
<para>The knowledge base of the learned agents was transferred to the agents of the control system on the real robot. <link linkend="F6-15">Figure <xref linkend="F6-15" remap="6.15"/></link> demonstrates the platform being driven by the learned system [25]. At first, the modules turn into the driving direction relative to the center of rotation (the circle drawn on white paper), as shown in screenshots 1&#x02013;6 of <link linkend="F6-15">Figure <xref linkend="F6-15" remap="6.15"/></link>. Then, the platform starts driving around the center of rotation, as shown in screenshots 7&#x02013;9 of <link linkend="F6-15">Figure <xref linkend="F6-15" remap="6.15"/></link>. The stabilization of the real module orientation is based on a low-level feedback controller provided by the software control system of the robot. This restricts the intelligent control system to manipulating the linear speed of the modules.</para>
<para>The distance to the center of rotation remains the same along the entire trajectory of the platform, as confirmed by <link linkend="F6-16">Figure <xref linkend="F6-16" remap="6.16"/></link>. Hence, the robot drives in a circle whose center coordinates and radius are known.</para>
</section>
</section></section>
<section class="lev1" id="sec6-6">
<title>6.6 Conclusions</title>
<para>This chapter focuses on an efficient, flexible, adaptive architecture for the control of a multi-wheeled production mobile robot. The system is based on a decomposition into a holonic, homogeneous, multi-agent system and on influence-based, multi-agent reinforcement learning.</para>
<para>The proposed approach incorporates multiple <emphasis>Q-</emphasis>learning agents, permitting them to effectively control every module relative to the platform. The learning process was divided into two parts:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para><emphasis>Module positioning &#x02013;</emphasis> where agents learn to minimize the error of orientation;</para></listitem>
<listitem>
<para><emphasis>Cooperative movement &#x02013;</emphasis> where agents learn to adjust the desired velocity to conform to a desired position in formation.</para></listitem></itemizedlist>
<para>A head agent is used to coordinate modules through the second step of learning. From this decomposition, every module agent will have a separate control policy for both forward and angular velocity.</para>
<para>The reward functions are designed to produce efficient control. During learning, agents take into account both the current and the previous reward value, which helps to find the best policy of agent actions. Altogether, this provides efficient control in which agents cooperate with each other and follow the path of least resistance with one another on a real platform.</para>
<para>The advantages of this method are as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para><emphasis>Decomposition</emphasis> means that instead of trying to build a global Q-function, we can build a set of local Q-functions;</para></listitem>
<listitem>
<para><emphasis>Adaptability</emphasis> &#x02013; the platform will adapt its behavior for a dynamically assigned beacon and will auto-reconfigure its moving trajectory;</para></listitem>
<listitem>
<para><emphasis>Scalability and generalization</emphasis> &#x02013; the same learning technique is used for every agent, for every beacon position, and for every platform configuration.</para></listitem></itemizedlist>
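The decomposition advantage listed above can be sketched concretely: instead of one global Q-function over the joint state-action space, each module agent keeps its own local table and acts greedily on it. The module identifiers and action names below are illustrative assumptions, not the chapter's notation.

```python
from collections import defaultdict

MODULES = ("module_1", "module_2", "module_3", "module_4")  # hypothetical ids
ACTIONS = ("hold", "v_plus", "v_minus")

# One local Q-table per module agent instead of a single global one.
local_Q = {m: defaultdict(float) for m in MODULES}

def joint_greedy_action(state):
    """Each agent consults only its own local Q-function; coordination
    emerges from the shared learning signal, not a centralized table."""
    return {m: max(ACTIONS, key=lambda a: local_Q[m][(state, a)])
            for m in MODULES}
```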
<para>In this chapter, we showed successful experiments with the real robot in which the system provides robust steering of the platform. These results indicate that the application of intelligent adaptive control systems to real mobile robots has great potential in production.</para>
<para>In future work, we will compare the developed approach to mobile robot steering with existing approaches and will provide further information about the efficiency of the developed control systems relative to real control systems.</para>
</section>
<section class="lev1" id="sec6-7">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>J. C. Andreas, &#x02018;Energy-Efficient Electric Motors, Revised and Expanded&#x02019;, CRC Press, 1992. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+C%2E+Andreas%2C+%27Energy-Efficient+ElectricMotors%2C+Revised+and+Expanded%27%2C+CRC+Press%2C+1992%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. T. de Almeida, P. Bertoldi and W. Leonhard, &#x02018;Energy efficiency improvements in electric motors and drives&#x02019;, Springer Berlin, 1997. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+T%2E+de+Almeida%2C+P%2E+Bertoldi+and+W%2E+Leonhard%2C+%27Energy+efficiency+improvements+in+electric+motors+and+drives%27%2C+Springer+Berlin%2C+1997%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R. Stetter, P. Ziemniak and A. Paczynski, &#x02018;Development, Realization and Control of a Mobile Robot&#x02019;, In Research and Education in Robotics-EUROBOT 2010, Springer, 2011:130&#x02013;140. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2E+Stetter%2C+P%2E+Ziemniak+and+A%2E+Paczynski%2C+%27Development%2C+Realization+and+Control+of+a+Mobile+Robot%27%2C+In+Research+and+Education+in+Robotics-EUROBOT+2010%2C+Springer%2C+2011%3A130-140%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>U. Dziomin, A. Kabysh, V. Golovko and R. Stetter, &#x02018;A multi-agent reinforcement learning approach for the efficient control of mobile robot&#x02019;, In Intelligent Data Acquisition and Advanced Computing Systems (IDAACS), 2013 IEEE 7th International Conference on, 2, 2013:867&#x02013;873. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=U%2E+Dziomin%2C+A%2E+Kabysh%2C+V%2E+Golovko+and+R%2E+Stetter%2C+%27A+multi-agent+reinforcement+learning+approach+for+the+efficient+control+of+mobile+robot%27%2C+In+Intelligent+Data+Acquisition+and+Advanced+Computing+Systems+%28IDAACS%29%2C+2013+IEEE+7th+International+Conference+on%2C+2%2C+2013%3A867-873%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Mei, Y.-H. Lu, Y. C. Hu, and C. G. Lee, &#x02018;Energy-efficient motion planning for mobile robots&#x02019;, In Robotics and Automation, 2004. Proceedings. ICRA&#x02019;04. 2004 IEEE International Conference on, 5, 2004:4344&#x02013;4349. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Mei%2C+Y%2E-H%2E+Lu%2C+Y%2E+C%2E+Hu%2C+and+C%2E+G%2E+Lee%2C+%27Energy-efficient+motion+planning+for+mobile+robots%27%2C+In+Robotics+and+Automation%2C+2004%2E+Proceedings%2E+ICRA%2704%2E+2004+IEEE+International+Conference+on%2C+5%2C+2004%3A4344-4349%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Ogunniyi and M. S. Tsoeu, &#x02018;Q-learning based energy efficient path planning using weights&#x02019;, In proceedings of the 24th symposium of the Pattern Recognition association of South Africa, 2013:76&#x02013;82. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Ogunniyi+and+M%2E+S%2E+Tsoeu%2C+%27Q-learning+based+energy+efficient+path+planning+using+weights%27%2C+In+proceedings+of+the+24th+symposium+of+the+Pattern+Recognition+association+of+South+Africa%2C+2013%3A76-82%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Mei, Y.-H. Lu, C. G. Lee and Y. C. Hu, &#x02018;Energy-efficient mobile robot exploration&#x02019;, In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, 2006: 505&#x02013;511. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Mei%2C+Y%2E-H%2E+Lu%2C+C%2E+G%2E+Lee+and+Y%2E+C%2E+Hu%2C+%27Energy-efficient+mobile+robot+exploration%27%2C+In+Robotics+and+Automation%2C+2006%2E+ICRA+2006%2E+Proceedings+2006+IEEE+International+Conference+on%2C+2006%3A+505-511%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>N. Ceccarelli, M. Di Marco, A. Garulli and A. Giannitrapani, &#x02018;Collective circular motion of multi-vehicle systems with sensory limitations&#x02019;, In Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC&#x02019;05. 44th IEEE Conference on, 2005:740&#x02013;745. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=N%2E+Ceccarelli%2C+M%2E+Di+Marco%2C+A%2E+Garulli+and+A%2E+Giannitrapani%2C+%27Collective+circular+motion+of+multi-vehicle+systems+with+sensory+limitations%27%2C+In+Decision+and+Control%2C+2005+and+2005+European+Control+Conference%2E+CDC-ECC%2705%2E+44th+IEEE+Conference+on%2C+2005%3A740-745%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>N. Ceccarelli, M. Di Marco, A. Garulli, and A. Giannitrapani, &#x02018;Collective circular motion of multi-vehicle systems&#x02019;, Automatica, 44(12): 3025&#x02013;3035, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=N%2E+Ceccarelli%2C+M%2E+Di+Marco%2C+A%2E+Garulli%2C+and+A%2E+Giannitrapani%2C+%27Collective+circular+motion+of+multi-vehicle+systems%27%2C+Automatica%2C+44%2812%29%3A+3025-3035%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Benedettelli, N. Ceccarelli, A. Garulli and A. Giannitrapani, &#x02018;Experimental validation of collective circular motion for nonholonomic multi-vehicle systems&#x02019;, Robotics and Autonomous Systems, 58(8):1028&#x02013;1036, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Benedettelli%2C+N%2E+Ceccarelli%2C+A%2E+Garulli+and+A%2E+Giannitrapani%2C+%27Experimental+validation+of+collective+circular+motion+for+nonholonomic+multi-vehicle+systems%27%2C+Robotics+and+Autonomous+Systems%2C+58%288%29%3A1028-1036%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>K. Fischer, M. Schillo and J. Siekmann, &#x02018;Holonic multiagent systems: A foundation for the organisation of multiagent systems&#x02019;, In Holonic and Multi-Agent Systems for Manufacturing., Springer, 2003: 71&#x02013;80. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+Fischer%2C+M%2E+Schillo+and+J%2E+Siekmann%2C+%27Holonic+multiagent+systems%3A+A+foundation+for+the+organisation+of+multiagent+systems%27%2C+In+Holonic+and+Multi-Agent+Systems+for+Manufacturing%2E%2C+Springer%2C+2003%3A+71-80%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>N. Vlassis, &#x02018;A concise introduction to multiagent systems and distributed artificial intelligence&#x02019;, Synthesis Lectures on Artificial Intelligence and Machine Learning, 1(1):1&#x02013;71, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=N%2E+Vlassis%2C+%27A+concise+introduction+to+multiagent+systems+and+distributed+artificial+intelligence%27%2C+Synthesis+Lectures+on+Artificial+Intelligence+and+Machine+Learning%2C+1%281%29%3A1-71%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>L. Gasser, &#x02018;Social conceptions of knowledge and action: DAI foundations and open systems semantics&#x02019;, Artificial intelligence, 47(1): 107&#x02013;138, 1991. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=L%2E+Gasser%2C+%27Social+conceptions+of+knowledge+and+action%3A+DAI+foundations+and+open+systems+semantics%27%2C+Artificial+intelligence%2C+47%281%29%3A+107-138%2C+1991%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Gerber, J. Siekmann and G. Vierke, &#x02018;Flexible autonomy in holonic agent systems&#x02019;, In Proceedings of the 1999 AAAI Spring Symposium on Agents with Adjustable Autonomy, 1999. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Gerber%2C+J%2E+Siekmann+and+G%2E+Vierke%2C+%27Flexible+autonomy+in+holonic+agent+systems%27%2C+In+Proceedings+of+the+1999+AAAI+Spring+Symposium+on+Agents+with+Adjustable+Autonomy%2C+1999%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Gerber, J. Siekmann and G. Vierke, &#x02018;Holonic multi-agent systems&#x02019;, Tech. rep. DFKI Deutsches Forschungszentrum f&#x00FC;r K&#x00FC;nstliche Intelligenz, Postfach 151141, 66041 Saarb. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Gerber%2C+J%2E+Siekmann+and+G%2E+Vierke%2C+%27Holonic+multi-agent+systems%27%2C+Tech%2E+rep%2E+DFKI+Deutsches+Forschungszentrum+fr+Knstliche+Intelligenz%2C+Postfach+151141%2C+66041+Saarb%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Castelfranchi, &#x02018;Commitments: From Individual Intentions to Groups and Organizations&#x02019;, In <emphasis>ICMAS</emphasis>, 95, 1995:41&#x02013;48. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Castelfranchi%2C+%27Commitments%3A+From+Individual+Intentions+to+Groups+and+Organizations%27%2C+In+ICMAS%2C+95%2C+1995%3A41-48%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>P. Stone and M. Veloso, &#x02018;Multiagent Systems: A Survey from a Machine Learning Perspective&#x02019;, Autonomous Robots, 8(3):345&#x02013;383, 2000. [Online]. http://dx.doi.org/10.1023/A%3A1008942012299 <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=P%2E+Stone+and+M%2E+Veloso%2C+%27Multiagent+Systems%3A+A+Survey+from+a+Machine+Learning+Perspective%27%2C+Autonomous+Robots%2C+8%283%29%3A345-383%2C+2000%2E+%5BOnline%5D%2E+http%3A%2F%2Fdx%2Edoi%2Eorg%2F10%2E1023%2FA%253A1008942012299" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Kabysh and V. Golovko, &#x02018;General model for organizing interactions in multi-agent systems&#x02019;, International Journal of Computing, 11(3): 224&#x02013;233, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Kabysh+and+V%2E+Golovko%2C+%27General+model+for+organizing+interactions+in+multi-agent+systems%27%2C+International+Journal+of+Computing%2C+11%283%29%3A+224-233%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Kabysh, V. Golovko and A. Lipnickas, &#x02018;Influence Learning for Multi-Agent Systems Based on Reinforcement Learning&#x02019;, International Journal of Computing, 11(1):39&#x02013;44, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Kabysh%2C+V%2E+Golovko+and+A%2E+Lipnickas%2C+%27Influence+Learning+for+Multi-Agent+Systems+Based+on+Reinforcement+Learning%27%2C+International+Journal+of+Computing%2C+11%281%29%3A39-44%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Kabysh, V. Golovko and K. Madani, &#x02018;Influence model and reinforcement learning for multi agent coordination&#x02019;, Journal of Qafqaz University, Mathematics and Computer Science, 33:58&#x02013;64, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Kabysh%2C+V%2E+Golovko+and+K%2E+Madani%2C+%27Influence+model+and+reinforcement+learning+for+multi+agent+coordination%27%2C+Journal+of+Qafqaz+University%2C+Mathematics+and+Computer+Science%2C+33%3A58-64%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. G. Barto, &#x02018;Reinforcement learning: An introduction&#x02019;, MIT press, 1998. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+G%2E+Barto%2C+%27Reinforcement+learning%3A+An+introduction%27%2C+MIT+press%2C+1998%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>W. Ren and N. Sorensen, &#x02018;Distributed coordination architecture for multi-robot formation control&#x02019;, Robotics and Autonomous Systems, 56(4):324&#x02013;333, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=W%2E+Ren+and+N%2E+Sorensen%2C+%27Distributed+coordination+architecture+for+multi-robot+formation+control%27%2C+Robotics+and+Autonomous+Systems%2C+56%284%29%3A324-333%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>[Online]. https://www.youtube.com/watch?v=MSweNcIOJYg <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=%5BOnline%5D%2E+https%3A%2F%2Fwww%2Eyoutube%2Ecom%2Fwatch%B4v%3DMSweNcIOJYg" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>[Online]. http://youtu.be/RCO-j32-ryg <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=%5BOnline%5D%2E+http%3A%2F%2Fyoutu%2Ebe%2FRCO-j32-ryg" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>[Online]. http://youtu.be/pwgmdAfGb40 <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=%5BOnline%5D%2E+http%3A%2F%2Fyoutu%2Ebe%2FpwgmdAfGb40" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch07" label="7" xreflabel="7">
<title>Underwater Robot Intelligent Control Based on Multilayer Neural Network</title>
<para><emphasis role="strong">D. A. Oskin<superscript><emphasis role="strong">1</emphasis></superscript>, A. A. Dyda<superscript><emphasis role="strong">1</emphasis></superscript>, S. Longhi<superscript><emphasis role="strong">2</emphasis></superscript> and A. Monteri&#x00F9;<superscript><emphasis role="strong">2</emphasis></superscript></emphasis></para>
<para><superscript>1</superscript>Department of Information Control Systems, Far Eastern Federal University Vladivostok, Russia</para>
<para><superscript>2</superscript>Dipartimento di Ingegneria dell&#x2019;Informazione, Universit&#x00E0; Politecnica delle Marche, Ancona, Italy</para>
<para>Corresponding author: D. A. Oskin &lt;daoskin@mail.ru&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>The chapter is devoted to the design of an intelligent neural network-based control system for underwater robots. A new algorithm for intelligent controller learning is derived using the speed gradient method. The proposed systems provide robot dynamics close to the reference ones. Simulation results of neural network control systems for underwater robot dynamics with parameter and partial structural uncertainty have confirmed the promise and effectiveness of the developed approach.</para>
<para><emphasis role="strong">Keywords:</emphasis> Underwater robot, control, uncertain dynamics, multilayer neural network, speed gradient method</para>
</section>
<section class="lev1" id="sec7-1">
<title>7.1 Introduction</title>
<para>Underwater Robots (URs) offer great promise and have a broad range of applications in ocean exploration and exploitation. To provide exact movement along a prescribed spatial trajectory, URs need a high-quality control system. It is well known that URs can be considered multi-dimensional, nonlinear, and uncertain controllable objects. Hence, the design of UR control laws is a difficult and complex problem [3, 8].</para>
<para>Modern control theory has produced many methods and approaches for such synthesis problems, including nonlinear feedback linearization, adaptive control, robust control, and variable structure systems [1, 4]. However, most of these synthesis methods essentially rely on information about the structure of the UR&#x02019;s mathematical model. The interaction of a robot with the water environment is so complicated that exact equations of UR motion are hard to obtain. A possible way to overcome these synthesis problems can be found in the class of artificial intelligence systems, in particular those based on multi-layer Neural Networks (NNs) [1, 2, 5].</para>
<para>Recently, many publications have been devoted to the problems of NN identification and control, starting from the basic paper [5]. Many of them address, in particular, applications of NNs to UR control problems [1, 2, 7].</para>
<para>Conventional applications of multi-layer NNs are based on preliminary network learning. As a rule, this process minimizes a criterion expressing the overall deviation of the NN outputs from the desired values for given NN inputs. The learning results in an adjustment of the NN weight coefficients. Such an approach presupposes knowledge of teaching input-output pairs [5, 7].</para>
<para>The distinctive feature of applying a NN as a controller is that the desired control signal is unknown in advance. The desired trajectory (program signal) can be defined only for the whole control system [1, 2].</para>
<para>Thus, applying multi-layer NNs in control tasks demands approaches that take into account the dynamical nature of the controlled objects.</para>
<para>In this chapter, an intelligent NNs-based control system for URs is designed. A new learning algorithm for an intelligent NN controller, which uses the speed gradient method [4], is proposed. Numerical experiments with control systems containing the proposed NN controller were carried out in different scenarios: varying parameters and different expressions for viscous torques and forces. Modeling results are given and discussed.</para>
<para>Note that the choice of a NN regulator is connected with the principal orientation of the neural network approach toward a priori uncertainty, which characterizes any UR. In fact, the inertia matrices of the UR&#x02019;s rigid body are not exactly known, nor are the added water masses. The forces and torques of viscous friction have unknown and uncertain functional structure and parameters. Hence, a UR can be considered a controllable object with partial parameter and structure uncertainties.</para>
</section>
<section class="lev1" id="sec7-2">
<title>7.2 Underwater Robot Model</title>
<para>The UR mathematical model traditionally consists of differential equations describing its kinematics</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-1.jpg"/></para>
<para>and its dynamics</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-2.jpg"/></para>
<para>where <emphasis>J</emphasis>(<emphasis>q</emphasis><subscript>1</subscript>) is the kinematical matrix; <emphasis>q</emphasis><subscript>1</subscript>, <emphasis>q</emphasis><subscript>2</subscript> are the vectors of generalized coordinates and body-fixed frame velocities of the UR; <emphasis>U</emphasis> is the control forces and torques vector; <emphasis>D</emphasis> is the inertia matrix taking into account added masses of water; <emphasis>B</emphasis> is the Coriolis &#x02013; centripetal term matrix; <emphasis>G</emphasis> is the vector of generalized gravity, buoyancy and nonlinear damping forces/torques [3].</para>
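Given these definitions, one explicit-Euler simulation step of the model (7.1)-(7.2) can be sketched as below. Since the chapter's equations are given only as figures, the shapes and the exact argument lists of J, B, and G are assumptions; the sketch only mirrors the stated structure, i.e. kinematics driven by J(q1) and dynamics D, B, G balanced against the control U.

```python
import numpy as np

def ur_step(q1, q2, U, J, D, B, G, dt=0.01):
    """One explicit-Euler step of the UR model (sketch).

    Assumed structure:
      kinematics: q1_dot = J(q1) @ q2
      dynamics:   D @ q2_dot + B(q2) @ q2 + G(q1, q2) = U
    J, B, G are callables returning matrices/vectors of matching shape.
    """
    q2_dot = np.linalg.solve(D, U - B(q2) @ q2 - G(q1, q2))
    q1_dot = J(q1) @ q2
    return q1 + dt * q1_dot, q2 + dt * q2_dot
```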
<para>The lack of a priori knowledge of the mathematical structure and parameters of the UR model matrices and vectors can be compensated by intensive experimental research. As a rule, this approach is too expensive and time-consuming. An alternative approach is to use intelligent NN control.</para>
</section>
<section class="lev1" id="sec7-3">
<title>7.3 Intelligent NN Controller and Learning Algorithm Derivation</title>
<para>Our objective is to synthesize an underwater robot NN controller in order to provide the UR movement along a prescribed trajectory <emphasis>q<subscript>d1</subscript>(t), q<subscript>d2</subscript>(t)</emphasis>.</para>
<para>Firstly, we consider the control task with respect to the velocities <emphasis>q<subscript>d2</subscript>(t)</emphasis>. Let us define the error as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-3.jpg"/></para>
<para>and let&#x02019;s introduce the function <emphasis>Q</emphasis> as a measure of the difference between desired and real trajectories:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-4.jpg"/></para>
<para>where the matrix of inertia is <emphasis>D</emphasis> > 0.</para>
<para>Furthermore, we use the speed gradient method developed by A. Fradkov [4]. According to this method, let us compute the time derivative of <emphasis>Q</emphasis>:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-5.jpg"/></para>
<para>From</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-6.jpg"/></para>
<para>one has</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-7.jpg"/></para>
<para>Using the first term of the dynamics Equation <emphasis role="up">( <xref rid="#x1-3002r2"><!--ref: GrindEQ__7_2_--></xref>)</emphasis>, one can get the following:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-8.jpg"/></para>
<para>and thus the time derivative of function <emphasis>Q</emphasis> can be written in the following form:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-9.jpg"/></para>
<para>After mathematical manipulation, one gets</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg150.jpg"/></para>
<para>As is known, the last term contains a skew-symmetric matrix; hence, this term is equal to zero, and we obtain the following simplified expression:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-10.jpg"/></para>
<para>Our aim is to implement an intelligent UR control [1] based on neural networks. Without loss of generality of the proposed approach, let&#x02019;s choose a two-layer NN (<link linkend="F7-1">Figure <xref linkend="F7-1" remap="7.1"/></link>). Let the hidden and output layers have H and m neurons, respectively (m is equal to the dimension of <emphasis>e</emphasis><subscript>2</subscript>). For the sake of simplicity, one supposes that only the sum of weighted signals (without nonlinear transformation) is realized in the neural network output layer. The input vector has N coordinates.</para>
<fig id="F7-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-1">Figure <xref linkend="F7-1" remap="7.1"/></link></label>
<caption><para>Neural network structure.</para></caption>
<graphic xlink:href="graphics/ch07_figN001.jpg"/>
</fig>
<para>Let&#x02019;s define <emphasis>w<subscript>ij</subscript></emphasis> as the weight coefficient for the i-th input of the j-th neuron of the hidden layer. So, these coefficients compose the following matrix</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-11.jpg"/></para>
<para>As a result of the nonlinear transformation <emphasis>f(w,x)</emphasis>, the hidden layer output vector can be written in the following form:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-12.jpg"/></para>
<para>where <emphasis>w<subscript>k</subscript></emphasis> denotes the <emphasis>k</emphasis>-th row of matrix <emphasis>w</emphasis> and <emphasis>x</emphasis> is the NN input vector.</para>
<para>Analogously, let&#x02019;s introduce the matrix <emphasis>W</emphasis> whose element <emphasis>W<subscript>li</subscript></emphasis> denotes the transform (weight) coefficient from the i-th neuron of the hidden layer to the l-th neuron of the output layer.</para>
<para>Once the NN parameters are defined, the underwater robot control signal (NN output) is computed as follows:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-13.jpg"/></para>
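The controller output just described, U = W f(w x), amounts to a plain two-layer forward pass. The sketch below assumes the hidden nonlinearity is the logistic sigmoid (the "usual form" mentioned later in the text); the dimensions N, H, m are as defined above.

```python
import numpy as np

def nn_control(x, w, W):
    """Two-layer NN controller output (sketch).

    x : input vector of N coordinates;
    w : hidden-layer weight matrix, shape (H, N);
    W : output-layer weight matrix, shape (m, H).
    As stated in the text, the output layer only forms a weighted sum
    (no nonlinear transformation).
    """
    hidden = 1.0 / (1.0 + np.exp(-(w @ x)))  # f(w_k . x), k = 1..H
    return W @ hidden                         # U = W f(w x)
```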
<para>Substitution of this control into <emphasis role="up">( <xref rid="#x1-4008r10"><!--ref: GrindEQ__7_10_--></xref>)</emphasis>, allows us to get</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-14.jpg"/></para>
<para>To derive the NN learning algorithm, we apply the speed gradient method [4]. For this, we compute the partial derivatives of the time derivative of function <emphasis>Q</emphasis> with respect to the adjustable NN parameters &#x02013; the matrices <emphasis>w</emphasis> and <emphasis>W</emphasis>. Direct differentiation gives</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-15.jpg"/></para>
<para>It is easy to demonstrate that if we choose all activation functions in the usual form</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-16.jpg"/></para>
<para>this implies the following property</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-17.jpg"/></para>
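Assuming the "usual form" of the activation functions is the standard logistic sigmoid (an assumption here, since Equation (7.16) is given only as a figure), the implied property is f′(s) = f(s)(1 − f(s)), which the sketch below verifies numerically:

```python
import math

def sigmoid(s):
    """The assumed 'usual form': the logistic function."""
    return 1.0 / (1.0 + math.exp(-s))

def sigmoid_prime(s):
    """Derivative via the property f'(s) = f(s) * (1 - f(s))."""
    f = sigmoid(s)
    return f * (1.0 - f)
```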
<para>Let&#x02019;s introduce the following additional functions</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-18.jpg"/></para>
<para>and the matrix</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-19.jpg"/></para>
<para>Hence, direct calculation gives</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-20.jpg"/></para>
<para>As a final stage, one can write the NN learning algorithm in the following form:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-21.jpg"/></para>
<para>where &#x003B3; is the learning step, <emphasis>k</emphasis> is the number of iterations.</para>
<para>The continuous form of this learning algorithm can be presented as</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-22.jpg"/></para>
<para>Such an integral law of the NN-regulator learning algorithm may cause unstable regimes in the control system, as happens in adaptive systems [4]. The following robust form of the same algorithm is also used:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-23.jpg"/></para>
<para>where constant &#x003B1; > 0.</para>
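<para>The difference between the integral law (7.22) and its robust form (7.23) can be illustrated on a scalar toy example: the leakage term &#x02212;&#x003B1;<emphasis>w</emphasis> keeps the adjustable parameter bounded when the gradient signal contains a persistent bias (e.g. caused by noise or unmodelled dynamics), whereas the purely integral law drifts. This is a hypothetical sketch, not the chapter&#x02019;s simulation:</para>

```python
import numpy as np

# Toy comparison of the integral learning law   dw/dt = -gamma * grad
# with its robust (leakage) form                dw/dt = -gamma * grad - alpha * w.
# The "gradient" signal has a persistent nonzero mean, so the integral
# law accumulates it without bound, while leakage bounds the weight.
gamma, alpha, dt = 1.0, 0.5, 1e-3
t = np.arange(0.0, 50.0, dt)
grad = 0.2 + 0.1 * np.sin(t)          # biased gradient signal (illustrative)

w_int, w_rob = 0.0, 0.0
w_int_hist, w_rob_hist = [], []
for g in grad:
    w_int += dt * (-gamma * g)                    # integral law (7.22)-style
    w_rob += dt * (-gamma * g - alpha * w_rob)    # robust law (7.23)-style
    w_int_hist.append(w_int)
    w_rob_hist.append(w_rob)

print(abs(w_int_hist[-1]))   # grows roughly linearly with time
print(abs(w_rob_hist[-1]))   # stays bounded, roughly gamma*0.2/alpha
```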
<para>Now, let&#x02019;s consider which components should be included in the NN input vector. The NN controller is intended to compensate for the influence of the corresponding matrix and vector functions; thus, in the most general case, the NN input vector must be composed of <emphasis>q</emphasis><subscript>1</subscript>, <emphasis>q</emphasis><subscript>2</subscript>, <emphasis>e</emphasis><subscript>2</subscript>, <emphasis>q</emphasis><subscript><emphasis>d</emphasis>2</subscript> and their time derivatives.</para>
<para>The NN learning procedure reduces the function <emphasis>Q</emphasis>; thus, under ideal conditions, the error <emphasis>e</emphasis><subscript>2</subscript> converges to zero and the UR follows the desired trajectory</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-24.jpg"/></para>
<para>If the UR trajectory is given by <emphasis>q</emphasis><subscript><emphasis>d</emphasis>1</subscript><emphasis>(t)</emphasis>, one can choose</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-25.jpg"/></para>
<para>where <emphasis>k</emphasis> is a positive constant. From the kinematics Equation <emphasis role="up">( <xref rid="#x1-3001r1"><!--ref: GrindEQ__7_1_--></xref>)</emphasis>, it follows that</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-26.jpg"/></para>
<para>and</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-27.jpg"/></para>
<para>where</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-28.jpg"/></para>
<para>Hence, the UR follows the planned trajectory <emphasis>q<subscript>d1</subscript>(t)</emphasis>.</para>
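<para>The convergence argument above can be illustrated with a minimal numerical sketch: assuming the velocity loop is ideal (<emphasis>e</emphasis><subscript>2</subscript> = 0), the choice of <emphasis>q</emphasis><subscript><emphasis>d</emphasis>2</subscript> with gain <emphasis>k</emphasis> &gt; 0 reduces the kinematic tracking error to stable first-order dynamics of the form d<emphasis>e</emphasis><subscript>1</subscript>/d<emphasis>t</emphasis> = &#x02212;<emphasis>ke</emphasis><subscript>1</subscript>, whose solution decays exponentially (a scalar toy model, not the chapter&#x02019;s full vector dynamics):</para>

```python
# Sketch of the convergence argument: with an ideal velocity loop
# (e2 = 0), the kinematic tracking error obeys  de1/dt = -k * e1
# with k > 0, so e1 decays exponentially and q1 follows q_d1.
k, dt = 2.0, 1e-3
e1 = 1.0                        # initial tracking error (illustrative)
for _ in range(int(3.0 / dt)):  # simulate 3 s with forward Euler
    e1 += dt * (-k * e1)
print(e1)   # close to exp(-k*3) = exp(-6), i.e. a small positive number
```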
</section>
<section class="lev1" id="sec7-4">
<title>7.4 Simulation Results of the Intelligent NN Controller</title>
<para>In order to check the effectiveness of the proposed approach, different computer simulations were carried out. The UR model parameters, taken from [6], are the following:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg153.jpg"/></para>
<para>where the inertia matrix of the UR rigid body is</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg153-1.jpg"/></para>
<para>and the inertia matrix of the hydrodynamic added mass is</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg154.jpg"/></para>
<para>Matrices <emphasis>B</emphasis> and <emphasis>G</emphasis> are</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg154-1.jpg"/></para>
<para>Vector <emphasis>q<subscript>2</subscript></emphasis> consists of the following components (linear and angular UR velocities):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-29.jpg"/></para>
<para>The NN input is composed of <emphasis>q<subscript>2</subscript></emphasis> and <emphasis>e<subscript>2</subscript></emphasis>. The NN output (control forces and torque) is the vector</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-30.jpg"/></para>
<para>For an NN controller containing 10 neurons in the hidden layer, the simulation results are given in Figures 7.2&#x02013;7.9.</para>
<para>In the considered numerical experiments, the desired trajectory was taken as follows:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg154-2.jpg"/></para>
<fig id="F7-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-2">Figure <xref linkend="F7-2" remap="7.2"/></link></label>
<caption><para>Transient processes in NN control system (&#x003B1; = 0.01, &#x003B3; = 250).</para></caption>
<graphic xlink:href="graphics/ch07_figN002.jpg"/>
</fig>
<fig id="F7-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-3">Figure <xref linkend="F7-3" remap="7.3"/></link></label>
<caption><para>Forces and Torque in NN control system (&#x003B1; = 0.01, &#x003B3; = 250).</para></caption>
<graphic xlink:href="graphics/ch07_figN003.jpg"/>
</fig>
<fig id="F7-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-4">Figure <xref linkend="F7-4" remap="7.4"/></link></label>
<caption><para>Examples of hidden layer weight coefficients evolution (&#x003B1; = 0.01,&#x003B3; = 250).</para></caption>
<graphic xlink:href="graphics/ch07_figN004.jpg"/>
</fig>
<fig id="F7-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-5">Figure <xref linkend="F7-5" remap="7.5"/></link></label>
<caption><para>Examples of output layer weight coefficients evolution (&#x003B1; = 0.01, &#x003B3; = 250).</para></caption>
<graphic xlink:href="graphics/ch07_figN005.jpg"/>
</fig>
<fig id="F7-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-6">Figure <xref linkend="F7-6" remap="7.6"/></link></label>
<caption><para>Transient processes in NN control system (&#x003B1; = 0.01, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN006.jpg"/>
</fig>
<fig id="F7-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-7">Figure <xref linkend="F7-7" remap="7.7"/></link></label>
<caption><para>Forces and torque in NN control system (&#x003B1; = 0.01, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN007.jpg"/>
</fig>
<fig id="F7-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-8">Figure <xref linkend="F7-8" remap="7.8"/></link></label>
<caption><para>Examples of hidden layer weight coefficients evolution (&#x003B1; = 0.01, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN008.jpg"/>
</fig>
<fig id="F7-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-9">Figure <xref linkend="F7-9" remap="7.9"/></link></label>
<caption><para>Examples of output layer weight coefficients evolution (&#x003B1; = 0.01, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN009.jpg"/>
</fig>
</section>
<section class="lev1" id="sec7-5">
<title>7.5 Modification of NN Control</title>
<para>In the previous sections, an NN controller was designed. Practically speaking, the synthesis procedure of the NN regulator does not use any information about the mathematical model of the controlled object. However, the differential equations describing the underwater robot dynamics have a particular structure which can be taken into account when solving the synthesis problem of the control system.</para>
<para>There are different ways to do this; one possible approach is derived below.</para>
<para>As mentioned before, the parameters of underwater robots, such as added masses, moments of inertia, coefficients of viscous friction, etc., are not all exactly known because of the complex hydrodynamic nature of the robot&#x02019;s movement in the water environment.</para>
<para>Let&#x02019;s suppose that a set of nominal UR parameters can be estimated. Hence, it is possible to get appropriate nominal matrices <emphasis>D</emphasis><subscript>0</subscript>(<emphasis>q</emphasis><subscript>1</subscript>), <emphasis>B</emphasis><subscript>0</subscript>(<emphasis>q</emphasis><subscript>1</subscript>, <emphasis>q</emphasis><subscript>2</subscript>) and <emphasis>G</emphasis><subscript>0</subscript>(<emphasis>q</emphasis><subscript>1</subscript>,<emphasis>q</emphasis><subscript>2</subscript>) in Equation <emphasis role="up">( <xref rid="#x1-3002r2"><!--ref: GrindEQ__7_2_--></xref>)</emphasis>. Let&#x02019;s denote the deviations of the real matrices from the nominal ones as &#x00394;<emphasis>D</emphasis>(<emphasis>q</emphasis><subscript>1</subscript>), &#x00394;<emphasis>B</emphasis>(<emphasis>q</emphasis><subscript>1</subscript>,<emphasis>q</emphasis><subscript>2</subscript>) and &#x00394;<emphasis>G</emphasis>(<emphasis>q</emphasis><subscript>1</subscript>,<emphasis>q</emphasis><subscript>2</subscript>), respectively. So, the following takes place:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-31.jpg"/></para>
<para>Inserting expressions <emphasis role="up">( <xref rid="#x1-5001r29"><!--ref: GrindEQ__7_29_--></xref>)</emphasis> into Equation <emphasis role="up">( <xref rid="#x1-4008r10"><!--ref: GrindEQ__7_10_--></xref>)</emphasis> gives</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-32.jpg"/></para>
<para>Now let&#x02019;s choose the control law in the form:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-33.jpg"/></para>
<para>where</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-34.jpg"/></para>
<para>is the nominal control associated with the known part of the robot dynamics (the matrix &#x00393; > 0 is positive definite) and <emphasis>U<subscript>NN</subscript></emphasis> is the neural network control compensating for the uncertainty. The scheme of the proposed NN control system for an underwater robot is given in <link linkend="F7-10">Figure <xref linkend="F7-10" remap="7.10"/></link>.</para>
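<para>The structure of this composite law can be sketched on a hypothetical single-channel velocity model: the nominal component cancels the known part of the dynamics and adds stabilizing feedback with gain &#x00393;, while the residual tracking error left by the parameter mismatch is exactly what <emphasis>U<subscript>NN</subscript></emphasis> is introduced to compensate. All numerical values below are illustrative assumptions, not the UR parameters of this chapter:</para>

```python
import numpy as np

# Hypothetical 1-DOF velocity channel  m * dv/dt = U - b * v * |v|
# (quadratic drag).  The nominal control U0 is built from estimated
# parameters (m0, b0); the tracking error that remains is caused by the
# parameter mismatch, i.e. the part U_NN would have to compensate.
m, b = 30.0, 8.0        # "true" plant (illustrative numbers)
m0, b0 = 25.0, 6.0      # nominal model used by the controller
Gamma = 5.0             # stabilizing feedback gain (Gamma > 0)
dt = 1e-3

def u_nominal(v, v_d, v_d_dot):
    # Cancels the nominal dynamics and adds feedback on e2 = v - v_d.
    return b0 * v * abs(v) + m0 * (v_d_dot - Gamma * (v - v_d))

v = 0.0
for i in range(int(10.0 / dt)):
    t = i * dt
    v_d, v_d_dot = np.sin(t), np.cos(t)   # desired velocity profile
    U = u_nominal(v, v_d, v_d_dot)        # U_NN omitted in this sketch
    v += dt * (U - b * v * abs(v)) / m    # forward Euler integration
print(abs(v - np.sin(10.0)))  # small residual error due to model mismatch
```

With exact nominal parameters (m0 = m, b0 = b) the same loop drives the error essentially to zero, which mirrors the argument around Equation (7.35).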
<fig id="F7-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-10">Figure <xref linkend="F7-10" remap="7.10"/></link></label>
<caption><para>Scheme of the NN control system.</para></caption>
<graphic xlink:href="graphics/ch07_figN0010.jpg"/>
</fig>
<para>If the robot dynamics can be determined exactly (no uncertainty is present), the nominal control (7.34) fully compensates the undesirable terms in (7.32) (<emphasis>U<subscript>NN</subscript></emphasis> can be set to zero) and one has</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-35.jpg"/></para>
<para>Thus, the functions <emphasis>Q(t)</emphasis> and <emphasis>e<subscript>2</subscript>(t)</emphasis> converge to zero as <emphasis>t</emphasis> &#x02192; &#x0221E;.</para>
<para>In the general case, as follows from (7.32) &#x02013; <emphasis role="up">( <xref rid="#x1-6004r34"><!--ref: GrindEQ__7_34_--></xref>)</emphasis>, one has</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-36.jpg"/></para>
<para>As one can expect, the use of the nominal component of the control facilitates the implementation of the proper NN control.</para>
<para>The further steps of deriving the NN controller learning algorithm can be carried out practically in the same manner as above (see Equations (7.15), (7.20) and (7.21)).</para>
<para>In order to check the derived NN control, mathematical simulations of the UR control system were carried out. The nominal matrices <emphasis>D</emphasis><subscript>0</subscript>(<emphasis>q</emphasis><subscript>1</subscript>), <emphasis>B</emphasis><subscript>0</subscript>(<emphasis>q</emphasis><subscript>1</subscript>,<emphasis>q</emphasis><subscript>2</subscript>) and <emphasis>G</emphasis><subscript>0</subscript>(<emphasis>q</emphasis><subscript>1</subscript>,<emphasis>q</emphasis><subscript>2</subscript>) were taken as follows:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg160.jpg"/></para>
<para>and matrix &#x00393; = diag[0.02, 0.02, 0.02].</para>
<para>Note that the matrices <emphasis>D</emphasis><subscript>0</subscript>, <emphasis>B</emphasis><subscript>0</subscript> of the nominal dynamics model contain nonzero elements only on the diagonal. This means that the nominal model is simplified and does not take into account the interaction between different control channels (of linear and angular velocities). The absence of these terms in the nominal dynamics results in partial parametric and structural uncertainty.</para>
<para>Figures 7.11&#x02013;7.18 show the transient processes and control signals (forces and torque) in the designed system with the modified NN regulator. The experimental results demonstrate that the robot coordinates converge to the desired trajectories. In contrast to conventional multilayer NN applications, the weight coefficients of the proposed NN controller vary simultaneously with the control process.</para>
<fig id="F7-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-11">Figure <xref linkend="F7-11" remap="7.11"/></link></label>
<caption><para>Transient processes with modified NN-control (&#x003B1; = 0, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN0011.jpg"/>
</fig>
<fig id="F7-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-12">Figure <xref linkend="F7-12" remap="7.12"/></link></label>
<caption><para>Forces and torque with modified NN control (&#x003B1; = 0, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN0012.jpg"/>
</fig>
<fig id="F7-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-13">Figure <xref linkend="F7-13" remap="7.13"/></link></label>
<caption><para>Examples of hidden layer weight coefficients evolution (&#x003B1; = 0, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN0013.jpg"/>
</fig>
<fig id="F7-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-14">Figure <xref linkend="F7-14" remap="7.14"/></link></label>
<caption><para>Examples of output layer weight coefficients evolution (&#x003B1; = 0, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN0014.jpg"/>
</fig>
<fig id="F7-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-15">Figure <xref linkend="F7-15" remap="7.15"/></link></label>
<caption><para>Transient processes with modified NN control (&#x003B1; =0.001, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN0015.jpg"/>
</fig>
<fig id="F7-16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-16">Figure <xref linkend="F7-16" remap="7.16"/></link></label>
<caption><para>Forces and Torque with modified NN control (&#x003B1; = 0.001, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN0016.jpg"/>
</fig>
<fig id="F7-17" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-17">Figure <xref linkend="F7-17" remap="7.17"/></link></label>
<caption><para>Examples of hidden layer weight coefficients evolution (&#x003B1; =0.001, &#x003B3; =200).</para></caption>
<graphic xlink:href="graphics/ch07_figN0017.jpg"/>
</fig>
<fig id="F7-18" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-18">Figure <xref linkend="F7-18" remap="7.18"/></link></label>
<caption><para>Examples of output layer weight coefficients evolution (&#x003B1; =0.001, &#x003B3; = 200).</para></caption>
<graphic xlink:href="graphics/ch07_figN0018.jpg"/>
</fig>
</section>
<section class="lev1" id="sec7-6">
<title>7.6 Conclusions</title>
<para>An approach to designing an intelligent NN controller for underwater robots, and to deriving its learning algorithm on the basis of the speed gradient method, is proposed and studied in this chapter. The numerical experiments have shown that high-quality processes can be achieved with the proposed intelligent NN control. In the case study of building a UR control system, the NN learning procedure makes it possible to overcome the parametric and partial structural uncertainty of the dynamical object. Combining the neural network approach with a control designed using the nominal model of the underwater robot dynamics simplifies the control system implementation and improves the quality of the transient processes.</para>
</section>
<section class="lev1" id="sec7-7">
<title>Acknowledgement</title>
<para>The work of A. Dyda and D. Oskin was supported by the Ministry of Science and Education of the Russian Federation, State Contract No. 02G25.31.0025.</para>
</section>
<section class="lev1" id="sec7-8">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>A. A. Dyda, &#x02018;Adaptive and Neural Network Control for Complex Dynamical Objects&#x02019;, Vladivostok: Dalnauka, 2007, 149 pp. (in Russian).</para></listitem>
<listitem>
<para>A. A. Dyda, D. A. Oskin, &#x02018;Neural network control system for underwater robots&#x02019;, IFAC Conference on Control Applications in Marine Systems &#x0201C;CAMS 2004&#x0201D;, Ancona, Italy, 2004, pp. 427&#x02013;432.</para></listitem>
<listitem>
<para>T. I. Fossen, &#x02018;Marine Control Systems: Guidance, Navigation and Control of Ships, Rigs and Underwater Vehicles&#x02019;, Marine Cybernetics AS, Trondheim, Norway, 2002.</para></listitem>
<listitem>
<para>A. A. Fradkov, &#x02018;Adaptive Control in Large-Scale Systems&#x02019;, Moscow: Nauka, 1990 (in Russian).</para></listitem>
<listitem>
<para>K. S. Narendra, K. Parthasarathy, &#x02018;Identification and control of dynamical systems using neural networks&#x02019;, IEEE Transactions on Neural Networks, vol. 1, no. 1, 1990, pp. 4&#x02013;27.</para></listitem>
<listitem>
<para>A. Ross, T. Fossen and A. Johansen, &#x02018;Identification of underwater vehicle hydrodynamic coefficients using free decay tests&#x02019;, Preprints of the Int. Conf. CAMS 2004, Ancona, Italy, 2004, pp. 363&#x02013;368.</para></listitem>
<listitem>
<para>R. Sutton and A. A. Craven, &#x02018;An on-line intelligent multi-input multi-output autopilot design study&#x02019;, Journal of Engineering for the Maritime Environment, vol. 216, no. M2, 2002, pp. 117&#x02013;131.</para></listitem>
<listitem>
<para>J. Yuh, &#x02018;Modeling and control of underwater robotic vehicles&#x02019;, IEEE Transactions on Systems, Man and Cybernetics, vol. 20, no. 6, pp. 1475&#x02013;1483, Nov/Dec 1990. doi: 10.1109/21.61218</para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch08" label="8" xreflabel="8">
<title>Advanced Trends in Design of Slip Displacement Sensors for Intelligent Robots</title>
<para><emphasis role="strong">Y. P. Kondratenko<superscript><emphasis role="strong">1</emphasis></superscript> and V. Y. Kondratenko<superscript><emphasis role="strong">2</emphasis></superscript></emphasis></para>
<para><superscript>1</superscript>Petro Mohyla Black Sea State University, Ukraine</para>
<para><superscript>2</superscript>University of Colorado Denver, USA</para>
<para>Corresponding author: Y. P. Kondratenko &lt;y_kondrat2002@yahoo.com&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>This chapter discusses advanced trends in the design of modern tactile sensors and sensor systems for intelligent robots. The main focus is the detection of slip displacement signals corresponding to object slippage between the fingers of the robot&#x02019;s gripper.</para>
<para>It provides information on three approaches to using slip displacement signals, in particular for the correction of the clamping force, the identification of the manipulated object&#x02019;s mass and the correction of the robot control algorithm. The study presents an analysis of different methods for the detection of slip displacement signals, as well as new sensor schemes, mathematical models and correction methods. Special attention is paid to investigations of sensors developed by the authors with capacitive and magnetic sensitive elements and automatic adjustment of the clamping force. New research results on the determination of the object slippage direction based on multi-component capacitive sensors are considered for the case when the robot&#x02019;s gripper collides with the manipulated object.</para>
<para><emphasis role="strong">Keywords:</emphasis> slip displacement, tactile sensor, gripper, intelligent robot, model, information processing</para>
</section>
<section class="lev1" id="sec8-1">
<title>8.1 Introduction</title>
<para>Modern intelligent robots possess highly dynamic characteristics and function effectively under a particular set of conditions. The robot control problem is more complex in uncertain environments, as robots usually lack flexibility. Supplying robots with effective sensor systems provides essential extensions of their functional and technological feasibility [11]. For example, a robot often encounters the problem of gripping and holding the <emphasis>i</emphasis>-th object during manipulation processes with the required clamping force <emphasis>F<subscript>i</subscript><superscript>r</superscript></emphasis>, avoiding its deformation or mechanical damage, <emphasis>i</emphasis> = 1<emphasis>...n</emphasis>. To successfully solve such tasks, robots should possess the capability to recognize objects by means of their own sensory systems. Besides, in some cases, the main parameter by which a robot can distinguish objects of the same geometric shape is their mass <emphasis>m<subscript>i</subscript></emphasis> (<emphasis>i</emphasis> = 1<emphasis>...n</emphasis>). The robot sensor system should identify the mass <emphasis>m<subscript>i</subscript></emphasis> of each <emphasis>i</emphasis>-th manipulated object in order to identify the class (set) the object belongs to.</para>
<para>The sensor system should develop the required clamping force <emphasis>F<subscript>i</subscript><superscript>r</superscript></emphasis> corresponding to the mass value <emphasis>m<subscript>i</subscript></emphasis>, as <emphasis>F<subscript>i</subscript><superscript>r</superscript></emphasis> = <emphasis>f</emphasis> (<emphasis>m<subscript>i</subscript></emphasis>). Such data may be applied when the robot functions in dynamic or random environments, for example, when the robot must identify unknown parameters of an object of any type and location within its working zone. A visual sensor system cannot always be utilized, in particular under poor vision conditions. Furthermore, when the robot manipulates an object of variable mass <emphasis>m<subscript>i</subscript></emphasis>(<emphasis>t</emphasis>), its sensor system should provide the appropriate change of the clamping force value <emphasis>F<subscript>i</subscript><superscript>r</superscript></emphasis> (<emphasis>t</emphasis>) = <emphasis>f</emphasis> [<emphasis>m<subscript>i</subscript></emphasis> (<emphasis>t</emphasis>)] for the gripper fingers. This information can also be used for robot control algorithm correction, since the mass of the robot arm&#x02019;s last link and its total moment of inertia vary.</para>
</section>
<section class="lev1" id="sec8-2">
<title>8.2 Analysis of Robot Task Solving Based on Slip Displacement Signals Detection</title>
<para>One of the current approaches to solving the problem of identifying the mass <emphasis>m<subscript>i</subscript></emphasis> of grasped objects and producing the required clamping force <emphasis>F<subscript>i</subscript><superscript>r</superscript></emphasis> is the development of tactile sensor systems based on the registration of object slippage between the gripper fingers [1, 11, 17, 18, 20, 22] (<link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link>).</para>
<para>Usually, slippage signal detection in robotic systems is accomplished either with trial motions or in a regime of continuous lifting of the robot arm. In some cases, it is necessary to make a series of trial motions (<link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link>) for creating the required clamping forces <emphasis>F<subscript>ob</subscript></emphasis> or <emphasis>F<subscript>ie</subscript> = kF<subscript>ob</subscript>,</emphasis> where <emphasis>k ></emphasis> 1 is a coefficient which determines the reliability of moving the object (by the robot arm) along the required path.</para>
<fig id="F8-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link></label>
<caption><para>Grasping and lifting an object with the robot&#x02019;s arm: Initial positions of the gripper fingers (1,2) and object (3) (a); Creating the required clamping force <emphasis>F<subscript>ob</subscript></emphasis> by the gripper fingers during object slippage in the lifting process (b).</para></caption>
<graphic xlink:href="graphics/ch08_fig001.jpg"/>
</fig>
<para>According to <link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link>, the robot creates a minimal value of the clamping force <emphasis>F<subscript>min</subscript></emphasis> at time moment <emphasis>t<subscript>1</subscript></emphasis>. Then, step by step, the robot lifts the object a vertical distance &#x00394;<emphasis>l</emphasis>, and the gripper increases the clamping force (<emphasis>F</emphasis> (<emphasis>t</emphasis><subscript>1</subscript>) + &#x00394;<emphasis>F</emphasis>) whenever a slip displacement signal appears. The grasping surface of the object is limited by the value <emphasis>l<subscript>max</subscript></emphasis>. The first series of trial motions is finished at time moment <emphasis>t<subscript>2</subscript></emphasis>, when <emphasis>l</emphasis> = <emphasis>l</emphasis><subscript>max</subscript> (<link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link>(b)). After that, the robot decompresses the fingers (<emphasis>t</emphasis><subscript>2</subscript><emphasis>...t</emphasis><subscript>3</subscript>), moves the gripper (<emphasis>t</emphasis><subscript>3</subscript><emphasis>...t</emphasis><subscript>4</subscript>) to the initial position (<link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link>(a)) and creates (<emphasis>t</emphasis><subscript>4</subscript><emphasis>...t</emphasis><subscript>5</subscript>) the initial value of the clamping force <emphasis>F(t<subscript>5</subscript>) = F (t<subscript>2</subscript>) + &#x00394;F = F<subscript>1</subscript></emphasis> for the beginning of the second series of trial motions.</para>
<para>Some sensor systems based on slip displacement sensors were considered in [24, 25], but random robot environments very often require the development of new robot sensors and sensor systems for increasing the speed of operations, the positioning accuracy or the desired path-following precision.</para>
<para>Thus, the task of registering slippage signals of manipulated objects between the robot fingers is connected with: a) the creation of a clamping force adequate to the object&#x02019;s mass; b) the recognition of objects; c) robot control algorithm correction.</para>
<para>The trial motion regime comprises an iterative increase of the clamping force value whenever the slippage signal is detected. The continuous lifting regime provides a simultaneous object lifting process with increasing clamping force until the slippage signal disappears. The choice of the slip displacement data acquisition method depends on the robot&#x02019;s purpose, the salient features of its operating environment, the requirements on its response speed and the performance in terms of error probability.</para>
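<para>The trial-motion logic (cf. Figure 8.2) can be sketched as a simple loop; the slip model here is a deliberate simplification introduced for illustration (slippage is assumed to occur while the clamping force is below some unknown required force):</para>

```python
# Sketch of one series of trial motions: starting from a minimal clamping
# force, the gripper lifts the object step by step and increments the
# force by d_f whenever slippage is detected, until either slippage stops
# or the grasping-surface limit (n_steps lifts of dl) is reached.
# Hypothetical slip model: slip occurs while f is below f_required.
def trial_motion(f_min, d_f, n_steps, f_required):
    f = f_min
    for _ in range(n_steps):      # each pass lifts the object by dl
        if f < f_required:        # slip displacement signal detected
            f += d_f              # increase the clamping force
        else:
            return f, True        # object held without slippage
    return f, False               # l_max reached: restart the series

f, held = trial_motion(f_min=1.0, d_f=0.5, n_steps=10, f_required=3.0)
print(f, held)   # the force settles at the first value holding the object
```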
<para><link linkend="F8-3">Figure <xref linkend="F8-3" remap="8.3"/></link> illustrates the main tasks in robotics which can be solved based on slip displacement signal detection.</para>
<fig id="F8-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link></label>
<caption><para>Series of trial motions with increasing clamping force <emphasis>F</emphasis> of gripper fingers based on object slippage.</para></caption>
<graphic xlink:href="graphics/ch08_fig002.jpg"/>
</fig>
<fig id="F8-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-3">Figure <xref linkend="F8-3" remap="8.3"/></link></label>
<caption><para>The algorithm for solving different robot tasks based on slip signal detection.</para></caption>
<graphic xlink:href="graphics/ch08_fig003.jpg"/>
</fig>
</section>
<section class="lev1" id="sec8-3">
<title>8.3 Analysis of Methods for Slip Displacement Sensors Design</title>
<para>Let&#x02019;s consider the main methods of slip displacement data acquisition, in particular [3, 4, 10, 11, 14, 17, 18, 20, 22]:</para>
<para><emphasis>The vibration detection method.</emphasis> This method is based on detecting vibrations in the sensing element when the object is slipping. To implement it, the following sensing elements may be adopted: a sapphire needle interacting with a crystal receiver, or a rod with a steel ball connected to an electromagnetic vibrator.</para>
<para><emphasis>The pressure re-distribution detection method.</emphasis> The method relies on detecting a change in the pressure distribution between the gripper fingers during object slippage and is inspired by the physiological sensitivity of human skin: the pressure transducers serve as nerves and are surrounded by an elastic substance, as in the human body.</para>
<para><emphasis>The rolling motion detection method.</emphasis> The method transduces the vertical displacement of the slipping object into the rolling motion of a sensitive element. A slip displacement signal is detected through the rolling of a cylindrical roller with an elastic covering and a large friction coefficient. The roller&#x02019;s rolling motions may be converted into an electric signal by photoelectric or magnetic transducers; in the latter case, a permanent magnet is mounted on the movable roller and a magnetic head is placed on the gripper.</para>
<para><emphasis>The impact-sliding vibrations detection method.</emphasis> The core of the method is the detection of liquid impact-sliding vibrations when the object is slipping. The slip displacement sensor realizing this method uses an acrylic disk with cylindrical holes. A rubber gasket in the form of a membrane protects one end of the disk, and a pressure gauge is installed on the other end. The hole is filled with water so that its pressure slightly exceeds atmospheric pressure. When the object moves while in contact with the membrane, impact-sliding vibrations appear and induce impulse changes superimposed on the static water pressure.</para>
<para><emphasis>The acceleration detection method.</emphasis> This method is based on measuring the accelerations of the sensing element motion by separating them from the absolute acceleration signal. A slip displacement sensor comprising two accelerometers can be used in this case. One accelerometer senses the absolute acceleration of the gripper; the other responds to the acceleration of the sprung sensitive plate when the part is slipping. The sensor is attached to a computer that identifies the slip displacement signal by comparing the output signals of both accelerometers.</para>
<para><emphasis>The interference pattern change detection method.</emphasis> This method converts the intensity changes of the interference pattern reflected from the moving surface. The intensity variation of the interference pattern is converted to a numerical code and the auto-correlation function is computed; it reaches its peak when the slip displacement disappears.</para>
<para><emphasis>The configuration change detection in the sensitive elements method.</emphasis> The essence of the method is the measurement of the parameters that vary when the configuration of the elastic sensitive element changes. Sensitive elements made of conductive rubber coat the object surface protruding above the gripper before the trial motion. When the object displaces relative to the gripper, the configuration and hence the electrical resistance of the sensitive elements change accordingly, confirming the existence of slippage.</para>
<para><emphasis>The data acquisition by means of the photoelastic effect method.</emphasis> An instance of this method is a transducer in which, under the applied effort, the deformation of a sensitive skin produces a voltage in the photoelastic system. Object slippage changes the deformation of the sensitive skin, which is registered by the electronic visual system. The photosensitive transducer transforms interference patterns into a numerical signal. The obtained image is binary, each pixel giving one bit of information; the binary representation of each pixel reduces the processing time.</para>
<para><emphasis>The data acquisition based on friction detection method.</emphasis> The method detects the moment when the friction between the gripper fingers and the grasped object transitions from static friction to dynamic friction.</para>
<para><emphasis>Method of fixing the sensitive elements on the object.</emphasis> The method is based on fixing the sensitive elements on the surface of the manipulated objects before the trial motions with the subsequent monitoring of their displacement relative to the gripper at slipping.</para>
<para><emphasis>Method based on recording oscillatory circuit parameters.</emphasis> The method is based on a change in the oscillatory circuit inductance while the object slips. It is implemented by an inductive slip sensor with a movable core, a stationary excitation winding and a solenoid winding forming one of the oscillatory circuit branches. The core can move inside the solenoid winding: it lowers under its own weight from the gripper center onto the object to be grasped, and the reduction of the solenoid winding voltage indicates the lowering process. The oscillatory circuit undergoes forced oscillations whose frequency coincides with the excitation frequency of the excitation winding.</para>
<para><emphasis>The video signal detection method.</emphasis> This method uses a change in the detection and ranging of patterns or video pictures as an indication of object slippage. Slip displacement is detected by means of location sensors or visual sensors based either on a laser source with separated incident and reflected beams, or on a vision system with a non-coherent bundle of light conductors for illumination and a coherent bundle for image transmission.</para>
<para>The choice of a slip displacement detection method involves a multicriterion approach taking into account the complexity of implementation, the bounds of functional capabilities, mass and overall dimensions, reliability and cost.</para>
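<para>The multicriterion choice can be sketched as a weighted-sum ranking; all weights and scores below are purely illustrative assumptions, not values from the text.</para>

```python
# Rank candidate slip-detection methods by a weighted sum of normalized
# criteria (higher score is better). Weights and scores are hypothetical.
criteria = ["complexity", "capability", "mass_size", "reliability", "cost"]
weights = {"complexity": 0.2, "capability": 0.3, "mass_size": 0.1,
           "reliability": 0.25, "cost": 0.15}
methods = {
    "vibration":  {"complexity": 0.8, "capability": 0.5, "mass_size": 0.7,
                   "reliability": 0.6, "cost": 0.8},
    "capacitive": {"complexity": 0.6, "capability": 0.8, "mass_size": 0.8,
                   "reliability": 0.7, "cost": 0.7},
    "video":      {"complexity": 0.3, "capability": 0.9, "mass_size": 0.4,
                   "reliability": 0.8, "cost": 0.3},
}

def score(method):
    """Weighted sum over all criteria for one candidate method."""
    return sum(weights[c] * methods[method][c] for c in criteria)

best = max(methods, key=score)
print(best)  # capacitive
```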
</section>
<section class="lev1" id="sec8-4">
<title>8.4 Mathematical Model of Magnetic Slip Displacement Sensor</title>
<section class="lev2" id="sec8-4-1">
<title>8.4.1 SDS Based on &#x0201C;Permanent Magnet/Hall Sensor&#x0201D; Sensitive Element and Its Mathematical Model</title>
<para>In this chapter, the authors consider several ways of updating the measurement systems. To increase the noise immunity of the vibration measurement method, a modified method has been developed. It is founded on the measurement of the angular deviation of the sensitive element occurring at object slippage (<link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link>).</para>
<para>Let&#x02019;s consider the structure and mathematical model (MM) of the SDS developed by the authors with a magnetic sensitive element which can detect the bar&#x02019;s angular deviation appearing at object slippage (<link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link>). The dependence <emphasis>U</emphasis> <emphasis role="strong"><emphasis>=</emphasis></emphasis><emphasis>f</emphasis><emphasis role="strong">(&#x003B1;)</emphasis> can be used to determine the sensitivity of the SDS and the minimal possible amplitudes of the robot trial motions.</para>
<para>To construct the mathematical models, consider a magnetic system comprising a prismatic magnet with dimensions <emphasis>c &#x000D7; d &#x000D7; l,</emphasis> which is set on a ferromagnetic plane with infinite permeability <emphasis role="strong"><emphasis>&#x003BC; = &#x0221E;</emphasis></emphasis> (<link linkend="F8-5">Figure <xref linkend="F8-5" remap="8.5"/></link>), where <emphasis>c</emphasis> is the width, <emphasis>d</emphasis> the length and <emphasis>l</emphasis> the height of the magnet <emphasis>(d >> l).</emphasis> The point <emphasis>P(X<subscript>P</subscript>, Y<subscript>P</subscript>)</emphasis> is the observation point, which is located on the vertical axis and can change its position relative to the horizontal axis <emphasis>Ox</emphasis> or vertical axis <emphasis>Oy.</emphasis></para>
<fig id="F8-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link></label>
<caption><para>Magnetic SDS: 1&#x02013; Rod; 2&#x02013; Head; 3&#x02013; Permanent magnet; 4&#x02013; Hall sensor.</para></caption>
<graphic xlink:href="graphics/ch08_fig004.jpg"/>
</fig>
<para>A Hall sensor with a linear static characteristic is located at the observation point <emphasis>P.</emphasis> Let&#x02019;s form the mathematical model for the determination of the magnetic induction <emphasis>B</emphasis> and the output voltage <emphasis>U<subscript>out</subscript></emphasis> (<emphasis>P</emphasis>) of the Hall sensor in relation to an arbitrary position of the observation point <emphasis>P</emphasis> under the surface of the magnet.</para>
<fig id="F8-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-5">Figure <xref linkend="F8-5" remap="8.5"/></link></label>
<caption><para>Model of magnetic sensitive element.</para></caption>
<graphic xlink:href="graphics/ch08_fig005.jpg"/>
</fig>
<para>The value of magnetic induction (outside the magnet volume) is <emphasis><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inb.jpg"/> = &#x003BC;</emphasis><subscript>0</subscript><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inh.jpg"/>, where <emphasis>&#x003BC;</emphasis><subscript>0</subscript> is a magnetic constant; <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inh.jpg"/> is the magnetic field strength vector.</para>
<para>Inside the magnet, the magnetic induction value is determined by the dependence <emphasis><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inb.jpg"/> = &#x003BC;</emphasis><subscript>0</subscript><emphasis>(<inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inj.jpg"/> + <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/inh.jpg"/>),</emphasis> where <emphasis>J</emphasis> is the magnetization value.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg175.jpg"/></para>
<para>where &#x003C7; is a magnetic susceptibility; <emphasis>J<subscript>0</subscript></emphasis> is the residual magnetization value.</para>
<para>The permanent magnet can be represented [26-28] as a simulation model of the surface magnetic charges that are evenly distributed across the magnet pole faces with the surface density <emphasis>J<subscript>T</subscript>.</emphasis></para>
<para>Thus, a <emphasis>y-</emphasis> component of the magnetic field strength <emphasis>H<subscript>y</subscript></emphasis> of the magnetic charges can be calculated as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq8-1.jpg"/></para>
<para>and the <emphasis>y-</emphasis> component of magnetic induction <emphasis>B<subscript>y</subscript></emphasis> can be presented as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg176.jpg"/></para>
<para>To determine the parameter <emphasis>J<subscript>T</subscript>,</emphasis> it is necessary to measure the induction at the center of the pole face <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in176.jpg"/>. The value <emphasis>(y = l</emphasis>+) indicates that the measurement of <emphasis>B<subscript>mes</subscript></emphasis> is conducted outside the volume of the magnet. By virtue of the continuity of the magnetic flux and of the lines of magnetic induction, the value of magnetic induction at a point with the same coordinates on the inside of the pole face can be considered equal to the value on the outside, namely:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg176-1.jpg"/></para>
<para>So, we can write: <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in176-1.jpg"/>, where <emphasis>B<subscript>mes</subscript></emphasis> is the value of magnetic induction measured at the geometric center of the top pole faces of the prismatic magnet.</para>
<para>On the basis of (8.1), we obtain:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg176-2.jpg"/></para>
<para>For the <emphasis>y-</emphasis> component of the magnetic induction <emphasis>B<subscript>y</subscript></emphasis> <emphasis role="strong">(</emphasis><emphasis>P</emphasis><emphasis role="strong">)</emphasis> at the observation point <emphasis>P,</emphasis> the following expression was obtained:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq8-2.jpg"/></para>
</section>
<section class="lev2" id="sec8-4-2">
<title>8.4.2 Simulation Results</title>
<para>For the analysis of the obtained mathematical model (8.2), let&#x02019;s calculate the value of magnetic induction on the surface of a Barium Ferrite magnet with parameters <emphasis>c</emphasis> = 0.02 m, <emphasis>d</emphasis> = 0.08 m, <emphasis>l</emphasis> = 0.014 m and a magnetic induction <emphasis>B<subscript>mes</subscript></emphasis> <emphasis role="strong"><emphasis>=</emphasis></emphasis> 40 mT (measured at the geometric center of the upper face of the magnet).</para>
<para>The simulation results for the magnetic induction are represented as <emphasis>B<subscript>y</subscript> = f<subscript>i</subscript>(X<subscript>P</subscript>), i</emphasis> <emphasis role="strong"><emphasis>=</emphasis></emphasis> 1<emphasis>,</emphasis> 2<emphasis>,</emphasis> 3 above the magnet for different values of the height <emphasis>Y<subscript>P</subscript></emphasis> of the observation point <emphasis>P (X<subscript>P</subscript>, Y<subscript>P</subscript>),</emphasis> where:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg177.jpg"/></para>
<para>As can be seen from <link linkend="F8-6">Figure <xref linkend="F8-6" remap="8.6"/></link>, the magnetic induction <emphasis>B<subscript>y</subscript> = f<subscript>1</subscript> (X<subscript>P</subscript>)</emphasis> above the surface of the magnet is practically constant for the coordinate <emphasis>X<subscript>P</subscript></emphasis> &#x02208; [<emphasis>&#x02013;</emphasis>5; 5] mm, which is half of the corresponding size of the magnet. If the distance between the observation point <emphasis>P</emphasis> and the magnet increases (<emphasis>f<subscript>2</subscript> (X<subscript>P</subscript>)</emphasis>, <emphasis>f<subscript>3</subscript> (X<subscript>P</subscript>)</emphasis> in <link linkend="F8-6">Figure <xref linkend="F8-6" remap="8.6"/></link>), the curve shape becomes more gentle, with a pronounced peak above the geometric center of the top pole face of the prismatic magnet (at the point <emphasis>X<subscript>P</subscript> = 0</emphasis>).</para>
<para>For the Hall sensor (<link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link>) in the general case, the dependence of the output voltage <emphasis>U<subscript>out</subscript></emphasis> (<emphasis>P</emphasis>) on the magnitude of the magnetic induction <emphasis>B<subscript>y</subscript></emphasis> is defined as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq8-3.jpg"/></para>
<para>where <emphasis>k</emphasis> is the correction factor, that depends on the type of Hall sensor; <emphasis>U<subscript>c</subscript></emphasis> is a constant component of the Hall sensor output voltage.</para>
<fig id="F8-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-6">Figure <xref linkend="F8-6" remap="8.6"/></link></label>
<caption><para>Simulation results for <emphasis>B<subscript>y</subscript></emphasis> (<emphasis>P</emphasis>) based on the mathematical model (8.2).</para></caption>
<graphic xlink:href="graphics/ch08_fig006.jpg"/>
</fig>
<para>For Hall sensors with a linear dependence of the output signal <emphasis>U<subscript>out</subscript></emphasis> on the magnetic induction <emphasis>B<subscript>y</subscript></emphasis>, <emphasis>k = const</emphasis>; for a nonlinear dependence (8.3), <emphasis>k = f&#x0007B;B<subscript>y</subscript></emphasis> (<emphasis>P</emphasis>)<emphasis>&#x0007D;</emphasis>. For the Hall sensor SS490 (Honeywell) with linear dependence (8.3), the parameter values are <emphasis>U<subscript>c</subscript> =</emphasis> 2<emphasis>.</emphasis>5 <emphasis>V</emphasis> and <emphasis>k =</emphasis> 0<emphasis>.</emphasis>032 (according to the static characteristic of the Hall sensor). The authors present the mathematical model of the Hall sensor output voltage <emphasis>U<subscript>out</subscript> (Y<subscript>P</subscript>)</emphasis> at its vertical displacement above the geometric center of the top pole face of the magnet <emphasis>(X<subscript>P</subscript> =</emphasis> 0):</para>
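<para>The linear case of (8.3) can be sketched directly from the SS490 parameters given above; the assumption that <emphasis>B<subscript>y</subscript></emphasis> is expressed in millitesla follows the units used in this section and should be checked against the sensor datasheet.</para>

```python
def hall_output(b_y_mT, k=0.032, u_c=2.5):
    """Linear Hall sensor model: U_out = U_c + k * B_y.

    U_c = 2.5 V and k = 0.032 are the SS490 values quoted in the text;
    B_y is assumed here to be in millitesla.
    """
    return u_c + k * b_y_mT

# Output at the pole-face center, where B_mes = 40 mT:
print(hall_output(40.0))
```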
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq8-4.jpg"/></para>
<para>The comparative results for dependences <emphasis>U<subscript>out</subscript> (Y<subscript>p</subscript>), U<subscript>E</subscript> (Y<subscript>p</subscript>)</emphasis> and <emphasis>U<subscript>R</subscript>(Y<subscript>p</subscript>)</emphasis> are presented in <link linkend="F8-7">Figure <xref linkend="F8-7" remap="8.7"/></link>, where <emphasis>U<subscript>out</subscript> (Y <subscript>p</subscript>)</emphasis> was calculated using MM (8.4), <emphasis>U<subscript>E</subscript> (Y<subscript>p</subscript>)</emphasis> are the experimental results according to [7] and <emphasis>U<subscript>R</subscript> (Y<subscript>p</subscript>)</emphasis> is a nonlinear regressive model according to [8].</para>
<fig id="F8-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-7">Figure <xref linkend="F8-7" remap="8.7"/></link></label>
<caption><para>Comparative analysis of modeling and experimental results.</para></caption>
<graphic xlink:href="graphics/ch08_fig007.jpg"/>
</fig>
<para>The comparative analysis (<link linkend="F8-7">Figure <xref linkend="F8-7" remap="8.7"/></link>) of the developed mathematical model <emphasis>U<subscript>out</subscript> (Y<subscript>p</subscript>)</emphasis> with the experimental results <emphasis>U<subscript>E</subscript> (Y<subscript>p</subscript>)</emphasis> confirms the correctness and adequacy of the synthesized models (8.1)&#x02013;(8.4).</para>
</section>
</section>
<section class="lev1" id="sec8-5">
<title>8.5 Advanced Approaches for Increasing the Efficiency<break/>of Slip Displacement Sensors</title>
<para>This study presents a number of sensors for data acquisition in real time [5, 11, 13, 14]. The need for rigid gripper orientation before a trial motion led to the development of a slip sensor based on a cylindrical roller with a load, which has two degrees of freedom [3, 14].</para>
<para>The sensitive element of the new sensor developed by the authors has the form of a ball (<link linkend="F8-8">Figure <xref linkend="F8-8" remap="8.8"/></link>) with light-reflecting sections disposed in a staggered order (<link linkend="F8-9">Figure <xref linkend="F8-9" remap="8.9"/></link>), thus providing slippage detection by the photo-method.</para>
<fig id="F8-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-8">Figure <xref linkend="F8-8" remap="8.8"/></link></label>
<caption><para>The ball as sensitive element of SDS: 1&#x02013; Finger of robot&#x02019;s gripper; 2&#x02013; Cavity for SDS installation; 3&#x02013; Guides; 4&#x02013; Sensitive element (<emphasis>a</emphasis> ball); 5&#x02013; Spring; 6&#x02013; Conductive rubber; 7, 8&#x02013; Fiber optic light guides; 9&#x02013; <emphasis>a</emphasis> Sleeve; 10&#x02013; Light; 11&#x02013; Photodetector; 13&#x02013;Cover; 14&#x02013;Screw; 15&#x02013;Hole.</para></caption>
<graphic xlink:href="graphics/ch08_fig008.jpg"/>
</fig>
<fig id="F8-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-9">Figure <xref linkend="F8-9" remap="8.9"/></link></label>
<caption><para>Light-reflecting surface of the sensitive ball with reflecting and absorbing portions (12) for light signal.</para></caption>
<graphic xlink:href="graphics/ch08_fig009.jpg"/>
</fig>
<para>The ball is arranged in the sensor&#x02019;s space through spring-loaded slides. Each slide is connected to the surface of the gripper&#x02019;s space by an elastic element made of conductive rubber.</para>
<para>In another modification of the slip sensor with the ball acting as a sensitive element, the ball motion is guided by friction wheels and is measured with the aid of incremental transducers. The ball contacts the object through the hole. In this case, the ball is located in a cavity of compressed air dispensed through the hole.</para>
<para>For the detection of the sensitive element angular deviation during object slippage in any direction, the authors propose [12] a slip displacement sensor with a measurement of changeable capacitance (<link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link>).</para>
<fig id="F8-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link></label>
<caption><para>Capacitive SDS for the detection of object slippage in different directions: 1&#x02013; Main cavity of robot&#x02019;s gripper; 2&#x02013; Additional cavity; 3&#x02013; Gripper&#x02019;s finger; 4&#x02013; Rod; 5&#x02013; Tip; 6&#x02013; Elastic working surface; 7&#x02013; Spring; 8&#x02013; Resilient element; 9, 10&#x02013; Capacitor plates.</para></caption>
<graphic xlink:href="graphics/ch08_fig0010.jpg"/>
</fig>
<fig id="F8-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-11">Figure <xref linkend="F8-11" remap="8.11"/></link></label>
<caption><para>Intelligent sensor system for identification of object slippage direction: 3&#x02013; Gripper&#x02019;s finger; 4&#x02013; Rod; 9, 10&#x02013; Capacitor plates; 11&#x02013; Converter &#x0201C;capacitance-voltage&#x0201D;; 12&#x02013; Delay element; 13, 18, 23&#x02013; Adders; 14, 15, 21, 26&#x02013; Threshold elements; 16&#x02013; Multi-Inputs element OR; 17&#x02013; Computer information-control system; 19, 20, 24, 25&#x02013; Channels for sensor information processing; 22, 27&#x02013; Elements NOT; 28&#x02013;39&#x02013; Elements AND.</para></caption>
<graphic xlink:href="graphics/ch08_fig0011.jpg"/>
</fig>
<para>The structure of the intelligent sensor system developed by the authors, which can detect the direction of object slippage based on the capacitive SDS (<link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link>), is represented in <link linkend="F8-11">Figure <xref linkend="F8-11" remap="8.11"/></link>. It comprises channels of information processing and electronic units that implement the base of production rules identifying the direction of object displacement in the gripper (e.g., after a collision with an obstacle).</para>
<para>The SDS is placed on at least one of the gripper fingers (<link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link>). The recording element consists of four capacitors distributed across the conical surface of the additional cavity (2). One plate (9) of each capacitor is located on the surface of the rod (4) and the second plate (10) is placed on the inner surface of the cavity (2).</para>
<para>The intelligent sensor system (<link linkend="F8-11">Figure <xref linkend="F8-11" remap="8.11"/></link>) provides identification of signals corresponding to the object slippage direction <emphasis>&#x0007B;N, NE, E, SE, S, SW, W, NW&#x0007D;</emphasis> in the gripper when the gripper contacts obstacles.</para>
<para>The implementation of relation (<link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link>)</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg182.jpg"/></para>
<para>allows to increase the sensitivity of developed sensor system.</para>
<para>Initially (<link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link>), the tip (5) is held above the surface of the gripper&#x02019;s finger (3) by a spring (7), and a resilient element (8) holds a rod (4) in such a position that its longitudinal axis is perpendicular to the surface of the finger and coincides with the axis MN of the rod (4). When gripping a manipulation object, its surface comes in contact with the tip (5), the spring (7) is compressed and the tip (5) is immersed in a cavity (2). At this moment, the value of compressive force corresponds to the minimal pre-determined value <emphasis>F<subscript>min</subscript></emphasis> that eliminates distortion or damage of the object.</para>
<para>The object begins to slip in the gripper if, during the trial motion, the compressive force is insufficient for the weight of the object. In this case, the rod (4) deflects in the sliding direction by the angle <emphasis role="strong"><emphasis>&#x003B1;</emphasis></emphasis> as a result of the friction forces between the object&#x02019;s contacting surface and the working surface (6) of the tip (5).</para>
<para>Thus, the longitudinal axis of the rod (4) coincides with the axis M&#x02032;N&#x02032;. The mutual movement of plates (9) and (10) in all capacitive elements leads to changes of the capacities <emphasis>C</emphasis><subscript>1</subscript><emphasis>, C</emphasis><subscript>2</subscript><emphasis>, C</emphasis><subscript>3</subscript> and <emphasis>C</emphasis><subscript>4</subscript> depending on the direction of the rod&#x02019;s movement. The changes of the capacities lead to voltage changes at the outputs of the respective &#x0201C;capacitance-voltage&#x0201D; converters in all sensory processing channels.</para>
<para>The robot&#x02019;s gripper may occasionally collide with an obstacle when the robot moves a manipulation object in a dynamic environment along a preplanned trajectory, as obstacles can appear randomly in the working area of the intelligent robot. As a result of a collision between the robot gripper and an obstacle, object slippage may appear if the clamping force <emphasis>F</emphasis> is not sufficient for reliable fixation of the object between the gripper&#x02019;s fingers. The direction <emphasis>&#x0007B;N, NE, E, SE, S, SW, W, NW&#x0007D;</emphasis> of such slip displacement depends on the position of the obstacle on the desired trajectory.</para>
<para>In this case, the output signal of the OR element (16) equals 1; this is a command signal for the computer system (17), which constrains the implementation of the planned trajectory. At the same time, a logical 1 signal appears at the output of one of the AND elements <emphasis>&#x0007B;</emphasis>29, 30, 31, 33, 34, 35, 37, 39<emphasis>&#x0007D;</emphasis>, corresponding to one of the object&#x02019;s slippage directions <emphasis>&#x0007B;N, NE, E, SE, S, SW, W, NW&#x0007D;</emphasis> in the robot gripper.</para>
<para>Let&#x02019;s consider, for example, the determination of slippage direction <emphasis>&#x0007B;N&#x0007D;</emphasis> after the contact between the obstacle and robot gripper (with object). If the object slips in direction <emphasis>&#x0007B;N&#x0007D;</emphasis> (<link linkend="F8-11">Figure <xref linkend="F8-11" remap="8.11"/></link>), then:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>the capacity <emphasis>C</emphasis><subscript>1</subscript> in the first channel (19) increases and a logical 1 signal appears at the output of the threshold element (14) and at the first input of the AND element (28);</para></listitem>
<listitem>
<para>the capacity <emphasis>C</emphasis><subscript>3</subscript> in the third channel (20) decreases and a logical 1 signal appears at the output of threshold element (15), at the second input and output of AND element (28) and at the first input of the AND element (29);</para></listitem>
<listitem>
<para>the capacities <emphasis>C<subscript>2</subscript>, C<subscript>4</subscript></emphasis> of the second (24) and fourth (25) channels of the sensor information processing are equivalent (<emphasis>C</emphasis><subscript>2</subscript> = <emphasis>C</emphasis><subscript>4</subscript>); in this case a logical 0 signal appears at the outputs of the adder (23) and threshold element (26), while a logical 1 signal appears at the output of the NOT element (27), at the second input and output of the AND element (29) and at the second input of the computer information-control system (17). This means that the direction of the object&#x02019;s slippage is <emphasis>&#x0007B;N&#x0007D;,</emphasis> taking into account that the output signals of the AND elements <emphasis>&#x0007B;</emphasis>30, 31, 33, 34, 35, 37, 39<emphasis>&#x0007D;</emphasis> equal 0.</para></listitem></itemizedlist>
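<para>The threshold/AND logic for the <emphasis>&#x0007B;N&#x0007D;</emphasis> case described above can be sketched as follows; the signal names, the deviation threshold <emphasis>eps</emphasis> and the use of voltage differences are assumptions made for illustration.</para>

```python
def detect_north(d_u1, d_u3, u2, u4, eps=1e-3):
    """Report slippage direction {N}: C1 grows, C3 shrinks, C2 equals C4.

    d_u1, d_u3 are voltage changes in channels 1 and 3; u2, u4 are the
    current voltages of channels 2 and 4 (hypothetical interface).
    """
    ch1 = d_u1 > eps                 # threshold element (14): U1 increased
    ch3 = d_u3 < -eps                # threshold element (15): U3 decreased
    pair_eq = abs(u2 - u4) <= eps    # adder (23) + NOT (27): U2 = U4
    return ch1 and ch3 and pair_eq   # AND elements (28), (29)

print(detect_north(0.1, -0.1, 1.0, 1.0))  # True
```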
<table-wrap position="float" id="T8-1">
<label><link linkend="T8-1">Table <xref linkend="T8-1" remap="8.1"/></link></label>
<caption><para>The base of production rules &#x0201C;IF-THEN&#x0201D; for identification of the slip displacement direction</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="bottom" align="left">Number of</td>
<td valign="bottom" align="center" colspan="4">Antecedent</td>
<td valign="bottom" align="left">Consequent</td></tr>
<tr>
<td valign="bottom" align="left">Production Rule</td>
<td valign="bottom" align="center"><emphasis>U<subscript>1</subscript></emphasis></td>
<td valign="bottom" align="center"><emphasis>U<subscript>2</subscript></emphasis></td>
<td valign="bottom" align="center"><emphasis>U<subscript>3</subscript></emphasis></td>
<td valign="bottom" align="center"><emphasis>U<subscript>4</subscript></emphasis></td>
<td valign="bottom" align="left">Direction of Slippage</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="center">1</td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center">=</td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center">=</td>
<td valign="top" align="center"><emphasis>N</emphasis></td></tr>
<tr>
<td valign="top" align="center">2</td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>NE</emphasis></td>
</tr>
<tr>
<td valign="top" align="center">3</td>
<td valign="top" align="center">=</td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center">=</td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>E</emphasis></td>
</tr>
<tr>
<td valign="top" align="center">4</td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>SE</emphasis></td>
</tr>
<tr>
<td valign="top" align="center">5</td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center">=</td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center">=</td>
<td valign="top" align="center"><emphasis>S</emphasis></td>
</tr>
<tr>
<td valign="top" align="center">6</td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>SW</emphasis></td>
</tr>
<tr>
<td valign="top" align="center">7</td>
<td valign="top" align="center">=</td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center">=</td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>W</emphasis></td>
</tr>
<tr>
<td valign="top" align="center">8</td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>&#x0003C;</emphasis></td>
<td valign="top" align="center"><emphasis>></emphasis></td>
<td valign="top" align="center"><emphasis>NW</emphasis></td></tr>
</tbody>
</table>
</table-wrap>
<para>The base of &#x0201C;IF-THEN&#x0201D; production rules is represented in <link linkend="T8-1">Table <xref linkend="T8-1" remap="8.1"/></link>. This rule base determines the functional dependence between the direction of object slippage (<link linkend="F8-11">Figure <xref linkend="F8-11" remap="8.11"/></link>), the current state of each capacitor <emphasis>&#x0007B;C</emphasis><subscript>1</subscript><emphasis>, C</emphasis><subscript>2</subscript><emphasis>, C</emphasis><subscript>3</subscript><emphasis>, C</emphasis><subscript>4</subscript><emphasis>&#x0007D;</emphasis> and the corresponding output signals <emphasis>&#x0007B;U</emphasis><subscript>1</subscript><emphasis>, U</emphasis><subscript>2</subscript><emphasis>, U</emphasis><subscript>3</subscript><emphasis>, U</emphasis><subscript>4</subscript><emphasis>&#x0007D;</emphasis> of the multi-capacitor slip displacement sensor, where: <emphasis>U<subscript>i</subscript></emphasis> (<emphasis>i</emphasis> = 1<emphasis>...</emphasis>4) is the output signal of the <emphasis>i</emphasis>-th &#x0201C;capacitance&#x02013;voltage&#x0201D; converter (11); (<emphasis>></emphasis>) indicates that the corresponding signal <emphasis>U<subscript>i</subscript></emphasis> increases during the object slippage process; (<emphasis>&#x0003C;</emphasis>) indicates that the corresponding signal <emphasis>U<subscript>i</subscript></emphasis> decreases during the object slippage process; and (=) indicates pairwise equality <emphasis>U<subscript>i</subscript></emphasis> = <emphasis>U<subscript>j</subscript></emphasis> (<emphasis>i, j</emphasis> = 1<emphasis>...</emphasis>4, <emphasis>i</emphasis> &#x02260; <emphasis>j</emphasis>) in the antecedents of the production rules.</para>
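The eight rules of Table 8.1 can be sketched as a simple lookup, assuming the trend of each converter output has already been extracted as +1 (increasing, &#x0201C;>&#x0201D;), &#x02212;1 (decreasing, &#x0201C;&lt;&#x0201D;) or 0 (pairwise-equal, &#x0201C;=&#x0201D;); the function and table names below are illustrative, not the authors' implementation:

```python
# Sketch of the Table 8.1 rule base: the change pattern of the four
# converter outputs {U1, U2, U3, U4} is mapped to a slippage direction.
# Trends are encoded as +1 (">", increase), -1 ("<", decrease) and
# 0 ("=", pairwise-equal); names are illustrative.

RULES = {
    (+1,  0, -1,  0): "N",
    (+1, +1, -1, -1): "NE",
    ( 0, +1,  0, -1): "E",
    (-1, +1, +1, -1): "SE",
    (-1,  0, +1,  0): "S",
    (-1, -1, +1, +1): "SW",
    ( 0, -1,  0, +1): "W",
    (+1, -1, -1, +1): "NW",
}

def slip_direction(trends):
    """Return the slippage direction for a 4-tuple of signal trends,
    or None if the pattern matches no rule antecedent."""
    return RULES.get(tuple(trends))
```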
<para>The mathematical models of different types of slip displacement sensors with a measurement of changeable capacity are presented in [5, 19, 21, 23].</para>
</section>
<section class="lev1" id="sec8-6">
<title>8.6 Advances in Development of Smart Grippers for Intelligent Robots</title>
<section class="lev2" id="sec8-6-1">
<title>8.6.1 Self-Clamping Grippers of Intelligent Robots</title>
<para>The slip displacement signals, which are responsible for creating the required compressive force adequate to the object mass, provide the conditions for correcting the gripper trajectory-planning algorithm, which treats the object mass as a variable parameter to be identified [9]. The object mass is identified from the final value of the compressive force, recorded at the moment the slippage signal disappears. It is therefore extremely important to employ slip sensors with a fast response when the object mass changes during operation.</para>
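A minimal sketch of the mass identification step, assuming a Coulomb friction model rather than the authors' specific formula: if slippage stops at a final compressive force, the object weight is approximately balanced by the friction forces at the contacts. The friction coefficient, number of contacts and function name are all assumptions for illustration:

```python
# Illustrative mass estimate from the grip force recorded when slippage
# stops. Assumes Coulomb friction, a two-finger grip with friction
# coefficient mu at each contact, and vertical lifting, so that at the
# slipping threshold m * g ~= contacts * mu * F_final.

G = 9.81  # gravitational acceleration, m/s^2

def estimate_mass(f_final, mu, contacts=2):
    """Return the object mass estimated from the final compressive
    force f_final (N) at which the slippage signal disappeared."""
    return contacts * mu * f_final / G
```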
<para>In those cases where the main task of the sensing system is to compress the object without deforming or damaging it, it is expedient in future research to design advanced grippers of a self-clamping type (<link linkend="F8-12">Figure <xref linkend="F8-12" remap="8.12"/></link>), which dispense with a gripper drive for raising the compressive force (at slippage) up to the required value.</para>
<fig id="F8-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-12">Figure <xref linkend="F8-12" remap="8.12"/></link></label>
<caption><para>Self-adjusting gripper of an intelligent robot with angle movement of clamping rollers: 1, 2&#x02013; Finger; 3, 4&#x02013; Guide groove; 5, 6&#x02013; Roller; 7, 8&#x02013; Roller axis; 9, 15, 20&#x02013; Spring; 10&#x02013; Object; 11, 18&#x02013; Elastic working surface; 12&#x02013; Clamping force sensor; 13, 14&#x02013; Electroconductive contacts; 16, 19&#x02013; Fixator; 17&#x02013; Stock; 21&#x02013; Adjusting screw; 22&#x02013; Deepening; 23&#x02013; Finger&#x02019;s drive.</para></caption>
<graphic xlink:href="graphics/ch08_fig0012.jpg"/>
</fig>
<para>The information processing system of a self-adjusting gripper of an intelligent robot with angle movement of clamping rollers consists of: 24&#x02013; control unit; 25&#x02013; delay element; 26, 30, 36&#x02013; adder; 27, 32, 37, 41&#x02013; threshold element; 28, 29, 31, 33, 34, 35, 38, 40, 42&#x02013; switch; 39&#x02013; voltage source.</para>
<para>In such a gripper (<link linkend="F8-12">Figure <xref linkend="F8-12" remap="8.12"/></link>), the rollers have two degrees of freedom, so that during object slippage they perform a compound motion (rotation and translation). This gripper (<link linkend="F8-12">Figure <xref linkend="F8-12" remap="8.12"/></link>) is adaptive, since the object is self-clamped with a force adequate to its mass up to the moment the slippage disappears [6, 14].</para>
<para>Another example of a developed self-clamping gripper [15] is represented in <link linkend="F8-13">Figure <xref linkend="F8-13" remap="8.13"/></link>, where: 1&#x02013; finger; 2&#x02013; roller axis; 3&#x02013; roller; 4&#x02013; sector element; 5&#x02013; guide gear racks; 6&#x02013; pinion; 7&#x02013; travel bar; 8, 9&#x02013; axis; 10&#x02013; object; 11&#x02013;elastic working surface; 12, 13&#x02013; spring; 14, 15&#x02013; clamping force sensor; 16&#x02013;electroconductive contacts; 17, 18&#x02013; fixator; 19, 20&#x02013; pintle.</para>
<fig id="F8-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-13">Figure <xref linkend="F8-13" remap="8.13"/></link></label>
<caption><para>Self-clamping gripper of an intelligent robot with plane-parallel displacement of the clamping roller: Front view (a); Top view (b).</para></caption>
<graphic xlink:href="graphics/ch08_fig0013.jpg"/>
</fig>
<para>The experimental self-clamping gripper with plane-parallel displacement of the clamping rollers and the intelligent robot with 4 degrees of freedom used for experimental investigations of the developed grippers and slip displacement sensors are represented in <link linkend="F8-14">Figure <xref linkend="F8-14" remap="8.14"/></link> and <link linkend="F8-15">Figure <xref linkend="F8-15" remap="8.15"/></link>, respectively.</para>
</section>
<section class="lev2" id="sec8-6-2">
<title>8.6.2 Slip Displacement Signal Processing in Real Time</title>
<para>Frequent handling operations require the compressive force to be exerted, through the intermediary of the robot&#x02019;s sensing system, during a continuous hoisting operation. In this regime, the compressive force is increased while the gripper is continuously lifted in the vertical direction, accompanied by measurement of the slip displacement signal. When the slippage signal disappears, the compressive force is no longer increased and the operations with the object are accomplished according to the robot&#x02019;s operational algorithm. To control the trial motion regime and the continuous hoisting operation in real time, stringent requirements on the following parameters should be met, in particular:</para>
<fig id="F8-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-14">Figure <xref linkend="F8-14" remap="8.14"/></link></label>
<caption><para>Experimental self-clamping gripper with plane-parallel displacement of the clamping rollers.</para></caption>
<graphic xlink:href="graphics/ch08_fig0014.jpg"/>
</fig>
<fig id="F8-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-15">Figure <xref linkend="F8-15" remap="8.15"/></link></label>
<caption><para>Intelligent robot with 4 degrees of freedom for experimental investigations of SDS.</para></caption>
<graphic xlink:href="graphics/ch08_fig0015.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>the response time between the moment of slippage emergence and the moment when the gripper fingers begin to increase the compressive force;</para></listitem>
<listitem>
<para>the time of the sliding process including the moments between the emergence of sliding and its disappearance;</para></listitem>
<listitem>
<para>the minimal object displacement detected by the slip signal.</para></listitem></itemizedlist>
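The continuous hoisting regime described above can be sketched as a simple control loop: the grip force is raised in small steps while the slip displacement signal is present, and held once it disappears. The sensor and gripper interfaces below are hypothetical, not the authors' hardware API:

```python
# Minimal sketch of the continuous hoisting regime: while the slip
# displacement signal is detected, the compressive force is increased
# stepwise; once slippage disappears (or a safety limit is reached),
# the force is held. All interfaces are hypothetical.

def hoist_until_grip(sensor, gripper, step=0.1, f_max=50.0):
    """Raise the grip force stepwise during lifting until slippage
    stops; return the final compressive force."""
    force = gripper.force
    while sensor.slip_detected() and force < f_max:
        force += step
        gripper.set_force(force)   # exert the increased force
    return force
```

The loop's response time between slippage emergence and the first force increase is exactly the parameter the section lists as critical.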
<para>The problem of raising the sensors&#x02019; response speed in measuring slip displacement signals is tackled by improving their configuration and by using measuring circuit designs with high resolving power.</para>
</section>
</section>
<section class="lev1" id="sec8-7">
<title>8.7 Conclusions</title>
<para>The slip displacement signal detection method considered in the present chapter explains the main detection principles and gives robot sensing systems broad capabilities. The authors developed a wide variety of SDS schemes and mathematical models for capacitive, magnetic and light-reflecting sensitive elements with improved characteristics (accuracy, time response, sensitivity). Importantly, the developed multi-component capacitive sensor allows identifying the direction of object slippage based on slip displacement signal detection, which can occur when an intelligent robot collides with an obstacle in a dynamic environment. The design of self-clamping grippers is also a very promising direction for intelligent robot development.</para>
<para>The results of the research are applicable in the automatic adjustment of the clamping force of robot&#x02019;s gripper and robot motion correction algorithms in real time. The methods introduced by the authors may be also used in random operational conditions, within problems of automatic assembly, sorting, pattern and image recognition in the working zones of robots. The proposed sensors and models can be used for the synthesis of intelligent robot control systems [2, 16] with new features and for solving orientation and control tasks during intelligent robot contacts with obstacles.</para>
</section>
<section class="lev1" id="sec8-8">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>Ravinder S. Dahiya, Giorgio Metta, Maurizio Valle and Giulio Sandini. Tactile Sensing From Humans to Humanoids, volume 26 of Issue 1, pages 1&#x02013;20. IEEE Transactions on Robotics, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Ravinder+S%2E+Dahiya%2C+Giorgio+Metta%2C+Maurizio+Valle+and+Giulio+Sandini%2E+Tactile+Sensing+From+Humans+to+Humanoids%2C+volume+26+of+Issue+1%2C+pages+1-20%2E+IEEE+Transactions+on+Robotics%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. A. Kargin. Introduction to Intelligent Machines. Book 1: Intelligent Regulators. Nord-Press, DonNU, Donetsk, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+A%2E+Kargin%2E+Introduction+to+Intelligent+Machines%2E+Book+1%3A+Intelligent+Regulators%2E+Nord-Press%2C+DonNU%2C+Donetsk%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko. Measurements methods for slip displacement signal registration. In Proc. of Intern. Symposium on Measurement Technology and Intelligent Instruments, pages 1451&#x02013;1461, Chongqing-Wuhan, China, 1993. Published by SPIE. USA. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2E+Measurements+methods+for+slip+displacement+signal+registration%2E+In+Proc%2E+of+Intern%2E+Symposium+on+Measurement+Technology+and+Intelligent+Instruments%2C+pages+1451-1461%2C+Chongqing-Wuhan%2C+China%2C+1993%2E+Published+by+SPIE%2E+USA%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko and X. Y. Huang. Slip displacement sensors of robotic assembly system. In Proc. of 10-th Intern. Conference on Assembly Automation, pages 429&#x02013;436, Kanazava, Japan, 23&#x02013;25 Oct 1989. IFS Publications. Kempston, United Kingdom. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko+and+X%2E+Y%2E+Huang%2E+Slip+displacement+sensors+of+robotic+assembly+system%2E+In+Proc%2E+of+10-th+Intern%2E+Conference+on+Assembly+Automation%2C+pages+429-436%2C+Kanazava%2C+Japan%2C+23-25+Oct+1989%2E+IFS+Publications%2E+Kempston%2C+United+Kingdom%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko and I. L. Nazarova. Mathematical model of capacity sensor with conical configuration of sensitive element. In Proceedings of the Donetsk National Technical University, No. 11(186), pages 186&#x02013;191, Donetsk: DNTU. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko+and+I%2E+L%2E+Nazarova%2E+Mathematical+model+of+capacity+sensor+with+conical+configuration+of+sensitive+element%2E+In+Proceedings+of+the+Donetsk+National+Technical+University%2C+No%2E+11%28186%29%2C+pages+186-191%2C+Donetsk%3A+DNTU%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko and E. A. Shvets. Adaptive gripper of intelligent robot. Patent No. 14569, Ukraine, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko+and+E%2E+A%2E+Shvets%2E+Adaptive+gripper+of+intelligent+robot%2E+Patent+No%2E+14569%2C+Ukraine%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko and O. S. Shyshkin. Experimental studies of the magnetic slip displacement sensors for adaptive robotic systems. In Proceedings of the Odessa Polytechnic University, pages 47&#x02013;51, Odessa, 2005. Special Issue. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko+and+O%2E+S%2E+Shyshkin%2E+Experimental+studies+of+the+magnetic+slip+displacement+sensors+for+adaptive+robotic+systems%2E+In+Proceedings+of+the+Odessa+Polytechnic+University%2C+pages+47-51%2C+Odessa%2C+2005%2E+Special+Issue%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko and O. S. Shyshkin. Synthesis of regression models of magnetic systems of slip displacement sensors. Radioelectronic and Computer Systems, (No. 6(25)): 210&#x02013;215, Kharkov, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko+and+O%2E+S%2E+Shyshkin%2E+Synthesis+of+regression+models+of+magnetic+systems+of+slip+displacement+sensors%2E+Radioelectronic+and+Computer+Systems%2C+%28No%2E+6%2825%29%29%3A+210-215%2C+Kharkov%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko, A. V. Kuzmichev and Y. Z. Yang. Robot control system using slip displacement signal for algoritm correction. In ROBOT CONTROL (SYROCO91). Selected papers from the 3-rd IFAC/IFIP/IMACS Symposium, pages 463&#x02013;469. Pergamon Press, Vienna, Austria. Oxford-NewYork-Seoul-Tokyo, 1991. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2C+A%2E+V%2E+Kuzmichev+and+Y%2E+Z%2E+Yang%2E+Robot+control+system+using+slip+displacement+signal+for+algoritm+correction%2E+In+ROBOT+CONTROL+%28SYROCO91%29%2E+Selected+papers+from+the+3-rd+IFAC%2FIFIP%2FIMACS+Symposium%2C+pages+463-469%2E+Pergamon+Press%2C+Vienna%2C+Austria%2E+Oxford-NewYork-Seoul-Tokyo%2C+1991%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko, E. A. Shvets and O. S. Shyshkin. Modern sensor systems of intelligent robots based on the slip displacement signal detection. In Annals of DAAAM for 2007 &#x00026; Proceedings of the 18th International DAAAM Symposium, pages 381&#x02013;382, Vienna, Austria, 2007. DAAAM International. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2C+E%2E+A%2E+Shvets+and+O%2E+S%2E+Shyshkin%2E+Modern+sensor+systems+of+intelligent+robots+based+on+the+slip+displacement+signal+detection%2E+In+Annals+of+DAAAM+for+2007+%26+Proceedings+of+the+18th+International+DAAAM+Symposium%2C+pages+381-382%2C+Vienna%2C+Austria%2C+2007%2E+DAAAM+International%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko, L. P. Klymenko, V. Y. Kondratenko, G. V. Kondratenko and E. A. Shvets. Slip displacement sensors for intelligent robots: Solutions and models. In Proceedings of the 2013 IEEE 7th International Conference on Intelligent Data Acquisition and Advanced Computing Systems, Vol. 2, pages 861&#x02013;866. IDAACS, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2C+L%2E+P%2E+Klymenko%2C+V%2E+Y%2E+Kondratenko%2C+G%2E+V%2E+Kondratenko+and+E%2E+A%2E+Shvets%2E+Slip+displacement+sensors+for+intelligent+robots%3A+Solutions+and+models%2E+In+Proceedings+of+the+2013+IEEE+7th+International+Conference+on+Intelligent+Data+Acquisition+and+Advanced+Computing+Systems%2C+Vol%2E+2%2C+pages+861-866%2E+IDAACS%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko, N. Y. Kondratenko and V. Y. Kondratenko. Intelligent sensor system. Patent No. 52080, Ukraine, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2C+N%2E+Y%2E+Kondratenko+and+V%2E+Y%2E+Kondratenko%2E+Intelligent+sensor+system%2E+Patent+No%2E+52080%2C+Ukraine%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko, O. S. Shyshkin and V. Y. Kondratenko. Device for detection of slip displacement signal. Patent No. 79155, Ukraine, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2C+O%2E+S%2E+Shyshkin+and+V%2E+Y%2E+Kondratenko%2E+Device+for+detection+of+slip+displacement+signal%2E+Patent+No%2E+79155%2C+Ukraine%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko, V. Y. Kondratenko, E. A. Shvets and O. S. Shyshkin. Adaptive gripper devices for robotic systems. In Mechatronics and Robotics (M&#x00026;R-2007): Proceeding of Intern. Scientific-and-Technological Congress (October 2&#x02013;5, 2007), pages 99&#x02013;105. Polytechnical University Press, Saint-Petersburg, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2C+V%2E+Y%2E+Kondratenko%2C+E%2E+A%2E+Shvets+and+O%2E+S%2E+Shyshkin%2E+Adaptive+gripper+devices+for+robotic+systems%2E+In+Mechatronics+and+Robotics+%28M%26R-2007%29%3A+Proceeding+of+Intern%2E+Scientific-and-Technological+Congress+%28October+2-5%2C+2007%29%2C+pages+99-105%2E+Polytechnical+University+Press%2C+Saint-Petersburg%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko, V. Y. Kondratenko, I. V. Markovsky, S. K. Chernov, E. A. Shvets and O. S. Shyshkin. Adaptive gripper of intelligent robot. Patent No. 26252, Ukraine, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2C+V%2E+Y%2E+Kondratenko%2C+I%2E+V%2E+Markovsky%2C+S%2E+K%2E+Chernov%2C+E%2E+A%2E+Shvets+and+O%2E+S%2E+Shyshkin%2E+Adaptive+gripper+of+intelligent+robot%2E+Patent+No%2E+26252%2C+Ukraine%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. P. Kondratenko, Y. M. Zaporozhets, G. V. Kondratenko and O. S. Shyshkin. Device for identification and analysis of tactile signals for information-control system of the adaptive robot. Patent No. 40710, Ukraine, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+P%2E+Kondratenko%2C+Y%2E+M%2E+Zaporozhets%2C+G%2E+V%2E+Kondratenko+and+O%2E+S%2E+Shyshkin%2E+Device+for+identification+and+analysis+of+tactile+signals+for+information-control+system+of+the+adaptive+robot%2E+Patent+No%2E+40710%2C+Ukraine%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. H. Lee. Tactile sensing: New directions, new challenges. The International Journal of Robotics Research, vol. 19, no. 7, pp. 636&#x02013;643, Jul 2000. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+H%2E+Lee%2E+Tactile+sensing%3A+New+directions%2C+new+challenges%2E+Int%2E+J%2E+of+Robotics+Research%2E+19%287%29%2C+Jul+2000%2C+vol%2E+19+no%2E+7%2C+pp%2E+636-643%2E%2C+Jul+2000%2E" target="_blank">Google Scholar</ulink></para>
<listitem>
<para>Mark H. Lee and Howard R. Nicholls. Tactile sensing for mechatronics - a state of the art survey, volume 9, pages 1&#x02013;31. Mechatronics, 1999. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Mark+H%2E+Lee+and+Howard+R%2E+Nicholls%2E+Tactile+sensing+for+mechatronics+-+a+state+of+the+art+survey%2C+volume+9%2C+pages+1-31%2E+Mechatronics%2C+1999%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>H. B. Muhammad, C. M. Oddo, L. Beccai, C. Recchiuto, C. J. Anthony, M. J. Adams, M. C. Carrozza, D. W. L. Hukins and M. C. L. Ward. Development of a bioinspired MEMS based capacitive tactile sensor for a robotic finger. Sensors and Actuators A-165, pages 221&#x02013;229, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=H%2E+B%2E+Muhammad%2C+C%2E+M%2E+Oddo%2C+L%2E+Beccai%2C+C%2E+Recchiuto%2C+C%2E+J%2E+Anthony%2C+M%2E+J%2E+Adams%2C+M%2E+C%2E+Carrozza%2C+D%2E+W%2E+L%2E+Hukins+and+M%2E+C%2E+L%2E+Ward%2E+Development+of+a+bioinspired+MEMS+based+capacitive+tactile+sensor+for+a+robotic+finger%2E+Sensors+and+Actuators+A-165%2C+pages+221-229%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Howard R. Nicholls and Mark H. Lee. A survey of Robot Tactile Sensor Technology. The International Journal of Robotic Research, (Vol. 8, No. 3):3&#x02013;30, June 1989. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Howard+R%2E+Nicholls+and+Mark+H%2E+Lee%2E+A+survey+of+Robot+Tactile+Sensor+Technology%2E+The+International+Journal+of+Robotic+Research%2C+%28Vol%2E+8%2C+No%2E+3%29%3A3-30%2C+June+1989%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>E. P. Reidemeister and L. K. Johnson. Capacitive acceleration sensor for vehicle applications. In Sensors and actuators, pages 29&#x02013;34. SP-1066, 1995. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=E%2E+P%2E+Reidemeister+and+L%2E+K%2E+Johnson%2E+Capacitive+acceleration+sensor+for+vehicle+applications%2E+In+Sensors+and+actuators%2C+pages+29-34%2E+SP-1066%2C+1995%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Johan Tegin and Jan Wikander. Tactile Sensing in Intelligent Robotic Manipulation-A Review. Industrial Robot: An International Journal, (Vol. 32, No. 1):64&#x02013;70, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Johan+Tegin+and+Jan+Wikander%2E+Tactile+Sensing+in+Intelligent+Robotic+Manipulation-A+Review%2E+Industrial+Robot%3A+An+International+Journal%2C+%28Vol%2E+32%2C+No%2E+1%29%3A64-70%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. I. Tiwana, A. Shashank, S. J. Redmond and N. H. Lovell. Characterization of a capacitive tactile shear sensor for application in robotic and upper limb prostheses. Sensors and actuators, A-165, pages 164&#x02013;172, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+I%2E+Tiwana%2C+A%2E+Shashank%2C+S%2E+J%2E+Redmond+and+N%2E+H%2E+Lovell%2E+Characterization+of+a+capacitive+tactile+shear+sensor+for+application+in+robotic+and+upper+limb+prostheses%2E+Sensors+and+actuators%2C+A-165%2C+pages+164-172%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Ueda and K. Iwata. Adaptive grasping operation of an industrial robot. In Proc. of the 3-rd Int. Symp. Ind. Robots, pages 301&#x02013;310, Zurich, 1973. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Ueda+and+K%2E+Iwata%2E+Adaptive+grasping+operation+of+an+industrial+robot%2E+In+Proc%2E+of+the+3-rd+Int%2E+Symp%2E+Ind%2E+Robots%2C+pages+301-310%2C+Zurich%2C+1973%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Ueda, K. Iwata and H. Shingu. Tactile sensors for an industrial robot to detect a slip. In 2-nd Int. Symp. on Industrial Robots, pages 63&#x02013;76, Chicago, USA, 1972. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Ueda%2C+K%2E+Iwata+and+H%2E+Shingu%2E+Tactile+sensors+for+an+industrial+robot+to+detect+a+slip%2E+In+2-nd+Int%2E+Symp%2E+on+Industrial+Robots%2C+pages+63-76%2C+Chicago%2C+USA%2C+1972%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. M. Zaporozhets. Qualitative analysis of the characteristics of direct permanent magnets in magnetic systems with a gap. Technical electrodynamics, (No. 3):19&#x02013;24, 1980. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+M%2E+Zaporozhets%2E+Qualitative+analysis+of+the+characteristics+of+direct+permanent+magnets+in+magnetic+systems+with+a+gap%2E+Technical+electrodynamics%2C+%28No%2E+3%29%3A19-24%2C+1980%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. M. Zaporozhets, Y. P. Kondratenko and O. S. Shyshkin. Three-dimensional mathematical model for calculating the magnetic induction in magnetic-sensitive system of slip displacement sensor. Technical electrodynamics, (No. 5):76&#x02013;79, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+M%2E+Zaporozhets%2C+Y%2E+P%2E+Kondratenko+and+O%2E+S%2E+Shyshkin%2E+Three-dimensional+mathematical+model+for+calculating+the+magnetic+induction+in+magnetic-sensitive+system+of+slip+displacement+sensor%2E+Technical+electrodynamics%2C+%28No%2E+5%29%3A76-79%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. M. Zaporozhets, Y. P. Kondratenko and O. S. Shyshkin. Mathematical model of slip displacement sensor with registration of transversal constituents of magnetic field of sensing element. Technical electrodynamics, (No. 4):67&#x02013;72, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+M%2E+Zaporozhets%2C+Y%2E+P%2E+Kondratenko+and+O%2E+S%2E+Shyshkin%2E+Mathematical+model+of+slip+displacement+sensor+with+registration+of+transversal+constituents+of+magnetic+field+of+sensing+element%2E+Technical+electrodynamics%2C+%28No%2E+4%29%3A67-72%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch09" label="9" xreflabel="9">
<title>Distributed Data Acquisition and Control Systems for a Sized Autonomous Vehicle</title>
<para><emphasis role="strong">T. Happek, U. Lang, T. Bockmeier, D. Neubauer and A. Kuznietsov</emphasis></para>
<para>University of Applied Sciences, Friedberg, Germany</para>
<para>Corresponding author: T. Happek &lt;t.Happek@gmx.net&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>In this paper, we present an autonomous car with distributed data processing. The car is controlled by a multitude of independent sensors. For lane detection, a camera is used, which detects the lane marks by means of a Hough transformation. Once the camera has detected the lane marks, one lane is selected for the car to follow. This lane is verified by the other sensors of the car, which check the route for obstructions or allow the car to scan a parking space and park on the roadside if the gap is large enough. The car is built on a scale of 1:10 and shows excellent results on a test track.</para>
<para><emphasis role="strong">Keywords:</emphasis> Edge detection, image processing, microcontrollers, camera</para>
</section>
<section class="lev1" id="sec9-1">
<title>9.1 Introduction</title>
<para>Nowadays, the question of safe traveling becomes more and more important. Most accidents are caused by human failure, so in many sectors of industry the issue of &#x0201C;autonomous driving&#x0201D; is of increasing interest. An autonomous car does not have problems such as tiredness or simply being in bad shape that day, and it suffers less from reduced visibility due to environmental influences. A car equipped with laser sensors to detect objects on the road and sensors that measure the grip of the road, which calculates its speed from the signals of these sensors and reacts within a fixed reaction time, will reduce the number of accidents and the related costs.</para>
<para>This chapter describes the project of an autonomous vehicle on a scale of 1:10, which was developed based on decentralized signal routing. The objective of the project is to build an autonomous car that is able to drive autonomously on a scaled road, including the detection of stopping lines, finding a parking area and parking autonomously.</para>
<para>The project is divided into three sections. The first section is about the car itself, the platform of the project. This section describes the sensors of the car, the schematic construction and the signal flow of the car.</para>
<para>The second section is about lane detection, the most important part of the vehicle. Using a camera and several image filters, the lane marks of the road are extracted from the camera image. This section also describes the calculation of the driving lane for the car.</para>
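The Hough transformation mentioned above can be illustrated by its core voting step: each edge pixel votes for every line parameterisation r = x&#x000B7;cos&#x003B8; + y&#x000B7;sin&#x003B8; passing through it, and peaks in the accumulator correspond to lane marks. This is a generic sketch, not the chapter's implementation, which would operate on a filtered edge image rather than a hand-made point list:

```python
# Minimal Hough voting sketch for line (lane-mark) detection: every
# edge point votes for all (angle, distance) bins of lines through it;
# collinear points pile their votes into one accumulator peak.

import math
from collections import Counter

def hough_lines(points, n_angles=180, r_step=1.0):
    """Accumulate line votes for the given edge points; returns a
    Counter keyed by (angle_index, distance_bin)."""
    acc = Counter()
    for x, y in points:
        for a in range(n_angles):
            theta = math.pi * a / n_angles
            r = x * math.cos(theta) + y * math.sin(theta)
            acc[(a, round(r / r_step))] += 1
    return acc

# Twenty collinear edge points on y = 2x: the strongest peak gathers
# one vote per point, at the angle of their common line.
points = [(x, 2 * x) for x in range(20)]
(angle_idx, r_bin), votes = hough_lines(points).most_common(1)[0]
```

In practice an OpenCV routine such as `cv2.HoughLinesP` would be used on the filtered camera frame instead of this explicit loop.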
<para>The control of the vehicle is the matter of the third section. The car runs based on a mathematical model of the car, which calculates the speed and the steering angle of the car in real time, based on the driving lane provided by the camera.</para>
<para>The car is tested on a scaled indoor test track.</para>
</section>
<section class="lev1" id="sec9-2">
<title>9.2 The Testing Environment</title>
<fig id="F9-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-1">Figure <xref linkend="F9-1" remap="9.1"/></link></label>
<caption><para>Dimensions of the test track.</para></caption>
<graphic xlink:href="graphics/ch09_fig001.jpg"/>
</fig>
<para>Since the car is based on a scaled model car, the test track has to be scaled, too. The test track therefore has the same dimensions as the scaled road used in a competition for 1:10-scale cars that takes place in Germany every year. As can be seen in <link linkend="F9-1">Figure <xref linkend="F9-1" remap="9.1"/></link>, the road has fixed lane marks. This simplification matters because the vehicle is a prototype: on a real road, several types of lane markings exist, and from white lines, as on the test track, to pillars or a missing center line, any type of lane marking may appear.</para>
<para>In order to simplify lane detection on the test track, it is assumed that at any time only one type of lane marking exists. The road has a fixed width, and the curves have a radius of at least 1 meter. The test track has no slope, but a flat surface.</para>
</section>
<section class="lev1" id="sec9-3">
<title>9.3 Description of the System</title>
<para>The basic idea of the vehicle is a distributed data acquisition strategy. This means that the peripherals are not all managed by a single microcontroller; instead, each peripheral has its own microcontroller, which handles the data processing for a specific task. Together they analyze the data of the different sensors of the car. The smaller controllers, for instance those for the distance sensors or for the camera, are managed by one main controller. The input of the smaller controllers is provided simultaneously via CAN.</para>
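The distributed layout can be sketched as a small dispatch table on the main controller: each board publishes frames under its own CAN identifier, and the main controller routes them to a per-board handler. All identifiers, payload layouts and handler names below are invented for illustration; a real system would use a CAN library and the project's actual message definitions:

```python
# Illustrative sketch of the main controller's CAN dispatch: each board
# (front, side, rear, camera) sends frames under its own identifier,
# and the main controller routes them to the matching handler.
# Identifiers and payload layouts are hypothetical.

HANDLERS = {}

def on_can_id(can_id):
    """Register a handler function for frames with this CAN id."""
    def register(fn):
        HANDLERS[can_id] = fn
        return fn
    return register

@on_can_id(0x100)                       # hypothetical front-board id
def handle_front(data):
    # single-byte distance reading from a front infrared sensor
    return ("front_distance_cm", data[0])

@on_can_id(0x200)                       # hypothetical side-board id
def handle_side(data):
    # two-byte big-endian parking-gap length from the side board
    return ("parking_gap_cm", data[0] * 256 + data[1])

def dispatch(can_id, data):
    """Route one received frame to its board handler, if any."""
    handler = HANDLERS.get(can_id)
    return handler(data) if handler else None
```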
<para>The base of the vehicle is a model car scaled 1:10. It includes all mechanical peripherals of a car, such as the chassis and the engine. A platform for the control system is added. <link linkend="F9-2">Figure <xref linkend="F9-2" remap="9.2"/></link> shows the schematic top view of the car with the added platform.</para>
<fig id="F9-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-2">Figure <xref linkend="F9-2" remap="9.2"/></link></label>
<caption><para>Schematic base of the model car.</para></caption>
<graphic xlink:href="graphics/ch09_fig02.jpg"/>
</fig>
<para>The vehicle itself is equipped with two front boards, a side board, a rear board and the motherboard. Each of these boards has one microcontroller for data analysis, positioned next to the sensors. The front boards provide infrared sensors for tracking objects ahead at a distance.</para>
<para>The infrared sensors of the side board have the task of finding a parking space and transmitting the information via CAN bus to the main controller, which undertakes the control of parking supported by the information it gets from the sensors in the front and back of the car.</para>
<para>The rear board is equipped with infrared sensors, too. It monitors the area behind the vehicle only, which guarantees a safe distance to all objects in the back. The microcontrollers are responsible for the data processing of each board and send the information to the main controller via CAN bus. Each of the microcontrollers reacts to the incoming input signals of the corresponding sensors according to its implemented control. The infrared sensors are distributed along the car, as you can see in <link linkend="F9-3">Figure <xref linkend="F9-3" remap="9.3"/></link>.</para>
<fig id="F9-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-3">Figure <xref linkend="F9-3" remap="9.3"/></link></label>
<caption><para>The infrared sensors distributed alongside the car.</para></caption>
<graphic xlink:href="graphics/ch09_fig003.jpg"/>
</fig>
<para>The motherboard with the integrated main controller is the main access point of the car. It provides the CAN bus connection and the power supply for the other boards and the external sensors. Primarily, however, it is the central communication point of the car: it manages the information coming from the peripheral boards, including the data from the external sensors, and generates the control signals for the engine and for the servo that sets the steering angle.</para>
<para>The motherboard gets its power supply from three 5 V batteries. With these three batteries, the model car can drive autonomously for about one hour.</para>
<para>The main task of the main controller is the control of the vehicle. It calculates the speed of the car and the steering angle based on a mathematical model of the car and the information from the sensors. The external engine driver sets the speed via PWM. The steering angle of the car is adjusted by the front wheels, whose angle is controlled by an additional servo.</para>
<para>The camera and its lane detection are the most important components of the vehicle. The camera is installed in the middle of the front of the car, see <link linkend="F9-4">Figure <xref linkend="F9-4" remap="9.4"/></link>. The viewing angle is important for the position of the camera. If the viewing angle is too small, the camera captures only the area directly in front of the car, but not the area in the middle distance. If the viewing angle is too large, the camera captures a large area covering near and far distances, but the information about the road is so condensed that efficient lane detection is not possible. The angle also depends on the height of the camera and the numerical aperture of the lens: the higher the camera is positioned, the smaller the viewing angle. For this project, the camera is mounted at a height of 30 cm with a viewing angle of 35 degrees; both values were determined experimentally.</para>
<fig id="F9-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-4">Figure <xref linkend="F9-4" remap="9.4"/></link></label>
<caption><para>Position of the camera.</para></caption>
<graphic xlink:href="graphics/ch09_fig04.jpg"/>
</fig>
<para><link linkend="F9-5">Figure <xref linkend="F9-5" remap="9.5"/></link> shows the reduced signal flow of the vehicle. The information from the infrared sensors is sent to a small microcontroller, as visualized by the dotted lines. In reality, each sensor has its own microcontroller, but to reduce the complexity of the graphic, they are shown as one. The camera has its own microcontroller, which must be able to accomplish the necessary calculations for lane detection in time. For the control of the vehicle by the main controller, the information from all other controllers must be updated within one calculation step, as required by the mathematical model of the car. For its calculation, the main controller gathers the analyzed data provided by the smaller microcontrollers, the data from the camera about the driving lane, and the information from further sensors such as the gyroscope and the accelerometer. The essential signal flow of all these components to the main controller is visualized by the solid lines in <link linkend="F9-5">Figure <xref linkend="F9-5" remap="9.5"/></link>. After its calculation, the main controller sends control signals to the engine and to the servo, which sets the steering angle of the car.</para>
<fig id="F9-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-5">Figure <xref linkend="F9-5" remap="9.5"/></link></label>
<caption><para>Schematic signal flow of the vehicle.</para></caption>
<graphic xlink:href="graphics/ch09_fig005.jpg"/>
</fig>
<para>Incremental encoders on the rear wheels measure the current speed and the path the vehicle has traveled during the last calculation step of the mathematical model. The sensors send this data via CAN bus to the main controller. The vehicle is front-engined, so traction of the rear wheels is ensured and potential measurement errors caused by wheel spin are avoided.</para>
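<para>As an illustration, the conversion from encoder ticks to traveled distance and speed can be sketched as follows; the tick resolution and wheel diameter are assumed values, not taken from the chapter:</para>

```python
import math

TICKS_PER_REV = 512        # assumed encoder resolution (ticks per wheel revolution)
WHEEL_DIAMETER_M = 0.065   # assumed wheel diameter of the 1:10 model car

def distance_from_ticks(ticks):
    """Distance in meters traveled during one calculation step."""
    return ticks / TICKS_PER_REV * math.pi * WHEEL_DIAMETER_M

def speed_from_ticks(ticks, dt_s):
    """Current speed in m/s from the ticks counted over the step duration dt_s."""
    return distance_from_ticks(ticks) / dt_s
```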
<para>There are two modules that do not communicate with the main controller via CAN bus: the first is the camera, which ensures that the vehicle keeps to the track; the second is a sensor module containing the gyroscope and the accelerometer. Neither module has a CAN interface; instead, they communicate with the main microcontroller via a USART interface.</para>
<para>In the future, the focus will be on an interactive network of several independent vehicles based on radio transmission. This will allow all vehicles to communicate with each other and share information such as traction and road conditions, the current GPS position, or speed. The radio transmission is carried out with the industry standard &#x0201C;ZigBee&#x0201D;. An XBee module from the company Digi handles the radio transmission. The module uses a UART interface for the communication with the main microcontroller on the vehicle. Via this interface, the car will receive information from other cars nearby. A detailed overview of the data processing system, including the XBee module, is shown in <link linkend="F9-6">Figure <xref linkend="F9-6" remap="9.6"/></link>.</para>
<fig id="F9-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-6">Figure <xref linkend="F9-6" remap="9.6"/></link></label>
<caption><para>Overview of the data processing system.</para></caption>
<graphic xlink:href="graphics/ch09_fig006.jpg"/>
</fig>
</section>
<section class="lev1" id="sec9-4">
<title>9.4 Lane Detection</title>
<para>There are several steps needed to accomplish the lane detection.</para>
<para>First, the image is analyzed with an In-Range filter. In the second step, the points that the Hough-transformation has identified as lane marks are divided into left and right lane marks. Next, the least-squares method is used to fit a second-degree polynomial to the lane marks, thus providing the base to calculate the driving lane. Subsequently, the points of the driving lane are transformed into world coordinates.</para>
<para>Two types of filters are used to extract the needed information from the image. Both are functions from the OpenCV library. An In-Range filter is used to detect the white lane marks on the defined test track. The Hough-transformation then calculates the exact position of the lane marks, preparing them for the next steps.</para>
<section class="lev2" id="sec9-4-1">
<title>9.4.1 In-Range Filter</title>
<para>The In-Range filter transforms an RGB image into an 8-bit binary image. It is designed to detect pixels within a variable color range. The transformed image has the same resolution as the original. Pixels belonging to the chosen color range are white; all other pixels are black. The function works on the individual channel values of the RGB format, and the chosen color range is defined by a lower and an upper threshold for these values.</para>
<para><link linkend="F9-7">Figure <xref linkend="F9-7" remap="9.7"/></link> shows the result of the In-Range filter.</para>
<fig id="F9-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-7">Figure <xref linkend="F9-7" remap="9.7"/></link></label>
<caption><para>Comparison between original and in-range image.</para></caption>
<graphic xlink:href="graphics/ch09_fig007.jpg"/>
</fig>
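<para>A minimal sketch of such a filter, using NumPy to mirror the behavior of OpenCV&#x02019;s cv2.inRange; the thresholds below are hypothetical values for near-white lane marks:</para>

```python
import numpy as np

def in_range(img_rgb, lower, upper):
    """8-bit binary mask: 255 where every channel of a pixel lies within
    [lower, upper], 0 everywhere else -- the behavior of cv2.inRange."""
    lower = np.asarray(lower, dtype=img_rgb.dtype)
    upper = np.asarray(upper, dtype=img_rgb.dtype)
    mask = np.all((img_rgb >= lower) & (upper >= img_rgb), axis=-1)
    return (mask * 255).astype(np.uint8)

# Near-white pixels (e.g. lane marks) pass the filter, everything else is black.
frame = np.array([[[255, 255, 255], [10, 10, 10]]], dtype=np.uint8)
binary = in_range(frame, (200, 200, 200), (255, 255, 255))
```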
</section>
<section class="lev2" id="sec9-4-2">
<title>9.4.2 Hough-Transformation</title>
<para>The Hough-transformation is an algorithm to detect lines or circles in images; in this case, it investigates the binary image from the In-Range filter in order to find the lane marks.</para>
<para>The Hessian normal form converts individual pixels so that they can be recognized as lines in the Hough space. In this space, lines are expressed by their distance to the origin and their angle to one of the axes. Since the exact angle of the marks is unknown, the distance to the origin is calculated according to Equation (9.1) for the most probable angles:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq9-1.jpg"/></para>
<fig id="F9-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-8">Figure <xref linkend="F9-8" remap="9.8"/></link></label>
<caption><para>Original image without and with Hough-lines.</para></caption>
<graphic xlink:href="graphics/ch09_fig008.jpg"/>
</fig>
<para>The intersection of the sinusoids yields the angle and the distance of the straight line from the origin of the coordinate system. These parameters define a line on which the majority of the pixels lie. Furthermore, the function from the OpenCV library returns the start point and the end point of each Hough-line. As <link linkend="F9-8">Figure <xref linkend="F9-8" remap="9.8"/></link> shows, the lines of the Hough-transformation are mapped precisely onto the lane marks of the road.</para>
</section>
<section class="lev2" id="sec9-4-3">
<title>9.4.3 Lane Marks</title>
<para>To provide a more precise calculation, all points along each line are included. These points are stored in two arrays and then sorted. The first sorting criterion is the position of the last driving lane; the second is the points&#x02019; position in the image.</para>
<para>As mentioned before, the information in the image regarding long distances can be critical, depending on the viewing angle and the height of the camera. In order to work with noncritical information only, just the points in the middle area of the image are used. <link linkend="F9-9">Figure <xref linkend="F9-9" remap="9.9"/></link> shows the sorted points on the right and the corresponding Hough-lines on the left side.</para>
<fig id="F9-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-9">Figure <xref linkend="F9-9" remap="9.9"/></link></label>
<caption><para>Hough-Lines and sorted points along the Hough-Lines.</para></caption>
<graphic xlink:href="graphics/ch09_fig009.jpg"/>
</fig>
</section>
<section class="lev2" id="sec9-4-4">
<title>9.4.4 Polynomial</title>
<para>To describe the lane marks more efficiently, a second-degree polynomial is used. The coefficients of the parabola are derived by the least-squares method. A polynomial of a higher degree is not needed: the effort to calculate its coefficients would be too high in this context, since the speed of the image processing is one of the critical points of the project. Furthermore, the area of the road pictured by the camera is too small to exhibit the typical shape of a third-degree polynomial.</para>
<para>As visible in <link linkend="F9-10">Figure <xref linkend="F9-10" remap="9.10"/></link>, the parabolas derived from the sorted points are mapped precisely onto the lane marks of the road. The algorithm that calculates the coefficients from the points of the lane marks is a custom implementation.</para>
<fig id="F9-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-10">Figure <xref linkend="F9-10" remap="9.10"/></link></label>
<caption><para>Sorted points and least-squares parabola.</para></caption>
<graphic xlink:href="graphics/ch09_fig010.jpg"/>
</fig>
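<para>The least-squares fit itself can be illustrated with NumPy on synthetic lane-mark points (the coefficients below are made-up; on the vehicle, the fit is a custom implementation on the microcontroller):</para>

```python
import numpy as np

# Synthetic lane-mark points that lie exactly on x = a*y^2 + b*y + c.
ys = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # image rows
xs = 0.01 * ys**2 - 0.5 * ys + 120.0            # horizontal positions

# Least-squares second-degree polynomial, highest coefficient first.
a, b, c = np.polyfit(ys, xs, deg=2)
```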
</section>
<section class="lev2" id="sec9-4-5">
<title>9.4.5 Driving Lane</title>
<para>The driving lane for the car lies between the parabolas described in the previous section. To calculate the points of the driving lane, the average of two opposing points on the two parabolas is taken. According to Equation (9.2), the average of the x- and y-coordinates is calculated.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq9-2.jpg"/></para>
<para>In order to simplify the transformation from pixel coordinates to world coordinates, the driving lane is described by a fixed number of points in the image. The essential feature of these points is that they lie in predefined rows of the image, so only the horizontal position on the parabola has to be calculated for each of them.</para>
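<para>A sketch of this averaging, with hypothetical parabola coefficients and rows:</para>

```python
def driving_lane(left_poly, right_poly, rows):
    """Driving-lane points as the average of opposing points on the left and
    right lane-mark parabolas, evaluated at predefined image rows (Eq. (9.2))."""
    def eval_poly(p, y):          # p = (a, b, c) describes x = a*y^2 + b*y + c
        a, b, c = p
        return a * y * y + b * y + c
    return [((eval_poly(left_poly, y) + eval_poly(right_poly, y)) / 2.0, y)
            for y in rows]

# Hypothetical straight lane marks at x = 100 and x = 200 pixels.
lane = driving_lane((0.0, 0.0, 100.0), (0.0, 0.0, 200.0), [10, 20, 30])
```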
<para>Theoretically, the program can deliver an incorrect driving lane. Mistakes can occur because of glare, reflections on the road, lane marks missing for various reasons, or extreme changes in lighting that are faster than the camera&#x02019;s auto white balance can compensate. In order to avoid mistakes that occur within a short time period, some kind of stabilization is required. Short in this case means less than one second.</para>
<para>For the purpose of stabilization, the different driving points are stored. The stabilization works with these stored points in combination with four defined edge points in the image. First, the algorithm checks if the edge points of the new image differ from the edge points in the old image.</para>
<para>If the difference between the old points and the new points is small, the driving lane is calculated and the driving points are stored. If the difference is too big, the driving lane is not updated but calculated from the stored points instead. In that case, the algorithm works with the changes of the stored points: the new points are extrapolated from the difference between the last image and the current one, which in turn is derived from the change between two earlier differences, namely the difference between the third-last and second-last image and the difference between the second-last and last image.</para>
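<para>The core of this check can be sketched as follows (a simplified version: the real algorithm additionally extrapolates the stored points as described above):</para>

```python
def stabilize(new_edges, old_edges, new_lane, stored_lane, threshold):
    """Accept the new driving lane only if no edge point moved more than
    `threshold` pixels since the last image; otherwise keep the stored lane."""
    max_diff = max(abs(n - o) for n, o in zip(new_edges, old_edges))
    if threshold > max_diff:
        return new_lane, new_edges      # plausible update: store it
    return stored_lane, old_edges       # implausible jump: keep the old lane
```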
<para>The critical values for the difference also depend on this calculation, which means that in curves the critical values are higher. Otherwise, only the last three images are used for the calculation, in order to reduce the noise of the driving lane; in this case, however, the reaction time of the algorithm is shorter.</para>
<para>The reaction time also depends on the fps (frames per second) of the camera. For this project, a camera with 100 fps is used and the last fifteen driving lanes are stored. The number of stored driving lanes for 100 fps is based on experimental research.</para>
<para><link linkend="F9-10">Figure <xref linkend="F9-10" remap="9.10"/></link> shows the driving lane in red. The four edge points mark the corners of the rectangle.</para>
</section>
<section class="lev2" id="sec9-4-6">
<title>9.4.6 Stop Line</title>
<para>One of the main tasks of the camera is to detect stop lines. <link linkend="F9-11">Figure <xref linkend="F9-11" remap="9.11"/></link> shows the dimensions of the stop lines for this test track.</para>
<fig id="F9-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-11">Figure <xref linkend="F9-11" remap="9.11"/></link></label>
<caption><para>Parables and driving lane.</para></caption>
<graphic xlink:href="graphics/ch09_fig011.jpg"/>
</fig>
<para>In order to detect stop lines, the algorithm searches for their main characteristics. First, a stop line is a horizontal line in the image: if the angle of a vertical line in the image is defined as zero degrees, the perfect stop line has an angle of 90 degrees. The algorithm does not search only for 90-degree lines; any line with an angle smaller than &#x02013;75 degrees or bigger than +75 degrees is treated as a potential stop line.</para>
<para>The next criterion is that a stop line lies on the car&#x02019;s traffic lane. The algorithm therefore does not need to search the complete image for stop lines, but only the area of the car&#x02019;s traffic lane. This area is marked by the four edge points of the rectangle mentioned in the last section. Once the algorithm finds a potential stop line in the right area with a correct angle, it checks the next two characteristics of a stop line: its length and its width.</para>
<para>The length of the stop line is easy to check. The stop line must be as long as the road is wide, so the algorithm only needs to check the endpoints of the line. On the left side, the endpoint of the stop line must lie on the middle road marking. On the right side, the stop line borders on the left road marking of the crossing road. There, the stop line and the road marking differ in just one respect: their width.</para>
<para>Since it is not possible to perceive this difference in width in every situation, the stop line has no defined end point on this side; the algorithm only checks whether the end point of the potential stop line lies on or beyond the right road marking. Measuring the width of a line that has constant real-world width and length is hard in an image: the width in pixels depends on the camera position relative to the line, the numerical aperture of the camera lens, and the resolution of the camera. Because the position of the camera changes from time to time in this project, measuring the width is not a reliable way to identify the stop line. Therefore, the width is not used as a criterion.</para>
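<para>The angle and search-area criteria can be sketched as follows (coordinates and the rectangle are hypothetical; the length check against the road markings is omitted):</para>

```python
import math

def is_stop_line_candidate(p1, p2, rect, min_abs_angle=75.0):
    """A line is a stop-line candidate if it is nearly horizontal (0 degrees
    is defined as vertical, so |angle| must exceed 75 degrees) and both of
    its endpoints lie inside the search rectangle (left, top, right, bottom)."""
    (x1, y1), (x2, y2) = p1, p2
    angle = math.degrees(math.atan2(x2 - x1, y2 - y1))  # 0 deg = vertical line
    if angle > 90.0:                                    # fold into (-90, 90]
        angle -= 180.0
    elif -90.0 >= angle:
        angle += 180.0
    left, top, right, bottom = rect
    inside = all(right >= x >= left and bottom >= y >= top for x, y in (p1, p2))
    return abs(angle) > min_abs_angle and inside
```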
<para><link linkend="F9-12">Figure <xref linkend="F9-12" remap="9.12"/></link> shows a typical crossing situation. The left image visualizes the basic situation, and the middle image shows the search area as a rectangle. Here you can see that the stop line on the left side is not covered by the search area, so the algorithm does not recognize it as a stop line. In the right image, the stop line ends correctly on the middle road marking. The drawn line shows that the algorithm has found a stop line; due to the left road marking of the crossing road, the line ends outside the real stop line.</para>
<fig id="F9-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-12">Figure <xref linkend="F9-12" remap="9.12"/></link></label>
<caption><para>Detection of stop lines.</para></caption>
<graphic xlink:href="graphics/ch09_fig012.jpg"/>
</fig>
</section>
<section class="lev2" id="sec9-4-7">
<title>9.4.7 Coordinate Transformation</title>
<para>To control the car, the lateral deviation and the course angle are needed. Both are calculated by the controller of the camera. The lateral deviation is measured in meters and the course angle in degrees. The course angle is the angle of the driving lane as calculated by the camera; the lateral deviation is the distance from the car&#x02019;s center of gravity to the driving lane at the same level. Since the lateral deviation is needed in meters, the algorithm has to convert the pixel coordinates of the image into meters in the real world. The course angle can be calculated directly from the pixel coordinates in the image, but this method is error-prone.</para>
<para>There are two different methods to convert the pixels into meters.</para>
<para>Pixels can be converted via Equations (9.3) and (9.4).</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq9-3.jpg"/></para>
<para>In the equations, <emphasis>x</emphasis> and <emphasis>y</emphasis> are the coordinates in meters. <emphasis role="overline">&#x003B3;</emphasis> stands for the drift angle of the camera in the plane area and <emphasis role="overline">&#x1D703;</emphasis> stands for the pitch angle of the camera. &#x003B1; is the numerical aperture of the camera, <emphasis>u</emphasis> and <emphasis>v</emphasis> are the coordinates of one pixel in the image.</para>
<para>Using these equations, the complete image can be converted into real-world coordinates. The drawback of this method is that all parameters of the camera have to be known exactly; any difference between the numerical aperture in the equation and the exact physical aperture of the camera lens can cause large errors in the calculation. Furthermore, this method needs more calculation time on the target hardware. A major advantage is that the camera can be re-positioned during experimental research.</para>
<para>The second method is to store references for some pixels in lookup tables. For these pixels, the corresponding values in meters can be calculated or measured. This method requires much less calculation time but is also less precise. Moreover, the camera cannot be re-positioned during experimental research: every time the camera is re-positioned, the reference tables must be re-calculated.</para>
<para>Which method is preferable depends on the project&#x02019;s accuracy requirements and hardware. For this project, the second method is used. To meet the demands on accuracy, a reference is stored for every tenth pixel of the camera.</para>
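<para>A minimal sketch of such a lookup table; the projection function and step size stand in for the measured reference values:</para>

```python
def build_table(cols, rows, pixel_to_world, step=10):
    """Pre-compute world coordinates (in meters) for every `step`-th pixel.
    `pixel_to_world` stands in for the measured or calculated references."""
    return {(u, v): pixel_to_world(u, v)
            for u in range(0, cols, step)
            for v in range(0, rows, step)}

def lookup(table, u, v, step=10):
    """World coordinates of the nearest stored pixel at or below (u, v)."""
    return table[(u // step * step, v // step * step)]
```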
</section>
</section>
<section class="lev1" id="sec9-5">
<title>9.5 Control of the Vehicle</title>
<para>The driving dynamics of the vehicle are characterized by the linear single-track model of Ackermann. As <link linkend="F9-13">Figure <xref linkend="F9-13" remap="9.13"/></link>(a) shows, the model consists of a rear and a front wheel connected by an axle. To rotate the vehicle about its vertical axis, the steering angle is set at the front wheel.</para>
<fig id="F9-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-13">Figure <xref linkend="F9-13" remap="9.13"/></link></label>
<caption><para>Driving along a set path: Track model (a); Lateral deviation and heading angle (b).</para></caption>
<graphic xlink:href="graphics/ch09_fig0013.jpg"/>
</fig>
<para>To reduce the complexity of vehicle dynamics, three simplifications are made.</para>
<para>These are:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Neglect the air resistance, because the vehicle speed is very low;</para></listitem>
<listitem>
<para>Lateral forces on the wheels are linearized;</para></listitem>
<listitem>
<para>No roll of the vehicle about the x- and y-axes.</para></listitem></itemizedlist>
<para>Using these simplifications, the resulting model should differ only marginally from reality. Linking the transverse dynamics of the vehicle with the driving dynamics, the following relation can be derived:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq9-5.jpg"/></para>
<para>Because the equation is given in state-space form, a controller can be designed using tools such as MATLAB Simulink or Scilab Xcos.</para>
<para>In order to keep the vehicle on the track, quantities such as the heading angle, the yaw rate and the lateral deviation must be known. A gyroscope is used to measure the yaw rate of the vehicle. The lateral deviation and the course angle are calculated by the camera and sent to the main controller. Until the next image is analyzed, these values on the microcontroller stay the same; between two images, the course angle and the lateral deviation are recalculated after each motion. This is possible because the velocity and the yaw rate are known at any time. <link linkend="F9-13">Figure <xref linkend="F9-13" remap="9.13"/></link>(b) illustrates the relationship between the lateral deviation (&#x00394;Y) and the heading angle (&#x003B1;).</para>
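<para>The recalculation between two images can be sketched as a simple dead-reckoning step (small-angle kinematics assumed; the variable names are illustrative):</para>

```python
import math

def dead_reckon(lat_dev_m, heading_deg, v_mps, yaw_rate_dps, dt_s):
    """Update lateral deviation and heading between two camera images from
    the known velocity and the gyroscope's yaw rate."""
    heading_deg += yaw_rate_dps * dt_s                 # integrate the yaw rate
    lat_dev_m += v_mps * dt_s * math.sin(math.radians(heading_deg))
    return lat_dev_m, heading_deg
```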
</section>
<section class="lev1" id="sec9-6">
<title>9.6 Results</title>
<para>This section gives an overview of the project results.</para>
<para>The autonomous car was built with the hardware described above. Experiments on scaled test roads show that the car can drive autonomously. However, the tests also revealed the limitations of this prototype. The effort required for the image processing was underestimated: the on-board processor of the camera is not able to accomplish the necessary calculations in time. As a consequence, the maximum speed of the car has to be very low; otherwise, the control of the vehicle becomes unstable, resulting in a more or less random driving path. In addition, the car has problems with very sharp curves: the process of dividing the image is not dynamic, so in sharp curves the preset section becomes incorrect and the algorithm cannot calculate a correct driving path. Thanks to its laser sensors, the car is able to avoid collisions with obstacles.</para>
<para>To improve the performance of the car, the hardware for the image processing has to be upgraded. The image processing itself works stably; the problems derive from the calculation algorithm of the driving path. At this point in time, the algorithm does not contain the necessary interrupts for every situation on the road, but this drawback will be corrected in the second prototype.</para>
</section>
<section class="lev1" id="sec9-7">
<title>9.7 Conclusions</title>
<para>In this chapter, an autonomous vehicle with distributed data acquisition and control systems has been presented. For control, the vehicle has a number of independent sensors. The main sensor is a camera with a lane tracking algorithm, which contains edge detection and Hough transformation. The lane is verified by laser sensors in the front and side of the vehicle. It is planned to build a superordinate control system, which leads a group of autonomous vehicles using a wireless communication protocol.</para>
</section>
<section class="lev1" id="sec9-8">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>J. Canny, &#x02018;A Computational Approach to Edge Detection&#x02019;, IEEE 1986. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Canny%2C+%27A+Computational+Approach+to+Edge+Detection%27%2C+IEEE+1986%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Erhardt, &#x02018;Einf&#x00FC;hrung in die Digitale Bildverarbeitung&#x02019;, Offenburg 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Erhardt%2C+%27Einf%FChrung+in+die+Digitale+Bildverarbeitung%27%2C+Offenburg+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>W. Heiden, &#x02018;Kanten in Bildern - Filterung und Kantenerkennung&#x02019;, St. Augustin 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=W%2E+Heiden%2C+%27Kanten+in+Bildern+-+Filterung+und+Kantenerkennung%27%2C+St%2E+Augustin+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>B. J&#x00E4;hne, &#x02018;Digitale Bildverarbeitung und Bildgewinnung&#x02019;, Berlin 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=B%2E+J%E4hne%2C+%27Digitale+Bildverarbeitung+und+Bildgewinnung%27%2C+Berlin+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Jauernig, &#x02018;Einsatz von Algorithmen der Photogrammmetrie und Bildverarbeitung zur Einblendung spezifischer Lichtraumprofile in Videosequenzen&#x02019;, Hochschule Leipzig 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Jauernig%2C+%27Einsatz+von+Algorithmen+der+Photogrammmetrie+und+Bildverarbeitung+zur+Einblendung+spezifischer+Lichtraumprofile+in+Videosequenzen%27%2C+Hochschule+Leipzig+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Kant, &#x02018;Bildverarbeitungsmodul zur Fahrspurerkennung f&#x00FC;r ein autonomes Fahrzeug&#x02019;, Hochschule Hamburg 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Kant%2C+%27Bildverarbeitungsmodul+zur+Fahrspurerkennung+f%FCr+ein+autonomes+Fahrzeug%27%2C+Hochschule+Hamburg+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>L. F. Kirk, &#x02018;Sch&#x00E4;tzung der Brennweite mit Hilfe der Hough-Transformation&#x02019;, Fachhochschule, K&#x00F6;ln 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=L%2E+F%2E+Kirk%2C+%27Sch%E4tzung+der+Brennweite+mit+Hilfe+der+Hough-Transformation%27%2C+Fachhochschule%2C+K%F6ln+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>N. Kruse, &#x02018;Kameragest&#x00FC;tzte Fahrspurerkennung f&#x00FC;r autonome Modellfahrzeuge&#x02019;, Universit&#x00E4;t Hamburg 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=N%2E+Kruse%2C+%27Kameragest%FCtzte+Fahrspurerkennung+f%FCr+autonome+Modellfahrzeuge%27%2C+Universit%E4t+Hamburg+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. Lin&#x00DF;, &#x02018;Praktische Ausbildung und Training Qualit&#x00E4;tsmanagement Objekterkennung mit Hough-Transformation&#x02019;, Technische Universit&#x00E4;t Ilmenau. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+Lin%DF%2C+%27Praktische+Ausbildung+und+Training+Qualit%E4tsmanagement+Objekterkennung+mit+Hough-Transformation%27%2C+Technische+Universit%E4t+Ilmenau%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R. Maini, H. Aggarwal, &#x02018;Study and Comparison of Various Image Edge Detection Techniques&#x02019;, Punjabi University, India. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2E+Maini%2C+H%2E+Aggarwal%2C+%27Study+and+Comparison+of+Various+Image+Edge+Detection+Techniques%27%2C+Punjabi+University%2C+India%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>P. Sch&#x00F6;ley, Kantendetektoren, Technische Universit&#x00E4;t Dresden, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=P%2E+Sch%F6ley%2C+Kantendetektoren%2C+Technische+Universit%E4t+Dresden%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Unger, &#x02018;Untersuchung von Linien und Kantenextraktionsalgorithmen im Rahmen der Verifikation von Ackerlandobjekten&#x02019;, Universit&#x00E4;t Hannover, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Unger%2C+%27Untersuchung+von+Linien+und+Kantenextraktionsalgorithmen+im+Rahmen+der+Verifikation+von+Ackerlandobjekten%27%2C+Universit%E4t+Hannover%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Wagner, Kantenextraktion &#x02013; Klassische Verfahren, Universit&#x00E4;t Ulm, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Wagner%2C+Kantenextraktion+-+Klassische+Verfahren%2C+Universit%E4t%2C+Ulm%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. M. Wagner, &#x02018;Bestimmung der Kameraverzerrung mit Hilfe der Hough-Transformation&#x02019;, Fachhochschule K&#x00F6;ln, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+M%2E+Wagner%2C+G%2E+M%2E%2C+%27Bestimmung+der+Kameraverzerrung+mit+Hilfe+der+Hough-Transformation%27%2C+Fachhochschule+K%F6ln%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>L. Bergen, H. Burkhardt, Morphological image processing. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=L%2E+Bergen%2C+H%2E+Burkhardt%2C+Morphological+image+processing%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Wohlfeil, &#x02018;Detection and tracking of vehicles with a moving camera&#x02019;, Humboldt-Universit&#x00E4;t zu Berlin, Institut f&#x00FC;r Informatik; Deutsches Zentrum f&#x00FC;r Luft- und Raumfahrt, Institut f&#x00FC;r Verkehrsforschung, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Wohlfeil%2C+%27Detection+and+tracking+of+vehicles+with+a+moving+camera%27+Humbolt-Universit%E4t+zu+Berlin+Institut+f%FCr+Informatik%3B+Deutsches+Zentrum+f%FCr+Luft-+und+Raumfahrt+Institut+f%FCr+Verkehrsforschung%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>B. Hulin, &#x02018;Video-based obstacle detection in the clearance of the pantograph of electric railways&#x02019;, Fakult&#x00E4;t f&#x00FC;r Elektrotechnik und Informationstechnik, Technische Universit&#x00E4;t M&#x00FC;nchen, 2003. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=B%2E+Hulin+%27Video-based+obstacle+detection+clearance+of+the+pantograph+electric+railways%27%2C+Fakult%E4t+f%FCr+Elektrotechnik+und+Informationstechnick%2C+Technische+Universit%E4t+M%FCnchen%2C+2003%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Alberts, &#x02018;Vision for the project FAUST, Focus of Expansion/optical flow&#x02019;, Hamburg University of Applied Sciences, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Alberts%2C+%27Vision+for+the+project+FAUST%2C+Focus+of+Expansion%2Foptical+flow%27%2C+Hamburg+University+of+Applied+Sciences%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Tutorial Filter; Vision&#x00026;Control System Lighting Optics, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Tutorial+Filter%3B+Vision%26Control+System+Lighting+Optics%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Prof. Dr.-Ing. habil. G. Lin&#x00DF;, &#x02018;Practical education and training quality management, object detection with Hough transform&#x02019;, Technische Universit&#x00E4;t Ilmenau, Department of Mechanical Engineering, Department of Quality Assurance. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Prof%2E+Dr%2E+-Ing%2E+Habil+G%2E+Lin%DF%2C+%27Practical+education+and+training+quality+management%2C+object+detection+with+Hough+transform%27%2C+Technische+Universit%E4t+Ilmenau%2C+Department+of+Mechanical+Engineering%2C+Department+of+Quality+Assurance%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Wagner, Edge extraction, Classical methods; Seminar presentation on &#x0201C;Image segmentation and Computer Vision&#x0201D;, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Wagner%2C+Edge+extraction%2C+Classical+methods%3B+Seminar+presentation+on+%22Image+segmentation+and+computer+Vision%22%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>I. Nikolov, &#x02018;Adaptive camera parameters for optimum lane detection and tracking&#x02019;, Hamburg University of Applied Sciences, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=I%2E+Nikolov%2C+%27Adaptive+camera+parameters+for+optimum+lane+detection+and+tracking%27%2C+Hamburg+University+of+Applied+Sciences%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>V. Schaefer, A. Zipser, &#x02018;Algorithmic Applications Hough transform&#x02019;, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=V%2E+Schaefer%2C+A%2E+Zipser%2C+%27Algorithmic+Applications+Hough+transform%27%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Rathemacher, &#x02018;GPU-based detection of algebraic structures writable&#x02019;, Fachhochschule Wiesbaden, Fachbereich Design Informatik Medien, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2ERathemacher%2C+%27GPU-based+detection+of+algebraic+structures+writable%27%2C+Fachhochschule+Wiesbaden%2C+Fachbereich+Design+Informatik+Medien%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Pelkofer, &#x02018;Behavioral decision for autonomous vehicles with gaze control&#x02019;, Universit&#x00E4;t der Bundeswehr M&#x00FC;nchen, Fakult&#x00E4;t f&#x00FC;r Luft- und Raumfahrttechnik, Institut f&#x00FC;r Systemdynamik und Flugmechanik, 2003. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Pelkofer%2C+%27Behavioral+decision+for+autonomous+vehicles+with+gaze+control%27%2C+Universit%E4t+der+Bundeswehr+M%FCnchen%2C+Fakult%E4t+f%FCr+Luft-+und+Raumfahrttechnik%2C+Institut+f%FCr+Systemdynamik+und+Flugmechanik%2C+2003%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Berger, &#x02018;Lane inference system for lane algorithm Three Feature Based Lane Detection Algorithm (TFALDA)&#x02019;, Hamburg University of Applied Sciences, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2EBerger%2C+%27Lane+inference+system+for+lane+algorithm+Three+Feature+Based+Lane+Detection+Algorithm+%28TFALDA%29%27%2C+Hamburg+University+of+Applied+Sciences%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Broggi, &#x02018;A Massively Parallel Approach to Real-Time Vision-Based Road Markings Detection&#x02019;, Dipartimento di Ingegneria dell&#x02019;Informazione, Universit&#x00E0; di Parma. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Alberto+Broggi%2C+%27A+Massively+Parallel+Approach+to+Real-Time+Vision-Based+Road+Markings+Detection%27%2C+Dipartimento+di+ingengneria+dell%27Information+Universita+di+Parma%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Weide, &#x02018;Entwicklung einer Kameragest&#x00FC;tzten Fahrspurerkennung f&#x00FC;r ein autonomes Fahrzeug&#x02019;, University of Applied Sciences (Friedberg, Germany), 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Andreas+Weide%2C+%27Entwicklung+einer+Kameragest%FCtzten+Fahrspurerkennung+f%FCr+ein+autonomes+Fahrzeug%27%2C+University+of+Applied+Sciences+%28Friedberg%2C+Germany%29%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch10" label="10" xreflabel="10">
<title>Polymetric Sensing in Intelligent Systems</title>
<para><emphasis role="strong">Yu. Zhukov, B. Gordeev, A. Zivenko and A. Nakonechniy</emphasis></para>
<para>Admiral Makarov National University of Shipbuilding, Ukraine<break/>Corresponding author: Yu. Zhukov &lt;prof.zhukov@gmail.com&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>The authors examine the current relationship between the theory of polymetric measurements and the state of the art in intelligent system sensing. The chapter comments on the concepts and axioms of polymetric measurement theory, the corresponding monitoring information systems used in different technologies and some prospects for polymetric sensing in intelligent systems and robots. The application of the described concepts in technological processes ready to be controlled by intelligent systems is illustrated.</para>
<para><emphasis role="strong">Keywords:</emphasis> Polymetric signal, polymetric sensor, intelligent sensory system, control, axiomatic theory, modelling</para>
</section>
<section class="lev1" id="sec10-1">
<title>10.1 Topicality of Polymetric Sensing</title>
<para>For some time it has been widely recognized [1] that the future factory should be a smart facility where design and manufacture are strongly integrated into a single engineering process that enables &#x02018;right first time, every time&#x02019; production to take place. It seems completely natural to use intelligent robots and/or control systems at such factories to provide on-line, timely and precise assessment of the quality of the production process and to guarantee the match and fit, performance and functionality of every component of the product prototype that is created.</para>
<para>Wide deployment of sensor-based intelligent robots at the heart of future products and technology-driven systems may substantially accelerate the process of their technological convergence and lead to their introduction in similar processes and products.</para>
<para>At the same time it is well known [2] that intelligent robot creation and development is a rather fragmented task and there are different progress-restricting problems in each particular field of research. Numerous advanced research teams are doing their best to overcome these limitations for intelligent robot language, learning, reasoning, perception, planning and control.</para>
<para>Nowadays, only a few researchers are still developing the Ideal Rational Robot (IRR), as it is quite evident that the computations necessary to reach ideal rationality in real operating environments require far more time and far more productive processors. Processor productivity is growing continuously, but it remains insufficient for IRR tasks. Furthermore, robotic perception of all the necessary characteristics of real operating environments has not yet been sufficiently provided.</para>
<para>The number of corresponding sensors and instruments necessary for these robots is growing, but their sensing remains too complicated, expensive and time-consuming, and the IRR remains far from realistic for actual application.</para>
<para>H. A. Simon&#x02019;s concept of Bounded Rationality and the concept of Asymptotic Bounded Optimality, based on S. J. Russell&#x02019;s and D. Subramanian&#x02019;s approach of provably bounded-optimal agents, have formed a rather pragmatic and fruitful trend for the development of optimal programs.</para>
<para>One has to take into account the fact that time limitations are even stricter if we are trying to overcome them for both calculation and perception tasks. While immense progress has been made in each of these subfields in the last few decades, it is still necessary to comprehend how they can be integrated to produce a really effective intelligent robot.</para>
<para>The introduction of the concepts of agent-based methods and systems [3, 4] including holonic environment [5] and multi-agent perceptive agencies [6] jointly with the concept of polymetric measurements [7&#x02013;9] has engendered a strong incentive to evaluate all the potentialities of using polymetric sensing for intelligent robots, as this may be a precondition for generating real benefits in the field.</para>
<para>Notable practical success has been achieved during the wide deployment of SADCO<superscript>&#x000AE;</superscript> polymetric systems for monitoring a variety of quantitative and qualitative characteristics of different liquid and/or loose cargoes, using a single polymetric sensor to measure more than three characteristics of a cargo simultaneously within a single chronos-topos framework. Similar information systems have also been a successful tool for the online remote control of complex technological processes of production, storage and consumption of various technological media [8].</para>
<para>But, as indicated above, there are very specific requirements for sensing in intelligent robots. In fact, one of the most important restrictions is connected with very limited time-consumption for the input of real-time information concerning an intelligent robot and/or multi-agent control systems operational environment.</para>
<para>Thus, we face an actual and urgent need to integrate and combine these advanced approaches within the calculation and perception components of intelligent robots and/or multi-agent monitoring and control system design, starting from different (nearly diametrically opposite) initial backgrounds of each component and, by means of irrefutable arguments, arriving at jointly acceptable conclusions and effective solutions.</para>
</section>
<section class="lev1" id="sec10-2">
<title>10.2 Advanced Perception Components of Intelligent Systems or Robots</title>
<section class="lev2" id="sec10-2-1">
<title>10.2.1 Comparison of the Basics of Classical and Polymetric Sensing</title>
<para>Classical or industrial metrology is based on the contemporary Axiomatic Theory [10, 11].</para>
<para>In classical industrial metrology, it is presupposed that for practical measurement processes it is necessary to have some set &#x003A9; = &#x0007B;&#x003C3;, &#x003BB;,&#x02026;&#x0007D; of different instruments with various sensor transformation and construction systems.</para>
<para>But every particular instrument has a custom-made uniform scale for the assessment of the actual value of the object-specific characteristic under control. Let <emphasis>a</emphasis><subscript>&#x003C3;</subscript>(<emphasis>i</emphasis>) be a numerical function of two nonnumeric variables &#x02013; a physical object <emphasis>i</emphasis>&#x02208;&#x02135; and a specific instrument, i.e.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-1.jpg"/></para>
<para>This function is called a quantitative assessment of a physical quantity of an object.</para>
<para>In the general case for an industrial control system or for intelligent robot sensing process, it is necessary to have <emphasis>N</emphasis> instruments for each characteristic under monitoring at <emphasis>M</emphasis> possible locations of the components of the object under control. The more practical the application, the quicker we face the curse of multidimensionality for our control system.</para>
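<para>The scaling problem can be made concrete with a small count. A classical installation needs one instrument per characteristic per location, so the instrument count grows as <emphasis>N</emphasis> times <emphasis>M</emphasis>; a polymetric approach, assuming one sensor covers several characteristics at a location, needs far fewer. The following Python sketch is purely illustrative (the function names and numbers are assumptions, not the authors&#x02019; model):</para>
<programlisting>
```python
# Hypothetical sensor-count comparison for a monitoring system with
# N characteristics at each of M locations. Illustrative only.

def classical_sensor_count(n_characteristics: int, m_locations: int) -> int:
    """One dedicated instrument per characteristic per location."""
    return n_characteristics * m_locations

def polymetric_sensor_count(n_characteristics: int, m_locations: int,
                            per_sensor: int = 3) -> int:
    """One polymetric sensor covers several characteristics at once
    (the text cites more than three per sensor)."""
    sensors_per_location = -(-n_characteristics // per_sensor)  # ceiling division
    return sensors_per_location * m_locations

# Example: 6 characteristics at 10 locations.
print(classical_sensor_count(6, 10))   # 60
print(polymetric_sensor_count(6, 10))  # 20
```
</programlisting>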
<para>Polymetric measurement in general is the process of obtaining simultaneous assessments of a set of object physical quantities (more than two) using one special measuring transformer (a sensor). The first successful appraisal of the corresponding prototype instruments was carried out in 1988&#x02013;1992 on board three different vessels during full-scale sea trials. After the successful tests and the recognition of the instruments by customers and the classification societies, the theoretical background was generalized and presented to the scientific community [12&#x02013;14].</para>
<para>The latest results [7&#x02013;9] seem promising for intelligent robot sensing due to reduced time and financial pressure, simplified design and a reduced overall number of sensory components.</para>
<para>Summarizing the comparison of the basics of classical and polymetric measurement theories, it is essential to comment on another consequence of their axioms and definitions. The introduction of the principle of the simultaneous assessment of a physical quantity and its measurement from the same polymetric signal is one of the key provisions of the theory and of the design practice for developing appropriate instruments. The structure of an appropriate perceptive intelligent control or monitoring system should be changed correspondingly.</para>
</section>
<section class="lev2" id="sec10-2-2">
<title>10.2.2 Advanced Structure of Multi-Agent Intelligent Systems</title>
<para>In order for multi-agent control or monitoring systems and intelligent robots to satisfactorily fulfil the potential missions and applications envisioned for them, it is necessary to incorporate as many recent advances in the above described fields as possible within the real-time operation of the intelligent system or robot-controlled processes.</para>
<para>This is the challenge for the intelligent robotics engineer, because many advanced algorithms in this field still require too much computation time, despite the improvements made in recent years in microelectronics and algorithms. This especially concerns the problem of intelligent robot sensing (timely and precise perception).</para>
<para>The well-known variability over space (topos) and over time (chronos) is reflected in similar variability of the measured data [6, 7, 10], which underlies the advantages of using polymetric measurements.</para>
<para>That is why, in contrast to the multi-sensor perceptive agency concept [6], in which several measuring transformers each serve as the sensing part of a particular agent within the distributed multi-sensor control system (i.e. several perceptive agents in a complex perceptive agency), the use of one equivalent polymetric transformer for an equivalent perceptive agency is proposed.</para>
<para>The concept of a Polymetric Perceptive Agency (PPA) for intelligent system and robot sensing is schematically illustrated in <link linkend="F10-1">Figure <xref linkend="F10-1" remap="10.1"/></link>. Such a simplified structure of the PPA sub-agency of the Decision-making Agency (DMA) is intended for use in different industries and technologies (maritime, on-shore, robotics, etc. [9, 15]).</para>
<fig id="F10-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-1">Figure <xref linkend="F10-1" remap="10.1"/></link></label>
<caption><para>The main idea of the replacement of a distributed multi-sensor system by a polymetric perceptive agent.</para></caption>
<graphic xlink:href="graphics/ch10_fig001.jpg"/>
</fig>
<para>There are some optimistic practical examples of the successful deployment and long-term (more than 15 years) operation of industrial polymetric monitoring and control systems/agencies based on the polymetric measurement technique in different fields of manufacturing and transportation [8].</para>
<fig id="F10-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-2">Figure <xref linkend="F10-2" remap="10.2"/></link></label>
<caption><para>TDR Coaxial probe immersed into the liquid and the corresponding polymetric signal.</para></caption>
<graphic xlink:href="graphics/ch10_fig0002.jpg"/>
</fig>
</section>
</section>
<section class="lev1" id="sec10-3">
<title>10.3 Practical Example of Polymetric Sensing</title>
<para>Here we describe a practical case from the research of the Polymetric Systems Laboratory at the National University of Shipbuilding and LLC AMICO (Mykolaiv, Ukraine). The practical goal is to ensure the effective and safe control of the water level in the spent-nuclear-fuel cooling ponds of nuclear power stations during normal operation and emergency post-accident operation. Many level sensors are used by the control systems in the normal operation mode: floating-type, hydrostatic, capacitive, radar, ultrasonic, etc. [16]. But there are many problems concerning their installation and functioning under real post-accident conditions: high pressure and extremely high temperature, saturated steam and radiation, vibration and other disturbing factors are expected in the cooling pond in emergency mode.</para>
<para>Thus, high reliability and radiation resistance are the most important requirements for such level-sensing equipment. One of the most suitable sensing techniques in this case is the proposed modified Time Domain Reflectometry (TDR) &#x02013; see <link linkend="F10-2">Figure <xref linkend="F10-2" remap="10.2"/></link>.</para>
<para>In this type of level measurement, microwave pulses are conducted along a cable or rod probe <emphasis>1</emphasis> partially immersed in the liquid <emphasis>2</emphasis> and reflected by the product surface.</para>
<para>Sounding and reflected pulses are detected by the transducer. The transit time of the pulse <emphasis>t</emphasis><subscript>0</subscript> is a function of the distance from the level sensor to the surface of the liquid <emphasis>L</emphasis><subscript>0</subscript>. This time is measured and then the distance from the level sensor to the surface of the liquid is calculated according to the calibration function:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-2.jpg"/></para>
<para>which is usually presented as a linear function:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-3.jpg"/></para>
<para>where <emphasis>b</emphasis><subscript>0</subscript> and <emphasis>b</emphasis><subscript>1</subscript> are coefficients obtained during a calibration procedure.</para>
<para>From the physical point of view, the slope of this function stands for the speed of the electromagnetic wave, which propagates forward from the generator on the PCB, along the cable and the specially designed measuring probe, and back to the generator. It can be defined as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-4.jpg"/></para>
<para>where <emphasis>c</emphasis> is the speed of light in vacuum; <emphasis>&#x1D700;<subscript>0</subscript></emphasis> is the dielectric constant of the material through which the electromagnetic pulse propagates (close to 1 for air); the coefficient &#x00BD; accounts for the fact that the electromagnetic pulse travels double the length of the probe (forward and backward).</para>
<para>The presence and the value of the intercept <emphasis>b</emphasis><subscript>1</subscript> have several causes. One of them is that the electromagnetic pulse actually travels a distance greater than the measuring probe length. In the general case, the real calibration function is not linear [8].</para>
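<para>The linear calibration described above can be sketched numerically. A minimal Python illustration follows; the coefficient names correspond to the slope and intercept in the text, while all numeric values are hypothetical examples, not instrument data:</para>
<programlisting>
```python
# Minimal sketch of the linear TDR calibration: distance = slope * t0 + intercept.
# All numeric values are hypothetical examples, not instrument data.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tdr_slope(eps_r: float = 1.0) -> float:
    """Slope of the calibration function: half the propagation speed,
    because the pulse travels to the surface and back."""
    return 0.5 * C / eps_r ** 0.5

def level_distance(t0: float, b1: float = 0.0, eps_r: float = 1.0) -> float:
    """Distance from sensor to liquid surface for transit time t0 (seconds)."""
    return tdr_slope(eps_r) * t0 + b1

# Example: a 10 ns round-trip in air corresponds to roughly 1.5 m.
print(round(level_distance(10e-9), 3))  # 1.499
```
</programlisting>
<para>For a medium other than air, passing a larger relative permittivity lowers the slope, which is exactly the effect the vapour correction later in the chapter compensates for.</para>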
<section class="lev2" id="sec10-3-1">
<title>10.3.1 Adding the Time Scale</title>
<para>In this practical case, the basic approaches for constructing the polymetric signal are described based on the TDR technique. The concept of forming an informative pulse polymetric signal includes the following: generation of short pulses and sending them to a special measuring probe, receiving the reflected pulses and signal pre-processing for its final interpretation. It is worth mentioning that in terms of polymetrics, each of these steps is specially designed to increase the informativity and interpretability of the resulting signal.</para>
<para>To calculate the distance from the sensor to the surface of the liquid, it is therefore necessary to carry out a calibration procedure. The level-sensing procedure can be simplified by adding information to the initial &#x02018;classic&#x02019; signal in order to obtain its time scale. A stroboscopic transformation of the real signal is one of the necessary signal pre-processing stages of the level measurement procedure.</para>
<para>This transformation is required to produce an expanded time sampled signal for its future conversion to digital form. This transformation can be carried out with the help of a stroboscopic transformer based on two oscillators with frequencies <emphasis>f</emphasis><subscript>1</subscript> (the frequency of input signal) and <emphasis>f</emphasis><subscript>2</subscript> (the local oscillator frequency) that are offset by a small value &#x00394;<emphasis>f</emphasis> = <emphasis>f</emphasis><subscript>1</subscript> &#x02013; <emphasis>f</emphasis><subscript>2</subscript> [17]. The duration of the input signal is:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-5.jpg"/></para>
<para>As a result of this transformation, we have expanded signal duration:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-6.jpg"/></para>
<para>In this case, the relationship between the duration of transformed and the original signal is expressed by the stroboscopic transformation ratio:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-7.jpg"/></para>
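<para>The frequency relationships above can be sketched numerically. In the following minimal Python illustration, the oscillator values are hypothetical assumptions chosen only to show the scale of the time expansion:</para>
<programlisting>
```python
# Sketch of the stroboscopic time-expansion ratio described above.
# f1: input (sounding) repetition frequency; f2: local oscillator
# frequency, offset by a small df = f1 - f2. Values are hypothetical.

def expansion_ratio(f1: float, f2: float) -> float:
    """Stroboscopic transformation ratio k = f1 / (f1 - f2)."""
    df = f1 - f2
    assert df > 0, "local oscillator f2 must be slightly slower than f1"
    return f1 / df

def expanded_duration(t_signal: float, f1: float, f2: float) -> float:
    """Duration of the time-expanded signal seen by the ADC."""
    return t_signal * expansion_ratio(f1, f2)

# Example: f1 = 1 MHz, f2 = 999.9 kHz gives k = 10000,
# so a 1 microsecond window is stretched to 10 ms.
print(expansion_ratio(1e6, 999.9e3))  # 10000.0
```
</programlisting>
<para>The stretched signal can then be sampled by an ordinary ADC, which is the point of the transformation: nanosecond-scale structure becomes millisecond-scale.</para>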
<para>The next processing step of the transformed signal is the conversion of this inherently analog signal to digital form with the help of the analog-to-digital converter (ADC).</para>
<para>The time scale and delays during analog-to-digital conversion are known, so it is possible to count ADC conversion cycles and to calculate the time scale of the converted signal. However, this is not convenient, because the conversion cycle duration depends on the ADC parameters, clock frequency value and stability, etc. In order to avoid using the conversion cycle count and the ADC parameters, it is possible to &#x02018;add&#x02019; additional information about the time scale of the converted signal.</para>
<para>In this case, we can add a special marker which helps to measure the reference time interval &#x02013; see <link linkend="F10-3">Figure <xref linkend="F10-3" remap="10.3"/></link>.</para>
<fig id="F10-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-3">Figure <xref linkend="F10-3" remap="10.3"/></link></label>
<caption><para>Time diagrams for polymetric signal formation using additional reference time intervals.</para></caption>
<graphic xlink:href="graphics/ch10_fig0003.jpg"/>
</fig>
<para>The main idea is that the ADC conversion time <emphasis>T<subscript>ADC</subscript></emphasis> must be greater than the duration of the signal from the output of the stroboscopic transformer <emphasis>T<subscript>TS</subscript></emphasis>, in order to obtain at least two sounding pulses in the resulting digitized signal. It is necessary to count the delays between the two sounding pulses &#x003C4;<emphasis><subscript>ZZ</subscript></emphasis> and between the first sounding pulse and the reflected pulse &#x003C4;<emphasis><subscript>ZS</subscript></emphasis> (in terms of the number of ADC readings).</para>
<para>The delay &#x003C4;<emphasis><subscript>ZZ</subscript></emphasis>, expressed in ADC readings count, corresponds to the time delay <emphasis>T<subscript>TS</subscript></emphasis> (in seconds). The next equation should be used to calculate the time scale of the transformed and digitized signal:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-8.jpg"/></para>
<para>It is possible to calculate the time value of the delay between the pulses <emphasis>t<subscript>0</subscript></emphasis> (see <link linkend="F10-2">Figure <xref linkend="F10-2" remap="10.2"/></link>):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-9.jpg"/></para>
<para>Finally, to calculate the distance to the surface of the liquid, it is necessary to use the equation:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-10.jpg"/></para>
<para>where <emphasis>b</emphasis><subscript>1</subscript> (zero shift) is calculated using information on the PCB layout and generator parameters or during a simple calibration procedure.</para>
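<para>The counting scheme above can be sketched in a few lines of Python. This is a minimal illustration under the assumption that the real time interval between the two sounding pulses is known; the function names and all numeric values are assumptions for the sketch, not the instrument firmware:</para>
<programlisting>
```python
# Sketch of recovering t0 from ADC reading counts, as described above.
# tau_zz: readings between the two sounding pulses (the reference
# marker); tau_zs: readings between the sounding pulse and the surface
# echo; t_ref: the known real time between the two sounding pulses.
# All names and values are illustrative assumptions.

C = 299_792_458.0  # speed of light in vacuum, m/s

def pulse_delay(tau_zs: int, tau_zz: int, t_ref: float) -> float:
    """Real delay t0 between sounding and reflected pulse (seconds).
    The ratio of reading counts cancels the ADC clock entirely."""
    return t_ref * tau_zs / tau_zz

def level_from_counts(tau_zs: int, tau_zz: int, t_ref: float,
                      b1: float = 0.0, eps_r: float = 1.0) -> float:
    """Distance to the liquid surface from ADC reading counts."""
    t0 = pulse_delay(tau_zs, tau_zz, t_ref)
    return 0.5 * C / eps_r ** 0.5 * t0 + b1

# Example: 5000 readings between sounding pulses spanning 100 ns,
# echo after 500 readings, so t0 = 10 ns, i.e. about 1.5 m in air.
print(round(level_from_counts(500, 5000, 100e-9), 3))
```
</programlisting>
<para>Note that only ratios of reading counts enter the result, which is the point made in the text: the ADC clock frequency and its stability drop out of the calculation.</para>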
</section>
<section class="lev2" id="sec10-3-2">
<title>10.3.2 Adding the Information about the Velocity of the Electromagnetic Wave</title>
<para>In the previous paragraph, a special marker was added to the signal to obtain the time scale of the signal and to easily calculate the distance to the surface of the liquid in normal operating mode.</para>
<para>But, as mentioned above, the emergency operating mode can be characterized by high temperature, high pressure and the presence of saturated steam in the cooling pond. Steam under high pressure and temperature slows down the propagation of the radar signal, which can cause an additional measurement error. It is possible to add further information to the signal for automated correction of the measurement. A special reflector or several reflectors are used in this case [8]. Almost any step discontinuity of the wave impedance can serve as the required reflector (probes with a stepwise change of impedance, fixing elements, etc.). The reflector is placed at a fixed and known distance <emphasis>L</emphasis><subscript>R</subscript> from the generator and receiver of short electromagnetic pulses (GR) in the vapour dielectric &#x02013; see <link linkend="F10-4">Figure <xref linkend="F10-4" remap="10.4"/></link>.</para>
<fig id="F10-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-4">Figure <xref linkend="F10-4" remap="10.4"/></link></label>
<caption><para>Disposition of the measuring probe in the tank, position of the reflector and corresponding signals for the cases with air and vapour.</para></caption>
<graphic xlink:href="graphics/ch10_fig004.jpg"/>
</fig>
<para>If there is vapour in the tank, the pulses reflected from the special reflector and from the surface of the liquid are shifted according to the change of the dielectric constant of vapour (as compared to the situation when there is no vapour in the tank).</para>
<para>The delays between the sounding pulse and the pulse reflected from the special reflector &#x003C4;<emphasis><subscript>R</subscript></emphasis> and between the sounding pulse and the pulse reflected from the surface of the liquid &#x003C4;<emphasis><subscript>ZS</subscript></emphasis> for cases with air and vapour are different.</para>
<para>The dielectric constant &#x1D700;<subscript>0</subscript> can be calculated using the known distance <emphasis>L<subscript>R</subscript>.</emphasis> Therefore, the distance to the surface of the liquid <emphasis>L</emphasis><subscript>0</subscript> can be calculated using the corrected dielectric constant value:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq10-11.jpg"/></para>
<para>As can be seen from the equation, the result of the measurement depends on the time intervals between the sounding and reflected pulses and on the reference distance between the generator and the reflector.</para>
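<para>A minimal Python sketch of this reference-reflector correction follows. It assumes, as in the text, that the distance to the reflector is known exactly; function names and numeric values are illustrative assumptions:</para>
<programlisting>
```python
# Sketch of the reference-reflector correction described above: the
# known distance L_R to the reflector calibrates the actual propagation
# speed (slowed by vapour), so no explicit dielectric constant is
# needed for the level itself. Names and values are illustrative.

def corrected_level(tau_zs: float, tau_r: float, l_r: float) -> float:
    """Distance to the liquid surface, corrected for vapour.

    tau_zs: delay sounding pulse to surface echo (any consistent unit)
    tau_r:  delay sounding pulse to reflector echo (same unit)
    l_r:    known distance to the reflector, metres
    """
    # Both delays scale with the same (unknown) wave speed, so the
    # speed cancels: L0 / L_R = tau_zs / tau_r.
    return l_r * tau_zs / tau_r

def apparent_permittivity(tau_r: float, l_r: float,
                          c: float = 299_792_458.0) -> float:
    """Effective dielectric constant of the vapour layer,
    from v = 2 * L_R / tau_r and eps = (c / v) ** 2."""
    v = 2.0 * l_r / tau_r
    return (c / v) ** 2

# Example: reflector at 1.0 m, echo delays of 8 ns (reflector) and
# 20 ns (surface) place the surface at ~2.5 m regardless of vapour.
print(corrected_level(20e-9, 8e-9, 1.0))  # ~2.5
```
</programlisting>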
<para>The above-described example of polymetric signal formation shows that a single signal carries information about its own time scale, the velocity of propagation of electromagnetic waves in the dielectric layers and the linear dimensions of these layers. This list of measurable parameters can easily be extended by using additional information in the existing signals and building new &#x02018;hyper-signals&#x02019; [9] for measuring other characteristics of controllable objects (e.g. on the basis of the spectral analysis of these signals, the controllable liquid can be classified and some quality characteristics of the liquid can be calculated).</para>
</section>
</section>
<section class="lev1" id="sec10-4">
<title>10.4 Efficiency of Industrial Polymetric Systems</title>
<section class="lev2" id="sec10-4-1">
<title>10.4.1 Naval Application</title>
<para>One of the first polymetric sensory systems was designed for on-board loading and safety control (LASCOS) of tankers and of fishing, offshore, supply and research vessels. The prototypes of these systems were developed, industrialized and tested during full-scale sea trials in the early 1990s [18]. These systems have more developed polymetric sensing subsystems (from the &#x0201C;<emphasis>topos-chronos</emphasis>&#x0201D; compatibility and accuracy points of view). That is why they also successfully provide commercial control of cargo handling operations.</para>
<para>The structure of the hardware part of LASCOS for a typical offshore supply vessel is presented in <link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link>. It consists of the following: operator workplace (1); a radar antenna (2); an on-board anemometer (3); the radar display and a keyboard (4); a set of sensors for ship draft monitoring (5); a set of polymetric sensors for fuel-oil, ballast water and other liquid cargo quantity and quality monitoring and control (6); a set of polymetric sensors for liquefied LPG or LNG cargo quantity and quality monitoring and control (7); switchboards of the subsystem for actuating devices and operating mechanisms control (8); a basic electronic block of the subsystem for liquid, liquefied and loose cargo monitoring and control (9); a block with the sensors for real-time monitoring of ship dynamic parameters (10).</para>
<fig id="F10-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link></label>
<caption><para>Example of the general structure of LASCOS hardware components and elements.</para></caption>
<graphic xlink:href="graphics/ch10_fig005.jpg"/>
</fig>
<para>The structure of the software part of the typical sensory intelligent LASCOS is presented in <link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link>.</para>
<para>It consists of three main elements: a sensory monitoring agency (SMA), which includes three subordinate monitoring agencies &#x02013; SSM (sea state, e.g. wind and wave model parameters), SPM (ship parameters) and NEM (navigation environment parameters); an information environment agency (INE), including fuzzy holonic models of the ship state (VSM) and weather conditions (WCM), as well as data (DB) and knowledge (KB) bases; and, last but not least, an operator interface agency (OPIA), which provides the decision-making person (DMP) with the necessary visual and digital information.</para>
<fig id="F10-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link></label>
<caption><para>The general structure of LASCOS software elements and functions.</para></caption>
<graphic xlink:href="graphics/ch10_fig006.jpg"/>
</fig>
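The three-agency decomposition described above can be caricatured in a few lines of code. This is purely an illustrative sketch of the data flow (SMA gathers readings, INE stores and models them, OPIA presents them to the DMP); the class and field names are assumptions made here, and the real system is a distributed fuzzy holonic design rather than three local objects.

```python
# Illustrative sketch of the SMA -> INE -> OPIA data flow.
from dataclasses import dataclass, field

@dataclass
class SensoryMonitoringAgency:            # SMA: SSM + SPM + NEM readings
    sea_state: dict = field(default_factory=dict)    # SSM: wind/wave model
    ship_params: dict = field(default_factory=dict)  # SPM: draft, list, trim
    nav_env: dict = field(default_factory=dict)      # NEM: navigation env.

@dataclass
class InformationEnvironmentAgency:       # INE: VSM/WCM models plus DB/KB
    database: list = field(default_factory=list)

    def refresh(self, sma: SensoryMonitoringAgency) -> dict:
        snapshot = {**sma.sea_state, **sma.ship_params, **sma.nav_env}
        self.database.append(snapshot)    # retrospective storage
        return snapshot                   # would feed the VSM/WCM models

class OperatorInterfaceAgency:            # OPIA: presents data to the DMP
    def render(self, snapshot: dict) -> str:
        return ", ".join(f"{k}={v}" for k, v in sorted(snapshot.items()))

sma = SensoryMonitoringAgency(ship_params={"draft_m": 4.2, "trim_deg": 0.5})
ine = InformationEnvironmentAgency()
print(OperatorInterfaceAgency().render(ine.refresh(sma)))
# prints "draft_m=4.2, trim_deg=0.5"
```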
<para>Unfortunately, until now, the above-mentioned challenges have not been combined in an integrated model that applies cutting-edge, novel simulation techniques. Agent-based computations are adaptive to information changes and disruptions, exhibit intelligence and are inherently distributed [4]. Holonic agents can help design and operational control processes recover autonomously and react to real-time environmental perturbations.</para>
<para>Agents are vital in a ship operations monitoring and control context, as ship safety is subject to inherently distributed and stochastic perturbations of the ship&#x02019;s own state parameters and to external weather excitations. Agents are welcome in the on-board LASCOS system design because they provide properties such as autonomy, responsiveness, distributedness, openness and redundancy [15]. They can be designed to deal with uncertain and/or incomplete information and knowledge, which is extremely topical for fuzzy LASCOS as a whole. On the other hand, the problem of their sensing has not yet been systematically considered.</para>
<section class="lev3" id="sec10-4-1-1">
<title>10.4.1.1 Sensory monitoring agency SMA</title>
<para>As mentioned above, all initial real-time information for LASCOS and its sensory system is provided to the DMP by the holonic agent-based sensory monitoring agency (SMA), which includes three subordinate monitoring agencies &#x02013; SSM, SPM and NEM (see <link linkend="F10-7">Figure <xref linkend="F10-7" remap="10.7"/></link>).</para>
<fig id="F10-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-7">Figure <xref linkend="F10-7" remap="10.7"/></link></label>
<caption><para>The general structure of LASCOS holonic agencies and their functions.</para></caption>
<graphic xlink:href="graphics/ch10_fig007.jpg"/>
</fig>
<para>The simultaneous functioning of all the structures presented in <link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link>, <link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link> and <link linkend="F10-7">Figure <xref linkend="F10-7" remap="10.7"/></link>, i.e. the hardware, the logic and process algorithms, and the corresponding software components and elements of LASCOS, provides the online formation of the information environment for the system operator or seafarer, the decision-maker.</para>
<para>The polymetric sensing of the computer-aided control and monitoring system for cargo and ballast tanks is designed to ensure effective and safe remote control of ship operations. This system is one of the most critical on board a ship or any other ocean-going vehicle, and its operation must be finely tuned to the requirements of the marine industry. The polymetric system is a high-tech solution for the online monitoring and control of the state of the ship&#x02019;s liquid-cargo and ballast-water tanks. These systems are also designed for monitoring ship safety parameters and for controlling docking operations.</para>
<para>The solution provides safe and reliable operations in real-life harsh conditions, reduces risks for vessels and generates both operational and financial benefits for customers by providing:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>The manual, automated or automatic remote control of cargo handling operations;</para></listitem>
<listitem>
<para>The on-line monitoring of level, volume, temperature, mass, centre of mass for liquid, loose cargo or ballast water in all tanks;</para></listitem>
<listitem>
<para>The monitoring and control of list and trim, draft line, bending and sagging lines;</para></listitem>
<listitem>
<para>The monitoring of the vessel position during operation or docking; level indication and alarm of all monitored tanks;</para></listitem>
<listitem>
<para>The remote control of cargo valves and pumps according to the actual operational conditions; the feeding of control signals to actuators, valves and pumps; the monitoring of hermetic dryness of dry tanks and conditions of water ingress in the possible damage conditions;</para></listitem>
<listitem>
<para>The audible and visual warning about risky deviations of process parameters from the stipulated values;</para></listitem>
<listitem>
<para>The registration and storage of retrospective information on operations process parameters, equipment state and operator actions (&#x0201C;Black Box&#x0201D; functions);</para></listitem>
<listitem>
<para>Comprehensive data and knowledge bases management;</para></listitem>
<listitem>
<para>The testing and the diagnostics of all equipment operability.</para></listitem></itemizedlist>
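The audible/visual warning function listed above reduces, at its core, to comparing monitored parameters against stipulated limits. The sketch below illustrates that idea only; the parameter names and threshold values are hypothetical, not taken from the chapter.

```python
# Minimal deviation-alarm check for monitored process parameters.
def check_deviations(readings, limits):
    """Return alarm messages for readings outside their stipulated limits."""
    alarms = []
    for name, value in readings.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            alarms.append(f"ALARM: {name}={value} outside [{lo}, {hi}]")
    return alarms

# Hypothetical tank limits and current readings.
limits = {"ballast_level_m": (0.2, 5.8), "fuel_temp_C": (-10.0, 60.0)}
readings = {"ballast_level_m": 5.95, "fuel_temp_C": 21.0}
for msg in check_deviations(readings, limits):
    print(msg)   # one alarm: ballast level above its upper limit
```

In the real system each alarm would also be logged to the retrospective "Black Box" store and routed to the audible/visual annunciators.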
</section>
<section class="lev3" id="sec10-4-1-2">
<title>10.4.1.2 Information Environment Agency INE</title>
<para>The information environment agency (INE) includes the fuzzy holonic models of ship state (VSM), the weather conditions model (WCM), and also data (DB) and knowledge (KB) bases. The central processor of the LASCOS system processes all the necessary initial information concerning ship design and structure in all possible loading conditions using the digital virtual ship model (VSM).</para>
<para>The information concerning the actual distribution of liquid and loose cargoes (including their masses and centres of gravity) is provided by the sensory monitoring agency (SMA). It is combined with information on the actual weather and navigation environment, which is converted into the actual weather condition model (WCM). The information, permanently refreshed by the SMA, is stored in the database and used in real time by the VSM and WCM models [18].</para>
</section>
<section class="lev3" id="sec10-4-1-3">
<title>10.4.1.3 Operator Interface Agency OPIA</title>
<para>The operator interface agency (OPIA) of the LASCOS system provides the decision-maker with all the necessary visual and digital information in the form most convenient for supporting high-quality, efficient and timely decisions. Examples of these interfaces are presented in <link linkend="F10-8">Figure <xref linkend="F10-8" remap="10.8"/></link> and <link linkend="F10-9">Figure <xref linkend="F10-9" remap="10.9"/></link>.</para>
<para>One of the functions of the LASCOS system is to predict safe combinations of ship speed and course in the actual rough-sea weather conditions. On the safe-storming diagram, the seafarer sees three coloured zones of &#x0201C;speed-course&#x0201D; combinations: the green zone of completely safe combinations (the grey zone in <link linkend="F10-8">Figure <xref linkend="F10-8" remap="10.8"/></link>); the dangerous red zone of combinations leading to ship capsizing with a probability above 95% (the black zone in <link linkend="F10-8">Figure <xref linkend="F10-8" remap="10.8"/></link>); and the yellow zone of combinations causing ship motions intensive enough to prevent normal navigation and other operational activities (the white zone in <link linkend="F10-8">Figure <xref linkend="F10-8" remap="10.8"/></link>).</para>
<fig id="F10-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-8">Figure <xref linkend="F10-8" remap="10.8"/></link></label>
<caption><para>Safe Storming Diagram Interface of LASCOS.</para></caption>
<graphic xlink:href="graphics/ch10_fig008.jpg"/>
</fig>
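The zone logic of the safe-storming diagram can be summarized as a simple classification of each speed-course pair. The rule below is a deliberately crude stand-in: in the real LASCOS the zones come from ship-motion and capsizing-probability models, and the inputs assumed here (a capsize probability and a motion-severity index per speed-course pair) are illustrative only.

```python
# Toy classifier mapping predicted risk figures to diagram zone colours.
def storming_zone(capsize_prob: float, motion_severity: float) -> str:
    """Classify one speed-course combination by its predicted risk."""
    if capsize_prob > 0.95:
        return "red"      # dangerous: probable capsizing
    if motion_severity > 0.5:
        return "yellow"   # motions prevent normal operational activities
    return "green"        # completely safe combination

assert storming_zone(0.99, 0.1) == "red"
assert storming_zone(0.10, 0.7) == "yellow"
assert storming_zone(0.05, 0.2) == "green"
```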
<para>The centralized monitoring and control of the ship&#x02019;s operational processes is performed via the operator interface on the main LASCOS processor screen located in the wheelhouse. The advanced operator interface plays an increasingly important role in the light of the visual management concept. The proposed operator interface for LASCOS provides all traditional man-machine control functionality. Moreover, it is clearly structured, powerful, ergonomic and easy to understand; it also produces an immediate forecast of ship behaviour under the control action chosen by the operator, thus preventing overly risky and unsafe decisions. It enforces all control-process rules and requirements, consolidating all controllable factors for better operational efficiency and reducing the influence of the human factor on decision-making. The interface also ensures system security via personal password profiles for nominated, responsible and thoroughly trained persons, preventing incorrect usage of the equipment and unsafe conditions of ship operations.</para>
</section>
<section class="lev3" id="sec10-4-1-4">
<title>10.4.1.4 Advantages of the polymetric sensing</title>
<para>The described example of a sensory system for a typical offshore supply vessel (presented in <link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link>) can be used to demonstrate the advantages of polymetric sensing.</para>
<para>The typical cargo sensory system of an offshore supply vessel consists of the following parts: a set of sensors for monitoring the quantity and quality of fuel oil, liquefied LPG or LNG, ballast/drinking water and other liquid cargoes; and a set of sensors for monitoring the quantity and quality of bulk cargo.</para>
<fig id="F10-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-9">Figure <xref linkend="F10-9" remap="10.9"/></link></label>
<caption><para>The Main Window of Ballast System Operations Interface.</para></caption>
<graphic xlink:href="graphics/ch10_fig009.jpg"/>
</fig>
<para>Each tank is designated for a particular cargo type and equipped with the required sensors. For example, for measuring diesel-oil parameters, the corresponding tanks are equipped with level sensors (featuring separation-level measurement) and temperature sensors. The total numbers of tanks and of the corresponding sensors for a typical supply vessel with a traditional sensory system are shown in <link linkend="T10-1">Table <xref linkend="T10-1" remap="10.1"/></link>.</para>
<table-wrap position="float" id="T10-1">
<label><link linkend="T10-1">Table <xref linkend="T10-1" remap="10.1"/></link></label>
<caption><para>Quantity and Sensor Types for the Traditional Cargo Sensory System (Example)</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="bottom" align="left"><break/><break/>Cargo Type</td>
<td valign="bottom" align="left"><break/>Measurable Parameters</td>
<td valign="bottom" align="left"><break/>Tanks Number</td>
<td valign="bottom" align="left"><break/>Sensors Number/Tank</td>
<td valign="bottom" align="left">Total Sensors Number</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Diesel oil</td>
<td valign="top" align="left">Level in the tank, presence of water in the tank, temperature</td>
<td valign="top" align="left">6</td>
<td valign="top" align="left">3</td>
<td valign="top" align="left">18</td>
</tr>
<tr>
<td valign="top" align="left">LPG</td>
<td valign="top" align="left">Level in the tank, quantity of the liquid and vapor gas, temperature</td>
<td valign="top" align="left">1</td>
<td valign="top" align="left">3</td>
<td valign="top" align="left">3</td>
</tr>
<tr>
<td valign="top" align="left">Ballast water</td>
<td valign="top" align="left">Level in the tank</td>
<td valign="top" align="left">6</td>
<td valign="top" align="left">1</td>
<td valign="top" align="left">6</td>
</tr>
<tr>
<td valign="top" align="left">Drinking water</td>
<td valign="top" align="left">Level in the tank, presence of other liquids in the tank</td>
<td valign="top" align="left">6</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">12</td>
</tr>
<tr>
<td valign="top" align="left">Bulk cargo</td>
<td valign="top" align="left">Level in the tank, quality parameter<break/>(e.g. moisture content)</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">4</td>
</tr>
<tr>
<td valign="top" align="left">Total</td>
<td valign="top" align="left"></td>
<td valign="top" align="left">21</td>
<td valign="top" align="left"></td>
<td valign="top" align="left">43</td></tr>
</tbody>
</table>
</table-wrap>
<para>All the information acquired from the sensors must be pre-processed before the final calculation of the required cargo and ship parameters in the computing system. Each sensor requires power and communication lines, acquisition devices and/or interface transformers (e.g. current loop into RS-485 MODBUS), and so on.</para>
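To illustrate the per-channel overhead just mentioned, consider one common interface-transformer step: scaling a 4-20 mA current-loop reading into an engineering value before it is placed on an RS-485 MODBUS bus. The conversion below is a standard industrial convention assumed here for illustration; the span value is hypothetical.

```python
# Convert a 4-20 mA transmitter reading to a tank level in metres.
def current_loop_to_level(i_ma: float, span_m: float) -> float:
    """Linear 4-20 mA scaling; currents outside the live zero range
    indicate a loop fault rather than a valid measurement."""
    if not 4.0 <= i_ma <= 20.0:
        raise ValueError(f"loop fault or out-of-range current: {i_ma} mA")
    return (i_ma - 4.0) / 16.0 * span_m

print(current_loop_to_level(12.0, span_m=6.0))  # mid-scale -> 3.0
```

Every traditional channel repeats some variant of this acquisition-and-scaling chain, which is precisely the overhead the single-sensor polymetric channel avoids.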
<para>In contrast to classical systems, the polymetric sensory system requires only one sensor to measure all the required cargo parameters in a tank. Therefore, if we assume that the traditional and polymetric sensory systems are equivalent in measurement quality and reliability (i.e. interchangeable without any loss of measurement information), the polymetric system obviously has the advantage in the number of measurement channels.</para>
<para>The cost criterion can be used to compare the efficiency of the traditional and polymetric sensory systems. Denoting the cost of one measurement channel (the sensor + communication/power supply lines + transformers/transmitters) as <emphasis>C<subscript>TMS</subscript></emphasis> and <emphasis>C<subscript>PMS</subscript></emphasis>, and the cost of the processing complex as <emphasis>C<subscript>TPC</subscript></emphasis> and <emphasis>C<subscript>PPC</subscript></emphasis>, the costs of the traditional sensory system <emphasis>C<subscript>TSS</subscript></emphasis> and of the polymetric sensory system <emphasis>C<subscript>PSS</subscript></emphasis> can be estimated as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg230.jpg"/></para>
<para>where <emphasis>N<subscript>TSS</subscript></emphasis> and <emphasis>N<subscript>PSS</subscript></emphasis> are the numbers of sensors used in the traditional and polymetric sensory systems, respectively.</para>
<para>Assuming that</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg230-1.jpg"/></para>
<para>the relative efficiency <emphasis>E<subscript>TSS/PSS</subscript></emphasis> of the polymetric sensory system compared with the traditional one can be roughly estimated from the costs of the equivalent sensory systems [8]:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg230-2.jpg"/></para>
<para>As can be seen from the supply-vessel example described above, the efficiency of the polymetric sensory system is twice that of the traditional one (<emphasis>E<subscript>TSS/PSS</subscript></emphasis> = 44/22 = 2). It is worth mentioning that this efficiency comparison is very rough and is used only to demonstrate the main advantage of the polymetric sensory system.</para>
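The 44/22 figure can be reproduced from the sensor counts in Table 10.1 under one simplifying assumption, made here for illustration: every measurement channel and each processing complex costs one unit (i.e. C_TMS = C_PMS = C_TPC = C_PPC), with the polymetric system using one sensor per tank.

```python
# Rough cost-efficiency estimate for the supply-vessel example.
N_TSS, N_PSS = 43, 21     # sensor counts: traditional (Table 10.1) vs.
                          # polymetric (one sensor per each of 21 tanks)
C_CH, C_PC = 1.0, 1.0     # assumed unit costs: channel, processing complex

C_TSS = N_TSS * C_CH + C_PC   # traditional system cost  -> 44 units
C_PSS = N_PSS * C_CH + C_PC   # polymetric system cost   -> 22 units
print(C_TSS / C_PSS)          # E_TSS/PSS -> 2.0
```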
</section>
<section class="lev3" id="sec10-4-1-5">
<title>10.4.1.5 Floating dock operation control system</title>
<para>Another example of naval polymetric sensory systems is a computer-aided floating dock ballasting process control and monitoring system (CCS DBS) with the main interface window presented in <link linkend="F10-9">Figure <xref linkend="F10-9" remap="10.9"/></link>.</para>
<para>These systems are designed to ensure effective and safe control of docking operations in automatic remote mode. The ballast system is one of the most critical systems of a floating dock, and its operation must be finely coordinated with the requirements of the marine industry. The polymetric system enables high-tech solutions supporting safe control and monitoring of the dock ballast system. This solution provides safe and reliable operation of dock facilities in real-life harsh conditions, reduces risks for vessels and generates both operational and financial benefits for customers.</para>
<para>The main system interface window (see <link linkend="F10-9">Figure <xref linkend="F10-9" remap="10.9"/></link>) contains the technological equipment layout and displays the actual status of a rudder, valves, pumps, etc. The main window consists of the main menu, the toolbar, the information panel and also the technological layout containing control elements.</para>
<para>It is possible to start, change parameters or stop technological processes by clicking a particular control element. The user interface enables the operator to efficiently supervise and control every detail of any technological process. All information concerning ship safety and operation monitoring and control, event and alarm management, database management or message control is structured in functional windows.</para>
</section>
<section class="lev3" id="sec10-4-1-6">
<title>10.4.1.6 Onshore applications</title>
<para>Another example of successful polymetric sensing in computer-aided control and monitoring systems concerns various marine terminals (crude oil, light and low-volatility fuel, diesel, liquefied petroleum gas, grain) and even bilge-water cleaning shops. Such sensing allows simultaneous monitoring of practically all the necessary parameters using one sensor per tank, silo or other reservoir.</para>
<para>Namely, a single polymetric sensor measures the following set of characteristics:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>The level, separation level of the non-mixed media and the online control of upper/lower critical levels and volume of the corresponding cargo in each tank;</para></listitem>
<listitem>
<para>The temperature field in the media, the temperature of the product at particular points;</para></listitem>
<listitem>
<para>Density, octane/cetane number of a fuel, propane-butane proportion in petroleum gas, presence and percentage of water in any mixture or solution (including aggressive chemicals &#x02013; acids, alkalis, etc.).</para></listitem></itemizedlist>
<para>As a result, a considerable increase of the measuring performance factor <emphasis>E<subscript>TSS/PSS</subscript></emphasis> (up to 4&#x02013;6 times [8]) was achieved in each application system, with an essential concurrent reduction of the number of instruments and measuring channels of the monitoring system. All the customers (more than 50 sites in Ukraine, Russia, Uzbekistan, etc.) reported commercial benefits after the deployment of serial SADCO<superscript>TM</superscript> systems.</para>
</section>
<section class="lev3" id="sec10-4-1-7">
<title>10.4.1.7 Special applications</title>
<para>Special applications of polymetric sensing in control and monitoring systems have been developed for the following: real-time remote monitoring of the motor-lubrication-oil production process (quantitative and qualitative control of the components as functions of time and of real temperature deviation during production); real-time remote control of aggressive chemicals at a nuclear power station water-purification shop; and control of water parameters in the primary coolant circuit of a nuclear power station in normal and post-emergency operation (fluidized-bed level control, pressure and temperature monitoring, all under conditions of increased radioactivity).</para>
</section>
</section></section>
<section class="lev1" id="sec10-5">
<title>10.5 Conclusions</title>
<para>This chapter has been intended as a general presentation of a rapidly developing area: the promising transition from traditional on-board monitoring systems to intelligent sensory decision-support and control systems based on novel polymetric measuring, data mining and holonic agent techniques. The area is especially attractive to researchers attempting to develop the most effective intelligent systems by extracting the maximum information from the simplest and most reliable sensors by means of sophisticated and efficient algorithms.</para>
<para>For monitoring and control systems to become intelligent, not only in exhibition demonstrations and show presentations but also in real industrial applications, leading-edge solutions must be implemented for each and every component of such a system.</para>
<para>Combining polymetric sensing, data mining and holonic agency techniques into one integrated approach appears quite promising, provided that further research develops the appropriate theoretical models and integrates them into practice.</para>
</section>
<section class="lev1" id="sec10-6">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>K. Hossine, M. Milton and B. Nimmo, &#x02018;Metrology for 2020s&#x02019;, Middlesex, UK: National Physical Laboratory, 2012, p. 28. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+Hossine%2C+M%2E+Milton+and+B%2E+Nimmo%2C+%27Metrology+for+2020s%27%2C+Middlesex%2C+UK%3A+National+Physical+Laboratory%2C+2012%2C+p%2E+28%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Russell and P. Norvig, &#x02018;Artificial Intelligence: A Modern Approach&#x02019;, Upper Saddle River, NJ: Prentice Hall, 2003. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Russell+and+P%2E+Norvig%2C+%27Artificial+Intelligence%3A+A+Modern+Approach%27%2C+Upper+Saddle+River%2C+NJ%3A+Prentice+Hall%2C+2003%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>L. Monostori, J. Vancza and S. Kumara, &#x02018;Agent-Based Systems for manufacturing&#x02019;, Annals of the CIRP, no. 55, 2 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=L%2E+Monostori%2C+J%2E+Vancza+and+S%2E+Kumara%2C+%27Agent-Based+Systems+for+manufacturing%27%2C+Annals+of+the+CIRP%2C+no%2E+55%2C+2+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>P. Leitao, &#x02018;Agent&#x02013;based distributed manufacturing control: A state-of-the-art survey&#x02019;, Engineering Applications of Artificial Intelligence, no. 22, pp. 979&#x02013;991, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=P%2E+Leitao%2C+%27Agent-based+distributed+manufacturing+control%3A+A+state-of-the-art+survey%27%2C+Engineering+Applications+of+Artificial+Intelligence%2C+no%2E+22%2C+pp%2E+979-991%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. McFarlane and S. Bussman, &#x02018;Developments in Holonic Production Planning and Control&#x02019;, Int. Journal of Production Planning and Control, vol. 6, no. 11, pp. 522&#x02013;536, 2000. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+McFarlane+and+S%2E+Bussman%2C+%27Developments+in+Holonic+Production+Planning+and+Control%27%2C+Int%2E+Journal+of+Production+Planning+and+Control%2C+vol%2E+6%2C+no%2E+11%2C+pp%2E+522-536%2C+2000%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. Amigoni, F. Brandolini, G. D&#x02019;Antona, R. Ottoboni and M. Somalvico, &#x02018;Artificial Intelligence in Science of Measurements: From Measurement Instruments to Perceptive Agencies&#x02019;, IEEE Transactions on Instrumentation and Measurements, vol. 3, no. 52, pp. 716&#x02013;723, 6 2003. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+Amigoni%2C+F%2E+Brandolini%2C+G%2E+D%27Antona%2C+R%2E+Ottoboni+and+M%2E+Somal+vico%2C+%27Artificial+Intelligence+in+Science+of+Measurements%3A+From+Measurement+Instruments+to+Perceptive+Agencies%27%2C+IEEE+Transactions+on+Instrumentation+and+Measurements%2C+vol%2E+3%2C+no%2E+52%2C+pp%2E+716-723%2C+6+2003%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Zhukov, &#x02018;Theory of Polymetric Measurements&#x02019;, in Proceedings of 1st International Conference: Innovations in Shipbuilding and Offshore Technology, Nikolayev, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Zhukov%2C+%27Theory+of+Polymetric+Measurements%27%2C+in+Proceedings+of+1st+International+Conference%3A+Innovations+in+Shipbuilding+and+Offshore+Technology%2C+Nikolayev%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Zhukov, B. Gordeev and A. Zivenko, &#x02018;Polymetric Systems: Theory and Practice&#x02019;, Nikolayev: Atoll, 2012, p. 369. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Zhukov%2C+B%2E+Gordeev+and+A%2E+Zivenko%2C+%27Polymetric+Systems%3A+Theory+and+Practice%27%2C+Nikolayev%3A+Atoll%2C+2012%2C+p%2E+369%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Zhukov, &#x02018;Concept of Hypersignal and its Application in Naval Cybernetics&#x02019;, in Proceedings of the 3rd International Conference: Innovations in Shipbuilding and Offshore Technology, Nikolayev, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Zhukov%2C+%27Concept+of+Hypersignal+and+it%27s+Application+in+Naval+Cybernetics%27%2C+in+Proceedings+of+3-rd+International+Conference%3A+Innovations+in+Shipbuilding+and+Offshore+Technology%2C+Nikolayev%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>B. Marchenko and L. Scherbak, &#x02018;Foundations of Theory of Measurements&#x02019;, Proceedings of Institute of Electrodynamics of National Academy of Sciences of Ukraine, Electroenergetika, pp. 221&#x02013;230, 1999. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=B%2E+Marchenko+and+L%2E+Scherbak%2C+%27Foundations+of+Theory+of+Measurements%27%2C+Proceedings+of+Institute+of+Electrodynamics+of+National+Academy+of+Sciences+of+Ukraine%2C+Electroenergetika%2C+pp%2E+221-230%2C+1999%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>B. Marchenko and L. Scherbak, &#x02018;Modern Concept for Development of Theory of Measurements&#x02019;, Reports of National Academy of Sciences of Ukraine, no. 10, pp. 85&#x02013;88, 1999. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=B%2E+Marchenko+and+L%2E+Scherbak%2C+%27Modern+Concept+for+Development+of+Theory+of+Measurements%27%2C+Reports+of+National+Academy+of+Sciences+of+Ukraine%2C+no%2E+10%2C+pp%2E+85-88%2C+1999%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Zhukov, &#x02018;Fuzzy Algorithms of Information Processing in Systems of Ship Impulse Polymetrics&#x02019;, in Proceedings of 1st International Conference: Problems of Energy saving and Ecology in Shipbuilding, Nikolayev, 1996. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Zhukov%2C+%27Fuzzy+Algorithms+of+Information+Processing+in+Systems+of+Ship+Impulse+Polymetrics%27%2C+in+Proceedings+of+1st+International+Conference%3A+Problems+of+Energy+saving+and+Ecology+in+Shipbuilding%2C+Nikolayev%2C+1996%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Zhukov, &#x02018;Solitonic Models of Polymetric Monitoring Systems&#x02019;, in Proceedings of International Conference &#x0201C;Shipbuilding problems: state, ideas, solutions&#x0201D;, USMTU, Nikolayev, 1997. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Zhukov%2C+%27Solitonic+Models+of+Polymetric+Monitoring+Systems%27%2C+in+Proceedings+of+International+Conference+%22Shipbuilding+problems%3A+state%2C+ideas%2C+solutions%22%2C+USMTU%2C+Nikolayev%2C+1997%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Zhukov, B. Gordeev and A. Leontiev, &#x02018;Concept of Cloned Polymetric Signal and its Application in Monitoring and Expert Systems&#x02019;, in Proceedings of the 3rd International Conference on Marine Industry, Varna, 2001. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Zhukov%2C+B%2E+Gordeev+and+A%2E+Leontiev%2C+%27Concept+of+cloned+Polymetric+Signal+and+it%27s+application+in+Monitoring+and+Expert+Systems%27%2C+in+Proceedings+of+3-rd+International+Conference+on+Marine+Industry%2C+Varna%2C+2001%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Zhukov, B. Gordeev and A. Zivenko, &#x02018;Polymetric sensing of intelligent robots&#x02019;, in Proceedings of IEEE 7th Int. Conf. on IDAACS, Berlin, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Zhukov%2C+B%2E+Gordeev+and+A%2E+Zivenko%2C+%27Polymetric+sensing+of+intelligent+robots%27%2C+in+Proceedings+of+IEEE+7th+Int%2E+Conf%2E+on+IDAACS%2C+Berlin%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Zivenko, A. Nakonechniy and D. Motorkin, &#x02018;Level measurement principles &#x00026; sensors&#x02019;, in Materialy IX mezinarodni vedecko-prackticka conference &#x0201C;Veda a technologie: krok do budoucnosti - 2013&#x0201D;, Prague, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Zivenko%2C+A%2E+Nakonechniy+and+D%2E+Motorkin%2C+%27Level+measurement+principles+%26+sensors%27%2C+in+Materialy+IX+mezinarodni+vedecko-prackticka+conference+%22Veda+a+technologie%3A+krok+do+budoucnosti+-+2013%22%2C+Prague%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Zivenko, &#x02018;Forming and pre-processing of the polymetric signal&#x02019;, Collection of Scientific Publications of NUS, vol. 11, no. 439, pp. 114&#x02013;122, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Zivenko%2C+%27Forming+and+pre-processing+of+the+polymetric+signal%27%2C+Collection+of+Scientific+Publications+of+NUS%2C+vol%2E+11%2C+no%2E+439%2C+pp%2E+114-122%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Zhukov, &#x02018;Instrumental Ship Dynamic Stability Control&#x02019;, in Proceedings of 5th IMAEM, Athens, 1990. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Zhukov%2C+%27Instrumental+Ship+Dynamic+Stability+Control%27%2C+in+Proceedings+of+5th+IMAEM%2C+Athens%2C+1990%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch11" label="11" xreflabel="11">
<title>Design and Implementation of Wireless Sensor Network Based on Multilevel Femtocells for Home Monitoring</title>
<para><emphasis role="strong">D. Popescu, G. Stamatescu, A. M&#x01CE;ciuc&#x01CE; and M. Stru&#x00163;u</emphasis></para>
<para>Department of Automatic Control and Industrial Informatics, University &#x0201C;Politehnica&#x0201D; of Bucharest, Romania</para>
<para>Corresponding author: G. Stamatescu &lt;grigore.stamatescu@upb.ro&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>An intelligent femtocell-based sensor network is proposed for home monitoring of elderly people or people with chronic diseases. The femtocell is defined as a small sensor network placed in the patient&#x02019;s house, consisting of both mobile and fixed sensors arranged in three layers. The first layer contains body sensors attached to the patient that monitor health parameters, patient location, position and possible falls. The second layer is dedicated to ambient sensors and routing inside the cell. The third layer contains emergency ambient sensors, distributed according to need, that cover burglary events or toxic gas concentrations. Cell implementation is based on the IRIS family of motes running TinyOS, the embedded operating system for resource-constrained devices. In order to reduce energy consumption and radiation levels, adaptive rates of acquisition and communication are used. Experimental results within the system architecture are presented for detailed analysis and validation.</para>
<para><emphasis role="strong">Keywords:</emphasis> wireless sensor network, ambient monitoring, body sensor network, ambient assisted living</para>
</section>
<section class="lev1" id="sec11-1">
<title>11.1 Introduction</title>
<para>Recent developments in computing and communication systems applied to healthcare technology make it possible to implement a wide range of home-monitoring solutions for elderly people or people with chronic diseases [1]. Thus, people may perform their daily activities while constantly under the supervision of medical personnel. The indoor environment is optimized so that the possibility of injury is minimal. Alarm triggers and smart algorithms send data to the right intervention units according to the detected emergency [2].</para>
<para>When living inside closed spaces, small variations may be significant to a person&#x02019;s well-being. Therefore, the quality of the air, temperature, humidity or the amount of light inside the house may be important parameters [3]. Reduced cost, size and weight and energy-efficient operation of the monitoring nodes, together with more versatile wireless communications, make daily usage of health-monitoring systems more convenient. By wearing them, patients are free to move at will inside the monitored perimeter, practically forgetting the devices&#x02019; presence. The goal is to design the entire system to operate for a long period of time without human intervention while, at the same time, triggering as few false alarms as possible.</para>
<para>Many studies have investigated the feasibility of using several sensors placed on different parts of the body for continuous monitoring [4]. Home care for the elderly and for persons with chronic diseases is becoming an economic and social necessity. With a growing ageing population and health care prices rising all over the world, we expect a great demand for home care systems [5, 6]. An Internet-based topology is proposed in [7] for remote home-monitoring applications, using a broker server managed by a service provider. The security risks of the home PC are removed by transferring them to the broker server, which is located between the remote-monitoring devices and the patient&#x02019;s house. An early prototype of a mobile health service platform based on Body Area Networks is MobiHealth [8]. The most important requirements for the developer of an e-health application are size and power consumption, as considered in [9]. Also, in [10], a comprehensive study of the energy conservation challenges in wireless sensor networks is carried out.</para>
<para>In [11], a wireless body area network providing long-term health monitoring of patients under natural physiological states without constraining their normal activities is presented.</para>
<para>Integrating the body sensors with the existing ambient monitoring network in order to provide a complete view of the monitored parameters is one of the issues discussed in this chapter. Providing a localization system and a basic algorithm for event identification is also part of our strategy to fulfill all possible user requests. Caregivers also value information about the quality of air inside the living area. Many apparent health problems are in fact related to a lack of oxygen or to high levels of CO or CO<subscript>2</subscript>.</para>
<para>The chapter is organized as follows: Section 2 provides an overall view on the proposed system architecture and detailed insight into the operation requirements for each of the three layers for body, ambient and emergency monitoring. Section 3 introduces the main criteria for efficient data collection and a proposal for an adaptive data rate algorithm for both the body sensor network and the ambient sensor network. This has the aim of reducing the amount of data generated within the networks, considering processing, storage, and energy requirements. Implementation details and experimental results are evaluated in Section 4, where the path is set for long-term deployment and validation of the system. Section 5 concludes the chapter and highlights the main directions for future work.</para>
<table-wrap position="float" id="T11-1">
<label><link linkend="T11-1">Table <xref linkend="T11-1" remap="11.1"/></link></label>
<caption><para>Main characteristics of IEEE 802.15.1 and 802.15.4</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">IEEE Standard</td>
<td valign="top" align="left">802.15.1</td>
<td valign="top" align="left">802.15.4</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Frequency</td>
<td valign="top" align="left">ISM - 2.4GHz</td>
<td valign="top" align="left">ISM/868/915 MHz</td>
</tr>
<tr>
<td valign="top" align="left">Data rate</td>
<td valign="top" align="left">1 Mbps</td>
<td valign="top" align="left">250 kbps</td>
</tr>
<tr>
<td valign="top" align="left">Topology</td>
<td valign="top" align="left">Star</td>
<td valign="top" align="left">Mesh</td>
</tr>
<tr>
<td valign="top" align="left">Scalability</td>
<td valign="top" align="left">Medium</td>
<td valign="top" align="left">High</td>
</tr>
<tr>
<td valign="top" align="left">Latency</td>
<td valign="top" align="left">Medium</td>
<td valign="top" align="left">High</td>
</tr>
<tr>
<td valign="top" align="left">Interference mit.</td>
<td valign="top" align="left">FHSS/DHSS</td>
<td valign="top" align="left">CSMA/CA</td>
</tr>
<tr>
<td valign="top" align="left">Trademark</td>
<td valign="top" align="left">Bluetooth</td>
<td valign="top" align="left">ZigBee</td></tr>
</tbody>
</table>
</table-wrap>
</section>
<section class="lev1" id="sec11-2">
<title>11.2 Network Architecture and Femtocell Structure</title>
<para>The proposed sensor network architecture is based on hybrid femtocells. A hybrid femtocell contains sensors which are grouped, based on their functional requirements, mobility and energy consumption characteristics, into three layers: the body sensor network (BSN), the ambient sensor network (ASN) and the emergency sensor network (ESN), as presented in <link linkend="F11-1">Figure <xref linkend="F11-1" remap="11.1"/></link>. Coordination is implemented through a central entity called the femtocell management node (FMN), which aggregates data from the three layers and acts as an interface to the outside world by means of the Internet. Communication between the different components can be implemented using wireless technology and radio-compatible fiber optics. <link linkend="T11-1">Table <xref linkend="T11-1" remap="11.1"/></link> lists the high-level characteristics of the two low-power wireless communication standards often used in home-monitoring scenarios: IEEE 802.15.1 and IEEE 802.15.4. This highlights an important design trade-off in the deployment of a wireless sensor network for home monitoring. While IEEE 802.15.4 and ZigBee enable large, dense networks with complex mesh topologies, Bluetooth can be advantageous in applications requiring higher data rates and lower latency.</para>
<para>The layer characteristics and functionalities are further elaborated upon.</para>
<fig id="F11-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-1">Figure <xref linkend="F11-1" remap="11.1"/></link></label>
<caption><para>Hybrid femtocell configuration.</para></caption>
<graphic xlink:href="graphics/ch11_fig001.jpg"/>
</fig>
<section class="lev2" id="sec11-2-1">
<title>11.2.1 Body Sensor Network</title>
<para>The body sensor network functional design includes rechargeable battery-powered nodes along with user-friendly construction and operation. In an idealised case, the size and weight would go unnoticed, immediately or after a short accommodation period, not disturbing the patient or elderly person wearing them. Nodes communicate using a low-power wireless communication protocol for very short distance data transmission and reception, e.g. ZigBee or Bluetooth, depending on the data streaming rate of the application and the energy resources on the node. Very low energy consumption is an essential design criterion, as changing the batteries on a daily basis becomes stressful in the long term; ideally, the nodes would be embedded into wearable technology. Sensed parameters for the BSN include acceleration (via dual- or tri-axial accelerometers), blood pressure, ECG, blood oxygen saturation, heart rate and body temperature.</para>
<para>The main events that should be detected by the BSN cover fall detection, activity recognition and variations in investigated parameters corresponding to alert and alarm levels.</para>
</section>
<section class="lev2" id="sec11-2-2">
<title>11.2.2 Ambient Sensor Network</title>
<para>The ambient sensor network comprises a series of fixed measurement nodes, placed optimally in the target area/environment so as to maximize sensing coverage and network connectivity with a minimum number of nodes. The low-power communication operates over longer links than in the BSN and has to be robust to the main phenomena affecting indoor wireless communication, such as interference, reflections and asymmetric links. Although fixed node placement provides more flexibility in choosing the energy source, battery operation with low maintenance, enabled by low energy consumption, is preferred to mains power. As an exception, the router nodes in the ASN, which are tasked with redirecting much of the network traffic in the multi-hop architecture, can be operated from the mains power line.</para>
<para>Within the general framework, the ASN can serve as an external system for patient monitoring through a localization function based on link quality and signal strength. The monitored parameters include ambient temperature, humidity, barometric pressure and light. These can be evaluated individually or can serve as input data to a more advanced decision support system which correlates the evolution of indoor parameters with the BSN data from the patient to infer the conditions for certain diseases. Some nodes of the ASN might include complex sensors, such as audio and video capture, giving more detailed insight into the patient&#x02019;s behaviour. Their current use is somewhat limited by the additional computing and communication resources needed to accommodate these sensors in current wireless sensor network architectures, as well as by privacy and data-protection concerns.</para>
</section>
<section class="lev2" id="sec11-2-3">
<title>11.2.3 Emergency Sensor Network</title>
<para>The multi-level femtocell reserves a special role for the emergency sensor network, which can be considered a particular case of the ASN tasked with quick detection of, and reaction to, threats to life or property through embedded detection mechanisms. The sensing nodes are fixed and their placement is well suited to the specifics of the measurement process. As an example, the house could be fitted with gas sensors in the kitchen next to the stove, carbon dioxide sensors in the bedroom, and passive infrared and pressure sensors next to doors and windows. As the operation of the ESN is considered critical, energy-efficient operation becomes a secondary design criterion. The nodes should have redundant energy sources, both batteries and mains power supply, and redundant communication links. For example, a low-latency wireless communication protocol with a simple network topology, minimizing overhead and packet loss, can be used as the main interface, with the possibility of switching to a wired interface or using the ASN infrastructure in certain situations.</para>
<para>The main tasks of the ESN are to detect dangerous gas concentrations posing threats of explosion and/or intoxication and to detect intruders into the home such as burglars.</para>
<fig id="F11-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-2">Figure <xref linkend="F11-2" remap="11.2"/></link></label>
<caption><para>Data and information flow within and outside the femtocell.</para></caption>
<graphic xlink:href="graphics/ch11_fig2.jpg"/>
</fig>
</section>
<section class="lev2" id="sec11-2-4">
<title>11.2.4 Higher-level Architecture and Functional Overview</title>
<para>One of the features of the hybrid femtocell is that the body sensor always interacts with the closest ambient sensor node, <link linkend="F11-2">Figure <xref linkend="F11-2" remap="11.2"/></link>, in order to send data to the gateway. This ensures that the sensors attached to the monitored person are always connected and that the energy consumption is optimal, because the distance between the person and the closest router is minimal. This feature can also be used as a means of localization: the person wearing the body sensor will always be connected to the closest router. By using fixed environmental sensors with their own IDs and previously known positions, we can determine which room is presently occupied by the inhabitant. For accurate localization of the patient, however, an attenuation map of the sensors in each room must be created, because localizing the patient simply by the closest ASN node can fail in the following scenario. Suppose an ASN node is located on the left side of the bedroom. On the right side of the room is the door to the living room, and near this door is the ASN node for the living room. If the patient is in the bedroom but very close to the door, the closest ASN node will be the one in the living room, even though he/she is in the bedroom. To avoid this localization error, we introduce the attenuation map of the sensors. Every ASN node that detects the BSN worn by the patient transmits an attenuation factor; using the attenuation factors from each sensor, we can localize the patient very accurately. In our example, if the bedroom ASN node reports a 10% factor and the living room ASN node a 90% factor, the attenuation map localizes the patient as being in the bedroom, very close to the door between the two rooms.</para>
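The attenuation-map idea above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the room names, the 0..1 attenuation scale and the confidence gap of 0.2 are our assumptions.

```python
# Hypothetical sketch of attenuation-map localization. The room whose ASN
# node reports the lowest attenuation factor is the best position estimate.

def localize(attenuation):
    """attenuation maps room -> attenuation factor (0..1, lower = closer)."""
    room = min(attenuation, key=attenuation.get)
    # A large gap between the two lowest factors means high confidence;
    # near-equal factors place the patient near a boundary between rooms.
    ranked = sorted(attenuation.values())
    confident = len(ranked) < 2 or (ranked[1] - ranked[0]) > 0.2
    return room, confident

# The chapter's example: bedroom at 10%, living room at 90%.
room, confident = localize({"bedroom": 0.10, "living_room": 0.90})
# -> patient placed in the bedroom
```

With near-equal factors, such as a patient standing in a doorway, the second return value flags the estimate as a boundary case rather than a firm room assignment.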
<fig id="F11-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-3">Figure <xref linkend="F11-3" remap="11.3"/></link></label>
<caption><para>System architecture.</para></caption>
<graphic xlink:href="graphics/ch11_fig003.jpg"/>
</fig>
<para>The position of a hybrid femtocell in the larger wireless sensor network system is presented in <link linkend="F11-3">Figure <xref linkend="F11-3" remap="11.3"/></link>. Its main purpose is to monitor and interpret data, sending specific alarms when required. The communication between the femtocells and the network is Internet-based. The same method is used for communication between the network administrator and the hospital, police, fire station or relatives.</para>
<fig id="F11-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-4">Figure <xref linkend="F11-4" remap="11.4"/></link></label>
<caption><para>Sensor deployment example.</para></caption>
<graphic xlink:href="graphics/ch11_fig004.jpg"/>
</fig>
<para>In <link linkend="F11-4">Figure <xref linkend="F11-4" remap="11.4"/></link>, the ambient sensors&#x02019; spatial placement and reference deployment inside the patient&#x02019;s apartment are showcased. This fixed topology is consistent with the routing role of the ambient sensors with regard to the mobile body sensors. The goal of the system is to collect relevant data for reporting and processing. Therefore, achieving very high sensing and communication coverage is among our main objectives.</para>
<para>The entire monitoring system benefits from a series of gas sensor modules strategically placed throughout the monitored space. The Parallax embedded gas-sensing module for CO<subscript>2</subscript> is designed to allow a microcontroller to determine when a preset carbon dioxide level has been reached or exceeded. Interfacing with the sensor module is done through a 4-pin SIP header and requires two I/O pins from the host microcontroller. The sensor module is mainly intended to provide a means of comparing carbon dioxide sources and of setting an alarm limit for when the level becomes excessive [4].</para>
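The alarm-limit behaviour described above can be illustrated with a small hysteresis check. This is only a sketch of the host-side logic: the ppm thresholds are hypothetical, and reading the actual module happens over its two I/O pins rather than as a plain number.

```python
# Illustrative alarm-limit logic with hysteresis, mirroring how a preset CO2
# level triggers an alarm. ALARM_PPM and CLEAR_PPM are assumed values.

ALARM_PPM = 1500      # assumed alarm limit (ppm)
CLEAR_PPM = 1200      # lower clear level, to avoid alarm chatter

def update_alarm(ppm, alarm_active):
    """Return the new alarm state after one CO2 reading."""
    if not alarm_active and ppm >= ALARM_PPM:
        return True          # level exceeded: raise the alarm
    if alarm_active and ppm <= CLEAR_PPM:
        return False         # level well below limit: clear the alarm
    return alarm_active      # inside the hysteresis band: keep prior state

state = False
for reading in (900, 1600, 1400, 1100):
    state = update_alarm(reading, state)
```

The two-threshold band prevents a reading hovering around a single limit from toggling the alarm on and off repeatedly.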
<para>A more advanced system for indoor gas monitoring, MICARES, is shown in <link linkend="F11-5">Figure <xref linkend="F11-5" remap="11.5"/></link>. It consists of an expansion module for a typical wireless sensor network platform with the ability to measure CO, CO<subscript>2</subscript> and O<subscript>3</subscript> concentrations and perform on board duty-cycling of the sensors for low energy as well as measurement compensation and calibration. Using this kind of platform, data can be reliably evaluated locally and relayed to the gateway as small packets of timely information. The module can be either powered by mains or draw power from the energy source of the host node. Also, it can operate as a stand-alone device through ModBus serial communication with other information processing systems.</para>
<fig id="F11-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-5">Figure <xref linkend="F11-5" remap="11.5"/></link></label>
<caption><para>Embedded carbon dioxide sensor node module [12].</para></caption>
<graphic xlink:href="graphics/ch11_fig005.jpg"/>
</fig>
<para>In case an alert is triggered, the information is automatically processed and the alarm is sent to the specific intervention unit (hospital, police, fire department, etc.). These have the option of remotely accessing the femtocell management node in order to verify that the alarm is valid and to act by dispatching an intervention team to solve the issue. Subsequently, all the alarms generated over time are classified and stored in a central database for reporting purposes.</para>
</section>
</section>
<section class="lev1" id="sec11-3">
<title>11.3 Data Processing</title>
<para>Data harvesting and processing are performed at the femtocell level, so that the whole network is not flooded with unnecessary data from all femtocells. Considering the main goal of conserving node energy, while taking into account the limited storage and processing capabilities of each node, we propose a strategy to dynamically adapt the sampling rate of the investigated parameters for the BSN and ASN. The ESN is considered critical home infrastructure, with its own specific challenges of reliability, periodic sensor maintenance and calibration, and is not affected by this strategy.</para>
<para>Adaptive data acquisition is implemented by framing collected data within a safety interval given by [V<emphasis><subscript>min</subscript></emphasis>, V<emphasis><subscript>max</subscript></emphasis>]. While the raw values are relatively close to the interval center <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in244.jpg"/> and the recent variation of the time series, given by the derivative, is low over an exponentially weighted time horizon, the data acquisition rate is lowered periodically towards a lower bound. When collected measurements start to vary, the acquisition rate is increased steeply in order to capture the significant event; then, after a stability period is observed, it begins to be lowered again. Though local data storage on the node can be exploited, we prefer to synchronize collection with transmission of the packets through the radio interface in order to preserve data freshness, which can prove important for reacting effectively to current events. The energy level must be considered in order to prevent the reception of corrupted data and to avoid unbalanced node battery depletion, e.g. in the case of routing nodes in the network. It represents a validation factor of the data and is automatically transmitted together with the parameter value. Energy-aware policies can be implemented at the femtocell level to optimize battery recharging, in the BSN case, and battery replacement, for the ASN.</para>
<para>The parameter derivative is evaluated in order to attain a variable rate of acquisition. The values obtained by calculating the derivative also help us decide what type of event has happened. This can be done by building a knowledge base, during system commissioning and the initial operating period, which relates modifications in the observed parameters to trained event classes. These can range from trivial associations, such as high temperature and low oxygen/smoke meaning a fire, to subtle detection of the patient&#x02019;s state of health through changing vital signs and aversion to light. The algorithm described can be summarized as follows:</para>
<fig id="A6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Algorithm 1</label>
<caption><para>Adaptive Data Collection for Home Monitoring</para></caption>
<graphic xlink:href="graphics/alg1.jpg"/>
</fig>
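A minimal sketch of the adaptive acquisition step is given below. The safety interval, rate bounds, back-off factor and derivative threshold are illustrative assumptions, not values from the chapter, and the simple one-step derivative stands in for the exponentially weighted horizon described above.

```python
# Sketch of one step of adaptive data acquisition: readings near the center
# of the safety interval [vmin, vmax] with a small derivative let the
# sampling period grow towards t_max; a large derivative or an off-center
# value snaps it back to t_min. All numeric defaults are assumed values.

def next_period(value, prev_value, period, vmin=18.0, vmax=26.0,
                t_min=1.0, t_max=60.0, deriv_alert=0.5):
    """Return the next sampling period (seconds) after one new reading."""
    center = (vmin + vmax) / 2.0
    near_center = abs(value - center) < (vmax - vmin) / 4.0
    derivative = abs(value - prev_value) / period
    if derivative > deriv_alert or not near_center:
        return t_min                      # significant change: sample fast
    return min(period * 1.5, t_max)       # stable: back off gradually
```

Called once per sample, this reproduces the described behaviour: a long steady stretch drifts the period up to the bound, while the onset of an event immediately restores the fastest rate.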
</section>
<section class="lev1" id="sec11-4">
<title>11.4 Experimental Results</title>
<para>In order to bring value to the proposed theoretical system architecture, two experiments have been devised and implemented. They cover the body and ambient sensor layers of the multi-level femtocell for home monitoring. The main operational scenario considered involves an elderly patient living alone at home, in an apartment or house with multiple rooms. A caregiver is available on call, a permanent connection to the emergency services exists, and close relatives can be alerted in the process. As functional requirements, we target activity monitoring and classification by body-worn accelerometers, as well as ambient measurement of humidity, temperature, pressure and light, along with their correlations.</para>
<fig id="F11-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-6">Figure <xref linkend="F11-6" remap="11.6"/></link></label>
<caption><para>Accelerometer node placement on the patient&#x02019;s body.</para></caption>
<graphic xlink:href="graphics/ch11_fig006.jpg"/>
</fig>
<para>The first experiment is performed as part of the body sensor network. Two accelerometers with two axes each were placed on the patient, one on the right knee and the other on the left hip, as shown in <link linkend="F11-6">Figure <xref linkend="F11-6" remap="11.6"/></link>. The sensors are automatically assigned unique IDs; in our experiment, the sensor on the knee has ID 104 and the one on the left hip ID 103. To overcome the hardware limitations and obtain a three-dimensional representation, the three axes are mapped onto the two accelerometers as follows:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>X axis: front back, X axis of node 104;</para></listitem>
<listitem>
<para>Y axis: left right, X axis of node 103;</para></listitem>
<listitem>
<para>Z axis: bottom up, Y axis of both nodes.</para></listitem></itemizedlist>
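The axis mapping above can be written as a small combination routine. One point is our assumption: since both nodes contribute a Y reading to the vertical axis, the sketch averages them, which the chapter does not specify.

```python
# Sketch of the body-frame axis mapping: two 2-axis accelerometer nodes
# (IDs 104 on the knee, 103 on the hip) combined into one 3-axis sample.

def combine_axes(node104_xy, node103_xy):
    """node*_xy are (x, y) readings; returns (x, y, z) in the body frame."""
    x = node104_xy[0]                            # front-back: X of node 104
    y = node103_xy[0]                            # left-right: X of node 103
    z = (node104_xy[1] + node103_xy[1]) / 2.0    # bottom-up: Y of both nodes
    return x, y, z
```

Averaging the two vertical readings also gives a mild noise reduction compared with taking either node's Y axis alone.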
<fig id="F11-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-7">Figure <xref linkend="F11-7" remap="11.7"/></link></label>
<caption><para>X Axis experiment acceleration with thresholding.</para></caption>
<graphic xlink:href="graphics/ch11_fig007.jpg"/>
</fig>
<para>The following activities were executed during the experiment, in this order: standing, sitting on a chair, standing again and slow walking from the bedroom to the living room. Data acquisition was performed using MOTE-VIEW [13], a client-layer interface between a user and a deployed network of wireless sensors. Besides this main function, the user can change or update the individual node firmware, switch from low-power to high-power mode and set the radio transmit power. Collected data are stored in a local database and can be accessed remotely by authorized third parties.</para>
<para>Multiple experiments were conducted in order to determine the trust values. In the X and Y axis charts presented in Figures 11.7 and 11.8, readings outside the green lines, obtained by thresholding, indicate that an event occurred. The sensors used are ADXL202E devices [14].</para>
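The thresholding shown in Figures 11.7 and 11.8 amounts to flagging samples that leave a calibrated trust band. The band limits below are hypothetical stand-ins for the experimentally determined trust values, not the ones used in the experiment.

```python
# Hedged sketch of the event thresholding: readings outside an assumed
# trust band [LOW, HIGH] (raw accelerometer counts) are flagged as events.

LOW, HIGH = 450, 550   # assumed trust band, per-axis

def detect_events(samples):
    """Return indices of samples falling outside the trust band."""
    return [i for i, s in enumerate(samples) if s < LOW or s > HIGH]

events = detect_events([500, 498, 610, 505, 430])
# samples at indices 2 and 4 fall outside the band
```

In practice the band would be calibrated per axis and per patient during the commissioning runs mentioned above, since resting posture shifts the baseline of each accelerometer.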
<para>These are low-cost, low-power, complete 2-axis accelerometers with a digital output, all on a single monolithic IC. They are an improved version of the ADXL202AQC/JQC. The ADXL202E measures acceleration with a full-scale range of &#x000B1;2 g, covering both dynamic acceleration (e.g., vibration) and static acceleration (e.g., gravity). The outputs are analog voltages or digital signals whose duty cycles (ratio of pulse width to period) are proportional to acceleration. The duty-cycle outputs can be measured directly by a microprocessor counter, without an A/D converter or glue logic. The duty-cycle period is adjustable from 0.5 ms to 10 ms via a single resistor (R<subscript>SET</subscript>).</para>
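The duty-cycle output described above can be decoded in software without an A/D converter. As a sketch, we assume the nominal ADXL202E convention of a 50% duty cycle at 0 g and a sensitivity of 12.5% duty cycle per g; real devices need per-axis calibration of both values.

```python
# Convert an ADXL202E-style duty-cycle measurement to acceleration in g.
# zero_g_duty and sensitivity are nominal datasheet-style values and are
# assumptions here; calibrate them per device in a real deployment.

def duty_cycle_to_g(t1, t2, zero_g_duty=0.5, sensitivity=0.125):
    """t1: pulse width, t2: period (same time units). Returns g."""
    duty = t1 / t2
    return (duty - zero_g_duty) / sensitivity
```

For example, a static axis aligned with gravity should read close to a 62.5% duty cycle, which this conversion maps to +1 g.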
<fig id="F11-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-8">Figure <xref linkend="F11-8" remap="11.8"/></link></label>
<caption><para>Y Axis experiment acceleration with thresholding.</para></caption>
<graphic xlink:href="graphics/ch11_fig008.jpg"/>
</fig>
<para>The radio base station is made up of an IRIS radio/processor board connected to a MIB520 USB interface board via the 51-pin expansion connector. The interface board uses an FTDI chip and provides two virtual COM ports to the host system: COMx is used for programming the connected mote and COMx+1 is used by middleware applications to read serial data. The network protocol used is XMesh, Crossbow&#x02019;s proprietary protocol based on the IEEE 802.15.4 standard. Crossbow sensor networks can run different power strategies, each being a trade-off between power, data rate and latency.</para>
<table-wrap position="float" id="T11-2">
<label><link linkend="T11-2">Table <xref linkend="T11-2" remap="11.2"/></link></label>
<caption><para>XMesh power configuration matrix</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Power Mode</td>
<td valign="top" align="left">MICA2</td>
<td valign="top" align="left">MICA2DOT</td>
<td valign="top" align="left">MICAz</td></tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">XMesh-HP</td>
<td valign="top" align="left">X</td>
<td valign="top" align="left">X</td>
<td valign="top" align="left">X</td>
</tr>
<tr>
<td valign="top" align="left">XMesh-LP</td>
<td valign="top" align="left">X</td>
<td valign="top" align="left">X</td>
<td valign="top" align="left">Async. only</td>
</tr>
<tr>
<td valign="top" align="left">XMesh-ELP</td>
<td valign="top" align="left">X</td>
<td valign="top" align="left">X</td>
<td valign="top" align="left">X</td></tr>
</tbody>
</table>
</table-wrap>
<para>XMesh-HP [15] is the best approach for systems that have continuous power, offering the highest message rate, usually proportional to the baud rate of the radio. Radios and processors are continually powered, consuming between 15 and 30 mA depending on the type of mote. Route Update Messages and Health Messages are sent at a faster rate, which decreases the time it takes to form a mesh or for a new mote to join it.</para>
<para>XMesh-LP [15] is used for battery-operated systems that require multi-month or multi-year lifetimes. It can run either time-synchronized or asynchronously. The best power efficiency is achieved with time synchronization within 1 ms. The motes wake up, typically 8 times per second, time-synchronized, for a very short interval to check whether the radio detects any signal above the noise background. If so, the radio is kept alive to receive the signal. This usually results in a base-level current of 80 <emphasis>&#x00B5;</emphasis>A; the total average current depends on the number of messages received and transmitted. If data is transmitted every three minutes, the usual current draw in a 50-node mesh is around 220 <emphasis>&#x00B5;</emphasis>A. XMesh-LP can be configured for even lower power by reducing the wake-up periods and transmitting at lower rates. Route update intervals are also set at a lower rate to conserve power, resulting in a longer mesh formation time.</para>
<table-wrap position="float" id="T11-3">
<label><link linkend="T11-3">Table <xref linkend="T11-3" remap="11.3"/></link></label>
<caption><para>XMesh performance summary table</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Parameter</td>
<td valign="top" align="left">XMesh-HP</td>
<td valign="top" align="left">XMesh-LP</td>
<td valign="top" align="left">XMesh-ELP</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Route update interval</td>
<td valign="top" align="left">36 sec.</td>
<td valign="top" align="left">360 sec.</td>
<td valign="top" align="left">36 sec. (HP) / 360 sec. (LP)</td>
</tr>
<tr>
<td valign="top" align="left">Data message rate</td>
<td valign="top" align="left">10 sec. typ.</td>
<td valign="top" align="left">180 sec., typ.</td>
<td valign="top" align="left">N/A</td>
</tr>
<tr>
<td valign="top" align="left">Mesh formation time</td>
<td valign="top" align="left">2&#x02013;3 RUI</td>
<td valign="top" align="left">X</td>
<td valign="top" align="left">X</td>
</tr>
<tr>
<td valign="top" align="left">Average current usage</td>
<td valign="top" align="left">20&#x02013;30 mA</td>
<td valign="top" align="left">400 &#x00B5;A</td>
<td valign="top" align="left">50 &#x00B5;A</td></tr>
</tbody>
</table>
</table-wrap>
<para>XMesh-ELP [15] is used only for leaf nodes that communicate with parent nodes running XMesh-HP. A leaf node is defined as a node that does not participate in the mesh; it never routes messages from child motes to parent motes. The ELP version achieves very low power because the mote does not need to use the time-synchronized wake-up mode to check for radio messages. The mote can sleep for very long periods while maintaining its neighborhood list, so it remembers which parents it can select. If it does not receive a link-level acknowledgement when it transmits to a parent, it selects another parent, and so on. This operation can happen very quickly, or it may take some time if the RF environment or mesh configuration has changed considerably.</para>
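<para>The leaf-node transmit-and-failover behaviour described above can be sketched as follows (an illustrative Python model, not Crossbow&#x02019;s actual implementation; the function names are ours):</para>

```python
def send_with_failover(payload, parents, transmit):
    """Try each candidate parent from the mote's neighborhood list in turn.

    parents  -- ordered list of candidate parent node IDs
    transmit -- callable transmit(parent, payload) returning True when a
                link-level acknowledgement was received
    Returns the parent that acknowledged, or None if none are reachable.
    """
    for parent in parents:
        if transmit(parent, payload):
            return parent   # keep using this parent for future messages
    return None             # RF environment/mesh may have changed; retry later
```

A mote would call this on every uplink message; how quickly a new parent is found depends, as noted above, on how much the RF environment or mesh configuration has changed.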
<para>Because of their small size, nodes can easily blend into the background, interfering as little as possible with the user&#x02019;s day-to-day routine. The sampling rate can also be set at a suitable level to achieve low power consumption and, with it, a long operating lifetime without human intervention [16].</para>
<para>Our infrastructure also offers routing facilities that increase the reliability of the network by self-configuring into a multihop communication system whenever direct links are not possible. After experimenting with different topologies, we achieved a working test scenario consisting of a four-level multihop communication network, which is more than we expect to be necessary in any of our deployment locations.</para>
<fig id="F11-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-9">Figure <xref linkend="F11-9" remap="11.9"/></link></label>
<caption><para>Measurement data: humidity (a); temperature (b).</para></caption>
<graphic xlink:href="graphics/ch11_fig009a.jpg"/>
</fig>
<fig id="F11-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-10">Figure <xref linkend="F11-10" remap="11.10"/></link></label>
<caption><para>Measurement data: barometric pressure (a); light (b).</para></caption>
<graphic xlink:href="graphics/ch11_fig010.jpg"/>
</fig>
<para>Extensive experimental work has been carried out for the ambient sensor layer of the system based on MTS400 IRIS sensor nodes. One reason for choosing this specific board is that it provides the sensors needed to gather a variety of environmental parameters, such as temperature, humidity, relative pressure and ambient light. The experimental deployment consists of three measurement nodes organized in a true mesh topology in an indoor testing environment. These aim at modeling a real implementation in the patient&#x02019;s home, and measurements were taken over the course of a week-long deployment. To account for uneven sampling across the sensor nodes, we use MATLAB serial time units as the reference time; these are converted from conventional timestamp entries of the form <emphasis>dd.mm.yyyy HH:MM:SS</emphasis> in the MOTE-VIEW database.</para>
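<para>The timestamp conversion can be sketched as follows (a Python stand-in for MATLAB&#x02019;s <emphasis>datenum</emphasis>; the 366-day offset accounts for MATLAB counting days from the proleptic date 0-Jan-0000, while Python ordinals start at 1-Jan-0001):</para>

```python
from datetime import datetime

def matlab_datenum(stamp: str) -> float:
    """Convert a 'dd.mm.yyyy HH:MM:SS' timestamp, as stored in the
    MOTE-VIEW database, to a MATLAB serial date number (days, with the
    time of day as the fractional part)."""
    dt = datetime.strptime(stamp, "%d.%m.%Y %H:%M:%S")
    day_fraction = (dt.hour * 3600 + dt.minute * 60 + dt.second) / 86400.0
    return dt.toordinal() + 366 + day_fraction

# MATLAB's datenum('01-Jan-2000') is 730486, which this reproduces:
print(matlab_datenum("01.01.2000 00:00:00"))
```

Placing all nodes on this common, uniform time axis makes the unevenly sampled series directly comparable.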
<para>In <link linkend="F11-9">Figure <xref linkend="F11-9" remap="11.9"/></link>(a), the evolution of the humidity parameter measured by the indoor deployed sensors can be seen. The differences reflect node placement in the different rooms and exposure to windows and doors. Subsequent processing can compute average values and other correlations with ambient and body parameters, feeding an intelligent information system that can associate variations in ambient humidity and temperature with influences on chronic disease. <link linkend="F11-9">Figure <xref linkend="F11-9" remap="11.9"/></link>(b) illustrates temperature variations obtained from the ambient sensor network. These reflect the circadian evolution of the measured parameter and show the importance of correct node placement and data aggregation within the sensor network.</para>
<table-wrap position="float" id="T11-4">
<label><link linkend="T11-4">Table <xref linkend="T11-4" remap="11.4"/></link></label>
<caption><para>Ambient monitoring summary</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left" colspan="2">Node ID</td>
<td valign="top" align="left">6692</td>
<td valign="top" align="left">6782</td>
<td valign="top" align="left">6945</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Humidity [%]</td>
<td valign="top" align="left"><emphasis>min</emphasis></td>
<td valign="top" align="left">22.5</td>
<td valign="top" align="left">27.8</td>
<td valign="top" align="left">35.8</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>max</emphasis></td>
<td valign="top" align="left">41</td>
<td valign="top" align="left">42.8</td>
<td valign="top" align="left">50.5</td></tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>avg</emphasis></td>
<td valign="top" align="left">33.39</td>
<td valign="top" align="left">34.62</td>
<td valign="top" align="left">42.26</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>stdev</emphasis></td>
<td valign="top" align="left">4.76</td>
<td valign="top" align="left">3.22</td>
<td valign="top" align="left">2.58</td>
</tr>
<tr>
<td valign="top" align="left">Temperature [&#x00B0;C]</td>
<td valign="top" align="left"><emphasis>min</emphasis></td>
<td valign="top" align="left">23.08</td>
<td valign="top" align="left">23.78</td>
<td valign="top" align="left">23.4</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>max</emphasis></td>
<td valign="top" align="left">32.98</td>
<td valign="top" align="left">29.01</td>
<td valign="top" align="left">27.48</td></tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>avg</emphasis></td>
<td valign="top" align="left">26.24</td>
<td valign="top" align="left">25.45</td>
<td valign="top" align="left">24.66</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>stdev</emphasis></td>
<td valign="top" align="left">2.7</td>
<td valign="top" align="left">1.1</td>
<td valign="top" align="left">0.75</td>
</tr>
<tr>
<td valign="top" align="left">Pressure [mbar]</td>
<td valign="top" align="left"><emphasis>min</emphasis></td>
<td valign="top" align="left">997.83</td>
<td valign="top" align="left">997.34</td>
<td valign="top" align="left">998.24</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>max</emphasis></td>
<td valign="top" align="left">1009.6</td>
<td valign="top" align="left">1009.3</td>
<td valign="top" align="left">1010.2</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>avg</emphasis></td>
<td valign="top" align="left">1003.3</td>
<td valign="top" align="left">1002.2</td>
<td valign="top" align="left">1003.5</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>stdev</emphasis></td>
<td valign="top" align="left">2.85</td>
<td valign="top" align="left">2.7</td>
<td valign="top" align="left">2.7</td>
</tr>
<tr>
<td valign="top" align="left">Light [lux]</td>
<td valign="top" align="left"><emphasis>min</emphasis></td>
<td valign="top" align="left">0</td>
<td valign="top" align="left">0</td>
<td valign="top" align="left">0</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>max</emphasis></td>
<td valign="top" align="left">1847.1</td>
<td valign="top" align="left">1847.1</td>
<td valign="top" align="left">1847.1</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>avg</emphasis></td>
<td valign="top" align="left">952.18</td>
<td valign="top" align="left">705.87</td>
<td valign="top" align="left">414.05</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"><emphasis>stdev</emphasis></td>
<td valign="top" align="left">880.32</td>
<td valign="top" align="left">801.35</td>
<td valign="top" align="left">517.96</td></tr>
</tbody>
</table>
</table-wrap>
<para>Barometric pressure (<link linkend="F11-10">Figure <xref linkend="F11-10" remap="11.10"/></link>(a)) is also observed by the sensor network over the testing period. This is the parameter least influenced by node placement and most influenced by general weather trends. Differences of a few percentage points between individual sensor node values can be attributed to sensing-element calibration or local temperature compensation. Aggregating data can also lead to higher-quality measurements in this case. Ambient light values, suitable for feeding an intelligent mood-lighting system, are shown in <link linkend="F11-10">Figure <xref linkend="F11-10" remap="11.10"/></link>(b). The light sensor saturates in full daylight at around 1850 lux and responds quickly to variations in the measured light. The most important periods of the day are dawn and dusk, when the information provided can assure a smooth transition between artificial and natural light.</para>
<para>The data flow coming from the ambient sensor network is processed in multiple steps: at the source node, in the network, e.g. through aggregation or sensor fusion, and at the home server or gateway level. Each step converts raw data into higher-level pieces of information which can be operated on more efficiently and which become meaningful through correlation and interactive visualization. Efficient management of this information is critical to correct operation of the home-monitoring system. Alerts and alarms have to be reliable and build trust among end users, leading to widespread acceptance whilst assuring a high level of integrity, security and user privacy.</para>
<para><link linkend="T11-4">Table <xref linkend="T11-4" remap="11.4"/></link> summarizes the aggregated experimental deployment values for the three nodes over the investigated period. The minimum, maximum, average and standard deviation of the collected time series for each of the measured ambient parameters are listed.</para>
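<para>The per-node summary statistics of this kind can be reproduced from the raw time series with a short routine (a generic sketch; the actual MOTE-VIEW export format is not shown here):</para>

```python
import statistics

def summarize(series):
    """Return the min/max/avg/stdev summary used for each measured
    parameter in the deployment table (sample standard deviation,
    rounded to two decimals as in the table)."""
    return {
        "min": min(series),
        "max": max(series),
        "avg": round(statistics.fmean(series), 2),
        "stdev": round(statistics.stdev(series), 2),
    }

# Example with a toy temperature series:
print(summarize([23.1, 24.6, 26.2, 25.0]))
```

Running this once per node and per parameter yields exactly the four rows (min, max, avg, stdev) reported for humidity, temperature, pressure and light.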
<para>Making effective use of the large quantities of data generated by the layers of the femtocell structure, represented by the individual sensor networks, poses a significant challenge. One idea is to apply computational intelligence techniques, either in a centralized manner at the gateway level or in a distributed fashion where computing tasks are spread among the nodes according to a dynamic strategy. An example is time series prediction on the temperature data collected by the ambient sensor network using neural networks. As a tool, the MATLAB technical computing environment can be used for modeling, testing, validation and embedded code generation at the gateway level.</para>
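<para>As a minimal illustration of the prediction idea (a deliberately simple one-lag autoregressive model fitted by least squares, standing in for the neural networks mentioned above; a real deployment would use a richer model and longer input windows):</para>

```python
def fit_ar1(series):
    """Fit x[t] ~= phi * x[t-1] by ordinary least squares: the phi that
    minimizes sum((x[t] - phi*x[t-1])**2) over the series."""
    num = sum(prev * cur for prev, cur in zip(series, series[1:]))
    den = sum(prev * prev for prev in series[:-1])
    return num / den

def predict_next(series):
    """One-step-ahead prediction from the fitted coefficient."""
    return fit_ar1(series) * series[-1]

# Toy example: a geometric series is predicted exactly.
print(predict_next([1.0, 2.0, 4.0, 8.0, 16.0]))
```

The same train-on-history, predict-one-step structure carries over directly when the linear fit is replaced by a neural network at the gateway.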
</section>
<section class="lev1" id="sec11-5">
<title>11.5 Conclusion</title>
<para>This chapter introduced a system architecture composed of a smart hybrid sensor network for indoor monitoring using a multilayer femtocell for ubiquitous intelligent home monitoring. The main components of the system are three low-level wireless sensor networks: body, ambient and emergency, along with a central coordination entity named the femtocell management node. This node also acts as a gateway towards the internet and the interested stakeholders in an ambient-assisted living scenario. It has been argued that efficient data collection and processing strategies, along with robust networking protocols, can enable seamless integration into the patient&#x02019;s home and bring added value to home care whilst reducing overall medical and assistance costs. Recent advances in the miniaturization of discrete electronic components and systems, along with the enhanced computing and communication capabilities of intelligent home devices, offer a good opportunity in this application area. This can be exploited for both research and development in the field of ambient-assisted living to increase quality of life while dealing with increased medical costs.</para>
<para>The main experimental focus was on the body sensor network layer and the ambient sensor layer, and their experimental deployment and implementation have been illustrated. First, body-worn wireless accelerometers were used to detect and classify human activity based on time-domain thresholding. Second, extensive results from a medium-term deployment of an ambient sensor network were illustrated and discussed. Its main purpose was the collection of ambient parameters, such as temperature, humidity, barometric pressure and ambient light, while observing network protocol behaviour. The main conclusion is that wireless sensor network systems and protocols offer a reliable option for deployment in home monitoring, given the specific challenges of indoor environments.</para>
<para>Plans for future development follow three main paths. One direction includes extending the system with more measured parameters through additional sensor integration with the wireless nodes. The focus here would be on the body sensor network side, where a deep insight into the patient&#x02019;s well-being and health status can be gained. Also, while raw data and machine-learning algorithms can provide high-confidence recommendations and alerts to caregivers, data visualization in the home and for the patient should not be neglected. This can be addressed by developing adaptive and intuitive interfaces for the patient or elderly person which enhance acceptance of the system. Finally, the quality and accuracy of the expected results have to be increased by integrating state-of-the-art sensing, signal processing and embedded computing hardware, along with the implementation of advanced methods for experimental data processing.</para>
</section>
<section class="lev1" id="sec11-6">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>A. M&#x0103;ciuc&#x0103;, D. Popescu, M. Struu, and G. Stamatescu. Wireless sensor network based on multilevel femtocells for home monitoring. In 7th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, pages 499&#x02013;503, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+M&#x01CE;ciuc&#x01CE;%2C+D%2E+Popescu%2C+M%2E+Struu%2C+and+G%2E+Stamatescu%2E+Wireless+sensor+network+based+on+multilevel+femtocells+for+home+monitoring%2E+In+7th+IEEE+International+Conference+on+Intelligent+Data+Acquisition+and+Advanced+Computing+Systems%3A+Technology+and+Applications%2C+pages+499-503%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Fahim, I. Fatima, Sungyoung Lee, and Young-Koo Lee. Daily life activity tracking application for smart homes using android smartphone. In Advanced Communication Technology (ICACT), 2012 14th International Conference on, pages 241&#x02013;245, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Fahim%2C+I%2E+Fatima%2C+Sungyoung+Lee%2C+and+Young-Koo+Lee%2E+Daily+life+activity+tracking+application+for+smart+homes+using+android+smartphone%2E+In+Advanced+Communication+Technology+%28ICACT%29%2C+2012+14th+International+Conference+on%2C+pages+241-245%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Smolen, K. Czopek, and P. Augustyniak. Non-invasive sensors based human state in nightlong sleep analysis for home-care. In Computing in Cardiology, 2010, pages 45&#x02013;48, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Smolen%2C+K%2E+Czopek%2C+and+P%2E+Augustyniak%2E+Non-invasive+sensors+based+human+state+in+nightlong+sleep+analysis+for+home-care%2E+In+Computing+in+Cardiology%2C+2010%2C+pages+45-48%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Yibin Hou, Na Li, and Zhangqin Huang. Triaxial accelerometer-based real time fall event detection. In Information Society (i-Society), 2012 International Conference on, pages 386&#x02013;390, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Yibin+Hou%2C+Na+Li%2C+and+Zhangqin+Huang%2E+Triaxial+accelerometer-based+real+time+fall+event+detection%2E+In+Information+Society+%28i-Society%29%2C+2012+International+Conference+on%2C+pages+386-390%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Andrew D. Jurik and Alfred C. Weaver. Remote medical monitoring. Computer, 41 (4):96&#x02013;99, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Andrew+D%2E+Jurik+and+Alfred+C%2E+Weaver%2E+Remote+medical+monitoring%2E+Computer%2C+41+%284%29%3A96-99%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>P. Campbell. Population projections: States, 1995&#x02013;2025. Technical report, U.S. Census Bureau, 1997. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=P%2E+Campbell%2E+Population+projections%3A+States%2C+1995-2025%2E+Technical+report%2C+U%2ES%2E+Census+Bureau%2C+1997%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Chao-Hung Lin, Shuenn-Tsong Young, and Te-Son Kuo. A remote data access architecture for home-monitoring health-care applications. Medical Engineering and Physics, 29(2):199&#x02013;204, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Chao-Hung+Lin%2C+Shuenn-Tsong+Young%2C+and+Te-Son+Kuo%2E+A+remote+data+access+architecture+for+home-monitoring+health-care+applications%2E+Medical+Engineering+and+Physics%2C+29%282%29%3A199-204%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Aart Van Halteren, Dimitri Konstantas, Richard Bults, Katarzyna Wac, Nicolai Dokovsky, George Koprinkov, Val Jones, and Ing Widya. Mobihealth: ambulant patient monitoring over next generation public wireless networks. In in Demiris, G. (Ed.): E-health: Current Status and Future Trends, IOS, pages 107&#x02013;122. Press, 2004. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Aart+Van+Halteren%2C+Dimitri+Konstantas%2C+Richard+Bults%2C+Katarzyna+Wac%2C+Nicolai+Dokovsky%2C+George+Koprinkov%2C+Val+Jones%2C+and+Ing+Widya%2E+Mobihealth%3A+ambulant+patient+monitoring+over+next+generation+public+wireless+networks%2E+In+in+Demiris%2C+G%2E+%28Ed%2E%29%3A+E-health%3A+Current+Status+and+Future+Trends%2C+IOS%2C+pages+107-122%2E+Press%2C+2004%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>DD Vergados, D Vouyioukas, NA Pantazis, I Anagnostopoulos, DJ Vergados, and V Loumos. Decision support algorithms and optimization techniques for personal homecare environment. IEEE Int&#x02019;l. Special Topic Conf. Information Technology in Biomedicine (ITAB 2006), 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=DD+Vergados%2C+D+Vouyioukas%2C+NA+Pantazis%2C+I+Anagnostopoulos%2C+DJ+Vergados%2C+and+V+Loumos%2E+Decision+support+algorithms+and+optimization+techniques+for+personal+homecare+environment%2E+IEEE+Int%C4%F4l%2E+Special+Topic+Conf%2E+Information+Technology+in+Biomedicine+%28ITAB+2006%29%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Tarannum. Energy conservation challenges in wireless sensor networks: A comprehensive study. Wireless Sensor Network, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Tarannum%2E+Energy+conservation+challenges+in+wireless+sensor+networks%3A+A+comprehensive+study%2E+Wireless+Sensor+Network%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Emil Jovanov, Aleksandar Milenkovic, Chris Otto, and PietC de Groen. A wireless body area network of intelligent motion sensors for computer assisted physical rehabilitation. Journal of NeuroEngineering and Rehabilitation, 2(1):1&#x02013;10, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Emil+Jovanov%2C+Aleksandar+Milenkovic%2C+Chris+Otto%2C+and+PietC+de+Groen%2E+A+wireless+body+area+network+of+intelligent+motion+sensors+for+computer+assisted+physical+rehabilitation%2E+Journal+of+NeuroEngineering+and+Rehabilitation%2C+2%281%29%3A1-10%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. Stamatescu, D. Popescu, and V. Sgarciu. Micares: Mote expansion module for gas sensing applications. In Intelligent Data Acquisition and Advanced Computing Systems (IDAACS), 2013 IEEE 7th International Conference on, volume 01, pages 504&#x02013;508, Sept 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+Stamatescu%2C+D%2E+Popescu%2C+and+V%2E+Sgarciu%2E+Micares%3A+Mote+expansion+module+for+gas+sensing+applications%2E+In+Intelligent+Data+Acquisition+and+Advanced+Computing+Systems+%28IDAACS%29%2C+2013+IEEE+7th+International+Conference+on%2C+volume+01%2C+pages+504-508%2C+Sept+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Turon. Mote-view: A sensor network monitoring and management tool. In Embedded Networked Sensors, 2005. EmNetS-II. The Second IEEE Workshop on, May 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Turon%2E+Mote-view%3A+A+sensor+network+monitoring+and+management+tool%2E+In+Embedded+Networked+Sensors%2C+2005%2E+EmNetS-II%2E+The+Second+IEEE+Workshop+on%2C+May+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>T. Al-ani, Quynh Trang Le Ba, and E. Monacelli. On-line automatic detection of human activity in home using wavelet and hidden markov models scilab toolkits. In Control Applications, 2007. CCA 2007., Oct 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=T%2E+Al-ani%2C+Quynh+Trang+Le+Ba%2C+and+E%2E+Monacelli%2E+On-line+automatic+detection+of+human+activity+in+home+using+wavelet+and+hidden+markov+models+scilab+toolkits%2E+In+Control+Applications%2C+2007%2E+CCA+2007%2E%2C+Oct+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Crossbow Inc. XMesh User&#x02019;s Manual, revision c edition, March 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Crossbow+Inc%2E+XMesh+User%27s+Manual%2C+revision+c+edition%2C+March+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. Stamatescu and V. Sgarciu. Evaluation of wireless sensor network monitoring for indoor spaces. In Instrumentation Measurement, Sensor Network and Automation (IMSNA), 2012 International Symposium on, Aug 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+Stamatescu+and+V%2E+Sgarciu%2E+Evaluation+of+wireless+sensor+network+monitoring+for+indoor+spaces%2E+In+Instrumentation+Measurement%2C+Sensor+Network+and+Automation+%28IMSNA%29%2C+2012+International+Symposium+on%2C+Aug+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Parallax Inc. Co2 gas sensor module (27929) datasheet. Technical report, Parallax Inc., 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Parallax+Inc%2E+Co2+gas+sensor+module+%2827929%29+datasheet%2E+Technical+report%2C+Parallax+Inc%2E%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch12" label="12" xreflabel="12">
<title>Common Framework Model for Multi-Purpose Underwater Data Collection Devices Deployed with Remotely Operated Vehicles</title>
<para><emphasis role="strong">M.C. Caraivan<superscript><emphasis role="strong">1</emphasis></superscript>, V. Dache<superscript><emphasis role="strong">2</emphasis></superscript> and V. Sgarciu<superscript><emphasis role="strong">2</emphasis></superscript></emphasis></para>
<para><superscript>1</superscript>Faculty of Applied Sciences and Engineering, University Ovidius<break/>of Constanta, Romania</para>
<para><superscript>2</superscript>Faculty of Automatic Control and Computers, University Politehnica<break/>of Bucharest, Romania</para>
<para>Corresponding author: M.C. Caraivan &lt;caraivanmitrut@gmail.com&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>This paper follows the development of real-time applications for marine operations, focusing on modern modelling and simulation methods, and presents a common framework model for multi-purpose underwater sensors used for offshore exploration. It addresses the deployment challenges of underwater sensor networks, called &#x0201C;Safe-Nets&#x0201D; by the authors, using Remotely Operated Vehicles (ROVs).</para>
<para><emphasis role="strong">Keywords:</emphasis> Remotely Operated Vehicles, ROV, simulation, testing, object modelling, underwater component, oceanographic data collection, pollution</para>
</section>
<section class="lev1" id="sec12-1">
<title>12.1 Introduction</title>
<para>The natural disaster following the explosion of the BP Deepwater Horizon offshore oil-drilling rig in the Gulf of Mexico has raised questions more than ever about the safety of mankind&#x02019;s offshore oil quests. For three months in 2010, almost 5 million barrels of crude oil formed the largest accidental marine oil spill in the history of the petroleum industry. The frequency of maritime disasters and their effects appears to have increased dramatically during the last century [1], which draws considerable attention from decision makers in communities and governments. Disaster management requires the collaboration of several management organizations, resulting in heterogeneous systems. Interoperability of these systems is fundamental to assuring effective collaboration between different organizations.</para>
<para>Research efforts in the exploration of offshore resources have intensified during the last decades, contributing to greater global interest in the area of underwater technologies. In the near future, underwater sensor networks are going to become the background infrastructure for applications enabling geological prospection, pollution monitoring, and oceanographic data collection. Furthermore, these data collection networks could in fact improve offshore exploration control by replacing the on-site instrumentation data systems used in the oil industry today near well heads or in well-control operations, e.g. using underwater webcams which can provide important visual aid for surveys or for offshore drilling explorations. These facts lead to the idea of deploying multi-purpose underwater sensor networks alongside oil companies&#x02019; offshore operations. The study tries to show the collateral benefits of deploying such underwater sensor networks, and we address state-of-the-art ideas and possible implementations of different applications, like military surveillance of coastal areas, assisting navigation [2] or disaster prevention systems &#x02013; including earthquake and tsunami detection warning alarms issued in advance &#x02013; all in order to overcome the biggest challenge of development: the cost of implementation.</para>
<para>It is instructive to compare current terrestrial sensor network practices to underwater approaches: terrestrial networks emphasize low-cost nodes (around US$100 at most), dense deployments (at most a few hundred meters apart) and multi-hop short-range communication. By comparison, typical underwater wireless nodes today are expensive (US$10,000 per node or even more) and sparsely deployed (a few nodes, placed kilometres apart), typically communicating directly with a &#x0201C;base station&#x0201D; over long ranges rather than with each other. We seek to bring underwater networks to the design points that make land networks so practical and easy to expand, so that underwater sensor nodes can be inexpensive, densely deployed, and able to communicate peer-to-peer [3].</para>
<para>Multiple Unmanned or Autonomous Underwater Vehicles (UUVs, AUVs) equipped with underwater sensors will also find application in the exploration of natural undersea resources and the gathering of scientific data in collaborative monitoring missions. To make these applications viable, there is a need to enable communication among underwater devices. Ocean sampling networks have been experimented with in the Monterey Bay area, where networks of sensors and AUVs, such as the Odyssey-class AUVs, performed synoptic, cooperative adaptive sampling of the 3D coastal ocean environment [4]. As the number of offshore constructions grows, we should be able to implement auxiliary systems that allow us to better understand and protect the ocean we are building on. We will be using Remotely Operated Vehicles (ROVs) and a VMAX-Perry Slingsby ROV simulator with scenario development capabilities to determine the most efficient way of deploying our underwater sensor networks, which we call &#x0201C;Safe-Nets&#x0201D;, around offshore oil drilling operations, including all types of jackets, jack-ups, platforms, spars or any other offshore metallic or concrete structure.</para>
<para>The ability to have small devices physically distributed near offshore oil-field operations brings new opportunities to observe and monitor micro-habitats [5], perform structural monitoring [6] or build wide-area environmental systems [7]. We have even begun to imagine a scenario in which these sensor networks expand slowly and steadily into a global &#x0201C;WaterNet&#x0201D;, an extension of the Internet on land. In the same manner which allowed Internet networks on land to develop by constantly adding more and more nodes, we could allow information to be transmitted from buoy to buoy in an access-point-like system. These small underwater &#x0201C;Safe-Nets&#x0201D; could be joined together, and the network could expand into a global &#x0201C;WaterNet&#x0201D; in the future, allowing data to be sent to and received from shore bases. Of course, considerably less data would be sent and received at first, but the main advantages would be in favour of disaster prevention systems. &#x0201C;Safe-Nets&#x0201D; for seismic activity and tsunami warning systems alone can represent one reason for underwater network deployments, which are quite limited today compared to their counterparts on land. We propose a model of interoperability in case of a marine pollution disaster for a management system based upon Enterprise Architecture principles.</para>
<para>Keeping in mind that the sea is a harsh environment, where reliability, redundancy and maintenance-free equipment are the most desirable objectives, we should seek methods and procedures for keeping future development within a framework that is backwards compatible with any sensor nodes already deployed. In order to comply with the active need for upgrading to future technologies, we have designed a common framework model with multiple layers and drawers for components which can be used for different purposes, but mainly for underwater data collection and monitoring. This development using Enterprise Architecture principles is sustainable through time, as it is backed by solutions to our research challenges, such as the power supply problem, fouling corrosion, self-configuration, self-troubleshooting protocols, communication protocols and hardware methods.</para>
<para>Two-thirds of the surface of the Earth is covered by water and, as history has shown, there is a constantly increasing number of ideas for using this space. One of the most recent is perhaps moving entire buildings of servers, such as Google&#x02019;s data centres [8], overseas, literally, because of their cooling needs, which nowadays are tremendous. These produce a heat footprint clearly visible even from satellites; by transferring them to the offshore environment, their overheating problems could be met with cheaper cooling methods based on the nearly constant temperature of ocean seawater. We also discuss electrical power supply possibilities further in the following section.</para>
</section>
<section class="lev1" id="sec12-2">
<title>12.2 Research Challenges</title>
<para>We seek to overcome each of the design challenges that have so far prevented underwater sensor network development, especially by designing a common framework with different option modules available for installation. Where a hardwired link is at hand, by attaching these devices to offshore construction sites or to autonomous buoys, we could provide inexpensive sensors that use the power supply or communication means of that specific structure. We intend to develop a variety of option modules for our common framework, covering all types of underwater operations, including the instrumentation needs near wellheads and drill strings and any type of oceanographic data collection, so that it becomes a solution at hand for any given task. This could provide the financial means of deploying underwater Safe-Nets, especially by tethering them to offshore structures and exploration facilities, which by their very nature need various kinds of underwater data collection.</para>
<section class="lev2" id="sec12-2-1">
<title>12.2.1 Power Supply</title>
<para>Until now, underwater sensor deployments have relied mainly on battery power: the sensors were deployed and recovered shortly afterwards. In our case, the close proximity of oil-rig platforms and other offshore constructions means external power sources already exist: diesel or gas generators, wind turbines, gas pressure turbines. We can overcome this design issue with cable connections to jackets or to autonomous buoys with solar panels, which are currently undergoing research [9, 10].</para>
<para>Industrial applications such as oil-fields and production lines use extensive instrumentation, sometimes with the need of a video-feedback from the underwater operations site. Considering the depths at which these cameras should operate, there is also an imperative need for proper lighting of the area; therefore we can anticipate that these nodes will be tethered in order to have a power source at hand.</para>
<para>Battery power problems can in our case be overcome not only by sleep-awake energy-efficient protocols [11&#x02013;13], but also by connectivity to other future systems producing electricity from renewable resources, such as the wave energy converter units classified by the European project Aquatic Renewable Energy Technologies (Aqua-RET) [14]:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Attenuator-type devices, as in <link linkend="F12-1">Figure <xref linkend="F12-1" remap="12.1"/></link>: the Pelamis Wave Energy Converter [15];</para></listitem></itemizedlist>
<fig id="F12-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-1">Figure <xref linkend="F12-1" remap="12.1"/></link></label>
<caption><para>Pelamis wave energy converter, Orkney, U.K.</para></caption>
<graphic xlink:href="graphics/ch12_fig001.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Axial-symmetric point absorbers, as in <link linkend="F12-2">Figure <xref linkend="F12-2" remap="12.2"/></link>: WaveBob [16], AquaBuoy, OE Buoys [17] or PowerBuoy [18];</para></listitem></itemizedlist>
<fig id="F12-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-2">Figure <xref linkend="F12-2" remap="12.2"/></link></label>
<caption><para>Axial-symmetric point absorber buoy.</para></caption>
<graphic xlink:href="graphics/ch12_fig002.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Wave-level oscillation converters: the completely submerged WaveRoller or the surface-mounted Oyster [19];</para></listitem>
<listitem>
<para>Overtopping devices, as in <link linkend="F12-3">Figure <xref linkend="F12-3" remap="12.3"/></link>: Wave Dragon [20];</para></listitem></itemizedlist>
<fig id="F12-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-3">Figure <xref linkend="F12-3" remap="12.3"/></link></label>
<caption><para>Wave Dragon - Overtopping devices principle.</para></caption>
<graphic xlink:href="graphics/ch12_fig003.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Submersible differential pressure devices, as in <link linkend="F12-4">Figure <xref linkend="F12-4" remap="12.4"/></link>: the Archimedes Waveswing [21];</para></listitem></itemizedlist>
<fig id="F12-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-4">Figure <xref linkend="F12-4" remap="12.4"/></link></label>
<caption><para>Archimedes Waveswing (AWS).</para></caption>
<graphic xlink:href="graphics/ch12_fig004.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Oscillating Water Column (OWC) devices.</para></listitem></itemizedlist>
<para>In addition, we are also considering other types of clean-energy production systems at sea:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Wind farms: wind speed at sea is usually far greater than on land; however, compared to their land counterparts, offshore wind turbines are harder to install and require greater technical and financial effort. The distance to land, the water depth and the sea-floor structure are factors that need to be taken into consideration for wind projects at sea. The first offshore wind farm was developed in Denmark in 1991;</para>
<fig id="F12-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-5">Figure <xref linkend="F12-5" remap="12.5"/></link></label>
<caption><para>Wind farms in the North Sea.</para></caption>
<graphic xlink:href="graphics/ch12_fig005.jpg"/>
</fig>
</listitem>
<listitem>
<para>Ocean thermal energy, which uses the temperature difference between surface and deep waters; this difference needs to be at least 20<superscript>o</superscript>C within 100 m of the sea surface. These requirements are usually fulfilled in equatorial regions;</para></listitem>
<listitem>
<para>Tidal waves and ocean currents, such as the Gulf Stream, the Florida Straits and the North Atlantic Drift, possess energy which can be extracted with underwater turbines.</para></listitem></itemizedlist>
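As a back-of-the-envelope check on the ocean thermal option: the temperature difference sets a hard thermodynamic ceiling on efficiency. The sketch below uses assumed, illustrative temperatures (26 °C surface, 6 °C at depth), not figures from any cited project.

```python
# Back-of-the-envelope Carnot limit for an ocean thermal energy converter.
# The temperatures are illustrative assumptions, not data from the chapter.

def carnot_efficiency(t_surface_c: float, t_depth_c: float) -> float:
    """Maximum thermodynamic efficiency between two reservoirs (in Kelvin)."""
    t_hot = t_surface_c + 273.15
    t_cold = t_depth_c + 273.15
    return 1.0 - t_cold / t_hot

# Typical equatorial scenario: 26 C surface water, 6 C at depth (20 C delta).
eta = carnot_efficiency(26.0, 6.0)
print(f"Carnot limit: {eta:.1%}")   # about 6.7 %
```

The single-digit ceiling explains why ocean thermal plants must move very large volumes of water to produce useful power.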
<para>Besides providing power supply facilities, all these devices could themselves be areas of interest for the deployment of our Safe-Net sensors.</para>
</section>
<section class="lev2" id="sec12-2-2">
<title>12.2.2 Communications</title>
<para>Until now, there have been several attempts to deploy underwater sensors that record data during their mission, but the sensors were always recovered afterwards. This does not give the flexibility needed for real-time monitoring situations such as surveillance or environmental and seismic monitoring: the recorded data cannot be accessed until the instruments are recovered, and failures cannot be detected before retrieval, which can easily lead to the complete failure of a mission. Also, the amount of data stored is limited by the capacity of the devices on board the sensors (flash memories, hard disks).</para>
<para>Two possible implementations are buoys with high-speed RF-based communications, or wired connections to some sensor nodes. Communication bandwidth can also be provided by the satellite connections usually present on offshore facilities. If linked to an autonomous buoy, the device provides GPS telemetry and has communication capabilities of its own. Therefore, once the information reaches the surface, radio communications are considered to be provided as standard. Underwater, the typical physical-layer technology is acoustic communication. Radio waves propagate poorly over long distances through sea water and can only be used at extra-low frequencies, below 300 Hz [22]; this requires large antennae and high transmission power, which we would prefer to avoid. Optical waves do not suffer from such high attenuation but are affected by scattering. Moreover, transmitting optical signals requires high precision in pointing the narrow laser beams. The primary advantage of this type of data transmission is the higher theoretical transmission rate, while the disadvantages are the short range and the line-of-sight operation needed. We did not consider it a feasible solution due to marine snow, non-uniform illumination and other possible interferences.</para>
<para>We do not intend to mix different communication protocols with different physical layers, but we analyze the compatibility of each with existing underwater acoustic communications, state-of-the-art protocols and routing algorithms. Our approach will be a hybrid system, like the one in <link linkend="F12-6">Figure <xref linkend="F12-6" remap="12.6"/></link>, that incorporates tethered sensors together with wireless acoustic links wherever absolutely no other solution can be implemented (e.g. a group of sensor nodes anchored to the sea floor near an oil pipe, interconnected to one or more underwater &#x0201C;sinks&#x0201D; which are in charge of relaying data from the ocean-bottom network to a surface station [23]).</para>
<fig id="F12-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-6">Figure <xref linkend="F12-6" remap="12.6"/></link></label>
<caption><para>Possible underwater sensor network deployment nearby Jack-up rig.</para></caption>
<graphic xlink:href="graphics/ch12_fig006.jpg"/>
</fig>
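Relaying in such a hybrid topology amounts to choosing the cheapest path from a sea-floor node to the surface over links of very different cost (acoustic hops are expensive in energy and error rate; tethered and RF hops are cheap). The sketch below illustrates this with a toy shortest-path computation; the node names and link costs are hypothetical, not taken from the chapter.

```python
# Sketch of relay-path selection in a hybrid tethered/acoustic topology.
# Node names and link costs are illustrative assumptions.
import heapq

def cheapest_path(graph, src, dst):
    """Dijkstra over link costs; returns (total_cost, path)."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Acoustic hops are costly (energy, error rate); tethered hops are cheap.
links = {
    "seafloor_sensor": {"sink_A": 5.0, "sink_B": 8.0},   # acoustic links
    "sink_A": {"buoy": 1.0},                             # tethered riser
    "sink_B": {"buoy": 1.0},
    "buoy": {"shore": 0.5},                              # RF/satellite
}
cost, path = cheapest_path(links, "seafloor_sensor", "shore")
print(cost, path)  # 6.5 ['seafloor_sensor', 'sink_A', 'buoy', 'shore']
```

In a real deployment the link weights would come from measured energy-per-bit and packet-loss figures rather than fixed constants.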
<para>Regarding the propagation of acoustic waves in the frequency range we are interested in for multi-level communication between Safe-Net sensor nodes, we are looking into already known models [24]. One of the major problems in fluid dynamics is that the equations of motion are non-linear, which implies that there is no general exact solution. Acoustics represents the first order of approximation, in which the non-linear effects are neglected [25]. Acoustic waves propagate because of the medium&#x02019;s compressibility, and the acoustic (or sound) pressure is the local deviation of pressure caused by a sound wave passing through the local environment. In air, sound pressure can be measured using a microphone; in water, using a hydrophone.</para>
<fig id="F12-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-7">Figure <xref linkend="F12-7" remap="12.7"/></link></label>
<caption><para>Sound pressure diagram: 1 &#x02013; equilibrium; 2 &#x02013; sound; 3 &#x02013; environment pressure; 4 &#x02013; instantaneous sound pressure.</para></caption>
<graphic xlink:href="graphics/ch12_fig007.jpg"/>
</fig>
<para>Considering the case of acoustic wave propagation in real fluids, our general mathematical formalism makes the following assumptions: gravity forces can be neglected, so equilibrium pressure and density take uniform values over the fluid&#x02019;s volume (<emphasis>p<subscript>0</subscript></emphasis> and <emphasis>&#x003C1;<subscript>0</subscript></emphasis>); dissipative effects such as viscosity and thermal conductivity are negligible; the medium is homogeneous, isotropic and perfectly elastic; and the speed of the fluid particles is small (the &#x0201C;<emphasis>small amplitudes</emphasis>&#x0201D; assumption). Therefore, we can write a Taylor expansion for the pressure&#x02013;density fluctuation relationship:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-1.jpg"/></para>
<para>where the partial derivatives are constant for the adiabatic process around the equilibrium density <emphasis>&#x003C1;<subscript>0</subscript></emphasis> of the fluid.</para>
<para>If the density fluctuations are small, meaning <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in266.jpg"/>, then the higher-order terms can be neglected and the adiabatic state equation becomes linear:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-2.jpg"/></para>
<para>The sound pressure <emphasis>p</emphasis> (12.4) is directly related to the particle movement and its amplitude <emphasis>&#x003BE;</emphasis> through Equation (12.3):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-3.jpg"/></para>
<para>where the symbols, together with their SI measurement units, are presented in the following table:</para>
<table-wrap position="float" id="T12-1">
<label><link linkend="T12-1">Table <xref linkend="T12-1" remap="12.1"/></link></label>
<caption><para>Symbol Definitions and Corresponding SI Measurement Units</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Symbol</td>
<td valign="top" align="left">Measurement Unit</td>
<td valign="top" align="left">Description</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><emphasis>p</emphasis></td>
<td valign="top" align="left">Pascal</td>
<td valign="top" align="left">Sound Pressure</td>
</tr>
<tr>
<td valign="top" align="left"><emphasis>f</emphasis></td>
<td valign="top" align="left">Hertz</td>
<td valign="top" align="left">Frequency</td>
</tr>
<tr>
<td valign="top" align="left">&#x003C1;</td>
<td valign="top" align="left">kg/m<superscript>3</superscript></td>
<td valign="top" align="left">Environment Density (constant)</td>
</tr>
<tr>
<td valign="top" align="left">c</td>
<td valign="top" align="left">m/s</td>
<td valign="top" align="left">Sound Speed (constant)</td>
</tr>
<tr>
<td valign="top" align="left">v</td>
<td valign="top" align="left">m/s</td>
<td valign="top" align="left">Particle Speed</td>
</tr>
<tr>
<td valign="top" align="left">&#x003C9;</td>
<td valign="top" align="left">rad / s</td>
<td valign="top" align="left">Angular Frequency</td>
</tr>
<tr>
<td valign="top" align="left">&#x003BE;</td>
<td valign="top" align="left">m</td>
<td valign="top" align="left">Particle Displacement</td>
</tr>
<tr>
<td valign="top" align="left">Z</td>
<td valign="top" align="left">N&#x00B7;s/m<superscript>3</superscript></td>
<td valign="top" align="left">Acoustic Impedance</td>
</tr>
<tr>
<td valign="top" align="left">a</td>
<td valign="top" align="left">m/s<superscript>2</superscript></td>
<td valign="top" align="left">Particle Acceleration</td>
</tr>
<tr>
<td valign="top" align="left">I</td>
<td valign="top" align="left">W/m<superscript>2</superscript></td>
<td valign="top" align="left">Sound Intensity</td>
</tr>
<tr>
<td valign="top" align="left">E</td>
<td valign="top" align="left">W&#x00B7;s/m<superscript>3</superscript></td>
<td valign="top" align="left">Sound Energy Density</td>
</tr>
<tr>
<td valign="top" align="left">P<subscript><emphasis>ac</emphasis></subscript></td>
<td valign="top" align="left">Watt</td>
<td valign="top" align="left">Acoustic Power</td>
</tr>
<tr>
<td valign="top" align="left">A</td>
<td valign="top" align="left">m<superscript>2</superscript></td>
<td valign="top" align="left">Surface</td></tr>
</tbody>
</table>
</table-wrap>
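The quantities in the table are tied together by the standard plane-wave relations Z = &#x003C1;c, v = &#x003C9;&#x003BE;, p = Zv and I = p&#x000B2;/(2&#x003C1;c). The sketch below evaluates them numerically; the seawater values and the displacement amplitude are illustrative assumptions, not figures from the chapter.

```python
# Standard plane-wave relations linking the symbols of Table 12.1.
# The numbers are illustrative assumptions for seawater.
import math

rho = 1025.0      # environment density, kg/m^3
c = 1500.0        # sound speed in seawater, m/s
f = 10e3          # frequency, Hz (typical acoustic-modem band)
xi = 1e-9         # particle displacement amplitude, m

omega = 2 * math.pi * f          # angular frequency, rad/s
v = omega * xi                   # particle speed, m/s
Z = rho * c                      # acoustic impedance, N*s/m^3
p = Z * v                        # sound pressure amplitude, Pa
I = p**2 / (2 * rho * c)         # sound intensity, W/m^2 (plane wave)

print(f"Z = {Z:.3g} N*s/m^3, p = {p:.3g} Pa, I = {I:.3g} W/m^2")
```

Note how the large impedance of water (about 1.5 MN&#x000B7;s/m&#x000B3;, versus roughly 400 N&#x000B7;s/m&#x000B3; for air) makes even nanometre-scale particle displacements produce measurable pressures.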
<para>The fundamental attenuation describes the power loss of a tone at frequency <emphasis>f</emphasis> as it travels over a distance <emphasis>d</emphasis>. The first level of our summary description takes into consideration this loss over the transmission distance <emphasis>d</emphasis>. The second level calculates the site-specific loss caused by reflections and refractions at the upper and lower surfaces, i.e. the sea surface and bottom, and also the sound-speed variations with depth; the result is a better prediction model for a specific transmitter. The third level addresses the apparently random power shifts of the received signal by averaging over a period of time; these changes are due to slow variations of the propagation environment, e.g. tides.</para>
<para>All these phenomena are relevant for determining the transmission power needed for efficient and successful underwater communication. We can also think of a separate model addressing much faster changes of the instantaneous signal power at any given time, but on a far smaller scale. The signal-to-noise ratio for different transmission distances, as a function of frequency, is shown in <link linkend="F12-8">Figure <xref linkend="F12-8" remap="12.8"/></link>. Sound absorption limits the bandwidth which can be used for transmission and makes it dependent on the distance:</para>
<fig id="F12-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-8">Figure <xref linkend="F12-8" remap="12.8"/></link></label>
<caption><para>Signal-Noise Ratio (SNR).</para></caption>
<graphic xlink:href="graphics/ch12_fig008.jpg"/>
</fig>
<para>By evaluating the factor <emphasis>A(d,f) N(f)</emphasis>, which combines the attenuation <emphasis>A(d,f)</emphasis> under ideal propagation with the typical spectral power of the background noise <emphasis>N(f)</emphasis>, which drops by 18 dB per decade, we find the combined effect of attenuation and noise in underwater acoustics. This characteristic describes the behaviour of the SNR around the frequency <emphasis>f</emphasis>. It shows that high frequencies are attenuated quickly over long distances, which forces most modems to operate in a narrow bandwidth, a few kHz at most, and suggests the optimal frequency for a specific transmission [26]. It also shows that the available bandwidth, and implicitly the effective transmission rate, decreases with distance; therefore, the development of any large network should start by determining its specific frequency and reserving a bandwidth around it [27].</para>
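A commonly used empirical model for the absorption part of A(d,f) is Thorp's formula; combining it with the 18 dB-per-decade noise approximation mentioned above gives a rough way to locate the distance-dependent optimal frequency. The spreading factor, noise level and search grid below are illustrative assumptions, not values from the chapter.

```python
# Sketch of the narrowband factor A(d,f)*N(f) using Thorp's empirical
# absorption formula and an 18 dB/decade noise approximation.
# The spreading factor k and the noise constant are assumptions.
import math

def thorp_db_per_km(f_khz: float) -> float:
    """Thorp absorption coefficient in dB/km, frequency in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44.0 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

def attenuation_db(d_km: float, f_khz: float, k: float = 1.5) -> float:
    """A(d,f) in dB: geometric spreading (factor k) plus absorption."""
    spreading = k * 10 * math.log10(d_km * 1000)
    return spreading + thorp_db_per_km(f_khz) * d_km

def noise_db(f_khz: float) -> float:
    """Approximate noise spectral density: drops 18 dB per decade."""
    return 50.0 - 18.0 * math.log10(f_khz)

def optimal_frequency_khz(d_km: float) -> float:
    """Frequency minimizing A(d,f)+N(f) in dB, on a coarse grid."""
    grid = [f / 10 for f in range(1, 1000)]   # 0.1 .. 99.9 kHz
    return min(grid, key=lambda f: attenuation_db(d_km, f) + noise_db(f))

# The optimal carrier frequency decreases as the link gets longer:
print(optimal_frequency_khz(1.0), optimal_frequency_khz(10.0))
```

Running the sketch reproduces the qualitative point of the paragraph: a 10 km link is pushed to a much lower carrier frequency (and hence a narrower usable band) than a 1 km link.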
</section>
<section class="lev2" id="sec12-2-3">
<title>12.2.3 Maintenance</title>
<para>The ocean is a very harsh environment, and underwater sensors are prone to failures because of fouling and corrosion. The sensor&#x02019;s construction could include one miniaturized copper-alloy anode for corrosion protection, as well as one miniaturized aluminium-alloy anode to fight fouling. Modern anti-fouling systems already installed on rigs use microprocessor-controlled anodes; the current flowing to each anti-fouling and anti-corrosion anode is quite low, and the technology could be adapted by miniaturizing the existing anodes. Although we are considering the environmental impact of deploying such a high number of underwater devices, our primary concerns are the feasibility and durability of the network, and how to address these factors so that the network can expand over time while remaining backwards compatible with already-deployed nodes. Besides backwards-compatible communication protocols, underwater Safe-Net nodes must possess self-configuration capabilities, i.e. they must be able to coordinate their operation, their location or movement, and their data-communication handshake protocols by themselves. To state the obvious, this is only possible if the Safe-Net nodes are resistant enough to the salty, corrosive water of the sea.</para>
</section>
<section class="lev2" id="sec12-2-4">
<title>12.2.4 Law and Finance</title>
<para>At the end of 2010, the European Commission issued a statement concerning safety regulations for offshore oil and gas drilling operations, with the declared purpose of developing new European Union laws for oil rigs. The primary objective of these measures is to enforce in this domain the highest safety standards in the world to date, in order to prevent ecological disasters like the one in the Gulf of Mexico.</para>
<para>Moreover, during March&#x02013;May 2011, following a public consultation on the European Union legal framework for current marine exploration and production safety standards, the community experts concluded that although these activities generally meet high safety standards, the standards vary from one company to another because of the different laws applying in each country. Therefore, the next legislative proposal should enforce common ground for all E.U. members concerning prevention, reaction times and measures in case of emergency, as well as <emphasis>financial liability</emphasis>.</para>
<para>According to the top-10 list of companies by revenue in the fiscal year 2010&#x02013;2011, 7 are oil and gas companies, which together accounted for $2.54 billion USD of a total $3.43 billion USD, i.e. more than 74% of these revenues [28]; therefore, the cost of deploying such Safe-Nets around drilling operations is rather small and the benefits would be huge. Governments could issue laws obliging oil and gas companies working at sea to use these sensor networks every time a new site is surveyed or a new jacket is installed. This could also apply to existing oil rigs, to jack-ups which move between different sites, or even to subsea production sites. The ability to have small devices physically distributed near offshore oil-field operations brings new opportunities for emergency-case interoperability, up to the level of disaster management systems [29].</para>
<para>Ocean sampling networks have been tried out in the Monterey Bay area, where networks of sensors and AUVs, such as the Odyssey-class AUVs, performed synoptic, cooperative adaptive sampling of the 3D coastal ocean environment. Seaweb is an example of a large underwater sensor network developed for the military purpose of detecting and monitoring submarines [30]. Another example is the consortium formed by the Massachusetts Institute of Technology (MIT) and Australia&#x02019;s Commonwealth Scientific and Industrial Research Organisation, which collected data with fixed sensors and with mobile sensors mounted on autonomous underwater vehicles. The network was only temporary and lasted only a few days around the coasts of Australia [31].</para>
<para>The Ocean Observatories Initiative represents one of the largest ongoing underwater cabled networks; it eliminated both the acoustic communication and the power supply problems right from the start by using already existing underwater cables or new installations. The investment in the Neptune project was huge, approximately $153 billion [32], but the idea seems quite bright if we look at the most important underwater cables already carrying data under the oceans (<link linkend="F12-9">Figure <xref linkend="F12-9" remap="12.9"/></link>, courtesy of TeleGeography.com). In 1956, North America was connected to Europe by an undersea cable called TAT-1, the world&#x02019;s first submarine telephone system, although telegraph cables had crossed the ocean for a century. Trans-Atlantic cable capacity soared over the next 50 years, with data flowing back and forth between the continents reaching nearly 10 Tbps in 2008.</para>
</section>
<section class="lev2" id="sec12-2-5">
<title>12.2.5 Possible Applications</title>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Seismic monitoring. Frequent seismic monitoring is of importance in oil extraction; studies of variation in the reservoir over time are called 4-D seismic and are useful for judging field performance and motivating intervention;</para></listitem>
<listitem>
<para>Disaster prevention and environmental monitoring. The sensor networks for seismic activity mentioned before could also be used to issue tsunami warnings to coastal areas. While there is always potential for sudden devastation (see Japan, 2011), warning systems can be quite effective. There is also the possibility of pollution monitoring: chemical, biological and so on;</para></listitem>
<listitem>
<para>Weather forecast improvement: monitoring ocean currents and winds can improve ocean weather forecasts, detect climate change, and help understand and predict the effect of human activities on marine ecosystems;</para></listitem>
<listitem>
<para>Assisted navigation: sensors can be used to locate dangerous rocks in shallow waters, and buoys can signal the presence of submerged wrecks or areas potentially dangerous for navigation;</para></listitem>
<listitem>
<para>Surveillance of coast lines or borders, detecting the presence of ships in a country&#x02019;s marine belt. Fixed underwater sensors can monitor areas for surveillance, reconnaissance or even intrusion detection.</para></listitem></itemizedlist>
<fig id="F12-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-9">Figure <xref linkend="F12-9" remap="12.9"/></link></label>
<caption><para>Most important underwater data &#x00026; voice cables (2008).</para></caption>
<graphic xlink:href="graphics/ch12_fig009.jpg"/>
</fig>
</section>
</section>
<section class="lev1" id="sec12-3">
<title>12.3 Mathematical Model</title>
<para>We introduce the class of systems considered when conducting the research for the PhD thesis, as well as definitions of configurations of sensors and remote actuators. This class of distributed parameter systems, which captures the important concepts for parameter identification and optimal experiment design, has been adapted from the theoretical and practical research &#x0201C;<emphasis>Optimal Sensing and Actuation Policies for Networked Mobile Agents in a Class of Cyber-Physical Systems</emphasis>&#x0201D; [33]. That study presents models for aerial drones in the U.S.A. which take high-resolution pictures of agricultural terrain; its algorithm correlates data from meteorological stations on the ground by matching these pictures with the low-resolution ones taken from satellites. The purpose was to introduce a new methodology for transforming low-resolution remote sensing data about soil moisture into higher-resolution information carrying better knowledge for use in hydrologic studies or water management decision making. The study aimed to obtain a high-resolution data set from a combination of ground measurements from instrumentation stations and low-altitude remote sensing, typically images obtained from a UAV, and it introduces optimal trajectories and launching points of UAV remote sensors in order to solve the problem of maximum terrain coverage with the least hardware, which is expensive in their case.</para>
<para>We have taken this study further by matching the agricultural terrain to our underwater environment and drawing an analogy between the fixed ground instrumentation (the meteorological stations) and the fixed offshore structures already in place throughout the sea. The mobile drones are represented by remotely operated vehicles or autonomous underwater vehicles, which can carry data collection sensors on board and can be used as mobile network nodes. The optimal distribution pattern of the nodes in the underwater environment can be extrapolated only by neglecting the environmental constants, which were not taken into account by the study [33]; this issue remains to be investigated.</para>
<section class="lev2" id="sec12-3-1">
<title>12.3.1 System Definition</title>
<para>The class of distributed parameter systems considered can be described by the state Equation [34]:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-5.jpg"/></para>
<para>where <emphasis>Y = L<superscript>2</superscript>(&#x003A9;)</emphasis> is the state space and <emphasis>&#x003A9;</emphasis> is a bounded and open subset of &#x0211D;<superscript><emphasis>n</emphasis></superscript> with a sufficiently regular boundary <emphasis>&#x00393; = &#x02202;&#x003A9;</emphasis>. The domain <emphasis>&#x003A9;</emphasis> is the geometrical support of the considered system (12.5). <emphasis>A</emphasis> is a linear operator describing the dynamics of the system; <emphasis>B</emphasis>, an element of the set of linear maps from <emphasis>U</emphasis> to <emphasis>Y</emphasis>, is the input operator; <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in272.jpg"/> is the space of integrable functions <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in272-1.jpg"/> such that the function <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in272-2.jpg"/> is integrable on <emphasis>]0,T[</emphasis>, and <emphasis>U</emphasis> is a Hilbert control space. In addition, the considered system has the following output equation:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-6.jpg"/></para>
<para>where <inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in273.jpg"/> and <emphasis>Z</emphasis> is a Hilbert observation space. We can adapt the definitions of actuators, sensors, controllability and observability to system classes formulated in the state-equation form (12.5).</para>
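Written out, the abstract pair (12.5)-(12.6) takes the standard linear state-space form. This is a conventional restatement using the operators defined above; the initial state y_0 is an assumed given, not notation from the chapter.

```latex
% Conventional restatement of (12.5)-(12.6); y_0 is an assumed initial state.
\begin{align}
  \dot{y}(t) &= A\,y(t) + B\,u(t), \qquad y(0) = y_0 \in Y, \quad t \in \,]0,T[\,,\\
  z(t)       &= C\,y(t),
\end{align}
```

with the control u drawn from the space of U-valued integrable functions on ]0,T[ and C the output operator mapping the state space Y into the observation space Z.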
<para>The traditional approach to the analysis of distributed parameter systems is fairly abstract in its purely mathematical form. Nevertheless, all the characteristics of the system related to its spatial variables and to the geometrical aspects of its inputs and outputs must be considered. To introduce a more practical, engineering-oriented approach, the study [33] introduces the concepts of actuators and sensors from the distributed parameter systems point of view. With these concepts at hand, we can describe more practically the relationship between a system and its environment, in our case sea or ocean water. The study can then be extended beyond the operators <emphasis>A</emphasis>, <emphasis>B</emphasis> and <emphasis>C</emphasis> by taking into consideration the spatial distribution, location and number of sensors and actuators.</para>
<para>The sensors&#x02019; measurements are, in fact, observations of the system and play a passive role. Actuators, on the other hand, provide a forcing input to the system. Sensors and actuators can be of different natures: zone, point-wise or domain-distributed; internal or boundary; stationary or mobile. An additional important notion is the concept of a region of a domain, generally defined as a subdomain of <emphasis>&#x003A9;</emphasis>. Instead of considering a problem on the totality of <emphasis>&#x003A9;</emphasis>, the focus can be concentrated on a subregion <emphasis>&#x003C9; &#x02282; &#x003A9;</emphasis> only, while the results can still be extended to <emphasis>&#x003C9; = &#x003A9;</emphasis>. Such considerations allow the generalization of the different definitions and methodologies developed in previous works on distributed parameter systems analysis and control.</para>
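As a purely illustrative discretization of this abstract setting, the sketch below simulates the pair (12.5)-(12.6) as 1-D diffusion on Omega = (0,1) with one point-wise actuator and one zone sensor. The grid size, locations and constants are all assumptions made for the example.

```python
# Finite-difference sketch of the abstract system (12.5)-(12.6):
# 1-D diffusion on Omega = (0, 1), one point-wise actuator, one zone
# sensor. All numbers are illustrative assumptions.
import numpy as np

n = 50
dx = 1.0 / (n + 1)
x = np.linspace(dx, 1 - dx, n)       # interior grid points of Omega

# A: discretized Laplacian (dynamics operator, Dirichlet boundary).
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2

# B: point-wise actuator at b = 0.3 (discrete Dirac: 1/dx at nearest node).
B = np.zeros((n, 1))
B[np.argmin(np.abs(x - 0.3)), 0] = 1.0 / dx

# C: zone sensor averaging the state over omega = [0.6, 0.8].
zone = (x >= 0.6) & (x <= 0.8)
C = (zone / zone.sum()).reshape(1, n)

# Forward-Euler simulation of y' = Ay + Bu with constant input u = 1.
y, dt = np.zeros(n), 1e-5            # dt < dx^2/2 keeps Euler stable
for _ in range(20000):
    y = y + dt * (A @ y + B[:, 0] * 1.0)

z = float((C @ y)[0])                # output equation z = Cy
print("zone-sensor reading z =", z)
```

The point to notice is that B and C are just the discrete counterparts of the actuator and sensor spatial distributions: changing where the actuator sits or which subregion the sensor averages over changes only these two arrays, not the dynamics operator A.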
</section>
<section class="lev2" id="sec12-3-2">
<title>12.3.2 Actuator Definition</title>
<para>Let <emphasis>&#x003A9;</emphasis> be an open and bounded subset of &#x0211D;<superscript><emphasis>n</emphasis></superscript> with a sufficiently smooth boundary <emphasis>&#x00393; = &#x02202;&#x003A9;</emphasis> [35]. An actuator is a couple (<emphasis>D, g</emphasis>), where <emphasis>D</emphasis> represents the geometrical support of the actuator, <emphasis>D = supp(g)</emphasis> &#x02282; &#x003A9;, and <emphasis>g</emphasis> is its spatial distribution.</para>
<para>An actuator (<emphasis>D, g)</emphasis> is said to be:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>A zone actuator if <emphasis>D</emphasis> is a non-empty sub-region of <emphasis>&#x003A9;</emphasis>;</para></listitem>
<listitem>
<para>A point-wise actuator if <emphasis>D</emphasis> is reduced to a point <emphasis>b</emphasis> &#x02208; &#x003A9;. In this case, we have <emphasis>g = &#x003B4;<subscript>b</subscript></emphasis> where <emphasis>&#x003B4;<subscript>b</subscript></emphasis> is the Dirac function concentrated at <emphasis>b</emphasis>. The actuator is denoted <emphasis>(b,&#x003B4;<subscript>b</subscript>)</emphasis>.</para></listitem></itemizedlist>
<para>An actuator, zone or point-wise, is said to be a boundary actuator if its support <emphasis>D</emphasis> &#x02282; <emphasis>&#x00393;</emphasis>. An illustration of the actuators&#x02019; supports is given in <link linkend="F12-10">Figure <xref linkend="F12-10" remap="12.10"/></link>:</para>
<fig id="F12-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-10">Figure <xref linkend="F12-10" remap="12.10"/></link></label>
<caption><para>Graphical representation of actuators&#x02019; supports.</para></caption>
<graphic xlink:href="graphics/ch12_fig0010.jpg"/>
</fig>
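As a minimal numerical sketch of the two actuator types (the discretization, grid and values below are our own illustrative assumptions, not part of [33]): on a grid over &#x003A9; = (0, 1), a zone actuator&#x02019;s distribution g is an ordinary function supported on D, while a point-wise actuator is approximated by a single grid cell carrying unit mass.

```python
import numpy as np

# Illustrative discretization of Omega = (0, 1); grid and values are our
# own assumptions, not taken from [33].
n = 1000
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Zone actuator (D, g): g is supported on the sub-region D = [0.2, 0.4].
D = (x >= 0.2) & (x <= 0.4)
g_zone = np.where(D, 1.0, 0.0)

# Point-wise actuator (b, delta_b) at b = 0.7: the Dirac distribution is
# approximated by a single grid cell carrying unit mass.
b_idx = int(np.argmin(np.abs(x - 0.7)))
g_point = np.zeros(n)
g_point[b_idx] = 1.0 / dx

mass_zone = g_zone.sum() * dx    # ~0.2, the length of D
mass_point = g_point.sum() * dx  # ~1.0, unit mass of the discrete Dirac
```

On refinement of the grid, the discrete Dirac converges (in the distributional sense) to &#x003B4;<subscript>b</subscript>, which is why the point-wise actuator can be treated as a limiting case of the zone actuator.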
<para>In the previous definition, we assume that <emphasis>g</emphasis> &#x02208; <emphasis>L</emphasis><superscript>2</superscript>(D). For a collection of <emphasis>p</emphasis> actuators (<emphasis>D<subscript>i</subscript>,g<subscript>i</subscript></emphasis>)<subscript>1&#x02264;i&#x02264;<emphasis>p</emphasis></subscript>, we have <emphasis>U</emphasis> = &#x0211D;<superscript><emphasis>p</emphasis></superscript>, <emphasis>B</emphasis> : &#x0211D;<superscript><emphasis>p</emphasis></superscript> &#x02192; <emphasis>L</emphasis><superscript>2</superscript>(&#x03A9;) and:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-7.jpg"/></para>
<para><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/in274.jpg"/> and <emphasis>g<subscript>i</subscript></emphasis> &#x02208; <emphasis>L</emphasis><superscript>2</superscript>(<emphasis>D<subscript>i</subscript></emphasis>) with <emphasis>D<subscript>i</subscript></emphasis> = <emphasis>supp (g<subscript>i</subscript>)</emphasis> &#x02282; &#x003A9; for <emphasis>i</emphasis> = 1,&#x02026;,<emphasis>p</emphasis> and <emphasis>D<subscript>i</subscript> &#x02229; D<subscript>j</subscript></emphasis> = &#x02205; for <emphasis>i &#x02260; j.</emphasis> So, we have the following:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-8.jpg"/></para>
<para>where <emphasis>M</emphasis> <superscript><emphasis>T</emphasis></superscript> is the transpose matrix of <emphasis>M</emphasis> and &#x0003C;&#x00B7;,&#x00B7;>=&#x0003C;&#x00B7;,&#x00B7;><emphasis><subscript>Y</subscript></emphasis> is the inner product in <emphasis>Y</emphasis> and for <emphasis>v</emphasis> &#x02208; <emphasis>Y</emphasis> , if <emphasis>supp(v)=D</emphasis>, we have:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-9.jpg"/></para>
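The action of <emphasis>B</emphasis> and of its adjoint <emphasis>B</emphasis>* can be sketched numerically as follows; this is a hypothetical discretization with indicator-function distributions g<subscript>i</subscript> on disjoint supports, and all names and values are ours, not from [33].

```python
import numpy as np

# Hypothetical discretization of Omega = (0, 1) with p = 3 zone actuators
# (disjoint supports D_i, indicator distributions g_i); values are ours.
n, p = 400, 3
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
supports = [(0.05, 0.25), (0.40, 0.55), (0.70, 0.90)]
G = np.stack([np.where((x >= a) & (x <= b), 1.0, 0.0) for a, b in supports])

def B(u):
    """B : R^p -> L^2(Omega), (B u)(x) = sum_i g_i(x) u_i."""
    return G.T @ u

def B_star(v):
    """Adjoint B* : L^2(Omega) -> R^p, (B* v)_i = <g_i, v>."""
    return G @ v * dx

# The adjoint identity <B u, v>_{L^2} = <u, B* v>_{R^p} holds on the grid.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(p), rng.standard_normal(n)
lhs = np.dot(B(u), v) * dx
rhs = np.dot(u, B_star(v))
assert abs(lhs - rhs) < 1e-10
```

The disjointness of the supports D<subscript>i</subscript> makes the rows of the discrete operator orthogonal, mirroring the condition D<subscript>i</subscript> &#x02229; D<subscript>j</subscript> = &#x02205; in the text.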
<para>When <emphasis>D</emphasis> does not depend on time, the actuator <emphasis>(D,g)</emphasis> is said to be fixed or stationary. Otherwise, it is a moving or mobile actuator denoted by <emphasis>(D<subscript>t</subscript>,g<subscript>t</subscript>)</emphasis>, where <emphasis>D(t)</emphasis> and <emphasis>g(t)</emphasis> are, respectively, the geometrical support and the spatial distribution of the actuator at time <emphasis>t</emphasis>, as in <link linkend="F12-11">Figure <xref linkend="F12-11" remap="12.11"/></link>:</para>
<fig id="F12-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-11">Figure <xref linkend="F12-11" remap="12.11"/></link></label>
<caption><para>Illustration of the geometrical support and spatial distribution of an actuator.</para></caption>
<graphic xlink:href="graphics/ch12_fig0011.jpg"/>
</fig>
</section>
<section class="lev2" id="sec12-3-3">
<title>12.3.3 Sensor Definition</title>
<para>A definition of sensors from the distributed parameter systems point of view is provided by [35]: a sensor is a couple <emphasis>(D,h)</emphasis> where <emphasis>D</emphasis> is the support of the sensor, <emphasis>D = supp(h)</emphasis> &#x02282; &#x003A9;, and <emphasis>h</emphasis> is its spatial distribution.</para>
<para>A graphical representation of the sensors&#x02019; supports is given in <link linkend="F12-12">Figure <xref linkend="F12-12" remap="12.12"/></link>:</para>
<fig id="F12-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-12">Figure <xref linkend="F12-12" remap="12.12"/></link></label>
<caption><para>Graphical representation of the sensor supports.</para></caption>
<graphic xlink:href="graphics/ch12_fig0012.jpg"/>
</fig>
<para>It is usually assumed that <emphasis>h &#x02208; L</emphasis><superscript>2</superscript>(D). Similarly, we can define zone or point-wise, internal or boundary, fixed or moving sensors. If the output of the system is given by means of <emphasis>q</emphasis> zone sensors (<emphasis>D<subscript>i</subscript>,h<subscript>i</subscript></emphasis>)<subscript>1&#x02264;<emphasis>i</emphasis>&#x02264;<emphasis>q</emphasis></subscript> with <emphasis>h<subscript>i</subscript></emphasis> &#x02208; <emphasis>L</emphasis><superscript>2</superscript>(<emphasis>D<subscript>i</subscript></emphasis>), <emphasis>D<subscript>i</subscript></emphasis> = <emphasis>supp</emphasis>(<emphasis>h<subscript>i</subscript></emphasis>) &#x02282; &#x003A9; for <emphasis>i</emphasis> = 1,&#x02026;,<emphasis>q</emphasis> and <emphasis>D<subscript>i</subscript></emphasis> &#x02229; <emphasis>D<subscript>j</subscript></emphasis> = &#x02205; if <emphasis>i &#x02260; j</emphasis>, then in the zone case, the distributed parameter system&#x02019;s output operator <emphasis>C</emphasis> is defined by <emphasis>C : L</emphasis><superscript>2</superscript>(&#x03A9;) &#x02192; &#x0211D;<superscript><emphasis>q</emphasis></superscript>:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-10.jpg"/></para>
<para>And the output is given by:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-11.jpg"/></para>
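The zone-sensor output operator <emphasis>C</emphasis> can be sketched in the same discrete setting; the supports, distributions and the example state z below are our own hypothetical choices.

```python
import numpy as np

# Same hypothetical grid as before; q = 2 zone sensors (D_i, h_i) with
# disjoint supports and indicator distributions (our own example values).
n, q = 400, 2
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
H = np.stack([np.where((x >= 0.1) & (x <= 0.3), 1.0, 0.0),
              np.where((x >= 0.6) & (x <= 0.8), 1.0, 0.0)])

def C(z):
    """C : L^2(Omega) -> R^q, y_i = <h_i, z>, the integral of z over D_i."""
    return H @ z * dx

# Example state z(x) = sin(pi x): each component of y is the integral of
# the state over the corresponding sensor support.
z = np.sin(np.pi * x)
y = C(z)  # 2-vector of zone measurements
```

Each component of y aggregates the state over one sensor region, which is exactly why zone sensors smooth out local features that a point-wise sensor would resolve.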
<para>A sensor <emphasis>(D,h)</emphasis> is a zone sensor if <emphasis>D</emphasis> is a non-empty sub-region of <emphasis>&#x003A9;</emphasis>. The sensor <emphasis>(D,h)</emphasis> is a point-wise sensor if <emphasis>D</emphasis> is reduced to a point <emphasis>c</emphasis> &#x02208; &#x003A9; and in this case <emphasis>h = &#x003B4;<subscript>c</subscript></emphasis> is the Dirac function concentrated at <emphasis>c</emphasis>. The sensor is denoted as (<emphasis>c,</emphasis>&#x003B4;<subscript><emphasis>c</emphasis></subscript>). If <emphasis>D</emphasis> &#x02282; &#x00393; = &#x02202;&#x003A9;, the sensor <emphasis>(D,h)</emphasis> is called a boundary sensor. If <emphasis>D</emphasis> does not depend on time, the sensor <emphasis>(D,h)</emphasis> is said to be fixed or stationary; otherwise it is said to be mobile and is denoted as <emphasis>(D<subscript>t</subscript>,h<subscript>t</subscript>)</emphasis>. In the case of <emphasis>q</emphasis> point-wise fixed sensors located at (<emphasis>c<subscript>i</subscript></emphasis>)<subscript>1&#x02264;<emphasis>i</emphasis>&#x02264;<emphasis>q</emphasis></subscript>, the output function is a <emphasis>q</emphasis>-vector given by the relationship:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq12-12.jpg"/></para>
<para>where <emphasis>c<subscript>i</subscript></emphasis> is the position of the <emphasis>i</emphasis>-th sensor and <emphasis>y(t,c<subscript>i</subscript></emphasis>) is the state of the system at <emphasis>c<subscript>i</subscript></emphasis> at a given time <emphasis>t</emphasis>. Based on [36], the study [33] also defines the notions of observability and local controllability in the sense of distributed parameter systems. [33] also shows that, due to the nature of the parameter identification problem, the abstract operator-theoretic formalism used above to define the dynamics of a distributed parameter system is not convenient; a formalism based on <emphasis>n</emphasis> partial differential equations is used instead. In this setup, the sensor location and clustering phenomenon problem is formulated in terms of the Fisher Information Matrix (FIM) [37], a well-known performance measure when searching for the best measurements, widely used in optimum experimental design theory for lumped systems [38]. Its inverse constitutes an approximation of the covariance matrix for the estimate of <emphasis>&#x00398;</emphasis>. However, there is a serious issue in the FIM framework of optimal measurements for parameter estimation of distributed parameter systems: the dependence of the solution on the initial guess of the parameters [39]. The dependence of the optimal location on <emphasis>&#x00398;</emphasis> is very problematic; however, some <emphasis>robust design</emphasis> techniques have been developed to minimize or circumvent this influence, and we propose similar methodologies.</para>
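To make the FIM-based placement criterion and the clustering phenomenon concrete, the following is a hedged toy sketch: the two-parameter field, the sensitivity model and the candidate grid are entirely our assumptions, not the formulation of [37&#x02013;39].

```python
import numpy as np
from itertools import combinations

# Toy model: a field y(x; theta) = theta_1 sin(pi x) + theta_2 sin(2 pi x),
# so the sensitivity of the output of a point sensor located at c is
# s(c) = dy/dtheta = (sin(pi c), sin(2 pi c)). Field, sensitivity model and
# candidate grid are illustrative assumptions.
def sensitivity(c):
    return np.array([np.sin(np.pi * c), np.sin(2.0 * np.pi * c)])

def fim(locations):
    """FIM of point-wise sensors: M = sum_j s(c_j) s(c_j)^T."""
    return sum(np.outer(sensitivity(c), sensitivity(c)) for c in locations)

# Brute-force D-optimal pair (maximize det M) over 19 candidate locations.
candidates = np.linspace(0.05, 0.95, 19)
best = max(combinations(candidates, 2), key=lambda L: np.linalg.det(fim(L)))

# The clustering phenomenon: co-located sensors give a singular (rank-1)
# FIM, i.e. no information about one parameter direction.
assert np.linalg.det(fim(best)) > np.linalg.det(fim((0.25, 0.25)))
```

The final assertion illustrates why clustered sensor configurations are penalized by D-optimal designs: duplicated measurement locations add no independent information about <emphasis>&#x00398;</emphasis>.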
<para>By analogy with the study [33], we can try to optimize this solution for underwater points of interest. But in our case, of course, the problems are much more complex because of the physical and chemical properties of the environment.</para>
<para>We can consider two communication architectures for underwater Safe-Nets. One is a two-dimensional architecture, where sensors are anchored to the ocean bottom; the other is a three-dimensional architecture, where sensors float at different depths, covering the entire monitored volume. While the former is designed for networks whose main objective is to monitor the ocean bottom, the latter is more suitable for detecting and observing phenomena in three-dimensional space that cannot be adequately observed by means of ocean-bottom sensor nodes. The mathematical model above refers only to the two-dimensional architecture, and we are pursuing further research on the three-dimensional optimization, especially regarding the sensor-clustering phenomenon.</para>
</section>
</section>
<section class="lev1" id="sec12-4">
<title>12.4 ROV</title>
<para>A remotely operated vehicle (ROV) is a non-autonomous underwater robot, commonly used in deep-water industries such as offshore hydrocarbon extraction. An ROV may sometimes be called a remotely operated underwater vehicle to distinguish it from remote-control vehicles operating on land or in the air. ROVs are unoccupied, highly manoeuvrable and operated by a person aboard a vessel by means of commands sent through a tether.</para>
<para>They are linked to the ship by this tether (sometimes referred to as an umbilical cable), which is a group of cables that carry electrical power, video and data signals back and forth between the operator and the vehicle. The ROVs are used in offshore oilfield production sites, underwater pipelines inspection, welding operations, subsea BOP (Blow-Out Preventer) manipulation as well as other tasks:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Seabed Mining &#x02013; deposits of interest: gas hydrates, manganese nodules, metals and diamonds;</para></listitem>
<listitem>
<para>Aggregates Industry &#x02013; used to monitor the action and effectiveness of suction pipes during extraction;</para></listitem>
<listitem>
<para>Cable and Node placements &#x02013; 4D or time lapse Seismic investigation of crowded offshore oilfields;</para></listitem>
<listitem>
<para>Jack-up &#x00026; Semi-Submersible surveys &#x02013; preparation and arrival of a Jack-up or Semi-Submersible drilling rig;</para></listitem>
<listitem>
<para>Drilling &#x00026; Wellhead support &#x02013; monitoring drilling operations, installation/removal of template &#x00026; Blow-Out Preventers (BOP), open-hole drilling (Bubble Watch), regular inspections on BOP, debris removal, IRM and in-field maintenance support (Well servicing);</para></listitem>
<listitem>
<para>Decommissioning of Platforms / Subsea Structures &#x02013; safe and environmentally friendly dismantling of structures;</para></listitem>
<listitem>
<para>Geotechnical Investigation &#x02013; Pipeline route surveys, Pipeline Lay &#x02013; Startup, Touchdown monitoring, Pipeline Pull-ins, Pipeline crossings, Pipeline Lay-downs, Pipeline Metrology, Pipeline Connections, Post-lay Survey, Regular Inspections;</para></listitem>
<listitem>
<para>Submarine Cables &#x02013; Route Surveys, Cable Lay, Touchdown monitoring, Cable Post-Lay, Cable Burial;</para></listitem>
<listitem>
<para>Ocean Research &#x02013; Seabed sampling and surveys;</para></listitem>
<listitem>
<para>Nuclear Industry &#x02013; Inspections, Intervention and Decommissioning of Nuclear Power Plants;</para></listitem>
<listitem>
<para>Commercial Salvage &#x02013; Insurance investigation and assessment surveys, Salvage of Sunken Vessels, Cargoes, Equipment and Hazardous Cargo Recovery;</para></listitem>
<listitem>
<para>Vessel and Port Inspections &#x02013; Investigations, Monitoring of Ports and homeland security.</para></listitem></itemizedlist>
<para>We are going to use the PerrySlingsby Triton XLS and XLR models of remotely operated vehicles, which are currently available in the Black Sea area. While keeping the bigger goal in mind (deploying such networks on a large scale), we can for now only consider a small test bed, and before any physical implementation we create simulation scenarios on the VMAX ROV Simulator. Simulation helps prevent damage to the ROV itself or to any of the subsea structures we encounter. It also reveals design situations that are impossible in real life: for example, the ROV&#x02019;s robotic arms have very good dexterity and their movement is described by many degrees of freedom, yet we sometimes run into the limits of motion, and in certain situations deploying objects in some positions may prove difficult or even impossible. We address these hypothetical situations and try to find the best solutions for securely deploying the sensors, anchored to the sea floor or tethered to metallic or concrete structures: jackets, jack-up legs, autonomous buoys, subsea well production heads, offshore wind farm production sites, and so forth.</para>
<para>In the Black Sea area, operating along Romania&#x02019;s territorial sea coast line, we identified four working-class ROVs, of which two are manufactured by PerrySlingsby U.K.: one Triton XLX and one Triton XLR (the first prototype of its kind), which led to the models used in our simulation.</para>
<section class="lev2" id="sec12-4-1">
<title>12.4.1 ROV Manipulator Systems</title>
<para>Schilling Robotics&#x02019; TITAN 4 manipulator with 7 degrees of freedom (<link linkend="F12-13">Figure <xref linkend="F12-13" remap="12.13"/></link>) is the industry&#x02019;s premier system, offering an optimum combination of dexterity and strength. Hundreds of TITAN manipulators are in use worldwide every day, and they are the predominant manipulator of choice for work-class ROV systems. Constructed from titanium, the TITAN 4 is uniquely capable of withstanding the industry&#x02019;s harsh environments and repetitive tasks.</para>
<fig id="F12-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-13">Figure <xref linkend="F12-13" remap="12.13"/></link></label>
<caption><para>Titan 4 Manipulator 7-F.</para></caption>
<graphic xlink:href="graphics/ch12_fig0013.jpg"/>
</fig>
<fig id="F12-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-14">Figure <xref linkend="F12-14" remap="12.14"/></link></label>
<caption><para>Master Arm Control.</para></caption>
<graphic xlink:href="graphics/ch12_fig0014.jpg"/>
</fig>
<para>The movement of the 7-function Master Arm Control (<link linkend="F12-14">Figure <xref linkend="F12-14" remap="12.14"/></link>) is transmitted through the fibre optics inside the tether, and the underwater media converters situated on the ROV pass the information to the Titan 4 manipulator after it is checked for send/receive errors. The movement of the joints of the 7-function master arm above sea level is reproduced exactly by the Titan 4 underwater. This provides the dexterity and degrees of freedom needed to execute the most difficult tasks (<link linkend="F12-15">Figure <xref linkend="F12-15" remap="12.15"/></link>):</para>
<fig id="F12-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-15">Figure <xref linkend="F12-15" remap="12.15"/></link></label>
<caption><para>Titan 4 &#x02013; Stow dimensions.</para></caption>
<graphic xlink:href="graphics/ch12_fig0015.jpg"/>
</fig>
<para>Schilling&#x02019;s RigMaster is a five-function, rate-controlled, heavy-lift grabber arm that can be mounted on a wide range of ROVs (<link linkend="F12-16">Figure <xref linkend="F12-16" remap="12.16"/></link>). The grabber arm can be used to grasp and lift heavy objects or to anchor the ROV by clamping the gripper around a structural member at the work site. Constructed primarily of aluminium and titanium, the RigMaster delivers the power, performance, and reliability required for such demanding work. A typical work-class ROV utilizes a combination of the five-function RigMaster and seven-function TITAN 4.</para>
<fig id="F12-16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-16">Figure <xref linkend="F12-16" remap="12.16"/></link></label>
<caption><para>RigMaster 5-F.</para></caption>
<graphic xlink:href="graphics/ch12_fig0016.jpg"/>
</fig>
<fig id="F12-17" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-17">Figure <xref linkend="F12-17" remap="12.17"/></link></label>
<caption><para>RigMaster range of motion &#x02013; Side view.</para></caption>
<graphic xlink:href="graphics/ch12_fig0017.jpg"/>
</fig>
<para>With these two manipulator systems, any type of sensor can be deployed or fixed on the ocean bottom. For a better understanding of the process and of the problems likely to occur during installation, we are going to use the VMAX Tech. &#x02013; PerrySlingsby ROV Simulator, for which we are developing a modelling and simulation scenario concerning the deployment of underwater Safe-Net sensors in areas surrounding offshore oil and gas drilling operations.</para>
</section>
<section class="lev2" id="sec12-4-2">
<title>12.4.2 Types of Offshore Constructions</title>
<para>Offshore construction is the installation of structures and facilities in a marine environment, usually for the production and transmission of electricity, oil, gas or other resources. We have taken into consideration the most commonly encountered types of offshore structures and facilities, focusing on the shapes found underwater:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Fixed platforms;</para></listitem>
<listitem>
<para>Jack-up oil and gas drilling and/or production rigs;</para></listitem>
<listitem>
<para>Jackets with top sides;</para></listitem>
<listitem>
<para>Spars or floating platforms;</para></listitem>
<listitem>
<para>Semi-submersibles;</para></listitem>
<listitem>
<para>Drilling ships;</para></listitem>
<listitem>
<para>Floating tension legs;</para></listitem>
<listitem>
<para>Floating production storage and offloading (FPSO);</para></listitem>
<listitem>
<para>Subsea well production heads;</para></listitem>
<listitem>
<para>Offshore wind farm production sites.</para></listitem></itemizedlist>
<para>We have created a simple scenario in which we use a PerrySlingsby Triton XLS ROV connected to a TMS (Tether Management System), and where we can use the physics of the robotic arms to understand which movements will be needed to implant sensors of different sizes into the ocean floor, as well as near the types of subsea structures mentioned above. We try to create handling tools for the Schilling Robotics 7-F arm in order to easily deploy and fix our common framework device model, and we also try to find the best spots for all the types of offshore structures encountered in our offshore experience inquiry [40].</para>
</section>
</section>
<section class="lev1" id="sec12-5">
<title>12.5 ROV Simulator</title>
<para>The VMAX Simulator is a software and hardware package intended to be used by engineers to help in the design of procedures, equipment and methodologies, providing a &#x0201C;physics based simulation&#x0201D; of the offshore environment. The simulator is capable of creating scenarios that are highly detailed and focused on one area of operation, or broad in scope to allow the inspection of an entire subsea field. Creating a scenario requires two skill sets: 3D Studio Max modelling and &#x0201C;.lua&#x0201D; script programming.</para>
<para>In order to safely deploy our Safe-Nets&#x02019; sensor balls into the water and fix them to the metallic structures of jack-up rigs or to any other offshore constructions, we first develop models of those structures and include them in a standard fly-alone ROV simulation scenario. This is a two-step process: an object&#x02019;s model has to be created in the 3D Studio Max software, after which it can be programmatically inserted into the simulation scenario. The simulation scenarios are initialized by a series of Lua scripts, whose syntax is similar to that of the C++ programming language, and the VMAX scenario creation is <emphasis>open source</emphasis>. The scripts are plain text files that can be edited with many programs, including Microsoft Windows Notepad. The file names end with the .lua extension, and the files are best opened with the jEdit editor, an open-source editor which requires the installation of the Java Runtime Environment (JRE).</para>
<para>We have altered the simulation scenarios, as can be seen in <link linkend="F12-18">Figure <xref linkend="F12-18" remap="12.18"/></link> and <link linkend="F12-19">Figure <xref linkend="F12-19" remap="12.19"/></link>, in order to obtain a better model of the Black Sea floor along Romania&#x02019;s coast line, which usually contains more sand because of the Danube sediments coming from the Danube Delta. Geologists working on board the Romanian jack-ups considered the sea floor in the VMAX ROV Simulator very similar to the one in the zones of geological and oil-petroleum interest up to 150&#x02013;160 miles out to sea. Throughout these zones the water depth does not exceed 80&#x02013;90 m, which is the limit at which drilling jack-up rigs can operate (their legs are 118 m long).</para>
<fig id="F12-18" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-18">Figure <xref linkend="F12-18" remap="12.18"/></link></label>
<caption><para>Triton XLS ROV in simulation scenario.</para></caption>
<graphic xlink:href="graphics/ch12_fig0018.jpg"/>
</fig>
<fig id="F12-19" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-19">Figure <xref linkend="F12-19" remap="12.19"/></link></label>
<caption><para>Triton XLS schilling robotics 7-Function arm in scenario.<break/>Courtesy of TelegeoGraphy.com</para></caption>
<graphic xlink:href="graphics/ch12_fig0019.jpg"/>
</fig>
<para>The open-source simulator was the starting base for a scenario in which we translated the needs of the ROV in terms of sensor handling, tether positioning and piloting techniques, combined with the specifications of the sea floor where the Safe-Nets will be deployed. The scenarios are initialized by a series of .lua scripts, and the typical hierarchical file layout is presented in <link linkend="F12-20">Figure <xref linkend="F12-20" remap="12.20"/></link>.</para>
<fig id="F12-20" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-20">Figure <xref linkend="F12-20" remap="12.20"/></link></label>
<caption><para>Typical hierarchical file layout.</para></caption>
<graphic xlink:href="graphics/ch12_fig0020.jpg"/>
</fig>
<para>The resources are divided into two large classes of information: <emphasis>Scenarios</emphasis>-related data and <emphasis>Assets</emphasis>. The former contains among others: Bathymetry, Lua, Manipulators, Tooling, TMS (Tether Management System), Vehicles, Components and IP (Internet Protocol communications between assets).</para>
<para>The Bathymetry directory contains terrain information about a specific location, where we can alter the sand properties of the sea floor. The terrain stored here may be used across several scenarios, and a graphic asset can be added by using the bathymetry template. The collision geometry can later be generated from the modelled geometry. Recall that the simulator scenario creation software is <emphasis>open-source</emphasis>; in the following we present some parts of the basic scenario provided with the full-up simulator, which we modified to accommodate our specific needs:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/pg287.jpg"/></para>
<para>The bathymetry template uses triangle mesh collision for the terrain. This provides collisions that are contoured to the bathymetry model.</para>
<para>The Manipulators directory contains sub-directories for each arm, and each sub-directory contains a model file with a .lua script function used to create the manipulator and add it to the ROV. We plan to create a new manipulator usable for each case of sensor deployment.</para>
<para>The Tooling directory contains both categorized and uncategorized tools, each having a model file (&#x0201C;.ive&#x0201D; or &#x0201C;.flt&#x0201D;) and a Lua script file with the code that creates that specific tool [41].</para>
<para>Whereas the typical training scenarios mainly include a fly-around and getting the pilot and assistant pilot used to the ROV commands, we have used the auxiliary commands to simulate the planting of the Safe-Net around a jacket or a buoy, for example. As far as the training scenario is concerned, we covered the basics a pilot needs to get around a jacket safely, carrying some sort of object in the Titan 4 manipulator robotic arm without dropping it, and without having the ROV&#x02019;s tether tangle with the jacket&#x02019;s metallic structure. The tether contains hundreds of fibre-optic cables covered with a Kevlar reinforcement, but even with this strengthened cover it is recommended that no more than 4 full 360&#x000B0; spins are made in one direction, clockwise or counter-clockwise, in order to avoid any loss of communication between the control console and the ROV itself. Any interaction between the tether and the metallic structure could represent a potential threat to the ROV&#x02019;s integrity.</para>
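The tether turn budget described above can be tracked with a small piece of bookkeeping logic; the helper below is our own illustrative sketch, with the 4-turn limit taken from the rule of thumb stated above.

```python
# Illustrative turn-budget bookkeeping for the ROV tether (our own helper;
# the 4-turn limit is the operational rule of thumb stated above).
class TetherTurnCounter:
    MAX_TURNS = 4  # full 360-degree rotations allowed in one direction

    def __init__(self):
        self.accumulated_deg = 0.0  # signed: + clockwise, - counter-clockwise

    def rotate(self, delta_deg):
        """Record a heading change (degrees) and return the remaining budget."""
        self.accumulated_deg += delta_deg
        return self.remaining_turns()

    def remaining_turns(self):
        return self.MAX_TURNS - abs(self.accumulated_deg) / 360.0

    def needs_unwinding(self):
        return abs(self.accumulated_deg) >= self.MAX_TURNS * 360.0

counter = TetherTurnCounter()
counter.rotate(3 * 360)   # three clockwise turns: still within budget
counter.rotate(400)       # pushes past the 4-turn budget
```

In practice such a counter would be fed by the ROV&#x02019;s heading sensor, prompting the pilot to unwind before the budget is exhausted.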
</section>
<section class="lev1" id="sec12-6">
<title>12.6 Common Modular Framework</title>
<para>An overview of the challenges and application possibilities of deploying underwater sensor networks near oil-rig drilling areas and offshore construction sites led to the conclusion that a standard device is needed for deploying multi-purpose underwater sensor networks. While preparing the devices for future use, we identified the need for a standard, common, easy-to-use device framework for multi-purpose underwater sensors in marine operations. This framework should serve multiple different sensors, and we consider the modular approach best suited for future use, providing the much-needed versatility.</para>
<para>We considered the buoyancy capabilities needed for a stand-alone device launched on the sea surface, and we started with an almost spherical model (<link linkend="F12-21">Figure <xref linkend="F12-21" remap="12.21"/></link>). If tethering is needed, a small O-ring cap can be mounted on one of the sphere&#x02019;s poles:</para>
<fig id="F12-21" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-21">Figure <xref linkend="F12-21" remap="12.21"/></link></label>
<caption><para>Spherical-shaped model designed for common framework; a >= b; c is Tether/Cable entry point diameter.</para></caption>
<graphic xlink:href="graphics/ch12_fig0021.jpg"/>
</fig>
<para>The device will be able to accommodate a variety of sensors, adapted within its internal &#x0201C;drawers&#x0201D;, its layers being highly modular. In this manner, the same network node will be able to support many types of applications, which is an essential step in justifying the development costs. We believe this characteristic is critical for improving the financial desirability of any future Safe-Nets offshore project.</para>
<para>Our simulation scenario is still scarce in modelled objects, as the process of making them realistic takes a long time. However many simulation variables we may alter, after studying the types of situations that actually occur on offshore structures we learned that simulating the deployment and choosing anchoring spots for our sensors can only assist, not replace, real-life deployment, as parameters decided beforehand on shore can change dramatically offshore.</para>
<para>However, we believe that our 3D models for underwater multi-purpose sensors still stand as a good basis for the real-life development and implementation of our Safe-Nets. Tethered or untethered, the upper hemisphere can include a power adapter, which can also serve as a battery compartment if the sensor is wireless. The sensors have enough drawers for electronic modules; Type 03 is designed with a built-in cable management system, as well as with a membrane for a sensitive pollution sensor. For a start, we have chosen a very simple closing mechanism, using clamps on both sides, which can ensure the sealing of the device. The upper and lower hemispheres close on top of an O-ring seal, which can additionally be lubricated with water-repellent grease. We have also designed a unidirectional valve through which a vacuum pump can evacuate the air inside; the vacuum strengthens the seal against water pressure. In <link linkend="F12-22">Figure <xref linkend="F12-22" remap="12.22"/></link>, we present a few prototypes which we modelled and simulated:</para>
<fig id="F12-22" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-22">Figure <xref linkend="F12-22" remap="12.22"/></link></label>
<caption><para>Underwater multi-purpose devices prototypes 01 &#x02013; 05.</para></caption>
<graphic xlink:href="graphics/ch12_fig0022.jpg"/>
</fig>
<fig id="F12-23" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-23">Figure <xref linkend="F12-23" remap="12.23"/></link></label>
<caption><para>Grouping method for multiple simultaneous sensor deployment.</para></caption>
<graphic xlink:href="graphics/ch12_fig0023.jpg"/>
</fig>
<para>Within the same common modular framework, we have considered a method for deploying three or more sensors simultaneously. In fact, these ideas arose from repeated failed trials in which an ROV attempted to grab and hold a Safe-Net sensor long enough to place it on a hook coming from an autonomous buoy above the sea surface, while being affected by the waves&#x02019; length and height. Because of the real difficulties encountered, especially when higher waves were introduced into the scenario, we devised a way to get the job done more quickly (<link linkend="F12-23">Figure <xref linkend="F12-23" remap="12.23"/></link>):</para>
<para>Moreover, the spherical framework of the sensor, the basic node of the Safe-Net, proves very difficult to handle with a simple manipulator, as it tends to slip, and the objective is to carry it without dropping it. We have therefore designed a &#x0201C;cup-holder&#x0201D; shape that grabs the sphere more easily; if the sensor has a cable connection, the cable must not be disturbed by the grabber, as can be seen in <link linkend="F12-24">Figure <xref linkend="F12-24" remap="12.24"/></link>:</para>
<fig id="F12-24" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-24">Figure <xref linkend="F12-24" remap="12.24"/></link></label>
<caption><para>Basic sensor device holder designed for simulation.</para></caption>
<graphic xlink:href="graphics/ch12_fig0024.jpg"/>
</fig>
</section>
<section class="lev1" id="sec12-7">
<title>12.7 Conclusions</title>
<para>Most state-of-the-art solutions for underwater sensor networks rely on specific task-oriented sensors, which are developed and launched by different means and with no standardization. The studies we found usually power the nodes from batteries and employ all sorts of resilient algorithms to minimize battery drain, using sleep-awake node states; the nodes are finally recovered from the water in order to retrieve the collected data. Our approach tries to standardize the ways sensors are deployed and fixed to offshore structures and, moreover, to offer solutions for more than one application task. This may seem like a general approach, but it is needed in order to avoid launching nodes based on different technologies which afterwards will not be able to communicate with each other. The development of a virtual environment-based training system for ROV pilots could be the starting point for deploying underwater sensor networks worldwide, as ROV pilots are the people who will actually be in the position to implement them.</para>
<para>This chapter investigates the main challenges in developing an efficient common framework for multi-purpose underwater data-collection devices (sensors), which we want to deploy around existing offshore constructions; this research is still a work in progress. Several fundamental aspects of underwater acoustic communications are also investigated, and the challenges that the underwater environment poses to efficient networking solutions are detailed at all levels. In short, this chapter has analyzed the necessity of considering the physical fundamentals of underwater network development in the planetary ocean, starting from the instrumentation needs surrounding offshore oil drilling sites and early warning systems for disaster prevention worldwide.</para>
<para>We suggest various extension possibilities for applications of these Safe-Nets, starting from pollution monitoring around offshore oil drilling sites, through early warning systems for disaster prevention (earthquakes, tsunami) and weather forecast improvement, up to military surveillance applications, all in order to offset the implementation cost of such underwater networks.</para>
</section>
<section class="lev1" id="sec12-8">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>K. Eshghi and R. C. Larson, &#x02018;Disasters: lessons from the past 105 years&#x02019;, Disaster Prevention and Management, Vol. 17, pp.61&#x02013;82, 2008. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+Eshghi+and+R%2E+C%2E+Larson%2C+%27Disasters%3A+lessons+from+the+past+105+years%27%2C+Disaster+Prevention+and+Management%2C+Vol%2E+17%2C+pp%2E61-82%2C+2008%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Green, &#x02018;Acoustic modems, navigation aids and networks for undersea operations&#x02019;, IEEE Oceans Conference Proceedings, pp.1&#x02013;6, Sydney, Australia, May 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Green%2C+%27Acoustic+modems%2C+navigation+aids+and+networks+for+undersea+operations%27%2C+IEEE+Oceans+Conference+Proceedings%2C+pp%2E1-6%2C+Sydney%2C+Australia%2C+May+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Heidemann, Y. Li and A. Syed, &#x02018;Underwater Sensor Networking: Research Challenges and Potential Applications&#x02019;, USC Information Sciences Institute, USC/ISI Technical Report ISI-TR-2005&#x02013;603, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Heidemann%2C+Y%2E+Li+and+A%2E+Syed%2C+%27Underwater+Sensor+Networking%3A+Research+Challenges+and+Potential+Applications%27%2C+USC+Information+Sciences+Institute%2C+USC%2FISI+Technical+Report+ISI-TR-2005-603%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>T. Melodia, Ian F. Akyildiz and D. Pompili, &#x02018;Challenges for Efficient Communication in Underwater Acoustic Sensor Networks&#x02019;, ACM Sigbed Review, vol.1, no.2, 2004. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=T%2E+Melodia%2C+Ian+F%2E+Akyildiz+and+D%2E+Pompili%2C+%27Challenges+for+Efficient+Communication+in+Underwater+Acoustic+Sensor+Networks%27%2C+ACM+Sigbed+Review%2C+vol%2E1%2C+no%2E2%2C+2004%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Cerpa, et al., &#x02018;Habitat monitoring: Application driver for wireless communications technology&#x02019;, ACM SIGCOMM Workshop on Data Communications in Latin America and the Caribbean, San Jose, Costa Rica, 2001. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Cerpa+and+et+al%2E%2C+%27Habitat+monitoring%3A+Application+driver+for+wireless+communications+technology%27%2C+ACM+SIGCOMM+Workshop+on+Data+Communications+in+Latin+America+and+the+Caribbean%2C+San+Jose%2C+Costa+Rica%2C+2001%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Whang, N. Xu and S. Rangwala, &#x02018;Development of an embedded sensing system for structural health monitoring&#x02019;, International Workshop on Smart Materials and Structures Technology, pp. 68&#x02013;71, 2004. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Whang%2C+N%2E+Xu+and+S%2E+Rangwala%2C+%27Development+of+an+embedded+sensing+system+for+structural+health+monitoring%27%2C+International+Workshop+on+Smart+Materials+and+Structures+Technology%2C+pp%2E+68-71%2C+2004%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Steere, A. Baptista and D. McNamee, &#x02018;Research challenges in environmental observation and forecasting systems&#x02019;, 6th ACM International Conference on Mobile Computing and Networking, Boston, MA, USA, 2000. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Steere%2C+A%2E+Baptista+and+D%2E+McNamee%2C+%27Research+challenges+in+environmental+observation+and+forecasting+systems%27%2C+6th+ACM+International+Conference+on+Mobile+Computing+and+Networking%2C+Boston%2C+MA%2C+USA%2C+2000%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>L. Dignan, &#x02018;Google&#x02019;s Data Centers&#x02019;, [Online], 2011. http://www.zdnet.com/blog/btl/google-makes-waves-and-may-have-solved-the-data-center-conundrum/9937; http://www.datacenterknowledge.com/archives/2008/09/06/google-planning-offshore-data-barges/ <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=L%2E+Dignan%2C+%27Google%27s+Data+Centers%27%2C+%5BOnline%5D+2011%2Ehttp%3A%2F%2Fwww%2Ezdnet%2Ecom%2Fblog%2Fbtl%2Fgoogle-makes-waves-and-may-have-solved-the-data-center-conundrum%2F9937http%3A%2F%2Fwww%2Edatacenterknowledge%2Ecom%2Farchives%2F2008%2F09%2F06%2Fgoogle-planning-offshore-data-barges%2F%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. S. Outlaw, &#x02018;Computerization of an Autonomous Mobile Buoy&#x02019;, Florida Institute of Technology, Vol. Master Thesis in Ocean Engineering, Melbourne, FL, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+S%2E+Outlaw%2C+%27Computerization+of+an+Autonomous+Mobile+Buoy%27%2C+Florida+Institute+of+Technology%2C+Vol%2E+Master+Thesis+in+Ocean+Engineering%2C+Melbourne%2C+FL%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Garcier, et al., &#x02018;Autonomous Meteorogical Buoy&#x02019;, Instrumentation Viewpoint, vol. 7, Winter, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Garcier%2C+et+al%2E%2C+%27Autonomous+Meteorogical+Buoy%27%2C+Instrumentation+Viewpoint%2C+vol%2E+7%2C+Winter%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>V. Dache, M.C. Caraivan and V. Sgarciu, &#x02018;Advanced Building Energy Management Systems - Optimize power consumption&#x02019;, INCOM 2012, pp. 426, Bucharest, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=V%2E+Dache%2C+M%2EC%2E+Caraivan+and+V%2E+Sgarciu%2C+%27Advanced+Building+Energy+Management+Systems+-+Optimize+power+consumption%27%2C+INCOM+2012%2C+pp%2E+426%2C+Bucharest%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Hong-Jun, et al., &#x02018;Challenges: Building Scalable and Distributed Underwater Wireless Sensor Networks (UWSNs) for Aquatic Applications&#x02019;, UCONN CSE Technical Report, UbiNet-TR05&#x02013;02, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Hong-Jun%2C+et+al%2E%2C+%27Challenges%3A+Building+Scalable+and+Distributed+Underwater+Wireless+Sensor+Networks+%28UWSNs%29+for+Aquatic+Applications%27%2C+UCONN+CSE+Technical+Report%2C+UbiNet-TR05-02%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Pompili, T. Melodia and A.F. Ian, &#x02018;A Resilient Routing Algorithm for Long-term Applications in Underwater Sensor Networks&#x02019;, Atlanta, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Pompili%2C+T%2E+Melodia+and+A%2EF%2E+Ian%2C+%27A+Resilient+Routing+Algorithm+for+Long-term+Applications+in+Underwater+Sensor+Networks%27%2C+Atlanta%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Aquaret, [Online], 2008. www.aquaret.com <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Aquaret%2C+%5BOnline%5D%2C+2008%2E+www%2Eaquaret%2Ecom" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>PelamisWaves, [Online], 2012. http://www.pelamiswave.com/pelamis-technology <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=PelamisWaves%2C+%5BOnline%5D%2C+2012%2E+http%3A%2F%2Fwww%2Epelamiswave%2Ecom%2Fpelamis-technology" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>WaveBob, [Online], 2009. http://www.wavebob.com <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=WaveBob%2C+%5BOnline%5D%2C+2009%2E+http%3A%2F%2Fwww%2Ewavebob%2Ecom" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Buoy, OE, [Online], 2009. www.oceanenergy.ie <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Buoy%2C+OE%2C+%5BOnline%5D%2C+2009%2E+www%2Eoceanenergy%2Eie" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>PowerBuoy, [Online], 2008. www.oceanpowertechnologies.com <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=PowerBuoy%2C+%5BOnline%5D%2C+2008%2E+www%2Eoceanpowertechnologies%2Ecom" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Oyster, [Online], 2011. www.aquamarinepower.com <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Oyster%2C+%5BOnline%5D%2C+2011%2E+www%2Eaquamarinepower%2Ecom" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>WaveDragon, [Online], 2011. www.wavedragon.net <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=WaveDragon%2C+%5BOnline%5D%2C+2011%2E+www%2Ewavedragon%2Enet" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>AWS, [Online], 2010. http://www.awsocean.com <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=AWS%2C+%5BOnline%5D%2C+2010%2E+http%3A%2F%2Fwww%2Eawsocean%2Ecom" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>F. Mosca, G. Matte and T. Shimura, &#x02018;Low-frequency source for very long-range underwater communication&#x02019;, Journal of Acoustical Society of America Express Letters, vol. 133, 10.1121/1.4773199, Melville, NY, U.S.A., 20 December 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=F%2E+Mosca%2C+G%2E+Matte+and+T%2E+Shimura%2C+%27Low-frequency+source+for+very+long-range+underwater+communication%27%2C+Journal+of+Acoustical+Society+of+America+Express+Letters%2C+vol%2E+133%2C+10%2E1121%2F1%2E4773199%2C+Melville%2C+NY%2C+U%2ES%2EA%2E%2C+20+December+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>D. Pompili and T. Melodia, &#x02018;An Architecture for Ocean Bottom UnderWater Acoustic Sensor Networks (UWASN)&#x02019;, Georgia, Atlanta, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=D%2E+Pompili+and+T%2E+Melodia%2C+%27An+Architecture+for+Ocean+Bottom+UnderWater+Acoustic+Sensor+Networks+%28UWASN%29%27%2C+Georgia%2C+Atlanta%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R. Urick, &#x02018;Principles of underwater sound&#x02019;, McGraw Hill Publishing, New York, NY, U.S.A., 1983. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2E+Urick%2C+%27Principles+of+underwater+sound%27%2C+McGraw+Hill+Publishing%2C+New+York%2C+NY%2C+U%2ES%2EA%2E%2C+1983%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S.W. Rienstra and A. Hirschberg, &#x02018;An Introduction to Acoustics&#x02019;, Eindhoven University of Technology, Eindhoven, The Netherlands, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2EW%2E+Rienstra+and+A%2E+Hirschber%2C+%27An+Introduction+to+Acoustics%27%2C+Eindhoven+University+of+Technology%2C+Eindhoven%2C+The+Netherlands%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Wills, W. Ye and J. Heidemann, &#x02018;LowPower Acoustic Modem for Dense Underwater Sensor Networks&#x02019;, USC Information Sciences Institute, 2008 <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Wills%2C+W%2E+Ye+and+J%2E+Heidemann%2C+%27LowPower+Acoustic+Modem+for+Dense+Underwater+Sensor+Networks%27%2C+USC+Information+Sciences+Institute%2C+2008" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Stojanovic, &#x02018;On the relationship between capacity and distance in an underwater acoustic communication channel&#x02019;, ACM Mobile Computing Communications, Rev.11, pp.34&#x02013;43, doi:10.1145/1347364.1347373, 2007. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Stojanovic%2C+%27On+the+relationship+between+capacity+and+distance+in+an+underwater+acoustic+communication+channel%27%2C+ACM+Mobile+Computing+Communications%2C+Rev%2E11%2C+pp%2E34-43%2C+doi%3A10%2E1145%2F1347364%2E1347373%2C+2007%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Wikipedia.org, Wikipedia List of Companies by Revenue, [Online], 2011. http://en.wikipedia.org/wiki/List_of_companies_by_revenue <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Wikipedia%2Eorg%2C+Wikipedia+List+of+Companies+by+Revenue%2C+%5BOnline%5D%2C+2011%2E+http%3A%2F%2Fen%2Ewikipedia%2Eorg%2Fwiki%2FList%5Fof%5Fcompanies%5Fby%5Frevenue" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>V. Nicolescu, M. Caraivan, &#x02018;On the Interoperability in Marine Pollution&#x02019;, IESA&#x02019;14 7th International Conference on Interoperability for Enterprise Systems and Applications, Albi, France, 2014. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=V%2E+Nicolescu%2C+M%2E+Caraivan%2C+%27On+the+Interoperability+in+Marine+Pollution%27%2C+IESA%2714+7th+International+Conference+on+Interoperability+for+Enterprise+Systems+and+Applications%2C+Albi%2C+France%2C+2014%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Proakis, J. Rice, et al., &#x02018;Shallow water acoustic networks&#x02019;, IEEE Communications Magazine, pp. 114&#x02013;119, 2001. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Proakis%2C+J%2E+Rice%2C+et+al%2E%2C+%27Shallow+water+acoustic+networks%27%2C+IEEE+Communications+Magazine%2C+pp%2E+114-119%2C+2001%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>I. Vasilescu, et al., &#x02018;Data collection, storage and retrieval with an underwater sensor network&#x02019;, 3rd ACM SenSys Conference Proceedings, pp.154&#x02013;165, San Diego, CA, U.S.A., November 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=I%2E+Vasilescu%2C+et+al%2E%2C+%27Data+collection%2C+storage+and+retrieval+with+an+underwater+sensor+network%27%2C+3rd+ACM+SenSys+Conference+Proceedings%2C+pp%2E154-165%2C+San+Diego%2C+CA%2C+U%2ES%2EA%2E%2C+November+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>P. Fairley, &#x02018;Neptune rising&#x02019;, IEEE Spectrum Magazine #42, pp. 38&#x02013;45, doi:10.1109/MSPEC.2005.1526903, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=P%2E+Fairley%2C+%27Neptune+rising%27%2C+IEEE+Spectrum+Magazine+%2342%2C+pp%2E+38-45%2C+doi%3A10%2E1109%2FMSPEC%2E2005%2E1526903%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>C. Tricaud, &#x02018;Optimal Sensing and Actuation Policies for Networked Mobile Agents in a Class of Cyber-Physical Systems&#x02019;, Utah State University, Logan, Utah, 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=C%2E+Tricaud%2C+%27Optimal+Sensing+and+Actuation+Policies+for+Networked+Mobile+Agents+in+a+Class+of+Cyber-Physical+Systems%27%2C+Utah+State+University%2C+Logan%2C+Utah%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. El Jai, &#x02018;Distributed systems analysis via sensors and actuators&#x02019;, Sensors and Actuators A&#x02019;, vol. 29, pp.1&#x02013;11, 1991. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+El+Jai%2C+%27Distributed+systems+analysis+via+sensors+and+actuators%27%2C+Sensors+and+Actuators+A%27%2C+vol%2E+29%2C+pp%2E1-11%2C+1991%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. El Jai and A.J. Pritchard, &#x02018;Sensors and actuators in distributed systems&#x02019;, International Journal of Control, vol. 46, iss. 4, pp. 1139&#x02013;1153, 1987. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+El+Jai+and+A%2EJ%2E+Pritchard%2C+%27Sensors+and+actuators+in+distributed+systems%27%2C+International+Journal+of+Control%2C+vol%2E+46%2C+iss%2E+4%2C+pp%2E+1139-1153%2C+1987%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. El Jai, et al., &#x02018;Regional controllability of distributed-parameter systems&#x02019;, International Journal of Control, vol. 62, iss. 6, pp.1351&#x02013;1356, 1995. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+El+Jai%2C+et+al%2E%2C+%27Regional+controllability+of+distributed-parameter+systems%27%2C+International+Journal+of+Control%2C+vol%2E+62%2C+iss%2E+6%2C+pp%2E1351-1356%2C+1995%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>E. Rafajowics, &#x02018;Optimum choice of moving sensor trajectories for distributed parameter system identification&#x02019;, International Journal of Control, vol. 43. pp.1441&#x02013;1451, 1986. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=E%2E+Rafajowics%2C+%27Optimum+choice+of+moving+sensor+trajectories+for+distributed+parameter+system+identification%27%2C+International+Journal+of+Control%2C+vol%2E+43%2E+pp%2E1441-1451%2C+1986%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>N.Z. Sun, &#x02018;Inverse Problems in Groundwater Modeling&#x02019;, Theory and Applications of Transport in Porous Media, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1994. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=N%2EZ%2E+Sun%2C+%27Inverse+Problems+in+Groundwater+Modeling%27%2C+Theory+and+Applications+of+Transport+in+Porous+Media%2C+Kluwer+Academic+Publishers%2C+Dodrecht%2C+The+Netherlands%2C+1994%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Patan, &#x02018;Optimal Observation Strategies for Parameter Estimation of Distributed Systems&#x02019;, University of Zielona Gora Press, Zielona Gora, Poland, 2004. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Patan%2C+%27Optimal+Observation+Strategies+for+Parameter+Estimation+of+Distributed+Systems%27%2C+University+of+Zielona+Gora+Press%2C+Zielona+Gora%2C+Poland%2C+2004%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Caraivan, V. Dache and V. Sgarciu, &#x02018;Simulation Scenarios for Deploying Underwater Safe-Net Sensor Networks Using Remote Operated Vehicles&#x02019;, 19th International Conference on Control Systems and Computer Science Conference Proceedings, IEEE CSCS&#x02019;19 BMS# CFP1372U-CDR, ISBN: 978&#x02013;0-7695&#x02013;4980-4, Bucharest, Romania, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Caraivan%2C+V%2E+Dache+and+V%2E+Sgarciu%2C+%27Simulation+Scenarios+for+Deploying+Underwater+Safe-Net+Sensor+Networks+Using+Remote+Operated+Vehicles%27%2C+19th+International+Conference+on+Control+Systems+and+Computer+Science+Conference+Proceedings%2C+IEEE+CSCS%2719+BMS%23+CFP1372U-CDR%2C+ISBN%3A+978-0-7695-4980-4%2C+Bucharest%2C+Romania%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>B. Manavi, VMAX Technologies Inc. Help File, Houston, TX 77041&#x02013;4014, U.S.A., 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=B%2E+Manavi%2C+VMAX+Technologies+Inc%2E+Help+File%2C+Houston%2C+77041-4014+Texas%2C+TX%2C+U%2ES%2EA%2E%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="chapter" id="ch13" label="13" xreflabel="13">
<title>M2M in Agriculture &#x02013; Business Models and Security Issues</title>
<para><emphasis role="strong">S. Gansemer<superscript><emphasis role="strong">1</emphasis></superscript>, J. Sell<superscript><emphasis role="strong">1</emphasis></superscript>, U. Grossmann<superscript><emphasis role="strong">1</emphasis></superscript>, E. Eren<superscript><emphasis role="strong">1</emphasis></superscript>, B. Horster<superscript><emphasis role="strong">2</emphasis></superscript>, T. Horster-M&#x00F6;ller<superscript><emphasis role="strong">2</emphasis></superscript> and C. Rusch<superscript><emphasis role="strong">3</emphasis></superscript></emphasis></para>
<para><superscript>1</superscript>University of Applied Sciences and Arts Dortmund, Dortmund, Germany</para>
<para><superscript>2</superscript>VIVAI Software AG, Dortmund, Germany</para>
<para><superscript>3</superscript>Claas Selbstfahrende Erntemaschinen GmbH, Harsewinkel, Germany</para>
<para>Corresponding author: S. Gansemer &lt;sebastian.gansemer@fh-dortmund.de&gt;</para>
<section class="lev2" id="sec-">
<title>Abstract</title>
<para>Machine-to-machine communication (M2M) is one of the major innovations in the ICT sector. Especially in agricultural business with heterogeneous machinery, diverse process partners and high machine operating costs, M2M offers large potential in process optimization. Within this paper, a concept for process optimization in agricultural business using M2M technologies is presented using three application scenarios. Within that concept, standardization and communication as well as security aspects are discussed. Furthermore, corresponding business models building on the presented scenarios are discussed and results from economic analysis are presented.</para>
<para><emphasis role="strong">Keywords:</emphasis> M2M, agriculture, communication, standardization, business case, process transparency, operation data acquisition, business model, security</para>
</section>
<section class="lev1" id="sec13-1">
<title>13.1 Introduction</title>
<para>Machine-to-machine communication (M2M) is currently one of the major innovations in the ICT sector. The agricultural sector is characterized by heterogeneous machinery, diverse process partners and high operational machinery costs. Many optimization solutions aim to optimize a single machine but not the whole process. This paper deals with improving the entire process chain within the agricultural area. The first part of this paper discusses a concept for supporting process optimization in heterogeneous agricultural process chains using M2M communication technologies. The second part presents business cases for the proposed system and the outcomes of an economic analysis. The third part discusses security aspects related to the proposed system.</para>
</section>
<section class="lev1" id="sec13-2">
<title>13.2 Related Work</title>
<para>The application of M2M technology in agriculture is targeted by several other research groups. Moummadi et al. [1] present a model for an agricultural decision support system using both multi-agent systems and constraint programming. The system's purpose is controlling and optimizing water exploitation in greenhouses.</para>
<para>Wu et al. [2] present a number of models for M2M usage in different sectors such as utilities, security and public safety, tracking and tracing, telematics, payment, healthcare, remote maintenance and control, and consumer devices. They discuss technological market trends and the influence of different industries on M2M applications.</para>
<para>An insurance system based on telematics technology is demonstrated by Daesub et al. [3]. They investigate trends in the insurance industry based on telematics and recommend a supporting framework.</para>
<para>A business model framework for M2M business models based on cloud computing is shown by Juliandri et al. [4]. They identify nine basic building blocks for a business model aiming to increase value while reducing costs.</para>
<para>Gon&#x00E7;alves and Dobbelaere [5] discuss several business scenarios based on specific technical scenarios. Within the presented scenarios, the stakeholders assume different levels of control over the customer relationship and the assets determining the value proposition.</para>
<para>A model for software updates of mobile M2M devices is presented in [6]. It aims at low bandwidth use and at avoiding system reboots.</para>
</section>
<section class="lev1" id="sec13-3">
<title>13.3 Communication and Standardization</title>
<para>The agricultural sector is characterized by heterogeneous machinery and diverse process partners. Problems arise from idle times in agricultural processes, suboptimal machine allocation and improper planning. Further problems are caused by incompatibilities between machines built by different manufacturers: because the proprietary signals on their machine buses do not fit one another, collaboration between machines may be inhibited [7, 8].</para>
<para>To support collaboration of heterogeneous machinery, a standardized communication language is needed. Communication takes place either directly, machine to machine, or from machine to cloud.</para>
<para>Sensors in the machines record different parameters such as position, moving speed, and the mass and quality of harvested produce. These operational and machine logging data from the registered machines are synchronized between machines and finally sent via a telecommunication network to a recording web portal. The data are stored in the portal&#x02019;s database and are used to optimize the process chain or to develop and implement business models based on them. On the machine itself, all data is carried over the machine&#x02019;s ISO and CAN buses in proprietary syntax.</para>
<para>Within the concept, each machine uses a &#x0201C;black-box&#x0201D; which translates manufacturer-specific bus signal data into a standardized data format. The concept is shown in <link linkend="F13-1">Figure <xref linkend="F13-1" remap="13.1"/></link>. Machines may be equipped with different numbers of sensors, resulting in different numbers of available signals. The standard should cover most of these signals; however, due to the diversity of the machinery available, not every signal available on a machine can be supported within the proposed concept.</para>
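<para>The translation step performed by the black-box can be sketched as a mapping from proprietary bus signals onto a standardized record. The CAN IDs, scaling factors and standard field names below are illustrative placeholders, not taken from the proposed open M2M standard itself; the point is that unsupported signals are dropped and missing sensors simply leave fields empty.</para>

```python
# Per-manufacturer mapping: proprietary signal ID -> (standard field, scale).
# Both the IDs and the scalings are hypothetical examples.
VENDOR_MAP = {
    0x18FEF100: ("speed_kmh", 1 / 256),
    0x18FE4500: ("yield_kg_s", 0.01),
}

def translate(machine_id, raw_signals):
    """Build a standard record from whatever subset of signals is present;
    proprietary signals not covered by the standard are simply dropped."""
    record = {"machine_id": machine_id, "speed_kmh": None, "yield_kg_s": None}
    for can_id, raw_value in raw_signals.items():
        if can_id in VENDOR_MAP:
            field, scale = VENDOR_MAP[can_id]
            record[field] = raw_value * scale
    return record

# An older machine reporting only a speed signal yields a partial record;
# the unknown signal 0x123 is ignored.
print(translate("harvester-1", {0x18FEF100: 2560, 0x123: 7}))
```

This mirrors how a retrofitted older machine ends up providing only a subset of the standardized data set.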
<fig id="F13-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-1">Figure <xref linkend="F13-1" remap="13.1"/></link></label>
<caption><para>Synchronization of standards.</para></caption>
<graphic xlink:href="graphics/ch13_fig001.jpg"/>
</fig>
<para>Within this paper, the concept of a portal (M2M-Teledesk) suited to dealing with the problems mentioned above is presented. The system&#x02019;s framework is shown in <link linkend="F13-2">Figure <xref linkend="F13-2" remap="13.2"/></link>. The black-boxes installed on each machine are interfaces between the machine&#x02019;s internal buses and the portal (M2M-Teledesk). The black-boxes are equipped with mobile network communication interfaces for transferring data among machines as well as between machines and the portal.</para>
<fig id="F13-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-2">Figure <xref linkend="F13-2" remap="13.2"/></link></label>
<caption><para>M2M teledesk framework [9].</para></caption>
<graphic xlink:href="graphics/ch13_fig002.jpg"/>
</fig>
<para>Every machine is set up with a black-box which reads the internal buses, translates signals into the proposed open M2M standard, runs different applications and communicates data to and from the machine using Wi-Fi or mobile data networks. The system uses a public key infrastructure for safety and trust reasons (see Section 13.7). Within the portal, the collected data is aggregated and provided to other analyzing and evaluating systems (e.g. farm management). Depending on the machine, either the full set or a subset of the data specified in the standard can be used. Older machines may be retrofitted with a black-box providing only a subset of the data, since fewer sensors are available.</para>
<para>The data is visualized within the portal and helps the farmer to optimize business processes, to meet documentation requirements or to build data-based business models. The system shows its advantages especially where complex and detailed records of many synchronized machines are required.</para>
<para>Communication between machines takes place either directly, machine to machine, or via a mobile communication network (e.g. GSM or UMTS). Within agricultural processes operating in rural areas, mobile network coverage is not always available. There are two strategies to increase the availability of network coverage:</para><itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>National roaming SIM cards;</para></listitem>
<listitem>
<para>Femtocells.</para></listitem></itemizedlist>
<para>With national roaming SIM cards, which are able to roam into all available networks, the availability of mobile network coverage can be increased, whereas with standard SIM cards only one network can be used in the home country [10]. National roaming SIM cards operate in a country different from their home location (e.g. a Spanish SIM card operating in Germany). The SIM card can roam into any available network as long as the issuing provider and the network operator have signed a roaming agreement. Although network coverage can be increased this way, a communication channel still cannot be guaranteed.</para>
<fig id="F13-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-3">Figure <xref linkend="F13-3" remap="13.3"/></link></label>
<caption><para>Femtocell communication in agriculture [9].</para></caption>
<graphic xlink:href="graphics/ch13_fig003.jpg"/>
</fig>
<para>With femtocells [2], a dedicated base station is placed on the field where the machines are operating. The concept is presented in <link linkend="F13-3">Figure <xref linkend="F13-3" remap="13.3"/></link>. Machines communicate with the base station, e.g. via WLAN or GSM/UMTS, while the base station is connected to the portal by GSM/UMTS or a satellite link. The location of the femtocell base station should be chosen such that every location within the corresponding area is covered, either via the femtocell or via a direct connection to a mobile network. This strategy enables communication even in areas without coverage by the mobile network operator. However, the implementation effort is significantly higher than with national roaming SIM cards.</para>
</section>
<section class="lev1" id="sec13-4">
<title>13.4 Business Cases</title>
<para>The described system can be used in different manners. Three main business cases have been identified:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem>
<para>Process Transparency (PT);</para></listitem>
<listitem>
<para>Operation Data Acquisition (ODA);</para></listitem>
<listitem>
<para>Remote Software Update (RSU).</para></listitem></itemizedlist>
<para>Process transparency (PT) mainly focuses on near real-time optimization of process chains, while ODA relies on downstream analysis of data. Remote software update (RSU) aims to securely install applications or firmware updates on machines without requiring a service technician. These three business cases are described below in more detail.</para>
<section class="lev2" id="sec13-4-1">
<title>13.4.1 Process Transparency (PT)</title>
<para>Processes in agricultural business involve several participants. Furthermore, the machines used often operate at high cost. A visualization of an exemplary corn-harvesting process is presented in <link linkend="F13-4">Figure <xref linkend="F13-4" remap="13.4"/></link>. During the harvesting process a harvester crops, e.g., corn. Simultaneously, a transport vehicle has to drive in parallel to the harvester to receive the harvested produce; the machines involved in this sub-process need to be synchronized in real time. When a transport vehicle is full, it has to be replaced by an empty one. Full transport vehicles then travel to, e.g., a silo or a biogas power plant, where they cross a scale to measure the mass of the harvested produce. Furthermore, a quality check of the harvested produce is carried out manually.</para>
<fig id="F13-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-4">Figure <xref linkend="F13-4" remap="13.4"/></link></label>
<caption><para>Information and control flow of the harvesting process scenario.</para></caption>
<graphic xlink:href="graphics/ch13_fig004.jpg"/>
</fig>
<para>This process may be optimized by the portal in different ways. Because sensor data is registered, the weighing and quality-check steps of the process may be skipped or reduced to spot checks if the customer deems the data within the system trustworthy. Furthermore, the data is visualized by the portal to give the production manager the opportunity to optimize the process in near real time. Before the process starts, a production plan is prepared by the production manager, either manually with support by the system or automatically by the system. Within the plan, machines are allocated with time and position data. When the system registers a plan deviation, the plan is updated either manually or automatically. This approach reduces idle times, saving costs and resources.</para>
</section>
<section class="lev2" id="sec13-4-2">
<title>13.4.2 Operations Data Acquisition (ODA)</title>
<para>Within the Operations Data Acquisition (ODA) scenario, data gathered by the machine sensors is saved for downstream processing and analysis. While process transparency aims to synchronize process data in real time to support process optimization, ODA data is gathered and sent to the server after the process is finished. The analysis is done, e.g., to generate yield maps or to analyze machine behavior.</para>
</section>
<section class="lev2" id="sec13-4-3">
<title>13.4.3 Remote Software Update (RSU)</title>
<para>The remote software update (RSU) process aims to remotely install software on a machine. It includes two sub-scenarios: firmware upgrade and app installation. App installation means installing an additional piece of software from a third-party software provider, while a firmware upgrade renews software already installed on the machine. The main aspect of a software update is to ensure that the software is installed in a secure way, meaning that the machine must verify that the software comes from an authorized source and was not altered during network transport. Details on the security measures can be found in Section 13.7.</para>
</section>
</section>
<section class="lev1" id="sec13-5">
<title>13.5 Business Models</title>
<para>Based on the scenarios and the data described above, business and licensing models are developed. <link linkend="F13-5">Figure <xref linkend="F13-5" remap="13.5"/></link> shows the value chain of M2M-Teledesk, consisting of six partners.</para>
<fig id="F13-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-5">Figure <xref linkend="F13-5" remap="13.5"/></link></label>
<caption><para>Value chain of M2M-Teledesk [9].</para></caption>
<graphic xlink:href="graphics/ch13_fig005.jpg"/>
</fig>
<para>For all partners of the value chain, the business potential has been analyzed; the results are shown in <link linkend="T13-1">Table <xref linkend="T13-1" remap="13.1"/></link>. The table shows the partners&#x02019; roles, the expected revenue and cost development, and the resulting business potential.</para>
<table-wrap position="float" id="T13-1">
<label><link linkend="T13-1">Table <xref linkend="T13-1" remap="13.1"/></link></label>
<caption><para>Revenue, costs and business potential for partners along M2M value chain</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left"><break/><break/>Partner</td>
<td valign="top" align="left"><break/><break/>Role</td>
<td valign="top" align="left">Revenue Development (Per Unit)</td>
<td valign="top" align="left">Cost Development (Per Unit)</td>
<td valign="top" align="left"><break/>Business Potential</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Module manufacturer</td>
<td valign="top" align="left">Manufacturer of black-box</td>
<td valign="top" align="left">Constant</td>
<td valign="top" align="left">Declining</td>
<td valign="top" align="left">+</td>
</tr>
<tr>
<td valign="top" align="left">Machine manufacturer</td>
<td valign="top" align="left">Manufacturer of machines</td>
<td valign="top" align="left">Progressive</td>
<td valign="top" align="left">Declining</td>
<td valign="top" align="left">++</td>
</tr>
<tr>
<td valign="top" align="left">Mobile network operator</td>
<td valign="top" align="left">Data transport, SIM management</td>
<td valign="top" align="left">Constant</td>
<td valign="top" align="left">Declining</td>
<td valign="top" align="left">+</td>
</tr>
<tr>
<td valign="top" align="left">3rd party software provider</td>
<td valign="top" align="left">Software developer, application provider</td>
<td valign="top" align="left">Constant/progressive (depending on business model)</td>
<td valign="top" align="left">Depending on business model</td>
<td valign="top" align="left">+</td>
</tr>
<tr>
<td valign="top" align="left">Portal provider</td>
<td valign="top" align="left">Portal operator</td>
<td valign="top" align="left">Progressive</td>
<td valign="top" align="left">Declining</td>
<td valign="top" align="left">++</td></tr>
</tbody>
</table>
</table-wrap>
<para>The module manufacturer produces the black-boxes (see <link linkend="F13-2">Figure <xref linkend="F13-2" remap="13.2"/></link>) built into new machines or used to retrofit older ones. Revenues for the module manufacturer mostly come from black-box sales. Costs per unit are expected to decline as the number of units sold increases.</para>
<para>The machine manufacturer&#x02019;s revenues come from machine sales as well as service delivery and savings due to remote software updates. Development costs per unit are expected to decline as the number of units sold increases.</para>
<para>The mobile network operator&#x02019;s role is to deliver data through a mobile network. SIM card management may be done by the network operator or by an independent partner. Revenues consist of fees for data traffic as well as service fees for SIM card supply and management. The additional costs for extra data volume on an existing network are very low.</para>
<para>Third-party software providers can be part of the value chain; however, this is not compulsory. They either supply an application bringing additional functions to the machinery or implement their own business model based on the data held in the portal.</para>
<para>The software is sold through the portal and delivered to the machinery by the remote software update process described above. The development of revenues per unit depends on the business model employed. When only software is sold, revenues per unit are constant. With additional business models, revenues may also develop progressively. Costs are mostly one-time costs for software development as well as running costs for maintenance; however, additional costs may arise depending on the business model.</para>
<para>The portal provider operates and manages the portal. Revenues consist of usage fees, revenues from third-party app sales, fees for delivering software updates and other service fees. Costs are mainly for portal operation, support and data license fees.</para>
<para>The end users&#x02019; revenues come from savings due to increased process efficiency, while costs arise from additional depreciation for machines, a higher-skilled workforce, system usage fees and so on. Business potential exists for all partners involved in the value chain.</para>
<para>With applications developed by third-party software developers, a variety of new business models can be implemented. One example is &#x0201C;pay-per-use&#x0201D; or &#x0201C;pay-how-you-use&#x0201D; insurance or leasing. Within this business model, insurance or leasing companies can calculate insurance or leasing rates that reflect risk more adequately, based on real usage patterns. The insurance or leasing company is integrated into the value chain as a third-party software provider. To run this business model, data showing the usage patterns is needed, and the third-party software provider has to pay license fees to obtain this data.</para>
<fig id="F13-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-6">Figure <xref linkend="F13-6" remap="13.6"/></link></label>
<caption><para>Service-links between delivering and receiving services.</para></caption>
<graphic xlink:href="graphics/ch13_fig06.jpg"/>
</fig>
</section>
<section class="lev1" id="sec13-6">
<title>13.6 Economic Analysis</title>
<para>Economic analysis of the system leads to a model consisting of linear equations. For visualizing the quantitative relations between the different services, sub-services and partners, a so-called swimlane gozintograph is used. Based on the standard gozintograph methodology described in [11], the figure is adapted by including the swimlane methodology [12] to show the involved partners. <link linkend="F13-6">Figure <xref linkend="F13-6" remap="13.6"/></link> shows the corresponding swimlane gozintograph. Columns represent the involved partners; transparent circles indicate the different services delivered by the partners. Shaded circles represent business cases, i.e. services delivered externally.</para>
<para>The figure shows the relations between internal and external services and the share of each partner in the different business cases. From this gozintograph, mathematical equations can be derived, enabling the calculation of the gross margins for each business case.</para>
<para>From <link linkend="F13-6">Figure <xref linkend="F13-6" remap="13.6"/></link>, linear equations are derived, including transfer prices, amounts of service delivery and external sales prices for cost and gross margin calculation.</para>
<para>Variable costs for the business cases can be calculated using Equation system (13.1).</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq13-1.jpg"/></para>
<para>where a<subscript><emphasis>ij</emphasis></subscript> &#x02013; amount of service j delivered for service i; b<subscript><emphasis>j</emphasis></subscript> &#x02013; transfer prices of service j; c<subscript><emphasis>i</emphasis></subscript> &#x02013; variable costs of finally receiving service i; m &#x02013; number of finally receiving services; n &#x02013; number of delivering services.</para>
<para>The system of linear equations yields relation matrix A=(a<subscript><emphasis>ij</emphasis></subscript>) and transfer price vector B=(b<subscript><emphasis>j</emphasis></subscript>). The vector C=(c<subscript><emphasis>i</emphasis></subscript>) of variable costs can be represented by Equation (13.2).</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq13-2.jpg"/></para>
<para>Using the vector D=(d<subscript><emphasis>i</emphasis></subscript>) consisting of the sales prices of the finally receiving services, Equation (13.3) yields the vector M=(m<subscript><emphasis>i</emphasis></subscript>) of the gross margin per unit of all business cases, i.e. the finally receiving services.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq13-3.jpg"/></para>
<para><link linkend="F13-7">Figure <xref linkend="F13-7" remap="13.7"/></link> exemplifies the input matrix A and vectors B and D with estimated quantities. In matrix A, the rows indicate the business cases PT (row 1), ODA (row 2) and RSU (row 3). Columns represent delivering services indicated as white circles in <link linkend="F13-6">Figure <xref linkend="F13-6" remap="13.6"/></link>. The elements of vector B represent transfer prices of delivering services. Elements of the vector D represent sales prices of the three business cases.</para>
<fig id="F13-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-7">Figure <xref linkend="F13-7" remap="13.7"/></link></label>
<caption><para>Relation matrix A, transfer price vector B and sales price vector D.</para></caption>
<graphic xlink:href="graphics/ch13_fig007.jpg"/>
</fig>
<fig id="F13-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-8">Figure <xref linkend="F13-8" remap="13.8"/></link></label>
<caption><para>Vector of variable costs C and vector of marginal return per unit M.</para></caption>
<graphic xlink:href="graphics/ch13_fig008.jpg"/>
</fig>
<para>The results of economic analysis are shown in <link linkend="F13-8">Figure <xref linkend="F13-8" remap="13.8"/></link>. Elements of the calculated vector C indicate variable costs of the three business cases. It can be seen that the marginal return per unit is positive for all three business cases with the highest marginal return for business case Process Transparency.</para>
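<para>As a numerical illustration of Equations (13.2) and (13.3), the following sketch computes the variable-cost vector C and the gross-margin vector M from a relation matrix and price vectors. The numbers below are made-up placeholders for illustration only; the project&#x02019;s actual inputs are those shown in Figure 13.7.</para>

```python
# Numerical sketch of Equations (13.2) and (13.3):
#   C = A * B   (variable costs of the finally receiving services)
#   M = D - C   (gross margin per unit of each business case)
# All numbers are illustrative placeholders, not project data.

def mat_vec(A, B):
    """Multiply relation matrix A (m x n) with transfer-price vector B (n)."""
    return [sum(a_ij * b_j for a_ij, b_j in zip(row, B)) for row in A]

# a_ij: amount of delivering service j consumed by business case i
A = [
    [1.0, 2.0, 0.0],   # PT  (Process Transparency)
    [1.0, 0.0, 1.0],   # ODA (Operations Data Acquisition)
    [0.0, 1.0, 1.0],   # RSU (Remote Software Update)
]
B = [3.0, 2.0, 4.0]    # transfer prices of the delivering services
D = [12.0, 10.0, 9.0]  # external sales prices of the business cases

C = mat_vec(A, B)                  # Eq. (13.2): variable costs
M = [d - c for d, c in zip(D, C)]  # Eq. (13.3): gross margin per unit

print(C)  # [7.0, 7.0, 6.0]
print(M)  # [5.0, 3.0, 3.0]
```

With these placeholder values, the margin is positive for every business case, mirroring the qualitative result reported above.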
</section>
<section class="lev1" id="sec13-7">
<title>13.7 Communication Security</title>
<para>Securing the communication channel against unauthorized access and manipulation is another important factor which has to be taken into account.</para>
<para>The following communication scenarios have to be considered: communication between machines and the portal using mobile networks such as 3G/4G and WLAN, secure remote firmware updates, and covering dead spots.</para>
<para>The whole security concept is based on asymmetric encryption. Every participant in the communication chain (machines, machine manufacturer, M2M portal, provider) needs a key pair which should be created on the machine to keep the private key on the device.</para>
<para>This security concept was developed in the context of a bachelor thesis at the FH Dortmund [13]. The main target was the use of open and established standards [14, 15].</para>
<section class="lev2" id="sec13-7-1">
<title>13.7.1 CA</title>
<para>The central instance of the security structure is a CA (Certificate Authority), which provides services such as issuing certificates (upon receiving a CSR (Certificate Signing Request)), revoking certificates, and checking whether certificates have been revoked (through CRLs/OCSP). During the key creation process a CSR is created and passed to the CA. The CSR is signed and the certificate is sent back to the device (machine).</para>
</section>
<section class="lev2" id="sec13-7-2">
<title>13.7.2 Communicating On-the-Go</title>
<para>The communication between the machines and the portal is secured by means of a mutually authenticated HTTPS connection. The portal identifies itself to the machine by presenting its certificate, and vice versa. During the initiation of the connection, each device has to check the certificate presented by the other party: 1) Is the certificate signed by the M2M CA (this prevents man-in-the-middle attacks)? If yes: 2) Check against the CA whether the counterpart&#x02019;s certificate has been revoked. This is done using OCSP or CRLs (as a fallback in case OCSP fails).</para>
<para>After the connection has been initiated, both partners can communicate securely; the security of the underlying network(s) (such as mobile 2G/3G or WLAN) then no longer matters.</para>
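<para>The mutual authentication described above can be sketched with Python&#x02019;s ssl module. The certificate and CA file names below are placeholders (left commented out, since they would have to exist on disk), and the OCSP/CRL revocation check is not shown, as it sits outside the TLS handshake itself.</para>

```python
# Sketch of the mutually authenticated HTTPS setup: both sides demand and
# verify the peer's certificate. File names are placeholders for material
# issued by the M2M CA; OCSP/CRL revocation checks are not shown.
import ssl

# Portal side: a TLS server that requires a client (machine) certificate.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED          # mutual authentication
# server_ctx.load_cert_chain("portal.pem", "portal.key")
# server_ctx.load_verify_locations("m2m_ca.pem")    # trust only the M2M CA

# Machine side: a TLS client that presents its own certificate and checks
# that the portal's certificate chains up to the M2M CA.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# PROTOCOL_TLS_CLIENT already enables CERT_REQUIRED and hostname checking.
# client_ctx.load_cert_chain("machine.pem", "machine.key")
# client_ctx.load_verify_locations("m2m_ca.pem")

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(client_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```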
</section>
<section class="lev2" id="sec13-7-3">
<title>13.7.3 Covering Dead Spots</title>
<para>In case mobile communication is not possible due to lacking network coverage, the collected data has to be transferred by other means. Here, other vehicles (such as transport vehicles) deliver the data from the machine within the dead spot to areas with mobile network coverage, from where it is sent to the portal. During transport, the data has to be secured against manipulation and unauthorized access.</para>
<para>Preparing a data packet for delivery involves the following steps: First, the data is encrypted using the portal&#x02019;s public key; to ensure the public key is still valid, the corresponding certificate is checked against the CA (through OCSP/CRL). This prevents unauthorized access. In the next step, the signature of the encrypted data is created: the checksum of the data is calculated and encrypted with the private key of the originating machine. Both the signature (the encrypted checksum) and the encrypted data are handed over to the transport vehicle.</para>
<para>The portal checks the signature by decrypting the checksum using the originating machine&#x02019;s public key (the key/certificate is checked through OCSP/CRL) and by computing the checksum of the data package itself. If both checksums match, the data has not been manipulated and can be decrypted using the private key of the portal.</para>
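<para>The encrypt-then-sign-then-verify sequence of this scheme can be illustrated with a toy example. Below, textbook RSA with tiny primes and a truncated SHA-256 checksum stand in for the real PKI; this is not secure and only mirrors the protocol&#x02019;s order of operations.</para>

```python
# Toy illustration of the dead-spot data-packet scheme: encrypt the payload
# for the portal, sign the ciphertext with the machine's private key, and
# let the portal verify the signature before decrypting. Textbook RSA with
# tiny primes is NOT secure; it only mirrors the protocol's steps.
import hashlib

def make_keypair(p, q, e=65537):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)   # (public key, private key); 3.8+

def rsa(m, key):
    exp, n = key
    return pow(m, exp, n)

def checksum(data):
    # SHA-256, truncated to 2 bytes so it fits below the toy modulus
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

portal_pub, portal_priv = make_keypair(1009, 1013)
machine_pub, machine_priv = make_keypair(1019, 1021)

# --- on the machine, inside the dead spot ---
reading = 4242                                  # payload (toy: one integer)
ciphertext = rsa(reading, portal_pub)           # encrypt for the portal
signature = rsa(checksum(str(ciphertext).encode()), machine_priv)

# --- at the portal, after physical transport by another vehicle ---
assert rsa(signature, machine_pub) == checksum(str(ciphertext).encode())
recovered = rsa(ciphertext, portal_priv)
print(recovered)  # 4242
```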
</section>
<section class="lev2" id="sec13-7-4">
<title>13.7.4 Securing WLAN Infrastructures</title>
<para>In the vicinity of a farm, a wireless LAN connection is used instead of a mobile network connection. The M2M project elaborated a reference WLAN network which can be installed on the farm premises; this network is designed and optimized for the M2M system. In order to guarantee that only authorized machines have access to the network, the authentication scheme is based on IEEE 802.1X with a RADIUS server, using AES/CCMP encryption (IEEE 802.11i RSN). Furthermore, a DHCP/DNS service is provided by the gateway, which connects the network to the internet and acts as a relay to the M2M portal. A machine connects to the M2M wireless network using its X.509 certificate. The certificate is presented to the RADIUS server, which checks against the CA (OCSP/CRL) whether it has been revoked and whether it is signed by the M2M CA. The machine itself has to check whether the RADIUS server&#x02019;s certificate belongs to the M2M CA in order to avoid rogue access points. If all checks pass successfully, the RADIUS server grants access to the network.</para>
</section>
<section class="lev2" id="sec13-7-5">
<title>13.7.5 Firmware Update</title>
<para>It is necessary to periodically apply updates for the software systems on the machines. The updates are passed from the manufacturer of the machine through the portal to the destination machine.</para>
<para>Since the update packages may contain critical and confidential data, the provision of the update package has to be secured accordingly. Because of the file size (100 MB and up), purely asymmetric encryption is not appropriate. Instead, a 256-bit symmetric AES key is generated and used to encrypt the update. This key is secured using public-key encryption with the public key of the destination machine (after checking the corresponding certificate through OCSP/CRL). In the next step, the signature of the encrypted update file is calculated by generating its hash value, which is then encrypted with the private key of the manufacturer. Finally, the signature, the encrypted file and the encrypted AES key are sent to the destination machine.</para>
<para>On the destination machine, the signature is checked by generating the checksum of the encrypted file and comparing it with the decrypted checksum, which is obtained using the public key of the manufacturer (the corresponding certificate has to be checked, too). If both checksums match, the update has retained its integrity. Finally, the AES key can be decrypted using the private key of the destination machine, and the update can be decrypted (<link linkend="F13-9">Figure <xref linkend="F13-9" remap="13.9"/></link>).</para>
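<para>The hybrid construction, i.e. a fresh symmetric key for the large file and asymmetric wrapping only for the small key, can be sketched as follows. Since the Python standard library ships no AES implementation, a SHA-256 counter keystream stands in for AES-256 here; a real deployment would use AES exactly as described above.</para>

```python
# Sketch of the hybrid update encryption: a fresh symmetric key encrypts the
# (large) update file, and only that small key is wrapped asymmetrically.
# A SHA-256 counter keystream stands in for AES-256, since the Python
# standard library ships no AES; this is an illustration, not real crypto.
import hashlib
import secrets

def keystream_xor(key, data):
    """Symmetric stream-cipher stand-in: XOR with a SHA-256 counter stream."""
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

update = b"firmware image v2.1 ..." * 1000   # large update payload
sym_key = secrets.token_bytes(32)            # fresh 256-bit session key

encrypted_update = keystream_xor(sym_key, update)
signature_hash = hashlib.sha256(encrypted_update).hexdigest()
# In the full scheme, sym_key would now be wrapped with the destination
# machine's public key, and signature_hash encrypted with the
# manufacturer's private key before transmission.

# --- on the destination machine ---
assert hashlib.sha256(encrypted_update).hexdigest() == signature_hash
restored = keystream_xor(sym_key, encrypted_update)   # XOR is symmetric
assert restored == update
```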
</section>
</section>
<section class="lev1" id="sec13-8">
<title>13.8 Resume</title>
<para>This paper presents a concept for optimizing the process information chain to improve efficiency in agricultural harvesting processes. Machine-to-machine communication plays a central role in synchronizing data between the diverse process partners.</para>
<para>The information gathered by the sensors of agricultural machines is central to building new business models. The business model analysis shows that all parties along the value chain have good business potential. It has been shown that the three described business cases can be operated with a positive marginal return per unit under the assumptions made in the project.</para>
<para>However, security issues and business models play an important role in successful system operation. With the described security measures, the system can be operated while ensuring confidentiality, integrity and availability.</para>
<fig id="F13-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-9">Figure <xref linkend="F13-9" remap="13.9"/></link></label>
<caption><para>Large file encryption.</para></caption>
<graphic xlink:href="graphics/ch13_fig009.jpg"/>
</fig>
<para>As the system is designed with an open and generic approach, adaptation to other sectors such as construction, as well as the opportunity to bring in new functions and business models via third-party software, provides additional market potential.</para>
<para>The concepts presented in this paper were developed within the project M2M-Teledesk. The project aims to implement a prototypical system following the concept described above.</para>
</section>
<section class="lev1" id="sec13-9">
<title>13.9 Acknowledgement</title>
<para>The presented work was done in the research project &#x0201C;M2M-Teledesk&#x0201D; funded by the state government of North Rhine-Westphalia and the European Union Fund for Regional Development (EUROP&#x00C4;ISCHE UNION - Europ&#x00E4;ischer Fonds f&#x00FC;r regionale Entwicklung - Investition in unsere Zukunft). Project partners of M2M-Teledesk are the University of Applied Sciences and Arts, Dortmund, VIVAI Software AG, Dortmund, and Claas Selbstfahrende Erntemaschinen GmbH, Harsewinkel.</para>
</section>
<section class="lev1" id="sec13-10">
<title>References</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>K. Moummadi, R. Abidar and H. Medromi, &#x02018;Generic model based on constraint programming and multi-agent system for M2M services and agricultural decision support&#x02019;, In Multimedia Computing and Systems (ICMCS), 2011:1&#x02013;6. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+Moummadi%2C+R%2E+Abidar+and+H%2E+Medromi%2C+%27Generic+model+based+on+constraint+programming+and+multi-agent+system+for+M2M+services+and+agricultural+decision+support%27%2C+In+Multimedia+Computing+and+Systems+%28ICMCS%29%2C+2011%3A1-6%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>G. Wu, S. Talwar, K. Johnsson, N. Himayat and K. Johnson, &#x02018;M2M: From Mobile to Embedded Internet&#x02019;, In IEEE Communications Magazine, 2011:36&#x02013;42. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=G%2E+Wu%2C+S%2E+Talwar%2C+K%2E+Johnsson%2C+N%2E+Himayat+and+K%2E+Johnson%2C+%27M2M%3A+From+Mobile+to+Embedded+Internet%27%2C+In+IEEE+Communications+Magazine%2C+2011%3A36-42%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Daesub, C. Jongwoo, K. Hyunsuk and K. Juwan, &#x02018;Future Automotive Insurance System based on Telematics Technology&#x02019;, In 10th International Conference on Advanced Communication Technology (ICACT), 2008:679&#x02013;681. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Daesub%2C+C%2E+Jongwoo%2C+K%2E+Hyunsuk+and+K%2E+Juwan%2C+%27Future+Automotive+Insurance+System+based+on+Telematics+Technology%27%2C+In+10th+International+Conference+on+Advanced+Communication+Technology+%28ICACT%29%2C+2008%3A679-681%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Juliandri, M. Musida and Supriyadi, &#x02018;Positioning cloud computing in machine to machine business models&#x02019;, In Cloud Computing and Social Networking (ICCCSN), 2012:1&#x02013;4. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Juliandri%2C+M%2E+Musida+and+Supriyadi%2C+%27Positioning+cloud+computing+in+machine+to+machine+business+models%27%2C+In+Cloud+Computing+and+Social+Networking+%28ICCCSN%29%2C+2012%3A1-4%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>V. Goncalves and P. Dobbelaere, &#x02018;Business Scenarios for Machine-to-Machine Mobile Applications&#x02019;, In Mobile Business and 2010 Ninth Global Mobility Roundtable (ICMB-GMR), 2010. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=V%2E+Goncalves+and+P%2E+Dobbelaere%2C+%27Business+Scenarios+for+Machine-to-Machine+Mobile+Applications%27%2C+In+Mobile+Business+and+2010+Ninth+Global+Mobility+Roundtable+%28ICMB-GMR%29%2C+2010%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>Y. Chang, T. Chi, W. Wang and S. Kuo, &#x02018;Dynamic software update model for remote entity management of machine-to-machine service capability&#x02019;, In IET Communications Journal, 2012. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=Y%2E+Chang%2C+T%2E+Chi%2C+W%2E+Wang+and+S%2E+Kuo%2C+%27Dynamic+software+update+model+for+remote+entity+management+of+machine-to-machine+service+capability%27%2C+In+IET+Communications+Journal%2C+2012%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Blank, G. Kormann and K. Berns, &#x02018;A Modular Sensor Fusion Approach for Agricultural Machines&#x02019;, In XXXVI CIOSTA &#x00026; CIGR Section V Conference, Vienna, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Blank%2C+G%2E+Kormann+and+K%2E+Berns%2C+%27A+Modular+Sensor+Fusion+Approach+for+Agricultural+Machines%27%2C+In+XXXVI+CIOSTA&#x2216;%26+CIGR+Section+V+Conference%2C+Vienna%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>M. Mau, &#x02018;Supply Chain Management in Agriculture - Including Economics Aspects like Responsibility and Transparency&#x02019;, In X EAAE Congress Exploring Diversity in European Agriculture, 2002. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=M%2E+Mau%2C+%27Supply+Chain+Management+in+Agriculture+-+Including+Economics+Aspects+like+Responsibility+and+Transparency%27%2C+In+X+EAAE+Congress+Exploring+Diversity+in+European+Agriculture%2C+2002%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>S. Gansemer, U. Grossmann, B. Horster, T. Horster-Moeller and C. Rusch, &#x02018;Machine-to-machine communication for optimization of information chain in agricultural business&#x02019;, In 7th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems, Berlin, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=S%2E+Gansemer%2C+U%2E+Grossmann%2C+B%2E+Horster%2C+T%2E+Horster-Moeller+and+C%2E+Rusch%2C+%27Machine-to-machine+communication+for+optimization+of+information+chain+in+agricultural+business%27%2C+In+7th+IEEE+International+Conference+on+Intelligent+Data+Acquisition+and+Advances+Computing+Systems%2C+Berlin%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>K. Johansson, &#x02018;Cost efficient provisioning of wireless access: Infrastructure cost modeling and multi-operator resource sharing&#x02019;, Thesis KTH School of Electrical Engineering, Stockholm, 2005. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=K%2E+Johansson%2C+%27Cost+efficient+provisioning+of+wireless+access%3A+Infrastructure+cost+modeling+and+multi-operator+ressource+sharing%27%2C+Thesis+KTH+School+of+Electrical+Engineering%2C+Stockholm%2C+2005%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>A. Vazsonyi, &#x02018;The use of mathematics in production and inventory control&#x02019;, Management Science, 1 (1), Jan. 1955. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=A%2E+Vazsonyi%2C+%27The+use+of+mathematics+in+production+and+investory+control%27%2C+Management+Science%2C+1+%281%29%2C+Jan%2E+1955%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>R.K. Ko, S. S. G. Lee and E. W. Lee, &#x02018;Business process management (BPM) standards: a survey&#x02019;, Business Process Management Journal, 15(5):744&#x02013;791, 2009. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=R%2EK%2E+Ko%2C+S%2E+S%2E+G%2E+Lee+and+E%2E+W%2E+Lee%2C+%27Business+process+management+%28BPM%29+standards%3A+a+survey%27%2C+Business+Process+Mansgement+Journal%2C+15%285%29%3A744-791%2C+2009%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>J. Sell, &#x02018;Konzeption einer Ende-zu-Ende Absicherung f&#x00FC;r eine M2M-Telematik Anwendung f&#x00FC;r die Firma Claas&#x02019;, Thesis FH Dortmund, Dortmund, 2013. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=J%2E+Sell%2C+%27Konzeption+einer+Ende-zu-Ende+Absicherung+f%FCr+eine+M2M-Telematik+Anwendung+f%FCr+die+Firma+Claas%27%2C+Thesis+FH+Dortmund%2C+Dortmund%2C+2013%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>E. Eren and K. Detken, &#x02018;Mobile Security. Risiken mobiler Kommunikation und L&#x00F6;sungen zur mobilen Sicherheit&#x02019;, Wien, Carl Hanser Verlag, 2006. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=E%2E+Eren+and+K%2E+Detken%2C+%27Mobile+Security%2E+Risiken+mobiler+Kommunikation+und+L%F6sungen+zur+mobilen+Sicherheit%27%2C+Wien%2C+Carl+Hanser+Verlag%2C+2006%2E" target="_blank">Google Scholar</ulink></para></listitem>
<listitem>
<para>E. Eren and G. Aljabari, &#x02018;Virtualization of Wireless LAN Infrastructures&#x02019;, In 6th IEEE Workshop on Intelligent Data Acquisition and Advanced Computing Systems, Prague, 2011. <ulink url="https://scholar.google.com/scholar?hl=en&amp;q=E%2E+Eren+and+G%2E+Aljabari%2C+%27Virtualization+of+Wireless+LAN+Infrastructures%27%2C+In+6th+IEEE+Workshop+on+Intelligent+Data+Acquisition+and+Advanced+Computing+Systems%2C+Prague%2C+2011%2E" target="_blank">Google Scholar</ulink></para></listitem></orderedlist>
</section>
</chapter>
<chapter class="nosec" id="bib01">
<title>Editor&#x02019;s Biographies</title>
<para><emphasis role="strong">Richard J. Duro</emphasis> received the B.Sc., M.Sc., and Ph.D. degrees in physics from the University of Santiago de Compostela, Spain, in 1988, 1989, and 1992, respectively.</para>
<para>He is currently a Full Professor in the Department of Computer Science and head of the Integrated Group for Engineering Research at the University of A Coru&#x00F1;a, Coru&#x00F1;a, Spain. His research interests include higher order neural network structures, signal processing, and autonomous and evolutionary robotics.<!--br /--></para>
<para><emphasis role="strong">Yuriy P. Kondratenko</emphasis>, Doctor of Science, Professor, Honored Inventor of Ukraine (2008), Corresponding Academician of the Royal Academy of Doctors (Barcelona, Spain), is Professor of Intelligent Information Systems at Petro Mohyla Black Sea State University, Ukraine. He received a Ph.D. (1983) and a Dr.Sc. (1994) in Elements and Devices of Computer and Control Systems from Odessa National Polytechnic University. He received several international grants and scholarships for conducting research at the Institute of Automation of Chongqing University, P.R. China (1988&#x02013;1989), Ruhr-University Bochum, Germany (2000, 2010), and Nazareth College and Cleveland State University, USA (2003). His research interests include robotics, automation, sensors and control systems, intelligent decision support systems, fuzzy logic, soft computing, and elements and devices of computing systems. He is the principal researcher of several international research projects with Spain, P.R. China and other countries, and the author of more than 120 patents and 12 books (including author&#x02019;s chapters in monographs) published by Springer, World Scientific, Pergamon Press, Academic Verlag and others. He is a member of the GAMM, DAAAM, AMSE UAPL and the PBD-Honor Society of International Scholars, and a visiting lecturer at universities in Rochester, Cleveland, Kassel, Vladivostok and Warsaw. <!--br /--></para>
</chapter>
<chapter class="nosec" id="bib02">
<title>Author&#x02019;s Biographies</title>
<para><emphasis role="strong">Francisco Bellas</emphasis> received the B.Sc. and M.Sc. degrees in physics from the University of Santiago de Compostela, Spain, in 1999 and 2001, respectively, and the Ph.D. degree in computer science from the University of A Coru&#x00F1;a, Coru&#x00F1;a, Spain, in 2003.</para>
<para>He is currently a Profesor Contratado Doctor at the University of A Coru&#x00F1;a. He is a member of the Integrated Group for Engineering Research at the University of A Coru&#x00F1;a. His current research interests are related to evolutionary algorithms applied to artificial neural networks, multiagent systems, and robotics.<!--br /--></para>
<para><emphasis role="strong">Mitru Corneliu Caraivan</emphasis> received his B.Sc. degree in Automatic Control and Computers in 2009 from the Politehnica University of Bucharest, Romania. Following his master thesis at the Fakult&#x00E4;t f&#x00FC;r Ingenieurwissenschaften, University of Duisburg-Essen, Germany, he earned the Ph.D. degree in Systems Engineering in 2013 from the Politehnica University of Bucharest. Since 2009 he has been a part-time Assistant Professor in the Faculty of Applied Sciences and Engineering, Ovidius University of Constanta, Romania. His main research interests focus on offshore oil and gas automatic control systems on drilling and exploration rigs, programmable logic controller networks, instrumentation data sensors and actuators, and systems redundancy and reliability. Since 2010, he has gained experience on offshore jack-up rigs as an IT/Electronics Engineer focusing on specific industry equipment: NOV Amphion Systems, Cameron (former TTS-Sense) X-COM Cyber-Chairs 5<superscript><emphasis>th</emphasis></superscript> gen, drilling equipment instrumentation, PLCs, SBCs, fire and gas alarm systems, satellite communications and networking solutions.<!--br /--></para>
<para><emphasis role="strong">Valentin Dache</emphasis> received his B.Sc. degree in Automatic Control and Computers in 2008 from the Politehnica University of Bucharest, Romania. Following his master thesis at the same university, he is currently a Ph.D. student in Systems Engineering, with ongoing research focused on intelligent building management systems. His other interests include remotely controlled radiation scanners and non-intrusive inspection systems for customs control and border security &#x02013; the ROBOSCAN and ROBOSCAN 2M AERIA systems &#x02013; two-time winner of the Grand Prix of the Salon International des Inventions de Gen&#x00E8;ve, in 2009 and 2013. As a member of the team that won the International Exhibition of Inventions, he also holds one invention patent.<!--br /--></para>
<para><emphasis role="strong">&#x00C1;lvaro Deibe D&#x00ED;az</emphasis>received the M.S. degree in Industrial Engineering in 1994 from the University of Vigo, Spain, and a Ph.D. in Industrial Engineering in 2010 in the University of A Coru&#x00F1;a. He is currently Titular de Universidad in the Department of Mathematical and Representation Methods at the same University. He is a member of the Integrated Group for Engineering Research at the University of A Coru&#x00F1;a. His research interests include Automation and Embedded Systems.<!--br /--></para>
<para><emphasis role="strong">Alexander A. Dyda</emphasis> graduated from the Electrical Engineering Department of the Far-Eastern Polytechnic Institute (Vladivostok) in 1977. He received the degree of Candidate of Science (PhD) from the Leningrad Electrical Engineering Institute in 1986. From 1979 to 1999 he was an Assistant Professor, principal lecturer and Associate Professor in the Department of Technical Cybernetics &#x00026; Informatics. In 1990&#x02013;1991 and 1995 he was a researcher in the Department of System Sciences at &#x0201C;La Sapienza&#x0201D; University (Rome, Italy) and the Department of System Science &#x00026; Mathematics of Washington University (Saint-Louis, USA), respectively. In 1998 he received the degree of Doctor of Technical Sciences (DSc) from the Institute of Automation &#x00026; Control Processes, Russian Academy of Sciences. From 1999 to 2003 he was Chairman of the Department of Information Systems. Since 2003 he has been a Full Professor in the Department of Automatic &#x00026; Information Systems and head of the laboratory of Nonlinear &#x00026; Intelligent Control Systems at Maritime State University.<!--br /--></para>
<para><emphasis role="strong">Uladzimir Dziomin</emphasis> is a PhD student in the Intelligent Information Technology Department of Brest State Technical University. He received his MSc and Diploma in Information Technology at BrSTU. The areas of his research are Intelligent Control, Autonomous Learning Robots and Distributed Systems.<!--br /--></para>
<para><emphasis role="strong">Chernakova Svetlana Eduardovna</emphasis> was born on August 3, 1967 in the town of Kalinin in the Kalinin Region. She graduated from the Leningrad Polytechnic Institute in 1990, has written more than 80 scientific papers, and possesses considerable knowledge related to the development and creation of television systems for different purposes. Areas of interest: pattern identification, intelligent &#x0201C;human-to-machine&#x0201D; systems, intelligent technology of training by demonstration, and mixed (virtual) reality. Place of work: St. Petersburg Institute of Informatics of the Russian Academy of Sciences (SPIIRAS), St. Petersburg. E-mail: S_chernakova@rambler.ru. Address: 199226, Saint-Petersburg, Nalichnaya Street, H# 36, Building 6, Apt 160.<!--br /--></para>
<para><emphasis role="strong">Andr&#x00E9;s Fa&#x00ED;&#x00F1;a</emphasis> received the M.Sc. and Ph.D. degrees in industrial engineering from the University of A Coru&#x00F1;a, Spain, in 2006 and 2011, respectively.</para>
<para>He is currently a Postdoctoral Researcher at the IT University of Copenhagen, Denmark. His interests include modular and self-reconfigurable robotics, evolutionary robotics, mobile robotics, and electronic and mechanical design.<!--br /--></para>
<para><emphasis role="strong">Vladimir Golovko</emphasis>, Professor, has been Head of the Intelligent Information Technologies Department and the Laboratory of Artificial Neural Networks since 2003. He received his PhD degree from the Belarus State University and his Doctor of Sciences degree in Computer Science from the United Institute of Informatics Problems of the National Academy of Sciences (Belarus). His research interests include Artificial Intelligence, Neural Networks, Autonomous Learning Robots, and Intelligent Signal Processing.<!--br /--></para>
<para><emphasis role="strong">Marcos Miguez Gonz&#x00E1;lez</emphasis> received his master degree in Naval Architecture and Marine Engineering in 2004 from the University of A Coru&#x00F1;a, Spain, and his PhD, related to the analysis of parametric roll resonance, from the same university in 2012. He worked for three years in the shipbuilding industry. Since 2007, he has been a researcher in the GII (Integrated Group for Engineering Research) of the University of A Coru&#x00F1;a, formerly under a scholarship from the Spanish Ministry of Education and nowadays as an assistant professor. His research topics are ship behaviour modelling, parametric rolling and stability guidance systems.<!--br /--></para>
<para><emphasis role="strong">Boris Gordeev</emphasis> received the PhD degree from Leningrad Institute of High-precision Mechanics and Optics, Leningrad, USSR, in 1987 and Doctor of Science degree from Institute of Electrodynamics of National Academy of Sciences of Ukraine in Elements and Devices of Computer and Control Systems in 2011.</para>
<para>He is currently a Professor of Marine Instrumentation Department at National University of Shipbuilding, Ukraine. His current research interest is related to polymetric signals generation and processing for measuring systems, including control systems for continuous measurement of the quantitative and qualitative parameters of the fuels and liquefied petroleum gas. He is the author of more than 150 scientific publications in the above mentioned field.<!--br /--></para>
<para><emphasis role="strong">Anton Kabysh</emphasis> is an assistant in the Intelligent Information Technologies Department of Brest State Technical University. He is working on a Ph.D. thesis in Multi-Agent Control and Reinforcement Learning at the Robotics Laboratory in BrSTU. His research interests also include distributed systems, swarm intelligence and optimal control.<!--br /--></para>
<para><emphasis role="strong">Volodymyr Y. Kondratenko</emphasis> received a PhD degree in Applied Mathematics from the University of Colorado Denver in 2015. He received a B.S. summa cum laude in Computer Science and Mathematics from Taras Shevchenko Kyiv National University (2008), and an M.S. from the University of Colorado (2010) as a winner of a Fulbright Scholarship. He is the author of numerous publications in such fields as Wildfire Modelling and Fuzzy Logic, a Medallist of the Small Academy of Science of Ukraine (2004, 2nd place) and a winner of the All-Ukrainian Olympiad in mathematics (2004, 3rd place). His research interests include Computational Mathematics, Probability Theory, Data Assimilation, Computer Modelling, and Fuzzy Logic.<!--br /--></para>
<para><emphasis role="strong">Sauro Longhi</emphasis> received the Doctor degree in Electronic Engineering in 1979 from the University of Ancona, Italy, and a post-graduate Diploma in Automatic Control in 1984 from the University of Rome &#x0201C;La Sapienza&#x0201D;, Italy. From 1980 to 1981 he held a fellowship at the University of Ancona. From 1981 to 1983 he was with the R&#x00026;D Laboratory of Telettra S.p.A., Chieti, Italy, mainly involved in research and electronic design of modulation and demodulation systems for spread-spectrum numerical transmission systems. Since 1983 he has been at the Dipartimento di Elettronica e Automatica of the University of Ancona, now the Information Engineering Department of the Universit&#x00E0; Politecnica delle Marche, Ancona. From July 2011 to October 2013 he was the Director of this Department. Since November 1, 2013 he has been the Rector of the Universit&#x00E0; Politecnica delle Marche.<!--br /--></para>
<para><emphasis role="strong">Andrei Maciuca</emphasis> obtained his PhD in 2014 from the Department of Automatic Control and Industrial Informatics, University &#x0201C;Politehnica&#x0201D; of Bucharest with a thesis on &#x0201C;Sensor Networks for Home Monitoring of Elderly and Chronic Patients&#x0201D;.</para>
<para><emphasis role="strong">Kurosh Madani</emphasis> received his PhD degree in Electrical Engineering and Computer Sciences from University Paris-Sud, Orsay (France) in 1990. In 1995, he received the Doctor Habilitate degree (Dr Hab.) from UPEC. Since 1998 he has worked as Chair Professor in Electrical Engineering at University Paris-Est Creteil (UPEC). A co-initiator of the Images, Signals and Intelligent Systems Laboratory (LISSI/EA 3956), he is head of one of the four research groups of LISSI. His current research interests include bio-inspired perception, artificial awareness, cognitive robotics, human-like robots and intelligent machines. In 1996 he was elected Academician of the International Informatization Academy, and in 1997 Academician of the International Academy of Technological Cybernetics.<!--br /--></para>
<para><emphasis role="strong">Kulakov Felix Mikhailovich</emphasis> was born on July 4, 1931 in the town of Nadezhdinsk in the Ural Region. He graduated from the Leningrad Polytechnic Institute in 1955, and has written more than 230 scientific papers. He works in the sphere of robot supervision and research on the automatic performance of mechatronic systems. He is an Honored Worker of Science of the Russian Federation, Professor, and Chief Research Scientist at the St. Petersburg Institute of Informatics of the Russian Academy of Sciences (SPIIRAS), and head of the branch of the department of the &#x0201C;Mechanics of the controlled movement&#x0201D; of the St. Petersburg State University at SPIIRAS. E-mail: kufelix@yandex.ru. Address: 198064, Saint-Petersburg, Tikhoretsky Avenue, H# 11, Building 4, Apt 50.<!--br /--></para>
<para><emphasis role="strong">Andrea Monteri&#x00F9;</emphasis> received the Laurea Degree (joint BSc/MSc equivalent) summa cum laude in Electronic Engineering and the Ph.D. degree in Artificial Intelligence Systems from the Universit&#x00E0; Politecnica delle Marche, Ancona, Italy, in 2003 and 2006, respectively. His MSc thesis was developed at the Automation Department of the Technical University of Denmark, Lyngby, Denmark. In 2005 he was a visiting researcher at the Center for Robot Assisted Search &#x00026; Rescue of the University of South Florida, Tampa, Florida. Since 2005, he has been a Teaching Assistant in Automatic Control, Automation Systems, Industrial Automation, and Modelling and Identification of Dynamic Processes. Since 2007, he has held a PostDoc and Research Fellowship at the Dipartimento di Ingegneria dell&#x02019;Informazione of the Universit&#x00E0; Politecnica delle Marche, where he is currently a Contract Professor.<!--br /--></para>
<para><emphasis role="strong">Alexander Nakonechniy</emphasis> received the B.Sc. and M.Sc. degrees in High Voltages Electro-physics from National University of Shipbuilding, Ukraine, in 2007 and 2009 respectively.</para>
<para>He is currently a post-graduate student at Marine Instrumentation Department of National University of Shipbuilding, Ukraine. His current research interests are polymetric measurements, sensors and sensors networks, generation of short pulses. He is the author of several scientific publications in the above fields.<!--br /--></para>
<para><emphasis role="strong">Sergey I. Osadchy</emphasis>, Doctor of Science, Professor, is Chief of the Production Process Automatic Performance Department at Kirovograd National Technical University, Ukraine. He received a Ph.D. (1987) and a Dr.Sc. (2013) in Automatic Performance of Control Processes from the National Technical University of Ukraine &#x0201C;Kiev Polytechnic Institute&#x0201D;. His research interests include robotics, automation, sensors and optimal control systems synthesis, analysis and identification, and underwater supercavitating vehicle stabilization and control. He is the author of more than 140 scientific articles, 25 patents and 4 books. <!--br /--></para>
<para><emphasis role="strong">Dmitry A. Oskin</emphasis> graduated from the Electrical Engineering Department of the Far-Eastern State Technical University (Vladivostok) in 1997. He received the degree of Candidate of Science (PhD) in 2004. From 2000 to 2005 he was an Assistant Professor in the Department of Information Systems. Since 2005, he has been an Associate Professor in the Department of Automatic &#x00026; Information Systems and a Senior Researcher in the laboratory of Nonlinear &#x00026; Intelligent Control Systems at Maritime State University. <!--br /--></para>
<para><emphasis role="strong">Fernando L&#x00F3;pez Pe&#x00F1;a</emphasis> received the Master degree in Aeronautical Engineering from the Polytechnic University of Madrid, Spain, in 1981, a Research Master from the von Karman Institute of Fluid Dynamics, Belgium, in 1987, and the Ph.D. degree from the University of Louvain, Belgium, in 1992. He is currently a professor at the University of A Coru&#x00F1;a, Spain. He has authored about 100 papers in peer-reviewed journals and international conferences, holds five patents, and has led more than 80 research projects. His current research activities are related to intelligent hydrodynamics and aerodynamics design and optimization, flow measurement and diagnosis, and signal and image processing.<!--br /--></para>
<para><emphasis role="strong">Dan Popescu</emphasis> obtained the MSc degree in Automatic Control in 1974, an MS in Mathematics in 1980, and the PhD degree in Electrical Engineering in 1987. He is currently a Full Professor in the Department of Automation and Industrial Informatics, University &#x0201C;Politehnica&#x0201D; of Bucharest, head of the Artificial Vision laboratory and responsible for the master&#x02019;s program Complex Signal Processing in Multimedia Applications. He is a PhD adviser in the field of Automatic Control. His current scientific areas are: equipment for complex measurements, data acquisition and remote control, wireless sensor networks, alerting systems, pattern recognition and complex image processing, and interdisciplinary approaches. He is the author of 15 books and more than 120 papers, and director of five national research grants. He is an IEEE member and an SRAIT member, and he received the 2008 IBM Faculty Award. His research interests lie in the field of wireless sensor networks, image processing and reconfigurable computing.<!--br /--></para>
<para><emphasis role="strong">Dominik Maximili&#x00E1;n Ram&#x00ED;k</emphasis> received his PhD in 2012 in signal and image processing from University of Paris-Est, France. A member of the LISSI laboratory of University Paris-Est Creteil (UPEC), his current research topic concerns the processing of complex images using bio-inspired artificial intelligence approaches and the consequent extraction of semantic information, with use in mobile robotics control and industrial process supervision.<!--br /--></para>
<para><emphasis role="strong">Christian Rusch</emphasis> studied Mechatronics between 1999 and 2005 and graduated from the Technical University of Braunschweig. He then started his scientific career as a Research Scientist at the Technical University of Berlin. The title of his PhD thesis is &#x0201C;Analysis of data security of self-configuring radio networks on mobile working machines illustrated by the process documentation&#x0201D;. He finished the PhD (Dr.-Ing.) in 2012. In 2011 he moved into industry and worked for CLAAS as a project manager with a focus on wireless communication for mobile working machines. Since 2014 he has been a system engineer at CLAAS Electronic Systems. Furthermore, he is project leader of the standardization group &#x0201C;Wireless ISOBUS Communication&#x0201D;.<!--br /--></para>
<para><emphasis role="strong">F&#x00E9;lix Orjales Saavedra</emphasis> received the master degree in Industrial Engineering in 2010 from the University of Le&#x00F3;n, Spain. He is currently working on his Ph.D. degree in the Department of Industrial Engineering at the University of A Coru&#x00F1;a, Spain, related with collaborative unmanned aerial vehicles. He is a researcher at the Integrated Group for Engineering Research, and his main research topics include autonomous systems, robotics, and electronic design.<!--br /--></para>
<para><emphasis role="strong">Christophe Sabourin</emphasis> received his PhD in Robotics and Control from University of Orleans (France) in November 2004. Since 2005, he has been a researcher and a staff member of Images, Signals and Intelligent Systems Laboratory (LISSI/EA 3956) of University Paris-Est Creteil (UPEC). His current interests relate to areas of complex and bio-inspired intelligent artificial systems, cognitive robotics, humanoid robots, collective and social robotics.<!--br /--></para>
<para><emphasis role="strong">Valentin Sg&#x00E2;rciu</emphasis> is full professor in the Faculty of Automatic Control and Computers, Politehnica University of Bucharest, Romania since 1995 and PhD coordinator in the same university since 2008. His teaching activity includes undergraduate and postgraduate courses: &#x0201C;Sensors and Transducers&#x0201D;, &#x0201C;Data Processing&#x0201D;, &#x0201C;Transducers and Measuring systems&#x0201D;, &#x0201C;Reliability and Diagnosis&#x0201D;, &#x0201C;Security of Informatics Systems&#x0201D; and &#x0201C;Product and System Diagnosis&#x0201D;. His research activity include the fields of intelligent buildings, intelligent measurements, wireless sensor networks, software engineering, operating systems, control system theory, e-learning, distributed heterogeneous systems and middleware technologies. He has over 100 scientific papers presented in national and international symposiums and congresses, published in the associated preprints or in specialized Romanian and international journals. He is first author of 16 books and owns one invention patent.<!--br /--></para>
<para><emphasis role="strong">Daniel Souto</emphasis> received the M.Sc. degree in industrial engineering from the University of A Coru&#x00F1;a, Coru&#x00F1;a, Spain, in 2007. He is working towards the Ph.D. degree in the Department of Industrial Engineering at the same university.</para>
<para>He is currently a Researcher at the Integrated Group for Engineering Research. His research activities are related to automatic design and mechanical design of robots.<!--br /--></para>
<para><emphasis role="strong">Grigore Stamatescu</emphasis> is currently an Assistant Professor in the Department of Automatic Control and Industrial Informatics, University &#x0201C;Politehnica&#x0201D; of Bucharest &#x02013; Intelligent Measurement Technologies and Transducers laboratory. He obtained his PhD in 2012 from the same institution with a thesis on &#x0201C;Improving Life and Work with Reliable Wireless Sensor Networks&#x0201D;. He is a member of the IEEE Instrumentation and Measurement and Industrial Electronics societies and has published over 50 papers in international journals and indexed conference proceedings. His current research interests lie in the fields of intelligent sensors, embedded networked sensing and information processing.<!--br /--></para>
<para><emphasis role="strong">Ralf Stetter</emphasis> has been a Professor in the Department of Mechanical Engineering at the Hochschule Ravensburg-Weingarten since 2004. His teaching area is &#x0201C;Design and Development in Automotive Technology&#x0201D;. He received his PhD and Diploma in Mechanical Engineering from the Technische Universit&#x00E4;t M&#x00FC;nchen (TUM). He is currently Vice-Dean of the Department of Mechanical Engineering. He works as a Project Manager in the Steinbeis-Transfer-Centre &#x0201C;Automotive Systems&#x0201D;. He was previously a Team Coordinator at Audi AG, Ingolstadt, in Automotive Interior Product Development.<!--br /--></para>
<para><emphasis role="strong">Mircea Strutu</emphasis> obtained his PhD in 2014 from the Department of Automatic Control and Industrial Informatics, University &#x0201C;Politehnica&#x0201D; of Bucharest with a thesis on &#x0201C;Wireless Networks for Environmental Monitoring and Alerting based on Mobile Agents&#x0201D;. <!--br /--></para>
<para><emphasis role="strong">Anna S. Timoshenko</emphasis> is a Lecturer in the Department of Information Technology at the Kirovograd Flight Academy of the National Aviation University. She graduated with honors from the State Flight Academy in 2002 and from the Kirovograd National Technical University in 2006. She then took post-graduate studies at the Kirovograd Flight Academy of the National Aviation University (2009&#x02013;2012). She is the author of numerous publications in such fields as Navigation and Motion Control, Automation and Robotics.<!--br /--></para>
<para><emphasis role="strong">Blanca Mar&#x00ED;a Priego Torres</emphasis> received the title of Telecommunications Engineer in 2009 from the University of Granada, Spain. In 2011, she obtained the Master&#x02019;s Degree in Information and Communications Technologies in Mobile Networks from the University of A Coru&#x00F1;a, Spain. She is currently working on her PhD as a member of the Integrated Group for Engineering Research at the University of A Coru&#x00F1;a. Her research interests include the analysis, processing and interpretation of signals and multi-dimensional images in the industrial and naval fields, through the application of techniques based on new neural-network structures, soft computing and machine learning.<!--br /--></para>
<para><emphasis role="strong">Yuriy Zhukov</emphasis> received the PhD and Doctor of Science degrees from Nikolayev Shipbuilding Institute (now NUOS), Ukraine, in 1981 and 1994, respectively.</para>
<para>He is currently a Professor, Chief of the Marine Instrumentation Department and the Neoteric Naval Engineering Institute at National University of Shipbuilding, Ukraine. He is the author of more than 250 scientific publications in the fields of ship design and its operational safety, dynamic systems behavior monitoring and control, decision making support systems and artificial intelligence, etc. His current research interest is related to the application of intelligent multi-agent sensory systems in the above mentioned fields.<!--br /--></para>
<para><emphasis role="strong">Alexey Zivenko</emphasis> received the PhD degree in Computer Systems and Components from Petro Mohyla Black Sea State University, Ukraine, in 2013.</para>
<para>He is currently an Associate Professor of the Marine Instrumentation Department at National University of Shipbuilding, Ukraine. His current research interests are polymetric measurements, non-destructive assay of liquids, intelligent measurement systems and robotics, data acquisition systems and data mining. He is the author of more than 20 scientific publications in the above mentioned fields.<!--br /--></para>
<para><emphasis role="strong">Valerii A. Zozulya</emphasis> is an Associate Professor in the Department of Automation of Production Processes at Kirovograd National Technical University, Ukraine. He received a Ph.D. (1999) and the academic rank of Associate Professor (2003) at Kirovograd National Technical University. He is the author of numerous publications in such fields as motion control, robotics and optimal control systems synthesis, analysis and identification.<!--br /--></para>
</chapter>
</book>
