﻿<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="9788770220408.xsl"?>
<book id="home" xmlns:xlink="http://www.w3.org/1999/xlink">
<bookinfo>
<title>The Digital Shopfloor: Industrial Automation in the Industry 4.0 Era &#8211; Performance Analysis and Applications</title>
<affiliation><emphasis role="strong">Editors</emphasis></affiliation>
<authorgroup>
<author><firstname>John</firstname>
<surname>Soldatos</surname>
</author>
</authorgroup>
<affiliation>Athens Information Technology Greece</affiliation>
<authorgroup>
<author><firstname>Oscar</firstname>
<surname>Lazaro</surname>
</author>
</authorgroup>
<affiliation>Innovalia Association Spain</affiliation>
<authorgroup>
<author><firstname>Franco</firstname>
<surname>Cavadini</surname>
</author>
</authorgroup>
<affiliation>Synesis-Consortium Italy</affiliation>
<publisher>
<publishername>River Publishers</publishername>
</publisher>
<isbn>9788770220408</isbn>
</bookinfo>
<preface class="preface" id="preface01">
<title>RIVER PUBLISHERS SERIES IN AUTOMATION, CONTROL AND ROBOTICS</title>
<para><emphasis>Series Editors:</emphasis></para>
<para><emphasis role="strong">ISHWAR K. SETHI</emphasis></para>
<para><emphasis>Oakland University USA</emphasis></para>
<para><emphasis role="strong">TAREK SOBH</emphasis></para>
<para><emphasis>University of Bridgeport USA</emphasis></para>
<para><emphasis role="strong">QUAN MIN ZHU</emphasis></para>
<para><emphasis>University of the West of England UK</emphasis></para>
<para>Indexing: All books published in this series are submitted to the Web of Science Book Citation Index (BkCI), to SCOPUS, to CrossRef and to Google Scholar for evaluation and indexing.</para>
<para>The &#8220;River Publishers Series in Automation, Control and Robotics&#8221; is a series of comprehensive academic and professional books which focus on the theory and applications of automation, control and robotics. The series covers topics ranging from the theory and use of control systems and automation engineering to robotics and intelligent machines.</para>
<para>Books published in the series include research monographs, edited volumes, handbooks and textbooks. The books provide professionals, researchers, educators, and advanced students in the field with an invaluable insight into the latest research and developments.</para>
<para>Topics covered in the series include, but are by no means restricted to, the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Robots and Intelligent Machines</para></listitem>
<listitem><para>Robotics</para></listitem>
<listitem><para>Control Systems</para></listitem>
<listitem><para>Control Theory</para></listitem>
<listitem><para>Automation Engineering</para></listitem>
</itemizedlist>
<para>For a list of other books in this series, visit www.riverpublishers.com</para>
</preface>
<preface class="preface" id="preface02">
<title>Foreword</title>
<para>As the Technical Director of the European Factories of the Future Research Association (EFFRA), it is with great pleasure and satisfaction that I witness the completion of this book on digital automation, cyber-physical production systems and the vision of a fully digital shopfloor. EFFRA is an industry-driven association promoting the development of new and innovative production technologies. It is the official representative of the private side in the &#8216;Factories of the Future&#8217; Public-Private Partnership (PPP) under the Horizon 2020 program of the European Commission. As such, it has also supported the three research projects (FAR-EDGE, AUTOWARE, DAEDALUS) that produced this book and that have formed the Digital Shopfloor Alliance (DSA).</para>
<para>The book provides insights on a variety of digital automation platforms and solutions, based on advanced ICT technologies like cloud/edge computing, distributed ledger technologies and cognitive computing, which will play a key role in supporting automation in the factories of the future. Moreover, solutions based on the promising IEC 61499 standards are described. Overall, the presented results are fully aligned with some of the research priorities that EFFRA has been setting and detailing during the last couple of years. In particular, two years ago, EFFRA launched the ConnectedFactories Coordination Action, with a view to providing more insight into the priorities and steps towards the digital transformation of production systems and facilities. ConnectedFactories has generated a first set of generic pathways to digital manufacturing.</para>
<fig id="F0-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<graphic xlink:href="graphics/frwd_fig001.jpg"/>
</fig>
<para>These pathways reflect our main directions for transforming factories in the Industry 4.0 era, and include:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">The Autonomous Smart Factories pathway</emphasis>, which focuses on optimised and sustainable manufacturing including advanced human-in-the-loop workspaces.</para></listitem>
<listitem><para><emphasis role="strong">The Hyperconnected Factories pathway</emphasis>, which boosts the networking of enterprises towards formulating complex, dynamic supply chains and value networks.</para></listitem>
<listitem><para><emphasis role="strong">The Collaborative Product-Service Factories pathway</emphasis>, which emphasizes data-driven product-service engineering in knowledge intensive factories.</para></listitem>
</itemizedlist>
<para>As part of the ConnectedFactories initiative, we have also illustrated a solid initial set of key cross-cutting factors and enablers that should be addressed in order to progress on the pathways. Likewise, we have also described a rich set of relevant industrial and research cases.</para>
<para>The work reflected in the book is perfectly aligned to our &#8220;Autonomous Smart Factories&#8221; pathway, as the presented technologies and use cases of the DSA are simultaneously boosting significant improvements in production time, quality, sustainability and cost-efficiency. The co-editors have done a good job in presenting the added value of the solutions developed by the three projects. At EFFRA we appreciate seeing results aligned to our research and development roadmaps. In the case of the results presented in this book, we are also happy to see the development of complementary services and community building initiatives, which could provide value to our members. We are happy to support the three projects in their dissemination and community building initiatives.</para>
<para>It is also very positive that this book is offered under an Open Access publication model, which could help it reach a wider readership and will boost its impact.</para>
<para>EFFRA is a growing network of actors that play key roles in national, regional, European and even global initiatives, as a contribution to knowledge exchange and experience sharing. I believe that many of these actors will find the book a very interesting read.</para>
<para>Chris Decubber</para>
<para>Technical Director</para>
<para>European Factories of the Future Research Association</para>
<para>Brussels</para>
<para>April 4th, 2019</para>
</preface>
<preface class="preface" id="preface03">
<title>Preface</title>
<para>In today&#8217;s competitive global environment, manufacturers are offered unprecedented opportunities to build hyper-efficient and highly flexible plants, towards meeting variable market demand, while at the same time supporting new production models such as make-to-order (MTO), configure-to-order (CTO) and engineer-to-order (ETO). In particular, the on-going digitization of industry enables manufacturers to develop, deploy and use scalable and advanced manufacturing systems (e.g., highly configurable production lines), which are suitable to support the above-listed production models and enable mass customization at shorter times and lower costs, without compromising manufacturing quality.</para>
<para>During the last few years, the digital transformation of industrial processes has been propelled by the emergence and rise of the fourth industrial revolution (Industry 4.0). The latter is based on the extensive deployment of Cyber-Physical Production Systems (CPPS) and Industrial Internet of Things (IIoT) technologies in the manufacturing shopfloor, as well as on the seamless and timely exchange of digital information across supply chain participants. CPPS and IIoT technologies enable the virtualization of manufacturing operations, as well as their implementation based on IT (information technology) services rather than based on conventional OT (operational technology).</para>
<para>The benefits of Industry 4.0 have already been proven in the scope of pilot and production deployments in a number of different use cases, including flexibility in automation, predictive maintenance, zero-defect manufacturing and so on. Recently, the digital manufacturing community has produced a wide array of standards for building Industry 4.0 systems, including standards-based Reference Architectures (RAs), such as the Reference Architecture Model Industry 4.0 (RAMI 4.0) and the RA of the Industrial Internet Consortium (IIRA).</para>
<para>Despite early implementations and proof of concepts based on these RAs, CPPS/IIoT deployments are still in their infancy for a number of reasons, including:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Manufacturers&#8217; poor awareness about digital manufacturing solutions and their business value potential</emphasis>, as well as the lack of relevant internal CPPS/IIoT knowledge.</para></listitem>
<listitem><para><emphasis role="strong">The high costs associated with the deployment, maintenance and operation of CPPS systems in the manufacturing shopfloors</emphasis>, which are particularly challenging in the case of SME (small and medium-sized enterprises) manufacturers that lack the equity capital needed to invest in Industry 4.0.</para></listitem>
<listitem><para><emphasis role="strong">The time needed to implement CPPS/IIoT and the lack of a smooth and proven migration path</emphasis> from existing OT solutions.</para></listitem>
<listitem><para><emphasis role="strong">The uncertainty over the business benefits and impacts of IIoT and CPPS technologies</emphasis>, including the lack of proven methods for the techno-economic evaluation of Industry 4.0 systems.</para></listitem>
<listitem><para><emphasis role="strong">Manufacturers&#8217; increased reliance on external integrators, consultants and vendors</emphasis>.</para></listitem>
<listitem><para><emphasis role="strong">The absence of a well-developed value chain needed to sustain the acceptance of these new technologies for digital automation</emphasis>.</para></listitem>
</itemizedlist>
<para>In order to alleviate these challenges, three EC co-funded projects (namely H2020 FAR-EDGE (http://www.far-edge.eu/), H2020 DAEDALUS (http://daedalus.iec61499.eu) and H2020 AUTOWARE (http://www.autoware-eu.org/)) have recently joined forces towards a &#8220;Digital Shopfloor Alliance&#8221;. The Alliance aims at providing leading edge and standards-based digital automation solutions, along with guidelines and blueprints for their effective deployment, validation and evaluation.</para>
<para>The present book provides a comprehensive description of some of the most representative solutions offered by these three projects, along with the ways these solutions can be combined in order to achieve multiplier effects and maximize the benefits of their use. The presented solutions include standards-based digital automation solutions, following different deployment paradigms, such as cloud and edge computing systems. Moreover, they also comprise a rich set of digital simulation solutions, which have been explored in conjunction with the H2020 MAYA project (http://www.maya-euproject.com/). The latter facilitate the testing and evaluation of what-if scenarios at low risk and cost, without disrupting shopfloor operations. As already outlined, beyond leading edge scientific and technological development solutions, the book comprises a rich set of complementary assets that are indispensable to the successful adoption of IIoT/CPPS in the shopfloor. These assets include methods for techno-economic analysis, techniques for migrating from traditional technologies to IIoT/CPPS systems, as well as ecosystems providing training and technical support to prospective deployers.</para>
<para>The book is structured in the following three parts, which deal with three distinct topics and elements of the next generation of digital automation in Industry 4.0:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">The first part of the book is devoted to digital automation platforms</emphasis>. Following an introduction to Industry 4.0 in general and digital automation platforms in particular, this part presents the digital automation platforms of the FAR-EDGE, AUTOWARE and DAEDALUS projects. As part of these platforms, various automation functionalities are presented, including data analytics functionalities. Moreover, the concept of a fully digital shopfloor is introduced.</para></listitem>
<listitem><para><emphasis role="strong">The second part of the book focuses on the presentation of digital simulation and digital twins&#8217; functionalities</emphasis>. These include information about the models that underpin digital twins, as well as the simulators that enable experimentation with production processes over these digital models.</para></listitem>
<listitem><para><emphasis role="strong">The third part of the book provides information about complementary assets and supporting services that boost the adoption of digital automation functionalities in the Industry 4.0 era</emphasis>. Training services, migration services and ecosystem building services are discussed based on the results of the three projects of the Digital Shopfloor Alliance.</para></listitem>
</itemizedlist>
<para>The various topics in all three parts are presented in a tutorial manner, so that readers without deep technical backgrounds can follow them. Nevertheless, a basic understanding of cloud computing, the Internet, sensors and data science concepts facilitates the reading and understanding of the core technical concepts that are presented in the book.</para>
<para>The target audience of the book includes:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Researchers in the areas of digital manufacturing and more specifically in the areas of digital automation and simulation</emphasis>, who wish to be updated about the latest Industry 4.0 developments in these areas.</para></listitem>
<listitem><para><emphasis role="strong">Manufacturers, with an interest in the next generation of digital automation solutions</emphasis> based on cyber-physical systems.</para></listitem>
<listitem><para><emphasis role="strong">Practitioners and providers of Industrial IoT solutions</emphasis>, who are interested in the implementation of use cases in automation, simulation and supply chain management.</para></listitem>
<listitem><para><emphasis role="strong">Managers wishing to understand technologies and solutions that underpin Industry 4.0</emphasis>, along with representative applications in the shopfloor and across the supply chain.</para></listitem>
</itemizedlist>
<para>In general, the book provides insights into automation and simulation platforms towards a digital shopfloor. Moreover, it discusses the elements of a fully digital shopfloor, which is the vision of the DSA for the years to come. We hope that you will find it useful as a tutorial introduction to several digital automation topics and technologies, including cloud computing, edge computing, blockchains, software technologies and the IEC 61499 standard, along with their role in the future of digital automation. The book will be published as an open-access publication, making it broadly and freely available to the Industry 4.0 and Industrial Internet of Things communities. We would like to thank River Publishers for the opportunity and their collaboration in making this happen.</para>
<para>Finally, we take this opportunity to thank all members of our projects for their valuable inputs and contributions to developing the presented systems and platforms, as well as to documenting them in this book. Likewise, we would also like to acknowledge funding and support from the European Commission as part of the H2020 AUTOWARE, DAEDALUS, MAYA and FAR-EDGE contracts.</para>
<para>September 2018,</para>
<para>John Soldatos</para>
<para>Oscar Lazaro</para>
<para>Franco Cavadini</para>
</preface>

<preface class="preface" id="preface04">
<title>List of Contributors</title>
<para><emphasis role="strong">Aikaterini Roukounaki</emphasis>, <emphasis>Kifisias 44 Ave., Marousi, GR15125, Greece; E-mail: arou@ait.gr</emphasis></para>
<para><emphasis role="strong">Aitor Gonzalez</emphasis>, <emphasis>Asociacion de Empresas Tecnologicas Innovalia, Rodriguez Arias, 6, 605, 48008-Bilbao, Spain;<break/>E-mail: aitgonzalez@innovalia.org</emphasis></para>
<para><emphasis role="strong">Alessandro Brusaferri</emphasis>, <emphasis>Consiglio Nazionale delle Ricerche (CNR), Institute of Industrial Technologies and Automation (STIIMA), Research Institute, Via Alfonso Corti 12, 20133 Milano, Italy;<break/>E-mail: alessandro.brusaferri@itia.cnr.it</emphasis></para>
<para><emphasis role="strong">Ambra Cal&#224;</emphasis>, <emphasis>Siemens AG Corporate Technology, Erlangen, Germany;<break/>E-mail: ambra.cala@siemens.com</emphasis></para>
<para><emphasis role="strong">Andrea Barni</emphasis>, <emphasis>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland; E-mail: andrea.barni@supsi.ch</emphasis></para>
<para><emphasis role="strong">Andrea Passarella</emphasis>, <emphasis>Institute of Informatics and Telematics, National Research Council (CNR), Pisa, Italy; E-mail: andrea.passarella@iit.cnr.it</emphasis></para>
<para><emphasis role="strong">Anton Ru&#382;i&#263;</emphasis>, <emphasis>Jo&#382;ef Stefan Institute, Department of Automatics</emphasis>, <emphasis>Biocybernetics, and Robotics, Jamova 39, 1000 Ljubljana, Slovenia; E-mail: ales.ude@ijs.si</emphasis></para>
<para><emphasis role="strong">Arndt L&#252;der</emphasis>, <emphasis>Otto-von-Guericke University Magdeburg, Magdeburg, Germany; E-mail: arndt.lueder@ovgu.de</emphasis></para>
<para><emphasis role="strong">Batzi Uribarri</emphasis>, <emphasis>Software Quality Systems, Avenida Zugazarte 8 1-6, 48930-Getxo, Spain; E-mail: buribarri@sqs.es</emphasis></para>
<para><emphasis role="strong">Bego&#241;a Laibarra</emphasis>, <emphasis>Software Quality Systems, Avenida Zugazarte 8 1-6, 48930-Getxo, Spain; E-mail: blaibarraz@sqs.es</emphasis></para>
<para><emphasis role="strong">Bojan Nemec</emphasis>, <emphasis>Jo&#382;ef Stefan Institute, Department of Automatics</emphasis>, <emphasis>Biocybernetics, and Robotics, Jamova 39, 1000 Ljubljana, Slovenia; E-mail: bojan.nemec@ijs.si</emphasis></para>
<para><emphasis role="strong">Dario Piga</emphasis>, <emphasis>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI) Dalle Molle Institute for Artificial Intelligence (IDSIA) Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland; E-mail: dario.piga@supsi.ch</emphasis></para>
<para><emphasis role="strong">Diego Rovere</emphasis>, <emphasis>TTS srl, Italy; E-mail: rovere@ttsnetwork.com</emphasis></para>
<para><emphasis role="strong">Elias Montini</emphasis>, <emphasis>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland; E-mail: elias.montini@supsi.ch</emphasis></para>
<para><emphasis role="strong">Elisa Negri</emphasis>, <emphasis>Politecnico di Milano, Milan, Italy; E-mail: elisa.negri@polimi.it</emphasis></para>
<para><emphasis role="strong">Filippo Boschi</emphasis>, <emphasis>Politecnico di Milano, Milan, Italy; E-mail: filippo.boschi@polimi.it</emphasis></para>
<para><emphasis role="strong">Franco A. Cavadini</emphasis>, <emphasis>Synesis, SCARL, Via Cavour 2, 22074 Lomazzo, Italy; E-mail: franco.cavadini@synesis-consortium.eu</emphasis></para>
<para><emphasis role="strong">Gernot Kollegger</emphasis>, <emphasis>nxtControl, GmbH, Aum&#252;hlweg 3/B14, A-2544 Leobersdorf, Austria; E-mail: gernot.kollegger@nxtcontrol.com</emphasis></para>
<para><emphasis role="strong">Giacomo Pallucca</emphasis>, <emphasis>Consiglio Nazionale delle Ricerche (CNR), Institute of Industrial Technologies and Automation (STIIMA), Research Institute, Via Alfonso Corti 12, 20133 Milano, Italy;<break/>E-mail: giacomo.pallucca@itia.cnr.it</emphasis></para>
<para><emphasis role="strong">Giovanni dal Maso</emphasis>, <emphasis>TTS srl, Italy; E-mail: dalmaso@ttsnetwork.com</emphasis></para>
<para><emphasis role="strong">Giuseppe Landolfi</emphasis>, <emphasis>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland; E-mail: giuseppe.landolfi@supsi.ch</emphasis></para>
<para><emphasis role="strong">Giuseppe Montalbano</emphasis>, <emphasis>Synesis, SCARL, Via Cavour 2, 22074 Lomazzo, Italy; E-mail: giuseppe.montalbano@synesis-consortium.eu</emphasis></para>
<para><emphasis role="strong">Horst Mayer</emphasis>, <emphasis>nxtControl, GmbH, Aum&#252;hlweg 3/B14, A-2544 Leobersdorf, Austria; E-mail: horst.mayer@nxtcontrol.com</emphasis></para>
<para><emphasis role="strong">Jan Wehrstedt</emphasis>, <emphasis>SIEMENS, Germany; E-mail: janchristoph.wehrstedt@siemens.com</emphasis></para>
<para><emphasis role="strong">Javier Gozalvez</emphasis>, <emphasis>UWICORE Laboratory, Universidad Miguel Hern&#225;ndez de Elche (UMH), Elche, Spain; E-mail: j.gozalvez@umh.es</emphasis></para>
<para><emphasis role="strong">John Kaldis</emphasis>, <emphasis>Athens Information Technology, Greece; E-mail: jkaldis@ait.gr</emphasis></para>
<para><emphasis role="strong">John Soldatos</emphasis>, <emphasis>Kifisias 44 Ave., Marousi, GR15125, Greece; E-mail: jsol@ait.gr</emphasis></para>
<para><emphasis role="strong">J&#252;rgen Elger</emphasis>, <emphasis>Siemens AG Corporate Technology, Erlangen, Germany; E-mail: juergen.elger@siemens.com</emphasis></para>
<para><emphasis role="strong">Lara Gonz&#225;lez</emphasis>, <emphasis>Asociacion de Empresas Tecnologicas Innovalia, Rodriguez Arias, 6, 605, 48008-Bilbao, Spain; E-mail: lgonzalez@innovalia.org</emphasis></para>
<para><emphasis role="strong">Marco Conti</emphasis>, <emphasis>Institute of Informatics and Telematics, National Research Council (CNR), Pisa, Italy; E-mail: marco.conti@iit.cnr.it</emphasis></para>
<para><emphasis role="strong">Marco Macchi</emphasis>, <emphasis>Politecnico di Milano, Milan, Italy; E-mail: marco.macchi@polimi.it</emphasis></para>
<para><emphasis role="strong">Marco Taisch</emphasis>, <emphasis>Politecnico di Milano, Milan, Italy; E-mail: marco.taisch@polimi.it</emphasis></para>
<para><emphasis role="strong">Maria del Carmen Lucas-Esta&#241;</emphasis>, <emphasis>UWICORE Laboratory, Universidad Miguel Hern&#225;ndez de Elche (UMH), Elche, Spain; E-mail: m.lucas@umh.es</emphasis></para>
<para><emphasis role="strong">Marino Alge</emphasis>, <emphasis>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland; E-mail: marino.alge@supsi.ch</emphasis></para>
<para><emphasis role="strong">Martijn Rooker</emphasis>, <emphasis>TTTech Computertechnik AG, Schoenbrunner Strasse 7, A-1040 Vienna, Austria; E-mail: martijn.rooker@tttech.com</emphasis></para>
<para><emphasis role="strong">Marzio Sorlini</emphasis>, <emphasis>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland; E-mail: marzio.sorlini@supsi.ch</emphasis></para>
<para><emphasis role="strong">Mauro Isaja</emphasis>, <emphasis>Engineering Ingegneria Informatica SpA, Italy; E-mail: mauro.isaja@eng.it</emphasis></para>
<para><emphasis role="strong">Michele Ciavotta</emphasis>, <emphasis>Universit&#224; degli Studi di Milano-Bicocca, Italy; E-mail: michele.ciavotta@unimib.it</emphasis></para>
<para><emphasis role="strong">Miguel Sepulcre</emphasis>, <emphasis>UWICORE Laboratory, Universidad Miguel Hern&#225;ndez de Elche (UMH), Elche, Spain; E-mail: msepulcre@umh.es</emphasis></para>
<para><emphasis role="strong">Nikos Kefalakis</emphasis>, <emphasis>Kifisias 44 Ave., Marousi, GR15125, Greece; E-mail: nkef@ait.gr</emphasis></para>
<para><emphasis role="strong">Oscar Lazaro</emphasis>, <emphasis>Asociacion de Empresas Tecnologicas Innovalia, Rodriguez Arias, 6, 605, 48008-Bilbao, Spain; E-mail: olazaro@innovalia.org</emphasis></para>
<para><emphasis role="strong">Paola Maria Fantini</emphasis>, <emphasis>Politecnico di Milano, Milan, Italy; E-mail: paola.fantini@polimi.it</emphasis></para>
<para><emphasis role="strong">Paolo Pedrazzoli</emphasis>, <emphasis>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland; E-mail: paolo.pedrazzoli@supsi.ch; pedrazzoli@ttsnetwork.com</emphasis></para>
<para><emphasis role="strong">Pedro Malo</emphasis>, <emphasis>Unparallel Innovation Lda, Portugal; E-mail: pedro.malo@unparallel.pt</emphasis></para>
<para><emphasis role="strong">Silvia Menato</emphasis>, <emphasis>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland; E-mail: silvia.menato@supsi.ch</emphasis></para>
<para><emphasis role="strong">Theofanis P. Raptis</emphasis>, <emphasis>Institute of Informatics and Telematics, National Research Council (CNR), Pisa, Italy; E-mail: theofanis.raptis@iit.cnr.it</emphasis></para>
<para><emphasis role="strong">Tiago Teixeira</emphasis>, <emphasis>Unparallel Innovation Lda, Portugal; E-mail: tiago.teixeira@unparallel.pt</emphasis></para>
<para><emphasis role="strong">Torben Meyer</emphasis>, <emphasis>VOLKSWAGEN, Germany; E-mail: torben.meyer@volkswagen.de</emphasis></para>
<para><emphasis role="strong">Valeriy Vyatkin</emphasis>, <emphasis>Department of Computer Science, Electrical and Space Engineering, Lule&#229; tekniska universitet, A3314 Lule&#229;, Sweden; E-mail: Valeriy.Vyatkin@ltu.se</emphasis></para>
<para><emphasis role="strong">Veronika Brandstetter</emphasis>, <emphasis>SIEMENS, Germany; E-mail: veronika.brandstetter@siemens.com</emphasis></para>
<para><emphasis role="strong">Volkan Gezer</emphasis>, <emphasis>German Research Center for Artificial Intelligence (DFKI), Germany; E-mail: Volkan.Gezer@dfki.de</emphasis></para>
</preface>

<preface class="preface" id="preface05">
<title>List of Figures</title>
<table-wrap position="float" id="T">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F1-1">Figure 1.1</link></emphasis></td><td valign="top" align="left">Main drivers and use cases of Industry 4.0</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-1">Figure 2.1</link></emphasis></td><td valign="top" align="left">RAMI 4.0, IVRA and IIRA reference models for Industry 4.0</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-2">Figure 2.2</link></emphasis></td><td valign="top" align="left">RAMI 4.0 3D Model</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-3">Figure 2.3</link></emphasis></td><td valign="top" align="left">Smart Service Welt Reference Model &amp; Vision</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-4">Figure 2.4</link></emphasis></td><td valign="top" align="left">Digital manufacturing platform landscape</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-5">Figure 2.5</link></emphasis></td><td valign="top" align="left">Industrial Data Space reference model</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-6">Figure 2.6</link></emphasis></td><td valign="top" align="left">General structure of Reference Architecture Model</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-7">Figure 2.7</link></emphasis></td><td valign="top" align="left">Materializing the IDS Architecture using FIWARE</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-8">Figure 2.8</link></emphasis></td><td valign="top" align="left">Digital shopfloor visions for autonomous modular manufacturing, assembly and collaborative robotics</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-9">Figure 2.9</link></emphasis></td><td valign="top" align="left">AUTOWARE digital automation solution-oriented context</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-10">Figure 2.10</link></emphasis></td><td valign="top" align="left">AUTOWARE framework</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-11">Figure 2.11</link></emphasis></td><td valign="top" align="left">AUTOWARE Reference Architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-12">Figure 2.12</link></emphasis></td><td valign="top" align="left">AUTOWARE harmonized automatic awareness open technologies</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-13">Figure 2.13</link></emphasis></td><td valign="top" align="left">AUTOWARE Software-Defined Autonomous Service Platform</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-14">Figure 2.14</link></emphasis></td><td valign="top" align="left">Context Broker basic workflow &amp; FIWARE Context Broker Architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-15">Figure 2.15</link></emphasis></td><td valign="top" align="left">Embedding of the fog node into the AUTOWARE software-defined platform as part of the cloud/fog computing &amp; persistence service support</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-16">Figure 2.16</link></emphasis></td><td valign="top" align="left">Mapping and coverage of RAMI 4.0 by the AUTOWARE framework</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-17">Figure 2.17</link></emphasis></td><td valign="top" align="left">Z-Bre4k zero break down workflow &amp; strategies</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-18">Figure 2.18</link></emphasis></td><td valign="top" align="left">Z-BRE4K General Architecture Structure</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-19">Figure 2.19</link></emphasis></td><td valign="top" align="left">Z-BRE4K General Architecture Connections</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-20">Figure 2.20</link></emphasis></td><td valign="top" align="left">Z-BRE4K General OS</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F2-21">Figure 2.21</link></emphasis></td><td valign="top" align="left">Use Cases Particular Information Workflow</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F3-1">Figure 3.1</link></emphasis></td><td valign="top" align="left">FAR-EDGE RA overall view</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F3-2">Figure 3.2</link></emphasis></td><td valign="top" align="left">FAR-EDGE RA Functional Domains</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F3-3">Figure 3.3</link></emphasis></td><td valign="top" align="left">FAR-EDGE RA Field Tier</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F3-4">Figure 3.4</link></emphasis></td><td valign="top" align="left">FAR-EDGE RA Gateway Tier</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F3-5">Figure 3.5</link></emphasis></td><td valign="top" align="left">FAR-EDGE RA Ledger Tier</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F3-6">Figure 3.6</link></emphasis></td><td valign="top" align="left">FAR-EDGE Cloud Tier</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F3-7">Figure 3.7</link></emphasis></td><td valign="top" align="left">Mass-customization use case</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F3-8">Figure 3.8</link></emphasis></td><td valign="top" align="left">Reshoring use case</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-1">Figure 4.1</link></emphasis></td><td valign="top" align="left">Classical automation pyramid representation</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-2">Figure 4.2</link></emphasis></td><td valign="top" align="left">Daedalus fully accepts the concept of a vertically integrated automation pyramid introduced by the PATHFINDER road-mapping initiative and further developed in the Horizon 2020 MAYA project</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-3">Figure 4.3</link></emphasis></td><td valign="top" align="left">The industrial &#8220;needs&#8221; for a transition towards a digital manufacturing paradigm</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-4">Figure 4.4</link></emphasis></td><td valign="top" align="left">Commissioner Oettinger&#8217;s agenda for digitalizing manufacturing in Europe</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-5">Figure 4.5</link></emphasis></td><td valign="top" align="left">RAMI 4.0 framework to support vertical and horizontal integration between different functional elements of the factory of the future</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-6">Figure 4.6</link></emphasis></td><td valign="top" align="left">Qualitative representation of the functional model for an automation CPS based on IEC-61499 technologies; concept of &#8220;CPS-izer&#8221;</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-7">Figure 4.7</link></emphasis></td><td valign="top" align="left">The need for local cognitive functionalities arises from the requirements of Big Data processing</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-8">Figure 4.8</link></emphasis></td><td valign="top" align="left">Framing of an IEC-61499 CPS within a complex shopfloor</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-9">Figure 4.9</link></emphasis></td><td valign="top" align="left">Hierarchical aggregation of CPS orchestrated to behave coherently</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-10">Figure 4.10</link></emphasis></td><td valign="top" align="left">Progressive encapsulation of behaviour in common interfaces</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-11">Figure 4.11</link></emphasis></td><td valign="top" align="left">IEC-61499 CPS Development, the IP Value-Add Chain</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-12">Figure 4.12</link></emphasis></td><td valign="top" align="left">Direct and distributed connection between the digital domain and the Daedalus&#8217; CPS</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-13">Figure 4.13</link></emphasis></td><td valign="top" align="left">Qualitative functional model of an automation CPS based on IEC-61499</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-14">Figure 4.14</link></emphasis></td><td valign="top" align="left">IEC 61499 runtime architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F4-15">Figure 4.15</link></emphasis></td><td valign="top" align="left">IEC 61499 CPS-izer</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-1">Figure 5.1</link></emphasis></td><td valign="top" align="left">The AUTOWARE framework</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-2">Figure 5.2</link></emphasis></td><td valign="top" align="left">Communication network and data management system into the AUTOWARE Reference Architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-3">Figure 5.3</link></emphasis></td><td valign="top" align="left">Examples of centralized management architectures</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-4">Figure 5.4</link></emphasis></td><td valign="top" align="left">Examples of hierarchical IWN architectures</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-5">Figure 5.5</link></emphasis></td><td valign="top" align="left">Private 5G Networks architecture for Industrial IoT systems</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-6">Figure 5.6</link></emphasis></td><td valign="top" align="left">Key capabilities of Private 5G Networks for Industrial IoT systems</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-7">Figure 5.7</link></emphasis></td><td valign="top" align="left">Hierarchical and heterogeneous reference architecture to support CPPS connectivity and data management</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-8">Figure 5.8</link></emphasis></td><td valign="top" align="left">Communication and data management functions in different entities of the hierarchical architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-9">Figure 5.9</link></emphasis></td><td valign="top" align="left">LM&#8211;Orchestrator interaction at different tiers of the management architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-10">Figure 5.10</link></emphasis></td><td valign="top" align="left">Virtual cells based on RAN Slicing</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-11">Figure 5.11</link></emphasis></td><td valign="top" align="left">Cloudification of the RAN</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-12">Figure 5.12</link></emphasis></td><td valign="top" align="left">Hybrid communication management: interaction between management entities</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F5-13">Figure 5.13</link></emphasis></td><td valign="top" align="left">Integration of the hierarchical and multi-tier heterogeneous communication and data management architecture into the AUTOWARE Reference Architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-1">Figure 6.1</link></emphasis></td><td valign="top" align="left">DDA Architecture and main components</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-2">Figure 6.2</link></emphasis></td><td valign="top" align="left">Representation of an Analytics Manifest in XML format (XML Schema)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-3">Figure 6.3</link></emphasis></td><td valign="top" align="left">EA-Engine operation example</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-4">Figure 6.4</link></emphasis></td><td valign="top" align="left">EA-Engine configuration example (Sequence Diagram)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-5">Figure 6.5</link></emphasis></td><td valign="top" align="left">EA-Engine initialization example (Sequence Diagram)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-6">Figure 6.6</link></emphasis></td><td valign="top" align="left">EA-Engine runtime operation example (Sequence Diagram)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-7">Figure 6.7</link></emphasis></td><td valign="top" align="left">DL deployment choices (right) and EG deployment detail (left)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-8">Figure 6.8</link></emphasis></td><td valign="top" align="left">Elements of the open-source implementation of the DDA</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F6-9">Figure 6.9</link></emphasis></td><td valign="top" align="left">DDA Visualization and administration dashboard</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-1">Figure 7.1</link></emphasis></td><td valign="top" align="left">Schematic representation of Hybrid Model Predictive Control Toolbox</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-2">Figure 7.2</link></emphasis></td><td valign="top" align="left">Conceptual map of the software used. At the centre is the object-oriented programming language that best supports easy development and management across different applications&#8217; needs</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-3">Figure 7.3</link></emphasis></td><td valign="top" align="left">Subsequence approximation of a non-linear system</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-4">Figure 7.4</link></emphasis></td><td valign="top" align="left">Model Predictive Control scheme</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-5">Figure 7.5</link></emphasis></td><td valign="top" align="left">Receding horizon scheme</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-6">Figure 7.6</link></emphasis></td><td valign="top" align="left">Flow of MPC calculation at each control execution</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-7">Figure 7.7</link></emphasis></td><td valign="top" align="left">Schematic representation of a hybrid system</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-8">Figure 7.8</link></emphasis></td><td valign="top" align="left">Polyhedral partition representation of a hybrid model. It is possible to see 13 partitions that divide the input state space into 13 piecewise sub-systems (using MATLAB 2017b)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-9">Figure 7.9</link></emphasis></td><td valign="top" align="left">Graphic scheme of the links between the different classes of hybrid systems. An arrow from class A to class B indicates that A is a subset of B</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-10">Figure 7.10</link></emphasis></td><td valign="top" align="left">Example of a three-dimensional PWA function <emphasis>y</emphasis> = <emphasis>f</emphasis>(<emphasis>x</emphasis><subscript>1</subscript>, <emphasis>x</emphasis><subscript>2</subscript>)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-11">Figure 7.11</link></emphasis></td><td valign="top" align="left">Valve: an example of basic function block</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-12">Figure 7.12</link></emphasis></td><td valign="top" align="left">Example of execution control chart (ECC)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-13">Figure 7.13</link></emphasis></td><td valign="top" align="left">exec_SPChange algorithm from the valve basic FB</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-14">Figure 7.14</link></emphasis></td><td valign="top" align="left">A composite function block with an encapsulated function block network</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-15">Figure 7.15</link></emphasis></td><td valign="top" align="left">Example of FB_DLL function block</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-16">Figure 7.16</link></emphasis></td><td valign="top" align="left">Illustration of the compact approach based on exploitation of generic DLL FBs</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-17">Figure 7.17</link></emphasis></td><td valign="top" align="left">Illustration of the extended approach based on exploitation of generic DLL FBs</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F7-18">Figure 7.18</link></emphasis></td><td valign="top" align="left">Illustration of the distributed approach based on exploitation of generic DLL FBs</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F8-1">Figure 8.1</link></emphasis></td><td valign="top" align="left">Life-cycle stages to achieve human&#8211;robot symbiosis from design to runtime through dedicated training</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F8-2">Figure 8.2</link></emphasis></td><td valign="top" align="left">Bidirectional exchange of support between humans and robots</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F8-3">Figure 8.3</link></emphasis></td><td valign="top" align="left">Three dimensions of characterization of Symbionts</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F8-4">Figure 8.4</link></emphasis></td><td valign="top" align="left">Qualitative representation of the technological key enabling concepts of the Mutualism Framework</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F9-1">Figure 9.1</link></emphasis></td><td valign="top" align="left">Digital Models and Dynamic Access to Plant Information</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F9-2">Figure 9.2</link></emphasis></td><td valign="top" align="left">Snapshot of the FAR-EDGE Digital Models Structure</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-1">Figure 10.1</link></emphasis></td><td valign="top" align="left">Class diagram of the base classes</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-2">Figure 10.2</link></emphasis></td><td valign="top" align="left">Object diagram of the base model</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-3">Figure 10.3</link></emphasis></td><td valign="top" align="left">Class diagram for assets and behaviours</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-4">Figure 10.4</link></emphasis></td><td valign="top" align="left">Prototype-resource object diagram</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-5">Figure 10.5</link></emphasis></td><td valign="top" align="left">Example of usage of main and secondary hierarchies</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-6">Figure 10.6</link></emphasis></td><td valign="top" align="left">Prototype Model class diagram</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-7">Figure 10.7</link></emphasis></td><td valign="top" align="left">Class diagram of resources section</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-8">Figure 10.8</link></emphasis></td><td valign="top" align="left">Class diagram of devices section</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-9">Figure 10.9</link></emphasis></td><td valign="top" align="left">Class diagram of the Project Model section</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-10">Figure 10.10</link></emphasis></td><td valign="top" align="left">Object diagram of the Project Model</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-11">Figure 10.11</link></emphasis></td><td valign="top" align="left">Schedule and workpiece representation</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-12">Figure 10.12</link></emphasis></td><td valign="top" align="left">Program structure representation</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-13">Figure 10.13</link></emphasis></td><td valign="top" align="left">Machining executable representation</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-14">Figure 10.14</link></emphasis></td><td valign="top" align="left">Assembly-Executable representation</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-15">Figure 10.15</link></emphasis></td><td valign="top" align="left">Disassembly representation</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F10-16">Figure 10.16</link></emphasis></td><td valign="top" align="left">Class diagram for the security section of the <emphasis>Meta Data Model</emphasis></td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F11-1">Figure 11.1</link></emphasis></td><td valign="top" align="left">CSI Component Diagram</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F11-2">Figure 11.2</link></emphasis></td><td valign="top" align="left">Lambda Architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F11-3">Figure 11.3</link></emphasis></td><td valign="top" align="left">CPS connection</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F11-4">Figure 11.4</link></emphasis></td><td valign="top" align="left">Sequence diagram</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F11-5">Figure 11.5</link></emphasis></td><td valign="top" align="left">Outline of the Real-to-digital synchronization</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F12-1">Figure 12.1</link></emphasis></td><td valign="top" align="left">Automation value network general representation</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F12-2">Figure 12.2</link></emphasis></td><td valign="top" align="left">Digital Marketplace Architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F12-3">Figure 12.3</link></emphasis></td><td valign="top" align="left">Digital Marketplace Data Model</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F12-4">Figure 12.4</link></emphasis></td><td valign="top" align="left">High-level definition of marketplace interactions with main Daedalus stakeholders</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F13-1">Figure 13.1</link></emphasis></td><td valign="top" align="left">CMM&#8217;s five maturity levels</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F13-2">Figure 13.2</link></emphasis></td><td valign="top" align="left">Migration path definition</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F13-3">Figure 13.3</link></emphasis></td><td valign="top" align="left">FAR-EDGE Migration Matrix</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-1">Figure 14.1</link></emphasis></td><td valign="top" align="left">DSA manufacturing multisided ecosystem</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-2">Figure 14.2</link></emphasis></td><td valign="top" align="left">DSA ecosystem objectives</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-3">Figure 14.3</link></emphasis></td><td valign="top" align="left">DSA ecosystem strategic services</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-4">Figure 14.4</link></emphasis></td><td valign="top" align="left">DSA-aligned open HW &amp; SW platforms</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-5">Figure 14.5</link></emphasis></td><td valign="top" align="left">AUTOWARE Reference Architecture (layers, communication, &amp; modelling)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-6">Figure 14.6</link></emphasis></td><td valign="top" align="left">AUTOWARE Reference Architecture (SDA-SP)</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-7">Figure 14.7</link></emphasis></td><td valign="top" align="left">AUTOWARE business impact on SMEs</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-8">Figure 14.8</link></emphasis></td><td valign="top" align="left">Smart Service Welt data-centric reference architecture</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-9">Figure 14.9</link></emphasis></td><td valign="top" align="left">Main characteristics of CPPS solutions that are desired by SMEs</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-10">Figure 14.10</link></emphasis></td><td valign="top" align="left">Multi-axis certification solution</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-11">Figure 14.11</link></emphasis></td><td valign="top" align="left">DSA-integrated approach for Digital Automation Solutions Certification</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-12">Figure 14.12</link></emphasis></td><td valign="top" align="left">Digital automation solutions certification workflow</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-13">Figure 14.13</link></emphasis></td><td valign="top" align="left">DSA capability development framework</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-14">Figure 14.14</link></emphasis></td><td valign="top" align="left">DSA service deployment path</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-15">Figure 14.15</link></emphasis></td><td valign="top" align="left">DSA key services</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F14-16">Figure 14.16</link></emphasis></td><td valign="top" align="left">DSA key services</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F15-1">Figure 15.1</link></emphasis></td><td valign="top" align="left">SmartFactory&#8217;s Industrie 4.0 production line</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F15-2">Figure 15.2</link></emphasis></td><td valign="top" align="left">SkaLa production line</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F15-3">Figure 15.3</link></emphasis></td><td valign="top" align="left">Overview of services offered by various IIoT/Industry 4.0 ecosystems and communities</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F15-4">Figure 15.4</link></emphasis></td><td valign="top" align="left">Baseline functionalities of a Multi-sided market platform</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F15-5">Figure 15.5</link></emphasis></td><td valign="top" align="left">Home page of the Edge4Industry portal</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="F15-6">Figure 15.6</link></emphasis></td><td valign="top" align="left">Content structure of the Edge4Industry portal</td></tr>
</tbody>
</table>
</table-wrap>
</preface>

<preface class="preface" id="preface06">
<title>List of Tables</title>
<table-wrap position="float" id="T">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T5-1">Table 5.1</link></emphasis></td><td valign="top" align="left">5G-PPP use case families for manufacturing</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T5-2">Table 5.2</link></emphasis></td><td valign="top" align="left">Performance requirements for three classes of communication in industry established by ETSI</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T5-3">Table 5.3</link></emphasis></td><td valign="top" align="left">Timing requirements for motion control systems</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T5-4">Table 5.4</link></emphasis></td><td valign="top" align="left">Communication requirements for some industrial applications</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T5-5">Table 5.5</link></emphasis></td><td valign="top" align="left">Additional requirements for different application scenarios</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T6-1">Table 6.1</link></emphasis></td><td valign="top" align="left">Requirements and design principles for the FAR-EDGE DDA</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T12-1">Table 12.1</link></emphasis></td><td valign="top" align="left">Mapping of stakeholders on Marketplace ecosystem</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T13-1">Table 13.1</link></emphasis></td><td valign="top" align="left">AS-IS situation of the use case for the automation functional domain</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T13-2">Table 13.2</link></emphasis></td><td valign="top" align="left">MP for the implementation of reconfigurability</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T13-3">Table 13.3</link></emphasis></td><td valign="top" align="left">MP for the implementation of simulation-based optimization</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T14-1">Table 14.1</link></emphasis></td><td valign="top" align="left">AUTOWARE enablers aligned to DSA-RA</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T14-2">Table 14.2</link></emphasis></td><td valign="top" align="left">Data collection template for the certification process</td></tr>
<tr><td valign="top" align="left"><emphasis role="strong"><link linkend="T14-3">Table 14.3</link></emphasis></td><td valign="top" align="left">Identification of DSA players</td></tr>
</tbody>
</table>
</table-wrap>
</preface>

<preface class="preface" id="preface07">
<title>List of Abbreviations</title>
<table-wrap position="float" id="T1">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr><td valign="top" align="left">ARX</td><td valign="top" align="left">AutoRegressive with eXogenous input</td></tr>
<tr><td valign="top" align="left">API</td><td valign="top" align="left">Application Programming Interface</td></tr>
<tr><td valign="top" align="left">BoM</td><td valign="top" align="left">Bill of Materials</td></tr>
<tr><td valign="top" align="left">BoP</td><td valign="top" align="left">Bill of Processes</td></tr>
<tr><td valign="top" align="left">CAD</td><td valign="top" align="left">Computer-Aided Design</td></tr>
<tr><td valign="top" align="left">CMM</td><td valign="top" align="left">Capability Maturity Model</td></tr>
<tr><td valign="top" align="left">CMMI</td><td valign="top" align="left">Capability Maturity Model Integration</td></tr>
<tr><td valign="top" align="left">CPS</td><td valign="top" align="left">Cyber-Physical System</td></tr>
<tr><td valign="top" align="left">CV</td><td valign="top" align="left">Controlled Variable</td></tr>
<tr><td valign="top" align="left">DREAMY</td><td valign="top" align="left">Digital REadiness Assessment MaturitY</td></tr>
<tr><td valign="top" align="left">DV</td><td valign="top" align="left">Disturbance Variable</td></tr>
<tr><td valign="top" align="left">EC</td><td valign="top" align="left">Edge Computing</td></tr>
<tr><td valign="top" align="left">ELC</td><td valign="top" align="left">Extended Linear Complementarity</td></tr>
<tr><td valign="top" align="left">ERP</td><td valign="top" align="left">Enterprise Resource Planning</td></tr>
<tr><td valign="top" align="left">FAR-EDGE</td><td valign="top" align="left">Factory Automation Edge Computing Operating System Reference Implementation</td></tr>
<tr><td valign="top" align="left">FB</td><td valign="top" align="left">Function Block</td></tr>
<tr><td valign="top" align="left">HA</td><td valign="top" align="left">Hybrid Automata</td></tr>
<tr><td valign="top" align="left">HMI</td><td valign="top" align="left">Human&#8211;Machine Interface</td></tr>
<tr><td valign="top" align="left">HMPC</td><td valign="top" align="left">Hybrid Model Predictive Control</td></tr>
<tr><td valign="top" align="left">ICT</td><td valign="top" align="left">Information and Communication Technologies</td></tr>
<tr><td valign="top" align="left">IMC-AESOP</td><td valign="top" align="left">ArchitecturE for Service-Oriented Process - Monitoring and Control</td></tr>
<tr><td valign="top" align="left">IoT</td><td valign="top" align="left">Internet of Things</td></tr>
<tr><td valign="top" align="left">IT</td><td valign="top" align="left">Information Technologies</td></tr>
<tr><td valign="top" align="left">LAN</td><td valign="top" align="left">Local Area Network</td></tr>
<tr><td valign="top" align="left">LC</td><td valign="top" align="left">Linear Complementarity</td></tr>
<tr><td valign="top" align="left">LP</td><td valign="top" align="left">Linear Programming</td></tr>
<tr><td valign="top" align="left">MASHUP</td><td valign="top" align="left">MigrAtion to Service Harmonization compUting Platform</td></tr>
<tr><td valign="top" align="left">MES</td><td valign="top" align="left">Manufacturing Execution System</td></tr>
<tr><td valign="top" align="left">MILP</td><td valign="top" align="left">Mixed-Integer Linear Programming</td></tr>
<tr><td valign="top" align="left">MIMO</td><td valign="top" align="left">Multi-Input Multi-Output</td></tr>
<tr><td valign="top" align="left">MIP</td><td valign="top" align="left">Mixed-Integer Programming</td></tr>
<tr><td valign="top" align="left">MIQP</td><td valign="top" align="left">Mixed-Integer Quadratic Programming</td></tr>
<tr><td valign="top" align="left">MLD</td><td valign="top" align="left">Mixed Logical Dynamical</td></tr>
<tr><td valign="top" align="left">MMPS</td><td valign="top" align="left">Max-Min-Plus-Scaling</td></tr>
<tr><td valign="top" align="left">MOMOCS</td><td valign="top" align="left">MOdel driven MOdernisation of Complex Systems</td></tr>
<tr><td valign="top" align="left">MPC</td><td valign="top" align="left">Model Predictive Control</td></tr>
<tr><td valign="top" align="left">MV</td><td valign="top" align="left">Manipulated Variable</td></tr>
<tr><td valign="top" align="left">OCM</td><td valign="top" align="left">On-line Control Modeller</td></tr>
<tr><td valign="top" align="left">OCS</td><td valign="top" align="left">On-line Control Solver</td></tr>
<tr><td valign="top" align="left">OIS</td><td valign="top" align="left">On-line Identification System</td></tr>
<tr><td valign="top" align="left">OT</td><td valign="top" align="left">Operational Technology</td></tr>
<tr><td valign="top" align="left">OV</td><td valign="top" align="left">Output Variable</td></tr>
<tr><td valign="top" align="left">PERFoRM</td><td valign="top" align="left">Production harmonizEd Reconfiguration of Flexible Robots and Machinery</td></tr>
<tr><td valign="top" align="left">PLC</td><td valign="top" align="left">Programmable Logic Controller</td></tr>
<tr><td valign="top" align="left">PLM</td><td valign="top" align="left">Product Lifecycle Management</td></tr>
<tr><td valign="top" align="left">PWA</td><td valign="top" align="left">Piece Wise Affine</td></tr>
<tr><td valign="top" align="left">QP</td><td valign="top" align="left">Quadratic Programming</td></tr>
<tr><td valign="top" align="left">RHC</td><td valign="top" align="left">Receding Horizon Control</td></tr>
<tr><td valign="top" align="left">SCADA</td><td valign="top" align="left">Supervisory Control And Data Acquisition</td></tr>
<tr><td valign="top" align="left">SDK</td><td valign="top" align="left">Software Development Kit</td></tr>
<tr><td valign="top" align="left">SISO</td><td valign="top" align="left">Single-Input Single-Output</td></tr>
<tr><td valign="top" align="left">SMART</td><td valign="top" align="left">Service-Oriented Migration and Reuse Technique</td></tr>
<tr><td valign="top" align="left">SOA</td><td valign="top" align="left">Service-oriented Architecture</td></tr>
<tr><td valign="top" align="left">SOAMIG</td><td valign="top" align="left">Migration of legacy software into service-oriented architectures</td></tr>
<tr><td valign="top" align="left">WAN</td><td valign="top" align="left">Wide Area Network</td></tr>
<tr><td valign="top" align="left">XIRUP</td><td valign="top" align="left">eXtreme end-User dRiven Process</td></tr>
</tbody>
</table>
</table-wrap>
</preface>

<preface class="preface" id="preface08">
<title>Contents</title>
<table-wrap position="float">
<table cellspacing="5" cellpadding="5" frame="none" rules="none">
<tbody>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch01">1 Introduction to Industry 4.0 and the Digital Shopfloor Vision</link></emphasis><?lb?>John Soldatos</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C1.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top" colspan="2"><emphasis role="strong">PART I</emphasis></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch02">2 Open Automation Framework for Cognitive Manufacturing</link></emphasis><?lb?>Oscar Lazaro, Martijn Rooker, Bego&#241;a Laibarra, Anton Ru&#382;i&#263;, Bojan Nemec and Aitor Gonzalez</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C2.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch03">3 Reference Architecture for Factory Automation using Edge Computing and Blockchain Technologies</link></emphasis><?lb?>Mauro Isaja</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C3.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch04">4 IEC-61499 Distributed Automation for the Next Generation of Manufacturing Systems</link></emphasis><?lb?>Franco A. Cavadini, Giuseppe Montalbano, Gernot Kollegger, Horst Mayer and Valeriy Vyatkin</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C4.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch05">5 Communication and Data Management in Industry 4.0</link></emphasis><?lb?>Maria del Carmen Lucas-Esta&#241;, Theofanis P. Raptis, Miguel Sepulcre, Andrea Passarella, Javier Gozalvez and Marco Conti</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C5.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch06">6 A Framework for Flexible and Programmable Data Analytics in Industrial Environments</link></emphasis><?lb?>Nikos Kefalakis, Aikaterini Roukounaki, John Soldatos and Mauro Isaja</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C6.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch07">7 Model Predictive Control in Discrete Manufacturing Shopfloors</link></emphasis><?lb?>Alessandro Brusaferri, Giacomo Pallucca, Franco A. Cavadini, Giuseppe Montalbano and Dario Piga</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C7.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch08">8 Modular Human&#8211;Robot Applications in the Digital Shopfloor Based on IEC-61499</link></emphasis><?lb?>Franco A. Cavadini and Paolo Pedrazzoli</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C8.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top" colspan="2"><emphasis role="strong">PART II</emphasis></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch09">9 Digital Models for Industrial Automation Platforms</link></emphasis><?lb?>Nikos Kefalakis, Aikaterini Roukounaki and John Soldatos</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C9.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch010">10 Open Semantic Meta-model as a Cornerstone for the Design and</link></emphasis><?lb?>Jan Wehrstedt, Diego Rovere, Paolo Pedrazzoli, Giovanni dal Maso, Torben Meyer, Veronika Brandstetter, Michele Ciavotta, Marco Macchi and Elisa Negri</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C10.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch011">11 A Centralized Support Infrastructure (CSI) to Manage CPS Digital Twin, towards the Synchronization between CPS Deployed on the Shopfloor and Their Digital Representation</link></emphasis><?lb?>Diego Rovere, Paolo Pedrazzoli, Giovanni dal Maso, Marino Alge and Michele Ciavotta</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C11.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top" colspan="2"><emphasis role="strong">PART III</emphasis></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch012">12 Building an Automation Software Ecosystem on the Top of IEC 61499</link></emphasis><?lb?>Andrea Barni, Elias Montini, Giuseppe Landolfi, Marzio Sorlini and Silvia Menato</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C12.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch013">13 Migration Strategies towards the Digital Manufacturing Automation</link></emphasis><?lb?>Ambra Cal&#224;, Filippo Boschi, Paola Maria Fantini, Arndt L&#252;der, Marco Taisch and J&#252;rgen Elger</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C13.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch014">14 Tools and Techniques for Digital Automation Solutions Certification</link></emphasis><?lb?>Batzi Uribarri, Lara Gonz&#225;lez, Bego&#241;a Laibarra and Oscar Lazaro</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C14.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch015">15 Ecosystems for Digital Automation Solutions an Overview and the Edge4Industry Approach</link></emphasis><?lb?>John Soldatos, John Kaldis, Tiago Teixeira, Volkan Gezer and Pedro Malo</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C15.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch016">16 Epilogue</link></emphasis></td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220408C16.pdf">Download As PDF</ulink></td></tr>
</tbody>
</table>
</table-wrap>
</preface>
<chapter class="chapter" id="ch01" label="1" xreflabel="1">
<title>Introduction to Industry 4.0 and the Digital Shopfloor Vision</title>
<para><emphasis role="strong">John Soldatos</emphasis></para>
<para>Kifisias 44 Ave., Marousi, GR15125, Greece. E-mail: jsol@ait.gr</para>
<para>This chapter is an introduction to the fourth industrial revolution (Industry 4.0) in general and digital automation platforms in particular. It illustrates the main drivers behind Industry 4.0 and presents some of the most prominent use cases. Accordingly, it introduces the scope and functionalities of digital automation platforms, along with digital technologies that enable them. The chapter ends by introducing the vision of a fully digital shopfloor, which sets the scene for understanding the platforms and technologies that are presented in subsequent chapters.</para>
<section class="lev1" id="sec1-1">
<title>1.1 Introduction</title>
<para>In the era of globalization, industrial organizations are under continuous pressure to innovate, improve their competitiveness and perform better than their competitors in the global market. Digital technologies are one of their most powerful allies in these efforts, as they can help them increase automation, eliminate error prone processes, enhance their proactivity, streamline their business operations, make their processes knowledge intensive, reduce costs, increase their smartness and overall do more with less. Moreover, the technology acceleration trends provide them with a host of opportunities for innovating in their processes and transforming their operations in a way that results not only in marginal productivity improvements, but rather in a disruptive paradigm shift in their operations. This is the reason why many industrial organizations are heavily investing in the digitization of their processes as part of a wider and strategic digital transformation agenda.</para>
<para>In this landscape, the term Industry 4.0 has been recently introduced. This introduction has signalled the &#8220;official&#8221; start of the fourth industrial revolution, which is based on the deployment and use of Cyber-Physical Systems (CPS) in industrial plants, as a means of fostering the digitization, automation and intelligence of industrial processes [1]. CPS systems facilitate the connection between the physical world of machines, industrial automation devices and Operational Technology (OT), and the world of computers, cloud data centres and Information Technology (IT). In simple terms, Industry 4.0 advocates the seamless connection of machines and physical devices with the IT infrastructure, as a means of completely digitizing industrial processes.</para>
<para>In recent years, Industry 4.0 has been used more widely, beyond CPS systems and physical processes, as a means of signifying the disruptive power of digital transformation in virtually all industries and application domains. For example, terms like Healthcare 4.0 or Finance 4.0 are commonly used as derivatives of Industry 4.0. Nevertheless, the origins of the term lie in the digitization of industrial organizations and their processes, notably in the digitization of factories and industrial plants. Note also that in most countries Industry 4.0 is used to signify the wider ecosystem of business actors, processes and services that underpin the digital transformation of industrial organizations, which also makes it a marketing concept rather than strictly a technological one.</para>
<para>The present book refers to Industry 4.0 based on its original definition i.e. as the fourth industrial revolution in manufacturing and production, aiming to present some tangible digital solutions for manufacturing, but also to develop a vision for the future where plant operations will be fully digitized. However, it also provides insights on the complementary assets that should accompany technological developments towards successful adoption. For example, the book presents concrete examples of such assets, including migration services, training services and ecosystem building efforts. This chapter serves as a preamble to the entire book and has the following objectives:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>To introduce the business motivation and main drivers behind Industry 4.0 in manufacturing. Most of the systems and technologies that are presented in this book are destined to help manufacturers confront such business pressures and to excel in the era of globalization and technology acceleration.</para></listitem>
<listitem><para>To present some of the main Industry 4.0 use cases in areas such as industrial automation, enterprise maintenance and worker safety. These use cases set the scene for understanding the functionalities and use of the platforms that are presented in this book, including use cases that are not explicitly presented as part of the subsequent chapters.</para></listitem>
<listitem><para>To illustrate the main digital technologies that enable the platforms and technologies presented in the book. Note that the book is about the digitization of industrial processes and digital automation platforms, rather than about IT technologies. However, in this first chapter, we provide readers with insights about which digital technologies are enabling Industry 4.0 in manufacturing and how.</para></listitem>
<listitem><para>To review the state of the art in digital automation platforms, including information about legacy efforts for digitizing the shopfloor based on technologies like Service Oriented Architectures (SOA) and intelligent agents. It&#8217;s important to understand how we got to today&#8217;s digital automation platforms and what is nowadays different from what has been done in the past.</para></listitem>
<listitem><para>To introduce the vision of a fully digital shopfloor that is driving the collaboration of research projects that are contributing to this book. The vision involves interconnection of all machines and complete digitization of all processes in order to deliver the highest possible automation with excellent quality at the same time, as part of a cognitive and fully autonomous factory. It may take several years before this vision is realized, but the main building blocks are already set in place and presented as various chapters of the book.</para></listitem>
</itemizedlist>
<para>In line with the above-listed objectives, the chapter is structured as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Section 2 presents the main business drivers behind Industry 4.0 and illustrates some of the most prominent use cases, notably the ones with proven business value;</para></listitem>
<listitem><para>Section 3 discusses the digital technologies that underpin the fourth industrial revolution and outlines their relation to the systems that are presented in later chapters;</para></listitem>
<listitem><para>Section 4 reviews the past and the present of digital automation platforms, while also introducing the vision of a fully digital shopfloor;</para></listitem>
<listitem><para>Section 5 is the final and concluding section of the chapter.</para></listitem>
</itemizedlist>
</section>
<section class="lev1" id="sec1-2">
<title>1.2 Drivers and Main Use Cases</title>
<para>The future of manufacturing is driven by the following trends, which stem from competitive pressures of the globalized environment:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">New production models and mass customization</emphasis>: Manufacturers are increasingly seeking ways of producing more customized products that are tailored to customer needs. As a result, there is a shift from mass production to mass customization. Likewise, conventional Made-to-Stock production models are giving way to more customized ones such as Made-to-Order, Configure-to-Order and Engineering-to-Order.</para></listitem>
<listitem><para><emphasis role="strong">Production Reshoring</emphasis>: Globalization has led to the off-shoring of production operations from the places of innovation to low-labour-cost countries. This was typically the case with several Western countries (including the USA (United States of America) and many EU (European Union) countries), which opted to keep the innovative design processes at home, while outsourcing manufacturing and production operations to Eastern countries (e.g., China, India). In recent years, several organizations have been working to reverse this trend by moving production processes back to the place of innovation, which is commonly called reshoring as opposed to off-shoring. Increased automation is a key enabler of reshoring strategies as it reduces the significance of the labour cost in the overall production process.</para></listitem>
<listitem><para><emphasis role="strong">Proximity Sourcing</emphasis>: Manufacturers are also employing proximity sourcing strategies as an element of their competitiveness. These strategies strive to ensure that sourcing is performed in close proximity to the plant that will use the source materials. This requires intelligent management of information about supply chain and logistics operations, which is also a main driver of Industry 4.0.</para></listitem>
<listitem><para><emphasis role="strong">Human-centred manufacturing</emphasis>: Workers remain the major asset of the production process, yet a shift from laborious tasks to more knowledge-intensive tasks is required. In addition to supporting other trends (such as mass customization and reshoring), this can be a key to improving workers&#8217; engagement, safety and quality of life. The digitalization of industrial processes obviates the need for laborious, error-prone tasks and provides opportunities for improving workers&#8217; knowledge about the production processes. Hence, it&#8217;s a key for placing the worker at the centre of the knowledge-intensive shopfloor and for transitioning to human-centred processes.</para></listitem>
</itemizedlist>
<fig id="F1-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F1-1">Figure <xref linkend="F1-1" remap="1.1"/></link></label>
<caption><para>Main drivers and use cases of Industry 4.0.</para></caption>
<graphic xlink:href="graphics/ch01_fig001.jpg"/>
</fig>
<para>The deployment of CPS systems in the shopfloor enables the seamless collection of digital data about all production processes, which increases the agility of automation operations, enables the acquisition of knowledge about processes and facilitates optimal decision making. At the same time, CPS systems are able to initiate and execute digitally driven operations in the shopfloor. Coupled with the digital technologies that are described in the next section, CPS systems can deliver endless possibilities for automation, optimization and the complete restructuring of industrial processes.</para>
<para>The fourth industrial revolution has a horizon of several decades, over which it will deliver its full potential based on the interconnection of all machines and OT systems, but also on the employment of ever-evolving digital technologies such as Artificial Intelligence (AI), Big Data and the Industrial Internet of Things (IIoT). Nevertheless, during the first years of the Industry 4.0 movement, manufacturers have successfully deployed and validated a first set of use cases, which can directly deliver quick wins and business value. These use cases span the following areas:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Flexibility in Automation Architectures and Configuration:</emphasis> Agility and flexibility in automation are key prerequisites for the transition to the range of future production models that enable mass customization. These models ask for flexibility in the way each individual product is produced, effectively reducing the production lot size to one. In this context, digital technologies can be used to change the configuration of production lines at the digital/IT rather than at the physical/OT layer of production systems, making the configuration of a production line much faster and more flexible. Hence, digitally transformed production lines are able to produce products with different (rather than fixed) configurations.</para></listitem>
<listitem><para><emphasis role="strong">Shift towards Predictive Maintenance:</emphasis> Nowadays, most industrial organizations are employing preventive maintenance in order to avoid the catastrophic consequences of unplanned downtime and unscheduled maintenance. Hence, they replace tools and parts at regular intervals, before their estimated End of Life. Even though preventive maintenance techniques are much more effective than reactive maintenance, they are still far from delivering optimal Overall Equipment Efficiency (OEE), as they tend to perform maintenance earlier than actually required. Digital technologies and Industry 4.0 hold the promise to facilitate a transition to predictive maintenance that will enable the accurate prediction of parameters such as the End-of-Life (EoL) and the Remaining Useful Life (RUL) of machines and their parts, as a means of optimizing OEE, minimizing unscheduled downtimes and scheduling maintenance and repairs at the best point in time. Predictive maintenance is usually based on the collection and analysis of large digital datasets about the condition of the equipment, such as data from vibration, acoustic and ultrasonic sensors, data from thermal images, power consumption data, oil analysis data, as well as quality data from enterprise systems. As such, predictive maintenance is a classical Big Data and Artificial Intelligence problem in industry, which is relevant not only in manufacturing, but also in other industrial sectors such as energy, mining, oil &amp; gas and more.</para></listitem>
<listitem><para><emphasis role="strong">Quality Management Excellence and Zero Defect Manufacturing (ZDM):</emphasis> The advent of CPS systems and Industry 4.0 will enable manufacturers to collect large datasets about their processes, including data about the physical aspects of these processes. Equipment maintenance data is one example of such datasets. Other examples include datasets about the quality of the operations and of the resulting products, supply chain indicators, data about the quality of the source materials, data about the accuracy and consistency of assembly processes and more. By consolidating and analysing these datasets, manufacturers will be in a position to optimize their quality management processes and to meet stringent goals set by quality management standards such as Six Sigma and Total Quality Management (TQM). Early quality management and predictive maintenance deployments that take advantage of CPS systems provide such evidence. Moreover, the expanded digitalization of the shopfloor will in the future enable the proactive identification of defect causes, as well as the activation of related remedial actions on the fly, as a means of achieving the vision of Zero Defect Manufacturing (ZDM). Likewise, digital technologies will facilitate the implementation of continuous improvement disciplines, through the continuous collection of data and the employment of self-learning systems that continually improve themselves based on past data and evidence. Overall, in the Industry 4.0 era, manufacturers will become able to implement more efficient and cost-effective ZDM processes, while lowering the barriers of transition from current approaches to quality management excellence.</para></listitem>
<listitem><para><emphasis role="strong">Digital Simulations and Digital Twins:</emphasis> Industrial processes are generally inflexible, given that it is practically impossible to cancel or undo an action once the latter has taken place in the shopfloor. Therefore, it&#8217;s extremely difficult to test and validate alternative deployment configurations without disrupting production. Digital simulations provide the means of circumventing field testing, by using digital data for what-if analysis in the digital world and without the need to test all scenarios in the field. Industry 4.0 technologies empower much more reliable and faster digital simulations, based on the use of advanced technologies for the collection, consolidation and analytics of very large datasets. Moreover, the Industry 4.0 era will be characterized by the wider use of a new disruptive concept i.e. the concept of a &#8220;digital twin&#8221;. A digital twin is a faithful digital representation of a physical entity, which is built based on the development of a proper digital model for the physical item and the subsequent collection of a host of digital data about the item, in line with the specified model. The design of a digital twin can be very challenging as a result of the need to consolidate the physical properties of an item, its behaviour, aspects of the processes where it is used and business aspects regarding its use in a single model. Digital twins provide plant operators and automation solution providers with the means of running credible simulations in the digital world, prior to deploying new automation ideas and algorithms in the physical world. In several cases, digital twin instances can be connected and fully synchronized with their physical counterparts as a means of configuring systems and processes at the IT rather than the OT layer of Industry 4.0 systems. As already outlined, this can greatly facilitate automation flexibility, as well.</para></listitem>
<listitem><para><emphasis role="strong">Seamless and accurate information flows across the supply chain:</emphasis> For over two decades, enterprises have been heavily investing in the optimization of their supply chain operations, as a core element of their competitiveness. Supply chain management has always been a matter of properly acquiring, exchanging and managing information across the manufacturing chain, based on information sources and touch points of all supply chain stakeholders. Industry 4.0 comes to disrupt this information management, by adding an important element that was typically missing in traditional supply chain management: information about the status of the physical world, such as the status of machines, equipment, processes and devices. Indeed, the advent of CPS systems and Industrial Internet of Things technologies enables the integration of this information across the supply chain. Furthermore, CPS systems and Industry 4.0 provide the means of influencing the status of the physical processes across the supply chain, in addition to changing the status of business information systems [e.g., production schedules in an Enterprise Resource Planning (ERP) system or materials information in a Warehouse Management System (WMS)]. This gives rise to disruptive supply chain innovations, which result in increased automation, fewer errors, increased efficiency and reduced supply chain costs.</para></listitem>
<listitem><para><emphasis role="strong">Worker Training, Safety and Well-Being:</emphasis> Industry 4.0 emphasizes the importance of keeping employees engaged and at the centre of industrial processes, while alleviating them from the burden of laborious, tedious and time-consuming tasks. In this direction, several Industry 4.0 use cases entail the deployment of advanced visualization technologies such as ergonomic dashboards, Virtual Reality (VR) and Augmented Reality (AR) in order to ease the workers&#8217; interaction with the digital shopfloor and its devices. Note that AR and VR are extensively used in order to train employees under safe conditions, i.e., through interaction with cyber representations of the physical equipment and/or with remote guidance from experienced colleagues or other experts. Likewise, wearables and other pervasive devices are extensively deployed in order to facilitate the tracking of employees in the shopfloor, towards ensuring that they work under safe conditions that do not jeopardise their well-being.</para></listitem>
</itemizedlist>
<para>While the presented list of use cases is not exhaustive, it is certainly indicative of the purpose and scope of most digital manufacturing deployments in recent years. Later chapters in this book present practical examples of Industry 4.0 deployments that concern one or more of the above use cases. However, we expect that these use cases will gradually expand in sophistication as part of the digital shopfloor vision, which is illustrated in a following section of this chapter. Moreover, we will see the interconnection and interaction of these use cases as part of a more cognitive, autonomous and automated factory, where automation configuration, supply chain flexibility, predictive maintenance, worker training and safety, as well as digital twins co-exist and complement each other.</para>
</section>
<section class="lev1" id="sec1-3">
<title>1.3 The Digital Technologies Behind Industry 4.0</title>
<para>Industry 4.0 is largely about the introduction of CPS systems in the shopfloor, in order to digitally interconnect the machines and the OT technology with IT systems such as Enterprise Resource Planning (ERP), Computerized Maintenance Management (CMM), Manufacturing Execution Systems (MES), Customer Relationship Management (CRM) and Supply Chain Management (SCM) systems. Based on CPS systems, the entire factory or plant can become a large-scale CPS system that employs Industrial Internet of Things (IIoT) protocols and technologies for data collection, processing and actuation. In practice, an Industry 4.0 deployment takes advantage of multiple digital technologies in order to endow the digital automation systems with intelligence, accuracy and cost-effectiveness. Hence, Industry 4.0 is largely propelled by the rapid evolution of various digital technologies, which enable most of the use cases listed above. For example, predictive maintenance is greatly boosted by Big Data technologies that provide the means for analysing maintenance-related data from a host of batch and streaming data sources. As another example, Industry 4.0 quality management and supply chain management use cases ask for fast exchange of data from and to the shopfloor, including interactions with numerous devices. The latter are propelled by advanced connectivity technologies such as 5G and LPWAN (Low Power Wide Area Networks).</para>
<para>In the following paragraphs, we provide a list of the main digital technologies that empower the Industry 4.0 vision and highlight their importance for the factories of the future.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">CPS and Industrial Internet of Things:</emphasis> As already outlined, CPS systems are considered the main building blocks of Industry 4.0 systems. In the medium term, most machines will be CPS systems that provide the means for collecting digital data from the physical world, as well as interfaces for actuation and control over it. CPS systems are conceptually Industrial Internet of Things (IIoT) systems, which enable interaction and data exchange with physical devices. Note, however, that IIoT systems also provide the means for interconnecting legacy machines with IT systems and ultimately treating them as CPS systems. This is mainly achieved through the augmentation of physical devices with middleware that implements popular IoT protocols, such as MQTT, OPC-UA, WebSocket and more. Overall, CPS and IIoT systems will be at the very core of all Industry 4.0 deployments in the years to come.</para></listitem>
<listitem><para><emphasis role="strong">5G Communications:</emphasis> Industrial plants are characteristic examples of device-saturated environments, since they are likely to comprise thousands of sensors, edge gateways, machines and automation devices. Early Industry 4.0 deployments involve only a small subset of these devices and hence can make do with state-of-the-art connectivity technologies such as Wi-Fi and 4G/LTE (Long Term Evolution). Nevertheless, in the medium and long term, a much larger number of machines and devices should be supported, as they will gradually connect to Industry 4.0 deployments. Likewise, much larger volumes of data and the mobility of smart objects (e.g., drones and autonomous guided vehicles) should be handled, in several cases with high performance and low latency. For these reasons, future deployments will require the capabilities advocated by 5G technologies, which are currently being tested by several telecom operators worldwide. In particular, 5G technologies will enable low-latency data acquisition from thousands of devices at plant scale, while offering spectrum efficiency and ease of deployment.</para></listitem>
<listitem><para><emphasis role="strong">Low Power Wide Area Networks:</emphasis> In recent years, low power wide area network technologies (such as LoRaWAN, NB-IoT and SigFox) have emerged in order to support IoT device connectivity at scale, notably the connectivity of low-power devices. These technologies offer flexible and cost-effective deployment, while at the same time supporting novel applications in both indoor and outdoor environments, including the accurate localization of items in indoor environments. We envisage that such technologies will also be used to provide &#8220;location-as-a-service&#8221; capabilities in industrial plants. Their deployment will enhance rather than replace the connectivity capabilities that are currently provided by 4G and WiFi technologies, notably in the direction of accurate item localization, which existing technologies cannot deliver.</para></listitem>
<listitem><para><emphasis role="strong">Cloud Computing:</emphasis> CPS manufacturing systems and applications are very commonly deployed in the cloud, in order to take advantage of the capacity, scalability and quality of service of cloud computing. Moreover, manufacturers tend to deploy their enterprise systems in the cloud. Likewise, state-of-the-art automation platforms (including some of the platforms that are presented in this book) are cloud based. In the medium term, we will see most manufacturing applications in the cloud, rendering cloud computing infrastructure an indispensable element of Industry 4.0.</para></listitem>
<listitem><para><emphasis role="strong">Edge Computing:</emphasis> During the last couple of years, CPS and IIoT deployments in factories have been implementing the edge computing paradigm. The latter complements the cloud with capabilities for fast (nearly real-time) processing, which is performed close to the field rather than in the cloud [2]. In an edge computing deployment, edge nodes are deployed close to the field in order to support data filtering, local data processing, as well as fast (real-time) actuation and control tasks. The edge computing paradigm is promoted by the major reference architectures for IIoT and Industry 4.0, such as the Industrial Internet Consortium Reference Architecture (IIRA) and the Reference Architecture of the OpenFog Consortium.</para></listitem>
<listitem><para><emphasis role="strong">Big Data:</emphasis> The vast majority of Industry 4.0 use cases are data intensive, as they involve many data flows from multiple heterogeneous data sources, including streaming data sources. In other words, several Industry 4.0 use cases are based on datasets that feature the 4Vs (Volume, Variety, Velocity, Veracity) of Big Data. As mentioned in earlier sections, predictive maintenance is a classic example of a Big Data use case, as it combines multi-sensor data with data from enterprise systems in a single processing pipeline. Therefore, the evolution of Big Data technologies and tools is a key enabler of the fourth industrial revolution. Industry 4.0 is typically empowered by Big Data technologies for data collection, consolidation and storage, given that industrial use cases need to bring together multiple fragmented datasets and to store them in a reliable and cost-effective fashion. However, the business value of these data lies in their analysis, which is indicative of the importance of Big Data analytics techniques, including machine learning techniques.</para></listitem>
<listitem><para><emphasis role="strong">Artificial Intelligence:</emphasis> Even though there is a lot of hype around the use of AI in industry, most manufacturers and plant operators are familiar with this technology. Indeed, AI has been deployed in industrial plants for over two decades, in different forms such as fuzzy logic and expert systems. In the Industry 4.0 era, the term is revised and extended to include the use of deep learning and deep neural networks for advanced data mining. The use of these techniques is directly enabled by the Big Data technologies outlined in the previous paragraph, with which deep learning is typically used in conjunction. AI-based data analytics is more efficient than conventional machine learning in identifying complex patterns, such as operational degradation patterns of machines, patterns of product defect causes, complex failure modes and more. In industrial environments, AI can be embedded in digital automation systems, but also in physical devices such as robots and edge gateways.</para></listitem>
<listitem><para><emphasis role="strong">Augmented Reality:</emphasis> AR is another technology that has been used in plants for several decades. It is also being revisited as a result of the emergence of more accurate tracking technologies and of new cost-effective devices. It can be used in many different ways to disrupt industrial processes. As a prominent example, AR can be used for the remote support of maintenance workers in their service tasks. In particular, with AR the worker no longer needs to consult paper manuals or phone support. Rather, he/she can view on-line the repair or service instructions provided by an expert (e.g., the machine vendor) from a remote location. As another example, AR can be used for training workers on complex tasks (e.g., picking or assembly tasks), by displaying to them cyber-representations of the ways these tasks are performed by experts or more experienced workers.</para></listitem>
<listitem><para><emphasis role="strong">Blockchain Technologies:</emphasis> Blockchain technologies are in their infancy as far as their deployment in industrial settings is concerned. Despite the hype around blockchains, their sole large-scale enterprise application remains their use in cryptocurrencies such as Bitcoin and Ethereum. Nevertheless, some of the projects that are presented in this book are already experimenting with blockchains in industry, while also benchmarking their performance. In particular, the FAR-EDGE project is using blockchain technology for the decentralization and synchronization of industrial processes, notably processes that span multiple stations in the factory. However, other uses of the blockchain are also possible, such as its use for securing datasets based on encryption, as well as its use for traceability in the supply chain. It&#8217;s therefore likely that the blockchain will play a role in future stages of Industry 4.0, yet it has not so far been validated at scale. Note also that in the scope of Industry 4.0 applications, permissioned blockchains can be used (as in FAR-EDGE), instead of public blockchains. Permissioned blockchains provide increased privacy, authentication and authorization of users, as well as better performance than public ones, which makes them more suitable for industrial deployment and use.</para></listitem>
<listitem><para><emphasis role="strong">Cyber Security:</emphasis> Industry 4.0 applications introduce several security challenges, given that they sit at the intersection of IT and OT, which pose conflicting requirements from the security viewpoint. Any Industry 4.0 solution should come with strong security features towards protecting datasets, ensuring the trustworthiness of new devices and protecting the deployment against vulnerabilities of IT assets.</para></listitem>
<listitem><para><emphasis role="strong">3D Printing and Additive Manufacturing:</emphasis> Along with the above-listed IT technologies, CPS manufacturing processes benefit from 3D printing, as an element of the digital automation platforms and processes. 3D printing processes can be driven by the digital data of an Industry 4.0 deployment, such as the digital twin of a piece of equipment or a part that can be printed. Additive manufacturing processes can be integrated in a digital manufacturing deployment in support of the above-listed use cases. For example, 3D printing can be used to accelerate the maintenance and repair process, through printing parts or tools rather than having to order them or keep significant inventory. Likewise, printing processes can be integrated in order to flexibly customize the configuration of a production line and subsequently of the products produced. This can greatly boost mass customization.</para></listitem>
</itemizedlist>
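<para>As a minimal illustration of the Big Data and AI-driven predictive maintenance use case discussed above, the following Python sketch flags anomalous readings in a vibration stream using a rolling z-score. The window size, threshold and sensor values are illustrative assumptions; real deployments would apply far richer models (e.g., deep learning) over multi-sensor data pipelines.</para>

```python
import math
from collections import deque

def rolling_zscore_alerts(readings, window=20, threshold=3.0):
    """Flag indices whose value deviates strongly from the recent window mean."""
    history = deque(maxlen=window)
    alerts = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((v - mean) ** 2 for v in history) / window
            std = math.sqrt(var)
            # A large standardized deviation marks a candidate anomaly.
            if std > 0 and abs(x - mean) / std > threshold:
                alerts.append(i)
        history.append(x)
    return alerts
```

<para>A reading that deviates strongly from recent history (e.g., a sudden vibration spike) is flagged as a candidate degradation event that could trigger a maintenance work order.</para>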
<para>None of the chapters of the book is devoted to the presentation of digital technologies, as the emphasis is on digital automation systems and their functionalities. However, all of the presented systems comprise one or more of the above digital building blocks. Moreover, some of the chapters are devoted to automation solutions that are built around the above-listed technologies, such as edge computing, cloud computing and blockchain technology.</para>
</section>
<section class="lev1" id="sec1-4">
<title>1.4 Digital Automation Platforms and the Vision of the Digital Shopfloor</title>
<section class="lev2" id="sec1-4-1">
<title>1.4.1 Overview of Digital Automation Platforms</title>
<para>The vision of using digital technologies to enhance the flexibility and configurability of industrial automation tasks is not new. For over a decade, manufacturers have been seeking scalable distributed solutions both for manufacturing automation and for collaboration across the manufacturing value chain [3]. Such solutions are driven by future manufacturing requirements, including the reduction of the costs and time needed to adapt to variable market demand, interoperability across heterogeneous hardware and software elements, integration and interoperability across enterprises (in the manufacturing chain), seamless and cost-effective scalability through adding resources without disrupting operations, reusability of devices and production resources, plug-and-play connectivity, as well as better forecasting and predictability of processes and interactions towards meeting real-time demand [4]. These requirements have given rise to distributed approaches for decentralizing and virtualizing the conventional automation pyramid [5].</para>
<para>One of the most prominent approaches has been the application of intelligent agents in industrial automation, in the scope of distributed environments where time-critical response, high robustness, fast local reconfiguration, and solutions to complex problems (e.g., production scheduling) are required [6]. Agent-based approaches generally fall into the following main categories:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Functional decomposition approaches</emphasis>, where agents correspond to functional modules that are assigned to manufacturing or enterprise processes, e.g., order acquisition, planning, scheduling, handling of materials, product distribution and more.</para></listitem>
<listitem><para><emphasis role="strong">Physical decomposition approaches</emphasis>, where agents are used to represent entities in the physical world (e.g., machines, tools, cells, products, parts, features, operations and more). This decomposition also impacts the implementation of manufacturing processes such as production scheduling. For example, in the case of functional decomposition, scheduling can be implemented as a process that merges local schedules maintained by agents in charge of ordering. Likewise, in the case of physical decomposition, scheduling can be implemented based on a negotiation process between agents that represent single resources (e.g., cells, machines, tools, fixtures etc.).</para></listitem>
</itemizedlist>
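<para>The physical decomposition approach above can be sketched as a simple contract-net-style negotiation, in which each resource agent bids its earliest completion time for a job and the lowest bid wins. The agent names, queue lengths and speed factors below are hypothetical; the sketch only illustrates the negotiation pattern, not any specific agent platform.</para>

```python
class MachineAgent:
    """Represents a single physical resource; bids its completion time for a job."""
    def __init__(self, name, queue_hours, speed_factor):
        self.name = name
        self.queue_hours = queue_hours      # work already queued on this resource
        self.speed_factor = speed_factor    # processing-time multiplier

    def bid(self, job_hours):
        # Earliest completion time if this agent were awarded the job.
        return self.queue_hours + job_hours * self.speed_factor

    def award(self, job_hours):
        self.queue_hours += job_hours * self.speed_factor

def negotiate(agents, job_hours):
    """One contract-net-style round: every agent bids, the lowest bid wins the job."""
    winner = min(agents, key=lambda agent: agent.bid(job_hours))
    winner.award(job_hours)
    return winner.name

agents = [MachineAgent("cell-A", 4.0, 1.0), MachineAgent("cell-B", 1.0, 1.5)]
assignments = [negotiate(agents, 2.0) for _ in range(3)]
```

<para>Because each award updates the winner&#8217;s queue, successive jobs spread across resources without any central scheduler, which is precisely the decentralization property that motivates agent-based control.</para>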
<para>Despite the advantages of agent technology for manufacturing operations (e.g., distribution, autonomy, scalability, reliability), agents are considered inefficient when dealing with low-level control tasks that have very stringent performance requirements. Furthermore, a direct mapping between software agents and manufacturing hardware has not been realized and/or standardized [7].</para>
<para>In addition to software agent technology, Service Oriented Architecture (SOA) paradigms for decentralized automation have also emerged with a view to exploiting SOA&#8217;s reusability, autonomy and loose coupling characteristics. SOA approaches to manufacturing automation are based on the identification of operations that can be transformed and exposed as services. Accordingly, these operations are exploited towards implementing service-oriented automation workflows. SOA solutions come with enterprise service bus infrastructures, which decouple producers from consumers, while at the same time facilitating the integration of complex event processing. Furthermore, SOA is a standardized and widely adopted technology, which presents several advantages over software agents, while giving rise to approaches that combine SOA and agents (e.g., [8]). SOA deployments in the shopfloor have also focused on the integration of device-level services with enterprise-level services, including for example deployments that virtualize Programmable Logic Controllers (PLC) [9], along with implementations of execution environments for Functional Block Instances (FBI), including functional blocks compliant with the IEC 61499 standard [10]. Nevertheless, SOA architectures have been unable to solve the real-time limitations of agent technology, which has given rise to various customizations of the technology (e.g., [11]).</para>
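<para>The SOA principle of exposing device-level operations as named services behind a bus can be sketched as follows. The service name and status payload are invented for illustration; an actual deployment would expose such operations over standard web service or OPC-UA interfaces rather than this in-process registry.</para>

```python
class ServiceBus:
    """Minimal service-bus sketch: name-based lookup decouples callers from devices."""
    def __init__(self):
        self.services = {}

    def register(self, name, handler):
        self.services[name] = handler

    def invoke(self, name, **kwargs):
        return self.services[name](**kwargs)

bus = ServiceBus()

# Device-level operation exposed as a named service (e.g., a virtualized PLC read).
bus.register("press.get_status", lambda: {"state": "RUNNING", "cycle_count": 1024})

# An enterprise-level workflow invokes the operation by name, not by device address.
status = bus.invoke("press.get_status")
```

<para>Because the enterprise-level caller addresses the operation only by name, the underlying device implementation can be swapped without changing the workflow, which is the loose coupling that SOA targets.</para>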
<para>The rise of CPS manufacturing, along with the evolution of the digital technologies that were presented in the previous section (e.g., Cloud Computing, IIoT and Big Data technologies), has led to the emergence of several cloud-based industrial automation platforms, including platforms offered by prominent IT and industrial automation vendors (e.g., IBM, SIEMENS, BOSCH, Microsoft, Software AG, SAP) and platforms developed in the scope of EU projects (e.g., FP7 iMain (http://www.imain-project.eu/), ARTEMIS JU (Joint Undertaking) Arrowhead (http://www.arrowhead.eu), FoF (Factories of the Future) SUPREME (https://www.supreme-fof.eu/) and more). Each of these platforms comes with certain unique value propositions, which aim at differentiating it from competitors.</para>
<para>Acknowledging the benefits of edge computing for industrial automation, Standards Development Organizations (SDOs) have specified relevant reference architectures, while industrial organizations are already working towards providing tangible edge computing implementations. SDOs such as the OpenFog Consortium and the Industrial Internet Consortium (IIC) have produced Reference Architectures (RA). The RA of the OpenFog Consortium prescribes a high-level architecture for internet of things systems, which covers industrial IoT use cases. On the other hand, the RA of the IIC outlines the structuring principles of systems for industrial applications. The IIC RA [12] is not limited to edge computing, but is rather based on edge computing principles in terms of its implementation. It addresses a wide range of industrial use cases in multiple sectors, including factory automation. These RAs have been recently released and their reference implementations are still in their early stages.</para>
<para>A reference implementation of the IIC RA&#8217;s edge computing functionalities [13] for factory automation is provided as part of IIC&#8217;s edge intelligence testbed. This testbed provides a proof-of-concept implementation of edge computing functionalities on the shopfloor. The focus of the testbed is on configurable edge computing environments, which enable the development and testing of leading-edge systems and algorithms for edge analytics. Moreover, Dell-EMC has implemented the EdgeX Foundry framework [14], which is a vendor-neutral open source project hosted by the Linux Foundation that builds a common open framework for IIoT edge computing. The framework is influenced by the above-listed reference architectures and was recently released. Other vendors (e.g., Microsoft and Amazon) are also incorporating support for edge devices and Edge Gateways in their cloud platforms.</para>
<para>The platforms and solutions that are presented in following chapters advance the state of the art in digital automation platforms, based on the implementation of advanced intelligence, resilience and security features, but also through the integration of leading-edge technologies (e.g., AI and blockchain technologies). The relevant innovations are detailed in the individual chapters that present these solutions. Note, however, that the FAR-EDGE, AUTOWARE and DAEDALUS solutions that are presented in the book fall within the realm of research solutions. Hence, they implement advanced features, yet they lack the maturity for very large scale digital automation deployments.</para>
</section>
<section class="lev2" id="sec1-4-2">
<title>1.4.2 Outlook Towards a Fully Digital Shopfloor</title>
<para>The digital automation platforms that are listed in the previous paragraphs support the early stage Industry 4.0 deployments, which are characterized by the integration of a limited number of CPS systems and the digitization of selected production processes. As part of the evolution of Industry 4.0 deployments, we will witness a substantial increase in the scope of these deployments in terms of the connected machines and devices, but also in terms of the processes that will be digitized and automated. The ultimate vision is a fully digital shopfloor, where all machines and OT devices will be connected to the IT systems, while acting as CPS systems. This digital shopfloor will support all of the described functionalities and use cases in areas such as automation, simulation, maintenance, quality management, supply chain management and more. Moreover, these functionalities will seamlessly interoperate towards supporting end-to-end processes both inside the factory and across the supply chain. The interaction between these modules will empower more integrated scenarios, where, for example, information collected from the shopfloor is used to perform a digital simulation and produce outcomes that drive a control operation in the field.</para>
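<para>The integrated scenario just described, where field data drives a simulation whose outcome drives a control action, can be illustrated with a deliberately simplified closed loop in Python. The process model, threshold and action names are toy assumptions standing in for a full digital simulation and real actuation interfaces.</para>

```python
def simulate_quality(temperature_c):
    """Toy process model: predicted defect rate rises linearly with temperature."""
    return max(0.0, min(1.0, (temperature_c - 60.0) / 100.0))

def control_step(sensor_temp, max_defect_rate=0.2):
    """Closed loop: read field data, run the model, derive a control action."""
    predicted = simulate_quality(sensor_temp)
    if predicted > max_defect_rate:
        return {"action": "reduce_heater_power", "predicted_defect_rate": predicted}
    return {"action": "hold", "predicted_defect_rate": predicted}
```

<para>In a fully digital shopfloor, each of these three steps would be a separately deployed module (data collection, simulation, control), interoperating over the platforms discussed in the previous section.</para>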
<para>The fully digital shopfloor will enable an autonomous factory, which will be characterized by the following properties:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Holistic, Integrated and End-to-End:</emphasis> The digital shopfloor will deploy digital technologies and capabilities end-to-end, in order to address the digital transformation of all the production processes, rather than only selected processes, as is the situation nowadays.</para></listitem>
<listitem><para><emphasis role="strong">Predictive and Anticipatory:</emphasis> Solutions within the fully digital shopfloor will be able to predict and anticipate important events such as machine failures and occurrence of production defects, as a means of proactively taking action in order to optimize operations.</para></listitem>
<listitem><para><emphasis role="strong">Fast and Real-Time:</emphasis> Solutions in the digital shopfloor will be fast and able to operate in real-time timescales, which will allow them to remedy potential problems and to perform optimizations on-line (e.g., support on-line defect repairs).</para></listitem>
<listitem><para><emphasis role="strong">Flexible and Adaptive:</emphasis> In the digital shopfloor of the future, automation solutions will be dynamic and adaptive to changing production requirements and manufacturing contexts. As such their digital capabilities, including their security characteristics, will be flexible and reconfigurable, in order to support dynamic control of production processes and their quality in the system life-cycle.</para></listitem>
<listitem><para><emphasis role="strong">Standards-Based:</emphasis> The realization of the digital shopfloor could be greatly facilitated based on the integration and use of standards-based solutions, notably solutions that adhere to mainstream digital manufacturing (e.g. RAMI4.0) and Industrial Internet of Things (IIoT) standards (e.g., OpenFog Consortium). Adherence to such standards will greatly facilitate aspects such as integration and interoperability.</para></listitem>
<listitem><para><emphasis role="strong">Open:</emphasis> The solutions of the digital shopfloor should be openly accessible through Open APIs (Application Programming Interfaces), which will facilitate their expansion with more features and functionalities.</para></listitem>
<listitem><para><emphasis role="strong">Cost-Effective:</emphasis> The digital shopfloor will be extremely cost effective in its configuration and operations, based on its flexible, dynamic, reconfigurable and composable nature. In particular, the autonomy of the digital shopfloor solutions will eliminate costs associated with human-mediated error-prone processes, while their composability will lower development and deployment costs.</para></listitem>
<listitem><para><emphasis role="strong">Human-Centric (Human-in-the-Loop):</emphasis> A fully digital shopfloor shall address human factors end-to-end, including product design aspects, employees&#8217; training, proper visualization of production processes, as well as safety of human-in-the-loop processes.</para></listitem>
<listitem><para><emphasis role="strong">Continuous Improvement:</emphasis> The digitally driven production processes will be characterized by a continuous improvement discipline, which will occur at various timescales, including machine, process and end-to-end production levels.</para></listitem>
</itemizedlist>
<para>In the scope of the digital shopfloor, products and production processes can be fully virtualized and managed in the digital world (e.g., through their digital twin counterparts). This implies that digital information about the products and the production processes will be collected and managed end-to-end, towards a continuous improvement discipline.</para>
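<para>A digital twin, in its simplest reading, mirrors the last known state of a physical asset and keeps the history needed for continuous improvement. The following Python sketch shows this idea with a hypothetical asset identifier and invented state fields; real twins additionally embed physics-based or data-driven models of the asset.</para>

```python
class DigitalTwin:
    """Mirrors the last known state of a physical asset and records its history."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}
        self.history = []

    def apply_update(self, update):
        # Merge the incoming field data into the mirrored state and snapshot it.
        self.state.update(update)
        self.history.append(dict(self.state))

    def total_runtime(self):
        return self.state.get("runtime_hours", 0.0)

twin = DigitalTwin("robot-07")
twin.apply_update({"runtime_hours": 120.5, "temperature_c": 41.2})
twin.apply_update({"runtime_hours": 121.0})
```

<para>Queries and simulations then run against the twin rather than the machine itself, which is what allows products and processes to be managed end-to-end in the digital world.</para>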
<para>The vision of the digital shopfloor requires development and integration activities across the following complementary pillars:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Digitally enhanced manufacturing equipment:</emphasis> Industry 4.0 hinges on the interconnection of machines and equipment in the cyber world as CPS systems. Currently, legacy machines are augmented based on internet of things protocols in order to become part of Industry 4.0 deployments. At the same time, new machines come with digital interfaces and act as CPS systems. In the medium and long term, machines will be digitally enhanced in order to provide embedded intelligence functionalities, such as the ability to detect and remedy defects, to identify maintenance parameters, to schedule maintenance activities and more. Such intelligence functionalities will endow machines with flexibility, reconfigurability, adaptability and proactivity properties, which are key enablers of the fully digital shopfloor.</para></listitem>
<listitem><para><emphasis role="strong">Open digital platforms for automation and service engineering:</emphasis> In the digital shopfloor, digitally enhanced machinery must be interconnected in order to support factory-wide processes. To this end, various digital manufacturing platforms shall be integrated based on composable functionalities. This is also important given that factories and manufacturing chains tend to deploy multiple digital automation platforms rather than a single one. Hence, the composition of multiple functionalities from different platforms is required in order to support end-to-end production processes as part of the digital shopfloor.</para></listitem>
<listitem><para><emphasis role="strong">Interoperable digital components and technologies:</emphasis> The digital shopfloor will be able to seamlessly integrate advanced digital and CPS technologies such as sensors, data analytics and AI algorithms. The digital shopfloor will be flexibly and continually upgradable with the best-of-breed of digital technologies for manufacturing as the latter become available. This is a key prerequisite for upgrading the intelligence of the plant floor, with minimum disruption in production operations.</para></listitem>
<listitem><para><emphasis role="strong">Experimentation facilities including pilot lines and testbeds:</emphasis> The transition to a fully digital shopfloor requires heavy and continuous testing efforts, as well as auditing against standards. Extensive testing is therefore required without disrupting existing operations as a means of guaranteeing smooth migration. To this end, there is a need for experimental facilities and pilot lines where digital manufacturing developments can be tested and validated prior to their deployment in production. This is the reason why some of the subsequent chapters of the book refer to existing experimental facilities and testbeds, as key elements of Industry 4.0 and digital manufacturing platforms ecosystems building efforts.</para></listitem>
<listitem><para><emphasis role="strong">Open Innovation Processes</emphasis>: One of the overarching objectives of Industry 4.0 is to enable increased flexibility in digital automation deployments, not only in order to boost new production models (such as mass customization), but also in order to ease innovation in digital automation. To this end, open innovation processes should be established over the interconnected digital platforms, leveraging on IT innovation vehicles such as Application Programming Interfaces (APIs) and the experimental facilities outlined above. The latter could serve as a sandbox for innovation.</para></listitem>
</itemizedlist>
<fig id="F1-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<graphic xlink:href="graphics/ch01_fig002.jpg"/>
</fig>
<para>The road to the fully digital shopfloor is very challenging as a result of the need to develop, establish, validate and combine the above-listed pillars. However, there is already evidence of the benefits of digital technologies in the shopfloor and across the supply chain. Later chapters of this book present this evidence, along with some of the key digital manufacturing platforms that demonstrate these benefits and the related digitally transformed production processes.</para>
</section>
</section>
<section class="lev1" id="sec1-5">
<title>1.5 Conclusion</title>
<para>This chapter has introduced Industry 4.0 in general and digital automation platforms in particular, which are at the core of the book. Our introduction to Industry 4.0 has underlined some of the proven and most prominent use cases that are being implemented as part of early deployments. Special emphasis has been given to use cases associated with flexible automation, worker training and safety, predictive maintenance, quality management, digital simulations and more. Basic knowledge about these use cases is a key prerequisite for understanding the automation use cases and applications that are presented in subsequent chapters of the book.</para>
<para>The chapter has also presented the most widely used digital technologies in the scope of Industry 4.0. Emphasis has been put on illustrating the relevance of each technology to Industry 4.0 use cases, but also on presenting how their evolution will impact the deployment and adoption of CPS manufacturing systems. This discussion of digital technologies is also a prerequisite for understanding the details of the digital solutions that are presented in subsequent chapters. This is particularly important, given that no chapter of the book presents digital technologies in detail. Rather, the emphasis of the book is on presenting advanced manufacturing solutions based on digital automation platforms that leverage the above-listed digital technologies.</para>
<para>Despite early deployments and the emergence of various digital automation platforms, the Industry 4.0 vision is still in the early stages. In the medium- and long-term, different technologies and platforms will be integrated towards a fully digital shopfloor, which supports the digital transformation of industrial processes end-to-end. The vision of a fully digital shopfloor entails the interoperability and interconnection of multiple digitally enhanced machines in-line with the needs of end-to-end automation processes within the factory. As part of this book, we present several automation approaches and functionalities, including field control, data analytics and digital simulations. In the future digital shopfloor, these functionalities will co-exist and seamlessly interoperate in order to enable fully autonomous, intelligent and resource efficient factories. With this wider vision in mind, readers could focus on the more fine-grained descriptions of platforms and technologies presented in subsequent chapters.</para>
</section>
<section class="lev1" id="sec1-6">
<title>References</title>
<para>[1] Alasdair Gilchrist &#8216;Industry 4.0: The Industrial Internet of Things&#8217;, 1st ed. Edition Apress, June 28, 2016, ISBN-10: 1484220463, ISBN-13: 978-1484220467</para>
<para>[2] F. Bonomi, R. Milito, J. Zhu, S. Addepalli, &#8216;Fog computing and its role in the internet of things&#8217;, Proceedings of the first edition of the MCC workshop on Mobile cloud computing, MCC &#8217;12, pp 13&#8211;16, 2012.</para>
<para>[3] F. Jammes and H. Smit, &#8216;Service-Oriented Paradigms in Industrial Automation&#8217;, IEEE Transactions on Industrial Informatics, vol. 1, issue 1, pp. 62&#8211;70, Feb. 2005.</para>
<para>[4] T. Halu&#353;ka, R. Pauli&#269;ek, and P. Va&#382;an, &#8220;SOA as A Possible Way to Heal Manufacturing Industry&#8221;, International Journal of Computer and Communication Engineering, Vol. 1, No. 2, July 2012.</para>
<para>[5] A. W. Colombo (ed.), T. Bangemann (ed.), S. Karnouskos (ed.), J. Delsing (ed.), P. Stluka (ed.), R. Harrison (ed.), F. Jammes (ed.), J. L. Mart&#237;nez Lastra (ed.), &#8216;Industrial Cloud-Based Cyber-Physical Systems: The IMC-AESOP Approach&#8217;, Springer. 245 p. 2014.</para>
<para>[6] C Leit&#227;o, &#8220;Agent-based distributed manufacturing control: A state-of- the-art survey,&#8221; Engineering Applications of Artificial Intelligence, vol. 22, no. 7, pp. 979&#8211;991, Oct. 2009.</para>
<para>[7] P. Vrba, &#8216;Review of Industrial Applications of Multi-agent Technologies&#8217;, Service Orientation in Holonic and Multi Agent Manufacturing and Robotics, Studies in Computational Intelligence Vol. 472, Springer, pp 327&#8211;338, 2013.</para>
<para>[8] Tapia, S. Rodr&#237;guez, J. Bajo, and J. Corchado, &#8216;FUSION@, A SOA-Based Multi-agent Architecture&#8217;, in International Symposium on Distributed Computing and Artificial Intelligence 2008 (DCAI 2008), vol. 50 J. Corchado, S. Rodr&#237;guez, J. Llinas, and J. Molina, Eds. Springer Berlin/Heidelberg, 2009, pp. 99&#8211;107.</para>
<para>[9] F. Basile, P. Chiacchio, and D. Gerbasio, &#8216;On the Implementation of Industrial Automation Systems Based on PLC&#8217;, IEEE Transactions on Automation Science and Engineering, Volume: 10, Issue: 4, pp. 9901003, Oct 2013.</para>
<para>[10] W. Dai, V. Vyatkin, J. Christensen, V. Dubinin, &#8216;Bridging Service- Oriented Architecture and IEC 61499 for Flexibility and Interoperabil-ity&#8217;, Industrial Informatics, IEEE Transactions on, Volume: 11, Issue: 3 pp: 771&#8211;781, DOI: 10.1109/TII.2015. 2423495, 2015.</para>
<para>[11] T. Cucinotta and Coll, &#8220;A Real-Time Service-Oriented Architecture for Industrial Automation,&#8221; Industrial Informatics, IEEE Transactions on, vol. 5, issue 3, pp. 267&#8211;277, Aug. 2009.</para>
<para>[12] Industrial Internet Consortium. &#8216;The Industrial Internet of Things Volume G1: Reference Architecture&#8217;, version 1.8, [Online], Available from: http://www.iiconsortium.org/IIRA.htm 2018.05.30, 2017</para>
<para>[13] Industrial Internet Consortium &#8216;IIC Edge Intelligence Testbed&#8217;, Available from: http://www.iiconsortium.org/edge-intelligence.htm 2018.05.30, 2017.</para>
<para>[14] &#8216;EdgeX Foundry Framework&#8217; [Online], Available from: https:// www.edgexfoundry.org/ 2018.05.30, 2018.</para>
</section>
</chapter>
<part class="part" id="part01" label="I" xreflabel="I" role="PART">
<title/>
<chapter class="chapter" id="ch02" label="2" xreflabel="2">
<title>Open Automation Framework for Cognitive Manufacturing</title>
<para><emphasis role="strong">Oscar Lazaro<superscript>3</superscript>, Martijn Rooker<superscript>1</superscript>, Bego&#241;a Laibarra<superscript>4</superscript>, Anton Ru&#382;i&#263;<superscript>2</superscript>, Bojan Nemec<superscript>2</superscript> and Aitor Gonzalez<superscript>3</superscript></emphasis></para>
<para><superscript>1</superscript> TTTech Computertechnik AG, Schoenbrunner Strasse 7, A-1040 Vienna, Austria</para>
<para><superscript>2</superscript> Jo&#382;ef Stefan Institute, Department of Automatics, Biocybernetics, and Robotics, Jamova 39, 1000 Ljubljana, Slovenia</para>
<para><superscript>3</superscript> Asociacion de Empresas Tecnologicas Innovalia, Rodriguez Arias, 6, 605, 48008-Bilbao, Spain</para>
<para><superscript>4</superscript> Software Quality Systems, Avenida Zugazarte 8 1-6, 48930-Getxo, Spain E-mail: olazaro@innovalia.org; martijn.rooker@tttech.com; blaibarra@sqs.es; ales.ude@ijs.si; bojan.nemec@ijs.si; aitgonzalez@innovalia.org;</para>
<para>The successful introduction of flexible, reconfigurable and self-adaptive manufacturing processes relies on evolving traditional ISA-95 automation solutions to adopt the innovative automation pyramids proposed by CPS vision-building efforts behind projects such as PathFinder, ScorpiuS and RAMI 4.0 IEC 62443/ISA99. These evolved automation pyramids demand approaches for the successful integration of data-intensive cloud and fog-based edge computing and communication into digital manufacturing processes, from the shopfloor to the factory to the cloud. This chapter presents an insight into the business and operational processes and technologies which motivate the development of a digital cognitive automation framework for collaborative robotics and modular manufacturing systems particularly tailored to SME operations and needs, i.e. the AUTOWARE Operative System (OS).</para>
<para>To meet the requirements of both large and small firms, this chapter elaborates on the proposal of a holistic framework for the smart integration of well-established SME-friendly digital frameworks such as the ROS-supported robotic ReconCell framework, the FIWARE-enabled data-driven BEinCPPS/MIDIH Cyber Physical Production frameworks and OpenFog [3] compliant open-control hardware frameworks. The chapter demonstrates how AUTOWARE digital abilities are able to support automatic awareness, a first step in the support of autonomous manufacturing capabilities in the digital shopfloor. This chapter also demonstrates how the framework can be populated with additional digital abilities to support the development of advanced predictive maintenance strategies such as those proposed by the Zbre4k project.</para>
<section class="lev1" id="sec2-1">
<title>2.1 Introduction</title>
<para>SMEs are a pillar of the European economy and a key stakeholder for a successful digital transformation of the European industry. In fact, manufacturing is the second most important sector in terms of small and medium-sized enterprises&#8217; (SMEs) employment and value added in Europe [1]. SMEs constitute over 80% of all manufacturing companies and represent 59% of total employment in this sector.</para>
<para>In an increasingly global competition arena, companies need to respond quickly and economically to market requirements. In terms of market trends, growing product variety and mass customization are leading to demand-driven approaches. Industry in general, and SMEs in particular, face significant challenges in evolving the automation solutions (equipment, instrumentation and manufacturing processes) they must support to respond to demand-driven approaches. Increasing and abrupt changes in market demand, intensified by the manufacturing trends of mass customization and individualization and coupled with pressure to reduce production costs, imply that manufacturing configurations need to change more frequently and dynamically.</para>
<para>Current practice is such that a production system is designed and optimized to execute the exact same process over and over again. Given the growing dynamics and these major driving trends, the planning and control of production systems has become increasingly complex with respect to flexibility and productivity, as well as the <emphasis role="strong">decreasing predictability of processes</emphasis>. It is well accepted that every production system should pursue three main objectives: (1) providing the capability for rapid responsiveness; (2) enhancement of product quality and (3) production at low cost. On the one hand, these requirements have traditionally been satisfied through highly stable and repeatable processes with the support of <emphasis role="strong">traditional automation pyramids</emphasis>. On the other hand, they can be achieved by creating short response times to deviations in the production system, the production process or the configuration of the product, in coherence with overall performance targets. In order to obtain short response times, high process transparency and reliable provisioning of the required information to the point of need, at the correct time and without human intervention, are essential.</para>
<para>However, the success of such adaptive and responsive production systems highly depends on real-time and operation-synchronous information from the production system, the production process and the individual product. Nevertheless, it can be stated that the concept of fully automated production systems is no longer a viable vision, as it has been shown that conventional automation is not able to deal with the ever-rising complexity of modern production systems. In particular, the high reactivity, agility and adaptability required by modern production systems can only be reached by human operators, with their immense cognitive capabilities, which enable them to react to unpredictable situations, to plan their further actions, to learn, to gain experience and to communicate with others. Thus, new concepts are required that apply these cognitive principles to support autonomy in the planning processes and control systems of production systems. Open and smart cyber-physical systems (CPS) are considered to be the next (r)evolution in industrial automation linked to the Industry 4.0 manufacturing transformation, with enormous business potential enabling novel business models for integrated services and products. Today, the trend goes towards open CPS devices, and there is a strong demand for open platforms that act as a computational basis which can be extended during manufacturing operation. <emphasis role="strong">However, the full potential of open CPS has yet to be fully realized in the context of cognitive autonomous production systems</emphasis>.</para>
<para>In fact, for SMEs in particular, it still seems difficult to understand the driving forces and most suitable strategies behind shopfloor digitalization, and how they can increase their competitiveness by making use of the vast variety of individualized products and solutions to digitize their manufacturing processes, making them cognitive, smart and compliant with the Industry 4.0 reference architecture RAMI 4.0 IEC 62443/ISA99. Moreover, as SMEs intend to adopt data-intensive collaborative robotics and modular manufacturing systems to make their advanced manufacturing processes more competitive, they face additional challenges in the implementation of &#8220;cloudified&#8221; automation processes. While the building blocks for digital automation are available, it is up to the SMEs to align, connect and integrate them to meet the needs of their individual advanced manufacturing processes, leading to difficult and costly digital automation platform adoption.</para>
<para>This chapter presents the AUTOWARE architecture, a concerted effort of a group of European companies under the Digital Shopfloor Alliance (DSA) [12] to provide an open consolidated architecture that aligns currently disconnected open architectural approaches with the European reference architecture for Industry 4.0 (RAMI 4.0), to lower the barrier for small, medium- and micro-sized enterprises (SMMEs) in the development and incremental deployment of cognitive digital automation solutions for next-generation autonomous manufacturing processes. This chapter is organized as follows. Section 2.2 presents the background and state of the art on open digital manufacturing platforms, with a particular focus on European initiatives. Section 2.3 introduces the AUTOWARE open OS building blocks and discusses their mapping to RAMI 4.0, the Reference Architecture for Manufacturing Industry 4.0. Then, Section 2.4 exemplifies how the AUTOWARE platform can be tailored and customized to advanced predictive maintenance services. Finally, the chapter concludes with the main features of the AUTOWARE open automation framework.</para>
</section>
<section class="lev1" id="sec2-2">
<title>2.2 State of the Play: Digital Manufacturing Platforms</title>
<para>Industry 4.0 started as a digital transformation initiative with a focus on the digital transformation of European factories towards smart digital production systems through intense vertical and horizontal integration. This resulted in the development by European industry of the RAMI 4.0 reference model, built on the strong foundations of the European automation industry. In response, Asian and American countries have also put effort into defining reference models for the digitization of their manufacturing processes, with stronger influences from the IT and IoT industries. This has resulted in the development of the IVRA (Industrial Value Chain Reference Architecture) by the Industrial Value Chain Initiative (IVI) in Asia and the Industrial Internet Reference Architecture (IIRA) by the US IIC initiative; see <link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link> below. These initiatives clearly showed the need to consider in the digitalization of European industry not only the Smart Production dimension, but also the Smart Product and Smart Supply Chain dimensions.</para>
<para>As a consequence, European industry kicked off complementary efforts to ensure, on the one hand, RAMI 4.0, IVRA and IIRA interoperability, mapping and alignment for the global operation of digital manufacturing processes. On the other hand, this has also triggered the need to extend the RAMI 4.0 model with an additional data-driven and digital smart service dimension beyond factory IT/OT integration, which resulted in initiatives such as the Smart Service Welt and the Industrial Data Space to promote the development of smart data spaces as the basis for trusted industrial data exchange. More recently, this has also led to the need to support increasingly autonomous shopfloor operation in the context of smart data-driven manufacturing processes.</para>
<fig id="F2-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link></label>
<caption><para>RAMI 4.0, IVRA and IIRA reference models for Industry 4.0.</para></caption>
<graphic xlink:href="graphics/ch02_fig001.jpg"/>
</fig>
<para>This section provides a state-of-the-art review of the reference models for factories 4.0, with a focus on RAMI 4.0, and of the digital platform initiatives developed to address the needs of data-driven operations within Industry 4.0. It thereby establishes the basis and context for the development of a framework for digital automation in industrial SMEs aiming to implement cognitive and autonomous manufacturing processes.</para>
<section class="lev2" id="sec2-2-1">
<title>2.2.1 RAMI 4.0 (Reference Architecture Model Industry 4.0)</title>
<para>The RAMI 4.0 (Reference Architecture Model for Industry 4.0 [34]) specification was published in July 2015. It provided a reference architecture initially for the Industrie 4.0 initiative and later for the alignment of European and international activities. RAMI 4.0 groups different aspects in a common model and assures the end-to-end consistency of <emphasis>&#8220;. . . technical, administrative and commercial data created in the ambit of a means of production of the workpiece&#8221;</emphasis> across the entire value stream, as well as their accessibility at all times. Although RAMI 4.0 is essentially focused on the manufacturing process and production facilities, it tries to cover all essential aspects of Industry 4.0. The participants (a field device, a machine, a system or a whole factory) can be logically classified in this model, and relevant Industry 4.0 concepts are described and implemented.</para>
<para>The RAMI 4.0 3D model (see <link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link>) summarizes its objectives and different perspectives and provides relations between individual components. The model adopts the basic ideas of the Smart Grid Architecture Model (SGAM), which was defined by the European Smart Grid Coordination Group (SG-CG) and is accepted worldwide. The SGAM was adapted and modified according to the Industry 4.0 requirements.</para>
<para>The RAMI 4.0 model aims at supporting a common view among different industrial branches like automation, engineering and process engineering. The 3D model combines:</para>
<fig id="F2-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link></label>
<caption><para>RAMI 4.0 3D Model.</para></caption>
<graphic xlink:href="graphics/ch02_fig002.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Hierarchical Levels (Y Axis):</emphasis> this axis collects the hierarchy levels envisaged by the IEC 62264 international standards on the integration of company computing and control systems;</para></listitem>
<listitem><para><emphasis role="strong">Cycle &amp; Value Stream (X Axis):</emphasis> the second axis represents the life cycle of facilities and products. The RAMI 4.0 takes the IEC 62890 standard for life cycle management as a reference point to structure the life cycle. This axis focuses on features able to provide a consistent data model during the whole life cycle of an entity.</para></listitem>
<listitem><para><emphasis role="strong">Layers (Z Axis):</emphasis> the vertical axis, finally, represents the various perspectives from the assets up to the business processes.</para></listitem>
</itemizedlist>
<para>The combination of the elements on these three axes represented a quite innovative approach to the management of product manufacturing, especially the elements on the <emphasis>X</emphasis> axis. Indeed, RAMI 4.0 was, at the time of its proposal, the only reference architecture to explicitly analyze and take into account entities&#8217; life cycles. Later, other models such as IVRA have also adopted this view.</para>
<para>One of the main objectives of RAMI 4.0 is to provide an end-to-end framework (i.e. from the inception of the product&#8217;s idea until its dismantling or recycling) able to connect and consistently correlate all technical, administrative and commercial data so as to create value streams providing added value to the manufacturer.</para>
<para>Many elements are available in RAMI 4.0 (e.g. models, types, instances, production lines, factories, etc.). The model differentiates between objects, i.e. elements that have a life cycle and data associated with them, and the so-called &#8220;active&#8221; elements inside the different layers, which are called Industry 4.0 components (I4.0 components). I4.0 components are also objects, but they have the ability to interact with other elements and can be summarized as follows: (1) they provide data and functions within an information system about a possibly complex object; (2) they expose one or more end-points through which their data and functions can be accessed and (3) they have to follow a common semantic model.</para>
<para>Therefore, the RAMI 4.0 framework goal is to define how I4.0 components communicate and interact with each other and how they can be coordinated to achieve the objectives set by the manufacturing companies.</para>
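<para>The three defining properties of an I4.0 component listed above can be illustrated with a minimal, purely hypothetical sketch (all class, attribute and identifier names below are invented for illustration and are not part of any normative Industry 4.0 specification): the component wraps a possibly complex object, exposes its data and functions through named end-points and declares the semantic model its payloads follow.</para>

```python
from dataclasses import dataclass, field

@dataclass
class I40Component:
    """Illustrative I4.0 component: object + end-points + semantic model."""
    asset_id: str                                     # the (possibly complex) object represented
    semantic_model: str                               # reference to the common semantic model
    properties: dict = field(default_factory=dict)    # data about the object
    endpoints: dict = field(default_factory=dict)     # end-point name -> callable

    def register_endpoint(self, name, func):
        """Expose a function of the component through a named end-point."""
        self.endpoints[name] = func

    def invoke(self, name, *args, **kwargs):
        """Access a function of the component through one of its end-points."""
        return self.endpoints[name](*args, **kwargs)

# Usage: a (hypothetical) milling machine component exposing a status end-point.
machine = I40Component(asset_id="urn:factory:mill-01",
                       semantic_model="https://example.org/models/machine")
machine.properties["spindle_speed_rpm"] = 12000
machine.register_endpoint("status", lambda: {"state": "running"})
print(machine.invoke("status"))
```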
</section>
<section class="lev2" id="sec2-2-2">
<title>2.2.2 Data-driven Digital Manufacturing Platforms for Industry 4.0</title>
<para>The digital convergence of traditional industries is increasingly causing the boundaries between the industrial and service sectors to disappear. In March 2015, Acatech, through the Industry-Science Research Alliance&#8217;s strategic initiative <emphasis>&#8220;Web-based Services for Businesses&#8221;</emphasis>, proposed a layered architecture (see <link linkend="F2-3">Figure <xref linkend="F2-3" remap="2.3"/></link>) to facilitate a shift from product-centric to user-centric business models, which extends the Industry 4.0 perspective.</para>
<para>At a technical level, these new forms of cooperation and collaboration will be enabled by new digital infrastructures. <emphasis role="strong">Smart spaces</emphasis> are the smart environments where smart, Internet-enabled objects, devices and machines (smart products) connect to each other. The term <emphasis role="strong">&#8220;smart products&#8221;</emphasis> refers to actual production machines but also encompasses their virtual representations (CPS digital twins). These products are described as &#8220;smart&#8221; because they know their own manufacturing and usage history and are able to act autonomously. Data generated on networked physical platforms are consolidated and processed on <emphasis role="strong">software-defined platforms</emphasis>. Providers connect to each other via these service platforms to form <emphasis role="strong">digital ecosystems</emphasis>.</para>
<para>Digital industrial platforms integrate the different digital technologies into real-world applications, processes, products and services; while new business models re-shuffle value chains and blur boundaries between products and services [16].</para>
<para>In the last few years, a number of initiatives have been announced by the public and private sectors globally dealing with the development of digital manufacturing platforms and multi-sided ecosystems for Industry 4.0 (see <link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>). Vertical initiatives such as AUTOSAR [29] and ISOBUS [28], for instance, in the smart product dimension, aim at enabling smart products in the automotive and smart agrifood sectors, whereas initiatives such as OPC-UA [31] largely address universal access to manufacturing equipment. Similarly, more horizontal open (source) platform initiatives dealing with embedded systems (S3P [27]) or local automation clouds (Arrowhead [26], Productive 4.0 [32]) deal with networked physical product control across vertical industries, e.g. transport, manufacturing, health, energy and agrifood.</para>
<fig id="F2-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-3">Figure <xref linkend="F2-3" remap="2.3"/></link></label>
<caption><para>Smart Service Welt Reference Model &amp; Vision.</para></caption>
<graphic xlink:href="graphics/ch02_fig003.jpg"/>
</fig>
<fig id="F2-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link></label>
<caption><para>Digital manufacturing platform landscape. Adapted from [14] and [15].</para></caption>
<graphic xlink:href="graphics/ch02_fig004.jpg"/>
</fig>
<para>However, the largest investment of industry so far has focused on the development of software-defined platforms to leverage smart spaces for smart data; either for vertical industries or for more horizontal approaches. Initiatives such as FIWARE for Smart Industry [22], MIDIH [21] or Boost 4.0 [24] are working to pave the way for the implementation of data-driven smart connected factories. On the other hand, more cross-domain initiatives for smart Internet services (FIWARE [23]), data-sharing sovereignty (International Data Spaces [25]) or Industrial IoT (IIC [30]) are both providing critical general software foundations for the development of vertical solutions such as those mentioned before (FIWARE Smart Industry, Boost 4.0 or MIDIH) and ensuring that interoperability across domains is properly developed as part of the digital transformation supporting the breakup of inter-domain information silos.</para>
<para>Along this line, it is also worth noting the recent efforts of large industrial software companies to provide commercial solutions with open APIs in response to the challenge of leveraging digital infrastructures and smart data platforms to support the next generation of digital services. Very relevant initiatives in this area include MindSphere by Siemens [17], Leonardo by SAP [18], the Bosch IoT Suite [19] and 3DEXPERIENCE [20] by Dassault Syst&#232;mes.</para>
</section>
<section class="lev2" id="sec2-2-3">
<title>2.2.3 International Data Spaces</title>
<para>The <emphasis role="strong">Industrial Data Space initiative</emphasis> is driven forward by Fraunhofer together with over 90 key industrial players such as ATOS, Bayer, Boehringer Ingelheim, KOMSA, PricewaterhouseCoopers, REWE, SICK, Thyssen-Krupp, T&#220;V Nord, Volkswagen, ZVEI, SAP, BOSCH, Audi, Deutsche Telekom, Huawei, Rittal and a network of European multipliers (INNOVALIA, TNO, VTT, SINTEF, POLIMI, etc.). <emphasis role="strong">Digital sovereignty over industrial data and trust in data sharing</emphasis> are key issues in the Industrial Data Space. Data will be shared between certified partners only when it is truly required by the user of that data for a value-added service. The basic principles that form the framework for the technological concept of the Industrial Data Space can be summarized as follows: (1) securely sharing data along the entire data supply chain, and easily combining own data with publicly available data (such as weather and traffic information, geodata, etc.) and semi-public data, such as from a specific value chain; (2) sovereignty over data, that is, control over who has what rights in which context, which is just as important as legal certainty and is to be ensured by certifying participants, data sources and data services. The reference architecture model should be seen as a blueprint for secure data exchange and efficient data combination. <link linkend="F2-5">Figure <xref linkend="F2-5" remap="2.5"/></link> illustrates the technical architecture of the Industrial Data Space.</para>
<fig id="F2-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-5">Figure <xref linkend="F2-5" remap="2.5"/></link></label>
<caption><para>Industrial Data Space reference model.</para></caption>
<graphic xlink:href="graphics/ch02_fig005.jpg"/>
</fig>
<para>The Industrial Data Space fosters secure data exchange among its participants, while at the same time ensuring data sovereignty for the participating data owners. The Industrial Data Space Association defines the framework and governance principles for the Reference Architecture Model, as well as interfaces, aiming at establishing an international standard that considers the following user requirements: (1) data sovereignty; (2) data usage control; (3) decentralized approach; (4) multiple implementations; (5) standardized interfaces; (6) certification; (7) data economy and (8) secure data supply chains.</para>
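<para>The notions of data sovereignty and usage control, i.e. control over who may use which data, for which purpose and in which context, can be sketched as a simple policy check. The sketch below is purely illustrative (policy fields and names are invented); actual IDS usage control is considerably richer and enforced by certified connectors.</para>

```python
from datetime import datetime

def may_use(policy, consumer, purpose, now=None):
    """Illustrative usage-control check: the data owner's policy states who
    may use the data, for what purpose and until when; the connector
    evaluates it before releasing the data."""
    now = now or datetime.utcnow()
    return (consumer in policy["allowed_consumers"]
            and purpose in policy["allowed_purposes"]
            and now < policy["expires"])

# A hypothetical policy attached to a dataset by its owner.
policy = {
    "allowed_consumers": {"certified-partner-A"},
    "allowed_purposes": {"predictive-maintenance"},
    "expires": datetime(2030, 1, 1),
}

print(may_use(policy, "certified-partner-A", "predictive-maintenance"))  # True
print(may_use(policy, "unknown-party", "marketing"))                     # False
```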
<para>In compliance with common system architecture models and standards (such as ISO 42010, 4+1 view model, etc.), the Reference Architecture Model uses a five-layer structure expressing stakeholder concerns and viewpoints at different levels of granularity (see <link linkend="F2-6">Figure <xref linkend="F2-6" remap="2.6"/></link>).</para>
<para>The IDS reference architecture consists of the following layers:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The <emphasis role="strong">business layer</emphasis> specifies and categorizes the different stakeholders (namely the roles) of the Industrial Data Space, including their activities and the interactions among them.</para></listitem>
</itemizedlist>
<fig id="F2-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-6">Figure <xref linkend="F2-6" remap="2.6"/></link></label>
<caption><para>General structure of Reference Architecture Model [36].</para></caption>
<graphic xlink:href="graphics/ch02_fig006.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The <emphasis role="strong">functional layer</emphasis> comprises the functional requirements of the Industrial Data Space and the concrete features derived from them (in terms of abstract, technology-agnostic functionalities of logical software components).</para></listitem>
<listitem><para>The <emphasis role="strong">process layer</emphasis> provides a dynamic view of the architecture; using the BPMN notation, it describes the interactions among the different components of the Industrial Data Space.</para></listitem>
<listitem><para>The <emphasis role="strong">information layer</emphasis> defines a conceptual model, which makes use of &#8220;linked data&#8221; principles for describing both the static and dynamic aspects of the Industrial Data Space&#8217;s constituents (e.g. active participants, deployed Data Endpoints, advertised Data Apps or exchanged datasets).</para></listitem>
<listitem><para>The <emphasis role="strong">system layer</emphasis> is concerned with the decomposition of the logical software components, considering aspects such as integration, configuration, deployment and extensibility of these components.</para></listitem>
</itemizedlist>
<para>In addition, the Reference Architecture Model contains three cross-sectional perspectives:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Security:</emphasis> It provides means to identify participants, protect data communication and control the usage of data.</para></listitem>
<listitem><para><emphasis role="strong">Certification:</emphasis> It defines the processes, roles, objects and criteria involved in the certification of hardware and software artifacts as well as organizations in IDS.</para></listitem>
<listitem><para><emphasis role="strong">Governance:</emphasis> It defines the roles, functions and processes from a governance and compliance point of view, defining the requirements to be met by an innovative data ecosystem to achieve corporate interoperability.</para></listitem>
</itemizedlist>
<para><emphasis role="strong"><emphasis>System layer: technical components</emphasis></emphasis></para>
<para>The most interesting layer for the IDS framework is the system layer, where the roles defined in the other layers (the business and functional layers) are mapped onto a concrete data and service architecture in order to meet the requirements, resulting in the technical core of the IDS. From the requirements identified, three major technical components can be derived:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Connector</para></listitem>
<listitem><para>Broker</para></listitem>
<listitem><para>App Store</para></listitem>
</itemizedlist>
<para>These are supported by four additional components, which are not specific to the IDS:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Identity provider</para></listitem>
<listitem><para>Vocabulary hub</para></listitem>
<listitem><para>Update repository (source for updates of deployed connectors)</para></listitem>
<listitem><para>Trust repository (source for trustworthy software stacks and fingerprints as well as remote attestation checks).</para></listitem>
</itemizedlist>
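<para>The interplay between the Connector and the Broker can be sketched in a few lines: Connectors register metadata about the data they offer with the Broker, and other Connectors query the Broker to discover suitable data sources. All identifiers and metadata fields below are invented for illustration; a real IDS Broker works with standardized self-descriptions rather than plain dictionaries.</para>

```python
class Broker:
    """Illustrative sketch of the Broker's role in the IDS system layer."""

    def __init__(self):
        self._registry = {}  # connector_id -> metadata about offered data

    def register(self, connector_id, metadata):
        """A Connector registers (or updates) the metadata of its data offer."""
        self._registry[connector_id] = metadata

    def query(self, **criteria):
        """Find connectors whose metadata matches all given criteria."""
        return [cid for cid, meta in self._registry.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

# Usage: two hypothetical connectors advertise their offers; a consumer queries.
broker = Broker()
broker.register("connector-acme", {"domain": "quality", "format": "csv"})
broker.register("connector-beta", {"domain": "logistics", "format": "json"})
print(broker.query(domain="quality"))   # ['connector-acme']
```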
<para><emphasis role="strong"><emphasis>IDS open source implementation using FIWARE</emphasis></emphasis></para>
<para>The most interesting aspect of the IDS business reference architecture is the opportunity to support multiple implementations and to combine it with open source enablers. It is a common goal that a valid open source implementation of the IDS architecture can be based on FIWARE software components, while remaining compatible with FIWARE architecture principles.</para>
<para>The FIWARE Foundation is working towards making sure that core FIWARE Generic Enablers can be integrated together to build a valid open source implementation of the IDS architecture. Both organizations are collaborating on the development of domain data models and communicating about the development of their respective specifications and architectures to keep them compatible.</para>
<para>The way FIWARE software components can be combined to support the implementation of the main IDS architecture components is shown in <link linkend="F2-7">Figure <xref linkend="F2-7" remap="2.7"/></link>. FIWARE technology offers the following features to support IDS implementation:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Docker-based tools relying on Docker Hub Services enabling automated deployment and configuration of Data Apps.</para></listitem>

<listitem><para>Standard vocabularies are being proposed at https://www.fiware.org/data-models</para></listitem>
</orderedlist>
<fig id="F2-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-7">Figure <xref linkend="F2-7" remap="2.7"/></link></label>
<caption><para>Materializing the IDS Architecture using FIWARE.</para></caption>
<graphic xlink:href="graphics/ch02_fig007.jpg"/>
</fig>
<orderedlist numeration="arabic" continuation="3" spacing="normal">
<listitem><para>Data Apps map to NGSI adapters or Apps processing context information.</para></listitem>

<listitem><para>Both External and Internal IDS Connectors are implemented using FIWARE Context Broker components.</para></listitem>

<listitem><para>Extended CKAN Data Publication Platform.</para></listitem>

<listitem><para>FIWARE Context Broker components will be used as the core component of IDS Connectors.</para></listitem>

<listitem><para>Interface between IDS connectors based on FIWARE NGSI.</para></listitem>
</orderedlist>
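<para>As an illustration of the NGSI-based interface between IDS connectors listed above, the following sketch constructs the JSON payloads that an NGSI v2 Context Broker such as Orion exchanges. This is a minimal sketch, not part of the IDS or FIWARE specifications: the entity, attribute and endpoint names are hypothetical, and in a real deployment the payloads would be POSTed to the broker&#8217;s /v2/entities and /v2/subscriptions endpoints.</para>

```python
import json

# Hypothetical broker endpoint; a real deployment would POST these payloads
# to http://<broker-host>:1026/v2/entities and /v2/subscriptions.

def ngsi_attr(value):
    """Wrap a raw value in an NGSI v2 attribute (value + type)."""
    ngsi_type = "Number" if isinstance(value, (int, float)) else "Text"
    return {"value": value, "type": ngsi_type}

def build_entity(entity_id, entity_type, attrs):
    """NGSI v2 entity payload, as a Data App would publish context information."""
    entity = {"id": entity_id, "type": entity_type}
    entity.update({name: ngsi_attr(value) for name, value in attrs.items()})
    return entity

def build_subscription(entity_type, watched_attrs, notify_url):
    """NGSI v2 subscription: notify a consumer when watched attributes change."""
    return {
        "description": "Notify on %s changes" % entity_type,
        "subject": {
            "entities": [{"idPattern": ".*", "type": entity_type}],
            "condition": {"attrs": watched_attrs},
        },
        "notification": {"http": {"url": notify_url}, "attrs": watched_attrs},
    }

# Hypothetical shopfloor example: a machine entity and a consumer subscription.
machine = build_entity("urn:ngsi-ld:Machine:001", "Machine",
                       {"temperature": 21.5, "status": "running"})
subscription = build_subscription("Machine", ["temperature"],
                                  "http://dataapp.example/notify")
print(json.dumps(machine, indent=2))
```

Because producers and consumers only share these NGSI payloads through the broker, neither side needs to know the other's implementation, which is what makes the Context Broker usable as the interface between IDS connectors.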
</section>
</section>
<section class="lev1" id="sec2-3">
<title>2.3 Autoware Framework for Digital Shopfloor Automation</title>
<section class="lev2" id="sec2-3-1">
<title>2.3.1 Digital Shopfloor Evolution: Trends &amp; Challenges</title>
<para>The previous section presented the main digital platform and reference architecture work currently in place to deal with data-driven digital transformation in manufacturing. The industrial digitalization supported by Industry 4.0, and its vision of the intelligent networked factory of the future, is a major talking point as technologies such as cloud computing revolutionize industrial processes. With embedded systems, components and machines can now talk to one another and self-optimize, self-configure and self-diagnose processes and their current state, providing intelligent support for workers in their increasingly complex decision-making. Today&#8217;s centrally organized enterprise is turning into a decentralized, dynamically controlled factory whose production is defined by individuality, flexibility and rapidity. As a consequence, as shown in <link linkend="F2-8">Figure <xref linkend="F2-8" remap="2.8"/></link> below, the digital shopfloor vision is evolving away from rigid production lines towards more flexible plug &amp; produce modular assembly islands, with the ambition of real-time actuation and adaptation <emphasis role="strong">(cognition and autonomy)</emphasis> of production and the aim of reaching zero-defect manufacturing. Equally, manufacturing processes are increasingly collaborative among humans, robots and autonomous mobile systems that come together as needed for mission-oriented tasks.</para>
<para>This new scenario means that SMEs face difficulties at various levels in making the strategic decisions involved in building a digital shopfloor, i.e. which evolution model to adopt, which automation technology to select, the cost and time of deployment and operation, and the associated return on investment that will boost their business strategies (quality, efficiency, cost, flexibility, sustainability, innovation).</para>
<para>Since the 1980s, the IT structure of factories has been ordered hierarchically from the field level to the level of factory control. Cloud and edge technologies now make it possible to disengage these hierarchies and link up individual components &#8211; from computer numerical control (CNC) and robot control (RC) to manufacturing execution systems (MES) and enterprise resource planning (ERP) &#8211; in flexible networks. The core of this new approach is the virtualization of systems, in which software functionality (digital abilities) is decoupled from the specific computer hardware (embedded, edge, cloud, HPC) where it runs. In other words, software that used to depend on a specific computer or control platform is now separated from it via virtual machines and transferred to the cloud or the edge based on decision/actuation time scales. In a multitude of ways, the <emphasis role="strong">transfer of control functions to the cloud</emphasis> opens up a whole new dimension of flexibility. First of all, the cloud-edge mechanism of <emphasis>&#8220;rapid elasticity&#8221;</emphasis> enables the flexible and mostly automatic distribution of computing capacity. This means that the computing power of a whole group of processor cores in a &#8220;private cloud&#8221; can be allocated in a few seconds &#8211; for instance, between the CPU-intensive processes of the five-axis interpolation of a milling machine and the complex axis control of cooperating robots. Consequently, available computing power can be used much more efficiently than was possible with the older, purely decentralized control systems for individual machines and robots. At the same time, further gains in flexibility arise when &#8211; given adequate computing power &#8211; any number of virtual machine controls (VMC) or virtual robot controls (VRC) can be generated. <emphasis role="strong">Cloud-based control</emphasis> also opens the way to upgrading or retrofitting high-quality machines and equipment whose control systems are outdated. The main challenge here is meeting the stringent real-time requirements set by state-of-the-art machine and robot control systems.</para>
<fig id="F2-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-8">Figure <xref linkend="F2-8" remap="2.8"/></link></label>
<caption><para>Digital shopfloor visions for autonomous modular manufacturing, assembly and collaborative robotics.</para></caption>
<graphic xlink:href="graphics/ch02_fig008.jpg"/>
</fig>
<para>AUTOWARE [3], a European project under the European Commission&#8217;s initiative for digitizing European industry, supports the deployment of such autonomous digital shopfloor solutions based on the following three pillars:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Pillar 1: Harmonized open hardware and software digital automation reference architecture</emphasis>. From a data-driven perspective on cyber-physical production systems (smart products), this pillar leverages a reference architecture spanning the open ICT technologies of the manufacturing SME (I4MS, www.i4ms.eu) digital transformation competence domains (cloud, edge/OpenFog, BEinCPPS/MIDIH, robotics/ROS-ReconCell). To keep integration time and costs under control, the AUTOWARE framework acts as a glue between manufacturing users and digital automation solution developers in an ecosystem that is friendly for business development and for more efficient service development over harmonized architectures (smart machine, cloudified control, cognitive planning, app-ized operation).</para></listitem>
<listitem><para><emphasis role="strong">Pillar 2: Availability of digital ability technology enablers for digital shopfloor automatic awareness and cloud/edge-based control support</emphasis>. This pillar leverages a number of SME <emphasis>digital abilities</emphasis>, e.g. augmented virtuality, reliable wireless communications, smart data distribution and cognitive planning, to ease the development of automatic awareness capabilities in autonomous manufacturing systems. To ensure digital shopfloor extendibility, the AUTOWARE framework envisions the development of <emphasis>usability services</emphasis> (Cyber-Physical Production Systems (CPPS) trusted auto-configuration, programming by demonstration) as well as associated standard-compliant <emphasis>validation &amp; verification services</emphasis> for digital shopfloor solutions.</para></listitem>
<listitem><para><emphasis role="strong">Pillar 3: Digital automation business value model to maximize Industry 4.0 return on investment</emphasis>. This pillar leverages digital automation investments through a shared SME cognitive manufacturing migration model and an investment assessment platform for incremental brownfield deployment of cognitive autonomous solutions.</para></listitem>
</itemizedlist>
<para>As opposed to other manufacturing environments, digital automation faces an increased challenge in terms of the large diversity of technologies involved. This implies that access to digital technologies or digital services alone is not enough for industry in general, and SMEs in particular, to leverage the Industry 4.0 business value. In the context of digital automation in general, and of cognitive and autonomous systems in particular, the real challenge is the safe and secure integration of all the technologies involved (robotic systems, production systems, computing platforms, cognitive services and mobile information services) into solutions, as illustrated in <link linkend="F2-9">Figure <xref linkend="F2-9" remap="2.9"/></link>.</para>
<para>Based on these three pillars, AUTOWARE has proposed a framework based on other existing frameworks (e.g. MIDIH, BEinCPPS, FIWARE, RAMI 4.0), taking into consideration the industrial requirements from several use cases, aiming to provide a solution-oriented framework for digital shopfloor automation. <link linkend="F2-10">Figure <xref linkend="F2-10" remap="2.10"/></link> shows the AUTOWARE framework with its main components.</para>
<fig id="F2-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-9">Figure <xref linkend="F2-9" remap="2.9"/></link></label>
<caption><para>AUTOWARE digital automation solution-oriented context.</para></caption>
<graphic xlink:href="graphics/ch02_fig009.jpg"/>
</fig>
<fig id="F2-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-10">Figure <xref linkend="F2-10" remap="2.10"/></link></label>
<caption><para>AUTOWARE framework.</para></caption>
<graphic xlink:href="graphics/ch02_fig010.jpg"/>
</fig>
<para>From a technical perspective, the AUTOWARE framework offers many features and concepts that are of great importance for cognitive manufacturing, in particular for the automatic awareness abilities that AUTOWARE is primarily aiming at:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Open platform</emphasis>. The platform contains different technology building blocks, with communication and computation instances offering strong virtualization properties with respect to both safety and security, for the cloudification of CPS services.</para></listitem>
<listitem><para><emphasis role="strong">Reference architecture</emphasis>. By harmonizing reference models for the cloudification of CPS services, the platform takes a template-style approach that allows flexible application of an architectural design for the implementation of cognitive manufacturing solutions, e.g. predictive maintenance, zero-defect manufacturing or energy efficiency.</para></listitem>
<listitem><para><emphasis role="strong">Connectivity to IoT</emphasis>. Multi-level operation (edge, cloud) and function virtualization through open interfaces provide native support for service connection to and disconnection from the platform, so that services can be orchestrated and provisioned efficiently and effectively.</para></listitem>
<listitem><para><emphasis role="strong">Dynamic configuration</emphasis>. Software-defined operation allows other systems to connect to or disconnect from the system automatically; dynamic configuration, including scheduling, is implemented. The deployment of new functionalities, new services and new system structures poses new safety and security requirements: components must be configured and validated more dynamically, and finally integrated into these systems.</para></listitem>
<listitem><para><emphasis role="strong">Autonomous controls</emphasis>. High automation levels and autonomy require a high degree of design and development work in the area of sensors and actuators on the one hand and a high degree of efficient and robust sensor fusion on the other.</para></listitem>
<listitem><para><emphasis role="strong">Virtualization of real-time functions</emphasis>. Control functions can be virtualized and executed away from machine environments, and machine data can be accessed remotely in real time. This enables a large variety of novel functionalities, as it allows the geographical distribution of computationally intensive processes, executed remotely from the location of action.</para></listitem>
</itemizedlist>
<section class="lev3" id="sec2-3-1-1">
<title>2.3.1.1 Pillar 1: AUTOWARE open reference architecture for autonomous digital shopfloor</title>
<para>The AUTOWARE Reference Architecture (RA) aligns the cognitive manufacturing technical enablers, i.e. robotic systems, smart machines, cloudified control, secure cloud-based planning systems and the application platform, to provide cognitive automation systems as solutions while exploiting cloud technologies and smart machines as a common system. AUTOWARE leverages a reference architecture that allows the harmonization of collaborative robotics, reconfigurable cells and modular manufacturing system control architectures with the BEinCPPS and MIDIH data-driven industrial service reference architectures (already fully aligned with ECSEL CRYSTAL and EMC2 CPS design practices), supported by secure and edge-powered reliable industrial (wireless) communication systems (5G, WiFi and OPC-UA TSN) and high-performance cloud computing platforms (CloudFlow) across the cognitive manufacturing competence domains (automation, analytics and simulation).</para>
<para>The goal of the AUTOWARE RA is to have a broad industrial applicability, map applicable technologies to different areas and to guide technology and standard development. From a structural perspective, the AUTOWARE RA covers two different areas denoted as domains:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Design domain:</emphasis> It describes the design and development methods, tools and services for designing AUTOWARE CPPS. The components of the design domain enable users to intuitively design applications (the so-called automatic awareness digital ability usability services).</para></listitem>
<listitem><para><emphasis role="strong">Runtime domain:</emphasis> It includes all the systems that support the execution and operation of the AUTOWARE autonomous CPPS.</para></listitem>
</itemizedlist>
<para>The AUTOWARE RA has four layers/levels (see <link linkend="F2-11">Figure <xref linkend="F2-11" remap="2.11"/></link>), which target all relevant layers for the modeling of autonomous CPPS in the view of AUTOWARE:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Enterprise:</emphasis> The enterprise layer is the top layer of the AUTOWARE reference architecture and encompasses all of the enterprise&#8217;s systems, as well as interaction with third parties and other factories.</para></listitem>
<listitem><para><emphasis role="strong">Factory:</emphasis> At the factory layer, a single factory is depicted. This includes all the various workcells or production lines available for the complete production.</para></listitem>
<listitem><para><emphasis role="strong">Workcell/Production Line:</emphasis> The workcell layer represents an individual production line or cell within a company. Nowadays, a factory typically contains multiple production lines (or production cells) in which the individual machines, robots, etc. are located.</para></listitem>
<listitem><para><emphasis role="strong">Field Devices:</emphasis> The field devices layer is the lowest level of the reference architecture, where the actual machines, robots and conveyor belts, as well as controllers, sensors and actuators, are positioned.</para></listitem>
</itemizedlist>
<para>To uphold the concept of Industry 4.0 and move beyond the old-fashioned automation pyramid (in which communication was mainly possible within a specific layer, and complicated interfaces were required to establish communication between the different layers), the communication concept is a &#8220;pillar&#8221; that covers all the mentioned layers and enables direct communication between them. The pillar is named <emphasis role="strong">Fog/Cloud</emphasis> and uses wired (e.g. IEEE 802.1 TSN) and wireless communication to create direct interaction between the different layers by using Fog/Cloud concepts (blue column in <link linkend="F2-11">Figure <xref linkend="F2-11" remap="2.11"/></link>). In alignment with this paradigm, this pillar is also responsible for data persistence and, potentially, for distributed transaction management services across the various components of the autonomous digital manufacturing system.</para>
<para>Finally, the last part of the AUTOWARE Reference Architecture focuses on the actual <emphasis role="strong">modeling, programming and configuration</emphasis> of the different technical components inside the different layers (green column in <link linkend="F2-11">Figure <xref linkend="F2-11" remap="2.11"/></link>). On each layer, different tools or services are applied, and for all of them, different modeling approaches are available. The goal of these modeling approaches is to make it easier for the end user, system developer or system integrator to develop the tools or technologies for the different levels. Additionally, modeling approaches that take the different layers into account could make it easier for users to model the interaction between the layers.</para>
<fig id="F2-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-11">Figure <xref linkend="F2-11" remap="2.11"/></link></label>
<caption><para>AUTOWARE Reference Architecture.</para></caption>
<graphic xlink:href="graphics/ch02_fig011.jpg"/>
</fig>
<para>The AUTOWARE reference architecture also represents the two <emphasis role="strong">data domains</emphasis> that the architecture anticipates, namely the data-in-motion and data-at-rest domains. These domains are also matched in the architecture with the <emphasis role="strong">types of services</emphasis> &#8211; automation, analysis and learning/simulation &#8211; that are also pillars of the RA. The model also represents the layers of the RA where such services could be executed with the support of the fog/cloud computing and persistence services (blue pillar in <link linkend="F2-11">Figure <xref linkend="F2-11" remap="2.11"/></link>).</para>
</section>
<section class="lev3" id="sec2-3-1-2">
<title>2.3.1.2 Pillar 2: AUTOWARE digital abilities for automatic awareness in the autonomous digital shopfloor</title>
<para>As an initial and crucial step towards autonomous shopfloor operation, AUTOWARE provides a set of digital technologies and services that lay the foundation <emphasis role="strong">of automatic awareness in a digital shopfloor</emphasis>. Automatic awareness is the precondition for any form of more advanced autonomous decision and/or self-adaptation process. Autonomous digital shopfloor operations require integration across multiple disciplines. In fact, as discussed in [37] and shown in <link linkend="F2-12">Figure <xref linkend="F2-12" remap="2.12"/></link>, openness and interoperability need to be facilitated across all of these disciplines in a harmonized manner to ensure future digital shopfloor extendibility as industry gradually adopts digital abilities and services to build its competitive advantage.</para>
<para>For this purpose, the AUTOWARE framework provides three main components. These AUTOWARE components (technologies, usability services and V&amp;V services) provide a collection of enablers that allows the different users of the AUTOWARE framework to interact with the system on different levels. Apart from the enablers developed in the AUTOWARE project, several international projects have promoted the creation of new open source enablers for such an architecture. The most interesting ones have come from the FIWARE Smart Industry, I4MS and IDS communities and have been integrated into the AUTOWARE framework. Within AUTOWARE, there are three different kinds of enablers &#8211; technology, usability, and verification and validation (V&amp;V) &#8211; which are crucial to ensure that a particular digital ability (in the specific case of AUTOWARE, automatic awareness) can be effectively and efficiently modeled, programmed, configured, deployed and operated in a digital shopfloor.</para>
<para>On the one hand, within the AUTOWARE framework there is a collection of technology enablers, which can be identified as the technical tools, methods and components developed or provided within the AUTOWARE framework. Examples of technology enablers within the AUTOWARE project are robotic systems, smart machines, cloudified control systems, fog nodes, and secure cloud- and fog-based planning systems, which exploit cloud and fog technologies and smart machines as a common system. All of these constitute a set of <emphasis role="strong">automatic awareness integrated technologies</emphasis>, which, as shown in <link linkend="F2-12">Figure <xref linkend="F2-12" remap="2.12"/></link>, include i-ROS-ready reconfigurable robotic cell and collaborative robotic bi-manipulation technology, smart product memory technology, OpenFog edge computing and virtualization technology, 5G-ready distributed data processing and reliable wireless mobile networking technologies, OPC-UA compliant Time Sensitive Networking (TSN) technology, deep object recognition technology and ETSI CIM-ready FIWARE Context Brokering technology.</para>
<fig id="F2-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-12">Figure <xref linkend="F2-12" remap="2.12"/></link></label>
<caption><para>AUTOWARE harmonized automatic awareness open technologies.</para></caption>
<graphic xlink:href="graphics/ch02_fig012.jpg"/>
</fig>
<para>On the other hand, the AUTOWARE digital ability framework additionally provides <emphasis role="strong">automatic awareness usability services</emphasis> intended for a more cost-effective, fast and usable modeling, programming and configuration of integrated solutions based on the AUTOWARE enabling automatic digital shopfloor awareness technologies. This includes, for instance, augmented virtuality services, CPPS-trusted auto-configuration services or robot programming by training services.</para>
<para>Through its digital abilities, AUTOWARE facilitates the means for the deployment of completely open digital shopfloor automation solutions for fast data connection across factory systems (from shop floor to office floor) and across value chains (in cooperation with component and machine OEM smart services and knowledge). The AUTOWARE added value is not only to deliver a layered model for the four layers of the digital business ecosystem discussed in Section 2.2 for the digital shopfloor (smart space, smart product, smart data and smart service), but more importantly to provide an open and flexible approach with suitable interfaces to commercial platforms that allows the implementation of collective and collaborative services based on trusted information spaces and extensive exploitation of digital twin capabilities and machine models and operational footprints.</para>
<para>The third element of the AUTOWARE digital ability framework is the provision of <emphasis role="strong">validation and verification (V&amp;V) services</emphasis> for digital shopfloor solutions, i.e. CPPS. Although CPPS are designed to work correctly under a range of environmental conditions, in practice it is enough for them to work properly under specific conditions. In this context, certification processes help to guarantee correct operation under certain conditions, making the engineering process easier, cheaper and shorter for SMEs that want to include CPPS in their businesses. In addition, certification can increase the credibility and visibility of a CPPS, as it guarantees correct operation under specific standards. If a CPPS is certified to follow an international or European standard or regulation, then it does not need to be certified in each country, so the integration complexity, cost and duration are greatly reduced.</para>
</section>
<section class="lev3" id="sec2-3-1-3">
<title>2.3.1.3 Pillar 3: AUTOWARE business value</title>
<para>On the one hand, around the world, traditional manufacturing industry is in the throes of a digital transformation that is accelerated by exponentially growing technologies (e.g. intelligent robots, autonomous drones, sensors, 3D printing). Indeed, several European initiatives (e.g. the I4MS initiative) and interesting platforms are developing digitalization solutions for manufacturing companies in different areas: robotic solutions, cloudification manufacturing initiatives, CPS platform implementations, reconfigurable cells, etc. However, all these initiatives were developed in isolation and they act as isolated components.</para>
<para>On the other hand, manufacturing SMEs need to digitalize their processes in order to increase their competitiveness through the adoption of ICT technologies. However, the global competition and the individualized products and solutions that currently exist make it difficult for manufacturing SMEs to access all this potential.</para>
<para>For this reason, AUTOWARE defined a new Autonomous Factory Ecosystem around its Business Value pillar, allowing manufacturing SMEs to gain a clear competitive advantage in the implementation of their manufacturing processes. This pillar provides access to a new generation of tools and decision support toolboxes capable of supporting CPPS and digital service cloudification, robotic systems and reconfigurable cells, thanks to a faster and holistic management of several initiatives and tools within an open ecosystem that provides a more seamless transfer of information across the physical and digital worlds.</para>
<para>Therefore, AUTOWARE provides an <emphasis role="strong"><emphasis>open CPPS solution hub ecosystem</emphasis></emphasis> that gathers all resources together, thus enabling SMEs to access all the different components in order to develop digital automation cognitive solutions for their manufacturing processes in a controlled manner and with quantifiable business impact.</para>
<para>AUTOWARE significantly reduces the complexity of accessing the different isolated tools and speeds up the process by which multisided partners can meet and work together. Indeed, AUTOWARE connects several initiatives to strengthen the European SME offer of cognitive autonomous products and to bring cognitive autonomous production processes and equipment to manufacturing SMEs. Thus, AUTOWARE leverages the development of an open CPPS ecosystem and addresses the needs of several stakeholder groups:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">End Users (SME):</emphasis> The main target group of the AUTOWARE project is SMEs (small and medium-sized enterprises) that are looking to change their production according to Industry 4.0, CPPS and the Internet of Things (IoT). These SMEs are considered the end users of the AUTOWARE developments; they do not have to use all the developed technologies, but may be interested in only a subset of them.</para></listitem>
<listitem><para><emphasis role="strong">Software Developers:</emphasis> As the AUTOWARE platform is an open platform, software developers can create new applications that run on the AUTOWARE system. To support these users in their work, the system provides a high level of usability and intuitiveness, so that software developers can program the system to their wishes.</para></listitem>
<listitem><para><emphasis role="strong">Technology Developers:</emphasis> The individual technical enablers can be used as a single technology, but being an open technology, they can also be integrated into different technologies by technology developers. The technology must be open and once again be intuitive to re-use in different applications. Technology developers can then easily use the AUTOWARE technology to develop new technologies for their applications and create new markets for the AUTOWARE results.</para></listitem>
<listitem><para><emphasis role="strong">Integrator:</emphasis> The integrator is responsible for the integration of the technologies into the whole manufacturing chain. To target this user group, the technologies must support open interfaces, so the system can intuitively be integrated into the existing chain. The advantage of the open interfaces is that the integrator is not bound to a certain brand or vendor.</para></listitem>
<listitem><para><emphasis role="strong">Policy Makers:</emphasis> Policy makers can make or break a technology. To increase the acceptance rate, the exploitation and dissemination of the technology must be at a professional level; additionally, the technology must be validated, support the right standards and target the right problems currently present on the market. Policy makers can push technologies further into the market and act as a large catalyst for new technologies.</para></listitem>
<listitem><para><emphasis role="strong">HW Developers:</emphasis> For hardware developers, it is important to know what kind of hardware is required to use the different technologies. In the ideal case, all kinds of legacy hardware would be capable of interacting with new hardware, but unfortunately, this is not always the case.</para></listitem>
<listitem><para><emphasis role="strong">Automation Equipment Providers:</emphasis> The technologies developed within the AUTOWARE project can be of interest to other automation equipment providers, e.g. robot providers, industrial controller providers, sensor providers, etc.</para></listitem>
</itemizedlist>
</section>
</section>
<section class="lev2" id="sec2-3-2">
<title>2.3.2 AUTOWARE Software-Defined Autonomous Service Platform</title>
<para>Now that the complete AUTOWARE framework overview has been presented, this section focuses on a detailed presentation of the software-defined service platform for autonomous manufacturing services and expands on the main technological blocks underlying the AUTOWARE reference architecture.</para>
<para>Due to the recent development of numerous technical enablers (e.g. IoT, cloud, edge, HPC), it is possible to take a service-based approach for many components of production information systems (IS). With a service-based approach, instead of developing, deploying and running in-house implementations of all production IS tasks, an external service provider can be used and the end user can rent access to the offered services, reducing the cost and in-house expertise required.</para>
<para>AUTOWARE focuses on a service-based approach denoted as the software-defined autonomous service platform (in the following, also abbreviated as &#8220;service platform&#8221;), based on open protocols and implementing all the functionalities (physical, control, supervision, MES, ERP) as services. As a result, components can be reused, the solution can be reconfigured and technological advances can be easily followed.</para>
<para><link linkend="F2-13">Figure <xref linkend="F2-13" remap="2.13"/></link> includes the reference architecture of the AUTOWARE service platform showing also how all the functionalities are positioned in the overall scheme of production IS. There are different functionalities (and therefore, services) on the different layers depending on the scope, but all of them are interconnected.</para>
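<para>The functionalities-as-services idea can be made concrete with a toy sketch (all names and endpoints below are illustrative, not part of the AUTOWARE specification): a registry maps each production IS functionality to a service endpoint, so a rented external service can be replaced without touching the other layers, which is the reusability and reconfigurability the text describes.</para>

```python
class ServiceRegistry:
    """Toy registry mapping production-IS functionalities to service endpoints."""

    def __init__(self):
        self._services = {}

    def register(self, functionality, endpoint):
        # Registering an already-known functionality rebinds it: this is the
        # reconfiguration step described in the text.
        self._services[functionality] = endpoint

    def resolve(self, functionality):
        """Return the endpoint currently serving a functionality."""
        return self._services[functionality]

registry = ServiceRegistry()
# Hypothetical initial wiring of the production IS layers.
for functionality, endpoint in {
    "control":     "http://edge-node/control",
    "supervision": "http://edge-node/scada",
    "MES":         "http://cloud-provider/mes",
    "ERP":         "http://cloud-provider/erp",
}.items():
    registry.register(functionality, endpoint)

# Swap the rented MES service for another provider; the other layers keep
# resolving their own endpoints unchanged.
registry.register("MES", "http://other-provider/mes")
print(registry.resolve("MES"))
```

In a real service platform the registry would of course be a distributed, protocol-aware component rather than a dictionary, but the interaction pattern is the same: consumers resolve functionalities by name, never by hard-coded implementation.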
<section class="lev3" id="sec2-3-2-1">
<title>2.3.2.1 Cloud &amp; Fog computing services enablers and context management</title>
<para>AUTOWARE considers several cloud service enablers for an easier implementation of the different services and functionalities. Context management and service function virtualization are critical elements to be supported in the delivery of automatic awareness abilities in a digital shopfloor. The use of these open source enablers permits an easier exchange of information and interoperability between different components and services, which is particularly useful for future use cases.</para>
<para>The AUTOWARE RA adopts FIWARE for Smart Industry technology as the basis for meeting AUTOWARE&#8217;s context management needs for digital automation information systems, with extended support for robotic systems. Additionally, AUTOWARE adopts OpenFog as the framework for the operation of virtualized service functions.</para>
<fig id="F2-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-13">Figure <xref linkend="F2-13" remap="2.13"/></link></label>
<caption><para>AUTOWARE Software-Defined Autonomous Service Platform.</para></caption>
<graphic xlink:href="graphics/ch02_fig013.jpg"/>
</fig>
<para>The main features introduced in the cloud &amp; edge computing pillar, beyond those inherent to OpenFog specifications, are the support for automation context information management, processing and visualization. Such functionalities are being provided through edge and cloud support to two main FIWARE components:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Backend Device Management &#8211; IDAS:</emphasis> For the translation from IoT-specific protocols to the NGSI context information protocol considered by FIWARE enablers.</para></listitem>
<listitem><para><emphasis role="strong">Orion Context Broker:</emphasis> It produces, gathers, publishes and consumes context information, acting as the main context information communication system throughout the AUTOWARE architecture. It facilitates the exchange of context information between Context Information Producers and Consumers through a Publish/Subscribe methodology (see <link linkend="F2-14">Figure <xref linkend="F2-14" remap="2.14"/></link>). This permits highly decentralized, large-scale context information management and strong interoperability between the different components, thanks to the use of the common NGSI protocol. The IDS architecture and connectors can exploit this powerful communication tool, making IDS an extension of the AUTOWARE RA through FIWARE support for the IDS reference architecture, as described in Section 2.2.</para></listitem>
</itemizedlist>
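<para>To make the Publish/Subscribe interaction concrete, the following sketch builds the JSON payloads a context producer and consumer would exchange with the Orion Context Broker through its NGSI v2 REST API (entities are created via POST /v2/entities, subscriptions via POST /v2/subscriptions). The entity identifier, attribute names and notification URL are hypothetical examples, not part of the AUTOWARE specification.</para>

```python
# Illustrative sketch of NGSI v2 payloads for the Orion Context Broker.
# Entity ids, attribute names and the notification URL are hypothetical.

def make_entity(entity_id, entity_type, attrs):
    """Payload for creating a context entity (POST /v2/entities)."""
    payload = {"id": entity_id, "type": entity_type}
    for name, (value, value_type) in attrs.items():
        payload[name] = {"value": value, "type": value_type}
    return payload

def make_subscription(entity_type, watched_attrs, notify_url):
    """Payload for a subscription (POST /v2/subscriptions): the broker
    notifies notify_url whenever a watched attribute changes."""
    return {
        "subject": {
            "entities": [{"idPattern": ".*", "type": entity_type}],
            "condition": {"attrs": watched_attrs},
        },
        "notification": {"http": {"url": notify_url}},
    }

# A producer describes a machine; a consumer subscribes to its temperature.
machine = make_entity("urn:ngsi:Machine:cell1-press", "Machine",
                      {"temperature": (74.2, "Number"),
                       "status": ("RUNNING", "Text")})
subscription = make_subscription("Machine", ["temperature"],
                                 "http://factory-service:8080/notify")
```

<para>In a deployment, these dictionaries would be serialized to JSON and sent to the broker with an HTTP client; here they only illustrate the decoupling between producers and consumers that the common NGSI protocol provides.</para>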
<fig id="F2-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-14">Figure <xref linkend="F2-14" remap="2.14"/></link></label>
<caption><para>Context Broker basic workflow &amp; FIWARE Context Broker Architecture.</para></caption>
<graphic xlink:href="graphics/ch02_fig014.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Cosmos:</emphasis> For easier Big Data analysis of context information, integrating with the most popular Big Data platforms and cloud storage.</para></listitem>
</itemizedlist>
<para>AUTOWARE extends a cloud-based architecture into a more flexible and efficient one based on fog computing, which is defined by the OpenFog Consortium as follows: &#8220;A horizontal, system-level architecture that distributes computing, storage, control and networking functions closer to the users along a cloud-to-thing continuum&#8221;. Adding an intermediate layer for data aggregation and computing capabilities at the edge of the network resolves several bottlenecks and disadvantages of complex industrial scenarios: (1) data bottlenecks at the interface between IT and cloud infrastructure; (2) the inability to guarantee pre-defined communication latencies; (3) sensor data being sent unfiltered to the cloud; and (4) limited intelligence at the machine level.</para>
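<para>As an illustration of the bandwidth argument, the following hedged sketch shows the kind of filtering a fog node could apply before data crosses the IT/cloud boundary: the raw sensor stream stays at the edge, and only a compact summary plus out-of-range samples are forwarded. The threshold and field names are invented for the example.</para>

```python
# Hypothetical fog-node filter: summarize a batch of sensor readings
# locally and forward only the aggregate and anomalous samples.

def fog_filter(readings, threshold):
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),              # raw samples stay at the edge
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "anomalies": anomalies,              # only outliers go upstream
    }

batch = [20.1, 20.3, 19.8, 87.5, 20.0]       # e.g. spindle temperatures
summary = fog_filter(batch, threshold=50.0)  # one small message to the cloud
```

<para>Instead of five raw samples, a single summary message reaches the cloud, which is how the intermediate fog layer relieves the data bottleneck and keeps unfiltered sensor data out of the cloud.</para>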
<fig id="F2-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-15">Figure <xref linkend="F2-15" remap="2.15"/></link></label>
<caption><para>Embedding of the fog node into the AUTOWARE software-defined platform as part of the cloud/fog computing &amp; persistence service support.</para></caption>
<graphic xlink:href="graphics/ch02_fig015.jpg"/>
</fig>
<para>These drawbacks can be removed using fog nodes. In addition, strict timing requirements, or even real-time constraints, can only be met by avoiding long transmission paths for the data. The fog computing approach thus inherently avoids such latencies.</para>
<para><link linkend="F2-15">Figure <xref linkend="F2-15" remap="2.15"/></link> shows the embedding of the fog node into the AUTOWARE framework. The architecture supports the following aspects:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Machine Control Capabilities:</emphasis> The AUTOWARE platform can control the different machines (e.g. robots and other equipment) within the plant or the manufacturing cell. It can connect to remote I/Os via an integrated PLC.</para></listitem>
<listitem><para><emphasis role="strong">Device Management Capabilities:</emphasis> It allows users to manage multiple machines in a distributed manner. The device manager is situated in the main office, whereas the devices are distributed over the factories, possibly worldwide. The communication between the device manager and the different devices must be implemented over a secure and safe communication channel.</para></listitem>
<listitem><para><emphasis role="strong">Data Gateway:</emphasis> It enables the communication between other fog nodes, between the fog node and the cloud and with a remote operator.</para></listitem>
<listitem><para><emphasis role="strong">Visualization Capabilities:</emphasis> The AUTOWARE open platform provides standard interfaces (wired and wireless) to guarantee connectivity via user interfaces to access data via reports, dashboards, etc.</para></listitem>
<listitem><para><emphasis role="strong">Application Hosting Functionality:</emphasis> Applications can be hosted either in the fog or in the cloud.</para></listitem>
</itemizedlist>
<para>The pillars of this architecture, which are common themes of the OpenFog reference architecture, include security, scalability, openness, autonomy, RAS (reliability, availability and serviceability), agility, hierarchy and programmability.</para>
</section>
</section>
<section class="lev2" id="sec2-3-3">
<title>2.3.3 AUTOWARE Framework and RAMI 4.0 Compliance</title>
<para>The overall AUTOWARE Framework and Reference Architecture is also related to RAMI 4.0, the identified reference architecture for Industry 4.0. The goal of the AUTOWARE project was to keep its developments aligned with the topics of Industry 4.0 and to keep the Reference Architecture and Framework related to RAMI 4.0, while extending their scope to address the Smart Service Welt data-centric service operations and future autonomous service demands.</para>
<para>To establish this link, the consortium mapped the different concepts and components of the AUTOWARE Framework to the RAMI 4.0 model. The result of this mapping is provided in <link linkend="F2-16">Figure <xref linkend="F2-16" remap="2.16"/></link>. As can be observed, the layers of the RAMI 4.0 architecture are well covered by the digital ability enablers (technologies and services). Moreover, the business value matches the vision of the business layer of the RAMI 4.0 architecture. On the hierarchical axis, the mapping is provided by the layers of the reference architecture, whereas the lifecycle coverage for type and instance is addressed through the modeling, configuration and programming pillar and the cloud/fog computing and persistence service layers. As discussed in the previous subsection, the data-management services that support simulation, learning and knowledge-cognitive capabilities at the various layers actually implement these advanced Industry 4.0 functionalities on the basis of cloud and edge support. This strict mapping ensures not only that the AUTOWARE framework supports Industry 4.0 scenarios, but also that it can bring forward more advanced data-driven autonomous operations.</para>
</section>
</section>
<section class="lev1" id="sec2-4">
<title>2.4 Autoware Framework for Predictive Maintenance Platform Implementation</title>
<para>In the new Industry 4.0 paradigm, cognitive manufacturing is a fundamental pillar. It transforms manufacturing in three ways:</para>
<para><emphasis role="strong">1. Intelligent Assets and Equipment:</emphasis> utilizing interconnected sensors, analytics and cognitive capabilities to sense, communicate and self-diagnose any type of issue in order to optimize performance and efficiency and reduce unnecessary downtime.</para>
<para><emphasis role="strong">2. Cognitive Processes and Operations:</emphasis> analyzing a huge variety of information from workflows, context, process and environment to improve quality control and enhance operations and decision-making.</para>
<fig id="F2-16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-16">Figure <xref linkend="F2-16" remap="2.16"/></link></label>
<caption><para>Mapping and coverage of RAMI 4.0 by the AUTOWARE framework.</para></caption>
<graphic xlink:href="graphics/ch02_fig016.jpg"/>
</fig>
<para><emphasis role="strong">3. Smarter Resources and Optimization:</emphasis> combining various forms of data from different individuals, locations, usage and expertise with cognitive insight to optimize and enhance resources such as labor, workforce and energy, thereby improving the efficiency of the process.</para>
<para>Predictive maintenance is the prediction of a tool&#8217;s life cycle, or of other maintenance issues, using information gathered by different sensors and analyzed by different types of analytical processes and means. Predictive maintenance is therefore a clear example of cognitive manufacturing, and it is the focus of the Z-Bre4k project, which employs the AUTOWARE Digital Shopfloor reference architecture as its framework for process operation. This section discusses how the AUTOWARE framework can be customized, and additional digital abilities and services incorporated, to implement advanced Industry 4.0 manufacturing processes.</para>
<section class="lev2" id="sec2-4-1">
<title>2.4.1 Z-BRE4K: Zero-Unexpected-Breakdowns and Increased Operating Life of Factories</title>
<para>The H2020 project <emphasis role="strong">Z-BRE4K</emphasis>, https://www.z-bre4k.eu/, aims to implement predictive maintenance strategies that avoid unexpected breakdowns, thus increasing the uptime and overall efficiency of manufacturing scenarios. To this end, several hardware and software solutions will be implemented in three industrial demonstrators, adapted to the particular needs of each one.</para>
<para>In particular, Z-BRE4K delivers a solution composed of eight scalable strategies at component, machine and system level targeting:</para>
<para><emphasis role="strong">1. Z-PREDICT</emphasis>. The prediction of failure occurrence.</para>
<para><emphasis role="strong">2. Z-DIAGNOSE</emphasis>. The early detection of current or emerging failure.</para>
<para><emphasis role="strong">3. Z-PREVENT</emphasis>. The prevention of failure occurrence, building up or even propagation in the production system.</para>
<para><emphasis role="strong">4. Z-ESTIMATE</emphasis>. The estimation of the remaining useful life of assets.</para>
<para><emphasis role="strong">5. Z-MANAGE</emphasis>. The management of the strategies through event modeling, KPI monitoring and real-time decision support.</para>
<para><emphasis role="strong">6. Z-REMEDIATE</emphasis>. The replacement, reconfiguration, re-use, retirement and recycling of components/assets.</para>
<para><emphasis role="strong">7. Z-SYNCHRONISE</emphasis>. Synchronizing remedy actions, production planning and logistics.</para>
<para><emphasis role="strong">8. Z-SAFETY</emphasis>. Preserving the safety, health and comfort of the workers.</para>
<para>The Z-BRE4K solution implementation is expected to have a significant impact, namely (1) an increase of in-service efficiency by 24%, (2) reduced accidents, (3) increased verification according to objectives and (4) 400 new jobs created and over &#8364;42M ROI for the consortium.</para>
<para>In order to implement these strategies and achieve this impact, data coming from machine components, industrial lines and shop floors will be fed into the Z-BRE4K platform, which features a communication middleware operating system, a semantic framework module, a dedicated condition monitoring module, a cognitive embedded module, a machine simulator for developing digital twins, an advanced decision support system (DSS), an FMECA module and a predictive maintenance module, together with a cutting-edge vision H/S solution for manufacturing applications associated with advanced HMI.</para>
<para>The General Architecture must be able to support all the components developed under the Z-BRE4K project that fulfil the predictive maintenance strategies, keeping the information flow constant and well distributed between all the components. At the same time, it must permit an easy implementation in each use case scenario, leading the way towards a particular architecture for each use case and, in the future, for different scenarios from other industrial systems. This means that the General Architecture must be highly flexible and easily adaptable to new use cases, promoting the integration of predictive maintenance in SMEs.</para>
<para>This high flexibility requires the main communication middleware operating system to support a large number of different types of data coming from different types of sensors and control software. At the same time, given the large number of different components, it must also support continuous communication between all of them, with the highest possible level of interoperability.</para>
</section>
<section class="lev2" id="sec2-4-2">
<title>2.4.2 Z-Bre4k Architecture Methodology</title>
<para>The Z-Bre4k architecture is designed and developed on the foundations of the AUTOWARE reference architecture and building blocks enabling the convergence of information technology (IT), operational technology (OT), engineering technology (ET) and the leveraging of interoperability of industrial data spaces (IDS), for the support of a factory ecosystem. The objective is to develop a highly adaptive real-time machine (network of components) simulation platform that wraps around the physical equipment for the purpose of predicting uptimes and breakdowns, thus creating intuitive maintenance control and management systems.</para>
<para>The AUTOWARE framework has been selected as the open OS for the Z-Bre4k framework for cognitive CPPS service development and strategy implementation. The AUTOWARE open framework is particularly well suited for the integration of Z-Bre4k strategies over legacy machines and IT systems with minimum interference, so that even SMEs are able to easily integrate advanced predictive maintenance strategies into the very same IT framework used for production optimization or zero-defect manufacturing processes.</para>
</section>
<section class="lev2" id="sec2-4-3">
<title>2.4.3 Z-BRE4K General Architecture Structure</title>
<para>The Z-BRE4K General Architecture will be a combination of the AUTOWARE RA from <link linkend="F2-17">Figure <xref linkend="F2-17" remap="2.17"/></link> with a vertical separation definition included in the Digital Shopfloor Alliance Reference Architecture and the integration of the IDS General Architecture from <link linkend="F2-6">Figure <xref linkend="F2-6" remap="2.6"/></link> by using FIWARE Generic Enablers as IDS core components. The main result is shown in <link linkend="F2-18">Figure <xref linkend="F2-18" remap="2.18"/></link>, where the Z-BRE4K Automation, Z-BRE4K Analytics and Z-BRE4K Simulation are presented following the Far-Edge Architecture principles envisioned in the DSA Reference Architecture.</para>
</section>
<section class="lev2" id="sec2-4-4">
<title>2.4.4 Z-BRE4K General Architecture Information Workflow</title>
<para>Since the predictive maintenance Z-BRE4K is aiming at has been envisioned as a service, the General Architecture adapts the AUTOWARE Service Platform Reference Architecture to the Z-BRE4K structure, as shown in <link linkend="F2-19">Figure <xref linkend="F2-19" remap="2.19"/></link>, where the different services are divided into the AUTOWARE blocks and layers, all of them interconnected through suitable data buses constructed across information contexts. Communication over the main work cell and plant network will be handled principally, but not exclusively, by the IDS Connector and the FIWARE Orion Context Broker; other communication methodologies are also supported, so that the architecture can be adapted to any future use case implementation. The fog/cloud interconnection is always available through the fog nodes described in Section 2.3. This permits the use of storage, HPC and Deep Learning FIWARE Generic Enablers for better computing and calculation processes.</para>
<para>The information captured by the field devices (sensors, machines, etc.) is sent in real time through the Time-Sensitive Network (TSN) located in the end users&#8217; facilities to the Control Services and the Perception Services &amp; Model Building components.</para>
<fig id="F2-17" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-17">Figure <xref linkend="F2-17" remap="2.17"/></link></label>
<caption><para>Z-Bre4k zero break down workflow &amp; strategies.</para></caption>
<graphic xlink:href="graphics/ch02_fig017.jpg"/>
</fig>
<fig id="F2-18" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-18">Figure <xref linkend="F2-18" remap="2.18"/></link></label>
<caption><para>Z-BRE4K General Architecture Structure.</para></caption>
<graphic xlink:href="graphics/ch02_fig018.jpg"/>
</fig>
<para>In the next step, the data (normally preprocessed by the Workcell components) is published to the Orion Context Broker through the IDS Connectors connected to the Workcell layer components. The different components from the factory layer that are subscribed to each data set will receive it for analysis and processing. The factory services components, which are divided into Learning, Simulating and Cognitive Computing Services, may require processed data from another factory layer service. The outputs from factory layer components that are required as inputs by other factory layer components will be published once again in the Orion Context Broker in the Workcell. The factory layer components that need those outputs as inputs will be subscribed to that data and will receive it. This is how the communication and information flow is carried out through the different hierarchical levels.</para>
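<para>This chained publish/subscribe flow can be sketched with a toy in-memory broker standing in for the Orion Context Broker; the topic names and payload fields are invented for illustration. A factory-layer service subscribes to preprocessed workcell data, and its output, re-published on the same broker, is consumed in turn by another subscriber.</para>

```python
# Toy stand-in for the Orion Context Broker's publish/subscribe mechanism.

class MiniBroker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        for callback in self.subscribers.get(topic, []):
            callback(data)

broker = MiniBroker()
received = []

# A factory-layer learning service consumes preprocessed workcell data and
# publishes its own output back to the broker (hypothetical topics/fields).
broker.subscribe("press/preprocessed",
                 lambda d: broker.publish("press/output",
                                          {"wear_index": d["cycles"] / 1000}))
# A downstream service subscribes to that re-published output.
broker.subscribe("press/output", received.append)

broker.publish("press/preprocessed", {"cycles": 1500})
```

<para>The point of the sketch is that each service only knows the broker and the data sets it subscribes to, never the other services directly, which is what lets the hierarchy of levels stay loosely coupled.</para>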
<para>The Learning, Simulating and Cognitive Computing Services will end up creating valuable information as outputs that will be published in the Plant Network&#8217;s Orion Context Broker. The different Business Management Services will collect the information required as inputs for their processing and will elaborate reports, actions, alerts, decision support actions, etc. The Dual Reality and Modelling Services will also gather information and process it to provide additional supporting information for business management decision-making and user interfaces, publishing it back in the Plant&#8217;s Orion Context Broker.</para>
<para>The Business Management Services will be able to send information to the Control Services for user interface issues or optimization actions if necessary.</para>
</section>
<section class="lev2" id="sec2-4-5">
<title>2.4.5 Z-BRE4K General Architecture Component Distribution</title>
<para>Following the Z-BRE4K General Architecture Service Block division from <link linkend="F2-19">Figure <xref linkend="F2-19" remap="2.19"/></link> and the component for predictive maintenance, the final Z-BRE4K General OS will be as shown in <link linkend="F2-20">Figure <xref linkend="F2-20" remap="2.20"/></link>, where the specific technologies, services and tools to support the required predictive maintenance digital ability are actually illustrated.</para>
<para>The strength of the AUTOWARE RA in serving Z-Bre4k predictive maintenance lies in the fact that, once the data has been published in the Orion Context Broker, all of the scenarios considered can follow similar information workflows (see <link linkend="F2-21">Figure <xref linkend="F2-21" remap="2.21"/></link>).</para>
<para>In the particular use cases presented in <link linkend="F2-21">Figure <xref linkend="F2-21" remap="2.21"/></link>, the information for predictive maintenance flows as follows: (1) The information is gathered by the field devices, pre-processed if necessary by the control and model building services and published in the Orion Context Broker through each use case&#8217;s IDS Connector. (2) The data is collected by subscription by the C-03 Semantic Framework, where it is given a semantic structure and stored in a DB (most probably in fog/cloud storage); it is then published again in the Context Broker. (3) The data is used to feed the C-08 Machine Simulators. (4) Prediction algorithms (from the C-07 Predictive Maintenance) are run on the C-08 outputs. (5) The C-04 DSS gathers information from the C-07 Predictive Maintenance and analyzes it, giving the failure mode as an output. (6) The C-05 FMECA gets the failure mode from the DSS through the Context Broker. (7) FMECA returns criticality, risk, redundancy, etc. for the specific failure mode to the DSS through the Context Broker. (8) The DSS, based on the rules set, provides Recommendations to the Technicians through a common User Interface and the control services. (9) The Technicians can use the C-06 VRfx for a better understanding of the information. (10) The Technicians take Actions on the assets through the control services, based on the recommendations given. (11) The Technicians provide Feedback on the accuracy of the Recommendations given by the DSS. (12) The DSS improves its Rules and Recommendations based on the Feedback received.</para>
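<para>Steps (5)&#8211;(12) of this workflow hinge on a feedback loop between the DSS and the technicians. The following minimal sketch, with an invented rule structure and confidence values, shows how such a loop could adjust the confidence of a recommendation in response to feedback; it is not the actual C-04 DSS implementation.</para>

```python
# Hypothetical sketch of the DSS feedback loop: rules map failure modes
# to recommendations, and technician feedback adjusts their confidence.

class SimpleDSS:
    def __init__(self):
        self.rules = {}  # failure_mode -> [recommendation, confidence]

    def add_rule(self, failure_mode, recommendation, confidence=0.5):
        self.rules[failure_mode] = [recommendation, confidence]

    def recommend(self, failure_mode):
        recommendation, confidence = self.rules[failure_mode]
        return {"recommendation": recommendation, "confidence": confidence}

    def feedback(self, failure_mode, accurate):
        # Step (12): raise or lower confidence based on step (11) feedback,
        # clamped to the [0, 1] range.
        rule = self.rules[failure_mode]
        delta = 0.1 if accurate else -0.1
        rule[1] = min(1.0, max(0.0, rule[1] + delta))

dss = SimpleDSS()
dss.add_rule("bearing_wear", "replace bearing within 48 h")
dss.feedback("bearing_wear", accurate=True)  # technician confirms accuracy
```

<para>A real DSS would of course use far richer rules and learning; the sketch only captures the closed loop between recommendation, action and feedback that the workflow describes.</para>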
<fig id="F2-19" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-19">Figure <xref linkend="F2-19" remap="2.19"/></link></label>
<caption><para>Z-BRE4K General Architecture Connections.</para></caption>
<graphic xlink:href="graphics/ch02_fig019.jpg"/>
</fig>
<fig id="F2-20" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-20">Figure <xref linkend="F2-20" remap="2.20"/></link></label>
<caption><para>Z-BRE4K General OS.</para></caption>
<graphic xlink:href="graphics/ch02_fig020.jpg"/>
</fig>
<fig id="F2-21" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-21">Figure <xref linkend="F2-21" remap="2.21"/></link></label>
<caption><para>Use Cases Particular Information Workflow.</para></caption>
<graphic xlink:href="graphics/ch02_fig021.jpg"/>
</fig>
</section>
</section>
<section class="lev1" id="sec2-5">
<title>2.5 Conclusions</title>
<para>In this chapter, we have discussed the need to develop a digital automation framework supporting autonomous digital manufacturing workflows. We have also presented how various open platforms (i-ROS, OpenFog, IDS, FIWARE, BeinCPPS, MIDIH, ReconCell, Arrowhead, OPC-UA/TSN, 5G) can be harmonized through open APIs to deliver a software-defined digital shopfloor platform enabling a more cost-effective, controlled and extendable deployment of digital abilities in the shopfloor, in close alignment with business strategies and the investments available. This chapter has also presented how AUTOWARE brings forward the technology enablers (connectivity, data distribution, edge extension of automation and control equipment for app-ized smart open control hardware (open trusted platforms) operation, deep object recognition), usability services (augmented virtuality, CPPS autoconfiguration, robotic programming by training) and a verification and validation framework (safety &amp; standards compliant) for the deployment and operation of automatic awareness digital abilities, as a first step in the cognitive autonomous digital shopfloor evolution. We have presented how open platforms for fog/edge computing can be combined with cloudified control solutions and open platforms for collaborative robotics, modular manufacturing and reconfigurable cells to deliver advanced manufacturing capabilities in SMEs. Moreover, we have shown that the AUTOWARE framework is flexible enough to be adopted and enriched with additional digital capability services to support advanced and collaborative predictive maintenance decision workflows.
AUTOWARE is adapted for the operation of predictive maintenance strategies across a high diversity of machinery (robotic systems, inline quality control equipment, injection molding, stamping presses, high-performance smart tooling/dies and fixtures), very challenging and sometimes critical manufacturing processes (highly automated packaging industry, multi-stage zero-defect adaptive manufacturing of structural lightweight components for the automotive industry, short-batch mass-customized production processes for consumer electronics and the health sector) and key European economic sectors with the strongest SME presence (automotive, food and beverage, consumer electronics).</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This work was funded by the European Commission through the FoF-RIA Project AUTOWARE: Wireless Autonomous, Reliable and Resilient Production Operation Architecture for Cognitive Manufacturing (No. 723909) and through the FoF-IA Project Zbre4k: Strategies and Predictive Maintenance models wrapped around physical systems for Zero-unexpected-Breakdowns and increased operating life of Factories (No. 768869).</para>
</section>
<section class="lev1" id="sec2-6">
<title>References</title>
<para>[1] P. Muller, J. Julius, D. Herr, L. Koch, V. Peycheva, S. McKiernan, Annual Report on European SMEs 2016/2017 European Commission, Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs; Directorate H. https://ec.europa.eu/growth/smes/business-friendly-environment/performance-review es#annual-report</para>
<para>[2] AUTOWARE http://www.AUTOWARE-eu.org/</para>
<para>[3] OpenFog Consortium; https://www.openfogconsortium.org/#consortium</para>
<para>[4] B. Laibarra &#8220;Digital Shopfloor Alliance&#8221;, EFFRA ConnectedFactories Digital Platforms for Manufacturing Workshop, Session 2 - Integration between projects&#8217; platforms &#8211; standards &amp; interoperability, Brussels, 5, 6 February 2018 https://cloud.effra.eu/index.php/s/2tlFxi811TOjCOp</para>
<para>[5] Acatech, &#8220;Recommendations for the Strategic Initiative Web-based Services for Businesses. Final report of the Smart Service Working Group&#8221;, 19 August 2015. https://www.acatech.de/Publikation/recommendations-for-the-strategic-initiative-web-based-services-for-businesses-final-report-of-the-smart-service-working-group/</para>
<para>[6] M. Lemke &#8220;Digital Industrial Platforms for the Smart Connected Factory of the Future&#8221;, Manufuture &#8211; Tallinn 24 October 2017. http://manufuture2017.eu/wp-content/uploads/2017/10/pdf-Max-Lemke-24.10.pdf</para>
<para>[7] A. Zwegers, Workshop on Digital Manufacturing Platforms for Connected Smart Factories, Brussels, 19 October 2017. https://ec.europa.eu/digital-single-market/en/news/workshop-digital-manufacturing-platforms-connected-smart-factories</para>
<para>[8] EC, &#8220;Digitising European Industry: progress so far, 2 years after the launch&#8221;. March 2018. https://ec.europa.eu/digital-single-market/en/news/digitising-european-industry-2-years-brochure</para>
<para>[9] SIEMENS Mindsphere https://siemens.mindsphere.io/</para>
<para>[10] SAP Leonardo https://www.sap.com/products/leonardo.html</para>
<para>[11] Bosch IoT Suite https://www.bosch-si.com/iot-platform/bosch-iot-suite/homepage-bosch-iot-suite.html</para>
<para>[12] Dassault 3D experience platform https://www.3ds.com/</para>
<para>[13] Manufacturing Industry Digital Innovation Hubs (MIDIH) http://www.midih.eu/</para>
<para>[14] Fiware Smart Industry https://www.fiware.org/community/smart-industry/</para>
<para>[15] Fiware https://www.fiware.org/</para>
<para>[16] Big data for Factories 4.0 http://boost40.eu/</para>
<para>[17] International Data Spaces Association (IDSA) https://www.internationaldataspaces.org/en/</para>
<para>[18] Arrowhead framework http://www.arrowhead.eu/</para>
<para>[19] Smart, Safe &amp; Secure Platform http://www.esterel-technologies.com/S3P-en.html</para>
<para>[20] ISOBUS https://www.aef-online.org/the-aef/isobus.html</para>
<para>[21] AUTOSAR (Automotive Open System Architecture) https://www.autosar.org/</para>
<para>[22] Industrial Internet Consortium (IIC) https://www.iiconsortium.org/</para>
<para>[23] OPC-UA https://opcfoundation.org/</para>
<para>[24] Productive 4.0 https://productive40.eu/</para>
<para>[25] Industrial Value Chain Initiative (IVI) https://iv-i.org/wp/en/</para>
<para>[26] RAMI 4.0 https://www.zvei.org/en/subjects/industry-4-0/the-reference-architectural-model-rami-40-and-the-industrie-40-component/</para>
<para>[27] IIRA https://www.iiconsortium.org/wc-technology.htm</para>
<para>[28] Otto B., Lohmann S., Steinbuss S. IDS Reference Architecture Model Version 2.0, April 2018.</para>
<para>[29] E. Molina, O. Lazaro, et al. &#8220;The AUTOWARE Framework and Requirements for the Cognitive Digital Automation&#8221;, In: Camarinha-Matos L., Afsarmanesh H., Fornasiero R. (eds) Collaboration in a Data-Rich World. PRO-VE 2017. IFIP Advances in Information and Communication Technology, vol. 506. Springer, Cham.</para>
</section>
</chapter>

<chapter class="chapter" id="ch03" label="3" xreflabel="3">
<title>Reference Architecture for Factory Automation using Edge Computing and Blockchain Technologies</title>
<para><emphasis role="strong">Mauro Isaja</emphasis></para>
<para>Engineering Ingegneria Informatica SpA, Italy E-mail: mauro.isaja@eng.it</para>
<para>This chapter introduces the reader to the FAR-EDGE Reference Architecture (RA): the conceptual framework that, in the scope of the FAR-EDGE project, was used as the blueprint for the proof-of-concept implementation of a novel <emphasis>edge computing platform for factory automation</emphasis>: the FAR-EDGE Platform. This platform is intended to demonstrate edge computing&#8217;s potential to increase flexibility and lower costs without compromising production time and quality. The FAR-EDGE RA exploits best practices and lessons learned in similar contexts by the global community of system architects (e.g., Industrie 4.0, Industrial Internet Consortium) and provides a terse representation of the concepts, roles, structure and behaviour of the system under analysis. Its unique approach to edge computing is centered on the use of <emphasis>distributed ledger technology (DLT)</emphasis> and <emphasis>smart contracts</emphasis> &#8211; better known under the collective label of <emphasis>Blockchain</emphasis>. The FAR-EDGE project is exploring the use of Blockchain as a key enabling technology for industrial automation, analytics and virtualization, with validation use cases executed in real-world environments, which are briefly described at the end of the chapter.</para>
<section class="lev1" id="sec3-1">
<title>3.1 FAR-EDGE Project Background</title>
<para>FAR-EDGE&#8217;s main goal is to provide a novel edge computing solution for the virtualization of the factory automation pyramid. The idea of decentralizing factory automation is not new. Rather, for over a decade, several initiatives, including background projects of the consortium partners, have introduced decentralized factory automation solutions based on various technologies like intelligent agents and service-oriented architectures (SOA). These background initiatives produced proof-of-concept implementations that highlighted the benefits of decentralized automation in terms of flexibility; yet they are still not being widely deployed in manufacturing plants. Nevertheless, the vision is still alive, as this virtualization can make production systems more flexible and agile, increase product quality and reduce cost, e.g., enable scalable, fast-configurable production lines to meet the global challenges of <emphasis>mass-customization</emphasis> and <emphasis>reshoring</emphasis>.</para>
<para>With the advent of Industrie 4.0 and the Industrial Internet of Things (IIoT), such solutions are being revisited in the light of the integration of Cyber-Physical Systems (CPS) with cloud computing infrastructures. Several cloud-based applications are therefore deployed and used in factories, which leverage the capacity and scalability of the cloud while fostering supply chain collaboration and virtual manufacturing chains. Early implementations have also revealed the limitations of the cloud in terms of efficient bandwidth usage and its ability to support real-time operations, including operations close to the field. To alleviate these limitations, edge computing architectures have recently been introduced. These architectures insert layers of edge nodes between the field and the cloud, as a means of:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Saving bandwidth and storage</emphasis>, as edge nodes can filter data streams from the field in order to get rid of information that does not provide value for industrial automation.</para></listitem>
<listitem><para><emphasis role="strong">Enabling low-latency and proximity processing</emphasis>, since information can be processed close to the field, rather than in a remote (back-end) cloud infrastructure.</para></listitem>
<listitem><para><emphasis role="strong">Providing enhanced scalability</emphasis>, given that edge computing supports decentralized storage and processing that scale better when compared to conventional centralized cloud processing. This is especially the case when interfacing to numerous devices is needed.</para></listitem>
<listitem><para><emphasis role="strong">Supporting shopfloor isolation and privacy-friendliness</emphasis>, since edge nodes deployed at the shopfloor can be isolated from the rest of the edge network. This can provide increased security and protection of manufacturing datasets where required.</para></listitem>
</itemizedlist>
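<para>The bandwidth-saving role of an edge node (the first bullet above) can be sketched in a few lines of code. The example is purely illustrative &#8211; the class and parameter names are hypothetical and not part of the FAR-EDGE Platform: a dead-band filter forwards a sensor reading only when it differs from the last reported value by more than a threshold, discarding redundant samples before they ever leave the shopfloor.</para>

```python
# Illustrative sketch of edge-side filtering: forward a sensor reading to
# the cloud only when it deviates from the last reported value by more than
# a dead-band threshold; redundant samples are dropped at the edge.
class DeadbandFilter:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_reported = None

    def process(self, reading):
        """Return the reading if it is worth forwarding, else None."""
        if self.last_reported is None or abs(reading - self.last_reported) > self.threshold:
            self.last_reported = reading
            return reading
        return None  # filtered out at the edge: carries no new information

stream = [20.0, 20.1, 20.05, 21.5, 21.6, 23.0]   # raw field samples
f = DeadbandFilter(threshold=1.0)
forwarded = [r for r in stream if f.process(r) is not None]
# only the significant changes (3 of 6 samples) cross the network
```

<para>Even this trivial policy halves the traffic in the example above; production-grade edge analytics would apply richer stream-processing rules following the same pattern.</para>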
<para>These benefits make edge computing suitable for specific classes of use cases in factories, including:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Large-scale distributed applications</emphasis>, typically applications that involve multiple plants or factories and collect and process streams from numerous distributed devices at large scale.</para></listitem>
<listitem><para><emphasis role="strong">Near real-time applications</emphasis>, which need to analyze data close to the field or even control CPS such as smart machines and industrial robots. A special class of such real-time applications involves edge analytics.</para></listitem>
</itemizedlist>
<para>As a result, the application of edge computing to factory automation is extremely promising, since it can support decentralized factory automation in a way that enables real-time interactions and analytics at large scale. FAR-EDGE research has explored the application of the edge computing paradigm to factory automation, through designing and implementing reference implementations in line with recent standards for edge computing in industrial automation. Note that FAR-EDGE was one of the first initiatives to research and experiment with edge computing on the manufacturing shopfloor, as relevant activities were in their infancy when the FAR-EDGE project was approved. The state of the art in factory automation based on edge computing has since evolved, however, and FAR-EDGE efforts take this evolution into account.</para>
</section>
<section class="lev1" id="sec3-2">
<title>3.2 FAR-EDGE Vision and Positioning</title>
<para>FAR-EDGE&#8217;s vision is to research and provide a proof-of-concept implementation of an <emphasis>edge computing platform for factory automation</emphasis>, which will prove edge computing&#8217;s potential to increase automation flexibility and lower automation costs without compromising production time and quality. The FAR-EDGE architecture is aligned with the IIC RA, while exploiting concepts from other RAs and standards such as the OpenFog RA and RAMI 4.0 (see below for more details). Hence, the project will provide one of the world&#8217;s first reference implementations of edge computing for factory automation. Within this scope, FAR-EDGE offers a host of functionalities that are not addressed by other implementations, such as IEC-61499-compliant automation and simulation.</para>
<para>Beyond its functional uniqueness, FAR-EDGE is also unique from a research perspective. In particular, the project is researching the applicability of disruptive KETs: <emphasis>distributed ledger technology (DLT)</emphasis> and <emphasis>smart contracts</emphasis> &#8211; better known under the collective label of <emphasis>Blockchain</emphasis>. The Blockchain concept, while well understood and thoroughly tested in mission-critical areas like digital currencies (e.g., Bitcoin, Ethereum), had never before been applied to industrial systems. FAR-EDGE aims at demonstrating how a pool of services built on a generic Blockchain platform can enable decentralized factory automation in an effective, reliable, scalable and secure way. In FAR-EDGE, such services are responsible for sharing process state and enforcing business rules across the computing nodes of a distributed system, thus permitting virtual automation and analytics processes that span multiple nodes &#8211; or, from a bottom-up perspective, autonomous nodes that cooperate toward a common goal.</para>
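<para>The role of such ledger services can be illustrated with a minimal sketch. The code below is a toy model only &#8211; names are hypothetical, and a real DLT adds consensus protocols, digital signatures and persistence: each node keeps a hash-chained log and accepts a state update only if a shared business rule (the &#8220;smart contract&#8221;) validates it, so that all replicas converge on the same process state.</para>

```python
import hashlib
import json

# Toy append-only ledger: every node applies the same validation rule (the
# "smart contract") before committing a state update, and chains block
# hashes so replicas can verify they hold the same history.
def block_hash(prev_hash, payload):
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

class LedgerNode:
    def __init__(self, rule):
        self.rule = rule      # shared business rule enforced on every update
        self.chain = []       # list of (hash, payload) entries
        self.state = {}       # replicated process state

    def propose(self, update):
        if not self.rule(self.state, update):
            return False      # the contract rejects the update
        prev = self.chain[-1][0] if self.chain else "0" * 64
        self.chain.append((block_hash(prev, update), update))
        self.state.update(update)
        return True

# Shared rule: a workstation may only claim a production order not yet taken.
rule = lambda state, upd: all(k not in state for k in upd)
nodes = [LedgerNode(rule) for _ in range(3)]
accepted = [n.propose({"order-42": "station-A"}) for n in nodes]
conflict = nodes[0].propose({"order-42": "station-B"})  # rejected everywhere
head_hashes = {n.chain[-1][0] for n in nodes}           # replicas agree
```

<para>The point of the sketch is the division of labour: the ledger shares state, while the rule &#8211; deployed identically on every node &#8211; enforces the business logic, which is exactly the pairing FAR-EDGE exploits for decentralized automation.</para>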
</section>
<section class="lev1" id="sec3-3">
<title>3.3 State of the Art in Reference Architectures</title>
<para>A <emphasis>reference architecture</emphasis> (RA) is often a synthesis of best practices having their roots in past experience. Sometimes it may represent a &#8220;vision&#8221;, i.e., a conceptual framework that aims more at shaping the future and improving on state-of-the-art design than at building systems faster and with lower risk. The most successful RAs &#8211; those that are known and used beyond the boundaries of their native ground &#8211; are those combining both approaches. Whatever the strategy, an RA is for teamwork: its major contribution to development is to set a common context, vocabulary and repository of patterns for all stakeholders.</para>
<para>In FAR-EDGE, where we explore the business value of applying innovative computing patterns to the smart factory, starting from an effective RA is of paramount importance. For this reason, the FAR-EDGE Reference Architecture was the very first outcome of the project&#8217;s platform development effort.</para>
<para>In our research, we considered some well-known and accepted <emphasis>generic RAs</emphasis> (see sub-section below) as sources of inspiration. The goal was twofold: on the one hand, to leverage valuable experience from large and respected communities; on the other hand, to be consistent and compatible with the mainstream evolution of the smart factory, e.g., Industrial IoT and Industry 4.0. At the end of this journey, we expect the FAR-EDGE RA to become an asset not only in the scope of the project (as the basis for the FAR-EDGE Platform&#8217;s design), but also in the much wider one of factory automation, where it may guide the design of ad-hoc solutions having edge computing as their main technology driver.</para>
<section class="lev2" id="sec3-3-1">
<title>3.3.1 Generic Reference Architectures</title>
<para>A generic RA is one that, while addressing a given field of technology, is not targeting any specific application, domain, industry or even (in one case) sector. Its value is mainly in communication: lowering the impedance of information flow within the development team and possibly also towards the general public. As such, it is basically an ontology and/or a mind mapping tool. However, as we will see further on in this analysis, sometimes the ambition of a generic RA is also to set a standard for runtime interoperability of systems and components, placing some constraints on implementation choices. Obviously, for this approach to make sense, it should be backed by a critical mass of solution providers, all willing to give up the vendor-lock-in competitive factor in exchange for the access to a wider market.</para>
<para>We have identified three generic RAs that have enough traction to influence the &#8220;technical DNA&#8221; of the FAR-EDGE Platform: RAMI 4.0, IIRA and OpenFog RA. In the following sub-sections, each of them is briefly analysed and, where applicable, elements that are relevant to FAR-EDGE are extracted for later reuse.</para>
</section>
<section class="lev2" id="sec3-3-2">
<title>3.3.2 RAMI 4.0</title>
<para>The Reference Architectural Model for Industrie 4.0 (RAMI 4.0)<footnote id="fn3_1" label="1"> <para>https://www.zvei.org/en/subjects/industry-4-0/</para></footnote> is a generic RA addressing the manufacturing sector. As its name clearly states, it is the outcome of Platform Industrie 4.0,<footnote id="fn3_2" label="2"> <para>http://www.plattform-i40.de/I40/Navigation/EN/Home/home.html</para></footnote> the German public&#8211;private initiative addressing the fourth industrial revolution, i.e., merging the digital, physical and biological worlds into CPS.</para>
<para>According to some experts [1], the expected benefits of the adoption of CPS in the factory are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>higher quality</para></listitem>
<listitem><para>more flexibility</para></listitem>
<listitem><para>higher productivity</para></listitem>
<listitem><para>standardization in development</para></listitem>
<listitem><para>products can be launched earlier</para></listitem>
<listitem><para>continuous benchmarking and improvement</para></listitem>
<listitem><para>global competition among strong businesses</para></listitem>
<listitem><para>new labour market opportunities</para></listitem>
<listitem><para>creation of appealing jobs at the intersection of mechanical engineering, automation and IT</para></listitem>
<listitem><para>new services and business models</para></listitem>
</itemizedlist>
<para>To ensure that all participants involved in discussions understand each other, RAMI 4.0 defines a 3D structure for mapping the elements of production systems in a standard way.</para>
<para>RAMI 4.0, however, is also a standard-setting effort. While still a work in (slow) progress at the time of writing [2], its roadmap includes the definition of a globally standardized communication architecture that should enable the plug-and-play of <emphasis>Things</emphasis> (e.g., field devices, connected factory tools and equipment, smart machines, etc.) into composite CPS. Currently, only the general concept of <emphasis>I4.0 Component</emphasis> has been introduced: any Thing that is wrapped inside an <emphasis>Administration Shell</emphasis>, which provides a standard interface for communication, control and management while hiding the internals of the actual physical object. Future work will identify standard languages for the exchange of information, define standard data and process models and include recommendations for implementation &#8211; communication protocols in the first place.</para>
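<para>The I4.0 Component idea can be sketched in a few lines of code. The example is purely illustrative &#8211; the real standard specifies OPC UA information models, not a Python API, and all names below are hypothetical: an Administration Shell exposes uniform property and operation access while hiding the vendor-specific interface of the wrapped Thing.</para>

```python
# Illustrative sketch of an I4.0 Component: a vendor-specific Thing wrapped
# in an Administration Shell that exposes a uniform facade. (Hypothetical
# API -- the standard mandates OPC UA interfaces, not this Python class.)
class LegacyDrill:
    """A Thing with a vendor-specific control API."""
    def spin_up(self, rpm):
        self.rpm = rpm

    def read_rpm(self):
        return self.rpm

class AdministrationShell:
    """Uniform facade: named properties to read, named operations to invoke."""
    def __init__(self, asset, properties, operations):
        self._asset = asset
        self._properties = properties   # property name -> getter function
        self._operations = operations   # operation name -> callable

    def get_property(self, name):
        return self._properties[name](self._asset)

    def invoke(self, name, *args):
        return self._operations[name](self._asset, *args)

drill = AdministrationShell(
    LegacyDrill(),
    properties={"speed": LegacyDrill.read_rpm},
    operations={"start": LegacyDrill.spin_up},
)
drill.invoke("start", 1500)   # callers never touch the vendor API directly
```

<para>Plug-and-play then amounts to registering a new shell: the composite CPS only ever sees the standard interface, whatever device sits behind it.</para>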
<para>With respect to the latter point, OPC UA is central to the RAMI 4.0 strategy. It is the successor of the highly popular (in Microsoft-based shopfloors) OPC machine-to-machine communication protocol for industrial automation. As opposed to OPC, OPC UA is an open, royalty-free, cross-platform standard and supports very complex information models. I4.0 Components will be required to adopt OPC UA as their interfacing mechanism, while also relying on several IEC standards (e.g., IEC 62832, IEC 61804) for information sharing.</para>
<para>RAMI 4.0 has gained significant traction in Germany and is also driving the discussion around Industry 4.0 solutions and platforms in Europe. In particular, its glossary and its 3D structure for element mapping are increasingly used as a common language in sector-specific projects (in particular, platform-building ones) and working groups. The FAR-EDGE RA adopts some of the RAMI 4.0 conceptual framework as its own, simplifying communication with external communities of developers and users.</para>
</section>
<section class="lev2" id="sec3-3-3">
<title>3.3.3 IIRA</title>
<para>The Industrial Internet Reference Architecture (IIRA)<footnote id="fn3_3" label="3"> <para>http://www.iiconsortium.org/IIRA.htm</para></footnote> has been developed and is actively maintained by the Industrial Internet Consortium (IIC), a global community of organizations (&gt;250 members, including IBM, Intel, Cisco, Samsung, Huawei, Microsoft, Oracle, SAP, Boeing, Siemens, Bosch and General Electric) committed to the wider and better adoption of the Internet of Things by industry at large. The IIRA, first published in 2015 and since evolved into version 1.8 (Jan 2017), is a standards-based architectural template and methodology for the design of Industrial Internet Systems (IIS). Being an RA, it provides an ontology of IIS and some architectural patterns, encouraging the reuse of common building blocks and promoting interoperability. It is worth noting that a collaboration between the IIC and Platform Industrie 4.0, with the purpose of harmonizing RAMI 4.0 and IIRA, has been announced.<footnote id="fn3_4" label="4"> <para>http://www.iiconsortium.org/iic-and-i40.htm &#8211; to date, no concrete outcomes of such collaboration have been published.</para></footnote></para>
<para>IIRA has four separate but interrelated <emphasis>viewpoints</emphasis>, defined by identifying the relevant stakeholders of IIoT use cases and determining the proper framing of concerns. These viewpoints are: business, usage, functional and implementation.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The <emphasis>business viewpoint</emphasis> attends to the concerns of the identification of stakeholders and their business vision, values and objectives. These concerns are of particular interest to decision-makers, product managers and system engineers.</para></listitem>
<listitem><para>The <emphasis>usage viewpoint</emphasis> addresses the concerns of expected system usage. It is typically represented as sequences of activities, involving human or logical users, through which the system delivers its intended functionality and ultimately achieves its fundamental capabilities.</para></listitem>
<listitem><para>The <emphasis>functional viewpoint</emphasis> focuses on the functional components in a system, their interrelation and structure, the interfaces and interactions between them and the relation and interactions of the system with external elements in the environment.</para></listitem>
<listitem><para>The <emphasis>implementation viewpoint</emphasis> deals with the technologies needed to implement functional components, their communication schemes and their life cycle procedures.</para></listitem>
</itemizedlist>
<para>In FAR-EDGE, which deals with platforms rather than solutions, the functional and implementation viewpoints are the most useful.</para>
<para>The functional viewpoint decomposes an IIS into functional domains, which are, following a bottom-up order, <emphasis>control</emphasis>, <emphasis>operations</emphasis>, <emphasis>information</emphasis>, <emphasis>application</emphasis> and <emphasis>business</emphasis>. Of particular interest in FAR-EDGE are the first three.</para>
<para>The <emphasis>control domain</emphasis> represents functions that are performed by industrial control systems: reading data from sensors, applying rules and logic and exercising control over the physical system through actuators. Both accuracy and resolution in timing are critical. Components implementing these functions are usually deployed in proximity to the physical systems they control, and may therefore be distributed.</para>
<para>The <emphasis>operations domain</emphasis> represents the functions for the provisioning, management, monitoring and optimization of the systems in the control domain.</para>
<para>The <emphasis>information domain</emphasis> represents the functions for gathering and analysing data to acquire high-level intelligence about the overall system. As opposed to their control domain counterparts, components implementing these functions have no timing constraints and are typically deployed in factory or corporate data centres, or even in the cloud as a service.</para>
<para>Overall, the functional viewpoint tells us that control, management and data flow in IIS are three separate concerns having very different non-functional requirements, so that implementation choices may also differ substantially.</para>
<para>The implementation viewpoint describes some well-established architectural patterns for IIS: the Three-tier, the Gateway-mediated Edge Connectivity and Management and the Layered Databus. They are of particular interest in FAR-EDGE, as they all deal with edge computing, although in different ways.</para>
<para>The <emphasis>Three-tier architectural pattern</emphasis> distributes concerns to separate but connected tiers: Edge, Platform and Enterprise. Each of them plays a specific role with respect to control and data flows. Consistent with the requirements stemming from the functional viewpoint, control functionality is positioned in the Edge Tier, i.e., in close proximity to the controlled systems, while data-related (information) and management (operations) services are part of the Platform. However, the IIRA document v1.8 also states that in real systems, some functions of the information domain may be implemented in or close to the edge tier, along with some application logic and rules, to enable <emphasis>intelligent edge computing</emphasis>. Interestingly enough, though, the opposite &#8211; edge computing as part of Platform functionality &#8211; is not contemplated by IIRA, probably because intelligent edge nodes (i.e., connected factory equipment with on-board computing capabilities) are deemed to be an OEM&#8217;s (Original Equipment Manufacturer&#8217;s) concern. However, there is a component in the IIRA diagram suggesting that such boundaries may be blurred: the Gateway, which is part of the Edge Tier, connects it to both the Platform and Enterprise tiers.</para>
<para>The Edge Gateway (EG) is in fact the focus point of another IIRA architectural pattern: the <emphasis>Gateway-mediated Edge Connectivity and Management</emphasis>. It allows for localizing operations and controls (edge analytics and computing). Its main benefit is in breaking down the complexity of the IIS, so that it may scale up in both numbers of managed assets and networking. The EG acts as an endpoint for the wide-area network while isolating the individual local networks of edge nodes. It may be used as a management point for devices and as an aggregation hub where some data processing and control logic is deployed.</para>
<para>The implementation viewpoint indeed provides some very relevant building blocks for the FAR-EDGE platform. What we see as a gap in the IIRA approach, up to this point, is the lack of such a block for addressing <emphasis>distributed computing</emphasis>, which is implied in the very notion of edge computing when used as a load-distribution technique for systems that are still centralized in their upper tiers. A partial answer to this question is given by the third and last IIRA architectural pattern: the <emphasis>Layered Databus</emphasis>. According to this design, an IIS can be partitioned into multiple horizontal layers that together define a hierarchy of scopes: machine, system, system of systems and Internet. Within each layer, components communicate with each other in a <emphasis>peer-to-peer</emphasis> (P2P) fashion, supported by a layer-specific databus. A databus is a logical connected space that implements a common data model, allowing interoperable communications between endpoints at that layer. For instance, a databus can be deployed within a smart machine to connect its internal sensors, actuators, controls and analytics. At the system level, another databus can be used for communications between different machines. At the system of systems level, still another databus can connect together a series of systems for coordinated control, monitoring and analysis.</para>
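<para>The Layered Databus idea can be made concrete with a minimal sketch (illustrative names only; real databus implementations such as DDS add discovery, QoS policies and a shared data model): endpoints on the same layer publish and subscribe to topics peer-to-peer, while buses at different layers remain isolated from one another.</para>

```python
from collections import defaultdict

# Minimal sketch of a layer-scoped databus: topic-based publish/subscribe
# among peers on one layer. A machine-level bus and a system-level bus are
# separate instances, so traffic on one never leaks into the other.
class Databus:
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self._subs[topic]:     # deliver peer-to-peer, no broker tiers
            cb(message)

machine_bus = Databus()   # scope: inside one smart machine
system_bus = Databus()    # scope: between machines on the line

received = []
machine_bus.subscribe("spindle/temp", received.append)
system_bus.subscribe("spindle/temp", lambda m: received.append(("line", m)))

machine_bus.publish("spindle/temp", 71.2)   # stays within the machine layer
```

<para>Escalating a value to the next layer up is then an explicit act &#8211; some component republishes it on the higher bus &#8211; which is precisely what keeps each scope small and interoperable.</para>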
<para>In FAR-EDGE, the concept of cross-node P2P communication is going to play a key role as the enabling technology for edge computing in the three functional domains of interest: control, operations and information.</para>
</section>
<section class="lev2" id="sec3-3-4">
<title>3.3.4 OpenFog RA</title>
<para>The OpenFog Consortium<footnote id="fn3_5" label="5"> <para>https://www.openfogconsortium.org/</para></footnote> is a public&#8211;private initiative, born in 2015, that bears several similarities to the IIC: both consortia count big players like IBM, Microsoft, Intel and Cisco among their founding members, and both use the ISO/IEC/IEEE 42010:2011 international standard<footnote id="fn3_6" label="6"> <para>https://www.iso.org/standard/50508.html</para></footnote> for communicating architecture descriptions to stakeholders. However, the OpenFog initiative is not constrained to any specific sector: it is a technology-oriented ecosystem that fosters the adoption of <emphasis>fog computing</emphasis> in order to solve the bandwidth, latency and communications challenges of IoT, AI, robotics and other advanced concepts in the digitized world. Fog computing is a term first introduced by Cisco, and is basically a synonym for edge computing<footnote id="fn3_7" label="7"> <para>The term conveys the concept of cloud computing moved to the ground level.</para></footnote>: both refer to the practice of moving computing and/or storage services towards the edge nodes of a networked system.</para>
<para>The OpenFog RA was first released at the beginning of 2017, and as such is the most recent contribution to the mainstream world of IoT-related architectures. The technical paper that describes it<footnote id="fn3_8" label="8"> <para>https://www.openfogconsortium.org/ra/</para></footnote> is quite rich in content. As in IIRA, <emphasis>viewpoints</emphasis> are used to frame similar concerns, which in OpenFog RA are restricted to <emphasis>functional</emphasis> and <emphasis>deployment</emphasis> (the latter being roughly equivalent to IIRA&#8217;s <emphasis>implementation</emphasis> viewpoint). However, these topics are not discussed in much detail. In particular, the functional viewpoint is little more than a placeholder for example use cases (one of which is provided as an annex to the document), while the deployment viewpoint just skims the surface, introducing the concept of multi-tier systems. With respect to this, however, a very interesting example is made, which shows how the OpenFog approach to deployment is close to IIRA&#8217;s Layered Databus pattern: it is a hierarchy of layers where nodes on the same level can interact with each other &#8211; in what is called &#8220;east&#8211;west communication&#8221; &#8211; without the mediation of higher-level entities. The layers themselves, although more relevant to a smart city context, are quite consistent with the IIRA ones. The means by which P2P communication should be implemented are not specified (no databus, in this case).</para>
<para>Besides viewpoints, two additional kinds of frames are used to organize concepts: <emphasis>views</emphasis> and <emphasis>perspectives</emphasis>. The former include aspects (i.e., <emphasis>node</emphasis>, <emphasis>system</emphasis> and <emphasis>software</emphasis>) that have a clear positioning in the structure of a system, and are further articulated into sub-aspects (e.g., the node view includes security, management, network, accelerators, compute, storage, protocol abstraction and sensors/actuators); the latter are crosscutting concerns (e.g., performance, security, etc.).</para>
<para>Overall, the OpenFog RA gives the impression of being an ambitious exercise, having the main goal of creating a universal conceptual framework that is at the same time generic, comprehensive and detailed. The mapping of a large-scale, complex and critical use case (airport visual security), as provided in the document, is impressive, but this comes as no surprise because that was obviously the case study on which the RA itself was fine-tuned. The reverse path &#8211; designing a new system using OpenFog RA as the blueprint &#8211; appears to be a daunting task, in particular in industrial scenarios where a very pragmatic approach is the norm. In FAR-EDGE, the value that we see in OpenFog RA is &#8211; again, as it was also introduced in IIRA &#8211; the concept of a hierarchy of geo-scoped layers that use P2P communication internally.</para>
</section>
</section>
<section class="lev1" id="sec3-4">
<title>3.4 FAR-EDGE Reference Architecture</title>
<para>The FAR-EDGE Reference Architecture is the conceptual framework that has driven the design and the implementation of the FAR-EDGE Platform. As an RA, its first goal is communication: providing a terse representation of concepts, roles, structure and behaviour of the system under analysis both internally for the benefit of team members and externally for the sake of dissemination and ecosystem-building. There is a second goal, too, which is reuse: exploiting best practices and lessons learned in similar contexts by the global community of system architects.</para>
<para>The FAR-EDGE RA is described from two architectural viewpoints: the <emphasis>functional viewpoint</emphasis> and the <emphasis>structural viewpoint</emphasis>. In the sections that follow, they are described in detail. A partial <emphasis>implementation viewpoint</emphasis> is also provided further on, with its scope limited to the Ledger Tier. <link linkend="F3-1">Figure <xref linkend="F3-1" remap="3.1"/></link> provides an overall architecture representation that includes all elements.</para>
<section class="lev2" id="sec3-4-1">
<title>3.4.1 Functional Viewpoint</title>
<para>According to the FAR-EDGE RA, the functionality of a factory automation platform can be decomposed into three high-level <emphasis>Functional Domains</emphasis> &#8211; <emphasis>Automation</emphasis>, <emphasis>Analytics</emphasis> and <emphasis>Simulation</emphasis> &#8211; and four <emphasis>Crosscutting</emphasis> <emphasis>(XC) Functions</emphasis> &#8211; <emphasis>Management</emphasis>, <emphasis>Security</emphasis>, <emphasis>Digital Models</emphasis> and <emphasis>Field Abstraction &amp; Data Routing</emphasis>. To better clarify the scope of such topics, we have tried to map them to similar IIRA concepts. However, the reader should be aware that the overall scope of the IIRA is wider, as it aims at modelling entire Industrial Internet Systems, while the FAR-EDGE RA is more focused and detailed: oftentimes, concept mapping is partial or even impossible.</para>
<fig id="F3-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-1">Figure <xref linkend="F3-1" remap="3.1"/></link></label>
<caption><para>FAR-EDGE RA overall view.</para></caption>
<graphic xlink:href="graphics/ch03_fig001.jpg"/>
</fig>
<para>Functional Domains and XC Functions are orthogonal to structural Tiers (see next section): the implementation of a given functionality may &#8211; but is not required to &#8211; span multiple Tiers, so that in the overall architecture representation (<link linkend="F3-1">Figure <xref linkend="F3-1" remap="3.1"/></link>), Functional Domains appear as vertical lanes drawn across horizontal layers. In <link linkend="F3-2">Figure <xref linkend="F3-2" remap="3.2"/></link> the relationship between Functional Domains, their users and the factory environment is highlighted by arrows showing the flow of data and of control.</para>
<fig id="F3-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-2">Figure <xref linkend="F3-2" remap="3.2"/></link></label>
<caption><para>FAR-EDGE RA Functional Domains.</para></caption>
<graphic xlink:href="graphics/ch03_fig002.jpg"/>
</fig>
<section class="lev3" id="sec3-4-1-1">
<title>3.4.1.1 Automation domain</title>
<para>The FAR-EDGE Automation domain includes functionalities supporting automated control and automated configuration of physical production processes. While the meaning of &#8220;control&#8221; in this context is straightforward, &#8220;configuration&#8221; is worth a few additional words. Automated configuration is the enabler of plug-and-play factory equipment &#8211; better known as <emphasis>plug-and-produce</emphasis> &#8211; which in turn is a key technology for mass-customization, as it allows a faster and less expensive adjustment of the production process to cope with very dynamic market demand. The Automation domain requires a bidirectional monitoring/control communication channel with the Field, typically with low bandwidth but very strict timing requirements (tight control loop). In some advanced scenarios, Automation is controlled &#8211; to some extent &#8211; by the results of Analytics and/or Simulation (see below for more details on this topic).</para>
<para>The Automation domain partially maps to the Control domain of the IIRA. The main difference is that Control is also responsible for decoupling the real world from the digital world, as it includes the functionality for Field communication, entity abstraction, modelling and asset management. In other words, Control mediates all Field access from other domains like Information, Operations, etc. In the FAR-EDGE RA, instead, the Automation domain is focused only on its main role, while auxiliary concerns are dealt with by the Digital Models and Field Abstraction &amp; Data Routing XC Functions.</para>
</section>
<section class="lev3" id="sec3-4-1-2">
<title>3.4.1.2 Analytics domain</title>
<para>The FAR-EDGE Analytics domain includes functionalities for gathering and processing Field data for a better understanding of production processes, i.e., a factory-focused business intelligence. This typically requires a high-bandwidth Field communication channel, as the volume of information that needs to be transferred in a given time unit may be substantial. On the other hand, channel latency tends to be less critical than in the Automation scenario. The Analytics domain provides intelligence to its users, but these are not necessarily limited to humans or vertical applications (e.g., a predictive maintenance solution): the Automation and Simulation domains, if properly configured, can both make direct use of the outcome of data analysis algorithms. In the case of Automation, the behaviour of a workflow might change in response to changes detected in the controlled process, e.g., a process drift caused by the progressive wear of machinery or by the quality of assembly components being lower than usual. In the case of Simulation, data analysis can be used to update the parameters of a digital model (see the following section).</para>
<para>The Analytics domain matches the Information domain of the IIRA perfectly, except that the latter receives data from the Field through the mediation of Control functionalities.</para>
</section>
<section class="lev3" id="sec3-4-1-3">
<title>3.4.1.3 Simulation domain</title>
<para>The FAR-EDGE Simulation domain includes functionalities for simulating the behaviour of physical production processes, for the purpose of optimization or of testing what-if scenarios at minimal cost and risk and without any impact on regular shopfloor activities. Simulation requires digital models of plants and processes to be in sync with the real-world objects they represent. As the real world is subject to change, models should reflect those changes. For instance, the model of a machine assumes a given value of electric power/energy consumption, but the actual values will diverge as the real machine wears down. To detect this gap and correct the model accordingly, raw data from the Field (directly) or complex analysis algorithms (from Analytics) can be used. However, it is important to point out that model synchronization functionality is <emphasis>not</emphasis> part of the Simulation domain, which acts just as a consumer of the Digital Models XC Functions.</para>
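<para>The model-synchronization idea can be sketched as follows. The parameter names are hypothetical and the sketch sits conceptually in the Digital Models XC Functions, not in any actual FAR-EDGE module: each field observation is blended into the modelled value with an exponentially weighted moving average, so that simulations keep using a power figure that tracks the wearing machine.</para>

```python
# Sketch of digital-model synchronization: the modelled power draw of a
# machine is nudged toward observed field data, so simulations consume a
# value that follows real-world wear. (Hypothetical parameter names.)
class MachineModel:
    def __init__(self, nominal_power_kw, alpha=0.3):
        self.power_kw = nominal_power_kw   # value the simulator reads
        self.alpha = alpha                 # smoothing factor, 0 < alpha <= 1

    def sync(self, observed_kw):
        """Blend one field observation into the model (EWMA update)."""
        self.power_kw += self.alpha * (observed_kw - self.power_kw)

model = MachineModel(nominal_power_kw=10.0)
for observed in [10.4, 10.9, 11.2]:        # draw creeps up as the machine wears
    model.sync(observed)
# model.power_kw has drifted above its nominal 10.0 kW, toward the field data
```

<para>Whether the observations come straight from the Field or from an Analytics pipeline is exactly the design choice discussed above; the update rule is indifferent to the source.</para>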
<para>There is no mapping between the Simulation domain and any functional domain of the IIRA: in the latter, simulation support is not considered as an integral part of the infrastructure.</para>
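<para>A minimal sketch of the model-update loop described above, assuming the model parameter is a single scalar (e.g., energy consumption) and using plain exponential smoothing; the function name and the smoothing factor are hypothetical:</para>

```python
def update_model_parameter(model_value, field_measurements, alpha=0.2):
    """Exponentially smooth raw Field measurements (or Analytics
    outputs) into the value assumed by the digital model, so that the
    model tracks slow real-world changes such as machine wear."""
    for measurement in field_measurements:
        model_value = (1 - alpha) * model_value + alpha * measurement
    return model_value
```

<para>Recall that in the FAR-EDGE RA this synchronization logic belongs to the Digital Models XC Functions, not to the Simulation domain itself.</para>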
</section>
<section class="lev3" id="sec3-4-1-4">
<title>3.4.1.4 Crosscutting functions</title>
<para>Crosscutting Functions address, as the name suggests, concerns that are common across the system. Their implementation tends to be pervasive, affecting several Functional Domains and Tiers. They are briefly listed and described here.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Management:</emphasis> Low-level functions for monitoring and commissioning/decommissioning of individual system modules, i.e., factory equipment and IT components that expose a management interface. They partially correspond to IIRA&#8217;s Operations functional domain, excluding its higher-level functions like diagnostics, prognostics and optimization.</para></listitem>
<listitem><para><emphasis role="strong">Security:</emphasis> Functions securing the system against the unruly behaviour of its users and of connected systems. These include digital identity management and authentication, access control policy management and enforcement, communication and data encryption. They partially correspond to the Trustworthiness subset of System Characteristics from IIRA.</para></listitem>
<listitem><para><emphasis role="strong">Digital Models:</emphasis> Functions for the management of digital models and their synchronization with the real-world entities they represent. Digital models are a shared asset, as they may be used as the basis for automated configuration, simulation and field abstraction, e.g., semantic interoperability of heterogeneous field systems. They correspond to the Modeling and Asset Management layers of IIRA&#8217;s Control functional domain.</para></listitem>
<listitem><para><emphasis role="strong">Field Abstraction &amp; Data Routing:</emphasis> Functions that ensure the connectivity of business logic (FAR-EDGE RA Functional Domains) to the Field, abstracting away the technical details, like device discovery and communication protocols. Data routing refers to the capability of establishing direct producer&#8211;consumer channels on demand, optimized for unidirectional massive data streaming, e.g., for feeding Analytics. They correspond to the Communication and Entity Abstraction layers of IIRA&#8217;s Control functional domain.</para></listitem>
</itemizedlist>
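<para>The producer&#8211;consumer channels of Field Abstraction &amp; Data Routing can be pictured, at the simplest possible level, as a toy in-process router. Class and method names are hypothetical; a real implementation would sit on top of device discovery and protocol adapters.</para>

```python
from collections import defaultdict

class DataRouter:
    """Toy data-routing function: consumers subscribe to a named
    channel, producers publish readings, and the router fans each
    reading out to all subscribed consumers."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        # e.g., an Analytics component registering interest in a stream
        self._subscribers[channel].append(callback)

    def publish(self, channel, reading):
        # e.g., a Field device pushing a new measurement
        for callback in self._subscribers[channel]:
            callback(reading)
```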
</section>
</section>
<section class="lev2" id="sec3-4-2">
<title>3.4.2 Structural Viewpoint</title>
<para>The FAR-EDGE RA uses two classes of concepts for describing the structure of a system: <emphasis>Scopes</emphasis> and <emphasis>Tiers</emphasis>.</para>
<para>Scopes are very simple and straightforward: they define a coarse mapping of system elements to either the factory &#8211; <emphasis>Plant Scope</emphasis> &#8211; or the broader world of corporate IT &#8211; <emphasis>Enterprise Ecosystem Scope</emphasis>. Examples of elements in Plant Scope are machinery, field devices, workstations, SCADA and MES systems, and any software running in the factory data centre. To the Enterprise Ecosystem Scope belong ERP and PLM systems and any application or service shared across multiple factories or even companies, e.g., supply chain members.</para>
<para>Tiers are a more detailed and technically oriented classification of deployment concerns: they can be easily mapped to scopes, but they provide more insight into the relationship between system components. Not surprisingly, since FAR-EDGE is inspired by edge and distributed computing paradigms, this kind of classification is quite similar to the OpenFog RA&#8217;s deployment viewpoint, except for the fact that FAR-EDGE Tiers are industry-oriented whereas OpenFog ones are not. That said, FAR-EDGE Tiers are one of the most innovative traits of its RA, and they are individually described here.</para>
<section class="lev3" id="sec3-4-2-1">
<title>3.4.2.1 Field Tier</title>
<para>The Field Tier (see <link linkend="F3-3">Figure <xref linkend="F3-3" remap="3.3"/></link>) is the bottom layer of the FAR-EDGE RA and is populated by <emphasis>Edge Nodes (EN)</emphasis>: any kind of device that is connected to the <emphasis>digital world</emphasis> on one side and to the <emphasis>real world</emphasis> on the other. ENs can have embedded intelligence (e.g., a smart machine) or not (e.g., an IoT sensor or actuator); the FAR-EDGE RA honours this difference: <emphasis>Smart Objects</emphasis> are ENs with on-board computing capabilities, and <emphasis>Connected Devices</emphasis> are those without. The Smart Object is where local control logic runs: it is a semiautonomous entity that does not need to interact too frequently with the upper layers of the system.</para>
<para>The Field is also populated by entities of the real world, i.e., those physical elements of production processes that are not directly connected to the network, and as such are not considered as ENs: <emphasis>Things</emphasis>, <emphasis>People</emphasis> and <emphasis>Environments</emphasis>. These are represented in the digital world by some kind of EN &#8220;wrapper&#8221;. For instance, room temperature (Environment) is measured by an IoT sensor (Connected Device), the proximity of a worker (People) to a physical checkpoint location is published by an RFID wearable and detected by an RFID Gate (Connected Device) and a conveyor belt (Thing) is operated by a PLC (Smart Object).</para>
<fig id="F3-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-3">Figure <xref linkend="F3-3" remap="3.3"/></link></label>
<caption><para>FAR-EDGE RA Field Tier.</para></caption>
<graphic xlink:href="graphics/ch03_fig003.jpg"/>
</fig>
<para>The Field Tier is in Plant Scope. Individual ENs are connected to the digital world in the upper Tiers either directly by means of the shopfloor&#8217;s LAN, or indirectly through some special-purpose local network (e.g., WSN) that is bridged to the former.</para>
<para>From the RAMI 4.0 perspective, the FAR-EDGE Field Tier corresponds to the <emphasis role="strong">Field Device</emphasis> and <emphasis role="strong">Control Device</emphasis> levels on the <emphasis role="strong">Hierarchy</emphasis> axis (IEC-62264/IEC-61512), while the entities there contained are positioned across the <emphasis role="strong">Asset</emphasis> and <emphasis role="strong">Integration Layers</emphasis>.</para>
</section>
<section class="lev3" id="sec3-4-2-2">
<title>3.4.2.2 Gateway Tier</title>
<para>The Gateway Tier (see <link linkend="F3-4">Figure <xref linkend="F3-4" remap="3.4"/></link>) is the core of the FAR-EDGE RA. It hosts those parts of Functional Domains and XC Functions that can leverage the edge computing model, i.e., software designed to run on multiple, distributed computing nodes placed close to the field, which may include resource-constrained nodes. The Gateway Tier is populated by <emphasis>Edge Gateways (EG)</emphasis>: computing devices that act as a digital world gateway to the real world of the Field. These machines are typically more powerful than the average intelligent EN (e.g., blade servers) and are connected to a fast LAN. Strategically positioned close to physical systems, the EG can execute <emphasis>Edge Processes</emphasis>: time- and bandwidth-critical functionality having <emphasis>local scope</emphasis>. For instance, the orchestration of a complex physical process that is monitored and operated by a number of sensors, actuators (Connected Devices) and embedded controllers (Smart Objects); or the real-time analysis of a huge volume of live data that is streamed from a nearby Field source.</para>
<fig id="F3-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-4">Figure <xref linkend="F3-4" remap="3.4"/></link></label>
<caption><para>FAR-EDGE RA Gateway Tier.</para></caption>
<graphic xlink:href="graphics/ch03_fig004.jpg"/>
</fig>
<para>By itself, the Gateway Tier does not introduce anything new: deploying computing power and data storage in close proximity to where it is actually used is a standard best practice in the industry, which helps reduce network latency and traffic. However, this technique basically requires that the scope of individual subsystems is narrow (e.g., a single work station). If instead the critical functionality applies to a wider scenario (e.g., an entire plant or enterprise), it must be either deployed at a higher level (e.g., the Cloud) &#8211; thus losing all benefits of proximity &#8211; or run as multiple parallel instances, each focused on its own narrow scope. In the latter case, new problems may arise: keeping global variables in-sync across all local instances of a given process, reaching a consensus among local instances on a &#8220;common truth&#8221;, collecting aggregated results from independent copies of a data analytics algorithm, etc. These problems are well known: the need for peer nodes of a distributed system to mutually exchange information is recognized by the OpenFog RA. The innovative approach in FAR-EDGE is to define a specific system layer &#8211; the Ledger Tier &#8211; that is responsible for the implementation of such mechanisms and to guarantee an appropriate Quality of Service level.</para>
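<para>The &#8220;collecting aggregated results&#8221; problem mentioned above can be illustrated with mergeable partial aggregates: each local instance reports a (count, sum, min, max) tuple, and a global result is rebuilt from the parts. The dictionary layout below is an assumption made for the example, not a FAR-EDGE interface.</para>

```python
def merge_partial_aggregates(partials):
    """Combine partial results produced independently by multiple
    edge-scoped instances into one globally scoped aggregate."""
    count = sum(p["count"] for p in partials)
    total = sum(p["sum"] for p in partials)
    return {
        "count": count,
        "sum": total,
        "mean": total / count if count else None,
        "min": min(p["min"] for p in partials),
        "max": max(p["max"] for p in partials),
    }
```

<para>Aggregates of this shape can be merged in any order, which is exactly what makes them suitable for independent parallel instances.</para>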
<para>The Gateway Tier is in Plant Scope, located above the Field Tier and below the Cloud Tier &#8211; in this context, we do not consider the Ledger Tier as part of the north-south continuum, due to its very specific role of support layer. Individual EGs are connected with each other and with the north side of the system &#8211; i.e., the globally scoped digital world in the Cloud Tier &#8211; by means of the factory LAN, and to the south side through the shopfloor LAN.</para>
<para>From the RAMI 4.0 perspective, the FAR-EDGE Gateway Tier corresponds to the <emphasis role="strong">Station</emphasis> and <emphasis role="strong">Work Centre</emphasis> levels on the <emphasis role="strong">Hierarchy</emphasis> axis (IEC-62264/IEC-61512), while the EGs there contained are positioned across the <emphasis role="strong">Asset</emphasis>, <emphasis role="strong">Integration</emphasis> and <emphasis role="strong">Communication Layers</emphasis>. Edge Processes running on EGs, however, map to the <emphasis role="strong">Information</emphasis> and <emphasis role="strong">Functional Layers</emphasis>.</para>
</section>
<section class="lev3" id="sec3-4-2-3">
<title>3.4.2.3 Ledger Tier</title>
<para>The Ledger Tier (see <link linkend="F3-5">Figure <xref linkend="F3-5" remap="3.5"/></link>) is a complete abstraction: it does not correspond to any physical deployment environment, and even the entities that it &#8220;contains&#8221; are conventional abstractions. Such entities are <emphasis>Ledger Services</emphasis>, which implement decentralized business logic as <emphasis>smart contracts</emphasis> executed on a Blockchain platform (see next section for an in-depth technical analysis).</para>
<fig id="F3-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-5">Figure <xref linkend="F3-5" remap="3.5"/></link></label>
<caption><para>FAR-EDGE RA Ledger Tier.</para></caption>
<graphic xlink:href="graphics/ch03_fig005.jpg"/>
</fig>
<para>Ledger Services are transaction-oriented: each service call that needs to modify the shared state of a system must be evaluated and approved by <emphasis>Peer Nodes</emphasis> before taking effect. Similarly to &#8220;regular&#8221; services, Ledger Services are implemented as executable code; however, they are not actually executed on any specific computing node: each service call is executed in parallel by all Peer Nodes that happen to be online at the moment, which then need to reach a consensus on its validity. Most importantly, even the executable code of Ledger Services can be deployed and updated online by means of a distributed ledger transaction, just like any other state change.</para>
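<para>The approval step can be pictured as a vote among online peers, each applying the agreed business logic to the proposed transaction. This is a deliberately naive sketch (real consensus protocols are far more involved, and all names here are hypothetical):</para>

```python
def approve_transaction(tx, peers, quorum=0.5):
    """Each online peer evaluates the transaction with the shared
    business logic; the transaction takes effect only if more than
    a `quorum` fraction of the votes approve it."""
    votes = [peer_validate(tx) for peer_validate in peers]
    return sum(votes) > len(votes) * quorum
```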
<para>Ledger Services implement the part of Functional Domains and/or XC Functions that enables the edge computing model, by providing support for their Edge Service counterparts. For example, the Analytics Functional Domain may define a local analytics function (Edge Service) that must be executed in parallel on several EGs, and also a corresponding service call (Ledger Service) that is invoked by the former each time new or updated local results become available, so that all results can converge into an aggregated dataset. In this case, the aggregation logic is included in the Ledger Service. Another use case may come from the Automation Functional Domain, demonstrating how the Ledger Tier can also be leveraged from the Field: a smart machine with embedded <emphasis>plug-and-produce</emphasis> functionality (Smart Object) can ask permission to join the system by making a service call and then, having received the green light, can dynamically deploy its own specific Ledger Service for publishing its current state and/or receiving external high-level commands.</para>
<para>The Ledger Tier lies across the Plant and the Enterprise Ecosystem Scopes, as it can provide support to any Tier. The physical location of Peer Nodes, which implement smart contracts and the distributed ledger, is not defined by the FAR-EDGE RA as it depends on implementation choices. For instance, some implementations may use EGs and even some of the more capable ENs in the role of Peer Nodes; others may separate concerns, relying on specialized computing nodes that are deployed on the Cloud.</para>
<para>From the RAMI 4.0 perspective, the FAR-EDGE Ledger Tier corresponds to the <emphasis role="strong">Work Centre</emphasis>, <emphasis role="strong">Enterprise</emphasis> and <emphasis role="strong">Connected World</emphasis> levels on the <emphasis role="strong">Hierarchy</emphasis> axis (IEC-62264/IEC-61512), while the Ledger Services there contained are positioned across the <emphasis role="strong">Information</emphasis> and <emphasis role="strong">Functional Layers</emphasis>.</para>
</section>
<section class="lev3" id="sec3-4-2-4">
<title>3.4.2.4 Cloud Tier</title>
<para>The Cloud Tier (see <link linkend="F3-6">Figure <xref linkend="F3-6" remap="3.6"/></link>) is the top layer of the FAR-EDGE RA, and also the simplest and most &#8220;traditional&#8221; one. It is populated by <emphasis>Cloud Servers (CS)</emphasis>: powerful computing machines, sometimes configured as clusters, that are connected to a fast LAN internally to their hosting data centre, and made accessible from the outside world by means of a corporate LAN or the Internet. On CSs runs that part of the business logic of Functional Domains and XC Functions that benefits from having the widest of scopes over production processes, and can deal with the downside of being physically deployed far away from them. This includes the planning, monitoring and management of entire factories, enterprises and supply chains (e.g., MES, ERP and SCM systems). The Cloud Tier is populated by <emphasis>Cloud Services</emphasis> and <emphasis>Applications</emphasis>. The difference between them is straightforward: Cloud Services implement specialized functions that are provided as individual API calls to Applications, which instead &#8220;package&#8221; a wider set of related operations that are relevant to some higher-level goal and often &#8211; but not necessarily &#8211; expose an interactive human interface.</para>
<fig id="F3-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-6">Figure <xref linkend="F3-6" remap="3.6"/></link></label>
<caption><para>FAR-EDGE Cloud Tier.</para></caption>
<graphic xlink:href="graphics/ch03_fig006.jpg"/>
</fig>
<para>The Cloud Tier is in Enterprise Ecosystem scope. The &#8220;Cloud&#8221; term in this context implies that Cloud Services and Applications are visible from all Tiers, wherever located. It does <emphasis>not</emphasis> imply that CSs should be actually hosted on some commercial ISP&#8217;s infrastructure. More often, in particular in large enterprises, the Cloud Tier corresponds to one or more corporate data centres (private cloud), ensuring that the entire system is fully under the control of its owner.</para>
<para>From the RAMI 4.0 perspective, the FAR-EDGE Cloud Tier corresponds to the <emphasis role="strong">Work Centre</emphasis>, <emphasis role="strong">Enterprise</emphasis> and <emphasis role="strong">Connected World</emphasis> levels on the <emphasis role="strong">Hierarchy</emphasis> axis (IEC-62264/IEC-61512), while the Cloud Services and Applications there contained are positioned across the <emphasis role="strong">Information</emphasis>, <emphasis role="strong">Functional</emphasis> and <emphasis role="strong">Business Layers</emphasis>.</para>
</section>
</section></section>
<section class="lev1" id="sec3-5">
<title>3.5 Key Enabling Technologies for Decentralization</title>
<para>In this section, our main concern is the use of Blockchain and smart contracts as the key enabling technologies of Ledger Services (see the Ledger Tier section above). In FAR-EDGE, the baseline Blockchain platform is an off-the-shelf product, which is enriched by application-specific smart contract software. That said, there are some Blockchain-related basic issues that we need to account for.</para>
<section class="lev2" id="sec3-5-1">
<title>3.5.1 Blockchain Issues</title>
<para>For those familiar with the technology, the main question is: how can a Blockchain fit industrial automation scenarios? According to conventional wisdom, Blockchains are slow and cumbersome systems with limited scalability and an aversion to data-intensive applications. Nevertheless, while this vision has solid roots in reality, in the context of smart factories, these shortcomings are not as relevant as they may seem. In order to substantiate this claim, though, we first need to explain some key points of the technology.</para>
<para>First and foremost, the Blockchain is a log of all transactions (i.e., state changes) executed in the system. The log, which is basically a witness of past and current system states, is replicated and kept in-sync across multiple nodes. All nodes are peers, so that no &#8220;master node&#8221; or &#8220;master copy&#8221; of the log exists anywhere at any time. Internally, the log is a linear sequence of records (i.e., <emphasis>blocks</emphasis> containing transactions) that are individually immutable and time-stamped. The sequence itself can only be modified by appending new records at the end. The integrity of both records and sequence is protected by means of strong cryptographic algorithms [3]. Moreover, all records must be approved by consensus among peers, using some sort of <emphasis>Byzantine Fault Tolerance (BFT)</emphasis> mechanism as a guarantee that an agreement on effective system state can always be reached, even if some peers are unavailable or misbehaving (in good faith or for malicious purposes) [4, 5].</para>
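<para>The hash-linked structure of the log can be reproduced in a few lines. The sketch below uses SHA-256 like most real platforms, but the block layout and function names are simplifications made for illustration:</para>

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Build an immutable, time-stamped record that is linked to its
    predecessor through a cryptographic hash."""
    header = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,
        "transactions": transactions,
    }
    digest = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()
    ).hexdigest()
    return {**header, "hash": digest}

def verify_chain(chain):
    """Recompute every hash and check the backward links: tampering
    with any past block breaks verification from that point on."""
    for i, block in enumerate(chain):
        header = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(header, sort_keys=True).encode()
        ).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

<para>Appending is the only legal operation: rewriting an old block changes its hash and invalidates every subsequent link.</para>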
<para>The process described above is all about trust: the consensus protocol guarantees that all approved transactions conform to the business logic that peers have agreed on, while the log provides irrefutable evidence of transactions. For this to work in a zero-trust environment, where peers do not know (let alone trust) each other and are not subject to a higher authority, there is yet another mechanism in place: an economic incentive that rewards &#8220;proper&#8221; behaviour and makes the cost of cheating much higher than the profit. Given that the whole system must be self-contained and autonomous, such an incentive is based on native digital money: a <emphasis>cryptocurrency</emphasis>. This closes the loop: all public Blockchain networks need <emphasis>cryptoeconomics</emphasis> to make their BFT mechanism work. For some of them (e.g., Bitcoin), the cryptocurrency itself is the main goal of the system: transactions are only used to exchange value between users. Other systems (e.g., Ethereum) are much more flexible, as we will see further on. That said, cryptocurrencies are problematic for many reasons, including regulatory compliance, and hinder the adoption of the Blockchain in the corporate world.</para>
<para>Another key point of Blockchain technology that is worth mentioning is the problem of <emphasis>transaction finality</emphasis>. Most BFT implementations rely on <emphasis>forks</emphasis> to resolve conflicts between peer nodes: when two incompatible opinions on the validity of some transaction exist, the log is split into two branches, each corresponding to one alternate vision of reality, i.e., of system state. The other nodes of the network will then have to choose which branch is the valid one, and will do this by appending their new blocks to the &#8220;right&#8221; branch only. Over time, consensus will coalesce on one branch (the one having more new blocks appended), and the losing branch will be abandoned. While this scheme is indeed effective for achieving BFT in public networks, it has one important consequence: there is no absolute guarantee that a committed transaction will stay so, because it may be deemed invalid <emphasis>after</emphasis> it is written to the log. In other words, it may appear only on the &#8220;bad&#8221; branch of a fork and be reverted when the conflict is resolved. Clearly enough, this behaviour of the Blockchain is not acceptable in scenarios where a committed transaction has side effects on other systems.</para>
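<para>The &#8220;longest branch wins&#8221; rule behind fork resolution can be sketched as follows. Branches are modelled as flat transaction lists from the fork point, which is a strong simplification; the function name is hypothetical.</para>

```python
def resolve_fork(branches):
    """Pick the branch with the most appended blocks as the valid one;
    transactions that appear only on losing branches are reverted,
    which is why committed transactions lack absolute finality."""
    winner = max(branches, key=len)
    reverted = [tx for branch in branches if branch is not winner
                for tx in branch if tx not in winner]
    return winner, reverted
```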
<para>This is how first-generation Blockchains work. For all these reasons, public Blockchains are, at least to date, extremely inefficient for common online transaction processing (OLTP) tasks. This is most unfortunate, because second-generation platforms like Ethereum have introduced the smart contract concept. Smart contracts were initially conceived as a way for users to define their custom business logic for transactions, i.e., making the Blockchain &#8220;smarter&#8221; by extending or even replacing the built-in logic of the platform. It then became clear that smart contracts, if properly leveraged, could also turn a Blockchain into a distributed computing platform with unlimited potential. However, distributed applications would still have to deal with the scalability, responsiveness and transaction finality limitations of the underlying BFT engine, which significantly limits the range of possible use cases.</para>
<para>To tackle this problem, the developer community is currently treading two separate paths: upgrading the BFT architecture on the one hand and relaxing functional requirements on the other. The former approach is ambitious but slow and difficult: it is followed by a third generation of Blockchain platforms that are proposing innovative solutions, although transaction finality still appears to be an open point nearly everywhere. The latter is much easier: if we can assume some limited degree of trust between parties, we can radically simplify the BFT architecture and thus remove the worst bottlenecks. From this reasoning, an entirely new species was born in recent years: <emphasis>permissioned</emphasis> Blockchains. Given their simpler architecture, commercial-grade permissioned Blockchains are already available today (e.g., Hyperledger, Corda), as opposed to third-generation ones (e.g., EOS, NEO) which are still experimental.</para>
</section>
<section class="lev2" id="sec3-5-2">
<title>3.5.2 Permissioned Blockchains</title>
<para>Permissioned Blockchains are second-generation architectures that do not support anonymous nodes and do not rely on cryptoeconomics. Basically, they are meant to make the power of Blockchain and smart contracts available to the enterprise, at least to some extent. Their BFT is still a decentralized process executed by peer nodes; however, the process runs under the supervision of a central authority. This means that all nodes must have a strong digital identity (no anonymous parties) and be trusted by the authority in order to join the system. Trust, and thus access to the Blockchain, can be revoked at any time. The BFT protocol can then rely on some basic assumptions and perform much faster, narrowing the distance from OLTP standards in terms of both responsiveness and throughput. Some BFT implementations also support final transactions, as consensus on transaction validity can be reached in near-real-time <emphasis>before</emphasis> anything is written to the log.</para>
<para>The key point of permissioned Blockchains is that they are only partially decentralized, leaving governance and administration roles in the hands of a leading entity &#8211; be it a single organization or a consortium. This aspect is a boon for enterprise adoption, for obvious reasons. Typically, these networks are also much smaller than public ones, with the positive side effect of limiting the inefficiency of data storage caused by massive data replication across peer nodes. Overall, we can argue that permissioned Blockchains are a viable compromise between the original concept and legacy OLTP systems. But then, to what extent? Can we identify some use cases that a state-of-the-art permissioned Blockchain can effectively support? This is exactly what the FAR-EDGE project aims at, with the added goal of validating claims on the field, by means of pilot applications deployed in real-world industrial environments.</para>
</section>
<section class="lev2" id="sec3-5-3">
<title>3.5.3 The FAR-EDGE Ledger Tier</title>
<para>The first problem that FAR-EDGE had to face was to define the <emphasis>performance envelope</emphasis> of current Blockchain implementations, so that validation cases could be shaped according to the sustainable workload. The idea was to set the benchmark for a Blockchain <emphasis>comfort zone</emphasis> in terms of a few objective and measurable Key Performance Indicators (KPI), targeting the known weak points of the technology:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis>Transaction Average Latency (TrxAL)</emphasis> &#8211; The average waiting time for a client to get confirmation of a transaction, expressed in seconds.</para></listitem>
<listitem><para><emphasis>Transaction Maximum Sustained Throughput (TrxMST)</emphasis> &#8211; The maximum number of transactions that can be processed in a second, on average.</para></listitem>
</itemizedlist>
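<para>Both KPIs can be measured with a simple harness that wraps a transaction-submitting callable. The function below is an illustrative stand-in, not the actual BlockBench methodology used by the project:</para>

```python
import time

def measure_kpis(process_tx, n_transactions=200):
    """Return (TrxAL, TrxMST): average per-transaction latency in
    seconds and sustained throughput in transactions per second."""
    latencies = []
    start = time.perf_counter()
    for i in range(n_transactions):
        t0 = time.perf_counter()
        process_tx({"id": i})  # submit one transaction and wait
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    trx_al = sum(latencies) / len(latencies)
    trx_mst = n_transactions / elapsed
    return trx_al, trx_mst
```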
<para>The benchmark was set by stress-testing, in a lab environment, actual Blockchain platforms. These were selected after a preliminary analysis of the permissioned Blockchains available from open source communities, using criteria like code maturity and, most importantly, finality of transactions. The only two platforms that passed the selection were Hyperledger Fabric (HLF) and NEO. The stress test was then conducted using BlockBench, a specialized testing framework [6], and a simple configuration of eight nodes on commodity hardware.</para>
<para>HLF emerged from tests as the only viable platform for CPS applications, given that NEO is penalized by a significant latency (∼7 s), which is independent of workload (the expected result for a &#8220;classical&#8221; Blockchain architecture that aggregates transactions into blocks and defines a fixed delay for processing each block). On the contrary, HLF was able to accept a workload of up to 160 transactions per second with relatively low latency (0.1&#8211;1 s). On heavier workloads, up to 1000 transactions per second, NEO is instead the clear winner, thanks to its constant latency, while HLF&#8217;s performance progressively degrades (&gt;50 s). This workload profile, however, while appealing for high-throughput scenarios (e.g., B2C payment networks), is not compatible with basic CPS requirements. Consequently, the Blockchain performance benchmark was set as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>0.1 &lt;= TrxAL &lt;= 1.0</para></listitem>
<listitem><para>0 &lt;= TrxMST &lt;= 160</para></listitem>
</itemizedlist>
<para>This is also considered the performance envelope of the FAR-EDGE Ledger Tier, as the HLF platform has been adopted as its baseline Blockchain implementation.</para>
</section>
<section class="lev2" id="sec3-5-4">
<title>3.5.4 Validation Use Cases</title>
<para>Having marked some boundaries, the FAR-EDGE project then proceeded with the identification of some pilot applications for the validation phase. The starting point was a set of candidate use cases proposed by our potential users, who were eager to tackle some concrete problems and experiment with some new ideas. The general framework of this exercise is described here.</para>
<para>As explained, the main objective in FAR-EDGE is to achieve flexibility in the factory through the decentralization of production systems. The catalyst of this transformation is the Blockchain, which &#8211; if used as a computing platform rather than a distributed ledger &#8211; allows the virtualization of the automation pyramid. The Blockchain provides a common <emphasis>virtual space</emphasis> where data can be securely shared and business logic can be consistently run. That said, users can leverage this opportunity in two ways: one easier but somewhat limited approach, and the other more difficult and more ambitious approach.</para>
<para>The easier approach is of the brown-field type: just migrate (some of) the factory&#8217;s centralized monitoring and control functionality to Ledger Services on the Ledger Tier. Thanks to the Gateway Tier, legacy centralized services can be &#8220;impersonated&#8221; on a local scale by Edge Gateways: the shopfloor &#8211; the hardest environment to tamper with in a production facility &#8211; is left untouched. The main advantages of this configuration are the mitigation of performance bottlenecks (heavy network traffic is confined locally, workload is spread across multiple computing nodes) and added resiliency (segments of the shopfloor can still be functional when temporarily disconnected from the main network). Flexibility is also enhanced, but on a coarse-grained scale: modularity is achieved by grouping a number of shopfloor Edge Nodes under the umbrella of one Edge Gateway, so that they all together become a single &#8220;module&#8221; with some degree of self-contained intelligence and autonomy. Advanced Industry 4.0 scenarios like plug-and-produce are out of reach.</para>
<para>The more ambitious approach, being of the green-field type, is also a much more difficult and risky endeavour in real-world business. It is about delegating responsibility to Smart Objects on the shopfloor, which communicate with each other through the mediation of the Ledger Tier. The business logic in Ledger Services is of higher level with respect to the previous scenario: more about governance and orchestration than direct control. The Gateway Tier has a marginal role, mostly confined to Big Data analytics. In this configuration, central bottlenecks are totally removed and the degree of flexibility is extreme. The price to pay is that a complete overhaul of the shopfloor of existing factories is required, replacing PLC-based automation with intelligent machines.</para>
<para>In FAR-EDGE, both paths are explored with different use cases combining automation, analytics and simulation. Here we give one full example of each type.</para>
<para>The first use case follows the brown-field approach. The legacy environment is an assembly facility for industrial vehicles. The pilot is called <emphasis>mass-customization</emphasis>: the name refers to the capability of the factory assembly line to handle individually customized products having a high level of variety. If implemented successfully, mass-customization can give a strategic advantage in targeting niche markets and meeting diverse customer needs in a timely fashion. In particular, the pilot factory produces highly customized trucks. The product specification is defined by up to 800 unique variants, and the final assembly includes approximately 7000 manufacturing operations and handles a very high degree of geometrical variety (axle configurations, fuel tank positions etc.). Despite the high level of variety in the standard product, at some production sites, 60% of the produced trucks have unique customer adaptations.</para>
<para>In the pilot factory, the main assembly line is sequential but feeds a number of finishing lines that work in parallel. In particular, the wheel alignment verification is done on the finishing assembly line and is one of the last active checks done on trucks before they leave the plant. This opens up an opportunity to optimize the workload. In the as-is scenario, wheel alignment stations are statically configured to accommodate specific truck model ranges: products must be routed to a matching station on arrival, creating a potential bottleneck if model variety is not optimal. As part of the configuration, a handheld nut runner tool needs to be instructed as to the torque force to apply.</para>
<para>In the to-be solution, according to the FAR-EDGE architectural blueprint, each wheel alignment station is represented at the Gateway Tier level by a dedicated Edge Gateway box. The EG runs some simple ad-hoc automation software that integrates the Field systems attached to the station (e.g., a barcode reader, the smart nut runner) using standard IoT protocols like MQTT. The EG also runs a peer node that is a member of the logical Ledger Tier. A custom Ledger Service deployed on the Ledger Tier implements the business logic of the use case. The instruction set for the products to be processed is sent in JSON format to the Ledger Service, once per day, by the central ERP-MES systems: from that point and until a new production plan is published, the Ledger and Gateway Tiers are autonomous.</para>
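<para>As a purely illustrative sketch (the actual ERP-MES message schema is not given here, so every field name below is hypothetical), the daily production plan published to the Ledger Service could be a JSON document that the service indexes by chassis ID:</para>

```python
import json

# Hypothetical daily production plan as published by the central ERP-MES
# systems to the Ledger Service (all field names are illustrative only).
production_plan_json = """
{
  "plan_date": "2018-05-14",
  "instructions": [
    {"chassis_id": "CH-100234",
     "operations": [
       {"station": "wheel_alignment", "tool": "nut_runner",
        "torque_nm": 420, "axle_config": "6x2"}
     ]}
  ]
}
"""

def index_by_chassis(plan_json):
    """Index the plan so a station can look up its instruction set by chassis ID."""
    plan = json.loads(plan_json)
    return {entry["chassis_id"]: entry["operations"]
            for entry in plan["instructions"]}

plan_index = index_by_chassis(production_plan_json)
ops = plan_index["CH-100234"]
torque = next(op["torque_nm"] for op in ops if op["tool"] == "nut_runner")
```

<para>Once the plan is indexed in this way, the Ledger and Gateway Tiers need no further contact with the central systems until a new plan is published.</para>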
<para>When a new truck reaches the end of the main line, it is dispatched to the first finishing line available, achieving the desired result of product flow optimization. Then, when it reaches the <emphasis>wheel alignment station</emphasis>, the chassis ID is scanned by a barcode reader and a request for instructions is sent, through the automation layer on the EG, to the Ledger Service. The Ledger Service retrieves the instruction set from the production plan &#8211; which is saved on the Ledger itself &#8211; by matching the chassis ID. When the automation layer receives the instruction set, it parses the specific configuration parameters of interest and sends them to the nut runner, which adjusts itself. The wheel alignment operations then proceed as usual. A record of the actual operations performed, which may differ from those in the instruction set, is finally sent back to the Ledger and used to update the production plan. An overall view of the use case is given in <link linkend="F3-7">Figure <xref linkend="F3-7" remap="3.7"/></link>.</para>
<fig id="F3-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-7">Figure <xref linkend="F3-7" remap="3.7"/></link></label>
<caption><para>Mass-customization use case.</para></caption>
<graphic xlink:href="graphics/ch03_fig007.jpg"/>
</fig>
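<para>The round trip described above &#8211; scan, instruction lookup, tool configuration and record-back &#8211; can be summarized in a minimal sketch, with the Ledger mocked as an in-memory object; all names are illustrative, not taken from the actual FAR-EDGE implementation:</para>

```python
# Minimal sketch of the station-side flow; the Ledger is mocked as an
# in-memory dict and every name here is illustrative.
class LedgerService:
    def __init__(self, plan):
        self.plan = plan      # chassis_id -> instruction set
        self.records = []     # append-only log of actual operations

    def get_instructions(self, chassis_id):
        return self.plan[chassis_id]

    def report_actual(self, chassis_id, actual_ops):
        # What was really done may differ from the plan; the record is
        # appended to the ledger and later used to update the plan.
        self.records.append({"chassis_id": chassis_id, "ops": actual_ops})

def wheel_alignment_station(ledger, chassis_id):
    """Triggered when the barcode reader scans a chassis ID."""
    instructions = ledger.get_instructions(chassis_id)
    torque = instructions["torque_nm"]        # configure the nut runner
    actual = {"torque_nm": torque, "status": "completed"}
    ledger.report_actual(chassis_id, actual)  # record-back to the Ledger
    return actual

ledger = LedgerService({"CH-100234": {"torque_nm": 420}})
result = wheel_alignment_station(ledger, "CH-100234")
```

<para>In this sketch, the station remains fully functional as long as it can reach its local Ledger peer, which matches the autonomy property discussed above.</para>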
<para>While the product flow optimization mentioned above is the immediate result of the pilot, there are some additional benefits to be gained either as a by-product or as planned extensions.</para>
<para>First, the wheel alignment station, together with its EG box, becomes an autonomous module that can be easily added/removed and even relocated in a different environment. This scenario is not as far-fetched as it may seem, because it actually comes from a business requirement: the company has a number of production sites in different locations all over the world, each with their own unique MES maps. The deployment of a new module with different MES maps is currently a difficult and costly process.</para>
<para>Second, in the future, the truck itself may become a Smart Object that communicates directly with the Ledger Tier. Truck&#8211;Ledger interactions will happen throughout the entire life cycle of the truck &#8211; from manufacturing to operation and until decommissioning &#8211; with the Ledger maintaining a digital twin of the truck.</para>
<para>The second use case follows instead the <emphasis>heavyweight</emphasis> green-field approach. The pilot belongs to a white goods (i.e., domestic appliances) factory. The objective of the pilot is &#8220;reshoring&#8221;, which in the FAR-EDGE context means enabling the company to move production back from off-shore locations, thanks to better support for the rapid deployment of new technologies (i.e., shopfloor Smart Objects) offered by the more advanced domestic plants. In this particular plant, a 1 km long conveyor belt moves pallets of finished products from the factory to a warehouse, where they are either stocked or forwarded for immediate delivery. The factory/warehouse conveyor is not only a physical boundary, but also an administrative one, as the two facilities are under the responsibility of two different business units. Moreover, once the pallet is loaded on a delivery vehicle, it comes under the responsibility of a third party who operates the delivery business.</para>
<para>In the as-is scenario, the conveyor feeds 19 shipping bays, or &#8220;lanes&#8221;, in the warehouse. Each lane is simply a dead-end conveyor segment, where pallets are dropped in by the conveyor and retrieved by a manually operated forklift (basically, a FIFO queue). Simple mechanical actuators do the physical routing of the pallets, controlled by logic that runs on a central &#8220;sorter&#8221; PLC. The sorting logic is very simple: it is based on a production schedule that is defined once per day and on static mappings of the lanes to product types and/or final destinations. This approach has one big problem: production cannot be dynamically tuned to match business changes &#8211; or only to a very limited extent &#8211; because the fixed dispatching scheme downstream cannot sync with it. The problem is not only in software: the physical layout of the system is fixed.</para>
<para>In the to-be solution, the shipping bays become Smart Objects that can be plugged in and out at need (see <link linkend="F3-8">Figure <xref linkend="F3-8" remap="3.8"/></link>). They embed simple sensors that detect the number of pallets currently in their local queue, and a controller board that runs some custom automation logic and connects directly to the Ledger Tier (i.e., without the mediation of an Edge Gateway). A custom Ledger Service acts as a coordination hub: it is responsible for authorizing a new &#8220;smart bay&#8221; that advertises itself to join the system (plug-and-produce) and, once the bay is accepted, for applying the sorting logic. The latter is based on the current state of the main conveyor belt, where incoming and outgoing pallets are individually identified by an RFID tag, and on &#8220;capability update&#8221; messages that are sent by smart bays each time they undergo an internal state change (e.g., number of free slots in the local queue, preference for a product type). The production schedule is not required at all, because sorting is calculated only from the actual state.</para>
<fig id="F3-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-8">Figure <xref linkend="F3-8" remap="3.8"/></link></label>
<caption><para>Reshoring use case.</para></caption>
<graphic xlink:href="graphics/ch03_fig008.jpg"/>
</fig>
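<para>The state-based sorting logic can be sketched as follows; the selection policy (prefer a bay that declared a matching product type, then the one with most free slots) is our illustrative assumption, not the pilot&#8217;s actual algorithm:</para>

```python
# Illustrative sketch: the Ledger Service picks a destination bay for an
# incoming pallet using only the latest "capability update" of each smart
# bay; no production schedule is involved.
def pick_bay(bays, product_type):
    """bays: list of dicts with keys bay_id, free_slots, preferred_type."""
    candidates = [b for b in bays if b["free_slots"] > 0]
    if not candidates:
        return None  # no bay can accept the pallet right now
    # Prefer a bay that declared this product type, then most free slots.
    candidates.sort(key=lambda b: (b["preferred_type"] != product_type,
                                   -b["free_slots"]))
    return candidates[0]["bay_id"]

bays = [
    {"bay_id": "lane-03", "free_slots": 0, "preferred_type": "fridge"},
    {"bay_id": "lane-07", "free_slots": 4, "preferred_type": "oven"},
    {"bay_id": "lane-12", "free_slots": 2, "preferred_type": "fridge"},
]
dest = pick_bay(bays, "fridge")
```

<para>Because the decision depends only on the bays&#8217; self-reported state, a newly plugged-in bay starts receiving pallets as soon as its first capability update is accepted, which is exactly the plug-and-produce behaviour described above.</para>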
</section>
</section>
<section class="lev1" id="sec3-6">
<title>3.6 Conclusions</title>
<para>FAR-EDGE is one of the few ongoing initiatives that focus on edge computing for factory automation, similarly to the IIC&#8217;s edge intelligence testbed and EdgeX Foundry. However, the FAR-EDGE RA introduces some unique concepts &#8211; in particular, the notion of a special logical layer, the Ledger Tier, that is responsible for sharing process state and enforcing business rules across the computing nodes of a distributed system, thus permitting <emphasis>virtual</emphasis> automation and analytics processes that span multiple nodes &#8211; or, from a bottom-up perspective, autonomous nodes that cooperate towards a common goal. This new kind of architectural layer stems from the availability of Blockchain technology, which, while well understood and thoroughly tested in mission-critical areas like digital currencies, has never before been applied to industrial systems. FAR-EDGE aims at demonstrating how a pool of specific Ledger Services can enable decentralized factory automation in an effective, reliable, scalable and secure way. In this chapter, we also presented the general framework of the industrial pilot applications that are going to be run during the validation phase of the project.</para>
</section>
<section class="lev1" id="sec3-7">
<title>References</title>
<para>[1] Karsten Schweichhart: Reference Architectural Model Industrie 4.0 &#8211; An Introduction, April 2016, Deutsche Telekom, online resource: https://ec.europa.eu/futurium/en/system/files/ged/a2-schweichhart-reference_architectural_model_industrie_4.0_rami_4.0.pdf</para>
<para>[2] Dagmar Dirzus, Gunther Koschnick: Reference Architectural Model Industrie 4.0 &#8211; Status Report, July 2015, VDI/ZVEI, online resource: https://www.zvei.org/fileadmin/user_upload/Themen/Industrie_4.0/Das_Referenzarchitekturmodell_RAMI_4.0_und_die_Industrie_4.0-Komponente/pdf/5305_Publikation_GMA_Status_Report_ZVEI_Reference_Architecture_Model.pdf</para>
<para>[3] H. Halpin, M. Piekarska, &#8220;Introduction to Security and Privacy on the Blockchain&#8221;, IEEE European Symposium on Security and Privacy Workshops (EuroS &amp; PW), Paris, 2017, pp. 1&#8211;3.</para>
<para>[4] L. Lamport, R. Shostak, M. Pease, &#8220;The Byzantine Generals Problem&#8221;, ACM Transactions on Programming Languages and Systems, vol. 4, no. 3, pp. 382&#8211;401, 1982.</para>
<para>[5] Z. Zheng, S. Xie, H. Dai, X. Chen, H. Wang, &#8220;An overview of Blockchain technology: architecture, consensus, and future trends&#8221;, proceedings of IEEE 6th International Congress on Big Data, 2017.</para>
<para>[6] T. Dinh, J. Wang, G. Chen, R. Liu, C. Ooi, K. L. Tan, &#8220;BLOCKBENCH: a framework for analyzing private Blockchains&#8221;, unpublished, 2017. Retrieved from: https://arxiv.org/pdf/1703.04057.pdf</para>
</section>
</chapter>

<chapter class="chapter" id="ch04" label="4" xreflabel="4">
<title>IEC-61499 Distributed Automation for the Next Generation of Manufacturing Systems</title>
<para><emphasis role="strong">Franco A. Cavadini<superscript>1</superscript>, Giuseppe Montalbano<superscript>1</superscript>, Gernot Kollegger<superscript>2</superscript>, Horst Mayer<superscript>2</superscript> and Valeriy Vyatkin<superscript>3</superscript></emphasis></para>
<para><superscript>1</superscript> Synesis, SCARL, Via Cavour 2, 22074 Lomazzo, Italy</para>
<para><superscript>2</superscript> nxtControl GmbH, Aum&#252;hlweg 3/B14, A-2544 Leobersdorf, Austria</para>
<para><superscript>3</superscript> Department of Computer Science, Electrical and Space Engineering, Lule&#229; tekniska universitet, A3314 Lule&#229;, Sweden</para>
<para>E-mail: franco.cavadini@synesis-consortium.eu;</para>
<para>giuseppe.montalbano@synesis-consortium.eu;</para>
<para>gernot.kollegger@nxtcontrol.com;</para>
<para>horst.mayer@nxtcontrol.com;</para>
<para>Valeriy.Vyatkin@ltu.se</para>
<para>Global competition in the manufacturing sector is becoming fiercer and fiercer, with fast-evolving requirements that must now take much more into account: rising product variety, product individualization, volatile markets, the increasing relevance of value networks and shortening product life cycles. To fulfil these increasingly complex requirements, companies have to invest in new technological solutions and focus their efforts on the conception of new automation platforms that can grant shopfloor systems the flexibility and re-configurability required to optimize their manufacturing processes, whether continuous, discrete or a combination of both.</para>
<para>Daedalus is conceived to enable the full exploitation of the CPS&#8217; virtualized intelligence concept, through the adoption of a completely distributed automation platform based on the IEC-61499 standard, fostering the creation of a digital ecosystem that can go beyond the current limits of manufacturing control systems and propose an ever-growing market of innovative solutions for the design, engineering, production and maintenance of plants&#8217; automation.</para>
<section class="lev1" id="sec4-1">
<title>4.1 Introduction</title>
<para>European leadership and excellence in manufacturing are being significantly threatened by the economic crisis that has hit Western countries in recent years. More sustainable and efficient production systems, able to keep pace with market evolution, are fundamental to a recovery plan aimed at renewing the European competitive landscape. An essential ingredient for a winning innovation path is a more aware and widespread use of ICT in manufacturing-related processes.</para>
<para>In fact, the rapid advances in ubiquitous computational power, coupled with the opportunity to de-localize parts of an ICT framework into the Cloud, have the potential to give rise to a new generation of service-based industrial automation systems, whose local intelligence (for real-time management and orchestration of manufacturing tasks) can be dynamically linked to runtime functionalities residing in-Cloud (an ecosystem where those functionalities can be developed and sold). Building on the already existing and implemented IEC-61499 standard, these new &#8220;Cyber-Physical Systems&#8221; will adopt an open and fully interoperable automation language (dissipating the borders between the physical shop floors and the cyber-world), to enable their seamless interaction and orchestration, while still allowing proprietary development of their embedded mechanisms.</para>
<para>These CPS based on real-time distributed intelligence, enhanced by functional extensions into the Cloud, will lead to a new information-driven automation infrastructure, where the traditional hierarchical view of a factory functional architecture is complemented by direct access to the (non-real-time) on-board services exposed by the Cyber-Physical manufacturing system, composed in complex orchestrated behaviours. As a consequence, the current classical approach to the Automation Pyramid (<link linkend="F4-1">Figure <xref linkend="F4-1" remap="4.1"/></link>) has recently been addressed several times (Manufuture, the ICT 2013 and ICT 2014 conferences, etc.) and deemed by RTD experts and industrial key players to be inadequate to cope with current manufacturing trends and in need of evolution.</para>
<para>In the European initiative Daedalus, financed under the Horizon 2020 research programme, it is acknowledged that the very nature of CPS defies the concept of rigid hierarchical levels, since each CPS is capable of complex functions across all layers. An updated version of the pyramid representation is therefore adopted (<link linkend="F4-2">Figure <xref linkend="F4-2" remap="4.2"/></link>), where CPS are hierarchically orchestrated in real time (within the shop floor) through the IEC-61499 automation language, to achieve complex and optimized behaviours (impossible with other current technologies), while still being individually and directly accessible, at runtime, by any element of the factory ICT infrastructure that wants to leverage their internal functionalities (and has the privileges to do so).</para>
<fig id="F4-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-1">Figure <xref linkend="F4-1" remap="4.1"/></link></label>
<caption><para>Classical automation pyramid representation.</para></caption>
<graphic xlink:href="graphics/ch04_fig001.jpg"/>
</fig>
<fig id="F4-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-2">Figure <xref linkend="F4-2" remap="4.2"/></link></label>
<caption><para>Daedalus fully accepts the concept of vertically integrated automation pyramid introduced by the PATHFINDER [1] road-mapping initiative and further developed with the Horizon 2020 Maya project [6].</para></caption>
<graphic xlink:href="graphics/ch04_fig002.jpg"/>
</fig>
<para>This innovative approach to the way of conceiving automated intelligence within a factory &#8211; across the boundaries of its physically separated functional areas, thanks to the constant and bidirectional connection to the cyber world &#8211; will be the enabler of a revolutionary paradigm shift within the market of industrial automation. The technological platform of Daedalus will in fact also become the economic platform of a completely new multi-sided ecosystem, where the creation of added-value products and services by device producers, machine builders, system integrators and application developers will go beyond the current limits of manufacturing control systems and propose an ever-growing market of innovative solutions for the design, engineering, production and maintenance of plants&#8217; automation (see also <link linkend="ch13">Chapter <xref linkend="ch13" remap="13"/></link> of this book).</para>
</section>
<section class="lev1" id="sec4-2">
<title>4.2 Transition towards the Digital Manufacturing Paradigm: A Need of the Market</title>
<para>The current worldwide landscape sees continuously growing value creation from digitization, with digital technologies increasingly playing a central role in value creation for the entire economy. More and more types of product are making a progressive transition to the &#8220;digital inside&#8221; model, where innovation is mostly related to the extension of the product model into the service model, through a deeper integration of digital representations. In concrete terms, this means that even in very &#8220;classical&#8221; domains, the dissipation of the borders between what is a product and what are the services it enables is fostering a widespread need for &#8220;Business Model Innovation&#8221;.</para>
<para>Looking at how global competition in the manufacturing sector is becoming fiercer and fiercer, with fast-evolving requirements that must now take into account several concurrent factors, it is clear that European manufacturing companies have to focus their efforts on new automation solutions that can grant shop floor systems the flexibility and reconfigurability required to optimize their manufacturing processes (<link linkend="F4-3">Figure <xref linkend="F4-3" remap="4.3"/></link>).</para>
<para>To realize such a vision, current technological constraints must be surpassed through research and development activities focusing on the following topics:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>interoperability of data/information (versus compatibility) and robustness;</para></listitem>
</itemizedlist>
<fig id="F4-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-3">Figure <xref linkend="F4-3" remap="4.3"/></link></label>
<caption><para>The industrial &#8220;needs&#8221; for a transition towards a digital manufacturing paradigm.</para></caption>
<graphic xlink:href="graphics/ch04_fig003.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>integration of different temporal-decision scale data (real-time, near-time, anytime) and multiple data sources;</para></listitem>
<listitem><para>integration of the real and the virtual data-information towards a predictive model for manufacturing; real-time data collection, analysis, decision, enforcement;</para></listitem>
<listitem><para>optimization in complex system-of-systems infrastructures;</para></listitem>
<listitem><para>seamless data integration across the process value chain;</para></listitem>
<listitem><para>standardization and interoperability of manufacturing assets components, subsystems and services.</para></listitem>
</itemizedlist>
<para>Within this context, the future of Europe&#8217;s industry must be digital, as clearly highlighted by Commissioner Oettinger&#8217;s EU-wide strategy [2] to &#8220;ensure that all industrial sectors make the best use of new technologies and manage their transition towards higher value digitised products and processes&#8221; through &#8220;Leadership in next generation open and interoperable digital platforms&#8221;, opening significant opportunities for high growth in vertical markets, especially for currently &#8220;non-digital&#8221; industries (<link linkend="F4-4">Figure <xref linkend="F4-4" remap="4.4"/></link>).</para>
<para>The core motivation for Daedalus was therefore born from the awareness that purely technological advancements are not in themselves enough to satisfy the industrial automation market&#8217;s need for innovation. New methodologies through which the sector&#8217;s main stakeholders can solve the new manufacturing needs of end-users must be conceived and supported by the creation of a technological and economic ecosystem built on top of a multi-sided platform.</para>
<para>In developing this concept, Daedalus takes into account a certain number of fundamental &#8220;non-functional&#8221; requirements:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>CPS-like interoperable devices must be &#8220;released&#8221; on the market together with their digital counterparts, both in terms of behavioural models and of the software &#8220;apps&#8221; that allow their simple integration and orchestration in complex system-of-systems architectures;</para></listitem>
</itemizedlist>
<fig id="F4-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-4">Figure <xref linkend="F4-4" remap="4.4"/></link></label>
<caption><para>Commissioner Oettinger agenda for digitalizing manufacturing in Europe.</para></caption>
<graphic xlink:href="graphics/ch04_fig004.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The development of coordination (orchestration) intelligence by system integrators, machine builders or plant integrators (more in general, by aggregators of CPS) should rely on existing libraries of basic functions, developed and provided in an easy-to-access way by experts of specific algorithmic domains;</para></listitem>
<listitem><para>Systemic performance improvement at automation level should rely on well-maintained SDKs that mask the complexity of behind-the-scenes optimization approaches;</para></listitem>
<listitem><para>Large-scale adoption of simulation as a tool to accelerate development and deployment of complex automation solutions should be obtained by shifting the implementation effort of models to device/system producers.</para></listitem>
</itemizedlist>
<para>This translates into an explicit involvement of all the main stakeholders of the automation development domain, brought together in a multi-sided market. Such an &#8220;Automation Ecosystem&#8221; must rely on a technological platform that, leveraging standardization and interoperability, can mask the complexity of interconnecting these Complementors.</para>
</section>
<section class="lev1" id="sec4-3">
<title>4.3 Reasons for a New Engineering Paradigm in Automation</title>
<para>The core conceptual idea launched at European level by the German &#8220;Industrie 4.0&#8221; initiative is that embedding intelligence into computational systems distributed throughout the factory should enable vertical networking with business process at management level, and horizontal connection among dispersed value networks.</para>
<para>The RAMI 4.0 framework has therefore been developed to highlight this new degree of integration between different aspects of the manufacturing domain, which exists not only within the usual hierarchy of automation (functional layers) but also across life cycle and aggregation levels (<link linkend="F4-5">Figure <xref linkend="F4-5" remap="4.5"/></link>). The core issue (tackled by Daedalus), which is not apparent enough in this framework, is that the evolution of the Hierarchy Levels &#8211; those that characterize the progressive aggregation of physical systems into more complex ones &#8211; is currently limited by a technological gap between shop floor and office floor automation.</para>
<para>In fact, two specific limits hinder the transition towards the next step of the shop floor automation:</para>
<fig id="F4-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-5">Figure <xref linkend="F4-5" remap="4.5"/></link></label>
<caption><para>RAMI 4.0 framework to support vertical and horizontal integration between different functional elements of the factory of the future.</para></caption>
<graphic xlink:href="graphics/ch04_fig005.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Current PLC technology, which dominates the deployment of industrial automation applications, is a legacy of the 1980s, unsuited for sustaining complex &#8220;system of intelligent systems&#8221; functional architectures;</para></listitem>
<listitem><para>Automation languages of the IEC-61131 standard, basis of the afore-mentioned PLCs, are antiquated from a software engineering point of view; additionally, they have been implemented by each vendor through specific &#8220;dialects&#8221; that prevent real interoperability.</para></listitem>
</itemizedlist>
<para>In technological terms, this has a very specific impact: while products of the automation domain are still completely based on an engineering approach built over the concept of a time cycle (and, consequently, its programming languages), the ICT domain has been working for decades through object orientation and, most importantly, event-based programming. Trying to bring together these two worlds, to guarantee the new levels of integration envisioned by Industry 4.0, is going to be practically impossible if nothing changes in the way industrial automation is conceived and then deployed.</para>
<para>During the last 20 years, standardization and research efforts related to control software for industrial automation were focused on improving quality and reliability while reducing development time. As explained previously, distributed automation is considered the needed innovation step; however, the current automation paradigm, based on the use of programmable logic controllers (PLCs) according to the IEC 61131-3 standard, is not suitable for distributed systems, as it was conceived for centralized ones. This device-centric and monolithic engineering approach is ill-suited to frequent changes of the executed control applications, and the multiple engineering tools required for adapting them greatly increase the engineering time, because the majority of vendors implement vendor-specific extensions or only partial support of IEC 61131-3.</para>
<para>The IEC took this into account in developing the IEC 61499 architecture, in order to support such new features of next-generation industrial automation systems as distribution and reconfiguration [3], offering a modern, platform-independent approach to system design, similar to the Model-Driven Architecture (MDA) [4]. The MDA approach has greatly improved the flexibility and efficiency of the development process for embedded systems [5] by re-using elements of solutions described in high-level languages. We can expect IEC 61499 to bring to industrial automation benefits similar to those MDA brought to software engineering and embedded system development.</para>
<para>The solution is therefore to propose a technological foundation for CPS that can be used to overcome these constraints and consequently enable the additional functionalities needed by the Automation Digital Platform envisioned by the project. By exploiting the already existing features of the IEC-61499 international standard for distributed automation, the idea is to propose a functional model for CPS that coherently blends real-time coordination of its automation tasks with the &#8220;anytime&#8221; provision of services to other elements of the automation pyramid (<link linkend="F4-6">Figure <xref linkend="F4-6" remap="4.6"/></link>).</para>
<fig id="F4-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-6">Figure <xref linkend="F4-6" remap="4.6"/></link></label>
<caption><para>Qualitative representation of the functional model for an automation CPS based on IEC-61499 technologies; concept of &#8220;CPS-izer&#8221;.</para></caption>
<graphic xlink:href="graphics/ch04_fig006.jpg"/>
</fig>
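<para>To make the event-driven flavour of the standard concrete, the following toy model mimics an IEC 61499 basic function block: event inputs trigger an algorithm that reads data inputs and fires event outputs. It is a deliberate simplification that omits the standard&#8217;s execution control chart and real-time semantics:</para>

```python
# Toy model of an IEC 61499 basic function block: an event input triggers
# an algorithm over the data inputs, then output events are fired to any
# connected blocks. Simplified; no execution control chart is modelled.
class FunctionBlock:
    def __init__(self):
        self.data_in = {}
        self.data_out = {}
        self._algorithms = {}   # input event name -> algorithm
        self._listeners = {}    # output event name -> connected callbacks

    def on_event(self, name, algorithm):
        self._algorithms[name] = algorithm

    def connect(self, out_event, callback):
        self._listeners.setdefault(out_event, []).append(callback)

    def fire(self, name):
        out_events = self._algorithms[name](self.data_in, self.data_out)
        for ev in out_events:
            for cb in self._listeners.get(ev, []):
                cb()

# Example: on the REQ event, scale a sensor value and confirm with CNF.
def scale(din, dout):
    dout["y"] = din["x"] * 2
    return ["CNF"]

fb = FunctionBlock()
fb.on_event("REQ", scale)
fired = []
fb.connect("CNF", lambda: fired.append("CNF"))
fb.data_in["x"] = 21
fb.fire("REQ")
```

<para>The key contrast with IEC 61131-3 is visible even in this sketch: execution is driven by events propagating through explicit connections between blocks, not by a fixed scan cycle.</para>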
<para>This extension of the IEC-61499 functionalities adopts the openness and interoperability of implementation that the standard proposes, guaranteeing that CPS developed independently will be able to communicate and be orchestrated. But it is not just a matter of interoperable communication between CPS at shop floor level; the transition towards an effective digitalization requires other elements:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The real-time automation logic of a CPS must be programmed under an object-oriented paradigm, taking into account the transition between the time-based approach of low-level control and the event-based needs of a service-oriented paradigm;</para></listitem>
<listitem><para>The controller of a CPS must also contain a high-level semantic description of the behavioural models of the system it governs, mapping the automation tasks on top of it; this is needed to allow external modules (in the digital domain) to read the raw data generated at shop floor level with the appropriate level of semantic context;</para></listitem>
</itemizedlist>
<fig id="F4-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-7">Figure <xref linkend="F4-7" remap="4.7"/></link></label>
<caption><para>The need for local cognitive functionalities is due to the requirements of Big Data elaboration.</para></caption>
<graphic xlink:href="graphics/ch04_fig007.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A certain degree of cognitive functionality must be programmed directly within the CPS, to guarantee that elaboration and modelling of data are done close to the sources of that data (<link linkend="F4-7">Figure <xref linkend="F4-7" remap="4.7"/></link>);</para></listitem>
</itemizedlist>
<para>Finally, the &#8220;exposure&#8221; of services to the digital domain must be conceived by the automation engineer coherently and concurrently with the design of the internal automation tasks, enabling secure interaction between internal (real-time) automation tasks and &#8220;external&#8221; requests for asynchronous functionalities.</para>
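<para>One common way to obtain this separation &#8211; sketched below under our own assumptions, not as the project&#8217;s actual design &#8211; is to let asynchronous service calls only enqueue requests, which the cyclic real-time task then drains at a bounded rate so that external load cannot disturb its deadlines:</para>

```python
import queue

# External (non-real-time) service calls only enqueue work.
service_requests = queue.Queue()

def expose_service(request):
    """Entry point offered to the digital domain; never blocks the control task."""
    service_requests.put(request)

def control_cycle(state):
    """One iteration of the cyclic real-time automation task."""
    state["cycles"] += 1          # ...real control logic would run here...
    # Serve at most two pending requests per cycle, so asynchronous load
    # cannot starve the control deadline.
    for _ in range(2):
        try:
            req = service_requests.get_nowait()
        except queue.Empty:
            break
        state["served"].append(req)

state = {"cycles": 0, "served": []}
expose_service({"op": "read_status"})
control_cycle(state)
```

<para>Bounding the number of requests served per cycle is the design choice that keeps the real-time task&#8217;s worst-case execution time predictable regardless of how many external callers are active.</para>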
<para>Notwithstanding this, the project understands and accepts the need of CPS vendors (developers) to protect their IP and/or continue using proprietary engineering technologies: the proposed approach supports different levels of &#8220;protection&#8221; of the inner working mechanisms of a system, from a fully IEC61499-compliant but closed (i.e., not accessible by users) implementation, to the &#8220;wrapping&#8221; of legacy PLCs.</para>
<para><link linkend="F4-8">Figure <xref linkend="F4-8" remap="4.8"/></link> therefore shows how the concept of an IEC-61499 CPS (networked in real time with similar, standard-compliant systems) is only an enabler for a much more complex shopfloor automation. Horizontal integration with other platforms (possibly still in real time) is guaranteed by support for an extensive set of communication protocols and middleware (such as OPC-UA and DDS), while vertical integration through a service-oriented approach extends automation functionalities into the digital domain, where the concept of an app store can greatly facilitate the diffusion of this approach at market level.</para>
<fig id="F4-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-8">Figure <xref linkend="F4-8" remap="4.8"/></link></label>
<caption><para>Framing of an IEC-61499 CPS within a complex shopfloor.</para></caption>
<graphic xlink:href="graphics/ch04_fig008.jpg"/>
</fig>
<section class="lev2" id="sec4-3-1">
<title>4.3.1 Distribution of Intelligence is Useless without Appropriate Orchestration Mechanisms</title>
<para>Providing automation devices as IEC-61499-compliant CPS is just the enabler for the cornerstone of the project. In fact, the real complexity of future shop floors (and, thus, the opportunities for new manufacturing paradigms) lies in the possibility to easily develop the multi-level orchestration intelligence needed to coordinate the behaviour of all the CPS composing a shop floor.</para>
<para>In fact, the paradigm of decentralizing computing power into smaller devices cannot be deployed merely by solving the communication issues among them. Previous attempts to bring the concepts of service orientation into the automation domain have failed when facing the &#8220;servers-only issue&#8221;: even if an intelligent system is programmed to &#8220;expose&#8221; its functionalities as services to be invoked (a &#8220;server&#8221;, in SoA vocabulary), the moment several of these servers coexist, the problem that remains is who coordinates those services in an orchestrated way (the &#8220;client&#8221;) and, most importantly, in which programming language such a client should be designed.</para>
<para>The adoption of IEC-61499 automatically provides a solution to this issue, with an industry-ready approach (validated in several production environments) that already satisfies the major needs for engineering complex orchestrating applications: interoperability between devices, real-time communication between distributed systems, hardware abstraction, automatic management of low-level variable binding between CPS, a modern development language (and environment), etc. This set of functionalities just needs to be &#8220;completed&#8221; with additional ones that will make it the undisputed standard at the European level.</para>
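To make the orchestration idea concrete, the following is a minimal, hypothetical sketch (in Python, not the IEC-61499 notation itself) of the event-driven function-block composition model the standard provides. Event names such as REQ, IND and CNF follow common 61499 usage; the class and wiring are illustrative only.

```python
class FunctionBlock:
    """Minimal sketch of an IEC-61499-style basic function block:
    event inputs trigger algorithms, which update data and fire
    event outputs that are routed to downstream blocks."""

    def __init__(self, name):
        self.name = name
        self.data = {}           # data inputs/outputs of the block
        self.algorithms = {}     # event input name -> algorithm
        self.connections = {}    # event output -> [(copy_fn, target, target_event)]

    def on_event(self, event, algorithm):
        self.algorithms[event] = algorithm

    def connect(self, out_event, target, target_event, copy_fn=None):
        self.connections.setdefault(out_event, []).append(
            (copy_fn, target, target_event))

    def fire(self, event):
        # Run the algorithm bound to the incoming event, then propagate
        # the output events it returns, sampling associated data along
        # the way (mirroring 61499 event/data association semantics).
        for out_event in self.algorithms[event](self):
            for copy_fn, target, target_event in self.connections.get(out_event, []):
                if copy_fn:
                    copy_fn(self, target)
                target.fire(target_event)

# Wiring a tiny two-block application (normally done graphically in a
# 61499 engineering tool).
def read_sensor(fb):
    fb.data["value"] = 21.5      # stand-in for a field reading
    return ["IND"]

def scale(fb):
    fb.data["scaled"] = fb.data["value"] * 2.0
    return ["CNF"]

sensor = FunctionBlock("sensor")
scaler = FunctionBlock("scaler")
sensor.on_event("REQ", read_sensor)
scaler.on_event("REQ", scale)
sensor.connect("IND", scaler, "REQ",
               copy_fn=lambda src, dst: dst.data.update(value=src.data["value"]))

sensor.fire("REQ")
print(scaler.data["scaled"])     # 43.0
```

The point of the sketch is that coordination logic lives in the connections between blocks, not in a hand-written "client": the engineering tool generates the bindings, which is precisely what solves the servers-only issue described above.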
<fig id="F4-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-9">Figure <xref linkend="F4-9" remap="4.9"/></link></label>
<caption><para>Hierarchical aggregation of CPS orchestrated to behave coherently.</para></caption>
<graphic xlink:href="graphics/ch04_fig009.jpg"/>
</fig>
<para><link linkend="F4-9">Figure <xref linkend="F4-9" remap="4.9"/></link> therefore shows how a real &#8220;hierarchy&#8221; of CPS can be imagined in the shop floor of future factories, where the physical aggregation of equipment and devices to generate more complex systems (typical of the mechatronic approach) must be equally supported by a progressive orchestration of their behaviour, adopting the so-called &#8220;Automation Object Orientation&#8221; (A-OO, see also Section 4.5 for details) and taking into account that each subsystem may come with its own controller and internally developed control logic.</para>
<para>The strength of this approach, which is already supported in all its basic and fundamental functionalities by the IEC-61499 standard and programming language, is highlighted in <link linkend="F4-10">Figure <xref linkend="F4-10" remap="4.10"/></link>.</para>
<para>A single CPS, whether basic or obtained through the aggregation of others, can be seen internally (from the perspective of its developer) as an intelligent system, which must be programmed (possibly in proprietary technologies) to exhibit a certain behaviour and expose it over an IEC-61499 interface. Seen from the outside, on the other hand, the CPS is a &#8220;black box&#8221; guaranteeing certain functionalities. This greatly simplifies both re-configurability and upgrade activities, and the progressive hiding of maintenance-related details.</para>
<fig id="F4-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-10">Figure <xref linkend="F4-10" remap="4.10"/></link></label>
<caption><para>Progressive encapsulation of behaviour in common interfaces.</para></caption>
<graphic xlink:href="graphics/ch04_fig010.jpg"/>
</fig>
<para>Thanks to this unique and innovative approach, new automation systems will be capable of providing simple-to-deploy aggregation of already existing CPS, each with its own on-board intelligence, to compose articulated &#8220;Systems of Cyber-Physical Systems&#8221; that, for the final user, will be nothing more than &#8220;bigger&#8221; CPS, exhibiting concerted behaviours that mask their internal working mechanisms according to the design decisions of the CPS provider.</para>
<para>The adoption of IEC-61499 provides also another opportunity, enabled by its natural object orientation (not only at the software level but also in dealing with hardware topology through an appropriate abstraction layer): greatly increased re-usability of code and applications.</para>
<para><link linkend="F4-11">Figure <xref linkend="F4-11" remap="4.11"/></link> shows how the development and IP-generation value chain would apply in the case of the high code re-usability enabled by IEC-61499, where software components of increasing complexity (and aggregation of functionalities) would be progressively employed by different users in the automation domain (the large-scale market consequences are further explored in <link linkend="ch13">Chapter <xref linkend="ch13" remap="13"/></link>).</para>
<fig id="F4-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-11">Figure <xref linkend="F4-11" remap="4.11"/></link></label>
<caption><para>IEC-61499 CPS Development, the IP Value-Add Chain.</para></caption>
<graphic xlink:href="graphics/ch04_fig011.jpg"/>
</fig>
</section>
<section class="lev2" id="sec4-3-2">
<title>4.3.2 Defiance of Rigid Hierarchical Levels towards the Full Virtualization of the Automation Pyramid</title>
<para>While the design of orchestrating intelligence supported by IEC-61499 allows the conception of complex aggregated systems of systems with advanced behaviour, the CPS functional model (at multiple levels) and the corresponding direct access to non-real-time &#8220;services&#8221; enable the complete restructuring of the classical factory automation pyramid.</para>
<para>New levels of vertical and horizontal integration can be envisioned thanks to the peculiar service-oriented approach proposed by Daedalus. In fact, current MES and ERP systems can extend their scope of application towards the shop floor by directly accessing the information flows and processing functionalities of the automation CPS; moreover, non-real-time, bidirectional exchange of information can exist between devices even if they are not explicitly orchestrated, such as among products and manufacturing equipment, or between systems of different departments (across the production value chain).</para>
<para><link linkend="F4-12">Figure <xref linkend="F4-12" remap="4.12"/></link> proposes a different vision of the factory, extending the point of view outside of the shopfloor and into the so-called &#8220;digital&#8221; domain, where all the ICT tools of a company exist, from the MES up to the ERP. Temporarily hiding the hierarchy of CPS at the shopfloor level shown before (for readability), the picture shows how each IEC-61499 CPS of Daedalus, based on the functional model of <link linkend="F4-6">Figure <xref linkend="F4-6" remap="4.6"/></link>, can connect directly, and independently of the others, to any &#8220;digital module&#8221; permitted to do so from a security perspective. In practice, this means that:</para>
<fig id="F4-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-12">Figure <xref linkend="F4-12" remap="4.12"/></link></label>
<caption><para>Direct and distributed connection between the digital domain and the Daedalus&#8217; CPS.</para></caption>
<graphic xlink:href="graphics/ch04_fig012.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Asynchronous connections can be established and maintained between a specific CPS and whatever ICT module has the privileges to do so, for instance, for extensive data gathering with an attached semantic description; the level of access (within the shopfloor aggregation hierarchy) is limited only by the granularity enabled by the automation developers;</para></listitem>
<listitem><para>Each CPS can be programmed to &#8220;expose&#8221; only the connections and functionalities that its automation developer deems appropriate, increasing the security of the overall connection at the design level (in addition to specific cyber-security mechanisms);</para></listitem>
<listitem><para>Real-time automation functionalities governing the behaviour of the system can be &#8220;augmented&#8221; by asynchronous access to digital modules conceived to offer specific tools to the automation developer, exploiting, for instance, the higher computational power of a local or cloud server.</para></listitem>
</itemizedlist>
<para>As an explicit consequence, the bridging envisioned by &#8220;Industrie 4.0&#8221; between the execution of the lowest-level manufacturing operations on the shop floor and the highest-level decision-making of a factory&#8217;s top management is automatically obtained.</para>
</section>
</section>
<section class="lev1" id="sec4-4">
<title>4.4 IEC-61499 Approach to Cyber-Physical Systems</title>
<section class="lev2" id="sec4-4-1">
<title>4.4.1 IEC-61499 runtime</title>
<para>Based on the overall vision of CPS introduced in Daedalus (see <link linkend="F4-13">Figure <xref linkend="F4-13" remap="4.13"/></link>), an IEC 61499 runtime implements the IEC 61499 execution model on a given OS and hardware platform, for example, Debian Linux running on an ARM Cortex platform. The runtime includes an event scheduler module responsible for scheduling the execution of algorithms; a resource management module to handle the creation, deletion and life cycle of function blocks in a deployed application; and modules providing timer, memory, logging, I/O access and communication services. The combination of hardware, OS services and the IEC 61499 runtime is collectively known as a device in the 61499 context, and a generic architecture for such a device is illustrated in <link linkend="F4-14">Figure <xref linkend="F4-14" remap="4.14"/></link>.</para>
<para>A control application is developed using an IEC-61499-compliant engineering tool and then deployed to the device where, when necessary, it utilizes different communication protocols and OS services to interact with other CPS and the physical world (e.g., I/O access). The IEC 61499 runtime can be extended to support different communication stacks, field buses and OS services, which are encapsulated as service interface function blocks (SIFBs) so that the control application can access their services by making event and data connections to them. In this way, the application designer does not require any knowledge of the technical details of how the communication is established. For the platform to be widely applicable, it also needs the ability to communicate with other wireless CPS devices (see Section 4.5).</para>
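A toy model of the event scheduler just described, assuming a simple FIFO event queue (real runtimes differ in scheduling policy and run continuously under life-cycle management; names here are illustrative):

```python
import queue

class EventScheduler:
    """Sketch of a runtime event scheduler: events posted by SIFBs
    (timers, communication stacks, I/O drivers) are queued and the
    registered algorithms executed one at a time, in arrival order."""

    def __init__(self):
        self._events = queue.Queue()
        self._handlers = {}

    def register(self, event, handler):
        self._handlers[event] = handler

    def post(self, event, payload=None):
        self._events.put((event, payload))

    def run(self):
        # Process until a STOP event is seen; an actual runtime loops
        # forever and is driven by the device's management interface.
        while True:
            event, payload = self._events.get()
            if event == "STOP":
                return
            self._handlers[event](payload)

sched = EventScheduler()
samples = []
sched.register("SAMPLE", samples.append)   # algorithm bound to an event
sched.post("SAMPLE", 20.1)                 # e.g. posted by a timer SIFB
sched.post("SAMPLE", 20.4)
sched.post("STOP")
sched.run()
print(samples)   # [20.1, 20.4]
```

The queue decouples the sources of events (SIFBs) from the execution of algorithms, which is what lets one runtime serve timers, field buses and communication stacks uniformly.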
<fig id="F4-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-13">Figure <xref linkend="F4-13" remap="4.13"/></link></label>
<caption><para>Qualitative functional model of an automation CPS based on IEC-61499.</para></caption>
<graphic xlink:href="graphics/ch04_fig013.jpg"/>
</fig>
<fig id="F4-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-14">Figure <xref linkend="F4-14" remap="4.14"/></link></label>
<caption><para>IEC 61499 runtime architecture.</para></caption>
<graphic xlink:href="graphics/ch04_fig014.jpg"/>
</fig>
<para>To enable faster, easier and less error-prone configuration of a network of CPSs in a dynamically changing network topology, auto-discovery and self-declaration have been added to the IEC 61499 runtime in Daedalus. To allow this, each device must be capable of creating a semantic description of its own interface and functional automation capabilities, making its presence on the network known to other devices by advertising when it enters and leaves the network, and exchanging the necessary information with standardized, unambiguous syntax and semantics.</para>
<para>The first step is to develop a semantic meta-model for describing the functionalities provided by the CPS. The model must describe the physical interface of the device (parameters) and the logical interface used to access the automation capabilities it provides. Once the model has been automatically created, it can be exchanged with other CPS in a predefined, extensible XML format.</para>
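As a sketch of what such a self-description could look like, the following generates a small XML document for a hypothetical device; the element and attribute names (CPSDescription, PhysicalInterface, Capability) are illustrative placeholders, not the Daedalus meta-model:

```python
import xml.etree.ElementTree as ET

def describe_cps(name, address, capabilities):
    """Build a self-description document for a CPS: physical interface
    parameters plus the automation capabilities it exposes over its
    logical interface. Schema names here are invented for illustration."""
    root = ET.Element("CPSDescription", name=name)
    ET.SubElement(root, "PhysicalInterface", address=address)
    logical = ET.SubElement(root, "LogicalInterface")
    for cap in capabilities:
        ET.SubElement(logical, "Capability", name=cap)
    return ET.tostring(root, encoding="unicode")

# A drilling station announcing two invocable capabilities.
doc = describe_cps("drill-01", "192.168.0.17",
                   ["spindle.start", "spindle.stop"])
print(doc)
```

Because the description is plain, extensible XML, a peer CPS can parse it with any standard XML library and decide which capabilities to bind to, without prior knowledge of the device.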
<para>For the CPS to adapt easily to a dynamic network topology (imagine wireless CPS devices on a mobile platform), where CPS or SoA entities may join and leave the local network at will, auto-discovery must be based on zero-configuration (zeroconf) technology, which removes the need to manually reconfigure the network layout and avoids a centralized DNS server that would become a single point of failure. A CPS device participating in a zeroconf network is automatically assigned an address and hostname, making low-level network communication possible immediately after the device joins the network. Multicast DNS, part of the zeroconf technology stack, further allows a CPS to subscribe to and be automatically notified of changes in the layout of the network.</para>
<para>To support the exchange of the semantic information used to identify the capabilities of other CPSs in the network, a new communication protocol based on XMPP has been chosen for inclusion in the IEC 61499 runtime. XMPP was chosen to leverage mature standards that will encourage broader acceptance of the implemented solution, as well as for its intrinsic extensibility via XMPP Extension Protocols (XEPs).</para>
</section>
<section class="lev2" id="sec4-4-2">
<title>4.4.2 Functional Interfaces</title>
<section class="lev3" id="sec4-4-2-1">
<title>4.4.2.1 IEC-61499 interface</title>
<para>The IEC-61499 interface enables the CPS to connect to a network of IEC-61499-based controllers, leveraging a communication profile compliant with the IEC-61499 standard and thereby providing a unified and globally recognized means of communication with a network of automation devices.</para>
<para>This interface is mainly dedicated to the exchange of real-time data among the CPSs participating in the same IEC-61499 distributed control application, but it is also exploited by other systems to interact with a CPS to accomplish specific tasks, for example:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>to configure the IEC-61499 runtime;</para></listitem>
<listitem><para>to deploy the IEC-61499 code in the CPS;</para></listitem>
<listitem><para>to monitor and debug the IEC-61499 control application.</para></listitem>
</itemizedlist>
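Configuration, deployment and monitoring of this kind are commonly realized over the device's management interface as XML requests in existing 61499 runtimes (e.g., Eclipse 4diac FORTE); a hedged sketch of composing one such request, whose exact element names may differ per runtime, is:

```python
import xml.etree.ElementTree as ET

def create_fb_request(req_id, fb_name, fb_type):
    """Compose a 61499-style XML management request asking a runtime to
    instantiate a function block. Element and attribute names follow the
    convention used by open IEC 61499 runtimes such as 4diac FORTE; a
    particular runtime implementation may deviate from this layout."""
    req = ET.Element("Request", ID=str(req_id), Action="CREATE")
    ET.SubElement(req, "FB", Name=fb_name, Type=fb_type)
    return ET.tostring(req, encoding="unicode")

# Ask the runtime to instantiate a cyclic-event generator block.
print(create_fb_request(1, "TIMER1", "E_CYCLE"))
```

In practice, an engineering tool generates and sends a sequence of such requests (create blocks, create connections, start the application) to each device participating in the distributed application.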
<para>The IEC-61499 interface is also the interface that will host strong real-time synchronization mechanisms.</para>
<para>From a hardware perspective, the IEC-61499 interface can be implemented both as a wired Ethernet interface, allowing highly reliable wired connections, and as a wireless interface, providing flexibility in the implementation of a communication network. It is relevant to highlight, however, that wireless connectivity will impose some limits on the performance that can be expected for the coordination of the distributed CPS in that network.</para>
</section>
<section class="lev3" id="sec4-4-2-2">
<title>4.4.2.2 Wireless interface</title>
<para>The Wireless interface of DAEDALUS&#8217; CPS is mainly dedicated to interfacing with remote devices over dedicated communication protocols, for application-specific tasks. When the considered task falls within the context of connectivity among IEC-61499 nodes, this interface can partially overlap, in terms of functionality, with the (wireless version of the) IEC-61499 interface. However, while the IEC-61499 interface is designed as a general communication interface for the cooperation of distributed control devices over an IP-based network, the Wireless interface is specialized to support specific communication links. Some examples of specific communication channels for which the Wireless interface would be appropriate are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Point-to-point bus communication over a specific wireless technology (other than 802.11a/b/g/n) between two CPSs to support IEC-61499 connectivity;</para></listitem>
<listitem><para>Connection to a remote device for mono-/bi-directional exchange of data, for example:</para>
<itemizedlist mark="circle" spacing="normal">
<listitem><para>to a remote I/O module;</para></listitem>
<listitem><para>to a DAEDALUS CPS behaving as a supervisor node;</para></listitem>
<listitem><para>to a third-party technology gateway.</para></listitem></itemizedlist>
</listitem>
</itemizedlist>
<para>To enable an effective approach that makes it easier to extend this interface in the future to support additional wireless communication technologies, the interface is structured in two layers:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>a hardware abstraction layer, which provides the mechanisms to leverage the Wireless interface within an IEC-61499 application, and that hides the details of the communication technology adopted underneath;</para></listitem>
<listitem><para>a technology-specific driver, which is leveraged by the abstraction layer to map the expected functionalities over the specific features offered by the selected communication protocol.</para></listitem>
</itemizedlist>
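The two-layer structure above can be sketched as an abstract interface plus pluggable technology-specific drivers; all class names below are illustrative, and the loopback driver stands in for a real radio stack:

```python
from abc import ABC, abstractmethod

class WirelessDriver(ABC):
    """Technology-specific driver: maps abstract send/receive operations
    onto one concrete radio protocol."""
    @abstractmethod
    def send(self, payload: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

class LoopbackDriver(WirelessDriver):
    """Stand-in driver for testing: echoes frames back. A real driver
    would wrap, e.g., an 802.15.4 or Bluetooth stack."""
    def __init__(self):
        self._buf = []
    def send(self, payload):
        self._buf.append(payload)
    def receive(self):
        return self._buf.pop(0)

class WirelessInterface:
    """Hardware abstraction layer exposed to the IEC-61499 application:
    the application never sees which radio technology sits underneath."""
    def __init__(self, driver: WirelessDriver):
        self._driver = driver
    def publish(self, data: str):
        self._driver.send(data.encode())
    def poll(self) -> str:
        return self._driver.receive().decode()

hal = WirelessInterface(LoopbackDriver())
hal.publish("spindle.speed=1200")
print(hal.poll())  # spindle.speed=1200
```

Swapping radio technologies then amounts to injecting a different driver, leaving the IEC-61499 application code untouched.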
</section>
<section class="lev3" id="sec4-4-2-3">
<title>4.4.2.3 Wrapping interface</title>
<para>The Wrapping interface is the enabler for an IEC-61499-based controller to operate as a CPS-izer. It has to enable communication with a &#8220;legacy&#8221; controller through a communication channel not based on the IEC-61499 protocol.</para>
<para>From the communication-protocol perspective, different implementations of this interface can be foreseen, depending on the specific protocol adopted by the CPS to connect to a non-IEC-61499-based controller. The main characteristic of the interface, however, is to present a well-defined mechanism enabling interaction with third-party control applications.</para>
<para>Through this interface, it will be possible to enable cooperation between the event-based approach of a DAEDALUS CPS and the scan-based mechanism adopted by classic controllers. This allows the CPS to be considered a wrapper that extends the capabilities of the legacy controller with the IEC-61499 features.</para>
</section>
<section class="lev3" id="sec4-4-2-4">
<title>4.4.2.4 Service-oriented interface</title>
<para>The service-oriented interface of a DAEDALUS CPS is fully integrated in the IEC 61499 runtime platform and conceived to enable dynamic interaction among the CPSs and between the CPSs and the higher automation layers. By means of that interface, a CPS will be able to connect to other systems at the shop floor or at the supervisory/management levels to acquire data reflecting the current state of the manufacturing process, thereby extending its perception capabilities beyond the limits of its directly connected sensors.</para>
<para>The service-oriented interface enables a unified methodology of interaction among the intelligent units of the manufacturing plant and, at the same time, the possibility for an orchestrating unit at the supervisory/management level to interact directly with the network of cyber-devices and coordinate their action, without requiring compliance with IEC 61499.</para>
<para>A CPS exposes through its service-oriented interface a set of functionalities that an orchestrating intelligence exploits to reconstruct a better understanding of the actual condition of the manufacturing process and of the CPS&#8217; behaviour, and to elaborate more accurate and effective coordination plans, which are then used to appropriately instruct the individual automation units.</para>
<para>The service-oriented interface provides a flexible communication mechanism that does not require the specification of all the nodes involved in the communication at design stage, hence making the application easy to scale.</para>
<para>The specification of the service-oriented interface defines (among other aspects):</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The architectural mechanism to integrate the service-oriented interface within an IEC-61499 runtime;</para></listitem>
<listitem><para>The protocols supported by the initial implementation of the DAEDALUS platform;</para></listitem>
<listitem><para>The set of services implemented as a first prototype of the interface.</para></listitem>
</itemizedlist>
</section>
<section class="lev3" id="sec4-4-2-5">
<title>4.4.2.5 Fieldbus interface(s)</title>
<para>To make a DAEDALUS CPS applicable to different application scenarios, the CPS should support connectivity toward other automation devices through common fieldbus technologies.</para>
<para>The fieldbus interface(s) can be of different types, and the specific implementation will depend on the technologies for which an appropriate driver is available or implemented, and on the application requirements.</para>
<para>The general goal of this interface is to provide I/O communication with other automation devices. Some of the common fieldbus technologies that are planned to be supported are EtherCAT and Modbus TCP/IP.</para>
</section>
<section class="lev3" id="sec4-4-2-6">
<title>4.4.2.6 Local I/O interface</title>
<para>The Local I/O interface represents a specific interfacing mechanism toward the I/O modules locally installed in the same HW platform of the CPS.</para>
<para>From a functional point of view, this interface is similar to the Fieldbus interface, but it is specialized to exploit the resources characterizing a specific implementation of a DAEDALUS CPS: those resources can leverage custom/proprietary communication mechanisms instead of common standards.</para>
</section>
</section>
</section>
<section class="lev1" id="sec4-5">
<title>4.5 The &#8220;CPS-izer&#8221;, a Transitional Path towards Full Adoption of IEC-61499</title>
<para>The technological concept is that of a CPS-izer: a small-footprint, low-cost controller, based on the IEC-61499 technologies of Daedalus, that is also capable of interfacing with conventional PLCs through standard communication buses (<link linkend="F4-15">Figure <xref linkend="F4-15" remap="4.15"/></link>). This could provide a transition path towards digital automation for two major families of users:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>End-users will have access to a product that can be easily installed on existing machines and manufacturing systems and, with a limited engineering effort, used to upgrade their plants, prolonging their functional life;</para></listitem>
<listitem><para>Other developers of automation platforms compliant with IEC-61131 will have a temporary solution to make their systems at least partially coherent with the new IEC-61499 standard.</para></listitem>
</itemizedlist>
<fig id="F4-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-15">Figure <xref linkend="F4-15" remap="4.15"/></link></label>
<caption><para>IEC 61499 CPS-izer.</para></caption>
<graphic xlink:href="graphics/ch04_fig015.jpg"/>
</fig>
<para>Finally, the object-oriented approach adopted by the standard will not be limited a priori to automation algorithms only, but can be extended to further &#8220;dimensions of existence&#8221; of the system, guaranteeing two important added values. Behavioural models of the CPS (needed for several purposes, such as simulation) will become explicit elements of the device&#8217;s virtual representation (avatar), enabling seamless (i.e., transparent to the end-user) connectivity between the device deployed in the field and its models stored in the cloud. In addition, synchronization and co-simulation in near real time will be achieved automatically as part of the functional IEC-61499 architecture, with the event-based nature of the standard perfectly suited to managing the Big Data coming from the field.</para>
<para>The CPS-izer follows the same common requirements as an IEC-61499 Controller device, although deviations in how the CPS-izer implements those common requirements are possible. Beyond these common requirements, other requirements and constraints are defined for the CPS-izer. First of all, the CPS-izer needs to provide support for legacy industrial networks.</para>
<para>Legacy industrial networks are characterized by means of their physical and data link layers (e.g., Serial, CAN, Ethernet) and the transport layers up to the application layers depending on the implemented technologies (e.g., Modbus/RTU, PROFIBUS, CANopen, PROFINET).</para>
<para>The CPS-izer supports these legacy industrial networks by means of an adaptation interface, which could be implemented in hardware, software or as an IP core. The preferred solution for the CPS-izer in Daedalus is the HMS Anybus&#174; embedded product family of industrial interfaces (see https://www.anybus.com/products/embedded-index). The CPS-izer implements only a slave/server/device role in terms of the applied industrial networking technology; consequently, it cannot be used as a master/client/controller in any industrial network.</para>
<para>One constraint of this device is that it will only support connectivity to legacy industrial networks, not I/O data as signals, whether discrete or analogue. For example, if additional I/O signals are needed, other I/O modules connected in the legacy industrial network and controlled by a PLC in that system can be used. Likewise, no support will be provided for I/O data as in industrial sensor/actuator systems (e.g., AS-Interface, IO-Link): those systems would require a master to be implemented, which is out of scope for the realization of the CPS-izer. If I/O data from such systems need to be exchanged with a CPS, this shall be realized using an appropriate master in the legacy industrial network controlled by a PLC in that system. The CPS-izer may have limited resources for IEC 61499 functionalities compared with the IEC 61499 Controller when it comes to the implementation of the runtime system. It must, of course, implement the function block(s) and driver(s)/interface(s) needed to handle data transfer to the connected legacy industrial network.</para>
<para>The CPS-izer will map input and output data between the CPS and legacy industrial networks. For this, it will implement a form of shared memory (either physical or logical) to exchange data. The data mapped to this area will be consistent as a whole for all inputs and outputs mapped to the legacy industrial network; it may be consistent at a finer granularity depending on the types of devices connected.</para>
<para>Since all legacy industrial networks share this approach of mapping data, it is the lowest common denominator of all such systems, and the CPS-izer will follow this philosophy. Some &#8211; but not all &#8211; legacy industrial networks provide events such as alarms or diagnostic messages. Their implementation is always specific to the industrial network, and no generic solution is available; the CPS-izer will therefore not support events of the legacy industrial networks.</para>
<para>The configuration of the available input and output data in the CPS-izer will be specific to the legacy industrial network it is connected to; the tools and methods typical of such networks are applied. The PLC in that system is responsible for getting the input data from the CPS-izer and writing them to the outputs of the devices; conversely, it collects the inputs from the devices and puts them into the output data of the CPS-izer. The processing of output and input data in the PLC follows the common scan-cycle approach implemented in the automation industry for decades: read inputs &#8211; execute process data &#8211; write outputs.</para>
<para>In terms of such PLC systems, the CPS-izer puts output data from the legacy industrial network to the CPS, where they are seen as inputs, and gets output data from the CPS, which are seen as inputs in the legacy industrial network. For the CPS-izer, the execution of the process data in the PLC is just a copy function from the process image input to the process image output.</para>
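Under the assumptions stated above, the process-image exchange and the PLC scan cycle can be sketched as a toy model (not the CPS-izer implementation; sizes and the pass-through "execute" step are illustrative):

```python
class ProcessImage:
    """Shared-memory sketch for the CPS-izer: one input image and one
    output image, each exchanged consistently as a whole block (the
    lowest common denominator of legacy fieldbus data mapping)."""
    def __init__(self, size):
        self.inputs = bytearray(size)
        self.outputs = bytearray(size)

def plc_scan_cycle(image, execute):
    """Classic PLC scan: read inputs - execute process data - write
    outputs. From the CPS-izer's point of view, 'execute' is just a
    copy between the input and output images."""
    snapshot = bytes(image.inputs)    # read inputs as one consistent block
    result = execute(snapshot)        # process data
    image.outputs[:] = result         # write outputs as one consistent block
    return result

img = ProcessImage(4)
img.inputs[:] = b"\x01\x02\x03\x04"   # data arriving from the CPS side
plc_scan_cycle(img, lambda data: data)  # CPS-izer case: plain pass-through
print(img.outputs)
```

Taking a snapshot of the whole input image before executing is what gives the block-level consistency described above: outputs never mix data from two different cycles.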
<para>Some legacy industrial networks, such as EtherCAT or PROFINET, provide real-time capabilities to transport I/O data with cycle times in the millisecond or even microsecond range. This real-time behaviour will not be made transparent to the CPS: the CPS-izer only guarantees data consistency between the CPS and the legacy industrial network relative to the cycle time running in that network; it cannot guarantee real-time transport between the two systems.</para>
<para>The CPS-izer should be realized in a small, industrially approved plastic housing that can be easily mounted on a machine or in a cabinet using DIN-rail mechanics. It should require a single 24 V power supply, as used in standard industrial automation systems. Furthermore, it should offer a common way to connect to legacy industrial networks by means of front plugs/connectors and indicators.</para>
<para>The CPS-izer should follow requirements for industrial grading (temperature range, shock and vibration, EMC, and others) for common cabinet mounting, and it must adhere to CE compliance.</para>
<para>Harsh industrial requirements such as IP67, sealed connectors and housing, and an extended temperature range are not in the focus of the CPS-izer&#8217;s realization.</para>
</section>
<section class="lev1" id="sec4-6">
<title>4.6 Conclusions</title>
<para>This chapter has explored a new generation of functional architecture for industrial automation, centred on the concepts, methodologies and technologies of the IEC-61499 standard but exploiting and extending them for a concrete implementation of what are called &#8220;Cyber-Physical Systems&#8221;.</para>
<para>The transition to this type of model is not just a matter of installing new devices on a shopfloor; it requires a real paradigm shift in the way real-time control and automation in manufacturing are engineered, introducing new design concepts and the corresponding skills.</para>
<para>The Daedalus project is developing all the tools needed to enable such a transition, considering both green-field and brown-field scenarios and accepting that the full implementation of Industry 4.0 will require a radical change in the way existing PLCs work.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>This work was achieved within the EU-H2020 project DAEDALUS, which received funding from the European Union&#8217;s Horizon 2020 research and innovation programme, under grant agreement No 723248.</para></section>
<section class="lev1" id="sec4-7">
<title>References</title>
<para>[1] http://www.pathfinderproject.eu</para>
<para>[2] https://ec.europa.eu/digital-agenda/en/digitising-european-industry</para>
<para>[3] Zoitl, Alois, and Valeriy Vyatkin. &#8220;IEC 61499 Architecture for Distributed Automation: The &#8216;Glass Half Full&#8217; View&#8221;, IEEE Industrial Electronics Magazine 3.4: 7&#8211;23, 2009.</para>
<para>[4] Object Management Group, &#8220;Model Driven Architecture&#8221;, Online Available: http://www.omg.org/mda/faq_mda.htm, Jun. 2009.</para>
<para>[5] B. Huber, R. Obermaisser, and P. Peti, &#8220;MDA-based development in the DECOS integrated architecture &#8211; modeling the hardware platform&#8221;, in Proceedings of the Ninth IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC 2006), April 2006, p. 10.</para>
<para>[6] http://www.maya-euproject.com</para>
</section>
</chapter>

<chapter class="chapter" id="ch05" label="5" xreflabel="5">
<title>Communication and Data Management in Industry 4.0</title>
<para><emphasis role="strong">Maria del Carmen Lucas-Esta&#241; <superscript>1</superscript>, Theofanis P. Raptis<superscript>2</superscript></emphasis>, <emphasis role="strong">Miguel Sepulcre<superscript>1</superscript>, Andrea Passarella<superscript>2</superscript>, Javier Gozalvez<superscript>1</superscript> and Marco Conti<superscript>2</superscript></emphasis></para>
<para><superscript>1</superscript> UWICORE Laboratory, Universidad Miguel Hern&#225;ndez de Elche (UMH), Elche, Spain</para>
<para><superscript>2</superscript> Institute of Informatics and Telematics, National Research Council (CNR), Pisa, Italy</para>
<para>E-mail: m.lucas@umh.es; theofanis.raptis@iit.cnr.it; msepulcre@umh.es; andrea.passarella@iit.cnr.it; j.gozalvez@umh.es; marco.conti@iit.cnr.it</para>
<para>The Industry 4.0 paradigm refers to a new industrial revolution in which factories evolve towards digitalized and networked structures where intelligence is spread among the different elements of the production systems. Two key technological enablers to achieve the flexibility and efficiency sought for the factories of the future are the communication networks and the data management schemes that will support connectivity and data distribution in Cyber-Physical Production Systems. Communications and data management must be built upon a flexible and reliable architecture in order to efficiently meet the stringent and varying requirements in terms of latency, reliability and data rates demanded by industrial applications. To this aim, this chapter presents a hierarchical communications and data management architecture in which decentralized and local management decisions are coordinated by a central orchestrator that ensures the efficient global operation of the system. The defined architecture considers a multi-tier organization in which different management strategies can be applied to satisfy the different latency and reliability requirements of different industrial applications. The use of virtualization and softwarization technologies such as RAN Slicing and Cloud RAN makes it possible to achieve the flexibility, scalability and adaptation capabilities required to support the highly demanding and diverse industrial environment.</para>
<section class="lev1" id="sec5-1">
<title>5.1 Introduction</title>
<para>In future industrial applications, the Internet of Things (IoT), with its communications and data management functions, will help shape the operational efficiency and safety of industrial processes by integrating sensors, data management, advanced analytics, and automation into a mega-unit [1]. The significant future participation of intelligent robots will enable effective and cost-efficient production, achieving sustainable revenue growth. Industrial automation systems emerging from the Industry 4.0 paradigm count on sensor information and the analysis of such information [2]. As such, connectivity is a crucial factor for the success of industrial Cyber-Physical Systems (CPS), where machines and components can talk to one another. Moreover, in the context of Industry 4.0 and to match the increased market demand for highly customized products, traditional pilot lines designed for mass production are now evolving towards more flexible &#8220;plug &amp; produce&#8221; modular manufacturing strategies based on autonomous assembly stations [3], which will make increased use of massive volumes of Big Data streams to support self-learning capabilities and will demand real-time reactions from increasingly connected mobile and autonomous robots and vehicles. While conventional cloud solutions will definitely be part of the picture, they will not be enough. The model of centrally organized enterprises, in which large amounts of data are sent to a remote data center, does not deliver the expected performance for Industry 4.0 scenarios and applications. Recently, moving service supply from the cloud to the edge has made it possible to meet application delay requirements, improve scalability and energy efficiency, and mitigate the network traffic burden. With these advantages, decentralized industrial operations become a promising solution and can provide more scalable services for delay-tolerant applications [4].</para>
<para>Two technological enablers of Industry 4.0 are: (i) the communication infrastructure that will support the ubiquitous connectivity of Cyber-Physical Production Systems (CPPS) and (ii) the data management schemes built upon the communication infrastructure that will enable efficient data distribution within the Factories of the Future [5]. In the industrial environment, a wide set of applications and services with very different communication requirements will coexist, making it one of the most demanding verticals with respect to the number of connected nodes, ultra-low latencies, ultra-high reliability, energy efficiency, and ultra-low communication costs [6]. The varying and stringent communication and data availability requirements of industrial applications pose an important challenge for the design of the communication network and of the data management systems. The communication network and the data management strategy must be built upon a flexible architecture capable of meeting the communication requirements of industrial applications, with particular attention to time-critical automation.</para>
<para>The architecture reviewed in this chapter is the reference communications and data management architecture of the H2020 AUTOWARE project [7]. The main objective of AUTOWARE is to build an open consolidated ecosystem that lowers the barriers faced by small, medium- &amp; micro-sized enterprises (SMMEs) in developing cognitive automation applications and adopting autonomous manufacturing processes. Communications and data management are two technological enablers within the AUTOWARE Framework (see <link linkend="F5-1">Figure <xref linkend="F5-1" remap="5.1"/></link>; the framework is presented in detail in <link linkend="ch02">Chapter <xref linkend="ch02" remap="2"/></link>). Within the AUTOWARE framework, the AUTOWARE Reference Architecture establishes four layers: Enterprise, Factory, Workcell/Production Line, and Field Devices. In addition, it includes two transversal layers: (i) the Fog/Cloud layer, since applications or services in all the layers can be implemented in the Fog/Cloud, and (ii) the Modelling layer, since different technical components inside the different layers can be modelled, and modelling approaches could take the different layers into account. The communications and data management architecture proposed in AUTOWARE supports the communication network and the data management system and enables the data exchange between the different AUTOWARE components, exploiting the Fog and/or Cloud concepts. It provides communication links between devices, entities, and applications implemented in different layers, as well as within the same layer. Within the AUTOWARE Reference Architecture, the communication network and data management system can be represented as a transversal layer that interconnects all the functional layers (see <link linkend="F5-2">Figure <xref linkend="F5-2" remap="5.2"/></link>). The communications and data management architecture presented in this chapter provides the communication and data distribution capabilities required by the different systems and platforms developed within the AUTOWARE framework.</para>
<fig id="F5-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-1">Figure <xref linkend="F5-1" remap="5.1"/></link></label>
<caption><para>The AUTOWARE framework.</para></caption>
<graphic xlink:href="graphics/ch05_fig001.jpg"/>
</fig>
<para>AUTOWARE proposes the use of a heterogeneous network that integrates different communication technologies covering the industrial environment. The objective is to exploit the abilities of different wired and wireless communication technologies to meet the broad range of communication requirements posed by Industry 4.0 in an efficient and reliable way. To this aim, inter-system interferences between different wireless technologies operating in the same unlicensed frequency band need to be monitored and controlled, as well as inter-cell interferences for wireless technologies using the licensed spectrum. From a data management standpoint, real-time data availability requirements, optimized utilization of IT resources (particularly for SMMEs), and data ownership constraints call for distributed data management schemes, whereby data are stored, replicated, and accessed from multiple locations in the network, depending on data generation and data access patterns, as well as the status of physical resources at the individual nodes.</para>
<fig id="F5-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-2">Figure <xref linkend="F5-2" remap="5.2"/></link></label>
<caption><para>Communication network and data management system within the AUTOWARE Reference Architecture.</para></caption>
<graphic xlink:href="graphics/ch05_fig002.jpg"/>
</fig>
<para>To efficiently integrate the different communication technologies into a single network and handle the data management process, we adopt a software-defined hierarchical approach in which a central entity guarantees the coordination of local and distributed managers, resulting in a mix of centralized management (orchestration) and decentralized operation of the communication and data management functions. Communication links are organized in different virtual tiers based on the performance requirements of the applications they support. Different communications and data management strategies can then be applied at each tier to meet the specific communication and data availability requirements of each application. To implement the proposed hierarchical and multi-tier management architecture, we consider the use of RAN (Radio Access Network) Slicing and Cloud RAN as technological enablers to achieve the flexibility, scalability, and adaptation capabilities needed to guarantee the stringent and varying communication and data distribution requirements of industrial applications.</para>
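<para>As an illustration of the multi-tier idea, the mapping of an application flow to a virtual tier can be sketched as a simple rule over its latency bound and reliability target. The tier names and thresholds below are hypothetical examples chosen for this sketch, not values defined by AUTOWARE.</para>

```python
# Illustrative sketch: assign an application flow to a virtual tier
# based on its latency bound (ms) and reliability target.
# Tier names and thresholds are hypothetical, not AUTOWARE-defined values.

def assign_tier(latency_ms, reliability):
    """Return a tier label for a flow with the given requirements."""
    if latency_ms > 100:           # relaxed latency: cloud-level handling
        return "tier-cloud"
    if latency_ms > 10:            # moderate latency: edge/fog handling
        return "tier-edge"
    # tight latency: local, deterministic handling near the field devices
    if reliability >= 0.99999:
        return "tier-local-critical"
    return "tier-local"

# Requirement figures taken from Table 5.4 (latency in ms, reliability as 1-PER)
flows = {
    "motion-control": (1, 0.99999999),
    "condition-monitoring": (100, 0.99999),
    "augmented-reality": (10, 0.99999),
}
for name, (lat, rel) in flows.items():
    print(name, "->", assign_tier(lat, rel))
```

<para>Different communications and data management strategies would then be attached to each tier label by the central orchestrator.</para>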
<para>This chapter is organized as follows. Section 5.2 presents the requirements imposed by Industry 4.0 on the communications and data management system. Section 5.3 reviews communication architectures proposed for Industrial Wireless Networks, and Section 5.4 presents traditional approaches and current trends in the design of data management strategies in industrial environments. Section 5.5 presents the proposed communications and data management architecture, together with the technological enablers considered to build it: RAN Slicing and Cloud RAN. Section 5.6 describes how the proposed hierarchical architecture enables hybrid management schemes that introduce flexibility in the management of wireless connections while maintaining close coordination with a central network manager. Section 5.7 presents examples of early adoption of communication and data management concepts supported by the suggested architecture. Section 5.8 explains how the reference communications and data management architecture fits into the overall AUTOWARE framework. Section 5.9 summarizes and concludes the chapter.</para>
</section>
<section class="lev1" id="sec5-2">
<title>5.2 Industry 4.0 Communication and Data Requirements</title>
<para>Industry 4.0 poses a complex communication environment because of the wide set of different industrial applications and services that will coexist, all of them demanding very different and stringent communication requirements. The 5G-PPP classifies industrial use cases into five families, each of them representing a different subset of communication requirements in terms of latency, reliability, availability, throughput, etc. [6]. Instant process optimization based on real-time monitoring of the manufacturing performance and the quality of produced goods is one of the most demanding use case families in terms of latency and reliability. Some of the sensors may communicate at low bitrates but with ultra-low latency and ultra-high reliability, whereas vision-controlled robot arms or mobile robots may require reliable high-bandwidth communication. Inside the factory, there are also applications and services without time-critical requirements, such as the localization of assets and goods, logistic processes, non-time-critical quality control, or data capturing for later usage in virtual design contexts. The challenge in this second use case family is to ensure high availability of the wireless networks, given the harsh industrial environment. Remotely controlling digital factories requires end-to-end communications between remote workers and the factory. This use case family could simply involve the use of tablets or smartphones, or more complex scenarios with augmented reality devices that facilitate the creation of virtual back office teams that exploit the collected data for preventive analytics. In this use case family, there is a less stringent need for low latency, but high availability is key to ensure that emergency maintenance actions can take place immediately. The fourth use case family involves seamless connectivity between different production sites, as well as with further actors in the value chain (e.g. suppliers, logistics). A high level of network and service availability and reliability, including the wireless links, is one of the key requirements. The last use case family identified by the 5G-PPP considers that factories will play an important role in the provisioning of the connected goods they produce, for which autonomy is a key requirement. Table 5.1 summarizes the communication requirements for each of the five use case families identified by the 5G-PPP.</para>
<para>The International Society of Automation (ISA) and ETSI also highlight the diverse communication requirements of industrial applications. For example, ISA classifies safety, control, and monitoring applications into six classes based on the importance of message timeliness [9]. ETSI has also investigated the communication requirements of industrial automation in [10] and differentiated two types of applications. The first type involves the use of sensors and actuators in industrial automation, and its main requirement is real-time behavior or determinism. The second type involves communication at higher levels of the automation hierarchy, e.g. at the control or enterprise level, where throughput, security, and reliability become more important. Automation systems are subdivided into three main classes (manufacturing cell, factory hall, and plant level) with different needs in terms of latency (from 5 to 20 ms). Their requirements in terms of latency, update time, and number of devices differ notably between classes (see Table 5.2). However, all three classes require a 10<superscript>&#8211;9</superscript> packet loss rate and a 99.999% application availability.</para>
<table-wrap position="float" id="T5-1">
<label><link linkend="T5-1">Table <xref linkend="T5-1" remap="5.1"/></link></label>
<caption><para>5G-PPP use case families for manufacturing [6]</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr><td valign="top" align="left">&#160;</td><td valign="top" align="left">Use Case Family</td><td valign="top" align="left">Representative Scenarios</td><td valign="top" align="left">Latency</td><td valign="top" align="left">Reliability</td><td valign="top" align="left">Bandwidth</td></tr>
<tr><td valign="top" align="left">1.</td><td valign="top" align="left">Time-critical process optimization inside factory</td><td valign="top" align="left">
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Real-time closed-loop communication between machines to increase efficiency and flexibility</para></listitem>
<listitem><para>3D augmented reality applications for training and maintenance</para></listitem>
<listitem><para>3D video-driven interaction between collaborative robots and humans</para></listitem>
</itemizedlist>
</td><td valign="top" align="left">Ultra low</td><td valign="top" align="left">Ultra high</td><td valign="top" align="left">Low to high</td></tr>
<tr><td valign="top" align="left">2.</td><td valign="top" align="left">Non-time-critical in-factory communication</td><td valign="top" align="left">
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Identification/tracing of objects/goods inside the factory</para></listitem>
<listitem><para>Non-real-time sensor data capturing for process optimization</para></listitem>
<listitem><para>Data capturing for design, simulation, and forecasting of new products and production processes</para></listitem>
</itemizedlist></td><td valign="top" align="left">Less critical</td><td valign="top" align="left">High</td><td valign="top" align="left">Low to high</td></tr>
<tr><td valign="top" align="left">3.</td><td valign="top" align="left">Remote control</td><td valign="top" align="left">
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Remote quality inspection/diagnostics</para></listitem>
<listitem><para>Remote virtual back office</para></listitem>
</itemizedlist>
</td><td valign="top" align="left">Less critical</td><td valign="top" align="left">High</td><td valign="top" align="left">Low to high</td></tr>
<tr><td valign="top" align="left">4.</td><td valign="top" align="left">Intra-/Inter-enterprise communication</td><td valign="top" align="left">
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Identification/tracking of goods in the end-to-end value chain</para></listitem>
<listitem><para>Reliable and secure interconnection of premises (intra-/inter-enterprise)</para></listitem>
<listitem><para>Exchanging data for simulation/design purposes</para></listitem>
</itemizedlist>
</td><td valign="top" align="left">Ultra low to less critical</td><td valign="top" align="left">High</td><td valign="top" align="left">Low to high</td></tr>
<tr><td valign="top" align="left">5.</td><td valign="top" align="left">Connected goods</td><td valign="top" align="left">
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Connecting goods during product lifetime to monitor product characteristics, sensing its surrounding context and offering new data-driven services</para></listitem>
</itemizedlist>
</td><td valign="top" align="left">Less critical</td><td valign="top" align="left">Low</td><td valign="top" align="left">Low</td>
</tr>
</tbody>
</table>
</table-wrap>
<para>The timing requirements depend on different factors. As presented by the 5G-PPP in [6], process automation industries (such as oil and gas, chemicals, food and beverage, etc.) typically require cycle times of about 100 ms. In factory automation (e.g. automotive production, industrial machinery, and consumer products), typical cycle times are 10 ms. The highest demands</para>
<table-wrap position="float" id="T5-2">
<label><link linkend="T5-2">Table <xref linkend="T5-2" remap="5.2"/></link></label>
<caption><para>Performance requirements for three classes of communication in industry established by ETSI [10]</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr valign="top">
<td>&#160;</td><td valign="top" align="left">Manufacturing Cell</td><td valign="top" align="left">Factory Hall</td><td valign="top" align="left">Plant Level</td></tr>
<tr><td valign="top" align="left">Indoor/outdoor application</td><td valign="top" align="left">Indoor</td><td valign="top" align="left">Mostly indoor</td><td valign="top" align="left">Mostly outdoor</td></tr>
<tr><td valign="top" align="left">Spatial dimension L&#215;W&#215;H (m<superscript>3</superscript>)</td><td valign="top" align="left">10&#215;10&#215;3</td><td valign="top" align="left">100&#215;100&#215;10</td><td valign="top" align="left">1000&#215;1000&#215;50</td></tr>
<tr><td valign="top" align="left">Number of devices (typically)</td><td valign="top" align="left">30</td><td valign="top" align="left">100</td><td valign="top" align="left">1000</td></tr>
<tr><td valign="top" align="left">Number of parallel networks (clusters)</td><td valign="top" align="left">10</td><td valign="top" align="left">5</td><td valign="top" align="left">5</td></tr>
<tr><td valign="top" align="left">Number of such clusters per plant</td><td valign="top" align="left">50</td><td valign="top" align="left">10</td><td valign="top" align="left">1</td></tr>
<tr><td valign="top" align="left">Min. number of locally parallel devices</td><td valign="top" align="left">300</td><td valign="top" align="left">500</td><td valign="top" align="left">250</td></tr>
<tr><td valign="top" align="left">Network type</td><td valign="top" align="left">Star</td><td valign="top" align="left">Star/Mesh</td><td valign="top" align="left">Mesh</td></tr>
<tr><td valign="top" align="left">Packet size (on air, byte)</td><td valign="top" align="left">16</td><td valign="top" align="left">200</td><td valign="top" align="left">105</td></tr>
<tr><td valign="top" align="left">Max. allowable latency (end-to-end) incl. jitter/retransmits (ms)</td><td valign="top" align="left">5 &#177; 10%</td><td valign="top" align="left">20 &#177; 10%</td><td valign="top" align="left">20 &#177; 10%</td></tr>
<tr><td valign="top" align="left">Max. on-air duty cycle related to media utilization</td><td valign="top" align="left">20%</td><td valign="top" align="left">20%</td><td valign="top" align="left">20%</td></tr>
<tr><td valign="top" align="left">Update time (ms)</td><td valign="top" align="left">50 &#177; 10%</td><td valign="top" align="left">200 &#177; 10%</td><td valign="top" align="left">500 &#177; 10%</td></tr>
<tr><td valign="top" align="left">Packet loss rate (outside latency)</td><td valign="top" align="left">10<superscript>&#8211;9</superscript></td><td valign="top" align="left">10<superscript>&#8211;9</superscript></td><td valign="top" align="left">10<superscript>&#8211;9</superscript></td></tr>
<tr><td valign="top" align="left">Spectral efficiency (typical) (bit/s/Hz)</td><td valign="top" align="left">1</td><td valign="top" align="left">1.18</td><td valign="top" align="left">0.13</td></tr>
<tr><td valign="top" align="left">Bandwidth requirements (MHz)</td><td valign="top" align="left">8</td><td valign="top" align="left">34</td><td valign="top" align="left">34</td></tr>
<tr><td valign="top" align="left">Application availability</td><td colspan="3" valign="top" align="center">Exceeds 99.999%</td></tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T5-3">
<label><link linkend="T5-3">Table <xref linkend="T5-3" remap="5.3"/></link></label>
<caption><para>Timing requirements for motion control systems [6]</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr><td valign="top" align="left">Requirement</td><td valign="top" align="left">Value</td></tr>
<tr><td valign="top" align="left">Cycle time</td><td valign="top" align="left">1 ms (250 &#181;s &#8230; 31.25 &#181;s)</td></tr>
<tr><td valign="top" align="left">Response time/update time</td><td valign="top" align="left">&#8230; 100 &#181;s</td></tr>
<tr><td valign="top" align="left">Jitter</td><td valign="top" align="left">&lt;1 &#181;s &#8230; 30 ns</td></tr>
<tr><td valign="top" align="left">Switch latency time</td><td valign="top" align="left">&#8230; 40 ns</td></tr>
<tr><td valign="top" align="left">Redundancy switchover time</td><td valign="top" align="left">&lt;15 &#181;s</td></tr>
<tr><td valign="top" align="left">Time synchronization accuracy</td><td valign="top" align="left">&#8230; 100 ns</td></tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T5-4">
<label><link linkend="T5-4">Table <xref linkend="T5-4" remap="5.4"/></link></label>
<caption><para>Communication requirements for some industrial applications [5]</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr valign="top">
 <td>&#160;</td><td valign="top" align="left">Motion Control</td><td valign="top" align="left">Condition Monitoring</td><td valign="top" align="left">Augmented Reality</td></tr>
<tr><td valign="top" align="left">Latency/cycle time</td><td valign="top" align="left">250 &#181;s&#8211;1 ms</td><td valign="top" align="left">100 ms</td><td valign="top" align="left">10 ms</td></tr>
<tr><td valign="top" align="left">Reliability (PER)</td><td valign="top" align="left">1e-8</td><td valign="top" align="left">1e-5</td><td valign="top" align="left">1e-5</td></tr>
<tr><td valign="top" align="left">Data rate</td><td valign="top" align="left">kbit/s&#8211;Mbit/s</td><td valign="top" align="left">kbit/s</td><td valign="top" align="left">Mbit/s&#8211;Gbit/s</td></tr>
</tbody>
</table>
</table-wrap>
<para>are set by motion control applications (printing machines, textiles, paper mills, etc.) requiring cycle times of less than 1 ms with a jitter of less than 1 &#181;s. For motion control, current requirements are shown in Table 5.3. Table 5.4 also shows the communication requirements of three relevant application examples (extracted from [5]) that illustrate the range of diverging and stringent communications requirements imposed by Industry 4.0.</para>
<para>These requirements have been confirmed within AUTOWARE, where the communication requirements of several industrial use cases under development have been analyzed. For example, in the PWR Pack AUTOWARE use case presented in [11], a stringent latency bound of 1 ms with a data rate below 100 kb/s applies to the commands transmitted from a Programmable Logic Controller (PLC) to a robot to control its servomotors and movement, while 1&#8211;100 Mb per image have to be transmitted from a camera to a 3D visualization system tolerating a maximum latency of 5 ms. On the other hand, the communication between a fixed robot and a component-supplying mobile robotic platform within the neutral experimentation facility for collaborative robotics being developed by IK4-Tekniker [12] requires robust, flexible, and highly reliable wireless communication, with latency bounded to some hundreds of milliseconds, to guarantee the coordination and interoperation of both robots.</para>
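<para>As a back-of-the-envelope check of these figures, the link rate needed to deliver an image within its latency budget follows directly from payload size divided by deadline. The calculation below plugs in the values quoted for the PWR Pack use case and, as a simplifying assumption, treats the entire 5 ms budget as transfer time (processing and queuing would in practice reduce it).</para>

```python
# Required link rate (bit/s) to deliver a payload within a deadline,
# treating the whole latency budget as transfer time (simplifying assumption).
def required_rate_bps(payload_bits, deadline_s):
    return payload_bits / deadline_s

# PWR Pack use case figures: 1-100 Mb per image, 5 ms latency budget
print(required_rate_bps(1e6, 5e-3) / 1e6, "Mb/s")    # 200.0 Mb/s for a 1 Mb image
print(required_rate_bps(100e6, 5e-3) / 1e9, "Gb/s")  # 20.0 Gb/s for a 100 Mb image
```

<para>The spread between the two results illustrates why a single communication technology cannot efficiently serve both ends of the requirement range.</para>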
<para>Because the application functions should be applicable to different types of network nodes, they cannot rely only on specific communication functions, but must include additional functions such as smart data distribution and management. It is worth noting that the ultimate Industry 4.0 application performance is the result of the concurrent operation of, and synergies across, communication architectures and data distribution strategies. Table 5.5 shows additional requirements for different application scenarios that impose further constraints on the management of the communications network and on the data management schemes [13, 14]. Massive M2M (machine-to-machine) connectivity will require an Access Point (AP) to support hundreds of thousands of field devices, with obvious limitations on the data rates each device can support, and thus on the rates at which they can be queried for (new) data. Maintenance for such large-scale connectivity should be very low; thus, a very long battery lifetime for such devices will be a necessity. A battery life greater than 10 years means that many hard-to-reach sensors and actuators could only sustain very low data rates. Reliability will play a critical role in industrial deployments with safety protection and control applications, calling for resilient data management schemes. In addition to all these requirements, the network should also be able to provide a pervasive connectivity experience for devices that may transition between outdoor and indoor locations in mobile scenarios. Finally, data availability issues impose other specific requirements. For example, depending on the application, data might not be replicated outside of a set of devices or a geographical area for ownership reasons; data might instead have to be replicated on other groups of nodes for availability. Conversions across data formats might be needed to guarantee interoperability across different factory or enterprise systems. All these issues belong to the broader concept of data sovereignty, which is the main focus of the Industrial Data Space (IDS) initiative [15].</para>
<table-wrap position="float" id="T5-5">
<label><link linkend="T5-5">Table <xref linkend="T5-5" remap="5.5"/></link></label>
<caption><para>Additional requirements for different application scenarios [13, 14]</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr valign="top">
  <td>&#160;</td><td valign="top" align="left">Desired Value</td><td valign="top" align="left">Application Scenario</td></tr>
<tr><td valign="top" align="left">Connectivity</td><td valign="top" align="left">300,000 devices per AP</td><td valign="top" align="left">Massive M2M connectivity</td></tr>
<tr><td valign="top" align="left">Battery life</td><td valign="top" align="left">&gt;10 years</td><td valign="top" align="left">Hard-to-reach deployments</td></tr>
<tr><td valign="top" align="left">Reliability</td><td valign="top" align="left">99.999%</td><td valign="top" align="left">Protection and control</td></tr>
<tr><td valign="top" align="left">Seamless and quick connectivity</td><td valign="top" align="left">&#8211;</td><td valign="top" align="left">Mobile devices</td></tr>
</tbody>
</table>
</table-wrap>
</section>
<section class="lev1" id="sec5-3">
<title>5.3 Industrial Wireless Network Architectures</title>
<para>Traditionally, communication networks in industrial systems have been based on wired fieldbus and Ethernet-based technologies, and often on proprietary standards such as HART, PROFIBUS, Foundation Fieldbus H1, etc. While wired technologies can provide high communications reliability, they cannot fully meet the flexibility and adaptability required by future manufacturing processes for Industry 4.0. Wireless communication technologies present key advantages for industrial monitoring and control systems: they can provide connectivity to moving parts or mobile objects (robots, machinery, or workers) and offer the desired deployment flexibility by minimizing and significantly simplifying cable installation. WirelessHART, ISA100.11a, and IEEE 802.15.4e, which operate in unlicensed frequency bands, are some of the wireless technologies developed to support industrial automation and control applications. These technologies are based on the IEEE 802.15.4 physical and MAC (Medium Access Control) layers and share some fundamental technologies and mechanisms, e.g. centralized network management and Time Division Multiple Access (TDMA) combined with Frequency Hopping (FH). <link linkend="F5-3">Figure <xref linkend="F5-3" remap="5.3"/></link> shows the network architecture for WirelessHART and ISA100.11a. In both examples, there is a central network management entity (referred to as the Network Manager in a WirelessHART network and the System Manager in an ISA100.11a network) that is in charge of the configuration and management, at the data link and network levels, of the communications between the different devices (gateways, routers, and end devices).</para>
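<para>The combination of TDMA and frequency hopping shared by these technologies can be illustrated with a TSCH-style slot-to-channel mapping, in which each link hops over a channel list as the slot counter advances. The channel list and offsets below are example values for illustration, not a hopping profile taken from any of the standards.</para>

```python
# Illustrative TSCH-style hopping: a link's physical channel is derived
# from the Absolute Slot Number (ASN) and a per-link channel offset, so
# consecutive slots use different frequencies and links with different
# offsets never share a channel in the same slot.
# The hopping list below is an example, not a standard profile.
HOP_LIST = [11, 12, 13, 14, 15, 16]   # example IEEE 802.15.4 channel numbers

def channel_for(asn, channel_offset):
    return HOP_LIST[(asn + channel_offset) % len(HOP_LIST)]

for asn in range(4):
    print("slot", asn, "link A:", channel_for(asn, 0),
          "link B:", channel_for(asn, 3))
```

<para>Because the mapping is deterministic given the slot counter, a central manager can hand out slot and offset assignments and every device can compute its channel locally, which is exactly what makes centralized scheduling attractive for reliability.</para>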
<fig id="F5-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-3">Figure <xref linkend="F5-3" remap="5.3"/></link></label>
<caption><para>Examples of centralized management architectures.</para></caption>
<graphic xlink:href="graphics/ch05_fig003.jpg"/>
</fig>
<para>The main objective of centralized network management is to achieve high communication reliability. However, the excessive overhead and reconfiguration time that result from collecting state information at the central manager (e.g., the Network Manager in a WirelessHART network or the System Manager in an ISA100.11a network) and distributing management decisions to end devices limit the reconfiguration and scalability capabilities of centrally managed networks, as highlighted in [16] and [17]. To overcome this drawback, the authors of [17&#8211;21] proposed dividing a large network into multiple subnetworks under a hierarchical management architecture. In this scheme, each subnetwork has its own manager that deals with the wireless dynamics within that subnetwork, while a global entity manages and coordinates the entire network together with the subnetwork managers. The proposals in [19&#8211;21] rely on hierarchical architectures and also propose the integration of heterogeneous technologies to efficiently meet the wide range of communication requirements of industrial applications; the need for heterogeneous technologies in manufacturing processes was already highlighted by ETSI in [10]. For example, the approach proposed in [19], shown in <link linkend="F5-4">Figure <xref linkend="F5-4" remap="5.4"/></link>(a), considers the deployment of several subnetworks at the lowest level of the industrial network architecture, connecting sensors and actuators. The deployed devices collect data and send it to a central control and management system located at the highest level of the network architecture. This industrial wireless network (IWN) integrates and exploits various wireless technologies with different communication capacities at different levels of the architecture. Coordinators at each subnetwork act as sink nodes that collect data from low-bandwidth sensors and transmit it to gateway nodes using higher-bandwidth wireless technologies. The gateway nodes are usually deployed so that they can collect and forward data from various sink nodes to the central control and management system through high-bandwidth technologies. Another example is the network architecture proposed in the framework of the DEWI (Dependable Embedded Wireless Infrastructure) project [22]. The DEWI hierarchical architecture [20] is depicted in <link linkend="F5-4">Figure <xref linkend="F5-4" remap="5.4"/></link>(b). This architecture is based on the concept of DEWI Bubbles. A DEWI Bubble is defined as a high-level abstraction of a set of industrial wireless sensor networks (WSNs) located in proximity, with enhanced interoperability, technology reusability, and cross-domain development. In [20], standard interfaces are defined to allow WSNs implementing different communication technologies to exchange information among them. Each WSN has its own gateway that is in charge of WSN management and protocol translation. The use of resources across the WSNs inside a Bubble is coordinated by a higher-level gateway that also provides protocol translation for the WSNs under its control. Communication between different Bubbles is possible through their corresponding Bubble Gateways. The interfaces, services, and interoperability features of the different nodes and gateways are described in [20]. Ref. [20] focuses on IoT systems and on providing connectivity to a large number of communication devices; however, it does not particularly consider applications with very stringent latency and reliability requirements.</para>
<fig id="F5-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-4">Figure <xref linkend="F5-4" remap="5.4"/></link></label>
<caption><para>Examples of hierarchical IWN architectures.</para></caption>
<graphic xlink:href="graphics/ch05_fig004.jpg"/>
</fig>
<para>Another interesting hierarchical management architecture that considers the use of heterogeneous wireless technologies is presented in [21]; it has been developed in the framework of the KoI project [23]. The architecture in [21] proposes a two-tier management approach for radio resource coordination to support mission-critical wireless communications. To meet the capacity and scalability requirements of the industrial environment, [21] considers the deployment of multiple small cells. Each of these small cells can implement a different wireless technology and has a Local Radio Coordinator (LRC) that is in charge of the fine-grained management of radio resources for the devices in its cell. At a higher level, a single Global Radio Coordinator (GRC) carries out radio resource management over a broader operational area and coordinates the use of radio resources by the different cells to avoid inter-system interference (for wireless technologies using unlicensed bands) and inter-cell interference (for those operating in licensed bands). In [21], the control plane and the data plane are split following the Software-Defined Networking (SDN) principle. Control management is carried out in a centralized mode at the LRCs and the GRC. For the data plane, centralized and assisted Device-to-Device (D2D) modes are considered within each cell.</para>
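<para>The two-tier split of [21] can be sketched as follows: a global coordinator partitions the channel pool into disjoint per-cell sets (preventing inter-cell collisions by construction), while each local coordinator performs fine-grained assignment within its own set. All names, data structures, and the round-robin policies below are our illustrative assumptions, not the actual KoI design.</para>

```python
# Hypothetical sketch of two-tier radio resource coordination: a Global
# Radio Coordinator (GRC) partitions the spectrum among cells, and each
# Local Radio Coordinator (LRC) assigns channels inside its cell.

def grc_partition(channels, cells):
    """GRC: split the channel pool into disjoint per-cell sets."""
    pools = {cell: [] for cell in cells}
    for i, ch in enumerate(channels):
        pools[cells[i % len(cells)]].append(ch)
    return pools

def lrc_assign(pool, devices):
    """LRC: round-robin its local pool over the attached devices."""
    return {dev: pool[i % len(pool)] for i, dev in enumerate(devices)}

pools = grc_partition(list(range(1, 10)), ["cellA", "cellB", "cellC"])
print(pools["cellA"])  # disjoint from cellB/cellC: no inter-cell collisions
print(lrc_assign(pools["cellA"], ["s1", "s2", "s3", "s4"]))
```

<para>Note that device s4 ends up reusing a channel already assigned inside the cell; handling such intra-cell sharing is precisely the fine-grained task left to the LRC, while the GRC only guarantees that different cells never collide.</para>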
<para>5G networks are also being designed to support, among other verticals, Industrial IoT systems [24]. To this end, the use of Private 5G networks has been proposed [25]. Private 5G networks will allow the implementation of local networks with dedicated radio equipment (independent of traffic fluctuations in the wide-area macro network) using shared and unlicensed spectrum, as well as locally dedicated licensed spectrum. The design of these Private 5G networks to support industrial wireless applications considers the implementation of several small cells, integrated in the network architecture as shown in <link linkend="F5-5">Figure <xref linkend="F5-5" remap="5.5"/></link>, to cover the whole industrial environment. Private 5G networks will have to support Ultra-Reliable Low-Latency Communications (URLLC) for time-critical applications, and Enhanced Mobile Broadband services for augmented/virtual reality services. In addition, the integration of 5G networks with Time-Sensitive Networking (TSN)<footnote id="fn5_1" label="1"> <para>TSN is a set of IEEE 802 Ethernet sub-standards that aim to achieve deterministic communication over Ethernet by using time synchronization and a schedule that is shared between all the components (i.e., end systems and switches) within the network. By defining various time-based queues, TSN ensures a bounded maximum latency for scheduled traffic through switched networks, thereby guaranteeing the latency of critical scheduled communication. Additionally, TSN supports convergence, i.e., critical and non-critical communication sharing the same network without interfering with each other, which reduces costs (less cabling is required).</para></footnote> is considered to guarantee deterministic end-to-end industrial communications, as presented in [24]. <link linkend="F5-6">Figure <xref linkend="F5-6" remap="5.6"/></link> summarizes these key capabilities of Private 5G networks for Industrial IoT systems.</para>
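<para>The bounded-latency property that TSN provides to scheduled traffic can be illustrated with a small computation: if the schedule reserves periodic gate windows for critical traffic, the worst-case wait of a critical frame is determined entirely by the gaps between windows. The cycle and window values below are invented for illustration and do not come from a real TSN configuration.</para>

```python
# Illustrative time-aware schedule: periodic windows reserved for
# critical traffic inside a repeating cycle. All values are invented.

CYCLE_US = 1000                            # 1 ms schedule cycle
CRITICAL_WINDOWS = [(0, 200), (500, 700)]  # (open, close) gate times in us

def worst_case_wait(arrival_us):
    """Wait until a critical window is open for a frame arriving at t."""
    t = arrival_us % CYCLE_US
    waits = []
    for open_t, close_t in CRITICAL_WINDOWS:
        if open_t <= t < close_t:
            waits.append(0)                  # a window is already open
        else:
            waits.append((open_t - t) % CYCLE_US)
    return min(waits)

# The maximum over a full cycle is the bounded latency guaranteed to
# scheduled traffic, regardless of how much best-effort traffic exists:
bound = max(worst_case_wait(t) for t in range(CYCLE_US))
print(bound)  # 300 us: the largest gap between critical windows
```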
<para>The reference communication and data management architecture designed in AUTOWARE is closely aligned with the concepts being studied for industrial 5G networks. Supporting the very different communication requirements demanded by a wide set of industrial applications (from time-critical applications to applications with ultra-high throughput demands) and integrating different communication technologies (wired and wireless) are key objectives of the AUTOWARE communication and data management architecture in order to meet the requirements of Industry 4.0. In fact, AUTOWARE focuses on the design of a communication architecture that can efficiently meet the varying and stringent communication requirements of the wide set of applications and services that will coexist within the factories of the future, in contrast to the architectures proposed in [20] and [21], which are mainly designed to guarantee the communication requirements of a given type of service (connectivity for a large number of communication devices in [20], and mission-critical wireless communications in [21]). In addition, this work goes a step further and analyzes the requirements of the communication architecture from the point of view of data management and distribution.</para>
<fig id="F5-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-5">Figure <xref linkend="F5-5" remap="5.5"/></link></label>
<caption><para>Private 5G Networks architecture for Industrial IoT systems [24].</para></caption>
<graphic xlink:href="graphics/ch05_fig005.jpg"/>
</fig>
<fig id="F5-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-6">Figure <xref linkend="F5-6" remap="5.6"/></link></label>
<caption><para>Key capabilities of Private 5G Networks for Industrial IoT systems [24].</para></caption>
<graphic xlink:href="graphics/ch05_fig006.jpg"/>
</fig>
</section>
<section class="lev1" id="sec5-4">
<title>5.4 Data Management in Industrial Environments</title>
<para>Traditionally, industrial application systems tend to be entirely centralized. For this reason, distributed data management has not been studied extensively in the past, and the emphasis has been put on efficient wireless and wired communication within the industrial environment. The reader can find state-of-the-art approaches on relevant typical networks in [19, 26&#8211;28].</para>
<para>However, there have been some interesting works on various aspects of the data management process, e.g., end-to-end latency provisioning. In [29], the authors present a centralized routing method and, consequently, do not use proxies, special data-handling nodes, or hierarchical data management. In [30], the authors address different optimization objectives, focusing on minimizing the maximum hop distance rather than guaranteeing it as a hard constraint; they also assume a bounded number of proxies and examine only the worst-case number of hops. In [31], the authors present a cross-layer approach that combines MAC-layer and cache management techniques for adaptive cache invalidation, cache replacement, and cache prefetching. In [32], the authors consider a different data management objective: the replacement of locally cached data items with new ones. As the authors argue, the significance of this functionality stems from the fact that data queried in real applications is not random but instead exhibits locality characteristics; they therefore address the design of efficient replacement policies on top of an underlying caching mechanism. In [33], although the authors consider delay aspects and a realistic industrial IoT model (based on WirelessHART), their main objective is to bound the worst-case delay in the network; they also do not exploit the potential presence of proxy nodes and, consequently, stick to the traditional, centralized industrial IoT setting. In [34], the authors consider a multi-hop network organized in clusters and provide a routing algorithm and cluster partitioning. Our distributed data management concepts and algorithms can work on top of this approach (and of any clustering approach), for example, by allocating the role of proxies to cluster heads. In fact, clustering and our solutions address two different problems.</para>
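<para>To make the cache-replacement discussion of [32] concrete, the sketch below implements a classical least-recently-used (LRU) policy for locally cached data items, which directly exploits the query locality mentioned above. It is a generic textbook policy, not the specific scheme proposed in [32].</para>

```python
# Generic LRU replacement policy for a node's local data cache.
# Illustrative only; [32] designs more refined locality-aware policies.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)      # mark as recently used
            return self.items[key]
        return None                          # miss: fetch over the network

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("temp", 21.5)
cache.put("pressure", 1.2)
cache.get("temp")            # "temp" becomes most recently used
cache.put("flow", 0.8)       # evicts "pressure", not "temp"
print(list(cache.items))     # ['temp', 'flow']
```

<para>Because recently queried items survive eviction, repeated queries to the same items, i.e., the locality pattern observed in real applications, are served locally instead of over the network.</para>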
</section>
<section class="lev1" id="sec5-5">
<title>5.5 Hierarchical Communication and Data Management Architecture for Industry 4.0</title>
<para>The network architecture presented in this chapter is designed to provide flexible and efficient connectivity and data management in Industry 4.0.</para>
<para>AUTOWARE proposes a hierarchical management architecture that supports the use of heterogeneous communication technologies. The proposed architecture also establishes multiple tiers where communication cells are functionally classified; different tiers establish different requirements in terms of reliability, latency, and data rates and impose different constraints on the management algorithms and the flexibility to implement them.</para>
<section class="lev2" id="sec5-5-1">
<title>5.5.1 Heterogeneous Industrial Wireless Network</title>
<para>As presented in Section 5.2, industrial applications demand a wide range of communication requirements that are difficult to satisfy efficiently with a single communication technology. In this context, the proposed architecture exploits the different capabilities of the available communication technologies (wired and wireless) to meet the wide range of requirements of industrial applications. For example, unlicensed wireless technologies such as WirelessHART, ISA100.11a, or IEEE 802.15.4e must implement mechanisms, such as listen-before-talk-based channel access schemes, to minimize the interference generated to other devices sharing the same band. Although these wireless technologies can efficiently meet the requirements of non-time-critical monitoring or production applications, they usually fail to meet the stringent latency and reliability requirements of time-critical automation and control applications. In addition, these technologies were designed for static and low-bandwidth deployments, whereas the digitalization of industries requires significantly higher bandwidth and the capacity to integrate moving robots and objects in the factory. On the other hand, cellular standards operating in licensed frequency bands introduced mechanisms for latency reduction in Release 14 [35] in order to support certain delay-critical applications. Moreover, Factories of the Future represent one of the key verticals for 5G-PPP, and 5G technologies are being developed to support a large variety of application scenarios, targeting URLLC with a latency of about 1 ms and a reliability of 1&#8722;10<superscript>&#8722;9</superscript> [36]. Also, Private LTE and Private 5G networks will be relevant technologies for industrial environments [25]. As a complement to wireless technologies, wired communication technologies such as TSN can also be considered for communication links between static devices.</para>
<para>In this context, we propose that several subnetworks or cells (we will use the term cell throughout the rest of the chapter) implementing heterogeneous technologies cover the whole industrial plant (or several plants). We adopt the concept of cell to manage the communication and data management resources and to improve network scalability. Different cells can use different communication technologies, and cells using different communication technologies can overlap in space. Cells using the same technology on different channels can also fully or partially cover the same area. Each network node is connected to the cell that can most efficiently satisfy its communication needs. For example, WirelessHART can be used to monitor a liquid level and control a valve, while 5G communications can be employed for time-critical communications between a sensor and an actuator. TSN could be a good candidate to implement long-distance backhaul links between static devices. <link linkend="F5-7">Figure <xref linkend="F5-7" remap="5.7"/></link> illustrates the concept of cells in the proposed heterogeneous architecture with five cells implementing two different technologies; Technology 1 and Technology 2 could represent WirelessHART and 5G. Technology 3 is used to connect each cell, through a local management entity referred to as Local Manager (LM), to a central management entity represented as the Orchestrator in <link linkend="F5-7">Figure <xref linkend="F5-7" remap="5.7"/></link> (the roles of the LMs and the Orchestrator in the proposed reference communication and data management architecture are presented in the next section). Technology 3 could be implemented with TSN; the communication link between the LMs and the Orchestrator could also be implemented as a multi-hop link that itself uses heterogeneous technologies (e.g., IEEE 802.11 and TSN) for improved flexibility and scalability.</para>
<para>Cells implementing wireless communication technologies that operate in unlicensed spectrum bands can suffer from inter-system and intra-system interference. Mechanisms to detect external interference are needed, and cells need to be coordinated to guarantee interworking and coexistence between concurrently operating technologies. Cells implementing a communication technology that uses licensed spectrum, for example, LTE or 5G networks, are also possible. Although the use of licensed spectrum bands guarantees communications free of external interference, planning and coordination among multiple cells is still needed to control inter-cell interference. Considering the highly dynamic and changing nature of industrial environments, coordination among cells needs to be carried out dynamically in order to guarantee the stringent communication requirements of industrial automation processes.</para>
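<para>Dynamic inter-cell coordination of this kind can be sketched as a conflict-graph coloring problem: cells that can interfere with each other are connected in a graph, and a coordinator assigns each cell the lowest channel not used by any interfering neighbor. The greedy algorithm below is a deliberately simple illustration; a real coordinator would also account for measured interference levels and traffic load.</para>

```python
# Greedy conflict-graph coloring for inter-cell channel assignment.
# Illustrative sketch: cell names and the conflict graph are invented.

def assign_channels(cells, conflicts):
    """conflicts: dict mapping a cell to the set of cells it interferes with."""
    channel = {}
    for cell in cells:  # order could encode priority in a real system
        used = {channel[n] for n in conflicts.get(cell, set()) if n in channel}
        ch = 0
        while ch in used:
            ch += 1
        channel[cell] = ch   # lowest channel free of neighbor conflicts
    return channel

# c1 overlaps with both c2 and c3, but c2 and c3 do not overlap each other,
# so they can safely reuse the same channel:
conflicts = {"c1": {"c2", "c3"}, "c2": {"c1"}, "c3": {"c1"}}
assignment = assign_channels(["c1", "c2", "c3"], conflicts)
print(assignment)  # {'c1': 0, 'c2': 1, 'c3': 1}
```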
</section>
<section class="lev2" id="sec5-5-2">
<title>5.5.2 Hierarchical Management</title>
<para>The proposed reference communication and data management architecture considers a hierarchical structure that combines local and decentralized management with centralized decisions to efficiently use the available communication resources and carry out data management in the system. The management structure is depicted in <link linkend="F5-7">Figure <xref linkend="F5-7" remap="5.7"/></link>, and the functions of its two key components, the Orchestrator and the LMs, are described next.</para>
<fig id="F5-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-7">Figure <xref linkend="F5-7" remap="5.7"/></link></label>
<caption><para>Hierarchical and heterogeneous reference architecture to support CPPS connectivity and data management.</para></caption>
<graphic xlink:href="graphics/ch05_fig007.jpg"/>
</fig>
<section class="lev3" id="sec5-5-2-1">
<title>5.5.2.1 Hierarchical communications</title>
<para>The Orchestrator is in charge of the global coordination of the radio resources assigned to the different cells. It establishes constraints on radio resource utilization that each cell has to comply with, in order to guarantee the coordination and interworking of the different cells and, ultimately, the requirements of the industrial applications deployed across the whole plant. For example, the Orchestrator must avoid inter-cell interference between cells implementing the same licensed technology. It must also guarantee interworking among cells implementing wireless technologies that use unlicensed spectrum bands in order to avoid inter-system interference, for example, by dynamically allocating non-interfering channels to different cells based on the current demand. An LM is implemented at each cell; it is in charge of the local management of the radio resources within its cell and makes local decisions to ensure that the communication requirements of the nodes in its cell are satisfied.</para>
<para>As shown in <link linkend="F5-8">Figure <xref linkend="F5-8" remap="5.8"/></link>, LMs are in charge of management functions such as Radio Resource Allocation, Power Control, and Scheduling. These functions locally coordinate the use of radio resources among the devices attached to the same cell and require very short response times. Intra-Cell Interference Control also needs to be carried out by the LM if several transmissions are allowed to share radio resources within the same cell. LMs also report the performance levels experienced within their cells to the Orchestrator. Thanks to its global vision, the Orchestrator has the information required and the ability to adapt and (re-)configure the whole network. For example, upon changes in the configuration of the industrial plant or in the production system, the Orchestrator can reallocate frequency bands to cells implementing licensed technologies based on the new load conditions or the new communication requirements. It could also establish new interworking policies to control interference between different cells working in the unlicensed spectrum. The Orchestrator can also establish constraints on the maximum transmission power or on the radio resources to allocate to certain transmissions in order to guarantee coordination between different cells. It is also in charge of Admission Control: the Orchestrator decides to which cell a new device is attached, considering the communication capabilities of the device, the communication requirements of the application, and the current operating conditions of each cell.</para>
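<para>The admission-control decision described above can be sketched as follows: among the cells whose latency capability satisfies the new device's requirement, the Orchestrator picks the one with the most spare capacity. The cell parameters and field names below are invented for the example.</para>

```python
# Toy admission-control policy for the Orchestrator. Cell latency and
# free-capacity figures are invented for illustration.

cells = [
    {"name": "wirelesshart-1", "latency_ms": 100, "free_capacity": 0.6},
    {"name": "5g-urllc-1",     "latency_ms": 1,   "free_capacity": 0.3},
    {"name": "5g-urllc-2",     "latency_ms": 1,   "free_capacity": 0.5},
]

def admit(required_latency_ms, cells):
    """Attach the device to the feasible cell with the most headroom."""
    feasible = [c for c in cells if c["latency_ms"] <= required_latency_ms]
    if not feasible:
        return None                       # reject: no cell can serve it
    return max(feasible, key=lambda c: c["free_capacity"])["name"]

print(admit(5, cells))     # time-critical device -> least-loaded URLLC cell
print(admit(200, cells))   # monitoring device -> any cell, most headroom
```

<para>A time-critical device is thus steered to a 5G cell even when a WirelessHART cell is less loaded, whereas a monitoring device can be served by whichever cell has the most spare capacity.</para>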
<fig id="F5-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-8">Figure <xref linkend="F5-8" remap="5.8"/></link></label>
<caption><para>Communication and data management functions in different entities of the hierarchical architecture.</para></caption>
<graphic xlink:href="graphics/ch05_fig008.jpg"/>
</fig>
<para>The described hierarchical communication and data management architecture corresponds to the control plane. We consider that control plane and user plane<footnote id="fn5_2" label="2"> <para>The User Plane carries the network user traffic, i.e., the data that is generated and consumed by the AUTOWARE applications and services. The Control Plane carries signaling traffic, and is critical for the correct operation of the network. For example, signaling messages would be needed to properly configure a wired/wireless link to achieve the necessary latency and reliability levels to support an application. They would also be needed to intelligently control the data management process. The Control Plane therefore is needed to enable the user data exchange between the different AUTOWARE components.</para></footnote> are separated. Therefore, although a centralized management is adopted within a cell, nodes in proximity might communicate directly using D2D communications. In some cells, end-devices might also participate in management functions, for example, if distributed radio resource allocation algorithms are considered for D2D communications in 5G cells. End devices can also participate in other management functions such as Power Control or Scheduling (see <link linkend="F5-8">Figure <xref linkend="F5-8" remap="5.8"/></link>).</para>
</section>
<section class="lev3" id="sec5-5-2-2">
<title>5.5.2.2 Data management</title>
<para>The Orchestrator plays an important role in facilitating the development of novel smart data distribution solutions that cooperate with cloud-based service provisioning and communication technologies. Smart proactive data storage/replication techniques can be designed to ensure that data is located where it can be accessed by the appropriate decision makers in a timely manner, based on the performance of the underlying communication infrastructure. Consequently, the Orchestrator provides an opportunity to implement different types of data-oriented automation functions at reduced cost, such as interactions with external data providers or requestors, inter-cell data distribution planning, and the management and coordination of the LMs.</para>
<para>On the other hand, it is widely recognized that entirely centralized solutions to collect and manage data in industrial environments are not always suitable [38, 39]. This is because, in order to assure quick reaction, process monitoring and automation control may span multiple physical locations. Additionally, the adoption of IoT technologies, with the associated massive amounts of generated data, makes decentralized data management inevitable. A significant challenge is that, when data are managed across multiple physical locations, data distribution needs to be carefully designed so as to ensure that industrial process control is not affected by the well-known issues related to communication delays and jitter [26, 40].</para>
<para>For data management, the allocation of roles to the Orchestrator, the LMs, and individual devices is less precisely defined in general, and can vary significantly on a per-application and per-scenario basis. In general, we expect the Orchestrator to decide in which cells (each controlled by one LM) data need to be available and thus replicated, as well as out of which cells data must not be replicated for ownership reasons. It would implement, in collaboration with cloud platforms, the authentication of users across cells and, when needed, data transcoding functions. Thus, we expect the Orchestrator to be responsible for managing the heterogeneity issues that arise when data is handled across a number of different cells, possibly owned and operated by different entities. LMs would manage individual cells. They would typically decide where, inside the cell, data need to be replicated, stored, and moved dynamically, based on the requirements of the specific applications and the resources available at the individual nodes. Note that data will in general be replicated across the individual nodes, and not exclusively at the LMs, to guarantee low delays and jitter, which might be excessive if the LMs operated as the only centralized data managers. In some cases, end devices can also participate in management functions, for example, by exploiting D2D communications to directly exchange data between them, implementing localized data replication or storage policies. In those cases, data routing is not necessarily regulated centrally, but can be efficiently distributed using appropriate cooperation schemes. In the architecture, therefore, the control of data management schemes can be performed centrally at the Orchestrator, locally at the LMs, or even at individual devices, as appropriate. Data management operations become distributed and exploit devices that lie between source and destination devices, such as proxies used for data storage and access.</para>
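<para>A minimal sketch of such proxy-based, distributed data placement: given a multi-hop topology, a data item is replicated on some intermediate node that lies within a maximum hop distance of every consumer, so that access latency stays bounded. The topology and the first-fit selection below are illustrative assumptions.</para>

```python
# Hop-bounded replica placement over a multi-hop topology.
# Illustrative sketch: a real LM would also weigh node storage and load.

from collections import deque

def hops_from(src, adj):
    """BFS hop counts from src over an adjacency dict."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def place_replica(adj, consumers, max_hops):
    """Pick any node within max_hops of every consumer, if one exists."""
    tables = [hops_from(c, adj) for c in consumers]
    for node in adj:
        if all(t.get(node, max_hops + 1) <= max_hops for t in tables):
            return node
    return None  # no single proxy can satisfy the latency bound

# Line topology a - b - c - d, with consumers at both ends:
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(place_replica(adj, ["a", "d"], 2))  # a proxy reachable by both
```

<para>In a deployment, an LM could run such a placement inside its cell, while the Orchestrator decides across cells; tightening the hop bound (here to 1) makes a single proxy infeasible and forces per-consumer replicas, matching the observation above that lower latency bounds push data closer to its destinations.</para>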
</section>
</section>
<section class="lev2" id="sec5-5-3">
<title>5.5.3 Multi-tier Organization</title>
<para>In the proposed reference communication and data management architecture, cells are organized in different tiers depending on the communication requirements of the industrial applications they support. LMs of cells in different tiers consider the use of different management algorithms to efficiently meet the stringent requirements of the different industrial applications they support. Regarding scheduling, for example, a semi-persistent scheduling algorithm could be applied in LTE cells to guarantee ultra-low-latency communications; semi-persistent scheduling avoids the delays associated with the exchange of signaling messages to request (from the device to the base station or eNB) and grant (from the base station or eNB to the device) access to the radio resources. However, semi-persistent scheduling might not be adequate for less demanding latency requirements due to the potential underutilization of radio resources. The different latency and reliability requirements of the applications supported by a cell also affect the exact locations where data should be stored and replicated. For example, in time-critical applications, the lower the data access latency bound, the closer to the destination the data should be replicated.</para>
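<para>The scheduling trade-off above can be quantified with a back-of-the-envelope model: dynamic scheduling pays a request/grant signaling exchange on every transmission, while semi-persistent scheduling (SPS) pays it once and then uses pre-configured periodic grants. The timing values are illustrative and are not taken from the LTE specifications.</para>

```python
# Back-of-the-envelope latency comparison of dynamic vs semi-persistent
# scheduling. Timing constants are invented for illustration.

SR_AND_GRANT_MS = 8   # request + grant signaling per dynamic access
TX_MS = 1             # air-interface transmission time

def dynamic_latency(n_tx):
    """Every transmission pays the full request/grant handshake."""
    return n_tx * (SR_AND_GRANT_MS + TX_MS)

def sps_latency(n_tx):
    """One-time configuration, then pre-configured periodic grants."""
    return SR_AND_GRANT_MS + n_tx * TX_MS

print(dynamic_latency(10))  # 90 ms of cumulative access latency
print(sps_latency(10))      # 18 ms: grants are pre-configured
```

<para>The flip side, as noted above, is that SPS reserves its periodic resources even when a device has nothing to send, which is why it suits periodic time-critical traffic rather than bursty, latency-tolerant traffic.</para>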
<fig id="F5-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-9">Figure <xref linkend="F5-9" remap="5.9"/></link></label>
<caption><para>LM&#8211;Orchestrator interaction at different tiers of the management architecture.</para></caption>
<graphic xlink:href="graphics/ch05_fig009.jpg"/>
</fig>
<para>The requirements of the nodes connected to a cell also influence the type of interaction between the LM of the cell and the Orchestrator. LMs of cells that support communication links with loose latency requirements can delegate some of their management functions to the Orchestrator; for these cells, closer coordination between different cells can be achieved. Management decisions performed by LMs based on local information are preferred for applications with highly demanding latency requirements (see <link linkend="F5-9">Figure <xref linkend="F5-9" remap="5.9"/></link>).</para>
</section>
<section class="lev2" id="sec5-5-4">
<title>5.5.4 Architectural Enablers: Virtualization and Softwarization</title>
<para>Efficiency, agility, and speed are fundamental characteristics that future communication and networking architectures must provide to support the highly diverse and stringent performance requirements of future communication systems (including, but not limited to, industrial ones) [41]. In this context, the communication and data management architecture proposed in this chapter considers the use of RAN Slicing and Cloud RAN as enabling technologies to achieve the sought flexibility and efficiency.</para>
<section class="lev3" id="sec5-5-4-1">
<title>5.5.4.1 RAN slicing</title>
<para>The proposed architecture considers the use of heterogeneous communication technologies. The assignment of communication technologies to industrial applications need not be a one-to-one matching. There is a clear trend nowadays towards designing wireless technologies so that they can support more than one type of application, even applications belonging to different &#8220;verticals&#8221;, each with possibly radically different communication requirements. For example, LTE or 5G networks can be used to satisfy the ultra-low-latency and high-reliability communications of a time-critical automation process, while the same networks could also support applications that require high throughput, such as virtual reality or 4K/8K ultra-high-definition video. This is typically achieved through network virtualization and slicing, which guarantee the isolation of (virtual) resources and independence across verticals, or across applications in the same vertical.</para>
<para>In the proposed architecture, each cell can support several industrial applications with different communication requirements. The industrial applications supported by the same cell might require different management functions or techniques to satisfy their different requirements in terms of transmission rate, delay, or reliability. Moreover, it is important to ensure that application-specific requirements are satisfied independently of the congestion and performance experienced by the other applications supported by the same cell, i.e., performance isolation needs to be guaranteed between different applications; for example, the amount of traffic generated by a given application should not negatively influence the performance of the other applications. In this context, we propose the use of RAN Slicing to address these issues. RAN Slicing is based on SDN (Software-Defined Networking) and NFV (Network Function Virtualization) technologies, and it splits the resources and management functions of a RAN into different slices to create multiple logical (virtual) networks on top of a common network [42]. Each of these slices, in this case virtual RANs, must contain the resources needed to meet the communication requirements of the application or service that the slice supports. As presented in [42], one of the main objectives of RAN Slicing is to assure isolation in terms of performance. In addition, isolation in terms of management must also be ensured, allowing the independent management of each slice as a separate network. As a result, RAN Slicing becomes a key technology for deploying a flexible communication and networking architecture capable of meeting the stringent and diverse communication requirements of industrial applications, and in particular those of URLLC.</para>
<para>In the proposed architecture, each slice of a physical cell is referred to as a virtual cell, as shown in <link linkend="F5-10">Figure <xref linkend="F5-10" remap="5.10"/></link>. Virtual cells resulting from the split of the same physical cell can be located at different levels of the multi-tier architecture depending on the communication requirements of the applications. Each virtual cell implements the appropriate functions based on the requirements of the supported application, and must be assigned the RAN resources required to satisfy the requirements of the communication links it supports.</para>
<fig id="F5-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-10">Figure <xref linkend="F5-10" remap="5.10"/></link></label>
<caption><para>Virtual cells based on RAN Slicing.</para></caption>
<graphic xlink:href="graphics/ch05_fig010.jpg"/>
</fig>
<para>RAN resources (e.g., data storage, computing, radio resources, etc.) must be allocated to each virtual cell considering the operating conditions, such as the amount of traffic, the link quality, etc. The amount of RAN resources allocated to each virtual cell must therefore be dynamically adapted based on the operating conditions. Within the proposed reference architecture, the Orchestrator is the management entity in charge of creating and managing RAN slices or virtual cells. Thanks to the reports received from the LMs, the Orchestrator has a global view of the performance experienced at the different (virtual) cells. As a result, it is able to decide the amount of RAN resources that must be assigned to each virtual cell to guarantee the communication requirements of the applications.</para>
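<para>As a rough illustration of this adaptation loop (a simplified sketch under our own assumptions, not the actual Orchestrator logic), the Orchestrator could keep a minimum guarantee per virtual cell and distribute the remaining resources in proportion to the load reported by each LM:</para>

```python
def reallocate(total_rbs, reports, minimum):
    """Re-dimension virtual cells from the LMs' periodic reports.

    reports: virtual cell -> offered load reported by its LM (e.g., Mb/s).
    Each cell keeps a guaranteed minimum of resource blocks; the spare
    capacity follows the reported load.
    """
    allocation = {cell: minimum for cell in reports}
    spare = total_rbs - minimum * len(reports)
    total_load = sum(reports.values()) or 1  # avoid division by zero
    for cell, load in reports.items():
        allocation[cell] += round(spare * load / total_load)
    return allocation


# cell-B's heavier reported load attracts more of the spare resources,
# but cell-A never drops below its guaranteed minimum.
alloc = reallocate(100, {"cell-A": 10.0, "cell-B": 30.0}, minimum=10)
```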
<para>Data management functions will operate on top of the virtual networks created by RAN Slicing. Note, however, that the requirements posed by data management will determine part of the network traffic patterns. Therefore, the RAN Slicing defined by the Orchestrator might take into account the traffic patterns resulting from data management operations, in order to optimize the slicing itself.</para>
</section>
<section class="lev3" id="sec5-5-4-2">
<title>5.5.4.2 Cloudification of the RAN</title>
<para>Cloud-based RAN (or simply Cloud RAN) is a novel paradigm for RAN architectures that applies NFV and cloud technologies to deploy RAN functions [43]. Cloud RAN splits the base station into a radio unit, known as the Remote Radio Head (RRH), and a signal-processing unit referred to as the Base Band Unit (BBU) [44]. The key concept of Cloud RAN is that the signal-processing units, i.e., the BBUs, can be moved to the cloud. Cloud RAN shifts from the traditional distributed architecture to a centralized one, where some or all of the base station processing and management functions are placed in a central virtualized BBU pool (a virtualized cluster, which can consist of general-purpose processors performing baseband processing, shared by all cells) [43]. Virtual BBUs and RRHs are connected by a fronthaul network. Centralizing processing and management functions in the same location improves interworking and coordination among cells; since the virtual BBUs are co-located, data can be exchanged among them more easily and with shorter delays.</para>
<para>We foresee Cloud RAN as the baseline technology for the proposed architecture, to implement hierarchical and multi-tier communication management. Cloud RAN will be a key technology to achieve tight coordination between cells in the proposed architecture and to control inter-cell and inter-system interference. As presented in [45] and [46], Cloud RAN can support different functional splits that are well aligned with the foreseen needs of industrial applications: some processing functions can be executed remotely, while functions with strong real-time requirements can remain at the cell site. In the proposed communication and data management architecture, the decision about how to perform this functional split must be made by the Orchestrator considering the particular communication requirements of the industrial applications supported by each cell (see <link linkend="F5-11">Figure <xref linkend="F5-11" remap="5.11"/></link>).</para>
<fig id="F5-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-11">Figure <xref linkend="F5-11" remap="5.11"/></link></label>
<caption><para>Cloudification of the RAN.</para></caption>
<graphic xlink:href="graphics/ch05_fig011.jpg"/>
</fig>
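<para>A hedged sketch of such a functional-split decision (the rule and all names are our own illustration, not a catalogue of standardized split options): a function is centralized in the virtual BBU pool only if its processing deadline leaves room for the fronthaul round trip; otherwise it stays at the cell site.</para>

```python
def place_functions(deadlines_ms, fronthaul_rtt_ms, margin_ms=1.0):
    """Assign each RAN function to the BBU pool or the cell site.

    deadlines_ms: function name -> real-time deadline in milliseconds.
    A function is centralized only if its deadline exceeds the fronthaul
    round-trip time plus a processing margin (an assumed rule of thumb).
    """
    return {name: ("bbu-pool" if deadline > fronthaul_rtt_ms + margin_ms
                   else "cell-site")
            for name, deadline in deadlines_ms.items()}


# A sub-millisecond scheduler stays local; a more tolerant function
# can be moved to the centralized BBU pool over a 2 ms fronthaul.
split = place_functions({"mac-scheduler": 0.5, "pdcp": 10.0},
                        fronthaul_rtt_ms=2.0)
```

<para>In the architecture, the Orchestrator would apply such a rule per cell, using the requirements reported for the industrial applications that the cell supports.</para>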
<para>The Cloud RAN architectural paradigm allows for hardware resource pooling, which reduces operational costs by lowering power and energy consumption compared to traditional architectures [43], an attractive incentive for industrial deployment. The cloudification of the RAN will also leverage RAN Slicing on a single network infrastructure and will increase the flexibility to construct on-demand slices supporting individual service types or applications within a cell.</para>
</section>
</section></section>
<section class="lev1" id="sec5-6">
<title>5.6 Hybrid Communication Management</title>
<para>Communication systems must be able to support the high dynamism of industrial environments, which results from the coexistence of different industrial applications, different types of sensors, the mobility of nodes (robots, machinery, vehicles, and workers), and changes in production demands. Industry 4.0 thus demands flexible and dynamic communication networks able to adapt their configuration to changes in the environment so as to seamlessly ensure the communication requirements of industrial applications. To this end, communication management decisions must be based on the current operating conditions and on continuous monitoring of the experienced performance. The proposed hierarchical communication and data management architecture allows the implementation of hybrid communication management schemes that integrate local and decentralized management decisions while maintaining close coordination through a central management entity (the Orchestrator in the reference AUTOWARE architecture) with global knowledge of the performance experienced in the whole industrial communication network. Hybrid communication management introduces flexibility in the management of wireless connections and increases the capability of the network to detect and react to local changes in the industrial environment while efficiently guaranteeing the communication requirements of the industrial applications and services supported by the whole network.</para>
<para>In hybrid management schemes, management entities must interact to coordinate their decisions and ensure the correct operation of the whole network. <link linkend="F5-12">Figure <xref linkend="F5-12" remap="5.12"/></link> represents the interactions between the management entities of the hierarchical architecture: the Orchestrator, LMs, and end-devices (as presented in Section 5.2, end-devices might also participate in the communication management). Boxes within each management entity represent different functions executed at each entity:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Local measurements: This function measures physical parameters of the communication link, such as the received signal level (received signal strength indication, RSSI), the signal-to-noise ratio (SNR), etc. In addition, it measures and evaluates the performance experienced in the communication, such as throughput, delay, packet error ratio (PER), etc. This function is performed by each entity on its own communication links.</para></listitem>
<listitem><para>Performance gathering: This function collects information about the performance experienced at the different cells. It is performed at the LMs, which collect the performance information gathered by the end-devices within their cells, and also at the Orchestrator, which receives the performance information gathered by the LMs.</para></listitem>
<listitem><para>Reasoning: The reasoning function processes the data obtained by the local measurements and performance gathering functions to synthesize higher-level performance information. The reasoning performed at each entity depends on the particular application supported (and its communication requirements) and on the particular management algorithm implemented. For example, if a cell supports time-critical control applications, the 99th percentile of the latency experienced by transmitted packets might be of interest, while the average throughput achieved in the communication could be required to analyze the performance of a 3D visualization application.</para></listitem>
<listitem><para>Reporting: This function sends periodic performance reports to the management entity in the higher hierarchical level. Particularly, end-devices send periodic reports to the LMs, which in turn report performance information to the Orchestrator.</para></listitem>
<listitem><para>Global/local/communication management decision: This function executes the decision rule or decision policy. It can be any of the communication management functions shown in <link linkend="F5-8">Figure <xref linkend="F5-8" remap="5.8"/></link>: for example, Admission Control or Inter-Cell Interference Coordination algorithms can be executed as the Global management decision function in the Orchestrator, Power Control or Radio Resource Allocation within a cell can be executed as the Local management decision function in the LMs, and Scheduling or Power Control can be executed as the Communication management decision function at the end-devices.</para></listitem>
</itemizedlist>
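<para>The Reasoning function above can be sketched in a few lines (our own illustration, with hypothetical indicator names): raw local measurements are condensed into the high-level indicators each application needs, e.g., a 99th-percentile latency for a time-critical control application and an average throughput for a 3D visualization application.</para>

```python
def percentile(samples, p):
    """Nearest-rank percentile, sufficient for a monitoring report."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]


def reason(latencies_ms, throughputs_mbps):
    """Condense raw measurements into high-level performance indicators
    to be reported to the next management entity in the hierarchy."""
    return {
        "latency_p99_ms": percentile(latencies_ms, 99),          # control apps
        "avg_throughput_mbps": sum(throughputs_mbps) / len(throughputs_mbps),
    }


# One outlier latency sample dominates the p99 indicator, which is
# exactly what a time-critical control application needs to see.
report = reason([1.0, 1.2, 0.9, 5.0], [80, 120])
```

<para>In the architecture, such a report is what an end-device sends to its LM, and what each LM in turn forwards to the Orchestrator.</para>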
<para>As shown in <link linkend="F5-12">Figure <xref linkend="F5-12" remap="5.12"/></link>, an end-device performs local measurements of the quality and performance experienced in its communication links. This local data (1) is processed by the reasoning function, which provides high-level performance information (2a) that is reported to the LM in its cell (3). This high-level performance information can also be used by the end-device (2b) to make a management decision (4) and configure its communication parameters, if the end-device has management capabilities. In this case, the management decisions taken by different end-devices in the same cell are coordinated by the LM in the cell, which can also configure some communication parameters of the end-devices (7b). Decisions taken by end-devices are constrained by the decisions taken by the LM (7c). If end-devices do not have management capabilities, their communication parameters are directly configured by the LM (8b). The local management decisions taken by each LM are based on the performance information gathered by all end-devices in its cell (from 1 to <emphasis>n</emphasis> devices in the figure), and also on local measurements performed by the LM itself. This data (5a and 5b) is processed by the reasoning function in the LM, and the resulting high-level performance information (6b) is used to take a local management decision and configure the communication parameters of the end-devices in its cell (7a, 7b, and 7c). Each LM also reports to the Orchestrator the processed information about the performance experienced in its cell (8). The Orchestrator receives performance information from all the LMs (from 1 to <emphasis>m</emphasis> LMs in the figure). The performance information gathered by the LMs (9b), together with local measurements performed by the Orchestrator on its communication links with the LMs (9a), is processed by the reasoning function in the Orchestrator. The high-level performance information (10) is used by the Orchestrator to make a global management decision and configure the radio resources to use at each cell (11a). The global management decisions made by the Orchestrator constrain the local management decisions made by the LMs (11b) to guarantee coordination among the different LMs in the network, and finally to ensure the communication requirements of the industrial applications and services supported by the network.</para>
<fig id="F5-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-12">Figure <xref linkend="F5-12" remap="5.12"/></link></label>
<caption><para>Hybrid communication management: interaction between management entities.</para></caption>
<graphic xlink:href="graphics/ch05_fig012.jpg"/>
</fig>
</section>
<section class="lev1" id="sec5-7">
<title>5.7 Decentralized Data Distribution</title>
<para>The smart data management process provided by the architecture interacts with the underlying networking protocols. In order to provide both efficient data access and end-to-end delay guarantees, one of the technical components of the architecture is a dedicated decentralized data distribution component. The main idea behind decentralized data distribution is to decouple the Network plane from the Data plane. The data-enabled architecture functions selectively move data to different network areas and devise methods for serving data requests, given a known underlying routing protocol. More specifically, the role of the decentralized data distribution component is three-fold:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>It investigates where and when the data should be moved, and to which network areas.</para></listitem>
<listitem><para>It decides which network nodes can serve as special nodes and assume more responsibilities with respect to data management.</para></listitem>
<listitem><para>It indicates how the available data will be distributed and delivered to the individual network devices requesting it.</para></listitem>
</orderedlist>
<para>Note that the architecture enables the storing and replication of data between (i) (potentially mobile) nodes in the factory environment (e.g., the mobile nodes of the factory operators, nodes installed in work cells, nodes attached to mobile robots, etc.); (ii) edge nodes providing storage services for specific (areas of the) factory; and (iii) remote cloud storage services. All three layers can be used synergically, based on the properties of the data and the requirements of the users requesting it. Depending on these properties, data processing may need highly variable computational resources. Advanced scheduling and resource management strategies lie at the core of distributed infrastructure resource usage. However, such strategies must be tailored to the particular algorithm/data combination to be managed. Unlike in the past, the scheduling process, instead of looking for smart ways to adapt the application to the execution environment, now aims at selecting and managing the computational resources available on the distributed infrastructure to fulfill given performance indicators.</para>
<para>The suggested architecture can be used to efficiently deploy the data management functions over typical industrial IoT networks. Initial results show that the decentralized data management scheme of the proposed architecture can indeed enhance several target metrics when applied to various industrial IoT networking settings. In the following subsections, we briefly review some recent examples where the decentralized data distribution concepts resulted in enhanced network performance.</para>
<section class="lev2" id="sec5-7-1">
<title>5.7.1 Average Data Access Latency Guarantees</title>
<para>Applications in industrial IoT networks typically require that there is (i) a set of producers generating data (e.g., IoT sensors); (ii) a set of consumers requiring those data in order to implement the application logic (e.g., IoT actuators); and (iii) a maximum latency <emphasis role="strong"><emphasis>L</emphasis></emphasis><subscript>max</subscript> that consumers can tolerate in receiving data after they have requested them. Under these assumptions, the decentralized data management module (DML) offers an efficient method for regulating the data distribution among producers and consumers. The DML selectively assigns a special role, that of the proxy, to some of the network nodes. Each node that becomes a proxy potentially serves as an intermediary between producers and consumers, even though it might be neither a producer nor a consumer itself. If properly selected, proxy nodes can significantly reduce the average data access latency; however, when a node is selected as a proxy, it has to increase its storage, computational, and communication activities. Thus, the DML minimizes the number of proxies, to reduce the overall system resource consumption as much as possible. In [47], we provided an extensive experimental evaluation, both in a testbed and through simulations, and demonstrated that the proposed decentralized data management (i) guarantees that the access latency stays below the given threshold and (ii) significantly outperforms traditional centralized and even distributed approaches in terms of average data access latency.</para>
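<para>A much-simplified illustration of the proxy-selection idea (a greedy set-cover sketch of our own, not the algorithm evaluated in [47]): choose as few proxies as possible such that every consumer can access data through some proxy within the tolerated latency.</para>

```python
def select_proxies(latency, l_max):
    """Greedily pick a small set of proxy nodes.

    latency[node][consumer] = data access latency the consumer would
    experience if that node served as its proxy. Each round picks the
    candidate covering the most still-uncovered consumers within l_max.
    """
    uncovered = {c for coverage in latency.values() for c in coverage}
    proxies = []
    while uncovered:
        best = max(latency, key=lambda n: sum(
            1 for c, l in latency[n].items() if c in uncovered and l <= l_max))
        covered = {c for c, l in latency[best].items()
                   if c in uncovered and l <= l_max}
        if not covered:
            raise ValueError("some consumer cannot meet the latency bound")
        proxies.append(best)
        uncovered -= covered
    return proxies


# n2 alone covers c2 and c3 within the bound, so it is picked first;
# n1 is then added for c1. Fewer proxies means less resource consumption.
proxies = select_proxies(
    {"n1": {"c1": 2, "c2": 9}, "n2": {"c1": 8, "c2": 3, "c3": 4}}, l_max=5)
```

<para>Greedy set cover is a classic approximation for this kind of minimization; the actual DML additionally decides placement in a decentralized manner, which this sketch does not capture.</para>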
</section>
<section class="lev2" id="sec5-7-2">
<title>5.7.2 Maximum Data Access Latency Guarantees</title>
<para>Another representative example of decentralized data management is the exploitation of a limited set of pre-installed proxy nodes that are more capable than the resource-limited IoT devices in the resource-constrained network (e.g., fog nodes). In contrast to the previous example, here we focused on network lifetime and on maximum (instead of average) data access latencies. The problem we addressed in [48] is the maximization of the network lifetime, given the proxy locations in the network, the initial limited energy supplies of the nodes, the data request patterns (and their corresponding parameters), and the maximum latency that consumer nodes can tolerate after they request data. We proved that the problem is computationally hard, and we designed an offline centralized heuristic algorithm for identifying which paths in the network the data should follow and on which proxies they should be cached, in order to meet the latency constraint and efficiently prolong the network lifetime. We implemented the method and evaluated its performance in a testbed composed of IEEE 802.15.4-enabled network nodes. We demonstrated that the proposed heuristic (i) guarantees data access latency below the given threshold and (ii) performs well in terms of network lifetime with respect to a theoretically optimal solution.</para>
</section>
<section class="lev2" id="sec5-7-3">
<title>5.7.3 Dynamic Path Reconfigurations</title>
<para>As in the previous examples, we assume that applications require a certain upper bound on the end-to-end data delivery latency from proxies to consumers, and that at some point in time a central controller computes an optimal set of multi-hop paths from producers to proxies and from proxies to consumers, which guarantees a maximum delivery delay while maximizing the energy lifetime of the network (i.e., the time until the first node in the network exhausts its energy resources). In this example, we focus on maintaining the network configuration in such a way that application requirements are still met after important network operational parameters change due to unplanned events (e.g., heavy interference, excessive energy consumption), while guaranteeing an appropriate utilization of energy resources. In [49], we provided several efficient algorithmic functions that locally reconfigure the paths of the data distribution process when a communication link or a network node fails. The functions regulate how the local path reconfiguration should be implemented and how a node can join a new path or modify an already existing path, ensuring that there will be no loops. The proposed method can be implemented on top of existing data forwarding schemes designed for industrial IoT networks. We demonstrated through simulations the performance gains of our method in terms of energy consumption and data delivery success rate.</para>
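<para>The loop-freedom check at the heart of such reconfigurations can be sketched as follows (a toy illustration of our own, not the algorithmic functions of [49]): a detour around a failed link is accepted only if the resulting forwarding path never revisits a node.</para>

```python
def repair_path(path, failed_link, detour):
    """Locally splice a detour into a forwarding path, rejecting loops.

    path: list of node ids; failed_link: an (u, v) hop on that path;
    detour: replacement nodes bridging u to v. A candidate path that
    visits any node twice would forward packets in a cycle, so it is
    rejected.
    """
    u, v = failed_link
    i = path.index(u)
    assert path[i + 1] == v, "failed link is not a hop on the path"
    candidate = path[:i + 1] + detour + path[i + 1:]
    if len(set(candidate)) != len(candidate):
        raise ValueError("reconfiguration would introduce a loop")
    return candidate


# The hop a->b fails; node x bridges the gap without creating a cycle.
new_path = repair_path(["p", "a", "b", "c"], ("a", "b"), detour=["x"])
```

<para>In the actual method, such checks are performed locally by the joining node rather than on a global path view, which is what makes the reconfiguration decentralized.</para>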
</section>
</section>
<section class="lev1" id="sec5-8">
<title>5.8 Communications and Data Management within the AUTOWARE Framework</title>
<para>The reference communication and data management architecture of AUTOWARE supports the control plane of the communication network and the data management system. As shown in <link linkend="F5-13">Figure <xref linkend="F5-13" remap="5.13"/></link>, end (or field)-devices such as sensors, actuators, mobile robots, etc., are distributed throughout the factory plant participating in different industrial processes or tasks.</para>
<fig id="F5-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-13">Figure <xref linkend="F5-13" remap="5.13"/></link></label>
<caption><para>Integration of the hierarchical and multi-tier heterogeneous communication and data management architecture into the AUTOWARE Reference Architecture.</para></caption>
<graphic xlink:href="graphics/ch05_fig013.jpg"/>
</fig>
<para>These field devices are then included within the Field Devices Layer of the AUTOWARE Reference Architecture defined in <link linkend="ch010">Chapter <xref linkend="ch10" remap="10"/></link>. Various LMs can be implemented at different workcells or production lines to locally manage the communication resources and data in the different communication cells deployed in the industrial plant. These management nodes are included in the Workcell/Production Line Layer, and they form a distributed management infrastructure that operates close to the field devices. As previously presented, both the Orchestrator and the LMs have communication and data management functionalities.</para>
<para>From the point of view of communications, the Orchestrator is in charge of the global management of the communication resources used by the different cells deployed within a factory plant. When there is only one industrial plant, or when there are multiple but independent plants (from the communications perspective), the main communication functions of the Orchestrator reside in the Factory Layer. However, if different industrial plants are deployed close enough to each other that the operation of a cell in one plant can affect the operation of a cell in another plant, then the Orchestrator should be able to manage the communication resources of the different plants. In this case, some of its communication functions should be part of the Enterprise Layer. Based on this reasoning, the Orchestrator, and in particular its communication management function, should be flexible enough to be implemented in either the Factory or the Enterprise Layer.</para>
<para>From the point of view of data storage, management, and distribution, the data can be circulated and processed at different levels of the architecture, depending on the targeted use case and the requirements that the industrial operator imposes on the application. For example, if the application imposes critical, short access-latency requirements (e.g., Table 5.5), as in condition monitoring, then forcing data transfers back and forth between the Field Layer, the Workcell/Production Line Layer, and the Factory Layer may lead to severely sub-optimal paths, which in turn negatively affect the overall network latency. At the same time, such transfer patterns lead to poor network performance, as field devices often have to tolerate longer response times than necessary. In this case, the data can be stored and managed at the lower layers of the architecture, with the LMs in the role of data coordinator. Another example is when the requirements necessitate computationally more sophisticated methods on larger volumes of data, which can only be executed by more powerful devices than those at the Field Layer, such as 3D object recognition or video tracking, which come with vast amounts of data. In this case, the data can be forwarded, stored, and processed at the higher levels of the architecture, the Factory Layer or the Enterprise Layer, with the Orchestrator in the role of data coordinator.</para>
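<para>The placement rule described in this section can be summarized in a small sketch (our own simplification; layer and coordinator names follow the text): latency-critical data stays low in the hierarchy under an LM, while data requiring heavy processing moves up under the Orchestrator.</para>

```python
def place_data(latency_critical, heavy_volume):
    """Return (layer, coordinator) for a data flow, per the rules above."""
    if heavy_volume:
        # e.g., 3D object recognition or video tracking: needs powerful
        # compute, so it moves up the hierarchy.
        return ("factory/enterprise layer", "Orchestrator")
    if latency_critical:
        # e.g., condition monitoring with short access latency: kept
        # close to the field devices.
        return ("workcell/production line layer", "LM")
    return ("field layer", "end-device")


layer, coordinator = place_data(latency_critical=True, heavy_volume=False)
```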
</section>
<section class="lev1" id="sec5-9">
<title>5.9 Conclusions</title>
<para>A software-defined heterogeneous, hierarchical, and multi-tier communication management architecture with edge-powered smart data distribution strategies has been presented in this chapter to support ubiquitous, flexible, and reliable connectivity and efficient data management in highly dynamic Industry 4.0 scenarios, where multiple digital services and applications are bound to coexist. The proposed architecture exploits the different capabilities of heterogeneous communication technologies to meet the broad range of communication requirements demanded by Industry 4.0 applications. The integration of the different technologies into an efficient and reliable network is achieved by means of a hybrid management strategy consisting of decentralized management decisions coordinated by a central orchestrator. Local management entities organized in different virtual tiers of the architecture can implement different management functions based on the requirements of the applications they support. The hierarchical and multi-tier communication management architecture enables the implementation of cooperating but distinct management functions to maximize the flexibility and efficiency with which the stringent and varying requirements of industrial applications are met. The proposed architecture considers the use of RAN Slicing and Cloud RAN as enabling technologies to reliably and effectively support future Industry 4.0 autonomous assembly scenarios and modular plug &amp; play manufacturing systems. The technological enablers of the communications and data management architecture were identified as part of the AUTOWARE framework, both in the user plane and in the control plane of the AUTOWARE reference architecture.</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This work was funded by the European Commission through the FoF-RIA Project <emphasis>AUTOWARE: Wireless Autonomous, Reliable and Resilient Production Operation Architecture for Cognitive Manufacturing</emphasis> (No. 723909).</para>
</section>
<section class="lev1" id="sec5-10">
<title>References</title>
<para>[1] V. K. L. Huang, Z. Pang, C. J. A. Chen and K. F. Tsang, &#8220;New Trends in the Practical Deployment of Industrial Wireless: From Noncritical to Critical Use Cases&#8221;, in IEEE Industrial Electronics Magazine, vol. 12, no. 2, pp. 50&#8211;58, June 2018.</para>
<para>[2] T. Sauter, S. Soucek, W. Kastner and D. Dietrich, &#8220;The Evolution of Factory and Building Automation&#8221;, in IEEE Industrial Electronics Magazine, vol. 5, no. 3, pp. 35&#8211;48, September 2011.</para>
<para>[3] <emphasis>How Audi is changing the future of automotive manufacturing</emphasis>, Feb. 2017. Available at https://www.drivingline.com/. Last access on 2017/12/01.</para>
<para>[4] C. H. Chen, M. Y. Lin and C. C. Liu, &#8220;Edge Computing Gateway of the Industrial Internet of Things Using Multiple Collaborative Microcontrollers&#8221;, in IEEE Network, vol. 32, no. 1, pp. 24&#8211;32, January&#8211;February 2018.</para>
<para>[5] Plattform Industrie 4.0, &#8220;Network-based communication for Industrie 4.0&#8221;, <emphasis>Publications of Plattform Industrie 4.0</emphasis>, April 2016. Available at http://www.plattform-i40.de. Last access on 2017/10/20.</para>
<para>[6] 5GPPP, <emphasis>5G and the Factories of the Future</emphasis>, October 2015.</para>
<para>[7] H2020 AUTOWARE project website: http://www.autoware-eu.org/.</para>
<para>[8] M. C. Lucas-Esta&#241;, T. P. Raptis, M. Sepulcre, A. Passarella, C. Regueiro and O. Lazaro, &#8220;A software defined hierarchical communication and data management architecture for industry 4.0&#8221;, <emphasis>14th Annual Conference on Wireless On-demand Network Systems and Services</emphasis> (WONS), Isola 2000, pp. 37&#8211;44, 2018.</para>
<para>[9] P. Zand, et al., &#8220;Wireless industrial monitoring and control networks: The journey so far and the road ahead&#8221;, <emphasis>Journal of Sensor and Actuator Networks</emphasis>, vol. 1, no. 2, pp. 123&#8211;152, 2012.</para>
<para>[10] ETSI, &#8220;Technical Report; Electromagnetic compatibility and Radio spectrum Matters (ERM); System Reference Document; Short Range Devices (SRD); Part 2: Technical characteristics for SRD equipment for wireless industrial applications using technologies different from Ultra-Wide Band (UWB)&#8221;, ETSI TR 102 889-2 V1.1.1, August 2011.</para>
<para>[11] E. Molina, et al., &#8220;The AUTOWARE Framework and Requirements for the Cognitive Digital Automation&#8221;, in <emphasis>Proc. of the 18th IFIP Working Conference on Virtual Enterprises (PRO-VE)</emphasis>, Vicenza, Italy, September 2017.</para>
<para>[12] M.C. Lucas-Esta&#241;, J.L. Maestre, B. Coll-Perales, J. Gozalvez, &#8220;An Experimental Evaluation of Redundancy in Industrial Wireless Communications&#8221;, in <emphasis>Proc. 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation</emphasis> (ETFA 2018), Torino, Italy, 4&#8211;7 September 2018.</para>
<para>[13] A. Varghese, D. Tandur, &#8220;Wireless requirements and challenges in Industry 4.0&#8221;, in <emphasis>Proc. 2014 International Conference on Contemporary Computing and Informatics (IC3I)</emphasis>, pp. 634&#8211;638, 2014.</para>
<para>[14] A. Osseiran et al., &#8220;Scenarios for 5G mobile and wireless communications: the vision of the METIS project&#8221;, <emphasis>IEEE Communications Magazine</emphasis>, vol. 52, no. 5, pp. 26&#8211;35, May 2014.</para>
<para>[15] Reference architecture model for the industrial data space, Fraunhofer, 2017. Available at https://www.fraunhofer.de. Last access on 2017/12/01.</para>
<para>[16] S. Montero, J. Gozalvez, M. Sepulcre, G. Prieto, &#8220;Impact of Mobility on the Management and Performance of WirelessHART Industrial Communications&#8221;, in <emphasis>Proc. 17th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)</emphasis>, Krak&#243;w (Poland), 17&#8211;21 September 2012.</para>
<para>[17] C. Lu et al., &#8220;Real-Time Wireless Sensor-Actuator Networks for Industrial Cyber-Physical Systems&#8221;, <emphasis>Proc. of the IEEE</emphasis>, vol. 104, no. 5, pp. 1013&#8211;1024, May 2016.</para>
<para>[18] Xiaomin Li, Di Li, Jiafu Wan, Athanasios V. Vasilakos, Chin-Feng Lai, Shiyong Wang, &#8220;A review of industrial wireless networks in the context of Industry 4.0&#8221;, <emphasis>Wireless Networks</emphasis>, vol. 23, pp. 23&#8211;41, 2017.</para>
<para>[19] J.R. Gisbert, et al., &#8220;Integrated system for control and monitoring industrial wireless networks for labor risk prevention&#8221;, <emphasis>Journal of Network and Computer Applications</emphasis>, vol. 39, pp. 233&#8211;252, ISSN 1084-8045, March 2014.</para>
<para>[20] R. S&#225;mano-Robles, T. Nordstr&#246;m, S. Santonja, W. Rom and E. Tovar, &#8220;The DEWI high-level architecture: Wireless sensor networks in industrial applications&#8221;, in <emphasis>Proc. of the 2016 Eleventh International Conference on Digital Information Management (ICDIM)</emphasis>, Porto, pp. 274&#8211;280, 2016.</para>
<para>[21] I. Aktas, J. Ansari, S. Auroux, D. Parruca, M. D. P. Guirao, and B. Holfeld, &#8220;A Coordination Architecture for Wireless Industrial Automation&#8221;, in <emphasis>Proc. of the European Wireless Conference</emphasis>, Dresden, Germany, May 2017.</para>
<para>[22] DEWI Project website. Available at http://www.dewiproject.eu/. Last access on 2017/05/26.</para>
<para>[23] KoI Project website. Available at http://koi-projekt.de/index.html. Last access on 2017/05/26.</para>
<para>[24] Mehmet Yavuz, <emphasis>How will 5G transform Industrial IoT?</emphasis>, Qualcomm Technologies, Inc., June 2018.</para>
<para>[25] Qualcomm, <emphasis>Private LTE networks create new opportunities for industrial IoT</emphasis>, Qualcomm Technologies, Inc., October 2017.</para>
<para>[26] P. Gaj, J. Jasperneite and M. Felser, &#8220;Computer Communication Within Industrial Distributed Environment&#8212;A Survey&#8221;, in <emphasis>IEEE Transactions on Industrial Informatics</emphasis>, vol. 9, no. 1, pp. 182&#8211;189, February 2013.</para>
<para>[27] D. De Guglielmo, S. Brienza, G. Anastasi, &#8220;IEEE 802.15.4e: A survey&#8221;, <emphasis>Computer Communications</emphasis>, vol. 88, pp. 1&#8211;24, 2016.</para>
<para>[28] T. Watteyne et al., &#8220;Industrial Wireless IP-Based Cyber-Physical Systems&#8221;, <emphasis>Proceedings of the IEEE</emphasis>, vol. 104, no. 5, pp. 1025&#8211;1038, May 2016.</para>
<para>[29] J. Heo, J. Hong and Y. Cho, &#8220;EARQ: Energy Aware Routing for Real-Time and Reliable Communication in Wireless Industrial Sensor Networks&#8221;, <emphasis>IEEE Transactions on Industrial Informatics</emphasis>, vol. 5, no. 1, pp. 3&#8211;11, February 2009.</para>
<para>[30] D. Kim et al., &#8220;Minimum Data-Latency-Bound k-Sink Placement Problem in Wireless Sensor Networks&#8221;, <emphasis>IEEE/ACM Transactions on Networking</emphasis>, vol. 19, no. 5, pp. 1344&#8211;1353, October 2011.</para>
<para>[31] C. Antonopoulos, C. Panagiotou, G. Keramidas and S. Koubias, &#8220;Network driven cache behavior in wireless sensor networks&#8221;, in <emphasis>Proc. of the 2012 IEEE International Conference on Industrial Technology</emphasis>, Athens, pp. 567&#8211;572, 2012.</para>
<para>[32] C. Panagiotou, C. Antonopoulos and S. Koubias, &#8220;Performance enhancement in WSN through data cache replacement policies&#8221;, in <emphasis>Proc. of 2012 IEEE 17th International Conference on Emerging Technologies &amp; Factory Automation (ETFA 2012)</emphasis>, Krakow, pp. 1&#8211;8, 2012.</para>
<para>[33] A. Saifullah, Y. Xu, C. Lu and Y. Chen, &#8220;End-to-End Communication Delay Analysis in Industrial Wireless Networks&#8221;, <emphasis>IEEE Transactions on Computers</emphasis>, vol. 64, no. 5, pp. 1361&#8211;1374, 1 May 2015.</para>
<para>[34] J. Li et al., &#8220;A Novel Approximation for Multi-hop Connected Clustering Problem in Wireless Sensor Networks&#8221;, in <emphasis>Proc. of the 2015 IEEE 35th International Conference on Distributed Computing Systems</emphasis>, Columbus, OH, pp. 696&#8211;705, 2015.</para>
<para>[35] 3GPP, Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Study on latency reduction techniques for LTE, 3GPP TS 36.881, version 14.0.0, 2016.</para>
<para>[36] ITU-R M.2083-0, IMT Vision &#8211; Framework and overall objectives of the future development of IMT for 2020 and beyond, September 2015.</para>
<para>[37] Qualcomm, <emphasis>Private LTE networks create new opportunities for industrial IoT</emphasis>, Qualcomm Technologies, Inc., October 2017.</para>
<para>[38] P. Gaj, A. Malinowski, T. Sauter, A. Valenzano, &#8220;Guest Editorial Distributed Data Processing in Industrial Applications&#8221;, <emphasis>IEEE Trans. Ind. Informatics</emphasis>, vol. 11, no. 3, pp. 737&#8211;740, 2015.</para>
<para>[39] C. Wang, Z. Bi, L. Da Xu, &#8220;IoT and cloud computing in automation of assembly modeling systems&#8221;, <emphasis>IEEE Trans. Ind. Informatics</emphasis>, vol. 10, no. 2, pp. 1426&#8211;1434, 2014.</para>
<para>[40] Z. Bi, L. Da Xu, C. Wang, &#8220;Internet of things for enterprise systems of modern manufacturing&#8221;, <emphasis>IEEE Trans. Ind. Informatics</emphasis>, vol. 10, no. 2, pp. 1537&#8211;1546, 2014.</para>
<para>[41] Gary Maidment, &#8220;One slice at a time: SDN/NFV to 5G network slicing&#8221;, <emphasis>Communicate (Huawei Technologies)</emphasis>, Issue 81, pp. 63&#8211;66, December 2016.</para>
<para>[42] J. Ordonez-Lucena, et al., &#8220;Network Slicing for 5G with SDN/NFV: Concepts, Architectures, and Challenges&#8221;, <emphasis>IEEE Communications Magazine</emphasis>, vol. 55, Issue 5, pp. 80&#8211;87, May 2017.</para>
<para>[43] A. Checko et al., &#8220;Cloud RAN for Mobile Networks&#8212;A Technology Overview&#8221;, <emphasis>IEEE Communications Surveys &amp; Tutorials</emphasis>, vol. 17, no. 1, pp. 405&#8211;426, 2015.</para>
<para>[44] O. Chabbouh, S. B. Rejeb, Z. Choukair, N. Agoulmine, &#8220;A novel cloud RAN architecture for 5G HetNets and QoS evaluation&#8221;, in <emphasis>Proc. International Symposium on Networks, Computers and Communications (ISNCC) 2016</emphasis>, pp. 1&#8211;6, Yasmine Hammamet, Tunisia, May 2016.</para>
<para>[45] <emphasis>Cloud RAN</emphasis>, Ericsson White Paper, September 2015.</para>
<para>[46] <emphasis>5G Network Architecture. A High-Level Perspective</emphasis>, Huawei White Paper, July 2016.</para>
<para>[47] T. P. Raptis and A. Passarella, &#8220;A distributed data management scheme for industrial IoT environments&#8221;, in <emphasis>Proc. of the IEEE 13th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob)</emphasis>, Rome, pp. 196&#8211;203, 2017.</para>
<para>[48] T. P. Raptis, A. Passarella and M. Conti, &#8220;Maximizing industrial IoT network lifetime under latency constraints through edge data distribution&#8221;, in <emphasis>Proc. of the IEEE Industrial Cyber-Physical Systems (ICPS)</emphasis>, Saint Petersburg, Russia, pp. 708&#8211;713, 2018.</para>
<para>[49] T. P. Raptis, A. Passarella and M. Conti, &#8220;Distributed Path Reconfiguration and Data Forwarding in Industrial IoT Networks&#8221;, in <emphasis>Proc. of the 16th IFIP International Conference on Wired/Wireless Internet Communications (WWIC)</emphasis>, Boston, MA, 2018.</para>
</section>
</chapter>

<chapter class="chapter" id="ch06" label="6" xreflabel="6">
<title>A Framework for Flexible and Programmable Data Analytics in Industrial Environments</title>
<para><emphasis role="strong">Nikos Kefalakis<superscript>1</superscript>, Aikaterini Roukounaki<superscript>1</superscript></emphasis>, <emphasis role="strong">John Soldatos<superscript>1</superscript> and Mauro Isaja<superscript>2</superscript></emphasis></para>
<para><superscript>1</superscript> Kifisias 44 Ave., Marousi, GR15125, Greece</para>
<para><superscript>2</superscript> Engineering Ingegneria Informatica SpA, Italy</para>
<para>E-mail: jsol@ait.gr; arou@ait.gr; nkef@ait.gr; mauro.isaja@eng.it</para>
<para>This chapter presents a dynamic and programmable distributed data analytics solution for industrial environments. The solution includes an edge analytics engine for analytics close to the field and in line with the edge computing paradigm. Each edge analytics engine instance is flexible and dynamically configurable based on an Analytics Manifest (AM). It is also based on distributed ledger technologies for configuring analytics tasks that span multiple edge nodes and instances of the edge analytics engine. In particular, it leverages ledger services for synchronizing and combining various AMs in factory wide analytics tasks. Based on these mechanisms, the presented distributed data analytics infrastructure is therefore flexible, configurable, dynamic and resilient. Moreover, it is open source and provides Open APIs (Application Programming Interfaces) that enable access to its functionalities. These features make it unique and valuable for vendors and integrators of industrial automation solutions.</para>
<section class="lev1" id="sec6-1">
<title>6.1 Introduction</title>
<para>A large number of digital automation applications in modern shopfloors collect and process large amounts of digital data as a means of identifying the status of machines and devices (e.g., a machine&#8217;s condition or failure mode) or the context of industrial processes (e.g., possible defects in an entire production process), including relevant events [1]. This context is accordingly used to support decision making, including decisions that drive automation and control operations on the shopfloor [2], such as the configuration of a production line or the operational mode of a machine. Therefore, data analytics operations are an integral element of most digital automation platforms [3], and they are usually integrated with automation and simulation functionalities.</para>
<para>In this context, the automation platform that has been developed in the scope of the FAR-EDGE project includes also distributed data analytics functionalities. In particular, the FAR-EDGE platform offers functionalities in three distinct, yet complementary domains, namely Automation, Analytics and Simulation [4]. The Analytics domain provides the means for collecting, filtering and processing large volumes of data from the manufacturing shopfloor towards calculating indicators associated with manufacturing performance and automation. Analytics functions are offered by a Distributed Data Analytics (DDA) infrastructure, which enables the definition, configuration and execution of analytics functions at two different levels, namely:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Local Level Analytics, i.e. at the edge of a FAR-EDGE deployment. These typically comprise analytics functions that are executed close to the field and have local/edge scope, e.g. they collect and process data streams from a part of a factory, such as data streams associated with a station within the factory. Local Level Analytics in FAR-EDGE are configured and executed by means of an Edge Analytics Engine (EAE), which runs within an Edge Gateway (EG) and is a core part of the DDA.</para></listitem>
<listitem><para>Global Level Analytics, i.e. concerning the factory as a whole and spanning instances of local level analytics. In FAR-EDGE, global level analytics combine information from multiple Edge Gateways (EGs) and instances of the Edge Analytics Engine. They can be configured and executed through an Open API. Global Level Analytics are supported by the ledger and the cloud infrastructures of the FAR-EDGE platform.</para></listitem>
</itemizedlist>
<para>The distinction between edge/local and global/cloud analytics is very common in Big Data analytics systems (e.g. [5&#8211;7]). Moreover, there are different frameworks that can handle streaming analytics at the edge of the network, which is a foundation for edge analytics. The FAR-EDGE DDA infrastructure goes beyond the state of the art of these Big Data systems by employing novel techniques for the flexible configuration of edge analytics and the synchronization of multiple edge analytics deployments. In particular, the FAR-EDGE DDA includes an infrastructure for registering data sources from the plantfloor, as well as for dynamically discovering them.</para>
<para>Moreover, it includes a modular framework for the deployment of analytics functionalities based on a set of (reusable) processing libraries. The latter can be classified into three main types of data processing functions, which enable the pre-processing of data streams (i.e. pre-processing functions), their analysis (i.e. analytics functions) and ultimately the storage of the analytics results (i.e. storage functions). In FAR-EDGE, edge analytics tasks are described as combinations of various instances of these three processing functions in various configurations, which are specified as part of relevant analytics workflows.</para>
<para>In this context, different edge analytics tasks can be described using well-defined configuration files (i.e. Analytics Manifests (AMs)), which reflect analytics workflows and can be manipulated by visual tools. This facilitates the specification and configuration of analytics tasks as part of the DDA. In particular, solution integrators and manufacturers can flexibly configure their analytics operations by defining proper AMs. Based on the use of proper visual tools, such definitions can be performed with almost zero programming, which is an obvious advantage of the FAR-EDGE DDA over conventional edge analytics frameworks. Furthermore, the DDA leverages several distributed ledger services for storing and configuring AMs across different edge nodes, which provides a novel, secure and resilient way of specifying and executing global analytics tasks.</para>
<para>This chapter is devoted to the presentation of the DDA infrastructure of the FAR-EDGE project, which has been briefly introduced in [4]. This chapter extends the work in [4] by providing more detail on the design and implementation of the DDA platform. Special emphasis is placed on describing and highlighting the unique value propositions of the FAR-EDGE DDA in terms of configurability, programmability and resilience. The description includes dedicated parts for the Edge Analytics Engine (EAE), which enables edge-scoped analytics, and for the Ledger Services for data analytics configuration and synchronization, which enable configurable global analytics. Note also that the DDA infrastructure complies with the overall FAR-EDGE reference architecture, which has been introduced in an earlier chapter, while leveraging digital models that are presented in a subsequent chapter. Hence, the present chapter does not detail the overall architecture of the FAR-EDGE platform and the digital models that are used as part of it, since they are both described in other parts of the book.</para>
<para>The structure of this chapter is as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Section 6.2 presents the main drivers behind the development of a framework for DDA in industrial environments, which enhances conventional and popular frameworks for Big Data analytics and streaming analytics.</para></listitem>
<listitem><para>Section 6.3 presents the overall architecture of the DDA, including its main modules.</para></listitem>
<listitem><para>Section 6.4 illustrates the edge analytics engine of the DDA, including the anatomy of the analytics workflows.</para></listitem>
<listitem><para>Section 6.5 presents the ledger services that enable the synchronization of different manifests across edge nodes.</para></listitem>
<listitem><para>Section 6.6 presents information about the open source implementation of the DDA, including information about the underlying technologies that have been (re)used.</para></listitem>
<listitem><para>Section 6.7 is the final and concluding section of the chapter.</para></listitem>
</itemizedlist>
</section>
<section class="lev1" id="sec6-2">
<title>6.2 Requirements for Industrial-scale Data Analytics</title>
<para>As already outlined, most digital automation platforms need to process large volumes of data (including streaming data) as part of wider simulation, decision making and control tasks. Instead of implementing a data analytics function for every new use case, digital automation platforms can offer entire middleware frameworks that facilitate distributed data analytics tasks (e.g., [8&#8211;10]). These frameworks offer facilities for dynamically discovering data sources and executing data processing algorithms over them. In principle, they are Big Data frameworks that should be able to handle large data volumes that feature the 4Vs (volume, variety, velocity and veracity) of Big Data. Beyond these general and high-level requirements, the FAR-EDGE DDA infrastructure has been driven by the following principles:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">High-Performance and Low-Latency:</emphasis> The FAR-EDGE DDA enables the execution of data analytics logic with high performance, i.e. in a way that ensures low-overhead and low-latency processing of data streams. This is especially important towards handling high-velocity data streams i.e. data with very high ingestion rates such as data streams stemming from sensors attached to a machine.</para></listitem>
<listitem><para><emphasis role="strong">Configurable:</emphasis> The DDA is configurable in order to be flexibly adaptable to different business and factory automation requirements, such as the calculation of various KPIs (Key Performance Indicators) for production processes. Configurability should be reflected in the ability to dynamically select the data sources that should be used as part of a data analytics task.</para></listitem>
<listitem><para><emphasis role="strong">Extensible:</emphasis> The DDA provides extensibility in terms of the supported processing functions, i.e. it provides the ability to implement additional data processing schemes with reasonable programming effort. In the case of FAR-EDGE, extensibility concerns the implementation of advanced processing capabilities in terms of pre-processing, analyzing and storing data streams.</para></listitem>
<listitem><para><emphasis role="strong">Dynamic:</emphasis> The DDA is able to dynamically update the results of the analytics functions, upon changes in its configuration. This is essential towards having a versatile analytics engine that can flexibly adapt to changing business requirements and production contexts in volatile industrial environments where data sources join or leave dynamically.</para></listitem>
<listitem><para><emphasis role="strong">Ledger Integration:</emphasis> One of the innovative characteristics of the DDA lies in the use of a distributed ledger infrastructure (i.e. blockchain-based services) [11] towards enabling analytics across multiple EGs, as well as towards facilitating the dynamic configuration of the data analytics rules that comprise these analytics tasks.</para></listitem>
<listitem><para><emphasis role="strong">Stream Handling Capabilities:</emphasis> The DDA can handle streaming data in addition to transactional static or semi-static data. This requirement has been considered in the design and the prototype implementation of the DDA infrastructure, which is based on middleware for handling data streams.</para></listitem>
</itemizedlist>
<para>Table 6.1 associates these design principles with some concrete implementation examples and use cases.</para>
<table-wrap position="float" id="T6-1">
<label><link linkend="T6-1">Table <xref linkend="T6-1" remap="6.1"/></link></label>
<caption><para>Requirements and design principles for the FAR-EDGE DDA</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr><td valign="top" align="left">Design Principles and Goals</td><td valign="top" align="left">Examples and use Cases</td><td valign="top" align="left">DDA Implementation Guidelines</td></tr>
<tr><td valign="top" align="left">High performance and Low-latency</td><td valign="top" align="left">Complex data analyses over real-time streams should be performed within timescales of a few seconds. As an example, consider the provision of quality control feedback about an automation process in a station, based on the processing of data from the station. The DDA supports the collection and analysis of data streams within a few seconds.</td><td valign="top" align="left">Leverage high-performance data streaming technology as background for the EAE implementation (e.g. ECI&#8217;s streaming technology)</td></tr>
<tr><td valign="top" align="left">Configurable</td><td valign="top" align="left"><para>A manufacturer needs to calculate multiple Key Performance Indicators (KPIs) such as indicators relating to quality control and performance of the automation processes. The DDA should flexibly support the on-line calculation of the different KPIs within the same instance of the EAE. To this end, the EAE should be easily configurable to support the calculation of all desired KPIs, ideally with minimal or even zero programming.</para>
<para>Configurability can be gauged based on the time needed to set up and deploy a data analytics workflow comprising several processing functions. The use of the EAE is expected to reduce this time, when compared to cases where data analytics are programmed from scratch (i.e. without support from the EAE middleware).</para></td><td valign="top" align="left">
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Specify and implement DDA as a programmable &amp; configurable engine, which executes analytics configurations specified in appropriate files (&#8220;manifests&#8221;).</para></listitem>
<listitem><para>Parse and execute the analytics rules of the configuration files, without a need for explicitly programming these rules</para></listitem></itemizedlist></td></tr>
<tr><td valign="top" align="left">Extensible</td><td valign="top" align="left">The EAE should be extensible in terms of data processing, data mining and machine learning techniques. For example, in cases where deep learning needs to be employed (e.g., estimation of a failure mode in predictive maintenance), the EAE must support the execution of machine learning functions, including AI-based algorithms such as deep neural networks. The latter can, for example, support the detection of complex patterns such as production quality degradation patterns.</td><td valign="top" align="left"><itemizedlist mark="bullet" spacing="normal">
<listitem><para>Provide a library of analytics functions/capabilities and integrate it within a directory.</para></listitem>
<listitem><para>Provide the means for discovering and using analytics functions from the library in analytics configurations.</para></listitem></itemizedlist></td></tr>
<tr><td valign="top" align="left">Dynamic</td><td valign="top" align="left">The EAE should be able to deploy on the fly (i.e. hot deploy) different data analysis instances. For example, when new KPIs should be calculated, the calculation shall be deployed on the fly, without affecting the rest of the deployed KPIs.</td><td valign="top" align="left">Leverage multi-threading and hot deployment capabilities of the selected implementation technologies.</td></tr>
<tr><td valign="top" align="left">Ledger integration</td><td valign="top" align="left">The EAE must integrate functions from the Ledger Services in order to: (i) access the configurations of analytics tasks, such as large-scale distributed analytics tasks, through ledger smart contracts; (ii) collect and analyze data from multiple edge nodes/gateways through access to the publishing services. This is, for example, the case when the data analytics for calculating a production schedule must be computed, as such a task is likely to span multiple EGs.</td><td valign="top" align="left"><itemizedlist mark="bullet" spacing="normal">
<listitem><para>Represent analytics configurations as smart contracts.</para></listitem>
<listitem><para>Implement publishing services driven by the smart contracts and leveraging information from multiple edge nodes.</para></listitem></itemizedlist></td></tr>
<tr><td valign="top" align="left">Stream handling capabilities</td><td valign="top" align="left">The EAE must be able to handle data-intensive streams such as sensor data for predictive maintenance and data from other field devices for quality control in automation.</td><td valign="top" align="left">Leverage the stream handling and management middleware of the ECI.</td></tr>
</tbody>
</table>
</table-wrap>
</section>
<section class="lev1" id="sec6-3">
<title>6.3 Distributed Data Analytics Architecture</title>
<para>A high-level overview of the DDA Infrastructure is provided in <link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link>. The DDA consists of a wide range of components, which are described in the following subsections.</para>
<section class="lev2" id="sec6-3-1">
<title>6.3.1 Data Routing and Preprocessing</title>
<para>The Data Routing and Pre-processing (DR&amp;P) component is in charge of routing data from the data sources (i.e. notably industrial devices) to the Edge Analytics Engine (EA-Engine). The component includes a <emphasis role="strong">Device Registry</emphasis>, where the various devices and data sources announce (i.e. &#8220;register&#8221;) themselves, as well as the means to access their data (i.e. based on connectivity details such as protocol, IP address and port). The registry makes the system dynamic, as it ensures handling of all data sources that register with it. Moreover, the component provides pre-processing capabilities, which allow for transformations to data streams prior to their delivery to the EA-Engine. Note that the DR&amp;P component is edge-scoped, i.e. it is deployed at an Edge Gateway (EG). Likewise, the data sources that are registered and managed in the registry concern only the devices that are attached to that specific edge gateway.</para>
<para>Along with the Device Registry, the DR&amp;P provides a Data Bus, which is used to route streams from the various devices to appropriate consumers, i.e. processors of the EA-Engine. Moreover, the Data Bus is not restricted to routing data streams stemming directly from the industrial devices and other shopfloor data sources. Rather it can also support the routing of additional data streams and events that are produced by the EA-Engine.</para>
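<para>To make the registration and discovery flow above concrete, the following is a minimal, self-contained sketch of an edge-scoped device registry. The class and field names are illustrative assumptions made for this chapter, not the actual FAR-EDGE DR&amp;P API.</para>

```python
# Illustrative sketch of an edge-scoped Device Registry: data sources
# "announce" themselves with their connectivity details, and consumers
# (e.g. the EA-Engine) dynamically discover them.
# Names are assumptions, not the actual FAR-EDGE API.

from dataclasses import dataclass


@dataclass
class DataSource:
    source_id: str
    protocol: str   # e.g. "MQTT", "OPC-UA"
    address: str    # IP address of the device
    port: int


class DeviceRegistry:
    """In-memory registry of the data sources attached to one Edge Gateway."""

    def __init__(self):
        self._sources = {}

    def register(self, source):
        # A newly attached device announces itself with its access details.
        self._sources[source.source_id] = source

    def unregister(self, source_id):
        # Sources may leave dynamically in volatile industrial environments.
        self._sources.pop(source_id, None)

    def discover(self, protocol=None):
        # Dynamic discovery, optionally filtered by protocol.
        return [s for s in self._sources.values()
                if protocol is None or s.protocol == protocol]


registry = DeviceRegistry()
registry.register(DataSource("cps1", "MQTT", "192.168.1.10", 1883))
registry.register(DataSource("cps2", "OPC-UA", "192.168.1.11", 4840))
print([s.source_id for s in registry.discover("MQTT")])  # ['cps1']
```

<para>In an actual deployment, the registry would typically be exposed as a network service so that devices can announce themselves remotely; the in-memory version above only illustrates the register/discover contract.</para>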
</section>
<section class="lev2" id="sec6-3-2">
<title>6.3.2 Edge Analytics Engine</title>
<para>The EA-Engine is a runtime environment hosted in an EG, i.e. at the edge of an industrial FAR-EDGE deployment. It is the programmable and configurable environment that executes data analytics logic locally to meet stringent performance requirements, mainly in terms of latency. The EA-Engine can host multiple analytics instances, which correspond to multiple edge-scoped analytics workflows.</para>
<para>As shown in <link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link>, the EA-Engine comprises several processors, which implement processing functions over the data streams of the Data Bus. As illustrated in a following paragraph, these processors are of three main types, including processors that store/persist data streams, processors devoted to pre-processing functions, as well as processors in charge of data analytics. Furthermore, the outcomes of the EA-Engine can be written to the Data Bus in order to be consumed by other components and processing functions, or even written to local/edge data storage.</para>
<fig id="F6-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link></label>
<caption><para>DDA Architecture and main components.</para></caption>
<graphic xlink:href="graphics/ch06_fig001.jpg"/>
</fig>
</section>
<section class="lev2" id="sec6-3-3">
<title>6.3.3 Distributed Ledger</title>
<para>The Distributed Ledger is used to orchestrate analytics functionalities across multiple Edge Gateways. It is in charge of maintaining the configurations of the different analytics tasks across multiple EGs, while keeping track of how these tasks are composed into factory-wide analytics tasks. Moreover, the distributed ledger is used to compute the outcomes of factory-wide analytics. Overall, the distributed ledger offers two kinds of services to the DDA, namely Data Publishing Services that synchronize the analytics computations and Configuration Services that synchronize the configuration of the analytics services.</para>
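<para>The two service roles described above can be sketched with a plain in-memory stand-in. The class and method names are illustrative assumptions; a real deployment would back these operations with distributed ledger smart contracts rather than a local dictionary.</para>

```python
# Hedged sketch of the two ledger service roles: Configuration Services
# (synchronize analytics configurations across EGs) and Data Publishing
# Services (synchronize analytics computations). The in-memory dicts stand
# in for a real distributed ledger; names are assumptions for illustration.

class LedgerServices:
    def __init__(self):
        self._configs = {}   # Configuration Services: AMs per analytics task
        self._results = {}   # Data Publishing Services: per-EG contributions

    # --- Configuration Services ---
    def store_manifest(self, task_id, edge_gateway, manifest):
        # Each EG contributes its AM to a factory-wide analytics task.
        self._configs.setdefault(task_id, {})[edge_gateway] = manifest

    def get_task_manifests(self, task_id):
        # A factory-wide task is the composition of per-EG manifests.
        return self._configs.get(task_id, {})

    # --- Data Publishing Services ---
    def publish_result(self, task_id, edge_gateway, value):
        self._results.setdefault(task_id, {})[edge_gateway] = value

    def global_result(self, task_id, combine=sum):
        # e.g. a factory-wide KPI combined from per-gateway contributions
        return combine(self._results.get(task_id, {}).values())


ledger = LedgerServices()
ledger.publish_result("throughput-kpi", "eg1", 120)
ledger.publish_result("throughput-kpi", "eg2", 95)
print(ledger.global_result("throughput-kpi"))  # 215
```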
</section>
<section class="lev2" id="sec6-3-4">
<title>6.3.4 Distributed Analytics Engine (DA-Engine)</title>
<para>While the EA-Engine is in charge of data analytics at edge scope, the DA-Engine is in charge of executing global analytics functions based on the analytics configurations that reside in the distributed ledger. The DA-Engine is configurable thanks to its interfacing with a set of data models that describe the configuration of the DDA infrastructure in terms of edge nodes, edge gateways, data sources and the processing functions that are applied over them as part of the DA-Engine. To this end, the DA-Engine interfaces to a models&#8217; repository, which comprises the digital representation of the devices, data sources and edge gateways that are part of the DDA. The Digital Models are kept up to date and synchronized with the status of the DDA&#8217;s elements. As such, they are accessible from the DR&amp;P and EA-Engine components, which make changes in the physical and logical configuration of the analytics tasks. Note also that the DA-Engine stores data within a cloud-based data storage repository, which is intended to persist the results of global analytics tasks.</para>
</section>
<section class="lev2" id="sec6-3-5">
<title>6.3.5 Open API for Analytics</title>
<para>The Open API for Analytics enables external systems to take advantage of the DDA infrastructure functionalities, including both the configuration and execution of factory-wide analytics tasks, which span multiple edge gateways and take advantage of the relevant EA-Engine instances. Using the Open API any integrator of industrial solutions can specify and execute data processing functions over data streams stemming from the full range of devices that are registered in the device registries of the DR&amp;P components of the DDA infrastructure. As illustrated in the figure, this gives rise to the use of the DDA infrastructure by third-party applications.</para>
<para>The following sections provide insights into the operation and novel features of the EA-Engine and the Distributed Ledger, which endows the DDA with modularity, extensibility and configurability.</para>
</section>
</section>
<section class="lev1" id="sec6-4">
<title>6.4 Edge Analytics Engine</title>
<section class="lev2" id="sec6-4-1">
<title>6.4.1 EA-Engine Processors and Programmability</title>
<para>One of the unique value propositions of the EA-Engine is that it is configurable and programmable. These properties stem from the fact that it is designed to handle analytics tasks that are expressed based on the combination of three types of processing functions, which are conveniently called &#8220;processors&#8221;. The three types of processors are as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Pre-processors</emphasis>, which perform pre-processing (e.g. filtering) over data streams. In principle, pre-processors prepare data streams for analysis. A pre-processor interacts with a Data Bus in order to acquire streaming data from the field through the DR&amp;P component. At the same time, it also produces and registers new streams in the same Data Bus, notably streams containing the results of the pre-processing.</para></listitem>
<listitem><para><emphasis role="strong">Storage processors</emphasis>, which store streams to some repository such as a data bus, a data store or a database.</para></listitem>
<listitem><para><emphasis role="strong">Analytics processors</emphasis>, which execute analytics processing functions over data streams ranging from simple statistical computations (e.g., calculation of an average or a standard deviation) to more complex machine learning tasks (e.g., execution of a classification function). Similar to pre-processors, analytics processors consume and produce data through interaction with the Data Bus.</para></listitem>
</itemizedlist>
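<para>The three processor types above, and the way they chain over a data bus, can be sketched as follows. The toy publish/subscribe bus and the class names are assumptions made for illustration; they do not reproduce the actual EA-Engine implementation.</para>

```python
# Illustrative sketch of the three EA-Engine processor types chained over
# a data bus: a pre-processor filters a raw stream, an analytics processor
# computes a running average, and a storage processor persists the result.
# The toy pub/sub bus and names are assumptions, not the actual EA-Engine.

from collections import defaultdict


class DataBus:
    """Toy publish/subscribe bus standing in for the DR&P Data Bus."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, stream, callback):
        self._subs[stream].append(callback)

    def publish(self, stream, value):
        for cb in self._subs[stream]:
            cb(value)


class PreProcessor:
    """Prepares a stream for analysis (here: drops out-of-range samples)."""
    def __init__(self, bus, src, dst, lo, hi):
        self.bus, self.dst, self.lo, self.hi = bus, dst, lo, hi
        bus.subscribe(src, self.on_sample)

    def on_sample(self, v):
        if self.lo <= v <= self.hi:
            self.bus.publish(self.dst, v)  # produce a new, filtered stream


class AnalyticsProcessor:
    """Runs an analytics function (here: a running average)."""
    def __init__(self, bus, src, dst):
        self.bus, self.dst, self.n, self.total = bus, dst, 0, 0.0
        bus.subscribe(src, self.on_sample)

    def on_sample(self, v):
        self.n += 1
        self.total += v
        self.bus.publish(self.dst, self.total / self.n)


class StorageProcessor:
    """Persists results (here: appends to an in-memory store)."""
    def __init__(self, bus, src, store):
        bus.subscribe(src, store.append)


bus, store = DataBus(), []
PreProcessor(bus, "cps1.temp", "cps1.temp.ok", lo=0, hi=100)
AnalyticsProcessor(bus, "cps1.temp.ok", "cps1.temp.avg")
StorageProcessor(bus, "cps1.temp.avg", store)
for sample in [20.0, 999.0, 40.0]:   # 999.0 is filtered out
    bus.publish("cps1.temp", sample)
print(store)  # [20.0, 30.0]
```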
<para>Given these three types of &#8220;processors&#8221;, analytics tasks are represented and described as combinations of multiple instances of such processing functions in the form of a workflow or pipeline. Such workflows are described through an Analytics Manifest (AM), which specifies a combination of the above processors. Hence, an AM follows a well-defined schema (as shown in <link linkend="F6-2">Figure <xref linkend="F6-2" remap="6.2"/></link>), which specifies the processors that comprise the AM. In particular, an AM defines a set of analytics functionalities as a graph of processing functions that comprises the above three types of processors and which can be executed by the EA-Engine.</para>
<fig id="F6-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-2">Figure <xref linkend="F6-2" remap="6.2"/></link></label>
<caption><para>Representation of an Analytics Manifest in XML format (XML Schema).</para></caption>
<graphic xlink:href="graphics/ch06_fig002.jpg"/>
</fig>
<para>Note also that an AM instance is built based on the available devices, data sources, edge gateways and analytics processors, which are part of the data models of the DDA. The latter reflect the status of the factory in terms of available data sources and processing functions, which can be used to specify more sophisticated analytics workflows.</para>
</section>
<section class="lev2" id="sec6-4-2">
<title>6.4.2 EA-Engine Operation</title>
<para>The EA-Engine provides the run-time environment that controls and executes edge analytics instances, which are specified in AMs. In particular, the EA-Engine is able to parse and execute analytics functions specified in an AM, based on the following processes:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Parsing:</emphasis> The EA-Engine parses AMs and identifies the analytics pipeline that has to be executed.</para></listitem>
<listitem><para><emphasis role="strong">Execution:</emphasis> The EA-Engine executes the analytics functions that are identified by the parsing. Note that the EA-Engine is multi-threaded and enables the concurrent (parallel) execution of multiple analytics pipelines, which can correspond to different AMs.</para></listitem>
</itemizedlist>
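<para>The parsing step can be sketched as follows, assuming for illustration that the AM has already been deserialized into a simple processor graph (the dict layout is a hypothetical stand-in for the XML schema of the manifest): processors are ordered topologically, so that each one executes only after the processors it consumes from.</para>

```python
from graphlib import TopologicalSorter  # Python 3.9+

def parse_manifest(am: dict) -> list:
    """Return the processor IDs of an AM in a valid execution order."""
    graph = {p["id"]: set(p.get("inputs", [])) for p in am["processors"]}
    # Inputs that are not processors (e.g. device topics) count as
    # already-satisfied dependencies.
    external = {d for deps in graph.values() for d in deps if d not in graph}
    for source in external:
        graph[source] = set()
    return [n for n in TopologicalSorter(graph).static_order()
            if n not in external]

am = {"processors": [
    {"id": "store4", "inputs": ["analytics3"]},
    {"id": "pre1", "inputs": ["CPS1"]},
    {"id": "pre2", "inputs": ["CPS2"]},
    {"id": "analytics3", "inputs": ["pre1", "pre2"]},
]}
print(parse_manifest(am))  # a valid order, e.g. pre1, pre2, analytics3, store4
```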
<para><link linkend="F6-3">Figure <xref linkend="F6-3" remap="6.3"/></link> illustrates an example topology and runtime operations for the EA-Engine. In this example, two streams (CPS1 and CPS2) are pre-processed by Analytics Processor 1 (a Pre-Processor) and Analytics Processor 2 (a Pre-Processor), respectively, in order to enable the execution of an analytics algorithm in Analytics Processor 3, which is an Analytics Processor. Finally, the pipeline ends up storing the result to a Data Store through Analytics Processor 4, which is a Storage Processor. In this example, the EA-Engine is set up and runs based on the following steps:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Step 1 (Set-up):</emphasis> Based on the description of the topology and required processors in the AM, the engine instantiates and configures the required Analytics Processors. Note that the AM is built based on real information about the factory, which is reflected in the digital models of the DDA infrastructure.</para></listitem>
<listitem><para><emphasis role="strong">Step 2 (Runtime):</emphasis> Analytics Processor 1 consumes and pre-processes streams coming from CPS1. Likewise, Analytics Processor 2 consumes and pre-processes streams coming from CPS2. In both cases, the streams are accessed through the Data Bus.</para></listitem>
</itemizedlist>
<fig id="F6-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-3">Figure <xref linkend="F6-3" remap="6.3"/></link></label>
<caption><para>EA-Engine operation example.</para></caption>
<graphic xlink:href="graphics/ch06_fig003.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Step 3 (Runtime):</emphasis> Analytics Processor 3 consumes the streams produced by Analytics Processors 1 and 2 in order to apply the analytics algorithm. In this case, the analytics processor cannot execute without input from the earlier Analytics Processors.</para></listitem>
<listitem><para><emphasis role="strong">Step 4 (Runtime):</emphasis> Analytics Processor 4 (a Storage Processor) consumes the data stream produced by Analytics Processor 3 and forwards it to the Data Store, which persists the data coming from Analytics Processor 4.</para></listitem>
</itemizedlist>
<para>This is a simple example of the EA-Engine operation, which illustrates the use of all three types of processors in a single pipeline. However, much more complex analytics workflows and pipelines can be implemented based on the combination of the three different types of processors. The only limiting factor is the expressiveness of the AM, which requires that instances of the three processor types are organized in a graph fashion, with one or more processors providing input to others.</para>
<para>Vendors and integrators of industrial automation solutions can take advantage of the versatility of the EA-Engine in two ways:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>First, they can leverage existing processors of the EA-Engine towards configuring and formulating analytics workflows in line with the needs of their application or solution.</para></listitem>
<listitem><para>Second, they can extend the EA-Engine with additional processing capabilities, in the form of new reusable processors.</para></listitem>
</itemizedlist>
<para>In practice, industrial automation solution integrators will use the EA-Engine in both the above ways, which are illustrated in the following paragraphs.</para>
</section>
<section class="lev2" id="sec6-4-3">
<title>6.4.3 Configuring Analytics Workflows</title>
<para>Integrators can configure and execute edge-scoped analytics pipelines. The configuration of a new pipeline involves the following steps:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Discovery of Devices</emphasis> and other data sources registered in the device registry. Analytics workflows can only take advantage of devices and data sources that are registered with the DR&amp;P component.</para></listitem>
<listitem><para><emphasis role="strong">Discovery of available processors</emphasis>, a list of which is maintained in the EA-Engine. The rationale behind this discovery is to reuse existing processors instead of programming new ones. Nevertheless, in cases where the analytics workflow involves a processor that is not yet available, this processor should be implemented from scratch. However, every new processor will become available for reuse in future analytics workflows.</para></listitem>
<listitem><para><emphasis role="strong">Definition and creation of the Analytics Manifest</emphasis>, based on the available (i.e. discovered) devices, data sources and processors. As already outlined, an AM comprises a graph of processors of the three specified types, defines the analytics results to be produced and specifies where they are to be stored. The specification of the AM can take place based on the use of the Open API of the DDA. However, as part of our DDA development roadmap, we will also provide a visual tool for defining AMs, which will facilitate zero-programming specification of edge analytics tasks.</para></listitem>
<listitem><para><emphasis role="strong">Runtime execution of the AM</emphasis>, based on the invocation of appropriate functions of the EA-Engine&#8217;s runtime. This step can be implemented based on the Open API of the DDA, yet it is also possible to execute it through a visual tool.</para></listitem>
</itemizedlist>
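<para>The four configuration steps can be expressed as calls against the Open API of the DDA. The following sketch only builds the requests; the endpoint paths and payload fields are illustrative assumptions, not the documented API.</para>

```python
import json

BASE = "http://dda.example.local/api"  # hypothetical Open API root

def discover_data_sources():
    # Step 1: list devices and data sources registered with the
    # Data Routing and Pre-processing component
    return ("GET", BASE + "/data-sources", None)

def discover_processors():
    # Step 2: list processors already available in the EA-Engine
    return ("GET", BASE + "/processors", None)

def create_manifest(name, processors):
    # Step 3: define an AM as a graph of discovered processors
    body = json.dumps({"name": name, "processors": processors})
    return ("POST", BASE + "/analytics-manifests", body)

def start_manifest(am_id):
    # Step 4: hand the AM to the EA-Engine runtime for execution
    return ("POST", BASE + "/analytics-manifests/" + am_id + "/start", None)

method, path, body = create_manifest("edge-am", [{"id": "pre1"}])
print(method, path)
```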
</section>
<section class="lev2" id="sec6-4-4">
<title>6.4.4 Extending the Processing Capabilities of the EA-Engine</title>
<para>Integrators can specify additional processing functions and make them available for use as part of the EA-Engine. The extension process involves the following steps:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Implementation of a Processor Interface:</emphasis> In order to extend the EA-Engine with a new processor, an integrator has to provide an implementation of a specific interface, i.e. the interface of the processor. In practice, each of the three processor types comes with its own interface.</para></listitem>
<listitem><para><emphasis role="strong">Registration of the Processor to a Registry:</emphasis> Once a new processor is implemented, it has to be registered to a registry. This makes it discoverable by solution developers and manufacturers that develop AMs for their needs, based on available devices and processors.</para></listitem>
<listitem><para><emphasis role="strong">Using the processor</emphasis>: Once a processor becomes available, it can be used for constructing AMs and executing analytics tasks that make use of the new processor.</para></listitem>
</itemizedlist>
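<para>The extension steps above can be sketched as follows. The processor interface and the in-memory registry are illustrative assumptions about the EA-Engine extension mechanism, not its actual classes.</para>

```python
class PreProcessor:
    """Assumed pre-processor interface: one record in, one record out."""
    def process(self, record: dict):
        raise NotImplementedError

PROCESSOR_REGISTRY = {}  # stand-in for the registry queried by the engine

def register_processor(name: str, cls) -> None:
    """Make a new processor discoverable for future AMs."""
    PROCESSOR_REGISTRY[name] = cls

# Step 1: implement the interface, e.g. a simple range filter
class RangeFilter(PreProcessor):
    def __init__(self, field, low, high):
        self.field, self.low, self.high = field, low, high

    def process(self, record: dict):
        value = record[self.field]
        if value >= self.low and self.high >= value:
            return record
        return None  # drop out-of-range readings

# Steps 2 and 3: register it, then reference it by name inside an AM
register_processor("range-filter", RangeFilter)
f = PROCESSOR_REGISTRY["range-filter"]("power", 0, 1000)
print(f.process({"power": 42}))  # record kept
print(f.process({"power": -5}))  # None: filtered out
```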
</section>
<section class="lev2" id="sec6-4-5">
<title>6.4.5 EA-Engine Configuration and Runtime Example</title>
<para>In this section, we use the topology illustrated in <link linkend="F6-3">Figure <xref linkend="F6-3" remap="6.3"/></link> above in order to provide a more detailed insight into the steps needed to configure the EA-Engine, but also in order to illustrate the interactions between the various components both at configuration time and at run time. As already outlined, the example involves two devices (CPS1, CPS2), which generate two data streams, each published under a topic named after its device ID. We therefore need to:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Apply some pre-processing to each one of the two streams (by Processor 1 and Processor 2).</para></listitem>
<listitem><para>Apply an Analytics algorithm (Processor 3) to the pre-processed streams.</para></listitem>
<listitem><para>Persist the result to a Data storage (i.e. the Data Storage).</para></listitem>
</itemizedlist>
<para><link linkend="F6-4">Figure <xref linkend="F6-4" remap="6.4"/></link> illustrates the steps required to register a new processor, build the Edge Analytics configuration (AM), register it to the EA-Engine and instantiate the appropriate Analytics Processors. In particular:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The user of the EA-Engine (e.g. a solution integrator) registers the required new Processors to the Model Repository. To this end, he/she can use an API or a visual tool.</para></listitem>
<listitem><para>In order to set up an AM, all the available processors are discovered from the Model Repository and all the available Data Sources (DSMs) are discovered from the Distributed Ledger.</para></listitem>
<listitem><para>The user now has all the required information and, with the help of the Configuration Dashboards, can set up a valid AM flow for the four Analytics Processors.</para></listitem>
<listitem><para>The AM is set up based on a proper combination of devices, data streams and processors. In this example, the AM includes the required configurations for Processor 1 (APM1), Processor 2 (APM2), Processor 3 (APM3) and Processor 4 (APM4).</para></listitem>
</itemizedlist>
<fig id="F6-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-4">Figure <xref linkend="F6-4" remap="6.4"/></link></label>
<caption><para>EA-Engine configuration example (Sequence Diagram).</para></caption>
<graphic xlink:href="graphics/ch06_fig004.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The AM is accordingly sent to the EA-Engine, which instantiates the four Analytics Processors.</para></listitem>
<listitem><para>The output of the AM is automatically described in a new DSM, which is registered to the Device Registry as a new Data Source and synchronized with the Distributed Ledger through the Device Registry mechanisms.</para></listitem>
<listitem><para>The capabilities of the new processor are also registered to the Distributed Ledger to enable the discoverability of the new processor for future use.</para></listitem>
</itemizedlist>
<para><link linkend="F6-5">Figure <xref linkend="F6-5" remap="6.5"/></link> illustrates the interactions between the EA-Engine components, when the execution of the AM starts. These include:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Instructing the EA-Engine to start the execution of the analytics task, as specified in the analytics manifest (AM1). To this end, the EA-Engine retrieves AM1 from the Distributed Ledger in order to instantiate the processors that AM1 comprises.</para></listitem>
<listitem><para>The EA-Engine instantiates each of the four EA-Processors described in the AM1. Specifically:</para>
<itemizedlist mark="circle" spacing="normal">
<listitem><para>As part of the instantiation of Processor 1 (pre-processor), its specification (APM1) contains the configurations of Processor 1, which include the data inputs, data outputs and processor attributes required for the instantiation. The data type and data model of CPS1 are retrieved from the Ledger Service in order to apply the pre-processing properly. The processor data output description is provided within a new DSM that is registered to the Device Registry. Then, the EA-Processor (Processor 1) subscribes to the &#8220;CPS1&#8221; data stream of the Data Bus to apply the required pre-processing.</para></listitem>

<listitem><para>As part of the instantiation of Processor 2 (pre-processor), its specification (i.e. APM2) contains the configurations of Processor 2, which include the data inputs, data outputs and processor attributes required for the instantiation. The data type and data model of CPS2 are retrieved from the Ledger Service. Also, the EA-Processor (Processor 2) subscribes to the &#8220;CPS2&#8221; data stream of the Data Bus in order to apply the required pre-processing.</para></listitem>
<listitem><para>As part of the instantiation of Processor 3 (analytics processor), its specification (APM3) contains the configurations of Processor 3. Processor 3 subscribes to the topics named after the IDs of Processor 1 and Processor 2 (&#8220;CPS1-Processed 1&#8221; and &#8220;CPS2-Processed 2&#8221;, respectively) in order to apply the required analytics.</para></listitem>
<listitem><para>Finally, as part of the instantiation of Processor 4 (store processor), its specification (APM4) is retrieved from the EA-Storage. Processor 4 subscribes to the topic named after the ID of Processor 3 (&#8220;CPS1-CPS2-Processed 3&#8221;) in order to store the stream to the data storage.</para></listitem></itemizedlist></listitem>
</itemizedlist>

<fig id="F6-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-5">Figure <xref linkend="F6-5" remap="6.5"/></link></label>
<caption><para>EA-Engine initialization example (Sequence Diagram).</para></caption>
<graphic xlink:href="graphics/ch06_fig005.jpg"/>
</fig>
<para>The runtime operation of the EA-Engine is further presented in <link linkend="F6-6">Figure <xref linkend="F6-6" remap="6.6"/></link>, which illustrates the sequence of runtime interactions of the components of the engine, following the conclusion of the above-listed configurations. At runtime, all the different processors run continuously in parallel until they are stopped by the end-user, through a proper API command or through the visual tool. In particular:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Processor 1</emphasis> gets notified every time new CPS1 data is published and collects it. It applies the required pre-processing and pushes the preprocessed data stream back to the data bus under the topic named after its own ID (&#8220;CPS1-Processed 1&#8221;).</para></listitem>
<listitem><para><emphasis role="strong">Processor 2</emphasis> gets notified every time new CPS2 data is published and collects it. It applies the required pre-processing and pushes the preprocessed data stream back to the data bus under the topic named after its own ID (&#8220;CPS2-Processed 2&#8221;).</para></listitem>
</itemizedlist>
<fig id="F6-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-6">Figure <xref linkend="F6-6" remap="6.6"/></link></label>
<caption><para>EA-Engine runtime operation example (Sequence Diagram).</para></caption>
<graphic xlink:href="graphics/ch06_fig006.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Processor 3</emphasis> gets notified every time new Processor 1 and Processor 2 data is published and collects it. It applies the required analytics and pushes the processed data stream back to the data bus under the topic named after its own ID (&#8220;CPS1-CPS2-Processed 3&#8221;).</para></listitem>
<listitem><para><emphasis role="strong">Processor 4</emphasis> gets notified every time new Processor 3 data is published and collects it. It pushes the collected data to the EA-Storage to be persisted.</para></listitem>
</itemizedlist>
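<para>The runtime interactions above can be simulated with a minimal in-memory publish/subscribe bus. The topic names follow the example; the bus itself and the trivial processing functions are illustrative stand-ins for the actual Data Bus and processors.</para>

```python
class DataBus:
    """Toy publish/subscribe bus standing in for the real Data Bus."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        for callback in self.subscribers.get(topic, []):
            callback(data)

bus = DataBus()
store = []  # stand-in for the data store

# Processors 1 and 2: pre-process the device streams (here: clamp negative
# readings to zero) and re-publish under topics named after their own IDs
bus.subscribe("CPS1", lambda d: bus.publish("CPS1-Processed 1", max(d, 0)))
bus.subscribe("CPS2", lambda d: bus.publish("CPS2-Processed 2", max(d, 0)))

# Processor 3: combine the two pre-processed streams (here: sum of the
# latest values) and publish under its own ID
latest = {}
def make_analytics_input(key):
    def on_data(d):
        latest[key] = d
        if len(latest) == 2:
            bus.publish("CPS1-CPS2-Processed 3", sum(latest.values()))
    return on_data
bus.subscribe("CPS1-Processed 1", make_analytics_input("p1"))
bus.subscribe("CPS2-Processed 2", make_analytics_input("p2"))

# Processor 4: persist the analytics result
bus.subscribe("CPS1-CPS2-Processed 3", store.append)

bus.publish("CPS1", 10)
bus.publish("CPS2", 20)
print(store)  # [30]
```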
</section>
</section>
<section class="lev1" id="sec6-5">
<title>6.5 Distributed Ledger and Data Analytics Engine</title>
<section class="lev2" id="sec6-5-1">
<title>6.5.1 Global Factory-wide Analytics and the DA-Engine</title>
<para>Given the presented functionalities of the EA-Engine, the DA-Engine enables the combination and synchronization of data from multiple edge analytics pipelines towards implementing factory-wide analytics. At a high level, the concept of global analytics workflows is similar to that of edge analytics workflows. In particular, an Analytics Manifest (AM) is used to express an analytics workflow based on the combination of analytics tasks that are configured and executed at edge gateways based on properly configured instances of the EA-Engine. To this end, a mechanism for constructing AMs that comprise global analytics tasks is provided through the Open API of the DDA. In particular, the Open API provides the means for creating, updating, deleting, managing and configuring global analytics tasks based on the combination and orchestration of edge analytics workflows.</para>
<para>At a lower level, the implementation of the AM configuration and execution mechanism is offered in two flavours:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">A conventional edge computing implementation</emphasis>, which is subject to conventional central control. It involves an analytics engine that combines edge analytics workflows into global ones at a central orchestration point. This is in line with the classical edge/cloud computing paradigm.</para></listitem>
<listitem><para><emphasis role="strong">A novel distributed ledger implementation</emphasis>, which is based on a disruptive cooperative approach without central control. This cooperative approach is based on the deployment and use of ledger services in each one of the edge nodes that participate in the DDA infrastructure. In particular, ledger services are deployed in each of the edge gateways in order to enable a consensus-based approach regarding the configuration of the global analytics task, as well as its execution based on publishing and combination of data from the edge gateways. Such a collaborative approach is fully decentralized and hence does not present a single point of failure. Moreover, it can be generalized beyond edge gateways in order to enable data analytics workflows that comprise data from field objects (i.e. smart objects) and cloud nodes as well.</para></listitem>
</itemizedlist>
<para>The next sub-section illustrates the scope and operation of these ledger services, which enable a novel and more interesting approach to supporting the functionalities of the DA-Engine.</para>
</section>
<section class="lev2" id="sec6-5-2">
<title>6.5.2 Distributed Ledger Services in the FAR-EDGE Platform</title>
<para>For the implementation of the DA-Engine, we leverage the services of a permissioned blockchain, rather than of one of the popular public blockchains such as Bitcoin and Ethereum. The rationale behind this decision is that permissioned blockchains provide the means for controlling participation and authenticating participants to the blockchain network, while offering superior performance over public blockchains [12]. The latter performance is largely due to the fact that peer nodes (i.e. participants) in these blockchains need not employ complex Proof-of-Work (PoW) mechanisms. For these reasons, a permissioned blockchain is more appropriate for coordinating and synchronizing distributed processes in an industrial context.</para>
<para>In this context, a Ledger Service is a Chaincode program for IBM&#8217;s Hyperledger Fabric, which uses some of the utility services that are provided by the FAR-EDGE platform. Chaincode is always designed to support a well-defined, application-specific process. Hence, the DDA implementation is not based on a generic Ledger Service implementation, but rather on application-specific Ledger Services. Nevertheless, four categories of abstract services are defined as part of the Ledger Tier of the FAR-EDGE Architecture, namely Orchestration, Configuration, Data Publishing and Synchronization. These categories are used to classify the application-specific implementations of Ledger Services rather than to denote some general-purpose framework services. In particular:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Orchestration Services</emphasis> are related to edge automation workflows, aiming at synchronizing distributed edge automation tasks in factory-wide automation workflows.</para></listitem>
<listitem><para><emphasis role="strong">Data Publishing Services</emphasis> support edge analytics algorithms, through the combination of multiple edge analytics pipelines in factory-wide workflows.</para></listitem>
<listitem><para><emphasis role="strong">Synchronization Services</emphasis> enable the reconciliation of several independent views of the same dataset across the factory.</para></listitem>
<listitem><para><emphasis role="strong">Configuration Services</emphasis> support the decentralized system administration.</para></listitem>
</itemizedlist>
<para>Overall, these four categories of Ledger Services cover all the mandatory platform-level functionality that is required for Edge Computing to deliver its promises in a manufacturing context. The Distributed Ledger of the FAR-EDGE platform can then be used to deploy any kind of custom Ledger Service that meets the secure state sharing and/or decentralized coordination requirements of user applications.</para>
<para>Any concrete Ledger Service implementation is responsible for three things:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Defining and managing a data model</emphasis>. While the global state of the Ledger Service is automatically maintained in the background by the DL-Engine &#8211; which logs every state change in the Ledger that is replicated across all the peer nodes of the system &#8211; the data model of such state is shaped in code by the Ledger Service implementation itself. Practically speaking, the data store of a Ledger Service is initialized according to a specific data model by a special code section when the instance is first deployed. Once initialized, no structural changes in the data model occur.</para></listitem>
<listitem><para><emphasis role="strong">Defining and executing business logic</emphasis>. Application logic is coded in software and exposed on the network as a number of application-specific service endpoints, which can be called by clients. These service calls represent the API of the Ledger Service. Through them, callers can query and change the global state of the Ledger Service. The API can be invoked by any authorized client on the network following some well-documented calling conventions of the DL-Engine. Moreover, we have implemented an additional layer of software in order to simplify the development of client applications: each Ledger Service implemented in the project comes with its own client software library &#8211; called Ledger Client &#8211; which an application can embed and use as a local proxy of the actual Ledger Service API. The Ledger Client provides an in-process API, which has simple call semantics.</para></listitem>
<listitem><para><emphasis role="strong">Enforcing (and possibly also defining) fine-grained access and/or usage policies</emphasis>. This is optional, as a basic level of access control is already provided by the DL-Engine, which requires all clients to have a strong digital identity and be approved by a central authority. When more fine-grained control is required &#8211; e.g. an Access Control List (ACL) applied to individual service endpoints &#8211; the Ledger Service implementation is required to manage it as part of its code.</para></listitem>
</itemizedlist>
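<para>The three responsibilities of a concrete Ledger Service can be sketched in plain Python. This stands in for actual Chaincode, and every class, method and identifier below is an illustrative assumption rather than a real Hyperledger Fabric API.</para>

```python
class AnalyticsLedgerService:
    """Toy ledger service: data model, business logic and access policy."""

    def __init__(self, authorized_clients):
        # (1) Data model: initialized once at deployment; its structure
        # does not change afterwards (only its contents do)
        self.state = {"manifests": {}}
        # (3) Fine-grained access policy, on top of the platform-level
        # identity check performed by the DL-Engine
        self.acl = set(authorized_clients)

    def _check(self, client_id):
        if client_id not in self.acl:
            raise PermissionError(client_id)

    # (2) Business logic exposed as application-specific service endpoints
    def put_manifest(self, client_id, am_id, am):
        self._check(client_id)
        self.state["manifests"][am_id] = am  # a logged state change

    def get_manifest(self, client_id, am_id):
        self._check(client_id)
        return self.state["manifests"][am_id]

svc = AnalyticsLedgerService(authorized_clients=["eg-1", "eg-2"])
svc.put_manifest("eg-1", "am1", {"processors": ["pre1", "store2"]})
print(svc.get_manifest("eg-2", "am1"))  # both gateways see the shared state
```

<para>A Ledger Client would wrap such endpoint calls behind an in-process API with simple call semantics.</para>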
<para>In the specific context of the FAR-EDGE Platform, peer nodes are usually &#8211; but not mandatorily &#8211; installed on Edge Gateway servers, together with Edge Tier components. This setup allows DL clients that run on Edge Gateways, like the EA-Engine, to refer to a localhost address by default when resolving Ledger Service endpoints. However, this is not the only possible way to deploy the Ledger Tier in a FAR-EDGE-enabled system: peer nodes can easily be deployed on the Cloud Tier to make them addressable from anywhere, or even embedded in Smart Objects on the Field Tier to make them fully autonomous systems. In complex scenarios, peer nodes can actually be spread across all three physical layers of the FAR-EDGE architecture (Field, Edge and Cloud), exploiting the flexibility of the DL enabler to its full extent.</para>
<fig id="F6-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-7">Figure <xref linkend="F6-7" remap="6.7"/></link></label>
<caption><para>DL deployment choices (right) and EG deployment detail (left).</para></caption>
<graphic xlink:href="graphics/ch06_fig007.jpg"/>
</fig>
</section>
<section class="lev2" id="sec6-5-3">
<title>6.5.3 Distributed Ledger Services and DA-Engine</title>
<para>The DA-Engine takes advantage of two of the above-listed types of Ledger Services, namely the Data Publishing and Configuration services. The DDA infrastructure implements Data Publishing and Configuration services at the Ledger Tier, in order to configure factory-wide AMs and to implement the respective analytics. In particular:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Configuration Services:</emphasis> DDA configurations (i.e. AMs) are represented as smart contracts. Each smart contract is executed by the peers (notably edge gateways) that participate in the configuration and execution of the factory-wide AM. A set of Configuration services (Ledger Services) are used to ensure the configuration of the global analytics manifest based on consensus across the participating nodes. In this case, the distributed ledger is used as a distributed database that holds all the analytics configurations (in terms of manifests and their components). This allows the resilient configuration of global analytics without a need for centralized coordination and control from a single point of (potential) failure.</para></listitem>
<listitem><para><emphasis role="strong">Publishing Services:</emphasis> Publishing Services are implemented in order to compute factory-wide analytics tasks, based on data streams and analytics (i.e. processors) available across multiple instances of the EA-Engine, which are deployed in different Edge Gateways (EGs). The EGs act as peers in this case.</para></listitem>
</itemizedlist>
</section>
</section>
<section class="lev1" id="sec6-6">
<title>6.6 Practical Validation and Implementation</title>
<section class="lev2" id="sec6-6-1">
<title>6.6.1 Open-source Implementation</title>
<para>The DA-Engine is implemented as open-source software/middleware, which is available at the FAR-EDGE github: https://github.com/far-edge/distributed-data-analytics. In the absence of general-purpose Ledger Services, the implementation includes the middleware for the edge analytics framework of Section 6.3, as well as an Open API for creating Analytics Manifests for global, factory-wide analytics. Hence, a subset of the DDA architecture has actually been implemented, which is shown in <link linkend="F6-8">Figure <xref linkend="F6-8" remap="6.8"/></link>. As evident from the figure, the open-source implementation includes the EA-Engine and the DA-Engine, without, however, general-purpose ledger services, which is the reason why the Distributed Ledger database is not depicted in the figure. In a nutshell, the implementation includes and integrates the DR&amp;P, the Data Bus, the Device Registry, the Data Storage (including both cloud and local data storage) and the Model Repository components.</para>
<para>The structure of the open-source codebase is as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">edge-analytics-engine</emphasis>, which contains the source code of the EA-Engine component.</para></listitem>
</itemizedlist>
<fig id="F6-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-8">Figure <xref linkend="F6-8" remap="6.8"/></link></label>
<caption><para>Elements of the open-source implementation of the DDA.</para></caption>
<graphic xlink:href="graphics/ch06_fig008.jpg"/>
</fig>
<fig id="F6-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-9">Figure <xref linkend="F6-9" remap="6.9"/></link></label>
<caption><para>DDA Visualization and administration dashboard.</para></caption>
<graphic xlink:href="graphics/ch06_fig009.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">open-api-for-analytics</emphasis>, which contains the component that implements and supports the Open API for Analytics.</para></listitem>
<listitem><para><emphasis role="strong">mqtt-random-data-publisher</emphasis>, which contains an application that simulates the functionality of the DR&amp;P component in order to facilitate the easier setup of simple demonstrators.</para></listitem>
</itemizedlist>
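<para>A data-source simulator in the spirit of mqtt-random-data-publisher can be sketched as below. The payload fields and the generic publish callable are assumptions for illustration; the real component publishes over MQTT.</para>

```python
import json
import random
import time

def make_reading(device_id):
    """Build one simulated sensor reading for a device."""
    return {"deviceId": device_id,
            "timestamp": time.time(),
            "value": round(random.uniform(0.0, 100.0), 2)}

def run(publish, topic="CPS1", period=1.0, count=10):
    """Periodically publish simulated readings through any
    publish(topic, payload) callable, e.g. an MQTT client wrapper."""
    for _ in range(count):
        publish(topic, json.dumps(make_reading(topic)))
        time.sleep(period)

# Example: collect three messages through a dummy publish function
messages = []
run(lambda t, p: messages.append(json.loads(p)), period=0.0, count=3)
print(len(messages))  # 3
```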
<para>Furthermore, a set of administration dashboards that visualize the main entities of the DDA has been implemented. They allow the monitoring and configuration of the main entities, such as processors, data sources, devices and manifests (see <link linkend="F6-9">Figure <xref linkend="F6-9" remap="6.9"/></link>).</para>
</section>
<section class="lev2" id="sec6-6-2">
<title>6.6.2 Practical Validation</title>
<section class="lev3" id="sec6-6-2-1">
<title>6.6.2.1 Validation environment</title>
<para>The DDA Infrastructure has also been validated in a pilot plant, specifically the pilot plant of SmartFactoryKL, which is a network of more than 45 industrial and research organizations that support and use an Industrie 4.0 testbed in Kaiserslautern, Germany. In particular, we set up a relatively simple analytics scenario over three Infrastructure Boxes (IB) of the pilot plant. Each Infrastructure Box (IB) provides energy sensor information through an MQTT interface (Broker), where data are provided every 60 seconds. The available energy information includes data about the TotalRealPower, the TotalReactivePower, the TotalApparentPower, the TotalRealEnergy, the TotalReactiveEnergy and the TotalApparentEnergy that are consumed and used by the machine. The business rationale behind analyzing this data is to help the plant operator in finding anomalies during production. Indeed, with the power and energy values, it is possible to understand the machine behaviour as well as the &#8220;response time&#8221; of each business process. Moreover, the use of streaming processing and high-performance analytics enables the identification and understanding of abnormalities almost in real time.</para>
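<para>The anomaly-spotting rationale can be illustrated with a rolling statistic over the 60-second TotalRealPower samples. The window size and threshold below are arbitrary assumptions for illustration, not values used in the pilot.</para>

```python
from collections import deque
from statistics import mean, stdev

class PowerAnomalyDetector:
    """Flags samples that deviate strongly from the recent history."""
    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)  # last hour at one sample/minute
        self.threshold = threshold

    def check(self, total_real_power):
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(total_real_power - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(total_real_power)
        return anomalous

detector = PowerAnomalyDetector()
for sample in [100, 101, 99, 100, 102, 98, 100, 101, 99, 100]:
    detector.check(sample)   # builds the baseline
print(detector.check(500))   # True: a likely anomaly during production
```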
<para>The following components were deployed and used in the pilot plant:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">The Data Routing and Pre-processing (DR&amp;P) Component</emphasis> (including the device registry service), which forwards data generated by Field sources.</para></listitem>
<listitem><para><emphasis role="strong">The Edge Tier Data Storage</emphasis>, which stores data stemming from the EA-Engine and provides a result storage repository.</para></listitem>
<listitem><para><emphasis role="strong">The Model Repository</emphasis>, which supports the sharing of common digital models, which are used from the various analytics components.</para></listitem>
<listitem><para><emphasis role="strong">The EA-Engine</emphasis>, which is the programmable and configurable environment that executes data analytics logic locally.</para></listitem>
<listitem><para><emphasis role="strong">The Analytics Processor</emphasis>, which implements the data processing functionalities for an edge analytics task.</para></listitem>
</itemizedlist>
<para>The components were deployed in a Virtual Machine (VM) provided within the Smart Factory premises, which had access to data from the IBs based on the MQTT protocol. The DDA has been tested and validated in two different scenarios, involving edge analytics and (global) distributed analytics. Various test cases have been run successfully and the analytics results have been computed correctly. The following subsections illustrate the setup of the EA-Engine and the DA-Engine in the scope of the two scenarios.</para>
</section>
<section class="lev3" id="sec6-6-2-2">
<title>6.6.2.2 Edge analytics validation scenarios</title>
<para>For the Edge Analytics, we compute the hourly consumption of each Infrastructure Box for two parameters, namely TotalRealPower and TotalRealEnergy. The following steps have been followed for setting up and modelling the Edge Analytics scenario:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">IB Modelling:</emphasis> One Edge Gateway is built with each IB. The latter is modelled in line with the FAR-EDGE digital models for data analytics. The respective data model is stored at the Data Model repository in the cloud.</para></listitem>
<listitem><para><emphasis role="strong">IB Instantiation &amp; Registration:</emphasis> The specified Data models are used to generate the Data Source Manifest (DSM) and register it to each Edge Gateway.</para></listitem>
<listitem><para><emphasis role="strong">Edge Analytics Modelling:</emphasis> The required processors are modelled with the help of an Analytics Processor Definition (APD). In particular, the following processors are defined: (i) A processor for hourly average calculation from a single data stream and (ii) A processor for persisting results in a MongoDB. The above information is also stored at the Data Model repository in the cloud.</para></listitem>
<listitem><para><emphasis role="strong">Edge Analytics Installation &amp; Registration</emphasis>: The specified Data models are used to generate the Analytics Processor Manifest (APM) for each required Processor, which is registered to the Edge Gateway. The following processors are set up: (i) A Processor for hourly average calculation from the TotalRealPower data stream; (ii) A Processor for hourly average calculation from the TotalRealEnergy data stream; (iii) A Processor for persisting results in the MongoDB of an EG in order to support edge analytics calculations; and (iv) A Processor for persisting results in a global (cloud) MongoDB in order to support (global) distributed analytics. Moreover, an Analytics Manifest (AM) is also created in order to combine values and data from the instantiated processors. The AM is registered and started through the API of the EG.</para></listitem>
</itemizedlist>
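<para>The hourly-average processors set up above amount to a simple aggregation over timestamped samples. The function below is a minimal, self-contained illustration of such a processor; the input format (epoch seconds, value) is an assumed simplification, not the actual FAR-EDGE processor interface.</para>

```python
from collections import defaultdict

def hourly_averages(samples):
    """Group (epoch_seconds, value) samples by hour and average each group,
    mimicking an edge analytics processor over a 60-second data stream."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts // 3600].append(value)  # integer hour bucket
    # Key results by the epoch second at which each hour starts.
    return {hour * 3600: sum(vals) / len(vals)
            for hour, vals in sorted(buckets.items())}
```

<para>The same function applies unchanged to both the TotalRealPower and the TotalRealEnergy streams; in the pilot setup, one processor instance is registered per stream.</para>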
<para>Following the setup and configuration of the system, runtime operations are supported, including the following information flows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The IBs push the data to the MQTT broker.</para></listitem>
<listitem><para>The DR&amp;PP retrieves raw/text data from the MQTT broker and pushes them to an Apache Kafka Data Bus.</para></listitem>
<listitem><para>The data are retrieved and processed by the Analytics Engine.</para></listitem>
<listitem><para>The data are finally stored in the local Data Storage repository.</para></listitem>
</itemizedlist>
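<para>The first two flows correspond to a thin bridge between the MQTT broker and the Kafka bus. The sketch below separates the payload parsing, which is pure and testable, from the broker wiring, which is shown only as commented-out code: it assumes the paho-mqtt and kafka-python client libraries, and the broker hosts and topic names are hypothetical.</para>

```python
import json

def parse_energy_payload(raw):
    """Extract the energy parameters of interest from a raw JSON payload.
    The field names follow the IB energy interface described above."""
    doc = json.loads(raw)
    return {key: float(doc[key])
            for key in ("TotalRealPower", "TotalRealEnergy")
            if key in doc}

# Broker wiring (sketch only; requires running MQTT and Kafka brokers):
#
#   import paho.mqtt.client as mqtt
#   from kafka import KafkaProducer
#
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#
#   def on_message(client, userdata, msg):
#       record = parse_energy_payload(msg.payload)
#       producer.send("ib-energy", json.dumps(record).encode("utf-8"))
#
#   client = mqtt.Client()
#   client.on_message = on_message
#   client.connect("ib-broker.local")   # hypothetical broker host
#   client.subscribe("ib/+/energy")     # hypothetical topic layout
#   client.loop_forever()
```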
</section>
<section class="lev3" id="sec6-6-2-3">
<title>6.6.2.3 (Global) distributed analytics validation scenarios</title>
<para>For the Distributed Analytics validation, we compute the hourly consumption across all IBs for the TotalRealPower and the TotalRealEnergy parameters. The following steps are also needed in addition to setting up the EA-Engine:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Distributed Analytics Modelling:</emphasis> The required processors are modelled with the help of the Analytics Processor Definition (APD) construct of the FAR-EDGE data models. The processors that are set up include: (i) A Processor for hourly average calculation for values from a MongoDB and (ii) A Processor for persisting results in a MongoDB. The above information is stored at the Data Model repository, which resides on the cloud.</para></listitem>
<listitem><para><emphasis role="strong">Distributed Analytics Installation &amp; Registration:</emphasis> The specified data models are used to generate the Analytics Processor Manifest (APM) for each required Processor, which is registered to the Cloud. The following processors are registered: (i) A Processor for hourly average calculation from the TotalRealPower parameters for all IBs based on information residing in the (global) MongoDB in the cloud; (ii) A Processor for hourly average calculation from TotalRealEnergy for all IBs based on information residing in the (global) MongoDB in the cloud; and (iii) A Processor for persisting results in the (global) MongoDB in the cloud. An Analytics Manifest (AM) is generated for combining data from the instantiated Processors. The AM is registered and started through the Open API of the DA-Engine.</para></listitem>
</itemizedlist>
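<para>At the global level, the first two processors reduce the per-IB results stored in the cloud MongoDB into plant-wide figures. The reduction itself amounts to averaging the per-gateway hourly values, as in the sketch below; the input layout (a mapping from IB identifier to hourly averages) is an assumed simplification of the actual result documents.</para>

```python
def plantwide_hourly_average(per_ib_results):
    """Combine per-IB hourly averages {ib_id: {hour: avg}} into a single
    plant-wide series {hour: avg over all IBs reporting that hour}."""
    totals = {}
    for hourly in per_ib_results.values():
        for hour, avg in hourly.items():
            count_sum = totals.setdefault(hour, [0, 0.0])
            count_sum[0] += 1      # number of IBs reporting this hour
            count_sum[1] += avg    # running sum of their averages
    return {hour: s / n for hour, (n, s) in sorted(totals.items())}
```

<para>Note that this gives each reporting IB equal weight within an hour, which matches per-IB averages computed over the same fixed 60-second sampling rate.</para>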
</section>
</section></section>
<section class="lev1" id="sec6-7">
<title>6.7 Conclusions</title>
<para>Distributed data analytics is a key functionality for digital automation in industrial plants, given that several automation and simulation functions rely on the collection and analysis of large volumes of data (including streaming data) from the shopfloor. In this chapter, we have presented a framework for programmable, configurable, flexible and resilient distributed analytics. The framework takes advantage of state-of-the-art data streaming frameworks (such as Apache Kafka) in order to provide high-performance analytics. At the same time, however, it augments these frameworks with the ability to dynamically register data sources in a repository and accordingly to use the registered data sources in order to compute analytics workflows. The latter are also configurable and composed of three types of data processing functions, namely pre-processing, storage and analytics functions. The whole process is reflected and configured based on digital models that capture the status of the factory in terms of data sources, devices, edge gateways and the analytics workflows that they instantiate and support.</para>
<para>The analytics framework operates at two levels: (i) An edge analytics level, where analytics close to the field are defined and performed and (ii) A global factory-wide level, where data from multiple edge analytics deployments can be combined in arbitrary workflows. We have also presented two approaches for configuring and executing global-level analytics: One following the conventional edge/cloud computing paradigm and another that supports decentralized analytics configurations and computations based on the use of distributed ledger technologies. The latter approach holds the promise of increasing the resilience of analytics deployments while eliminating single points of failure, and is therefore one of our research directions.</para>
<para>One of the merits of our framework is that it is implemented as open-source software/middleware. Following its more extensive validation and the improvement of its robustness, this framework could be adopted by the Industry 4.0 community. It can be particularly useful for researchers and academics who experiment with distributed analytics and edge computing, as well as for solution providers who seek to extend open-source libraries as part of the development of their own solutions.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>This work was carried out in the scope of the FAR-EDGE project (H2020- 703094). The authors acknowledge help and contributions from all partners of the project.</para>
</section>
<section class="lev1" id="sec6-8">
<title>References</title>
<para>[1] H. Lasi, P. Fettke, H.-G. Kemper, T. Feld, M. Hoffmann, &#8216;Industry 4.0&#8217;, Business &amp; Information Systems Engineering, vol. 6, no. 4, pp. 239, 2014.</para>
<para>[2] J. Soldatos (editor) &#8216;Building Blocks for IoT Analytics&#8217;, River Publishers Series in Signal, Image and Speech Processing, November 2016, ISBN: 9788793519039, doi: 10.13052/rp-9788793519046.</para>
<para>[3] J. Soldatos, S. Gusmeroli, P. Malo, G. Di Orio &#8216;Internet of Things Applications in Future Manufacturing&#8217;, In: Digitising the Industry Internet of Things Connecting the Physical, Digital and Virtual Worlds, Editors: Dr. Ovidiu Vermesan, Dr. Peter Friess. 2016. ISBN: 978-87-93379-81-7.</para>
<para>[4] M. Isaja, J. Soldatos, N. Kefalakis, V. Gezer &#8216;Edge Computing and Blockchains for Flexible and Programmable Analytics in Industrial Automation&#8217;, International Journal on Advances in Systems and Measurements, vol. 11 no. 3 and 4, December 2018 (to appear).</para>
<para>[5] T. Yu, X. Wang, A. Shami &#8216;A Novel Fog Computing Enabled Temporal Data Reduction Scheme in IoT Systems&#8217;, GLOBECOM 2017 - 2017 IEEE Global Communications Conference, pp. 1&#8211;5, 2017.</para>
<para>[6] S. Mahadev et al. &#8216;Edge analytics in the internet of things&#8217;, IEEE Pervasive Computing, vol. 14, no. 2, pp. 24&#8211;31, 2015.</para>
<para>[7] M. Yuan, K. Deng, J. Zeng, Y. Li, B. Ni, X. He, F. Wang, W. Dai, Q. Yang, &#8220;OceanST: A distributed analytic system for large-scale spatiotemporal mobile broadband data&#8221;, PVLDB, vol. 7, no. 13, pp. 1561&#8211;1564, 2014.</para>
<para>[8] A. Jayaram &#8216;An IIoT quality global enterprise inventory management model for automation and demand forecasting based on cloud&#8217;, Computing Communication and Automation (ICCCA) 2017 International Conference on, pp. 1258&#8211;1263, 2017.</para>
<para>[9] J. Soldatos, N. Kefalakis, M. Serrano, M. Hauswirth, A. Zaslavsky, P. Jayaraman, and P. Dimitropoulos &#8216;Practical IoT deployment on Smart Manufacturing and Smart Agriculture based on an Open Source Platform&#8217;, in Internet of Things Success Stories, 2014.</para>
<para>[10] John Soldatos, Nikos Kefalakis et al. &#8216;OpenIoT: Open Source Internet-of-Things in the Cloud&#8217;, OpenIoT@SoftCOM, 2014: 13&#8211;25, 2014.</para>
<para>[11] Z. Zheng, S. Xie, H. Dai, X. Chen, and H. Wan. &#8216;An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends&#8217;, Proceedings of IEEE 6th International Congress on Big Data, 2017.</para>
<para>[12] Elli Androulaki et al. &#8220;Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains&#8221;, Proceedings of the Thirteenth EuroSys Conference (EuroSys &#8217;18), Article No. 30, Porto, Portugal, April 23&#8211;26, 2018.</para>
</section>
</chapter>

<chapter class="chapter" id="ch07" label="7" xreflabel="7">
<title>Model Predictive Control in Discrete Manufacturing Shopfloors</title>
<para><emphasis role="strong">Alessandro Brusaferri<superscript>1</superscript>, Giacomo Pallucca<superscript>1</superscript>, Franco A. Cavadini<superscript>2</superscript>, Giuseppe Montalbano<superscript>2</superscript> and Dario Piga<superscript>3</superscript></emphasis></para>
<para><superscript>1</superscript> Consiglio Nazionale delle Ricerche (CNR),</para>
<para>Institute of Industrial Technologies and Automation (STIIMA),</para>
<para>Research Institute, Via Alfonso Corti 12, 20133 Milano, Italy</para>
<para><superscript>2</superscript> Synesis, SCARL, Via Cavour 2, 22074 Lomazzo, Italy</para>
<para><superscript>3</superscript> Scuola Universitaria Professionale della Svizzera Italiana (SUPSI),</para>
<para>Dalle Molle Institute for Artificial Intelligence (IDSIA),</para>
<para>Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland</para>
<para>E-mail: alessandro.brusaferri@itia.cnr.it; giacomo.pallucca@itia.cnr.it;</para>
<para>franco.cavadini@synesis-consortium.eu;</para>
<para>giuseppe.montalbano@synesis-consortium.eu; dario.piga@supsi.ch</para>
<para>This chapter describes the fundamental components of the Software Development Kit architecture developed in Daedalus and its integration in the IEC-61499 paradigm, presenting the methodologies selected to face the issues related to the control of aggregated Cyber-Physical Systems (CPS). The aim of the Software Development Kit is to help automation system engineers synthesize Hybrid Model Predictive Control for aggregated CPS environments.</para>
<para>The guidelines for future development steps of the tool are described. The SDK is composed of three main parts: the On-line System Identification (OIS) tool, the Online Control Modeller (OCM) and the Online Control Solver (OCS). The first is dedicated to automatically inferring the model of an aggregated CPS from input and output measurements. The OIS fulfils two functions: in a preliminary design phase, it is used to estimate a first model of the system; subsequently, during execution, it works in real time to tune the parameters of the model against input and output measurements. The OCM is the main component of the SDK and contains a direct interface to modify and customize the parameters of the controller to be designed, such as observer tuning, prediction horizon and so on. Moreover, the OCM is the synergic element that orchestrates the workflow of the OCS, which performs the calculations during execution. The main computational aspects are related to the solution of an optimization problem in the receding-horizon fashion: at each step, an MIQP problem must be solved within the cycle time, so an adequate solver is fundamental to realize Hybrid Model Predictive Control.</para>
<section class="lev1" id="sec7-1">
<title>7.1 Introduction</title>
<para>Part of the Daedalus project is dedicated to the design and implementation of the Software Development Kit (SDK) that provides helpful tools to develop, implement and deploy advanced control system within a distributed IEC-61499-based control framework, dedicated to automation system engineers.</para>
<para>To such an aim, the optimal orchestration of distributed IEC-61499 applications is investigated, and advanced control techniques such as optimal control and model predictive control are considered.</para>
<para>The main features of aggregated Cyber-Physical Systems (CPS) are evaluated in order to realize an advanced optimal control system: in particular, both continuous and discrete variables are needed to represent an aggregated CPS. Hybrid systems are therefore considered, and the corresponding modelling techniques are investigated in Section 7.2.</para>
<para>Another important feature of the optimal orchestration of aggregated CPS is compliance with system constraints on both output variables, i.e. physical limits, and manipulated variables, e.g. actuator saturation and limits. The optimization of a measure of the performance of the system, i.e. the minimization of a cost function, is now a well-established approach in academia and in certain industries, such as the chemical and aerospace industries, and it should spread to every industrial sector. Therefore, optimization-based control algorithms are investigated for the SDK. Among these, Model Predictive Control stands out as the most promising, considering that the receding horizon approach offers a way to compensate for disturbances on the system and for model mismatch.</para>
<para>Following the last decades of development of control theory, the most suitable solution for the above requirements and objectives is Hybrid Model Predictive Control (Section 7.3.1). Indeed, this family of control methods implicitly guarantees the satisfaction of constraints and manages multi-objective control in an optimal way, thanks to a Quadratic Programming solver (details will be reported in further sections).</para>
<para>The aim of this chapter is to introduce and carry out an in-depth analysis of the main components of the SDK of Daedalus. <link linkend="F7-1">Figure <xref linkend="F7-1" remap="7.1"/></link> shows the idea of optimal hybrid orchestrator for aggregated CPS. It is divided into three main subcomponents: Online System Identification tool, Online Control Modeller and Online Control Solver, which are discussed in the following sections.</para>
<fig id="F7-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-1">Figure <xref linkend="F7-1" remap="7.1"/></link></label>
<caption><para>Schematic representation of Hybrid Model Predictive Control Toolbox.</para></caption>
<graphic xlink:href="graphics/ch07_fig001.jpg"/>
</fig>
<section class="lev2" id="sec7-1-1">
<title>7.1.1 Hybrid Model Predictive Control SDK</title>
<para>The proposed reference framework is composed of three main parts (shown in <link linkend="F7-1">Figure <xref linkend="F7-1" remap="7.1"/></link>). The first one is the On-line System Identification (OIS) tool, which is able to deduce the model of a complex Multi-Input Multi-Output (MIMO) hybrid system. This data-driven tool uses input/output variables to extrapolate a mathematical model of the system and is based on an iterative real-time procedure; more details are reported in Section 7.4. The second block is the Online Control Modeller (OCM), where, given a model from the OIS, an optimal predictive controller able to orchestrate the aggregated Cyber-Physical Systems is synthesized. The OCM is developed based on the latest paradigm of HMPC, explained in depth in Section 7.3. The last one is the Online Control Solver (OCS), which is strictly related to the OCM. This solver must be able to deal with Mixed-Integer Quadratic Programming (MIQP) problems, in order to solve the optimal predictive control problem for hierarchically aggregated CPS with a quadratic cost function.</para>
<para>To such an aim, the proposed framework is developed to help control engineers easily create an optimal controller for complex distributed CPS architectures. Each component will be developed with platform-independent software (see Section 7.1.2), which must be flexible and easy to use in order to establish a standard procedure for dealing with hybrid complex systems. Moreover, the resulting SDK will be integrated in a distributed IEC-61499-based control architecture (see <link linkend="F7-2">Figure <xref linkend="F7-2" remap="7.2"/></link>).</para>
<para>As analysed in Section 7.3.3, the computational aspect cannot be neglected; indeed, Mixed Integer Programming problems require high computational power to be solved at runtime. This is more critical when complex systems require a large controller bandwidth (of Hz order): at 1 Hz, the OCS has to solve a Mixed Integer Problem in less than a second. An additional problem is the non-deterministic solving time of MIP. For the robustness of the modelled controller, it is important to evaluate in simulation the worst-case execution time and apply a safety factor to obtain a realistic and safe controller bandwidth. To face this problem, virtual commissioning is helpful: it is indeed possible to test the control performance and its feasibility in a virtual environment and tune all control parameters.</para>
</section>
<section class="lev2" id="sec7-1-2">
<title>7.1.2 Requirements</title>
<para>The investigation of orchestration problems for hierarchically aggregated CPS controllers has led to several requirements. To be compliant with IEC-61499 [1] and to obtain a platform-independent toolbox, the natural development basis is an object-oriented programming language used in cooperation with nxtControl. nxtControl respects every paradigm of IEC-61499 and makes it easy to build distributed control systems using function blocks (for more details, see Section 7.5). Choosing an object-oriented programming language gives access to a wide range of tools that are easily integrated in a single development environment, and this programming paradigm allows scalable and flexible software to be developed effortlessly, independently of the application.</para>
<fig id="F7-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-2">Figure <xref linkend="F7-2" remap="7.2"/></link></label>
<caption><para>Conceptual map of the software used. In the centre is the object-oriented programming language that best supports easy development and management of the different applications&#8217; needs.</para></caption>
<graphic xlink:href="graphics/ch07_fig002.jpg"/>
</fig>
<para>The investigated programming languages are Python, C++ and JavaScript. Even if the natural choice for direct integration with nxtControl is C++, the Python environment offers a better abstraction layer and eases the integration of a wide range of tools and libraries developed for optimization solvers and control systems. Moreover, nxtControl is able to embed Python through a wrapping toolkit, and the computational time spent in the wrapper is negligible with respect to the computational time of the Quadratic Programming solver. The choice of programming language therefore does not have to be restricted to a single one.</para>
<para>Another important benefit of choosing Python is the availability of modelling and development environments for MIP solvers, both commercial and free-licence. Gurobi [2] and CPLEX [3] are the most powerful and optimized commercial MIP solvers [4]; they offer dedicated development and modelling environments for Python as well as for C++. These environments are easy to configure and, more importantly, easy to integrate with a hierarchically aggregated CPS controller. One limitation for industrial applications is the licence cost, but the gap in solving time and robustness with respect to freeware alternatives is not negligible. Further investigation and benchmarking will be carried out on this point.</para>
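<para>To make the structure of such an MIQP concrete, the sketch below solves a toy instance by brute-force enumeration of the binary variables, solving the remaining unconstrained quadratic subproblem in closed form with NumPy. This is purely illustrative of the problem class: it stands in for, and does not use, the Gurobi or CPLEX APIs, and enumeration does not scale beyond a handful of binaries.</para>

```python
import itertools
import numpy as np

def solve_toy_miqp(Q, q0, F, g):
    """Minimize 0.5*u'Qu + (q0 + F@d)'u + g'd over continuous u and
    binary d in {0,1}^m. For each fixed binary assignment d, the continuous
    part has the closed-form minimizer u* = -inv(Q) @ (q0 + F@d); we
    enumerate every d and keep the cheapest candidate."""
    candidates = []
    for bits in itertools.product((0, 1), repeat=F.shape[1]):
        d = np.array(bits, dtype=float)
        q = q0 + F @ d
        u = -np.linalg.solve(Q, q)               # continuous optimum for this d
        cost = 0.5 * u @ Q @ u + q @ u + g @ d   # total objective value
        candidates.append((cost, tuple(u), bits))
    return min(candidates)  # lowest cost over all binary assignments
```

<para>A real solver replaces the enumeration with branch-and-bound over the same subproblems, which is exactly where the non-deterministic solving time discussed in Section 7.1.1 comes from.</para>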
<para>The first release of the SDK will consider a centralized control scheme, in which the On-line System Identification tool returns the system&#8217;s model. The Online Control Modeller then builds, based on the identified model, a hybrid model predictive controller for the system with the desired configuration. Finally, the resulting controller sets up the Online Control Solver and achieves the desired performance while respecting the tuning parameters chosen by the user, also handling small modelling mismatches and disturbances on inputs and measurements.</para>
<para><link linkend="F7-2">Figure <xref linkend="F7-2" remap="7.2"/></link> shows the framework of the proposed toolbox. It is possible to see the different MIP solvers and the Online Identification toolbox of the SDK; on the right, the different target platforms on which the proposed Hybrid Model Predictive Controller will run are shown.</para>
</section>
<section class="lev2" id="sec7-1-3">
<title>7.1.3 Hybrid System</title>
<para>The behaviour of physical phenomena can be represented by mathematical models. When these models exhibit continuous variables (like differential equations) and discrete/logical variables (like state machines), they are called Hybrid System Models. Every physical phenomenon can be described at different levels of detail; in applied science, it is possible to find various models of the same process, depending on what the model has to describe. These models should be neither too simple nor too complicated: they should describe the behaviour of the physical phenomenon with a sufficient level of detail while remaining efficient from a computational point of view. In the following sections, the report analyzes the trade-off between simple, computationally light models and more complex, computationally heavy ones.</para>
<para>In the last three decades, several computer scientists and control theorists have explored models describing the interaction between continuous dynamics and logical components [5]. Such heterogeneous models are denoted as hybrid models; they switch among many operating modes, each described by differential equations, and mode transitions are triggered by events such as states crossing pre-specified thresholds.</para>
<fig id="F7-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-3">Figure <xref linkend="F7-3" remap="7.3"/></link></label>
<caption><para>Subsequence approximation of a non-linear system.</para></caption>
<graphic xlink:href="graphics/ch07_fig003.jpg"/>
</fig>
<para>Another kind of system that is conveniently represented by a hybrid model is the non-linear system. Indeed, a non-linear system can be represented by a piecewise linearized model, which consists of a sequence of linearizations of the system&#8217;s model around consecutive operating points (see <link linkend="F7-3">Figure <xref linkend="F7-3" remap="7.3"/></link>). This kind of model representation is presented in Section 7.2.1, where its behaviour is also shown. Within each region, the relationship is linear, with a slope that changes from region to region; the result is a linearized model of the non-linear system, which can be represented as a hybrid system that switches its operating mode.</para>
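<para>This piecewise linearization can be reproduced numerically: partition the operating range into regions and fit a local affine model in each one. The sketch below does this for a scalar non-linear map; the breakpoints and the example non-linearity used in the test are arbitrary illustrations, not a method prescribed by the SDK.</para>

```python
import numpy as np

def piecewise_affine_fit(f, breakpoints, samples_per_region=20):
    """Approximate the scalar map f with one affine model (slope, offset)
    per region [b_i, b_{i+1}], fitted by least squares on sampled points."""
    models = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        x = np.linspace(lo, hi, samples_per_region)
        A = np.column_stack([x, np.ones_like(x)])
        slope, offset = np.linalg.lstsq(A, f(x), rcond=None)[0]
        models.append((lo, hi, slope, offset))
    return models

def evaluate_pwa(models, x):
    """Evaluate the PWA model: pick the region containing x, apply its affine law."""
    for lo, hi, slope, offset in models:
        if lo <= x <= hi:
            return slope * x + offset
    raise ValueError("x outside modelled operating range")
```

<para>Each (region, affine law) pair corresponds to one operating mode of the resulting hybrid model; crossing a breakpoint is the mode-switching event.</para>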
</section>
<section class="lev2" id="sec7-1-4">
<title>7.1.4 Model Predictive Control</title>
<para>Model Predictive Control (MPC) arose in the late 1970s and has developed continuously since then. The term MPC does not correspond to a specific control strategy, but rather to a wide range of control methods that use a mathematical model of the process to obtain the control signal by minimizing an objective function.</para>
<para>Model Predictive Control is an advanced control technique that determines the control action by solving on-line, at every sampling time k, an open-loop optimal control problem over a <emphasis>p</emphasis>-horizon (Equation (7.2)), based on the current state of the system at sample k. The optimization generates an input sequence for the specified time horizon p. However, only the first calculated input is applied to the system (<link linkend="F7-4">Figure <xref linkend="F7-4" remap="7.4"/></link>).</para>
<fig id="F7-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-4">Figure <xref linkend="F7-4" remap="7.4"/></link></label>
<caption><para>Model Predictive Control scheme.</para></caption>
<graphic xlink:href="graphics/ch07_fig004.jpg"/>
</fig>
<fig id="F7-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-5">Figure <xref linkend="F7-5" remap="7.5"/></link></label>
<caption><para>Receding horizon scheme.</para></caption>
<graphic xlink:href="graphics/ch07_fig005.jpg"/>
</fig>
<para>The ideas at the basis of predictive control methods are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Explicit use of model to predict the process output evolution at future time instants (horizon).</para></listitem>
<listitem><para>Calculation of control sequence minimizing an objective function.</para></listitem>
<listitem><para>Receding strategy. As shown in <link linkend="F7-5">Figure <xref linkend="F7-5" remap="7.5"/></link>, at each sample time, the control computes the optimal sequence of control signal that minimizes the objective function along the horizon, but only the first control signal is applied to the system. This routine is called receding horizon strategy.</para></listitem>
</itemizedlist>
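<para>The three ideas above can be condensed into a few lines for a scalar linear model x(k+1) = a*x(k) + b*u(k): at every step an input sequence over the horizon is optimized, only its first element is applied, and the window slides forward. The sketch below solves the unconstrained quadratic problem by least squares; the model parameters, weights and horizon are illustrative assumptions, and no constraints are handled here.</para>

```python
import numpy as np

def mpc_step(a, b, x0, r, p, qy=1.0, ru=0.1):
    """One receding-horizon step for x(k+1) = a*x + b*u: minimize
    sum_i qy*(x_i - r)^2 + ru*u_i^2 over a horizon of p steps and
    return only the first input of the optimal sequence."""
    # Predicted states are affine in the inputs: x = G @ u + h.
    h = np.array([a ** (i + 1) * x0 for i in range(p)])
    G = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Stack tracking and input-penalty terms into one least-squares problem.
    A = np.vstack([np.sqrt(qy) * G, np.sqrt(ru) * np.eye(p)])
    rhs = np.concatenate([np.sqrt(qy) * (np.full(p, r) - h), np.zeros(p)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]  # receding horizon: apply only the first input

def simulate(a, b, x0, r, steps, p=10):
    """Closed-loop simulation applying mpc_step at every sample."""
    x = x0
    for _ in range(steps):
        u = mpc_step(a, b, x, r, p)
        x = a * x + b * u
    return x
```

<para>Adding input or state constraints turns this least-squares problem into the QP of Equation (7.2), and adding binary mode variables turns the QP into the MIQP discussed in Section 7.1.1.</para>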
<para>There are many successful applications of predictive control in use nowadays from process industry [6] to robots [7] through cement industry, chemical industry [8] or steam generation [9]. The good performance of these applications shows the capacity of the MPC to achieve highly durable and efficient control systems.</para>
<para>Moreover, MPC allows all inputs to be adjusted simultaneously to control all outputs, while accounting for all process interactions. As a result, MPC can take actions that improve plant performance beyond what even a skilled and experienced operator can achieve.</para>
<para>Moreover, Model Predictive Control is able to consider limitations or constraints of the system, such as saturation of actuators and/or physical constraints on output or state variables, directly in the problem formulation. This is a fundamental improvement with respect to classical optimal control (like the Linear Quadratic Regulator); in this way, the controller is able to calculate the optimal sequence of control actions that minimizes a given cost function while respecting each specified constraint.</para>
<para>The most useful model formulation is the state-space form. This formulation is very helpful in both the identification problem and the optimal control problem, and it makes it easy to relate input, output and state variables. In discrete time, for continuous variables, the formulation is (Equation (7.1)):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-1.jpg"/></para>
<para>where <emphasis>x</emphasis> (<emphasis>k</emphasis>) in R<superscript><emphasis>n</emphasis></superscript> is a vector of the state variables, <emphasis>u</emphasis>(<emphasis>k</emphasis>) in R<superscript><emphasis>m</emphasis></superscript> are the input variables and <emphasis>y</emphasis> (<emphasis>k</emphasis>) in R<superscript><emphasis>q</emphasis></superscript> are the output variables. The matrices A, B, C and D have proper dimensions. In MPC framework, the control goals, such as the tracking of a reference or the satisfaction of constraints, are formulated as a numerical optimization problem. In most cases, this problem is represented as a Quadratic programming (QP) problem. For such an optimization problem, the cost function is the sum of individual terms that express various control requirements. The objective function is generally composed as follows (Equation (7.2)):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-2.jpg"/></para>
<para>where <emphasis>N</emphasis> = {1, 2, &#8734;} represents the norm type that defines the type of minimization problem. A linear problem is defined if <emphasis>N</emphasis> = {1, &#8734;} and a quadratic one if <emphasis>N</emphasis> = 2. P is the prediction horizon that will be considered. <emphasis>Q</emphasis><subscript>y,u,&#916;u</subscript> are positive definite matrices, also called the weight matrices of the different objectives of the controller: thanks to these parameters, we can tune the controller. For example, if it is not important to control the first output <emphasis>y</emphasis><subscript>1</subscript>, it is possible to simply set <emphasis>Q</emphasis><subscript><emphasis>y</emphasis>1</subscript> = 0, and the same applies to the other weights.</para>
<fig id="F7-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-6">Figure <xref linkend="F7-6" remap="7.6"/></link></label>
<caption><para>Flow of MPC calculation at each control execution.</para></caption>
<graphic xlink:href="graphics/ch07_fig006.jpg"/>
</fig>
<para>Overall, the flow of computation for a typical MPC problem is represented in <link linkend="F7-6">Figure <xref linkend="F7-6" remap="7.6"/></link>.</para>
</section>
</section>
<section class="lev1" id="sec7-2">
<title>7.2 Hybrid System Representation</title>
<para>Over the last decades, hybrid systems have naturally attracted the interest of the scientific and research community. Many applications of hybrid system modelling in key areas have been presented, such as automotive systems [10] or power systems [11].</para>
<para>The considerable interest in hybrid systems is demonstrated by the number of periodic conferences, and of entire sessions in major conferences, devoted to them.</para>
<para>Moreover, this research field remains open to new advances: new approaches to the mathematical representation of hybrid systems continue to appear, and interest in applications keeps growing.</para>
<para>Hybrid systems are dynamic systems with continuous states, discrete states and event variables. Consequently, a hybrid system provides a natural structure for representing a large industrial plant, which can be seen globally as an agglomeration of subsystems working in different modes, switching along the plant&#8217;s operating points. For example, the mathematical model of a car with gear shift has different traction-force curves depending on the selected gear [12]. To capture these different dynamic behaviours in a single model, hybrid system modelling is mandatory. Moreover, hierarchical systems can be modelled as hybrid systems, in which lower-level components are described by continuous variables and higher-level blocks are governed by logic or decision modules.</para>
<para>Different kinds of models can be used to describe hybrid systems. For control purposes, hybrid modelling techniques have to be descriptive enough to capture the behaviour of the interconnections between logic components (automata, switches, software code) and continuous dynamics (physical laws). At the same time, the model must be simple enough for analysis and synthesis problems to be solved.</para>
<para>The state of the art of hybrid system modelling can be summarized in two main groups (<link linkend="F7-7">Figure <xref linkend="F7-7" remap="7.7"/></link>): the more used piecewise affine (PWA) systems [13], mixed logical dynamical (MLD) models [14] and hybrid automata (HA) [15]; and the less used linear complementarity (LC) systems, extended linear complementarity (ELC) systems and max-min-plus-scaling (MMPS) systems [16].</para>
<fig id="F7-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-7">Figure <xref linkend="F7-7" remap="7.7"/></link></label>
<caption><para>Schematic representation of hybrid system.</para></caption>
<graphic xlink:href="graphics/ch07_fig007.jpg"/>
</fig>
<para>In detail, as proved in [16], all these modelling frameworks are equivalent, and the same system can be described with a model of each class. This characteristic is useful because each formulation offers advantages in a particular situation: the MLD framework is best suited for optimization of the system, while stability and robustness are more easily proved in a PWA formulation.</para>
<para>Hybrid system modelling makes it possible to describe a variety of systems; for example, it can deal with complex systems such as switched dynamical systems. Moreover, a hybrid model can describe the complete dynamics of a system and account for the different ways in which the same system operates. For example, when a robot works in a cooperative environment, this modelling technique can capture each distinct dynamic: free motion, contact with the operator, different payloads applied at the end-effector, etc.</para>
<para>Another kind of system that can be modelled as a hybrid system is the non-linear system. A common method for handling non-linear systems consists of piecewise linearization around consecutive operating points; the output of this procedure is a PWA model (see Equation (7.3)).</para>
<para>The main advantage of using this kind of model to synthesise a Model Predictive Controller (MPC) is that, when calculating predicted outputs, the controller can consider each of the different dynamics included in the model and optimize the control action so as to minimize the cost function (e.g. energy consumption, control action magnitude or tracking error).</para>
<section class="lev2" id="sec7-2-1">
<title>7.2.1 Piece-Wise Affine (PWA) System</title>
<para>The PWA representation is the most studied form of hybrid systems. A PWA system is defined as in Equation (7.3):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-3.jpg"/></para>
<para>where <emphasis>x</emphasis>(<emphasis>t</emphasis>) &#8712; R<superscript><emphasis>n</emphasis></superscript>, <emphasis>u</emphasis>(<emphasis>t</emphasis>) &#8712; R<superscript><emphasis>m</emphasis></superscript> and <emphasis>y</emphasis>(<emphasis>t</emphasis>) &#8712; R<superscript><emphasis>r</emphasis></superscript> denote the state, input and output vectors, respectively. {&#967;<subscript><emphasis>i</emphasis></subscript>}<subscript><emphasis>i</emphasis>=1</subscript><superscript><emphasis>s</emphasis></superscript> is a convex polyhedral partition of the state and input space (see <link linkend="F7-8">Figure <xref linkend="F7-8" remap="7.8"/></link>). Each &#967;<subscript><emphasis>i</emphasis></subscript> is given by a finite number of linear inequalities.</para>
<fig id="F7-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-8">Figure <xref linkend="F7-8" remap="7.8"/></link></label>
<caption><para>Polyhedral partition representation of a hybrid model: 13 partitions divide the state-input space into 13 piecewise sub-systems (plotted with MATLAB 2017b).</para></caption>
<graphic xlink:href="graphics/ch07_fig008.jpg"/>
</fig>
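<para>To make Equation (7.3) concrete, the following minimal sketch simulates a one-dimensional PWA system with two regions, &#967;<subscript>1</subscript> = {x &lt; 0} and &#967;<subscript>2</subscript> = {x &#8805; 0}; the numerical values of the sub-model coefficients are invented for illustration:</para>

```python
# One-dimensional PWA system in the spirit of Equation (7.3):
#   x(k+1) = a_i * x(k) + b_i * u(k) + f_i   when x(k) lies in region chi_i.
# Two regions: chi_1 = {x < 0}, chi_2 = {x >= 0}; coefficients are illustrative.
MODES = [
    {"guard": lambda x: x < 0,  "a": 0.5, "b": 1.0, "f": 0.1},
    {"guard": lambda x: x >= 0, "a": 0.9, "b": 1.0, "f": -0.1},
]

def pwa_step(x, u):
    """One simulation step: find the active region, apply its affine dynamics."""
    for mode in MODES:
        if mode["guard"](x):
            return mode["a"] * x + mode["b"] * u + mode["f"]
    raise ValueError("state outside the polyhedral partition")

def simulate(x0, u_seq):
    """Roll the PWA dynamics forward from x0 under the input sequence u_seq."""
    xs = [x0]
    for u in u_seq:
        xs.append(pwa_step(xs[-1], u))
    return xs
```

<para>Each region plays the role of one &#967;<subscript><emphasis>i</emphasis></subscript> of the partition; with vector states, the guards would be finite sets of linear inequalities rather than scalar comparisons.</para>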
</section>
<section class="lev2" id="sec7-2-2">
<title>7.2.2 Mixed Logical Dynamical (MLD) System</title>
<para>In ref. [14], a new type of hybrid system representation was defined, in which logic, dynamics and constraints are integrated.</para>
<para>The MLD description is (Equation (7.4)):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-4.jpg"/></para>
<para>The inequalities have to be interpreted component-wise, and they define the switching conditions of the different operating modes. The construction of these inequalities is based on tools able to convert logical facts involving continuous variables into linear inequalities (for more details, see [17]). These tools can be used to express relations describing the evolution of systems where physical laws, logic rules and operating constraints are interdependent.</para>
<para>Equation (7.4) assumes linear discrete-time dynamics in the first two equations. A continuous-time version can be formulated by substituting <emphasis>x</emphasis>(<emphasis>k</emphasis> + 1) with the derivative of <emphasis>x</emphasis>(<emphasis>t</emphasis>), and a non-linear version by replacing the linear equations and inequalities in (7.4) with non-linear functions. However, the problem then becomes hardly tractable from a computational point of view. More generally, the MLD representation can describe a wide class of systems.</para>
<para>MLD models owe their success to their good computational performance. The main claim at their introduction was the easy handling of non-trivial problems, such as the formulation of Model Predictive Control for hybrid and non-linear systems. The formulation performs well when used together with modern Mixed-Integer Programming (MIP) solvers for synthesizing predictive controllers for hybrid systems, as described in Section 7.4.1.</para>
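<para>The conversion of logical facts into linear inequalities mentioned above can be illustrated with the standard big-M construction, which encodes the equivalence [&#948; = 1] &#8596; [ax &#8722; b &#8804; 0] for a bounded quantity ax &#8722; b. The sketch below only checks the resulting inequalities; the bounds M, m and the tolerance are assumptions of this example:</para>

```python
# Big-M encoding of the logical equivalence [delta = 1] <-> [a*x - b <= 0],
# assuming the bounds m <= a*x - b <= M hold over the operating range:
#   a*x - b <= M * (1 - delta)
#   a*x - b >= eps + (m - eps) * delta
# (eps is a small tolerance turning the strict inequality g > 0 into g >= eps).
EPS = 1e-6

def big_m_holds(x, delta, a, b, M, m):
    """Check whether the pair (x, delta) satisfies both big-M inequalities."""
    g = a * x - b
    return g <= M * (1 - delta) and g >= EPS + (m - EPS) * delta
```

<para>In an MLD model, inequalities of this kind become rows of the constraint in Equation (7.4), tying the binary variables to the continuous dynamics.</para>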
<para>Note that the class of Mixed Logical Dynamical systems includes the following important system classes:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Linear systems;</para></listitem>
<listitem><para>Finite state machines;</para></listitem>
<listitem><para>Automata;</para></listitem>
<listitem><para>Constrained linear systems;</para></listitem>
<listitem><para>Non-linear dynamic systems.</para></listitem>
</itemizedlist>
<para>In fact, the next section introduces the equivalence between the different hybrid system representations and underlines the potential of MLD models (<link linkend="F7-9">Figure <xref linkend="F7-9" remap="7.9"/></link> shows the interconnections between MLD and the other system representation models).</para>
<fig id="F7-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-9">Figure <xref linkend="F7-9" remap="7.9"/></link></label>
<caption><para>Graphic scheme of the links between the different classes of hybrid systems. An arrow from class A to class B indicates that A is a subset of B.</para></caption>
<graphic xlink:href="graphics/ch07_fig009.jpg"/>
</fig>
</section>
<section class="lev2" id="sec7-2-3">
<title>7.2.3 Equivalence of Hybrid Dynamical Models</title>
<para>In ref. [16], equivalences between the different hybrid system models are demonstrated, as summarized in <link linkend="F7-9">Figure <xref linkend="F7-9" remap="7.9"/></link>. For some transformations, additional conditions, such as boundedness of the state and input variables or well-posedness, have to be assumed. The most frequent condition is that the polyhedral partition of the state-input space must be univocally defined, i.e. with no overlapping between different &#967;<subscript><emphasis>i</emphasis></subscript>. These requirements are fundamental in cases where, as in PWA or MLD models, the modelling framework does not allow overlapping subsets of the state-input space.</para>
<para>These equivalences are fundamental for demonstrating the properties of the different hybrid models: stability analysis is commonly carried out on a single representation, and its results are then translated to another modelling framework.</para>
</section>
</section>
<section class="lev1" id="sec7-3">
<title>7.3 Hybrid Model Predictive Control</title>
<para>Dealing with the control of hybrid systems is an open field of research in both the academic and the industrial world. Model predictive control draws its main advantage from the prediction of future outputs, which requires a model describing the evolution of the system. In the case of hybrid systems, discrete variables must be included; for this purpose, the modelling frameworks described in Section 7.2 have to be considered.</para>
<section class="lev2" id="sec7-3-1">
<title>7.3.1 State of the Art</title>
<para>Model predictive control was proposed for the first time in the late 1970s by Richalet et al. [9], who predicted future outputs in a heuristic manner. At that time, the application field of MPC was the process industry, from chemical plants to oil and gas extraction to pharmaceuticals.</para>
<para>Since then, model predictive control has been extended to a wide range of control problems. During the 1990s [18], the academic world focused on stability analysis, a very challenging problem not only for control engineers but also for mathematicians. Control engineers then moved their attention to large systems, where both continuous and discrete variables describe the model of the system, therefore requiring a hybrid model predictive control (HMPC) solution [14]. HMPC consists in the repeated solution of Mixed-Integer Programming (MIP) problems, whose variables can be both continuous and discrete. These problems are classified as Mixed-Integer Quadratic Programming (MIQP) if the objective function is quadratic, or Mixed-Integer Linear Programming (MILP) if a linear objective function is used.</para>
<para>MILP and MIQP problems are much more difficult to solve than linear or quadratic programming problems (LP or QP), and some properties, such as convexity, are lost (see ref. [19] for a more detailed description).</para>
<para>The computational load for solving an MIP problem is a key issue, as a brute-force approach consists of evaluating every possible combination: solve the QP or LP associated with each feasible combination of the discrete decision variables, and take the minimum over all the computed QP/LP solutions. For example, if all the discrete decision variables are Boolean, the number of possible LP/QP problems is 2<superscript>n<subscript>b</subscript></superscript>. Fortunately, there is an entire research field on this topic, and nowadays a wide range of commercial solvers can handle MIP problems very quickly. These solvers are mainly based on branch-and-bound methods [20]; the best known and most used are CPLEX (ILOG Inc. [3]), GLPK (Makhorin [21]) and GUROBI [2], for which APIs for many programming languages are available.</para>
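<para>The brute-force enumeration described above can be sketched on a toy mixed-integer quadratic problem: each Boolean combination fixes the integer part, the remaining continuous QP is solved (here in closed form, since it is unconstrained), and the minimum is kept. This is purely illustrative; a real MIP solver would use branch and bound instead:</para>

```python
import itertools
import numpy as np

def brute_force_miqp(H, f, G):
    """Minimize 0.5*z'Hz + (f + G d)'z over continuous z and Boolean vector d.

    For each of the 2**n_b Boolean combinations d, the inner problem is an
    unconstrained QP with closed-form optimum z* = -H^{-1} (f + G d).
    """
    n_b = G.shape[1]
    best = (np.inf, None, None)                         # (value, z*, d*)
    for bits in itertools.product([0, 1], repeat=n_b):  # all 2**n_b combinations
        d = np.array(bits, dtype=float)
        c = f + G @ d
        z = -np.linalg.solve(H, c)                      # QP optimum for this d
        val = 0.5 * z @ H @ z + c @ z
        if val < best[0]:
            best = (val, z, d)
    return best
```

<para>The exponential loop over <emphasis>bits</emphasis> is exactly why this approach does not scale, and why branch-and-bound solvers, which prune large parts of the Boolean tree, are indispensable in practice.</para>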
<para>The application of Model Predictive Control to hybrid systems arose in the early 1990s. One of the first fundamental studies was made by Bemporad and Morari [14]: they proposed a rigorous approach to the mathematical modelling of hybrid systems that yields a compact representation called Mixed Logical Dynamical (MLD, see Section 7.2.2). Following the optimization step, an optimal constrained receding-horizon controller can then be synthesized. This methodology is helpful for optimizing and orchestrating both large systems with mixed variables and non-linear systems linearized around sequential operating points.</para>
<para>As at the birth of MPC, the first implementations were in refineries and chemical processes. In these fields, Model Predictive Control was already a standard, and the possibility of building a single mathematical model representing the whole system, such as a plant with all its components, and of synthesizing a single controller able to find the optimal solution respecting every specified constraint, was a revolution. In the next section, we explore in depth the issues and limits of Hybrid Model Predictive Control, which can be roughly summarized as computational time and computational power. At that time, this problem was overcome by using off-line optimization, also called Explicit MPC. This control method works properly only within a predetermined range of state variables: the on-line optimization is replaced by an off-line optimization, summarized in a lookup table. With this methodology, the application of Hybrid MPC could be extended to mechanical and mechatronic systems, where the cycle time can be very small. Some applications are summarized in refs. [10&#8211;12].</para>
<para>Indeed, in refineries and chemical processes, or more generally in the process industry, the sampling time of the controller is of the order of minutes. Since the solution of the Mixed-Integer Programming problem is feasible within this time, Hybrid MPC is used as an industrial standard in these fields. However, in the last two decades, since ref. [14], the computational power of embedded microprocessors and Industrial PCs has grown exponentially, as Moore&#8217;s law predicts, and commercial MIP solvers have increased their performance dramatically. These evolutions allow rethinking Hybrid MPC with on-line optimization applied to fast systems, with sampling times in the range of a few seconds. The aim of this study is to build a standard method to synthesize Model Predictive Controllers for hybrid systems (including aggregated CPS) and to test the on-line execution of the controller, in order to determine its minimum sampling time. This possibility is a killer feature in refineries and chemical processes, where Hybrid MPC is already in use but no powerful, standard tool exists to help control engineers design HMPC for the process industry. In the mechanical and mechatronic control field, such a tool can be revolutionary because it simplifies and standardizes the design of the controller: the focus of realizing a feasible controller moves to the MIP solving time. In addition, the designer can check in a meticulous, yet fast, way the feasibility of the Hybrid Model Predictive Controller and its performance.</para>
</section>
<section class="lev2" id="sec7-3-2">
<title>7.3.2 Key Factors</title>
<para>In the last decades, since the introduction of MPC into control theory, a wide variety of applications has been presented, all related to its notable capability of meeting control goals. Indeed, this methodology can realize very smooth and precise control, and it can be tuned in a straightforward way according to the desired performance of the system. As described in Equation (7.2), a typical cost function contains different weights, which make it possible to tune the performance of the controller, easy even for non-technical people. Moreover, constraints are defined directly in the optimization problem, and it is simple to impose constraints on Manipulated Variables (MVs) and Output Variables (OVs), i.e. limits on actuator saturation, dynamical constraints on actuators and physical limits of the controlled system.</para>
<para>Summarizing the benefit of Model Predictive Control:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Most widely used control algorithm in material and chemical processing industries [22];</para></listitem>
<listitem><para>Increased consistency of discharge quality, reduced off-spec products during grade changeovers, increased throughput, and minimized operating cost while meeting constraints (optimization, economics) [23];</para></listitem>
<listitem><para>Superior for process with a large number of manipulated and controlled variables (multivariable, strong coupling) [24];</para></listitem>
<listitem><para>Allows constraints to be imposed on both MVs and CVs, enabling operation closer to constraints, and beyond them in the case of soft constraints;</para></listitem>
<listitem><para>Handles time delays, inverse response, inherent non-linearities (difficult dynamics), changing control objectives and sensor failures (predictive);</para></listitem>
<listitem><para>Optimal rejection of modelling errors and disturbances;</para></listitem>
<listitem><para>Multi-objectives control technique [25].</para></listitem>
</itemizedlist>
</section>
<section class="lev2" id="sec7-3-3">
<title>7.3.3 Key Issues</title>
<para>The basic issue of Hybrid MPC, and of MPC in general, is the computational time needed to solve the optimization problem in real time. Indeed, when dealing with a large and fast system, the model becomes really complex, the required closed-loop time very short, and on-line optimization is not achievable. To mitigate the problem caused by large systems, a pre-stored control allocation law can be used to avoid an increased number of decision variables and increased solving time. This technique is known as Explicit Model Predictive Control [26]: the controller creates a look-up table during off-line computation and uses it at execution time. This method avoids the main drawback of MPC by removing the very time-consuming optimization procedure, enabling the use of MPC, and especially Hybrid MPC, in applications with very high sampling rates.</para>
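<para>The look-up-table idea behind Explicit MPC can be sketched as a point-location step followed by an affine control law u = F<subscript>i</subscript>x + g<subscript>i</subscript>; the regions and gains below are hypothetical placeholders for what the off-line optimization would actually store:</para>

```python
import numpy as np

# Explicit MPC: the off-line optimization stores, for each polyhedral region
# {x : A_i x <= b_i}, an affine control law u = F_i x + g_i.
# Regions and gains below are illustrative placeholders, not a computed solution.
REGIONS = [
    {"A": np.array([[1.0]]),  "b": np.array([0.0]),   # region x <= 0
     "F": np.array([[-0.5]]), "g": np.array([0.2])},
    {"A": np.array([[-1.0]]), "b": np.array([0.0]),   # region x >= 0
     "F": np.array([[-0.8]]), "g": np.array([0.0])},
]

def explicit_mpc(x):
    """On-line step: point location in the partition, then the affine law."""
    for r in REGIONS:
        if np.all(r["A"] @ x <= r["b"] + 1e-9):       # tolerance on the boundary
            return r["F"] @ x + r["g"]
    raise ValueError("state outside the stored partition")
```

<para>The on-line cost is thus reduced to a handful of comparisons and one matrix-vector product, which is what enables very high sampling rates.</para>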
<para>Another important issue is the difficulty of demonstrating the robustness of the control with respect to classical robust control techniques such as H<subscript>&#8734;</subscript> [27]. A possible solution is to couple the MPC controller with an on-line system identification tool, which enables a more robust control: the on-line system identification checks and tunes the system model recursively, compensating modelling errors.</para>
</section>
</section>
<section class="lev1" id="sec7-4">
<title>7.4 Identification of Hybrid Systems</title>
<para>The design of a hybrid model predictive controller requires describing the plant dynamics in terms of a hybrid linear model, which is used to simulate the plant behaviour within the prediction horizon. As is well known, there are basically two ways to construct a mathematical model of the plant:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Analytic approach, where models are derived from first-principle physics laws (like Newton&#8217;s laws, Kirchhoff&#8217;s laws, balance equations). This approach requires an in-depth knowledge and physical insight into the plant, and in the case of complex plants, it may lead to non-linear mathematical models, which cannot be easily expressed, converted or approximated in terms of hybrid linear models;</para></listitem>
<listitem><para>System identification approach, where models are derived and validated based on a set of data gathered from experiments. Unlike the analytic approach, the model constructed through system identification has a limited validity (e.g., it is valid only at certain operating conditions and for certain types of inputs) and it does not give physical insights into the system (i.e., the estimated model parameters may have no physical meaning). Nevertheless, system identification does not need, in principle, in-depth physical knowledge of the process, thus reducing the modelling efforts.</para></listitem>
</itemizedlist>
<para>In this project, hybrid linear models of the processes of interest will be derived via system identification; physical insight into, and knowledge of, the plant will be used where needed to assist the whole identification phase, for instance in choosing appropriate inputs for the experiments, choosing the structure of the hybrid model (defined, e.g., in terms of the number of discrete states and the dynamical order of the linear subsystems), debugging the identification algorithms and assessing the quality of the estimated model.</para>
<para>The following two classes of hybrid linear models will be considered, which mainly differ in the assumption behind the switches among the (linear/affine) time-invariant sub-models:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Jump Affine (JA) models, where the discrete-state switches depend on an external signal, which does not necessarily depend on the value of the continuous state. The switches among the discrete states can be governed, for instance, by a Markov chain, and thus described in terms of state transition probabilities. Alternatively, in deterministic jump models, the mode switches are not described by a stochastic process but are triggered by, or associated with, deterministic events (e.g. gear or speed selectors, evolutions depending on if-then-else rules, on/off switches and valves). In this chapter, we will focus on the identification of deterministic jump models. Stochastic models might be considered at a later stage, only if necessary.</para></listitem>
</itemizedlist>
<fig id="F7-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-10">Figure <xref linkend="F7-10" remap="7.10"/></link></label>
<caption><para>Example of a three-dimensional PWA function <emphasis>y</emphasis> = <emphasis>f</emphasis>(<emphasis>x</emphasis><subscript>1</subscript>, <emphasis>x</emphasis><subscript>2</subscript>).</para></caption>
<graphic xlink:href="graphics/ch07_fig010.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Piece-Wise Affine (PWA) models, where the dynamic affine sub-model active at each time instant depends only on the value of the continuous state. More specifically, in PWA models, the (continuous) state space is partitioned into a finite number of polyhedral regions with non-overlapping interiors, and one dynamical affine model is associated with each polyhedron. PWA models can be used to accurately describe dynamical systems that evolve according to different dynamics depending on the specific point in the state-input space (e.g. a bouncing ball, or switching feedback control laws where the switches between the controllers depend on the state of the system). Furthermore, thanks to the universal approximation property of PWA maps, PWA models can also be used to approximate non-linear/non-smooth phenomena with an arbitrary degree of precision [28]. For the sake of visualization, an example of a three-dimensional PWA function, defined over four polyhedral regions of the state space, is plotted in <link linkend="F7-10">Figure <xref linkend="F7-10" remap="7.10"/></link>.</para></listitem>
</itemizedlist>
<para>Note that Jump models and PWA models can also be combined to describe, for instance, finite state machines (with linear dynamics at each mode), where the mode transition depends on both an external event and the current value of the continuous state, input and output.</para>
<para>In the following, we formalize the hybrid system identification problem and discuss its main challenges. Finally, we provide an overview of the algorithm that will be used and implemented in the DAEDALUS platform, for the identification of both Jump Affine and PWA models.</para>
<section class="lev2" id="sec7-4-1">
<title>7.4.1 Problem Setting</title>
<para>Let us consider a training dataset of input/output pairs D = {<emphasis>u</emphasis>(<emphasis>t</emphasis>), <emphasis>y</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript> (generated by the plant we would like to model), where <emphasis>t</emphasis> denotes the time index, <emphasis>u</emphasis>(<emphasis>t</emphasis>) &#8712; R<superscript>n<subscript>u</subscript></superscript> and <emphasis>y</emphasis>(<emphasis>t</emphasis>) &#8712; R<superscript>n<subscript>y</subscript></superscript> are the input and output of the system at time <emphasis>t</emphasis>, respectively, and N is the length of the training set. Our goal is to estimate, from the considered training set D, a hybrid linear dynamical model approximating the input/output relation of the system, described in the input/output Auto-Regressive with eXogenous input (ARX) form (Equation (7.5)):</para>

<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-5.jpg"/></para>

<para>where <emphasis>&#375;</emphasis>(<emphasis>t</emphasis>) &#8712; R<superscript>n<subscript>y</subscript></superscript> is the output of the estimated model, <emphasis>s</emphasis>(<emphasis>t</emphasis>) &#8712; {1, . . ., s} is the active mode at time <emphasis>t</emphasis> (i.e. the value of the discrete state at time <emphasis>t</emphasis>) and <emphasis>x</emphasis>(<emphasis>t</emphasis>) &#8712; <emphasis>X</emphasis> &#8834; R<superscript>n<subscript>x</subscript></superscript> is the regressor vector containing past values of the input and of the output (Equation (7.6)), i.e.</para>

<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-6.jpg"/></para>

<para>for some fixed values of n<subscript>a</subscript> and n<subscript>b</subscript>, and &#920;<subscript><emphasis>s</emphasis></subscript> &#8712; R<superscript>n<subscript>y</subscript>&#215;n<subscript>x</subscript></superscript> (with <emphasis>s</emphasis> = 1, . . ., s) is the parameter matrix describing the affine sub-model associated with the discrete state <emphasis>s</emphasis>.</para>
<para>The identification of a hybrid linear dynamical model (Equation (7.5)) thus requires: (i) choosing the number s of modes (i.e. the size of the discrete state); (ii) computing the parameter matrices &#920;<subscript><emphasis>s</emphasis></subscript> (with <emphasis>s</emphasis> = 1, . . ., s) characterizing the affine sub-models; (iii) finding the hidden sequence of discrete states {<emphasis>s</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript>; and (iv) in the case of PWA model identification, finding the polyhedral partition of the regressor space <emphasis>X</emphasis> where the affine sub-models are defined.</para>
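<para>As a small sketch of Equation (7.6), the regressor vector stacks the past n<subscript>a</subscript> outputs and n<subscript>b</subscript> inputs; the function below assumes scalar input and output signals:</para>

```python
import numpy as np

def build_regressor(y, u, t, na, nb):
    """Regressor x(t) = [y(t-1), ..., y(t-na), u(t-1), ..., u(t-nb)]
    for a (switched) ARX model as in Equation (7.6); y and u are 1-D sequences."""
    past_y = [y[t - i] for i in range(1, na + 1)]   # na most recent outputs
    past_u = [u[t - i] for i in range(1, nb + 1)]   # nb most recent inputs
    return np.array(past_y + past_u)
```

<para>With vector-valued signals, each entry would itself be a block, and the regressor dimension would become n<subscript>x</subscript> = n<subscript>a</subscript>n<subscript>y</subscript> + n<subscript>b</subscript>n<subscript>u</subscript>.</para>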
<para>When choosing the number s of modes, one must take into account the trade-off between data fitting and model complexity. For small values of s, the hybrid model cannot accurately capture the non-linear and time-varying dynamics of the system. On the other hand, increasing the number of modes also increases the degrees of freedom in the description of the model, which may cause overfitting and poor generalization to unseen data (i.e., the final estimate is sensitive to the noise corrupting the observations), besides increasing the complexity of the estimation procedure and of the resulting model. In the identification algorithms developed during the project, we will assume that s is fixed by the user. The value of s (as well as the values of the parameters n<subscript>a</subscript> and n<subscript>b</subscript> defining the dynamical order of the affine sub-models) will be chosen through cross-validation, with a possible upper bound dictated by the maximum tolerable complexity of the estimated model or by physical insight into the system.</para>
<para><emphasis role="strong">Fitting-Error Minimization</emphasis></para>
<para>The hybrid linear model structure in Equation (7.5) suggests formulating the identification of hybrid models as the following fitting-error minimization problem (Equation (7.7)):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-7.jpg"/></para>
<para>which aims at minimizing, over the parameter matrices &#920;<subscript>s</subscript> (with <emphasis>s</emphasis> = 1, . . ., s) and the discrete state sequence {<emphasis>s</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript>, the power of the error between the measured output <emphasis>y</emphasis>(<emphasis>t</emphasis>) and the model output <emphasis>&#375;</emphasis>(<emphasis>t</emphasis>) = &#920;<subscript><emphasis>s</emphasis>(<emphasis>t</emphasis>)</subscript><emphasis>x</emphasis>(<emphasis>t</emphasis>).</para>
<para>In the cases where the discrete state sequence {<emphasis>s</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript> is exactly known (e.g. when <emphasis>s</emphasis>(<emphasis>t</emphasis>) is associated with the gear number in a car or with an external switching signal controlled by the user, or when, for PWA models, the partition of the regressor space <emphasis>X</emphasis> is fixed a priori), the fitting-error minimization problem (7.7) reduces to a simple linear regression problem, and the parameter matrices &#920;<subscript>s</subscript> (with <emphasis>s</emphasis> = 1, . . ., s) defining the affine sub-models can easily be estimated through standard least squares, i.e.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-8.jpg"/></para>
<para>with <emphasis>I</emphasis><subscript>{<emphasis>s</emphasis>=<emphasis>s</emphasis>(<emphasis>t</emphasis>)}</subscript> denoting the indicator function, i.e.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-9.jpg"/></para>
<para>Namely, in computing the estimate of &#920;<subscript>s</subscript> through Equation (7.8), only the regressor/output pairs (<emphasis>x</emphasis>(<emphasis>t</emphasis>), <emphasis>y</emphasis>(<emphasis>t</emphasis>)) such that <emphasis>s</emphasis> = <emphasis>s</emphasis>(<emphasis>t</emphasis>) are considered.</para>
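<para>When the mode sequence is known, Equation (7.8) amounts to one ordinary least-squares fit per mode, restricted by the indicator function to the samples with <emphasis>s</emphasis> = <emphasis>s</emphasis>(<emphasis>t</emphasis>). A minimal sketch (the function name and data layout are assumptions):</para>

```python
import numpy as np

def fit_modes(X, Y, S, n_modes):
    """Per-mode least squares in the spirit of Equation (7.8): for each mode s,
    fit Theta_s on the regressor/output pairs whose active mode is s.

    X : (N, nx) regressors, Y : (N, ny) outputs, S : (N,) known mode labels.
    Returns a list of (ny, nx) parameter matrices.
    """
    thetas = []
    for s in range(n_modes):
        mask = (S == s)                                # indicator I{s = s(t)}
        th, *_ = np.linalg.lstsq(X[mask], Y[mask], rcond=None)
        thetas.append(th.T)                            # so that y_hat = Theta_s @ x
    return thetas
```

<para>When the mode sequence is unknown, as discussed next, this simple decomposition is no longer available and the joint problem becomes combinatorial.</para>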
<para>In the more general case, where the discrete state sequence {<emphasis>s</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript> is not available, the identification of hybrid models becomes NP-hard (strictly speaking, Equation (7.7) is a mixed-integer quadratic programming problem, which may be computationally intractable except for small-scale problems). Furthermore, besides reconstructing the discrete state sequence {<emphasis>s</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript> and estimating the parameter matrices &#920;<subscript>s</subscript> (with <emphasis>s</emphasis> = 1, . . ., s), the identification of PWA models also requires computing a polyhedral partition of the regressor space <emphasis>X</emphasis>.</para>
</section>
<section class="lev2" id="sec7-4-2">
<title>7.4.2 State-of-the-Art Analysis</title>
<para>Several heuristics have been proposed in the literature to overcome the challenges encountered in hybrid system identification (see [29, 30] for an exhaustive overview of algorithms for identification of Jump Affine and PWA models). Among the proposed algorithms, we have analyzed:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>the bounded-error approach [31], which addresses the identification of Jump Affine models under the assumption that the noise corrupting the output observations <emphasis>y</emphasis>(<emphasis>t</emphasis>) is norm-bounded (with known bound). The goal is to estimate the set of all model parameters &#920;<subscript>s</subscript>, which are compatible with the a-priori assumptions on the noise bound, the chosen model structure and the observations. A polynomial optimization problem is formulated, whose solution is approximated through convex relaxation techniques based on the theory of moments [32]. This approach turns out to be very sensitive to outliers (i.e. noise outside the supposed bounds) and conservative if a large bound on the noise is assumed. Furthermore, it suffers from high computational complexity because of the high computational burden of the employed theory-of-moments-based relaxation;</para></listitem>
<listitem><para>the sparse optimization-based approaches [33] and [34], which address the segmentation of linear models by formulating an optimization problem penalizing the fitting error and the number of switches among the affine sub-models. Therefore, these methods are suited only for Jump Affine systems with infrequent switches;</para></listitem>
<listitem><para>the mixed-integer quadratic programming approach [35], which addresses the identification of PWA systems using hinging-hyperplane ARX models and piecewise affine Wiener models. A mixed-integer quadratic programming problem is formulated (similar, but not exactly equal to (6.3)) and solved through branch-and-bound. Unfortunately, the number of integer variables increases with the number of training samples, limiting the applicability of the method to small-/medium-scale problems;</para></listitem>
<listitem><para>the two-stage clustering-based approach [36], which can be used for both Jump Affine and PWA model identification. At the first stage, the regressor observations are clustered by assigning each data-point to a sub-model through a k-means-like algorithm, and the affine sub-model parameters &#920;<subscript>s</subscript> are estimated at the same time. In the case of PWA identification, a second stage is performed to compute a partition of the regressor space <emphasis>X</emphasis>. Although the approach of [36] is able to handle large training sets, poor results might be obtained when the affine local sub-models are over-parameterized (i.e. large values of the parameters n<subscript>a</subscript> and n<subscript>b</subscript> in the definition of the regressor (6.2) are used), since the distances in the regressor space (namely, the only criterion used for clustering) turn out to be corrupted by redundant, thus irrelevant, information;</para></listitem>
<listitem><para>the recursive two-stage clustering-based approach [37], which is based on the same two-stage clustering philosophy of [36] and is suited for both Jump Affine and PWA model identification. The proposed approach consists of two stages: (S1) simultaneous clustering of the regressor vectors and estimation of the model parameters &#920;<subscript>s</subscript> (<emphasis>s</emphasis> = 1, . . ., s̄). This step is performed recursively by processing the training regressor/output pairs sequentially; (S2) computation of a polyhedral partition of the regressor space through efficient multi-class linear separation methods. This step is performed either in a batch way (i.e. offline) or recursively (i.e. online). Note that stage S2 is required only for PWA system identification. Because of its computational efficiency and its applicability to both batch and recursive identification, we have decided to use and implement this algorithm in the DAEDALUS project. Further details on this algorithm are discussed below.</para></listitem>
</itemizedlist>
</section>
<section class="lev2" id="sec7-4-3">
<title>7.4.3 Recursive Two-Stage Clustering Approach</title>
<para>The main ideas behind the recursive two-stage clustering approach proposed in ref. [37] are presented in this section. As mentioned in the previous paragraph, the hybrid system identification problem is tackled in two stages: S1 (iterative clustering and parameter estimation) and S2 (polyhedral partition of the regressor space, necessary only for PWA model estimate).</para>
<para>Stage S1 is carried out as described in Algorithm 1, where clusters and sub-model parameters are updated iteratively, making the algorithm suitable for online applications, when data are acquired in real time.</para>
<para><emphasis role="strong">Algorithm 1</emphasis> Recursive clustering and parameter estimation</para>
<para><emphasis role="strong">Input:</emphasis> Observations {<emphasis>x</emphasis>(<emphasis>t</emphasis>), <emphasis>y</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript>, desired number s̄ of affine sub-models, initial condition for model parameter matrices &#920;<subscript>1</subscript>, . . ., &#920;<subscript>s̄</subscript>.</para>
<para>1. <emphasis role="strong">let</emphasis> <emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript> &#8592; ∅, <emphasis>s</emphasis> = 1, . . ., s̄;</para>
<para>2. <emphasis role="strong">for</emphasis> <emphasis>t</emphasis> = 1, . . ., <emphasis>N</emphasis> <emphasis role="strong">do</emphasis></para>
<para>2.1. <emphasis role="strong">let</emphasis> <emphasis>e</emphasis><subscript><emphasis>s</emphasis></subscript>(<emphasis>t</emphasis>) &#8592; <emphasis>y</emphasis>(<emphasis>t</emphasis>) &#8722; &#920;<subscript><emphasis>s</emphasis></subscript><emphasis>x</emphasis>(<emphasis>t</emphasis>);</para>
<para>2.2. <emphasis role="strong">let</emphasis> <emphasis>s</emphasis>(<emphasis>t</emphasis>) &#8592; arg min<subscript><emphasis>s</emphasis>=1,...,s̄</subscript> ‖<emphasis>e</emphasis><subscript><emphasis>s</emphasis></subscript>(<emphasis>t</emphasis>)‖<superscript>2</superscript><subscript>2</subscript>;</para>
<para>2.3. <emphasis role="strong">let</emphasis> <emphasis>C</emphasis><subscript><emphasis>s</emphasis>(<emphasis>t</emphasis>)</subscript> &#8592; <emphasis>C</emphasis><subscript><emphasis>s</emphasis>(<emphasis>t</emphasis>)</subscript> ∪ {<emphasis>x</emphasis>(<emphasis>t</emphasis>)};</para>
<para>2.4. <emphasis role="strong">update</emphasis> &#920;<subscript><emphasis>s</emphasis>(<emphasis>t</emphasis>)</subscript> using recursive least-squares;</para>
<para>3. <emphasis role="strong">end for</emphasis>;</para>
<para>4. <emphasis role="strong">end</emphasis>.</para>
<para><emphasis role="strong">Output:</emphasis> Estimated matrices &#920;<subscript>1</subscript>, . . ., &#920;<subscript>s̄</subscript>, clusters <emphasis>C</emphasis><subscript>1</subscript>, . . ., <emphasis>C</emphasis><subscript>s̄</subscript>, sequence of active modes {<emphasis>s</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript>.</para>
<para>The main idea of Algorithm 1 is to compute, at each time instant <emphasis>t</emphasis>, the fitting error <emphasis>e</emphasis><subscript><emphasis>s</emphasis></subscript>(<emphasis>t</emphasis>) = <emphasis>y</emphasis>(<emphasis>t</emphasis>) &#8722; &#920;<subscript>s</subscript><emphasis>x</emphasis>(<emphasis>t</emphasis>) (<emphasis>s</emphasis> &#8712; {1, . . ., s̄}) achieved by all the s̄ local affine sub-models, and select the local model that &#8220;best fits&#8221; the current output observation <emphasis>y</emphasis>(<emphasis>t</emphasis>) (Steps 2.1 and 2.2). The regressor <emphasis>x</emphasis>(<emphasis>t</emphasis>) is then assigned to the cluster <emphasis>C</emphasis><subscript><emphasis>s</emphasis>(<emphasis>t</emphasis>)</subscript> (Step 2.3) and the parameter matrix &#920;<subscript><emphasis>s</emphasis>(<emphasis>t</emphasis>)</subscript> associated to the selected sub-model is updated using recursive least squares (Step 2.4).</para>
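<para>Stage S1 can be sketched in code as follows. This is a hypothetical Python rendering of Algorithm 1 with a standard recursive least-squares update and illustrative names, not the DAEDALUS implementation:</para>

```python
import numpy as np

def recursive_clustering(X, y, n_modes, thetas0, lam=1.0):
    """Sketch of Algorithm 1: at each sample, pick the sub-model with the
    smallest fitting error (steps 2.1-2.2), assign the regressor to its
    cluster (step 2.3) and update that sub-model by recursive least
    squares (step 2.4). lam is the usual RLS forgetting factor."""
    nx = X.shape[1]
    thetas = np.array(thetas0, dtype=float)
    # One RLS covariance matrix per sub-model (standard large-P init).
    P = np.stack([1e3 * np.eye(nx) for _ in range(n_modes)])
    clusters = [[] for _ in range(n_modes)]
    modes = np.empty(len(X), dtype=int)
    for t, (x, yt) in enumerate(zip(X, y)):
        errs = yt - thetas @ x                 # step 2.1: e_s(t) for all s
        s = int(np.argmin(errs**2))            # step 2.2: best-fitting mode
        clusters[s].append(x)                  # step 2.3
        # Step 2.4: recursive least-squares update of Theta_s(t).
        Px = P[s] @ x
        k = Px / (lam + x @ Px)
        thetas[s] += k * (yt - thetas[s] @ x)
        P[s] = (P[s] - np.outer(k, Px)) / lam
        modes[t] = s
    return thetas, clusters, modes
```

As discussed below, the quality of the result depends on the initial matrices `thetas0`, e.g. taken equal to the best single linear model.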
<para>Due to the greedy nature of Algorithm 1, the estimates of the model parameters &#920;<subscript>s</subscript> and the clusters <emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript> are influenced by the initial choice of the parameters &#920;<subscript>s</subscript>. A possible initialization for the parameter matrices is to take &#920;<subscript>1</subscript>, . . ., &#920;<subscript>s̄</subscript> all equal to the best linear model, i.e.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-10.jpg"/></para>
<para>Moreover, the estimation quality can be improved by reiterating Algorithm 1 multiple times, using its output as an initial condition for the following iteration. This can be performed only if the algorithm is executed in a batch mode (offline). Alternatively, a subset of data can be processed in a batch mode to find proper initial conditions; Algorithm 1 is then executed in real time to iteratively process streaming data.</para>
</section>
<section class="lev2" id="sec7-4-4">
<title>7.4.4 Computation of the State Partition</title>
<para>If a PWA identification problem is addressed, besides estimating the model parameters {&#920;<subscript>s</subscript>}<superscript>s̄</superscript><subscript>s=1</subscript> and the sequence of active modes {<emphasis>s</emphasis>(<emphasis>t</emphasis>)}<subscript><emphasis>t</emphasis>=1</subscript><superscript><emphasis>N</emphasis></superscript>, also a polyhedral partition of the regressor space <emphasis>X</emphasis> should be found. More specifically, let <emphasis>X</emphasis><subscript><emphasis>s</emphasis></subscript> (with <emphasis>s</emphasis> = 1, . . ., s̄) be a collection of polyhedra which form a complete polyhedral partition<footnote id="fn7_1" label="1"> <para>A collection {<emphasis>X</emphasis><subscript><emphasis>s</emphasis></subscript>}<superscript>s̄</superscript><subscript>s=1</subscript> is a complete partition of the regressor domain <emphasis>X</emphasis> if ∪<superscript>s̄</superscript><subscript>s=1</subscript><emphasis>X</emphasis><subscript><emphasis>s</emphasis></subscript> = <emphasis>X</emphasis> and <emphasis>X</emphasis><subscript>s</subscript><superscript>◦</superscript> &#8745; <emphasis>X</emphasis><subscript>j</subscript><superscript>◦</superscript> = ∅, ∀<emphasis>s</emphasis> &#8800; <emphasis>j</emphasis>, with <emphasis>X</emphasis><subscript>s</subscript><superscript>◦</superscript> denoting the interior of <emphasis>X</emphasis><subscript><emphasis>s</emphasis></subscript>.</para></footnote> of the regressor space <emphasis>X</emphasis>. Each polyhedron <emphasis>X</emphasis><subscript><emphasis>s</emphasis></subscript> is defined as:</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-11.jpg"/></para>
<para>for some matrix H<subscript>s</subscript> and vector B<subscript>s</subscript> of proper dimensions. The goal is thus to estimate H<subscript>s</subscript> and B<subscript>s</subscript> (with <emphasis>s</emphasis> = 1, . . ., s̄) defining the polyhedron X<subscript>s</subscript>, where the <emphasis>s</emphasis>-th local affine sub-model is active. Two approaches can be followed:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>according to the idea discussed in [11], the Voronoi diagram generated by the clusters&#8217; centroids can be used as a polyhedral partition of the regressor space <emphasis>X</emphasis>. Specifically, let c<subscript>s</subscript> be the centroid of cluster <emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript>. Then, the polyhedron X<subscript>s</subscript> associated to cluster <emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript> (Equation (7.11)) is the set of all the values of the continuous state x such that c<subscript>s</subscript> is the closest centroid to x among all the centroids c<subscript>j</subscript> (with j &#8800; s), i.e.,</para></listitem>
</itemizedlist>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-12.jpg"/></para>
<para>Through simple algebraic manipulations, <emphasis>X</emphasis><subscript><emphasis>s</emphasis></subscript> can be expressed in a form like Equation (7.11), i.e.</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-13.jpg"/></para>
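<para>The algebraic manipulation above amounts to expanding ‖x &#8722; c<subscript>s</subscript>‖<superscript>2</superscript> ≤ ‖x &#8722; c<subscript>j</subscript>‖<superscript>2</superscript> into one linear inequality per j ≠ s. A minimal sketch (an illustrative helper, assuming centroids stored as NumPy rows):</para>

```python
import numpy as np

def voronoi_halfspaces(centroids, s):
    """Express the Voronoi cell of centroid c_s in the halfspace form
    H x ≤ B of Eq. (7.11): the squared-distance comparison
    ||x - c_s||^2 ≤ ||x - c_j||^2 expands (the ||x||^2 terms cancel) to
    2 (c_j - c_s)' x ≤ ||c_j||^2 - ||c_s||^2 for every j != s."""
    cs = centroids[s]
    H, B = [], []
    for j, cj in enumerate(centroids):
        if j == s:
            continue
        H.append(2.0 * (cj - cs))       # one row of H_s per other centroid
        B.append(cj @ cj - cs @ cs)     # matching entry of B_s
    return np.array(H), np.array(B)
```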
<para>Note that the definition of the polyhedron <emphasis>X</emphasis><subscript><emphasis>s</emphasis></subscript> (Equation (7.12)) only depends on the clusters&#8217; centroids, which can be easily updated recursively once the cluster <emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript> is updated (Step (2.3) of Algorithm 1). This makes the use of the Voronoi diagram particularly suited for real-time applications, where data are processed iteratively. However, a limitation of the Voronoi diagram is that it does not take into account how much the points are spread around the clusters&#8217; centres, making the state-space partition less flexible than general linear separation maps. In order to overcome this limitation, the approach described below can be followed.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>separate the clusters {<emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript>}<superscript>s̄</superscript><subscript>s=1</subscript> provided by Algorithm 1 via linear multicategory discrimination (see, e.g. [37&#8211;39]). In the following, we briefly describe the algorithm used in [37], which is suited for both offline and online computations of the state partition.</para></listitem>
</itemizedlist>
<para>The linear multi-category discrimination problem is tackled by searching for a convex piecewise affine separator function <emphasis>&#981;</emphasis>: R<superscript>nx</superscript> &#8594; R discriminating between the clusters <emphasis>C</emphasis><subscript>1</subscript>, . . ., <emphasis>C</emphasis><subscript>s̄</subscript>. The separator <emphasis>&#981;</emphasis> (Equation (7.13)) is defined as the maximum of s̄ affine functions {<emphasis>&#981;</emphasis><subscript><emphasis>i</emphasis></subscript>(<emphasis>x</emphasis>)}<superscript>s̄</superscript><subscript><emphasis>i</emphasis>=1</subscript>, i.e.</para>

<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-14.jpg"/></para>

<para>with <emphasis>&#981;</emphasis><subscript><emphasis>s</emphasis></subscript>(<emphasis>x</emphasis>) described as (Equation (7.14))</para>

<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-15.jpg"/></para>

<para>where &#969;<subscript><emphasis>s</emphasis></subscript> &#8712; R<superscript>nx</superscript> (<emphasis>s</emphasis> = 1, . . ., s̄) are the parameters to be computed.</para>
<para>For <emphasis>s</emphasis> = 1, . . ., s̄, let M<subscript><emphasis>s</emphasis></subscript> be an <emphasis>m</emphasis><subscript><emphasis>s</emphasis></subscript> &#215; <emphasis>n</emphasis><subscript><emphasis>x</emphasis></subscript>-dimensional matrix (with <emphasis>m</emphasis><subscript><emphasis>s</emphasis></subscript> denoting the cardinality of cluster <emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript>) obtained by stacking the transposed regressors <emphasis>x</emphasis>(<emphasis>t</emphasis>)<superscript>&#8217;</superscript> belonging to <emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript> in its rows. If the clusters {<emphasis>C</emphasis><subscript><emphasis>s</emphasis></subscript>}<superscript>s̄</superscript><subscript>s=1</subscript> are linearly separable, the piecewise-affine separator <emphasis>&#981;</emphasis> satisfies the conditions:</para>

<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-16.jpg"/></para>

<para>where 1<subscript><emphasis>m</emphasis><emphasis>s</emphasis></subscript> is an <emphasis>m</emphasis><subscript><emphasis>s</emphasis></subscript>-dimensional vector of ones.</para>
<para>The piecewise-affine separator <emphasis>&#981;</emphasis> thus satisfies the conditions (Equation (7.16)):</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-17.jpg"/></para>
<para>From (7.16), the polyhedra {<emphasis>X</emphasis><subscript><emphasis>s</emphasis></subscript>}<superscript>s̄</superscript><subscript>s=1</subscript> are defined as</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-18.jpg"/></para>
<para>The condition (7.15) thus suggests computing the parameters {&#969;<subscript><emphasis>s</emphasis></subscript>}<superscript>s̄</superscript><subscript>s=1</subscript> by minimizing the convex cost</para>

<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/eq7-19.jpg"/></para>

<para>with (&#183;)<subscript>+</subscript> defined as <emphasis>f</emphasis><subscript>+</subscript> = max{0, <emphasis>f</emphasis>}. Problem (7.17) minimizes the averaged squared 2-norm of the violation of the inequalities in Equation (7.15). The solution of the convex problem (7.17) can then be computed numerically in two ways: (i) offline, through a Regularized Piecewise-Smooth Newton method, or (ii) online, through a Stochastic Gradient Descent method, as explained in [10].</para>
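<para>The online route can be sketched as follows. This is a hypothetical implementation, not the exact method of [10]: it runs plain stochastic gradient descent on the squared hinge violations of the separability conditions, i.e. it penalizes (&#981;<subscript>j</subscript>(x) &#8722; &#981;<subscript>s</subscript>(x) + 1)<subscript>+</subscript><superscript>2</superscript> for every sample of cluster s and every j ≠ s:</para>

```python
import numpy as np

def fit_pwa_separator(X, labels, n_modes, epochs=100, lr=0.01, seed=0):
    """Sketch of the online computation of the piecewise-affine separator:
    SGD on the averaged squared hinge loss over augmented regressors [x; 1],
    so each row of W plays the role of one omega_s (affine term included)."""
    rng = np.random.default_rng(seed)
    Xa = np.hstack([X, np.ones((len(X), 1))])   # append the affine term
    W = np.zeros((n_modes, Xa.shape[1]))
    for _ in range(epochs):
        for i in rng.permutation(len(Xa)):
            x, s = Xa[i], labels[i]
            scores = W @ x                       # phi_1(x), ..., phi_sbar(x)
            for j in range(n_modes):
                if j == s:
                    continue
                v = scores[j] - scores[s] + 1.0  # violation of phi_s ≥ phi_j + 1
                if v > 0:                        # gradient step on v^2
                    W[j] -= lr * 2.0 * v * x
                    W[s] += lr * 2.0 * v * x
    return W

def predict_mode(W, x):
    """Region of x: index of the affine piece attaining the max in phi."""
    return int(np.argmax(W @ np.append(x, 1.0)))
```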
</section>
</section>
<section class="lev1" id="sec7-5">
<title>7.5 Integration of Additional Functionalities to the IEC 61499 Platform</title>
<para>The DAEDALUS automation platform is built on top of the IEC 61499 standard, which is its core technology for implementing industrial-grade applications in distributed control scenarios. The function block (FB) is one of the base elements of this standard. Function blocks are a concept to define solid, reusable software components in industrial automation systems. They allow algorithms to be encapsulated in a form that is easy to use and understandable even for newcomers. Each function block has defined inputs, which are read and processed by an internal algorithm; the results are emitted at defined outputs. Whole applications can be created out of various function blocks by connecting their inputs and outputs. Concretely, each function block consists of a head, a body, input/output events and input/output data.</para>
<para>The IEC 61499 standard defines various kinds of function blocks:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Basic Function Blocks. Basic function blocks are used to implement basic functionalities of applications. Basic function blocks include internal variables, one or more algorithms and an &#8220;Execution Control Chart&#8221;, to define the processing of the algorithms;</para></listitem>
<listitem><para>Service Function Blocks. Service function blocks represent the interfaces to the hardware;</para></listitem>
<listitem><para>Composite Function Blocks. Several basic, service or even other composite function blocks can be grouped to form a composite function block. The composite FB presents itself as a closed function block with a clearly defined interface.</para></listitem>
</itemizedlist>
<section class="lev2" id="sec7-5-1">
<title>7.5.1 A Brief Introduction to the Basic Function Block</title>
<para>Basic function blocks are the atomic units of execution in IEC 61499. A basic FB consists of two parts, i.e. a function block interface and an execution control chart (ECC) that operates over a set of events and variables. The execution of a basic FB entails accepting inputs from its interface, processing the inputs using the ECC and emitting outputs.</para>
<para>A basic FB is encapsulated by a function block interface, which exposes the respective inputs and outputs using ports. These input and output ports may be classified as either event or data ports.</para>
<para><link linkend="F7-11">Figure <xref linkend="F7-11" remap="7.11"/></link> shows the interface of the function block that implements a valve control logic. This interface exposes input events (INIT, MODE_CHANGED, SP_CHANGED), output events (INITO, CNF), as well as input variables (AutoSP, ManSP, mode) and output variables (cp, isMan).</para>
<fig id="F7-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-11">Figure <xref linkend="F7-11" remap="7.11"/></link></label>
<caption><para>Valve: an example of basic function block.</para></caption>
<graphic xlink:href="graphics/ch07_fig011.jpg"/>
</fig>
<para>Event ports are specialized to accept or emit events, which are pure signals that represent status only, i.e. they are either absent or present. On the other hand, data ports can accept or emit valued signals that consist of a typed value, such as integer, string or Boolean. Variable ports of a special type Any can accept data from a range of typed values. In addition, a concept of multiplicity is also applicable to data ports, which allows accepting or emitting arrays of values. A data port can be associated with one or more event ports.</para>
<para>As shown in <link linkend="F7-11">Figure <xref linkend="F7-11" remap="7.11"/></link>, for example, mode is associated with MODE_CHANGED.</para>
<para>However, this association can only be defined for ports of the matching flow direction, e.g. input data ports can only be associated with input event ports. This event&#8211;data association regulates the data flow in and out of a basic FB, i.e. new values are loaded or emitted from the data ports on the interface when an associated event is present.</para>
<para>The behaviour of a basic FB is expressed as a Moore-type state machine, known as an ECC. An ECC reacts to input events and performs actions to generate the appropriate outputs.</para>
<para><link linkend="F7-12">Figure <xref linkend="F7-12" remap="7.12"/></link> shows the ECC of the valve basic function block, which consists of four states: START, INIT, exec_SPChange and exec_ModeChange.</para>
<para>States in ECCs have provision to execute algorithms and emit output events upon ingress, which are represented as ordered elements in their respective action sets.</para>
<para>As an example, in <link linkend="F7-12">Figure <xref linkend="F7-12" remap="7.12"/></link>, the algorithm exec_SPChange is executed (represented as a gray label), and the CNF event is emitted upon entering the exec_SPChange state (represented as a blue oval).</para>
<para>The execution of an ECC starts from its initial state (START in <link linkend="F7-12">Figure <xref linkend="F7-12" remap="7.12"/></link>) and progresses by taking transitions, which are guarded by an input event and an optional Boolean expression over input and/or internal variables. Upon evaluation, a transition is considered to be enabled if the respective guard condition evaluates to true. The ECC will then transition to the next state by taking the enabled egress transition from the source state to the corresponding target state.</para>
<fig id="F7-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-12">Figure <xref linkend="F7-12" remap="7.12"/></link></label>
<caption><para>Example of execution control chart (ECC).</para></caption>
<graphic xlink:href="graphics/ch07_fig012.jpg"/>
</fig>
<fig id="F7-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-13">Figure <xref linkend="F7-13" remap="7.13"/></link></label>
<caption><para>exec_SPChange algorithm from the valve basic FB.</para></caption>
<graphic xlink:href="graphics/ch07_fig013.jpg"/>
</fig>
<para>An algorithm is a finite set of ordered statements that operate over the ECC variables. Typically, an algorithm consists of loops, branching and update statements, which are used to consume inputs and generate outputs. The IEC 61499 standard allows algorithms to be specified in a variety of implementation-dependent languages. As an example, the implementation from nxtControl allows the development of custom algorithms in Structured Text (ST).</para>
<para>The exec_SPChange algorithm from the valve basic FB is presented in <link linkend="F7-13">Figure <xref linkend="F7-13" remap="7.13"/></link>; it uses the ST language as defined in IEC 61131-3. Here, the IF&#8211;THEN&#8211;ELSE construct is used to update the output value of cp based on the value of the input isMan.</para>
</section>
<section class="lev2" id="sec7-5-2">
<title>7.5.2 A Brief Introduction to the Composite Function Block</title>
<para>Composite function blocks facilitate the representation of structural hierarchy. Composite FBs are similar to basic FBs in the sense that they too are encapsulated by function block interfaces. However, unlike a basic FB, the behaviour of a composite FB is implemented by a network of function blocks.</para>
<para>Basic and composite function blocks characterize different types of specifications, which are referred to as function block types (FBTypes). A function block network may consist of instances of various FBTypes, where any given FBType may be instantiated multiple times. This concept is very similar to the object-oriented programming paradigm, which contains classes (analogous to FBTypes) and their instances, namely objects (analogous to FB instances). These FB instances connect and communicate with each other using wire connections, and with external signals via the encapsulating function block interface. This facilitates the structural hierarchy, i.e. a given function block network may contain instances of other composite FBs that encapsulate sub-FBNs.</para>
<para><link linkend="F7-14">Figure <xref linkend="F7-14" remap="7.14"/></link> shows a function block network with three function block instances that communicate with each other using wire connections, e.g. a Real output value SetPoint of the AutoCommand instance can be read as AutoSP by the valve instance.</para>
<para>Furthermore, some signals directly flow from the interface of the top-level composite FB into the encapsulated function block network, e.g. the event MODE_UPDATED is read from an external source and made available to the MODE_CHANGED input event of both the AutoCommand and valve instances. However, only compatible signals flow in this manner, meaning that an input event on a composite FB interface can only flow into an input event of nested FB interfaces. Similarly, data flow in this manner must also conform to data-type compatibility, e.g. a Boolean input on the composite FB interface cannot flow into a string type input of the nested FB interface. One exception to this rule is the Any type, which, as the name suggests, can accept any data type. This mode of signal flow is thus directly responsible for effecting the interface definition of a composite FB, i.e. if a nested FB needs an input from an external source, there must be an input defined on the composite FB interface, which flows into the said nested FB. This encapsulation of nested FBs from external sources simplifies the reuse of FBTypes.</para>
<fig id="F7-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-14">Figure <xref linkend="F7-14" remap="7.14"/></link></label>
<caption><para>A composite function block with an encapsulated function block network.</para></caption>
<graphic xlink:href="graphics/ch07_fig014.jpg"/>
</fig>
</section>
<section class="lev2" id="sec7-5-3">
<title>7.5.3 A Brief Introduction to the Service Interface Function Block</title>
<para>Service interface function blocks (SIFB) can be considered as device drivers that connect the external environment with function block applications. These blocks are used to provide services to a function block application, such as the mapping of I/O pin interactions to event and data ports and the sending of data over a network.</para>
<para>There are two categories of SIFBs described in the standard, namely communication function blocks and management function blocks. While composite FBs capture centralized entities, resources are reminiscent of tasks and devices represent PLCs. Hence, both resources and devices need specific entities that facilitate either task-level (inter-resource) or distributed (inter-device) communication.</para>
<para>Communication function blocks are SIFBs providing interfaces that enable communication between IEC 61499 resources. Within the context of IEC 61499, a resource is a functional unit contained in a device that has independent control of its operations, so it may be created, configured, parameterized, started up, deleted, etc., without affecting other resources. The goal of a resource is to accept data and/or events from one or more interfaces, elaborate them and return data and/or events to some interfaces.</para>
<para>For the sake of completeness, it is worth mentioning that an IEC 61499 device contains one or more interfaces and those interfaces can be of two different types: communication and process. While communication interfaces provide a mapping between resources and the information exchanged via a communication network, a process interface provides a mapping between the physical process (e.g. analog measurements, discrete I/O, etc.) and the resources. Different types of communication function blocks may be used to describe a variety of communication channels and protocols.</para>
<para>On the other hand, management function blocks are SIFBs that are used to coordinate and manage application-level functionalities by providing services, such as starting, stopping, creating and deleting function block instances or declarations. They are somewhat analogous to a task manager in a traditional operating system. Unlike basic FBs, where the behaviour is specified using an ECC, SIFBs are specified using time-sequence diagrams.</para>
</section>
<section class="lev2" id="sec7-5-4">
<title>7.5.4 The Generic DLL Function Block of nxtControl</title>
<para>The IEC 61499 software tool engineered by nxtControl provides a mechanism to integrate custom code into an IEC 61499 application. The mechanism is called Generic DLL function block and enables the use of custom IEC 61499 function blocks accessed by means of an abstract interface layer.</para>
<para>It provides the possibility to implement basic or service IEC 61499 function blocks in a custom programming language; these are compiled into a dynamically loadable library (DLL) and then loaded and bound to the IEC 61499 runtime at the execution phase.</para>
<para>The Generic DLL function block mechanism builds on top of two components:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>a DLL that exposes a C interface where a predefined number of functions and data structures (embedded in a prototype which follows a well-defined template) implement the custom functionalities to be integrated in the distributed control application;</para></listitem>
<listitem><para>a graphical representation of the custom function block, whose FBType is FB_DLL, and which is used in the nxtControl&#8217;s engineering software environment to instantiate as many FBs as needed.</para></listitem>
</itemizedlist>
<para>Such a mechanism enables the development of customized FBs, providing:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>a representation of the IEC 61499 simple data types (as well as one-dimensional arrays of them) and plain C types;</para></listitem>
<listitem><para>an input/output interface for passing these data between the IEC 61499 runtime software and the DLL implementation;</para></listitem>
</itemizedlist>
<fig id="F7-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-15">Figure <xref linkend="F7-15" remap="7.15"/></link></label>
<caption><para>Example of FB_DLL function block.</para></caption>
<graphic xlink:href="graphics/ch07_fig015.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>an interface for a custom function block where one initialization event and an arbitrary number of input events can be fed;</para></listitem>
<listitem><para>the possibility to generate output events asynchronously;</para></listitem>
<listitem><para>an interface to register and unregister a function block with the custom DLL;</para></listitem>
<listitem><para>a way to query the provided data interface, so it is possible to implement consistency checks or to implement operations on different data types by one implementation;</para></listitem>
<listitem><para>the possibility to implement several function blocks through a single DLL.</para></listitem>
</itemizedlist>
<para>More than one instance of the Generic DLL function block (FB_DLL, <link linkend="F7-15">Figure <xref linkend="F7-15" remap="7.15"/></link>) can be instantiated in an IEC 61499 application, and the parameters provided as input to those FBs are exploited to select the appropriate DLL. All the FB_DLL instances are characterized by an INIT input event that is used to load the DLL: in particular, when the INIT event of any FB_DLL is received for the first time, the associated DLL is loaded and the IEC 61499 runtime registers the function block with that DLL. Furthermore, if the constructor is implemented in the custom code, then it is run afterward.</para>
<para>To leverage this flexible customization mechanism for implementing distributed automation applications, the custom code has to expose a data structure whose specification is detailed in the nxtControl&#8217;s documentation material. That interfacing structure defines different elements that characterize the generic DLL function block, like:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>the number of input and output events;</para></listitem>
<listitem><para>the number of data values that are associated to the input and output events;</para></listitem>
<listitem><para>the data type associated to data values.</para></listitem>
</itemizedlist>
<para>In addition to the description of the input/output events and data, the custom code used in a generic DLL function block has to define a precise set of functions that the IEC 61499 runtime uses to interact with the DLL when the distributed control application needs to execute the custom code. The most relevant of such functions are those used to register/unregister an FB_DLL with the appropriate DLL, the one used to execute the code associated to a specific input event, and the one dedicated to signalling the triggering of an output event. In addition to those, there is also a logging function that the code in the DLL can use to report diagnostic information to the IEC 61499 runtime.</para>
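<para>As an illustration only, the interfacing structure and function set described above could be sketched in C as follows. All names and signatures here are hypothetical placeholders, not nxtControl&#8217;s actual API, whose specification is defined in the vendor&#8217;s documentation:</para>

```c
/* Hypothetical sketch of a generic-DLL interface; names are illustrative. */

typedef enum { DT_BOOL, DT_INT, DT_LREAL, DT_STRING } data_type_t;

/* Interfacing structure describing the FB_DLL's events and data. */
typedef struct {
    int num_input_events;              /* INIT plus the custom input events */
    int num_output_events;
    int num_input_data;                /* data values wired to input events */
    int num_output_data;
    const data_type_t *input_types;    /* data type of each input value  */
    const data_type_t *output_types;   /* data type of each output value */
} fb_descriptor_t;

typedef void *fb_instance_t;           /* opaque per-instance handle */

/* Callbacks through which the DLL can signal an output event
 * asynchronously and report log/diagnosis information to the runtime. */
typedef struct {
    void (*emit_output_event)(fb_instance_t inst, int event_id);
    void (*log_message)(fb_instance_t inst, const char *msg);
} runtime_callbacks_t;

/* Functions a DLL of this kind would export: register/unregister a
 * function block and execute the code bound to a given input event. */
fb_instance_t fb_register(const fb_descriptor_t *desc,
                          const runtime_callbacks_t *cb);
void fb_unregister(fb_instance_t inst);
void fb_on_input_event(fb_instance_t inst, int event_id);

/* Querying the declared data interface makes simple consistency checks
 * possible before the function block is put into service. */
int fb_descriptor_is_valid(const fb_descriptor_t *d) {
    if (!d || d->num_input_events < 1) return 0;       /* INIT is mandatory */
    if (d->num_input_data < 0 || d->num_output_data < 0) return 0;
    if (d->num_input_data > 0 && d->input_types == 0) return 0;
    if (d->num_output_data > 0 && d->output_types == 0) return 0;
    return 1;
}
```

<para>The descriptor also enables the consistency checks mentioned above: the runtime can verify that the declared number of events and data values matches the application design before executing the custom code.</para>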
</section>
<section class="lev2" id="sec7-5-5">
<title>7.5.5 Exploiting the FB_DLL Function Block as Interfacing Mechanism between IEC 61499 and External Custom Code</title>
<para>Leveraging the generic DLL function block, it is possible to extend the functionality available in the nxtControl automation platform with additional features that can be seamlessly integrated into an IEC 61499 control application.</para>
<para>This opens the opportunity to integrate into an engineering software tool, designed to develop IEC 61499 applications, features that are not strictly related to the standard itself but that are valuable for implementing advanced distributed control applications. In particular, this mechanism can be leveraged to integrate the advanced functionalities that characterize a CPS conforming to the DAEDALUS vision, such as the &#8220;simulation dimension&#8221; and advanced MPC algorithms.</para>
<para>Extending the type of elaborations that can be performed within a function block in a distributed control application based on IEC 61499 makes it possible to introduce new functionalities. Furthermore, it allows new features to be tested while respecting the normative rules and constraints of the standard, and consequently preserves a high level of portability of the solutions developed by means of this mechanism.</para>
<para>Since the DLL code is developed and compiled outside the classic development toolchain normally used for a plain IEC 61499 application (i.e. the development environment from nxtControl), the DLL has to be built with software tools that target the specific platform where it will run. This means that an appropriate software toolchain is needed to generate binary code that can run on the selected controller platform.</para>
<para>The main constraints that characterize this approach are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>All the algorithms that define the behaviour of the FB_DLL have to be compiled as a dynamic loadable library (DLL) with a binary format compatible with the architecture of the controller, where the DLL will have to be installed;</para></listitem>
<listitem><para>The DLL has to expose a C interface corresponding to the template imposed by the generic DLL function block mechanism;</para></listitem>
<listitem><para>When the FB_DLL is conceived to provide an output event confirming the completion of the elaboration performed by the FB before a new input event can be processed, the elaboration performed by the DLL must not take too long to generate the output event; otherwise, it can negatively affect the controller&#8217;s real-time performance;</para></listitem>
<listitem><para>When the elaboration to be performed requires significant computational resources and a long time to produce a result, another approach should be used: for example, running elaborations in parallel and generating output events asynchronously is a valid alternative;</para></listitem>
<listitem><para>One aspect that needs to be considered at design time is that a DLL can be shared by all the FB_DLL instances that make use of that library. As a consequence, the number of function blocks currently registered with a DLL has to be managed appropriately, in order to keep track of the code portions that need to be executed for each FB_DLL instance.</para></listitem>
</itemizedlist>
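<para>The last constraint above can be sketched as follows. This is a minimal, hypothetical illustration of how a shared DLL might keep track of the FB_DLL instances currently registered with it; the table size and function names are assumptions for illustration, not part of the actual mechanism:</para>

```c
/* Minimal sketch: one DLL shared by several FB_DLL instances keeps a
 * registry, so it knows which code portions belong to which instance
 * and when shared resources can be released. Names are hypothetical. */

#define MAX_INSTANCES 16

typedef struct {
    int in_use;
    int instance_id;   /* identifies the FB_DLL instance in the runtime */
} fb_slot_t;

static fb_slot_t slots[MAX_INSTANCES];
static int registered_count = 0;

/* Returns a slot index (>= 0) on success, -1 when the table is full. */
int fb_dll_register(int instance_id) {
    for (int i = 0; i < MAX_INSTANCES; i++) {
        if (!slots[i].in_use) {
            slots[i].in_use = 1;
            slots[i].instance_id = instance_id;
            registered_count++;
            return i;
        }
    }
    return -1;
}

/* When the last instance unregisters, shared state (e.g. communication
 * channels) could be torn down here. */
void fb_dll_unregister(int slot) {
    if (slot >= 0 && slot < MAX_INSTANCES && slots[slot].in_use) {
        slots[slot].in_use = 0;
        registered_count--;
        /* if (registered_count == 0) release_shared_resources(); */
    }
}

int fb_dll_instance_count(void) { return registered_count; }
```

<para>In a real DLL, the per-slot state would also carry whatever context the custom code needs to execute the correct logic for each FB_DLL instance.</para>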
<para><emphasis role="strong">The compact approach</emphasis></para>
<para>The first approach enabled by the use of the generic DLL function block consists in exploiting the mechanism to implement a fully customized basic function block, where the constraint of using an execution control chart (ECC) no longer applies. In this case, the developer can freely design the finite state machine governing the function block&#8217;s logic states using any preferred development tool (<link linkend="F7-16">Figure <xref linkend="F7-16" remap="7.16"/></link>).</para>
<para>By means of this approach, the logic algorithms to be executed when the associated input events are received by the FB_DLL instance can be designed and implemented following a customized approach that satisfies the developer&#8217;s preferences and needs. At the same time, this mechanism makes it possible to use other programming languages to implement the algorithms of the basic function block, in addition to the structured text (ST) language currently supported by the nxtControl software development tool.</para>
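<para>The compact approach can be illustrated with a minimal sketch of a hand-written state machine replacing the ECC. The states, events and transitions below are purely illustrative, not tied to any particular application:</para>

```c
/* Hypothetical compact-approach sketch: the ECC is replaced by a
 * hand-written finite state machine inside the DLL, driven by the input
 * events the runtime forwards to the function block. */

typedef enum { ST_IDLE, ST_RUNNING, ST_DONE } fb_state_t;
typedef enum { EV_INIT, EV_START, EV_STOP } fb_event_t;

/* Pure transition function: returns the next logic state for a given
 * (state, input event) pair, mimicking what an ECC would encode. */
fb_state_t fb_next_state(fb_state_t s, fb_event_t e) {
    switch (s) {
    case ST_IDLE:    return (e == EV_START) ? ST_RUNNING : ST_IDLE;
    case ST_RUNNING: return (e == EV_STOP)  ? ST_DONE    : ST_RUNNING;
    case ST_DONE:    return (e == EV_INIT)  ? ST_IDLE    : ST_DONE;
    }
    return s;   /* unknown state: stay put */
}
```

<para>Because the transition function is ordinary compiled code, it can be authored, tested and debugged with any development tool the developer prefers, which is precisely the freedom the compact approach provides.</para>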
<fig id="F7-16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-16">Figure <xref linkend="F7-16" remap="7.16"/></link></label>
<caption><para>Illustration of the compact approach based on exploitation of generic DLL FBs.</para></caption>
<graphic xlink:href="graphics/ch07_fig016.jpg"/>
</fig>
<para><emphasis role="strong">The extended approach</emphasis></para>
<para>A generalization of the previous approach consists in leveraging one or more additional DLLs when implementing the code associated to the FB_DLL instance. This basically means that the dynamic loadable library associated to the generic DLL function block is linked, in turn, to one or more other DLLs (<link linkend="F7-17">Figure <xref linkend="F7-17" remap="7.17"/></link>).</para>
<para>In this case, third-party libraries can be exploited to implement customized function blocks usable in an IEC 61499 distributed control application. In this way, it is possible to develop custom service interface function blocks that make use of operating system function calls to access low-level hardware features or input/output data via interfacing devices.</para>
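<para>On a POSIX controller platform, the linkage of the FB_DLL&#8217;s library to a further third-party library could, for example, rely on run-time loading. The sketch below is illustrative only; the library path and symbol name are placeholders for whatever third-party code the application needs:</para>

```c
#include <dlfcn.h>

/* Illustrative extended-approach sketch for a POSIX platform: the DLL
 * behind the FB_DLL loads, in turn, another shared library at runtime.
 * The library path and symbol name passed in are placeholders. */

typedef double (*compute_fn)(double);

double call_third_party(const char *libpath, const char *symbol,
                        double x, int *ok) {
    double result = 0.0;
    void *handle = dlopen(libpath, RTLD_NOW);
    if (!handle) { *ok = 0; return result; }        /* library missing */
    compute_fn fn = (compute_fn)dlsym(handle, symbol);
    *ok = (fn != 0);
    if (fn) result = fn(x);                          /* delegate the work */
    dlclose(handle);
    return result;
}
```

<para>The failure path above also illustrates the constraint discussed below: if no compatible build of the third-party library exists for the controller&#8217;s architecture, the load simply fails and the function block cannot use it.</para>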
<fig id="F7-17" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-17">Figure <xref linkend="F7-17" remap="7.17"/></link></label>
<caption><para>Illustration of the extended approach based on exploitation of generic DLL FBs.</para></caption>
<graphic xlink:href="graphics/ch07_fig017.jpg"/>
</fig>
<para>In order to make this approach applicable, all the DLLs that are going to be exploited within the code of a generic DLL function block have to be compiled for the specific architecture of the controller that will run that code.</para>
<para>That constraint can be limiting in scenarios where the DLLs referenced by the custom code are not available for the selected platform, making the use of those libraries impossible. However, this limitation is not attributable to the generic DLL function block mechanism but to the lack of a compatible version of the third-party libraries.</para>
<para>All the considerations made for the compact approach of exploiting the FB_DLL also apply to this extended case.</para>
<para><emphasis role="strong">The distributed approach</emphasis></para>
<para>The most general and flexible exploitation approach of the generic DLL function block mechanism consists not only in leveraging the FB_DLL FBs to integrate custom made and/or third-party software algorithms, but also in expanding the distributed computational network with additional elaboration devices via interfacing mechanisms that can co-exist in parallel to the IEC-61499 communication interface (<link linkend="F7-18">Figure <xref linkend="F7-18" remap="7.18"/></link>).</para>
<fig id="F7-18" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-18">Figure <xref linkend="F7-18" remap="7.18"/></link></label>
<caption><para>Illustration of the distributed approach based on exploitation of generic DLL FBs.</para></caption>
<graphic xlink:href="graphics/ch07_fig018.jpg"/>
</fig>
<para>This means that, in addition to custom and advanced algorithms embedded in DLLs that run locally in the controller where the FB_DLL instance is mapped, we can leverage the computational resources of other devices, to which specific data processing is allocated.</para>
<para>In such a scenario, the dynamic loadable library associated to an FB_DLL instance is used to open appropriate communication channels toward other computational nodes of the network, where the data elaboration is actually performed. The FB_DLL has to leverage the asynchronous generation of events and appropriate mechanisms for accepting new requests, in order to manage the elaboration and communication time without negatively affecting the responsiveness of the IEC 61499 controller where the FB_DLL instance is running.</para>
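<para>A minimal sketch of this asynchronous pattern, assuming a POSIX threads environment, is given below. In a real deployment the worker would exchange data with a remote computational node over a communication channel instead of computing locally, and the thread would not be joined inside the event handler; all names are hypothetical:</para>

```c
#include <pthread.h>

/* Hypothetical distributed-approach sketch: a long-running elaboration
 * runs on a worker thread so the IEC 61499 event chain returns quickly;
 * the output event is generated asynchronously when the result is ready. */

typedef struct async_job {
    double input;
    double result;
    int    done;                                      /* set when the output event fires */
    void (*emit_output_event)(struct async_job *job); /* runtime-callback stand-in */
} async_job_t;

/* Worker thread: stand-in for an elaboration that would really be
 * delegated to another computational node of the network. */
static void *worker(void *arg) {
    async_job_t *job = (async_job_t *)arg;
    job->result = job->input * 2.0;        /* placeholder computation */
    job->emit_output_event(job);           /* asynchronous output event */
    return 0;
}

/* Simple emitter used when no runtime is attached: flags completion. */
static void mark_done(async_job_t *job) { job->done = 1; }

/* Called from the input-event handler: spawns the worker so the event
 * chain returns promptly. A real FB_DLL would not join here; the join
 * only keeps this sketch deterministic. */
int start_async_elaboration(async_job_t *job) {
    pthread_t tid;
    if (pthread_create(&tid, 0, worker, job) != 0) return -1;
    pthread_join(tid, 0);
    return 0;
}
```

<para>The key design point is that the input-event handler only dispatches work: the time-consuming elaboration and the network round-trip never block the controller&#8217;s event loop, preserving its responsiveness.</para>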
</section>
</section>
<section class="lev1" id="sec7-6">
<title>7.6 Conclusions</title>
<para>A thorough review of the state of the art regarding solutions for controlling aggregated CPS has been carried out, focusing on Model Predictive Control and especially on Hybrid Model Predictive Control. The analysis delves into hybrid system representation and modelling, showing different techniques, mainly PWA and MLD. The advantage of the PWA representation is related to the availability of numerous tools developed in the control system and identification fields, which are able to perform stability proofs and convergence analyses. Moreover, PWA allows building an easier-to-use SDK interface in future development steps. On the other hand, the MLD representation allows a substantial reduction of the solver&#8217;s computational cost, as shown in Section 7.2.2. Both PWA and MLD are used in the SDK of Daedalus and they will work synergistically to improve the performance and the usability of the toolbox (SDK).</para>
<para>A review of the literature on data-driven modelling of hybrid systems has been carried out, with emphasis on PieceWise Affine (PWA) models and Jump Affine models, where the switches among the discrete states are triggered by deterministic events (e.g. if&#8211;then&#8211;else rules). These two models will be combined in the future stages of the project to arrive at Jump PieceWise Affine (JPWA) models, where the PieceWise Affine part will be used to describe the non-linear dynamics of the continuous (physical) states of the CPS, while the Jump part will be used to describe the time evolution of the discrete (logical) states.</para>
<para>As a next step, a user-friendly software toolbox for identification of hybrid systems will be developed and the software functions will be integrated in the Daedalus&#8217; platform. This toolbox for on-line identification will contain the algorithm in ref. [37]. If necessary, improvements and/or extensions of this identification algorithm will be proposed and implemented in the toolbox. Benchmark examples available in the literature and case studies proposed by the project&#8217;s partners will be used to test the implemented identification algorithms.</para>
<para>The IEC-61499 standard defines a technology for the implementation of distributed control applications applicable to several industrial scenarios. Many key aspects make such a technology a valid solution for the development of the new generation of industrial control systems, leveraging networks of interacting CPSs.</para>
<para>The modularity that characterizes the control software design approach, which builds on the concept of function block, and the event-based execution paradigm are, just as an example, two of the core architectural aspects of the IEC-61499 standard that provide an effective development tool for complex control applications.</para>
<para>Advanced control software can be implemented exploiting the hierarchical development approach based on nesting of different types of function blocks. Custom algorithms can be implemented both through the composition of function blocks and by the development of Basic Function Blocks, leveraging the programming languages supported by the selected software development toolkit.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>This work was achieved within the EU-H2020 project DAEDALUS, which has received funding from the European Union&#8217;s Horizon 2020 research and innovation programme, under grant agreement No. 723248.</para></section>
<section class="lev1" id="sec7-7">
<title>References</title>
<para>[1] V. Vyatkin, &#8220;The IEC 61499 standard and its semantics&#8221;, IEEE Industrial Electronics Magazine, vol. 3, 2009.</para>
<para>[2] G. Optimization et al., &#8220;Gurobi optimizer reference manual&#8221;, URL: http://www.gurobi.com, vol. 2, pp. 1&#8211;3, 2012.</para>
<para>[3] ILOG CPLEX, Reference Manual, 2011.</para>
<para>[4] B. Meindl, M. Templ, &#8220;Analysis of commercial and free and open source solvers for linear optimization problems&#8221;, Forschungsbericht CS-2012-1, 2012.</para>
<para>[5] J. Lunze, F. Lamnabhi-Lagarrigue, Handbook of hybrid systems control: theory, tools, applications, Cambridge University Press, 2009.</para>
<para>[6] S. A. Nirmala, B. V. Abirami, and D. Manamalli, &#8220;Design of model predictive controller for a four-tank process using linear state space model and performance study for reference tracking under disturbances&#8221;, in Process Automation, Control and Computing (PACC), 2011 International Conference on, 2011.</para>
<para>[7] J. G. Ortega, E. F. Camacho, &#8220;Mobile robot navigation in a partially structured static environment, using neural predictive control&#8221;, Control Engineering Practice, vol. 4, pp. 1669&#8211;1679, 1996.</para>
<para>[8] M. Kvasnica, M. Herceg, &#317;. &#268;irka, M. Fikar, &#8220;Model predictive control of a CSTR: A hybrid modeling approach&#8221;, Chemical Papers, vol. 64, pp. 301&#8211;309, 2010.</para>
<para>[9] J. Richalet, A. Rault, J. L. Testud, J. Papon, &#8220;Model predictive heuristic control: Applications to industrial processes&#8221;, Automatica, vol. 14, pp. 413&#8211;428, 1978.</para>
<para>[10] F. Borrelli, A. Bemporad, M. Fodor, D. Hrovat, &#8220;An MPC/hybrid system approach to traction control&#8221;, IEEE Transactions on Control Systems Technology, vol. 14, pp. 541&#8211;552, 2006.</para>
<para>[11] G. Ferrari-Trecate, E. Gallestey, P. Letizia, M. Spedicato, M. Morari, M. Antoine, &#8220;Modeling and control of co-generation power plants: a hybrid system approach&#8221;, IEEE Transactions on Control Systems Technology, vol. 12, pp. 694&#8211;705, 2004.</para>
<para>[12] D. Corona, B. De Schutter, &#8220;Adaptive cruise control for a SMART car: A comparison benchmark for MPC-PWA control methods&#8221;, IEEE Transactions on Control Systems Technology, vol. 16, pp. 365&#8211;372, 2008.</para>
<para>[13] E. Sontag, &#8220;Nonlinear regulation: The piecewise linear approach&#8221;, IEEE Transactions on automatic control, vol. 26, pp. 346&#8211;358, 1981.</para>
<para>[14] A. Bemporad, M. Morari, &#8220;Control of systems integrating logic, dynamics, and constraints&#8221;, Automatica, vol. 35, pp. 407&#8211;427, 1999.</para>
<para>[15] R. Alur, C. Courcoubetis, T. A. Henzinger, P.-H. Ho, &#8220;Hybrid automata: An algorithmic approach to the specification and verification of hybrid systems&#8221;, in Hybrid systems, Springer, pp. 209&#8211;229, 1993.</para>
<para>[16] W. P. M. H. Heemels, B. De Schutter, A. Bemporad, &#8220;Equivalence of hybrid dynamical models&#8221;, Automatica, vol. 37, pp. 1085&#8211;1091, 2001.</para>
<para>[17] H. P. Williams, Model building in mathematical programming, John Wiley &amp; Sons, 2013.</para>
<para>[18] J. H. Lee, &#8220;Model predictive control: Review of the three decades of development&#8221;, International Journal of Control, Automation and Systems, vol. 9, no. 3, pp. 415&#8211;424, 2011.</para>
<para>[19] C. A. Floudas, Nonlinear and mixed-integer optimization: fundamentals and applications, Oxford University Press, 1995.</para>
<para>[20] R. Fletcher, S. Leyffer, &#8220;Numerical experience with lower bounds for MIQP branch-and-bound&#8221;, SIAM Journal on Optimization, vol. 8, pp. 604&#8211;616, 1998.</para>
<para>[21] A. Makhorin, &#8220;GLPK&#8221;, GNU Linear Programming Kit, 2004.</para>
<para>[22] E. F. Camacho, C. Bordons, Model predictive control in the process industry, Springer Science &amp; Business Media, 2012.</para>
<para>[23] E. Perea-L&#243;pez, B. E. Ydstie, I. E. Grossmann, &#8220;A model predictive control strategy for supply chain optimization&#8221;, Computers &amp; Chemical Engineering, vol. 27, no. 8&#8211;9, pp. 1201&#8211;1218, 2003.</para>
<para>[24] J. &#352;irok&#253; et al., &#8220;Experimental analysis of model predictive control for an energy efficient building heating system&#8221;, Applied Energy, vol. 89, no. 9, pp. 3079&#8211;3087, 2011.</para>
<para>[25] S. Li et al., &#8220;Model predictive multi-objective vehicular adaptive cruise control&#8221;, IEEE Transactions on Control Systems Technology, vol. 19, no. 3, pp. 556&#8211;566, 2011.</para>
<para>[26] A. Alessio, A. Bemporad, &#8220;A survey on explicit model predictive control&#8221;, in Nonlinear Model Predictive Control, Springer Berlin Heidelberg, pp. 345&#8211;369, 2011.</para>
<para>[27] S. Y. Xu, T. W. Chen et al., &#8220;Robust H-infinity control for uncertain stochastic systems with state delay&#8221;, IEEE Transactions on Automatic Control, vol. 47, pp. 2089&#8211;2094, 2002.</para>
<para>[28] L. Breiman, &#8220;Hinging hyperplanes for regression, classification, and function approximation&#8221;, IEEE Transactions on Information Theory, vol. 39, pp. 999&#8211;1013, 1993.</para>
<para>[29] S. Paoletti, A. L. Juloski, G. Ferrari-Trecate, R. Vidal, &#8220;Identification of hybrid systems a tutorial&#8221;, European journal of control, vol. 13, pp. 242&#8211;260, 2007.</para>
<para>[30] A. Garulli, S. Paoletti, A. Vicino, &#8220;A survey on switched and piecewise affine system identification&#8221;, in 16th IFAC Symposium on System Identification, Brussels, 2012.</para>
<para>[31] N. Ozay, C. Lagoa, M. Sznaier, &#8220;Set membership identification of switched linear systems with known number of subsystems&#8221;, Automatica, vol. 51, pp. 180&#8211;191, 2015.</para>
<para>[32] J. B. Lasserre, &#8220;Global optimization with polynomials and the problem of moments&#8221;, SIAM Journal on Optimization, vol. 11, pp. 796&#8211;817, 2001.</para>
<para>[33] H. Ohlsson, L. Ljung, S. Boyd, &#8220;Segmentation of ARX-models using sum-of-norms regularization&#8221;, Automatica, vol. 46, pp. 1107&#8211;1111, 2010.</para>
<para>[34] D. Piga, R. T&#243;th, &#8220;An SDP approach for &#8467;0-minimization: Application to ARX model segmentation&#8221;, Automatica, vol. 49, pp. 3646&#8211;3653, 2013.</para>
<para>[35] J. Roll, A. Bemporad, L. Ljung, &#8220;Identification of piecewise affine systems via mixed-integer programming&#8221;, Automatica, vol. 40, pp. 37&#8211;50, 2004.</para>
<para>[36] G. Ferrari-Trecate, M. Muselli, D. Liberati, M. Morari, &#8220;A clustering technique for the identification of piecewise affine systems&#8221;, Automatica, vol. 39, pp. 205&#8211;217, 2003.</para>
<para>[37] V. Breschi, D. Piga, A. Bemporad, &#8220;Piecewise Affine Regression via Recursive Multiple Least Squares and Multicategory Discrimination&#8221;, Automatica, vol. 73, pp. 155&#8211;162, 2016.</para>
<para>[38] K. P. Bennett, O. L. Mangasarian, &#8220;Multicategory Discrimination via Linear Programming&#8221;, Optimization Methods and Software, vol. 3, pp. 27&#8211;39, 1994.</para>
<para>[39] Y. J. Lee, O. L. Mangasarian, &#8220;SSVM: A Smooth Support Vector Machine for Classification&#8221;, Computational Optimization and Applications, vol. 20, pp. 5&#8211;22, 2001.</para>
</section>
</chapter>

<chapter class="chapter" id="ch08" label="8" xreflabel="8">
<title>Modular Human&#8211;Robot Applications in the Digital Shopfloor Based on IEC-61499</title>
<para><emphasis role="strong">Franco A. Cavadini<superscript>1</superscript> and Paolo Pedrazzoli<superscript>2</superscript></emphasis></para>
<para><superscript>1</superscript> Synesis, SCARL, Via Cavour 2, 22074 Lomazzo, Italy</para>
<para><superscript>2</superscript> Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland</para>
<para>E-mail: franco.cavadini@synesis-consortium.eu; paolo.pedrazzoli@supsi.ch</para>
<para>This chapter presents the results of the conception effort done under Daedalus to transfer the technological results of IEC-61499 into the industrial domain of Human&#8211;Robot collaboration, with the aim of deploying the concept of mutualism in next-generation continuously adaptive Human&#8211;Machine interactions, where operators and robots mutually complement their physical, intellectual and sensorial capacities to achieve optimized quality of the working environment, while increasing manufacturing performance and flexibility. The architecture proposed envisions a future scenario where Human&#8211;Machine distributed automation is orchestrated through the IEC-61499 formalism, to empower worker-centred cooperation and to capitalize on both worker&#8217;s and robot&#8217;s strengths to improve synergistically their integrated effort.</para>
<section class="lev1" id="sec8-1">
<title>8.1 Introduction</title>
<para>Personnel costs in Europe are higher compared to other industrial regions; hence, EU industry today competes in the global market by offering high added-value products. This is possible thanks to the high qualification level and know-how of its 17 million shopfloor workers.</para>
<para>To keep this true, the European manufacturing industry needs to adopt a new production paradigm, focusing on processes where robots collaborate with humans with mutual benefits in terms of skill growth and support. The problem is that current automation approaches in Europe have disregarded the added value of workers, promoting the de-skilling of the European workforce and labour shedding.</para>
<para>Future European value-adding manufacturing industries will have to rely more and more on the virtuous combination of machines and operators [1], to increase the standard of quality of their shopfloors while remaining competitive with low-wage countries. To exploit new synergies between operators and machines, future manufacturing processes will have to exhibit a dynamically reconfigurable overall behaviour, through continuous physical interactions and bidirectional exchange of information between them.</para>
<para>A possible solution towards a comprehensive management of this highly integrated collaboration, by pushing the boundaries of the topic of human&#8211;robot Mutualism, is to apply the IEC-61499 standard to orchestrate their joint behaviour.</para>
<para>In fact, in collaborative tasks, the overall dynamics of the interactions between a robot and a human is currently an emergent property that implicitly arises from their individual behaviours. On the contrary, the aim should be of making these implicit properties explicit, by representing, standardizing and orchestrating the overall dynamics of human&#8211;robot interaction. Achieving human&#8211;robot mutualism through such orchestration will dramatically improve the transparency and acceptance of robots for users, as well as substantially increase the ergonomy and efficiency of collaborative tasks.</para>
<para>The main bottleneck in orchestration for mutualism is that current scheduling and planning algorithms, powerful as they may be, are limited by the expressiveness and fidelity of the representations they operate upon. Therefore, the need of the market (partially tackled in Daedalus) is to develop and standardize such representations for open manufacturing processes and ensure their compatibility with existing norms, with the IEC-61499 standard for industrial distributed automation being the cornerstone. This can be achieved only by working on three high-level objectives:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Standardizing and homogenizing the way intelligent agents, both human and robotics, are represented and orchestrated, from design to runtime stage, in a team with multiple dynamically varying objectives;</para></listitem>
</itemizedlist>
<fig id="F8-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link></label>
<caption><para>Life-cycle stages to achieve human&#8211;robot symbiosis from design to runtime through dedicated training.</para></caption>
<graphic xlink:href="graphics/ch08_fig001.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Engineering a new generation of mechatronic intelligent devices for human&#8211;robot collaboration, to facilitate and augment bidirectional interactions with operators while exhibiting inherently safe behaviours;</para></listitem>
<listitem><para>A widespread application of AI-based techniques, from semantic planning for orchestration and task planning to deep learning for human intent recognition. Applying AI at different functional levels and life-cycle stages is essential to go beyond current rigidity of robotics systems and, thus, to increase by-design compatibility with human operators.</para></listitem>
</itemizedlist>
<para>What European manufacturing processes need is a more effective and extensive symbiosis between humans and robots in the work environment, achieved by proposing new technological solutions but also reshaping the way those processes (and the systems executing them) are conceived, designed, run and reconfigured (<link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link>).</para>
</section>
<section class="lev1" id="sec8-2">
<title>8.2 Human and Robots in Manufacturing: Shifting the Paradigm from Co-Existence to Mutualism</title>
<para>The European manufacturing industry, with more than 2M enterprises that employ 30M people, must face two main challenges: (i) reduction of product life cycles, with a corresponding reduction in the amortization time for the investments; (ii) increased demand for customization from the market, which requires flexibility [2] and adaptability for manufacturing smaller batch sizes with constant product changes [3]. EU industry therefore competes in the global market by offering high-quality products, thanks to the high qualification level and know-how of its 17 million shopfloor workers. Sustaining this requires retaining these highly skilled workers for as long as possible, postponing their retirement age and ensuring a smooth transition by incorporating younger workers. However, the negative image of manufacturing due to its impact on health [4] and low attractiveness (monotonous and boring work), combined with the aging of the European population, will provoke a lack of qualified shopfloor workers by 2030 [5].</para>
<para>The problem is that current automation approaches in Europe have disregarded the added value of workers, promoting the de-skilling of the European workforce and labour shedding [6]. However, Japan, one of the key manufacturers of robotics, has shown that other manufacturing models are possible, where robots and technologies support and improve workers instead of substituting them [7].</para>
<para>As explored in the literature and in previously approved EC-funded projects (e.g. MAN-MADE [8]), the workplaces of the future are expected to be worker-centric (as opposed to task-centric), with an increased role of workers in pursuing production performance and personal well-being. In this new paradigm, it is the task that suits the skills, experience, capacities and needs of the worker, here turned from a passive constraint into a variable opportunity. Previous research and pilot implementations have in fact demonstrated that collaborative human&#8211;robot workspaces, knowledge networks and augmented reality support [9] can improve productivity and workers&#8217; well-being, reducing first-time assembly time by 50% [10] or bringing novice workers close to experienced ones [11].</para>
<para>The paradigm of human&#8211;machine symbiosis suggested by Tzafestas [12] and revised (with a focus on assembly systems) in Ferreira et al. [13] appoints advanced human&#8211;machine symbiotic workplaces as the foundation for human-centric factories of the future.</para>
<para>Ultimately, such a positive and human-centric vision still lacks actual instantiations, for several reasons:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A new conceptual framework, intended to deploy symbiotic human&#8211;robot ecosystems, needs to be created for the manufacturing environment, effectively describing workers, intelligent machine and, especially, their interactions and collaborative tasks;</para></listitem>
<listitem><para>Advanced algorithms and tools based on artificial intelligence are missing to &#8220;augment&#8221; the mutual perception and understanding of the behaviour of robots and operators, for an effective synergistic approach in executing joint tasks;</para></listitem>
<listitem><para>Trust-creation environments are missing, where the human&#8211;machine team can orchestrate their respective actions in a controlled situation and where machines are aware not only of the human worker&#8217;s physical state but also of their mental state. In fact, human&#8211;machine interaction is usually conceived one-way, with one of the elements providing and the other receiving support;</para></listitem>
<listitem><para>Worker-centric human&#8211;machine symbiotic interaction in real industrial environments is currently limited by available legislation (e.g. EU Machinery Directive 2006/42/EC, ISO 10218 standard) and insufficient empirical data, so that only sequential and parallel cooperation as forms of co-work are possible;</para></listitem>
<listitem><para>Configuration of human&#8211;machine interaction is currently performed at its setting-up and, then, seldom updated. But human operators (and, just partially, machines) modify their behaviour and mood daily: a manufacturing symbiosis needs to be constantly adapted and tuned, considering actual symbionts&#8217; behaviours;</para></listitem>
<listitem><para>A unified definition of what is &#8220;good&#8221; for the worker is still lacking: a new approach combining subjective and objective measures is needed for this evaluation.</para></listitem>
</itemizedlist>
</section>
<section class="lev1" id="sec8-3">
<title>8.3 The &#8220;Mutualism Framework&#8221; Based on IEC-61499</title>
<para>Advanced human&#8211;machine interaction is the most promising approach to enable worker-centric manufacturing in the factory of the future. Some authors [13, 15] have recently suggested the implementation of a &#8220;symbiotic system&#8221; paradigm, where human and robotic operators cooperate for an effective accomplishment of manufacturing tasks.</para>
<para>The approach conceived here embraces this paradigm and proposes a more complete and concrete interpretation based on the biological concept of Mutualism, a peculiar relationship between two organisms of different species in which each individual benefits from the activity of the other. Mutualism is a specific instance of symbiosis, establishing a win&#8211;win interaction.</para>
<para>This paradigm is adopted as the basis for an innovative methodological framework that supports the effective integration and implementation of collaborative robotics technology over the life cycle of the plant, from the conceptual stage to runtime and re-configuration. The objective is to sustain a deeper and more extensive collaboration between humans and robots, as intelligent agents orchestrated to achieve dynamically varying objectives, while mutually compensating for their limits through their respective strengths.</para>
<para>Members of this orchestrated manufacturing team are called symbionts and they are either humans or robots/intelligent devices. Regardless of their nature, symbionts are all able to provide and receive support (i.e. giving or receiving a quantifiable benefit thanks to their interaction).</para>
<para>Symbionts are intended to operate in real manufacturing environments, and the effectiveness of their Mutualism is assessed continuously (from design to runtime) considering a holistic worker-centric perspective of &#8220;well-being&#8221; and psychological safety. Clearly, Symbionts are &#8220;living&#8221; entities (both humans and robots) in that they adjust their behaviour on a task-wise basis and modify their performances according to changing exogenous elements.</para>
<para>These qualitative characteristics result in the following concepts composing the so-called &#8220;Mutualism Framework&#8221;.</para>
<section class="lev2" id="sec8-3-1">
<title>8.3.1 &#8220;Orchestrated Lean Automation&#8221;: Merging IEC-61499 with the Toyota Philosophy</title>
<para>When dealing with classical industrial automation use-cases, a core concept is the real-time orchestration of automation tasks &#8211; provided by various subsystems, machines or robotic manipulators &#8211; to guarantee the exact execution of a well-specified behaviour, usually within the constraints of a time cycle. On the other hand, complex production processes, especially where the intervention of highly qualified human operators is essential, have been optimized throughout the last decades with the lean manufacturing approach, mostly under the Toyota philosophy.</para>
<para>For a more holistic integration of humans and robots, the concept of hybrid orchestration (at the runtime stage) of IEC-61499 automation tasks, executed by both types of Symbionts but framed in a lean methodological and engineering context (at the design stage), has been introduced. The synthesis of these two elements (<link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link>) requires adaptation and evolution of both: orchestration will have to consider the inevitable variability (and, partly, unpredictability) of human tasks (thanks to the support of artificial intelligence), while the lean concepts will have to be extended to consider the very peculiar capacities of intelligent mechatronic systems.</para>
<fig id="F8-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link></label>
<caption><para>Bidirectional exchange of support between humans and robots.</para></caption>
<graphic xlink:href="graphics/ch08_fig002.jpg"/>
</fig>
</section>
<section class="lev2" id="sec8-3-2">
<title>8.3.2 A Hybrid Team of Symbionts for Bidirectional Mutualistic Compensation</title>
<para>The latest studies [13, 15] addressing human&#8211;machine interaction propose to establish symbiotic environments for optimal collaboration between human and machine. Nevertheless, the proposed symbioses are always unidirectional, pursuing the benefit of the human component alone. This confines machines to a purely servant role: they co-exist with humans in the same working environment (under more flexible safety constraints, thanks to the advancement of collaborative robotics), but provide the required support only in very specific and limited situations.</para>
<para>The Mutualism Framework aims to move beyond this concept by proposing that both robots and human operators be treated as intelligent agents (Symbionts), each with its own special traits, bidirectionally exchanging physical support, information and even knowledge of the process. In this new vision, robots (machines) and, more generally, intelligent devices can also be the recipients of support and training actions, with human operators transmitting knowledge and experience to them.</para>
<para>Making the symbiosis mutualistic provides two innovative ways of exploiting the role of machines:</para>
<orderedlist numeration="roman" continuation="restarts" spacing="normal">
<listitem><para>human&#8211;robot automation tasks can be designed exploiting real collaboration, thanks to the continuous exchange of orchestration signals between symbionts;</para></listitem>
<listitem><para>intelligent mechatronic systems are transformed into active repositories of manufacturing knowledge.</para></listitem>
</orderedlist>
</section>
<section class="lev2" id="sec8-3-3">
<title>8.3.3 Three-Dimensional Characterization of Symbionts&#8217; Capabilities</title>
<para>Worker characterization is traditionally performed considering just a sub-set of all the possible describing dimensions: vital statistics, ergonomics and anthropometry; functional capacities; knowledge and experience. In fact, these are rarely included at the same time in the creation of a dynamic worker profile where all these elements holistically concur to a unique profiling strategy.</para>
<para>On the other hand, the ongoing transition towards the concept of Cyber-Physical Systems (CPS) for mechatronics is still incomplete, meaning that we have not yet reached a level of maturity where CPS are treated as intelligent agents that may evolve in time (e.g. through machine learning) and, as such, require an equally dynamic characterization strategy.</para>
<para>The Mutualism Framework aims at providing a comprehensive assessment and characterization approach for Symbionts (both humans and robots) operating on its shop floors, to find valuable win&#8211;win combinations for the effective execution of collaborative tasks.</para>
<para>Under a common overarching interpretation methodology, all the considered and monitored characteristics will be used to define, on a task-wise basis, a three-dimensional picture representing a generic symbiont profile (<link linkend="F8-3">Figure <xref linkend="F8-3" remap="8.3"/></link>) in terms of:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Experience, which indicates the symbiont&#8217;s level of practice in executing a given task. A proper taxonomy is thus required to quantify and qualify the level of experience, also considering the practice gained using edutainment tools and virtual/augmented reality environments;</para></listitem>
</itemizedlist>
<fig id="F8-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-3">Figure <xref linkend="F8-3" remap="8.3"/></link></label>
<caption><para>Three dimensions of characterization of Symbionts.</para></caption>
<graphic xlink:href="graphics/ch08_fig003.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Specific knowledge, related to the level of knowledge to correctly understand and handle a specific subject. It refers to a theoretical or practical understanding of the topic which can be either implicit or explicit;</para></listitem>
<listitem><para>Physical performance, which quantifies the physical characteristics (such as movement capacities, strength in different positions and operations, sensorial capabilities, etc.) needed to execute the task.</para></listitem>
</itemizedlist>
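<para>To make the three dimensions above concrete, the following minimal Python sketch shows one possible task-wise encoding of a symbiont profile. The class and field names and the 0&#8211;1 scale are illustrative assumptions made here, not part of the Mutualism Framework specification.</para>

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-dimensional, task-wise symbiont profile
# described above; field names and the 0-1 scale are illustrative assumptions.
@dataclass
class TaskProfile:
    task_id: str
    experience: float            # level of practice, incl. edutainment/VR training
    specific_knowledge: float    # implicit or explicit understanding of the subject
    physical_performance: float  # movement, strength, sensorial capabilities

    def as_vector(self):
        """Return the 3-D characterization used for task-wise comparison."""
        return (self.experience, self.specific_knowledge, self.physical_performance)

# A symbiont (human or robot) carries one such profile per task it may execute.
welding = TaskProfile("weld-seam-A", experience=0.8,
                      specific_knowledge=0.6, physical_performance=0.9)
```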
</section>
<section class="lev2" id="sec8-3-4">
<title>8.3.4 Machine Learning Applied to Guarantee Dynamic Adherence of Models to Reality</title>
<para>Kruger et al. [14], in a review study on human&#8211;machine cooperation types in manufacturing systems, state that cooperation is an important aspect for flexibility, adaptability and reusability. As also stated by Ferreira et al. [15], manufacturing systems have been pressed in recent years to provide highly adaptable and quickly deployable solutions to deal with unpredictable changes following market trends. But what about the adaptability (short term) and evolution (mid/long term) of the capacities of both human operators and robots? In fact, workers&#8217; performance may vary significantly even on a day-by-day basis, while learning-augmented mechatronic systems are conceived to improve over their life cycle.</para>
<para>The Mutualism Framework puts the dynamicity of a Symbiont&#8217;s characterization as one of its cornerstones, proposing to integrate this sort of &#8220;live portrait&#8221; into a so-called Virtual Avatar, that is, a digital representation of the Symbiont that implements the characterization data model and is fed with real-time information coming from the shopfloor, where humans and robots operate and interact. Avatars will not be simply passive containers of information concerning symbionts; they will also be able to represent that part of their dynamics (= behaviours) which is needed to design and then run an adequate orchestration towards the common manufacturing goals.</para>
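<para>A minimal sketch of such a &#8220;live portrait&#8221; could look as follows. The class name, state keys and the exponential-smoothing update are assumptions made for illustration; the framework only prescribes that the Avatar be continuously fed with shopfloor measurements.</para>

```python
class VirtualAvatar:
    """Minimal sketch of a Virtual Avatar: a digital representation of a
    symbiont whose characterization is continuously updated from shopfloor
    measurements. The smoothing scheme is an illustrative assumption."""

    def __init__(self, symbiont_id, alpha=0.2):
        self.symbiont_id = symbiont_id
        self.alpha = alpha   # weight of the newest observation
        self.state = {}      # e.g. {"fatigue": 0.3, "cycle_time": 12.5}

    def ingest(self, measurement):
        # An exponential moving average keeps the "live portrait" adherent
        # to reality while damping sensor noise.
        for key, value in measurement.items():
            old = self.state.get(key, value)
            self.state[key] = (1 - self.alpha) * old + self.alpha * value

avatar = VirtualAvatar("operator-07")
avatar.ingest({"fatigue": 0.4})
avatar.ingest({"fatigue": 0.8})
```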
</section>
</section>
<section class="lev1" id="sec8-4">
<title>8.4 Technological Approach to the Implementation of Mutualism</title>
<para>To transform the aforementioned concepts into a valid implementation of the Mutualism Framework, several technological contributions are needed, integrated in a coherent functional architecture, as represented in <link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link>.</para>
<fig id="F8-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link></label>
<caption><para>Qualitative representation of the technological key enabling concepts of the Mutualism Framework.</para></caption>
<graphic xlink:href="graphics/ch08_fig004.jpg"/>
</fig>
<section class="lev2" id="sec8-4-1">
<title>8.4.1 &#8220;Mutualism Framework&#8221; to Sustain Implementation of Symbionts-Enhanced Manufacturing Processes</title>
<para>Mutualistic symbiosis constitutes the fundamental building block for boosting worker centrality in real production environments, where human&#8211;robot collaboration is a cornerstone. For this to become a new design and runtime paradigm, a &#8220;Mutualism Framework&#8221; must be established, that is, a sound methodological characterization of what the Symbionts are and of how their mutualistic interactions are modelled and exploited.</para>
<para>The Mutualism Framework (MF) explores four major aspects of the above-mentioned functional architecture:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A dedicated semantic data model to describe all the elements of the Mutualism concept;</para></listitem>
<listitem><para>An assessment model to capture the dynamics of their characteristics (slow) and behaviours (fast);</para></listitem>
<listitem><para>The functional mapping, on a per-task basis, between the dynamic representation of Symbionts and their 3-dimensional (experience, specific knowledge and physical performances) profile;</para></listitem>
<listitem><para>An overarching set of indicators to evaluate the multi-dimensional performance of Symbionts.</para></listitem>
</itemizedlist>
<para>Clearly, as far as intelligent mechatronic systems (= robotic symbionts) are concerned, the assessment model is directly linked to their functional and non-functional specifications. On the other hand, the characterization of human symbionts must holistically consider all their major &#8220;dimensions&#8221;:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem>
<para>Vital statistics, ergonomics and anthropometry, filling the existing gaps in the state of the art, especially regarding the variability of these aspects due to physical and non-physical exposure at the collaborative workplace;</para></listitem>
<listitem><para>Functional capacities, to consider the relation between workers&#8217; abilities and the potentially assigned tasks, and to sustain the corresponding AI learning of robots;</para></listitem>
<listitem><para>Knowledge and expertise, to effectively and systematically enable knowledge sharing and transfer among workers but also between humans and robots.</para></listitem>
</orderedlist>

<para>The second key focus of the Mutualism Framework is the quantification of the performance of collaborating humans and robots (measured and/or foreseen), on a per-task basis, but under different perspectives. In fact, the 3D profiling previously introduced provides a simplified but effective way of evaluating how well different Symbionts &#8220;fit&#8221; the execution of a specific activity, independently of how such a level of adequacy is achieved (i.e. differently between operators and machines); this enables the adaptable orchestration of collaboration based on IEC-61499.</para>
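<para>As a hedged illustration of how such task-wise fitness could feed the orchestration layer, the sketch below scores symbionts against a task&#8217;s requirements. The min-ratio aggregation and all names are assumptions made here; the framework only prescribes that adequacy be evaluated independently of how it is achieved.</para>

```python
def task_fit(profile, requirements):
    """Illustrative fitness score: how well a symbiont's 3-D profile covers a
    task's requirements, on a 0-1 scale. The min-ratio aggregation is an
    assumption, not the framework's normative metric."""
    return min(min(profile[k] / requirements[k], 1.0)
               for k in requirements if requirements[k] > 0)

# Hypothetical profiles: the human excels in experience/knowledge, the robot
# in physical performance; the task is physically demanding.
human = {"experience": 0.9, "knowledge": 0.7, "physical": 0.5}
robot = {"experience": 0.4, "knowledge": 0.6, "physical": 1.0}
task  = {"experience": 0.5, "knowledge": 0.5, "physical": 0.8}

# The orchestrator can assign the task to the best-fitting symbiont.
best = max([("human", human), ("robot", robot)],
           key=lambda s: task_fit(s[1], task))
```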
<para>At the same time, the performance of the mutualism must be assessed, to define and then evaluate the achievement of specific process objectives. The MF puts the worker&#8217;s safety (physical AND psychological) and well-being at the cornerstone of its set of KPIs, without forgetting manufacturing sustainability (economic, environmental and social), reconfigurability, and compliance with the regulations in force.</para>
</section>
<section class="lev2" id="sec8-4-2">
<title>8.4.2 IEC-61499 Engineering Tool-Chain for the Design and Deployment of Real-Time Orchestrated Symbionts</title>
<para>During the ongoing transition towards Industry 4.0, the concept of highly distributed intelligent mechatronic devices (also called CPS) is emerging, whose joint behaviour satisfies the production objectives, while guaranteeing much higher flexibility and reconfigurability. This convergence of the industrial world towards an agent-based paradigm for industrial automation asks for adequate new methodologies and technologies for the so-called (real-time) Orchestration of these distributed systems and, thanks to the support of Daedalus, the open and interoperable IEC-61499 standard is taking the lead in solving this need.</para>
<para>With the Mutualism approach, it is recognized that a collaborating team of human and robotic symbionts is, in fact, an extension of the above-mentioned concept of distributed intelligence, encompassing the hybrid nature of shopfloors where operators and machines work shoulder-to-shoulder. This means that the concept of Mutualism must be developed towards its technical dimension of (soft) real-time orchestration of Symbionts, designed, deployed and then executed at runtime through an IEC-61499-based engineering tool-chain.</para>
<para>The design stage of a Mutualistic manufacturing process will consider both the conceptual definition of the specifications of the process itself, and the engineering of the automation logics (through the IEC-61499 formalism and programming language) that will control orchestration of the distributed intelligence of Symbionts.</para>
<para>Conception of the Mutualism is where the principles of Lean Manufacturing are applied towards a new production model that considers the opportunities of human&#8211;robot collaboration and therefore exploits them. It originates from the key principles of the &#8220;Toyota Way&#8221; (especially the kaizen) to implement Mutualism keeping the Human operator at the centre of the process.</para>
<para>Leveraging the state of the art of R&amp;D on &#8220;Lean Automation&#8221;, it is possible to focus mostly on the implementation of those design-support tools that can simplify the definition of requirements for Mutualistic tasks, help assemble the most appropriate team of Symbionts for those tasks and support the generation of specifications for the corresponding IEC-61499 orchestration.</para>
<para>As for the engineering of orchestration logics, the usage of the IEC-61499 IDE and runtime developed in Daedalus makes it possible to: (i) guarantee ease of interfacing and functional wrapping of the lower-level automation architecture of specific robotic symbionts; (ii) use the 61499 formalism to consider the 3D performance of Symbionts and (iii) integrate with an adequate perceptual learning platform.</para>
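<para>As a rough analogy of how IEC-61499 event-driven function blocks can wrap a robotic symbiont&#8217;s lower-level API, consider the Python sketch below. The class, event names and the logged robot call are hypothetical; real function blocks are engineered in the IEC-61499 IDE and executed by the runtime, not written in Python.</para>

```python
# Rough Python analogy of IEC-61499 basic function blocks: event inputs
# trigger algorithms, and results propagate along event connections.
class FunctionBlock:
    def __init__(self, name):
        self.name = name
        self.handlers = {}   # event input  -> algorithm to execute
        self.outputs = {}    # event output -> [(target block, target event)]

    def on(self, event, algorithm):
        self.handlers[event] = algorithm

    def connect(self, out_event, target, target_event):
        self.outputs.setdefault(out_event, []).append((target, target_event))

    def fire(self, event, data=None):
        # An incoming event runs the associated algorithm, then the result
        # propagates along the configured event connections.
        result = self.handlers[event](data) if event in self.handlers else data
        for target, ev in self.outputs.get(event, []):
            target.fire(ev, result)

# Wrapping a (hypothetical) robot move command behind a block interface:
log = []
robot = FunctionBlock("RobotSymbiont")
robot.on("REQ", lambda d: log.append(("move", d)) or d)
robot.fire("REQ", {"target": "station-2"})
```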
</section>
<section class="lev2" id="sec8-4-3">
<title>8.4.3 AI-Based Semantic Planning and Scheduling of Orchestrated Symbionts&#8217; Tasks</title>
<para>Complementary to the IEC-61499-based design stage of Mutualism is the development of the Mutualism Execution Platform (MEP), that is, a set of IEC-61499-compliant runtime modules that continuously adapt the orchestration of human and robotic symbionts. In fact, the management of the runtime stage of the orchestration of distributed symbionts is where the flexibility and adaptability enabled by the mutualistic symbiosis of humans and robots is effectively achieved.</para>
<para>The collaboration between operators and machines designed within the IEC-61499 IDE is then operatively achieved by the MEP, deployed within the shopfloor and connected in real time to both machines and workers through the IEC-61499 runtime framework. In fact, it is responsible for defining the most suitable optimization pattern(s) to be executed over a specific time horizon and adapting coherently with the variability of production and workers&#8217; needs.</para>
<para>In practice, the MEP can exploit the availability of ready-to-use AI-based techniques to tackle the operational challenges of robustly planning and scheduling (over a distributed agents&#8217; functional architecture) the team of Symbionts needed to reach specific production objectives, considering that:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>First, the planning and scheduling (P/S) of human activities is complex in itself, since people have different skills, attitudes and preferences; moreover, working shifts and rosters are subject to strict union and legislative regulations that cannot be violated, but impose severe restrictions on the feasibility of working plans.</para></listitem>
<listitem><para>Second, although machines do not have &#8220;personal&#8221; preferences and they are not subject to &#8220;union&#8221; regulations, they must undergo precise and rigorous maintenance plans, which must ensure that the operating conditions of each machine meet very high operating standards and security rules.</para></listitem>
<listitem><para>Third, humans and machines are called to cooperate in a highly dynamic and uncertain environment where the tasks to be executed constantly evolve and their distribution (over different symbionts) and scheduling must be very robust.</para></listitem>
</itemizedlist>
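<para>The three constraint classes above can be sketched in a toy assignment routine. The greedy strategy and all names are assumptions made for illustration; the MEP would instead rely on robust AI-based P/S techniques.</para>

```python
# Toy sketch of the P/S constraints listed above: humans carry shift limits,
# machines carry maintenance windows, and assignments must respect both.
def schedule(tasks, symbionts):
    plan = []
    for task in tasks:
        for s in symbionts:
            if s["kind"] == "human" and s["hours_left"] < task["duration"]:
                continue  # union/legislative shift limits cannot be violated
            if s["kind"] == "machine" and task["start"] in s["maintenance"]:
                continue  # rigorous maintenance plans must be honoured
            plan.append((task["id"], s["id"]))
            if s["kind"] == "human":
                s["hours_left"] -= task["duration"]
            break
    return plan

symbionts = [
    {"id": "op-1",  "kind": "human",   "hours_left": 2},
    {"id": "rob-1", "kind": "machine", "maintenance": {8}},
]
tasks = [
    {"id": "t1", "duration": 2, "start": 8},  # op-1 takes it
    {"id": "t2", "duration": 1, "start": 9},  # op-1 out of hours -> rob-1
]
plan = schedule(tasks, symbionts)
```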
<para>The second key aspect of dealing with the runtime management of Mutualism is to provide intuitive programming tools, which enable advanced users and shopfloor workers to program novel robot skills that are verifiably compliant with IEC-61499. This requires the analysis of robotic skills with linear-time temporal logic model checking. These programming tools can be based, for instance, on RAFCON [16], a visual programming tool for robotic skills that enables logging with semantic labels, so that the context in which data was logged is known. Having contextual knowledge greatly facilitates the data-driven analysis of skills [17] and the application of data mining techniques.</para>
</section>
<section class="lev2" id="sec8-4-4">
<title>8.4.4 Modular Platform for Perceptual Learning and Augmentation of Human Symbionts</title>
<para>Intelligent mechatronic systems exchange (potentially strict real time) I/O signals to sustain high-performance interactions. Humans do not; in fact, despite having an incredibly sophisticated and flexible set of sensing and actuating apparatus, their capacity of receiving and transmitting complex and structured information is very limited. This (very simply introduced) specific issue is also one of the major reasons for which, until now, human&#8211;robot collaboration has been severely hindered: orchestrating physical tasks accomplished by distributed agents requires a bi-directional information flow continuously exchanged WHILE those tasks are being executed.</para>
<para>This aspect of the Mutualism is tackled from a systemic point of view, aware of the fact that what is needed today is not so much a new, very specific smart sensing or interfacing solution (these are already developed and released continuously at market level) but rather an HW/SW abstraction layer to:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Simplify the aggregation and elaboration of several and heterogeneous signals coming from and going into the shopfloor (through multiple devices) and</para></listitem>
<listitem><para>Provide already integrated interpretation and machine learning functionalities, then available to the orchestration layer for increased flexibility and adaptability.</para></listitem>
</orderedlist>
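<para>The two roles above can be sketched as a thin abstraction layer that normalizes heterogeneous shopfloor signals into one stream consumed by higher layers (interpretation, machine learning, orchestration). Device names and payload fields are assumptions made for illustration.</para>

```python
# Minimal sketch of the HW/SW abstraction layer described above.
class SignalAbstractionLayer:
    def __init__(self):
        self.adapters = {}      # device_id -> function normalizing raw payloads
        self.subscribers = []   # e.g. interpretation / machine-learning layers

    def register(self, device_id, adapter):
        self.adapters[device_id] = adapter

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def push(self, device_id, raw):
        # Heterogeneous raw payloads become uniform events before delivery.
        event = self.adapters[device_id](raw)
        for callback in self.subscribers:
            callback(device_id, event)

layer = SignalAbstractionLayer()
layer.register("wristband", lambda raw: {"heart_rate": raw[0]})
layer.register("cobot", lambda raw: {"joint_torque": max(raw)})

events = []
layer.subscribe(lambda dev, ev: events.append((dev, ev)))
layer.push("wristband", [72])
layer.push("cobot", [1.2, 3.4, 0.5])
```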
<para>Following the natural distinction induced by the bi-directionality of signals to be exchanged between human and robotic Symbionts, it is correspondingly possible to identify two major contributions: holistic monitoring and learning of Virtual Avatars, and adaptive cognitive interfacing.</para>
<para>The first component is a hierarchical functional architecture composed of three layers: a modular monitoring system composed of distributed smart sensors; a data-interpretation layer and a machine learning middleware that provides high-level task-related information. To assure an adaptive manufacturing environment, data gathering for worker profile adaptation will be continuous, performed both during everyday manufacturing operations and within properly designed training sessions.</para>
<para>Complementary to this is the possibility of sending feedback to an operator, directly from a robot, to enable the coordination of their respective activities during the execution of joint tasks (which may also involve several symbionts). We call this adaptive cognitive interfacing, achieved through the augmentation of the worker&#8217;s capabilities through dedicated smart devices. Augmented/Mixed/Virtual Reality will be the most important approach to tackling this challenge, knowing that the area of human&#8211;robot collaboration is still developing and that novel interfaces effectively supporting the collaboration need to be designed, implemented and evaluated.</para>
</section>
<section class="lev2" id="sec8-4-5">
<title>8.4.5 Training Gymnasium for Progressive Adaptation and Performance Improvement of Symbionts&#8217; Mutualistic Behaviours</title>
<para>All biological symbioses go through a preliminary training phase, in which the symbionts get to know each other and take each other&#8217;s measure. This step is fundamental for Mutualism too. In fact, improper human&#8211;robot interaction may cause counter-effects, such as misuse of machines and/or safety issues.</para>
<para>Trust is a critical element in HRI, because a human&#8217;s trust in robots directly affects the degree of autonomy granted to an industrial robot, which in turn relates to the efficiency of manufacturing processes. When a human worker observes a discrepancy between what he/she expects from the robot partner and the robot&#8217;s performance, his/her trust in the robot decreases accordingly; when the robot&#8217;s performance matches human expectations, the human&#8217;s trust in the robot increases.</para>
<para>The solution to this is the so-called Training Gymnasium, where:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A task recording &amp; displaying facility will enable the recording and retrieval of working parameters, machine movements, machine and worker roles, etc. Depending on the worker&#8217;s literacy level, the training facility may record or retrieve the above-mentioned parameters;</para></listitem>
<listitem><para>Virtual reality environments and devices will be implemented thanks to the augmentation platform. These solutions are especially intended for machines still under design, to assure a rich workers&#8217; experience and value-adding data gathering from the design of new workplaces onwards (in a closed loop with the other phases of the life cycle of the manufacturing process);</para></listitem>
<listitem><para>Augmented reality will be used to guide less skilled workers in interacting with robotic Symbionts and other physical machines.</para></listitem>
</itemizedlist>
</section>
</section>
<section class="lev1" id="sec8-5">
<title>8.5 The Potential to Improve Productivity and the Impact this Could Have on European Manufacturing</title>
<para>According to Holdren [18], manufacturing has a larger multiplier effect than any other major economic activity: every euro spent in manufacturing drives an additional &#8364;1.35. EC data show that in 2012 the manufacturing sector in the EU employed 30 million persons directly and provided twice as many jobs indirectly, manufactured goods amounted to more than 80% of total EU exports and manufacturing accounted for 80% of private Research &amp; Development expenditure. This notwithstanding, for diverse and widely discussed reasons, EU manufacturing has slightly declined in the last few years.</para>
<para>The European Commission has made huge investments in manufacturing topics to reverse this trend, and Advanced Manufacturing has been identified as the major driving force for improving the competitiveness of European industry, namely through [19] ICT-enabled intelligent manufacturing, more sustainable technologies and processes, and high-performance production. To guarantee flexibility and even total flexibility [20], the European industry needs to adopt new paradigms of production models, focusing on human&#8211;machine collaboration more than is currently done in fully automated islands, where humans are out of the decision loop [21], to benefit from workers&#8217; abilities, fostering human skills and human motivation [22].</para>
<para>It is therefore necessary to move to a new concept of human-centred automation [23], and it is under this conceptual framework that the Mutualism Framework seeks an effective symbiosis between humans and robots, to overcome the challenges that the European manufacturing industry must face in the years to come:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Promoting value-adding, non-repetitive, non-alienating jobs in the manufacturing industry;</para></listitem>
<listitem><para>Claiming R&amp;D investments in a wide plethora of knowledge fields;</para></listitem>
<listitem><para>Making high-quality production processes that target overall sustainability economically sustainable;</para></listitem>
<listitem><para>Supporting lifelong learning on the shopfloor exploiting AVR;</para></listitem>
<listitem><para>Promoting the re-shoring of sustainable businesses.</para></listitem>
</itemizedlist>
<para>This is clearly aligned with the Europe 2020 policy framework targets for Smart, Sustainable and Inclusive Growth, towards its five major objectives of Employment (75% of the 20&#8211;64 year olds to be employed), R&amp;D (3% of the EU&#8217;s GDP to be invested in R&amp;D), climate change and energy sustainability (greenhouse gas emissions 20% lower than that in 1990; 20% of energy from renewables; 20% increase in energy efficiency), education (reducing the rates of early school leaving below 10%; at least 40% of 30&#8211;34 year olds completing third-level education) and fighting poverty and social exclusion (at least 20 million fewer people in or at risk of poverty and social exclusion).</para>
<para>As in other application domains, manufacturing is facing a turning point where traditional production processes and work methods must evolve to be more flexible and adaptive to a quickly changing context. Many manufacturing companies experience an unpredictable and dynamic production environment due to increased customization in their production, short product life cycles and increased competition from low-labour-cost countries. To remain competitive in such a globalized market, they must adapt their production systems accordingly and create flexible automation solutions.</para>
<para>Adaptability in manufacturing can be defined as the ability of the production system to alter itself efficiently in response to changed production requirements. According to J&#228;rvenp&#228;&#228; [24], manufacturing system adaptability can be achieved either statically (i.e. while the system is not operating) or dynamically (while the system is running), working on: (a) physical adaptation (of layout, machines and machine elements); (b) logical adaptation (re-routing, re-planning, re-scheduling and re-programming) and (c) parametric adaptation (changing machine settings).</para>
<para>To fulfil these requirements, reconfigurable manufacturing systems have been proposed as a set of possible solutions, &#8220;designed at the outset for rapid change in structure, as well as in hardware and software components, to quickly adjust production capacity and functionality within a part family in response to sudden changes in market or regulatory requirements&#8221; [25]. In these systems, human operators remain an invaluable resource, by being superior to robots at rapidly interpreting unplanned tasks and situations and handling flexibility and complexity.</para>
<para>Six core properties of reconfigurable manufacturing systems impact the overall time and cost of reconfiguration and for each of them the Mutualistic Framework based on IEC-61499 is capable of providing a concrete technological and methodological answer:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Scalability</emphasis> (design for capacity changes): The dynamic creation of Mutualistic teams, thanks to the IEC-61499 orchestration layer, makes it possible to manage rapid changes of production capacity by simply adding or removing symbionts from the shopfloor, even at the runtime stage. Two major technological innovations guarantee this level of reconfigurability: (i) the IEC-61499 platform&#8217;s support for plug &amp; produce, integrating functionalities of auto-discovery and auto-configuration of symbionts; (ii) the dynamic planning and scheduling of mutualistic tasks, looking for the best coupling of the characteristics and skills of Symbionts, which guarantees that the introduction (or removal) of a member of the team is exploited in the most appropriate way.</para></listitem>
<listitem><para><emphasis role="strong">Convertibility</emphasis> (design for functionality changes): the Mutualism Framework brings ease of programmability directly into the hands of human operators, thanks to its dedicated intuitive interfaces. To achieve greater flexibility along with more efficient production, traditional industrial robots are substituted by flexible and autonomous robotic systems with intuitive on-the-fly programming, enabling an ideal batch size of 1. This translates into faster and less costly management of the re-conversion of the functionalities of robotic symbionts, even over the very short term. This aspect is further enhanced by the machine learning capabilities of robots, which enable a direct adaptation (without programming actions) of the behaviour with respect to minor changes in the requested functionalities.</para></listitem>
<listitem><para><emphasis role="strong">Diagnosability</emphasis> (design for easy diagnostics): An effect of the human&#8211;robot symbiosis is that the monitoring and inspection capabilities of robots can be used directly by human operators to augment their perception of the working environment and of the production process. Thanks to an appropriate AVR layer, human Symbionts can be provided with contextual and dynamic information, leading to much better error diagnosis.</para></listitem>
<listitem><para><emphasis role="strong">Customization</emphasis> (flexibility limited to part family): This is one of the drivers most impacted by a more extensive use of HRC, since it covers the manufacturing activities currently performed mostly by human operators. Through its IEC-61499-based approach to orchestration, the framework guarantees a new degree of collaboration in executing joint tasks, especially those which require variations within the same part family. The flexible implementation of the automation logic for these mutualistic tasks considers, from the design stage, how they will be executed by hybrid human&#8211;robot teams, where individual behaviours will be dynamically adapted. This means that operators will be able to exploit their natural cognitive flexibility to modify the activity with respect to the single part, while robots will adapt accordingly to this human-induced change.</para></listitem>
<listitem><para><emphasis role="strong">Modularity</emphasis> (modular components): the whole concept of Symbionts targets an extreme modularization of functional units, while guaranteeing their ease of interchange through a high degree of decentralized intelligence. While this is natural for human operators, the innovation of the Mutualistic Framework is to bring the same approach to robots, and then to bring physical and functional modularity into a dynamic integration with the unique IEC-61499 orchestration layer. This means proposing a systematic way to design the orchestration in a robustly reconfigurable way.</para></listitem>
<listitem><para><emphasis role="strong">Integrability</emphasis> (interfaces for rapid integration): this key driver of reconfigurability is where the interoperable and open nature of IEC-61499 has its greatest impact. The proposed integration and orchestration layer is a major enabler of real plug &amp; produce functionalities for the intelligent symbionts of Mutualism. The IEC-61499 platform acts both as an abstraction layer over the lower levels of automation and, at runtime, as the manager of the dynamic integration of new Symbionts into a specific mutualistic task.</para></listitem>
</itemizedlist>
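<para>The scalability and integrability properties above can be sketched in code. The following is a minimal, illustrative Python sketch (all class and method names are hypothetical, not part of the IEC-61499 standard or the Daedalus platform): Symbionts announce themselves to a registry (auto-discovery), can be plugged or unplugged at runtime, and a planner picks the team member whose declared skills best match a mutualistic task.</para>

```python
# Hypothetical sketch of plug & produce and skill-based team composition,
# in the spirit of the IEC-61499 orchestration layer described in the text.
from dataclasses import dataclass, field


@dataclass
class Symbiont:
    name: str
    kind: str                       # "human" or "robot"
    skills: set = field(default_factory=set)


class ShopfloorRegistry:
    """Holds the Symbionts currently plugged into the shopfloor."""

    def __init__(self):
        self._members = {}

    def plug(self, s: Symbiont):
        # Auto-configuration on arrival: the new member becomes
        # immediately available to the planner.
        self._members[s.name] = s

    def unplug(self, name: str):
        # Removal at runtime: capacity shrinks without reprogramming.
        self._members.pop(name, None)

    def best_for(self, required_skills: set):
        """Return the member covering most of the required skills."""
        return max(self._members.values(),
                   key=lambda s: len(s.skills & required_skills),
                   default=None)


registry = ShopfloorRegistry()
registry.plug(Symbiont("cobot-1", "robot", {"pick", "place", "inspect"}))
registry.plug(Symbiont("operator-A", "human", {"rework", "inspect", "decide"}))

best = registry.best_for({"inspect", "decide"})   # -> operator-A
```

<para>Adding or removing a member changes only the registry contents; the task-matching logic is untouched, which is the essence of the reconfigurability argument made above.</para>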
</section>
<section class="lev1" id="sec8-6">
<title>8.6 Conclusions</title>
<para>During the coming decades, the whole European manufacturing sector will have to face important social challenges. While shopfloor operators are usually considered a &#8220;special&#8221; community, their job being among the most disadvantaged in terms of workplace health and safety [26], Europe&#8217;s ageing population will inevitably lead workers to postpone their retirement. In this prospect, without a concrete solution, European industry is condemned to lose the qualified workers needed to manufacture high-quality products, while the national assistance frameworks of the EU-27 will have to support retired workers, who will need to be kept active to balance the new demographic distribution of the population.</para>
<para>Through the Mutualism Framework based on the IEC-61499 platform of Daedalus, we answer the popular belief, currently promoted by many opponents of Industry 4.0, that automation wipes out jobs. Indeed, several experts have recently shown this thesis to be groundless, demonstrating the mutually virtuous coexistence of humans and machines interacting in industrial environments [27]. Even in advanced automated scenarios, where machine learning can support adaptation to variable and unpredictable situations, interaction with humans remains essential in reacting to contextual information (thus, machines need workers). At the same time, automation encompasses not only repetitive tasks, but also sophisticated, high-performance functionalities to which human senses and capacities are not well suited; moreover, machines can compensate for human knowledge gaps, providing actual support for junior workers (thus, workers can benefit from machines).</para>
<para>The deployment of these technologies may reduce the mental and physical strain on human operators, reducing the number of injuries related to manufacturing work. In the medium term, this will also improve shopfloor workers&#8217; perception of how their job affects their health (currently 40% report a negative influence [28]). This will be mirrored in society, improving the general perception of shopfloors and increasing the social acceptance of this profession.</para>
<para>At the same time, the opportunity is to bring new skills to the role of the shopfloor worker, raising the social reputation of operators. The new types of task that operators will have to perform will create job opportunities at shopfloor level for more qualified profiles, such as technicians, increasing the appeal of these roles for younger workers. This is in line with the current FoF Roadmap 2020 goals of achieving sustainability and social acceptance in this sector and of strengthening the global position of EU manufacturing industry.</para>
<para>Finally, the implementation of Mutualism dynamically distributes tasks between operators and robots and coordinates their collaborative execution according to their respective strengths and weaknesses. This approach also offers new job opportunities to people with disabilities, as the automation can compensate for functional limitations, facilitating the inclusion of this community.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>The work hereby described was achieved within the EU-H2020 project DAEDALUS, which was funded by the European Union&#8217;s Horizon 2020 research and innovation programme, under grant agreement No. 723248.</para></section>
<section class="lev1" id="sec8-7">
<title>References</title>
<para>[1] Brown, A. S. &#8220;Worker-friendly robots&#8221;, Mechanical Engineering 136.9, pp. 10&#8211;11, September 2014.</para>
<para>[2] Spring et al. Product customisation and manufacturing strategy. Int. J. of Operations &amp; Production Management, 20(4), pp. 441&#8211;467, 2000.</para>
<para>[3] EFRA. Factories of the Future. Multi-annual roadmap for the contractual PPP under Horizon 2020, 2013.</para>
<para>[4] European Working Conditions Surveys, http://www.eurofound.europa.eu/surveys</para>
<para>[5] EUROSTAT: ec.europa.eu/eurostat/statistics-explained/index.php/Population_structure_and_ageing</para>
<para>[6] Buffington, J. The Future of Manufacturing: An End to Mass Production. In Frictionless Markets (pp. 49&#8211;65). Springer International Publishing, 2016.</para>
<para>[7] Sawaragi et al. Human-Robot collaboration: Technical issues from a viewpoint of human-centred automation. ISARC, pp. 388&#8211;393, 2006.</para>
<para>[8] MANufacturing through ergonoMic and safe Anthropocentric aDaptive workplacEs for context aware factories in EUROPE, FP7-2013-NMP- ICT-FOF(RTD), prj. ref.: 609073</para>
<para>[9] Kr&#252;ger et al. Cooperation of Human and Machines in Assembly Lines. Annals of the CIRP, 58/2, pp. 628&#8211;646, 2009.</para>
<para>[10] Duan et al. Application of the Assembly Skill Transfer System in an Actual Cellular Manufacturing System. IEEE Trans. Autom. Sci. &amp; Eng., 9(1), pp. 31&#8211;41, 2012.</para>
<para>[11] Ong et al. Augmented reality applications in manufacturing: a survey, Int. J. of Production Research, 46 (10), pp. 2707&#8211;2742, 2008.</para>
<para>[12] Tzafestas, S.: Concerning Human-Automation Symbiosis in the Society and the Nature. Int&#8217;l. J. of Factory Automation, Robotics and Soft Computing, 1(3), pp. 16&#8211;24, 2006.</para>
<para>[13] Ferreira, P., Doltsinis, S. and N. Lohse, Symbiotic Assembly Systems &#8211; A New Paradigm, Procedia CIRP, vol. 17, pp. 26&#8211;31, 2014, ISSN 2212-8271, http://dx.doi.org/10.1016/j.procir.2014.01.066</para>
<para>[14] Kr&#252;ger J. T., K. Lien, A. Verl. Cooperation of human and machines in assembly lines. CIRP Annals-Manufacturing Technology 58.2, 2009.</para>
<para>[15] Ferreira, P. and N. Lohse. Configuration Model for Evolvable Assembly Systems. in 4th CIRP Conference On Assembly Technologies, 2012.</para>
<para>[16] Brunner, S. G.; Steinmetz, F.; Belder, R. &amp; D&#246;mel, A. RAFCON: A Graphical Tool for Engineering Complex, Robotic Tasks. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.</para>
<para>[17] Design, Execution and Post-Mortem Analysis of Prolonged Autonomous Robot Operations. Accepted for R-AL. Final reference will follow.</para>
<para>[18] J. P. Holdren, &#8220;A National strategic plan for advanced manufacturing,&#8221; Report of the Interagency working group on Advanced Manufacturing IAM to the National Science and Technology Council NSTC Committee on Technology CoT, Executive Office of the President National Science and Technology Council, Washington, D.C., 2012.</para>
<para>[19] http://europa.eu/rapid/press-release_MEMO-14-193_en.htm</para>
<para>[20] Duguay et al. From mass production to flexible/agile production. Int. J.of Operations &amp; Production Management, 17(12), pp. 1183&#8211;1195, 1997.</para>
<para>[21] Kr&#252;ger, J., Lien, T. K., Verl, A., Cooperation of Human and Machines in Assembly Lines, Annals of the CIRP, 58/2, pp. 628&#8211;646, 2009.</para>
<para>[22] Sawaragi et al. Human-Robot collaboration: Technical issues from a viewpoint of human-centred automation. ISARC, pp. 388&#8211;393, 2006.</para>
<para>[23] Billings, Human-centered Aircraft Automation Philosophy, NASA Technical Memorandum 103885, NASA Ames Research Center, 1991.</para>
<para>[24] Eeva J&#228;rvenp&#228;&#228;, Pasi Luostarinen, Minna Lanz and Reijo Tuokko. Adaptation of Manufacturing Systems in Dynamic Environment Based on Capability Description Method, Manufacturing System, Dr. Faieza Abdul Aziz (Ed.), ISBN: 978-953-51-0530-5, 2012.</para>
<para>[25] Koren, Y., Heisel, U., Jovane, F., Moriwaki, T., Pritschow, G., Ulsoy, G., et al. Reconfigurable manufacturing systems. CIRP Annals: Manufacturing Technology, 48(2), pp. 527&#8211;540, 1999.</para>
<para>[26] Occupational Safety and Health Administration, https://osha.europa.eu.</para>
<para>[27] Autor, D. H. &#8220;Why are there still so many jobs? The history and future of workplace automation.&#8221; The Journal of Economic Perspectives 29.3, pp. 3&#8211;30, 2015.</para>
<para>[28] European Working Conditions Surveys, http://www.eurofound.europa.eu/surveys</para>
</section>
</chapter>
</part>
<part class="part" id="part02" label="II" xreflabel="II" role="PART">
<title>PART II</title>
<chapter class="chapter" id="ch09" label="9" xreflabel="9">
<title>Digital Models for Industrial Automation Platforms</title>
<para><emphasis role="strong">Nikos Kefalakis, Aikaterini Roukounaki and John Soldatos</emphasis></para>
<para>Kifisias 44 Ave., Marousi, GR15125, Greece</para>
<para>Email: nkef@ait.gr; arou@ait.gr; jsol@ait.gr</para>
<para>This chapter presents the role and uses of digital models in industrial automation applications of the Industry 4.0 era. Accordingly, it reviews a range of standard-based models for digital automation and their suitability for the tasks of plant modelling and configuration. Finally, the chapter introduces the digital models specified and used in the scope of the FAR-EDGE automation platform, towards supporting digital twins and system configuration use cases.</para>
<section class="lev1" id="sec9-1">
<title>9.1 Introduction</title>
<para>The digital modelling of the physical world is one of the core concepts of the digitization of industry and the fourth industrial revolution (Industry 4.0). It foresees the development of digital representations of physical-world objects and processes as a means of executing automation and control operations through digital functionalities (i.e. in the cyber rather than the physical world) [1]. The motivation stems from the fact that digital-world operations can be flexibly altered or even undone at low cost, which is impossible in the physical world. Hence, plant operators can experiment with operations over digital models, run what-if scenarios and ultimately derive optimal deployment configurations for automation operations, while also deploying them in the field through IT applications and tools, such as Industrial Internet of Things (IIoT) tools.</para>
<para>The concept of simulating and experimenting with automation operations in the realm of digital models of the plant is conveniently called &#8220;digital twin&#8221; and is a key enabler of the digitization of industrial processes. One of the automation platforms developed by the co-authors of this book, namely the FAR-EDGE edge computing platform, takes advantage of this concept based on the integration of digital models that represent the manufacturing shopfloor, as well as other physical and logical components of the automation platform such as edge gateways. In particular, the FAR-EDGE reference architecture and platform design specify a set of digital models as an integral element of the FAR-EDGE automation platform. In line with the Industry 4.0 &#8220;digital twin&#8221; concept, these digital models serve several complementary and important objectives:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Digital Simulation:</emphasis> FAR-EDGE implements digital twins in order to support digital simulations of the plant, including what-if scenarios. The latter can be evaluated and used to decide optimal configurations of automation elements.</para></listitem>
<listitem><para><emphasis role="strong">Semantic Interoperability:</emphasis> The FAR-EDGE digital models provide a uniform representation of the concepts and entities that comprise a FAR-EDGE deployment, which boosts semantic interoperability across diverse digital systems and physical devices. The use of a common data model provides a uniform vocabulary for describing various entities (e.g. sensors, CPS devices, SCADA (Supervisory Control and Data Acquisition) systems, production systems) across different applications in the automation, analytics and simulation domains of the platform.</para></listitem>
<listitem><para><emphasis role="strong">Information Exchange:</emphasis> The digital models in FAR-EDGE provide the means for exchanging information across different FAR-EDGE deployments. This is closely related to the above-listed semantic interoperability objective: By exchanging information in a common agreed format, two or more different FAR-EDGE deployments can become interoperable despite differences in their internal implementation details.</para></listitem>
<listitem><para><emphasis role="strong">System Configuration:</emphasis> The design and deployment of digital models is a key prerequisite for performing automation and control operations at IT (Information Technology) timescales. As part of the digitization of industrial processes, automation systems (i.e. Operational Technology (OT)) can be configured through IT systems and tools, which configure and update digital models that reflect the status of the physical world. In this way, automation and configuration operations are performed at the IT rather than the OT level. This requires a synchronization between the digital models and the status of the physical world, which is challenging to implement.</para></listitem>
</itemizedlist>
<para>This chapter provides insights into the digital models that are used to support information exchange, digital simulations, semantic interoperability and digital operations as part of the FAR-EDGE platform. It first analyses the rationale behind the specification and implementation of digital models in the FAR-EDGE platform, along with some of the main requirements that drive the specification of the models. These requirements include standards compliance, extensibility, high performance, as well as support of FAR-EDGE functionalities in the platform&#8217;s simulation domain. Following the review of these requirements for the FAR-EDGE digital models, we present a number of standards-based models (i.e. digital models and schemas specified as part of Industry 4.0 standards) and assess their suitability for supporting these requirements. As part of this chapter, we highlight the suitability of AutomationML and the standards-based schemas that it comprises (e.g. CAEX) for the simulation functionalities of the FAR-EDGE platform. Accordingly, we introduce a range of new proprietary models that can represent FAR-EDGE deployment configurations, based on concepts that span the platform&#8217;s edge computing model, automation and distributed data analytics. Specifically, we introduce new digital models that reflect concepts specified and used as part of the FAR-EDGE RA and the edge computing infrastructure of the project, such as edge gateways, data channels, measurement devices, as well as live data streams. These concepts can be blended with AutomationML and CAEX concepts as a means of putting plant models (e.g. CAEX instances) in the context of FAR-EDGE edge computing deployments.</para>
<para>Another important part of the chapter is the presentation of the linking between the above-listed models for edge computing configurations with the AutomationML-based models used for digital twins and digital simulations as part of the platform. The presented methodology is based on well- known concepts from the areas of data models linking and interoperability, including the concept of common repositories and registries for data models interoperability.</para>
<para>This chapter is structured as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Section 9.2 following the chapter&#8217;s introduction presents the rationale behind the use of digital models in Industry 4.0 in general and in FAR-EDGE in particular;</para></listitem>
<listitem><para>Section 9.3 reviews a set of standards-based digital models, which are commonly used for plant modelling and representation;</para></listitem>
<listitem><para>Section 9.4 introduces the proprietary FAR-EDGE data models that are used for configuring the distributed data analytics functionalities of the platform;</para></listitem>
<listitem><para>Section 9.5 presents a methodology for linking the FAR-EDGE proprietary data models with the standards-based data models used for digital twin representations in the platform&#8217;s simulation domain;</para></listitem>
<listitem><para>Section 9.6 is the final and concluding section of the chapter.</para></listitem>
</itemizedlist>
</section>
<section class="lev1" id="sec9-2">
<title>9.2 Scope and Use of Digital Models for Automation</title>
<section class="lev2" id="sec9-2-1">
<title>9.2.1 Scope of Digital Models</title>
<para>Industry 4.0 applications are based on Cyber-Physical Systems (CPS). One of their main characteristics is that they implement automation functionalities at the cyber layer of production systems. In this context, they also take advantage of digital models as a pool of schemas and functions that are used for the digital representation of the factory, including the synchronization of their digital properties with the status of the real-world entities that they represent. At a finer level of detail, the functionalities of digital models support the operations and features that are described in the following paragraphs.</para>
</section>
<section class="lev2" id="sec9-2-2">
<title>9.2.2 Factory and Plant Information Modelling</title>
<para>Primarily, digital models enable modelling of information at the factory and plant levels. In particular, the models provide a digital representation of the factory, which includes information about the elements (e.g. systems, devices and people) that comprise the plant. Automation and analytic applications can access the models in order to obtain information about the configuration of the plant, which they can use for implementing and validating automation processes. For example, in the FAR-EDGE project, digital models provide information about the hierarchical relationships between physical and logical entities in the scope of a FAR-EDGE deployment such as the sensors and devices that are associated with a given station.</para>
<para>In principle, a detailed and exhaustive description of the plant facilitates the implementation of many different processes and applications, including automation and analytics, as well as enterprise processes. However, developing and maintaining such an exhaustive representation is very challenging. Therefore, FAR-EDGE and other digital automation platforms model only a subset of the plant, a &#8220;mini-world&#8221; that pertains to the target automation and analytics use cases. Nevertheless, the digital modelling process can be open and extensible, in order to support a broader set of functionalities and use cases at a reasonable additional effort.</para>
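<para>The &#8220;mini-world&#8221; idea can be made concrete with a small sketch. The following Python fragment (all names are illustrative, not the actual FAR-EDGE model) captures only the hierarchy mentioned above &#8211; stations, their devices and the sensors associated with them &#8211; and nothing else:</para>

```python
# Minimal "mini-world" plant model: only the entities needed by the
# target use cases are represented (station -> device -> sensor).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Sensor:
    sensor_id: str
    unit: str


@dataclass
class Device:
    device_id: str
    sensors: List[Sensor] = field(default_factory=list)


@dataclass
class Station:
    station_id: str
    devices: List[Device] = field(default_factory=list)

    def all_sensors(self) -> List[Sensor]:
        """All sensors associated with this station, via its devices."""
        return [s for d in self.devices for s in d.sensors]


station = Station("welding-01", [
    Device("plc-1", [Sensor("temp-1", "degC"), Sensor("current-1", "A")]),
    Device("cam-1", [Sensor("frame-rate", "fps")]),
])
```

<para>Extending the mini-world to a new use case (e.g. adding an energy meter entity) means adding one more class and wiring it into the hierarchy, which is the &#8220;fair additional effort&#8221; referred to above.</para>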
</section>
<section class="lev2" id="sec9-2-3">
<title>9.2.3 Automation and Analytics Processes Modelling</title>
<para>Beyond a static representation of the structure of a factory and a plant, digital models should be able to represent the more dynamic automation and analytics processes, which form part of the plant&#8217;s dynamic behaviour. Such processes should be represented in terms of the elements entailed in each process, including their relationships and their evolution over time. Again, instead of an exhaustive modelling and representation of all possible workflows (e.g. through appropriate state machines), most automation platforms (including the FAR-EDGE automation platform) tend to focus on the processes that comprise a set of target use cases.</para>
</section>
<section class="lev2" id="sec9-2-4">
<title>9.2.4 Automation and Analytics Platforms Configuration</title>
<para>The modelling of automation and analytics processing provides also a basis for their configuration and reconfiguration, as a means of changing the automation or the analytics logic based on IT functions. For example, using the digital model for an analytics process, it is possible to configure the devices and other data sources entailed in analytics processes, as well as the analytics (e.g. machine learning) algorithms applied on their data. This can provide increased flexibility in configuring and deploying different automation and analytics workflows in a factory. It can also support the implementation of the popular &#8220;plug and produce&#8221; concept [2].</para>
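<para>As a sketch of this idea (the structure below is assumed for illustration, not the actual FAR-EDGE schema), an analytics process can be modelled as a small configuration object naming its data sources and the algorithm applied to them; reconfiguration is then a pure IT-level edit of the model:</para>

```python
# Hypothetical digital model of an analytics process: reconfiguring the
# process means producing a new configuration record, not touching OT.
from dataclasses import dataclass, replace
from typing import Tuple


@dataclass(frozen=True)
class AnalyticsConfig:
    process_id: str
    data_sources: Tuple[str, ...]   # e.g. sensor or device identifiers
    algorithm: str                  # e.g. "moving-average", "k-means"


cfg = AnalyticsConfig("energy-monitor",
                      ("meter-1", "meter-2"),
                      "moving-average")

# "Plug and produce" style change: a new data source is added and the
# algorithm swapped by editing only the digital model.
cfg2 = replace(cfg,
               data_sources=cfg.data_sources + ("meter-3",),
               algorithm="k-means")
```

<para>Keeping the configuration immutable (a new record per change) also leaves a natural audit trail of how the analytics workflow evolved.</para>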
</section>
<section class="lev2" id="sec9-2-5">
<title>9.2.5 Cyber and Physical Worlds Synchronization</title>
<para>As already outlined, the digital models can enable the configuration of automation functions and workflows at IT rather than OT times. In particular, automation operations can be configured at the IT layer of a digital automation platform, while being reflected in the physical world. The idea behind this configuration approach is that dealing with IT functions is much easier and more flexible than dealing with OT technology.</para>
<para>In order to provide this IT layer flexibility, there is a need to reflect changes in the IT layer to the OT layer (i.e. to the field) and vice versa. Hence, mechanisms for synchronizing the status of the physical world with its digital representation are needed based on digital models.</para>
<para>The synchronization between the physical and digital worlds can be also used to improve the results of digital simulations based on the so-called digital twins. In particular, it allows digital simulation applications to operate not only based on simulated data, but also with real data stemming from the synchronization of the physical and digital worlds. This can facilitate more accurate and realistic simulations, given that part of them can rely on real data that are seamlessly blended in the simulation application. The development of such realistic simulations is therefore based on dynamic access to plant information, which is illustrated in the following paragraph.</para>
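<para>The two synchronization directions described above can be sketched as follows (a minimal illustration with hypothetical method names, not the FAR-EDGE implementation): field readings update the twin&#8217;s properties, while IT-level changes are queued as pending commands until the OT layer confirms actuation.</para>

```python
# Sketch of cyber-physical synchronization around a digital twin.
class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.properties = {}      # last known physical state
        self.pending = []         # IT-side changes awaiting actuation

    def on_field_update(self, name, value):
        """Physical -> digital: reflect a field measurement."""
        self.properties[name] = value

    def request_change(self, name, value):
        """Digital -> physical: record a desired state change."""
        self.pending.append((name, value))

    def actuated(self, name, value):
        """Called once the OT layer confirms the change was applied."""
        self.pending = [(n, v) for n, v in self.pending if n != name]
        self.properties[name] = value


twin = DigitalTwin("conveyor-1")
twin.on_field_update("speed", 0.8)    # real data from the field
twin.request_change("speed", 1.2)     # decision from a what-if simulation
twin.actuated("speed", 1.2)           # OT layer confirms
```

<para>A simulation reading <literal>twin.properties</literal> thus operates on real field data blended with simulated scenarios, which is precisely the benefit claimed above.</para>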
</section>
<section class="lev2" id="sec9-2-6">
<title>9.2.6 Dynamic Access to Plant Information</title>
<para>Digital models facilitate dynamic access to the status of the plant, at the cyber layer of the digital factory automation systems. Access to such information is needed in order to identify the configuration of automation processes, as well as the status of production processes and KPIs (Key Performance Indicators). Hence, digital models can serve as a vehicle for representing dynamic up-to-date information about the field, in addition to static (or semistatic) metadata of the shopfloor. One prominent use of the dynamic access to plant information involves the use of real-life data in order to boost the performance and accuracy of digital simulations, as outlined in the previous paragraph.</para>
<para>It should be underlined that the digital models specify the schema used to model the structure of the plant information. In the scope of the digital platform&#8217;s operation, this schema is populated with instance data, which reflect the status of the plant at a given time instant. Hence, dynamic access to plant information is based on querying the instance of the plant database, which will follow the structure of the digital models. The concept of dynamic access to plant information and the importance of the synchronization between the digital models and the actual status of the plant is presented in <link linkend="F9-1">Figure <xref linkend="F9-1" remap="9.1"/></link>.</para>
<fig id="F9-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-1">Figure <xref linkend="F9-1" remap="9.1"/></link></label>
<caption><para>Digital Models and Dynamic Access to Plant Information.</para></caption>
<graphic xlink:href="graphics/ch09_fig001.jpg"/>
</fig>
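<para>The schema/instance distinction made above can be illustrated with a toy example (names are illustrative, not the actual FAR-EDGE model): the digital model fixes the structure of sensor records, the platform populates an instance store at runtime, and dynamic access is a query over those instances.</para>

```python
# The digital model defines the *schema*; dynamic access queries the
# *instance* data that follows it.
SCHEMA = {"Sensor": ["id", "station", "value", "timestamp"]}

# Instance data, reflecting the plant at a given time instant.
instances = [
    {"id": "temp-1", "station": "welding-01", "value": 231.5, "timestamp": 1001},
    {"id": "temp-2", "station": "paint-02",   "value": 24.0,  "timestamp": 1001},
]


def latest_values(station):
    """Dynamic access: current readings of one station's sensors."""
    return {row["id"]: row["value"]
            for row in instances
            if row["station"] == station}
```

<para>Keeping synchronization running (previous paragraph) is what makes the result of <literal>latest_values</literal> an up-to-date picture of the plant rather than stale metadata.</para>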
</section>
</section>
<section class="lev1" id="sec9-3">
<title>9.3 Review of Standards Based Digital Models</title>
<para>In the following paragraphs, we provide a review of representative standards-based schemas that can be used for digital modelling in the scope of digital automation platforms. The review is by no means exhaustive, yet it covers some of the most popular models and schemas. Moreover, it provides some insights in terms of the ability of these standards to support the requirements and functionalities illustrated in the previous paragraph. Readers can also consult similar works on reviewing digital models (e.g. [3]), including works that have performed a comparative evaluation of alternative models [4].</para>
<section class="lev2" id="sec9-3-1">
<title>9.3.1 Overview</title>
<para>For over a decade, various industrial standards have been developed that include information models for factory automation. Several standards come with a set of semantic definitions, which are typically used for modelling and exchanging data across systems and applications. These standards include, for example, the IEC 62264 standard, which complies with the mainstream ISA-95 standard for factory automation. IEC 62264 boosts interoperability and integration across different/heterogeneous enterprise and control systems. Likewise, ISA-88 for batch processes comes with IEC 61512, and IEC 62424 supports exchange of data between process control and production tools, while IEC 62714 covers engineering data of industrial automation systems [5]. Several of these standards are referenced and/or used by the RAMI 4.0 reference model [6], which is driving the design and development of several digital automation platforms. In the following paragraphs, we briefly describe some of these standards.</para>
</section>
<section class="lev2" id="sec9-3-2">
<title>9.3.2 IEC 62264</title>
<para>IEC 62264 is a standard for enterprise-control system integration. It is based on the ANSI/ISA-95 hierarchy for automation systems. With reference to this hierarchy, the standard covers the domain of manufacturing operations management (i.e. Level 3) and the interface content and transactions within Level 3 and between Level 3 and Level 4. Hence, the standard is primarily focused on the integration between enterprise and manufacturing operations management, rather than on pure control (i.e. Levels 1 and 2) operations.</para>
<para>In practice, the standard defines activity models, function models and object models in the MOM (Manufacturing Operations Management) domain. The models are hierarchical and describe the MOM domain and its activities, as well as the interface content and associated transactions within the MOM level and between the MOM and Enterprise levels. Examples of entities that are modelled by the standard include materials, equipment, personnel, product definition, process segments, production schedules, product capabilities, production performance and more.</para>
<para>Note that IEC 62264 is among the standards referenced and used in RAMI 4.0. Due to its compliance with RAMI 4.0, IEC 62264 meets several of the requirements listed in the previous paragraph. However, it is focused on Level 3 and Level 4 entities of the ISA-95 hierarchy and hence is less appropriate for use cases involving Levels 1 and 2.</para>
</section>
<section class="lev2" id="sec9-3-3">
<title>9.3.3 IEC 62769 (FDI)</title>
<para>The Information Model that is associated with the IEC 62769 (FDI) standard aims at reflecting the topology of the automation system. It therefore represents the devices of the automation system, as well as the communication networks that connect them. It includes attributes that are appropriate for modelling the main properties, relationships, operations of networks and field devices.</para>
<para>IEC 62769 is appropriate for modelling the field layer of the factory. This makes it suitable for several automation use cases, yet it does not provide the means for mapping and modelling some of the edge computing concepts of the FAR-EDGE automation platform (e.g. edge gateways and ledger services).</para>
</section>
<section class="lev2" id="sec9-3-4">
<title>9.3.4 IEC 62453 (FDT)</title>
<para>IEC 62453 Field Device Tool (FDT) is an open standard for the integration of industrial automation networks and devices. It provides a standardized software interface that enables intelligent field devices to be integrated seamlessly into automation applications, from the commissioning tool to the control system. FDT supports the coupling of software modules which have been implemented as representatives of field devices and are therefore able to provide and/or exchange information. However, IEC 62453 is limited to the modelling of networks and devices and hence is not suitable for plant-wide modelling.</para>
</section>
<section class="lev2" id="sec9-3-5">
<title>9.3.5 IEC 61512 (Batch Control)</title>
<para>IEC 61512 &#8211; Batch control is also referenced by RAMI 4.0. It models batch production records, including information about production of batches or elements of batch production. IEC 61512 focuses on batch manufacturing and production processes.</para>
</section>
<section class="lev2" id="sec9-3-6">
<title>9.3.6 IEC 62424 (CAEX)</title>
<para>IEC 62424 (CAEX) provides the means for modelling a plant hierarchically in an XML format (i.e. CAEX is provided as an XML Schema through an XML Schema language (XSD) file). CAEX abstracts a plant by considering it as a set of interconnected modules or components. CAEX models and stores such modules in an object-oriented way, based on object-oriented concepts such as classes, encapsulation, class libraries, instances, instance hierarchies, inheritance, relations, attributes and interfaces.</para>
<para>CAEX separates vendor-independent information (e.g. objects, attributes, interfaces, hierarchies, references, libraries, classes) from application-dependent information such as certain attribute names, specific classes or object catalogues. CAEX is appropriate for storing static metadata, but it is not designed to hold dynamic information. Nevertheless, it can be extended with special classes that hold dynamic information and the behaviour of the various modules.</para>
<para>IEC 62424 provides a sound basis for modelling the metadata of a plant, which is one of the requirements for the digital models of an automation platform. However, there is also a need to support dynamic information, which calls for extensions to this model. CAEX is part of AutomationML-compliant modelling and, as such, is used in the scope of FAR-EDGE to support the digital twins that underpin the simulation functionalities of the platform.</para>
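<para>As an illustration of the CAEX approach, the following Python sketch builds a small CAEX-style instance hierarchy using only the standard library. The element names (CAEXFile, InstanceHierarchy, InternalElement, Attribute) follow CAEX conventions, but the plant structure and attribute values are invented for the example, and the output is not validated against the official XSD.</para>

```python
import xml.etree.ElementTree as ET

# Build a minimal CAEX-style instance hierarchy: a plant abstracted as a
# set of nested modules, stored in an object-oriented fashion.
root = ET.Element("CAEXFile", FileName="plant.aml")
hierarchy = ET.SubElement(root, "InstanceHierarchy", Name="DemoPlant")

line = ET.SubElement(hierarchy, "InternalElement", Name="AssemblyLine1")
station = ET.SubElement(line, "InternalElement", Name="WeldingStation")

# Static metadata is attached to a module as a typed attribute.
attr = ET.SubElement(station, "Attribute", Name="CycleTime",
                     AttributeDataType="xs:double")
ET.SubElement(attr, "Value").text = "42.5"

print(ET.tostring(root, encoding="unicode"))
```

<para>Nesting InternalElement objects inside one another is what gives CAEX its hierarchical, component-oriented view of the plant.</para>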
</section>
<section class="lev2" id="sec9-3-7">
<title>9.3.7 Business to Manufacturing Markup Language (B2MML)</title>
<para>B2MML is an XML implementation of the ANSI/ISA-95 (Enterprise-Control System Integration) family of standards (ISA-95). As such, it is closely related to the above-listed IEC 62264 international standard, i.e. it provides a data representation that is fully compliant with the scope and semantics of IEC 62264. In practice, B2MML comprises a series of XML schemas, which are available as XML Schema Definition (XSD) files. Hence, B2MML supports the modelling of a large number of different entities, which represent MOM objects and transactions, as well as other interfaces between the enterprise and control layers.</para>
<para>B2MML is an excellent choice for supporting the integration of business systems (such as Enterprise Resource Planning (ERP) and Supply Chain Management (SCM) systems) with control systems (e.g. SCADA, DCS) and manufacturing execution systems (MES). This holds not only for B2MML-compliant business systems (i.e. systems that directly support the interpretation of B2MML messages), but also for legacy ERP/SCM systems, which can be made B2MML-compliant through middleware adapters that transform B2MML to their own semantics and vice versa.</para>
<para>The language can be considered RAMI 4.0-compliant, given that RAMI 4.0 uses ISA-95 concepts and references of relevant standards (such as IEC 62264). It is also important that the B2MML schemas provide support for the entire ISA-95 standard, rather than a subset of it.</para>
<para>B2MML is characterized by compatibility with enterprise systems (e.g. ERP and PLM systems), which makes it appropriate for supporting information modelling in use cases involving enterprise-level entities and concepts. Furthermore, B2MML can boost compatibility with a wide range of available ISA-95-compliant systems, while at the same time adhering to information models referenced in RAMI 4.0. Therefore, B2MML could be exploited in the scope of use cases involving enterprise systems and entities, provided that it is used in conjunction with additional models supporting concepts and entities for the configuration of an automation platform (e.g. edge nodes, edge gateways and edge processes in the scope of an edge computing platform like FAR-EDGE).</para>
</section>
<section class="lev2" id="sec9-3-8">
<title>9.3.8 AutomationML</title>
<para>AutomationML is an XML-based open standard, which provides the means for describing the components of a complex production environment. It has a hierarchical structure and is commonly used to facilitate the consistent exchange and editing of plant layout data across heterogeneous engineering tools. AutomationML takes advantage of existing standards such as PLCopen XML and COLLADA. It provides the means for modelling plant information and automation processes based on objects structured in a hierarchical fashion, including geometry, model logic, behaviour sequences and I/O connections. AutomationML comprises different standards that support modelling for various entities and concerns. In particular, it relies on the following standards:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>CAEX (IEC 62424), in order to model topological information.</para></listitem>
<listitem><para>COLLADA (ISO/PAS 17506) of the Khronos Group in order to model and implement geometry concepts and 3D information as well as Kinematics (i.e. the geometry of motion). Support for Kinematics ensures also the modelling of connections and dependencies among objects as part of motion planning.</para></listitem>
<listitem><para>PLCopen XML (IEC 61131) in order to model sequences of actions, internal behaviour of objects and I/O connections.</para></listitem>
</itemizedlist>
<para>Note that AutomationML and the three above-listed standards are also in the list of Industry 4.0 standards that are directly connected to RAMI 4.0 in order to boost semantic interoperability.</para>
<para>AutomationML satisfies several of the digital modelling requirements in FAR-EDGE and is appropriate for supporting digital simulations based on the development of a &#8220;digital twin&#8221; of the plant. It is therefore the standards-based digital model that supports plant modelling in the FAR-EDGE simulation domain. Moreover, the proprietary digital models that are used in FAR-EDGE can be linked to instances of AutomationML/CAEX digital models, towards ensuring uniqueness of the referenced entities and bridging the diverse concepts that are captured by the two models. This is further discussed in Section 9.5.</para>
</section>
</section>
<section class="lev1" id="sec9-4">
<title>9.4 FAR-EDGE Digital Models Outline</title>
<section class="lev2" id="sec9-4-1">
<title>9.4.1 Scope of Digital Modelling in FAR-EDGE</title>
<para>In line with the uses of digital models that are described in Section 9.2, the FAR-EDGE digital automation platform leverages digital modelling for a dual purpose:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Data persistence and plant modelling for digital simulation and digital twins</emphasis>. This is the reason why FAR-EDGE uses digital models for its simulation functionalities. The respective digital models are based on AutomationML, which has been described in the previous section.</para></listitem>
<listitem><para><emphasis role="strong">Configuration of the FAR-EDGE platform, including configuration of its automation and analytics functionalities</emphasis>. In particular, the FAR-EDGE platform holds digital representations of the logical and physical configurations of FAR-EDGE components such as data sources, devices and edge gateways. The platform uses these configurations to set up its analytics and automation functionalities, through operations such as the definition and configuration of data sources, the association of these data sources to gateways and more. Specifically, the platform offers APIs and tools for manipulating these data models towards configuring the platform. The respective data models are proprietary and complement the use of AutomationML in the simulation domain. They come with an open-source implementation of functionalities for their management, which is part of the FAR-EDGE platform. The data modelling entities used for configuring data analytics in FAR-EDGE are outlined in the following paragraphs.</para></listitem>
</itemizedlist>
</section>
<section class="lev2" id="sec9-4-2">
<title>9.4.2 Main Entities of Digital Models for Data Analytics</title>
<para>The proprietary FAR-EDGE data models that are used for configuring distributed data analytics functionalities represent factory data and metadata, along with the analytics functions and workflows that process them.</para>
<para><emphasis role="strong">Factory Data and Metadata</emphasis></para>
<para>The representation of factory data and metadata is based on the following entities:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Data Source Definition (DSD):</emphasis> This defines the properties of a data source in the shopfloor, such as a data stream from a sensor or an automation device.</para></listitem>
<listitem><para><emphasis role="strong">Data Interface Specification (DI):</emphasis> The DI is associated with a data source and provides the information needed to connect to it and access its data, including details like network protocol, port and network address.</para></listitem>
<listitem><para><emphasis role="strong">Data Kind (DK):</emphasis> This specifies the semantics of the data of the data source, which provides flexibility in modelling different types of data. The DK is an XML specification and hence it can be used to define virtually any type of data in an open and extensible way.</para></listitem>
<listitem><para><emphasis role="strong">Data Source Manifest (DSM):</emphasis> A DSM specifies a specific instance of a data source in line with its DSD, DI and DK specifications. Multiple manifests (i.e. DSMs) are therefore used to represent the data sources that are available in the factory in the scope of the FAR-EDGE automation platform.</para></listitem>
<listitem><para><emphasis role="strong">Data Consumer Manifest (DCM):</emphasis> This models an instance of a data consumer, i.e. any application that accesses a data source.</para></listitem>
<listitem><para><emphasis role="strong">Data Channel Descriptor (DCD):</emphasis> A DCD models the association between an instance of a consumer and an instance of a data source. It is useful to keep track of the established connections and associations between data sources and data consumers.</para></listitem>
<listitem><para><emphasis role="strong">LiveDataSet:</emphasis> This entity models and represents the actual dataset that stems from an instance of a data source that is represented through a DSM. Hence, it references a DSM, which drives the specification of the types of the attributes of the LiveDataSet in line with the DK. A LiveDataSet is associated with a timestamp and keeps track of the location of the data source in case the latter is associated with a mobile (rather than a stationary) edge node; hence, it has a location attribute as well. In principle, a LiveDataSet comprises a set of name&#8211;value pairs, which adhere to different data types in line with the DK of the DSM.</para></listitem>
<listitem><para><emphasis role="strong">Edge Gateway:</emphasis> This entity models an edge gateway of a FAR-EDGE edge computing deployment. In the scope of a FAR-EDGE deployment, data sources are associated with an edge gateway. This usually implies not only a logical association, but also a physical association, i.e. an edge gateway is deployed at a station and manages data sources in close physical proximity to the station.</para></listitem>
</itemizedlist>
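<para>To make the relationships between the above entities concrete, the following Python sketch models a subset of them as plain dataclasses. The entity names mirror the chapter (DSD, DSM, LiveDataSet, edge gateway), but the concrete fields are illustrative assumptions, not the actual FAR-EDGE XML schemas (which are implemented in Java/JAXB).</para>

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class DataSourceDefinition:          # DSD: what kind of source this is
    dsd_id: str
    data_interface: Dict[str, Any]   # DI: protocol, address, port, ...
    data_kind: str                   # DK: reference to a data-kind spec

@dataclass
class DataSourceManifest:            # DSM: one concrete source instance
    dsm_id: str
    definition: DataSourceDefinition

@dataclass
class LiveDataSet:                   # actual data emitted by a DSM
    dsm_id: str
    timestamp: float
    location: str                    # relevant for mobile edge nodes
    values: Dict[str, Any]           # name-value pairs typed by the DK

@dataclass
class EdgeGateway:                   # manages physically proximate sources
    gateway_id: str
    data_sources: List[DataSourceManifest] = field(default_factory=list)

# Illustrative instances (all identifiers and values invented):
dsd = DataSourceDefinition("temp-sensor",
                           {"protocol": "MQTT", "port": 1883},
                           "Temperature")
dsm = DataSourceManifest("sensor-01", dsd)
gw = EdgeGateway("station-1-gw", [dsm])
sample = LiveDataSet("sensor-01", 1528880400.0, "station-1",
                     {"celsius": 21.4})
print(gw.gateway_id, sample.values)
```

<para>The containment of DSMs inside an edge gateway reflects the logical and physical association discussed above: the gateway manages the sources deployed near its station.</para>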
<para>Based on the above entities, it is possible to represent the different data sources of a digital shopfloor in a modular, dynamic and extensible way. This is based on a repository (i.e. registry) of data sources and their manifests, which keeps track of the various data sources that register to it. The FAR-EDGE platform includes such a registry, which provides dynamicity in creating, registering and using data sources in the industrial plant.</para>
<para><emphasis role="strong">Factory Data Analytics Metadata</emphasis></para>
<para>In order to facilitate the management and configuration of analytics functions and workflows over the various data sources, the FAR-EDGE digital models specify a number of analytics-related entities. In particular:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Analytics Processor Definition (APD):</emphasis> This specifies a processing function to be applied on one or more data sources. In the scope of FAR-EDGE, three types of processing functions are defined: functions that pre-process the data of a data source (i.e. Pre-Processors), functions that store the outcomes of the processing (i.e. Store Processors) and functions that analyse the data from the data sources (i.e. Analytics Processors). These three types of processors can be combined in various configurations over the data sources in order to define different analytics workflows.</para></listitem>
<listitem><para><emphasis role="strong">Analytics Processor Manifest (APM):</emphasis> This represents an instance of a processor that is defined through the APD. The instance specifies the type of the processor and its actual logic through a link to a programming function. In the case of FAR-EDGE, the latter is a class/programme implemented in the Java language.</para></listitem>
<listitem><para><emphasis role="strong">Analytics Orchestrator Manifest (AM):</emphasis> An AM represents an entire analytics workflow. It defines a combination of analytics processor instances (i.e. of APMs) that implements a distributed data analytics task. The latter is likely to span multiple edge gateways and to operate over their data sources.</para></listitem>
</itemizedlist>
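<para>The combination of the three processor types into a workflow can be sketched as follows in Python. The function names and the simple call chaining are illustrative assumptions; in FAR-EDGE the processors are Java classes orchestrated by the analytics engine.</para>

```python
def pre_processor(samples):
    # Pre-Processor: clean the raw data (here: drop missing readings).
    return [s for s in samples if s is not None]

def analytics_processor(samples):
    # Analytics Processor: compute a result (here: a simple mean).
    return sum(samples) / len(samples)

store = []                           # destination of the Store Processor
def store_processor(result):
    # Store Processor: persist the outcome of the processing.
    store.append(result)

def run_workflow(samples):
    # The AM: a combination of processor instances forming one task.
    store_processor(analytics_processor(pre_processor(samples)))

run_workflow([21.0, None, 23.0])
print(store)  # [22.0]
```

<para>In a distributed deployment, each stage could run on a different edge gateway, with the orchestrator wiring the stages over the available data sources.</para>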
</section>
<section class="lev2" id="sec9-4-3">
<title>9.4.3 Hierarchical Structure</title>
<para>The FAR-EDGE Digital Models for distributed data analytics follow a hierarchical structure, which defines the different relationships between the various entities. For example, an edge gateway comprises multiple data source manifests, each of which is associated with a data source definition. Likewise, LiveDataSets are associated with instances of data sources, i.e. data source manifests. As an example, <link linkend="F9-2">Figure <xref linkend="F9-2" remap="9.2"/></link> illustrates a snapshot of the FAR-EDGE digital models structure, which shows the association of each edge gateway with data source manifests and data analytics manifests. A more detailed presentation of the hierarchical structure of our data models is beyond the scope of this chapter. Interested readers can consult directly our XML schemas, which are part of our open-source implementation of the FAR-EDGE digital models repository, itself an integral part of the FAR-EDGE platform.</para>
<fig id="F9-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-2">Figure <xref linkend="F9-2" remap="9.2"/></link></label>
<caption><para>Snapshot of the FAR-EDGE Digital Models Structure.</para></caption>
<graphic xlink:href="graphics/ch09_fig002.jpg"/>
</fig>
</section>
<section class="lev2" id="sec9-4-4">
<title>9.4.4 Model Repository Open Source Implementation</title>
<para>As part of the open-source implementation of the FAR-EDGE automation platform, we have implemented a data models repository, which supports the entities outlined in the previous paragraphs, including the management of data kinds, data interfaces, data source definitions and analytics processor definitions. The repository implements create, update, delete, get and discover functionalities, which are defined as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Create:</emphasis> This operation provides the means of creating an instance of the entity.</para></listitem>
<listitem><para><emphasis role="strong">Update:</emphasis> This allows updating an existing instance of the entity.</para></listitem>
<listitem><para><emphasis role="strong">Delete:</emphasis> This permits the deletion of an instance from the repository.</para></listitem>
<listitem><para><emphasis role="strong">Get:</emphasis> This fetches an instance of an entity based on its unique identifier.</para></listitem>
<listitem><para><emphasis role="strong">Discover:</emphasis> This helps model users to dynamically discover instances of one or more entities subject to given criteria.</para></listitem>
</itemizedlist>
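<para>The five operations can be sketched with a minimal in-memory repository. The real FAR-EDGE repository persists XML documents that conform to the project schemas; this dictionary-backed Python version only illustrates the operation semantics.</para>

```python
import uuid

class ModelRepository:
    """Illustrative in-memory repository with the five operations."""

    def __init__(self):
        self._store = {}

    def create(self, entity: dict) -> str:
        # Create: store a new instance and return its unique identifier.
        entity_id = str(uuid.uuid4())
        self._store[entity_id] = dict(entity)
        return entity_id

    def update(self, entity_id: str, changes: dict) -> None:
        # Update: modify an existing instance.
        self._store[entity_id].update(changes)

    def delete(self, entity_id: str) -> None:
        # Delete: remove an instance from the repository.
        del self._store[entity_id]

    def get(self, entity_id: str) -> dict:
        # Get: fetch an instance by its unique identifier.
        return self._store[entity_id]

    def discover(self, **criteria) -> list:
        # Discover: find instances matching the given criteria.
        return [e for e in self._store.values()
                if all(e.get(k) == v for k, v in criteria.items())]

repo = ModelRepository()
eid = repo.create({"type": "DataKind", "name": "Temperature"})
repo.update(eid, {"unit": "Celsius"})
print(repo.get(eid)["unit"])   # Celsius
print(repo.discover(type="DataKind"))
```
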
<para>The FAR-EDGE digital models repository implementation is available on the GitHub of the project at: https://github.com/far-edge/DigitalModels. The implementation comprises all schemata (see far-edge.dm.schemata) along with relevant (&#8220;generated&#8221;) documentation in HTML (HyperText Markup Language) and PDF (Portable Document Format) formats. It also provides access to Java libraries, i.e. libraries annotated according to the JAXB (Java Architecture for XML Binding) framework in a proper Maven project (see far-edge.dm.commons). This open-source implementation can provide a basis for researchers and engineers who might opt to implement their own digital models based on a similar approach. At the same time, it provides a means for implementing, using or even extending the FAR-EDGE analytics framework.</para>
</section>
</section>
<section class="lev1" id="sec9-5">
<title>9.5 Simulation and Analytics Models Linking and Interoperability</title>
<para>The review of models in Section 9.3 justified the suitability of AutomationML for supporting the FAR-EDGE digital simulation functionalities. Furthermore, in Section 9.4, we introduced a digital model for representing and configuring the analytics functionalities of the FAR-EDGE platform. The use of a dedicated model for each of the two functional domains of the platform (i.e. analytics and simulation) provides flexibility to developers and deployers of analytics and simulation solutions, since they can use the model of their choice. Nevertheless, it could also create consistency and interoperability issues, especially in cases where functionalities and data from the two different domains need to be combined. To alleviate such problems, there is a need not only for linking entities in the two different models, so as to allow developers and deployers to access information for an entity in either model, but also for combining information from the two models when needed. The merits of such linking become evident when considering the following examples:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A digital simulation that needs to access information stemming from data analytics on real-life shopfloor data. For instance, a digital simulation may need to access maintenance-related parameters of a piece of equipment, following proper data analytics over sensor data (e.g. analytics over vibration or ultrasound data for a machine). To this end, the machine representation in the simulation model (e.g. AutomationML) needs to be linked to the corresponding representation in the data model used for the distributed data analytics of the platform.</para></listitem>
<listitem><para>Another digital simulation application that needs to analyse data sources using the distributed data analytics engine. In such a case, the simulation application needs to convey to the analytics engine the data sources to be used. To this end, there is a need for linking the representations of devices and data sources in the simulation domain, with the corresponding representations of the very same devices in the analytics domain.</para></listitem>
</itemizedlist>
<para>In order to realize this linking, the FAR-EDGE data models include placeholders for data linking entities, i.e. for linking two representations of the same object/entity in different domains. In particular, both DSMs and logical entities in the simulation domain are linked based on a Universally Unique IDentifier (UUID). DSMs are assigned a UUID in the analytics domain whenever they are created and introduced to the system. Likewise, simulation applications assign a UUID to the main entities entailed in the simulation. The linking and harmonization of these UUIDs provide the means for linking the entities of the two models.</para>
<para>This linking concept resembles that of a Common Interoperability Registry (CIR), which is very commonly used in O&amp;M (Operations and Maintenance). Such a registry is intended to provide a &#8220;Yellow-Pages&#8221; lookup for all systems. It facilitates locating an object in any of the systems where it is registered, as soon as it is referenced with its UUID. Hence, different systems and models that have different identifiers for the very same entity or object are glued together and are able to talk &#8220;online&#8221;. The main vehicle for this gluing is the specification and use of globally unique identifiers, which are linked to &#8220;local&#8221; object identifiers, i.e. identifiers pertaining to each one of the models.</para>
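<para>A minimal CIR can be sketched as follows: a global UUID is mapped to the local identifiers that each domain model uses for the same entity. The two-level dictionary layout and the example identifiers are illustrative assumptions, not the FAR-EDGE implementation.</para>

```python
import uuid

class InteroperabilityRegistry:
    """Yellow-Pages lookup: global UUID -> per-domain local identifiers."""

    def __init__(self):
        self._global_to_local = {}

    def register(self) -> str:
        # Mint a new global identifier for an entity.
        gid = str(uuid.uuid4())
        self._global_to_local[gid] = {}
        return gid

    def link(self, gid: str, domain: str, local_id: str) -> None:
        # Glue a domain-local identifier to the global one.
        self._global_to_local[gid][domain] = local_id

    def lookup(self, gid: str, domain: str) -> str:
        # Resolve the local identifier used by a given domain.
        return self._global_to_local[gid][domain]

registry = InteroperabilityRegistry()
machine = registry.register()
registry.link(machine, "simulation", "aml:InternalElement/Press01")
registry.link(machine, "analytics", "dsm-0042")

# A simulation application can now resolve the analytics-side identifier:
print(registry.lookup(machine, "analytics"))  # dsm-0042
```
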
</section>
<section class="lev1" id="sec9-6">
<title>9.6 Conclusions</title>
<para>This chapter has analysed the rationale behind the specification and integration of digital models in emerging digital automation platforms, which included a discussion of the main requirements that drive any relevant digital modelling effort. Moreover, it has presented a range of standards-based digital models, notably models that are used for semantic interoperability and information exchange in Industry 4.0 systems and applications. Following this review, it has illustrated why AutomationML is suitable for supporting the digital simulation functionalities of the FAR-EDGE platform.</para>
<para>The chapter has also introduced a proprietary model for representing and configuring the analytics part of the platform. This model provides the means for modelling and representing data sources and analytics workflows based on appropriate manifests. The respective models are implemented and persisted in a models repository, which is provided as a set of schemas and open source libraries as part of the FAR-EDGE digital automation platform. Hence, they can serve as a basis for using the FAR-EDGE digital models in analytics scenarios, as well as for implementing similar digital modelling ideas.</para>
<para>As part of this chapter, we have also outlined how globally unique identifiers can be used to link different models that refer to the same entity or object in the factory based on their own local identifiers. The use of such global identifiers permits the association of entities referenced and used in both the AutomationML models of the FAR-EDGE simulation and the FAR-EDGE models of the analytics engine. As part of our implementation roadmap, we also plan to implement a Common Interoperability Registry (CIR) that will keep track of all global identifiers and their mapping to the local identifiers used by the digital models of the simulation, analytics and automation domains. This will strengthen the generality and versatility of our approach to digital model interoperability.</para>
<para>Overall, this chapter can be a good start for researchers and engineers who wish to start working with digital modelling and digital twins in Industry 4.0, as it presents the different use cases of digital models, along with the specification and implementation of a digital model for distributed data analytics in industrial plants.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>This work was carried out in the scope of the FAR-EDGE project (H2020-703094). The authors acknowledge help and contributions from all partners of the project.</para></section>
<section class="lev1" id="sec9-7">
<title>References</title>
<para>[1] H. Lasi, P. Fettke, H.-G. Kemper, T. Feld, M. Hoffmann, &#8216;Industry 4.0&#8217;, Business &amp; Information Systems Engineering, vol. 6, no. 4, pp. 239, 2014.</para>
<para>[2] G. Di Orio, A. Rocha, L. Ribeiro, J. Barata, &#8216;The prime semantic language: Plug and produce in standard-based manufacturing production systems&#8217;, The International Conference on Flexible Automation and Intelligent Manufacturing (FAIM 2015), 23&#8211;26 June 2015.</para>
<para>[3] W. Lepuschitz, A. Lobato-Jimenez, E. Axinia, M. Merdan, &#8216;A survey on standards and ontologies for process automation&#8217;, in Industrial Applications of Holonic and Multi-Agent Systems, Springer, pp. 22&#8211;32, 2015.</para>
<para>[4] R. S. Peres, M. Parreira-Rocha, A. D. Rocha, J. Barbosa, P. Leit&#227;o and J. Barata, &#8216;Selection of a data exchange format for industry 4.0 manufacturing systems,&#8217; IECON 2016 - 42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, pp. 5723&#8211;5728, doi: 10.1109/IECON.2016.7793750, 2016.</para>
<para>[5] &#8216;IEC 62714 engineering data exchange format for use in industrial automation systems engineering &#8211; automation markup language &#8211; parts 1 and 2&#8217;, International Electrotechnical Commission, 2014&#8211;2015.</para>
<para>[6] K. Schweichhart, &#8216;Reference Architectural Model Industrie 4.0 - An Introduction&#8217;, Deutsche Telekom, April 2016 online resource: https://ec.europa.eu/futurium/en/system/files/ged/a2-schweichhart-reference_architectural_model_industrie_4.0_rami_4.0.pdf</para>
</section>
</chapter>

<chapter class="chapter" id="ch010" label="10" xreflabel="10">
<title>Open Semantic Meta-model as a Cornerstone for the Design and Simulation of CPS-based Factories</title>
<para><emphasis role="strong">Jan Wehrstedt<superscript>1</superscript>, Diego Rovere<superscript>2</superscript>, Paolo Pedrazzoli<superscript>3</superscript>, Giovanni dal Maso<superscript>2</superscript>, Torben Meyer<superscript>4</superscript>, Veronika Brandstetter<superscript>1</superscript>, Michele Ciavotta<superscript>5</superscript></emphasis>, <emphasis role="strong">Marco Macchi<superscript>6</superscript> and Elisa Negri<superscript>6</superscript></emphasis></para>
<para><superscript>1</superscript> SIEMENS, Germany</para>
<para><superscript>2</superscript>TTS srl, Italy</para>
<para><superscript>3</superscript> Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland</para>
<para><superscript>4</superscript>VOLKSWAGEN, Germany</para>
<para><superscript>5</superscript> Universit&#224; degli Studi di Milano-Bicocca, Italy</para>
<para><superscript>6</superscript> Politecnico di Milano, Milan, Italy</para>
<para>E-mail: janchristoph.wehrstedt@siemens.com; rovere@ttsnetwork.com; pedrazzoli@ttsnetwork.com; dalmaso@ttsnetwork.com; torben.meyer@volkswagen.de; veronika.brandstetter@siemens.com; michele.ciavotta@unimib.it; marco.macchi@polimi.it; elisa.negri@polimi.it</para>
<para>A key enabler towards the fourth industrial revolution is the ability to maintain digital information all along the factory life cycle, despite changes in purpose and tools, allowing data to be enriched and used as needed for each specific phase (digital continuity). Indeed, a fundamental issue is the lack of common modelling languages and rigorous semantics for describing interactions &#8211; physical and computational &#8211; across heterogeneous tools and systems, towards effective simulation. This chapter describes the definition of a semantic meta-model meant to describe the functional characteristics of a CPS that are relevant to its design and simulation, and to its integration and coordination in an industrial production environment.</para>
<para>Actually, digital continuity needs to be empowered by a standardized, open semantic meta-model capable of fully describing the properties and functional characteristics of the CPS simulation models, as a key element to empower multidisciplinary simulation tools. The meta-model described here is able to provide a cross-tool representation of the different specific simulation models, defining both static information (3D models, kinematics chains, multi-body physics skeletons, etc.) and behavioural information (observable properties, inverse kinematics processors, motion-law computation functions, resource consumption logics, etc.).</para>
<section class="lev1" id="sec10-1">
<title>10.1 Introduction</title>
<para>In order to empower simulation methodologies and multidisciplinary tools for the design, engineering and management of CPS-based (Cyber Physical Systems) factories, we need to target the implementation of actual digital continuity, defined as the ability to maintain digital information all along the factory life cycle, despite changes in purpose and tools.</para>
<para>A Semantic Data Model for CPS representation is the foundation to achieve digital continuity, because it provides a unified description of the CPS-based simulation models that different simulation tools can rely on to operate.</para>
<para>Cyber Physical Systems are engineered systems that offer close interaction between cyber and physical components. CPS are defined as systems that integrate computation, networking and physical processes, or in other words, as systems in which physical and software components are deeply intertwined, each operating on different spatial and temporal scales, exhibiting multiple and distinct behavioural modalities, and interacting with each other in a myriad of ways that change with context [2, 3]. From this definition, it is clear that the number and complexity of the features that a CPS data model has to represent are very high, even when limited to the simulation field. Moreover, many of the aspects that contribute to defining a CPS for simulation (3D models, kinematics structures, dynamic behaviours, etc.) have already been investigated and formalized by many well-established data models that are, or can be considered to all intents and purposes, data exchange standards.</para>
<para>For these reasons, the goal of an effective CPS Semantic Data Model is to provide a gluing infrastructure that references existing interoperability standards and integrates them into a single extensible CPS definition. This approach reduces the burden on simulation software applications when accessing the new data structures, because the model mainly adds a meta-information level, whereas data for specific purposes remains available in standard formats.</para>
<para>AutomationML [1] is a standard technology that is based on this &#8220;integration philosophy&#8221; and defines the semantics of many elements of manufacturing systems, so that it is suitable to be adopted as the foundation of our CPS Semantic Data Model.</para>
</section>
<section class="lev1" id="sec10-2">
<title>10.2 Adoption of AutomationML Standard</title>
<para>The meta-data model needs a basis on which data can be saved and processed. The goal of AutomationML is to interconnect engineering tools across their different disciplines, e.g. mechanical plant engineering, electrical design, process engineering, process control engineering, HMI development, PLC programming, robot programming, etc. It is a standard focused on data exchange in the domain of automation engineering, defined in four whitepapers, each of which focuses on one of the following aspects:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Architecture and general requirements;</para></listitem>

<listitem><para>Role class libraries;</para></listitem>

<listitem><para>Geometry and kinematics;</para></listitem>

<listitem><para>Logic.</para></listitem>
</orderedlist>
<para>The data exchange format defined in these documents is the Automation Markup Language (AML), an XML schema-based data format that has been developed in order to support data exchange in a heterogeneous engineering-tool landscape for production.</para>
<para>Engineering information is stored following the Object-Oriented Paradigm, and physical and logical plant components are modelled as data objects encapsulating different aspects. An object may consist of other sub-objects and may itself be part of a larger composition or aggregation. Typical objects in plant automation comprise information on topology, geometry, kinematics and logic, whereas logic comprises sequencing, behaviour and control. Therefore, an important focus in the data exchange in engineering is the exchange of object-oriented data structures, geometry, kinematics and logic.</para>
<para>AML combines existing industry data formats that are designed for the storage and exchange of different aspects of engineering information. These data formats are used on an &#8220;as-is&#8221; basis within their own specifications and are not branched for AML needs. The core of AML is the top-level data format CAEX, which connects the different data formats (e.g. COLLADA for geometries or PLCOPEN-XML for logic); AML therefore has an inherently distributed document architecture. The goals and basic concepts of AutomationML are well aligned with our objectives, and it can be used as the base of the semantic meta data model; nonetheless, it is mainly a specification for data exchange and falls short when it comes to describing some operational aspects of simulation. For these reasons, we decided to extend AML, aiming at a more integrated connection between real/digital CPSs and simulation tools.</para>
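<para>To make the distributed document architecture concrete, the following minimal sketch builds a CAEX-like top-level file that only links to external geometry and logic documents. Element and attribute names (InstanceHierarchy, InternalElement, ExternalInterface, refURI) are simplified placeholders inspired by CAEX, not a schema-valid AML file.</para>

```python
# Illustrative sketch only: a CAEX-like top-level document that keeps
# geometry and logic in external files, mirroring AML's distributed
# architecture. Not schema-valid AML; names are simplified assumptions.
import xml.etree.ElementTree as ET

def build_aml_stub(file_name, geometry_href, logic_href):
    root = ET.Element("CAEXFile", FileName=file_name)
    hierarchy = ET.SubElement(root, "InstanceHierarchy", Name="Plant")
    cell = ET.SubElement(hierarchy, "InternalElement", Name="PaintingCell")
    # Geometry stays "as-is" in a COLLADA file and is only linked from here.
    ET.SubElement(cell, "ExternalInterface", Name="Geometry",
                  refURI=geometry_href)
    # Control logic likewise stays in a separate PLCopen XML document.
    ET.SubElement(cell, "ExternalInterface", Name="Logic",
                  refURI=logic_href)
    return ET.tostring(root, encoding="unicode")

doc = build_aml_stub("demo.aml", "cell.dae", "cell_logic.xml")
```

<para>The top-level file thus stays small: each discipline-specific format remains in its own document, referenced rather than embedded.</para>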
</section>
<section class="lev1" id="sec10-3">
<title>10.3 Meta Data Model Reference</title>
<para>This chapter documents the <emphasis>Meta Data Model</emphasis> developed. It is organized into eight sections that correspond to the eight main semantic areas of the data model:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para><emphasis>Base Model (</emphasis>&#167;<emphasis>10.3.1)</emphasis>: documents low-level utility classes that are used for the definition of high-level classes of the other sections.</para></listitem>

<listitem><para><emphasis>Assets and Behaviours (</emphasis>&#167;<emphasis>10.3.2)</emphasis>: documents classes and concepts related to the possibility of using external data sources to define additional resource models.</para></listitem>

<listitem><para><emphasis>Prototypes Model (</emphasis>&#167;<emphasis>10.3.3)</emphasis>: introduces the concepts of resource prototypes and resource instances that are at the basis of the model reuse paradigm and documents the classes defining the resource model prototypes.</para></listitem>

<listitem><para><emphasis>Resources Model (</emphasis>&#167;<emphasis>10.3.4)</emphasis>: documents all the classes related to representation of intelligent and passive resources constituting the model of a manufacturing plant.</para></listitem>

<listitem><para><emphasis>Device Model (</emphasis>&#167;<emphasis>10.3.5)</emphasis>: documents all the classes related to the representation of the data connection with the physical devices, including the definition of all the relevant I/O signals that are exchanged with the digital counterpart.</para></listitem>

<listitem><para><emphasis>Project Model (</emphasis>&#167;<emphasis>10.3.6)</emphasis>: documents all the classes that represent complex multi-disciplinary simulation projects and that enable simulation tools to share plant models and results.</para></listitem>

<listitem><para><emphasis>Product Routing Model (</emphasis>&#167;<emphasis>10.3.7)</emphasis>: documents all the classes related to the definition of a discrete product, of the manufacturing processes and of the production plans that should be used for plant simulation.</para></listitem>

<listitem><para><emphasis>Security Model (</emphasis>&#167;<emphasis>10.3.8)</emphasis>: documents the classes that are related to the access control and that define the authentication and authorization levels needed to work on a certain resource.</para></listitem>
</orderedlist>
<para>Each section is introduced with a diagram view (based on UML Class Diagram) that contains only the classes composing that specific data model area and their relationships with the main classes belonging to the other data model areas. Therefore, it is possible to find the same class representation (e.g. Property class) in many different diagrams, but each class is documented only once in the proper semantic section.</para>
<section class="lev2" id="sec10-3-1">
<title>10.3.1 Base Model</title>
<para>This section documents some low-level and general-purpose classes that are shared by other higher-level models described in the following sections. In particular, the classes related to the possibility of modelling generic, simple and composite properties of plant resources are documented (<link linkend="F10-1">Figure <xref linkend="F10-1" remap="10.1"/></link>).</para>
<section class="lev3" id="sec10-3-1-1">
<title>10.3.1.1 Property</title>
<para>Property is an abstract class derived from IdentifiedElement and represents the runtime properties of every resource and prototype. These properties carry relevant information that can be dynamically assigned and read by the simulation tools.</para>
</section>
<section class="lev3" id="sec10-3-1-2">
<title>10.3.1.2 CompositeProperty</title>
<para>CompositeProperty is a class derived from Property and represents a composition of different properties of a resource or prototype. This composition can model a flat list of simple properties of the resource, or even a multilevel structure of CompositeProperty instances. <link linkend="F10-2">Figure <xref linkend="F10-2" remap="10.2"/></link> shows a possible application of the base model classes to represent properties, meta information and documentation of a sample CPS. A resource (in this case, CPS4) can have many property instances associated with it; these properties can be simple (such as ToolLength, EnergyConsumption and TempCPS4) or composite, which allows creating structured properties (CurrProd).</para>
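<para>The property structure of the CPS4 example can be sketched as follows. This is a minimal illustration in Python, assuming dataclasses; SimpleProperty is a hypothetical concrete subclass introduced only for the sketch (the model defines Property as abstract), and all concrete values are invented.</para>

```python
# Sketch of the base-model property classes; names follow the text,
# SimpleProperty and all values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Property:                    # abstract in the model; concrete here
    id: str

@dataclass
class SimpleProperty(Property):    # hypothetical concrete subclass
    value: object = None

@dataclass
class CompositeProperty(Property):
    # A composite groups simple properties or further composites,
    # allowing multilevel structured properties.
    children: List[Property] = field(default_factory=list)

# Mirror of the Figure 10.2 example: CPS4 with three simple properties
# and one structured (composite) property, CurrProd.
cps4_props = [
    SimpleProperty("ToolLength", 120.5),
    SimpleProperty("EnergyConsumption", 3.2),
    SimpleProperty("TempCPS4", 41.0),
    CompositeProperty("CurrProd", [SimpleProperty("PartId", "P-001"),
                                   SimpleProperty("Progress", 0.7)]),
]
```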
</section>
</section>
<section class="lev2" id="sec10-3-2">
<title>10.3.2 Assets and Behaviours</title>
<para>The goal of the CPS Semantic Data Model is to provide a gluing infrastructure that references existing interoperability standards and integrates them into a single, extensible CPS definition. For this reason, the implemented model includes mechanisms to reference external data sources (<link linkend="F10-3">Figure <xref linkend="F10-3" remap="10.3"/></link>).</para>
<fig id="F10-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-1">Figure <xref linkend="F10-1" remap="10.1"/></link></label>
<caption><para>Class diagram of the base classes.</para></caption>
<graphic xlink:href="graphics/ch10_fig001.jpg"/>
</fig>
<section class="lev3" id="sec10-3-2-1">
<title>10.3.2.1 ExternalReference</title>
<para>ExternalReference is abstract and extends IdentifiedElement. This class represents a generic reference to a data source that is external to the <emphasis>Meta Data Model</emphasis> (e.g. a file stored on the Central Support Infrastructure (CSI, see <link linkend="ch013">Chapter <xref linkend="ch13" remap="13"/></link>)). The external source can contain any kind of binary data in a proprietary or interoperable format, depending on the type of resource. Using external references avoids re-defining data models and persistency formats for all the possible technical aspects related to a certain resource. The adopted approach is similar to the AutomationML one, where additional data is stored in external files using already existing standards (e.g. COLLADA for 3D models or PLCopen for PLC code).</para>
<fig id="F10-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-2">Figure <xref linkend="F10-2" remap="10.2"/></link></label>
<caption><para>Object diagram of the base model.</para></caption>
<graphic xlink:href="graphics/ch10_fig002.jpg"/>
</fig>
</section>
<section class="lev3" id="sec10-3-2-2">
<title>10.3.2.2 Asset</title>
<para>Asset is an extension of ExternalReference. This class represents a reference to an external model, expressed in an interoperable standard or binary format, that behavioural models may want to use. An important feature the CPS data model should support is the possibility to create links between runtime properties and properties defined inside assets, and between properties defined by two different assets. Assets can be considered the static data of the CPS because they represent self-contained models (e.g. 3D models) that are expected to change slowly.</para>
</section>
<section class="lev3" id="sec10-3-2-3">
<title>10.3.2.3 Behaviour</title>
<para>Behaviour is an extension of ExternalReference. This class represents a reference to runnable behavioural models that implement: (i) functionalities and operative logics of the physical systems and (ii) raw data stream aggregation and processing functions. Simulation tools should be able to use the former directly to improve the reliability of simulations, whereas the latter should run inside the CSI to update the runtime properties of the CPS model.</para>
<fig id="F10-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-3">Figure <xref linkend="F10-3" remap="10.3"/></link></label>
<caption><para>Class diagram for assets and behaviours.</para></caption>
<graphic xlink:href="graphics/ch10_fig003.jpg"/>
</fig>
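<para>The asset/behaviour split described above can be sketched as follows, assuming Python dataclasses. The callable attached to Behaviour is a hypothetical stand-in for a runnable model executed by the CSI; the <emphasis>csi://</emphasis> URIs and all field values are invented for illustration.</para>

```python
# Sketch of the ExternalReference hierarchy: Asset = static, slowly
# changing data; Behaviour = runnable processing logic. All names and
# values beyond the class names in the text are assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ExternalReference:           # abstract base: points outside the model
    id: str
    href: str                      # e.g. a file stored on the CSI

@dataclass
class Asset(ExternalReference):    # self-contained model, e.g. COLLADA 3D
    format: str = "binary"

@dataclass
class Behaviour(ExternalReference):
    # Raw-data aggregation function, run inside the CSI to refresh the
    # runtime properties of the CPS model.
    process: Callable[[List[float]], float] = max

geometry = Asset("Robot3D", "csi://models/robot1.dae", format="COLLADA")
temp_aggregator = Behaviour("TempAvg", "csi://behaviours/temp_avg",
                            process=lambda xs: sum(xs) / len(xs))
current_temp = temp_aggregator.process([40.0, 42.0, 41.0])
```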
</section>
</section>
<section class="lev2" id="sec10-3-3">
<title>10.3.3 Prototypes Model</title>
<para>This section is meant to describe the classes and concepts related to the definition of prototype resources that can be defined once and reused many times to create different plant models.</para>
<section class="lev3" id="sec10-3-3-1">
<title>10.3.3.1 Prototypes and instances</title>
<para>One of the most exploited features of manufacturing plants is that they are mostly composed of standard &#8220;off-the-shelf&#8221; components (machine tools, robots, etc.) assembled in a modular way. Thanks to this, and with a good organization of modules, it is possible to speed up the simulation set-up by reusing already developed models as much as possible. For this reason, simulation software tools usually adopt a mechanism based on libraries of models that can be used to assemble a full plant layout.</para>
<fig id="F10-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-4">Figure <xref linkend="F10-4" remap="10.4"/></link></label>
<caption><para>Prototype-resource object diagram.</para></caption>
<graphic xlink:href="graphics/ch10_fig004.jpg"/>
</fig>
<para>The data model aims at natively supporting the same efficient reuse approach, implementing classes that describe &#8220;ready to use&#8221; resources, called &#8220;prototypes&#8221;, and &#8220;instances&#8221; of such elements, which are the actual resources composing plants. The relationship between prototypes and instances is the same that exists in OOP between a class and an object (instance) of that class.</para>
<para>A prototype is a Resource model that is complete from a digital point of view, but it is still not applied in any plant model. It contains all the relevant information, assets and behaviours that simulation tools may want to use and, ideally, device manufacturers should directly provide Prototypes of their products ready to be assembled into production line models.</para>
<para>As shown in <link linkend="F10-4">Figure <xref linkend="F10-4" remap="10.4"/></link>, a Resource instance is a ResourcePrototype that has become a well-identified, specific resource of the manufacturing plant. Each instance shares with its originating Prototype the initial definition, but during life cycle, its model can diverge from the initial one because properties and even models change. Therefore, a single ResourcePrototype can be used to instantiate many specific resources that share the same original model.</para>
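<para>The instantiation and divergence mechanism can be sketched as follows, assuming that an instance starts from a deep copy of the prototype's definition. Class names follow the text; the <emphasis>instantiate</emphasis> helper and the fence example are illustrative assumptions, not part of the model specification.</para>

```python
# Sketch of prototype instantiation: many instances share one prototype,
# but each owns its data and can diverge over its life cycle.
import copy
import uuid
from dataclasses import dataclass, field

@dataclass
class ResourcePrototype:
    id: str                                  # UUID, unique per deployment
    properties: dict = field(default_factory=dict)

    def instantiate(self, instance_id):
        # Illustrative helper: the instance copies the prototype's
        # definition so later changes do not affect the prototype.
        return Resource(instance_id, self, copy.deepcopy(self.properties))

@dataclass
class Resource:
    id: str                                  # unique within the plant
    prototype: ResourcePrototype             # originating prototype
    properties: dict

fence_proto = ResourcePrototype(str(uuid.uuid4()), {"height_mm": 2000})
fence_a = fence_proto.instantiate("Fence1")
fence_b = fence_proto.instantiate("Fence2")
fence_b.properties["height_mm"] = 2200       # Fence2 diverges over time
```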
</section>
<section class="lev3" id="sec10-3-3-2">
<title>10.3.3.2 Prototypes and instances aggregation patterns</title>
<para>An important aspect that the <emphasis>Meta Data Model</emphasis> defines is the one related to the composition of resources into higher-level resources. This concept is at the basis of the creation of a hierarchy of resources within a plant, and it is an intrinsic way of organizing the description of a manufacturing system. Nevertheless, depending on each specific discipline, there are many ways resource instances (and therefore CPSs) can be grouped in a hierarchical structure. For example, spatial relationships define the topological hierarchy of a system, but from a safety grouping or electrical perspective, the same resources should be organized into different hierarchies (e.g. in the automotive domain, a cell safety group contains the robot and the surrounding fences, but from an electrical point of view, fences are not represented at all).</para>
<fig id="F10-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link></label>
<caption><para>Example of usage of main and secondary hierarchies.</para></caption>
<graphic xlink:href="graphics/ch10_fig005.jpg"/>
</fig>
<para>For this reason, <emphasis>Meta Data Model</emphasis> provides an aggregation system that is based on two levels:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>a first main hierarchy structure that is implemented in the two base classes for prototypes and instances, AbstractResourcePrototype and AbstractResource (<link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link>);</para></listitem>
<listitem><para>a second level, discipline-dependent, that is defined in parallel to the main one and that should be contained inside domain-specific Assets.</para></listitem>
</itemizedlist>
<para>The former hierarchy level is meant to provide a reference organization of the plant that enables both simulation tools and the CSI to access resources in a uniform way. In fact, the main hierarchy has the fundamental role of controlling the &#8220;visibility level&#8221; of resources, setting the lower access boundaries that constrain the resources to which the secondary (&#8220;parallel&#8221;) hierarchies should be associated.</para>
<para><link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link> shows an example of application of the main resources hierarchy and the secondary, domain-specific one. The main hierarchy organizes the two robots and the surrounding security fence with a natural logical grouping, since Robot1, Robot2 and SecurityFence belong physically to the same production cell, Painting Cell1. Even if this arrangement of the instances is functional from a management point of view, it does not directly correspond to the relationships defined in the electrical schema of the plant, for which the only meaningful resources are the two robots. Imagining that an electric connection exists between the two robots, a secondary, domain-specific schema (in this case, the domain is the electric design) needs to be defined separately. The Painting Cell1 resource acts as the aggregator of the two robot CPSs; therefore, it has &#8220;visibility&#8221; of the two resources of the lower level (Level 1), meaning that it knows they exist and how to reference them. For this reason, the electrical schema that connects Robot1 and Robot2 is defined at Level 2 as the &#8220;ElectricConnections&#8221; Asset associated with Painting Cell1. This asset, if needed, is allowed to make references to each electric schema of the lower-level resources.</para>
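<para>The visibility constraint of the Figure 10.5 arrangement can be sketched as follows. The plain-dictionary layout and the validation helper are illustrative assumptions; only the resource names come from the example in the text.</para>

```python
# Sketch of main vs. secondary hierarchies: the main hierarchy sets the
# visibility boundary, and a domain-specific asset at the aggregator
# level may only reference resources its owner aggregates.
main_hierarchy = {
    "PaintingCell1": ["Robot1", "Robot2", "SecurityFence"],
}

# Secondary (electrical) hierarchy, stored as an Asset of PaintingCell1;
# the fence is simply absent from this discipline's view.
electric_connections = {
    "owner": "PaintingCell1",
    "links": [("Robot1", "Robot2")],
}

def secondary_is_within_visibility(asset, hierarchy):
    # Every endpoint of a domain-specific link must be visible to the
    # asset's owner, i.e. aggregated by it in the main hierarchy.
    visible = set(hierarchy[asset["owner"]])
    return all(a in visible and b in visible for a, b in asset["links"])
```

<para>A link to a resource outside Painting Cell1's aggregation would fail this check, which is exactly the &#8220;lower access boundary&#8221; role of the main hierarchy.</para>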
</section>
<section class="lev3" id="sec10-3-3-3">
<title>10.3.3.3 AbstractResourcePrototype</title>
<para>AbstractResourcePrototype is abstract and extends IdentifiedElement (see <link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link>). It is the base class containing the attributes and relationships that are common both to prototypes of intelligent devices and to prototypes of simple passive resources or aggregations of prototypes. The main difference between prototype and instance classes is that prototypes do not have any reference to a Plant model, because they represent &#8220;not-applied&#8221; elements.</para>
<fig id="F10-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link></label>
<caption><para>Prototype Model class diagram.</para></caption>
<graphic xlink:href="graphics/ch10_fig006.jpg"/>
</fig>
<para>Each AbstractResourcePrototype can aggregate other AbstractResourcePrototypes (i.e. CPSPrototype and ResourcePrototype instances), and it can use its Assets and Behaviours to create higher-level complex models and functionalities starting from the lower-level ones.</para>
</section>
<section class="lev3" id="sec10-3-3-4">
<title>10.3.3.4 ResourcePrototype</title>
<para>ResourcePrototype extends AbstractResourcePrototype. This class represents the prototype of a generic passive resource of the plant that does not have any electronic equipment capable of sending/receiving data to/from its digital counterpart, or an aggregation of multiple resource prototypes. Examples of simple resources are cell protection fences, part positioning fixtures, etc.</para>
<para>Resource class is the direct instance class of a ResourcePrototype.</para>
<para>Since a ResourcePrototype must be identifiable within the libraries of prototypes, its ID attribute should be set to a valid UUID that should be unique within an overall framework deployment.</para>
</section>
<section class="lev3" id="sec10-3-3-5">
<title>10.3.3.5 CPSPrototype</title>
<para>CPSPrototype extends AbstractResourcePrototype. This class represents a prototype of an &#8220;intelligent&#8221; resource that is a resource equipped with an electronic device, capable of sending/receiving data to/from its digital counterpart. A CPSPrototype defines the way its derived instances should connect to the physical devices to maintain synchronization between shop floor and simulation models. CPS class is the direct instance class of a CPSPrototype. Since a CPSPrototype must be identifiable within the libraries of prototypes, its ID attribute should be set to a valid UUID that should be unique within an overall framework deployment.</para>
</section>
</section>
<section class="lev2" id="sec10-3-4">
<title>10.3.4 Resources Model</title>
<para>From the <emphasis>Meta Data Model</emphasis> perspective, each simulated plant can be represented as a collection of resources (machine tools, robots, handling systems, passive elements, etc.). Each resource can have a real physical counterpart to which it can be connected, or be defined purely from a product life cycle management point of view. This section of the model documents the classes that support the description of resource instances (see &#167;<emphasis role="strong">10.3.3.1 Prototypes and instances</emphasis> for the definition of the instance concept).</para>
<section class="lev3" id="sec10-3-4-1">
<title>10.3.4.1 AbstractResource</title>
<para>AbstractResource is abstract and extends IdentifiedElement (<link linkend="F10-7">Figure <xref linkend="F10-7" remap="10.7"/></link>). This class represents the generalization of the concept of a plant resource. As noted at the beginning of the section, a plant is a composition of intelligent devices (e.g. machines controlled by PLCs, IoT-ready sensors, etc.) and passive elements (fences, fixtures, etc.). Even if such resources are semantically different, from a simulation point of view they share a certain number of common properties. This fact justifies, from a class hierarchy perspective, the definition of a base class that the CPS and Resource classes extend.</para>
<fig id="F10-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-7">Figure <xref linkend="F10-7" remap="10.7"/></link></label>
<caption><para>Class diagram of resources section.</para></caption>
<graphic xlink:href="graphics/ch10_fig007.jpg"/>
</fig>
<para>An AbstractResource is identified by its ID, which must be unique within the same plant.</para>
<table-wrap position="float" id="T2">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr><td valign="top" align="left">Field</td><td valign="top" align="left">Type</td><td valign="top" align="left">Description</td></tr>
<tr><td valign="top" align="left">digitalOnly</td><td valign="top" align="left">Boolean</td><td valign="top" align="left"><para>This flag indicates whether this resource (be it a CPS or a simple resource) has a physical counterpart somewhere in the real plant or if it is purely a virtual element.</para>
<para>In the design phase of a green-field plant, all resources will have digitalOnly = true, while during the reconfiguration of a plant there will be a mixed condition, with some resources having the flag set to false (the ones existing in the running production lines) and others set to true (the ones that are going to be evaluated with simulation).</para></td></tr>
<tr><td valign="top" align="left">properties</td><td valign="top" align="left">Property[]</td><td valign="top" align="left"><para>Runtime properties of the resource.</para>
<para>Each property of the resource represents a relevant piece of information that can be shared (accessed and modified) by the simulation tools and by the functional and behavioural models.</para>
<para>The length of the array can be 0 to n.</para></td></tr>
<tr><td valign="top" align="left">resources</td><td valign="top" align="left">AbstractResource[]</td><td valign="top" align="left"><para>List of the resources that this instance aggregates. This field implements the hierarchy relationships among resources inside a plant. See &#167;<emphasis role="strong">Prototypes and instances aggregation patterns</emphasis>.</para>
<para>The length of the array can be 0 to n.</para></td></tr>
</tbody>
</table>
</table-wrap>
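<para>The fields listed above can be sketched as a Python dataclass. The traversal helper and the reconfiguration example are illustrative assumptions; properties are simplified to plain strings here, whereas the model defines a full Property hierarchy.</para>

```python
# Sketch of AbstractResource: digitalOnly flag, runtime properties and
# the main-hierarchy aggregation. Helper and example data are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AbstractResource:
    id: str                        # unique within the plant
    digitalOnly: bool = True       # True = purely virtual, no physical twin
    properties: List[str] = field(default_factory=list)  # simplified
    resources: List["AbstractResource"] = field(default_factory=list)

    def walk(self):
        # Depth-first visit of the main hierarchy rooted at this resource.
        yield self
        for child in self.resources:
            yield from child.walk()

# Reconfiguration case: the existing robot has a physical counterpart,
# while the candidate robot under evaluation is digital-only.
cell = AbstractResource("Cell1", resources=[
    AbstractResource("Robot1", digitalOnly=False),
    AbstractResource("RobotCandidate", digitalOnly=True),
])
ids = [r.id for r in cell.walk()]
```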
</section>
<section class="lev3" id="sec10-3-4-2">
<title>10.3.4.2 CPS</title>
<para>CPS extends AbstractResource. This class represents each &#8220;intelligent&#8221; device belonging to the plant equipped with an electronic device capable of sending/receiving data to/from its digital counterpart. A CPS can be connected with the physical device to maintain synchronization between shopfloor and simulation models. A CPS can be an aggregation of other CPSs and simple Resources, using its Assets and Behaviours to aggregate lower-level models and functionalities.</para>
<para>Each CPS must be identified by a string ID that must be unique within the plant.</para>
 
<table-wrap position="float" id="T3">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr><td valign="top" align="left">Field</td><td valign="top" align="left">Type</td><td valign="top" align="left">Description</td></tr>
<tr><td valign="top" align="left">cpsPrototype</td><td valign="top" align="left">CPSPrototype</td><td valign="top" align="left"><para>Each CPS can be an instantiation of a prototype CPS that has been defined in a library of models (usually stored in the CSI) that simulation tools can access and use. See &#167;<emphasis role="strong">10.3.3.1 Prototypes and instances</emphasis>.</para>
<para>This field can be null if the CPS does not derive from the instantiation of a prototype.</para></td></tr>
<tr><td valign="top" align="left">device</td><td valign="top" align="left">Device</td><td valign="top" align="left"><para>Represents the description of the device that ensures the data connection between the physical and digital contexts. This object characterizes all the I/Os that can be received and sent from and to the real equipment.</para>
<para>This field cannot be null, while it is possible that the device, even if fully defined, is not connected to real electronic equipment.</para></td></tr>
<tr><td valign="top" align="left">principal</td><td valign="top" align="left">Principal</td><td valign="top" align="left"><para>Each CPS has a related access level that is defined in compliance with the security data model described in &#167;<emphasis role="strong">10.3.8 Security Model</emphasis> and implemented by the CSI.</para></td></tr>
</tbody>
</table>
</table-wrap>
</section>
</section>
<section class="lev2" id="sec10-3-5">
<title>10.3.5 Device Model</title>
<para>This section contains the documentation of the classes needed to model the electronic equipment of the intelligent resources. This equipment is described in terms of the interfaces that can be used by the digital tools to open data streams with the real devices (<link linkend="F10-8">Figure <xref linkend="F10-8" remap="10.8"/></link>).</para>
<fig id="F10-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-8">Figure <xref linkend="F10-8" remap="10.8"/></link></label>
<caption><para>Class diagram of devices section.</para></caption>
<graphic xlink:href="graphics/ch10_fig008.jpg"/>
</fig>
<section class="lev3" id="sec10-3-5-1">
<title>10.3.5.1 Device</title>
<para>Device is an IdentifiedElement and represents a piece of electronic equipment in the physical layer that can be connected to its digital counterpart to send/receive data.</para>
 
<table-wrap position="float" id="T4">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr><td valign="top" align="left">Field</td><td valign="top" align="left">Type</td><td valign="top" align="left">Description</td></tr>
<tr><td valign="top" align="left">deviceConnection</td><td valign="top" align="left">DeviceConnection</td><td valign="top" align="left"><para>It contains all the details needed to open data streams with the physical device, e.g. for Ethernet-based connections it contains the IP address as well as information on ports, protocols and possibly the security parameters to apply to obtain access rights to the specific resource.</para>
<para>The field can be null.</para></td></tr>
<tr><td valign="top" align="left">deviceConfiguration</td><td valign="top" align="left">DeviceConfiguration</td><td valign="top" align="left">It contains details on the device hardware and software configuration (e.g. the version of the running PLC code). This object can be updated dynamically, based on data read from the physical device, to reflect the actual working condition of the device.<break/>The field can be null.</td></tr>
<tr><td valign="top" align="left">deviceIO</td><td valign="top" align="left">DeviceIO</td><td valign="top" align="left"><para>It contains the map of Input/Output data signals that can be exchanged with the physical device.</para>
<para>The field cannot be null. If no signal can be exchanged with the device, the DeviceIO map is present but empty.</para>
<para>Normally, this should not happen (except during the drafting phase) because if a device does not allow any data exchange with its digital counterpart, then it should be treated as a passive resource.</para></td></tr>
</tbody>
</table>
</table-wrap>
</section>
<section class="lev3" id="sec10-3-5-2">
<title>10.3.5.2 DeviceIO</title>
<para>DeviceIO represents a map of the input and output signals that can be exchanged with a specific device. Moreover, the DeviceIO represents the communication between CPSs at the I/O level.</para>
 
<table-wrap position="float" id="T5">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr><td valign="top" align="left">Field</td><td valign="top" align="left">Type</td><td valign="top" align="left">Description</td></tr>
<tr><td valign="top" align="left">inputSignals</td><td valign="top" align="left">DeviceSignal[]</td><td valign="top" align="left"><para>Array of DeviceSignal describing input signals. Signal direction is as seen by the device; therefore, this is the list of data that can be sent TO the device.</para>
<para>The field cannot be null. The length of the array can be 0 to n.</para>
<para>All DeviceSignal instances belonging to this collection must have the <emphasis>direction</emphasis> attribute set to <emphasis>SignalDirection.Input</emphasis>.</para></td></tr>
<tr><td valign="top" align="left">outputSignals</td><td valign="top" align="left">DeviceSignal[]</td><td valign="top" align="left"><para>Array of DeviceSignal describing output signals. Signal direction is as seen by the device; therefore, this is the list of data that can be received FROM the device.</para>
<para>The field cannot be null. The length of the array can be 0 to n.</para>
<para>All DeviceSignal instances belonging to this collection must have the <emphasis>direction</emphasis> attribute set to <emphasis>SignalDirection.Output</emphasis>.</para></td></tr>
</tbody>
</table>
</table-wrap>
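<para>The direction invariant stated in the tables above can be sketched as follows, assuming a Python enum for SignalDirection. The <emphasis>is_valid</emphasis> helper is illustrative, not part of the model specification; signal names are invented.</para>

```python
# Sketch of DeviceIO: both signal collections always exist (possibly
# empty), and every signal must sit in the collection matching its
# direction attribute. Names follow the tables; helper is an assumption.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class SignalDirection(Enum):
    Input = "input"    # data sent TO the device
    Output = "output"  # data received FROM the device

@dataclass
class DeviceSignal:
    name: str
    direction: SignalDirection

@dataclass
class DeviceIO:
    inputSignals: List[DeviceSignal] = field(default_factory=list)
    outputSignals: List[DeviceSignal] = field(default_factory=list)

    def is_valid(self):
        # Direction of each signal must agree with the collection it is in.
        return (all(s.direction is SignalDirection.Input
                    for s in self.inputSignals)
                and all(s.direction is SignalDirection.Output
                        for s in self.outputSignals))

io = DeviceIO(
    inputSignals=[DeviceSignal("StartCmd", SignalDirection.Input)],
    outputSignals=[DeviceSignal("SpindleTemp", SignalDirection.Output)],
)
```

<para>An empty DeviceIO is valid (the map is present but empty), matching the drafting-phase case described in the Device table.</para>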
<fig id="F10-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-9">Figure <xref linkend="F10-9" remap="10.9"/></link></label>
<caption><para>Class diagram of the Project Model section.</para></caption>
<graphic xlink:href="graphics/ch10_fig009.jpg"/>
</fig>
</section>
</section>
<section class="lev2" id="sec10-3-6">
<title>10.3.6 Project Model</title>
<para>This section describes the classes related to the management of projects, scenarios and results of simulations for a certain plant that are produced and consumed by simulation tools (<link linkend="F10-9">Figure <xref linkend="F10-9" remap="10.9"/></link>).</para>
<section class="lev3" id="sec10-3-6-1">
<title>10.3.6.1 Project</title>
<para>A project is an IdentifiedElement. It can be considered mainly as a utility container of different simulation scenarios that have been grouped together because they are related to the same part of the plant (e.g. different scenarios for the same painting cell of the production line).</para>
<para>A project could identify a design or a reconfiguration of a part of the plant for which each SimulationScenario represents a hypothesis of layout of different resources.</para>
</section>
<section class="lev3" id="sec10-3-6-2">
<title>10.3.6.2 Plant</title>
<para>Plant is an extension of IdentifiedElement and represents an aggregation of projects and resources. A plant instance can be considered an entry point for simulation tools that want to access models stored on the CSI. It contains references to all the resource instances that are the subject of SimulationScenarios. In this way, it is possible to have different simulation scenarios, even of different types, bound to a single resource instance.</para>
<para>Note: the fact that different simulations of different nature can be set up for the same resource (be it a cell, a line, etc.) is not related to the concept of multi-disciplinary simulation that is, instead, implemented by the Simulation Framework and refers to the possibility of running concurrent, interdependent simulations of different types.</para>
<para>The ID of the Plant must be unique within the overall framework deployment.</para>
</section>
<section class="lev3" id="sec10-3-6-3">
<title>10.3.6.3 SimulationScenario</title>
<para>SimulationScenario is an extension of IdentifiedElement and represents the run of a SimModel producing some SimResults. A simulation scenario refers to a root resource that is not necessarily the root resource instance of the whole plant, because a simulation scenario can be bound to just a small part of the full plant. A simulation scenario can set up a multi-disciplinary simulation, defining different simulation models for the same resource instance to be run concurrently by the Simulation Framework.</para>
</section>
<section class="lev3" id="sec10-3-6-4">
<title>10.3.6.4 SimModel</title>
<para>SimModel is an IdentifiedElement and represents a simulation model within a particular SimulationScenario. Each model can assemble different behavioural models of the root resource into a specific simulation model, creating scenario-specific relationships that are stored inside simulation assets. These assets can be expressed both in an interoperable format (e.g. AutomationML), when there is a need for data exchange among different tools, and in proprietary formats.</para>
<para>The ID of a SimModel instance must be unique within a Simulation Scenario.</para>
<fig id="F10-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-10">Figure <xref linkend="F10-10" remap="10.10"/></link></label>
<caption><para>Object diagram of the Project Model.</para></caption>
<graphic xlink:href="graphics/ch10_fig010.jpg"/>
</fig>
<para>The object diagram shown below (<link linkend="F10-10">Figure <xref linkend="F10-10" remap="10.10"/></link>) shows a possible application of the Project Model: a set of simple resources and CPS is organized into two hierarchies, one representing the actual demo line and a second modelling a hypothesis of redesign of the demo plant. All the Resource and CPS instances belong to the plant model Plant1 (these relationships have been omitted to keep the diagram readable). The user wants to perform two different simulations, one for each root resource, and therefore sets up two SimulationScenario instances: MD-DESScenario1 and DESScenario2. Each refers to a different root resource. The former is a multi-disciplinary scenario of DemoPlantNew that will use a combination of a DES model and an Energy Consumption model, while the latter represents a simple DES-only scenario of the original DemoPlant. These scenarios are aggregated in a Project instance (BendingCellProject) that belongs to Plant1 and that is meant to compare the performance of the plant using two different setups of the bending cell. For DESScenario2, simulation results (Result2) are already available.</para>
</section>
</section>
<section class="lev2" id="sec10-3-7">
<title>10.3.7 Product Routing Model</title>
<para>In this section, the product routing part of the meta data model is described. Structural choices as well as requirements considerations are reported, with a particular focus on the validation points that have been reviewed by experts. To describe this part of the model, each class is treated separately and clusters of functional areas have been created for simplicity. All attributes, cardinalities and relationships are described both with respect to the single entity and from the perspective of the overall data model.</para>
<section class="lev3" id="sec10-3-7-1">
<title>10.3.7.1 Relationship between product routing model and ISO 14649-10 standard</title>
<para>The product routing section of the data model has been developed according to ISO 14649-10, &#8220;Industrial automation systems and integration &#8211; Physical device control &#8211; Data model for computerized numerical controllers &#8211; Part 10: General process data&#8221;, which was analysed in depth and chosen as the best-fitting standard for the coupling between product features and operations. Its characteristics and focus areas are suitable from the functional point of view, as it tackles aspects that the model needs to cover in exactly the same application environment. In fact, it supports the communication between CAD and CNC. ISO 14649-10 specifies the process data that are generally needed for NC programming in any of the possible machining technologies. These data elements describe the interface between a computerized numerical controller and the programming system (i.e. CAM system or shopfloor programming system), on which the programme for the numerical controller is created. This programme includes geometric and technological information and can be described using this part of ISO 14649 together with the technology-specific parts (ISO 14649-11, etc.). This part of ISO 14649 provides the control structures for the sequence of programme execution, mainly the sequence of working steps and associated machine functions.<footnote id="fn10_1" label="1"> <para>ISO 14649. http://www.iso.org/iso/catalogue_detail?csnumber=34743</para></footnote> The standard gives a set of terms and a certain hierarchy among them, though without specifying the type of relations. Being focused on process data for CNC (Computerized Numerical Control), the terminology is deeply technical in describing all the different types of manufacturing features, mechanical parameters and measures.
The relationship between workpiece features, operations and sequencing is of relevance for the purpose of this work, so a number of entities have been selected. Only then were classes distinguished from attributes, and the types of relationships and references among the classes defined.</para>
<fig id="F10-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-11">Figure <xref linkend="F10-11" remap="10.11"/></link></label>
<caption><para>Schedule and workpiece representation.</para></caption>
<graphic xlink:href="graphics/ch10_fig011.jpg"/>
</fig>
</section>
<section class="lev3" id="sec10-3-7-2">
<title>10.3.7.2 Workpiece</title>
<para>The Workpiece class (<link linkend="F10-11">Figure <xref linkend="F10-11" remap="10.11"/></link>) represents the part or product that needs to be machined, assembled or disassembled. Each schedule realizes at least one workpiece, but it may also realize different product variants with various features. Each product variant is a different instantiation of the class &#8220;Workpiece&#8221;, which extends the IdentifiedElement class. Being a central entity of the data model, the workpiece is further developed on the production-scheduling and product-routing side. Manufacturing methods and instructions are not contained in the workpiece information but are determined by the operations themselves.</para>
</section>
<section class="lev3" id="sec10-3-7-3">
<title>10.3.7.3 ProgramStructure</title>
<para>ProgramStructure determines how the different operations are executed for a specific work piece, i.e. in series or parallel (see also <link linkend="F10-12">Figure <xref linkend="F10-12" remap="10.12"/></link>).</para>
<fig id="F10-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-12">Figure <xref linkend="F10-12" remap="10.12"/></link></label>
<caption><para>Program structure representation.</para></caption>
<graphic xlink:href="graphics/ch10_fig012.jpg"/>
</fig>
<para>A program structure, at low level, is composed of single, ordered steps, called &#8220;Executables&#8221;. Depending on the type of program structure, the executables are realized in series or parallel. The program structure thus defines how the different steps are executed and at the same time gives some flexibility in the choice, by taking into account data from the system.</para>
</section>
<section class="lev3" id="sec10-3-7-4">
<title>10.3.7.4 ProgramStructureType</title>
<para>Enumeration representing the allowed types of a ProgramStructure instance (<link linkend="F10-12">Figure <xref linkend="F10-12" remap="10.12"/></link>).</para>
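<para>A minimal sketch of ProgramStructure and its type enumeration follows; the two literals SERIAL and PARALLEL are an assumption derived from the text (&#8220;in series or parallel&#8221;), and the executables are modelled simply as callables:</para>

```python
from concurrent.futures import ThreadPoolExecutor
from enum import Enum
from typing import Callable, List

class ProgramStructureType(Enum):
    SERIAL = "serial"      # executables run one after the other
    PARALLEL = "parallel"  # executables may run concurrently

class ProgramStructure:
    def __init__(self, stype: ProgramStructureType,
                 executables: List[Callable[[], str]]):
        self.stype = stype
        self.executables = executables  # ordered steps ("Executables")

    def run(self) -> List[str]:
        if self.stype is ProgramStructureType.SERIAL:
            # strict ordering: each step starts after the previous one ends
            return [step() for step in self.executables]
        # parallel: steps are dispatched concurrently; results are
        # collected in submission order for convenience
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(step) for step in self.executables]
            return [f.result() for f in futures]
```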
</section>
<section class="lev3" id="sec10-3-7-5">
<title>10.3.7.5 MachiningExecutable</title>
<para>Machining executables initiate actions on a machine and need to be arranged in a defined order. They define all those tasks that cause a physical transformation of the workpiece. The MachiningExecutable class extends the IdentifiedElement class and is a generalization of machining working steps and machining NC functions, since both of these are special types of machining executables. Hierarchically, it is also a sub-class of program structures, being their basic unit, as it constitutes the steps needed for the execution of the program structure. Starting from the machining executable, the connected classes are represented in <link linkend="F10-12">Figure <xref linkend="F10-12" remap="10.12"/></link>.</para>
</section>
<section class="lev3" id="sec10-3-7-6">
<title>10.3.7.6 AssemblyExecutable</title>
<para>AssemblyExecutable also extends the IdentifiedElement class. AssemblyExecutables are a specialization of program structures and generalizations of working steps or NC functions. As in the case of machining executables, they initiate actions on a machine and need to be arranged in a defined order: assembly executables include all those operations that allow creating a single product from two or more workpieces. Starting from the assembly executable, the connected classes are represented in <link linkend="F10-12">Figure <xref linkend="F10-12" remap="10.12"/></link>.</para>
</section>
<section class="lev3" id="sec10-3-7-7">
<title>10.3.7.7 DisassemblyExecutable</title>
<para>DisassemblyExecutable is derived from IdentifiedElement. DisassemblyExecutables are generalizations of working steps or NC functions. As in the case of machining and assembly executables, they are also a specialization of program structures, being their basic units, as these three classes constitute the steps needed for the execution of the program structure. Thus, a program structure can be imagined as composed of one or more machining executables, one or more assembly executables and one or more disassembly executables. Disassembly executables also initiate actions on a machine and need to be arranged in a defined order: disassembly executables perform the opposite activity with respect to assembly, extracting more than one part from a single part. Starting from the disassembly executable, the connected classes are represented in <link linkend="F10-12">Figure <xref linkend="F10-12" remap="10.12"/></link>.</para>
</section>
<section class="lev3" id="sec10-3-7-8">
<title>10.3.7.8 MachiningNcFunction</title>
<para>MachiningNcFunction is an IdentifiedElement and a specialization of MachiningExecutable (<link linkend="F10-13">Figure <xref linkend="F10-13" remap="10.13"/></link>) that differs from the machining working step in that it is a technology-independent action, such as a handling or picking operation or a rapid movement. It has a specific purpose and given parameters. If needed, other parameters regarding speed or other technological requirements can be added as attributes.</para>
</section>
<section class="lev3" id="sec10-3-7-9">
<title>10.3.7.9 MachiningWorkingStep</title>
<para>MachiningWorkingStep is an IdentifiedElement that is also a specialization of MachiningExecutable, the most important one for the purpose of this work. It is the machining process for a certain area of the workpiece, and as such, it is related to a technology like milling, drilling or bending. It cannot exist independently of a feature; rather, it specifies the association between a distinct feature and an operation to be performed on that feature, creating an unambiguous specification that can be executed by the machine. An operation can be replicated for different features, while a working step is unique in each part program, as it spans a defined period of time and relates to a specific workpiece and a specific manufacturing feature. Each working step thus defines the conditions under which the related operation has to be performed. This also means that the operation related to the machining working step must be in the list of possible operations related to a certain manufacturing feature (<link linkend="F10-13">Figure <xref linkend="F10-13" remap="10.13"/></link>).</para>
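<para>The constraint that a working step may only bind a feature to one of its admissible operations can be sketched as a validation at construction time; attribute names beyond those in the text are assumptions:</para>

```python
class ManufacturingFeature:
    def __init__(self, fid: str, possible_operations):
        self.id = fid
        # the set of operations admissible for this feature
        self.possible_operations = set(possible_operations)

class MachiningWorkingStep:
    def __init__(self, wid: str, feature: ManufacturingFeature, operation: str):
        # the operation must be in the feature's list of possible operations,
        # so that the working step is an unambiguous, executable specification
        if operation not in feature.possible_operations:
            raise ValueError(
                f"operation {operation!r} not admissible for feature {feature.id!r}")
        self.id = wid
        self.feature = feature
        self.operation = operation
```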
<fig id="F10-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-13">Figure <xref linkend="F10-13" remap="10.13"/></link></label>
<caption><para>Machining executable representation.</para></caption>
<graphic xlink:href="graphics/ch10_fig013.jpg"/>
</fig>
</section>
<section class="lev3" id="sec10-3-7-10">
<title>10.3.7.10 MachiningWorkpieceSetup</title>
<para>MachiningWorkpieceSetup has a direct reference to the workpiece and is defined for each machining working step, since it defines the workpiece position for machining, which may change according to the position of the individual machining feature on the workpiece. The reference to the manufacturing feature for which it is defined is likewise unique: a single workpiece setup refers to only one machining working step, the one meant to realize a defined feature.</para>
</section>
<section class="lev3" id="sec10-3-7-11">
<title>10.3.7.11 MachiningSetupInstructions</title>
<para>For each single operation in time and space, precise setup instructions may be specified and connected to the workpiece setup, such as operator instructions and external material in the form of tables, documents and guidelines. The MachiningSetupInstructions class extends the IdentifiedElement class.</para>
</section>
<section class="lev3" id="sec10-3-7-12">
<title>10.3.7.12 ManufacturingFeature</title>
<para>ManufacturingFeature is an IdentifiedElement that is a characteristic of the workpiece, which requires specific operations. For 3D simulation and Computer Aided Design, it is fundamental to have the physical characteristics specifications: as shown in <link linkend="F10-13">Figure <xref linkend="F10-13" remap="10.13"/></link>, the workpiece manufacturing features are a relevant piece of information for modelling and simulation, as they determine the required operations.</para>
</section>
<section class="lev3" id="sec10-3-7-13">
<title>10.3.7.13 MachiningOperation</title>
<para>MachiningOperation is an IdentifiedElement that specifies the contents of a machining working step and is connected to the tool to be used and a set of technological parameters for the operation. The tool choice depends on the specific working step conditions (<link linkend="F10-13">Figure <xref linkend="F10-13" remap="10.13"/></link>). The more information is specified for tool and fixture, the more limited the list of possible matches is. Therefore, only the relevant, necessary values should be specified.</para>
</section>
<section class="lev3" id="sec10-3-7-14">
<title>10.3.7.14 MachiningTechnology</title>
<para>MachiningTechnology collects a set of parameters, such as feed rate or tool reference point. The addition of new attributes would expand the possibilities of technological specifications.</para>
</section>
<section class="lev3" id="sec10-3-7-15">
<title>10.3.7.15 Fixture</title>
<para>The Fixture class is an IdentifiedElement that represents the fixtures required by machining operations, if any. Given that the same operation may be performed under different conditions, the choice of a fitting fixture is made for the single working step.</para>
</section>
<section class="lev3" id="sec10-3-7-16">
<title>10.3.7.16 Assembly and disassembly</title>
<para>In Figures 10.14 and 10.15, the assembly-executable and disassembly-executable branches are examined, even though their development is very similar to the machining-executable branch. In fact, they differ only in a small number of details and specifications, which are presented in the following subsections.</para>
<fig id="F10-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-14">Figure <xref linkend="F10-14" remap="10.14"/></link></label>
<caption><para>Assembly-Executable representation.</para></caption>
<graphic xlink:href="graphics/ch10_fig014.jpg"/>
</fig>
<fig id="F10-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-15">Figure <xref linkend="F10-15" remap="10.15"/></link></label>
<caption><para>Disassembly representation.</para></caption>
<graphic xlink:href="graphics/ch10_fig015.jpg"/>
</fig>
</section>
</section>
<section class="lev2" id="sec10-3-8">
<title>10.3.8 Security Model</title>
<para>The phases of requirement gathering and analysis highlighted that security and privacy are two of the principal issues that must be properly addressed in a simulation platform.</para>
<para>Here, security and privacy will be enforced focusing mainly on the following aspects:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The implementation of suitable <emphasis role="strong">authentication/authorization</emphasis> mechanisms</para></listitem>
<listitem><para><emphasis role="strong">Securing communication and data storage</emphasis> via encryption</para></listitem>
</itemizedlist>
<para>These aspects fall under the so-called Privacy-Enhancing Technologies (PETs).</para>
<para>In more detail, authentication is the process of confirming the identity of an external actor in order to prevent malicious access to the system&#8217;s resources and services. Authentication, however, is only one side of the coin: it is tightly coupled with the concept of authorization, which can be defined as the set of actions a software system has to implement in order to grant (authenticated) users the permission to execute an operation on one or more resources. Authentication and authorization are concepts related both to security (unwanted, potentially catastrophic access to inner resources) and to privacy and data protection (malicious access to other users&#8217; data).</para>
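<para>The two checks can be illustrated as follows; the user store, roles and permitted actions below are invented for the example and do not reflect the platform&#8217;s actual policy:</para>

```python
import hashlib

# hypothetical user store: username -> (password hash, role)
USERS = {"alice": (hashlib.sha256(b"s3cret").hexdigest(), "engineer")}

# hypothetical role -> permitted actions mapping
PERMISSIONS = {
    "engineer": {"read_model", "run_simulation"},
    "viewer": {"read_model"},
}

def authenticate(username: str, password: str):
    """Authentication: confirm the actor's identity; return its role or None."""
    entry = USERS.get(username)
    if entry and entry[0] == hashlib.sha256(password.encode()).hexdigest():
        return entry[1]
    return None

def authorize(role, action: str) -> bool:
    """Authorization: grant the action only if the authenticated role permits it."""
    return action in PERMISSIONS.get(role, set())
```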
<fig id="F10-16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-16">Figure <xref linkend="F10-16" remap="10.16"/></link></label>
<caption><para>Class diagram for the security section of the <emphasis>Meta Data Model</emphasis>.</para></caption>
<graphic xlink:href="graphics/ch10_fig016.jpg"/>
</fig>
<para>Securing communication is the third piece of this security and privacy puzzle, and it is as necessary as authentication and authorization. As a matter of fact, most physical media (e.g. wireless networks) offer very few privacy guarantees, and in many cases, it is practically impossible to secure wide networks against eavesdroppers. Nonetheless, confidentiality and privacy are fundamental rights (acknowledged by the European Convention on Human Rights) and must be enforced over often insecure communication and storage infrastructures. For this reason, the simulation platform is committed to employing state-of-the-art encryption mechanisms (e.g. SSL/TLS) on both data storage and transport.</para>
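<para>As a small illustration (not the platform&#8217;s actual configuration), a client written in Python can enforce certificate validation and refuse legacy protocol versions with the standard library alone:</para>

```python
import ssl

# the default context verifies server certificates and hostnames
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# refuse legacy protocol versions; only TLS 1.2 or newer is accepted
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

<para>Any socket wrapped with this context will fail the handshake against an untrusted certificate or an outdated server instead of silently downgrading.</para>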
<para>In the following sections of the document, the part of <emphasis>Meta Data Model</emphasis> devoted to security/access control management is reported and discussed. The elements of the meta model that play a role in security-related scenarios are depicted in <link linkend="F10-16">Figure <xref linkend="F10-16" remap="10.16"/></link>.</para>
</section>
</section>
<section class="lev1" id="sec10-4">
<title>10.4 Conclusions</title>
<para>Multidisciplinary simulation is increasingly important for the design, deployment and management of CPS-based factories. Many challenges arise when exploiting the full potential of simulation technologies within Smart Factories, where a significant technological barrier is the lack of digital continuity. Indeed, this chapter targets the fundamental issue of the lack of common modelling languages and rigorous semantics for describing interactions &#8211; physical and digital &#8211; across heterogeneous tools and systems, towards effective simulation applicable along the whole factory life cycle.</para>
<para>The data model described in this chapter is the result of the joint effort of different actors from European academia and industry. From the reference specifications presented in this chapter, which should be considered a first release of a broader collaboration, a model has indeed been developed and subsequently validated within both an automotive industry use case and a steel carpentry scenario.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>This work was achieved within the EU-H2020 project MAYA, which received funding from the European Union&#8217;s Horizon 2020 research and innovation programme, under grant agreement No. 678556.</para></section>
<section class="lev1" id="sec10-5">
<title>References</title>
<para>[1] www.automationml.org, accessed on March 24, 2017.</para>
<para>[2] Weyer, Stephan, et al.: Towards Industry 4.0 &#8211; Standardization as the crucial challenge for highly modular, multi-vendor production systems. IFAC-PapersOnLine, vol. 48, no. 3, pp. 579&#8211;584, 2015.</para>
<para>[3] Baudisch, Thomas; Brandstetter, Veronika; Wehrstedt, Jan Christoph; Wei&#223;, Mario; Meyer, Torben: Ein zentrales, multiperspektivisches Datenmodell f&#252;r die automatische Generierung von Simulationsmodellen f&#252;r die Virtuelle Inbetriebnahme. Tagungsband Automation 2017.</para>
</section>
</chapter>

<chapter class="chapter" id="ch011" label="11" xreflabel="11">
<title>A Centralized Support Infrastructure (CSI) to Manage CPS Digital Twin, towards the Synchronization between CPS Deployed on the Shopfloor and Their Digital Representation</title>
<para><emphasis role="strong">Diego Rovere<superscript>1</superscript>, Paolo Pedrazzoli<superscript>2</superscript>, Giovanni dal Maso<superscript>1</superscript>, Marino Alge<superscript>2</superscript> and Michele Ciavotta<superscript>3</superscript></emphasis></para>
<para><superscript>1</superscript>TTS srl, Italy</para>
<para><superscript>2</superscript> Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C, CH-6928 Manno, Switzerland</para>
<para><superscript>3</superscript> Universit&#224; degli Studi di Milano-Bicocca, Italy</para>
<para>E-mail: rovere@ttsnetwork.com; pedrazzoli@ttsnetwork.com; dalmaso@ttsnetwork.com; marino.alge@supsi.ch; michele.ciavotta@unimib.it</para>
<para>In order to support effective multi-disciplinary simulation tools in all phases of the factory life cycle, it is mandatory to ensure that the Digital Twin constantly and faithfully mirrors the state of the CPS. CPS nameplate values change over time with operating conditions and wear. To this end, this chapter describes the future CPS as equipped with special assets, named Functional Models, to be uploaded to the CSI for synchronization and data analysis. Functional Models are essentially software routines that are run against data sent by the CPS. Such routines can regularly update CPS reference values, estimate indirect metrics, or train predictive models. Functional Models are fully managed (registered, executed, and monitored) by the CSI middleware.</para>
<section class="lev1" id="sec11-1">
<title>11.1 Introduction</title>
<para>The main purpose of the CSI is to manage CPS Digital Twins (DTs) allowing the synchronization between CPS deployed on the shopfloor and their digital representation. In particular, during the whole factory life cycle, the CSI will provide services (via suitable API endpoints) to analyze the data streams coming from the shopfloor and to share simulation models and results among simulators.</para>
<para>In this chapter, we present the implementation of a distributed middleware developed within the frame of the MAYA European project, tailored to enable scalable interoperability between enterprise applications and CPS, with special attention paid to simulation tools. The proposed platform strives to be the first solution based on both the Microservices [1, 2] and Big Data [3] paradigms to empower shopfloor CPS along the whole plant life cycle and realize real-digital synchronization, ensuring at the same time security and confidentiality of sensitive factory data.</para>
</section>
<section class="lev1" id="sec11-2">
<title>11.2 Terminology</title>
<para><emphasis role="strong">Shopfloor CPS</emphasis> &#8211; With the expression &#8220;Shopfloor CPS&#8221; we refer to digital-mechatronic systems deployed at shopfloor level. They are physical entities that intervene in various ways in the manufacture of a certain product. For the scope of this chapter, Shopfloor CPS (referred to as Real CPS or simply CPS) can communicate with each other and with the CSI.</para>
<para><emphasis role="strong">CPS Digital Twin (or just Digital Twin)</emphasis> &#8211; In the smart factory, each shopfloor CPS is mirrored by its virtual alter ego, called Digital Twin (DT). The Digital Twin is the semantic, functional, and simulation-ready representation of a CPS; it gathers together heterogeneous pieces of information. In particular, it can define, among other things, Shopfloor CPS performance specifications, Behavioral (simulation) Models, and Functional Models.</para>
<para>Digital Twin is a composite concept that is specified as follows:</para>
<para><emphasis role="strong">CPS Prototype (or just Prototype)</emphasis> &#8211; <link linkend="ch012">Chapter <xref linkend="ch12" remap="12"/></link> proposes a meta-model that paves the way to a semantic definition of CPS within the CSI. Following the Object-Oriented Programming (OOP) approach, we distinguish between a Prototype (or class) and its derived instances. A CPS prototype is a model that defines the structure and the associated semantics for a certain class of CPS. A prototype defines fields representing both the general characteristics of the represented CPS class and the state of a specific Shopfloor CPS.</para>
<para><emphasis role="strong">CPS Instance</emphasis> &#8211; Once a shopfloor CPS is connected to the CSI platform, a set of processes are run to instantiate, starting from a CPS prototype, the Digital Twin. The Digital Twin is an instance of a specific CPS prototype. Therefore, a CPS instance can be defined as the computer-based representation (live object in memory or stored in a database) of its Digital Twin, which can be considered a more abstract concept even independent of this implementation within the CSI.</para>
<para><emphasis role="strong">Behavioral Models</emphasis> &#8211; These are simulation models, linked to the semantic representation of a CPS (prototype and instance) and stored within the CSI. Each Digital Twin can feature behavioral models of different nature to enable the multi-disciplinary approach to simulation.</para>
<para><emphasis role="strong">Functional Models</emphasis> &#8211; In layman&#8217;s terms, functional models are pieces of software, run on a compliant platform, created to analyze data coming from the shopfloor. Data can enter the platform in the form of streams or be imported from other sources (text files, Excel spreadsheets, databases, etc.). The results of the analysis are used to enrich the Digital Twin, implementing the real-to-digital synchronization. They can be used, for instance, to update the nameplate data of Digital Twins or to enable predictive maintenance specific to the considered CPS.</para>
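<para>A functional model can be as simple as a routine that folds a data stream into an updated reference value. The sketch below (the field names on the twin are invented for illustration) maintains a running mean of measured cycle times on a Digital Twin record:</para>

```python
def update_cycle_time(twin: dict, measurements) -> dict:
    """Hypothetical functional model: refresh the twin's nameplate
    cycle time from a stream of measured values."""
    count = twin.get("samples", 0)
    mean = twin.get("cycle_time_s", 0.0)
    for value in measurements:          # stream of shopfloor measurements
        count += 1
        mean += (value - mean) / count  # incremental (running) mean
    twin["samples"] = count
    twin["cycle_time_s"] = mean
    return twin
```

<para>Registered with the CSI, such a routine would run whenever new shopfloor data arrives, keeping the digital representation aligned with the real CPS.</para>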
</section>
<section class="lev1" id="sec11-3">
<title>11.3 CSI Architecture</title>
<para>The overall CSI component diagram is shown in <link linkend="F11-1">Figure <xref linkend="F11-1" remap="11.1"/></link>: a relevant part of the platform consists of a microservice-based infrastructure devoted to administrative tasks related to Digital Twins, plus a Big Data deployment responsible for processing shopfloor data. Since the two portions of our middleware have different requirements, and are also grounded on different technological solutions, in what follows they are presented and discussed separately.</para>
<section class="lev2" id="sec11-3-1">
<title>11.3.1 Microservice Platform</title>
<para>In a nutshell, the microservice architecture is the evolution of the classical Service-Oriented Architecture (SOA) [4] in which the application is seen as a suite of small services, each devoted to a single activity. Within the CSI, each microservice exposes a small set of functionalities and runs in its own process, communicating with other services mainly via HTTP resource APIs or messages. Four groups of services can be identified; they are addressed in what follows.</para>
<fig id="F11-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-1">Figure <xref linkend="F11-1" remap="11.1"/></link></label>
<caption><para>CSI Component Diagram.</para></caption>
<graphic xlink:href="graphics/ch11_fig001.jpg"/>
</fig>
<section class="lev3" id="sec11-3-1-1">
<title>11.3.1.1 Front-end services</title>
<para>Front-end services are designed to provide the CSI with a single and secure interface to the outer world. As a consequence, any other service can be accessed only through the front-end and only by trusted entities. The main services in this group are:</para>
<para><emphasis role="strong"><emphasis>Web-based UI</emphasis></emphasis></para>
<para><emphasis>The Web-based UI is a Web application for human&#8211;machine interaction; it provides a user-friendly interface to register new CPS or to execute queries. Administration tools such as security management and platform monitoring are available as well</emphasis>.</para>
<para><emphasis role="strong"><emphasis>API Gateway</emphasis></emphasis></para>
<para><emphasis>The API Gateway, instead, is a service designed to provide dynamic and secure API routing, acting as a front door for the requests coming from authorized players, namely users via the Web UI and devices/CPS executing REST/WebSocket calls. In layman&#8217;s terms, all the other platform services are accessible only through the gateway and only by trusted entities</emphasis>.</para>
<para>The gateway is based on Netflix Zuul<footnote id="fn11_1" label="1"><para>https://github.com/Netflix/zuul/wiki</para></footnote> for dynamic routing, monitoring, and security, and on Ribbon<footnote id="fn11_2" label="2"> <para>https://github.com/Netflix/ribbon</para></footnote>, a multi-protocol inter-process communication library that, in collaboration with the Service Registry (see SOA enabling services), dispatches incoming requests applying a load-balancing policy. The API gateway, finally, offers an implementation of the Circuit Breaker<footnote id="fn11_3" label="3"> <para>https://martinfowler.com/bliki/CircuitBreaker.html</para></footnote> pattern, preventing the system from getting stuck in case the target back-end service fails to answer within a certain time.</para>
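<para>The Circuit Breaker pattern can be sketched independently of Zuul: after a number of consecutive failures, the breaker &#8220;opens&#8221; and fails fast without invoking the back end, until a timeout allows a trial call. The thresholds below are arbitrary example values:</para>

```python
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures    # consecutive failures before opening
        self.reset_timeout = reset_timeout  # seconds before a trial call is allowed
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # open circuit: fail fast, do not touch the back end
                raise RuntimeError("circuit open: failing fast")
            # half-open: allow one trial call through
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success resets the failure count
        return result
```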
</section>
<section class="lev3" id="sec11-3-1-2">
<title>11.3.1.2 Security and privacy</title>
<para>Security policies are enforced by the User Account and Authentication (UAA) service, which is in charge of the authentication and authorization tasks:</para>
<para><emphasis role="strong"><emphasis>UAA Service</emphasis></emphasis></para>
<para><emphasis>In a nutshell, the main task of this service is to check users&#8217; (human operators, CPS or microservices) credentials to verify their identity and to issue a time-limited OAuth2 [13] token authorizing a subset of possible actions that depends on the particular role the user has been assigned. Users&#8217; data, roles and permissions are stored in a relational database; currently, a MySQL</emphasis><footnote id="fn11_4" label="4"> <para>www.mysql.com</para></footnote> <emphasis>database is used to this end</emphasis>.</para>
<para>It is worth noticing that authentication and authorization are required not only for human users and CPS but also between microservices, to establish trustful collaboration and avoid malevolent and tampering actions.</para>
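<para>The essence of a time-limited token can be sketched with a signed, expiring payload; this is a simplified illustration with an invented shared secret, not the OAuth2 protocol the UAA service actually implements:</para>

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # assumption: a shared signing key for the example

def issue_token(user: str, roles, ttl: float = 3600.0) -> str:
    """Issue a signed token that expires ttl seconds from now."""
    payload = {"sub": user, "roles": list(roles), "exp": time.time() + ttl}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token: str):
    """Return the payload if the signature is valid and unexpired, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        return None  # expired token
    return payload
```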
</section>
<section class="lev3" id="sec11-3-1-3">
<title>11.3.1.3 SOA enabling services</title>
<para>This group of services supports the microservice paradigm; it features:</para>
<para><emphasis role="strong"><emphasis>Service Registry</emphasis></emphasis></para>
<para><emphasis>This service provides a REST endpoint for service discovery. It is designed to allow transparent and agnostic service communication and load balancing. Based on Netflix Eureka</emphasis><footnote id="fn11_5" label="5"> <para>https://github.com/Netflix/eureka/wiki</para></footnote>, <emphasis>it exposes APIs for service registration and querying, allowing the services to communicate without referring to specific IP addresses. This is especially important in the scenario in which services are replicated in order to handle a high workload</emphasis>.</para>
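<para>Conceptually, a registry of this kind maps service names to replicated instances and balances lookups among them; the round-robin sketch below (names and addresses are invented) illustrates why callers never need concrete IP addresses:</para>

```python
class ServiceRegistry:
    """Toy name-to-instances registry with round-robin load balancing."""

    def __init__(self):
        self._instances = {}  # service name -> list of registered addresses
        self._cursors = {}    # service name -> next instance index

    def register(self, name: str, address: str) -> None:
        self._instances.setdefault(name, []).append(address)

    def resolve(self, name: str) -> str:
        # each lookup returns the next replica in round-robin order
        instances = self._instances.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        cursor = self._cursors.get(name, 0)
        self._cursors[name] = (cursor + 1) % len(instances)
        return instances[cursor]
```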
<para><emphasis role="strong"><emphasis>Configuration Server</emphasis></emphasis></para>
<para><emphasis>The main task of this service is to store properties files in a centralized way for all the micro-services involved in the CSI. This is a task of paramount importance in many scenarios involving the overall life cycle of the platform. Among the benefits of having a configuration server, we mention here the ability to change the service runtime behavior in order to, for example, perform debugging and monitoring</emphasis>.</para>
<para><emphasis role="strong"><emphasis>Monitoring Console</emphasis></emphasis></para>
<para><emphasis>This macro-component, made up of three services, implements the so-called ELK stack (i.e., Elasticsearch, Logstash, and Kibana) to provide log collection, analysis, and monitoring services. In other words, logs from every microservice are collected, stored, processed, and presented in graphical form to the CSI administrator. A query language is also provided to enable the administrator to interactively analyze the information coming from the platform</emphasis>.</para>
</section>
<section class="lev3" id="sec11-3-1-4">
<title>11.3.1.4 Backend services</title>
<para>To this group belong those services that implement the <link linkend="ch012">Chapter <xref linkend="ch12" remap="12"/></link> meta-data model and manage the creation, update, deletion, storage, retrieval, and query of CPS Digital Twins as well as simulation-related information. In particular, the CSI features the following services:</para>
<para><emphasis role="strong"><emphasis>Orchestrator</emphasis></emphasis></para>
<para><emphasis>The Orchestrator microservice coordinates and organizes other services&#8217; execution to create high-level composite business processes</emphasis>.</para>
<para><emphasis role="strong"><emphasis>Scheduler</emphasis></emphasis></para>
<para><emphasis>Service for the orchestration of recurring actions. Examples of such jobs are: importing data from external sources at regular intervals, updating CPS Prototypes and instances, removing stale data from internal databases, and sending administrators emails enclosing a report on the system&#8217;s health</emphasis>.</para>
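<para>A recurring job of this kind can be sketched with Python&#8217;s standard-library scheduler; the interval and the job itself are illustrative placeholders, not the actual Scheduler implementation.</para>

```python
import sched
import time

def schedule_recurring(scheduler, interval, action, repetitions):
    """Run `action` every `interval` seconds, `repetitions` times,
    by re-entering the job in the scheduler after each run."""
    def runner(remaining):
        action()
        if remaining > 1:
            scheduler.enter(interval, 1, runner, argument=(remaining - 1,))
    scheduler.enter(interval, 1, runner, argument=(repetitions,))

reports = []
s = sched.scheduler(time.time, time.sleep)
# Hypothetical job: "send a health report" three times at short intervals.
schedule_recurring(s, 0.01, lambda: reports.append("health report sent"), 3)
s.run()  # blocks until all scheduled runs complete
```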
<para><emphasis role="strong"><emphasis>Models MS/Assets MS</emphasis></emphasis></para>
<para><emphasis>Models and Assets microservices handle the persistence of Digital Twin information (their representation and assets, respectively) providing endpoints for CRUD operations. In the current version of the CSI, these two components are merged into a single service in order to streamline the access to MongoDB and avoid synchronization issues</emphasis>.</para>
<para><emphasis role="strong"><emphasis>FMService</emphasis></emphasis></para>
<para><emphasis>This service communicates with the Big Data platform; its main task is to submit the Functional Models to Apache Spark and to monitor, cancel, and list their executions</emphasis>.</para>
<para><emphasis role="strong"><emphasis>Updater MS</emphasis></emphasis></para>
<para><emphasis>This service is designed to interact with the Big Data platform (in particular with Apache Cassandra) to retrieve data generated by the Functional Models</emphasis>.</para>
<para><emphasis role="strong"><emphasis>Simulations MS</emphasis></emphasis></para>
<para><emphasis>This service is responsible for managing the persistence of simulation-related data within a suitable database</emphasis>.</para>
</section>
</section>
<section class="lev2" id="sec11-3-2">
<title>11.3.2 Big Data Sub-Architecture</title>
<para>Big Data technologies are becoming innovation drivers in industry [5]. The CSI is required to handle unprecedented volumes of data generated by the digital representation of the factory in order to keep the CPS nameplate information up to date. To this end, a data processing platform, specifically a Lambda architecture [6], has been implemented according to the best practices of the field. The Lambda Architecture was introduced as a generic, linearly scalable, and fault-tolerant data processing architecture. In particular, both the data-at-rest and data-in-motion patterns are supported by the platform, making it suitable for both batch and stream processing.</para>
<para>The Lambda Architecture encompasses three layers, namely the batch, speed, and serving layers. The batch layer is in charge of the analysis of large (but still finite) quantities of data. In a typical scenario, the data ingested by the system are inserted into NoSQL databases; pre-computation is then applied periodically on batches of data, with the purpose of offering the data in a suitably aggregated form to different batch views. Note that the batch layer has a high processing latency because it is intended for historical data.</para>
<para>The speed layer is in charge of processing infinite streams of information. Its purpose is to offer low-latency, real-time data processing: it processes the input data as they are streamed in and feeds the real-time views defined in the serving layer.</para>
<para>The serving layer&#8217;s main responsibility is to offer a view of the results of the analysis. This layer responds to queries coming from external systems; in this particular case, it provides an interface that integrates with the rest of the CSI.</para>
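<para>The interplay of the three layers can be sketched as follows; the data and names are purely illustrative, assuming a simple per-CPS event count as the computed view.</para>

```python
# Hypothetical sketch of the three Lambda layers for a single metric
# (a count of events per CPS); names and data are illustrative only.

master_data = [("cps-1", 1), ("cps-2", 1), ("cps-1", 1)]   # all data store
recent_stream = [("cps-1", 1)]                              # not yet batched

def batch_view(events):
    """Batch layer: high-latency recomputation over the full history."""
    view = {}
    for key, n in events:
        view[key] = view.get(key, 0) + n
    return view

def speed_view(events):
    """Speed layer: incremental counts over recent, un-batched events."""
    return batch_view(events)  # same logic, but over a small window

def query(key):
    """Serving layer: merge batch and real-time views at query time."""
    return (batch_view(master_data).get(key, 0)
            + speed_view(recent_stream).get(key, 0))
```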
<fig id="F11-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-2">Figure <xref linkend="F11-2" remap="11.2"/></link></label>
<caption><para>Lambda Architecture.</para></caption>
<graphic xlink:href="graphics/ch11_fig002.jpg"/>
</fig>
<para>Designing and setting up a Big Data environment, here in the form of the Lambda Architecture (<link linkend="F11-2">Figure <xref linkend="F11-2" remap="11.2"/></link>), is a complex task that starts with a number of structural decisions. In what follows, some high-level considerations about the technological choices made are presented:</para>
<section class="lev3" id="sec11-3-2-1">
<title>11.3.2.1 Batch layer</title>
<para>The field of Big Data is bursting with literally hundreds of tools and frameworks, each with specific characteristics; recently, however, some new solutions have appeared on the market that natively extend the MapReduce [7] paradigm and, among other things, provide a more flexible and complete programming model, paving the way to the realization of new and more complex algorithms.</para>
<para>The solution selected to implement this layer, Apache Spark [8], claims to be up to 100&#215; faster than Hadoop in memory and up to 10&#215; faster on disk. This is mainly due to a particular distributed, in-memory data structure called the Resilient Distributed Dataset (RDD). In a short time, Apache Spark attracted the interest of important players and gathered a vast community of contributors, to mention only a few: Intel, IBM, Yahoo!, Databricks, Cloudera, Netflix, Alibaba, and UC Berkeley. Moreover, Spark implements both the map-reduce and streaming paradigms, features an out-of-the-box SQL-like language for the automatic generation of jobs, and supports several programming languages (Java, Scala, Python, and R).</para>
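<para>For illustration, the map-reduce paradigm that Spark generalizes can be sketched in plain Python; Spark&#8217;s RDD API would express the same pipeline in a distributed setting, roughly as flatMap, map, and reduceByKey transformations. The input records are hypothetical.</para>

```python
from functools import reduce
from itertools import chain

# MapReduce word count in plain Python: a map phase emits (key, 1)
# pairs, a shuffle groups pairs by key, and a reduce phase aggregates
# each group.
records = ["cps online", "cps offline", "cps online"]

# Map: emit one (word, 1) pair per token.
pairs = chain.from_iterable(
    ((word, 1) for word in line.split()) for line in records)

# Shuffle: group pairs by key.
groups = {}
for word, one in pairs:
    groups.setdefault(word, []).append(one)

# Reduce: sum the counts within each group.
counts = {word: reduce(lambda a, b: a + b, ones)
          for word, ones in groups.items()}
```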
</section>
<section class="lev3" id="sec11-3-2-2">
<title>11.3.2.2 Stream processing engine</title>
<para>While the batch processing engine enables the analysis of large amounts of historical data (often referred to as Data at Rest), the stream processing engine is the component of the Lambda Architecture that is in charge of continuously manipulating the incoming data in quasi-real-time fashion (i.e., the Data in Motion scenario). Recently, stream processing has increased in popularity: within the Apache Foundation alone, we identified several tools supporting different flavors of stream processing. Among them is Spark Streaming [9], the tool used to implement this layer.</para>
<para>Spark Streaming relies on the Spark core to implement micro-batching stream processing. This means that the elements of the incoming streams are grouped together in small batches and then manipulated. As a consequence, Spark shows a higher latency than record-at-a-time stream processors (about 1 second). Spark Streaming is nonetheless a valid alternative owing to its rich API, its large set of libraries, and its stability.</para>
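<para>Micro-batching can be sketched as follows; for simplicity, the window here is count-based rather than time-based as in Spark Streaming, and the sensor readings are illustrative.</para>

```python
import itertools

def micro_batches(stream, batch_size):
    """Group a (potentially endless) stream into small batches, in the
    spirit of Spark Streaming's micro-batch model; the window here is
    count-based for simplicity."""
    it = iter(stream)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield batch

# Each micro-batch is then processed with ordinary batch logic,
# e.g., averaging hypothetical sensor readings.
readings = [3.1, 2.9, 3.4, 3.0, 2.8]
averages = [sum(b) / len(b) for b in micro_batches(readings, 2)]
```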
<para>Spark can work in standalone mode, relying on its own resource manager, or it can rely on external resource managers (such as YARN). Other resource managers exist (e.g., Apache Mesos), but they are geared more towards general cluster management than towards Big Data. Nonetheless, Spark can be executed over both YARN and Mesos.</para>
</section>
<section class="lev3" id="sec11-3-2-3">
<title>11.3.2.3 All data store</title>
<para>A central role in the Lambda Architecture is played by the All Data Store, which is the service in charge of storing and retrieving the historical data to be analyzed. Depending on the type of data entering the system, this element of the platform can be realized in different ways. In MAYA, we decided to implement it through a NoSQL database particularly suitable for fast updates, Apache Cassandra [10]. It is the most representative champion of the column-oriented group: a distributed, linearly scalable solution capable of handling high volumes of data. Cassandra is widely adopted (it is the most used column-oriented database) and features an SQL-like query language named CQL (Cassandra Query Language) along with a Thrift<footnote id="fn11_6" label="6"> <para>https://thrift.apache.org/</para></footnote> interface. As far as stream views are concerned, Cassandra has been successfully used to handle time series for IoT and Big Data.</para>
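<para>A time-series schema of the kind commonly used with Cassandra can be sketched in CQL as follows; the table and column names are hypothetical, not the actual MAYA schema.</para>

```python
# Hedged sketch: CQL statements of the kind typically used to store CPS
# time series in Cassandra; all identifiers below are hypothetical.

CREATE_TABLE = """
CREATE TABLE IF NOT EXISTS cps_timeseries (
    cps_id text,
    day date,
    ts timestamp,
    attribute text,
    value double,
    PRIMARY KEY ((cps_id, day), ts)
) WITH CLUSTERING ORDER BY (ts DESC);
"""

def insert_statement(table="cps_timeseries"):
    """Parameterized insert: the composite partition key (cps_id, day)
    spreads one device's history across partitions, while clustering by
    ts keeps reads of recent samples fast."""
    return ("INSERT INTO %s (cps_id, day, ts, attribute, value) "
            "VALUES (?, ?, ?, ?, ?)" % table)
```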
</section>
<section class="lev3" id="sec11-3-2-4">
<title>11.3.2.4 Message queueing system</title>
<para>In a typical Big Data scenario, data flows coming from different sources continuously enter the system; the most common integration paradigm to handle these flows consists in setting up a proper message queue. A message queue is a middleware implementing a publish/subscribe pattern to decouple producers and consumers by means of an asynchronous communication protocol. Message queues can be analyzed from several points of view; in particular, policies regarding durability, security, filtering, purging, routing, and acknowledgment, as well as message protocols (such as AMQP, STOMP, and MQTT), must be carefully considered.</para>
<para>Message queue systems are not a novelty, and many proprietary as well as open-source solutions have appeared on the market in recent years. Among the open-source ones is Apache Kafka [11]. A preliminary analysis suggests that Kafka is the most widely used in big players&#8217; production environments, for instance at LinkedIn, Yahoo!, Twitter, Netflix, Spotify, Uber, Pinterest, PayPal, Cisco, and Coursera, among others. Kafka is written in Java and was originally developed at LinkedIn; it provides a distributed and persistent message-passing system with a variety of policies. It relies on Apache Zookeeper [12] to maintain state across the cluster. Kafka has been tested to provide close to 200,000 messages/second for writes and 3 million messages/second for reads, an order of magnitude more than its alternatives.</para>
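<para>The publish/subscribe decoupling that a broker such as Kafka provides can be sketched in memory as follows; this ignores partitioning, persistence, and delivery guarantees, and the topic and consumer names are illustrative.</para>

```python
from collections import defaultdict

class MessageQueue:
    """In-memory sketch of the publish/subscribe pattern a broker such
    as Kafka implements: producers append to named topics, and each
    consumer reads independently at its own offset, so the two sides
    remain decoupled and asynchronous."""

    def __init__(self):
        self._topics = defaultdict(list)
        self._offsets = defaultdict(int)

    def publish(self, topic, message):
        self._topics[topic].append(message)

    def poll(self, topic, consumer_id):
        """Return the messages this consumer has not seen yet and
        advance its offset (delivery guarantees are not modeled)."""
        offset = self._offsets[(topic, consumer_id)]
        messages = self._topics[topic][offset:]
        self._offsets[(topic, consumer_id)] = len(self._topics[topic])
        return messages
```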
</section>
<section class="lev3" id="sec11-3-2-5">
<title>11.3.2.5 Serving layer</title>
<para>This layer provides a low-latency storage system for both the batch and speed layers. Its goal is to provide an engine able to ingest different types of workloads and query them, showing a unified view of the data. The rationale is that the outcomes of the different computations must be suitably handled in order to be further processed later. In particular, batch views will contain steady, structured, and versioned data, whereas stream views will contain time-related data. Within the CSI, we have adopted the following flexible approach: if batch activities are required, the serving layer is implemented by means of the Apache Cassandra NoSQL database; otherwise, Apache Kafka is exploited. Notice that it is not uncommon to use a persistent and distributed message system as the serving layer, as, for example, in ORYX2<footnote id="fn11_7" label="7"> <para>http://oryx.io/</para></footnote>, where precisely Kafka is used.</para>
</section>
</section>
<section class="lev2" id="sec11-3-3">
<title>11.3.3 Integration Services</title>
<para>Technically, these services do not belong to the CSI at the moment, but we envision their development in the following phases of the project with the aim of streamlining the interaction with external tools and databases; in particular, at the time of writing, we foresee the following services:</para>
<para><emphasis role="strong"><emphasis>MSF Connector</emphasis></emphasis></para>
<para><emphasis>This component passes to the MSF the CPS id, the simulation model in AutomationML format, and the simulation types requested by the user. The MSF sends in return the simulation results for each simulation type requested</emphasis>.</para>
<para><emphasis role="strong"><emphasis>Field Connector</emphasis></emphasis></para>
<para><emphasis>This service bridges the gap between the communication layer and the field in the case of CPS that are not compliant with the CSI. In particular, it will create suitable WebSocket channels for data streams coming from the field and route those data to the Big Data platform inside the CSI</emphasis>.</para>
<para><emphasis role="strong"><emphasis>DB Importer</emphasis></emphasis></para>
<para><emphasis>Database Importers will be in charge of importing valuable data from external databases to enable the execution of Functional Models on those data</emphasis>.</para>
</section>
</section>
<section class="lev1" id="sec11-4">
<title>11.4 Real-to-Digital Synchronization Scenario</title>
<para>Several usage scenarios can be executed within the CSI. Nonetheless, we propose the following as a reference use case, as it involves a good part of the CSI components and functionalities. The objective is to use it as a key to better understand the relationships among the CSI components and how they are reflected in the architecture. The considered scenario concerns the automated processing of data streams coming from CPS and can be described as follows:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>A human operator registers a new CPS. This action can be performed via the graphical UI or by means of available REST [13] endpoints;</para></listitem>

<listitem><para>The CPS logs in on the CSI, its digital identity is verified and the Digital Twin is activated;</para></listitem>

<listitem><para>The Functional Model featured by the Digital Twin (if any) is set up, scheduled, and executed;</para></listitem>

<listitem><para>A WebSocket channel is established between the CPS and the CSI. The CPS starts sending data to the platform;</para></listitem>

<listitem><para>The Functional Model periodically generates updates for a subset of attributes of the corresponding Digital Twin;</para></listitem>

<listitem><para>The CPS disconnects from the CSI and, consequently, the related Functional Models are halted and dismissed.</para></listitem>
</orderedlist>
<para><link linkend="F11-3">Figure <xref linkend="F11-3" remap="11.3"/></link> describes in UML the main actions carried out by the CPS and by the CSI in the scenario at hand. In particular, the CPS connects by logging in on the platform; at that point, it is associated with a WebSocket endpoint and can start sending data up to the CSI. The CSI, on the other hand, launches the execution of the Functional Model associated with the CPS.</para>
<para>A deeper insight is gained by means of <link linkend="F11-4">Figure <xref linkend="F11-4" remap="11.4"/></link>, in which the interactions among services within the CSI are highlighted. The CPS connects with the CSI via the API <emphasis role="strong"><emphasis>Gateway</emphasis></emphasis>. In the current version of the CSI, the Gateway is in charge of checking whether the CPS requesting to be attended is legitimate (it must have been created within the platform beforehand). To do this, the Gateway queries the <emphasis role="strong"><emphasis>Models MS</emphasis></emphasis> service. The Gateway then creates a WebSocket endpoint for the CPS, redirects the incoming workload to Kafka, and notifies the <emphasis role="strong"><emphasis>Orchestrator</emphasis></emphasis>. This, in turn, is in charge of running the Functional Model(s) associated with the CPS. The Functional Models are executed within the Big Data platform (in the Apache Spark cluster) and, in particular, they use Kafka not only as the source of data but also as the endpoint where the results of the computation are posted. Meanwhile, the Orchestrator schedules a recurring job on the <emphasis role="strong"><emphasis>Scheduler</emphasis></emphasis> that picks up the updates from the output Kafka topic and uses them to update the nameplate values of the CPS Digital Twin.</para>
<fig id="F11-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-3">Figure <xref linkend="F11-3" remap="11.3"/></link></label>
<caption><para>CPS connection.</para></caption>
<graphic xlink:href="graphics/ch11_fig003.jpg"/>
</fig>
<fig id="F11-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-4">Figure <xref linkend="F11-4" remap="11.4"/></link></label>
<caption><para>Sequence diagram.</para></caption>
<graphic xlink:href="graphics/ch11_fig004.jpg"/>
</fig>
<para>During the whole process, security is enforced in the form of SSL connections, CPS login via OAuth2, and service-to-service authorization and authentication. We outline the real-to-digital synchronization in <link linkend="F11-5">Figure <xref linkend="F11-5" remap="11.5"/></link>, wherein the reader can find all the players present in the sequence diagram plus the UAA Service, which is in charge of the authentication and authorization tasks. The actions performed by this service are pervasive and would have made the sequence diagram unintelligible.</para>
<fig id="F11-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-5">Figure <xref linkend="F11-5" remap="11.5"/></link></label>
<caption><para>Outline of the Real-to-digital synchronization.</para></caption>
<graphic xlink:href="graphics/ch11_fig005.jpg"/>
</fig>
</section>
<section class="lev1" id="sec11-5">
<title>11.5 Enabling Technologies</title>
<para>The CSI aims at being the first reference middleware for smart factories based on a composite Microservices/Big Data approach, paying particular attention to security concerns. In the following paragraphs, we examine the reasons behind the technical choices made.</para>
<section class="lev2" id="sec11-5-1">
<title>11.5.1 Microservices</title>
<para>The microservices approach proposes having numerous small code bases managed by small teams, instead of a giant code base that eventually every developer touches, which makes the process of delivering a new version of the system more complex, slow, and painful.</para>
<para>In a nutshell, the microservice architecture is the evolution of the classical Service-Oriented Architecture (SOA), in which the application is seen as a suite of small services, each devoted to a single activity. Each microservice exposes an atomic functionality of the system and runs in its own process, communicating with other services via HTTP resource APIs (REST) or messages.</para>
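<para>A deliberately minimal sketch of such a service, using only Python&#8217;s standard library, is given below; the endpoint, payload, and service name are hypothetical, and the actual CSI services are not implemented this way.</para>

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """Single-purpose handler: the whole 'service' exposes one atomic
    functionality over HTTP, in the spirit of a microservice."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": "models-ms", "status": "UP"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence default stderr logging
        pass

# Bind to an ephemeral port and serve from a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or the API gateway) consumes the endpoint over HTTP.
url = "http://127.0.0.1:%d/status" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    status = json.load(resp)
server.shutdown()
```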
<para>The adoption of the microservice paradigm provides several benefits but also presents inconveniences and new challenges. Among the benefits of this architectural style, the following must be enumerated:</para>
<para><emphasis role="strong">Agility</emphasis> &#8211; Microservices fit into the Agile/DevOps development methodology [2], enabling businesses to start small and innovate fast by iterating on their core products without incurring substantial downtime. A minimal version of an application, in fact, can be created in a shorter time, reducing time-to-market and up-front investment costs and providing an advantage with respect to competitors. Future versions of the application can be realized by seamlessly adding new microservices.</para>
<para><emphasis role="strong">Isolation and Resilience</emphasis> &#8211; Resilience is the ability to self-recover after a failure. A failure in a monolithic application can be a catastrophic event, as the whole platform must recover completely. In a microservice platform, instead, each service can fail and heal independently, with a possibly reduced impact on the overall platform&#8217;s functionalities. Resilience is strongly dependent on the compartmentalization and containment of failure, namely isolation. Microservices can be easily containerized and deployed as single processes, thus reducing the probability of cascading failures across the overall application. Isolation, moreover, enables reactive service scaling and independent monitoring, debugging, and testing.</para>
<para><emphasis role="strong">Elasticity</emphasis> &#8211; A platform can be subject to variable workloads, especially on a seasonal basis. Elasticity is the ability to respond to workload changes by provisioning or dismissing computational power, which usually translates into scaling services up and down. This process can be particularly painful and costly in the case of on-premise software, and easier and automated in the case of cloud-based applications. Microservices, moreover, allow for a finer-grained approach, in which services in distress (e.g., those not meeting their Quality of Service) can be identified and individually scaled, taking full advantage of cloud computing since only the right amount of resources needs to be provisioned. This approach can lead to substantial savings in the cloud, which usually implements pay-per-use provisioning policies.</para>
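<para>A naive per-service scaling rule of this kind can be sketched as follows; the thresholds and bounds are illustrative assumptions, not the policy of any real platform.</para>

```python
def target_replicas(current, latency_ms, qos_ms, min_r=1, max_r=10):
    """Naive per-service elasticity rule: add a replica while the
    observed latency violates the QoS target, remove one when there is
    ample headroom; all thresholds here are illustrative."""
    if latency_ms > qos_ms and current < max_r:
        return current + 1          # scale up the service in distress
    if latency_ms < 0.5 * qos_ms and current > min_r:
        return current - 1          # release resources (pay-per-use)
    return current
```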
<para>As far as the challenges and drawbacks derived by the choice of adopting microservices are concerned, we mention here:</para>
<para><emphasis role="strong">Management of Distributed Data</emphasis> &#8211; As each microservice might have its private database, it is difficult to implement business transactions that maintain data consistency across multiple databases.</para>
<para><emphasis role="strong">Higher Complexity of the Resulting System</emphasis> &#8211; The proliferation of small services can translate into a tangled web of relationships among them. Experienced teams must be put together to deal with microservice platforms in the best possible way.</para>
</section>
<section class="lev2" id="sec11-5-2">
<title>11.5.2 Cloud Ready Architecture: The Choice of Docker</title>
<para>Containerization services (the best known of which is definitely Docker [14]) and microservices are two closely related yet different aspects of the same phenomenon; although containerization is not essential to realize microservice architectures, it certainly enables microservices to fully realize their potential. Docker&#8217;s <emphasis>agility</emphasis>, <emphasis>isolation</emphasis>, and <emphasis>portability</emphasis>, in fact, powered the rise and success of the microservice pattern, while the latter gathered an ever-increasing interest around containers. It can safely be said that they are now two faces of the same coin and have made each other&#8217;s fortune.</para>
<para>At this point, it is important to answer a simple question: what is a containerization system? A containerization system (hereinafter, we will use Docker and containerization system interchangeably) is a para-virtualization platform that exploits isolation features of the Linux kernel, such as namespaces and <emphasis>cgroups</emphasis> (and recently also their Windows counterparts), to create a secure and isolated environment for the execution of a process. Each process running in a container has access to its own file system and libraries, but it shares the underpinning kernel with other containers.</para>
<para>This approach is called para-virtualization because, unlike virtualization systems, which emulate hardware for whole virtual machines to run atop, there is no need to emulate anything. Moreover, Docker does not depend on specific virtualization technologies and, therefore, can run wherever a Linux kernel is available. The overall approach is lightweight with respect to more traditional hypervisor-based virtualization platforms, allowing for a better exploitation of the available resources and for the creation of faster and more reactive applications. In the light of these considerations, it should be clear how well Docker fits microservices, as it isolates containers to one process and makes it simple and fast to handle the full life cycle of these services.</para>
<para>The current version of the CSI is provided with a set of scripts for the automatic creation of Docker images for each of the services involved in the platform. Deployment scripts, which rely on a tool called Docker Compose, are provided as well to streamline deployment on a local testbed. Nonetheless, a similar approach can be used to execute the platform on the most important clouds (e.g., Amazon ECS, Azure Container Service).</para>
</section>
<section class="lev2" id="sec11-5-3">
<title>11.5.3 Lambda Architecture</title>
<para>A very important subset of CSI functionalities consists in the capability to handle the unprecedented volumes of data generated by the digital representation of the factory. To this end, a Big Data platform has been integrated with the microservice one. The phrase Big Data usually refers to a large research area that encompasses several facets; in this work, in particular, we refer to Big Data architectures. The following benefits deserve to be enumerated:</para>
<para><emphasis role="strong">Simple but Reliable</emphasis> &#8211; The CSI Big Data platform has been implemented employing a reduced number of tools; all of them are considered state of the art, are used in production by hundreds of companies worldwide, and are backed by large communities and big Information and Communications Technologies players.</para>
<para><emphasis role="strong">Multi-paradigm and General Purpose</emphasis> &#8211; Batch and Stream processing as well as ad-hoc queries are supported and can run concurrently. Moreover, the unified execution model, coupled with a large set of libraries, permits the execution of complex and heterogeneous tasks (as machine learning, data filtering, ETL, etc.).</para>
<para><emphasis role="strong">Robust and Fault Tolerant</emphasis> &#8211; In case of failure, the data processing is automatically rescheduled and restarted on the remaining resources.</para>
<para><emphasis role="strong">Multi-tenant and Scalable</emphasis> &#8211; In MAYA, this means that several Functional Models can run in parallel sharing computational resources. Furthermore, if more resources are provisioned, the platform will start to exploit them without downtime.</para>
<para>The downside of this approach is that it is fundamentally and technologically different from the rest of the platform and required considerable integration work. For this reason, the main elements of the CSI Big Data architecture had to be interfaced with expressly created microservices (such as FMService and Updater MS; see Section 11.3.1.4 for more details). Finally, Big Data solutions generally require steep learning curves to be fully exploited and are, moreover, quite resource-hungry.</para>
</section>
<section class="lev2" id="sec11-5-4">
<title>11.5.4 Security and Privacy</title>
<para>Security and privacy issues assume paramount importance in Industrial IoT. Here, we enforce those aspects since the earliest stages of the design, focusing on suitable Privacy-Enhancing Technologies (PETs) that encompass Authentication, Authorization, and Encryption mechanisms.</para>
<para>More in detail, authentication is the process of confirming the identity of an actor in order to avoid possibly malicious accesses to the system&#8217;s resources and services. Authorization, in turn, can be defined as the set of actions a software system has to implement in order to grant an actor the permission to execute an operation on one or more resources.</para>
<para>Specifically, seeking more flexibility, we implemented a role-based access control model that makes the authorization process depend on the actor&#8217;s role. Suitable authentication/authorization mechanisms (based on the OAuth2 protocol) have been developed for human operators, services, and CPS.</para>
<para>Securing communication is the third piece of this security and privacy puzzle, as no trustworthy authentication or authorization mechanism can be built without the prior establishment of a secure channel. For this reason, the CSI is committed to employing modern encryption mechanisms (e.g., SSL and TLS) for communication as well as data storage.</para>
</section>
</section>
<section class="lev1" id="sec11-6">
<title>11.6 Conclusions</title>
<para>This chapter presented the Centralized Support Infrastructure built within the H2020 MAYA project: an IoT middleware designed to support simulation in smart factories. To the best of our knowledge, it represents the first example of a microservice platform for manufacturing. Since security and privacy are sensitive subjects for industry, special attention has been paid to their enforcement from the earliest phases of the project. The proposed platform has been described in detail in connection with CPS and simulators. Lastly, the overall architecture has been discussed along with its benefits and challenges.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>The work hereby described has been achieved within the EU-H2020 project MAYA, which has received funding from the European Union&#8217;s Horizon 2020 research and innovation program, under grant agreement No. 678556.</para></section>
<section class="lev1" id="sec11-7">
<title>References</title>
<para>[1] N. Dragoni et al., &#8220;Microservices: yesterday, today, and tomorrow,&#8221; in <emphasis>Present and Ulterior Software Engineering</emphasis>, Springer Berlin Heidelberg, 2017.</para>
<para>[2] S. Newman, <emphasis>Building Microservices</emphasis>. O&#8217;Reilly Media, Inc., 2015.</para>
<para>[3] J. Manyika et al., &#8220;Big data: The next frontier for innovation, competition, and productivity,&#8221; 2011.</para>
<para>[4] S. Newman, <emphasis>Building Microservices</emphasis>. O&#8217;Reilly Media, Inc., 2015.</para>
<para>[5] C. Yang, W. Shen, and X. Wang, &#8220;Applications of Internet of Things in manufacturing,&#8221; in <emphasis>Proceedings of the 2016 IEEE 20th International Conference on Computer Supported Cooperative Work in Design, CSCWD 2016</emphasis>, pp. 670&#8211;675, 2016.</para>
<para>[6] R. Drath, A. Luder, J. Peschke, and L. Hundt, &#8220;AutomationML-the glue for seamless automation engineering,&#8221; in <emphasis>Emerging Technologies and Factory Automation, 2008. ETFA 2008. IEEE International Conference on</emphasis>, pp. 616&#8211;623, 2008.</para>
<para>[7] J. Dean and S. Ghemawat, &#8220;MapReduce: Simplified Data Processing on Large Clusters,&#8221; <emphasis>Proc. OSDI - Symp. Oper. Syst. Des. Implement</emphasis>., pp. 137&#8211;149, 2004.</para>
<para>[8] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, &#8220;Spark: Cluster Computing with Working Sets,&#8221; <emphasis>HotCloud&#8217;10 Proc. 2nd USENIX Conf. Hot Top. Cloud Comput.</emphasis>, p. 10, 2010.</para>
<para>[9] M. Zaharia, T. Das, H. Li, T. Hunter, S. Shenker, and I. Stoica, &#8220;Discretized Streams: Fault-Tolerant Streaming Computation at Scale,&#8221; <emphasis>Sosp</emphasis>, no. 1, pp. 423&#8211;438, 2013.</para>
<para>[10] A. Lakshman and P. Malik, &#8220;Cassandra,&#8221; <emphasis>ACM SIGOPS Oper. Syst. Rev</emphasis>., vol. 44, no. 2, p. 35, 2010.</para>
<para>[11] J. Kreps, N. Narkhede, and J. Rao, &#8220;Kafka: a Distributed Messaging System for Log Processing,&#8221; <emphasis>ACM SIGMOD Work. Netw. Meets Databases</emphasis>, p. 6, 2011.</para>
<para>[12] P. Hunt, M. Konar, F. P. Junqueira, and B. Reed, &#8220;ZooKeeper: Wait-free Coordination for Internet-scale Systems,&#8221; in <emphasis>USENIX Annual Technical Conference</emphasis>, vol. 8, p. 11, 2010.</para>
<para>[13] R. T. Fielding and R. N. Taylor, &#8220;Principled Design of the Modern Web Architecture,&#8221; <emphasis>ACM Transactions on Internet Technology</emphasis>, vol. 2, no. 2. pp. 407&#8211;416, 2002.</para>
<para>[14] D. Jaramillo, D. V. Nguyen, and R. Smart, &#8220;Leveraging microservices architecture by using Docker technology,&#8221; in <emphasis>Conference Proceedings - IEEE SOUTHEASTCON</emphasis>, 2016, July 2016.</para>
</section>
</chapter>
</part>
<part class="part" id="part03" label="III" xreflabel="III" role="PART">
<title>PART III</title>
<chapter class="chapter" id="ch012" label="12" xreflabel="12">
<title>Building an Automation Software Ecosystem on the Top of IEC 61499</title>
<para><emphasis role="strong">Andrea Barni, Elias Montini, Giuseppe Landolfi, Marzio Sorlini and Silvia Menato</emphasis></para>
<para>Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), The Institute of Systems and Technologies for Sustainable Production (ISTEPS), Galleria 2, Via Cantonale 2C,</para>
<para>CH-6928 Manno, Switzerland</para>
<para>Email: andrea.barni@supsi.ch; elias.montini@supsi.ch;</para>
<para>giuseppe.landolfi@supsi.ch; marzio.sorlini@supsi.ch;</para>
<para>silvia.menato@supsi.ch</para>
<para>The adoption of Cyber-Physical System (CPS) technologies at the European level is constrained by a still emerging value chain and by the challenging transformation of manufacturing processes and business ecosystems that their deployment requires. This issue becomes even more challenging when the concept of CPS is exploited to propose cyber-physical machines and manufacturing systems, where the complexity of the controlling intelligence and of the digital counterpart explodes. As a matter of fact, the market potential behind CPS is still scarcely supported by methodologies and tools able to foster the rise of the solid ecosystem required for a relevant market uptake. Multi-sided platforms (MSPs) have proven to play the pivotal role of providing the environments and technological infrastructures able to match the needs of the manifold users relying on them. The manufacturing sector has not remained untouched by this trend and is taking its first steps towards the integration of platform logics across value networks: the CPS business ecosystem is one of them.</para>
<para>In this chapter, beyond an analysis of the current state of the automation value network, the design and implementation of a multi-sided platform for CPS deployment within the automation sector are described. The proposed platform can provide the infrastructure to incentivize CPS adoption, creating the technological and value drivers supporting the transition towards new paradigms for the development of the software components of a mechatronic system. By developing an infrastructure on top of which the CPS value network can be instantiated and orchestrated, the proposed platform provides the technical means to incentivize the creation of an ecosystem able to support SMEs in particular in their transition towards Industry 4.0.</para>
<section class="lev1" id="sec12-1">
<title>12.1 Introduction</title>
<para>Technological innovation is the main engine behind economic development aimed at supporting companies in adapting to the rhythm of a market dictated by globalization [1, 2]. According to Stal [3], innovation is the development of new methods, devices or machines that could change the way in which things happen. The fourth industrial revolution, pursuing the extensive adoption of innovative technologies and systems, increasingly impacts almost every industry. According to Bharadwaj et al. [4], &#8220;digital technologies (viewed as combinations of information, computing, communication, and connectivity technologies) are fundamentally transforming business strategies, business processes, firm capabilities, products and services, and key interfirm relationships in extended business networks&#8221;.</para>
<para>The automation industry has historically played a leading role in experimenting with and pushing this transformation, with technological and process-related innovation being assimilated all-inclusively across the whole value network [5]. However, the characteristics that the automation market has acquired in the last decades, where complex value networks support standard-based technologies relying on legacy systems, mean that purely technological advancements are no longer enough to satisfy the need for innovation expressed by the market. As proposed in the Technology&#8211;Organization&#8211;Environment Framework [6], the propensity of companies towards the adoption of innovations is indeed not only dependent on the technology per se, but is influenced by the technological context, the organizational context, and the environmental context. The technological context includes the internal and external technologies that are relevant to the firm. The organizational context refers to the characteristics and resources of the firm. The environmental context includes the size and structure of the industry, the firm&#8217;s competitors, the macroeconomic context, and the regulatory environment [6&#8211;8]. These three elements present both constraints and opportunities for technological innovation and influence the way a firm sees the need for, searches for, and adopts new technology [9].</para>
<para>In this context, the European initiative Daedalus supports companies in facing the opportunities and challenges of Industry 4.0, starting from the overcoming of the rigid hierarchical levels of the automation pyramid. This is done by supporting CPS orchestration in real time through the IEC-61499 automation language, to achieve complex and optimized behaviors impossible with other current technologies. To do so, it proposes a methodology and the related supporting technologies that, integrated within an &#8220;Industry platform<footnote id="fn12_1" label="1"> <para>An industry platform is defined as: foundation technologies or services that are essential for a broader, interdependent ecosystem of businesses [17, 18].</para></footnote>&#8221; and brought to the market by means of a Digital Marketplace, are meant to foster the evolution of the automation value network. The desired evolution is expected not only to impact how companies manage their production systems, providing extended functionalities and greater flexibility across the automation pyramid, but also to broadly impact the automation ecosystem, creating and/or improving connections, relationships and value drivers among the automation stakeholders.</para>
<para>In the following sections, the principal characteristics of the current automation domain are analysed by focussing on the stakeholders (hereinafter complementors) that populate the ecosystem and on the structure of the relationships among them. The Daedalus platform is then presented by highlighting, beyond the technological aspects described in <link linkend="ch05">Chapter <xref linkend="ch5" remap="5"/></link>, the value exchanges managed by the Digital Marketplace. The impact that the creation of such an ecosystem has on the complementors is finally discussed through an analysis of the to-be business networks.</para>
</section>
<section class="lev1" id="sec12-2">
<title>12.2 An Outlook of the Automation Value Network</title>
<para>The automation industry is an interdisciplinary field, which involves a wide variety of tasks, product portfolios (machinery, control, equipment, small elements, etc.), technologies (robotics, software, etc.), standards and services, serving different sectors with distinct requirements and needs. This environment is populated by many stakeholders which, through complex and articulated value chains, collaborate to develop complete automation solutions. This section aims to provide an overview of this complex domain, describing its characteristics, its players and the relations that they have established over time.</para>
<section class="lev2" id="sec12-2-1">
<title>12.2.1 Characteristics of the Automation Ecosystem Stakeholders</title>
<para>The automation environment is currently populated by several different players, which can be grouped into five macro-categories:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Component suppliers (CSs);</para></listitem>
<listitem><para>Automation solutions providers (ASPs);</para></listitem>
<listitem><para>Equipment and machines builders (E&amp;MBs);</para></listitem>
<listitem><para>System integrators (SIs);</para></listitem>
<listitem><para>Plant owners (POs).</para></listitem>
</itemizedlist>
<para>These macro-categories are the most relevant entities concurring in the design and development of industrial automation solutions, in which different hardware &amp; software elements, characterized by a high granularity of functionalities and complexity, are integrated into the building of complex mechatronic systems. In <link linkend="F12-1">Figure <xref linkend="F12-1" remap="12.1"/></link>, connections, flows, and relationships among the players of the automation domain have thus been summarized, providing a general overview of the current automation value chains with a particular focus on: (i) type of relation, (ii) distribution channels and (iii) value proposition. The elements drawn in boxes among complementors are not intended to be univocal: for example, the value proposition of one player can vary considerably according to the customer it serves.</para>
<fig id="F12-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-1">Figure <xref linkend="F12-1" remap="12.1"/></link></label>
<caption><para>Automation value network general representation.</para></caption>
<graphic xlink:href="graphics/ch12_fig001.jpg"/>
</fig>
<para>The main interactions that can exist among the automation players are presented here, with the aim not of covering all possible interactions (the biggest players frequently group more than one of the proposed stakeholders&#8217; functions under their umbrella; similarly, companies frequently establish partnerships exposing a single contact point to the customer), but of describing the most common ones. The resulting schema highlights the linearity of the current automation ecosystem, where automation solutions, i.e. manufacturing lines, are the result of a &#8220;chronological&#8221; (even if very complex) interaction among players that goes from the granularity of low-intelligence components to the high integration and desired smartness of full manufacturing lines.</para>
<para>In the existing value chains, the automation solution to be purchased is still selected merely by considering its hardware elements. Despite the great commitment exerted in software development to create integrated and versatile automation solutions, and despite the high impact the software has in terms of costs and implementation effort, software is still not a primary decision-making parameter. The decision between one solution and another depends first on the hardware (the component, the control system, the machine, the equipment, the production line), and only in second instance is the software to integrate, coordinate, and/or use the entire system selected. For this reason, the schema does not underline the relevance of the software, which is considered a player in the background.</para>
<section class="lev3" id="sec12-2-1-1">
<title>12.2.1.1 Automation solution providers</title>
<para>The automation solution provider (ASP) produces controllers for automation, such as PLCs, servo-drives, HMIs, safety devices and a wide variety of other products. Companies such as Siemens, Allen-Bradley, Turck, Omron, Phoenix Contact, Rexroth, Mitsubishi and Schneider/Modicon provide the hardware components of the control solution, the software to develop the programs and the standards on which the logic controls are based.</para>
<para>The choice between the different automation solution brands is based on several parameters including integrability, flexibility, scalability and reliability.</para>
<para>Controllers have a relevant impact on the realization of an automation solution. Developing a plant totally based on one single brand guarantees reliability and easy integration. Nevertheless, this decision involves a strong dependency, which can bring disadvantages in terms of costs, flexibility and change opportunities.</para>
<para>ASPs usually have a strict relation with machine builders. Depending on the adopted business model, the automation element may be directly provided by the equipment &amp; machine builder (E&amp;MB), which is an ASP itself (e.g. ABB and Yaskawa). In other cases, E&amp;MBs develop drivers within their products that allow them to work with different automation solutions. For example, they may create a special driver for communication with PROFIBUS for Siemens, or EtherNet/IP for Allen-Bradley, or just leave an open communication port such as Modbus, to work with any other device. In many cases, an E&amp;MB proposes to its customers automation solutions that are compliant with its machines, and the customer decides which one to implement.</para>
<para>Customers have a relevant role for the ASP&#8217;s business and their relationship can be resumed in two categories:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Self-service: customers have limited interaction with the automation solution provider&#8217;s employees. The main relation channel is often the website, where self-help resources such as white papers, case studies, videos and answers to frequently asked questions are available. Personal assistance in the form of phone and e-mail support is often also available (e.g. ABB).</para></listitem>
<listitem><para>Consulting: direct sales forces consult the customer to ensure that all needs are met. The main objective is to establish a long-term relationship. Technical support is provided through personal and on-site assistance, but also through phone or online resources.</para></listitem>
</itemizedlist>
<para>ASPs can offer consultancy services either directly or through the support of system integrators. Some system integrators (SIs) prefer to remain completely independent of ASPs, while others choose to form alliances, which take the form of membership in an ASP&#8217;s partner program. Such a program provides its members with a wide number of benefits, including training, advertising, marketing assistance, beta product trials, free product samples and more [5].</para>
</section>
<section class="lev3" id="sec12-2-1-2">
<title>12.2.1.2 Components suppliers</title>
<para>A components supplier (CS) produces devices that do not execute complex functionalities on their own (i.e. that cannot alone influence a whole production process).</para>
<para>Their main customers are E&amp;MBs, and sensors, drives, panels, I/O clamps, etc. are typical &#8220;deliverables&#8221;. Production is usually oriented towards a make-to-stock approach in large numbers, aiming at a wide application scope. Their business model mainly focuses on the premium segment and/or on customization, where it is possible to obtain the largest profits, with a strong emphasis on their home country [10]. CSs usually try to grow through joint ventures and cooperation, exploring adjacent businesses also through horizontal integration.</para>
</section>
<section class="lev3" id="sec12-2-1-3">
<title>12.2.1.3 Equipment and machines builders</title>
<para>In the current automation environment, the main business of an equipment &amp; machine builder (E&amp;MB) is the design and production of equipment and machines through the assembly of different simple components, including low-level controllers and their control logic, in order to obtain more complex and functional systems. The integration and configuration of HMIs, PLCs, CNCs and other accessories and tools depend on the business model of the player. In some cases, these activities are developed in-house; in others, they are carried out by SIs or directly by customers. E&amp;MBs are usually characterized by a strong level of internationalization, which they intensify through local value creation and shorter times to market. E&amp;MBs are, together with ASPs, the most technologically advanced players.</para>
<para>Considering their supply chain, the dependency of an E&amp;MB on external suppliers is heavy in the case of high-value added elements such as numerical controls, drives, linear guides, spindles, clamps or specific/custom automation components. Some of these are bought on the market from multinational companies (typical examples are NC, drives and PLC), others are produced by specialized companies working closely with E&amp;MBs (e.g. clamps and tooling).</para>
<para>In some cases, machines and equipment are sold without the integration of the automation control; it is the customer or an SI that directly implements the automation. Machine producers usually provide a list of compliant automation solutions without expressing specific recommendations: it is the ASP that has to promote its products and influence the buyer to install them. In the same way, the ASP does not suggest any E&amp;MB. The E&amp;MB&#8217;s business model has a particular impact on the relation with customers. Sometimes, E&amp;MBs rely on distributor partners, which sell and implement the basic configuration of their products based on customer needs. In other cases, in particular when the E&amp;MB is a big company, there is a direct relation, where consultants or agents interact with the customers.</para>
<para>It must also be considered that customers can be both plant owners, who directly purchase the machines and equipment, and SIs, who purchase them for a third party. Another element influenced by E&amp;MBs is machine integration: some E&amp;MBs provide this service, while others provide only the product and leave the integration to a third-party actor or to the customer. Depending on customer needs and the type of equipment and machine, builders can produce basic, highly standardized, high-volume machines or customized ones, involving the customer in the development through close collaboration between customer and supplier.</para>
</section>
<section class="lev3" id="sec12-2-1-4">
<title>12.2.1.4 System integrators</title>
<para>The main business of system integrators (SIs) focuses on the assembly and integration of combined machinery systems, lines and equipment. SIs usually have vertical competences in a specific sector (e.g. food and beverage, wood, textile, packaging, etc.), due to the need for a deep knowledge of the available technologies.</para>
<para>The SI starts from the customer&#8217;s functional needs to design, propose and implement turnkey solutions. These are developed through the combination of existing and new resources. In many cases, the SI develops code in different languages, providing low-level software (applications, libraries, algorithms) to connect, integrate or add basic functionalities to machines, lines and plants. The SI can also provide support for requests for proposal and after-sale maintenance.</para>
<para>Usually, every SI has its main, trusted and reliable suppliers, among which it selects on the basis of specific customer requirements. If the customer has no specific brand needs, the system integrator turns to well-known or partner suppliers. The SI usually purchases components from different vendors, depending on the integration it is working on and on the customer requirements. The relationship with components suppliers, which is often intermediated by distributors, is driven by different elements, such as personal relationships and past experience. In many cases, the system integrator prefers known suppliers with long-standing interpersonal relationships, which guarantee a service that goes beyond a simple buyer&#8211;supplier relationship (e.g. delivery outside working time). Price is also a relevant aspect, especially when the customer is directly involved in the choice. If the customer has no specific requirements about the automation controllers, HMI, software, etc., the SI adopts the technology it knows best: if its employees know specific languages and software and there is no need to change them, then the corresponding automation solution will be adopted.</para>
</section>
</section>
<section class="lev2" id="sec12-2-2">
<title>12.2.2 Beyond Business Interactions: Limitations and Complexities of the Existing Automation Market</title>
<para>Adopting and using automation solutions requires the involvement of experienced and skilled employees, whose competences are developed over time and cover all the value and supply chain steps. Being able to maintain a sustainable value chain, where all the members obtain proper revenue, therefore seems to be the winning factor fostering continuity, customer loyalty and product familiarity. For this reason, in particular from the final user&#8217;s point of view, consistency, continuity over time and familiarity with the supplier/solution are often more relevant than innovation itself. Price and services are also relevant aspects to be considered for an automation solution, but if they are aligned between competitors, personal relationships and the known modus operandi have a higher impact.</para>
<para>The automation environment is in fact very conservative and slow to change: evolutions and innovations are often seen more as concerns than as opportunities for improvement. Automation solution providers, whose main products are PLCs and control systems, are the main rulers. Current PLC technologies, which shape the deployment of industrial automation applications, are a legacy of the 1980s, unsuited to sustaining complex systems composed of intelligent devices. The current control systems, which are based on the IEC-61131 standard, are outdated from a software engineering point of view. Moreover, they have been implemented by each vendor through specific &#8220;dialects&#8221; that prevent real interoperability, and they are strictly dependent on the hardware on which they run. The automation solution brand is therefore considered by the customer to be a relevant aspect: if a company wants to access a specific market, it has to adapt its product to that context&#8217;s specificities. For example, in Germany, a machine without a Siemens PLC will probably not be sold. The low inclination to change of this sector, due to the strong dependency on reliability and on well-established know-how, does not push the actors towards innovative changes (not even those pulled by the Industry 4.0 principles).</para>
<para>In addition to the previously mentioned domain issues, there are different technological limits that should be included to obtain a complete representation. Obsolescence of automation systems has a relevant impact on the whole automation solution life cycle. In order to be compliant with the principles of the fourth industrial revolution, access to data related to a piece of equipment, a machine, a line, a plant and a factory should be available at any level of the supervisory and management hierarchical architecture. On the contrary, constraints and limits imposed by HMI or SCADA systems, designed and implemented to fulfill the requirements identified at the design stage, restrict access to data. Moreover, should additional information requirements emerge that were not included/considered in the initial design (e.g. for monitoring performance improvement purposes), existing legacy systems require a modification of the PLCs and a reconfiguration of the SCADA system (and/or HMIs). This upgrading of the system becomes expensive and risky, in particular if applied to many controllers. In addition, flexibility and optimization of manufacturing plants do not merely require access to the raw data available on controllers (such as status variables and/or sensor data), but also to the computation and/or smart functionalities offered by the increasingly embedded intelligence. The software tools composing the ICT infrastructure of a factory could take advantage of the equipment/machine-embedded computation capabilities with the application of the appropriate functionalities. In classical automation systems, all these kinds of interactions, data elaborations and data deliveries are defined at the design stage of the automation software, considering the requirements available at that step. When changes to those specifications must be considered after the automation system is implemented, the automation software may need to be modified on many controllers, and this requires awareness of the details of how the systems were implemented. These kinds of modifications are rarely applicable in real productive scenarios, and this affects the upgrading and revamping of legacy systems, actually dampening innovation.</para>
</section>
</section>
<section class="lev1" id="sec12-3">
<title>12.3 A Digital Marketplace to Support Value Networks Reconfiguration in the Automation Domain</title>
<para>As highlighted in the previous section, a classic value chain, characterized by processes linked together in support of a value network, is not typical of the automation industry. In fact, the influence of the upstream companies on the final product, be it a machine, an entire line or a plant, is relevant. For this reason, value creation in the interdisciplinary automation industry cannot be represented as a chain: it is a value network where the same company can act both as a supplier and as a consumer of products and solutions. In this value network, services along the process steps are becoming more and more important, in particular when offered in connection with a physical product [5] (the so-called &#8220;product-related services&#8221;).</para>
<para>Digital platforms have been widely adopted in the last decade as instruments to support the diffusion of product-related services, reducing transaction costs and facilitating exchanges that otherwise would not have occurred [11]. The main value that platforms create is the reduction of the barriers of use for their customers and suppliers. A platform encourages producers and suppliers to provide content, removing hurdles and constraints that are part of traditional businesses. Beyond suppliers and producers, platforms create significant value also for consumers, providing ways to access products and services that could not even be imagined before. Platforms allow users to trust strangers: to enter their rooms (Airbnb), ride in their cars (Uber) and use their applications (phone and PC marketplaces). Platforms provide and guarantee users&#8217; reliability and quality: new sources of supply could introduce undesirable content if left unfiltered, but thanks to the reliability and quality mechanisms that platforms integrate, this issue becomes irrelevant.</para>
<para>The platform developed within the Daedalus project follows this trend by extending platform logics to the automation domain. This is done by exploiting as a foundation the new evolution of the IEC-61499 standard, which provides the technology on top of which additional value drivers for the automation complementors can be set up. The Standard allows: (i) the design and modelling of distributed control systems and the execution of applications on distributed resources (not necessarily PLCs), (ii) the creation of portable and interchangeable data and models and the re-utilization of code, (iii) the seamless management of the communication between the different function blocks of an application (independently of the hardware resource they run on) and (iv) the decoupling of the cyber elements of the CPS (its behavioral models) from the physical devices, so that they can reside (be designed, used and updated) in the Cloud, within the &#8220;cyber world&#8221;, where all the other software tools of the virtualized automation pyramid can access them and exploit their functionalities.</para>
<para>Among others, code modularity, reusability and reconfigurability of systems are the main features advertised as practical benefits of applying this Standard [12]. The final result is the ability to design more flexible and competitive automation systems, by providing the functionality to combine hardware components and software tools of different vendors within one system, as well as the reuse of code [13, 14].</para>
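<para>The event-driven, hardware-independent function-block model underlying these benefits can be illustrated with a minimal sketch. The following plain-Python illustration is an assumption-laden analogy, not the standard&#8217;s actual API: class, event and block names (FunctionBlock, REQ, CNF, IND) merely mimic IEC 61499 conventions.</para>

```python
# Minimal, illustrative sketch of IEC 61499-style event-driven function
# blocks. Names (FunctionBlock, connect, fire) are hypothetical, chosen to
# mimic the standard's conventions; they are not a real runtime's API.

class FunctionBlock:
    def __init__(self, name):
        self.name = name
        self.outputs = {}   # event name mapped to a list of (block, event) sinks
        self.data = {}      # data outputs, sampled by connected blocks

    def connect(self, out_event, sink_block, in_event):
        # Event connections are explicit, so an application network can be
        # re-wired across distributed resources without touching block internals.
        self.outputs.setdefault(out_event, []).append((sink_block, in_event))

    def emit(self, event):
        for sink, in_event in self.outputs.get(event, []):
            sink.fire(in_event, self.data)

    def fire(self, event, inputs):
        raise NotImplementedError


class Sensor(FunctionBlock):
    def fire(self, event, inputs):
        if event == "REQ":          # request event triggers the internal algorithm
            self.data["temp"] = 72.0
            self.emit("CNF")        # confirmation event propagates the data


class Alarm(FunctionBlock):
    triggered = False

    def fire(self, event, inputs):
        if event == "IND":
            self.triggered = inputs["temp"] > 70.0


# Wire the application: the same block network could, in principle, be
# deployed on one device or distributed over several compliant resources.
sensor, alarm = Sensor("S1"), Alarm("A1")
sensor.connect("CNF", alarm, "IND")
sensor.fire("REQ", {})
print(alarm.triggered)   # True
```

<para>The design point the sketch tries to convey is that behaviour lives inside the blocks while the wiring between them is pure data, which is what makes vendor-independent composition and code reuse plausible.</para>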
<para>The Daedalus platform is therefore meant to bring together automation complementors and give them the infrastructural support for the technologies, services and skills essential for systems improvement through CPS integration and orchestration [15]. This is done by opportunely adapting the functional model already successfully implemented in the IT world for mobile applications, developing a digital place (i.e. a marketplace) where automation-related applications and services will be shared among platform users. The digital infrastructure will also allow machine, equipment and component manufacturers to exploit a common platform on which to share updates and extended functionalities of their systems.</para>
<section class="lev2" id="sec12-3-1">
<title>12.3.1 Architectural Characteristics of the Digital Marketplace</title>
<para>The Digital Marketplace represents the reference interface to be adopted by developers/manufacturers of IEC 61499-compliant CPSs interested in selling their products via a multi-sided marketplace, one able to match the product offer with plant owners, equipment manufacturers or system integrators needing such solutions.</para>
<para>The proposed digital infrastructure takes advantage of the faceted nature of a CPS (an aggregation of hardware, software and digital twin) to make the decoupling between the mechatronic system, the control application and the digital twin the lever supporting the integration of third-party developers and service providers. Thanks to the nature of IEC 61499, the CPSs and mechatronic systems the platform relies on may indeed be controlled by different intelligences, potentially made by different developers, which represents a big opportunity for developers who want to create their own control applications. Therefore, the Digital Marketplace is not only a repository of CPSs, but also provides a set of services enabling developers and manufacturers to test and validate their own products. The Digital Twin is used in this case as the instrument to simulate the mechatronic system in a well-known and certified simulation environment, providing a digital way to validate the cyber part of a CPS.</para>
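<para>The idea of validating the cyber part of a CPS by exercising its digital twin against a protocol of objective criteria can be sketched as follows. This is a minimal Python illustration only; the protocol format, function names and the conveyor example are invented assumptions, not the actual Daedalus implementation.</para>

```python
# Illustrative sketch of digital-twin-based validation: every criterion in
# a test protocol is checked against the twin's simulated response, and a
# publishable report is produced. All names and data are hypothetical.

def validate(twin_step, protocol):
    """Check every objective criterion in the protocol and return a report
    that could be published on the marketplace together with the protocol,
    so that the tests remain repeatable."""
    results = []
    for case in protocol:
        observed = twin_step(case["stimulus"])     # one simulation step
        passed = case["tolerance"] >= abs(observed - case["expected"])
        results.append({"criterion": case["name"],
                        "observed": observed,
                        "passed": passed})
    return {"passed": all(r["passed"] for r in results), "results": results}

# A trivial twin of a conveyor: commanded speed is tracked with a 2% loss.
conveyor_twin = lambda speed: speed * 0.98

protocol = [
    {"name": "nominal speed", "stimulus": 1.0, "expected": 1.0, "tolerance": 0.05},
    {"name": "high speed",    "stimulus": 2.0, "expected": 2.0, "tolerance": 0.05},
]

report = validate(conveyor_twin, protocol)
print(report["passed"])   # True: 0.98 and 1.96 are both within tolerance
```

<para>Publishing both the report and the protocol, as the Validation Manager requires, is what makes a third party able to re-run exactly the same checks.</para>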
<para>The Digital Marketplace is a Web-based application that exposes a set of Web services allowing external components, such as portals, IDEs, applications, etc., to connect to the Digital Marketplace and exploit the functionalities it provides. At the architectural level, the marketplace is composed of the software components, exposed interfaces and interaction flows whose main elements are (<link linkend="F12-2">Figure <xref linkend="F12-2" remap="12.2"/></link>):</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The <emphasis>Persistency Layer:</emphasis> the fundamental layer on which the rest of the architecture is based, divided into two main components: the <emphasis>Repository</emphasis> and the <emphasis>Persistency Manager</emphasis>. The first represents the knowledge base of the Marketplace and is composed of the hosted CPSs, the <emphasis>Economic Data Model</emphasis>, meant to describe all economic aspects of the products (pricing strategies, fees, etc.), and the <emphasis>Semantic Data Model</emphasis>, meant to characterize the submitted products so that they can be accurately searched and identified. The <emphasis>Semantic Data Model</emphasis> is managed by the <emphasis>Product Manager</emphasis> component, which is also in charge of providing discovery functionalities for the hosted products. In particular, once a CPS has been successfully submitted and validated, it becomes available to customers who want to buy and use it. For this purpose, the Digital Marketplace provides a search engine based on a set of algorithms meant to return the best possible answer to a search query. The aim of this semantic search is to improve search accuracy by understanding the customer&#8217;s intent and the contextual meaning of terms as they appear in the searchable dataspace within the system, in order to generate more relevant results. Semantic search systems consider various aspects, including the context of the search, intent, variation of words, synonyms, meaning, generalized and specialized queries, concept matching and natural language queries, to provide relevant search results. The semantic search produces a result containing the list of suggested products whose characteristics answer the customer&#8217;s needs.</para></listitem>
</itemizedlist>
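<para>As an illustration only, the semantic search steps described above (synonym expansion, concept matching against the product's semantic description) can be sketched as follows; the synonym table, product fields and scoring are hypothetical, not the Marketplace's actual algorithms.</para>

```python
# Minimal sketch of a semantic product search as described in the text.
# Synonym sets, field names and the scoring rule are illustrative assumptions.

SYNONYMS = {
    "drive": {"drive", "motor", "actuator"},
    "vision": {"vision", "camera", "inspection"},
}

def expand(terms):
    """Expand query terms with known synonyms (the variation-of-words step)."""
    expanded = set()
    for t in terms:
        expanded |= SYNONYMS.get(t, {t})
    return expanded

def semantic_search(query, products):
    """Rank products by overlap between the expanded query and each
    product's semantic description (functionalities + compatibilities)."""
    q = expand(query.lower().split())
    scored = []
    for p in products:
        desc = set(p["functionalities"]) | set(p["compatibilities"])
        score = len(q & desc)
        if score:
            scored.append((score, p["name"]))
    return [name for _, name in sorted(scored, reverse=True)]
```

<para>A query for &#8220;drive control&#8221; would thus also match products describing a &#8220;motor&#8221; functionality, reflecting the intent- and synonym-awareness discussed above.</para>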
<fig id="F12-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-2">Figure <xref linkend="F12-2" remap="12.2"/></link></label>
<caption><para>Digital Marketplace Architecture.</para></caption>
<graphic xlink:href="graphics/ch12_fig002.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The <emphasis>Submission Manager</emphasis> is the software component in charge of regulating the CPS submission process, from the initial request, through validation, to payment. Both the payment service connector and the validation manager are directly connected with the product manager.</para>
<para>The submission manager exposes a submission interface, which allows the submitted product to be described in terms of: a general description of the product, the set of functionalities provided by the CPS, the set of compatibilities with existing mechatronic systems, and the pricing strategies by which the marketplace will manage contracts for the products&#8217; usage between the marketplace itself and the customers.</para></listitem>
</itemizedlist>
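<para>The four groups of information exposed by the submission interface could be modelled, for instance, as follows; class and field names are illustrative assumptions, not the Marketplace's actual schema.</para>

```python
# Hypothetical sketch of the submission-interface payload described above.
from dataclasses import dataclass, field

@dataclass
class Submission:
    description: str                                      # general product description
    functionalities: list = field(default_factory=list)   # what the CPS provides
    compatibilities: list = field(default_factory=list)   # supported mechatronic systems
    pricing_strategies: list = field(default_factory=list)  # e.g. one-off, subscription

    def is_complete(self):
        """A submission can enter the validation step only if fully described."""
        return bool(self.description and self.functionalities
                    and self.pricing_strategies)
```
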

<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The <emphasis>Validation Manager</emphasis> aims to validate all submitted products in terms of the functionalities they provide. This component manages the validation process, which requires simulating or testing the Digital Twin of the submitted CPS in a simulation/testing environment, properly built by the certifier, where both the context of execution and the CPS&#8217;s operations are replicated. The validation process follows a well-defined protocol based on objective criteria, aimed at verifying whether the CPS specifications/functionalities are satisfied under certain conditions. In order to guarantee the repeatability of the tests, the CPS tester must publish in the Digital Marketplace the testing results accompanied by the applied testing protocol.</para></listitem>
<listitem><para>A further high-level component of the Digital Marketplace is the <emphasis>User Profiling Manager</emphasis>, which is in charge of managing user profiles in terms of roles, authentication and authorization.</para></listitem>
</itemizedlist>
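<para>The validation protocol sketched above, where each declared functionality is exercised against an objective pass/fail criterion and the results are published for repeatability, could look as follows; the interfaces are assumptions, not the project's actual API.</para>

```python
# Illustrative sketch of the validation protocol: each declared
# functionality of the submitted CPS is exercised on its digital twin
# against an objective criterion (here simply: the test result is truthy).

def validate(declared_functionalities, run_test):
    """run_test(functionality) -> measured outcome for that functionality.
    Returns a publishable report (results per functionality + overall verdict),
    so that tests remain repeatable alongside the applied protocol."""
    report = {}
    for func in declared_functionalities:
        report[func] = bool(run_test(func))
    return {"passed": all(report.values()), "results": report}
```
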
<para>In order to transform the described architecture into a functional marketplace, an overall data model has therefore been developed, encompassing both the digital integration of all technological elements of the project and the definition of revenue creation mechanisms. The basic idea of this model is to provide a set of technical functional specifications (using UML diagrams) covering the design of all the mechanisms needed to guarantee the economic and technical aspects behind the Digital Marketplace.</para>
<para>The <emphasis>Digital Marketplace data model</emphasis> (<link linkend="F12-3">Figure <xref linkend="F12-3" remap="12.3"/></link>) aims to cover, on the one hand, all the economic aspects of the products in terms of prices, contracts, etc. and, on the other hand, a detailed description of the hosted CPS.</para>
<para>The <emphasis>Digital Marketplace data model</emphasis> aims to describe an automation application not only in terms of &#8220;what a certain automation application does&#8221; but also of how it does it. The design of the data model aims to fully characterize a product of the ecosystem so that it can be accurately searched and identified. For that reason, the creation of the data model considered aspects such as exposed functionalities, compatibilities, specifications, the meaning of the application&#8217;s I/O, application extensibility, the description of the logics that the automation application provides and the openness of its source code.</para>
<fig id="F12-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-3">Figure <xref linkend="F12-3" remap="12.3"/></link></label>
<caption><para>Digital Marketplace Data Model.</para></caption>
<graphic xlink:href="graphics/ch12_fig003.jpg"/>
</fig>
<para>The designed data model has been divided into five sections, each corresponding to one of the five packages of the structure presented in the figure above:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para><emphasis role="strong">User Characterization package:</emphasis> it contains all data entities related to user description and characterization. This part of the data model deals with the representation of the User, be it a developer (Developer class), a manufacturer (Manufacturer class) or a customer (Customer class), and all the related information.</para></listitem>

<listitem><para><emphasis role="strong">Product Description package:</emphasis> it contains the data objects needed to describe the products (hardware, applications and services) hosted by the Marketplace. This package groups the set of entities needed to formalize the data structure describing the hosted products in terms of features, possible relationships with other products, product contract configurability, product validation and certification.</para></listitem>

<listitem><para><emphasis role="strong">Contract Definition package:</emphasis> it contains all the entities needed to formalize the possible configuration options of the contracts regulating the economic aspects of product usage between the Marketplace and the customers.</para></listitem>

<listitem><para><emphasis role="strong">Validation and Certification package:</emphasis> this part of the data model is dedicated to formalizing the entities meant to support the validation of the submitted product and its optional certification.</para></listitem>
</orderedlist>
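<para>Purely as an illustration of how the packages above relate to one another, the data model could be mirrored in code as follows; all class and attribute names are hypothetical, since the actual model is defined by the project's UML diagrams.</para>

```python
# Hypothetical Python mirror of the data-model packages listed above.
from dataclasses import dataclass, field

@dataclass
class User:                     # User Characterization package
    name: str
    role: str                   # "developer", "manufacturer" or "customer"

@dataclass
class Contract:                 # Contract Definition package
    pricing_strategy: str       # e.g. "one-off", "pay-per-use"
    fee_percent: float          # marketplace share on each transaction

@dataclass
class Validation:               # Validation and Certification package
    protocol: str
    passed: bool
    certified: bool = False     # certification is optional

@dataclass
class Product:                  # Product Description package
    name: str
    owner: User
    contract: Contract
    validation: Validation = None                 # filled after validation
    related: list = field(default_factory=list)   # links to other products
```
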
<para>The validation phase is to be understood as part of the product submission process, where, according to the terms of the contract between the developer/manufacturer and the Marketplace, the submitted product undergoes a validation of the features it provides. In particular, validation should be considered a specific service provided by the Marketplace and delivered by a validation service provider.</para>
</section>
<section class="lev2" id="sec12-3-2">
<title>12.3.2 Value Exchange between the Digital Platform and Its Complementors</title>
<para>If the marketplace described in the previous section is the technological infrastructure supporting the exchange of value (products, money, services) among automation stakeholders, the data model underpinning it provides the logical constructs enabling the deployment of the rules governing ecosystem exchanges. The business model characterizing the instantiation of the Digital Marketplace is ultimately the description of how these rules are managed and how the cost/revenue structure of the marketplace is arranged [16]. The magnitude of the cost/revenue structure behind value exchange among platform complementors is strictly dependent on the specific implementation scenario that the marketplace will assume (type of platform owner, network of existing suppliers involved, approach to system integrators, etc.). Therefore, the economic dimensions required to assure the profitability of the whole ecosystem have to be defined on a case-by-case basis, in accordance with the specific implementation scenario and the specific business case. The marketplace dynamics driving the exchange of value among complementors and, more generally, the type of transactions it can enable, can however be generalized and discussed without referring to a specific business case.</para>
<para>Four sources of value are at the base of the marketplace dynamics:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para><emphasis>Money and credits:</emphasis> this is the most common form of value, exchanged by customers and suppliers in return for the goods and services delivered. As is usual, the Marketplace builds its profitability on these transactions.</para></listitem>

<listitem><para><emphasis>Goods and services:</emphasis> as anticipated earlier, the Marketplace supports the trading of IEC 61499-compliant CPSs (aggregations of hardware, software and digital twin), exploiting the independent nature of each CPS component to extend the number of elements that can be traded. Goods and services are therefore hardware components; the related software control part (developed by the company or by application developers); software applications that can support the integration of hardware components across lines and/or the integration of CPSs with IT components at higher levels of the automation pyramid; and services provided by third parties related to the deployment of CPSs (e.g. integration of simulation services supporting software application testing and validation).</para></listitem>

<listitem><para><emphasis>Information:</emphasis> the Marketplace is expected to host supporting material for companies/system integrators intending to integrate IEC 61499 technologies into their business and for developers approaching IEC 61499 programming, together with the related software development kit (SDK) supporting software development.</para></listitem>

<listitem><para><emphasis>Intangible value</emphasis>: in order to support customers in selecting the hardware and/or software components and services to be deployed, the marketplace supports the delivery of intangible value across each transaction in the form of evaluations of the delivered products/services. The customer can therefore rely on a set of credentials of the supplier, represented by the evaluations of its work provided by other customers.</para></listitem>
</orderedlist>
<para>In <link linkend="F12-4">Figure <xref linkend="F12-4" remap="12.4"/></link>, for each complementor, the main exchanges supported by the marketplace have therefore been highlighted, with arrows representing the forms of value described above. The direction of each arrow shows whether the value is taken from the platform, delivered to it, or both. The diagram also summarizes the impact on the value proposition that the platform supports (further discussed in &#167;12.3.3).</para>
<para>To describe the main interactions occurring between the marketplace and its complementors, the complementors have been grouped into four categories of stakeholders (Table 12.1): customers, application developers, service providers and hardware developers. The following table presents the mapping between the proposed categories and the overall set of Marketplace complementors. Some of them can act both as providers of the products/services delivered by the marketplace and as customers.</para>
<section class="lev3" id="sec12-3-2-1">
<title>12.3.2.1 Customers</title>
<para>The main relationships that customers have with the marketplace are: the purchase of products/services, agreements with product/service providers mediated by the marketplace, and the rating of the delivered products/services. To this end, customers start their interaction with the marketplace by performing a registration that allows them to store and retrieve data related to their buying experience. By browsing the hardware and software catalogues, customers can select the product/service they are interested in and visualize the software/hardware products or services associated with it.</para>
<fig id="F12-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-4">Figure <xref linkend="F12-4" remap="12.4"/></link></label>
<caption><para>High-level definition of marketplace interactions with main Daedalus stakeholders.</para></caption>
<graphic xlink:href="graphics/ch12_fig004.jpg"/>
</fig>
<table-wrap position="float" id="T12-1">
<label><link linkend="T12-1">Table <xref linkend="T12-1" remap="12.1"/></link></label>
<caption><para>Mapping of stakeholders on Marketplace ecosystem</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="group">
<thead>
<tr><th valign="top" align="left">Type of Stakeholder</th><th valign="top" align="left">Mapping on Marketplace Ecosystem</th></tr>
</thead>
<tbody>
<tr><td valign="top" align="left">Customer</td><td valign="top" align="left">Plant owners; system integrators; equipment/machine developers; component suppliers</td></tr>
<tr><td valign="top" align="left">Application developer</td><td valign="top" align="left">Application developers; system integrators; equipment machine developers</td></tr>
<tr><td valign="top" align="left">Service providers</td><td valign="top" align="left">Service providers</td></tr>
<tr><td valign="top" align="left">Hardware developers</td><td valign="top" align="left">Equipment/machine developers; component suppliers</td></tr>
</tbody>
</table>
</table-wrap>
<para>Each product is indeed connected with the specific software/hardware/services it can operate with (e.g. when browsing a sensor, the marketplace suggests the applications supported by that hardware and the related integration services).</para>
<para>The selection of a product enables the customer to access the contractual area, where the contract between the customer and the marketplace is agreed, and to invoke the payment service. In its interactions with the marketplace, the customer is not charged for the services provided: it is always the product/service provider that pays a percentage fee.</para>
<para>Once the purchase is completed, the customer can exit the marketplace and will then receive the products/services according to the modalities agreed in the contract. Customers will also receive notifications about software updates, in order to improve the customer experience and support the maintenance of up-to-date hardware/software functionalities.</para>
</section>
<section class="lev3" id="sec12-3-2-2">
<title>12.3.2.2 Hardware developers</title>
<para>Hardware developers are a category of marketplace end-users that is very important to its deployment; indeed, it is expected that in the first stages of the marketplace&#8217;s life, hardware developers will provide both the hardware and the software applications that run on it. They will thus be the first category of stakeholders providing content to the marketplace.</para>
<para>Focusing on the hardware alone, the marketplace will give hardware manufacturers the possibility to store product catalogues, providing facilities to define the characteristics, specifications and costs of their products. As for customers, the first access will require a registration, giving them access to a dedicated page where they can set up the characteristics of their account. In parallel, the marketplace will also provide the infrastructure for the definition of contracts with customers, leaving manufacturers the freedom, within certain constraints set by the marketplace, to configure the contracts governing the relationship with the customer (i.e. cost of the product, type of business model, purchase or product-as-a-service, etc.).</para>
<para>The hardware manufacturer will be billed by the marketplace at two levels: on a first tier, paying a fixed cost for the hosting of the product catalogue and, on a second tier, a percentage fee on each transaction with customers. The economic dimensions of both the fixed cost and the transaction fee will be decided according to the specific instantiation of the marketplace.</para>
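<para>The two-tier billing just described can be sketched with a simple calculation; the hosting cost and fee percentage used here are hypothetical figures, since both are set per instantiation of the marketplace.</para>

```python
# Sketch of the two-tier manufacturer billing described above:
# a fixed catalogue-hosting cost plus a percentage fee on each transaction.
# The concrete amounts are assumptions, decided per marketplace instantiation.

def manufacturer_bill(hosting_cost, fee_percent, transactions):
    """Total amount billed to the manufacturer for a billing period."""
    transaction_fees = sum(t * fee_percent / 100.0 for t in transactions)
    return hosting_cost + transaction_fees
```

<para>For example, with a 100&#8364; hosting cost and a 5% fee, two sales of 200&#8364; and 300&#8364; would yield a total bill of 125&#8364;.</para>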
</section>
<section class="lev3" id="sec12-3-2-3">
<title>12.3.2.3 Application developers</title>
<para>Application developers will find on the marketplace the infrastructure to host their applications and sell them. Similarly to hardware developers, the marketplace will give them the facilities to define the characteristics, specifications and costs of their products. Moreover, considering the type of product sold, the marketplace will also provide specific contract templates supporting the characteristics of an application sale (in-app purchase, period-based licence, etc.).</para>
<para>The marketplace will also be the place where developers can find, in dedicated spaces, the quality procedures and the SDK required to develop applications compliant with the ecosystem. These services are provided at no additional cost to developers.</para>
</section>
<section class="lev3" id="sec12-3-2-4">
<title>12.3.2.4 Service providers</title>
<para>Service providers are meant to benefit from the marketplace through the increased visibility of the services they provide. Similarly to other stakeholders, the marketplace gives them the facilities to describe and host their services and to set up contracts related to service provision. In exchange, the marketplace charges them a percentage fee on the services sold. The marketplace also generates revenue by giving advertising priority to those service providers paying an additional fee.</para>
</section>
</section>
<section class="lev2" id="sec12-3-3">
<title>12.3.3 Opportunities of Exploitation of an Automation Platform</title>
<para>As already mentioned in the previous sections, the creation of a platform-based automation ecosystem is expected to have a relevant impact on the way automation complementors manage their business. A platform relying on IEC 61499, to support transactions in an ecosystem as complex as that of automation, should be completely open and hardware-independent, enabling full interoperability and much deeper portability and reusability of application developments. The specific deployment of technologies and ecosystem should first be targeted at the most innovative and pioneering SMEs and large enterprises in Europe, already oriented towards accepting a decentralized approach to automation. These will be the first players able to adapt their business model in order to be successful in a platform-based automation ecosystem. In the transition toward such an ecosystem, opportunities and challenges are clearly generated for all the automation stakeholders. For each of them, a brief description of the expected challenges is provided hereinafter.</para>
<section class="lev3" id="sec12-3-3-1">
<title>12.3.3.1 Opportunities for system integrators</title>
<para>Among the automation players, System Integrators (SIs) are one of the stakeholders closest to the customers. Considering their active role in understanding customers&#8217; functional requirements to propose ready-to-use solutions, they have a direct vision of their main needs. In this context, SIs are realizing, more than any other automation player, that customers require more flexible and reconfigurable solutions, capable of increasing production performance and providing more advanced functionalities.</para>
<para>On the other hand, in the current automation environment, SIs have a marginal role in adding value for customers and have limited technological competences. They usually integrate different components, equipment and machines to provide functional and ready-to-use solutions. They mainly perform the operative part, which does not allow them to fully cover customer needs, but only to satisfy their functional requirements.</para>
<para>By adhering to a platform-based ecosystem, SIs will no longer be simple assemblers: they will have the opportunity to add relevant value to the automation solution. This can be done by developing SW for their customers and proposing dedicated solutions that add functionalities, improve performance and manage orchestration and distributed architectures across the different factory levels. SIs therefore have the opportunity not only to deliver functional solutions meeting customer requirements, but also to add functionalities to the systems, increasing the value of the overall proposed solution. Moreover, thanks to the reduced hardware dependence, code re-usability and modularity achieved through the adoption of IEC 61499 logics, SW use could be extended to different contexts and different customer applications.</para>
<para>The first opportunity for SIs will be the update of existing legacy automation systems. In the first adoption of platform principles, CPS-izers (systems meant to act as adapters between legacy and IEC 61499 technologies) play a fundamental role, allowing SIs to transform solutions tied to old legacy systems into compliant ones. The higher integrability of components, equipment and machines will allow SIs to reduce the effort needed to provide ready-to-use solutions and to ease the integration of new functionalities by developing dedicated SW. This becomes a relevant activity that is expected to be mainly internalized by SIs. Thanks to the platform and the related marketplace, they will have the opportunity to re-use libraries and algorithms developed by third-party developers to improve or speed up the development of their SW solutions.</para>
<para>All these elements are meant to increase the value proposed to customers, allowing SIs to extend solutions&#8217; functionalities and to dedicate more resources to the development of high-level applications and SW, while reducing the effort and resources spent on integrating components, equipment and machines and on programming basic functions.</para>
<para>It should be considered that SIs are the players that can achieve the highest benefits from a platform-based automation ecosystem, but they are also required to make the greatest transition effort. In this kind of domain, the SI becomes a more advanced player, requiring greater technological competences. No longer simple consultants, SIs also become product (SW) developers. It is expected that SIs will expand their know-how and competences from the low operative level to higher ones, with the objective of providing more added value to their customers, not only through integration, but also through the improvement of performance and functionalities, maintained throughout the whole solution&#8217;s life cycle.</para>
</section>
<section class="lev3" id="sec12-3-3-2">
<title>12.3.3.2 Opportunities for equipment and machines builders</title>
<para>Equipment and machine builders (E&amp;MBs), by adhering to the ecosystem and adopting the related technologies, have the opportunity to release more advanced products, able to work in flexible and orchestrated production systems. E&amp;MBs can produce complex manufacturing systems as aggregations of CPSs, focusing their effort on the assembly and the orchestration of the automation tasks of these composing elements. The adoption of platform technologies can allow an E&amp;MB to develop products that take advantage of all the IEC 61499-compliant components (control software, applications and services).</para>
<para>For E&amp;MBs, the platform will become a relevant resource, being one of the structural technologies on which their products will be designed and produced. The management of this resource should be performed with particular attention, in order to exploit all the possible benefits and to maximize product performance.</para>
</section>
<section class="lev3" id="sec12-3-3-3">
<title>12.3.3.3 Opportunities for components suppliers</title>
<para>The platform-based ecosystem also generates opportunities for component suppliers (CSs). They must become capable of releasing more functional, intelligent and independent components. Components can be designed and developed as more complex elements (such as CPSs), already equipped with on-board distributed intelligence. A CS should not focus only on reliability, quality, price and lead time: it should innovate its products by adding functionalities. CSs will therefore have the opportunity to provide not only hardware but also SW, adding value to their solutions, increasing revenue opportunities and creating a closer relationship with their customers.</para>
</section>
<section class="lev3" id="sec12-3-3-4">
<title>12.3.3.4 Opportunities for automation solutions providers</title>
<para>Thanks to the extended functionalities it brings, the IEC 61499 standard has the potential to establish itself as a competitor to IEC 61131, currently largely adopted by Programmable Logic Controllers. If this situation actually materializes, Automation Solution Providers (ASPs) are expected to adopt one of two behaviors: (i) they can adopt the IEC 61499 standard, implementing their own &#8220;dialect&#8221; and tools to create their own IEC 61499 automation ecosystem, or (ii) they can try to stop its adoption, taking advantage of their position of strength, which ties customers to their legacy solutions.</para>
</section>
<section class="lev3" id="sec12-3-3-5">
<title>12.3.3.5 Opportunities for new players</title>
<para>The platform-based ecosystem, and in particular the marketplace, creates opportunities for all those ICT companies and software developers that aim to do business in the automation market. Application developers (ADs) will be a new player in this environment, one that arises thanks to the transition to a platform-based business model.</para>
<para>These players will have the opportunity to develop compliant software for general-purpose usage scenarios, customizable by CSs, E&amp;MBs, SIs and/or customers for their specific projects. Through distributed intelligence, software will acquire a more relevant role, through which customers can increase the functionality and performance of equipment, machines, lines and plants, obtain data and/or perform analyses. Added value is provided by guaranteeing special functionalities based on specific competences, quality of implementation and the performance achieved.</para>
</section>
<section class="lev3" id="sec12-3-3-6">
<title>12.3.3.6 Service providers</title>
<para>Service providers (SPs) provide services and support to plant owners (POs) and SIs. Exploiting the benefits of IEC 61499, they have the possibility to develop a wide range of new services aimed at creating a digital representation of the system, performing simulations, analyses and application tests, and/or storing data. The described platform can become the environment where these services are made visible and brought to the market. In this sense, their business model is similar to that of ADs, but instead of providing SW, SPs provide services to be integrated in the design and deployment of manufacturing lines.</para>
</section>
</section></section>
<section class="lev1" id="sec12-4">
<title>12.4 Conclusions</title>
<para>In the last decades, the automation domain has been characterized by an ecosystem ruled by legacy technologies, where the dominant role of the chosen hardware solutions strongly constrains the reusability, upgradability and orchestration of manufacturing systems. This situation has led to the rise of important barriers to the shift towards competing or optimized solutions, limiting the potential for upgrades and the flexibility of the systems.</para>
<para>In this context, the digital platform developed within the DAEDALUS project, relying on the extended functionalities provided by the upgrade and deployment of IEC 61499 in the CPS domain, stands out as a ground-breaking platform able to revolutionize the whole approach to how automation systems are conceived, designed and set up. The infrastructure developed is therefore the first step towards the challenge of developing a platform able to foster the creation and deployment of more efficient, flexible and orchestrated production systems that are easy to integrate, monitor and update. The proposed platform is able to manage CPSs in their multifaceted sense (HW, SW, Digital Twin), reaching different (even complementary) customers and offering developers new opportunities to create their own control applications and to exploit validation services thanks to the hosted digital twin. As a consequence, the platform drives a reconfiguration of the automation value network, with the aim of resolving the main issues currently faced by the sector and extending the value drivers that characterize its interactions.</para>
<para>The next steps to be carried out in order to create a digital platform meeting the needs of the current industrial markets (customers) are envisaged as: (i) the creation of specific mechanisms and procedures, software interfaces and incentivizing systems, all supporting the wide adoption of the platform; (ii) further elaborating the methodologies and outcomes of the processes and services supporting CPS validation; (iii) integrating in the platform value-added services for customers (e.g. performance assessment of the machines, management of manufacturing systems, elaboration of manufacturing data for predictive maintenance forecasting); and (iv) implementing a business development strategy intended to actually deploy in the market the logics proposed by the Digital Marketplace.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>The work hereby described has been achieved within the EU H2020 project DAEDALUS, which has received funding from the European Union&#8217;s Horizon 2020 research and innovation programme under grant agreement No. 723248.</para></section>
<section class="lev1" id="sec12-5">
<title>References</title>
<para>[1] W. B. Arthur, <emphasis>The Nature of Technology - What It Is and How It Evolves</emphasis>, 2011.</para>
<para>[2] K. C. Mussi, F.B., Canuto, &#8220;Percep&#231;&#227;o dos usu&#225;rios sobre os atributos de uma inova&#231;&#227;o,&#8221; <emphasis>REGE Rev. Gest&#227;o</emphasis>, vol. 15, pp. 17&#8211;30, 2008.</para>
<para>[3] R. da S. Pereira, I. D. Franco, I. C. dos Santos, and A. M. Vieira, &#8220;Ensino de inova&#231;&#227;o na forma&#231;&#227;o do administrador brasileiro: contribui&#231;&#245;es para gestores de curso,&#8221; <emphasis>Adm. Ensino e Pesqui</emphasis>., vol. 16, no. 1, p. 101, March 2015.</para>
<para>[4] A. Bharadwaj, O. A. El Sawy, P. A. Pavlou, and N. Venkatraman, &#8220;Digital Business Strategy: Toward a Next generation of insights,&#8221; vol. 37, no. 2, pp. 471&#8211;482, 2013.</para>
<para>[5] M. M&#252;ller-Klier, &#8220;Value Chains in the Automation Industry.&#8221;</para>
<para>[6] R. Depietro, E. Wiarda, and M. Fleischer, &#8220;The context for change: Organization, technology and environment,&#8221; in <emphasis>The processes of technological innovation</emphasis>, Lexington, Mass, pp. 151&#8211;175, 1990.</para>
<para>[7] J. Tidd, &#8220;Innovation management in context: environment, organization and performance,&#8221; <emphasis>Int. J. Manag. Rev</emphasis>., vol. 3, no. 3, pp. 169&#8211;183, September 2001.</para>
<para>[8] J. Tidd, J. Bessant, and K. Pavitt, <emphasis>Integrating Technological, Market and Organizational Change</emphasis>. John Wiley &amp; Sons Ltd, 1997.</para>
<para>[9] Z. Arifin and Frmanzah, &#8220;The Effect of Dynamic Capability to Technology Adoption and its Determinant Factors for Improving Firm&#8217;s Performance; Toward a Conceptual Model,&#8221; <emphasis>Procedia - Soc. Behav. Sci</emphasis>., vol. 207, pp. 786&#8211;796, 2015.</para>
<para>[10] Mckinsey&amp;Company, &#8220;How to succeed: Strategic options for European Machinery,&#8221; 2016.</para>
<para>[11] P. Mu&#241;oz and B. Cohen, &#8220;Mapping out the sharing economy: A configurational approach to sharing business modeling,&#8221; <emphasis>Technol. Forecast. Soc. Change</emphasis>, 2017.</para>
<para>[12] V. Vyatkin, &#8220;IEC 61499 as Enabler of Distributed and Intelligent Automation: State-of-the-Art Review,&#8221; <emphasis>IEEE Trans. Ind. Informatics</emphasis>, vol. 7, no. 4, pp. 768&#8211;781, November 2011.</para>
<para>[13] M. Wenger, R. Hametner, and A. Zoitl, &#8220;IEC 61131-3 control applications vs. control applications transformed in IEC 61499,&#8221; <emphasis>IFAC Proc. Vol</emphasis>., vol. 43, no. 4, pp. 30&#8211;35, 2010.</para>
<para>[14] T. Bangemann, M. Riedl, M. Thron, and C. Diedrich, &#8220;Integration of Classical Components Into Industrial Cyber&#8211;Physical Systems,&#8221; <emphasis>Proc. IEEE</emphasis>, vol. 104, no. 5, pp. 947&#8211;959, May 2016.</para>
<para>[15] G. Landolfi, A. Barni, S. Menato, F. A. Cavadini, D. Rovere, and G. Dal Maso, &#8220;Design of a multi-sided platform supporting CPS deployment in the automation market,&#8221; in <emphasis>2018 IEEE Industrial Cyber-Physical Systems (ICPS)</emphasis>, pp. 684&#8211;689, 2018.</para>
<para>[16] A. Barni, E. Montini, S. Menato, and M. Sorlini, &#8220;Integrating agent based simulation in the design of multi-sided platform business model: a methodological approach,&#8221; in <emphasis>2018 IEEE International Conference on Engineering, Technology and Innovation/International Technology Management Conference (ICE/ITMC)</emphasis>, 2018.</para>
<para>[17] A. Gawer, &#8220;Platform Dynamics and Strategies: From Products to Services,&#8221; in <emphasis>Platforms, Markets and Innovation</emphasis>, Edward Elgar Publishing.</para>
<para>[18] A. Gawer and M. Cusumano, &#8220;Industry Platforms and Ecosystem Innovation,&#8221; <emphasis>J. Prod. Innov. Manag</emphasis>., vol. 31, no. 3, pp. 417&#8211;433, 2014.</para>
</section>
</chapter>

<chapter class="chapter" id="ch013" label="13" xreflabel="13">
<title>Migration Strategies towards the Digital Manufacturing Automation</title>
<para><emphasis role="strong">Ambra Cal&#224;<superscript>1</superscript>, Filippo Boschi<superscript>2</superscript>, Paola Maria Fantini<superscript>2</superscript>, Arndt L&#252;der<superscript>3</superscript>, Marco Taisch<superscript>2</superscript> and J&#252;rgen Elger<superscript>1</superscript></emphasis></para>
<para><superscript>1</superscript> Siemens AG Corporate Technology, Erlangen, Germany</para>
<para><superscript>2</superscript> Politecnico di Milano, Milan, Italy</para>
<para><superscript>3</superscript> Otto-von-Guericke University Magdeburg, Magdeburg, Germany</para>
<para>E-mail: ambra.cala@siemens.com; filippo.boschi@polimi.it; paola.fantini@polimi.it; arndt.lueder@ovgu.de; marco.taisch@polimi.it; juergen.elger@siemens.com</para>
<para>Today, industries face new market demands and customer requirements for higher product personalization, without jeopardizing the low production costs achieved through mass production. Jointly pursuing personalization and cost competitiveness is quite difficult for manufacturers whose traditional production systems are based on centralized automation architectures. Centralized control structures, in fact, do not guarantee the system adaptability and flexibility required to achieve increasing product variety at shorter time-to-market. To avoid business failure, industries need to quickly adapt their production systems and migrate towards novel production systems characterized by digitalization and robotization.</para>
<para>The objective of this chapter is to illustrate a methodological approach to migration that supports decision makers in addressing the transformation. The approach encompasses the initial assessment of the current level of manufacturing digital maturity, the analysis of priorities based on the business strategy, and the development of a migration strategy. Specifically, this chapter presents an innovative holistic approach to develop a migration strategy towards the digital automation paradigm with the support of a set of best practices and tools. The application of the approach is illustrated through an industrial case.</para>
<section class="lev1" id="sec13-1">
<title>13.1 Introduction</title>
<para>In recent years, a great deal of research has been devoted to the improvement of control automation architectures for production systems. The latest advances in manufacturing technologies converge under the Industry 4.0 paradigm to transform and adapt traditional manufacturing processes, in terms of automation concepts and architectures, towards the fourth industrial revolution [1]. The increasing frequency of new product introductions and new technological developments forces industries to become more competitive, efficient and productive in order to meet volatile market demands and customer requirements.</para>
<para>The Industry 4.0 initiative promotes the digitalization of manufacturing in order to enable a prompt reaction to continuously changing requirements [2]. The envisioned digitalization is supported by innovative information and communication technologies (ICT), Cyber-Physical Systems (CPS), Internet of Things (IoT), Cloud and Edge Computing (EC), and intelligent robots. The control architecture is a key factor for the final performance of these application systems [3]. Therefore, new automation architectures are required to enhance flexibility and scalability, enabling the integration of modern IT technologies and, consequently, increasing efficiency and production performance.</para>
<para>For this purpose, many decentralized control architectures have been developed within different research projects in recent years, highlighting the benefits of decentralized automation in terms of flexibility and reconfigurability of heterogeneous devices [4]. However, after years of research, the reality today shows the dominance of production systems based on the traditional approach, i.e. the automation pyramid based on the ISA-95 standard, characterized by a hierarchical and centralized control structure.</para>
<para>The difficulty in adopting new architectural solutions can be summarized in two main problems:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Enterprises that are reluctant to make the decision to change;</para></listitem>
<listitem><para>Projects that fail during the implementation or take-up.</para></listitem>
</itemizedlist>
<para>Manufacturers are reluctant to adopt decentralized manufacturing technologies because of the large investments already made in their current production facilities, which have long lifetimes and therefore undergo only sporadic and limited changes. In addition, methods and guidelines on how to integrate, customize, and maintain the new technologies within the existing ICT infrastructure are unclear and often incomplete. Nevertheless, with the advent of future technologies and with current market requirements, changes during the whole life cycle of devices and services are necessary.</para>
<para>These changes lead to the transformation of the existing production systems and their migration towards the digital manufacturing of the Industry 4.0 paradigm. The term &#8220;migration&#8221; refers to the process of changing a system from an existing condition towards a desired one. Here, specifically, migration is considered as a progressive transformation that moves the existing production system towards digitalization. Migration strategies are thus essential to support the implementation of digital technologies in the manufacturing sector and the decentralization of the automation pyramid, in order to achieve a flexible manufacturing environment based on rapid and seamless processes in response to new operational and business demands.</para>
<para>Aligned to this vision, the aim of the EU-funded project FAR-EDGE (Factory Automation Edge Computing Operating System Reference Implementation) [5] is twofold: it intends not only to virtualize the conventional automation pyramid, by combining EC, CPS and IoT technologies, but also to mitigate manufacturers&#8217; conservatism in adopting these new technologies in their existing infrastructures. To this end, it aims at providing them with roadmaps and strategies that guarantee a smooth and low-risk transition towards a decentralized automation control architecture based on FAR-EDGE solutions. Indeed, migration strategies are expected to play an essential role in the success of the envisioned virtualized automation infrastructure. To this end, FAR-EDGE is studying and providing smooth migration path options from legacy centralized architectures to the emerging FAR-EDGE-based ones.</para>
<para>This chapter aims at describing the migration approach developed within the FAR-EDGE project. After this brief introduction, the state-of-the-art migration processes, change management approaches and maturity models are presented in Section 13.2, providing the founding principles of the FAR-EDGE migration approach presented in Section 13.3. An industrial use case application scenario is presented in Section 13.4, which is assessed and analyzed in Section 13.5, providing an example of migration path alternatives. Finally, Section 13.6 gives an outlook and presents the main conclusions.</para>
</section>
<section class="lev1" id="sec13-2">
<title>13.2 Review of the State-of-the-Art Approaches</title>
<section class="lev2" id="sec13-2-1">
<title>13.2.1 Migration Processes to Distributed Architectures</title>
<para>Several migration processes that allow a smooth migration between different systems have been developed in other projects. The work of the IMC-AESOP project [6] focused mainly on the implementation of a Service Oriented Architecture (SOA) to turn existing systems into distributed and interoperable ones. The migration of systems towards SOA comprises four major steps: Initiation, Configuration, Data Processing, and Control Execution. This migration process uses mediator technology to communicate with the legacy systems, i.e. the old systems. The four steps aim at maintaining the perception of conformity between the interfaces of the several systems.</para>
<para>Similarly, the SOAMIG project [7] developed a migration process towards SOA, conceived as an iterative process and represented by four phases: Preparation, Conceptualization, Migration and Transition. This migration process aims at a single specific target solution, which is derived step by step.</para>
<para>The SMART project [8] analyzed legacy systems by determining whether they can be &#8220;linked&#8221; to SOA. SMART is an iterative process of six steps: Establish Migration Context, Define Candidate Services, Describe Existing Capability, Describe Target SOA Environment, Analyze the Gap, and Develop Migration Strategy. This migration process is mostly used for migrating legacy Information Technology (IT) systems to SOA.</para>
<para>MASHUP [9] is another technique for migrating legacy systems to service-oriented computing. It proposes a six-step process: Model, Analyze, Map and Identify, Design, Define, and Implement and Deploy. This technique is mainly used to overcome some SOA difficulties, such as Quality of Service.</para>
<para>Cloudstep [10] is a step-by-step decision process that supports the migration of legacy applications to the cloud, identifying and analyzing the factors that can influence the selection of the cloud solution as well as the migration tasks. It comprises nine activities: Define Organization Profile, Evaluate Organizational Constraints, Define Application Profile, Define Cloud Provider Profile, Evaluate Technical and/or Financial Constraints, Address Application Constraints, Change Cloud Provider, Define Migration Strategy, and Perform Migration.</para>
<para>The XIRUP [11] process aims at the modernization of component-based systems through an iterative approach. This method comprises four stages: Preliminary Evaluation, Understanding, Building, and Migration. The ultimate goal of the XIRUP process is to provide cost-effective solutions and tools for modernization.</para>
<para>The different migration processes found in the literature present some similarities, regardless of the domain and target of the migration. Generally, following a stepwise approach, first the legacy system and the target system are analyzed and the requirements defined; then the target system is developed; and finally the migration is defined and performed. Processes like SOAMIG and IMC-AESOP focus mainly on the technical constraints and characteristics of the migration, while SMART, MASHUP, and XIRUP also pay attention to business requirements and the involved stakeholders, and Cloudstep includes legal, administrative and organizational constraints. In addition, most of the described processes treat the migration iteratively, but only the XIRUP process considers the integration of possible new features after the successful validation of the migrated components.</para>
<para>The existing migration processes or methods are all target-based, taking into consideration only a specific goal, e.g. service-oriented architectures. While the described processes migrate and transform technologies only, it is now fundamental to start considering changing business paradigms. The implementation of a new business paradigm, in this case Industry 4.0, requires a migration process that allows for holistic and continuous improvement. A process that supports the lean approach of continuous improvement, adaptation to change and system innovation is the migration process proposed by Cal&#224; et al. [12] within the PERFoRM project, which constitutes the baseline for the migration strategy towards digital manufacturing automation presented in this chapter.</para>
</section>
<section class="lev2" id="sec13-2-2">
<title>13.2.2 Organizational Change Management</title>
<para>Architectures and information systems represent the backbone of enterprises, and their transformation is part of the comprehensive process of organizational change. There is a rich management literature addressing how to introduce, implement, and support changes that impact the role and work of people in organizations. In his seminal work, Lewin highlighted that social groups operate in a sort of equilibrium among contrasting interests and that any attempt to force a change may stimulate an increase in opposing forces [13]. Changes have implications for the employees, who, in most cases, show reactions such as concern, anxiety and uncertainty, which may develop into resistance [14]. In order to prevent and overcome resistance, Lewin proposed a three-step process: (i) unfreezing, (ii) moving, and (iii) freezing. The first step aims at destabilizing the equilibrium corresponding to the status quo, so that current behaviours become uncomfortable and can be discarded, i.e. unlearnt, opening up for new behaviours; in practice, unfreezing can be achieved by provoking some emotional feeling, such as anxiety about the survival of the business. The second step consists in a process of searching for more acceptable behaviours, in which individuals and groups progress in learning. The third step aims at consolidating the conditions of a new quasi-stationary equilibrium [15].</para>
<para>Lewin&#8217;s work, by providing insight into the mechanisms that rule human groups and operate within organizations, and by delivering guidance about change management strategies, opened the way for subsequent studies. In the last decades, several frameworks and approaches have been defined in order to successfully undertake transformation processes and overcome possible resistance. Starting from the analysis of why change efforts fail, Kotter [16] identified a sequence of eight steps for enacting changes in organizations: (i) creating a sense of urgency, e.g., by attracting attention to a potential downturn in performance or competitive advantage and discussing the dramatic implications of such a crisis and the timely opportunities to be grasped; (ii) building a powerful guiding coalition, i.e., forming a team of people with enough power, interest and capability to work together in leading the change effort; (iii) creating a vision, i.e., building a future scenario to direct the transformation; (iv) communicating the vision, including teaching by the example of the new behaviours of the guiding coalition; (v) empowering others to behave differently, also by changing the systems and the architectures; (vi) planning actions with short-term returns, i.e., limited changes that bring visible increases in performance and, through acknowledgment and rewarding practices, can be used as examples; (vii) consolidating improvements, developing policies and practices that reinforce the new behaviours; and (viii) institutionalizing new approaches, by structuring and sustaining the new behaviours. Another well-known framework for managing changes is the Prosci ADKAR Model [17], which suggests pursuing changes through a sequence of five steps corresponding to the initial letters of ADKAR: (i) awareness of the need for change; (ii) desire to support the change; (iii) knowledge about how to change; (iv) ability to demonstrate new behaviours and competencies; and (v) reinforcement to stabilize the change.</para>
<para>The focus of some researchers and practitioners has shifted from episodic to continuous change.</para>
<para>This type of approach includes the continuous improvement of Kaizen [18], with its three principles: (i) process orientation, as opposed to result orientation; (ii) improving and maintaining standards, as opposed to innovations that do not impact all practices and are not sustainable; and (iii) people orientation, as opposed to an involvement of employees limited to the higher levels of management.</para>
<para>The concept of a learning organization, capable of building, capturing, and mobilizing knowledge to adapt to a changing environment, was introduced by Senge in 1990 [19]. The basis for the development of a learning organization consists of five disciplines: (i) mental models, (ii) personal mastery, (iii) systems thinking, (iv) team learning, and (v) building a shared vision [20]. Other recent literature supports the theory of an organization that continuously changes through engaging and learning.</para>
<para>The case discussed in this chapter, the migration from conventional centralized automation (e.g., ISA-95) to distributed architectures for the digital shopfloor, concerns a major transformation in which the enterprise information systems play a crucial role in realizing the business vision and converting the strategy into change [21]. The theories and strategies of change management can thus provide some guidance about the path to be followed and the mistakes to be avoided during the migration. However, organizations participate in a process of continuous change through engagement and learning [22], which involves the continuous transformation and integration of Enterprise Information Systems [21]. Therefore, rather than targeting the final state of a successfully deployed digital automation model, the migration roadmap should aim at incorporating the further continuous transformation of distributed automation architectures into the continuous learning and improvement of the organization, in a never-ending process.</para>
</section>
<section class="lev2" id="sec13-2-3">
<title>13.2.3 Maturity Models</title>
<para>In order to understand what maturity models are, their basic concepts are introduced here. To this aim, it is appropriate to provide some definitions, since the notion of maturity is not interpreted in one and the same way [23].</para>
<para>Maturity can be defined as &#8220;<emphasis>the state of being complete, perfect or ready</emphasis>&#8221;. Complementing this definition, Maier et al. in 2012 [23] add another point of view, holding that maturity implies an evolutionary progress from an initial stage to a desired or normally occurring end stage [24]. This consideration, which stresses the process towards maturity, introduces another important concept, that of <emphasis>stages of growth</emphasis> or <emphasis>maturity levels</emphasis>.</para>
<para>The concept of stages of growth first appeared in the literature around the 1970s. In particular, the first authors to use these concepts were Nolan and Crosby in 1979 [25, 26]. Nolan published an article in which a maturity model is seen as a tool to assess which stage of growth an organization is in, assuming it evolves automatically over time, passing through all the stages thanks to improvements and learning effects [25]. Simultaneously, Crosby [26] proposed a maturity grid for the quality management process, as a tool that can be used to understand what is necessary to achieve a higher maturity level, if desired.</para>
<para>From this consideration, it is possible to state that, in the same year, two concepts of maturity model were proposed. On the one hand, Nolan proposed an &#8216;<emphasis>evolutionary model</emphasis>&#8217; that sees the stages of maturity as steps through which every company will improve; on the other hand, Crosby introduced the &#8220;<emphasis>evolutionist model</emphasis>&#8221; that considers maturity as a series of steps towards a progressively more complex or perfect version of the current status of a company.</para>
<para>Therefore, the literature offers no general and clear classification of maturity models, because of the different interpretations of the maturity concept, the different approaches with which the models (evolutionist/evolutionary) were conceived, and the different sectors in which they are applied. Nevertheless, Fraser et al. [27] presented a first clear classification by typology of maturity models. In their paper, they distinguish three typologies of maturity models: maturity grids, Likert-like questionnaires, and CMM-like models.</para>
<para>Maturity grids typically illustrate maturity levels in a simple, textual manner, structured in a matrix or a grid. As mentioned before, the first maturity grid was that of Crosby [26], and its main characteristic is that it does not specify what a particular process should look like. Maturity grids only identify some characteristics that any process and any enterprise should have in order to reach high-performance processes [23].</para>
<para>Likert-like questionnaires are constructed from &#8220;questions&#8221;, which are no more than statements of good practice. The respondent scores the related performance on a scale from 1 to n. These can be defined as hybrid models, since they combine the questionnaire approach with the definition of maturity. Usually, they provide only a description of each level, without specifying the different activities that have to be performed to achieve a given maturity level.</para>
<para>Finally, there is the Capability Maturity Model (CMM). Its architecture is more formal and complex than that of the first two. CMMs are composed of process areas organized by common features, which specify a number of key practices to address a series of goals. Typically, CMMs exploit Likert questionnaires to assess maturity. These models were later improved by the Capability Maturity Model Integration (CMMI) [28].</para>
<para>Although Nolan and Crosby were the pioneers of maturity assessment tools, as stated by Wendler [29], the maturity models field is clearly dominated by CMM(I)-inspired models. For this reason, the FAR-EDGE approach is based on this model and its relevant features are described in this chapter.</para>
<para>The CMM was developed at the end of the 1980s by Watts Humphrey and his team at the Software Engineering Institute (SEI) of Carnegie Mellon University. It was used as a tool for objectively assessing the ability of government contractors&#8217; processes to perform a contracted software project. Although the focus of the first version of the CMM was on software development processes, it has subsequently been applied to other process areas [30]. The CMM decomposes each maturity level (shown in <link linkend="F13-1">Figure <xref linkend="F13-1" remap="13.1"/></link> [38]) into basic parts, with the exception of level 1, which is the initial one. These levels define a scale for measuring process maturity and evaluating process capability. Each level is composed of several <emphasis>key process areas</emphasis>. Each key process area is organized into five sections called <emphasis>common features</emphasis>, which in turn specify <emphasis>key practices</emphasis>.</para>
<para>The key process areas specify where an organization should focus on improving processes. In other words, they identify a cluster of related activities, which, if performed collectively, achieve a set of goals considered important for improving process capability.</para>
<fig id="F13-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-1">Figure <xref linkend="F13-1" remap="13.1"/></link></label>
<caption><para>CMM&#8217;s five maturity levels (from [38]).</para></caption>
<graphic xlink:href="graphics/ch13_fig001.jpg"/>
</fig>
<para>The practices that describe the key process areas are organized by common features. These are attributes that indicate whether the implementation of a key process area is effective, repeatable and lasting.</para>
<para>Finally, each process area is described in terms of <emphasis>key practices</emphasis>. They define the activities and infrastructure for an effective implementation and institutionalization of the key process area. In other words, they describe what to do, but not how to do it.</para>
<para>In 2002, the CMMI was proposed [28]. It is considered an improvement of the CMM but, in contrast to that model, which was built for software development, the purpose of the CMMI is to provide guidance for improving organizations&#8217; processes and their ability to manage the development, acquisition, and maintenance of products or services in general [28]. Furthermore, the focus of this model lies both on representing the current maturity situation of the organization/process (coherently with the evolutionary model) and on giving indications on how a higher maturity level can be achieved (as proposed by the evolutionist model). For these reasons, considering also the FAR-EDGE purposes, the CMMI can be considered the most appropriate reference model for implementing a blueprint migration strategy.</para>
</section>
</section>
<section class="lev1" id="sec13-3">
<title>13.3 The FAR-EDGE Approach</title>
<para>The envisioned cyber-physical production and automation systems are characterized by complex smart and digital technology solutions that cannot be implemented in an existing production system in one step without considering their impact on the legacy systems and processes. Only a smooth migration strategy, which introduces the future technologies into the existing infrastructures with legacy systems through incremental migration steps, can lower risks and deliver immediate benefits [4]. Indeed, a stepwise approach can mitigate risks at different dimensions of the factory by breaking down the long-term vision, i.e. the target of the migration, into short-term goals. This approach, as represented in <link linkend="F13-2">Figure <xref linkend="F13-2" remap="13.2"/></link>, is based on lean and agile techniques, such as the Toyota Improvement Kata [31], to implement the new system step-by-step and support continuous improvement, adaptation to changes and innovation at the technical, operational and human dimensions.</para>
<para>The methodology adopted in FAR-EDGE to define a migration approach is described in [32]. Workshop and questionnaire results led to the identification of the important aspects of the impact of the FAR-EDGE reference architectures on existing traditional production systems. Considering the identified factory dimensions of impact, an assessment tool has been realized to support the analysis of the current and desired situations of the manufacturing systems before defining their migration path.</para>
<fig id="F13-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-2">Figure <xref linkend="F13-2" remap="13.2"/></link></label>
<caption><para>Migration path definition.</para></caption>
<graphic xlink:href="graphics/ch13_fig002.jpg"/>
</fig>
<para>Inspired by the migration process defined in [12], a methodology to define and evaluate different architectural blueprints has been defined within the FAR-EDGE project to support companies in investigating the possible technology alternatives towards digital manufacturing automation with a positive return on investment.</para>
<para>First, there is a preparation phase [12] that aims at analyzing the current domain of the company, as well as its long-term business vision. Through questionnaires and workshops with the people involved in the manufacturing process (i.e. production and operation management, IT infrastructure, and change management), the migration goal and starting point are defined, as well as the possible impact of the FAR-EDGE solution and the typical difficulties it can encounter.</para>
<para>The scope of this phase is to obtain a clear picture of what should be changed in a company&#8217;s business by investigating the technology and business process points of view simultaneously and deriving the implications at the technical, operational and human dimensions in a holistic approach. In fact, it is important to keep in mind that the implementation of smart devices, intelligent systems, and new communication protocols has a significant impact not only on the technological dimension of the factory but also on system performance, work organization, and business strategy [32]. Therefore, a questionnaire of circa 60 questions about the technical, operational, and human dimensions of the factory has been defined within FAR-EDGE to holistically analyze the current condition of the production system.</para>
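<para>The per-dimension aggregation behind such an assessment can be sketched as follows. This is an illustrative sketch only: the example answers and the 1&#8211;5 scoring scale are assumptions, not the actual FAR-EDGE questionnaire.</para>

```python
# Illustrative sketch: aggregating questionnaire answers per factory dimension.
# The answers and the 1-5 scale are assumptions for illustration only.
from statistics import mean

# Each answer scores one question from 1 (lowest maturity) to 5 (highest).
answers = {
    "technical":   [2, 3, 2, 4],
    "operational": [3, 3, 2],
    "human":       [2, 2, 3],
}

def dimension_scores(answers):
    """Average the question scores of each factory dimension."""
    return {dim: round(mean(scores), 2) for dim, scores in answers.items()}

print(dimension_scores(answers))
# → {'technical': 2.75, 'operational': 2.67, 'human': 2.33}
```

<para>Aggregated scores of this kind provide the per-dimension snapshot of the AS-IS situation from which the subsequent migration analysis starts.</para>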
<fig id="F13-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-3">Figure <xref linkend="F13-3" remap="13.3"/></link></label>
<caption><para>FAR-EDGE Migration Matrix.</para></caption>
<graphic xlink:href="graphics/ch13_fig003.jpg"/>
</fig>
<para>Based on the answers to this questionnaire, different migration scenarios corresponding to the possible technology options are investigated [12] in order to identify the migration alternatives for moving from the identified AS-IS situation to the TO-BE one. To this end, a tool called the Migration Matrix (<link linkend="F13-3">Figure <xref linkend="F13-3" remap="13.3"/></link>) has been developed within the FAR-EDGE project to identify all the necessary improvements in the direction of the Industry 4.0 vision of the smart factory, splitting the digital transformation into different scale levels. Thus, the matrix represents the three impact dimensions, aiming at providing a snapshot of the current situation of companies and suggesting which steps should be taken in order to reach the FAR-EDGE objective in a smooth and stepwise migration process.</para>
<para>The migration matrix is structured in rows and columns. The rows represent the relevant application fields selected during the preparation phase as having a high potential for improvement through the implementation of FAR-EDGE concepts in the architecture. They refer to technology innovations, factory process maturity, and human roles. The columns describe the development steps for each application field towards a higher level of production flexibility, intelligent manufacturing, and business process maturity in the direction of the FAR-EDGE platform implementation. As shown in <link linkend="F13-3">Figure <xref linkend="F13-3" remap="13.3"/></link>, the five columns represent five levels of the production system&#8217;s digital maturity.</para>
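<para>The structure of such a matrix can be sketched as a simple mapping from application fields to current and target maturity levels. The field names below are illustrative assumptions, not the actual FAR-EDGE application fields.</para>

```python
# Illustrative sketch of a migration matrix: rows are application fields,
# columns are the five digital-maturity levels (1-5). For each field we keep
# the assessed AS-IS level and the desired TO-BE level.
matrix = {
    "technology innovation": {"as_is": 2, "to_be": 4},
    "process maturity":      {"as_is": 3, "to_be": 5},
    "human roles":           {"as_is": 2, "to_be": 3},
}

def next_steps(matrix):
    """Next maturity level to target per field in a stepwise migration."""
    return {field: min(levels["as_is"] + 1, levels["to_be"])
            for field, levels in matrix.items()}

print(next_steps(matrix))
# → {'technology innovation': 3, 'process maturity': 4, 'human roles': 3}
```

<para>Advancing one column at a time per field reflects the stepwise, low-risk migration philosophy described above.</para>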
<para>These levels are based on the integrated principles of both the CMMI (Capability Maturity Model Integration) framework [24, 33, 34] and the DREAMY (Digital REadiness Assessment MaturitY) model [35], which are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Level 1 &#8211; The production system is poorly controlled or not controlled at all. Process management is reactive and lacks the organizational aspects and technological &#8220;tools&#8221; needed to build an infrastructure that allows repeatability and usability of the adopted solutions.</para></listitem>
<listitem><para>Level 2 &#8211; Production is only partially planned and implemented. Process management is weak due to shortcomings in the organization and/or the enabling technologies. Choices are driven by the specific objectives of single projects and by the experience of the planner, demonstrating only partial maturity in managing the infrastructure development.</para></listitem>
<listitem><para>Level 3 &#8211; The process is defined thanks to the planning and implementation of good practices and of management and organizational procedures, which still reveal gaps in integration and interoperability, both in the applications and in the information exchange, because of constraints on organizational responsibilities and/or on the enabling technologies.</para></listitem>
<listitem><para>Level 4 &#8211; Integration and interoperability are based on common standards shared within the company, borrowed from intra- and/or cross-industry de facto standards, in line with industry best practices in both the organizational sphere and the enabling technologies.</para></listitem>
<listitem><para>Level 5 &#8211; The process is digitally oriented and based on a solid technology infrastructure and a high-growth-potential organization, which supports business processes in the direction of Industry 4.0, including continuous improvement processes, complete integrability, organizational development, and speed, robustness and security in information exchange.</para></listitem>
</itemizedlist>
<para>The main reason for this choice is that CMMI provides a defined structure, specifying the capabilities, characteristics, and potential a company has at each level. Following [35], the five-level CMMI scale was taken as a generic starting model, and the maturity levels were adapted to be compliant and coherent with the dimensions of the previously defined domains.</para>
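<para>For illustration only, the five-level scale can be captured in a small data structure and used to flag the weakest application field of an assessed factory. The level summaries, field names, and the <code>assess</code> helper below are assumptions for this sketch, not part of the FAR-EDGE tooling.</para>

```python
# Hypothetical sketch of a CMMI/DREAMY-style maturity scale; the short level
# summaries paraphrase the five levels described in the text.
MATURITY_LEVELS = {
    1: "Poorly controlled, reactive process management",
    2: "Partially planned, weak process management",
    3: "Defined process, gaps in integration/interoperability",
    4: "Integration based on shared intra-/cross-industry standards",
    5: "Digitally oriented, solid infrastructure, Industry 4.0 ready",
}

def assess(field_scores):
    """Return the weakest application field and its maturity level."""
    for field, level in field_scores.items():
        if level not in MATURITY_LEVELS:
            raise ValueError(f"unknown maturity level: {level}")
    weakest = min(field_scores, key=field_scores.get)
    return weakest, field_scores[weakest]

# Example assessment over invented application fields.
scores = {"automation control": 3, "data sharing": 2, "security": 1}
field, level = assess(scores)
print(field, level)  # security 1
```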
<para>Therefore, the Migration Matrix provides a clear map of the current and desired conditions of a factory, revealing different alternatives for achieving the first short-term goal in the direction of the long-term vision. These alternatives are then evaluated against the business strategy, also considering strengths and weaknesses. Since FAR-EDGE aims to provide a holistic overview of the impact of edge and cloud computing solutions on existing production environments, the developed approach supports the identification of the areas in which improvement actions are required, matching the needs of the organization with an estimation of the overall benefit of the innovative solution for the industry.</para>
<para>Based on the results of these phases, a migration path is defined and the solution to execute the first migration step is designed, implemented, and deployed following the migration process of [12]. In parallel, a set of guidelines and recommendations for the implementation of the FAR-EDGE solution are defined and documented.</para>
</section>
<section class="lev1" id="sec13-4">
<title>13.4 Use Case Scenario</title>
<para>The industrial application example provided here describes a simple scenario in the automotive industry. The manufacturer aims to decentralize the current factory automation architecture and introduce cyber-physical system concepts in order to deploy new technologies flexibly and maximize the correlation across its technical abilities to support mass customization. The target of the FAR-EDGE platform implementation is to reduce the time and effort required for deploying new applications through the automatic reconfiguration of physical equipment on different stations, according to the current operation, and its automatic synchronization among the different information systems (PLM, ERP, and MES).</para>
<para>The factory currently has an automation architecture compliant with the ISA-95 standard, with three layers: ERP, MES, and SCADA with field devices. However, the integration of new applications at the MES level to obtain new functions on the shopfloor is very expensive, because it is highly dependent on the centralized control structure of the architecture. Moreover, it requires a long verification time and, consequently, a long delivery time to customers.</para>
<para>The factory envisioned by FAR-EDGE, according to the Industry 4.0 paradigm, is a highly networked CPS in which the modules are able to reconfigure themselves and communicate with each other via a standard I4.0 semantic protocol. As there is no central control, the system modules can identify and integrate new components automatically and negotiate their services and capabilities in a kind of social interaction. The modules have the abilities of perception, communication, and self-explanation. In this way, new modules can be integrated into the system quickly in a &#8220;Plug and Produce&#8221; fashion, and the system can reconfigure itself in the event of changes and continue the production process without additional adjustments of the overall control.</para>
<para>Applying this vision to the considered use case, each physical piece of equipment becomes a single &#8220;Plug-and-Produce&#8221; module able to configure and deploy itself without human intervention. The plugging of the module could be implemented at the edge automation component of the platform (see <link linkend="ch02">Chapter <xref linkend="ch2" remap="2"/></link> and <link linkend="ch04">Chapter <xref linkend="ch4" remap="4"/></link>). An adapter for controlling and accessing information about the single piece of equipment should be developed as part of the communication middleware. Data will flow to the edge automation component, which will interact with the CPS models database of the platform in order to access and update information about the location and status of the single piece of equipment. The synchronization and reconfiguration functionalities of the platform will trigger changes to the configuration of the stations, which will be reflected in the CPS models database. The ledger automation and reconfiguration services could also be used to automate the deployment and reconfiguration of the shopfloor.</para>
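<para>The interaction just described can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the class and method names (<code>CPSModelsDatabase</code>, <code>EdgeAutomationComponent</code>, <code>plug</code>, <code>reconfigure</code>) are invented for this example and do not come from the FAR-EDGE platform APIs.</para>

```python
# Minimal sketch of "Plug-and-Produce" module registration against a CPS
# models database; all class and method names here are hypothetical.
class CPSModelsDatabase:
    def __init__(self):
        self.equipment = {}

    def upsert(self, equipment_id, location, status):
        # Create or update the digital record of a physical module so that
        # IT systems (e.g. PLM) can access its location and status.
        self.equipment[equipment_id] = {"location": location, "status": status}

class EdgeAutomationComponent:
    def __init__(self, cps_db):
        self.cps_db = cps_db

    def plug(self, equipment_id, location):
        # Plugging a module: register it in the CPS models database.
        self.cps_db.upsert(equipment_id, location, "ready")

    def reconfigure(self, equipment_id, new_location):
        # A station reconfiguration triggered by the synchronization
        # services is reflected back into the CPS models database.
        self.cps_db.upsert(equipment_id, new_location, "reconfiguring")

db = CPSModelsDatabase()
edge = EdgeAutomationComponent(db)
edge.plug("robot-01", "station-A")
edge.reconfigure("robot-01", "station-B")
print(db.equipment["robot-01"])  # {'location': 'station-B', 'status': 'reconfiguring'}
```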
</section>
<section class="lev1" id="sec13-5">
<title>13.5 Application of the Migration Approach</title>
<section class="lev2" id="sec13-5-1">
<title>13.5.1 Assessment</title>
<para>Table 13.1 presents the main fields of application to be considered from the technical, operational, and human points of view for automation. The assessment represented in the Migration Matrix provides an overview of the current (AS-IS) situation of the factory with respect to automation. The AS-IS situation of the considered industrial use case is depicted in red within the matrix of Table 13.1. From this Migration Matrix, it is immediately clear which areas of a specific factory use case are less developed with respect to the implementation of digital technologies, i.e. &#8220;Plug-and-Produce&#8221; modules.</para>
<para>Currently, the automation control has a centralized structure that allows the vertical integration of the different architectural levels by providing automation and analytics capabilities to entities that work in parallel. The production equipment is networked through vendor-specific APIs, and data can be shared among different systems. In this way, production data can be monitored and analyzed from an MES system; order processing is fully automated, and the production processes are being developed towards full integration. However, the production system has only very basic security and local access control. The main issue in this use case is the reconfiguration of the production equipment, which is performed per piece of equipment by configuring it at the PLC level. Moreover, time-consuming reconfiguration operations can stop the production.</para>
<para>From a human perspective, the main role to be considered in this use case is the IT Operator, who has strong knowledge of the current IT infrastructure of the factory but not of digital systems and Industry 4.0 concepts. Within the factory, the implications of digital technologies for the IT Operator have not been addressed because they are still unclear. Furthermore, other roles are involved in the transformation: the Operator, Production Manager, Product Designer, and Production Engineer. Their tasks will change as a consequence of the automatic reconfiguration of the physical equipment, of the novel devices in the field, and of the need to encompass all the necessary information within product design and production planning. However, these roles are currently performed according to the current tasks and procedures, unaware of the prospected transformation.</para>
<para><emphasis role="strong">Table 13.1</emphasis> AS-IS situation of the use case for the automation functional domain</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/ch13_fig004.jpg"/></para>
<para>The manufacturer could benefit from the implementation of the FAR-EDGE architecture and components in terms of the modularity and reconfigurability capabilities of the shopfloor. In fact, the implementation of Edge Nodes on the single pieces of equipment enables the identification of new entities on the shopfloor and their instantiation at the Cloud level, making them directly accessible to all IT systems that require their definition (e.g. PLM). Moreover, the decentralization of the automation architecture through the Edge and Ledger layers could increase the flexibility and reconfigurability of the architectural assets, enabling future modifications and improvements.</para>
</section>
<section class="lev2" id="sec13-5-2">
<title>13.5.2 Gap Analysis</title>
<para>Of course, to migrate the current traditional automation system to the FAR-EDGE architecture and components, different aspects of the factory need to be evaluated in order to guarantee a smooth transformation with minimal impact on the current production system.</para>
<section class="lev3" id="sec13-5-2-1">
<title>13.5.2.1 Technical aspects</title>
<para>FAR-EDGE supports automated control and automated configuration of physical production systems using plug-and-produce factory equipment, in order to enable fast adjustments of the production processes as requirements change. To integrate plug-and-produce capabilities into existing shopfloor equipment, a bidirectional monitoring and control communication channel with the shopfloor equipment is required, i.e. not only via sensors and actuators but also with active actors (e.g. PLCs) equipped with significant processing power and good network connectivity, namely the Edge Nodes of the FAR-EDGE architecture described in earlier chapters of the book.</para>
<para>The connection of the digital and physical worlds will also support gathering and processing field data towards a better understanding of production processes, for example, to change an automation workflow based on changes detected in the controlled process. This requires Edge Gateways, i.e. computing devices connected to a fast LAN that provide a high-bandwidth field communication channel. Edge Gateways can execute edge process activities, namely the local orchestration of the physical processes monitored and operated by Edge Nodes. In addition, Cloud services running in a virtualized computing environment can act as an entry point for external applications, provide centralized utilities to be used internally, or perform activities such as archiving analysis results.</para>
<para>The introduction of the Cloud into the production control entails full security and global access control mechanisms, which must be put in place as soon as production information becomes available at the cloud level to different stakeholders, in order to prevent data security and privacy issues. In addition, the automatic reconfiguration of physical equipment can be enhanced by the integration of simulation tools that provide an interactive design process leading to the optimization of the production processes. To improve this optimization, 3D layouts and CAD systems must be fully integrated into a common digital model by means of intelligent tools that automatically feed the simulation systems with real production data and derive optimized solutions.</para>
</section>
<section class="lev3" id="sec13-5-2-2">
<title>13.5.2.2 Operational aspects</title>
<para>Plug-and-Produce capability can be seen as a crucial solution to reduce the time and costs involved not only in the manufacturing process (e.g. deployment of new machines/equipment/resources) but also in process design and process development. For this reason, it presumes the need to build an agile enterprise application platform that helps a company to be proactive in carrying out its core activities. To facilitate such a tight and effective improvement in a modern enterprise, information and operational technology (IT/OT) integration is needed. This means, first, the integration of ERP applications, MES, and shopfloor systems (i.e., PLCs, SCADA, DCS) along the levels defined by ISA-95 and, second, the integration of PLM systems and MES (Level 3 and Level 4) when it comes to the transition of a ready-to-market product into production.</para>
<para>The latter consideration enables the integration between design and production, in terms of both processes and systems, increasing product quality and process efficiency. This convergence is the source of not only the product but also the process definition. On one side, the Bill of Process (BoP) provides traceability to the Bill of Materials (BoM) to leverage PLM&#8217;s configuration and effectiveness controls, defining the correct sequence of operations to guarantee a high level of product quality. On the other side, manufacturing process management carries out the documentation and follow-up of processes in the MES, which reshapes theoretically designed processes to make them fit the reality of the shopfloor, ensuring process efficiency. Considering this, the proper integration of systems is vital; otherwise, data related to the introduction of a new machine or to a process adjustment would have to be passed &#8220;manually&#8221; to the MES (which coordinates and monitors the process execution).</para>
<para>From this consideration, the evolution to a Plug-and-Produce production system has to go through the harmonization of information between engineering and manufacturing, coherently with a stepwise approach. To this aim, the first step is to realize an overall data backbone for all processes and products. This means centralizing the databases and information systems in order to integrate the information flow between the manufacturing and engineering domains. In the next step, the MES will automatically provide execution data to ensure holistic and reliable product information that, being documented and available in both systems, can be considered a strategic asset for improving the maintenance, repair, and optimization processes.</para>
<para>In this context, the deployment of event-driven architecture (&#8216;RT-SOA&#8217; or Real-Time Service Oriented Architecture) could facilitate the information exchange and, therefore, the seamless reconfiguration of machinery and robots as a response to operational or business events.</para>
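<para>The event-driven idea can be illustrated with a toy publish/subscribe sketch: an operational or business event is published on a topic, and a subscribed service reacts by reconfiguring equipment. The topic name, payload, and handler below are invented for illustration and do not represent an actual RT-SOA implementation.</para>

```python
# Toy event bus illustrating how an event-driven (RT-SOA-style) architecture
# could turn a business event into a machine reconfiguration; all names are
# illustrative assumptions.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a service to be notified of events on a topic.
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every subscribed handler.
        for handler in self.handlers[topic]:
            handler(payload)

log = []

def reconfigure_robot(order):
    # A subscriber reacting to an operational event by reconfiguring equipment.
    log.append(f"robot reconfigured for product {order['product']}")

bus = EventBus()
bus.subscribe("order.changed", reconfigure_robot)
bus.publish("order.changed", {"product": "variant-B"})
print(log[0])  # robot reconfigured for product variant-B
```

A real deployment would replace the in-process dispatch with a messaging middleware, but the decoupling between event producers and the reconfiguration logic is the same.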
</section>
<section class="lev3" id="sec13-5-2-3">
<title>13.5.2.3 Human aspects</title>
<para>The migration towards digital manufacturing automation implies changes in the behavior of the production systems as well as in the information flows. These implications impact the work of the employees from different points of view.</para>
<para>The health and conditions of the operators are usually modified by the introduction of automation. In most cases, the ergonomic effort is reduced, but in some cases, additional factors, such as the introduction of robotics, have to be included in the risk management plans. The autonomy and privacy of the employees may change because of a more accurate and real-time monitoring of the operations and tracking of products and tools.</para>
<para>These implications need to be carefully analyzed and managed together with all the stakeholders.</para>
<para>The roles of employees can be affected by the new technological and operational landscape: on the one hand, some manual tasks or scheduling decisions are taken over by the systems; on the other hand, new tasks are added to supervise the systems, monitor the KPIs, and address problems. The workplace, the HMIs, the workflow, and the work instructions change in several cases. It is important that the operators stay in the control loop of the process and are aware of the states and activities of the technological systems.</para>
<para>The deployment of the new technologies is expected to impact not only the Production Operators, but also the Product Designers, the Production Engineers, and obviously the IT Operators. Overall, the skills requirements for each role have to be updated on the basis of the TO-BE scenario and compared with those available in the AS-IS situation, in order to identify and address the gaps through up-skilling or recruitment initiatives. Furthermore, the job profiles and training plans need to be updated to ensure their incorporation into the standard procedures.</para>
<para>Although the need for these changes is perceived, they are still unclear and the size of the gap has not been evaluated yet.</para>
</section>
</section>
<section class="lev2" id="sec13-5-3">
<title>13.5.3 Migration Path Alternatives</title>
<para>Considering the current situation of the industrial use case and the long-term vision of digital manufacturing enabled by the FAR-EDGE reference architecture, different migration path alternatives can be identified. These alternatives are generated on the basis of technical constraints, investment capabilities, and organizational structure. Addressing different priorities and required improvements in parts of the production system, the migration alternatives lead to the achievement of the first short-term goal of the migration path towards the Industry 4.0 vision.</para>
<para>Two main migration path (MP) alternatives have been derived according to the specific business goal of the represented factory. The first alternative (MP 1) focuses on the implementation of plug-and-produce equipment to enhance the production system reconfigurability (Table 13.2), while the second alternative (MP 2) focuses on the real-virtual automatic synchronization of the single equipment based on simulation tools to optimize the production process (Table 13.3). Both alternatives will enable the factory to improve different parts of the system towards the long-term vision of &#8220;digitalization&#8221; by implementing step-by-step some of the FAR-EDGE solution components. The manufacturer will then select the adequate solution according to the enterprise&#8217;s needs, interests and constraints.</para>
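<para>A stepwise migration path of this kind can be derived mechanically from the AS-IS and TO-BE maturity levels of each application field. The following sketch is a simplified illustration of that idea; the field names and the one-level-at-a-time policy are assumptions made for this example, not the FAR-EDGE matrix tooling.</para>

```python
# Sketch: derive stepwise migration steps from AS-IS and TO-BE maturity
# levels per application field (field names are hypothetical examples).
def migration_steps(as_is, to_be):
    """Return (field, from_level, to_level) tuples, one maturity level at a time."""
    steps = []
    for field, target in to_be.items():
        current = as_is.get(field, 1)  # unassessed fields default to Level 1
        for level in range(current, target):
            steps.append((field, level, level + 1))
    return steps

as_is = {"reconfigurability": 2, "security": 1}
to_be = {"reconfigurability": 4, "security": 3}
for step in migration_steps(as_is, to_be):
    print(step)
```

Each tuple corresponds to one cell transition in the matrix (e.g. moving "reconfigurability" from Level 2 to Level 3 in the intermediate step).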
<para><emphasis role="strong">Table 13.2</emphasis> MP for the implementation of reconfigurability</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/ch13_fig005.jpg"/></para>
<para>Color legend: red = AS-IS, yellow = intermediate step, green = TO-BE</para>
<para>The migration matrices depicted for the two MPs represent two specific improvement scenarios and not the production system as a whole. In both matrices, the maturity levels of the current situation are represented in red, while the migration steps are represented in yellow (the intermediate migration step) and green (the final step).</para>
<para><emphasis role="strong">MP 1: Implementation of reconfigurability</emphasis>. According to the business strategy, the deployment configuration should give priority to the Cloud, since the factory has already planned to implement cloud technologies in the production automation control. The collection and integration of information through the Cloud will support the reconfigurability of plug-and-produce equipment. In fact, PLM provides the planning information about how the product will be produced, and the MES serves as the execution engine that realizes the plan and the BoP. As a second step, the information provided by PLM needs to be reshaped. It is important to increase the amount of detail included in the product information to cover machine programming, operator instructions, and task sequencing. In this way, work plans, routings, and the BoP will serve as bridge elements between PLM and the MES [36]. In order to integrate the production systems&#8217; information into the Cloud, a first improvement of the access control of each system must be considered immediately, to be enhanced to a full security system in a second step. Moreover, because of the number of different stakeholders involved, in terms of third-party vendors and system developers, the second migration step should also include the introduction of open APIs to enable standard communication among heterogeneous systems. Following this change in production systems and operations, the IT Operators must be trained to be able to manage the new automation control system, from the field level to the Cloud. The implications for the other roles should be analyzed in order to prepare the following steps.</para>
<para><emphasis role="strong">Table 13.3</emphasis> MP for the implementation of simulation-based optimization</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/ch13_fig006.jpg"/></para>
<para>Color legend: red = AS-IS, yellow = intermediate step, green = TO-BE</para>
<para><emphasis role="strong">MP 2: Simulation-based optimization</emphasis>. The virtual representation of physical objects in cyberspace can be used for the optimization of production processes. For example, the cyber modules can avoid getting stuck in local optimization extremes and find the global maximum or minimum, which results in high performance. Therefore, in addition to the migration steps described in MP 1, the integration of digital models must be considered. First, the existing CAD systems will be interfaced with each other; second, they will be fully integrated to enable the optimization of equipment reconfiguration through intelligent simulation tools. In the same way, the production will be optimized based on the integrated information derived from the CAD designs and then automatically implemented through the intelligent tools. To this end, the production process models and their different layout versions will first be integrated with the business functions, in order to align the process parameters with cost deployment and profitability measures. From an organizational perspective, the main implications affect the roles of product designers and production engineers: they need to increase their level of cooperation to model all the relevant aspects of the manufacturing processes in the CAD. Furthermore, the production engineers have to ensure that the CAD models are connected to the models of the actual production facilities, so that production can be simulated, planned, and monitored. Therefore, the competences of the above-mentioned roles need to be enhanced with new skills concerning digitalization, modeling, and simulation, and their tasks and responsibilities have to be updated accordingly.</para>
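<para>The claim about local optimization extremes can be made concrete with a toy example: a greedy search that only accepts improving neighbors can stop at a local minimum, while exhaustive evaluation over the digital model of the configuration space finds the global one. The configuration space and cost values below are entirely invented for illustration.</para>

```python
# Toy illustration, under invented cost values, of local vs. global search
# over a discrete space of candidate production configurations.
def local_search(cost, start, neighbors):
    # Greedy descent: move to the cheapest better neighbor until none exists.
    current = start
    while True:
        better = [n for n in neighbors(current) if cost(n) < cost(current)]
        if not better:
            return current
        current = min(better, key=cost)

def global_search(cost, candidates):
    # Exhaustive evaluation over all candidate configurations.
    return min(candidates, key=cost)

# Discrete 1-D configuration space with a local minimum (at 1) and the
# global minimum (at 4); keys are configurations, values simulated costs.
values = {0: 5, 1: 2, 2: 4, 3: 6, 4: 1, 5: 3}
cost = values.get
neighbors = lambda x: [v for v in (x - 1, x + 1) if v in values]

print(local_search(cost, 0, neighbors))  # 1 (stuck at the local minimum)
print(global_search(cost, values))       # 4 (the global minimum)
```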
<para>The migration matrices support manufacturers by providing them with a holistic view of the steps required for migration towards the Industry 4.0 vision along the different dimensions of the factory, i.e. technical, operational, and human. Based on this information and according to the business goals, the manufacturer will select the optimal scenario as the first step of migration towards the long-term goal of complete digitalization of the factory. The solution identified within the selected scenario will then be designed in detail, implemented, and deployed according to the subsequent process phases described in [12].</para>
</section>
</section>
<section class="lev1" id="sec13-6">
<title>13.6 Conclusion</title>
<para>In conclusion, this chapter has shown how the FAR-EDGE migration approach can lead a manufacturing company to achieve an improvement towards a new manufacturing paradigm, following a smooth and low-risk transition approach with a holistic overview.</para>
<para>In fact, the use case scenario points out that every part of the organization &#8211; including the workforce, product development, supply chain, and manufacturing &#8211; has been considered in order to become more flexible and reconfigurable and thus rapidly react to both the endogenous and exogenous drivers that are affecting the current global market [37].</para>
<para>In this context, the IT/OT convergence can be seen as a first implementation of the operational aspects needed to obtain a solid manufacturing layer based on the encapsulation of production resources and assets according to the existing protocols, in order to facilitate plug-and-produce readiness and, therefore, to achieve a flexible manufacturing environment.</para>
<para>As far as the technical dimension is concerned, the Edge Nodes and the ledger implementation can enable the realization of the overall system architecture, based on the integration of information systems, that is needed to obtain seamless system reconfiguration, avoiding scrap and reducing time to market and costs.</para>
<para>Finally, the human aspect is crucial to ensure the operation, management, and further development of the highly digitalized and automated production system. The methodology illustrated in this chapter guides manufacturers in considering the implications for skills and work organization within their migration strategy.</para>
<para>Only by jointly considering the technical, operational, and human aspects can a migration strategy anticipate the possible hurdles and lead to a smooth transformation towards an effective new production paradigm.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>The authors would like to thank the European Commission for the support, and the partners of the EU Horizon 2020 project FAR-EDGE for the fruitful discussions. The FAR-EDGE project has received funding from the European Union&#8217;s Horizon 2020 research and innovation programme under grant agreement No. 723094.</para></section>
<section class="lev1" id="sec13-7">
<title>References</title>
<para>[1] Acatech &#8211; National Academy of Science and Engineering, &#8220;Recommendations for implementing the strategic initiative INDUSTRIE 4.0 &#8211; Final report of the Industrie 4.0 Working Group&#8221;, pp. 315&#8211;320.</para>
<para>[2] Acatech &#8211; National Academy of Science and Engineering, &#8220;Cyber-Physical Systems: Driving force for innovation in mobility, health, energy and production&#8221;.</para>
<para>[3] U. Rembold and R. Dillmann, <emphasis>Computer-Aided Design and Manufacturing</emphasis>, 1985.</para>
<para>[4] M. Foehr, J. Vollmar, A. Cal&#224;, P. Leit&#227;o, S. Karnouskos, and A. W. Colombo, &#8220;Engineering of Next Generation Cyber-Physical Automation System Architectures&#8221;, in <emphasis>Multi-Disciplinary Engineering for Cyber-Physical Production Systems</emphasis>, Springer International Publishing, pp. 185&#8211;206, 2017.</para>
<para>[5] &#8220;FAR-EDGE &#8211; Factory Automation Edge Computing Operating System Reference Implementation&#8221;. 2017.</para>
<para>[6] J. Delsing, J. Eliasson, R. Kyusakov, A. W. Colombo, F. Jammes, J. Nessaether, S. Karnouskos, and C. Diedrich, &#8220;A migration approach towards a SOA-based next generation process control and monitoring&#8221;, in <emphasis>IECON Proceedings (Industrial Electronics Conference)</emphasis>, pp. 4472&#8211;4477, 2011.</para>
<para>[7] C. Zillmann, A. Winter, A. Herget, W. Teppe, M. Theurer, A. Fuhr, T. Horn, V. Riediger, U. Erdmenger, U. Kaiser, D. Uhlig, and Y. Zimmermann, &#8220;The SOAMIG Process Model in Industrial Applications&#8221;, in <emphasis>2011 15th European Conference on Software Maintenance and Reengineering</emphasis>, pp. 339&#8211;342, March 2011.</para>
<para>[8] S. Balasubramaniam, G. A. Lewis, E. Morris, S. Simanta, and D. Smith, &#8220;SMART: Application of a method for migration of legacy systems to SOA environments&#8221;, <emphasis>Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics)</emphasis>, vol. 5364 LNCS, pp. 678&#8211;690, 2008.</para>
<para>[9] S. Cetin, N. I. Altintas, H. Oguztuzun, A. H. Dogru, O. Tufekci, and S. Suloglu, &#8220;A mashup-based strategy for migration to Service-Oriented Computing&#8221;, in <emphasis>2007 IEEE International Conference on Pervasive Services, ICPS</emphasis>, pp. 169&#8211;172, 2007.</para>
<para>[10] P. V. Beserra, A. Camara, R. Ximenes, A. B. Albuquerque, and N. C. Mendon&#231;a, &#8220;Cloudstep: A step-by-step decision process to support legacy application migration to the cloud&#8221;, in <emphasis>2012 IEEE 6th International Workshop on the Maintenance and Evolution of Service-Oriented and Cloud-Based Systems, MESOCA 2012</emphasis>, pp. 7&#8211;16, 2012.</para>
<para>[11] R. Fuentes-Fern&#225;ndez, J. Pav&#243;n, and F. Garijo, &#8220;A model-driven process for the modernization of component-based systems&#8221;, <emphasis>Sci. Comput. Program</emphasis>., vol. 77, no. 3, pp. 247&#8211;269, 2012.</para>
<para>[12] A. Cal&#224;, A. Luder, A. Cachada, F. Pires, J. Barbosa, P. Leitao, and M. Gepp, &#8220;Migration from traditional towards cyber-physical production systems&#8221;, in <emphasis>Proceedings - 2017 IEEE 15th International Conference on Industrial Informatics, INDIN 2017</emphasis>, pp. 1147&#8211;1152, 2017.</para>
<para>[13] T. Newcomb and E. Hartley, <emphasis>Group Decision and Social Change</emphasis>, Holt, New York, 1947.</para>
<para>[14] S. Z. A. Kazmi and M. Naarananoja, &#8220;Collection of Change Management Models &#8211; An Opportunity to Make the Best Choice from the Various Organizational Transformational Techniques&#8221;, <emphasis>GSTF J. Bus. Rev</emphasis>., vol. 2, no. 4, pp. 44&#8211;57, 2013.</para>
<para>[15] B. H. Sarayreh, H. Khudair, and E. alabed Barakat, &#8220;Comparative study: The Kurt Lewin of change management&#8221;, <emphasis>Int. J. Comput. Inf. Technol</emphasis>., vol. 02, no. 04, pp. 2279&#8211;764, 2013.</para>
<para>[16] J. P. Kotter, &#8220;Leading change: Why transformation efforts fail&#8221;, <emphasis>Harvard Bus. Rev</emphasis>., pp. 59&#8211;67, March&#8211;April 1995.</para>
<para>[17] J. Hiatt, <emphasis>ADKAR: A Model for Change in Business, Government and Our Community</emphasis>, Prosci Inc., 2006.</para>
<para>[18] A. Berger, &#8220;Continuous improvement and <emphasis>kaizen</emphasis>: standardization and organizational designs&#8221;, <emphasis>Integr. Manuf. Syst</emphasis>., vol. 8, no. 2, pp. 110&#8211;117, 1997.</para>
<para>[19] S. Yadav and V. Agarwal, &#8220;Benefits and Barriers of Learning Organization and its five Discipline,&#8221; <emphasis>IOSR J. Bus. Manag. Ver. I</emphasis>, vol. 18, no. 12, pp. 2319&#8211;7668, 2016.</para>
<para>[20] P. M. Senge, &#8220;The fifth discipline: the art and practice of the learning organization&#8221;, <emphasis>5th Discipline. p. 445</emphasis>, 2006.</para>
<para>[21] D. Romero and F. Vernadat, &#8220;Enterprise information systems state of the art: Past, present and future trends&#8221;, <emphasis>Comput. Ind</emphasis>., vol. 79, pp. 3&#8211;13, 2016.</para>
<para>[22] C. G. Worley and S. A. Mohrman, &#8220;Is change management obsolete?&#8221;, <emphasis>Organ. Dyn</emphasis>., vol. 43, pp. 214&#8211;224, 2014.</para>
<para>[23] A. M. Maier, J. Moultrie, and P. J. Clarkson, &#8220;Assessing organizational capabilities: Reviewing and guiding the development of maturity grids&#8221;, in <emphasis>IEEE Transactions on Engineering Management</emphasis>, vol. 59, no. 1, pp. 138&#8211;159, 2012.</para>
<para>[24] T. Mettler and P. Rohner, &#8220;Situational maturity models as instrumental artifacts for organizational design&#8221;, <emphasis>Proc. 4th Int. Conf. Des. Sci. Res. Inf. Syst. Technol. - DESRIST &#8217;09. Artic. No. 22</emphasis>, pp. 1&#8211;9, May 06&#8211;08, 2009.</para>
<para>[25] R. L. Nolan, &#8220;Managing the crises in data processing&#8221;, <emphasis>Harv. Bus. Rev</emphasis>., vol. 57, pp. 115&#8211;127, March 1979.</para>
<para>[26] P. B. Crosby, &#8220;Quality is free: The art of making quality certain&#8221;, <emphasis>New York: New American Library</emphasis>. p. 309, 1979.</para>
<para>[27] P. Fraser, J. Moultrie, and M. Gregory, &#8220;The use of maturity models/grids as a tool in assessing product development capability&#8221;, in <emphasis>IEEE International Engineering Management Conference</emphasis>, 2002.</para>
<para>[28] CMMI Product Team, &#8220;Capability Maturity Model&#174; Integration (CMMI&#8480;), Version 1.1&#8221;, <emphasis>CMMI for Systems Engineering, Software Engineering, Integrated Product and Process Development, and Supplier Sourcing (CMMI-SE/SW/IPPD/SS, V1.1)</emphasis>, 2002.</para>
<para>[29] R. Wendler, &#8220;The maturity of maturity model research: A systematic mapping study&#8221;, <emphasis>Inf. Softw. Technol</emphasis>., vol. 54, no. 12, pp. 1317&#8211;1339, 2012.</para>
<para>[30] M. Kerrigan, &#8220;A capability maturity model for digital investigations&#8221;, <emphasis>Digit. Investig</emphasis>., vol. 10, no. 1, pp. 19&#8211;33, 2013.</para>
<para>[31] M. Rother, <emphasis>Toyota Kata: Managing People for Improvement, Adaptiveness, and Superior Results</emphasis>, 2010.</para>
<para>[32] A. Cal&#224;, A. L&#252;der, F. Boschi, G. Tavola, M. Taisch, P. Milano, and V. R. Lambruschini, &#8220;Migration towards Digital Manufacturing Automation - an Assessment Approach&#8221;.</para>
<para>[33] M. Macchi and L. Fumagalli, &#8220;A maintenance maturity assessment method for the manufacturing industry&#8221;, <emphasis>J. Qual. Maint. Eng</emphasis>., 2013.</para>
<para>[34] M. Macchi, L. Fumagalli, S. Pizzolante, A. Crespo, and G. Fernandez, &#8220;Towards Maintenance maturity assessment of maintenance services for new ICT introduction&#8221;, in <emphasis>APMS-International Conference Advances in Production Management Systems</emphasis>, 2010.</para>
<para>[35] A. De Carolis, M. Macchi, E. Negri, and S. Terzi, &#8220;A Maturity Model for Assessing the Digital Readiness of Manufacturing Companies&#8221;, in <emphasis>IFIP International Federation for Information Processing 2017</emphasis>, pp. 13&#8211;20, 2017.</para>
<para>[36] Atos Scientific and C. I. Convergence, &#8220;The convergence of IT and Operational Technology&#8221;, 2012.</para>
<para>[37] F. Boschi, C. Zanetti, G. Tavola, and M. Taisch, &#8220;From key business factors to KPIs within a reconfigurable and flexible Cyber-Physical System&#8221;, in <emphasis>23rd ICE/IEEE ITMC International Conference on Engineering, Technology, and Innovation</emphasis>, January 2018.</para>
<para>[38] M. C. Paulk, B. Curtis, M. B. Chrissis, and C. V. Weber, &#8220;Capability Maturity Model&#8480; for Software, Version 1.1&#8221;, Software Engineering Institute, Carnegie Mellon University, 1993.</para>
</section>
</chapter>

<chapter class="chapter" id="ch014" label="14" xreflabel="14">
<title>Tools and Techniques for Digital Automation Solutions Certification</title>
<para><emphasis role="strong">Batzi Uribarri<superscript>1</superscript>, Lara Gonz&#225;lez<superscript>2</superscript>, Bego&#241;a Laibarra<superscript>1</superscript> and Oscar Lazaro<superscript>2</superscript></emphasis></para>
<para><superscript>1</superscript> Software Quality Systems, Avenida Zugazarte 8 1-6, 48930-Getxo, Spain</para>
<para><superscript>2</superscript> Asociacion de Empresas Tecnologicas Innovalia, Rodriguez Arias, 6, 605, 48008-Bilbao, Spain</para>
<para>E-mail: buribarri@sqs.es; lgonzalez@innovalia.org; blaibarraz@sqs.es;</para>
<para>olazaro@innovalia.org</para>
<para>The digitisation of the Factory 4.0 shopfloor and the adoption of increasingly autonomous digital capabilities demand the integration of a large number of technologies, while the differential value of European manufacturing, i.e. security and safety, must be preserved. Industry 4.0 puts additional pressure on small and medium-sized enterprises (SMEs), which must navigate standards, norms, and platforms to fulfil their business ambitions. The Digital Shopfloor Alliance emerges as a multi-sided ecosystem that provides an integrated approach and a manufacturing-centric view on the digital transformation of automation solutions. This chapter introduces a certification framework for faster system integration and validated solution deployment. Its main inputs are our approach to modular Plug &amp; Produce autonomous factory environments and a Validation &amp; Verification (V&amp;V) framework. The chapter discusses how such a V&amp;V framework, in combination with certified components, could become key to the development of open digital shopfloors with future digital-ability extensibility and a controlled return on investment for Industry 4.0 solutions. It also discusses how such an approach can create a virtuous cycle for digital platform ecosystems such as FIWARE for smart industry and IDSA, or more commercially driven ones such as Leonardo, MindSphere, 3DExperience, Bosch IoT Suite, Bluemix, Watson, Predix, and M3.</para>
<section class="lev1" id="sec14-1">
<title>14.1 Introduction</title>
<para>In the context of Industry 4.0 and Cyber Physical Production Systems (CPPS), markets, business models, manufacturing processes, and other challenges along the value chain are changing at increasing speed in an increasingly interconnected world. The future workplace will feature increased mobility and collaboration among humans, robots, and products with in-built plug &amp; produce capabilities. Current practice, by contrast, is that a production system is designed and optimized to execute the exact same process over and over again.</para>
<para>The planning and control of production systems has become increasingly complex, driven by demands for flexibility and productivity as well as by the decreasing predictability of processes. The full potential of open and smart CPPS is yet to be fully realized in the context of cognitive autonomous production systems. In an autonomous production scenario, such as the one proposed by the Digital Shopfloor Alliance (DSA) [1], manufacturing systems will have the flexibility to adjust and optimize each run of a task. Small and medium-sized enterprises (SMEs) face additional challenges in implementing &#8220;cloudified&#8221; automation processes. While the building blocks for digital automation are available, it is up to the SMEs to align, connect, and integrate them to meet the needs of their individual advanced manufacturing processes. Moreover, SMEs find it difficult to make decisions on the strategic automation investments that will boost their business strategy.</para>
<para>Within the AUTOWARE project [3], new digital technologies including reliable wireless communications, fog computing, reconfigurable and collaborative robotics, modular production lines, augmented virtuality, machine learning, cognitive autonomous systems, etc. are being made ready as manufacturing technical enablers for application in smart factories. Special attention is paid to the interoperability of these new technologies with each other and with legacy devices and information systems on the factory floor, as well as to providing reliable, quickly integrated, and cost-effective customized digital automation solutions. To achieve these goals, the focus has been set on open platforms, protocols, and interfaces, providing a Reference Architecture for factory automation, and on a specific certification framework for the validation not only of individual components but also of deployed solutions for specific purposes, helping SMEs and other manufacturing companies to access and integrate new digital technologies in their production processes.</para>
<para>This chapter reviews the certification framework, tools, and techniques proposed within the global vision of the DSA ecosystem, with a clear focus on enabling the digital transformation of manufacturing SMEs through the adoption of digital automation solutions in their shopfloors.</para>
<para>Section 14.2 presents safety as a main asset of the European manufacturing industry, one that is challenged by autonomous operations and that represents a major challenge for SMEs in terms of regulation and the level of integration across technologies and platforms. Section 14.3 presents a global vision of the DSA initiative and ecosystem, while Section 14.4 presents the alignment of the DSA ecosystem with the AUTOWARE Reference Architecture (RA) and its Technical and Usability Enablers to leverage digital abilities in the shopfloor; it also introduces the main strategic services to be provided. Next, Section 14.5 presents the V&amp;V framework and component/system certification that constitute the basis for the Digital Automation Technologies Validation framework. Section 14.6 elaborates in depth on the DSA ecosystem players, approach, benefits, and services towards a win-win model for the multi-sided ecosystem.</para>
</section>
<section class="lev1" id="sec14-2">
<title>14.2 Digital Automation Safety Challenges</title>
<para>SMEs are a focal point in shaping enterprise policy in the European Union (EU). In order to preserve and increase competitiveness in the global market, SMEs need to digitalize their processes through the adoption of CPPS technologies in digital automation solutions. After analysing the new trends and challenges facing SME manufacturing on the way towards the digital production paradigm, i.e. access to new CPPS technologies and tools, we focused on emerging technologies and paradigms such as the Internet of Things, Industry 4.0, machine learning and artificial intelligence, robotics, Virtual/Augmented Reality, cloud computing, and Cyber Physical Production Systems, and particularly on their impact on SME production.</para>
<para>All these technologies, which can be deployed in SME manufacturing and low-volume production, are beginning to emerge and have proved beneficial for gaining a competitive edge. However, their adoption in actual SME production is still limited and needs to be sped up. Two main barriers preventing wider usage of these digital solutions were identified. On the end-users&#8217; side, the lack of knowledge and the time and cost constraints are dominant. On the supply side, there is a need to move from application orientation towards integrated solutions that better support small enterprises, in terms of both customized and flexible applications. An effective measure to overcome the problems related to applying new smart technologies in SMEs is to provide easy access to them through an ecosystem with integrated tools and techniques for Digital Automation Solutions certification. This section reports on the identified demands and challenges faced by manufacturing SMEs with regard to safety and certification.</para>
<para>The fourth Industrial Revolution for the EU Manufacturing Industry <emphasis role="strong">(Industry 4.0)</emphasis> is generally associated with the full adoption of digital technologies in production and with an exclusive focus on smart factory automation. This was the basis of the Industrie 4.0 initiative in Germany when it started back in 2011. However, the most recent evolutions of the Industry 4.0 paradigm have considerably extended the scope and characteristics of Industry 4.0 projects, embracing and addressing new ways of conceiving products, production, and manufacturing-oriented business models. During the <emphasis role="strong">World Manufacturing Forum</emphasis> 2016 in Barcelona, Roland Berger [1] presented the main transformations of the new Industry 4.0: from <emphasis>mass production</emphasis> to <emphasis>mass customization</emphasis>, from <emphasis>volume scale effects</emphasis> to localized and <emphasis>flexible production units</emphasis>, from <emphasis>make to stock</emphasis> static and hierarchical supply chains to <emphasis>make to order</emphasis> dynamic reverse supply networks, from a <emphasis>product-oriented</emphasis> economy to a <emphasis>service and experience economy</emphasis>, from hard <emphasis>Taylorism-driven workplaces</emphasis> to attractive and <emphasis>adaptive workspaces</emphasis>. Materializing this newly identified <emphasis role="strong">&#8220;Industry 4.0&#8221;</emphasis> yields a set of characteristics that require extensions of the traditional Smart Production model (well represented by the RAMI 4.0 Reference Architecture).</para>
<para>The Digital Shopfloor Alliance (DSA) is a manufacturing-driven approach to digital transformation and the response to these new production paradigms. Production is migrating from fixed production lines to autonomous work-cell environments in which increased autonomy and flexibility of operations are the key features. However, such flexible environments must retain the same safety features as traditional production chains. Hence, there is a demand for digital platforms that support the engineering, commissioning, and safe and secure operation of such advanced autonomous production strategies.</para>
<para>The smart factories of the future are built on a modular basis. With standardized interfaces and cutting-edge information technology, they enable flexible automated manufacturing following the &#8220;plug and produce&#8221; and autonomous production principles. Initiatives such as Industry 4.0 and the European Digital Shopfloor Alliance (DSA) are developing the concept of a <emphasis role="strong">modular certification scheme and control in real time</emphasis>, a specific approach that takes into account all the requirements of adaptive, configurable systems. All plant manufacturers need to develop a safety and security concept for their equipment and confirm that it complies with legal requirements. Modular certification and self-assessment schemes are critical elements in the operation of autonomous equipment variants. They ensure that the equipment is automatically re-certified when a module is replaced or a new line configuration is set by integrating autonomous equipment in modular manufacturing settings, and thus remains in conformity with the legal requirements and/or the standard.</para>
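<para>As an illustration only (the scheme itself is still a concept under development, and the module names, certificate fields, and compatibility rule below are hypothetical), an automatic modular conformity check could work roughly as follows: each module carries its own certificate, and a line configuration remains certified only while every module in it is certified for the interface the line was assessed with.</para>

```python
# Hypothetical sketch of automatic line re-certification on module swap.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModuleCertificate:
    module_id: str
    interface_version: str   # interface the module was certified against
    safety_certified: bool   # conformity assessment passed for this module

def line_conforms(modules, required_interface="v2"):
    """A line configuration conforms only if every module holds a valid
    safety certificate and exposes the interface the line was assessed for."""
    return all(m.safety_certified and m.interface_version == required_interface
               for m in modules)

line = [ModuleCertificate("feeder", "v2", True),
        ModuleCertificate("robot-cell", "v2", True)]
assert line_conforms(line)

# Replacing a module with an uncertified variant invalidates the whole line.
line[1] = ModuleCertificate("robot-cell-new", "v2", False)
assert not line_conforms(line)
```

<para>Under this sketch, swapping in an uncertified or incompatible module immediately invalidates the configuration, which is exactly the behaviour the modular certification scheme is meant to automate.</para>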
<para><emphasis role="strong">Currently, industrial automation is a consolidated reality, with approximately 90 per cent of machines in factories being unconnected</emphasis>.</para>
<para>These isolated and static systems mean that product safety (functional safety and security) can be comfortably assessed. However, the connected world of Industry 4.0&#8217;s smart factories adds a new dimension of complexity to machinery and production-line safety. IoT connects people and machines, enabling a bidirectional flow of information and real-time decisions. Its diffusion is accelerating with the shrinking size and price of sensors and with the need to exchange large amounts of data. In today&#8217;s static machinery environment, the configuration of machines and machine modules in the production line is completely known at the starting point of the system design. However, if substantial changes are made, a new conformity assessment may be required. It is an employer&#8217;s responsibility to ensure that all machinery meets the requirements of the Machinery Directive and the Provision and Use of Work Equipment Regulations (PUWER), of which risk assessments are an essential ingredient. Therefore, if a machine undergoes a substantial change, a full CE marking and assessment must be completed before it can be returned to service. Any configuration change in the production line requires re-certification of the whole facility.</para>
<para><emphasis role="strong">However, the dynamic approach of Industry 4.0&#8217;s autonomous robotic systems means that with a simple press of a button, easily configurable machinery and production lines can be instantly changed</emphasis>.</para>
<para>As it is the original configuration that is risk assessed, such instant updates to machinery mean that the time-consuming traditional approach of &#8220;risk assessment as you make changes&#8221; will become obsolete. The risk assessment process therefore needs to be modified to meet the demands of the more dynamic Industry 4.0 approach. All possible configurations of machines and machine modules would then be validated dynamically as the production line changes. Each new configuration would be assessed in real time, based on digital models of the real behavior of each configuration, which would rely on the machinery manufacturer&#8217;s correct (and trusted) data. The result would be a rapidly issued digital compliance certificate.</para>
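<para>A minimal sketch of what such a rapidly issued digital compliance certificate might look like, assuming (hypothetically) that each machine type comes with a manufacturer-validated digital risk model and that a configuration is identified by a hash of its description; all names and rules here are illustrative, not part of any cited standard:</para>

```python
# Hypothetical real-time assessment of a new line configuration.
import hashlib
import json
import time

def assess_configuration(config, risk_model):
    """risk_model maps a machine type to the hazards its manufacturer-supplied
    digital model has been validated for; an unknown machine type blocks
    automatic certification and falls back to manual assessment."""
    for machine in config["machines"]:
        if machine["type"] not in risk_model:
            return None  # unmodelled module: no automatic certificate
    # Identify the exact configuration by a canonical hash of its description.
    digest = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()).hexdigest()
    return {"config_hash": digest, "issued_at": time.time(),
            "status": "compliant"}

risk_model = {"conveyor": ["entanglement"], "cobot": ["transient contact"]}
cert = assess_configuration(
    {"machines": [{"type": "cobot"}, {"type": "conveyor"}]}, risk_model)
assert cert is not None and cert["status"] == "compliant"

# An unmodelled machine type cannot be certified automatically.
assert assess_configuration({"machines": [{"type": "press"}]},
                            risk_model) is None
```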
<para>This section discusses the challenges that such an approach would entail in the context of the safe operation of modular manufacturing, reconfigurable cells, and collaborative robotic scenarios.</para>
<section class="lev2" id="sec14-2-1">
<title>14.2.1 Workplace Safety and Certification According to the DGUV</title>
<para>The Deutsche Gesetzliche Unfallversicherung (DGUV) is the German statutory accident insurance. The DGUV has published a requirements document that addresses workplace safety and certification aspects concerning collaborative robots. On conventional industrial robot systems, safeguards such as protective fences and light curtains prevent the access of people to hazardous areas. Collaborative robot systems, however, represent a link between fully automated systems and manual workplaces. The fact that Smart Manufacturing tends towards smaller batch sizes is one reason why collaborative robots are taking on greater significance. In an almost fenceless operation, which depends on the type of collaboration, the robot can thus support workers in manual tasks. This relieves the worker, which benefits company managers in the medium to long term, since it results in less downtime and better employee health.</para>
<para>The DGUV provides the guidance document &#8220;Collaborative robot systems &#8211; Design of systems with the Power and Force Limiting function&#8221; for free download [4]. It is intended to give an initial overview of the procedures for planning collaborative robot systems. The implementation of AUTOWARE Use Case 3 &#8211; Industrial Cooperative Assembly of Pneumatic Cylinders necessitates compliance with workplace safety standards. Accordingly, the DGUV requirements are considered in the design of the collaborative workspace. Fulfilling the requirements provided by the DGUV is an essential prerequisite for the legal operation of collaborative robot systems in factories and for their certification.</para>
</section>
<section class="lev2" id="sec14-2-2">
<title>14.2.2 Industrial Robots Safety According to ISO 10218-1:2011 &amp; ISO 10218-2:2011</title>
<para>The main current, i.e. published, safety standards relevant to industrial robots (in contrast to personal care robots) are ISO 10218-1:2011 and ISO 10218-2:2011. ISO 10218-1:2011, &#8220;Robots and robotic devices &#8211; Safety requirements for industrial robots &#8211; Part 1: Robots&#8221; [5], specifies requirements and guidelines for the inherently safe design, protective measures, and information for use of industrial robots. It describes the basic hazards associated with robots and provides requirements to eliminate, or adequately reduce, the risks associated with these hazards.</para>
<para>ISO 10218-2:2011, &#8220;Robots and robotic devices &#8211; Safety requirements for industrial robots &#8211; Part 2: Robot systems and integration&#8221;, specifies safety requirements for the integration of industrial robots and industrial robot systems as defined in ISO 10218-1 with industrial robot cell(s) [6]. The integration includes the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>the design, manufacturing, installation, operation, maintenance, and decommissioning of the industrial robot system or cell;</para></listitem>
<listitem><para>necessary information for the design, manufacturing, installation, operation, maintenance, and decommissioning of the industrial robot system or cell; and</para></listitem>
<listitem><para>component devices of the industrial robot system or cell.</para></listitem>
</itemizedlist>
<para>ISO 10218-2:2011 describes the basic hazards and hazardous situations identified with these systems, and it also provides requirements to eliminate or adequately reduce the risks associated with these hazards. It also specifies requirements for the industrial robot system as part of an integrated manufacturing system. The design of experiments in AUTOWARE JSI reconfigurable robotics cell will take into account these two standards.</para>
</section>
<section class="lev2" id="sec14-2-3">
<title>14.2.3 Collaborative Robots Safety According to ISO/TS 15066:2016</title>
<para>As flexible, quickly reconfigurable robot tasks nowadays often include collaborative activity between human operators and robots (and such applications will only increase), a very important standard we take into account is ISO/TS 15066:2016, &#8220;Robots and robotic devices &#8211; Collaborative robots.&#8221; ISO/TS 15066:2016 specifies the safety requirements for collaborative industrial robot systems and the work environment. It supplements the requirements and guidance on collaborative industrial robot operation given in ISO 10218-1 and ISO 10218-2. Two main newly introduced points are that (1) in essence, a safe collaborative application must be obtained &#8211; a safe robot per se is not enough to guarantee a safe robot application, and (2) safety is specified by limits on the physical values that can be exerted on humans (e.g. limited contact forces) rather than solely by the adoption of some technical type of safety solution. The JSI reconfigurable robotics cell makes use of robots certified for collaboration with humans.</para>
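<para>To illustrate the second point, power-and-force-limited operation can be thought of as a lookup against body-region limits: the permissible quasi-static contact force depends on which body region is contacted. The sketch below uses placeholder limit values, not figures taken from ISO/TS 15066; a real application must use the body-region tables in the standard itself.</para>

```python
# Placeholder per-region quasi-static force limits in newtons; NOT values
# from ISO/TS 15066 -- consult the standard's body-region tables in practice.
FORCE_LIMITS_N = {
    "hand": 140.0,
    "chest": 140.0,
    "face": 65.0,
}

def contact_allowed(body_region, measured_force_n):
    """Permit contact only when a limit is defined for the region and the
    measured force stays within it (fail safe for unknown regions)."""
    limit = FORCE_LIMITS_N.get(body_region)
    return limit is not None and measured_force_n <= limit

assert contact_allowed("hand", 120.0)
assert not contact_allowed("face", 80.0)
assert not contact_allowed("eye", 1.0)   # no limit defined: disallow
```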
</section>
</section>
<section class="lev1" id="sec14-3">
<title>14.3 DSA Ecosystem Vision</title>
<para>Until recently, digital products for SME businesses were nothing more than products for large enterprises with reduced functionalities. This has resulted in an opportunistic rather than strategic first adoption of CPPS by SMEs, which handicaps the sustainable growth of such industries. To accelerate the adoption of CPPS by SMEs, as producers or as users of CPPS, the barriers to translating the benefits of CPPS into core business value need to be reduced.</para>
<para>There are several European initiatives under the framework of the FoF-11 H2020 DEI call that are working on platforms and solutions to accelerate digital automation engineering processes and to develop the building blocks needed to fully support fog/cloud-based manufacturing solutions in the context of Industry 4.0.</para>
<para>Based on the common approach of H2020 AUTOWARE [3], DAEDALUS [7], and FAR-EDGE [8] projects for the European digitisation of SMEs, the DSA has been defined with the common objective of providing reliable, cost-effective integrated solutions to support small enterprises, both in terms of customized and flexible applications.</para>
<para>The DSA is an open ecosystem of certified applications that allows ecosystem partners to access different components to develop smart digital automation solutions (the so-called shopfloor digital abilities) for their manufacturing processes. This ecosystem aims to reduce the cost, time, and effort required to deploy digital automation systems on the basis of validated &amp; verified components for specific configurations and operation profiles.</para>
<para>The three projects provide a complete CPPS solution allowing SMEs to access all the different components needed to develop cognitive digital automation solutions for their manufacturing processes. AUTOWARE provides a complete CPPS ecosystem, including a reference architecture that fits well with the FAR-EDGE architecture, based on splitting computing across the field (considering the decentralized automated shopfloor defined inside DAEDALUS), the edge, and the cloud. DAEDALUS also defines an intermediate layer (Ledger) to synchronize and orchestrate the local processes. Finally, AUTOWARE also enriches the different technical enablers to ease the adoption of CPPS by SMEs, as well as reliable communications and data distribution processes.</para>
<para>This combined solution significantly reduces the complexity of accessing the different isolated tools and speeds up the process by which multi-sided partners can meet and work together. Moreover, the creation of added-value products and services by device producers, machine builders, systems integrators, and application developers will go beyond the current limits of manufacturing control systems, allowing the development of innovative solutions for the design, engineering, production, and maintenance of plants&#8217; automation.</para>
<para>AUTOWARE has defined a complete open framework including a novel modular, scalable, and responsive Reference Architecture (RA) for factory automation, defining methods and models for the synchronization of the digital and real worlds based on standards and certified components. The AUTOWARE RA aligns several cognitive manufacturing technical enablers, complemented by usability enablers, making them easy for manufacturing SMEs to access and operate. The third key element in the ecosystem is the certification framework for the fast integration and customization of digital automation solutions.</para>
<para>The DSA proposes to go beyond a mere marketplace (see <link linkend="F14-1">Figure <xref linkend="F14-1" remap="14.1"/></link>) and provide an integrated approach that, on the development side, ensures the provisioning of qualified CPPS components and of certified systems and solutions, thereby reducing integration and customization costs. Moreover, the operational conditions and performance expected from Systems of Systems (SoS) operations can be managed in a controlled manner, ensuring that machine and co-botic EU safety requirements can be addressed in the context of increased flexibility and system reconfiguration. On the demand side, acquisition and operation costs are reduced thanks to shorter deployment cycles and customization based on certified components already qualified under concrete working and development conditions and validated for a specific purpose.</para>
<para>The DSA ecosystem aims to ease the digital transformation process for manufacturing SMEs and is based on an integrated approach, aligned with the AUTOWARE goals of leveraging autonomous solutions through digital abilities, which includes a set of tools, techniques, and services:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">DSA experts network</emphasis> helping manufacturing SMEs to define and evaluate their digital transformation strategy, and providing support for its implementation;</para></listitem>
<listitem><para><emphasis role="strong">DSA RA</emphasis>, aligned with widely established open HW and open platforms technologies, based on AUTOWARE RA;</para></listitem>
<listitem><para>the provisioning of <emphasis role="strong">DSA compliant:</emphasis></para>

<itemizedlist mark="circle" spacing="normal">
<listitem><para><emphasis role="strong">Technological components</emphasis> (from well-known technology providers and aligned to open HW, SW and platforms);</para></listitem>
<listitem><para><emphasis role="strong">Core Products</emphasis> (architectural, functional, non-functional, normative and S&amp;S compliant, validated for a purpose (VPP));</para></listitem>
<listitem><para><emphasis role="strong">Certified solutions</emphasis> (safety compliant: certified Components and Core Product validated for a specific application/service); and</para></listitem>
<listitem><para><emphasis role="strong">Validated deployments</emphasis>, developed by trained professional integrators, for SME&#8217;s customized automation solutions;</para></listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<fig id="F14-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-1">Figure <xref linkend="F14-1" remap="14.1"/></link></label>
<caption><para>DSA manufacturing multisided ecosystem.</para></caption>
<graphic xlink:href="graphics/ch14_fig001.jpg"/>
</fig>

<itemizedlist mark="bullet" spacing="normal">
<listitem><para>access to trial-oriented <emphasis role="strong">testbeds and neutral facilities</emphasis> offering quick access to, and hands-on demonstrations of, already validated solutions;</para></listitem>
<listitem><para>the <emphasis role="strong">Digital Automation Technology Validation (DATV) framework</emphasis> for validating technologies, tools, and services for a specific use under certain conditions, norms, and standards, based on the AUTOWARE V&amp;V enablers; and</para></listitem>
<listitem><para>access to a <emphasis role="strong">homologated professional network</emphasis> of integrators trained by Core Product owners and experienced in DSA technologies.</para></listitem>
</itemizedlist>
<para>This approach enables the DSA to offer <emphasis role="strong">both a top-down and a bottom-up vision to implement safe digital transformation strategies and secure I4.0 digital automation systems in manufacturing SMEs</emphasis>. In contrast to the top-down vision of the well-known large technology providers, each focused on its own core products or core architecture, the DSA lowers the barrier that hinders the adoption of the latest technologies for the implementation of digital shopfloors in manufacturing SMEs.</para>
<para>The DSA approach will focus on easing and enabling the digital transformation strategy for key application areas &amp; services of competitive interest for manufacturing SMEs:</para>
<fig id="F14-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-2">Figure <xref linkend="F14-2" remap="14.2"/></link></label>
<caption><para>DSA ecosystem objectives.</para></caption>
<graphic xlink:href="graphics/ch14_fig002.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Energy efficient manufacturing services;</para></listitem>
<listitem><para>condition-based monitoring &amp; predictive maintenance services;</para></listitem>
<listitem><para>zero defect manufacturing services;</para></listitem>
<listitem><para>factory logistics management services;</para></listitem>
<listitem><para>workplace augmentation, training, and human decision support services;</para></listitem>
<listitem><para>digital twin modeling and simulation services; and</para></listitem>
<listitem><para>Big Data Analytics for production planning and optimisation services.</para></listitem>
</itemizedlist>
<para>The DSA bottom-up vision, based on access to DATV-validated Core Products and Solutions, DSA services, and professionals, helps fulfil the main DSA goals of reducing the cost, time, and effort required to implement safe digital processes and products and secure Industry 4.0 digital automation solutions, in line with its objectives (see <link linkend="F14-2">Figure <xref linkend="F14-2" remap="14.2"/></link> above):</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Maximize Industry 4.0 ROI</emphasis>: DSA services and DATV solutions will help optimize SMEs&#8217; investment in digital shopfloor implementation.</para></listitem>
<listitem><para><emphasis role="strong">Keep integration time under control</emphasis>: DSA-established methods and frameworks ease the adoption of digital solutions through validated deployments and access to certified testbeds.</para></listitem>
<listitem><para><emphasis role="strong">Ensure future digital shopfloor extendibility</emphasis>: relying on DATV-validated and standard-compliant components and the DSA RA to safely plan the digital transformation strategy towards a future digital shopfloor.</para></listitem>
</itemizedlist>
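<para>The first of these objectives can be made concrete with a simple payback estimate that relates a digital shopfloor investment to the savings it generates. The sketch below is purely illustrative: the figures and function names are hypothetical and are not part of the DSA framework.</para>
<programlisting>
```python
# Hypothetical sketch: a minimal ROI/payback estimate for a digital
# shopfloor investment. All figures and names are illustrative only.

def roi(annual_savings, investment, years):
    """Return ROI as a fraction of the investment over the given horizon."""
    gain = annual_savings * years
    return (gain - investment) / investment

def payback_years(annual_savings, investment):
    """Years needed for cumulative savings to repay the investment."""
    return investment / annual_savings

# Example: a 120 kEUR deployment saving 60 kEUR per year.
print(roi(60_000, 120_000, 3))           # 0.5 (50% ROI over 3 years)
print(payback_years(60_000, 120_000))    # 2.0 (years to break even)
```
</programlisting>
<para>An assessment of this kind, applied per application area, is one way an SME could compare candidate deployments before committing to a digital transformation plan.</para>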
<para>On the business dimension, the DSA ecosystem offers <emphasis role="strong">a set of services to support SMEs in defining and executing their own digital transformation strategy</emphasis> (see <link linkend="F14-3">Figure <xref linkend="F14-3" remap="14.3"/></link>), including:</para>
<fig id="F14-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-3">Figure <xref linkend="F14-3" remap="14.3"/></link></label>
<caption><para>DSA ecosystem strategic services.</para></caption>
<graphic xlink:href="graphics/ch14_fig003.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">DSA profiling</emphasis>: DSA experts offer SMEs support on digital shopfloor profile selection and ROI assessment of their digital shopfloor strategy.</para></listitem>
<listitem><para><emphasis role="strong">DSA certification</emphasis>: application of the DATV framework ensures safe operation of customised DSA deployments in modular/reconfigurable manufacturing cells or collaborative robotic workplaces.</para></listitem>
<listitem><para><emphasis role="strong">DSA integration</emphasis>: the DSA network of expert integrators offers suitable support for the safe and secure deployment of digital shopfloor services.</para></listitem>
<listitem><para><emphasis role="strong">DSA-ready products</emphasis>: DATV HW components and SW solutions and infrastructures validated for purpose (VPP) help reduce the ramp-up time of digital shopfloor services.</para></listitem>
</itemizedlist>
<para>This set of services, oriented to managing and supporting the digital transformation strategy for manufacturing SMEs&#8217; shopfloors, is based on the AUTOWARE technical usability and V&amp;V enablers and exploitable results. The first steps of the DSA digitisation strategy will comprise a <emphasis role="strong">digital transformation status assessment</emphasis> that will enable the <emphasis role="strong">definition of the digital transformation strategy and an action plan</emphasis> through an <emphasis role="strong">investment proposal aligned with the manufacturing SME&#8217;s global strategy and situation</emphasis>, ensuring future extendibility of the deployments in the shopfloor and maximizing the Industry 4.0 ROI. The next steps will be supported both by the catalogue of DATV Core Products and validated deployments for specific purposes, and by the Integrator network services, eased by access to trial-ready testbeds in neutral facilities offered by the AUTOWARE partners and manufacturing DIHs.</para>
<fig id="F14-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-4">Figure <xref linkend="F14-4" remap="14.4"/></link></label>
<caption><para>DSA-aligned open HW &amp; SW platforms.</para></caption>
<graphic xlink:href="graphics/ch14_fig004.jpg"/>
</fig>
<para>On the technological dimension, the DSA is centred on the AUTOWARE-based RA and aligned with the main open HW and SW platform groups and initiatives in the digital automation area for Industry 4.0, as can be seen in <link linkend="F14-4">Figure <xref linkend="F14-4" remap="14.4"/></link>.</para>
<para>Thanks to the DATV certification framework, the DSA catalogue of Core Products (DSA-CP) will offer a complete description of each product, including: the DSA technology levels achieved (visualization, security, connectivity, and open standards); its set of components and main features; its DSA-RA mapping; the component providers; the availability of qualified integrators (training level backed by the CP owner, own homologation, and expertise); and the estimated investment cost and deployment timetable depending on the complexity level of the deployment.</para>
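<para>As an illustration only, a catalogue entry of this kind could be modelled as a simple record. Every field name and value below is hypothetical and does not reflect an actual DSA-CP schema.</para>
<programlisting>
```python
# Hypothetical sketch of a DSA-CP catalogue entry capturing the fields
# described above. Field names are illustrative, not a DSA specification.
from dataclasses import dataclass, field

@dataclass
class CoreProductEntry:
    name: str
    technology_levels: dict        # e.g. {"connectivity": 2, "security": 3}
    components: list
    ra_mapping: list               # DSA-RA layers the product maps to
    providers: list
    qualified_integrators: list    # (integrator, training level) pairs
    estimated_cost_eur: dict = field(default_factory=dict)  # per complexity level
    deployment_weeks: dict = field(default_factory=dict)    # per complexity level

entry = CoreProductEntry(
    name="PredictiveMaintenanceKit",
    technology_levels={"visualization": 2, "security": 3, "connectivity": 2},
    components=["edge-gateway", "analytics-service"],
    ra_mapping=["Workcell", "Factory"],
    providers=["ExampleVendor"],
    qualified_integrators=[("IntegratorA", "certified")],
    estimated_cost_eur={"basic": 50_000},
    deployment_weeks={"basic": 6},
)
print(entry.technology_levels["security"])  # 3
```
</programlisting>
<para>A machine-readable entry along these lines would let integrators filter the catalogue by technology level, RA layer, or deployment complexity.</para>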
</section>
<section class="lev1" id="sec14-4">
<title>14.4 DSA Reference Architecture</title>
<para>The RA aligns the AUTOWARE manufacturing technical enablers (i.e. robotic systems, smart machines, cloudified control, secure cloud-based planning systems, and application platforms) to provide cognitive automation systems as solutions, exploiting cloud technologies and smart machines as a common system. The goal of the AUTOWARE RA is to have broad industrial applicability, to map applicable technologies to different areas, and to guide technology and standards development.</para>
<para>The AUTOWARE RA has four levels, which target all relevant layers for the modeling of CPPS automation solutions (as depicted in <link linkend="F14-5">Figure <xref linkend="F14-5" remap="14.5"/></link>):</para>
<fig id="F14-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-5">Figure <xref linkend="F14-5" remap="14.5"/></link></label>
<caption><para>AUTOWARE Reference Architecture (layers, communication, &amp; modelling).</para></caption>
<graphic xlink:href="graphics/ch14_fig005.jpg"/>
</fig>
<para>Enterprise, Factory, Workcell, and Field devices. To uphold the concept of Industry 4.0 and move away from the old-fashioned automation pyramid, the communication pillar enables direct communication between the different layers by using Fog/Cloud concepts. Finally, the last part of the RA focuses on the actual modelling of the different technical components inside the different layers. Additionally, to maintain compliance with the overall AUTOWARE Framework, the reference architecture of the Software Defined Autonomous Service Platform (SDA-SP) broadens the overall AUTOWARE RA (see <link linkend="F14-6">Figure <xref linkend="F14-6" remap="14.6"/></link>) with the mapping of the main technologies and CPPS services identified:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A reconfigurable workcell that demonstrates solutions typical for robot automation tasks, e.g. robotic assembly using multiple robots;</para></listitem>
<listitem><para>A mixed or dual reality supported automation to illustrate an automation solution that builds upon and benefits from intensive use of technologies like Virtual Reality (VR), Augmented Reality (AR), and Augmented Virtuality (AV). This system will be used to demonstrate the application of these technologies for automatic assembly of custom-ordered pneumatic cylinders.</para></listitem>
</itemizedlist>
<fig id="F14-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-6">Figure <xref linkend="F14-6" remap="14.6"/></link></label>
<caption><para>AUTOWARE Reference Architecture (SDA-SP).</para></caption>
<graphic xlink:href="graphics/ch14_fig006.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A multi-stage production line, where configuration, production, and traceability are built upon the use of digital product memory technologies and functionalities.</para></listitem>
</itemizedlist>
<para>Table 14.1 presents a list of AUTOWARE enablers mapped to the different layers, network levels, and dimensions of the AUTOWARE-based RA identified in <link linkend="F14-6">Figure <xref linkend="F14-6" remap="14.6"/></link>.</para>
</section>
<section class="lev1" id="sec14-5">
<title>14.5 AUTOWARE Certification Usability Enabler</title>
<para>AUTOWARE will improve the situation of the European manufacturing industry by opening the door to new digital and digitally modified business opportunities with immediate global reach. Moreover, it will provide the enablers for bringing innovation to market faster, with better streamlined customer processes and customer insights. The adoption of CPPS technologies by SMEs is a well-known challenge, and AUTOWARE plays a major role in the automation process of SMEs, helping them build more sustainable and innovative business models. In addition, it allows SMEs to focus both on the development or exploitation of personalized applications and on the services to operate their strategic business assets (brand, culture, distribution, sales, production, and innovation).</para>
<para><emphasis role="strong">Table 14.1</emphasis> AUTOWARE enablers aligned to DSA-RA</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/ch14_fig007a.jpg"/></para>
<para>The impact on traditional SMEs, as shown in <link linkend="F14-7">Figure <xref linkend="F14-7" remap="14.7"/></link>, is immediate, since technological complexity is decoupled from business value and a simple path towards maximizing the business value of advanced CPPS is facilitated. AUTOWARE hides the complexity of automation to allow Future Internet SMEs and entrepreneurs to devote their resources and energies to effective and efficient business operation and value generation.</para>
<para>The number of technologies and platforms that need to be integrated to realize a cognitive automation service for Industry 4.0 is high, and their integration is complex. To this end, the AUTOWARE-proposed RA is rooted in solid foundations and intensive large-scale piloting of technologies for the development of cognitive digital manufacturing: in autonomous and collaborative robotics as an extension of the ROS and ReconCell frameworks, and in modular manufacturing solutions based on the RAMI 4.0 reference architecture.</para>
<fig id="F14-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-7">Figure <xref linkend="F14-7" remap="14.7"/></link></label>
<caption><para>AUTOWARE business impact on SMEs.</para></caption>
<graphic xlink:href="graphics/ch14_fig007.jpg"/>
</fig>
<para>The digital convergence of the traditional industries is increasingly leading towards the disappearance of the boundaries between the industrial and service sectors. In March 2015, Acatech [9], through the Industry-Science Research Alliance&#8217;s strategic initiative &#8220;Web-based Services for Businesses,&#8221; proposed a layered architecture to facilitate a shift from product-centric to user-centric business models. At a technical level, these new forms of cooperation and collaboration will be enabled by new digital infrastructures.</para>
<para>Smart spaces are smart environments where smart, internet-enabled objects, devices, and machines (smart products) connect to each other. The term &#8220;smart products&#8221; not only refers to actual production machines but also encompasses their virtual representations (CPS digital twins). These products are described as &#8220;smart&#8221; because they know their own manufacturing and usage history and are able to act autonomously. Data generated on the networked physical platforms is consolidated and processed on software-defined platforms. Providers connect to each other via these service platforms to form digital ecosystems. AUTOWARE extends those elements that are critical for the implementation of the autonomy and cognitive features. AUTOWARE also extends those reference models by adopting the layered structure suggested by the Industry 4.0 Smart Service Welt initiative [10] (shown in <link linkend="F14-8">Figure <xref linkend="F14-8" remap="14.8"/></link>) for digital business ecosystem development based on industrial platforms (smart product, smart data, and smart service).</para>
<para>At the smart product level, AUTOWARE leverages enablers for deterministic wireless CPPS connectivity (OPC-UA and Fog-enabled analytics). At the smart data level, the AUTOWARE technical approach is to develop cognitive planning and control capabilities supported by cloud tools and services and dedicated data management systems, which will contribute to meeting the real-time visibility and timing constraints of the cloudified planning and control algorithms for autonomous production services. Moreover, at the smart service level, AUTOWARE provides secure CPS capability exposure and trusted CPPS system modeling, design, and (self-)configuration. In this latter aspect, the incorporation of the TAPPS CPS application framework, coupled with the provision of a smart automation service store, will pave the way towards an open service market for digital automation solutions that are &#8220;cognitive by design.&#8221; The AUTOWARE cognitive operating system makes use of a combination of reliable M2M communications, human-robot interaction, modelling and simulation, and cloud- and fog-based data analytics schemes. In addition, taking into account the mission-critical requirements, this combination is deployed in a secure and safe environment, which includes validation and certification processes in order to guarantee its correct operation. All of this should enable a reconfigurable manufacturing system that enhances business productivity.</para>
<fig id="F14-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-8">Figure <xref linkend="F14-8" remap="14.8"/></link></label>
<caption><para>Smart Service Welt data-centric reference architecture.</para></caption>
<graphic xlink:href="graphics/ch14_fig008.jpg"/>
</fig>
<section class="lev2" id="sec14-5-1">
<title>14.5.1 AUTOWARE Certification Techniques</title>
<para>As previously stated, including validation and certification processes in the AUTOWARE Open CPPS ecosystem offers SMEs easier adoption, a secure environment, and greater credibility. The planning and control of production systems has become increasingly complex with regard to flexibility and productivity, as well as to the decreasing predictability of processes. It is well accepted that every production system should pursue the following three main objectives:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>providing the capability for rapid responsiveness,</para></listitem>
<listitem><para>enhancing product quality, and</para></listitem>
<listitem><para>producing at low cost.</para></listitem>
</itemizedlist>
<para>These requirements can be satisfied through highly stable and repeatable processes. However, they can also be achieved by creating short response times to deviations in the production system, the production process, or the configuration of the product, in coherence with overall performance targets. In order to obtain short response times, high process transparency and the reliable provisioning of the required information to the point of need, at the correct time and without human intervention, are essential. As a result, variable and adaptable systems are needed, resulting in complex, long, and expensive engineering processes.</para>
<para>Although CPPS are designed to work correctly under a range of environmental conditions, in practice it is often enough that they work properly under specific conditions. In this context, certification processes help guarantee correct operation under certain conditions, making the engineering process easier, cheaper, and shorter for SMEs that want to include CPPS in their businesses.</para>
<para>In addition, certification can increase the credibility and visibility of a CPPS, as it guarantees its correct operation, possibly in accordance with specific standards. If a CPPS is certified against an international or European standard or regulation, it does not need to be certified in each country, so the integration complexity, cost, and duration are greatly reduced. Nowadays, security and privacy are among the major concerns for every business. SMEs with no specific knowledge need to be able to quickly assess whether an item provides confidence that the required security and privacy are in place. For example, a minimal required barrier may need to be set to deter, detect, and respond to the distribution and use of insecure interconnected items throughout Europe and beyond.</para>
<para>Security certification as a means of security assurance demonstrates conformance to a security claim for an item and eases the adoption of CPPS. Many certification schemes exist, each having a different focus (product, systems, solutions, services, and organizations) and many assessment methodologies also exist (checklists and asset-based vulnerability assessment). Some of the most important standards related to security are as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">ISO 10218-1:2011:</emphasis> It is the standard that specifies the requirements and guidelines for the inherent safe design, protective measures, and information for use of industrial robots. It describes the basic hazards associated with robots and provides requirements to eliminate, or adequately reduce, the risks associated with these hazards. It does not address the robot as a complete machine. Noise emission is generally not considered a significant hazard of the robot alone, and consequently noise is excluded from the scope of ISO 10218-1:2011.</para></listitem>
<listitem><para><emphasis role="strong">ISO 10218-2:2011:</emphasis> It is the standard that specifies safety requirements for the integration of industrial robots and industrial robot systems as defined in ISO 10218-1, and industrial robot cell(s). The integration includes the following:</para>
<itemizedlist mark="circle" spacing="normal">
<listitem><para>the design, manufacturing, installation, operation, maintenance and decommissioning of the industrial robot system or cell;</para></listitem>
<listitem><para>necessary information for the design, manufacturing, installation, operation, maintenance and decommissioning of the industrial robot system or cell; and</para></listitem>
<listitem><para>component devices of the industrial robot system or cell.</para></listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<para>ISO 10218-2:2011 describes the basic hazards and hazardous situations identified with these systems and provides requirements to eliminate or adequately reduce the risks associated with these hazards. ISO 10218-2:2011 also specifies requirements for the industrial robot system as part of an integrated manufacturing system.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">ISO/TS 15066:2016:</emphasis> It is the standard that specifies the safety requirements for collaborative industrial robot systems and the work environment and supplements the requirements and guidance on a collaborative industrial robot operation given in ISO 10218-1 and ISO 10218-2.</para></listitem>
</itemizedlist>
<para>Various methods can be used to systematically test and improve the security of CPPS systems. Apart from testing individual software components for security-related errors, all components of the CPPS infrastructure can also be tested, and the associated processes can be systematically examined and improved.</para>
<para>Depending on the initial situation, technical security tests may start at various testing stages, from all phases of the engineering or development cycle to integration testing and acceptance of the production infrastructure. It is possible to identify and eliminate security faults and the resulting risks at an early stage for relatively little cost, saving money, improving the accuracy of the planning and staying one step ahead of potential hackers.</para>
<fig id="F14-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-9">Figure <xref linkend="F14-9" remap="14.9"/></link></label>
<caption><para>Main characteristics of CPPS solutions that are desired by SMEs.</para></caption>
<graphic xlink:href="graphics/ch14_fig009.jpg"/>
</fig>
<para>In summary, it is well known that SMEs choose CPPS solutions that assure correct operation, are easy and cheap to adapt, and are safe &amp; secure (see <link linkend="F14-9">Figure <xref linkend="F14-9" remap="14.9"/></link>). In addition, CPPS solutions have greater credibility if they are built with certified tools, guaranteeing their correct operation under specific conditions defined according to the specific application requirements. Moreover, certification increases the solution&#8217;s visibility and makes maintenance operations easier.</para>
</section>
<section class="lev2" id="sec14-5-2">
<title>14.5.2 N-axis Certification Schema</title>
<para>Once the AUTOWARE solution is finished, a certification process is needed in order to guarantee the solution&#8217;s correct operation and assure its compliance with regulations. As a result, the engineering, integration, and launching processes become easier, cheaper, and shorter for SMEs. The AUTOWARE-proposed certification methodology consists of the following stages.</para>
<section class="lev3" id="sec14-5-2-1">
<title>14.5.2.1 Data collection</title>
<para>In this step, all the data useful for the certification process is collected: for example, documentation, which components are involved, which technologies are used, what risks exist, etc. For the components, it is also necessary to determine whether they are critical, security, technological, or commercial components, and whether they are already certified. Table 14.2 shows an example of a possible template for collecting data related to the solution/production. This information can be provided directly by the client or obtained by the certifying team during a visual inspection.</para>
<para>Different options can be considered for the data collection, such as customer surveys, product/solution inspection, interviews, videos, etc. All of them are compatible and complementary to each other and their results can be combined.</para>
<para><emphasis role="strong">Table 14.2</emphasis> Data collection template for the certification process</para>
<para><graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphics/ch14_fig009a.jpg"/></para>
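<para>A data-collection template of the kind shown in Table 14.2 can be thought of as a structured record per component. The sketch below is purely illustrative: the keys, categories, and example values are hypothetical and are not taken from the AUTOWARE template.</para>
<programlisting>
```python
# Hypothetical sketch of a data-collection record for the certification
# process (cf. Table 14.2). Keys and categories are illustrative only.

def collect_component(name, category, certified, risks):
    """Build one component record; category is one of the kinds named above."""
    allowed = {"critical", "security", "technological", "commercial"}
    if category not in allowed:
        raise ValueError("unknown component category: " + category)
    return {
        "name": name,
        "category": category,
        "already_certified": certified,
        "risks": list(risks),
    }

record = collect_component(
    "robot-arm-controller", "critical", False, ["collision", "unexpected restart"]
)
print(record["category"])           # critical
print(record["already_certified"])  # False
```
</programlisting>
<para>Records gathered this way, whether from surveys, inspections, interviews, or videos, can then be merged into a single input for the strategy phase.</para>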
</section>
<section class="lev3" id="sec14-5-2-2">
<title>14.5.2.2 Strategy</title>
<para>An appropriate strategy must be determined depending on the specific product/solution and the data obtained during the data collection process. For this purpose, the following questions must be answered.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Which tools are the most appropriate?</para></listitem>
<listitem><para>How far does the certification process have to go?</para></listitem>
<listitem><para>What type of tests should be defined?</para></listitem>
</itemizedlist>
<para>Depending on the data obtained in the data collection process, an appropriate series of tests must be defined, covering as far as possible all the different possibilities: functional tests per component, integrity tests, unit tests per component, complete functional tests, etc.</para>
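<para>The test series named above can be assembled into a per-component plan. The sketch below is purely illustrative: the type names and component names are hypothetical and do not come from the AUTOWARE methodology.</para>
<programlisting>
```python
# Hypothetical sketch: grouping the test series named above by type so a
# strategy can be assembled per component. Names are illustrative only.

TEST_TYPES = (
    "functional_per_component",
    "integrity",
    "unit_per_component",
    "complete_functional",
)

def build_strategy(components, selected_types):
    """Return a mapping from component to its list of planned test types."""
    plan = {}
    for comp in components:
        plan[comp] = [t for t in TEST_TYPES if t in selected_types]
    return plan

strategy = build_strategy(
    ["gripper", "vision-module"],
    {"unit_per_component", "complete_functional"},
)
print(strategy["gripper"])  # ['unit_per_component', 'complete_functional']
```
</programlisting>
<para>Keeping the plan explicit in this way makes the test execution phase a straightforward walk over the selected tools and test types.</para>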
</section>
<section class="lev3" id="sec14-5-2-3">
<title>14.5.2.3 Test execution</title>
<para>During this phase, the different tests defined during the strategy process are executed using the selected tools.</para>
</section>
<section class="lev3" id="sec14-5-2-4">
<title>14.5.2.4 Analysis &amp; reports</title>
<para>The results obtained from the test execution process are analysed in order to detect possible errors and indicate the level of criticality. The results obtained from the data analysis are gathered in a relevant report for the customer.</para>
<para>This four-phase process, applied to the different system components and considering the different kinds of components, has to be combined with different fields of action (medicine, aviation, etc.) and with different standards (ISO/TS 15066, ISO 10218-1, ISO 10218-2). For this reason, the AUTOWARE certification scheme must be a multi-axis certification scheme such as that shown in <link linkend="F14-10">Figure <xref linkend="F14-10" remap="14.10"/></link>.</para>
<fig id="F14-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-10">Figure <xref linkend="F14-10" remap="14.10"/></link></label>
<caption><para>Multi-axis certification solution.</para></caption>
<graphic xlink:href="graphics/ch14_fig010.jpg"/>
</fig>
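<para>The multi-axis idea can be sketched as an enumeration of the combinations of the axes named above: component kind, field of action, and applicable standard. The axis values below are illustrative examples drawn from the text, not an exhaustive AUTOWARE list.</para>
<programlisting>
```python
# Hypothetical sketch of the multi-axis certification scheme: each
# certification case combines a component kind, a field of action,
# and a standard. Axis values are illustrative examples only.
from itertools import product

COMPONENT_KINDS = ["critical", "security", "technological", "commercial"]
FIELDS_OF_ACTION = ["medicine", "aviation", "manufacturing"]
STANDARDS = ["ISO/TS 15066", "ISO 10218-1", "ISO 10218-2"]

def certification_cases():
    """Enumerate every (component kind, field, standard) combination."""
    return list(product(COMPONENT_KINDS, FIELDS_OF_ACTION, STANDARDS))

cases = certification_cases()
print(len(cases))   # 36 cases: 4 kinds x 3 fields x 3 standards
print(cases[0])     # ('critical', 'medicine', 'ISO/TS 15066')
```
</programlisting>
<para>In practice only a subset of these combinations applies to a given deployment, which is exactly what the data collection and strategy phases determine.</para>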
</section>
</section></section>
<section class="lev1" id="sec14-6">
<title>14.6 DSA Certification Framework</title>
<para><emphasis role="strong">DSA RA-compliant components</emphasis> provided by large, well-known technology providers are the basis for the development of Core Products designed and validated for a purpose (previously defined as predictive maintenance, zero-defect manufacturing, energy-efficient manufacturing, etc.). Each <emphasis role="strong">DSA Core Product</emphasis> (as shown in <link linkend="F14-11">Figure <xref linkend="F14-11" remap="14.11"/></link>) should be composed of a set of DSA RA-compliant components with their matching datasheets (features and performance specifications), configuration &amp; programming profiles, and validation for purpose profiles (VPP), <emphasis role="strong">as guidance to ensure DATV</emphasis> when integrated in future solutions. Thanks to the support and training offered by Core Product owners, <emphasis role="strong">Medium &amp; Small Integrators</emphasis> within the DSA network of experts will offer their services for the implementation of a <emphasis role="strong">validated deployment</emphasis> with customized Core Products for the specific application demanded by the manufacturing SMEs.</para>
<para>AUTOWARE promotes the use of open protocols &amp; standards for the HW platform (OpenFog), connectivity (MQTT, TSN, iROS), control (IEC 61499), data protocols (OPC-UA), data sharing (IDS, FIWARE/ETSI Context Information Framework), and data security. Individual components should support the relevant open standards, APIs, and specifications to become part of the AUTOWARE framework. However, AUTOWARE does not promote the simple certification of individual components but rather the availability of <emphasis role="strong">core products</emphasis> (HW infrastructure and software services and digital platforms) that are <emphasis role="strong">constructed following the DSA RA architecture</emphasis>; built <emphasis role="strong">for a purpose</emphasis> (visualisation, analysis, prediction, reasoning) <emphasis role="strong">in the context of specific digital services</emphasis> (energy efficiency, zero-defect manufacturing, and predictive maintenance) <emphasis role="strong">for manufacturing lines</emphasis> (collaborative workspaces, robots, reconfigurable cells, modular manufacturing), as can be seen in <link linkend="F14-12">Figure <xref linkend="F14-12" remap="14.12"/></link>.</para>
<fig id="F14-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-11">Figure <xref linkend="F14-11" remap="14.11"/></link></label>
<caption><para>DSA-integrated approach for Digital Automation Solutions Certification.</para></caption>
<graphic xlink:href="graphics/ch14_fig011.jpg"/>
</fig>
<fig id="F14-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-12">Figure <xref linkend="F14-12" remap="14.12"/></link></label>
<caption><para>Digital automation solutions certification workflow.</para></caption>
<graphic xlink:href="graphics/ch14_fig012.jpg"/>
</fig>
<fig id="F14-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-13">Figure <xref linkend="F14-13" remap="14.13"/></link></label>
<caption><para>DSA capability development framework.</para></caption>
<graphic xlink:href="graphics/ch14_fig013.jpg"/>
</fig>
<fig id="F14-14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-14">Figure <xref linkend="F14-14" remap="14.14"/></link></label>
<caption><para>DSA service deployment path.</para></caption>
<graphic xlink:href="graphics/ch14_fig014.jpg"/>
</fig>
<para>The DSA approach, based on access to DATV Core Products and Solutions and to DSA expert professionals and services, will considerably reduce the integration and customization costs of validated deployments. Through the proposed certification framework and DATV tools, the DSA aims to maximize the Industry 4.0 ROI and ensure the future scalability/extendibility of digital shopfloors through the implementation of a capability development framework (shown in <link linkend="F14-13">Figure <xref linkend="F14-13" remap="14.13"/></link>) and a service deployment path (shown in <link linkend="F14-14">Figure <xref linkend="F14-14" remap="14.14"/></link>) that guide SMEs in their Digital Transformation strategy, in order to improve the visibility, analytics, predictability, and autonomy of their automation solutions.</para>
<para>Following a preliminary joint exploitation description of some of the main DSA ecosystem players (i.e. providers of the DSA products and services, medium &amp; small integrators, etc.), a detailed mapping of the <emphasis role="strong">DSA ecosystem players</emphasis> and their roles and strategies in the DSA ecosystem, based on a win-win model, is presented here:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Open HW and Open Platform initiatives and groups</emphasis> will provide the required support for the established DSA RA and open source Industry 4.0 technologies. They will join the DSA Ecosystem as members by signing a Memorandum of Understanding, MoU, where their contributions will be defined. DSA will support the interaction of the rest of the ecosystem players like integrators to ease the adoption of these technologies and the certifications associated with the open technologies. This category could integrate interested universities and research centres working on these open technologies.</para></listitem>
<listitem><para><emphasis role="strong">Manufacturing Champions</emphasis>, the key large manufacturing companies, will have an essential role in the DSA as the main tractors of the manufacturing sector, since they define the regulations and standardizations required of their provider networks. The DSA ecosystem will study and analyse the sector&#8217;s demands and needs to ensure the Manufacturing Champions&#8217; endorsement and align its activities in the right direction. As DSA members, Manufacturing Champions pay a fee that will allow them access to a DSA-validated network of homologated providers implementing DATV deployments that ensure safe operation, energy efficiency, and quality performance. First contacts with the main manufacturing companies will be made, for instance, through the Boost 4.0 Lighthouse European project.</para></listitem>
<listitem><para><emphasis role="strong">Technology Providers</emphasis>, the main technology providers (i.e. Siemens, Bosch, ATOS), will join the DSA Ecosystem as Core Product owners, offering DATV solutions ready to be customized and integrated in the shopfloor, together with the associated maintenance, support &amp; training services. Prior to joining DSA, technology providers&#8217; components and Core Products should be DSA open SW, open HW, and open platform compliant, providing the associated datasheet, configuration &amp; programming profiles, and validation for purpose profile. As DSA members, the Technology Providers will pay a fee that provides them with alternative access to the SME manufacturing companies&#8217; market, which is not profitable for the direct sale of their SW packages, platforms, and services. The Technology Providers will be able to offer adjusted prices to developers and thus access this SME market.</para></listitem>
<listitem><para><emphasis role="strong">Development Partners</emphasis> will form a network of small and medium integrators qualified to implement validated deployments of customized solutions for manufacturing SMEs. DSA will seek a first contact with the Digital SME Alliance to gain access to potential development partners. Prior to joining the DSA network of experts, these small and medium integrators should complete specific training on the Core Products and on DSA technologies, architecture, and strategy, and provide a signed SLA. The DSA network provides a homologation methodology based on training levels in the different DSA technologies and on the integrators&#8217; expertise which, together with a cost/time estimation table for different deployment complexities, will give them visibility and a way to improve their competitive position.</para></listitem>
<listitem><para><emphasis role="strong">Manufacturing SMEs</emphasis>: DSA offers them not only the services and technologies for digital transformation and the implementation of Digital Shopfloor technologies, but also visibility, as DSA-homologated providers, towards large manufacturing companies. Manufacturing SMEs will access the DSA not only through the DSA platform/ecosystem web, but also through the activities and services offered by clusters and DIHs focused on the manufacturing sector, and by other agents such as the Trilateral Alliance cooperation between the German Plattform Industrie 4.0, the French Alliance Industrie du Futur and the Italian initiative Piano Industria 4.0, or the Spanish Industria Conectada 4.0.</para></listitem>
</itemizedlist>
<para>DSA will also work towards the <emphasis role="strong">integration of standardization methodologies</emphasis> in DSA solutions and deployments, considering not only technological aspects but also other aspects such as data protection and the GDPR.</para>
<para>As a summary, Table 14.3 presents the initial players identified in the DSA ecosystem within the AUTOWARE project, and the potential players that will form the DSA ecosystem in the future.</para>
</section>
<section class="lev1" id="sec14-7">
<title>14.7 DSA Certification Methodology</title>
<para>The DSA intends to promote the appropriate ecosystem for developing and commercializing innovative solutions that respond and can be adapted to end-user needs. When defining the mission pursued by the DSA, the key aspects of starting an initiative of this kind were considered:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>WHY: A different way to commercialize I4.0 solutions; implement DT in industry; foster the creation of innovative products</para></listitem>
</itemizedlist>
<table-wrap position="float" id="T14-3">
<label><link linkend="T14-3">Table <xref linkend="T14-3" remap="14.3"/></link></label>
<caption><para>Identification of DSA players</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="rows">
<tbody>
<tr><td valign="top" align="left">DSA Players</td><td valign="top" align="left">Preliminary Stage (AUTOWARE)</td><td valign="top" align="left">Future Stages (DSA Potential Players)</td></tr>
<tr><td valign="top" align="left">Open HW and Open Platform initiatives and groups</td><td valign="top" align="left">UMH, CNR, Fraunhofer, imec, INNO (open initiatives alignment role)</td><td valign="top" align="left">FIWARE, IDS, openFog, iROS, OPC-UA. . . OpenForum Europe</td></tr>
<tr><td valign="top" align="left">Manufacturing Champions</td><td valign="top" align="left">Fraunhofer, INNO, Blue Ocean, SQS (manufacturing sector alignment role)</td><td valign="top" align="left">Key manufacturing large companies from different sectors (i.e. Boost4.0 champions) National Manufacturing Enterprise Associations (CONFINDUSTRIA, It&#8217;s OWL, FrenchTech. . . )</td></tr>
<tr><td valign="top" align="left">Technology Providers</td><td valign="top" align="left">TTTech, JSI, Robovision</td><td valign="top" align="left">EIT Digital, AIOTI Siemens, Bosch, SAP, Huawei, Telefonica, Azzure, CloudFlow, Dassault, ESI Group. . . Digital SME Alliance</td></tr>
<tr><td valign="top" align="left">Development Partners</td><td valign="top" align="left">SmartFactoryKL, JSI, Tekniker (Neutral Experimental Facilities as integrators)</td><td valign="top" align="left">Digital SME Alliance for Small &amp; Medium IT Integrators. . .</td></tr>
<tr><td valign="top" align="left">Manufacturing SMEs</td><td valign="top" align="left">SMC, Stora Enso (industrial Use Cases)</td><td valign="top" align="left">Sectorial clusters &amp; DIH, German Plattform Industrie 4.0, French Alliance Industrie du Futur, Italian initiative Piano Industria 4.0, Spanish AIC. . .</td></tr>
</tbody>
</table>
</table-wrap>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>HOW: Offering solutions rather than technologies; creating an ecosystem of beneficiaries spanning stakeholders from research to end-users</para></listitem>
<listitem><para>WHAT: Consultancy; Certification; Solutions; Integration</para></listitem>
</itemizedlist>
<para>This analysis has led us to the definition of four key sets of services to be offered within the DSA ecosystem: Consultancy, Certification, Integration, and Solutions, as shown in <link linkend="F14-15">Figure <xref linkend="F14-15" remap="14.15"/></link>, with the support of the certification framework that ensures easy configuration and operation of reliable, scalable, open-based Digital Automation Solutions with low cost and fast RoI deployment. As shown in <link linkend="F14-16">Figure <xref linkend="F14-16" remap="14.16"/></link>, the DSA certification methodology covers the Core Product key aspects needed to successfully support a manufacturing SME in its digital transformation strategy:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>FUNCTION: Identified key functionality aspects, defining processes and customisation, global, interoperability, and standard features of the core product </para></listitem>
</itemizedlist>
<fig id="F14-15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-15">Figure <xref linkend="F14-15" remap="14.15"/></link></label>
<caption><para>DSA key services.</para></caption>
<graphic xlink:href="graphics/ch14_fig015.jpg"/>
</fig>
<fig id="F14-16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-16">Figure <xref linkend="F14-16" remap="14.16"/></link></label>
<caption><para>DSA certification methodology.</para></caption>
<graphic xlink:href="graphics/ch14_fig016.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>TECHNOLOGY: Identified open SW/HW components, RA alignment, validation, and configuration tools</para></listitem>
<listitem><para>DELIVERY: Identified network of local experts for integration &amp; training, pricing model, and operations &amp; support services</para></listitem>
</itemizedlist>
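<para>The three certification dimensions above can be pictured as a simple compliance checklist. The following sketch is purely illustrative: the dimension names follow the text, but the individual criteria, the class, and its interface are hypothetical and not part of any DSA specification.</para>

```python
from dataclasses import dataclass, field

@dataclass
class CertificationProfile:
    """Illustrative DSA-style certification checklist for a Core Product.

    The three dimensions mirror the text (FUNCTION, TECHNOLOGY, DELIVERY);
    each maps hypothetical criterion names to a satisfied/unsatisfied flag.
    """
    function: dict = field(default_factory=dict)
    technology: dict = field(default_factory=dict)
    delivery: dict = field(default_factory=dict)

    def missing_criteria(self):
        """Return (dimension, criterion) pairs not yet satisfied."""
        gaps = []
        for dim_name in ("function", "technology", "delivery"):
            for criterion, satisfied in getattr(self, dim_name).items():
                if not satisfied:
                    gaps.append((dim_name, criterion))
        return gaps

    def is_certifiable(self):
        # A Core Product certifies only when all three dimensions are covered.
        return not self.missing_criteria()

profile = CertificationProfile(
    function={"process_definition": True, "interoperability": True},
    technology={"open_sw_components": True, "ra_alignment": False},
    delivery={"local_expert_network": True, "pricing_model": True},
)
print(profile.is_certifiable())    # False: RA alignment still missing
print(profile.missing_criteria())  # [('technology', 'ra_alignment')]
```

<para>Such a structure makes the certification gap explicit per dimension, which is the essence of a homologation methodology: a provider can see exactly which FUNCTION, TECHNOLOGY, or DELIVERY criteria remain open.</para>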
<para>Moreover, the integration of diverse stakeholders in the DSA ecosystem fosters the adoption of I4.0 and drives the creation of innovative products:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>TECHNOLOGY PROVIDERS: Provide core components and technologies; includes Open HW and Open Platform Initiatives and Groups, and research &amp; private entities</para></listitem>
<listitem><para>SOLUTION PROVIDERS: Core products/solution developers</para></listitem>
<listitem><para>INTEGRATORS: Adapt core products to end-user needs</para></listitem>
<listitem><para>VALIDATORS &amp; CERTIFIERS: Validate core products/solutions and their adaptations (V&amp;V service providers)</para></listitem>
<listitem><para>STANDARDIZATION BODIES: Technology and process related</para></listitem>
<listitem><para>MANUFACTURING: Large (Prescription &amp; Customers), SMEs (end- users/Customers), Clusters (Prescription), and Industrial Associations (Prescription)</para></listitem>
<listitem><para>CONSULTANCY: Digital transformation consultancy experts.</para></listitem>
</itemizedlist>
</section>
<section class="lev1" id="sec14-8">
<title>14.8 Conclusion</title>
<para>This chapter has presented the foundation of the DSA and the associated validation and verification framework as the basis for developing a manufacturing-driven multi-sided ecosystem. The DSA originated as a means for SMEs to navigate and exploit the large set of tools and platforms available for the development of digital solutions for the digital shopfloor. The chapter has discussed how the DSA approach can nurture synergies across multiple stakeholders for the benefit of SME digitization and the gradual, business-relevant integration of digital abilities in the digital shopfloor. It has presented the main standardization and compliance drivers, for instance digital shopfloor safety in advanced robotic systems, as one of the multipliers for adoption, together with the need for a DSA ecosystem that facilitates navigation across standards, platforms, and services with a focus on business competitiveness. It has also presented the fundamental services envisioned for such a DSA and the dimensions that need to be validated to ensure that digital abilities such as automatic awareness can be fully realized in the context of the cognitive manufacturing digital transformation.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>This work has been funded by the European Commission through the FoF-RIA Project <emphasis>AUTOWARE: Wireless Autonomous, Reliable and Resilient Production Operation Architecture for Cognitive Manufacturing</emphasis> (No. 723909).</para>
</section>
<section class="lev1" id="sec14-9">
<title>References</title>
<para>[1] Max Blanchet, The Industrie 4.0 transition: How it reshuffles the economic, social and industrial model. Available at: http://www.ims.org/wp-content/uploads/2017/01/2.02_Max-Blanchet_WMF2016.pdf</para>
<para>[2] Digital Shopfloor Alliance website. Available at: https://digitalshopflooralliance.eu/, last accessed September 2018.</para>
<para>[3] H2020 AUTOWARE project website. Available at: http://www.autoware-eu.org/, last accessed September 2018.</para>
<para>[4] Kollaborierende Robotersysteme. Planung von Anlagen mit der Funktion &#8220;Leistungs- und Kraftbegrenzung&#8221; [Collaborative robot systems. Planning of installations with the &#8220;power and force limiting&#8221; function], [Online]. Available at: https://www.dguv.de/medien/fb-holzundmetall/publikationendokumente/infoblaetter/infobl_deutsch/080_roboter.pdf, last accessed March 2018.</para>
<para>[5] ISO 10218-1:2011, &#8220;Robots and robotic devices &#8211; Safety requirements for industrial robots &#8211; Part 1: Robots&#8221;. Available at: https://www.iso.org/standard/51330.html</para>
<para>[6] ISO 10218-2:2011, &#8220;Robots and robotic devices &#8211; Safety requirements for industrial robots &#8211; Part 2: Robot systems and integration&#8221;. Available at: https://www.iso.org/standard/41571.html</para>
<para>[7] H2020 DAEDALUS project website. Available at: http://daedalus.iec61499.eu, last accessed September 2018.</para>
<para>[8] H2020 FAR-EDGE project website. Available at: http://www.faredge.eu/#/, last accessed September 2018.</para>
<para>[9] acatech &#8211; National Academy of Science and Engineering website. Available at: https://en.acatech.de/, last accessed September 2018.</para>
<para>[10] Industry 4.0 Smart Service Welt initiative. Available at: https://www.digitale-technologien.de/DT/Navigation/EN/Foerderprogramme/Smart_Service_Welt/smart_service_welt.html, last accessed September 2018.</para>
</section>
</chapter>

<chapter class="chapter" id="ch015" label="15" xreflabel="15">
<title>Ecosystems for Digital Automation Solutions: An Overview and the Edge4Industry Approach</title>
<para><emphasis role="strong">John Soldatos<superscript>1</superscript>, John Kaldis<superscript>2</superscript>, Tiago Teixeira<superscript>3</superscript>, Volkan Gezer<superscript>4</superscript> and Pedro Malo<superscript>3</superscript></emphasis></para>
<para><superscript>1</superscript> Kifisias 44 Ave., Marousi, GR15125, Greece</para>
<para><superscript>2</superscript> Athens Information Technology, Greece</para>
<para><superscript>3</superscript> Unparallel Innovation Lda, Portugal</para>
<para><superscript>4</superscript> German Research Center for Artificial Intelligence (DFKI), Germany</para>
<para>E-mail: jsol@ait.gr; jkaldis@ait.gr; tiago.teixeira@unparallel.pt; Volkan.Gezer@dfki.de; pedro.malo@unparallel.pt</para>
<para>Stakeholders&#8217; collaboration is a key to successful Industry 4.0 deployments. The scope of collaboration spans the areas of solution development and deployment, experimentation, training, standardization, and many other activities. To this end, Industry 4.0 vendors and solution providers are creating ecosystems around their project&#8217;s developments, which allow different stakeholders to collaborate. This chapter reviews some of the most prominent ecosystems for Industrial Internet of Things and Industry 4.0 solutions, including their services and business models. It also introduces the Edge4Industry ecosystem portal, which is a part of the ecosystem building efforts of the H2020 FAR-EDGE project.</para>
<section class="lev1" id="sec15-1">
<title>15.1 Introduction</title>
<para>The advent of the fourth industrial revolution (Industrie 4.0) is enabling a radical shift in manufacturing operations, including both factory automation and supply chain management operations. CPS (Cyber-Physical Systems)-based manufacturing facilitates the collection and processing of large volumes of digital data about manufacturing processes, assets, and operations, towards improving decision-making and driving the efficiency of processes such as production scheduling, quality control, asset management, maintenance, and more. In addition to access to the CPS and Industrial Internet of Things (IIoT) platforms that realize these improvements, both manufacturers and providers of industrial automation solutions need substantial support in testing, validating, and integrating novel applications in their factories. In support of these needs, a wide range of online platforms and services have emerged, including:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Online platforms for IIoT services, notably public cloud IoT services. These enable solution integrators to develop, deploy, and validate innovative services for manufacturers. Moreover, these platforms come with a wide range of support services, which are offered to the communities of developers, solution providers, and manufacturers working around them.</para></listitem>
<listitem><para>Testbed platforms for manufacturers and automation solution providers, which enable them to test and validate solutions prior to actual deployment, while supporting them in research and knowledge acquisition.</para></listitem>
<listitem><para>Software/middleware library providers. Instead of providing a complete online platform with a pool of related services, these providers focus on the provision of middleware services that could help other organizations to establish the CPS/IIoT infrastructure.</para></listitem>
</itemizedlist>
<para>These online platforms and services enable the formation of entire ecosystems around them. A business ecosystem is generally defined as an economic community that is supported by a range of interacting organizations and individuals. This community produces goods, services, and knowledge that provide value to the customers of the ecosystem, who are also considered members of the ecosystem along with suppliers, producers, competitors, and other stakeholders [1].</para>
<para>The development of such ecosystems is a key factor for the successful adoption of platforms such as the ones listed above. In this context, IIoT and Industry 4.0 projects and initiatives (such as our H2020 FAR-EDGE project that is described in previous chapters) should also undertake similar ecosystem building initiatives. In particular, one of the main objectives of FAR-EDGE is to create an ecosystem of manufacturers, factory operators, IT solutions integrators, and industrial automation solution providers around the project&#8217;s results, which will facilitate access to and sustainable use of the project&#8217;s assets. The FAR-EDGE ecosystem services will be provided as part of an on-line platform operating as a multi-sided market platform (MSP), which will bring together supply and demand for digital factory automation services based on the edge-computing paradigm. A wide range of solutions and services will be provided by FAR-EDGE to its ecosystem community, including industrial software and middleware-related services (e.g., automation and analytics solutions), as well as business and technical support services (e.g., support on solutions migration).</para>
<para>This chapter aims at providing insights on the IIoT ecosystems in general and the FAR-EDGE ecosystem in particular. The presentation of the existing ecosystems provides a comprehensive overview of the different types of services that they provide, as well as of their business models. Likewise, the presentation of FAR-EDGE ecosystem portal (www.edge4industry.eu) provides an overview of the solutions, services, and the knowledge base that are provided as part of the project and are made available to the community. The chapter is structured as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Section 15.2, following this introduction, presents a review of some of the most representative Industry 4.0 and IIoT ecosystems and their services;</para></listitem>
<listitem><para>Section 15.3 provides a comparative analysis of the presented ecosystems, including a description of their business models;</para></listitem>
<listitem><para>Section 15.4 introduces the Edge4Industry ecosystem portal and describes its structure and services; and</para></listitem>
<listitem><para>Section 15.5 is the final and concluding section of the chapter.</para></listitem>
</itemizedlist>
</section>
<section class="lev1" id="sec15-2">
<title>15.2 Ecosystem Platforms and Services for Industry 4.0 and the Industrial Internet-of-Things</title>
<para>In the following paragraphs, we describe a representative sample of the IIoT platforms and their ecosystems, as well as a range of other Industry 4.0 platforms and testbeds, including their validation and experimentation services. Each ecosystem platform is presented both in terms of its technical/technological characteristics and in terms of its business model.</para>
<section class="lev2" id="sec15-2-1">
<title>15.2.1 ThingWorx Foundation (Platform and Ecosystem)</title>
<para>The ThingWorx Foundation (www.thingworx.com) that is now part of PTC (https://www.ptc.com) provides a platform for the development and deployment of enterprise-ready, cloud-based IoT solutions. It is an end-to-end solution, which provides access to all elements that comprise an IoT application. Its main value proposition lies in the provision of a simple and seamless way for developing IoT applications, which reduces the development and deployment efforts.</para>
<para>ThingWorx&#8217;s services are accessible to the developers via a developers&#8217; portal and can be classified as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Connectivity Services (Make): Based on the ThingWorx platform, one can connect devices, sensors, and systems, among themselves but also with other systems. Connectivity and information exchange is facilitated in order to reduce the time and effort needed for rapid development of integrated solutions.</para></listitem>
<listitem><para>Data Analytics (Analyze): The ThingWorx platform provides the means for analyzing the data derived from connected IoT devices.</para></listitem>
<listitem><para>Development/Coding Services (Code): The platform offers development tools and APIs, which provide development flexibility and increase the overall productivity of solution integrators.</para></listitem>
</itemizedlist>
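<para>The connect-and-analyze pattern behind these services can be illustrated with a vendor-neutral sketch. This is not ThingWorx API code: the JSON envelope, the property names, and the device identifiers below are hypothetical, chosen only to show how a connectivity layer might ingest device readings and an analytics layer aggregate them.</para>

```python
import json
import statistics

# Hypothetical device payloads, as a platform's connectivity layer might receive them.
readings = [
    {"device_id": "press-01", "property": "temperature_c", "value": v}
    for v in (71.2, 72.8, 74.1, 73.5)
]

def to_platform_message(reading):
    """Wrap a raw reading in a generic JSON envelope for ingestion (Make step)."""
    return json.dumps({"thing": reading["device_id"],
                       "properties": {reading["property"]: reading["value"]}})

messages = [to_platform_message(r) for r in readings]

# Analyze step: aggregate the ingested values on the platform side.
values = [json.loads(m)["properties"]["temperature_c"] for m in messages]
print(round(statistics.mean(values), 2))  # 72.9
```

<para>In a real deployment, the Code step would use the platform's own SDK and APIs in place of these plain-Python helpers, but the division of labour (connect, then analyze, then build on top) stays the same.</para>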
<para>While ThingWorx is a general-purpose platform for IoT solutions, smart manufacturing is explicitly listed as one of its primary markets of application. In this direction, ThingWorx provides a wide range of functionalities for interconnecting assets within factories, plants, and supply chains with business information systems. Moreover, some components of the platform, such as its AR (Augmented Reality) and IoT-based immersion module, are demonstrated as part of manufacturing scenarios such as industrial maintenance.</para>
<para>Around the ThingWorx platform, the foundation has been building an ecosystem that provides a complete set of integrated IoT-specific development tools and capabilities in order to ease the delivery of IoT solutions. The ThingWorx ecosystem comprises the following participants, concepts, structures, and associated stakeholders&#8217; roles:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Partners: Enterprises are offered the opportunity to join the ThingWorx ecosystem as partners on the basis of a variety of different partnership programmes, which cover various needs. In particular, partner programmes are available for: (i) Enterprises building IoT solutions based on the ThingWorx platform; (ii) Companies that build products that are certified by ThingWorx and made available through the ThingWorx marketplace; (iii) Professional service providers who opt to offer consulting, solution design, and technical delivery services based on the ThingWorx IoT platform; these partners are called &#8220;services partners&#8221; and are provided with dedicated education and enablement resources; and (iv) Resellers of ThingWorx-based technologies, which participate in the &#8220;ThingWorx Channel Advantage&#8221; programme and can benefit from earning margins for reselling ThingWorx solutions.</para></listitem>
<listitem><para>Marketplace: The ThingWorx Marketplace provides access to everything needed in order to build and run ThingWorx-based IoT applications, including extensions, apps, and partners that can facilitate the development of IoT solutions based on the platform. The marketplace component of the ecosystem is therefore a means for the extensibility of the ecosystem.</para></listitem>
<listitem><para>Academic Programme: An academic programme is also offered to students, researchers, makers, universities, and trainers. It is an IoT education programme, which is built over the platform, leveraging its practical features and content.</para></listitem>
</itemizedlist>
</section>
<section class="lev2" id="sec15-2-2">
<title>15.2.2 Commercial Cloud-Based IIOT Platforms</title>
<para>All major IT and industrial automation vendors are offering cloud-based IIoT services. Likewise, they are also building ecosystems around these platforms or, in most cases, expanding their existing ecosystems into the IIoT space. A detailed analysis of each of the public IIoT services providers is beyond the scope of this chapter. Nevertheless, we can make a broad classification of the available services into the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">General purpose public IoT cloud services</emphasis>, which are typically offered by IT vendors. These include, for example, Microsoft&#8217;s Azure IoT Suite, IBM&#8217;s Watson IoT platform, SAP&#8217;s HANA Cloud Platform with IoT support and extensions, Amazon AWS IoT, LogMeIn&#8217;s Xively platform, and more. These platforms are not tailored to a specific vertical industry. Rather, they provide scalable and cost-effective cloud infrastructures for IoT, which can be used to develop, deploy, and operate solutions in different industries.</para></listitem>
<listitem><para><emphasis role="strong">IIoT services for industrial automation</emphasis>, which are typically offered by industry leaders in industrial automation solutions, including SIEMENS, Bosch, and ABB. In several cases, there are partnerships between IIoT vendors and providers of IT (IoT/cloud) infrastructure services, as evident in the case of ABB and Microsoft, but also in the fact that Bosch&#8217;s IoT services run over various cloud infrastructure platforms such as Amazon&#8217;s. These partnerships are overall indicative of the distinction of business roles.</para></listitem>
</itemizedlist>
<para>The scope of these services includes connectivity services along with the offering of tools for rapid and cost-effective application developments.</para>
<para>Each of the above-listed platforms is associated with an ecosystem of developers, solution providers, and business partners. In most cases, the above-listed vendors act as ecosystem expanders in the IoT/IIoT space, given that they primarily extend the ecosystem of their existing accounts, customers, and business partners into the area of IoT. Access to the IIoT services (including consulting, technical support, training, and hosting, but mainly turn-key solution deployments) is provided on a commercial basis under appropriate SLAs (Service Level Agreements). Both public cloud services and private cloud services are offered. Public cloud services are charged in a pay-per-use modality (e.g., pay-per-use and pay-as-you-go services are offered by Microsoft Azure, Amazon AWS IoT, and Xively).</para>
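<para>The pay-per-use modality can be illustrated with a toy billing calculation. The tier boundaries and prices below are invented for illustration and do not correspond to any vendor's actual price list.</para>

```python
def monthly_cost(messages, tiers):
    """Compute a pay-per-use bill from tiered per-message prices.

    `tiers` is a list of (message_cap, price_per_million) pairs, in order;
    the last cap may be None, meaning 'no upper limit'. All figures here
    are hypothetical, not real vendor pricing.
    """
    cost, consumed = 0.0, 0
    for cap, price_per_million in tiers:
        if cap is None:
            band = messages - consumed
        else:
            band = min(cap - consumed, messages - consumed)
        if band <= 0:
            break
        cost += band / 1_000_000 * price_per_million
        consumed += band
    return round(cost, 2)

# Invented tier table: first 1M messages at 1.00 per million, 0.40 beyond.
tiers = [(1_000_000, 1.00), (None, 0.40)]
print(monthly_cost(500_000, tiers))    # 0.5
print(monthly_cost(3_000_000, tiers))  # 1.8
```

<para>The attraction for SMEs is visible in the shape of this function: cost scales with actual usage and marginal prices fall with volume, so there is no up-front license to amortize.</para>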
</section>
<section class="lev2" id="sec15-2-3">
<title>15.2.3 Testbeds of the Industrial Internet Consortium</title>
<para>The Industrial Internet Consortium (IIC) is an open-membership, international, not-for-profit consortium that is leading the establishment of architectural frameworks and overall directions for the Industrial Internet. Its members represent large and small industries, entrepreneurs, academics, and government organizations. This global, member-supported organization promotes the accelerated growth of the IIoT by coordinating ecosystem initiatives to securely connect, control, and integrate assets and systems of assets with people, processes, and data, using common architectures, interoperability, and open standards, in order to deliver transformational business and societal outcomes across industries and public infrastructure.</para>
<para>The IIC scope includes the identification and location of sensor devices, the data exchange between them, the control and integration of collections of heterogeneous devices, and data extraction and storage, as well as data and predictive analytics. The challenge for the IIC is to ensure that these efforts come together into a cohesive whole. The IIC Working Groups coordinate and establish the priorities and enabling technologies of the Industrial Internet in order to accelerate market adoption and drive down the barriers to entry. There are currently 19 Working Groups and teams, organized into broad areas including Business Strategy and Solution Lifecycle, Legal, Liaison, Marketing, Membership, Security, Technology, and Testbeds.</para>
<para>One of the areas of focus of the IIC is the development of Testbeds. A testbed is a controlled experimentation platform that:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Implements specific use cases and scenarios,</para></listitem>
<listitem><para>Produces testable outcomes to confirm that an implementation conforms to expected results,</para></listitem>
<listitem><para>Explores untested or existing technologies working together (interoperability testing),</para></listitem>
<listitem><para>Generates new (and potentially disruptive) products and services, and</para></listitem>
<listitem><para>Generates requirements and priorities for standards organizations supporting the Industrial Internet.</para></listitem>
</itemizedlist>
<para>Testbeds are a major focus and activity of the IIC and its members. The Testbed Working Group accelerates the creation of testbeds for the Industrial Internet and serves as the advisory body for testbed proposal activities for members. It is the centralized group which collects testbed ideas from member companies and provides systematic yet flexible guidance for new testbed proposals. Testbeds are where the innovation and opportunities of the Industrial Internet &#8211; new technologies, new applications, new products, new services, and new processes &#8211; can be initiated, thought through, and rigorously tested to ascertain their usefulness and viability before coming to market.</para>
</section>
<section class="lev2" id="sec15-2-4">
<title>15.2.4 Factory Automation Testbed and Technical Aspects</title>
<para>One type of testbed, known as Platform as a Service (PaaS) for Factory Automation (FA), is expected to facilitate the integration of IoT systems that connect manufacturing sites and head offices in order to strengthen operations, for instance through the globalization of supply chains and through improved production quality, delivery times, and productivity when responding to sudden market changes. The FA testbed provides connectivity between factory and cloud, a data analytics platform, and security resources, in order to ease FA application development for application providers and FA equipment vendors. Based on the facilities of the testbed, application providers and FA equipment vendors have the opportunity to develop and provide solutions to manufacturers and factory operators at minimum effort and cost, by engaging only in the development of the core logic of each application, rather than in the development of the industrial middleware as well. Overall, the testbed provides the following features to streamline the application development process:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Connectivity between Factory and Cloud where architectures differ;</para></listitem>
<listitem><para>APIs specialized in FA, which are re-usable for FA applications: Edge Applications, Cloud Applications, and Domain Applications;</para></listitem>
<listitem><para>Security to protect the Factory brown field from the outside network; and</para></listitem>
<listitem><para>Integration of data from the Business Systems.</para></listitem>
</itemizedlist>
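<para>The first of these features, bridging a factory network and a cloud back-end whose architectures differ, typically involves an edge gateway that buffers readings locally and forwards them upstream in batches. The sketch below is a generic illustration of that pattern only; the class and its interface are hypothetical and not part of the IIC testbed.</para>

```python
from collections import deque

class EdgeGateway:
    """Toy factory-to-cloud bridge: buffer readings locally, flush in batches."""

    def __init__(self, batch_size, uplink):
        self.batch_size = batch_size   # readings per cloud upload
        self.uplink = uplink           # callable that ships a list of readings
        self.buffer = deque()

    def ingest(self, reading):
        """Called from the factory side; flushes when a batch is full."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Forward whatever is buffered (e.g. an HTTPS POST through the firewall)."""
        if self.buffer:
            batch = list(self.buffer)
            self.buffer.clear()
            self.uplink(batch)

uploads = []
gw = EdgeGateway(batch_size=3, uplink=uploads.append)
for value in (1, 2, 3, 4):
    gw.ingest(value)
gw.flush()         # forward the partial batch on shutdown
print(uploads)     # [[1, 2, 3], [4]]
```

<para>Batching at the edge keeps chatty field devices off the outside network, which is also where the testbed's security boundary between the factory brownfield and the cloud naturally sits.</para>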
<para>IIC testbeds are privately funded by member companies or publicly funded by government agencies, while hybrid models involving both public and private funding are also possible.</para>
</section>
<section class="lev2" id="sec15-2-5">
<title>15.2.5 Industry 4.0 Testbeds</title>
<para>As part of the platform &#8220;Industrie 4.0&#8221; in Germany, several testbeds have been established at specialist centres within German universities and research institutions. These testbeds enable the testing and validation of complex production and logistics systems under realistic conditions. They are intended to be used by mechanical and plant engineering companies, notably Small and Medium Enterprises (SMEs). The latter are provided with facilities for testing their I4.0 developments in realistic, near-operational conditions, prior to their deployment in actual production environments. The testbeds are also addressed to factory operators wishing to take advantage of CPS manufacturing in a way that reduces barriers and risks.</para>
<para>As already outlined, the Industry 4.0 testbeds are a publicly supported and funded initiative for evaluating innovative approaches to CPS manufacturing. This initiative is addressed to equipment manufacturers and operators. Along with access to the testbed infrastructure, members of the Industry 4.0 platform are offered access to a range of advisory and coordination services. A central coordination office at the Federal Ministry of Education and Research (BMBF) provides funding support for the testing of innovative Industry 4.0 components by SMEs at the various testbeds. As part of the offered advisory services, BMBF provides SMEs with advice on the most appropriate testbeds to use, while at the same time undertaking the focused dissemination of the results towards specialist communities. In this way, BMBF&#8217;s initiatives complement the activities undertaken by the Centres of Excellence (CoE) funded by the Federal Ministry for Economic Affairs and Energy (BMWi). The latter CoEs are primarily destined to support operators of machine and plant equipment.</para>
<section class="lev3" id="sec15-2-5-1">
<title>15.2.5.1 SmartFactory pilot production lines &#8211; testbeds</title>
<para>The FAR-EDGE project partner SmartFactory participates in the provision of various testbed services for Industry 4.0, which are presented here as indicative examples. In particular, SmartFactory provides several production lines for integrating, customizing, testing, and demonstrating CPS solutions in a realistic industrial production setup. All of its experimental production lines are strictly modular and are composed of devices from several different vendors, identical to those found in most modern industrial plants. This open and modular design facilitates their use as testbeds for various experiments. Several demonstrators have been built along four main production lines:</para>
</section>
<section class="lev3" id="sec15-2-5-2">
<title>15.2.5.2 Industry 4.0 production line</title>
<para>The first test-bed is a multi-vendor, highly modular factory system with &#8220;plug n&#8217; play&#8221; module extension. The independent modules fulfil vendor-independent standards defined by SmartFactory, which are based on widely accepted communication protocols. This test-bed, which represents the key concept of &#8220;Industry 4.0&#8221;, has the following features: 1) Service-oriented production line with modular CPS-based field devices, 2) Multi-vendor, highly modular factory system with &#8220;plug n&#8217; play&#8221; module extension, and 3) Demonstration platform for distributed processes based on communicating components. As shown in <link linkend="F15-1">Figure <xref linkend="F15-1" remap="15.1"/></link>, items 1 to 10 are production modules, while 11 to 15 are infrastructure boxes connecting with 16 to 22 into an integrated IT system.</para>
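<para>The plug n&#8217; play idea can be illustrated with a short sketch: a module publishes a vendor-neutral self-description when it is attached, and the line integrates it without any vendor-specific step. All names and the description schema below are invented for illustration; they are not SmartFactory&#8217;s actual interfaces.</para>

```python
from dataclasses import dataclass

@dataclass
class ModuleDescription:
    """Vendor-neutral self-description a module publishes on plug-in (hypothetical schema)."""
    module_id: str
    vendor: str
    services: list  # production services the module offers, e.g. "drilling"

class ProductionLine:
    """Registry of currently plugged-in modules; any vendor's module can join."""
    def __init__(self):
        self.modules = {}

    def plug_in(self, desc: ModuleDescription):
        # No vendor-specific integration: the self-description is enough.
        self.modules[desc.module_id] = desc

    def capabilities(self):
        # The line's capabilities are the union of all module services.
        return sorted({s for d in self.modules.values() for s in d.services})

line = ProductionLine()
line.plug_in(ModuleDescription("M1", "VendorA", ["drilling"]))
line.plug_in(ModuleDescription("M2", "VendorB", ["assembly", "inspection"]))
print(line.capabilities())  # ['assembly', 'drilling', 'inspection']
```

Removing or adding a module changes the line&#8217;s capability set without touching the other modules, which is the essence of the modular, service-oriented design described above.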
</section>
<section class="lev3" id="sec15-2-5-3">
<title>15.2.5.3 SkaLa (scalable automation with Industry 4.0 technologies)</title>
<para>In today&#8217;s market, customers not only need products that they can configure individually, but also products that are cost-effective and readily available. Meeting these requirements calls for a flexible and efficient approach to manufacturing. One way to meet these challenges is provided by &#8220;SkaLa&#8221;, a demonstration unit that offers a scalable automation process.</para>
<para>The mobile demo unit can, as the situation requires, scale the degree of automation of its process. This scalability is based on a fully decentralized, controlled manufacturing process made possible by cyber-physical systems (CPS). Each work step is carried out by independently operating modules, which communicate with each other and control the process. The system can be expanded with a robot module via standardized interfaces to add an automated production component. In manual mode, workers are supported with projected recommendations for work steps. For improved flexibility, both order management and service activities are supported via mobile devices.</para>
<fig id="F15-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-1">Figure <xref linkend="F15-1" remap="15.1"/></link></label>
<caption><para>SmartFactory&#8217;s Industrie 4.0 production line.</para></caption>
<graphic xlink:href="graphics/ch15_fig001.jpg"/>
</fig>
<fig id="F15-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-2">Figure <xref linkend="F15-2" remap="15.2"/></link></label>
<caption><para>SkaLa production line.</para></caption>
<graphic xlink:href="graphics/ch15_fig002.jpg"/>
</fig>
</section>
<section class="lev3" id="sec15-2-5-4">
<title>15.2.5.4 Key finder (The keyfinder production line from SmartFactoryKL)</title>
<para>SmartFactoryKL has presented a unique demonstration plant as the central exhibit of the Forum Industrial IT together with the German Research Center for Artificial Intelligence (DFKI) at the Hannover Messe industrial trade fair in Hanover. On the basis of a complete production line, the relevant aspects of the fourth industrial revolution were exemplified for the first time using innovative information and communication technologies. The modular plant shows the flexible, customized manufacturing of an exemplary product, the components of which (housing cover, housing base and circuit board) are handled, mechanically processed and mounted.</para>
</section>
<section class="lev3" id="sec15-2-5-5">
<title>15.2.5.5 SME 4.0 competence center Kaiserslautern</title>
<para>The SME 4.0 Competence Center Kaiserslautern is one of several regional centers of excellence launched by the Federal Ministry for Economic Affairs and Energy (BMWi). The aim of this nationwide funding initiative is to highlight the importance of Industry 4.0 for the future of SMEs, to inform SMEs about the great opportunities in this area, and to actively support them in the implementation of projects.</para>
<para>As part of its mission, the SME 4.0 Competence Center Kaiserslautern assists companies from Rhineland-Palatinate and Saarland. The aim is to offer them an extensive, up-to-date knowledge base and valuable practical experience in the area of Industry 4.0. The focus is, in particular, on sharing know-how from many years of research and implementation with small and medium enterprises.</para>
<para>The SME 4.0 Competence Center Kaiserslautern consists of four partners, namely Technology Initiative SmartFactoryKL e.V., the German Research Center for Artificial Intelligence GmbH, the Kaiserslautern University of Technology and the Institute for Technology and Work e.V.</para>
</section>
</section>
<section class="lev2" id="sec15-2-6">
<title>15.2.6 EFFRA Innovation Portal</title>
<para>To foster information exchange and collaboration between innovation projects and the EC, the European Factory of the Future Research Association (EFFRA) has created an Innovation Portal, which serves as a single entry point to information about FoF projects and their results. The EFFRA Innovation Portal stimulates clustering, maps projects onto the &#8216;Factories of the Future 2020&#8217; roadmap, and allows for project monitoring and impact measurement. Within the portal, each project profile provides a summary of the project work and information on its consortium.</para>
<para>The portal is currently accessible to EFFRA project members, although it also contains publicly accessible pages. It is maintained by EFFRA with support from EU projects involved in the association.</para>
</section>
<section class="lev2" id="sec15-2-7">
<title>15.2.7 FIWARE Project and Foundation</title>
<para>The FIWARE Community, led by EU industry and supported by the academic community, has built an open, sustainable ecosystem and several implementation-driven software platform standards that ease the development of new smart applications in multiple sectors. Its main goal is to enable an open community of developers, including entrepreneurs, application sponsors, and platform providers. FIWARE provides one of the most prominent operational Future Internet platforms in Europe. The platform offers a simple yet powerful set of open public APIs that ease the development of applications in multiple vertical sectors. The implementation of a FIWARE Generic Enabler (GE) becomes a building block of a FIWARE instance. Any implementation of a GE is made up of a set of functions and provides a concrete set of APIs and interoperable interfaces that comply with the open specifications published for that GE. The FIWARE project delivers reference implementations for each defined GE, where an abstract specification layer allows the substitution of any Generic Enabler with alternative or custom-made equivalents.</para>
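<para>As a concrete illustration of FIWARE&#8217;s open public APIs, the sketch below builds a context entity payload in the shape accepted by the NGSI-v2 API of the Orion Context Broker (FIWARE&#8217;s context management GE), which expects entities to be created via <emphasis>POST /v2/entities</emphasis>. The helper function and the entity names are our own illustrative choices, and no network call is made.</para>

```python
import json

def ngsi_entity(entity_id, entity_type, **attrs):
    """Build an NGSI-v2 entity payload of the shape accepted by POST /v2/entities.

    Each keyword argument becomes an attribute object {"value": ..., "type": ...};
    the NGSI type is inferred naively here (real clients can set it explicitly).
    """
    entity = {"id": entity_id, "type": entity_type}
    for name, value in attrs.items():
        ngsi_type = "Number" if isinstance(value, (int, float)) else "Text"
        entity[name] = {"value": value, "type": ngsi_type}
    return entity

# Hypothetical machine entity for a shopfloor context.
payload = ngsi_entity("urn:ngsi:Machine:001", "Machine",
                      temperature=26.5, status="active")
print(json.dumps(payload, indent=2))
# A client would POST this JSON to http://<broker-host>/v2/entities
```

Because every GE implementation must comply with the same open specification, an application built against this payload shape can swap one context broker implementation for another, which is exactly the substitutability the abstract specification layer is meant to provide.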
<para>FIWARE&#8217;s main contribution is the gathering of the best available design patterns, emerging standards, and open-source components, and putting them to work together through well-defined open interfaces. A great deal of knowledge is embedded in the platform, which lowers the learning curve and mitigates the risk of poor architecture design. The scope of the platform is also very wide, covering the whole pipeline of any advanced cloud solution: connectivity to the IoT, Big Data processing and analysis, real-time media, cloud hosting, data management, applications, services, security, and more. FIWARE does not only accelerate the development of robust and scalable cloud-based solutions; it also establishes the basis for an open ecosystem of smart applications. In the FIWARE sense, being smart means being context-aware and able to interoperate with other applications and services, and this is where FIWARE excels.</para>
<para>Over the years, FIWARE has developed an ecosystem of developers, integrators, and users of FIWARE technologies, which includes several SMEs. An instrumental role in the establishment and development of the FIWARE ecosystem has been played by the FIWARE Acceleration Programme, which promoted the take-up of FIWARE technologies among solution integrators and application developers, with a special focus on SMEs and start-ups. Around this programme, the EU has also launched an ambitious campaign through which SMEs, start-ups, and web entrepreneurs can obtain funding support for the development of innovative services and applications using FIWARE technology. This support is intended to be continuous and sustainable in the future, engaging accelerators, venture capitalists, and businesses who believe in FIWARE.</para>
<para>The FIWARE ecosystem is supported and sustained by the FIWARE Foundation, a legally independent body providing shared resources to help achieve the FIWARE mission. The foundation focuses on promoting, augmenting, protecting, and validating the FIWARE technologies, while at the same time organizing activities and events for the FIWARE community. The latter empower its members (end users, developers, integrators, and other stakeholders in the entire ecosystem).</para>
<para>Note that the FIWARE Foundation is open, as anybody can join and contribute to a transparent governance of FIWARE activities. The foundation operates on the basis of the principles of openness, transparency and meritocracy.</para>
</section>
<section class="lev2" id="sec15-2-8">
<title>15.2.8 ARROWHEAD ARTEMIS JU Project and ARROWHEAD Community</title>
<para>The Arrowhead project implemented a framework for developing service-oriented industrial automation solutions in five business domains, namely: production (process and manufacturing), smart buildings and infrastructures, electromobility, energy production, and virtual markets of energy. The project&#8217;s framework ensures interoperability between different systems and approaches for implementing Service-Oriented Architecture (SOA)-based solutions in the target industries. To this end, Arrowhead provides and enables the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A system to make its services known to service consumers;</para></listitem>
<listitem><para>A system for service consumers to discover the services that they want/need to consume;</para></listitem>
<listitem><para>Authorized use of services provided by some service provider to a service consumer; and</para></listitem>
<listitem><para>Orchestration of systems, including control of the provided service instances that a system shall consume.</para></listitem>
</itemizedlist>
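<para>The four functions listed above (service registration, discovery, authorization, and orchestration) can be sketched as a toy in-memory registry. This is illustrative only: in the actual Arrowhead Framework these functions are provided by dedicated core systems exposed as services, and all names below are our own.</para>

```python
class ServiceRegistry:
    """Toy sketch of the four Arrowhead core functions in a single class."""
    def __init__(self):
        self.services = {}        # service name -> list of provider endpoints
        self.auth_rules = set()   # (consumer, service) pairs allowed to connect

    def register(self, name, endpoint):
        """A provider system makes its service known."""
        self.services.setdefault(name, []).append(endpoint)

    def discover(self, name):
        """A consumer system looks up the providers of a service."""
        return self.services.get(name, [])

    def authorize(self, consumer, name):
        """Grant a consumer the right to use a service."""
        self.auth_rules.add((consumer, name))

    def orchestrate(self, consumer, name):
        """Select the provided service instance a consumer shall use."""
        if (consumer, name) not in self.auth_rules:
            raise PermissionError(consumer + " may not consume " + name)
        providers = self.discover(name)
        return providers[0] if providers else None

reg = ServiceRegistry()
reg.register("temperature", "http://sensor-a/temp")   # provider registers
reg.authorize("hmi-panel", "temperature")             # consumer is authorized
print(reg.orchestrate("hmi-panel", "temperature"))    # http://sensor-a/temp
```

The key design point mirrored here is that consumers never hard-code provider endpoints: they obtain them at runtime through discovery and orchestration, which is what makes Arrowhead-style systems-of-systems loosely coupled.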
<para>The Arrowhead Framework contains common solutions for the core functionality in the area of Information Infrastructure, Systems Management, and Information Assurance as well as the specification for the application services carrying information vital for the process being automated.</para>
<para>Arrowhead is a recently concluded project that offers a range of industrial middleware solutions to developers and deployers of industrial automation systems. It also provides resources that help developers develop, deploy, maintain, and manage Arrowhead-compliant systems, including technical resources that foster a common understanding of how Services, Systems, and Systems-of-Systems are defined and described. These resources include design patterns, documentation templates, and guidelines that help systems, whether newly developed or legacy, conform to the Arrowhead Framework specifications.</para>
<para>Arrowhead has managed to establish around its framework an ecosystem of solution developers, along with end-users for the target industry areas, as well as associated use cases where the framework has been deployed and used.</para>
</section>
</section>
<section class="lev1" id="sec15-3">
<title>15.3 Consolidated Analysis of Ecosystems &#8211; Multi-sided Platforms Specifications</title>
<section class="lev2" id="sec15-3-1">
<title>15.3.1 Consolidated Analysis</title>
<para>In the following paragraphs, we consolidate the services and business models outlined in the previous section. <link linkend="F15-3">Figure <xref linkend="F15-3" remap="15.3"/></link> provides a high-level taxonomy of these services, including the different ecosystems that offer them.</para>
<para>The business and sustainability models of the various ecosystems are essential for their longer-term viability. The main monetization strategies are as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Revenues from sales or use of services on a commercial basis (licensed or pay-as-you-go models):</emphasis> The ecosystems of the large vendors provide commercial services for end users and providers of IIoT solutions. The services are provided based on either licensed or pay-as-you-go models. The latter are the primary monetization modality for public cloud services, yet they are also offered as part of the private cloud services that vendors build for manufacturers.</para></listitem>
</itemizedlist>
<fig id="F15-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-3">Figure <xref linkend="F15-3" remap="15.3"/></link></label>
<caption><para>Overview of Services offered by various IIoT/Industry 4.0 ecosystems and communities.</para></caption>
<graphic xlink:href="graphics/ch15_fig003.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Sales of complementary services:</emphasis> Complementary services (notably training, education, advisory, and consulting services) are also provided on a commercial basis as part of the presented ecosystems. These services are offered separately or bundled with IIoT solution development, hosting, and deployment services.</para></listitem>
<listitem><para><emphasis role="strong">Public funding support services:</emphasis> Several of the services (such as some of the testbed services) are financed by public funding (including projects) or even by the combination of private and public funding sources.</para></listitem>
<listitem><para><emphasis role="strong">Membership fees:</emphasis> In foundations (such as FIWARE) and associations (such as EFFRA), income is also generated from membership fees.</para></listitem>
</itemizedlist>
<para>There are different types of legal entities that support the above-listed monetization models. These include commercial entities, associations and non-profit foundations.</para>
<para>Based on the analysis of the above ecosystem platforms and services, it is important to highlight some important considerations for anyone attempting a similar ecosystem building initiative:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Critical Mass:</emphasis> The formation of a critical mass of stakeholders is a prerequisite for establishing an ecosystem.</para></listitem>
<listitem><para><emphasis role="strong">Viability of Service Offerings:</emphasis> In addition to creating a range of services, ecosystems should ensure that the offered services are viable.</para></listitem>
<listitem><para><emphasis role="strong">Business Models and Sustainability of Service Offerings:</emphasis> A viable business model should also support the sustainability of the ecosystem services.</para></listitem>
</itemizedlist>
</section>
<section class="lev2" id="sec15-3-2">
<title>15.3.2 Multi-sided Platforms</title>
<para>It should also be noted that the reviewed platforms provide services for both demand-side stakeholders (i.e. users of IIoT/Industry 4.0 services) and supply-side ones, i.e. vendors and solution providers. As such, these platforms offer a range of base features, such as a catalogue of services, services for registering and managing participants, authentication and authorization (as a prerequisite for accessing these services), and more. A basic set of such functionalities is listed in the following figure (<link linkend="F15-4">Figure <xref linkend="F15-4" remap="15.4"/></link>) and illustrated in the literature (e.g., [2&#8211;4]).</para>
<fig id="F15-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-4">Figure <xref linkend="F15-4" remap="15.4"/></link></label>
<caption><para>Baseline functionalities of a Multi-sided market platform.</para></caption>
<graphic xlink:href="graphics/ch15_fig004.jpg"/>
</fig>
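<para>The baseline functionalities just described can be sketched as a minimal two-sided platform: participants register on a supply or demand side, supply-side participants publish into a catalogue, and access to the catalogue is gated by registration. The class and all names below are illustrative, not a description of any of the reviewed platforms.</para>

```python
class MultiSidedPlatform:
    """Sketch of baseline multi-sided platform functions:
    participant registration, a service catalogue, and gated access."""
    def __init__(self):
        self.participants = {}   # participant name -> "supply" or "demand"
        self.catalogue = {}      # service name -> provider name

    def register(self, name, side):
        if side not in ("supply", "demand"):
            raise ValueError("side must be 'supply' or 'demand'")
        self.participants[name] = side

    def publish(self, provider, service):
        # Only registered supply-side participants may list offerings.
        if self.participants.get(provider) != "supply":
            raise PermissionError("only registered supply-side participants may publish")
        self.catalogue[service] = provider

    def browse(self, consumer):
        # Registration acts as the authentication/authorization gate.
        if consumer not in self.participants:
            raise PermissionError("registration required")
        return dict(self.catalogue)

platform = MultiSidedPlatform()
platform.register("acme-vendor", "supply")
platform.register("plant-operator", "demand")
platform.publish("acme-vendor", "predictive-maintenance")
print(platform.browse("plant-operator"))  # {'predictive-maintenance': 'acme-vendor'}
```

The value of such a platform grows with each side: more published services attract more demand-side participants and vice versa, which is why the critical-mass consideration above is listed first.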
</section>
</section>
<section class="lev1" id="sec15-4">
<title>15.4 The Edge4Industry Ecosystem Portal</title>
<para>The FAR-EDGE Ecosystem portal (publicly accessible at www.edge4industry.eu) is a vertical IIoT ecosystem for factory automation, focusing on FoF/I4.0 applications for manufacturers, with the objective of strengthening the EU&#8217;s leadership in the manufacturing sector. It presents all the research and innovation work developed in the FAR-EDGE project and aims to advance the competitiveness of the participants, manufacturers, and providers of industrial automation solutions. <link linkend="F15-5">Figure <xref linkend="F15-5" remap="15.5"/></link> presents the home page of the ecosystem portal.</para>
<para>Because the Edge4Industry Ecosystem portal is meant to remain active, functional, and independent beyond the FAR-EDGE project, and has broader adoption aspirations, a unique new brand and domain name has been specified to support the ecosystem&#8217;s evolution and branding beyond the project&#8217;s duration. <link linkend="F15-6">Figure <xref linkend="F15-6" remap="15.6"/></link> provides a mind-map of the structure of the portal, which includes the FAR-EDGE services and solutions, a knowledgebase, a blog, and a registration/sign-in section. These pages can be accessed through the main menu and contain the following information:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Services:</emphasis> Provides all relevant information about each FAR-EDGE service.</para></listitem>
<listitem><para><emphasis role="strong">Solutions:</emphasis> Provides information and access to the FAR-EDGE solutions.</para></listitem>
</itemizedlist>
<fig id="F15-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-5">Figure <xref linkend="F15-5" remap="15.5"/></link></label>
<caption><para>Home page of the Edge4Industry portal.</para></caption>
<graphic xlink:href="graphics/ch15_fig005.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Knowledgebase:</emphasis> This is a dedicated page with articles, training and presentations regarding the project.</para></listitem>
<listitem><para><emphasis role="strong">Blog:</emphasis> This section provides articles, news, and latest publications about the Edge4Industry community.</para></listitem>
<listitem><para><emphasis role="strong">Sign in:</emphasis> This is a sign in area that enables users&#8217; registration/login.</para></listitem>
</itemizedlist>
<fig id="F15-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-6">Figure <xref linkend="F15-6" remap="15.6"/></link></label>
<caption><para>Content structure of the Edge4Industry portal.</para></caption>
<graphic xlink:href="graphics/ch15_fig006.jpg"/>
</fig>
<section class="lev2" id="sec15-4-1">
<title>15.4.1 Services</title>
<para>The Services section can be accessed through the main menu by clicking the Services button, and presents all the available FAR-EDGE services to the user community. At this stage, the following services are available:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">FAR-EDGE Datasets:</emphasis> Provides access to open datasets that can be used for experimentation and research. The first datasets provided relate to individual production modules and include all module production-related information: module ID, module description, production status, conveyor status, operating status and mode (e.g., maintenance, active), error status, uptime information, power consumption, order number, process time, etc.</para></listitem>
<listitem><para><emphasis role="strong">Migration Services:</emphasis> The FAR-EDGE Migration Services support manufacturers, plant operators, and solution integrators in planning and realizing a smooth migration from conventional industrial automation systems (such as ISA-95 systems) to emerging Industry 4.0 ones (such as edge computing systems). The service provides a Migration Matrix Tool, which includes all the essential improvement steps and plans needed to enable a smooth migration from traditional production control systems towards a decentralised control automation architecture based on edge computing, CPS, and IoT technologies.</para></listitem>
<listitem><para><emphasis role="strong">Training Services:</emphasis> This service delivers technical, architectural, and business training to Industry 4.0-related communities, as a means of raising awareness about digital automation in general and FAR-EDGE solutions in particular. It includes specific courses and training presentations. The latter are appropriate for stakeholders that wish to understand opportunities stemming from the deployment of edge computing and distributed ledger infrastructure for industrial automation use cases.</para></listitem>
</itemizedlist></section>
<section class="lev2" id="sec15-4-2">
<title>15.4.2 Solutions</title>
<para>Similar to the Services section, the Solutions section presents all the available FAR-EDGE solutions, and can also be accessed through the main menu by clicking the Solutions button. At this stage, the following FAR-EDGE solutions are available:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Analytics Engine:</emphasis> The Analytics Engine solution is a middleware component for configurable distributed data analytics in industrial automation scenarios. Its functionalities are accessible through an Open API, which enables the configuration and deployment of various industrial-scale data analytics scenarios. It supports the processing of large volumes of streaming data at both the edge and the cloud/enterprise layers of digital automation deployments. It is extremely flexible and configurable based on the notion of Analytics Manifests (AMs), which obviate the need for tedious data analytics programming. AMs support various analytics functionalities and can be edited with visual tools. Note that the Analytics Engine is provided under an open-source license.</para></listitem>
<listitem><para><emphasis role="strong">Automation Engine:</emphasis> This solution provides the means for executing automation workflows based on an appropriate Open API. It enables lightweight high-performance interactions with the field for the purpose of configuring and executing automation functionalities. It provides field abstraction functionalities and therefore supports multiple ways and protocols for connecting to the field. It also facilitates the execution of complex automation workflows based on a system-of-systems approach. It offers reliable and resilient functionalities at the edge of the plant network, based on Arrowhead&#8217;s powerful local cloud mechanism. Finally, it leverages a novel, collaborative blockchain-based approach to synchronizing and orchestrating automation workflows across multiple local clouds.</para></listitem>
<listitem><para><emphasis role="strong">Distributed Ledger Infrastructure:</emphasis> This solution provides a runtime environment for user code that implements decentralized network services as smart contracts, which are used for plant-wide synchronization of industrial processes. It enables the synchronization of several edge analytics processes, as well as of various edge automation processes. The solution is a first-of-a-kind implementation of a permissioned ledger infrastructure for the reliable synchronization of distributed industrial processes.</para></listitem>
<listitem><para><emphasis role="strong">Edge Computing Infrastructure:</emphasis> The Edge Computing Infrastructure solution is a pool of components that provide the means for high-performance connectivity and data acquisition at the edge of the industrial automation network. The solution leverages the capabilities of popular connectivity protocols (such as MQTT) and high-performance data streaming frameworks (such as Apache Kafka). It also enables dynamic connectivity and data acquisition from the field, in order to facilitate edge computing configurations. Its implementation is containerized (i.e. Docker-based), which facilitates usage and deployment.</para></listitem>
<listitem><para><emphasis role="strong">FAR-EDGE Digital Models:</emphasis> This solution offers the means for representing, exchanging, and sharing information in the scope of an edge computing system for industrial automation. Support is also provided for the development of digital twins for field configurations and digital simulations. These Digital Models are based on ideas from several standards for plant modeling, while being tailored to the needs of edge computing for factory automation. They are among the few publicly available digital models for edge computing implementations of industrial automation systems.</para></listitem>
<listitem><para><emphasis role="strong">Security Infrastructure:</emphasis> This solution is a system designed following the principles of the Industrial Internet Security Framework (IISF) of the Industrial Internet Consortium (IIC), which provides strong integrity of distributed security functions within an edge computing based system. It can operate in conjunction with the Distributed Ledger in order to host the security policy and to provide consistent security across various edge analytics and edge automation processes. It is a first-of-a-kind distributed ledger implementation of an IISF-compliant security system.</para></listitem>
<listitem><para><emphasis role="strong">Simulation and Virtualization Engine:</emphasis> This solution provides the means for configuring and executing digital simulations. It includes a real-to-digital synchronization tool, which allows simulation service providers and integrators to improve the reliability of simulation predictions by providing synchronization functionalities between physical-world elements and their digital twins. The tool can connect to any related data source, based on appropriate digital models, and covers all the steps necessary to translate messages from the physical-world element format to the data model format used by the simulation.</para></listitem>
</itemizedlist>
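<para>The Analytics Manifest concept described above can be illustrated with a toy interpreter: a declarative document selects a data source, a processing function, and an output sink, so that a new analytics scenario is configured rather than programmed. The manifest schema and all names below are invented for illustration; they are not the actual FAR-EDGE manifest format.</para>

```python
import json
import statistics

# Library of pre-built analytics functions the engine knows about.
PROCESSORS = {"mean": statistics.mean, "max": max, "min": min}

def run_manifest(manifest_json, sources):
    """Execute a declarative analytics manifest against named data sources.

    The manifest names a source stream, a processor, and a sink key; no
    analytics code is written by the user, mirroring the AM idea.
    """
    manifest = json.loads(manifest_json)
    data = sources[manifest["source"]]                 # e.g. power readings from the edge
    result = PROCESSORS[manifest["processor"]](data)   # pre-built analytics function
    return {manifest["sink"]: result}                  # routed to the named sink

manifest = '{"source": "power", "processor": "mean", "sink": "avg_power_kw"}'
print(run_manifest(manifest, {"power": [3.2, 4.8, 4.0]}))  # {'avg_power_kw': 4.0}
```

Changing the scenario (say, from average to peak power) means editing one field of the manifest, possibly through a visual tool, rather than redeploying analytics code, which is the flexibility the Analytics Engine&#8217;s manifest-driven design aims for.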
</section>
<section class="lev2" id="sec15-4-3">
<title>15.4.3 Knowledge Base</title>
<para>The Knowledgebase section is a dedicated area of the portal that provides direct access to articles and presentations concerning the latest research and innovation work provided by the FAR-EDGE project and the Edge4Industry community.</para>
<para>The goal of this section is to enable Edge4Industry members to acquire in-depth knowledge of the FAR-EDGE project and of the information and resources available, while keeping access to them user-friendly and dynamic.</para>
</section>
<section class="lev2" id="sec15-4-4">
<title>15.4.4 Blog</title>
<para>The blog section presents to the ecosystem community publications about topics that are related to the industry, including those that have been published by members of the Edge4Industry community as well as other sources such as other blogs and electronic magazines. Similar to the Knowledgebase section, access to the Edge4Industry Blog section publications is user-friendly and dynamic.</para>
</section>
<section class="lev2" id="sec15-4-5">
<title>15.4.5 Sign-in and Registration</title>
<para>The Edge4Industry portal includes a user management system that supports different user types and determines which portal resources each user is authorized to access. At this stage, there are two user types:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">Guest</emphasis>, which is assigned to unauthenticated users and grants the lowest level of permissions within the portal.</para></listitem>
<listitem><para><emphasis role="strong">Registered member</emphasis>, which is assigned to registered users and grants access to all the relevant resources provided in the knowledgebase.</para></listitem>
</itemizedlist>
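<para>This two-tier access model boils down to a simple role-to-resource mapping, sketched below. The role names come from the text above; the resource names and the mapping itself are illustrative, not the portal&#8217;s actual configuration.</para>

```python
# Role -> set of portal resources the role may access (illustrative mapping).
PERMISSIONS = {
    "guest": {"blog", "services", "solutions"},                     # public pages only
    "registered": {"blog", "services", "solutions", "knowledgebase"},
}

def can_access(role, resource):
    """Return True if the given role is authorized for the resource."""
    return resource in PERMISSIONS.get(role, set())

print(can_access("guest", "knowledgebase"))       # False
print(can_access("registered", "knowledgebase"))  # True
```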
<para>Registered Edge4Industry members can authenticate in the portal through the Sign-in section, and can use a set of different authentication tools to access the Edge4Industry portal.</para>
</section>
</section>
<section class="lev1" id="sec15-5">
<title>15.5 Conclusions</title>
<para>In the era of digitization, the development of proper ecosystems is as important as the development of digital platforms. In many cases, most of the value of a digital platform lies in its ecosystem and the opportunities that it provides for stakeholders to collaborate and advance the digital transformation of modern organizations. Digital automation platforms are no exception, which is why all major vendors of IIoT and Industry 4.0 platforms have established ecosystems around their products and services. Likewise, several publicly and privately funded initiatives have established testbeds, where industrial organizations can experiment with digital technologies without disrupting their production operations.</para>
<para>As part of this chapter, we have reviewed several IIoT/Industry 4.0 ecosystem-building efforts, including ecosystems established around commercial platforms, experimental testbeds, and community portals. Moreover, we have presented the key building blocks and success factors of multi-sided platforms. Furthermore, we have presented the Edge4Industry portal, which provides a single point of access to the full range of digital automation results of the FAR-EDGE project, including results presented in previous chapters, such as the project&#8217;s analytics engine, digital models, and approach to supporting a smooth migration from ISA-95 to decentralized automation.</para>
<para>The Edge4Industry community is gradually growing in size and expanding in terms of stakeholder engagement. In support of this growth, we plan to provide a range of collaboration and engagement features, backed by an ambitious dissemination and communication plan over the next couple of years.</para>
</section>
<section class="lev1" id="ack">
<title>Acknowledgements</title>
<para>This work has been carried out in the scope of the FAR-EDGE project (H2020-703094). The authors acknowledge help and contributions from all partners of the project.</para>
</section>
<section class="lev1" id="sec15-6">
<title>References</title>
<para>[1] J. Moore, &#8216;The Death of Competition: Leadership &amp; Strategy in the Age of Business Ecosystems&#8217;, New York: HarperBusiness, ISBN 0-88730-850-3, 1996.</para>
<para>[2] T. Eisenmann, G. Parker and M. W. Van Alstyne, &#8216;Strategies for Two-Sided Markets&#8217;, Harvard Business Review, November 2006.</para>
<para>[3] A. Hagiu, &#8216;Two-Sided Platforms: Pricing, Product Variety and Social Efficiency&#8217;, mimeo, Harvard Business School, 2006.</para>
<para>[4] L. Brokaw, &#8216;How to Win With a Multisided Platform Business Model&#8217;, MIT Sloan Business School (blog), May 20, 2014.</para>
</section>
</chapter>
</part>
<chapter class="chapter" id="ch016" label="16" xreflabel="16">
<title>Epilogue</title>
<para>At the dawn of the fourth industrial revolution, the benefits of the digital transformation of plants are gradually becoming evident. Manufacturers and plant operators are already able to use advanced cyber-physical systems (CPS) in order to increase the automation, accuracy, and intelligence of their industrial processes. They are also offered opportunities for simulating processes based on digital data as a means of evaluating different scenarios (i.e. &#8220;what-if&#8221; analysis) and taking optimal automation decisions. These capabilities are empowered by the accelerated evolution of digital technologies, which is reflected in rapid advances in areas such as cloud computing, edge computing, Big Data, AI, connectivity technologies, blockchains and more. These digital technologies form the building blocks of the state-of-the-art digital manufacturing platforms.</para>
<para>In this book, we have presented a range of innovative digital platforms, which have been developed in the scope of three EU projects, namely the AUTOWARE, DAEDALUS, and FAR-EDGE projects, which are co-funded by the European Commission under its H2020 framework programme for research and innovation. The presented platforms emphasized the employment of edge computing, cloud computing, and software technologies as a means of decentralizing the conventional ISA-95 automation pyramid and enabling flexible production plants that can support mass customization production models. In particular, the value of edge computing for performing high-performance operations close to the field was presented, along with the merits of deploying enterprise systems in the cloud towards high performance, interoperability, and improved integration of data and services. Likewise, special emphasis was placed on illustrating the capabilities of the IEC 61499 standard and the related software technologies, which essentially allow the implementation of automation functionalities at the IT rather than the OT part of production systems.</para>
<para>Special emphasis has also been placed on the presentation of some innovative and disruptive automation concepts, such as the use of cognitive technologies for increased automation intelligence and the use of blockchain technologies for the resilient and secure synchronization of industrial processes within a plant and across the supply chain. The use of these technologies in automation provides characteristic examples of how the evolution of digital technologies will empower innovative automation concepts in the future.</para>
<para>In terms of specific Industry 4.0 functionalities and use cases, we have focused on systems that boost the development of flexible and high-performance production lines, which support the mass customization and reshoring strategies of modern manufacturers. A distinct part of the book was devoted to digital simulation systems and their role in digital automation. It is our belief that digital twins will play a major role in enhancing the flexibility of production lines, as well as in optimizing the decision-making process for both production managers and business managers.</para>
<para>Nevertheless, the successful adoption of digital automation concepts in the Industry 4.0 era is not only a matter of deploying the right technology. Rather, it requires investments in a wide range of complementary assets, such as digital transformation strategies, new production processes that exploit the capabilities of digital platforms (e.g., simulation), training of workers in new processes, and many more. Therefore, we have dedicated a number of chapters to the presentation of such complementary assets, including migration strategies, ecosystem building efforts, training services, development support services, and more. All of the presented projects and platforms place emphasis on the development of an arsenal of such assets as a means of boosting the adoption, sustainability and wider use of these solutions.</para>
<para>Even though this book develops the vision of a fully digital shopfloor, it should be noted that we are only at the beginning and far from the ultimate realization of this concept. In particular, we have only marginally discussed integration and interoperability issues, which are at the heart of a fully digital shopfloor. Moreover, we have not presented how different components and modular solutions can be combined to address the different needs of manufacturers and plant operators. Our Digital Shopfloor Alliance (DSA) initiative (https://digitalshopflooralliance.eu/) aims to bring these issues to the foreground, but also to create the critical mass needed to confront them successfully.</para>
<para>Industry 4.0 will be developed over a horizon that spans the next three to four decades, during which digital platforms will be advanced in terms of intelligence and functionalities, while becoming more connected. In particular, the following developments are likely to take place on top of the state-of-the-art digital platforms presented in this book:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><emphasis role="strong">The establishment of industrial data spaces</emphasis>, which will provide the means for interoperable data exchanges between different platforms and stakeholders. As a characteristic example, industrial data spaces will allow supply chain stakeholders to exchange production orders and materials information with only minimal effort for integrating their enterprise systems with the industrial data space infrastructure.</para></listitem>
<listitem><para><emphasis role="strong">The enhancement of machines and equipment with intelligence features</emphasis>, based on the integration of advanced digital technologies such as AI. As a prominent example, future machines will be able to identify and, in several cases, repair defect causes online, i.e. without a need for stopping production.</para></listitem>
<listitem><para><emphasis role="strong">The development and establishment of open APIs</emphasis> for accessing capabilities and datasets within these platforms. Such APIs will greatly facilitate their integration and access in the scope of end-to-end applications. For example, they will provide the means for processes that span multiple stations and platforms within a factory.</para></listitem>
<listitem><para><emphasis role="strong">The provision of support for smart objects such as smart machines and industrial robots</emphasis>. Smart objects feature (semi)autonomous behaviour and are able to operate as stand-alone systems in the shopfloor. Occasionally, they will be able to synchronize their state with the state of digital automation platforms that control the shopfloors. Hence, they will be able to co-exist with digital platforms in order to perform collaborative tasks in the plant.</para></listitem>
<listitem><para><emphasis role="strong">The implementation of strong security features</emphasis>, which will ensure secure operations for both IT and OT systems of the plant. Strong security and data protection will be required as a result of the expanding scope of the digital automation platforms, but also as a result of their interconnection with other CPS, IT and OT systems.</para></listitem>
</itemizedlist>
<para>Overall, Industry 4.0 will certainly be an exciting journey for plant operators, providers of industrial automation solutions, IIoT solution providers and many other stakeholders. In this book we have provided knowledge and insights about where we stand in this journey, while trying to develop a vision for the future. We really hope you will enjoy the journey and will appreciate our efforts to help you start off on the right foot.</para>
</chapter>

<chapter class="nosec" id="ch10">
<title>About the Editors</title>
<para><emphasis role="strong">Dr. John Soldatos</emphasis> (http://gr.linkedin.com/in/johnsoldatos) holds a PhD in Electrical &amp; Computer Engineering from the National Technical University of Athens (2000) and is currently Associate Professor at the Athens Information Technology (2006 to present) and Honorary Research Fellow at the University of Glasgow, UK (2014 to present). He was also Adjunct Professor at Carnegie Mellon University, Pittsburgh, PA (2007&#8211;2010). Dr. Soldatos is a world-class expert in Internet-of-Things (IoT) technologies and applications, including IoT&#8217;s applications in smart cities and the fourth industrial revolution (Industry 4.0 &amp; Industrial Internet of Things).</para>
<para>Dr. Soldatos has played a leading role in the successful delivery of more than 50 (commercial-industrial, research and consulting) projects for both private and public sector organizations, including some complex integrated projects. He is co-founder of the open-source platforms OpenIoT (https://github.com/OpenIotOrg/openiot) and AspireRFID (http://wiki.aspire.ow2.org). He has published more than 150 articles in international journals, books and conference proceedings. He has also significant academic teaching experience, along with experience in corporate training. Dr. Soldatos is regular contributor to various international magazines and blogs, as well as in social media, where he is among the influencers of the IoT community. Moreover, Dr. Soldatos has received national and international recognition through appointments in standardization working groups, expert groups and various boards.</para>
<para><emphasis role="strong">Dr. Oscar Lazaro</emphasis> is the Managing Director of Innovalia Association, the Associated Research Lab founded by the Innovalia Alliance, one of the three strategic technology groups in the Basque Country. Oscar has more than 20 years of experience in the ICT and manufacturing fields. He is also Visiting Professor at the Electrical and Electronic Engineering Department of the University of Strathclyde in the area of wireless &amp; mobile communications. He is also the permanent representative of Innovalia in EFFRA, and he has served on the Future Internet Advisory Board and the Sherpa Group on the 5G Action Plan.</para>
<para>Dr. Oscar Lazaro has been one of the three experts appointed to the high-level group supporting the EC in the analysis of the 15 national initiatives in Digitising European Industry. He has been supporting the activities of the I4MS Programme since its very beginning and leads the I4MS Competence Centre for Advanced Quality Control Services in the Zero Defect Manufacturing DIH at the Automotive Intelligence Centre in the Basque Country. He is also part of the Smart Industry 4.0 Technical Committee of the FIWARE Foundation and a regular contributor to the activities of the Industria Conectada 4.0 DIH and Platform working groups. Since January 2018, he has been coordinating the European lighthouse initiative BOOST 4.0 on Big Data Platforms for Industry 4.0.</para>
<para><emphasis role="strong">Dr. Franco Cavadini</emphasis> holds a PhD in aerospace engineering from Politecnico di Milano, on the subject of artificial intelligence applied to robotic manipulation tasks. He is Chief Technical Officer at Synesis, a small Italian company whose mission is the technology transfer of advanced automation solutions from research to the market. With a specific focus on the research and development of technologies for the optimization of production systems and control under the constraints of high energy efficiency and low environmental impact, he has guided the Synesis technical department throughout several Horizon 2020 and industrial projects, providing both technical and project management contributions. Dr. Cavadini is an expert in the field of distributed real-time automation and in the design of complex control architectures. He is currently the project coordinator of the European Daedalus initiative for Industry 4.0 (www.daedalus.eu) and technical coordinator of the DEMETO project (www.demeto.eu).</para>
</chapter>
</book>
