﻿<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="9788770220873.xsl"?>
<book id="home" xmlns:xlink="http://www.w3.org/1999/xlink">
<bookinfo>
<title>Challenges in Cybersecurity and Privacy &#8211; the European Research Landscape</title>
<affiliation><emphasis role="strong">Editors</emphasis></affiliation>

<authorgroup>
<author><firstname>Jorge Bernal</firstname>
<surname>Bernabe</surname>
</author>
</authorgroup>
<affiliation>University of Murcia, Spain</affiliation>
<authorgroup>
<author><firstname>Antonio</firstname>
<surname>Skarmeta</surname>
</author>
</authorgroup>
<affiliation>University of Murcia, Spain</affiliation>
<publisher>
<publishername>River Publishers</publishername>
</publisher><isbn>9788770220873</isbn>
</bookinfo>

<preface class="preface" id="preface01">
<title><b>RIVER PUBLISHERS SERIES IN SECURITY AND DIGITAL FORENSICS</b></title>
<para><i>Series Editors:</i></para>
<para><b>WILLIAM J. BUCHANAN</b></para>
<para><i>Edinburgh Napier University, UK</i></para>
<para><b>ANAND R. PRASAD</b></para>
<para><i>NEC, Japan</i></para>
<para>Indexing: All books published in this series are submitted to the Web of Science Book Citation Index (BkCI), to SCOPUS, to CrossRef and to Google Scholar for evaluation and indexing.</para>
<para>The &#8220;River Publishers Series in Security and Digital Forensics&#8221; is a series of comprehensive academic and professional books which focus on the theory and applications of Cyber Security, including Data Security, Mobile and Network Security, Cryptography and Digital Forensics. Topics in Prevention and Threat Management are also included in the scope of the book series, as are general business Standards in this domain.</para>
<para>Books published in the series include research monographs, edited volumes, handbooks and textbooks. The books provide professionals, researchers, educators, and advanced students in the field with an invaluable insight into the latest research and developments.</para>
<para>Topics covered in the series include, but are by no means restricted to the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Cyber Security</para></listitem>
<listitem><para>Digital Forensics</para></listitem>
<listitem><para>Cryptography</para></listitem>
<listitem><para>Blockchain</para></listitem>
<listitem><para>IoT Security</para></listitem>
<listitem><para>Network Security</para></listitem>
<listitem><para>Mobile Security</para></listitem>
<listitem><para>Data and App Security</para></listitem>
<listitem><para>Threat Management</para></listitem>
<listitem><para>Standardization</para></listitem>
<listitem><para>Privacy</para></listitem>
<listitem><para>Software Security</para></listitem>
<listitem><para>Hardware Security</para></listitem>
</itemizedlist>
<para>For a list of other books in this series, visit www.riverpublishers.com</para>
</preface>

<preface class="preface" id="preface02">
<title><b>Preface</b></title>
<para>The global and hyper-connected society is experiencing a growing number of cybersecurity and privacy issues. The widespread usage and development of ICT systems is increasing the number of attacks and giving rise to new kinds of evolving cyber-threats, which ultimately undermine the development of a trusted and dependable global digital society.</para>
<para>Cyber-criminals are increasingly directing their attacks against cyber-physical systems (CPS) and the IoT, since these present additional vulnerabilities owing to their constrained capabilities, their unattended nature and their use of potentially untrustworthy components.</para>
<para>In this context, several cybersecurity and privacy challenges can be identified. Some of these challenges revolve around autonomic cybersecurity management, orchestration and enforcement in heterogeneous and virtualized CPS/IoT and mobile ecosystems. Others are related to: cognitive detection and mitigation of new and evolving kinds of cyber-threats; dynamic risk assessment and evaluation of the cybersecurity, trustworthiness, privacy and legal compliance of ICT systems; digital forensics handling; security intelligence and incident information exchange; and cybersecurity and privacy tools, together with their usability and human factors. Similarly, regarding privacy and trust, four main global challenges can be identified: reliable and privacy-preserving identity management, efficient and secure cryptographic mechanisms, global trust management, and privacy assessment.</para>
<para>Therefore, new holistic approaches, methodologies, techniques and tools are needed to cope with these issues and mitigate cyberattacks, by employing novel cyber-situational awareness frameworks, risk analysis and modeling, threat intelligence systems, cyber-threat information sharing methods and advanced big-data analysis techniques, as well as by exploiting the benefits of the latest technologies such as SDN/NFV and Cloud systems. In addition, novel privacy-preserving techniques, crypto-privacy mechanisms, identity and eID management systems, trust services, and recommendations are needed to protect citizens&#8217; privacy while maintaining usability.</para>
<para>The European Commission is addressing the aforementioned cybersecurity and privacy challenges through different means, including the Horizon 2020 Research and Innovation programme and, concretely, the European programme H2020-EU.3.7, entitled <i>&#8220;Secure societies &#8211; Protecting freedom and security of Europe and its citizens&#8221;</i>, thereby financing innovative projects that can cope with the growing cyber-threat landscape.</para>
<para>This book presents and analyses 14 cybersecurity and privacy-related EU projects funded by that European programme H2020-EU.3.7, namely: ANASTACIA, SAINT, FORTIKA, CYBECO, SISSDEN, CIPSEC, CS-AWARE, RED-Alert, Truessec.eu, ARIES, LIGHTest, CREDENTIAL, FutureTrust and LEPS.</para>
<para>The book is the result of a collaborative effort among related ongoing European research projects in the fields of privacy, security and related cybersecurity areas, and it is intended to explain how these projects address the main cybersecurity and privacy challenges faced in Europe. We invited some of Europe&#8217;s top cybersecurity and privacy experts and researchers to contribute their knowledge to the book.</para>
<para>The first, introductory chapter identifies and describes the 10 main cybersecurity and privacy research challenges presented and addressed in this book by the 14 European research projects. Each subsequent chapter is dedicated to a different funded European research project and includes the project&#8217;s overview, objectives, and the particular research challenges that it faces.</para>
<para>In addition, we asked each chapter&#8217;s authors to describe, for the EU research project analysed, the research achievements on security and privacy, as well as the techniques, outcomes, and evaluations accomplished in the scope of the corresponding EU project.</para>
<para>The first part of the book, i.e. Chapters 2 to 10, describes 9 EU projects related to cybersecurity and how they face the challenges identified in the introductory chapter, namely: ANASTACIA, SAINT, FORTIKA, CYBECO, SISSDEN, CIPSEC, CS-AWARE, RED-Alert and Truessec.eu. The second part of the book, i.e. Chapters 11 to 15, describes 5 EU projects focused on privacy and trust management, namely: ARIES, LIGHTest, CREDENTIAL, FutureTrust and LEPS.</para>
<para>The idea for this book originated after a successful clustering workshop entitled <i>&#8220;European projects Clustering workshop On Cybersecurity and Privacy (ECoSP 2018)&#8221;</i>, co-located with the ARES Conference &#8211; the <i>13th International Conference on Availability, Reliability and Security</i> &#8211; held in Hamburg, Germany, where the EU projects analysed in this book were presented and the attendees exchanged their views on the European research landscape in security and privacy.</para>
<para>The chapters have been written to target both researchers and engineers. Thus, after reading this book, academic researchers will have a proper understanding of the current cybersecurity and privacy challenges to be solved in the coming years, and of how they are being approached from different angles by several European research projects. Likewise, engineers will get to know the main enablers, technologies and tools that are being considered and implemented to deal with those cybersecurity and privacy issues.</para>
</preface>

<preface class="preface" id="preface03">
<title><b>List of Contributors</b></title>
<para><b>Adamantios Koumpis</b>, <i>University of Passau, Germany; E-mail: adamantios.koumpis@uni-passau.de</i></para>
<para><b>Adrian Quesada Rodriguez</b>, <i>Mandat International, Research, Switzerland; E-mail: aquesada@mandint.org</i></para>
<para><b>Aitor Couce Vieira</b>, <i>Institute of Mathematical Sciences (ICMAT), Spanish National Research Council (CSIC), Spain; E-mail: aitor.couce@icmat.es</i></para>
<para><b>Alberto Crespo</b>, <i>Atos Research and Innovation, Atos, Calle Albarracin 25, Madrid, Spain; E-mail: alberto.crespo@atos.net</i></para>
<para><b>Alejandro Molina</b>, <i>Department of Information and Communications Engineering, University of Murcia, Murcia, Spain; E-mail: alejandro.mzarca@um.es</i></para>
<para><b>Ales &#268;ernivec</b>, <i>XLAB d.o.o., Slovenia; E-mail: ales.cernivec@xlab.si</i></para>
<para><b>Alexander Sazonov</b>, <i>National Certification Authority Rus CJSC (NCA Rus), 8A building 5, Aviamotornaya st., Moscow 111024, Russia; E-mail: sazonov@nucrf.ru</i></para>
<para><b>Alexandre Defays</b>, <i>Ar&#951;s Spikeseed, Rue Nicolas Bov&#233; 2B, 1253 Luxembourg, Luxembourg; E-mail: alexandre.defays@arhs-developments.com</i></para>
<para><b>Alexandros Papanikolaou</b>, <i>InnoSec, Greece; E-mail: a.papanikolaou@innosec.gr</i></para>
<para><b>Ali Anjomshoaa</b>, <i>Digital Catapult, Research and Development, NW1 2RA, London, United Kingdom; E-mail: ali.anjomshoaa@ktn-uk.org</i></para>
<para><b>Alie El-Din Mady</b>, <i>United Technologies Research Center, Ireland; E-mail: madyaa@utrc.utc.com</i></para>
<para><b>Aljosa Pasic</b>, <i>Atos Research and Innovation (ARI), Atos, Spain; E-mail: aljosa.pasic@atos.net</i></para>
<para><b>Anargyros Sideris</b>, <i>Future Intelligence LTD, United Kingdom; E-mail: Sideris@f-in.co.uk</i></para>
<para><b>Anastasios Drosou</b>, <i>Information Technologies Institute, Centre for Research &amp; Technology Hellas, Greece; E-mail: drosou@iti.gr</i></para>
<para><b>Andreas K&#252;hne</b>, <i>Trustable Limited, Great Hampton Street 69, Birmingham B18 6E, United Kingdom; E-mail: kuehne@trustable.de</i></para>
<para><b>Angelo Consoli</b>, <i>Eclexys Sagl, Via Dell Inglese 6, Riva San Vitale</i>, <i>Switzerland; E-mail: angelo.consoli@eclexys.com</i></para>
<para><b>Antonio &#193;lvarez</b>, <i>ATOS SPAIN, Spain; E-mail: antonio.alvarez@atos.net</i></para>
<para><b>Antonio Skarmeta</b>, <i>Department of Information and Communications Engineering, University of Murcia, Murcia, Spain; E-mail: skarmeta@um.es</i></para>
<para><b>Apostolos Fournaris</b>, <i>University of Patras, Greece; E-mail: apofour@ece.upatras.gr</i></para>
<para><b>Arnolt Spyros</b>, <i>InnoSec, Greece; E-mail: a.spyros@innosec.gr</i></para>
<para><b>Bojana Bajic</b>, <i>Archimede Solutions, Geneva, Switzerland; E-mail: bbajic@archimede.ch</i></para>
<para><b>Bruce Anderson</b>, <i>Law Trusted Third Party Service (Pty) Ltd. (LAWTrust), 5 Bauhinia Street, Building C, Cambridge Office Park Veld Techno Park, Centurion 0157, South Africa; E-mail: bruce@LAWTrust.co.za</i></para>
<para><b>&#199;a&#287;atay Karabat</b>, <i>Turkiye Bilimsel Ve Tknolojik Arastirma Kurumu, Ataturk Bulvari 221, Ankara 06100, Turkey; E-mail: cagatay.karabat@tubitak.gov.tr</i></para>
<para><b>Carl-Markus Piswanger</b>, <i>Bundesrechenzentrum GmbH, Hintere Zollamtsstra&#223;e 4, A-1030 Vienna, Austria; E-mail: carl-markus.piswanger@brz.gv.at</i></para>
<para><b>Caroline Baylon</b>, <i>AXA Technology Services, France; E-mail: caroline.baylon@axa.com</i></para>
<para><b>C&#233;dric Crettaz</b>, <i>Mandat International, Research, Switzerland; E-mail: ccrettaz@mandint.org</i></para>
<para><b>Chris Wills</b>, <i>CARIS Research Ltd., United Kingdom; E-mail: ccwills@carisresearch.co.uk</i></para>
<para><b>Christina Hermanns</b>, <i>Federal Office of Administration (Bundesverwaltungsamt), Barbarastr. 1, 50735 Cologne, Germany; E-mail: christina.hermanns@bva.bund.de</i></para>
<para><b>Cristi Potlog</b>, <i>SIVECO Romania SA, Romania; E-mail: cristi.potlog@siveco.ro</i></para>
<para><b>Cristina Condovici</b>, <i>Ruhr Universit&#228;t Bochum, Universit&#228;tsstra&#223;e 150, 44801 Bochum, Germany; E-mail: cristina.condovici@rub.de</i></para>
<para><b>Dallal Belabed</b>, <i>THALES Communications &amp; Security SAS, Gennevilliers, France; E-mail: dallal.belabed@thalesgroup.com</i></para>
<para><b>Damian Wabisch</b>, <i>Trustable Limited, Great Hampton Street 69, Birmingham B18 6E, United Kingdom; E-mail: damian@trustable.de</i></para>
<para><b>Dan Garcia-Carrillo</b>, <i>Department of Research &amp; Innovation, Odin Solutions, Murcia, Spain; E-mail: dgarcia@odins.es</i></para>
<para><b>Daniel Abel</b>, <i>Maven Seven Solutions Zrt., Hungary; E-mail: daniel.abel@maven7.com</i></para>
<para><b>Daniel Nemmert</b>, <i>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany; E-mail: daniel.nemmert@ecsec.de</i></para>
<para><b>Danny S. Guam&#225;n</b>, <i>1. Universidad Polit&#233;cnica de Madrid, Departamento de Ingenier&#237;a de Sistemas Telem&#225;ticos, 28040, Madrid, Spain; 2. Escuela Polit&#233;cnica Nacional, Departamento de Electr&#243;nica, Telecomunicaciones y Redes de Informaci&#243;n, 170525, Quito, Ecuador; E-mail: ds.guaman@dit.upm.es</i></para>
<para><b>Dave Fortune</b>, <i>Saher Ltd., United Kingdom; E-mail: dave@saher-uk.com</i></para>
<para><b>David Mart&#237;n</b>, <i>GEMALTO, Czech Republic; E-mail: martin.david@gemalto.com</i></para>
<para><b>David R&#237;os Insua</b>, <i>Institute of Mathematical Sciences (ICMAT), Spanish National Research Council (CSIC), Spain; E-mail: david.rios@icmat.es</i></para>
<para><b>Dawn Branley-Bell</b>, <i>Psychology, University of Northumbria at Newcastle, United Kingdom; E-mail: dawn.branley-bell@northumbria.ac.uk</i></para>
<para><b>Deepak Subramanian</b>, <i>AXA Technology Services, France; E-mail: deepak.subramanian@axa.com</i></para>
<para><b>Denis Guilhot</b>, <i>WORLDSENSING Limited, Spain; E-mail: dguilhot@worldsensing.com</i></para>
<para><b>Detlef H&#252;hnlein</b>, <i>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany; E-mail: detlef.huhnlein@ecsec.de</i></para>
<para><b>Diego Rivera</b>, <i>R&amp;D Department, Montimage, 75013, Paris, France; E-mail: diego.rivera@montimage.com</i></para>
<para><b>Dimitrios Tzovaras</b>, <i>Information Technologies Institute, Centre for Research &amp; Technology Hellas, Greece; E-mail: tzovaras@iti.gr</i></para>
<para><b>Dirk Wegener</b>, <i>German Federal Information Technology Centre (Informationstechnikzentrum Bund, ITZBund), Waterloostr. 4</i>, <i>30169 Hannover, Germany; E-mail: dirk.dirkwegener@itzbund.de</i></para>
<para><b>Edgardo Montes de Oca</b>, <i>Montimage Eurl, 39 rue Bobillot, Paris, France; E-mail: edgardo.montesdeoca@montimage.com</i></para>
<para><b>Elena Torroglosa</b>, <i>Department of Information and Communications Engineering, Faculty of Computer Science, University of Murcia, Murcia, Spain; E-mail: emtg@um.es</i></para>
<para><b>Enrico Cambiaso</b>, <i>National Research Council (CNR-IEIIT) &#8211; Via De Marini 6 &#8211; 16149 Genoa, Italy; E-mail: enrico.cambiaso@ieiit.cnr.it</i></para>
<para><b>Eunah Kim</b>, <i>Device Gateway SA, Research and Development, Switzerland; E-mail: eunah.kim@devicegateway.com</i></para>
<para><b>Eva Marin-Tordera</b>, <i>Universitat Polit&#232;cnica de Catalunya, Spain; E-mail: eva@ac.upc.edu</i></para>
<para><b>Evangelos K. Markakis</b>, <i>Department of Informatics Engineering, Technological Educational Institute of Crete, Greece; E-mail: Markakis@pasiphae.teicrete.gr</i></para>
<para><b>Evangelos Pallis</b>, <i>Department of Informatics Engineering, Technological Educational Institute of Crete, Greece; E-mail: Pallis@pasiphae.teicrete.gr</i></para>
<para><b>Francisco Hernandez</b>, <i>WORLDSENSING Limited, Spain; E-mail: fhernandez@worldsensing.com</i></para>
<para><b>Frank-Michael Kamm</b>, <i>Giesecke &amp; Devrient GmbH, Prinzregentestra&#223;e 159, 81677 Munich, Germany; E-mail: frank-michael.kamm@gi-de.com</i></para>
<para><b>Georgios Sakellariou</b>, <i>Department of Applied Informatics, University of Macedonia, Greece; E-mail: geosakel@uom.edu.gr</i></para>
<para><b>Gerald Quirchmayr</b>, <i>University of Vienna &#8211; Faculty of Computer Science, Austria; E-mail: gerald.quirchmayr@univie.ac.at</i></para>
<para><b>Harris Papadakis</b>, <i>University of the Aegean, i4m Lab (Information Management Lab) and Hellenic Mediterranean University, Greece; E-mail: adanar@atlantis-group.gr</i></para>
<para><b>Heiko Ro&#223;nagel</b>, <i>Fraunhofer IAO, Fraunhofer Institute of Industrial Engineering IAO, Nobelstr. 12, 70569 Stuttgart, Germany; E-mail: heiko.rossnagel@iao.fraunhofer.de</i></para>
<para><b>Herbert Leitold</b>, <i>A-SIT, Seidlgasse 22/9, A-1030 Vienna, Austria; E-mail: herbert.leitold@a-sit.at</i></para>
<para><b>Hristina Veljanova</b>, <i>University of Graz, Institute of Philosophy and Institute of Sociology, 8010, Graz, Austria; E-mail: hristina.veljanova@uni-graz.at</i></para>
<para><b>Ignacio Alamillo</b>, <i>University of Murcia, Murcia, Spain; E-mail: ignacio.alamillod@um.es</i></para>
<para><b>Ignasi Garc&#237;a-Mila</b>, <i>WORLDSENSING Limited, Spain; E-mail: igarciamila@worldsensing.com</i></para>
<para><b>Ilias Spais</b>, <i>AEGIS IT RESEARCH LTD, United Kingdom; E-mail: hspais@aegisresearch.eu</i></para>
<para><b>Ioannis Mavridis</b>, <i>Department of Applied Informatics</i>, <i>University of Macedonia, Greece; E-mail: mavridis@uom.edu.gr</i></para>
<para><b>Ivan Vaccari</b>, <i>National Research Council (CNR-IEIIT) &#8211; Via De Marini 6 &#8211; 16149 Genoa, Italy; E-mail: ivan.vaccari@ieiit.cnr.it</i></para>
<para><b>Jan Eichholz</b>, <i>Giesecke &amp; Devrient GmbH, Prinzregentestra&#223;e 159</i>, <i>81677 Munich, Germany; E-mail: jan.eichholz@gi-de.com</i></para>
<para><b>Jart Armin</b>, <i>CyberDefcon BV, Herengracht 282, 1016 BX Amsterdam</i>, <i>The Netherlands; E-mail: jart@cyberdefcon.com</i></para>
<para><b>Jens Urmann</b>, <i>Giesecke &amp; Devrient GmbH, Prinzregentestra&#223;e 159, 81677 Munich, Germany; E-mail: jens.urmann@gi-de.com</i></para>
<para><b>John M. A. Bothos</b>, <i>National Center for Scientific Research &#8220;Demokritos&#8221;, Patr. Gregoriou E. &amp; 27 Neapoleos Str, Athens, Greece; E-mail: jbothos@iit.demokritos.gr</i></para>
<para><b>John S&#246;ren Pettersson</b>, <i>Karlstad University, Sweden; E-mail: john_soren.pettersson@kau.se</i></para>
<para><b>Jon Shamah</b>, <i>European Electronic Messaging Association AISBL, Rue Washington 40, Bruxelles 1050, Belgium; E-mail: jon.shamah@eema.org</i></para>
<para><b>Jordi Forne</b>, <i>Universitat Polit&#232;cnica de Catalunya, Spain; E-mail: jforne@entel.upc.edu</i></para>
<para><b>Jordi Ortiz</b>, <i>Department of Information and Communications Engineering, Faculty of Computer Science, University of Murcia, Murcia, Spain; E-mail: jordi.ortiz@um.es</i></para>
<para><b>J&#246;rg Schwenk</b>, <i>Ruhr Universit&#228;t Bochum, Universit&#228;tsstra&#223;e 150, 44801 Bochum, Germany; E-mail: jorg.schwenk@rub.de</i></para>
<para><b>Jorge Bernal Bernabe</b>, <i>Department of Information and Communications Engineering, University of Murcia, Murcia, Spain; E-mail: jorgebernal@um.es</i></para>
<para><b>Jose Crespo Mart&#237;n</b>, <i>Atos Research and Innovation (ARI), Atos, Spain; E-mail: jose.crespomartin.external@atos.net</i></para>
<para><b>Jos&#233; M. del &#193;lamo</b>, <i>Universidad Polit&#233;cnica de Madrid, Departamento de Ingenier&#237;a de Sistemas Telem&#225;ticos, 28040, Madrid, Spain; E-mail: jm.delalamo@upm.es</i></para>
<para><b>Jose Martins</b>, <i>Multicert &#8211; Servicos de Certificacao Electronica SA, Lagoas Parque Edificio 3 Piso 3, Porto Salvo 2740 266, Portugal; E-mail: jose.martins@multicert.com</i></para>
<para><b>Jos&#233; Vila</b>, <i>Devstat, Spain; E-mail: jvila@devstat.com</i></para>
<para><b>Juan Carlos P&#233;rez Ba&#250;n</b>, <i>Atos Research and Innovation (ARI), Atos, Spain; E-mail: juan.perezb@atos.net</i></para>
<para><b>Juha R&#246;ning</b>, <i>University of Oulu &#8211; Faculty of Information Technology and Electrical Engineering, Finland; E-mail: juha.roning@oulu.fi</i></para>
<para><b>Julian Valero</b>, <i>University of Murcia, Murcia, Spain; E-mail: julivale@um.es</i></para>
<para><b>Juliano Efson Sales</b>, <i>University of Passau, Germany; E-mail: juliano-sales@uni-passau.de</i></para>
<para><b>Juliet Lodge</b>, <i>Saher Ltd., United Kingdom; E-mail: juliet@saher-uk.com</i></para>
<para><b>Juraj Somorovsky</b>, <i>Ruhr Universit&#228;t Bochum, Universit&#228;tsstra&#223;e 150, 44801 Bochum, Germany; E-mail: juraj.somorovsky@rub.de</i></para>
<para><b>Katerina Ksystra</b>, <i>University of the Aegean, i4m Lab (Information Management Lab), Greece; E-mail: katerinaksystra@gmail.com</i></para>
<para><b>Katsiaryna Labunets</b>, <i>Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands; E-mail: K.Labunets@tudelft.nl</i></para>
<para><b>Kim Gammelgaard</b>, <i>RheaSoft, Denmark; E-mail: kim@rheasoft.dk</i></para>
<para><b>Konstantinos Lampropoulos</b>, <i>University of Patras, Greece; E-mail: klamprop@ece.upatras.gr</i></para>
<para><b>Konstantinos M. Giannoutakis</b>, <i>Information Technologies Institute, Centre for Research &amp; Technology Hellas, Greece; E-mail: kgiannou@iti.gr</i></para>
<para><b>Konstantinos Rantos</b>, <i>Eastern Macedonia and Thrace Institute of Technology, Department of Computer and Informatics Engineering</i>, <i>Greece; E-mail: krantos@teiemt.gr</i></para>
<para><b>Laurentiu Vasiliu</b>, <i>Peracton, Ireland; E-mail: laurentiu.vasiliu@peracton.com</i></para>
<para><b>Leonidas Kallipolitis</b>, <i>AEGIS IT RESEARCH LTD, United Kingdom; E-mail: lkallipo@aegisresearch.eu</i></para>
<para><b>Manel Medina</b>, <i>1. Universitat Polit&#232;cnica de Catalunya, esCERT-inLab, 08034, Barcelona, Spain; 2. APWG European Union Foundation, Research and Development</i>, <i>08012, Barcelona, Spain; E-mail: medina@ac.upc.edu</i></para>
<para><b>Manos Athanatos</b>, <i>Foundation for Research and Technology &#8211; Hellas, Greece; E-mail: athanat@ics.forth.gr</i></para>
<para><b>Marc Sel</b>, <i>PwC Enterprise Advisory, Woluwedal 18, Sint Stevens Woluwe 1932, Belgium; E-mail: marc.sel@be.pwc.com</i></para>
<para><b>Markus Heinrich</b>, <i>Technische Universit&#228;t Darmstadt, Germany; E-mail: heinrich@seceng.informatik.tu-darmstadt.de</i></para>
<para><b>Marlos Silva</b>, <i>SONAE, Portugal; E-mail: mhsilva@sonae.pt</i></para>
<para><b>Mart&#237;n Griesbacher</b>, <i>University of Graz, Institute of Philosophy and Institute of Sociology, 8010, Graz, Austria; E-mail: m.griesbacher@uni-graz.at</i></para>
<para><b>Mathieu Bouet</b>, <i>THALES Communications &amp; Security SAS, Gennevilliers, France; E-mail: mathieu.bouet@thalesgroup.com</i></para>
<para><b>Matteo Bregonzio</b>, <i>3rd Place, Italy; E-mail: matteo.bregonzio@3rdplace.com</i></para>
<para><b>Matteo Filipponi</b>, <i>Device Gateway SA, Research and Development, Switzerland; E-mail: mfilipponi@devicegateway.com</i></para>
<para><b>Maurizio Aiello</b>, <i>National Research Council (CNR-IEIIT) &#8211; Via De Marini 6 &#8211; 16149 Genoa, Italy; E-mail: maurizio.mongelli@ieiit.cnr.it</i></para>
<para><b>Michael Jonas</b>, <i>Federal Office of Administration (Bundesverwaltungsamt), Barbarastr. 1, 50735 Cologne, Germany; E-mail: michael.jonas@bva.bund.de</i></para>
<para><b>Michael Rauh</b>, <i>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany; E-mail: michael.rauh@ecsec.de</i></para>
<para><b>Mikheil Kapanadze</b>, <i>Public Service Development Agency, Tsereteli Avenue 67A, Tbilisi 0154, Georgia; E-mail: mkapanadze@sda.gov.ge</i></para>
<para><b>Miloud Bagaa</b>, <i>Department of Communications and Networking, School of Electrical Engineering, Aalto University, Finland; E-mail: miloud.bagaa@aalto.fi</i></para>
<para><b>Monica Florea</b>, <i>SIVECO Romania SA, Romania; E-mail: Monica.Florea@siveco.ro</i></para>
<para><b>Neeraj Suri</b>, <i>Technische Universit&#228;t Darmstadt, Germany; E-mail: suri@cs.tu-darmstadt.de</i></para>
<para><b>Niko Tsakalakis</b>, <i>University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom; E-mail: niko.tsakalakis@soton.ac.uk</i></para>
<para><b>Nikolaos Tsinganos</b>, <i>Department of Applied Informatics, University of Macedonia, Greece; E-mail: tsinik@uom.edu.gr</i></para>
<para><b>Nikolaos Zotos</b>, <i>Future Intelligence LTD, United Kingdom; E-mail: Zotos@f-in.co.uk</i></para>
<para><b>Nikos Triantafyllou</b>, <i>University of the Aegean, i4m Lab (Information Management Lab), Greece; E-mail: triantafyllou.ni@gmail.com</i></para>
<para><b>Nikos Vassileiadis</b>, <i>Trek Consulting, Greece; E-mail: n.vasileiadis@trek-development.eu</i></para>
<para><b>Nuno Ponte</b>, <i>Multicert &#8211; Servicos de Certificacao Electronica SA, Lagoas Parque Edificio 3 Piso 3, Porto Salvo 2740 266, Portugal; E-mail: nuno.ponte@multicert.com</i></para>
<para><b>Nuria Ituarte Aranda</b>, <i>Atos Research and Innovation (ARI), Atos, Spain; E-mail: nuria.ituarte@atos.net</i></para>
<para><b>Oscar Garcia</b>, <i>Information Catalyst, Spain; E-mail: oscar.garcia@informationcatalyst.com</i></para>
<para><b>Pablo L&#243;pez-Aguilar</b>, <i>APWG European Union Foundation, Research and Development, 08012, Barcelona, Spain; E-mail: pablo.lopezaguilar@apwg.eu</i></para>
<para><b>Pamela Briggs</b>, <i>Psychology, University of Northumbria at Newcastle, United Kingdom; E-mail: p.briggs@northumbria.ac.uk</i></para>
<para><b>Panayotis Fouliras</b>, <i>Department of Applied Informatics, University of Macedonia, Greece; E-mail: pfoul@uom.edu.gr</i></para>
<para><b>Peter Hamm</b>, <i>Goethe University Frankfurt, Germany; E-mail: peter.hamm@m-chair.de</i></para>
<para><b>Peter Pollner</b>, <i>MTA-ELTE Statistical and Biological Physics Research Group, Hungary; E-mail: pollner@angel.elte.hu</i></para>
<para><b>Rafael Marin-Perez</b>, <i>Department of Research &amp; Innovation, Odin Solutions, Murcia, Spain; E-mail: rmarin@odins.es</i></para>
<para><b>Rafael Torres</b>, <i>University of Murcia, Murcia, Spain; E-mail: rtorres@um.es</i></para>
<para><b>Rami Addad</b>, <i>Department of Communications and Networking, School of Electrical Engineering, Aalto University, Finland; E-mail: rami.addad@aalto.fi</i></para>
<para><b>Raquel Cort&#233;s Carreras</b>, <i>Atos Research and Innovation (ARI), Atos, Spain; E-mail: raquel.cortes@atos.net</i></para>
<para><b>Renato Portela</b>, <i>Multicert &#8211; Servicos de Certificacao Electronica SA, Lagoas Parque Edificio 3 Piso 3, Porto Salvo 2740 266, Portugal; E-mail: renato.portela@multicert.com</i></para>
<para><b>Ren&#233; Lottes</b>, <i>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany; E-mail: rene.lottes@ecsec.de</i></para>
<para><b>Roger Dean</b>, <i>European Electronic Messaging Association AISBL</i>, <i>Rue Washington 40, Bruxelles 1050, Belgium; E-mail: r.dean@eema.org</i></para>
<para><b>Rub&#233;n Trapero</b>, <i>Atos Research and Innovation, Atos, Calle Albarracin 25, Madrid, Spain; E-mail: ruben.trapero@atos.net</i></para>
<para><b>S&#233;bastien Ziegler</b>, <i>Mandat International, Research, Switzerland; E-mail: sziegler@mandint.org</i></para>
<para><b>Shmuel Bar</b>, <i>IntuView, Israel; E-mail: sbar@intuview.com</i></para>
<para><b>Silvia Scaglione</b>, <i>National Research Council (CNR-IEIIT) &#8211; Via De Marini 6 &#8211; 16149 Genoa, Italy; E-mail: silvia.scaglione@ieiit.cnr.it</i></para>
<para><b>Slobodan Nedeljkovic</b>, <i>Ministarstvo unutra&#353;njih poslova Republike Srbije, Kneza Milo&#353;a 103, Belgrade 11000, Serbia; E-mail: slobodan.nedeljkovic@mup.gov.rs</i></para>
<para><b>Sne&#382;ana Stoji&#269;i&#263;</b>, <i>Ministarstvo unutra&#353;njih poslova Republike Srbije, Kneza Milosa 103, Belgrade 11000, Serbia; E-mail: snezana.stojicic@mup.gov.rs</i></para>
<para><b>Sofia Tsekeridou</b>, <i>Intrasoft International, Greece; E-mail: Sofia.Tsekeridou@intrasoft-intl.com</i></para>
<para><b>Sophie Stalla-Bourdillon</b>, <i>University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom; E-mail: sophie.stalla-bourdillon@soton.ac.uk</i></para>
<para><b>Sotiris Ioannidis</b>, <i>Foundation for Research and Technology &#8211; Hellas, Greece; E-mail: sotiris@ics.forth.gr</i></para>
<para><b>Stavros Salonikias</b>, <i>Department of Applied Informatics, University of Macedonia, Greece; E-mail: salonikias@uom.edu.gr</i></para>
<para><b>Stefan Baszanowski</b>, <i>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany; E-mail: stefan.baszanowski@ecsec.de</i></para>
<para><b>Stefan Katzenbeisser</b>, <i>Technische Universit&#228;t Darmstadt, Germany; E-mail: katzenbeisser@seceng.informatik.tu-darmstadt.de</i></para>
<para><b>Stefan Schiffner</b>, <i>University of Luxembourg, Luxembourg; E-mail: stefan.schiffner@uni.lu</i></para>
<para><b>Stefano Bianchi</b>, <i>Research &amp; Innovation Department, SOFTECO SISMAT SRL, Di Francia 1 &#8211; WTC Tower, 16149, Genoa, Italy; E-mail: stefano.bianchi@softeco.it</i></para>
<para><b>Stephan Krenn</b>, <i>AIT Austrian Institute of Technology GmbH, Austria; E-mail: stephan.krenn@ait.ac.at</i></para>
<para><b>Stuart Mart&#237;n</b>, <i>Office of the Police and Crime Commissioner for West Yorkshire, (POOC), West Yorkshire, United Kingdom; E-mail: stuart.martin@westyorkshire.pnn.police.uk</i></para>
<para><b>Sven Wagner</b>, <i>University Stuttgart, Institute of Human Factors and Technology Management, Allmandring 35, 70569 Stuttgart, Germany; E-mail: sven.wagner@iat.uni-stuttgart.de</i></para>
<para><b>Syed Naqvi</b>, <i>Birmingham City University, United Kingdom; E-mail: Syed.Naqvi@bcu.ac.uk</i></para>
<para><b>Tarik Taleb</b>, <i>Department of Communications and Networking, School of Electrical Engineering, Aalto University, Finland; E-mail: tarik.taleb@aalto.fi</i></para>
<para><b>Thomas Schaberreiter</b>, <i>University of Vienna &#8211; Faculty of Computer Science, Austria; E-mail: thomas.schaberreiter@univie.ac.at</i></para>
<para><b>Thomas Schubert</b>, <i>Federal Office of Administration (Bundesverwaltungsamt), Barbarastr. 1, 50735 Cologne, Germany; E-mail: thomas.schubert@bva.bund.de</i></para>
<para><b>Tiago Oliveira</b>, <i>SONAE, Portugal; E-mail: tioliveira@sonae.pt</i></para>
<para><b>Tilman Frosch</b>, <i>Ruhr Universit&#228;t Bochum, Universit&#228;tsstra&#223;e 150, 44801 Bochum, Germany; E-mail: tilman.frosch@rub.de</i></para>
<para><b>Tina H&#252;hnlein</b>, <i>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany; E-mail: tina.huhnlein@ecsec.de</i></para>
<para><b>Tobias Wich</b>, <i>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany; E-mail: tobias.wich@ecsec.de</i></para>
<para><b>Valentin Gibello</b>, <i>University of Lille, CERAPS &#8211; Faculty of Law, 59000, Lille, France; E-mail: valentin.gibello@univ-lille.fr</i></para>
<para><b>Vassilis Chatzigiannakis</b>, <i>Intrasoft International, Greece; E-mail: Vassilis.Chatzigiannakis@intrasoft-intl.com</i></para>
<para><b>Veronika Kupfersberger</b>, <i>University of Vienna &#8211; Faculty of Computer Science, Austria; E-mail: veronika.kupfersberger@univie.ac.at</i></para>
<para><b>Vincent Bouckaert</b>, <i>Ar&#951;s Spikeseed, Rue Nicolas Bov&#233; 2B</i>, <i>1253 Luxembourg, Luxembourg; E-mail: vincent.bouckaert@arhs-developments.com</i></para>
<para><b>Vladislav Mladenov</b>, <i>Ruhr Universit&#228;t Bochum, Universit&#228;tsstra&#223;e 150, 44801 Bochum, Germany; E-mail: vladislav.mladenov@rub.de</i></para>
<para><b>Volker Zeuner</b>, <i>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany; E-mail: volker.zeuner@ecsec.de</i></para>
<para><b>Waqar Asif</b>, <i>City, University of London, United Kingdom; E-mail: Waqar.Asif@city.ac.uk</i></para>
<para><b>Wolter Pieters</b>, <i>Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands; E-mail: W.Pieters@tudelft.nl</i></para>
<para><b>Xavi Masip-Bruin</b>, <i>Universitat Polit&#232;cnica de Catalunya, Spain; E-mail: xmasip@ac.upc.edu</i></para>
<para><b>Yannis Nikoloudakis</b>, <i>Department of Informatics Engineering, Technological Educational Institute of Crete, Greece; E-mail: Nikoloudakis@pasiphae.teicrete.gr</i></para>
<para><b>Yolanda G&#243;mez</b>, <i>Devstat, Spain; E-mail: ygomez@devstat.com</i></para>
</preface>

<preface class="preface" id="preface04">
<title><b>List of Figures</b></title>
<table-wrap position="float" id="T">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr><td valign="top" align="left"><b><link linkend="F2-1">Figure 2.1</link></b></td><td>Main stages of ANASTACIA framework</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F2-2">Figure 2.2</link></b></td><td>ANASTACIA high-level architectural view</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F2-3">Figure 2.3</link></b></td><td>Security orchestration plane</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F2-4">Figure 2.4</link></b></td><td>Security orchestration and enforcement in case of a reactive scenario</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F3-1">Figure 3.1</link></b></td><td>High-level architecture of the SAINT framework</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F3-2">Figure 3.2</link></b></td><td>Stakeholder reference model</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F3-3">Figure 3.3</link></b></td><td>Global security map of Finland</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-1">Figure 4.1</link></b></td><td>Information technology security vs cyber-security</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-2">Figure 4.2</link></b></td><td>Incidents reported to US-CERT, Fiscal Years 2006&#8211;2014</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-3">Figure 4.3</link></b></td><td>FORTIKA deployment diagram (High level)</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-4">Figure 4.4</link></b></td><td>FORTIKA accelerator architecture</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-5">Figure 4.5</link></b></td><td>Middleware use in FORTIKA</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-6">Figure 4.6</link></b></td><td>SBH and LwM2M client components of the middleware</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-7">Figure 4.7</link></b></td><td>Synthesis engine component of the middleware</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-8">Figure 4.8</link></b></td><td>Synthesis sequence steps</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-9">Figure 4.9</link></b></td><td>Process of deployment and management within FORTIKA marketplace</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-10">Figure 4.10</link></b></td><td>ABAC components</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-11">Figure 4.11</link></b></td><td>ABAC layered approach</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-12">Figure 4.12</link></b></td><td>SEARS architecture</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F4-13">Figure 4.13</link></b></td><td>SEARS conceptual design</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F5-1">Figure 5.1</link></b></td><td>BAID describing the cybersecurity resource allocation problem</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F5-2">Figure 5.2</link></b></td><td>Screenshot of the online cybersecurity shop in the experiment</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F5-3">Figure 5.3</link></b></td><td>A snapshot of the CYBECO tool, gathering inputs on assets to feed the cyber risk analysis tool</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F6-1">Figure 6.1</link></b></td><td>Map of deployed SISSDEN sensors</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F6-2">Figure 6.2</link></b></td><td>High-level architecture of the SISSDEN network</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F6-3">Figure 6.3</link></b></td><td>Genesis of the attack over time (measurements taken every 5 minutes)</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F7-1">Figure 7.1</link></b></td><td>Overall CIPSEC innovations due to various solutions integration</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F7-2">Figure 7.2</link></b></td><td>CIPSEC reference architecture for protection of critical infrastructures</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F7-3">Figure 7.3</link></b></td><td>CIPSEC dashboard</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-1">Figure 8.1</link></b></td><td>The CS-AWARE approach</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-2">Figure 8.2</link></b></td><td>Roles and responsibilities in European cybersecurity strategy</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-3">Figure 8.3</link></b></td><td>Systems thinking</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-4">Figure 8.4</link></b></td><td>CS-AWARE framework</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-5">Figure 8.5</link></b></td><td>I/P/O interface definition framework</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-6">Figure 8.6</link></b></td><td>Soft systems analysis rich picture</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-7">Figure 8.7</link></b></td><td>System and dependency analysis use case example</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-8">Figure 8.8</link></b></td><td>From tweets to knowledge graphs</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-9">Figure 8.9</link></b></td><td>The CS-AWARE visualization component</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-10">Figure 8.10</link></b></td><td>CTI exchange interoperability layers</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-11">Figure 8.11</link></b></td><td>The traffic light protocol</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-12">Figure 8.12</link></b></td><td>Properties of self-healing research</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F8-13">Figure 8.13</link></b></td><td>Self-healing subcomponents activity diagram</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F9-1">Figure 9.1</link></b></td><td>Dynamic learning capabilities of the systems to update keywords, vector spaces, rule patterns, algorithms and models</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F9-2">Figure 9.2</link></b></td><td>Layered application architecture</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F9-3">Figure 9.3</link></b></td><td>Complex event processing module &#8211; Logical component diagram</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F9-4">Figure 9.4</link></b></td><td>Illustration of a possible output of the SNA tool</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F9-5">Figure 9.5</link></b></td><td>Two-layer networked privacy preserving big data analytics model between coalition forces</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F9-6">Figure 9.6</link></b></td><td>Main system user interface &#8211; Component interactions diagram</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F9-7">Figure 9.7</link></b></td><td>Centralized audit and logging &#8211; Interactions diagram</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F10-1">Figure 10.1</link></b></td><td>TRUESSEC.eu core areas of trustworthiness</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F10-2">Figure 10.2</link></b></td><td>Developing the criteria catalogue</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F10-3">Figure 10.3</link></b></td><td>Core areas of trustworthiness and related ICT system properties</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F10-4">Figure 10.4</link></b></td><td>Guiding elements of the operationalization process</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F10-5">Figure 10.5</link></b></td><td>TRUESSEC.eu labelling proposal</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F10-6">Figure 10.6</link></b></td><td>(Illustrative) Levels of conformance</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F11-1">Figure 11.1</link></b></td><td>ARIES ecosystem overview</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F11-2">Figure 11.2</link></b></td><td>e-Commerce demonstrator scenario overview</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F12-1">Figure 12.1</link></b></td><td>The LIGHTest reference architecture</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F12-2">Figure 12.2</link></b></td><td>Sequence diagram for trust publication of a qualified signature</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F12-3">Figure 12.3</link></b></td><td>Representation of trust scheme publications in the TSPA</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F13-1">Figure 13.1</link></b></td><td>Abstract data flow in CREDENTIAL</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F13-2">Figure 13.2</link></b></td><td>Architecture of the CREDENTIAL eHealth pilot</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F14-1">Figure 14.1</link></b></td><td>FutureTrust partners</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F14-2">Figure 14.2</link></b></td><td>FutureTrust System Architecture</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F14-3">Figure 14.3</link></b></td><td>Outline of the Architecture of the Scalable Preservation Service</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F14-4">Figure 14.4</link></b></td><td>National eID cards, platforms and applications supported by IdMS</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F14-5">Figure 14.5</link></b></td><td>Enrolment and usage phase for SigS</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F15-1">Figure 15.1</link></b></td><td>eIDAS network and CEF building blocks background</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F15-2">Figure 15.2</link></b></td><td>eIDAS adapter architecture general overview</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F15-3">Figure 15.3</link></b></td><td>SP integration with eIDAS node using Greek connector(s)</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F15-4">Figure 15.4</link></b></td><td>eIDAS WebApp 2.0</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F15-5">Figure 15.5</link></b></td><td>eIDAS ISS 2.0 (plus this WebApp)</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F15-6">Figure 15.6</link></b></td><td>Spanish eIDAS adapter modules</td></tr>
<tr><td valign="top" align="left"><b><link linkend="F15-7">Figure 15.7</link></b></td><td>LEPS services and automated eCATS tool</td></tr>
</tbody>
</table>
</table-wrap>
</preface>

<preface class="preface" id="preface05">
<title><b>List of Tables</b></title>
<table-wrap position="float" id="T">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr><td valign="top" align="left"><b><link linkend="T1-1">Table 1.1</link></b></td><td>Main cybersecurity research challenges and related EU projects</td></tr>
<tr><td valign="top" align="left"><b><link linkend="T1-2">Table 1.2</link></b></td><td>Main Privacy related research challenges and related EU projects</td></tr>
<tr><td valign="top" align="left"><b><link linkend="T10-1">Table 10.1</link></b></td><td>TRUESSEC.eu core areas of trustworthiness</td></tr>
<tr><td valign="top" align="left"><b><link linkend="T10-2">Table 10.2</link></b></td><td>Criterion &#8211; Information</td></tr>
<tr><td valign="top" align="left"><b><link linkend="T10-3">Table 10.3</link></b></td><td>Example of the guiding elements of the operationalization process</td></tr>
<tr><td valign="top" align="left"><b><link linkend="T12-1">Table 12.1</link></b></td><td>Types of trust scheme publications in LIGHTest</td></tr>
<tr><td valign="top" align="left"><b><link linkend="T15-1">Table 15.1</link></b></td><td>Cost of three eIDAS connectivity options</td></tr>
</tbody>
</table>
</table-wrap>
</preface>

<preface class="preface" id="preface06">
<title><b>List of Abbreviations</b></title>
<table-wrap position="float" id="T1">
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr valign="top"><td>AAA</td><td>Authentication, Authorization and Accounting</td></tr>
<tr valign="top"><td>ABAC</td><td>Attribute Based Access Control</td></tr>
<tr valign="top"><td>ABC</td><td>Anti-Bot Code of Conduct</td></tr>
<tr valign="top"><td>ABC</td><td>Attribute Based Credentials</td></tr>
<tr valign="top"><td>ACS</td><td>Anonymous Credential Systems</td></tr>
<tr valign="top"><td>AD</td><td>Architectural Description</td></tr>
<tr valign="top"><td>AI</td><td>Artificial Intelligence</td></tr>
<tr valign="top"><td>API</td><td>Application Programming Interface</td></tr>
<tr valign="top"><td>AQDRS</td><td>Regional System of Detection of Air Quality</td></tr>
<tr valign="top"><td>ASN</td><td>Anonymous Communication Networks</td></tr>
<tr valign="top"><td>ASR</td><td>Automatic Speech Recognition</td></tr>
<tr valign="top"><td>ATHEX</td><td>Athens Exchange Group</td></tr>
<tr valign="top"><td>BAID</td><td>Bi-Agent Influence Diagram</td></tr>
<tr valign="top"><td>C&amp;C</td><td>Command &amp; Control</td></tr>
<tr valign="top"><td>CEF</td><td>Connecting Europe Facility</td></tr>
<tr valign="top"><td>CEP</td><td>Complex Event Processing</td></tr>
<tr valign="top"><td>CERT</td><td>Computer Emergency Response Team</td></tr>
<tr valign="top"><td>CERTs</td><td>Computer Emergency Response Teams</td></tr>
<tr valign="top"><td>CI, CIs</td><td>Critical Infrastructure, Critical Infrastructures</td></tr>
<tr valign="top"><td>CIDR</td><td>Classless Inter-Domain Routing</td></tr>
<tr valign="top"><td>CIPIs</td><td>Critical Infrastructure Performance Indicators</td></tr>
<tr valign="top"><td>CMS</td><td>Compliance Management Service</td></tr>
<tr valign="top"><td>CNN</td><td>Convolutional Neural Network</td></tr>
<tr valign="top"><td>Conpot</td><td>Honeypot, http://conpot.org/</td></tr>
<tr valign="top"><td>CORAS</td><td>Name of the risk analysis method developed by Lund, Solhaug and St&#248;len</td></tr>
<tr valign="top"><td>CPS</td><td>Cyber Physical Systems</td></tr>
<tr valign="top"><td>CSIRTs</td><td>Computer Security Incident Response Teams</td></tr>
<tr valign="top"><td>CTI</td><td>Cyber Threat Intelligence</td></tr>
<tr valign="top"><td>CYBECO</td><td>Acronym of the H2020 project &#8220;Supporting Cyberinsurance from a Behavioural Choice Perspective&#8221;</td></tr>
<tr valign="top"><td>DDoS</td><td>Distributed Denial of Service</td></tr>
<tr valign="top"><td>Dionaea</td><td>Honeypot, https://github.com/DinoTools/dionaea</td></tr>
<tr valign="top"><td>DKMS</td><td>Decentralized Key Management System</td></tr>
<tr valign="top"><td>DNP3</td><td>Distributed Network Protocol 3</td></tr>
<tr valign="top"><td>DoS</td><td>Denial of Service</td></tr>
<tr valign="top"><td>DoW</td><td>Description of Work</td></tr>
<tr valign="top"><td>DSI</td><td>Digital Service Infrastructure</td></tr>
<tr valign="top"><td>DSM</td><td>Digital Single Market</td></tr>
<tr valign="top"><td>DSS</td><td>Decision Support System</td></tr>
<tr valign="top"><td>EBA</td><td>Eisenbahnbundesamt (Railway Federal Office)</td></tr>
<tr valign="top"><td>EC</td><td>European Commission</td></tr>
<tr valign="top"><td>EC3</td><td>European Cybercrime Centre</td></tr>
<tr valign="top"><td>EHEA</td><td>European Higher Education Area</td></tr>
<tr valign="top"><td>EIA</td><td>Ethical Impact Assessment</td></tr>
<tr valign="top"><td>eID</td><td>electronic Identity, electronic identification</td></tr>
<tr valign="top"><td>eIDAS</td><td>electronic IDentification, Authentication and trust Services</td></tr>
<tr valign="top"><td>ELTA</td><td>Hellenic Post</td></tr>
<tr valign="top"><td>ENISA</td><td>European Union Agency for Network and Information Security</td></tr>
<tr valign="top"><td>EU</td><td>European Union</td></tr>
<tr valign="top"><td>FICORA</td><td>Finnish Communications Regulatory Authority</td></tr>
<tr valign="top"><td>FP7</td><td>7th Framework Programme</td></tr>
<tr valign="top"><td>FPGA</td><td>Field Programmable Gate Array</td></tr>
<tr valign="top"><td>G20</td><td>Group of 20</td></tr>
<tr valign="top"><td>GDP</td><td>Gross Domestic Product</td></tr>
<tr valign="top"><td>GDPR</td><td>General Data Protection Regulation (EU)</td></tr>
<tr valign="top"><td>GSM</td><td>SAINT&#8217;s Global Security Map</td></tr>
<tr valign="top"><td>GW</td><td>Gateway</td></tr>
<tr valign="top"><td>H2020</td><td>Horizon 2020</td></tr>
<tr valign="top"><td>HIDS</td><td>Host Intrusion Detection Systems</td></tr>
<tr valign="top"><td>HMI</td><td>Human-Machine Interface</td></tr>
<tr valign="top"><td>HTTP</td><td>Hypertext Transfer Protocol</td></tr>
<tr valign="top"><td>ICS</td><td>Industrial Control System</td></tr>
<tr valign="top"><td>ICT</td><td>Information and Communication Technology</td></tr>
<tr valign="top"><td>ICTs</td><td>Information Communication Technologies</td></tr>
<tr valign="top"><td>IdM</td><td>Identity Management</td></tr>
<tr valign="top"><td>IdP</td><td>Identity Provider</td></tr>
<tr valign="top"><td>IDPS</td><td>Intrusion Detection and Prevention System</td></tr>
<tr valign="top"><td>IDS</td><td>Intrusion Detection System</td></tr>
<tr valign="top"><td>IMG</td><td>Industry Monitoring Group</td></tr>
<tr valign="top"><td>IMPACT</td><td>International Multilateral Partnership Against Cyber Threats</td></tr>
<tr valign="top"><td>IoT</td><td>Internet of Things</td></tr>
<tr valign="top"><td>IP</td><td>Internet Protocol</td></tr>
<tr valign="top"><td>IPs</td><td>Internet Protocol (IP) addresses</td></tr>
<tr valign="top"><td>IPS</td><td>Intrusion Prevention System</td></tr>
<tr valign="top"><td>ISAC</td><td>Information Sharing and Analysis Center</td></tr>
<tr valign="top"><td>ISF</td><td>Information Security Forum</td></tr>
<tr valign="top"><td>ISO</td><td>International Organization for Standardization</td></tr>
<tr valign="top"><td>ISPs</td><td>Internet Service Providers</td></tr>
<tr valign="top"><td>ISS</td><td>Interconnection Supporting Service</td></tr>
<tr valign="top"><td>IT</td><td>Information Technology</td></tr>
<tr valign="top"><td>ITU</td><td>International Telecommunication Union</td></tr>
<tr valign="top"><td>ITU-GCA</td><td>ITU Global Cybersecurity Agenda</td></tr>
<tr valign="top"><td>JBPM</td><td>Java Business Process Management</td></tr>
<tr valign="top"><td>JSON</td><td>JavaScript Object Notation</td></tr>
<tr valign="top"><td>JWT</td><td>JSON Web Token</td></tr>
<tr valign="top"><td>Kippo</td><td>Honeypot, https://github.com/desaster/kippo</td></tr>
<tr valign="top"><td>KPI</td><td>Key Performance Indicator</td></tr>
<tr valign="top"><td>KPIs</td><td>Key Performance Indicators</td></tr>
<tr valign="top"><td>LEA</td><td>Law Enforcement Agency</td></tr>
<tr valign="top"><td>LEAs</td><td>Law Enforcement Agencies</td></tr>
<tr valign="top"><td>LEPS</td><td>Leveraging eID in the Private Sector</td></tr>
<tr valign="top"><td>LoA</td><td>Level of Assurance</td></tr>
<tr valign="top"><td>MDM</td><td>Master Data Management systems</td></tr>
<tr valign="top"><td>ML</td><td>Meta-Learning</td></tr>
<tr valign="top"><td>MMT</td><td>Montimage Monitoring Tool</td></tr>
<tr valign="top"><td>Modbus</td><td>Communications protocol, http://www.modbus.org/</td></tr>
<tr valign="top"><td>MS</td><td>Member State</td></tr>
<tr valign="top"><td>MSPL</td><td>Medium-level Security Policy Language</td></tr>
<tr valign="top"><td>NFC</td><td>Near Field Communication</td></tr>
<tr valign="top"><td>NFV</td><td>Network Function Virtualization</td></tr>
<tr valign="top"><td>NHS</td><td>Britain&#8217;s National Health Service</td></tr>
<tr valign="top"><td>NIDS</td><td>Network Intrusion Detection Systems</td></tr>
<tr valign="top"><td>NIS Directive</td><td>EU Directive on Security of Network and Information Systems</td></tr>
<tr valign="top"><td>NLP</td><td>Natural Language Processing</td></tr>
<tr valign="top"><td>NTA</td><td>Network Traffic Analysis</td></tr>
<tr valign="top"><td>OC</td><td>Operations Centre</td></tr>
<tr valign="top"><td>OPC</td><td>Open Platform Communications</td></tr>
<tr valign="top"><td>OPC UA</td><td>Open Platform Communications Unified Architecture</td></tr>
<tr valign="top"><td>OSGi</td><td>Open Services Gateway initiative</td></tr>
<tr valign="top"><td>OT</td><td>Operational Technology</td></tr>
<tr valign="top"><td>PAP</td><td>Policy Administration Point</td></tr>
<tr valign="top"><td>PC</td><td>Personal Computer</td></tr>
<tr valign="top"><td>PCAP</td><td>Packet Capture</td></tr>
<tr valign="top"><td>PDP</td><td>Policy Decision Point</td></tr>
<tr valign="top"><td>PEP</td><td>Policy Enforcement Point</td></tr>
<tr valign="top"><td>PHR</td><td>Patient Healthcare Record</td></tr>
<tr valign="top"><td>PIP</td><td>Policy Information Point</td></tr>
<tr valign="top"><td>PLCs</td><td>Programmable Logic Controllers</td></tr>
<tr valign="top"><td>RDBMS</td><td>Relational Database Management System</td></tr>
<tr valign="top"><td>ROI</td><td>Return on Investment</td></tr>
<tr valign="top"><td>RPN</td><td>Region Proposal Network</td></tr>
<tr valign="top"><td>RTNTA</td><td>Real Time Network Traffic Analyzer</td></tr>
<tr valign="top"><td>SAINT</td><td>Systemic Analyser In Network Threats</td></tr>
<tr valign="top"><td>SAML</td><td>Security Assertion Mark-up Language</td></tr>
<tr valign="top"><td>SCADA</td><td>Supervisory Control and Data Acquisition</td></tr>
<tr valign="top"><td>SDA</td><td>Slow DoS Attack</td></tr>
<tr valign="top"><td>SDN</td><td>Software Defined Networks</td></tr>
<tr valign="top"><td>SEARS</td><td>Social Engineering Attack Recognition System</td></tr>
<tr valign="top"><td>SFC</td><td>Service Function Chain</td></tr>
<tr valign="top"><td>SIEM</td><td>Security Information and Event Management</td></tr>
<tr valign="top"><td>SISSDEN</td><td>Secure Information Sharing Sensor Delivery event Network</td></tr>
<tr valign="top"><td>SLA</td><td>Service-Level Agreement</td></tr>
<tr valign="top"><td>SMA</td><td>Semantic Multimedia Analysis</td></tr>
<tr valign="top"><td>SME</td><td>Small and Medium Enterprises</td></tr>
<tr valign="top"><td>SNA</td><td>Social Network Analysis</td></tr>
<tr valign="top"><td>SoC</td><td>System-on-Chip</td></tr>
<tr valign="top"><td>SP</td><td>Service Provider</td></tr>
<tr valign="top"><td>SQL</td><td>Structured Query Language</td></tr>
<tr valign="top"><td>SSH</td><td>Secure Shell</td></tr>
<tr valign="top"><td>SSI</td><td>Self-Sovereign Identity</td></tr>
<tr valign="top"><td>STIX/TAXII</td><td>Structured Threat Information eXpression/Trusted Automated eXchange of Indicator Information</td></tr>
<tr valign="top"><td>UI</td><td>User Interface</td></tr>
<tr valign="top"><td>UK</td><td>United Kingdom</td></tr>
<tr valign="top"><td>URL</td><td>Uniform Resource Locator</td></tr>
<tr valign="top"><td>URLs</td><td>Uniform Resource Locators</td></tr>
<tr valign="top"><td>US</td><td>United States of America</td></tr>
<tr valign="top"><td>USB</td><td>Universal Serial Bus</td></tr>
<tr valign="top"><td>UTM</td><td>Unified Threat Management</td></tr>
<tr valign="top"><td>VHDL</td><td>VHSIC Hardware Description Language</td></tr>
<tr valign="top"><td>VHSIC</td><td>Very High Speed Integrated Circuit</td></tr>
<tr valign="top"><td>VLAN</td><td>Virtual Local Area Network</td></tr>
<tr valign="top"><td>VM</td><td>Virtual Machine</td></tr>
<tr valign="top"><td>VNF</td><td>Virtual Network Functions</td></tr>
<tr valign="top"><td>VPS</td><td>Virtual Private Server</td></tr>
<tr valign="top"><td>VSA</td><td>Virtual Security Appliance</td></tr>
<tr valign="top"><td>WPAN</td><td>Wireless Personal Area Networks</td></tr>
<tr valign="top"><td>WSDL</td><td>Web Services Description Language</td></tr>
<tr valign="top"><td>XL-SIEM</td><td>Cross-Layer Security Information and Event Management</td></tr>
<tr valign="top"><td>ZKP</td><td>Zero Knowledge Proof</td></tr>
</tbody>
</table>
</table-wrap>
</preface>

<preface class="preface" id="preface07">
<title>Contents</title>
<table-wrap position="float">
<table cellspacing="5" cellpadding="5" frame="none" rules="none">
<tbody>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch01">1 Introducing the Challenges in Cybersecurity and Privacy: The European Research Landscape</link></emphasis><?lb?> </td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C1.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch02">2 Key Innovations in ANASTACIA: Advanced Networked Agents for Security and Trust Assessment in CPS/IOT Architectures</link></emphasis><?lb?>Jorge Bernal Bernabe, Alejandro Molina, Antonio Skarmeta, Stefano Bianchi, Enrico Cambiaso, Ivan Vaccari, Silvia Scaglione, Maurizio Aiello, Rub&#233;n Trapero, Mathieu Bouet, Dallal Belabed, Miloud Bagaa, Rami Addad, Tarik Taleb, Diego Rivera, Alie El-Din Mady, Adrian Quesada Rodriguez, C&#233;dric Crettaz, Sebastien Ziegler, Eunah Kim, Matteo Filipponi, Bojana Bajic, Dan Garcia-Carrillo and Rafael Marin-Perez</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C2.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch03">3 Statistical Analysis and Economic Models for Enhancing Cyber-security in SAINT</link></emphasis><?lb?>Edgardo Montes de Oca, John M. A. Bothos and Stefan Schiffner</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C3.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch04">4 The FORTIKA Accelerated Edge Solution for Automating SMEs Security</link></emphasis><?lb?>Evangelos K. Markakis, Yannis Nikoloudakis, Evangelos Pallis, Ales Cernivec, Panayotis Fouliras, Ioannis Mavridis, Georgios Sakellariou, Stavros Salonikias, Nikolaos Tsinganos, Anargyros Sideris, Nikolaos Zotos, Anastasios Drosou, Konstantinos M. Giannoutakis and Dimitrios Tzovaras</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C4.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch05">5 CYBECO: Supporting Cyber-Insurance from a Behavioural Choice Perspective</link></emphasis><?lb?>Nikos Vassileiadis, Aitor Couce Vieira, David R&#237;os Insua, Vassilis Chatzigiannakis, Sofia Tsekeridou, Yolanda G&#243;mez, Jos&#233; Vila, Deepak Subramanian, Caroline Baylon, Katsiaryna Labunets, Wolter Pieters, Pamela Briggs and Dawn Branley-Bell</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C5.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch06">6 Cyber-Threat Intelligence from European-wide Sensor Network in SISSDEN</link></emphasis><?lb?>Edgardo Montes de Oca, Jart Armin and Angelo Consoli</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C6.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch07">7 CIPSEC-Enhancing Critical Infrastructure Protection with Innovative Security Framework</link></emphasis><?lb?>Antonio &#193;lvarez, Rub&#233;n Trapero, Denis Guilhot, Ignasi Garc&#237;a-Mila, Francisco Hernandez, Eva Mar&#237;n-Tordera, Jordi Forne, Xavi Masip-Bruin, Neeraj Suri, Markus Heinrich, Stefan Katzenbeisser, Manos Athanatos, Sotiris Ioannidis, Leonidas Kallipolitis, Ilias Spais, Apostolos Fournaris and Konstantinos Lampropoulos</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C7.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch08">8 A Cybersecurity Situational Awareness and Information-sharing Solution for Local Public Administrations Based on Advanced Big Data Analysis: The CS-AWARE Project</link></emphasis><?lb?>Thomas Schaberreiter, Juha Roning, Gerald Quirchmayr, Veronika Kupfersberger, Chris Wills, Matteo Bregonzio, Adamantios Koumpis, Juliano Efson Sales, Laurentiu Vasiliu, Kim Gammelgaard, Alexandros Papanikolaou, Konstantinos Rantos and Arnolt Spyros</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C8.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch09">9 Complex Project to Develop Real Tools for Identifying and Countering Terrorism: Real-time Early Detection and Alert System for Online Terrorist Content Based on Natural Language Processing, Social Network Analysis, Artificial Intelligence and Complex Event Processing</link></emphasis><?lb?>Monica Florea, Cristi Potlog, Peter Pollner, Daniel Abel, Oscar Garcia, Shmuel Bar, Syed Naqvi and Waqar Asif</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C9.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch010">10 TRUESSEC Trustworthiness Label Recommendations</link></emphasis><?lb?>Danny S. Guam&#225;n, Manel Medina, Pablo Lopez-Aguilar, Hristina Veljanova, Jose M. del &#193;lamo, Valentin Gibello, Mart&#237;n Griesbacher and Ali Anjomshoaa</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C10.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch011">11 An Overview on ARIES: Reliable European Identity Ecosystem</link></emphasis><?lb?>Jorge Bernal Bernabe, Rafael Torres, David Martin, Alberto Crespo, Antonio Skarmeta, Dave Fortune, Juliet Lodge, Tiago Oliveira, Marlos Silva, Stuart Martin, Julian Valero and Ignacio Alamillo</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C11.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch012">12 The LIGHTest Project: Overview, Reference Architecture and Trust Scheme Publication Authority</link></emphasis><?lb?>Heiko Ro&#223;nagel and Sven Wagner</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C12.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch013">13 Secure and Privacy-Preserving Identity and Access Management in CREDENTIAL</link></emphasis><?lb?>Peter Hamm, Stephan Krenn and John Soren Pettersson</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C13.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch014">14 FutureTrust &#8211; Future Trust Services for Trustworthy Global Transactions</link></emphasis><?lb?>Detlef H&#252;hnlein, Tilman Frosch, J&#246;rg Schwenk, Carl-Markus Piswanger, Marc Sel, Tina H&#252;hnlein, Tobias Wich, Daniel Nemmert, Rene Lottes, Stefan Baszanowski, Volker Zeuner, Michael Rauh, Juraj Somorovsky, Vladislav Mladenov, Cristina Condovici, Herbert Leitold, Sophie Stalla-Bourdillon, Niko Tsakalakis, Jan Eichholz, Frank-Michael Kamm, Jens Urmann, Andreas K&#252;hne, Damian Wabisch, Roger Dean, Jon Shamah, Mikheil Kapanadze, Nuno Ponte, Jose Mart&#237;ns, Renato Portela, Cagatay Karabat, Snezana Stojicic, Slobodan Nedeljkovic, Vincent Bouckaert, Alexandre Defays, Bruce Anderson, Michael Jonas, Christina Hermanns, Thomas Schubert, Dirk Wegener and Alexander Sazonov</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C14.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch015">15 LEPS &#8211; Leveraging eID in the Private Sector</link></emphasis><?lb?>Jose Crespo Mart&#237;n, Nuria Ituarte Aranda, Raquel Cort&#233;s Carreras, Aljosa Pasic, Juan Carlos Perez Ba&#250;n, Katerina Ksystra, Nikos Triantafyllou, Harris Papadakis, Elena Torroglosa and Jordi Ortiz</td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C15.pdf">Download As PDF</ulink></td></tr>
<tr><td valign="top"><emphasis role="strong"><link linkend="ch016">16 About the Editors</link></emphasis><?lb?> </td><td valign="top" align="left" width="15%"><ulink url="https://www.riverpublishers.com/pdf/ebook/chapter/RP_9788770220873C16.pdf">Download As PDF</ulink></td></tr>
</tbody>
</table>
</table-wrap>
</preface>

<chapter class="chapter" id="ch01" label="1" xreflabel="1">
<title>Introducing the Challenges in Cybersecurity and Privacy: The European Research Landscape</title>
<para><b>Jorge Bernal Bernabe and Antonio Skarmeta</b></para>
<para>Department of Information and Communications Engineering, University of Murcia, Murcia, Spain</para>
<para>E-mail: jorgebernal@um.es; skarmeta@um.es</para>
<para>The continuous, rapid and widespread usage of ICT systems, the constrained and large-scale nature of certain related networks such as the IoT (Internet of Things), the autonomous nature of upcoming systems, as well as the new cyber-threats arising from new disruptive technologies, are giving rise to new kinds of cyberattacks and security issues. In this sense, this chapter categorises and presents the 10 main current cybersecurity and privacy research challenges, as well as the 14 European research projects in the scope of cybersecurity and privacy, analysed further throughout this book, that are addressing these challenges.</para>

<section class="lev1" id="sec1-1">
<title>1.1 Introduction</title>
<para>The widespread usage and development of ICT systems is leading to new kinds of cyber-threats. Cyberattacks are continuously emerging and evolving, exploiting disruptive systems and technologies such as Cyber Physical Systems (CPS)/IoT, virtualization technologies, clouds, mobile systems/networks and autonomous systems (e.g. drones, vehicles). Cyber attackers are continuously improving their techniques to come up with stealthy and sophisticated attacks, especially against the IoT, since these environments suffer additional vulnerabilities due to their constrained capabilities, their unattended nature and the usage of potentially untrustworthy components. Similarly, identity theft, fraud, personal data leakages and other related cyber-crimes are continuously evolving, causing significant damage and privacy problems for European citizens in both virtual and physical scenarios.</para>
<para>In this evolving cyber-threat landscape, we have identified 10 main cybersecurity and privacy research challenges (described in Section 2 of this chapter):</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Interoperable and scalable security management in heterogeneous ecosystems</para></listitem>
<listitem><para>Autonomic security orchestration and enforcement in softwarized and virtualized IoT/CPS systems and mobile environments</para></listitem>
<listitem><para>Cognitive detection and mitigation of evolving new kinds of cyber-threats</para></listitem>
<listitem><para>Dynamic risk assessment and evaluation of cybersecurity, trustworthiness levels, privacy and legal compliance of ICT systems</para></listitem>
<listitem><para>Digital forensics handling, security intelligence and incident information exchange</para></listitem>
<listitem><para>Cybersecurity and privacy tools for end-users and SMEs. The usability and human factor challenges</para></listitem>
<listitem><para>Reliable and privacy-preserving physical and virtual identity management</para></listitem>
<listitem><para>Efficient and secure cryptographic mechanisms to strengthen confidentiality and privacy</para></listitem>
<listitem><para>Global trust management of eID and related services</para></listitem>
<listitem><para>Privacy assessment, run-time evaluation of the quality of security and privacy risks</para></listitem>
</orderedlist>
<para>To meet those challenges, new holistic approaches, methodologies, techniques and tools are needed to prevent and mitigate cyberattacks by employing novel cyber-situational awareness frameworks, risk analysis and modelling tools, threat intelligence systems, cyber-threat information sharing methods and advanced big-data analysis techniques, as well as new solutions that can exploit the benefits brought by the latest technologies such as SDN/NFV and Cloud systems. In addition, novel privacy-preserving techniques, crypto-privacy mechanisms, identity and eID management systems, trust services, and recommendations are needed to protect citizens&#8217; privacy while maintaining usability levels.</para>
<para>The European Commission is addressing the aforementioned challenges through different means, including the Horizon 2020 Research and Innovation program, thereby financing innovative research projects that can cope with the increasing cyberthreat landscape.</para>
<para>In this sense, the cybersecurity strategy of the European Union, &#8220;An Open, Safe and Secure Cyberspace&#8221; [1], is summarized in five strategic priorities:</para>
<para>&#8211; <i>Achieving Cyber resilience;</i></para>
<para>&#8211; <i>Reducing cybercrime;</i></para>
<para>&#8211; <i>Developing a cyber defense policy and capabilities related to the Common Security and Defense Policy (CSDP);</i></para>
<para>&#8211; <i>Developing the industrial and technological resources for cybersecurity;</i></para>
<para>&#8211; <i>Establishing a coherent international cyberspace policy for the European Union that promoted core EU values</i>.</para>
<para>Namely, the European program H2020-EU.3.7 [2] &#8211; &#8220;Secure societies &#8211; Protecting freedom and security of Europe and its citizens&#8221;, with a budget of &#8364;1,694.60 million, is addressing those cybersecurity and privacy challenges. The general objective of that program is <i>&#8220;to foster secure European societies in a context of unprecedented transformations and growing global interdependencies and threats, while strengthening the European culture of freedom and justice</i>.&#8221;</para>
<para>Thus, the H2020-EU.3.7 program is addressing the global challenge about <i>&#8220;undertaking the research and innovation activities needed to protect our citizens, society and economy as well as our infrastructures and services, our prosperity, political stability and wellbeing.&#8221;</i> Namely, this programme [3] aims:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><i>&#8220;to enhance the resilience of our society against natural and man-made disasters, ranging from the development of new crisis management tools to communication interoperability, and to develop novel solutions for the protection of critical infrastructure;</i></para></listitem>
<listitem><para><i>to fight crime and terrorism ranging from new forensic tools to protection against explosives;</i></para></listitem>
<listitem><para><i>to improve border security, ranging from improved maritime border protection to supply chain security and to support the Union&#8217;s external security policies including through conflict prevention and peace building;</i></para></listitem>
<listitem><para><i>and to provide enhanced cybersecurity, ranging from secure information sharing to new assurance models.&#8221;</i></para></listitem>
</itemizedlist>
<para>In this context, this book presents and analyses 14 cybersecurity and privacy-related EU projects funded under this H2020 program, encompassing: ANASTACIA, SAINT, FORTIKA, CYBECO, SISSDEN, CIPSEC, CS-AWARE, RED-Alert, Truessec.eu, ARIES, LIGHTest, CREDENTIAL, FutureTrust and LEPS. For further information about other EU projects funded under H2020-EU.3.7, the reader is referred to [2].</para>
<para>Each chapter in the book is dedicated to a different funded European research project and includes the project&#8217;s overview, objectives and the particular research challenges, among the ones identified above, that it is facing. In addition, each EU research project, in its corresponding chapter, describes its research achievements on security and privacy, as well as the techniques, outcomes and evaluations accomplished in the scope of the corresponding EU project.</para>
<para>The idea of this book originated after a successful clustering workshop entitled <i>&#8220;European projects Clustering workshop On Cybersecurity and Privacy (ECoSP 2018)&#8221;</i> [4], co-located with the 13th International Conference on Availability, Reliability and Security (ARES), where the EU projects analysed in this book were presented and the attendees exchanged their views on the European research landscape in security and privacy.</para>
<para>The rest of this chapter is structured as follows. Section 2 presents the main security and privacy research challenges. Section 3 is devoted to the introduction of the main H2020 EU projects covered in this book, and the main challenges, among the ones identified in Section 2, that each project is facing. Section 4 concludes this chapter.</para>
</section>

<section class="lev1" id="sec1-2">
<title>1.2 Cybersecurity and Privacy Research Challenges</title>
<para>The Ponemon Institute, in a recent study [23], identified the cyber threats posing the greatest risk: cyber warfare or cyber terrorism, breaches involving high-value information, nation-state attackers, breaches that damage critical infrastructure, breaches that disrupt business and IT processes, the emergence of cyber syndicates, the stealth and sophistication of cyber attackers, the emergence of hacktivism, breaches involving large volumes of data, malicious or criminal insiders, and negligent or incompetent employees. The study highlights that cyber warfare and cyber terrorism, and breaches involving high-value information, will have the greatest impact on organizations over the next three years.</para>
<para>These cyber-threats are especially serious and dangerous when they affect IoT and CPS, where massive numbers of heterogeneous, and potentially constrained, things are being added to the network, bringing additional potential vulnerabilities. In this regard, Roman et al. [19] identified the main &#8220;Challenges of Security &amp; Privacy in Distributed Internet of Things&#8221;. Namely, they provided an analysis of attacker models and threats and identified 7 main challenges in the design and deployment of security mechanisms, including: identity and authentication, access control, protocol and network security, privacy, trust management, governance, and fault tolerance.</para>
<para>Additionally, a recent study [22] identified the security and privacy threats in the IoT at different network layers, including the major security vulnerabilities. In that paper the authors highlighted the main aspects of the IoT ecosystem, such as legacy systems running on these platforms, the large number of devices, dynamicity and their constrained nature, which give rise to new kinds of threats. Likewise, [25] reviewed IoT cybersecurity research, highlighting the data handling issues, standardization aspects and research trends when the IoT meets Cloud Computing and 5G technologies. Other research trends (fault tolerance mechanisms, self-management, IoT forensics, blockchain-embedded cybersecurity design) are also studied.</para>
<para>Besides, Backes et al. [24] identified the 8 most important challenges in IT security research: (1) Security for Autonomous Systems, (2) Security in Spite of Untrustworthy Components, (3) Security Commensurate with Risk, (4) Privacy for Big Data, (5) Economic Aspects of IT Security, (6) Behaviour-related and Human Aspects of IT Security, (7) Security of Cryptographic Systems against Powerful Attacks, and (8) Detection and Reaction.</para>
<para>The characterization presented herein covers most of those security research challenges, but from a different perspective; some of their research challenges (such as economic aspects) fall outside our main classification, as they are less central to it.</para>
<para>The main cybersecurity and privacy research challenges identified are described below. It should be noted that the order of the challenges bears no relation to their relative importance or impact.</para>
</section>

<section class="lev2" id="sec1-2-1">
<title>1.2.1 Main Cybersecurity Research Challenges</title>
<para><b>1. Interoperable and scalable security management in heterogeneous ecosystems</b></para>
<para>Security management in fragmented and heterogeneous domains is still an open research challenge nowadays. This issue is exacerbated in CPS/IoT deployments, which are comprised of heterogeneous and disparate kinds of devices, network protocols and systems. Security management requires a holistic approach to deal with new types of wireless network technologies (e.g. 5G), potentially constrained networks (e.g. LPWANs), and protocols and systems, and needs to face the management of large and scalable deployments in any network segment: RAN, Edge, Fog or Core.</para>
<para>The definition of security management policies to deal with heterogeneity and interoperability across domains, systems and networks, introduces several challenges related to the employed security models, the language and the level of abstraction required to govern the systems. In this regard, interoperability and contextual aspects in policies, particularities of managed systems domains, policy conflicts and resolution as well as dependencies in policies, are open research challenges that need to be solved. The policies should encompass not only security/privacy policies, but also QoS/SLA policies, network management policies (e.g. slicing, traffic filtering), operational and orchestration policies.</para>
<para><b>2. Autonomic security orchestration and enforcement in softwarized and virtualized IoT/CPS systems and mobile networks</b></para>

<itemizedlist mark="bullet" spacing="normal">
<listitem><i>Holistic security orchestration:</i> New autonomic and context-aware security orchestrators are needed, which can choreograph and quickly and dynamically enforce the proper defence mechanisms (proactively or as countermeasures), according to the circumstances, in SDN/NFV-enabled systems. The orchestration will need to face the challenge of interfacing with diverse, heterogeneous and distributed IoT controllers, NFV-MANO (Management and Orchestration) orchestrators, Fog-Edge entities and SDN controllers, thereby dynamically enforcing the security enablers in the networks/systems.</listitem>
<listitem><i>Virtualized and softwarized security management</i>: The current defences of network operators and companies are mainly based on hardware appliances. Naturally, these hardware appliances have a fixed location that must be chosen carefully by the ISP. They can be deployed on-premises or outsourced, and the packets/flows are redirected to them. Using the virtualization enabled by SDN and NFV allows a quick instantiation of VMs in the appropriate location. Indeed, the lack of elasticity can be easily handled by security Virtual Network Functions (VNFs) that can be chained and placed on demand according to the incoming attacks. However, it is challenging to manage the orchestration and placement of multiple VNFs on an NFV infrastructure at large scale, either at the core or at the edge of the network, while dealing with scalability and security issues and the additional threats that arise from using a virtualized environment.</listitem>
<listitem><i>Selection of the adequate mitigation plan:</i> Selecting the adequate mitigation plan and quickly enforcing the defined policies are challenging processes that require considerable effort and time. The orchestration and enforcement of the adequate countermeasures in a short time, without affecting the Quality of Service (QoS), introduce several challenges that must be duly considered. Also, the definition and enforcement of mitigation plans that reduce the deployment cost while taking into account the limitations of existing cloud infrastructures and the system/network status are open research questions that need to be addressed.</listitem>
<listitem><i>Lightweight security enablers and protocols for IoT/CPS systems:</i> Traditional security enablers and protocols, encompassing Authentication, Authorization and Accounting (AAA), channel protection protocols, network filtering, deep packet inspection, intrusion detection, etc., need to evolve and be adapted so that they can be enforced and managed properly in softwarized and virtualized networks (SDN/NFV) and CPS/IoT systems. In addition, these security enablers and protocols need to be redesigned to cope with the constrained nature of distributed IoT networks, which requires lightweight crypto-protocols and solutions that can be enforced on constrained (battery, memory, CPU) devices and networks.</listitem>
<listitem><i>Security in 5G mMTC and mobile networks:</i> 5G mMTC (massive Machine-Type Communications) is the key technology needed to scale up the Internet of Things (IoT). However, this 5G large-scale management and orchestration raises new cybersecurity threats which require novel security solutions, as analysed in [26]. 5G inherits vulnerabilities and threats from cloud computing, virtualization and SDN/NFV technologies. Thus, it is a research challenge to deal with information transmission management, secure communication channels, new security interfaces for AAA to handle Non-Access Stratum (NAS) signalling, and roaming security, and to cope with diverse network-based mobile security threats and attacks (e.g. saturation attacks, penetration attacks, identity theft, man-in-the-middle, scanning attacks, hijacking, DoS attacks and signalling storms).</listitem>
</itemizedlist>
<para><b>3. Cognitive detection and mitigation of evolving new kinds of cyber-threats</b></para>

<itemizedlist mark="bullet" spacing="normal">
<listitem><i>Dealing with evolving kinds of cyberattacks:</i> The identification of novel types of attacks not seen before (e.g. unknown zero-day attacks) that can exploit IoT networks and CPS, and the consequent protection approaches to provide advanced security against last-generation threats, is a key research challenge. These new kinds of attacks need to be addressed following a global approach, through both signature-based and anomaly-based detection techniques, using artificial intelligence and Big Data analysis approaches. In the cyber-physical world, the attacker&#8217;s goal is to disrupt both the normal operations of the CPS (e.g. sensor readings, safety limit violations, status reports, safety compliance violations, etc.) and the communication flows among devices. The continued rise of cyber-attacks, together with the evolving skills of the attackers and the inefficiency of traditional security algorithms in defending against advanced and sophisticated attacks such as DDoS, slow DoS and zero-day attacks, demands the development of novel defence and resilient detection techniques.</listitem>
<listitem><i>Monitoring in heterogeneous ICT systems</i>: Cybersecurity handling, especially in critical systems, Cyber Physical Systems and IoT networks, introduces challenges due to the restrictions and constrained nature of these kinds of devices and networks. New tools for network scanning (including encrypted traffic), digital forensics analysis and pen testing, as well as innovative algorithms and techniques (e.g. machine learning), are needed to perform security analysis.</listitem>
<listitem><i>Real-time incident detection and analysis:</i> Incident analysis should be supported by risk models that follow a multidimensional approach, performing evaluations of incidents that combine several factors (such as, for instance, incident severity, the criticality of the assets affected, the global risk associated with the incident or the cost of potential mitigations, among others) to dynamically decide, if needed, the most suitable mitigation plan to enforce. It should cover threat analysis, and the fusion and correlation of different types of events from different sources, to detect hidden relations and thus identify potential threats.</listitem>
<listitem><i>Cyber situational awareness, self-learning and dynamic reaction for self-healing, self-repair and self-protection capabilities:</i> Management and control systems, as well as autonomous systems such as drones, smart objects, self-driving cars and robots, will need to perform self-learning to make proper intelligent decisions based on the current real-time situation. However, those autonomous systems could be manipulated when sensing the external world, and therefore assessing the quality of the sensed environment is a challenge. In addition, upcoming cybersecurity frameworks and systems should face the challenge of countering cyberattacks dynamically according to contextual and evolving conditions, thereby providing self-healing, self-repair and self-protection capabilities. This will make it possible to diagnose threats, enforce the proper defence mechanisms and mitigate threats autonomously.</listitem>
<listitem><i>Cognitive big data analysis of systems/networks, services, social networks and cybersecurity intelligence information to counter cyberthreats</i>: To meet this challenge, an interdisciplinary approach should be followed, drawing on cognitive science, communications, computational linguistics, discourse processing, language studies and social psychology. Upcoming cybersecurity solutions should meet the challenge of combining diverse technologies, such as AI algorithms, Machine Learning (ML), Complex Event Processing (CEP), Social Network Analysis (SNA) and Natural Language Processing (NLP), to assess system data/events and the social features of communications used by terrorist organizations, in order to increase security levels and counter cyber-threats.</listitem>
</itemizedlist>
<para><b>4. Dynamic risk assessment and evaluation of cybersecurity, trustworthiness levels and legal compliance of ICT systems</b></para>
<para>New models are needed to quantify in real time, according to the context, the trustworthiness of new kinds of devices, systems and networks, compute the risk associated with an ICT system, and evaluate security and privacy legal compliance. Risk evaluation should be performed through an interdisciplinary approach including not only technological but also legal and socio-ethical perspectives. Relevant metrics need to be established for cybersecurity economic analysis and the cybersecurity and cybercrime markets. The risk evaluation should consider automated analysis for behavioural and social analysis, and for cybersecurity risk and cost assessment. In this regard, another challenge is to make this risk analysis usable and easily interpretable for administrators and stakeholders, through short- and long-term actions and recommendations.</para>
<para>Another related challenge is to keep users informed about the trustworthiness levels of their applications and services, according to multi-factor criteria encompassing sociocultural, legal, ethical, technological and business aspects, while paying due attention to the protection of Human Rights. Proper recommendations about the certification and labelling of ICT products and services should be automatically inferred, which will foster trust among the citizens that use them.</para>
<para><b>5. Digital forensics handling, security intelligence and incident information exchange</b></para>
<para>An important cybersecurity challenge is to improve the levels of collaboration between cooperative and regulatory approaches to information sharing, in order to enhance cybersecurity and mitigate the risk and impact of cyber-attacks. In this regard, new standards, models and protocols are needed to achieve interoperability for effective collaboration between operational teams, including Law Enforcement Agencies, CSIRTs and organizations, through the automated exchange of cybercrime data, including Open Source Intelligence (OSINT) data sources, thereby allowing an organization to share its own cyber-situational awareness information with external entities in an effective way. In addition, another challenge is to perform the automatic application and enforcement of data sharing in an interoperable manner that can feed incident analysis, which ultimately can help in cybersecurity decision making.</para>
<para><b>6. Cybersecurity and privacy tools for end-users and SMEs. The usability and human factor challenges</b></para>
<para>Individuals, SMEs, local administrators and related end-users are overwhelmed by the complexity of cybersecurity and privacy aspects, which obstructs proper decision making and digital technology usage. These kinds of users cannot dedicate enough effort and resources to invest in security personnel and cybersecurity products or services. User-friendly and automated unified cybersecurity tools need to be implemented targeting (potentially inexpert) final users, so that they can face cybersecurity threats and manage security configurations properly. The human factor is one of the biggest problems when it comes to security management, as it can easily generate new security gaps. Most cyber-attacks, such as ransomware, phishing, identity theft, etc., originate with the end-user. Thus, the human factor needs to be handled by cybersecurity frameworks and tools in order to increase system resilience against end-users&#8217; and operators&#8217; errors.</para>
</section>

<section class="lev2" id="sec1-2-2">
<title>1.2.2 Privacy and Trust Related Research Challenges</title>
<para><b>7. Reliable and privacy-preserving physical and virtual identity management</b></para>
<para>Identity management systems require new security and privacy mechanisms that can holistically manage users&#8217;/objects&#8217; privacy, ID-proofing techniques based on multiple biometrics, strong authentication and the usage of breeder documents (e.g. eID, ePassports), while ensuring privacy by default, unlinkability, anonymity, federation support, non-repudiation and self-sovereign identity management. The challenge is to manage these features properly for mobile, online or physical/face-to-face scenarios, while maintaining usability and compliance with regulations, e.g. the GDPR (General Data Protection Regulation) [GDPR] and eIDAS [21]. This will ultimately help to reduce identity theft and related cybercrimes.</para>
<para>In this context, another challenge arises from the extension of global identity management and AAA to <i>anything</i> deployments, efficiently managing the identities and access control of new kinds of autonomous systems, such as IoT smart objects, self-driving cars, robots, humanoids and drones, which require new, evolved algorithms, protocols and systems.</para>
<para><b>8. Efficient and secure cryptographic mechanisms to strengthen confidentiality and privacy</b></para>

<itemizedlist mark="bullet" spacing="normal">
<listitem><i>Confidentiality and privacy in distributed systems:</i> End-to-end encryption of shared data, in transit and at rest, while maintaining usability and efficiency on the end-user side, is an open research challenge that still needs to be addressed effectively to protect users&#8217; privacy. In this sense, new techniques, algorithms and protocols, e.g. those based on proxy re-encryption, are needed to reinforce security/privacy while outsourcing the computation to cloud wallets, to minimize users&#8217; risks in protecting crypto-material. In addition, new crypto-privacy techniques are needed to guarantee the authenticity of the data through novel signature schemes.</listitem>
<listitem><i>Data anonymization and secure data sharing</i>: All exchanged data should be encrypted, without intermediate entities such as proxies or cloud providers being able to access the user&#8217;s data. Data minimization and privacy-by-default properties need to be guaranteed, above all in emerging distributed deployments. Thus, novel crypto-privacy protocols, mechanisms and systems, such as those based on zero-knowledge proofs, are needed to ensure anonymity and minimal disclosure of personal information, above all in public clouds, ledgers and mobiles, while ensuring the user&#8217;s rights laid out in the GDPR.</listitem>
<listitem><i>Big data privacy:</i> Data analytics raises new concerns about privacy preservation, as the possible dynamic combination of large data sets coming from diverse sources can undermine the anonymity and pseudonymity properties that can be taken for granted in a single domain. This challenge is especially relevant in critical sectors (eHealth, eBanking) and in distributed systems that will handle massive amounts of user data, e.g. blockchains, ledgers and social networks. Therefore, new technologies to enforce efficient privacy protection are needed, alongside new collaborative privacy-assessment mechanisms.</listitem>
<listitem><i>Crypto-resilience to brute-force attacks:</i> Quantum computing technology is creating new risks and threats, as most current encryption and signature algorithms will not be fully secure against brute-force attacks perpetrated by quantum computers. In this sense, new cryptographic algorithms are needed that are resilient to brute-force attacks using quantum computing.</listitem>
</itemizedlist>
<para><b>9. Global trust management of eID and related services</b></para>
<para>There is a need for a global, trusted, open and scalable infrastructure where authorities can publish their trust information to certify trustworthy electronic identities, so that the rest of the stakeholders, including the public sector, private companies and citizens, can automatically verify trust in electronic transactions, while the complexity of dealing with heterogeneous formats and protocols remains hidden.</para>
<para>This challenging global trust system should deal with issues such as a unified data model, rights delegation, a trust policy language and claims discovery, to make the system interoperable and accessible to everyone, while facilitating, at the same time, the use of eID and electronic signature technology in real-world applications. This global trust management infrastructure should leverage the eIDAS trust scheme laid out in Regulation (EU) N&#176;910/2014 [21], extending the European Trust Service Status List (TSL) infrastructure towards a &#8220;Global Trust List&#8221;.</para>
<para><b>10. Privacy assessment, run-time evaluation of the quality of security and privacy risks</b> There is a need for evaluation tools and methods to assess whether an application or a service is compliant with privacy and personal data protection principles, as well as for quantitative and qualitative run-time evaluation of the quality of security and privacy risks.</para>
<para>In this sense, novel Dynamic Security and Privacy Seals (DSPS) are needed to increase trust in the system, by combining ISO and legal norms and security and privacy standards with deep technical monitoring integration, in order to provide a user-friendly, synthetic view of the overall system trustworthiness. In this regard, it is challenging to integrate and enhance the alerts generated by the underlying systems with direct technical and organizational feedback from the end-user. These novel kinds of seals would provide legally valid and non-repudiable proof of the system&#8217;s compliance with legal or contractual security and privacy requirements, which can be easily managed and visualized by the user.</para>
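<para>A minimal, hypothetical sketch of how such a dynamic seal could derive a compliance status from monitoring alerts is shown below. The status values, alert fields and threshold logic are illustrative assumptions, not the DSPS design:</para>

```python
# Illustrative sketch: a dynamic seal stays "valid" only while no
# unresolved high-severity alert contradicts the declared security and
# privacy requirements; otherwise it is suspended pending remediation.

def seal_status(alerts: list) -> str:
    """Derive the seal state from the current monitoring alerts."""
    unresolved = [a for a in alerts
                  if a["severity"] == "high" and not a["resolved"]]
    return "suspended" if unresolved else "valid"

alerts = [{"severity": "high", "resolved": True},
          {"severity": "low", "resolved": False}]
print(seal_status(alerts))  # valid
```

<para>The key property the chapter asks for, non-repudiable proof, would in practice require each status transition to be signed and timestamped, which this sketch omits.</para>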
</section>

<section class="lev1" id="sec1-3">
<title>1.3 H2020 Projects Facing the Challenges</title>
</section>

<section class="lev2" id="sec1-3-1">
<title>1.3.1 Cybersecurity Related Projects Addressing the Challenges</title>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>ANASTACIA</b> [5] (Chapter 02): ANASTACIA is researching, developing and demonstrating a holistic solution enabling trust and security by design for Cyber Physical Systems (CPS) based on IoT and Cloud architectures. The ANASTACIA cybersecurity framework provides self-protection, self-healing and self-repair capabilities through novel enablers and components. The framework dynamically orchestrates and deploys security policies and actions that can be instantiated on local agents. Thus, security is enforced in different kinds of devices and heterogeneous networks, e.g. IoT- or SDN/NFV-based networks. The framework has been designed in full compliance with SDN/NFV standards as specified by ETSI NFV and ONF SDN, respectively. Therefore, ANASTACIA is addressing challenges #1, #2, #3 and #4 enumerated in Section 1.2.1.</para></listitem>
<listitem><para><b>SAINT</b> [6] (Chapter 03): &#8220;SAINT analyses and identifies incentives to improve levels of collaboration between cooperative and regulatory approaches to information sharing. SAINT is designing new methodologies for the development of an ongoing and searchable public database of cybersecurity indicators and open source intelligence. Comparative analysis of cyber-crime victims and stakeholders within a framework of qualitative social science methodologies delivers valuable evidence and advances knowledge on privacy issues and deep web practices. SAINT defines innovative models, algorithms and an automated framework for cost-benefit analysis and estimation of tangible and intangible costs for optimal risk and investment incentives&#8221;. Thus, SAINT is mainly focusing on challenge #5 enumerated in Section 1.2.1.</para></listitem>
<listitem><para><b>FORTIKA</b> [7] (Chapter 04): &#8220;The project is designing and implementing a security &#8216;seal&#8217; specially devised for small and medium-sized companies that will strengthen trust and facilitate further adoption of digital technologies. The project is implementing robust, resilient and effective cybersecurity solutions that can be customized for each individual enterprise&#8217;s evolving needs and can also speedily adapt/respond to the changing cyber threat landscape&#8221;. Therefore, FORTIKA is mainly focusing on challenges #2 and #6 of those described in Section 1.2.1.</para></listitem>
<listitem><para><b>CYBECO</b> [8] (Chapter 05): &#8220;CYBECO focuses on two main aspects to deal with cyber-insurance from a Behavioural Choice Perspective: (1) including cyber threat behaviour through adversarial risk analysis to support insurance companies in estimating risks and setting premiums and (2) using behavioural experiments to improve IT owners&#8217; cybersecurity decisions. Therefore, CYBECO facilitates risk-based cybersecurity investments supporting insurers in their cyber offerings through a risk management modelling framework and tool.&#8221; Therefore, CYBECO is mainly focusing on challenge #4 of Section 1.2.1.</para></listitem>
<listitem><para><b>SISSDEN</b> [9] (Chapter 06): &#8220;SISSDEN is intended to improve cyber security through the development of situational awareness and the sharing of actionable information. The passive threat data collection mechanism is complemented by behavioural analysis of malware and multiple external data sources. Actionable information produced by SISSDEN provides no-cost victim notification and remediation via organizations such as CERTs, ISPs, hosting providers and LEAs such as EC3. The main goal of the project is the creation of multiple high-quality feeds of actionable security information that can be used for remediation purposes and for proactive tightening of computer defences. This is achieved through the development and deployment of a distributed sensor network based on state-of-the-art honeypot and darknet technologies, the creation of a high-throughput data processing centre, and the provisioning of in-depth analytics, metrics and reference datasets of the collected data.&#8221; Therefore, SISSDEN is mainly focusing on challenge #5 of Section 1.2.1.</para></listitem>
<listitem><para><b>CIPSEC</b> [10] (Chapter 07): &#8220;CIPSEC aims to create a unified security framework that orchestrates state-of-the-art heterogeneous security products to offer high levels of protection in IT (information technology) and OT (operational technology) departments of CIs, also offering a complete security ecosystem of additional services. These services include vulnerability tests and recommendations, key personnel training courses, public-private partnerships (PPPs), forensics analysis, standardization activities and analysis against cascading effects.&#8221; CIPSEC is mainly focusing on challenges #3, #4 and #5 of Section 1.2.1.</para></listitem>
<listitem><para><b>CS-AWARE</b> [11] (Chapter 08): CS-AWARE aims to increase the automation of cybersecurity awareness approaches, by collecting cybersecurity-relevant information from sources both inside and outside of monitored local public administration (LPA) systems, performing advanced big data analysis to set this information in context for detecting and classifying threats, and detecting relevant mitigation or prevention strategies. CS-AWARE aims to advance the function of a classical decision support system by enabling supervised system self-healing in cases where clear mitigation or prevention strategies for a specific threat can be detected. CS-AWARE is built around this concept and relies on cybersecurity information being shared by relevant authorities in order to enhance awareness capabilities. At the same time, CS-AWARE enables system operators to share incidents with relevant authorities to help protect the larger community from similar incidents. CS-AWARE is mainly focusing on challenge #5 of Section 1.2.1.</para></listitem>
<listitem><para><b>RED-Alert</b> [12] (Chapter 09): &#8220;RED-Alert has built a complete software toolkit to support LEAs in the fight against the use of social media by terrorist organizations for conducting online propaganda, fundraising, recruitment and mobilization of members, planning and coordination of actions, as well as data manipulation and misinformation. The project aims to cover a wide range of social media channels used by terrorist groups to disseminate their content, which will be analysed by the RED-Alert solution to support LEAs in taking coordinated action in real time, with the preservation of citizens&#8217; privacy as a primordial condition.&#8221; RED-Alert is mainly focusing on challenge #3 of Section 1.2.1.</para></listitem>
<listitem><para><b>Truessec.eu</b> [13] (Chapter 10): &#8220;The main goal of the TRUESSEC project is to foster trust and confidence in new and emerging ICT products and services throughout Europe by encouraging the use of assurance and certification processes that consider multidisciplinary aspects such as sociocultural, legal, ethical, technological and business ones, while paying due attention to the protection of Human Rights.&#8221; Therefore, TRUESSEC is mainly addressing challenge #4 of Section 1.2.1.</para></listitem>
</itemizedlist>
<table-wrap position="float" id="T1-1">
<label><link linkend="T1-1">Table <xref linkend="T1-1" remap="1.1"/></link></label>
<caption><para>Main cybersecurity research challenges and related EU projects</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr valign="top"><td>Challenge ID</td><td>Name</td><td>EU projects addressing the challenge</td></tr>
</thead>
<tbody>
<tr valign="top"><td>1</td><td>Interoperable and scalable security management in heterogeneous ecosystems</td><td>ANASTACIA</td></tr>
<tr valign="top"><td>2</td><td>Autonomic Security orchestration and enforcement in softwarized and virtualized IoT/CPS systems and mobile environments</td><td>ANASTACIA, FORTIKA, CIPSEC</td></tr>
<tr valign="top"><td>3</td><td>Cognitive detection and mitigation of evolving new kind of cyber-threats</td><td>ANASTACIA, CIPSEC, CS-AWARE, RED-ALERT</td></tr>
<tr valign="top"><td>4</td><td>Dynamic Risk assessment and evaluation of cybersecurity, trustworthiness levels, privacy and legal compliance of ICT systems</td><td>CYBECO, CIPSEC, TRUESSEC, ANASTACIA</td></tr>
<tr valign="top"><td>5</td><td>Digital Forensics handling, security intelligent and incident information exchange</td><td>SIESSDEN, SAINT, CIPSEC, CS-AWARE</td></tr>
<tr valign="top"><td>6</td><td>Cybersecurity and privacy tools for end-users and SMEs. The usability and human factor challenges</td><td>FORTIKA</td></tr>
</tbody>
</table>
</table-wrap>
<para>Table 1.1 recaps the main cybersecurity research challenges presented in Section 1.2.1 and links them with the EU projects, presented in this section, that address those challenges.</para>
</section>

<section class="lev2" id="sec1-3-2">
<title>1.3.2 H2020 Projects Addressing the Privacy and Trust Related Challenges</title>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>ARIES</b> [14] (Chapter 11): ARIES aims to set up a reliable identity ecosystem encompassing technologies, processes and security features that ensure the highest levels of quality in secure credentials for highly secure and privacy-respecting physical and virtual identity management processes, with the specific aim of tangibly reducing levels of identity fraud, theft, wrong identity and associated crimes. The ecosystem strengthens the link between physical documents tied to the biometric identity and the digital (online and mobile) identity.</para></listitem>
<listitem><para><b>LIGHTest</b> [15] (Chapter 12): The LIGHTest project aims to set up a global trust infrastructure where authorities can publish their trust information. Thus, member states can use the infrastructure to publish lists of qualified trust services, while private companies can establish trust in different sectors, such as inter-banking, international trade, shipping, business reputation and credit rating. Different entities can then query this trust information to verify trust in simple signed documents or multi-faceted complex transactions.</para></listitem>
<listitem><para><b>CREDENTIAL</b> [16] (Chapter 13): The CREDENTIAL project has developed a cloud-based service for identity provisioning and data sharing. On the one hand, it offers high confidentiality and privacy guarantees to the data owner; on the other hand, it offers high authenticity guarantees to the receiver. CREDENTIAL integrates advanced cryptographic mechanisms into standardized authentication protocols. The solution has demonstrated high user convenience, strong security and practical efficiency.</para></listitem>
<listitem><para><b>FutureTrust</b> [17] (Chapter 14): The FutureTrust project aims to develop a comprehensive Open Source validation service as well as a scalable preservation service for electronic signatures, and will provide components for the eID-based application for qualified certificates across borders and for the trustworthy creation of remote signatures and seals in a mobile environment. Furthermore, the FutureTrust project extends and generalizes existing trust management concepts to build a &#8220;Global Trust List&#8221;, which makes it possible to maintain trust anchors and metadata for trust services and eID-related services around the globe.</para></listitem>
<listitem><para><b>LEPS</b> [18] (Chapter 15): The LEPS project aims to &#8220;validate and facilitate the connectivity options to recently established eIDAS ecosystem, which provides this trusted environment with legal, organisational and technical guarantees already in place. Strategies have been devised to reduce SP implementation costs for this connectivity to eIDAS technical infrastructure&#8221;. The project has implemented, integrated and validated the solution in pilots in two EU countries.</para></listitem>
</itemizedlist>
<para>Table 1.2 summarizes the main privacy-related research challenges presented in Section 1.2.2 and links them with the EU projects, presented in this section, that address those challenges.</para>
<table-wrap position="float" id="T1-2">
<label><link linkend="T1-2">Table <xref linkend="T1-2" remap="1.2"/></link></label>
<caption><para>Main privacy-related research challenges and related EU projects</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr valign="top"><td>Challenge ID</td><td>Name</td><td>EU projects addressing the challenge</td></tr>
</thead>
<tbody>
<tr valign="top"><td>7</td><td>Reliable and privacy-preserving physical and virtual identity management</td><td>ARIES, LEPS</td></tr>
<tr valign="top"><td>8</td><td>Efficient and secure cryptographic mechanisms to strengthen confidentiality and privacy</td><td>CREDENTIAL</td></tr>
<tr valign="top"><td>9</td><td>Global trust management of eID and related services</td><td>LIGHTest, Future Trust</td></tr>
<tr valign="top"><td>10</td><td>Privacy assessment, run-time evaluation of the quality of security and privacy risks</td><td>ANASTACIA</td></tr>
</tbody>
</table>
</table-wrap>
</section>

<section class="lev1" id="sec1-4">
<title>1.4 Conclusion</title>
<para>This chapter has identified and introduced the 10 main cybersecurity and privacy research challenges presented and addressed in this book by 14 European research projects. Some of the challenges revolve around autonomic cybersecurity management, orchestration and enforcement in heterogeneous and virtualized CPS/IoT and mobile ecosystems. The challenges identified include the cognitive detection and mitigation of new and evolving kinds of cyber-threats; the dynamic risk assessment and evaluation of cybersecurity, trustworthiness levels, privacy and legal compliance of ICT systems; digital forensics handling; security intelligence and incident information exchange; and cybersecurity and privacy tools, with the associated usability and human-factor challenges. Regarding privacy and trust related challenges, we have identified four main global ones, encompassing reliable and privacy-preserving identity management, efficient and secure cryptographic mechanisms, global trust management, and privacy assessment.</para>
<para>In addition, the chapter has introduced the 14 EU projects analysed in the book and the main challenges they are addressing: ANASTACIA, SAINT, FORTIKA, CYBECO, SISSDEN, CIPSEC, CS-AWARE, RED-Alert, Truessec.eu, ARIES, LIGHTest, CREDENTIAL, FutureTrust and LEPS.</para>
<para>The rest of the book presents each of those 14 EU projects, each described in a separate book chapter. Each chapter includes the project&#8217;s overview and objectives, the particular challenges it covers, its research achievements on security and privacy, as well as the techniques, outcomes and evaluations accomplished in the scope of the EU project.</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This work has been supported by a postdoctoral INCIBE grant within the &#8220;Ayudas para la Excelencia de los Equipos de Investigaci&#243;n Avanzada en Ciberseguridad&#8221; Program, with Code INCIBEI-2015-27363. This book chapter has also received funding from the European Union&#8217;s Horizon 2020 research and innovation programme under grant agreement No. 700085 (ARIES project).</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Cybersecurity Strategy of the European Union: An Open, Safe and Secure Cyberspace. Joint communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. (2013). Available from: https://eeas.europa.eu/archives/docs/policies/eu-cyber-security/cybsec_comm_en.pdf</para>
<para>[2] H2020-EU.3.7. &#8211; Secure societies &#8211; Protecting freedom and security of Europe and its citizens. https://cordis.europa.eu/programme/rcn/664463/en</para>
<para>[3] Secure societies &#8211; Protecting freedom and security of Europe and its citizens. Last accessed 10/04/1 from: https://ec.europa.eu/programmes/horizon2020/en/h2020-section/secure-societies-%E2%80%93-protecting-freedom-and-security-europe-and-its-citizens</para>
<para>[4] European projects Clustering workshop On Cybersecurity and Privacy (ECoSP 2018) https://2018.ares-conference.eu/workshops/ecosp-2018/. held in conjunction with the 13th International Conference on Availability, Reliability and Security (ARES 2018 &#8211; http://www.ares-conference.eu)</para>
<para>[5] ANASTACIA (Advanced Networked Agents for Security and Trust Assessment in CPS / IOT Architectures) H2020 EU project, Grant Agreement No. 731558 http://anastacia-h2020.eu/</para>
<para>[6] SAINT (Systemic Analyzer In Network Threats) H2020 EU project, Grant Agreement No. 740829 https://project-saint.eu/</para>
<para>[7] FORTIKA (Cyber Security Accelerator for trusted SMEs IT Ecosystem) H2020 EU project, Grant Agreement No. 740690. http://fortika-project.eu/</para>
<para>[8] CYBECO (Supporting Cyberinsurance from a Behavioural Choice Perspective) H2020 EU project, Grant Agreement No. 740920. https://www.cybeco.eu/</para>
<para>[9] SISSDEN (Secure Information Sharing Sensor Delivery Event Network) H2020 EU project, grant Agreement No. 700176. https://sissden.eu/</para>
<para>[10] CIPSEC (Enhancing Critical Infrastructure Protection with innovative SECurity framework) H2020 EU project, Grant Agreement No. 700378 http://www.cipsec.eu/</para>
<para>[11] CS-AWARE (A cybersecurity situational awareness and information sharing solution for local public administrations based on advanced big data analysis) H2020 EU project, Grant Agreement No. 740723. https://cs-aware.eu/</para>
<para>[12] RED-Alert (Real-time Early Detection and Alert System) H2020 EU project, Grant Agreement No. 740688 http://redalertproject.eu/</para>
<para>[13] Truessec.eu (TRUst-Enhancing certified Solutions for SEcurity and protection of Citizens&#8217; rights in digital Europe) H2020 EU project, Grant Agreement No. 731711 http://truessec.eu/</para>
<para>[14] Aries (ReliAble euRopean Identity EcoSystem), H2020 EU Project Grant Agreement No. 700085 https://www.aries-project.eu/</para>
<para>[15] LIGHTest (Lightweight Infrastructure for Global Heterogeneous Trust management in support of an open Ecosystem of Stakeholders and Trust schemes), H2020 EU Project Grant Agreement No. 700321, https://www.lightest.eu/</para>
<para>[16] CREDENTIAL (Secure Cloud Identity Wallet), H2020 EU project, Grant Agreement No. 653454, https://credential.eu/</para>
<para>[17] FutureTrust (Future Trust Services for Trustworthy Global Transactions), H2020 EU project, Grant Agreement No. 700542 https://www.futuretrust.eu/</para>
<para>[18] LEPS (Leveraging eID in the Private Sector), European Union&#8217;s Connecting Europe Facility, Grant Agreement No. INEA/OEF/ICT/A2016/ 1271348. http://www.leps-project.eu/</para>
<para>[19] Roman, R., Zhou, J., &amp; Lopez, J. (2013). On the features and challenges of security and privacy in distributed internet of things. Computer Networks, 57(10), 2266&#8211;2279.</para>
<para>[20] Zou, Y., Zhu, J., Wang, X., &amp; Hanzo, L. (2016). A survey on wireless security: Technical challenges, recent advances, and future trends. <i>Proceedings of the IEEE</i>, 104(9), 1727&#8211;1765.</para>
<para>[21] European Parliament, &#8216;Regulation (EU) No. 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC&#8217;, European Parliament, Brussels, Belgium, Regulation 910/2014, 2014.</para>
<para>[22] Ziegler, S., Crettaz, C., Kim, E., Skarmeta, A., Bernabe, J. B., Trapero, R., &amp; Bianchi, S. (2019). Privacy and Security Threats on the Internet of Things. In <i>Internet of Things Security and Data Protection</i> (pp. 9&#8211;43). Springer, Cham.</para>
<para>[23] &#8220;Study on global megatrends in cybersecurity&#8221;, Ponemon Institute research report, February 2018.</para>
<para>[24] Backes, M., Buxmann, P., Eckert, C., Holz, T., M&#252;ller-Quade, J., Raabe, O., &amp; Waidner, M. (2016). <i>Key Challenges in IT Security Research</i>. Discussion Paper for the Dialogue on IT Security 2016, SecUnity, https://it-security-map.eu.</para>
<para>[25] Lu, Y., &amp; Da Xu, L. (2018). Internet of Things (IoT) cybersecurity research: A review of current research topics. <i>IEEE Internet of Things Journal</i>.</para>
<para>[26] Ahmad, I., Kumar, T., Liyanage, M., Okwuibe, J., Ylianttila, M., &amp; Gurtov, A. (2017, September). 5G security: Analysis of threats and solutions. In <i>2017 IEEE Conference on Standards for Communications and Networking (CSCN)</i> (pp. 193&#8211;199). IEEE.</para>
</section>
</chapter>

<chapter class="chapter" id="ch02" label="2" xreflabel="2">
<title>Key Innovations in ANASTACIA: Advanced Networked Agents for Security and Trust Assessment in CPS/IOT Architectures</title>
<para><b>Jorge Bernal Bernabe<sup>1</sup>, Alejandro Molina<sup>1</sup>, Antonio Skarmeta<sup>1</sup>, Stefano Bianchi<sup>2</sup>, Enrico Cambiaso<sup>3</sup>, Ivan Vaccari<sup>3</sup>, Silvia Scaglione<sup>3</sup>, Maurizio Aiello<sup>3</sup>, Rub&#233;n Trapero<sup>4</sup>, Mathieu Bouet<sup>5</sup>, Dallal Belabed<sup>5</sup>, Miloud Bagaa<sup>6</sup>, Rami Addad<sup>6</sup>, Tarik Taleb<sup>6</sup>, Diego Rivera<sup>7</sup></b>, <b>Alie El-Din Mady<sup>8</sup>, Adrian Quesada Rodriguez<sup>9</sup>, C&#233;dric Crettaz<sup>9</sup>, Sebastien Ziegler<sup>9</sup>, Eunah Kim<sup>10</sup>, Matteo Filipponi<sup>10</sup>, Bojana Bajic<sup>11</sup>, Dan Garcia-Carrillo<sup>12</sup> and Rafael Marin-Perez<sup>12</sup></b></para>
<para><sup>1</sup>Department of Information and Communications Engineering, University of Murcia, Murcia, Spain</para>
<para><sup>2</sup>Research &amp; Innovation Department, SOFTECO SISMAT SRL, Di Francia 1 &#8211; WTC Tower, 16149, Genoa, Italy</para>
<para><sup>3</sup>National Research Council (CNR-IEIIT) &#8211; Via De Marini 6 &#8211; 16149 Genoa, Italy</para>
<para><sup>4</sup>Atos Research and Innovation, Atos, Calle Albarracin 25, Madrid, Spain</para>
<para><sup>5</sup>THALES Communications &amp; Security SAS, Gennevilliers, France</para>
<para><sup>6</sup>Department of Communications and Networking, School of Electrical Engineering, Aalto University, Finland</para>
<para><sup>7</sup>R&amp;D Department, Montimage, 75013, Paris, France</para>
<para><sup>8</sup>United Technologies Research Center, Ireland</para>
<para><sup>9</sup>Mandat International, Research, Switzerland</para>
<para><sup>10</sup>Device Gateway SA, Research and Development, Switzerland</para>
<para><sup>11</sup>Archimede Solutions, Geneva, Switzerland</para>
<para><sup>12</sup>Department of Research &amp; Innovation, Odin Solutions, Murcia, Spain E-mail: jorgebernal@um.es; alejandro.mzarca@um.es; skarmeta@um.es; stefano.bianchi@softeco.it; enrico.cambiaso@ieiit.cnr.it; ivan.vaccari@ieiit.cnr.it; silvia.scaglione@ieiit.cnr.it; maurizio.mongelli@ieiit.cnr.it; ruben.trapero@atos.net; mathieu.bouet@thalesgroup.com; dallal.belabed@thalesgroup.com; miloud.bagaa@aalto.fi; rami.addad@aalto.fi; tarik.taleb@aalto.fi; diego.rivera@montimage.com; madyaa@utrc.utc.com; aquesada@mandint.org; ccrettaz@mandint.org; sziegler@mandint.org; eunah.kim@devicegateway.com; mfilipponi@devicegateway.com; bbajic@archimede.ch; dgarcia@odins.es; rmarin@odins.es</para>
<para>This book chapter presents the main key innovations being devised, implemented and validated in the scope of the ANASTACIA H2020 EU research project, to meet the cybersecurity challenge of dynamically protecting heterogeneous IoT scenarios, endowed with SDN/NFV capabilities, which face evolving kinds of cyber-attacks. The key innovations encompass, among others, policy-based security management in IoT networks; trusted and dynamic security orchestration of virtual network security functions using SDN/NFV technologies; security monitoring and cognitive reaction to counter cyber-threats; behavioural analysis, anomaly detection and automated testing for the detection of known and unknown vulnerabilities in both physical and virtual environments; as well as a secured and authenticated dynamic seal system as a service.</para>


<section class="lev1" id="sec2-1">
<title>2.1 Introduction</title>
<para>The Internet of Things (IoT) aims to leverage the network capabilities of devices and smart objects, integrating sensing and actuation features to create pervasive information systems, which are used as a baseline to provide smart services to industry and citizens. However, as a greater number of constrained IoT devices are connected to the Internet, the security and privacy risks increase accordingly. The boosted connectivity, the constrained capabilities of devices in terms of memory, CPU and battery, the unattended behaviour of IoT devices, misconfigurations and the lack of vendor support increase the range of potential vulnerabilities. Therefore, new advanced security frameworks for IoT deployments are needed to face these threats and dynamically meet the desired defence levels.</para>
<para>The H2020 ANASTACIA EU project addresses the security management of heterogeneous and distributed IoT scenarios, such as Smart Buildings or Smart Cities, which can benefit from a policy-based orchestration and security management approach, where NFV/SDN-based solutions and novel monitoring and reaction tools are combined to deal with new and evolving kinds of cyber-attacks.</para>
<para>ANASTACIA is developing new methodologies, frameworks and support tools that will make distributed smart IoT systems and Mobile Edge Computing (MEC) scenarios resilient against cyber-attacks, by leveraging SDN and NFV technologies. Security VNFs can be timely and dynamically orchestrated through policies to deal with the heterogeneity demanded by these distributed IoT deployments; they can be deployed either at the core or at the edge, in VNF entities, to rule the security in IoT networks. Dynamic and reactive provisioning of security VNFs towards the edge of the network can enhance scalability, which is necessary to deal with IoT scenarios.</para>
<para>The primary objective of the ANASTACIA project is to address cybersecurity concerns by researching, developing and demonstrating a holistic solution enabling trust and security by-design for Cyber Physical Systems (CPS) based on Internet of Things (IoT) and Cloud architectures.</para>
<para>The heterogeneous, distributed and dynamically evolving nature of CPS based on IoT and virtualised cloud architectures introduces new and unexpected risks that can be only partially solved by current state-of-the-art security solutions. Innovative paradigms and methods are required i) to build security into the ICT system at the outset, ii) to adapt to changing security conditions, iii) to reduce the need to fix flaws after deploying the system, and iv) to provide the assurance that the ICT system is secure and trustworthy at all times. ANASTACIA is thus developing, integrating and validating a security and privacy framework that will be able to take autonomous decisions through the use of new networking technologies such as Software Defined Networking (SDN) and Network Function Virtualisation (NFV) and intelligent and dynamic security enforcement and monitoring methodologies and tools.</para>
<para>Dealing with this general ambition and scenario raises several research challenges, which are being addressed in ANASTACIA:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Interoperable and scalable IoT security management: dealing with the level of abstraction, the language and new security models, contextual IoT aspects in policies, particularities in IoT security models, policy conflicts and dependencies in orchestration policies.</para></listitem>
<listitem><para>Optimal selection of SDN/NFV-based security mechanisms: allocating multiple VNF requests on an NFV infrastructure, especially under a cost-driven objective.</para></listitem>
<listitem><para>Orchestration of SDN/NFV-based security solutions for IoT environments: the selection of the adequate mitigation plan and the fast enforcement of the defined policies, as well as orchestration and the enforcement of the adequate countermeasures in a short time.</para></listitem>
<listitem><para>Dealing with new kinds of cyber-attacks in IoT: providing advanced security against last-generation threats in IoT environments.</para></listitem>
<listitem><para>Learning decision model for detecting malicious activities: the development of novel defence and resilient detection techniques.</para></listitem>
<listitem><para>Hybrid security monitoring for IoT enhanced with event correlation: The application of both signature-based and behavioural-based security analysis for IoT.</para></listitem>
<listitem><para>Quantitative evaluation of incidents for mitigation support: combining several factors to evaluate incidents and decide on the most convenient mitigation plan to enforce.</para></listitem>
<listitem><para>Construction of a dynamic security and privacy seal that secures both organizational and technical data: generate trust by considering technical insights on security and privacy personal data protection requirements.</para></listitem>
</itemizedlist>
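<para>The quantitative-evaluation challenge above can be illustrated with a minimal weighted-scoring sketch. The factor names, weights and normalization are hypothetical assumptions for illustration, not ANASTACIA&#8217;s actual evaluation model:</para>

```python
# Illustrative sketch: several normalized (0..1) incident factors are
# combined into a single severity score, which could then be used to
# rank candidate mitigation plans. Weights are hypothetical.

WEIGHTS = {"asset_criticality": 0.5, "exploitability": 0.3, "exposure": 0.2}

def incident_score(factors: dict) -> float:
    """Weighted linear combination of normalized incident factors."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

incident = {"asset_criticality": 0.9, "exploitability": 0.6, "exposure": 0.4}
print(round(incident_score(incident), 2))  # 0.71
```

<para>A real framework would also weigh the cost and side effects of each candidate countermeasure, but even this linear form shows how heterogeneous factors can be collapsed into one comparable figure.</para>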
<para>This chapter describes the main key innovations being devised, implemented and evaluated in the scope of ANASTACIA to cope with the aforementioned security challenges in IoT scenarios.</para>
</section>

<section class="lev1" id="sec2-2">
<title>2.2 The Anastacia Approach</title>
</section>

<section class="lev2" id="sec2-2-1">
<title>2.2.1 Anastacia Architecture Overview</title>
<para>The NIST Cybersecurity Framework identifies five functions for the protection of critical infrastructures: Identify, Protect, Detect, Respond and Recover. In general, these five functions are supported by the retrieval and management of security information extracted from the infrastructure to protect. On top of the five functions of the NIST Cybersecurity Framework, we can overlap the four main activities of the data lifecycle in ICT infrastructures for security protection, namely data acquisition, data dissemination, data consumption and data processing. Data acquisition comprises the components and mechanisms to retrieve relevant data from the infrastructure, such as logs, heartbeats or reports. Data dissemination concerns the elements that allow the acquired data to be distributed or stored among the relevant components of the infrastructure, such as monitoring agents and document or software repositories. Data consumption refers to the components involved in the usage of such data, whether for correlation, pattern finding for incident detection, or forensic analysis. Finally, data processing carries out activities based on the results obtained by the data consumers, such as mitigation actions to react to the incidents detected, their enforcement, or the creation of security and privacy seals that report the security and privacy level of the platform.</para>
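<para>The four data lifecycle activities described above can be sketched as a toy pipeline. This is illustrative Python, not project code; the event fields and the correlation rule are invented for the example:</para>

```python
# Toy pipeline mirroring the four lifecycle activities:
# acquisition -> dissemination -> consumption -> processing.

def acquire() -> list:
    """Data acquisition: collect raw logs/heartbeats from the infrastructure."""
    return [{"source": "iot-gw-1", "event": "auth_failure", "count": 12},
            {"source": "iot-gw-2", "event": "heartbeat", "count": 1}]

def disseminate(events):
    """Data dissemination: route acquired events to monitoring consumers."""
    yield from events

def consume(events) -> list:
    """Data consumption: correlate events into security incidents."""
    return [e for e in events
            if e["event"] == "auth_failure" and e["count"] > 10]

def process(incidents) -> list:
    """Data processing: derive mitigation actions from detected incidents."""
    return [f"isolate {i['source']}" for i in incidents]

actions = process(consume(disseminate(acquire())))
print(actions)  # ['isolate iot-gw-1']
```

<para>Each stage maps to a distinct set of components in the architecture (probes, monitoring agents, correlation engines, reaction modules), which is why the chapter treats them as separate planes rather than one monolithic loop.</para>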
<para>The ANASTACIA approach is based on the flow and management of data gathered from IoT infrastructures. Following the aforementioned model, ANASTACIA designs and uses proper mechanisms to retrieve information from the underlying infrastructure, together with accurate ways to interpret it, in order to know the real status of the infrastructure and to react automatically to incidents. ANASTACIA relies on the concept of automation for the dynamic protection against security incidents, considering the cycle depicted in <link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link>: identifying sources of relevant security information, deploying security probes for the protection of IoT infrastructures, detecting security incidents, and responding to them by generating security alerts that are used to enforce mitigation actions to recover from the detected incidents.</para>
<para>To this end, ANASTACIA has designed a plane-based architecture [1] in which information flows from data acquisition in the IoT infrastructure, through its dissemination and consumption by the monitoring infrastructure, to data processing by the reaction module, which decides on the mitigations to enforce. <link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link> represents the plane-based approach of ANASTACIA. On top of the data plane, which represents the data to be obtained from the IoT infrastructure, and the control plane, which represents the elements (software-defined networks or virtual network functions) that interact with the IoT infrastructure, sit: (i) the enforcement plane, which uses the control plane to obtain monitoring data from the infrastructure; (ii) the monitoring and reaction plane, which correlates the monitoring data to detect incidents and propose reactions to mitigate them; and (iii) the security orchestration plane, which enforces the reactions using the enforcement plane. On top of these, the seal management plane uses monitoring data and reactions to provide a snapshot of the security and privacy level of the infrastructure, and the user plane provides interaction with human administrators for the establishment of security policies.</para>
<fig id="F2-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-1">Figure <xref linkend="F2-1" remap="2.1"/></link></label>
<caption><para>Main stages of ANASTACIA framework.</para></caption>
<graphic xlink:href="graphics/ch002_fig001.jpg"/>
</fig>
<fig id="F2-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-2">Figure <xref linkend="F2-2" remap="2.2"/></link></label>
<caption><para>Anastacia high-level architectural view.</para></caption>
<graphic xlink:href="graphics/ch002_fig002.jpg"/>
</fig>
</section>

<section class="lev1" id="sec2-3">
<title>2.3 Anastacia Main Innovation Processes</title>
</section>

<section class="lev2" id="sec2-3-1">
<title>2.3.1 Holistic Policy-based Security Management and Orchestration in IoT</title>
<para>In distributed smart IoT deployment scenarios like those previously described, system security management is crucial. It is important to highlight that, on top of the diversity of current systems and services, a vast number of different devices is added in the IoT domain, and these devices differ considerably both from traditional systems and among themselves. From this point of view, the current state of the art shows that it is highly valuable to provide security policies at different levels of abstraction for different management profiles. It is also important to highlight the difference between generic models and specific extensible models, as well as to remark the relevance of policy orchestration features and policy conflict detection. ANASTACIA&#8217;s main contributions on policies reside in the unification of relevant, new and extended capability-based security policy models (including ECA features), as well as policy orchestration and conflict detection mechanisms, all under a single policy framework. To this aim, the holistic policy-based solution provides different components and features such as <b>Policy Models</b>, <b>Policy Editor Tool</b>, <b>Policy Repository</b>, <b>Policy Interpreter</b>, <b>Policy Conflict Detection</b> and <b>Policy for Orchestration</b>.</para>
<para>ANASTACIA&#8217;s <b>Policy Models</b> thus improve the current state of the art and provide novel approaches to increase the security measures and countermeasures of the whole system at different levels. To this aim, ANASTACIA adopts and extends concepts and features from the state of the art to provide a unified security policy framework; that is, ANASTACIA builds on and evolves previous works by extending already existing features as well as by providing new IoT-focused features.</para>
<para>The Policy Models can be instantiated using the <b>Policy Editor Tool</b>, which allows defining security policies at a high level of abstraction through a friendly GUI. In this way, the security administrator is able to manage the security of the system by instantiating new security policies, as well as to supervise the existing ones through the Policy Repository. The <b>Policy Repository</b> registers all policy operations as well as the current status of each one. It also provides valuable policy templates that make security management easier.</para>
<para>Since security policies are instantiated in a High-level Security Policy Language (HSPL), they must be transformed into configurations for the specific devices that will enforce them. To this aim, the <b>Policy Interpreter</b> refines the HSPL into one or several Medium-level Security Policy Language (MSPL) policies, depending on a set of identified capabilities (filtering, forwarding, etc.). This process transforms the high-level concepts into more detailed parameters that are still independent of specific technologies. Finally, these MSPL policies are translated into final configurations using specific translator plugins for each technology. Once the configurations have been obtained, they can be enforced in the specific security enablers, a security enabler being a piece of hardware or software able to implement a specific capability. Of course, a security policy can only be enforced if it does not present any kind of conflict with the already enforced ones. In this sense, the <b>Policy Conflict Detection</b> engine verifies that a new security policy will not generate conflicts such as redundancies, priority clashes, duty conflicts (e.g. packet inspection vs. channel protection), dependencies or contradictions. To this aim, the security policy is processed by the rule engine, which extracts context information from the policy repository and the system model to perform the necessary verifications.</para>
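The refinement chain just described can be sketched as follows. This is a minimal illustration only: the class layout, the capability vocabulary and the iptables-style plugin are hypothetical stand-ins, not the actual ANASTACIA interfaces or policy languages.

```python
# Hypothetical sketch of the HSPL -> MSPL -> enabler-configuration
# refinement chain. Names and the capability vocabulary are illustrative.
from dataclasses import dataclass

@dataclass
class HSPLPolicy:
    subject: str      # high-level subject, e.g. a sensor subnet
    action: str       # high-level intent, e.g. "no_access"
    target: str       # high-level target, e.g. external networks

@dataclass
class MSPLPolicy:
    capability: str   # identified capability, e.g. "filtering"
    parameters: dict  # technology-independent parameters

def refine_hspl(hspl: HSPLPolicy) -> list[MSPLPolicy]:
    """Refine one high-level policy into capability-specific MSPL policies."""
    if hspl.action == "no_access":
        return [MSPLPolicy("filtering", {"src": hspl.subject,
                                         "dst": hspl.target,
                                         "decision": "drop"})]
    raise ValueError(f"no refinement rule for action {hspl.action!r}")

def translate_mspl(mspl: MSPLPolicy, enabler: str) -> str:
    """Translate an MSPL policy into a concrete enabler configuration
    through a per-technology plugin (here, a toy iptables-like plugin)."""
    if enabler == "iptables" and mspl.capability == "filtering":
        p = mspl.parameters
        return f"-A FORWARD -s {p['src']} -d {p['dst']} -j DROP"
    raise ValueError(f"no plugin for {enabler}/{mspl.capability}")
```

For instance, refining a "no access" HSPL for a sensor subnet yields one filtering MSPL, which the iptables plugin then turns into a drop rule; a different plugin could turn the very same MSPL into an SDN flow rule instead.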
<para>Regarding dependencies, ANASTACIA also includes the Policy for Orchestration concept as part of the policy model. The <b>Policy for Orchestration</b> model allows the security administrator to specify how a set of security policies must be enforced by defining priorities and dependencies, where a security policy can depend on other security policies or even on system events such as an authentication success.</para>
<para>Through these components and features, the policy-based ANASTACIA framework aims to cope with research challenges related to interoperability and scalability in IoT security management. That is, the policy-based approach deals with heterogeneity by defining different levels of abstraction, models and translation plugins. Scalability also benefits, since a policy-based approach with a high level of abstraction makes it easier to manage a large number of devices. The policy conflict detection allows the framework to deal with several conflict types and, finally, the policy for orchestration considers policy chaining by priority or dependencies to cover an orchestration plan.</para>
<para>Currently, the project is validating the related components and features by experimenting on IoT/SDN/NFV proofs of concept for different security capabilities such as authentication, authorization and accounting (AAA), filtering, IoT management, IoT honeynets and channel protection, as can be seen in the research outcomes.</para>
<para>Regarding the research outcomes and associated publications, [2] provides a first PoC performance evaluation focused on sensor isolation through different SDN controllers as well as a traditional firewall approach. [3] shows the potential of the policy-based framework in an AAA scenario. The paper entitled &#8220;Virtual IoT HoneyNets to mitigate cyberattacks in SDN/NFV-enabled IoT networks&#8221; shows the on-demand dynamic deployment of IoT-honeynet networks that replicate real IoT environments by instantiating the ANASTACIA IoT-honeynet policy model, and provides performance figures for different kinds of IoT devices and topologies. In [1], the authors present the architecture focusing on the reaction performance of the policy-based framework.</para>
</section>

<section class="lev2" id="sec2-3-2">
<title>2.3.2 Investigation on Innovative Cyber-threats</title>
<para>The CNR team involved in ANASTACIA has multi-year experience in the cyber-security field, concerning both the development of innovative cyberattacks and of intrusion detection algorithms. By exploiting this expertise, extensive work has been accomplished in the ANASTACIA context, leading to the identification of two innovative threats, related to the IoT and Slow DoS Attacks contexts. The novelty of these threats is demonstrated by their acceptance by the research community [4, 5]. In the following, basing our description on the published works just mentioned and on the descriptions reported in the project deliverables, the new attacks are briefly described (how they work and how it is possible to protect against them).</para>
</section>

<section class="lev3" id="sec2-3-2-1">
<title>2.3.2.1 IoT 0-day attack</title>
<para>Since the information exchanged is extremely sensitive, due to the nature of IoT devices and networks, the security of IoT systems is a topic to be investigated in depth. The work behind the proposed attack goes in this direction, by investigating the domotic IoT context and its components to identify weaknesses that attackers may exploit.</para>
<para>The proposed attack belongs to the ZigBee security context. ZigBee is a wireless standard introduced by the ZigBee Alliance in 2004 and based on the IEEE 802.15.4 standard, used in the Wireless Personal Area Network (WPAN) context [6]. In particular, we identified a vulnerability affecting the AT command capabilities implemented in IoT sensor networks. Our work focuses on the exploitation of this weakness on XBee devices supporting remote AT commands, which can be abused to disconnect an end-device from the ZigBee network, make it join a different (malicious) network and hence forward potentially sensitive data to malicious third parties. Given the nature of IoT end-devices, often associated with critical data and operations, it is clear that a remote AT command attack represents a serious threat for the entire infrastructure. An early evaluation of the effects of the proposed attack on a real network validated the success of the proposed threat [4]; the obtained results prove the efficacy of the attack.</para>
<para>Moreover, since the attacker sends just a single packet to the victim to reconfigure it, the proposed attack should be considered as dangerous as it is scalable. In particular, the time required to send such a packet is minimal, so even in the case of multiple targeted sensors the attack success is guaranteed.</para>
<para>By adopting an external-level protection approach [4], the protection system is deployed directly on the nodes: agents implemented on the IoT devices are responsible for monitoring the device status and verifying that all parameters are correct. In case a device is affected by a remote AT reconfiguration command attack, the alert information is forwarded to the IoT coordinator, and the device is designed to mitigate the attack by autonomously reconfiguring itself, as previously described. Since not all devices may embed a detection and mitigation system, the IoT coordinator also monitors device status periodically to identify disconnections and report them to the other ANASTACIA modules.</para>
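The device-side watchdog cycle just described can be sketched as follows. This is an illustration only: the expected PAN ID and the radio-access callbacks are hypothetical stand-ins for the real device API, which on an XBee module would go through local AT commands.

```python
# Illustrative sketch of a device-side agent: it checks that the node's
# network parameters (here, only the ZigBee PAN ID) match the expected
# configuration, restores them if a remote AT reconfiguration is
# detected, and reports the event to the coordinator.
EXPECTED_PAN_ID = 0x1A2B  # illustrative value

def check_and_restore(read_pan_id, write_pan_id, alert_coordinator):
    """Run one watchdog cycle; return True when tampering was detected.

    read_pan_id/write_pan_id/alert_coordinator are callbacks abstracting
    the radio API and the uplink to the IoT coordinator."""
    current = read_pan_id()
    if current == EXPECTED_PAN_ID:
        return False
    # Parameters were tampered with: self-heal first, then alert.
    write_pan_id(EXPECTED_PAN_ID)
    alert_coordinator({"event": "remote-at-reconfiguration",
                       "observed_pan_id": hex(current)})
    return True
```

In a deployment this cycle would run periodically on each agent-equipped node, while the coordinator's own periodic status polling covers the devices that cannot embed such an agent.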
</section>

<section class="lev3" id="sec2-3-2-2">
<title>2.3.2.2 Slow DoS attacks</title>
<para>Among all the methodologies used to execute malicious cyberoperations, denial of service (DoS) attacks aim to exhaust the victim&#8217;s resources, compromising the targeted systems&#8217; availability and reliability for legitimate users. These threats are particularly dangerous, since they can cause significant disruption on network-based systems [7]. The term Slow DoS Attack (SDA), coined by the CNR research group involved in the project, denotes a DoS attack that makes use of a low-bandwidth rate to accomplish its purpose. An SDA often acts at the application layer of the Internet protocol stack, because the characteristics of this layer are easier to exploit to successfully attack a victim even by sending it a few bytes of malicious requests [8]. Moreover, under an SDA, an ON-OFF behaviour may be adopted by the attacker [9], consisting of a succession of consecutive periods composed of an interval of inactivity (called off-time) followed by an interval of activity (called on-time).</para>
<para>The innovative attack proposed is called SlowComm. It sends a large number of slow (and endless) requests to the server, saturating the available connections at the application layer and inducing the server to wait for the (never sent) completion of the requests. As an example, consider the HTTP protocol, where the character sequence \r\n\r\n represents the end of a request: SlowComm never sends these characters, hence forcing the server into an endless wait. Additionally, during a SlowComm attack the request payload is sent abnormally slowly. A similar behaviour can be adopted for other protocols as well (SMTP, FTP, etc.). As a consequence, by applying this behaviour to a large number of connections with the victim, a DoS may be reached. In particular, SlowComm works by creating a set of predefined connections with the victim host. For each connection, a specific payload message (typically endless) is sent one character at a time (one single character per packet), making use of the Wait Timeout [9] to delay the sending. In this way, once the connection is established with the server at the transport layer, a single character is sent, thereby establishing and seizing the connection at the application layer with the listening daemon. At this point, the Wait Timeout is triggered to delay the sending of the remaining payload and to prevent server-side connection closures. During our work we proved how the attack may successfully lead to a DoS on different popular TCP-based services [4], hence proving that the attack is particularly dangerous.</para>
<para>To protect against SlowComm and Slow DoS Attacks in general, it is important to consider the following fact: <i>it is trivial to detect and mitigate a single attacking host, while it is extremely difficult to identify a distributed attack</i>. This derives from the fact that IP address filtering may be applied to detect and mitigate a SlowComm attack (see, for instance, our tests on mod-security [4]), while in the case of a distributed attack this approach cannot be adopted with ease. Moreover, from the stealth perspective, the proposed attack is particularly difficult to detect while it is active, since log files on the server are often updated only when a complete request is received or a connection is closed: our requests being typically endless, during the attack the log files do not contain any trace of it. Therefore, different approaches should be adopted, for instance based on statistics [10], machine learning [6, 11, 12] or spectral analysis [13]. A possible approach combines the algorithm proposed in [10] and the methodology proposed in [14] to detect running SlowComm attacks. Early versions of the algorithms have been tested in the laboratory, while testing in relevant environments has not been accomplished to date. Concerning the ANASTACIA platform, further work on the topic will focus on evaluating a possible implementation of this approach, aimed at protecting against Slow DoS Attacks by embedding innovative anomaly-based intrusion detection algorithms in a relevant environment and providing additional capabilities to the ANASTACIA framework, in the context of cyber-security applied to counter last-generation threats.</para>
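The statistics-based detection intuition can be sketched as follows. This is not the algorithm of [10] or [14], only a minimal illustration of the underlying signal: legitimate connections either finish quickly or carry payload at a reasonable rate, while SDA connections stay open for a long time at a near-zero byte rate. The thresholds are illustrative.

```python
# Toy statistics-based intuition for Slow DoS detection: flag
# long-lived connections whose average byte rate is implausibly low,
# and track the share of flagged connections as a distributed-attack
# indicator. Thresholds are illustrative, not tuned values.
LONG_LIVED_S = 60.0   # connection age considered "long-lived"
MIN_RATE_BPS = 10.0   # minimum plausible legitimate byte rate

def looks_like_slow_dos(age_seconds: float, bytes_received: int) -> bool:
    """Flag one connection as a potential SDA connection."""
    if age_seconds < LONG_LIVED_S:
        return False
    return bytes_received / age_seconds < MIN_RATE_BPS

def suspicious_share(connections) -> float:
    """Fraction of tracked (age, bytes) connections flagged as
    suspicious; a spike suggests an ongoing (possibly distributed) SDA."""
    flags = [looks_like_slow_dos(age, nbytes) for age, nbytes in connections]
    return sum(flags) / len(flags) if flags else 0.0
```

Because the suspicious share is computed over all open connections rather than per source IP, it remains informative even when the attack is distributed across many low-rate hosts.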
</section>

<section class="lev2" id="sec2-3-3">
<title>2.3.3 Trusted Security Orchestration in SDN/NFV-enabled IoT Scenarios</title>
<para>In the ANASTACIA architecture, the security orchestrator is in charge of orchestrating the security enablers according to the defined security policies. The latter are generated either by the end-user or received from the monitoring and reaction plane. The security orchestration plane, through its components (security orchestrator, security resource planning and policy interpreter), is able to coordinate the policies and security enablers to cover the security configuration needed for the different communications happening in the network. The security orchestration plane takes into account the policy requirements and the available resources in the underlying infrastructure to mitigate the different attacks while reducing the expected mitigation cost and without affecting the QoS requirements of the different verticals. The resources in the underlying infrastructure refer to the available amounts of CPU, RAM and storage in the different cloud providers, as well as the communication bandwidth between these network clouds.</para>
<para><link linkend="F2-3">Figure <xref linkend="F2-3" remap="2.3"/></link> depicts the main architecture of the security orchestration and enforcement plane suggested in ANASTACIA. Through an SDN network, the IoT domain is connected to the cloud domain, where the different IoT services are running. The user accesses the IoT devices first through the cloud domain, then the SDN-enabled network and the IoT router. In fact, in ANASTACIA, the communication between a user and an IoT device happens through a chain of virtual network functions (VNFs), known as a service function chain (SFC). The latter consists of three parts:</para>
<para>(i) The ingress point, which is the first VNF in the SFC. The user initially attaches to the ingress point;</para>
<para>(ii) The intermediate VNFs;</para>
<para>(iii) The egress point, which is the last VNF in the SFC. The egress point should be connected to the IoT controller. As depicted in <link linkend="F2-3">Figure <xref linkend="F2-3" remap="2.3"/></link>, the order of the communications between the VNFs is defined according to the different SDN rules enforced through the SDN controller. The nature and the size of the SFC are defined according to the nature of the user (normal or suspicious).</para>
<para><link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link> depicts the different steps of the orchestration and enforcement plane suggested in ANASTACIA. An attack is detected thanks to the Mitigation Action Service (MAS) component. The latter sends a mitigation request (MSPL file) to the security orchestrator (<link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>, Step 3). To mitigate the attacks, the security orchestrator interacts with three main actors (<link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>):</para>
<para><b>IoT controller:</b> It provides IoT command and control at a high level of abstraction, independently of the underlying technologies. That is, it is able to carry out IoT management requests through different constrained IoT protocols such as CoAP or MQTT. It also maintains a registry of relevant information on the deployed IoT devices, such as the IoT device properties and available operations. Since it knows the status of the IoT devices, it can communicate effectively and avoid saturating the IoT network when a high-scale command and control operation is required. An example, with performance figures, of IoT management as part of a building management system can be found in &#8220;Security Management Architecture for NFV/SDN-aware IoT Systems&#8221; (under review). To mitigate the different attacks, the security orchestrator interacts with the IoT controller to act at the level of the IoT domain and prevent the propagation of the attack to other networks (<link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>: 4). The IoT controller enforces the different security rules at the IoT router (data plane) to mitigate the attack (<link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>: 5).</para>
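The protocol-independent command-and-control role of the IoT controller can be sketched as follows. The registry layout, function names and transport callables are hypothetical stand-ins; a real implementation would plug in a CoAP or MQTT client library behind the same interface.

```python
# Hypothetical sketch of protocol-independent IoT command and control:
# the caller issues a high-level management command, and the controller
# dispatches it over whatever constrained protocol the device was
# registered with (CoAP or MQTT here). Transports are injected callables.
def make_controller(registry, transports):
    """registry: device_id -> {"protocol": ..., "address": ...}
    transports: protocol name -> callable(address, command) -> result"""
    def send_command(device_id, command):
        device = registry[device_id]
        send = transports[device["protocol"]]
        return send(device["address"], command)
    return send_command
```

For example, registering a device under `"coap"` with a `coap://` address routes its commands through the CoAP transport, while MQTT devices go through a publish-based transport, without the caller having to know the difference.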
<fig id="F2-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-3">Figure <xref linkend="F2-3" remap="2.3"/></link></label>
<caption><para>Security orchestration plane.</para></caption>
<graphic xlink:href="graphics/ch002_fig003.jpg"/>
</fig>

<fig id="F2-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link></label>
<caption><para>Security orchestration and enforcement in case of a reactive scenario.</para></caption>
<graphic xlink:href="graphics/ch002_fig004.jpg"/>
</fig>

<para><b>NFV orchestrator</b>: In ANASTACIA, to ensure efficient management of SFCs, we have integrated an SDN controller (ONOS) with the Virtual Infrastructure Manager (VIM) used, in our case OpenStack. The integration of SDN with the VIM enables smooth communication between the different VNFs that form the same SFC. After receiving the MSPL message from the MAS, the security orchestrator identifies the right mitigation plan to be implemented. If the mitigation plan requires the instantiation of new VNFs, the security orchestrator instructs the NFV orchestrator to instantiate and configure the required VNFs. To instantiate them, the NFV orchestrator interacts with the VIM (<link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>: 6). The security orchestrator also interacts with the policy interpreter to translate the received MSPL into the low-level configuration (LSPL) needed for the different VNFs. After the successful instantiation of a security VNF, the security orchestrator configures that VNF with the received LSPL (<link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>: 6).</para>
<para>In ANASTACIA, we have also developed different virtual security enablers that can be instantiated to mitigate the different attacks. For instance, we have developed a new VNF firewall based on an SDN-enabled switch and OpenFlow: OVS-Firewall is a newly developed solution that relies on the OpenFlow protocol to create a sophisticated firewalling system. We have also proposed and developed a new security VNF, named virtual IoT-honeynet, that allows replicating a real IoT environment in a virtual one by simulating the IoT devices with their real deployed firmware, as well as their physical location. The IoT-honeynet can be represented by an IoT-honeynet security policy, and the final configuration can be deployed transparently on demand with the support of the SDN network. &#8220;Virtual IoT HoneyNets to mitigate cyberattacks in SDN/NFV-enabled IoT networks&#8221; (under review) shows the potential and performance of this approach.</para>
<para><b>SDN controller:</b> This component helps in rerouting the traffic between the VNFs in the different SFCs. As depicted in <link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>, when the mitigation action service notifies the orchestrator about an attack, the SFC is updated by adding/inserting new security VNFs. The security orchestrator should push the adequate SDN rules to reroute the traffic between the different VNFs in the SFC and the IoT domain (<link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>: 7). Also, depending on the situation, the security orchestrator can choose SDN itself as the security enabler; in this case, the attack can be mitigated by exploiting the strength of the SDN technology. If so, the security orchestrator can instruct the SDN controller to push SDN rules that prevent, allow or limit the communication on specified protocols and ports between different communication peers (<link linkend="F2-4">Figure <xref linkend="F2-4" remap="2.4"/></link>: 7).</para>
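As an illustration of using SDN as the security enabler, the sketch below builds the JSON body of a drop rule in the style of the ONOS REST flow API (a rule with an empty treatment drops matching packets in ONOS). The device id and addresses are illustrative, and actually pushing the rule (e.g. a POST to `/onos/v1/flows/{deviceId}`) is left out; this is a sketch of the rule shape, not of the ANASTACIA orchestrator itself.

```python
# Sketch of an ONOS-style flow rule that blocks all IPv4 traffic from a
# suspicious source. In ONOS, a flow with an empty treatment (no
# instructions) drops the matched packets. Values are illustrative.
def build_drop_rule(device_id: str, src_ip: str, priority: int = 40000) -> dict:
    """Build the JSON body for an ONOS flow-rule POST."""
    return {
        "priority": priority,
        "timeout": 0,
        "isPermanent": True,
        "deviceId": device_id,
        "treatment": {"instructions": []},   # empty treatment == drop
        "selector": {"criteria": [
            {"type": "ETH_TYPE", "ethType": "0x0800"},  # match IPv4
            {"type": "IPV4_SRC", "ip": src_ip},
        ]},
    }
```

Limiting rather than blocking a peer would instead keep a non-empty treatment (e.g. an output instruction toward a rate-limited port or an inspection VNF) while matching the same selector.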
<para>By relying on the aforementioned orchestration properties and features, as well as on the SDN and IoT controllers, the ANASTACIA framework aims to cope with the research challenges related to the orchestration of SDN/NFV-based security solutions for IoT environments, and several experiments have already been carried out in different security areas.</para>
<para>For instance, several experiments have been carried out regarding the <b>virtual IoT-honeynets</b> introduced above, evaluating the transparent, on-demand deployment of IoT-honeynet security policies for different kinds of IoT devices and topologies; &#8220;Virtual IoT HoneyNets to mitigate cyberattacks in SDN/NFV-enabled IoT networks&#8221; (under review) shows the potential and performance of this approach.</para>
<para>Furthermore, the security orchestration of ANASTACIA enables continuous and dynamic management of Authentication, Authorization and Accounting (AAA) as well as channel-protection virtual security functions in IoT networks enabled with SDN/NFV controllers. Our scientific paper [1] shows how a virtual AAA is deployed dynamically as a VNF at the edge, to enable scalable device bootstrapping and to manage the access control of IoT devices to the network. Besides, our solution allows dynamically distributing the necessary crypto-keys for IoT M2M communications and deploying virtual channel-protection proxies as VNFs, with the aim of establishing secure tunnels (e.g. through DTLS) among IoT devices and services, according to the contextual decisions inferred by the cognitive framework. The solution was implemented and evaluated, demonstrating its feasibility for dynamically managing AAA and channel protection in SDN/NFV-enabled IoT scenarios.</para>
<para>A telco cloud environment may consist of multiple VNFs that can be shipped and provided, in the form of virtual machine (VM) images, by different vendors. These VNF images contain highly sensitive data that should not be manipulated by unauthorized users; moreover, the manipulation of these images by unauthorized users is a threat that can affect the whole system setup. In ANASTACIA, we have designed and developed different tools to prevent the manipulation of the VNF images that run on top of the different network clouds, devising efficient methods that verify the integrity of physical machines before using them, and also the integrity of virtual machine and virtual network function images before launching them [15&#8211;17]. For this purpose, different technologies have been investigated, such as (i) the Trusted Platform Module (TPM); (ii) Linux Volume Management (LVM); (iii) the Linux Unified Key Setup (LUKS). For instance, in [16], we have provided a trusted cloud platform that consists of the following components:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A TPM module, used to store passwords, cryptographic keys, certificates and other sensitive information. The TPM contains platform configuration registers (PCRs) which can be used to store cryptographic hash measurements of the system&#8217;s critical components; most TPM modules provide 24 PCRs, numbered 0 to 23.</para></listitem>
<listitem><para>A trusted boot module: an open-source tool that uses Intel&#8217;s Trusted Execution Technology (TXT) to perform a measured boot of the system. The trusted boot process starts when the tool is launched as an executable; it measures all the binaries of the system components (i.e., firmware code, BIOS, OS kernel and hypervisor code) and then writes these hash measurements into the TPM&#8217;s secure storage.</para></listitem>
<listitem><para>A remote attestation service, which verifies the boot-time integrity of remote hosts. It is a software mechanism integrated with the TPM that securely attests the trust state of remote hosts. It uses boot-time measurements of the system components (such as BIOS, OS and hypervisor) and stores the known good configuration of each host machine in its whitelist database. It then queries the remote host&#8217;s TPM module to fetch its current PCR measurements and compares them against the whitelist values to derive the final trust state of the remote host.</para></listitem>
<listitem><para>An OpenStack resource selection filters component, integrated with the nova scheduler. In OpenStack, when a VNF is launched, the nova-scheduler filters pass through each host and select the hosts that satisfy the given criteria, each filter passing its list of selected hosts on to the next one. When the last filter has been processed, OpenStack&#8217;s default filter scheduler performs a weighing mechanism: it assigns a weight to each of the selected hosts depending on RAM, CPU and any other custom criteria, to select the host most suitable to launch the VM instance.</para></listitem>
</itemizedlist>
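The comparison step performed by the remote attestation service can be sketched as follows. This is a minimal illustration of the whitelist check only (the hash values and PCR indices are made up); a real service would additionally verify the signature and freshness of the TPM quote.

```python
# Minimal sketch of the remote-attestation comparison step: the
# verifier holds whitelisted known-good PCR values per host and
# compares them with the PCR quote fetched from the host's TPM.
def attest(whitelist: dict[int, str], quoted_pcrs: dict[int, str]) -> bool:
    """Return True only if every whitelisted PCR (e.g. PCR 0 for the
    BIOS measurement) matches the value quoted by the host's TPM;
    a missing or differing PCR marks the host as untrusted."""
    return all(quoted_pcrs.get(idx) == good
               for idx, good in whitelist.items())
```

The resulting boolean trust state is what a scheduler filter (such as the OpenStack component above) would consume to exclude untrusted hosts from VM placement.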
</section>

<section class="lev2" id="sec2-3-4">
<title>2.3.4 Dynamic Orchestration of Resources Planning in Security-oriented SDN and NFV Synergies</title>
<para>Network operators face different types of attacks that introduce new sets of challenges for detection and defence. However, hardware appliances for defence or detection are neither flexible nor elastic, and they are expensive. To extend the NFV MANO framework, ANASTACIA incorporates a set of intelligent and dynamic security policies that can be updated seamlessly to constantly reflect security concerns in the VNF placement through the resource planning module, while still ensuring an acceptable QoE. Moreover, we have defined and implemented synergies between SDN controllers and NFV MANO for the purpose of coordinating security effectively, by defining the adequate SDN rules or the adequate virtual security appliances (VNFs) to be enforced through the Security Enabler Provider module. In the following sections, the resource planning and security enabler provider modules are described.</para>
</section>

<section class="lev3" id="sec2-3-4-1">
<title>2.3.4.1 Resource planning module</title>
<para>During the first phase of ANASTACIA, we carried out two main works. The first one, &#8220;The security enablers selection&#8221;, focused on selecting the best service (Virtual Network Function, VNF) to cope with a security attack, among the list of enablers previously selected by the chosen Security Enabler Provider. The second one focused on &#8220;Mobile Edge Computing Resources Optimization&#8221;. In fact, one of our two main use cases relies on Mobile Edge Computing, for example to secure the perimeter of a company comprising several buildings with different usages situated in different areas, using distributed resources such as MEC: an emerging technology that aims at pushing applications and content close to the users (e.g. at base stations, access points and aggregation networks), reduces latency, improves the quality of experience and ensures highly efficient network operation and service delivery.</para>
<para>During the second phase of the project, we aim to extend the resource planning module with dynamic placement of Service Function Chain (SFC) requests, for example to reduce routing overhead when an attack occurs. Allocating multiple SFC requests on an NFV infrastructure is challenging, especially under a cost-driven objective: VNFs have to be chained in a specific order; depending on their type and on isolation considerations, VNFs can potentially be shared among several SFCs; and VNFs must not be placed far from the shortest path, to avoid increasing SFC delay and network usage.</para>
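<para>To make the on-path constraint concrete, the placement step can be sketched as a first-fit greedy heuristic (an illustration under simplified assumptions; the server names, demands and the first-fit strategy are ours, not the project&#8217;s placement algorithm):</para>

```python
# Hypothetical sketch: place a VNF chain, in order, on servers along the
# ingress->egress shortest path, never moving backwards along the path,
# so the SFC does not deviate from the shortest path.

def place_sfc(chain, path_servers, capacity):
    """chain: list of (vnf_name, cpu_demand), in chaining order.
    path_servers: server ids in shortest-path order.
    capacity: server -> free CPU units.
    Returns {vnf: server} or None if no on-path placement is feasible."""
    placement, free = {}, dict(capacity)
    idx = 0  # current position on the path; never decreases (keeps order)
    for vnf, cpu in chain:
        while idx < len(path_servers) and free[path_servers[idx]] < cpu:
            idx += 1                      # try the next server on the path
        if idx == len(path_servers):
            return None                   # chain cannot be hosted on-path
        placement[vnf] = path_servers[idx]
        free[path_servers[idx]] -= cpu
    return placement

placement = place_sfc(
    chain=[("firewall", 2), ("ids", 3), ("nat", 1)],
    path_servers=["edge1", "core1", "edge2"],
    capacity={"edge1": 4, "core1": 3, "edge2": 2},
)
```

<para>A cost-driven formulation would additionally weigh VNF sharing across SFCs and link usage; the sketch only captures the ordering and on-path constraints discussed above.</para>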
</section>

<section class="lev3" id="sec2-3-4-2">
<title>2.3.4.2 The security enablers selection</title>
<para>The aim of the model is to select the best service (Virtual Network Function, VNF) to cope with a security attack from the list of enablers previously identified by the selected Security Enabler Provider, while minimising the maximum node load (CPU, RAM, bandwidth) across the topology, as provided by the system model. The system information provides relevant data about the whole infrastructure, server capacities (CPU, RAM, etc.) and VNF flavours (CPU, RAM, etc.). The Security Enablers information, in turn, provides data about the available Security Enablers capable of enforcing specific capabilities. Minimising the maximum node load improves the provider&#8217;s cost revenue (the provider&#8217;s energy-efficiency goal). For more details, please refer to the ANASTACIA deliverable D3.3.</para>
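<para>The min-max objective can be sketched as an exhaustive search over (enabler, node) pairs that keeps the placement minimising the resulting maximum node load (a minimal illustration; the flavours, node capacities and the CPU/RAM-only load metric are assumptions, and bandwidth would be one more dimension of the same check):</para>

```python
# Hypothetical sketch of the min-max load objective: try every candidate
# security enabler (VNF flavour) on every node and keep the placement that
# minimises the maximum relative node load after deployment.

def select_enabler(candidates, nodes):
    """candidates: list of (enabler, {"cpu":..,"ram":..}) flavours able to
    enforce the required capability.
    nodes: node -> {"cpu","ram","cap_cpu","cap_ram"} current load/capacity.
    Returns (resulting max load, enabler, node) or None if infeasible."""
    best = None
    for enabler, fl in candidates:
        for name, n in nodes.items():
            if (n["cpu"] + fl["cpu"] > n["cap_cpu"]
                    or n["ram"] + fl["ram"] > n["cap_ram"]):
                continue  # this node cannot host this flavour
            # maximum relative load over all nodes after the placement
            load = max(
                max((m["cpu"] + (fl["cpu"] if k == name else 0)) / m["cap_cpu"],
                    (m["ram"] + (fl["ram"] if k == name else 0)) / m["cap_ram"])
                for k, m in nodes.items())
            if best is None or load < best[0]:
                best = (load, enabler, name)
    return best

best = select_enabler(
    candidates=[("vFirewall", {"cpu": 2, "ram": 1}),
                ("vIDS", {"cpu": 4, "ram": 2})],
    nodes={"n1": {"cpu": 6, "ram": 4, "cap_cpu": 10, "cap_ram": 8},
           "n2": {"cpu": 2, "ram": 2, "cap_cpu": 10, "cap_ram": 8}},
)
```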
</section>

<section class="lev3" id="sec2-3-4-3">
<title>2.3.4.3 Mobile edge computing resources optimization</title>
<para>Mobile edge computing (MEC) is an emerging technology that aims at pushing applications and content close to the users (e.g. at base stations, access points, aggregation networks) to reduce latency, improve quality of experience, and ensure highly efficient network operation and service delivery. It principally relies on virtualization-enabled MEC servers with limited capacity at the edge of the network. One key issue is to dimension such systems in terms of server size, server number and server operation area to meet MEC goals. In this work, we have proposed a graph-based algorithm that, given a maximum MEC server capacity, provides a partition into MEC clusters which consolidates as many communications as possible at the edge. We evaluated our proposal and show that, despite the spatio-temporal dynamics of the traffic, our algorithm provides well-balanced MEC areas that serve a large share of the communications.</para>
<para>This work has been published in an ACM SIGCOMM workshop [18] and extended into a TNSM journal article [19].</para>
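<para>The consolidation idea behind the graph-based partitioning can be sketched as a greedy, capacity-bounded merge of cells along their heaviest inter-cell traffic edges (an illustration of the principle only, not the published algorithm; the loads, volumes and merge order are invented):</para>

```python
# Illustrative sketch: merge the pair of clusters exchanging the most
# traffic, as long as the merged load fits one MEC server, so as many
# communications as possible stay inside a single edge cluster.

def geo_partition(load, traffic, cap):
    """load: cell -> offered load; traffic: {(a, b): volume} inter-cell
    traffic; cap: maximum MEC server capacity. Returns the cluster list."""
    cluster = {c: {c} for c in load}            # cell -> its cluster (set)
    def cluster_load(s):
        return sum(load[c] for c in s)
    for (a, b), _vol in sorted(traffic.items(), key=lambda kv: -kv[1]):
        ca, cb = cluster[a], cluster[b]
        if ca is cb:
            continue                            # already consolidated
        if cluster_load(ca) + cluster_load(cb) <= cap:
            merged = ca | cb                    # merge within capacity
            for c in merged:
                cluster[c] = merged
    seen, parts = set(), []                     # deduplicate the clusters
    for s in cluster.values():
        key = frozenset(s)
        if key not in seen:
            seen.add(key)
            parts.append(sorted(s))
    return parts

parts = geo_partition(
    load={"A": 3, "B": 2, "C": 4, "D": 1},
    traffic={("A", "B"): 10, ("B", "C"): 8, ("C", "D"): 5},
    cap=6,
)
```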
</section>

<section class="lev3" id="sec2-3-4-4">
<title>2.3.4.4 Security enabler provider</title>
<para>The Security Enabler Provider is a component of the Security Orchestration Plane, as defined in the ANASTACIA architecture. It identifies the security enablers that can provide specific security capabilities, so as to meet the security-policy requirements. Moreover, once the Security Resource Planning module (a sub-component of the Security Orchestrator, described above) selects a security enabler, the Security Enabler Provider is also responsible for providing the corresponding plugin.</para>
<para>The Security Enabler Provider primarily interacts with the Policy Interpreter. Specifically, two different interactions have been contemplated:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The first provides the Policy Interpreter with a list of security-enabler candidates for the identified capabilities.</para></listitem>
<listitem><para>The second provides the Policy Interpreter with the specific Security Enabler Plugin that performs the policy translation. This policy-translation process was defined in ANASTACIA D3.1 [20] and also published in a journal paper [2].</para></listitem>
</itemizedlist>
<para>The first role is implemented as a piece of software that, given specific capabilities as input, returns the most suitable enablers. The second role is implemented as a piece of software capable of translating MSPL policies into configuration/task rules specific to a concrete security enabler. For more details, please refer to the ANASTACIA deliverable D3.3 [21].</para>
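<para>The two roles can be sketched as follows (the registry contents, enabler names, plugin body and MSPL fields are hypothetical illustrations, not the actual D3.3 implementation):</para>

```python
# Hypothetical sketch of the two Security Enabler Provider roles:
# (1) list enabler candidates for a capability, (2) hand back the plugin
# that translates an MSPL policy into enabler-specific configuration.

REGISTRY = {  # capability -> enablers able to enforce it (illustrative)
    "filtering":  ["iptables_vnf", "odl_flow_filter"],
    "monitoring": ["mmt_probe"],
}

def iptables_plugin(mspl):
    # MSPL fields -> one concrete rule for this specific enabler
    return (f"iptables -A FORWARD -s {mspl['src']} "
            f"-p {mspl['proto']} --dport {mspl['port']} -j DROP")

PLUGINS = {"iptables_vnf": iptables_plugin}

def candidates(capability):
    """Role 1: enabler candidates for a required capability."""
    return REGISTRY.get(capability, [])

def translate(enabler, mspl):
    """Role 2: use the enabler's plugin to produce its configuration."""
    return PLUGINS[enabler](mspl)

rule = translate("iptables_vnf",
                 {"src": "10.0.0.5", "proto": "tcp", "port": 80})
```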
</section>

<section class="lev2" id="sec2-3-5">
<title>2.3.5 Security Monitoring for Threat Detection in SDN/NFV-enabled IoT Deployments</title>
<para>Security threat levels change dynamically as attackers discover new breaches and try to exploit them. To cope with this challenge, the ANASTACIA project relies on SDN and NFV techniques to embed the developed security products and provide a dynamic way to deploy them when needed. In this way, the ANASTACIA project delivers a set of scientific and technological innovations, grouped into two principal innovation areas.</para>
</section>

<section class="lev3" id="sec2-3-5-1">
<title>2.3.5.1 Security monitoring and reaction infrastructure</title>
<para>Sadeghi et al. identify the principal challenges in securing IoT-based Cyber-Physical Systems, highlighting in particular the development of &#8220;<i>a holistic cybersecurity framework covering all abstraction layers of heterogeneous IoT systems and across platform boundaries</i>&#8221; [22]. The ANASTACIA project addresses this challenge by proposing a state-of-the-art security infrastructure composed of three principal modules:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b><i>Monitoring Agents</i></b>: These components extract security data from the monitored network. The ANASTACIA framework has been designed to be flexible enough to support both physical and virtual monitoring agents, and to extract data both from data networks (IP and IoT) and from analogue CPS devices. This makes the ANASTACIA framework a multilevel security platform, suitable for physical sensor networks, emulated environments and hybrid networks. In this direction, the ANASTACIA partners have worked on monitoring agents adapted to 6LoWPAN and ZigBee IoT networks, as well as on agents capable of extracting temperature information from analogue sources. These agents have been tested in the project&#8217;s case studies and are intended to be applied in wider scenarios for final validation. Following this path, the project partners are extending these monitoring agents with virtualization capabilities: by applying NFV and SDN technologies to the monitoring agents, it will be possible to deploy and (re)configure them on demand, allowing new agents to be deployed on the network as a reaction to ongoing attacks. The ANASTACIA partners are also extending the security policy language (MSPL) to specify this type of countermeasure, allowing new monitoring agents to be deployed on the network in a completely autonomous manner.</para></listitem>
<listitem><para><b><i>Monitoring Module</i></b>: This component contains the security-incident detection logic. The heterogeneous monitoring agents (IoT-network and analogue agents) publish the extracted security data on a shared communication channel. This information is then analysed by the incident detectors (for well-known attacks) and the behaviour-analysis modules (for zero-day attacks), which emit verdicts about the detected incidents. As stated in [22], detecting zero-day attacks alone does not ensure a high security level, since well-known attacks are still used by malicious users to gain control of systems. ANASTACIA not only provides both types of analysis (well-known attacks and behaviour analysis); it will also combine this information into a deeper analysis that finds correlations between already-known attacks and the behavioural-analysis results, detecting hidden relationships between events coming from different sources. The ANASTACIA partners are developing such correlation engines to enhance both security analyses and to provide enriched information to the reaction module.</para></listitem>
<listitem><para><b><i>Reaction Module</i></b>: Using the information provided by the monitoring module (namely incident verdicts and behavioural-analysis results), the reaction module is responsible for determining the best mitigation plan for the detected incidents. The ANASTACIA framework provides a simple yet powerful design for this component, which uses not only the incident verdicts provided by the monitoring module, but also the system model and the capabilities deployed in the network. All this information is enhanced with a risk analysis to determine the best set of countermeasures to cope with the ongoing attack. Further information about how this analysis is performed can be found in the following sections.</para></listitem>
</itemizedlist>
</section>

<section class="lev3" id="sec2-3-5-2">
<title>2.3.5.2 Novel products for IoT- and cloud-based SDN/NFV systems</title>
<para>The security infrastructure described above represents one of the principal outcomes of the project; however, the partners are also working on a concrete implementation of this design. To implement the monitoring infrastructure, the partners have developed a set of technologies that fulfil the functionalities of the ANASTACIA infrastructure, generating novel products ready to be deployed on IoT- and cloud-based systems. For example, Montimage has developed a 6LoWPAN network sniffer that works in coordination with the MMT tool to detect anomalies in IoT networks. UTRC (in collaboration with OdinS) has developed analogue temperature agents and machine-learning-based behavioural analysis for sensor data, allowing zero-day attacks on temperature-sensor networks to be detected. ATOS has extended its XL-SIEM tool to perform the risk analysis when computing the reaction and to include the system model when computing the countermeasures to be taken. Although the development of these products is not yet finished, the partners have integrated proof-of-concept versions of them on a shared platform, enabling initial tests and validation. Moreover, it is envisaged to further extend these tools with a correlation engine, aiming to reveal hidden relationships between security events coming from different sources (monitoring agents) and thereby raising the awareness level of the whole security platform.</para>
<para>To further extend the product offering, the ANASTACIA partners are preparing the solutions to be NFV- and SDN-ready, adapting them (especially the network agents) to work as single, self-contained NFV modules. The ANASTACIA outcomes will thus have the potential to be deployed in virtualized environments, to be deployed dynamically as a reaction to an ongoing attack, and to be reconfigured if required. In this scenario, the ANASTACIA platform will be able to momentarily harden the security of the portions of the network that are under attack, by deploying new agents, loading new security rules into the monitoring agents/module, analysing new protocols or reconfiguring the existing instances. All these actions are maintained until the security level has returned to normal values or the network administrator has intervened to resolve the security breach.</para>
<para>All these novel products will have a high impact on the security market, opening business possibilities in the IoT-based CPS area.</para>
<para>Although the project&#8217;s ambition is high, the ANASTACIA partners have already laid the foundations for further innovation. They will continue their efforts to fully integrate the security innovations with SDN and NFV technologies, as well as to develop a correlation engine for security events. The aim is to provide the market with a highly dynamic security solution, capable not only of detecting current cyber threats, but also of reacting against them and deploying new security instances to adapt to the ever-evolving security levels of IoT networks.</para>
</section>

<section class="lev2" id="sec2-3-6">
<title>2.3.6 Cyber Threats Automated and Cognitive Reaction and Mitigation Components</title>
<para>The monitoring information and the detected incidents are evaluated for automatic mitigation. Security policies are used to determine the security enablers supported by the IoT infrastructure, and hence the mitigations it supports. Obviously, not all mitigations work against all possible threats, and not all mitigations have the same cost. Cost is considered here not just in terms of economic impact, but also in terms of time to mitigate, computational resources required and complexity of the mitigation. ANASTACIA automatically analyses these factors and, along with the detected incidents, evaluates and decides on the most convenient mitigation in each case. To this end, several data are considered in the analysis:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>the severity of the incidents, received from the correlation engine in the monitoring module, which takes into account the type and the duration of the incident, among other factors,</para></listitem>
<listitem><para>the importance of the affected assets, which depends on the criticality of the affected IoT devices, their location and the importance of the data they manage,</para></listitem>
<listitem><para>the cost of the mitigation, obtained either from the orchestrator in charge of enforcing the available security enablers, or from the system administrator in case specific expert knowledge is required.</para></listitem>
</itemizedlist>
<para>The global risk of the incident is obtained from (1) and (2), and is used together with (3) to decide on the most convenient mitigation. A Decision Support Service (DSS) computes this information, providing a score for each mitigation that represents its suitability for the ongoing incident. The mitigation with the highest suitability score is the most suitable one, and is passed to the orchestrator for enforcement. To this end, a Mitigation Action Service (MAS) translates the output of the DSS into a format understandable by the orchestrator. The MAS is in charge of generating the reaction in the MSPL format. This language was selected because its XML-based structure allows specifying the type of base capability to deploy (e.g. filtering, monitoring) and the configuration of that action (e.g. involved IPs, port numbers, number of agents to deploy). The MSPL format also allows the MAS to send the mitigation plan directly to the Security Orchestrator, which uses it to deploy the computed plan.</para>
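<para>The scoring step can be sketched as follows (a deliberately simple illustration; the multiplicative risk model, the cost weighting and the mitigation names are our assumptions, since the project&#8217;s quantitative model is still under development):</para>

```python
# Hypothetical DSS scoring sketch: global risk combines incident severity (1)
# with asset importance (2); each mitigation's suitability trades that risk
# against the mitigation cost (3); the highest score wins.

def suitability(severity, asset_importance, mitigations):
    """severity, asset_importance in [0, 1]; mitigations: name -> cost in
    [0, 1] (aggregating time, resources and complexity).
    Returns (scores per mitigation, most suitable mitigation)."""
    risk = severity * asset_importance            # global risk from (1), (2)
    scores = {m: risk * (1.0 - cost)              # high risk favours acting,
              for m, cost in mitigations.items()} # high cost penalises
    best = max(scores, key=scores.get)
    return scores, best

scores, best = suitability(
    severity=0.8, asset_importance=0.9,
    mitigations={"drop_traffic": 0.2,
                 "isolate_device": 0.4,
                 "redeploy_vnf": 0.6},
)
```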
<para>In order to generate the MSPL file, the MAS analyses the response of the DSS by performing the following processes: (1) it identifies the countermeasure computed by the DSS; (2) it identifies the network capabilities able to execute the countermeasure; (3) it retrieves the information of the capabilities from the System Model Analysis module; (4) it builds the MSPL file to express the countermeasure, specifying the capability to use and the configurations of that capability used to apply the countermeasure.</para>
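<para>The four MAS processes can be sketched as a small translation routine (the element names, the countermeasure-to-capability mapping and the system-model fields are simplified placeholders, not the normative MSPL schema):</para>

```python
# Illustrative sketch of the four MAS steps that turn a DSS verdict into
# an MSPL-like policy document.

def build_mspl(dss_output, system_model):
    countermeasure = dss_output["countermeasure"]              # step (1)
    capability = {"drop_traffic": "filtering",                 # step (2)
                  "deploy_probe": "monitoring"}[countermeasure]
    config = system_model[capability]                          # step (3)
    return (                                                   # step (4)
        "<ITResourceOrchestration>"
        f"<capability>{capability}</capability>"
        f"<target ip='{config['ip']}' port='{config['port']}'/>"
        "</ITResourceOrchestration>")

mspl = build_mspl({"countermeasure": "drop_traffic"},
                  {"filtering": {"ip": "10.0.0.5", "port": 80}})
```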
<para>For every incident handled by the reaction (including risk evaluation and decision-support activities), the associated information (such as the type of incident or the IoT devices affected) and all the indicators that characterize the incident (such as severity, importance of the affected assets, global risk of the incident or suitability of the mitigation) are passed to the Dynamic Security and Privacy Seal to update the seal status.</para>
<para>We are currently developing the quantitative model that supports the assessment of incidents and mitigations, in order to decide on the most convenient reaction based on incident severity, the criticality of the affected assets, the possible mitigations and the cost of applying them.</para>
</section>

<section class="lev2" id="sec2-3-7">
<title>2.3.7 Behaviour Analysis, Anomaly Detection and Automated Testing for the Detection of Known and Unknown Vulnerabilities in both Physical and Virtual Environments</title>
<para>Our behavioural framework automatically identifies cyber-security attacks in a given IoT environment. It uses system-design and operational data to discover dependencies between cyber systems and HVAC operations in a cyber-physical domain. We predict potential security consequences of interacting operations among subsystems and generate threat alarms. Specifically, our behavioural engine empowers ANASTACIA&#8217;s use-case scenario using &#8220;best&#8221; practices to implement security, namely (1) adding network security (in the form of IDS/IPS), and (2) using threat intelligence to detect evasions or hidden attacks. Our developed platform can detect:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Known attacks such as DDoS and MiTM attacks,</para></listitem>
<listitem><para>IoT zero-day attacks and slow DoS attacks that might pass undetected by a normal IDS/IPS [9].</para></listitem>
</itemizedlist>
<para>Our framework includes a monitoring component composed of messaging wrappers, Constraint Programming (CP) models and buffered sensor data from IoT networks. The CP model is the core component of our behavioural-analysis engine: information is first gathered and analysed to learn a CP model, which is then deployed to identify intrusions. The CP model is built on continuous streams of data (i.e. time series), whose update intervals can vary from milliseconds to minutes, and consists of a network of relations between building-sensor data. Using this CP model, we aggregate the different types of sensor data to model the normal behaviour of the supervised system. The model is built for monitoring at the system level, but nothing prevents including information about network performance if it is exposed to the model; for example, the CPU consumption of a device can be included alongside its actual sensor data. The variety of data that we can aggregate allows the model to be as generic or as specific as the end-user requires. Since the model is built on relations, we can capture which data affect which other data types (features).</para>
<para>We developed an approach to learn a CP-based decision model, consisting of a set of relations, to detect misbehaviour of the system. More specifically, the idea is to learn a set of relations which, when jointly satisfied, define the normal behaviour of the system. After learning the important relations, the approach discards the unimportant ones, creating a model with the best possible relations and features of the sensor nodes. In each iteration, the relations between a sensor&#8217;s features and all other network features are further verified; we also identify which sensors are involved in breaking a relation and which set of relations is broken. In this fashion, the model is further tuned. The developed &#8216;Monitoring&#8217; component enables continuous and integrated monitoring of multivariate signals, event logs, heartbeat signals, status reports, operational information, etc., emanating from various devices in a multitude of building operational subsystems. It also evaluates the security situation against known policies, models and threat signatures to detect abnormalities and outliers, e.g. a high data download, or external database or port accesses during an emergency. Such situations are analysed by the &#8216;Reaction&#8217; component, which evaluates the severity of the situation. Isolation and predictive mechanisms are activated to ensure that the rest of the building-operations system continues as normal. Policies and rules are activated, updated and enforced by the &#8216;Security Enforcement&#8217; component; e.g. a building emergency will lock down non-essential database accesses, and escalation of the emergency to the city fire brigade must be performed by authorized personnel. To this end, our behavioural engine&#8217;s innovations are summarized in the following key points:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Learning a constraint-programming model that captures the normal behaviour of a given cyber-physical system,</para></listitem>
<listitem><para>The CP model provides an explanation when a potential anomaly is detected, by reporting which constraints fail to be satisfied,</para></listitem>
<listitem><para>User-defined constraints can easily be integrated with the constraints learned from the data,</para></listitem>
<listitem><para>The developed behaviour engine can handle multiple attacks of different types.</para></listitem>
</itemizedlist>
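<para>The relation-based detection idea can be illustrated with a minimal sketch, in which learned interval constraints on pairwise feature differences stand in for the project&#8217;s CP relation-learning (the feature names, data and margin are invented):</para>

```python
# Illustrative sketch: the "model" is a set of interval constraints on
# pairwise differences between sensor features, learned from normal data.
# A reading is anomalous when some constraint is violated, and the broken
# constraints themselves explain the alarm.

def learn_relations(normal_data, margin=1.0):
    """normal_data: list of {feature: value} rows. For each feature pair,
    learn the observed range of their difference, widened by a margin."""
    feats = sorted(normal_data[0])
    model = {}
    for i, a in enumerate(feats):
        for b in feats[i + 1:]:
            diffs = [row[a] - row[b] for row in normal_data]
            model[(a, b)] = (min(diffs) - margin, max(diffs) + margin)
    return model

def check(model, reading):
    """Return the list of broken relations (the explanation, if any)."""
    return [(a, b) for (a, b), (lo, hi) in model.items()
            if not lo <= reading[a] - reading[b] <= hi]

normal = [{"supply_temp": 18, "room_temp": 21},
          {"supply_temp": 17, "room_temp": 20}]
model = learn_relations(normal)
broken = check(model, {"supply_temp": 35, "room_temp": 21})  # anomalous
```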
</section>

<section class="lev2" id="sec2-3-8">
<title>2.3.8 Secured and Authenticated Dynamic Seal System as a Service</title>
<para>Several projects have tried to address the need to enable trustable ICT deployments. The solutions they have developed are generally focused either on enhancing trust on security or on privacy, but not both. This situation can be counterproductive if considered in the context of the obligations emerging from the recently adopted European General Data Protection Regulation (GDPR) (which considers both security and privacy controls as fundamental to the protection of personal data).</para>
<para>Moreover, existing solutions are usually based on two separate models:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Either ISO standard-based certification of products and information management systems respecting ISO 17065 or ISO 17021-1, relying on human audit and assessment;</para></listitem>
<listitem><para>Or purely system-based monitoring of security, such as anti-virus applications or intrusion detection systems (IDS), which are often designed independently of any standard.</para></listitem>
</itemizedlist>
<para>The ever-evolving normative framework for security and personal-data protection calls for a holistic approach that considers technical insights alongside human and organizational controls. An organization seeking to comply with the regulatory frameworks will ultimately rely on professional advice from information-security professionals (spearheaded by a Chief Information Security Officer, CISO) and legal professionals (usually taking the role of Data Protection Officer, DPO). These professionals may have difficulties understanding the complex outputs of the technological enablers used to introduce the necessary controls into the systems they oversee, and integrating those outputs with the legal and managerial feedback needed to demonstrate, transparently and accurately, that due diligence has been carried out.</para>
<para>In response to this situation, ANASTACIA&#8217;s Dynamic Privacy and Security Seal (DSPS) will seek to inform the end-user (DPO/CISO) on the most relevant privacy and security issues while supporting certification and compliance activities. To this end, the DSPS will:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Introduce a privacy-by-design and by-default compliant architecture, services and graphical user interface (GUI) that seek to combine the certainty and trustworthiness of conventional certification schemes with real-time certification-surveillance capabilities, through the real-time dynamic monitoring (provided by ANASTACIA) of the certified system.</para></listitem>
<listitem><para>Compile alerts and threats from ANASTACIA, compatible monitoring solutions (using the STIX 2 standard) and the end-user (CISO/DPO) and showcase them through a unified GUI, displaying IoT/CPS privacy and security information while providing decision support capabilities, and data visualization (considering accessibility/ease of use requirements).</para></listitem>
<listitem><para>Empower the end-user by enabling the client&#8217;s Data Protection Officer (DPO) and Chief Information Security Officer (CISO) to provide feedback on the raised alerts directly through the GUI, and to enhance the information obtained from the monitoring system with technical, legal, and organizational documentation. This data will be stored in a privacy-by-design distributed storage solution (powered by Shamir&#8217;s Secret Sharing scheme), which will be associated with the DSPS blockchain-based seal ledger (Hyperledger Fabric), to ensure the data is non-repudiable, immutable, and easily verifiable in direct relation to the events showcased by the DSPS, both by the end-user (for internal audit and compliance purposes) and by associated certification bodies (to determine the validity of relevant certifications).</para></listitem>
</itemizedlist>
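<para>The distributed storage relies on Shamir&#8217;s Secret Sharing; a toy sketch of the split/reconstruct mechanics follows (illustrative only; a real deployment would use vetted cryptographic libraries and parameters rather than this code):</para>

```python
# Toy Shamir Secret Sharing over the prime field GF(P): a secret becomes
# the constant term of a random degree-(k-1) polynomial; any k of the n
# shares reconstruct it by Lagrange interpolation at x = 0, while k-1
# shares reveal nothing.
import random

P = 2**61 - 1  # a Mersenne prime modulus

def split(secret, n, k):
    """Split secret into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
```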
<para>The Dynamic Security and Privacy Seal (DSPS) aims to provide a holistic solution to privacy and security monitoring, addressing both the organizational and technical requirements enshrined in the GDPR through a layered process in which: 1) an initial examination by an auditor or expert determines the baseline privacy and security status of the product or system to be monitored, and of the organizational policies and mechanisms that surround its implementation, to ensure compliance with the most relevant ISO standards (particularly if linked to a certification) and regulations; 2) ANASTACIA provides constant monitoring and reaction capabilities, which are then used to update the DSPS; 3) the end-user provides feedback on the effectiveness of the mitigation activities and uses the DSPS enablers to enhance transparency and accountability in the monitored system.</para>
<para>The resulting tool will provide the end-user with a broad perspective over the state of the monitored system which will consistently track and unify the organizational/human elements considered by personal data protection regulations with the technical insights provided by ANASTACIA&#8217;s monitoring and reaction services. Once implemented, this process will not only provide advanced trust-enhancing information functionalities to ANASTACIA users, but will also serve as a surveillance solution for audit/certification/legal compliance purposes. It will generate a non-repudiable historic track of system variations and potential threats (technical and organizational) to the sealed system while enhancing the contextual information available to the client, auditors or regulatory authorities.</para>
<para>Current work [23] has focused on developing the DSPS architecture as defined in ANASTACIA Deliverable 5.1; deploying and integrating the monitoring service and associated enablers; and refining the GUI elements that will inform the end-user and enable them to provide the required feedback. Upcoming research will seek ways to simplify complex privacy and security information, so as to address the varying technical and legal knowledge of the potential end-users. Furthermore, research on integration with additional information sources (particularly through the STIX 2 format) and privacy-management tools (such as the CNIL DPIA software) will be performed to further enhance the functionalities available through the DSPS GUI.</para>
</section>

<section class="lev1" id="sec2-4">
<title>2.4 Conclusion</title>
<para>This book chapter has summarized the main key innovations being devised, implemented and validated in the scope of the ANASTACIA research project to meet the cybersecurity challenge in heterogeneous IoT scenarios. Namely, it has presented eight key innovations: 1) holistic policy-based security management and orchestration in IoT, 2) investigation of innovative cyber-threats, 3) trusted security orchestration in SDN/NFV-enabled IoT scenarios, 4) dynamic orchestration of resource planning in security-oriented SDN and NFV synergies, 5) security monitoring for threat detection in SDN/NFV-enabled IoT deployments, 6) cyber-threat automated and cognitive reaction and mitigation components, 7) behaviour analysis, anomaly detection and automated testing for the detection of known and unknown vulnerabilities in both physical and virtual environments, and 8) a secured and authenticated Dynamic Seal System as a Service.</para>
<para>These key innovations are currently being realized and successfully evaluated in MEC and smart-building scenarios. Important research outcomes have already been obtained and published in high-impact journals, demonstrating the feasibility and performance of the ANASTACIA cybersecurity framework in dynamically handling and countering evolving kinds of cyberattacks in SDN/NFV-enabled IoT deployments.</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This work has been supported by the following research projects:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Advanced Networked Agents for Security and Trust Assessment in CPS/IoT Architectures (ANASTACIA), funded by the European Commission (Horizon 2020, call DS-01-2016) Grant Agreement Number 731558.</para></listitem>
</itemizedlist>
<para>The authors declare that there is no conflict of interest regarding the publication of this document.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Alejandro Molina Zarca, Jorge Bernal Bernabe, Ruben Trapero, Diego Rivera, Jesus Villalobos, Antonio Skarmeta, Stefano Bianchi, Anastasios Zafeiropoulos and Panagiotis Gouvas, &#8220;Security Management Architecture for NFV/SDN-aware IoT Systems&#8221;, IEEE IoT Journal, 2019.</para>
<para>[2] Molina Zarca, A.; Bernal Bernabe, J.; Farris, I.; Khettab, Y.; Taleb, T.; Skarmeta, A. Enhancing IoT security through network softwarization and virtual security appliances. International Journal of Network Management, 28, e2038, doi:10.1002/nem.2038.</para>
<para>[3] Molina Zarca, Alejandro and Garcia-Carrillo, Dan and Bernal Bernabe, Jorge and Ortiz, Jordi and Marin-Perez, Rafael and Skarmeta, Antonio, Enabling Virtual AAA Management in SDN-Based IoT Networks, Sensors, 19, 2019, 2, 295, http://www.mdpi.com/1424-8220/19/2/295, 1424-8220, 10.3390/s19020295</para>
<para>[4] Cambiaso, E., Papaleo, G., and Aiello, M. (2017). Slowcomm: Design, development and performance evaluation of a new slow DoS attack. Journal of Information Security and Applications, 35, 23&#8211;31.</para>
<para>[5] Vaccari, I., Cambiaso, E., and Aiello, M. (2017). Remotely Exploiting AT Command Attacks on ZigBee Networks. Security and Communication Networks, 2017.</para>
<para>[6] Katkar, V., Zinjade, A., Dalvi, S., Bafna, T., and Mahajan, R. (2015, February). Detection of DoS/DDoS Attack against HTTP Servers Using Naive Bayesian. In Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on (pp. 280&#8211;285). IEEE.</para>
<para>[7] Beitollahi, H., and Deconinck, G. (2011). A dependable architecture to mitigate distributed denial of service attacks on network-based control systems. International Journal of Critical Infrastructure Protection, 4(3&#8211;4), 107&#8211;123.</para>
<para>[8] Cambiaso, E., Papaleo, G., and Aiello, M. (2012, October). Taxonomy of slow DoS attacks to web applications. In International Conference on Security in Computer Networks and Distributed Systems (pp. 195&#8211;204). Springer, Berlin, Heidelberg.</para>
<para>[9] Cambiaso, E., Papaleo, G., Chiola, G., and Aiello, M. (2013). Slow DoS attacks: definition and categorisation. International Journal of Trust Management in Computing and Communications, 1(3&#8211;4), 300&#8211;319.</para>
<para>[10] Aiello, M., Cambiaso, E., Scaglione, S., and Papaleo, G. (2013, July). A similarity based approach for application DoS attacks detection. In Computers and Communications (ISCC), 2013 IEEE Symposium on (pp. 000430&#8211;000435). IEEE.</para>
<para>[11] Duravkin, I. V., Carlsson, A., and Loktionova, A. S. (2014). Method of slow-attack detection. (8), 102&#8211;106.</para>
<para>[12] Singh, K. J., and De, T. (2015). An approach of DDOS attack detection using classifiers. In Emerging Research in Computing, Information, Communication and Applications (pp. 429&#8211;437). Springer, New Delhi.</para>
<para>[13] Brynielsson, J., and Sharma, R. (2015, August). Detectability of low-rate HTTP server DoS attacks using spectral analysis. In Advances in Social Networks Analysis and Mining (ASONAM), 2015 IEEE/ACM International Conference on (pp. 954&#8211;961). IEEE.</para>
<para>[14] Cambiaso, E., Papaleo, G., Chiola, G., and Aiello, M. (2016). A Network Traffic Representation Model for Detecting Application Layer Attacks. International Journal of Computing and Digital Systems, 5(01).</para>
<para>[15] S. Lal, A. Kalliola, I. Oliver, K. Ahola, and T. Taleb, &#8220;Securing VNF Communication in NFVI,&#8221; in Proc. IEEE CSCN&#8217;17, Helsinki, Finland, Sep. 2017.</para>
<para>[16] S. Lal, I. Oliver, S. Ravidas, T. Taleb, &#8220;Assuring Virtual Network Function Image Integrity and Host Sealing in Telco Cloud,&#8221; in Proc. IEEE ICC 2017, Paris, France, May 2017.</para>
<para>[17] S. Lal, T. Taleb, and A. Dutta, &#8220;NFV: Security Threats and Best Practices,&#8221; in IEEE Communications Magazine, Vol. 55, No. 8, May 2017, pp. 211&#8211;217.</para>
<para>[18] M. Bouet, V. Conan, &#8220;Geo-partitioning of MEC Resources,&#8221; in Proc. ACM MECOMM &#8217;17, Los Angeles, CA, USA, August 21, 2017.</para>
<para>[19] M. Bouet, V. Conan, &#8220;Mobile Edge Computing Resources Optimization: A Geo-Clustering Approach,&#8221; IEEE Transactions on Network and Service Management, Vol. 15, No. 2, June 2018.</para>
<para>[20] AM Zarca, JB Bernabe, AS, K Yacine, B Dallal, S Bianchi, &#8220;Initial Security Enforcement Manager Report,&#8221; H2020 Anastacia EU project deliverable D3.1, 2018.</para>
<para>[21] D Belabed, M Bouet, D Rivera, P Sobonski, A Molina Zarca, &#8220;Initial Security Enforcement Enablers Report,&#8221; H2020 Anastacia EU project deliverable D3.3.</para>
<para>[22] Sadeghi, Ahmad-Reza, Christian Wachsmann, and Michael Waidner. &#8220;Security and privacy challenges in industrial internet of things.&#8221; Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE. IEEE, 2015.</para>
<para>[23] Quesada Rodriguez, Adrian; Bajic, Bojana; Crettaz, Cedric; Filipponi, Matteo; Pacheco Huamani, Ana Mar&#237;a; Perlini, Adriano, Kim, Eunah; Loup, Vincent; Ziegler, S&#233;bastien. &#8220;Dynamic Security and Privacy Seal Monitoring Service&#8221;. 2018. H2020 Anastacia EU project deliverable 5.2.</para>
<para>[24] Quesada Rodriguez, Adrian; Bajic, Bojana; Crettaz, Cedric; Menon, Mythili; Pacheco Huamani, Ana Mar&#237;a; Kim, Eunah; Loup, Vincent; Ziegler, Sebastien. &#8220;Dynamic Security and Privacy Seal Model Analysis&#8221;. 2018. H2020 Anastacia EU project deliverable 5.1.</para>
<para>[25] C. M. Ramya, M. Shanmugaraj, and R. Prabakaran, &#8220;Study on ZigBee technology,&#8221; in Proceedings of the 3rd International Conference on Electronics Computer Technology (ICECT &#8217;11), pp. 297&#8211;301, IEEE, Kanyakumari, India, April 2011.</para>
<para>[26] J. Yick, B. Mukherjee, and D. Ghosal, &#8220;Wireless sensor network survey,&#8221; Computer Networks, vol. 52, no. 12, pp. 2292&#8211;2330, 2008.</para>
<para>[27] Ziegler S. et al. (2019) Privacy and Security Threats on the Internet of Things. In: Ziegler S. (eds) Internet of Things Security and Data Protection. Internet of Things (Technology, Communications and Computing). Springer, Cham, DOI: 10.1007/978-3-030-04984-3_2</para>
</section>
</chapter>

<chapter class="chapter" id="ch03" label="3" xreflabel="3">
<title>Statistical Analysis and Economic Models for Enhancing Cyber-security in SAINT</title>
<para><b>Edgardo Montes de Oca<sup>1</sup>, John M. A. Bothos<sup>2</sup> and Stefan Schiffner<sup>3</sup></b></para>
<para><sup>1</sup>Montimage Eurl, 39 rue Bobillot, Paris, France</para>
<para><sup>2</sup>National Center for Scientific Research &#8220;Demokritos&#8221;, Patr. Gregoriou E. &amp; 27 Neapoleos Str, Athens, Greece</para>
<para><sup>3</sup>University of Luxembourg, Luxembourg</para>
<para>E-mail: edgardo.montesdeoca@montimage.com; jbothos@iit.demokritos.gr; Stefan.schiffner@uni.lu</para>
<para>SAINT analyses and identifies incentives to improve levels of collaboration between cooperative and regulatory approaches to information sharing. Analysis of the ecosystems of cyber-criminal activity, associated markets and revenues drives the development of a framework of business models appropriate for fighting cyber-crime. The role of regulatory approaches as a cost benefit in cyber-crime reduction is explored within a concept of greater collaboration, to achieve optimal attrition of cyber-criminal activities. Experimental economics aids SAINT in designing new methodologies for the development of an ongoing and searchable public database of cyber-security indicators and open source intelligence. Comparative analysis of cyber-crime victims and stakeholders, within a framework of qualitative social science methodologies, delivers valuable evidence and advances knowledge on privacy issues and deep web practices. Equally, comparative analysis of the failures of current cyber-security solutions underpins a model for greater effectiveness and improved cost-benefits. SAINT advances the metrics of cyber-crime through the construction of a framework of a new empirical science that challenges traditional approaches and fuses evidence-based practices with more established disciplines. Innovative models, algorithms and an automated framework for metrics benefit decision-makers, regulators and law enforcement at national and organisational levels, providing improved cost-benefit analysis and estimation of tangible and intangible costs for optimal risk and investment incentives.</para>


<section class="lev1" id="sec3-1">
<title>3.1 Introduction</title>
<para>The SAINT project<footnote id="fn_1" label="1"> <para>SAINT (Systemic Analyser In Network Threats) is an H2020 project. See https://cordis.europa.eu/project/rcn/210229 and https://project-saint.eu for more information.</para></footnote> examines the problem of failures in cyber-security using a multidisciplinary approach that goes beyond the purely technical viewpoint. Building upon the research and outcomes from preceding projects, it combines the insights gained to progress further analysis into economic, behavioural, societal and institutional views in pursuit of new methodologies that improve the cost-effectiveness of cyber-security.</para>
<para>SAINT analyses and identifies incentives to improve levels of collaboration between cooperative and regulatory approaches to information sharing, in order to enhance cyber-security and mitigate (a) the risk and (b) the impact of a cyber-attack, while providing, at the same time, solid economic evidence of the benefits of such improvement, based on solid statistical analysis and economic models.</para>
<para>It is widely acknowledged that, despite the sums spent annually on cyber-security, cyber-crime continues to flourish. No true or accurate picture of the situation is readily available, and yet vast amounts of money continue to be spent on efforts to reduce levels of cyber-crime that do not appear to be working. In 2016 there were already more than 3.6 billion Internet users<footnote id="fn_2" label="2"> <para>www.internetworldstats.com (30 June 2016).</para></footnote> and 7.3 billion mobile-cellular subscriptions worldwide,<footnote id="fn_3" label="3"> <para>www.itu.int</para></footnote> and both figures are rising. According to Microsoft&#8217;s report [1], &#8220;Cyberspace 2025: Today&#8217;s Decisions, Tomorrow&#8217;s Terrain&#8221;, it is estimated that by 2025 more than 91% of people in developed countries and nearly 69% of those in emerging economies will be using the Internet, bringing the total number of Internet users to an estimated 4.7 billion. In this expanding cyber-space, it is estimated that at least 7% of URLs are malicious, 85% of the 200 billion emails processed per day are spam, 1.4 million browser agents are botnets (accounting for 20% of mobile browser agents), and measurable cyber-attacks exceed one million every day. The annual cost to the global economy from cyber-crime is &#8364;300 billion, with the average annualised cost of data breaches alone being &#8364;7.9 million. The global cyber-crime market represents &#8364;15 billion, and the market for security products and services up to &#8364;50 billion [2]. Europol, in its 2015 report &#8220;Exploring Tomorrow&#8217;s Organised Crime&#8221; [3], forecasts an expansion of cyber-crime on a project basis, in which cyber-criminals lend their knowledge, experience and expertise as part of a crime-as-a-service business model. This business model is facilitated by social networking, digital infrastructures and virtual currencies that allow cyber-criminals to exchange and use financial resources anonymously on a large scale.</para>



<para>The EU FP7 project CyberROAD<footnote id="fn_4" label="4"> <para>https://www.cyberroad-project.eu</para></footnote> successfully delivered a research roadmap for cyber-crime and cyber-terrorism, based on in-depth analysis of the technological, social, legal, ethical, political and economic origins of the issues. A notable research outcome was the innovative cyber-crime cost-benefit reduction methodology proposed in the paper &#8220;2020 Cybercrime Economic Costs: No Measure No Solution&#8221; [2]. In furtherance of the insights gained in the CyberROAD project, SAINT carries out an extensive analysis of the state of the art, using a range of comparative studies to deliver a framework of data-driven guidelines based on mathematical analysis of the quantitative variables that decision makers require for accurate resource allocation. The construction of such a framework, designed with experimental economics, aligns the discipline with empirical science and substantiates the case for greater collaboration in information sharing.</para>
</section>

<section class="lev1" id="sec3-2">
<title>3.2 SAINT Objectives and Results</title>
</section>

<section class="lev2" id="sec3-2-1">
<title>3.2.1 Main SAINT Objectives</title>
<para>The SAINT project studies and improves measurement approaches and methodologies by constructing the framework of a new empirical science, challenging traditional approaches and fusing evidence-based practices with more established disciplines for a lasting legacy. Through the construction of this framework, it gives decision makers (public policy authorities, business leaders and individuals) data-driven guidelines, based on scientific analysis of relevant quantitative and qualitative variables, for their decisions about dedicating resources to deal with cyber-threat risks and cyber-criminals.</para>

<para>By employing various methodologies from different scientific fields, the main objectives of SAINT are to:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Establish a complete set of metrics for cyber-security economic analysis, cyber-security and cyber-crime market.</para></listitem>
<listitem><para>Develop new economic models for the reduction of cyber-crime as a cost-benefit operation.</para></listitem>
<listitem><para>Estimate and evaluate the associated benefits and costs of information sharing regarding cyber-attacks.</para></listitem>
<listitem><para>Define the limits of the minimum needed privacy and security level of internet applications, services and technologies.</para></listitem>
<listitem><para>Identify potential benefits and costs of investing in cyber-security industry as a provider of cyber-security services.</para></listitem>
<listitem><para>Develop a framework of automated analysis, for behavioural, social analysis, cyber-security risk and cost assessment.</para></listitem>
<listitem><para>Provide a set of recommendations to all relevant stakeholders including policy makers, regulators, law enforcement agencies, relevant market operators and insurance companies.</para></listitem>
</orderedlist>
</section>

<section class="lev2" id="sec3-2-2">
<title>3.2.2 Main SAINT Results</title>
<para>As outlined in the introduction, SAINT examines the problem of failures in cyber-security using a multidisciplinary approach that combines economic, behavioural, societal and institutional perspectives in pursuit of new methodologies that improve the cost-effectiveness of cyber-security. The following subsections summarise the main results obtained for each of the project&#8217;s objectives.</para>
</section>

<section class="lev3" id="sec3-2-2-1">
<title>3.2.2.1 Metrics for cyber-security economic analysis, cyber-security and cyber-crime market</title>
<para>SAINT investigates and establishes accurate indicators and metrics for economic analysis and the cyber-security and cyber-crime market, including the effects of regulatory analysis on the economics of cyber-security. It investigates open source intelligence methodologies and analyses the effect of those metrics in different scenarios and environments. The establishment of metrics for measuring privacy is also included in this effort.</para>
<para>With respect to the metrics and indicators (objective 1), SAINT analyses [4]: 19 open source cyber-security indicator datasets (including ENISA&#8217;s top 15); two indicators of emerging cyber-threats; Blacklists, Blocklists and Whitelists; five insecurity indicators; nine security indicators; nine economic indicators; five open source intelligence methodologies for cyber-threats. It includes relevant examples, usage, statistics, and metrics for each of the above indicators.</para>
<para>SAINT also gathers and analyses [5] evidence from stakeholders across multiple disciplines, with the objective of examining the problem of failures in cyber-security beyond a purely technical viewpoint and gaining advanced knowledge of stakeholders&#8217; economics and cyber-security practices, enabling a better understanding of their needs and requirements and providing insights on cyber-security and product value for money. As a consequence of this analysis, FICORA (the Finnish regulator) is now proactively involved and cooperating in distributing a survey for Finland, to gain supporting metrics in answer to an important question: why does Finland have one of the best quantitative track records in cyber-security within the EU &amp; G20<footnote id="fn_5" label="5"> <para>http://www.intercomms.net/issue-30/dev-3.html</para></footnote>?</para>
<para>It was additionally observed, as a result of a comparative analysis, that including the cost of time spent or lost by cyber-crime victims provides an important metric for ROI calculations. Results show:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The cost of cyber-crime is estimated to be &#8364;30 billion (0.242% of EU&#8217;s GDP).</para></listitem>
<listitem><para>The cost in time lost or spent in 2017 due to cyber-crime amounts to an estimated &#8364;60 billion.</para></listitem>
<listitem><para>Therefore, the actual total cost of cyber-crime to the EU in 2017 can be estimated to be &#8364;90 billion.</para></listitem>
</itemizedlist>
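<para>The relationship between the three figures above can be made explicit with a short back-of-envelope computation (a sketch: the EU GDP figure is implied from the quoted 0.242% share rather than taken from the SAINT reports):</para>

```python
# Back-of-envelope check of the cost estimates quoted above.
direct_cost_bn = 30.0      # EUR billion: direct cost of cyber-crime (2017)
gdp_share = 0.00242        # the direct cost equals 0.242% of EU GDP
time_cost_bn = 60.0        # EUR billion: estimated value of time lost/spent

implied_gdp_bn = direct_cost_bn / gdp_share    # EU GDP implied by the 0.242% share
total_cost_bn = direct_cost_bn + time_cost_bn  # total cost including time
total_share = total_cost_bn / implied_gdp_bn   # total cost as a share of GDP

print(f"Implied EU GDP: EUR {implied_gdp_bn:,.0f} bn")   # about EUR 12,397 bn
print(f"Total cost: EUR {total_cost_bn:.0f} bn, {total_share:.3%} of GDP")
```

<para>Including the time dimension thus roughly triples the estimated burden, from 0.242% to about 0.73% of EU GDP.</para>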
</section>

<section class="lev3" id="sec3-2-2-2">
<title>3.2.2.2 Economic models for the reduction of cyber-crime as a cost-benefit operation</title>
<para>A significant part of SAINT&#8217;s effort is dedicated to the research and development of new economic models for cyber-security and cyber-crime. A rich econometric and mathematical theoretical framework is implemented for this purpose, and the final methodologies and models are validated in a controlled environment under the supervision of the Hellenic Police Cyber-Crime Unit.</para>


<para>In relation to objective 2, research focuses on organisations&#8217; effective operational processes [6] and how they achieve efficiency in production, by investigating their incentives in choosing input combinations that minimise cost and, consequently, maximise profits. With the rapid evolution of cloud computing, organisations have an alternative to employing highly qualified Information Technology staff at high wage rates, and hence excessive labour costs: subcontracting such Information Technology services to external providers such as the newly emerged Managed Service Provider Networks. In this way, organisations avoid the large investment required to set up and develop in-house Information Technology departments from scratch, to hire staff or to train existing staff, together with the risk of economic losses should such internally structured departments fail. Research in this field concerns organisations&#8217; decisions to substitute production factors, purchased in the respective factor markets, in order to minimise production cost. It studies how organisations&#8217; policies on outsourcing certain Information Technology activities, by purchasing cloud computing services from the automated platforms of Managed Service Provider Networks, depend on the price of Information Technology labour, that is, the wage rates for specialised staff in the Information Technology sector. The empirical research performed showed that organisations&#8217; cross-price elasticity of demand for cloud computing services with respect to the wage rate for specialised Information Technology labour is significantly negative, at &#8211;21.84% (&#177;6.38%).</para>
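<para>The cross-price elasticity reported above comes from a log-log demand specification. The snippet below is a purely illustrative sketch of such an estimation on synthetic data (the actual SAINT panel data and model specification are not reproduced here; the &#8220;true&#8221; elasticity is set to &#8211;0.22 to mirror the reported estimate):</para>

```python
import numpy as np

# Log-log demand model: ln(Q_cloud) = a + e * ln(W_IT) + u,
# where the OLS slope e is the cross-price elasticity estimate.
rng = np.random.default_rng(42)
n = 200
log_wage = rng.normal(3.0, 0.5, n)           # ln of IT-sector wage rate (synthetic)
noise = rng.normal(0.0, 0.1, n)
log_demand = 5.0 - 0.22 * log_wage + noise   # ln of demand for cloud services

# OLS fit on the log-log specification; the slope is the elasticity.
elasticity, intercept = np.polyfit(log_wage, log_demand, 1)
print(f"estimated cross-price elasticity: {elasticity:.3f}")
```

<para>A slope near &#8211;0.22 mirrors the reported &#8211;21.84% (&#177;6.38%) estimate: a 1% rise in IT wage rates is associated with roughly a 0.22% fall in demand for cloud services.</para>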
<para>SAINT identifies current cyber-security failures and requirements to improve the situation at all levels of cyber-security defences and across a variety of sectors [7]. It determines what constitutes a cyber-security failure, or what inadvertently increases the risk of a cyber-attack, using quantitative and qualitative analysis, to identify what new practices are required to improve cyber-security, reduce wasteful information technology spending and improve return on investment.</para>
<para>SAINT also investigates how cyber-attacks materialise, focusing on what lies behind and contributes to their materialisation [8]. This essentially represents the emergence of a whole new economy, consisting of a fast-growing body of vulnerability markets in which stakeholders sell and buy vulnerabilities to make financial gains or avoid financial losses associated with immaterial assets, namely the vulnerabilities and their exploits. The goal is to identify and categorise the vulnerability and exploit markets, along with the involved stakeholders and their roles, in order to provide guidelines for cost-effective cyber-security methodologies that can be applied as counter-measures against malicious hackers. Vulnerability announcements can inflict severe monetary and other intangible costs on a company&#8217;s value.</para>
</section>

<section class="lev3" id="sec3-2-2-3">
<title>3.2.2.3 Benefits and costs of information sharing regarding cyber-attacks</title>
<para>SAINT provides guidelines for information sharing between all agents, to mitigate inefficiencies in the cyber-security investment landscape and in the economy in general. These guidelines are based on the joint evaluation of measurable quantitative economic and technical variables regarding the influence of cyber-security information sharing on the cost structure, the rate of investment, the effective allocation of resources and the overall profitability of each agent.</para>
<para>SAINT estimates and evaluates the associated benefits and costs of information sharing regarding cyber-attacks (objective 3) [6, 9]. For this, international cooperation activities have been studied [9], such as the ITU Global Cyber-security Agenda (GCA).</para>
<para>The GCA is a framework for international cooperation launched in 2007. It is designed for cooperation and efficiency, encouraging collaboration with and between all relevant partners and building on existing initiatives to avoid duplicating efforts. Within the GCA, ITU and the International Multilateral Partnership Against Cyber Threats (IMPACT) promote the deployment of solutions and services to address cyber-threats on a global scale, forming a global multi-stakeholder and public-private alliance against cyber-threats. The EU addresses cyber-security through policy tools that affect the structures and capabilities of organisations, while in parallel taking action by providing incentives to support and promote the development of co-operation in the area of cyber-security, for detecting cyber-incidents and responding to cyber-attacks effectively and appropriately.</para>
<para>The Directive on the Security of Network and Information Systems, the &#8220;NIS Directive&#8221;, often cited as the first EU-wide cyber-security law, is designed, among other things, to foster better co-operation in reporting serious incidents and to promote the adoption of effective risk management practices.</para>
<para>Regarding the promotion of cooperation in cyber-security domain, ENISA also serves as a focal point for information sharing and spread of knowledge in the cyber-security community, through the setting up of Information Sharing and Analysis Centres. Their role is particularly important in creating the necessary trust for sharing information between all the different agents.</para>
<para>The subject of co-operation between organisations, and how it influences their effective performance and the allocation of their resources in terms of decreasing production cost and profitably exploiting production inputs, has also been studied [6]. In this context, co-operation between organisations is defined as information sharing between them. The study demonstrates empirically the importance of co-operation through information sharing in minimising production cost and achieving economic efficiency in the allocation of resources. The associated benefits of information sharing between organisations have been evaluated: in the long run, using information sharing processes to improve the production process has an almost &#8211;13% (&#177;3.58%) decreasing effect on the real (deflated) long-run average production cost, for a sample of Eurozone countries over the period 2009&#8211;2012.</para>
</section>

<section class="lev3" id="sec3-2-2-4">
<title>3.2.2.4 Privacy and security level of internet applications, services and technologies</title>
<para>SAINT analyses the dependence of the detection of cyber-security incidents on behavioural features of network traffic flow, in order to adequately interpret the careless behaviour of internet users regarding the proper application of cyber-security norms and rules. For this, SAINT implemented a correlation analysis of quantitative technical variables and measurable qualitative behavioural variables, concerning network traffic flow characteristics and cyber-security behaviour characteristics.</para>
<para>Regarding the limits of the minimum needed privacy and security level of internet applications, services and technologies (objective 4), SAINT devised models and mechanisms for measuring privacy and for user privacy protection [10]. Several formal frameworks of privacy notions, with differing assumptions, are proposed that study the relations between Anonymous Communication Networks and the privacy they provide.</para>
<para>Based on this work, SAINT proposes in [18] how these different frameworks can be unified, by constructing a generalised indistinguishability game similar to the games used to define semantic security in cryptographic protocols [23].</para>
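<para>The game-based framing mentioned above can be illustrated with a toy indistinguishability experiment (a purely didactic sketch, not the generalised game of [18]): a challenger encrypts one of two fixed messages under a fresh one-time-pad key, and any adversary&#8217;s advantage over random guessing remains negligible.</para>

```python
import random

def indistinguishability_game(adversary, trials=10_000, seed=0):
    """Toy IND-style experiment over single bytes with a one-time pad.
    Returns the adversary's advantage |Pr[correct guess] - 1/2|."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        m0, m1 = 0x00, 0xFF          # the two candidate messages
        b = rng.randrange(2)         # challenger's secret bit
        key = rng.randrange(256)     # fresh one-time-pad key per trial
        ciphertext = (m1 if b else m0) ^ key
        if adversary(ciphertext) == b:
            wins += 1
    return abs(wins / trials - 0.5)

# The one-time pad is perfectly indistinguishable, so even an adversary
# that inspects ciphertext bits does no better than a coin flip.
advantage = indistinguishability_game(lambda c: c & 1)
print(f"advantage: {advantage:.4f}")
```

<para>Definitions in this style declare a scheme secure when no efficient adversary achieves a non-negligible advantage in such a game; the unification in [18] applies the same pattern to anonymity notions.</para>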
<para>It also studies effective defences against website fingerprinting, such as continuous data flow, packet padding and traffic morphing, adaptive padding between data packets with generic web traffic, and clustering of webpages into similarity groups. Beyond this, SAINT investigates:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Approaches for protecting publicly available databases, such as secure computation of elementary database queries, locally random reductions of sets to databases, zero-knowledge interactive (and non-interactive) proofs, and oblivious data transfer in private information retrieval.</para></listitem>
<listitem><para>Privacy-preserving credentials and authentication mechanisms, such as password-based authentication, cryptographic certificates, attribute-based credentials, electronic certificates and electronic identities.</para></listitem>
<listitem><para>Database content anonymisation concepts and techniques, such as k-anonymity, l-diversity, t-closeness, Bloom filters and differential privacy.</para></listitem>
</itemizedlist>
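<para>As a concrete illustration of the anonymisation techniques just listed, the sketch below checks k-anonymity on a toy table (a minimal sketch with hypothetical column names; real anonymisation additionally involves generalising or suppressing values to reach the desired k):</para>

```python
from collections import Counter

def satisfies_k_anonymity(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs in at
    least k rows, i.e. each record hides in a group of k or more."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

# Hypothetical released table: age band and ZIP prefix are the
# quasi-identifiers; the diagnosis is the sensitive attribute.
table = [
    {"age": "30-39", "zip": "150**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "150**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "151**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "151**", "diagnosis": "asthma"},
]

print(satisfies_k_anonymity(table, ["age", "zip"], k=2))  # True: groups of 2
print(satisfies_k_anonymity(table, ["age", "zip"], k=3))  # False: groups too small
```

<para>l-diversity and t-closeness strengthen this notion by additionally constraining the distribution of the sensitive attribute within each quasi-identifier group.</para>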
</section>

<section class="lev3" id="sec3-2-2-5">
<title>3.2.2.5 Benefits and costs of investing in cyber-security</title>
<para>SAINT provides guidelines and frameworks for maximising efficiency in cyber-security services. Part of the effort is dedicated to developing alternative ways and methods of obtaining valuable information in the measurable quantitative form of metrics, and then analysing it to derive guidelines for competitiveness and profitability in the cyber-security industry. SAINT also determines the value of the underground and cyber-crime market within a wider investigation of information security markets.</para>
<para>In relation to objective 5, SAINT proposes new models and paradigms in cyber-security, with a special focus on the incentives of the different stakeholders in the ecosystem of cyber-criminality. It was first necessary to identify the existing business models that cyber-criminals use, and to describe the different national strategies that European countries have put in place to fight cyber-crime. From this, new models are proposed that provide innovative ways to help reduce cyber-crime by targeting the right incentives of both cyber-criminals and cyber-security practitioners. These models are compared with each other and their practical relevance is evaluated [11]. Some of the results obtained concern: the analysis of existing cyber-criminal business models; the analysis of national, European and international cyber-security policies and strategies; and the drafting of eight innovative models to fight cyber-crime, including the certification and labelling services model, the insurance model, the wage model, the collaborative model, the education model, the crowdsourcing model, the bug-bounty model and the artificial intelligence model.</para>
<para>Also in relation to objective 5, SAINT demonstrates [8] that behind the materialisation of cyber-attacks there is a new and fast-growing body of vulnerability markets, with stakeholders selling and buying vulnerabilities for financial gain or to avoid financial loss. This implies that a whole new economy is rapidly evolving, based on immaterial assets: the vulnerabilities and their exploits. Over recent years, ransomware attackers have demanded payment in cryptocurrencies, with Bitcoin<footnote id="fn_6" label="6"> <para>https://www.bitcoin.com/</para></footnote> among the most popular. Bitcoin offers anonymity in terms of the involved parties and the amount of the transaction, and its use for illicit purposes has become popular.</para>
<para>&#8220;Execute Code&#8221;-related vulnerabilities are prevalent among all others, which implies that software vendors (mainly OS developers) fail to take appropriate measures during the design and implementation stages. Most discovered vulnerabilities (over 50%) are severe, with a severity score of at least six. This, in turn, may imply severe financial or other intangible (e.g. trust, reputation) costs for affected companies. No software product or system is immune to vulnerabilities, which demonstrates that vulnerability discoverers can virtually target any vendor, operating system or software product, as long as it is a challenging or profitable target (or both).</para>
<para>Vulnerability announcements can inflict severe monetary and other intangible costs (e.g. loss of trust and tarnished reputation) on the affected company, measured in terms of system downtime, operation disruption, loss of credibility and customers, higher assurance costs, etc.</para>
<para>Vulnerability announcements can lead to a negative and significant change in a software vendor&#8217;s market value. According to the quantitative analysis conducted, an affected vendor can lose up to 60% of its stock value when a related vulnerability is disclosed. Studies have also shown that a software vendor loses more market share if the market is competitive or if the vendor is small. Moreover, as might be expected, the change in stock value is more negative if the vendor fails to provide the right patch at the time of disclosure of the vulnerability. In addition, according to the findings, key vulnerabilities have a significantly greater impact on a company&#8217;s value.</para>
<para>Useful insights into the types of attacks per business sector have also been obtained [12]. Small businesses (with fewer than 250 employees) are those most targeted by cyber-attacks, making up as much as 43% of all cyber-attacks on companies (in 2015). Large enterprises (with over 2,500 employees) accounted for 35% of all cyber-attacks, while medium-sized businesses (with between 251 and 2,500 employees) made up the remaining 22%. It is interesting to note that these results are diametrically opposed to those from 2011, when large businesses accounted for the majority (50%) of all cyber-attacks on companies, medium-sized businesses represented 32%, and small businesses accounted for 18%. Between 2011 and 2015, small businesses were increasingly targeted by cyber-attacks. This trend can be explained by the fact that, unlike big businesses that have the capacity to invest in proper expertise and technologies, smaller businesses may not always have the financial resources and staff to protect themselves from such threats. Consequently, cyber-attackers take advantage of smaller companies&#8217; digital vulnerability to steal confidential data and intellectual property, bring down websites, or organise phishing and spamming campaigns. Regarding the types of cyber-attacks on businesses, the following specificities are observed:</para>

<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Spam: the size of a company has limited influence over its spam rate. Indeed, in 2016, the spam-rate varied between 52.6% and 54.2%, which shows that all kinds of companies are likely to be targeted, regardless of their size. Furthermore, all industry sectors receive similar quantities of spam.</para></listitem>
<listitem><para>Phishing: although the overall phishing rates have declined over the past three years, companies are still targeted by these attacks. Medium-sized businesses experience the highest phishing rates. In 2016, the sector of agriculture, forestry, and fishing was the most affected by phishing, with one in 1,815 emails being classed as a phishing attempt.</para></listitem>
<listitem><para>Data breaches: In 2016, the industry of services (particularly business services and health services) was the most affected by data breaches, representing 44.2% of all breaches. The sector of finance, insurance, and real estate was ranked second with 22.1%.</para></listitem>
</itemizedlist>
<para>The private sector, particularly the cyber-security industry, plays an important role in combatting cyber-crime by providing individual users, businesses, and organisations with services and solutions to cyber-threats. In 2003, the global cyber-security market represented $2.5 billion; it currently amounts to $106 billion and is projected to be worth $639 billion by 2023. These numbers underline the growing demand for cyber-security solutions and highlight the business opportunities in the sector.</para>
<para>In 2016, the commercial cyber-security vendors&#8217; market was dominated by the United States with a total of 827 vendors leading cyber-security research and products. Israel and the United Kingdom hold second and third place in the ranking with 228 and 76 vendors respectively.</para>
<para>While the cyber-security industry has potential for growth, in both the private and public sectors, it is still struggling to keep up with cyber-crime for three reasons:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The variety of IoT devices: the increase in connected IoT devices increases the number of potential targets. Projections suggest that, by 2020, there will be tens of billions of connected digital devices in the EU alone.</para></listitem>
<listitem><para>The multiplicity of data: an increase in connected IoT devices directly correlates with an increase in data that needs to be protected.</para></listitem>
<listitem><para>The shortage of skilled workers in the cyber-security sector: despite the great employment opportunities and high number of open positions for IT specialists and cyber-security professionals, the industry struggles to train new professionals quickly enough to keep up with growing demand. The solution to this problem may come from artificial intelligence and machine learning, which are currently being developed.</para></listitem>
</itemizedlist>
<para>SAINT also performed a cost-benefit analysis of cyber-security solutions and products (objective 5). This is built on a cash-flow analysis of cyber-security solutions, products and models. It relies on the established market analysis [12], on the revenue analysis of cyber-security services [13], and on the most relevant models identified. It also uses input from the surveys conducted [14] and estimates the price of digital assets and the costs of intangible risks. In addition to the cash-flow analysis, a sensitivity and risk analysis was implemented [15]. The resulting recommendations serve as guidelines for various stakeholders, including cyber-security business providers; they build on the implemented cost-benefit analysis, as well as on the econometric analysis of cyber-security solutions, the market analysis, and the assessment of the innovative cyber-security models analysed [16].</para>
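<para>The cash-flow basis of such a cost-benefit analysis can be illustrated with a minimal net-present-value sketch; all figures and the discount rate below are hypothetical and do not come from the SAINT analysis.</para>

```python
# Hypothetical NPV-based cost-benefit sketch for a cyber-security investment.
# All monetary figures and the discount rate are illustrative, not SAINT data.

def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: purchase and deployment cost; years 1-4: avoided breach losses
# minus operating costs (all in EUR).
investment = -50_000
yearly_benefit = [18_000, 18_000, 18_000, 18_000]
flows = [investment] + yearly_benefit

value = npv(flows, discount_rate=0.08)
print(f"NPV at 8%: {value:,.0f} EUR")  # positive -> investment pays off
```

<para>A positive NPV indicates that the discounted avoided losses outweigh the up-front investment; a sensitivity analysis would repeat the calculation over ranges of the assumed parameters.</para>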
</section>

<section class="lev3" id="sec3-2-2-6">
<title>3.2.2.6 Framework of automated analysis for behavioural and social analysis, cyber-security risk and cost assessment</title>
<para>In the framework of automated analysis (objective 6), SAINT defines the different tools that constitute the Framework (<link linkend="F3-1">Figure <xref linkend="F3-1" remap="3.1"/></link>, [17]). This includes the cyber-security cost-benefit analysis tools and algorithms. Based on available metrics, indicators and parameters, the techniques allow the construction of models and the estimation of the price of digital assets and the costs of intangible risks (e.g. reputation, non-critical service disruption). A toolset for automated analysis, based on automatic information gathering and analysis tools that extract information from a variety of sources on the Internet and the Deep Web, has been designed and prototypes implemented. The tools include the Social Network Analyser and the Deep Web Crawler. The information sources include cyber-security related discussion forums, bug bounties, social network discussions, and public vulnerability and data breach incident databases.</para>
<fig id="F3-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-1">Figure <xref linkend="F3-1" remap="3.1"/></link></label>
<caption><para>High-level architecture of the SAINT framework.</para></caption>
<graphic xlink:href="graphics/ch003_fig001.jpg"/>
</fig>

<para>The developed Twitter Social Network Analyser (SNA) uses the social network Twitter to extract trends in cyber-crime activity. To this end, a dictionary of #hashtags of interest is created. The SAINT SNA mines only publicly available tweets and accounts matching the specific hashtags and extracts the related information.</para>
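<para>The hashtag-dictionary approach can be sketched as follows; the dictionary contents and sample tweets are purely illustrative, not the actual SAINT configuration.</para>

```python
# Minimal sketch of hashtag-based tweet filtering, as performed by a social
# network analyser. The hashtag dictionary and sample tweets are hypothetical.

CYBERCRIME_HASHTAGS = {"#ransomware", "#phishing", "#ddos", "#databreach"}

def extract_hashtags(text):
    """Return the set of lower-cased hashtags found in a tweet."""
    return {word.lower().strip(".,!?")
            for word in text.split() if word.startswith("#")}

def matches_dictionary(tweet_text, dictionary=CYBERCRIME_HASHTAGS):
    """True if the tweet carries at least one hashtag of interest."""
    return bool(extract_hashtags(tweet_text) & dictionary)

tweets = [
    "New #ransomware strain spotted in the wild!",
    "Lovely weather today",
    "Warning: #phishing campaign targeting SMEs",
]
relevant = [t for t in tweets if matches_dictionary(t)]
print(len(relevant))  # 2
```

<para>In practice the matching tweets would then be passed on for further information extraction and trend aggregation.</para>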
<para>The Google Trends SNA utilizes the popular Google Trends platform to extract trends that are related to cyber-crime activity. Google Trends is a public web facility of Google Inc. It is based on Google Search and shows how often a particular search term is entered with respect to the total searches in different regions of the world and in various languages.</para>
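<para>The underlying relative-frequency idea, normalising a term&#8217;s search counts by total searches and rescaling so the busiest period scores 100, can be sketched as follows; the function and the weekly counts are illustrative assumptions, not Google&#8217;s actual computation.</para>

```python
# Sketch of the relative-frequency idea behind Google Trends style scores:
# per-period counts of a term are normalised by total searches and rescaled
# to a 0-100 range relative to the peak period. All counts are hypothetical.

def trend_scores(term_counts, total_counts):
    """Scale per-period term shares so the busiest period is 100."""
    shares = [t / total for t, total in zip(term_counts, total_counts)]
    peak = max(shares)
    return [round(100 * s / peak) for s in shares]

# Hypothetical weekly counts of searches for "ransomware"
term = [120, 300, 150]
total = [1_000_000, 1_000_000, 1_500_000]
print(trend_scores(term, total))  # [40, 100, 33]
```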
<para>Crawling and scraping the Web and the Deep Web can be categorized into two large types, each involving a number of considerations and design decisions that depend on the target web sites being searched (Web and Deep/Dark Web): the first type is web scraping of a website and the second is crawling. The Tor network was found to be the ideal vehicle for investigating cyber-criminal activity, as it allows Deep Web sites to be browsed anonymously without being hacked or traced. For the implementation of our scripts, we run Tor in the background to avoid being detected by users of the Deep and the Dark Web.</para>
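<para>A common way to tunnel a crawler&#8217;s traffic through a locally running Tor client is its SOCKS proxy, which by default listens on port 9050. The sketch below builds the proxy configuration; the use of the Python requests library with SOCKS support (installed via requests[socks]) and the placeholder .onion address are assumptions, not details taken from the SAINT implementation.</para>

```python
# Sketch: routing a crawler's HTTP traffic through a locally running Tor
# client (SOCKS proxy on 127.0.0.1:9050). The commented usage assumes the
# third-party `requests` library with SOCKS support (pip install requests[socks]).

def tor_proxy_config(host="127.0.0.1", port=9050):
    """Proxy mapping for a local Tor SOCKS proxy.

    'socks5h' (rather than 'socks5') makes DNS resolution happen inside
    Tor as well, so hostnames of hidden services never leak locally.
    """
    url = f"socks5h://{host}:{port}"
    return {"http": url, "https": url}

# With a Tor daemon running in the background:
#   import requests
#   session = requests.Session()
#   session.proxies.update(tor_proxy_config())
#   page = session.get("http://exampleonionaddress.onion/")  # placeholder
```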
<para>SAINT&#8217;s Global Security Map (GSM)<sup>7</sup> gathers data on selected ENISA indicators using a variety of suitable open source feeds and presents the results visually on a global map. It is an interactive tool which enables visualization of the geographic distribution of the sources of cyber-crime and quantitative comparative metrics, with the aim of providing a simple and accurate method of displaying the global hotspots for the location and quantification of the top cyber-threat indicators: malware, phishing, spam, cyber-attacks, and other malicious activities. The unique combination of detailed data and simplified visualizations makes the tool ideal for research and comparative analysis purposes by governments, law enforcement, CERTs, academia, Infosec, financial institutions and the public sector (also related to objective 7).</para>
<para>One more tool developed within the scope of the SAINT project is a tool for measuring privacy in encrypted networks [18]. Recent research [19, 20] showed that users&#8217; privacy can be endangered even when using anonymization networks such as Tor [21] or JAP [22]. By means of an attack known as <i>website fingerprinting</i>, it is possible to identify which website a user is visiting and, thereby, to identify both the communicating parties and the content of the communication. However, different websites have different degrees of fingerprintability. SAINT therefore developed a tool which allows any user to estimate their level of vulnerability to the website fingerprinting attack when visiting a website. The user can then decide whether visiting this website is worth the possible risks.</para>
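<para>The notion that different websites have different degrees of fingerprintability can be illustrated with a toy distance-based score: a site whose traffic trace lies far from every other site&#8217;s trace is easier to identify. The feature vectors and the nearest-neighbour metric below are illustrative assumptions only and do not reflect the SAINT tool&#8217;s actual method.</para>

```python
# Toy illustration of website fingerprintability: a site whose traffic
# trace is far from every other site's trace is easier to identify.
# The traces (packet-size feature vectors) and metric are purely illustrative.
import math

def distance(a, b):
    """Euclidean distance between two equal-length traffic feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fingerprintability(target, others):
    """Distance from the target trace to its nearest neighbour.

    A larger value means the site stands out more among the others,
    i.e. it is more vulnerable to website fingerprinting.
    """
    return min(distance(target, o) for o in others)

site_a = [10, 2, 7]          # hypothetical feature vector of the visited site
other_sites = [[9, 2, 7], [50, 1, 3], [11, 3, 6]]
score = fingerprintability(site_a, other_sites)
print(f"fingerprintability score: {score:.2f}")
```

<para>A real tool would of course derive richer features from captured traffic and calibrate the score against classifier accuracy, but the nearest-neighbour intuition is the same.</para>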
</section>

<section class="lev3" id="sec3-2-2-7">
<title>3.2.2.7 Recommendations to stakeholders</title>
<para>The reference model (<link linkend="F3-2">Figure <xref linkend="F3-2" remap="3.2"/></link>, [12]) illustrates the interactions between the different stakeholders involved in the cyber-crime and cyber-security ecosystem.</para>
<para>Related to objective 7, SAINT provides a set of recommendations to all relevant stakeholders (policy-makers, regulators, law enforcement agencies, relevant market operators and insurance companies) [16]. This builds on the input of various sources from different partners, including the stakeholder surveys that were conducted. An initial set of recommendations has been defined that includes:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Adopting in-depth comparative analysis for the application of successful practices of individual countries, e.g. Finland (see <link linkend="F3-3">Figure <xref linkend="F3-3" remap="3.3"/></link>).</para></listitem>
</itemizedlist>
<para><sup>7</sup>https://3hz6pq.staging.cyberdefcon.com/</para>
<fig id="F3-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-2">Figure <xref linkend="F3-2" remap="3.2"/></link></label>
<caption><para>Stakeholder reference model.</para></caption>
<graphic xlink:href="graphics/ch003_fig002.jpg"/>
</fig>

<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Improving the cost of cyber-crime metrics and econometrics for enhanced ROI calculations by the inclusion of the time spent or lost by cyber-crime victims.</para></listitem>
<listitem><para>Improving the transparency of cyber-security matters within the workplace.</para></listitem>
<listitem><para>Educating the workforce on the costs and risks that cyber-practices pose to the workplace.</para></listitem>
<listitem><para>Furthering cyber-security training &amp; education within the EU to alleviate the acknowledged lack of trained staff.</para></listitem>
<listitem><para>Improving the complementarity among standards and best practices in cyber-security within the EU.</para></listitem>
<listitem><para>Standardising the metrics to enable accurate comparative analysis between surveys/reports.</para></listitem>
</itemizedlist>
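<para>The recommendation to fold the time spent or lost by cyber-crime victims into cost-of-cyber-crime metrics can be sketched as a simple ROI adjustment; all monetary figures and hours below are hypothetical.</para>

```python
# Sketch: including victims' lost time when computing the ROI of a
# security investment. All monetary figures and hours are hypothetical.

def roi(benefit, cost):
    """Return on investment as a fraction (e.g. 0.25 = 25%)."""
    return (benefit - cost) / cost

direct_losses_avoided = 40_000   # EUR, e.g. fraud and recovery costs
hours_saved_per_victim = 8       # time victims no longer spend recovering
victims_avoided = 300
hourly_value = 25                # EUR, assumed value of a victim-hour

time_benefit = hours_saved_per_victim * victims_avoided * hourly_value
investment = 60_000

print(f"ROI (direct only): {roi(direct_losses_avoided, investment):.2f}")
print(f"ROI (incl. time):  {roi(direct_losses_avoided + time_benefit, investment):.2f}")
```

<para>In this toy example the investment looks unprofitable on direct losses alone but becomes profitable once the monetised value of victims&#8217; time is included, which is precisely why the metric matters.</para>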
<para>In Finland, FICORA has the role of a CERT that is a regulator but also acts to prevent and remediate cyber-security issues. The problem in other countries is that the regulators are only telecom regulators, whereas FICORA is both a telecom and a cyber-security regulator. Telecom operators are not really concerned about the security of customers; they just want to make sure that their services work, that the pricing brings profits, and that the competition is regulated to their advantage. Most CERTs in Europe have a limited role that consists of reporting threats, building cyber-threat intelligence frameworks, and stimulating or developing cyber-threat solutions. Where the safety and security of citizens is concerned, we need entities that act and are proactive, as is the case in the health and food sectors. FICORA performs best both qualitatively and quantitatively. It bases its cyber-security activity on technologically efficient techniques, such as darknets or reverse network telescopes, but it also obtains results through sound organisation, clear objectives, close collaboration with all the stakeholders, and the budget to do it.</para>
<fig id="F3-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F3-3">Figure <xref linkend="F3-3" remap="3.3"/></link></label>
<caption><para>Global security map of Finland.</para></caption>
<graphic xlink:href="graphics/ch003_fig003.jpg"/>
</fig>
<para>Another aspect that should be emphasised is the legal one. The U.S. Anti-Bot Code of Conduct (ABC) for Internet Services Providers (ISPs) resulted in an almost immediate reduction of botnets in the US. Operators started taking down botnets and collaborating to do so. What can be derived from this experience is that when a law is passed that identifies responsibilities and penalties, companies and individuals are incentivised to act. Telecom operators will start taking down botnets and fighting cyber-criminality only when it becomes financially interesting for them. Unfortunately, there is as yet no law in Europe that is equivalent to the ABC. The examples of collaborative actions since 2014 [9] show progress, but the need remains for a more systematic, better and more globally organised approach to fighting cyber-crime. This can only be achieved with effective laws, regulations, incentivisation and cooperation at the national and international levels.</para>
</section>

<section class="lev1" id="sec3-3">
<title>3.3 Conclusion</title>
<para>The SAINT project has advanced the understanding of the stakes involved in the cyber-security domain. It has analysed the risks and costs of security threats by compiling a complete set of metrics for the analysis of cyber-security economics, cyber-security risks, and the cyber-crime market. New economic models and algorithms have been developed to find optimised cost-benefit solutions for reducing cyber-crime.</para>
<para>The deep analysis of the benefits obtained from cyber-attack information sharing (in particular, cooperative and regulatory approaches), the positive impact of investments in cyber-security by industry, and the risks and costs of security breaches has resulted in a set of recommendations valuable for all relevant stakeholders (e.g. policy makers, regulators, law enforcement agencies, industry). The studies and surveys conducted have also allowed a better understanding of the limitations and needs involved in finding the equilibrium between privacy and security of internet-based applications, services and technologies.</para>
<para>SAINT has also developed a framework that facilitates the automated analysis for behavioural, social, cyber-security risk and cost assessment. Research gaps have been addressed that can help policy makers make more informed decisions on where economic investments should be directed to return the best possible outcomes. The different tools that constitute the SAINT Framework aim at improving the automation of certain analysis tasks and at presenting the results in an integrated way, at least partially. The resulting system serves as a proof of concept that demonstrates the usefulness of integrating data from different sources and tools. In the future, the Framework will be extended to include tighter integration so that researchers can process different types of security intelligence information and obtain results in a methodical way.</para>
<para>The main challenge identified by SAINT is to find the best approaches to:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Coordinate cyber-security related issues and actions (i.e., legislative, regulatory, law-enforcement and cooperative ones) between different organisations and countries;</para></listitem>
<listitem><para>Measure the effectiveness of the actions;</para></listitem>
<listitem><para>Achieve long-term impact to improve the security of ICT users;</para></listitem>
<listitem><para>Implement and enforce laws and regulations in a virtualised and often conflicting international context;</para></listitem>
<listitem><para>Make security an integral part of ICT design;</para></listitem>
<listitem><para>Reverse the tendency that makes economic incentives better for criminals than for those who need to protect their systems;</para></listitem>
<listitem><para>Achieve consensus between stakeholders and countries;</para></listitem>
<listitem><para>Improve education related to cyber-security;</para></listitem>
<listitem><para>Find a good balance between security and privacy.</para></listitem>
</itemizedlist>
<para>Having analysed different regulations and practices our conclusion is that we need to attack the cyber-threat problem from all fronts at the same time, in other words we need to:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Improve the laws and regulations and make them more comprehensive;</para></listitem>
<listitem><para>Coordinate better the regulatory processes and incentivize cooperation;</para></listitem>
<listitem><para>Make cyber-security and privacy protection an obligation of service providers (including operators) to their customers;</para></listitem>
<listitem><para>Greatly improve the awareness of the individuals to the risks;</para></listitem>
<listitem><para>Change the economics to reduce the benefits of cyber-criminal activities and improve the perceived benefits of cyber-security measures. This includes reforming the international finance system to eliminate, or at least greatly reduce, the money laundering possibilities (e.g., tax havens, bitcoins).</para></listitem>
</itemizedlist>
<para>Many of these challenges are addressed in the case of Finland, except perhaps those related to privacy concerns and to economic and financial aspects. Collaborative actions to fight cyber-crime need to be taken in a more systematic, global and organised way. This can only be achieved with effective laws, regulations, incentivisation and cooperation at the national and international levels. Currently, cyber-criminals are better incentivised, and even cooperate better, than the organisations that fight them. This situation needs to be reversed, and making profits from cyber-crime should be made much more complicated.</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This work is performed within the SAINT Project (Systemic Analyser in Network Threats) with the support from the H2020 Programme of the European Commission, under Grant Agreement No. 740829. It has been carried out by the partners involved in the project:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>National Center for Scientific Research &#8220;Demokritos&#8221;, Integrated Systems Laboratory, Greece</para></listitem>
<listitem><para>Computer Technology Institute and Press DIOFANTUS, Greece</para></listitem>
<listitem><para>University of Luxembourg, Luxembourg</para></listitem>
<listitem><para>Center for Security Studies &#8211; KEMEA, Greece</para></listitem>
<listitem><para>Mandat International, Switzerland</para></listitem>
<listitem><para>Archimede Solutions SARL, Switzerland</para></listitem>
<listitem><para>Stichting CyberDefcon Netherlands Foundation, Netherlands</para></listitem>
<listitem><para>Montimage EURL, France</para></listitem>
<listitem><para>Incites Consulting SARL, Luxembourg</para></listitem>
</itemizedlist>
<para>We would like to thank the different contributors from these organizations:</para>
<para>Eirini Papadopoulou, Konstantinos Georgios Thanos, Andreas Zalonis, Constantinos Rizogiannis, Antonis Danelakis, Ioannis Neokosmidis, Theodoros Rokkas, Dimitrios Xydias, Jart Armin, Bryn Thompson, Jane Ginn, Pantelis Tzamalis, Vasileios Vlachos, Yannis Stamatiou, Marharyta Aleksandrova, Latif Ladid, Dimitris Kavallieros, George Kokkinis, Cesar Andres, Christopher Hemmens, Anna Brekine, Sebastien Ziegler, Olivia Doell, Gabriela Znamenackova, Gabriela Hrasko.</para>
<para>We would also like to thank FICORA (Finland) for their help.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Burt, Kleiner, Nicholas, Sullivan, &#8220;Cyberspace 2025 Today&#8217;s Decisions, Tomorrow&#8217;s Terrain, Navigating the Future of Cyber-security Policy&#8221;, Microsoft Corporation, June 2014.</para>
<para>[2] Armin, Thompson, Kijewski, Ariu, Giacinto, Roli, &#8220;2020 Cybercrime Economic Costs: No measure No solution&#8221;, 10th International Conference on Availability, Reliability and Security, Toulouse, August 2015.</para>
<para>[3] European Police Office, &#8220;Exploring Tomorrow&#8217;s Organised Crime&#8221;, 2015 available at: https://www.europol.europa.eu</para>
<para>[4] Jart Armin, Bryn Thompson et al., &#8220;Final report on Cyber-security Indicators &amp; Open Source Intelligence Methodologies&#8221;, SAINT D2.1 Deliverable. Not yet available.</para>
<para>[5] Jart Armin, Bryn Thompson et al., &#8220;Final Report on the Comparative Analysis of Cyber-Crime Victims&#8221;, SAINT D2.3 Deliverable. Not yet available.</para>
<para>[6] John M.A. Bothos et al., &#8220;Cyber-security Empirical Stochastic Econometric Modelling of Information Sharing and Behavioural Attitude&#8221;, SAINT D3.1 Deliverable available at: https://project-saint.eu/deliverables</para>
<para>[7] Bryn Thompson, Jart Armin et al., &#8220;Final Analysis on Cyber-security Failures and Requirements&#8221;, SAINT D3.3 Confidential Deliverable.</para>
<para>[8] Yannis Stamatiou et al., &#8220;Analysis of Legal and Illegal Vulnerability Markets and Specification of the Data Acquisition Mechanisms&#8221;, SAINT D3.5 Deliverable available at: https://project-saint.eu/deliverables</para>
<para>[9] Edgardo Montes de Oca, Cesar Andres et al., &#8220;Comparative Analysis of Incentivised Cooperative and Regulatory Processes in Cybersecurity&#8221;, SAINT D2.5 Deliverable available at: https://project-saint.eu/deliverables</para>
<para>[10] Stefan Schiffner, Marharyta Aleksandrova et al., &#8220;Metrics for Measuring and Assessing Privacy of Network Communication&#8221;, SAINT D2.6 Deliverable available at: https://project-saint.eu/deliverables</para>
<para>[11] Olivia Doll, Gabriela Hrasko et al., &#8220;Business Modelling Report&#8221;, SAINT D4.3 Deliverable. Not yet available.</para>
<para>[12] Christopher Hemmens, Anna Br&#233;kine et al., &#8220;Stakeholder and Ecosystem Market Analysis&#8221;, SAINT D4.1 Deliverable available at: https://project-saint.eu/deliverables</para>
<para>[13] John M.A. Bothos, Eirini Papadopoulou, Konstantinos Georgios Thanos, &#8220;Cyber-security and Cyber-crime Market &amp; Revenue Analysis&#8221;, SAINT D4.2 Deliverable. Not yet available.</para>
<para>[14] Bryn Thompson et al., &#8220;Stakeholder and Consumer Requirements Survey Report&#8221;, SAINT D6.2 Deliverable available at: https://project-saint.eu/sites/deliverables</para>
<para>[15] Theodoros Rokkas, Ioannis Neokosmidis, Dimitris Xydias et al., &#8220;Report on Cost-Benefit Analysis of Cyber-security Solutions, Products and Models&#8221;, SAINT D4.4 Deliverable. Not yet available.</para>
<para>[16] Archimede Solutions et al., &#8220;Recommendations on Investment, Risk Management and Cyber-Security Insurance&#8221;, SAINT D4.5 Deliverable. Not yet available.</para>
<para>[17] Edgardo Montes de Oca, Cesar Andres et al., &#8220;Requirements Specification &amp; Architectural Design of The SAINT Tool Framework&#8221;, SAINT D5.1 Deliverable available at: https://project-saint.eu/deliverables</para>
<para>[18] Stefan Schiffner, Marharyta Aleksandrova et al., &#8220;Semi-automated Traffic Analysis of Encrypted Network Traffic&#8221;, SAINT D5.2. Not yet available.</para>
<para>[19] A. Panchenko, L. Niessen, A. Zinnen, and T. Engel, &#8220;Website fingerprinting in onion routing based anonymization networks,&#8221; in Proceedings of ACM WPES. Chicago, IL, USA: ACM Press, pp. 103&#8211;114, October 2011.</para>
<para>[20] A. Panchenko, A. Mitseva, M. Henze, F. Lanze, K.Wehrle, and T. Engel, &#8220;Analysis of fingerprinting techniques for tor hidden services,&#8221; in Proceedings of the Workshop on Privacy in the Electronic Society, pp. 165&#8211;175, ACM, 2017.</para>
<para>[21] R. Dingledine, N. Mathewson, and P. Syverson, &#8220;Tor: The Second-Generation Onion Router,&#8221; in Proceedings of USENIX Security, San Diego, CA, USA: USENIX Association, 18 p, 2004.</para>
<para>[22] O. Berthold, H. Federrath, and S. Kopsell, &#8220;Web mixes: A system for anonymous and unobservable internet access,&#8221; in Proceedings of Designing Privacy Enhancing Technologies: Workshop on Design Issues in Anonymity and Unobservability, pp. 115&#8211;129, July 2000.</para>
<para>[23] C. Kuhn, M. Beck, S. Schiffner, T. Strufe, and E. Jorswieck, &#8220;Privacy framework for anonymous communication&#8221;, 20 pages, in print (PoPETs, 2019).</para>
</section>
</chapter>

<chapter class="chapter" id="ch04" label="4" xreflabel="4">
<title>The FORTIKA Accelerated Edge Solution for Automating SMEs Security</title>
<para><b>Evangelos K. Markakis<sup>1</sup>, Yannis Nikoloudakis<sup>1</sup>, Evangelos Pallis<sup>1</sup>, Ales Cernivec<sup>2</sup>, Panayotis Fouliras<sup>3</sup>, Ioannis Mavridis<sup>3</sup>, Georgios Sakellariou<sup>3</sup>, Stavros Salonikias<sup>3</sup>, Nikolaos Tsinganos<sup>3</sup>, Anargyros Sideris<sup>4</sup>, Nikolaos Zotos<sup>4</sup>, Anastasios Drosou<sup>5</sup>, Konstantinos M. Giannoutakis<sup>5</sup> and Dimitrios Tzovaras<sup>5</sup></b></para>
<para><sup>1</sup>Department of Informatics Engineering, Technological Educational Institute of Crete, Greece</para>
<para><sup>2</sup>XLAB d.o.o., Slovenia</para>
<para><sup>3</sup>Department of Applied Informatics, University of Macedonia, Greece</para>
<para><sup>4</sup>Future Intelligence LTD, United Kingdom</para>
<para><sup>5</sup>Information Technologies Institute, Centre for Research &amp; Technology Hellas, Greece</para>
<para>E-mail: Markakis@pasiphae.teicrete.gr; Nikoloudakis@pasiphae.teicrete.gr; Pallis@pasiphae.teicrete.gr; ales.cernivec@xlab.si; pfoul@uom.edu.gr; mavridis@uom.edu.gr; geosakel@uom.edu.gr; salonikias@uom.edu.gr; tsinik@uom.edu.gr; Sideris@f-in.co.uk; Zotos@f-in.co.uk; drosou@iti.gr; kgiannou@iti.gr; tzovaras@iti.gr</para>

<section class="lev1" id="sec4-1">
<title>4.1 Introduction</title>
<para>Although the recent trend is to restrict the term &#8220;cyber-attack&#8221; to incidents causing physical damage, it has traditionally been used to describe a broader range of attempts to make unauthorized use of an asset related to computer information systems, computer networks, or even personal computing devices. As such, a cyber-attack aims to steal or alter a target&#8217;s systems/data, or even destroy targets, by gaining access to a targeted system. In this respect, a whole new industry has been shaped around the need for protection against cyber-attacks, i.e. the &#8220;cyber-security&#8221; domain, which primarily deals with the protection of systems (incl. HW/SW &amp; data) connected to the internet against cyber-attacks and should not be confused with the domain of Information Technology (IT) Security (see <link linkend="F4-1">Figure <xref linkend="F4-1" remap="4.1"/></link>), which mainly refers to the protection of information. Cyber-security, on the other hand, is the ability to protect or defend the use of cyberspace from cyber-attacks by securing &#8220;things&#8221; that are vulnerable through ICT.</para>
<para>The first cyber-attack was recorded in 1989, in the form of a computer worm (i.e. malware), and the number of attacks has grown significantly in the following years (see <link linkend="F4-2">Figure <xref linkend="F4-2" remap="4.2"/></link>). Equal growth has been noted both in the level of threat they pose and in the sophistication with which they are launched and/or act. Specifically, cyber-security threats have evolved from standalone threats that could affect single targets to more complicated scenarios, where threats can self-replicate, mutate and expand to other devices and/or networks via the internet. Finally, the evolution in the way modern cyber-attacks exploit their targets is also extremely interesting. For instance, the traditional ways of (i) harming infrastructures through DDoS attacks, (ii) misusing them through malware, and (iii) infiltrating them through identity spoofing are nowadays considered outdated, and new threats and attack scenarios are emerging which aim at abusing sensitive soft assets through ransomware, directly leading to their endangerment and potential loss.</para>
<fig id="F4-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-1">Figure <xref linkend="F4-1" remap="4.1"/></link></label>
<caption><para>Information technology security vs cyber-security.</para></caption>
<graphic xlink:href="graphics/ch004_fig001.jpg"/>
</fig>
<fig id="F4-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-2">Figure <xref linkend="F4-2" remap="4.2"/></link></label>
<caption><para>Incidents reported to US-CERT, Fiscal Years 2006&#8211;2014. <i>(Source: GAO Analysis data of US-CERT)</i>.</para></caption>
<graphic xlink:href="graphics/ch004_fig002.jpg"/>
</fig>
<para>Unavoidably, cyber-security becomes of great importance due to society&#8217;s increasing reliance on computer systems. Recently, in the era of the Internet of Things (IoT)<footnote id="fn_1" label="1"> <para>IoT: The network of physical devices with connectivity (i.e. devices that connect, collect &amp; exchange data). The term was first introduced when the number of connected devices outnumbered the humans connected to the internet.</para></footnote>, a large number of connected devices, located at the edge of the Internet hierarchy, generate massive volumes of data at high velocities. This renders centralized data models unsustainable, since it is infeasible to collect all the data in remote data centres and expect the results to be sent back to the edge with low latency.</para>
<para>Based on the constantly increasing dependency of the global economy on inter-connected digitization (i.e. the world-wide web, smart grids, IoT networks, direct communication links between platforms, etc.), it is the integrity and availability of prompt &amp; uninterrupted interconnectivity that attracts great focus and investment from major players in the market. Similarly, the trend towards IoT and digital innovation forms a flourishing business landscape for SMEs. However, this is put at stake by the uncertain, cumbersome and, most importantly, costly nature of holistic cyber-security solutions. Specifically, although tailored solutions capable of providing the appropriate cyber-security levels for big companies exist, they can hardly be adapted to other environments and thus lack scalability, which makes them unsuitable for smaller enterprises.</para>
<para>The implementation of a complete and reliable edge computing security framework seems to be a promising alternative to protect an IoT environment and the overall network of an SME. In order to fulfil the IoT requirements, modern trends dictate that more resources (incl. computation, storage &amp; networking) must be located closer to users and the IoT devices, at the edge of the networks, where data is generated (i.e. <i>&#8220;edge computing&#8221;</i>), so as to (i) reduce data traffic especially in Internet backbone, (ii) provide in-situ data intelligence, (iii) reduce latency<footnote id="fn_2" label="2"> <para>Given the complexity of cyber-security tasks and the latency imposed by the network distance between the client and the cloud infrastructure, one can deduce that cloud computing architecture, is by-design unsuitable for time-sensitive applications. The advancements in Edge Computing [1&#8211;3] allow for the efficient deployment and delivery of minimum-latency services.</para></footnote> and (iv) improve the response speed.</para>
<para>This way, cyber-security solutions will become more attractive, in terms of both applicability and adaptability per use-case. Toward this direction, monolithic approaches are not enough; in-situ analysis based on usage behaviour analytics and Security Information &amp; Event Management (SIEM) systems, customized for a certain edge, seems able to offer a plausible and affordable solution, if offered as a modular product of adequate granularity in terms of offered services, so as to form an attractive product, easily customizable to the needs of each customer.</para>
<para>Toward this direction, edge solutions introduce 5 major challenges<footnote id="fn_3" label="3"> <para>J. Pan, Z. Yang, &#8220;Cybersecurity Challenges and Opportunities in the New &#8220;Edge Computing + IoT&#8221; World&#8221;, Association for Computing Machinery, 2018, doi: 10.1145/3180465.3180470</para></footnote> that require attention, namely (i) the massive numbers of vulnerable IoT devices, (ii) the NFV-SDN integrated edge cloud platform, (iii) the privacy &amp; security<footnote id="fn_4" label="4"> <para>Business data can be either sensitive or non-sensitive, depending on the type of business and the type of transaction. In any case, the sensitive and classified data must be stored and managed in a &#8220;regulated zone&#8221;. With sophisticated encryption and key management, cloud storage platforms can qualify as a legitimate solution for storing and maintaining such data.</para></footnote> of the data, (iv) the interaction between edge &amp; IoT devices and (v) the Trust &amp; Trustworthiness.</para>
<para>This chapter presents an analysis of the cyber-threat landscape within generic ICT environments and its impact on SMEs. It also covers the different standardization and certification schemas that would help SMEs support a cyber-security strategy, taking into consideration standardization and best practices for the FORTIKA ecosystem and deployment. Additionally, the modular, edge-based cyber-security solution of the FORTIKA concept<footnote id="fn_5" label="5"> <para>https://cordis.europa.eu/project/rcn/210222/factsheet/en</para></footnote> is promoted within the current chapter. The resources required from a potential SME customer are efficiently managed, while a dedicated marketplace serves as a repository that can extend the basic product version with affordable functionalities tailored to the needs of each SME. On top of the latter, one can selectively build the appropriate cyber-security solution that matches their needs, through a combination of the correct bundles.</para>
</section>

<section class="lev1" id="sec4-2">
<title>4.2 Related Work and Background</title>
<para>The increasingly connected world of people, organizations, and things is driven by the vast proliferation of digital technologies. This fact guarantees a promising future for cyber-security companies but poses a great threat for SMEs. According to Symantec [4], 60% of targeted attacks in 2015 were aimed at small businesses, while &#8220;more than 430 million new unique pieces of malware were discovered&#8221;. According to FireEye [5], 77% of all cybercrimes target SMEs. Simple endpoint protection through antivirus has become largely inadequate, due to the complexity and variety of cyber-threats, as well as the integration of multiple digital technologies in business processes, even in small enterprises. Modern cyber-security solutions for businesses, which are designed to provide multilayer proactive protection, use heuristics and threat-intelligence technologies to detect unknown threats, protecting a wide range of devices (e.g., PCs, servers, mobile devices, etc.) and business practices (e.g., BYOD, remote access, use of cloud-based apps and services, etc.). Due to this complexity, no single security solution can effectively address the whole threat landscape. Threats may range from relatively harmless, abusive content (such as spam messages) and other low-impact opportunistic attacks, to very harmful (malicious code), while they can escalate to targeted attacks (e.g., spyware, denial of service, etc.), with major operational and economic consequences for the enterprise.</para>
<para>According to ENISA [6], the top-5 threats in 2016 were mainly network-based. Consequently, a cost-effective solution for such threats could prove decisive for the future of SMEs, and it cannot be provided by traditional methods alone.</para>
<para>Social engineering is another typical form of threat. It can manifest through deceptive e-mails, installation instructions for a &#8220;free&#8221; or even &#8220;trial&#8221; piece of software, bogus sites, etc.</para>
<para>Moreover, Internet of Things (IoT) applications, such as healthcare and assistive technologies, promise a higher level of quality of life for citizens around the world; on the other hand, however, they increase the attack surface considerably. Legacy systems, implantable devices, and wireless networks are also eligible attack domains. Embedded systems are used more and more, e.g. in modern cars. Controlling and manipulating such entities can provide attackers with enormous power. The same holds for critical infrastructures and drones. Therefore, the cyber-security research community needs to address those issues.</para>
<para>SMEs consist of diverse businesses that usually operate in the service, manufacturing, engineering, agroindustry, and trade sectors. SMEs can be innovative and entrepreneurial, and usually aspire to grow. Nevertheless, some stagnate and remain family owned. There is no single, uniformly-accepted definition of SMEs. Many definitions exist whereby SMEs are classified by different characteristics, including, but not limited to, profitability, turnover, sales revenue, or the number of people employed.</para>
<para>The European Union defines an SME by combining the number of employees with revenue and assets. A medium-sized enterprise [7] is defined as an enterprise which employs fewer than 250 persons and whose annual turnover does not exceed &#8364;50 million or whose annual balance-sheet total does not exceed &#8364;43 million.</para>
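<para>The criteria above can be expressed as a short classification routine. The sketch below is illustrative only: it applies the staff-headcount and financial ceilings of Recommendation 2003/361/EC [7] (the micro and small ceilings come from the same recommendation; the function name is ours):</para>

```python
def classify_enterprise(employees, turnover_m_eur, balance_m_eur):
    """EU size class per Recommendation 2003/361/EC: an enterprise
    fits a class if it is below the staff ceiling AND below at least
    one of the two financial ceilings (figures in million EUR)."""
    ceilings = [          # (class, staff, turnover, balance-sheet total)
        ("micro", 10, 2, 2),
        ("small", 50, 10, 10),
        ("medium", 250, 50, 43),
    ]
    for name, staff, turnover, balance in ceilings:
        if employees < staff and (turnover_m_eur <= turnover
                                  or balance_m_eur <= balance):
            return name
    return "large"
```

<para>For example, an enterprise with 100 employees and a &#8364;30 million annual turnover falls into the medium-sized class.</para>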
<para>SMEs represent the &#8220;middle class&#8221; of entities using computers, with single or home users at the bottom of the hierarchy and large companies or organizations at the top. As such, SMEs lack the resources typically available in the case of large organizations, while, at the same time, they need continuous and secure operation of their systems in order to function. Security can be quite expensive and, since the consequences of under-investment in it are not evident until a significant incident takes place, it is often very tempting to allocate the minimum of resources to it.</para>
<para>However, a significant security incident can prove fatal for an SME, either directly (e.g., cessation of business transactions) or indirectly (e.g., bad reputation causing most of the customers to walk away, or litigation). Most SMEs do not consider themselves as having data that is of interest to cyber-criminals and quite often dismiss the need to adequately address vulnerabilities in their infrastructure. In reality, the opposite is true; every enterprise today collects data on employees, clients, and vendors that is of interest to cyber-criminals. Consequently, it is crucial to develop cyber-security products that focus on the needs of SMEs. The challenges in mitigating cyber-threats must be highlighted, and the identified risks must be addressed.</para>
<para>FORTIKA aims to establish a reliable and secure business environment for SMEs that will ensure business continuity. The FORTIKA solution is composed of modules that are designed to provide a cohesive and cost-effective set of services that address those issues. These modules are described below.</para>
</section>

<section class="lev1" id="sec4-3">
<title>4.3 Technical Approach</title>
<para>This section presents the high-level deployment diagram (see <link linkend="F4-3">Figure <xref linkend="F4-3" remap="4.3"/></link>) of the FORTIKA modules in the two main FORTIKA systems, namely the Cloud and the SME. In the Cloud, the Marketplace, its Dashboard, and Cloud platform related modules (i.e. Orchestrator, Cloud security, Cloud Storage) are deployed; further to that, several constituent components of FORTIKA cyber security appliances (i.e. ABAC, SEARS, Encrypted data search engine, redBorder Manager) are also deployed there. At the SME level, there are two distinct cases of deployment. In the first case, the deployment is performed at the FORTIKA-GW where the GW&#8217;s operational modules (depicted in red colour) and the FORTIKA security modules (lightweight modules in the</para>
<fig id="F4-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-3">Figure <xref linkend="F4-3" remap="4.3"/></link></label>
<caption><para>FORTIKA deployment diagram (High level).</para></caption>
<graphic xlink:href="graphics/ch004_fig003.jpg"/>
</fig>

<para>ARM, heavyweight modules in the FPGA) can be found. In the second case, the Agents (software units collecting information and forwarding it to the GW&#8217;s cyber-security appliances for processing/analysis) are deployed in the workstations and servers of the SME.</para>
</section>

<section class="lev2" id="sec4-3-1">
<title>4.3.1 FORTIKA Accelerator</title>
<para>FORTIKA Accelerator: The FORTIKA security accelerator (FORTIKA gateway) offers expandability in terms of processing power and storage capacity, as well as scalability, through the modular connection of two or more accelerators in series. Its user interface guides the enterprise administrator to appropriately define and configure the company&#8217;s security &amp; privacy policy, along with the level of encryption (information classification) and the corresponding data availability (privacy) within the enterprise and towards 3rd parties (e.g. suppliers, partners/collaborators, customers, other parties), thus covering a wide range of use case scenarios. The system users/admins are kept informed at any time via comprehensive visual analytics, while being able to interfere with the functionality of the presented solution in an effortless and user-friendly way.</para>
<para>FORTIKA Accelerator Architecture: Acceleration has been a hot topic in computing for the past few years, with Moore&#8217;s law and the associated performance bumps slowly crawling to a halt. Currently, most industrial leaders accept that some form of acceleration will be needed to provide the compute capacity required to cope with the large flows of data being created in the modern, widely interconnected world. FORTIKA leverages acceleration in the form of programmable logic devices (an FPGA-enabled gateway) to deliver high-performance security applications to SMEs. FPGAs offer an efficient solution in terms of performance, flexibility and power consumption. To achieve this, the FPGA must be made accessible as a resource over the network, while allowing users to remotely deploy resource-demanding compute tasks on the device. This requires a middleware, either in software or in hardware, that allows for the discovery of the programmable logic resources and the exchange of information between the marketplace, which is responsible for determining the appropriate infrastructure for the deployment of a task, and the accelerator module.</para>
<para>The FORTIKA accelerator module (<link linkend="F4-4">Figure <xref linkend="F4-4" remap="4.4"/></link>) utilizes an FPGA SoC embedded device which combines ARM processors with programmable logic in one integrated circuit. This device allows an optimal division of labour between software and hardware, and allows system designers to offload computationally intensive tasks to the hardware while using the software for any lightweight, non-critical task. FORTIKA has inherited several features from the T-NOVA FPGA-powered cloud platform, which uses OpenStack running on the CPUs to deploy tasks on the programmable logic, and has extended and adapted the platform to meet FORTIKA&#8217;s edge demands.</para>
<fig id="F4-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-4">Figure <xref linkend="F4-4" remap="4.4"/></link></label>
<caption><para>FORTIKA accelerator architecture.</para></caption>
<graphic xlink:href="graphics/ch004_fig004.jpg"/>
</fig>
<para>The FORTIKA Middleware (MDW) (<link linkend="F4-5">Figure <xref linkend="F4-5" remap="4.5"/></link>) aims to facilitate a) the interactions between the FORTIKA GW and the FORTIKA marketplace; b) the loading of the security bundles to the FORTIKA accelerator; c) the exchange of data between the ARM-deployed security bundles and their FPGA-deployed counterparts; and d) the SW developers in producing accelerated security bundles that can be deployed in the FORTIKA accelerator. To put things in context, the following figure shows which (sub)systems the MDW (pink Note boxes) aims to &#8220;glue&#8221; and which activities it facilitates, inside the FORTIKA architecture.</para>
<para>To achieve these objectives, the MDW consists of several components, namely the Security Bundle Handler (SBH), the LwM2M client, and the Synthesis engine. The first one handles the deployment and management of the bundles in the FORTIKA GW (both in the ARM and the FPGA parts). The second one provides the communication engine/channel which is used to interact with the FORTIKA marketplace, whereas the last one eases the development of accelerated security bundles by hiding the complexity of HW design and configuration from the FORTIKA SW developers. As <link linkend="F4-6">Figure <xref linkend="F4-6" remap="4.6"/></link> indicates, the first two components are deployed in the FORTIKA Accelerator (GW), whereas the last one is currently deployed in a Virtual Machine located at FINT&#8217;s cloud infrastructure. So far, the Synthesis engine and the GW&#8217;s MDW components (SBH and LwM2M client) do not have any interaction, as their activities are under different scopes.</para>
<fig id="F4-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-5">Figure <xref linkend="F4-5" remap="4.5"/></link></label>
<caption><para>Middleware use in FORTIKA.</para></caption>
<graphic xlink:href="graphics/ch004_fig005.jpg"/>
</fig>
<para>Developing applications for the FPGA requires knowledge of the HW platform and its specifics, something that can discourage SW developers from building applications for the FORTIKA accelerator. In the project&#8217;s context, we tackle this issue by exploiting the fact that FPGA application development is divided into two phases, namely the Front-End design and the Back-End design [8]. For the Front-End design phase, the FORTIKA developers use High Level Synthesis tools (i.e. the Vivado HLS suite [3]) (<link linkend="F4-7">Figure <xref linkend="F4-7" remap="4.7"/></link>), which allow them to write their FPGA applications in high-level languages, such as C/C++, thus avoiding the use of low-level, hardware-specific languages (e.g. VHDL) that require knowledge of the HW specifics. After writing their code, the developers can use Vivado HLS (<link linkend="F4-8">Figure <xref linkend="F4-8" remap="4.8"/></link>) to produce artefacts known as Soft IP (Intellectual Property) cores. These IP cores are used in the Back-End design phase for producing the final bitstreams that can run on the actual FPGA. However, the Back-End design phase requires knowledge of specific parameters of the used HW design, making it a hard task for standard SW engineers; it is therefore this design phase that the FORTIKA MDW aims to facilitate, by providing a service that takes the produced soft IP core as input, runs the low-level synthesis (the process of the Back-End design phase), and then returns the final bitstream to the developers. In this context, the following diagram depicts the sequence of steps followed by the Synthesis Engine for implementing this task.</para>
<fig id="F4-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-6">Figure <xref linkend="F4-6" remap="4.6"/></link></label>
<caption><para>SBH and LwM2M client components of the middleware.</para></caption>
<graphic xlink:href="graphics/ch004_fig006.jpg"/>
</fig>

<fig id="F4-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-7">Figure <xref linkend="F4-7" remap="4.7"/></link></label>
<caption><para>Synthesis engine component of the middleware.</para></caption>
<graphic xlink:href="graphics/ch004_fig007.jpg"/>
</fig>
<fig id="F4-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-8">Figure <xref linkend="F4-8" remap="4.8"/></link></label>
<caption><para>Synthesis sequence steps.</para></caption>
<graphic xlink:href="graphics/ch004_fig008.jpg"/>
</fig>
<para>The <i>UploadSoftIPcore()</i> represents the function that allows developers to upload the produced soft IP cores to the Synthesis Engine. Currently, these IP cores are received via email; however, in the next versions of the MDW, the cores will be uploaded via a web form, which is planned to be provided from the Marketplace dashboard. The <i>Synthesise()</i> function performs the low-level synthesis that produces the final bitstream. The <i>ReturnBitStream()</i> function represents the push of the synthesised bitstream to the developer.</para>
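<para>The three-step sequence above can be sketched as a minimal service stub. The class below is an illustrative sketch only: the vendor-toolchain invocation is stubbed out, and the storage and naming are our assumptions, not the actual FORTIKA implementation:</para>

```python
class SynthesisEngine:
    """Illustrative stub of the MDW Synthesis Engine workflow."""

    def __init__(self):
        self._cores = {}  # core_id -> uploaded soft IP core payload

    def upload_soft_ip_core(self, core_id, payload):
        # UploadSoftIPcore(): the developer hands over the soft IP core
        # (currently by email; a web form is planned).
        self._cores[core_id] = payload

    def synthesise(self, core_id):
        # Synthesise(): run the Back-End (low-level) synthesis.  A real
        # engine would invoke the vendor toolchain here; we fake the
        # bitstream so only the control flow is shown.
        return b"BITSTREAM:" + self._cores[core_id]

    def return_bit_stream(self, core_id):
        # ReturnBitStream(): push the synthesised bitstream back
        # to the developer.
        return self.synthesise(core_id)


engine = SynthesisEngine()
engine.upload_soft_ip_core("aes-filter", b"soft-ip-core-bytes")
bitstream = engine.return_bit_stream("aes-filter")
```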
</section>

<section class="lev2" id="sec4-3-2">
<title>4.3.2 Fortika Marketplace</title>
<para>To facilitate competition and support different value chain configurations, a novel Marketplace Platform is introduced, allowing FORTIKA users to interact with Service Providers and multiple third-party Security Function Developers in order to select the service bundle that best suits their needs. For this reason, the Marketplace incorporates a prototype that aims to introduce and promote a novel market field for security services, creating new business cases and considerably expanding market opportunities by attracting new entrants to the cyber-security market. SMEs and academia can leverage the FORTIKA architecture by developing innovative, cutting-edge Security Functions that can be included in the Function Store and rapidly introduced to the market, thus avoiding the delay and risk of hardware integration and prototyping. By utilizing a common web-based graphical user interface, the Marketplace constitutes the environment where customers can:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Place their requests for FORTIKA services and declare their requirements for the corresponding security functions</para></listitem>
<listitem><para>Receive offerings and make the appropriate selections, considering the offered Service Level Agreements (SLAs)</para></listitem>
<listitem><para>Monitor the status of the established security services and associated security functions, as well as perform, according to their rights, management operations on them (Service monitoring and management will be enabled via a graphical Service Dashboard to be implemented)</para></listitem>
</itemizedlist>
<para>The overall concept for security functions trading, deployment and management within the Marketplace is depicted in <link linkend="F4-9">Figure <xref linkend="F4-9" remap="4.9"/></link>, where third-party Security Function developers (1) advertise their available virtual security appliances and users may acquire them for customized service creation/utilization. More specifically, users&#8217; requests (2) are received via the Brokerage Module as part of the Marketplace Platform, which is responsible for a) analysing their requirements, b) matching the analysis results with the available resources, maintained by the &#8220;Management &amp; Orchestration&#8221; module along with the Security Functions aggregated at the Store (4), and c) initiating an auction process for all valid solutions under various merchandise policies and the available SLA models. Upon successful SLA establishment and Functions trading, the Orchestration module deploys the Security Function onto the underlying infrastructure (5), maintaining its control, customization and administration.</para>
<fig id="F4-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-9">Figure <xref linkend="F4-9" remap="4.9"/></link></label>
<caption><para>Process of deployment and management within FORTIKA marketplace.</para></caption>
<graphic xlink:href="graphics/ch004_fig009.jpg"/>
</fig>

<para>To carry out the discovery of Security Functions provided by third-party developers, similarity-based algorithms such as Nearest Neighbour will be exploited by the Brokerage module to perform service matching. To speed up this process, FORTIKA will study and identify the most appropriate data structures for establishing a competent resource and service description schema for Security Function matching by the Brokerage module. A principal target is to identify mandatory and optional fields within the schema, so as to allow a configurable degree of exposure of resources and services, associated Security Functions and SLAs to all involved actors, according to the confidentiality requirements of each. The integration of the FORTIKA Middleware appliance in existing networks requires seamless connectivity, according to usability and automation standards and guidelines. The appliance will integrate an OpenFlow Ethernet switch with physical Ethernet ports, routing, and security capabilities (firewall, IPS, IPSec). The appliance will also provide the required processing and storage to enable applications available through the FORTIKA Marketplace to be locally deployed but orchestrated according to rules computed in the cloud. The FORTIKA Marketplace will enable service providers to deploy and promote integrated security services through a web-based, user-friendly interface with personalization features. Depending on the service design requirements, the FORTIKA Marketplace will be deployed in the cloud. Deployment of the Marketplace is not limited to a public or private cloud: due to the dynamic deployment mechanisms leveraging tools like Ansible and Docker, and the use of standards (TOSCA) for the definition of services, the FORTIKA consortium is not limited to any type of cloud resources.</para>
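<para>As a toy illustration of the Nearest Neighbour matching mentioned above, the sketch below scores advertised Security Function descriptors against a user request over a few numeric requirement fields. The field names, catalogue entries and normalisation are our own assumptions, not the FORTIKA description schema:</para>

```python
import math

# Hypothetical descriptors: (throughput Mbps, latency ms, price EUR/month)
CATALOGUE = {
    "lightweight-ids": (100, 5, 20),
    "deep-packet-ips": (500, 20, 90),
    "edge-firewall":   (200, 2, 40),
}

def nearest_function(request, catalogue):
    """Return the Security Function whose descriptor is closest
    (Euclidean distance on normalised fields) to the request."""
    # Normalise each field by its maximum so no single unit dominates.
    maxima = [max(desc[i] for desc in catalogue.values()) for i in range(3)]
    def dist(desc):
        return math.dist([v / m for v, m in zip(desc, maxima)],
                         [v / m for v, m in zip(request, maxima)])
    return min(catalogue, key=lambda name: dist(catalogue[name]))

# A request for ~150 Mbps, low latency, modest budget:
best = nearest_function((150, 3, 30), CATALOGUE)
```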
<para>FORTIKA Appliances (Virtual or Physical) will be managed through a FORTIKA-specific management network, using a personalized cloud service. For this reason, an integrated management platform will be deployed, offering a consistent and unique administrator front end for both the Middleware appliance configuration and the configuration and management of the installed modules. The administrator front end will allow management of the Security Functions&#8217; lifecycle.</para>
<para>Finally, the connection of FORTIKA Middleware appliances with the orchestrator in the cloud is a critical point, since protecting the integrity and confidentiality of data traveling in the fog area is crucial for middleware adoption and end-user trust in FORTIKA. For this reason, the FORTIKA Middleware and the FORTIKA cloud services communicate over a secure channel leveraging the LwM2M protocol. This is the back-channel used for the management of the FORTIKA Appliance with the running Middleware.</para>
</section>

<section class="lev1" id="sec4-4">
<title>4.4 Indicative FORTIKA Bundles</title>
</section>

<section class="lev2" id="sec4-4-1">
<title>4.4.1 Attribute-based Access Control (ABAC)</title>
<para>Access control can be defined as a security service, co-existing with others, that aims to limit the actions or operations of legitimate entities against requested resources [9]. Over the years, many access-control models have been proposed, with the prevalent ones being MAC, DAC and RBAC [9]. In recent years, information systems have become able to interact with the environment, i.e. the context; thus, a need arose for a novel approach to controlling access in context-aware information systems. As a result, Attribute-Based Access Control (ABAC) was proposed. ABAC policies are able to include attributes of the subject (requestor), the object (requested resource) and the context (environment). Thus, in contrast to legacy models based on identities, a higher level of versatility and control can be achieved.</para>
<para>FORTIKA implements ABAC by providing a cloud-based access control solution, which will benefit greatly from the FORTIKA Gateway appliance, to control access to SME ecosystem resources based on policies that the SME will be able to create and manage.</para>
<para>A system that implements ABAC consists of the following components [10] (<link linkend="F4-10">Figure <xref linkend="F4-10" remap="4.10"/></link>):</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Policy Administration Point (PAP), which is used to create, store, test and retrieve access control policies. Since the PAP component will be hosted in the FORTIKA cloud, a multi-tenant environment will be deployed so that SME administrator users will have access to their own organization&#8217;s policies only.</para></listitem>
<listitem><para>Policy Information Point (PIP), which retrieves all necessary attributes and authorization data required by the PDP in order to reach an access control decision. The PIP in FORTIKA is implemented both in the cloud and in the fog, since attribute values are collected from both the cloud and the SME premises.</para></listitem>
<listitem><para>Policy Decision Point (PDP), which evaluates access requests against policies so that an access control decision is computed.</para></listitem>
</itemizedlist>
<fig id="F4-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-10">Figure <xref linkend="F4-10" remap="4.10"/></link></label>
<caption><para>ABAC components [10].</para></caption>
<graphic xlink:href="graphics/ch004_fig10.jpg"/>
</fig>

<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Policy Enforcement Point (PEP), which is the component where an access control request is generated and the access decision is enforced.</para></listitem>
</itemizedlist>
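<para>As a minimal illustration of how a PDP evaluates subject, object, and environment attributes against a policy, consider the sketch below. The attribute names, the sample policy and the first-match/default-deny strategy are our own illustrative choices, not FORTIKA policies:</para>

```python
def pdp_decide(request, policies):
    """Tiny ABAC decision function: a policy holds a condition mapping
    attribute -> required value (or a predicate); the first policy whose
    condition fully matches wins, and the default decision is Deny."""
    for policy in policies:
        matched = True
        for attr, expected in policy["condition"].items():
            value = request.get(attr)
            matched = expected(value) if callable(expected) else value == expected
            if not matched:
                break
        if matched:
            return policy["effect"]
    return "Deny"

# Sample policy: accounting staff may read invoices during office hours.
policies = [{
    "effect": "Permit",
    "condition": {
        "subject.department": "accounting",   # subject attribute
        "resource.type": "invoice",           # object attribute
        "action": "read",
        "env.hour": lambda h: 9 <= h <= 17,   # contextual attribute
    },
}]

decision = pdp_decide({
    "subject.department": "accounting",
    "resource.type": "invoice",
    "action": "read",
    "env.hour": 11,
}, policies)
```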
<para>The FORTIKA ABAC service is designed as a three-layered approach (<link linkend="F4-11">Figure <xref linkend="F4-11" remap="4.11"/></link>). In terms of component placement and communication architecture, the PIP and PAP components, as well as the related Policy Repository, will be deployed in the cloud (ABAC.Cloud). This will allow for rapid policy replication in the case of multi-site SMEs and, additionally, will permit replacing an on-premise FORTIKA appliance without any prior consideration for existing attributes and policies. Moreover, the cloud can provide adequate processing and storage resources to create a user-friendly administration environment.</para>
<para>On the other hand, to avoid any issues with network latency or network unavailability [11], the PDP component will be held in the fog area (ABAC.fog). More specifically, PDP will be held in FORTIKA&#8217;s physical or virtual appliance hosted in SME premises, thus accelerating decision making. Additionally, to better support contextual attributes, a local PIP along with a local attribute repository (currently labelled NA-PIP) will accompany PDP and communicate with cloud PIP to exchange attribute information.</para>
<fig id="F4-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-11">Figure <xref linkend="F4-11" remap="4.11"/></link></label>
<caption><para>ABAC layered approach.</para></caption>
<graphic xlink:href="graphics/ch004_fig11.jpg"/>
</fig>

<para>Finally, the PEP component will be initially integrated into a prototype agent for client devices. Nevertheless, the ABAC solution will provide the appropriate API for other compatible PEP components to be able to utilize FORTIKA&#8217;s ABAC service.</para>
<para>FORTIKA ABAC implements the XACML framework [12] and is based exclusively on open-source technologies, developed with Java and Java EE using Maven. ABAC.Cloud is based on the WSO2 Identity Server, which is licensed under the Apache 2.0 license, whereas ABAC.fog is based on Balana XACML and has been developed to provide a RESTful API to PEPs. The API exposes services according to the OASIS REST Profile for XACML 3.0, version 1.0 [13]. This enables potentially any vendor or integrator to utilize FORTIKA ABAC.fog and consume authorization services, making FORTIKA ABAC an Authorization-as-a-Service (AuthZaaS) offering.</para>
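<para>For orientation, a PEP talking to such a RESTful PDP typically posts a body laid out per the XACML JSON Profile (category objects holding attribute lists). The sketch below only builds and inspects that JSON structure; the attribute IDs and the endpoint path in the comment are illustrative assumptions, not the FORTIKA API:</para>

```python
import json

def build_xacml_request(subject_id, action, resource_id):
    """Assemble a request body following the XACML JSON Profile layout:
    one object per category, each holding a list of Attribute entries."""
    return {
        "Request": {
            "AccessSubject": {"Attribute": [
                {"AttributeId": "subject-id", "Value": subject_id}]},
            "Action": {"Attribute": [
                {"AttributeId": "action-id", "Value": action}]},
            "Resource": {"Attribute": [
                {"AttributeId": "resource-id", "Value": resource_id}]},
        }
    }

body = json.dumps(build_xacml_request("alice", "read", "file:invoices"))
# A PEP would POST this body to the PDP's REST endpoint (e.g. a path
# such as /authorization/pdp) with an XACML+JSON content type, then
# read the Decision ("Permit"/"Deny") from the JSON response.
```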
</section>

<section class="lev1" id="sec4-5">
<title>4.5 Social Engineering Attack Recognition Service (SEARS)</title>
<para>Social engineering attacks are usually an important step in the planning and execution of many other types of cyber-attacks. The term &#8216;social engineering&#8217; refers to the psychological, emotional and intellectual manipulation of people into performing actions or revealing confidential information. As defined in [14], social engineering is: <i>&#8220;a deceptive process whereby crackers &#8216;engineer&#8217; or design a social situation to trick others into allowing them access an otherwise closed network, or into believing a reality that does not exist.&#8221;</i></para>
<para>The increased usage of electronic communication tools (email, instant messaging, etc.) in enterprise environments results in the creation of new attack vectors for social engineers, and a successful social engineering attack could result in the compromise of an SME&#8217;s information system. Thus, several attempts have been made in the research field to provide technical means for detecting such attacks at early stages. Works that are close to a prototyping level are SEDA [15] and SEADM [16]. Furthermore, interesting efforts that are still under development in the research laboratory are [17] and [18].</para>
<para>The Social Engineering Attack Recognition System (SEARS) will operate in the application layer and will be able to compute communication risk, thereby preventing personal or corporate data leakage by raising alerts to employees when a chat conversation reaches a specific risk threshold [19]. SEARS is a collection of autonomous services that collaborate with each other through technology-agnostic messaging protocols, either point-to-point or asynchronously. The development of SEARS components follows the microservices design approach: each component consists of a number of independent microservices that serve distinct functionalities of the whole system.</para>
<para>SEARS components will be placed in the three layers of FORTIKA&#8217;s architecture, as follows:</para>
<para><b>Client layer:</b></para>
<para>The SEARS Agent (SEARS.agent) is a service that monitors, captures and pre-processes an employee&#8217;s social media communications. It is also capable of receiving the total risk value and alerting the user to possible social engineering attack attempts. SEARS.Agent is deployed on the end-user&#8217;s device in the form of a Docker container or as a local service, and continuously monitors and captures an employee&#8217;s social media communications. SEARS users are registered SME employees acting as interlocutors (e.g. working on a live chat service) or corporate IT administrators (<link linkend="F4-12">Figure <xref linkend="F4-12" remap="4.12"/></link>).</para>
<fig id="F4-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-12">Figure <xref linkend="F4-12" remap="4.12"/></link></label>
<caption><para>SEARS architecture.</para></caption>
<graphic xlink:href="graphics/ch004_fig12.jpg"/>
</fig>

<para><b>Fog layer:</b></para>
<para>SEARS components in the fog area (SEARS.fog) will be deployed in the FORTIKA physical or virtual appliance, hosted in SME premises. SEARS.fog receives the captured data and stores it locally (Detection Storage component) for further pre-processing (Pre-processing component), using Natural Language Processing techniques. The pre-processed data is then anonymized and sent to the cloud (SEARS.cloud). The Detection Engine receives the particular risk values from SEARS.cloud and then calculates the total Social Engineering Risk value, stores it in the Detection Repository and sends it to the SEARS.client. SEARS.fog is deployed on the FORTIKA Gateway in the form of a Docker container.</para>
<para><b>Cloud layer:</b></para>
<para>The pre-processed data received from SEARS.fog is stored in the SEARS Storage component of SEARS.cloud, in order to be used by the Risk Estimation component to calculate the values of particular risks. These values are then sent to SEARS.fog. The following components are part of the SEARS.Cloud core functionality and are implemented by several microservices.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>Document Classification (DC)</b>:</para></listitem>
</itemizedlist>
<para>Text dialogue, in the form of an anonymized TF-IDF matrix, is processed and classified as dangerous or not. The real text dialogue is processed at the SEARS.Agent, and an anonymized frequency vector is delivered to SEARS.Cloud, where the classification takes place.</para>
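<para>The anonymization idea can be illustrated with token hashing: only hashed term counts (a frequency vector) leave the agent, never the raw words. The sketch below is a simplified stand-in for the actual SEARS pre-processing; the bucket count and hashing scheme are our assumptions:</para>

```python
import hashlib

def anonymized_frequency_vector(text, buckets=64):
    """Map each token to a hash bucket and count occurrences, so the
    cloud sees term frequencies but cannot read the original words."""
    vector = [0] * buckets
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode("utf-8")).digest()
        vector[digest[0] % buckets] += 1   # first digest byte picks a bucket
    return vector

vec = anonymized_frequency_vector("please send me your password please")
```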
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>Personality Recognition (PR)</b>:</para></listitem>
</itemizedlist>
<para>Each interlocutor is classified based on his/her writings. The processing/classification takes place at SEARS.Cloud using the aforementioned anonymized frequency vector.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>User History (UH)</b>:</para></listitem>
</itemizedlist>
<para>Each previous text chat between two specific interlocutors is represented as a probability (decimal number) and stored at SEARS.Cloud.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>Exposure Time (ET)</b>:</para></listitem>
</itemizedlist>
<para>The duration of an employee&#8217;s online presence is represented as a decimal number stored at SEARS.Cloud.</para>
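<para>The Detection Engine combines the four particular risk values into the total Social Engineering Risk value. The exact aggregation rule is not detailed here; the sketch below uses a weighted average purely as an illustrative assumption.</para>

```python
def total_se_risk(dc, pr, uh, et, weights=(0.4, 0.3, 0.2, 0.1)):
    """Aggregate the four particular risk values (Document
    Classification, Personality Recognition, User History and
    Exposure Time), each in [0, 1], into a total Social
    Engineering Risk value.

    A weighted average is an illustrative choice only; the actual
    aggregation rule of the Detection Engine is not specified here.
    """
    parts = (dc, pr, uh, et)
    if not all(0.0 <= p <= 1.0 for p in parts):
        raise ValueError("particular risks must lie in [0, 1]")
    return sum(w * p for w, p in zip(weights, parts))
```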
<para>SEARS offers the ability to communicate the estimated risk values to other modules of FORTIKA. The outgoing information is provided using a standard HTTP POST method. All data is encoded using the JavaScript Object Notation (JSON) format and follows the structure of the SEARS Output JSON Schema. Moreover, all data transfers are carried out using REST</para>
<fig id="F4-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F4-13">Figure <xref linkend="F4-13" remap="4.13"/></link></label>
<caption><para>SEARS conceptual design.</para></caption>
<graphic xlink:href="graphics/ch004_fig13.jpg"/>
</fig>

<para>APIs over the HTTPS protocol, so the communication channel cannot easily be compromised. The SEARS conceptual design as a whole is presented in <link linkend="F4-13">Figure <xref linkend="F4-13" remap="4.13"/></link>.</para>
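<para>As a rough illustration of this interface, the following Python sketch builds such a JSON payload and POSTs it using only the standard library. The field names are hypothetical stand-ins for the actual SEARS Output JSON Schema.</para>

```python
import json
import urllib.request

def build_risk_payload(employee_id, total_risk, particular_risks):
    """Encode the outgoing risk values as JSON. The field names are
    illustrative stand-ins for the SEARS Output JSON Schema."""
    return json.dumps({
        "employeeId": employee_id,
        "totalRisk": round(total_risk, 4),
        "particularRisks": particular_risks,
    }).encode("utf-8")

def push_risk(endpoint, payload):
    """POST the payload to another FORTIKA module; the endpoint is
    expected to be an https:// URL so the channel is encrypted."""
    req = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```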
<para><b>SIEM</b></para>
<para>The Security Information and Event Management System (SIEM) is a solution able to analyse information and events collected at different levels of the monitored system in order to discover possible ongoing attacks or anomalous situations. FORTIKA includes a customized SIEM solution, able to deal with the specificities of its different technologies and components.</para>
<para>The network provides real-time traffic data to the SIEM system. The system, in turn, forwards the data for processing to both the Anomaly Detection and the Behavioural Analysis components. The Anomaly Detection component analyses the data in order to detect anomalies, utilising both automatic anomaly detection algorithms, such as Local Outlier Factor and Bayesian Robust Principal Component Analysis, and visual analytics methods, such as k-partite graphs and multi-objective visualizations. The Behavioural Analysis component processes the network data in order to identify abnormal traffic patterns that may indicate that a malicious event, such as a DDoS attack, is in progress. The output from both components is then passed to the Visualization component for presentation to the user, or to the Hypothesis Formulation component. The latter performs a statistical analysis of the output of the Anomaly Detection and Behavioural Analysis components, through a series of hypotheses, in order to determine whether these data express a normal or an unusual traffic pattern or behaviour. The analysis results can subsequently be fed back to the Anomaly Detection and Behavioural Analysis components for further analysis.</para>
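<para>For illustration, Local Outlier Factor, one of the automatic anomaly detection algorithms mentioned above, can be sketched in plain Python as follows; a production SIEM would rely on an optimised library implementation.</para>

```python
import math

def lof_scores(points, k=2):
    """Plain-Python Local Outlier Factor. Scores well above 1 flag
    points lying in sparser regions than their neighbours."""
    def dist(a, b):
        return math.dist(a, b)

    n = len(points)
    # indices of the k nearest neighbours of each point
    knn = []
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist(points[i], points[j]))
        knn.append(order[:k])
    # k-distance: distance to the k-th nearest neighbour
    kdist = [dist(points[i], points[knn[i][-1]]) for i in range(n)]

    def lrd(i):
        # local reachability density
        reach = [max(kdist[j], dist(points[i], points[j])) for j in knn[i]]
        return k / sum(reach)

    dens = [lrd(i) for i in range(n)]
    return [sum(dens[j] for j in knn[i]) / (k * dens[i]) for i in range(n)]
```

Applied to, say, four points forming a tight cluster plus one distant point, the distant point receives a score several times larger than the cluster points, whose scores stay near 1.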
</section>

<section class="lev1" id="sec4-6">
<title>4.6 Conclusion</title>
<para>The FORTIKA architecture proposes a hybrid (hardware/software) cybersecurity solution suitable for micro, small and medium-sized enterprises, allowing them to continuously integrate novel cyber-security technologies and thus reinforce their position and overall reputation in the European market. In conclusion, this chapter introduced a novel architecture that aims at reshaping the cyber-security landscape by providing an end-user-friendly solution that moves security near the network edge. This architecture is based upon two pillars: a near-the-edge security accelerator, which is able to &#8220;accelerate&#8221; security in the place where the problem is formulated, and a Cloud Marketplace, which provides a unified portal for enabling security for FORTIKA end-users. The preliminary evaluation of the presented work illustrated that users (SMEs) can identify which cyber-security solutions are suitable for their enterprises and seamlessly deploy them on their infrastructures (FORTIKA gateway). Additionally, security-solution developers/providers can easily offer their services through the FORTIKA marketplace, which also allows them to interact with users and offer custom-tailored cyber-security solutions (brokerage), thus extending their marketing opportunities. The presented work is an ongoing EU-funded Horizon 2020 project, currently in its second year of development. Several complex and intuitive features are to be developed in the near future, and more detailed and elaborate reporting of the work will be presented through publications and public workshops, as well as through the project&#8217;s social media accounts (Facebook, Twitter, YouTube, etc.).</para>
<para><b>Acknowledgment</b></para>
<para>This work has received funding from the European Union&#8217;s Horizon 2020 Framework Programme for Research and Innovation, with Title H2020-FORTIKA &#8220;cyber-security Accelerator for trusted SMEs IT Ecosystem&#8221; under grant agreement no. 740690.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] H. Madsen, G. Albeanu, B. Burtschy, and F. Popentiu-Vladicescu, &#8220;Reliability in the utility computing era: Towards reliable fog computing,&#8221; in International Conference on Systems, Signals, and Image Processing. IEEE, jul 2013, pp. 43&#8211;46. [Online]. Available: http://ieeexplore.ieee.org/document/6623445/</para>
<para>[2] Y. Nikoloudakis, S. Panagiotakis, E. Markakis, E. Pallis, G. Mastorakis, C. X. Mavromoustakis, and C. Dobre, &#8220;A Fog-Based Emergency System for Smart Enhanced Living Environments,&#8221; IEEE Cloud Computing, vol. 3, no. 6, pp. 54&#8211;62, nov 2016. [Online]. Available: http://ieeexplore.ieee.org/document/7802535/</para>
<para>[3] C. Dobre, C. X. Mavromoustakis, N. M. Garcia, G. Mastorakis, and R. I. Goleva, &#8220;Introduction to the AAL and ELE Systems,&#8221; Ambient Assisted Living and Enhanced Living Environments: Principles, Technologies and Control, pp. 1&#8211;16, jan 2016. [Online]. Available: https://www.sciencedirect.com/science/article/pii/B9780128051955000016</para>
<para>[4] B. McKenna, &#8220;Symantec&#8217;s Thompson pronounces old style IT security dead,&#8221; Network Security, vol. 2005, no. 2, pp. 1&#8211;3, 2005. [Online]. Available: http://linkinghub.elsevier.com/retrieve/pii/S1353485805001947</para>
<para>[5] &#8220;Cyber Security Experts &amp; Solution Providers &#124; FireEye.&#8221; [Online]. Available: https://www.fireeye.com/</para>
<para>[6] &#8220;ENISA Threat Landscape Report 2016 &#8212; ENISA.&#8221; [Online]. Available: https://www.enisa.europa.eu/publications/enisa-threat-landscape-report-2016. [Accessed: 20-Nov-2017].</para>
<para>[7] E. O. Yeboah-Boateng, Cyber-Security Challenges with SMEs in Developing Economies: Issues of Confidentiality, Integrity &amp; Availability (CIA). Institut for Elektroniske Systemer, Aalborg Universitet, 2013.</para>
<para>[8] E. K. Markakis, K. Karras, A. Sideris, G. Alexiou, and E. Pallis, &#8220;Computing, Caching, and Communication at the Edge: The Cornerstone for Building a Versatile 5G Ecosystem,&#8221; IEEE Communications Magazine, vol. 55, no. 11, pp. 152&#8211;157, nov 2017. [Online]. Available: http://ieeexplore.ieee.org/document/8114566/</para>
<para>[9] R. S. Sandhu and P. Samarati, &#8220;Access control: principle and practice,&#8221; IEEE Communications Magazine, vol. 32, no. 9, pp. 40&#8211;48, Sep. 1994.</para>
<para>[10] V. C. Hu et al., Guide to Attribute Based Access Control (ABAC) Definition and Considerations (Draft). 2013.</para>
<para>[11] S. Salonikias, I. Mavridis, and D. Gritzalis, &#8220;Access Control Issues in Utilizing Fog Computing for Transport Infrastructure,&#8221; in Critical Information Infrastructures Security, 2016, pp. 15&#8211;26.</para>
<para>[12] &#8220;extensible Access Control Markup Language (XACML) Version 3.0.&#8221; [Online]. Available: http://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-os-en.html. [Accessed: 18-Jan-2019].</para>
<para>[13] &#8220;REST Profile of XACML v3.0 Version 1.0.&#8221; [Online]. Available: http://docs.oasis-open.org/xacml/xacml-rest/v1.0/csprd03/xacml-rest-v1.0-csprd03.html. [Accessed: 18-Jan-2019].</para>
<para>[14] B. H. Schell, B. Schell, and C. Mart&#237;n, Webster&#8217;s New World Hacker Dictionary. John Wiley &amp; Sons, 2006.</para>
<para>[15] M. D. Hoeschele and M. K. Rogers, &#8220;CERIAS Tech Report 2005&#8211;19 Detecting Social Engineering,&#8221; 2004.</para>
<para>[16] F. Mouton, L. Leenen, and H. S. Venter, &#8220;Social Engineering Attack Detection Model: SEADMv2,&#8221; 2015, pp. 216&#8211;223.</para>
<para>[17] R. Bhakta and I. G. Harris, &#8220;Semantic analysis of dialogs to detect social engineering attacks,&#8221; in Semantic Computing (ICSC), 2015 IEEE International Conference on, 2015, pp. 424&#8211;427.</para>
<para>[18] S. Uebelacker and S. Quiel, &#8220;The Social Engineering Personality Frame-work,&#8221; 2014, pp. 24&#8211;30.</para>
<para>[19] N. Tsinganos, G. Sakellariou, P. Fouliras, and I. Mavridis, &#8220;Towards an Automated Recognition System for Chat-based Social Engineering Attacks in Enterprise Environments,&#8221; in Proceedings of the 13th International Conference on Availability, Reliability and Security &#8211; ARES 2018, Hamburg, Germany, 2018, pp. 1&#8211;10.</para>
<para>[20] C. Liu, Y. Mao, J. E. Van der Merwe, and M. F. Fernandez, &#8220;Cloud Resource Orchestration: A Data-Centric Approach,&#8221; in Proceedings of the biennial Conference on Innovative Data Systems Research (CIDR), 2011, pp. 241&#8211;248. [Online]. Available: http://www2.research.att.com/maoy/pub/cidr11.pdf</para>
<para>[21] A. Dubey and D. Wagle, &#8220;Delivering software as a service,&#8221; The McK-insey Quarterly, vol. 6, no. May, pp. 1&#8211;12, 2007. [Online]. Available: http://www.pocsolutions.net/Delivering_software_as_a_service.pdf</para>
<para>[22] K. Lane, &#8220;Overview Of The Backend as a Service (BaaS) Space,&#8221; 2013. [Online]. Available: http://www.integrove.com/wp-content/uploads/2014/11/api-evangelist-baas-whitepaper.pdf</para>
<para>[23] S. A. Fahmy, K. Vipin, and S. Shreejith, &#8220;Virtualized FPGA accelerators for efficient cloud computing,&#8221; in Proceedings IEEE 7th International Conference on Cloud Computing Technology and Science, CloudCom 2015. IEEE, nov 2016, pp. 430&#8211;435. [Online]. Available: http://ieeexplore.ieee.org/document/7396187/</para>
<para>[24] J. A. Williams, A. S. Dawood, and S. J. Visser, &#8220;FPGA-based cloud detection for real-time onboard remote sensing,&#8221; in Proceedings 2002 IEEE International Conference on FieId-Programmable Technology, FPT 2002. IEEE, 2002, pp. 110&#8211;116. [Online]. Available: http://ieeexplore.ieee.org/document/1188671/</para>
<para>[25] S. Byma, J. G. Steffan, H. Bannazadeh, A. Leon-Garcia, and P. Chow, &#8220;FPGAs in the cloud: Booting virtualized hardware accelerators with OpenStack,&#8221; in Proceedings 2014 IEEE 22nd International Symposium on Field-Programmable Custom Computing Machines, FCCM 2014. IEEE, may 2014, pp. 109&#8211;116. [Online]. Available: http://ieeexplore.ieee.org/document/6861604/</para>
<para>[26] L. Xu, W. Shi, and T. Suh, &#8220;PFC: Privacy preserving FPGA cloud A case study of MapReduce,&#8221; in IEEE International Conference on Cloud Computing, CLOUD. IEEE, jun 2014, pp. 280&#8211;287. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber= 6973752</para>
<para>[27] K. Karras, O. Kipouridis, N. Zotos, E. Markakis, and G. Bogdos, &#8220;A Cloud Acceleration Platform for Edge and Cloud,&#8221; in EnESCE: Workshop on Energy-efficient Servers for Cloud and Edge Computing, 2017. [Online]. Available: https://www.researchgate.net/publication/313236609</para>
<para>[28] Y. Nikoloudakis, E. Pallis, G. Mastorakis, C. X. Mavromoustakis, C. Skianis, and E. K. Markakis, &#8220;Vulnerability assessment as a service for fog-centric ICT ecosystems: A healthcare use case,&#8221; Peer-to-Peer Networking and Applications, 2019. [Online]. Available: https://doi.org/10.1007/s12083-019-0716-y</para>
</section>
</chapter>

<chapter class="chapter" id="ch05" label="5" xreflabel="5">
<title>CYBECO: Supporting Cyber-Insurance from a Behavioural Choice Perspective</title>
<para><b>Nikos Vassileiadis<sup>1</sup>, Aitor Couce Vieira<sup>2</sup>, David R&#237;os Insua<sup>2</sup>, Vassilis Chatzigiannakis<sup>3</sup>, Sofia Tsekeridou<sup>3</sup>, Yolanda G&#243;mez<sup>4</sup>, Jos&#233; Vila<sup>4</sup>, Deepak Subramanian<sup>5</sup>, Caroline Baylon<sup>5</sup>, Katsiaryna Labunets<sup>6</sup>, Wolter Pieters<sup>6</sup>, Pamela Briggs<sup>7</sup></b> <b>and Dawn Branley-Bell<sup>7</sup></b></para>
<para><sup>1</sup>Trek Consulting, Greece</para>
<para><sup>2</sup>Institute of Mathematical Sciences (ICMAT), Spanish National Research Council (CSIC), Spain</para>
<para><sup>3</sup>Intrasoft International, Greece</para>
<para><sup>4</sup>Devstat, Spain</para>
<para><sup>5</sup>AXA Technology Services, France</para>
<para><sup>6</sup>Faculty of Technology, Policy and Management, Delft University of Technology, the Netherlands</para>
<para><sup>7</sup>Psychology, University of Northumbria at Newcastle, United Kingdom</para>
<para>E-mail: n.vasileiadis@trek-development.eu; aitor.couce@icmat.es; david.rios@icmat.es; Vassilis.Chatzigiannakis@intrasoft-intl.com; Sofia.Tsekeridou@intrasoft-intl.com; ygomez@devstat.com; jvila@devstat.com; deepak.subramanian@axa.com; caroline.baylon@axa.com; K.Labunets@tudelft.nl; W.Pieters@tudelft.nl; p.briggs@northumbria.ac.uk; dawn.branley-bell@northumbria.ac.uk</para>
<para>Cyber-insurance can fulfil a key role in improving cybersecurity within companies by providing incentives for them to improve their security and by requiring certain minimum protection standards. Unfortunately, so far, cyber-insurance has not been widely adopted. CYBECO focuses on two aspects to fill this gap: (1) modelling cyber threat behaviour through adversarial risk analysis to support insurance companies in estimating risks and setting premiums, and (2) using behavioural experiments to improve IT owners&#8217; cybersecurity decisions. We thus facilitate risk-based cybersecurity investments, supporting insurers in their cyber offerings through a risk management modelling framework and tool.</para>

<section class="lev1" id="sec5-1">
<title>5.1 Introduction</title>
<para>Cyber security is increasingly perceived as a major global problem, as reflected by the World Economic Forum [1], and is becoming even more important as companies, administrations and individuals become ever more interconnected, facilitating the spread of cyberthreats. Famous examples include the Target 2014 data breach, in which a cyber attack on the company through one of its suppliers caused the loss of 70 million credit card details, entailing major reputational damage, and the NotPetya malware, which affected thousands of organisations worldwide with an estimated cost of more than 8 billion EUR.</para>
<para>Given the importance of this problem, numerous frameworks have been developed to support cybersecurity risk management, including ISO 27005 [2] or CORAS [3], among several others. Similarly, several compliance and control assessment frameworks, like ISO 27001 [4] or Common Criteria [5], provide guidance on the implementation of cybersecurity best practices. Their extensive catalogues of assets, controls and threats and their detailed guidelines for the implementation of countermeasures to protect digital assets facilitate cyber security engineering. However, a detailed study of the main approaches to cybersecurity risk management reveals that they often rely on risk matrices for risk analysis purposes, with shortcomings documented in e.g. Thomas et al. [6].</para>
<para>Moreover, with few exceptions like IS1 [7], such methodologies do not explicitly take into account the intentionality of certain threats, in contrast with the relevance that organisations like the Information Security Forum (ISF) [8] are starting to give to such threats. As a consequence, ICT owners may obtain unsatisfactory results regarding the prioritisation of cyber risks and the measures they should implement, all the more so given the increasing variety of threats and the increasing complexity of the risk management countermeasures available, including the recent emergence of cyber-insurance products [9].</para>
<para>The CYBECO project aims at providing a framework and a tool to facilitate cyber security resource allocation processes, including the provision of cyber-insurance, and, consequently, at contributing to a more cyber-secure environment.</para>
</section>

<section class="lev1" id="sec5-2">
<title>5.2 An Ecosystem for Cybersecurity and Cyber-Insurance</title>
<para>CYBECO includes a detailed analysis of the cyber-insurance (and cybersecurity) ecosystem. This is aimed at facilitating the use of the toolbox for specific stakeholder scenarios, as well as providing policy recommendations that, together with the toolbox, help achieve key goals. We identified several primary and secondary actors participating in the cyber-insurance ecosystem, together with the relationships that exist between them.</para>
<para>The main parties that we identified are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><i>insurance providers</i> who &#8220;assume risks of other parties in exchange for payment&#8221; [9];</para></listitem>
<listitem><para><i>insurance brokers</i> who provide advice to companies on the available insurance products matching their needs;</para></listitem>
<listitem><para><i>companies</i> that are interested in transferring part of their cyber-related risks through cyber-insurance. The reasons for purchasing cyber-insurance may differ depending on the company size.</para></listitem>
</itemizedlist>
<para>Secondary actors include <i>consumers</i> using services or products provided by companies; <i>experts</i> that provide professional services to the insurance companies (e.g., risk assessment, forensics, cyber incident counsel, legal and PR services); <i>regulators</i> managing corresponding business sectors; and other parties.</para>
<para>Based on discussions with representatives of the different actor types and on the existing literature, we identified their motivations and goals, which guide their behaviour in the ecosystem. An insurance provider is interested in increasing its market share, obtaining better actuarial data to improve risk assessment, and running a profitable business. Similarly, an insurance broker aims at making a profit, but also at providing its clients with high-quality advice about cyber risks. The companies try to get advice on security investments, to cover possible losses related to cyber risks and, in case of an incident, to get help with incident handling. At a higher level, we have a regulator or government actor whose primary interests are to increase the overall level of security and create a resilient ecosystem [10].</para>
<para>Current cybersecurity regulations and standards say little about policy measures related to cyber-insurance. Therefore, we adopted the framework proposed by Woods and Simpson [10] to identify possible policy measures that governments could consider for improving the cyber-insurance market. The framework provides six main themes for possible policy measures:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para><i>Wider adoption</i> covers measures like assigning financial costs to cyber events (i.e., regulatory fines), raising awareness that traditional insurance policies do not cover cyber risk, supporting market development via governmental procurement capability, and making cyber-insurance mandatory for specific business sectors.</para></listitem>
<listitem><para><i>Defining coverage</i> includes standardisation of the language used in cyber-insurance policies, promotion of cyber exclusion clauses in non-cyber policies, and providing certification for acts of cyber war or terrorism.</para></listitem>
<listitem><para><i>Data collection</i> includes policy measures such as the introduction of standard data formats for risk assessment and claim processes, requirements for risk assessment data collection, and collecting high-level data on the cyber-insurance market.</para></listitem>
<listitem><para><i>Information sharing</i> consists of measures like making available data collected by government (related to GDPR or NIS regulations), open access to sector-specific information-sharing initiatives (sector ISACs), creating a state- or EU-level cyber incident data repository and mandating other organisations to share data.</para></listitem>
<listitem><para><i>Best practice</i> includes defining cybersecurity best practices that cyber-insurers should check with their clients, or even demand, and, at the same time, implementing regulations that clarify the liability of insurers giving security advice.</para></listitem>
<listitem><para><i>Catastrophic loss</i> comprises policy measures related to the role of government as insurer of last resort, including different models for insuring catastrophic events (e.g. terrorism).</para></listitem>
</orderedlist>
<para>To better understand which policy measures have more influence on the ecosystem, we mapped the goals of the actors to Woods and Simpson&#8217;s framework. Wider adoption of cyber-insurance implies growth of the market and, therefore, supports goals like increasing market share for insurers and making a profit for insurers and brokers. At the same time, wider adoption means that more companies insure their cyber risks, implying that the resilience of the ecosystem also increases. Policy measures related to coverage definition help brokers to better advise companies about relevant insurance products, meaning that companies get an appropriate policy to cover their cyber risks. Wider use of cyber exclusions in non-cyber policies could improve sales of cyber-insurance products, contributing to the profitability of insurers and brokers.</para>
<para>Data collection policy measures impact the insurers&#8217; goal of having better actuarial data. Information sharing measures also supply insurers with actuarial data and help brokers to provide clients with high-quality advice about cyber risks, as brokers can obtain real information about current cyber incidents. Security best practices help brokers advise their clients on cyber risks and countermeasures, meaning that companies get advice about which security investments to make. By using security standards in cyber-insurance risk assessment, and even by making security best practices a requirement in cyber-insurance policies, the government could affect the overall level of security in the ecosystem. Finally, catastrophic loss measures contribute to increasing ecosystem resilience, which is the goal of the governmental actor.</para>
<para>The only goal not covered by this policy measures framework relates to company actors who need assistance in incident handling. However, existing practice shows that most insurers offer their clients crisis management services as part of their cyber-insurance products. Such services are mostly provided by partnering organisations, and their cost is included in the policy coverage [11, 12].</para>
<para>Details on the cyber-insurance ecosystem, the associated policy recommendations, and their connection with the CYBECO toolbox are described in the associated deliverable [14].</para>
</section>

<section class="lev1" id="sec5-3">
<title>5.3 The Basic CYBECO Model: Choosing the Optimal Cybersecurity and Cyber-Insurance Portfolio</title>
<para>CYBECO supports several cyber-insurance related decisions. The main model aims at providing support to an organisation that needs to allocate its cybersecurity resources, including the adoption of cyber-insurance. In it, we distinguish between a Defender, whom our methodology supports in her resource allocation, and an Attacker, who will try to perpetrate attacks against the Defender in pursuit of certain goals.</para>
<para>We represent the problem as a bi-agent influence diagram (BAID) in <link linkend="F5-1">Figure <xref linkend="F5-1" remap="5.1"/></link>, with the terminology used in [14]. The diagram includes oval nodes, which represent uncertainties modelled with probability distributions; hexagonal utility nodes, which represent preferences modelled with a utility function; rectangular nodes, which represent decisions modelled through the set of relevant alternatives at that point; and, finally, double-oval nodes, which represent deterministic quantities modelled through a function evaluating the predecessors of the corresponding node. The diagram also includes arrows, to be interpreted as in standard influence diagrams [15]. Light nodes belong only to the Defender problem; dark ones to the Attacker; and, finally, striped ones are relevant to both agents.</para>
<fig id="F5-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-1">Figure <xref linkend="F5-1" remap="5.1"/></link></label>
<caption><para>BAID describing the cybersecurity resource allocation problem.</para></caption>
<graphic xlink:href="graphics/ch005_fig001.jpg"/>
</fig>
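<para>To make the node taxonomy concrete, the following Python sketch encodes BAID nodes and two basic structural checks. The node names and validation rules shown are illustrative; they are not part of the CYBECO specification.</para>

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of the bi-agent influence diagram. `kind` is 'chance'
    (oval), 'decision' (rectangle), 'utility' (hexagon) or
    'deterministic' (double oval); `owner` is 'defender', 'attacker'
    or 'both' (the striped nodes in the figure)."""
    name: str
    kind: str
    owner: str
    parents: list = field(default_factory=list)

def check_baid(nodes):
    """Minimal structural checks: arrows must point at known nodes,
    and utility nodes must be sinks (have no children)."""
    names = {n.name for n in nodes}
    utilities = {n.name for n in nodes if n.kind == "utility"}
    for n in nodes:
        assert set(n.parents) <= names, f"unknown parent in {n.name}"
        assert not set(n.parents) & utilities, "utility nodes are sinks"
    return True
```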
<para>We now outline the BAID. First, we include a description of the organisation&#8217;s profile and features, including its assets. We then identify the threats relevant to the organisation; following the ISF classification, we distinguish between environmental, accidental and non-targeted cyber threats, which we model through uncertainty nodes. Besides, we also consider targeted cyber threats, modelled as decisions but associated with a different agent, the Attacker. Having determined the threats and relevant assets, we may identify the impacts, which we separate into insurable and non-insurable ones.</para>
<para>Once the relevant threats and impacts for the organisation at hand have been identified, we may determine the actions that may be undertaken to mitigate the likelihood and/or impact of the threats. We distinguish three types of instruments: proactive security controls, reactive security controls and insurance. These instruments may have to satisfy certain constraints (financial, technical, compliance, etc.). Besides, they will have security and insurance costs, which will typically be deterministic. With all the relevant attributes in place, we may then prepare the preference model for the Defender through her utility.</para>
<para>We turn now to the remaining elements of the Attacker problem, mainly his detection and identification. Finally, with all his relevant elements in place, we may build a preference model for the Attacker through his utility, represented by a value node.</para>
<para>Based on such a model, we build the so-called Defender problem. This facilitates the quantitative modelling of the problem using conditional probability distributions at the uncertainty nodes and a utility function modelling the preferences and risk attitudes of the Defender. All these models are standard in decision analysis, except those referring to the likely threats performed by the attacker(s), which entail strategic thinking.</para>
<para>To facilitate their assessment, we consider the so-called Attacker problem. As we do not have full access to the attackers to elicit their beliefs and preferences, we use random probabilities and utilities to model our uncertainty about them. We then simulate from such a problem to find the corresponding random optimal alternatives, which help us obtain the required attack forecasts. These feed back into the Defender problem, which is finally solved to provide the optimal proactive portfolio, reactive portfolio and insurance that should be implemented by the supported organisation.</para>
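<para>The attack-forecasting step can be sketched as a simple Monte Carlo simulation: random utilities and detection probabilities are drawn for the Attacker, his problem is solved for each draw, and the relative frequencies of the optimal attacks form the forecast fed back into the Defender problem. The uniform distributions below are illustrative assumptions, not the calibrated models of the project.</para>

```python
import random

def attack_forecasts(attacks, n_sims=10_000, seed=7):
    """Monte Carlo sketch of the Attacker problem. Since the
    Attacker cannot be elicited directly, random gains and detection
    probabilities are drawn per simulation; the frequency with which
    each attack comes out optimal serves as the attack forecast."""
    rng = random.Random(seed)
    counts = dict.fromkeys(attacks, 0)
    for _ in range(n_sims):
        def value(name):
            # random attacker utility, discounted by random detection risk
            gain = rng.uniform(0.0, attacks[name]["max_gain"])
            p_detect = rng.uniform(0.0, attacks[name]["max_p_detect"])
            return gain * (1.0 - p_detect)
        best = max(attacks, key=value)
        counts[best] += 1
    return {name: c / n_sims for name, c in counts.items()}
```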
<para>This and other models for other cyber-insurance related decisions are fully described in [17].</para>
</section>

<section class="lev1" id="sec5-4">
<title>5.4 Validating CYBECO</title>
<para>The findings of the CYBECO project have been validated in several ways:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>A set of use cases and scenarios was developed to verify whether the proposed models were robust in all situations. They are available in [18] and have confirmed the validity of our approach, although some fine-tuning, specification and further modelling has been required.</para></listitem>
<listitem><para>We held a workshop in which we presented the CYBECO toolbox wireframes to a number of cybersecurity professionals and solicited their feedback. This was essential for the fine-tuning of the project findings.</para></listitem>
<listitem><para>The last validation approach focused on the application of behavioural-experimental methods to test the assumptions of the CYBECO models on the purchase behaviour of cyber-protection measures and cyber-insurance, as well as on the formation of beliefs about cyber-risk and vulnerability levels. To this end, the project designed and ran a large-scale online behavioural economic experiment with a total sample of 4,800 subjects from Germany, Poland, Spain and the UK. Beyond the validation of the model, the experiment has provided behavioural insights relevant for the development of the cyber-insurance market in the EU.</para></listitem>
</orderedlist>
<para>The structure of the experiment was as follows. In a controlled gamified environment, subjects had to design the protection and cyber-insurance strategy for an SME and were required to carry out certain tasks online (see <link linkend="F5-2">Figure <xref linkend="F5-2" remap="5.2"/></link>). After that, each subject might receive a random attack, with a success probability depending on the purchased protection measures and the level of security of her online behaviour. Following the methodology of behavioural economics, the decisions of the participants and the random events in the experiment (the attack) had an actual impact on their economic incentives, to be received after completing the experiment. To check belief formation, the process was repeated twice. The experiment also included a questionnaire to measure risk attitude and the Protection-Motivation psychological variables.</para>
<fig id="F5-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-2">Figure <xref linkend="F5-2" remap="5.2"/></link></label>
<caption><para>Screenshot of the online cybersecurity shop in the experiment.</para></caption>
<graphic xlink:href="graphics/ch005_fig002.jpg"/>
</fig>
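<para>The payoff mechanic of the experiment can be illustrated with the following toy round, in which the attack success probability shrinks with the purchased protection level and with the security of the subject&#8217;s online behaviour. All numeric parameters here are invented for the example, not those of the actual experiment.</para>

```python
import random

def play_round(protection, behaviour_security, endowment=100,
               base_attack_p=0.6, loss=60, rng=random):
    """Toy version of the experiment's incentive scheme: protection
    and behaviour_security (both in [0, 1]) reduce the chance of a
    successful attack; a successful attack costs `loss` units of the
    subject's endowment."""
    p_success = base_attack_p * (1 - protection) * (1 - behaviour_security)
    attacked = rng.random() < p_success
    return attacked, endowment - (loss if attacked else 0)
```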
<para>The economic experiment validated the underlying assumptions of the model and provided other relevant insights. Experimental results showed that belief formation depends on the context of the attack, with participants selecting higher protection and insurance levels under the threat of intentional attacks (cybercrime) than of random ones (a random virus). The experiment also analysed the impact of the experience of suffering a cyberattack on the updating of beliefs and protection-insurance strategies. The results show the presence of two opposite reactions: although an attack does in general motivate participants to increase their protection levels, suffering an attack reduced confidence in the effectiveness of the protection measures for the 15.1% of participants who lowered their protection level after the attack. As regards insurance behaviour, experimental subjects seem to purchase insurance levels above the optimal level. Moreover, the experiment excluded moral hazard in cyber-insurance: purchasing a cyber-insurance policy does not reduce the security level of online behaviour and is positively correlated with the acquisition of stronger cybersecurity protection measures. An additional relevant result of the experiment is the existence of vulnerable segments of the population (elderly citizens, for instance) who, although risk averse and concerned about cybersecurity, behave insecurely online. The likely reason for this lack of security is that they do not know how to behave in a safer way.</para>
</section>

<section class="lev1" id="sec5-5">
<title>5.5 The CYBECO Decision Support Tool</title>
<para>When compared with standard approaches in cybersecurity, the CYBECO paradigm provides a more comprehensive method leading to a more detailed modelling of cyber risk problems, yet one that is, no doubt, more demanding in terms of analysis. We believe, though, that in many organizations, especially in critical infrastructure sectors, the stakes at play are so high that this additional work is worth the effort.</para>
<para>To facilitate implementation, we are converting our generic actionable model into a decision support system (DSS), the CYBECO tool, for cybersecurity risk management at a strategic level. The objective of such a DSS is to provide the best portfolio of security controls and insurance products, given a predefined budget and other technical and legal constraints, for a certain planning period.</para>
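<para>The optimisation behind such a DSS can be sketched as a brute-force search: enumerate every affordable combination of security controls and insurance policies, score each by its cost plus expected residual loss, and keep the best. All names, costs and effectiveness figures below are hypothetical placeholders, not CYBECO&#8217;s actual model or catalogue.</para>

```python
from itertools import combinations

CONTROLS = {               # name: (cost, fraction of attack risk removed)
    "firewall": (200, 0.30),
    "backup":   (150, 0.25),
    "training": (100, 0.15),
}
INSURANCE = {None: (0, 0.0), "basic": (120, 0.5), "full": (250, 0.9)}


def best_portfolio(budget, base_risk=0.4, exposure=5000):
    """Return (score, controls, policy) minimising cost + expected loss
    among all combinations whose total cost fits within the budget."""
    best = None
    names = list(CONTROLS)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            control_cost = sum(CONTROLS[n][0] for n in subset)
            risk = base_risk
            for n in subset:
                risk *= 1 - CONTROLS[n][1]   # controls reduce risk multiplicatively
            for policy, (policy_cost, coverage) in INSURANCE.items():
                total_cost = control_cost + policy_cost
                if total_cost > budget:
                    continue
                expected_loss = risk * exposure * (1 - coverage)
                score = total_cost + expected_loss
                if best is None or score < best[0]:
                    best = (score, subset, policy)
    return best
```

<para>With a budget of zero, the only feasible option is to do nothing and bear the full expected loss; any positive budget can only score as well or better.</para>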
<para>The toolbox adopts the form of an online calculator (see <link linkend="F5-3">Figure <xref linkend="F5-3" remap="5.3"/></link>) that guides users through analysing their current cybersecurity risk level and the optimal cybersecurity strategy for their specific needs. The calculator is presented as a multi-step, visually enriched online form, which asks the pertinent questions (e.g., company size, characteristics, relevant threats, relevant security measures and insurance products, relevant impacts, etc.) and offers the best option for the stakeholder (SME, large industry) based on the outcomes of the CYBECO cyber risk management models.</para>
<fig id="F5-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F5-3">Figure <xref linkend="F5-3" remap="5.3"/></link></label>
<caption><para>A snapshot of the CYBECO tool, gathering inputs on assets to feed the cyber risk analysis tool.</para></caption>
<graphic xlink:href="graphics/ch005_fig003.jpg"/>
</fig>
<para>To enhance the usability, the visual appearance of outputs, and the general user-friendliness of the calculator, three types of user-oriented validations have been undertaken to collect relevant feedback. First, we designed and implemented a behavioural economic experiment with a sample of 2,000 potential users of the calculator (workers in SMEs in managerial or cybersecurity-related positions) in Germany, Poland, Spain and the UK. In a gamified controlled environment, the participants were asked to define the cyber-protection and cyber-insurance strategies of an SME using five different framings of the output of the CYBECO calculator. The experiment showed that the potential users of the CYBECO toolbox tend to use it more as an information source for making such decisions in a better-informed manner than as an expert tool able to guide them to the best option and provide relevant recommendations (only 30% of the users declared to have purchased the strategy recommended by the tool). It must be highlighted that this result is not attributable to a lack of understanding of the ranking criteria; rather, users consciously prefer a different protection approach, coverage or price level than the one dynamically recommended by the toolbox. Another evaluation target was the user navigation paths offered by the toolbox, which were evaluated by two focus groups with about 50 actual users and helped in improving its visual aspect. Finally, a rich set of use cases has been developed and applied as usage patterns on the toolbox to cross-check the correct implementation of the cyber risk analysis algorithms.</para>
</section>

<section class="lev1" id="sec5-6">
<title>5.6 Conclusion</title>
<para>We have provided a brief summary of some of the ongoing and expected achievements of the CYBECO project. On the supply side, we expect that end-users will benefit from better founded and designed cyber-insurance products and cyber risk management frameworks. On the demand side, we expect that end-users will benefit from a well-founded tool that allows them to determine their optimal cybersecurity investments, including the appropriate cyber-insurance product. Globally, society as a whole would benefit, as CYBECO helps in creating a more secure environment.</para>
<para>In a nutshell, by properly modelling and combining decision-making behaviour surrounding cyber threats (risk generation), the decision-making behaviour of insurance companies (risk assessment) and the decision-making behaviour of IT owners (which includes cyber-insurance), we hope to help mitigate cyber risks at the global level.</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>CYBECO: Supporting cyberinsurance from a behavioural choice perspective is a project funded by the H2020 programme through grant agreement no. 740920.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] World Economic Forum, &#8220;The Global Risks Report 2019,&#8221; 2019.</para>
<para>[2] International Organization for Standardization, ISO/IEC 27005 &#8211; Information Security Risk Management, 2013.</para>
<para>[3] M. S. Lund, B. Solhaug and K. St&#248;len, Model-driven Risk Analysis: The CORAS Approach, Springer, 2010.</para>
<para>[4] International Organization for Standardization, ISO/IEC 27001 &#8211; Information Security Management Systems &#8211; Requirements, 2013.</para>
<para>[5] The Common Criteria Recognition Agreement Members., Common Criteria for Information Technology Security Evaluation, Version 3.1 Release 4, 2009.</para>
<para>[6] P. Thomas, R. B. Bratvold and J. E. Bickel, &#8220;The risk of using risk matrices,&#8221; in <i>SPE Annual Technical Conference and Exhibition 2013</i>., 2013.</para>
<para>[7] National Technical Authority for Information Assurance (UK), HMG IA Standard Number 1., 2012.</para>
<para>[8] Information Security Forum, Information Risk Assessment Methodology 2, 2016.</para>
<para>[9] A. Marotta, F. Martinelli, S. Nanni, A. Orlando and A. Yautsiukhin, &#8220;Cyber-insurance survey,&#8221; <i>Computer Science Review</i>, 2017.</para>
<para>[10] PricewaterhouseCoopers, &#8220;The Global State of Information Security Survey 2018,&#8221; 2017.</para>
<para>[11] D. Woods and A. Simpson, &#8220;Policy measures and cyber insurance: a framework,&#8221; <i>Journal of Cyber Policy</i>, vol. 2, no. 2, pp. 209&#8211;226.</para>
<para>[12] S. Romanosky, L. Ablon, A. Kuehn and T. Jones, &#8220;Content analysis of cyber insurance policies: how do carriers write policies and price cyber risk?,&#8221; in <i>Workshop on Economics of Information Security</i>, 2017.</para>
<para>[13] B. Nieuwesteeg, L. Visscher and B. de Waard, &#8220;The law and economics of cyber insurance contracts: a case study,&#8221; <i>European Review of Private Law</i>, vol. 26, no. 3, pp. 371&#8211;420, 2018.</para>
<para>[14] The CYBECO Consortium, &#8220;D7.1 &#8211; CYBECO Policy Recommendations,&#8221; 2019.</para>
<para>[15] D. Banks, J. Rios and D. Rios Insua, Adversarial Risk Analysis, Taylor and Francis, 2015.</para>
<para>[16] R. D. Shachter, &#8220;Evaluating Influence Diagrams,&#8221; <i>Operations Research</i>, vol. 34, no. 6, pp. 871&#8211;882, 1986.</para>
<para>[17] The CYBECO Consortium, &#8220;D3.1 &#8211; Modelling framework for cyber risk,&#8221; 2018.</para>
<para>[18] The CYBECO Consortium, &#8220;D4.1 &#8211; Cyber-Insurance Use-Cases and Scenarios,&#8221; 2018.</para>
</section>
</chapter>

<chapter class="chapter" id="ch06" label="6" xreflabel="6">
<title>Cyber-Threat Intelligence from European-wide Sensor Network in SISSDEN</title>
<para><b>Edgardo Montes de Oca<sup>1</sup>, Jart Armin<sup>2</sup> and Angelo Consoli<sup>3</sup></b></para>
<para><sup>1</sup>Montimage Eurl, 39 rue Bobillot, Paris, France</para>
<para><sup>2</sup>CyberDefcon BV, Herengracht 282, 1016 BX Amsterdam, the Netherlands</para>
<para><sup>3</sup>Eclexys Sagl, Via Dell Inglese 6, Riva San Vitale, Switzerland</para>
<para>E-mail: edgardo.montesdeoca@montimage.com; jart@cyberdefcon.com; angelo.consoli@eclexys.com</para>
<para>SISSDEN is a project aimed at improving the cyber security posture of EU entities and end users through the development of situational awareness and the sharing of actionable information. It builds on the experience of Shadowserver, a non-profit organization well known in the security community for its efforts in the mitigation of botnet and malware propagation, its free-of-charge victim notification services, and its close collaboration with Law Enforcement Agencies (LEAs), national CERTs, and network providers. The core of SISSDEN is a worldwide sensor network which is deployed and operated by the project consortium. This passive threat data collection mechanism is complemented by behavioural analysis of malware and multiple external data sources. Actionable information produced by SISSDEN provides no-cost victim notification and remediation via organizations such as CERTs, ISPs, hosting providers and LEAs such as EC3. It will benefit SMEs and citizens that do not have the capability to resist threats alone, allowing them to participate in this global effort and profit from the improved analysis and exchange of security intelligence to effectively prevent and counter security breaches. The main goal of the project is the creation of multiple high-quality feeds of actionable security information that can be used for remediation purposes and for proactive tightening of computer defences. This is achieved through the development and deployment of a distributed sensor network based on state-of-the-art honeypot and darknet technologies, the creation of a high-throughput data processing centre, and the provisioning of in-depth analytics, metrics and reference datasets of the collected data.</para>

<section class="lev1" id="sec6-1">
<title>6.1 Introduction</title>
<para>The primary data collection mechanism at the heart of the SISSDEN project<footnote id="fn_1" label="1"> <para>SISSDEN (Secure Information Sharing Sensor Delivery event Network) is an H2020 project. See https://cordis.europa.eu/project/rcn/202679_en.html and https://sissden.eu/ for more information.</para></footnote> is a sensor network of honeypots and darknets. The sensor network is composed of VPS-provider-hosted nodes and nodes donated to the project by third parties acting as endpoints. These VPS nodes/endpoints are not the actual honeypots themselves. Instead, they act as layer 2 tunnels to the SISSDEN datacenter. Attack/scan traffic to the VPS nodes is sent via these tunnels to corresponding VMs which run the actual honeypots. The honeypots in the datacenter then respond to the attacks/scans using the IP addresses of the VPS nodes.</para>
<para>This approach allows for easier management of the honeypots &#8211; instead of having to remotely manage (and maintain) honeypots at the VPS provider locations, all of them can be centrally managed in one datacenter.</para>
<para>Each sensor endpoint has multiple IPv4 addresses &#8211; one for management, the others for tunnelling to the real honeypots.</para>
<para>As of 14 January 2019, SISSDEN has 226 operational nodes running, spread across 58 countries. A total of 953 IP addresses from 112 ASNs are used, covering 375 /24 networks.</para>
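<para>Coverage figures of this kind can be computed from the sensor address list with the Python standard library alone. The sketch below (using documentation-range example addresses) shows one plausible way to count distinct addresses and /24 networks; IPv4 addressing is assumed.</para>

```python
import ipaddress

def sensor_coverage(sensor_ips):
    """Count distinct sensor IPv4 addresses and the distinct /24
    networks they fall into (as in the deployment statistics above)."""
    slash24s = {
        ipaddress.ip_network(f"{ip}/24", strict=False)  # strict=False masks host bits
        for ip in sensor_ips
    }
    return {"addresses": len(set(sensor_ips)), "slash24_networks": len(slash24s)}
```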
<para>The following world map (<link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link>) shows the current snapshot of operational sensor IPs:</para>
<para>Nine different honeypot types are currently deployed. These are focused on observing different forms of attacks against SSH/telnet services, general or specialised web services, remote management protocols, databases, mail relays, ICS devices, etc., including exploits, scans and brute-force attempts. Information about these attacks is disseminated to 95+ National CSIRTs and 4,200+ network owners via Shadowserver&#8217;s free daily remediation feeds. These are marked with source &#8216;SISSDEN&#8217;. One can subscribe to SISSDEN feeds via the SISSDEN Customer Portal (https://portal.sissden.eu).</para>
<para>To capitalise on the tools and knowhow from the H2020 SISSDEN project and assure the sustainability of the results, innovative real-time Cyber Threat Intelligence data for timely threat detection and prevention will be provided by a new start-up company called SISSDEN BV (https://sissden.com), launched by three SME partners (CyberDefcon, UK/The Netherlands; Montimage, France; and Eclexys, Switzerland).</para>

<fig id="F6-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-1">Figure <xref linkend="F6-1" remap="6.1"/></link></label>
<caption><para>Map of deployed SISSDEN sensors.</para></caption>
<graphic xlink:href="graphics/ch006_fig001.jpg"/>
</fig>
</section>

<section class="lev1" id="sec6-2">
<title>6.2 SISSDEN Objectives and Results</title>
</section>

<section class="lev2" id="sec6-2-1">
<title>6.2.1 Main SISSDEN Objectives</title>
<para>The main objectives of the SISSDEN project are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Create a large distributed sensor network. Over 100 passive sensors based on current and beyond-state-of-the-art honeypot and darknet technologies are deployed across multiple organisations, including all 28 EU member states and 6 candidate countries, and are being used to observe malicious activities on an unprecedented scale, without intercepting any legitimate traffic.</para></listitem>
<listitem><para>Advancements in attack detection. New types of honeypots, darknets and probes are deployed to detect, analyse and alert on types of attacks not widely detected today, such as reflective DDoS amplification or attacks against Internet of Things (IoT) devices, which are expected to increase significantly in the coming years as a range of new network-centric technologies are embraced by consumers and SMEs globally.</para></listitem>
<listitem><para>Advancements in malware analysis and botnet tracking. The large sensor network is augmented by an innovative new generation of enhanced sandbox technologies designed for long running monitoring of malware specimen execution and behavioural clustering, to provide even more information on current threats.</para></listitem>
<listitem><para>Improving the fight against botnets. Sensor and sandbox data collected is used for detailed studies of botnet infrastructures. Long-term observation of multiple families of current botnets will support antibotnet research and law enforcement activities. Output will closely align with the existing European anti-botnet and anti-cybercrime strategies, as well as providing support to proven strong LEA partnerships, such as with Europol&#8217;s European Cybercrime Center (EC3).</para></listitem>
<listitem><para>Collect, store, analyse and reliably process Internet-scale security data sets. The inherent challenges of building and continuously operating reliable data collection, storage, exchange, analysis and reporting systems at high volumes are solved by multiple innovations in sensor and backend packaging, deployment, integration and data searching, based on the SISSDEN consortium&#8217;s extensive experience with &#8220;big data&#8221; approaches and high-volume transactional and non-relational data systems.</para></listitem>
<listitem><para>Share high-quality actionable information on a large scale. SISSDEN produces large amounts of intelligence on current threats and all of it is being shared with stakeholders and the larger community, at no cost to them, for the purposes of remediation or for early warning. The project distributes high-quality data feeds to the majority of the National CERTs in Europe, as well as worldwide, along with Law Enforcement Agencies, Internet providers, network owners and other vetted organisations fighting to defend their networks, SME customers, EU citizens and Internet users against continuous attacks.</para></listitem>
<listitem><para>Provide objective situational awareness through metrics. Access to huge amounts of high-quality data on cyber threats, primarily obtained by the sensor network but also contributed by the members of the SISSDEN consortium, provides metrics that offer an objective, non-vendor-biased overview of the threat landscape in the EU and individual member states.</para></listitem>
<listitem><para>Create and publish a large-scale curated reference data set. A significant subset of the data produced by SISSDEN is being made available to vetted researchers and academia, addressing the clear and urgent need for large-scale, high-quality, and recent security datasets to improve or test defensive solutions.</para></listitem>
</itemizedlist>
</section>

<section class="lev2" id="sec6-2-2">
<title>6.2.2 Technical Architecture</title>
<para><link linkend="F6-2">Figure <xref linkend="F6-2" remap="6.2"/></link> below provides a simplified view of the SISSDEN technical architecture.</para>
<para>Components located at the EU datacentre include the Frontend Servers, Backend Servers and Utility Server pictured on the diagram. The sensor network consists of remote VPS Provider endpoints located at various VPS hosting providers (i.e., outside the EU datacentre), configured as transparent network tunnel endpoints forwarding traffic to the EU datacentre. SISSDEN collects attack data such as network scans, spam email, malware binaries, brute-force attacks, interactive attacker logins, etc.</para>
</section>

<section class="lev3" id="sec6-2-2-1">
<title>6.2.2.1 Remote endpoint sensors (VPS)</title>
<para>Each remote endpoint sensor contains only the minimum amount of configuration and management capabilities required to securely participate as one end of a transparent network tunnel. They are configured to act as a long virtual Ethernet cable between the VPS and SISSDEN&#8217;s local data centre frontend. At the Frontend in the EU Datacenter, a tunnel server terminates each transparent layer 2 Ethernet tunnel and delivers the Ethernet frames to an isolated, dedicated Virtual Local Area Network (VLAN).</para>
</section>

<section class="lev3" id="sec6-2-2-2">
<title>6.2.2.2 Frontend servers</title>
<para>Traffic from the remote sensor endpoints is received by multiple types of honeypot systems, implemented as VMs running on the EU Datacentre Frontend. Each honeypot VM emulates one or more potential vulnerabilities and collects data about attacks observed against those vulnerabilities. The honeypots have a standard configuration and standard data collection formats enabled. Their data collection capabilities are complemented by network packet capture components (using solutions such as MMT and Snort) running on separate VM instances that listen to all traffic coming to them. SISSDEN system management components centrally manage all VM configuration, orchestration and operations.</para>
<fig id="F6-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-2">Figure <xref linkend="F6-2" remap="6.2"/></link></label>
<caption><para>High-level architecture of the SISSDEN network.</para></caption>
<graphic xlink:href="graphics/ch006_fig002.jpg"/>
</fig>
<para>Honeypot data and data from the network capture components are being ingested into the Backend datastores located at the Backend Servers at the EU Datacentre.</para>
<para>Tools like MMT and Snort are used to capture and analyse the network traffic. Snort identifies attacks using known attack signatures. MMT (Montimage&#8217;s monitoring framework), adapted for SISSDEN, characterises malicious behaviour corresponding to both known and unknown attacks. This information, referred to as CTI, is used by the monitoring framework to automate the real-time prevention and mitigation of attacks on an organisation (large or small) before they reach its network.</para>
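<para>The two detection styles can be contrasted with a toy classifier: a signature match identifies a known attack (in the spirit of Snort rules), while a behavioural check flags anomalies such as an abnormal request rate (in the spirit of MMT). The signatures and threshold below are hypothetical illustrations, not actual Snort or MMT rules.</para>

```python
import re

# Hypothetical signature set: a payload pattern identifies a known attack.
SIGNATURES = {
    "sql_injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def classify_event(payload, requests_per_minute, rate_threshold=300):
    """Signature match -> known attack; otherwise flag behavioural
    anomalies such as an abnormal request rate; else benign."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return ("known", name)
    if requests_per_minute > rate_threshold:
        return ("suspicious", "abnormal_request_rate")
    return ("benign", None)
```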
</section>

<section class="lev3" id="sec6-2-2-3">
<title>6.2.2.3 External partner and third-party systems</title>
<para>The data collected by the SISSDEN sensor network is supplemented by data from external systems operated by SISSDEN partners. These include separate honeypot networks, darknets, sandbox and malware analysis systems, threat intelligence platforms, etc. As with the sensor network, data from these systems is being ingested in various forms and stored in the Backend data stores.</para>
<para>To avoid unnecessary software development, SISSDEN makes use of and extends background partner systems, which aggregate data from multiple sources and provide a well-defined RESTful API for accessing normalized datasets.</para>
</section>

<section class="lev3" id="sec6-2-2-4">
<title>6.2.2.4 Backend servers</title>
<para>Data from SISSDEN&#8217;s various data collection systems is presented in multiple formats, such as live-streamed events, log files, PCAP files, and other file-format data. Most of these data types are stored in their raw format in local data storage systems, at least for predetermined periods or up to repository size quotas. Some of the data types require parsing, normalization and ingestion into backend data indexes in support of free daily remediation report generation, high-value CTI, data analytics and ad-hoc querying.</para>
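<para>The parsing and normalization step can be sketched as follows. Both the raw log line format and the flat output schema are hypothetical illustrations, not SISSDEN&#8217;s actual formats.</para>

```python
def normalize_event(raw_line):
    """Parse a raw honeypot log line of the (hypothetical) form
    '2019-01-14T10:22:31Z cowrie 203.0.113.7 2222 login-attempt'
    into a flat record suitable for indexing and report generation."""
    timestamp, sensor, src_ip, port, event = raw_line.split()
    return {
        "timestamp": timestamp,
        "honeypot": sensor,
        "source_ip": src_ip,
        "dest_port": int(port),
        "event_type": event,
    }
```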
</section>

<section class="lev3" id="sec6-2-2-5">
<title>6.2.2.5 External reporting system</title>
<para>One of the main purposes of the SISSDEN project is to collect Internet scale, timely security event data and make it available at no cost to vetted National CERTs, Network Owners and organizations who sign up for SISSDEN&#8217;s free daily alerts.</para>
<para>The various sources of data collected by SISSDEN, such as honeypot and darknet data, malware analysis data, and botnet tracking information, as well as ingested external third-party data sources, are collected and stored locally in the SISSDEN backend. Each day, recipients who have voluntarily signed up for free reporting receive by email multiple reports covering the different types of potentially malicious activity detected by SISSDEN on their nominated, verified IP/ASN/CIDR addresses.</para>
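<para>The per-recipient report generation can be sketched as matching each observed event against the subscribers&#8217; verified CIDR blocks. The event and subscription schemas below are assumptions for illustration only.</para>

```python
import ipaddress
from collections import defaultdict

def build_daily_reports(events, subscriptions):
    """Group events by subscriber: an event is reported to every
    subscriber whose registered CIDR block contains its source IP."""
    networks = {owner: ipaddress.ip_network(cidr)
                for owner, cidr in subscriptions.items()}
    reports = defaultdict(list)
    for event in events:
        ip = ipaddress.ip_address(event["source_ip"])
        for owner, net in networks.items():
            if ip in net:
                reports[owner].append(event)
    return dict(reports)
```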
<para>On the other hand, SISSDEN BV will further provide real-time CTI, through a subscription service, allowing any organisation to block identified cyber-attack campaigns before they reach its networks.</para>
</section>

<section class="lev3" id="sec6-2-2-6">
<title>6.2.2.6 Utility server</title>
<para>Various analytics are performed on the data collected by SISSDEN. An analytics platform, hosted on the Utility Server, is being extended. These analytics solutions provide additional insight into threats propagating on the Internet, pooling together partner resources dedicated to the project. In addition, metrics are applied to the collected datasets to provide improved situational awareness. They can be used as a basis on which informed decisions can be made to mitigate threats. Curated reference datasets are also being made available to vetted researchers through the Utility Server. Interactions with the above are described in more detail in this document and take place through the external interfaces illustrated in the diagram (with the exception of the analytics platform, which is only available to SISSDEN partners).</para>
<para>SISSDEN presents a number of systems to interact with the public and external partners. These include a Public website (mostly containing information about the project), email communication (reports), a Customer Portal, Metrics Dashboard, etc. Hosted on the Utility Server, these public facing systems include mechanisms to communicate with the consortium, sign up to request free of charge reports, gain access to the curated reference data set, provide customer feedback, and manage opt in/out and data privacy issues.</para>
</section>

<section class="lev2" id="sec6-2-3">
<title>6.2.3 Concrete Examples</title>
<para>Two use cases, among many, have been selected to illustrate the real added-value of the CTI information that is provided.</para>
</section>

<section class="lev3" id="sec6-2-3-1">
<title>6.2.3.1 Use Case 1: Targeted Cowrie attack that can be anticipated by the analysis of the traffic before it occurs</title>
<para>Targeted attacks are one of the emerging trends in cyber-security. Unlike conventional network scans and massive operations like spam and phishing, these attacks generally meet the following criteria:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>They are focused on the assets of a single victim (private institution, government, critical infrastructure...) with objectives such as Data Exfiltration and Service Disruption.</para></listitem>
<listitem><para>In the case of Data Exfiltration, the attack needs to be prepared and carried out after studying the victim&#8217;s infrastructure. The attackers will most probably put a lot of effort into hiding their activity.</para></listitem>
<listitem><para>In the case of Service Disruption, the attack is generally based on DDoS activity to disrupt the services and assets of the victim. This objective is normally achieved in a very short time (a few minutes) and can be carried out repeatedly, generating an annoying service disruption and consequently impacting the victim&#8217;s reputation. If the victim is a cyber-security company, the attack may take important security infrastructure offline (such as IDPSs, honeypots and firewalls) and thus open the door to other attacks towards the protected zones (clients&#8217; assets, infrastructures, etc.).</para></listitem>
</itemizedlist>
<para>From a network traffic point of view, a targeted attack on honeypots looks like the curve shown in <link linkend="F6-3">Figure <xref linkend="F6-3" remap="6.3"/></link>. The spike shows when the targeted honeypot and/or its back-end system are hit. The graph shows the number of events registered by the honeypot system, which led to a 2-hour downtime of the honeypot system.</para>
<para>One can see the &#8220;normal traffic noise&#8221; before the attack and after the system has recovered.</para>
<para>Service suppliers (e.g., hospitals, media, power plants, control systems) cannot afford a 2-hour downtime. This class of attacks is able to disrupt the majority of infrastructures on the market. This has led the SISSDEN BV team to develop a DDoS-resilient honeypot that will detect but not suffer from these attacks, therefore offering customers improved security and uninterrupted threat analysis/monitoring.</para>
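<para>A minimal detector for spikes in such 5-minute event counts could look like the sketch below, which flags any bin that exceeds the rolling baseline by several standard deviations. The window size and threshold are illustrative, not the values used by SISSDEN.</para>

```python
from statistics import mean, stdev

def detect_spikes(counts, window=6, threshold=4.0):
    """Return the indices of 5-minute bins whose event count exceeds
    the mean of the previous `window` bins by `threshold` std. devs."""
    spikes = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on a flat baseline
        if (counts[i] - mu) / sigma > threshold:
            spikes.append(i)
    return spikes
```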
<fig id="F6-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F6-3">Figure <xref linkend="F6-3" remap="6.3"/></link></label>
<caption><para>Genesis of the attack over time (measurements taken every 5 minutes).</para></caption>
<graphic xlink:href="graphics/ch006_fig003.jpg"/>
</fig>

<para>Furthermore, with the information that can be provided, customers can prevent their own services from going down in their networks. To do so, they can redirect or drop all ingress traffic coming from the sources of the spike (using the IP addresses) and, if the attack starts occurring, set up another path for the egress traffic.</para>
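<para>The mitigation just described can be sketched as rule generation from the CTI-provided attack sources. The sketch below deliberately renders iptables/ip-route command strings rather than executing them; the routing table name <i>egress-backup</i> is hypothetical and would require local policy-routing configuration.</para>

```python
def mitigation_rules(spike_source_ips, reroute_interface=None):
    """Render (do not execute) drop rules for the attack sources,
    plus an optional alternative egress route hint."""
    rules = [f"iptables -A INPUT -s {ip} -j DROP"
             for ip in sorted(set(spike_source_ips))]
    if reroute_interface:
        # Hypothetical policy-routing step for an alternative egress path.
        rules.append(f"ip route add default dev {reroute_interface} table egress-backup")
    return rules
```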
</section>

<section class="lev3" id="sec6-2-3-2">
<title>6.2.3.2 Use Case 2: Understanding the numbers &#8211; metrics</title>
<para>SISSDEN delivers realistic, up-to-date metrics data and dashboards from its own sources that are compared with and complemented by collated sources. SISSDEN categories are based on digital epidemiology and evidence-based practices, as modelled from prior knowledge and research gained from other H2020 EU projects: SAINT<footnote id="fn_2" label="2"> <para>https://cordis.europa.eu/project/rcn/210229_en.html and https://project-saint.eu/</para></footnote> and CyberROAD<footnote id="fn_3" label="3"> <para>https://cordis.europa.eu/project/rcn/188603_en.html and http://www.cyberroad-project.eu/</para></footnote>. SISSDEN-provided data can be used to make more informed decisions and improve security outcomes for clients. For instance, CTI data from SISSDEN and related sources found that in the first quarter of 2018 alone, the average enterprise faced:</para>


<itemizedlist mark="bullet" spacing="normal">
<listitem><para>21.8% of all website traffic due to bad bots (a 9.5% increase over the first quarter of 2017). For example, click fraud is a major threat, especially for ISPs and enterprises: 1 out of 4 clicks is now fraudulent.</para></listitem>
<listitem><para>7,739 malware attacks (a 151% increase over the first quarter of 2017).</para></listitem>
<listitem><para>9,500 Botnet C&amp;Cs (Command and Control servers) on 1,122 different networks (a 25% increase over the first quarter of 2017).</para></listitem>
<listitem><para>173 ransomware attacks (a 226% increase over the first quarter of 2017).</para></listitem>
<listitem><para>335 encrypted cyber-attacks (a 430% increase over the first quarter of 2017).</para></listitem>
<listitem><para>963 phishing attacks (a 15% year-over-year increase).</para></listitem>
<listitem><para>554 zero-day attacks (a 14% increase over 2017).</para></listitem>
<listitem><para>5,418,909,703 (5.4 billion) Web-based user accounts that have been compromised by 310 known or reported data breaches (a 40% increase over the first quarter of 2017).</para></listitem>
<listitem><para>40% of business and government networks in the US and Europe showed evidence of DNS tunnelling.</para></listitem>
<listitem><para>75% of application-layer DDoS, like HTTP flooding, was in fact automated threats to Web applications mistakenly reported as DDoS.</para></listitem>
<listitem><para>73% of cyber-attacks focused on the cloud were directed at Web applications.</para></listitem>
<listitem><para>755 of the 62,167 ASNs (autonomous systems) in the routing system (1%) account for hosting, routing and trafficking 85% of all malicious activity.</para></listitem>
<listitem><para>13,935 total incidents were either route hijacks or outages. Over 10% of all ASNs were affected, and 3,106 ASNs were the victim of at least one routing incident. 1,546 networks caused at least one incident in 2017, a figure already up by 20% in 2018.</para></listitem>
<listitem><para>90% of enterprises feel vulnerable to insider attacks, of which 47% involve insiders wilfully causing harm and 51% stem from insiders acting by accident (compromised credentials, negligence, etc.).</para></listitem>
</itemizedlist>
<para>Ultimately, analysing this type of metrics data by attack type, origin and region helps enterprises understand how cyber-attack trends are evolving. SISSDEN BV&#8217;s innovative AI approaches help in the timely prevention of these threats, remove false positives, help improve budget/resource prioritisation, and improve awareness with open-source feeds.</para>
</section>

<section class="lev1" id="sec6-3">
<title>6.3 Conclusion</title>
<para>Many security-oriented tools and services exist that provide or use CTI for the prevention, detection and response to threats. CTI is integrated natively into security products (i.e., appliances and software tools) or provided as a service for organisations&#8217; response teams. Among those that offer state-of-the-art Threat Intelligence solutions and services we have, for instance [2]: Anomali, ThreatConnect, ThreatQuotient, LookingGlass and EclecticIQ.</para>
<para>With respect to these offerings, the SISSDEN project provides free feeds derived from its wide network of honeypots and darknets, while the start-up, SISSDEN BV, provides original real-time actionable feeds complemented with information from other sources. These are not provided by the above companies, which mainly rely on existing open data analysed offline.</para>
<para>The innovation with respect to state-of-the-art market solutions provided by SISSDEN concerns the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Ease of use and comprehensive threat indicators: SISSDEN relies on open standards (e.g., STIX/TAXII) and provides malicious-only IP addresses, subnets, URLs, threat ontology and ASNs.</para></listitem>
<listitem><para>Trust in provided intelligence and accuracy: SISSDEN intelligence comes from malicious honeypot and darknet activity that contains no false positives.</para></listitem>
</itemizedlist>
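As an illustration of the open standards mentioned above, a STIX 2.1 indicator for a single malicious IP address can be assembled with nothing more than the standard library. This is a minimal sketch: the IP (from the documentation range) and the field choices are illustrative, and real SISSDEN feed objects are richer.

```python
import json
import uuid
from datetime import datetime, timezone


def make_ip_indicator(ip: str) -> dict:
    """Build a minimal STIX 2.1 indicator object for a malicious IPv4 address."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",       # STIX ids are type--UUID
        "created": now,
        "modified": now,
        "indicator_types": ["malicious-activity"],
        "pattern": f"[ipv4-addr:value = '{ip}']",  # STIX patterning language
        "pattern_type": "stix",
        "valid_from": now,
    }


indicator = make_ip_indicator("203.0.113.5")  # documentation-range example IP
print(json.dumps(indicator, indent=2))
```

Objects like this can then be bundled and served over a TAXII collection for consumers to poll.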
<para>The SISSDEN BV start-up further provides:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Timeliness and real-time delivery: SISSDEN BV delivers CTI in real time (in less than one minute), enabling attacks to be blocked effectively before they occur.</para></listitem>
<listitem><para>Enriched intelligence: CTI is correlated with information from other sources using Deep Data and Artificial Intelligence-based analysis, increasing its value and coverage.</para></listitem>
<listitem><para>Removing complexity: SISSDEN BV allows for efficient use of security resources and provides shared threat intelligence and automated response.</para></listitem>
<listitem><para>Modular and scalable: SISSDEN BV can serve different categories of customers, from SMEs without security expertise or solutions to medium and large enterprises with their own solutions and security teams.</para></listitem>
</itemizedlist>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This work is performed within the SISSDEN Project with the support from the H2020 Programme of the European Commission, under Grant Agreement No 700176. It has been carried out by the partners involved in the project:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Naukowa I Akademicka Siec Komputerowa, Poland</para></listitem>
<listitem><para>Montimage EURL, France</para></listitem>
<listitem><para>CyberDefcon LTD, United Kingdom and The Netherlands</para></listitem>
<listitem><para>Universitaet des Saarlandes, Germany</para></listitem>
<listitem><para>Deutsche Telekom AG, Germany</para></listitem>
<listitem><para>Eclexys SAGL, Switzerland</para></listitem>
<listitem><para>Poste Italiane &#8211; Societa per Azioni, Italy</para></listitem>
<listitem><para>Stichting The Shadowserver Foundation Europe, The Netherlands.</para></listitem>
</itemizedlist>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Bachar Wehbi, Edgardo Montes de Oca, Michel Bourdell&#232;s: Events-Based Security Monitoring Using MMT Tool. ICST 2012: 860&#8211;863</para>
<para>[2] Craig Lawson, Khushbu Pratap: &#8220;Market Guide for Security Threat Intelligence Products and Services&#8221;, Gartner, 20 July 2017.</para>
</section>
</chapter>

<chapter class="chapter" id="ch07" label="7" xreflabel="7">
<title>CIPSEC &#8211; Enhancing Critical Infrastructure Protection with Innovative Security Framework</title>
<para><b>Antonio &#193;lvarez<sup>1</sup>, Rub&#233;n Trapero<sup>1</sup>, Denis Guilhot<sup>2</sup>, Ignasi Garc&#237;a-Mila<sup>2</sup>, Francisco Hernandez<sup>2</sup>, Eva Mar&#237;n-Tordera<sup>3</sup>, Jordi Forne<sup>3</sup>, Xavi Masip-Bruin<sup>3</sup>, Neeraj Suri<sup>4</sup>, Markus Heinrich<sup>4</sup>, Stefan Katzenbeisser<sup>4</sup>, Manos Athanatos<sup>5</sup>, Sotiris Ioannidis<sup>5</sup>, Leonidas Kallipolitis<sup>6</sup>, Ilias Spais<sup>6</sup>, Apostolos Fournaris<sup>7</sup> and Konstantinos Lampropoulos<sup>7</sup></b></para>
<para><sup>1</sup>ATOS SPAIN, Spain</para>
<para><sup>2</sup>WORLDSENSING Limited, Spain</para>
<para><sup>3</sup>Universitat Polit&#232;cnica de Catalunya, Spain</para>
<para><sup>4</sup>Technische Universit&#228;t Darmstadt, Germany</para>
<para><sup>5</sup>Foundation for Research and Technology &#8211; Hellas, Greece</para>
<para><sup>6</sup>AEGIS IT RESEARCH LTD, United Kingdom</para>
<para><sup>7</sup>University of Patras, Greece</para>
<para>E-mail: antonio.alvarez@atos.net; ruben.trapero@atos.net; dguilhot@worldsensing.com; igarciamila@worldsensing.com; fhernandez@worldsensing.com; eva@ac.upc.edu; jforne@entel.upc.edu; xmasip@ac.upc.edu; suri@cs.tu-darmstadt.de; heinrich@seceng.informatik.tu-darmstadt.de; katzenbeisser@seceng.informatik.tu-darmstadt.de; athanat@ics.forth.gr; sotiris@ics.forth.gr; lkallipo@aegisresearch.eu; hspais@aegisresearch.eu; apofour@ece.upatras.gr; klamprop@ece.upatras.gr</para>
<para>In recent years, the majority of the world&#8217;s Critical Infrastructures (CIs) have evolved to be more flexible, more cost-efficient and able to offer better services and conditions for business growth. Through this evolution, CIs and companies offering CI services had to adopt many of the recent advances in the Information and Communication Technologies (ICT) field. This rapid adaptation, however, was performed without a thorough evaluation of its impact on CIs&#8217; security, leaving CIs vulnerable to a new set of threats and vulnerabilities that impose high levels of risk to public safety, the economy and the welfare of the population. To address this, the main approach for protecting CIs is to handle them as comprehensive entities and offer a complete solution for their overall infrastructures and ICT systems (IT&amp;OT departments). Complete CI security solutions do exist, but in the form of individual products from IT security companies; these products integrate only in-house designed and developed tools/solutions, thus offering a limited range of technical options.</para>
<para>The main aim of CIPSEC is to create a unified security framework that orchestrates state-of-the-art heterogeneous security products to offer high levels of protection in IT (information technology) and OT (operational technology) departments of CIs, also offering a complete security ecosystem of additional services. These services include vulnerability tests and recommendations, key personnel training courses, public-private partnerships (PPPs), forensics analysis, standardization activities and analysis against cascading effects.</para>
<section class="lev1" id="sec7-1">
<title>7.1 Introduction</title>
</section>

<section class="lev2" id="sec7-1-1">
<title>7.1.1 Motivation and Background</title>
<para>Critical infrastructures (CIs) are defined as systems and assets, either physical or virtual, that are vital to a state. The incapacitation or destruction of such infrastructures would have a debilitating impact on security, the economy, national safety or public health, could cause loss of life or adversely affect national morale, or any combination of these. These infrastructures touch all aspects of daily life, including oil and gas, water, electricity, telecommunications, transport, health, environment, government services, agriculture, finance and banking, aviation and other systems whose services are essential to state security, the prosperity of the state, social welfare and more.</para>
<para>In recent years, the majority of the world&#8217;s CIs have steadily evolved to be more flexible, more cost-efficient and able to offer better services and business opportunities for existing but also new initiatives. CIs and companies offering CI services had to adopt many of the recent advances in the Information and Communication Technologies (ICT) field, thus incorporating sophisticated devices with improved networking capabilities. In fact, the use of the Internet enables distributed operation of facilities and optimized sharing and balancing of resources through network elements, and eases prompt notification and reaction in emergency scenarios. In parallel, physical devices like sensors, actuators, engines and others become more and more intelligent thanks to the recent Internet of Things paradigm. In most cases, however, these advances have been made without security in mind. Apart from the security risks imposed by the new connections to the Internet, there are also additional risks due to IT/OT software vulnerabilities. The result was to leave CIs vulnerable to a whole new set of threats and attacks that impose high levels of risk to public safety, the economy and the welfare of the population. One example of these vulnerabilities is the 2017 WannaCry incident, produced by a ransomware attack [1] that affected more than 200,000 Windows systems, including CIs such as six UK hospitals of Britain&#8217;s National Health Service (NHS). Other data show that the number of incidents in the power supply sector increased from 39 in 2010 to 290 in 2016 [2], including the cyberattacks on the Ukrainian power supply plant in 2015 and 2016.</para>
<para>These data, together with the fact that the borders between the OT and IT sides of CIs have progressively blurred, show that CIs have become more exposed to the public through the Internet and are therefore within reach of cyber criminals. The landscape of possible attacks against critical infrastructures has widened considerably and is still evolving at a very quick pace. Some examples include cross-site scripting attacks, code injections of any kind (with SQL injection being one of the most popular), malicious file uploads, virus installation via USB, port scans and intense network scans, binary trojans, Denial of Service (DoS), email propagation of malicious code, spoofing, botnets and worms, to name a few. We also cannot neglect that personal information belonging to CI users may be compromised, jeopardizing more than just their privacy. To respond to this, the CIPSEC project has developed the CIPSEC framework for critical infrastructure protection, which is presented in the next sections.</para>
</section>

<section class="lev2" id="sec7-1-2">
<title>7.1.2 CIPSEC Challenges</title>
<para>Critical infrastructures (CIs) consist of several different, heterogeneous subsystems and need holistic solutions and services to provide coverage against a broad range of cybersecurity attacks. The main objective of the CIPSEC project is to create a unified security framework that orchestrates state-of-the-art heterogeneous, diverse security products and offers high levels of cybersecurity protection in IT and OT CI environments. The CIPSEC Framework should be able to collect and process security-related data (logs, reports, events) and to generate anomaly-based security alerts for events that can affect a CI&#8217;s health and have a series of cascading effects on other CI systems. The developed framework should be very flexible and adaptable to any CI. Additionally, it should cause minimum interference with the CI&#8217;s normal functionality and should be able to upgrade its components securely and easily when an update is available.</para>
<para>Beyond that, CIPSEC aims to provide a series of services to support CIs in attaining a high cybersecurity level. Specifically, CIPSEC provides vulnerability tests of and recommendations for CIs&#8217; systems, including studies of cascading effects, promotes information sharing, and describes good security policies to be followed by the CI administration and personnel. The CIPSEC framework incorporates a training service that will teach the CI&#8217;s personnel how to use the proposed framework, as well as basic cybersecurity principles to be followed in the CI routine. Finally, we also introduce an updating and patching mechanism to keep the framework always updated and secure against the latest cyber attacks.</para>
<para>To prove the effectiveness and efficiency of the CIPSEC framework and to evaluate the security level of the solution, we have installed it under real conditions inside three pilot infrastructures belonging to the transportation, health and environment monitoring sectors, respectively. Using the output and knowledge derived from the three-pilot experimentation, we aim to communicate the CIPSEC results to standardization bodies and influence emerging standards on CI security, primarily in transportation, health and environmental monitoring but also in other CI domains (like smart grid or industrial control). Finally, CIPSEC&#8217;s ultimate objective is to create a framework solution that can enhance the current cybersecurity market and have a positive impact on the CI cybersecurity ecosystem. CIPSEC&#8217;s goal is to provide a solution that is market-ready, innovative and well beyond the relevant market competition, thus offering interesting business opportunities and exploitation results.</para>
<para>The rest of this chapter is organized as follows. Section 2 presents the innovations of the project. Section 3 describes the CIPSEC framework, including the proposed architecture. Section 4 shows how the proposed solution is applied to the three different pilots. Section 5 addresses dissemination and exploitation. Finally, Section 6 concludes the chapter.</para>
</section>

<section class="lev1" id="sec7-2">
<title>7.2 Project Innovations</title>
<para>Each individual solution introduced must successfully match all the requirements of the Critical Infrastructure security domain and be fully compatible with the overall CIPSEC framework&#8217;s technical and market goals. Moreover, it must be viable as a commercial solution and, as such, target a specific part of the relevant market both individually and through the CIPSEC framework. Thus, all the CIPSEC security products/solutions are designed with strong innovation in mind, to achieve strong technical and market benefits.</para>
<para>The CIPSEC anomaly detection reasoner, namely the ATOS XL-SIEM product, can integrate inputs from many heterogeneous observable indicators of cyber-attacks without compromising its reliability. The XL-SIEM system can also support legacy monitoring equipment (typically found in long-lifetime critical infrastructures). XL-SIEM introduces intelligence into the traditional correlation ecosystem that exists today, providing information on, and visibility of, the cybersecurity events produced inside organizations in real time. It consists of a real-time, distributed and modular infrastructure that adapts to the specific needs of each organization. The sensors of the CIPSEC anomaly detection reasoner are innovative themselves. For instance, the Bitdefender antimalware solution can provide proactive detection of previously unseen malware with uncharted behaviour: it is capable of detecting anomalies in the system&#8217;s behaviour even if they are unknown to it, through the introduction of new technologies such as deep packet inspection and machine-learning techniques. Innovative honeypot solutions are integrated and combined to capture and analyse a broad range of attacks. They can analyse IT and OT infrastructure traffic and create replicas of real IT and OT services. The framework also includes peripheral security solutions such as rootkit hunters and SSH attack detectors.</para>
<para>Moreover, the CIPSEC solution incorporates a series of honeypots that are able to detect attack attempts before they happen and divert attacks away from the production systems. The honeypot solutions consist of a DDoS amplification honeypot, a low-interaction honeypot and an ICS/SCADA honeypot. The CIPSEC framework also innovates by introducing hardware security solutions in addition to software-based ones. Denial of Service attacks on the physical layer of the broad wireless band can be detected in an innovative way by DoSSensing, which operates as an external sentinel to specifically detect jamming attacks in any band(s) through which wireless sensors, industrial IoT elements and even computers connect to the Critical Infrastructure network. Empelor&#8217;s innovative programmable, flexible and diverse card-reader solution can be adapted to any critical infrastructure environment at hand and offers multi-factor authentication. The framework also includes a Hardware Security Module solution that is directly connected to CI host devices and acts as a trusted environment for security/cryptography-related operations and secure storage. This solution is extremely fast, since computation-intensive cryptography operations are accelerated by hardware means, and thus fits well with the critical, real-time nature of many CI systems. Another important feature that the CIPSEC framework offers is the ability to visualize forensics events. By implementing and installing Critical Infrastructure Performance Indicators (CIPIs) in the CI system, we are able to collect, analyse and visualize forensics measurements. Thus, we innovate by providing advanced, intuitive and detailed data visualizations for active (real-time) cyber/digital forensics analysis, where data from heterogeneous sources are aggregated, combined and presented in an intuitive manner.</para>
<para>Finally, the CIPSEC framework can handle private data by including and applying anonymization methodologies through a relevant tool wherever the CI system needs it. The tool is based on innovative research on micro-aggregation methods and fast computational responses for anonymizing data.</para>
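Micro-aggregation, on which the anonymization tool&#8217;s research is based, can be sketched in its simplest univariate form. This is a generic illustration of the technique (sort, partition into groups of at least k, replace each value with its group mean), not the project&#8217;s actual algorithm.

```python
def microaggregate(values, k=3):
    """Univariate micro-aggregation: after this transform, every output
    value is shared by at least k records, limiting re-identification."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        # The last group absorbs the remainder so every group has >= k members.
        group = order[i:] if len(order) - i < 2 * k else order[i:i + k]
        mean = sum(values[j] for j in group) / len(group)
        for j in group:
            out[j] = mean
        i += len(group)
    return out
```

For example, `microaggregate([1, 2, 3, 10, 11, 12], k=3)` replaces the two natural clusters with their means, 2 and 11, while keeping the dataset&#8217;s size and order.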
<para>Apart from the innovation in the individual components of the CIPSEC Framework, the integration of those heterogeneous components into a unified, fully functional architecture has introduced several innovative aspects. These aspects include the acquisition, exchange and management of the security-related information (events, logs, alerts) existing within the CIPSEC framework. Thus, every component of the framework has introduced some data exchange mechanism or feature to its architecture in order to be compliant with, and ready for integration into, the overall CIPSEC architecture. In the following figure (<link linkend="F7-1">Figure <xref linkend="F7-1" remap="7.1"/></link>), all such mechanisms/features are presented and briefly described.</para>
<fig id="F7-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-1">Figure <xref linkend="F7-1" remap="7.1"/></link></label>
<caption><para>Overall CIPSEC innovations due to various solutions integration.</para></caption>
<graphic xlink:href="graphics/ch007_fig001.jpg"/>
</fig>
</section>

<section class="lev1" id="sec7-3">
<title>7.3 CIPSEC Framework</title>
<para>This section is structured as follows. Firstly, the CIPSEC reference architecture for critical infrastructure protection is introduced. This architecture considers the basic data flow which takes place in all critical infrastructures. Once the architecture is defined, the functional components are detailed differentiating between core components and data collectors. Finally, the methodology followed for the integration of the components into a unified framework is explained.</para>
</section>

<section class="lev2" id="sec7-3-1">
<title>7.3.1 CIPSEC Architecture</title>
<para>As presented in [3], the CIPSEC reference architecture is proposed at the data-flow level; it is infrastructure-agnostic and establishes a general framework for protection applicable to any critical infrastructure, regardless of the vertical (i.e., activity sector) it belongs to or the resources managed. In this sense, the architecture is flexible and adaptable. The architecture is based on the security data lifecycle existing in critical infrastructures and shared among their components.</para>
<para>The data lifecycle considers three different stages: data acquisition, dissemination and consumption.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Data acquisition involves the Critical-Infrastructure-specific devices used to acquire the data for the Critical Infrastructure&#8217;s industrial control. Here the control of a single process or machine is not directly interrelated with another process or machine; for example, hospital ventilators do not interact directly with syringe pumps. However, in Industry 4.0 IoT scenarios, all facilities tend to communicate with each other, increasing the security-management complexity of such interoperable scenarios. At this stage of the lifecycle, communications are usually done not through public/open or documented protocols but through a proprietary protocol documented at the discretion of the manufacturer, and in most cases monitoring is done through a client-server protocol. Some OT devices add to their own communication protocols the possibility of communicating using standard protocols such as Modbus, DNP3 or OPC UA. Not only OT field devices are involved in this stage, but also others like PLCs, robots or HMIs. The data transmitted are signals or data sequences used for different purposes, for instance monitoring status.</para></listitem>
<listitem><para>Data dissemination considers a set of networks, equipment and communication protocols that perform real-time monitoring of industrial processes and the complex tasks that use the information obtained in the data acquisition phase. Data dissemination is also about communicating with actuator/controller devices to transmit the appropriate orders that control the process automatically by means of specialized software. In this phase, the communication between OT devices and OT controllers is facilitated through specific protocols. Data dissemination is also about integrating and centralizing all the signals generated by a given process; the data are monitored, controlled and managed in real time. SCADA systems, OPC servers, activity monitoring systems and Historian servers are some examples of the elements involved in data dissemination, while the information disseminated relates to process variables, consumed resources, downtimes or device status, for instance.</para></listitem>
<listitem><para>Data consumption is associated with the concept of Industrial Business Intelligence (IBI), defined as the set of tools, applications, technologies, solutions and processes that allow different users to process the collated information for decision-making purposes, using the sensory and behavioural data collected from network infrastructures. This information is the result of a process that starts by extracting the information from different data sources. A transformation process then contextualizes the raw data obtained from those sources. Finally, the loading process stores all the contextualized information in a centralized data storage point. Several tools take care of exploiting the information once it is available there; these tools focus on offering the user several KPIs that support informed decisions.</para></listitem>
</itemizedlist>
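The standard OT protocols named in the acquisition stage have publicly documented wire formats. As an illustration, the sketch below decodes the MBAP header that prefixes every Modbus TCP frame; the field layout follows the public Modbus specification, while the example request values are invented.

```python
import struct


def parse_mbap(frame: bytes) -> dict:
    """Decode the MBAP header and function code of a Modbus TCP frame.

    MBAP layout (big-endian): transaction id (2 bytes), protocol id (2 bytes,
    always 0 for Modbus), length (2 bytes), unit id (1 byte); the PDU that
    follows starts with a 1-byte function code.
    """
    if len(frame) < 8:
        raise ValueError("frame too short for MBAP header + function code")
    tid, pid, length, unit, func = struct.unpack(">HHHBB", frame[:8])
    if pid != 0:
        raise ValueError("protocol id must be 0 for Modbus")
    return {"transaction_id": tid, "length": length, "unit_id": unit,
            "function_code": func, "data": frame[8:]}


# Example request: Read Holding Registers (function 0x03),
# start address 0x0000, register count 2, unit id 0x11.
frame = struct.pack(">HHHBBHH", 0x0001, 0x0000, 6, 0x11, 0x03, 0x0000, 0x0002)
print(parse_mbap(frame))
```

Collectors monitoring such traffic can apply exactly this kind of decoding before flagging, for example, unexpected function codes on the OT network.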
<para>On the basis of this data lifecycle, the next challenge addresses the security aspects relevant to the critical infrastructure. CIPSEC proposes to integrate the security data lifecycle around the critical infrastructure data lifecycle to decouple both processes and avoid conflicts. The approach used is similar, using exactly the same stages: acquisition, dissemination and consumption.</para>
<para>For data acquisition, CIPSEC considers a wide range of data sources such as Host Intrusion Detection Systems (HIDS), Network Intrusion Detection Systems (NIDS), data from other systems that coexist in the same security ecosystem, log files, monitoring status information, reports and human knowledge. It is relevant to highlight the utility and variety of information that can be obtained from the logs. To give some examples, these logs can contain information about firewalls, antivirus/antimalware, real-time activity monitoring, intrusion detection sensors or disturbances in the wireless signal. The CIPSEC Framework uses a combination of detailed event logs, collected from heterogeneous security solutions, to provide a complete audit trail covering data acquisition through data delivery.</para>
<para>Data dissemination addresses how the information is made available to different stakeholders and systems. The organization should disseminate security knowledge to stakeholders, especially about security incidents, and focus on establishing a dissemination plan that delivers critical knowledge; more specifically, it should get the right information, in the right format, to the right people, at the right time. Outputs from this stage include events, alarms, tokens, software updates and security data insights.</para>
<para>Data consumption corresponds to the highest level of security management: obtaining an overview of the cybersecurity posture at all levels (for example, information about threats or attacks affecting the infrastructure) and assisting in making timely decisions about preventing or mitigating existing or upcoming attacks. Data consumption is all about understanding the critical infrastructure security data and extracting security insights from them. Decision-making is undoubtedly the main driver of this security data lifecycle. The complexity of critical infrastructure processes requires carrying out decision-making activities at both business and technical levels. The profiles involved in the process may be field service technicians, network managers, security analysts, computer forensics experts, system administrators, contingency plan designers or industrial engineers, to name but a few.</para>
<para>Once the picture is clear with respect to the data lifecycle in critical infrastructures, both for operational and security data, and insisting on the fact that the two cycles are completely decoupled and unrelated, the foundations are established for the definition of the CIPSEC reference architecture. To produce this architecture, the set of requirements expressed by the three pilots of CIPSEC [5] was also considered, as well as the commonalities existing in critical infrastructures across different domains [6, 7]. <link linkend="F7-2">Figure <xref linkend="F7-2" remap="7.2"/></link> shows this architecture, which is a highly relevant result of the CIPSEC project. It is a layered architecture whose layers follow the security data flow from the infrastructure to the user interface and back to the infrastructure, communicating the decisions made by the users (ideally made jointly by managers and technicians). This architecture is extensively explained in [3] and minor updates are reflected in [4].</para>
<fig id="F7-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-2">Figure <xref linkend="F7-2" remap="7.2"/></link></label>
<caption><para>CIPSEC reference architecture for protection of critical infrastructures [4].</para></caption>
<graphic xlink:href="graphics/ch007_fig002.jpg"/>
</fig>
<para>The CIPSEC reference architecture is applied to the critical infrastructure itself, considering its operative components and the deployed network security. CIPSEC makes a leap forward by protecting the whole perimeter of the critical infrastructure and therefore enhancing its security. The layer closest to the infrastructure is the acquisition layer. It consists of five main components: Vulnerability Assessment, Identity Access Management, Integrity Management, Endpoint Detection and Response, and Crypto Services. These components are able to obtain different inputs from both the critical infrastructure components and the network security elements. This layer also includes a block for future security services that can be plugged into the framework. On top of the acquisition layer sits the detection layer, which includes the Anomaly Detection Reasoner that receives aggregated information from the different acquisition layer blocks.</para>
<para>On top of the detection layer, the data processing layer includes the Data Anonymization and Privacy Tool, capable of anonymizing sensitive data coming from the critical infrastructure and eventually storing it in a historic anomalies database. The data processing layer also contains the forensics service, which receives critical infrastructure performance indicators and produces relevant information that can be used in a forensics analysis when an incident occurs. The presentation layer is implemented as a dashboard which shows a summary of the main highlights concerning the security status of the critical infrastructure, and also offers the specific details provided by the user interfaces of the different components, integrated in a harmonized way with a common look and feel. All the details about the dashboard are documented in [8]. The information in the dashboard can be used to decide on reaction and mitigation actions and to produce a sound contingency plan with reconfigurations and adjustments to be applied to the infrastructure. In this regard, CIPSEC provides a consulting service aimed at assisting the user in producing a complete contingency plan. Three more services are present in the architecture: the compliance management service, which is part of the contingency service and whose goal is to show the level of compliance between the solutions provided by the CIPSEC Framework and the requirements of the respective critical infrastructure; an update service, which applies updates and patches to the components of the framework in an automated manner when required; and a training service aimed at improving the skills of the operators in charge of managing the security of the critical infrastructure.</para>
<para>The solutions provided by the partners put in place the functionalities required to enable the different blocks of the architecture presented above to play their role in the integrated framework. These products and services fit well into the architecture and allow the settings to be established for the instantiation of the architecture in different scenarios, starting with those of the three pilots (presented in Section 4).</para>
</section>

<section class="lev3" id="sec7-3-1-1">
<title>7.3.1.1 CIPSEC core components</title>
<para>CIPSEC core components are in charge of making the most of the information obtained by the collectors presented in Section 7.3.1.2. Their role differs depending on the component in question. The XL-SIEM (ATOS) correlates and processes events across multiple layers, identifying anomalies, and is present in the Anomaly Detection Reasoner component. The anonymization tool (UPC) implements different data sanitization mechanisms, including suppression, generalization and pseudonymization, to protect sensitive personal information. It is present in the Data Anonymization and Privacy component and makes it possible to share cybersecurity data among different critical infrastructure stakeholders without jeopardizing the privacy of the users. The Forensics Visualization Tool (AEGIS) provides intuitive and detailed visualizations to enable cyber/digital forensics analysis. It is present in the Forensics Service component. Finally, the dashboard is a vital core component whose objective is to provide a unified, harmonized and consistent application where the user/administrator of the infrastructure is able to i) check the current status; ii) easily access all the tools and services provided by the CIPSEC Framework; and iii) be warned about current or future threats in the system.</para>
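The three sanitization mechanisms attributed to the anonymization tool (suppression, generalization and pseudonymization) can be illustrated with a minimal sketch. The record fields and the keyed-hash choice are illustrative assumptions, not the UPC tool&#8217;s actual implementation.

```python
import hashlib
import hmac

SECRET = b"example-rotation-key"  # hypothetical key, rotated by the operator


def pseudonymize(value: str) -> str:
    """Keyed hash: the same input always maps to the same token,
    but the mapping cannot be reversed without the key."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]


def generalize_age(age: int, width: int = 10) -> str:
    """Replace an exact age with a range, e.g. 37 -> '30-39'."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"


def sanitize(record: dict) -> dict:
    """Apply suppression, generalization and pseudonymization to one record."""
    return {
        "name": "***",                                     # suppression
        "age": generalize_age(record["age"]),              # generalization
        "patient_id": pseudonymize(record["patient_id"]),  # pseudonymization
        "diagnosis": record["diagnosis"],                  # kept for analysis
    }
```

Because pseudonyms are stable, sanitized records from different stakeholders can still be correlated without revealing the underlying identifiers.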
</section>

<section class="lev3" id="sec7-3-1-2">
<title>7.3.1.2 CIPSEC collectors</title>
<para>CIPSEC combines information produced by the different products playing a role within the acquisition layer of the framework. These products monitor OT systems, collect raw security data from multiple sources and functionalities, and provide monitoring and anomaly detection for the complete critical infrastructure. The collectors are the following:</para>
<para>The Forensics Agents (AEGIS) are a set of plugins/tools deployed in the critical infrastructure and properly configured to log information that is relevant to the hosting critical infrastructure and is used by the Forensics Service. The Network Intrusion Detection System (ATOS) sensor is similar to a sniffer, since it monitors all network traffic searching for any kind of intrusion. It implements an attack detection and port scanning engine that allows registering, alerting and responding to any anomaly previously defined as a pattern. The GravityZone Antimalware Solution (Bitdefender) detects malware, phishing, application control violations or data loss, among others. The honeypots brought by FORTH monitor the critical infrastructure network and produce insightful results for the anomaly detection and prevention component; the honeypots used are Dionaea, Kippo, Conpot and a custom DDoS honeypot based on the detection of amplification attacks. The DoSSensing jamming detector by Worldsensing monitors the whole wireless spectrum to detect anomalies derived from a Denial of Service attack in real time. All the aforementioned solutions are present in the Endpoint Detection and Response component. The Hardware Security Module developed by the University of Patras is a synchronous secure System on Chip (SoC) device implemented on FPGA technology. It is a trusted device offering cryptography, secure storage and message integrity services, and is present in the Crypto Services and Integrity Management components. Secocard (Empelor) is a security-enhanced single-board embedded microcontroller and is present in the Identity Access Management, Integrity Management and Crypto Services components.</para>
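The pattern-based detection described for the NIDS sensor can be illustrated with a deliberately simplified sketch: a payload is matched against a set of known attack signatures and the matching rule names become alerts. The signature set below is hypothetical and far cruder than real NIDS rules.

```python
# Hypothetical signature set; real NIDS rules (e.g. Snort-style) carry
# protocol, port and offset constraints in addition to content patterns.
SIGNATURES = {
    "sql-injection": b"' OR 1=1",
    "path-traversal": b"../../",
    "shellshock": b"() { :;};",
}


def inspect_payload(payload: bytes):
    """Return the names of all signatures matched by a raw payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]


alerts = inspect_payload(b"GET /index.php?id=1' OR 1=1-- HTTP/1.1")
```

Each match would then be forwarded as an event to the Anomaly Detection Reasoner for correlation with other collectors&#8217; data.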
</section>

<section class="lev1" id="sec7-4">
<title>7.4 CIPSEC Integration</title>
<para>CIPSEC is a challenging project in terms of integration. The goal is to obtain an orchestrated solution that offers a general yet comprehensive approach to protecting critical infrastructures against cyber threats. A clear roadmap for component integration was designed by the Consortium to produce a solution that makes the most of the features of the components brought by the different project partners. A thorough study of each product was carried out to understand the kind of information that can be obtained from it. An important aspect to analyze was how this information is provided. In the specific case of the products playing the role of collectors (see Section 7.1.2), they produce logs containing the relevant information to consume. These logs have different formats according to the kind of event to communicate and also depending on the product in question. The CIPSEC architecture proposes the Anomaly Detection Reasoner (with the ATOS XL-SIEM playing this role) as the orchestrating element that integrates the logs from a wide range of collectors. To do so, several plugins were developed to adapt the different formats to the one understood by the XL-SIEM. Once the plugins were available and the information coming from the different collectors was translated into the common format, the partners researched how to combine events coming from different products to produce more complex events and, eventually, alarms with insightful messages demanding actions and clear responses from the user. Regarding the core components (Section 7.1.1), it is important to highlight the approach used for the dashboard (see <link linkend="F7-3">Figure <xref linkend="F7-3" remap="7.3"/></link>), which embeds views from the different products under a common look and feel, offering a harmonized user interface for the different CIPSEC user profiles.</para>
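<para>The plugin-based normalization described above can be sketched as follows. This is a minimal illustration only: the log layouts, field names and common event format are invented assumptions, not the actual CIPSEC plugin interfaces or the XL-SIEM schema.</para>

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical common event format accepted by the SIEM; the real
# XL-SIEM schema differs and is not reproduced here.
def make_event(source, event_type, severity, payload):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,
        "type": event_type,
        "severity": severity,
        "payload": payload,
    }

# Illustrative plugin: translates a NIDS-style alert line such as
# 'ALERT portscan src=10.0.0.5 dst=10.0.0.9' into the common format.
def nids_plugin(line):
    m = re.match(r"ALERT (\w+) src=(\S+) dst=(\S+)", line)
    if not m:
        return None
    kind, src, dst = m.groups()
    return make_event("nids", kind, "high", {"src": src, "dst": dst})

# Illustrative plugin for an antimalware product whose log is JSON.
def antimalware_plugin(line):
    record = json.loads(line)
    return make_event("antimalware", record["detection"], "medium",
                      {"host": record["host"]})

events = [
    nids_plugin("ALERT portscan src=10.0.0.5 dst=10.0.0.9"),
    antimalware_plugin('{"detection": "phishing", "host": "ws-12"}'),
]
```

<para>Once every collector emits events in the same shape, the reasoner can correlate events from different products into higher-level alarms, which is the step the partners investigated next.</para>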
<para>All the development and tests were carried out on a distributed testbed where the different components were located on public IPs within each partner&#8217;s local network, resulting in a testbed distributed across Spain, Greece, the United Kingdom, Romania and Switzerland. This led to a distributed prototype ready to be deployed in the three pilots. A deployment plan was designed for each pilot. The prototype is flexible enough to allow users to choose components off-the-shelf according to their specific needs. The three deployments were carried out in Darmstadt, Germany, for the railway pilot; in Barcelona, Spain, for the health pilot; and in Torino, Italy, for the environmental pilot. More details about the pilots are provided in Section 7.5. The approach for these pilots is hybrid, with most components deployed in the cloud except for those that necessarily need to be on premises, like the security data collectors. The prototype contains extra features, such as tools to produce attacks and test its performance, and a set of virtual assets emulating industrial networks. In some cases, critical infrastructures demand a completely on-premises deployment in an off-line environment: they work in isolation, so an Internet connection is not an option for them. Based on this, a second prototype was created with the purpose of demonstrating CIPSEC even without an Internet connection. This prototype is composed of two physical machines that contain the CIPSEC solution in the form of several virtual machines. The deployment plan is to place all the VMs in the same local subnet where the CI systems reside. Additionally, CIPSEC members use this prototype for demos at different events. All the details about the different integration environments and pilot deployments can be found in [8].</para>
<fig id="F7-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F7-3">Figure <xref linkend="F7-3" remap="7.3"/></link></label>
<caption><para>CIPSEC dashboard.</para></caption>
<graphic xlink:href="graphics/ch007_fig003.jpg"/>
</fig>
</section>

<section class="lev1" id="sec7-5">
<title>7.5 CIPSEC Pilots</title>
<para>The security framework for Critical Infrastructures (CI) proposed by CIPSEC has been designed, integrated, deployed and tested in three different pilot domains: the health sector, represented by Hospital Clinic de Barcelona (HCB) [9]; the transportation sector, represented by the German railway infrastructure (Deutsche Bahn) [10]; and the environmental monitoring sector, represented by the Regional System of Detection of Air Quality (AQDRS) managed by CSI &#8211; Piemonte (CSI) [11]. For all the pilots of each domain, CIPSEC followed an analytical process of defining their characteristics, eliciting the requirements in terms of fitting solutions, analyzing the involved security and privacy aspects and finally extracting the system requirements, as described in the previous chapter. The integration and testing of the proposed solution is described in the following sections.</para>
</section>

<section class="lev2" id="sec7-5-1">
<title>7.5.1 Integration of the Solution in the Pilots</title>
<para>Integration of security technologies in a CI is affected by a set of limiting factors that were faced by the CIPSEC team during the integration phase into the OT and IT systems of the pilots. Some indicative examples are communication infrastructure not following proper security guidelines (e.g. lack of firewalls), proprietary communication protocols, dedicated software that cannot be managed by standard security tools, unattended physical locations of equipment, highly regulated environments, limitation or lack of resources, requirements for real-time readiness, and difficulty in applying patches and updates to existing working systems. To overcome these limitations, CIPSEC has developed a compliance management service (CMS), which shows the level of compliance between the cybersecurity solutions that the CIPSEC framework provides and the requirements of the CI stemming from various sources, such as expert knowledge, domain standards, industrial standards, or legislation. This process is performed by matching the CIPSEC Profile and the CI Profile so as to determine the CIPSEC solutions that can be applied to the CI and therefore proceed with their deployment. In this way, the main objectives, and the environment in which to test how CIPSEC can fulfil them, were defined for each pilot.</para>
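<para>The profile matching performed by the CMS can be illustrated with a simplified sketch. Here both profiles are reduced to sets of capability tags, which is an assumption for illustration; the actual CIPSEC Profile and CI Profile are richer structures derived from standards and expert knowledge.</para>

```python
# Hypothetical CIPSEC Profile: each solution is reduced to a set of
# capability tags (tags and coverage are invented for illustration).
CIPSEC_PROFILE = {
    "DoSSensing": {"jamming-detection", "wireless"},
    "GravityZone": {"antimalware", "data-loss-prevention"},
    "Honeypots": {"anomaly-detection", "ddos-detection"},
}

def match_profiles(ci_requirements, cipsec_profile):
    """Return the applicable solutions and a compliance level, i.e.
    the fraction of CI requirements covered by at least one solution."""
    applicable = {name for name, caps in cipsec_profile.items()
                  if caps & ci_requirements}
    covered = set()
    for name in applicable:
        covered |= cipsec_profile[name] & ci_requirements
    level = len(covered) / len(ci_requirements)
    return applicable, level

# A CI Profile reduced to required capabilities (also invented):
ci = {"antimalware", "jamming-detection", "patch-management"}
solutions, level = match_profiles(ci, CIPSEC_PROFILE)
```

<para>In this example two solutions apply and two of the three requirements are covered, so the compliance level is 2/3; uncovered requirements (here, patch management) would be flagged for the CI operator.</para>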
<para>The main objective of railway transportation is safe operation. For this reason, the systems have to fulfil the requirements of several safety standards (EN 50126, EN 50128, EN 50129) and an admission by the national safety authority has to be granted. This also applies if changes that affect safety are made to the system.</para>
<para>A typical control system in the railway domain consists of several subsystems:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Safety-related components like interlocking, points, switches and axle counters</para></listitem>
<listitem><para>Assisting systems like train number systems and automated driveway systems</para></listitem>
<listitem><para>Data management systems, such as the MDM and the documentation system</para></listitem>
<listitem><para>Diagnosis systems</para></listitem>
</itemizedlist>
<para>The components most relevant to CIPSEC are the ones responsible for signaling, like interlocking systems. Due to the safety-relevance of the interlocking components and the required admission by the German national safety authority Eisenbahnbundesamt (EBA), DB established a test site in their OT testing facilities for testing the CIPSEC framework. The environment consists of Operating Centers and operator workstations that simulate the normal operation of the system, so the integration of CIPSEC components has been performed in an environment very close to the real one.</para>
<para>The Health pilot includes an abundance of in-hospital devices, many different networks with strict low-latency requirements, controls at different levels and strong privacy requirements on the collected and processed data. HCB focused on the selection of the most representative IoT elements to be tested and the definition and construction of appropriate test sites. Due to the unavailability of these areas, the necessity to have them perfectly controlled (both physically and remotely), the requirement to install the selected equipment in a local network working separately from the central production servers of the data center, and the lack of technical space dedicated to non-care uses inside the Hospital, HCB took action to:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Adapt an existing test room dedicated to clinical emergency training as test site 1, which includes medical equipment</para></listitem>
<listitem><para>Adapt an existing office dedicated to new developments and technological trials as test site 2, which includes industrial IoT equipment interacting with information provided by medical equipment</para></listitem>
<listitem><para>Build from scratch a third room to be used as test site 3, including generic IoT equipment</para></listitem>
</itemizedlist>
<para>Having all these test sites available allowed CIPSEC to integrate all its components and design tests covering a plethora of usage scenarios for the hospital devices.</para>
<para>In the Environmental monitoring pilot, CSI is responsible for the monitoring network operated by ARPA Piemonte (Regional Agency for the Protection of the Environment of the Piedmont region), which includes 56 monitoring stations and one Operations Centre (OC) that receives the gathered environmental data. Protecting the stations and primarily the OC is the main objective of CSI. The pilot consists of five main functional areas:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The air measurement equipment</para></listitem>
<listitem><para>The PC Stations</para></listitem>
<listitem><para>The OC server for data acquisition</para></listitem>
<listitem><para>The OC databases</para></listitem>
<listitem><para>The ARPA Enterprise Infrastructures</para></listitem>
</itemizedlist>
<para>CSI, in agreement with ARPA, prepared a testing environment. The virtualized environment is comprised of two parts: the monitoring station and the Operations Centre. The CIPSEC components were integrated in this environment so as to test security threats regarding the normal operation of the stations, the uninterrupted communication with the OC and also other possible external cyber-attacks. It must be noted that all the aforementioned testing facilities included the deployment of new hardware and software components that allowed the creation of VLANs and the integration of the various CIPSEC components in networks local to the pilot CIs. All CIPSEC solution providers followed the integration guidelines described in Chapter 3 to successfully deploy their components to the test facilities and evaluate the CIPSEC prototype in all three pilot domains.</para>
</section>

<section class="lev2" id="sec7-5-2">
<title>7.5.2 Testing the Proposed Solution in the Pilots</title>
<para>CIPSEC has followed a detailed testing methodology with regard to the evaluation of the performance and capabilities of the integrated platform. This methodology (&#8220;IEEE Standard for Software Test Documentation&#8221; [12]) includes the definition, implementation, execution and reporting of composite test scenarios that can prove the effectiveness of the CIPSEC platform in trial as well as real-world scenarios. The composite tests were defined for each one of the pilots and produced results covering many features of device resources and security requirements at the same time. Overall, 29 composite tests were executed in planned online and on-site sessions for all the pilots. Reference [13] reports the execution results of these tests in detail, and recorded versions of the test execution are available for all the testing sessions.</para>
<para>Moreover, the required equipment, procedures and personnel necessary to set up the CIPSEC tools were also recorded to identify problems and gain insight into possible issues in real-world deployments. These insights were further enhanced by the findings derived after the tests were conducted. The main issues identified have to do with CIPSEC components requiring Internet access, the overall configuration of the framework being cumbersome, and increased resource consumption by the components. To this end, the CIPSEC final prototype will offer a fully on-premises deployment that requires no Internet connection to operate, and an operational environment that allows tailor-made presentation of the infrastructure information that matters most to the CI managers.</para>
</section>

<section class="lev1" id="sec7-6">
<title>7.6 Dissemination and Exploitation</title>
</section>

<section class="lev2" id="sec7-6-1">
<title>7.6.1 Dissemination</title>
<para>Although finding solutions for protecting CIs is the main objective of the CIPSEC project, communicating and regularly showing the achieved progress helps ensure the objectives are being accomplished. All CIPSEC results will be used to raise citizens&#8217; awareness of CIPSEC solutions, paying special attention to target groups and the research community as a whole. In this sense, one of the first tasks in dissemination was to identify the target groups potentially interested in different aspects of the project. We identified seven target groups: Local Authorities, Policy Makers, Business People, Researchers, Associations, General Public and Media. A second task of the dissemination strategy was to create the approach and communication strategies to reach out to the identified stakeholders. These communication activities include the creation of a corporate identity, the maintenance and updating of a website, the production of promotional material, a monthly CIPSEC blog entry to disseminate the project ideas to a wide audience, the dissemination of daily information in the CIPSEC social accounts (Twitter, LinkedIn and ResearchGate), the production of project videos and their upload to YouTube, and finally the production of scientific publications with research related to CIPSEC. During the life of the project we have already produced 23 blog entries and 10 YouTube videos, and we have 40 accepted papers.</para>
</section>

<section class="lev2" id="sec7-6-2">
<title>7.6.2 Exploitation</title>
<para>The main strength of the CIPSEC framework is the integration and orchestration of heterogeneous solutions under one unique umbrella specifically designed to protect CIs. The pilots are an excellent showcase of the direct operational benefits for the Health, Transportation and Air Quality customer segments, and the stakeholders&#8217; opinion will be extremely valuable in defining the final market approach. It has been demonstrated that the targeted market is often unaware not only of the solutions available, but sometimes also of the actual pain points and issues it is facing. Technology evangelism is being performed and will be increased through the implementation of a free version of the CIPSEC framework, which should be considered a powerful demo of the capabilities of the premium version and provides a set of expressly selected functionalities. These will be strictly limited, and any additional customer support request will entail the regular applicable fees. The following remarks were taken into consideration:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The free version must be representative of the full CIPSEC concept: stakeholders should get a quick idea of the premium version by simply interacting with the tool;</para></listitem>
<listitem><para>Tools and services must be there: CIPSEC is not just the sum of different tools, but also services;</para></listitem>
<listitem><para>The free version should be attractive and simple enough to gain the interest of heterogeneous stakeholders&#8217; groups.</para></listitem>
</itemizedlist>
<para>In contrast, the premium version will merge the different solutions from all the partners, offering full functionality on a business-based approach. The Consortium rules out the creation of a joint company or a similar stable structure; instead, it considers the establishment of a framework of collaboration to commercialise the collaborative solution framework. Besides, powerful synergies have been revealed between some of the partners, which will be consolidated towards ad-hoc joint commercial exploitation.</para>
</section>

<section class="lev1" id="sec7-7">
<title>7.7 Conclusions</title>
<para>In this chapter, we have presented the CIPSEC project, whose objective is to create a unified security framework that orchestrates heterogeneous, diverse security products in Critical Infrastructure environments. This framework is able to collect and process security-related data (logs, reports, events) so as to generate alerts about security anomalies that can affect a CI&#8217;s health and have a cascading effect on other CI systems. CIPSEC includes products/tools and services encompassing features such as network intrusion detection, traffic analysis and inspection, jamming attack detection, antimalware, honeypots, forensics analysis, integrity management, identity access control, data anonymization, security monitoring and vulnerability analysis. The innovation and benefit of CIPSEC lie not only in the addition of all these services and products, but mainly in the integration of these heterogeneous components, which introduces added value not covered by the individual solutions: for example, it allows the sensor data of all the products to be collected in the XL-SIEM for analysis, and new sensors from future solutions to be added easily. In summary, the CIPSEC framework integrates all the cybersecurity elements and centralizes all the management in one point, making Critical Infrastructure protection easier to maintain, update and upgrade.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Jesse M. Ehrenfeld, &#8216;WannaCry, Cybersecurity and Health Information Technology: A Time to Act.&#8217; J Med Syst (2017) 41: 104.</para>
<para>[2] https://ics-cert.us-cert.gov/</para>
<para>[3] CIPSEC project, deliverable D2.2, &#8220;D2.2 CIPSEC Unified Architecture First Internal Release&#8221;, November 2017, https://www.cipsec.eu/content/d22-cipsec-unified-architecture-first-internal-release</para>
<para>[4] CIPSEC project, deliverable D2.5, &#8220;D2.5 Final Version of the CIPSEC Unified Architecture and Initial Version of the CIPSEC Framework Prototype&#8221;, April 2018, https://www.cipsec.eu/content/d25-final-version-cipsec-unified-architecture-and-initial-version-cipsec-framework-prototype</para>
<para>[5] CIPSEC project, deliverable D1.2, &#8220;D1.2 Report on Functionality Building Blocks&#8221;, October 2016, https://www.cipsec.eu/content/d12-report-functionality-building-blocks</para>
<para>[6] CIPSEC project, deliverable D1.1, &#8220;D1.1 CI base security characteristics and market analysis&#8221;, November 2016, https://www.cipsec.eu/content/d11-ci-base-security-characteristics-and-market-analysis</para>
<para>[7] CIPSEC project, deliverable D1.3, &#8220;D1.3 Report on taxonomy of the CI environments&#8221;, November 2016, https://www.cipsec.eu/content/d13-report-taxonomy-ci-environments</para>
<para>[8] CIPSEC project, deliverable D2.7, &#8220;D2.7: CIPSEC Framework Final version&#8221;.</para>
<para>[9] http://www.hospitalclinic.org/en</para>
<para>[10] https://www.bahn.de/p_en/view/index.shtml</para>
<para>[11] http://www.csipiemonte.it/web/en/</para>
<para>[12] IEEE-SA Standards Board, &#8220;IEEE Standard for Software Test Documentation&#8221;, IEEE Std 829-1998, 16 September 1998.</para>
<para>[13] CIPSEC Deliverable D4.3: &#8220;Prototype Demonstration: Field trial results&#8221;.</para>
</section>
</chapter>

<chapter class="chapter" id="ch08" label="8" xreflabel="8">
<title>A Cybersecurity Situational Awareness and Information-sharing Solution for Local Public Administrations Based on Advanced Big Data Analysis: The CS-AWARE Project</title>
<para><b>Thomas Schaberreiter<sup>1</sup>, Juha Roning<sup>2</sup>, Gerald Quirchmayr<sup>1</sup>, Veronika Kupfersberger<sup>1</sup>, Chris Wills<sup>3</sup>, Matteo Bregonzio<sup>4</sup>, Adamantios Koumpis<sup>5</sup>, Juliano Efson Sales<sup>5</sup>, Laurentiu Vasiliu<sup>6</sup>, Kim Gammelgaard<sup>7</sup>, Alexandros Papanikolaou<sup>8</sup>, Konstantinos Rantos<sup>9</sup> and Arnolt Spyros<sup>8</sup></b></para>
<para><sup>1</sup>University of Vienna &#8211; Faculty of Computer Science, Austria</para>
<para><sup>2</sup>University of Oulu &#8211; Faculty of Information Technology and Electrical Engineering, Finland</para>
<para><sup>3</sup>CARIS Research Ltd., United Kingdom</para>
<para><sup>4</sup>3rd Place, Italy</para>
<para><sup>5</sup>University of Passau, Germany</para>
<para><sup>6</sup>Peracton, Ireland</para>
<para><sup>7</sup>RheaSoft, Denmark</para>
<para><sup>8</sup>InnoSec, Greece</para>
<para><sup>9</sup>Eastern Macedonia and Thrace Institute of Technology, Department of Computer and Informatics Engineering, Greece</para>
<para>E-mail: thomas.schaberreiter@univie.ac.at; juha.roning@oulu.fi; gerald.quirchmayr@univie.ac.at; veronika.kupfersberger@univie.ac.at; ccwills@carisresearch.co.uk; matteo.bregonzio@3rdplace.com; adamantios.koumpis@uni-passau.de; juliano-sales@uni-passau.de; laurentiu.vasiliu@peracton.com; kim@rheasoft.dk; a.papanikolaou@innosec.gr; krantos@teiemt.gr; a.spyros@innosec.gr</para>
<para>In this chapter, the EU-H2020 project CS-AWARE (running from 2017 to 2020) is presented. CS-AWARE proposes a cybersecurity awareness solution for local public administrations (LPAs) in line with the currently developing European legislative cybersecurity framework. CS-AWARE aims to increase the automation of cybersecurity awareness approaches by collecting cybersecurity relevant information from sources both inside and outside of monitored LPA systems, and performing advanced big data analysis to set this information in context, in order to detect and classify threats and to identify relevant mitigation or prevention strategies. CS-AWARE aims to advance the function of a classical decision support system by enabling supervised system self-healing in cases where clear mitigation or prevention strategies for a specific threat can be detected. One of the key aspects of the European cybersecurity strategy is a cooperative and collaborative approach towards cybersecurity. CS-AWARE is built around this concept and relies on cybersecurity information being shared by relevant authorities in order to enhance awareness capabilities. At the same time, CS-AWARE enables system operators to share incidents with relevant authorities to help protect the larger community from similar incidents. So far, CS-AWARE has shown promising results, and work continues on integrating the various components needed for the CS-AWARE solution. An extensive trial period towards the end of the project will help to assess the validity of the approach in day-to-day LPA operations.</para>

<section class="lev1" id="sec8-1">
<title>8.1 Introduction</title>
<para>As is the case in other sectors, the problem of securing ICT infrastructures is increasingly causing major worries in local public administration. While local public administrations are, compared to other areas, rarely the direct target of an attack, the use of their ICT infrastructure as a springboard for the infiltration of other government systems is of great concern for system administrators. Another significant issue is the danger of becoming a victim of collateral damage ensuing from widespread attacks, as happened to hospitals in the 2017 ransomware attacks [1], which caused severe damage to local public administrations as well, going far beyond loss of reputation. Depending on the criticality of services provided by a local public administration, the damage caused by a successful DDoS, ransomware, malware, or, in the worst case, a destruction-orientated APT attack, can be substantial.</para>
<para>Against this background, the H2020-funded CS-AWARE project<footnote id="fn_1" label="1"> <para>https://cs-aware.eu/</para></footnote> aims to equip local public administrations with a toolset allowing them to gain a better picture of vulnerabilities and threats or infiltrations of their ICT systems. This will be achieved via an underlying information flow model including components for information collection, analysis and visualisation which contribute to an integrated awareness picture that gives an overview of the current status in the monitored infrastructure and raises the awareness for both looming and already materialized threats.</para>
<para>Starting from a requirements and situation analysis based on workshops following the soft systems methodology (SSM), Rich Pictures serve as tools for developing a core information flow model that facilitates the information collection, analysis and rendering/visualization processes. In addition to these steps, recommendations are suggested that can either be used as support for human decision makers or be directly executed by (re)configuration scripts to realign defensive capabilities in such a way that existing attacks can be dealt with and developing ones can be prevented from getting through.</para>
<para>In CS-AWARE we develop the building blocks for a cybersecurity awareness solution that builds upon a holistic socio-technological system and dependency analysis. An overview of the proposed approach can be seen in <link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link>. After data collection, which comprises static information collected during system and dependency analysis as well as dynamic information collected at run-time, an analysis and decision support component, together with multi-lingual support, processes the information to support the main objectives of the solution:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Provide situational awareness to system operators or administrators via visualization</para></listitem>
<listitem><para>Provide supervised self-healing in cases where the analysis engine could determine an automated solution to prevent or mitigate a detected cybersecurity incident</para></listitem>
<listitem><para>Provide the capabilities to share cybersecurity related information with relevant communities to help prevent or mitigate similar incidents for other organizations</para></listitem>
</itemizedlist>
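<para>The supervised self-healing objective can be illustrated with a minimal sketch: when the analysis engine maps a detected threat to a known mitigation, an operator confirms before anything is executed. The threat names, mitigation recipes and decision interface below are invented for illustration and do not reflect the actual CS-AWARE implementation.</para>

```python
# Hypothetical table mapping detected threats to mitigation recipes.
MITIGATIONS = {
    "brute-force-login": "block offending IP at the firewall",
    "outdated-tls": "disable legacy cipher suites",
}

def handle_threat(threat, operator_approves):
    """Decide what to do with a detected threat.

    If no recipe is known, the system falls back to classical decision
    support (escalate to a human). Otherwise the operator supervises:
    the mitigation is executed only upon approval.
    """
    action = MITIGATIONS.get(threat)
    if action is None:
        return ("escalate", None)
    if operator_approves(threat, action):
        return ("executed", action)
    return ("declined", action)

# Example: an operator who approves every proposed mitigation.
status, action = handle_threat("brute-force-login", lambda t, a: True)
```

<para>The key design point is that automation never bypasses the operator: unknown threats are escalated, and known mitigations still require explicit approval.</para>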
<para>To ensure the practical feasibility of the approach, processes and tools developed in this project from the requirements analysis onwards, two city administrations are involved to provide the necessary guidance and support: one medium-sized, and one large and complex one that includes outsourced operations. Assuming that the pilot implementations are satisfactory at the end of the project, the commercialisation group of the project will then advance the toolset and the services around it into a commercial operation. With the Network and Information Security (NIS) Directive [2] and the General Data Protection Regulation (GDPR) [3] having become binding legislation in the European Union in May 2018, it is expected that the need for such a toolset will increase well beyond local public administrations.</para>

<fig id="F8-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-1">Figure <xref linkend="F8-1" remap="8.1"/></link></label>
<caption><para>The CS-AWARE approach.</para></caption>
<graphic xlink:href="graphics/ch008_fig001.jpg"/>
</fig>
<para>The remainder of the chapter is organized as follows: Section 8.2 discusses related work. Section 8.3 details the CS-AWARE concept and framework, while Section 8.4 specifies implementation aspects of the main framework components. Section 8.5 discusses the project results and experiences so far, and Section 8.6 concludes the chapter.</para>
</section>

<section class="lev1" id="sec8-2">
<title>8.2 Related Work</title>
<para>Cybersecurity affects both individuals and organisations and is one of today&#8217;s most challenging societal security problems. Next to strategic/critical infrastructures, large commercial enterprises, SMEs and also governmental or non-governmental organisations (NGOs) are affected. Expanding beyond the technology-focused boundaries of classical information technology (IT) security, cybersecurity is strongly interlinked with organisational and behavioural aspects of IT operations, and with the need to adhere to the existing and upcoming legal and regulatory framework for cybersecurity. This is particularly true in the European Union, where substantial efforts have been made to introduce a comprehensive and coherent legal framework for cybersecurity. Consequent upon the EU cybersecurity strategy [4], the two main legislative efforts have been the NIS directive [2] and the GDPR [3]. One of the main aspects of the NIS directive, as well as of the European cybersecurity strategies, is cooperation and collaboration among relevant actors in cybersecurity, as pictured in <link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link>, taken from the EU cybersecurity strategy, which identifies the main actors relevant for a cooperative and collaborative cybersecurity environment. Enabling technologies for coordination and cooperation efforts are essential for situational awareness and information sharing among relevant communities and authorities. In the long term, it is expected that information sharing can sustainably improve cybersecurity and, through the enhanced awareness so generated, benefit society and the economy in their entirety. Current reports such as the 2018 Europol IOCTA (Internet Organised Crime Threat Assessment) [1] underline the growing importance of collaboration and coordination in addressing current and future cybersecurity challenges.</para>
<para>In common with the challenges posed by the NIS directive, GDPR compliance efforts require a greater understanding of an organization&#8217;s systems in order to identify and understand GDPR-relevant information and information flows. Awareness technologies like the one proposed in CS-AWARE enable organizations to assess and manage GDPR compliance.</para>
<fig id="F8-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link></label>
<caption><para>Roles and responsibilities in European cybersecurity strategy.</para></caption>
<graphic xlink:href="graphics/ch008_fig002.jpg"/>
</fig>
<para>Situational awareness in the CS-AWARE context is a runtime mechanism to gather cybersecurity relevant data from an IT infrastructure and visualise the current situation for a user or operator. Understanding the entirety of the cybersecurity relevant aspects of the internal system is one of the cornerstones for ensuring useful as well as successful collaboration and cooperation between institutions. It is a complex task, but one that greatly improves the cybersecurity of organisations in the context of cybersecurity situational awareness and cooperative/collaborative strategies towards cybersecurity. Therefore, a system and dependency analysis methodology has been introduced to analyse the environment and</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Identify assets and dependencies within the system and how to monitor them</para></listitem>
<listitem><para>Capture the socio-technical relations within the organisation and the purely technical aspects</para></listitem>
<listitem><para>Identify external information sources, either official or from dedicated communities</para></listitem>
<listitem><para>Provide the results in an output that can be utilised by support tools</para></listitem>
</orderedlist>
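<para>As a rough illustration of the kind of output such an analysis could hand to support tools, the following Python sketch models assets with their dependencies, monitoring points and external information sources, and serialises them to JSON. The field names and example values are illustrative only, not the methodology&#8217;s actual schema.</para>

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Asset:
    """One element identified during the system and dependency analysis."""
    name: str
    kind: str                                         # e.g. "server", "process", "person"
    monitoring: list = field(default_factory=list)    # where/how to monitor it
    depends_on: list = field(default_factory=list)    # names of other assets
    info_sources: list = field(default_factory=list)  # relevant external feeds

def export_analysis(assets):
    """Serialise the analysis result so that support tools can consume it."""
    return json.dumps({"assets": [asdict(a) for a in assets]}, indent=2)

portal = Asset("city-portal", "server",
               monitoring=["/var/log/nginx/access.log"],
               depends_on=["citizen-db"],
               info_sources=["vendor-advisories"])
database = Asset("citizen-db", "server", monitoring=["db-audit-log"])
result = export_analysis([portal, database])
```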
<para>Our work is based on established and well-proven methods related to systems thinking: the soft systems methodology (SSM) [5, 6] as well as PROTOS-MATINE [7, 8] and GraphingWiki [9] for system analysis and for the management and visualisation of results. Since technology is only one of many factors in cybersecurity, the system and dependency analysis is designed to detect and analyse the socio-technical nature of an IT infrastructure. It does so by considering the human, organisational and technological factors, as well as the legal/regulatory and business-related factors that may contribute to cybersecurity in a specific context. The key concepts are holism (looking at the entirety of the domain rather than at isolated components) and systemicity (treating things as systems, using systems ideas and adopting a systems perspective). As can be seen in <link linkend="F8-3">Figure <xref linkend="F8-3" remap="8.3"/></link>, systems thinking is a way of looking at some part of the world by choosing to regard it as a system, using a framework of perspectives to understand its complexity and to undertake some process of change.</para>
<para>Hard and soft systems thinking are the two strands of systems thinking. Hard systems design is based on systems analysis and systems engineering and builds on the idea that the world is composed of systems that can be described, and that these systems can be understood through rational analysis. Hard systems design assumes a clear consensus as to the nature of the problem to be solved. It is unable to depict, understand, or make provision for &#8220;soft&#8221; variables such as people, culture, politics or aesthetics. While hard systems design is highly appropriate for domains involving engineered system structures that require little input from people, the complex systems and interactions in critical infrastructures or other organisations &#8211; especially with cybersecurity in mind &#8211; usually do not allow this type of analysis. Soft systems design is therefore much more appropriate and suitable for analysing human activity systems that require constant interaction with, and intervention from, people.</para>
<fig id="F8-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-3">Figure <xref linkend="F8-3" remap="8.3"/></link></label>
<caption><para>Systems thinking.</para></caption>
<graphic xlink:href="graphics/ch008_fig003.jpg"/>
</fig>
<para>Complex systems in software engineering are systems in which individual components function autonomously but depend on the outputs of other components [10, 11], and they therefore require abstraction. According to Sokolowski et al. [12], abstraction in software engineering can occur in two ways: either by limiting the information covered by the model to the components that are relevant and ignoring the rest, or by reproducing a minimised version of the real-world concept. This process of abstraction is critical and is sometimes considered one of the most important capabilities of a software engineer [13].</para>
<para>The CS-AWARE modelling approach for the information flow of the complex system is influenced by the Data Flow Diagram (DFD) language defined by Li and Chen [14], but adapted to suit the domain&#8217;s needs. Information flows can cover multiple granularities of interconnection between components, but at a high level can be classified into three categories: direct information flow, indirect information flow and general information flow [15]. The types of data flows of the original DFD have been adapted, and types of activities were added to ensure that the diversity of the system can be modelled easily. This approach was chosen because of the central importance of information flow in the CS-AWARE solution, as well as the need for individualised entities.</para>
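<para>The distinction between direct and indirect information flows can be illustrated with a small sketch (our own simplification, not the DFD notation of Li and Chen): a flow is direct when one component sends data straight to another, and indirect when the data passes through intermediaries. The third category, general information flow, is omitted here.</para>

```python
def flow_type(graph, src, dst):
    """Classify the information flow between two components.
    graph: dict mapping a component to the set of components it sends data to.
    Returns "direct", "indirect" or None (no flow at all)."""
    if dst in graph.get(src, set()):
        return "direct"
    # search for a multi-hop path through intermediary components
    seen, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        for nxt in graph.get(node, set()):
            if nxt == dst:
                return "indirect"
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None
```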
<para>The role of PROTOS-MATINE and GraphingWiki in this proposed analysis method is to complement the SSM analysis with information from other sources, and provide a solid base for discussion through visualisation in dedicated workshops with the system users and operators. One of the capabilities of GraphingWiki is to instantly link gathered information to other relevant information and thus allow an update of the graphical representation of the analysed system as soon as new information arrives. This feature is used together with SSM analysis to create more dynamic discussions and give even more incentive to the participants to create a system model that is as close to reality as possible.</para>
</section>

<section class="lev1" id="sec8-3">
<title>8.3 The CS-AWARE Concept and Framework</title>
<para>The CS-AWARE framework is the core of the CS-AWARE solution and is based largely on the analysis of cybersecurity requirements for local public administrations and of the existing technologies. The aim of the framework is to provide a unified understanding of which components interact with each other and of how this interaction is made possible. The framework provides a high-level overview of the main components, most of which are represented by one of the consortium partners, as well as a more detailed view of the main subcomponents or processes each of them consists of. Additionally, the relations between these components are defined, as well as, in the case of data flows, the data format in which the exchange takes place. The high-level nature of the framework was crucial, since some technical details will only be specifiable during the project&#8217;s implementation phase.</para>
<para>The CS-AWARE framework consists of an information flow model as well as individual interface definitions for each of the components. The model is a high-level, abstract view of how each of the separate technology components cooperates with the others and in what relation they stand to each other. These relations may be data flows or logical control flows between the modules. The focus of the current design of CS-AWARE lies on layers 3, 4 and 7, namely the network, transport and application layers of the LPAs&#8217; systems. To facilitate further analysis, the detailed investigation into the appropriate connections was based on the ETL structured diagram. ETL stands for Extraction, Transformation and Load and is a process most commonly used for data warehouses. Extract stands for the gathering of data from various sources, Transform for cleaning and manipulating the data to ensure integrity and completeness, and Load for transferring the data into its target space [16]. Since the CS-AWARE solution is evidently not a data warehouse, the final layer, load, was adjusted to better suit the framework&#8217;s nature and renamed the data-provisioning layer. In our case, the division into layers will mainly be applied to facilitate the structuring of the following, more detailed diagrams of the subcomponents, processes and their interrelationships.</para>
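<para>The three layers can be sketched as a minimal pipeline. The record fields and the completeness check below are invented for the example; they merely illustrate how extraction, transformation and the renamed data-provisioning step fit together.</para>

```python
def extract(sources):
    """Extraction layer: gather raw records from each configured source."""
    for source in sources:
        yield from source()

def transform(records):
    """Transformation layer: clean records and drop incomplete ones."""
    for rec in records:
        rec = {k: v.strip() for k, v in rec.items() if v is not None}
        if "event" in rec and "host" in rec:   # integrity/completeness check
            yield rec

def provision(records):
    """Data-provisioning layer (the renamed 'load' step): hand the cleaned
    records on to visualisation, information sharing or self-healing."""
    return list(records)

def run_pipeline(sources):
    return provision(transform(extract(sources)))

logs = lambda: [{"host": " gw1 ", "event": "login-fail"},
                {"host": "gw2", "event": None}]       # incomplete record
result = run_pipeline([logs])
```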
<para>The data extraction layer covers all components responsible for defining relevant data and extracting it, as well as the sources themselves. The system dependency analysis is where the analyst defines the sources and data necessary for monitoring the LPA&#8217;s systems. This information is fed into the data collection module via a control flow, which then extracts the data accordingly.</para>
<para>The data transformation layer comprises all components tasked with transforming and analysing the data in some way. The first step is to filter and adapt the data as required before it is, if necessary, run through the natural language processing information extraction component. The data analysis and pattern recognition and the multi-language support modules further process the data. For visualising and sharing the detected incidents and data patterns, the data provisioning layer was defined. This is where all collected information is either visually presented to the end user, shared with selected information sharing communities or used for self-healing rule definition.</para>
<para>The approach chosen to present the CS-AWARE framework interface specifications is based on the classical I/P/O &#8211; Input, Process, Output &#8211; model, where each component consists of as many input, process and output entities as are required, as visualised in <link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link>. For each component, all other building blocks providing data or control flows are summarised as inputs, including the data format they use. Additionally, each component has one or more processes or subcomponents that execute the respective logic of the module; these are described in detail as well. Each sub-process has inputting and outputting components. Finally, the output components are defined by the same information as the inputs: the data format and the type of information flow they use. In preparation for conceptualising the framework, various models and approaches were researched. In the end, the CS-AWARE framework was built around the information flows between the components. Nevertheless, it is in line with the NIST cybersecurity framework [17], which identifies five functions as its core &#8211; Identify, Protect, Detect, Respond and Recover &#8211; making it also compliant with the Italian cybersecurity report, which is based on the NIST framework [18].</para>
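<para>To make the I/P/O idea concrete, a component definition might be captured as a simple structure listing its inputs, processes and outputs, with each data or control flow annotated with its format. The component name, entry names and formats below are hypothetical, not the project&#8217;s actual interface specifications.</para>

```python
# A hypothetical I/P/O description of one framework component.
component = {
    "name": "data collection",
    "inputs":  [{"from": "system dependency analysis", "flow": "control",
                 "format": "graphingwiki-export"}],
    "processes": ["poll configured sources", "normalise encodings"],
    "outputs": [{"to": "data pre-processing", "flow": "data",
                 "format": "STIX2"}],
}

def validate_ipo(comp):
    """Check that an I/P/O definition names a flow type and data format for
    every input and output, and declares at least one process."""
    for entry in comp["inputs"] + comp["outputs"]:
        if "format" not in entry or "flow" not in entry:
            return False
    return bool(comp["processes"])
```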
<fig id="F8-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-4">Figure <xref linkend="F8-4" remap="8.4"/></link></label>
<caption><para>CS-AWARE framework.</para></caption>
<graphic xlink:href="graphics/ch008_fig004.jpg"/>
</fig>
<fig id="F8-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-5">Figure <xref linkend="F8-5" remap="8.5"/></link></label>
<caption><para>I/P/O interface definition framework.</para></caption>
<graphic xlink:href="graphics/ch008_fig005.jpg"/>
</fig>

<para>It was decided that the communication between the components illustrated in <link linkend="F8-5">Figure <xref linkend="F8-5" remap="8.5"/></link>, as well as the communication with relevant authorities via the information sharing component, will be in accordance with the STIX2 protocol [19]. STIX2 is a modern and flexible protocol for expressing and linking cybersecurity information and is expected to gain wide adoption over the coming years. An open-source Java implementation of the protocol specification was developed by CS-AWARE<footnote id="fn_2" label="2"> <para>https://github.com/cs-aware/stix2</para></footnote> to facilitate wider adoption of the protocol.</para>
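<para>Although the CS-AWARE reference implementation is in Java, the shape of a STIX2 object can be sketched in a few lines of Python. The example below builds a minimal indicator following the STIX 2.1 object layout; the name and the pattern value are invented for illustration.</para>

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(ip):
    """Build a minimal STIX 2.1 indicator object for a suspicious IP address."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",   # STIX identifiers are type--UUID
        "created": now,
        "modified": now,
        "name": "Suspicious source address",
        "pattern": f"[ipv4-addr:value = '{ip}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

indicator = make_indicator("198.51.100.1")
serialised = json.dumps(indicator)
```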

</section>

<section class="lev1" id="sec8-4">
<title>8.4 Framework Implementation</title>
<para>This Section discusses in more detail the main framework components identified in Section 3. Section 4.1 discusses the system and dependency analysis approach, Section 4.2 details the data collection and pre-processing steps, and Sections 4.3 and 4.4 discuss multi-language support and data analysis respectively. In Section 4.5 the visualisation component is detailed, while Sections 4.6 and 4.7 discuss the information sharing and self-healing components respectively.</para>
</section>

<section class="lev2" id="sec8-4-1">
<title>8.4.1 System and Dependency Analysis</title>
<para>For analysing the networks and systems in the two European CS-AWARE piloting cities in different countries, one with a population in excess of 2.5 million and one with a population in excess of 150,000, the Soft Systems Methodology (SSM) was used in conjunction with GraphingWiki. The two cities participated in the project as pilot use cases for whom cybersecurity awareness systems were to be built as an output of the project.</para>
<para>The two cities presented very different problem domains: one city&#8217;s system was extremely large, reflecting as it did the size of the population it served, and potentially had 15+ million concurrent citizen users. These users can access the city&#8217;s systems from their homes, from public buildings and from wireless hotspots around the city. This city has outsourced the management of many of its key systems. The network topology, the systems and the underlying processes combine to form what is, overall, an extremely complex system. The size and complexity of the system precludes any one individual, or indeed any small group of employees, from having a complete understanding of all of the systems or of the links between systems and their processes and sub-processes. The smaller city operates, manages and maintains all of its own systems.</para>
<para>SSM is a well-proven analytical approach to systems analysis that has been used in an extremely wide range of settings. It is beyond the scope of this chapter to give anything but a brief description of the methodology.</para>
<para>SSM consists of seven stages:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Enter the problem situation</para></listitem>
<listitem><para>Express the problem situation</para></listitem>
<listitem><para>Formulate root definitions of systems behaviour</para></listitem>
<listitem><para>Build conceptual models of systems in root definitions</para></listitem>
<listitem><para>Compare models with real-world situations</para></listitem>
<listitem><para>Define possible and feasible changes</para></listitem>
<listitem><para>Take action to improve the problem situation</para></listitem>
</orderedlist>
<fig id="F8-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-6">Figure <xref linkend="F8-6" remap="8.6"/></link></label>
<caption><para>Soft systems analysis rich picture.</para></caption>
<graphic xlink:href="graphics/ch008_fig006.jpg"/>
</fig>

<para>The problem situation is explored (expressed) by drawing &#8220;Rich Pictures&#8221;. These pictures are cartoon-like representations intended to encompass all of the elements of the situation being examined, be they technical, social, economic or political. A machine-drawn example can be seen in <link linkend="F8-6">Figure <xref linkend="F8-6" remap="8.6"/></link>; it depicts a malfunctioning airline passenger check-in system and outlines the different viewpoints of those involved when one airline&#8217;s check-in system fails.</para>
<para>The analysis of both cities&#8217; operations was conducted in the first two of a proposed series of three workshops. In the first workshops, the participants were asked to draw rich pictures to identify their city&#8217;s key critical systems (those systems critical to the city&#8217;s ability to provide services to its citizens and those systems storing or processing sensitive or personal information). Having identified the critical systems, further rich pictures were drawn to explore the interrelationships between the systems so identified, in terms of network connectivity and information flows.</para>
<para>These rich pictures informed the development of a series of GraphingWiki graphs, like the one seen in <link linkend="F8-7">Figure <xref linkend="F8-7" remap="8.7"/></link>, which enabled the analysts to represent and model their understanding of the networks and systems in both pilot cities. Each of the nodes is a wiki page that holds the semantic descriptions of the respective elements.</para>
<para>A second round of workshops in the pilot cities was undertaken in which the analysts decided to use the CATWOE approach [20] to gain a better understanding of the processes depicted in the rich pictures created during the first workshops. CATWOE (a mnemonic) was used to identify, express, explore and explain the following features in the key rich pictures drawn in the workshops. In doing so, the participants described the processes and sub-processes of the key systems identified in the first and second workshops.</para>
<fig id="F8-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-7">Figure <xref linkend="F8-7" remap="8.7"/></link></label>
<caption><para>System and dependency analysis use case example.</para></caption>
<graphic xlink:href="graphics/ch008_fig007.jpg"/>
</fig>
<table-wrap>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tr><td valign="top" align="left"><b>C</b>ustomers</td><td>The organisation&#8217;s customers. The stakeholders of the system</td></tr>
<tr><td valign="top" align="left"><b>A</b>ctors</td><td>The employees of the organisation. The people involved in ensuring that a transformation takes place</td></tr>
<tr><td valign="top" align="left"><b>T</b>ransformation</td><td>The process by which inputs become outputs, e.g. raw materials become finished goods</td></tr>
<tr><td valign="top" align="left"><b>W</b>orld view</td><td>The wider view of all of the interested parties &#8211; employees, suppliers, customers etc. The &#8220;big picture&#8221;</td></tr>
<tr><td valign="top" align="left"><b>O</b>wner</td><td>The owner of the system or process. The organisation in control</td></tr>
<tr><td valign="top" align="left"><b>E</b>nvironmental constraints</td><td>Finances, legislation, ethics</td></tr>
</table>
</table-wrap>
<para>These CATWOE analyses were then used in a plenary session to further correct and refine the representation of the systems as mapped out in GraphingWiki, and made it possible to identify the information flows that each of those processes produces during day-to-day operations. The identification of information flows is considered a key aspect of understanding where and how to best monitor the systems in the cybersecurity context, and is key to interfacing the analysed systems with CS-AWARE.</para>
</section>

<section class="lev2" id="sec8-4-2">
<title>8.4.2 Data Collection and Pre-Processing</title>
<para>For data collection and pre-processing the main challenge in CS-AWARE is to deal with the diversity of data collected from various sources, ranging from cybersecurity information that is heavily structured (e.g. STIX based information sources), to loosely structured information (e.g. log files) or completely free semantic text (e.g. social media). It was decided to convert incoming data from all sources to STIX2 format in the pre-processing stage.</para>
<para>Data collection and pre-processing are applied to multiple sources and the retrieved data is stored in a data-lake. To handle large volumes and a variety of file formats, a big-data pipeline has been implemented following a flexible approach, so that data sources can easily be integrated at a later stage, should additional relevant data sources be identified. Importantly, collection is executed in compliance with the GDPR: personal data is removed or anonymised at source, since personal data is not required for CS-AWARE operation in the majority of use cases. The implemented framework aggregates and ingests three main classes of information sources:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Logs from servers, databases, applications and network/security devices from within monitored systems</para></listitem>
<listitem><para>Cyber threat intelligence from specialised websites and feeds</para></listitem>
<listitem><para>More general cybersecurity related notifications and warnings collected from social networks</para></listitem>
</itemizedlist>
<para>In order to collect information from the monitored systems within the local public administrations, which usually do not have APIs for data collection, a collector interface was developed to be hosted within the monitored systems. It acts as a local collector of data that is relevant for CS-AWARE and provides an interface to the CS-AWARE solution, which may be hosted in the cloud. The conversion to STIX2 format is usually straightforward, because the relevant information is often based on unusual behaviour, which can easily be modelled in STIX2.</para>
<para>Threat intelligence sources usually provide a public API that allows collection of data, but there is no agreed or standardised data format in which this data is provided. Common formats include STIX1/STIX2, comma-separated values (CSV), Extensible Markup Language (XML) and JavaScript Object Notation (JSON). Since CS-AWARE operates on STIX2, all collected data entries are converted to STIX2 during data pre-processing. Since threat intelligence sources almost exclusively share information with a strong cybersecurity context, the conversion is usually straightforward.</para>
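<para>A conversion from a CSV feed to STIX2-style indicators could, in outline, look as follows. The feed columns and the mapping table are hypothetical; a production converter would emit complete STIX2 objects with identifiers and timestamps rather than the bare pattern dictionaries shown here.</para>

```python
import csv
import io

def feed_to_stix(csv_text):
    """Convert a hypothetical CSV threat feed (columns: indicator,type,description)
    into STIX2-style indicator pattern dictionaries."""
    patterns = {"ipv4": "[ipv4-addr:value = '{}']",
                "domain": "[domain-name:value = '{}']"}
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        template = patterns.get(row["type"])
        if template:   # skip indicator types we cannot map
            out.append({"type": "indicator",
                        "pattern": template.format(row["indicator"]),
                        "description": row["description"]})
    return out

feed = ("indicator,type,description\n"
        "203.0.113.9,ipv4,botnet C2\n"
        "evil.example,domain,phishing\n")
converted = feed_to_stix(feed)
```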
<para>Threat intelligence notifications collection is performed every 12 hours and stored within the CS-AWARE repository.</para>
<para>As part of this project we want to explore the opportunity for cybersecurity prevention and notification offered by listening to social media sources such as Reddit and Twitter. The intuition here is that a cyber-attack may propagate following a certain pattern that could be anticipated from social media warnings, and social media conversations often provide an early indicator of information that may be shared by threat intelligence at a later time. A challenge in utilising unstructured semantic text like social media is to assess the relevance of each element and assign a structure to it so that it can be processed in an automated way. In CS-AWARE we try to answer this challenge with a natural language processing (NLP) based information extraction approach that will be discussed in more detail in Section 4.3.</para>
<para>For the project repository, we believe a cloud-based big-data repository is the winning approach, since it offers a ready-to-use framework designed to scale up in a cost-effective manner. For this type of challenge, a popular approach involves using a queue system, such as Apache Kafka, and a database where the data is stored; this infrastructure could well be replicated on the major cloud providers. Having said that, a fully functioning big-data pipeline has a fixed cost even if it is not fully exploited. For this reason, we preferred a slim and flexible solution where costs are compressed. In more detail, we created the CS-AWARE data-lake on AWS S3<footnote id="fn_3" label="3"> <para>https://aws.amazon.com/s3/</para></footnote> storage. AWS S3 provides the capability to store and retrieve any amount of data from anywhere. It is worth mentioning that, thanks to a structured folder hierarchy, it is intuitive and straightforward to retrieve the needed information. Despite its low cost and simplicity, this approach has already proven to be fast and stable.</para>
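<para>The structured folder hierarchy mentioned above could be realised by encoding the source and collection date into each object key, so that retrieval becomes a simple prefix query. The layout below is illustrative, not the actual CS-AWARE data-lake structure.</para>

```python
from datetime import date

def lake_key(source, collected, filename):
    """Build a structured data-lake object key: source/year/month/day/filename."""
    return (f"{source}/{collected.year:04d}/{collected.month:02d}/"
            f"{collected.day:02d}/{filename}")

key = lake_key("threat-intel", date(2019, 3, 14), "feed-0001.json")
# Listing everything collected from one source on one day then becomes a
# prefix query, e.g. with boto3:
#   s3.list_objects_v2(Bucket=bucket, Prefix="threat-intel/2019/03/14/")
```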
</section>

<section class="lev2" id="sec8-4-3">
<title>8.4.3 Multi-language Support</title>
<para>In CS-AWARE, existing technology for handling multiple languages is used and has been adapted to fit the specific needs of the project context and the use cases. To this aim <i>Graphene</i>, a rule-based information extraction system developed in the context of research conducted at the University of Passau, was utilised. There are two use cases for multi-language support in CS-AWARE: multi-language support at the input, when cybersecurity-relevant information is collected from multiple sources, and multi-language support at the output, to inform the system operators of the system&#8217;s security status in their chosen language. In this Section we focus on the first use case, where the challenge is not only to translate new incoming information to a meta-language, but at the same time to extract the most relevant information using natural language processing (NLP) methods.</para>
<para>In the project framework, Graphene is responsible for all functions of the NLP information extraction component. The tool uses a two-layered transformation stage consisting of a <i>clausal disembedding layer</i> and a <i>phrasal disembedding layer</i>, together with <i>rhetorical relation identification</i>. In simpler terms, the main approach we take here is to <i>simplify complex sentences</i> before applying a set of tailored rules to transform a text into the knowledge graph. During the CS-AWARE project, we had the opportunity to mature the original research prototype into a technology that is both easy to deploy as a service and easy to integrate as a product using de-facto web standards. Additionally, we also had the opportunity to implement and add a new extraction layer responsible for transforming complex categories &#8211; what one would call &#8216;coarse-grained information&#8217; &#8211; into a graph of fine-grained knowledge, as described in the implementation section.</para>
<para>Thanks to Graphene&#8217;s ability to extract complex categories, we are able to extract useful information at the correct level of granularity. As an example, we consider the case of a recent tweet written by the United States Computer Emergency Readiness Team (US-CERT), as shown in <link linkend="F8-8">Figure <xref linkend="F8-8" remap="8.8"/></link>.</para>
<para>Once we remove the links and hashtags, the knowledge graph generated by Graphene allows us to identify the vendors and products that might be under attack or suffering from new vulnerabilities. With this functionality, both types of information can be forwarded to users and system admins as quickly as they are published on a social network like Twitter. More elaborate information and technical details about the information extraction strategy, including the sentence simplification step and the identification of the rhetorical structures, can be found in [21], while more elaborate information on the extraction of complex categories can be found in [22].</para>
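<para>The link and hashtag removal step can be sketched with a few regular expressions. The tweet text below is invented, in the style of a US-CERT notification; the actual Graphene pipeline performs the subsequent extraction into a knowledge graph.</para>

```python
import re

def clean_tweet(text):
    """Strip links and hashtags before the text is handed to the
    information-extraction pipeline."""
    text = re.sub(r"https?://\S+", "", text)   # remove links
    text = re.sub(r"#\w+", "", text)           # remove hashtags
    return re.sub(r"\s+", " ", text).strip()   # collapse leftover whitespace

tweet = ("Cisco releases security updates for IOS XE. "
         "#cybersecurity #infosec https://t.co/abc123")
cleaned = clean_tweet(tweet)
```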

</section>

<section class="lev2" id="sec8-4-4">
<title>8.4.4 Data Analysis</title>
<para>One of the main tasks of the CS-AWARE platform is to look for various threat patterns, some of which may not have been detected or recognised as such before, and which can signal either a clear threat or suspicious behaviour that may possibly or potentially be a threat. At a conceptual level, we define a threat pattern as an open set of individual threat parameters with unique settings/values aimed at capturing anomalous events.</para>
<fig id="F8-8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-8">Figure <xref linkend="F8-8" remap="8.8"/></link></label>
<caption><para>From tweets to knowledge graphs.</para></caption>
<graphic xlink:href="graphics/ch008_fig008.jpg"/>
</fig>

<para>Such a set of threat parameters can be altered and improved over time as the knowledge about threats expands. Once such a pattern catches an occurrence of multiple suspicious events simultaneously, the identified events are flagged for further analysis. Often, a single suspicious event may not be enough to be considered a threat, but when multiple suspicious events happen together, the chance that a threat is present increases.</para>
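<para>The idea that a pattern fires only when several suspicious events co-occur can be sketched as follows. The parameter names, thresholds and the minimum-hit rule are invented for illustration, not the platform&#8217;s actual pattern language.</para>

```python
def matches(pattern, events):
    """A threat pattern is an open set of parameters with threshold values;
    it fires only when several suspicious events co-occur.
    Returns (flagged, list of parameters that exceeded their thresholds)."""
    hits = [p for p, threshold in pattern["params"].items()
            if events.get(p, 0) >= threshold]
    return len(hits) >= pattern["min_hits"], hits

# Hypothetical brute-force pattern: any two of these signals together fire it.
brute_force = {"params": {"failed_logins": 20,
                          "new_source_ips": 5,
                          "off_hours_access": 1},
               "min_hits": 2}

observed = {"failed_logins": 42, "new_source_ips": 7}
flagged, evidence = matches(brute_force, observed)
```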
<para>In light of the above, we take data analysis to be the set of processes by which all data sources are assembled, combined and searched for unusual or threat patterns. Handling the data sources, their formats and the way they should be cleaned of overhead, prepared and then analysed is vital for finding unusual threats or patterns that would otherwise go undetected by the existing tools on the market. Our data analysis efforts are focused on internal data sources belonging to organisations that use the CS-AWARE platform, such as logs, as well as external data sources such as threat intelligence platforms, specialised cybersecurity forums, news and solutions. Such data is in raw form and has to be filtered and processed in order to extract only the most useful data for analysis.</para>
<para>The data analysis focuses on extracting the most probable elements of information that could form cybersecurity threats, as well as information related to such threats. The data analysis, which builds on the Peracton <small>MAARS<SUP>TM</SUP></small> component, combines the above sources to identify threats and possible security incidents. Some combinations, assuming proper information pre-processing, are quite straightforward to process, while others may need more advanced analysis.</para>
<para>In this respect, the data analysis engine should be able to perform at least the following with regards to the above sources:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><i>Match vulnerability information to assets</i> &#8211; e.g. given a vulnerability found in a specific OS version, determine whether it is applicable to the monitored LPAs.</para></listitem>
<listitem><para><i>Combine threat information with logs and assets</i> &#8211; the analysis should be based on the specific attributes that characterise an attack. For example, to identify a security incident regarding suspicious activity originating from a specific IP and targeting specific systems, these attack characteristics need to be matched to the information we have: we analyse the threat information provided by external sources to give values to these attributes and, once we have done so, process the LPA&#8217;s logs and the LPA&#8217;s asset inventory to identify matches.</para></listitem>
<listitem><para><i>Attack pattern matching</i> &#8211; analyse network and system activity to identify potential security incidents based on attack patterns either collected from external sources or specified by CS-AWARE security experts. The engine&#8217;s efficiency strongly depends on the defined patterns. Although the engine should demonstrate its ability through a pre-defined set of patterns, it should also be able to accommodate additional patterns that security experts would like to define in the future.</para></listitem>
</itemizedlist>
<para>We expect the data analysis to provide information about the criticality of a specific security incident and, based on this classification, to suggest the most appropriate risk mitigation option (if available from the data). Revisiting the above example, where a threat report describes suspicious activity originating from specific IPs and targeting specific systems, the (risk) analysis for the following scenarios gives the corresponding results described below:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><i>Scenario 1</i>: Threat is flagged by external sources as critical and the LPA has systems that are vulnerable to this malicious activity: the risk for the organization is high. A risk mitigation strategy should be applied, i.e. an action is required to mitigate the risk, the details of which are subject to the information provided by external sources or by a CS-AWARE security expert.</para></listitem>
<listitem><para><i>Scenario 2:</i> Threat is flagged by external sources as critical, logs indicate incoming traffic from this IP, yet the LPA has <i>no</i> systems that are vulnerable to this malicious activity: the risk is low. In this case, <i>no</i> action is required.</para></listitem>
</itemizedlist>
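<para>The two scenarios suggest a simple decision rule that combines external threat criticality with local exposure. The sketch below is a deliberate simplification of what a real risk analysis would do; the asset name is invented.</para>

```python
def assess(threat_critical, vulnerable_assets):
    """Combine external threat criticality with local exposure:
    a critical threat plus vulnerable assets means high risk and a mitigation
    action; without vulnerable assets (even with traffic from the flagged IP),
    the risk stays low and no action is required."""
    if threat_critical and vulnerable_assets:
        return {"risk": "high", "action": "apply mitigation strategy"}
    return {"risk": "low", "action": None}

scenario1 = assess(True, ["legacy-web-server"])   # Scenario 1: vulnerable system
scenario2 = assess(True, [])                      # Scenario 2: no vulnerable system
```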
</section>

<section class="lev2" id="sec8-4-5">
<title>8.4.5 Visualization</title>
<para>The visualization component will show the users (e.g. system administrator, management) the level of cyber threats to their system and will make it possible for system administrators to cooperate with the system to identify self-healing procedures and to share information with external partners regarding new cyber threats that have been identified in the analytics module.</para>
<para>The visualisation module is also the main user interface of the CS-AWARE product for administration. In order to provide cybersecurity awareness, it is necessary to visualise the threats, the threat level, the possible self-healing strategies and the information shared with the cybersecurity community. It is also necessary to have an interface to communicate information back to the system to control the aforementioned topics, as well as for lower-level administration. The visualisation component takes care of this in accordance with the work done in the dependency analysis and in close cooperation with the other parts of the CS-AWARE solution. For the interchange of data, the STIX2 format was chosen as the basic communication format between the modules, as it is commonly used in the field of cybersecurity, is fairly stable, is extensible when needed and enjoys reasonable framework support.</para>
<para>The number of cybersecurity events has been rising over the past years, and as more and more of our society is based on information systems, the issues have multiplied over time. Before this project, independent vendors each offered their own visualisation of how their particular system is threatened by cybersecurity events. In a large-scale facility like most LPAs, this results in a large number of reports on what is going on in their field of operation. For the system administrator this gives only a partial overview of cybersecurity events: on the one hand it delivers only the view of a single vendor, and on the other hand it is often too complex to be useful. The number of different reports to choose from can be high, and they are usually collected only per vendor. This makes it difficult to assess the full cybersecurity picture, and the lack of overview leaves the level of cybersecurity awareness lower than it could be. This is a situation that needs to be remedied.</para>
<para>The main gap is that current systems lack a significant cognitive component for propagating the overall level of cybersecurity awareness. Specifically, we have identified the lack of a single point of overview, together with a rising level of entropy in cybersecurity reporting, which we believe is a consequence of the multitude of sources that may not be connected. This results both in information overload and in the already mentioned lack of common cybersecurity awareness. The main solution is to have a single interface that propagates the immediate cybersecurity awareness situation to the system administrator and other users who need this information. For this we have developed a dashboard, an early version of which is shown in <link linkend="F8-9">Figure <xref linkend="F8-9" remap="8.9"/></link>. It makes it easy to get an overview of all concurrent cybersecurity threats and vulnerabilities as well as a summarised threat level. Through the dashboard, the system administrator has direct access to self-healing strategies and suggestions, as well as to possible information sharing texts on newly found threats.</para>
<fig id="F8-9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-9">Figure <xref linkend="F8-9" remap="8.9"/></link></label>
<caption><para>The CS-AWARE visualization component.</para></caption>
<graphic xlink:href="graphics/ch008_fig009.jpg"/>
</fig>
<para>We are generating a single view of cybersecurity threats and vulnerabilities that shows all of the major threat types and the summarised threat level. These are shown over time to help understand the urgency and how the threat level is evolving, so that a threat can be mitigated in the best way at that time. A reduction in the time spent looking for a cybersecurity issue is worth many hours of post-mortem issue fixing and cleaning. Note that the dashboard will have a graphic that continuously shows the development over time in both size and colour, letting the system administrator act swiftly and become aware of cybersecurity events much faster than by going through heaps of internet pages to find a possible change.</para>
<para>The visualisation component has interfaces to the system and dependency analysis, the data analysis and pattern recognition, and the multi-language (NLP) support components, as well as to the self-healing and information sharing components, where information sharing with the cybersecurity communities serves the common good. In this way the visualisation component enhances cybersecurity awareness and helps system administrators keep their systems unaffected through a faster and better decision-making and self-healing process.</para>
</section>

<section class="lev2" id="sec8-4-6">
<title>8.4.6 Cybersecurity Information Exchange</title>
<para>The CS-AWARE cybersecurity information exchange (CIE) provides a dissemination point for cyber threat information (CTI) that CS-AWARE components have collected, analysed, identified and classified as &#8220;shareable&#8221;. It is the interface to external entities, such as Computer Emergency Response Teams (CERTs), Computer Security Incident Response Teams (CSIRTs) and other threat intelligence platforms to inform them about threats, sightings (i.e. an observation related to a threat) and mitigation actions. This information will be mainly produced by the CS-AWARE data analysis component or by external sources and enhanced by the former.</para>
<para>CTI is information that is constantly generated and shared among devices and departments within (especially large) organisations which have well-established procedures for appropriately handling sensitive, classified and personal information found within CTI. When CTI is about to be shared with external entities, several interoperability and security issues have to be confronted [23], which can be categorised according to the three layers depicted in <link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link>.</para>
<para>Although the <i>legal</i> framework might encourage or require the sharing of cyber-threat information, as the NIS directive does, several other legal requirements might prohibit or restrict the uncontrolled sharing of CTI. One of the main legal restrictions arises from the GDPR and relates to personal information being shared with external entities without the user&#8217;s consent. In the case of CTI, personal data might be part of the shared sightings, such as IP addresses or usernames of entities that have been identified as sources of malicious activity. CTI sharing with external entities should not impact privacy and personally identifiable information (PII); therefore, data anonymisation should take place, if necessary, prior to sharing CTI with external entities or making it public. However, certain data that under certain circumstances might be considered personal (e.g. IP addresses) are very important for the receiving parties to have: without them the information provided becomes useless, so such data should be excluded from any anonymisation processing. Moreover, based on Recital 49 of the GDPR, the processing of personal data by certain entities, such as CERTs and CSIRTs, strictly for the purposes of ensuring network security is permitted, as it constitutes a legitimate interest of the data controller.</para>
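The selective anonymisation described above, pseudonymising personal fields such as usernames while deliberately keeping operationally essential data such as IP addresses, can be sketched as follows. The field names and the salted-hash scheme are illustrative assumptions, not the CS-AWARE implementation:

```python
import hashlib

KEEP_FIELDS = {"src_ip"}      # operationally essential, excluded from anonymisation
PSEUDONYMISE = {"username"}   # personal data with no analytic value in the clear

def pseudonymise(value: str, salt: str = "per-deployment-secret") -> str:
    # One-way salted hash: yields a stable pseudonym that the
    # recipient cannot reverse to the original value.
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def sanitise_sighting(sighting: dict) -> dict:
    out = {}
    for field, value in sighting.items():
        if field in PSEUDONYMISE:
            out[field] = pseudonymise(value)
        else:
            out[field] = value   # includes KEEP_FIELDS such as src_ip
    return out

shared = sanitise_sighting({"src_ip": "203.0.113.9", "username": "jdoe"})
```

The salt would be kept secret per deployment so that two organisations cannot correlate pseudonyms; whether a given field is kept or pseudonymised is ultimately a legal and policy decision, not a technical one.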
<fig id="F8-10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-10">Figure <xref linkend="F8-10" remap="8.10"/></link></label>
<caption><para>CTI exchange interoperability layers.</para></caption>
<graphic xlink:href="graphics/ch008_fig10.jpg"/>
</fig>
<para>An organization&#8217;s <i>policy</i> should address issues related to information sharing, while well-established procedures and appropriately deployed measures will help avoid the leakage of classified or sensitive information. Data sanitisation [24] is one of the solutions that organisations should consider to ensure that no sensitive or classified information is disclosed to unauthorised entities while sharing CTI externally. Policy restrictions with regard to sharing should also be supported by appropriate technical measures.</para>
<para>On the <i>technical</i> layer, the adoption of standardised schemes for sharing cyber-threat information is deemed necessary to achieve semantic and technical interoperability. The STIX2 protocol is the information model and serialisation solution adopted by CS-AWARE for the communication and sharing of CTI.</para>
<para>STIX2 also supports data markings, which can facilitate the enforcement of policies regarding the sharing of information. More specifically, STIX2 supports statements (copyright, terms of use, ...) applied to the shared content, as well as the Traffic Light Protocol (TLP)<footnote id="fn_4" label="4"> <para>https://www.first.org/global/sigs/tlp/</para></footnote>, <footnote id="fn_5" label="5"> <para>https://www.enisa.europa.eu/topics/csirts-in-europe/glossary/considerations-on-the-traf</para></footnote> a set of designations used to ensure that sensitive information is shared with the appropriate audience by providing four options, as shown in <link linkend="F8-11">Figure <xref linkend="F8-11" remap="8.11"/></link>. Although TLP is optimized for human readability and person-to-person sharing rather than for automated sharing exchanges, its adoption in CS-AWARE will help restrict information sharing to specific entities or platforms and avoid any further unnecessary or unauthorized dissemination.</para>
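In STIX2, TLP markings are expressed as marking-definition objects that shared objects reference via the object_marking_refs property. The sketch below shows this shape; the IDs here are placeholders (the STIX 2.0 specification actually fixes one canonical marking-definition ID per TLP level), and the indicator pattern is invented:

```python
# Shape of a STIX 2.0 TLP data marking and of an object that references it.
# Both IDs below are placeholders for illustration only.
tlp_amber = {
    "type": "marking-definition",
    "id": "marking-definition--00000000-0000-0000-0000-000000000000",
    "created": "2017-01-20T00:00:00.000Z",
    "definition_type": "tlp",
    "definition": {"tlp": "amber"},
}

marked_indicator = {
    "type": "indicator",
    "id": "indicator--11111111-1111-1111-1111-111111111111",
    "labels": ["malicious-activity"],
    "pattern": "[url:value = 'http://example.com/malware']",
    # Ties the TLP level to the shared object:
    "object_marking_refs": [tlp_amber["id"]],
}
```

A receiving platform inspects object_marking_refs before redistributing content, e.g. refusing to forward TLP:AMBER objects outside the recipient organisation.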
<para>Considering the limitations of TLP, which cannot support fine-grained policies, the CS-AWARE information exchange component also adopted the Information Exchange Policy (IEP), a JSON-based framework developed by the FIRST IEP SIG (2016) [25]. IEP is not supported in the current version of STIX, yet STIX compatibility was considered in its design.</para>
<fig id="F8-11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-11">Figure <xref linkend="F8-11" remap="8.11"/></link></label>
<caption><para>The traffic light protocol.</para></caption>
<graphic xlink:href="graphics/ch008_fig11.jpg"/>
</fig>
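Because IEP is JSON-based, a policy can be illustrated directly. The sketch below approximates the shape of an IEP policy document; the field names and values are assumptions made for illustration, and the FIRST specification [25] should be consulted for the normative schema:

```python
import json

# Illustrative shape of a JSON IEP policy (field names are assumptions
# approximating the IEP 1.0 policy statements, not the normative schema).
iep_policy = {
    "id": "hypothetical-policy-001",
    "name": "CS-AWARE default sharing policy (example)",
    "version": "1.0",
    "tlp": "amber",                  # coarse level, as in the TLP figure above
    "encrypt-in-transit": "must",    # finer-grained constraint than TLP alone
    "permitted-actions": "internally-visible-actions",
    "attribution": "may",
}

print(json.dumps(iep_policy, indent=2))
```

This is what "fine-grained" means in practice: beyond the single TLP colour, the producer can state separate obligations for transport security, permitted downstream actions and attribution.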
</section>

<section class="lev2" id="sec8-4-7">
<title>8.4.7 System Self-Healing</title>
<para>Self-healing is described as the ability of systems to autonomously diagnose and recover from faults, transparently and within certain criteria. Although these criteria vary according to the system&#8217;s infrastructure, they often include requirements such as availability, reliability and stability [26]. Self-healing constitutes an important building block of the CS-AWARE architecture, which aims to assist LPA administrators in responding to identified vulnerabilities and high-risk threats by providing customised healing solutions or recommendations. The self-healing component is an innovative, fully supervised solution that uses the results of the analysis performed by the analysis component. The latter processes cyber threat information collected from external sources, internal logs and LPA architecture specifics, and produces knowledge about potential high-risk situations for a specific LPA. Based on this outcome, self-healing looks for the most appropriate mitigation solution among those provided by the external sources or found in the self-healing database of appropriate solutions. Supervision is defined as the degree of required human interaction concerning the feedback mechanism and the expansion of self-healing mechanisms [26]. Self-healing systems are categorised as fully supervised, semi-supervised or unsupervised. <link linkend="F8-12">Figure <xref linkend="F8-12" remap="8.12"/></link> provides an overview of the research work accomplished on self-healing properties as published in [27].</para>
<para>The composed mitigation rules aim to enhance the availability and overall security of the system while simultaneously reducing the workload required for system maintenance. Furthermore, the CS-AWARE self-healing component has the ability to autonomously diagnose and mitigate threats, while ensuring that the system&#8217;s administrator, who is always aware of the system&#8217;s behaviour, can prevent configuration changes that may raise incompatibility issues.</para>
<fig id="F8-12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-12">Figure <xref linkend="F8-12" remap="8.12"/></link></label>
<caption><para>Properties of self-healing research.</para></caption>
<graphic xlink:href="graphics/ch008_fig12.jpg"/>
</fig>
<para>The self-healing component also interacts with the visualisation component for the following purposes:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>inform administrators about mitigation solutions applied to LPA systems</para></listitem>
<listitem><para>request LPA administrator permission to apply a solution</para></listitem>
<listitem><para>provide recommendations about how to confront an identified high-risk situation or vulnerability</para></listitem>
</itemizedlist>
<para>The self-healing component is fully supervised, always allowing the LPA administrator to decide whether or not to apply the suggested mitigation rule. It utilises the results of the data analysis component provided in STIX2. Once self-healing receives input data from the analysis component, it identifies the threat type and composes the proper mitigation rule autonomously. Rules composed by the self-healing module incorporate three alternatives:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Inform LPAs about which acts to perform in order to avoid the threat or reduce the impact (recommendation)</para></listitem>
<listitem><para>Ask for the LPA administrator&#8217;s permission in order to apply the rule automatically</para></listitem>
<listitem><para>Automatically apply the rule, provided that the administrator has set this preference through the visualisation component</para></listitem>
</itemizedlist>
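The three alternatives above can be viewed as a dispatch on the administrator's configured preference. The following is a minimal sketch with invented names, not the actual CS-AWARE self-healing logic:

```python
from enum import Enum

class Mode(Enum):
    RECOMMEND = 1   # only inform the LPA which actions to take
    ASK = 2         # request explicit permission before applying
    AUTO = 3        # apply automatically (preference set via visualisation)

def handle_rule(rule: str, mode: Mode, granted: bool = False) -> str:
    if mode is Mode.RECOMMEND:
        return f"recommendation: {rule}"
    if mode is Mode.ASK:
        return f"applied: {rule}" if granted else "awaiting permission"
    return f"applied: {rule}"   # Mode.AUTO

print(handle_rule("block src 203.0.113.9", Mode.ASK))  # awaiting permission
```

In all three modes the administrator remains in the loop through the visualisation component, which is what makes the component fully supervised.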
<para>The self-healing component consists of three main and three auxiliary subcomponents, whose interaction is shown in <link linkend="F8-13">Figure <xref linkend="F8-13" remap="8.13"/></link>. The main subcomponents were defined in the CS-AWARE framework, while the auxiliary subcomponents were defined during the design phase to facilitate the composition and application of mitigation rules. The self-healing policies subcomponent is a database containing records of potential threats that might be detected in an LPA system and the corresponding mitigation rules. The mitigation rules are stored in a human-readable generic format as well as in a machine-readable format. Moreover, the self-healing policies subcomponent includes entries which contain the CLI syntax of the LPAs&#8217; central nodes.</para>
<para>The decision engine initiates the composition of a rule. It searches the self-healing policies database for a matching rule and, if it finds a match, initiates a rule in human-readable format. The security rules composer accepts input from the decision engine in human-readable format and converts it to machine-readable format based on the CLI syntax of the affected node. The parser parses the STIX package and extracts the data needed for composing mitigation rules, while the rule applicator is responsible for enriching the STIX package with the mitigation rule, sending data to the visualisation component and applying the rule on the remote machine. In case the mitigation rule must be applied remotely, the self-healing component connects to the remote node and applies the rule automatically, provided that the LPA administrator has given permission. Finally, the logger writes a log entry in the log file containing information about how the mitigation rule was applied.</para>
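The interaction just described can be summarised as a pipeline: the parser extracts data from the STIX package, the decision engine matches it against the policies database, the rules composer translates the match into the node's CLI syntax, and the rule applicator applies and logs it. The sketch below uses hypothetical stand-ins for each stage:

```python
def parser(stix_package: dict) -> dict:
    # Extract the data needed for rule composition from the STIX package.
    return {"threat": stix_package.get("threat_type"),
            "node": stix_package.get("node")}

# Self-healing policies DB (toy): threat type -> human-readable rule.
POLICIES = {
    "brute-force-ssh": "block repeated SSH login failures from source",
}

def decision_engine(extracted: dict):
    return POLICIES.get(extracted["threat"])   # matching rule or None

def rules_composer(human_rule: str, node: str) -> str:
    # Convert to machine-readable form in the node's CLI syntax (invented).
    return f"{node}$ firewall add-rule '{human_rule}'"

def apply_and_log(machine_rule: str, permitted: bool, log: list) -> None:
    # Rule applicator + logger: apply only with administrator permission.
    log.append(f"{'APPLIED' if permitted else 'SKIPPED'}: {machine_rule}")

log = []
extracted = parser({"threat_type": "brute-force-ssh", "node": "gateway-1"})
rule = decision_engine(extracted)
if rule:
    apply_and_log(rules_composer(rule, extracted["node"]), permitted=True, log=log)
```

The enrichment of the STIX package and the hand-off to the visualisation component are omitted here for brevity.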
<fig id="F8-13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F8-13">Figure <xref linkend="F8-13" remap="8.13"/></link></label>
<caption><para>Self-healing subcomponents activity diagram.</para></caption>
<graphic xlink:href="graphics/ch008_fig13.jpg"/>
</fig>
</section>

<section class="lev1" id="sec8-5">
<title>8.5 Discussion</title>
<para>The demand for cybersecurity tools is strong. The alarming rate of purposeful cyber-attacks forces authorities at different levels to move beyond purely reactive operations. At the same time, new regulatory and legal requirements implemented by the highest-level authorities affect how systems can be operated and how data can be handled at the regional level. In Europe, the NIS directive in particular is concerned with how the most critical services for our society handle cybersecurity, while the GDPR protects an individual&#8217;s information and privacy. This has caused action and concern among private companies, but it also affects many functions of local public administrations. Although local public administrations have not been the direct targets of malware attacks, they are crucial providers of services governing our everyday lives and heavily influence society at the regional level. The CS-AWARE project has proven to be even more current and relevant than we could have anticipated at the time of writing the proposal.</para>
<para>The first year of the project has been successful. Two rounds of dependency analysis workshops at our piloting municipalities have been completed and have provided extensive insight into the operations of local public administrations. We have gained valuable information that has influenced and guided the development and implementation of the CS-AWARE framework. We have seen that there are substantial differences in LPA operations between countries, even at the European level. Besides the obvious differences in language at the national and regional levels in Europe, we have seen that the rules and regulations guiding LPA operations differ substantially between countries and may affect how cybersecurity tools like the CS-AWARE toolset can be deployed and operated. We have also seen, however, that the CS-AWARE concept and framework are well suited to handle these differences due to the flexible socio-technological analysis at their base. We believe that we have proven the framework to be valid; it is now being modified and adjusted based on knowledge and circumstances derived from the LPA use cases. The project will continue with the framework implementation and integration, and an extensive piloting phase towards the end of the project will allow us to draw broader conclusions about the usefulness of cybersecurity awareness technologies in the day-to-day operations of local public administrations.</para>
<para>An important lesson we have already learned at this stage is how important collaboration and information sharing are. Cooperation and collaboration are essential and will become even more relevant in the future, since there are many actors at the local public administration level: small cities and communes with individually centralised organisations, each distributing responsibilities among external experts. The larger the commune, the greater the silo effect appears to be, to the point where even a single service forms an isolated unit that does not collaborate directly with other city services. Information sharing is therefore a key factor in generating and understanding the full picture of the internal infrastructures. While our information sharing efforts were focused on sharing cybersecurity information with external authorities, such as the NIS competent authorities listed in <link linkend="F8-2">Figure <xref linkend="F8-2" remap="8.2"/></link>, we have seen that in practice sharing with different actors on the local level (other departments or suppliers) may already have a significant positive effect on cybersecurity. This is one aspect that will be looked into more closely during the piloting phase of CS-AWARE. We are investigating this even further in another H2020 project, CinCan (Continuous Integration for the Collaborative Analysis of Incidents)<footnote id="fn_6" label="6"> <para>https://cincan.io/index.html</para></footnote>, where we also try to promote the sharing and reporting of vulnerability information between different countries&#8217; CERT organizations.</para>
<para>We feel that CS-AWARE is not just an individual project, but a continuous path we need and have now started to follow. Technology touches every aspect of our lives and we need tools that allow us to safely utilise them by covering all legal security requirements.</para>
</section>

<section class="lev1" id="sec8-6">
<title>8.6 Conclusion</title>
<para>In this chapter we have presented the EU H2020 project CS-AWARE (running from 2017 to 2020), which aims to provide cybersecurity awareness technology to local public administrations. CS-AWARE has several unique features, such as the socio-technological system and dependency analysis at the core of the technology, which allows a fine-grained understanding of LPA cybersecurity requirements on a per-case basis. Furthermore, the strong focus on automated incident detection and classification, as well as our efforts towards system self-healing and cooperation/collaboration with relevant authorities, push the current state of the art and are in line with cybersecurity efforts on a European and global level.</para>
<para>In light of a substantially changing legal cybersecurity framework in Europe, we have shown that CS-AWARE is an enabling technology for many cybersecurity requirements imposed by these regulations. For example, information sharing of cybersecurity incidents is a requirement of the NIS directive for organizations classified as critical infrastructures, and may in future be extended to other sectors as well. Similarly, the identification of personal information and information flows within an organization&#8217;s systems, as done in the system and dependency analysis of CS-AWARE, is a key requirement for GDPR compliance.</para>
<para>We have detailed the CS-AWARE framework and have shown how the different building blocks are implemented in CS-AWARE. We have discussed the first results of the project, especially the outcomes of two rounds of system and dependency analysis workshops in the piloting municipalities of CS-AWARE, and we have discussed how those results are influencing the framework implementation and integration in preparation for the piloting phase of the project. Our initial results show the necessity of awareness technologies in LPAs. Administrators and system operators are looking for solutions that improve awareness of cybersecurity incidents on a system level and assist with prevention or mitigation of such incidents. We have seen a specific need for awareness as well as improved collaboration and cooperation between different departments or suppliers, an area that is often neglected but has significant potential for introducing cybersecurity risks.</para>
<para>CS-AWARE will continue with the further development of the technological base and the integration of the components that form the CS-AWARE framework. An extensive piloting phase towards the end of the project will give insights into the practical feasibility and relevance of the awareness-generating technologies, and allow us to evaluate how both system administrators and system users can benefit from CS-AWARE. The piloting phase will be accompanied by a social-sciences-based study to evaluate how the CS-AWARE technologies are accepted by their users in day-to-day operations. At the same time, we will continue to promote CS-AWARE among potential users, implementers and authorities to bridge the gap between legal and regulatory requirements and the actual technology that can fulfil those requirements. In an era where it is thought that cybersecurity can only be effective through cooperation and collaboration, constant interaction between the main actors is important to achieve a comprehensive and holistic solution.</para>

</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>The authors would like to thank the EU H2020 project CS-AWARE (grant number 740723) for supporting the research presented in this chapter.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Europol, &#8220;Internet Organized Crime Threat Assessment (IOCTA) 2018,&#8221; Online, 2018.</para>
<para>[2] The European Parliament and the Council of the European Union, Directive (EU) 2016/1148 of the European Parliament and of the Council <i>of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union</i>, 2016.</para>
<para>[3] The European Parliament and the Council of the European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council <i>of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC</i>, 2016.</para>
<para>[4] European Commission and High Representative of the European Union for Foreign Affairs and Security Policy, <i>Cybersecurity Strategy of the European Union: An Open, Safe and Secure Cyberspace</i>, JOIN(2013) 1 final, 2013.</para>
<para>[5] P. Checkland, &#8220;Systems Thinking, Systems Practice,&#8221; Wiley [rev 1999 ed], 1981.</para>
<para>[6] P. Checkland, &#8220;Soft Systems in Action,&#8221; Wiley [rev 1999 ed], 1990.</para>
<para>[7] P. Pietikainen, K. Karjalainen, J. Roning and J. Eronen, &#8220;Socio-technical Security Assessment of a VoIP System,&#8221; in 2010 <i>Fourth International Conference on Emerging Security Information, Systems and Technologies</i>, 2010.</para>
<para>[8] T. Schaberreiter, K. Kittila, K. Halunen, J. Roning and D. Khadraoui, &#8220;Risk Assessment in Critical Infrastructure Security Modelling Based on Dependency Analysis,&#8221; in <i>Critical Information Infrastructure Security: 6th International Workshop, CRITIS 2011, Lucerne, Switzerland, September 8&#8211;9, 2011, Revised Selected Papers</i>, 2011.</para>
<para>[9] J. Eronen and J. Roning, &#8220;Graphingwiki &#8211; a semantic wiki extension for visualising and inferring protocol dependency,&#8221; in <i>First Workshop on Semantic Wikis &#8211; From Wiki To Semantics</i>, 2006.</para>
<para>[10] J. Jiang, J. Yu and J. Lei, &#8220;Finding influential agent groups in complex multiagent software systems based on citation network analyses,&#8221; <i>Advances in Engineering Software</i>, pp. 57&#8211;69, 2015.</para>
<para>[11] L. Saitta and J.-D. Zucker, Abstraction in artificial intelligence and complex systems, Springer, 2013.</para>
<para>[12] J. Sokolowski, C. Turnitsa and S. Diallo, &#8220;A conceptual modeling method for critical infrastructure modeling,&#8221; <i>in Simulation Symposium</i>, 2008. <i>ANSS 2008. 41st Annual</i>, 2008.</para>
<para>[13] J. Kramer, &#8220;Is abstraction the key to computing?,&#8221; <i>Communications of the ACM</i>, pp. 36&#8211;42, 2007.</para>
<para>[14] Y.-L. Chen and Q. Li, Modeling and Analysis of Enterprise and Information Systems: from requirements to realization, Springer, 2009.</para>
<para>[15] P. Clemente, J. Rouzaud-Cornabas and C. Toinard, From a generic framework for expressing integrity properties to a dynamic mac enforcement for operating systems, Springer, 2010, pp. 131&#8211;161.</para>
<para>[16] S. Bansal and S. Kagemann, &#8220;Integrating big data: A semantic extract-transform-load framework,&#8221; <i>Computer</i>, pp. 42&#8211;50, 2015.</para>
<para>[17] NIST National Institute of Standards and Technology, &#8220;Framework for Improving Critical Infrastructure Cybersecurity,&#8221; 2015.</para>
<para>[18] CINI Cyber Security National Laboratory, &#8220;Italian Cyber Security Report 2015 &#8211; A National Cyber Security Framework,&#8221; 2016.</para>
<para>[19] OASIS Committee Specification 01, STIX Version 2.0. Part 1: STIX Core Concepts, R. Piazza, J. Wunder and B. Jordan, Eds., 2017.</para>
<para>[20] D. S. Smyth and P. B. Checkland, &#8220;Using a Systems Approach: The Structure of Root Definitions,&#8221; <i>Journal of Applied Systems Analysis</i>, vol. 6, no. 1, 1976.</para>
<para>[21] M. Cetto, C. Niklaus, A. Freitas and S. Handschuh, &#8220;Graphene: Semantically-Linked Propositions in Open Information Extraction,&#8221; in <i>Proceedings of the 27th International Conference on Computational Linguistics (COLING)</i>, New Mexico, USA, 2018.</para>
<para>[22] J. E. Sales, A. Freitas, B. Davis and S. Handschuh, &#8220;A Compositional-Distributional Semantic Model for Searching Complex Entity Categories,&#8221; in <i>5th Joint Conference on Lexical and Computational Semantics (*SEM)</i>, Berlin, 2016.</para>
<para>[23] C. S. Johnson, M. L. Badger, D. A. Waltermire, J. Snyder and C. Skorupka, &#8220;Guide to Cyber Threat Information Sharing,&#8221; NIST Special Publication 800&#8211;150, <i>National Institute of Standards and Technology</i>, 2016.</para>
<para>[24] M. Bishop, B. Bhumiratana, R. Crawford and K. Levitt, &#8220;How to sanitize data?,&#8221; in <i>13th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises</i>, Modena, 2004.</para>
<para>[25] Forum of Incident Response and Security Teams (FIRST), &#8220;Information Exchange Policy Framework, Version 1.0&#8221;.</para>
<para>[26] C. Schneider, A. Barker and S. Dobson, &#8220;A survey of self-healing systems frameworks,&#8221; <i>Software: Practice and experience</i>, vol. 45, no. 10, pp. 1378&#8211;1394, 2014.</para>
<para>[27] H. Psaier and S. Dustdar, &#8220;A survey on self-healing systems: approaches and systems,&#8221; <i>Computing</i>, vol. 91, no. 1, p. 47, 2010.</para>
</section>
</chapter>

<chapter class="chapter" id="ch09" label="9" xreflabel="9">
<title>Complex Project to Develop Real Tools for Identifying and Countering Terrorism: Real-time Early Detection and Alert System for Online Terrorist Content Based on Natural Language Processing, Social Network Analysis, Artificial Intelligence and Complex Event Processing</title>
<para><b>Monica Florea<sup>1</sup>, Cristi Potlog<sup>1</sup>, Peter Pollner<sup>2</sup>, Daniel Abel<sup>3</sup>, Oscar Garcia<sup>4</sup>, Shmuel Bar<sup>5</sup>, Syed Naqvi<sup>6</sup> and Waqar Asif<sup>7</sup></b></para>
<para><sup>1</sup>SIVECO Romania SA, Romania</para>
<para><sup>2</sup>MTA-ELTE Statistical and Biological Physics Research Group, Hungary</para>
<para><sup>3</sup>Maven Seven Solutions Zrt., Hungary</para>
<para><sup>4</sup>Information Catalyst, Spain</para>
<para><sup>5</sup>IntuView, Israel</para>
<para><sup>6</sup>Birmingham City University, United Kingdom</para>
<para><sup>7</sup>City, University of London, United Kingdom</para>
<para>E-mail: Monica.Florea@siveco.ro; cristi.potlog@siveco.ro; pollner@angel.elte.hu; daniel.abel@maven7.com; oscar.garcia@informationcatalyst.com; sbar@intuview.com;</para>
<para>Syed.Naqvi@bcu.ac.uk; Waqar.Asif@city.ac.uk</para>
<para>In recent decades, the importance of social media has increased enormously, creating new communication channels and even changing the way people communicate. These trends come with a disadvantage: they create a scenario in which messages containing valuable data about critical threats like terrorism and criminal activity are ignored, due to the sheer inability to process &#8211; much less analyze &#8211; the vast amount of available data. Terrorism has a very real and direct impact on the basic human rights of victims, such as the right to life, liberty and physical integrity, often with devastating consequences.</para>
<para>In this context, the RED-Alert project was designed to build a complete software toolkit to support LEAs in the fight against the use of social media by terrorist organizations for conducting online propaganda, fundraising, recruitment and mobilization of members, planning and coordination of actions, as well as data manipulation and misinformation. The project aims to cover a wide range of social media channels used by terrorist groups to disseminate their content, which will be analysed by the RED-Alert solution to support LEAs in taking coordinated action in real time, with preserving the privacy of citizens as a primary condition.</para>

<section class="lev1" id="sec9-1">
<title>9.1 Introduction</title>
<para>Radicalisation leading to violent extremism and terrorism is not a new phenomenon, but the way it is now spreading is increasingly alarming and extends to the EU as a whole. As a matter of urgency, European and Member States&#8217; policies must evolve to match the scale of the challenge and offer effective responses [1].</para>
<para>In recent years Europe has faced new challenges in designing and building new tools and in taking advantage of technological advancements to prevent terrorist attacks. The Europol report from 2017 shows that in 2016 a total of 142 failed, foiled and completed attacks were reported. In 2017, 16 attacks struck eight different Member States, while more than 30 plots were foiled.<footnote id="fn_1" label="1"> <para>https://ec.europa.eu/home-affairs/sites/homeaffairs/files/what-we-do/policies/european-agenda-security/20180613_final-report-radicalisation.pdf</para></footnote></para>
<para>The RED-Alert project is aligned with the SECURITY Work Programme 2016&#8211;2017 call objectives, which target improved investigation capabilities, faster crime solving, reduced societal distress, lower investigative costs, a smaller impact on victims and their relatives, and the prevention of further terrorist endeavours.<footnote id="fn_2" label="2"> <para>http://ec.europa.eu/research/participants/data/ref/h2020/wp/2016_2017/main/h2020-wp-1617-security_en.pdf</para></footnote></para>
<para>The RED-Alert project is an H2020 European research and development project that uses analytics techniques such as Natural Language Processing (NLP), Semantic Multimedia Analysis (SMA), Social Network Analysis (SNA) and Complex Event Processing (CEP) to address LEAs&#8217; needs in terms of prevention and action regarding terrorist social media online activity.</para>


<para>The novelty of the project lies in combining these technologies, for the first time, in an integrated solution that will be validated in the context of five LEAs.</para>
<para>The consortium was designed to bring together all the capabilities and expertise required to sustain the development of the RED-Alert solution:</para>
<para><b>Five Law Enforcement Agencies (LEAs)</b>: the Protection and Guard Service from the Republic of Moldova (SPPS), the Guardia Civil from Spain (GUCI), the Ministry of Public Security &#8211; Israel National Police (MOPS-INP), the Metropolitan Police Service from the UK (SO15) and the Protection and Guard Service from Romania (SPP);</para>
<para><b>Five Industrial innovation champions (of which four SMEs)</b>: SIVECO Romania SA (SIV), Intu-View Ltd (INT), Usatges Bcn 21 Sl (INSKT), Maven Seven Solution Technology (MAV), and Information Catalyst for Enterprise Ltd (ICE);</para>
<para><b>Four Academic &amp; Research Organizations</b>: Interdisciplinary Center Herzliya (ICT), Eotvos Lorand Tudomanyegyetem (ELTE), City University Of London (CITY) and Birmingham City University (BCU);</para>
<para><b>One Regulatory association</b>: Malta Information Technology Law Association (MITLA).</para>
<para>The project duration is 36 months and started in June 2017.</para>
</section>

<section class="lev1" id="sec9-2">
<title>9.2 Research Challenges Addressed</title>
<para>The main challenge in the domain of terrorism and radicalization research is that the underlying data sources and data usages are constantly and rapidly evolving: terrorist groups are moving away from structured written blogs and forum posts and are instead using social media to propagate URLs that redirect to repositories of propaganda videos. Processes for detecting suspicious content can therefore become outdated quickly, and it is becoming essential to automatically adapt the system to evolving media channel layouts and interfaces, as well as to changing user behaviours.</para>
<para>To support the project&#8217;s objectives, the following key performance indicators (KPIs) are to be reached by the end of the project: seven social media channels mined for content, 10 languages supported for analysis, improved accuracy and usability of tools within the context of data privacy, as well as extended real-time and collaborative capabilities and support for further development.</para>
<para>To address these KPIs, the RED-Alert project combines relevant software components from different partners. At the same time, the challenge and the innovation lie in combining technologies such as CEP, SNA and NLP to assess social features in the communications used by terrorist organizations. This implies harmonising theories, tools and techniques from cognitive science, communications, computational linguistics, discourse processing, language studies and social psychology. Moreover, so that system performance can be adapted for each component, the project implements a meta-learning process that will assist the processes defined by the SNA, CEP and NLP components.</para>
<para>Another major challenge that needs to be addressed by the project is preserving the privacy of citizens who use online social networking platforms. Given the public concerns surrounding social media data collection and the new GDPR, applicable from 25 May 2018, it has become obvious that Internet service providers struggle to balance user privacy against national security. The only way forward is to preserve privacy when processing the data and, at the same time, to take advantage of the latest technological advancements when designing the security part of the system; in this way, malicious content and the corresponding actors can be tracked while the privacy of innocent citizens is preserved. The RED-Alert system will include privacy-preserving mechanisms allowing the capture, processing and storage of social media data in accordance with applicable European and national legislation.</para>
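<para>As a minimal sketch of one such privacy-preserving mechanism, the example below pseudonymizes user identifiers with a keyed hash before storage, so that activity can still be linked for analysis while the raw identity stays protected. The function name, key handling and truncation length are illustrative assumptions, not the project&#8217;s actual implementation:</para>

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a personal identifier with a stable pseudonym.

    HMAC-SHA256 keyed by the data controller: the same identifier
    always maps to the same pseudonym (so links between posts are
    preserved for analysis), but the mapping cannot be reversed
    without the key.
    """
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

key = b"example-secret-key"        # illustrative; a real key would come from a KMS
alias_a = pseudonymize("@some_user", key)
alias_b = pseudonymize("@some_user", key)
assert alias_a == alias_b          # stable: analysis can still link activity
assert "@some_user" not in alias_a # the raw handle never reaches storage
```

<para>Because the mapping is keyed rather than a plain hash, an adversary without the key cannot confirm a guess by hashing candidate identifiers.</para>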
<para>RED-Alert will face the additional challenge of allowing collaboration between different LEAs from different countries, with different privacy laws and trust levels, by implementing a privacy-preserving tool to mine the data.</para>
<para>There is a growing understanding that innovation, creativity and competitiveness must be approached from a &#8220;design-thinking&#8221; perspective &#8211; namely, a way of viewing the world and overcoming constraints that is at once holistic, interdisciplinary, integrative, innovative, and inspiring. Privacy, too, must be approached from the same design-thinking perspective. Privacy must be incorporated into software systems and technologies, by default, becoming integral to organizational priorities, project objectives, design processes, and planning operations [2].</para>
</section>

<section class="lev1" id="sec9-3">
<title>9.3 Architecture Overview</title>
<para>The vision of the RED-Alert project is to develop and validate a real-time system able to facilitate the timely identification of terrorism-related content by summarizing large volumes of data from social media and other online sources (such as blogs and forums).</para>
<para>The RED-Alert components shown in <link linkend="F9-1">Figure <xref linkend="F9-1" remap="9.1"/></link> (NLP, SMA, SNA, CEP, Data Anonymization, Data Visualisation and ML) will be integrated in three separate layers, based on the Lambda Architecture<footnote id="fn_3" label="3"> <para>http://lambda-architecture.net/</para></footnote> concepts defined by [3], which is designed to handle massive quantities of data by taking advantage of both batch and stream methods for real-time data processing, as follows:</para>
<fig id="F9-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-1">Figure <xref linkend="F9-1" remap="9.1"/></link></label>
<caption><para>Dynamic learning capabilities of the systems to update keywords, vector spaces, rule patterns, algorithms and models.</para></caption>
<graphic xlink:href="graphics/ch009_fig001.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>the &#8220;speed&#8221; layer, which includes data acquisition components for processing data streams in real time by means of data collection (social media capture, web crawling, LEA &#8220;raw&#8221; content), data filtering (pulling text data from a message queue, normalizing and extracting the required meta-data), data enrichment (multimedia content analysis), and data privacy (anonymization of text and image data);</para></listitem>
<listitem><para>the &#8220;batch&#8221; layer, which integrates the predictive models (based on CEP) that will be used by the pattern detection features within the analysis module. Due to the changing nature of the facts and behaviours, the set of stored models should be periodically re-trained with the new data arriving in the system. This is usually a resource-intensive task that cannot be performed in real time and should instead be scheduled as a recurring batch job;</para></listitem>
<listitem><para>and the &#8220;service&#8221; layer, which integrates the visual analytics gateway that will be in charge of presenting the aggregated data, metrics and events configured by users, who can set up the rules or conditions for triggering alerts. This will be used directly by the rules engine to determine whether the conditions exist for a particular event type. The layer will also offer a Web Service API allowing third parties or LEAs in-house developers to build external components on top of the RED-Alert integrated solution.</para></listitem>
</itemizedlist>
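<para>The interplay of the three layers can be sketched in a few lines of illustrative Python; the keyword-counting view and all class names are hypothetical simplifications, not actual RED-Alert components:</para>

```python
from collections import defaultdict

class BatchLayer:
    """Recomputes a complete view from the full master dataset (slow, accurate)."""
    def __init__(self):
        self.master = []
        self.view = defaultdict(int)
    def append(self, event):
        self.master.append(event)
    def recompute(self):
        self.view = defaultdict(int)
        for e in self.master:
            self.view[e["keyword"]] += 1

class SpeedLayer:
    """Holds an incremental view of events not yet absorbed by a batch run."""
    def __init__(self):
        self.view = defaultdict(int)
    def update(self, event):
        self.view[event["keyword"]] += 1
    def reset(self):
        self.view.clear()

class ServingLayer:
    """Answers queries by merging the batch view with the real-time view."""
    def __init__(self, batch, speed):
        self.batch, self.speed = batch, speed
    def query(self, keyword):
        return self.batch.view[keyword] + self.speed.view[keyword]

# Every incoming event is sent to both layers; a scheduled batch run
# recomputes the batch view and resets the speed view in one step.
batch, speed = BatchLayer(), SpeedLayer()
serving = ServingLayer(batch, speed)
for kw in ("propaganda", "propaganda", "recruitment"):
    event = {"keyword": kw}
    batch.append(event)
    speed.update(event)
assert serving.query("propaganda") == 2
batch.recompute()
speed.reset()
assert serving.query("propaganda") == 2  # same answer after the batch run
```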
<para><link linkend="F9-2">Figure <xref linkend="F9-2" remap="9.2"/></link> shows the designed Architecture for RED-Alert. In this multilayered architecture, application components are grouped into logical layers, namely:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Front-end Layer &#8211; grouping components and functionalities that face the end-users of the system, with the role of getting and presenting data, displaying alerts, and allowing the users to configure the system and administrators to monitor it;</para></listitem>
<listitem><para>Back-end Layer &#8211; grouping the core modules and data processing components that service the system;</para></listitem>
<listitem><para>Integration Layer &#8211; grouping inward middleware services that interconnect the components of the system, as well as outward facing APIs that facilitate connections with other systems;</para></listitem>
<listitem><para>Data Storage Layer &#8211; grouping database management systems (both relational and non-relational) that handle the storage of data needed by the system.</para></listitem>
</itemizedlist>
<para>This approach to architecture, described above, attempts to balance latency, throughput, and fault-tolerance by using batch processing to provide comprehensive and accurate views of historical operational batch data, while simultaneously using real-time stream processing to provide views of online data.</para>
<fig id="F9-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-2">Figure <xref linkend="F9-2" remap="9.2"/></link></label>
<caption><para>Layered application architecture.</para></caption>
<graphic xlink:href="graphics/ch009_fig002.jpg"/>
</fig>
<para>The &#8220;speed&#8221; layer sacrifices throughput as it aims to minimize latency by providing real-time views into the most recent data. The &#8220;batch&#8221; layer pre-computes results using a distributed processing system that can handle very large quantities of data. Output from the &#8220;batch&#8221; and &#8220;speed&#8221; layers are stored in the &#8220;service&#8221; layer, which responds to ad-hoc queries by returning pre-computed views or building views from the processed data.</para>
<para>&#8220;Privacy by Design&#8221; focuses on maximizing privacy and data protection by embedding safeguards across the design and development of software systems, services or processes, taking privacy and data protection considerations into account from the outset and throughout the whole lifecycle, rather than as a remedial afterthought. Such safeguards should be built into the core of products, services or processes and treated as a default setting, not only in the technologies themselves but also in the operating systems, network infrastructures, work processes and management structures that use them [4].</para>
</section>

<section class="lev1" id="sec9-4">
<title>9.4 Results</title>
</section>

<section class="lev2" id="sec9-4-1">
<title>9.4.1 Natural Language Processing Module (NLP)</title>
<para>Ever since the Tower of Babel, the human race has taken recourse to translation to bridge the gap between languages, cultures, societies and nations. Translation serves many purposes: it enables us to broaden the scope of our cultural perspective, to see the world in a way that others &#8211; friends and foes &#8211; do, to retrieve ancient knowledge that, otherwise, would be lost to mankind and to communicate between people on a day to day basis.</para>
<para>However, in a global environment challenged by enormous amounts of information, a challenge has arisen that cannot be solved by translation. This is the need to identify affinities and dis-affinities between semantic units in different languages in order to normalize streams of information and mine the &#8220;meaning&#8221; within them, regardless of their original language. When we look for information or wish to generate alerts &#8211; particularly in domains that are global &#8211; we do not want to be restricted to streams of information in one language; when we are interested in information &#8211; be it alerts on terrorism, fraud, cyber attacks or financial developments &#8211; we do not care whether the origin is in English, French, Arabic, Russian or Chinese. The need, therefore, is for technology that scans the entire gamut of information, identifies the language and the language register of the texts, performs domain and topic categorization, and matches the information conveyed in different languages to create normalized data for assessing the scope and nature of a problem.</para>
<para>The problem facing automated extraction of meaning from language is not restricted to translation between languages but also arises within languages. What we call a &#8220;language&#8221; is frequently a political definition and not one based on linguistic reality. Some &#8220;languages&#8221; are actually groups of &#8220;dialects&#8221; that in other cases would be defined as separate languages. The decision to call Swedish, Danish and Norwegian separate languages on the one hand, and Moroccan, Libyan, Saudi Arabian and Egyptian all &#8220;Arabic&#8221; on the other, is political, not linguistic. Even within the same language register, words, quotations, idioms or historic references can be &#8220;polysemic&#8221;: they have different meanings according to the domain and the context of the surrounding text. A verse in the Quran may mean one thing to a moderate or mainstream Muslim and the exact opposite to a radical.</para>
<para>Methods to deal with this problem have generally been based on multilingual dictionaries that enable keyword spotting (given a keyword in one language, the search engine can add the nominally corresponding terms in other languages) or on automated translation of texts and application of the search criteria in the target language. The limitations of such methods are obvious: a word in one language has many &#8220;translations&#8221;, and not all of them may be even remotely related to the meaning the user is interested in.</para>
<para>In 1949 the cryptologist Warren Weaver wrote a memorandum on automated translation using computer technology. Weaver suggested the analogy of individuals living in a series of tall closed towers, all erected over a common foundation. When they try to communicate with one another, they shout back and forth, but cannot make the sound penetrate even the nearest towers. But when an individual descends his tower, he finds himself in a great open basement, common to all the towers. Here he establishes easy and useful communication with the persons who have also descended from their towers. Thus, he suggested &#8220;... to descend, from each language, down to the common base of human communication &#8211; the real but as yet undiscovered universal language &#8211; and then re-emerge by whatever particular route is convenient&#8221; [5]. In this description, Weaver touched &#8211; without calling it by name &#8211; on the approach that we are suggesting: semantic normalization of statements in different languages according to domain-specific ontologies.</para>
<para>This solution is based on emulating the &#8220;intuitive&#8221; links that domain experts find between concatenations of lexical occurrences and appearances in a document and conclusions regarding the authorship, inner meaning and intent of the document. In essence, this approach looks at a document as a holistic entity and deduces, from combinations of statements, meanings which may not be apparent from any one statement. These meanings constitute the &#8220;hermeneutics&#8221; of the text, which is manifest to the initiated (a domain specialist or a follower of the political stream that the document represents) but a closed book to the outsider. The crux of this concept is to extract not only the prima facie identification of a word or string of words in a text, but to expand the identification to include implicit context-dependent and culture-dependent information &#8211; the &#8220;hermeneutics&#8221; of the text. Thus, a word or quote in a text may &#8220;mean&#8221; something that even contradicts the ostensible definition of that text.</para>
<para>The meanings that are represented in one language by one word may be represented in other languages by completely different lexemes (words). &#8220;Idea Analysis&#8221; or &#8220;Meaning Mining&#8221; is the ability to extract from a text the hermeneutics (interpretation) that is not obvious to the non-initiated reader. We use &#8220;Artificial Intuition&#8221; technology for this purpose. Artificial Intuition is based on algorithms that apply, to unstructured input texts, the aggregated comprehension of seasoned subject matter experts regarding texts of the same domain used in training. Humans reach &#8220;intuitive&#8221; conclusions &#8211; even from perfunctory reading &#8211; regarding the authorship and intent of a given text, subconsciously inferring them from previous experience with similar texts or from extra-linguistic knowledge relevant to the text. After accumulating more information through other features (statements, spelling and references) in the text, they either strengthen their confidence in the initial interpretation or change it. These intuitive conclusions are part of what the Nobel Laureate Prof. Daniel Kahneman called &#8220;fast thinking&#8221; &#8211; a judgment process that operates automatically and quickly, with little or no effort and no sense of voluntary control [6].</para>
<para>We have approached this problem by combining language-specific and language-register-specific NLP with domain-specific ontologies. The technology extracts such implicit meaning &#8211; the hermeneutics of the text &#8211; by employing the relationship between lexical instances in the text and an ontology: a graph of unique language-independent concepts and entities that defines the precise meaning and features of each element and maps the semantic relationships between them. As a result of these insights, the process of disambiguating meaning in texts is based on a number of stages:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Identification of the &#8220;register&#8221; of the language. The register may represent a certain period of the language, dialects, social strata, etc. In today&#8217;s global world, however, it is not enough to identify languages; the world is replete with &#8220;hybrid languages&#8221; (e.g. &#8220;Spanglish&#8221; written and spoken by Hispanics in the US; &#8220;Frarabe&#8221; written and spoken by people of Lebanese and North African origin in France and Belgium) that are created when a person inserts a secondary language into a primary (host) language, transliterating according to his own literacy, accent, etc. It is necessary, therefore, to take the non-host-language tokens, discover their original language, back-transliterate them and then find the ontological representation of each word and insert it back into the semantic map of the document;</para></listitem>
<listitem><para>Identification, through statistical analysis (based on prior training on tagged documents), of the ontological instances in the text to determine the probability that the author represents a certain background and ideological leaning. Statistical categorization of a document as belonging to a certain domain, topic, or cultural or religious context can reduce the number of possible interpretations of a given lexical occurrence, hence reducing ambiguity;</para></listitem>
<listitem><para>Disambiguation using the immediate neighbourhood of the lexical instances. Such a neighbourhood consists of the lexical tokens directly preceding or following the lexical instance. After reading a number of texts of a given genre, the algorithm infers that in X percent of cases statement A carries the meaning B. When statement C is encountered in a text categorized as belonging to the same genre, the algorithm derives from this a high level of confidence that C also means B. This confidence can be enhanced by additional information in the text;</para></listitem>
<listitem><para>Chunking and Part of Speech Analysis of the text to use the relationship between different words (not necessarily arbitrarily choosing a certain level of N-grams) to provide additional disambiguating information;</para></listitem>
<listitem><para>Based on the identification of the domain of the text, the lexical units (words, phrases etc.) are linked to ontological instances with a unique meaning (as opposed to words which may have different meanings in different contexts) that can be &#8220;ideas&#8221;, &#8220;actions&#8221;, &#8220;persons&#8221; &#8220;groups&#8221; etc. An idea may be composed of statements in different parts of the document, which come together to signify an ontological instance of that idea<footnote id="fn_4" label="4"> <para>Ontology is a graph of unique language-independent concepts and entities built by experienced subject matter experts that defines the precise meaning and features of each element in the graph and maps the semantic relationship between them. Hence, the features that are encountered in the surroundings of a lexical instance are factored in the system&#8217;s decision to what unambiguous meaning (ontological instance) to refer the lexical instance. &#8220;Ontology&#8221;, Tom Gruber, Encyclopedia of Database Systems, Ling Liu and M. Tamer Ozsu (Eds.), Springer-Verlag, 2009.</para></footnote>;</para></listitem>
<listitem><para>The ontological digest of the document then is matched with preprocessed statistical models to perform categorization.</para></listitem>
</itemizedlist>
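<para>The stages above can be illustrated with a minimal sketch in which a polysemic token is resolved first by the document&#8217;s domain categorization and then by its lexical neighbourhood; the ontology entries and context cues are invented for illustration only:</para>

```python
# A toy ontology: one lexical token, several language-independent concepts.
ONTOLOGY = {
    "cell": {
        "biology": "concept:biological_cell",
        "militancy": "concept:clandestine_group",
        "telecom": "concept:mobile_phone",
    },
}

# Neighbouring tokens that hint at a domain (the lexical-context stage).
CONTEXT_CUES = {
    "militancy": {"sleeper", "operative", "recruit"},
    "telecom": {"phone", "signal"},
}

def disambiguate(token, doc_domain, neighbours):
    """Map a lexical token to a single ontological instance, or None."""
    candidates = ONTOLOGY.get(token, {})
    if doc_domain in candidates:            # stage: domain categorization
        return candidates[doc_domain]
    for word in neighbours:                 # stage: lexical neighbourhood
        for domain, concept in candidates.items():
            if word in CONTEXT_CUES.get(domain, ()):
                return concept
    return None                             # remains ambiguous

assert disambiguate("cell", "militancy", []) == "concept:clandestine_group"
assert disambiguate("cell", "news", ["sleeper"]) == "concept:clandestine_group"
```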
<para>This approach, therefore, is not merely &#8220;data mining&#8221; but &#8220;meaning mining&#8221;. The purpose is to extract meaning from the text and to create a normalized data set that allows us to compare the &#8220;meaning&#8221; extracted from a text in one language with that, which is extracted from another language.</para>
<para>This methodology applies also to entity extraction. Here, the answer to Juliet&#8217;s queclarative &#8220;what&#8217;s in a name?&#8221; is: quite a lot &#8211; and not, as Juliet suggested, almost nothing. A name can tell us gender, ethnicity, religion, social status, family relationships and even age or generation. To extract this information, however, we must first be able to resolve entities that do not look alike but may be the same entity (e.g. names of entities written in different scripts: Latin, Arabic, Devanagari, Cyrillic) and to disambiguate entities that look the same but may not be (different transliterations of the same name in a non-Latin source language, or culturally acceptable permutations of the same name).</para>

</section>

<section class="lev2" id="sec9-4-2">
<title>9.4.2 Complex Event Processing Module (CEP)</title>
<para>The key challenge so far with complex event processing has been the need to make it both functional and generic at the same time. As a downstream consumer, the component depends on receiving data from the other, upstream components, such as the NLP and SMA data. The challenge here was to produce something that could consume unknown data, making assumptions and best guesses as to the nature, structure, quantity and quality of that data. In addition to working in the dark with its source data, the CEP engine also faced the challenge of not having any intelligence data to work with. Clearly, on a project of this nature, LEAs must guard and protect their intelligence for a plethora of operational reasons; nevertheless, the CEP engine must still be delivered and demonstrate a working capability, so again it had to make a few leaps of faith, which should remain valid when integration and the pilots test them out. Hence, the CEP engine remains somewhat simplistic because of its generic nature, but by the same token generic is inherently extensible &#8211; so as both upstream data and real-world intelligence are fed into it, the engine will be able to adapt.</para>
<para>The CEP component aims to identify, via pattern matching algorithms, the dynamics, interactions, feedback loops, causal connections and trends associated with the data content it receives as input from the other RED-Alert components. Specifically, it is a secondary, downstream consumer of pre-processed data from the NLP, SNA and AI components and will generate output alerts; the component will also allow the configuration of data sources to enable the ingestion of external data beyond the primary sources. The alerts themselves will be output to log files, which will be monitored by a file reader component to display alerts, to monitor the CEP engine as a whole, and to integrate with the external APIs of the LEAs.</para>
<para>In terms of the development timeline, the component comprises a template architecture of many different CEP nuances which are set, selected or derived via a web tool to produce a myriad of applications. A data ingestion component will either acquire processed data from the configured input component via configurable connection components, or the connection components will feed Kafka topics which will serve as the actual source for all CEP input. Apache Kafka<sup>&#174;</sup><footnote id="fn_5" label="5"> <para>https://kafka.apache.org/</para></footnote> is a distributed streaming platform generally used for two broad classes of applications: 1) building real-time streaming data pipelines that reliably move data between systems or applications, and 2) building real-time streaming applications that transform or react to streams of data. It is as the second type of application that the RED-Alert CEP is conceived. Current expectations assume that multiple CEP applications will run in parallel &#8211; each either working on different parts of the input data, or on different patterns within the data, or as different configurations of the same CEP application utilizing alternate parameters (e.g. data consumed per month versus per week), or providing staged, partial result sets that will subsequently be consumed by an additional, downstream CEP application acting on the staged data.</para>
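<para>As a toy illustration of the kind of rule such a CEP application might evaluate &#8211; raising an alert when enough matching events arrive within a sliding time window &#8211; consider the sketch below. The predicate, field names and thresholds are hypothetical, and a real deployment would consume the events from a Kafka topic rather than via a direct call:</para>

```python
from collections import deque

class SlidingWindowRule:
    """Alert when `threshold` events matching `predicate` occur within `window` seconds."""
    def __init__(self, predicate, threshold, window):
        self.predicate = predicate
        self.threshold = threshold
        self.window = window
        self.timestamps = deque()

    def ingest(self, event, now):
        """Feed one event with its timestamp; return an alert dict or None."""
        if not self.predicate(event):
            return None
        self.timestamps.append(now)
        # Drop matches that have fallen out of the time window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.threshold:
            return {"alert": "pattern-matched", "count": len(self.timestamps)}
        return None

# Three "violent" events within 60 seconds trigger an alert on the third.
rule = SlidingWindowRule(lambda e: e.get("label") == "violent", threshold=3, window=60)
assert rule.ingest({"label": "violent"}, now=0) is None
assert rule.ingest({"label": "violent"}, now=10) is None
assert rule.ingest({"label": "violent"}, now=20) == {"alert": "pattern-matched", "count": 3}
```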

<para>Other tools and technologies covering similar RED-Alert needs and functionalities were analysed but dismissed, either because they would have required development effort beyond what had already been accomplished or because the development team&#8217;s expertise with them was more limited. These other technologies were Apache Flink (which is incorporated into the main CEP RED-Alert component), Spark and Red Hat Drools.</para>
<para>Primarily from a performance perspective, we expect Kafka to deal with this sort of load far better than MongoDB, so we expect any data sourced from MongoDB to be moved into Kafka; a data loading component will perform this task. Note also that, as part of creating staged, pre-processed data for downstream consumption by other CEP applications, the CEP applications themselves will create and populate MongoDB collections and Kafka topics as well. It is likely that Kafka will serve as the primary source for the engines and that these topics will be populated from MongoDB in real time. This event data is then converted to a data type associated with the CEP software via a generic parsing component, producing objects with a common structure representative of the source data (i.e. NLP, SNA and ML).</para>
<para>The block diagram shown in <link linkend="F9-3">Figure <xref linkend="F9-3" remap="9.3"/></link> outlines the workflow, interactions, input/output and decision-making processes of the CEP engine itself. As the diagram shows, the engine works on structured, well-defined JSON, where &#8220;well defined&#8221; means that all field names and their data types are specified, along with an indication of their original source. Note that, in this case, source indicates where the data analytics (i.e. NLP, SNA and SMA processing) that generated particular aspects of the JSON originated, as opposed to the source of the input, i.e. the raw data.</para>
<fig id="F9-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-3">Figure <xref linkend="F9-3" remap="9.3"/></link></label>
<caption><para>Complex event processing module &#8211; Logical component diagram.</para></caption>
<graphic xlink:href="graphics/ch009_fig003.jpg"/>
</fig>

</section>

<section class="lev2" id="sec9-4-3">
<title>9.4.3 Semantic Multimedia Analysis Tool (SMA)</title>
<para>Multimedia is extensively used in social networks nowadays and is gaining popularity among users with the growth in network capacity, connectivity and speed. Moreover, the affordable prices of data plans, especially mobile data packages, have considerably increased the use of multimedia. This includes terrorists, who use social media platforms to promote their ideology and intimidate their adversaries. It is therefore very important to develop automated solutions that semantically analyse multimedia content. The SMA Tool is designed to support the security and policing of online content by detecting terrorist material.</para>
<para>The SMA Tool extracts meaningful information from multimedia contents taken from social media. The five main features of the tool are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Segmentation of audio streams, identifying sections of speech;</para></listitem>
<listitem><para>Transcription of the segmented speech sections using an ASR engine;</para></listitem>
<listitem><para>Detection of sound events within audio streams, such as gunfire, explosions, crowd noise etc;</para></listitem>
<listitem><para>Extraction and identification of objects, such as logos, flags, weapons, faces, etc., within image and video scene elements;</para></listitem>
<listitem><para>Extraction and transcription of text elements in image and video elements.</para></listitem>
</itemizedlist>
<para>Moreover, the SMA Tool retrieves multimedia data, converts it to a uniform format and delivers the analysis results. The extraction of semantic information is the third of four stages the tool will perform. All four stages are as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Input: Retrieval of multimedia files from disk or URL;</para></listitem>
<listitem><para>Stream Separation: Extraction of audio/video streams in multimedia files;</para></listitem>
<listitem><para>Feature Analysis: Semantic analysis of audio/image content;</para></listitem>
<listitem><para>Output: Compilation of results in a uniform JSON format.</para></listitem>
</itemizedlist>
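<para>The four stages can be expressed as a simple pipeline skeleton. Every function below is an illustrative stub &#8211; real implementations would wrap a media downloader, a demuxer, the ASR engine and the detectors described in the following subsections:</para>

```python
import json

def load_media(path):                      # stage 1: input (disk or URL)
    return {"path": path}

def separate_streams(media):               # stage 2: stream separation
    return {"audio": media["path"] + "#audio",
            "video": media["path"] + "#video"}

def analyse_audio(stream):                 # stage 3a: audio feature analysis
    return {"segments": [], "transcript": "", "events": []}

def analyse_frames(stream):                # stage 3b: image feature analysis
    return {"objects": [], "text": []}

def run_sma_pipeline(path):                # stage 4: uniform JSON output
    streams = separate_streams(load_media(path))
    results = {"audio": analyse_audio(streams["audio"]),
               "image": analyse_frames(streams["video"])}
    return json.dumps(results)
```

<para>The JSON produced by the final stage is the format consumed by the NLP, SNA and CEP components.</para>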
<para>The results of this tool are sent to the other key components of the project such as NLP, SNA and CEP.</para>
</section>

<section class="lev3" id="sec9-4-3-1">
<title>9.4.3.1 Speech recognition</title>
<para>This component is used for audio segmentation, language detection and speech transcription. The RED-Alert project is required to support 10 languages, and be able to run offline, without having to send data to a 3<sup>rd</sup> party web API. We have consulted our LEA partners to prepare a list of 10 languages which must be supported by the speech/written text transcription elements of the SMA Tool. These languages are: Arabic, English, French, German, Hebrew, Romanian, Russian, Spanish, Turkish, and Ukrainian.</para>
</section>

<section class="lev3" id="sec9-4-3-2">
<title>9.4.3.2 Face detection</title>
<para>The SMA tool uses a Haar-like feature based cascade classifier [7] to detect both frontal facing and profile faces in images. Haar-like features are calculated by finding the difference in average pixel intensity between two or more adjacent rectangular regions of an image. In the SMA tool, Haar cascades are used as a supplementary feature to implement simple face detection. More advanced techniques are implemented in the object detection element, which can also be used to detect people/faces.</para>
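<para>The rectangle-difference computation behind Haar-like features can be sketched with an integral image, which makes each rectangle sum an O(1) operation. This is a simplified, pure-Python illustration of a single two-rectangle feature, not the cascade classifier the SMA tool actually uses:</para>

```python
def integral_image(img):
    # Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1].
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    # Sum of pixels in rectangle (x, y, w, h) via four table lookups.
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(img, x, y, w, h):
    # Two-rectangle Haar-like feature: difference in average intensity
    # between the left and right halves of a window (w must be even).
    ii = integral_image(img)
    half = w // 2
    left = rect_sum(ii, x, y, half, h) / (half * h)
    right = rect_sum(ii, x + half, y, half, h) / (half * h)
    return left - right
```

A cascade classifier evaluates thousands of such features at increasing cost, rejecting non-face windows as early as possible.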
</section>

<section class="lev3" id="sec9-4-3-3">
<title>9.4.3.3 Object detection</title>
<para>State-of-the-art methods for detecting objects within images use large neural networks consisting of multiple sub-networks (region proposal network, classification network, etc.). The SMA tool&#8217;s object detection utility uses the Faster R-CNN architecture [8]. Faster R-CNN is constructed primarily of two separate networks: a Region Proposal Network (RPN), which suggests regions of an image that might contain objects, and a typical CNN, which generates a feature map and classifies the objects in the proposed regions.</para>
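<para>Faster R-CNN itself is far too large to reproduce here, but one small, essential ingredient of such two-stage detectors is non-maximum suppression (NMS), which collapses overlapping region proposals into a single detection. A stdlib-only sketch (the box format and the 0.5 threshold are illustrative assumptions):</para>

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    # Keep highest-scoring boxes first, discarding any box that overlaps
    # an already-kept box by more than `thresh` IoU.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```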
</section>

<section class="lev3" id="sec9-4-3-4">
<title>9.4.3.4 Audio event detection</title>
<para>Audio event detection is implemented in the SMA Tool by using a recurrent CNN [9]. The convolutional element classifies the short-term temporal/spectral features of the audio, while the recurrent element detects longer-term temporal changes in the signal. The SMA Tool applies feature extraction prior to processing by the network. This provides a more detailed representation of the audio signal to the network, meaning the first few layers can extract more meaningful information. Peak-picking algorithms [10] are applied to remove any noise and annotate only the onset of any detected audio events.</para>
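<para>The peak-picking step can be illustrated minimally: keep local maxima of the network&#8217;s frame-wise activations that exceed a threshold and are separated by a minimum distance, and report only those onsets. The parameter values below are illustrative, not those of [10]:</para>

```python
def pick_peaks(activations, threshold=0.5, min_distance=3):
    # Return frame indices of onset peaks: local maxima above `threshold`,
    # at least `min_distance` frames apart (stronger peaks win ties).
    candidates = [
        i for i in range(1, len(activations) - 1)
        if activations[i] >= threshold
        and activations[i] > activations[i - 1]
        and activations[i] >= activations[i + 1]
    ]
    peaks = []
    for i in sorted(candidates, key=lambda i: -activations[i]):
        if all(abs(i - p) >= min_distance for p in peaks):
            peaks.append(i)
    return sorted(peaks)
```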
</section>

<section class="lev2" id="sec9-4-4">
<title>9.4.4 Social Network Analysis Module (SNA)</title>
<para>In recent decades, human communication has undergone a crucial transition. Thanks to the Internet, which connects individuals around the globe, anyone can contact anyone else without delay and without geographical restrictions. Social interactions have become cheap and worldwide; the only remaining restriction lies on the human side: each of us can process information only at a finite rate and can maintain trusted relations with only a few tens or hundreds of others. Describing and modelling this new type of human interaction therefore calls for a description free of spatial limitations, and this is precisely what the tools of Network Science provide.</para>
<para>The SNA module aims to provide methods and software solutions for handling relational data. It focuses on three aspects of network analysis, described in the following subsections.</para>
<para>1) <i>Network dynamics and temporal network structure models</i>.</para>
<para>The tool describes the evolution of networks and of their edges and nodes over time by calculating quantitative features derived from models of evolving networks and from the evolution of communities.</para>
<para>Real systems are usually not static; instead, they evolve in time [11]. This can manifest in the emergence of new parts, the disappearance of existing parts, and the rearrangement of relations among constituents over time. Temporal networks whose topology changes over time typically exhibit changing community structures. Since community-finding methods determine the structures only at individual time steps, the structures from consecutive steps must be matched. When communities simply shrink or grow, the matching is straightforward: communities are matched uniquely by the nodes shared between the two communities of different time steps. However, individuals can also change their community membership over time.</para>
<para>The SNA module implements a special community-finding algorithm to solve this challenge. The solution is based on a property of the applied algorithm which ensures that adding new nodes and edges to a network does not change the membership status of a node or an edge; the only possible change is that distinct communities fuse. This property allows the algorithm to match consecutive groups by introducing an intermediate time step in which the two snapshots are merged into a common network. Because the intermediate snapshot can contain only additional nodes and edges, the communities of the intermediate network can be matched to the prior and subsequent communities by the rule of matching intersections.</para>
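<para>The intersection-based matching rule can be sketched with plain node sets. This is a simplified illustration of the matching step only, not the project&#8217;s actual community finder:</para>

```python
def match_by_intersection(old_comms, new_comms):
    # Map each community at time t to the community at t+1 sharing the
    # most nodes with it (None if they share nothing). Communities are sets.
    mapping = {}
    for i, old in enumerate(old_comms):
        best, best_overlap = None, 0
        for j, new in enumerate(new_comms):
            overlap = len(old & new)
            if overlap > best_overlap:
                best, best_overlap = j, overlap
        mapping[i] = best
    return mapping

def match_via_intermediate(old_comms, mid_comms, new_comms):
    # Match through the merged (intermediate) snapshot: t -> merged -> t+1,
    # so that fusions of distinct communities are tracked consistently.
    to_mid = match_by_intersection(old_comms, mid_comms)
    mid_to_new = match_by_intersection(mid_comms, new_comms)
    return {i: mid_to_new.get(m) for i, m in to_mid.items() if m is not None}
```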
<para>2) <i>Link prediction solution</i></para>
<para>The SNA tool of the RED-Alert solution adopts network-theoretic similarity and distance measures for counter-terrorism purposes. Based on these targeted measures, the module predicts missing links and nodes. Furthermore, link features such as weights, labels and directionality are updated as well.</para>
<para>The implementation relies on two theoretical pillars:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>prediction based on topological measures;</para></listitem>
<listitem><para>prediction based on attribute information.</para></listitem>
</itemizedlist>
<para>Topological measures use only information from connectivity patterns; in contrast, attribute measures predict missing or hidden relations from common attribute statistics. Upon request of the analyst on the user interface of the integrated RED-Alert solution, the SNA module can also apply hybrid predictions, in which both networked measures and attribute data are combined (Soundarajan &amp; Hopcroft, 2012).</para>
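<para>The two pillars, and their hybrid combination, can be sketched over simple adjacency and attribute dictionaries. The scoring functions and the weighting parameter are illustrative assumptions, not the module&#8217;s actual measures:</para>

```python
def common_neighbors_score(adj, u, v):
    # Topological pillar: number of neighbours shared by u and v.
    return len(adj.get(u, set()) & adj.get(v, set()))

def attribute_score(attrs, u, v):
    # Attribute pillar: number of attribute values shared by u and v.
    return len(attrs.get(u, set()) & attrs.get(v, set()))

def hybrid_score(adj, attrs, u, v, alpha=0.5):
    # Hybrid prediction: convex combination of the two signals.
    # The weighting `alpha` is an illustrative assumption.
    return (alpha * common_neighbors_score(adj, u, v)
            + (1 - alpha) * attribute_score(attrs, u, v))
```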
<para>It must be noted, though, that all theoretical speculation is useless without reliable data sources. The scientific background behind this tool guarantees only mathematical rigour in the calculations; the final conclusions must always be thoroughly reviewed by human experts. All mathematical models work with assumptions that may be only partially valid in real scenarios.</para>
<para>3) <i>Hierarchy reconstructing methods</i></para>
<para>Terrorism has its own frame and structure. Like all organizations that consist of many individuals and carry out many tasks, terrorist actions are driven by an underlying hierarchy. However, in several cases this hierarchy is hidden and builds up in a self-organized way. To traditional observation techniques, such an organization appears widespread, unstructured and loosely connected. This is where SNA plays an important role: collecting small pieces of information from huge amounts of data yields a holistic picture in which &#8211; if the data allow it &#8211; the unseen hierarchical skeleton can be revealed.</para>
<fig id="F9-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-4">Figure <xref linkend="F9-4" remap="9.4"/></link></label>
<caption><para>Illustration of a possible output of the SNA tool.</para></caption>
<graphic xlink:href="graphics/ch009_fig004.jpg"/>
</fig>
<para>Here, algorithms are implemented for revealing hierarchical structures from flat datasets. New networks are constructed from the input data: either from co-occurrence statistics or from directed networks containing loops. Furthermore, quantitative measures are calculated to characterize the similarity of any network to an ideal hierarchical structure [12].</para>
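<para>The hierarchy measure of [12] can be sketched as global reaching centrality: rank nodes by the fraction of the network they can reach along directed edges, then average each node&#8217;s gap to the best-reaching node. A minimal stdlib version:</para>

```python
from collections import deque

def local_reach(adj, node, n):
    # Fraction of the other n-1 nodes reachable from `node` via BFS
    # along directed edges.
    seen, queue = {node}, deque([node])
    while queue:
        for nxt in adj.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return (len(seen) - 1) / (n - 1)

def global_reaching_centrality(adj, nodes):
    # GRC = mean gap between the maximum local reach and each node's
    # local reach; near 1 for a tree-like hierarchy, 0 for a cycle.
    n = len(nodes)
    reach = [local_reach(adj, v, n) for v in nodes]
    top = max(reach)
    return sum(top - r for r in reach) / (n - 1)
```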
<para>The upper drawing in <link linkend="F9-4">Figure <xref linkend="F9-4" remap="9.4"/></link> presents a typical thread-network layout of a forum on the Darkweb. The thread IDs are shown within the nodes, and the size of each node is proportional to the number of edges belonging to it. Node colours indicate topic groups; links are coloured by the dominant neighbouring node. The lower drawing in <link linkend="F9-4">Figure <xref linkend="F9-4" remap="9.4"/></link> shows the hierarchical structure of commenters of a Darkweb forum.</para>
</section>

<section class="lev1" id="sec9-5">
<title>9.5 Data Anonymization Tool</title>
<para>We live in an era of technology, where smart devices surround us in all realms of life. These devices feed on our information to generate smart options for us, which in the end help us make smart decisions. The data gathered by these devices can contain vital personal information such as name, age, location and interests. Alongside these smart devices, we nowadays tend to rely on social networks to broaden the scope of our social interactions. We share personal information such as name and age, we highlight the key things happening in our lives such as places visited, accidents and achievements, and we also like sharing our beliefs and interests. Unlike smart devices, where adversaries need to combine data from several devices to gather information about a single individual, social network data is a source of detailed insight into one&#8217;s life, thus posing a bigger threat than any single smart device. To mitigate the potential risks, the General Data Protection Regulation (GDPR) was introduced. This regulation limits the way in which personal data is processed by providing only six lawful bases: consent, contract, legal obligation, vital interests, public task and legitimate interests<footnote id="fn_6" label="6"> <para>https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/lawful-basis-for-processing/</para></footnote>. In light of this regulation, processing social network data becomes tricky. Data collectors, who own the social networks, can process this information, but only after explicitly informing the data owner. This limits the flexibility that third-party organizations used to have: under the GDPR, all third-party companies that do not have prior consent need to rely on anonymized data only.</para>
<para>Data anonymization has been around for a while now. It is a process of carefully categorizing social network data into different streams, where each stream undergoes a certain set of tasks. Social network data can be divided into three main streams: personal identifiers, quasi-identifiers and non-personal data. Personal identifiers are all parameters that can identify an individual directly within a large dataset; they mainly consist of name, unique ID, contact number and email address. To ensure anonymization, all such data is removed from the dataset before further processing, thus reducing the probability of identifying an individual in a large dataset. This probability is further reduced by processing the quasi-identifiers, which on their own have limited meaning but, when combined with other quasi-identifiers, can lead to privacy violations. For instance, a dataset containing only age information reveals little, but combined with location information it helps adversaries narrow down their search for an individual, and the more quasi-identifiers one has, the higher the probability of identification. Quasi-identifiers are therefore key parameters that all anonymization techniques need to process. The third stream deals with non-personal data. This is the set of information that is not connected to any particular individual and could point to anyone in the dataset; for instance, a Facebook post or a tweet could have been made by anyone and is thus considered non-personal data.</para>

<para>To introduce data anonymization, the data analyst carefully analyses the dataset and then narrows down the anonymization approaches that need to be executed. Social network data contains three types of quasi-identifiers: numeric, non-numeric and relational information. Three separate streams of anonymization techniques are therefore combined to handle social network data. Numeric data is handled by the well-known differential privacy approach [13], whereas non-numeric data is handled by k-anonymity (Sweeney, 2002). The relational information is anonymized using a privacy-conscious node-grouping algorithm [14]. The anonymized data ensures that no individual can be identified from the processed social network data.</para>
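<para>The numeric and non-numeric streams can be illustrated with two minimal sketches: Laplace noise for a differentially private count query, and a k-anonymity check over generalized quasi-identifiers. The parameter values and record layout are illustrative assumptions; real deployments require careful calibration:</para>

```python
import math
import random
from collections import Counter

def laplace_noisy_count(true_count, epsilon=1.0, rng=random):
    # Differential privacy: a count query has sensitivity 1, so adding
    # Laplace(0, 1/epsilon) noise makes the released answer epsilon-DP.
    u = rng.random() - 0.5                      # u in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sampling; clamp the log argument to avoid log(0).
    return true_count - sign * (1.0 / epsilon) * math.log(max(1e-12, 1.0 - 2.0 * abs(u)))

def is_k_anonymous(records, quasi_ids, k):
    # k-anonymity: every combination of quasi-identifier values must be
    # shared by at least k records in the released dataset.
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())
```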
<para>The anonymization techniques applied in this project hide information about all innocent individuals, but they also help terrorist organizations hide behind the same cover. This puts an extra burden on the SNA, CEP and NLP modules, which must adapt to working on anonymized social network data while narrowing down the search for terrorist organizations. Once suspects are identified, the LEAs need to know the identity of the highlighted individuals. To cater for this need, a de-identification approach is also developed in this project: it takes as input the surrogate IDs provided by the anonymization technique and returns the true identity of an individual. This de-identification algorithm exists only because of the nature of the project. While one can argue that it makes the scheme pseudonymization rather than anonymization, it is key to highlight that the de-identification capability resides only with the LEAs, thus preventing any adversary from actually identifying individuals while complying with the GDPR.</para>
</section>

<section class="lev1" id="sec9-6">
<title>9.6 Data Networked Privacy Tool</title>
<para>Intelligence information can be very sensitive, and its nature limits LEAs located in different geographical locations from sharing information. Social networks, on the contrary, have no territorial boundaries, and terrorist organizations can operate from any location, making it harder for LEAs to track and tackle them. To overcome this difficulty, the RED-Alert project is equipped with a novel inter-LEA search algorithm. It limits and controls the amount of information that LEAs located in different geographical locations can share, using strong encryption. Under this approach, as shown in <link linkend="F9-5">Figure <xref linkend="F9-5" remap="9.5"/></link>, LEAs independently perform their own searches and collect their own intelligence information; they are then requested to populate a list of the names of the individuals identified. A second LEA looking for a particular individual can search the encrypted list and find out whether an entry exists. The benefit of using strong encryption is that it limits what else an inquiring LEA can see: the inquiring LEA only receives a YES or NO response, so all other names in the database remain hidden. The search query is also designed with probability attacks in mind: if an LEA searches for the same name over and over again, no defined pattern emerges. This prevents the first LEA (which hosts the list) from knowing which name is being searched, making the process blind on both sides.</para>
<fig id="F9-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-5">Figure <xref linkend="F9-5" remap="9.5"/></link></label>
<caption><para>Two-layer networked privacy preserving big data analytics model between coalition forces.</para></caption>
<graphic xlink:href="graphics/ch009_fig005.jpg"/>
</fig>
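<para>The yes/no lookup can be approximated with a stdlib sketch in which the hosting LEA publishes only salted hashes of names. This simplification is ours: it hides the list from the inquirer only up to dictionary attacks and does not hide the query from the host, so the doubly blind property described above requires a proper cryptographic protocol (e.g. private set intersection) in practice:</para>

```python
import hashlib
import os

def publish_hashed_list(names, salt=None):
    # Hosting LEA: publish salted hashes instead of clear-text names.
    salt = salt if salt is not None else os.urandom(16)
    hashed = {hashlib.sha256(salt + n.encode()).hexdigest() for n in names}
    return salt, hashed

def query(hashed_list, salt, name):
    # Inquiring LEA: receives only a YES/NO answer for the queried name;
    # the other entries remain hidden behind the hashes.
    return hashlib.sha256(salt + name.encode()).hexdigest() in hashed_list
```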
</section>

<section class="lev1" id="sec9-7">
<title>9.7 Integration Component</title>
<para>All the components presented in the previous sections will be integrated into one unique solution. The integration component therefore comprises several subcomponents:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>Main System User Interface</b> provides common look-and-feel to the graphical user interface of the overall RED-Alert System. As shown in <link linkend="F9-6">Figure <xref linkend="F9-6" remap="9.6"/></link>, this component will provide a portal-like user interface for the overall system with common interface placeholders, such as header and footer, main menu, and user interface components hosting through custom common APIs.</para></listitem>
</itemizedlist>
<fig id="F9-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-6">Figure <xref linkend="F9-6" remap="9.6"/></link></label>
<caption><para>Main system user interface &#8211; Component interactions diagram.</para></caption>
<graphic xlink:href="graphics/ch009_fig006.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>User Identification</b> and Access Management component will be implemented based on RedHat Keycloak<footnote id="fn_7" label="7"> <para>http://www.keycloak.org</para></footnote> and will provide the means for identifying users and managing their access to application components, both to front-end user interface and to back-end processes;</para></listitem>
<listitem><para><b>The Collaborative Workflow</b>/Case Management component is based on RedHat jBPM,<footnote id="fn_8" label="8"> <para>https://www.jbpm.org</para></footnote> a light-weight and extensible workflow engine, offering process management features and tools for both business users and developers. RedHat jBPM supports adaptive and dynamic processes that require flexibility to model complex, real-life situations that cannot easily be described using a rigid process;</para></listitem>
<listitem><para><b>Application Integration Services</b> component is built with Apache ServiceMix,<footnote id="fn_9" label="9"> <para>http://servicemix.apache.org</para></footnote> an open-source integration container that unifies the functionalities of Apache ActiveMQ, Camel, CXF, and Karaf into a powerful runtime platform on which you can build your own integration solutions. It provides a complete, flexible, enterprise-ready ESB exclusively powered by OSGi;</para></listitem>
<listitem><para><b>System Interoperability Services</b> component will be built on top of the Application Integration Services, exposing selected RED-Alert system&#8217;s functionalities to external systems, including existing systems of LEAs;</para></listitem>
<listitem><para><b>Centralized Audit and Logging</b> component will be implemented using Audit4j,<footnote id="fn_10" label="10"> <para>http://audit4j.org</para></footnote> an open-source auditing framework which is a full-stack application auditing and logging solution for Java enterprise applications, tested on common distributions of Linux, Windows and Mac OS, designed to run with minimum configuration yet providing various options for customization.</para></listitem>
</itemizedlist>
<para><link linkend="F9-7">Figure <xref linkend="F9-7" remap="9.7"/></link> presents the interactions of the Centralized Audit and Logging component with the Main System UI (Portal), by means of hosting the visual part exposed by the component, and also with the other components of the RED-Alert system, by means of custom common APIs that will allow all components to log entries into a central repository.</para>




<fig id="F9-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F9-7">Figure <xref linkend="F9-7" remap="9.7"/></link></label>
<caption><para>Centralized audit and logging &#8211; Interactions diagram.</para></caption>
<graphic xlink:href="graphics/ch009_fig007.jpg"/>
</fig>

</section>

<section class="lev1" id="sec9-8">
<title>9.8 Future Research Challenges</title>
<para>The next stage of the CEP process is to integrate the recently published JSON standard to finalise the ingestion process (Mongo data source to Kafka topic transfer), and to extend the range of CEP engines to accommodate the SNA (network analysis) data.</para>
<para>One of the major challenges of multimedia extraction is reducing the number of false positives. We need to fine-tune the SMA Tool&#8217;s components using larger datasets covering a broad range of objects and audio variations. Nowadays, data collection, processing and storage have themselves become very challenging due to the recently enforced GDPR compliance requirements. The situation is improving with the development of new data management processes and good practices for data protection. We aim to further improve the performance of the SMA Tool and evolve it towards a comprehensive Multimedia Forensics Analysis Toolkit.</para>
<para>Social network analysis is very sensitive to the quality of the available datasets. Further research will aim to develop algorithms for evaluating noisy or biased input datasets. For example, ensemble averages over possible realizations of networks can shed light on the reliability of predictions.</para>
<para>Another challenge to be addressed is to develop tools for hierarchical visualization of time evolving networks, which helps the analyst in understanding the possible correlations and trends at different scales.</para>
<para>Integration activities will continue with the scheduled iterations towards the piloting phase. These iterations imply adding online streaming capabilities to the data acquisition component and expanding the social media channel capabilities beyond Facebook and Twitter to reach the seven-channel KPI. The Common Schema will be extended with new fields to support these new channels. The security and audit solutions will also be rolled out to all other components to meet the full scope of the LEAs&#8217; security requirements.</para>
<para><b>Acknowledgement</b></para>
<para>This project has received funding from the European Union&#8217;s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 740688.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Migration and Home Affairs, &#8220;High-Level Commission Expert Group on Radicalisation (HLCEG-R),&#8221; Publications Office of the European Union, Luxembourg, 2018.</para>
<para>[2] OASIS, Privacy by Design Documentation for Software Engineers Version 1.0, 2014.</para>
<para>[3] N. Marz and J. Warren, Big Data &#8211; Principles and best practices of scalable realtime data systems, Manning, 2015.</para>
<para>[4] Deloitte, Privacy by Design Setting a new standard for privacy certification, Deloitte Design Studio, 2016.</para>
<para>[5] The Rockefeller Foundation, &#8220;Reproduction of Weaver&#8217;s memorandum,&#8221; 15 07 1949. [Online]. Available: http://www.mt-archive.info/ Weaver-1949.pdf. [Accessed 12 December 2018].</para>
<para>[6] D. Kahneman, &#8220;Thinking, Fast and Slow by Daniel Kahneman,&#8221; <i>Journal of Social, Evolutionary, and Cultural Psychology</i>, vol. 2, pp. 253&#8211;256, 2012.</para>
<para>[7] P. Viola and M. Jones, &#8220;Rapid Object Detection using a Boosted Cascade of Simple Features,&#8221; in <i>Conference on computer vision and pattern recognition</i>, 2001.</para>
<para>[8] S. Ren, K. He, R. Girshick and J. Sun, &#8220;Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,&#8221; in <i>Advances in Neural Information Processing Systems 28 (NIPS 2015)</i>, 2016.</para>
<para>[9] I. Goodfellow, Y. Bengio and A. Courville, &#8220;Deep Learning,&#8221; MIT Press, 2017.</para>
<para>[10] C. Southall, R. Stables and J. Hockman, &#8220;Improving peak-picking using multiple time-step loss functions,&#8221; in <i>Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR)</i>, Birmingham, 2018.</para>
<para>[11] N. Masuda and R. Lambiotte, A Guide to Temporal Networks, Singapore: World Scientific, 2016.</para>
<para>[12] E. Mones, L. Vicsek and T. Vicsek, &#8220;Hierarchy measure for complex networks,&#8221; <i>PLoS ONE</i>, 2012.</para>
<para>[13] C. Dwork, &#8220;Differential privacy: A survey of results,&#8221; in <i>International Conference on Theory and Applications of Models of Computation</i>, Springer, Berlin, Heidelberg, pp. 1&#8211;19, April 2008.</para>
<para>[14] W. Asif, I. G. Ray, S. Tahir and R. Muttukrishnan, &#8220;Privacy-preserving Anonymization with Restricted Search (PARS) on Social Network Data for Criminal Investigations,&#8221; 2018.</para>
</section>
</chapter>

<chapter class="chapter" id="ch010" label="10" xreflabel="10">
<title>TRUESSEC Trustworthiness Label Recommendations</title>
<para><b>Danny S. Guam&#225;n<sup>1,7</sup>, Manel Medina<sup>2,3</sup>, Pablo Lopez-Aguilar<sup>3</sup>, Hristina Veljanova<sup>4</sup>, Jose M. del &#193;lamo<sup>1</sup>, Valentin Gibello<sup>6</sup>, Mart&#237;n Griesbacher<sup>4</sup> and Ali Anjomshoaa<sup>5</sup></b></para>
<para><sup>1</sup>Universidad Polit&#233;cnica de Madrid, Departamento de Ingenier&#237;a de Sistemas Telem&#225;ticos, 28040, Madrid, Spain</para>
<para><sup>2</sup>Universitat Polit&#232;cnica de Catalunya, esCERT-inLab, 08034, Barcelona, Spain</para>
<para><sup>3</sup>APWG European Union Foundation, Research and Development, 08012, Barcelona, Spain</para>
<para><sup>4</sup>University of Graz, Institute of Philosophy and Institute of Sociology, 8010, Graz, Austria</para>
<para><sup>5</sup>Digital Catapult, Research and Development, NW1 2RA, London, United Kingdom</para>
<para><sup>6</sup>University of Lille, CERAPS &#8211; Faculty of Law, 59000, Lille, France</para>
<para><sup>7</sup>Escuela Polit&#233;cnica Nacional, Departamento de Electr&#243;nica, Telecomunicaciones y Redes de Informaci&#243;n, 170525, Quito, Ecuador</para>
<para>E-mail: ds.guaman@dit.upm.es; medina@ac.upc.edu; pablo.lopezaguilar@apwg.eu; hristina.veljanova@uni-graz.at; jm.delalamo@upm.es; valentin.gibello@univ-lille.fr; m.griesbacher@uni-graz.at; ali.anjomshoaa@ktn-uk.org</para>
<para>The main goal of the TRUESSEC project is to foster trust and confidence in new and emerging ICT products and services throughout Europe by encouraging the use of assurance and certification processes that consider sociocultural, legal, ethical, technological and business aspects, while paying due attention to the protection of Human Rights.</para>
<para>TRUESSEC&#8217;s central recommendation to the European Commission (EC) is a label scheme that can suitably address the issues found and that is worth developing and testing. While actual software development is beyond the current scope of TRUESSEC, the remainder of this paper describes the characteristics of such a solution, allowing the EC to commission a working prototype should it wish to do so.</para>
<para>At the heart of the proposed solution is a set of prioritized survey questions that take into account a set of core areas of trustworthiness to produce a visual &#8220;transparency&#8221; statement that is easy for the citizen to understand, and additionally a specific piece of code to enable machine-to-machine integration based on the policy settings of third-party users. In this regard, the Creative Commons licensing model<footnote id="fn_1" label="1"> <para>https://creativecommons.org</para></footnote> is analogous to our proposed solution.</para>

<section class="lev1" id="sec10-1">
<title>10.1 Introduction</title>
<para>This paper provides a recommendation for a TRUESSEC labelling solution, aimed at showing users the level of trustworthiness of applications and services according to multi-factor criteria.</para>
<para>The central task of the TRUESSEC project is to apply an interdisciplinary approach, encompassing ethics, sociology, law and technical engineering, to make recommendations to the European Commission for a certification and labelling of ICT products and services that will foster trust among citizens that use them.</para>
<para>Both the core areas that constitute &#8220;trust&#8221; (which span cybersecurity through to branding and user experience) and the various potential fields of application (from web services to cyber-physical systems) mean that the remit of this project is very broad indeed.</para>
<para>Nevertheless, the project team values this approach and, as background to our recommendation, has noted that good progress has been made with European legislation which, over time, is likely to enhance levels of citizen trust. Even though the Digital Single Market legal framework is still a work in progress, these advances have resulted in a strong legal foundation to protect the rights of EU citizens entrenched in the Charter of Fundamental Rights of the EU [1].</para>
<para>In addition, pan-European bodies, such as ENISA<footnote id="fn_2" label="2"> <para>See the Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on ENISA, the &#8220;EU Cybersecurity Agency&#8221;, and repealing Regulation (EU) 526/2013, and on Information and Communication Technology cybersecurity certification (&#8220;Cybersecurity Act&#8221;), COM/2017/0477 final &#8211; 2017/0225 (COD).</para></footnote>, are progressing well with security and privacy certification and codes of conduct in relatively new areas, such as security in the Cloud and the Internet of Things &#8211; although certification remains a voluntary responsibility of the online service providers, with few legal implications.</para>
<para>Our research started by &#8220;evaluating existing trustworthiness seals and labels&#8221; [2], and the analysis of these existing schemes showed a general lack of adoption and awareness, as well as poor transparency regarding what is being certified and under what conditions. In fact, citizens tend to rely on other indicators of trust (third-party payment systems, branding, user experience, and user-based recommendation engines) to make decisions about their use of a service, despite the limited guarantees these actually offer.</para>
<para>TRUESSEC&#8217;s research work also went beyond current business practices, technology and legislation to explore the social and ethical questions behind what constitutes trust from users. This is summarized by our criteria catalogue, which was published as deliverable [3].</para>
<para>Given these inputs, there are a number of issues with existing label schemes:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>There are too many labels to provide a common understanding for citizens or service providers</para></listitem>
<listitem><para>Businesses tend not to understand the cost/benefits of using labelling</para></listitem>
<listitem><para>They are not sufficiently flexible and updated to acknowledge relatively new legislation, such as the GDPR</para></listitem>
<listitem><para>They are not inclusive enough to incorporate additional 3rd party certification</para></listitem>
<listitem><para>They do not &#8220;go beyond the law&#8221; to enable service providers to demonstrate that they have taken an ethical, responsible and transparent approach</para></listitem>
<listitem><para>They rarely encompass all major components of trust such as safety or &#8220;security by design&#8221;, personal data protection and consumer rights enforcement</para></listitem>
<listitem><para>They provide insufficient information on how they are awarded and on the safeguards offered</para></listitem>
</itemizedlist>
<para>These shortcomings mean that current labels are often out of date, removed from best practices, poorly understood and therefore little known and used.</para>
</section>

<section class="lev1" id="sec10-2">
<title>10.2 Interdisciplinary Requirements</title>
<para>TRUESSEC.eu Core Areas of trustworthiness are based on the findings from five support studies, <b>considering the European values and</b></para>
<fig id="F10-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-1">Figure <xref linkend="F10-1" remap="10.1"/></link></label>
<caption><para>TRUESSEC.eu core areas of trustworthiness.</para></caption>
<graphic xlink:href="graphics/ch010_fig001.jpg"/>
</fig>
<para><b>fundamental rights</b> as well as following joint work among all disciplines represented in the TRUESSEC.eu project. Six Core Areas have been agreed upon, setting the stage for the search for multidisciplinary criteria [4]. The Core Areas displayed in <link linkend="F10-1">Figure <xref linkend="F10-1" remap="10.1"/></link> represent the reflections of the five disciplines: ethics, law, sociology, business and technology.</para>
<para><b>Transparency</b>. The TRUESSEC.eu Core Area transparency reflects the understandings of the five disciplines by having information in its focus. In this regard, the Core Area transparency revolves around the fulfilment of information duties related to personal data processing, but it also goes beyond that, as the business perspective shows. Overall, transparency can help to narrow the existing informational gap and give users clearer answers to questions regarding their personal data and the products and services they purchase.</para>
<para><b>Privacy</b>. The TRUESSEC.eu Core Area privacy is equally important in all disciplines. When users are provided with relevant information, this sets the ground for them to take control over their data. On the one hand, users must be able to make decisions regarding their personal data; on the other hand, providers must respect those decisions. The latter is a striking point, as providers have commercial interests in processing as much data as possible. Considering the economic relevance of data and the emerging data economy, it is crucial to ensure the protection of personal data. This includes considering aspects of privacy throughout the design and development of an ICT product or service (privacy by design) as well as offering the privacy settings at a high level of privacy protection (privacy by default).</para>
<para><b>Anti-discrimination</b>. This Core Area has great relevance for trustworthiness. The need to formulate such a core area stems from the fact that discrimination concerning ICT products and services is present and is very often hidden in decision-making carried out by algorithms and self-learning systems. This particularly relates to cases where parameters that go beyond the scope of the service or product in question are included in the decision-making process.</para>
<para><b>Autonomy</b>. The TRUESSEC.eu Core Area autonomy summarizes well the considerations of the five disciplines. Having access to and rights to use various ICT products and services brings up one very central issue, namely the need for users to be given the opportunity to make decisions regarding their personal data. These decisions need to be well informed and free of manipulation and coercion.</para>
<para><b>Respect</b>. The TRUESSEC.eu Core Area respect presents a transition from discipline-related understanding to a transdisciplinary one. It embodies the idea that, based on societal, legal and ethical frameworks, certain duties arise for ICT providers that ground legitimate expectations on the side of users when dealing with ICT products and services. Legitimate expectations have three main hallmarks: they are predictive, prescriptive and justifiable. In the ICT context, this would suggest that users create expectations on what ICT providers will and should or should not do, or how they will and should operate, whereby these expectations are justifiable, that is, users have justification or warrant for forming them in the first place. An example of such a legitimate expectation is that ICT providers respect users&#8217; rights and freedoms.</para>
<para><b>Protection</b>. The considerations of all five disciplines focus on the protection of individuals against harm as well as the protection of their rights and freedoms. This has led us to formulate the TRUESSEC.eu Core Area protection as the sixth core area. In the context of ICT, protection relates to both safety and security, thus encompassing risks of physical injury or damage as well as risks related to data, such as unauthorized access, identity theft, etc. To enable a solid level of protection, compliance with established safety and cybersecurity standards is essential. The aim is to prevent any harm that may be caused by the use of ICT in the first place.</para>
</section>

<section class="lev1" id="sec10-3">
<title><b>10.3 Criteria Catalogue and Indicators</b><footnote id="fn_3" label="3"> <para>For more on the TRUESSEC.eu Criteria Catalogue see Stelzer et al. &#8220;TRUESSEC.eu Deliverable D7.2: Cybersecurity and privacy Criteria Catalogue for assurance and certification,&#8221; 2018, https://truessec.eu/library.</para></footnote></title>
<para>The TRUESSEC.eu Criteria Catalogue represents a constituent part of the TRUESSEC.eu work on labelling. It is a multidisciplinary endeavour to compile a list of criteria and indicators that could contribute towards enhancing the trustworthiness of ICT products and services. The development of the TRUESSEC.eu Criteria Catalogue consists of two phases: (a) development of the First Draft Criteria Catalogue, which includes only ethical and legal criteria and indicators, and (b) development of the multidisciplinary Criteria Catalogue, which builds upon the First Draft Criteria Catalogue but also includes sociological, business and technical input.</para>
<para>The basis for the Criteria Catalogue consists of the European values as stated in Article 2 of the Treaty of the European Union and the European fundamental rights, on the one hand, and the findings from the five support studies prepared in the first year of the project as well as interdisciplinary work and discussion, on the other hand. It is from here that we extracted the hierarchical structure of the Criteria Catalogue. As depicted in <link linkend="F10-2">Figure <xref linkend="F10-2" remap="10.2"/></link>, we started with high-level concepts we called <b>Core Areas</b>. The very aim of the Core Areas is to provide a framework which, in the next step, could be broken down into more specific elements. In that sense, the Core Areas reflect the values that should be considered in the design and use of ICT products and services, and thus serve as an orientation tool when determining the criteria. Based on the Core Areas we then developed the <b>criteria</b>. The criteria show what requirements an ICT product or service should fulfil in order to be considered trustworthy. In the hierarchical structure, the criteria are less abstract than the Core Areas; however, they are still not concrete enough to be measurable. For that purpose, we formulated <b>indicators</b>, which can be measured. A set of indicators is determined for each single criterion. The aim of the indicators is to indicate the degree to which a particular criterion is met.</para>
<para>Based on the support studies and the interdisciplinary discussion we defined six TRUESSEC.eu Core Areas of trustworthiness: <i>transparency, privacy, anti-discrimination, autonomy, respect</i> and <i>protection</i> and provided a TRUESSEC.eu multidisciplinary understanding of each of them (see Table 10.1).</para>

<fig id="F10-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-2">Figure <xref linkend="F10-2" remap="10.2"/></link></label>
<caption><para>Developing the criteria catalogue.</para></caption>
<graphic xlink:href="graphics/ch010_fig002.jpg"/>
</fig>
<table-wrap position="float" id="T10-1">
<label><link linkend="T10-1">Table <xref linkend="T10-1" remap="10.1"/></link></label>
<caption><para>TRUESSEC.eu Core Areas of trustworthiness</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr valign="top"><td>TRUESSEC.eu Core Areas</td><td>Multidisciplinary TRUESSEC.eu Understanding</td></tr>
</thead>
<tbody>
<tr valign="top"><td>Transparency</td><td>The ICT product or service is provided in line with information duties regarding personal data processing and the product/service itself.</td></tr>
<tr valign="top"><td>Privacy</td><td>The ICT product or service allows the user to control access to and use of their personal information and it respects the protection of personal data.</td></tr>
<tr valign="top"><td>Anti-discrimination</td><td>The ICT product or service does not include any discriminative practices and biases.</td></tr>
<tr valign="top"><td>Autonomy</td><td>The ICT product or service gives users the opportunity to make decisions and respects those decisions. The ICT product or service also respects other parties&#8217;/persons&#8217; rights and freedoms.</td></tr>
<tr valign="top"><td>Respect</td><td>ICT products or services are to be provided in accordance with the legitimate expectations related to them.</td></tr>
<tr valign="top"><td>Protection</td><td>ICT products and services are provided in accordance with safety and cybersecurity standards.</td></tr>
</tbody>
</table>
</table-wrap>
<para>To give a better understanding of the interdisciplinary nature of the Core Areas, we show some exemplary details on the discussion of transparency. From an <b>ethics</b> perspective, transparency relates to two aspects: (a) providing clear and sufficient information about products and services in general and (b) more specifically providing information to users regarding activities with their personal data. <b>Legally</b>, transparency can be understood in terms of the information duties laid down in the GDPR, the Directive on consumer rights or the e-commerce Directive. With respect to personal data, transparency is one of the core principles of data processing (Article 5 GDPR). From a <b>technology</b> perspective, transparency (in data protection) is defined as the property that all personal data processing can be understood (intelligible and meaningful) at any time by end-users (i.e., before, during, and after processing takes place). In the technical domain there is also the concept of a &#8216;Service Level Agreement&#8217;, which describes the technical specification of the service/product being used, for example its availability or uptime. These more normatively oriented definitions can also be complemented by a <b>sociological</b> perspective, which focuses on public opinion. Considering that currently (Eurobarometer data from 2015):</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>only a minority of EU citizens read privacy statements (fewer than one in five),</para></listitem>
<listitem><para>only about four out of ten internet users read the terms and conditions on online platforms, and</para></listitem>
<listitem><para>over 90% want to be informed if their data was ever lost or stolen,</para></listitem>
</itemizedlist>
<para>It can be assumed that there is a need for improvement in current information practices.</para>
<para>Having well-informed citizens, e.g. on the risks of cybercrime, also leads to improved cybersecurity behaviour, which emphasizes the importance of transparency and information. These interdisciplinary considerations can also be connected to a <b>business</b> perspective. Transparency covers a wide range of business processes, from being clear about the terms of use of the online service through to publishing transparency reports about the passing on of user data to 3<sup>rd</sup> parties, such as law enforcement. Transparency of service and use of personal data is increasingly being perceived by business as a competitive advantage.</para>
<para>From the six Core Areas we extracted the following twelve criteria of trustworthiness:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Information</para></listitem>
<listitem><para>Anti-discrimination</para></listitem>
<listitem><para>Cyber security</para></listitem>
<listitem><para>Product safety</para></listitem>
<listitem><para>Law enforcement declaration</para></listitem>
<listitem><para>Appropriate dispute resolution</para></listitem>
<listitem><para>Protection of minors</para></listitem>
<listitem><para>User-friendly consent</para></listitem>
<listitem><para>Enhanced control mechanisms</para></listitem>
<listitem><para>Privacy commitment</para></listitem>
<listitem><para>Unlinkability</para></listitem>
<listitem><para>Transparent processing of personal data</para></listitem>
</itemizedlist>
<para>It should be emphasized that the order of the criteria in this list does not indicate their importance per se. Furthermore, we consider this list of twelve criteria to be the groundwork, consisting of the most fundamental criteria in the context of ICT products and services. In that sense, the list is not complete, for the simple reason that technological developments may require additional criteria to be added.</para>
<para>In what follows, we will choose one criterion from the list and use it as an example to elaborate our approach. Table 10.2 illustrates this example.</para>
<para>The Criteria Catalogue is represented in tabular form and consists of three columns. The middle column represents the criterion. The right column represents the indicators. As the table shows, each criterion is assigned a set of corresponding indicators that, when checked, should show to what degree the criterion is fulfilled. The left column, named &#8216;<b>Trustworthiness enhancer</b>&#8217;, represents the six Core Areas in six sections. By adding this column, we wanted to show the interrelation between the criterion in question and the Core Areas. To show this, we used a colour system. We divided each of the six sections representing the six Core Areas into three subsections where a colour can be applied to indicate the degree to which, based on our assessments, the criterion addresses each Core Area. In that sense, one could apply colour to one, two or three boxes, with three meaning the criterion fully addresses and meets the particular Core Area. This eventually proved to be a very useful way to check whether the group of criteria we identified sufficiently addresses the identified six Core Areas [5].</para>
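The colour system described above can be sketched programmatically as a coverage check: each criterion scores 0&#8211;3 against each Core Area, and the catalogue is considered balanced when every Core Area is addressed by at least one criterion. The criterion names and degree values below are illustrative assumptions, not the project&#8217;s actual assessments:

```python
# Sketch of the 'Trustworthiness enhancer' colour system: each criterion is
# scored 0-3 against each Core Area (3 = the criterion fully addresses it).
# The scores below are illustrative placeholders, not TRUESSEC.eu ratings.
CORE_AREAS = ["transparency", "privacy", "anti-discrimination",
              "autonomy", "respect", "protection"]

criteria_scores = {
    "Information":    {"transparency": 3, "privacy": 2, "autonomy": 2},
    "Cyber security": {"protection": 3, "privacy": 2, "respect": 1},
}

def coverage(scores):
    """Highest degree reached per Core Area across all criteria."""
    best = {area: 0 for area in CORE_AREAS}
    for per_area in scores.values():
        for area, degree in per_area.items():
            best[area] = max(best[area], degree)
    return best

def uncovered(scores, threshold=1):
    """Core Areas that no criterion addresses to at least `threshold`."""
    return [a for a, d in coverage(scores).items() if d < threshold]
```

Running `uncovered(criteria_scores)` on this toy catalogue would flag &#8216;anti-discrimination&#8217; as insufficiently addressed, which is exactly the kind of gap the colour system was designed to reveal.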
<para>Table 10.2 represents the criterion &#8216;Information&#8217;. Our findings showed that information undoubtedly plays an important part in enhancing the trustworthiness of ICT products and services. Having the relevant information allows one to make informed decisions and also creates a climate of openness and transparency. In general, information consists of two aspects:</para>
<para><b>Table 10.2</b> Criterion &#8211; Information</para>
<fig id="T10-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<graphic xlink:href="graphics/ch010_tab002.jpg"/>
</fig>
<para>(a) <b>content</b>, namely, <i>what</i> the user is informed about, and</para>
<para>(b) <b>form</b>, or <i>how</i> information is provided.</para>
<para>Since the first aspect, related to the content, reappears as an indicator in a few other criteria, we have not included it here. In that sense, this criterion is limited to the <i>form</i> of the information provided to the user. As the table shows, the indicators we assigned to this criterion should check whether the information is provided in a user-friendly manner, which means that the information is provided in plain language that is easily understandable also for laypersons, and that it is as long as necessary and as short as possible. Regarding the length, we suggested that information should be provided in the form of a one-pager. Additionally, the information should be relevant to the context, easy for the user to locate, and provided in a structured, machine-readable format. Apart from the format, we also included another indicator which should check whether the information is provided free of charge. This is just one example of how the Criteria Catalogue operates. The same logic was followed for the other eleven criteria.</para>
<para>One of the main features of the Criteria Catalogue is that it adopts a post-compliance or beyond-compliance framework. This framework is very similar to the framework suggested by Luciano Floridi [6]. When analysing the digital, Floridi distinguishes between hard and soft ethics. Hard ethics is, as he explains, <i>&#8220;what we usually have in mind when discussing values, rights, duties and responsibilities &#8211; or, more broadly, what is morally right or wrong and what ought or ought not to be done&#8221;</i> [6]. Soft ethics, on the other hand, is post-compliance ethics, as it goes beyond the compliance level and hence beyond existing regulation. The aim of the Criteria Catalogue is to address this post-compliance level: compliance is a very important part of making sure that a business acts within the legal framework; nevertheless, for enhancing trustworthiness and strengthening trust, which is the main focus of the TRUESSEC.eu project, it might not always be sufficient. With this in mind, in the Criteria Catalogue we provide Core Areas, criteria and indicators as possible ways to address the post-compliance level.</para>
<para>The development of the Criteria Catalogue also paved the way for the drafting of the TRUESSEC.eu recommendations.</para>
</section>

<section class="lev1" id="sec10-4">
<title>10.4 Operationalization of the TRUESSEC.eu Core Areas of Trustworthiness</title>
<para>Using Core Areas of trustworthiness as a starting point, a potential set of ICT system properties and detailed operational requirements has been defined. They attempt to bring Social Science and Humanities requirements closer to the technical domain and analyse which of them have already been covered by the state of the art and which need more attention from stakeholders. ICT system properties are quality or behavioural characteristics of a system that, ideally, can be distinguished qualitatively or quantitatively by some assessment method. Several ICT system properties are already defined and studied in the technical realm (e.g. security and safety), so the knowledge base around them can be leveraged to analyse and identify the specific operational requirements that need to be met and assessed for a specific ICT product or service. <link linkend="F10-3">Figure <xref linkend="F10-3" remap="10.3"/></link> provides an overview of how we have mapped the Core Areas (and criteria) onto ICT system properties (details can be found in [7]).</para>
<para>Once the ICT system properties have been identified, they can become the basis for carrying out an operationalization process and deriving a set of specific operational requirements that can be realised and assessed.</para>
<para>As depicted in <link linkend="F10-4">Figure <xref linkend="F10-4" remap="10.4"/></link>, operational requirements are requirements on capabilities that should be guaranteed by an ICT product or service to satisfy one or more of the aforementioned ICT system properties. Moreover, they can be used as a precursor to the selection of more specific measures or countermeasures known as controls. Controls can be of a technical nature (i.e. functionality in hardware, software, and firmware), organizational nature (i.e. organizational procedures related to the system environment and the people using it), or physical nature (i.e. physical protective devices).</para>
<para>Finally, controls are instantiated using one or more specific techniques, which are found adequate to fulfil requirements of controls.</para>
<fig id="F10-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-3">Figure <xref linkend="F10-3" remap="10.3"/></link></label>
<caption><para>Core areas of trustworthiness and related ICT system properties.</para></caption>
<graphic xlink:href="graphics/ch010_fig003.jpg"/>
</fig>

<fig id="F10-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-4">Figure <xref linkend="F10-4" remap="10.4"/></link></label>
<caption><para>Guiding elements of the operationalization process.</para></caption>
<graphic xlink:href="graphics/ch010_fig004.jpg"/>
</fig>
<para>It is worth noting the difference between operational requirements, controls, and techniques. Both operational requirements and controls specify system or organizational capabilities; however, an operational requirement recognises that a trustworthy capability seldom derives from a single control. In other words, one capability, depending on the context, may require several controls. On the other hand, while controls express what measure should be implemented, techniques indicate how it is implemented. Finally, it is important to mention that controls and techniques are context dependent, i.e. they are suitable for the specific context where a system is intended to work. Table 10.3 shows an example of the guiding elements of the operationalization process. Controls and a survey of the technical solutions for trustworthiness can be found in [8].</para>
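The requirement&#8211;control&#8211;technique hierarchy just described can be sketched as a small data model; the class layout and the example requirement, controls and techniques are illustrative assumptions, not an excerpt from the project&#8217;s catalogue:

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of the operationalization hierarchy: one operational requirement
# may need several controls, and each control is realised by one or more
# concrete techniques. All names below are illustrative placeholders.
@dataclass
class Technique:                  # HOW a control is implemented
    name: str

@dataclass
class Control:                    # WHAT measure should be implemented
    name: str
    kind: str                     # "technical", "organizational" or "physical"
    techniques: List[Technique] = field(default_factory=list)

@dataclass
class OperationalRequirement:     # capability the ICT system must guarantee
    description: str
    controls: List[Control] = field(default_factory=list)

req = OperationalRequirement(
    "Personal data at rest shall be protected from unauthorized access",
    controls=[
        Control("Encrypt stored personal data", "technical",
                [Technique("AES-256 disk encryption")]),
        Control("Restrict physical access to servers", "physical",
                [Technique("Badge-controlled server room")]),
    ],
)
```

The example illustrates the point made above: a single capability (protecting stored data) requires several controls of different natures, each realised through a context-dependent technique.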
<para>The state of the practice already includes plenty of controls contained within standard frameworks that, given their broad use during audits and certifications, bring us closer to measurable (and assessable) factors and their corresponding evidence. Controls are widely used by the industry, and the state of the practice shows hundreds of standards and certification schemes (around 290 according to ECSO<footnote id="fn_4" label="4"> <para>European Cyber Security Certification, A Meta-Scheme Approach v1.0. December 2017. Available under: http://www.ecs-org.eu/documents/uploads/european-cyber-security-certification-a-meta-scheme-approach.pdf</para></footnote>). To mention a few examples of security control frameworks: ISO/IEC 15408 Common Criteria contains a general catalogue of security requirements for ICT products, ISO/IEC 27002 defines a set of organizational and technical controls intended for information security management, and the CSA Cloud Control Matrix (CCM) presents a catalogue of cloud-specific security controls. Privacy controls are defined, e.g., in the recent standard ISO/IEC 27018, which is intended for Cloud Service Providers (CSP) acting as Data Processors; in NIST 800-53 Rev. 4, which contains security and privacy controls for Information Systems and Organizations; and in the Generally Accepted Privacy Principles (GAPP). Safety requirements are defined, e.g., in IEC 61508-2, which is intended for electrical, electronic, and programmable safety-related systems. Similarly, in the literature we can find significant works that propose, e.g., taxonomies of requirements that can be leveraged to operationalize some of the ICT system properties defined in the section above (e.g. using a goal-oriented approach). For instance, the intervenability property can be refined into two guidelines: Data Subject Intervention and Authority Intervention, the first representing intervention actions for data subjects and the latter the actions for supervisory authorities to intervene in the processing of personal data. Each guideline can be refined into one or more operational requirements that act as success criteria, being empirically observable and objectively measurable. Following up on the intervenability property, the possible intervention actions by data subjects (e.g. do not consent, withdraw consent, review, challenge accuracy, challenge completeness, and request data copy) and the required ICT system capabilities (e.g. access, no processing, restricted processing, amendment, correction, erasure, data copy, and suspended data flow) may lead to the definition of specific intervention readiness operational requirements. For example, before collecting personal data, the system shall provide data subjects with the option to &#8216;consent&#8217; and &#8216;do not consent&#8217; to the [processing instance].</para>
<para><b>Table 10.3</b> Example of the guiding elements of the operationalization process</para>
<fig id="T10-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<graphic xlink:href="graphics/ch010_tab003.jpg"/>
</fig>
<para>Finally, while it should be recognised that the state of the art already provides plenty of controls contained in standard catalogues and frameworks for other, more mature properties (mainly in the cybersecurity realm), controls related to anti-discrimination or autonomy are scarce, and only recently have there been efforts and initiatives to address them (e.g. the EC released ethics guidelines for trustworthy AI on April 8, 2019<footnote id="fn_5" label="5"> <para>European Commission, &#8220;Ethics Guidelines for Trustworthy AI&#8221;, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 12 April 2019).</para></footnote>).</para>

</section>

<section class="lev1" id="sec10-5">
<title>10.5 Recommendations</title>
<para>The European and international landscape of labels/seals is heterogeneous, as there is great variation in their core functional models, the criteria they assess, the assurance level they offer, etc., and they also present a number of issues that need to be addressed [1]. For example, most labelling core functional models require a complex chain of trust involving several third parties throughout the labelling process (e.g. evaluation body, certification/declaration authority, and accreditation authority). This complexity often results in considerable time (and effort) being required for the preparation and assessment of an ICT product/service, as well as in affordability issues due to the high costs involved. These issues are exacerbated when an ICT product or service must pass through the same process several times (once for obtaining the label and again for certifying specific properties), involving additional cost and time. The industry has also highlighted these matters and called to &#8220;<i>minimize the burden on providers/manufacturers with respect to assessment, costs and time to market while ensuring an adequate level of trustworthiness</i>&#8221; [2].</para>
<para>While the TRUESSEC.eu labelling proposal advocates for addressing the complexity and affordability issues by reducing the intervention of third parties as far as possible, it also recognises the relevance of pursuing the verifiability and credibility of the labelling process. Providing the necessary evidence to support what is claimed about an ICT product or service improves verifiability. In turn, adding an independent public or private authority responsible for defining and articulating the labelling governance framework enhances credibility.</para>
<para>In this context, the TRUESSEC.eu proposal advocates a labelling solution that includes the following key elements: a self-assessment questionnaire, a labelling portal, a transparency report plus a visual label, and a governance framework ruled by an authority. <link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link> illustrates the labelling approach proposed by TRUESSEC.eu.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><i>The self-assessment questionnaire</i> is based on the indicators defined in the Criteria Catalogue. It provides a set of yes/no questions for a service provider to determine its compliance with the Criteria Catalogue. A service provider performs the self-assessment and attaches the evidence of an indicator&#8217;s fulfilment when the answer to a question is affirmative.</para></listitem>
</itemizedlist>
<fig id="F10-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-5">Figure <xref linkend="F10-5" remap="10.5"/></link></label>
<caption><para>TRUESSEC.eu labelling proposal.</para></caption>
<graphic xlink:href="graphics/ch010_fig005.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><i>The labelling portal</i> processes the questionnaire answers and issues a transparency report and a visual label according to the level of conformance achieved for each of the twelve criteria included in the Criteria Catalogue.</para></listitem>
<listitem><para><i>The transparency report and the visual label</i> deliver a two-layer trustworthiness declaration. The visual label provides the first layer, as it is easy to understand. The transparency report further details the assessment results in both text and machine-readable format, thus providing the second layer. The label and the report are both multi-dimensional (twelve criteria for trustworthiness) and multi-level (several levels of conformance for each criterion).</para></listitem>
<listitem><para><i>The governance framework</i> sets the fundamental rules the label must follow.</para></listitem>
</itemizedlist>
<para>The following sections further elaborate on these elements.</para>
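The portal logic outlined above can be sketched as a small conformance computation: per criterion, the yes/no indicator answers are aggregated into a level of conformance, and the per-criterion levels form the transparency report. The three-level scale and the ratio thresholds below are assumptions for illustration, not the project&#8217;s actual scheme:

```python
# Sketch of the labelling portal: per criterion, count how many yes/no
# indicator questions were answered 'yes' and derive a level of conformance.
# The 0-3 scale and the thresholds are illustrative assumptions.
def conformance_level(answers):
    """answers: list of booleans, one per indicator of a criterion."""
    if not answers:
        return 0
    ratio = sum(answers) / len(answers)
    if ratio == 1.0:
        return 3              # all indicators met
    if ratio >= 0.5:
        return 2              # majority of indicators met
    return 1                  # at least one indicator met

def transparency_report(questionnaire):
    """questionnaire: {criterion: [bool, ...]} -> {criterion: level}."""
    return {criterion: conformance_level(answers)
            for criterion, answers in questionnaire.items()}
```

A report built this way is both multi-dimensional (one entry per criterion) and multi-level (a conformance level per entry), mirroring the two-layer declaration described above, and can be serialised in a machine-readable format for the second layer.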

</section>

<section class="lev2" id="sec10-5-1">
<title>10.5.1 Questionnaire</title>
<para>The questionnaire contains a set of yes/no questions, each asking whether an <i>indicator</i> of the Criteria Catalogue is met. The answer to each question makes it possible to determine objectively which indicators are met and, ultimately, to what extent an ICT product or service meets a criterion for trustworthiness. Thus, we envisage a self-assessment and a yes/no questionnaire whereby providers/manufacturers reveal which <i>indicators</i> for trustworthiness they comply with, attaching the corresponding evidence when applicable. <i>Indicators</i> act as checkpoints, so they should be <b>empirically observable</b> (i.e. through evidence) and <b>objectively measurable</b> (i.e. a measurable element should be clearly defined in the indicators&#8217; description).</para>
<para>In our context, evidence refers to the information used to support the assessment and compliance of the <i>indicators</i>. Some evidence can refer to the implementation/realization of a given technique (e.g. the fifth indicator of the <i>&#8216;user-friendly consent&#8217; criterion: users are given the option to opt-out from data processing</i> can be supported by a centralized privacy control panel that includes opt-out options). Other evidence can describe organizational means to meet an indicator (e.g. those related to the <i>appropriate dispute resolution criterion</i>). Yet other evidence can be provided by third parties who have already performed an assessment of the subject-matter of the labelling, e.g. through a certification or audit process. In this way, we prevent an ICT product or service from going through the same process several times (once for obtaining the label and again for certifying specific properties). For example, a provider/manufacturer can link the certificate issued by a trusted third party as evidence of meeting the second indicator of the <i>&#8216;Cybersecurity&#8217; criterion: the ICT product or service is compliant with relevant [security] standards</i>.</para>
<para>An <i>indicator</i> should also include a measurable element easy to justify with evidence, calculate and understand. This measurable element should be clearly identified in the <i>indicator</i>&#8217;s description along with the corresponding measurement scale, which may be one of the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>Nominal scales</b> are applicable for mapping values (without an intrinsic order) to categories, and only the equality operation is allowed. The nominal <b>dichotomous</b> scale has only two categories and can be used to express whether a feature is present or not. In the Criteria Catalogue, several measurable elements are dichotomous in nature. For instance, the second <i>indicator</i> (ii) of the <i>&#8216;Cybersecurity&#8217; criterion</i> encloses a dichotomous measurable element with true or false as possible values. An evaluator will check whether the <i>ICT product or service is compliant with relevant [security] standards</i>. A provider/manufacturer can provide the certificate issued by a trusted third party as evidence. Similarly, this can be applied to the first <i>indicator</i> of the <i>&#8216;Privacy commitment&#8217; criterion</i>, which states that &#8220;<i>The ICT provider clearly states its commitment to the GDPR in the form of a declaration&#8221;</i>.</para></listitem>
<listitem><para><b>Ordinal scales</b> allow sorting or ranking two or more categories; both equality and inequality operations are allowed. This may be applicable to, e.g., the <i>&#8216;Enhanced control mechanisms&#8217; criterion</i>. The first <i>indicator</i> of this criterion states that <i>means for deletion of personal data should be provided</i>. In this respect, the <i>level of recovery</i> may be a measurable element intended to assess the difficulty (or ease) of recovering supposedly deleted data. For example, based on the guidelines and techniques presented in NIST SP 800-88, three values on the ordinal scale can be abstracted:</para></listitem>
<listitem><para>Level 1 (Clearing) &#8211; Deletion is done using overwriting software not only on the logical storage location but also on all addressable locations, so data cannot be easily recovered with basic utilities but might still be recoverable through laboratory attacks.</para></listitem>
<listitem><para>Level 2 (Purging) &#8211; Deletion is done using sophisticated sanitization techniques, so data recovery is not possible at all.</para></listitem>
<listitem><para>Level 3 (Destroying) &#8211; The media is destroyed (physical destruction).</para></listitem>
</itemizedlist>
<para>Therefore, this measurable element can have three different ordinal levels, and the assurance of a given <i>level of recovery</i> can be an <i>indicator</i> attached to a particular <i>level of conformance</i>. For example, ensuring Level 1 (Clearing) can be an <i>indicator</i> of the Level of Conformance I, and so on for the successive levels.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>Interval/ratio scales</b> have numerical values and allow obtaining the difference or distance between them, enabling comparison and ordering. This may be applicable to measurable elements that have continuous numerical values. For example, the <i>period for the disposal of personal data once they have been processed for the purpose consented to</i> may be another relevant measurable element of the criterion mentioned in the previous paragraph. This period may have a continuous and infinite range of values, e.g. 1 day, 30 days, 365 days, etc. These quantitative measurable elements can then be embedded in dichotomous (yes/no) <i>indicators</i> in terms of intervals or thresholds. As an example, an <i>indicator</i> belonging to an advanced <i>level of conformance</i> may state that personal data are automatically deleted as soon as they are no longer used (0 days), while an <i>indicator</i> of a <i>basic/entry level of conformance</i> may state that personal data are deleted within 15 to 30 days.</para></listitem>
</itemizedlist>
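<para>The three scale types above can all be reduced to yes/no <i>indicator</i> checks. The following minimal Python sketch illustrates this reduction; all class names and thresholds are our own illustrative assumptions, not part of the Criteria Catalogue.</para>

```python
from dataclasses import dataclass

@dataclass
class DichotomousIndicator:
    """Nominal dichotomous scale: a feature is present or not."""
    description: str
    def check(self, value: bool) -> bool:
        return value

@dataclass
class OrdinalIndicator:
    """Ordinal scale, e.g. 1 = Clearing, 2 = Purging, 3 = Destroying."""
    description: str
    required_level: int
    def check(self, observed_level: int) -> bool:
        return observed_level >= self.required_level

@dataclass
class IntervalIndicator:
    """Interval/ratio scale embedded in a yes/no indicator via a threshold."""
    description: str
    max_days: float
    def check(self, observed_days: float) -> bool:
        return self.max_days >= observed_days

# The disposal-period example: a ratio value becomes a dichotomous answer.
basic = IntervalIndicator("personal data deleted within 30 days", max_days=30)
advanced = IntervalIndicator("personal data deleted immediately", max_days=0)
print(basic.check(15), advanced.check(15))  # True False
```

<para>For instance, an evaluator checking the <i>level of recovery</i> would pass the observed ordinal value (1&#8211;3) to an <i>OrdinalIndicator</i> and obtain a yes/no answer suitable for the questionnaire.</para>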
</section>

<section class="lev2" id="sec10-5-2">
<title>10.5.2 Labelling Portal</title>
<para>Based on the answers submitted by a provider/manufacturer, the labelling portal issues a transparency report and a visual label conveying the <i>level of trustworthiness</i> of the ICT product or service assessed. The notion of <i>level of trustworthiness</i> must be understood neither as an absolute &#8220;yes/no trustworthy&#8221; nor as a single scalar &#8220;75.5% trustworthy&#8221;, but as the extent to which the twelve criteria for trustworthiness defined in the Criteria Catalogue are fulfilled. This &#8220;extent&#8221; corresponds to one of the <i>levels of conformance</i> defined in the labelling scheme. To illustrate the notion of <i>level of conformance</i>, and supported by the levelling structure defined in [9], the following levels have been defined: Basic/Entry (Level I), Enhanced (Level II), and Advanced (Level III).</para>
<para>We advocate for an assessment based on groups of <i>indicators</i>, where each group is associated with a qualitative <i>level of conformance</i>. As illustrated in <link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link>, the <i>indicators</i> of each criterion are divided into subsets, and each subset is assigned to a particular <i>level of conformance</i>. For an ICT product or service to reach a superior <i>level of conformance</i> in any of its criteria, it must necessarily comply with all the <i>indicators</i> of the previous levels. Therefore, a criterion at Level I complies with all the <i>indicators</i> belonging to Level I; Level II implies that a criterion complies with both Level I and Level II <i>indicators</i>; and Level III implies that a criterion complies with Level I, Level II, and Level III <i>indicators</i>.</para>
<para>An ICT product or service can have different <i>levels of conformance</i> for each of the twelve criteria for trustworthiness. <link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link>(b) further depicts two items (ICT products or services) in its last two columns. On the one hand, item A complies with Level II for criterion C1 (Information) and with Level I for criterion C2 (User-friendly consent). On the other hand, item B complies with Level I for criterion C1 and Level III for criterion C2. Note that item B does not conform to Level II in criterion C1 because it fails to meet <i>indicator</i> I1.4. This example also shows that if an ICT product or service fails to comply with even a single <i>indicator</i> of some level, it does not conform to that level.</para>
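<para>The cumulative rule just described (a level is reached only when all of its indicators and those of every lower level are met) can be sketched as follows; the indicator identifiers are illustrative.</para>

```python
def level_of_conformance(indicators_by_level: dict, met: set) -> int:
    """Highest level N such that every indicator of levels 1..N is met."""
    level = 0
    for lvl in sorted(indicators_by_level):
        if all(ind in met for ind in indicators_by_level[lvl]):
            level = lvl
        else:
            break  # one failed indicator blocks this and all higher levels
    return level

# A criterion C1 with hypothetical indicator subsets per level:
criterion_c1 = {1: ["I1.1", "I1.2"], 2: ["I1.3", "I1.4"], 3: ["I1.5"]}
item_a = {"I1.1", "I1.2", "I1.3", "I1.4"}          # meets Levels I and II
item_b = {"I1.1", "I1.2", "I1.3", "I1.5"}          # misses I1.4
print(level_of_conformance(criterion_c1, item_a))  # 2 (Level II)
print(level_of_conformance(criterion_c1, item_b))  # 1 (Level I only)
```

<para>Note that meeting a higher-level indicator (as item B does with I1.5) does not help while a lower-level indicator remains unmet.</para>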
<para>The decision to define different levels for the criteria is supported by existing legislation; for example, the European GDPR (General Data Protection Regulation) defines different degrees of sensitivity of personal information, each requiring different privacy controls for its protection. Therefore, different privacy protection controls could be mapped to different subsets of <i>indicators</i>, each assigned to a respective <i>level of conformance</i>. Similarly, the Cyber Security Certification Framework by the European Commission defines three Assurance Levels, each assigned to different subsets of requirements/criteria in terms of the risks involved.</para>
<fig id="F10-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F10-6">Figure <xref linkend="F10-6" remap="10.6"/></link></label>
<caption><para>(Illustrative) Levels of conformance.</para></caption>
<graphic xlink:href="graphics/ch010_fig006.jpg"/>
</fig>

</section>

<section class="lev2" id="sec10-5-3">
<title>10.5.3 Transparency Report and Visual Label</title>
<para>The trustworthiness of an ICT product or service is expressed along twelve dimensions (criteria), each with a level of conformance depending on the subset of indicators met. These are conveyed to the label consumer through a two-layer declaration:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The first layer shows a <b><i>visual label</i></b> that is easy for users to understand. It shows the extent to which each trustworthiness criterion is fulfilled (i.e. the criterion plus its level of conformance).</para></listitem>
<listitem><para>The second layer shows a <b><i>transparency report</i></b> in both text and machine-readable format. This should provide further details, i.e. criteria, <i>indicators</i> fulfilled, evidence provided (if applicable), and the individual <i>levels of conformance</i>. The machine-readable <i>transparency report</i> enables machine-to-machine integration, e.g. driven by the users&#8217; policy settings as configured in their user agents, such as a web browser. This may facilitate the automated comparison and assessment of the trustworthiness of products and services.</para></listitem>
</itemizedlist>
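<para>As an illustration of the second layer, a machine-readable <i>transparency report</i> could be serialized as JSON. The field names and values below are assumptions made for this sketch; the labelling scheme does not fix a concrete schema here.</para>

```python
import json

report = {
    "product": "ExampleCloudService",  # hypothetical ICT service
    "last_update": "2019-03-01",
    "criteria": [
        {"id": "C1", "name": "Information",
         "level_of_conformance": "II",
         "indicators_fulfilled": ["I1.1", "I1.2", "I1.3", "I1.4"],
         "evidence": ["certificate issued by a trusted third party"]},
        {"id": "C2", "name": "User-friendly consent",
         "level_of_conformance": "I",
         "indicators_fulfilled": ["I2.1", "I2.2"]},
    ],
}
print(json.dumps(report, indent=2))
```

<para>A user agent could parse such a report and compare the declared <i>levels of conformance</i> against the user&#8217;s own policy settings, automating the comparison of competing services.</para>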
<para>Both the transparency report and the visual label should highlight the date of the last update and should clearly specify which components of the product (modules/functionalities) or service (operations) are part of the labelling. In addition, in order to verify the authenticity of a label the following measures need to be considered:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The <i>labelling portal</i> (which issues the <i>transparency report</i> and the <i>visual label</i>) should publicly provide a list of issued labels, including the two-layer information described above.</para></listitem>
<listitem><para>The <i>visual label</i> should also embed a link forwarding the user to the <i>labelling portal</i>, which provides information about the corresponding ICT product or service.</para></listitem>
<listitem><para>The authenticity of the <i>labelling portal</i> should also be ensured.</para></listitem>
<listitem><para>The Criteria Catalogue should be easily accessible to the public, i.e. freely downloadable from a public website.</para></listitem>
</itemizedlist>
</section>

<section class="lev2" id="sec10-5-4">
<title>10.5.4 Governance and Authority</title>
<para>Having an independent third party manage the verification of criteria/indicators and the subsequent declaration increases the credibility and ultimately the degree of user confidence in a labelling scheme, since, e.g., fraudulent behaviour or user complaints are managed by these independent entities. This is supported by previous findings [1] which suggest that (i) the schemes operated by public bodies or foundations were found to be the most transparent, comprehensive, and trustworthy; and (ii) labelling schemes have poor longevity unless they are backed by public authorities or large operators. Thus, the TRUESSEC.eu labelling solution advocates for a governance framework ruled by a public or private authority that will be responsible for:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Creating the yes/no questionnaire.</para></listitem>
<listitem><para>Deciding the number of <i>levels of conformance</i>.</para></listitem>
<listitem><para>Assigning indicators to each level of conformance.</para></listitem>
<listitem><para>Setting a validity period for the <i>transparency report</i> and the <i>visual label</i>. It should be considered that the Cybersecurity Act states that certificates shall be issued for a maximum period of three years and may be renewed under the same conditions (Article 48). The same is stated by the GDPR (Article 42). However, the &#8216;lightweight nature&#8217; of the proposed labelling solution allows re-issuing the <i>transparency report</i> and <i>visual label</i> at shorter intervals, thus increasing the credibility of the approach. Therefore, we recommend a 12-month expiry date from the last update.</para></listitem>
<listitem><para>Defining the terms and conditions on the use of a label. This should include penalty rules in case of cheating or non-compliance as well as supervision mechanisms to ensure the validity of the label (e.g. random audits or complaint channels). In this sense and aligned with our &#8220;re-use and no-burden approach&#8221;:</para></listitem>
<listitem><para>We recommend that penalty and complaint approaches already defined in other closely related legislation, e.g. the GDPR, are considered and articulated with the labelling system proposed here. Some &#8216;Core Areas of Trustworthiness&#8217; fall within already regulated areas (e.g. privacy and security). Therefore, considering, e.g. that most of the <i>indicators</i> in the Criteria Catalogue are covered by the GDPR, its complaint and penalty regime (GDPR Chapter VIII: Remedies, liability and penalties) should be articulated with the labelling system. Thus, e.g. a GDPR breach will trigger a re-issue of the <i>transparency report</i> and <i>visual label</i> (in this case, even the basic/entry level would not be met).</para></listitem>
<listitem><para>Non-compliance with a criterion should not necessarily result in the revocation of the label, but in its update to reflect a new <i>level of conformance</i>. Revocation should only be performed when not even the basic/entry level is met.</para></listitem>
</itemizedlist>
</section>

<section class="lev1" id="sec10-6">
<title>10.6 Conclusions</title>
<para>The current world scenario shows that users feel unable to recognise the level of trustworthiness of applications and services, or even to identify which characteristics these should have or exhibit, depending on the confidentiality or sensitivity of the process the user intends to perform with them.</para>
<para>This makes users feel helpless facing the dilemma &#8220;to trust or not to trust&#8221;.</para>
<para>In this scenario, trust labels appear to be the solution: users could look at the label issuer and ask its experts to take a decision on their behalf, or at least to make some assessment of the level of trust the user could place in the application, against one or several of the criteria identified in TRUESSEC.eu.</para>
<para>This scenario is somewhat utopian for several reasons:</para>
<para>&#8211; There are no well-recognised trustworthiness labels, so users do not know of their existence.</para>
<para>&#8211; Users do not know which labels they should trust more, based on their specific requirements and expectations about the behaviour of a specific application.</para>
<para>&#8211; Users do not know which levels of trust, and in which areas, they should request from the application or service provider.</para>
<para>&#8211; It is unclear who evaluates the level of trust of applications, and against which criteria, so that users can be confident that the assessment itself is trustworthy.</para>
<para>In order to change this pessimistic scenario, the project&#8217;s first thoughts towards a roadmap for the implementation of a trustworthy, widely adopted trust label (or set of labels) take the following ideas into consideration:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Involvement of well-known and authoritative stakeholders, like ENISA, FRA or other European Union institutions, issuing and supporting recommendations to launch and promote the adoption of the trustworthiness label(s).</para></listitem>
<listitem><para>Encourage organisations active in cybersecurity awareness, like APWG.eu and most of the EU Member States&#8217; N/G CERTs, to disseminate these labels and make citizens aware of the existence and advantages of using such trustworthy labels for their own cyber-safety.</para></listitem>
<listitem><para>Define a methodology to allow application developers and service providers to self-assess the trustworthiness of their applications against some or all of the criteria identified in TRUESSEC.eu. This approach is aligned with the policy adopted by ENISA in the PET assessment tool. Adoption of this strategy by application developers and service providers will be proportional to the effective demand expressed by users in the market.</para></listitem>
<listitem><para>National and/or European authorities should appoint a supervisory authority that could validate the accuracy of the self-assessment statements made by developers and service providers, in order to provide the required trustworthiness to the whole assessment scheme. Optionally, the assessment criteria could be upgraded to a standard and be evaluated by an independent laboratory or trusted third party, which would provide an additional level of trust in the label for citizens.</para></listitem>
</orderedlist>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>The research leading to these results has received funding from the European Union&#8217;s Horizon 2020 research and innovation programme under grant agreement No. 731711. The first author gratefully acknowledges his sponsor, Escuela Polit&#233;cnica Nacional.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] V. Gibello, &#8220;TRUESSEC Deliverable D4.1: Legal Analysis,&#8221; 2017. [Online]. Available: https://truessec.eu/content/deliverable-41-legal-analysis.</para>
<para>[2] V. Gibello, &#8220;TRUESSEC Deliverable D7.1: Evaluation of existing trustworthiness seals and labels,&#8221; 2018. [Online]. Available: https://truessec.eu/content/deliverable-71-evaluation-exiting-trustworthiness-seals-and-labels.</para>
<para>[3] H. Stelzer, E. Staudegger, H. Veljanova, V. Beimrohr, and A. Haselbacher, &#8220;TRUESSEC Deliverable D4.3: First draft Criteria Catalogue and regulatory recommendations,&#8221; 2018. [Online]. Available: https://truessec.eu/content/d43-first-draft-criteria-catalogue-and-regulatory-recommendations.</para>
<para>[4] D. S. Guam&#225;n, J. M. Del Alamo, H. Veljanova, S. Reichmann, and A. Haselbacher, &#8220;Value-based Core Areas of Trustworthiness in Online Services,&#8221; in IFIP International Conference on Trust Management, 2019, Springer, Cham.</para>
<para>[5] D. S. Guam&#225;n, J. M. Del Alamo, H. Veljanova, A. Haselbacher, and J. C. Caiza, &#8220;Ranking Online Services by the Core Areas of Trustworthiness&#8221;, RISTI-Revista Iberica de Sistemas e Tecnologias de Informacao, 2019.</para>
<para>[6] L. Floridi, &#8220;Soft Ethics: Its Application to the General Data Protection Regulation and Its Dual Advantage,&#8221; Philos. Technol., vol. 31, no. 2, pp. 163&#8211;167, 2018.</para>
<para>[7] D. S. Guaman and J. M. del Alamo, &#8220;TRUESSEC Deliverable D5.2: Technical gap analysis.&#8221; [Online]. Available: https://truessec.eu/content/deliverable-52-technical-gap-analysis.</para>
<para>[8] D. S. Guaman, J. Del Alamo, S. Mart&#237;n, and J. C. Yelmo, &#8220;TRUESSEC Deliverable D5.1: Technology situation analysis: Current practices and solutions,&#8221; 2017. [Online]. Available: https://truessec.eu/content/deliverable-51-technology-situation-analysis-current-practices-and-solutions.</para>
<para>[9] European Cyber Security Organisation, &#8220;European Cyber Security Certification: A meta-scheme approach,&#8221; WG1 &#8211; Standardisation, certification, labelling and supply chain management, 2017. [Online]. Available: http://www.ecs-org.eu/documents/uploads/european-cyber-security-certification-a-meta-scheme-approach.pdf. [Accessed: 05 April 2018].</para>
</section>
</chapter>

<chapter class="chapter" id="ch011" label="11" xreflabel="11">
<title>An Overview on ARIES: Reliable European Identity Ecosystem</title>
<para><b>Jorge Bernal Bernabe<sup>1</sup>, Rafael Torres<sup>1</sup>, David Martin<sup>2</sup>, Alberto Crespo<sup>3</sup>, Antonio Skarmeta<sup>1</sup>, Dave Fortune<sup>4</sup>, Juliet Lodge<sup>4</sup>, Tiago Oliveira<sup>5</sup>, Marlos Silva<sup>5</sup>, Stuart Martin<sup>6</sup>, Julian Valero<sup>1</sup> and Ignacio Alamillo<sup>1</sup></b></para>
<para><sup>1</sup>University of Murcia, Murcia, Spain</para>
<para><sup>2</sup>GEMALTO, Czech Republic</para>
<para><sup>3</sup>Atos Research and Innovation, Atos, Calle Albarracin 25, Madrid, Spain</para>
<para><sup>4</sup>Saher Ltd., United Kingdom</para>
<para><sup>5</sup>SONAE, Portugal</para>
<para><sup>6</sup>Office of the Police and Crime Commissioner for West Yorkshire, (POOC), West Yorkshire, United Kingdom</para>
<para>E-mail: jorgebernal@um.es; rtorres@um.es; martin.david@gemalto.com; alberto.crespo@atos.net; skarmeta@um.es; dave@saher-uk.com; juliet@saher-uk.com; tioliveira@sonae.pt; mhsilva@sonae.pt; stuart.martin@westyorkshire.pnn.police.uk; julivale@um.es; ignacio.alamillod@um.es</para>
<para>Identity theft, fraud and other related cyber-crimes are continually evolving, causing significant damage and problems for European citizens in both virtual and physical spaces. To meet this challenge, ARIES has devised and implemented a reliable identity management framework endowed with new processes, biometric features, services and security modules that strengthen the usage of secure identity credentials, thereby ensuring a privacy-respecting identity management solution for both physical and online processes. The framework is intended to reduce levels of identity-related crime by tackling emerging patterns in identity fraud from a legal, ethical, socio-economic, technological and organizational perspective. This chapter summarizes the main goals, approach taken, achievements and main research challenges of the H2020 ARIES project.</para>

<section class="lev1" id="sec11-1">
<title>11.1 Introduction</title>
<para>In an increasingly digital world, the protection of personal data is crucial; in particular, individual identities are vulnerable in a scenario where European stakeholders interact globally. The lack of trust is growing, driven by the current absence or deficiency of solutions, including consistently applied identification and authentication processes for trusted enrolments, particularly the use of online credentials with low levels of authentication assurance. Moreover, there is no common approach in Europe (from the point of view of legislation, cross-border cooperation and policies) to address identity-related crimes. This situation costs countries and citizens billions of Euros in fraud and theft.</para>
<para>In this scenario, the ReliAble euRopean Identity EcoSystem (ARIES) H2020 research project aims to provide a stronger, more trusted, user-friendly and efficient authentication process while maintaining full respect for subjects&#8217; personal data protection and privacy.</para>
<para>Thanks to this ecosystem, citizens will be able to generate a digital identity linked to the physical one (eID/ePassport) using biometrics while, at the same time, storing enrolment information in a secure vault only accessible by law enforcement authorities in case of cybersecurity incidents. This process will make it possible to link proofs of identity, based on the combination of biometric traits and the citizen&#8217;s digital identity, with the administrative processes involved in the issuance of documents such as birth or civil certificates.</para>
<para>Users will also be able to derive additional digital identities from the ones linked with their eID or ePassports, with different levels of assurance and degrees of privacy about their attributes. The new derived digital identities may be used in administrative exchanges where required by governments according to the eIDAS regulation [1] and be stored in software or hardware secure environments in their mobiles or smart devices.</para>
<para>The rest of this chapter is structured as follows. Section 11.2 depicts the ARIES ecosystem; Section 11.3 is devoted to the main innovative processes in ARIES; Section 11.4 describes the legal and ethical approach considered; Section 11.5 recaps the main cybersecurity and privacy research challenges; finally, Section 11.6 concludes this chapter.</para>
</section>

<section class="lev1" id="sec11-2">
<title>11.2 The Aries Ecosystem</title>
<para>The project goal is to provide new technologies, processes and security features that ensure a higher level of quality in security aspects such as credential management for privacy-respecting solutions, and the reduction of identity fraud, theft or wrong-identity problems which can be associated with crimes. The general ARIES ecosystem is depicted in <link linkend="F11-1">Figure <xref linkend="F11-1" remap="11.1"/></link>.</para>
<para>Authentication processes will be secured through the use of smart devices, allowing the use of all required biometric (especially face) and electronic (using NFC) data. This process should ensure a high level of quality for biometrics acquisition, while assuring data integrity and delivering the required attributes of the derived identities to the adequate relying party (service provider). Such features will be achieved by functionality running locally (on the smart device) or centrally (back-end). Moreover, digital identities will be generated with privacy-preserving technologies, allowing citizens to prove only that they possess certain attributes, e.g. being over 18 years old, without exposing the rest of their data. Given that different levels of assurance are possible, a biometric mechanism could also be used as a proof of digital identity possession where appropriate.</para>
<para>A user manages multiple identities and credentials which are issued by Identity Providers (IdPs) and presented to Service Providers (SPs) to access the services they offer. The ARIES approach considers a multi-domain interaction for eID management in order to achieve a distributed but unified eID ecosystem. Each domain usually contains one or more IdPs and one or more SPs. Usually, an SP redirects user requests to the IdP within its own domain, but there are some exceptions to be considered: an SP can directly authenticate the identity of the user (e.g. validating a certificate) and could also redirect to an IdP of another domain in which it trusts, including a mobile operator, a bank or a government for Mobile eID authentication. In addition, IdPs can be interconnected relying on federated interoperability, thereby allowing delegation of authentication (e.g. using STORK) and also attribute aggregation (e.g. to create a derived credential which includes both governmental and academic information). User consent will be obtained prior to transferring any personal information. Interaction with legacy non-ARIES IdPs can also be achieved by contacting those IdPs via standard protocols such as SAML [2], OAuth2 [3], etc.</para>
<fig id="F11-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-1">Figure <xref linkend="F11-1" remap="11.1"/></link></label>
<caption><para>ARIES ecosystem overview.</para></caption>
<graphic xlink:href="graphics/ch011_fig001.jpg"/>
</fig>
<para>The users interact with the system through several devices such as mobile phones or smart wearables. These devices will require a secure element in order to securely protect digital identities with biometric features. Alternative, although less secure, storage and execution environments might be foreseen for larger adoption of the ecosystem, but with limited capabilities to manage the resulting risk. A secure electronic wallet will be provided to users for them to securely handle and manage their digital identities and their related data.</para>
<para>New derived credentials can be requested by users from an ARIES IdP after an authentication process using their eID. These credentials may contain different identity attributes and/or pseudonyms, according to the user&#8217;s needs and the required level of privacy and security. The issuance process implies that the new credential must be issued by a trusted IdP and logged to assure traceability for law enforcement purposes such as real-identity identification. This could be achieved by an encrypted and signed logging mechanism. The logged information should also be kept secure and accessed only in legally regulated cases.</para>
<para>The derived credentials will originate from previous strong ones, such as biometric data and eID documents. The process is based on a mobile token enrolment server with a derivation module. Optionally, users could also derive their own identity and present cryptographic proofs to an SP. In this case, identity approaches based on Zero-Knowledge Proofs, like the IBM Idemix [4] or ABC4Trust [5] solutions, can be used. For this, the obtained credentials should be prepared with the cryptographic information needed to derive new identities and provide proofs when requested by an SP.</para>
<para>Those identities can be derived and issued by different entities, with each credential having an associated Level of Assurance (LoA). This serves as a measure of the security mechanism used by the credential issuer to validate the user identity. ARIES aims to keep the LoA, or to avoid significant differences, when using derived credentials and, similarly, will try to ensure that the Level of Trust (LoT) among different entities is also maintained after adopting derived credentials in the ecosystem.</para>
<para>Accessing a service provided by an SP will impose some requirements on the credential to be presented, such as including certain attributes about the user identity and other trust requirements. The user can choose the mechanism and the credential to present according to his preferences and the information required by the SP. This includes the usage of derived credentials or pseudonyms with less exposed information; a proof of identity in which no credential is actually sent to the SP, but rather a proof that the user owns some identity or attribute; or a Mobile ID credential stored in a secure element, which makes use of the Trusted Execution Environment for authentication and can optionally involve the mobile operator as a party in the circle of trust [6].</para>
<para>Privacy-by-design principles are essential from the data protection perspective, especially when identifying which biometric data are going to be used and how. Indeed, the requirements of proportionality have to be analysed, bearing in mind the demands of technical security measures, determining what is truly essential to avoid identity thefts based on access to biometric information (e.g. a photo in the case of the face or a latent print in the case of fingerprints).</para>
<para>Likewise, the possibility of using several derived identity credentials demands a concrete assessment from the perspective of data protection. Therefore, it will be necessary to build identification services prioritizing those technical and organizational solutions that minimize access to personal data to the absolute minimum. To accomplish these objectives, ARIES devises means to comply with the minimal disclosure of information principle. In this sense, the principle of proportionality will play a key role in facing this challenge, since service providers will need to justify in each case the personal data actually required for authentication or authorization.</para>
<para>Furthermore, the identity ecosystem will provide unlinkability at the relying-party level through polymorphic user identifiers (when compatible with relying parties&#8217; authentication policies). Identifiers will be different and random for each authentication or for specified periods of time, so they will disclose no information. Unlinkability at the ARIES IdP will also be ensured: the ecosystem will hide the accessed service from the enrolment and authentication services. Unobservability will be ensured by the system architecture as well, since the Identity Providers will have no information about which SP the user wants to log into.</para>
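<para>One simple way to realize polymorphic identifiers is to derive a pseudorandom identifier from the user, the relying party, and a time period. The HMAC construction below is our own assumption for illustration; ARIES does not prescribe a specific primitive.</para>

```python
import hashlib
import hmac

IDP_SECRET = b"idp-master-secret"  # hypothetical key held only by the IdP

def polymorphic_id(user_id: str, relying_party: str, period: str) -> str:
    """Different, unlinkable identifier per (user, relying party, period)."""
    msg = f"{user_id}|{relying_party}|{period}".encode()
    return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()[:16]

shop_id = polymorphic_id("alice", "shop.example", "2019-Q1")
bank_id = polymorphic_id("alice", "bank.example", "2019-Q1")
print(shop_id != bank_id)  # True: the two SPs cannot link the same user
```

<para>Because the secret never leaves the IdP, two relying parties (or the same relying party across periods) see unrelated identifiers and cannot correlate the user&#8217;s activity.</para>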
</section>

<section class="lev1" id="sec11-3">
<title>11.3 Main Innovative Processes in Aries</title>
</section>

<section class="lev2" id="sec11-3-1">
<title>11.3.1 Fraud Prevention and Cyber-crime Investigation</title>
<para>The topics of fraud and crime prevention and investigation were among the main project goals and were addressed from the very beginning of the project by involving law enforcement personnel in the process of requirement definition. Their inputs were based on the most frequently occurring current threats and on their experience from crime investigation; based on these, an assessment of state-of-the-art authentication architectures was performed at the beginning of the design phase.</para>
<para>The assessment identified three main improvements the project may provide in the field of crime prevention and investigation:</para>
<para>&#8211; Strong authentication accessible to a large part of the population, to replace legacy authentication types (such as passwords or SMS one-time codes).</para>
<para>&#8211; Biometric authentication as an additional obstacle for criminals.</para>
<para>&#8211; ID Proofing with document reading and biometric verification as a strong identity verification means, to ensure that newly issued privacy-preserving partial identities are based on reliable information.</para>
<para>It is obvious that the strength of the whole solution depends on the algorithms used for biometric verification (both live capture vs. image data from the electronic document, and live capture vs. the previously enrolled baseline template). This was in line with the project plan, as the improvement of both enrolment and verification of face biometrics was planned as a separate task.</para>
<para>If authentication is broken by some means and an investigation takes place, it usually requires as much information as can be obtained (IP address, device fingerprints, all transaction data), which is in contradiction with the project goal of providing a privacy-friendly solution. It was decided that privacy and users&#8217; control over their own data are more important than inputs for investigation, and a limitation was introduced. User data stored inside the system are encrypted and kept in an appliance called the Secure Vault. The appliance enforces strong authentication and authorization based on ARIES authentication, so only the user himself can approve access to his data. This limits investigations to cases where the user&#8217;s identity was stolen by forgery but he still has access to his mobile phone and is thus able to grant access to law enforcement authorities.</para>
<para>The data collected and stored by all server-side components of the ARIES solution consist of transaction information and anonymized identity information, such as links between the cryptographic and biometric identity parts.</para>
<para>It was decided to introduce a rule that biometric information may be persisted only on the user&#8217;s handset, in order to give him control over this most sensitive information.</para>
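<para>The access rule for the Secure Vault can be sketched as a toy model. All names and the HMAC-based approval scheme below are illustrative assumptions, not the project&#8217;s actual implementation: the server keeps only opaque, already encrypted blobs and releases one only when the request carries an approval tag that could have been produced solely by the key held on the user&#8217;s handset.</para>

```python
import hashlib
import hmac
import secrets

class SecureVault:
    """Toy sketch of the Secure Vault access policy: store opaque
    (already encrypted) blobs; release one only on user approval."""

    def __init__(self):
        self._blobs = {}

    def store(self, user_id, blob, user_verify_key):
        # The vault learns a verification key, never the data in clear.
        self._blobs[user_id] = (blob, user_verify_key)

    def retrieve(self, user_id, nonce, approval_tag):
        blob, key = self._blobs[user_id]
        expected = hmac.new(key, nonce, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, approval_tag):
            raise PermissionError("access not approved by the user")
        return blob

# The handset holds the key and signs a per-request nonce to approve access.
user_key = secrets.token_bytes(32)
vault = SecureVault()
vault.store("alice", b"ciphertext-of-identity-data", user_key)
nonce = secrets.token_bytes(16)
tag = hmac.new(user_key, nonce, hashlib.sha256).digest()
data = vault.retrieve("alice", nonce, tag)
```

<para>A real deployment would use asymmetric signatures from the handset&#8217;s secure element rather than a shared HMAC key, and would bind the nonce to the requesting party to prevent replay.</para>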
</section>

<section class="lev2" id="sec11-3-2">
<title>11.3.2 Biometric Enrolment and Authentication</title>
<para>The architecture takes into account that biometric authentication is evolving: new methods and implementations are introduced as fast as old ones are broken by new approaches such as deep learning. The solution is therefore a set of loosely coupled server components that allows simple replacement of each component with little impact on the others. To integrate with ARIES, each vendor must provide a server-side API and an App SDK; the project provides an enveloping App with UI flow control and a server application that controls the issuance and authentication flows and ensures that all steps happen in a single session. The choice of which biometric feature to enrol is based on the user&#8217;s preference and his handset&#8217;s capabilities.</para>
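<para>The vendor contract described above can be sketched as a minimal interface. The method names and signatures are hypothetical illustrations, not the actual ARIES API:</para>

```python
import abc

class BiometricVendorAPI(abc.ABC):
    """Hypothetical server-side contract a biometric vendor implements so
    its component can be swapped without touching the rest of the system."""

    @abc.abstractmethod
    def start_enrolment(self, session_id: str) -> dict:
        """Open an enrolment step bound to the single ARIES session."""

    @abc.abstractmethod
    def finish_enrolment(self, session_id: str, sample: bytes) -> bool:
        """Store the baseline template; True on success."""

    @abc.abstractmethod
    def verify(self, session_id: str, sample: bytes) -> float:
        """Return a matching score for a live-captured sample."""

class StubFaceVendor(BiometricVendorAPI):
    """Trivial stand-in implementation showing only the plug-in shape."""

    def start_enrolment(self, session_id):
        return {"session": session_id, "feature": "face"}

    def finish_enrolment(self, session_id, sample):
        return len(sample) > 0

    def verify(self, session_id, sample):
        return 0.97  # a real vendor returns an actual matching score
```

<para>Because the orchestrating server application only ever talks to such an interface, replacing a broken algorithm means shipping a new implementation rather than reworking the issuance and authentication flows.</para>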
<para>The implemented OpenID Connect authentication flow allows selection of any available biometric authentication type, so the requesting Service Provider may choose the optimal balance between level of assurance and user experience, which may be worse for some biometric features.</para>
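<para>Such a selection could, for instance, ride on the standard OpenID Connect <code>acr_values</code> request parameter. The endpoint and the biometric ACR identifiers below are illustrative assumptions, not values defined by the project:</para>

```python
from urllib.parse import urlencode

def build_auth_request(client_id: str, redirect_uri: str, acr: str) -> str:
    """Build an OpenID Connect authorization request asking the IdP for
    specific authentication context classes (here: biometric methods)."""
    params = {
        "response_type": "code",
        "scope": "openid",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "acr_values": acr,  # space-separated, in order of SP preference
    }
    return "https://idp.example.org/authorize?" + urlencode(params)

# A Service Provider preferring face recognition but accepting voice:
url = build_auth_request("shop-sp", "https://shop.example.org/cb", "face voice")
```

<para>The identity provider then reports in the ID token which authentication context was actually satisfied, letting the Service Provider confirm the achieved level of assurance.</para>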
<para>The project considered two options for biometric authentication: use of the feature obtained during the ID Proofing process (at the moment only the face image is accessible to commercial applications) and use of another feature as an additional authentication factor without a link to the original electronic document information.</para>
<para>The face recognition done during ID Proofing relies strongly on the quality of the image from the electronic document. During the project pilot phase, several passports with poor-quality image data were encountered, which prevented enrolment of those users. Liveness detection is mandatory in ID Proofing because an attacker who has stolen the document also holds the image data itself, so liveness detection is the only remaining protection.</para>
<para>The pilot implementation used face recognition combined with ID Proofing, and the results were satisfactory:</para>
<para>&#8211; Enrolment was successful for the majority of users and completed without issues on the first try.</para>
<para>&#8211; Authentication with liveness detection based on head movements (vertical and horizontal), using an overlay image to tell the user what to do, was smooth and well accepted by the pilot users. The average face verification time was below 3 seconds.</para>
<para>Voice authentication was implemented to prove that the solution is able to quickly integrate an existing biometric authentication service. An existing server-side service proposed by one of the partners was selected and integrated in two steps: a scaffolding REST service was created to align the API style and session management, and a very simple App SDK was implemented and added to the existing ARIES App.</para>
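<para>The scaffolding step of that two-step integration can be sketched as follows; the vendor interface, session states and toy audio check are hypothetical. A thin layer imposes the ARIES single-session discipline on an existing vendor service:</para>

```python
class VoiceVendorClient:
    """Stand-in for the partner's existing server-side voice service."""

    def check(self, audio: bytes) -> bool:
        # A real service would run speaker verification; this toy check
        # merely accepts WAV-looking input.
        return audio.startswith(b"RIFF")

class VoiceScaffold:
    """Scaffolding layer aligning the vendor API with the ARIES session
    model: every authentication lives inside exactly one session."""

    def __init__(self, vendor):
        self.vendor = vendor
        self.sessions = {}

    def begin(self, session_id: str):
        self.sessions[session_id] = "awaiting-sample"

    def complete(self, session_id: str, audio: bytes) -> bool:
        if self.sessions.get(session_id) != "awaiting-sample":
            raise KeyError("unknown or already finished session")
        self.sessions[session_id] = "done"
        return self.vendor.check(audio)

scaffold = VoiceScaffold(VoiceVendorClient())
scaffold.begin("sess-42")
ok = scaffold.complete("sess-42", b"RIFF....WAVE")
```

<para>Keeping the session bookkeeping in the scaffold, rather than in the vendor service, is what allows an arbitrary existing service to be slotted in with only a thin App SDK on the client side.</para>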
</section>

<section class="lev2" id="sec11-3-3">
<title>11.3.3 Privacy-by-Design Features (Anonymous Credential Systems)</title>
<para>ARIES follows a privacy-by-design approach to protect users&#8217; privacy in their digital transactions, either online or offline (in-situ, face-to-face interactions). The architecture has been designed to incorporate and interface with Anonymous Credential Systems (ACS), namely Idemix [4]. An ACS allows users to set up and demonstrate zero-knowledge cryptographic proofs, thereby proving certain predicates about personal attributes in a privacy-preserving way, following a selective-disclosure approach.</para>
<para>The ARIES Mobile App allows the user to obtain ACS credentials once he has been identified and enrolled in the ARIES IdP. Those credentials are generated from the attributes the user demonstrated to the IdP during ID-proofing, i.e., they contain at least the attributes included in the breeder document (ePassport) used for authentication and enrolment. The credentials are kept securely protected inside the mobile device (a mobile wallet).</para>
<para>Once the user has performed the issuance protocol, he can create different proofs of possession to satisfy the attributes required by the Service Provider to access a service. This presentation protocol is based on ZKPs relying on the CL signature scheme [7]. It ensures the minimal-disclosure principle, allows demonstrating possession of an attribute without disclosing the value itself, and permits proving complex predicates about attributes, e.g. that the date of birth is before a certain year (to verify a minimum age). Anonymous credential systems have also been integrated, and successfully evaluated, for IoT scenarios [8], even constrained ones [9], within the scope of the ARIES project.</para>
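<para>The following conceptual sketch illustrates only the selective-disclosure <emphasis>interface</emphasis> and is not zero-knowledge: a real ACS such as Idemix proves the predicate cryptographically without ever opening the hidden attribute, whereas here hash commitments merely fix the attribute values and a predicate is evaluated by a trusted checker. All names are illustrative.</para>

```python
import hashlib
import secrets

def commit(value: str, nonce: bytes) -> str:
    """Binding commitment to an attribute value (illustrative only)."""
    return hashlib.sha256(nonce + value.encode()).hexdigest()

# Issuance: all attributes are committed inside the credential.
attributes = {"name": "Alice", "birth_year": "1990", "nationality": "ES"}
nonces = {k: secrets.token_bytes(16) for k in attributes}
credential = {k: commit(v, nonces[k]) for k, v in attributes.items()}

# Presentation: reveal one attribute, assert a predicate about another
# (a real ZKP would convince the verifier without opening birth_year).
def is_over(birth_year: str, age: int, current_year: int) -> bool:
    return current_year - int(birth_year) >= age

revealed = {"nationality": attributes["nationality"]}
predicate_holds = is_over(attributes["birth_year"], 18, 2019)
```

<para>The point of the sketch is the shape of the exchange: the verifier receives one opened attribute plus a yes/no answer about another, never the full attribute set.</para>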
<para>In ARIES these privacy-preserving capabilities have been showcased in the Airport scenario, in which the user wants to demonstrate that he is over 18 in order to buy certain products (e.g. alcohol) in a duty-free shop inside the airport, and to prove that he has a valid boarding pass (required to buy goods) without revealing any personal data, proving only that he is travelling to a valid destination within a valid time frame.</para>
</section>

<section class="lev1" id="sec11-4">
<title>11.4 The ARIES Ethical and Legal Approach</title>
</section>

<section class="lev2" id="sec11-4-1">
<title>11.4.1 Ethical Impact Assessment</title>
<para>ARIES focussed on minimising and averting unintended misappropriation and disproportionate use of information for unknown and diverse purposes to which citizens have not explicitly consented. It did so in ways that bring privacy and security into balance while addressing the socio-ethical consequences of deploying the ARIES solution to create a reliable, trustworthy eID ecosystem.</para>
<para>ARIES is also founded on the EU&#8217;s ambitious commitment to realising a Digital Single Market, which relies on creating trustworthy eIDs to augment the efficiency, convenience and trustworthiness of e-life for citizens. Digital-by-default is enabled by the once-only principle (to avoid entering the same data several times), interoperability by default, and inclusive and accessible practices.</para>
</section>

<section class="lev2" id="sec11-4-2">
<title>11.4.2 Technological Innovation Informed by Ethical Awareness</title>
<para>Technological innovation is not neutral in conception, in development or in its application to society. Algorithms are not neutral. This is the starting point for reflecting on the ethical tests that might be applied as a new application is developed, or as an existing one is extended and used for a different, but possibly complementary, purpose to the one for which it was first developed. Just because a development or application meets current legal privacy requirements, it cannot be assumed that it automatically complies with the ethical standards that society values. This means that there must be clarity over the purpose of a new development and its intended use in real time and in the real world. The legal tests of ensuring compliance with the law provided by privacy impact assessments are useful check points. By themselves, however, they are inadequate: legal compliance is necessary but not sufficient to ensure ethical standards are met.</para>
</section>

<section class="lev2" id="sec11-4-3">
<title>11.4.3 The Socio-ethical Challenge</title>
<para>The key challenge for ARIES was to develop something that was universally acceptable, complied with legal and ethical requirements, protected security and privacy, and would help form the basis of a reliable and trustworthy ecosystem. Accordingly, ARIES set about developing a neutral application that sought to facilitate convenient, privacy-respecting, secure and speedy transactions whilst minimising the amount of personal data that an individual citizen might be required to disclose (by choice or design) in order to access a service.</para>
</section>

<section class="lev2" id="sec11-4-4">
<title>11.4.4 ARIES Starting Point: What is Meant by Ethics?</title>
<para>The ARIES ecosystem is designed with both privacy and ethics in mind. ARIES extracted core principles of ethical practice from philosophy and medicine which have addressed the impact of technical and scientific advance on what it is to exist as a human being. There is no universal acceptance of what is ethically appropriate or acceptable. Consequently, designing something that is &#8216;ethical by design&#8217; implies designing something that minimizes objections to it from different societies and is an essential building-block of an ethical e-ID eco-system.</para>
<para>The do no harm principle provides the best initial ethical test to be applied to the design of a new algorithm or app. It is useful for a digital society accustomed to automated decisions driven by bots rather than immediate, live human decision-making on a human-to-human basis. However, this immediately raises additional ethical issues, summarized by the principles of proportionality, purpose specification, and data and purpose minimization. Dignity and autonomy are core elements of the concept of bodily integrity. To those are added notions of privacy (in private and public transactions).</para>
<para>In short, the use made of something, like an eID, occasions many ancillary questions about the person associated with it. This is problematic and has preoccupied legislators and citizens anxious to ensure that they do not inadvertently reveal and allow to be sold for commercial gain, aspects of themselves (i.e. data and associated information that they generate). Further ethical issues arise. Therefore, ARIES seeks to develop a solution which bakes in ethics and is as neutral as possible in its impact on societal values.</para>
</section>

<section class="lev2" id="sec11-4-5">
<title>11.4.5 Embedding the Dominant Ethical Principle: Do No Harm</title>
<para>The key ethical principle to which all other ethical principles are linked and subordinate is the pre-cautionary principle. It highlights the obligation to &#8216;do no harm&#8217;. Closely associated and derived from it are principles impelling proportionality, self-determination and consent, autonomy, dignity, and necessity (data minimisation). Refining accepted medical ethics for informational technology practices suggests that ethical practice and ethical technological applications need to be aware of, and in the case of eIDs, sensitive to how they will mitigate, avert or accommodate risks (or potential harms).</para>
<para>The precautionary principle of do no harm is about more than determining legal liability and redress for harm. In ARIES, it informed design and practice from the start. This differs from the traditional practice of using legal remedies for harms, and the focus in the USA, for example, of litigation to provide financial recompense for harm. In ARIES, attempts were made to widen the understanding of what &#8216;harms&#8217; might be induced by ICT innovations in line with the EU approach to baking in the precautionary principle of &#8216;do no harm&#8217;. In the EU, this is expressed in guidelines and in legislation which translates this principle into duty-of-care provisions, as in the case of the GDPR and the complementary ePrivacy Directive (soon to be Regulation). This duty-of-care has been marked in respect of privacy protection in both the GDPR and ePrivacy deliberations: both require importers and retailers of IT to distribute only privacy-by-design compliant technology. The temptation to assume that PbD compliance automatically implies respect for ethical principles must be avoided.</para>
<para>For the ARIES ecosystem, ethics is seen in relation to when, how and by whom (or by what algorithm) decisions are made, and for what purpose. This means that there are several points at which ethical reflection must occur in order to guard against baked-in bias and ensure ethical principles are respected in all elements of the design, from inception to roll-out to scalable use. Such ethical checks occur at the following points: design of the medium in which personal information is to be held; technical rules governing handling of and access to that information, including via a human or bot; technical vulnerability of the integrity of the medium and its message; commercial opportunities; and impact on the individual providing information (knowingly or not) to access a service. ARIES reflected on how ethical principles may be used to inform data handling practices that rely, at some point, on eID authentication on the part of the individual or the service provider.</para>
<para>Core ethical principles to be observed are: precaution; proportionality; purpose specification; purpose limitation; privacy; security; autonomy; dignity; informed consent; justifiability; fairness; transparency and equality.</para>
</section>

<section class="lev2" id="sec11-4-6">
<title>11.4.6 Baked in Ethics for the ARIES Use Cases</title>
<para>The ARIES Use Cases on eCommerce and e-Airport reveal that different rules apply to eID based transactions in a common physical setting owing to pragmatic and political constraints imposed by real-world contexts, real-time eID development and use. These values, so far, are shaped by human beings.</para>
<para>For ARIES, the baseline was the ethical principles common to our societies in the EU28: awareness of what the public interest is, and how it can be explained and protected. This entailed learning from privacy assessment initiatives, regulations, oversight mechanisms, audit, inspection and compliance arrangements, and independent scrutiny to ensure accountability and redress. Implicit are the ethical principles of good governance and transparency of intent and effect. This places a premium on minimum disclosure requirements in terms of how algorithms are designed, used, phased and shaped (often by other automated processes), and on deployment that is proportionate to the goal they are designed to attain. Ethical compliance is not met, therefore, simply by assuming, especially in the case of eCommerce, that competition and anti-trust legislation, standards and regulations are sufficient to guarantee ethical use. Nor is accountability just about liability for malfunction or misuse. This is why baking in ethics, even an ethics audit trail, means that accountability has to be citizen-focused and relate also to the intended use and effect of using an eID on society.</para>
<para>Ethical eID design therefore must reflect principles of accessibility, dignity, equality and transparency. Ethical design suggests that, in practice, where an eID fails to work, for whatever reason, there should be clarity over why this happens and, in order to preserve dignity and accessibility, alternative means of completing an intended benign transaction. GDPR Art. 22 states that people have a right NOT to be subject to a decision &#8216;based solely on automated processing&#8217;.</para>
</section>

<section class="lev2" id="sec11-4-7">
<title>11.4.7 Ethics in the ARIES Use Cases</title>
<para>ARIES Use Cases rested on the same set of questions and methodological approach to ensure consistent application across all of the ARIES activities. All checked fitness-for-purpose. How is ethical use designed into the system? What bias is there? How can risks and benefits be reconciled? How have ethics been designed into the technical solution envisaged? Is this sufficient from the point of view of user trust building? ARIES was especially mindful of the inherent risk of doing inadvertent harm. Its Ethical Impact Assessment tool therefore reflects this by highlighting that any data enrolment, collection or (manual or automated) processing must not harm the data subject directly or indirectly. It must be proportional to the purpose for which processing occurs, must minimise data used and ensure that it is used for that one, specific and limited purpose only. No more data should be enrolled or collected and associated than is expressly necessary for the transaction envisaged.</para>
<para>Breaching the spirit of privacy preservation under the GDPR is a breach of ethical practice.</para>
<para>ARIES concludes that an EIA is a commercial opportunity in its own right and key to building sustainable trust and reliability while maximising privacy and security. An EIA should be conducted in parallel with PIAs. An Ethics audit via an independent and expert body should complement a PIA. These should be done before the decision to proceed with further development of the technical solution is taken. It must be done at the outset (possibly after taking external, independent advice) and authorised and signed off at the highest level. This helps create a trusted privacy and ethics respecting environment for developing innovative technical solutions. Ethically informed good practice becomes second nature. This is communicated to the public and stakeholders. Public trust is key to sustaining trust in the reliability, security and dependability of the solution.</para>
</section>

<section class="lev2" id="sec11-4-8">
<title>11.4.8 Legal Challenges and Lessons Learned in ARIES</title>
<para>As explained in the preceding sections, ARIES proposed the use of new, fully user-centric identification techniques that required a complete review of the different legal frameworks that may be applicable to the service in the case of real exploitation.</para>
<para>First of all, an analysis of the eID European Union legislation, and its application to the ARIES ecosystem, was conducted, mainly focused on Regulation (EU) N&#176; 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (commonly known as the &#8220;eIDAS Regulation&#8221;) and its implementing acts.</para>
<para>Our findings include that an ARIES provider may play two different roles in the eID EU regulated ecosystem:</para>
<para>&#8211; First of all, an ARIES provider may be an electronic identification means consumer. This happens when the ARIES provider uses the electronic identification means issued to the citizen, e.g. by a Member State, such as when the citizen authenticates using a national citizen ID card (e.g. the Spanish National ID card or the German nPA). This is an interesting way to reuse strong-authentication-based identification mechanisms as an authentic source for the self-issuance of user-controlled identities.</para>
<para>&#8211; Secondly, an ARIES provider may be an electronic identification means issuer, in the sense of the eIDAS Regulation. For this to happen, the system must comply with the legal requirements set forth by the eIDAS Regulation, and the corresponding implementing acts, and be recognized by a Member State. The ARIES derived identities aim to be recognized according to the substantial security level defined in Article 8 of the eIDAS Regulation and, thus, the system shall comply with the corresponding requirements set forth in the eIDAS Security Regulation. This possibility would allow the usage of the ARIES derived identities for the electronic access to public services in the EU.</para>
<para>Due to their nature as private, pseudonymous identification means with legal value under the eIDAS Regulation, an analysis of the use of advanced electronic signatures based on qualified certificates issued by ARIES providers was also considered relevant for the endorsement of derived identities. The research concluded that an ARIES provider may issue qualified certificates assuring the identity of the person, using pseudonym certificates and other attributes as a means to represent derived identities. This possibility is directly implementable in the current EU framework, but its recognition is subject to the authorisation of the use of pseudonym certificates in each Member State.</para>
<para>More interestingly, our research showed that an ARIES provider could offer a new trust service, consisting of the accreditation of possession of personal attributes (a wide conceptualization of identity) with privacy protection.</para>
<para>This may be considered the main legal innovation of the project: an ARIES provider, once a person&#8217;s identity has been provisioned, offers a service that allows that person to self-create partial, derived identities asserting in a trustworthy manner a particular personal attribute (e.g. possession of a personal, valid boarding pass to shop in the airport, or being older than a certain age). These derived identities constitute assertions that may legally substitute the corresponding documents that evidence the personal attributes (e.g. instead of showing the boarding pass, with all personal data, one shows a partial, derived identity that proves the fact that the person holds a personal and valid boarding pass), thus increasing privacy effectively while reducing compliance costs for data controllers.</para>
<para>To be able to substitute these documents with partial derived ARIES identities while maintaining legal certainty, a definition of this service as a new trust service should be proposed, including the institutionalization of the service and a legal effect attached to it (e.g. establishing some sort of equivalence principle such as &#8220;where the law requires the documentary accreditation of a personal attribute, it will be possible to use a [service name] evidence&#8221;).</para>
</section>

<section class="lev1" id="sec11-5">
<title>11.5 ARIES Ecosystem Validation</title>
</section>

<section class="lev2" id="sec11-5-1">
<title>11.5.1 E-Commerce</title>
<para>The secure eCommerce scenario focused on demonstrating how virtual identities with different levels of assurance can be used to access different online services. It showed how this level of assurance may determine the operations that people can perform. It demonstrated the control citizens have in practice over their virtual identities, allowing them to enrol with the ARIES ecosystem and build separate identities, for different purposes, effectively minimizing the disclosure of data and maximizing their privacy. This was informed by and designed to ensure implementation of ethical principles to help build trust.</para>
<para>The e-Commerce demonstrator scenario overview and main processes identified are shown in <link linkend="F11-2">Figure <xref linkend="F11-2" remap="11.2"/></link>. The demonstrator allowed users to use their own eIDs present in the ARIES vault to register and to log in using their biometrics (face authentication) on the Chef Continente website. This new authentication method used the ARIES system and app to read real documents (e.g. a passport) and biometrics (the user&#8217;s face) to validate identities and connect to the third-party e-Commerce service from Continente, the leading Portuguese retailer.</para>
<para>The demonstration stage aimed to validate ARIES&#8217; results in terms of the applicability of the resulting ecosystem and its enabling tools and technologies, and to effectively demonstrate the progress beyond the state of the art of ARIES achievements in a realistic scenario with potentially high impact on society and the economy (this stage was conducted under strict ethical and legal requirements to protect participants). Several tools were used; they are presented next in chronological order:</para>
<fig id="F11-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F11-2">Figure <xref linkend="F11-2" remap="11.2"/></link></label>
<caption><para>e-Commerce demonstrator scenario overview.</para></caption>
<graphic xlink:href="graphics/ch011_fig002.jpg"/>
</fig>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>The ARIES Online Survey was designed to ascertain citizens&#8217; expectations of the ARIES eID and inform the project about which issues needed to be addressed (222 respondents);</para></listitem>
<listitem><para>The Proof-of-Concept Design Thinking Workshop had the objective of engaging potential users with the ARIES system and getting their feedback about relevance, usability and functionality (16 participants);</para></listitem>
<listitem><para>The Demonstrator Focus Groups stage had the main goal of testing the ARIES app in the real context with a group of users that had different backgrounds, ages and experiences (29 users).</para></listitem>
</orderedlist>
<para>The demonstrations were done at a time when concern about privacy and security was at its highest, as the testing coincided with the peak of several international data-breach scandals. Users stated that the focus on privacy, control of data and security was excellent and that they were excited that such a tool might be available in the future. The project&#8217;s link to EU funding also generated additional goodwill.</para>
<para>With regard to usability, users recognized that, for a prototype, the ARIES app had a very good look &amp; feel and a good design, despite some minor usability issues that have since been solved. It is important to state that later versions, including the current one, included a more intuitive and visual explanation of the different steps and were recognized as good.</para>
<para>Finally, users mentioned that the need for a passport as a baseline for the user data was an interesting approach, as this could generate additional trust in the system. It was often mentioned that control of one&#8217;s own data in the ARIES vault was very important and that cloud updates to the data being reflected in the third-party services was a huge plus. All in all, it was no surprise that most users stated they would likely use the service if it were available on the market.</para>
</section>

<section class="lev2" id="sec11-5-2">
<title>11.5.2 Airport Scenario Pilot and Validation</title>
<para>The planning of the airport demonstrator began in January 2018. The work focused on refining the platform functionality and defining &#8220;use cases&#8221;, with the planning of logistics and final storyboards following as time progressed over the summer and into autumn. Over a period of months this helped establish the approximate date at which the development of the ARIES technology and the availability of the venue would both be at their optimum.</para>
<para>The venue of the demonstrator was a crucial and unique location, and it held challenges in bringing an innovative concept prototype into a realistic operational environment. The location chosen was Leeds Bradford International Airport at Yeadon, in the City of Leeds, West Yorkshire, England. This location had far greater challenges to overcome than normal locations, due to the secure nature of the site and the need to operate airside in the airport to perform an operational test of the ARIES prototype in a controlled-access environment.</para>
<para>The consultation stage with the airport, Jet2.com, Border Force and a retail outlet identified that the optimum time during 2018 for holding the pilot was the month of November. This was when the airport had the fewest flights and passengers in the terminal building, staff availability was at its most convenient, and disruption to normal business in the airport would be minimised.</para>
<para>The end users and stakeholders who took part in the pilot were identified during the consultation period. They came from the airport, airline and retail sections of the airline experience. In addition, to take law enforcement into account, participants included counter-terrorism, cyber and serious organised crime officers, with a further focus on crime prevention and community cohesion.</para>
<para>All participants were first asked to perform a timed exercise in which they completed the ARIES enrolment process to establish an eID using their own genuine passport. During this process a live-capture image of the face is taken and compared to the biometric information held in the passport as a verification of identity. Multiple identities were enrolled on the devices, and by way of a password each person could secure their personal data on the device.</para>
<para>Once this had been completed, each participant was given additional enrolment exercises to demonstrate some of the functionality of the ARIES app. These included a genuine but expired passport: because the passport&#8217;s validity date had passed, no enrolment could be completed. With a clearly noticeable forged passport, the participants were likewise unable to complete enrolment. Finally, a stolen passport was tried, genuine but with biometric information that did not match the participant, so again no enrolment could take place and no eID could be created in ARIES.</para>
<para>The pilot was completed in one day and covered two main themed functions in the airport: the passenger gate-boarding process and a retail shopping experience. To help test the security of the facial recognition technology and to keep the participants engaged, they were asked to try to pass the boarding process wearing various head garments; these obstructed the live capture, since a clear image of the face is required for comparison. The final section of the boarding scenario was a timed exercise: the participants were organized in a pre-defined queue and timed on four separate occasions going through the boarding process, rotating through the four mobile devices. To best simulate a queue of passengers who all have their own mobile devices, ARIES allows multiple user profiles to be created on one device. Once all the participants had completed the exercise, they were asked to fill in a feedback questionnaire.</para>
<para>The second phase of the pilot took place in the retail store. A laptop was set up on the cashier&#8217;s desk to simulate both the register&#8217;s screen and the customer&#8217;s screen. A walk-through demonstration was first performed for the retail manager and staff. During the demonstration, the staff were shown how ARIES could be used to present the information they required to approve a sale of a restricted item, such as alcohol or cigarettes. They were asked to consider that using a recognized vID provider could be an approved method of proof of identity. It was also explained that one of the objectives of the ARIES project is to protect customers&#8217; personal information and not to disclose unnecessary information about the customer that could be stolen and used in a fraudulent act. From their own mobile device, the customer has to consent to releasing the relevant personal data for the cashier to view. The staff were then asked to complete a questionnaire.</para>
<para>The feedback questionnaires contained a set of generic qualitative based questions about the user experience during enrolment. Further sub-sections in the questionnaire focused on questions bespoke to the stakeholders involved, i.e. the passengers and airline&#8217;s boarding experience and retailer&#8217;s and customer&#8217;s purchase experience. The questions also aimed to explore exploitation and marketability, in terms of how likely passengers/customers would be to use the app if it were available and how likely a business would be to exploit a product like ARIES. Once all the storyboards and questionnaires were completed, a final opportunity was given to all participants to ask questions and give any feedback not already covered in the questionnaire.</para>
<para>The airport demonstrator was designed to explore the effectiveness of ARIES in issuing virtual credentials in an operational environment which requires the highest level of assurance and eligibility. The pilot demonstrated that a virtual identity combined with a live capture would greatly increase the security that protects citizens and their credentials, giving assurance to service providers and border officials that the person is eligible to travel or to purchase items that are restricted by legislation.</para>
<para>The general performance of the prototype ARIES app and the verbal feedback given by participants highlighted that the participants found the app easy to use and liked the concept of holding a duplicate electronic means of proving their identity; they also felt they were in control of that data.</para>
<para>Where ARIES failed to meet the KPI was with enrolment on the app; most participants commented that if they had been enrolling at home rather than within a test environment, they would not have felt the pressure that came with performing the task in a timed session.</para>
<para>Comments from commercial end users focused on added security, reduced waiting times and efficiency savings in personnel. Comments also included the prospect of participants being able to spend more time in the commercial area of the airport if the boarding process freed up waiting time. All participants saw clear benefits of the speedy boarding process; their customer experience in this stage of testing was very positive. While most users noted &#8220;some concern&#8221; over privacy, no participants said they would not use the app. In fact, all users said that if the app became a viable product, they would use it. Overall, the pilot testing of the ARIES app was successful and the feedback useful and mostly positive.</para>
</section>

<section class="lev1" id="sec11-6">
<title>11.6 Cyber-security and Privacy Research Challenges</title>
<para>The landscape of Identity Management (IDM) has been rapidly evolving since the launch of the ARIES project in 2017. New identity management models have emerged, transcending the third party-based (identity provider) federated identity approach that has dominated the identity and access management landscape. In particular, multiple initiatives on Self-Sovereign Identity (SSI) are maturing and attracting attention from industry and governments [10].</para>
<para>These approaches are supported by decentralised architectures enabled by Distributed Ledger Technologies (DLT), and more specifically Blockchain (noteworthy examples are Sovrin [11], Hyperledger Indy [14], uPort [12] and Blockstack [13]). The maturing and widespread adoption of SSI solutions now has a good basis in emerging international standards:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>W3C Credentials Community Group specifications such as Decentralized Identifiers (DIDs) [15]</para></listitem>
<listitem><para>Decentralized Key Management System (DKMS) [16]</para></listitem>
<listitem><para>DID Auth [17]</para></listitem>
<listitem><para>Verifiable Credentials [18]</para></listitem>
</itemizedlist>
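<para>As a purely illustrative sketch (all identifiers and values below are invented placeholders, not taken from ARIES or any deployed system), a credential following the general shape of the W3C Verifiable Credentials data model [18], issued and held under Decentralized Identifiers [15], could be represented as follows:</para>

```python
# Minimal illustrative shape of a W3C-style Verifiable Credential.
# All identifiers and values are placeholders; a real credential
# carries a cryptographic proof (e.g. a signature or a zero-knowledge
# proof), which is only stubbed here.
import json

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer123",           # a DID identifying the issuer
    "issuanceDate": "2019-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder456",           # a DID identifying the holder
        "ageOver18": True                        # minimal-disclosure attribute
    },
    "proof": {"type": "ExamplePlaceholderProof"}  # stub, not a real proof
}

print(json.dumps(credential, indent=2))
```

A verifier would resolve the issuer&#8217;s DID to obtain its public key material and check the proof, while the holder discloses only the attributes the relying party actually needs.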
<para>Furthermore, the European Commission contacted CEN/CLC, which has established a Focus Group on Blockchain and Distributed Ledger Technologies. The Focus Group collects identified European needs on these technologies, contextualised to Europe&#8217;s specific normative and technological environment, and monitors relevant activities of the Joint Multi-Stakeholder Platform on ICT standardization and the Digitising European Industry initiative, while also supporting ISO/TC 307 with a possible future European Technical Committee on Blockchain and DLT, see [19].</para>
<para>The relevance of Law Enforcement Authorities for ARIES, as key adopters engaged in the prevention and reduction of identity-related crimes, links well with the analysed areas for the use of Blockchain for Government and Public Services [20]. This points towards continued development of ARIES sustainability through engagement with blockchain initiatives in the public sector, in particular around the European Blockchain Forum/Observatory/Partnership [21] and the future opportunities enabled by the development of a European Blockchain Services Infrastructure.</para>
<para>In this respect, we can consider the key technological breakthroughs achieved by ARIES, which define the core features of its identity ecosystem and relate directly to the major aspects required to materialise and achieve widespread adoption of the Self-Sovereign Identity paradigm:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>ARIES implements a mobile wallet to manage derived identities, which is an essential client component in SSI approaches. This fully aligns with the mobile identity orientation of ARIES, allowing full user-centric control of (user-owned) credentials and bringing convenience and security to the enrolment and authentication phases. In future phases, the ARIES app could also include an SSI Agent, acting as a trust anchor for establishing, by means of an Agent-to-Agent Protocol, secure, authenticated connections to other agents (e.g. at relying parties). Coupled with the DKMS protocol and the Secure Element and Trusted Execution Environment in the mobile device, the wallet can maintain SSI private keys, extending the use of these secure solutions already employed to protect biometric material.</para></listitem>
<listitem><para>The DID approach relates well to the ARIES approach of letting users manage multiple identities, with the additional advantage of allowing users to separate interactions and establish, through DIDs, encrypted channels with other entities (persons, organizations or things) to securely exchange verifiable claims/credential data. This will allow a transition from an &#8216;account-based&#8217; concept of IDM to one based on user-managed connections over distributed blockchain solutions, with no central authorities that can be the target of attacks, thus achieving a more robust identity ecosystem.</para></listitem>
<listitem><para>ARIES derivation of reliable electronic identities from official or qualified credentials backed by the Member States (eIDAS eID, ePassport) can be further explored, linking the eIDAS network to the ARIES provider for importing official identities into an SSI infrastructure, see [22].</para></listitem>
<listitem><para>The ARIES approach to data minimisation through Attribute-Based Credentials, based on Zero-Knowledge Proofs, aligns well with the notion of SSI Verifiable Credentials. Once more, it allows strict user control of personal data disclosure with ease of use, as cryptographic mechanisms assure relying parties that the user is in possession of certain attributes without revealing any additional unnecessary details, thus fulfilling the GDPR data minimisation principle.</para></listitem>
<listitem><para>The ARIES Secure Vault approach, with strict user authorisation of access by competent identity crime investigation authorities, opens future research possibilities to support this in private permissioned ledger technology, facilitating the cross-border investigation of identity-related incidents and cooperation between Law Enforcement Authorities.</para></listitem>
</orderedlist>
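<para>The data-minimisation idea behind point 4 can be sketched with a toy example. The following is not the zero-knowledge proof machinery that Attribute-Based Credentials actually use; it only illustrates, with simple hash commitments, how a user can reveal one attribute while keeping the rest hidden:</para>

```python
# Toy selective-disclosure sketch using hash commitments (NOT the
# zero-knowledge proof scheme used by Attribute-Based Credentials;
# attribute names and values are invented for illustration).
import hashlib
import os

def commit(value: str, nonce: bytes) -> str:
    """Commitment to a value: sha256(nonce || value)."""
    return hashlib.sha256(nonce + value.encode()).hexdigest()

# Issuer side: commit to each attribute separately; in a real system
# the commitments would be signed by the issuer.
attributes = {"name": "Alice", "nationality": "ES", "age_over_18": "true"}
nonces = {k: os.urandom(16) for k in attributes}
commitments = {k: commit(v, nonces[k]) for k, v in attributes.items()}

# Holder side: disclose only "age_over_18" by opening that commitment.
key = "age_over_18"
disclosed_value, disclosed_nonce = attributes[key], nonces[key]

# Verifier side: check the opening; "name" and "nationality" stay
# hidden behind their unopened commitments.
assert commit(disclosed_value, disclosed_nonce) == commitments[key]
print("verified:", key, "=", disclosed_value)
```

Real ABC schemes go further: a zero-knowledge proof lets the user prove a predicate (e.g. &#8220;age over 18&#8221;) without revealing even the committed value itself.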
<para>All these aspects underline the readiness of ARIES results to reap opportunities, together with a vibrant community of identity management experts, who are taking identity management ecosystems forward to a new paradigm of disintermediated, user-centric and privacy-respecting identity and access control. This will create clear benefits for the security of European citizens and organizations, helping authorities to collaboratively achieve EU strategic goals for identity fraud reduction.</para>
</section>

<section class="lev1" id="sec11-7">
<title>11.7 Conclusion</title>
<para>As cyber-criminals evolve their cyber-attacks, the European Commission is determined to meet the challenge by promoting research in different cybersecurity and privacy areas to mitigate upcoming identity-related crimes in both virtual (e.g. misuse of information, cyber-mobbing) and physical settings (e.g. people trafficking, organized crime). In this sense, over recent years, research efforts have been made in different projects to devise novel solutions aimed at increasing users&#8217; privacy and protecting them against evolving kinds of cyber-crime; the challenge is still ongoing.</para>
<para>To this aim, this chapter has summarized the main goals, challenges and approach of the European H2020 ARIES project, whose ultimate goal is to provide security features that ensure the highest levels of quality in secure credentials for highly secure and privacy-respecting physical and virtual identity management processes. In addition, the project has addressed key legal, ethical, socio-economic, technological and organisational aspects of identity-related crimes.</para>
<para>Novel processes such as virtual identity derivation, ACS, identity proofing based on breeder documents and biometric processes, along with security features (secure wallet, secure vaults), have been devised, implemented and validated for physical and virtual identity management, strengthening the link between physical and virtual identities to reduce identity fraud.</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This book chapter received funding from the European Union&#8217;s Horizon 2020 research and innovation programme under grant agreement No 700085 (ARIES project).</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] The European Parliament, the Council of the European Union: Regulation (EU) no 910/2014 of the European parliament and of the council (2014).</para>
<para>[2] Hughes, J., Maler, E.: Security assertion markup language (saml) v2.0. Technical report, Organization for the Advancement of Structured Information Standards (2005).</para>
<para>[3] Hardt, D. (ed.): The oauth 2.0 authorization framework (2012).</para>
<para>[4] Camenisch, J., Van Herreweghen, E.: Design and implementation of the idemix anonymous credential system. In: Proceedings of the 9th ACM Conference on Computer and Communications Security, CCS 2002, pp. 21&#8211;30. ACM, New York (2002).</para>
<para>[5] Sabouri, A., Krontiris, I., Rannenberg, K.: Attribute-based credentials for trust (ABC4Trust). In: Fischer-H&#252;bner, S., Katsikas, S., Quirchmayr, G. (eds.) TrustBus 2012. LNCS, vol. 7449, pp. 218&#8211;219. Springer, Heidelberg (2012). doi:10.1007/978-3-642-32287-7_21</para>
<para>[6] Kortuem, G., Kawsar, F., Fitton, D., Sundramoorthy, V.: Smart objects as building blocks for the internet of things. IEEE Internet Comput. 14(1), 44&#8211;51 (2010).</para>
<para>[7] J. Camenisch and A. Lysyanskaya, &#8220;A signature scheme with efficient protocols,&#8221; in Proc. Int. Conf. Secur. Commun. Netw., 2002, pp. 268&#8211;289.</para>
<para>[8] Jorge Bernal Bernabe, Jose L. Hernandez-Ramos, and Antonio F. Skarmeta Gomez, &#8220;Holistic Privacy-Preserving Identity Management System for the Internet of Things,&#8221; Mobile Information Systems, vol. 2017, Article ID 6384186, 20 pages, 2017.</para>
<para>[9] J. L. C. Sanchez, J. Bernal Bernabe and A. F. Skarmeta, &#8220;Integration of Anonymous Credential Systems in IoT Constrained Environments,&#8221; in <i>IEEE Access</i>, vol. 6, pp. 4767&#8211;4778, 2018. doi: 10.1109/ACCESS.2017.2788464</para>
<para>[10] European Blockchain Observatory, &#8216;Blockchain innovation in Europe&#8217;, https://www.eublockchainforum.eu/sites/default/files/reports/20180727_report_innovation_in_europe_light.pdf?width=1024&amp;height=800&amp;iframe=true</para>
<para>[11] Sovrin Foundation, https://sovrin.org/</para>
<para>[12] uPort, https://www.uport.me/</para>
<para>[13] Blockstack, https://blockstack.org/what-is-blockstack/</para>
<para>[14] Hyperledger Indy, https://www.hyperledger.org/projects/hyperledger-indy</para>
<para>[15] &#8216;Decentralized Identifiers (DIDs) v0.11, Data Model and Syntaxes for Decentralized Identifiers (DIDs)&#8217;, W3C Credentials Community Group Site: https://w3c-ccg.github.io/did-spec/</para>
<para>[16] DKMS, https://github.com/WebOfTrustInfo/rwot4-paris/blob/master/topics-and-advance-readings/dkms-decentralized-key-mgmt-system.md</para>
<para>[17] &#8216;Link to DID Auth final version&#8217;, https://github.com/WebOfTrustInfo/rwot6-santabarbara/commit/c1c44d6d2ead845db75f9a52b53c0fb4cd98db2d</para>
<para>[18] W3C. Verifiable Credentials Working Group, https://www.w3.org/2017/vc/WG/</para>
<para>[19] &#8216;Recommendations for Successful Adoption in Europe of Emerging Technical Standards on Distributed Ledger/Blockchain Technologies&#8217;, ftp://ftp.cencenelec.eu/EN/EuropeanStandardization/Sectors/ICT/Blockchain%20+%20DLT/FG-BDLT-White%20paper-Version1.2.pdf</para>
<para>[20] European Blockchain Observatory, &#8216;Blockchain for Government and Public Services&#8217;, https://www.eublockchainforum.eu/sites/default/files/reports/eu_observatory_blockchain_in_government_services_v1_2018-12-07.pdf?width=1024&amp;height=800&amp;iframe=true</para>
<para>[21] European Blockchain Forum and Observatory, https://www.eublockchainforum.eu/</para>
<para>[22] &#8216;Importing National eID Attributes into a Decentralized IdM System&#8217;, Abraham, A., June 2018, https://www.egiz.gv.at/files/projekte/2018/eIdAttributeImport/ImportNationaleEIdAttribute.pdf</para>
</section>
</chapter>

<chapter class="chapter" id="ch012" label="12" xreflabel="12">
<title>The LIGHTest Project: Overview, Reference Architecture and Trust Scheme Publication Authority</title>
<para><b>Heiko Ro&#223;nagel<sup>1</sup> and Sven Wagner<sup>2</sup></b></para>
<para><sup>1</sup>Fraunhofer IAO, Fraunhofer Institute of Industrial Engineering IAO, Nobelstr. 12, 70569 Stuttgart, Germany</para>
<para><sup>2</sup>University of Stuttgart, Institute of Human Factors and Technology Management, Allmandring 35, 70569 Stuttgart, Germany. E-mail: heiko.ro&#223;nagel@iao.fraunhofer.de; sven.wagner@iat.uni-stuttgart.de</para>
<para>There is an increasing amount of electronic transactions in business and people&#8217;s everyday lives. To know who is on the other end of a transaction, it is often necessary to have assistance from authorities that certify trustworthy electronic identities. The EU-funded LIGHTest project assists here by building a global trust infrastructure using DNS, where arbitrary authorities can publish their trust information. This enables an automatic verification process for electronic transactions. This paper gives an overview of the project, its reference architecture with its main components, and its application fields.</para>

<section class="lev1" id="sec12-1">
<title>12.1 Introduction</title>
<para>Traditionally, we often knew our business partners personally, which meant that impersonation and fraud were uncommon. Whether in the single European market place or on a global scale, electronic transactions are increasingly becoming part of people&#8217;s everyday lives, and establishing who is on the other end of a transaction is important. Clearly, it is necessary to have assistance from authorities that certify trustworthy electronic identities. This already exists; for example, the EC and Member States have legally binding electronic signatures. But how can we query such authorities in a secure manner? With the current lack of a worldwide standard for publishing and querying trust information, this would be prohibitively complex, leaving verifiers to deal with a high number of formats and protocols.</para>
<para>The EU-funded LIGHTest project attempts to solve this problem by building a global trust infrastructure where arbitrary authorities can publish their trust information. Setting up a global infrastructure is an ambitious objective; however, given the already existing infrastructure, organization, governance and security standards of the Internet Domain Name System, we are confident that this is possible. The EC and Member States can use this to publish lists of qualified trust services, as can business registrars and authorities in health, law enforcement and justice. In the private sector, it can be used to establish trust in inter-banking, international trade, shipping, business reputation and credit rating. Companies, administrations and citizens can then use LIGHTest open source software to easily query this trust information to verify trust in simple signed documents or multi-faceted complex transactions.</para>
<para>The three-year LIGHTest project started on September 1<sup>st</sup>, 2016. It is partially funded by the European Union&#8217;s Horizon 2020 research and innovation programme under G.A. No. 700321. The LIGHTest consortium consists of 14 partners from 9 European countries and is coordinated by Fraunhofer-Gesellschaft. To reach out beyond Europe, LIGHTest aims to build up a global community based on international standards and open source software.</para>
<para>The partners are ATOS (ES), Time Lex (BE), Technische Universit&#228;t Graz (AT), EEMA (BE), Giesecke + Devrient (DE), Danmarks Tekniske Universitet (DK), TUBITAK (TR), Universit&#228;t Stuttgart (DE), Open Identity Exchange (GB), NLNet Labs (NL), CORREOS (ES), University of Piraeus (GR), and Ubisecure (FI).</para>
<para>This paper provides an overview of the LIGHTest project, its reference architecture with its main components, and its application fields. This overview is based on papers already published and accepted within this project. Due to the complexity and wide range of the project, not all topics and work packages can be covered in this paper. For more details, we refer to the LIGHTest project web site https://www.lightest.eu/.</para>
<para>This paper is structured as follows. Section 12.2 introduces related work. In Section 12.3, an overview of the LIGHTest reference architecture and examples of usage scenarios are presented. The concept and role of the Trust Scheme Publication Authority (TSPA) is described in more detail in Section 12.4, by way of example for the components of the LIGHTest reference architecture. The TSPA is one of the key components of the LIGHTest reference architecture and is used in every verification process. In Section 12.5, the Trust Policy Language (TPL) and the Policy Authoring and Visualization Tools used in LIGHTest are introduced. A short discussion and outlook is given in Section 12.6 and a summary is provided in Section 12.7.</para>
<para>For further details, we refer to the following publications: [1] provided a first introduction to the LIGHTest project. In [2], the LIGHTest reference architecture and the Trust Scheme Publication Authority (TSPA) are presented. [3] proposes a delegation scheme that provides a general representation of delegations which can be extended to different domains. In [4], the external APIs of the involved components and how they can be used to publish trust scheme information in the TSPA are described, as well as how to use DNS to make trust scheme membership claims discoverable by a verifier in an automated way. If, in addition to the trust scheme membership, the requirements of the trust scheme are published, a Unified Data Model is required. In [5], the development and publication of such a Unified Data Model derived from existing trust schemes (e.g. eIDAS) is described. [6] presents the Graphical Trust Policy Language (GTPL), an easy-to-use interface for the trust policy language TPL proposed by LIGHTest. In [7], low- and high-fidelity prototypes of the trust policy authoring tool were developed to evaluate the design, in particular considering novice users.</para>
</section>

<section class="lev1" id="sec12-2">
<title>12.2 Related Work</title>
<para>Most of the existing trust infrastructures follow the subsidiarity principle. One prominent example is the eIDAS Regulation (EU) N&#176; 910/2014 [8] on electronic identification and trust services for electronic transactions in the internal market. Under this regulation, each Member State establishes and publishes national trusted lists of qualified trust service providers. For access to these trusted lists, the EC publishes a central list (&#8220;List of Trusted Lists&#8221;) which contains links to them. Because the direct use of trust lists can be very onerous for verifiers, in particular for international electronic transactions, LIGHTest provides a framework that is conceptually comparable to OCSP for querying the status of individual certificates and which facilitates the verification of trust.</para>
<para>DANE (DNS-based Authentication of Named Entities) is a standard using DNS and the DNS security extension DNSSEC to derive trust in TLS server certificates (RFC 6698 [9] and RFC 7218 [10]). For this purpose, the DNS resource record TLSA was introduced, which associates a TLS server certificate (or public key) with the domain name where the record is found. Within LIGHTest, the DANE standard is used to secure network communication and where certificates are used for verifying data.</para>
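<para>For illustration only (the domain and the digest below are placeholders, not real values), a TLSA resource record as defined in RFC 6698 has the following shape, binding a server&#8217;s TLS certificate to its domain name:</para>

```text
; Placeholder example, not a real record.
; Certificate usage 3 = DANE-EE (end-entity certificate),
; selector 1 = SubjectPublicKeyInfo,
; matching type 1 = SHA-256 hash of the selected data.
_443._tcp.www.example.com. IN TLSA 3 1 1 0a1b2c...placeholder-sha256-digest
```

The record name encodes the port and protocol (here TCP port 443), and DNSSEC signatures over the record allow a client to validate the association.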
<para>Much like TLSA, the SMIMEA mechanism [11] provides a number of ways to limit the certificates that are acceptable for a certain e-mail address. It associates an S/MIME user&#8217;s certificate with the intended domain name by certificate constraints. In LIGHTest, the SMIMEA resource record is used to verify that the certificate used for signing the trust list is valid.</para>
<para>For the publication that an entity operates under the trust scheme there is an existing and widely accepted standard for trust lists, which is ETSI TS 119 612 [12]. This standard provides &#8220;a format and mechanisms for establishing, locating, accessing and authenticating a trusted list which makes available trust service status information so that interested parties may determine the status of a listed trust service at a given time&#8221;. Within LIGHTest, the ETSI TS 119 612 standard is used for the representation of Trust Lists.</para>
</section>

<section class="lev1" id="sec12-3">
<title>12.3 Reference Architecture</title>
<para>This section gives an overview of the LIGHTest reference architecture. First, it defines the macroscopic design of the LIGHTest infrastructure as well as the overall system&#8217;s components, their functionality and their interaction at a high level. Second, examples of usage scenarios are presented. For more details, we refer to [2].</para>
</section>

<section class="lev2" id="sec12-3-1">
<title>12.3.1 Components of the Reference Architecture</title>
<para><link linkend="F12-1">Figure <xref linkend="F12-1" remap="12.1"/></link> shows the LIGHTest reference architecture with all the major software components and their interactions (see also [1] and [2]). It illustrates how a verifier can validate a received electronic transaction based on her individual trust policy and queries to the LIGHTest reference trust infrastructure.</para>
<para>The verifier interacts with the Policy Authoring and Visualization Tools (e.g. desktop or web applications). These tools enable even non-technical users to visualize and edit trust policies, which can be individual and specific to each transaction. The role of the trust policy is to provide formal instructions for the validation of trustworthiness for a given type of electronic transaction. For example, it states which trust lists from which authorities should be used. Further details are given in Section 12.5.</para>
<fig id="F12-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-1">Figure <xref linkend="F12-1" remap="12.1"/></link></label>
<caption><para>The LIGHTest reference architecture (see also [1, 2]).</para></caption>
<graphic xlink:href="graphics/ch012_fig001.jpg"/>
</fig>
<para>The Automatic Trust Verifier (ATV) takes the electronic transaction and trust policy as input and outputs whether the electronic transaction is trustworthy or not. In addition, the ATV may provide an explanation of its decision, in particular if the transaction was considered not trustworthy.</para>
<para>The Trust Scheme Publication Authority (TSPA) uses a standard DNS Name Server with the DNSSEC extension. A server publishes multiple trust lists under different sub-domains of the authority&#8217;s domain name. The TSPA enables discovery and verification of trust scheme memberships. In Section 12.4, the TSPA is described in more detail.</para>
<para>The Trust Translation Authority also uses a standard DNS Name Server with DNSSEC extension. Here, a server publishes trust data under different sub-domains of the authority&#8217;s domain name. In addition, trust translation lists express which authorities from other trust domains are trusted.</para>
<para>The Delegation Publisher uses a DNS Name Server with DNSSEC extension to discover the location (IP address) of the delegation provider, given that the user knows the correct domain name. The delegations themselves are not published in DNS mainly due to privacy reasons.</para>
</section>

<section class="lev2" id="sec12-3-2">
<title>12.3.2 Usage Scenarios</title>
<para>In this section, examples of usage scenarios are presented. There are basic scenarios for trust publication, trust translation, and trust delegation, which can be used for qualified signatures, qualified seals, qualified identities, or qualified timestamps. The functionality (publish, translate, delegate) of the basic scenarios can be used to realise a wide range of more sophisticated scenarios. These scenarios can be either variants of the basic scenarios or a combination of different basic scenarios. A combination can compose two trust services in a chaining process, where the output level of the inner trust service becomes the input level of the outer trust service. For example, for qualified delivery services, e-registered delivery can be realised using a combination of the signature and timestamp scenarios. Another example is qualified website authentication, where trust publication with qualified identities is the basic scenario and, additionally, trust translation could be used to, e.g., authenticate third-party users/things.</para>
<para>As an example for a basic scenario, a successful trust scheme membership verification for qualified signatures is presented. For this example, the following preconditions and assumptions for the electronic transaction and trust policy are made:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>As preconditions, it is assumed that the verifier and signer are both located in the EC/eIDAS trust domain and that the eIDAS trust domain contains the actual eIDAS trust scheme. This means that trust translation is not required in this scenario. This could for example be managed in the following domain name structure: trust.ec.europa.eu &#8211; signature &#8211; TrustScheme &#8211; actual eIDAS trust scheme for qualified signature.</para></listitem>
<listitem><para>For the electronic transaction, it is assumed that the transaction is simply a signed document. Furthermore, the certificate used to sign the document contains a link to the trust list (Trust Membership Claim) for easier discovery such as &#8220;Issuer Alt Name: XYZ.qualified.trust.admin.ec&#8221; that points to the DNS resource records of the native trust scheme for qualified signatures. In addition, this trust scheme lists the certificate as qualified.</para></listitem>
<listitem><para>For the trust policy, it is assumed that the trust policy simply states that the signature of the document is trusted if the issuer of the certificate is listed in TrustScheme.signature.trust.ec.europa.eu. Hence it is published as a Boolean trust scheme publication (see Section 12.4 for the definition of Boolean trust scheme publication).</para></listitem>
</orderedlist>
<para>For the basic scenario of a successful trust scheme membership verification for qualified signatures with the preconditions and assumptions mentioned above, the corresponding information flow in the architecture is described in the following and depicted in <link linkend="F12-2">Figure <xref linkend="F12-2" remap="12.2"/></link>.</para>
<para>In step 1, the verifier feeds both the Trust Policy and the Electronic Transaction into the ATV. The ATV parses the electronic transaction and yields the document, the signer certificate and the issuer certificate (step 2). In step 3, the ATV validates the signature on the document to make sure it is signed by the signer certificate. Next, the ATV validates that the signer certificate is signed by the issuer certificate (step 4). In step 5, the ATV searches the signer certificate and the issuer certificate for discovery information. The ATV finds a Trust Membership Claim in the signer certificate: &#8220;Issuer Alt Name: XYZ.qualified.trust.admin.ec&#8221;. Hence, the issuer name is extracted from the certificate. In step 6, the ATV contacts the TSPA to retrieve the associated trust scheme. To this end, the ATV issues a DNS query for all relevant resource records for boolean trust schemes for XYZ.qualified.trust.admin.ec. In step 7, the ATV verifies the chain of signatures from the DNS trust root of the DNS response using a validating resolver and stores the response as a &#8220;receipt&#8221; for future justification of its decision. Next, the ATV converts the resource records of the response into a boolean value (step 8). In the final step, the ATV consults the trust policy and detects that the trust scheme TrustScheme.signature.trust.ec.europa.eu is trusted (step 9). Hence, the overall result of applying the trust policy to the electronic transaction is &#8220;trusted&#8221;, and this is sent back to the verifier (step 10).</para>
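<para>The decision logic of these steps can be sketched in simplified form. This is a hypothetical illustration over plain data structures; the function and field names are assumptions, not the LIGHTest API, and the cryptographic and DNSSEC operations are stubbed out:</para>

```python
# Hypothetical sketch of the ATV decision flow (not the LIGHTest API).
# Signature validation (steps 3-4) and the DNSSEC query to the TSPA
# (steps 6-8) are stubbed as plain booleans and a dict lookup.

def atv_verify(transaction: dict, trust_policy: set, tspa: dict) -> bool:
    """Return True iff the transaction is trusted under the policy."""
    # Steps 3-4: document and certificate-chain signature checks.
    if not (transaction["doc_sig_valid"] and transaction["signer_cert_valid"]):
        return False
    # Step 5: trust membership claim discovered in the signer certificate.
    claim = transaction["issuer_alt_name"]
    # Steps 6-8: TSPA lookup, reduced to a boolean membership value.
    is_member = tspa.get(claim, False)
    # Step 9: apply the verifier's trust policy.
    return bool(is_member) and claim in trust_policy

tspa = {"XYZ.qualified.trust.admin.ec": True}
policy = {"XYZ.qualified.trust.admin.ec"}
tx = {"doc_sig_valid": True, "signer_cert_valid": True,
      "issuer_alt_name": "XYZ.qualified.trust.admin.ec"}
print(atv_verify(tx, policy, tspa))   # step 10: result returned to verifier
```

In the real architecture each stubbed step involves cryptographic validation and DNSSEC-verified responses, which the ATV also retains as receipts justifying its decision.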
<fig id="F12-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-2">Figure <xref linkend="F12-2" remap="12.2"/></link></label>
<caption><para>Sequence diagram for trust publication of a qualified signature (Boolean), [2].</para></caption>
<graphic xlink:href="graphics/ch012_fig002.jpg"/>
</fig>
<para>The basic structure of the information flow for the other basic scenarios is similar. For qualified seals, qualified identities, or qualified timestamps, it is mainly the domain name structure that differs. For trust translation and trust delegation, some additional steps are required using the Trust Translation Authority and the Delegation Publisher, respectively.</para>
</section>

<section class="lev1" id="sec12-4">
<title>12.4 Trust Scheme Publication Authority</title>
<para>Knowing which trust scheme the issuer of the signer&#8217;s certificate complies with is critical in order to verify whether an electronic transaction complies with the user&#8217;s trust policy. It shows which security controls and security requirements are fulfilled by the certificate issuer and thus indicates the security quality of the certificate that is used, e.g. for signing a document. The Trust Scheme Publication Authority (TSPA) is therefore an important component of the LIGHTest reference architecture. It enables discovery and verification of trust scheme memberships. Trust scheme publications are always associated with lists that indicate the membership of an entity in the referred-to trust scheme. The described setup, which involves a trust list and a trust list provider, aligns well with existing trust list standards (e.g. ETSI TS 119 612 [12]).</para>
</section>

<section class="lev2" id="sec12-4-1">
<title>12.4.1 Trust Schemes and Trust Scheme Publications</title>
<para>A trust scheme itself can, for example, be constituted by requirements on information security processes, processes for issuance or revocation, requirements towards used technologies, or simply one single one-dimensional requirement, e.g. the geographical location of an entity. While some trust schemes, such as ETSI_EN_319_401 [13], just flatly lay out managerial requirements, trust schemes such as ISO/IEC 29115:2013 [14] further use different levels of assurance to define which requirements must be met to comply with the trust scheme. In summary, this means that a trust scheme can be published as a boolean trust scheme publication (e.g. [13]) or an ordinal trust scheme publication (e.g. [14]) (see Table 12.1). Boolean trust scheme publications indicate the entities that comply with the requirements of the trust scheme, and thus are members of the trust scheme. Ordinal trust scheme publications indicate the entities that comply with the requirements of an ordinal aspect (e.g. a level of assurance) of the trust scheme.</para>

<table-wrap position="float" id="T12-1">
<label><link linkend="T12-1">Table <xref linkend="T12-1" remap="12.1"/></link></label>
<caption><para>Types of trust scheme publications in LIGHTest [2]</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr valign="top"><td>Type of Trust Scheme Publication</td><td>Example</td><td>Verifiable Information</td></tr>
</thead>
<tbody>
<tr valign="top"><td>Boolean</td><td>ETSI_EN_319_401</td><td>Compliance of an entity to a trust scheme</td></tr>
<tr valign="top"><td>Ordinal</td><td>LoA4.ISO29115</td><td>Compliance of an entity to an ordinal value of a trust scheme</td></tr>
<tr valign="top"><td>Tuple-Based</td><td>{(authentication:2Factor), (identityProofing: inPerson)}</td><td>Requirements of a trust scheme</td></tr>
</tbody>
</table>
</table-wrap>
<para>Neither boolean nor ordinal trust scheme publications provide any information on the requirements of the trust scheme, or on the ordinal value (e.g. Level of Assurance) represented by the publication. To fill this gap, tuple-based trust scheme publications provide the requirements of a trust scheme in the form of attributes and values.</para>
<para>For this purpose, the development and publication of a unified Data Model derived from existing trust schemes (e.g. eIDAS) is needed, in which each requirement is explicitly represented by one tuple. This provides a unified view of the requirements of trust schemes, which can be used within the TSPA. The consolidation and development of this Data Model, which is based on nine existing trust schemes, is presented along with possible applications in the field of trust verification in [5]. The unified Data Model comprises the three abstract concepts Credential, Identity, and Attributes, and 98 concepts in total, which can be added to standard trust lists using ETSI TS 119 612.</para>
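<para>A tuple-based publication of the kind shown in Table 12.1 can be sketched as follows. The attribute names follow the table&#8217;s example, while the dictionary layout and the check function are hypothetical illustrations, not part of the actual LIGHTest Data Model.</para>

```python
# Hypothetical sketch: a tuple-based trust scheme publication modelled as
# attribute/value pairs, following the example row of Table 12.1.
scheme_tuples = {
    "authentication": "2Factor",
    "identityProofing": "inPerson",
}

def satisfies(publication, requirements):
    """True if every required attribute is published with the required value."""
    return all(publication.get(attr) == value
               for attr, value in requirements.items())
```

<para>A verifier could then match its own requirements against the published tuples, e.g. requiring two-factor authentication before accepting a certificate issued under the scheme.</para>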
</section>

<section class="lev2" id="sec12-4-2">
<title>12.4.2 Concept for Trust Scheme Publication Authority (TSPA)</title>
<para>The concept of the TSPA in LIGHTest consists of two components. It uses an off-the-shelf DNS Name Server with the DNSSEC extension to enable discovery of the Trust Scheme Provider that operates a trust scheme. The Trust Scheme Provider constitutes the second component of the TSPA. It provides a signed trust list which indicates that a certificate issuer is trusted under the scheme operated by the Trust Scheme Provider, and it further provides the tuple-based representation of a trust scheme. As the DNS Name Server is only used to provide pointers to the locations of resources, rather than storing the resources themselves as DNS resource records, the TSPA is well aligned with existing DNS practices. The use of pointers keeps DNS messages small, which is required for fast response times in the discovery process.</para>
<para>The use of the DNS Name Server system by LIGHTest enables easy and widespread adoption of the approach. Upon receiving an electronic transaction, we assume that the trust scheme of the certificate issuer is unknown. The TSPA therefore provides the capability to discover a trust scheme membership claim for a certificate issuer and to verify this claim. The discovery of a trust scheme membership claim is done using the domain name resolution capabilities of the DNS Name Server. <link linkend="F12-3">Figure <xref linkend="F12-3" remap="12.3"/></link> provides an overview of the concept for trust scheme publishing in the TSPA. Since the TSPA uses the DNS Name Server mainly to point towards the Trust Scheme Provider and the tuple-based representation of a trust scheme, the concept is divided into the DNS records on the DNS Name Server (left side) and the data containers on the Trust Scheme Provider (right side).</para>
<para>The records on the DNS Name Server include a data container for the issuer and for boolean and ordinal trust schemes. Data containers for an issuer are identified by an Issuer Name (indicated by <i>&lt;IssuerName&gt;</i>), and include the name of the associated trust scheme. Data containers for a trust scheme are identified by a SchemeName (indicated by <i>&lt;SchemeName&gt;</i>) in the boolean case, and by an additional LevelName in the ordinal case (indicated by <i>&lt;LevelName&gt;.&lt;SchemeName&gt;</i>). A trust scheme data container includes the Trust Scheme Provider domain name (indicated by <i>&lt;SchemeProviderName&gt;</i>). In addition, the data containers for the issuer, the trust scheme name, and the ordinal level of a trust scheme include certificate constraints, which make it possible to restrict the certificates accepted for signing the trust list, using the SMIMEA DNS resource record. Hence, in the LIGHTest ecosystem, the SMIMEA resource record is used to verify whether the certificate used for signing the trust list is valid. These records on the DNS Name Server were developed within the LIGHTest project as part of a consolidated approach to publishing trust-related information in the DNS.</para>
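<para>The discovery path just described can be illustrated with a small sketch in which a plain dictionary stands in for the DNS Name Server. All names, fields, and the record layout below are hypothetical and merely mimic the issuer and scheme data containers; they are not the exact LIGHTest record formats.</para>

```python
# A dict standing in for the DNS zone; keys mimic <IssuerName>, <SchemeName>,
# and <LevelName>.<SchemeName> data containers (all values are illustrative).
ZONE = {
    # issuer record: points to the trust scheme the issuer claims membership in
    "issuer.example":               {"scheme": "qualified.trust.example"},
    # boolean trust scheme record: points to the Trust Scheme Provider
    "qualified.trust.example":      {"provider": "tsp.example",
                                     "trust_list": "https://tsp.example/tsl.xml"},
    # ordinal trust scheme record: <LevelName>.<SchemeName>
    "loa4.qualified.trust.example": {"provider": "tsp.example",
                                     "trust_list": "https://tsp.example/loa4.xml"},
}

def discover(issuer_name, level_name=None):
    """Resolve issuer -> scheme -> trust list location, as a verifier would via DNS."""
    scheme = ZONE[issuer_name]["scheme"]
    name = f"{level_name}.{scheme}" if level_name else scheme
    record = ZONE[name]
    return record["provider"], record["trust_list"]
```

<para>The returned pointer is then followed to fetch the signed trust list from the Trust Scheme Provider, keeping the DNS response itself small.</para>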
<para>For the publication of tuple-based trust schemes, the tuples are published either in the signed trust list itself or in a separate document referenced by a pointer from the signed trust list. In both cases, no additional DNS entry for the tuple-based trust scheme is required; it uses the same entry as the Trust Scheme Provider.</para>
<fig id="F12-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F12-3">Figure <xref linkend="F12-3" remap="12.3"/></link></label>
<caption><para>Representation of trust scheme publications in the TSPA [5].</para></caption>
<graphic xlink:href="graphics/ch012_fig003.jpg"/>
</fig>

</section>

<section class="lev2" id="sec12-4-3">
<title>12.4.3 DNS-based Trust Scheme Publication and Discovery</title>
<para>The processes of trust scheme publication and discovery of trust lists using DNS are described in detail in [4]. To enable the automatic verification of an electronic transaction using the ATV, the verifier must know where the trust scheme is published, and it is desirable that a CA can publish its membership claim itself. In order to be discoverable in the DNS, each trust service and trust scheme taking part in LIGHTest picks a domain name as its identifier and announces this name in its associated certificates.</para>
<para>To update name servers, two components were introduced: the TSPA (the concept of the TSPA is introduced in Section 12.4.2) and the ZoneManager.</para>
<para>The TSPA component acts as the endpoint for operators, i.e. clients publishing trust schemes. It receives all data relevant for creating the trust scheme via an HTTPS API. It can process links to existing trust schemes (e.g. eIDAS) as well as full trust scheme data. In the first case, the TSPA component creates the DNS entries together with the ZoneManager; in the second case, it additionally stores the trust scheme data locally before creating the DNS entries together with the ZoneManager. The second component, the ZoneManager, acts as the endpoint on the name server and modifies the zone data directly. It also ensures that all zone data is properly signed using an existing DNSSEC setup. The ZoneManager&#8217;s interface is only called from the TSPA component and must never be called by the operator directly. Both components implement a RESTful API that is used by clients to publish the trust scheme information.</para>
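<para>A minimal sketch of the interplay of the two components follows; the class and method names are hypothetical, and the real components expose RESTful HTTPS APIs and DNSSEC-sign the zone data, both of which are omitted here.</para>

```python
class ZoneManager:
    """Name-server-side endpoint; modifies the zone data directly."""
    def __init__(self):
        self.zone = {}

    def upsert(self, name, record):
        # a real ZoneManager would also re-sign the zone with DNSSEC here
        self.zone[name] = record


class TSPA:
    """Operator-facing endpoint; operators never call the ZoneManager directly."""
    def __init__(self, zone_manager):
        self.zm = zone_manager
        self.store = {}  # local storage for full trust scheme data

    def publish(self, scheme_name, provider_domain, scheme_data=None):
        if scheme_data is not None:
            # full trust scheme data: store locally, then point to it via DNS
            self.store[scheme_name] = scheme_data
        self.zm.upsert(scheme_name, {"provider": provider_domain})
```

<para>An operator publishing a scheme thus only ever talks to the TSPA, which in turn drives the ZoneManager, mirroring the separation of responsibilities described above.</para>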
</section>

<section class="lev1" id="sec12-5">
<title>12.5 Trust Policy</title>
<para>As introduced in the Reference Architecture in Section 12.1, a verifier can validate a received electronic transaction based on her individual trust policy and on queries to the LIGHTest reference trust infrastructure. To do so, the verifier has to provide as input the electronic transaction as well as an individual trust policy, which contains the formal instructions for validating the trustworthiness of a given type of electronic transaction. The Policy Authoring and Visualization Tools newly developed in LIGHTest enable even non-technical users to define their trust policies.</para>
<para>A trust policy is a recipe, expressed in a trust policy language, that takes an electronic transaction and potentially multiple trust schemes, trust translation schemes, and delegation schemes as input, and produces a single boolean value (trusted [y/n]) and optionally an explanation (e.g., why the transaction is not trusted) as output. For this purpose, a trust policy language is required: a formal language with well-defined semantics, based on a mathematical formalism, that is used to express the recipe of a trust policy. The trust policy language in LIGHTest (LIGHTest TPL) uses the logic programming language Prolog, restricted to Horn clauses.</para>
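<para>As a toy stand-in for such a policy, written in Python rather than Prolog, the following sketch trusts a transaction if the issuer of its certificate is a member of the required trust scheme, and otherwise returns an explanation. All field names are illustrative and are not actual TPL syntax.</para>

```python
def evaluate_policy(transaction, memberships):
    """Toy policy: output (trusted?, explanation), as a TPL policy would."""
    issuer = transaction["certificate"]["issuer"]
    scheme = transaction["required_scheme"]
    if issuer in memberships.get(scheme, ()):
        return True, None
    # negative result carries the optional explanation
    return False, f"issuer {issuer!r} is not a member of {scheme!r}"
```

<para>The memberships mapping here plays the role of the trust scheme queries answered by the LIGHTest infrastructure at verification time.</para>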
<para>To facilitate the usage of LIGHTest TPL, [6] developed the Graphical Trust Policy Language (GTPL), an easy-to-use interface for the trust policy language TPL proposed by the LIGHTest project. GTPL uses a simple graphical representation whose central metaphor is to treat inputs such as certificates or documents as forms; the policy author describes &#8220;what to look for&#8221; in these forms by putting constraints on the forms&#8217; fields. GTPL closes the gap between languages on a logical-technical level, such as TPL, that require some expertise to use, and very basic interfaces, like the LIGHTest Graphical Layer, that allow only a selection from a set of very basic patterns.</para>
<para>Furthermore, a main goal of the project is to develop and evaluate a trust policy authoring tool, with particular consideration for novice users. Most contributions on usable policy authoring and IT security focus only on the design phase of a tool and on guidelines for making such tools and systems more user-friendly; however, tools also need to be evaluated, not only regarding usability but also regarding user experience. For this purpose, a low-fidelity and a high-fidelity prototype were developed to evaluate the design (for further details see [7]). With the low-fidelity prototype, a usability evaluation was conducted at the beginning of the design phase. After a design iteration, a user experience evaluation was conducted with the high-fidelity prototype, and the lessons learned from the results were taken into account.</para>
</section>

<section class="lev1" id="sec12-6">
<title>12.6 Discussion and Outlook</title>
<para>The LIGHTest reference architecture and Trust Scheme Publication Authority (TSPA) support the implementation of the eIDAS Regulation [8]. They enable the integration of existing trust lists using the global DNS infrastructure. Furthermore, they extend eIDAS towards a global market and towards users from both the public and private sectors. To demonstrate the functionality of the LIGHTest infrastructure, two real-world pilots are conducted within LIGHTest: in the first, LIGHTest is integrated into e-Correos, an existing cloud-based platform for trusted communication; in the second, LIGHTest is integrated into an existing e-Invoicing infrastructure and application scenario, OpenPePPOL.</para>
<para>Furthermore, key components of the LIGHTest infrastructure can be used for the validation and authentication of data in IoT sensor networks, e.g. for predictive maintenance use cases. This is demonstrated in a small organizational sensor network using a Raspberry Pi cluster (see [15]).</para>
<para>LIGHTest supports the UNHCR in exploring ways to digitize its documentation processes, e.g. for the DAFI programme. As the UNHCR deals with many sensitive documents and much sensitive information, it is vital to be able to trust and verify the source of a document after it has been digitized. Digitizing the documents under a trust scheme adds a level of security that not only optimizes the use of the digital documents, but also helps keep them secure. Once a trust scheme is in place, the digital documents created under it can be verified and translated both for internal purposes (with other UNHCR locations and partners) and for external purposes (when the documents are verified by other organizations that trust documents given to them by the UNHCR).</para>
</section>

<section class="lev1" id="sec12-7">
<title>12.7 Summary</title>
<para>Due to the worldwide increase in electronic transactions, there is a strong need for authorities that certify trustworthy electronic identities. Within the EU-funded LIGHTest project, a global trust infrastructure based on DNS is built, in which arbitrary authorities can publish their trust information. In this chapter, a high-level description of the LIGHTest reference architecture, its components, and its application fields is presented. In addition, the Trust Scheme Publication Authority and the Trust Policy are described in more detail.</para>
<para>The reference architecture and the concept for the Trust Scheme Publication Authority fulfil the main general principles and goals required to develop a globally scalable trust infrastructure. Furthermore, they are well aligned with existing standards (e.g. ETSI TS 119 612) and fulfil the requirements for using DNS name servers to build a global trust infrastructure.</para>
<para>In addition to the LIGHTest pilots for e-Correos and OpenPePPOL, there is a multitude of further use cases, e.g. sensor validation in the field of IoT or support for international organizations (e.g. the UNHCR).</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This research is supported financially by the LIGHTest (Lightweight Infrastructure for Global Heterogeneous Trust Management in support of an open Ecosystem of Stakeholders and Trust schemes) project, which is partially funded by the European Union&#8217;s Horizon 2020 research and innovation programme under G.A. No. 700321. We acknowledge the work and contributions of the LIGHTest project partners.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Bruegger, B. P.; Lipp, P.: LIGHTest &#8211; A Lightweight Infrastructure for Global Heterogeneous Trust Management. In (H&#252;hnlein, D. et al., eds.): Open Identity Summit 2016, Rome: GI-Edition, Lecture Notes in Informatics, pp. 15&#8211;26.</para>
<para>[2] Wagner, S.; Kurowski, S.; Laufs, U.; Ro&#223;nagel, H.: A Mechanism for Discovery and Verification of Trust Scheme Memberships: The Lightest Reference Architecture. In (Fritsch, L.; Ro&#223;nagel, H.; H&#252;hnlein, D., eds.): Open Identity Summit 2017. Gesellschaft f&#252;r Informatik, Bonn, 2017.</para>
<para>[3] Wagner, G.; Omolola, O.; More, S.: Harmonizing Delegation Data Formats. In (Fritsch, L.; Ro&#223;nagel, H.; H&#252;hnlein, D., eds.): Open Identity Summit 2017. Gesellschaft f&#252;r Informatik, Bonn, 2017.</para>
<para>[4] Wagner, G.; Wagner, S.; More, S.; Hoffmann, H.: DNS-based Trust Scheme Publication and Discovery. In (Ro&#223;nagel, H.; Wagner, S.; H&#252;hnlein, D., eds.): Accepted for Open Identity Summit 2019. Gesellschaft f&#252;r Informatik, Bonn, 2019.</para>
<para>[5] Wagner, S.; Kurowski, S.; Ro&#223;nagel, H.: Unified Data Model for Tuple-Based Trust Scheme Publication. In (Ro&#223;nagel, H.; Wagner, S.; H&#252;hnlein, D., eds.): Accepted for Open Identity Summit 2019. Gesellschaft f&#252;r Informatik, Bonn, 2019.</para>
<para>[6] M&#246;dersheim, S.; Ni, B.: GTPL: A Graphical Trust Policy Language. In (Ro&#223;nagel, H.; Wagner, S.; H&#252;hnlein, D., eds.): Accepted for Open Identity Summit 2019. Gesellschaft f&#252;r Informatik, Bonn, 2019.</para>
<para>[7] Weinhardt, S.; St. Pierre, D.: Lessons learned &#8211; Conducting a User Experience evaluation of a Trust Policy Authoring Tool. In (Ro&#223;nagel, H.; Wagner, S.; H&#252;hnlein, D., eds.): Accepted for Open Identity Summit 2019. Gesellschaft f&#252;r Informatik, Bonn, 2019.</para>
<para>[8] European Parliament, &#8216;Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC&#8217;, European Parliament, Brussels, Belgium, Regulation 910/2014, 2014.</para>
<para>[9] Hoffman, P.; Schlyter, J.: The DNS-Based Authentication of Named Entities (DANE) Transport Layer Security (TLS) Protocol: TLSA, RFC 6698, DOI 10.17487/RFC6698, 2012, http://www.rfc-editor.org/info/rfc6698</para>
<para>[10] Gudmundsson, O.: Adding Acronyms to Simplify Conversations about DNS-Based Authentication of Named Entities (DANE), RFC 7218, DOI 10.17487/RFC7218, 2014, http://www.rfc-editor.org/info/rfc7218</para>
<para>[11] Hoffman, P.; Schlyter, J.: Using Secure DNS to Associate Certificates with Domain Names for S/MIME, RFC 8162, RFC Editor, May 2017.</para>
<para>[12] ETSI: Electronic Signatures and Infrastructures (ESI); Trusted Lists. Sophia Antipolis Cedex, France, Technical Specification ETSI TS 119 612 V1.1.1, 2013; http://www.etsi.org/deliver/etsi_ts/119600_119699/119612/01.01.01_60/ts_119612v010101p.pdf</para>
<para>[13] ETSI: Electronic Signatures and Infrastructures (ESI); General Policy Requirements for Trust Service Providers. ETSI, Sophia Antipolis Cedex, France, European Standard ETSI EN 319 401, 2016; http://www.etsi.org/deliver/etsi_en/319400_319499/319401/02.01.01_60/en_319401v020101p.pdf</para>
<para>[14] ISO/IEC 29115: Information technology &#8211; Security techniques &#8211; Entity authentication assurance framework. ISO/IEC, Geneva, CH (2013).</para>
<para>[15] Johnson-Jeyakumar, I.-H.; Wagner, S.; Ro&#223;nagel, H.: Implementation of Distributed Lightweight Trust Infrastructure for Automatic Validation of Faults in an IoT Sensor Network. In (Ro&#223;nagel, H.; Wagner, S.; H&#252;hnlein, D., eds.): Accepted for Open Identity Summit 2019. Gesellschaft f&#252;r Informatik, Bonn, 2019.</para>
</section>
</chapter>

<chapter class="chapter" id="ch013" label="13" xreflabel="13">
<title>Secure and Privacy-Preserving Identity and Access Management in CREDENTIAL</title>
<para><b>Peter Hamm<sup>1</sup>, Stephan Krenn<sup>2</sup> and John Soren Pettersson<sup>3</sup></b></para>
<para><sup>1</sup>Goethe University Frankfurt, Germany</para>
<para><sup>2</sup>AIT Austrian Institute of Technology GmbH, Austria</para>
<para><sup>3</sup>Karlstad University, Sweden</para>
<para>E-mail: peter.hamm@m-chair.de; stephan.krenn@ait.ac.at; johnsoren.pettersson@kau.se</para>
<para>In an increasingly interconnected world, establishing trust between end users and service providers with regards to privacy and data protection is becoming increasingly important. Consequently, CREDENTIAL, funded under the European Union&#8217;s H2020 framework programme, was dedicated to the development of a cloud-based service for identity provisioning and data sharing. The system aimed at offering both high confidentiality and privacy guarantees to the data owner, and high authenticity guarantees to the receiver. This was achieved by integrating advanced cryptographic mechanisms into standardized authentication protocols. The developed solutions were tested in pilots from three critical sectors, which proved that high user convenience, strong security, and practical efficiency can be achieved at the same time through a single system.</para>

<section class="lev1" id="sec13-1">
<title>13.1 Introduction</title>
<para>Over the last decade, the availability and use of the Internet as well as the demand for digital services have massively increased. This demand has already reached critical and high-assurance domains like governmental services, healthcare, or business correspondence. These domains have particularly high requirements concerning privacy and security, as they process highly sensitive user data, and thus they need to be equipped with various mechanisms for securing access.</para>
<para>Handling all the different authentication and authorization mechanisms requires user-friendly support provided by identity management (IdM) systems. However, such systems have recently experienced a paradigm shift themselves. While classical IdM systems used to be operated locally within organizations as custom-tailored solutions, nowadays identity and access management are often provided &#8220;as a service&#8221; by major cloud providers from different sectors such as search engines, social networks, or online retailers. Connected services can leverage the user identity base of such companies for authentication or identification of users.</para>
<para>In addition, many of these service providers not only allow users to authenticate themselves towards a variety of cloud services, but also enable them to store arbitrary other, potentially sensitive, data on their premises and to share this data with other users in a flexible way, while giving the owner full control over who can access their data.</para>
<para>Unfortunately, virtually all existing solutions suffer from at least one of the following two drawbacks. Firstly, upon authentication, a service provider (a.k.a. relying party) is only assured by the IdP service that a user&#8217;s attributes (e.g., name, birth date, etc.) are correct; it does not receive any formal authenticity guarantees that these attributes were indeed extracted from, e.g., a governmentally issued certificate. That is, the relying party needs to make assumptions about the trustworthiness of the IdP, which may not be desirable in high-security domains. Secondly, users often do not get formal end-to-end confidentiality guarantees in the sense that the data storage and the IdP have no access to their data. For the IdP this access is technically necessary, as otherwise the IdP could not vouch for the correctness of the claimed attributes; however, it introduces severe risks, e.g., in case of security incidents such as data leaks.</para>
</section>

<section class="lev2" id="sec13-1-1">
<title>13.1.1 CREDENTIAL Ambition</title>
<para>The main ambition of the CREDENTIAL project was to overcome these limitations by designing and implementing a cloud-based identity and access management system which upholds privacy and data confidentiality at all times while simultaneously giving the relying party high and formal authenticity guarantees on the received data.</para>
<para>More precisely, the system aims to put users into full control over their data. They can share digitally signed data with relying parties in its entirety or in parts, thereby realizing the minimum disclosure principle.</para>
<para>Furthermore, all exchanged data is encrypted end-to-end, without the cloud service provider being able to access it. By being able to plausibly deny having access to the data, the service provider can build its business strategy around this advantageous security property. At the same time, relying parties are guaranteed that the data they receive from the identity provider is authentic and was indeed issued, e.g., by a public authority, thereby reducing the necessary amount of trust in the IdP with regard to the correctness of the provided data. This also holds true if only parts of a signed document are shared with the relying party.</para>
</section>

<section class="lev1" id="sec13-2">
<title>13.2 Cryptographic Background</title>
<para>Before being able to describe how CREDENTIAL achieved its main ambition, we will briefly recap the necessary cryptographic primitives on a high level. For more detailed background information, we refer to the original literature.</para>
</section>

<section class="lev2" id="sec13-2-1">
<title>13.2.1 Proxy Re-encryption</title>
<para>In conventional public key encryption schemes, a user Alice holds a public key <i>pk<sub>A</sub></i> and a corresponding secret key <i>sk<sub>A</sub></i>. When another user Bob wants to send a message to Alice, he encrypts a message <i>m</i> under <i>pk<sub>A</sub></i> and sends the resulting ciphertext <i>c<sub>A</sub></i> to Alice, who can then decrypt the ciphertext using her secret key. Unfortunately, this technique is not practical for data sharing applications: assume that Alice stores her confidential data in encrypted form on a cloud platform. In order to share the data with Bob and Charlie, she would need to download the ciphertexts, decrypt them locally, and encrypt them again under the right public keys, say <i>pk<sub>B</sub></i> and <i>pk<sub>C</sub></i>.</para>
<para>This challenge is overcome by proxy re-encryption, originally introduced by Blaze et al. [3], and later refined by Ateniese et al. [1] and Chow [6], among others. Using such schemes, Alice can use her secret key and a receiver&#8217;s public key to compute a re-encryption key <i>rk<sub>A&#8594;B</sub></i>. Using this key, a proxy can translate a ciphertext <i>c<sub>A</sub></i> encrypted for Alice into a ciphertext <i>c<sub>B</sub></i> for Bob, without learning any information about the message contained in the ciphertext beyond what is already revealed by the ciphertext itself (e.g., the size of the message).</para>
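<para>The mechanics can be illustrated with a toy ElGamal-style construction in the spirit of [3]. The group parameters below are far too small to be secure, messages must lie in the order-<i>q</i> subgroup, and (unlike the schemes discussed above, where Alice only needs the receiver&#8217;s public key) this simplified version computes the re-encryption key from both secret keys; it is a sketch of the principle only.</para>

```python
import random

p, q, g = 467, 233, 4  # toy safe prime p = 2q + 1; g generates the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def encrypt(pk, m):
    # ElGamal variant: ciphertext (m * g^r, pk^r)
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)  # (g^(sk*r))^(1/sk) = g^r
    return c1 * pow(g_r, -1, p) % p

def rekey(sk_a, sk_b):
    # rk = sk_B / sk_A (mod q); simplified -- real schemes avoid sharing sk_B
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, ct):
    # the proxy turns pk_A^r into pk_B^r without seeing the message
    c1, c2 = ct
    return (c1, pow(c2, rk, p))
```

<para>Note that the proxy only exponentiates the second ciphertext component; the blinded message <i>m&#183;g<sup>r</sup></i> passes through untouched, which is why the proxy learns nothing about <i>m</i>.</para>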
<para>Within CREDENTIAL, proxy re-encryption is used to enable end-to-end encrypted data sharing without negatively affecting usability or efficiency on the end-user side, as the computation is outsourced to the CREDENTIAL Wallet.</para>
</section>

<section class="lev2" id="sec13-2-2">
<title>13.2.2 Redactable Signatures</title>
<para>Traditional digital signature schemes allow the receiver of a signed message to verify the authenticity of the document. That is, a signer first uses his secret signing key <i>sk</i> to sign a message <i>m</i>, obtaining a signature <i>sig</i>. Now, a receiver, having access to <i>m</i>, <i>sig</i>, and the signer&#8217;s public verification key <i>vk</i> can verify that the message has not been altered in any way since the signature has been generated. In particular, any editing or deletion of message parts would be detected, as the verification process would fail.</para>
<para>While this is a very useful primitive in many applications, it is often too restrictive when developing privacy-preserving applications. For instance, when aiming for selective disclosure in authentication processes, the holder of a signed electronic identity document is not able to blank out the information he does not want to reveal to the receiver.</para>
<para>Redactable signatures [16] solve this problem. In such schemes, the signer can label blocks of a message <i>m</i> as admissible when creating a signature <i>sig</i>. Now, any party having access to <i>m</i> and <i>sig</i> can redact admissible message blocks and update the signature to a signature that will still verify for the altered message, without requiring any secret key material. However, no other modifications than redacting admissible blocks (such as deletion of other blocks or parts of blocks, or arbitrary updates to the messages) can be performed without breaking the validity of the signature. Thus, the receiving party can rest assured that the received data blocks are authentic and have been signed by the holder of the secret key.</para>
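<para>The principle can be sketched with a salted hash commitment per message block: the signature covers the list of block digests, so a block can later be replaced by its digest without invalidating the signature. The keyed hash below merely stands in for a real digital signature, and the construction as a whole is an illustration, not the scheme of [16].</para>

```python
import hashlib
import os

def H(*parts):
    """SHA-256 over the concatenation of string parts."""
    h = hashlib.sha256()
    for part in parts:
        h.update(part.encode())
    return h.hexdigest()

def sign(blocks, signing_key):
    # commit to every block with a fresh salt, then "sign" the digest list
    # (the keyed hash stands in for a real signature over the digests)
    salts = [os.urandom(8).hex() for _ in blocks]
    digests = [H(s, b) for s, b in zip(salts, blocks)]
    return {"blocks": list(blocks), "salts": salts,
            "redacted": {}, "sig": H(signing_key, *digests)}

def redact(signed, i):
    # replace block i by its digest; no secret key material is needed
    out = {"blocks": list(signed["blocks"]), "salts": list(signed["salts"]),
           "redacted": dict(signed["redacted"]), "sig": signed["sig"]}
    out["redacted"][i] = H(out["salts"][i], out["blocks"][i])
    out["blocks"][i] = out["salts"][i] = None
    return out

def verify(signed, signing_key):
    # recompute the digest list, using the stored digest for redacted blocks
    digests = [signed["redacted"][i] if b is None else H(s, b)
               for i, (b, s) in enumerate(zip(signed["blocks"], signed["salts"]))]
    return signed["sig"] == H(signing_key, *digests)
```

<para>The salt hides a redacted block&#8217;s content from dictionary guessing, while any edit to a visible block changes its digest and breaks verification, mirroring the properties described above.</para>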
</section>

<section class="lev1" id="sec13-3">
<title>13.3 Solution Overview</title>
<para>To realize the project&#8217;s ambition, the project consortium developed a cloud-based platform called the CREDENTIAL Wallet. Users can access and manage their account using a mobile application, the CREDENTIAL App.</para>
<para>In the following, we describe the main steps performed by the actors involved in the CREDENTIAL authentication flow:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>A user obtains a digital certificate on his attributes from an issuer, which could be a public authority attesting the user&#8217;s birth date or nationality, but also a service provider signing the expiration date of a subscription. This is done by letting the issuer sign the user&#8217;s attributes using a redactable signature scheme.</para></listitem>
<listitem><para>The user then encrypts the received certificate using his public encryption key and uploads this data to the CREDENTIAL Wallet.</para></listitem>
<listitem><para>When a relying party &#8211; either another user or a service provider &#8211; requests access to the user&#8217;s data for the first time, the user computes a re-encryption key from his public key to that of the relying party. To do so, the user employs the CREDENTIAL App, which fetches the receiver&#8217;s public key, while the user&#8217;s secret key is locally stored. The App then sends the re-encryption key to the CREDENTIAL Wallet, where it is stored in a dedicated key storage component. For subsequent access requests from the same relying party no fresh key material needs to be generated until a potential key update.</para></listitem>
<listitem><para>Now, when the relying party accesses the data, the user receives a notification through the CREDENTIAL App. The user selects which attributes to reveal to the relying party and which ones to blank out. Having received the selection, the CREDENTIAL Wallet redacts the defined attributes and re-encrypts the resulting ciphertext for the receiver.</para></listitem>
<listitem><para>Having received the re-encrypted and redacted data, the relying party decrypts the ciphertext using its own secret key and verifies the signature on the received attributes. If the verification succeeds, the receiver is assured that the revealed information was indeed signed by the issuer and continues, e.g., by granting the user access to the requested resource. If the verification fails, authentication was unsuccessful and the relying party aborts.</para></listitem>
</itemizedlist>
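<para>The steps above can be condensed into a plain-data simulation in which string tags stand in for encryption, re-encryption keys, and the issuer&#8217;s redactable signature; all names are illustrative and hide the actual cryptography.</para>

```python
def issue(attributes):
    # the issuer signs the user's attributes (a redactable signature in CREDENTIAL)
    return {"attrs": dict(attributes), "sig": "signed-by-issuer"}

def encrypt_for(credential, key_owner):
    # upload to the Wallet: data is "encrypted" under the user's own key
    return {"for": key_owner, "cred": credential}

def wallet_process(ciphertext, re_key, reveal):
    # the Wallet redacts undisclosed attributes and re-encrypts for the receiver
    src, dst = re_key
    assert ciphertext["for"] == src
    cred = ciphertext["cred"]
    kept = {k: v for k, v in cred["attrs"].items() if k in reveal}
    return {"for": dst, "cred": {"attrs": kept, "sig": cred["sig"]}}

def relying_party_accepts(ciphertext, own_key):
    # decrypt (here: check the tag) and verify the issuer's signature
    return (ciphertext["for"] == own_key
            and ciphertext["cred"]["sig"] == "signed-by-issuer")
```

<para>Even in this stripped-down form, the relying party only ever sees the revealed attributes, yet the issuer&#8217;s signature travels with them end to end.</para>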
<para>An overview of the described data flow is given in <link linkend="F13-1">Figure <xref linkend="F13-1" remap="13.1"/></link>. In the case that a user wants to share non-authentic data with another user, the process is simplified, in the sense that all steps related to signature generation, redaction, and verification are omitted.</para>
<fig id="F13-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-1">Figure <xref linkend="F13-1" remap="13.1"/></link></label>
<caption><para>Abstract data flow in CREDENTIAL [10].</para></caption>
<graphic xlink:href="graphics/ch013_fig001.jpg"/>
</fig>

</section>

<section class="lev2" id="sec13-3-1">
<title>13.3.1 Added Value of the CREDENTIAL Wallet</title>
<para>The described data flow and implementation of the CREDENTIAL Wallet brings various benefits for all actors in the ecosystem of identity and access management [2, 11].</para>
<para><b>Benefits for end users</b>. The end users of the CREDENTIAL Wallet benefit in various ways from the fact that the CREDENTIAL Wallet and all related components were developed according to the privacy-by-design principle. For instance, the necessary trust in the IdP provider can be significantly reduced, as the provider no longer has access to any of the user&#8217;s data; besides protecting against internal threats such as malicious system administrators, this also shields the user against security incidents such as data leaks caused by active attacks or occurring during hardware decommissioning. Users are put back in full control of their data and can selectively disclose parts of their identity information to the service provider; this is enforced on a technical level, not merely on a policy level. Furthermore, the user needs access to his or her secret key material when granting a relying party access to his or her attributes for the first time, but not for subsequent authentications. In particular, the user can store the secret key on a trusted mobile device, but does not need to carry it along, e.g., when leaving for vacation. Finally, due to the implemented multi-factor authentication mechanisms, accessing a service from an insecure device (e.g., a shared PC) under the control of a potential adversary does not enable the adversary to impersonate the user in subsequent authentications to the same or other services.</para>
<para><b>Benefits for CREDENTIAL Wallet providers</b>. Compared to traditional providers of identity and access management systems, providers of the CREDENTIAL Wallet benefit from the end-to-end encryption mechanisms used in our solution, and they can build their business models around the increased security features and guarantees. By not having access to sensitive user data, the liability risk is reduced significantly, and it becomes easier to comply with legal regulations such as the General Data Protection Regulation (GDPR).</para>
<para><b>Benefits for relying parties</b>. The main benefit for relying parties is that they obtain formal authenticity guarantees on the received data, as they can verify that it was indeed cryptographically signed by a valid issuer. Consequently, they can significantly reduce the trust they need to place in the identity provider. Furthermore, the CREDENTIAL Wallet was designed with maximum interoperability with existing industry standards for entity authentication (e.g., OAuth) in mind. This simplifies the integration into existing schemes substantially compared to other solutions following an ad-hoc design.</para>
</section>

<section class="lev1" id="sec13-4">
<title>13.4 Showcasing CREDENTIAL in Real-World Pilots</title>
<para>A main objective of the CREDENTIAL project was not only to design the CREDENTIAL Wallet, improve and adapt the required technologies, and develop the necessary components, but also to evaluate the usability, stability, and efficiency of the applications in different real-world application domains from critical sectors.</para>
<para>In the following, we give a brief overview of the different pilot domains and our conclusions based on representative pilot users. Preliminary descriptions of the pilots can also be found in [8, 11].</para>
</section>

<section class="lev2" id="sec13-4-1">
<title>13.4.1 Pilot Domain 1: eGovernment</title>
<para>CREDENTIAL&#8217;s eGovernment pilot considered citizens and professionals who wish to authenticate themselves towards services offered by a public authority in a highly transparent way that gives them full control over which data goes where. More precisely, the project partners integrated the CREDENTIAL Wallet into SIAGE, a web portal hosted by our project partner Lombardia Informatica S.p.A. When visiting SIAGE&#8217;s login page, users were offered the option to connect using their CREDENTIAL account. When selecting this option, they were redirected to an OpenAM component developed within the project, and an OAuth2 authentication flow was initiated. The users received a notification on their mobile phone and were asked to approve the release of the information requested by the SIAGE system for authentication. Upon approval, the CREDENTIAL Wallet re-encrypted and redacted the appropriate user attributes before forwarding the resulting authentication token to SIAGE, which decrypted the data and verified its authenticity.</para>
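The redirect that initiates the OAuth2 flow described above can be sketched as follows. This is a minimal, generic authorization-code request builder; the endpoint, client identifier, redirect URI, and scope names are hypothetical placeholders, not actual values from the SIAGE integration.

```python
# Hypothetical sketch of the first step of an OAuth2 authorization-code flow:
# the service provider redirects the user's browser to the wallet's
# authorization endpoint. All URLs and identifiers below are placeholders.
from urllib.parse import urlencode
import secrets

def build_authorization_url(authorize_endpoint, client_id, redirect_uri, scopes):
    """Build the redirect URL that starts the authorization-code flow."""
    state = secrets.token_urlsafe(16)  # CSRF protection, checked on return
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
    }
    return f"{authorize_endpoint}?{urlencode(params)}", state

url, state = build_authorization_url(
    "https://wallet.example.org/oauth2/authorize",   # placeholder endpoint
    "siage-portal",                                  # placeholder client id
    "https://siage.example.org/callback",
    ["openid", "given_name", "family_name"],         # attributes to request
)
```

After the user approves the request on the mobile device, the wallet would redirect back to the `redirect_uri` with an authorization code, which the service provider exchanges for the (re-encrypted, redacted) token.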
<para>The pilot was executed using internal IT professionals for technical evaluations, and external focus groups to analyse the usability and perceived security aspects of the solution. The overall opinion of the users was very positive throughout all user groups. A detailed description of the pilot execution is also given in [17].</para>
<para>We want to stress that the analysed functionalities also demonstrate the technical feasibility and efficiency of the CREDENTIAL technologies in the context of many other eGovernment procedures beyond pure authentication, including aspects such as paper de-materialization. Imagine for example an employer who wishes to issue pay slips electronically. This employer, taking the role of the issuer from the authentication case, could sign the pay slip using a redactable signature scheme and label the different blocks of the pay slip as admissible. Now, when a user wants to request financial advantages from the Lombardy region through the SIAGE system, he could log in as described above and then decide to share those parts of the pay slip that are needed for receiving the requested support. For instance, if the support solely depends on gross income, the notification on the mobile phone would request obligatory access only to this data, and the user could decide to blank out information such as spent vacation days or reimbursements of actual travel costs. The data flows would be fully analogous to the authentication flow, and the service provider would only need to integrate the required CREDENTIAL libraries.</para>
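The selective disclosure described above can be illustrated with a toy redactable-signature scheme over labelled blocks, in the spirit of content-extraction signatures [16]. This is a simplified sketch: the HMAC merely stands in for the issuer's real digital signature, and the pay-slip block names are invented for illustration.

```python
# Toy redactable signature: the issuer signs the list of salted block
# commitments; the holder can later blank out any admissible block by
# revealing only its commitment, and the signature still verifies.
# The HMAC is a stand-in for the issuer's real digital signature.
import hashlib, hmac, os

def commit(value: bytes, salt: bytes) -> bytes:
    """Salted hash commitment to one document block."""
    return hashlib.sha256(salt + value).digest()

def issue(blocks, issuer_key):
    """Issuer signs the concatenated commitments of all blocks."""
    entries = [(b, os.urandom(16)) for b in blocks]
    commitments = [commit(b, s) for b, s in entries]
    signature = hmac.new(issuer_key, b"".join(commitments), "sha256").digest()
    return entries, signature

def redact(entries, index):
    """Blank out one block, keeping only its commitment."""
    block, salt = entries[index]
    out = list(entries)
    out[index] = (None, commit(block, salt))
    return out

def verify(entries, signature, issuer_key):
    """The signature verifies over the commitments, redacted or not."""
    commitments = [aux if block is None else commit(block, aux)
                   for block, aux in entries]
    expected = hmac.new(issuer_key, b"".join(commitments), "sha256").digest()
    return hmac.compare_digest(expected, signature)

# Pay-slip example: disclose gross income, redact the vacation days.
entries, sig = issue([b"gross income: 3200 EUR", b"vacation days: 12"], b"issuer-key")
disclosed = redact(entries, 1)
assert verify(disclosed, sig, b"issuer-key")
```

Real schemes additionally mark which blocks are admissible for redaction and use public-key signatures, so that any relying party can verify without the issuer's secret.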
</section>

<section class="lev2" id="sec13-4-2">
<title>13.4.2 Pilot Domain 2: eHealth</title>
<para>The eHealth pilot focused on secure remote data sharing between diabetes patients and their physicians [14]. To this end, two dedicated mobile applications, one for patients and one for doctors, were developed.</para>
<para>The patient&#8217;s app offers a convenient way for users to import medical data from devices such as glycosometers or scales. Like existing health-care applications, it lets users browse through their history and obtain visual representations of their measurements. Whenever a user imports a new value and wishes to store it in his or her patient healthcare record (PHR), this access request is processed by a dedicated component developed within the project, cf. <link linkend="F13-2">Figure <xref linkend="F13-2" remap="13.2"/></link>. This so-called interceptor component redirects all requests through the CREDENTIAL Wallet. Technically, the patient&#8217;s data is encrypted using a symmetric encryption scheme. The symmetric key is then encrypted under the user&#8217;s proxy re-encryption key and stored in his or her CREDENTIAL Wallet account. When selecting a treating doctor, the patient&#8217;s application computes a re-encryption key from the patient to the doctor and deposits it in the CREDENTIAL Wallet&#8217;s key store. Using the doctor&#8217;s app, a diabetologist or general practitioner can then access the encrypted key in the patient&#8217;s account. The CREDENTIAL Wallet re-encrypts the ciphertext, and the doctor receives the secret key that was used to encrypt the data in the PHR. After retrieving the encrypted data from the PHR, the doctor can decrypt it and analyse the patient&#8217;s measurements.</para>
<fig id="F13-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F13-2">Figure <xref linkend="F13-2" remap="13.2"/></link></label>
<caption><para>Architecture of the CREDENTIAL eHealth pilot (cf. also [14]).</para></caption>
<graphic xlink:href="graphics/ch013_fig002.jpg"/>
</fig>
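The hybrid data flow described above can be sketched with a toy BBS-style proxy re-encryption scheme [3] used to encapsulate the symmetric key. The group parameters are tiny demonstration values, the XOR keystream is a stand-in for a real symmetric cipher, and none of this is the project's actual implementation.

```python
# Toy sketch of the hybrid pattern: the record is encrypted symmetrically,
# and the key material is encapsulated under ElGamal-based proxy
# re-encryption (Blaze-Bleumer-Strauss style [3]), so the wallet can
# transform the patient's ciphertext for the doctor without ever seeing
# the key. Parameters are tiny demo values and NOT secure.
import hashlib, secrets

P, Q, G = 2039, 1019, 4        # p = 2q + 1; G generates the order-q subgroup

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)

def encapsulate(pk):
    """Return (symmetric key, ciphertext encapsulating a random element)."""
    k = pow(G, secrets.randbelow(Q - 1) + 1, P)      # random group element
    r = secrets.randbelow(Q - 1) + 1
    ct = ((k * pow(G, r, P)) % P, pow(pk, r, P))     # (k*g^r, g^(a*r))
    return hashlib.sha256(str(k).encode()).digest(), ct

def rekey(sk_from, sk_to):
    return (sk_to * pow(sk_from, -1, Q)) % Q         # rk = b / a mod q

def reencrypt(ct, rk):
    c1, c2 = ct
    return c1, pow(c2, rk, P)                        # g^(a*r) -> g^(b*r)

def decapsulate(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, Q), P)                 # recover g^r
    k = (c1 * pow(g_r, -1, P)) % P
    return hashlib.sha256(str(k).encode()).digest()

def xor_stream(key, data):                           # demo symmetric cipher
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Patient encrypts a measurement; the wallet re-encrypts for the doctor.
sk_patient, pk_patient = keygen()
sk_doctor, _ = keygen()
key, wrapped = encapsulate(pk_patient)          # stored in the wallet account
record = xor_stream(key, b"blood glucose: 6.1 mmol/l")
rk = rekey(sk_patient, sk_doctor)               # deposited in the key store
key_for_doctor = decapsulate(sk_doctor, reencrypt(wrapped, rk))
assert xor_stream(key_for_doctor, record) == b"blood glucose: 6.1 mmol/l"
```

The essential property mirrored here is that the wallet only ever handles `wrapped` and `rk`; the plaintext key and record never leave the endpoints.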
<para>Furthermore, the doctor can also provide feedback to the patient, e.g., by adding lab values such as HbA1c, or by providing treatment recommendations.</para>
<para>After overcoming initial stability and efficiency problems, the feedback received from the external users and doctors was highly positive, in particular concerning the perceived security guarantees of the developed solution. One of our main conclusions is that it is possible to provide sophisticated end-to-end security solutions to the user in a way that is almost fully transparent and does not negatively affect usability.</para>
</section>

<section class="lev2" id="sec13-4-3">
<title>13.4.3 Pilot Domain 3: eBusiness</title>
<para>The eBusiness pilot, documented in detail by Pallotti et al. [13], covered three use cases. The first use case allowed users to securely authenticate themselves towards an eCommerce platform, while the second use case enabled them to retrieve their data from the CREDENTIAL Wallet and share it with a service provider to subscribe to new services. From a technical point of view, these use cases are closely related to the eGovernment pilot described above, and we will focus on the third use case in the following.</para>
<para>In this use case, CREDENTIAL&#8217;s proxy re-encryption libraries were integrated into InfoCert&#8217;s Legalmail application, a certified mail service providing the same level of legal assurance as paper-based registered mail. The use case addressed the issue of forwarding encrypted emails to a deputy in case of absence: using classical email encryption technologies, the sender would need to be notified that the intended receiver is currently unavailable and would have to resend the mail to the defined deputy. The only way to avoid this additional interaction would be to share the receiver&#8217;s secret decryption key with the deputy, which however poses significant security risks and requires very high trust assumptions. Using proxy re-encryption, a Legalmail client can define a deputy and deposit a re-encryption key at the mail server. Upon receiving an encrypted mail, the server re-encrypts the message and forwards it to the deputy, who can decrypt it using his own secret key. While the sender does not need to be actively involved in this process, he still receives a notification for transparency reasons.</para>
<para>The test users involved in the piloting phase showed genuine interest in the added security provided by CREDENTIAL. The possibility of exchanging confidential messages with a certified mail service was highly appreciated. In addition, the pilot showed that the CREDENTIAL Wallet not only provides meaningful features &#8220;as a whole&#8221;, but also that single components of the system can successfully be integrated into other contexts.</para>
</section>

<section class="lev1" id="sec13-5">
<title>13.5 Conclusion and Open Challenges</title>
<para>CREDENTIAL&#8217;s main ambition was to develop a privacy-preserving, end-to-end secure, and authentic data-sharing platform with integrated identity provisioning functionalities. To achieve this goal, the project consortium analysed, improved, and integrated security technologies from different domains, including cryptography and multi-factor authentication. Furthermore, the entire development process was accompanied by privacy experts to guarantee privacy-by-design and privacy-by-default, as well as by usability experts to ensure that end users are able to interact with the system efficiently and conveniently. Finally, the developed CREDENTIAL Wallet was tested through pilots within the highly sensitive domains of eGovernment, eHealth, and eBusiness, where the real-world usability and applicability of the developed solutions were successfully demonstrated.</para>
</section>

<section class="lev2" id="sec13-5-1">
<title>13.5.1 Recommendations on Usability and Accessibility</title>
<para>Within the project, we also studied ways to facilitate the adoption of privacy-friendly solutions for identity management and data sharing. It turned out that users are often unaware of the privacy issues of existing IdP solutions. Our analyses suggest that video tutorials can be an efficient way to inform users: statistical tests showed significant differences in the correctly identified advantages between participants who received a tutorial on single sign-on and those who did not. Moreover, participants perceived a more elaborate user interface, which supported them in making more informed decisions, as more usable [9].</para>
<para>Regarding accessibility, we believe that the European Directive (EU) 2016/2102 on the accessibility of websites and mobile applications of public sector bodies will inspire a development where assistive technology can seamlessly merge with IdP apps. A mobile application for the CREDENTIAL Wallet is an intermediary for the services benefitting from the Wallet. Thus, the public sector bodies &#8211; which all have to live up to the Directive &#8211; must rely on IdP services that also meet the Directive&#8217;s requirements. This will in turn make it easy for service providers from the private sector to offer high levels of accessibility, as they benefit from users using these IdPs. Furthermore, the accessibility analysis provided within the CREDENTIAL project [7] can serve as an example for future developers of apps for services like the CREDENTIAL Wallet. One should also realise that further legal analysis of the public sector&#8217;s liabilities in accessible interactive communication might be needed.</para>
</section>

<section class="lev2" id="sec13-5-2">
<title>13.5.2 Open Challenges</title>
<para>During the project duration, many challenges regarding design, efficiency, or understanding of user attitudes were successfully overcome. Nevertheless, we would like to briefly discuss two remaining challenges in the following.</para>
<para><b>Metadata privacy</b>. From a technical point of view, metadata privacy is one of the main challenges that still needs to be addressed in cloud-based solutions such as the CREDENTIAL Wallet. While fundamental aspects such as the linkability of authentication processes in cloud-based solutions were successfully addressed [12], the operator of the CREDENTIAL Wallet may still be able to infer sensitive information, e.g., who is sharing data with whom, or which data is accessed by whom and how often. The cryptographic literature contains several approaches to tackle these challenges, such as private information retrieval (cf. [4] and the references therein) or oblivious transfer (cf. [15] and the references therein). However, to the best of our knowledge, all existing solutions are currently too inefficient for large-scale deployment in real-world systems or would render the entire system too expensive.</para>
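To make the flavour of such techniques concrete, the following is a minimal sketch of classic two-server private information retrieval (not the scheme of [4]): each server sees only a uniformly random index set, so neither learns which record the client fetches, provided the two servers do not collude.

```python
# Toy two-server information-theoretic PIR: the client sends each server a
# subset of indices; the subsets are random and differ exactly at the wanted
# index, so each individual query reveals nothing about the target record.
import secrets

def server_answer(database, subset):
    """Each server XORs together the records at the indices it was sent."""
    acc = bytes(len(database[0]))
    for i in subset:
        acc = bytes(x ^ y for x, y in zip(acc, database[i]))
    return acc

def pir_fetch(database, want):
    """Client: query both servers, XOR the answers to recover the record."""
    n = len(database)
    subset1 = {i for i in range(n) if secrets.randbelow(2)}  # random subset
    subset2 = subset1 ^ {want}            # differs from subset1 only at `want`
    a1 = server_answer(database, subset1)  # answer from server 1
    a2 = server_answer(database, subset2)  # answer from server 2
    return bytes(x ^ y for x, y in zip(a1, a2))

db = [b"rec:0000", b"rec:0001", b"rec:0002", b"rec:0003"]  # equal-length records
assert pir_fetch(db, 2) == b"rec:0002"
```

The communication cost here is linear in the database size, which already hints at the efficiency problem mentioned above; the schemes in the literature reduce this at the price of heavier cryptographic machinery.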
<para><b>Establishing business models for privacy</b>. A major challenge we faced during the CREDENTIAL project relates to establishing sustainable business models for privacy-preserving solutions. Currently, many major identity provider solutions &#8211; offered by, e.g., major search engine or social network providers &#8211; are free for the end user in the sense that no subscription fee needs to be paid; in turn, the providers gain substantial amounts of data about the user and build their business models around it. Furthermore, several studies have shown that while end users prefer privacy-preserving solutions in different scenarios, they are often not willing to pay for this feature. The successful commercialization of privacy-enhancing systems such as the CREDENTIAL Wallet would thus require a change of thinking on the side of both cloud service providers and end users, which could be triggered by legal regulations such as the General Data Protection Regulation (GDPR) or by information campaigns raising the users&#8217; awareness for privacy-related issues. Alternatively, especially for critical domains such as eHealth or eGovernment, we believe that public authorities (e.g., a ministry of health) could be potential providers of the CREDENTIAL Wallet, so that deployment and maintenance costs would not need to be paid directly by the end users.</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>The CREDENTIAL (Secure Cloud Identity Wallet) project leading to these results has received funding from the European Union&#8217;s Horizon 2020 research and innovation programme under grant agreement No 653454.</para>
<para>The authors would like to thank all partners of the CREDENTIAL consortium for their efforts and work during the entire project duration and beyond. Finally, the authors are grateful to the editors for their efforts during the preparation of this book.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] Giuseppe Ateniese, Kevin Fu, Matthew Green, and Susan Hohenberger. <i>Improved Proxy Re-Encryption Schemes with Applications to Secure Distributed Storage</i>. In NDSS 2005. The Internet Society. 2005.</para>
<para>[2] Charlotte Baccman, Andreas Happe, Felix H&#246;randner, Simone Fischer-H&#252;bner, Farzaneh Karegar, Alexandros Kostopoulos, Stephan Krenn, Daniel Lindegren, Silvana Mura, Andrea Migliavacca, Nicolas Notario McDonnell, Juan Carlos Perez Ba&#252;n, John Soren Pettersson, Anna E. Schmaus-Klughammer, Evangelos Sfakianakis, Welderufael Tesfay, Florian Thiemer, and Melanie Volkamer. <i>D3.1 &#8211; UI Prototypes v1</i>. CREDENTIAL Project Deliverable. 2017.</para>
<para>[3] Matt Blaze, Gerrit Bleumer, and Martin Strauss. <i>Divertible Protocols and Atomic Proxy Cryptography</i>. In EUROCRYPT 1998 (LNCS), Kaisa Nyberg (Ed.), Vol. 1403. Springer, 127&#8211;144. 1998.</para>
<para>[4] Ran Canetti, Justin Holmgren, and Silas Richelson. <i>Towards Doubly Efficient Private Information Retrieval</i>. In TCC (2) 2017 (LNCS), Yael Kalai and Leonid Reyzin (Eds.), Vol. 10678. Springer, 694&#8211;726. 2017.</para>
<para>[5] Pasquale Chiaro, Simone Fischer-H&#252;bner, Thomas Gro&#223;, Stephan Krenn, Thomas Lor&#252;nser, Ana Isabel Mart&#237;nez Garc&#237;a, Andrea Migliavacca, Kai Rannenberg, Daniel Slamanig, Christoph Striecks, and Alberto Zanini. <i>Secure and Privacy-Friendly Storage and Data Processing in the Cloud</i>. In IFIP WG 9.2, 9.5, 9.6/11.7, 11.4, 11.6/SIG 9.2.2 International Summer School 2017 (IFIP AICT), Marit Hansen, Eleni Kosta, Igor Nai-Fovino, and Simone Fischer-H&#252;bner (Eds.), Vol. 526. 153&#8211;170. 2018.</para>
<para>[6] Sherman S. M. Chow, Jian Weng, Yanjiang Yang, and Robert H. Deng. <i>Efficient Unidirectional Proxy Re-Encryption</i>. In AFRICACRYPT 2010 (LNCS), Daniel J. Bernstein and Tanja Lange (Eds.), Vol. 6055. Springer, 316&#8211;332. 2010.</para>
<para>[7] Felix H&#246;randner, Pritam Dash, Stefan Martisch, Farzaneh Karegar, John Soren Pettersson, Erik Framner, Charlotte Baccman, Elin Nilsson, Markus Rajala, Olaf Rode, Florian Thiemer, Alberto Zanini, Alberto Miranda Garcia, Daria Tonetto, Anna Pallotti, Evangelos Sfakianakis, and Anna Schmaus-Klughammer. <i>D3.2 &#8211; UI Prototypes v2 and HCI Patterns</i>. CREDENTIAL Project Deliverable. 2018.</para>
<para>[8] Felix H&#246;randner, Stephan Krenn, Andrea Migliavacca, Florian Thiemer, and Bernd Zwattendorfer. <i>CREDENTIAL: A Framework for Privacy-Preserving Cloud-Based Data Sharing</i>. In ARES. IEEE Computer Society, 742&#8211;749. 2016.</para>
<para>[9] Farzaneh Karegar, Nina Gerber, Melanie Volkamer, and Simone Fischer-H&#252;bner. <i>Helping John to make informed decisions on using social login</i>. In SAC 2018, Hisham M. Haddad, Roger L. Wainwright, and Richard Chbeir (Eds.). ACM, 1165&#8211;1174. 2018.</para>
<para>[10] Farzaneh Karegar, Christoph Striecks, Stephan Krenn, Felix H&#246;randner, Thomas Lor&#252;nser, and Simone Fischer-H&#252;bner. <i>Opportunities and Challenges of CREDENTIAL &#8211; Towards a Metadata-Privacy Respecting Identity Provider</i>. In IFIP WG 9.2, 9.5, 9.6/11.7, 11.4, 11.6/SIG 9.2.2 International Summer School 2016 (IFIP AICT), Anja Lehmann, Diane Whitehouse, Simone Fischer-H&#252;bner, Lothar Fritsch, and Charles D. Raab (Eds.), Vol. 498. 76&#8211;91. 2017.</para>
<para>[11] Alexandros Kostopoulos, Evangelos Sfakianakis, Ioannis Chochliouros, John Soren Pettersson, Stephan Krenn, Welderufael Tesfay, Andrea Migliavacca, and Felix H&#246;randner. <i>Towards the Adoption of Secure Cloud Identity Services</i>. In ARES. ACM, 90:1&#8211;90:7. 2017.</para>
<para>[12] Stephan Krenn, Thomas Lor&#252;nser, Anja Salzer, and Christoph Striecks. <i>Towards Attribute-Based Credentials in the Cloud</i>. In CANS 2017 (LNCS), Srdjan Capkun and Sherman S. M. Chow (Eds.), Vol. 11261. Springer, 179&#8211;202. 2018.</para>
<para>[13] Anna Pallotti, Luigi Rizzo, Romualdo Carbone, Pasquale Chiaro, and Daria Tonetto. <i>D6.6 &#8211; Test and Evaluation of Pilot Domain 3 (eBusiness)</i>. CREDENTIAL Project Deliverable. 2018.</para>
<para>[14] Anna Schmaus-Klughammer, Johannes Einhaus, and Olaf Rode. <i>D6.5 &#8211; Test and Evaluation of Pilot Domain 2 (eHealth)</i>. CREDENTIAL Project Deliverable. 2018.</para>
<para>[15] Peter Scholl. <i>Extending Oblivious Transfer with Low Communication via Key-Homomorphic PRFs</i>. In PKC (1) 2018 (LNCS), Michel Abdalla and Ricardo Dahab (Eds.), Vol. 10769. Springer, 554&#8211;583. 2018.</para>
<para>[16] Ron Steinfeld, Laurence Bull, and Yuliang Zheng. <i>Content Extraction Signatures</i>. In ICISC 2001 (LNCS), Kwangjo Kim (Ed.), Vol. 2288. Springer, 285&#8211;304. 2001.</para>
<para>[17] Alberto Zanini and Andrea Migliavacca. <i>D6.4 &#8211; Test and Evaluation of Pilot Domain 1 (eGovernment)</i>. CREDENTIAL Project Deliverable. 2018.</para>
</section>
</chapter>

<chapter class="chapter" id="ch014" label="14" xreflabel="14">
<title>FutureTrust &#8211; Future Trust Services for Trustworthy Global Transactions</title>
<para><b>Detlef H&#252;hnlein<sup>1</sup>, Tilman Frosch<sup>2</sup>, J&#246;rg Schwenk<sup>2</sup>, Carl-Markus Piswanger<sup>3</sup>, Marc Sel<sup>4</sup>, Tina H&#252;hnlein<sup>1</sup>, Tobias Wich<sup>1</sup>, Daniel Nemmert<sup>1</sup>, Rene Lottes<sup>1</sup>, Stefan Baszanowski<sup>1</sup>, Volker Zeuner<sup>1</sup>, Michael Rauh<sup>1</sup>, Juraj Somorovsky<sup>2</sup>, Vladislav Mladenov<sup>2</sup>, Cristina Condovici<sup>2</sup>, Herbert Leitold<sup>5</sup>, Sophie Stalla-Bourdillon<sup>6</sup>, Niko Tsakalakis<sup>6</sup>, Jan Eichholz<sup>7</sup>, Frank-Michael Kamm<sup>7</sup>, Jens Urmann<sup>7</sup>, Andreas K&#252;hne<sup>8</sup>, Damian Wabisch<sup>8</sup>, Roger Dean<sup>9</sup>, Jon Shamah<sup>9</sup>, Mikheil Kapanadze<sup>10</sup>, Nuno Ponte<sup>11</sup>, Jose Mart&#237;ns<sup>11</sup>, Renato Portela<sup>11</sup>, Cagatay Karabat<sup>12</sup>, Snezana Stojicic<sup>13</sup>, Slobodan Nedeljkovic<sup>13</sup>, Vincent Bouckaert<sup>14</sup>, Alexandre Defays<sup>14</sup>, Bruce Anderson<sup>15</sup>, Michael Jonas<sup>16</sup>, Christina Hermanns<sup>16</sup>, Thomas Schubert<sup>16</sup>, Dirk Wegener<sup>17</sup> and Alexander Sazonov<sup>18</sup></b></para>
<para><sup>1</sup>ecsec GmbH, Sudetenstra&#223;e 16, 96247 Michelau, Germany</para>
<para><sup>2</sup>Ruhr Universit&#228;t Bochum, Universit&#228;tsstra&#223;e 150, 44801 Bochum, Germany</para>
<para><sup>3</sup>Bundesrechenzentrum GmbH, Hintere Zollamtsstra&#223;e 4, A-1030 Vienna, Austria</para>
<para><sup>4</sup>PwC Enterprise Advisory, Woluwedal 18, Sint Stevens Woluwe 1932, Belgium</para>
<para><sup>5</sup>A-SIT, Seidlgasse 22/9, A-1030 Vienna, Austria</para>
<para><sup>6</sup>University of Southampton, Highfield, Southampton S017 1BJ, United Kingdom</para>
<para><sup>7</sup>Giesecke &amp; Devrient GmbH, Prinzregentenstra&#223;e 159, 81677 Munich, Germany</para>
<para><sup>8</sup>Trustable Limited, Great Hampton Street 69, Birmingham B18 6E, United Kingdom</para>
<para><sup>9</sup>European Electronic Messaging Association AISBL, Rue Washington 40, Bruxelles 1050, Belgium</para>
<para><sup>10</sup>Public Service Development Agency, Tsereteli Avenue 67A, Tbilisi 0154, Georgia</para>
<para><sup>11</sup>Multicert &#8211; Servicos de Certificacao Electronica SA, Lagoas Parque Edificio 3 Piso 3, Porto Salvo 2740 266, Portugal</para>
<para><sup>12</sup>Turkiye Bilimsel Ve Teknolojik Arastirma Kurumu, Ataturk Bulvari 221, Ankara 06100, Turkey</para>
<para><sup>13</sup>Ministarstvo unutra&#353;njih poslova Republike Srbije, Kneza Milosa 103, Belgrade 11000, Serbia</para>
<para><sup>14</sup>Ar&#951;s Spikeseed, Rue Nicolas Bove 2B, 1253 Luxembourg, Luxembourg</para>
<para><sup>15</sup>Law Trusted Third Party Service (Pty) Ltd. (LAWTrust), 5 Bauhinia Street, Building C, Cambridge Office Park Veld Techno Park, Centurion 0157, South Africa</para>
<para><sup>16</sup>Federal Office of Administration (Bundesverwaltungsamt), Barbarastr. 1, 50735 Cologne, Germany</para>
<para><sup>17</sup>German Federal Information Technology Centre (Informationstechnikzentrum Bund, ITZBund), Waterloostr. 4, 30169 Hannover, Germany</para>
<para><sup>18</sup>National Certification Authority Rus CJSC (NCA Rus), 8A building 5, Aviamotornaya st., Moscow 111024, Russia</para>
<para>E-mail: detlef.huhnlein@ecsec.de; tilman.frosch@rub.de; jorg.schwenk@rub.de; carl-markus.piswanger@brz.gv.at; marc.sel@be.pwc.com; tina.huhnlein@ecsec.de; tobias.wich@ecsec.de; daniel.nemmert@ecsec.de; rene.lottes@ecsec.de; stefan.baszanowski@ecsec.de; volker.zeuner@ecsec.de; michael.rauh@ecsec.de; juraj.somorovsky@rub.de; vladislav.mladenov@rub.de; cristina.condovici@rub.de; herbert.leitold@a-sit.at; sophie.stalla-bourdillon@soton.ac.uk; niko.tsakalakis@soton.ac.uk; jan.eichholz@gi-de.com; frank-michael.kamm@gi-de.com; jens.urmann@gi-de.com; kuehne@trustable.de; damian@trustable.de; r.dean@eema.org; jon.shamah@eema.org; mkapanadze@sda.gov.ge; nuno.ponte@multicert.com; jose.martins@multicert.com; renato.portela@multicert.com; cagatay.karabat@tubitak.gov.tr; snezana.stojicic@mup.gov.rs; slobodan.nedeljkovic@mup.gov.rs; vincent.bouckaert@arhs-developments.com; alexandre.defays@arhs-developments.com; bruce@LAWTrust.co.za; michael.jonas@bva.bund.de; christina.hermanns@bva.bund.de; thomas.schubert@bva.bund.de; dirk.dirkwegener@itzbund.de; sazonov@nucrf.ru</para>
<para>Against the background of Regulation (EU) No 910/2014 [1] on electronic identification (eID) and trust services for electronic transactions in the internal market (eIDAS), the FutureTrust project<footnote id="fn_1" label="1"> <para>See https://futuretrust.eu</para></footnote>, which is funded within the EU Framework Programme for Research and Innovation (Horizon 2020) under Grant Agreement No. 700542, aims at supporting the practical implementation of the regulation in Europe and beyond. For this purpose, the FutureTrust project will address the need for globally interoperable solutions through basic research with respect to the foundations of trust and trustworthiness, actively support the standardisation process in relevant areas, and provide Open Source software components and trustworthy services which will ease the use of eID and electronic signature technology in real-world applications. The FutureTrust project will extend the existing European Trust Service Status List (TSL) infrastructure towards a &#8220;Global Trust List&#8221;, and develop a comprehensive Open Source Validation Service as well as a scalable Preservation Service for electronic signatures and seals. Furthermore, it will provide components for the eID-based application for qualified certificates across borders, and for the trustworthy creation of remote signatures and seals in a mobile environment. The present contribution provides an overview of the FutureTrust project and invites further stakeholders to actively participate in contributing to the development of future trust services for trustworthy global transactions.</para>

<section class="lev1" id="sec14-1">
<title>14.1 Background and Motivation</title>
<para>There are currently over 160 trust service providers across Europe<footnote id="fn_2" label="2"> <para>See [2, 3] and https://www.eid.as/tsp-map/ for example.</para></footnote>, which issue qualified certificates and/or qualified time stamps. Hence, the &#8220;eIDAS ecosystem&#8221;<footnote id="fn_3" label="3"> <para>See also https://blog.skidentity.de/en/eidas-ecosystem/.</para></footnote> with respect to these basic services is fairly well developed. On the other hand, the provision of qualified trust services for the validation and preservation of electronic signatures and seals as well as for registered delivery and the cross-border recognition of electronic identification schemes have been recently introduced with the eIDAS regulation [1]. However, these services are not yet broadly available in a mature, standardised, and interoperable manner within Europe.</para>
<para>In a similar manner, the practical adoption and especially the cross-border use of eID cards, which have been rolled out across Europe, is &#8211; despite previous and ongoing research and development efforts in pertinent projects such as STORK, STORK 2.0, FutureID, e-SENS, SD-DSS, Open eCard, OpenPEPPOL, and SkIDentity &#8211; still in its infancy. Using a national eID means outside of its home Member State, an opportunity afforded by the new eIDAS regulation, is still challenging and perceived to be complex. In particular, it is often not yet possible <i>in practice</i> to use eID cards from one EU Member State to enrol for a qualified certificate and qualified signature creation device (QSCD) in another Member State.<footnote id="fn_4" label="4"> <para>Note that such a cross-border enrolment for qualified certificates may become especially interesting in combination with remote and mobile signing services, in which no physical SSCD needs to be shipped to the user because the SSCD is realised as a central Hardware Security Module (HSM) hosted by a trusted service provider fulfilling the requirements of [4]. Against the background of the eIDAS regulation (see, e.g., Recital 51 of [1]), one may expect that such a scenario will soon become applicable across Europe and beyond.</para></footnote></para>
<para>In particular, the following problems seem to be not yet sufficiently solved and hence will be addressed in the FutureTrust project:</para>
<para><b>P1. No comprehensive Open Source Validation Service</b></para>
<para>Multiple validation services are available today, ranging from offering revocation information to full validation against a formal validation policy. These services are operated by public- and private-sector actors and allow relying parties to validate signed or sealed artefacts. However, there is currently no freely available, standards-conforming, and comprehensive Validation Service able to verify arbitrary advanced and qualified electronic signatures in a trustworthy manner. To solve this problem, the FutureTrust project will contribute to the development of the missing standards and of such a comprehensive Validation Service.</para>
<para><b>P2. No scalable Open Source Preservation Service</b></para>
<para>The fact that signed objects lose their conclusiveness if cryptographic algorithms become weak induces severe challenges for applications which require maintaining the integrity and authenticity of signed data over long periods of time. Research related to the strength of cryptographic algorithms is addressed in many places, including ECRYPT-NET<footnote id="fn_5" label="5"> <para>https://www.cosic.esat.kuleuven.be/ecrypt/net/</para></footnote>, and does not fall within the scope of FutureTrust. Rather, the FutureTrust project will aim at solving this problem by contributing to the development of the missing standards for long-term preservation and by implementing a scalable Open Source Preservation Service whose processes and workflows ensure that preservation techniques embed the appropriate cryptographic solutions.</para>
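The renewal idea underlying such preservation services can be sketched as follows, loosely following the evidence-record approach of RFC 4998: before a hash algorithm becomes weak, the evidence collected so far is re-hashed with a stronger algorithm and covered by a fresh time stamp. The time stamps below are simulated local values, not qualified time stamps, and the chain format is an invented simplification.

```python
# Simplified sketch of hash-algorithm renewal for long-term preservation:
# each renewal re-hashes the document together with all prior evidence
# using a stronger algorithm, so the chain stays conclusive even after the
# earlier algorithms are broken.
import hashlib, json, time

def initial_evidence(document: bytes, algo="sha256"):
    digest = hashlib.new(algo, document).hexdigest()
    return [{"algo": algo, "digest": digest, "timestamp": time.time()}]

def renew(evidence, document: bytes, new_algo):
    """Cover the document and all prior evidence with a stronger hash."""
    prior = json.dumps(evidence, sort_keys=True).encode()
    digest = hashlib.new(new_algo, document + prior).hexdigest()
    evidence.append({"algo": new_algo, "digest": digest,
                     "timestamp": time.time()})
    return evidence

def verify(evidence, document: bytes):
    """Recompute every link of the renewal chain against the document."""
    for i, entry in enumerate(evidence):
        prior = json.dumps(evidence[:i], sort_keys=True).encode()
        data = document if i == 0 else document + prior
        if hashlib.new(entry["algo"], data).hexdigest() != entry["digest"]:
            return False
    return True

doc = b"signed contract"
ev = renew(initial_evidence(doc), doc, "sha512")
assert verify(ev, doc)
```

A real preservation service would replace each local time stamp with a qualified time stamp from a trust service provider and archive the evidence records alongside the signed objects.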

<para><b>P3. Qualified electronic signatures are difficult to use in mobile environments</b></para>
<para>Today, applying for a qualified certificate involves various paper-based steps. Furthermore, generating a qualified electronic signature typically requires a smart-card-based signature creation device, which is complicated in mobile and cloud-based environments due to the need for middleware and drivers that are often not supported on the mobile device. The FutureTrust project will aim at changing this by creating a Signature Service which supports a variety of local and remote signature creation devices, eID-based enrolment for certificates, and the remote creation of electronic signatures initiated from mobile devices.</para>
<para><b>P4. Legal requirements of a pan-European eID metasystem</b></para>
<para>The first part of the eIDAS-regulation, which deals with eID systems, aims to create a standardised interoperability framework but does not intend to harmonise the respective national eID systems. Instead, it imposes a set of broad requirements, among them the mandatory compliance of all systems with the General Data Protection Regulation (GDPR) [5]. To facilitate compliance with the GDPR, the FutureTrust project will conduct desk research to analyse how privacy and data protection legislation interacts with existing laws, and derive a list of necessary characteristics that an EU eID and eSignatures metasystem should incorporate to ensure compliance.</para>
<para><b>P5. Legally binding electronic transactions with non-European partners are hard to achieve</b></para>
<para>While the eIDAS regulation [1] defines the legal effect of qualified electronic signatures, there is no comparable global legislation, and hence electronic transactions with business partners outside the European Union are challenging with respect to legal significance and interoperability. To work towards a viable solution for this problem, the FutureTrust project will conduct basic research with respect to international legislation, contribute to the harmonization of the relevant policy documents and standards, and build a &#8220;Global Trust List&#8221;, which may form the basis for legally significant electronic transactions around the globe.</para>
<para><b>P6. Scope of eIDAS interoperability framework is limited to EU</b></para>
<para>In a similar manner, the scope of the interoperability framework for electronic identification according to Article 12 of [1] is limited to the EU. Many aspects of an international interoperability framework need to be assessed, especially with regard to the privacy and data protection aspects highlighted above.<footnote id="fn_6" label="6"> <para>For example, data transfers to the US are currently not clearly regulated after the invalidation of the &#8216;Safe Harbor&#8217; agreement by the CJEU (C-362/14). EU officials negotiated a new arrangement, named the &#8216;EU-US Privacy Shield&#8217;, which was halted after a contradictory opinion from the WP29 (WP238).</para></footnote> Against this background, the FutureTrust project will extend the work of pertinent research and large-scale pilot projects to integrate non-European eID solutions in a seamless and trustworthy manner, after defining the requirements and assessing the impact of data transfers beyond the European Union.</para>
<para><b>P7. No formal foundation of trust and trustworthiness</b></para>
<para>There is currently no international legislation that would allow trustworthiness to be &#8220;defined&#8221; in a way that makes eID solutions comparable on an international scale. Instead, scientifically sound formal models must be developed which describe international trust models and, in particular, allow the trustworthiness of different eID services to be compared.</para>
<para>To demonstrate the viability and trustworthiness of these formal models, and show that the developed components can be used in productive environments, the FutureTrust project will implement real world pilot applications in the area of public administration, higher education, eCommerce, eBusiness and eBanking.</para>
</section>

<section class="lev1" id="sec14-2">
<title>14.2 The FutureTrust Project</title>
<para>In order to solve the problems mentioned above, the FutureTrust partners (see Section 14.2.1) have sketched the FutureTrust System Architecture (see Section 14.2.2), which includes several innovative services that are planned to be used in a variety of pilot projects (see Section 14.2.8).</para>
<para>This will in particular include the design and development of a Global Trust List (gTSL) (see Section 14.2.3), a Comprehensive Validation Service (ValS) (see Section 14.2.4), a scalable Preservation Service (PresS) (see Section 14.2.5), an Identity Management Service (IdMS) (see Section 14.2.6) and importantly a Signing and Sealing Service (SigS) (see Section 14.2.7).</para>
</section>

<section class="lev2" id="sec14-2-1">
<title>14.2.1 FutureTrust Partners</title>
<para>The FutureTrust project is carried out by a number of core partners as depicted in <link linkend="F14-1">Figure <xref linkend="F14-1" remap="14.1"/></link>, which include Ruhr-Universit&#228;t Bochum (Germany), ecsec GmbH (Germany), Arhs Spikeseed (Luxembourg), EEMA (Belgium), Federal Computing Centre of Austria (Austria), PricewaterhouseCoopers (PwC) (Belgium), University of Southampton (United Kingdom), Multicert (Portugal), Giesecke &amp; Devrient GmbH (Germany), Trustable Ltd. (United Kingdom), Secure Information Technology Center &#8211; Austria (Austria), Public Service Development Agency (Georgia), T&#252;rkiye Bilimsel ve Teknolojik Ara&#351;t&#305;rma Kurumu (Turkey), LAW Trusted Third Party Services (Pty) Ltd. (South Africa), Ministry of Interior of the Republic of Serbia (Serbia), DFN-CERT Services GmbH, the PRIMUSS cluster consisting of ten Universities of Applied Science, and the Leipzig University (LU) Computing Centre (Germany).</para>

<fig id="F14-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-1">Figure <xref linkend="F14-1" remap="14.1"/></link></label>
<caption><para>FutureTrust partners.</para></caption>
<graphic xlink:href="graphics/ch014_fig001.jpg"/>
</fig>


<para>Furthermore, the FutureTrust project is supported by selected subcontractors and a number of associated partners, which currently include the SAFE Biopharma Association (USA), the Data Processing Center (DPC) of the Ministry of Transport, Communications and High Technologies of the Republic of Azerbaijan, Signicat, SK ID Solutions AS, B.Est Solutions, UITSEC Teknoloji A.S., and Comsign Israel.</para>
</section>

<section class="lev2" id="sec14-2-2">
<title>14.2.2 FutureTrust System Architecture</title>
<para>As shown in <link linkend="F14-2">Figure <xref linkend="F14-2" remap="14.2"/></link>, the FutureTrust system integrates existing and emerging eIDAS Trust Services, eIDAS Identity Services and similar Third Country Trust &amp; Identity Services and provides a number of FutureTrust specific services, which aim at facilitating the use of eID and electronic signature technology in different application scenarios.</para>
<fig id="F14-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-2">Figure <xref linkend="F14-2" remap="14.2"/></link></label>
<caption><para>FutureTrust System Architecture.</para></caption>
<graphic xlink:href="graphics/ch014_fig002.jpg"/>
</fig>
</section>

<section class="lev2" id="sec14-2-3">
<title>14.2.3 Global Trust List (gTSL)</title>
<para>The gTSL will become an Open Source component, which can be deployed together with the other FutureTrust services or as a standalone service, and which makes it possible to manage Trust Service Status Lists for Trust Services and Identity Providers. The gTSL will allow importing the European &#8220;List of the Lists&#8221; (LotL), which is a signed XML document according to [6], together with all national Trust Service Status Lists (TSLs) referenced therein. The LotL is currently published by the European Commission, and the import includes a secure verification of the digital signatures involved. The gTSL will also allow importing Trusted Lists from other geographic regions, such as the Trust List of the Russian Federation<footnote id="fn_7" label="7"> <para>See http://e-trust.gosuslugi.ru/CA/DownloadTSL?schemaVersion=0.</para></footnote>, and it is envisioned that the gTSL will generate a &#8220;virtual US-American Trust List&#8221; from the current set of available cross-certificates. The gTSL will support the traceable assessment of trust-related aspects for potential trust anchors, both with and without known trustworthiness and assurance levels<footnote id="fn_8" label="8"> <para>[1] implicitly defines the levels &#8220;qualified&#8221; and &#8220;non-qualified&#8221; for trust service providers and explicitly introduces in Article 8 the assurance levels &#8220;low&#8221;, &#8220;significant&#8221; and &#8220;high&#8221; for electronic identification schemes.</para></footnote>, by providing claims or proofs of relevant information with respect to the trustworthiness of a trust service. This may give rise to a reputation-based &#8220;web of trust&#8221; for trust services. It is expected that the corroboration of information from relatively independent sources<footnote id="fn_9" label="9"> <para>See [11].</para></footnote> will help to establish trustworthiness.
Furthermore, the gTSL provides a web interface as well as a REST interface allowing a small set of predefined queries, so that the other FutureTrust services or other gTSL deployments can access the validated data. Various options have already been identified for the implementation of the underlying gTSL model. These include traditional models, such as a Trusted Third Party model and a Trust List, as well as innovative models, such as a semantic web ontology and a blockchain ledger.</para>
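<para>To make the import step concrete, the following sketch shows how a gTSL-style importer might extract the locations of the national TSLs referenced by the LotL. The element names follow ETSI TS 119 612 [6], but the inline document is a simplified, hypothetical fragment, and the mandatory verification of the XML signature (e.g. with an XML security library) is omitted here.</para>

```python
import xml.etree.ElementTree as ET

# Namespace used by ETSI TS 119 612 Trusted Lists.
TSL_NS = {"tsl": "http://uri.etsi.org/02231/v2#"}

# Simplified, hypothetical fragment of a "List of the Lists" (LotL);
# a real LotL is signed, and the signature must be verified before use.
LOTL_SAMPLE = """<tsl:TrustServiceStatusList xmlns:tsl="http://uri.etsi.org/02231/v2#">
  <tsl:SchemeInformation>
    <tsl:PointersToOtherTSL>
      <tsl:OtherTSLPointer>
        <tsl:TSLLocation>https://example.org/de/tsl.xml</tsl:TSLLocation>
      </tsl:OtherTSLPointer>
      <tsl:OtherTSLPointer>
        <tsl:TSLLocation>https://example.org/fr/tsl.xml</tsl:TSLLocation>
      </tsl:OtherTSLPointer>
    </tsl:PointersToOtherTSL>
  </tsl:SchemeInformation>
</tsl:TrustServiceStatusList>"""

def national_tsl_locations(lotl_xml: str) -> list:
    """Extract the URLs of the national TSLs referenced by a LotL."""
    root = ET.fromstring(lotl_xml)
    return [el.text for el in root.findall(
        ".//tsl:OtherTSLPointer/tsl:TSLLocation", TSL_NS)]

print(national_tsl_locations(LOTL_SAMPLE))
```

<para>A real importer would first download the LotL from the European Commission, verify its signature against the published signing certificates, and only then follow the extracted locations to fetch and verify each national TSL.</para>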
</section>

<section class="lev2" id="sec14-2-4">
<title>14.2.4 Comprehensive Validation Service (ValS)</title>
<para>The major use case of the ValS is the validation of Advanced Electronic Signatures (AdES) in standardized formats such as CAdES, XAdES and PAdES. In order to support the various small legal and regulatory differences with respect to electronic signatures among different EU Member States and other global regions, the ValS will support practice-oriented, XML-based validation policies for electronic signatures, which take into account previous work in this area, such as [7] and [8], as well as current standards, such as [9] and [10]. The ValS issues a verification report to the requestor of the service, which is based on the recently published ETSI TS 119 102-2 signature validation report and in particular considers the procedures defined in [9] and the XML-based validation policies mentioned above. Finally, it is worth mentioning that the ValS is designed in a modular and extensible manner, such that modules for other, not (yet) standardized, signature formats or validation policies can be plugged into the ValS in a well-defined manner.</para>
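<para>The modular design can be illustrated with a small sketch: validation modules register themselves per signature format, and the service dispatches each request to the matching module. The function names and the simplified policy dictionary are illustrative assumptions rather than the FutureTrust API; the result labels follow the main status indications of ETSI EN 319 102-1 [9].</para>

```python
from typing import Callable, Dict

# Registry of validation modules, keyed by signature format name.
_validators: Dict[str, Callable[[bytes, dict], dict]] = {}

def register(fmt: str):
    """Decorator that plugs a validation module into the registry."""
    def wrap(fn):
        _validators[fmt] = fn
        return fn
    return wrap

@register("XAdES")
def validate_xades(signature: bytes, policy: dict) -> dict:
    # Placeholder check: a real module would verify the certificate chain,
    # revocation status and signature value against the policy.
    ok = signature.startswith(b"<") and policy.get("allow_xades", True)
    return {"format": "XAdES", "result": "TOTAL-PASSED" if ok else "TOTAL-FAILED"}

def validate(fmt: str, signature: bytes, policy: dict) -> dict:
    """Dispatch a validation request to the module registered for `fmt`."""
    if fmt not in _validators:
        return {"format": fmt, "result": "FORMAT-NOT-SUPPORTED"}
    return _validators[fmt](signature, policy)

print(validate("XAdES", b"<ds:Signature>...</ds:Signature>", {"allow_xades": True}))
```

<para>Adding support for a new, not yet standardized format then amounts to registering one more module, without touching the dispatcher.</para>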
</section>

<section class="lev2" id="sec14-2-5">
<title>14.2.5 Scalable Preservation Service (PresS)</title>
<para>The PresS is used to preserve the integrity and conclusiveness of a signed document over its whole lifetime. For this purpose the FutureTrust</para>

<fig id="F14-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-3">Figure <xref linkend="F14-3" remap="14.3"/></link></label>
<caption><para>Outline of the Architecture of the Scalable Preservation Service.</para></caption>
<graphic xlink:href="graphics/ch014_fig003.jpg"/>
</fig>
<para>Preservation Service as outlined in <link linkend="F14-3">Figure <xref linkend="F14-3" remap="14.3"/></link> will use the ValS and existing external time stamping services to produce Evidence Records according to [12]. As depicted in <link linkend="F14-3">Figure <xref linkend="F14-3" remap="14.3"/></link>, the Preservation Service supports the input interface currently being standardised in ETSI TS 119 512, and integrates smoothly with various types of storage systems.</para>
<para>The FutureTrust Preservation Service will support a variety of Archive Information Packages, including the zip-based container based on the Associated Signature Container (ASiC) specification according to [13]. An important goal of the envisioned Preservation Service is scalability, which may be realized by using efficient data structures such as the Merkle hash trees standardized in [12]. Using hash-tree-based signatures<footnote id="fn_10" label="10"> <para>See [14].</para></footnote> may also provide additional security once quantum computers have been built, because every digital signature scheme in use today (based on the RSA assumption or on the discrete logarithm assumption) could then be forged. However, message authentication codes (MACs), blockchain constructions and signature algorithms based on hash trees seem to remain secure. Thus it is an interesting research question whether fully operational and sufficiently performant preservation services can be built on MACs, blockchains or hash trees alone.</para>
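<para>The scalability argument can be made concrete with a minimal Merkle hash tree sketch: hashing many documents into one root means that a single time stamp (and a single renewal when an algorithm weakens) covers all of them. This is an illustrative simplification of the hash trees standardized in [12], not the FutureTrust implementation; the duplication of the last node on odd levels is one common convention, chosen here for brevity.</para>

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest of `data`."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves) -> bytes:
    """Reduce a list of documents to a single root hash by pairwise
    hashing; the last node is duplicated when a level has odd length."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

docs = [b"doc-1", b"doc-2", b"doc-3", b"doc-4"]
root = merkle_root(docs)
# One time stamp over `root` now covers all four documents; to renew the
# evidence after an algorithm weakens, re-hash and re-timestamp the tree.
print(root.hex())
```

<para>Because one time stamp protects arbitrarily many documents, the per-document cost of preservation shrinks logarithmically in the proof size and to a constant in time stamping, which is exactly the property a scalable Preservation Service needs.</para>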
</section>

<section class="lev2" id="sec14-2-6">
<title>14.2.6 Identity Management Service (IdMS)</title>
<para>Many EU Member States and some non-European countries have established eID services, which produce slightly different authentication tokens. Within the EU, most<footnote id="fn_11" label="11"> <para>The [19] system seems to be an exception to this rule, as it produces and accepts identity tokens according to the [20] specification.</para></footnote> of these services produce SAML tokens (see [15]), and the eIDAS interoperability framework [16] is also based on [17]. In addition, industrial standardization activities have produced specifications like FIDO<footnote id="fn_12" label="12"> <para>See [21].</para></footnote> or GSMA&#8217;s Mobile Connect<footnote id="fn_13" label="13"> <para>See [22] and [23].</para></footnote>, which have gained a broad customer base. The IdMS is based on SkIDentity [18] and is able to consume a broad variety of such authentication tokens (SAML, OpenID Connect, OAuth), work with a broad variety of mobile identification services (FIDO, GSMA Mobile Connect, European Citizen Cards) and transform them into a standardized, interoperable<footnote id="fn_14" label="14"> <para>Because SAML is a very complex and highly extensible standard, integrating different eID services while considering all extension points is a rather challenging task. In order to enable communication between all eID services, their interoperability has to be thoroughly analysed.</para></footnote> and secure<footnote id="fn_15" label="15"> <para>Based on [16] it is clear that SAML 2.0 will form the basis for the eIDAS Interoperability Framework according to Article 12 of [1] and [24], but it is currently likely that the Assertions will be simple &#8220;Bearer Tokens&#8221;, which is not optimal from a security point of view. Furthermore, the different authentication flows and optional message encryption result in a complex standard and thus expose conforming implementations to new attacks. In recent years, several papers (see e.g. [25]) showed how to log in as an arbitrary user in SAML Single Sign-On scenarios or how to decrypt confidential SAML messages (see e.g. [26]). Thus, existing eID services can be evaluated against known attacks.</para></footnote> format. The choice of this standardized format will be based on industry best practices and on the eIDAS interoperability framework [16]. Moreover, the IdMS supports a large variety of European and non-European eID cards, platforms and application services.</para>
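<para>The token transformation performed by the IdMS can be sketched as a mapping from format-specific attribute names onto one internal, format-independent identity record. The attribute URIs below come from the eIDAS SAML attribute profile, but the mapping tables and the normalize function are illustrative assumptions rather than the SkIDentity API.</para>

```python
# Mapping tables from format-specific attribute names to a common
# internal vocabulary (here: the eIDAS minimum data set names).
SAML_TO_EIDAS = {
    "http://eidas.europa.eu/attributes/naturalperson/CurrentFamilyName": "family_name",
    "http://eidas.europa.eu/attributes/naturalperson/CurrentGivenName": "given_name",
}
OIDC_TO_EIDAS = {"family_name": "family_name", "given_name": "given_name"}

def normalize(token_format: str, attributes: dict) -> dict:
    """Map attributes from a SAML assertion or an OpenID Connect ID token
    onto one internal identity record; unknown attributes are dropped."""
    table = {"SAML": SAML_TO_EIDAS, "OIDC": OIDC_TO_EIDAS}[token_format]
    return {table[k]: v for k, v in attributes.items() if k in table}

saml_attrs = {
    "http://eidas.europa.eu/attributes/naturalperson/CurrentFamilyName": "Mustermann",
    "http://eidas.europa.eu/attributes/naturalperson/CurrentGivenName": "Erika",
}
print(normalize("SAML", saml_attrs))
```

<para>A production IdMS must of course validate the token (signature, audience, freshness) before mapping its attributes; the sketch only illustrates the format-unification step.</para>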
</section>

<section class="lev2" id="sec14-2-7">
<title>14.2.7 Signing and Sealing Service (SigS)</title>
<para>The SigS allows the creation of advanced and qualified electronic signatures and seals using local and remote signature generation devices. For this purpose, the SigS is operated in a secure environment and supports appropriate standard interfaces based on OASIS DSS-X Version 2.</para>
<para>As outlined in Figures 14.4 and 14.5, one may distinguish the enrolment phase from the usage phase. During enrolment, the Signatory uses his eID and the IdMS to perform an eID-based identification and registration at the SigS and the Certification Authority (CA), which involves the creation of signing credentials that can later be used for signature generation. Thanks to the</para>
<fig id="F14-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-4">Figure <xref linkend="F14-4" remap="14.4"/></link></label>
<caption><para>National eID cards, platforms and applications supported by IdMS.</para></caption>
<graphic xlink:href="graphics/ch014_fig004.jpg"/>
</fig>

<fig id="F14-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F14-5">Figure <xref linkend="F14-5" remap="14.5"/></link></label>
<caption><para>Enrolment and usage phase for SigS.</para></caption>
<graphic xlink:href="graphics/ch014_fig005.jpg"/>
</fig>
<para>OASIS DSS Extension for Local and Remote Signature Computation [27] it is possible to use both smart card and cloud-based signature creation devices.</para>
</section>

<section class="lev2" id="sec14-2-8">
<title>14.2.8 FutureTrust Pilot Applications</title>
<para>The FutureTrust consortium aims to demonstrate the project&#8217;s contributions in a variety of demonstrators and pilot applications, which are planned to include University Smart Certificates Enrolment &amp; Use, e-Invoicing with the Business Service Portal of the Austrian Government, an e-Apostille Validation System, and a SEPA e-Mandate Service according to [28]. Furthermore, the FutureTrust project is open to supporting further pilot applications related to innovative use cases for eID and electronic signature technology.</para>
</section>

<section class="lev2" id="sec14-2-9">
<title>14.2.9 The go.eIDAS Initiative</title>
<para>It is recognised that the FutureTrust service components exist within the eIDAS ecosystem, and all exploitation efforts must reflect the early stage of Trust Services deployment and market maturity. In order to make FutureTrust sustainable and to maintain its relevance, it is essential to obtain the best possible support for the exploitation efforts, especially from parties other than the FutureTrust Partners, Associate Partners and Advisory Board Members. To this end, the go.eIDAS<footnote id="fn_16" label="16"> <para>See https://go.eid.as</para></footnote> initiative will act as the exploitation vehicle for FutureTrust, while having sufficient branding of its own to continue after the end of the Horizon 2020 funding.</para>
<para>Planning and initial contacts with Stakeholders commenced with the launch press release on 27/09/2018, in conjunction with the formal start of EU recognition of notified eIDs<footnote id="fn_17" label="17"> <para>See https://www.eid.as/news/details/date/2018/09/27/goeidas-initiative-launched-across-europe-and-beyond-1/.</para></footnote>.</para>
<para>go.eIDAS reflects the private sector&#8217;s need to interoperate with eIDAS and also with non-EU-based Trust Schemes. go.eIDAS is an open initiative which welcomes all interested organisations and individuals who are committed to the goals of eIDAS and FutureTrust. We recognise that a thriving community with a spectrum of needs must be created over and above the users of FutureTrust.</para>


</section>

<section class="lev1" id="sec14-3">
<title>14.3 Summary and Invitation for Further Collaboration</title>
<para>This paper provides an overview of the FutureTrust project, which started on June 1<sup>st</sup>, 2016 and is funded until August 2019 by the European Commission within the EU Framework Programme for Research and Innovation (Horizon 2020) under Grant Agreement No. 700542, with up to &#8364;6.3 million.</para>
<para>As explained throughout the paper, the FutureTrust project has conducted basic research with respect to the foundations of trust and trustworthiness, actively supports the standardisation process in relevant areas, and plans to provide innovative Open Source software components and trustworthy services that will ease the use of eID and electronic signature technology in real-world applications by addressing the problems P1 to P7 introduced in Section 14.1.</para>
<para>As part of the continuation of this project, and its subsequent exploitation, the FutureTrust consortium invites interested parties, such as Trust Service Providers, vendors of eID and electronic signature technology, application providers and other research projects to benefit from this development and join the FutureTrust team in its new go.eIDAS initiative.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] 2014/910/EU. (2014). Regulation (EU) No 910/2014 of the European Parliament and of the council on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2014.257.01.0073.01.ENG.</para>
<para>[2] EU Trusted Lists of Certification Service Providers. (2016). <i>European Commission</i>. Retrieved from https://ec.europa.eu/digital-agenda/en/eu-trusted-lists-certification-service-providers</para>
<para>[3] 3 x A Security AB. (2016). EU Trust Service status List (TSL) Analysis Tool. http://tlbrowser.tsl.website/tools/.</para>
<para>[4] CEN/TS 419 241. (2014). Security Requirements for Trustworthy Systems supporting Server Signing.</para>
<para>[5] 2016/679/EU. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, <i>and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance)</i>. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG&amp;toc=OJ:L:2016:119:TOC.</para>
<para>[6] ETSI TS 119 612. (2016, April). Electronic Signatures and Infrastructures (ESI); Trusted Lists. <i>Version 2.2.1</i>. http://www.etsi.org/deliver/etsi_ts/119600_119699/119612/02.02.01_60/ts_119612v020201p.pdf.</para>
<para>[7] ETSI TR 102 038. (2002, April). TC Security &#8211; Electronic Signatures and Infrastructures (ESI); XML format for signature policies.</para>
<para>[8] ETSI TS 102 853. (2012, July). Electronic Signatures and Infrastructures (ESI); Signature verification procedures and policies. <i>V1.1.1</i>. http://www.etsi.org/deliver/etsi_ts/102800_102899/102853/01.01.01_60/ts_102853v010101p.pdf.</para>
<para>[9] ETSI EN 319 102&#8211;1. (2016, May). Electronic Signatures and Infrastructures (ESI); Procedures for Creation and Validation of AdES Digital Signatures; Part 1: Creation and Validation, Version 1.1.1. http://www.etsi.org/deliver/etsi_en/319100_319199/31910201/01.01.01_60/en_31910201v010101p.pdf.</para>
<para>[10] ETSI TS 119 172&#8211;1. (2015, July). Electronic Signatures and Infrastructures (ESI); Signature Policies; Part 1: Building blocks and table of contents for human readable signature policy documents. http://www.etsi.org/deliver/etsi_ts/119100_119199/11917201/01.01.01_60/ts_11917201v010101p.pdf.</para>
<para>[11] Sel, M. (2016). Improving interpretations of trust claims. In <i>Trust Management X: 10th IFIP WG 11.11 International Conference, IFIPTM 2016</i> (pp. 164&#8211;173). Darmstadt, Germany: Springer.</para>
<para>[12] RFC 4998. (2007, August). Gondrom, T.; Brandner, R.; Pordesch, U. <i>Evidence Record Syntax (ERS)</i>. https://tools.ietf.org/html/rfc4998.</para>
<para>[13] ETSI EN 319 162&#8211;1. (2015, August). Electronic Signatures and Infrastructures (ESI); Associated Signature Containers (ASiC); Part 1: Building blocks and ASiC baseline containers. http://www.etsi.org/deliver/etsi_en/319100_319199/31916201/01.00.00_20/en_31916201v010000a.pdf.</para>
<para>[14] Buchmann, J., Dahmen, E., and Szydlo, M. (2009). Hash-based digital signature schemes. In <i>Post-Quantum Cryptography</i> (pp. 35&#8211;93). Springer.</para>
<para>[15] Zwattendorfer, B., &amp; Zefferer, T. T. (2012). The Prevalence of SAML within the European Union. <i>8th International Conference on Web Information Systems and Technologies (WEBIST)</i>, (pp. 571&#8211;576). http://www.webist.org/?y=2012.</para>
<para>[16] eIDAS Spec. (2015, November 26). eIDAS Technical Subgroup. <i>eIDAS Technical Specifications v1.0</i>. https://joinup.ec.europa.eu/software/cefeid/document/eidas-technical-specifications-v10.</para>
<para>[17] SAML 2.0. (2005, March 15). <i>OASIS Standard</i>. Retrieved from Metadata for the OASIS Security Assertion Markup Language (SAML) V2.0: http://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf</para>
<para>[18] SkIDentity. (2018). Retrieved from https://www.skidentity.com/</para>
<para>[19] FranceConnect. (2016). https://doc.integ01.dev-franceconnect.fr/.</para>
<para>[20] OpenID Connect. (2015). OpenID Foundation. <i>Welcome to OpenID Connect</i>. http://openid.net/connect/.</para>
<para>[21] FIDO. (2015). <i>FIDO Alliance</i>. Retrieved from https://fidoalliance.org/</para>
<para>[22] GSMA. (2015). <i>Introducing Mobile Connect &#8211; the new standard in digital authentication</i>. Retrieved from http://www.gsma.com/personaldata/mobile-connect</para>
<para>[23] GSMA-CPAS5. (2015). CPAS 5 OpenID Connect - Mobile Connect Profile - Version 1.1. https://github.com/GSMA-OneAPI/Mobile-Connect/tree/master/specifications.</para>
<para>[24] 2015/1501/EU. (2015). Commission Implementing Regulation (EU) 2015/1501 of 8 September 2015 on the interoperability framework pursuant to Article 12(8) of Regulation (EU) No 910/2014. <i>of the European Parliament and of the Council on electronic identification and trust services for electronic transactions in the internal market (Text with EEA relevance)</i>. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ%3AJOLA015_235_R_0001.</para>
<para>[25] Somorovsky, J., Mayer, A., Schwenk, J., Kampmann, M., &amp; Jensen, M. (2012). On breaking saml: Be whoever you want to be. <i>Presented as part of the 21st USENIX Security Symposium (USENIX Security 12)</i>.</para>
<para>[26] Jager, T., &amp; Somorovsky, J. (2011). How to break xml encryption. <i>Proceedings of the 18th ACM conference on Computer and communications security</i>.</para>
<para>[27] OASIS DSS Local &amp; Remote Signing (2018). <i>DSS Extension for Local and Remote Signature Computation Version 1.0</i>.</para>
<para>[28] EPC 208&#8211;08. (2013, April 9). European Payments Council. <i>EPC e-Mandates e-Operating Model &#8211; Detailed Specification</i>. Version 1.2: http://www.europeanpaymentscouncil.eu/index.cfm/knowledge-bank/epc-documents/epc-e-mandates-e-operating-model-detailed-specification/epc208-08-e-operating-model-detailed-specification-v12-approvedpdf/.</para>
<para>[29] ETSI SR 019 020. (2016, February). The framework for standardisation of signatures: Standards for AdES digital signatures in mobile and distributed environments. <i>V1.1.1</i>. http://www.etsi.org/deliver/etsi_sr/019000_019099/019020/01.01.01_60/sr_019020v010101p.pdf.</para>
<para>[30] ETSI TR 119 000. (2016, April). Electronic Signatures and Infrastructures (ESI); The framework for standardization of signatures: overview. Version 1.2.1: http://www.etsi.org/deliver/etsi_tr/119000_119099/119000/01.02.01_60/tr_119000v010201p.pdf.</para>
<para>[31] Kubach, M., Leitold, H., Ro&#223;nagel, H., Schunck, C. H., &amp; Talamo, M. (2015). SSEDIC.2020 on Mobile eID. <i>to appear in proceedings of Open Identity Summit 2015</i>.</para>
<para>[32] Kutylowski, M., &amp; Kubiak, P. (2013, May 06). Mediated RSA cryptography specification for additive private key splitting (mRSAA). <i>IETF Internet Draft, draft-kutylowski-mrsa-algorithm-03</i>. http://tools.ietf.org/html/draft-kutylowski-mrsa-algorithm-03.</para>
<para>[33] OASIS CMIS v1.1. (2013, May 23). Content Management Interoperability Services (CMIS). http://docs.oasis-open.org/cmis/CMIS/v1.1/CMIS-v1.1.html.</para>
<para>[34] OASIS DSS v1.0. (2010, November 12). <i>Profile for Comprehensive Multi-Signature Verification Reports Version 1.0</i>. Retrieved from http://docs.oasis-open.org/dss-x/profiles/verificationreport/oasis-dssx-1.0-profiles-vr-cs01.pdf</para>
<para>[35] OASIS-DSS. (2007, April 11). <i>Digital Signature Service Core Protocols, Elements, and Bindings Version 1.0</i>. Retrieved from OASIS Standard: http://docs.oasis-open.org/dss/v1.0/oasis-dss-core-spec-v1.0-os.html</para>
<para>[36] RFC 6283. (2011, July). Jerman Blazic, A.; Saljic, S.; Gondrom, T. <i>Extensible Markup Language Evidence Record Syntax (XMLERS)</i>. https://tools.ietf.org/html/rfc6283.</para>
<para>[37] SD-DSS. (2011, August 09). <i>Digital Signature Service|Joinup</i>. Retrieved from https://joinup.ec.europa.eu/asset/sd-dss/description</para>
<para>[38] STORK 2.0. (2014). Retrieved from https://www.eid-stork2.eu/</para>
<para>[39] STORK. (2012). Retrieved from https://www.eid-stork.eu/</para>
</section>
</chapter>

<chapter class="chapter" id="ch015" label="15" xreflabel="15">
<title>LEPS &#8211; Leveraging eID in the Private Sector</title>
<para><b>Jose Crespo Mart&#237;n<sup>1</sup>, Nuria Ituarte Aranda<sup>1</sup>, Raquel Cort&#233;s Carreras<sup>1</sup>, Aljosa Pasic<sup>1</sup>, Juan Carlos P&#233;rez Ba&#250;n<sup>1</sup>, Katerina Ksystra<sup>2</sup>, Nikos Triantafyllou<sup>2</sup>, Harris Papadakis<sup>3</sup>, Elena Torroglosa<sup>4</sup> and Jordi Ortiz<sup>4</sup></b></para>
<para><sup>1</sup>Atos Research and Innovation (ARI), Atos, Spain</para>
<para><sup>2</sup>University of the Aegean, i4m Lab (Information Management Lab), Greece</para>
<para><sup>3</sup>University of the Aegean, i4m Lab (Information Management Lab) and Hellenic Mediterranean University, Greece</para>
<para><sup>4</sup>Department of Information and Communications Engineering, Faculty of Computer Science, University of Murcia, Murcia, Spain</para>
<para>E-mail: jose.crespomartin.external@atos.net; nuria.ituarte@atos.net; raquel.cortes@atos.net; aljosa.pasic@atos.net; juan.perezb@atos.net; katerinaksystra@gmail.com; triantafyllou.ni@gmail.com; adanar@atlantis-group.gr; emtg@um.es; jordi.ortiz@um.es</para>
<para>Although government-issued electronic identities (eID) appeared in Europe more than 20 years ago, their adoption so far has been very low. This is even more the case in cross-border settings, where a private service provider (SP) from one EU Member State needs trusted eID services from an identity provider located in another state. The LEPS project aims to validate and facilitate connectivity options to the recently established eIDAS ecosystem, which provides this trusted environment with legal, organisational and technical guarantees already in place. Strategies have been devised to reduce SP implementation costs for connecting to the eIDAS technical infrastructure. Based on these strategies, architectural options and implementation details have been worked out. Finally, actual integration and validation have been carried out in two countries: Spain and Greece. In parallel, market analysis is performed and further options are considered, both for the LEPS project results and for eIDAS-compliant eID services.</para>

<section class="lev1" id="sec15-1">
<title>15.1 Introduction</title>
<para>With the eIDAS regulation [1], the EU has put in place a legal and technical framework that obliges EU Member States to mutually recognize each other&#8217;s notified eID schemes for cross-border access to online public-sector services, creating at the same time unprecedented opportunities for private online service providers. The concept of a notified eID limits the scope of electronic identity to the electronic identification means issued under an electronic identification scheme operated by one of the EU Member States (MS) under a mandate from the MS or, in some cases, independently of the MS but recognised and notified by that MS. To ensure this mutual recognition of notified eIDs, the so-called eIDAS infrastructure or eIDAS network has been established, with an eIDAS node in each MS that serves as a connectivity proxy towards the notified identity schemes. For a service provider that wants to connect to the eIDAS network and use cross-border eID services through it, this resolves only a part of the overall connectivity challenge. The connection from a service provider to its own MS eIDAS node still has to be implemented by the service provider itself, and the costs can be a considerable barrier.</para>
<para>This is where the LEPS (Leveraging eID in the Private Sector) project comes into the picture. It is a European project financed by the EU through the Connecting Europe Facility (CEF) Digital programme [2], with a duration of 15 months, under grant agreement No. INEA/CEF/ICT/A2016/1271348. The CEF programme, with Digital Service Infrastructure (DSI) building blocks such as eID [3], aims at boosting the growth of the EU Digital Single Market (DSM). While public service providers are already under an obligation to recognize notified eID services from another MS, private-sector online service providers are especially targeted in CEF projects, and in LEPS in particular, in order to connect them to the eIDAS network and offer eIDAS-compliant eID services to European citizens.</para>
<para>The LEPS consortium is formed by 8 partners from Spain and Greece. The project is coordinated by Atos Spain, which also performs the integration with the Spanish eIDAS node and supports the Spanish partners. The University of the Aegean performs the integration with the Greek eIDAS node and supports the Greek partners.</para>
<para>Three end users participate in the project in order to validate the use of the pan-European eIDAS infrastructure:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Two postal services companies, in Spain and Greece (Sociedad Estatal de Correos y Tel&#233;grafos and Hellenic Post respectively), which integrate existing online services;</para></listitem>
<listitem><para>A digital financial services provider from Greece, Athens Exchange Group (ATHEX), aiming to offer remote electronic signature services to EU customers, compliant with eIDAS regulation.</para></listitem>
</itemizedlist>
<para>Other partners include the Universidad de Murcia, which develops the mobile application for using NFC eID cards; the Hellenic Ministry of Administrative Reconstruction, in charge of the Greek eIDAS node; and the National Technical University of Athens, supporting the Greek partners.</para>
<para>The challenges in the LEPS project cannot be understood outside the context of market adoption of &#8220;eIDAS eID services&#8221;. However, the set of challenges related to service provider (SP) connectivity to eIDAS is the main scope of the project. The focus is on the SP side of the eID market, more specifically on the subgroup of private-sector online service providers. The approach taken in LEPS is to explore different integration options through so-called eIDAS adapters, in order to reduce the burden on service providers and the overall costs. &#8220;eIDAS adapter&#8221; is a generic name given to reusable components such as supporting tools, libraries or application programming interfaces (APIs).</para>
<para>The second group of challenges is around end-user adoption, which indirectly affects service providers as well. Many service providers are waiting for the moment when citizens activate and start to massively use their eID. This is also explored in the LEPS project through the design and development of a mobile interface for the use of the Spanish NFC-enabled eID (known as DNI 3.0). The uptake of mobile ID solutions in many countries, notably Austria, Belgium and Estonia, is growing faster than expected, so the introduction of the LEPS mobile ID solution for the Spanish DNI 3.0 can be considered a &#8220;right on time&#8221; action.</para>
<para>To summarise, the challenges the LEPS project faces relate both to the adoption of eIDAS eID services in general and, more specifically, to SP-to-eIDAS connectivity.</para>
<para>Finally, we can say that LEPS is fully aligned with the overall aim of the CEF programme [4] to bring down the barriers that are holding back the growth of the EU Digital Single Market, whose development could contribute an additional EUR 415 billion per year to the EU economy.</para>
</section>

<section class="lev1" id="sec15-2">
<title>15.2 Solution Design</title>
<para>The eIDAS network has been built by the European Commission (EC) and the EU Member States on the basis of previous work in European projects such as STORK 1.0 [5] and STORK 2.0 [6]. The work of the eSENS project [7] and the collaboration with Connecting Europe Facility (CEF) Digital [8] led to the generation of the so-called DSI building blocks, &#8220;providing a European digital ecosystem for cross-border interoperability and interconnection of citizens and services between European countries&#8221; [9]. <link linkend="F15-1">Figure <xref linkend="F15-1" remap="15.1"/></link> shows the evolution of the eIDAS network and the CEF building blocks.</para>
<fig id="F15-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-1">Figure <xref linkend="F15-1" remap="15.1"/></link></label>
<caption><para>eIDAS network and CEF building blocks background [9].</para></caption>
<graphic xlink:href="graphics/ch015_fig001.jpg"/>
</fig>
<para>While many public service providers, especially at the central government level, have already been connected to the eIDAS network, although mainly in pilot projects and pre-production environments, the interest of private service providers in connecting to the eIDAS network and using eID services has so far been very limited. One of the main challenges, as has been mentioned, relates to uncertainty about architectural options, costs, and the overall stability and security of the service provision.</para>
<para>With the aim of integrating the selected private SP services with the eIDAS network, two different approaches, one for Spain and one for Greece, were designed and implemented.</para>
<para>For the <b>Spanish services</b> scenario, the <b>eIDAS Adapter</b> [9] is an API implemented by Atos that allows Correos services (through the MyIdentity service) to communicate with the Spanish eIDAS node. The eIDAS Adapter is based on a Java integration package provided by the Spanish Ministry for integrating e-services from the private sector with the Spanish eIDAS node.</para>
<para>This integration package, in turn, uses the integration package delivered by the EC [10]. The Spanish eIDAS adapter provides an SP interface to the Correos services and an eIDAS interface for connecting to the eIDAS infrastructure through the Spanish eIDAS node, as depicted in <link linkend="F15-2">Figure <xref linkend="F15-2" remap="15.2"/></link>.</para>
<para>The eIDAS adapter is able to integrate the Correos services with the eIDAS network, allowing a Greek citizen to access Correos e-services using a Greek eID, as explained in the Validation section.</para>
<para>For the <b>Greek services</b> scenario, the University of the Aegean has proposed a similar approach, as can be seen in <link linkend="F15-3">Figure <xref linkend="F15-3" remap="15.3"/></link>.</para>
<para>The integration of the Greek services with the eIDAS network is made through the so-called LEPS eIDAS API Connector [11]. This API Connector reuses the basic functionality of the eIDAS Demo SP package provided by CEF [10] and comes in three different flavours, which can be used in different scenarios:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para><b>eIDAS SP SAML Tools Library</b>. Used in the case of a Java-based SP (developed from scratch) in which there is no need for one certificate covering many services within the SP and no need for pre-built UIs. It avoids the extra development time of creating and processing SAML messages.</para></listitem>
<listitem><para><b>eIDAS WebApp 2.0</b>. This solution is for Java or non-Java-based SP scenarios in which there is no need for one certificate for many services within the SP, but built-in UIs are needed. It avoids development time for processing SAML messages and completely handles an eIDAS-based authentication flow (including UIs). It is independent of the SP infrastructure and operates over a simple REST API. This solution also increases security (JWT-based security) (<link linkend="F15-4">Figure <xref linkend="F15-4" remap="15.4"/></link>).</para></listitem>
<listitem><para><b>eIDAS ISS 2.0</b>. This solution is for Java or non-Java-based SPs (developed from scratch) in which one certificate is used for many services within the SP, and it comes with or without SP e-Forms/a thin WebApp. It avoids development time for processing SAML messages and supports the interconnection of many SP services in the same domain (each service is managed via a thin WebApp). It sends SAML 2.0 requests to the eIDAS node, translates responses from SAML 2.0 into JSON and other common enterprise standards, and forwards them to the relevant SP service. It is intended for multiple services of the same SP sharing one certificate (<link linkend="F15-5">Figure <xref linkend="F15-5" remap="15.5"/></link>).</para></listitem>
</orderedlist>
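<para>The selection criteria above can be condensed into a small decision sketch. The boolean flags are hypothetical names for the three questions the list poses (is the SP Java-based, does one certificate serve many services, are built-in UIs needed); this is an illustration of the documented criteria, not part of the connectors themselves:</para>

```python
def pick_connector(java_sp: bool, shared_certificate: bool, needs_builtin_ui: bool) -> str:
    # One certificate shared by many services points to ISS 2.0;
    # a need for built-in UIs (or a non-Java SP) points to the WebApp;
    # otherwise the plain SAML Tools library fits a Java-based SP.
    if shared_certificate:
        return "eIDAS ISS 2.0"
    if needs_builtin_ui or not java_sp:
        return "eIDAS WebApp 2.0"
    return "eIDAS SP SAML Tools Library"
```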
<fig id="F15-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-2">Figure <xref linkend="F15-2" remap="15.2"/></link></label>
<caption><para>eIDAS adapter architecture general overview [9].</para></caption>
<graphic xlink:href="graphics/ch015_fig002.jpg"/>
</fig>
<fig id="F15-3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-3">Figure <xref linkend="F15-3" remap="15.3"/></link></label>
<caption><para>SP integration with eIDAS node using Greek connector(s) [11].</para></caption>
<graphic xlink:href="graphics/ch015_fig003.jpg"/>
</fig>
<fig id="F15-4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-4">Figure <xref linkend="F15-4" remap="15.4"/></link></label>
<caption><para>eIDAS WebApp 2.0 [12].</para></caption>
<graphic xlink:href="graphics/ch015_fig004.jpg"/>
</fig>

<fig id="F15-5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-5">Figure <xref linkend="F15-5" remap="15.5"/></link></label>
<caption><para>eIDAS ISS 2.0 (plus thin WebApp) [12].</para></caption>
<graphic xlink:href="graphics/ch015_fig005.jpg"/>
</fig>
<para>These connector-provided APIs facilitate the integration of the ATHEX and Hellenic Post (ELTA) services with the eIDAS network, allowing a Spanish citizen to access Greek e-services using the Spanish eID, as indicated in the Validation section.</para>
</section>

<section class="lev2" id="sec15-2-1">
<title>15.2.1 LEPS Mobile App</title>
<para>The use of smartphones and tablets for interacting with public administrations and private companies has become an increasingly common practice; it is therefore necessary to offer mobile solutions that integrate mobile eIDAS authentication into the SP service ecosystem.</para>
<para>LEPS offers an efficient solution for <b>mobile devices</b> with a successful integration of mobile eIDAS authentication. Concretely, the mobile app provides mobile support for the Greek services (ATHEX and ELTA) to enable authentication of Spanish citizens through the eIDAS infrastructure using the Spanish DNIe 3.0 (the Spanish electronic National Identity Document), which supports NFC technology [13].</para>
<para>The mobile application developed by the Universidad de Murcia can work with any SP offering eIDAS authentication for Spanish users [14]. Additionally, the implementation can easily be extended to other EU Member States by adding authentication methods beyond the Spanish DNIe. The requirements for SPs to integrate the mobile application are also minimal, limited to the general requirements of operating in a mobile environment, i.e. providing responsive interfaces and using standard components such as HTML and JavaScript.</para>
</section>

<section class="lev1" id="sec15-3">
<title>15.3 Implementation</title>
<para>Aiming to cover all the functionalities and requirements needed by the SPs and the eIDAS network, the developed <b>Spanish eIDAS adapter</b> comprises the following modules depicted in <link linkend="F15-6">Figure <xref linkend="F15-6" remap="15.6"/></link>:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>SP interface</b>: Establishes interaction with the integrated services. Contains a single endpoint which receives the authentication request from the SP;</para></listitem>
<listitem><para><b>eIDAS interface</b>: Connects to the country eIDAS node. Comprises two endpoints:</para></listitem>
<listitem><para>Metadata endpoint: Provides the SP metadata;</para></listitem>
<listitem><para>ReturnPage endpoint: Receives the SAML response from the country eIDAS node.</para></listitem>
</itemizedlist>
<fig id="F15-6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-6">Figure <xref linkend="F15-6" remap="15.6"/></link></label>
<caption><para>Spanish eIDAS adapter modules [10].</para></caption>
<graphic xlink:href="graphics/ch015_fig006.jpg"/>
</fig>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>UI module</b>: Interacts with the end user;</para></listitem>
<listitem><para><b>Manager service</b>: Orchestrates the authentication process inside the eIDAS Adapter;</para></listitem>
<listitem><para><b>Translator service</b>: Translates in both directions between the SP and the eIDAS node:</para></listitem>
<listitem><para>The authentication request (JWT) from the SP into a SAML request;</para></listitem>
<listitem><para>The SAML response from the eIDAS node into an authentication response (JWT) for the SP;</para></listitem>
</itemizedlist>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>Mapping service</b>: Maps the SP attribute names to SAML eIDAS attribute names, doing the semantic translation;</para></listitem>
<listitem><para><b>SAML Engine</b>: Manages the SAML request and response, encrypting/decrypting and signing;</para></listitem>
<listitem><para><b>Metadata service</b>: Creates SP metadata;</para></listitem>
<listitem><para><b>Mobile service:</b> Optional component able to detect the device where the authentication process is performed.</para></listitem>
</itemizedlist>
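<para>As an illustration of the semantic translation performed by the mapping service, the sketch below maps hypothetical SP-side attribute names onto the eIDAS natural-person attribute URIs defined by the eIDAS SAML attribute profile. The SP-side names are invented for this example; this is a sketch of the idea, not the adapter&#8217;s actual code:</para>

```python
# Hypothetical SP-side attribute names mapped to eIDAS
# natural-person attribute URIs (eIDAS SAML attribute profile).
EIDAS_NP = "http://eidas.europa.eu/attributes/naturalperson/"
SP_TO_EIDAS = {
    "surname":    EIDAS_NP + "CurrentFamilyName",
    "given_name": EIDAS_NP + "CurrentGivenName",
    "birth_date": EIDAS_NP + "DateOfBirth",
    "person_id":  EIDAS_NP + "PersonIdentifier",
}

def map_requested_attributes(sp_names):
    # Semantic translation of SP attribute names into eIDAS SAML names;
    # unknown names are rejected rather than silently dropped.
    unknown = [n for n in sp_names if n not in SP_TO_EIDAS]
    if unknown:
        raise KeyError("no eIDAS mapping for: %s" % unknown)
    return [SP_TO_EIDAS[n] for n in sp_names]
```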
<para>The most relevant technologies, standards and protocols used during the implementation include: Java 8 as the implementation language; JWT (industry standard RFC 7519) to transmit the user data between the SP and the adapter in a secure way; and SAML 2.0 for transmitting user authentication between the eIDAS infrastructure and the adapter. For deployment, Apache Tomcat was used as the web application server, packaged as a Docker container.</para>
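<para>To illustrate the JWT leg of this exchange, the following minimal sketch encodes and verifies an HS256 token as defined by RFC 7519. The secret and claim values are hypothetical, and a production adapter would of course use a hardened JWT library rather than this hand-rolled sketch:</para>

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # Base64url without padding, as required by RFC 7515/7519.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwt_encode(claims: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def jwt_decode(token: str, secret: bytes) -> dict:
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    received = base64.urlsafe_b64decode(sig_b64 + "=" * (-len(sig_b64) % 4))
    if not hmac.compare_digest(expected, received):
        raise ValueError("invalid signature")
    payload_b64 = signing_input.split(".")[1]
    return json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))

secret = b"shared-sp-adapter-secret"  # hypothetical shared key
claims = {"iss": "sp.example.org", "sub": "ES/ES/12345678Z"}  # hypothetical claims
token = jwt_encode(claims, secret)
assert jwt_decode(token, secret)["sub"] == "ES/ES/12345678Z"
```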
<para>During the implementation, deployment and testing of the Spanish eIDAS adapter, some challenges arose. The actions performed to overcome these challenges could help MSs make decisions when facing the implementation of new adapters for integrating private online services with the eIDAS infrastructure.</para>
<para>The plan for designing and implementing this adapter was to reuse the integration package the Spanish Ministry provided for private SP integration. This approach helps reduce the use of resources while guaranteeing the connection to the eIDAS node, so only minor effort is then needed to integrate an SP service. Despite this advantage, the use of legacy code and its technologies can restrict the use of cutting-edge or more familiar ones. In the particular case of the Spanish eIDAS adapter implementation, mixing technologies such as Struts 2 and Spring took more time than expected: some changes had to be carried out, and the developer team had to acquire knowledge of the Struts 2 framework. The use of generic eIDAS libraries, alongside technologies well known to the development team, is therefore recommended. In addition, the integration information and technical support that the organization in charge of the country eIDAS node can provide are very useful.</para>
<para>As a summary the main features of the Spanish eIDAS adapter [10] are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Modular design;</para></listitem>
<listitem><para>Reusable;</para></listitem>
<listitem><para>JWT based security for transmitting user information;</para></listitem>
<listitem><para>Able to create SAML requests and process SAML responses;</para></listitem>
<listitem><para>Translates SAML 2.0 to JSON and vice versa;</para></listitem>
<listitem><para>SP client programming language independent;</para></listitem>
<listitem><para>Docker-based deployment;</para></listitem>
<listitem><para>SP infrastructure independent (can be deployed on SP infrastructure or on a third party);</para></listitem>
<listitem><para>Able to connect with different SP services in the same or from different domains.</para></listitem>
</itemizedlist>
<para>Regarding the <b>Greek API connectors</b> [11]:</para>
<para>The <b>eIDAS SP SAML Tools library</b> can be used to simplify the development of SP&#8211;eIDAS node communication on the SP side. It is offered in the form of a Java library, which can easily be integrated into the development of any Java-based SP. The library itself is based on the CEF-provided SP implementation (demo SP). It provides methods that a Java-based SP implementation can call to create SAML requests (format, encode, encrypt), parse SAML responses (decrypt, decode, parse) and create the SP metadata XML, as required by the eIDAS specifications [11].</para>
<para>The <b>eIDAS WebApp 2.0</b> uses the eIDAS SP SAML Tools library described above, providing a UI, a simple REST API and the business logic for handling the eIDAS authentication flow. The WebApp is offered as a Docker image for deployment purposes and needs to be deployed on the same domain as the SP [11].</para>
<para>The <b>eIDAS ISS 2.0</b> simplifies the connection of any further SP by enabling SPs to connect to the eIDAS node without using the SAML 2.0 protocol: one ISS 2.0 installation can support multiple services within the same SP, and it provides a JSON-based communication endpoint. The ISS 2.0 app is provided as a WAR artefact to be deployed on Apache Tomcat 7+ [11].</para>
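<para>To give a feel for the kind of work these connectors take off the SP&#8217;s hands, the sketch below builds a minimal, unsigned SAML 2.0 AuthnRequest skeleton. The URLs are hypothetical, and a real eIDAS request must additionally be signed and carry the eIDAS requested-attributes extension, which the library handles; this is an illustrative sketch, not the connectors&#8217; actual code:</para>

```python
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(sp_issuer: str, node_url: str) -> bytes:
    # Minimal, UNSIGNED AuthnRequest skeleton; the real library also
    # signs the request and adds the eIDAS-specific extensions.
    req = ET.Element("{%s}AuthnRequest" % SAMLP, {
        "ID": "_" + uuid.uuid4().hex,
        "Version": "2.0",
        "IssueInstant": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Destination": node_url,
    })
    issuer = ET.SubElement(req, "{%s}Issuer" % SAML)
    issuer.text = sp_issuer
    return ET.tostring(req)
```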
</section>

<section class="lev1" id="sec15-4">
<title>15.4 Validation</title>
<para>With the aim of demonstrating and validating the SP integration with the eIDAS infrastructure through the country eIDAS nodes, the following selected services were customized in order to proceed with the integration.</para>
<para>The selected services customized on the Spanish side were provided by Correos [15]:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>&#8220;My Identity&#8221; provides secured digital identities to citizens, businesses and governments;</para></listitem>
<listitem><para>&#8220;My Mailbox&#8221; is a digital mailbox and storage that enables you to create a nexus of secure document-based communication;</para></listitem>
<listitem><para>&#8220;My Notifications&#8221; provides a digital service that aims to centralize and manage governmental notifications.</para></listitem>
</itemizedlist>
<para>The services provided on the Greek side were provided by ATHEX and ELTA [16]:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>&#8220;Athex Identity service&#8221; provides an eIDAS compliant identity provider service;</para></listitem>
<listitem><para>&#8220;Athex Sign&#8221; is a service that provides a secure way to sign on the go, anytime and anywhere;</para></listitem>
<listitem><para>&#8220;Athex AXIAWeb&#8221; allows any European Union citizen-investor to register and log in via eIDAS;</para></listitem>
<listitem><para>&#8220;ELTA e-shop&#8221; offers functionalities such as letter mail services or prepaid envelopes;</para></listitem>
<listitem><para>&#8220;ELTA eDelivery Hybrid Service&#8221; provides document management functionalities through the use of digital signatures, standardization flow and other tools;</para></listitem>
<listitem><para>&#8220;Parcel Delivery Voucher&#8221; allows customers to print online the accompanying vouchers for parcels sent to their customers;</para></listitem>
<listitem><para>&#8220;Online Zip Codes&#8221; allows corporate customers to obtain the current version of the zip codes of Greece.</para></listitem>
</itemizedlist>
<para>After the customization and integration, the IT infrastructures of these services were connected to the appropriate country eIDAS node, allowing the services to use the eIDAS network for user authentication with eIDs issued by EU Member States. This also demonstrates the usability of the eIDAS specifications and of the Spanish and Greek eIDAS nodes in the private sector.</para>
<para>For testing the integrated services during the project, the following steps have been performed [17]:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Preparation of the pre-production tests necessary for the SP services integration verification in pre-production environment;</para></listitem>
<listitem><para>Execution of the automated pre-production tests against the Spanish (for Correos) and Greek (for ATHEX and ELTA) eIDAS nodes using test credentials in a pre-production environment. Feedback from this step is generated for the subsequent production testing;</para></listitem>
<listitem><para>Preparation of the production tests considering the feedback from previous steps;</para></listitem>
<listitem><para>Execution of the manual production tests: against the Spanish eIDAS node in the pre-production environment (for pre-production Correos services), due to Spanish Ministry restrictions, and against the production eIDAS node (for production ATHEX and ELTA services). In both cases real credentials were used.</para></listitem>
</orderedlist>
<para>The automated testing in the pre-production environment was performed using eCATS (eIDAS Connectivity Automated Testing Suite), an automated testing tool based on the Selenium portable software-testing framework for web applications. The eCATS tool has been customized for each integrated service, as depicted in <link linkend="F15-7">Figure <xref linkend="F15-7" remap="15.7"/></link>.</para>
<para>Apart from the connectivity tests between the Spanish and Greek eIDAS nodes, additional interoperability tests were performed from the Spanish eIDAS node to Iceland, the Netherlands and Italy, and between Greece and the Czech Republic. For this purpose, test credentials provided by the public organizations in charge of eIDAS node management in their countries were used.</para>
<fig id="F15-7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label><link linkend="F15-7">Figure <xref linkend="F15-7" remap="15.7"/></link></label>
<caption><para>LEPS services and automated eCATS tool [13].</para></caption>
<graphic xlink:href="graphics/ch015_fig007.jpg"/>
</fig>
</section>

<section class="lev1" id="sec15-5">
<title>15.5 Related Work</title>
<para>The LEPS project is linked to a set of projects in which the LEPS partners participated with the aim of increasing the use of eID among EU citizens for accessing online services across the EU and reinforcing the Digital Single Market in Europe. Among these, it is worth mentioning the following projects:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><b>STORK</b> [5] (CIP program, 2008&#8211;2011), providing the first European eID Interoperability Platform allowing citizens to access digital services across borders, using their national eID;</para></listitem>
<listitem><para><b>STORK 2.0</b> [6] (CIP programme, 2012&#8211;2015), as a continuation of STORK, was intended to boost the acceptance of eID in the EU for electronic authentication of both physical and legal persons, and laid the basis for the creation of an interoperable and stable cross-border infrastructure (the eIDAS network) for public and private online services and attribute providers;</para></listitem>
<listitem><para><b>FutureID</b> [18] (FP7-ICT programme, 2012&#8211;2015), &#8220;created a comprehensive, flexible, privacy-aware and ubiquitously usable identity management infrastructure for Europe, integrating existing eID technology and trust infrastructures, emerging federated identity management services and modern credential technologies to provide a user-centric system for the trustworthy and accountable management of identity claims&#8221; [18];</para></listitem>
<listitem><para><b>FIDES</b> [19] (EIT, 2015&#8211;2016), built a secure, federated and interoperable identity management platform (mobile/desktop). An identity broker was implemented, through which the STORK infrastructure was made available;</para></listitem>
<listitem><para><b>STRATEGIC</b> [20] (CIP programme, 2014&#8211;2017), provided more effective public cloud services, integrating the STORK network;</para></listitem>
</itemizedlist>
<para>Additionally, there are projects related to eID management in different sectors, such as the academic domain or the public sector, in which the LEPS partners also participate and which are currently under development:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The <b>ESMO</b> project aims to integrate education-sector Service Providers with the eIDAS network, contributing to increased eIDAS eID uptake and use in the European Higher Education Area (EHEA) [21]. Outcomes from LEPS will be reused;</para></listitem>
<listitem><para>The main objective of the <b>TOOP</b> project [22] is &#8220;to explore and demonstrate the once-only principle across borders, focusing on data from businesses&#8221;. The TOOP demo architecture implementation incorporates the LEPS APIs (WebApp 2.0) to identify users accessing the services of a TOOP Data Consumer via eID_EU;</para></listitem>
<listitem><para>The <b>FIWARE</b> project (FP7-ICT) [23]. Since Atos is a co-founder of the FIWARE Foundation, it supports the publication of connectivity to the CEF eID building block as a generic enabler. The LEPS adapter for the Spanish DNIe will be part of a know-how exchange with the Polytechnic University of Madrid, the partner responsible for the generic enabler.</para></listitem>
</itemizedlist>
<para>Finally, the LEPS project established links with other eID projects under the umbrella of the LEPS Industry Monitoring Group (IMG), such as the &#8220;Opening a bank account with an EU digital identity&#8221; CEF Telecom eID project [24] and &#8220;The eIDAS 2018 Municipalities Project&#8221; CEF Telecom eID project [25]. Contacts have also been made with other eID initiatives such as the Future Trust, ARIES and Credential projects, as well as with the industrial initiatives EEMA, ECSO, TDL, OIX, Kantara and OASIS.</para>
<para>Besides direct integration of external eID services through the identity providers&#8217; available APIs, e-service providers also have the option of using a broker or aggregator of different identity providers, which might offer additional eID services or functionalities. In this category we can mention related work on so-called identity clouds or CIAM (customer identity and access management) solutions. In Germany, the SkIDentity Service [26] is a kind of broker for service providers that can use popular social logins such as LinkedIn and Facebook Login, as well as eIDAS eID services from a number of countries. In the Netherlands, a similar broker role for municipal e-services is provided by Connectis, with support from a CEF project. In Spain, Safelayer has an offering named TrustedX eIDAS Platform that &#8220;orchestrates&#8221; digital identities for authentication, electronic signature, single sign-on (SSO) and two-factor authentication (2FA) in web environments.</para>
</section>

<section class="lev1" id="sec15-6">
<title>15.6 Market Analysis</title>
<para>The LEPS market analysis had to take into account the specific pre-existing context of private e-service providers, such as:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Organisations that need or want to migrate from an existing identity and access management (IAM) solution. This could apply to organisations that have outgrown their internal or tailor-made IAM solutions, or organisations that already partially use external or third-party e-identification or authentication services but are looking for services with a higher level of assurance (LoA);</para></listitem>
<listitem><para>Organisations that use low-assurance third-party eID services, such as social login, and want to elevate the overall level of security and decrease identity theft and fraud by integrating eIDAS eID services, either to replace or to enhance existing external eID services;</para></listitem>
<listitem><para>Organisations that already act, or could act, as eID brokers;</para></listitem>
<listitem><para>Organisations that want to open new service delivery channels through mobile phones and are interested in mobile ID solutions that work across borders.</para></listitem>
</orderedlist>
<para>The first group is composed of organisations that made important investments in their internally operated IAM solutions. These solutions, however, were not originally meant to handle the requirements of large-scale cross-border e-service use cases, although the functional building blocks and protocols might be the same. Such organisations usually have the internal know-how and capacity to implement eIDAS connectivity, and their main driver for adopting eIDAS eID services could be regulatory compliance, such as the &#8220;know your customer&#8221; (KYC) requirement of the anti-money-laundering (AML) directive.</para>
<para>In addition, given that the main value proposition of the LEPS approach was, right from the start, based on cost effectiveness and, to a lesser extent, on cost efficiency, one of the main adoption targets is small and medium-sized enterprises (SMEs) operating in a cross-border context and planning the migration or extension of their current third-party eID services. Unlike the first group of LEPS adopters, these organisations are unlikely to have the know-how, resources and capacity to implement eIDAS connectivity. The main proposition from LEPS in this regard is saving e-service provider organisations cost and time on activities such as familiarization with SAML communication (protocol understanding and implementation), implementation of the required web interface (UI) for user interaction with the eIDAS-enabled services, formulation and proper preparation of an eIDAS SAML authentication request, processing of an eIDAS node SAML authentication response, and provision of the appropriate authentication end events for success or failure.</para>
<para>The fact that many organisations do not have the resources to implement and operate an eID service internally has already been exploited by social networks and other online eID service providers that offer &#8220;identity APIs&#8221;. This is an easy way to integrate highly scalable, yet low-assurance, eID services. In some SP segments, such as e-commerce, Facebook and Google eID services are hugely dominant (with 70% and 15% market share respectively), while in other segments so-called customer IAM (CIAM) has appeared as an emerging alternative for integrating API gateways to different online eID service providers.</para>
<para>This new generation of CIAM solutions, complemented by a variety of eID broker solutions, is the third potential target for LEPS adoption. Integrating external identities can be linked to onboarding, as in the case of the Correos MyIdentity service, or can help in trust elevation and/or migration from social eIDs with a low LoA to eIDAS eIDs with a high LoA. Beyond scalability, there are other requirements that may depend on the specific e-service provider, such as integration with customer relationship management or handling a single customer with many identities.</para>
<para>Of all the trends examined in the market analysis, the most promising for the uptake of the LEPS results is mobile identification and authentication, which targets user experience and usability. Given that the LEPS results also include an interface for mobile eID (although only for the Spanish DNI 3.0), organisations that have this specific need, targeting Spanish citizens who use mobile e-services from SPs in other Member States, are considered the fourth group of adopters.</para>
<para>For all of these users, LEPS brings the benefit of cost savings, while eIDAS eID services bring well-known benefits:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Improved quality of the service offered to the customer;</para></listitem>
<listitem><para>The introduction of a process of identity check and recognition through eID reduces fraud;</para></listitem>
<listitem><para>Reduction in operational, legal and reputational risk, as trusted identities and authentication are provided by national public MS infrastructures;</para></listitem>
<listitem><para>Time savings and reduction of administrative overhead and costs;</para></listitem>
<listitem><para>An increased potential customer base.</para></listitem>
</itemizedlist>
<para>These theoretical assumptions have been partially validated in the case of the LEPS service providers. In the ELTA case, for example, possible users are Greek nationals living abroad and using an eID other than the Greek one. According to the General Secretariat for Greeks Abroad, more than 5M citizens of Greek nationality live outside the Greek borders, scattered across 140 countries of the world. The greatest concentrations are in the US (3M), Europe (1M), Australia (0.7M), Canada (0.35M), Asia and Africa (0.1M) and Central and South America (0.06M). In this view, as regards cross-European e-delivery, the primary target for this service has a customer base of 1M, with the initial penetration rate set to 1% (10,000 users). ELTA's focus on existing and new customers was addressed through two complementary strategies: a Revenue Growth strategy (existing customers) and a Market Share strategy (new customers).</para>
<para>As can be seen from Table 15.1 (with data collected from the actual pilots), the cost of implementing eIDAS connectivity depends on the selected architectural and software options. Reuse of LEPS results significantly reduces this cost, both for fixed one-time expenditures and for operational costs.</para>
<para>The two strategies envisaged by ELTA aimed at a benefit of EUR 100,000 within two years. With the figures from Table 15.1, it is clear that this break-even point can be reached only by reusing the LEPS components (with an accumulated cost of EUR 87,624 over 24 months), while building eIDAS connectivity from scratch would reach this point only in the third year (the accumulated cost at the end of the 24-month period would be EUR 111,235 for this option).</para>
<table-wrap position="float" id="T15-1">
<label><link linkend="T15-1">Table <xref linkend="T15-1" remap="15.1"/></link></label>
<caption><para>Cost of three eIDAS connectivity options</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr valign="top"><td>Integration Scenario</td><td>Fixed Cost (in EUR)</td><td>Operational Cost (in EUR per year)</td></tr>
</thead>
<tbody>
<tr valign="top"><td>Build from scratch (Scenario 1)</td><td>41,739.36</td><td>34,748.44</td></tr>
<tr valign="top"><td>Build by using CEF Demo SP (Scenario 2)</td><td>33,792.96</td><td>32,627.80</td></tr>
<tr valign="top"><td>LEPS eIDAS API Connectors (Scenario 3)</td><td>25,734.46</td><td>30,945.28</td></tr>
</tbody>
</table>
</table-wrap>
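<para>As an illustration (a minimal sketch, not part of the LEPS deliverables), the break-even comparison above can be reproduced from the figures in Table 15.1; minor rounding differences aside, it yields the accumulated 24-month costs quoted in the text:</para>

```python
# Accumulated cost of each eIDAS connectivity option after a number of
# years of operation, using the (fixed, yearly operational) EUR figures
# from Table 15.1.

OPTIONS = {
    "Build from scratch (Scenario 1)":         (41_739.36, 34_748.44),
    "Build by using CEF Demo SP (Scenario 2)":  (33_792.96, 32_627.80),
    "LEPS eIDAS API Connectors (Scenario 3)":   (25_734.46, 30_945.28),
}

def accumulated_cost(fixed: float, yearly: float, years: float) -> float:
    """One-time fixed expenditure plus operational cost over `years`."""
    return fixed + yearly * years

if __name__ == "__main__":
    for name, (fixed, yearly) in OPTIONS.items():
        total = accumulated_cost(fixed, yearly, 2)
        print(f"{name}: {total:,.2f} EUR after 24 months")
```

After 24 months, Scenario 3 accumulates roughly EUR 87,625 versus roughly EUR 111,236 for Scenario 1, which is why only the LEPS reuse option stays under the EUR 100,000 two-year break-even point.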
</section>

<section class="lev1" id="sec15-7">
<title>15.7 Conclusion</title>
<para>The challenge of adopting eIDAS eID services can be divided into challenges related to service provider connectivity with the eIDAS network and challenges related to citizen/business use of notified eID means, including NFC-enabled eID cards in the case of Spanish citizens. The LEPS project tried to reduce the gaps for both types of challenges. The solution for service provider connectivity to eIDAS nodes can be considered easily replicable across the EU; it focuses on saving service provider costs when investing in eIDAS connectivity. The other LEPS solution, focused on using the Spanish eID card through a mobile phone interface, targets usability as its main value proposition.</para>
<para>The analysis of architectural options provided by the LEPS project demonstrated that there are different approaches to integrating online services with the eIDAS infrastructure through a connection with the country's eIDAS node. The implementation of APIs in two countries and pilot trials with real services and users resulted not only in technical verification of the selected approaches, but also in validation from the cost-benefit and usability perspectives.</para>
<para>The final outcomes mainly benefit the SPs, through a reduction of the time and effort needed to integrate their online services with the pan-European eIDAS network. In turn, the use of eIDAS eID services facilitates the cross-border provision of e-services and elevates the level of trust of end users. In addition, the LEPS interface for mobile access to Spanish DNI 3.0 eID card services improves the user experience. Finally, the results of the project will benefit the larger community of eIDAS developers and other stakeholders, since the results and guidelines generated during the project will help in taking decisions on how to approach and manage the challenges related to service provider connectivity to the country's eIDAS node.</para>
</section>
<section class="lev1" id="seca">
<title>Acknowledgments</title>
<para>This project has been funded by the European Union&#8217;s Connecting Europe Facility under grant agreement No. INEA/CEF/ICT/A2016/1271348.</para>
</section>
<section class="lev1" id="secb">
<title>References</title>
<para>[1] EUR-Lex, Access to European Union law, Regulation (EU) No. 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2014.257.01.0073.01.ENG. Retrieved date 30 November 2018.</para>
<para>[2] European Commission, CEF Digital, https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/CEF+Digital+Home. Retrieved date 27 November 2018.</para>
<para>[3] European Commission, About CEF building blocks, https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/About+CEF+building+blocks. Retrieved date 28 November 2018.</para>
<para>[4] European Commission, Digital Single Market, https://ec.europa.eu/digital-single-market/e-identification. Retrieved date 26 November 2018.</para>
<para>[5] European Commission, STORK: Take your e-identity with you, everywhere in the EU, https://ec.europa.eu/digital-single-market/en/content/stork-take-your-e-identity-you-everywhere-eu/. Retrieved date 30 November 2018.</para>
<para>[6] European Commission, Digital Single Market, STORK 2.0, https://ec.europa.eu/digital-single-market/en/news/end-stork-20-major-achievements-making-access-mobility-eu-smarter. Retrieved date 30 November 2018.</para>
<para>[7] eSENS, Moving Services Forward, https://www.esens.eu/. Retrieved date 30 November 2018.</para>
<para>[8] European Commission, e-SENS and Connecting Europe Facility: how do they work together? https://ec.europa.eu/digital-single-market/en/news/e-sens-and-connecting-europe-facility-how-do-they-work-together. Retrieved date 30 November 2018.</para>
<para>[9] D3.3 Operational and Technical Documentation of SP integration. Lead author: Juan Carlos Perez Baiin, Deliverable of the LEPS project, 2018. https://leps-project.eu/node/345. Retrieved date 30 November 2018.</para>
<para>[10] European Commission, eIDAS-Node integration package, https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/eIDAS+Node+integration+package. Retrieved date 18 November 2018.</para>
<para>[11] D4.2 eIDAS Interconnection supporting Service, Lead authors: Petros Kavassalis, Katerina Ksystra, Nikolaos Triantafyllou, Harris Papadakis, Deliverable of LEPS project 2018, http://www.leps-project.eu/node/347, Retrieved date 30 November 2018.</para>
<para>[12] D4.3 Operational and Technical Documentation of SP (ATHEX, Hellenic Post) integration (production), Lead authors: Petros Kavassalis, Katerina Ksystra, Nikolaos Triantafyllou, Maria Lekakou, Deliverable of LEPS project 2018, https://leps-project.eu/node/348. Retrieved date 30 November 2018.</para>
<para>[13] DNI y Pasaporte, Cuerpo Nacional de Policia, DNI electronico, Descripcion DNI 3.0, https://www.dnielectronico.es/PortalDNIe/PRF1_Cons02.action?pag=REF_038&amp;id_menu=1. Retrieved date 30 November 2018.</para>
<para>[14] D3.1 &#8211; Mobile ID App and its integration results with the Industrial Partners. Lead author: Elena Torroglosa, Deliverable of the LEPS project, 2018. https://leps-project.eu/node/343. Retrieved date 30 November 2018.</para>
<para>[15] D3.2 Operational and Technical Documentation of Correos services customization. Lead author: Juan Carlos Perez Bairn http://www.leps-project.eu/node/344. Retrieved date 30 November 2018.</para>
<para>[16] D4.1 Operational and Technical Documentation of SP (ELTA, ATHEX) customization. Lead author: Petros Kavassalis, Katerina Ksystra. http://www.leps-project.eu/node/346. Retrieved date 30 November 2018.</para>
<para>[17] D6.1 Production Testing Report. Lead authors: Petros Kavassalis, Katerina Ksystra, Manolis Sofianopoulos. http://www.leps-project.eu/node/354. Retrieved date 30 November 2018.</para>
<para>[18] FutureID, Shaping the future of electronic identity, http://www.futureid.eu/. Retrieved date 30 November 2018.</para>
<para>[19] FIDES, Federated Identity Management System, EIT Digital, https://www.eitdigital.eu/fileadmin/files/2015/actionlines/pst/activities2015/PST_Activity_flyer_FIDES.pdf. Retrieved date 30 November 2018.</para>
<para>[20] STRATEGIC, Service Distribution Network and Tools for Interoperable Programmable, and Unified Public Cloud Services, https://ec.europa.eu/digital-single-market/sites/digital-agenda/files/logo_strategic_v3_final.jpg. Retrieved date 10 December 2018.</para>
<para>[21] ESMO, e-IDAS-enabled Student Mobility, http://www.esmo-project.eu/. Retrieved date 30 December 2018.</para>
<para>[22] TOOP, The Once-Only Principle Project, http://www.toop.eu/. Retrieved date 10 December 2018.</para>
<para>[23] FIWARE, Future Internet Core Platform, https://www.fiware.org/. Retrieved date 10 December 2018.</para>
<para>[24] CEF: Opening a Bank Account Across Borders with an EU National Digital Identity, https://oixuk.org/opening-a-bank-account-cross-border-id-authentication/. Retrieved date 10 December 2018.</para>
<para>[25] CEF: The eIDAS 2018 Municipalities Project, https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/2017/07/11/eIDAS+2018+Municipalities+Project. Retrieved date 10 December 2018.</para>
<para>[26] Secure identities for the web, SkIDentity web site: skidentity.com.</para>
</section>
</chapter>

<chapter class="chapter" id="ch016" label="16" xreflabel="16">
<title><b>About the Editors</b></title>
<fig id="F16-1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<graphic xlink:href="graphics/ch016_fig001.jpg"/>
</fig>
<para><b>Dr. Jorge Bernal Bernabe</b> received the MSc, Master's and PhD degrees in Computer Science from the University of Murcia. He was accredited as Associate Professor by the Spanish ANECA in 2016 and received the &#8220;Best PhD Thesis Award&#8221; from the School of Computer Science of the University of Murcia in 2015. Currently, he is a postdoctoral researcher in the Department of Information and Communications Engineering of the University of Murcia, partially supported by INCIBE (Spanish National Cybersecurity Institute). He has been a visiting researcher at the Cloud and Security Lab of Hewlett-Packard Laboratories (Bristol, UK) and at the University of the West of Scotland. He is the author of several book chapters, more than 20 papers in indexed impact journals and more than 20 papers in international conferences. He has been involved in the scientific committees of numerous conferences and has served as a reviewer for multiple journals. In recent years, he has worked on several European research projects related to security and privacy, such as POSITIF, DESEREC, Semiramis, Inter-Trust, SocIoTal, ARIES, ANASTACIA, OLYMPUS and CyberSec4Europe. His scientific activity is mainly devoted to security, trust and privacy management in distributed systems and the IoT.</para>
<fig id="F16-2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<graphic xlink:href="graphics/ch016_fig002.jpg"/>
</fig>
<para><b>Dr. Antonio Skarmeta</b> received the M.S. degree in Computer Science from the University of Granada and the B.S. (Hons.) and Ph.D. degrees in Computer Science from the University of Murcia, Spain. Since 2009 he has been a Full Professor in the same department and university. He has worked on numerous national and international research projects in the networking, security and IoT areas, such as Euro6IX, ENABLE, DESEREC, Inter-Trust, DAIDALOS, SWIFT, SEMIRAMIS, SMARTIE, SOCIOTAL, IoT6, ARIES, ANASTACIA, OLYMPUS and CyberSec4Europe. His main interest is in the integration of security services, identity, IoT and Smart Cities. He has headed the ANTS research group since its creation in 1995. Currently, he is also advisor to the Vice-Rector of Research of the University of Murcia for international projects and head of the International Research Project Office. Since 2014 he has been the Spanish National Representative for the MSCA within H2020. He has published over 200 international papers and is a member of several program committees. He has also participated in several standardization fora, such as the IETF, ISO and ETSI.</para>
</chapter>
</book>