<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE book SYSTEM "d:\rp-xml\xml_dtds\docbookx.dtd">
<?xml-stylesheet type="text/xsl" href="9788793237704.xsl"?>
<book id="home" xmlns:xlink="http://www.w3.org/1999/xlink">
<bookinfo>
<title>Incorporating Structural Health Monitoring in the Design of Slip Formed Concrete Wind Turbine Towers</title>
<affiliation><emphasis role="strong">PhD Thesis by</emphasis></affiliation>
<authorgroup>
<author><firstname>Mads Knude</firstname>
<surname>Hovgaard</surname>
</author>
</authorgroup>
<affiliation><emphasis>Aarhus University Department of Engineering, Denmark</emphasis></affiliation>
<publisher>
<publishername>River Publishers</publishername>
</publisher>
<isbn>9788793237704</isbn>
</bookinfo>
<preface class="preface" id="preface">
<title>Preface</title>
<para>The idea of concrete wind turbine towers originates from department manager and civil engineer Niels Tornsberg at Ramb&#x00F8;ll Denmark. Civil engineer and Ph.D. Jens C. Kirk has, in his time in the department, designed more concrete chimneys than he can remember &#x2013; some more than 190 meters tall. The local presence of wind power plant producers around the Aarhus area of Denmark motivated a business case of tower concepts for wind turbines, and soon the contracting company MT H&#x00F8;jgaard became involved, owing to their expertise in slip forming such chimneys. As the project started, the local university, Aarhus University, had recently formed the Department of Engineering under the Graduate School of Science and Technology. Professor Rune Brincker quickly became part of the project. With his background in operational modal analysis, random vibration and fracture mechanics, he was the obvious choice for the subject. Rune introduced the structural health monitoring concept to the project group, and so the project title came to be <emphasis>Incorporating Structural Health Monitoring in the design of slip formed concrete wind turbine towers</emphasis>.</para>
</preface>
<preface class="preface" id="ack">
<title>Acknowledgements</title>
<para>This thesis summarizes an industrial PhD project, partially funded by Innovationsfonden (previously Forsknings og Innovationsstyrelsen). I am grateful for the financing that made this project possible. In the course of the project, I&#x2019;ve been Ramb&#x00F8;ll staff, but MT H&#x00F8;jgaard have provided vital economic support to make such a far-fetched project possible. A personal thanks to Niels Tornsberg at Ramb&#x00F8;ll for his belief that in order to progress, we must sometimes take a chance on a wildcard. Industrial supervisor Jens Kirk has had both feet on the ground throughout the project and I&#x2019;m grateful for the always thoughtful and considerate advice. Thanks to Svend Andreassen of MT H&#x00F8;jgaard for his kind help in providing information regarding the economic aspects of concrete wind turbine tower and chimney construction. A kind thanks to Rune Brincker for the time, energy and good advice given. Rune&#x2019;s keen interest in the project, always positive energy and good spirits have meant a great deal when times were tough.</para>
<para>Thanks to Raid Karoumi as well as my many other colleagues at KTH, where I spent a month in the spring of 2014. Our discussions of the directions and purpose of SHM were a great inspiration.</para>
<para>Thank you to professors John Dalsgaard S&#x00F8;rensen at AAU and Michael Havbro Faber at DTU, who pointed us in the direction of Bayesian decision analysis and probabilistic deterioration modelling right from the beginning of the project. Thanks also to Bo Juul Petersen, previously Ramb&#x00F8;ll, for taking the time to share information about matters relating to wind turbine design and load assessment. Thanks to Jens Peter Ulfkj&#x00E6;r for our many interesting discussions on concrete fatigue. Thanks to Thomas Westergaard for doing his bachelor project in concrete wind turbine design and to Lorenzo Colone for reminding me that people actually need to understand this thesis.</para>
<para>A special thanks goes to my PhD colleagues at the university: Jannick B. Hansen, Peter Olsen, Anders Skafte, Lars Hestbech, Jakob Fisker, Annette B. Rasmussen and Lise Kj&#x00E6;r.</para>
</preface>
<preface class="preface" id="summary">
<title>Summary</title>
<para>The design of most civil structures follows the partial-safety-factor format. The partial safety factors are coefficients prescribed in codes and guidelines, i.e. decided by administrative societal bodies. They ensure that all new structures have similar and sufficient safety levels. When civil engineers use the term <emphasis>optimization,</emphasis> they typically refer to minimizing the material use in deterministic limit state expressions where material strengths and load values have been modified by partial safety factors. An alternative to this approach is the probabilistic approach, where statistical models are employed to represent all variables, and the verification of limit state equations by a true/false statement is replaced by the calculation of a probability of failure. The probabilistic approach allows for optimization of life-cycle cost, i.e. not only initial costs but also the costs of various maintenance actions. Bayesian pre-posterior analysis enables optimization of life-cycle cost, taking the unknown outcomes of various actions into account. This has been practiced for several decades in the planning of maintenance actions for offshore structures. Recently, monitoring data have begun to be included in the maintenance planning, as the data add information concerning the reliability.</para>
<para>Parallel to the evolution of this applied science, the disciplines of condition monitoring, fault detection, non-destructive evaluation and damage prognosis have spawned the topic of Structural Health Monitoring (SHM). Formally speaking, SHM is the discipline of transforming sequential information on a structure&#x2019;s dynamic response, typically obtained during normal operating conditions, into real-time decisions on actions regarding maintenance and operations. This being said, very little effort has been put into the value-creating decision-making aspects of SHM, whereas most of the effort has gone into finding damage sensitive features. As this gap has never been fully bridged, SHM has not achieved the level of implementation one would expect after four decades of global effort resulting in thousands of scientific publications. It would seem that, where the development of risk-based inspection for offshore structures has been economically motivated, the economic value added by SHM has been somewhat neglected.</para>
<para>With a starting point in the business case of wind turbine towers made of concrete, this thesis sets up a framework for assessing the <emphasis>value of SHM</emphasis>.</para>
<para>The framework of Bayesian pre-posterior analysis is utilized to define the common optimum of expected life-cycle costs w.r.t. design variables and decision variables. Although there are many similarities to offshore inspection planning, the type and frequency of the information are different, which requires a different approach. As an exact solution is intractable, various approximations using surrogate objective functions from detection theory, filters, decision rules and Limited Memory Influence Diagrams (LIMID) are investigated. The main focus is on damage detection, but the value of localization is also investigated. Both levels of SHM are investigated numerically, and the damage sensitive features, as well as the detector performances, are investigated experimentally. The value of SHM is calculated for a wind turbine tower of steel, a wind turbine tower of concrete and an experimental blade-like structure. When the presented framework is used and the SHM system, as well as the structure, are co-optimized, the value of SHM is found to depend largely on the following extrinsic parameters:</para>
<itemizedlist mark="dash" spacing="normal">
<listitem><para>the costs of performing a manual inspection</para></listitem>
<listitem><para>the critical damage severity</para></listitem>
</itemizedlist>
</preface>
<preface class="preface" id="summary1">
<title>Summary in Danish</title>
<para>Dimensioneringen af de fleste konstruktioner forg&#x00E5;r efter partialkoefficient-metoden. Partialkoefficienterne er faktorer i normer og anvisninger som l&#x00F8;bende fasts&#x00E6;ttes af normkomiteer. Partialkoefficienternes rolle er at sikre at alle nye bygv&#x00E6;rker opn&#x00E5;r omtrent samme tilstr&#x00E6;kkelige sikkerhedsniveau. N&#x00E5;r bygningsingeni&#x00F8;rer bruger udtrykket <emphasis>optimering,</emphasis> refererer de typisk til minimering af materialeforbruget ved brug af deterministiske ligninger for gr&#x00E6;nsetilstande, hvori parametre s&#x00E5;som karakteristiske v&#x00E6;rdier for materialestyrker og laster er modificerede med partialkoefficienter. Som et alternativ til denne tilgang, findes de probabilistiske metoder. Heri anvendes statistiske modeller til at repr&#x00E6;sentere variable som stokastiske variable, og eftervisningen af gr&#x00E6;nsetilstandsligningerne overg&#x00E5;r til en udregning af svigtsandsynlighed, fremfor en sandt/falsk erkl&#x00E6;ring. Den probabilistiske tilgang tillader optimering af levetidsomkostninger, herunder initialomkostninger samt omkostninger vedr&#x00F8;rende det l&#x00F8;bende vedligehold af bygv&#x00E6;rket. Ved at medregne det usikre resultat af forskellige tiltag vedr&#x00F8;rende drift og vedligehold, danner den Bayesiske pr&#x00E6;-posteri&#x00E6;re beslutningsanalyse grundlag for optimering af levetidsomkostningerne. Denne metode har v&#x00E6;ret anvendt i flere &#x00E5;rtier ved planl&#x00E6;gningen af forskellige vedligeholdelsestiltag p&#x00E5; offshore konstruktioner. I de senere &#x00E5;r har inklusionen af data fra moniteringssystemer begyndt at vinde frem ved planl&#x00E6;gningen, eftersom dataene tilf&#x00F8;rer v&#x00E6;rdifuld information vedr&#x00F8;rende konstruktionens sikkerhed.</para>
<para>I parl&#x00F8;b med udviklingen af denne anvendte videnskab, har de faglige discipliner tilstandsoverv&#x00E5;gning, fejlfinding, ikke-destruktiv evaluering og tilstandsprognose, fusioneret i disciplinen Strukturel Helbreds Monitering (SHM). Formelt set er m&#x00E5;ls&#x00E6;tningen med SHM at overs&#x00E6;tte sekventiel information, i form af dataregistrering af et bygv&#x00E6;rks svingninger og vibrationer, til realtidsbeslutninger vedr&#x00F8;rende drifts- og vedligeholdelsestiltag. Disse data opsamles med varig hyppighed, typisk under normale tilstandsbetingelser for bygv&#x00E6;rkets funktion og form&#x00E5;l, under hele levetiden. Med dette er sagt, s&#x00E5; er der kun ydet en begr&#x00E6;nset indsats p&#x00E5; kvantificering af SHM&#x2019;s v&#x00E6;rdiskabende beslutningsrelaterede egenskaber, hvorimod en langt st&#x00F8;rre indsats er ydet imod identifikationen af s&#x00E5;kaldte skadesf&#x00F8;lsomme st&#x00F8;rrelser. Eftersom der endnu ikke er bygget bro over denne kl&#x00F8;ft, har SHM ikke n&#x00E5;et den grad af anvendelse som man kunne have forventet, efter fire &#x00E5;rtiers global indsats, udmundende i tusindvis af videnskabelige publikationer. Det synes at, hvor udviklingen af risikostyret inspektionsplanl&#x00E6;gning for offshore konstruktioner har v&#x00E6;ret &#x00F8;konomisk motiveret, s&#x00E5; er den &#x00F8;konomiske gevinst ved brug af SHM blevet overset.</para>
<para>Med afs&#x00E6;t i business casen om vindm&#x00F8;llet&#x00E5;rne i beton, ops&#x00E6;tter denne afhandling rammerne for beregning af <emphasis>v&#x00E6;rdien af SHM</emphasis>.</para>
<para>Den Bayesiske pr&#x00E6;-posteri&#x00E6;re beslutningsanalyse anvendes ved definitionen af et f&#x00E6;lles optimum for forventede levetidsomkostninger, med hensyn til b&#x00E5;de designparametre samt beslutningsparametre. Selvom der er mange ligheder med offshore inspektionsplanl&#x00E6;gning, s&#x00E5; er typen og hyppigheden af information forskellig, og dette g&#x00F8;r at en anden tilgang bliver n&#x00F8;dvendig. Eftersom eksakt l&#x00F8;sning ikke er mulig anvendes forskellige tiln&#x00E6;rmelser og forenklinger. Blandt disse er surrogat objektfunktioner fra detekteringsteori, filtre, beslutningsregler og begr&#x00E6;nset-hukommelses influens diagrammer (LIMID). Det prim&#x00E6;re fokus er p&#x00E5; skadesdetektering, men v&#x00E6;rdien af lokalisering er ligeledes unders&#x00F8;gt. Begge disse niveauer af SHM er unders&#x00F8;gt numerisk og de skadesf&#x00F8;lsomme st&#x00F8;rrelser, s&#x00E5;vel som detekteringsalgoritmernes virkningsgrad, er unders&#x00F8;gt eksperimentelt. V&#x00E6;rdien af SHM er beregnet for b&#x00E5;de et vindm&#x00F8;llet&#x00E5;rn af st&#x00E5;l, for et vindm&#x00F8;llet&#x00E5;rn af beton, samt for en eksperimentel vinge-lignende skala konstruktion. N&#x00E5;r den angivne procedure bruges og SHM systemet, s&#x00E5;vel som konstruktionen, er optimerede, er v&#x00E6;rdien af SHM fundet til prim&#x00E6;rt at afh&#x00E6;nge af f&#x00F8;lgende to ydre parametre:</para>
<itemizedlist mark="dash" spacing="normal">
<listitem><para>omkostningen ved at foretage en manuel inspektion</para></listitem>
<listitem><para>st&#x00F8;rrelsen af den kritiske skade</para></listitem>
</itemizedlist>
</preface>
<preface class="preface" id="abb">
<title>Commonly used Abbreviations</title>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<tbody>
<tr>
<td valign="top">AUC</td>
<td valign="top">Area Under the Curve</td>
</tr>
<tr>
<td valign="top">AR</td>
<td valign="top">Auto Regressive</td>
</tr>
<tr>
<td valign="top">BNT</td>
<td valign="top">Bayes Net Toolbox</td>
</tr>
<tr>
<td valign="top">BED</td>
<td valign="top">Bayesian Experimental Design</td>
</tr>
<tr>
<td valign="top">BN</td>
<td valign="top">Bayesian Network</td>
</tr>
<tr>
<td valign="top">CM</td>
<td valign="top">Condition monitoring</td>
</tr>
<tr>
<td valign="top">CPD</td>
<td valign="top">Conditional Probability Distribution</td>
</tr>
<tr>
<td valign="top">CPT</td>
<td valign="top">Conditional Probability Table</td>
</tr>
<tr>
<td valign="top">CDF</td>
<td valign="top">Cumulative distribution function</td>
</tr>
<tr>
<td valign="top">DOF</td>
<td valign="top">Degree of Freedom</td>
</tr>
<tr>
<td valign="top">DBN</td>
<td valign="top">Dynamic Bayesian Network</td>
</tr>
<tr>
<td valign="top">EVPI</td>
<td valign="top">Expected Value of Perfect Information</td>
</tr>
<tr>
<td valign="top">EMA</td>
<td valign="top">Experimental Modal Analysis</td>
</tr>
<tr>
<td valign="top">EWMA</td>
<td valign="top">Exponentially Weighted Moving Average</td>
</tr>
<tr>
<td valign="top">FA</td>
<td valign="top">Factor Analysis</td>
</tr>
<tr>
<td valign="top">FE</td>
<td valign="top">Finite Element</td>
</tr>
<tr>
<td valign="top">FORM</td>
<td valign="top">First Order Reliability Methods</td>
</tr>
<tr>
<td valign="top">GMM</td>
<td valign="top">Gaussian Mixture Models</td>
</tr>
<tr>
<td valign="top">HMM</td>
<td valign="top">Hidden Markov Model</td>
</tr>
<tr>
<td valign="top">LCC</td>
<td valign="top">Life-Cycle Cost</td>
</tr>
<tr>
<td valign="top">LIMID</td>
<td valign="top">Limited Memory Influence Diagrams</td>
</tr>
<tr>
<td valign="top">LDA</td>
<td valign="top">Linear Discriminant Analysis</td>
</tr>
<tr>
<td valign="top">LEFM</td>
<td valign="top">Linear Elastic Fracture Mechanics</td>
</tr>
<tr>
<td valign="top">MSD</td>
<td valign="top">Mahalanobis squared Distance</td>
</tr>
<tr>
<td valign="top">MEU</td>
<td valign="top">Maximum Expected Utility</td>
</tr>
<tr>
<td valign="top">MCS</td>
<td valign="top">Monte Carlo Sampling</td>
</tr>
<tr>
<td valign="top">MLP</td>
<td valign="top">Multi-Layer Perceptron</td>
</tr>
<tr>
<td valign="top">NB</td>
<td valign="top">Na&#x00EF;ve Bayes</td>
</tr>
<tr>
<td valign="top">NREL</td>
<td valign="top">National Renewable Energy Laboratory</td>
</tr>
<tr>
<td valign="top">NP</td>
<td valign="top">Neyman-Pearson likelihood ratio Lemma</td>
</tr>
<tr>
<td valign="top">NDE</td>
<td valign="top">Non-Destructive Evaluation</td>
</tr>
<tr>
<td valign="top">OMA</td>
<td valign="top">Operational Modal Analysis</td>
</tr>
<tr>
<td valign="top">O&#x0026;M</td>
<td valign="top">Operations &#x0026; Maintenance</td>
</tr>
<tr>
<td valign="top">PCA</td>
<td valign="top">Principal Component Analysis</td>
</tr>
<tr>
<td valign="top">PDF</td>
<td valign="top">Probability Density Function</td>
</tr>
<tr>
<td valign="top">PoD</td>
<td valign="top">Probability of Detection</td>
</tr>
<tr>
<td valign="top">PoI</td>
<td valign="top">Probability of Indication</td>
</tr>
<tr>
<td valign="top">ROC</td>
<td valign="top">Receiver Operating Characteristic</td>
</tr>
<tr>
<td valign="top">RBI</td>
<td valign="top">Risk Based Inspection</td>
</tr>
<tr>
<td valign="top">RMS</td>
<td valign="top">Root-Mean-Square</td>
</tr>
<tr>
<td valign="top">SORM</td>
<td valign="top">Second Order Reliability Methods</td>
</tr>
<tr>
<td valign="top">SPU</td>
<td valign="top">Single Policy Updating</td>
</tr>
<tr>
<td valign="top">SHM</td>
<td valign="top">Structural Health Monitoring</td>
</tr>
<tr>
<td valign="top">SVM</td>
<td valign="top">Support Vector Machines</td>
</tr>
<tr>
<td valign="top">UM</td>
<td valign="top">Usage Monitoring</td>
</tr>
<tr>
<td valign="top">VoI</td>
<td valign="top">Value of Information</td>
</tr>
</tbody>
</table>
</preface>
<chapter class="chapter" id="ch01" label="Chapter 1" xreflabel="ch01">
<title>Introduction</title>
<blockquote>
<para><emphasis>&#x201C;Engineering is the art of modelling materials we do not wholly understand, into shapes we cannot precisely analyze so as to withstand forces we cannot properly assess, in such a way that the public has no reason to suspect the extent of our ignorance&#x201D;</emphasis></para>
<attrib>A.R. Dykes in a speech to British Institution of Structural Engineers, 1976. Cited in Downer [<link linkend="B1">1</link>].</attrib>
</blockquote>
<para>Health monitoring is the pain you feel when your toes collide with the dinner table. As such, it&#x2019;s not a new technology as much as it&#x2019;s a recent application. Furthermore, the latest civil Structural Health Monitoring (SHM) technology is likely a million times less evolved than the nervous system of the human body. After four decades of intense global research in the field, at best a very small fraction of structures are equipped with the technology. So how do we progress from here?</para>
<para>First of all, the implementation of a high-end technology such as SHM in a (very) low-end technology such as civil structures is not without complications. Compare a jumbo jet with the building you&#x2019;re sitting in. We spend most of our lives inside buildings and most of us feel completely safe inside them. Concerning the jumbo jet, most of us spend perhaps a few weeks over the course of a lifetime inside a plane. So which poses the greater risk? According to Thoft-Christensen &#x0026; Baker [<link linkend="B2">2</link>], the annual rate of people killed in a plane crash is 60,000 times larger than in a structural failure. Where life-safety reasons have created demand for monitoring technology in e.g. aviation, this is not the case for more than, perhaps, a few very large civil structures. In the end, selling SHM technology to a building owner using the life-safety argument would be like selling life rafts to desert nomads.</para>
<para>We need a different argument, and one thing that building owners understand is cost. Can an SHM system lower the cost of a structure?</para>
<fig id="F1" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 1</label>
<caption><para>Can an SHM system lower the total costs of a structure?</para></caption>
<graphic xlink:href="graphics/fig1.jpg"/>
</fig>
<para>To answer this simple question we must consider both the structural design and the SHM design. In the simplest case, a structure is designed to a safety level called the &#x201C;acceptance criteria&#x201D;. The acceptance criteria is given by a maximum annual probability of failure for the structure. In order for the structure to comply with this required safety level, the designer selects dimensions, material properties and deterioration protection so that the requirement is met in the most economical way. If the required safety level were lower, the structure would be less expensive to build. The addition of SHM influences the probability of failure, but adds costs for developing, installing and maintaining the SHM system. The surface sketched in <link linkend="F2">Figure <xref linkend="F2" remap="2"/></link> shows the idea behind combined SHM-structural design:</para>
<fig id="F2" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 2</label>
<caption><para>Principal sketch of the common optimum of structural design and SHM design. The expected failure costs, and thus the total costs, tend towards infinite when the initial material costs are low</para></caption>
<graphic xlink:href="graphics/fig2.jpg"/>
</fig>
<para>This is of course a heavily simplified outline of the problem, but one that facilitates visualizing the problem at hand. In actuality, the objective function that forms the surface has many more dimensions than the two given here and cannot readily be visualized. Furthermore, besides having countless local minima, it has non-linear constraints and bounds on most variables, and it can, at present, only be approximated or sampled. The introduction to the thesis provides background to the topics that are included in the formulation of the objective function in <link linkend="F2">Figure <xref linkend="F2" remap="2"/></link>.</para>
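The cost trade-off in Figures 1 and 2 can be made concrete with a small numerical sketch. Everything below is a hypothetical illustration, not taken from the thesis: the cost functions, constants and exponential forms are invented solely to show how an expected-total-cost objective over a material-cost variable and an SHM-cost variable can be evaluated and searched.

```python
import math

# All numbers and functional forms below are invented for illustration.
C_FAILURE = 1e8   # assumed monetary consequence of a structural failure
LIFETIME = 20     # assumed service life in years

def annual_failure_probability(material_cost):
    """Assumption: spending more on material lowers the failure probability."""
    return 1e-2 * math.exp(-material_cost / 1e5)

def detection_probability(shm_cost):
    """Assumption: spending more on SHM raises the chance of timely detection."""
    return 1.0 - math.exp(-shm_cost / 1e4)

def expected_total_cost(material_cost, shm_cost):
    """Toy version of the Figure 2 surface: initial + SHM + expected failure costs."""
    p_f = annual_failure_probability(material_cost)
    # Detected damage is assumed to be repaired before it leads to failure,
    # so SHM averts a fraction of the failures.
    p_f_effective = p_f * (1.0 - detection_probability(shm_cost))
    expected_failure_cost = LIFETIME * p_f_effective * C_FAILURE
    return material_cost + shm_cost + expected_failure_cost

# A crude grid search stands in for the approximation/sampling mentioned above.
best = min(
    ((m, s) for m in range(0, 1_000_000, 10_000)
            for s in range(0, 100_000, 1_000)),
    key=lambda ms: expected_total_cost(*ms),
)
```

Under these invented numbers the search trades material spending against monitoring spending; in the thesis the real objective has many more variables, constraints and local minima, which is why it can only be approximated or sampled.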
<section class="lev1" id="sec1.1" label="1.1" xreflabel="sec1.1">
<title>Structural Health Monitoring</title>
<para>The process of implementing a damage detection strategy for aerospace, civil, and mechanical engineering infrastructure is referred to as Structural Health Monitoring (SHM), as defined by Sohn et al. [<link linkend="B3">3</link>]. SHM should not be confused with Usage Monitoring (UM), which is the process of collecting information on load effects and environmental data for the purpose of prediction and prognosis of damage.</para>
<section class="lev2" id="sec1.1.1" label="1.1.1" xreflabel="sec1.1.1">
<title>Background of SHM</title>
<para>In the following, I attempt a brief introduction to SHM in the context of the thesis. A more comprehensive insight is provided in four reviews: Doebling et al. [<link linkend="B4">4</link>] covered the time up to 1996, later extended to the period 1996&#x2013;2001 by Sohn et al. [<link linkend="B3">3</link>]; Carden and Fanning [<link linkend="B5">5</link>] cover material up to 2003; and, most recently, Fan and Qiao focus on simple structures in their 2011 review [<link linkend="B6">6</link>]. In the last decade, several books that cover various aspects of vibration based SHM have appeared. Among these are Farrar &#x0026; Worden from 2013 [<link linkend="B7">7</link>], where statistical pattern recognition is the focus, while both Adams from 2007 [<link linkend="B8">8</link>] and Balageas et al. from 2006 [<link linkend="B9">9</link>] focus on sensor technologies as well as feature selection. The encyclopedia of SHM from 2009 [<link linkend="B10">10</link>] contains a selection of publications.</para>
<para>Society spends vast resources on the maintenance of its ageing machinery and civil infrastructure. From bridges to offshore platforms and from coastal protection to rotating machinery, society relies on the reliable operation of numerous advanced structures. All of these assets are characterized by large consequences in the event of fault or failure; the consequences can be economic or in terms of life-safety. Various approaches to translating loss of life into economic consequence exist, e.g. the Life-Quality Index (LQI) by Nathwani et al. [<link linkend="B11">11</link>], the Societal Willingness To Pay (SWTP) by Rackwitz [<link linkend="B12">12</link>] and the Societal Value of a Statistical Life (SVSL) by Pandey &#x0026; Nathwani [<link linkend="B13">13</link>]. Events with societal consequences are events on a scale that impacts society in terms of loss of life and economic losses &#x2013; the collapse of a highway bridge is an example hereof. As an asset approaches the end of its intended service life, the probability of occurrence of large damage increases. One approach to maintaining the reliability at the acceptable level, <emphasis>the acceptance criteria,</emphasis> is to inspect the asset frequently. Inspections are, however, expensive and time consuming for large civil structures, or for complicated machinery that must be dismantled at each inspection. Looking to other fields of asset management, similar disciplines are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Condition Monitoring (CM), which has been implemented for several decades to detect the onset of damage in rotating machinery.</para></listitem>
<listitem><para>Usage Monitoring (UM) of load effects, implemented in offshore structures to update predictions of Remaining Useful Life (RUL) based on probabilistic deterioration models.</para></listitem>
<listitem><para>Non-Destructive Evaluation (NDE) and Non-Destructive Testing (NDT), which originate from aviation and aerospace but are widely implemented. NDE is often carried out offline, in a laboratory, in contrast to CM methods, where the testing is performed during operation.</para></listitem>
<listitem><para>Health and Usage Monitoring Systems (HUMS) in rotorcraft aviation.</para></listitem></itemizedlist>
<para>Many of these closely related disciplines share common traits: they represent some form of sequential<footnote id="fn1" label="1"><para>Or continuous, if the system streams data rather than discrete samples</para></footnote> source of discrete information, and they supplement an inspection and maintenance strategy, which is implemented for economic or life-safety reasons.</para>
<para>Such strategies serve mainly to reduce the risk associated with structural failure or operational fault, whether the risk mainly concerns life-safety (as is the case for buildings, aviation and automotive) or is mainly economic (as is the case for machinery in production plants). The design and implementation of a monitoring technology is a supplement to human inspection, motivated by the high costs of inspections and of temporarily taking the asset out of service.</para>
<para>The 1970s and 1980s marked the start of the ongoing research in SHM. Inspired by successful implementations of CM for rotating machinery, the offshore business set out to investigate the feasibility of vibration based damage detection for offshore platforms. The motivation was the very high life-safety, economic and environmental risks associated with failure of these structures. Simultaneously, the aerospace sector started with the Space Shuttle Modal Inspection System (SMIS), Farrar &#x0026; Worden [<link linkend="B7">7</link>], which reached implementation. SMIS targets fatigue damage in the hull of the shuttle, using Experimental Modal Analysis (EMA). The aviation industry has similarly made large progress in the application of SHM. Boller and Buderath [<link linkend="B14">14</link>] provide an overview of SHM systems for fatigue monitoring in aerostructures.</para>
<para>The emergence of various different technologies and approaches to damage detection incited the definition of benchmark structures. Among these are the well-known IASC-ASCE large steel frame structure, described e.g. by Johnson et al. [<link linkend="B15">15</link>], and the more recent full-scale cable-stayed bridge by the Harbin Institute of Technology, Li et al. [<link linkend="B16">16</link>]. They provide a basis on which to demonstrate the capabilities of a damage detection algorithm on an example that is not tailored to a single technology.</para>
<para>Recent real-world applications are few, but among the most heavily instrumented structures are 40 long-span bridges, 20 of which are in China, according to Ko &#x0026; Ni [<link linkend="B17">17</link>].</para>
</section>
<section class="lev2" id="sec1.1.2" label="1.1.2" xreflabel="sec1.1.2">
<title>Motivation for SHM</title>
<para>From the application examples given above, it is evident that the design and implementation of any monitoring effort must be made in a risk-based decision framework. In the following, I focus primarily on civil structures, including support structures for wind turbines. A comprehensive review of the motivation for SHM of civil structures is provided by Brownjohn [<link linkend="B18">18</link>]. Among the benefits of SHM are:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Facilitates rapid structural reassessment after occurrence of extreme events.</para></listitem>
<listitem><para>Inexpensive reassessment provides support for life-extension and decision support regarding replacement.</para></listitem>
<listitem><para>By using the SHM output to trigger Operations &#x0026; Maintenance (O&#x0026;M) decisions, health monitoring enables reactive (rather than the usual preventive) maintenance based on inspection updating.</para></listitem>
<listitem><para>Sensors for Usage Monitoring can also be used for health monitoring. Both types of information can be used for Risk Based Inspection planning (RBI).</para></listitem>
<listitem><para>Monitoring provides feedback to the designer. This enables engineers to better understand structural behavior and improve future designs.</para></listitem>
<listitem><para>Health monitoring affects the reliability of the structure. By incorporating SHM into the design of new structures, a risk-optimum can be achieved.</para></listitem>
<listitem><para>SHM could become a political requirement for structures associated with large life-safety consequences. For instance, the number of SHM implementations in Japan, due <emphasis>to the bridge retrofit and seismic assessment program,</emphasis> is growing rapidly, according to Fujino &#x0026; Siringoringo [<link linkend="B19">19</link>].</para></listitem></itemizedlist>
<para>On the downside, the cost of implementing SHM can be high. It was investigated by Rice &#x0026; Spencer [<link linkend="B20">20</link>], who refer to the Bill Emerson Memorial Bridge, instrumented with 84 accelerometers at an installation cost of more than $15,000 per channel. Lynch et al. [<link linkend="B21">21</link>] estimate the cost at $5,000 per channel for buildings.</para>
</section>
<section class="lev2" id="sec1.1.3" label="1.1.3" xreflabel="sec1.1.3">
<title>Rytter&#x2019;s Damage Detection Hierarchy</title>
<para>The purpose of damage detection is treated in the thesis by Rytter [<link linkend="B22">22</link>]. He defined five levels of questions to be answered in hierarchical order:</para>
<sidebar>
<title>Text box 1. Rytter&#x2019;s [<link linkend="B22">22</link>] damage detection hierarchy</title>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Is there damage in the system (existence)?</para></listitem>
<listitem><para>Where is the damage in the system (location)?</para></listitem>
<listitem><para>What kind of damage is present (type)?</para></listitem>
<listitem><para>How severe is the damage (extent)?</para></listitem>
<listitem><para>How much useful life remains (prognosis)?</para></listitem></orderedlist>
</sidebar>
<para>Rytter&#x2019;s hierarchy of levels allows for a ranking of SHM technology capabilities. The first level is treated in paper IV of the appendix of this thesis. Level 2 is treated in paper V. The hierarchy is depicted as a partial decision tree in <link linkend="F3">Figure <xref linkend="F3" remap="3"/></link>.</para>
<fig id="F3" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 3</label>
<caption><para>Partial decision tree depicting the damage detection hierarchy. Decision nodes are square and random nodes are round</para></caption>
<graphic xlink:href="graphics/fig3.jpg"/>
</fig>
<para>Due to the propagation of error through the decision tree, the level of uncertainty on the answer rises with each level in the hierarchy, with level 1 having the smallest uncertainty and level 5 the largest. Most damage detection methods aim at levels 1 and 2, while levels 3&#x2013;5 have, to my knowledge, not yet been accomplished for any SHM technology.</para>
</section>
<section class="lev2" id="sec1.1.4" label="1.1.4" xreflabel="sec1.1.4">
<title>The Statistical Pattern Recognition Paradigm</title>
<para>The large variations in environmental conditions, together with the numerous model and measurement uncertainties, biases and incorrect assumptions encountered in structural damage detection, caused a group of prominent researchers, headed by Chuck Farrar at Los Alamos National Laboratory, to claim in 2000 that SHM is a problem of Statistical Pattern Recognition (SPR) [<link linkend="B23">23</link>]. Their paradigm states that SHM is a process of four integrated steps:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Operational evaluation</para></listitem>
<listitem><para>Data acquisition &#x0026; networking</para></listitem>
<listitem><para>Feature selection &#x0026; extraction</para></listitem>
<listitem><para>Probabilistic decision making</para></listitem></orderedlist>
<para>SPR requires statistical model building of the SHM output to a given input, as well as statistical models for the structural response. The paradigm was demonstrated by Sohn et al. [<link linkend="B24">24</link>] and has, since its emergence, had a large impact on the direction of research.</para>
</section>
<section class="lev2" id="sec1.1.5" label="1.1.5" xreflabel="sec1.1.5">
<title>The Fundamental Axioms of SHM</title>
<para>In the 1990s and 2000s, some general principles began to form in the field of SHM research. The fundamental axioms of SHM by Worden et al. [<link linkend="B25">25</link>] are an attempt to &#x201C;sum up&#x201D; the working definitions and thus provide newcomers to the topic with a starting point. The axioms are given below:</para>
<sidebar>
<title>Text box 2. The fundamental axioms of SHM, from Worden et al. [<link linkend="B25">25</link>]</title>
<para>Axiom I. All materials have inherent flaws or defects.</para>
<para>Axiom II. The assessment of damage requires a comparison between two system states.</para>
<para>Axiom III. Identifying the existence and location of damage can be done in an unsupervised learning mode, but identifying the type of damage present and the damage severity can generally only be done in a supervised learning mode.</para>
<para>Axiom IVa. Sensors cannot measure damage. Feature extraction through signal processing and statistical classification is necessary to convert sensor data into damage information.</para>
<para>Axiom IVb. Without intelligent feature extraction, the more sensitive a measurement is to damage, the more sensitive it is to changing operational and environmental conditions.</para>
<para>Axiom V. The length- and time-scales associated with damage initiation and evolution dictate the required properties of the SHM sensing system.</para>
<para>Axiom VI. There is a trade-off between the sensitivity to damage of an algorithm and its noise rejection capability.</para>
<para>Axiom VII. The size of damage that can be detected from changes in system dynamics is inversely proportional to the frequency range of excitation.</para>
</sidebar>
<para>The axioms fundamentally build on the SPR paradigm, stating that damage detection is a problem of statistical model building and testing. This is evident from axioms III and IVa, in which Machine Learning and statistical classification are directly referenced.</para>
</section>
<section class="lev2" id="sec1.1.6" label="1.1.6" xreflabel="sec1.1.6">
<title>Damage definition</title>
<para>A singularly important prerequisite for damage detection is an operational definition of damage &#x2013; and of critical severity. Without such a definition, it is simply impossible to design an effective detection system, as a cost function cannot be adequately defined, and without a cost function, performance-based decisions and risk optimization are impossible. Nevertheless, there is no widely accepted definition of critical damage in the SHM community, as one will observe from the vast literature covered in the reviews [<link linkend="B3">3</link>] [<link linkend="B4">4</link>] [<link linkend="B5">5</link>] [<link linkend="B6">6</link>]. Drawing an analogy (revisited later in the thesis) to offshore inspection updating, described by Skjong [<link linkend="B26">26</link>], and to aviation NDE, described e.g. by Yang &#x0026; Trapp [<link linkend="B27">27</link>], a physical model of the progressing damage is required, along with a mathematical model of the inspection performance. In a large part of SHM publications, a smeared stiffness reduction of several percent over a region is used for numerical verification. For the IASC benchmark structure, whole structural members are completely removed, see Johnson et al. [<link linkend="B15">15</link>]. Few publications deal with the physical mechanisms of damage or with the topic of Damage Prognosis (DP). A distinction between defect, damage and fault is provided by Worden et al. [<link linkend="B25">25</link>], and the influence of time scale (damage growth rate) is investigated in Worden &#x0026; Farrar [<link linkend="B7">7</link>].</para>
</section>
<section class="lev2" id="sec1.1.7" label="1.1.7" xreflabel="sec1.1.7">
<title>Feature selection</title>
<para>Historically, research in SHM has been primarily focused on the search for damage sensitive features. Features for damage detection are quantities that may be retrieved from the structure with sensors, e.g. accelerations sampled at fixed intervals over a finite-length period. Through further processing of the data, other features may be derived, e.g. eigenfrequencies. A damage sensitive feature is a feature that correlates well with damage in the structure, while having good noise rejection and insensitivity to changes in the response that are caused not by damage but by changes in the environmental and operating conditions, e.g. changes in temperature.</para>
<para>The premise of vibration based damage detection is that damages cause changes in the mass and stiffness properties, which in turn cause changes in the modal parameters, i.e. frequencies, mode shapes and damping, of the structure. Some non-modal based approaches exist, but they are not discussed in this thesis.</para>
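<para>As a purely illustrative sketch, not taken from the thesis, this premise can be demonstrated on an undamped single-degree-of-freedom oscillator, whose natural frequency scales with the square root of the stiffness; all numbers below are assumed:</para>
<programlisting>
```python
import math

def natural_frequency(k, m):
    """Natural frequency in Hz of an undamped single-DOF oscillator."""
    return math.sqrt(k / m) / (2.0 * math.pi)

k, m = 1.0e6, 1.0e3                          # assumed stiffness [N/m] and mass [kg]
f_healthy = natural_frequency(k, m)
f_damaged = natural_frequency(0.9 * k, m)    # a 10 % stiffness loss

# Since f scales with sqrt(k), a 10 % stiffness reduction gives
# only about a 5 % frequency drop.
print(f_healthy, f_damaged, 1.0 - f_damaged / f_healthy)
```
</programlisting>
<para>The modest frequency shift relative to environmental scatter is precisely what makes feature selection demanding.</para>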
<para>Most of the damage sensitive features that have been presented in the research community are vibration-based, see Doebling et al. [<link linkend="B4">4</link>], using the measured global response of the structure at sensor locations. Localized measurements, that require knowledge about the approximate location of the damage, are not investigated in this thesis.</para>
<para>System identification is the topic of extracting a structure&#x2019;s modal parameters from response measurements. Experimental Modal Analysis (EMA) has been used since the 1950s for system identification, and with the emergence of various techniques in the 1970s and 1980s, the possibilities for system identification under operational loading increased. Operational Modal Analysis (OMA) originated from the use of the correlation function as a free decay, extending the techniques to random loading. The background of OMA is presented by e.g. Brincker &#x0026; Ventura [<link linkend="B28">28</link>].</para>
<para>Most vibration-based methods can be partitioned into the following categories:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Resonant frequency-based</para></listitem>
<listitem><para>Mode shape-based</para></listitem>
<listitem><para>Curvature/strain mode shape-based</para></listitem>
<listitem><para>Flexibility-based</para></listitem>
<listitem><para>Modal damping-based</para></listitem>
<listitem><para>Time history model-based</para></listitem></itemizedlist>
</section>
</section>
<section class="lev1" id="sec1.2" label="1.2" xreflabel="sec1.2">
<title>Probabilistic structural design</title>
<para>Structural design was originally rules of thumb and experience, passed down through generations of architects and builders. In the 17<superscript>th</superscript> century, Hooke&#x2019;s and Newton&#x2019;s laws of physics provided the background for understanding structural failure. A century later, Euler and Bernoulli formulated the basic beam theory. In 1827, Navier founded modern structural analysis by defining the elastic properties to be independent of the inertial properties. Failure could now be explained with mathematics, and rules could be devised. The rules were based on failure modes and mechanisms, and they contained factors which accounted for the uncertainties. With the introduction of the safety concept in the 1950s, by e.g. Freudenthal [<link linkend="B29">29</link>], the heuristic elements could be explained by probability theory, but the absence of tractable methods, as well as of experimental data, prevented implementation. Structural Reliability Analysis (SRA) appeared in the 1970s, see e.g. Thoft-Christensen &#x0026; Baker [<link linkend="B2">2</link>] and Madsen et al. [<link linkend="B30">30</link>]. Like its sibling, reliability analysis as known from electronics and aviation, it is based on reliability theory. SRA merges probability theory with engineering optimization through the concepts of limit state, safety margin and the central <emphasis>probability of failure</emphasis>. Probabilities enable societal organs to devise <emphasis>acceptance criteria</emphasis> for safety levels of structures. The code-writers have calibrated the deterministic structural codes with the aim that complying structures satisfy the criteria. Most structures are still designed according to deterministic rules, and probabilistic analysis is rarely applied. The probabilistic codes, e.g. JCSS [<link linkend="B31">31</link>], and standards, e.g. ISO [<link linkend="B32">32</link>], have not reached broad utilization in civil engineering, and their applications have been limited to high-consequence structures with life-safety concerns, e.g. nuclear installations and major bridges. Structural engineering suffers from resistance toward change, perhaps driven by the comfort of the deterministic codes. Probabilistic methods are broadly used only in the offshore business, pioneered mainly by Det Norske Veritas (DNV).</para>
</section>
<section class="lev1" id="sec1.3" label="1.3" xreflabel="sec1.3">
<title>Decision theory</title>
<para>Decision theory is the analysis of decision-making under uncertainty, with the purpose of optimizing the utility of the decision maker. While Berger [<link linkend="B33">33</link>] provides an insightful review, the topic is briefly introduced in the following, where I discuss the background and development of statistical tests, statistical decision analysis and detection theory.</para>
<para>There are three main types of statistical tests: Fisherian [<link linkend="B34">34</link>], Neyman-Pearson Likelihood ratio [<link linkend="B35">35</link>] and Bayesian. The Bayesian test is the core of the statistical decision theory, developed by Wald [<link linkend="B36">36</link>] by treating statistical problems as a type of game. Wald&#x2019;s theory followed the mathematical theory of games and utility maximization, described in Von Neumann &#x0026; Morgenstern [<link linkend="B37">37</link>]. Fisherian testing was used for SHM in e.g. Worden et al. [<link linkend="B38">38</link>] and D&#x00F6;hler et al. [<link linkend="B39">39</link>], Neyman-Pearson testing in e.g. Farhidzadeh et al. [<link linkend="B40">40</link>] and Bayesian testing by e.g. Flynn [<link linkend="B41">41</link>].</para>
<sidebar>
<title>Text box 3. Bayesian statistics</title>
<para>As this is the first time we encounter the Bayesian concepts, which are used throughout the thesis for probability updating, risk-based decision-making and design of experiments, I find it fitting to provide a brief introduction to the Bayesian class of statistics. Bayesian probability is a school of probability theory, named after the Reverend T. Bayes but mainly accredited to Laplace [<link linkend="B42">42</link>] from 1812. Bayesian theory is centered around Bayes&#x2019; theorem, in verbose form:</para>
<para><graphic xlink:href="graphics/ueq1.jpg"/></para>
<para>The theorem relates the <emphasis>posterior</emphasis> probability to the <emphasis>prior</emphasis> probability through the <emphasis>likelihood</emphasis> of the observation. It is central to Bayesian inference, which is the updating of probabilities given observations.</para>
<para>Bayesian and frequentist are the two primary schools of statistics.</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>For the frequentist, probability is a proportion of outcomes &#x2013; it is defined by the observations.</para></listitem>
<listitem><para>For the Bayesian, probability is a degree of belief &#x2013; it changes with observations.</para></listitem></itemizedlist>
<para>The importance of the Bayesian <emphasis>prior</emphasis> is demonstrated in the example by Bishop [<link linkend="B43">43</link>]: The frequentist infers that, after three tosses of a coin all landing heads, the distribution of the fourth outcome is unity for heads and zero for tails. The Bayesian would use a discrete uniform distribution as prior and reach a less extreme conclusion.</para>
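<para>A minimal numerical sketch of the coin example, here using a conjugate Beta prior (a modelling detail assumed for illustration, not stated in the text):</para>
<programlisting>
```python
def frequentist_p_heads(heads, tosses):
    # Maximum likelihood estimate: the observed proportion.
    return heads / tosses

def bayesian_p_heads(heads, tosses, alpha=1.0, beta=1.0):
    # Beta(alpha, beta) prior (uniform for alpha = beta = 1); the posterior
    # predictive probability of heads is Laplace's rule of succession.
    return (heads + alpha) / (tosses + alpha + beta)

print(frequentist_p_heads(3, 3))  # 1.0 -- the extreme conclusion
print(bayesian_p_heads(3, 3))     # 0.8 -- tempered by the uniform prior
```
</programlisting>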
<para>For the frequentist, the data are a repeatable sample and the underlying parameters are fixed, whereas, for the Bayesian, the data are fixed and the underlying parameters are random.</para>
</sidebar>
<para>Bayesian decision analysis is mainly attributed to Raiffa &#x0026; Schlaifer [<link linkend="B44">44</link>], whose work also provides the foundation of the decision analysis used in this thesis. However, the Bayesian concepts were well known in decision theory, through the concepts of the Bayes risk and the Bayes decision, before [<link linkend="B44">44</link>] was published. Pre-posterior analysis generalizes Bayesian Experimental Design (BED), as pointed out by Lindley [<link linkend="B45">45</link>], who developed BED directly from Bayesian decision theory by considering Design of Experiments (DoE) as a case of pre-posterior analysis. To provide civil engineers with a theoretical framework for making decisions, Benjamin &#x0026; Cornell [<link linkend="B46">46</link>] introduced Bayesian decision theory to engineering decision-making, with the argument that decision-making is the ultimate use of probabilistic methods. The use of the <emphasis>prior</emphasis> has caused the Bayesian approach to be considered rational, thus making it easier for decision makers to accept mathematical decision-making. Bayesian decision analysis forms the basis of RBI, see Madsen &#x0026; S&#x00F8;rensen [<link linkend="B47">47</link>], and was applied by Nielsen [<link linkend="B48">48</link>] for maintenance planning for offshore wind turbines. Raiffa &#x0026; Schlaifer [<link linkend="B44">44</link>] also introduced the concepts of Value of Information (VoI) and Expected Value of Perfect Information (EVPI), which enable the calculation of the benefit of an experiment before it is performed.</para>
<section class="lev2" id="sec1.3.1" label="1.3.1" xreflabel="sec1.3.1">
<title>Detection theory</title>
<para>Detection theory, see e.g. Kay [<link linkend="B49">49</link>], is a system of metrics and functions for performance-based evaluation of detectors. Among these are the Receiver Operating Characteristic (ROC), the Area Under the Curve (<emphasis>AUC</emphasis>)<footnote id="fn2" label="2"><para>Throughout the thesis, <emphasis>italics</emphasis> are used to define variables</para></footnote>, the Probability of Detection (PoD), the Probability of Indication (PoI), the confusion matrix and the Deflection Coefficient. The theory has its origins in radar and sonar development in the US during the Second World War. The basic theory was published by Peterson, Birdsall and Fox in 1954 [<link linkend="B50">50</link>] as a framework to evaluate the performance of statistical tests and classifiers, and was later adopted in psychophysics, see e.g. Swets &#x0026; Green in 1966 [<link linkend="B51">51</link>].</para>
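<para>As a hedged sketch of how these metrics relate, the following computes ROC points and the AUC for a simple threshold detector separating two unit-variance Gaussian feature distributions; the separation parameter and threshold grid are assumptions for illustration only:</para>
<programlisting>
```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def roc_point(threshold, d):
    """False-alarm probability and PoD for a threshold detector between
    N(0,1) (healthy) and N(d,1) (damaged) feature distributions."""
    pfa = 1.0 - norm_cdf(threshold)        # healthy sample exceeds threshold
    pod = 1.0 - norm_cdf(threshold - d)    # damaged sample exceeds threshold
    return pfa, pod

def auc(d, n=2000):
    """Trapezoid integration of the ROC; the closed form is Phi(d/sqrt(2))."""
    ts = [-8.0 + 16.0 * i / n for i in range(n + 1)]
    pts = [roc_point(t, d) for t in ts]
    area = 0.0
    for (x1, y1), (x0, y0) in zip(pts, pts[1:]):
        area += 0.5 * (y1 + y0) * (x1 - x0)
    return area

print(auc(1.0))   # close to 0.76 for a one-standard-deviation feature shift
```
</programlisting>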
</section>
</section>
<section class="lev1" id="sec1.4" label="1.4" xreflabel="sec1.4">
<title>Life-Cycle Costs of deteriorating structures</title>
<para>Life-Cycle Cost (LCC) analysis is the topic of optimizing a structure w.r.t. service-life costs, including initial, maintenance and, in some cases, end-of-life costs. It is an optimization of expected costs, and the realized structure may incur different costs than predicted. The LCC encompasses initial costs (design and construction expenses) as well as running costs, including costs for inspections, operations and maintenance, and finally expected failure costs and, in some cases, also end-of-life costs. The LCC optimum will in some cases mean that the initial costs are higher than the optimum of ordinary engineering optimization. It is important to include the net discount rate of money when operations and maintenance costs are included, due to the time value of money. LCC analysis enables the inclusion of NDE inspection events, modelled by performance functions from detection theory, as well as repairs. This makes it possible to include inspection intervals or similar among the optimization parameters. Monitoring systems and SHM systems may also be included as optimization variables. Frangopol et al. [<link linkend="B52">52</link>] treated LCC optimization of deteriorating concrete structures. Enevoldsen &#x0026; S&#x00F8;rensen [<link linkend="B53">53</link>] showed that LCC optimization is basic Bayesian pre-posterior decision analysis.</para>
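<para>The role of the discount rate can be sketched as follows; the cost figures, annual failure probability and rate are hypothetical and serve only to show the structure of the expected-cost sum:</para>
<programlisting>
```python
def discounted(cost, year, rate):
    """Present value of a cost incurred in a given year."""
    return cost / (1.0 + rate) ** year

def expected_lcc(initial, annual_om, pf_annual, failure_cost, lifetime, rate):
    """Expected Life-Cycle Cost: initial costs plus discounted operations
    and maintenance costs plus discounted expected failure costs."""
    total = initial
    for year in range(1, lifetime + 1):
        total += discounted(annual_om, year, rate)
        total += discounted(pf_annual * failure_cost, year, rate)
    return total

# Hypothetical numbers for illustration only.
print(expected_lcc(initial=1.0e6, annual_om=2.0e4, pf_annual=1e-4,
                   failure_cost=5.0e6, lifetime=20, rate=0.05))
```
</programlisting>
<para>With a positive discount rate, costs incurred late in the service life weigh less than the same costs incurred early, which is why maintenance deferral can be optimal.</para>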
<para>The closely related topic Life-Cycle Assessment (LCA) considers environmental impact, as well as costs.</para>
<section class="lev2" id="sec1.4.1" label="1.4.1" xreflabel="sec1.4.1">
<title>Risk Based Inspection</title>
<para>The topic of Risk Based Inspection (RBI) is probably the most widely implemented subtopic of LCC analysis for structures. It has been used in the offshore business for more than 3 decades, see e.g. Skjong [<link linkend="B26">26</link>]. RBI has received considerable attention in the technical literature during the past 4 decades, and can be said to have reached a mature level of development. Pre-posterior analysis is the basis of optimization and the event probabilities are found by structural reliability methods. Inspection strategies are implemented to reduce the number of branches in the event tree and to simplify optimizations. Examples of strategies are minimum annual reliability or fixed inspection intervals. For the initial inspection plan, the total expected costs are calculated based on an assumption of never finding a crack. Decision strategies further reduce the calculation cost by implementing fixed decision rules, e.g. indication triggers repair. Alternatively, the crack can be sized and the information can be used in Bayesian updating, following the method described in Lassen &#x0026; Recho [<link linkend="B54">54</link>].</para>
<fig id="F4" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 4</label>
<caption><para>Decision tree of inspection updating, repeated at each inspection. The total number of branches is 6<superscript>n</superscript> where <emphasis>n</emphasis> is the number of time steps where inspections are possible (paper VI)</para></caption>
<graphic xlink:href="graphics/fig4.jpg"/>
</fig>
<para>Before RBI can be applied, a physical model must be calibrated to the fatigue design model. The many computational difficulties in RBI caused Faber et al. [<link linkend="B55">55</link>] to suggest a generic method, from which optimal inspection plans of similar details can be interpolated.</para>
</section>
</section>
<section class="lev1" id="sec1.5" label="1.5" xreflabel="sec1.5">
<title>State of the art in value of SHM</title>
<para>The need to evaluate the benefit of SHM was realized by Wong &#x0026; Yao in 2001 [<link linkend="B56">56</link>]. They based their conclusions on the need for risk-based decision support and on the gap, at the time, between SHM efforts and the owner&#x2019;s decision-making. Their observations were spawned by panel discussions at the International Workshop on SHM, so there is no doubt that the scientific community has been aware of the need for work concerning the value of SHM &#x2013; not least because such work would guide the direction of research in the scientific community.</para>
<para>Ten years later, in 2011, Pozzi and Der Kiureghian [<link linkend="B57">57</link>] were the first to calculate the Value of Information (VoI) for SHM. Their example is a simulation of an observable linear degradation law which, although it models a sequence of information, is actually a static decision problem, where only one decision is made. The VoI was calculated for a fictitious sensor type that directly measures the degradation state, and a sensitivity analysis was performed that shows the development of the VoI as a function of sensor precision. Their approach is based on Monte Carlo Simulation (MCS) of the cost outcome.</para>
<para>Bayesian detection theory is the special case of Bayesian pre-posterior analysis that deals with utility-optimal detection. It was applied by Flynn &#x0026; Todd [<link linkend="B58">58</link>] and by Flynn [<link linkend="B41">41</link>] in 2010 for optimization of the number of sensors applied to an aircraft wing, w.r.t. localized damage detection. They used a parametric response model, rather than an FE model, to obtain the sensor response, and the sensing system was of the active-sensing piezoelectric type. The risk-optimal detectors were verified experimentally on a scale model of a blade structure. The work was also focused on one-shot detection, with no sequential decisions or time-dependencies taken into account.</para>
<para>Uncertainty quantification plays a major role in the value of SHM. Mao [<link linkend="B59">59</link>] treated the uncertainties from fault detection, using estimation of transmissibility and frequency response function, in a detection theory setting. He used the Neyman-Pearson likelihood ratio lemma to optimize the detectors for maximum detectability for a fixed false alarm rate.</para>
<para>Most recently, the COST program <emphasis>&#x2018;Quantifying the value of structural health monitoring&#x2019;</emphasis> [<link linkend="B60">60</link>] was initiated just as the last sentences of this thesis were being written. Based on the same approach as used in this thesis, it represents a large economic commitment to the <emphasis>value of SHM</emphasis> topic.</para>
<para>The related discipline of monitoring strains and loads effect with the purpose of updating the deterioration models is known as Usage Monitoring (UM). Using strain measurements and RBI, the value of Usage Monitoring was demonstrated for the case of a numerically simulated offshore fatigue detail by Kierkegaard et al. in 1990 [<link linkend="B61">61</link>] and, more recently, by Th&#x00F6;ns &#x0026; Faber in 2013 [<link linkend="B62">62</link>].</para>
</section>
<section class="lev1" id="sec1.6" label="1.6" xreflabel="sec1.6">
<title>Example stating the detection problem</title>
<para>As large parts of this thesis dive into the mechanics of the damage detection problem, I hope that this example may serve as an illustrative introduction to the subtopics and definitions. Consider the simply supported beam with a crack propagating vertically, shown in <link linkend="F5">Figure <xref linkend="F5" remap="5"/></link>. The crack has length <emphasis>a</emphasis> and the beam fails if it reaches the critical length <emphasis>a<subscript>c</subscript></emphasis>. The crack growth is modelled by the differential equation <emphasis>da/dt</emphasis> = <emphasis>C1a<superscript>C2</superscript>,</emphasis> where <emphasis>C1</emphasis> and <emphasis>C2</emphasis> are random variables with a known joint distribution.</para>
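<para>A minimal Monte Carlo sketch of the time to failure under this growth law; the distribution of <emphasis>C1</emphasis>, the fixed <emphasis>C2</emphasis> and the crack lengths are assumed for illustration and do not reproduce the joint distribution of the example:</para>
<programlisting>
```python
import random

def time_to_failure(c1, c2, a0, a_crit, dt=0.01, t_max=1000.0):
    """Forward-Euler integration of the crack growth law da/dt = c1 * a**c2
    until the crack reaches the critical length a_crit."""
    a, t = a0, 0.0
    while a_crit > a and t_max > t:
        a += c1 * a ** c2 * dt
        t += dt
    return t

def simulate_tf(n=200, seed=1):
    """Monte Carlo sample of the time to failure, with an assumed
    lognormal C1 and a fixed C2."""
    rng = random.Random(seed)
    return [time_to_failure(rng.lognormvariate(-4.0, 0.3),
                            c2=1.5, a0=1.0, a_crit=10.0)
            for _ in range(n)]

tf = simulate_tf()
print(min(tf), sum(tf) / len(tf), max(tf))
```
</programlisting>
<para>The scatter of the resulting failure times is what a PDF of the time to failure describes.</para>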
<fig id="F5" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 5</label>
<caption><para>The beam in the example. The 3 blue squares are vibration sensors</para></caption>
<graphic xlink:href="graphics/fig5.jpg"/>
</fig>
<para>Using sensors, the resonance frequencies <emphasis>f<subscript>1</subscript>, f<subscript>2</subscript></emphasis> and <emphasis>f<subscript>3</subscript></emphasis> are extracted at fixed time intervals and a damage indicator <emphasis>DI</emphasis> is calculated from the frequencies. The <emphasis>DI</emphasis> is subject to variation due to variations in loading, environmental variations and measurement uncertainties.</para>
<fig id="F6" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 6</label>
<caption><para>The typical example of sequential SHM damage detection</para></caption>
<graphic xlink:href="graphics/fig6.jpg"/>
</fig>
<para>Notice that the left plot depicts expected values. The random scatter of the features can be very high compared to the damage-related change of the expected values. The middle plot shows the Probability Density Function (PDF) of the time to failure p(<emphasis>T<subscript>f</subscript></emphasis>), with a realized crack growth shown. The SHM detection problem is, at every sensing instance, to transform the damage indicator, shown in the right plot, into maintenance actions. Among the approaches to the decision-making are Bayesian detectors and fixed and variable thresholds. If, as shown, a fixed threshold is used, then the optimization takes the shape sketched in <link linkend="F7">Figure <xref linkend="F7" remap="7"/></link>.</para>
<fig id="F7" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 7</label>
<caption><para>Area plot of cost contributions from false alarms and failures for the example</para></caption>
<graphic xlink:href="graphics/fig7.jpg"/>
</fig>
<para>A high threshold brings us to the right plateau, corresponding to the expected cost of not performing any maintenance. A low threshold means performing maintenance at every sensing instance, i.e. too low a threshold can result in severely increased costs. The background to the problem, further elaboration of approaches to decision-making, as well as results for a realistic numerical application and experimental validation are all presented within this thesis.</para>
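<para>The threshold trade-off can be reproduced with a toy model; the Gaussian damage indicator, the cost ratio and the number of healthy sensing instances are all hypothetical assumptions:</para>
<programlisting>
```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_cost(threshold, c_false_alarm=1.0, c_failure=100.0,
                  n_healthy=50, damage_shift=3.0):
    """Expected cost of a fixed detection threshold: false alarms accrue
    over the healthy sensing instances, while missing the shifted
    (damaged) indicator incurs the failure cost."""
    p_fa = 1.0 - norm_cdf(threshold)             # per healthy instance
    p_miss = norm_cdf(threshold - damage_shift)  # at the damaged state
    return n_healthy * p_fa * c_false_alarm + p_miss * c_failure

thresholds = [0.1 * i for i in range(61)]        # sweep 0.0 .. 6.0
costs = [expected_cost(t) for t in thresholds]
best = thresholds[costs.index(min(costs))]
print(best, min(costs))   # an interior optimum between the two plateaus
```
</programlisting>
<para>Sweeping the threshold traces out the two cost plateaus: the false-alarm cost dominates at low thresholds and the failure cost at high thresholds, with the risk optimum in between.</para>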
</section>
<section class="lev1" id="sec1.7" label="1.7" xreflabel="sec1.7">
<title>Objective of thesis</title>
<para>The aim is to use damage detection for decision support regarding preventive operations and maintenance actions. By combining the SHM decision-making with the structural design optimization, the expected Life-Cycle Costs (LCC), covering operations, maintenance and up-front initial costs, can be reduced.</para>
<section class="lev2" id="sec1.7.1" label="1.7.1" xreflabel="sec1.7.1">
<title>Approach and structure of thesis</title>
<para><emphasis>Incorporating SHM in the design of slip formed concrete wind turbine towers</emphasis> is the title of this thesis, but the main scientific focus is on the <emphasis>value of SHM</emphasis>. The wind turbine towers are considered a business case, and steel towers have been used to a larger extent than concrete towers. There are two main reasons for this: 1) the steel towers were easier to approach, as the damage models were more developed, and 2) SHM is not nearly as valuable for concrete towers, as was discovered in the work relating to paper II. In parallel with the work performed in the context of this thesis, work has been performed on the development of concrete tower concepts. The outcome of said work is neither included nor referred to in this thesis.</para>
<para>Assessing the <emphasis>value of SHM</emphasis> spans multiple research areas and, correspondingly, the papers appended to this thesis span multiple subjects. To tie them together at this early stage, the following mission breakdown provides a visual overview of the span of the thesis.</para>
<fig id="F8" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 8</label>
<caption><para>Visual synopsis of the span of the thesis and as an overview of the progression of the papers</para></caption>
<graphic xlink:href="graphics/fig8.jpg"/>
</fig>
<para>The thesis revolves around Bayesian pre-posterior decision analysis. It is intended that paper I is the keystone that chains the elements of the remaining papers together.</para>
<para><emphasis>Papers II + III</emphasis> deal mainly with deterioration models</para>
<para><emphasis>Papers IV + V</emphasis> deal mainly with SHM system design</para>
<para><emphasis>Paper VI</emphasis> deals mainly with probabilistic decision making</para>
<para><emphasis>Paper I</emphasis> deals with combined SHM / structural design</para>
<para>Following this structure, the remaining part of the thesis is divided into chapters as follows:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para><link linkend="Ch02">Chapter <xref linkend="Ch02" remap="2"/></link>: Structural design</para></listitem>
<listitem><para><link linkend="Ch03">Chapter <xref linkend="Ch03" remap="3"/></link>: SHM design</para></listitem>
<listitem><para><link linkend="Ch04">Chapter <xref linkend="Ch04" remap="4"/></link>: Combined structural / SHM design (pre-posterior analysis)</para>
</listitem></itemizedlist>
</section>
<section class="lev2" id="sec1.7.2" label="1.7.2" xreflabel="sec1.7.2">
<title>Scope and general assumptions</title>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The study of the consequence model for SHM decision-making has been left out of this work. A simple consequence model has been applied when relevant, and a sensitivity analysis performed.</para></listitem>
<listitem><para>Data acquisition, cleansing, compression and fusion have not been considered specifically. For some algorithms, the modal parameters, obtained using Operational Modal Analysis (OMA), have been used. The modal identification scripts were written and tested by my colleagues Rune Brincker and Peter Olsen at Aarhus University.</para></listitem>
<listitem><para>Only fatigue damage has been considered for the numerical examples in this work. Although fatigue damage is not the easiest type of damage to detect (mass change, e.g. due to corrosion, icing, spalling etc. has a larger impact on the modal properties than stiffness change), it is a well-known design driver for dynamically loaded structures.</para></listitem>
<listitem><para>Reliability for extreme loads and the impact of SHM is not treated. Value of SHM for reassessment after catastrophic events or similar has not been treated either. In both cases, the same framework applies, although the calculation of benefit must be based on different SHM strategies.</para></listitem>
<listitem><para>Utilities are assumed to be additive and separable. No sensitivity analysis has been carried out to investigate the influence of this assumption.</para></listitem>
<listitem><para>The decision maker is assumed risk neutral. No sensitivity analysis has been carried out to investigate the impact of this assumption.</para></listitem>
<listitem><para>Only finite-risk decision rules and consequence models have been considered.</para></listitem></itemizedlist>
</section>
</section>
</chapter>
<chapter class="chapter" id="ch02" label="Chapter 2" xreflabel="ch02">
<title>Structural design</title>
<para>The design of a civil structure depends on the forces acting on it and on the targeted safety level of the structure. We define one <emphasis>limit state</emphasis> for each combination of failure mode and single load effect. A complex structure can thus have a very large number of limit states, of which only a few are decisive for the structural dimensions. It is the civil engineer's task to identify the critical limit states and to ensure sufficient reliability in each of them by selecting dimensions and materials. I restrict the scope to deterioration failure modes for targeting by SHM, as they cause repair and inspection costs.</para>
<para>Structural deterioration in steel structures can be fatigue cracking, corrosion, abrasion or erosion. <link linkend="T1">Table <xref linkend="T1" remap="1"/></link> provides an overview:</para>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><para>Life-cycle cost sensitivities of the most common deterioration types of structural steel</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="all" border="1">
<thead>
<tr>
<th valign="top">Type</th>
<th valign="top">Damage type</th>
<th valign="top">Impact on initial costs</th>
<th valign="top">Impact on inspection costs</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top">Fatigue</td>
<td valign="top">Crack</td>
<td valign="top">Varies</td>
<td valign="top">Large</td>
</tr>
<tr>
<td valign="top">Corrosion</td>
<td valign="top">Loss of material</td>
<td valign="top">Large</td>
<td valign="top">Small</td>
</tr>
<tr>
<td valign="top">Abrasion</td>
<td valign="top">Loss of material</td>
<td valign="top">Small</td>
<td valign="top">Small</td>
</tr>
<tr>
<td valign="top">Erosion</td>
<td valign="top">Loss of material</td>
<td valign="top">Small</td>
<td valign="top">Small</td>
</tr>
</tbody>
</table>
</table-wrap>
<para>In concrete structures there are several types of deterioration, all of which fall into one of two categories: concrete deterioration and reinforcement corrosion. The first category includes chemical reactions of constituents, e.g. Alkali-Silica Reactivity (ASR), Alkali-Carbonate Reactivity (ACR), freeze-thaw induced scaling, abrasion due to contact wear, and fatigue. <link linkend="T2">Table <xref linkend="T2" remap="2"/></link> provides an overview:</para>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><para>Life-cycle cost sensitivities of the most common deterioration types of concrete</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="all" border="1">
<thead>
<tr>
<th>Type</th>
<th>Damage type</th>
<th>Impact on initial costs</th>
<th>Impact on inspection costs</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top">Reinforcement corrosion</td>
<td valign="top">Spalling, loss of resistance</td>
<td valign="top">Small</td>
<td valign="top">Small</td>
</tr>
<tr>
<td valign="top">Fatigue</td>
<td valign="top">Weakened material</td>
<td valign="top">Varies</td>
<td valign="top">Large</td>
</tr>
<tr>
<td valign="top">ACR</td>
<td valign="top">Local scaling</td>
<td valign="top">Small</td>
<td valign="top">Small</td>
</tr>
<tr>
<td valign="top">ASR</td>
<td valign="top">Weakened material</td>
<td valign="top">Small</td>
<td valign="top">Small</td>
</tr>
<tr>
<td valign="top">Freeze/thaw</td>
<td valign="top">Scaling</td>
<td valign="top">Small</td>
<td valign="top">Small</td>
</tr>
<tr>
<td valign="top">Abrasion</td>
<td valign="top">Loss of material</td>
<td valign="top">Small</td>
<td valign="top">Small</td>
</tr>
<tr>
<td valign="top">Erosion</td>
<td valign="top">Loss of material</td>
<td valign="top">Small</td>
<td valign="top">Small</td>
</tr>
</tbody>
</table>
</table-wrap>
<para>I limit the scope to fatigue deterioration in this thesis. Any other deterioration mechanism could, in principle, be considered for SHM targeting, but fatigue is the obvious choice, as the required deterioration models are well-known from RBI and because fatigue is a well-known design driver for dynamically loaded structures. Furthermore, as is shown in paper V, the value of SHM depends strongly on the inspection costs, with SHM becoming more economical as inspection costs rise.</para>
<para>A probabilistic model for the targeted deterioration mechanism is required for risk-based decision making. Typically, the design models are based on empirical damage accumulation laws and are of a deterministic nature. As a large part of the variables that influence the probability of failure are of a stochastic nature, structural reliability or sampling methods are used to estimate the probability of failure. As damage detection is intended for decision support regarding preventive operations and maintenance actions, an observable model of the physical damage must be calibrated to the design model &#x2013; a concept known from RBI. The coupled-model concept is sketched in <link linkend="F9">Figure <xref linkend="F9" remap="9"/></link>.</para>
<fig id="F9" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 9</label>
<caption><para>Framework for structural deterioration showing the design model and the observable physical model. The physical model is calibrated to be <emphasis>reliability equivalent</emphasis> so that the distributions for time to failure p(<emphasis>T<subscript>f</subscript></emphasis>) are identical<footnote id="fn3" label="3"><para>I use the simplified notation p(<emphasis>T<subscript>f</subscript></emphasis>) for f<emphasis><subscript>Tf</subscript></emphasis>(<emphasis>T<subscript>f</subscript></emphasis>) henceforth.</para></footnote>. I return to the greyed-out models later</para></caption>
<graphic xlink:href="graphics/fig9.jpg"/>
</fig>
<section class="lev1" id="sec2.1" label="2.1" xreflabel="sec2.1">
<title>Structural reliability methods</title>
<para>To calculate the probabilities of discrete events, e.g. failure, we may use structural reliability methods. These are approximation methods developed for evaluating integrals in the low-probability region of the joint probability density function. Structural reliability methods are categorized into three levels of analysis: level I, II and III. Level I methods are deterministic and based on partial safety factors, and are discussed no further. Level III methods are exact reliability calculations, based on the integration of the full joint PDF of all variables <emphasis>x</emphasis> = {<emphasis>x<subscript>1</subscript>,x<subscript>2</subscript>, &#x2026;,x<subscript>n</subscript></emphasis>} over the failure domain <emphasis>&#x03C9;<subscript>f</subscript></emphasis>. This is the evaluation of an <emphasis>n</emphasis>-fold integral:</para>
<equation id="1"><graphic xlink:href="graphics/eq1.jpg"/></equation>
<para>The full joint PDF is never obtainable for SHM design and for structural deterioration limit states. Instead we may use level II methods, which use iterative or sampling techniques to approximate the probability of failure. Several methods exist to approximate the probability of failure on both component and system level for civil structures. The methods can be divided into the categories sampling based, linear approximation based and simulation based. In the first group are Monte Carlo Sampling (MCS), Importance Sampling and Latin Hypercube methods. In the second group are First and Second Order Reliability Methods (FORM and SORM). In the third group are Directional Simulation and Subset Simulation. Several commercial software packages, as well as the open source Matlab toolbox FERUM 4.1 [<link linkend="B63">63</link>], offer these analysis types. FORM analysis was used in papers I&#x2013;III and VI for calculation of the fatigue reliability. Generally, MCS was used for verification of FORM results.</para>
<para>The structural reliability methods are based on the definition of a safety margin <emphasis>M</emphasis> as a function of all basic variables <emphasis>x:</emphasis></para>
<equation id="2"><graphic xlink:href="graphics/eq2.jpg"/></equation>
<para>Setting the margin M = 0 defines a hyper-surface in the <emphasis>n</emphasis>-dimensional space of basic variables, called the <emphasis>limit state surface</emphasis>. Although the basic variables are random entities, the surface is a purely deterministic concept. It divides the <emphasis>n</emphasis>-dimensional space into a safe region <emphasis>&#x03C9;<subscript>s</subscript></emphasis> and a failure region <emphasis>&#x03C9;<subscript>f</subscript></emphasis>. In the case of a linear safety margin <emphasis>M</emphasis> and normal basic variables, the reliability index <emphasis>&#x03B2;</emphasis> is defined as:</para>
<equation id="3"><graphic xlink:href="graphics/eq3.jpg"/></equation>
<para>And the relation to the failure probability is:</para>
<equation id="4"><graphic xlink:href="graphics/eq4.jpg"/></equation>
<para>Where &#x03A6; is the standard normal Cumulative Distribution Function (CDF). When assumptions of Gaussian variables and a linear safety margin are not satisfied, it is still possible to calculate <emphasis>&#x03B2;</emphasis> by linearizing the limit state surface in the design point in the standard normal space of variables. This is the background of FORM, further elaborated in [<link linkend="B64">64</link>].</para>
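<para>As a minimal numerical sketch of the relations in eqs. (3)&#x2013;(4), the mapping between reliability index and failure probability can be evaluated with the standard normal CDF. Python is used here purely for illustration; it is not the thesis tooling:</para>

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF Phi(x), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def failure_probability(beta):
    """Eq. (4): P_f = Phi(-beta) for a linear margin with normal variables."""
    return std_normal_cdf(-beta)

def reliability_index(pf):
    """Invert P_f = Phi(-beta) by bisection (avoids a SciPy dependency)."""
    lo, hi = -10.0, 10.0
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        # failure_probability decreases with beta, so bracket towards pf
        if failure_probability(mid) > pf:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

<para>For example, a target reliability index of 3.8 corresponds to a failure probability on the order of 10<superscript>-4</superscript>.</para>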
<para>Monte Carlo sampling can be used to estimate <emphasis>P<subscript>f</subscript></emphasis> by simulating the PDF of <emphasis>M</emphasis> with random samples drawn from the joint distribution of the basic variables <emphasis>x</emphasis>. If <emphasis>n</emphasis> is the total number of samples and <emphasis>k</emphasis> is the number of samples for which <emphasis>M</emphasis> &#x2264; 0, then:</para>
<equation id="5"><graphic xlink:href="graphics/eq5.jpg"/></equation>
<para>As the sampling distribution of <inline-graphic xlink:href="graphics/inline1.jpg"/> is a sum of independent samples, it is asymptotically Gaussian<footnote id="fn4" label="4"><para>Notice that this is <underline>not</underline> the sampling distribution of <emphasis>M</emphasis>.</para></footnote>, following the central limit theorem. The coefficient of variation of the estimate <inline-graphic xlink:href="graphics/inline1.jpg"/> depends on the number of samples <emphasis>n</emphasis>, and thus the number of samples required to estimate <emphasis>P<subscript>f</subscript></emphasis> to a reasonable precision is inversely proportional to <emphasis>P<subscript>f</subscript></emphasis>. As a rule of thumb, <emphasis>n<subscript>required</subscript></emphasis> &#x007E; 100 / <emphasis>P<subscript>f</subscript></emphasis> gives a coefficient of variation of approximately 10%. More elaborate insight is provided by Melchers [<link linkend="B64">64</link>].</para>
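<para>The estimator in eq. (5) and the sample-size rule of thumb can be illustrated with a toy safety margin <emphasis>M</emphasis> = <emphasis>R</emphasis> &#x2212; <emphasis>S</emphasis>; the Gaussian resistance and load parameters below are hypothetical and serve only to exercise the estimator:</para>

```python
import math
import random

def mcs_failure_probability(n, seed=1):
    """Estimate P_f = P(M <= 0), eq. (5), for the toy margin M = R - S."""
    rng = random.Random(seed)
    k = 0
    for _ in range(n):
        resistance = rng.gauss(10.0, 1.0)   # illustrative values only
        load = rng.gauss(5.0, 1.5)
        if resistance - load <= 0.0:
            k += 1
    pf = k / n
    # CoV of the estimator; roughly 10% when n ~ 100 / P_f
    cov = math.sqrt((1.0 - pf) / (n * pf)) if k > 0 else float("inf")
    return pf, cov
```

<para>With these toy parameters the margin is normal with &#x03B2; &#x2248; 2.8, so by the rule of thumb roughly 100 / <emphasis>P<subscript>f</subscript></emphasis> &#x2248; 4&#x00D7;10<superscript>4</superscript> samples are needed for a 10% coefficient of variation.</para>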
<section class="lev2" id="sec2.1.1" label="2.1.1" xreflabel="sec2.1.1">
<title>System aspects</title>
<para>A civil structure is modelled as a system of series- or parallel-connected components, each representing a structural member. The two types of connections are shown in <link linkend="F10">Figure <xref linkend="F10" remap="10"/></link>.</para>
<fig id="F10" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 10</label>
<caption><para>Schematic representation of series and parallel systems, from JCSS [<link linkend="B31">31</link>]</para></caption>
<graphic xlink:href="graphics/fig10.jpg"/>
</fig>
<para>System failure is a union of <emphasis>m</emphasis> failure modes, each being an intersection of <emphasis>m<subscript>i</subscript></emphasis> element limit states. This is modelled as a series system of parallel systems:</para>
<equation id="6"><graphic xlink:href="graphics/eq6.jpg"/></equation>
<para>Naturally, the correlations between the components&#x2019; limit states have a large effect on the system reliability. In general, the reliability of a parallel system decreases with increasing correlation. Conversely, the reliability of a series system increases with increasing correlation. Methods for estimating the reliability of systems are given by Madsen et al. [<link linkend="B30">30</link>].</para>
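<para>For the idealized case of fully independent components, the series and parallel system failure probabilities reduce to simple products; the sketch below covers only this independent case (the correlated case requires the methods in [<link linkend="B30">30</link>]):</para>

```python
def series_pf(component_pfs):
    """Weakest-link (series) system: fails if ANY component fails.
    Independent components: P_f = 1 - prod(1 - p_i)."""
    survival = 1.0
    for p in component_pfs:
        survival *= (1.0 - p)
    return 1.0 - survival

def parallel_pf(component_pfs):
    """Redundant (parallel) system: fails only if ALL components fail.
    Independent components: P_f = prod(p_i)."""
    pf = 1.0
    for p in component_pfs:
        pf *= p
    return pf
```

<para>Adding components thus increases the failure probability of a series system and decreases that of a parallel system, consistent with the correlation trends noted above.</para>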
</section>
</section>
<section class="lev1" id="sec2.2" label="2.2" xreflabel="sec2.2">
<title>Risk-based optimization</title>
<para>The acceptance criteria for civil structures can be set by regulations and codes, which is mainly the case when life-safety is involved, but they can also be a requirement from the building owner. The latter is the case if the owner has his own risk analysis and thus restricts the system-level probability of failure. This could also be the case if the owner is risk-averse. Reliability-based structural optimization, e.g. in Enevoldsen &#x0026; S&#x00F8;rensen [<link linkend="B53">53</link>], is the field of optimizing an objective function, e.g. material cost or weight, subject to a constraint on the probability of failure. If life-cycle expected costs constitute the objective function, the problem becomes a risk minimization, subject to linear and non-linear constraints. If initial costs <emphasis>C<subscript>init</subscript></emphasis> and failure costs <emphasis>C<subscript>f</subscript></emphasis> are the only relevant costs, the optimization can be written as:</para>
<equation id="7"><graphic xlink:href="graphics/eq7.jpg"/></equation>
<para>Where <emphasis>z</emphasis> is a vector with the parameters of the design variables, e.g. mean values for dimensions and material strengths. The principle is sketched in <link linkend="F11">Figure <xref linkend="F11" remap="11"/></link>:</para>
<fig id="F11" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 11</label>
<caption><para>Principal sketch of structural risk optimization</para></caption>
<graphic xlink:href="graphics/fig11.jpg"/>
</fig>
<para>The black curve in <link linkend="F11">Figure <xref linkend="F11" remap="11"/></link> has two contributions. The first is the initial design cost, i.e. the cost of materials, of designing the structure and of construction. The second is the expected failure cost, given by the probability of failure over the service life and the consequence of failure. If the structure has no life-safety relevance, e.g. an offshore wind turbine, the design can be purely economically driven, with no non-linear constraints.</para>
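<para>The shape of this trade-off can be sketched by sweeping a single design parameter <emphasis>z</emphasis>; the cost and reliability functions below are hypothetical stand-ins, chosen only so that initial cost rises and failure probability falls with <emphasis>z</emphasis>:</para>

```python
import math

def expected_total_cost(z, failure_cost=1.0e6):
    """Toy version of eq. (7): E[C] = C_init(z) + P_f(z) * C_f."""
    c_init = 1.0e4 * z           # hypothetical initial-cost model
    pf = math.exp(-2.0 * z)      # hypothetical failure-probability model
    return c_init + pf * failure_cost

def optimal_design(z_grid):
    """Grid search for the design parameter minimizing expected cost."""
    return min(z_grid, key=expected_total_cost)
```

<para>With these stand-ins the optimum balances the two contributions near <emphasis>z</emphasis> &#x2248; 2.6, reproducing in principle the minimum of the black curve.</para>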
</section>
<section class="lev1" id="sec2.3" label="2.3" xreflabel="sec2.3">
<title>Fatigue in steel structures</title>
<para>Metal fatigue has been the known cause of many structural failures ever since it was first recognized in conveyor chains by Albert in 1838. It has gained considerable attention following several catastrophic failures attributed to fatigue. Among the most notable are the Versailles train crash (1842), the Liberty ships (1943), the Comet airplanes (1954) and the Alexander L. Kielland offshore platform (1980), shown in <link linkend="F12">Figure <xref linkend="F12" remap="12"/></link>.</para>
<fig id="F12" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 12</label>
<caption><para>Severed leg of the Alexander L. Kielland platform. The protruding brace is where the fatigue failure occurred. According to N&#x00E6;sheim et al. [<link linkend="B65">65</link>], the crack had grown to more than 60% circumference before fracture occurred during a gale-force blow. Photo: Norwegian Petroleum Museum</para></caption>
<graphic xlink:href="graphics/fig12.jpg"/>
</fig>
<para>Fatigue failure in structural steels is a brittle failure type that often occurs with little or no premonition for the operators of the structure. As fatigue is associated with multiple large uncertainties, the safety factors must necessarily be large to counter the large consequences and, in offshore engineering, a &#x201C;design&#x201D; fatigue life of 10 times the planned service life is not uncommon for critical components of jackets and topsides. Fatigue is the primary design driver for many structural components of wind turbines, including blades, tower, foundation and hub. According to Lassen &#x0026; Recho [<link linkend="B54">54</link>], fatigue initiates from microscopic defects in locations with large stress concentrations. Both these factors are present in welds, which is why fatigue failures often occur in welds. The environment influences the fatigue life, and the presence of corrosion reduces it.</para>
<section class="lev2" id="sec2.3.1" label="2.3.1" xreflabel="sec2.3.1">
<title>Design model: stress life (SN) model</title>
<para>Lassen &#x0026; Recho [<link linkend="B54">54</link>] provide a thorough overview of the SN approach; it is only briefly introduced in the following. Fatigue loading is categorized into three regimes: low-cycle (&#x003C;10<superscript>4</superscript> cycles), high-cycle (&#x003E;10<superscript>4</superscript> cycles) and ultra-high-cycle (&#x003E;10<superscript>8</superscript> cycles). The low-cycle regime, which can be observed by repeatedly bending a paper clip until it breaks, is dominated by nonlinear behavior and large plastic strains. High-cycle loading is the normal design regime of the structural codes, and several empirical <emphasis>stress-life</emphasis> models have achieved inclusion in the international standards. They are based on the log-linear relationship by Basquin [<link linkend="B66">66</link>]:</para>
<equation id="8"><graphic xlink:href="graphics/eq8.jpg"/></equation>
<para>Where <emphasis>N</emphasis> is the number of cycles to failure, <emphasis>&#x0394;&#x03C3;</emphasis> is the stress range and <emphasis>K</emphasis> and <emphasis>m</emphasis> are empirical constants. Most experiments have been performed with less than 10<superscript>7</superscript> cycles. Structural components of wind turbines endure in the region of 10<superscript>9</superscript> cycles and thus fall into the latter category, where knowledge of fatigue behavior is very limited. Some design codes are extrapolated up into the ultra-high-cycle region, while others implement an endurance limit rather than extrapolate into the regime. The model describes the fatigue damage as a function of stress range alone. Corrections have been suggested to account for the influence of a non-zero mean stress, but these are not discussed here. The values of the constants <emphasis>m</emphasis> and <emphasis>K</emphasis> depend on the type of stress-calculation concept used, on the type of alloy, on the surface treatment and on the environmental conditions of the detail, and may be given as 5% fractiles for deterministic analysis. For variable amplitude stresses, the linear damage accumulation hypothesis by Palmgren-Miner [<link linkend="B67">67</link>] is used to calculate a damage sum <emphasis>D:</emphasis></para>
<equation id="9"><graphic xlink:href="graphics/eq9.jpg"/></equation>
<para>Where <emphasis>n<subscript>i</subscript></emphasis> is the number of cycles and <emphasis>N<subscript>i</subscript></emphasis> is the number of cycles to failure for stress range <emphasis>i</emphasis>. The Palmgren-Miner model assumes that the order of the load effects has no influence on the fatigue life. Some SN models, like the one used in papers I, III and VI, are bi-linear. The limit state function is:</para>
<equation id="10"><graphic xlink:href="graphics/eq10.jpg"/></equation>
<para>Where <emphasis>&#x0394;</emphasis> is the damage sum at failure. Furthermore, some models have an endurance limit, i.e. a stress level <emphasis>&#x0394;&#x03C3;<subscript>cut-off</subscript></emphasis> below which stress ranges do not contribute to the fatigue damage. <link linkend="F13">Figure <xref linkend="F13" remap="13"/></link> shows a bilinear SN curve with an endurance limit.</para>
<fig id="F13" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 13</label>
<caption><para>Bilinear SN model with endurance limit for constant amplitude loading</para></caption>
<graphic xlink:href="graphics/fig13.jpg"/>
</fig>
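<para>Eqs. (8) and (9) combine directly into a damage-sum calculation. The sketch below uses hypothetical curve constants <emphasis>K</emphasis> and <emphasis>m</emphasis> (real values depend on the detail category and stress concept) and a single-slope curve with an optional endurance limit:</para>

```python
def cycles_to_failure(stress_range, K, m, cutoff=0.0):
    """Basquin, eq. (8): N = K * stress_range**(-m).
    Ranges at or below the endurance limit never fail (N = infinity)."""
    if stress_range <= cutoff:
        return float("inf")
    return K * stress_range ** (-m)

def miner_damage(bins, K=1.0e12, m=3.0, cutoff=0.0):
    """Palmgren-Miner sum, eq. (9): D = sum(n_i / N_i).
    bins: list of (stress_range, cycle_count) pairs."""
    return sum(n / cycles_to_failure(s, K, m, cutoff) for s, n in bins)
```

<para>Failure is predicted when the accumulated sum reaches the damage limit &#x0394; of eq. (10), taken as 1 in deterministic design.</para>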
<para>The uncertainties in the SN model approach are both of the epistemic (model and measurement) type and of the aleatory (inherent) type. According to Straub [<link linkend="B68">68</link>], they can be categorized into load, model and resistance uncertainties. The uncertainties may be represented by random variables; however, due to the nature of the underlying experiments, each variable cannot be isolated for statistical analysis and some engineering judgment must be applied. The uncertainty model used in papers I, III and VI is provided in paper III.</para>
</section>
<section class="lev2" id="sec2.3.2" label="2.3.2" xreflabel="sec2.3.2">
<title>Measurable model: Paris law</title>
<para>The physical manifestation of metal fatigue is a crack. The Miner damage <emphasis>&#x0394;</emphasis> is not an observable variable, and no experiment can obtain information about its state. Without a measurable damage variable, inspections and health monitoring have no benefit, so we look to Linear Elastic Fracture Mechanics (LEFM) to model the physical crack growth.</para>
<para>It is commonly agreed that there are three stages of fatigue crack growth: initiation, propagation and fracture. For damage detection purposes, we consider failure to occur when the growth enters stage III. This is a common approach in fatigue reliability analysis, as discussed by Straub [<link linkend="B68">68</link>]. Although mathematical theory has been developed for modelling crack initiation, this stage is typically represented by random variables: the cycles to initiation <emphasis>N<subscript>init</subscript></emphasis> and/or the initial crack size <emphasis>a<subscript>init</subscript></emphasis>. The propagation stage is modelled using Paris law: by observing a log-log linear relationship between the crack growth rate d<emphasis>a</emphasis>/d<emphasis>n</emphasis> and the stress intensity range &#x0394;<emphasis>K</emphasis>, Paris &#x0026; Erdogan [<link linkend="B69">69</link>] defined the classic form of Paris law:</para>
<equation id="11"><graphic xlink:href="graphics/eq11.jpg"/></equation>
<para>Like the Basquin equation, it is purely empirical, with <emphasis>C</emphasis> and <emphasis>m</emphasis> as empirically fitted constants. Paris law is known to overestimate growth because sequence effects are neglected and because all stress ranges contribute to the growth. An endurance limit can be obtained by setting a threshold &#x0394;<emphasis>K<subscript>th</subscript></emphasis> below which stress intensity ranges do not contribute to the growth. &#x0394;<emphasis>K</emphasis> is given by:</para>
<equation id="12"><graphic xlink:href="graphics/eq12.jpg"/></equation>
<para>Where Y is the geometry function, <emphasis>a</emphasis> is the crack half-length and &#x0394;<emphasis>&#x03C3;</emphasis> is the constant amplitude stress range in the corresponding uncracked geometry. In the case of variable amplitude or random loading, an equivalent stress range <emphasis>S</emphasis> is calculated as:</para>
<equation id="13"><graphic xlink:href="graphics/eq13.jpg"/></equation>
<para>Where <emphasis>&#x0394;&#x03C3;<subscript>i</subscript></emphasis> are the rainflow-counted stress-range bins from a cycle count matrix and <emphasis>n<subscript>i</subscript></emphasis> is the cycle count corresponding to stress-range bin <emphasis>i</emphasis>.</para>
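<para>Assuming the standard <emphasis>m</emphasis>-th-moment form of the equivalent stress range (the authoritative expression is eq. (13) above), the computation over the rainflow bins can be sketched as:</para>

```python
def equivalent_stress_range(bins, m=3.0):
    """Equivalent constant-amplitude stress range for random loading,
    assuming S = (sum(n_i * ds_i**m) / sum(n_i)) ** (1/m).
    bins: (stress_range, cycle_count) pairs from rainflow counting."""
    total_cycles = sum(n for _, n in bins)
    weighted = sum(n * s ** m for s, n in bins)
    return (weighted / total_cycles) ** (1.0 / m)
```

<para>For constant-amplitude loading the equivalent range reduces to the single applied range, as expected.</para>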
<para>The uncertainty model for the LEFM model is similar to that of the SN model. The parameters <emphasis>C</emphasis> and <emphasis>m</emphasis> are random, with distributions based on empirical fits to experimental data. Lassen [<link linkend="B70">70</link>] suggested the linear relationship ln(<emphasis>C</emphasis>) = -15.84-3.34<emphasis>m</emphasis>, which has been used in this thesis. The geometry function depends on the boundary conditions of the considered geometry and can be interpolated from FE model results. Direct integration of (11) is possible only if Y is constant. However, in most practical cases, Y(<emphasis>a</emphasis>) is determined empirically from an FE model and numerical methods are required. This is the case for the use of LEFM for the wind turbine tower in papers I, III and VI. The procedure used is an incremental numerical simulation, applying a constant value of &#x0394;<emphasis>K</emphasis> for each crack growth increment &#x0394;<emphasis>a</emphasis>.</para>
<equation id="14"><graphic xlink:href="graphics/eq14.jpg"/></equation>
<para>This approach decreases the computational effort but introduces a bias on the calculated number of cycles to failure, as the value of the geometry function is underestimated. The bias is quantified by performing a sensitivity analysis of the crack increment&#x2019;s influence on the calculated number of cycles to failure. An example of a numeric crack growth curve using the method is shown in <link linkend="F14">Figure <xref linkend="F14" remap="14"/></link>.</para>
<fig id="F14" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 14</label>
<caption><para>Crack growth curves for six different crack step increments in the numeric calculation</para></caption>
<graphic xlink:href="graphics/fig14.jpg"/>
</fig>
<para>A trade-off between reasonable precision and computational efficiency is found for a crack increment of 1%, which has been used in this thesis.</para>
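<para>The incremental procedure of eq. (14) can be sketched as below, with &#x0394;<emphasis>K</emphasis> held constant over each crack increment. A constant geometry function <emphasis>Y</emphasis> and the form &#x0394;<emphasis>K</emphasis> = <emphasis>YS</emphasis>&#x221A;(&#x03C0;<emphasis>a</emphasis>) are simplifying assumptions made here for illustration (in the thesis, <emphasis>Y</emphasis>(<emphasis>a</emphasis>) is interpolated from FE results):</para>

```python
import math

def cycles_to_critical(a_init, a_crit, C, m, stress_range,
                       Y=1.0, rel_step=0.01):
    """Integrate Paris law da/dN = C * dK**m incrementally.
    dK is evaluated at the start of each crack increment da = rel_step * a,
    which biases the computed cycle count upwards for coarse steps."""
    a, cycles = a_init, 0.0
    while a < a_crit:
        da = rel_step * a
        dK = Y * stress_range * math.sqrt(math.pi * a)
        cycles += da / (C * dK ** m)
        a += da
    return cycles
```

<para>Re-running with several crack step values reproduces the sensitivity study of <link linkend="F14">Figure <xref linkend="F14" remap="14"/></link>: coarser increments give systematically longer computed lives.</para>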
<para>The LEFM model is calibrated to the SN design model in <emphasis>P<subscript>f</subscript></emphasis>-space, i.e. to be reliability equivalent, following the procedure described e.g. by Straub [<link linkend="B68">68</link>]:</para>
<equation id="15"><graphic xlink:href="graphics/eq15.jpg"/></equation>
<para>Where the service life has been discretized into <emphasis>t</emphasis> = <emphasis>t<subscript>1</subscript>,t<subscript>2</subscript>,&#x2026;,t<subscript>N</subscript></emphasis>, and <emphasis>x</emphasis> are the fitted parameters: <emphasis>C</emphasis> and the initial crack size <emphasis>a<subscript>init</subscript></emphasis>. The calibration can be performed in <emphasis>&#x03B2;</emphasis>-space, but as my focus is on expectations of cost, I target the failure probability. For the optimization of eq. (15), the reliability of the SN design model was calculated with FORM, but due to convergence problems, the LEFM reliability was approximated with MCS. The sampling output causes a noisy objective function and makes the optimization time-consuming. To accommodate the noisy output, the optimization was performed with the Genetic Algorithm in Matlab. The calibrated model is valid only for one stress level, and each combination of loads and geometry requires its own calibration. Due to these high computational costs, the parameter space of tower wall thickness was truncated to three values in paper I.</para>
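<para>In essence, eq. (15) is a least-squares fit of the LEFM parameters so that the two failure-probability histories coincide. A toy sketch with one free parameter and analytic stand-in curves (the thesis instead uses FORM/MCS output and the Genetic Algorithm in Matlab):</para>

```python
import math

def calibrate(target_pf, model_pf, param_grid, times):
    """Grid-search the free parameter minimizing the squared difference
    between two failure-probability histories, in the spirit of eq. (15)."""
    def sse(x):
        return sum((target_pf(t) - model_pf(t, x)) ** 2 for t in times)
    return min(param_grid, key=sse)

# Stand-in P_f(t) shapes, purely illustrative (not the thesis models):
design_pf = lambda t: 1.0 - math.exp(-0.05 * t)     # "SN design model"
lefm_pf = lambda t, c: 1.0 - math.exp(-c * t)       # "LEFM model"
c_fit = calibrate(design_pf, lefm_pf,
                  [i * 0.001 for i in range(1, 101)], range(1, 21))
```

<para>With a noisy sampled objective, as in the thesis, a gradient-free method such as a genetic algorithm replaces this grid search.</para>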
</section>
</section>
<section class="lev1" id="sec2.4" label="2.4" xreflabel="sec2.4">
<title>Fatigue in concrete structures</title>
<para>Concrete fatigue gained considerable interest with the design and construction of the first Norwegian gravity based Condeep offshore platforms in the 1970s, see Holmen [<link linkend="B71">71</link>], but, although many codes (CEB-FIB [<link linkend="B72">72</link>], FIB [<link linkend="B73">73</link>], EC [<link linkend="B74">74</link>], DNV [<link linkend="B75">75</link>]) incorporate rules for concrete fatigue design, few if any failures have been observed. In a report from 2009 [<link linkend="B76">76</link>], 27 Condeep structures were investigated, but no cases of concrete fatigue were diagnosed. Concrete fatigue has been under investigation as a cause of excessive creep in several long-span box-girder bridges, including the collapsed Palau bridge, see Bazant &#x0026; Hubler [<link linkend="B77">77</link>], and recent full-scale tests in northern Germany attempt to provoke fatigue in a gravity base foundation, see Urban et al. [<link linkend="B78">78</link>]. RILEM Committee 36 [<link linkend="B79">79</link>] discusses fatigue as becoming increasingly relevant for future structures. There is no current RBI approach for concrete fatigue, and structures are designed for &#x201C;safe life&#x201D;, i.e. the reliability must be sufficient for the whole service life, without inclusion of inspection data.</para>
<para>Fatigue in reinforced concrete can mean fatigue of the concrete, considered unreinforced, or fatigue of the reinforcement. I consider only fatigue of the concrete in this thesis, as preliminary studies indicated that reinforcement fatigue would only be relevant if large tensile strains occur and that the concrete fatigue capacity would then already be exhausted.</para>
<section class="lev2" id="sec2.4.1" label="2.4.1" xreflabel="sec2.4.1">
<title>Design model</title>
<para>Fatigue design of concrete structures is, although there are some important differences from the design of steel structures, based on W&#x00F6;hler-type SN curves and Miner-sum linear damage accumulation. For metals, Basquin&#x2019;s equation describes the cycles to failure for a given stress range, but for concrete, the fatigue life is based on two parameters: the minimum and maximum relative stress, <emphasis>S<subscript>min</subscript></emphasis> and <emphasis>S<subscript>max</subscript></emphasis>. The ratio of minimum to maximum stress is called the <emphasis>R</emphasis>-ratio. Due to the different tensile and compressive properties of concrete, there are three regimes:</para>
<orderedlist numeration="arabic" continuation="restarts" spacing="normal">
<listitem><para>Pure compression (0 &#x2264; <emphasis>R</emphasis> &#x003C; 1)</para></listitem>
<listitem><para>Pure tension (0 &#x2264; <emphasis>R</emphasis> &#x003C; 1)</para></listitem>
<listitem><para>Alternating cycles (-1 &#x003C; <emphasis>R</emphasis> &#x003C; 0)</para></listitem></orderedlist>
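<para>A small helper classifying a stress cycle into these regimes from the signs of its extreme stresses (the sign convention, compression negative and tension positive, is an assumption made here for illustration):</para>

```python
def stress_regime(s_min, s_max):
    """Classify a stress cycle; compression negative, tension positive."""
    if s_min >= 0.0 and s_max > 0.0:
        return "pure tension"        # like signs: 0 <= R < 1
    if s_min < 0.0 and s_max <= 0.0:
        return "pure compression"    # like signs: 0 <= R < 1
    return "alternating"             # opposite signs: -1 < R < 0
```

<para>In the like-sign regimes, <emphasis>R</emphasis> is taken as the ratio of the smaller to the larger stress magnitude, so it falls in [0, 1).</para>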
<para>Most tests have been performed under constant-amplitude loading, and it is tempting to use the Palmgren-Miner hypothesis, eq. (9), to account for variable amplitude and random loading. Unfortunately, as e.g. Holmen [<link linkend="B80">80</link>] showed, the loading history has an effect on the fatigue life. In the case of random loading, however, it seems reasonable to assume linear damage accumulation, as the load cycles are randomly ordered. The evolution of the SN models started in 1970, when Aas-Jacobsen [<link linkend="B81">81</link>] proposed the following W&#x00F6;hler curve for pure compression:</para>
<equation id="16"><graphic xlink:href="graphics/eq16.jpg"/></equation>
<para>Where the slope of the curve <emphasis>&#x03B2;</emphasis> is an empirical constant. Most current design codes are based on Aas-Jacobsen&#x2019;s relation for compressive cycles, although it has been modified several times. In the recent Model Code 2010 [<link linkend="B73">73</link>], a bi-linear model is used, developed from the model presented by Stemland et al. [<link linkend="B82">82</link>]. The latest modification was motivated by recent experimental results for ultra-high strength concrete. The Model Code 2010 [<link linkend="B73">73</link>] expression is shown next to Aas-Jacobsen&#x2019;s original expression in <link linkend="F15">Figure <xref linkend="F15" remap="15"/></link>:</para>
<fig id="F15" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 15</label>
<caption><para>W&#x00F6;hler curves for pure compression</para></caption>
<graphic xlink:href="graphics/fig15.jpg"/>
</fig>
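<para>Aas-Jacobsen&#x2019;s curve of eq. (16) is commonly cited in the form S<subscript>max</subscript> = 1 &#x2212; &#x03B2;(1 &#x2212; R) log<subscript>10</subscript>N; the sketch below assumes that form, and the &#x03B2; value used is only a placeholder, not the thesis&#x2019; fitted constant:</para>

```python
def aas_jacobsen_logN(S_max, R, beta=0.064):
    """log10 of cycles to failure from a Woehler curve of the form

        S_max = 1 - beta * (1 - R) * log10(N),

    solved for log10(N). beta is an empirical slope; 0.064 is only an
    illustrative placeholder value.
    """
    return (1.0 - S_max) / (beta * (1.0 - R))

# Pure compression cycle at S_max = 0.68, R = 0 (illustrative):
logN = aas_jacobsen_logN(0.68, 0.0)   # (1 - 0.68) / 0.064 = 5.0
N = 10.0 ** logN                      # 1e5 cycles
```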
<para>Although Tepfers [<link linkend="B83">83</link>] had stated that (16) was valid for the tensile region, Cornelissen [<link linkend="B84">84</link>] proposed the following expression:</para>
<equation id="17"><graphic xlink:href="graphics/eq17.jpg"/></equation>
<para>And for alternating cycles:</para>
<equation id="18"><graphic xlink:href="graphics/eq18.jpg"/></equation>
<para>Cornelissen&#x2019;s expressions are shown in <link linkend="F16">Figure <xref linkend="F16" remap="16"/></link>.</para>
<fig id="F16" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 16</label>
<caption><para>Cornelissen&#x2019;s W&#x00F6;hler curves for tension and alternating stress for concentric specimens</para></caption>
<graphic xlink:href="graphics/fig16.jpg"/>
</fig>
<para>Cornelissen furthermore showed that the type of test, more precisely defined by the stress distribution in the specimen, had a large impact on the number of cycles to failure. This influences the choice of experimental data to include in the same model and, in turn, makes the choice of model application-specific.</para>
</section>
<section class="lev2" id="sec2.4.2" label="2.4.2" xreflabel="sec2.4.2">
<title>Choice of a probabilistic design model for random loading</title>
<para>The need for a probabilistic model was promoted by e.g. Oh [<link linkend="B85">85</link>] and McCall [<link linkend="B86">86</link>]. Among the motivating reasons are the following:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The large material uncertainties of concrete create a very large scatter in the fatigue strength, and the Palmgren-Miner hypothesis is erroneous for variable-amplitude loading, e.g. Holmen [<link linkend="B80">80</link>].</para></listitem>
<listitem><para>When tests that use different material properties are gathered under the same model, the scale effects are very large, e.g. Bazant &#x0026; Xu [<link linkend="B87">87</link>].</para></listitem>
<listitem><para>Very few tests have been carried out in the alternating regime, which is relevant for partially post-tensioned concrete structures.</para></listitem></itemizedlist>
<para>The probabilistic model in paper II for random loading is a combination of the MC2010 [<link linkend="B73">73</link>] model, which is the most recent model covering the compressive regime, and Cornelissen&#x2019;s model for concentric specimens, which is the most recent model covering the tensile and alternating regimes. As both are deterministic models containing parameters fitted to experimental data, subjective model uncertainties were added. The uncertainty model used in paper II is shown in <link linkend="T3">Table <xref linkend="T3" remap="3"/></link>:</para>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><para>Uncertainty model used for the concrete NREL wind turbine tower</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="all" border="1">
<thead>
<tr>
<th>Variable</th>
<th>Type</th>
<th>Mean</th>
<th>Standard deviation</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top"><emphasis>&#x03BC;<subscript>&#x03C3;</subscript></emphasis>, <emphasis>&#x0394;<subscript>&#x03C3;</subscript>, n</emphasis></td>
<td valign="top">D</td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top">Rainflow counted load cycle matrix</td>
</tr>
<tr>
<td valign="top"><emphasis>f<subscript>c</subscript></emphasis><?lb?><emphasis>f<subscript>ct</subscript></emphasis></td>
<td valign="top">LN</td>
<td valign="top"><emphasis>f<subscript>cm</subscript></emphasis><?lb?>0.3<emphasis>f<subscript>cm</subscript><superscript>(2/3)</superscript></emphasis></td>
<td valign="top">0.15<emphasis>f<subscript>cm</subscript></emphasis>, <emphasis>&#x03C1;</emphasis> = 0.9<?lb?>0.3<emphasis>f<subscript>ctm</subscript></emphasis></td>
<td valign="top">From JCSS [<link linkend="B31">31</link>]</td>
</tr>
<tr>
<td valign="top"><emphasis>F<subscript>p</subscript></emphasis></td>
<td valign="top">N</td>
<td valign="top"><emphasis>F<subscript>pm</subscript></emphasis></td>
<td valign="top">0.05<emphasis>F<subscript>pm</subscript></emphasis></td>
<td valign="top">Post-tensioning force</td>
</tr>
<tr>
<td valign="top"><emphasis>X<subscript>Nc</subscript></emphasis></td>
<td valign="top">LN</td>
<td valign="top">1</td>
<td valign="top">0.016</td>
<td valign="top">Subjective model uncertainty: compressive</td>
</tr>
<tr>
<td valign="top"><emphasis>X<subscript>Nt</subscript></emphasis></td>
<td valign="top">LN</td>
<td valign="top">1</td>
<td valign="top">0.037</td>
<td valign="top">Subjective model uncertainty: tensile</td>
</tr>
<tr>
<td valign="top"><emphasis>&#x0394;</emphasis></td>
<td valign="top">LN</td>
<td valign="top">1</td>
<td valign="top">0.3</td>
<td valign="top">Model uncertainty on Palmgren-Miner. From Holmen [<link linkend="B80">80</link>]</td>
</tr>
<tr>
<td valign="top"><emphasis>X<subscript>S</subscript></emphasis></td>
<td valign="top">LN</td>
<td valign="top">1</td>
<td valign="top">0.132</td>
<td valign="top">Combined LN load uncertainty. From IEC61400-1 [<link linkend="B88">88</link>]</td>
</tr>
<tr>
<td valign="top"><emphasis>X<subscript>aero</subscript></emphasis></td>
<td valign="top">G</td>
<td valign="top">1</td>
<td valign="top">0.1</td>
<td valign="top">Load uncertainty. IEC61400-1 [<link linkend="B88">88</link>]</td>
</tr>
</tbody>
</table>
</table-wrap>
<para>The model uncertainties for the loads have been included in the table. The expected value of the cycles to failure E[<emphasis>N<subscript>f</subscript></emphasis>] forms the surface that is plotted in <link linkend="F17">Figure <xref linkend="F17" remap="17"/></link>.</para>
<fig id="F17" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 17</label>
<caption><para>Fatigue model. Expected cycles to failure as a function of mean-stress <emphasis>&#x03BC;<subscript>&#x03C3;</subscript>/f<subscript>cm</subscript></emphasis> and stress-range <emphasis>&#x0394;<subscript>&#x03C3;</subscript></emphasis>/<emphasis>f<subscript>cm</subscript></emphasis></para></caption>
<graphic xlink:href="graphics/fig17.jpg"/>
</fig>
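<para>To indicate how the lognormal uncertainty variables of Table 3 enter a sampling-based reliability evaluation, the following is a crude Monte Carlo sketch. The limit state and the fixed damage sum are placeholders, not the model of paper II; only the standard deviations of &#x0394; and X<subscript>S</subscript> are taken from the table:</para>

```python
import math
import random

def lognormal(mean, std, rng):
    """Sample a lognormal variable parameterized by its mean and standard
    deviation (converted to the underlying normal parameters)."""
    cv2 = (std / mean) ** 2
    sigma = math.sqrt(math.log(1.0 + cv2))
    mu = math.log(mean) - 0.5 * sigma ** 2
    return math.exp(rng.gauss(mu, sigma))

def pf_estimate(n_samples=100_000, seed=1):
    """Monte Carlo estimate of Pr[Delta - D * X_S < 0] for a fixed,
    illustrative deterministic damage sum D = 0.4."""
    rng = random.Random(seed)
    D = 0.4
    fails = 0
    for _ in range(n_samples):
        delta = lognormal(1.0, 0.3, rng)   # Palmgren-Miner model uncertainty
        x_s = lognormal(1.0, 0.132, rng)   # combined load uncertainty
        if delta - D * x_s < 0.0:
            fails += 1
    return fails / n_samples
```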
<para>The uncertainty models, especially concerning the correlations of <emphasis>f<subscript>c</subscript></emphasis>, <emphasis>f<subscript>ct</subscript></emphasis>, <emphasis>X<subscript>Nt</subscript></emphasis> and <emphasis>X<subscript>Nc</subscript></emphasis>, remain a topic for future research. As the same concentric specimens have not, for obvious reasons, been tested both statically and dynamically, additional sources of uncertainty cannot yet be identified or quantified. In a probabilistic analysis, prior reasoning as well as some conservatism must be applied when setting the model uncertainties. The probability of failure at the tower foot is shown in <link linkend="F18">Figure <xref linkend="F18" remap="18"/></link>, left.</para>
<fig id="F18" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 18</label>
<caption><para>Left: The probability of failure for the NREL tower with the contribution from the two design points (from paper II). Right: Sobol global sensitivity indices with 95% confidence bounds</para></caption>
<graphic xlink:href="graphics/fig18.jpg"/>
</fig>
<para>The probability of failure is seen to differ in shape from that of the steel tower. The shape corresponds to results by Petryna &#x0026; Kr&#x00E4;tzig [<link linkend="B89">89</link>]. It can be observed that a substantial contribution to the expected failure costs falls within the first 5% of the service life. This is due to the dominating load uncertainties in the probabilistic analysis, as can be seen from the result of a global sensitivity analysis in <link linkend="F18">Figure <xref linkend="F18" remap="18"/></link>, right. The global sensitivity analysis is, unlike the FORM sensitivity analysis (given in paper II), not based on the design point but on the full parameter space of basic variables. According to Sobol [<link linkend="B90">90</link>], it is suitable for evaluating the sensitivity of the model, rather than the sensitivity at a specific solution.</para>
</section>
<section class="lev2" id="sec2.4.3" label="2.4.3" xreflabel="sec2.4.3">
<title>Observable model</title>
<para>In <link linkend="sec2.3.2">chapter <xref linkend="sec2.3.2" remap="2.3.2"/></link>, Paris law was introduced to model a propagating fatigue crack in steel, but Paris law cannot be directly applied to model fatigue damage in concrete, as the mechanism is not the same. Where steel fatigue is characterized by the growth of a single macro-scale crack, the deterioration of concrete is driven by mechanisms at the micro-scale over a larger process zone, Bazant &#x0026; Hubler [<link linkend="B77">77</link>]. At all length scales, the structure of concrete is disordered, and defects such as micro-cracks and macro-cracks are present. Under reversed or tensile loading, the cracks interact and join, causing accelerated deterioration, Cornelissen &#x0026; Reinhardt [<link linkend="B91">91</link>]. From the distribution of cycles to failure for the concrete tower shown in <link linkend="F18">Figure <xref linkend="F18" remap="18"/></link>, it can be deduced that while the first fraction of the service life is dominated by tensile fatigue, the main part is dominated by compressive fatigue. This means that the feature model must represent the deterioration due to compressive fatigue.</para>
<para>Holmen [<link linkend="B80">80</link>] showed that the secant modulus <emphasis>E<subscript>s</subscript></emphasis> of the concrete decreases during compressive fatigue loading and that the size of the reduction increases with a reduced minimum stress level <emphasis>S<subscript>min</subscript></emphasis>. Holmen used high stress levels, not likely to be found in civil structures (<emphasis>S<subscript>max</subscript></emphasis> &#x003E; 0.675). The phenomenon that Holmen observed is known as cyclic creep, first observed by Feret in 1906 [<link linkend="B92">92</link>]. A mathematical model was recently developed by Bazant &#x0026; Hubler [<link linkend="B77">77</link>], using fracture mechanics on the microscopic crack growth, both in tension and compression. This model, however, is valid only at the micro-level and is generally too involved to model the propagation of damage at the macro level, which is required for SHM purposes. The deterioration causes an increase in load-produced strains that follows three stages: 1) an initial period of decreasing strain-increase rate, 2) a longer period of constant rate, and 3) a final period of increasing rate of strain increase, shown in <link linkend="F19">Figure <xref linkend="F19" remap="19"/></link>, left.</para>
<para>The secondary stage constitutes the larger part of the fatigue life and displays a correlation, albeit a weak one, with the deterioration state. Currently used NDE methods for concrete fatigue assessment are ultrasound and acoustic emission, as described in e.g. Urban et al. [<link linkend="B78">78</link>]. They target the level of deterioration through empirical correlations to the damage sum <emphasis>D</emphasis>, via Holmen&#x2019;s cyclic creep relation.</para>
<fig id="F19" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 19</label>
<caption><para>Left: Principle of cyclic creep, from Hordijk [<link linkend="B93">93</link>]. Right: Sketch of the development of the secant modulus with loading cycles. Mean value and coefficient of variation <emphasis>V</emphasis> (paper II)</para></caption>
<graphic xlink:href="graphics/fig19.jpg"/>
</fig>
<para>The uncertainty model shown in the figure above is for compressive cycles (<emphasis>S<subscript>min</subscript></emphasis> = 0.05, <emphasis>S<subscript>max</subscript></emphasis> = 0.675), based on research from 2006 at Ruhr University, presented by Breitenb&#x00FC;cher &#x0026; Ibuk [<link linkend="B94">94</link>]. Where Holmen in 1979 conducted his tests at <emphasis>S<subscript>max</subscript></emphasis> stress levels above 0.675, these newer tests use lower stress levels and are more representative of the stress conditions in civil structures. The uncertainty model from Breitenb&#x00FC;cher &#x0026; Ibuk was adapted for paper II, where a model was defined linking the deterioration state of the Palmgren-Miner hypothesis to the secant modulus of the concrete in the damaged region.</para>
</section>
</section>
<section class="lev1" id="sec2.5" label="2.5" xreflabel="sec2.5">
<title>Deterioration modelling using Bayesian Networks</title>
<para>So far, I have introduced structural reliability methods and Monte Carlo sampling for calculating the probability of failure (or any other event probability). I will now introduce a graphical approach to probabilistic calculation, called Bayesian Networks (BNs). BNs were used to model deterioration by e.g. Friis-Hansen [<link linkend="B95">95</link>], Straub [<link linkend="B96">96</link>] and Nielsen [<link linkend="B48">48</link>]. In the following, I will introduce the basic concepts of BNs, with a focus on deterioration modelling. The books by Kjaerulff &#x0026; Madsen [<link linkend="B97">97</link>] and by Jensen &#x0026; Nielsen [<link linkend="B98">98</link>] provide a complete background on BNs in the context of decision-making. The nodes in a BN can be discrete or continuous, but I consider only discrete BNs in the following, as the use of continuous variables poses many restrictions on the application of BNs.</para>
<section class="lev2" id="sec2.5.1" label="2.5.1" xreflabel="sec2.5.1">
<title>Bayesian networks</title>
<para>BNs are Directed Acyclic Graphs and cannot model cyclic dependencies, meaning that no paths <emphasis>from</emphasis> a node can lead back to the same node. A BN contains nodes, each representing a stochastic variable, and directed links, also called causal arcs, which model dependencies between the variables.</para>
<para>Ancestors, parents and children</para>
<para>A BN is a model of the joint probability distribution p(<emphasis>x</emphasis>) of a set of random variables <emphasis>x</emphasis> = {<emphasis>x<subscript>1</subscript></emphasis>, <emphasis>x<subscript>2</subscript>, &#x2026;, x<subscript>n</subscript></emphasis>}. The nodes with a link <emphasis>towards</emphasis> node <emphasis>x<subscript>i</subscript></emphasis> are the parents of <emphasis>x<subscript>i</subscript></emphasis>, denoted <emphasis>pa<subscript>i</subscript></emphasis>, and the nodes with links <emphasis>from</emphasis> node <emphasis>x<subscript>i</subscript></emphasis> are the children of <emphasis>x<subscript>i</subscript></emphasis>, denoted <emphasis>ch<subscript>i</subscript></emphasis>. The simplest BN contains two nodes and one link. When information about a variable&#x2019;s state is observed, the other variables are updated according to Bayes&#x2019; rule:</para>
<equation id="19"><graphic xlink:href="graphics/eq19.jpg"/></equation>
<para>where the numerator is the likelihood multiplied by the prior, and the denominator is the total probability of the evidence. Consider the following example of a fire alarm and a fire, as shown in <link linkend="F20">Figure <xref linkend="F20" remap="20"/></link>. Assume that both <emphasis>fire</emphasis> and <emphasis>alarm</emphasis> can take the values 1 or 0.</para>
<fig id="F20" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 20</label>
<caption><para>The causality is explained in this way: if you hear the alarm, it changes the probability of fire. Likewise, if you observe a fire, it changes the probability of the alarm sounding</para></caption>
<graphic xlink:href="graphics/fig20.jpg"/>
</fig>
<para>Here <emphasis>fire</emphasis> is a parent of <emphasis>alarm</emphasis>, and <emphasis>alarm</emphasis> is a child of <emphasis>fire</emphasis>. The node <emphasis>fire</emphasis> contains the discrete unconditional probability table P(<emphasis>fire</emphasis>), which has two entries. The node <emphasis>alarm</emphasis> contains the Conditional Probability Table (CPT) <emphasis>P(alarm|fire)</emphasis>, which has four entries. In this example, we have considered <emphasis>fire</emphasis> as an independent variable. Had we included a parent to <emphasis>fire,</emphasis> e.g. <emphasis>gas leak,</emphasis> it would become an ancestor of <emphasis>alarm</emphasis>. Due to the conditional independence relationships introduced by the links, a <emphasis>child</emphasis> is independent of its <emphasis>ancestors</emphasis> given its <emphasis>parents</emphasis>.</para>
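<para>The fire/alarm example can be computed directly; the probability values in the tables below are invented for illustration:</para>

```python
# Prior P(fire) and CPT P(alarm | fire) for the two-node network.
# All probability values are illustrative.
p_fire = {1: 0.01, 0: 0.99}
p_alarm_given_fire = {1: {1: 0.95, 0: 0.05},   # P(alarm | fire = 1)
                      0: {1: 0.02, 0: 0.98}}   # P(alarm | fire = 0)

def posterior_fire(alarm):
    """P(fire | alarm) by Bayes' rule: likelihood * prior / evidence."""
    joint = {f: p_alarm_given_fire[f][alarm] * p_fire[f] for f in (0, 1)}
    evidence = sum(joint.values())
    return {f: joint[f] / evidence for f in (0, 1)}

post = posterior_fire(alarm=1)
# Hearing the alarm raises the probability of fire from 1% to about 32%.
```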
<para>If the node <emphasis>x<subscript>i</subscript></emphasis> has no parents, it is unconditional and has the discrete probability distribution P(<emphasis>x<subscript>i</subscript></emphasis>). If the node <emphasis>x<subscript>i</subscript></emphasis> has parents <emphasis>pa</emphasis>(<emphasis>x<subscript>i</subscript></emphasis>), then it is conditional on the parents and has the probability distribution P(<emphasis>x<subscript>i</subscript></emphasis>&#x007C;<emphasis>pa</emphasis>(<emphasis>x<subscript>i</subscript></emphasis>)). This way, the joint probability distribution <emphasis>P</emphasis>(<emphasis>U</emphasis>) is represented by a set of conditional distributions for the individual variables. When all variables are discrete, the discrete joint probability density function is, according to the chain rule, given as the product of the conditional probability tables:</para>
<equation id="20"><graphic xlink:href="graphics/eq20.jpg"/></equation>
<para>Each variable has a finite number of states, e.g. a crack length represented by the variable <emphasis>a</emphasis> can be discretized into the possible states <emphasis>a</emphasis> = {1mm, 2mm, &#x2026;, 40mm}. If the state of <emphasis>a</emphasis> is observed, we say that evidence <emphasis>e<subscript>a</subscript></emphasis> is observed, causing <emphasis>a</emphasis> to be &#x2018;instantiated&#x2019;. This changes the joint probability density function to P(<emphasis>U,e<subscript>a</subscript></emphasis>), and the calculation of the posterior of any variable <emphasis>x</emphasis>, P(<emphasis>x|e<subscript>a</subscript></emphasis>), is called inference. This is elaborated further in the following.</para>
<para>Connection types</para>
<para>The links introduce some separation properties, which influence the propagation of evidence. The figure below shows the three connection types:</para>
<fig id="F21" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 21</label>
<caption><para>Connection types. From left to right: <emphasis>serial, converging</emphasis> and <emphasis>diverging</emphasis> connection</para></caption>
<graphic xlink:href="graphics/fig21.jpg"/>
</fig>
<itemizedlist mark="dash" spacing="normal">
<listitem><para>In the serial connection, if C is instantiated, the evidence propagates to both A and B; but if B is instantiated, C becomes independent of A.</para></listitem>
<listitem><para>In the converging connection, evidence on one parent (A, C) has no influence on the other, but if any evidence changes the certainty of the child (B), it makes the parents dependent.</para></listitem>
<listitem><para>In the diverging connection, instantiation of the parent (B) blocks communication between the children, and they become independent. Instantiation of a child influences both the parent and the other child.</para></listitem>
</itemizedlist>
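<para>The dependence created in a converging connection, often called &#x2018;explaining away&#x2019;, can be verified numerically. A small sketch with invented numbers, where the child B behaves roughly as a noisy OR of its two parents A and C:</para>

```python
p_a = {0: 0.9, 1: 0.1}
p_c = {0: 0.8, 1: 0.2}
# P(B = 1 | a, c): B is roughly a noisy OR of its parents (invented CPT).
p_b1 = {(0, 0): 0.01, (0, 1): 0.70, (1, 0): 0.70, (1, 1): 0.95}

def posterior_a(b_obs, c_obs=None):
    """P(A | B = b_obs [, C = c_obs]) by enumeration over the joint."""
    unnorm = {}
    for a in (0, 1):
        total = 0.0
        for c in (0, 1):
            if c_obs is not None and c != c_obs:
                continue
            pb = p_b1[(a, c)] if b_obs == 1 else 1.0 - p_b1[(a, c)]
            total += p_a[a] * p_c[c] * pb
        unnorm[a] = total
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

# Observing B = 1 raises P(A = 1); additionally observing C = 1
# 'explains away' the evidence and lowers P(A = 1) again.
```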
</section>
<section class="lev2" id="sec2.5.2" label="2.5.2" xreflabel="sec2.5.2">
<title>Inference in BNs</title>
<para>BNs have many advantages, as I will discuss in the following, but perhaps their most important strength lies in the inference algorithms. Inference is generally based on Bayes&#x2019; rule and marginalization of (summing over, if discrete) the irrelevant variables of the joint probability distribution, Jensen &#x0026; Nielsen [<link linkend="B98">98</link>]. Let <emphasis>e<subscript>i</subscript></emphasis> be a vector with observed evidence on node <emphasis>x<subscript>i</subscript></emphasis>; then:</para>
<equation id="21"><graphic xlink:href="graphics/eq21.jpg"/></equation>
<para>The posterior of any variable is then calculated by Bayes&#x2019; rule, the theorem of conditional probability, and marginalizing out all other variables:</para>
<equation id="22"><graphic xlink:href="graphics/eq22.jpg"/></equation>
<para>However, this quickly becomes intractable for large networks, and a conversion of the BN to an undirected tree structure can be advantageous.</para>
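<para>For small networks, eq. (21) and (22) can be implemented directly by enumerating the joint distribution and summing out the irrelevant variables. A brute-force sketch for a three-node serial network with invented CPTs (practical toolkits use junction trees instead):</para>

```python
# Serial network A -> B -> C with binary states (all numbers invented).
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}
p_c_given_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.25, 1: 0.75}}

def posterior_a_given_c(c_obs):
    """P(A | C = c_obs): build P(A, B, C = c_obs) from the chain-rule
    product of CPTs, marginalize out B, and normalize."""
    unnorm = {a: sum(p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c_obs]
                     for b in (0, 1))
              for a in (0, 1)}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}
```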
<para>Junction tree algorithm</para>
<para>The application of BNs in this thesis involves influence diagrams for decision-making and the efficient Single Policy Updating (SPU) algorithm. SPU is based on the junction tree algorithm, which in turn is based on variable elimination, where marginalization is made more efficient by eliminating one variable at a time. A junction tree is made in four steps: moralization, deletion, triangulation and connection. In the first step, a moral graph containing the nodes of the BN is made by adding undirected links between all parents with a common child. In the second step, the directions of all links are removed. In the third step, the nodes are organized into cliques, where each clique is a subset of the full variable domain U. In the fourth and final step, the cliques are connected, whereby the junction tree is formed. A more elaborate introduction to the theory is given in Jensen &#x0026; Nielsen [<link linkend="B98">98</link>], and various examples are provided by Friis-Hansen [<link linkend="B95">95</link>].</para>
</section>
<section class="lev2" id="sec2.5.3" label="2.5.3" xreflabel="sec2.5.3">
<title>Dynamic BNs</title>
<para>So far, we have looked at static, time-invariant networks. Moving on to the application of BNs to deterioration modelling, the Dynamic Bayesian Network (DBN) is introduced. A DBN is, although the name suggests something more sophisticated, a BN of inter- and intra-connected time slices. DBNs are also called temporal BNs. The simplest type of DBN is the type shown below, with two time slices:</para>
<fig id="F22" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 22</label>
<caption><para>Examples of two-slice DBNs. The right model is a Hidden Markov Model (HMM)</para></caption>
<graphic xlink:href="graphics/fig22.jpg"/>
</fig>
<para>As such, DBNs are not different from static BNs, in that they may be &#x201C;rolled out&#x201D; to static BNs. More efficient algorithms exist specifically for DBNs, of which Murphy [<link linkend="B99">99</link>] provides an overview. In the following, all DBNs are rolled out and transformed to junction trees. The model in <link linkend="F22">Figure <xref linkend="F22" remap="22"/></link> complies with the Markovian assumption (the future is independent of the past, given the present). A first-order Hidden Markov Model (HMM) consists of an initial state distribution P(<emphasis>B<subscript>0</subscript></emphasis>), a transition model P(<emphasis>B<subscript>t</subscript></emphasis>&#x007C;<emphasis>B<subscript>t-1</subscript></emphasis>) and an observation model P(<emphasis>O<subscript>t</subscript></emphasis>&#x007C;<emphasis>B<subscript>t</subscript></emphasis>). Letting <emphasis>Z</emphasis> be the variables in the same time slice, and <emphasis>N</emphasis> the number of variables, the transition and observation models in the DBN are then defined by the product of the Conditional Probability Distributions (CPDs):</para>
<equation id="23"><graphic xlink:href="graphics/eq23.jpg"/></equation>
<para>where the parents of node <emphasis>i</emphasis> may be in the same, or the previous, time slice. The full CPD of the DBN, unrolled to <emphasis>T</emphasis> time slices, is given by:</para>
<equation id="24"><graphic xlink:href="graphics/eq24.jpg"/></equation>
<para>wherein the CPDs of the initial time slice (<emphasis>t</emphasis>=1) represent the unconditional initiation model.</para>
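<para>The factorization into an initial distribution, a transition model and an observation model is what recursive HMM filtering exploits. A minimal forward-filter sketch for a binary deterioration state, with invented transition and observation probabilities:</para>

```python
def forward_filter(prior, transition, observation, obs_sequence):
    """Filtered state estimate P(B_t | O_1..O_t) for a discrete HMM.

    prior: P(B_0); transition[i][j] = P(B_t = j | B_t-1 = i);
    observation[j][o] = P(O_t = o | B_t = j).
    """
    belief = list(prior)
    n = len(belief)
    for o in obs_sequence:
        # Predict: push the belief through the transition model.
        pred = [sum(belief[i] * transition[i][j] for i in range(n))
                for j in range(n)]
        # Update: weight by the observation likelihood and normalize.
        upd = [pred[j] * observation[j][o] for j in range(n)]
        z = sum(upd)
        belief = [u / z for u in upd]
    return belief

# States: 0 = intact, 1 = damaged; observations: 0 = no alarm, 1 = alarm.
prior = [0.99, 0.01]
transition = [[0.98, 0.02], [0.0, 1.0]]       # damage is irreversible
observation = [[0.95, 0.05], [0.20, 0.80]]    # imperfect damage indicator
belief = forward_filter(prior, transition, observation, [0, 0, 1, 1])
```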
</section>
<section class="lev2" id="sec2.5.4" label="2.5.4" xreflabel="sec2.5.4">
<title>Modelling deterioration with DBNs</title>
<para>Using the above, a DBN can be constructed to model most kinds of deterioration processes. Friis-Hansen [<link linkend="B95">95</link>] uses a time-invariant load model, while Straub [<link linkend="B96">96</link>] also models a time-variant one. He suggests the generic model shown in <link linkend="F23">Figure <xref linkend="F23" remap="23"/></link>.</para>
<fig id="F23" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 23</label>
<caption><para>DBN deterioration model. <emphasis>x</emphasis> is an observable variable, e.g. the result of an inspection (from Straub [<link linkend="B96">96</link>])</para></caption>
<graphic xlink:href="graphics/fig23.jpg"/>
</fig>
<para>The network in <link linkend="F24">Figure <xref linkend="F24" remap="24"/></link> was used in paper VI to model Paris Law fatigue crack growth, as a time-invariant process.</para>
<fig id="F24" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 24</label>
<caption><para>Dynamic Bayesian Network (DBN) modelling Paris Law fatigue crack growth (<emphasis>a</emphasis>) and damage indicator (<emphasis>x)</emphasis> for the NREL tower</para></caption>
<graphic xlink:href="graphics/fig24.jpg"/>
</fig>
<para>In <link linkend="F24">Figure <xref linkend="F24" remap="24"/></link>, the binary indicator node <emphasis>I</emphasis> enables calculation of the failure probability &#x0394;<emphasis>P<subscript>f,i</subscript></emphasis> = Pr(<emphasis>I<subscript>i</subscript></emphasis>=1) w.r.t. the time between each time slice. As the sensitivity of <emphasis>P<subscript>f</subscript></emphasis> to the critical crack length <emphasis>a<subscript>c</subscript></emphasis> is very small, <emphasis>a<subscript>c</subscript></emphasis> is modelled as deterministic and is thus implicitly contained in the indicator node <emphasis>I,</emphasis> which represents the failure limit state <emphasis>g</emphasis> = <emphasis>a<subscript>c</subscript></emphasis> &#x2013; <emphasis>a</emphasis>. Alternatively, <emphasis>a<subscript>c</subscript></emphasis> could easily be modelled as a random variable by adding <emphasis>a<subscript>c</subscript></emphasis> as a single node and making a link from <emphasis>a<subscript>c</subscript></emphasis> to <emphasis>I<subscript>1</subscript></emphasis>.</para>
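<para>A discrete stand-in for this construction: let the crack length occupy a few states with a (here invented) one-slice transition matrix, and read &#x0394;P<subscript>f,i</subscript> off as the probability mass newly entering the states beyond the deterministic a<subscript>c</subscript>:</para>

```python
def failure_increments(p0, T, failed_states, n_slices):
    """Per-slice failure probability increments Delta P_f,i = Pr(I_i = 1)
    for a discrete-state crack-growth Markov chain.

    p0: initial state distribution; T[i][j]: one-slice transition
    probability; failed_states: indices with a >= a_c (made absorbing).
    """
    p = list(p0)
    n = len(p)
    increments = []
    for _ in range(n_slices):
        p = [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]
        pf_cum = sum(p[j] for j in failed_states)
        increments.append(pf_cum - sum(increments))
    return increments

# Three crack states: small, large, failed (a >= a_c); invented numbers.
p0 = [1.0, 0.0, 0.0]
T = [[0.90, 0.10, 0.00],
     [0.00, 0.85, 0.15],
     [0.00, 0.00, 1.00]]
d_pf = failure_increments(p0, T, failed_states=[2], n_slices=5)
```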
<section class="lev3">
<title>Combining variables</title>
<para>The variable space has the dimension <emphasis>d</emphasis> = &#x03A0;<emphasis>d<subscript>i</subscript></emphasis>, <emphasis>i</emphasis> = 1,2,..,<emphasis>N</emphasis>, i.e. the product of the dimensions of all variables. Instead of modelling each variable with a node, variables can be combined into new dependent variables. In the example in paper VI, <emphasis>d</emphasis> is reduced from <emphasis>d<subscript>Xs</subscript> d<subscript>Xaero</subscript> d</emphasis><emphasis><subscript>Xsif</subscript></emphasis> <emphasis>d<subscript>&#x0394;&#x03C3;</subscript> d<subscript>m</subscript></emphasis> to <emphasis>d<subscript>S</subscript> d<subscript>m</subscript></emphasis> by combining the load model uncertainties with the geometry function model uncertainty.</para>
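<para>The dimension bookkeeping is a simple product. With invented state counts for the five original variables, combining the load and geometry uncertainty nodes with the stress range into one node S shrinks the joint table as follows:</para>

```python
from math import prod

# Illustrative state counts per variable before combining:
dims = {"X_S": 10, "X_aero": 10, "X_sif": 10, "d_sigma": 20, "m": 5}
d_before = prod(dims.values())     # 10 * 10 * 10 * 20 * 5 = 100000

# After combining into one node S with (say) 30 states, only S and m remain:
d_after = 30 * dims["m"]           # 150
```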
</section>
<section class="lev3">
<title>Discretization of the variables</title>
<para>As the BN is a discrete representation of the variable space, all CPDs must be discretized into CPTs. The discretization is discussed by Friis-Hansen [<link linkend="B95">95</link>], who suggests using a linear transformation to a space with intervals of equal length, choosing the transformation so that the interval size is inversely proportional to the slope of the PDF of the variable. In the numerical examples in paper VI, the intervals for discretization <emphasis>L</emphasis> were chosen so that:</para>
<equation id="25"><graphic xlink:href="graphics/eq25.jpg"/></equation>
<para>That is, the mean-squared error on the predicted failure probability, compared to MCS, is minimized. <link linkend="F25">Figure <xref linkend="F25" remap="25"/></link> shows the discretization of the crack length for the second example in paper VI: the crack length is discretized into 42 discrete states using a logarithmic transformation, making the intervals of equal length when plotted on a logarithmic axis.</para>
<fig id="F25" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 25</label>
<caption><para>Left: Discretization of the Lognormal variable <emphasis>a<subscript>0</subscript></emphasis> for the NREL example in paper VI</para></caption>
<graphic xlink:href="graphics/fig25.jpg"/>
</fig>
<para>The probability distribution must be truncated at the tails. In the above example, the lower limit <emphasis>a<subscript>0,lb</subscript></emphasis> is set so that Pr(<emphasis>a<subscript>0</subscript></emphasis> &#x003C; <emphasis>a<subscript>0,lb</subscript></emphasis>) = 10<superscript>-6</superscript> and the upper limit <emphasis>a<subscript>0,ub</subscript></emphasis> &#x003E; <emphasis>a<subscript>c</subscript></emphasis>. The CPTs can be calculated directly if the probability distributions are known, as is the case for <emphasis>a<subscript>0</subscript></emphasis> in <link linkend="F25">Figure <xref linkend="F25" remap="25"/></link>. If the PDF is unknown, e.g. for combined variables, the CPTs can be approximated using MCS. In the example, P(<emphasis>a&#x007C;a<subscript>t-1</subscript>,m,S</emphasis>) was sampled using the numeric crack growth method described in section 2.3.2, with random sampling from within the boundaries of the parents&#x2019; states.</para>
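<para>Discretizing a known distribution into a CPT amounts to taking CDF differences over the interval boundaries. A sketch for a lognormal a<subscript>0</subscript>, with intervals of equal length on a logarithmic axis and the lower bound set near the 10<superscript>-6</superscript> quantile (the distribution parameters are invented):</para>

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def discretize_lognormal(mu, sigma, a_lb, a_ub, n_states):
    """Probability masses of a lognormal variable over n_states intervals
    of equal length on a log axis, renormalized after tail truncation."""
    log_lb, log_ub = math.log(a_lb), math.log(a_ub)
    edges = [math.exp(log_lb + k * (log_ub - log_lb) / n_states)
             for k in range(n_states + 1)]
    cdf = [norm_cdf((math.log(e) - mu) / sigma) for e in edges]
    total = cdf[-1] - cdf[0]
    return [(cdf[k + 1] - cdf[k]) / total for k in range(n_states)]

# Illustrative parameters of ln(a_0); lower bound near the 1e-6 quantile:
mu, sigma = math.log(0.1), 0.5
a_lb = math.exp(mu - 4.7534 * sigma)   # Phi^-1(1e-6) is approx. -4.7534
masses = discretize_lognormal(mu, sigma, a_lb, 10.0, 42)
```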
</section>
<section class="lev3">
<title>Observations</title>
<para>BNs have their strength in inference and updating using evidence from observations, i.e. instantiating observable nodes when evidence is observed. An observation is the event in which a variable is observed; the outcome of the observation is the instantiated variable. Measurement uncertainty on an observation is modelled through the layered structure of an HMM. In the HMM, the hidden layer models a variable that cannot be observed, e.g. the real crack length, and the observable layer models the observations made using an inspection or monitoring technology. The information from an observation may be of the inequality type (<emphasis>a</emphasis> &#x2265; <emphasis>a<subscript>d</subscript></emphasis>) or the equality type (<emphasis>a = a<subscript>d</subscript></emphasis>). Both types are easily modelled with a BN by controlling the number of states of the observable nodes. Modelling the observable layer will be discussed in the next chapter (SHM design), but a few BN models are introduced here.</para>
<para>The standard HMM assumption is that the observations are conditionally independent given the hidden state. An HMM can therefore not model linear filters, e.g. Kalman filters or moving averages. This becomes possible by relaxing the HMM assumption and letting the observations be conditional on the previous observations. Such a model is called an Auto-Regressive HMM (AR-HMM).</para>
<fig id="F26" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 26</label>
<caption><para>Hidden Markov Model and Autoregressive-Hidden Markov Model. Grey nodes are observable</para></caption>
<graphic xlink:href="graphics/fig26.jpg"/>
</fig>
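<para>A minimal numeric sketch of evidence updating in the HMM of Figure 26: the hidden layer is the (unobservable) structural state and the observable layer is a binary damage indicator. The transition and emission probabilities below are illustrative assumptions only, not values from the thesis.</para>

```python
import numpy as np

def hmm_filter(prior, transition, emission, observations):
    """Recursive forward filtering: P(hidden_t | obs_1..t).

    prior:        (n,) initial hidden-state probabilities
    transition:   (n, n) P(state_t | state_{t-1})
    emission:     (n, m) P(obs | state) -- the observable layer
    observations: sequence of observation indices
    """
    belief = np.asarray(prior, float)
    for y in observations:
        belief = transition.T @ belief   # predict one step ahead
        belief *= emission[:, y]         # weight by the evidence
        belief /= belief.sum()           # normalize
    return belief

# two hidden states: 0 = baseline, 1 = damaged (absorbing)
T = np.array([[0.99, 0.01],
              [0.00, 1.00]])
# observable damage indicator: 0 = no alarm, 1 = alarm
E = np.array([[0.95, 0.05],    # low false-alarm rate in the baseline state
              [0.20, 0.80]])   # imperfect detection when damaged
posterior = hmm_filter(np.array([1.0, 0.0]), T, E, [1, 1, 1])
```

<para>After three consecutive alarms the posterior probability of the damaged state dominates, even though a single alarm is inconclusive.</para>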
</section>
<section class="lev3">
<title>Input</title>
<para>Both types of models can be extended to include observable input. This type of model is a State Space Model. An example is shown in <link linkend="F27">Figure <xref linkend="F27" remap="27"/></link>.</para>
<fig id="F27" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 27</label>
<caption><para>A Markovian State Space Model. The input nodes <emphasis>u</emphasis> can be e.g. measurements of environmental conditions that affect the hidden model <emphasis>&#x03B8;</emphasis> and/or the observable output <emphasis>x</emphasis></para></caption>
<graphic xlink:href="graphics/fig27.jpg"/>
</fig>
<para>This type of model is very relevant for damage detection purposes, where the observable nodes model a damage indicator that is sensitive to both damage and environmental conditions, e.g. temperature, wind speed and relative humidity. The links from the input nodes <emphasis>u</emphasis> to the hidden nodes <emphasis>&#x03B8;</emphasis> would be less relevant in the case of damage detection.</para>
</section>
</section>
</section>
</chapter>
<chapter class="chapter" id="ch03" label="Chapter 3" xreflabel="ch03">
<title>SHM design</title>
<para>The detection of damage is engineering decision-making under uncertainty. In this chapter we will build up the framework that enables SHM based decision-making, following the figure below:</para>
<fig id="F28" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 28</label>
<caption><para>Framework for SHM design continued: feature model and decision model</para></caption>
<graphic xlink:href="graphics/fig28.jpg"/>
</fig>
<para>We approach the framework in reverse order: In the following <link linkend="sec3.1">section <xref linkend="sec3.1" remap="3.1"/></link> we set out with basic decision theory and statistical testing, and then move on to Bayesian decision theory. Then, in <link linkend="sec3.2">section <xref linkend="sec3.2" remap="3.2"/></link>, we create statistical models for the damage sensitive features.</para>
<section class="lev1" id="sec3.1" label="3.1" xreflabel="sec3.1">
<title>Decision model</title>
<para>The purpose of SHM is decision-making and the purpose of decision-making is cost reduction. The decision concept carries a notion of rationality, and the statistical decision analysis accredited to Raiffa &#x0026; Schlaifer [<link linkend="B44">44</link>] is indeed intended to provide a mathematical model of rational decision-making under uncertainty. Decision theory is often described as the &#x2018;marriage of probability theory and utility theory&#x2019;. Following this line, we aim at making decisions that minimize the expected costs. We start with the cost model:</para>
<section class="lev2" id="sec3.1.1" label="3.1.1" xreflabel="sec3.1.1">
<title>Cost model</title>
<para>We distinguish between <emphasis>terminal</emphasis> costs and <emphasis>experimental</emphasis> costs. The semantics are due to Bayesian experimental design, which is discussed later in the text, but I find it necessary to introduce the two concepts at this point:</para>
<orderedlist numeration="loweralpha" continuation="restarts" spacing="normal">
<listitem><para>Experimental costs <emphasis>C<superscript>exp</superscript></emphasis> are the costs of performing the experiment. In the case of SHM damage detection, the experimental costs are the SHM system costs, while in the case of inspections, the experimental costs are the costs of the inspection.</para></listitem>
<listitem><para>Terminal costs <emphasis>C<superscript>ter</superscript></emphasis> are tied to the outcome of an experiment. These outcomes are discrete events e.g. failure, inspection, repair, evacuation &#x2013; thus the outcome of one experiment might be the costs of performing another experiment.</para></listitem>
</orderedlist>
<section class="lev3">
<title>Terminal costs: damage detection</title>
<para>For the example of damage detection, we denote the damaged state <emphasis>&#x03B8;<subscript>d</subscript></emphasis> and the undamaged state <emphasis>&#x03B8;<subscript>b</subscript></emphasis>. A computer is fed information from sensors and outputs <emphasis>d<subscript>b</subscript></emphasis> if no damage is detected and <emphasis>d<subscript>d</subscript></emphasis> otherwise. Unfortunately, no sensor measures damage (Axiom IVa in Worden et al. [<link linkend="B25">25</link>]) and <emphasis>d<subscript>d</subscript></emphasis> can only ever be an <emphasis>indication of damage</emphasis>. If <emphasis>d<subscript>d</subscript></emphasis> is output, then the decision-maker cannot be sure that damage is present, so he orders a technician to inspect the structure, at the cost of <emphasis>C<subscript>ins</subscript></emphasis>. If damage is indeed present, the technician will locate it and have it repaired, at the additional cost of <emphasis>C<subscript>rep</subscript></emphasis>. If the computer does not indicate the damage, and damage is present, then the cost to the decision-maker will be <emphasis>C<subscript>dam</subscript></emphasis>, which, for practical reasons, can be equated with the cost of failure <emphasis>C<subscript>f</subscript></emphasis>. The cost function is shown as a matrix in <link linkend="T4">Table <xref linkend="T4" remap="4"/></link>.</para>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><para>Cost matrix for damage detection</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="all" border="1">
<thead>
<tr>
<td></td>
<th><emphasis>State = &#x03B8;<subscript>0</subscript></emphasis></th>
<th><emphasis>State = &#x03B8;<subscript>1</subscript></emphasis></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top"><emphasis>Decide</emphasis></td>
<td valign="top">True Negative</td>
<td valign="top">False Negative</td>
</tr>
<tr>
<td valign="top"><emphasis>d<subscript>0</subscript></emphasis></td>
<td valign="top"><emphasis>C<subscript>00</subscript></emphasis> = 0</td>
<td valign="top"><emphasis>C<subscript>01</subscript></emphasis> = <emphasis>C<subscript>dam</subscript></emphasis></td>
</tr>
<tr>
<td valign="top"><emphasis>Decide</emphasis></td>
<td valign="top">False Positive</td>
<td valign="top">True Positive</td>
</tr>
<tr>
<td valign="top"><emphasis>d<subscript>1</subscript></emphasis></td>
<td valign="top"><emphasis>C<subscript>10</subscript></emphasis> = <emphasis>C<subscript>ins</subscript></emphasis></td>
<td valign="top"><emphasis>C<subscript>11</subscript></emphasis> = <emphasis>C<subscript>rep</subscript></emphasis> + <emphasis>C<subscript>ins</subscript></emphasis></td>
</tr>
</tbody>
</table>
</table-wrap>
<para>The semantics are taken from detection theory. If the costs are replaced with probabilities, then the matrix is the confusion matrix. See e.g. Kay [<link linkend="B49">49</link>] for an elaborate background.</para>
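<para>A small sketch of how the cost matrix in Table 4 enters the decision: given a (posterior) probability of damage, the decision-maker compares the expected terminal costs of <emphasis>d<subscript>0</subscript></emphasis> (do nothing) and <emphasis>d<subscript>1</subscript></emphasis> (inspect). The numeric cost values in the usage are illustrative only.</para>

```python
def expected_costs(p_damage, c_dam, c_ins, c_rep):
    """Expected terminal cost of each decision, using the cost matrix
    of Table 4 and a (posterior) probability of damage."""
    cost_d0 = p_damage * c_dam                                  # C00 = 0, C01 = C_dam
    cost_d1 = (1 - p_damage) * c_ins + p_damage * (c_ins + c_rep)
    return cost_d0, cost_d1

def decide(p_damage, c_dam, c_ins, c_rep):
    """Pick the decision with the lower expected terminal cost."""
    c0, c1 = expected_costs(p_damage, c_dam, c_ins, c_rep)
    return "d1" if c1 < c0 else "d0"   # inspect only when cheaper in expectation
```

<para>With a failure cost far above the inspection cost, inspection becomes optimal already at quite small damage probabilities, which is the mechanism behind the prior-sensitivity discussed later.</para>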
</section>
<section class="lev3">
<title>Terminal costs: damage localization</title>
<para>The purpose of damage <emphasis>localization</emphasis> is the reduction of the inspection costs associated with a damage indication <emphasis>d<subscript>d</subscript></emphasis>. To reflect this, we construct the cost matrices in <link linkend="T5">Table <xref linkend="T5" remap="5"/></link>, where the left matrix is the detection costs and the right matrix is the localization costs. We let damage exist in regions <emphasis>i</emphasis> = 1,2,&#x2026;,<emphasis>n</emphasis> of the structure and let <emphasis>&#x03B8;<subscript>b</subscript></emphasis> be the undamaged state and <emphasis>&#x03B8;<subscript>d,i</subscript></emphasis> be damage in location <emphasis>i</emphasis>. The cost of repair <emphasis>C<subscript>Ri</subscript></emphasis> depends on the location of the damage. If the damage location is correctly decided, then the cost is <emphasis>C<subscript>Ri</subscript></emphasis>. If it is not decided or incorrectly decided, then the cost increases by the cost of inspection, <emphasis>C<subscript>ins</subscript></emphasis>.</para>
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><para>Cost matrices for hierarchical detection and localization (paper V)</para></caption>
<graphic xlink:href="graphics/tbl5.jpg"/>
</table-wrap>
<para>The misclassification rate generally increases with an increasing number of discrete classes covering the same data<footnote id="fn5" label="5"><para>Statistical Pattern Recognition semantics: every data point belongs to a class, each class has a unique label.</para></footnote>. Alternatively, detection and localization can be done in one single decision. The direct localization approach by Parloo et al. [<link linkend="B100">100</link>] uses a cost matrix like the one in <link linkend="T6">Table <xref linkend="T6" remap="6"/></link>.</para>
<table-wrap position="float" id="T6">
<label>Table 6</label>
<caption><para>Cost matrices for the case of simultaneous detection and localization (paper V)</para></caption>
<graphic xlink:href="graphics/tbl6.jpg"/>
</table-wrap>
<para>It is traditional to model the costs as deterministic, even though they are quite uncertain and difficult to set. I have followed the tradition of deterministic costs, but compensated by including sensitivity analyses of the cost ratios.</para>
</section>
</section>
<section class="lev2" id="sec3.1.2" label="3.1.2" xreflabel="sec3.1.2">
<title>Decision theory</title>
<para>Decision theory generalizes statistical testing. A brief introduction is provided in the following. In hypothesis testing, a decision function <emphasis>d</emphasis> selects a hypothesis <emphasis>H</emphasis> from the set of possible hypotheses <emphasis>H<subscript>j</subscript></emphasis>, j = 1,2,&#x2026;,<emphasis>N</emphasis>, without including prior belief about the state or the costs of making one type of error compared to another. We consider a parameter space <emphasis>&#x03F4;</emphasis> of states and partition <emphasis>&#x03F4;</emphasis> into the subsets <emphasis>&#x03F4;<subscript>0</subscript></emphasis> and <emphasis>&#x03F4;<subscript>1</subscript></emphasis>, so that <emphasis>&#x03F4;<subscript>0</subscript></emphasis> &#x222A; <emphasis>&#x03F4;<subscript>1</subscript></emphasis> = <emphasis>&#x03F4;</emphasis> and <emphasis>&#x03F4;<subscript>0</subscript></emphasis> &#x2229; <emphasis>&#x03F4;<subscript>1</subscript></emphasis> = &#x00D8;. Hypothesis H<subscript>0</subscript> is true when the realized state <emphasis>&#x03B8;</emphasis> &#x2208; <emphasis>&#x03F4;<subscript>0</subscript></emphasis> and, correspondingly, hypothesis H<subscript>1</subscript> is true when the realized state <emphasis>&#x03B8;</emphasis> &#x2208; <emphasis>&#x03F4;<subscript>1</subscript></emphasis>. In the <emphasis>simple</emphasis> hypothesis test, the number of states is restricted to two, {<emphasis>&#x03F4;<subscript>0</subscript></emphasis> = <emphasis>&#x03B8;<subscript>0</subscript></emphasis>, <emphasis>&#x03F4;<subscript>1</subscript></emphasis> = <emphasis>&#x03B8;<subscript>1</subscript></emphasis>}, while in a <emphasis>composite</emphasis> hypothesis test, each of the subsets <emphasis>&#x03F4;<subscript>0</subscript></emphasis> and <emphasis>&#x03F4;<subscript>1</subscript></emphasis> can contain any number of models. Damage detection based on global vibration measurements is, in the general case, a composite hypothesis test, as sketched in <link linkend="F29">Figure <xref linkend="F29" remap="29"/></link>.</para>
<fig id="F29" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 29</label>
<caption><para>Composite hypothesis tests have more than one state in at least one of the hypotheses. The main problem of damage detection is the binary composite hypothesis test with an uncountable number of states</para></caption>
<graphic xlink:href="graphics/fig29.jpg"/>
</fig>
<para>The three main types of statistical tests are Fisherian, Neyman-Pearson likelihood ratio and Bayesian. The first is fundamentally different from the other two, as is discussed in the following.</para>
<section class="lev3">
<title>Fisherian test</title>
<para>The objective of the Fisherian test, introduced by R. Fisher [<link linkend="B34">34</link>], is to accept or reject the null hypothesis H<subscript>0</subscript>. The means to do this is a <emphasis>statistic</emphasis>, e.g. <emphasis>X<superscript>2</superscript></emphasis> (chi-square), as a measure of how well the data fits H<subscript>0</subscript>. In an <emphasis>X<superscript>2</superscript></emphasis> test, the sampling distribution of the test statistic is a chi-square distribution when H<subscript>0</subscript> is true. The null hypothesis is rejected when the <emphasis>p</emphasis>-value of the observed statistic falls below a selected significance level, e.g. &#x03B1; = 5%, as sketched in <link linkend="F30">Figure <xref linkend="F30" remap="30"/></link>.</para>
<fig id="F30" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 30</label>
<caption><para>Sketch of a one-sided Fisherian hypothesis test</para></caption>
<graphic xlink:href="graphics/fig30.jpg"/>
</fig>
<para>The alternative hypothesis H<subscript>1</subscript> is not considered, and this can be an advantage, as only the statistical properties of the &#x2018;normal&#x2019; data are required to perform the test. This makes the simple Fisherian test suitable for damage detection, as it represents the special case of the composite hypothesis test where <emphasis>&#x03B8;<subscript>0</subscript></emphasis> = <emphasis>&#x03F4;<subscript>0</subscript></emphasis> and <emphasis>&#x03B8;</emphasis> &#x2208; <emphasis>&#x03F4;<subscript>1</subscript></emphasis>. The Fisherian <emphasis>X<superscript>2</superscript></emphasis> test was used for damage detection by e.g. Worden et al. [<link linkend="B38">38</link>] and D&#x00F6;hler et al. [<link linkend="B39">39</link>]. The asymptotic properties of the <emphasis>X<superscript>2</superscript></emphasis> test and their effect on the detection performance were considered in paper IV.</para>
</section>
<section class="lev3">
<title>Neyman-Pearson test</title>
<para>The Neyman-Pearson [<link linkend="B35">35</link>] likelihood ratio (NP) test compares the states and selects a hypothesis based on the likelihood ratio of the data.</para>
<equation id="26"><graphic xlink:href="graphics/eq26.jpg"/></equation>
<para>Where <inline-graphic xlink:href="graphics/inline2.jpg"/> is the observed data, p(<inline-graphic xlink:href="graphics/inline2.jpg"/>;&#x03B8;) is the likelihood<footnote id="fn6" label="6"><para>The use of ( ; ) indicates that <emphasis>&#x03B8;</emphasis> is a fixed parameter, and not a random variable. This is a frequentist approach and the NP test is typically frequentist. When I speak of Bayesian testing, I use the conditionality notation ( &#x007C; ).</para></footnote> and <emphasis>L</emphasis> is the likelihood ratio. The NP Lemma states that the optimal detector threshold <emphasis>&#x03B3;</emphasis> is found for a fixed (accepted) probability of deciding the false hypothesis P(<emphasis>d</emphasis>=<emphasis>d<subscript>i</subscript></emphasis>; <emphasis>&#x03B8;<subscript>j&#x2260;i</subscript></emphasis>) = <emphasis>&#x03B1;</emphasis>, where <emphasis>&#x03B1;</emphasis> is the significance level. The likelihood functions can be plotted, as in <link linkend="F31">Figure <xref linkend="F31" remap="31"/></link>, to visualize the errors:</para>
<fig id="F31" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 31</label>
<caption><para>A simple hypothesis test. Neyman &#x0026; Pearson defined the two sources of error in a test as type I (false positive) and type II (false negative) errors</para></caption>
<graphic xlink:href="graphics/fig31.jpg"/>
</fig>
<para>The NP test thus keeps one error rate constant while minimizing the other. Knowledge of the likelihood in all states is a prerequisite, but the hypotheses are treated unevenly, as the choice of significance level depends on which error is fixed and which is minimized. The Neyman &#x0026; Pearson type-I and type-II errors are also meaningful for the composite hypothesis test. The evaluation of the performance requires a joint model for all states under each of the hypotheses, so that <emphasis>&#x03B8;<subscript>0,joint</subscript></emphasis> = <emphasis>&#x03F4;<subscript>0</subscript></emphasis> and <emphasis>&#x03B8;<subscript>1,joint</subscript> = &#x03F4;<subscript>1</subscript></emphasis>. In damage detection this corresponds to averaging over all environmental and operational conditions of <emphasis>&#x03F4;<subscript>0</subscript></emphasis> and <emphasis>&#x03F4;<subscript>1</subscript></emphasis>, while also averaging over all damage configurations of <emphasis>&#x03F4;<subscript>1</subscript></emphasis>. The binary approach may be required if the amount of training data available for estimating the feature likelihoods is sparse. This approach was taken in paper I to enable a sequential simple Bayesian test:</para>
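<para>A sketch of an NP detector for a univariate feature, assuming Gaussian likelihoods fitted to training data from each state; the threshold <emphasis>&#x03B3;</emphasis> is set empirically so that the false-positive rate under H<subscript>0</subscript> equals the chosen <emphasis>&#x03B1;</emphasis>. The feature distributions are illustrative assumptions, not data from the thesis.</para>

```python
import numpy as np

def np_detector(x0_train, x1_train, alpha, x):
    """Neyman-Pearson test via the likelihood ratio, with the threshold
    gamma set empirically so that P(decide d1; H0) = alpha.

    Gaussian likelihoods are fitted to training data from each state."""
    mu0, s0 = x0_train.mean(), x0_train.std()
    mu1, s1 = x1_train.mean(), x1_train.std()

    def log_lr(v):  # log p(v; theta1) - log p(v; theta0)
        return (-0.5 * ((v - mu1) / s1) ** 2 - np.log(s1)
                + 0.5 * ((v - mu0) / s0) ** 2 + np.log(s0))

    # threshold gamma: the (1 - alpha) quantile of the log-LR under H0
    gamma = np.quantile(log_lr(x0_train), 1 - alpha)
    return log_lr(np.asarray(x)) > gamma

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 10000)   # baseline feature samples (assumed)
x1 = rng.normal(2.0, 1.0, 10000)   # damaged feature samples (assumed)
detections = np_detector(x0, x1, alpha=0.05, x=x1)
```

<para>By construction the false-positive rate on baseline data is approximately <emphasis>&#x03B1;</emphasis>, while the resulting detection rate depends on the separation of the two likelihoods.</para>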
</section>
<section class="lev3">
<title>Bayesian test</title>
<para>The Bayesian test is the core of statistical decision theory. The test selects the state with the highest posterior probability <inline-graphic xlink:href="graphics/inline3.jpg"/>:</para>
<equation id="27"><graphic xlink:href="graphics/eq27.jpg"/></equation>
<para>The posterior probabilities are calculated using Bayes rule, in this case on discrete form:</para>
<equation id="28"><graphic xlink:href="graphics/eq28.jpg"/></equation>
<para>Where P(&#x03B8;) is the prior belief and <inline-graphic xlink:href="graphics/inline4.jpg"/> is the likelihood. As the purpose of decision-making is utility-maximization, the addition of a utility function <emphasis>U</emphasis> allows for any statistical test to reflect this purpose:</para>
<equation id="29"><graphic xlink:href="graphics/eq29.jpg"/></equation>
<para>As the Bayesian test is based on probabilities, the addition of a cost<footnote id="fn7" label="7"><para>The concepts of utility, loss and cost have, throughout the thesis, been collected under costs <emphasis>C</emphasis>. This implicitly assumes that loss is the equivalent of economic cost and that loss equals negated utility.</para></footnote> function enables the decision to be based on expected costs; E[<emphasis>C</emphasis>]. If the cost function is uniform, then the test becomes a minimization of the total error. If the priors are uniform, p(<emphasis>&#x03B8;<subscript>0</subscript></emphasis>) = p(<emphasis>&#x03B8;<subscript>1</subscript></emphasis>), then the test depends only on the likelihood and thus generalizes the NP test. The Bayes decision <emphasis>d<subscript>opt</subscript></emphasis> is the decision that minimizes the expected cost E[<emphasis>C</emphasis>] and the Bayes Risk is the expected cost E[<emphasis>C</emphasis>]<emphasis><subscript>opt</subscript></emphasis> associated with <emphasis>d<subscript>opt</subscript></emphasis>. For the case of damage detection, the test is binary, between two structural states <emphasis>&#x03B8;<subscript>0</subscript></emphasis>, <emphasis>&#x03B8;<subscript>1</subscript></emphasis>.</para>
<equation id="30"><graphic xlink:href="graphics/eq30.jpg"/></equation>
<para>The cost function <emphasis>C<superscript>ter</superscript></emphasis> is of the type in <link linkend="T4">Table <xref linkend="T4" remap="4"/></link> (p. 30). Due to the difference between failure costs and false alarm costs, as well as the typically very low prior probabilities of damage, the Bayes detector performs differently from the NP and the Fisherian tests in the damage detection case.</para>
<para>The example in <link linkend="F32">Figure <xref linkend="F32" remap="32"/></link> shows the influence of the prior probability. The optimum decision threshold is marked by a circle for different values of the prior.</para>
<fig id="F32" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 32</label>
<caption><para>Simple Bayes test. Top: feature statistic. Bottom: Expected costs, as a function of the decision threshold (paper I)</para></caption>
<graphic xlink:href="graphics/fig32.jpg"/>
</fig>
<para>For SHM damage detection applications, the influence of the costs of &#x2018;true&#x2019; decisions (<emphasis>C<subscript>11</subscript>,C<subscript>00</subscript></emphasis>) on the optimum threshold is negligible. A Bayesian detector was used in papers I, II and V.</para>
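<para>The behaviour in Figure 32 can be reproduced with a small grid search over the decision threshold, assuming a univariate Gaussian feature with equal variance in both states and zero costs for true decisions. All numbers below are illustrative; the point is only that lowering the prior probability of damage pushes the optimum threshold further into the tail.</para>

```python
import numpy as np
from math import erf, sqrt

def gauss_cdf(t, mu, sigma):
    """Gaussian CDF evaluated at t."""
    return 0.5 * (1 + erf((t - mu) / (sigma * sqrt(2))))

def bayes_threshold(p1, mu0, mu1, sigma, c_fp, c_fn, grid):
    """Threshold minimizing the expected cost
    E[C](t) = (1 - p1) * c_fp * P(x > t; theta0) + p1 * c_fn * P(x < t; theta1),
    i.e. C00 = C11 = 0, with false-alarm cost c_fp and missed-damage cost c_fn."""
    cost = [(1 - p1) * c_fp * (1 - gauss_cdf(t, mu0, sigma))
            + p1 * c_fn * gauss_cdf(t, mu1, sigma) for t in grid]
    return grid[int(np.argmin(cost))]

grid = np.linspace(-2, 6, 801)
t_low_prior  = bayes_threshold(p1=1e-4, mu0=0, mu1=2, sigma=1,
                               c_fp=1, c_fn=100, grid=grid)
t_high_prior = bayes_threshold(p1=1e-1, mu0=0, mu1=2, sigma=1,
                               c_fp=1, c_fn=100, grid=grid)
```

<para>With the smaller prior the optimizer places the circle of Figure 32 at a markedly higher threshold, since false alarms then dominate the expected cost.</para>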
</section>
</section>
<section class="lev2" id="sec3.1.3" label="3.1.3" xreflabel="sec3.1.3">
<title>Bayesian decision analysis</title>
<para>Prior, posterior and pre-posterior decision analysis generalize the Bayes decision for: a) the case of prior belief, b) the case of posterior belief after an experiment, and c) the special case of pre-posterior analysis, where both the decision of the experiment <emphasis>z</emphasis> and of the terminal action <emphasis>d</emphasis> must be taken. The latter case is important in the Design of Experiments, as pre-posterior analysis is the generalized Bayesian Experimental Design (BED), which was used by Flynn &#x0026; Todd [<link linkend="B58">58</link>], Flynn [<link linkend="B41">41</link>] and in paper I.</para>
<para>The partial decision tree in <link linkend="F33">Figure <xref linkend="F33" remap="33"/></link> shows the pre-posterior analysis {<emphasis>z,x,d,&#x03B8;</emphasis>}. If the <emphasis>z</emphasis> node is left out, then the tree represents a posterior analysis {<emphasis>x,d,&#x03B8;</emphasis>}, which is the generalized Bayes test. If both the <emphasis>z</emphasis> and the <emphasis>x</emphasis> node are left out, then the tree reduces to a prior analysis {<emphasis>d,&#x03B8;</emphasis>}.</para>
<fig id="F33" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 33</label>
<caption><para>Bayesian pre-posterior decision analysis visualized by a partial decision tree. <emphasis>z</emphasis> is the choice of test, <emphasis>x</emphasis> is the test outcome, <emphasis>d</emphasis> is the decision to make based on the outcome and <emphasis>&#x03B8;</emphasis> is the unknown state</para></caption>
<graphic xlink:href="graphics/fig33.jpg"/>
</fig>
<para>As for the Bayes test, we seek to minimize the total expected cost:</para>
<equation id="31"><graphic xlink:href="graphics/eq31.jpg"/></equation>
<para>In <link linkend="F33">Figure <xref linkend="F33" remap="33"/></link> the decision process is chronologically directed from left to right. In the analysis, however, the direction is reversed: first we average over <emphasis>&#x03B8;</emphasis> given <emphasis>x</emphasis> and <emphasis>z</emphasis>, and then select <emphasis>d</emphasis> as the decision that minimizes the resulting expectation. The steps up to this point constitute the posterior decision analysis. We then average over the features <emphasis>x</emphasis> given <emphasis>z</emphasis> and select <emphasis>z</emphasis> to minimize the resulting expectation:</para>
<equation id="32"><graphic xlink:href="graphics/eq32.jpg"/></equation>
<para>The Value of Information (<emphasis>VoI</emphasis>), from Raiffa &#x0026; Schlaifer [<link linkend="B44">44</link>], is the function of <emphasis>z</emphasis> that gives us the expected value of an experiment before the experiment is performed.</para>
<equation id="33"><graphic xlink:href="graphics/eq33.jpg"/></equation>
<para>Which can be written as:</para>
<equation id="34"><graphic xlink:href="graphics/eq34.jpg"/></equation>
<para>Where we have separated the cost function into terminal costs and experiment costs, so that <emphasis>C</emphasis>(<emphasis>d,&#x03B8;,x,z</emphasis>) = <emphasis>C<superscript>ter</superscript>(d,&#x03B8;)+ C<superscript>exp</superscript>(x,z)</emphasis>. The VoI allows us to determine which SHM system is expected to be the most economic, as SHM is economic when:</para>
<equation id="35"><graphic xlink:href="graphics/eq35.jpg"/></equation>
<para>In most SHM applications, <emphasis>C</emphasis> will not depend on <emphasis>x</emphasis>, and the VoI reduces to:</para>
<equation id="36"><graphic xlink:href="graphics/eq36.jpg"/></equation>
<para>Following this, a SHM system is economic only if the expected reduction in costs exceeds the costs of the system.</para>
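<para>A minimal discrete example of this reasoning: the prior analysis gives the best expected terminal cost without monitoring, the pre-posterior analysis averages the posterior decisions over the possible monitoring outcomes, and their difference is the VoI. The cost matrix is of the Table 4 type; the prior, the probability of false alarm (pfa) and the probability of detection (pod) are assumed values for illustration.</para>

```python
import numpy as np

def prior_expected_cost(p1, C):
    """Best expected terminal cost with no monitoring (prior analysis).
    C[d] = (cost in theta0, cost in theta1), as in Table 4."""
    return min((1 - p1) * C[d][0] + p1 * C[d][1] for d in (0, 1))

def preposterior_expected_cost(p1, C, pfa, pod):
    """Expected terminal cost when deciding after observing the binary
    monitoring output x, averaging over both states and both outcomes."""
    # joint probabilities P(x, theta)
    joint = np.array([[(1 - p1) * (1 - pfa), p1 * (1 - pod)],   # x = 0
                      [(1 - p1) * pfa,       p1 * pod]])        # x = 1
    return sum(min(joint[x][0] * C[d][0] + joint[x][1] * C[d][1]
                   for d in (0, 1))
               for x in (0, 1))

C = {0: (0.0, 1e6),     # d0: (C00 = 0, C01 = C_dam)
     1: (1e4, 6e4)}     # d1: (C10 = C_ins, C11 = C_ins + C_rep)
voi = (prior_expected_cost(0.01, C)
       - preposterior_expected_cost(0.01, C, pfa=0.05, pod=0.9))
```

<para>The monitoring system in this sketch is economic only if <emphasis>voi</emphasis> exceeds its experimental cost <emphasis>C<superscript>exp</superscript></emphasis>.</para>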
<section class="lev3">
<title>Example use of BED for engineering application</title>
<para>The most well-known risk-based decision-making application in engineering is RBI (Risk-Based Inspection). However, the framework that has been applied to inspection planning for many decades is easily shown to also encompass UM (load and strain monitoring), as well as vibration-based SHM. The generalization of the pre-posterior analysis {<emphasis>z,x,d,&#x03B8;</emphasis>} is sketched in <link linkend="F34">Figure <xref linkend="F34" remap="34"/></link>.</para>
<fig id="F34" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 34</label>
<caption><para>Pre-posterior analysis as the framework of engineering decision-making under uncertainty for multiple civil engineering disciplines</para></caption>
<graphic xlink:href="graphics/fig34.jpg"/>
</fig>
<para>In this thesis, the bottom row, feature extraction, is explored. Alternatively, response expansion is a method of predicting the load response, down to strain level, in all locations of the structure, by using a numeric response model and the measured response in a few Degrees-Of-Freedom (DOFs). This method is valuable because the high number of sensors required for local UM can be reduced to a few sensors. Furthermore, the sensors used can be placed in easily accessible and less aggressive locations and will generally have a longer lifespan than strain gauges or similar. Due to these large advantages, usage monitoring by response expansion is the natural companion to vibration-based damage detection when the same sensors are used for both purposes.</para>
</section>
</section>
</section>
<section class="lev1" id="sec3.2" label="3.2" xreflabel="sec3.2">
<title>Feature model</title>
<para>The quest for the perfect damage sensitive feature has lasted several decades. As discussed in the introduction, the perfect feature has correlation to damage, noise rejection and insensitivity to environmental effects. Unfortunately, as stated in Axiom IVb of Worden et al. [<link linkend="B25">25</link>]: &#x201C;<emphasis>Without intelligent feature extraction, the more sensitive a measurement is to damage, the more sensitive it is to changing operational and environmental conditions&#x201D;</emphasis>. While the axioms are a good starting point for selecting features, we may choose whichever features we like as input to the probabilistic decision-making. Even the most ridiculous features, e.g. the phase of the moon, will not return negative expected costs, when the decision-making is based on accurate statistical models and a Bayesian detection rule.</para>
<para>This being the case from a purely hypothetical view, the selection of damage sensitive features is the premise of economic SHM and thus the attention given to feature selection in the technical literature is not unwarranted. In this section I focus on</para>
<itemizedlist mark="dash" spacing="normal">
<listitem><para>statistical model building of the features,</para></listitem>
<listitem><para>analogy to manual inspections and</para></listitem>
<listitem><para>feature pre-selection, based on classical detection theory.</para></listitem>
</itemizedlist>
<section class="lev2" id="sec3.2.1" label="3.2.1" xreflabel="sec3.2.1">
<title>Statistical models of the features</title>
<para>The ideal statistical model of the features is the true joint CDF of the feature vector in every relevant (discrete) state, as this enables composite Bayesian testing. An example of a CDF for two states and a univariate feature is sketched in <link linkend="F35">Figure <xref linkend="F35" remap="35"/></link>.</para>
<fig id="F35" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 35</label>
<caption><para>The simplest feature statistic has two discrete states. The left plot is a control chart where the state changes from baseline to damaged, indicated by the color</para></caption>
<graphic xlink:href="graphics/fig35.jpg"/>
</fig>
<para>Ideally, statistical models for the features are estimated from samples from the structure, acquired under all operational conditions and in all classes &#x2013; i.e. in all possible damage configurations. There are two main cases:</para>
<orderedlist numeration="loweralpha" continuation="restarts" spacing="normal">
<listitem><para>The structure has been realized</para></listitem>
<listitem><para>The structure has not been realized</para></listitem>
</orderedlist>
<para>Focusing on damage detection by a statistical composite hypothesis test (decide {<emphasis>H<subscript>0</subscript>, H<subscript>1</subscript></emphasis>} for observed <emphasis>x</emphasis>), the framework for estimating the statistical feature model is presented in the following. There are three uncertainty contributions to the response model, shown in <link linkend="F36">Figure <xref linkend="F36" remap="36"/></link>.</para>
<fig id="F36" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 36</label>
<caption><para>Framework for synthesizing the statistical feature model</para></caption>
<graphic xlink:href="graphics/fig36.jpg"/>
</fig>
<para>a) When the structure has been realized, the response covariance in the baseline state can be sampled. Under normal assumptions, the input model represents time-variant parameters. These include all environmental conditions as well as the actual loading of the structure. The noise model accounts for time-invariant measurement noise. The geometry model is a deterministic bias from the &#x2018;real&#x2019; world, both in the baseline and in the damaged states. If the geometry model is a FE model or similar, it models the damage in a finite number of discrete states.</para>
<para>By averaging over samples from a narrow spectrum of operational and environmental conditions, the geometry model bias in the baseline state can be estimated. It is not possible to estimate the bias in the damaged states without data from the damaged states. The input and the noise model are both independent of the damage severity, which means that they can be assessed from the baseline state alone, but they are generally inseparable, as training data from exactly identical input conditions cannot be obtained. If the training data is very sparse, then a common unconditional model can be used for input and noise, where all the operational and environmental conditions present in the training data are averaged out. This approach is less data intensive than the joint model approach, at the cost of a decreased detection performance. This approach was used in papers IV and V, and is further elaborated in <link linkend="sec3.4">section <xref linkend="sec3.4" remap="3.4"/></link>.</para>
<para>b) When the structure has not been realized, we will need to base the joint model on simulations of operational conditions. This introduces very large uncertainties from the simulation assumptions, and the response of the feature should be verified against measurements from a similar structure. After the design phase, when the structure has been realized, the models should be updated using new measurements, under the assumption that the structure is in the baseline state for a given period of time. This is the case for combined SHM/structural design (<link linkend="ch04">chapter <xref linkend="ch04" remap="4"/></link>). The approach was used in papers I, II, III and VI, although no validations against a real structure were performed.</para>
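<para>The trade-off between the conditional and the unconditional (pooled) model in case a) can be illustrated with synthetic baseline data, where a hypothetical environmental variable shifts the feature mean: averaging the conditions out inflates the variance of the baseline model, which is what degrades detection performance. All numbers below are assumed for illustration.</para>

```python
import numpy as np

rng = np.random.default_rng(1)
# baseline feature sampled under three discrete environmental conditions
# (e.g. temperature bins); the condition shifts the feature mean (assumed)
conditions = [(-0.5, 0.2), (0.0, 0.2), (0.5, 0.2)]
samples = [rng.normal(mu, s, 5000) for mu, s in conditions]

# conditional model: one distribution per environmental condition
cond_std = np.mean([x.std() for x in samples])
# unconditional model: conditions averaged out of the pooled data
pooled_std = np.concatenate(samples).std()
```

<para>The pooled standard deviation is more than twice the conditional one in this sketch, so a detection threshold set on the unconditional model must sit further into the tail, reducing the probability of detecting a given change.</para>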
</section>
<section class="lev2" id="sec3.2.2" label="3.2.2" xreflabel="sec3.2.2">
<title>Response model &#x2013; NREL tower</title>
<para>A wind turbine tower was used for numerical examples in papers I, III and VI. The basic response measurements were acceleration time series, and the estimation of the statistical feature model was done by combining the three models shown in <link linkend="F36">Figure <xref linkend="F36" remap="36"/></link>.</para>
<section class="lev3">
<title>Discrete geometry model</title>
<para>The wind turbine was modelled as a FE model with lumped rotor-nacelle-assembly inertia. Cracks were modelled by releasing the horizontal interface between shell elements. No contact elements were inserted, meaning that the response was kept linear. The smallest crack length modelled was between 0.34 m and 0.52 m. An example of the model, with a damage outcome, is shown in <link linkend="F37">Figure <xref linkend="F37" remap="37"/></link>.</para>
<fig id="F37" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 37</label>
<caption><para>Deformed shape of NREL tower model with an approx. 1.7 m long crack (paper I)</para></caption>
<graphic xlink:href="graphics/fig37.jpg"/>
</fig>
<para>The modal parameters have a low sensitivity to overall mesh-size, allowing for a coarse FE model. Naturally, stress prediction would require a much finer mesh in the vicinity of the crack.</para>
<para>To preserve the eigensolution in the response, each model was reduced using SEREP by O&#x2019;Callahan et al. [<link linkend="B101">101</link>] to 30 &#x2018;active&#x2019; DOFs, representing the locations and orientations of the sensors. The modal basis is used to simulate the global response in a linear simulation using the Frequency Response Function. To illustrate how small the induced changes are, even for a very long crack, <link linkend="F38">Figure <xref linkend="F38" remap="38"/></link> shows the changes in modal properties of the first four modes as a function of the location of a crack. The modes in the figure are projected onto the unperturbed modes, as the crack causes local asymmetry and mode shape rotation.</para>
<fig id="F38" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 38</label>
<caption><para>Sensitivity of eigenfrequencies (top) and of the MAC-values (bottom) for the first four flexural modes to a through-crack of length = 10% circumference. <emphasis>Angle</emphasis> defines the location in 72 angle increments and <emphasis>Height</emphasis> is the elevation in meters above foundation</para></caption>
<graphic xlink:href="graphics/fig38.jpg"/>
</fig>
<para>The sensitivity of the mode shapes is given as the change in the Modal Assurance Criterion, &#x0394;MAC. MAC is a well-known measure of mode shape similarity, often employed for FE and experimental mode shape pairing. It has also been investigated for damage detection, see e.g. Sohn et al. [<link linkend="B3">3</link>]. From the figure, the frequencies are seen to be more damage sensitive than the MAC values. The FE shell model was used for papers I and VI, and similar models, described in <link linkend="sec3.3">sections <xref linkend="sec3.3" remap="3.3"/></link> and <link linkend="sec3.4"><xref linkend="sec3.4" remap="3.4"/></link>, were used in papers IV and V. FE beam models of the NREL tower were used in papers II and III, but I found these to have incorrect sensitivity to damage, making the output of the applied <emphasis>damage vector</emphasis> algorithm unrealistic.</para>
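<para>As a minimal sketch (not code from the thesis), the MAC between a baseline mode shape and a perturbed mode shape can be evaluated as follows; the example vectors are hypothetical:</para>

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion: squared, normalized inner product of
    two mode shape vectors (1 = identical shape, 0 = orthogonal)."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

# Hypothetical baseline shape and a slightly perturbed (damaged) shape
phi_base = np.array([0.0, 0.31, 0.59, 0.81, 1.0])
phi_dam = phi_base + np.array([0.0, 0.01, -0.02, 0.015, -0.01])

# The damage-sensitive quantity is the drop in MAC
delta_mac = 1.0 - mac(phi_base, phi_dam)
```

Note that a small shape perturbation produces a very small &#x0394;MAC, consistent with the low damage sensitivity of the MAC values seen in the figure.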
</section>
<section class="lev3">
<title>Input model</title>
<para>To simulate the varying operational conditions, I used aeroelastic simulations in the commercial software LACflex. Multiple wind speeds and multiple random &#x2018;seeds&#x2019; of turbulence generation were used as the input model (variance between measurements). As the perturbed tower could not be adequately modelled in LACflex, the tower response was simulated separately. This separation of dynamical models builds on the assumption of small changes in the modal parameters, which is valid for small damages and mainly linear behavior. It is arguable whether this assumption is fulfilled when a 2 m crack is simulated, but as the appearance of non-linear effects benefits a novelty analysis damage indicator, as observed by Figueiredo et al. [<link linkend="B102">102</link>], I consider the assumption conservative in the calculation of expected costs.</para>
</section>
<section class="lev3">
<title>Noise model</title>
<para>Measured accelerations are contaminated by measurement noise. The noise was modelled as random Gaussian, with an RMS level of 1&#x2013;10%. These noise levels exceed the dynamic range of modern high quality accelerometers &#x2013; Br&#x00FC;el &#x0026; Kj&#x00E6;r would be insulted &#x2013; but as was found in paper IV, high noise levels may be used to account for influencing factors that are neglected in the simulations.</para>
</section>
</section>
<section class="lev2" id="sec3.2.3" label="3.2.3" xreflabel="sec3.2.3">
<title>Feature selection</title>
<para>The feature types shown in <link linkend="T7">Table <xref linkend="T7" remap="7"/></link> were investigated for SHM decision-making.</para>
<table-wrap position="float" id="T7">
<label>Table 7</label>
<caption><para>Features used in the papers. SIM: Simulated. EXP: Experimentally obtained</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="all" border="1">
<thead>
<tr>
<th></th>
<th>Paper I</th>
<th>Paper II</th>
<th>Paper III</th>
<th>Paper IV</th>
<th>Paper V</th>
<th>Paper VI</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top">AR coefficients</td>
<td valign="top">SIM</td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top">SIM, EXP</td>
<td valign="top">SIM, EXP</td>
<td valign="top">SIM</td>
</tr>
<tr>
<td valign="top">Eigenfrequencies</td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top">SIM, EXP</td>
<td valign="top">SIM, EXP</td>
<td valign="top"></td>
</tr>
<tr>
<td valign="top">Mode shapes</td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top">SIM, EXP</td>
<td valign="top">SIM, EXP</td>
<td valign="top"></td>
</tr>
<tr>
<td valign="top">Subspace angles</td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top">SIM, EXP</td>
<td valign="top"></td>
<td valign="top"></td>
</tr>
<tr>
<td valign="top">Damage vector</td>
<td valign="top"></td>
<td valign="top">SIM</td>
<td valign="top">SIM</td>
<td valign="top"></td>
<td valign="top"></td>
<td valign="top"></td>
</tr>
</tbody>
</table>
</table-wrap>
<section class="lev3">
<title>Eigenfrequencies &#x0026; mode shapes</title>
<para>The equations of motion for a linear <emphasis>N<subscript>dof</subscript></emphasis>-DOF system with viscous damping are, in matrix form:</para>
<equation id="37"><graphic xlink:href="graphics/eq37.jpg"/></equation>
<para>Where x is the input vector, y is the response vector and <emphasis role="strong">M,C</emphasis> and <emphasis role="strong">K</emphasis> are the mass, damping and stiffness matrices. The dynamic properties are obtained from the eigenvalue decomposition:</para>
<equation id="38"><graphic xlink:href="graphics/eq38.jpg"/></equation>
<para>Where <emphasis role="strong">B</emphasis> is a matrix with the mode shapes in the columns and &#x03A9; is the diagonal matrix holding the eigenvalues. As damage affects the physical properties of the structure, it follows from the above that the modal properties are affected as well. In a numeric environment, the stiffness and mass matrices are directly available and the modal properties can be extracted directly. For a realized structure, structural identification and modal analysis are used to estimate the modal parameters. If the tests are performed under low-amplitude vibrations, so that the structure does not exhibit significantly non-linear behavior, then a linear dynamical model is normally acceptable. The modal parameters can be extracted using forced excitation and EMA but, in the case of large structures and SHM, they must be estimated using OMA identification techniques. The identification introduces error and bias on the estimates, and the modal parameters are sensitive to environmental conditions, all of which reduce the discriminative performance.</para>
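<para>As a numerical illustration of the eigenvalue decomposition (a hypothetical 3-DOF system, not the NREL model), the undamped generalized eigenproblem yields eigenfrequencies and mass-normalized mode shapes directly from the mass and stiffness matrices:</para>

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 3-DOF shear-frame system (masses in kg, stiffnesses in N/m)
M = np.diag([2.0, 2.0, 1.0])
k = 1.0e6
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Undamped counterpart of the decomposition above: K b = omega^2 M b;
# the eigenvalues are omega^2, in ascending order
omega2, B = eigh(K, M)
freqs_hz = np.sqrt(omega2) / (2.0 * np.pi)

# Columns of B are the mode shapes, mass-normalized so that B^T M B = I
```

Damage enters through perturbations of <emphasis role="strong">K</emphasis> (and possibly <emphasis role="strong">M</emphasis>), which shift the eigensolution.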
</section>
<section class="lev3">
<title>Principal subspace angles</title>
<para>Principal angles are the angles between the mode shapes in the subspace spanned by the mode shape vectors. They are a potentially powerful condensation of the information in mode shapes, as the <emphasis>N<subscript>m</subscript></emphasis> estimated mode shapes in <emphasis>N<subscript>d</subscript></emphasis> DOF form a feature vector of length <emphasis>L<subscript>v</subscript></emphasis> = <emphasis>N<subscript>m</subscript>N<subscript>d</subscript></emphasis>, potentially reduced to <emphasis>L<subscript>v</subscript></emphasis> = <emphasis>N<subscript>m</subscript></emphasis>/2, as each pair of mode shapes forms one angle. The largest angle is related to a notion of the distance between subspaces. The baseline data are taken e.g. as the sample averages. The mode shape pairs can be made from any two or more modes, not necessarily adjacent in the frequency band. In paper IV the principal angles are calculated using a Singular Value Decomposition of the product of the orthonormal modes, obtained by QR factorization. This follows the method in Golub &#x0026; Van Loan [<link linkend="B103">103</link>]:</para>
<equation id="39"><graphic xlink:href="graphics/eq39.jpg"/></equation>
<para>Where &#x03A6; is a matrix with the subset of mode shapes, for which the angles are calculated, and subscript <emphasis>b</emphasis> denotes the baseline state.</para>
</section>
<section class="lev3">
<title>AR model coefficients</title>
<para>The scalar AR model with <emphasis>p</emphasis> autoregressive parameters, AR(<emphasis>p</emphasis>), is given by</para>
<equation id="40"><graphic xlink:href="graphics/eq40.jpg"/></equation>
<para>The model is fitted to the time series of a sensor using a least-squares formulation. The parameters of the AR model are called the autoregressive coefficients. The AR parameters are directly related to the discrete system poles, <emphasis>&#x03BB;<subscript>i</subscript></emphasis>, <emphasis>i</emphasis> = 1, 2,&#x2026;,2<emphasis>N</emphasis>, where <emphasis>N</emphasis> is the number of modes in the response, through the companion matrix, see e.g. Brincker &#x0026; Ventura [<link linkend="B28">28</link>]. Thus, eigenfrequencies and damping factors may be determined directly from the AR coefficients, meaning that the coefficients also hold information about damage. For the AR model to model a random response, <emphasis>p</emphasis> &#x003E; 2<emphasis>N</emphasis>, i.e. an oversized model, is required. The various criteria described in Figueiredo et al. [<link linkend="B102">102</link>] provide an estimate of the appropriate model order. AR models have been used for damage detection in many publications, a recent review of which is provided by Yao &#x0026; Pakzad [<link linkend="B104">104</link>].</para>
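<para>The least-squares fit can be sketched as follows (generic numpy on a hypothetical signal, not the thesis implementation):</para>

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares estimate of scalar AR(p) coefficients a_1..a_p in
    y[t] = a_1 y[t-1] + ... + a_p y[t-p] + e[t]."""
    n = len(y)
    # Regressor matrix: row for t = p..n-1 holds [y[t-1], ..., y[t-p]]
    X = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return a

# Hypothetical AR(1) response; the estimate should recover a_1 near 0.8
rng = np.random.default_rng(0)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

a_hat = fit_ar(y, p=1)
```

For measured responses an oversized order (e.g. the AR(40) models used later in the chapter) absorbs noise modes in addition to the physical poles.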
</section>
<section class="lev3">
<title>Damage vector</title>
<para>The method described by Parloo et al. [<link linkend="B100">100</link>] is briefly summarized in the following. The sensitivity equation for mode shape <emphasis>b<subscript>i</subscript></emphasis> of <emphasis>N<subscript>m</subscript></emphasis> modes, to some change parameter <emphasis>u</emphasis> is given by</para>
<equation id="41"><graphic xlink:href="graphics/eq41.jpg"/></equation>
<para>Where <emphasis>m</emphasis> is the modal mass, given by the inner product over the mass matrix:</para>
<equation id="42"><graphic xlink:href="graphics/eq42.jpg"/></equation>
<para>For finite size changes &#x0394;<emphasis>u</emphasis>, a sensitivity matrix &#x0394;<emphasis role="strong">B</emphasis> is constructed from the mode shape sensitivities &#x0394;<emphasis>b<subscript>i</subscript></emphasis>, <emphasis>i=1,2,&#x2026;,N<subscript>m</subscript></emphasis>. Each column in the sensitivity matrix contains vec(&#x0394;<emphasis role="strong">B</emphasis><subscript>k</subscript>), <emphasis>k</emphasis> = 1,2,&#x2026;,<emphasis>N<subscript>k</subscript></emphasis>, where <emphasis>N<subscript>k</subscript></emphasis> is the number of scenarios and vec() is a <emphasis>reshape to column vector</emphasis> operator. Each scenario <emphasis>k</emphasis> is modelled as mass and stiffness perturbations of the baseline system, &#x0394;<emphasis role="strong">M</emphasis><subscript>k</subscript> and &#x0394;<emphasis role="strong">K</emphasis><subscript>k</subscript>:</para>
<equation id="43"><graphic xlink:href="graphics/eq43.jpg"/></equation>
<para>Where subscript <emphasis>b</emphasis> denotes the baseline state. The scenarios are predefined and the change matrices are typically obtained from a FE model. The damage vector is given by:</para>
<equation id="44"><graphic xlink:href="graphics/eq44.jpg"/></equation>
<para>Where + denotes the pseudoinverse, <emphasis role="strong">A</emphasis> is the measured mode shape matrix and <emphasis role="strong">A</emphasis><emphasis><subscript>b</subscript></emphasis> is the mode shape matrix in the baseline state. The method assumes that vec(&#x0394;<emphasis role="strong">A</emphasis>) is a linear combination of the <emphasis>k</emphasis> scenarios in &#x0394;<emphasis role="strong">B</emphasis>. The damage vector is a change vector, indicative of the contribution of each scenario. This type of algorithm is deterministic, and it outputs simultaneous detection and localization.</para>
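<para>The pseudoinverse step can be illustrated with synthetic numbers (all sizes and matrices hypothetical): if the measured mode shape change really is a linear combination of the scenario sensitivities, the damage vector recovers the contribution of each scenario.</para>

```python
import numpy as np

rng = np.random.default_rng(0)
Nd, Nm, Nk = 12, 4, 3          # DOFs, modes, predefined damage scenarios

# Columns hold the vectorized mode shape sensitivity of each scenario
dB = rng.standard_normal((Nd * Nm, Nk))

# Synthetic 'measurement': only scenario 2 is active, with magnitude 0.5
u_true = np.array([0.0, 0.5, 0.0])
dA_vec = dB @ u_true           # vec(A) - vec(A_b)

# Damage vector via the pseudoinverse: detection and localization at once
u_hat = np.linalg.pinv(dB) @ dA_vec
```

With noisy, OMA-identified mode shapes this inversion becomes ill-conditioned, which is the failure mode described below.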
<para>The damage vector feature was used for calculations in papers II and III, where the identified modal properties were synthesized by simply adding noise to the modal properties obtained from the FE model. To keep the equations well-conditioned, a certain number of mode shapes must be identified, and this places some requirements on the number of sensors to be used. The algorithm was found to fail when actual OMA system identification was used, and I subsequently abandoned it to instead focus on robust statistical algorithms.</para>
</section>
</section>
<section class="lev2" id="sec3.2.4" label="3.2.4" xreflabel="sec3.2.4">
<title>Feature pre-selection</title>
<para>In the end, we are trying to minimize <emphasis>risk</emphasis>. To do that, we need a cost function, a statistical model of the features and a model for co-optimization of decision variables, feature variables and structural variables in a probabilistic framework. The development in computational power might follow an exponential law, but, due to the size of the full variable domain, this optimization problem will remain intractable for years to come for any real world application. This is where engineering rationality comes into play: by substituting the true objective function (<emphasis>risk</emphasis>) with something more computationally tractable, we can hope to get close to the true risk minimum. One such substitution is the optimization of the Area Under the Curve (<emphasis>AUC</emphasis>).</para>
<para>The Receiver Operating Characteristics (ROC), shown in <link linkend="F39">Figure <xref linkend="F39" remap="39"/></link>, is obtained by plotting the true positive rate as a function of false positive rate. The <emphasis>AUC</emphasis> is a well-known model metric in detection theory, Bradley [<link linkend="B105">105</link>]. It is defined by the data alone and it is invariant to any choices of prior probabilities or decision functions.</para>
<fig id="F39" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 39</label>
<caption><para>Area Under the Curve (<emphasis>AUC</emphasis>)</para></caption>
<graphic xlink:href="graphics/fig39.jpg"/>
</fig>
<para>The <emphasis>AUC</emphasis> was used for feature selection in papers I, II and IV.</para>
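<para>The <emphasis>AUC</emphasis> can be computed directly from two samples of scores via its rank (Mann&#x2013;Whitney) interpretation, without fixing any decision threshold; a minimal sketch with hypothetical scores:</para>

```python
import numpy as np

def auc(baseline_scores, damaged_scores):
    """AUC = P(damaged score > baseline score) + 0.5 P(tie), evaluated
    over all pairs; equals the area under the ROC curve."""
    b = np.asarray(baseline_scores, dtype=float)[:, None]
    d = np.asarray(damaged_scores, dtype=float)[None, :]
    return (d > b).mean() + 0.5 * (d == b).mean()

print(auc([0.0, 1.0, 2.0], [3.0, 4.0]))   # perfectly separated: 1.0
print(auc([0.0, 2.0], [1.0, 3.0]))        # overlapping scores: 0.75
```

This data-only definition is what makes the metric invariant to priors and decision functions, as noted above.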
</section>
</section>
<section class="lev1" id="sec3.3" label="3.3" xreflabel="sec3.3">
<title>Damage detection using machine learning</title>
<para>I have now defined the features that are output by the SHM system at each sensing instance, and now face the task of making decisions based on the acquired data. As discussed in the previous section, if the full joint probability function of the data and all environmental variables were known, we could base our decisions on simple Bayes tests, which are risk-optimal under assumptions on the prior distributions and the likelihood of the observed data. The absence of sufficient data to obtain the full joint PDF, however, makes it relevant to look at other fields where decisions are made using sparse data.</para>
<para>There are two main forms of statistical learning: <emphasis>regression</emphasis> and <emphasis>classification</emphasis>. The first has continuous output, while the second has discrete output in the form of class <emphasis>labels</emphasis>. As all decisions are discrete, continuous output must also be transformed into a decision, typically by using a statistical test. Machine Learning is the overall framework for statistical pattern recognition, wherein novelty detection falls in the category of unsupervised learning and classification in that of supervised learning. The main difference is that the supervised algorithms require class labels for training, while the unsupervised algorithms do not. Some well-known algorithms are listed in the overview in <link linkend="T8">Table <xref linkend="T8" remap="8"/></link>.</para>
<table-wrap position="float" id="T8">
<label>Table 8</label>
<caption><para>Types of statistical learning algorithms and their applications for SHM</para></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="all" border="1">
<thead>
<tr>
<th>Type</th>
<th>Novelty detection</th>
<th>Classification</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top">Examples of use</td>
<td valign="top">Outlier analysis, Clustering</td>
<td valign="top">Pattern recognition, Speech and image recognition</td>
</tr>
<tr>
<td valign="top">Algorithms used in papers</td>
<td valign="top">Mahalanobis Squared Distance (MSD)<?lb?>Gaussian Mixture Models (GMM)<?lb?>Principal Component Analysis (PCA)<?lb?>Factor Analysis (FA)</td>
<td valign="top">Multi-Layer Perceptron (MLP)<?lb?>Support Vector Machines (SVM)<?lb?>Na&#x00EF;ve Bayes (NB)<?lb?>Linear Discriminant Analysis (LDA)</td>
</tr>
<tr>
<td valign="top">Other examples of algorithms</td>
<td valign="top">Kernel Density Estimator (KDE)<?lb?>Nonlinear Principal Component Analysis (NLPCA)</td>
<td valign="top">k-Nearest Neighbor (kNN)<?lb?>One-class SVM (1-SVM)<?lb?>Decision Trees (DT)</td>
</tr>
<tr>
<td valign="top">Application in papers</td>
<td valign="top">Damage Detection</td>
<td valign="top">Damage Detection<?lb?>Damage Localization</td>
</tr>
</tbody>
</table>
</table-wrap>
<para>Novelty detection is the detection of unusual data. It is a common discipline for statistical multivariate outlier detection (MSD, density estimators), which is strictly unsupervised, and for some pattern recognition algorithms (1-SVM), which require labelled data from both the baseline and the unusual class for training.</para>
<para>In the following, the algorithms used in the papers are briefly introduced in the context in which they appear in the papers, along with the principal results. For further background reading, Niu et al. [<link linkend="B106">106</link>] provide a comparison of different classifiers in SHM context, including some of the above. Bishop [<link linkend="B43">43</link>] and Hastie et al. [<link linkend="B107">107</link>] provide the full background on Statistical Learning Theory.</para>
<section class="lev2" id="sec3.3.1" label="3.3.1" xreflabel="sec3.3.1">
<title>Damage detection using Novelty analysis</title>
<para>In this section, detection is cast as outlier analysis. The three main types of parametric outlier analysis output are discordancy, likelihood density score and prediction residual. In SHM context, a continuous output variable is called a score. By plotting the score on a time axis in a control process chart, a graphical way to evaluate the structure&#x2019;s condition is created. In <link linkend="F40">Figure <xref linkend="F40" remap="40"/></link> below, a control chart shows the value of a score for training and testing data. From the class labels, the ROC curves are plotted in the center plot, and in the right plot the <emphasis>AUC</emphasis> values are plotted.</para>
<fig id="F40" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 40</label>
<caption><para>Example of a novelty detection score (paper IV)</para></caption>
<graphic xlink:href="graphics/fig40.jpg"/>
</fig>
<para>In paper IV the following algorithms were compared:</para>
<section class="lev3">
<title>The Mahalanobis squared distance</title>
<para>The Mahalanobis Squared Distance (MSD) is a discordancy metric. It is a multivariate generalization of the Euclidean distance of a data point to the center of the distribution. By normalizing by the variance in the direction of the data point, the distance is taken along the principal directions:</para>
<equation id="45"><graphic xlink:href="graphics/eq45.jpg"/></equation>
<para>Where <inline-graphic xlink:href="graphics/inline2.jpg"/> is the observed feature vector and (<emphasis>&#x03BC;</emphasis><subscript><emphasis>x</emphasis>,b</subscript>, &#x03A3;<subscript><emphasis>x</emphasis>,b</subscript>) are the mean vector and covariance matrix of the feature in the baseline state. If the <emphasis>d</emphasis>-dimensional data is Gaussian, the MSD asymptotically follows a <emphasis>&#x03C7;</emphasis><superscript>2</superscript>-distribution with <emphasis>d</emphasis> degrees of freedom. This makes hypothesis testing by a simple Fisherian test (<link linkend="sec3.1.2">section <xref linkend="sec3.1.2" remap="3.1.2"/></link>) possible.</para>
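<para>A minimal sketch of the MSD score and the &#x03C7;<superscript>2</superscript>-based test (generic numpy/scipy on hypothetical Gaussian features, not the thesis code):</para>

```python
import numpy as np
from scipy.stats import chi2

def msd(X_train, X_test):
    """Mahalanobis squared distance of test features to the baseline cloud."""
    mu = X_train.mean(axis=0)
    S = np.cov(X_train, rowvar=False)
    D = X_test - mu
    # Row-wise quadratic form D_i^T S^-1 D_i
    return np.einsum('ij,jk,ik->i', D, np.linalg.inv(S), D)

# Hypothetical Gaussian baseline features, d = 3
rng = np.random.default_rng(0)
d = 3
X_train = rng.standard_normal((2000, d))
scores = msd(X_train, rng.standard_normal((2000, d)))

# Under the chi^2(d) approximation, flag scores above this threshold
threshold = chi2.ppf(0.99, df=d)
rate = (scores > threshold).mean()   # should be near alpha = 0.01
```

In the healthy state the exceedance rate matches the chosen significance level; damage shifts the score distribution upward.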
</section>
<section class="lev3">
<title>Gaussian Mixture Models</title>
<para>A Gaussian Mixture Model (GMM) is a weighted sum of <emphasis>M d</emphasis>-dimensional Gaussian densities:</para>
<equation id="46"><graphic xlink:href="graphics/eq46.jpg"/></equation>
<para>Where N is the <emphasis>d</emphasis>-dimensional Gaussian probability density function with mean value vector <emphasis>&#x03BC;<subscript>i</subscript></emphasis> and covariance matrix &#x03A3;<subscript>i</subscript>, and <emphasis>w</emphasis> is a weighting vector of length <emphasis>M</emphasis>. As a GMM is a density estimator, the damage score is the likelihood of the data.</para>
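<para>Evaluating the score (log-likelihood) of a point under a given GMM can be sketched as follows; the mixture parameters are hypothetical, and the fitting step (typically EM) is omitted:</para>

```python
import numpy as np

def gmm_loglik(x, weights, means, covs):
    """Log-likelihood of point x under a d-dimensional Gaussian mixture;
    a low likelihood under the baseline model indicates novelty."""
    d = len(x)
    terms = []
    for w, mu, S in zip(weights, means, covs):
        diff = x - mu
        _, logdet = np.linalg.slogdet(S)
        quad = diff @ np.linalg.solve(S, diff)
        terms.append(np.log(w) - 0.5 * (d * np.log(2 * np.pi) + logdet + quad))
    return np.logaddexp.reduce(terms)  # log of the weighted sum of densities

# Hypothetical 2-component mixture in 2-D
w = [0.6, 0.4]
mu = [np.zeros(2), np.array([3.0, 3.0])]
S = [np.eye(2), 0.5 * np.eye(2)]
score = gmm_loglik(np.array([0.1, -0.2]), w, mu, S)
```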
</section>
<section class="lev3">
<title>Principal Component Analysis</title>
<para>Principal Component Analysis (PCA) is based on representing <emphasis>n</emphasis> vectors of <emphasis>d</emphasis> dimensions as linear combinations of a set of <emphasis>p</emphasis> orthogonal vectors of <emphasis>d</emphasis> dimensions, where <emphasis>p</emphasis> &#x003C; <emphasis>d</emphasis>. The <emphasis>p</emphasis> vectors are the eigenvectors corresponding to the largest singular values diag(S<emphasis><subscript>0</subscript></emphasis>) of the sample covariance matrix:</para>
<equation id="47"><graphic xlink:href="graphics/eq47.jpg"/></equation>
<para>The eigenvectors required to retain the significant information are those corresponding to the largest singular values. The projected vectors <emphasis>y</emphasis>, of <emphasis>p</emphasis> uncorrelated variables, are:</para>
<equation id="48"><graphic xlink:href="graphics/eq48.jpg"/></equation>
<para>An example of this is shown in <link linkend="F41">Figure <xref linkend="F41" remap="41"/></link>, where 6 frequencies are mapped onto the first two principal components. The data from the damaged states is observed to lie apart from the baseline cluster.</para>
<fig id="F41" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 41</label>
<caption><para>The lowest 6 frequencies of a blade structure, mapped onto the first two principal components</para></caption>
<graphic xlink:href="graphics/fig41.jpg"/>
</fig>
<para>The projected data <emphasis>y</emphasis> could be used directly for a multivariate statistical test, but better results are obtained by considering changes in the <emphasis>null-space</emphasis>. The null-space is defined by the smallest singular values (&#x2248; 0), as can be seen in <link linkend="F41">Figure <xref linkend="F41" remap="41"/></link>. As projection onto the smallest singular values would lead to a singular covariance matrix, we instead project the data back into the original space:</para>
<equation id="49"><graphic xlink:href="graphics/eq49.jpg"/></equation>
<para>The sum of squared prediction residuals represents changes in the null space. This approach was used in paper IV:</para>
<equation id="50"><graphic xlink:href="graphics/eq50.jpg"/></equation>
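<para>The residual score can be sketched as follows (generic numpy on hypothetical data, not the paper's implementation): fit the <emphasis>p</emphasis>-dimensional principal subspace on baseline data, reconstruct, and take the squared residual, which reflects the null space.</para>

```python
import numpy as np

def pca_residual(X_train, X_test, p):
    """Squared prediction residual after reconstruction from the first p
    principal components; null-space changes inflate this score."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    T = Vt[:p].T                  # d x p loading matrix
    D = X_test - mu
    E = D - D @ T @ T.T           # part of D outside the p-dim subspace
    return (E ** 2).sum(axis=1)

# Hypothetical training data confined to the x1-x2 plane in 3-D
rng = np.random.default_rng(0)
X_train = np.column_stack([rng.standard_normal((200, 2)), np.zeros(200)])

# A point leaving the plane (a 'null-space' change) gets a large score
score = pca_residual(X_train, np.array([[0.0, 0.0, 1.0]]), p=2)
```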
</section>
<section class="lev3">
<title>Factor Analysis</title>
<para>In Factor Analysis (FA), the observed variables are modelled as linear combinations of underlying unobservable &#x201C;factors&#x201D; plus error terms.</para>
<equation id="51"><graphic xlink:href="graphics/eq51.jpg"/></equation>
<para>Where <emphasis>&#x03BC;</emphasis> is the mean vector, &#x039B; is a matrix of factor loadings, <emphasis>f</emphasis> is a vector of independent factors and <emphasis>e</emphasis> is a vector of independent error terms. A predefined number of factors is fitted to the training data, and the maximum likelihood estimator of the factor loadings matrix is calculated from the sample correlation matrix. The number of factors can be estimated as the number of significant principal components of the sample covariance matrix.</para>
</section>
<section class="lev3">
<title>Impact of the amount of training data</title>
<para>All four algorithms are based on sampling statistics of the baseline training data. According to the curse of dimensionality, the amount of training data required to obtain a constant coefficient of variation on the sampling estimates increases exponentially with the feature dimension. This limits the feature dimensionality that can be used when only sparse data are available. A study of the impact of the amount of training data on the detection performance was performed in paper IV. By increasing the number of samples drawn from the same population, the discrimination performance, taken as E[<emphasis>AUC</emphasis>], was sampled for each of the four algorithms MSD, GMM (2 mixtures), PCA (2 singular components) and FA (2 factors). The coefficients of an AR(40) model were used, but while the full dimension was used for PCA, only the first 10 coefficients were used for the remaining algorithms. The result is shown in <link linkend="F42">Figure <xref linkend="F42" remap="42"/></link>.</para>
<fig id="F42" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 42</label>
<caption><para>Discrimination performance, given by E[AUC] as function of the amount of training data (paper IV)</para></caption>
<graphic xlink:href="graphics/fig42.jpg"/>
</fig>
<para>It is observed that PCA benefits the most from increased training data.</para>
</section>
<section class="lev3">
<title>Example: NREL tower</title>
<para>I used novelty detection in papers I and VI for damage detection of fatigue cracks in the NREL tower. The response model and the damage model were described in <link linkend="sec3.2.2">section <xref linkend="sec3.2.2" remap="3.2.2"/></link> (p.36). An AR model was fitted to the response time histories of a biaxial accelerometer placed in the nacelle, all in the numerical simulation domain. I applied MSD novelty detection to the AR feature model by combining the features of the two channels. The sensitivity of the <emphasis>AUC</emphasis> to measurement noise and damping ratio was investigated with MCS. Generally, enough data was used to make the effect of sampling uncertainty negligible. The results are shown in <link linkend="F43">Figure <xref linkend="F43" remap="43"/></link>.</para>
<fig id="F43" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 43</label>
<caption><para>Sensitivity to noise and damping, for AR + Mahalanobis Squared Distance</para></caption>
<graphic xlink:href="graphics/fig43.jpg"/>
</fig>
<para>The gradient of <emphasis>AUC</emphasis> is seen to be negative with respect to both, but the sensitivity to damping is the stronger: the numerical gradient is approx. 5 times higher for damping than for noise, when averaged over the values up to 5%.</para>
<para>The sampled <emphasis>AUC</emphasis> at 87 locations for 9 damage severities are shown in <link linkend="F44">Figure <xref linkend="F44" remap="44"/></link>. This type of plot is essential to SHM design, as blind spots are revealed, which enables the designer to position welds at optimal locations.</para>
<fig id="F44" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 44</label>
<caption><para>Sampling estimate of <emphasis>AUC</emphasis> for crack size = <emphasis>i<subscript>cl</subscript></emphasis> times 1.4 % circumference, for 3 levels of measurement noise (paper I)</para></caption>
<graphic xlink:href="graphics/fig44.jpg"/>
</fig>
<para>I resume the NREL example in <link linkend="sec3.5">section <xref linkend="sec3.5" remap="3.5"/></link> (p.48).</para>
</section>
<section class="lev3">
<title>Experimental validation</title>
<para>In order to test all combinations of the four feature types with the four novelty detection algorithms on experimental data, I used two near-identical blade structures, shown in <link linkend="F65">Figure <xref linkend="F65" remap="65"/></link>. These were cantilevered, tapered wooden box girders with dynamical properties resembling those of blade-like structures. The cross section, a thin stretched skin on a center spar, is sketched in <link linkend="F45">Figure <xref linkend="F45" remap="45"/></link>.</para>
<fig id="F45" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 45</label>
<caption><para>Sketch of the blade structure cross section</para></caption>
<graphic xlink:href="graphics/fig45.jpg"/>
</fig>
<para>Over the height of 2.4 m, the cross section tapered linearly from 300 &#x00D7; 150 mm<superscript>2</superscript> to 200 &#x00D7; 100 mm<superscript>2</superscript>. Horizontal 6 mm plywood partitions were inserted every 0.6 m to reduce warping. The structure was epoxy-glued to a steel support plate and bolted to a test bench. 18 accelerometers were mounted as sketched in <link linkend="F46">Figure <xref linkend="F46" remap="46"/></link>. 120 seconds were sampled at a rate of 4096 Hz. The experiments were, for each blade, performed over 30 days, during which both temperature and relative humidity varied substantially in the laboratory. Both were logged during the test period, and the eigenfrequencies were subsequently normalized according to a linear regression on the measured relative humidity. The dependency was strong, due to the wood absorbing moisture, thus changing its density. No regression was performed on the temperature.</para>
<fig id="F46" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 46</label>
<caption><para>The blade structure. Right top: damage in blade A, Right bottom: damage in blade B</para></caption>
<graphic xlink:href="graphics/fig46.jpg"/>
</fig>
<para>The cut in blade A varied from 2.5 cm to 12.5 cm in 5 severities. The cut in blade B varied from 6 cm to 30 cm in 5 severities. 60 datasets were acquired, of which 30 were from the baseline state and 6 from each damaged state. Some novelty detection results are shown in <link linkend="F47">Figure <xref linkend="F47" remap="47"/></link>.</para>
<fig id="F47" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 47</label>
<caption><para><emphasis>AUC</emphasis> histograms for the two feature models from the experimental data (paper IV)</para></caption>
<graphic xlink:href="graphics/fig47.jpg"/>
</fig>
<para>The histograms are of the <emphasis>AUC</emphasis> outcome, produced by randomly sampling training data from the full set of baseline data and using the remaining data for testing. This makes the estimates conditional on the full dataset and not just on a subset of it. The estimators cannot be considered unbiased with respect to the actual detection performance, as the underlying tests represent sampling from the <emphasis>input model</emphasis> of time variant effects (i.e. non-stationary), as I described in <link linkend="sec3.2.1">section <xref linkend="sec3.2.1" remap="3.2.1"/></link> (p.35). From the results, the best performing algorithm appears to be application specific. Of the novelty algorithms, MSD and PCA performed best.</para>
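<para>The resampling scheme behind these histograms can be sketched as follows. This is a minimal illustration only, using the MSD novelty measure on toy Gaussian features; the feature values, dimensions and the mean shift are illustrative stand-ins, not the experimental data.</para>

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores_neg, scores_pos):
    """Empirical AUC: probability that a damaged-state score exceeds a baseline score."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

def msd(train, test):
    """Mahalanobis squared distance of test features to the training distribution."""
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    d = test - mu
    return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

# toy stand-ins for 30 baseline and 30 damaged feature vectors
baseline = rng.normal(0.0, 1.0, size=(30, 4))
damaged = rng.normal(0.8, 1.0, size=(30, 4))

aucs = []
for _ in range(200):
    # randomly split the baseline data into training and testing subsets
    idx = rng.permutation(30)
    train, test_b = baseline[idx[:20]], baseline[idx[20:]]
    aucs.append(auc(msd(train, test_b), msd(train, damaged)))
aucs = np.array(aucs)  # a histogram of these reproduces the style of Figure 47
```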
</section>
</section>
</section>
<section class="lev1" id="sec3.4" label="3.4" xreflabel="sec3.4">
<title>Damage localization using classification</title>
<para>To refresh where I set out: the first question was: &#x201C;is there damage?&#x201D; As no sensors measure damage, we can never answer this with certainty. However, using the novelty detection approaches from the previous section, we can answer a surrogate question: &#x201C;does the data look strange?&#x201D; As I showed earlier, decision-making regarding O&#x0026;M can just as well be based on answering such a question, as long as the cost of erroneous output is included.</para>
<para>We now come to the second question: &#x201C;where is the damage?&#x201D; I call this topic localization and apply the framework of classification. As opposed to clustering, which identifies clusters in the data and assigns data points to them, classification assigns labels to the data. This is useful for localization, as the label assigned to the data can represent an area of the structure. In <link linkend="F48">Figure <xref linkend="F48" remap="48"/></link> below, data from damaged states in 6 different regions of a structure are seen to cluster within their classes.</para>
<fig id="F48" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 48</label>
<caption><para>Mapping of FE eigenfrequencies for blade A used in papers IV and V unto the three largest principal components reveals how the data clusters within the classes</para></caption>
<graphic xlink:href="graphics/fig48.jpg"/>
</fig>
<para>Naturally, as mentioned before, classification is a substitute for likelihood testing using the full joint CDF. The substitution is required because data for estimating the full joint CDF are unavailable. As a classifier outputs discrete labels, it is implicitly a decision function. If the outputs are probabilities<footnote id="fn8" label="8"><para>Or can be transformed to probabilities using one of the various methods discussed in Wu et al. [<link linkend="B116">116</link>].</para></footnote>, they can be treated as likelihoods of the data. Adding prior probabilities and a cost matrix yields an approximate multiclass Bayes detector. Some algorithms are natively multiclass, e.g. LDA, MLP and NB, while others, e.g. SVM, are natively binary but can be transformed into multiclass classifiers using One-Versus-All (OVA) or All-Versus-All (AVA) methods. Bishop [<link linkend="B43">43</link>] provides the background.</para>
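<para>The construction of the approximate multiclass Bayes detector can be sketched in a few lines. All numbers below (likelihoods, priors and the cost matrix) are illustrative, not values from the thesis:</para>

```python
import numpy as np

# hypothetical classifier output, treated as likelihoods of J = 3 damage classes
likelihood = np.array([0.2, 0.5, 0.3])
prior = np.array([0.6, 0.3, 0.1])            # prior class probabilities

posterior = likelihood * prior               # Bayes' rule
posterior /= posterior.sum()

# cost matrix C[k, j]: cost of decision d_k when the true state is theta_j
# (rows: decisions, columns: states; values are illustrative only)
C = np.array([[0.0, 10.0, 10.0],             # do nothing
              [1.0,  0.0,  5.0],             # inspect region 1
              [1.0,  5.0,  0.0]])            # inspect region 2

expected_cost = C @ posterior                # E[C | d_k] for each decision
d_opt = int(np.argmin(expected_cost))        # the approximated Bayes decision
```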
<section class="lev2">
<title>Localization approach</title>
<para>Classification is supervised learning and requires data belonging to each label. For a numerical simulation it is easy to produce the training data, but for a realistic experimental case, data belonging to the damaged states do not exist. To overcome this, a novel approach was suggested in paper V: having only baseline data and a reasonable response model, damage localization is possible under the following assumptions:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>The response model is <emphasis>biased relative to the experimental data in the baseline state</emphasis></para></listitem>
<listitem><para><emphasis>Damage causes a migration of the mean of the feature distribution</emphasis></para></listitem>
</itemizedlist>
<para>Both assumptions are based on the response model framework discussed in <link linkend="sec3.2.1">section <xref linkend="sec3.2.1" remap="3.2.1"/></link> (p.35). It is thus assumed that the response covariance is independent of the state. The concept is sketched for a 1-dimensional Gaussian feature in <link linkend="F49">Figure <xref linkend="F49" remap="49"/></link>.</para>
<fig id="F49" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 49</label>
<caption><para>Concept used to synthesize data for localization using classification (paper V)</para></caption>
<graphic xlink:href="graphics/fig49.jpg"/>
</fig>
<para>The figure above sketches the estimation of p(<emphasis>x<subscript>fe,eq</subscript></emphasis> &#x007C; <emphasis>&#x03B8;<subscript>d</subscript></emphasis>) from the experimental covariance &#x03A3;<emphasis><subscript>exp</subscript></emphasis>, the bias = <emphasis>&#x03BC;<subscript>fe,b</subscript></emphasis> &#x2013; <emphasis>&#x03BC;<subscript>exp,b</subscript></emphasis> and the finite change &#x0394;<emphasis>&#x03BC;</emphasis> = <emphasis>&#x03BC;<subscript>fe,d</subscript></emphasis> &#x2013; <emphasis>&#x03BC;<subscript>fe,b</subscript></emphasis>. The flow chart in <link linkend="F50">Figure <xref linkend="F50" remap="50"/></link> visualizes the process of damage localization using statistical pattern recognition and the concept just described. It is a 3-stage process: in the 1<superscript>st</superscript> step, the FE model is calibrated to the modal data and the covariance matrix of the experimental modal parameters is estimated. In the 2<superscript>nd</superscript> step, the damage is modelled and the classifier is trained using data simulated with the FE model. In the 3<superscript>rd</superscript> step, the new (testing) data is classified.</para>
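<para>The data synthesis for one damage class can be sketched as below. The means, covariance and sample size are hypothetical placeholders; the point is only the combination of experimental covariance, removed model bias and FE-predicted mean shift:</para>

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative quantities (2-dimensional feature; all values are hypothetical)
mu_exp_b = np.array([10.0, 25.0])    # experimental baseline feature mean
mu_fe_b  = np.array([10.5, 26.0])    # FE model baseline mean (biased)
mu_fe_d  = np.array([10.3, 25.4])    # FE model mean for one damage class
cov_exp  = np.array([[0.04, 0.01],
                     [0.01, 0.09]])  # experimental baseline covariance

bias     = mu_fe_b - mu_exp_b        # model bias, removed from FE predictions
delta_mu = mu_fe_d - mu_fe_b         # damage-induced mean shift predicted by FE

# synthesize training data for the damaged class: experimental mean and
# covariance, shifted by the FE-predicted damage effect
mu_d = mu_exp_b + delta_mu
train_d = rng.multivariate_normal(mu_d, cov_exp, size=500)
```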
<fig id="F50" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 50</label>
<caption><para>Flow chart for damage localization using classifications (paper V)</para></caption>
<graphic xlink:href="graphics/fig50.jpg"/>
</fig>
<para>The following classification algorithms were applied:</para>
</section>
<section class="lev2">
<title>Linear Discriminant Analysis</title>
<para>Linear Discriminant Analysis (LDA) is also known as Fisher&#x2019;s discriminant analysis. It can be considered the supervised version of PCA, because the data are projected into a lower-dimensional space using eigenvectors of sample scatter matrices. The goal of LDA is to project into a space where the data are linearly separable. Thus, LDA cannot be expected to perform well when the discriminatory information is not in the mean of the data or when the data are significantly non-Gaussian.</para>
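<para>For two classes, the Fisher direction has a closed form, which the following numpy sketch computes on toy Gaussian data (class means and spreads are illustrative):</para>

```python
import numpy as np

rng = np.random.default_rng(2)

def fisher_lda_direction(X0, X1):
    """Fisher discriminant direction w = Sw^-1 (mu1 - mu0) for two classes."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter matrix
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

# two Gaussian classes separated in their means (toy data)
X0 = rng.normal([0, 0], 0.5, size=(100, 2))
X1 = rng.normal([2, 1], 0.5, size=(100, 2))
w = fisher_lda_direction(X0, X1)

# project onto w; a midpoint threshold then separates the classes
z0, z1 = X0 @ w, X1 @ w
threshold = 0.5 * (z0.mean() + z1.mean())
accuracy = 0.5 * (np.mean(z0 < threshold) + np.mean(z1 >= threshold))
```

As the discriminatory information here lies entirely in the class means, the projection separates the classes almost perfectly, consistent with the remark above.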
</section>
<section class="lev2">
<title>Multi-Layer Perceptron</title>
<para>The Neural Network Multi-Layer Perceptron (MLP) is perhaps the most widely used pattern recognition algorithm for damage detection and localization, see e.g. Doebling et al. [<link linkend="B108">108</link>] and Sohn et al. [<link linkend="B3">3</link>]. The full theory is given e.g. in Bishop [<link linkend="B109">109</link>]. An MLP consists of layers of nodes, where the output of one layer serves as the input to the next. The simplest MLP has an input layer, a hidden layer and an output layer; this is called a two-layer perceptron. The input layer consists of a number of nodes corresponding to the number of inputs. Thus, for a damage sensitive feature of dimension <emphasis>d</emphasis>, the input layer has <emphasis>d</emphasis> nodes. The number of nodes in the output layer is equal to the number of class labels, e.g. the number of damage locations, severities, etc. In the hidden layer, the nodes are sigmoid functions. These functions contain weights and biases that are trained on the data. I apply the scaled conjugate gradient backpropagation method in the Matlab Neural Network toolbox for training of pattern recognition MLPs.</para>
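<para>The two-layer structure can be illustrated by its forward pass alone (training, e.g. by scaled conjugate gradient, is omitted). The layer sizes and random weights below are purely illustrative:</para>

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer perceptron: d inputs -> H sigmoid hidden units -> K softmax outputs."""
    h = sigmoid(W1 @ x + b1)   # hidden layer activations
    a = W2 @ h + b2            # output pre-activations
    e = np.exp(a - a.max())    # softmax, numerically stable
    return e / e.sum()         # class probabilities (sum to 1)

d, H, K = 6, 10, 4             # feature dim, hidden units, damage classes
W1, b1 = rng.normal(size=(H, d)), np.zeros(H)
W2, b2 = rng.normal(size=(K, H)), np.zeros(K)

p = mlp_forward(rng.normal(size=d), W1, b1, W2, b2)
label = int(np.argmax(p))      # assigned class, e.g. a damage location
```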
</section>
<section class="lev2">
<title>Support Vector Machines</title>
<para>Support Vector Machines (SVM) attempt to maximize the margin between the decision boundary and the data of each class closest to the boundary, rather than optimizing classification performance on the training data, as is the case for the MLP. The data points on the margin are the support vectors. Classification performance on non-linearly separable data is increased through the use of a kernel, i.e. a function that maps the data into a higher-dimensional space. SVMs are inherently binary classifiers, but the open source Matlab SVM toolbox LIBSVM [<link linkend="B110">110</link>], which was used in paper V, implements the one-against-one algorithm of Knerr et al. [<link linkend="B111">111</link>] for multiclass classification.</para>
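<para>The one-against-one voting scheme itself is independent of the underlying binary classifier. The sketch below illustrates the majority vote with a toy pairwise rule (nearest of two class centroids) standing in for trained binary SVMs; the class labels and centroids are hypothetical:</para>

```python
import numpy as np
from itertools import combinations

def one_against_one_predict(x, classes, binary_predict):
    """Majority vote over all K(K-1)/2 pairwise binary classifiers.

    binary_predict(x, i, j) returns the winning class label, i or j."""
    votes = np.zeros(len(classes), dtype=int)
    for i, j in combinations(range(len(classes)), 2):
        winner = binary_predict(x, classes[i], classes[j])
        votes[classes.index(winner)] += 1
    return classes[int(np.argmax(votes))]

# toy pairwise rule standing in for a trained binary SVM
centroids = {'A': np.array([0.0, 0.0]),
             'B': np.array([3.0, 0.0]),
             'C': np.array([0.0, 3.0])}

def nearest_of_pair(x, ci, cj):
    di = np.linalg.norm(x - centroids[ci])
    dj = np.linalg.norm(x - centroids[cj])
    return ci if di <= dj else cj

label = one_against_one_predict(np.array([2.8, 0.2]), ['A', 'B', 'C'], nearest_of_pair)
```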
</section>
<section class="lev2">
<title>Na&#x00EF;ve Bayes</title>
<para>Na&#x00EF;ve Bayes (NB) assumes conditional independence between features and uses Bayes&#x2019; rule to calculate the posterior probability of the classes. Labels are selected according to the maximum a posteriori rule. As the feature likelihood is estimated from the training data, typically using a maximum likelihood estimator, knowledge of the underlying distribution is required. NB cannot be expected to perform satisfactorily when the features are significantly correlated or when the assumed distributions fit the data poorly.</para>
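<para>A Gaussian variant of NB with maximum likelihood estimates and the MAP rule can be sketched in a few lines; the two-class toy data below are illustrative:</para>

```python
import numpy as np

rng = np.random.default_rng(4)

def gnb_fit(X, y, n_classes):
    """Per-class, per-feature Gaussian ML estimates (conditional independence)."""
    mus = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    sigmas = np.array([X[y == c].std(axis=0) for c in range(n_classes)])
    priors = np.array([np.mean(y == c) for c in range(n_classes)])
    return mus, sigmas, priors

def gnb_predict(x, mus, sigmas, priors):
    """Maximum a posteriori label under the naive independence assumption."""
    log_post = np.log(priors) + np.sum(
        -0.5 * ((x - mus) / sigmas) ** 2 - np.log(sigmas), axis=1)
    return int(np.argmax(log_post))

# toy two-class data
X = np.vstack([rng.normal([0, 0], 1.0, size=(200, 2)),
               rng.normal([3, 3], 1.0, size=(200, 2))])
y = np.repeat([0, 1], 200)

mus, sigmas, priors = gnb_fit(X, y, 2)
pred = gnb_predict(np.array([2.7, 3.2]), mus, sigmas, priors)
```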
</section>
<section class="lev2">
<title>Principal results</title>
<para>Damage was introduced as a 2% reduction of all stiffness properties within a finite region. The selected regions are shown in <link linkend="F51">Figure <xref linkend="F51" remap="51"/></link>.</para>
<fig id="F51" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 51</label>
<caption><para>The FE model used for localization. The red areas are weakened by 2 %</para></caption>
<graphic xlink:href="graphics/fig51.jpg"/>
</fig>
<para>Three types of features were tested, and eigenfrequencies were found to give the best performance. In combination with a linear-kernel SVM, the classification rates were 85% for FE and 80% for experimental data, although significant statistical uncertainty is associated with this result, due to the very sparse testing data. Both a two-step and a one-step approach were considered; in the latter, the baseline was an additional class. To compare the approaches effectively, the Value of Information (VoI) was calculated using the experimental testing data in an MCS setup. The MCS approach thus implicitly accounts for the (unknown) correlation between detection and localization algorithms. While using FE data would give slightly better performance, experimental data were used as these more likely reflect actual performance. For the two-step approach, the prior probability for the Bayesian detector is given as the ratio of damaged to baseline datasets. The results are shown in <link linkend="F52">Figure <xref linkend="F52" remap="52"/></link>.</para>
<fig id="F52" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 52</label>
<caption><para>Value of Information (VoI) analysis of the three detection approaches for varying costs of inspection, using experimental data (paper V)</para></caption>
<graphic xlink:href="graphics/fig52.jpg"/>
</fig>
<para>From the right-hand graphs, the <emphasis>value of localization</emphasis> can be found directly as the difference between the detection and two-step approaches. The VoI is seen to depend strongly on the ratio of inspection to failure costs.</para>
</section>
</section>
<section class="lev1" id="sec3.5" label="3.5" xreflabel="sec3.5">
<title>Sequential decision-making</title>
<para>The Bayes detector provides the decision that minimizes expected cost for the static detector. We now move on to the time domain, where many sensing instances are planned and sequential decision-making is relevant. We consider the following approaches:</para>
<itemizedlist mark="bullet" spacing="normal">
<listitem><para>Sequential Bayes detector (Bayesian filter)</para></listitem>
<listitem><para>RBI approach, using structural reliability methods</para></listitem>
<listitem><para>Fixed decision policies</para></listitem>
<listitem><para>Influence diagrams</para></listitem></itemizedlist>
<section class="lev2" id="sec3.5.1" label="3.5.1" xreflabel="sec3.5.1">
<title>Sequential Bayes detector</title>
<para>In the previous sections, statistical models of damage sensitive features and decision-making have been introduced. In the localization example, which I just presented, the value of information was calculated using various SHM approaches for a &#x201C;static&#x201D; detection problem. To facilitate the transition to the time domain, I first generalize the expressions for a static detector: a detector <emphasis>&#x03B4;</emphasis>(<emphasis>x</emphasis>) transforms features <emphasis>x</emphasis> &#x02208; <emphasis>X</emphasis> to discrete decisions <emphasis>d<subscript>k</subscript></emphasis>:</para>
<equation id="52"><graphic xlink:href="graphics/eq52.jpg"/></equation>
<para>For discrete states <emphasis>&#x03B8;<subscript>j</subscript></emphasis> &#x2208; <emphasis>&#x0398;, j</emphasis> = 1,2,&#x2026;,<emphasis>J</emphasis> and outputs <emphasis>d<subscript>k</subscript></emphasis> &#x2208; <emphasis>D, k</emphasis> = 1,2,&#x2026;,<emphasis>K</emphasis>. The expected cost E[<emphasis>C</emphasis>] for a static detector is:</para>
<equation id="53"><graphic xlink:href="graphics/eq53.jpg"/></equation>
<para>Where <emphasis>C<superscript>ter</superscript></emphasis> is the cost matrix (e.g. <link linkend="T4">Table <xref linkend="T4" remap="4"/></link>, p.30) with rows corresponding to detector outputs and columns corresponding to states.</para>
<para>I now roll the problem out as sequential decision-making in the time domain and consider the case where multiple sensing instances are planned at times <emphasis>t</emphasis> = <emphasis>t<subscript>i</subscript></emphasis>, <emphasis>i=1,2,&#x2026;,N</emphasis>. It is intuitive to discretize the time <emphasis>t</emphasis> into corresponding time slices <emphasis>t<subscript>1</subscript>, t<subscript>2</subscript>,&#x2026;,t<subscript>N</subscript></emphasis>. At each time slice, the SHM system acquires features <emphasis>x<subscript>i</subscript></emphasis> and selects <emphasis>d<subscript>i</subscript></emphasis>. As the decision at time <emphasis>t<subscript>i</subscript></emphasis> has its payoff at <emphasis>t<subscript>i+1</subscript></emphasis>, the probabilities must refer to the time interval [<emphasis>t<subscript>i</subscript></emphasis>, <emphasis>t<subscript>i+1</subscript></emphasis>]. Defining &#x0394;P<subscript>i</subscript>() = P<subscript>i+1</subscript>() &#x2013; P<subscript>i</subscript>(), the expected costs are given by:</para>
<equation id="54"><graphic xlink:href="graphics/eq54.jpg"/></equation>
<para>Where <emphasis>t<subscript>i</subscript></emphasis> is inserted in years. The expression is similar to the static case, but where the terminal costs are summed over all time intervals, and the interest rate <emphasis>r</emphasis> is included.</para>
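<para>Numerically, the discounted summation of terminal costs over the time intervals amounts to the following sketch. The interval probabilities and costs below are hypothetical placeholders; in the thesis they follow from the detector &#x03B4;(<emphasis>x</emphasis>) and the damage-evolution model:</para>

```python
import numpy as np

# hypothetical two-outcome setup over N yearly sensing instances
N, r = 20, 0.05                         # sensing instances, annual interest rate
t = np.arange(1, N + 1)                 # decision times t_i in years

# illustrative interval probabilities dP_i of (decision, state) combinations
dP_failure = np.full(N, 0.001)          # P(failure in [t_i, t_i+1])
dP_false_alarm = np.full(N, 0.02)       # P(unneeded repair in the interval)

C_fail, C_rep = 1000.0, 10.0            # terminal costs (illustrative)

discount = (1.0 + r) ** (-t)            # discounting with interest rate r
expected_cost = np.sum((C_fail * dP_failure + C_rep * dP_false_alarm) * discount)
```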
<para>The decision that minimizes expected cost is the sequential Bayes detector:</para>
<equation id="55"><graphic xlink:href="graphics/eq55.jpg"/></equation>
<para><emphasis>&#x03B4;<subscript>opt</subscript></emphasis>(<emphasis>x</emphasis>) is a time variant set of hyper-surfaces in the feature space, making the Probability of Detection (PoD) time variant.</para>
<para>The posterior &#x0394;P<subscript>i</subscript>(<emphasis>&#x03B8;<subscript>i</subscript></emphasis> &#x007C; <emphasis>x<subscript>1:i</subscript></emphasis>) is obtained from a sequential Bayesian filter, which estimates the current state given past and present observations. In paper I, an MCS time-series simulation approach was used to compare the Bayes detector with various static threshold detectors, including static thresholds on the observations and on an Exponentially Weighted Moving Average (EWMA) of the observations.</para>
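<para>For a binary damage state, one recursion of such a filter can be sketched as below. This is a minimal illustration, not the implementation of paper I; the transition probability and the two Gaussian feature likelihoods are hypothetical:</para>

```python
import numpy as np

def bayes_filter_step(p_damaged, x, p_transition, lik_h0, lik_h1):
    """One recursive update of Pr(damaged | x_1:i).

    p_transition: probability that damage initiates during the time step;
    lik_h0 / lik_h1: feature likelihoods under the intact / damaged state."""
    # predict: damage can initiate but is not repaired
    prior = p_damaged + (1.0 - p_damaged) * p_transition
    # update with the new observation x
    num = lik_h1(x) * prior
    den = num + lik_h0(x) * (1.0 - prior)
    return num / den

# illustrative Gaussian feature: mean 0 when intact, mean 1.5 when damaged
lik_h0 = lambda x: np.exp(-0.5 * x ** 2)
lik_h1 = lambda x: np.exp(-0.5 * (x - 1.5) ** 2)

p = 0.01                                 # initial damage probability
for x in [0.1, -0.2, 1.4, 1.6, 1.5]:     # observations drifting to the damaged mean
    p = bayes_filter_step(p, x, 0.001, lik_h0, lik_h1)
```

The low prior keeps the posterior small even after several damaged-looking observations, which is exactly the filter lag discussed next.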
<para>A weakness of the sequential Bayesian filter appears when the damage evolution follows an exponential law. Unless a limited history of observations is used, the prior tends towards Pr(<emphasis>H<subscript>0</subscript></emphasis> = 1). This creates a substantial filter lag, and due to the accelerating growth rate of the damage, the detector may fail to react before the damage becomes critical.</para>
</section>
<section class="lev2" id="sec3.5.2" label="3.5.2" xreflabel="sec3.5.2">
<title>RBI approach to SHM</title>
<para>Optimally, decisions on O&#x0026;M actions should be risk-based, using pre-posterior decision analysis. This approach has been used for decades in RBI, using parallel, mutually exclusive events and structural reliability methods for the risk calculations. I investigate whether the approach can be transferred directly to risk-based detector design.</para>
<para>Manual inspection performance is modelled by a PoD curve, known from detection theory. The PoD is the CDF of the detectable damage size, F<emphasis><subscript>ad</subscript></emphasis>(<emphasis>a<subscript>d</subscript></emphasis>), estimated from numerous experiments. It is possible to derive PoD curves for damage detection by using the four detector outcomes and the relationship for the Probability of Indication (PoI), <emphasis>P<subscript>11</subscript></emphasis>(<emphasis>a</emphasis>):</para>
<equation id="56"><graphic xlink:href="graphics/eq56.jpg"/></equation>
<para>If <emphasis>P<subscript>10</subscript></emphasis> and PoD are independent (<emphasis>&#x03C1;</emphasis>=0), which seems like a reasonable assumption:</para>
<equation id="57"><graphic xlink:href="graphics/eq57.jpg"/></equation>
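<para>Numerically, the independence case can be checked against the limits quoted below, namely <emphasis>P<subscript>11</subscript></emphasis>(<emphasis>a<subscript>d</subscript></emphasis> &#x2192; 0) = <emphasis>P<subscript>10</subscript></emphasis> and <emphasis>P<subscript>11</subscript></emphasis>(<emphasis>a<subscript>d</subscript></emphasis> &#x2192; &#x221E;) = 1. The sketch assumes the standard union form P<subscript>11</subscript> = P<subscript>10</subscript> + (1 &#x2013; P<subscript>10</subscript>)PoD and an illustrative lognormal PoD curve; both the form and the parameters are assumptions for the example:</para>

```python
from math import erf, log, sqrt

P10 = 0.05                               # false-alarm probability (illustrative)

def pod(a_d, mu=1.0, sigma=0.5):
    """Illustrative lognormal PoD curve F_ad(a_d)."""
    return 0.5 * (1.0 + erf((log(a_d) - mu) / (sigma * sqrt(2.0))))

def poi(a_d):
    """P11(a) = P10 + (1 - P10) * PoD(a): indication as the union of an
    independent false alarm and a true detection."""
    return P10 + (1.0 - P10) * pod(a_d)

limits = (poi(1e-9), poi(1e9))           # approaches (P10, 1)
```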
<para>An example of the detector outcomes is given in <link linkend="F53">Figure <xref linkend="F53" remap="53"/></link> :</para>
<fig id="F53" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 53</label>
<caption><para>PoD and the four detector outcomes from simulated damage detection, averaged over 8 damage locations using the <emphasis>damage vector</emphasis> feature (paper III)</para></caption>
<graphic xlink:href="graphics/fig53.jpg"/>
</fig>
<para>Observe that <emphasis>P<subscript>11</subscript></emphasis>(<emphasis>a<subscript>d</subscript></emphasis> &#x2192; 0) = <emphasis>P<subscript>10</subscript></emphasis> and <emphasis>P<subscript>11</subscript></emphasis>(<emphasis>a<subscript>d</subscript></emphasis> &#x2192; &#x221E;) = 1. With the PoD, it should be straightforward to use the SHM output and structural reliability methods to update the fatigue reliability. The significant differences are:</para>
<orderedlist numeration="loweralpha" continuation="restarts" spacing="normal">
<listitem><para>The level of uncertainty is much higher than for inspections</para></listitem>
<listitem><para>The PoD of SHM is conditional on damage location, meaning that one PoD curve must be calculated for each detail in the structure and that the limit states of all details in the structure must be updated when new SHM output is available.</para></listitem>
<listitem><para>The dependencies between the detection performance of damage locations are unknown</para></listitem>
<listitem><para>The dependencies between the detection performance of sensing instances are unknown</para></listitem>
</orderedlist>
<para>Regarding d), it is usual in inspection updating to assume independence between sequential inspections, even though Straub &#x0026; Faber [<link linkend="B112">112</link>] showed that there is some correlation. For SHM damage detection, e.g. the technologies presented in paper IV, the dependencies of the probability of detection between sensing instances (in time) have not been investigated, but as a first assumption they could be considered independent.</para>
<para>Regarding c), for damage localizing SHM systems, e.g. the one presented in paper V, the dependencies between the probabilities of localization for the potential damage locations have not been investigated, but as a first assumption they could also be considered independent.</para>
<para>So far, we have not encountered any restriction on applying the RBI approach. I now consider the calculation aspects of an RBI approach. In RBI, the optimization variables are the inspection times. For SHM design, the number of sensing instances will be restricted by the economics of operation, whereas the free optimization variables relate to the detector decision function <emphasis>&#x03B4;</emphasis>(<emphasis>x</emphasis>). Unlike the PoD of inspections, the PoD of SHM is a function of <emphasis>&#x03B4;</emphasis>(<emphasis>x</emphasis>). If a Bayesian decision approach is employed, then <emphasis>&#x03B4;</emphasis>(<emphasis>x</emphasis>) is inherently time-variant, and an iterative optimization procedure is unavoidable.</para>
<fig id="F54" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 54</label>
<caption><para>An event tree of a simple decision strategy used in RBI: inspection times are fixed and indication triggers a repair which is assumed to restore the initial damage distribution (paper II)</para></caption>
<graphic xlink:href="graphics/fig54.jpg"/>
</fig>
<para>Such a decision strategy is not economical for SHM, as high false positive rates and the lack of (exact) localization information will lead to high expected repair costs.</para>
<para>The approach is tractable for the relatively few inspection events in RBI, but calculation becomes intractable for frequent sensing: for the 480 system runs used in paper I, 10<superscript>145</superscript> parallel systems would need to be analyzed. Instead, I look to other approximations.</para>
</section>
<section class="lev2" id="sec3.5.3" label="3.5.3" xreflabel="sec3.5.3">
<title>Fixed decision policies</title>
<para>A simple approximation is to use fixed, or static, decision policies. These include thresholds on observed quantities, e.g. damage indicators, or on updated quantities, e.g. the failure probability. This greatly reduces the number of optimization parameters, but does not utilize decision theory and cannot be expected to perform as well as a Bayesian detector. However, fixed policies have advantages in terms of simplicity and computational cost and, as we saw in paper I, may even prove more efficient than a detector based on sequential Bayesian filtering. The outcomes of three detectors applied to the exact same realizations of the observable variable and crack growth are shown in <link linkend="F55">Figure <xref linkend="F55" remap="55"/></link>.</para>
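<para>One such fixed policy, a static threshold on an EWMA of the damage indicator, can be sketched as follows. The smoothing factor, threshold and the synthetic indicator signal are illustrative, not values from paper I:</para>

```python
import numpy as np

rng = np.random.default_rng(5)

def ewma_detect(x, lam=0.2, threshold=1.0):
    """Fixed decision policy: alarm when the EWMA of the damage indicator
    first crosses a static threshold; returns the alarm index or None."""
    z = 0.0
    for i, xi in enumerate(x):
        z = lam * xi + (1.0 - lam) * z
        if z > threshold:
            return i
    return None

# indicator: noise around zero, then a slowly growing damage signal (toy data)
signal = np.concatenate([np.zeros(100), np.linspace(0.0, 3.0, 100)])
x = signal + rng.normal(0.0, 0.5, size=200)

alarm = ewma_detect(x)
```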
<fig id="F55" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 55</label>
<caption><para>Random realization of crack growth and the corresponding decision histories of 3 approaches to detection (paper I)</para></caption>
<graphic xlink:href="graphics/fig55.jpg"/>
</fig>
<para>A fixed decision rule outperformed the Bayes detector in paper I, due to the updating lag described in a previous section. As the damage-growth velocity increases, the failure probability is biased by the low prior. To reduce this effect, a reduced &#x2018;updating window&#x2019; of length <emphasis>L<subscript>w</subscript></emphasis> was used. The value of <emphasis>L<subscript>w</subscript></emphasis> that gave the best performance was 5 time steps. This conclusion questions the use of a sequential Bayes detector for damage detection purposes; a similar conclusion was reached by Beck et al. [<link linkend="B113">113</link>], where it was observed that a larger <emphasis>L<subscript>w</subscript></emphasis> increases the reaction time of the detector. The addition of <emphasis>L<subscript>w</subscript></emphasis> as an optimization variable reduces the advantage over static detectors. The optimum of <emphasis>L<subscript>w</subscript></emphasis> was found to be rather flat, indicating that a rational choice of <emphasis>L<subscript>w</subscript></emphasis> could suffice for implementation.</para>
</section>
<section class="lev2" id="sec3.5.4" label="3.5.4" xreflabel="sec3.5.4">
<title>Influence diagrams</title>
<para>As the numerical burden of the MCS optimizations described in the previous section is very large, an alternative is investigated in the form of influence diagrams. Influence diagrams can model the whole decision problem, taking all dependencies into account.</para>
<para>An influence diagram is a Bayesian net extended with decision and utility nodes, intended to solve decision problems. The influence diagram in <link linkend="F56">Figure <xref linkend="F56" remap="56"/></link> models the pre-posterior decision analysis. The node <emphasis>z</emphasis> models the decision whether to perform the experiment, and <emphasis>u<subscript>e</subscript></emphasis> models the negated experimental cost function <emphasis>C<superscript>exp</superscript></emphasis>. The node <emphasis>x</emphasis> models the outcome of the experiment. The node <emphasis>d</emphasis> models the decision to be made after the experiment has been performed, the node <emphasis>&#x03B8;</emphasis> models the unknown state and the node <emphasis>u<superscript>t</superscript></emphasis> models the negated terminal cost function <emphasis>C<superscript>ter</superscript></emphasis>.</para>
<fig id="F56" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 56</label>
<caption><para>Influence diagram of Bayesian Experimental Design (BED). Following common notation chance nodes are round, decision nodes are square and utility nodes are diamond shaped (paper VI)</para></caption>
<graphic xlink:href="graphics/fig56.jpg"/>
</fig>
<para>The utility nodes each contain a utility table with one value for each possible configuration of their parents. The utility nodes are childless. The parents of each decision node are all the nodes that provide past evidence for the decision. The decision nodes contain policy tables, with one value for each possible configuration of the parents. There is a directed path through all decision nodes, indicating a sequence of decisions. Links to decision nodes are called information links &#x2013; when a decision is made, the states of all parents are known. In the above example of an influence diagram, the only node that can receive evidence (be observed) is <emphasis>x</emphasis>, and the policy for the following decision <inline-graphic xlink:href="graphics/inline5.jpg"/> is a 1-dimensional table. When the influence diagram models a sequence of decisions in time, each decision has every previous observable node as a parent, making the decision problem intractable.</para>
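<para>For a problem this small, the pre-posterior analysis can be solved by brute-force enumeration, which the sketch below illustrates for a hypothetical two-state, two-outcome case (all probabilities and costs are invented for the example):</para>

```python
import numpy as np

# hypothetical two-state (intact, damaged), binary-outcome pre-posterior problem
p_theta = np.array([0.9, 0.1])               # prior on theta
p_x_given_theta = np.array([[0.9, 0.1],      # P(x | theta): rows = theta,
                            [0.2, 0.8]])     # columns = outcome (no ind., ind.)
C_ter = np.array([[0.0, 100.0],              # rows: do nothing / repair,
                  [10.0, 10.0]])             # columns: intact / damaged
C_exp = 1.0                                  # cost of running the experiment

# z = no experiment: best single decision against the prior
cost_no = min(C_ter @ p_theta)

# z = experiment: for each outcome x, the posterior gives the best decision d
p_x = p_theta @ p_x_given_theta              # marginal outcome probabilities
cost_yes = C_exp
for k in range(2):
    post = p_theta * p_x_given_theta[:, k] / p_x[k]
    cost_yes += p_x[k] * min(C_ter @ post)

value_of_information = cost_no - cost_yes    # net value of the experiment
```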
</section>
<section class="lev2" id="sec3.5.5" label="3.5.5" xreflabel="sec3.5.5">
<title>Limited Memory Influence Diagrams</title>
<para>As the full posterior of an influence diagram quickly becomes intractable, relaxing some assumptions enables a Bayesian filter decision function to be modelled. This includes linear filters. A Bayesian filter was investigated in paper I but, as MCS was used to obtain the posterior, the numerical cost was very large. As the expected costs are needed for the Value of SHM, the following alternative method is investigated.</para>
<para>Limited Memory Influence Diagrams (LIMID), by Lauritzen &#x0026; Nilsson [<link linkend="B114">114</link>], relax the requirement of memory links to all previous nodes that can receive evidence. This makes the solution of the optimal policy tables more tractable, at the cost of approximating the full optimal solution. Typically, only the links to the most recent parents are modelled. LIMIDs were used for engineering decision-making by Nielsen [<link linkend="B48">48</link>] and by Luque &#x0026; Straub [<link linkend="B115">115</link>], both in 2013.</para>
<para>The approximation error was investigated in paper VI for the LIMID in <link linkend="F57">Figure <xref linkend="F57" remap="57"/></link>:</para>
<fig id="F57" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 57</label>
<caption><para>Simplified version of the LIMID used in paper VI. A decision of repair is made in time slice 9, based on previous results (paper VI)</para></caption>
<graphic xlink:href="graphics/fig57.jpg"/>
</fig>
<para>The higher the number of parents included, the higher the value of information, at the expense of increased calculation cost.</para>
<fig id="F58" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 58</label>
<caption><para>VoI and relative calculation cost using Bayes Net Toolbox (BNT)</para></caption>
<graphic xlink:href="graphics/fig58.jpg"/>
</fig>
<para>The rate of change of the VoI is seen to decrease exponentially, while the calculation costs (memory and time) increase exponentially, as the number of memory links increases. The calculation with 5 links was the largest possible, as it required more than 60 GB of physical memory in the computer.</para>
<para>Single Policy Updating (SPU) by Lauritzen &#x0026; Nilsson [<link linkend="B114">114</link>] is a solution algorithm for LIMIDs that successively updates the decision policies one at a time until the Maximum Expected Utility (MEU) converges. As the algorithm optimizes one policy at a time, the obtained solution may not be the global optimum, which limits the complexity of decision problems that can be solved with SPU, as was observed by Nielsen [<link linkend="B48">48</link>]. For SHM detection problems, the Partially Observable Markov Decision Process (POMDP) type is the simplest model:</para>
<fig id="F59" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 59</label>
<caption><para>Three time slices of a partially observable Markov decision process</para></caption>
<graphic xlink:href="graphics/fig59.jpg"/>
</fig>
<para>The POMDP is first order Markovian, with inter-slice links only between consecutive slices. The two approaches to modelling SHM damage detection shown in <link linkend="F60">Figure <xref linkend="F60" remap="60"/></link> are suggested.</para>
<fig id="F60" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 60</label>
<caption><para>Two models for SHM decision-making (paper VI)</para></caption>
<graphic xlink:href="graphics/fig60.jpg"/>
</fig>
<para>The models were compared in paper VI. The most effective in terms of the [VoI / (calculation time &#x00D7; memory usage)] ratio is the simple POMDP model, which is a LIMID based on a hidden Markov model (HMM). Memory usage is determined by the size of the largest utility potential. A higher VoI, using the same number of sensing instances, is found for an autoregressive hidden Markov model (AR-HMM), but at the expense of a greatly increased calculation cost (more than 200 GB of physical memory was required).</para>
<section class="lev3">
<title>Example: NREL tower</title>
<para>The results for 100 time slices using the HMM + direct repair, HMM + perfect inspection and AR-HMM + perfect inspection models are shown in <link linkend="F61">Figure <xref linkend="F61" remap="61"/></link>. The top plots visualize the decision policies as a decision threshold and the bottom plots are area plots of the expected cost contributions in each time slice.</para>
<fig id="F61" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 61</label>
<caption><para>Results of LIMIDs for the NREL tower case (paper VI). Top: decision threshold. Bottom: area plots of expected costs at each time slice</para></caption>
<graphic xlink:href="graphics/fig61.jpg"/>
</fig>
<para>The expected costs of the LIMIDs were compared with the results of the numerical investigations performed in relation to paper I. The results are shown in <link linkend="F62">Figure <xref linkend="F62" remap="62"/></link>.</para>
<fig id="F62" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 62</label>
<caption><para>Relative expected costs of LIMIDs compared with MCS results (paper VI)</para></caption>
<graphic xlink:href="graphics/fig62.jpg"/>
</fig>
<para>The approximation is seen to be good, although the sampling results show lower expected costs. This is because decision nodes in the LIMID only take account of their direct parents, and not of all evidence up until the decision. The expected costs of the LIMID thus approximate the actual expected costs, with the advantage that no MCS simulations are required. The expected costs can be approximated more closely using the method discussed by Nielsen [<link linkend="B48">48</link>]: at every time slice the current observable node is instantiated by sampling from its marginal distribution. With the evidence inferred in the current and all previous slices, the current and future decision policies are updated by SPU. The procedure is repeated until all observable nodes are instantiated, and the expected costs, dependent on the observable outcomes, are stored. Repeating this yields an MCS sampling of the expected costs.</para>
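<para>A heavily simplified sketch of such an MCS estimate of the expected costs is given below. For tractability the SPU re-optimization at every slice is replaced by a fixed threshold policy on a one-line Bayes update of the damage belief, and all probabilities and costs are invented for illustration:</para>

```python
import numpy as np

# Simplified MCS of the expected life-cycle costs (illustrative numbers;
# the per-slice SPU re-optimization is replaced by a fixed threshold
# policy on a one-line Bayes update of the damage belief).
rng = np.random.default_rng(0)
p_d, p_f = 0.05, 0.15          # per-slice damage initiation / failure given damage
pod, pfa = 0.7, 0.05           # detector probability of detection / false alarm
C_insp, C_rep, C_fail = 1.0, 10.0, 1000.0

def simulate(n_slices=50, n_samples=5000, threshold=0.5):
    total = 0.0
    for _ in range(n_samples):
        damaged, belief, cost = False, 0.0, 0.0
        for _t in range(n_slices):
            damaged = damaged or rng.random() < p_d
            if damaged and rng.random() < p_f:
                cost += C_fail           # failure between sensing instances
                break
            y = rng.random() < (pod if damaged else pfa)   # SHM indication
            prior = belief + (1 - belief) * p_d            # predict
            lk_d = pod if y else 1 - pod                   # likelihoods of y
            lk_i = pfa if y else 1 - pfa
            belief = prior * lk_d / (prior * lk_d + (1 - prior) * lk_i)
            if belief > threshold:       # indication triggers inspection
                cost += C_insp
                if damaged:              # perfect inspection: repair if damaged
                    cost += C_rep
                    damaged = False
                belief = 0.0
        total += cost
    return total / n_samples

E_cost = simulate()
```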
<para>BNT runs in Matlab, and the transparency and flexibility of the code make it attractive for research purposes, although, at the present time, SPU is costly in both calculation time and memory.</para>
</section>
</section>
</section>
</chapter>
<chapter class="chapter" id="ch04" label="Chapter 4" xreflabel="ch04">
<title>Combined structural / SHM design</title>
<para>It is safe to say that damage detection is valuable when the expected failure costs are high. Let the vector <emphasis>z</emphasis> hold all design variables of the SHM-equipped structure (geometrical dimensions, mean values of material strengths, number of sensors, dynamic range of sensors). The original structural design set is denoted <emphasis>z<subscript>0</subscript></emphasis>.</para>
<para>The expected failure costs are given by the product of <emphasis>P<subscript>f</subscript></emphasis> and <emphasis>C<subscript>f</subscript></emphasis>, which are both functions of <emphasis>z</emphasis>. Since the cost of failure is decided mainly by factors extrinsic to the design variables, e.g. life safety or loss of benefit from operation, we can approximate <emphasis>C<subscript>f</subscript></emphasis> &#x007E; <emphasis>constant</emphasis>. The same applies to inspection and repair costs. The intrinsic costs are the costs directly related to <emphasis>z</emphasis>. Since we are considering the design of new structures in this chapter, they are the initial costs of the structure, <emphasis>C<subscript>ini</subscript></emphasis>. Without SHM, the total expected costs of the structure, E[<emphasis>C</emphasis>]&#x00B4;, are the sum of intrinsic and extrinsic terms:</para>
<equation id="58"><graphic xlink:href="graphics/eq58.jpg"/></equation>
<para>The original optimum of the set of design variables is found by optimization:</para>
<equation id="59"><graphic xlink:href="graphics/eq59.jpg"/></equation>
<para>The expected failure costs are found by marginalizing the time to failure <emphasis>t<subscript>f</subscript></emphasis> out. If there are no scheduled maintenance events, then the expected costs are given by:</para>
<equation id="60"><graphic xlink:href="graphics/eq60.jpg"/></equation>
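<para>Numerically, the marginalization over the time to failure can be sketched as a discretized sum. The Weibull failure-time distribution, discount rate and cost values below are assumptions for illustration only, not thesis values:</para>

```python
import numpy as np

# Discretized marginalization of the failure costs over the time to
# failure, with discounting. Distribution, rate and cost are assumed.
C_f = 1.0e6                   # failure cost
r = 0.05                      # annual discount rate
T_life = 20                   # service life in years
t = np.arange(1, T_life + 1)

k, lam = 2.0, 40.0            # assumed Weibull time-to-failure parameters
cdf = 1.0 - np.exp(-((t / lam) ** k))
p_tf = np.diff(np.concatenate(([0.0], cdf)))   # P(t-1 < t_f <= t)

E_Cf = float(C_f * np.sum(p_tf * (1 + r) ** (-t)))   # expected failure cost
```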
<para>If the cost of making the right decision is zero (<emphasis>C<subscript>rep</subscript></emphasis> = 0), and perfect information is assumed (i.e. <emphasis>P<subscript>01</subscript> = P<subscript>10</subscript></emphasis> = 0) then there are only two contributions, which are sketched in <link linkend="F63">Figure <xref linkend="F63" remap="63"/></link>.</para>
<fig id="F63" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 63</label>
<caption><para>Expected Value of Perfect Information (EVPI) is the upper bound of the VoI. Before the structure is realized, <emphasis>z</emphasis> is free to be chosen between the simple bounds and the EVPI is large. After the structure is realized <emphasis>z</emphasis> = <emphasis>z<subscript>0</subscript></emphasis> and the EVPI is smaller</para></caption>
<graphic xlink:href="graphics/fig63.jpg"/>
</fig>
<para>The Expected Value of Perfect Information (EVPI) is the upper bound of the VoI for the SHM system, as it corresponds to the perfect detector. Naturally, <emphasis>C<subscript>rep</subscript></emphasis> &#x2260; 0, but as <emphasis>C<subscript>rep</subscript> &#x003C;&#x003C; C<subscript>f</subscript></emphasis>, the overall picture is the same. The lower simple bounds on the design variables are typically given by other, non-deterioration-driven limit states, e.g. reliability against extreme loads. The upper bounds may be relevant for structures that cannot be designed as <emphasis>safe-life</emphasis>, i.e. with sufficient reliability without inspections. This is relevant for offshore structures and for components in rotorcraft and avionics.</para>
<para>Inclusion of planned inspections may be an economic alternative to material-bought-safety. The inspection variables (number of inspections, quality of inspections, etc.) are simply included in <emphasis>z</emphasis>. The expected cost relation is visualized in <link linkend="F64">Figure <xref linkend="F64" remap="64"/></link> by fixing initial costs and finding the optimum of the subset <emphasis>z<subscript>insp</subscript></emphasis> of <emphasis>z</emphasis> that is related to inspections.</para>
<fig id="F64" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 64</label>
<caption><para>Expected Value of Perfect Information (EVPI) for the case on planned inspections in the original design</para></caption>
<graphic xlink:href="graphics/fig64.jpg"/>
</fig>
<para>The expected inspection and repair costs are the sum of the expected costs at each sensing instance <emphasis>i</emphasis>, <emphasis>i</emphasis> = 1,2,&#x2026;,<emphasis>N<subscript>t</subscript></emphasis>:</para>
<equation id="61"><graphic xlink:href="graphics/eq61.jpg"/></equation>
<para>For a structure with implemented damage detection, a critical damage is a damage that leads to failure in the time between two sequential sensing instances [<emphasis>t<subscript>i</subscript> , t<subscript>i+1</subscript></emphasis>]. A decision is made at time <emphasis>t<subscript>i</subscript></emphasis>, and if the damage is incorrectly decided non-critical (a false negative), the structure is assumed to fail. Assuming that the sensing instances are frequent, we can approximate the integral over the <emphasis>time to failure</emphasis> by the sum of expected <emphasis>false negative</emphasis> costs. For the time between sensing instances, the expected failure costs are:</para>
<equation id="62"><graphic xlink:href="graphics/eq62.jpg"/></equation>
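<para>A minimal numeric sketch of this sum of expected false-negative costs follows, with assumed values (not thesis values) for the per-instance critical-damage probability, the false-negative rate and the discount rate:</para>

```python
import numpy as np

# Sketch of the sum of expected false-negative costs over sensing instances.
# All values are assumptions for illustration, not thesis numbers.
N_t = 100                      # number of sensing instances
C_f = 1.0e6                    # failure cost
p_crit = np.full(N_t, 1e-4)    # P(critical damage present at instance i)
p_fn = 0.10                    # P(false negative | critical damage)
r = 0.03                       # discount rate per instance
t_i = np.arange(1, N_t + 1)

# E[C_fail] ~ sum_i C_f * P(critical at i) * P(false negative) * discount(t_i)
E_fail = float(np.sum(C_f * p_crit * p_fn * (1 + r) ** (-t_i)))
```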
<para>With SHM, the total expected costs of the structure, E[<emphasis>C</emphasis>]&#x00B4;&#x00B4;, include terms for false positives, true positives and false negatives. Using the BED semantics, the initial costs are experimental costs <emphasis>C<superscript>exp</superscript></emphasis>:</para>
<equation id="63"><graphic xlink:href="graphics/eq63.jpg"/></equation>
<para>If an &#x201C;indication triggers inspection&#x201D; decision strategy is used, then the event costs are known:</para>
<equation id="64"><graphic xlink:href="graphics/eq64.jpg"/></equation>
<para>If the SHM system provides localization as step two of a two-step process, then the first term in the sum is separated into two terms, for correct and incorrect localization. Several methods for calculating the expected costs have been applied in the papers. In paper I, an MCS approach was used. In paper VI, a Bayesian network approach was used.</para>
<para>Returning to BED, the optimal design is given for:</para>
<equation id="65"><graphic xlink:href="graphics/eq65.jpg"/></equation>
<para>This is visualized by the partial decision tree in <link linkend="F65">Figure <xref linkend="F65" remap="65"/></link>.</para>
<fig id="F65" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 65</label>
<caption><para>Partial decision tree for the SHM process. The last 7 nodes are repeated for every instance of sensing</para></caption>
<graphic xlink:href="graphics/fig65.jpg"/>
</fig>
<para>According to BED, the experiment (<emphasis>z</emphasis>) and the terminal decision (<emphasis>d</emphasis>) should be chosen as the set {<emphasis>z,d</emphasis>} that minimizes the total expected costs. This is a two-layer optimization, which can be approached numerically. This was done in paper I, where the choice of <emphasis>z</emphasis> was limited to one SHM technology and three values of material thickness. The detector optimization (<emphasis>d</emphasis>) was discussed in <link linkend="sec3.5.1">section <xref linkend="sec3.5.1" remap="3.5.1"/></link> (p.49).</para>
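<para>The two-layer structure can be sketched as a brute-force grid search over a design variable <emphasis>z</emphasis> (here a hypothetical material thickness) and a detector decision variable <emphasis>d</emphasis> (a detection threshold). The cost model below is entirely made up; only the joint minimization over {<emphasis>z,d</emphasis>} mirrors the BED argument:</para>

```python
import numpy as np

# Two-layer optimization sketch: jointly choose a design variable z
# (material thickness) and a detector variable d (detection threshold).
# The cost model below is entirely hypothetical.
def expected_cost(thickness, threshold):
    C_ini = 100.0 + 50.0 * thickness              # intrinsic cost, linear in z
    p_f = 1e-2 * np.exp(-2.0 * thickness)         # failure probability falls with z
    pod = 1.0 - threshold                         # lower threshold: more detections,
    pfa = 0.5 * (1.0 - threshold)                 # ...but also more false alarms
    C_f, C_insp, n_slices = 1.0e5, 5.0, 50
    E_fail = C_f * p_f * (1.0 - pod)              # undetected (false negative) failures
    E_insp = C_insp * n_slices * pfa              # false-alarm-triggered inspections
    return C_ini + E_fail + E_insp

zs = np.linspace(0.5, 3.0, 26)                    # candidate thicknesses
ds = np.linspace(0.05, 0.95, 19)                  # candidate thresholds
costs = np.array([[expected_cost(z, d) for d in ds] for z in zs])
iz, jd = np.unravel_index(costs.argmin(), costs.shape)
z_opt, d_opt = float(zs[iz]), float(ds[jd])
```

<para>In this invented cost landscape the optimum trades a thicker (more expensive) design against a less sensitive detector, which is the qualitative trade-off the combined design aims to exploit.</para>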
<para>The BED approach was investigated on component and on system level in paper I. The principal conclusions are given in the following.</para>
<section class="lev1" id="sec4.1" label="4.1" xreflabel="sec4.1">
<title>SHM based design</title>
<para>The foundation of SHM based design is risk-based optimization. A structural component can be seen as a series system of infinitesimal sections. All the safety margins are identical, and thus the expected costs are determined by the location with the highest probability of failure (the weakest link). A global SHM system provides information in the form of a global indicator. As damage can occur in any location of the structure (if we knew with certainty where, global response monitoring would not make sense), the information impacts all of the finitely many limit states, which must be updated with the new information using Bayes&#x2019; theorem. As the likelihood p(<emphasis>x&#x007C;&#x03B8;;location</emphasis>) is generally not constant, i.e. p(<emphasis>x&#x007C;&#x03B8;;location</emphasis>) &#x2260; p(<emphasis>x&#x007C;&#x03B8;</emphasis>), as exemplified by the variation of <emphasis>AUC</emphasis> in <link linkend="F44">Figure <xref linkend="F44" remap="44"/></link> (p.43), the updated safety margins are no longer identical. Naturally, they remain correlated, as the same evidence is inferred. The principle is sketched in <link linkend="F66">Figure <xref linkend="F66" remap="66"/></link>, where three possible damage locations {A,B,C} on a cantilevered beam are assumed. The structure is optimal for no inspections. The initial (intrinsic) cost function <emphasis>C<subscript>ini</subscript></emphasis>(<emphasis>z</emphasis>) is a simple linear function of the beam thickness, <emphasis>C<subscript>ini</subscript></emphasis>(<emphasis>t</emphasis>) = <emphasis>c<subscript>1</subscript></emphasis> + <emphasis>c<subscript>2</subscript>t</emphasis>. For each of the three locations, the expected cost given damage at that location is optimized with regard to <emphasis>t</emphasis>.</para>
<fig id="F66" position="float" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Figure 66</label>
<caption><para>Expected cost analysis for system level SHM design. Top: feature likelihood. Middle: cost contributions of prior optimized design. Bottom: cost contributions of SHM based design</para></caption>
<graphic xlink:href="graphics/fig66.jpg"/>
</fig>
<para>For location B, the value of damage detection is negative. The overall expected costs of the whole component are found by averaging over the damage location distribution. In RBI, this distribution is typically discretized to only have values in hot-spots where fatigue cracks originate. The same approach can be taken in SHM design, wherein the SHM system must be designed so that blind spots, for which the <emphasis>AUC</emphasis> approaches 0.5, are placed in the low-density regions of the damage location PDF. This has an impact on all aspects of SHM design, including the location, number and quality of sensors.</para>
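<para>The averaging over the discretized damage location distribution, with location-dependent detection quality, can be sketched as follows. The locations, weights and the crude AUC-to-probability-of-detection mapping are invented for illustration:</para>

```python
import numpy as np

# Average the expected costs over a discretized damage location PMF,
# with location-dependent detection quality. All numbers are invented.
locations = ["A", "B", "C"]
p_loc = np.array([0.5, 0.2, 0.3])     # hot-spot probabilities, sum to 1
auc = np.array([0.95, 0.55, 0.85])    # B sits near a blind spot (AUC -> 0.5)
C_f, p_dmg = 1.0e5, 1e-2              # failure cost, P(damage) per location

pod = (auc - 0.5) * 2.0               # crude AUC -> probability-of-detection map
E_cost_loc = C_f * p_dmg * (1.0 - pod)      # expected failure cost given damage there
E_cost = float(np.sum(p_loc * E_cost_loc))  # average over the location PMF
```

<para>The near-blind-spot location dominates the averaged expected cost, which is why blind spots should coincide with low-density regions of the damage location PDF.</para>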
<para>Had the prior optimum of the design included inspections, the principle is the same, although most likely with greater benefit, as SHM reduces the expected costs of inspections.</para>
<para>The concept was numerically validated on the NREL tower. The initial cost function was calibrated to ensure that the prior optimum was actually the real optimum, and damage was assumed to occur only at the flange weld at the foundation interface. Significant increases in VoI were achieved when the SHM was taken into account in the initial design. This reflects that the EVPI is larger when the initial design is less safe, as sketched in <link linkend="F63">Figure <xref linkend="F63" remap="63"/></link> (p.55).</para>
</section>
</chapter>
<chapter class="chapter" id="ch05" label="Chapter 5" xreflabel="ch05">
<title>Conclusions and future directions</title>
<para>Under the cover of investigating design of concrete wind turbine towers, this project set out to investigate the Value of SHM. To that end, a simple conclusion can be given: to calculate the value of SHM, one need simply perform a Bayesian pre-posterior analysis. This is a two-layer optimization of design variables and decisions, but, easy as it sounds, the problem is intractable without simplifications, as the number of configurations of the optimization variables is substantial. This makes the &#x2018;real&#x2019; problem the choice of simplifications in obtaining a tractable version of the pre-posterior analysis. The following were investigated and concluded in the thesis:</para>
<itemizedlist mark="dash" spacing="normal">
<listitem><para>The Value of SHM depends on whether the structure is already realized. It can be significantly higher if a combined design can be achieved.</para></listitem>
<listitem><para>The calculation of the value of SHM is based on Life-Cycle Cost (LCC) analysis and Bayesian pre-posterior analysis. According to the Bayesian pre-posterior analysis, the Bayes decision is the risk-optimal decision at every time step.</para></listitem>
<listitem><para>The Bayes decision is based on the sequential Bayesian filter, but Monte Carlo Simulation (MCS) shows that, due to the updating lag, the detector is sub-optimal compared with simpler filters.</para></listitem>
<listitem><para>Elimination of variables by preselecting the SHM technology reduces the optimization problem substantially.</para></listitem>
<listitem><para>Using the Area Under the Curve (<emphasis>AUC</emphasis>) as a substitute for the detection cost enables performance-based pre-selection of SHM technology.</para></listitem>
<listitem><para>Using Rytter&#x2019;s original hierarchical method is the most economic form of SHM based decision support, compared to a one-step approach. Following existing trends in SHM, novelty detection algorithms were applied to answer the detection question and classification algorithms were applied to answer the localization question, using few sensors.</para></listitem>
<listitem><para>A damage detection system has blind spots which affect the system reliability of the structure, which in turn may affect the expected costs negatively. The knowledge of blind spots enables the designer to place damage sensitive details in optimal places.</para></listitem>
<listitem><para>The use of a Hidden Markov Model (HMM) in a Limited Memory Influence Diagram (LIMID), based on a Bayesian network model of the deterioration process, enables efficient Single Policy Updating (SPU) for calculating the expected costs and the value of SHM. The LIMID solution approximates the MCS solution conservatively.</para></listitem>
<listitem><para>The value of SHM has been calculated for fatigue deterioration in steel and in concrete, in both cases using four linked models: a <emphasis>design model</emphasis>, a <emphasis>physical model</emphasis>, a <emphasis>response model</emphasis> and a <emphasis>decision model</emphasis>.</para></listitem>
<listitem><para>Due to the very large uncertainties of the physical model in concrete, the value of SHM is small and a system may likely prove uneconomic for the owner. Greater knowledge of the global-scale physical manifestation of high-cycle fatigue in concrete is needed for SHM applications. Steel fatigue is a suitable target deterioration type, as the model for its physical manifestation is well understood.</para></listitem>
<listitem><para>Several feature types were investigated numerically and experimentally, and the frequencies were found to give the best performance, even for experiments where the temperatures varied substantially over the period of baseline data acquisition. The results indicate that frequency measurements, along with ambient temperature measurements, fused in a probabilistic model of the joint probability function, constitute a good damage sensitive feature. An HMM Bayesian network with an input stage, i.e. a type of state-space model, may support such an approach, in combination with Bayesian decision-making.</para></listitem>
<listitem><para>The influence of the cost function was investigated by sensitivity analysis. The impact of repair costs is negligible but the impact of inspection costs is large. For the example in papers IV and V, the Value of Information (VoI) of SHM was negative for detection for inspection/failure costs ratios below 2%.</para></listitem>
<listitem><para>By using the VoI, the initial and running costs of maintaining the SHM system can be omitted from the calculations.</para></listitem>
</itemizedlist>
<section class="lev1" id="sec5.1" label="5.1" xreflabel="sec5.1">
<title>Topics for future research</title>
<para>The thesis has sought to uncover the validity of the pre-posterior approach by coupling probabilistic decision-making to Life-Cycle Costs and optimization. There are many parameters that have been left unattended in the modelling choices that were made. Among these are:</para>
<itemizedlist mark="dash" spacing="normal">
<listitem><para>Reliability of the SHM system itself has not been investigated, but may have an adverse effect on the value of SHM.</para></listitem>
<listitem><para>When a calculation is based on optimizing Life-Cycle Costs, all uncertainties should be accounted for in the probabilistic modelling, and this includes uncertainties on the cost function. Following the trend in the scientific literature, the sensitivities to the deterministic cost function have been investigated in this thesis, but future research could implement probabilistic formulations of the cost function.</para></listitem>
<listitem><para>In this thesis, the cost functions have been estimated from the experience in Risk Based Inspection (RBI). This reflects the lack of experience of SHM decision-making. Future investigations should be performed into application-specific cost functions.</para></listitem>
<listitem><para>LIMIDs have been found to show large potential for SHM decision-making, but the current implementation in the Bayes Net Toolbox (BNT) is computationally demanding in terms of the memory used for utility potentials of very large networks. Also, BNT at present only incorporates smoothing inference and not filtering. Filtering is important for applications where evidence is inferred at every sensing instance and the model is updated. This is, however, not important for prior calculation of the Value of Information.</para></listitem>
<listitem><para>For SHM damage detection to gain economic motivation for large concrete structures, a macro scale physical model must be developed. The lack of fatigue failures could indicate that the design models are overly conservative, thus indicating an economic potential of damage detection.</para></listitem>
<listitem><para>The Bayesian approach to sequential decision-making is vulnerable to the dominance of the prior, causing a lag in the damage sensitivity. Advanced algorithms from the field of statistical <emphasis>tracking</emphasis> could prove efficient in estimating the posterior, as the sequential updating is basically a tracking problem. Examples of likely suited algorithms are the unscented Kalman filter and particle filters.</para></listitem>
<listitem><para>A major potential threat lies in the definition of critical damage severity. As axiom VII of Worden et al. [<link linkend="B25">25</link>] states, the size of detectable damage is inversely proportional to the frequency band of excitation. This means that large damage must be accepted for SHM to be economic. If the loading is well specified, the critical damage size may be as well, but for time-dependent loading, the critical damage size is time variant. Even with this effect incorporated, greatly enhancing the complexity of the modelling, the critical damage severity may be set by external authorities or stakeholders, e.g. structural codes or the asset manager. Such limitations would greatly diminish the value of SHM at the current levels of detectability.</para></listitem>
<listitem><para>The risk neutrality of the decision-maker is also an important assumption that might not hold true. It is relevant when technology replaces human inspections: even though the risk is reduced, the decision-maker may not accept completely replacing planned inspections with SHM based decision-making. A topic for further research could be how to combine SHM with planned inspections, as a transition technology that retains some of the economic benefit of SHM.</para></listitem>
</itemizedlist>
</section>
</chapter>
<bibliography class="biblio" id="bib01">
<title>6 References</title>
<bibliomixed id="B1"><label>1.</label><authorgroup><author><surname>Downer</surname> <firstname>J.</firstname></author></authorgroup> &#x201C;<article-title>737-Cabriolet</article-title>&#x201D;: <source>The Limits of Knowledge and the Sociology of Inevitable Failure. American Journal of Sociology</source>. <year>2011</year> <month>November</month>; <volumenum>117</volumenum>(<issue>3</issue>): p. <fpage>725</fpage>&#x2013;<lpage>762</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=737-Cabriolet+The+Limits+of+Knowledge+and+the+Sociology+of+Inevitable+Failure%2E+American+Journal+of+Sociology+Downer+J.+2011+3+725-762" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B2"><label>2.</label><authorgroup><author><surname>Thoft-Christensen</surname> <firstname>P</firstname></author>, <author><surname>Baker</surname> <firstname>MJ.</firstname></author></authorgroup> <source>Structural reliability theory and its applications:</source> <publisher-name>Springer</publisher-name>; <year>1982</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Structural+reliability+theory+and+its+applications%3A+Thoft-Christensen+P+Baker.+MJ.+1982" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B3"><label>3.</label><authorgroup><author><surname>Sohn</surname> <firstname>H</firstname></author>, <author><surname>Farrar</surname> <firstname>CR</firstname></author>, <author><surname>Hemez</surname> <firstname>FM</firstname></author></authorgroup>, <etal/> <source>A Review of Structural Health Monitoring Literature: 1996-2001</source>. <publisher-loc>Los Alamos, NM</publisher-loc>: <publisher-name>Los Alamos National Laboratory</publisher-name>; <year>2004</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=A+Review+of+Structural+Health+Monitoring+Literature%3A+1996-2001+Sohn+H+Farrar+CR+Hemez+FM+2004" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B4"><label>4.</label><authorgroup><author><surname>Doebling</surname> <firstname>SW</firstname></author>, <author><surname>Farrar</surname> <firstname>CR</firstname></author>, <author><surname>Prime</surname> <firstname>MB</firstname></author></authorgroup>, <etal/> <source>Damage identification and health monitoring of structural and mechanical systems from changes in their vibrations characteristics: A literature review, Technical Report LA-13070-MS</source>. <publisher-loc>NM</publisher-loc>:; <year>1996</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Damage+identification+and+health+monitoring+of+structural+and+mechanical+systems+from+changes+in+their+vibrations+characteristics%3A+A+literature+review%2C+Technical+Report+LA-13070-MS+Doebling+SW+Farrar+CR+Prime+MB+1996" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B5"><label>5.</label><authorgroup><author><surname>Carden</surname> <firstname>EP</firstname></author>, <author><surname>Fanning</surname> <firstname>P.</firstname></author></authorgroup> <article-title>Vibration based condition monitoring: a review</article-title>. <source>Structural Health Monitoring</source>. <year>2004</year>; <volumenum>3</volumenum>(<issue>4</issue>): p. <fpage>355</fpage>&#x2013;<lpage>377</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Vibration+based+condition+monitoring%3A+a+review+Structural+Health+Monitoring+Carden+EP+Fanning+P.+2004+4+355-377" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B6"><label>6.</label><authorgroup><author><surname>Fan</surname> <firstname>W</firstname></author>, <author><surname>Qiao</surname> <firstname>P.</firstname></author></authorgroup> <article-title>Vibration-based damage identification methods: a review and comparative study</article-title>. <source>Structural Health Monitoring</source>. <year>2011</year>; <volumenum>10</volumenum>(<issue>83</issue>). <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Vibration-based+damage+identification+methods%3A+a+review+and+comparative+study+Structural+Health+Monitoring+Fan+W+Qiao+P.+2011+83" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B7"><label>7.</label><authorgroup><author><surname>Farrar</surname> <firstname>CR</firstname></author>, <author><surname>Worden</surname> <firstname>K.</firstname></author></authorgroup> <source>Structural Health Monitoring: A Machine Learning Perspective:</source> <publisher-name>Wiley</publisher-name>; <year>2012</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Structural+Health+Monitoring%3A+A+Machine+Learning+Perspective%3A+Farrar+CR+Worden+K.+2012" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B8"><label>8.</label><authorgroup><author><surname>Adams</surname> <firstname>D.</firstname></author></authorgroup> <source>Health monitoring of structural materials and components: methods with applications:</source> <publisher-name>John Wiley &#x0026; Sons</publisher-name>; <year>2007</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Health+monitoring+of+structural+materials+and+components%3A+methods+with+applications%3A+Adams+D.+2007" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B9"><label>9.</label><authorgroup><author><surname>Balageas</surname> <firstname>D</firstname></author>, <author><surname>Fritzen</surname> <firstname>CP</firstname></author>, <author><surname>G&#x00FC;emes</surname> <firstname>A</firstname></author></authorgroup>, editors. <source>Structural Health Monitoring</source> <publisher-loc>London</publisher-loc>: <publisher-name>ISTE</publisher-name>; <year>2006</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Structural+Health+Monitoring+Balageas+D+Fritzen+CP+G&#x00FC;emes+A+2006" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B10"><label>10.</label><authorgroup><author><surname>Boller</surname> <firstname>C</firstname></author>, <author><surname>Chang</surname> <firstname>FK</firstname></author>, <author><surname>Fujino</surname> <firstname>Y</firstname></author></authorgroup>, editors. <source>Encyclopedia of Structural Health Monitoring</source> <publisher-loc>Chichester</publisher-loc>: <publisher-name>John Wiley &#x0026; Sons</publisher-name>; <year>2009</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Encyclopedia+of+Structural+Health+Monitoring+Boller+C+Chang+FK+Fujino+Y+2009" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B11"><label>11.</label><authorgroup><author><surname>Nathwani</surname> <firstname>J</firstname></author>, <author><surname>Lind</surname> <firstname>N</firstname></author>, <author><surname>Pandey</surname> <firstname>M.</firstname></author></authorgroup> <source>Affordable Safety by Choice: The Life Quality Method</source> <publisher-loc>Ontario, Canada</publisher-loc>: <publisher-name>Institute of Risk Research, University of Waterloo</publisher-name>; <year>1997</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Affordable+Safety+by+Choice%3A+The+Life+Quality+Method+Nathwani+J+Lind+N+Pandey+M.+1997" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B12"><label>12.</label><authorgroup><author><surname>Rackwitz</surname> <firstname>R.</firstname></author></authorgroup> <source>The philosophy behind the life quality index and empirical verification. Memorandum to JCSS</source>. ; <year>2005</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=The+philosophy+behind+the+life+quality+index+and+empirical+verification%2E+Memorandum+to+JCSS+Rackwitz+R.+2005" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B13"><label>13.</label><authorgroup><author><surname>Pandey</surname> <firstname>MD</firstname></author>, <author><surname>Nathwani</surname>. <firstname>JS.</firstname></author></authorgroup> <article-title>Life quality index for the estimation of societal willingness-to-pay for safety</article-title>. <source>Structural Safety</source>. <year>2004</year>; <volumenum>26</volumenum>(<issue>2</issue>): p. <fpage>181</fpage>&#x2013;<lpage>199</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Life+quality+index+for+the+estimation+of+societal+willingness-to-pay+for+safety+Structural+Safety+Pandey+MD+Nathwani.+JS.+2004+2+181-199" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B14"><label>14.</label><authorgroup><author><surname>Boller</surname> <firstname>C</firstname></author>, <author><surname>Buderath</surname> <firstname>M.</firstname></author></authorgroup> <article-title>Fatigue in aerostructures&#x2014;where structural health monitoring can contribute to a complex subject</article-title>. <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>. <year>2007</year>; <volumenum>365</volumenum>(<issue>1851</issue>). <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+in+aerostructures-where+structural+health+monitoring+can+contribute+to+a+complex+subject+Philosophical+Transactions+of+the+Royal+Society+A%3A+Mathematical%2C+Physical+and+Engineering+Sciences+Boller+C+Buderath+M.+2007+1851" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B15"><label>15.</label><authorgroup><author><surname>Johnson</surname> <firstname>EA</firstname></author>, <author><surname>Lam</surname> <firstname>HF</firstname></author>, <author><surname>Katafygiotis</surname> <firstname>LS</firstname></author></authorgroup>, <etal/> <article-title>A benchmark problem for structural health monitoring and damage detection</article-title>. In <source>Structural Control for Civil and Infrastructure Engineering</source>, <author><surname>Casciati</surname> <firstname>F</firstname></author>, <author><surname>Magonette</surname> <firstname>G</firstname></author> (eds). <publisher-name>World Scientific</publisher-name>; <year>2001</year>; <publisher-loc>Singapore</publisher-loc>. p. <fpage>317</fpage>&#x2013;<lpage>324</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=A+benchmark+problem+for+structural+health+monitoring+and+damage+detection+Structural+Control+for+Civil+and+Infrastructure+Engineering+Johnson+EA+Lam+HF+Katafygiotis+LS+2001+317-324" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B16"><label>16.</label><authorgroup><author><surname>Li</surname> <firstname>S</firstname></author>, <author><surname>Li</surname> <firstname>H</firstname></author>, <author><surname>Liu</surname> <firstname>Y</firstname></author></authorgroup>, <etal/> <article-title>SMC structural health monitoring benchmark problem using monitored data from an actual cable-stayed bridge</article-title>. <source>Structural Control and Health Monitoring</source>. <year>2014</year>; <volumenum>21</volumenum>(<issue>2</issue>): p. <fpage>156</fpage>&#x2013;<lpage>172</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=SMC+structural+health+monitoring+benchmark+problem+using+monitored+data+from+an+actual+cable-stayed+bridge+Structural+Control+and+Health+Monitoring+Li+S+Li+H+Liu+Y+2014+21+156-172" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B17"><label>17.</label><authorgroup><author><surname>Ko</surname> <firstname>JM</firstname></author>, <author><surname>Ni</surname> <firstname>YQ.</firstname></author></authorgroup> <article-title>Technology developments in structural health monitoring of large-scale bridges</article-title>. <source>Engineering Structures</source>. <year>2005</year>; <volumenum>27</volumenum>(<issue>12</issue>): p. <fpage>1715</fpage>&#x2013;<lpage>1725</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Technology+developments+in+structural+health+monitoring+of+large-scale+bridges+Engineering+structures+Ko+JM+Ni+YQ.+2005+12+1715-1725" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B18"><label>18.</label><authorgroup><author><surname>Brownjohn</surname> <firstname>JM.</firstname></author></authorgroup> <article-title>Structural health monitoring of civil infrastructure</article-title>. <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>. <year>2007</year>; <volumenum>365</volumenum>(<issue>1851</issue>): p. <fpage>589</fpage>&#x2013;<lpage>622</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Structural+health+monitoring+of+civil+infrastructure+Philosophical+Transactions+of+the+Royal+Society+A%3A+Mathematical%2C+Physical+and+Engineering+Sciences+Brownjohn+JM.+2007" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B19"><label>19.</label><authorgroup><author><surname>Fujino</surname> <firstname>Y</firstname></author>, <author><surname>Siringoringo</surname> <firstname>DM.</firstname></author></authorgroup> <article-title>Bridge monitoring in Japan: the needs and strategies</article-title>. <source>Structure and Infrastructure Engineering</source>. <year>2011</year>; <volumenum>7</volumenum>(<issue>8</issue>): p. <fpage>597</fpage>&#x2013;<lpage>611</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Bridge+monitoring+in+Japan%3A+the+needs+and+strategies+Structure+and+Infrastructure+Engineering+Fujino+Y+Siringoringo.+DM.+2011" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B20"><label>20.</label><authorgroup><author><surname>Rice</surname> <firstname>JA</firstname></author>, <author><surname>Spencer</surname> <firstname>BF</firstname> <suffix>Jr.</suffix></author></authorgroup> <source>Flexible smart sensor framework for autonomous full-scale structural health monitoring</source>. <publisher-name>University of Illinois at Urbana-Champaign, Newmark Structural Engineering Laboratory</publisher-name>; <year>2009</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=60+University+of+Illinois+at+Urbana-Champaign%2C+Newmark+Structural+Engineering+Laboratory+Flexible+smart+sensor+framework+for+autonomous+full-scale+structural+health+monitoring+Rice+JA+2009" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B21"><label>21.</label><authorgroup><author><surname>Lynch</surname> <firstname>JP</firstname></author>, <author><surname>Sundararajan</surname> <firstname>A</firstname></author>, <author><surname>Law</surname> <firstname>KH</firstname></author></authorgroup>, <etal/> <article-title>Embedding damage detection algorithms in a wireless sensing unit for operational power efficiency</article-title>. <source>Smart Materials and Structures</source>. <year>2004</year>; <volumenum>13</volumenum>(<issue>4</issue>). <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Embedding+damage+detection+algorithms+in+a+wireless+sensing+unit+for+operational+power+efficiency+Smart+Materials+and+Structures+Lynch+JP+Sundararajan+A+Law+KH+2004+4" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B22"><label>22.</label><authorgroup><author><surname>Rytter</surname> <firstname>A.</firstname></author></authorgroup> <source>Vibration based inspection of civil engineering structures</source>, <named-content content-type="ref-degree">Ph.D. Dissertation</named-content>. <publisher-loc>Aalborg</publisher-loc>: <publisher-name>Department of Building Technology and Structural Engineering at Aalborg University</publisher-name>; <year>1993</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Vibration+based+inspection+of+civil+engineering+structures+Rytter+A.+1993" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B23"><label>23.</label><authorgroup><author><surname>Farrar</surname> <firstname>CR</firstname></author>, <author><surname>Duffey</surname> <firstname>TA</firstname></author>, <author><surname>Doebling</surname> <firstname>SW</firstname></author></authorgroup>, <etal/> <article-title>A statistical pattern recognition paradigm for vibration-based structural health monitoring</article-title>. In <source>Proc. 2nd Int. Workshop on Structural Health Monitoring, Stanford, CA</source>, <month>September</month> <day>8&#x2013;10</day>; <year>2000</year>; <publisher-loc>Stanford</publisher-loc>. p. <fpage>764</fpage>&#x2013;<lpage>773</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=A+statistical+pattern+recognition+paradigm+for+vibration-based+structural+health+monitoring+Proc%2E+2nd+Int%2E+Workshop+on+Structural+Health+Monitoring%2C+Stanford%2C+CA+Farrar+CR+Duffey+TA+Doebling+SW+1993+764-773" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B24"><label>24.</label><authorgroup><author><surname>Sohn</surname> <firstname>H</firstname></author>, <author><surname>Farrar</surname> <firstname>CR</firstname></author>, <author><surname>Hunter</surname> <firstname>HF</firstname></author></authorgroup>, <etal/> <source>Applying the LANL statistical pattern recognition paradigm for structural health monitoring to data from a surface-effect fast patrol boat. Report LA-13761-MS</source>. <publisher-loc>Los Alamos</publisher-loc>: <publisher-name>Los Alamos National Laboratory</publisher-name>; <year>2001</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Applying+the+LANL+statistical+pattern+recognition+paradigm+for+structural+health+monitoring+to+data+from+a+surface-effect+fast+patrol+boat%2E+Report+LA-13761-MS+Sohn+H+Farrar+CR+Hunter+HF+2001" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B25"><label>25.</label><authorgroup><author><surname>Worden</surname> <firstname>K</firstname></author>, <author><surname>Farrar</surname> <firstname>CR</firstname></author>, <author><surname>Manson</surname> <firstname>G</firstname></author></authorgroup>, <etal/> <article-title>The fundamental axioms of structural health monitoring</article-title>. <source>Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science</source>. <year>2007</year>; <volumenum>463</volumenum>: p. <fpage>1639</fpage>&#x2013;<lpage>1664</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=The+fundamental+axioms+of+structural+health+monitoring+Proceedings+of+the+Royal+Society+A%3A+Mathematical%2C+Physical+and+Engineering+Science+Worden+K+Farrar+CR+Manson+G+2007+463+1639-1664" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B26"><label>26.</label><authorgroup><author><surname>Skjong</surname> <firstname>R.</firstname></author></authorgroup> <article-title>Reliability Based Optimization of Inspection Strategies</article-title>. In <source>Proceedings of the ICOSSAR 85</source>, Vol. <volumenum>III</volumenum>; <year>1985</year>; <publisher-loc>Kobe, Japan</publisher-loc>. p. <fpage>614</fpage>&#x2013;<lpage>618</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Reliability+Based+Optimization+of+Inspection+Strategies+Proceeding+of+the+ICOSSAR+85+Skjong+R.+1985+III;+614-618" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B27"><label>27.</label><authorgroup><author><surname>Yang</surname> <firstname>JN</firstname></author>, <author><surname>Trapp</surname> <firstname>WJ.</firstname></author></authorgroup> <article-title>Reliability Analysis of Aircraft Structures under Random Loading and Periodic Inspection</article-title>. <source>AIAA Journal</source>. <year>1974</year>; <volumenum>12</volumenum>(<issue>12</issue>): p. <fpage>1623</fpage>&#x2013;<lpage>1630</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Reliability+Analysis+of+Aircraft+Structures+under+Random+Loading+and+Periodic+Inspection+AIAA+Journal+Yang+JN+Trapp+WJ.+1974+12+1623-1630" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B28"><label>28.</label><authorgroup><author><surname>Brincker</surname> <firstname>R</firstname></author>, <author><surname>Ventura</surname> <firstname>C.</firstname></author></authorgroup> <source>An Introduction to Operational Modal Analysis</source> [unpublished]. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=An+Introduction+to+Operation+Modal+Analysis%3A+Brincker+R+Ventura+C.+1974" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B29"><label>29.</label><authorgroup><author><surname>Freudenthal</surname> <firstname>AM.</firstname></author></authorgroup> <article-title>The safety of structures</article-title>. <source>Transactions of the American Society of Civil Engineers</source>. <year>1947</year>; <volumenum>112</volumenum>(<issue>1</issue>): p. <fpage>125</fpage>&#x2013;<lpage>159</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=The+safety+of+structures+Transactions+of+the+American+Society+of+Civil+Engineers+Freudenthal+AM.+1947+1+125-159" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B30"><label>30.</label><authorgroup><author><surname>Madsen</surname> <firstname>HO</firstname></author>, <author><surname>Krenk</surname> <firstname>S</firstname></author>, <author><surname>Lind</surname> <firstname>NC.</firstname></author></authorgroup> <source>Methods of Structural Safety</source>. <publisher-name>Prentice Hall</publisher-name>; <year>1986</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Methods+of+Structural+Safety%3A+Madsen+HO+Krenk+S+Lind+NC.+1986" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B31"><label>31.</label><collab>JCSS Joint Committee on Structural Safety.</collab> <source>Probabilistic Model Code</source>. <year>2006</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Probabilistic+Model+Code+2006" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B32"><label>32.</label><collab>ISO</collab>. <source>ISO 2394. General Principles on Reliability for Structures</source>; <year>1998</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=ISO+2394%2E+General+Principles+on+Reliability+for+Structures%3B+1998" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B33"><label>33.</label><authorgroup><author><surname>Berger</surname> <firstname>JO.</firstname></author></authorgroup> <source>Statistical decision theory and Bayesian analysis</source>. <publisher-name>Springer</publisher-name>; <year>1985</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Statistical+decision+theory+and+Bayesian+analysis%3A+Berger+JO.+1985" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B34"><label>34.</label><authorgroup><author><surname>Fisher</surname> <firstname>R.</firstname></author></authorgroup> <source>Statistical Methods for Research Workers</source>. <publisher-loc>Edinburgh, UK</publisher-loc>: <publisher-name>Oliver and Boyd</publisher-name>; <year>1925</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Statistical+Methods+for+Research+Workers+Fisher+R.+1925" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B35"><label>35.</label><authorgroup><author><surname>Neyman</surname> <firstname>J</firstname></author>, <author><surname>Pearson</surname> <firstname>ES.</firstname></author></authorgroup> <article-title>On the Problem of the Most Efficient Tests of Statistical Hypotheses</article-title>. <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>. <year>1933</year>; <volumenum>231</volumenum>: p. <fpage>289</fpage>&#x2013;<lpage>337</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=On+the+Problem+of+the+Most+Efficient+Tests+of+Statistical+Hypotheses+Philosophical+Transactions+of+the+Royal+Society+A%3A+Mathematical%2C+Physical+and+Engineering+Sciences+Neyman+J+Pearson+ES.+1933+289-337" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B36"><label>36.</label><authorgroup><author><surname>Wald</surname> <firstname>A.</firstname></author></authorgroup> <article-title>Contributions to the Theory of Statistical Estimation and Testing Hypotheses</article-title>. <source>Annals of Mathematical Statistics</source>. <year>1939</year>; <volumenum>10</volumenum>(<issue>4</issue>): p. <fpage>299</fpage>&#x2013;<lpage>326</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Contributions+to+the+Theory+of+Statistical+Estimation+and+Testing+Hypotheses+Annals+of+Mathematical+Statistics+Wald+A.+1939+4+299-326" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B37"><label>37.</label><authorgroup><author><surname>Von Neumann</surname> <firstname>J</firstname></author>, <author><surname>Morgenstern</surname> <firstname>O.</firstname></author></authorgroup> <source>Theory of Games and Economic Behavior</source>. <publisher-loc>Princeton</publisher-loc>: <publisher-name>Princeton University Press</publisher-name>; <year>1944</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Theory+of+Games+and+Economic+Behavior+Von+Neumann+J+Morgenstern+O.+1944" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B38"><label>38.</label><authorgroup><author><surname>Worden</surname> <firstname>K</firstname></author>, <author><surname>Manson</surname> <firstname>G</firstname></author>, <author><surname>Fieller</surname> <firstname>NRJ.</firstname></author></authorgroup> <article-title>Damage detection using outlier analysis</article-title>. <source>Journal of Sound and Vibration</source>. <year>2000</year>; <volumenum>229</volumenum>(<issue>3</issue>): p. <fpage>647</fpage>&#x2013;<lpage>667</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Damage+detection+using+outlier+analysis+Journal+of+Sound+and+Vibration+Worden+K+Manson+G+Fieller+NRJ.+2000+3+647-667" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B39"><label>39.</label><authorgroup><author><surname>D&#x00F6;hler</surname> <firstname>M</firstname></author>, <author><surname>Mevel</surname> <firstname>L</firstname></author>, <author><surname>Hille</surname> <firstname>F.</firstname></author></authorgroup> <article-title>Subspace-based damage detection under changes in the ambient excitation statistics</article-title>. <source>Mechanical Systems and Signal Processing</source>. <year>2014</year>; <volumenum>45</volumenum>(<issue>1</issue>): p. <fpage>207</fpage>&#x2013;<lpage>224</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Subspace-based+damage+detection+under+changes+in+the+ambient+excitation+statistics+Mechanical+Systems+and+Signal+Processing+D&#x00F6;hler+M+Mevel+L+Hille+F.+2014+1+207-224" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B40"><label>40.</label><authorgroup><author><surname>Farhidzadeh</surname> <firstname>A</firstname></author>, <author><surname>Salamone</surname> <firstname>S</firstname></author>, <author><surname>Singla</surname> <firstname>P.</firstname></author></authorgroup> <article-title>A probabilistic approach for damage identification and crack mode classification in reinforced concrete structures</article-title>. <source>Journal of Intelligent Material Systems and Structures</source>. <year>2013</year>; <volumenum>24</volumenum>(<issue>14</issue>): p. <fpage>1722</fpage>&#x2013;<lpage>1735</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=A+probabilistic+approach+for+damage+identification+and+crack+mode+classification+in+reinforced+concrete+structures+Journal+of+Intelligent+Material+Systems+and+Structures+Farhidzadeh+A+Salamone+S+Singla+P.+2013+14+1722-1735" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B41"><label>41.</label><authorgroup><author><surname>Flynn</surname> <firstname>EB.</firstname></author></authorgroup> <source>A Bayesian experimental design approach to structural health monitoring with application to ultrasonic guided waves</source>. <named-content content-type="ref-degree">Doctoral Thesis</named-content>. <publisher-loc>San Diego</publisher-loc>: <publisher-name>UC San Diego</publisher-name>; <year>2010</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=A+Bayesian+experimental+design+approach+to+structural+health+monitoring+with+application+to+ultrasonic+guides+waves+Flynn+EB.+2010" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B42"><label>42.</label><authorgroup><author><surname>Laplace</surname> <firstname>PS.</firstname></author></authorgroup> <source>Th&#x00E9;orie analytique des probabilit&#x00E9;s</source>. <publisher-name>V. Courcier</publisher-name>; <year>1812</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Th%E9orie+analytique+des+probabilit%E9s%2E+%3A+V%2E+Courcier%2E+Laplace+PS.+1812" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B43"><label>43.</label><authorgroup><author><surname>Bishop</surname> <firstname>CM.</firstname></author></authorgroup> <source>Pattern recognition and machine learning</source>. Vol. <volumenum>1</volumenum>. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2006</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Pattern+recognition+and+machine+learning+Bishop+CM.+2006" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B44"><label>44.</label><authorgroup><author><surname>Raiffa</surname> <firstname>H</firstname></author>, <author><surname>Schlaifer</surname> <firstname>R.</firstname></author></authorgroup> <source>Applied statistical decision theory</source>. <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Division of Research, Harvard Business School</publisher-name>; <year>1961</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Applied+statistical+decision+theory+Raiffa+H+Schlaifer+R.+1961" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B45"><label>45.</label><authorgroup><author><surname>Lindley</surname> <firstname>D.</firstname></author></authorgroup> <source>Bayesian statistics, a review</source>. <publisher-loc>Philadelphia, Pennsylvania</publisher-loc>: <publisher-name>Society for Industrial and Applied Mathematics</publisher-name>; <year>1971</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Bayesian+statistics%2C+a+review+Lindley+D.+1971" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B46"><label>46.</label><authorgroup><author><surname>Benjamin</surname> <firstname>JR</firstname></author>, <author><surname>Cornell</surname> <firstname>CA.</firstname></author></authorgroup> <source>Probability, Statistics and Decisions for Civil Engineering</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>McGraw Hill Book Company</publisher-name>; <year>1970</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Probability%2C+Statistics+and+Decisions+for+Civil+Engineering+Benjamin+JR+Cornell+CA.+1970" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B47"><label>47.</label><authorgroup><author><surname>Madsen</surname> <firstname>HO</firstname></author>, <author><surname>S&#x00F8;rensen</surname> <firstname>JD.</firstname></author></authorgroup> <article-title>Probability-based optimization of fatigue design, inspection and maintenance</article-title>. In <source>Proceedings from the fourth symposium on Integrity of offshore structures;</source> <year>1990</year>; <publisher-loc>Glasgow</publisher-loc>. p. <fpage>421</fpage>&#x2013;<lpage>438</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Probability-based+optimization+of+fatigue+design%2C+inspection+and+maintenance+Proceedings+from+the+fourth+symposium+on+Integrity+of+offshore+structures%3B+Madsen+HO+S&#x00F8;rensen+JD.+1990+421-438" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B48"><label>48.</label><authorgroup><author><surname>Nielsen</surname> <firstname>JS.</firstname></author></authorgroup> <source>Risk-based Operation and Maintenance of Offshore Wind Turbines</source>. <named-content content-type="ref-degree">PhD Thesis</named-content>. <publisher-loc>Aalborg</publisher-loc>: <publisher-name>Aalborg University, Division of Water and Soil</publisher-name>; <year>2013</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Risk-based+Operation+and+Maintenance+of+Offshore+Wind+Turbines%3A+PhD+Thesis+Nielsen+JS.+2013" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B49"><label>49.</label><authorgroup><author><surname>Kay</surname> <firstname>SM.</firstname></author></authorgroup> <source>Fundamentals of statistical signal processing, Vol. II: Detection Theory</source>. <publisher-loc>Upper Saddle River, NJ</publisher-loc>: <publisher-name>Prentice Hall</publisher-name>; <year>1998</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fundamentals+of+statistical+signal+processing%2C+Vol%2E+II%3A+Detection+Theory+Kay+SM.+1998" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B50"><label>50.</label><authorgroup><author><surname>Peterson</surname> <firstname>WW</firstname></author>, <author><surname>Birdsall</surname> <firstname>TG</firstname></author>, <author><surname>Fox</surname> <firstname>WC.</firstname></author></authorgroup> <article-title>The theory of signal detectability</article-title>. In <source>Proceedings of the IRE Professional Group on Information Theory;</source> <year>1954</year>. p. <fpage>171</fpage>&#x2013;<lpage>212</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=The+theory+of+signal+detectability+Proceedings+of+the+IRE+Professional+Group+on+Information+Theory%3B+Peterson+WW+Birdsall+TG+Fox+WC.+1954+171-212" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B51"><label>51.</label><authorgroup><author><surname>Green</surname> <firstname>DM</firstname></author>, <author><surname>Swets</surname> <firstname>JA.</firstname></author></authorgroup> <source>Signal detection theory and psychophysics</source>. Vol. <volumenum>1</volumenum>. <publisher-loc>New York</publisher-loc>: <publisher-name>Wiley</publisher-name>; <year>1966</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Signal+detection+theory+and+psychophysics+Green+DM+Swets.+JA.+1966" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B52"><label>52.</label><authorgroup><author><surname>Frangopol</surname> <firstname>DM</firstname></author>, <author><surname>Lin</surname> <firstname>KY</firstname></author>, <author><surname>Estes</surname> <firstname>AC.</firstname></author></authorgroup> <article-title>Life-cycle cost design of deteriorating structures</article-title>. <source>Journal of Structural Engineering</source>. <year>1997</year>; <volumenum>123</volumenum>(<issue>10</issue>): p. <fpage>1390</fpage>&#x2013;<lpage>1401</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Life-cycle+cost+design+of+deteriorating+structures+Journal+of+Structural+Engineering+Frangopol+DM+Lin+KY+Estes+AC.+1997+10+1390-1401" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B53"><label>53.</label><authorgroup><author><surname>Enevoldsen</surname> <firstname>I</firstname></author>, <author><surname>S&#x00F8;rensen</surname> <firstname>J.</firstname></author></authorgroup> <article-title>Reliability-based Optimization in Structural Engineering</article-title>. <source>Structural Safety</source>. <year>1994</year>; <volumenum>15</volumenum>(<issue>3</issue>): p. <fpage>169</fpage>&#x2013;<lpage>196</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Reliability-based+Optimization+in+Structural+Engineering+Structural+Safety+Enevoldsen+I+S&#x00F8;rensen+J.+1994+3+169-196" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B54"><label>54.</label><authorgroup><author><surname>Lassen</surname> <firstname>T</firstname></author>, <author><surname>Recho</surname> <firstname>N.</firstname></author></authorgroup> <source>Fatigue life analyses of welded structures</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Iste</publisher-name>; <year>2006</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+life+analyses+of+welded+structures+Lassen+T+Recho+N.+2006" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B55"><label>55.</label><authorgroup><author><surname>Faber</surname> <firstname>MH</firstname></author>, <author><surname>Engelund</surname> <firstname>S</firstname></author>, <author><surname>S&#x00F8;rensen</surname> <firstname>JD</firstname></author></authorgroup>, <etal/> <article-title>Simplified and Generic Risk Based Inspection Planning</article-title>. In <source>Proceedings OMAE2000, 19th Conference on Offshore Mechanics and Arctic Engineering</source>, <month>February</month> <day>14&#x2013;17</day>; <year>2000</year>; <publisher-loc>New Orleans, Louisiana, USA</publisher-loc>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Simplified+and+Generic+Risk+Based+Inspection+Planning+Proceedings+OMAE2000%2C+19th+Conference+on+Offshore+Mechanics+and+Arctic+Engineering+Faber+MH+Engelund+S+S&#x00F8;rensen+JD+2000" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B56"><label>56.</label><authorgroup><author><surname>Wong</surname> <firstname>FS</firstname></author>, <author><surname>Yao</surname> <firstname>JT.</firstname></author></authorgroup> <article-title>Health monitoring and structural reliability as a value chain</article-title>. <source>Computer-Aided Civil and Infrastructure Engineering</source>. <year>2001</year>; <volumenum>16</volumenum>(<issue>1</issue>): p. <fpage>71</fpage>&#x2013;<lpage>78</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Health+monitoring+and+structural+reliability+as+a+value+chain+Computer-Aided+Civil+and+Infrastructure+Engineering+Wong+FS+Yao+JT.+2001+1+71-78" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B57"><label>57.</label><authorgroup><author><surname>Pozzi</surname> <firstname>M</firstname></author>, <author><surname>Der Kiureghian</surname> <firstname>A.</firstname></author></authorgroup> <article-title>Assessing the value of information for long-term structural health monitoring</article-title>. In <author><surname>Kundu</surname> <firstname>T</firstname></author>, editor. <source>Proceedings of SPIE 7984, Health Monitoring of Structural and Biological Systems;</source> <year>2011</year>; <publisher-loc>San Diego</publisher-loc>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Assessing+the+value+of+information+for+long-term+structural+health+monitoring+Proceedings+of+SPIE+7984%2C+Health+Monitoring+of+Structural+and+Biological+Systems%3B+Pozzi+M+Der+Kiureghian+A.+2011" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B58"><label>58.</label><authorgroup><author><surname>Flynn</surname> <firstname>E</firstname></author>, <author><surname>Todd</surname> <firstname>MD.</firstname></author></authorgroup> <article-title>A Bayesian Approach to Optimal Sensor Placement for Structural Health Monitoring with Application to Active Sensing</article-title>. <source>Mechanical Systems and Signal Processing</source>. <year>2010</year>;<volumenum>24</volumenum>(<issue>4</issue>): p. <fpage>891</fpage>&#x2013;<lpage>903</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=A+Bayesian+Approach+to+Optimal+Sensor+Placement+for+Structural+Health+Monitoring+with+Application+to+Active+Sensing+Mechanical+Systems+and+Signal+Processing+Flynn+E+Todd+MD.+2010+4+891-903" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B59"><label>59.</label><authorgroup><author><surname>Mao</surname> <firstname>Z.</firstname></author></authorgroup> <source>Uncertainty quantification in vibration-based structural health monitoring for enhanced decision-making capability</source>. <named-content content-type="ref-degree">PhD Thesis</named-content>. <publisher-loc>San Diego</publisher-loc>: <publisher-name>UC San Diego</publisher-name>; <year>2012</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Uncertainty+quantification+in+vibration-based+structural+health+monitoring+for+enhanced+decisionmaking+capability+Mao+Z.+2012" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B60"><label>60.</label><source>COST 059/14: COST Action TU1402: Quantifying the value of structural health monitoring</source>. <article-title>Memorandum of Understanding</article-title>. <publisher-loc>Brussels</publisher-loc>: <publisher-name>European Cooperation in the field of Scientific and Technical Research</publisher-name>; <year>2014</year>.</bibliomixed>
<bibliomixed id="B61"><label>61.</label><authorgroup><author><surname>Kirkegaard</surname> <firstname>PH</firstname></author>, <author><surname>S&#x00F8;rensen</surname> <firstname>JD</firstname></author>, <author><surname>Brincker</surname> <firstname>R.</firstname></author></authorgroup> <article-title>Optimization of Measurements on Dynamically Sensitive Structures Using a Reliability Approach</article-title>. In <source>Proceedings of the 9th International Conference on Experimental Mechanics;</source> <year>1990</year>; <publisher-loc>Copenhagen</publisher-loc>: <publisher-name>Technical University of Denmark</publisher-name>. p. <fpage>967</fpage>&#x2013;<lpage>976</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Optimization+of+Measurements+on+Dynamically+Sensitive+62+Structures+Using+a+Reliability+Approach+Proceedings+of+the+9th+International+Conference+on+Experimental+Mechanics%3B+Kirkegaard+PH+S&#x00F8;rensen+JD+Brincker+R.+1990+967-976" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B62"><label>62.</label><authorgroup><author><surname>Th&#x00F6;ns</surname> <firstname>S</firstname></author>, <author><surname>Faber</surname> <firstname>MH.</firstname></author></authorgroup> <article-title>Assessing the Value of Structural Health Monitoring</article-title>. In <source>Proceedings of the ICOSSAR 2013;</source> <year>2013</year>; <publisher-loc>New York</publisher-loc>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Assessing+the+Value+of+Structural+Health+Monitoring+Proceedings+of+the+ICOSSAR+2013%3B+Th&#x00F6;ns+S+Faber+MH.+2013" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B63"><label>63.</label><authorgroup><author><surname>Bourinet</surname> <firstname>JM</firstname></author>, <author><surname>Mattrand</surname> <firstname>C</firstname></author>, <author><surname>Dubourg</surname> <firstname>V.</firstname></author></authorgroup> <article-title>A review of recent features and improvements added to FERUM software</article-title> (software downloadable at <uri>http://www.ifma.fr/FERUM</uri>). In <source>Proc. of the 10th International Conference on Structural Safety and Reliability;</source> <year>2009</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=A+review+of+recent+features+and+improvements+added+to+FERUM+software+Proc%2E+of+the+10th+International+Conference+on+Structural+Safety+and+Reliability%3B+Bourinet+JM+Mattrand+C+Dubourg+V.+2009" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B64"><label>64.</label><authorgroup><author><surname>Melchers</surname> <firstname>RE.</firstname></author></authorgroup> <source>Structural Reliability Analysis and Prediction</source>. <edition>2nd</edition> ed.: <publisher-name>Wiley</publisher-name>; <year>1999</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Structural+Reliability+Analysis+and+Prediction+Melchers+RE.+1999" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B65"><label>65.</label><authorgroup><author><surname>N&#x00E6;sheim</surname> <firstname>T</firstname></author>, <author><surname>Moan</surname> <firstname>T</firstname></author>, <author><surname>Bekkvik</surname> <firstname>P</firstname></author></authorgroup>, <etal/> <source>&#x201C;Alexander L. Kielland&#x201D; ulykken. In Norwegian. NOU 1981: 11</source>. <publisher-loc>Oslo</publisher-loc>: <publisher-name>Norges Offentlige Utredninger</publisher-name>; <year>1981</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=%22Alexander+L%2E+Kielland%22+ulykken%2E+In+norwegian%2E+NOU+1981%3A+11+N&#x00E6;sheim+T+Moan+T+Bekkvik+P+1981" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B66"><label>66.</label><authorgroup><author><surname>Basquin</surname> <firstname>OH.</firstname></author></authorgroup> <article-title>The exponential law of endurance tests</article-title>. <source>Proc. ASTM</source>. <year>1910</year>;<volumenum>10</volumenum>(<issue>II</issue>). <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=The+exponential+law+of+endurance+tests+Proc%2E+ASTM+Basquin+OH.+1910+II" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B67"><label>67.</label><authorgroup><author><surname>Miner</surname> <firstname>MA.</firstname></author></authorgroup> <article-title>Cumulative damage in fatigue</article-title>. <source>Journal of Applied Mechanics (ASME)</source>. <year>1945</year>;<volumenum>12</volumenum>(<issue>67</issue>): p. <fpage>A159</fpage>&#x2013;<lpage>A164</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Cumulative+damage+in+fatigue+Journal+of+Applied+Mechanics+%28ASME%29+Miner+MA.+1945+67+A159-A164" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B68"><label>68.</label><authorgroup><author><surname>Straub</surname> <firstname>D.</firstname></author></authorgroup> <source>Generic approaches to risk based inspection planning for steel structures</source>. <named-content content-type="ref-degree">PhD Thesis</named-content>. <publisher-loc>Zurich</publisher-loc>: <publisher-name>ETH</publisher-name>; <year>2004</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Generic+approaches+to+risk+based+inspection+planning+for+steel+structures+Straub+D.+2004" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B69"><label>69.</label><authorgroup><author><surname>Paris</surname> <firstname>PC</firstname></author>, <author><surname>Erdogan</surname> <firstname>F.</firstname></author></authorgroup> <article-title>A critical analysis of crack propagation laws</article-title>. <source>Journal of Fluids Engineering</source>. <year>1963</year>;<volumenum>85</volumenum>(<issue>4</issue>): p. <fpage>528</fpage>&#x2013;<lpage>533</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=A+critical+analysis+of+crack+propagation+laws+Journal+of+Fluids+Engineering+Paris+PC+Erdogan+F.+1963+4+528-533" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B70"><label>70.</label><authorgroup><author><surname>Lassen</surname> <firstname>T.</firstname></author></authorgroup> <source>Experimental investigation and stochastic modelling of the fatigue behaviour of welded steel joints</source>. <named-content content-type="ref-degree">PhD thesis</named-content>. <publisher-loc>Aalborg</publisher-loc>: <publisher-name>Aalborg University</publisher-name>; <year>1997</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Experimental+investigation+and+stochastic+modelling+of+the+fatigue+behaviour+of+welded+steel+joints+Lassen+T.+1997" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B71"><label>71.</label><authorgroup><author><surname>Holmen</surname> <firstname>JO.</firstname></author></authorgroup> <article-title>Fatigue design evaluation of offshore concrete structures</article-title>. <source>Mat&#x00E9;riaux et Construction</source>. <year>1984</year> January&#x2013;February; <volumenum>17</volumenum>(<issue>1</issue>): p. <fpage>39</fpage>&#x2013;<lpage>42</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+design+evaluation+of+offshore+concrete+structures+Mat%E9riaux+et+Construction+Holmen+JO.+1984+1+39-42" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B72"><label>72.</label><collab>CEB</collab>. <source>CEB-FIP Model Code 90. Bulletin d&#x2019;Information, No. 213/214</source> <publisher-loc>London</publisher-loc>: <publisher-name>Thomas Telford Ltd</publisher-name>; <year>1993</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=CEB-FIP+Model+Code+90%2E+Bulletin+d%27Information%2C+No%2E+213%2F214+1993" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B73"><label>73.</label><collab>FIB</collab>. <source>Model Code 2010, final draft, Volume 1 and 2: International Federation for Structural Concrete</source>; <year>2012</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Model+Code+2010%2C+final+draft%2C+Volume+1+and+2%3A+International+Federation+for+Structural+Concrete+2012" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B74"><label>74.</label><collab>European Committee for Standardization</collab>. <source>EN 1992-1-1. Eurocode 2: Design of Concrete Structures &#x2013; Part 1-1: General Rules and Rules for Buildings. CEN. EN 1992-1-1;</source> <year>2004</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=EN+1992-1-1%2E+Eurocode+2%3A+Design+of+Concrete+Structures+-+Part+1-1%3A+General+Rules+and+Rules+for+Buildings%2E+CEN%2E+EN+1992-1-1%3B+2004" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B75"><label>75.</label><collab>DNV</collab>. <source>Offshore Concrete Structures</source>. Offshore Standard DNV-OS-C502; <year>2012</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Offshore+Concrete+Structures+2012" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B76"><label>76.</label><collab>Ocean Structures</collab>. <source>Ageing of Offshore Concrete Structures. OSL-804-R04, Rev. 2</source>. <article-title>Report for Petroleum Safety Authority Norway. Myreside Steading</article-title>; <year>2009</year>.</bibliomixed>
<bibliomixed id="B77"><label>77.</label><authorgroup><author><surname>Bazant</surname> <firstname>ZP</firstname></author>, <author><surname>Hubler</surname> <firstname>MH.</firstname></author></authorgroup> <article-title>Theory of cyclic creep of concrete based on Paris law for fatigue growth of subcritical microcracks</article-title>. <source>Journal of the Mechanics and Physics of Solids</source>. <year>2014</year>;<volumenum>63</volumenum>: p. <fpage>187</fpage>&#x2013;<lpage>200</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Theory+of+cyclic+creep+of+concrete+based+on+Paris+law+for+fatigue+growth+of+subcritical+microcracks+Journal+of+the+Mechanics+and+Physics+of+Solids+Bazant+ZP+Hubler+MH.+2014+187-200" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B78"><label>78.</label><authorgroup><author><surname>Urban</surname> <firstname>S</firstname></author>, <author><surname>Strauss</surname> <firstname>A</firstname></author>, <author><surname>Sch&#x00FC;tz</surname> <firstname>R</firstname></author></authorgroup>, <etal/> <article-title>Dynamically loaded concrete structures &#x2013; monitoring-based assessment of the real degree of fatigue deterioration</article-title>. <source>Structural Concrete</source>. <year>2014</year>;<volumenum>15</volumenum>: p. <fpage>530</fpage>&#x2013;<lpage>542</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Dynamically+loaded+concrete+structures-monitoring-based+assessment+of+the+real+degree+of+fatigue+deterioration+Structural+Concrete+Urban+S+Strauss+A+Sch&#x00FC;tz+R+2014;+15+530-542" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B79"><label>79.</label><collab>RILEM Committee 36-RDL</collab>. <article-title>Long Term Random Dynamic Loading of Concrete Structures</article-title>. <source>Materiaux et Constructions</source>. <year>1984</year>;<volumenum>17</volumenum>(<issue>97</issue>). <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Long+Term+Random+Dynamic+Loading+of+Concrete+Structures+Materiaux+et+Constructions+1984+97" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B80"><label>80.</label><authorgroup><author><surname>Holmen</surname> <firstname>JO.</firstname></author></authorgroup> <source>Fatigue of Concrete by constant and Variable Amplitude Loading</source>. <named-content content-type="ref-degree">Dr.Ing.-Thesis</named-content>. <source>The University of Trondheim, NTH</source>; <year>1979</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+of+Concrete+by+constant+and+Variable+Amplitude+Loading%2E+Dr%2EIng%2E-Thesis%2E++The+University+of+Trondheim%2C+NTH+Holmen+JO.+1979" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B81"><label>81.</label><authorgroup><author><surname>Aas-Jakobsen</surname> <firstname>K.</firstname></author></authorgroup> <source>Fatigue of Concrete Beams and Columns. Bulletin No 70-1</source>. <publisher-loc>Trondheim, Norway</publisher-loc>: <publisher-name>The Norwegian Institute of Technology</publisher-name>; <year>1970</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+of+Concrete+Beams+and+Columns%2E+Bulletin+No+70-1+Aas-Jakobsen+K.+1970" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B82"><label>82.</label><authorgroup><author><surname>Stemland</surname> <firstname>H</firstname></author>, <author><surname>Petkovic</surname> <firstname>G</firstname></author>, <author><surname>Rosseland</surname> <firstname>S</firstname></author></authorgroup>, <etal/> <article-title>Fatigue of High Strength Concrete</article-title>. <source>Nordic Concrete Research</source>. <year>1990</year>; Publication No. 90: p. <fpage>172</fpage>&#x2013;<lpage>196</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+of+High+Strength+Concrete+Nordic+Concrete+Research+Stemland+H+Petkovic+G+Rosseland+S+1990+172-196" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B83"><label>83.</label><authorgroup><author><surname>Tepfers</surname> <firstname>R.</firstname></author></authorgroup> <article-title>Tensile fatigue strength of plain concrete</article-title>. <source>ACI Journal</source>. <year>1979</year>;<volumenum>76</volumenum>(<issue>8</issue>): p. <fpage>919</fpage>&#x2013;<lpage>933</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Tensile+fatigue+strength+of+plain+concrete+ACI+Journal+Tepfers+R.+1979+8+919-933" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B84"><label>84.</label><authorgroup><author><surname>Cornelissen</surname> <firstname>HAW.</firstname></author></authorgroup> <article-title>Fatigue Failure of Concrete in Tension</article-title>. <source>HERON</source>. <year>1984</year>;<volumenum>29</volumenum>(<issue>4</issue>): p. <fpage>68</fpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+Failure+of+Concrete+in+Tension+HERON+Cornelissen+HAW.+1984+4" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B85"><label>85.</label><authorgroup><author><surname>Oh</surname> <firstname>B.</firstname></author></authorgroup> <article-title>Fatigue Analysis of Plain Concrete in Flexure</article-title>. <source>J. Struct. Eng</source>. <year>1986</year>;<volumenum>112</volumenum>(<issue>2</issue>): p. <fpage>273</fpage>&#x2013;<lpage>288</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+Analysis+of+Plain+Concrete+in+Flexure+J%2E+Struct%2E+Eng+Oh+B.+1986+2+273-288" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B86"><label>86.</label><authorgroup><author><surname>McCall</surname> <firstname>JT.</firstname></author></authorgroup> <article-title>Probability of Fatigue Failure of Plain Concrete</article-title>. <source>ACI Journal Proceedings</source>. <year>1958</year>;<volumenum>55</volumenum>(<issue>8</issue>). <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Probability+of+Fatigue+Failure+of+Plain+Concrete+ACI+Journal+Proceedings+McCall+JT.+1958+8" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B87"><label>87.</label><authorgroup><author><surname>Bazant</surname> <firstname>ZP</firstname></author>, <author><surname>Xu</surname> <firstname>K.</firstname></author></authorgroup> <article-title>Size effect in fatigue fracture of concrete</article-title>. <source>ACI Materials Journal</source>. <year>1991</year>;<volumenum>88</volumenum>(<issue>4</issue>). <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Size+effect+in+fatigue+fracture+of+concrete+ACI+Materials+Journal+Bazant+ZP+Xu+K.+1991+4" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B88"><label>88.</label><collab>IEC</collab>. <article-title>IEC 61400-1:2005, Wind turbines &#x2013; Part 1: Design requirements</article-title>. <year>2005</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=IEC+61400-1%3A2005%2C+Wind+turbines-Part+1%3A+Design+requirements+2005" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B89"><label>89.</label><authorgroup><author><surname>Petryna</surname> <firstname>YS</firstname></author>, <author><surname>Kr&#x00E4;tzig</surname> <firstname>WB.</firstname></author></authorgroup> <article-title>Computational framework for long-term reliability analysis of RC structures</article-title>. <source>Computer methods in applied mechanics and engineering</source>. <year>2005</year>;<volumenum>194</volumenum>(<issue>12</issue>): p. <fpage>1619</fpage>&#x2013;<lpage>1639</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Computational+framework+for+long-term+reliability+analysis+of+RC+structures+Computer+methods+in+applied+mechanics+and+engineering+Petryna+YS+Kr&#x00E4;tzig+WB.+2005+12+1619-1639" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B90"><label>90.</label><authorgroup><author><surname>Sobol&#x2032;</surname> <firstname>IM.</firstname></author></authorgroup> <article-title>Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates</article-title>. <source>Mathematics and computers in simulation</source>. <year>2001</year>;<volumenum>55</volumenum>: p. <fpage>271</fpage>&#x2013;<lpage>280</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Global+sensitivity+indices+for+nonlinear+mathematical+models+and+their+Monte+Carlo+estimates+Mathematics+and+computers+in+simulation+Sobol&#x2032;+IM.+2001+271-280" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B91"><label>91.</label><authorgroup><author><surname>Cornelissen</surname> <firstname>HAW</firstname></author>, <author><surname>Reinhardt</surname> <firstname>HW.</firstname></author></authorgroup> <article-title>Fatigue of plain concrete in uniaxial tension and in alternating tension-compression loading</article-title>. In <source>IABSE Colloquium on Fatigue of Steel and Concrete Structures;</source> <year>1982</year>; <publisher-loc>Lausanne, Switzerland</publisher-loc>. p. <fpage>273</fpage>&#x2013;<lpage>282</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Fatigue+of+plain+concrete+in+uniaxial+tension+and+in+alternating+tension-compression+loading+IABSE+Colloquium+on+Fatigue+of+Steel+and+Concrete+Structures%3B+Cornelissen+HAW+Reinhardt+HW.+1982+273-282" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B92"><label>92.</label><authorgroup><author><surname>Feret</surname> <firstname>R.</firstname></author></authorgroup> <source>Etude Experimentale du Ciment Arme, Chapter 3 (in French):</source> <publisher-name>Gauthier-Villars</publisher-name>; <year>1906</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Etude+Experimentale+du+Ciment+Arme%2C+Chapter+3+%28in+french%29%3A+Feret+R.+1906" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B93"><label>93.</label><authorgroup><author><surname>Hordijk</surname> <firstname>DA.</firstname></author></authorgroup> <source>Local approach to fatigue of concrete</source>. <named-content content-type="ref-degree">PhD Thesis</named-content>. <publisher-name>Delft University of Technology</publisher-name>; <year>1991</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Local+approach+to+fatigue+of+concrete+Hordijk+DA.+1991" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B94"><label>94.</label><authorgroup><author><surname>Breitenb&#x00FC;cher</surname> <firstname>R</firstname></author>, <author><surname>Ibuk</surname> <firstname>H.</firstname></author></authorgroup> <article-title>Experimentally Based Investigations on the Degradation-Process of Concrete under Cyclic Load</article-title>. <source>Materials and Structures</source>. <year>2006</year>;<volumenum>39</volumenum>: p. <fpage>717</fpage>&#x2013;<lpage>724</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Experimentally+Based+Investigations+on+the+Degradation-Process+of+Concrete+under+Cyclic+Load+Materials+and+Structures+Breitenb&#x00FC;cher+R+Ibuk+H.+2006+717-724" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B95"><label>95.</label><authorgroup><author><surname>Friis-Hansen</surname> <firstname>A.</firstname></author></authorgroup> <source>Bayesian networks as a decision support tool in marine application</source>. <named-content content-type="ref-degree">PhD thesis</named-content>. <publisher-loc>Lyngby</publisher-loc>: <publisher-name>Technical University of Denmark</publisher-name>; <year>2000</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Bayesian+networks+as+a+decision+support+tool+in+marine+application+Friis-Hansen+A.+2000" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B96"><label>96.</label><authorgroup><author><surname>Straub</surname> <firstname>D.</firstname></author></authorgroup> <article-title>Stochastic modeling of deterioration processes through dynamic Bayesian networks</article-title>. <source>Journal of Engineering Mechanics</source>. <year>2009</year>;<volumenum>135</volumenum>(<issue>10</issue>): p. <fpage>1089</fpage>&#x2013;<lpage>1099</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Stochastic+modeling+of+deterioration+processes+through+dynamic+Bayesian+networks+Journal+of+Engineering+Mechanics+Straub+D.+2009+10+1089-1099" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B97"><label>97.</label><authorgroup><author><surname>Kjaerulff</surname> <firstname>UB</firstname></author>, <author><surname>Madsen</surname> <firstname>AL.</firstname></author></authorgroup> <source>Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis;</source> <year>2008</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Bayesian+Networks+and+Influence+Diagrams%3A+A+Guide+to+Construction+and+Analysis%3B+Kjaerulff+UB+Madsen+AL.+2008" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B98"><label>98.</label><authorgroup><author><surname>Jensen</surname> <firstname>FV</firstname></author>, <author><surname>Nielsen</surname> <firstname>TD.</firstname></author></authorgroup> <source>Bayesian networks and decision graphs:</source> <publisher-name>Springer</publisher-name>; <year>2009</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Bayesian+networks+and+decision+graphs%3A+Jensen+FV+Nielsen+TD.+2009" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B99"><label>99.</label><authorgroup><author><surname>Murphy</surname> <firstname>KP.</firstname></author></authorgroup> <source>Dynamic bayesian networks: representation, inference and learning</source>. <named-content content-type="ref-degree">PhD Thesis</named-content>. <publisher-name>University of California</publisher-name>, <publisher-loc>Berkeley</publisher-loc>; <year>2002</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Dynamic+bayesian+networks%3A+representation%2C+inference+and+learning+Murphy+KP.+2002" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B100"><label>100.</label><authorgroup><author><surname>Parloo</surname> <firstname>E</firstname></author>, <author><surname>Guillaume</surname> <firstname>P</firstname></author>, <author><surname>Van Overmeire</surname> <firstname>M.</firstname></author></authorgroup> <article-title>Damage assessment using mode shape sensitivities</article-title>. <source>Mechanical Systems and Signal Processing</source>. <year>2003</year>;<volumenum>17</volumenum>: p. <fpage>499</fpage>&#x2013;<lpage>518</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Damage+assessment+using+mode+shape+sensitivities+Mechanical+Systems+and+Signal+Processing+Parloo+E+Guillaume+P+Van+Overmeire+M.+2003+499-518" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B101"><label>101.</label><authorgroup><author><surname>O&#x2019;Callahan</surname> <firstname>J</firstname></author>, <author><surname>Avitabile</surname> <firstname>P</firstname></author>, <author><surname>Riemer</surname> <firstname>R.</firstname></author></authorgroup> <article-title>System equivalent reduction expansion process (SEREP)</article-title>. In <source>Proceedings of the 7th international modal analysis conference;</source> <year>1989</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=System+equivalent+reduction+expansion+process+%28SEREP%29+Proceedings+of+the+7th+international+modal+analysis+conference%3B+O&#x2019;Callahan+J+Avitabile+P+Riemer+R.+1989" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B102"><label>102.</label><authorgroup><author><surname>Figueiredo</surname> <firstname>E</firstname></author>, <author><surname>Park</surname> <firstname>G</firstname></author>, <author><surname>Figueiras</surname> <firstname>J</firstname></author></authorgroup>, <etal/> <source>LA-14393: Structural Health Monitoring Algorithm Comparisons Using Standard Data Sets</source>. <publisher-name>Los Alamos National Laboratory</publisher-name>; <year>2009</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=LA-14393%3A+Structural+Health+Monitoring+Algorithm+Comparisons+Using+Standard+Data+Sets+Figueiredo+E+Park+G+Figueiras+J+2009" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B103"><label>103.</label><authorgroup><author><surname>Golub</surname> <firstname>GH</firstname></author>, <author><surname>Van Loan</surname> <firstname>CF.</firstname></author></authorgroup> <source>Matrix computations</source>. <edition>3rd</edition> ed.: <publisher-name>JHU Press</publisher-name>; <year>2012</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Matrix+computations+Golub+GH+Van+Loan+CF.+2012" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B104"><label>104.</label><authorgroup><author><surname>Yao</surname> <firstname>R</firstname></author>, <author><surname>Pakzad</surname> <firstname>S.</firstname></author></authorgroup> <article-title>Autoregressive statistical pattern recognition algorithms for damage detection in civil structures</article-title>. <source>Mech Syst Signal Pr</source>. <year>2012</year>;<volumenum>31</volumenum>: p. <fpage>355</fpage>&#x2013;<lpage>368</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Autoregressive+statistical+pattern+recognition+algorithms+for+damage+detection+in+civil+structures+Mech+Syst+Signal+Pr+Yao+R+Pakzad+S.+2012+355-368" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B105"><label>105.</label><authorgroup><author><surname>Bradley</surname> <firstname>AP.</firstname></author></authorgroup> <article-title>The use of the area under the ROC curve in the evaluation of machine learning algorithms</article-title>. <source>Pattern recognition</source>. <year>1997</year>;<volumenum>30</volumenum>(<issue>7</issue>): p. <fpage>1145</fpage>&#x2013;<lpage>1159</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=The+use+of+the+area+under+the+ROC+curve+in+the+evaluation+of+machine+learning+algorithms+Pattern+recognition+Bradley+AP.+1997+30.7+1145-1159" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B106"><label>106.</label><authorgroup><author><surname>Niu</surname> <firstname>G</firstname></author>, <author><surname>Son</surname> <firstname>JD</firstname></author>, <author><surname>Widodo</surname> <firstname>A</firstname></author></authorgroup>, <etal/> <article-title>Comparison of classifier performance for fault diagnosis of induction motor using multi-type signals</article-title>. <source>Structural Health Monitoring</source>. <year>2007</year>; p. <fpage>215</fpage>&#x2013;<lpage>229</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Comparison+of+classifier+performance+for+fault+diagnosis+of+induction+motor+using+multi-type+signals+Structural+Health+Monitoring+Niu+G+Son+JD+Widodo+A+2007+215-229" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B107"><label>107.</label><authorgroup><author><surname>Hastie</surname> <firstname>T</firstname></author>, <author><surname>Tibshirani</surname> <firstname>R</firstname></author>, <author><surname>Friedman</surname> <firstname>J.</firstname></author></authorgroup> <source>The elements of statistical learning</source>. <edition>2nd</edition> ed. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2009</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=The+elements+of+statistical+learning+Hastie+T+Tibshirani+R+Friedman+J.+2009" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B108"><label>108.</label><authorgroup><author><surname>Doebling</surname> <firstname>SW</firstname></author>, <author><surname>Farrar</surname> <firstname>CR</firstname></author>, <author><surname>Prime</surname> <firstname>MB</firstname></author></authorgroup>, <etal/> <source>Damage Identification and Health Monitoring of Structural and Mechanical Systems from Changes in Their Vibration Characteristics: A Literature Review</source>; <year>1996</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Damage+Identification+and+Health+Monitoring+of+Structural+and+Mechanical+Systems+from+Changes+in+Their+Vibration+Characteristics%3A+A+Literature+Review+Doebling+SW+Farrar+CR+Prime+MB+1996" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B109"><label>109.</label><authorgroup><author><surname>Bishop</surname> <firstname>CM.</firstname></author></authorgroup> <source>Neural networks for pattern recognition</source> <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>; <year>1995</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Neural+networks+for+pattern+recognition+Bishop+CM.+1995" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B110"><label>110.</label><authorgroup><author><surname>Chang</surname> <firstname>CC</firstname></author>, <author><surname>Lin</surname> <firstname>CJ.</firstname></author></authorgroup> <article-title>LIBSVM: a library for support vector machines</article-title>. <source>ACM Transactions on Intelligent Systems and Technology</source>. <year>2011</year>;<volumenum>2</volumenum>(<issue>3</issue>): p. <fpage>27</fpage>:1-27:27. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=LIBSVM+%3A+a+library+for+support+vector+machines+ACM+Transactions+on+Intelligent+Systems+and+Technology+Chang+CC+Lin+CJ.+2011+3" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B111"><label>111.</label><authorgroup><author><surname>Knerr</surname> <firstname>S</firstname></author>, <author><surname>Personnaz</surname> <firstname>L</firstname></author>, <author><surname>Dreyfus</surname> <firstname>G.</firstname></author></authorgroup> <article-title>Single-layer learning revisited: a stepwise procedure for building and training a neural network</article-title>. <source>Neurocomputing: Algorithms, Architectures and Applications</source>. <year>1990</year>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Single-layer+learning+revisited%3A+a+stepwise+procedure+for+building+and+training+a+neural+network+Neurocomputing%3A+Algorithms%2C+Architectures+and+Applications+Knerr+S+Personnaz+L+Dreyfus+G.+1990" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B112"><label>112.</label><authorgroup><author><surname>Straub</surname> <firstname>D</firstname></author>, <author><surname>Faber</surname> <firstname>MH.</firstname></author></authorgroup> <article-title>Modeling dependency in inspection performance</article-title>. In <source>Proc. Of the International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP 03);</source> <year>2003</year>; <publisher-loc>Rotterdam</publisher-loc>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Modeling+dependency+in+inspection+performance+Proc%2E+Of+the+International+Conference+on+Applications+of+Statistics+and+Probability+in+Civil+Engineering+%28ICASP+03%29%3B+Straub+D+Faber+MH.+2003" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B113"><label>113.</label><authorgroup><author><surname>Beck</surname> <firstname>JL</firstname></author>, <author><surname>Au</surname> <firstname>S</firstname></author>, <author><surname>Vanik</surname> <firstname>MW.</firstname></author></authorgroup> <article-title>Monitoring structural health using a probabilistic measure</article-title>. <source>Computer-Aided Civil and Infrastructure Engineering</source>. <year>2001</year>;<volumenum>16</volumenum>(<issue>1</issue>). <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Monitoring+structural+health+using+a+probabilistic+measure+Computer-Aided+Civil+and+Infrastructure+Engineering+Beck+JL+Au+S+Vanik+MW.+2001+16" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B114"><label>114.</label><authorgroup><author><surname>Lauritzen</surname> <firstname>SL</firstname></author>, <author><surname>Nilsson</surname> <firstname>D.</firstname></author></authorgroup> <article-title>Representing and solving decision problems with limited information</article-title>. <source>Management Science</source>. <year>2001</year>;<volumenum>47</volumenum>(<issue>9</issue>): p. <fpage>1235</fpage>&#x2013;<lpage>1251</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Representing+and+solving+decision+problems+with+limited+information+Management+Science+Lauritzen+SL+Nilsson+D.+2001+9+1235-1251" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B115"><label>115.</label><authorgroup><author><surname>Luque</surname> <firstname>J</firstname></author>, <author><surname>Straub</surname> <firstname>D.</firstname></author></authorgroup> <article-title>Algorithms for optimal risk-based planning of inspections using influence diagrams</article-title>. In <source>Proceedings of the 11th International Probabilistic Workshop;</source> <year>2013</year>; <publisher-loc>Brno</publisher-loc>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Algorithms+for+optimal+risk-based+planning+of+inspections+using+influence+diagrams+Proceedings+of+the+11th+International+Probabilistic+Workshop%3B+Luque+J+Straub+D.+2001" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
<bibliomixed id="B116"><label>116.</label><authorgroup><author><surname>Wu</surname> <firstname>TF</firstname></author>, <author><surname>Lin</surname> <firstname>CJ</firstname></author>, <author><surname>Weng</surname> <firstname>RC.</firstname></author></authorgroup> <article-title>Probability Estimates for Multi-class Classification by Pairwise Coupling</article-title>. <source>Journal of Machine Learning Research</source>. <year>2004</year>;<volumenum>5</volumenum>: p. <fpage>975</fpage>&#x2013;<lpage>1005</lpage>. <bibliomisc><ulink url="http://scholar.google.com/scholar?&#x0026;q=Probability+Estimates+for+Multi-class+Classification+by+Pairwise+Coupling+Journal+of+Machine+Learning+Research+Wu+TF+Lin+CJ+Weng+RC.+2004+975-1005" target="_blank">Google Scholar</ulink></bibliomisc></bibliomixed>
</bibliography>
<appendix class="appendix" id="appa" xreflabel="appa">
<title>Appendix A</title>
<para>This appendix contains the following papers:</para>
<orderedlist numeration="upperroman" continuation="restarts" spacing="normal">
<listitem><para>MK Hovgaard &#x0026; R Brincker. SHM based structural design and the value of damage detection. 2014. Submitted to Structural Health Monitoring. (awaiting review)</para></listitem>
<listitem><para>MK Hovgaard &#x0026; R Brincker. Bayesian experimental design of an RC wind turbine tower incorporating modal sensitivity based Structural Health Monitoring. 2014. In: Proceedings of the 2014 IALCCE, Tokyo, Japan.</para></listitem>
<listitem><para>MK Hovgaard &#x0026; R Brincker. SHM based system design of a wind turbine tower using a modal sensitivity based Bayes detector. 2014. In: Proceedings of the 2014 EWSHM, Nantes, France.</para></listitem>
<listitem><para>MK Hovgaard, JB Hansen, A Skafte, P Olsen &#x0026; R Brincker. Study of vibration based SHM technologies, Part II: Detection. To be published in the proceedings of the 10th International Workshop on Structural Health Monitoring. 2015, Palo Alto, CA.</para></listitem>
<listitem><para>MK Hovgaard, JB Hansen, A Skafte, P Olsen &#x0026; R Brincker. Study of vibration based SHM technologies, Part III: Localization using statistical learning theory. To be published in the proceedings of the 10th International Workshop on Structural Health Monitoring. 2015, Palo Alto, CA.</para></listitem>
<listitem><para>MK Hovgaard &#x0026; R Brincker. Limited memory influence diagrams for structural damage detection decision-making. 2014. Submitted to the Journal of Civil Structural Health Monitoring. (in review)</para></listitem>
</orderedlist>
</appendix>
</book>
