Editors:
Kevin Feeney, Trinity College Dublin, Ireland
Jim Davies, Oxford University, United Kingdom
James Welch, Oxford University, United Kingdom
Sebastian Hellmann, University of Leipzig, Germany
Christian Dirschl, Wolters Kluwer, Germany
Andreas Koller, Semantic Web Company, Austria
Pieter Francois, Oxford University, United Kingdom
Arkadiusz Marciniak, Adam Mickiewicz University, Poland
Authors:
Trinity College Dublin, Ireland
Wolters Kluwer Germany, Germany
Semantic Web Company, Austria
University of Oxford, UK
University of Leipzig, Germany
Wolters Kluwer Poland, Poland
To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large, complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies, and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design.
Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals, and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.
Keywords:
Software engineering, data engineering, big data, Semantic Web, software engineering methodology, RDF, OWL
Chapter 6: Use Cases
by Kevin Feeney, Christian Dirschl, Andreas Koller, James Welch,
Dimitris Kontokostas, Pieter Francois, Sabina Łobocka
and Piotr Bledzki