

Indexed in the SCIE (2018 Impact Factor 0.854) and in Scopus

Journal of Web Engineering

Editors-in-Chief:
Martin Gaedke, Chemnitz University of Technology, Germany
Geert-Jan Houben, Delft University of Technology, The Netherlands
Flavius Frasincar, Erasmus University Rotterdam, The Netherlands
Florian Daniel, Politecnico di Milano, Italy


ISSN: 1540-9589 (Print Version)
ISSN: 1544-5976 (Online Version)
Vol: 16   Issue: Combined Issue 7 & 8

Published in: December 2017

Publication Frequency: 8 issues per year






A Taxonomy of Web Effort Predictors


Ricardo Britto1, Muhammad Usman1 and Emilia Mendes2

1Department of Software Engineering, Blekinge Institute of Technology, Karlskrona, 371 49, Sweden
2Department of Computer Science and Engineering, Blekinge Institute of Technology, Karlskrona, 371 49, Sweden


Abstract: Web engineering as a field has emerged to address challenges associated with developing Web applications. It is known that the development of Web applications differs from the development of non-Web applications, especially regarding aspects such as Web size metrics. Classifying existing Web engineering knowledge would benefit both practitioners and researchers in many ways, such as finding research gaps and supporting decision making. In the context of Web effort estimation, a taxonomy was proposed to classify the existing size metrics, and more recently a systematic literature review was conducted to identify aspects related to Web resource/effort estimation. However, no study classifies Web predictors (both size metrics and cost drivers). The main objective of this study is to organize the body of knowledge on Web effort predictors by designing and using a taxonomy, aiming to support both research and practice in Web effort estimation. To design our taxonomy, we used a recently proposed taxonomy design method. As input, we used the results of a previously conducted systematic literature review (updated in this study), an existing taxonomy of Web size metrics, and expert knowledge. We identified 165 unique Web effort predictors from a final set of 98 primary studies; they were used as one of the bases to design our hierarchical taxonomy. The taxonomy has three levels, organized into 13 categories. We demonstrated the utility of the taxonomy and body of knowledge through examples. The proposed taxonomy can be beneficial in the following ways: i) it can help to identify research gaps and literature of interest, and ii) it can support the selection of predictors for Web effort estimation. We also intend to extend the presented taxonomy to include effort estimation techniques and accuracy metrics.

Keywords: Web effort predictors, Taxonomy, Knowledge Classification, Web Engineering
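As a reading aid, the sketch below shows one way such a hierarchical predictor taxonomy could be encoded and queried in Python. The category and predictor names are hypothetical placeholders for illustration only; they are not the 13 categories defined in the paper.

```python
# Hypothetical sketch of a two-level predictor taxonomy and a lookup helper.
# Category and predictor names are illustrative, not the paper's taxonomy.

WEB_EFFORT_TAXONOMY = {
    "size metrics": {
        "static features": ["number of web pages", "number of images"],
        "dynamic features": ["number of server-side scripts", "number of forms"],
    },
    "cost drivers": {
        "team factors": ["developer experience", "team size"],
        "project factors": ["schedule pressure", "tool support"],
    },
}

def classify(predictor: str, taxonomy=WEB_EFFORT_TAXONOMY):
    """Return the (level-1, level-2) categories under which a predictor is listed."""
    for top, subcategories in taxonomy.items():
        for sub, predictors in subcategories.items():
            if predictor in predictors:
                return top, sub
    return None

print(classify("team size"))  # -> ('cost drivers', 'team factors')
```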

An SMIL-Timesheets based temporal behavior model for the visual development of Web user interfaces


M. Linaje, J.C. Preciado, and R. Rodriguez-Echeverria

Quercus Software Engineering Group, School of Technology, University of Extremadura, Spain


Abstract: Temporal behaviors are being incorporated into the user interfaces of Web applications, making them look more and more like multimedia applications, the so-called Rich Internet Application (RIA) user interfaces. Due to RIA complexity, some research communities have proposed models to ease their development. However, there is a gap to cover between formal temporal relationships and the current state of the art in RIA model-driven development techniques. The purpose of this paper is to specify a temporal behavior model for data-intensive RIA user interfaces with three main objectives. The first is that the model must be usable by non-experts in engineering specifications (e.g., Web designers). The second is that the model must be suitable for implementation in a CASE tool, integrating temporal behaviors into the RIA model-driven development workflow. The third is that the specified temporal behaviors must run in current Web browsers. The approach presented here is based on SMIL Timesheets, a standard that can be used as a foundation to extend RIA user interface model-driven proposals.

Keywords: Design tools and techniques, Web Engineering, Rich Internet Applications, Multimedia temporal relationships, SMIL Timesheets

Service Recommendation Based on Separated Time-aware Collaborative Poisson Factorization


Shuhui Chen1, Yushun Fan1, Wei Tan2, Jia Zhang3, Bing Bai1, and Zhenfeng Gao1

1Department of Automation, Tsinghua University, Beijing, China
2IBM Thomas J. Watson Research Center, Yorktown Heights, New York
3Department of Electrical and Computer Engineering, Carnegie Mellon University, Silicon Valley, California


Abstract: With the booming of web service ecosystems, finding suitable services and making service compositions have become a principal challenge for inexperienced developers. Therefore, recommending services based on service composition queries turns out to be a promising solution. Many recent studies apply Latent Dirichlet Allocation (LDA) to model the queries and the services' descriptions. However, limited by the restrictive Dirichlet-Multinomial distribution assumption, LDA cannot generate high-quality latent representations, and thus the recommendation accuracy is not satisfactory. Based on our previous work, in this paper we propose a Separated Time-aware Collaborative Poisson Factorization (STCPF) to tackle this problem. STCPF takes Poisson Factorization as the foundation to model mashup queries and service descriptions separately, and incorporates them with the historical usage data by using collective matrix factorization. Experiments on real-world data show that our model outperforms state-of-the-art methods (e.g., Time-aware Collaborative Domain Regression) in terms of mean average precision, and costs much less time on the sparse but massive data from web service ecosystems.

Keywords: service recommendation, service composition, Time-aware, Poisson Factorization
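For orientation, the sketch below shows a generic Poisson-style (KL-objective) matrix factorization of a usage-count matrix using multiplicative updates. It is only a minimal illustration of the building block the abstract names; it is not the authors' STCPF, which additionally models queries and descriptions separately and adds time awareness.

```python
import numpy as np

def poisson_mf(counts, k=20, iters=100, eps=1e-10, seed=0):
    """Factorize a non-negative count matrix (mashups x services)
    under a Poisson/KL objective using multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = counts.shape
    W = rng.random((m, k)) + eps   # latent mashup factors
    H = rng.random((k, n)) + eps   # latent service factors
    for _ in range(iters):
        WH = W @ H + eps
        W *= ((counts / WH) @ H.T) / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= (W.T @ (counts / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H

# Toy usage matrix: rows are mashups, columns are services, entries are counts.
V = np.array([[3, 0, 1, 0],
              [0, 2, 0, 1],
              [4, 0, 2, 0]], dtype=float)
W, H = poisson_mf(V, k=2, iters=200)
scores = W @ H  # predicted affinities used to rank services per mashup
print(np.round(scores, 2))
```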

A Metric Based Automatic Selection of Ontology Matchers Using Bootstrapped Patterns


B. Sathiya1, Geetha T V1 and Vijayan Sugumaran2

1College of Engineering, Guindy, Anna University, Chennai, Tamil Nadu, India
2Oakland University, Rochester, Michigan, USA


Abstract: The ontology matching process has become a vital part of the (semantic) web, enabling interoperability among heterogeneous data. To enable interoperability, similar entity pairs across heterogeneous data are discovered using a static set of matchers consisting of linguistic, structural and/or instance matchers. Numerous sets of matchers exist in the literature; however, none of them achieves good results across all data. In addition, it is both tedious and painstaking for domain experts to select the best set of matchers for the data to be matched. In this paper, we propose two bootstrapping-based approaches, Bottom-up and Top-down, to automatically select the best set of matchers for the ontologies to be matched. The selection is based on the characteristics of the ontologies, which are quantified by a set of quality metrics. Two new structural quality metrics, Concept External Structural Richness (CESR) and Concept Internal Structural Richness (CISR), are also proposed to better quantify the structural characteristics of an ontology. The best set of matchers is chosen using the sets of patterns learned through the proposed Bottom-up and Top-down bootstrapping approaches. The proposed metrics and the patterns constructed using these approaches are evaluated with the COMA matching tool on existing benchmark ontologies (the Benchmark, Conference and Benchmark2 tracks of OAEI 2011). The proposed Bottom-up patterns, along with the two proposed quality metrics, achieve better effectiveness (F-measure) in selecting the best set of matchers than a static set of matchers, supervised ML algorithms and existing automatic matching approaches. Specifically, the proposed Bottom-up patterns achieve a 14.6% Average Gain/Task and a significant improvement of 129% in comparison with the existing KNN model's Average Gain/Task.

Keywords: Automatic Matching, Ontology Matcher Selection, Bootstrapping Patterns, Ontology Matching, Ontology Metrics
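As a minimal illustration of the evaluation criterion mentioned in the abstract, the sketch below scores candidate matcher sets by F-measure against a reference alignment and picks the best one. The matcher names and alignments are toy data; the paper's approach goes further by learning metric-based patterns so the choice can be made without a reference alignment at matching time.

```python
# Toy sketch: score candidate matcher sets by F-measure against a reference
# alignment. Matcher-set names and entity pairs are illustrative only.

def f_measure(found: set, reference: set) -> float:
    if not found or not reference:
        return 0.0
    tp = len(found & reference)
    if tp == 0:
        return 0.0
    precision = tp / len(found)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)

reference = {("Author", "Writer"), ("Paper", "Article"), ("Venue", "Place")}
candidates = {
    "linguistic+structural": {("Author", "Writer"), ("Paper", "Article")},
    "linguistic+instance":   {("Author", "Writer"), ("Venue", "Place"),
                              ("Review", "Report")},
}
best = max(candidates, key=lambda name: f_measure(candidates[name], reference))
print(best, round(f_measure(candidates[best], reference), 3))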

Discover Semantic Topics in Patents within a Specific Domain


Wen Ma1, Xiangfeng Luo1, Junyu Xuan2, Ruirong Xue1 and Yike Guo3

1School of Computer Engineering and Science, Shanghai University, Shanghai, China
2Faculty of Engineering and Information Technology, University of Technology Sydney (UTS), Australia
3Department of Computing, Imperial College London, London, UK


Abstract: Patent topic discovery is critical for innovation-oriented enterprises to hedge patent application risks and raise the success rate of patent applications. Topic models are commonly recognized as an efficient tool for this task by researchers from both academia and industry. However, many well-known topic models, e.g., Latent Dirichlet Allocation (LDA), which are designed for documents represented as word-vectors, exhibit low accuracy and poor interpretability on the patent topic discovery task. The reasons are that 1) the semantics of documents in a specific domain are still under-explored, and 2) domain background knowledge is not utilized to guide the process of topic discovery. In order to improve accuracy and interpretability, we propose a new patent representation and organization with additional inter-word relationships mined from the title, abstract, and claims of patents. This representation endows each patent with more semantics than a word-vector. Meanwhile, we build a Backbone Association Link Network (Backbone ALN) to incorporate domain background semantics and further enhance the semantics of patents. With the new semantic-rich patent representations, we propose a Semantic LDA model to discover semantic topics from patents within a specific domain. It can discover semantic topics with association relations between words rather than single word vectors. Finally, the accuracy and interpretability of the proposed model are verified on real-world patent datasets from the United States Patent and Trademark Office. The experimental results show that the Semantic LDA model yields better performance than conventional models (e.g., LDA). Furthermore, the proposed model can be easily generalized to other related text mining corpora.

Keywords: Patent topic discovery, Latent Dirichlet Allocation, Backbone Association Link Network, Domain knowledge
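For context, the sketch below runs the plain word-vector LDA baseline that the abstract argues against, using scikit-learn on a few toy patent-like snippets. It does not reproduce the proposed Semantic LDA, which additionally exploits inter-word association relations and the Backbone ALN.

```python
# Baseline sketch: plain LDA over bag-of-words patent text (toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

patents = [
    "battery electrode lithium charge capacity",
    "lithium cell anode cathode electrolyte",
    "wireless antenna signal transmitter frequency",
    "antenna receiver modulation frequency band",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(patents)               # word-vector (count) representation
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top_terms}")
```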

A Hybrid Approach for Automatic Mashup Tag Recommendation


Min Shi, Jianxun Liu, and Dong Zhou

Department of Computer Science and Engineering, Hunan University of Science and Technology


Abstract: Tags have been extensively utilized to annotate Web services, which benefits the management, classification and retrieval of Web service data. In the past, plenty of work has been done on tag recommendation for Web services and their compositions (e.g., mashups). Most of it mainly exploits the tag-service matrix and the textual content of Web services. In the real world, multiple relationships can be mined from tagging systems, such as composition relationships between mashups and Application Programming Interfaces (APIs), and co-occurrence relationships between APIs. This auxiliary information can be utilized to enhance current tag recommendation approaches, especially when the tag-service matrix is sparse and textual content of Web services is absent. In this paper, we propose a hybrid approach for mashup tag recommendation. Our hybrid approach consists of two consecutive processes: API selection and tag ranking. We first select the most important APIs of a new mashup based on a probabilistic topic model and a weighted PageRank algorithm. The topic model simultaneously incorporates the composition relationships between mashups and APIs as well as the annotation relationships between APIs and tags to elicit the latent topic information. Then, the tags of the chosen APIs are recommended to the mashup. In this process, a tag filtering algorithm is employed to further select the most relevant and prevalent tags. The experimental results on a real-world dataset show that our approach outperforms several state-of-the-art methods.

Keywords: tags, mashups, APIs, tag recommendation, topic model, PageRank
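As a rough illustration of the ranking stage described in the abstract, the sketch below runs weighted PageRank over an API co-occurrence graph, picks the top-ranked APIs, and pools their tags. The graph, weights and tag lists are toy data, and the paper's full approach also uses a probabilistic topic model and a tag-filtering step, which are omitted here.

```python
# Toy sketch: weighted PageRank over API co-occurrence, then tag pooling.
import networkx as nx
from collections import Counter

# Edge weights = how often two APIs have been composed together in past mashups.
G = nx.Graph()
G.add_edge("google-maps", "twitter", weight=5)
G.add_edge("google-maps", "flickr", weight=3)
G.add_edge("twitter", "flickr", weight=1)
G.add_edge("twitter", "lastfm", weight=2)

api_tags = {
    "google-maps": ["mapping", "geolocation"],
    "twitter": ["social", "microblogging"],
    "flickr": ["photos", "social"],
    "lastfm": ["music", "social"],
}

scores = nx.pagerank(G, weight="weight")                   # API importance
top_apis = sorted(scores, key=scores.get, reverse=True)[:2]

tag_counts = Counter(tag for api in top_apis for tag in api_tags[api])
print([tag for tag, _ in tag_counts.most_common(3)])       # recommended tags
```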
