

Software Networking

Editor-in-Chief: Kuinam J. Kim, Kyonggi University, South Korea

ISSN: 2445-9739 (Online Version)
Vol: 2017   Issue: 1

Published In:   January 2017

Publication Frequency: Continuous Article Publication


The Impact of Security Concerns on Personal Innovativeness, Behavioural and Adoption Intentions of Cloud Technology

Prasanna Balasooriya L. N., Santoso Wibowo, Srimannarayana Grandhi, and Marilyn Wells

School of Engineering & Technology, Central Queensland University, Melbourne, Australia


Abstract: Cloud services have gained popularity due to the advantages they provide to organizations and individuals, such as reduced cost, better storage, and improved performance. However, many organizations are still unwilling to shift their traditional in-house services to the Cloud because of its security implications. Many Cloud service users worry that the security and privacy of their data will be violated. There are many reported cases of Cloud service providers illegally collecting personal data of their customers, which has led to service providers being viewed with greater suspicion than before. To overcome this, Cloud service providers must inform users exactly which data is collected and how it is used. While it is the duty of the Cloud service provider to protect the data confidentiality and privacy of their customers, this should not be misunderstood or misused by customers to conduct illegal activities, because Cloud service providers must abide by rules and regulations, including co-operating with law enforcement agencies that require a particular customer's data. In this paper, we examine the main security aspects of ensuring data confidentiality and privacy.

Keywords: Cloud security, Data privacy, Confidentiality, Adoption, Challenges.

Threat Models for Analyzing Plausible Deniability of Deniable File Systems

Michal Kedziora1, Yang-Wai Chow2 and Willy Susilo2

1Faculty of Computer Science and Management, Wroclaw University of Science and Technology, Wroclaw, Poland
2Institute of Cybersecurity and Cryptology, School of Computing and Information Technology, University of Wollongong, Wollongong, Australia


Abstract: Plausible deniability is a property of Deniable File Systems (DFSs), which are encrypted using a Plausibly Deniable Encryption (PDE) scheme, whereby one cannot prove the existence of a hidden file system within them. This paper investigates widely used security models that are commonly employed for analyzing DFSs. We contend that these models are no longer adequate considering the changing technological landscape, which now encompasses platforms like mobile and cloud computing as a part of everyday life. This necessitates a shift in digital forensic analysis paradigms, as new forensic models are required to detect and analyze DFSs. As such, it is vital to develop contemporary threat models that cater for the current computing environment and its increasing use of mobile and cloud technology. In this paper, we present improved threat models for PDE, against which DFS hidden volumes and hidden operating systems can be analyzed. In addition, we demonstrate how these contemporary threat models can be adopted to attack and defeat the plausible deniability property of widely used DFS software.

Keywords: Deniable file system, Hidden operating system, Plausibly deniable encryption, Veracrypt.

A Secure Service-Specific Overlay Composition Model for Cloud Networks

Ismaeel Al Ridhawi and Yehia Kotb

College of Engineering and Technology, American University of the Middle East (AUM), Eqaila, Kuwait


Abstract: Mobile cloud service subscribers acquire and produce both simple and complex services, resulting in tremendous amounts of stored data. Increasing demand for complex services, coupled with the limitations of current mobile networks, has led to the development of cloud service composition solutions. In this paper, we introduce a mobile cloud subscriber data caching method that allows cloud subscribers to retrieve data faster and more efficiently. The solution also allows the retrieval of user-specific composed data through a service-specific overlay (SSO) composition technique. Additionally, a Workflow-net based mathematical framework called Secure Workflownet is proposed to enhance the security attributes of SSOs. Simulation results show an increased successful file hit ratio and demonstrate that task coverage is achieved in a timely manner when constructing service composition workflows.

Keywords: Petri-net, Workflow-net, Overlay network, Service-specific Overlay, Cloud, Fog.

Performance Evaluation of RSA and NTRU over GPU with Maxwell and Pascal Architecture

Xian-Fu Wong1, Bok-Min Goi1, Wai-Kong Lee2, and Raphael C.-W. Phan3

1Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long, Malaysia
2Faculty of Information and Communication Technology, Universiti Tunku Abdul Rahman, Kampar, Malaysia
3Faculty of Engineering, Multimedia University, Cyberjaya, Malaysia


Abstract: Public key cryptography is important in protecting the key exchange between two parties for secure mobile and wireless communication. RSA is one of the most widely used public key cryptographic algorithms, but the modular exponentiation involved in RSA is very time-consuming when the bit-size is large, usually in the range of 1024 to 4096 bits. The speed of RSA becomes a concern when the server needs to handle thousands or millions of authentication requests at a time from a massive number of connected mobile and wireless devices. NTRU, on the other hand, is a public key cryptographic algorithm that has recently become popular due to its ability to resist attacks from quantum computers. In this paper, we exploit the massively parallel architecture of GPUs to perform RSA and NTRU computations. Various optimization techniques are proposed to achieve higher RSA and NTRU throughput on two GPU platforms. To allow a fair comparison with existing RSA implementation techniques, we evaluate the speed in the best case (fewest '1's in the exponent bits), average case (random exponent bits) and worst case (all '1's in the exponent bits). The overall throughput achieved by our RSA implementation is about 12% higher for random exponent bits and 50% higher for all-ones exponent bits compared to an implementation without the signed-digit recoding technique. Our implementation achieves 17713 and 89043 2048-bit modular exponentiations per second on random exponent bits on the GTX 960M and GTX 1080 respectively, which represent two state-of-the-art GPU architectures. We also present an implementation of NTRU, which is 62.5 and 38.1 times faster than 2048-bit RSA on the GTX 960M and GTX 1080 respectively.

Keywords: RSA, NTRU, GPU, Signed-digit recoding, Montgomery exponentiation.
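The signed-digit recoding that the abstract credits for its speedup can be illustrated in miniature. A non-adjacent form (NAF) recoding uses digits in {-1, 0, 1} and leaves roughly one third of the digits nonzero, versus about half in plain binary, so fewer multiplications are needed per exponentiation. Below is a minimal plain-Python sketch, not the paper's GPU code; the function names are illustrative:

```python
def naf(e):
    """Non-adjacent form (signed-digit) recoding of exponent e.

    Digits are in {-1, 0, 1} and no two adjacent digits are nonzero,
    so on average only ~1/3 of digits are nonzero (vs ~1/2 in binary).
    Returned least-significant digit first.
    """
    digits = []
    while e > 0:
        if e & 1:
            d = 2 - (e % 4)      # d is 1 if e = 1 (mod 4), else -1
            e -= d
        else:
            d = 0
        digits.append(d)
        e >>= 1
    return digits

def pow_naf(base, e, mod):
    """Left-to-right signed-digit exponentiation.

    A digit of -1 multiplies by the modular inverse of the base,
    which requires gcd(base, mod) == 1 (always true for RSA).
    """
    inv = pow(base, -1, mod)     # modular inverse (Python 3.8+)
    acc = 1
    for d in reversed(naf(e)):
        acc = (acc * acc) % mod  # square for every digit
        if d == 1:
            acc = (acc * base) % mod
        elif d == -1:
            acc = (acc * inv) % mod
    return acc
```

The fewer nonzero digits there are, the fewer multiplications accompany the squarings, which is the source of the roughly 12% average-case gain the abstract reports for its (GPU-parallel, Montgomery-based) implementation.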

Towards a Reliable Intrusion Detection Benchmark Dataset

Iman Sharafaldin, Amirhossein Gharib, Arash Habibi Lashkari and Ali A. Ghorbani

Canadian Institute for Cybersecurity (CIC), UNB, Fredericton, Canada


Abstract: The rapidly growing number of security threats on Internet and intranet networks urgently demands reliable security solutions. Among the various options, Intrusion Detection Systems (IDSs) and Intrusion Prevention Systems (IPSs) are used to defend network infrastructure by detecting and preventing attacks and malicious activities. The performance of a detection system is evaluated using benchmark datasets. A number of datasets exist, such as DARPA98, KDD99, ISCX2012, and ADFA13, that researchers have used to evaluate the performance of their intrusion detection and prevention approaches. However, not enough research has focused on evaluating and assessing the datasets themselves, and there is no reliable dataset in this domain. In this paper, we present a comprehensive evaluation of the existing datasets using our proposed criteria, a design and evaluation framework for IDS and IPS datasets, and a dataset generation model to create a reliable IDS or IPS benchmark dataset.

Keywords: Intrusion Detection, Intrusion Prevention, IDS, IPS, Evaluation Framework, IDS dataset.

Fault Identification and Reliability Assessment Tool Based on Deep Learning for Fault Big Data

Yoshinobu Tamura1 and Shigeru Yamada2

1Tokyo City University, Japan
2Tottori University, Japan


Abstract: Recently, much open source software (OSS) has been developed under various OSS projects. The software faults detected in these projects are managed by bug tracking systems, on which many users and project members record large data sets. In this paper, we propose a deep-learning-based method for improving OSS reliability. Moreover, we apply an existing software reliability model to the fault data recorded on the bug tracking system. In particular, we develop an application for the visualization and reliability assessment of fault data recorded for OSS. Furthermore, several numerical illustrations of the developed application on actual OSS projects are shown in this paper. Finally, we discuss the analysis results obtained with the developed application using the fault data sets of actual OSS projects.

Keywords: Software tool, fault identification, reliability assessment, fault big data, deep learning.

Composite Metrics for Network Security Analysis

Simon Enoch Yusuf, Jin B. Hong, Mengmeng Ge and Dong Seong Kim

Department of Computer Science and Software Engineering, University of Canterbury, Private Bag 4800, Christchurch, New Zealand


Abstract: Security metrics present the security level of a system or a network in both qualitative and quantitative ways. In general, security metrics are used to assess the security level of a system and to achieve security goals. Although many security metrics exist for security analysis, there is no systematic classification of them based on network reachability information. To address this, we propose a systematic classification of existing security metrics based on network reachability information. We mainly classify the security metrics into host-based and network-based metrics. Host-based metrics are classified into metrics "without probability" and "with probability", while network-based metrics are classified into "path-based" and "non-path-based". Finally, we present and describe an approach to developing composite security metrics and their calculation using a Hierarchical Attack Representation Model (HARM) via an example network. Our novel classification of security metrics provides a new methodology for assessing the security of a system.

Keywords: Attack Graphs, Cyber Security, Graphical Security Model, Security Assessment, Attack Trees.
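As an illustration of the path-based, "with probability" family of metrics that the abstract classifies, the sketch below enumerates attack paths in a toy reachability graph and takes the worst case over all paths. The hosts, probabilities, and function names are invented for the example; this is not the paper's HARM formalism:

```python
# Toy reachability graph: which hosts each host can reach, plus a
# per-host probability that its vulnerability can be exploited.
reach = {"attacker": ["web"], "web": ["db", "app"], "app": ["db"], "db": []}
p_exploit = {"web": 0.8, "app": 0.5, "db": 0.6}

def attack_paths(graph, src, dst, path=None):
    """Yield every simple path from src to dst (DFS enumeration)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
    for nxt in graph[src]:
        if nxt not in path:
            yield from attack_paths(graph, nxt, dst, path)

def prob_of_success(path):
    """Path-based 'with probability' metric: the attacker must exploit
    every host on the path, so the probabilities multiply."""
    p = 1.0
    for host in path[1:]:          # the attacker's own node needs no exploit
        p *= p_exploit[host]
    return p

# Composite network-level metric: the risk of the worst attack path.
paths = list(attack_paths(reach, "attacker", "db"))
risk = max(prob_of_success(p) for p in paths)
```

Here the two paths attacker→web→db (0.8 × 0.6 = 0.48) and attacker→web→app→db (0.8 × 0.5 × 0.6 = 0.24) give a worst-case risk of 0.48; "non-path-based" metrics would instead aggregate over hosts without enumerating paths.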

FARIS: Fast and Memory-Efficient URL Filter on CPU and GPGPU

Yuuki Takano and Ryosuke Miura

National Institute of Information and Communications Technology, Japan


Abstract: Uniform resource locator (URL) filtering is a fundamental technology for intrusion detection, HTTP proxies, content distribution networks, content-centric networks, and many other application areas. Some applications adopt URL filtering to protect user privacy from malicious or insecure websites. Some web browser extensions, such as AdBlock Plus, provide a URL-filtering mechanism against sites that intend to steal sensitive information.

Unfortunately, these extensions are implemented inefficiently, resulting in slow applications that consume much memory. Although AdBlock Plus provides a domain-specific language (DSL) to represent URLs, it internally uses regular expressions and does not take advantage of the benefits of the DSL. In addition, the number of filter rules has become large, which makes matters worse.

In this paper, we propose the fast uniform resource identifier-specific filter (FARIS), a domain-specific pseudo-machine for the DSL, to dramatically improve the performance of such browser extensions. Compared with a conventional implementation that internally adopts regular expressions, our proof-of-concept implementation is fast and has a small memory footprint.

Keywords: URL filter, Web, online advertisement.
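The performance gap the abstract describes, between interpreting filter rules directly and falling back on general regular expressions, can be sketched with a toy prefix trie: each rule is stored character by character, so matching a URL costs at most one trie step per character regardless of how many rules are loaded. This is an illustration only, not the FARIS pseudo-machine or AdBlock Plus's actual rule DSL:

```python
class URLFilter:
    """A minimal trie-based URL *prefix* filter (illustrative only)."""

    def __init__(self):
        self.root = {}

    def add(self, prefix):
        """Insert one blocking rule, stored one character per trie node."""
        node = self.root
        for ch in prefix:
            node = node.setdefault(ch, {})
        node["$"] = True                 # end-of-rule marker

    def matches(self, url):
        """True if any stored prefix matches the start of the URL.
        Cost is bounded by len(url), independent of the rule count."""
        node = self.root
        for ch in url:
            if "$" in node:              # a complete rule ended here
                return True
            node = node.get(ch)
            if node is None:
                return False
        return "$" in node

f = URLFilter()
f.add("http://ads.example.com/")
blocked = f.matches("http://ads.example.com/banner.png")   # True
allowed = not f.matches("http://example.com/")             # True
```

A regex-based filter typically tries each compiled pattern in turn, so its cost grows with the rule count; the shared-prefix structure above is one way a DSL-aware matcher avoids that.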

Detection of Severe SSH Attacks Using Honeypot Servers and Machine Learning Techniques

Gokul Kannan Sadasivam1, Chittaranjan Hota1 and Bhojan Anand2

1Department of Computer Science and Information Systems, BITS, Pilani – Hyderabad Campus, Hyderabad, Telangana, India – 500078
2School of Computing, National University of Singapore, Computing 1, 13 Computing Drive, Singapore – 117417


Abstract: There are several attacks on or using an SSH server: SSH port scanning, SSH brute-force attacks, and attacks using a compromised server. Attacks using a server include DoS attacks, phishing attacks, e-mail spamming, and so on. Sometimes an attacker breaks into a public SSH server and uses it for these activities. It is usually hard to detect the compromised SSH servers used by attackers; however, by analysing system logs an organisation can learn about compromises. For an organisation running several SSH servers, analysing the log files manually would be tedious. Also, high-speed networks demand better mechanisms for detecting compromises. In this paper, we detect compromised SSH sessions that are carrying out malicious activities. We use a flow-based approach and machine learning techniques to detect a compromised session. In a flow-based approach, individual packets are not scrutinised; hence, it works better on a high-speed network. The data is extracted from a distributed honeypot. The paper also describes the machine learning techniques with appropriate parameters and the feature selection technique. A real-time detection model that is tested on a public server is also presented. Several analyses showed that the J48 decision tree algorithm and the PART algorithm are best suited for detecting SSH compromises. It was inferred that the inter-arrival time between packets and the size of a packet payload play a significant role in detecting compromises.

Keywords: SSH Compromises, SSH Attacks, Machine Learning, Feature Selection, Flow-based Analysis.
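The flow-based approach the abstract describes reduces a session to aggregate features, with inter-arrival times and payload sizes being the ones it singles out, rather than inspecting packet contents. A minimal sketch follows; the packet values and the final threshold rule are invented for illustration, not the paper's trained J48/PART models:

```python
from statistics import mean, stdev

# A toy flow: (timestamp_seconds, payload_bytes) per packet. Made-up data.
flow = [(0.00, 512), (0.01, 1460), (0.02, 1460), (0.03, 1460), (0.05, 900)]

def flow_features(packets):
    """Aggregate per-flow features; no payload content is inspected,
    which is what makes the approach viable on high-speed links."""
    times = [t for t, _ in packets]
    sizes = [s for _, s in packets]
    iats = [b - a for a, b in zip(times, times[1:])]   # inter-arrival times
    return {
        "mean_iat": mean(iats),
        "std_iat": stdev(iats) if len(iats) > 1 else 0.0,
        "mean_payload": mean(sizes),
        "total_bytes": sum(sizes),
    }

feats = flow_features(flow)

# A single hand-written rule in the spirit of one decision-tree branch:
# sustained large payloads at small, regular intervals look more like a
# bulk transfer than an interactive SSH session. Thresholds illustrative.
suspicious = feats["mean_payload"] > 1000 and feats["mean_iat"] < 0.1
```

In the paper these features feed a learned classifier rather than a fixed rule; the point of the sketch is only how a flow collapses into a small feature vector.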

Automated User Interface Generation Involving Field Classification

Martin Tomasek and Tomas Cerny

Dept. of Computer Science, FEE, Czech Technical University in Prague, Technicka 2, Prague, Czech Republic


Abstract: Software applications are designed with a concrete purpose in mind, specified by business owners based on individual requirements. The user-system interaction is specified in the analysis phase, which defines inputs, outputs, interaction, etc. These elements could be specified separately for each target platform: designers know that mobile clients may have a different application flow than desktop clients. However, this is not a rule, and designers rarely think about the user context or application. This implies that outputs, inputs and interactions do not change automatically during the software lifecycle. In this paper we present techniques that can determine whether a user in a particular context should be required to spend his/her time filling in fields that are not needed to accomplish a specific business task. Moreover, these techniques can determine whether fields should be displayed, as well as how the system might interact with the user. Next, we present a computational architecture capable of these types of determinations and demonstrate the technique in a case study. Finally, we show how to automatically combine user interface generation with our approach.

Keywords: MDD, automated user interface generation, context-aware, business case, field classification, cross-cutting concerns.

Authentication and Authorization Rules Sharing for Internet of Things

Michal Trnka and Tomas Cerny

Dept. of Computer Science, FEE, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic


Abstract: Significant interest in the internet of things drives both research and industry production these days. Many important questions have been solved, but some remain open. One of the essential unresolved issues is the identity management of particular end devices.

Being able to share authorization and authentication rules across the network, between various sensors, applications and users, has clear advantages. It reduces duplication of those policies while ensuring that they are coherent across the board. There are various proposals, methods and frameworks for identity management in conventional environments. However, only a few of them are usable in an internet of things environment, and even fewer have been made directly for use on internet of things devices.

This paper proposes a solution for managing internet of things devices. The solution is based on a central identity store. Each device has an associated account in the store, against which any device and application in the network can verify the device's identity. The central element does not only provide authentication of the devices; the devices can also be associated with different roles, which can be used for authorization.

A case study for the proposed framework is built on top of current web standards such as OpenID Connect, OAuth and JSON Web Token. The central identity store is also based on a well-established open source solution. The communication scheme is very simple: after the device's account is established, it retrieves an OAuth 2.0 token and uses it to certify itself in every network connection. The token contains not only authentication information but also the roles assigned to the device. The central element thus creates a trusted environment and enables rapid response to any security events.

Keywords: Internet of Things, Security, Identity Management, Authentication, Authorization.
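The token flow the abstract describes, in which a device presents a token whose claims carry both its identity and its roles, can be sketched with standard-library decoding of a JWT payload. The token below is a hand-built, unsigned example for illustration; a real deployment must verify the token's signature against the identity store's key before trusting any claim:

```python
import base64
import json

def jwt_claims(token):
    """Decode the payload (middle segment) of a JWT.

    NOTE: this does NOT verify the signature; it only shows how the
    roles travel inside the token. Never trust unverified claims.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample unsigned token: header.payload. (empty signature segment).
# The subject name and role strings are invented for the example.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "sensor-42", "roles": ["temperature", "reader"]}).encode()
).rstrip(b"=").decode()
token = f"{header}.{payload}."

claims = jwt_claims(token)
# A service authorizes the request by checking the role claim:
authorized = "reader" in claims["roles"]
```

Because the roles ride inside the token, every service in the network can make the authorization decision locally, which is the duplication-reducing property the abstract highlights.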

Holistic Service Orchestration over Distributed Micro Data Center

TaeYeon Kim1,2 and Hongseok Jeon1,2

1Network Computing Convergence Research Section, Smart Network Research Department, Korea
2Electronics and Telecommunications Research Institute (ETRI), Korea


Abstract: The three technologies most frequently touted as paving the way to the next-generation networking era, i.e. Cloud, SDN, and NFV, keenly need to be harmonized for vertical application service providers. This holistic approach to orchestrating application services at the edge of the network is based on requirements such as a Service Profile and a Service Policy, with environmental conditions supplied by service providers and network providers respectively. This paper shows how to organize and manage an application service from the end user's perspective using these two requirements, profile and policy, leveraging the three disruptive technologies.

Keywords: Cloud, SDN, NFV, orchestration.

An Evaluative Analysis of DUAL, SPF, and Bellman-Ford

Shahab Tayeb and Shahram Latifi

Department of Electrical & Computer Engineering, University of Nevada, Las Vegas, NV, United States


Abstract: This paper discusses a comprehensive list of demerits associated with the use of the Diffusing Update Algorithm (DUAL) compared to its link-state counterpart, the Shortest Path First (SPF) algorithm, a variant of Dijkstra's algorithm. Such a comparison was neglected for the past two decades due to the proprietary nature of the former protocol. This led to the prevalence of the latter, which is why many computer network professionals adamantly recommend implementing link-state protocols in campus networks. The comparison matters today, however, pursuant to the release of several IETF Internet drafts attempting to standardize the Enhanced Interior Gateway Routing Protocol. Additionally, the results are compared with Bellman-Ford as a simple but widespread routing solution.

Dynamic routing protocols rely on algorithms that compute shortest paths using weighted digraphs and tree traversals. In this paper, not only are the algorithms discussed, but an in-depth analysis of the various features of the aforementioned protocols is also conducted. Abandoning periodic update messages and operating in an event-driven fashion with automatic failover capability are some of the features analysed. Part of the novelty of this paper lies in the mathematical representation of decision-making processes and metric computation. One of the notable findings is an evaluative analysis of convergence times achieved in a typical university campus routing implementation. Moreover, using wide metric vectors contributes to energy-aware routing and improved performance for jitter-sensitive services.

Keywords: Convergence, Diffusing Update Algorithm, Routing Protocols, Shortest Path First Algorithm, Wide Metrics.
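The SPF side of the comparison is Dijkstra's algorithm run over the weighted digraph that the link-state database describes. A minimal sketch follows; the topology and link weights are invented for the example:

```python
import heapq

def spf(graph, src):
    """Dijkstra's shortest-path-first over a weighted digraph, as used by
    link-state protocols such as OSPF. Returns cost to each reachable node."""
    dist = {src: 0}
    heap = [(0, src)]                         # (cost-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry, skip
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # relax the edge u -> v
                heapq.heappush(heap, (nd, v))
    return dist

# Toy topology: router -> {neighbour: link cost}
links = {"A": {"B": 10, "C": 5}, "B": {"D": 1}, "C": {"B": 3, "D": 9}, "D": {}}
costs = spf(links, "A")   # e.g. A reaches B via C at cost 5 + 3 = 8
```

DUAL, by contrast, is a distributed distance-vector computation: each router selects a feasible successor from neighbours' advertised distances rather than recomputing the whole tree, which is the root of the convergence-behaviour differences the paper analyses.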

River Publishers: Software Networking