Issue No. 4, October-December 2010 (vol. 3)
pp. 264-265
Published by the IEEE Computer Society
Liang-Jie (LJ) Zhang, Senior Member, IEEE
Welcome to the last issue of the IEEE Transactions on Services Computing (TSC) in 2010. In this issue, I am pleased to publish six research papers. The theme of this issue is “data intelligence.”
Data intelligence refers to data-driven analytics and their associated applications. The type of data, how the data are collected and processed, and how the data are used are all aspects of data intelligence. In this issue, data intelligence is reflected in decision-making scenarios such as resource consolidation, service composition, service orientation, legacy application migration, workflow discovery, and requirements-driven system validation. In the following, I introduce these papers in the context of the body of knowledge areas of services computing.
In the area of “Cloud Computing” (M.8.0.c), resource consolidation is a way to effectively provision available resources to service consumers, and servers are one of the key resources in cloud computing. From a data intelligence perspective, creating a data model that addresses server consolidation under given constraints is a challenge. The first paper, “A Mathematical Programming Approach for Server Consolidation Problems in Virtualized Data Centers” by Benjamin Speitkamp and Martin Bichler, presents a way to optimally allocate source servers to physical target servers while considering real-world constraints. The underlying optimization problem is shown to be NP-hard, so a heuristic approach is also presented for large-scale server consolidation projects. In addition, a preprocessing method for server load data is introduced to connect consolidation decisions with quality of service.
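To give a flavor of the capacity-constrained assignment at the heart of server consolidation, the sketch below implements a simple first-fit-decreasing heuristic on hypothetical CPU demands. It illustrates the problem class only; it is not the authors' formulation or heuristics, which are considerably richer.

```python
# Minimal sketch: first-fit-decreasing heuristic for server consolidation.
# NOT the paper's algorithm; it only illustrates assigning source servers
# to capacity-limited target servers. All names and numbers are hypothetical.

def consolidate(demands, capacity):
    """Assign source-server CPU demands to as few target servers as the
    heuristic manages, considering the largest demands first."""
    targets = []      # remaining capacity of each opened target server
    assignment = {}
    for server, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(targets):
            if demand <= free:        # first target that still fits
                targets[i] -= demand
                assignment[server] = i
                break
        else:
            # No existing target fits; open a new one.
            targets.append(capacity - demand)
            assignment[server] = len(targets) - 1
    return assignment, len(targets)

demands = {"srv-a": 45, "srv-b": 30, "srv-c": 25, "srv-d": 60}  # CPU units
assignment, used = consolidate(demands, capacity=100)
print(assignment, "targets used:", used)
```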
In order to address data intelligence among services, there is a need to represent the relationships of services from a modeling perspective. The second paper, “Service Data Correlation Modeling and Its Application in Data-Driven Service Composition” by Zhifeng Gu, Bin Xu, and Juanzi Li, proposes a Service Data Link (SDL) model to represent service data correlations, that is, data mappings among the input and output attributes of services. SDL draws a connection between service data correlations and webpage hyperlinks, and it defines service data correlations with explicit declarations, making it more expressive than implicit methods. The authors developed an XML implementation of SDL that can coexist with WSDL. This paper is directly associated with the body of knowledge area “Relationship Specification Languages” (M.5.0.b).
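As a loose illustration of explicit service data correlations (not the paper's XML vocabulary), the Python sketch below records links from one service's output attribute to another service's input attribute and follows them, the basic step a data-driven composer would take. All service and attribute names are hypothetical.

```python
# Illustrative sketch only: the paper defines SDL as an XML vocabulary that
# can coexist with WSDL; here a service data correlation is reduced to a
# mapping from one service's output attribute to another service's input
# attribute. All service and attribute names are hypothetical.

links = [
    # (source service, output attribute) -> (target service, input attribute)
    (("GeoCoder", "coordinates"), ("WeatherService", "location")),
    (("WeatherService", "forecast"), ("AlertService", "conditions")),
]

def downstream(service, attribute):
    """Follow explicit data links: the data-driven step in composition."""
    return [tgt for (src, tgt) in links if src == (service, attribute)]

print(downstream("GeoCoder", "coordinates"))
# -> [('WeatherService', 'location')]
```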
When IT concepts and enabling technologies are used in real business scenarios, they need to be represented in a way that business professionals can understand and fully leverage. This type of translation also needs to consider data intelligence in the field of services computing. Specifically, in the body of knowledge area “Service-Oriented Business Consulting” (M.10.1), the third paper, “An Intentional Approach to Service Engineering” by Colette Rolland, Manuele Kirsch-Pinheiro, and Carine Souveyet, proposes a way to describe services in business terms, including intentions and strategies. The authors present the Intentional Service Model to describe intentional services and populate the service registry with their descriptions. They also propose a methodology to determine intentional services that meet business goals and to publish them in the registry. As a gap-bridging approach, the authors introduce a set of transformations between the intentional level and the implementation level.
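For readers who think in code, the following sketch caricatures an intentional service description using the common goal-modeling pattern of verb, target, and strategy, paired with the operational service that realizes it. It is a deliberately simplified assumption, not the paper's Intentional Service Model; all names and values are hypothetical.

```python
# A loose sketch of an intentional service description. The paper's
# Intentional Service Model is richer; here a business intention is recorded
# alongside the operational service that realizes it at the implementation
# level. All field values are hypothetical.
from dataclasses import dataclass

@dataclass
class IntentionalService:
    verb: str         # what the business wants to do
    target: str       # the object of the intention
    strategy: str     # the manner of achieving it
    realized_by: str  # operational (e.g., WSDL-described) service

registry = [
    IntentionalService("Purchase", "Material", "by tendering", "TenderingWS"),
    IntentionalService("Purchase", "Material", "by direct order", "OrderingWS"),
]

# Retrieval by business goal rather than by technical signature:
matches = [s for s in registry if (s.verb, s.target) == ("Purchase", "Material")]
print([s.realized_by for s in matches])
```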
In the area of “Legacy Application Transformation in Services” (M.14.1.c), data intelligence in the application domain covers application analysis and transformation roadmaps, both important steps in application modernization projects. The fourth paper, “Converting Legacy Desktop Applications into On-Demand Personalized Software” by Youhui Zhang, Gelin Su, and Weimin Zheng, introduces a way to enable personalized software applications on demand. Lightweight virtualization technologies are used to convert existing desktop software into on-demand software delivered across the Internet without any modification of source code. Specifically, the authors propose a construction and runtime model of the software. In addition, a network resource access protocol is developed to implement content-addressable storage, peer-to-peer (P2P) transfer acceleration, content integrity checking, and the prevention of illegal copying.
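Content-addressable storage, one ingredient of the authors' protocol, is easy to illustrate: content is stored and retrieved by the hash of its bytes, so an integrity check falls out naturally. The sketch below is a generic illustration of the technique, not the paper's protocol.

```python
# Minimal sketch of content-addressable storage. Content is addressed by the
# SHA-256 digest of its bytes, so verifying integrity on retrieval is free.
# This is a generic illustration, not the paper's network protocol.
import hashlib

store = {}

def put(data: bytes) -> str:
    address = hashlib.sha256(data).hexdigest()
    store[address] = data
    return address

def get(address: str) -> bytes:
    data = store[address]
    # Integrity check: the address *is* the expected digest.
    assert hashlib.sha256(data).hexdigest() == address, "corrupted content"
    return data

addr = put(b"application block 0001")
print(addr[:12], get(addr))
```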
Discovery provides a way to explore data intelligence from a pool of candidates based on querying algorithms. In the area of “Business Process Management” (M.7.b), the fifth paper, “Secure Abstraction Views for Scientific Workflow Provenance Querying” by Artem Chebotko, Shiyong Lu, Seunghan Chang, Farshad Fotouhi, and Ping Yang, addresses provenance, which has become increasingly important in scientific workflows and services computing for capturing the derivation history of a data product. This history includes the original data sources, intermediate data products, and the steps that were applied to produce the final data product. The authors propose a formal scientific workflow provenance model, which serves as the basis for querying and access control over workflow provenance. They also propose a security model for fine-grained access control over multilevel provenance, formalize the notion of security views together with an algorithm for deriving them, and formalize the notion of secure abstraction views together with an algorithm for computing them. A prototype called SecProv has been developed, and experiments show the effectiveness and efficiency of the approach.
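The following toy sketch illustrates the general idea of a security view over provenance: derivation records carry sensitivity levels, and a user's view keeps only what their clearance permits. It is a hypothetical simplification, not the SecProv model or its view-derivation algorithms; all names and levels are made up.

```python
# Toy provenance graph with a "security view". Each data product records the
# step that derived it, its input products, and a sensitivity level.
# Hypothetical illustration only, not the paper's formal model.

provenance = {
    # data product: (producing step, input products, sensitivity level)
    "raw_reads": (None, [], "public"),
    "aligned":   ("align_step", ["raw_reads"], "public"),
    "variants":  ("call_step", ["aligned"], "restricted"),
}

def secure_view(clearance: set) -> dict:
    """Keep only provenance records whose level the user may see."""
    return {d: (step, inputs)
            for d, (step, inputs, level) in provenance.items()
            if level in clearance}

print(secure_view({"public"}))                # hides the restricted derivation
print(secure_view({"public", "restricted"}))  # full derivation history
```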
Gaining insight into requirements calls for analytic technologies that effectively organize the data associated with functional and nonfunctional requirements. Exploring data intelligence from a requirements perspective can help define quality of service (QoS) metrics for guiding system design. In traditional system design, Quality Function Deployment (QFD) has been used as a quality management technique for improving complex products. In the area of “Formalization of Services Composition” (M.6.0.d), the sixth paper, “Technical Target Setting in QFD for Web Service Systems Using an Artificial Neural Network” by Lianzhang Zhu and Xiaoqing (Frank) Liu, analyzes requirements for web services and their design attributes and applies QFD to the development of web service systems, connecting QoS requirements with web service design attributes. The authors propose a new method for technical target setting in QFD based on an artificial neural network. Their results illustrate that conventional methods, such as benchmarking and linear regression, cannot capture nonlinear relationships between design attributes and QoS requirements, whereas the proposed approach sets technical targets consistent with those relationships in both linear and nonlinear scenarios.
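To illustrate why a nonlinear learner helps here, the sketch below fits a one-hidden-layer network (plain NumPy gradient descent) to a made-up nonlinear QoS response that a linear regression could not capture. It is a generic illustration of the technique, not the authors' QFD method; all data and hyperparameters are hypothetical.

```python
# Tiny one-hidden-layer regressor: learn a (possibly nonlinear) mapping from
# web service design attributes to a QoS metric. Made-up data; illustration
# of the general technique only, not the paper's QFD procedure.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))            # two design attributes
y = np.sin(3 * X[:, :1]) + X[:, 1:] ** 2   # nonlinear QoS response

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

for _ in range(3000):                      # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)               # hidden layer
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h ** 2)         # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

# The learned surface could then guide technical target setting.
print("fit MSE:", float((err ** 2).mean()))
```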
Finally, I would like to extend my special thanks to all of the Associate Editors and Guest Editors for their great contributions in coordinating the review process for all papers submitted in 2010. Without the dedicated contributions of the reviewers, it would be impossible to obtain the review comments that support the decision-making process. I look forward to your creative input to TSC as an author, reviewer, or Associate Editor.
Liang-Jie (LJ) Zhang
Editor-in-Chief

For information on obtaining reprints of this article, please send e-mail to: tsc@computer.org.
