Issue No.02 - February (2006 vol.7)
pp: 3
Published by the IEEE Computer Society
Jacques Mossière, Institut National Polytechnique de Grenoble
Edward Curry, The National University of Ireland, Galway
Abdelkrim Beloued, ENST-Bretagne
Jean-Marie Gilliot, ENST-Bretagne
Françoise André, University of Rennes
Maria-Teresa Segarra, ENST-Bretagne
Vladimir Dyo, University College London
Etienne Antoniutti Di Muro, Università degli Studi di Trieste
Michael A. Jaeger, Berlin University of Technology
ABSTRACT
This issue features summaries of four of the eight dissertations presented at Middleware 2005's 2nd International Middleware Doctoral Symposium. The March issue will feature the other four summaries.
The 2nd International Middleware Doctoral Symposium
Edward Curry, The National University of Ireland, Galway
Jacques Mossière, Institut National Polytechnique de Grenoble
Following the successful symposium at Middleware 2004, the 2nd International Middleware Doctoral Symposium took place at Middleware 2005. The symposium is a forum for an invited group of doctoral students; the students get to present their work, obtain guidance from mentors, and meet with their peers. The mentors are senior university or industry researchers, including many current or former members of the Middleware program committee.
The symposium aims to give students constructive criticism before their thesis defense and foster discussions related to future career perspectives. A similar series of doctoral symposia is held in connection with other conferences, including the International Conference on Object-Oriented Programming, Systems, Languages, and Applications; the European Conference on Object-Oriented Programming; the International Conference on Software Engineering; and the International Semantic Web Conference.
The symposium attracted many high-quality submissions, ensuring a competitive selection process. The program committee selected eight doctoral candidates to present their work:

    • Abdelkrim Beloued, ENST-Bretagne (Context-Aware Replication and Consistency),

    • Vladimir Dyo, University College London (Middleware Design for Integration of Sensor Network and Mobile Devices),

    • Etienne Antoniutti Di Muro, Università degli Studi di Trieste (Translucent Replication: Three Open Questions),

    • Michael A. Jaeger, Berlin University of Technology (Self-Organizing Publish/Subscribe Systems),

    • Rüdiger Kapitza, University of Erlangen Nürnberg (EDAS: Providing an Environment for Decentralized Adaptive Services),

    • Sharath Babu Musunoori, Simula Research Laboratory (Quality Aware Service Planning in Computational Grids),

    • Trevor Parsons, University College Dublin (A Framework for Detecting Performance Design and Deployment Antipatterns in Component Based Enterprise Systems), and

    • Kurt Schelfthout, K.U. Leuven, Belgium (Coordination Middleware for Decentralized Applications in Dynamic Networks).

This issue of IEEE Distributed Systems Online features summaries of the first four dissertations. The March issue will feature summaries of the final four.
Acknowledgments
We thank the mentor committee members for their time and effort: Pascal Déchamboux (France Télécom R&D), Thomas Gschwind (IBM Research, Switzerland), Cecilia Mascolo (University College London), David Rosenblum (University College London), Rick Schantz (Bolt, Beranek, and Newman Technologies), and Joe Sventek (University of Glasgow). We also gratefully acknowledge the support of BBN Technologies.




Edward Curry is a PhD candidate at the National University of Ireland, Galway. Contact him at edcurry@acm.org.




Jacques Mossière is a professor at the Institut National Polytechnique de Grenoble. Contact him at jacques.mossiere@inrialpes.fr.
Context-Aware Replication and Consistency
Abdelkrim Beloued, ENST-Bretagne
Jean-Marie Gilliot, ENST-Bretagne
Maria-Teresa Segarra, ENST-Bretagne
Françoise André, University of Rennes
Replicating data and ensuring its consistency in mobile environments is challenging. Consider applications such as phone directories, diaries, or photo albums. A community of users, perhaps a family, an organization's members, or company employees, can share the data that these applications handle. However, the data must be available even when the application is disconnected, making data replication and consistency essential.
Also, several users can access these applications simultaneously using heterogeneous devices. In other words, the software and hardware properties, such as storage space and processor capacity, will vary, as will network parameters such as network type and bandwidth. So, the applications must be able to adapt the data to accommodate various devices and user requirements.
To address this issue, we're interested in context-aware replication techniques. We've proposed a context-aware replication system (see figure 1) that dynamically adapts the data replication to changes in contextual information. 1


Figure 1. Our replication system's architecture.

The replication system provides three principal functionalities through the following components:

    • the replica planner, which creates replicas and places them on nodes;

    • the localization manager, which locates replicas for read-write operations and then performs these operations; and

    • the consistency manager, which ensures replica consistency by exchanging update messages after each write operation and resolving update conflicts.

Depending on the context, these components provide a replication scheme that enables transparent access to data.
Our replication system takes the execution context into account and adapts its functionalities accordingly. It distinguishes two types of context: required and provided. 1,2 For this reason, we've proposed an adaptation trigger module that comprises three principal submodules: an application interface, a context analyzer, and a system-state monitor.
The application interface and the context analyzer detect pertinent changes in the required and provided context information, store those changes in the required-context and provided-context repositories, and then notify the other modules (the replica planner, localization manager, and consistency manager), which modify the replication scheme stored in the history. In the same way, the system-state monitor detects pertinent changes in system-state parameters, such as response time, which the system measures and stores in its history. It then notifies the replica planner, localization manager, and consistency manager, and these modules modify the replication scheme to reflect the new system state.
Our system also adapts the replication strategy to context or system-state variations. For example, the system might change its strategy from pessimistic to optimistic to improve data availability. So, adaptation trigger modules monitor context information and system-state parameters, detecting the pertinent change in this information and then notifying the strategy manager. The strategy manager chooses the most appropriate strategy, ensures system consistency, and implements the strategy in the replica planner, localization manager, and consistency manager.
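To make this adaptation flow concrete, the following minimal Java sketch shows an observer-style notification path from an adaptation trigger to the replication components; the class and method names (AdaptationTrigger, StrategyManager, onContextChange) are illustrative assumptions, not our actual implementation.

// Hypothetical sketch of the adaptation flow described above; names are
// illustrative, not the real system's API.
import java.util.ArrayList;
import java.util.List;

interface ReplicationComponent {              // the replica planner, localization
    void onContextChange(String change);      // and consistency managers would implement this
}

enum Strategy { PESSIMISTIC, OPTIMISTIC }

class StrategyManager implements ReplicationComponent {
    private Strategy current = Strategy.PESSIMISTIC;
    public void onContextChange(String change) {
        // e.g., switch to optimistic replication when connectivity degrades
        if (change.contains("low bandwidth")) {
            current = Strategy.OPTIMISTIC;
        }
        System.out.println("replication strategy is now " + current);
    }
}

class AdaptationTrigger {
    private final List<ReplicationComponent> listeners = new ArrayList<>();
    void register(ReplicationComponent c) { listeners.add(c); }
    // Called by the context analyzer or system-state monitor once a pertinent
    // change has been detected and stored in the context repositories.
    void notifyChange(String change) {
        for (ReplicationComponent c : listeners) {
            c.onContextChange(change);
        }
    }
}

public class AdaptationDemo {
    public static void main(String[] args) {
        AdaptationTrigger trigger = new AdaptationTrigger();
        trigger.register(new StrategyManager());
        trigger.notifyChange("low bandwidth cellular link detected");
    }
}

In the real system, the replica planner, localization manager, and consistency manager would register in the same way and rewrite the replication scheme stored in the history.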




Abdelkrim Beloued is a PhD student in the Department of Computer Science at ENST-Bretagne, Brest, France. Contact him at abdelkrim.beloued@enst-bretagne.fr.




Jean-Marie Gilliot is an associate professor in the Department of Computer Science at ENST-Bretagne, Brest, France. Contact him at jm.gilliot@enst-bretagne.fr.




Françoise André is a professor at University of Rennes 1 and researcher at IRISA/INRIA Laboratory, Rennes, France. Contact her at francoise.andre@irisa.fr.




Maria-Teresa Segarra is an associate professor in the Department of Computer Science at ENST-Bretagne, Brest, France. Contact her at mt.segarra@enst-bretagne.fr.
Middleware Design for Integrating Sensor Networks and Mobile Devices
Vladimir Dyo, University College London
Most research into data-dissemination protocols in sensor networks has focused on static sensor networks, where data is collected at one or more static points. There hasn't been much research into data collection from a sensor network using mobile devices, even though this approach offers several potential advantages.
First, mobile devices can extend sensor network coverage by providing access to remote and disconnected parts through an available Wi-Fi or cellular link. Second, data collection using mobile devices can be more energy efficient, because a mobile collector can pick up the data directly from the sensor instead of the data having to travel across the entire sensor network. Third, data collection with a mobile sink is more practical in certain situations, and we can apply it to various real-world applications.
At University College London, we aim to design data-dissemination middleware for mobile collectors. 1 We're investigating a distributed index to help mobile users locate data in large-scale networks 2 and a data-dissemination protocol for mobile sinks. Mobile data collection will require faster and better data-aggregation and data-collection algorithms, because mobile users can't wait for data to aggregate and arrive. Data-collection techniques should consider user mobility, speed, and the channel bandwidth between the user and the sensor network. More research is needed to investigate mobile data-collection protocols' design trade-offs and performance.
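As a rough illustration of why mobility, speed, and bandwidth matter, the following back-of-envelope Java sketch, with assumed numbers rather than measured ones, checks whether a passing collector's contact window is long enough to drain a sensor's buffer.

// Back-of-envelope check (not our protocol): can a passing mobile collector
// drain a sensor's buffer during one radio contact?
public class ContactBudget {
    public static void main(String[] args) {
        double radioRangeM   = 50;          // sensor radio range in metres (assumed)
        double speedMps      = 1.5;         // walking-speed collector, m/s (assumed)
        double bandwidthBps  = 250_000 / 8; // ~250 kbit/s 802.15.4 link, in bytes/s
        double bufferedBytes = 512 * 1024;  // data buffered at the sensor (assumed)

        double contactSeconds    = (2 * radioRangeM) / speedMps; // time to cross the radio cell
        double transferableBytes = bandwidthBps * contactSeconds;

        System.out.printf("contact window: %.0f s, can move %.0f KB of %.0f KB%n",
                contactSeconds, transferableBytes / 1024, bufferedBytes / 1024);
        // If transferableBytes < bufferedBytes, the collector must slow down or
        // revisit, or the network must pre-aggregate data toward its path.
    }
}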

References





Vladimir Dyo is a PhD candidate in the Department of Computer Science at University College London. Contact him at v.dyo@cs.ucl.ac.uk.
Translucent Replication: Three Open Questions
Etienne Antoniutti Di Muro, Università degli Studi di Trieste
Traditionally, middleware systems have strived to achieve transparency—to isolate applications from underlying hardware and software changes. To date, middleware products have tended to be monolithic software systems, without much customization support for addressing application needs at runtime. Consequently, resources are underutilized and application performance is poor.
Part of the problem is that architectural transparency affects system dependability—that is, reliability and availability. Software-based replication is a common technique for building dependable services. However, solutions that efficiently implement replication present significant trade-offs, especially when dealing with performance.
So, my first question is, "Can I define a middleware architecture that lets applications adapt middleware functionality to application-specific data-dependability requirements and address performance issues?"
My approach is to address the conflicts between dependability and performance through translucent replication. Middleware translucency is an architectural model that exposes both interfaces and interaction constraints to the system designers who use the framework.
I define two models of translucency: top-down and bottom-up. In the top-down model, the upper layers tell the lower layers how to satisfy their requests. In the bottom-up model, the lower layers provide information about how they're performing their work so that the upper layers can adapt their behavior accordingly.
I believe that replication middleware can benefit from both models, because both enable the designers to select appropriate replication strategies and constraints. Furthermore, because the models provide continuous knowledge about the working environment, the middleware can better balance system dependability and performance at runtime.
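A minimal Java sketch of the two directions, with interface and method names that are mine for illustration rather than part of my actual API, might look like this:

// Illustrative-only sketch of the two translucency directions.
interface TopDownHints {
    // The upper layer tells the replication layer how to treat its data,
    // e.g. "photo-album entries may be read from a stale replica".
    void setConsistencyHint(String dataClass, String hint);
}

interface BottomUpFeedback {
    // The lower layer reports how it is performing so the application can adapt.
    double replicaLagMillis();
}

public class TranslucencyDemo implements TopDownHints, BottomUpFeedback {
    private final java.util.Map<String, String> hints = new java.util.HashMap<>();
    public void setConsistencyHint(String dataClass, String hint) { hints.put(dataClass, hint); }
    public double replicaLagMillis() { return 120.0; } // stubbed measurement

    public static void main(String[] args) {
        TranslucencyDemo layer = new TranslucencyDemo();
        layer.setConsistencyHint("photo-album", "stale-reads-ok");  // top-down
        if (layer.replicaLagMillis() > 100) {                       // bottom-up
            System.out.println("lag is high: the application switches to batched writes");
        }
    }
}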
My second question, then, is, "Which architecture should I use?"
My work focuses on finding the best architectural solution for implementing translucency. Should I use an object-oriented-programming design, or should I consider an architecture based on computational reflection or more complex solutions? Figure 2 shows my proposed middleware architecture for the translucent replication model. The translucent APIs require minimal programming support at the application level to define the communication protocol because they use the middleware upon which the application is built.


Figure 2. My translucent replication model architecture—a single replica.

I augmented the architecture with a Knowledge Repository. Designers plug semantic rules into the Knowledge Repository at deployment. These rules, based on the semantics of the application-specific data being replicated, define how the middleware should process the data (a top-down model).
The context service continuously gathers information about the working environment through an instrumentation sensors layer and delivers it to the layered services. Middleware layers use this information to request data-handling decisions from higher layers (a bottom-up model).
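The following hypothetical Java sketch, whose types are illustrative rather than the actual components in figure 2, shows how a deployment-time semantic rule plugged into the Knowledge Repository could drive the middleware's data handling:

// Hypothetical sketch of plugging a semantic rule into the Knowledge Repository.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

class SemanticRule {
    final Predicate<String> applies;   // which application data the rule covers
    final String handling;             // how the middleware should replicate that data
    SemanticRule(Predicate<String> applies, String handling) {
        this.applies = applies;
        this.handling = handling;
    }
}

class KnowledgeRepository {
    private final List<SemanticRule> rules = new ArrayList<>();
    void plugIn(SemanticRule r) { rules.add(r); }        // done by designers at deployment
    String handlingFor(String dataItem) {                // queried by the replication layer
        for (SemanticRule r : rules)
            if (r.applies.test(dataItem)) return r.handling;
        return "default: synchronous replication";
    }
}

public class KnowledgeRepositoryDemo {
    public static void main(String[] args) {
        KnowledgeRepository repo = new KnowledgeRepository();
        repo.plugIn(new SemanticRule(d -> d.startsWith("log:"),
                "lazy replication, eventual consistency"));   // a top-down rule
        // The replication layer consults the repository per data item; bottom-up,
        // the context service would feed environment readings back to higher layers.
        System.out.println(repo.handlingFor("log:access-2005-11"));
        System.out.println(repo.handlingFor("account:balance"));
    }
}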
My third question is, "How should I evaluate translucent replication?"
That is, how do we quantify the advantages of solutions based on data semantics? Complexity and adaptation make systems difficult to evaluate. We should measure general quality-of-service levels, adaptation and missed opportunities to adapt, sensitivity to context changes, and so forth. I'm investigating how to derive new metrics and benchmarks for highly adaptable architectures.
Further details about our work appear elsewhere, 1 or feel free to email me for more information.
Acknowledgments
I thank Alberto Bartoli, who introduced me to Arjuna Technologies. I am grateful to Stuart Wheater for his much-valued supervision and his continuous availability. Arnaud Simon guided my first steps into the Java Message Service world, and I will remember my interactions with him with pleasure.

Reference

Etienne Antoniutti Di Muro is a PhD student in the Reliable Distributed Systems Group, Università degli Studi di Trieste, Italy. Contact him at eantoniutti@units.it.
Self-Organizing Publish/Subscribe
Michael A. Jaeger, Berlin University of Technology
The publish/subscribe communication paradigm best suits information-driven applications, such as news and update services. Pub/sub systems can be a building block for next-generation distributed applications that exploit the increasing pervasiveness of networked computers in our daily lives. We can implement such systems in several ways: centralized or distributed, push- or pull-based. With a distributed implementation, brokers form an overlay network and cooperate to provide the notification service's functionality. This removes the information-dissemination task from the application layer and encapsulates it in the notification service.
Clients connect to their local broker and can publish or subscribe to notifications, and the brokers use routing tables to route the published notifications to the clients interested in them (see figure 3).


Figure 3. A distributed notification service. The clients p and s are connected to their local brokers B1 and B5, respectively. Client s subscribed to filter F, which matches the notification n that p publishes. The subscription information is stored in the broker network's routing tables to enable brokers (for example, B3 and B4) to forward n correctly from B1 to B5.
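The following minimal Java sketch illustrates the content-based routing idea from figure 3 on a shortened B1-B3-B5 path; the Broker and Notification classes are illustrative assumptions, not a particular pub/sub system's API.

// Minimal content-based routing sketch in the spirit of figure 3.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

class Notification {
    final Map<String, Object> attrs;
    Notification(Map<String, Object> attrs) { this.attrs = attrs; }
}

class Broker {
    final String name;
    // routing table: which filter leads to which next hop (null = local client)
    final Map<Predicate<Notification>, Broker> routes = new LinkedHashMap<>();
    Broker(String name) { this.name = name; }

    void subscribe(Predicate<Notification> filter, Broker nextHop) {
        routes.put(filter, nextHop);
    }

    void publish(Notification n) {
        for (Map.Entry<Predicate<Notification>, Broker> e : routes.entrySet()) {
            if (e.getKey().test(n)) {
                if (e.getValue() == null) {
                    System.out.println(name + " delivers the notification to its local subscriber");
                } else {
                    System.out.println(name + " forwards the notification to " + e.getValue().name);
                    e.getValue().publish(n);
                }
            }
        }
    }
}

public class PubSubDemo {
    public static void main(String[] args) {
        // shortened B1 -> B3 -> B5 path for brevity (figure 3 also shows B2 and B4)
        Broker b1 = new Broker("B1"), b3 = new Broker("B3"), b5 = new Broker("B5");
        Predicate<Notification> f = n -> "stock".equals(n.attrs.get("topic")); // filter F
        b5.subscribe(f, null);  // client s is attached to B5
        b3.subscribe(f, b5);    // routing entries point back toward the subscriber
        b1.subscribe(f, b3);
        // client p, attached to B1, publishes a notification n that matches F
        b1.publish(new Notification(Map.<String, Object>of("topic", "stock", "price", 42)));
    }
}

In a real broker network, the subscription would be propagated automatically from B5 toward the publishers; here the routing entries are wired by hand for brevity.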

Much research exists on routing algorithms and scalability issues, but the notification services' topology in Internet-like environments is mostly assumed to be static. However, for many applications, network usage patterns are highly dynamic in practice. So, it makes sense to search for mechanisms to enable the broker topology to adapt itself to current usage patterns and optimize its function accordingly.
We're currently investigating situations in which the broker overlay network structure's reconfiguration increases pub/sub system efficiency. 1 We're exploring the system's ability to adapt to changing usage patterns and considering reconfiguration issues such as delay, message overhead, and message ordering. We're also developing a metric to decide whether reconfiguration is beneficial—to determine if it's profitable to insert a new link between two brokers based on the number of identical notifications that brokers consume in a certain neighborhood. If the link is inserted, another link must be removed to maintain the acyclic network structure. Reconfiguring the network structure this way must happen in consensus with the directly affected brokers to avoid oscillation.
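A hedged sketch of this kind of profitability check, with an assumed threshold and counting scheme rather than our actual metric, could look as follows in Java:

// Sketch of a link-insertion profitability check; threshold and counting
// scheme are assumptions for illustration.
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReconfigurationCheck {
    // notifications each broker consumed during the observation window
    static Set<String> consumedBy(String broker, Map<String, Set<String>> log) {
        return log.getOrDefault(broker, Set.of());
    }

    public static void main(String[] args) {
        Map<String, Set<String>> log = Map.of(
            "B1", Set.of("n1", "n2", "n3", "n4"),
            "B5", Set.of("n2", "n3", "n4", "n5"));

        Set<String> common = new HashSet<>(consumedBy("B1", log));
        common.retainAll(consumedBy("B5", log));   // identical notifications both consumed

        int hopsOnCurrentPath = 3;                 // B1 -> B3 -> B4 -> B5
        int threshold = 2;                         // assumed profitability bound

        boolean profitable = common.size() >= threshold && hopsOnCurrentPath > 1;
        System.out.println("insert a direct B1-B5 link? " + profitable);
        // If the link is added, one link on the old B1..B5 path must be dropped,
        // in agreement with the affected brokers, to keep the overlay acyclic.
    }
}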
We're also working on making pub/sub systems self-stabilizing. If such a system starts in an arbitrary configuration, it's guaranteed to eventually reach a correct state, if one exists. Much of the literature on self-stabilization views reconfigurations as faults that are eventually incorporated into the system structure. Consequently, reconfigurations can lead to service disruption. However, we want to integrate external reconfiguration stimuli into a self-stabilizing pub/sub system such that reconfiguration is possible without service interruption.
Probably the most popular pub/sub application today is news dissemination using RDF Site Summary (RSS). We plan to integrate legacy RSS into a scalable, decentralized, push-based pub/sub system. The goal is to improve the scalability and timeliness of news dissemination and to enable clients to issue more sophisticated subscriptions using a filtering language that the content-based pub/sub system provides.

Reference





Michael A. Jaeger is a PhD student in the Communication and Operating Systems Group, Berlin University of Technology, Germany. He holds a scholarship from Deutsche Telekom Stiftung. Contact him at michael.jaeger@acm.org.