# MIRACLE—Model for Integration of Remote Laboratories in Courses that Use Laboratory and e-Learning Systems

Drago Cmuk
Tarik Mutapcic
Ivan Bilic

Pages: pp. 275-288

Abstract—The emergence of Websites that let users easily contribute to their content has mobilized individuals on a scale rarely seen before. Web 2.0 transformed the passive reader into an active user, drawing millions of people into a community previously reserved for professionals. Users became able to experiment with data, collaborate with other users, and add value to a community of users. A similar revolution is needed in electrical engineering education. In this field, courses offer a significant amount of theory and content that students generally find unstimulating. Remote laboratories (RLs), however, could make a difference. Instead of being passive collectors of theory, students could become active builders of their own knowledge. At this point, the design of such a laboratory becomes important. Without a detailed user-oriented design, an RL can be counterproductive, generating frustration instead of motivation. A team of researchers used the QFD method to translate multidimensional and interdependent user requirements into the RL design model MIRACLE. The MIRACLE model is based on survey results, instructional design, and good e-learning practice; as such, it brings satisfaction, raises effectiveness and motivation, and makes electrical engineering courses appealing to students.

Index Terms—Remote laboratory, e-learning, user-oriented design.

## Introduction

In the last two decades, the World Wide Web has strongly infiltrated all areas of human activity, especially those connected to information exchange. Millions of people across the globe have started to use the Internet as a communication channel, and its potential grows daily. These changes strongly influence universities and their teaching paradigm. Different technologies are used to enhance everyday teaching practice, but their use is often not optimized, failing to meet students' real needs [1], [2]. Improvements in education should come from the systematic implementation of optimized blended learning models, based on educational theories and appropriate technologies and media, at an institutional level, which is not always the case [3]. In this paper, the need for such a model in electrical engineering education (3E) is presented and a solution proposed. This solution should influence institutional strategic planning and educational delivery at universities in order to enhance teaching practice and bring students a stimulating learning environment.

Practical experience, as a crucial factor in one's learning cycle, has been recognized by many authors [4], [5], [6], [7], [8], [1]. In 3E, this kind of experience is gained through experimentation and measurement in the laboratory. When laboratory exercises are not used enough, students' motivation, the practical value of the course, and its overall effectiveness suffer [6], [5], [9], [10]. However, the use of hands-on laboratories in 3E courses is often reduced because of the difficulty of setting up and maintaining such a laboratory. This problem can, in large part, be solved by deploying remote and virtual laboratories.

The virtual laboratory offers a simulation of real measurements, but the data are artificial; although useful for deepening students' conceptual understanding, it is a poor replacement for practical work in real laboratories [1], [11], [6]. Student engineers need to be exposed to practical experience and the uncertainties of a real-life environment, which can be achieved only in real laboratories. Remote laboratories (RLs) give students remote control over real experiments operating at a distant location. Access to real measurement equipment, combined with the advantages of a modern Learning Management System (LMS) such as a video link and learning community support, can therefore be the best possible alternative to a real hands-on laboratory [6], [10], [1]. This is one of the reasons why the RL should become an integral part of teaching practice at electrical engineering universities.

However, RLs alone cannot reach the goal of effective teaching [12], [13], [1], [14]. Different Web 2.0 technologies could be used to extend the possibilities of such a system. To do so, they must address both the pedagogical and the didactic aspects of online teaching, combining the strengths of classical teaching with the advantages of the emerging e-learning technologies [3].

The combination of multiple approaches to learning is referred to as blended learning (BL), i.e., a blended use of virtual and physical resources. In the strictest sense, blended learning happens any time an instructor combines two or more methods of instruction delivery [3]. Today, 3E requires effective blended learning courses that exploit the possibilities of Web 2.0 technologies together with the positive sides of classroom teaching and laboratory work. RLs are a particularly important and promising part of that education, and they have to be integrated into such courses in an optimized way, usually combined with a hands-on and/or a virtual laboratory [1].

Varnava-Marouchou writes about the recent increased efforts of universities to enrich the student's learning environment with IC technologies. Our society has become a knowledge society, and it seems crucial to provide learning wherever, whenever, and under whatever terms the student demands it. "We are shifting from 'just-in-case' education, based on degree-based programs, to 'just-in-time' education, where knowledge and skills are obtained during a career, to 'just-for-you' education, which is student centered and customized to the needs of the student. By using information technology, and in an effort to eliminate the constraints of time and space, higher education can begin making learning more in line with the lifestyles and career needs of the students they serve" [15].

Implementing a successful blend in a course or faculty curriculum is not so much a matter of technology as it is a matter of design [13], [3], [15], [16]. One of the biggest problems e-learning faces today is that traditional educational systems want to use it only as an addition to the current system, without changing the core of teaching practice. In their book, Bonk and Graham [3] describe three main levels of the blended learning approach as follows:

• Enabling blends primarily focus on addressing issues of access and convenience. For example, blends intended to provide additional flexibility to the learners or blends that attempt to provide the same opportunities or learning experience but through a different modality.
• Enhancing blends allow incremental changes to the pedagogy but do not radically change the way teaching and learning happens. This can occur on both ends of the spectrum. For example, in a traditional F2F learning environment, additional resources and perhaps even some supplementary materials can be included online.
• Transforming blends are blends that allow a radical transformation of the pedagogy. For example, changing from a model where learners just receive information to a model where they actively construct knowledge through dynamic interactions. These types of blends enable intellectual activity that was not possible in practice without the technology.

Each teaching and learning method (applied through different technologies and media) has its own advantages and drawbacks, and the key to successful blended learning is selecting the combination of methods that achieves the highest efficiency. The systematic approach to blended learning presented here identifies how a learning audience can achieve better results in their learning process at the university. It is a compromise between the following:

• the way groups of learners learn best,
• the various ways a material can best be delivered, individualized, presented, and learned,
• the available resources that support learning, training, and social activities,
• the ways to maximize capabilities for access, interaction, and social relationships,
• the ways to ease the whole process for professors and universities, from cost and complexity of use to keeping the current educational system competitive and up to pace with the transforming information society.

The work necessary to choose the right strategy while designing a remote experiment and the supporting e-learning environment is based on previously established theories of learning and instructional design models, experience with LMS possibilities, and a good knowledge of students' needs [1], [12].

The need for a model that could ease this process is elaborated in the following section. To design such a model, testing of the RL was conducted with two groups of students. The geographically distributed RL LA.DI.RE. "G. Savastano," realized at the University of Sannio, was used for this purpose [6].

The technical possibilities of this RL are described in Section 3. Section 4 describes the testing procedure: the experiment and the environment that were used to test the RL and assess the real students' needs. Section 5 describes the testing results, while Section 6 describes the method used to translate them into the RL design model. The last section introduces MIRACLE, a user-oriented Model for Integration of Remote Laboratories into Courses that use Laboratory and e-learning Systems. Realized to emphasize the effectiveness of a typical RL, it takes an important step toward fulfilling students' needs, thus raising the effectiveness and overall quality of the course in which it is used.

## The Need for a Model

The instructional design model gives structure and meaning to an educational problem that could not otherwise be solved easily. It breaks the problem down into smaller, manageable units. Such models would significantly ease the integration of RLs into blended learning courses and make them fully "off the shelf." Numerous papers in 3E deal with instructional design, and different learning models have been proposed by scientists and instructional design companies [5], [17], [18], [19]. However, these models were written at a high abstraction level, and their recommendations are rarely applied and tested in real-world situations [20], [1]. The main difficulty lies in the fact that a model for the integration of RLs into electrical engineering courses does not exist.

The emergence of the Web 2.0 caused dramatic changes in the university education process. Resistant to change, or too slow to cope with it, universities are losing pace. A case study of the current situation at the Faculty of Electrical Engineering and Computing (FER) can serve as a good example of the problems e-learning is facing at universities across the globe.

This faculty is a typical example of a modern faculty that still needs an appropriate blended learning system applicable to a significant number of courses. FER is well known in Europe and highly respected worldwide. Around 4,000 students currently attend classes there.

New educational technologies have strongly influenced the educational processes at this faculty. The benefits of e-learning were recognized and adopted very early in most courses. The implementation of the Bologna process, based on a system that forces continuous tracking of students through exams and seminars, sped up this process dramatically. The number of students increased together with the number of exams they had to pass. The necessity for e-learning systems immediately became apparent, and a new system was born. In this system, students have online support for their courses; they can submit their homework or even take some exams (mainly self-assessment and minute exams for collecting activity points). Some courses provide multimedia and advanced activities, although this is still rare. Online communities do not exist, communication on forums is not organized, and professors do not act as online tutors. All things considered, it is a good e-learning-aided system based on a traditional approach. Similar situations occur at other universities [3], [21], [11].

The lack of organized online communities at the faculty level (apart from the individual efforts of some professors) resulted in the creation of the students' own self-administered e-learning community. This community (www.fer2.net), a forum-based solution only, was initiated, financed, created, and administered by students of the new faculty program that implements the requirements of the Bologna Declaration (FER2). The community's goal is to help students exchange educational materials, knowledge, experiences, opinions, and doubts with other students (but also to pass on the results of homework assignments). Only FER2 students can get an account on this Website, once the administrator grants them access, and that is the main reason why their informal education takes place there. Every student has a profile with different courses and fields of interest, and each of them is obliged to contribute to the community of users; otherwise, the account is lost. The system won huge popularity in student circles because it is based on a direct and practical fulfillment of students' needs. Asynchronous discussions of typical problems solved by peers, solutions to homework assignments, solutions to various problems concerning lectures and student life in general, and detailed explanations of complicated theories are all exchanged frequently. There have been 710,000 posts made by 2,700 students in the last four years.

This practical example demonstrates the importance of online learning communities in a blended learning system. Educational institutions should clearly organize these kinds of communities, and make them better. It would be crucial to add assistants or professors as expert moderators of those communities, so that they could offer students help in coping with the challenges they face today.

Main problems that slow down the adoption of this kind of platform are the following:

• There is a substantial lack of know-how in using and producing content among professors and teaching personnel [22], [23]. They are often uninformed about novel technologies and the possibilities they offer on the Web. Peter Cochrane said: "Imagine a school with children that can read and write, but with teachers who cannot, and you have a metaphor of the Information Age in which we live" [24].
• Teachers are not as accustomed to the new ways of communicating as students are. They are used to traditional approaches, which characterize them as part of a higher social stratum [16], [23]. In this way, they enjoy the respect required to maintain concentration and control over students' behavior in the classroom. Unfortunately, this approach puts students in a subordinate position, making them uncomfortable about asking questions. This is mainly connected to their fear of being ridiculed in front of others for asking potentially stupid questions, or even proper ones at the wrong time. They fear being remembered as a person who does not understand anything or does not study hard enough.
• Additional effort is required to create e-learning content for students. Educators are not willing to spend additional time online answering students' questions. To overcome this problem, they should be educated at an institutional level and become aware that students expect them to find the time [12].
• The effort required to implement e-learning is currently not compensated by the government. As long as the implementation of e-learning is kept on a voluntary basis, it is impossible to expect great results. Quality systems cannot be based on goodwill alone [12], [21].
• There is a lack of specialized teams in educational institutions that would help teaching personnel keep up with the newest educational and technological accomplishments and practice [12].
• The need for laboratory work in 3E causes additional problems. Until a decade or two ago, laboratory work could not be executed from a distance, which slowed the adoption of e-learning in 3E. The fact that the RL can help solve this problem is still not widely known and accepted among final users. Although many papers on this topic have been published, they are mainly technically oriented and very rarely include testing of the didactic value of the suggested RL [1], [25]. In this way, the value of the RL, and the way in which it should be integrated into modern LMSs, remains unknown to academic circles [6], [25], [13].

The RL cannot be easily integrated into an e-learning environment, even when it is perfectly technologically compatible with the LMS in use. When a student executes an experiment in a hands-on laboratory, he is usually surrounded by his colleagues, has a teacher at his disposal, and is free to collaborate with other students. None of this is available by default when a student works on the experiment remotely. Different Web 2.0 tools and other e-learning technologies could enrich the student's experience during that time, but could also diminish it if used improperly. A number of learning models have been proposed to optimize the use of learning technologies in teaching, but none of them can be directly applied in the RL setting.

Bates and Poole [12] developed and later refined the ACTIONS model. It was designed as a set of questions that guides distance educators in choosing the appropriate media or technological solution for a course. Bates writes: "Decision making about technology... is a complex process, requiring consideration of great number of factors... These different factors cannot easily be related to one another quantitatively." His model was later transformed into the SECTIONS model, where S stands for students and E for ease of use. These two letters also represent the basic principle of quality management: focus on the user.

Fink [5] proposed a teaching strategy for creating significant learning experiences for students based on experimentation and reflection. Similarly, Kolb [8] in his experiential theory identifies four stages of the learning process: Concrete Experience, Reflective Observation, Abstract Conceptualization, and Active Experimentation. Keller found motivation to be one of the most important factors influencing learning performance [26], and found that motivation significantly depends on interaction among participants. The ARCS model of motivational design he created won great popularity among educators and has been applied in different areas of education. It is based on four stages of the learning cycle: Attention, Relevance, Confidence, and Satisfaction. However, Keller does not suggest concrete tools or strategies that would produce the desired effect. Instead, he developed a 10-step model for the practical application of the ARCS model to the concrete needs of students in a particular course. The steps include course and audience information analysis, listing of possible (technical or tactical) solutions, brainstorming of tactics, and selection or development of the solution. The last step is a survey of students' reactions. These 10 steps are in line with the QFD method that Cmuk used as the advanced method, and they were all applied in the same way.

Another important recommendation comes from Felder and Silverman [9]. They proposed a model of F2F instructional design for engineering education that can cover different learning styles effectively. It consists of nine steps, but only four of them are applicable to RL instructional design: motivate learning, use computer-assisted instruction, ensure activity, and enable cooperation to the greatest possible extent.

It is clear that all these models are very abstract. They are very useful for understanding the role of different technologies (particularly experiments) in an engineering course. Nevertheless, teachers still need a pragmatic model that assesses the appropriateness of a particular technology for the final goal of the RL.

Choosing the best possible mix of different technologies in the online support for an RL, while keeping the final users in mind, is a matter of optimization. Webster's dictionary defines optimization as "an act, process, or methodology of making something (as a design, system, or decision) as fully perfect, functional, or effective as possible." This optimization does not cover bandwidth optimization or other technical issues; instead, it deals with the most important aspect of the RL: the way it is used. It is based on knowledge of the evolution of learning theory, instructional design models, experience with the possibilities of the technology, and most of all, a deep understanding of students' needs [12].

In other words, the use of the RL in a multidisciplinary field like education is anything but straightforward. Although an RL's performance can be impressive from a scientific point of view, its practical value remains small unless it becomes widely used in the academic community. Even the best solutions currently available worldwide [27], [28], [11] face these problems. Traditional systems are, by default, inert and suspicious of novel technologies, but there is more to it than that. The authors agree that technology alone cannot serve our purpose; it should be accompanied by concrete models that are simply applicable to the situations for which they were created. Professors unfamiliar with online education have doubts about the quality of the novel systems and fear the complexity of their use (a fear that is too often justified).

To conclude, there is an absolute need for educational models applicable to the curricula of universities, which are losing pace with the development of online learning systems. Without a suitable learning model that meets users' needs, a gap between technological solutions and their implementation will remain. The performance of RLs, together with their benefits and shortcomings (for both students and professors), should be clearly presented to professors, and a concrete model for their integration into a particular course should be created.

## Laboratory Setup

### 3.1 Remote Laboratory

For the testing of the remote experimentation process, the RL LA.DI.RE "G. Savastano," realized at the University of Sannio, was used. The main goal of this laboratory is to enable distance learning in the field of electrical engineering. It is currently available at www.misureremote.unisannio.it. The testing was done during 2006, and much work has been done since then to analyze and implement the results. To make this possible, the platform was improved, and the realized model is currently under additional testing.

The laboratory enables students to access remote experiments on real measurement instrumentation by using only a common Web browser [6], [11]. Students can access the following: 1) experiment visualization, 2) experiment control, and 3) experiment creation.

The core component of the overall distributed architecture is an improved LMS, chosen for its capability of managing and building Web-based courses according to the Aviation Industry Computer-Based Training Committee (AICC), Instructional Management Systems (IMS), and Advanced Distributed Learning specifications. This platform delivers user authentication and management as well as tracking of the user learning process. The RL is currently organized according to a Web-based, multitier distributed architecture (Fig. 1). The presentation tier, which manages the experiment on the client side, is based on standard Web browsers, with no need for specific software components. The server-side logic composing the middle tier is distributed over the following servers:

1. An LMS, executed on a central server called the Laboratory Portal. The LMS interfaces to the users through a Web server hosted on the same machine.
2. The Laboratory Server (LS), used to interface each laboratory with the rest of the system. It delivers access to the laboratory equipment.
3. The Measurement Server (MS), a server located in a laboratory that enables interaction with one or more instruments. Each MS is physically connected to a set of instruments.

The server-side software component used to control the electronic instruments is mainly LabVIEW, developed by National Instruments (NI).

Fig. 1. Hardware components of the proposed architecture.
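The tier responsibilities described above can be sketched in a few lines. The following is a hypothetical illustration, not the LA.DI.RE. API: a Laboratory Server routes an instrument command to whichever Measurement Server physically hosts that instrument (all class and method names are assumptions made for the sketch).

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementServer:
    """A server physically connected to a set of instruments."""
    name: str
    instruments: set[str] = field(default_factory=set)

    def run(self, instrument: str, command: str) -> str:
        if instrument not in self.instruments:
            raise ValueError(f"{instrument} not attached to {self.name}")
        # In the real system this call would drive LabVIEW; here we echo.
        return f"{self.name}: {instrument} <- {command}"

@dataclass
class LaboratoryServer:
    """Interfaces one laboratory (its measurement servers) to the portal."""
    servers: list[MeasurementServer]

    def dispatch(self, instrument: str, command: str) -> str:
        for ms in self.servers:
            if instrument in ms.instruments:
                return ms.run(instrument, command)
        raise LookupError(f"no measurement server hosts {instrument}")

ls = LaboratoryServer([MeasurementServer("MS1", {"scope", "dmm"}),
                       MeasurementServer("MS2", {"source"})])
print(ls.dispatch("source", "SET VOLT 10"))
```

In the actual architecture the Laboratory Portal's LMS would sit in front of this dispatch step, handling authentication and tracking before any command reaches a laboratory.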

### 3.2 Remote Management of Experiments

There are numerous solutions that can be used for remote control of measurement devices or, more particularly, measurement experiments. The main problem for the RL is the separation of the static user interface (usually graphs, charts, and different instrument controls) from the data it represents. If this separation is done properly, it reduces the bandwidth drastically and makes the visualization and control of experiments fluent. This is usually achieved through a specific client-server structure, where objects share only measurement data and control information, while Graphical User Interface (GUI) elements visualize this information on the client side.
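The bandwidth argument above can be made concrete with a minimal sketch, under the assumption (not taken from the paper) that the wire format is a simple length-prefixed array of 32-bit floats: the server sends only the numeric samples, and all plotting stays in the client's local GUI widgets.

```python
import struct

def pack_frame(samples: list[float]) -> bytes:
    """Server side: binary-pack raw samples; no rendered graphics cross the wire."""
    return struct.pack(f"<I{len(samples)}f", len(samples), *samples)

def unpack_frame(payload: bytes) -> list[float]:
    """Client side: recover the samples and hand them to the local GUI."""
    (n,) = struct.unpack_from("<I", payload)
    return list(struct.unpack_from(f"<{n}f", payload, 4))

frame = pack_frame([0.0, 1.5, -1.5])
print(len(frame), "bytes on the wire")  # 3 floats cost 16 bytes here,
                                        # versus kilobytes for a screenshot
```

A rendered oscilloscope screen would cost tens of kilobytes per update; shipping only the samples and redrawing locally is what keeps the visualization fluent.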

Some of the best solutions available today are proprietary solutions from National Instruments. They use NI's LabVIEW, one of the most popular measurement software packages (programming tools) today. Numerous proprietary solutions based on the deployment of Web services or custom client-server structures also exist. All these solutions have limited reusability because they cannot be used for previously created experiments. Custom solutions are also often not user friendly (professors and other users must be skilled in programming in order to use them).

In the solution used in LA.DI.RE "G. Savastano," all these problems were successfully solved. The thin client paradigm splits the presentation layer between the client and the server. The presentation logic runs on the server, while the thin client's only task is to show a GUI that reproduces the experiment window. This approach makes it possible to provide remote access to any Windows application, written in any language, running locally on the server, while requiring only an Internet browser on the client side.

Many implementations of the thin client paradigm have been proposed by researchers and deployed commercially. Starting from the ProperJavaRDP client, a specific client applet, the LaboratoryApplet, was developed and described in [29]. This RDP client extends ProperJavaRDP and improves communication performance by adding functionalities such as compression of transmitted data, selection of the cache size on the client side, and a load balancing option that selects the least-loaded server. The efficiency of the developed solution has been confirmed by comparative bandwidth occupation measurements, in which the LaboratoryApplet gave the best results.
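Two of the added functionalities lend themselves to a short sketch: compressing the transmitted updates and picking the least-loaded server. The helper names and the load metric below are assumptions for illustration, not the LaboratoryApplet's actual API (which is a Java RDP client).

```python
import zlib

def compress_update(payload: bytes) -> bytes:
    """Compress a GUI update before sending it over the wire."""
    return zlib.compress(payload)

def least_loaded(loads: dict[str, float]) -> str:
    """Return the name of the server reporting the smallest load."""
    return min(loads, key=loads.get)

update = b"unchanged-region " * 100           # screen deltas compress well
wire = compress_update(update)
print(len(update), "->", len(wire), "bytes")
print("connect to:", least_loaded({"lab-a": 0.72, "lab-b": 0.31, "lab-c": 0.55}))
```

Repetitive screen regions are exactly what deflate-style compression handles best, which is why RDP-style traffic benefits so visibly from it.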

## Testing Procedure

### 4.1 Experiment Development

Testing was performed with two groups of students: the members of the first group were second-year undergraduate students at the Faculty of Electrical Engineering and Computing, Croatia, taking the Electrical Measurements course, while the pilot testing was conducted with graduate students at the University of Sannio, Italy, after a week of lectures on the relevant topic.

In the process of testing such an RL, choosing an adequate experiment is a demanding task. Not every remote experiment is an adequate substitute for a hands-on experiment, and not every experiment can be implemented remotely. Only when the presentation and analysis of the measurement data are computer supported, or the experiments are computer oriented, can the RL be used successfully [1], [6]. In fact, for experiments that teach students about the actual connecting of hardware, or about certain problems in the real-world experiment setup, remote experiments would be a poor replacement (typical examples are the experiments executed in the first year of undergraduate studies, where students encounter basic measurement equipment for the first time). The Magnetic Measurement experiment developed by the author was considered very appropriate for testing the characteristics of the RL. The experiment is based on an idea of Professor Dusan Vujevic [30] and uses several basic functions and GUI solutions from a freely available simulation created by Dr. Nesimi Ertugrul of the University of Adelaide, Australia [31]. It was implemented in two different modes: hands-on and remote. Both required students to calculate and set the load of the device under test (a magnetic core), observe the measurement data displayed on the oscilloscope (or the monitor), and then extract the measurement data from the plots by calculating or estimating the required values. Since the goal of the experiment, as well as its structure and the provided data, was the same in both cases, we were able to record the differences between the two, and also the differences in students' reactions to them [6]. The second testing was conducted with a bigger group of students who performed the Magnetic Measurements as part of a series of hands-on experiments. Their results, requests, and opinions were also gathered and analyzed.

### 4.2 The Experiment Concept

The developed experiment was designed to be a relevant improvement to courses and lectures on electromagnetism and magnetic measurement. It gives the student an opportunity to gain a deeper understanding of the magnetic characteristics of different magnetic materials. Moreover, it is the first experiment in the LA.DI.RE. "G. Savastano" project that offers the possibility of recording the efficacy of a teaching method by means of the feedback system. The LMS allows testing of the knowledge adopted by the students but, like most other platforms developed for remote laboratories, it cannot test the experience the student gained while executing the experiment. By means of the LMS, the student is provided with documentation including: 1) a theoretical background, 2) hardware and circuit descriptions, and 3) a user guide. Therefore, before starting the experiment, the student should have a good understanding of magnetic circuit principles. The graphical user interface of the developed experiment is shown in Fig. 2.

Fig. 2. Graphical user interface (GUI) of the developed instrument. The experiment has a multilingual interface, gives students automatic feedback on their measurement results, and assures excellent visualization of the data.

The experiment hardware consists of a programmable source with a separation transformer, a device under test, sensors, and a personal computer with a data acquisition board. The current sensor measures the current in the main coil, which enables the computation of the magnetic field (H), while the voltage sensor (a pair of resistors performing a voltage partition) measures the induced voltage in the search coil, which enables the computation of the magnetic induction (B). The programmable source is actually a variable transformer controlled by the data acquisition board and a step motor. The source enables students to change the load of the magnetic material, which results in a change in the supply current spectrum and in the shape and area of the magnetic hysteresis loop, which represents power losses.
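The two computations implied above are the standard ones for this setup: H(t) is obtained from the primary current via the turn count and magnetic path length, and B(t) by integrating the search-coil voltage. The following is a worked sketch; the turn counts, core geometry, sampling rate, and signal amplitudes are illustrative assumptions, not the parameters of the actual rig.

```python
import numpy as np

N1, N2 = 200, 100        # primary and search-coil turns (assumed)
l_m, A = 0.25, 4e-4      # magnetic path length [m], core cross section [m^2]
fs = 50_000              # sampling rate of the acquisition board [Hz]

t = np.arange(0, 0.04, 1 / fs)                  # two 50 Hz periods
i_primary = 0.5 * np.sin(2 * np.pi * 50 * t)    # current sensor output [A]
v_search = 2.0 * np.cos(2 * np.pi * 50 * t)     # search-coil voltage [V]

H = N1 * i_primary / l_m                 # H(t) = N1 * i(t) / l_m       [A/m]
B = np.cumsum(v_search) / fs / (N2 * A)  # B(t) = (1/(N2*A)) * int v dt [T]

print(f"H_max = {H.max():.1f} A/m, B_max = {B.max():.3f} T")
```

The running sum stands in for the integral of Faraday's law; in the real VI the integration and scaling happen inside LabVIEW before the B-H plot is drawn.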

### 4.3 Execution of the Experiment

The developed GUI shows the waveforms of the acquired signals, the current spectrum, the magnetic flux, and the BH hysteresis plot (Fig. 2). It also computes (for the adjusted supply voltage) the maximum and RMS values of the magnetic field and magnetic induction ( ${\rm H}_{MAX}$ , ${\rm H}_{RMS}$ , ${\rm B}_{MAX}$ , ${\rm B}_{RMS}$ ), the coercivity Hc, and the retentivity Br.
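Coercivity and retentivity are read off the B-H loop at its axis crossings: Hc is the field where B passes through zero, Br the induction where H does, and the enclosed loop area gives the core loss energy density per cycle. A hypothetical sketch on a synthetic loop (the 30-degree lag is illustrative, not measured data):

```python
import numpy as np

def value_at_zero_of(x, y):
    """Linearly interpolate |y| at the first zero crossing of x."""
    s = np.where(np.diff(np.sign(x)) != 0)[0][0]
    frac = -x[s] / (x[s + 1] - x[s])
    return abs(y[s] + frac * (y[s + 1] - y[s]))

theta = np.linspace(0, 2 * np.pi, 2001)
H = 400 * np.cos(theta)                  # field strength [A/m]
B = 1.2 * np.cos(theta - np.pi / 6)      # flux density lags by 30 deg [T]

Hc = value_at_zero_of(B, H)              # coercivity: H where B = 0
Br = value_at_zero_of(H, B)              # retentivity: B where H = 0
# Loop area = closed integral of H dB: core loss energy per cycle [J/m^3].
loss = abs(np.sum((H[:-1] + H[1:]) / 2 * np.diff(B)))
print(f"Hc = {Hc:.1f} A/m, Br = {Br:.2f} T, loss = {loss:.0f} J/m^3")
```

For this synthetic loop the crossings land at Hc = 200 A/m and Br = 0.6 T, which is exactly what the interpolation recovers; the students perform the same reading visually on the plotted loop.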

At the beginning of the experiment, the student is asked to adjust the programmable source to supply the MC with a voltage that will magnetically saturate the ferromagnetic core. In doing so, he or she is able to observe the changes in the given plots and practice determining the point of magnetic saturation. The execution of the experiment is continued with the source adjusted to the mentioned value. During the experiment, on the basis of the displayed information, the student has to determine the critical data from the plots: the maximum induction B and magnetic field strength H, the retentivity Br, the coercivity Hc, and the saturation flux density of the material, but also to estimate the area of the hysteresis loop, which represents the power losses in the material. Then, he or she is asked to enter the results in the form provided on the left side of the GUI. The VI verifies that the provided results are within a given range of the actual values (calculated programmatically for the adjusted supply voltage). If the check passes, the real results are shown on the front panel (until that point they are hidden). Finally, the learner is asked to comment on the potential mistakes he or she made in the calculations. The VI text, i.e., the explanation as well as the questions with answers and reference notes, has been prepared in four languages: Croatian, English, Italian, and Romanian. This list will grow with the number of countries involved in the project. Such an approach was used to extend the reach of the LA.DI.RE. "G. Savastano" project to the academic circles of Eastern Europe, the Western Balkans, and the Mediterranean area. It follows the EU efforts to implement an internal Lifelong Learning Program, hence influencing the educational systems of the partner and neighboring countries of the EU and helping them to overcome problems with their educational systems and the lack of resources [ 6].

## Results

As explained before, two groups of students were chosen to test the RL. The first group was a pilot group of 16 students who were asked to do the Magnetic Measurements experiment after a week of lectures on the subject. Half of them first worked with the hands-on experiment, while the others executed the experiment remotely. Their understanding of the topic was tested in three ways: preparation for the experiment, previous knowledge, and crucial understandings that should be gained through experimentation. When they finished executing the first type of experiment, they were asked to fill in the technical part of a questionnaire in which their acquired knowledge was tested. After completing both experiment types, they were asked to rate the advantages of the remote and the hands-on experiment, as well as the supremacy of one over the other, and to rank the importance of different aspects of the process of experimentation. The students' cognitive styles were also tested with a standard VARK questionnaire in order to find possible correlations. The pilot testing allowed us to make the necessary changes in the survey, to focus on the most important issues, to abandon several questions, and to adopt some alternative ones. The second group consisted of 57 students who executed the Magnetic Measurements experiment as one in a series of 16 experiments. The results obtained in the hands-on laboratory were compared with the results from the RL.

### 5.1 Survey Results

Results showed that 84.1 percent of students in the second group consider the RL equally or more effective than the hands-on laboratory. The pilot group produced a similar result (72 percent), and both were comparable to the results found in other papers [ 33]. The difference of 12 percent can be explained by the fact that the pilot group had not gone through the other 15 hands-on experiments and had rated highly their ability to conduct the experiment manually and cope with the hands-on measurement (they were mainly computer-oriented engineers). The second group, on the other hand, rated the remote experiments too high; direct conversation with them revealed that this was mainly due to saturation with the other 15 experiments (high setup time, poor user interfaces). Overall, most of the students found the RL very interesting (87.3 percent) and were very curious about the whole idea (93.6 percent). For both types of experiments, the most highly rated were the importance of preparatory instructions (96.8 percent) and the teacher's presence (87.3 percent). The fact remains that a majority of problems with the remote experiment were caused by superficial reading of the preparatory instructions. It was also found that students who spent more time studying the theory and the preparatory instructions achieved better scores. A good theoretical background and interest in the field were shown to be the most important issues of every experiment, which is why every RL should include an entrance test. Online help was not available in real time, which was a very important issue, or even a problem, since the students were facing this kind of experiment for the first time. Students often had problems with the PC authorization of the Java applet (it was solved easily, but they lost time in vain) and most spent too much time getting familiar with the whole system (an unknown e-learning platform). These problems could all be solved by executing more than one experiment this way. 
Also, the feeling of immersion was not rated as high as we had expected (average grade 3.3, with 76 percent of students considering it more or less acceptable). This was mainly due to the lack of a camera in the lab or a video as part of the experiment setup procedure (the video shown in Fig. 2 was added later). As expected, highly rated were the convenience of access, the scheduling of time, and the reliability of the remote experiment setup. The results of the testing have shown that more than 72 percent of the surveyed students consider the remote experiment equally or more effective than the hands-on one. It has also been noted that highly interested students leaned more toward the hands-on experiment because it gave a better insight into the real conditions and problems. The students' learning process and the obtained examination results were analyzed, and it was shown that no significant correlation can be found between the type of experiment and the knowledge gained, at least when this knowledge is strictly defined and well presented through the experiment's interface. The survey helped us understand the critical design considerations of RLs that have to be taken into account in future optimization and RL design.

### 5.2 Students' Requirements

Through the testing, a number of students' requirements were identified, and their overall importance was weighted by a team of researchers working in the field, under Cmuk's guidance. Individual questioning and the team members' own judgment helped in weighing the requests and putting them in the correct ranking order. The ranking table of user requirements is shown in Table 1. There is one more requirement, the requirement for a reward, which was not included in the questionnaire but was later recognized as particularly important. In fact, this requirement was present during the whole testing procedure. One could say that it is self-evident that students have to be rewarded for their work. Although this argument is logical, such a reward is not always present in courses. The truth is that students chose to perform a certain experiment mainly to get rewarded. It was clear that the reward was the only exclusive request. No reward, no work—as simple as that. Some students were genuinely interested in the RL, or in magnetic measurements itself, but not many of them. And even those rare ones, though they would fill in questionnaires, would hardly study, spend their time, or give their best for nothing. This phenomenon is well known in quality management: quality must not depend on someone's good will; it must be assured through the design process of a service or a product. This is also a basic principle of behaviorism. So, even if situations exist where a student is motivated by things other than a reward, they are out of the scope of this testing. In a usual blended learning system, the laboratory work must be made relevant, and the reward usually takes the form of points that influence the final grade. For these reasons, the team decided to accept the reward as a high-priority request.

Table 1. Ranking of Students' Requirements for the Remote Experiments Platform (on a Scale from 1 to 5)

## QFD Method

User-centered thinking is at the core of quality management. It was stated earlier that a new, third generation of LMSs is going to be user-oriented instead of monolithic [ 32]. From the quality management perspective, the difference is considerable, radical, and demanding. Finding services or functionalities that answer user requirements in an optimal way is a demanding task, especially when dealing with the multilevel influence of a certain service on a user's satisfaction and vice versa. To make this possible, the authors used the Quality Function Deployment (QFD) method. Dr. Yoji Akao, the author of the method, describes QFD as a method to transform user demands into design quality, to deploy the functions forming quality, to deploy methods for achieving the design quality into subsystems and component parts, and ultimately into specific elements of the manufacturing process [ 33]. In other words, QFD helps translate complicated and multidimensional user requirements into final product characteristics, whether these already exist or are yet to be invented. Moreover, it enables a priority to be assigned to each product/service characteristic. The results of this technique are transparent and visible graphs and matrices that can be used in future product/service developments. QFD is applied in the design process of a wide range of services and products, from the automotive and electronics industries to tourist services and education. QFD is implied in the ISO 9000:2000 standard, which focuses on customer satisfaction, and it is a key practice in Design for Six Sigma [ 34]. QFD can be used in different design and production phases of a product or a service. A particularly important part of this paper is the set of methods used for product (service) planning. The most widely used QFD technique for this purpose is the House of Quality (HOQ). 
The HOQ is defined as: "A product planning matrix, which somewhat resembles a house that is developed during quality function deployment and shows the relationship of customer requirements to the means of achieving these requirements" [ 35]. The layout of the HOQ is shown in Fig. 3. Each of the blocks building the HOQ will be described separately.

Fig. 3. House of quality for the online experiment production model. It illustrates the connections between user requirements and the technical possibilities of the RL, including the weightings of the design targets.

### 6.1 Customer Requirements

This part of the HOQ matrix is usually the most important one, and it is composed first. It is based on the Voice of the Customer (VOC), which is a list of user requirements described in the users' own words [ 36]. The information obtained this way must be well structured to be useful. The authors gathered it by testing two groups of students, speaking to them directly, and collecting their opinions in written form. The responses to the survey were a source of diverse data relevant to the questions mentioned above. The weighting of the different aspects of the RL was influenced by the opinions the students expressed in conversation. A team of four PhD students working in the field (assistants working in laboratories and finishing their PhDs in e-learning and RLs) structured the requests. Since there were two groups of users (learners and teachers), the requirements were grouped accordingly. Teachers' requirements were listed and weighted directly by the team members through brainstorming and analyses. The resulting weight factors (on a scale from 1 to 5) are shown in Fig. 3 as follows:

Reward (5)—points collected for a successful activity. This is the most important issue because it helps the students pass the exam; it is also their main motive for conducting the experiment at all.

Time saving (4)—online experiments are usually much less time-consuming, and therefore, very convenient for learners.

Preparatory instructions (4)—make the learners' job much easier, providing a connection to the theory and a good overview of the topic.

Availability (4)—learners can execute the experiment wherever it suits them, sometimes even through a handheld device.

Scheduling (4)—allows the learners to work whenever it suits them.

Ease of use (4)—helps the users to focus on the experiment goals, and not on the setup and operation of the experiment, etc.

Teacher availability (3)—learners will always have questions and it is therefore very important to receive assistance from teachers. A teacher can be substituted by a demonstrator or a person in charge of platform issues. This is especially important during the first experiment, when learners are not familiar with the system.

Real instrumentation (3)—gives the learners a feeling of immersion and enhances the importance of the experiment, which is crucial for the motivation of both students and tutors. Access to real instrumentation assists the learning process and enables learners to gain the necessary practical experience.

Adapted interface (2)—assists the learner to work without frustration. The intuitive GUI helps them focus on important issues and not on the ways of delivery. It also speeds up the process and helps the learners to maintain focus.

Team work (2)—improves the learners' communication skills, and the interaction of team members often helps in finding solutions. It also improves motivation and depends greatly on prior knowledge. It is limited when operating hardware remotely because only one user can control the hardware at a particular moment (other students can only watch). For these reasons, team work was more effective in online assignments related to the experiments (forums, wikis, etc.).

Reliability of setup (2)—gives the learners confidence and satisfaction.

New experiences (2)—the experiment topic should teach the learners something new and interesting, which is important for motivation.

Insight (1)—is obviously not so important, however, better students rated it with higher marks.

Report (1)—the students do not like the reporting component very much; however, they consider it important, and the report is important for teachers when grading the learners.

Teacher requirements are divided into two basic groups: ease of use and quality. These requirements are understandably straightforward. User identification, which would allow, for example, students to be identified when writing exams from a distance, cannot be fully implemented with the current technology. This problem can only be solved through a systematic approach to online teaching that includes tracking of the users' learning process. This issue is outside the scope of this paper.

### 6.2 Planning Matrix

The planning matrix attached to the left of the HOQ matrix serves several purposes. First, it quantifies the customers' requirement priorities and the performance of existing products. Second, it allows these priorities to be adjusted based on issues that concern the design team. The measures used in this section are generally gathered from customers using a questionnaire. The most important measure in this section is the Importance Weighting of each requirement, from the customers' own perspective. This measure is shown in the column next to the customer requirement descriptions in the left section of the matrix. The second, and most common, component of the planning matrix provides a measure of customer satisfaction with competing products. In our case, the testing was based on a comparison of the students' satisfaction with the RL and with the hands-on laboratory. This topic has been covered in the previous sections and will not be used in the QFD on the RL design.

### 6.3 Technical Requirements

This section of the HOQ matrix is also referred to as the functionalities, the engineering characteristics, or the Voice of the Company. It describes the product in the producer's terms. This information was generated by the QFD design team, who defined all the measurable characteristics of the product they wanted to design in order to meet the specific user requirements. These requirements should be structured the same way as the customer requirements; they were grouped into Functionality, Quality, Performance, and Novelties. An additional row is usually included in this section to show the directions of improvement. In our case, the most important issue was to reveal the correct learning system design priorities in order to achieve the desired effectiveness and quality.

### 6.4 Interrelationship

This section forms the main body of the HOQ matrix and can be very time-consuming to complete. Its purpose is to translate user requirements into the technical characteristics of the product. Each cell at the intersection of a particular user requirement and a technical requirement is filled with a number corresponding to the level of interrelationship between the two. In our case, the team considered whether a particular functionality of the LMS can fulfill the specific user request. The level of interrelationship is usually weighted on a four- or five-point scale.

### 6.5 Roof

The triangular "roof" matrix of the HOQ is used to identify how the technical characteristics of the product influence one another. The procedure is similar to the one used in the interrelationship matrix. The QFD team passes through the cells of this matrix to consider each pairing of technical requirements. When the improvement of one parameter leads to the improvement of another, those requirements should get additional weight and priority. When the mutual influence is negative, the engineers have to find a compromise. The information gathered this way helps the design team discover which improvements could lead to a range of benefits for the final product. It also highlights negative relationships in the design, thus creating space for innovation and research. In this research, an obvious and strong connection was found between community creation and the forum and chat. The platform independence requirement slightly influences most properties, but it is fulfilled today by most LMSs. With RLs, however, this is not always the case, and it should represent an important design requirement. A conflict arises between community support (including forum and chat) and online exam solving: communication channels could be abused during exams, so access to them should be prevented while an exam is being solved.

### 6.6 Targets

This is the final section of the HOQ matrix to be completed and it summarizes the conclusions drawn from the data in the entire matrix, as well as the team's discussions. It is generally composed of three parts: Technical priorities, Competitive benchmark, and Targets.

Technical priorities can be calculated simply from the weightings described in Sections 6.2 and 6.4. Each interrelationship weighting is multiplied by the overall weighting from the planning matrix. These values are then summed down the columns to get a priority score for each technical requirement. It is important to understand that the calculated numbers only give a priority order; hence, they represent only a ranking value. The list of technical requirements is shown in Table 2. The table itself needs no comment because the results obtained are very straightforward. When doing laboratory work, it is important to have a community of users: their experience, comments, teamwork, or just advice can help the experimenting process to a great extent. It should be noted that the e-learning community is much more than just a set of tools supporting communication and the exchange of data.

Table 2. Weights (Priority) of the Design Targets
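The priority calculation described above can be sketched in a few lines; the requirement names, customer weights, and the 9-3-1 interrelationship scale below are illustrative assumptions for the sketch, not the actual data behind Table 2:

```python
# Customer requirements with their weights from the planning matrix
# (scale 1-5, as in Fig. 3). Illustrative values only.
customer_weights = {"reward": 5, "time saving": 4, "ease of use": 4}

# Interrelationship matrix: customer requirement -> {technical req: strength},
# on a common QFD scale (9 = strong, 3 = medium, 1 = weak, 0 = none).
interrel = {
    "reward":      {"automatic feedback": 9, "forum/chat": 3, "scheduling tool": 1},
    "time saving": {"automatic feedback": 3, "forum/chat": 0, "scheduling tool": 9},
    "ease of use": {"automatic feedback": 3, "forum/chat": 1, "scheduling tool": 3},
}

def technical_priorities(weights, interrel):
    """Multiply each interrelationship by its customer weight and sum
    down the columns; only the resulting ranking is meaningful."""
    scores = {}
    for cust, row in interrel.items():
        for tech, strength in row.items():
            scores[tech] = scores.get(tech, 0) + weights[cust] * strength
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = technical_priorities(customer_weights, interrel)
```

With these illustrative numbers, automatic feedback would rank first, which mirrors the spirit of the actual result: the scores themselves carry no meaning beyond the ordering.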

Competitive benchmarking. Each of the technical requirements identified as important can be compared with available competing products. This helps identify the relative technical position of the existing product and the target levels of performance to be achieved in a new product. In our case, there was no other similar laboratory to compare the performance results with. A comparison was made with the hands-on laboratory, but not by means of the QFD, since it cannot be applied in such a situation. The results of this testing were presented in the previous section.

Targets. The final output of the HOQ matrix is a set of engineering target values to be met by the new product design. The process of building the matrix enables these targets to be set and prioritized based on an understanding of customer needs, competitors' performance, and the organization's current performance (8; 73; 78; 79). In the HOQ table, this is illustrated by the level of effort required to realize a particular functionality and achieve the desired outcome.

## The MIRACLE Model

Many problems arise from the deployment of RLs in everyday teaching practice. Some of them were discovered during the testing performed by the team, while others were recognized by other researchers. Whether these problems are connected to the technical functionalities of the RL or to pedagogical issues, such as presence, collaboration, etc., they all share the same goal: to simulate a hands-on laboratory environment in an e-learning platform to the highest extent possible. When students operate the equipment, they should learn from their mistakes, apply theoretical knowledge, analyze the results, etc. These processes are far from linear or predictable (as they are when studying theory). Numerous mistakes and misunderstandings occur during laboratory work and have to be solved instantly. In a hands-on lab, an assistant or a demonstrator helps students solve these types of problems. Moreover, in hands-on labs, students can ask their colleagues for help, and they do help frequently and instantly when a problem occurs [ 37], [ 1]. Students also have different motivating factors, for example, the presence of equipment and psychomotor activities that keep them focused. Simulating such an environment online is a task with many multidimensional influencing factors and cannot be realized easily. Solutions to these problems exist; however, even the best available one would still have the limitations inherent to all e-learning courses. This fact puts remote laboratories in a less effective position for gaining experience. However, online education has particular strengths that can make the effectiveness of the RL comparable to that of hands-on labs. Exploiting them is a multidisciplinary process that can be very confusing and time-consuming, which is why it is often avoided by researchers in the field [ 1].

The MIRACLE model is described as a set of practical guidelines for implementing the research results presented in this paper. The guidelines are directly tied to the design targets of the QFD process. The model is written in the second person, as a guide:

1. Decide on the type of reward and make it relevant to the students and their learning process.

RL system technology should be a part of the students' learning process on a daily basis. Technology-mediated reminders, e-mails, forum posts, or assignments that occur in a personalized e-learning environment can be used to motivate the students to start studying.

2. Encourage people to maintain Web-folios and to collaborate online through different group assignments like wikis or forums.

An online community was highly rated by the students, as well as by the QFD team and other researchers [ 37], [ 17], [ 1]. It is an essential part of an online laboratory. Frequent interactions among peers are what build the online community. Discussion with learning peers by text, voice, or messaging must be rewarded, guided, and stimulated by tutors. When students maintain their Web-folios and communicate, they get to know each other in a professional way, creating an online community at the same time. If a reward for these activities is present, they should be motivated to conduct the experiment and understand the theory well before starting the laboratory work. M-learning can play an important part in building a community because, according to Keller, motivation increases with communication among learners [ 26].

3. The teacher becomes an online tutor whose main goal is to supervise online activity of the students and assist them as required.

This teacher activity should be carefully balanced; otherwise, it could be exploited by the users. A tutor's time is precious, and he or she cannot be available all the time. On the other hand, students follow the tutor's activity: if the tutor is not active, the students will not be either.

4. After the completion of lectures, online learning, or discussions on the topic, students should be able to schedule a session to execute experiments.

Use reminders and an entrance test. The experiment must begin with an entrance test as a condition for approaching the experiment. It should thoroughly test the theoretical knowledge on the topic relevant to the experiment.

5. Choose experiments suitable for remote execution.

Directions on this choice were given in the experiment description. The experiment should have an adapted and intuitive interface, and it must provide automatic feedback. This way, there is no need for supervision of the activity, and the student can work relaxed. Automatic feedback also allows the students to evaluate their own learning process, which is important according to behaviorist principles. It is useful when the experiment has automatic flow control because it eases the use and reduces the time spent switching between windows to read the lab guide.

6. Provide real-time help during the students' first remote experiment.

When students work on the RL for the first time, the administrator (tutor or professor) should be available to them through chat. Frequently Asked Questions (FAQs) should be included in the course. In all future experimentation, the presence of students who have already finished the experiment is enough, but not during the start-up. To make this possible, the postlab activity should keep users online after they finish their work. Having a tutor at their disposal online gives confidence to the students and raises motivation.

7. Organize activities that will motivate students to reflect on the experiment objectives or theoretical part of the experiment.

Reflection was found to be a strong side of RLs. Time for it is hard to free up during lectures or during work in hands-on laboratories, while the LMS support enables advanced design features that can make students spend some time reflecting on the materials prior to the lab. During the oral examinations in the RL testing, the authors noticed that superficial reading of the preparation material was the main reason for bad lab scores. Preparation for the lab is very important for raising confidence and bringing satisfaction (cognitive domain). Web 2.0 tools can help a great deal as follows:

1. Discussion on an intriguing topic prior to the lab.
2. Theoretical lessons for certain experiments can be chunked and relevant questions can guide a student through materials "forcing" him or her to prepare for a lab. (Conditional branches and grading of the answers can be very useful.)
3. Multimedia can be used to raise a student's motivation for a chosen topic. It is most effective when used as a trigger for a discussion of measurement problems.
4. Theoretical, numerical assignments can help learners to get prepared for the final, practical phase.

8. Active minimum approach.

This concept was originally created by the first author of the paper, and it is grounded in experiential theory, constructivist theory, motivational design, and the natural, inductive method of learning. A tutor, the environment, or the community gives the learner a small but integral piece of information, which he or she can apply in a meaningful way (solving an assignment or a quiz, executing an experiment, or discussing it with friends online). This way, the student gets the feeling of knowing something, while the chunked materials help him or her overcome the barrier of difficult concepts. The approach is based on constructivist theories and is well supported by the m-learning paradigm, but it also goes much further: the collected chunks can be summarized in reports or wikis written by the whole group, allowing learners to construct their own knowledge.

9. Exit test should follow the experiment execution.

The exit test should examine the particular knowledge gained in the experiment. This is a request that is not easily realized: it is not easy to test someone's experience or feeling for a certain field, although a clear identification of the experiment objectives can help a lot. If the experiment itself provides feedback, this test is not obligatory.

10. Ask students to create a lab report.

This can be done through the course wiki or their Web-folio blog, or students can submit the results to the platform, explaining the problems they encountered and/or the solutions they found. The only problem with the reports is that all of them have to be read to be graded. With 1,000 students (typical at bigger faculties), this is quite demanding, so group work can be used effectively. Here, the tutor should be careful not to allow better students to do the majority of the work or to generate reports on behalf of other team members. Good activity-tracking software can be of much help.

11. Online exams motivate learners to learn on a regular basis using their computers or handheld devices.

Online exams should be combined with classical exams due to authentication problems and the possibility of cheating, but the classical exams can take place much more rarely. Tracking a student's learning process through the platform tools (activity tracking, forum, e-portfolio, etc.) should reduce the possibility of cheating to an acceptable minimum.

12. F2F exams during hands-on labs are used as occasional control points of a learner's knowledge.

The goal is to enable F2F communication with professors during examination and, at the same time, to influence the whole learning process. F2F exams reduce the possibility of cheating during online exams. The role of these kinds of activities is very important: in higher education, they are a source of new scientists and new ideas for teachers, while for the learners they are a touch of reality and experience.

13. Be moderate when using wikis, forums, and interactive lessons.

They are very useful for motivating students and for gaining confidence through discussion and self-expression online. However, they can distract users or lead to frustration if overused [ 26].

14. Consider m-learning support for time-consuming measurements.

In certain scenarios, it can raise motivation and support online collaboration among students. M-learning represents a relevant extension of the e-learning environment, but it is still not suitable for every course. It takes time to prepare the materials well, and this should not be done only formally. M-learning can be a useful support for the discursive approach to learning and can enable much faster communication among users. It can also support the active minimum approach presented by the authors.

## Conclusion

Even though e-learning is gaining popularity and becoming an increasingly present form of education, universities still have problems keeping in line with the changes imposed by the exponential growth of information and communication technologies. This is especially true in electrical engineering education, where e-learning systems should support RLs. RLs put a strong emphasis on applying theoretical knowledge, but if poorly integrated they can also become a serious drawback for distance learning at universities. Besides the technical problems of realizing an RL, universities need a systematic approach to integrating it into their education systems. The first author's goal was to conduct a survey that would yield relevant information on students' real needs while using RLs. To do this, he designed a multilingual Magnetic Measurements experiment suitable for such a survey. This innovative experiment included automated sequencing of activities as well as feedback on users' achievements without the need for supervision. He conducted two-level testing of the RL through the same experiment: a pilot test with Italian students helped to optimize the testing process, while the second round provided relevant information on the students' actual requirements. Through the process of Quality Function Deployment, the students' requirements were translated into a model of experiments executed online. For this purpose, a small team of researchers working in this field was gathered. Under the guidance of the first author, they employed the House of Quality method to statistically translate user requirements (from students and tutors) into practical functionalities and design recommendations for the remote laboratories' LMS. Combining these findings with the state of the art in instructional design, the authors created MIRACLE, a novel user-oriented Model for Integrating Remote Laboratories in Courses that use Laboratories and e-Learning systems.
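The core computation behind the House of Quality step can be sketched in a few lines. In standard QFD practice, each user requirement carries an importance weight and each requirement-vs-characteristic relationship carries a strength (the usual 9/3/1/0 scale); the technical priority of a characteristic is the weighted sum down its column. The weights, matrix, and characteristic names below are purely illustrative and are not the paper's actual survey data.

```python
def qfd_priorities(req_weights, relationships, characteristics):
    """req_weights: importance of each user requirement.
    relationships: matrix (one row per requirement) of relationship
    strengths on the 9/3/1/0 QFD scale.
    Returns characteristics ranked by absolute technical importance."""
    scores = [sum(w * row[j] for w, row in zip(req_weights, relationships))
              for j in range(len(characteristics))]
    return sorted(zip(characteristics, scores), key=lambda p: -p[1])

# Hypothetical example: three requirements vs. three characteristics.
weights = [5, 3, 4]              # e.g. feedback, collaboration, availability
rel = [[9, 1, 3],                # requirement 1 vs each characteristic
       [1, 9, 0],
       [3, 0, 9]]
chars = ["auto-feedback", "forum", "24/7 access"]
print(qfd_priorities(weights, rel, chars))
# → [('auto-feedback', 60), ('24/7 access', 51), ('forum', 32)]
```

The ranked list is what turns survey results into a defensible ordering of LMS functionalities to implement first.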

## References

• 1. J. Ma, and J. Nickerson, "Hands-On, Simulated and Remote Laboratories: A Comparative Literature Review," ACM Computing Surveys, vol. 38, no. 3, p. 7, 2006.
• 2. T. Brown, "Beyond Constructivism: Exploring," Education Today, http://www.dreamland.co.nz/educationtoday/Tom_Brown_Beyond_Constructivism.pdf, 2005.
• 3. C.J. Bonk, and C.R. Graham, Handbook of Blended Learning: Global Perspectives, Local Designs. Pfeiffer Publishing, 2004.
• 4. B. Mergel, "Learning Theories of Instructional Design," http://www.usask.ca/education/coursework/802papers/mergel/brenda.htm, May 1998.
• 5. L.D. Fink, Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. Jossey-Bass, 2003.
• 6. D. Cmuk, et al., "A Novel Approach to Remote Teaching: Multilanguage Magnetic Measurement Experiment," IEEE Trans. Instrumentation and Measurement, vol. 57, no. 4, pp. 724-730, Apr. 2008.
• 7. J.M. Keller, "The Systematic Process of Motivational Design," Performance and Instruction, vol. 26, nos. 9/10, pp. 1-8, 1987.
• 8. A.Y. Kolb, and D.A. Kolb, "Learning Styles and Learning Spaces: Enhancing Experiential Learning in Higher Education," Academy of Management Learning & Education, vol. 4, no. 2, pp. 193-212, 2005.
• 9. R.M. Felder, and L.K. Silverman, "Learning and Teaching Styles in Engineering Education," Eng. Education, vol. 78, no. 7, pp. 674-681, 1988.
• 10. J.E. Corter, et al., "Remote versus Hands-On Labs: A Comparative Study," Proc. Frontiers in Education Conf., vol. 2, pp. 17-21, 2004.
• 11. G. Andria, et al., "Remote Didactic Laboratory 'G. Savastano,' Italian Experience for e-Learning," IEEE Trans. Instrumentation and Measurement, vol. 56, no. 4, pp. 1135-1147, Aug. 2007.
• 12. A.W. Bates, and G. Poole, Effective Teaching with Technology in Higher Education, pp. 47-74, Jossey Bass, 2003.
• 13. R. Zemsky, and W.F. Massy, "Thwarted Innovation: What Happened to e-Learning and Why?" The Learning Alliance for Higher Education, http://www.thelearningalliance.info/Docs/Jun2004/ThwartedInnovation.pdf, 2004.
• 14. M. Ogot, G. Elliott, and N. Glumac, "Hands-On Laboratory Experience via Remote Control: Jet Thrust Laboratory," Proc. Am. Soc. for Eng. Education Ann. Conf. and Exposition, 2002.
• 15. D. Varnava-Marouchou, "21st Century Trends in Education: Implications for Learning and Teaching in Higher Education," Proc. Fifth Int'l Conf. Information Technology Based Higher Education and Training (ITHET '04), pp. 443-448, 2004.
• 16. P.R. Polsani, "E-Learning and the Status of Knowledge in the Information Age," Proc. Int'l Conf. Computers in Education, pp. 1068-1069, 2004.
• 17. J. Gerhard, and P. Mayr, "Competing in the E-Learning Environment—Strategies for Universities," Proc. 35th Int'l Conf. System Sciences, 2002.
• 18. T. Brown, "Beyond Constructivism: Navigationism in the Knowledge Era," On the Horizon, no. 3, vol. 14, pp. 108-120, 2006.
• 19. IMS, "Instructional Management System. IMS Global Learning Consortium: Specifications," www.imsglobal.org/specifications.html, 2006.
• 20. N.C. Alparslan, N.E. Cagiltay, M. Ozen, and E. Uray Aydin, "Teaching Usage of Equipments in a Remote Laboratory," The Turkish Online J. Educational Technology, vol. 7, no. 1, ISSN 1303-6521, Jan. 2008.
• 21. Z.L. Berge, and M.P. Collins, "Technology and Changing Roles in Education," Proc. IEEE Int'l Professional Comm. Conf., pp. 13-18, Sept. 1995.
• 22. I.J. Jason, "The Definition of the Field of Instructional Technology," http://www.ianjones.us/portfolio/PDF/The%20Field%20of%20InstructionalTechnology.pdf, Apr. 2008.
• 23. M.E. Noam, "Electronics and the Dim Future of the University," Science, vol. 270, pp. 247-249, Oct. 1995.
• 24. M. Prensky, "Digital Natives, Digital Immigrants," On the Horizon, vol. 9, 2001.
• 25. L. Feisel, and D. Rosa, "The Role of the Laboratory in Undergraduate Engineering Education," J. Electrical Eng., vol. 94, no. 1, Jan. 2005.
• 26. J.M. Keller, "How to Integrate Learner Motivation Planning into Lesson Planning: The ARCS Model Approach," http://mailer.fsu.edu/~jkeller/Articles/Keller%202000%20ARCS%20Lesson%20Planning.pdf, 2000.
• 27. Massachusetts Inst. of Technology, "MIT iCampus. iLabs Architecture," http://icampus.mit.edu/ilabs/architecture/content/?iLabsInteractive10, 2007.
• 28. Univ. of South Australia, NetLab, http://netlab.unisa.edu.au/faces/frameset.jsp, 2006.
• 29. G. Andria et al., "Remote Didactic Laboratory 'G. Savastano,' The Italian Experience for E-Learning at the Technical Universities in the Field of Electrical and Electronic Measurement: Architecture and Optimization of the Communication Performance," IEEE Trans. Instrumentation and Measurement, vol. 56, no. 4, pp. 1124-1134, 2007.
• 30. D. Vujevic, Mjerenja u Elektrotehnici—Upute za Laboratorijske Vježbe (Measurements in Electrical Engineering—Instructions for Laboratory Exercises). Dorsum d.o.o., fifth ed., pp. 142-143, 2004.
• 31. N. Ertugrul, "Magnetic Circuit Fundamentals," Nat'l Instruments Developer Zone, http://zone.ni.com/devzone/cda/epd/p/id/4292, 2003.
• 32. D. Dagger, et al., "Service-Oriented E-Learning Platforms," IEEE Internet Computing, vol. 11, no. 3, pp. 28-35, May/June 2007.
• 33. Y. Akao, "Development History of Quality Function Deployment," The Customer Driven Approach to Quality Planning and Deployment, p. 339, Asian Productivity Organization, 1994.
• 34. Int'l Standard Organization, "ISO 9001:2000 Quality Management Systems—Requirements," http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=21823, 2007.
• 35. R.E. Zultner, "Priorities: The Analytic Hierarchy Process in QFD," Proc. Fifth Symp. Quality Function Deployment, 1993.
• 36. A.J. Lowe, "QFD Tutorial," Webducate—Innovative Solutions for e-Learning, http://elsmar.com/pdf_files/QFD-Tutorial.swf, 2000.
• 37. J.E. Ashby, "The Effectiveness of Collaborative Technologies in Remote Lab Delivery Systems," Proc. 38th ASEE/IEEE Frontiers in Education Conf., pp. F4E-7-F4E-12, Oct. 2008.

Drago Cmuk received the graduate degree in 2003 and the PhD degree in the optimization and development of new services in remote laboratories. He is a researcher and an assistant at the Faculty of Electrical Engineering and Computing, Zagreb, Croatia, and at the Faculty of Engineering, Benevento, Italy. He began his international postgraduate studies and scientific work in the field of remote laboratories in 2004 at these universities. He works as an assistant in a group of courses on electric measurements and quality management.
Tarik Mutapcic received the master's degree from the Department of Electric Machines, Drives, and Automation at the Faculty of Electrical Engineering and Computing, University of Zagreb, in 1994. Through an Executive Master of Business Administration (EMBA) program, since 2003 his research field has been entrepreneurship and technology commercialization. He collaborates with the Laboratory of Signal Processing and Measurement Information of the University of Sannio, Benevento, on activities in the area of management of remote laboratories.
Ivan Bilic is a final-year student at the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia. His final thesis deals with mobile technologies in measurement and e-learning practice.