IEEE Virtual Reality Conference 2002 (VR 2002)
A Scalable, Multi-user VRML Server
Orlando, Florida
March 24-March 28
ISBN: 0-7695-1492-8
Thomas Rischbeck, University of Newcastle upon Tyne
Paul Watson, University of Newcastle upon Tyne
VRML97 allows the description of dynamic worlds that can change with both the passage of time and user interaction. Unfortunately, the current VRML usage model prevents its full potential from being realized: the whole world must initially be loaded into the user's desktop browser, so large worlds can take a very long time to download and render, and a world cannot be shared among multiple users. This paper describes the design and implementation of a client-server architecture built to overcome these problems. The major novelty is the decoupling of VRML world execution from world rendering. Parallelism and information filtering are exploited to produce a highly scalable system that can support huge, highly active worlds accessed simultaneously by large numbers of users. A cluster-based parallel server is responsible for maintaining the dynamic world state, and most of the world dynamics are evaluated on the server side. The server streams VRML to the client, using view-frustum culling and dynamic LOD selection to reduce clients' network bandwidth, storage, and rendering requirements. Clients with limited resources (e.g. wireless-connected PDAs) can therefore participate in highly complex virtual worlds. While the implementation focuses on VRML worlds, the design ideas could be exploited in other types of VR system, e.g. X3D.
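
As a rough illustration of the streaming approach outlined in the abstract, the sketch below shows how a server might combine view-frustum culling with distance-based LOD selection when deciding which parts of a world to send to a given client. It is a minimal sketch only; the class and method names (StreamFilter, WorldNode, Frustum, selectForClient) are hypothetical and the visibility test is simplified to a far-clip distance check rather than a full six-plane frustum test, which is not how the paper's server is necessarily implemented.

import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of server-side filtering: only nodes inside the
 * client's (simplified) view frustum are streamed, and each visible node
 * is sent at a level of detail chosen from its distance to the viewpoint.
 */
public class StreamFilter {

    /** A world node with a bounding sphere and several detail variants. */
    static class WorldNode {
        double x, y, z, radius;   // bounding-sphere centre and radius
        String[] lodVariants;     // VRML fragments, ordered coarse to fine

        WorldNode(double x, double y, double z, double radius, String[] lodVariants) {
            this.x = x; this.y = y; this.z = z;
            this.radius = radius; this.lodVariants = lodVariants;
        }
    }

    /** Simplified frustum: a viewpoint plus a far-clip distance. */
    static class Frustum {
        double vx, vy, vz, farClip;

        Frustum(double vx, double vy, double vz, double farClip) {
            this.vx = vx; this.vy = vy; this.vz = vz; this.farClip = farClip;
        }

        /** Crude visibility test: bounding sphere within far-clip range. */
        boolean contains(WorldNode n) {
            return distanceTo(n) - n.radius <= farClip;
        }

        double distanceTo(WorldNode n) {
            double dx = n.x - vx, dy = n.y - vy, dz = n.z - vz;
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }

    /** Select the VRML fragments to stream for one client viewpoint. */
    static List<String> selectForClient(List<WorldNode> world, Frustum frustum) {
        List<String> toStream = new ArrayList<>();
        for (WorldNode node : world) {
            if (!frustum.contains(node)) {
                continue;                        // culled: never sent to the client
            }
            // Nearer nodes get a finer variant; distant ones a coarser one.
            double d = frustum.distanceTo(node);
            int levels = node.lodVariants.length;
            int level = (int) Math.min(levels - 1, (d / frustum.farClip) * levels);
            toStream.add(node.lodVariants[levels - 1 - level]);
        }
        return toStream;
    }
}

In this kind of design the filtering runs per client on the server, so a resource-limited client only ever receives geometry it can currently see, at a detail level it can afford to render.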
Index Terms:
client-server NET-VE, scalable VRML, X3D, distributed event cascades
Citation:
Thomas Rischbeck, Paul Watson, "A Scalable, Multi-user VRML Server," Proceedings of the IEEE Virtual Reality Conference 2002 (VR 2002), p. 199, 2002.