2014 Fourth International Workshop on Network-Aware Data Management (NDM) (2014)
Nov. 16, 2014
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/NDM.2014.8
Scientific collaborations on a global scale, such as the LHC experiments at CERN, rely today on high-performance, high-availability networks. In this paper we review developments over the last several years in high-throughput applications, multilayer software-defined network path provisioning, path selection and load-balancing methods, and the integration of these methods with the mainstream data transfer and management applications of CMS, one of the major LHC experiments. These developments are folded into a compact system capable of moving data among research sites at the 1 Terabit per second scale. We present several aspects of the design targeting different components of the system, including: evaluation of 40 and 100 Gbps-capable hardware on both the network and server side, data movement applications, flow management, and the network-application interface leveraging advanced network services. We report comparative results between several multi-path algorithms, quantify the performance increase obtained with this approach, and present results from the related SC'13 demonstration.
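To make the multi-path load-balancing idea in the abstract concrete, the following is a minimal sketch, not taken from the paper: it assigns data flows to parallel network paths with a simple least-loaded (greatest-headroom) heuristic, one basic form of the approach the abstract compares. The function name, units, and heuristic are illustrative assumptions.

```python
def assign_flows(flow_sizes, path_capacities):
    """Greedily place each flow on the path with the most spare capacity.

    Hypothetical illustration only; the paper evaluates several
    multi-path algorithms, not specifically this one.

    flow_sizes: list of per-flow demands (e.g., in Gbps)
    path_capacities: list of per-path capacities (e.g., in Gbps)
    Returns a list mapping each flow index to a chosen path index.
    """
    load = [0.0] * len(path_capacities)
    placement = []
    # Place the largest flows first; small flows then fill the gaps.
    for i in sorted(range(len(flow_sizes)), key=lambda i: -flow_sizes[i]):
        # Pick the path with the greatest remaining headroom.
        best = max(range(len(load)), key=lambda p: path_capacities[p] - load[p])
        load[best] += flow_sizes[i]
        placement.append((i, best))
    placement.sort()  # restore original flow order
    return [path for _, path in placement]
```

For example, three flows of 40, 40, and 10 Gbps across two 100 Gbps paths end up split roughly evenly rather than stacked on one path.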
Index Terms: Large Hadron Collider, data transfer, servers, switches, wide area networks, resource management
A. Barczyk et al., "Towards Managed Terabit/s Scientific Data Flows," 2014 Fourth International Workshop on Network-Aware Data Management (NDM), LA, USA, 2014, pp. 23-27.