NRL Researchers and Universities Collaborate to Win Supercomputing Bandwidth Challenge


12/22/2009 - 128-09r
Contact: Amanda Bowie, (202) 767-2541


A collaborative team that included the Naval Research Laboratory's Center for Computational Science won the Supercomputing Bandwidth Challenge at the 2009 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC09). Participants on the winning team were the Naval Research Laboratory (NRL), the Laboratory for Advanced Computing (LAC) at the University of Illinois at Chicago, the International Center for Advanced Internet Research (iCAIR) at Northwestern University, and the Open Cloud Consortium. The prize was awarded at the 21st annual conference held in Portland, Oregon, November 14-20, 2009.

Schematic of the cross-country 10-Gbps links between NRL in Washington, DC, and SC09 in Portland, OR. Shown are the major components used to extend two 4x SDR InfiniBand paths across the country.

Several wide area large data applications were part of the team's Bandwidth Challenge entry, including two that utilized remote direct memory access (RDMA) technology and locally developed open source protocols, together with an open source motion imagery viewer and a distributed renderer, to fill two 10 Gigabit Ethernet (GigE) paths. One frame path and one packet path were provided by National Lambda Rail (NLR) as part of the SC09 SCinet activities and connected Washington, DC, to Portland, OR, via Chicago, IL, effectively extending the local NRL campus InfiniBand network cloud across the country.

The team drove the links at line rate using RDMA and UDP-based Data Transfer (UDT) over both IPv6 and InfiniBand technology. Video was rendered on one exhibit floor workstation, captured, and transmitted to NRL in Washington, DC, processed via a cloud computing infrastructure, then sent back to another exhibit floor workstation and displayed. Simultaneously, on one of the exhibit floor workstations, a second full-rate motion imagery source was remotely read via the high-I/O parallel Lustre file system from an InfiniBand disk array at NRL DC and rendered locally.
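
At its core, the round trip described above (render on the show floor, capture, ship to NRL DC, process, return, display) is a frame relay. The sketch below is a minimal, hypothetical illustration of that pattern using ordinary TCP sockets and a made-up relay hostname; it is not the team's implementation, which used RDMA and UDT transports over the extended InfiniBand and IPv6 paths that a plain socket can only stand in for.

```python
import socket
import struct

# Frame geometry from the demonstration: 2560x1600 RGB, uncompressed.
FRAME_W, FRAME_H, BYTES_PER_PIXEL = 2560, 1600, 3
FRAME_BYTES = FRAME_W * FRAME_H * BYTES_PER_PIXEL     # ~12.3 MB per frame

# Hypothetical endpoint standing in for the processing relay at NRL DC.
RELAY_HOST = "relay.example.org"
RELAY_PORT = 9000


def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket, or raise if the peer closes early."""
    chunks = []
    remaining = n
    while remaining:
        chunk = sock.recv(min(remaining, 1 << 20))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)


def send_frame(sock: socket.socket, frame: bytes) -> None:
    """Send one length-prefixed, uncompressed frame."""
    sock.sendall(struct.pack("!Q", len(frame)))
    sock.sendall(frame)


def recv_frame(sock: socket.socket) -> bytes:
    """Receive one length-prefixed frame."""
    (length,) = struct.unpack("!Q", recv_exact(sock, 8))
    return recv_exact(sock, length)


def round_trip_one_frame(rendered_frame: bytes) -> bytes:
    """Ship a captured frame to the remote relay and wait for the processed frame."""
    with socket.create_connection((RELAY_HOST, RELAY_PORT)) as sock:
        send_frame(sock, rendered_frame)
        return recv_frame(sock)


if __name__ == "__main__":
    # A synthetic all-zero frame stands in for the captured renderer output.
    frame = bytes(FRAME_BYTES)
    processed = round_trip_one_frame(frame)
    print(f"round-tripped {len(processed):,} bytes")
```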

A screen shot of one frame of a "distant view" of the earth showing an overlay of the NRL "Rampant Lion" data set collected over Afghanistan. The view at this "altitude" shows a low-resolution image of the coverage of the entire data set.

The first application smoothly streamed 2560x1600xRGB uncompressed video at over 70 frames per second (900+ MB/sec) from Oregon to DC and back, using the open source ossimPlanet OSG system with an RDMA relay to complete the pipeline. This demonstrated substantial future functionality for visualization from a distributed Open Cloud infrastructure. For the second application, rendering was accomplished with NRL's open source OpenGL OpenEyes viewer on the Lustre file system, again utilizing RDMA reads from data located in DC. In parallel, LAC demonstrated the Sphere and Sector open source file system over TeraGrid in conjunction with sites at NRL, Northwestern's Starlight POP (Chicago), and elsewhere.
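
For context, the quoted rates follow from simple arithmetic on the frame geometry: a 2560x1600 RGB frame is about 12.3 MB, so a stream somewhat above 70 frames per second exceeds 900 MB/sec, roughly 7 Gbps, which is why a single such uncompressed stream comes close to filling a 10 GigE path. A quick sketch of that calculation (the 70 and 75 fps sample points are illustrative):

```python
# Back-of-the-envelope throughput for the uncompressed video stream described above.
width, height, bytes_per_pixel = 2560, 1600, 3      # RGB, 8 bits per channel
frame_bytes = width * height * bytes_per_pixel      # 12,288,000 bytes (~12.3 MB) per frame

for fps in (70, 75):
    mb_per_sec = frame_bytes * fps / 1e6            # decimal megabytes per second
    gbps = frame_bytes * fps * 8 / 1e9              # line rate in gigabits per second
    print(f"{fps} fps -> {mb_per_sec:,.0f} MB/sec, {gbps:.1f} Gbps")

# 70 fps -> 860 MB/sec (6.9 Gbps); 75 fps -> 922 MB/sec (7.4 Gbps),
# consistent with the 900+ MB/sec figure on a 10 Gigabit Ethernet path.
```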

In addition to winning the "Overall" Bandwidth Challenge category, the team also won an award for "Rich, manifold-process implementations including diverse mechanisms" for their demonstration of "Maximizing Bandwidth Utilization in Distributed Data Intensive Applications." (The two remaining categories, "Classic data movement" and "Impact", were respectively awarded to Caltech for their presentation, "Demonstrating Classic Data Movement", and to the University of Tokyo for their demonstration, "Developments Strongly Affecting a Target Community.")

A high-resolution screen shot (2560 x 1600) of a zoomed-in portion of the NRL "Rampant Lion" data set collected over Afghanistan. It was a "rotating globe" of this data, streamed from the SC09 exhibit floor in Portland, OR, to NRL in Washington, DC, and back, that contributed to winning the SC09 Bandwidth Challenge.

About the imagery: The very high-resolution imagery rendered in one of the NRL applications demonstrated at SC09 was collected by a system developed for, and flown during, the Rampant Lion surveys conducted by fellow NRL researchers in the Marine Physics Branch of the Marine Geosciences Division and the Remote Sensing Division, together with researchers from the U.S. Geological Survey and Canadian Armed Forces. During the first Rampant Lion survey, remote sensing data was collected to support economic and civil infrastructure development, providing the means for future development of a stable and legal economy in Afghanistan. Rampant Lion demonstrated the integration and simultaneous operation of the largest suite of remote sensing equipment ever flown in a single aircraft, including synthetic aperture radar, hyperspectral imaging, digital photogrammetry, and gravity and magnetic sensors. Rampant Lion II featured an upgraded sensor suite: a digital photogrammetric camera upgraded to 39 MPixels, hyperspectral imaging extended from 0.4-1.0 microns to 0.4-2.5 microns, the addition of a thermal imaging camera, and the addition of a high-altitude scanning topographic LiDAR system. NRL's VXS-1 aircraft squadron planned and executed all deployment requirements. (Go to http://mapserver.cmf.nrl.navy.mil to view a sample of this data.)

About the Naval Research Laboratory: NRL was established in 1923 as the U.S. Navy's corporate laboratory. Today, NRL is aligned with the Office of Naval Research (ONR) to conduct a broadly based multidisciplinary program of scientific research and advanced technological development. The Center for Computational Science (CCS) is an Information Technology Division (ITD) organization engaged in research and development (R&D) in support of DoD and Government sponsors, as well as an IT service provider to NRL/ONR users. The CCS research mission is focused on rapidly prototyping open source large data infrastructures for the DoD. Built on leading-edge advanced computing, networking, visualization, and information storage architectures, the CCS is working toward its vision of Terabit data flows.

About the Center for Computational Science: CCS is a DoD Center for High Performance Computing (CHPC) that works in close collaboration with the High Performance Computing Modernization Program (HPCMP). The CCS CHPC participates in DoD-wide programs and supports the NRL/Navy scientific user community, whose members require shared access to high performance computing for their S&T work. Collaborative research efforts of the CCS benefit the NRL/Navy/DoD community as a whole by providing early access to high speed research computing and experimental networks, such as the HPCMP's DREN and NRL's ATDnet, infrastructures that offer exposure to the latest high performance computing and wide area networking hardware and software protocols.

About the Open Cloud Consortium: The Open Cloud Consortium (OCC) is a 501(c)(3) not-for-profit that: supports the development of standards for cloud computing and frameworks for interoperating between clouds; develops benchmarks for cloud computing; supports reference implementations for cloud computing; manages a testbed for cloud computing called the Open Cloud Testbed; and, sponsors workshops and other events related to cloud computing. (www.opencloudconsortium.org)

Additional collaborators include Computer Integration & Programming Solutions Corporation, ITT Advanced Engineering & Sciences, Obsidian Strategics Incorporated, and QoSient.





About the U.S. Naval Research Laboratory

The U.S. Naval Research Laboratory is the Navy's full-spectrum corporate laboratory, conducting a broadly based multidisciplinary program of scientific research and advanced technological development. The Laboratory, with a total complement of approximately 2,500 personnel, is located in southwest Washington, D.C., with other major sites at the Stennis Space Center, Miss., and Monterey, Calif. NRL has served the Navy and the nation for over 90 years and continues to meet the complex technological challenges of today's world. For more information, visit the NRL homepage or join the conversation on Twitter, Facebook, and YouTube.
