Resources

RENCI occupies more than 24,000 square feet at 100 Europa Drive in Chapel Hill, including a 2,000 square foot data center. Engagement centers at Duke University in Durham, North Carolina State University in Raleigh, and UNC-Chapel Hill offer visualization and collaboration technologies for campus faculty and the RENCI researchers who collaborate with them. These sites are connected to one another by high-performance networking (10 Gbps links within the Research Triangle Park area).

  • 2,000 square feet of floor space on an 18-inch raised floor
  • 600 kVA commercial power
  • 375 kVA UPS power
  • 20 kVA generator power
  • 134 tons of dedicated cooling
  • Room for 40 racks of high-performance computing, storage, and networking equipment

RENCI began operations in 2004. Since then, the organization has acquired a range of computational systems to support its projects and activities. The following are the major components of the infrastructure currently active at RENCI.

Hatteras

Deployed in summer 2013, Hatteras is a 2,048-core cluster running CentOS Linux. It is segmented into four independent sub-clusters, each with its own FDR-10 InfiniBand interconnect, so Hatteras can run four concurrent jobs, each with 512-way parallelism. Hatteras uses Dell's densest enclosure, and the entire system fits in a single rack. Each of the 128 nodes has the following configuration (a minimal MPI job sketch follows the list):

  • Dell M420 quarter-height blade server
  • Two Intel Xeon E5-2450 CPUs (2.1 GHz, 8-core)
  • 96 GB 1600 MHz RAM
  • 50 GB SSD for local I/O
  • Mellanox FDR-10 InfiniBand interconnect
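
The scheduler and software environment on Hatteras are not specified here, so the following is only a minimal sketch of a 512-way MPI job, assuming a Python installation with mpi4py; the actual launch command and any environment modules would depend on the site configuration.

    # hello_mpi.py -- minimal 512-way MPI sketch (assumes mpi4py is installed;
    # Hatteras's actual scheduler and launch mechanism are not specified here).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's rank, 0..511 in a 512-way job
    size = comm.Get_size()   # total number of MPI ranks

    # Each rank reports its host; rank 0 gathers the list and prints a summary.
    hosts = comm.gather(MPI.Get_processor_name(), root=0)
    if rank == 0:
        print("%d ranks across %d nodes" % (size, len(set(hosts))))

Launched with something like mpirun -n 512 python hello_mpi.py, such a job would span 32 of the 16-core nodes, that is, one of the four 512-core sub-clusters.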

Blue Ridge

Blue Ridge is a 160-node cluster running Red Hat Linux and Windows HPC Server 2008. The cluster has a 40 Gb/s InfiniBand MPI interconnect, a 1 Gb/s file system interconnect, and a 20 TB shared file system (NFS over a 20 Gb/s network). Each node has the following specifications:

  • Dell PowerEdge M610 blade server
  • 2 x 2.8 GHz Intel Nehalem-EP 5560, quad-core
  • 24 GB 1333 MHz memory
  • 74 GB 15K RPM SAS drive

Additionally, Blue Ridge includes the following:

2 nodes with general-purpose GPUs (NVIDIA Tesla S1070 units), with the following specifications (a brief GPU usage sketch follows this list):

  • Dell PowerEdge R710
  • 2 x 2.8 GHz Intel Nehalem-EP 5560, quad-core
  • 96 GB 1066 MHz memory
  • 4 x NVIDIA Tesla S1070
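
The Tesla S1070 units are CUDA-capable devices; purely as an illustration, the sketch below assumes a Python environment with NumPy and PyCUDA installed, which this document does not confirm.

    # gpu_saxpy.py -- minimal GPU offload sketch (assumes NumPy and PyCUDA;
    # the actual GPU software stack on Blue Ridge is not specified here).
    import numpy as np
    import pycuda.autoinit              # creates a CUDA context on the first GPU
    import pycuda.gpuarray as gpuarray

    n = 1 << 20
    x = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))  # host -> device
    y = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))

    z = 2.0 * x + y                     # elementwise SAXPY evaluated on the GPU
    print(z.get()[:5])                  # device -> host copy of the result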

2 nodes for large-memory work, with the following specifications:

  • Dell PowerEdge R910
  • 4 x 2.00 GHz Intel Nehalem-EX, 8-core
  • 1 TB 1066 MHz memory

Croatan

Deployed in the second half of 2012, Croatan is configured for large-scale data analytics. There are 30 nodes in the cluster, and each node includes the following (an illustrative analytics sketch follows the list):

  • Two Intel Xeon E5-2670 processors (16 cores total @ 2.6 GHz)
  • 256 GB RDIMM RAM @ 1600 MHz
  • 36 TB (12 x 3 TB) of raw local disk dedicated to the node
  • 146 GB RAID-1 volume dedicated to the OS
  • 56 Gbps FDR InfiniBand interconnect
  • 40 Gbps Ethernet interconnect
  • 10 Gbps dedicated Ethernet NAS connectivity
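
This document does not name the analytics software running on Croatan; as one common fit for this class of shared-nothing hardware (large node-local disk, fast interconnect), the sketch below assumes a Spark deployment with PySpark available and uses a hypothetical input path.

    # croatan_wordcount.py -- illustrative analytics sketch (assumes a Spark
    # deployment with PySpark; Croatan's actual software stack is not specified).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("croatan-sketch").getOrCreate()

    # Hypothetical input path on node-local or NAS-mounted storage.
    lines = spark.read.text("/data/example/*.txt")

    # Split lines into words, then count and rank them by frequency.
    counts = (lines.selectExpr("explode(split(value, ' ')) AS word")
                   .groupBy("word")
                   .count()
                   .orderBy("count", ascending=False))
    counts.show(10)
    spark.stop()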

In addition to these systems, a Microsoft-sponsored research project has been leveraged to deploy a comprehensive environment supporting both research and general RENCI operations. The project encourages the use of leading-edge technologies and of the full Microsoft enterprise IT platform. The systems currently deployed include:

  • 40-core Windows compute cluster available to the research community
  • 5 TB SQL Server available to the research community (a connection sketch follows this list)
  • 5 TB SQL Server dedicated to a specific genetics research project
  • Team collaboration server using SharePoint (MOSS 2007)
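
As an example of how a project might use the research SQL Server listed above, the sketch below assumes a Python client with pyodbc and a suitable ODBC driver installed; the driver name, server, and database are placeholders rather than actual RENCI resource names.

    # sqlserver_query.py -- hypothetical connection to the research SQL Server
    # (driver, server, and database names below are placeholders).
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"   # depends on installed driver
        "SERVER=sqlserver.example.org;"             # placeholder host
        "DATABASE=research_db;"                     # placeholder database
        "Trusted_Connection=yes;"                   # e.g. AD-integrated login
    )
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 name FROM sys.tables")  # list a few tables
    for row in cursor.fetchall():
        print(row.name)
    conn.close()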

In addition to these capabilities, the deployed Microsoft-based enterprise IT services provide a complete platform for research and operations, including: directory services (AD); federated identity (ADFS); enterprise messaging (Exchange 2007); inventory, monitoring, and configuration management (System Center Essentials); project management and collaboration (SharePoint); source control and SDLC (Visual Studio Team Foundation Server); and a generally available Windows login node (Terminal Services).

The RENCI Storage Infrastructure comprises:

  • Over 256 active Brocade FC4 and FC8 ports
  • Multi-node BlueArc Mercury 110 NFS/CIFS cluster, connected via 10 Gigabit Ethernet
  • DataDirect Networks S2A9900 and four S2A6620 SANs with 1.7 PB of SATA disk space
  • NetApp FAS3240 cluster with 50 TB of storage
  • Quantum Scalar i6000 with 1,584 tape slots, 8 LTO-4 tape drives, and 4 LTO-5 tape drives

Networking

The RENCI production network provides connectivity to the Breakable Experimental Network (BEN), the North Carolina Research and Education Network (NCREN), the University of North Carolina, and National LambdaRail's (NLR) FrameNet. Each has its own 1 or 10 Gigabit Ethernet connection and is implemented at the IP layer (Layer 3). The NLR FrameNet connection is a 10 Gigabit Ethernet connection used to reach a nationwide Layer 2 switch fabric. Each network connection provides access to the commodity Internet, to higher-education networking (Internet2 and NLR), or to a network research facility. RENCI is also in the process of acquiring a Layer 2 10 Gbps connection to Internet2's Advanced Layer 2 Service (formerly ION).

RENCI's connectivity to the outside world is provided through a Cisco 7609 router managed by RENCI staff. Internal data center connectivity runs through a mix of switches managed by UNC Information Technology Services and by RENCI (an Enterasys N7). RENCI's internal networking infrastructure is built around a Force10 S4810 switch capable of supporting 96 10 Gb/s connections. This allows RENCI to cleanly separate its production, research, and experimental networking infrastructures so that they can coexist without interfering with one another.

The Breakable Experimental Network (BEN) is the primary platform for RENCI experimental network research. It consists of several segments of NCNI dark fiber, a time-shared resource available to the Triangle research community. BEN is a research facility created to promote scientific discovery by providing the universities with world-class infrastructure and resources for experimentation with disruptive technologies. The engagement sites at Duke, NC State, and UNC, as well as RENCI's anchor site at Europa, use BEN for experimental, non-production network connectivity among one another. The engagement sites act as PoPs (Points of Presence) distributed across the Triangle metro region that together form a research test bed. RENCI acts as the caretaker of the facility as well as a participant in experimentation activities on BEN.

On BEN, RENCI has deployed Polatis fiber switches, Infinera DTN bandwidth provisioning platforms, and a mix of Cisco 6509 and Juniper routers. This equipment is intended for experimental and research use in GENI and also supports RENCI's research agenda. BEN connects to the outside world using bandwidth-on-demand Layer 2 connections to FrameNet and the forthcoming Internet2 Advanced Layer 2 Service.

GENI

Through a project funded by the GENI Project Office, RENCI is in the process of deploying 14 ‘GENI Racks’ across US university campuses. Each rack consists of 10 IBM x3650 M4 worker nodes controlled through a head node, a 6 TB iSCSI storage array, and a 10/40 Gbps BNT 8264 OpenFlow switch.

These ‘ExoGENI’ racks constitute a ‘networked cloud’ prototype infrastructure running OpenStack and xCAT software intended for GENI experimenters but also suitable for distributed computation experiments in various domain sciences. The GENI programming interface for ExoGENI is provided by the ORCA (Open Resource Control Architecture) software designed jointly by RENCI and Duke University.
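
The document notes only that ORCA provides the GENI programming interface for ExoGENI; purely to illustrate the control-plane pattern, the sketch below issues an XML-RPC call against a placeholder endpoint with a hypothetical method name (this is not the real ORCA API).

    # exogeni_request.py -- hypothetical control-plane call; the URL and the
    # createSlice method below are placeholders, NOT the actual ORCA interface.
    import xmlrpc.client

    controller = xmlrpc.client.ServerProxy("https://controller.example.org/orca")

    # A real request would carry a resource description (e.g. an RDF- or
    # RSpec-style document); here it is just a placeholder string.
    request_document = "<placeholder resource description>"
    result = controller.createSlice("my-experiment", request_document)
    print(result)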

The racks are connected to the public Internet for management access. In addition, they are connected to NLR FrameNet for configurable Layer 2 connections between racks, as well as to other elements of the GENI infrastructure, including the OpenFlow overlays on Internet2 and NLR.

More information is available at http://wiki.exogeni.net

RENCI has nine VMware ESX servers with vSphere Enterprise Plus licensing that serve the needs of most projects. Each server has the following configuration (a brief API usage sketch follows the list):

  • Dell PowerEdge R710
  • 2 x 3.0 GHz Intel Xeon X5680 (Westmere), 6-core
  • 96 GB system memory
  • 4 x 10 GbE network connections
  • 2 x FC8 Fibre Channel connections
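
As a sketch of how this VMware environment might be queried programmatically, the following assumes the pyVmomi client library and a reachable vCenter endpoint; the hostname and credentials are placeholders.

    # vsphere_inventory.py -- list VMs through the vSphere API (assumes pyVmomi
    # is installed; the vCenter hostname and credentials are placeholders).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()   # skip cert verification (sketch only)
    si = SmartConnect(host="vcenter.example.org",
                      user="user@example.org",
                      pwd="password",
                      sslContext=context)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:                          # iterate over all VMs found
        print(vm.name, vm.runtime.powerState)

    view.Destroy()
    Disconnect(si)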

Visualization Infrastructure

UNC Chapel Hill

  • Social Computing Room at ITS Manning: The Social Computing Room is a 24-foot x 24-foot room that uses three projectors per wall, or 12 projectors in total, for a 360-degree experience. It is capable of visualizing large scientific datasets at 9.5 million pixels. The room also supports high-profile arts and humanities projects and includes a tracking system capable of tracking up to 15 people or resources in real time.
  • Odum Social Computing Room at Davis Library: The Odum RENCI Social Computing Room is a 20-foot x 20-foot room that provides a 360-degree experience similar to the Social Computing Room at ITS Manning. Available Fall 2013.
  • Tele-immersion Room: The Tele-immersion Room combines two 4K-resolution projectors arranged to project true 3D stereoscopic imagery at over four times HD resolution. The room supports a variety of visualization, arts, and humanities projects. Capabilities include both interactive visualization and pre-computed streaming movie animations.

Videoconferencing: There is a two-projector teleconferencing node at ITS Manning. The Tele-immersion Room listed above also supports dual-channel HD videoconferencing.

Europa Center

  • HD Edit Suite: This edit suite enables video editing and post-production in native HD resolution.

Videoconferencing: There are three teleconferencing nodes at Europa: two with three projectors (one of these supports dual-channel HD videoconferencing) and the other with one projector and a Cisco CTS 500 Telepresence unit.

Duke

  • zSpace Station – a stereoscopic tracked 3D virtual tabletop visualization platform.

Videoconferencing: A two-projector teleconferencing node that supports traditional and dual-channel HD videoconferencing.

NC State

  • Social Computing Room at D.H. Hill Library: Available in Fall 2013, the Social Computing Room at D.H. Hill Library is a 25-foot x 25-foot room that uses three projectors per wall, or 12 projectors in total, for a 360-degree experience. It is capable of visualizing data at 9.5 million pixels. The room also supports high-profile arts and humanities projects.

Manteo (Coastal Studies Institute)

Videoconferencing: A teleconferencing node using a 46-inch flat panel display, camera and echo-cancelling microphone on a portable cart.

Breakable Experimental Network (BEN)

The Breakable Experimental Network (BEN) is the primary platform for RENCI network research. BEN is a dark fiber research facility created for researchers and scientists at the three Triangle-area universities to promote scientific discovery by providing world-class infrastructure and resources. RENCI engagement sites at UNC-Chapel Hill, NC State, and Duke, as well as RENCI's anchor site at the Europa Center in Chapel Hill, are home to BEN PoPs (Points of Presence) that form a research test bed used by university researchers on a time-sharing basis. RENCI manages the BEN infrastructure on behalf of the three universities. Teams of researchers can install their equipment at BEN PoPs to perform at-scale experiments with disruptive networking technologies.

BEN consists of two planes: the management plane, which runs over a mesh of secure tunnels provisioned over commodity IP; and the data plane, which runs over the dark fiber infrastructure. The management plane allows out-of-band access to control experimental equipment deployed at BEN PoPs. Access to the management plane is granted over a secure VPN.

Each BEN PoP shares the same network architecture, which enables the following capabilities in support of regional network research:

  • Power and space for equipment used by research teams to perform experiments on BEN.
  • A reconfigurable fiber switch (Layer 0), which gives researchers access to dark fiber, enables the dynamic creation of different physical experiments, and allows experimental equipment to connect to the BEN fiber in a time-shared fashion.
  • RENCI additionally provides its own equipment to support collaborative research efforts:
    • An Infinera DTN reconfigurable DWDM (Dense Wavelength Division Multiplexing) platform, which provides up to 100 Gb/s of bandwidth between PoPs.
    • Reconfigurable switches/routers from Cisco and Juniper (Layers 2 and 3), which provide packet services.
    • Programmable compute platforms (3-4 server blades at each PoP).