- The computing facility is heavily GPU-equipped, comprising both GPU workstations and a GPU cluster.
- GPU workstations: We have more than 20 GPU workstations, each equipped with a medium- to high-end GPU, such as a Tesla C1060, GTX 580, GTX 590, Quadro K2000, Quadro K4200, GTX 1080 Ti, or Titan X. These workstations serve as members' personal machines for code development and routine work, and several are well suited to deep learning. For example, the Titan X workstation has 3584 CUDA cores and 12GB of GDDR5X memory, delivering 11 TFLOPS of single-precision performance and 480GB/s of memory bandwidth.
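As a sanity check on the quoted Titan X figures, peak single-precision throughput can be estimated as CUDA cores × 2 FLOPs per cycle (one fused multiply-add) × clock rate. The ~1.53 GHz boost clock below is an assumption not stated above; only the core count and TFLOPS figure come from the text.

```python
# Rough sanity check of the quoted Titan X numbers.
# Peak FP32 = cores x 2 FLOPs/cycle (FMA) x boost clock.
cores = 3584                # CUDA cores, as quoted above
boost_clock_hz = 1.531e9    # assumed boost clock (~1.53 GHz), not from the text
peak_tflops = cores * 2 * boost_clock_hz / 1e12
print(f"Peak FP32: {peak_tflops:.1f} TFLOPS")  # prints "Peak FP32: 11.0 TFLOPS"
```

This reproduces the 11 TFLOPS figure quoted for the Titan X workstation.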
- GPU cluster: We have a rack-mounted GPU cluster for deep-learning training and testing. Group members access the cluster remotely over a high-speed network to perform their research. The cluster has the following nodes:
- NVIDIA® Titan GPU nodes: There are twelve (12) independent nodes, each with two GTX Titan (or Titan Black) cards. Each card has 6GB of GPU memory, and each node has 16GB of CPU memory, a 120GB SSD, and 6TB of HDD storage.
- NVIDIA® Tesla K80 node (ROK80x8): This is a single node with eight (8) NVIDIA Tesla K80 cards. Each card contains two (2) independent GPUs, each with 12GB of memory, giving the node a total of 39,936 CUDA cores and 192GB of GPU memory. Each card delivers up to 8.74 TFLOPS of single-precision performance and 480GB/s of memory bandwidth (8.74×8 TFLOPS and 480×8 GB/s in aggregate).
- NVIDIA® DGX-1 node: An NVIDIA® DGX-1 station will join our deep-learning inventory as a cluster node in early 2018. The DGX-1 has eight (8) NVIDIA Tesla V100 cards and supports NVLink for peer-to-peer GPU communication. It provides 128GB of total GPU memory, 40,960 CUDA cores, 5120 Tensor cores, and 512GB of system memory, with FP16 performance of up to one petaflop. It will become our workhorse for deep-learning training.
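The aggregate figures above follow from per-card specs multiplied by the card count. A minimal sketch of that arithmetic, assuming the standard published per-card numbers (4992 CUDA cores per K80 card, 5120 CUDA cores, 640 Tensor cores, and 16GB of HBM2 per V100), which are not all stated explicitly in the text:

```python
# K80 node: eight cards, two 12GB GPUs per card.
k80_cards = 8
k80_cores = 4992 * k80_cards        # 4992 CUDA cores/card (assumed spec)
k80_mem_gb = 2 * 12 * k80_cards     # two 12GB GPUs per card
k80_fp32_tflops = 8.74 * k80_cards  # 8.74 TFLOPS per card, as quoted

# DGX-1 node: eight V100 cards.
v100_cards = 8
dgx1_cuda = 5120 * v100_cards       # 5120 CUDA cores/card (assumed spec)
dgx1_tensor = 640 * v100_cards      # 640 Tensor cores/card (assumed spec)
dgx1_mem_gb = 16 * v100_cards       # 16GB HBM2 per card (assumed spec)

print(k80_cores, k80_mem_gb, dgx1_cuda, dgx1_tensor, dgx1_mem_gb)
# prints "39936 192 40960 5120 128"
```

These totals match the 39,936 CUDA cores and 192GB for the K80 node and the 40,960 CUDA cores, 5120 Tensor cores, and 128GB of GPU memory quoted for the DGX-1.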
Free Software and Codes
See bottom for links to other implementations and variants.