Computational fluid dynamics
Computing Services at CSC
Janne Ignatius
Director, Computing Services
CSC Fact Sheet
– Founded in 1971 as a technical support unit for the Univac 1108
– Funet started in 1984
– First supercomputer, a Cray X-MP/EA 416, in 1989
– Reorganized as a limited company, CSC-Scientific Computing Ltd., in 1993
– All shares to the Ministry of Education of Finland in 1997
– Since March 2005, facilities in Keilaniemi, Espoo
– CSC turnover in 2006: 15.6 M€, 170 employees
– Operated on a non-profit principle
MISSION: CSC, as a part of the Finnish national research structure, develops and offers high-quality information technology services.
VISION 2012: CSC – a leading center of excellence in information technology for science in the European research area.
CSC supports the national research structure
CSC Fields of Services
Services:
– Funet services
– Computing services
– Data services
– Application services
– Information management services
Customer groups: universities, polytechnics, research institutes, companies
Organization:
– Managing director
– Director, customer services
– Director, customer solutions and processes
– HR and office services
– Communications
– Funet network services
– Information management services
– Data services for science and culture
– Application services
– Computing services
– Finance services
– General administration
Computing Services (COMP)
Janne Ignatius
Groups:
– Core Computing Support (CORE)
– Computing Environment and Applications (CEA)
– Special Computing (SPECO)
Staff:
Tero Tuononen
Jussi Heikonen
Jura Tarus
[Juha Helin (Cray)]
Petri Isopahkala
Esko Keränen
Kari Kiviaura
Jari Niittylahti
Aila Lyijynen
Samuli Saarinen
Dan Still
Joni Virtanen
Sebastian von Alfthan
Tommi Bergman
Jussi Enkovaara
Pekka Manninen
Marko Myllynen
Jarmo Pirhonen
Raimo Uusvuori
Thomas Zwinger
Juha Fagerholm
Juha Jäykkä
Vesa Kolhinen
Olli-Pekka Lehto
Juha Lento
Petri Nikunen
Nino Runeberg
Customers
● 3000 researchers use CSC’s computing capacity
● Funet connects 85 organisations to the global research networking infrastructure
  – All Finnish universities
  – Nearly all Finnish polytechnics
  – 30 industrial clients and research institutions
  – Total of 350 000 end users
Users by discipline 2007
(Pie chart: number of users by discipline – Biosciences, Computer science, Physics, Chemistry, Linguistics, Nanoscience, Computational fluid dynamics, Engineering, Mathematics, Other)
Usage of processor time by discipline 2007
(Pie chart: share of processor time by discipline – Nanoscience, Physics, Chemistry, Biosciences, Astrophysics, Computational fluid dynamics, Computer science, Earth sciences, Environmental sciences, Computational drug design, Other)
Expert Services in Computational Science
• Biosciences
• Geosciences
• Physics
• Chemistry
• Nanoscience
• Linguistics
• Numerics
• Parallel Computing
• Structural analysis
• Computational fluid dynamics
• Visualization
Performance pyramid
TIER 0: European HPC center(s)
TIER 1: National/regional centers
TIER 2: Local centers
The ESFRI Vision for a European HPC service
• European HPC facilities at the top of an HPC provisioning pyramid
  – Tier-0: 3-5 European Centres
  – Tier-1: National Centres
  – Tier-2: Regional/University Centres
  (Pyramid diagram: tier-0, tier-1, tier-2; PRACE, DEISA, EGEE)
• Creation of a European HPC ecosystem involving all stakeholders
  – HPC service providers on all tiers
  – Grid Infrastructures
  – Scientific and industrial user communities
  – The European HPC hard- and software industry
PRACE – Project Facts
• Objectives of the PRACE Project:
  – Prepare the contracts to establish the PRACE permanent Research Infrastructure as a single Legal Entity in 2010, including governance, funding, procurement, and usage strategies.
  – Perform the technical work to prepare operation of the Tier-0 systems in 2009/2010, including deployment and benchmarking of prototypes for Petaflops systems and porting, optimising and peta-scaling of applications.
• Project facts:
  – Partners: 16 Legal Entities from 14 countries
  – Project duration: January 2008 – December 2009
  – Project budget: 20 M€, EC funding: 10 M€
PRACE is funded in part by the EC under the FP7 Capacities programme, grant agreement INFSO-RI-211528.
Site – Architecture – Point of contact:
– FZJ, Germany: MPP, IBM BlueGene/P. Contact: Michael Stefan [email protected]
– CSC-CSCS, Finland+Switzerland: MPP, Cray XT5 (/XTn), AMD Opteron. Contacts: Janne Ignatius [email protected]; Peter Kunszt [email protected]
– CEA-FZJ, France+Germany: SMP-TN, Bull et al., Intel Xeon Nehalem. Contacts: Gilles Wiber [email protected]; Norbert Eicker [email protected]
– NCF, Netherlands: SMP-FN, IBM Power6. Contacts: Axel Berg [email protected]; Peter Michielse [email protected]
– BSC, Spain: Hybrid (fine grain), IBM Power6 + Cell. Contact: Sergi Girona [email protected]
– HLRS, Germany: Hybrid (coarse grain), NEC Vector SX/9 + x86. Contact: Stefan Wesner [email protected]
DEISA – Distributed European Infrastructure for Supercomputing Applications
• A consortium of leading national supercomputing centres deploying and operating a persistent, production-quality, distributed supercomputing environment with continental scope
• Grid-enabled, FP6-funded Research Infrastructure
• A 4+3-year project started in May 2004
• Total budget is 37.1 M€ (incl. DEISA and eDEISA contracts), EU funding 20.9 M€
EGEE-II Applications Overview
Enabling Grids for E-sciencE
• >200 VOs from several scientific domains
  – Astronomy & Astrophysics
  – Civil Protection
  – Computational Chemistry
  – Computational Fluid Dynamics
  – Computer Science/Tools
  – Condensed Matter Physics
  – Earth Sciences
  – Fusion
  – High Energy Physics
  – Life Sciences
• Further applications under evaluation
• Applications have moved from testing to routine and daily usage: ~98k jobs/day at ~80-90 % efficiency
EGEE-II INFSO-RI-031688
Layers of Computing Capacity for Science
Tier 0: European top-level computing capacity (under development)
Tier 1: National level: supercomputer system at CSC
  i) a more tightly connected part (more expensive per CPU) ≈ ’capability’ part
  ii) a massive cluster part (less expensive per CPU) ≈ ’capacity’ part
Tier 2: (Cross-)departmental clusters at universities and research institutes
Supercomputer ‘Louhi’: Cray
• Cray XT4/XT5
  – a genuine homogeneous massively parallel processor (MPP) system
  – Cray’s proprietary interconnect SeaStar2(+), very good scalability
• At all phases
  – memory 1 GB/core, with 2 GB/core in a subset of cores (in 384 XT4 and 768 XT5 cores)
  – local fast work disk 70 TB (Fibre Channel)
• Operating system
  – login nodes run a complete Linux (Unicos/lc, based on SuSE Linux)
  – compute nodes run the Cray Linux Environment, a light-weight Linux kernel which requires certain minor modifications to the source code of programs
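For illustration only (a generic sketch, not a CSC-provided example): the codes run on the compute nodes are typically MPI programs, of which a minimal example in C is shown below. On a Cray XT system such code would be built with the system’s compiler wrappers and launched through the batch system; those details are assumptions here.

/* Minimal MPI example in C (illustrative sketch only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI cleanly */
    return 0;
}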
Supercomputer ‘Louhi’: Cray
Peak performance 10.5 Tflops -> 86 Tflops
• Phase 1
  – In full operation since April 2, 2007
  – 11 production cabinets, 2.6 GHz dual-core AMD Opteron, 10.5 Tflops peak performance, 2024 computational cores
• Phase 2
  – Installed in two stages in summer and fall 2008, 2.3 GHz quad-core AMD Opteron
  – Upgrading the XT4 dual-core CPUs to quad-core CPUs and adding XT5 cabinets; a single combined system
  – The combined XT4/XT5 configuration has been in general customer use since October 8, 2008: 86.7 Tflops peak performance, 9424 computational cores
Supercluster ‘Murska’: HP
Peak performance 11.3 Tflops
• In full operation since July 12, 2007
• 2176 processor cores: 2.6 GHz dual-core AMD Opteron
• Interconnect: InfiniBand (industry standard, middle-range scalability)
• Local fast work disk: 98 TB (SATA)
• Memory: 8 GB/core in 128 cores (32 GB/node), 4 GB/core in 512 cores, 2 GB/core in 512 cores, 1 GB/core in 1024 cores
• HP XC cluster; the operating system is Linux (based on Red Hat Enterprise Linux Advanced Server 4.0)
• Machine room and hosting provided by HP for at least the first 2 years
On profiling the systems
• HP supercluster ‘Murska’:
  – For a large number of computational customers
  – No porting of codes needed
  – Data-intensive computing
  – Jobs needing more memory
  – ‘Multiserial’ jobs (see the sketch after this list)
• Cray XT4/XT5 supercomputer ‘Louhi’:
  – For a smaller number of research groups
  – For runs/codes which benefit from Cray (in a performance/price sense)
  – Software: only as needed by those groups
  – Customers will be helped in porting their codes to Cray
  – The batch queue system is profiled more towards capability-type use: space is kept for very large parallel jobs => a somewhat lower usage percentage
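As a hedged illustration of the ‘multiserial’ profile (an assumed pattern of many independent serial tasks bundled into one parallel allocation, not a CSC-prescribed recipe), each MPI rank can simply process its own independent work item; process_input() below is a hypothetical placeholder for the real serial work.

/* 'Multiserial' sketch: every MPI rank runs an independent serial task. */
#include <mpi.h>
#include <stdio.h>

static void process_input(int index)
{
    /* stand-in for a serial computation on input number 'index' */
    printf("processing independent task %d\n", index);
}

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    process_input(rank);   /* rank i handles the i-th independent task */

    MPI_Finalize();
    return 0;
}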
CSC’s history of supercomputers
Image: Juha Fagerholm, CSC
(Plot: CSC’s share of the Top500 sum, 1995-2010. [(c) CSC, J. Ignatius])
Layers of Computing Capacity for Science
Tier 0: European top-level computing capacity (under development)
Tier 1: National level: supercomputer system at CSC
Tier 2: (Cross-)departmental clusters at universities and research institutes
Material Sciences National Grid Infrastructure (M-grid)
• A joint project of CSC, 7 Finnish universities and the Helsinki Institute of Physics, funded partially by the Academy of Finland in the National Research Infrastructure Programme
• Has built a homogeneous PC-cluster environment with a theoretical peak of approx. 3 Tflop/s (350 nodes)
• Environment
  – Hardware: dual AMD Opteron 1.8-2.2 GHz nodes with 2-8 GB memory, 1-2 TB shared storage, separate 2xGE networks (communications and NFS), remote administration
  – OS: NPACI Rocks Cluster Distribution, 64-bit, based on Red Hat Enterprise Linux 3
  – Grid middleware: NorduGrid ARC grid middleware compiled with Globus 3.2.1 libraries, Sun Grid Engine as the LRMS