High Performance Computer Simulations

Transcription

High Performance Computer Simulations
Borrajo Juan*, Chimenti Martín*, De Vecchi Hugo⁺, Grioni Mauro#, Rojido Martín⁺, Seres Vincent^
* IMPSA HYDRO, ⁺ IMPSA CIT, # IMPSA WIND, ^ IMPSA IT
IMPSA is…
• The only company with proprietary technology for hydropower and
wind power generation in Latin America.
• The largest wind generator manufacturer in Latin America and one
of the biggest hydropower manufacturers in the region.
• The largest investor in wind farms in Latin America with assets in
Brazil, Uruguay, Argentina and Peru.
• The world’s fastest-growing family business according to the Top 50
Global Challengers report by Ernst & Young.
• A company with a high-profitability track record for more than 106 years.
• A major source of qualified and sustainable work.
• Local and experienced management teams.
• Worldwide exporter in the Hydropower business, and exporter to all of Latin
America in wind power generation.
• A company with access to international capital markets for over 15
years and financed by multilateral credit banks (BID, CAF, CII).
IMPSA - Production centers
• IMPSA Plant I - Mendoza, Argentina
• IMPSA Plant II - Mendoza, Argentina
• IMPSA Hydro Recife - Suape, Brazil
• IMPSA Wind Recife - Suape, Brazil
• IMPSA Malaysia - Malaysia
HISTORICAL DEVELOPMENT
Every 15 years, the speed of the most powerful computers in the world is multiplied
by 1000.
* Article published in the Financial Times, “Battle of Speed Machines”, on Wednesday, July 10, 2013.
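For reference, a factor of 1000 every 15 years is equivalent to doubling roughly every 18 months, since 2^(15 × 12 / 18) = 2^10 = 1024 ≈ 1000, in line with the classic Moore's-law rule of thumb.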
HISTORICAL DEVELOPMENT - IMPSA
• IMPSA hardware has followed a similar pattern.
• Historically, the number of degrees of freedom (DOF) was used as a reference.
• 15 years ago, on an HP 7000, it was possible to run FEM analyses with 100,000
(one hundred thousand) DOF.
• Nowadays, we can analyze more than 100,000,000 (one hundred million) DOF.
• In 15 years, the size of the models created and solved by IMPSA has been
multiplied by 1000.
IMPSA HARDWARE - Workstations

Personal Computer (X1, 26 users)
• Nodes: 1 Dell 7500
• Processors: 2 Xeon® X5647, 2.93 GHz
• Cores: 8 (2 CPUs, 4 cores/CPU)
• RAM: 24 GB
• Storage: 500 GB, 7.2 krpm, SATA 2
• Network: Ethernet 1 Gbps

FEM Calculation Workstation (X2, 1 workstation)
• Nodes: 1 Dell 7500
• Processors: 2 Xeon® X5647, 2.93 GHz
• Cores: 8 (2 CPUs, 4 cores/CPU)
• RAM: 48 GB
• Storage: 1 TB, 7.2 krpm, SATA 2
• Network: Ethernet 1 Gbps

CFD Calculation Cluster (X8, 1 cluster)
• Nodes: 8 Dell 7500
• Processors: 16 Xeon® X5647, 2.93 GHz
• Cores: 64 (16 CPUs, 4 cores/CPU)
• RAM: 18 GB/node = 144 GB
• Storage: 4 TB, 7.2 krpm, SATA 2
• Network: Ethernet 1 Gbps

Years: 2007 (Personal Computer), 2009/2012 (Workstation and Cluster)
IMPSA HARDWARE - Cluster HPC

Cluster Dell HPC (2013, 2.77 Tflops)
• Rack: PowerEdge 4220
• HPC Nodes: 8 vertical blades, PowerEdge M620
• Head Nodes: 2 horizontal blades
• Storage Bay: 30 HDD, SAS 2.0, 10 krpm, 900 GB each = 27 TB
• Nodes: 8 PowerEdge M620
• Processors: 16 Xeon® E5-2680, 2.70 GHz, 331 Gflops
• Cores: 128 (16 CPUs, 8 cores per CPU)
• RAM: 32 GB/CPU = 512 GB
• Storage: 2 x 146 GB, 15 krpm, SAS 2.0
• Network: Infiniband 40 Gbps
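For reference, the quoted 2.77 Tflops matches a theoretical peak estimate, assuming 8 double-precision floating-point operations per clock cycle per core (the AVX capability of this processor generation): 128 cores × 2.70 GHz × 8 flops/cycle = 2,764.8 Gflops ≈ 2.77 Tflops.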
COMPUTER APPLICATIONS AND CHALLENGES
SPEED
ACCURACY
LARGE ASSEMBLIES
FLUID DYNAMICS TRANSIENTS
MULTIPHYSICS SIMULATIONS
SPEED
MECHANISMS AND ASSEMBLIES WITH NONLINEAR CONTACT
Assembly: Head Cover, Wicket Gate, Bottom Ring, Spiral Case
• Simulation with nonlinear contacts (see the iteration sketch below)
• Large displacements
• 2 million DOF
• 12 iterations
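To illustrate why such a contact analysis is solved iteratively, the following is a minimal sketch of a Newton-Raphson loop with a penalty contact on a deliberately tiny, hypothetical one-degree-of-freedom system; it is not taken from the IMPSA models, and only the outer iteration structure is analogous to the 12 iterations quoted above.

    # Minimal sketch: Newton-Raphson iterations with a penalty contact on a toy
    # 1-DOF spring pressed against a rigid wall. All numbers are hypothetical.
    k = 1.0e6      # spring stiffness [N/m]
    f_ext = 2.0e3  # applied load [N]
    gap = 1.0e-3   # initial clearance to the wall [m]
    k_pen = 1.0e8  # contact penalty stiffness [N/m]

    def residual(u):
        # Internal force plus contact force (active only once the gap closes),
        # minus the external load; the contact term makes this nonlinear.
        contact = k_pen * max(u - gap, 0.0)
        return k * u + contact - f_ext

    def tangent(u):
        # Tangent stiffness: it changes when the contact activates.
        return k + (k_pen if u > gap else 0.0)

    u = 0.0
    for it in range(1, 30):
        r = residual(u)
        if abs(r) < 1.0e-8 * f_ext:
            print(f"converged after {it - 1} iterations: u = {u:.6e} m")
            break
        u -= r / tangent(u)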
ACCURACY
ASSESSMENT OF DISPLACEMENTS AND STIFFNESS - HEAD COVER
• In large models, the critical factor is the amount of available RAM (a rough check of this kind is sketched below).
• On certified Clusters, the use of Virtual Memory is not allowed, so the size of the models that can be run is limited.
• On Workstations, because the use of Virtual Memory is allowed, it is possible to run larger models than
those runnable on the Cluster.
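A minimal sketch of the kind of memory check implied by these points; the bytes-per-DOF figure is a purely hypothetical assumption for illustration, since the real requirement depends on the solver, the element types and the factorization fill-in.

    # Minimal sketch: decide whether a FEM solve fits in physical RAM (in-core)
    # or must fall back on virtual memory / out-of-core storage. The
    # bytes-per-DOF value is a hypothetical assumption for illustration only.
    def solve_mode(dof, ram_gb, bytes_per_dof=1.0e3):
        needed_gb = dof * bytes_per_dof / 1.0e9
        if needed_gb <= ram_gb:
            return f"in-core (~{needed_gb:.0f} GB needed, {ram_gb:.0f} GB available)"
        return f"virtual memory / out-of-core (~{needed_gb:.0f} GB needed, {ram_gb:.0f} GB available)"

    # With the figures quoted in these slides, a 114-million-DOF model on a
    # 48 GB workstation would rely on virtual memory under this assumption.
    print(solve_mode(dof=114_000_000, ram_gb=48))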
ACCURACY
Model with 114 million degrees of freedom.
Elapsed time for resolution: 9 hours, 20 minutes (on a Workstation).
LARGE ASSEMBLIES
ASSEMBLY MODEL, EMBEDDED PARTS AND CONCRETE, PORCE III
Interactions between different components can lead to design solutions that would
not be possible in an isolated analysis.
LARGE ASSEMBLIES
INTERACTION BETWEEN ELECTROMECHANICAL EQUIPMENT AND POWERHOUSE
• Linear-elastic analysis
• 10.9 million DOF
• 237 contact pairs
• 46 parts
[Chart: elapsed time and memory for different solver configurations and hardware: default direct solver (SMP, 2 cores) and iterative solver (SMP, 8 and 16 cores), running on 2, 6-8 and 16-32 cores]
Benefits:
• Removal of simplifications and idealizations.
• Increased accuracy.
• Use of more realistic behaviors for the interactions between parts.
• Availability of extra information for design.
Improvement in elapsed time and memory used (a toy direct-versus-iterative comparison is sketched below):
• Large for different solver configurations (91%).
• Low for different hardware (2%).
• Best results running on one node.
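As a minimal sketch of the direct-versus-iterative trade-off mentioned above, the following compares a sparse LU factorization with a conjugate-gradient solve on a small, generic sparse system (a 2D Poisson matrix); it uses scipy as a stand-in for the production FEM solvers and is not an IMPSA model.

    # Minimal sketch: direct (sparse LU) vs. iterative (conjugate gradient)
    # solution of a generic sparse symmetric positive-definite system.
    import time
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # 2D Poisson matrix on an m x m grid: a generic stand-in, not an IMPSA model.
    m = 300
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # n = m*m = 90,000 equations
    b = np.ones(A.shape[0])

    # Direct solver: factorize once, exact answer, but memory grows with fill-in.
    t0 = time.perf_counter()
    x_direct = spla.splu(A).solve(b)
    t_direct = time.perf_counter() - t0

    # Iterative solver (CG): low memory footprint, run time depends on conditioning.
    t0 = time.perf_counter()
    x_iter, info = spla.cg(A, b, maxiter=5000)
    t_iter = time.perf_counter() - t0

    print(f"direct (sparse LU): {t_direct:.2f} s")
    print(f"iterative (CG)    : {t_iter:.2f} s, converged={info == 0}")
    print(f"max |difference|  : {np.abs(x_direct - x_iter).max():.2e}")

The point of the sketch is only the structure of the comparison: memory footprint and scaling behavior, rather than the absolute timings of this toy case, are what differ at the model sizes discussed above.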
FLUID DYNAMICS TRANSIENTS
WIND TURBINE TRANSIENT
Objective: analyze the unsteady flow in the nacelle region due to the effect of
the rotor disturbance on the oncoming wind field.
Statistics:
• N° of elements: 4 million
• N° of iterations: 6,500
• Time on 7 cores: 3 weeks
• Time on 32 cores: 5 days
Benefits:
• Simulation of 3D transients (blades, nacelle and tower).
• Increased number of elements in the areas of interest.
• A turbulence model more representative of reality.
• Optimal control of the wind turbine, in order to optimize the energy production.
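For reference, these timings correspond to nearly linear scaling: about 21 days / 5 days ≈ 4.2 times faster on 32/7 ≈ 4.6 times as many cores, i.e. a parallel efficiency of roughly 90%.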
FLUID DYNAMICS TRANSIENTS
CFD SIMULATION OF PRESSURE FLUCTUATIONS ON HYDRAULIC TURBINES
• By increasing calculation power we can perform complex simulations while
complying with the deadlines defined in the project's design phase.
• Evaluation of the effect of vane installation in the draft tube cone on the
pressure fluctuations, for a rehabilitation project.
FLUID DYNAMICS TRANSIENTS
CALCULATION SPEED INCREASE

Calculation Speed = N° Iterations / (Equations × Time × Cores)

[Chart: calculation speed for the Workstations and the Cluster vs. mesh size, 0 to 14 million nodes]

• In the range of everyday meshes (5-8 million), the Dell HPC Cluster increases
the calculation speed with respect to the Workstations.
• The Dell HPC Cluster allows us to use meshes of up to 35 million elements.
FLUID DYNAMICS TRANSIENTS
NORMALIZED TIME REDUCTION

Normalized Time = Time / (Nodes × Iterations)

[Chart: normalized time (log scale) vs. N° of cores, 0 to 40, for the Cluster and the Workstation]

Parallelization improves run times.
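A minimal sketch of how the two metrics defined above can be evaluated from run data, applied here to the wind-turbine timings reported earlier; treating the 4 million elements as the equation and node count is an assumption made purely for illustration.

    # Minimal sketch: evaluate the two performance metrics defined above for the
    # two wind-turbine runs reported earlier (4 million elements, 6,500 iterations,
    # 3 weeks on 7 cores vs. 5 days on 32 cores). Using the element count as the
    # number of equations and mesh nodes is an assumption for illustration only.
    def calculation_speed(iterations, equations, time_s, cores):
        # Calculation Speed = N° Iterations / (Equations x Time x Cores)
        return iterations / (equations * time_s * cores)

    def normalized_time(time_s, nodes, iterations):
        # Normalized Time = Time / (Nodes x Iterations)
        return time_s / (nodes * iterations)

    runs = {
        "7 cores": dict(cores=7, time_s=3 * 7 * 24 * 3600),
        "32 cores": dict(cores=32, time_s=5 * 24 * 3600),
    }
    for name, r in runs.items():
        cs = calculation_speed(6500, 4.0e6, r["time_s"], r["cores"])
        nt = normalized_time(r["time_s"], 4.0e6, 6500)
        print(f"{name}: calculation speed = {cs:.3e}, normalized time = {nt:.3e} s")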
FLUID DYNAMICS TRANSIENTS
BELO MONTE - ROTOR & STATOR
Statistics:
• N° of elements: 48 million
• N° of iterations: 60
• Time on 7 cores: 5 h 50 min
• Time on 32 cores: 2 h 20 min

[Chart: elapsed time in minutes vs. N° of cores, for HPC cores/nodes and Dell 7500 cores/nodes; power-law fit y = 1026·x^-0.519, R² = 0.9796]

A 70% time decrease could be obtained.
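For reference, the fitted curve reproduces the measured points reasonably well: y(7) = 1026 × 7^-0.519 ≈ 374 min against the measured 350 min, and y(32) ≈ 170 min against the measured 140 min; extrapolating the same fit to the cluster's 128 cores gives roughly 83 min, consistent with a time decrease on the order of 70% or more with respect to the 7-core run.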
FLUID STRUCTURE INTERACTION - ACOUSTIC
• Multiphysics simulations of fluid-structure interactions can be approached in two different ways:
  - modeling the fluid through acoustic elements, or
  - using a bidirectional interaction between CFD and FEM simultaneously (a minimal coupling sketch follows below).
• Multiphysics simulations associated with acoustic phenomena, i.e. waves, are performed entirely within the FEM scope.
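A minimal sketch of the bidirectional coupling idea, using a toy one-degree-of-freedom piston/fluid-column model in place of real CFD and FEM solvers; everything here (stiffnesses, load, relaxation factor) is a hypothetical stand-in chosen only to show the exchange of pressures and displacements.

    # Minimal sketch of bidirectional (partitioned) fluid-structure coupling:
    # the "fluid" and "structure" solvers below are toy one-degree-of-freedom
    # stand-ins, chosen only to show how loads and displacements are exchanged
    # until the coupled solution converges. All numbers are hypothetical.
    k_s = 1.0e5    # structural stiffness [N/m]
    k_f = 2.0e4    # fluid column "stiffness": pressure rise per unit piston motion [Pa/m]
    area = 1.0     # wetted area [m^2]
    f_ext = 1.0e3  # external load on the structure [N]
    omega = 0.7    # under-relaxation factor, as commonly used in partitioned FSI

    def fluid_solver(u):
        # Toy CFD stand-in: pressure generated by the interface displacement.
        return k_f * u

    def structure_solver(p):
        # Toy FEM stand-in: displacement under external load minus fluid pressure load.
        return (f_ext - area * p) / k_s

    u = 0.0
    for it in range(1, 100):
        p = fluid_solver(u)             # pass the displacement to the fluid side
        u_new = structure_solver(p)     # pass the pressure load back to the structure
        if abs(u_new - u) < 1.0e-10:
            print(f"coupled solution after {it} iterations: u = {u:.6e} m, p = {p:.3f} Pa")
            break
        u += omega * (u_new - u)        # relaxed update of the interface displacement

    # Closed-form check of the coupled answer for this linear toy problem:
    print("expected u =", f_ext / (k_s + area * k_f))

In a real partitioned CFD-FEM coupling, the same loop runs inside every time step, with full field solvers in place of the two toy functions.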
FLUID STRUCTURE INTERACTION - ACOUSTIC
NATURAL FREQUENCIES, RUNNER AND SHAFT LINE
FLUID STRUCTURE INTERACTION - ACOUSTIC
STAY VANES NORMAL MODES
Mesh statistics:
• Number of nodes: 1,168,971 total (582,401 structure; 800,770 fluid)
• Number of elements: 829,350 total (322,077 structure; 507,273 fluid)
• Number of equations: 2,535,484
• Memory: maximum 10 GB out-of-core; 50 GB total used in-core
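For comparison, the roughly 50 GB needed to run this model in-core exceeds the 48 GB of the FEM workstation described earlier, which is exactly the situation in which the 10 GB out-of-core mode (or virtual memory) becomes relevant.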
FLUID STRUCTURE INTERACTION - ACOUSTIC
Harmonic Response
EVALUATION OF DYNAMIC RESPONSE AND COUPLINGS BETWEEN HYDRAULIC
DUCTS AND STRUCTURAL PARTS
[Figure: model of the hydraulic ducts and structural parts, approximately 200 m]
Vibrations caused by pressure fluctuations in the draft tube can be predicted.
CONCLUSIONS
• The HPC Cluster has increased the CFD calculation capability. Bigger and more
complex models can now be used in the design stages (complete machines, multi-phase
simulations, etc.).
• Optimal hardware configurations for CFD are not the same as for FEM.
• CFD computations take advantage of the number of cores in a different way than FEM.
• Run time grows exponentially when the FEM resolution needs to use virtual memory.
• In FEM models, run times can be greatly optimized by using the correct settings for
the available hardware resources, the size of the model and the type of analysis.
• Optimization of the elapsed time is closely related to the solver configuration. Large
improvements can be achieved even when a local workstation with 2 cores is used. In
some cases the best results are obtained by using only one node with the maximum
number of cores.
• The licensing strategy, "HPC Pack" or "Workgroup", must be carefully designed, depending
on the number of users and the available hardware resources.
FUTURE CHALLENGES
• Run benchmarks on GPU platforms.
• Assess the impact on performance of swapping on an "SSD" instead
of an "HDD".
• Set up a specific workstation for FEM simulations, with a large amount
of RAM and as many cores as possible.
• Full fluid-structure interaction, CFD & FEM.
107 years… innovating (La Tasajera, HP Tocoma)
107 years… providing total solutions (Bom Jardin, Agua Doce)
www.impsa.com