Transcript: Slide 1
ANR Meeting / PetaQCD LAL / Paris-Sud University, May 10-11, 2010
Key Computation Issues
- Large volume of data (disk / memory / network)
- Significant number of solver iterations due to numerical intractability
- Redundant memory accesses arising from interleaved data dependencies
- Use of double precision because of accuracy needs (hardware penalty)
- Misaligned data (inherent to specific data structures):
  - Exacerbates cache misses (depending on cache size)
  - Becomes a serious problem when considering accelerators
  - Leads to "false sharing" with shared-memory paradigms (POSIX threads, OpenMP); see the padding sketch after this list
  - Padding is one solution but would dramatically increase memory requirements
- Memory/computation compromise in data organization (e.g. gauge replication)
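The false-sharing/padding trade-off is easiest to see in code. Below is a minimal sketch, assuming a 128-byte cache line (the CELL's line size); the names are illustrative, not taken from tmLQCD.

```c
/* Minimal false-sharing sketch, assuming a 128-byte cache line.
 * Without padding, the per-thread accumulators below share one line,
 * so every update by one thread invalidates the line for the others. */
#include <stddef.h>

#define CACHE_LINE 128
#define NTHREADS   8

/* Unpadded: 8 doubles = 64 bytes, all within a single cache line. */
double acc_unpadded[NTHREADS];

/* Padded: one accumulator per line; no false sharing, but 16x the
 * memory, which is exactly the cost the slide warns about. */
struct padded_acc {
    double value;
    char   pad[CACHE_LINE - sizeof(double)];
};
struct padded_acc acc_padded[NTHREADS];
```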
Why the CELL Processor?
- Highest computing power in a single "computing node"
- Fast memory access
- Asynchronism between data transfers and computation (see the double-buffering sketch below)
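The asynchronism point refers to overlapping MFC DMA transfers with SPU computation. A minimal double-buffering skeleton is sketched below; the chunk size, tag usage, and `compute()` routine are placeholders, not the project's actual code.

```c
/* Double-buffering sketch (SPU side): while the SPU computes on one
 * buffer, the MFC fills the other. CHUNK and compute() are
 * illustrative placeholders. */
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 4096   /* bytes per transfer; multiple of 16, <= 16 KB */

static volatile char buf[2][CHUNK] __attribute__((aligned(128)));

extern void compute(volatile char *data);   /* placeholder kernel */

void stream(uint64_t ea, int nchunks)
{
    int cur = 0;
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);        /* prefetch chunk 0 */
    for (int i = 0; i < nchunks; i++) {
        int nxt = cur ^ 1;
        if (i + 1 < nchunks)                        /* start next DMA   */
            mfc_get(buf[nxt], ea + (uint64_t)(i + 1) * CHUNK,
                    CHUNK, nxt, 0, 0);
        mfc_write_tag_mask(1 << cur);               /* wait for current */
        mfc_read_tag_status_all();
        compute(buf[cur]);                          /* overlaps next DMA */
        cur = nxt;
    }
}
```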
Issues with the CELL Processor?
- Data alignment (both for calculations and transfers); see the note below
- Heavy use of list DMA
- Small size of the Local Store (the SPU local memory)
- Resource sharing on the Dual Cell Based Blade
- Integration into an existing standard framework
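On the first item: in the CBE programming model (an assumption stated here, not a detail from the talk), MFC DMA transfers of 16 bytes and up require 16-byte alignment of both the local-store and effective addresses, and 128-byte alignment gives peak bandwidth; hence the data re-alignment work mentioned on the next slide. An illustrative declaration:

```c
/* Alignment sketch: local-store buffers must be at least 16-byte
 * aligned for DMA; 128-byte alignment (a full cache line / optimal
 * DMA granularity) is the usual choice. Sizes are illustrative. */
#define SPINOR_FLOATS 24   /* 4 spinor components x 3 colors x (re,im) */

/* DMA-friendly: 128-byte aligned, size a multiple of 16 bytes */
static float spinors_ls[64 * SPINOR_FLOATS] __attribute__((aligned(128)));
```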
What we have done
- Implementation of each critical kernel on the CELL processor:
  - SIMD version of the basic operators (see the fused-operator sketch below)
  - Appropriate DMA mechanism (efficient list DMA and double buffering)
  - Merging of consecutive operations into a single operator (latency & memory reuse)
- Aggregation of all these implementations into a single, standalone library:
  - A single SPU thread holds the whole set of routines
  - The SPU thread remains "permanently" active during a working session
- Effective integration into the tmLQCD package:
  - Data re-alignment
  - Routine-call replacement (invoke the CELL versions in place of the native ones)
  - This should be the way to commit the work back to tmLQCD (an external library plus an "IsCELL" switch)
- Successful tests (QS20 and QS22)
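To illustrate both the SIMD rewriting and the operator merging, here is a hedged sketch of a fused axpy on the SPU: one pass over memory with fused multiply-adds instead of a scale pass followed by an add pass. The function name and types are illustrative.

```c
/* Fused "axpy" sketch with SPU intrinsics: y <- a*x + y in a single
 * pass. Merging the multiply and the add into spu_madd saves a full
 * read/write sweep over y compared with two separate basic operators. */
#include <spu_intrinsics.h>

void axpy_spu(float a, const vector float *x, vector float *y, int n)
{
    vector float va = spu_splats(a);        /* broadcast scalar a */
    for (int i = 0; i < n; i++)             /* n 4-float vectors  */
        y[i] = spu_madd(va, x[i], y[i]);    /* fused multiply-add */
}
```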
Global Organization
- Task partitioning, distribution, and synchronization are done by the PPU
- Each SPE operates on its portion of the data in a typical loop of the form (DMA get + SIMD computation + DMA put)
- The SPE, always active, switches to the appropriate operation on each request (see the dispatch sketch below)
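The persistent-SPE design can be sketched as a mailbox-driven dispatch loop; the command ids and kernel stubs below are invented for illustration, and the library's actual protocol may differ.

```c
/* SPU-side dispatch sketch: the thread stays alive and blocks on its
 * inbound mailbox; the PPU writes a command id to select the kernel. */
#include <spu_mfcio.h>

enum { CMD_EXIT = 0, CMD_WILSON_DIRAC, CMD_AXPY };   /* illustrative */

int main(void)
{
    for (;;) {
        uint32_t cmd = spu_read_in_mbox();   /* blocks until PPU writes */
        if (cmd == CMD_EXIT)
            break;
        switch (cmd) {
        case CMD_WILSON_DIRAC:
            /* DMA get + SIMD computation + DMA put */
            break;
        case CMD_AXPY:
            /* ... other kernels from the library ... */
            break;
        }
        spu_write_out_mbox(cmd);             /* signal completion */
    }
    return 0;
}
```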
Optimal list DMA organization for the Wilson-Dirac Operator
Computing the Wilson-Dirac action for a set of K contiguous spinors requires fetching 8K neighbor spinors (example below with a 32x16^3 lattice and even-odd preconditioning):

  S[0]  P[2048]  P[63488]  P[128]  P[1920]  P[8]   P[120]  P[0]  P[7]
  S[1]  P[2049]  P[63489]  P[129]  P[1921]  P[9]   P[121]  P[1]  P[0]
  S[2]  P[2050]  P[63490]  P[130]  P[1922]  P[10]  P[122]  P[2]  P[1]
  S[3]  P[2051]  P[63491]  P[131]  P[1923]  P[11]  P[123]  P[3]  P[2]

- A direct list DMA to fetch this "spinors matrix" involves 8x4 DMA items
- A list DMA to fetch its "transpose" involves 7 + 1 + 1 = 9 DMA items
- In general, our DMA list has 8 + cK entries instead of 8K (bin packing); see the gather sketch below
- No impact on SPU performance, thanks to the uniform access time of the Local Store
- Significant improvement in global performance and scalability
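A hedged sketch of the run-based gather, assuming 192-byte double-precision spinors and an illustrative run descriptor; the real code would build the runs from the even-odd neighbor structure shown above.

```c
/* List-DMA gather sketch: one mfc_list_element_t per contiguous run
 * of neighbor spinors, so the list holds ~8 + cK entries rather than
 * 8K. Spinor size and the run descriptor are illustrative. */
#include <spu_mfcio.h>
#include <stdint.h>

#define SPINOR_BYTES 192            /* 12 complex doubles */
#define MAX_RUNS     16

static mfc_list_element_t dma_list[MAX_RUNS] __attribute__((aligned(8)));
static char lsbuf[32 * SPINOR_BYTES] __attribute__((aligned(128)));

/* runs[i][0] = first spinor index of run i, runs[i][1] = run length */
void gather_neighbors(uint64_t field_ea, const unsigned (*runs)[2],
                      int nruns, unsigned tag)
{
    for (int i = 0; i < nruns; i++) {
        dma_list[i].notify = 0;
        dma_list[i].size   = runs[i][1] * SPINOR_BYTES;
        dma_list[i].eal    = (uint32_t)field_ea
                           + runs[i][0] * SPINOR_BYTES;
    }
    mfc_getl(lsbuf, field_ea, dma_list,
             nruns * sizeof(mfc_list_element_t), tag, 0, 0);
    mfc_write_tag_mask(1 << tag);   /* wait for the whole list */
    mfc_read_tag_status_all();
}
```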
Performance results
We consider a 32x16^3 lattice and the CELL-accelerated version of tmLQCD.

QS20:

  #SPE     1      2      3      4      5      6      7      8
  Time(s)  0.109  0.054  0.036  0.027  0.022  0.018  0.015  0.013
  Speedup  1.00   2.00   3.00   3.99   4.98   5.96   6.93   7.88
  GFlops   0.95   1.92   2.89   3.85   4.73   5.78   6.94   8.01

QS22:

  #SPE     1       2       3       4       5       6       7       8
  Time(s)  0.0374  0.0195  0.0134  0.0105  0.0090  0.0081  0.0076  0.0075
  Speedup  1.00    1.91    2.79    3.56    4.15    4.61    4.92    4.99
  GFlops   2.76    5.31    7.76    9.90    11.56   12.84   13.88   14.02

INTEL i7 quad-core, 2.83 GHz (time in seconds):

             Without SSE  With SSE
  1 core     0.0820       0.040
  4 cores    0.0370       0.0280

Solver comparison:

                           GCR (57 iters)  CG (685 iters)
  INTEL i7 (SSE, 4 cores)  11.05 s         89.54 s
  CELL QS20 (8 SPEs)        3.78 s         42.25 s
  CELL QS22 (8 SPEs)        2.04 s         22.78 s
Comments
- We observed a factor of 2 between the QS20 and the QS22
- We observed a factor of 4 between the QS22 and the Intel i7 quad-core at 2.83 GHz
- Good scalability on the QS20
- Scalability on the QS22 degrades beyond 4 SPEs (probably a binding issue on the Dual Cell Based Blade, which should be easy to fix)
- Fixing this scalability issue on the QS22 would double the current performance
Ways for improvement
- Implement the "non gauge-copy" version (significant memory reduction / packing)
- Explore the SU(3) reconstruction approach at the SPE level (memory and bandwidth savings); see the sketch after this list
- Have the PPU participate in the calculations (makes sense in double precision)
- Try to scale up to the 16 SPEs of the QS22 Dual Cell Based Blade
- Experiment with a cluster of CELL processors
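For the SU(3) reconstruction item, the standard trick is to transfer only the first two rows of each link matrix and rebuild the third on the SPE as the complex-conjugate cross product of the first two. A scalar double-precision sketch follows; an SPE version would be SIMDized.

```c
/* SU(3) reconstruction sketch: for U in SU(3), row 2 equals the
 * complex conjugate of the cross product of rows 0 and 1, so only
 * 6 of the 9 complex entries need to be stored and transferred. */
#include <complex.h>

void su3_reconstruct_row2(double complex u[3][3])
{
    u[2][0] = conj(u[0][1] * u[1][2] - u[0][2] * u[1][1]);
    u[2][1] = conj(u[0][2] * u[1][0] - u[0][0] * u[1][2]);
    u[2][2] = conj(u[0][0] * u[1][1] - u[0][1] * u[1][0]);
}
```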
END
Two accepted conference/workshop publications:
- International Conference on Supercomputing
- International Workshop on Highly Efficient Accelerators and Reconfigurable Technologies