CUDA Massively Parallel Programming


Slide 1: Outline
- From GPGPU to CUDA
- CUDA parallel program organization
- The parallel execution model
- CUDA basics
- CUDA memory
- CUDA development tools
- Program optimization

Slide 2: Graphics Processing Unit (GPU)
- A display device specialized for graphics, used in personal computers, workstations, and game consoles
- Discrete graphics cards
  • nVidia and ATI (now AMD) are the main manufacturers; Intel intends to enter this market with Larrabee
- Motherboard-integrated graphics
  • Intel

Slide 3: The 3D Graphics Pipeline
[Figure: the CPU feeds a GPU pipeline of Vertex Processor, Rasterizer, Fragment Processor, and Framebuffer, with texture memory attached]

Slide 4: Real-Time 3D Graphics Processing
A typical frame contains 1M triangles, 3M vertices, and 25M fragments; at 30 frames/s, that amounts to 30M triangles/s, 90M vertices/s, and 750M fragments/s.

Slide 5: Traditional GPU Architecture
Graphics program → Vertex processors → Fragment processors → Pixel operations → Output image

Slide 6: The GPU's Computational Power
[Figure: memory bandwidth (GB/s), 2003-2007, for GPUs (NV30, NV40, G71, G80, G80 Ultra) versus CPUs (Northwood, Prescott EE, Woodcrest, Harpertown)]
- Data-level parallelism: uniform computation
- Dedicated memory channels
- Memory latency is hidden effectively

Slide 7: General Purpose Computing on GPU (GPGPU)

Slide 8: The Core Idea of GPGPU
- Describe a general-purpose computation in a graphics language, mapping the data onto vertex or fragment processors
- But:
  • hardware resources are used inefficiently
  • memory access patterns are severely restricted
  • debugging and error checking are difficult
  • it demands advanced graphics and programming skills

Slide 9: NVidia G200 Architecture
[Figure: ten TPCs, each containing three SMs; each SM holds Streaming Processors (SPs), Special Function Units (SFUs), and a Double Precision FPU]

Slide 10: CUDA: Compute Unified Device Architecture
- A general-purpose parallel computing model
- Single-instruction, multiple-data (SIMD) execution
  • all threads execute the same code (1000s of threads on the fly)
  • massive parallel compute resources operate on different data
- Hides memory latency
  • raises the computation/communication ratio
  • coalesces accesses to adjacent memory addresses
  • switches threads fast: 1 cycle on the GPU vs. ~1000 cycles on a CPU

Slide 11: The Hybrid Computing Model
- CUDA: an integrated CPU + GPU C application
- CPU: executes the sequential code
- GPU = massively data-parallel coprocessor
  • executes large numbers of fine-grained threads "wholesale"
[Figure: CPU serial code alternates with GPU parallel code (kernel 0, kernel 1, ...); CPU and GPU can execute concurrently]

Slide 12: CUDA Success Stories

Slide 13: CUDA Performance
- BLAS3: 127 GFLOPS (basic linear algebra: matrix-matrix)
- FFT: 52 benchFFT* GFLOPS
- FDTD: 1.2 Gcells/sec (computational electrodynamics)
- SSEARCH: 5.2 Gcells/sec (Smith-Waterman gene-sequence comparison)
- Black-Scholes: 4.7 GOptions/sec (option pricing)
- VMD: 290 GFLOPS (molecular dynamics visualization)

Slide 14: Problem Instances for Sparse Matrix-Vector Product (SMVP)

Problem instance | #rows   | #columns | #non-zeros | Avg. non-zeros/row | Description
Lin              | 256000  | 256000   | 1766400    | 6.9 | Large sparse eigenvalue problem
t2em             | 921632  | 921632   | 4590832    | 5.0 | Electromagnetic problem
ecology1         | 1000000 | 1000000  | 4996000    | 5.0 | Circuit theory applied to animal/gene flow
cont11           | 1468599 | 1961394  | 5382999    | 3.7 | Linear programming
sls              | 1748122 | 62729    | 6804304    | 3.9 | Large least-squares problem
G3_circuit       | 1585478 | 1585478  | 7660826    | 4.8 | AMD circuit simulation
thermal2         | 1228045 | 1228045  | 8580313    | 7.0 | FEM, steady-state thermal problem
kkt_power        | 2063494 | 2063494  | 12771361   | 6.2 | Optimal power flow, nonlinear optimization
Freescale1       | 3428755 | 3428755  | 17052626   | 5.0 | Freescale circuit simulation

Slide 15: SpMV Throughput on GTX280

Problem instance | CPU (GFLOPS) | GPU (GFLOPS) | Speed-up
Lin              | 0.26         | 9.23         | 36.04
t2em             | 0.29         | 12.41        | 43.44
ecology1         | 0.24         | 9.03         | 37.43
cont11           | 0.31         | 10.66        | 33.84
sls              | 0.28         | 10.10        | 36.49
G3_circuit       | 0.21         | 8.86         | 41.45
thermal2         | 0.21         | 8.97         | 41.89
kkt_power        | 0.26         | 5.70         | 22.01
Freescale1       | 0.29         | 11.56        | 40.37
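The deck reports SpMV throughput but never shows a kernel. For orientation only, here is a minimal sketch of an SpMV kernel for a matrix stored in CSR format, with one thread per row; the array names (row_ptr, col_idx, vals) are illustrative assumptions, not taken from the measured implementation.

    // Sketch: y = A*x for A in CSR format, one thread per matrix row.
    // Not the deck's measured kernel; names and layout are assumptions.
    __global__ void spmv_csr(const int   *row_ptr,  // n_rows+1 row start offsets
                             const int   *col_idx,  // column of each non-zero
                             const float *vals,     // value of each non-zero
                             const float *x,        // dense input vector
                             float       *y,        // dense output vector
                             int          n_rows)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n_rows) {
            float sum = 0.0f;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                sum += vals[j] * x[col_idx[j]];
            y[row] = sum;
        }
    }

One thread per row is the simplest mapping; these instances average only 4-7 non-zeros per row, so per-thread work is small and memory traffic rather than arithmetic dominates, which helps explain why the GFLOPS figures above sit far below the GPU's peak.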
Slide 16: SMVP Application: Static Timing Analysis
Adapted from Ramalingam, A., et al., "An Accurate Sparse Matrix Based Framework for Statistical Static Timing Analysis," ICCAD 2006.

Slide 17: Static Timing Analysis Results on GTX280

Instance  | CPU (paths/s) | GPU (paths/s) | Speed-up
b18_50K   | 1.95E+06      | 1.02E+08      | 52.33
b18_100K  | 1.92E+06      | 9.57E+07      | 49.82
b19_50K   | 2.46E+06      | 1.10E+08      | 44.64
b19_100K  | 2.42E+06      | 1.14E+08      | 47.00

Slide 18: Outline (next section: CUDA parallel program organization)

Slide 19: Dimensions of Parallelism
- 1D: y = a + b, where y, a, b are vectors
- 2D: P = M × N, where P, M, N are matrices
- 3D: CT or MRI imaging
[Figure: element-wise vector addition, y[i] = a[i] + b[i] for i = 0 … n]

Slide 20: Organization of Parallel Threads
- Thread: the basic unit of parallelism
- Thread block: a group of cooperating threads, the Cooperative Thread Array (CTA)
  • may synchronize with one another
  • exchange data through fast shared memory
  • organized in 1, 2, or 3 dimensions
  • contains at most 512 threads
- Grid: a set of thread blocks
  • organized in 1 or 2 dimensions
  • blocks share global memory
- Kernel: the core program executed on the GPU
  • one kernel ↔ one grid
[Figure: the host launches Kernel 1 into Grid 1 and Kernel 2 into Grid 2 on the device; each grid is a 2D array of blocks, Block(0,0) … Block(2,1), and each block a 2D array of threads, Thread(0,0) … Thread(4,2)]

Slide 21: Parallel Program Organization in CUDA
[Figure: software-to-hardware mapping: a thread runs on an SP, a thread block on an SM, and a grid on the whole GPU (TPCs of SMs)]

Slide 22: Launching Parallel Threads
- Calling a kernel function requires an execution configuration
- Threads and blocks have IDs
  • threadIdx: 1D, 2D, or 3D
  • blockIdx: 1D or 2D
  • together these determine which data a thread processes

    __global__ void kernel(...);
    dim3 DimGrid(3, 2);       // 6 thread blocks
    dim3 DimBlock(16, 16);    // 256 threads per block
    kernel<<<DimGrid, DimBlock>>>(...);

Slide 23: Example 1: Element-Wise Addition

    // CPU program: sum of two vectors a and b
    void add_cpu(float *a, float *b, int N)
    {
        for (int idx = 0; idx < N; idx++)
            a[idx] += b[idx];
    }

    int main()
    {
        ...
        add_cpu(a, b, N);
    }

    // CUDA program: sum of two vectors a and b
    __global__ void add_gpu(float *a, float *b, int N)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < N)
            a[idx] += b[idx];
    }

    int main()
    {
        ...
        dim3 dimBlock(256);
        dim3 dimGrid(ceil(N / 256.0f));   // enough blocks to cover all N elements
        add_gpu<<<dimGrid, dimBlock>>>(a, b, N);
    }

Slide 24: Outline (next section: the parallel execution model)

Slide 25: CUDA Processing Flow
[Figure: CUDA processing flow]

Slide 26: Parallel Thread Execution
- Within an SM, threads execute in parallel in units of warps (a warp = 32 threads)
- All threads of a warp execute the same instruction
- The half-warp is the basic unit of memory operations
[Figure: blocks 0, 1, and 2 each partitioned into warps]

Slide 27: GPU Load Distribution
- The global block scheduler manages thread-block-level parallelism
- It obtains the thread organization from the CPU
- It assigns thread blocks to SMs according to the hardware configuration

Slide 28: Streaming Multiprocessor (SM)
[Figure: SM block diagram]

Slide 29: The Streaming Multiprocessor Executes Thread Blocks
- Threads are assigned to an SM at block granularity
  • depending on resource demands, an SM holds up to 8 blocks
  • an SM in the G80 can accept 768 threads: 256 threads/block × 3 blocks, or 128 threads/block × 6 blocks, etc.
- Threads run concurrently
  • the SM assigns and maintains thread IDs
  • the SM manages and schedules the threads
[Figure: two SMs (SM0, SM1), each with SPs, shared memory, a multithreaded instruction unit (MT IU), and SFUs, sharing a texture fetch unit (TF) and texture L1 cache]

Slide 30: Thread Life Cycle
- The grid is launched on the GPU
- Thread blocks are distributed to the SMs in order (each SM should normally receive > 1 thread block)
- The SM organizes its threads into warps
- The SM schedules and executes warps as they become ready
- When warps and thread blocks finish execution, …
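Slides 26 and 30 describe warp-granularity execution only in prose. As an illustration (not from the deck; only the warp size of 32 comes from slide 26), the mapping from thread indices to warps can be written out directly:

    // Illustrative sketch of the thread -> warp mapping on slides 26 and 30.
    // Consecutive groups of 32 threads within a block form one warp;
    // all 32 lanes of a warp issue the same instruction together.
    __global__ void show_warp_mapping(int *warp_of, int *lane_of)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        warp_of[tid] = threadIdx.x / 32;  // which warp of its block this thread is in
        lane_of[tid] = threadIdx.x % 32;  // its position (lane) within that warp
        // A branch on the lane (e.g. if (threadIdx.x % 32 < 16) ...) would
        // diverge: the warp executes both paths serially with lanes masked off.
    }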

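The element-wise addition on slide 23 elides the host-side steps of the processing flow on slide 25 (allocate device memory, copy inputs in, launch, copy results out). A self-contained sketch under assumed parameters (problem size N = 1<<20 and block size 256 are arbitrary choices; error checking omitted) might look like:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Same kernel as slide 23: element-wise a[idx] += b[idx]
    __global__ void add_gpu(float *a, float *b, int N)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < N)
            a[idx] += b[idx];
    }

    int main(void)
    {
        const int N = 1 << 20;            // assumed problem size
        size_t bytes = N * sizeof(float);

        // Host buffers
        float *a = (float *)malloc(bytes);
        float *b = (float *)malloc(bytes);
        for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        // Slide 25's flow: copy inputs to the device ...
        float *d_a, *d_b;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

        // ... launch enough 256-thread blocks to cover N elements ...
        dim3 dimBlock(256);
        dim3 dimGrid((N + 255) / 256);    // integer ceil(N/256)
        add_gpu<<<dimGrid, dimBlock>>>(d_a, d_b, N);

        // ... and copy the result back to the host.
        cudaMemcpy(a, d_a, bytes, cudaMemcpyDeviceToHost);
        printf("a[0] = %f\n", a[0]);      // expect 3.0

        cudaFree(d_a); cudaFree(d_b);
        free(a); free(b);
        return 0;
    }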