Friday, March 15, 2024

Techniques for training large neural networks

Pipeline parallelism splits a model “vertically” by layer. It’s also possible to “horizontally” split certain operations within a layer, which is usually called Tensor Parallel training. For many modern models (such as the Transformer), the computation bottleneck is multiplying an activation batch matrix with a large weight matrix. Matrix multiplication can be thought of as dot products between pairs of rows and columns; it’s possible to compute independent dot products on different GPUs, or to compute parts of each dot product on different GPUs and sum up the results. With either strategy, we can slice the weight matrix into even-sized “shards”, host each shard on a different GPU, and use that shard to compute the relevant part of the overall matrix product before later communicating to combine the results.
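
To make the two sharding strategies concrete, here is a minimal NumPy sketch that simulates each GPU’s shard on CPU; the shapes, variable names, and shard count are illustrative and not taken from any particular framework:

```python
# Minimal CPU simulation of the two weight-sharding strategies described above.
# Each "GPU" is represented by one shard of the weight matrix W.
import numpy as np

n_shards = 4
batch, d_in, d_out = 8, 16, 32
rng = np.random.default_rng(0)
X = rng.standard_normal((batch, d_in))   # activation batch
W = rng.standard_normal((d_in, d_out))   # large weight matrix

# Strategy 1: split W by columns -> each shard computes independent dot
# products; combining the results is a concatenation along the output dim.
col_shards = np.split(W, n_shards, axis=1)
Y_col = np.concatenate([X @ shard for shard in col_shards], axis=1)

# Strategy 2: split W by rows (and X by columns to match) -> each shard
# computes a partial result for every output; combining is a sum
# (an all-reduce, in a real multi-GPU setting).
row_shards = np.split(W, n_shards, axis=0)
X_shards = np.split(X, n_shards, axis=1)
Y_row = sum(x_s @ w_s for x_s, w_s in zip(X_shards, row_shards))

# Both strategies reproduce the unsharded matrix product.
assert np.allclose(Y_col, X @ W) and np.allclose(Y_row, X @ W)
```
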

One example is Megatron-LM, which parallelizes matrix multiplications within the Transformer’s self-attention and MLP layers. PTD-P uses tensor, data, and pipeline parallelism; its pipeline schedule assigns multiple non-consecutive layers to each device, reducing bubble overhead at the cost of more network communication.
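
As a rough illustration of the column-then-row split that Megatron-LM applies to the Transformer MLP block, here is another CPU-only NumPy sketch; the sizes and names are made up, and the real implementation’s kernels and communication details differ:

```python
# Rough simulation of a tensor-parallel MLP: the first weight matrix is split
# by columns so the nonlinearity can be applied locally on each shard, the
# second by rows so a single sum (all-reduce) recovers the full output.
import numpy as np

n_shards = 4
batch, d_model, d_hidden = 8, 16, 64
rng = np.random.default_rng(1)
X = rng.standard_normal((batch, d_model))
W1 = rng.standard_normal((d_model, d_hidden))   # first MLP weight
W2 = rng.standard_normal((d_hidden, d_model))   # second MLP weight

def gelu(x):
    # tanh approximation of GELU; elementwise, so it shards cleanly
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

W1_shards = np.split(W1, n_shards, axis=1)  # column-parallel first layer
W2_shards = np.split(W2, n_shards, axis=0)  # row-parallel second layer

# Each "GPU" computes its slice of the hidden layer, applies GELU locally,
# and produces a partial output; one sum combines the partial outputs.
partials = [gelu(X @ w1) @ w2 for w1, w2 in zip(W1_shards, W2_shards)]
Y = sum(partials)

assert np.allclose(Y, gelu(X @ W1) @ W2)
```

Pairing a column split with a row split this way keeps the nonlinearity entirely local to each shard, so the whole MLP needs only one combining communication step instead of one per matrix multiply.
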

Sometimes the input to the network can be parallelized across a dimension with a high degree of parallel computation relative to cross-communication. Sequence parallelism is one such idea, where an input sequence is split across time into multiple sub-examples, proportionally reducing peak memory consumption by allowing the computation to proceed with more granularly-sized examples.
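
Below is a minimal sketch of this time-splitting idea, assuming a purely position-wise layer (one that treats each timestep independently) so the chunks can be processed separately; the layer, names, and sizes are hypothetical placeholders:

```python
# Sketch of splitting one input sequence across time into sub-examples.
# With a position-wise layer, processing smaller time chunks gives identical
# results while the intermediate activations are chunk-sized rather than
# full-sequence-sized.
import numpy as np

seq_len, d_model, n_chunks = 1024, 64, 8
rng = np.random.default_rng(2)
tokens = rng.standard_normal((seq_len, d_model))   # one input sequence
W = rng.standard_normal((d_model, d_model))

def positionwise_layer(x):
    # acts on each timestep independently, so it commutes with time-splitting
    return np.maximum(x @ W, 0.0)

# Split the sequence across time and process the sub-examples one at a time
# (or one per device); each step only touches seq_len / n_chunks timesteps.
chunks = np.split(tokens, n_chunks, axis=0)
out = np.concatenate([positionwise_layer(c) for c in chunks], axis=0)

assert np.allclose(out, positionwise_layer(tokens))
```
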


