Group Convolution Prototype and Function List¶
Description¶
This kernel implements a grouped 2D convolution, which applies a general 2D convolution operation to separate subsets (groups) of the input. In a grouped convolution with \(M\) groups, the input and kernel are split along their channel dimensions to form \(M\) distinct groups. Each group performs a convolution independently of the other groups, producing \(M\) separate outputs. These individual outputs are then concatenated together to give the final output.
For example, in an HWCN data layout, if the in feature map is \((Hi, Wi, Ci)\) and the weights tensor is \((Hk, Wk, Cw, Co)\), the output feature map is an \((Ho, Wo, Co)\) tensor, where \(Ci\) is equal to \(Cw * M\) and \(Co\) is a multiple of \(M\). The spatial dimensions \(H*, W*\) comply with the system of equations (1).
Depthwise convolution (see Depthwise Convolution Prototype and Function List) is an extreme case of group convolution where the number of groups \(M\) is equal to the number of input channels \(Ci\) and there is a single filter per group. The TensorFlow-like “channel multiplier” functionality of depthwise convolution can be expressed as a group convolution with the number of groups equal to the number of input channels \(Ci\) and the channel multiplier as the number of filters per group.
Note
For more details on group convolutions, see ImageNet classification with deep convolutional neural networks and Aggregated Residual Transformations for Deep Neural Networks.
Optionally, a saturating ReLU activation function can be applied to the result of the convolution during the function’s execution. For more information on supported ReLU types and calculations, see ReLU Prototype and Function List.
This is a MAC-based kernel, which implies accumulation. See Quantization: Influence of Accumulator Bit Depth for more information on related quantization aspects. The number of accumulation series is equal to \((Hk * Wk * Cw)\).
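To make the grouping and the accumulation series concrete, below is a minimal floating-point reference sketch of the operation (not part of the MLI library): HWC input/output, HWCN weights, unit stride, no padding, no dilation, and no quantization. Function and variable names are illustrative only.

/* Illustrative float reference for grouped 2D convolution (HWC in/out,
 * HWCN weights, stride 1, no padding, no dilation). This sketches the math
 * only and is NOT the MLI fixed-point kernel implementation. */
void group_conv2d_ref(const float *in,      /* [Hi][Wi][Ci]     */
                      const float *weights, /* [Hk][Wk][Cw][Co] */
                      const float *bias,    /* [Co]             */
                      float *out,           /* [Ho][Wo][Co]     */
                      int Hi, int Wi, int Ci,
                      int Hk, int Wk, int Cw, int Co)
{
    const int M  = Ci / Cw;     /* number of groups                    */
    const int Fg = Co / M;      /* filters (output channels) per group */
    const int Ho = Hi - Hk + 1; /* "valid" output height               */
    const int Wo = Wi - Wk + 1; /* "valid" output width                */

    for (int ho = 0; ho < Ho; ++ho)
    for (int wo = 0; wo < Wo; ++wo)
    for (int co = 0; co < Co; ++co) {
        const int g   = co / Fg;  /* group this filter belongs to   */
        const int ci0 = g * Cw;   /* first input channel of group g */
        float acc = bias[co];
        /* The accumulation series here is Hk * Wk * Cw MAC operations
         * (see the note above). */
        for (int hk = 0; hk < Hk; ++hk)
        for (int wk = 0; wk < Wk; ++wk)
        for (int cw = 0; cw < Cw; ++cw) {
            const float x = in[((ho + hk) * Wi + (wo + wk)) * Ci + ci0 + cw];
            const float w = weights[((hk * Wk + wk) * Cw + cw) * Co + co];
            acc += x * w;
        }
        out[(ho * Wo + wo) * Co + co] = acc;
    }
}

Padding, stride, dilation, quantization, and the optional fused ReLU of the real kernel are omitted here for brevity.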
Functions¶
Kernels which implement a group convolution have the following prototype:
mli_status mli_krn_group_conv2d_hwcn_<data_format>(
    const mli_tensor *in,
    const mli_tensor *weights,
    const mli_tensor *bias,
    const mli_conv2d_cfg *cfg,
    mli_tensor *out);
where data_format is one of the data formats listed in Table MLI Data Formats and the function parameters are described in the following table:
Parameter | Type | Description
---|---|---
in | const mli_tensor * | [IN] Pointer to constant input tensor.
weights | const mli_tensor * | [IN] Pointer to constant weights tensor.
bias | const mli_tensor * | [IN] Pointer to constant bias tensor.
cfg | const mli_conv2d_cfg * | [IN] Pointer to convolution parameters structure.
out | mli_tensor * | [IN | OUT] Pointer to output feature map tensor. Result is stored here.
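As a minimal usage sketch (not taken from the MLI documentation), the call below configures the convolution parameters and invokes the fx16 variant on tensors that are assumed to be already initialized. The cfg field names match the parameters referenced in the Conditions section below; the mli_api.h header name and the fused-ReLU field and enum value are assumptions and may differ in your MLI version.

#include "mli_api.h"   /* assumed MLI public header */

/* Hypothetical helper: tensors are expected to be fully populated already. */
mli_status run_group_conv_fx16(const mli_tensor *in,
                               const mli_tensor *weights,
                               const mli_tensor *bias,
                               mli_tensor *out)
{
    mli_conv2d_cfg cfg = {0};
    cfg.stride_width    = 1;   /* must not be 0 (see Conditions) */
    cfg.stride_height   = 1;
    cfg.padding_left    = 0;   /* must be in [0, effective kernel size) */
    cfg.padding_right   = 0;
    cfg.padding_top     = 0;
    cfg.padding_bottom  = 0;
    cfg.dilation_width  = 1;   /* must not be 0 (see Conditions) */
    cfg.dilation_height = 1;
    cfg.relu.type       = MLI_RELU_NONE; /* assumed field/enum for fused ReLU */

    return mli_krn_group_conv2d_hwcn_fx16(in, weights, bias, &cfg, out);
}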
The number of groups to split into is not provided to the kernel explicitly. Instead, it is derived from the shapes of the input and weights tensors. For example, in an HWCN data layout, if the in feature map is \((Hi, Wi, Ci)\) and the weights tensor is \((Hk, Wk, Cw, Co)\), the number of groups is \(M = Ci / Cw\), and the number of filters per group is \(Co / M\). Therefore, the number of input channels \(Ci\) must be a multiple of \(Cw\), and the number of output channels \(Co\) must be a multiple of the number of groups \(M\).
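As a hypothetical worked example (numbers chosen for illustration, not from the source): with \(Ci = 16\) and \(Cw = 4\), the kernel infers \(M = 16 / 4 = 4\) groups; with \(Co = 8\), each group produces \(8 / 4 = 2\) output channels, so both divisibility requirements are satisfied.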
Here is a list of all available Group Convolution functions:
Function Name | Details
---|---
mli_krn_group_conv2d_hwcn_sa8_sa8_sa32 | In/out layout: HWC; Weights layout: HWCN; In/out/weights data format: sa8; Bias data format: sa32
mli_krn_group_conv2d_hwcn_fx16 | In/out layout: HWC; Weights layout: HWCN; All tensors data format: fx16
mli_krn_group_conv2d_hwcn_fx16_fx8_fx8 | In/out layout: HWC; Weights layout: HWCN; In/out data format: fx16; Weights/Bias data format: fx8
mli_krn_group_conv2d_hwcn_fx16_k3x3 | In/out layout: HWC; Weights layout: HWCN; All tensors data format: fx16; Width of weights tensor: 3; Height of weights tensor: 3
mli_krn_group_conv2d_hwcn_sa8_sa8_sa32_k3x3 | In/out layout: HWC; Weights layout: HWCN; In/out/weights data format: sa8; Bias data format: sa32; Width of weights tensor: 3; Height of weights tensor: 3
mli_krn_group_conv2d_hwcn_fx16_fx8_fx8_k3x3 | In/out layout: HWC; Weights layout: HWCN; In/out data format: fx16; Weights/Bias data format: fx8; Width of weights tensor: 3; Height of weights tensor: 3
mli_krn_group_conv2d_hwcn_sa8_sa8_sa32_k5x5 | In/out layout: HWC; Weights layout: HWCN; In/out/weights data format: sa8; Bias data format: sa32; Width of weights tensor: 5; Height of weights tensor: 5
mli_krn_group_conv2d_hwcn_fx16_k5x5 | In/out layout: HWC; Weights layout: HWCN; All tensors data format: fx16; Width of weights tensor: 5; Height of weights tensor: 5
mli_krn_group_conv2d_hwcn_fx16_fx8_fx8_k5x5 | In/out layout: HWC; Weights layout: HWCN; In/out data format: fx16; Weights/Bias data format: fx8; Width of weights tensor: 5; Height of weights tensor: 5
Conditions¶
Ensure that you satisfy the following general conditions before calling the function:
- in, out, weights and bias tensors must be valid (see mli_tensor Structure Field Descriptions) and satisfy the data requirements of the selected version of the kernel.
- Shapes of in, out, weights and bias tensors must be compatible, which implies the following requirements:
  - in and out are 3-dimensional tensors (rank==3). Dimensions meaning and order (layout) are aligned with the specific version of the kernel.
  - weights is a 4-dimensional tensor (rank==4). Dimensions meaning and order (layout) are aligned with the specific kernel.
  - bias must be a one-dimensional tensor (rank==1). Its length must be equal to \(Co\) (output channels OR number of filters).
  - The channel dimension \(Ci\) of the in tensor must be a multiple of the channel dimension \(Cw\) of the weights tensor (\(Ci = M * Cw\)).
  - \(Co\) of the weights tensor (output channels OR number of filters) must be a multiple of the number of groups, e.g. \(Co = M * X\) where \(X\) is the number of filters per group.
  - Shapes of in, out and weights tensors together with the cfg structure must satisfy the equations (1).
  - Effective width and height of the weights tensor after applying the dilation factor (see (1)) must not exceed the corresponding dimensions of the in tensor.
- in and out tensors must not point to overlapping memory regions.
- mem_stride of the innermost dimension must be equal to 1 for all the tensors.
- padding_top and padding_bottom parameters must be in the range of [0, \(\hat{Hk}\)) where \(\hat{Hk}\) is the effective kernel height (see (1)).
- padding_left and padding_right parameters must be in the range of [0, \(\hat{Wk}\)) where \(\hat{Wk}\) is the effective kernel width (see (1)).
- stride_width and stride_height parameters must not be equal to 0.
- dilation_width and dilation_height parameters must not be equal to 0.
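The sketch below illustrates a pre-call check of a subset of the general conditions above. It assumes the cfg parameter names listed in this section and the conventional effective kernel size \(dilation * (k - 1) + 1\) implied by equations (1); verify both against your MLI version.

#include <stdbool.h>
#include "mli_api.h"   /* assumed MLI public header */

/* Illustrative check of a subset of the general conditions (hypothetical helper). */
static bool group_conv_cfg_ok(const mli_conv2d_cfg *cfg,
                              int Hk, int Wk, int Hi, int Wi)
{
    if (cfg->stride_width == 0 || cfg->stride_height == 0)     return false;
    if (cfg->dilation_width == 0 || cfg->dilation_height == 0) return false;

    /* Assumed effective kernel size after dilation: dilation * (k - 1) + 1. */
    const int eff_hk = cfg->dilation_height * (Hk - 1) + 1;
    const int eff_wk = cfg->dilation_width  * (Wk - 1) + 1;

    if (eff_hk > Hi || eff_wk > Wi) return false;  /* kernel must fit the input */
    if (cfg->padding_top  >= eff_hk || cfg->padding_bottom >= eff_hk) return false;
    if (cfg->padding_left >= eff_wk || cfg->padding_right  >= eff_wk) return false;
    return true;
}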
For fx16 and fx16_fx8_fx8 versions of the kernel, in addition to the general conditions, ensure that you satisfy the following quantization conditions before calling the function:
- The number of frac_bits in the bias and out tensors must not exceed the sum of frac_bits in the in and weights tensors.
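A minimal sketch of this check, assuming the frac_bits of an fx tensor is reachable as el_params.fx.frac_bits (as in other MLI fixed-point kernels; verify against your mli_tensor definition):

#include <stdbool.h>
#include "mli_api.h"   /* assumed MLI public header */

/* Illustrative check: bias and out frac_bits must not exceed in + weights frac_bits. */
static bool group_conv_frac_bits_ok(const mli_tensor *in, const mli_tensor *weights,
                                    const mli_tensor *bias, const mli_tensor *out)
{
    const int acc_frac = in->el_params.fx.frac_bits + weights->el_params.fx.frac_bits;
    return (bias->el_params.fx.frac_bits <= acc_frac) &&
           (out->el_params.fx.frac_bits  <= acc_frac);
}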
For sa8_sa8_sa32 versions of the kernel, in addition to the general conditions, ensure that you satisfy the following quantization conditions before calling the function:
- in and out tensors must be quantized on the tensor level. This implies that each tensor contains a single scale factor and a single zero offset.
- Zero offset of in and out tensors must be within the [-128, 127] range.
- weights and bias tensors must be symmetric. Both must be quantized on the same level. Allowed options:
  - Per tensor level. This implies that each tensor contains a single scale factor and a single zero offset equal to 0.
  - Per \(Co\) dimension level (number of filters). This implies that each tensor contains a separate scale factor for each sub-tensor. All tensors contain a single zero offset equal to 0.
- Scale factors of the bias tensor must be equal to the input scale factor multiplied by the corresponding weights scale factors (the input scale broadcast onto the weights array of scale factors). See the example for the similar condition in the Convolution 2D Prototype and Function List.
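A minimal sketch of the bias scale requirement, using plain float scale values for illustration (the actual scale factors live in the tensors' el_params fields; extracting them is omitted here):

#include <math.h>
#include <stdbool.h>

/* Illustrative check: for each output channel co, the bias scale must equal
 * the input scale multiplied by the corresponding weights scale. For
 * per-tensor weights quantization, scale_w holds the same value for all co. */
static bool bias_scales_ok(float scale_in, const float *scale_w,
                           const float *scale_bias, int Co)
{
    for (int co = 0; co < Co; ++co) {
        const float expected = scale_in * scale_w[co];
        if (fabsf(scale_bias[co] - expected) > 1e-6f * fabsf(expected))
            return false;
    }
    return true;
}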
Ensure that you satisfy the platform-specific conditions in addition to those listed above (see the Platform Specific Details chapter).
Result¶
These functions only modify the memory pointed to by the out.data.mem field. It is assumed that all the other fields of the out tensor are properly populated to be used in calculations and are not modified by the kernel.
Depending on the debug level (see section Error Codes) this function performs a parameter
check and returns the result as an mli_status
code as described in section Kernel Specific Configuration Structures.