MKL-DNN Convolution result is unstable with C API #16286
Description
After issue #16143 was fixed, further testing revealed that the numerical result of the same example is unstable: evaluating the op multiple times can give different results.
Environment info (Required)
----------Python Info----------
Version : 3.7.2
Compiler : GCC 7.3.0
Build : ('default', 'Dec 29 2018 06:19:36')
Arch : ('64bit', '')
------------Pip Info-----------
Version : 19.0.1
Directory : /opt/Anaconda/lib/python3.7/site-packages/pip
----------MXNet Info-----------
Version : 1.5.0
Directory : /home/matteo/Git/mxnet/python/mxnet
Commit hash file "/home/matteo/Git/mxnet/python/mxnet/COMMIT_HASH" not found. Not installed from pre-built package or built from source.
Library : ['/home/matteo/Git/mxnet/python/mxnet/../../lib/libmxnet.so']
Build features:
✖ CUDA
✖ CUDNN
✖ NCCL
✖ CUDA_RTC
✖ TENSORRT
✔ CPU_SSE
✔ CPU_SSE2
✔ CPU_SSE3
✔ CPU_SSE4_1
✔ CPU_SSE4_2
✖ CPU_SSE4A
✔ CPU_AVX
✖ CPU_AVX2
✖ OPENMP
✖ SSE
✔ F16C
✔ JEMALLOC
✖ BLAS_OPEN
✔ BLAS_ATLAS
✖ BLAS_MKL
✖ BLAS_APPLE
✖ LAPACK
✔ MKLDNN
✖ OPENCV
✖ CAFFE
✖ PROFILER
✖ DIST_KVSTORE
✖ CXX14
✖ INT64_TENSOR_SIZE
✖ SIGNAL_HANDLER
✖ DEBUG
----------System Info----------
Platform : Linux-4.15.0-55-generic-x86_64-with-debian-buster-sid
system : Linux
node : mongolius
release : 4.15.0-55-generic
version : #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 94
Model name: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
Stepping: 3
CPU MHz: 2700.253
CPU max MHz: 3500,0000
CPU min MHz: 800,0000
BogoMIPS: 5184.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 6144K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0117 sec, LOAD: 0.8935 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0599 sec, LOAD: 2.1901 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1028 sec, LOAD: 0.9832 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0657 sec, LOAD: 1.2597 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0380 sec, LOAD: 0.8543 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0395 sec, LOAD: 0.4625 sec.
Build info (Required if built from source)
Compiler: gcc
MXNet commit hash: c5007ea
Build config: plain config.mk except USE_OPENCV = 0
Minimum reproducible example
#include <stdio.h>
#include <stdlib.h>
#include "mxnet/c_api.h"
#include "nnvm/c_api.h"
#define checkedMXCall(func, ...)                     \
  {                                                  \
    if (func(__VA_ARGS__) != 0) {                    \
      printf("MX call %s failed at line %d:\n%s",    \
             #func, __LINE__, MXGetLastError());     \
      exit(1);                                       \
    }                                                \
  }
#define N_TRIES 10
void NDArraySet(NDArrayHandle arr, mx_float val) {
  mx_uint rank;
  const mx_uint *shape;
  checkedMXCall(MXNDArrayGetShape, arr, &rank, &shape);
  int n_elements = 1;
  int i = 0;
  for (i = 0; i < rank; i++) n_elements *= shape[i];
  void *data;
  /* We are about to write through the raw pointer, so wait for write access */
  checkedMXCall(MXNDArrayWaitToWrite, arr);
  checkedMXCall(MXNDArrayGetData, arr, &data);
  mx_float *data_float = (mx_float *) data;
  for (i = 0; i < n_elements; i++) data_float[i] = val;
}
void NDArrayCopy(NDArrayHandle arr, mx_uint n_elements, mx_float *copy) {
  void *data;
  checkedMXCall(MXNDArrayWaitToRead, arr);
  checkedMXCall(MXNDArrayGetData, arr, &data);
  int i;
  mx_float *data_float = (mx_float *) data;
  for (i = 0; i < n_elements; i++) copy[i] = data_float[i];
  // printf("\n---- Copy Outcome:\n");
  // for(i = 0; i < 8; i++) printf("%f %f\n", copy[i], data_float[i]);
}
int main() {
  int i, j;  /* loop indices */
  /* Create symbol variables */
  SymbolHandle in_sym;
  SymbolHandle w_sym;
  SymbolHandle b_sym;
  checkedMXCall(MXSymbolCreateVariable, "in", &in_sym);
  checkedMXCall(MXSymbolCreateVariable, "w", &w_sym);
  checkedMXCall(MXSymbolCreateVariable, "b", &b_sym);
  /* Create convolution op */
  OpHandle op;
  NNGetOpHandle("Convolution", &op);
  SymbolHandle sym;
  const char *keys1[2] = {"kernel", "num_filter"};
  const char *vals[2] = {"(1,1)", "40"};
  checkedMXCall(MXSymbolCreateAtomicSymbol, op, 2, keys1, vals, &sym);
  /* Compose op and variables */
  const char **keys2 = NULL;
  SymbolHandle vars[3] = {in_sym, w_sym, b_sym};
  checkedMXCall(MXSymbolCompose, sym, "Conv", 3, keys2, vars);
  /* Create NDArrays for arguments (dev_type 1 = CPU) */
  int dev_type = 1;
  int dev_id = 0;
  mx_uint in_shape[4] = {1, 3, 10, 10};
  NDArrayHandle in_arg_arr;
  checkedMXCall(MXNDArrayCreateEx, in_shape, 4, dev_type, dev_id, 0, 0, &in_arg_arr);
  mx_uint w_shape[4] = {40, 3, 1, 1};
  NDArrayHandle w_arg_arr;
  checkedMXCall(MXNDArrayCreateEx, w_shape, 4, dev_type, dev_id, 0, 0, &w_arg_arr);
  mx_uint b_shape[1] = {40};
  NDArrayHandle b_arg_arr;
  checkedMXCall(MXNDArrayCreateEx, b_shape, 1, dev_type, dev_id, 0, 0, &b_arg_arr);
  /* Set NDArrays */
  NDArraySet(in_arg_arr, 1.);
  NDArraySet(w_arg_arr, 1.);
  NDArraySet(b_arg_arr, 1.);
  /* Create and bind executor */
  ExecutorHandle ex;
  NDArrayHandle arg[3] = {in_arg_arr, w_arg_arr, b_arg_arr};
  NDArrayHandle grad[3] = {NULL, NULL, NULL};
  NDArrayHandle *aux = NULL;
  mx_uint req[3] = {1, 1, 1};
  checkedMXCall(MXExecutorBind, sym, dev_type, dev_id, 3, arg, grad, req, 0, aux, &ex);
  /* Get executor output handle */
  mx_uint out_size;
  NDArrayHandle *out_arr_p;
  checkedMXCall(MXExecutorOutputs, ex, &out_size, &out_arr_p);
  NDArrayHandle out_arr = *out_arr_p;
  /* Get output shape */
  mx_uint out_rank;
  const mx_uint *out_shape;
  checkedMXCall(MXNDArrayGetShape, out_arr, &out_rank, &out_shape);
  int n_elements = 1;
  for (i = 0; i < out_rank; i++) n_elements *= out_shape[i];
  /* Forward */
  /* checkedMXCall(MXExecutorForward, ex, 0); */
  /* Copy output */
  /* mx_float *res1 = (mx_float *) malloc(sizeof(mx_float) * n_elements);
     NDArrayCopy(out_arr, n_elements, res1); */
  /* Run forward N_TRIES times, copying the output after each pass */
  mx_float *outputs[N_TRIES];
  for (i = 0; i < N_TRIES; i++) {
    checkedMXCall(MXExecutorForward, ex, 0);
    outputs[i] = (mx_float *) malloc(sizeof(mx_float) * n_elements);
    NDArrayCopy(out_arr, n_elements, outputs[i]);
  }
  /* Maximum absolute difference between the first copy and each of the others */
  mx_float temp;
  mx_float max = 0;
  for (i = 1; i < N_TRIES; i++) {
    for (j = 0; j < n_elements; j++) {
      temp = outputs[0][j] - outputs[i][j];
      if (temp < 0) temp = (-1) * temp;
      if (temp > max) max = temp;
    }
  }
  printf("%f\n", max);
  return 0;
}
Steps to reproduce
The above program evaluates the same convolution op as in #16143 N_TRIES (10) times, copies its output after every forward pass (the output is expected to be an array with all elements equal to 4), and then checks that all the copies are equal by printing the maximum absolute difference between the first copy and each of the others. The expected outcome is therefore zero, but every time the program is run the printed maximum is either 4 or a very large number.
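For reference, this is roughly how I compile and run it against my source build (repro.c is just the file name I use for the example above; the include and library paths are from my setup and may need adjusting):
gcc repro.c -I/home/matteo/Git/mxnet/include -I/home/matteo/Git/mxnet/3rdparty/tvm/nnvm/include -L/home/matteo/Git/mxnet/lib -lmxnet -o repro
LD_LIBRARY_PATH=/home/matteo/Git/mxnet/lib ./repro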
This is an equivalent example in Python, which works correctly:
import mxnet as mx
import numpy as np

sym = mx.sym.Convolution(
    mx.sym.Variable('in'),
    mx.sym.Variable('w'),
    mx.sym.Variable('b'),
    kernel=(1, 1),
    num_filter=40
)
args = {
    'in': mx.nd.ones([1, 3, 10, 10]),
    'w': mx.nd.ones([40, 3, 1, 1]),
    'b': mx.nd.ones([40]),
}
ex = sym.bind(mx.cpu(), args)
outs = []
for i in range(10):
    ex.forward()
    outs.append(ex.outputs[0].asnumpy())
print(
    max([np.max(np.absolute(outs[0] - out)) for out in outs[1:]])
)
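A possible reason the Python version is immune: asnumpy() copies through MXNDArraySyncCopyToCPU, which blocks until every pending write on the array has completed, whereas the C program reads the raw data pointer returned by MXNDArrayGetData. In principle the MXNDArrayWaitToRead call in NDArrayCopy should give the same guarantee, which is why this looks like a bug rather than a misuse of the API.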
A bonus:
Uncommenting these two lines in the function NDArrayCopy
// printf("\n---- Copy Outcome:\n");
// for(i = 0; i < 8; i++) printf("%f %f\n", copy[i], data_float[i]);
shows that not even copying the output is safe: elements of the output array apparently get changed right after being copied, so I would guess this is some sort of problem with the asynchronous engine. The 8 there is not random: apparently differences never appear past element 8.
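If the instability really comes from reading the raw data pointer while the engine is still rearranging the (possibly MKL-DNN-formatted) output, then a copy routine that goes through MXNDArraySyncCopyToCPU instead of MXNDArrayGetData should not show it. Here is a minimal sketch to test with (NDArrayCopySync is a name I made up; it reuses the checkedMXCall macro from above):
void NDArrayCopySync(NDArrayHandle arr, mx_uint n_elements, mx_float *copy) {
  /* MXNDArraySyncCopyToCPU waits for all pending writes on arr and then
     copies n_elements values into the caller's buffer in a single call. */
  checkedMXCall(MXNDArraySyncCopyToCPU, arr, (void *) copy, (size_t) n_elements);
}
If dropping this in for NDArrayCopy makes the differences go away, that would point at MXNDArrayGetData / MXNDArrayWaitToRead on MKL-DNN-backed outputs rather than at the engine as a whole.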