Hi everyone!
We're trying to upload a submission that uses feature scaling, but we keep getting this error:
"WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
2022-12-11 14:33:14.773349: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-11 14:33:14.915161: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2022-12-11 14:33:14.944237: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-12-11 14:33:16.669021: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-11 14:33:18.394503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22822 MB memory: -> device: 0, name: Quadro RTX 6000, pci bus id: 0000:bf:00.0, compute capability: 7.5
Traceback (most recent call last):
File "/multiverse/storage/lattari/Prj/postdoc/Courses/AN2DL_2022/Competition2_running_dir/worker_gpu7_dir/tmp/codalab/tmp32Jn5v/run/program/score.py", line 174, in <module>
predictions = M.predict(D.input_ts) # [BS]
File "/multiverse/storage/lattari/Prj/postdoc/Courses/AN2DL_2022/Competition2_running_dir/worker_gpu7_dir/tmp/codalab/tmp32Jn5v/run/input/res/model.py", line 13, in predict
scaler = joblib.load(scaler_filename)
File "/usr/local/lib/python3.8/dist-packages/joblib/numpy_pickle.py", line 650, in load
with open(filename, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'scaler_1.sav'"
We tried moving scaler_1.sav into the same directory as model.py, but we still get this error. Can anyone help us?
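Could the issue be that joblib.load resolves a bare filename like 'scaler_1.sav' against the process's current working directory (where score.py is run from), rather than against the directory containing model.py? If so, the file could sit right next to model.py and still not be found. A minimal sketch of what we think the path resolution would need to look like (resolve_next_to is just an illustrative helper, not part of our submission):

```python
import os

def resolve_next_to(script_path, filename):
    """Build an absolute path to `filename` in the same directory as
    `script_path`, independent of the current working directory."""
    return os.path.join(os.path.dirname(os.path.abspath(script_path)), filename)

# Inside model.py this would become something like:
#   scaler_filename = resolve_next_to(__file__, "scaler_1.sav")
#   scaler = joblib.load(scaler_filename)
print(resolve_next_to("/run/input/res/model.py", "scaler_1.sav"))
# -> /run/input/res/scaler_1.sav
```

Would loading the scaler this way be the right approach, or does the grader expect the file somewhere else?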