Artificial Neural Networks and Deep Learning 2023 - Homework 2 Forum


> Submission Problem

WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
2023-12-17 14:27:10.593336: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-12-17 14:27:10.629405: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-17 14:27:10.629442: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-17 14:27:10.629465: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-12-17 14:27:10.636289: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-17 14:27:13.119727: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22987 MB memory: -> device: 0, name: Quadro RTX 6000, pci bus id: 0000:96:00.0, compute capability: 7.5
Traceback (most recent call last):
  File "/multiverse/storage/lattari/Prj/postdoc/Courses/AN2DL_2023/Competition2_running_dir/worker_gpu5_dir/tmp/codalab/tmpSo6Uxy/run/program/score.py", line 129, in
    M = model(submission_dir)
        ^^^^^^^^^^^^^^^^^^^^^
  File "/multiverse/storage/lattari/Prj/postdoc/Courses/AN2DL_2023/Competition2_running_dir/worker_gpu5_dir/tmp/codalab/tmpSo6Uxy/run/input/res/model.py", line 7, in __init__
    self.model = tf.keras.models.load_model(os.path.join(path, 'Model_1.keras'))
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/keras/src/saving/saving_api.py", line 254, in load_model
    return saving_lib.load_model(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/keras/src/saving/saving_lib.py", line 281, in load_model
    raise e
  File "/usr/local/lib/python3.11/dist-packages/keras/src/saving/saving_lib.py", line 269, in load_model
    _load_state(
  File "/usr/local/lib/python3.11/dist-packages/keras/src/saving/saving_lib.py", line 466, in _load_state
    _load_container_state(
  File "/usr/local/lib/python3.11/dist-packages/keras/src/saving/saving_lib.py", line 534, in _load_container_state
    _load_state(
  File "/usr/local/lib/python3.11/dist-packages/keras/src/saving/saving_lib.py", line 457, in _load_state
    _load_state(
  File "/usr/local/lib/python3.11/dist-packages/keras/src/saving/saving_lib.py", line 457, in _load_state
    _load_state(
  File "/usr/local/lib/python3.11/dist-packages/keras/src/saving/saving_lib.py", line 435, in _load_state
    trackable.load_own_variables(weights_store.get(inner_path))
  File "/usr/local/lib/python3.11/dist-packages/keras/src/engine/base_layer.py", line 3531, in load_own_variables
    raise ValueError(
ValueError: Layer 'lstm_cell' expected 3 variables, but received 0 variables during loading. Expected: ['bidirectional_lstm/backward_lstm/lstm_cell/kernel:0', 'bidirectional_lstm/backward_lstm/lstm_cell/recurrent_kernel:0', 'bidirectional_lstm/backward_lstm/lstm_cell/bias:0']

I used this kind of network and successfully trained it in my notebook:

Layer (type)                        Output Shape        Param #
=================================================================
input_layer (InputLayer)            [(None, 200, 1)]    0
bidirectional_lstm (Bidirectional)  (None, 200, 64)     8704
conv (Conv1D)                       (None, 200, 64)     12352
output_layer (Conv1D)               (None, 200, 1)      193
cropping (Cropping1D)               (None, 9, 1)        0
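
For reference, a minimal sketch that reproduces this summary (the kernel size of 3, the 32 LSTM units per direction, the ReLU activation and the cropping offsets are assumptions inferred from the parameter counts, not the actual notebook code):

import tensorflow as tf

# Sketch only: layer sizes chosen to match the parameter counts above.
inputs = tf.keras.Input(shape=(200, 1), name="input_layer")
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(32, return_sequences=True),      # 2 * 4352 = 8704 params
    name="bidirectional_lstm")(inputs)
x = tf.keras.layers.Conv1D(64, 3, padding="same", activation="relu",
                           name="conv")(x)                # 3*64*64 + 64 = 12352 params
x = tf.keras.layers.Conv1D(1, 3, padding="same",
                           name="output_layer")(x)        # 3*64*1 + 1 = 193 params
outputs = tf.keras.layers.Cropping1D(cropping=(191, 0),   # keep the last 9 of 200 steps
                                     name="cropping")(x)
model = tf.keras.Model(inputs, outputs)
model.summary()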

To avoid further problems, I modified my model.py as follows:

import os
import tensorflow as tf
import numpy as np

class model:
    def __init__(self, path):
        self.model = tf.keras.models.load_model(os.path.join(path, 'Model_1.keras'))

    def predict(self, X, categories):
        X = np.expand_dims(X, axis=-1)
        # Note: this is just an example.
        # Here model.predict is called
        out = self.model.predict(X)  # Shape [BSx9] for Phase 1 and [BSx18] for Phase 2
        out = np.squeeze(out, axis=-1)

        return out
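
For example, a quick local smoke test of this wrapper could look like the following (hypothetical: it assumes Model_1.keras sits in the current directory and that the model maps 200 input steps to 9 outputs, as in Phase 1):

import numpy as np

m = model(path=".")                    # folder containing Model_1.keras
X_dummy = np.random.rand(4, 200)       # placeholder batch: 4 series, 200 time steps each
preds = m.predict(X_dummy, categories=None)
print(preds.shape)                     # expected (4, 9) in Phase 1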

Nevertheless, my model works in my notebook and it seems to me that I respected the requirements, yet I got the error listed above.

Posted by: ManuelAlfano @ Dec. 17, 2023, 3:05 p.m.

Did you also try to load the model in your notebook as a double check?

Posted by: an2dl.competitions @ Dec. 18, 2023, 7:24 a.m.

Yes, and that is how I found the problem: I was saving the model with
model.save("Model_1.keras")
It was enough to save it as
model.save("Model_1")

Posted by: ManuelAlfano @ Dec. 18, 2023, 1:15 p.m.

Ok

Posted by: an2dl.competitions @ Dec. 19, 2023, 9:30 a.m.