Great work!
For the problem I studied, the accuracy reaches 97%, which is very impressive.
How can I compute the DEC loss for every instance after training has completed? For the autoencoder this is straightforward with a simple function:
```python
import tensorflow as tf

def ae_loss(autoencoder, X):
    # Per-instance reconstruction loss (MSE averaged over the feature axis)
    ae_rec = autoencoder.predict(X)
    return tf.keras.losses.mse(X, ae_rec)
```
Defining a similar function for the clustering loss does not work, however. Any idea how this can be implemented? I would like to investigate further by studying the loss distribution.
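In case it helps frame the question: since DEC's clustering loss is the KL divergence between the soft assignments q and the auxiliary target distribution p, one way I imagine it could work is to skip the scalar mean and keep the per-row KL term. A rough sketch of that idea (the model/variable names here are assumptions, and `q` would be the output of the trained clustering layer, e.g. `q = model.predict(X)`):

```python
import numpy as np

def target_distribution(q):
    # DEC's auxiliary target distribution p: sharpen q and normalize
    # by the soft cluster frequencies, as in the DEC paper
    weight = q ** 2 / q.sum(axis=0)
    return (weight.T / weight.sum(axis=1)).T

def dec_loss_per_instance(q):
    # Per-instance KL divergence KL(p_i || q_i) = sum_j p_ij * log(p_ij / q_ij).
    # Summing over axis=1 keeps one loss value per sample instead of the
    # scalar mean reported during training.
    p = target_distribution(q)
    return np.sum(p * np.log(p / q), axis=1)
```

A small epsilon inside the log may be needed if any soft assignment is exactly zero. Does this match how the loss is computed internally during training?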