How to access the predicted label in Multiclass Classification

Hi everyone. I am doing my project for the final assignment of the Zero To GANs course. I am using the Red Wine dataset, whose target has 6 classes, so in my NN I used 6 as output_size and cross-entropy as the loss function, because I know it works well for this type of multiclass classification.
Now, when I use the prediction function:

def predict_single(input, target, model):
    inputs = input.unsqueeze(0)
    predictions = model(inputs)
    prediction = predictions[0].detach()
    print("Input:", input)
    print("Target:", target)
    print("Prediction:", prediction)

And then:

input, target = val_df[1]
predict_single(input, target, model)

I obtain this:
Input: tensor([0.8705, 0.3900, 2.1000, 0.0650, 4.1206, 3.3000, 0.5300, 0.2610])
Target: tensor([6.])
Prediction: tensor([ 3.6465, 0.2800, -0.4561, -1.6733, -0.6519, -0.1650])

But I want to know which class each of these logits corresponds to. In the sense that I have these values 3.6465, 0.2800 and so on, but I don’t know which one is the prediction.
I tried with this:

prediction = F.softmax(prediction, dim=0)
print(prediction)
output = model(input.unsqueeze(0))
_, pred = output.max(1)
print(pred)

But I obtained:
tensor([0.3296, 0.1361, 0.1339, 0.1324, 0.1335, 0.1346])
tensor([0])

And I don’t know what that tensor([0]) is.

Thank you

Are you using ImageFolder to create the dataset?

If so, it has a classes field which you can access; these correspond to the values in the output.

No, I am not using images
I did this:
# Convert dataframe to numpy arrays
inputs_array = wine_copy[input_columns].to_numpy()
targets_array = wine_copy[output_columns].to_numpy()

# Convert numpy arrays to torch tensors
inputs = torch.from_numpy(inputs_array).type(torch.float)
outputs = torch.from_numpy(targets_array).type(torch.float)

# Create a TensorDataset
WineTensorDataset = TensorDataset(inputs, outputs)
WineTensorDataset

# Split into 80% train and 20% val
num_rows = len(wine_copy)
val_percent = 0.2
val_size = int(num_rows * val_percent)
train_size = num_rows - val_size

train_df, val_df = random_split(WineTensorDataset, [train_size, val_size])
len(train_df), len(val_df)

# Pick a batch size for the DataLoader
batch_size = 32
train_loader = DataLoader(train_df, batch_size, shuffle=True)
val_loader = DataLoader(val_df, batch_size)

Do you load the data with pandas?

Yes
wine_dataset = pd.read_csv("/Users/Casella/Documents/DATA SCIENCE/2 ANNO/NEURAL COMPUTING/PROJECT/Exams_wine_wine.csv")

So your output_columns is probably a single column of categorical type. You probably also have some code that converts the categories into numbers (you must, otherwise you wouldn’t have the target tensors). Pandas assigns these codes without any additional manipulation, so your classes correspond to target indices. To access these “classes” you can use df[column].cat.categories.tolist().
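A minimal sketch of that lookup, using a toy dataframe (the column name “quality” and the values here are assumptions for illustration):

```python
import pandas as pd

# Toy stand-in for the wine data; "quality" is assumed to be the target column.
df = pd.DataFrame({"quality": [5, 6, 5, 7, 3]})

# Converting to categorical dtype assigns each distinct value an integer code.
df["quality"] = df["quality"].astype("category")

classes = df["quality"].cat.categories.tolist()  # sorted distinct values: [3, 5, 6, 7]
codes = df["quality"].cat.codes.tolist()         # per-row indices: [1, 2, 1, 3, 0]

# A predicted index from argmax() maps back to the original label via `classes`.
predicted_index = 2
print(classes[predicted_index])  # 6
```

The key point is that the code (index) and the category (label) are different things, and `cat.categories` is the table that translates between them.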

Your argmaxed predictions should return a single value, which is the index of the element with the highest value. You can use this index to look up the class, based on the categories defined in your dataframe.

Let me know if this solves your problem, I’ve typed the code without checking.

Sorry, but the situation is still not clear to me.
I uploaded the notebook here, so you can see: https://jovian.ai/casella0798/problem-with-classes
My output_columns is equal to the target variable “quality”, as you can see, which contains 6 classes: the integer numbers from 3 to 8.
So I converted to numpy and tensors, and then I trained the model.
At the end I obtained these 6 logits, but I still don’t understand which class these logits refer to. I think the highest logit corresponds to the predicted class, right? In the notebook on Jovian you can see that I get 6 logits and then tensor([0]), but instead of this tensor([0]) I want the predicted class, like 3 or 4 or 6 and so on.
I have not understood how to use the indices you talked about, or which indices these are.

So, what are the names of the classes? If you predict only the “quality”, then there’s no need for any sort of class list, since you only predict numbers, not classes that have a meaning (like “red wine”, “white wine”, “pink wine”).

Your targets should start at 0, and since you say the lowest quality is 3, I would just subtract that number. This way 8 becomes 5, and since Python indexes from 0, the range 0 to 5 covers all possible qualities (6 classes).

To get “back” the quality, you would need to argmax() your predictions, and add back the 3 (to go from 0-5 range to 3-8).

To get from tensor([0]) to the normal number 0, you need to use the item() method.
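The whole round trip can be sketched with the logits from the post above:

```python
import torch

# The prediction tensor from the example above: 6 logits, one per quality class
# (targets assumed to have been shifted from 3-8 down to 0-5).
prediction = torch.tensor([3.6465, 0.2800, -0.4561, -1.6733, -0.6519, -0.1650])

idx = prediction.argmax()   # index of the largest logit -> tensor(0)
quality = idx.item() + 3    # plain Python int, shifted back to the 3-8 range
print(quality)              # 3
```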

I was not able to fix it, sorry. I changed targets_array to targets_array - 3, but at the end I still always obtain tensor([0]), which is not a class. For example:

Input: tensor([0.9130, 0.3100, 2.2000, 0.0790, 4.3619, 3.0300, 0.9300, 0.2610])
Target: tensor([6.])
Prediction: tensor([ 3.4754, 0.0432, -1.0402, 0.2745, -0.7804, -0.5333])
tensor([0])

As you can see, the target is 6, but I received tensor([0]). Why?
Can you help me write the right code?

  1. Make sure the min() and max() of the targets array are 0 and 5 respectively. If they’re not, then you have applied the -3 incorrectly.
  2. You have 6 logits - apply argmax() - you get an index from 0 to 5 (depending on the learned class).
  3. To get back to your “quality” range, add 3 to this result.
  4. If you get tensor([0]) as a result, do as I said - use the item() method. It will work either before or after adding 3.
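Putting those steps together on made-up tensors (the logits and targets here are invented purely for illustration):

```python
import torch

# Step 1: hypothetical targets after subtracting 3; they should span 0..5.
targets = torch.tensor([0, 2, 5, 3, 1])
assert targets.min().item() == 0 and targets.max().item() == 5

# Step 2: hypothetical batch of logits (2 samples, 6 classes) -> argmax per row.
logits = torch.tensor([[0.1, 2.0, 0.3, -1.0, 0.0, 0.5],
                       [1.5, -0.2, 0.0, 0.1, 3.0, 0.2]])
preds = logits.argmax(dim=1)        # indices 0-5, one per sample

# Step 3: shift back to the original 3-8 quality range.
qualities = (preds + 3).tolist()
print(qualities)  # [4, 7]
```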

Here is the new code with what you suggested:


As you can see in cell 642, I changed the arrays with the -3, and in the next cell the targets are from 0 to 5.
Then I changed the max in the previous code to argmax, but I receive this error:
ValueError: not enough values to unpack (expected 2, got 1)
Previously, with max, it worked.
So I could not try the final step of item() and +3, due to this error.

As I said, argmax will return an index. A single value.
Your _, pred = ... is therefore incorrect (the previous code, which used max(), returns two values - the max value along with its index). I prefer argmax() to avoid exactly this confusion, hence my advice.

Change it to just pred = output.argmax(1) and it should work (unless there’s an additional problem further on).
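The difference can be seen with a toy output tensor (the logit values are made up):

```python
import torch

# Hypothetical model output: batch of 1 sample, 6 class logits.
output = torch.tensor([[0.2, 1.7, -0.3, 0.0, 0.9, -1.1]])

# torch.max along a dim returns a pair (values, indices) - two things to unpack:
values, indices = output.max(1)

# torch.argmax returns only the indices - a single tensor, nothing to unpack:
pred = output.argmax(1)

print(indices, pred)  # both tensor([1])
```

So `_, pred = output.argmax(1)` fails with "not enough values to unpack" because argmax gives one tensor, not a pair.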


Ok, here is the new code. I think I followed your advice, but as you can see I received 3 as output. The ground truth of the example I tried to predict is 6; this way I will always obtain 3, because if I always obtain 0, then +3 is always 3.

Ok next thing before we continue the journey™

You have run the cells in this notebook almost 700 times. To be sure there’s no problem left over from previous executions, I would restart the notebook to clear all outputs and then rerun it completely.

The model seems to have learned from the data, based on the evaluations done before and after training.

There was some problem like this a while ago. Since you have access to val_loader here, I would just test the first validation batch - predictions and targets (use break so you don’t evaluate everything) - instead of a single case.
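A minimal sketch of that batch check, with a dummy model and loader standing in for the real ones (the names model and val_loader are placeholders here, and the data is random):

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

# Toy stand-ins: 10 samples with 8 features, targets already shifted to 0-5.
inputs = torch.randn(10, 8)
targets = torch.randint(0, 6, (10,))
val_loader = DataLoader(TensorDataset(inputs, targets), batch_size=4)

model = nn.Linear(8, 6)  # placeholder for the trained network

# Compare predictions and targets on the first batch only.
for batch_inputs, batch_targets in val_loader:
    preds = model(batch_inputs).argmax(dim=1) + 3  # back to the 3-8 range
    print("targets:", batch_targets + 3)
    print("preds:  ", preds)
    break  # stop after the first batch
```

Seeing a whole batch at once makes it obvious whether the model varies its output at all, or collapses to a single class.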

:smiley:
https://jovian.ai/casella0798/problem-with-classes5 new notebook
As you can see, I ran the whole notebook again, to be sure.
Then I tried to predict on a batch (I attached the last 2 cells), but I have no idea what to make of the results :frowning:

In the case of batch predictions, they’re completely off :stuck_out_tongue:

You can remove printing the input, it’s not what’s interesting here.

Unsqueezing the input is not necessary, since the val_loader already provides the input in the form of a batch.

I would actually toss out anything above print(target), since it doesn’t contribute to the result. Just remember to remove the unsqueeze.

The predictions don’t make sense at all. There should be more of them (to match the number of elements in the target). I think the unnecessary unsqueeze has something to do with it.
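The shape problem can be seen directly; this toy snippet (with a placeholder linear model and random data) shows what the extra unsqueeze does to a batch:

```python
import torch
from torch import nn

model = nn.Linear(8, 6)       # placeholder for the trained network
batch = torch.randn(32, 8)    # a batch as the DataLoader already provides it

print(model(batch).shape)     # torch.Size([32, 6]) - one row of logits per sample

# Unsqueezing an already-batched input adds a spurious leading dimension:
wrong = model(batch.unsqueeze(0))  # shape (1, 32, 6) instead of (32, 6)
print(wrong.shape)

# argmax(1) now runs over the 32 samples instead of the 6 classes:
print(wrong.argmax(1).shape)  # torch.Size([1, 6]) - meaningless "predictions"
```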

Continue the journey :smiley:
def predict_batch():
    for inputs, labels in val_loader:
        output = model(inputs)
        pred = output.argmax(1)
        pred = pred + 3
        print("prediction:", pred)
        break

And here the output…
prediction: tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3])

I’m completely frustrated :frowning:

Don’t worry, my asperger-like features seem to be activating again because of the mystery.

Since you’ve added 3 to them, this means that the model predictions are 0 everywhere.
Show me the output.

Here is the output

It’s the wrong one. Show the one from the predict_batch function. This one is from somewhere else.