getting the weights of intermediate layer in keras














I have an image dataset with 376 classes; each class has 15 pictures and corresponds to one person. I would like to obtain the feature vector that corresponds to each person.



After compiling and training the model, I used this link as a reference to get the weights of the last convolutional layer. However, when I do this, I get the error:



InvalidArgumentError: You must feed a value for placeholder tensor 'conv_layer' with dtype float and shape [?,19,19,360]


How can I resolve this issue?



Here is the code I have written so far:



import numpy as np
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

train_data = np.array(train_data, dtype=np.float32)
test_data = np.array(test_data, dtype=np.float32)
train_data = train_data / 180  # to make the array values between 0-1
test_data = test_data / 180
train_label = keras.utils.to_categorical(train_label, 376)
test_label = keras.utils.to_categorical(test_label, 376)

# CNN MODEL
model = Sequential()
model.add(Conv2D(180, (3, 3), padding='same', input_shape=(180, 180, 3),
                 activation="relu"))  # 180 is the number of filters
model.add(Conv2D(180, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Conv2D(360, (3, 3), padding='same', activation="relu"))
model.add(Conv2D(360, (3, 3), activation="relu"))
conv_layer = model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
flatten_layer = model.add(Flatten())
model.add(Dense(496, activation="relu"))
model.add(Dropout(0.5))
dense_layer = model.add(Dense(376, activation="softmax"))

# compiling the model
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
model.fit(
    train_data,
    train_label,
    batch_size=32,
    epochs=40,
    verbose=2,
    validation_split=0.1,
    shuffle=True)

# getting intermediate layer weights
get_layer_output = K.function([model.layers[0].input],
                              [model.layers[11].output])
layer_output = get_layer_output([conv_layer])[0]









machine-learning deep-learning keras cnn image-recognition

  • Which layer's output are you expecting to keep as the face feature vectors? – Kiritee Gak, yesterday










  • @KiriteeGak The last convolutional layer, in this example the 7th. – Alfaisal Albakri, yesterday






















1 Answer

The easiest way to get a truncated output from a network is to create a sub-network of it and apply the weights of your trained network. The following example is a modification of what you have shown above, but it should guide you through.



The network you originally want to train:




model = Sequential()
model.add(Conv2D(10, (3, 3), padding='same', input_shape=(60, 60, 3),
                 activation="relu"))
model.add(Conv2D(10, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(5, activation="softmax"))
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy'])

model.fit(
    train_data,
    train_label)


Now create a sub-network that stops at the layer whose outputs you want, as in the example above:




model_new = Sequential()
model_new.add(Conv2D(10, (3, 3), padding='same', input_shape=(60, 60, 3),
                     activation="relu"))
model_new.add(Conv2D(10, (3, 3), activation="relu"))
model_new.add(MaxPooling2D(pool_size=(3, 3)))
model_new.add(Dropout(0.25))
model_new.add(Flatten())

model_new.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['mse'])

# Fit on random targets just to initialise the weights;
# they will be replaced with the trained weights below.
model_new.fit(train_data, y=np.random.rand(40, 3610))


Now take the weights from the first, trained network and use them to replace the weights of the second network:




model_new.set_weights(weights=model.get_weights())


You can verify that the weights were actually replaced in the step above by adding checks like these:




print("Are arrays equal before fit - ",
any([np.array_equal(a1, a2) for a1, a2 in zip(model_new.get_weights(), model.get_weights()[:4])]))

model_new.set_weights(weights=model.get_weights())
print("Are arrays equal after applying weights - ",
all([np.array_equal(a1, a2) for a1, a2 in zip(model_new.get_weights(), model.get_weights()[:4])]))


This should yield:




Are arrays equal before fit - False
Are arrays equal after applying weights - True
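
As a usage note, here is a minimal sketch (not part of the original answer) of how the truncated network can then be used to get per-person feature vectors; `train_label_ids`, a 1-D array of integer class ids aligned with `train_data`, is a hypothetical helper that is not defined in the question:

import numpy as np

# One flattened feature vector per image; for this toy network the output
# shape is (num_images, 3610).
features = model_new.predict(train_data)

# Hypothetical: average the feature vectors of each person's images.
person_vectors = {
    person_id: features[train_label_ids == person_id].mean(axis=0)
    for person_id in np.unique(train_label_ids)
}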


Hope this helps.



















  • Works perfectly, thanks. One more question: how do I know which array corresponds to which image class? – Alfaisal Albakri, yesterday










  • What do you mean by "array"? Do you mean the output of a filter? You cannot pin that down exactly. Remember that after flattening you have one huge vector, and the dense layers map all of it onto a lower dimension with some weights, so any of the values coming out of the filters may have contributed to a class's score. – Kiritee Gak, yesterday
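
To make the point in the comment above concrete, here is a minimal editorial sketch (not from the thread) using the question's trained `model`; the layer indices are assumptions based on the architecture shown in the question:

w_hidden, b_hidden = model.layers[-3].get_weights()  # kernel/bias of the Dense(496) after Flatten
w_out, b_out = model.layers[-1].get_weights()        # kernel/bias of the final Dense(376)
print(w_hidden.shape)  # (flattened_features, 496): every flattened value feeds every hidden unit
print(w_out.shape)     # (496, 376): every hidden unit feeds every class score
# Because both kernels are dense, every filter output contributes (via the hidden
# layer) to every class score, so no single filter maps to a single class.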










