I'm training a GAN-like model, but not exactly the same. I'm using Keras with the TensorFlow backend.
I have two Keras models, G and D. I want to take the weight parameters of a target layer in G as the input to model D, and use the result of D.predict(G.weights) as part of G's loss function, i.e. D is not trainable, but the parameters G.weights are trainable. In this way I want to train G.weights further.
I tried using
def custom_loss(ytrue, ypred):
### Something to do with ytrue and ypred
weight = self.G.get_layer('target
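The snippet above is cut off, but the core idea can be illustrated without Keras: treat D as a frozen (non-trainable) scoring function of a weight vector, and update only the weights by descending D's gradient. The sketch below is a hypothetical pure-NumPy analogy, not the asker's actual setup; the quadratic D and all values are made up for illustration.

```python
import numpy as np

def D(w):
    # Frozen "discriminator": a fixed quadratic score of the weights.
    return np.sum((w - 1.0) ** 2)

def D_grad(w):
    # Analytic gradient of the frozen score with respect to the weights.
    return 2.0 * (w - 1.0)

w = np.zeros(4)          # stand-in for G's target-layer weights
lr = 0.1
for _ in range(200):
    w -= lr * D_grad(w)  # only w is trainable; D itself never changes

print(np.round(w, 3))    # weights move toward D's minimum at 1.0
```

In the real Keras setting, the same structure holds: D's layers are marked non-trainable, and gradients flow only into the target layer's weights through the loss term.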
I have fit a regression to my data using a Keras regressor. I use the scikit-learn wrapper and a Pipeline to first standardize the data and then fit it with the Keras regressor. Something like this:
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.preprocessing imp
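To make the pipeline's behavior concrete without needing Keras installed, here is a NumPy-only sketch of what "standardize, then regress" does: a z-score step stands in for StandardScaler, and an ordinary least-squares fit stands in for the KerasRegressor. All data and names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0

# StandardScaler step: z-score each feature column.
mean, std = X.mean(axis=0), X.std(axis=0)
X_scaled = (X - mean) / std

# Regressor step: ordinary least squares stands in for the KerasRegressor.
A = np.column_stack([X_scaled, np.ones(len(X_scaled))])  # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
print(np.max(np.abs(pred - y)))  # near zero: the linear fit recovers y
```

Because the scaling is an affine transform, the downstream regressor sees zero-mean, unit-variance features, which is exactly what the Pipeline guarantees at both fit and predict time.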
While working on a reinforcement learning problem, I'm trying to come up with my own loss function. (OpenAI Gym's 'CartPole-v0' game.)
inputs = keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = keras.layers.Dense(64, activation='relu')(inputs)
x = keras.layers.Dense(64, activation='relu')(x)
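For CartPole, a common custom loss is the REINFORCE (policy-gradient) objective: the negative mean of log(prob of the action taken) times the discounted return. This is a guess at what the asker wants, not their code; the sketch below computes that loss in plain NumPy on made-up episode data.

```python
import numpy as np

# Hypothetical 3-step episode: policy probabilities over 2 actions per step.
probs = np.array([[0.7, 0.3],
                  [0.4, 0.6],
                  [0.9, 0.1]])
actions = np.array([0, 1, 0])        # actions actually taken
rewards = np.array([1.0, 1.0, 1.0])  # CartPole gives +1 per surviving step
gamma = 0.99

# Discounted returns, computed backwards through the episode.
returns = np.zeros_like(rewards)
running = 0.0
for t in reversed(range(len(rewards))):
    running = rewards[t] + gamma * running
    returns[t] = running

taken = probs[np.arange(len(actions)), actions]  # prob of the chosen action
loss = -np.mean(np.log(taken) * returns)
print(loss)
```

In Keras, the same expression becomes a `custom_loss(y_true, y_pred)` where the chosen actions and returns are packed into `y_true` and the probabilities come from `y_pred`.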
I can't interpret the results of get_weights from a GRU layer. Here is my code -
#Modified from - https://machinelearningmastery.com/understanding-simple-recurrent-neural-networks-in-keras/
from pandas import read_csv
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN, GRU
from sklearn.preprocessing i
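Although the excerpt is cut off, the shapes that `get_weights()` returns for a GRU can be worked out by hand: a GRU has three gates (update z, reset r, candidate h), so each weight matrix is three gate blocks stacked along the output axis. The sketch below computes the expected shapes for a hypothetical layer with 1 input feature and 2 units; note that with `reset_after=True` (the TF2 Keras default) the bias has two such rows instead of one.

```python
# Expected shapes of GRU weights [kernel, recurrent_kernel, bias],
# computed by hand rather than read from a model.
input_dim = 1  # features per timestep (hypothetical 1-D series)
units = 2      # hypothetical number of GRU units

kernel_shape = (input_dim, 3 * units)  # input -> [z | r | h] blocks
recurrent_shape = (units, 3 * units)   # state -> [z | r | h] blocks
bias_len = 3 * units                   # one bias per gate column
                                       # (reset_after=True keeps 2 such rows)

n_params = (input_dim * 3 * units) + (units * 3 * units) + bias_len
print(kernel_shape, recurrent_shape, n_params)
```

Comparing these computed shapes against the arrays `get_weights()` returns makes it clear which slice of each matrix belongs to which gate.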
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation
# X has shape (num_rows, num_cols), where the training data are stored
# as row vectors
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
# y must have an o
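The truncated comment is saying that y needs one output row per row of X. To show the XOR mapping this snippet is building toward, here is a NumPy-only network with hand-chosen (not learned) weights that computes XOR exactly; the weights are illustrative, not what the Keras model would train to.

```python
import numpy as np

# XOR inputs and targets: one output row per input row of X.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# Fixed-weight 2-unit hidden layer:
# hidden = relu(x1 + x2), relu(x1 + x2 - 1);  output = hidden1 - 2*hidden2
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0], [-2.0]])

hidden = np.maximum(X @ W1 + b1, 0.0)  # ReLU activation
out = hidden @ W2
print(out.ravel())                     # XOR of the two input bits
```

This also shows why a hidden layer is required: XOR is not linearly separable, so no single Dense layer on raw inputs can fit these four targets.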