How do I get the mixture parameters from a Mixture Density Network created with TensorFlow Probability?
I'm trying to learn a bit about mixture density networks and came across an example in the TensorFlow Probability documentation here. I'm a beginner in this area, by the way.
Below is my complete code, using the example above as a starting point. I had to change the original AdamOptimizer call and added a model.predict() at the end. Calling predict(X) seems to draw samples from the conditional distribution P(Y|X), but what I actually want are the parameters of the mixture model for a given value of X, i.e. the weight, mean, and standard deviation of each of the num_components mixture components. Any ideas?
I looked at the convert_to_tensor_fn argument of the MixtureNormal layer and tried adding:
convert_to_tensor_fn=tfp.distributions.Distribution.sample - which confirms that predict() draws samples
and
convert_to_tensor_fn=tfp.distributions.Distribution.mean - which makes predict() return the conditional expectation
so I was hoping there was some other option that would give me the mixture components, but so far I haven't found one.
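Concretely, here is how that argument slots into the MixtureNormal layer; this is just a sketch of the layer line, with the rest of the model unchanged:

tfpl.MixtureNormal(num_components=num_components,
                   event_shape=event_shape,
                   convert_to_tensor_fn=tfp.distributions.Distribution.mean)  # predict() then returns the conditional mean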
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np
tfd = tfp.distributions
tfpl = tfp.layers
tfk = tf.keras
tfkl = tf.keras.layers
# Load data -- graph of a [cardioid](https://en.wikipedia.org/wiki/Cardioid).
n = 2000
t = tfd.Uniform(low=-np.pi, high=np.pi).sample([n, 1])
r = 2 * (1 - tf.cos(t))
x = r * tf.sin(t) + tfd.Normal(loc=0., scale=0.1).sample([n, 1])
y = r * tf.cos(t) + tfd.Normal(loc=0., scale=0.1).sample([n, 1])
# Model the distribution of y given x with a Mixture Density Network.
event_shape = [1]
num_components = 5
params_size = tfpl.MixtureNormal.params_size(num_components, event_shape)
model = tfk.Sequential([
    tfkl.Dense(12, activation='relu'),
    tfkl.Dense(params_size, activation=None),
    tfpl.MixtureNormal(num_components=num_components,
                       event_shape=event_shape)
])
# Fit.
batch_size = 100
epochs = 20
#model.compile(optimizer=tf.train.AdamOptimizer(learning_rate=0.02),
#              loss=lambda y, model: -model.log_prob(y))
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.02),
              loss=lambda y, model: -model.log_prob(y))
history = model.fit(x, y,
                    batch_size=batch_size,
                    epochs=epochs,
                    steps_per_epoch=n // batch_size)
#
# use the model to make prediction (draws samples from the conditional distribution)
# but how do you get to the mixture parameters for each value of x_pred???
#
x_pred = tf.convert_to_tensor(np.linspace(-2.7,+2.7,1000))
y_pred = model.predict(x_pred)
Now that the question has been answered, the complete working code looks like this:
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np
tfd = tfp.distributions
tfpl = tfp.layers
tfk = tf.keras
tfkl = tf.keras.layers
# Load data -- graph of a [cardioid](https://en.wikipedia.org/wiki/Cardioid).
n = 2000
t = tfd.Uniform(low=-np.pi, high=np.pi).sample([n, 1])
r = 2 * (1 - tf.cos(t))
x = r * tf.sin(t) + tfd.Normal(loc=0., scale=0.1).sample([n, 1])
y = r * tf.cos(t) + tfd.Normal(loc=0., scale=0.1).sample([n, 1])
# Model the distribution of y given x with a Mixture Density Network.
event_shape = [1]
num_components = 5
params_size = tfpl.MixtureNormal.params_size(num_components, event_shape)
model = tfk.Sequential([
    tfkl.Dense(12, activation='relu'),
    tfkl.Dense(params_size, activation=None),
    tfpl.MixtureNormal(num_components=num_components,
                       event_shape=event_shape)
])
# Fit.
batch_size = 100
epochs = 20
#model.compile(optimizer=tf.train.AdamOptimizer(learning_rate=0.02),
#              loss=lambda y, model: -model.log_prob(y))
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.02),
              loss=lambda y, model: -model.log_prob(y))
history = model.fit(x, y,
                    batch_size=batch_size,
                    epochs=epochs,
                    steps_per_epoch=n // batch_size)
#
# use the model to get parameters of the conditional distribution:
#
x = np.linspace(-2.7,+2.7,1000)
x_pred = tf.convert_to_tensor(x[:,np.newaxis])
#
# compute the mixture parameters at each x:
#
gm = model(x_pred)
#
# get the mixture parameters:
#
gm_weights = gm.mixture_distribution.probs_parameter().numpy()
gm_means = gm.components_distribution.mean().numpy()
gm_vars = gm.components_distribution.variance().numpy()
print(gm_weights)
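As a quick sanity check, assuming gm_weights comes back with shape (1000, 5) and gm_means with shape (1000, 5, 1), the conditional mean can be rebuilt from these parameters and compared against gm.mean():

# The mixture mean is the probability-weighted sum of the component means.
cond_mean = np.sum(gm_weights[..., np.newaxis] * gm_means, axis=1)  # shape (1000, 1)
print(np.allclose(cond_mean, gm.mean().numpy(), atol=1e-5))         # expected: True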
Answered on 2021-04-21 06:34:20
I struggled with this too. By looking at the source code on GitHub (here), I found a way to get the parameters of a given output distribution.
For example, if I have a model named 'model' and call it on a specific input 'x_star', it returns a distribution object, and the attributes you want can be accessed like this:
x_star = 1.0
model_star = model(np.array([[x_star]]))  # shape (1, 1): a batch with one scalar input
comp_weights = np.array(model_star.mixture_distribution.probs_parameter())  # mixture weights
comp_means = np.array(model_star.components_distribution.mean())            # component means
comp_vars = np.array(model_star.components_distribution.variance())         # component variances
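A quick way to see what came back, assuming the five-component, event_shape=[1] model from the question:

# Expected shapes (worth verifying): comp_weights (1, 5); comp_means and comp_vars (1, 5, 1).
print(comp_weights.shape, comp_means.shape, comp_vars.shape)
print(comp_weights.sum(axis=-1))  # the mixture weights should sum to ~1.0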
I'm not sure why they don't advertise how to access this; maybe they want these models to be used as black boxes.
https://stackoverflow.com/questions/65918888