# 1. FFM Theory

The FFM model introduces the notion of a field, i.e., a category of features: every feature belongs to exactly one field, and each feature learns a separate latent vector for every field, instead of the single latent vector used in FM. Consider again the data from the previous lecture.
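With $n$ features and $f$ fields, the FFM prediction takes the following form (the bias and linear terms correspond to `zeroWeights` and `oneDimWeights` in the code below):

```
\phi(\mathbf{w}, \mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i
  + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \left\langle \mathbf{v}_{i, f_j}, \mathbf{v}_{j, f_i} \right\rangle x_i x_j
```

where $\mathbf{v}_{i, f_j}$ is the $k$-dimensional latent vector of feature $i$ paired with the field $f_j$ of feature $j$.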

# 2. FFM Implementation Details

FFM frames the task as a binary classification problem, trained with the logistic loss plus an L2 regularization term on the model weights.
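Concretely, with labels $y \in \{-1, +1\}$ and model output $\phi(\mathbf{w}, \mathbf{x})$, the objective minimized over the $m$ training samples is (a standard formulation, matching the loss and penalty built in the code below):

```
\min_{\mathbf{w}} \; \sum_{i=1}^{m} \log\bigl(1 + \exp\bigl(-y_i \, \phi(\mathbf{w}, \mathbf{x}_i)\bigr)\bigr)
  + \frac{\lambda}{2} \lVert \mathbf{w} \rVert_2^2
```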

# 3. TensorFlow Implementation

```
import numpy as np

all_data_size = 1000  # sample count; sizes assumed for illustration
input_x_size = 20     # number of features n

def gen_data():
    labels = [-1, 1]
    y = [np.random.choice(labels, 1)[0] for _ in range(all_data_size)]
    x_field = [i // 10 for i in range(input_x_size)]  # field index of each feature
    x = np.random.randint(0, 2, size=(all_data_size, input_x_size))
    return x, y, x_field
```
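As a quick sanity check of the field assignment above: the `i // 10` rule puts features 0-9 in field 0 and features 10-19 in field 1 (a standalone sketch with assumed toy sizes):

```python
import numpy as np

all_data_size = 5   # assumed toy sizes for illustration
input_x_size = 20

y = [np.random.choice([-1, 1], 1)[0] for _ in range(all_data_size)]
x_field = [i // 10 for i in range(input_x_size)]
x = np.random.randint(0, 2, size=(all_data_size, input_x_size))

print(x.shape)                    # (5, 20)
print(x_field[:3], x_field[-3:])  # [0, 0, 0] [1, 1, 1]
```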

```
import tensorflow as tf

def createTwoDimensionWeight(input_x_size, field_size, vector_dimension):
    """Quadratic-term weights V, shape n * f * k."""
    weights = tf.truncated_normal([input_x_size, field_size, vector_dimension])
    tf_weights = tf.Variable(weights)
    return tf_weights

def createOneDimensionWeight(input_x_size):
    """Linear-term weights w, one per feature."""
    weights = tf.truncated_normal([input_x_size])
    tf_weights = tf.Variable(weights)
    return tf_weights

def createZeroDimensionWeight():
    """Bias term w0."""
    weights = tf.truncated_normal([1])
    tf_weights = tf.Variable(weights)
    return tf_weights
```

```
def inference(input_x, input_x_field, zeroWeights, oneDimWeights, thirdWeight):
    """Compute the model output: w0 + sum_i w_i x_i + sum_{i<j} <v_{i,f_j}, v_{j,f_i}> x_i x_j."""
    # linear term
    secondValue = tf.reduce_sum(tf.multiply(oneDimWeights, input_x, name='secondValue'))

    # field-aware pairwise interaction term, accumulated over all feature pairs
    thirdValue = tf.constant(0.0, dtype=tf.float32)
    input_shape = input_x_size

    for i in range(input_shape):
        featureIndex1 = i
        fieldIndex1 = int(input_x_field[i])
        for j in range(i + 1, input_shape):
            featureIndex2 = j
            fieldIndex2 = int(input_x_field[j])

            # v_{i, f_j}: latent vector of feature i for the field of feature j
            vectorLeft = tf.convert_to_tensor([[featureIndex1, fieldIndex2, k] for k in range(vector_dimension)])
            weightLeft = tf.gather_nd(thirdWeight, vectorLeft)
            weightLeftAfterCut = tf.squeeze(weightLeft)

            # v_{j, f_i}: latent vector of feature j for the field of feature i
            vectorRight = tf.convert_to_tensor([[featureIndex2, fieldIndex1, k] for k in range(vector_dimension)])
            weightRight = tf.gather_nd(thirdWeight, vectorRight)
            weightRightAfterCut = tf.squeeze(weightRight)

            tempValue = tf.reduce_sum(tf.multiply(weightLeftAfterCut, weightRightAfterCut))

            # x_i * x_j
            xi = tf.squeeze(tf.gather_nd(input_x, [i]))
            xj = tf.squeeze(tf.gather_nd(input_x, [j]))
            product = tf.reduce_sum(tf.multiply(xi, xj))

            thirdValue = thirdValue + tf.multiply(tempValue, product)

    return tf.add(tf.add(zeroWeights, secondValue), thirdValue)
```
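The double loop above can be sanity-checked with a pure-NumPy version of the interaction term (toy sizes assumed; `V` plays the role of `thirdWeight` with shape n * f * k):

```python
import numpy as np

n, f, k = 4, 2, 3                   # features, fields, latent dim (toy sizes)
rng = np.random.default_rng(0)
V = rng.normal(size=(n, f, k))      # V[i, field, :] = latent vector of feature i for that field
x = np.array([1.0, 0.0, 1.0, 1.0])  # one sample
field = [0, 0, 1, 1]                # field of each feature

third = 0.0
for i in range(n):
    for j in range(i + 1, n):
        # <v_{i, f_j}, v_{j, f_i}> * x_i * x_j
        third += np.dot(V[i, field[j]], V[j, field[i]]) * x[i] * x[j]

print(third)
```

Because `x[1] == 0`, only the pairs (0, 2), (0, 3), and (2, 3) contribute; all pairs involving feature 1 vanish.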

```
field_size = 2         # number of fields f (assumed; matches the i // 10 rule above)
vector_dimension = 3   # latent dimension k (assumed)

trainx, trainy, trainx_field = gen_data()

# one sample is fed per training step
input_x = tf.placeholder(tf.float32, [input_x_size])
input_y = tf.placeholder(tf.float32)

lambda_w = tf.constant(0.001, name='lambda_w')
lambda_v = tf.constant(0.001, name='lambda_v')

zeroWeights = createZeroDimensionWeight()

oneDimWeights = createOneDimensionWeight(input_x_size)

thirdWeight = createTwoDimensionWeight(input_x_size,  # quadratic-term weights
                                       field_size,
                                       vector_dimension)  # n * f * k

y_ = inference(input_x, trainx_field, zeroWeights, oneDimWeights, thirdWeight)

l2_norm = tf.reduce_sum(
    tf.add(tf.multiply(lambda_w, tf.pow(oneDimWeights, 2)),
           tf.reduce_sum(tf.multiply(lambda_v, tf.pow(thirdWeight, 2)), axis=[1, 2])))

# logistic loss for labels in {-1, +1}; note the minus sign in the exponent
loss = tf.log(1.0 + tf.exp(-input_y * y_)) + l2_norm

global_step = tf.Variable(0, trainable=False)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss, global_step=global_step)
```

```
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for t in range(all_data_size):
        input_x_batch = trainx[t]
        input_y_batch = trainy[t]
        predict_loss, _, steps = sess.run(
            [loss, train_step, global_step],
            feed_dict={input_x: input_x_batch, input_y: input_y_batch})
```
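The sign inside the exponent of the loss matters: with labels in {-1, +1}, the logistic loss log(1 + exp(-y * y_hat)) is small when the prediction agrees in sign with the label and large otherwise (a pure-NumPy illustration):

```python
import numpy as np

def logistic_loss(y, y_hat):
    # y in {-1, +1}; y_hat is the raw model output
    return np.log(1.0 + np.exp(-y * y_hat))

print(logistic_loss(1, 3.0))   # small: prediction agrees with the label
print(logistic_loss(1, -3.0))  # large: prediction disagrees
```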

