All models are callable, just like layers.
You can build on top of an existing model, similar to transfer learning.
keras.Input creates a symbolic input tensor (PyTorch -> Variable, TensorFlow 1.x -> placeholder).
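A minimal sketch of both ideas above, assuming tf.keras (TensorFlow 2): keras.Input declares a symbolic input, and a finished model can be called on a new tensor exactly like a layer.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))                       # symbolic input tensor
x = tf.keras.layers.Dense(8, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(2, activation='softmax')(x)
base = tf.keras.Model(inputs, outputs)

# Call the whole model on a new input, just like a layer (transfer-learning style);
# this reuses base's weights rather than creating new ones.
new_in = tf.keras.Input(shape=(4,))
wrapper = tf.keras.Model(new_in, base(new_in))

y = wrapper(tf.zeros((3, 4)))                             # batch of 3 -> shape (3, 2)
```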
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dropout, Dense

model = Sequential()
model.add(Conv2D(32, (5, 5), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(64, (5, 5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
tf.keras.Model(inputs, outputs) builds a model from input and output tensors (the functional API).
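A small functional-API sketch of the same kind of convnet as above, assuming tf.keras: each layer is called on the previous tensor, then tf.keras.Model ties the input and output together.

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(32, (5, 5), activation='relu')(inp)
x = tf.keras.layers.MaxPool2D(pool_size=(2, 2))(x)
x = tf.keras.layers.Flatten()(x)
out = tf.keras.layers.Dense(10, activation='softmax')(x)

# tf.keras.Model(inputs, outputs): wrap the tensor graph into a trainable model
model = tf.keras.Model(inp, out)
```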
A DNN models a single-valued mapping y = f(x) and can approximate arbitrary continuous functions (universal approximation); it cannot directly represent implicit, multi-valued relations of the form f(x, y) = 0. Residual network: the block output is f(x) + x, i.e. the input is added back through a skip connection.
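A minimal sketch of the residual idea f(x) + x, assuming tf.keras; the branch f(x) must keep the same width as x so the two can be added.

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(16,))
fx = tf.keras.layers.Dense(16, activation='relu')(inp)   # f(x), same width as x
out = tf.keras.layers.Add()([fx, inp])                   # f(x) + x skip connection
res_block = tf.keras.Model(inp, out)

y = res_block(tf.ones((2, 16)))                          # shape preserved: (2, 16)
```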
model.compile sets the training configuration. The optimizer can be configured separately, e.g. sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True) (newer versions use the argument name learning_rate), or adjusted later via model.optimizer.lr.assign. SGD's default learning rate is 0.01. Pass metrics=['accuracy'] to track accuracy.
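A sketch of compiling with an explicitly configured SGD optimizer and adjusting the learning rate afterwards, assuming tf.keras / Keras with the newer `learning_rate` argument (the `decay` argument from older versions is omitted here):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# default learning rate is 0.01; set momentum and Nesterov explicitly
sgd = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# change the learning rate after compiling
model.optimizer.learning_rate.assign(0.001)
```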
model.fit trains the model; loss, accuracy = model.evaluate(...) evaluates it (computing loss and accuracy); model.predict runs inference.
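The full train / evaluate / predict cycle can be sketched on random data, assuming tf.keras (the data here is synthetic, so the accuracy is meaningless):

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(64, 20).astype('float32')     # 64 samples, 20 features
y = np.random.randint(0, 10, size=(64,))          # integer class labels 0..9

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation='softmax')])
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x, y, epochs=1, batch_size=16, verbose=0)   # training
loss, accuracy = model.evaluate(x, y, verbose=0)      # evaluation
probs = model.predict(x, verbose=0)                   # inference: class probabilities
```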
model.summary() prints the model structure; model.get_config() returns its configuration.
layers.Dense: linear transform plus activation (fully connected layer); note the default activation is linear (activation=None), not relu. layers.Concatenate merges two input tensors. layers.Lambda wraps an arbitrary expression as a layer, e.g. Lambda(lambda x: x**2).
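A sketch combining the three layers above, assuming tf.keras: two inputs are concatenated, squared elementwise by a Lambda layer, and passed through a Dense layer (left with its default linear activation).

```python
import tensorflow as tf

a = tf.keras.Input(shape=(3,))
b = tf.keras.Input(shape=(3,))
merged = tf.keras.layers.Concatenate()([a, b])            # (None, 3)+(None, 3) -> (None, 6)
squared = tf.keras.layers.Lambda(lambda x: x**2)(merged)  # expression as a layer
out = tf.keras.layers.Dense(1)(squared)                   # default activation: linear
model = tf.keras.Model([a, b], out)

y = model([tf.ones((2, 3)), tf.ones((2, 3))])             # shape (2, 1)
```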
Gradient explosion: mitigate with BatchNormalization and L1/L2 regularization, which keep weights and activations small in magnitude. References: https://blog.csdn.net/qq_32002253/article/details/89109214 https://blog.csdn.net/qq_29462849/article/details/83068421
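A sketch of both mitigations in one model, assuming tf.keras: an L2 penalty on the Dense kernel shrinks weights, and BatchNormalization rescales activations per batch before the nonlinearity.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    # L2 penalty added to the loss keeps kernel weights small
    tf.keras.layers.Dense(32, kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.BatchNormalization(),   # normalize activations per batch
    tf.keras.layers.Activation('relu'),
    tf.keras.layers.Dense(1),
])
```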