
EF Code First: "Invalid column name 'Discriminator'" with no inheritance

First, some background on what EF Code First is. Code First is a development workflow for Entity Framework in which the data model is defined entirely in code, rather than with an external designer or XML mapping files. This gives developers fine-grained control over the model's structure and how it maps to the database.
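As a minimal illustration of the workflow (the entity and context names here are invented, and this assumes the EF6 NuGet package is referenced), a Code First model is just plain classes plus a `DbContext`:

```csharp
using System.Data.Entity;  // EF6

// A plain class becomes a table; its properties become columns.
public class Blog
{
    public int Id { get; set; }       // convention: a property named "Id" is the primary key
    public string Title { get; set; }
}

// The context exposes the model to EF and drives database creation.
public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}
```

EF infers the schema from these classes by convention; attributes or the fluent API override the conventions where needed.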

The error in question, "Invalid column name 'Discriminator'", means that Entity Framework generated SQL referencing a column named Discriminator, but no such column exists in the table. EF only adds a Discriminator column when it maps an inheritance hierarchy using the default Table-per-Hierarchy strategy — so despite the title, EF believes some class in the model inherits from the entity, even if no inheritance was intended.

To resolve the problem, examine the data model definitions in the code. A few things to try:

  1. Make sure the model defines the correct column names. Check that the property names in the code match the column names in the database; if they don't, either rename the properties, map them explicitly (e.g. with the Column attribute), or alter the database so the two agree.
  2. If inheritance is involved — even unintentionally, for example because some class elsewhere in the project derives from the entity — make sure the hierarchy is mapped the way you intend. EF maps hierarchies with the "Table-per-Hierarchy" (TPH) strategy by default, which is what introduces the Discriminator column. Switch to "Table-per-Type" (TPT) by placing the Table("TableName") attribute on each subclass, or exclude the unwanted subclass from the model entirely. Use the Key attribute to mark the primary key.
  3. Check the database connection string. Make sure you are connected to the right database and that it contains the expected tables and columns — this error often appears when the connection points at an older copy of the schema that has drifted from the model.
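Putting points 1 and 2 together, here is a sketch using the EF6 API (the class names are invented for illustration) of the common case where a stray subclass is what makes EF expect a Discriminator column, and two ways to stop it:

```csharp
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

public class Customer
{
    [Key]
    public int Id { get; set; }

    [Column("CustomerName")]   // point 1: map the property to the real column name
    public string Name { get; set; }
}

// A subclass anywhere in the assembly makes EF map a TPH hierarchy
// and expect a Discriminator column in the Customers table.
public class AuditedCustomer : Customer { }

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Either exclude the subclass from the model entirely…
        modelBuilder.Ignore<AuditedCustomer>();
        // …or keep it and map it to its own table instead (TPT):
        // modelBuilder.Entity<AuditedCustomer>().ToTable("AuditedCustomers");
    }
}
```

Marking the subclass with [NotMapped] achieves the same effect as `modelBuilder.Ignore<T>()` when you prefer attributes over the fluent API.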

If the problem persists, searching for the exact error message or asking other developers for help is a reasonable next step.


