
socket.gaierror: [Errno -2] Name or service not known / urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>

Stack Overflow user
Asked on 2017-07-06 02:30:08
2 answers · 1K views · 0 votes

I am working through this tutorial on how to implement a convolutional neural network.

I followed the instructions there, so now I have the following code:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function  
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)    
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))   
import tensorflow as tf    
from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib

tf.logging.set_verbosity(tf.logging.INFO)

def cnn_model_fn(features, labels, mode):
  """Model function for CNN."""
  # Input Layer
  input_layer = tf.reshape(features, [-1, 28, 28, 1])

  # Convolutional Layer #1
  conv1 = tf.layers.conv2d(
      inputs=input_layer,
      filters=32,
      kernel_size=[5, 5],
      padding="same",
      activation=tf.nn.relu)

  # Pooling Layer #1
  pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)

  # Convolutional Layer #2 and Pooling Layer #2
  conv2 = tf.layers.conv2d(
      inputs=pool1,
      filters=64,
      kernel_size=[5, 5],
      padding="same",
      activation=tf.nn.relu)
  pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)

  # Dense Layer
  pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
  dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
  dropout = tf.layers.dropout(
      inputs=dense, rate=0.4, training=mode == learn.ModeKeys.TRAIN)

  # Logits Layer
  logits = tf.layers.dense(inputs=dropout, units=10)

  loss = None
  train_op = None

  # Calculate Loss (for both TRAIN and EVAL modes)
  if mode != learn.ModeKeys.INFER:
    onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
    loss = tf.losses.softmax_cross_entropy(
        onehot_labels=onehot_labels, logits=logits)

  # Configure the Training Op (for TRAIN mode)
  if mode == learn.ModeKeys.TRAIN:
    train_op = tf.contrib.layers.optimize_loss(
        loss=loss,
        global_step=tf.contrib.framework.get_global_step(),
        learning_rate=0.001,
        optimizer="SGD")

  # Generate Predictions
  predictions = {
      "classes": tf.argmax(
          input=logits, axis=1),
      "probabilities": tf.nn.softmax(
          logits, name="softmax_tensor")
  }

  # Return a ModelFnOps object
  return model_fn_lib.ModelFnOps(
      mode=mode, predictions=predictions, loss=loss, train_op=train_op)


def main():
    print("In main")
    # Load training and eval data
    mnist = learn.datasets.load_dataset("mnist")
    train_data = tf.train.string_input_producer(tf.train.match_filenames_once("../inputs/train/*.jpg")) # Returns np.array
    train_labels = np.asarray(train_labels.csv, dtype=np.float32)
    test_data = tf.train.string_input_producer(tf.train.match_filenames_once("../inputs/test/*.jpg")) # Returns np.array
    # eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)


    # Create the Estimator
    mnist_classifier = learn.Estimator(
          model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")

    # Set up logging for predictions
    tensors_to_log = {"probabilities": "softmax_tensor"}
    logging_hook = tf.train.LoggingTensorHook(
      tensors=tensors_to_log, every_n_iter=50)


    mnist_classifier.fit(
        x=train_data,
        y=train_labels,
        batch_size=100,
        steps=20000,
        monitors=[logging_hook])

    # Configure the accuracy metric for evaluation
    metrics = {
        "accuracy":
            learn.MetricSpec(
                metric_fn=tf.metrics.accuracy, prediction_key="classes"),
    }


    # Evaluate the model and print results
    eval_results = mnist_classifier.evaluate(
        x=eval_data, y=eval_labels, metrics=metrics)
    print(eval_results)

main()

For this code, I get the following error:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/urllib/request.py", line 1318, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/opt/conda/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/opt/conda/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/opt/conda/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/conda/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/opt/conda/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/opt/conda/lib/python3.6/http/client.py", line 1392, in connect
    super().connect()
  File "/opt/conda/lib/python3.6/http/client.py", line 936, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/opt/conda/lib/python3.6/socket.py", line 704, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/opt/conda/lib/python3.6/socket.py", line 743, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "../src/script.py", line 130, in <module>
    main()
  File "../src/script.py", line 93, in main
    mnist = learn.datasets.load_dataset("mnist")
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/__init__.py", line 73, in load_dataset
    return DATASETS[name]()
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py", line 279, in load_mnist
    return read_data_sets(train_dir)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py", line 235, in read_data_sets
    SOURCE_URL + TRAIN_IMAGES)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py", line 208, in maybe_download
    temp_file_name, _ = urlretrieve_with_retry(source_url)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py", line 165, in wrapped_fn
    return fn(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py", line 190, in urlretrieve_with_retry
    return urllib.request.urlretrieve(url, filename)
  File "/opt/conda/lib/python3.6/urllib/request.py", line 248, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/opt/conda/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/conda/lib/python3.6/urllib/request.py", line 526, in open
    response = self._open(req, data)
  File "/opt/conda/lib/python3.6/urllib/request.py", line 544, in _open
    '_open', req)
  File "/opt/conda/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/opt/conda/lib/python3.6/urllib/request.py", line 1361, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/opt/conda/lib/python3.6/urllib/request.py", line 1320, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>

This message seems quite unclear to me. Do you know what is causing it?


2 Answers

Stack Overflow user

Accepted answer

Answered on 2017-07-06 02:53:13

The error comes from the line mnist = learn.datasets.load_dataset("mnist"), which tries to download the MNIST dataset. However, mnist is not actually used anywhere in your code, so if you don't need the dataset you can simply comment that line out. Alternatively, if you do want to use it, download it (all four files) from http://yann.lecun.com/exdb/mnist/ and point the call at the directory containing those files: mnist = learn.datasets.load_dataset("/path/to/mnist").
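For the second option, a minimal offline-loading sketch (assuming the four .gz files from http://yann.lecun.com/exdb/mnist/ have already been copied by hand into a local MNIST-data directory) could look like this; read_data_sets only downloads a file when it is missing from train_dir, so with the files in place nothing goes over the network:

# Minimal sketch -- assumes the four MNIST .gz files were already downloaded
# manually into ./MNIST-data, so no network access is needed.
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets

# read_data_sets() only fetches a file when it is missing from train_dir;
# with the files already in place it loads them straight from disk.
mnist = read_data_sets("MNIST-data", one_hot=False)

train_data = mnist.train.images                     # np.array of flattened images
train_labels = mnist.train.labels.astype("int32")
eval_data = mnist.test.images
eval_labels = mnist.test.labels.astype("int32")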

Votes: 2

Stack Overflow user

Answered on 2017-07-06 02:56:26

It looks like your script is failing on this line:

mnist = learn.datasets.load_dataset("mnist")

It can't find the MNIST dataset locally (on disk), so it tries to download it, but (for some reason) the download fails.

Try downloading the dataset into a MNIST-data directory yourself. (Check the source code of load_dataset(): it calls load_mnist() with the default train_dir='MNIST-data'. Eventually read_data_sets tries to load the train/test images/labels from disk, and only if it can't find them in ./MNIST-data does it try to download train-images-idx3-ubyte.gz and so on.)
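As a quick way to confirm the root cause, a small diagnostic sketch like the one below (the hostname is only an example target, not necessarily the exact SOURCE_URL host your TF version uses) shows whether the environment can resolve external names at all; a socket.gaierror here matches the error in the traceback:

# Diagnostic sketch -- the hostname is just an example; any external host will do.
# A socket.gaierror here means the environment (e.g. a Kaggle kernel with internet
# disabled) cannot resolve names, which is exactly the
# "[Errno -2] Name or service not known" seen in the traceback.
import socket

try:
    socket.getaddrinfo("yann.lecun.com", 443)
    print("DNS resolution works; the download host should be reachable")
except socket.gaierror as err:
    print("No DNS / network access:", err)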

Votes: 0
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/44933545
