
Views: 3608  |  Replies: 6

HZSte1204


[Help] AttributeError: module 'cs_gan.utils' has no attribute 'get_train_dataset'

Hi everyone, when I run the code I get AttributeError: module 'cs_gan.utils' has no attribute 'get_train_dataset'. The code contains from cs_gan import utils, and utils.py does define the function get_train_dataset. How can this be resolved?


HZSte1204


Hello, when running the code I get AttributeError: module 'cs_gan.utils' has no attribute 'get_train_dataset'. The code contains from cs_gan import utils, and checking utils.py shows it does define the function get_train_dataset. How can this be resolved? Thank you.

Floor 2 | 2020-04-15 18:52:52

天天进步啊


Quoted reply:
Floor 2: Originally posted by HZSte1204 at 2020-04-15 18:52:52
Hello, when running the code I get AttributeError: module 'cs_gan.utils' has no attribute 'get_train_dataset'. The code contains from cs_gan import utils, and utils.py does have the function get_train_dataset. How can ...

This is clearly Python code. Post your code first, then describe the error.


Zeolites and molecular dynamics simulation; feel free to ask questions via my Zhihu page https://www.zhihu.com/people/rao777
Floor 3 | 2020-04-15 20:41:22

HZSte1204


Quoted reply:
Floor 3: Originally posted by 天天进步啊 at 2020-04-15 20:41:22
This is clearly Python code. Post your code first, then describe the error ...

Hello, this is the open-source code for Yan Wu, Mihaela Rosca, Timothy Lillicrap, Deep Compressed Sensing, ICML 2019 ("This is the example code for the following ICML 2019 paper. If you use the code here please cite this paper"), available at https://github.com/deepmind/deepmind-research/tree/master/cs_gan
Below is the error message, followed by the code of main_cs.py (the file that raises the error) and utils.py:
  File "D:\deep compressed sensing\main_cs.py", line 76, in main
    images = utils.get_train_dataset(data_processor, FLAGS.dataset,

AttributeError: module 'cs_gan.utils' has no attribute 'get_train_dataset'
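
A mismatch like this, where get_train_dataset exists in utils.py on disk but is missing from the imported module, usually means Python has loaded a different copy of cs_gan.utils, for example an older installed package or stale bytecode. A minimal check along those lines, assuming the deepmind-research checkout linked above (these lines are not part of the repo):

# Hypothetical diagnostic, not part of the repo: confirm which utils module was imported.
from cs_gan import utils

print(utils.__file__)                        # path of the utils module Python actually loaded
print(hasattr(utils, 'get_train_dataset'))   # False means the loaded copy is not the file shown below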

#main_cs.py
"""Training script."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os

from absl import app
from absl import flags
from absl import logging

import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp

from cs_gan import cs
from cs_gan import file_utils
from cs_gan import utils

tfd = tfp.distributions

flags.DEFINE_string(
    'mode', 'recons', 'Model mode.')
flags.DEFINE_integer(
    'num_training_iterations', 10000000,
    'Number of training iterations.')
flags.DEFINE_integer(
    'batch_size', 64, 'Training batch size.')
flags.DEFINE_integer(
    'num_measurements', 25, 'The number of measurements')
flags.DEFINE_integer(
    'num_latents', 100, 'The number of latents')
flags.DEFINE_integer(
    'num_z_iters', 3, 'The number of latent optimisation steps.')
flags.DEFINE_float(
    'z_step_size', 0.01, 'Step size for latent optimisation.')
flags.DEFINE_string(
    'z_project_method', 'norm', 'The method to project z.')
flags.DEFINE_integer(
    'summary_every_step', 1000,
    'The interval at which to log debug ops.')
flags.DEFINE_integer(
    'export_every', 10,
    'The interval at which to export samples.')
flags.DEFINE_string(
    'dataset', 'mnist', 'The dataset used for learning (cifar|mnist).')
flags.DEFINE_float('learning_rate', 1e-4, 'Learning rate.')
flags.DEFINE_string(
    'output_dir', '/tmp/cs_gan/cs', 'Location where to save output files.')


FLAGS = flags.FLAGS

# Log info level (for Hooks).
tf.logging.set_verbosity(tf.logging.INFO)


def main(argv):
  del argv

  utils.make_output_dir(FLAGS.output_dir)
  data_processor = utils.DataProcessor()
  images = utils.get_train_dataset(data_processor, FLAGS.dataset,
                                   FLAGS.batch_size)

  logging.info('Learning rate: %f', FLAGS.learning_rate)

  # Construct optimizers.
  optimizer = tf.train.AdamOptimizer(FLAGS.learning_rate)

  # Create the networks and models.
  generator = utils.get_generator(FLAGS.dataset)
  metric_net = utils.get_metric_net(FLAGS.dataset, FLAGS.num_measurements)
  model = cs.CS(metric_net, generator,
                FLAGS.num_z_iters, FLAGS.z_step_size, FLAGS.z_project_method)
  prior = utils.make_prior(FLAGS.num_latents)
  generator_inputs = prior.sample(FLAGS.batch_size)

  model_output = model.connect(images, generator_inputs)
  optimization_components = model_output.optimization_components
  debug_ops = model_output.debug_ops
  reconstructions, _ = utils.optimise_and_sample(
      generator_inputs, model, images, is_training=False)

  global_step = tf.train.get_or_create_global_step()
  update_op = optimizer.minimize(
      optimization_components.loss,
      var_list=optimization_components.vars,
      global_step=global_step)

  sample_exporter = file_utils.FileExporter(
      os.path.join(FLAGS.output_dir, 'reconstructions'))

  # Hooks.
  debug_ops['it'] = global_step
  # Abort training on Nans.
  nan_hook = tf.train.NanTensorHook(optimization_components.loss)
  # Step counter.
  step_counter_hook = tf.train.StepCounterHook()

  checkpoint_saver_hook = tf.train.CheckpointSaverHook(
      checkpoint_dir=utils.get_ckpt_dir(FLAGS.output_dir), save_secs=10 * 60)

  loss_summary_saver_hook = tf.train.SummarySaverHook(
      save_steps=FLAGS.summary_every_step,
      output_dir=os.path.join(FLAGS.output_dir, 'summaries'),
      summary_op=utils.get_summaries(debug_ops))

  hooks = [checkpoint_saver_hook, nan_hook, step_counter_hook,
           loss_summary_saver_hook]

  # Start training.
  with tf.train.MonitoredSession(hooks=hooks) as sess:
    logging.info('starting training')

    for i in range(FLAGS.num_training_iterations):
      sess.run(update_op)

      if i % FLAGS.export_every == 0:
        reconstructions_np, data_np = sess.run([reconstructions, images])
        # Create an object which gets data and does the processing.
        data_np = data_processor.postprocess(data_np)
        reconstructions_np = data_processor.postprocess(reconstructions_np)
        sample_exporter.save(reconstructions_np, 'reconstructions')
        sample_exporter.save(data_np, 'data')


if __name__ == '__main__':
  app.run(main)



#utils.py
"""Tools for latent optimisation."""
import collections
import os

from absl import logging
import numpy as np
import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp

from cs_gan import nets

tfd = tfp.distributions


class ModelOutputs(
    collections.namedtuple('AdversarialModelOutputs',
                           ['optimization_components', 'debug_ops'])):
  """All the information produced by the adversarial module.
  Fields:
    * `optimization_components`: A dictionary. Each entry in this dictionary
      corresponds to a module to train using their own optimizer. The keys are
      names of the components, and the values are `common.OptimizationComponent`
      instances. The keys of this dict can be made keys of the configuration
      used by the main train loop, to define the configuration of the
      optimization details for each module.
    * `debug_ops`: A dictionary, from string to a scalar `tf.Tensor`. Quantities
      used for tracking training.
  """


class OptimizationComponent(
    collections.namedtuple('OptimizationComponent', ['loss', 'vars'])):
  """Information needed by the optimizer to train modules.
  Usage:
      `optimizer.minimize(
          opt_compoment.loss, var_list=opt_component.vars)`
  Fields:
    * `loss`: A `tf.Tensor` the loss of the module.
    * `vars`: A list of variables, the ones which will be used to minimize the
      loss.
  """


def cross_entropy_loss(logits, expected):
  """The cross entropy classification loss between logits and expected values.
  The loss proposed by the original GAN paper: https://arxiv.org/abs/1406.2661.
  Args:
    logits: a `tf.Tensor`, the model produced logits.
    expected: a `tf.Tensor`, the expected output.
  Returns:
    A scalar `tf.Tensor`, the average loss obtained on the given inputs.
  Raises:
    ValueError: if the logits do not have shape [batch_size, 2].
  """

  num_logits = logits.get_shape()[1]
  if num_logits != 2:
    raise ValueError(('Invalid number of logits for cross_entropy_loss! '
                      'cross_entropy_loss supports only 2 output logits!'))
  return tf.reduce_mean(
      tf.nn.sparse_softmax_cross_entropy_with_logits(
          logits=logits, labels=expected))


def optimise_and_sample(init_z, module, data, is_training):
  """Optimising generator latent variables and sample."""

  if module.num_z_iters == 0:
    z_final = init_z
  else:
    init_loop_vars = (0, _project_z(init_z, module.z_project_method))
    loop_cond = lambda i, _: i < module.num_z_iters
    def loop_body(i, z):
      loop_samples = module.generator(z, is_training)
      gen_loss = module.gen_loss_fn(data, loop_samples)
      z_grad = tf.gradients(gen_loss, z)[0]
      z -= module.z_step_size * z_grad
      z = _project_z(z, module.z_project_method)
      return i + 1, z

    # Use the following static loop for debugging
    # z = init_z
    # for _ in xrange(num_z_iters):
    #   _, z = loop_body(0, z)
    # z_final = z

    _, z_final = tf.while_loop(loop_cond,
                               loop_body,
                               init_loop_vars)

  return module.generator(z_final, is_training), z_final


def get_optimisation_cost(initial_z, optimised_z):
  optimisation_cost = tf.reduce_mean(
      tf.reduce_sum((optimised_z - initial_z)**2, -1))
  return optimisation_cost


def _project_z(z, project_method='clip'):
  """To be used for projected gradient descent over z."""
  if project_method == 'norm':
    z_p = tf.nn.l2_normalize(z, axis=-1)
  elif project_method == 'clip':
    z_p = tf.clip_by_value(z, -1, 1)
  else:
    raise ValueError('Unknown project_method: {}'.format(project_method))
  return z_p


class DataProcessor(object):

  def preprocess(self, x):
    return x * 2 - 1

  def postprocess(self, x):
    return (x + 1) / 2.


def _get_np_data(data_processor, dataset, split='train'):
  """Get the dataset as numpy arrays."""
  index = 0 if split == 'train' else 1
  if dataset == 'mnist':
    # Construct the dataset.
    x, _ = tf.keras.datasets.mnist.load_data()[index]
    # Note: tf dataset is binary so we convert it to float.
    x = x.astype(np.float32)
    x = x / 255.
    x = x.reshape((-1, 28, 28, 1))

  if dataset == 'cifar':
    x, _ = tf.keras.datasets.cifar10.load_data()[index]
    x = x.astype(np.float32)
    x = x / 255.

  if data_processor:
    # Normalize data if a processor is given.
    x = data_processor.preprocess(x)
  return x


def make_output_dir(output_dir):
  logging.info('Creating output dir %s', output_dir)
  if not tf.gfile.IsDirectory(output_dir):
    tf.gfile.MakeDirs(output_dir)


def get_ckpt_dir(output_dir):
  ckpt_dir = os.path.join(output_dir, 'ckpt')
  if not tf.gfile.IsDirectory(ckpt_dir):
    tf.gfile.MakeDirs(ckpt_dir)
  return ckpt_dir


def get_real_data_for_eval(num_eval_samples, dataset, split='valid'):
  data = _get_np_data(data_processor=None, dataset=dataset, split=split)
  data = data[:num_eval_samples]
  return tf.constant(data)


def get_summaries(ops):
  summaries = []
  for name, op in ops.items():
    # Ensure to log the value ops before writing them in the summary.
    # We do this instead of a hook to ensure IS/FID are never computed twice.
    print_op = tf.print(name, [op], output_stream=tf.logging.info)
    with tf.control_dependencies([print_op]):
      summary = tf.summary.scalar(name, op)
      summaries.append(summary)
  return summaries


def get_train_dataset(data_processor, dataset, batch_size):
  """Creates the training data tensors."""
  x_train = _get_np_data(data_processor, dataset, split='train')
  # Create the TF dataset.
  dataset = tf.data.Dataset.from_tensor_slices(x_train)

  # Shuffle and repeat the dataset for training.
  # This is required because we want to do multiple passes through the entire
  # dataset when training.
  dataset = dataset.shuffle(100000).repeat()

  # Batch the data and return the data batch.
  one_shot_iterator = dataset.batch(batch_size).make_one_shot_iterator()
  data_batch = one_shot_iterator.get_next()
  return data_batch


def get_generator(dataset):
  if dataset == 'mnist':
    return nets.MLPGeneratorNet()
  if dataset == 'cifar':
    return nets.SNGenNet()


def get_metric_net(dataset, num_outputs=2):
  if dataset == 'mnist':
    return nets.MLPMetricNet(num_outputs)
  if dataset == 'cifar':
    return nets.SNMetricNet(num_outputs)


def make_prior(num_latents):
  # Zero mean, unit variance prior.
  prior_mean = tf.zeros(shape=(num_latents), dtype=tf.float32)
  prior_scale = tf.ones(shape=(num_latents), dtype=tf.float32)

  return tfd.Normal(loc=prior_mean, scale=prior_scale)
Floor 4 | 2020-04-18 12:23:05

天天进步啊


Quoted reply:
Floor 4: Originally posted by HZSte1204 at 2020-04-18 12:23:05
Hello, this is the open-source code for Yan Wu, Mihaela Rosca, Timothy Lillicrap, Deep Compressed Sensing, ICML 2019 ("This is the example code for the following ICML 2019 paper. If you use the code here please cit ...

Try dir(utils).
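
For instance, a hypothetical session along the lines of that suggestion (dir, importlib and hasattr are standard Python; nothing here comes from the cs_gan repo itself):

# Hypothetical interactive check, assuming the same checkout as above.
from cs_gan import utils

print([name for name in dir(utils) if not name.startswith('_')])   # public names the imported module exposes

import importlib
importlib.reload(utils)                      # pick up recent edits to utils.py without restarting Python
print(hasattr(utils, 'get_train_dataset'))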
Zeolites and molecular dynamics simulation; feel free to ask questions via my Zhihu page https://www.zhihu.com/people/rao777
Floor 5 | 2020-04-18 17:14:42

HZSte1204


Quoted reply:
Floor 5: Originally posted by 天天进步啊 at 2020-04-18 17:14:42
Try dir(utils) ...

Hello, after running dir(utils) I can see that utils exposes attributes such as _get_np_data and _get_ckpt_dir, but there is no get_train_dataset, even though utils.py does contain:

def get_train_dataset(data_processor, dataset, batch_size):
  """Creates the training data tensors."""
  x_train = _get_np_data(data_processor, dataset, split='train')
  # Create the TF dataset.
  dataset = tf.data.Dataset.from_tensor_slices(x_train)

The same error is still raised. What should I do? Thank you for your help.
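
If dir(utils) lists the module's other helpers but not get_train_dataset, the interpreter is almost certainly importing a different or outdated copy of the package rather than the utils.py shown above. A sketch of the usual checks; the paths and the package name are assumptions, and the thread does not say which fix ultimately worked:

# Hypothetical checks for a stale or shadowing copy of cs_gan.utils.
import sys
import cs_gan

print(cs_gan.__path__)   # should point into the local deepmind-research checkout, not site-packages
print(sys.path[:5])      # the directory containing cs_gan/ must come before any installed copy

# Typical remedies: uninstall any older installed copy (pip uninstall cs_gan, if such a package exists),
# delete cs_gan/__pycache__ so stale .pyc files are regenerated, and run the script from the directory
# that contains the cs_gan/ folder.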
Floor 6 | 2020-04-20 23:09:45

HZSte1204


Quoted reply:
Floor 6: Originally posted by HZSte1204 at 2020-04-20 23:09:45
Hello, after running dir(utils) I can see that utils exposes attributes such as _get_np_data and _get_ckpt_dir, but there is no get_train_dataset, even though utils.py does contain def get_train_dataset(data_processor, dataset, batch_size): ...

Solved.

Floor 7 | 2020-04-22 15:46:16