
Rails check if yield :area is defined in content_for

I want to do conditional rendering at the layout level based on whether the actual template has defined content_for(:an__area). Any idea how to get this done?
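For reference, a minimal ERB sketch of one common approach, the content_for? helper (the fallback partial name is purely hypothetical):

<% if content_for?(:an__area) %>
  <%# the template called content_for(:an__area), so render it %>
  <%= yield :an__area %>
<% else %>
  <%# hypothetical fallback when the template defined nothing %>
  <%= render "shared/default_area" %>
<% end %>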


Convert UTC to local time in Rails 3

I'm having trouble converting a UTC Time or TimeWithZone to local time in Rails 3.

Say moment is some Time variable in UTC (e.g. moment = Time.now.utc). How do I convert moment to my time zone, taking care of DST (i.e. using EST/EDT)?

More precisely, I'd like to print out "Monday March 14, 9 AM" if the time corresponds to 9 AM EDT this morning, and "Monday March 7, 9 AM" if the time was 9 AM EST last Monday.

Hopefully there's another way?

Edit: I first thought that "EDT" should be a recognized time zone, but "EDT" is not an actual time zone, more like the state of a time zone. For instance, it would not make any sense to ask for Time.utc(2011,1,1).in_time_zone("EDT"). It is a bit confusing, as "EST" is an actual time zone, used in a few places that do not observe daylight saving time and are UTC-5 year-round.
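For reference, a minimal sketch of the usual ActiveSupport approach; in_time_zone picks EST or EDT for the given date automatically, and the strftime format is just one way to get the "Monday March 14, 9 AM" shape:

moment = Time.now.utc
local  = moment.in_time_zone("Eastern Time (US & Canada)")  # DST-aware: EST or EDT as appropriate
puts local.strftime("%A %B %-d, %-I %p")                    # e.g. "Monday March 14, 9 AM"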


Are there console commands to look at what's in the queue and to clear the queue in Sidekiq?

I'm used to delayed_job's method of going into the console to see what's in the queue, and the ease of clearing the queue when needed. Are there similar commands in Sidekiq for this? Thanks!
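For reference, a minimal console sketch using Sidekiq's API module (the queue name "default" is only an example):

require 'sidekiq/api'

queue = Sidekiq::Queue.new("default")
queue.size                                   # how many jobs are waiting
queue.each { |job| puts job.args.inspect }   # look at what's in the queue
queue.clear                                  # remove everything from the queue

Sidekiq::RetrySet.new.clear                  # optionally clear retries
Sidekiq::ScheduledSet.new.clear              # and scheduled jobs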


TensorFlow VocabularyProcessor

I am following the WildML blog on text classification using TensorFlow. I am not able to understand the purpose of max_document_length in the code statement:

vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)

Also, how can I extract the vocabulary from the vocab_processor?
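For reference, a minimal sketch of how the processor is typically used: max_document_length is the fixed length to which every document is padded or truncated, and the learned word-to-id table lives on the vocabulary_ attribute (reading its _mapping dict touches an internal attribute, so treat that part as an assumption):

from tensorflow.contrib import learn

max_document_length = 4
vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)

docs = ["i like cats", "cats like fish and cats"]
# Each document becomes a vector of word ids, padded with 0 or
# truncated so that its length is exactly max_document_length.
ids = list(vocab_processor.fit_transform(docs))

# Word -> id table built during fit_transform (internal attribute).
vocab_dict = vocab_processor.vocabulary_._mapping
print(ids)
print(vocab_dict)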


What does arg_scope actually do?

I am a beginner in neural nets and TensorFlow, and I am trying to understand the role of arg_scope.

It seems to me that it is a way to put together a dictionary of "things you want to do" to a certain layer with certain variables. Please correct me if I am wrong. How would you explain to a beginner exactly what it is for?
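For reference, a minimal tf.contrib.slim sketch (the placeholder shape and layer sizes are made up): arg_scope pre-fills default keyword arguments for the listed ops, and any individual call can still override them.

import tensorflow as tf

slim = tf.contrib.slim
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])  # hypothetical input

# Every slim.conv2d call inside the scope gets these defaults...
with slim.arg_scope([slim.conv2d],
                    activation_fn=tf.nn.relu,
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 32, [3, 3], scope='conv1')                   # uses the defaults
    net = slim.conv2d(net, 64, [3, 3], scope='conv2')                      # uses the defaults
    net = slim.conv2d(net, 64, [3, 3], activation_fn=None, scope='conv3')  # ...but can override them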


Running TensorFlow on a Slurm Cluster?

I have access to a computing cluster, specifically one node with two 12-core CPUs, which is running the Slurm Workload Manager.

I would like to run TensorFlow on that system, but unfortunately I was not able to find any information about how to do this, or whether it is even possible. I am new to this, but as far as I understand it, I would have to run TensorFlow by creating a Slurm job and cannot directly execute python/tensorflow via ssh.

Does anyone have an idea, a tutorial, or any kind of source on this topic?
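For reference, a minimal batch-script sketch (module names, paths, and resource numbers are guesses and depend entirely on the cluster's setup): TensorFlow itself needs nothing Slurm-specific for single-node CPU use; the job script just runs the Python program on the allocated node.

#!/bin/bash
#SBATCH --job-name=tf-test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24       # the two 12-core CPUs
#SBATCH --time=01:00:00
#SBATCH --output=tf-test-%j.log

# Hypothetical environment setup; use whatever the cluster provides
# (environment modules, a virtualenv, conda, ...).
module load python/3.5
source $HOME/tf-env/bin/activate

python my_tensorflow_script.py

Submitting with sbatch and checking with squeue -u $USER should then behave like any other Slurm job, with stdout landing in the --output file.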


TensorFlow Variables and Constants

I am new to TensorFlow, and I am not able to understand the difference between a variable and a constant. I get the idea that we use variables for equations and constants for direct values, but why does only Code #1 work, and not Code #2 and #3? Also, please explain in which cases we have to run our graph first (a) and then our variable (b), i.e.

 (a) session.run(model)
 (b) print(session.run(y))

and in which cases I can directly execute this command, i.e.

print(session.run(y))

Code #1 :

x = tf.constant(35, name='x')
y = tf.Variable(x + 5, name='y')

model = tf.global_variables_initializer() 

with tf.Session() as session:
    session.run(model)
    print(session.run(y))

Code #2 :

x = tf.Variable(35, name='x')
y = tf.Variable(x + 5, name='y')

model = tf.global_variables_initializer() 

with tf.Session() as session:
    session.run(model)
    print(session.run(y))

Code #3 :

x = tf.constant(35, name='x')
y = tf.constant(x + 5, name='y')

model = tf.global_variables_initializer() 

with tf.Session() as session:
    session.run(model)
    print(session.run(y))
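
For reference, a minimal TF 1.x sketch of how Code #2 is commonly repaired: y's initializer reads x, which the global initializer has not necessarily assigned yet, and initialized_value() makes that dependency explicit. Running session.run(model), case (a), is needed before evaluating any tf.Variable in case (b); a graph built only from constants and plain ops (e.g. y = x + 5 with x a constant) can skip it, and Code #3 fails earlier because tf.constant expects a concrete value, not a tensor.

import tensorflow as tf

x = tf.Variable(35, name='x')
# Initialize y from x's initialized value so the global initializer
# never reads x before x has been assigned.
y = tf.Variable(x.initialized_value() + 5, name='y')

model = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(model)       # (a) initialize the variables first
    print(session.run(y))    # (b) then evaluate y -> 40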

Warning: Please use alternatives such as official/mnist/dataset.py from tensorflow/models

I'm doing a simple tutorial using TensorFlow, which I have just installed, so it should be up to date. First I load the MNIST data using the following code:

import numpy as np
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
train_data = mnist.train.images  # Returns np.array
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
eval_data = mnist.test.images  # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)

But when I run it I get the following warning:

WARNING:tensorflow:From C:\Users\user\PycharmProjects\TensorFlowRNN\venv\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
WARNING:tensorflow:From C:/Users/user/PycharmProjects/TensorFlowRNN/sample.py:5: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From C:\Users\user\PycharmProjects\TensorFlowRNN\venv\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From C:\Users\user\PycharmProjects\TensorFlowRNN\venv\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From C:\Users\user\PycharmProjects\TensorFlowRNN\venv\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From C:\Users\user\PycharmProjects\TensorFlowRNN\venv\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From C:\Users\user\PycharmProjects\TensorFlowRNN\venv\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.

I have used the line os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3', which should avoid the warnings, and I have tried other alternatives to obtain MNIST, but the same warnings always appear. Can someone help me figure out why this is happening?

PS: I am using Python 3.6 on Windows 10, in case it helps.
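For reference: TF_CPP_MIN_LOG_LEVEL only silences the C++ backend, while these deprecation messages come from tf.logging, so they are unaffected. A minimal sketch of two alternatives, lowering the tf.logging verbosity and loading MNIST through tf.keras instead of the deprecated contrib reader (note the Keras loader returns 28x28 images and integer labels rather than flattened one-hot data):

import numpy as np
import tensorflow as tf

# The deprecation messages are Python-level, so adjust tf.logging
# rather than TF_CPP_MIN_LOG_LEVEL.
tf.logging.set_verbosity(tf.logging.ERROR)

# Loads MNIST without going through the deprecated contrib.learn reader.
(train_data, train_labels), (eval_data, eval_labels) = tf.keras.datasets.mnist.load_data()

train_data = train_data.reshape(-1, 784).astype(np.float32) / 255.0
eval_data = eval_data.reshape(-1, 784).astype(np.float32) / 255.0
train_labels = train_labels.astype(np.int32)
eval_labels = eval_labels.astype(np.int32)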


What are c_state and m_state in Tensorflow LSTM?

TensorFlow r0.12's documentation for tf.nn.rnn_cell.LSTMCell describes this call:

tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)

where state is as follows:

state: if state_is_tuple is False, this must be a state Tensor, 2-D, batch x state_size. If state_is_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c_state and m_state.

What are c_state and m_state, and how do they fit into LSTMs? I cannot find a reference to them anywhere in the documentation.

Here is a link to that page in the documentation.
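For reference, c_state is the LSTM's internal cell ("memory") state and m_state is the hidden/output state (called h in later versions of the docs); with state_is_tuple=True they are carried around as an LSTMStateTuple. A minimal sketch on the 1.x API (sizes are made up):

import tensorflow as tf

cell = tf.nn.rnn_cell.LSTMCell(num_units=128, state_is_tuple=True)
state = cell.zero_state(batch_size=32, dtype=tf.float32)  # LSTMStateTuple(c=..., h=...)

inputs = tf.placeholder(tf.float32, [32, 100])  # hypothetical batch of inputs
output, new_state = cell(inputs, state)
# new_state.c is c_state (the cell memory),
# new_state.h is m_state (the output); output is the same tensor as new_state.h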


TensorFlow on Windows: “not a supported wheel on this platform” error

I was happy to learn that TensorFlow has been made available for Windows, so we don't have to use Docker.

I tried to install it as per the instructions, but I get this error:

pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl

tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.

What does that error mean?

I am running the latest version of Python:

python --version
Python 3.5.2
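For reference, one frequent cause of this error (not the only one) is that the cp35m-win_amd64 wheel only matches a 64-bit CPython 3.5, so a 32-bit interpreter or a different minor version is rejected. A quick check of what the interpreter running pip actually is:

python -c "import struct, sys; print(sys.version); print(struct.calcsize('P') * 8, 'bit')"

If that reports 32 bit, or a version other than 3.5, pip will refuse this particular wheel exactly as shown above.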