
Import error: Dense, LSTM, Embedding from keras.models


I cannot run the code from this neural language model tutorial. Although I have installed Keras and TensorFlow, it seems I cannot import the relevant packages from keras.models.

a) Keras installed

b) TensorFlow installed

c) Spyder error message

I also tried running the import commands in the Windows console. The error message there says "Your CPU supports instructions that this TensorFlow binary was not compiled to use".

d) Error message in the Windows console

Background information: I am using Spyder 3.2.3 and have Python 3.6.0 installed.

Can you help me find out what the problem is?

Thank you very much!

Instead of linking to the tutorial, could you provide a more concrete example of where you run into trouble? The question currently seems too broad; it would help to narrow it down to the error message, to give more of a clue about where you think things go wrong. –

The correct statements for importing these modules can easily be found in the "Complete Example" section of the tutorial you linked to... – desertnaut

Answer:

Dense is not a model. Dense is a layer, and it lives in keras.layers:

from keras.layers import Dense,LSTM,Embedding 
from keras.models import Sequential,Model 

Usually I just import everything at once and forget about it:

from keras.layers import * 
from keras.models import * 
import keras.backend as K #for some advanced functions  
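As a quick sanity check of these imports, here is a minimal sketch of a tutorial-style language model built with the tf.keras API (which mirrors standalone Keras); the vocabulary size, layer widths, and sequence length are placeholder values, not the tutorial's:

```python
import numpy as np
from tensorflow.keras.layers import Dense, LSTM, Embedding
from tensorflow.keras.models import Sequential

# Toy language model: vocabulary of 50 words, sequences of length 10.
model = Sequential([
    Embedding(input_dim=50, output_dim=8),  # word ids -> dense vectors
    LSTM(16),                               # sequence -> fixed-size state
    Dense(50, activation="softmax"),        # next-word probabilities
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

# One dummy batch just to confirm the model runs end to end.
x = np.random.randint(0, 50, size=(2, 10))
probs = model.predict(x, verbose=0)
print(probs.shape)  # (2, 50)
```

If these imports succeed, the installation is fine and the original error was only the import path (keras.models vs keras.layers).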

TensorFlow variable initialization error


I am getting a variable initialization error in TensorFlow; can someone help me? I am using Python 3.5.4 and TF 1.2.1 on a GPU. If I remove the last line from the code then it works fine; there seems to be some mismatch between TensorFlow and the Python libraries.

import numpy as np 
import tensorflow as tf 

with tf.Session() as sess: 
    init = tf.global_variables_initializer() 
    sess.run(init) 

    in_size = 100             
    h1_size = 10              

    x = tf.placeholder(tf.float32,(None,in_size))     
    w = tf.Variable(tf.random_normal([in_size,h1_size])) 
    b = tf.Variable(tf.ones([h1_size])) 

    xw = tf.matmul(x,w) 
    z = tf.add(xw,b) 

    a = tf.nn.relu(z) 

    yhat = sess.run(a,feed_dict={x:np.random.random([100000,in_size])}) 



Error:- 

FailedPreconditionError: Attempting to use uninitialized value Variable_12 
    [[Node: Variable_12/read = Identity[T=DT_FLOAT, _class=["loc:@Variable_12"], _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_12)]] 

Caused by op 'Variable_12/read', defined at: 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 245, in <module> 
    main() 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 241, in main 
    kernel.start() 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 477, in start 
    ioloop.IOLoop.instance().start() 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start 
    super(ZMQIOLoop, self).start() 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\tornado\ioloop.py", line 888, in start 
    handler_func(fd_obj, events) 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper 
    return fn(*args, **kwargs) 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events 
    self._handle_recv() 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv 
    self._run_callback(callback, msg) 
    File "C:\Users\Sachin-PC\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback 
    callback(*args, **kwargs) 
Answer:

You need to define your variables before initializing them. The following code should work:

in_size = 100             
h1_size = 10  

x = tf.placeholder(tf.float32,(None,in_size))     
w = tf.Variable(tf.random_normal([in_size,h1_size])) 
b = tf.Variable(tf.ones([h1_size])) 

xw = tf.matmul(x,w) 
z = tf.add(xw,b) 
a = tf.nn.relu(z) 

init = tf.global_variables_initializer() 

with tf.Session() as sess:  
    sess.run(init)             
    yhat = sess.run(a,feed_dict={x:np.random.random([100000,in_size])}) 
It works. Thank you very much, Matt! :) –

You're very welcome, Sachin. Would you accept the answer by clicking the check mark on the left? – MatthewScarpino

Sure, Matt, done. I wasn't aware of that. –


How to get the output of a dense layer as a numpy array using Keras with the TensorFlow backend?


I am new to Keras and TensorFlow. I am working on a face-recognition project using deep learning. I use this code (with a softmax output layer) to get the class label of an input subject, and I get 97.5% accuracy on my custom dataset of 100 classes.

But now I am interested in a feature-vector representation, so I want to pass a test image through the network and extract the output of the activated dense layer just before the softmax (last) layer. I referred to the Keras documentation, but nothing seemed to work. Can anyone help me extract the output of the dense-layer activation and save it as a numpy array? Thanks in advance.

class Faces: 
    @staticmethod 
    def build(width, height, depth, classes, weightsPath=None): 
     # initialize the model 
     model = Sequential() 
     model.add(Conv2D(100, (5, 5), padding="same",input_shape=(depth, height, width), data_format="channels_first")) 
     model.add(Activation("relu")) 
     model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2),data_format="channels_first")) 

     model.add(Conv2D(100, (5, 5), padding="same")) 
     model.add(Activation("relu")) 
     model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), data_format="channels_first")) 

     # 3 set of CONV => RELU => POOL 
     model.add(Conv2D(100, (5, 5), padding="same")) 
     model.add(Activation("relu")) 
     model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2),data_format="channels_first")) 

     # 4 set of CONV => RELU => POOL 
     model.add(Conv2D(50, (5, 5), padding="same")) 
     model.add(Activation("relu")) 
     model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2),data_format="channels_first")) 

     # 5 set of CONV => RELU => POOL 
     model.add(Conv2D(50, (5, 5), padding="same")) 
     model.add(Activation("relu")) 
     model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), data_format="channels_first")) 

     # 6 set of CONV => RELU => POOL 
     model.add(Conv2D(50, (5, 5), padding="same")) 
     model.add(Activation("relu")) 
     model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), data_format="channels_first")) 

     # set of FC => RELU layers 
     model.add(Flatten()) 
     #model.add(Dense(classes)) 
     #model.add(Activation("relu")) 

     # softmax classifier 
     model.add(Dense(classes)) 
     model.add(Activation("softmax")) 

     return model 

ap = argparse.ArgumentParser() 
ap.add_argument("-l", "--load-model", type=int, default=-1, 
    help="(optional) whether or not pre-trained model should be loaded") 
ap.add_argument("-s", "--save-model", type=int, default=-1, 
    help="(optional) whether or not model weights should be saved") 
ap.add_argument("-w", "--weights", type=str, 
    help="(optional) path to weights file") 
args = vars(ap.parse_args()) 


path = r'C:\Users\Project\FaceGallery' 
image_paths = [os.path.join(path, f) for f in os.listdir(path)] 
images = [] 
labels = [] 
name_map = {} 
demo = {} 
nbr = 0 
j = 0 
for image_path in image_paths: 
    image_pil = Image.open(image_path).convert('L') 
    image = np.array(image_pil, 'uint8') 
    cv2.imshow("Image",image) 
    cv2.waitKey(5) 
    name = image_path.split("\\")[4][0:5] 
    print(name) 
    # Get the label of the image 
    if name in demo.keys(): 
     pass 
    else: 
     demo[name] = j 
     j = j+1 
    nbr =demo[name] 

    name_map[nbr] = name 
    images.append(image) 
    labels.append(nbr) 
print(name_map) 
# Training and testing data split ratio = 60:40 
(trainData, testData, trainLabels, testLabels) = train_test_split(images, labels, test_size=0.4) 

trainLabels = np_utils.to_categorical(trainLabels, 100) 
testLabels = np_utils.to_categorical(testLabels, 100) 

trainData = np.asarray(trainData) 
testData = np.asarray(testData) 

trainData = trainData[:, np.newaxis, :, :]/255.0 
testData = testData[:, np.newaxis, :, :]/255.0 

opt = SGD(lr=0.01) 
model = Faces.build(width=200, height=200, depth=1, classes=100, 
        weightsPath=args["weights"] if args["load_model"] > 0 else None) 

model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) 
if args["load_model"] < 0: 
    model.fit(trainData, trainLabels, batch_size=10, epochs=300) 
(loss, accuracy) = model.evaluate(testData, testLabels, batch_size=100, verbose=1) 
print("Accuracy: {:.2f}%".format(accuracy * 100)) 
if args["save_model"] > 0: 
    model.save_weights(args["weights"], overwrite=True) 

for i in np.arange(0, len(testLabels)): 
    probs = model.predict(testData[np.newaxis, i]) 
    prediction = probs.argmax(axis=1) 
    image = (testData[i][0] * 255).astype("uint8") 
    name = "Subject " + str(prediction[0]) 
    if prediction[0] in name_map: 
     name = name_map[prediction[0]] 
    cv2.putText(image, name, (5, 20), cv2.FONT_HERSHEY_PLAIN, 1.3, (255, 255, 255), 2) 
    print("Predicted: {}, Actual: {}".format(prediction[0], np.argmax(testLabels[i]))) 
    cv2.imshow("Testing Face", image) 
    cv2.waitKey(1000) 
Answer:

See the Keras FAQ at https://keras.io/getting-started/faq/, "How can I obtain the output of an intermediate layer?"

You need to name the layer whose output you want, by adding a "name" argument to its definition, e.g. model.add(Dense(xx, name='my_dense')).
Then you can define an intermediate model and run it by doing something like this:

m2 = Model(inputs=model.input, outputs=model.get_layer('my_dense').output) 
Y = m2.predict(X) 
I get output like this: *Tensor("dense_1/BiasAdd:0", shape=(?, 442), dtype=float32)*, but I need to print a numpy array, which is a feature representation of the input image. – TheBiometricsGuy

In the example above, X needs to be a numpy array (i.e. real data). Keras's model.predict() function runs that data through the TensorFlow graph and returns a numpy array. The "Tensor" type is an internal variable used for building the graph; you should never see it as the output of model.predict(). – bivouac0


Error installing TensorFlow for Go under Ubuntu 17.10


I installed TensorFlow for Go following the steps below, and no error message was shown.

TF_TYPE="cpu" # Change to "gpu" for GPU support 
TARGET_DIRECTORY='/usr/local' 
curl -L \
    "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.4.0.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz

sudo ldconfig 
go get github.com/tensorflow/tensorflow/tensorflow/go 

But the test fails with go test github.com/tensorflow/tensorflow/tensorflow/go, with this error message:

2017-11-18 06:25:59.874418: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX 
2017-11-18 06:25:59.877032: F tensorflow/core/framework/tensor.cc:822] Unexpected type: 23 
SIGABRT: abort 
PC=0x7f6c4afd70bb m=7 sigcode=18446744073709551610 
signal arrived during cgo execution 

goroutine 24 [syscall, locked to thread]: 
runtime.cgocall(0x656610, 0xc4200439c8, 0xc4200439f0) 
    /home/qcg/share/go/src/runtime/cgocall.go:132 +0xe4 fp=0xc420043998 sp=0xc420043958 pc=0x405434 
github.com/tensorflow/tensorflow/tensorflow/go._Cfunc_TF_SetAttrTensor(0x7f6c24003c10, 0x7f6c2400e060, 0x7f6c2400e250, 0x7f6c2400e1f0) 
    github.com/tensorflow/tensorflow/tensorflow/go/_test/_obj_test/_cgo_gotypes.go:890 +0x45 fp=0xc4200439c8 sp=0xc420043998 pc=0x52cb25 
github.com/tensorflow/tensorflow/tensorflow/go.setAttr.func18(0x7f6c24003c10, 0x7f6c2400e060, 0x7f6c2400e250, 0x7f6c2400e1f0) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/graph.go:273 +0xec fp=0xc420043a00 sp=0xc4200439c8 pc=0x538d0c 
github.com/tensorflow/tensorflow/tensorflow/go.setAttr(0x7f6c24003c10, 0xc42000e0c0, 0x6ef103, 0x5, 0x6b60c0, 0xc4200e2440, 0x0, 0x0) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/graph.go:273 +0x11b9 fp=0xc420043c00 sp=0xc420043a00 pc=0x52f759 
github.com/tensorflow/tensorflow/tensorflow/go.(*Graph).AddOperation(0xc42000e080, 0x6eef64, 0x5, 0xc42001a8d0, 0x6, 0x0, 0x0, 0x0, 0xc42007af00, 0x4b36cb, ...) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/graph.go:176 +0x4a0 fp=0xc420043d60 sp=0xc420043c00 pc=0x52e3a0 
github.com/tensorflow/tensorflow/tensorflow/go.Const(0xc42000e080, 0xc42001a8d0, 0x6, 0x6827a0, 0xc4200e22c0, 0xc42001a8d0, 0x6, 0x4d463d, 0x7aaad0) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/util_test.go:38 +0x221 fp=0xc420043e38 sp=0xc420043d60 pc=0x529d41 
github.com/tensorflow/tensorflow/tensorflow/go.TestOutputDataTypeAndShape.func1(0xc4200fa780) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/operation_test.go:137 +0x11e fp=0xc420043fa8 sp=0xc420043e38 pc=0x53619e 
testing.tRunner(0xc4200fa780, 0xc4200e2400) 
    /home/qcg/share/go/src/testing/testing.go:746 +0xd0 fp=0xc420043fd0 sp=0xc420043fa8 pc=0x4d46e0 
runtime.goexit() 
    /home/qcg/share/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc420043fd8 sp=0xc420043fd0 pc=0x45f831 
created by testing.(*T).Run 
    /home/qcg/share/go/src/testing/testing.go:789 +0x2de 

goroutine 1 [chan receive]: 
testing.(*T).Run(0xc4200fa000, 0x6f51c9, 0x1a, 0x702600, 0x47b201) 
    /home/qcg/share/go/src/testing/testing.go:790 +0x2fc 
testing.runTests.func1(0xc4200fa000) 
    /home/qcg/share/go/src/testing/testing.go:1004 +0x64 
testing.tRunner(0xc4200fa000, 0xc420053de0) 
    /home/qcg/share/go/src/testing/testing.go:746 +0xd0 
testing.runTests(0xc4200e2220, 0xa413c0, 0x11, 0x11, 0xc420053e78) 
    /home/qcg/share/go/src/testing/testing.go:1002 +0x2d8 
testing.(*M).Run(0xc420053f18, 0xc420053f70) 
    /home/qcg/share/go/src/testing/testing.go:921 +0x111 
main.main() 
    github.com/tensorflow/tensorflow/tensorflow/go/_test/_testmain.go:82 +0xdb 

goroutine 20 [chan receive]: 
testing.(*T).Run(0xc4200fa3c0, 0xc420014860, 0x13, 0xc4200e2400, 0x2) 
    /home/qcg/share/go/src/testing/testing.go:790 +0x2fc 
github.com/tensorflow/tensorflow/tensorflow/go.TestOutputDataTypeAndShape(0xc4200fa3c0) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/operation_test.go:136 +0x56e 
testing.tRunner(0xc4200fa3c0, 0x702600) 
    /home/qcg/share/go/src/testing/testing.go:746 +0xd0 
created by testing.(*T).Run 
    /home/qcg/share/go/src/testing/testing.go:789 +0x2de 

rax 0x0 
rbx 0x7f6c37ffeaa0 
rcx 0x7f6c4afd70bb 
rdx 0x0 
rdi 0x2 
rsi 0x7f6c37ffe840 
rbp 0x7f6c37ffea90 
rsp 0x7f6c37ffe840 
r8  0x0 
r9  0x7f6c37ffe840 
r10 0x8 
r11 0x246 
r12 0x7f6c37ffecc0 
r13 0x17 
r14 0x5 
r15 0x7f6c37ffecc0 
rip 0x7f6c4afd70bb 
rflags 0x246 
cs  0x33 
fs  0x0 
gs  0x0 
FAIL github.com/tensorflow/tensorflow/tensorflow/go 0.052s 

Any ideas? Thanks.

Environment:

OS: Ubuntu 17.10 64bit 
go version go1.9 linux/amd64 
gcc version 7.2.0 (Ubuntu 7.2.0-8ubuntu3) 
I have updated my answer with a fix. – nessuno

Answer:

I have opened an issue.

For your information: the tensorflow package itself works fine (as you can read from the conversation in the issue). You can comment out the lines that make the test fail, or just skip the go test command and use the package (possibly together with tfgo, which will make your life easier).

Update:

To fix the problem, we just need to check out the Go package at release 1.4:

cd $GOPATH/src/github.com/tensorflow/tensorflow/tensorflow/go 
git checkout r1.4 
go test 

How to draw bounding boxes without using normalized coordinates in the TensorFlow Object Detection API


How do I draw bounding boxes without using normalized coordinates in the TensorFlow Object Detection API? In object_detection_tutorial.ipynb I noticed that the default coordinates are normalized, and the boxes have the form [xmin, ymin, xmax, ymax]. How do I convert them to [image_length*xmin, image_width*ymin, image_length*xmax, image_width*ymax]? I tried using

boxes[0] = boxes[0]*200 
boxes[1] = boxes[1]*100 
boxes[2] = boxes[2]*200 
boxes[3] = boxes[3]*100 

but I got this error:

--------------------------------------------------------------------------- 
IndexError        Traceback (most recent call last) 
<ipython-input-72-efcec9615ee3> in <module>() 
    30     feed_dict={image_tensor: image_np_expanded}) 
    31     boxes[0]=boxes[0]*200 
---> 32     boxes[1]=boxes[1]*100 
    33     boxes[2]=boxes[2]*200 
    34     boxes[3]=boxes[3]*100 
IndexError: index 1 is out of bounds for axis 0 with size 1 
Check the dimensions of the boxes variable. At index 0 are the bboxes for image 1; at index 1 is the Nx4 matrix of bboxes for the second image, and so on... –

Answer:

If you look at research/object_detection/utils/visualization_utils.py, boxes[0] is ymin, not xmin. When you multiply these coordinates by 100 or 200, make sure the result still lies within the image bounds (im_width, im_height).

You can try boxes[0]*100, boxes[1]*-200, boxes[2]*-100, boxes[3]*200, which is similar to this code:

ymin = boxes[0]*100 
xmin = boxes[1]*-200 
ymax = boxes[2]*-100 
xmax = boxes[3]*200 

draw = ImageDraw.Draw(image) 
im_width, im_height = image.size 
(left, right, top, bottom) = (xmin * im_width, xmax * im_width, 
           ymin * im_height, ymax * im_height) 

draw.line([(left, top), (left, bottom), (right, bottom), 
       (right, top), (left, top)], width=thickness, fill=color) 
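More generally, converting normalized [ymin, xmin, ymax, xmax] boxes back to pixel coordinates is just a per-coordinate multiplication by the image height or width; a small numpy sketch (the box values here are made up for illustration):

```python
import numpy as np

def denormalize_boxes(boxes, im_width, im_height):
    """Convert normalized [ymin, xmin, ymax, xmax] boxes to pixel coords."""
    boxes = np.asarray(boxes, dtype=np.float32)
    ymin, xmin, ymax, xmax = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    return np.stack([ymin * im_height, xmin * im_width,
                     ymax * im_height, xmax * im_width], axis=1)

# One detection covering the central half of a 200x100 (width x height) image.
norm = [[0.25, 0.25, 0.75, 0.75]]
print(denormalize_boxes(norm, im_width=200, im_height=100))
# [[ 25.  50.  75. 150.]]
```

Note this multiplies the y-coordinates by the height and the x-coordinates by the width, matching the (left, right, top, bottom) computation in the answer above.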
Thank you so much, it works! –


How to extract low-dimensional feature vectors from a denoising stacked autoencoder using Python and TensorFlow


The code below imports the MNIST dataset and trains a stacked denoising autoencoder to corrupt, encode, and then decode the data. Basically, I want to use it as a non-linear dimensionality-reduction technique. How do I access the lower-dimensional features encoded by the model, so that I can put those features into a clustering model? Ideally, I would expect the lower-dimensional features to be loops or straight lines (obviously that isn't really the case).

import numpy as np 
import os 
import sys 
import tensorflow as tf 
import matplotlib.pyplot as plt 


from tensorflow.examples.tutorials.mnist import input_data 
mnist = input_data.read_data_sets("/tmp/data/") 


def plot_image(image, shape=[28, 28]): 
    plt.imshow(image.reshape(shape), cmap="Greys", interpolation="nearest") 
    plt.axis("off") 

def reset_graph(seed=42): 
    tf.reset_default_graph() 
    tf.set_random_seed(seed) 
    np.random.seed(seed) 


def show_reconstructed_digits(X, outputs, model_path = None, n_test_digits = 2): 
    with tf.Session() as sess: 
     if model_path: 
      saver.restore(sess, model_path) 
     X_test = mnist.test.images[:n_test_digits] 
     outputs_val = outputs.eval(feed_dict={X: X_test}) 

    fig = plt.figure(figsize=(8, 3 * n_test_digits)) 
    for digit_index in range(n_test_digits): 
     plt.subplot(n_test_digits, 2, digit_index * 2 + 1) 
     plot_image(X_test[digit_index]) 
     plt.subplot(n_test_digits, 2, digit_index * 2 + 2) 
     plot_image(outputs_val[digit_index]) 


reset_graph() 

n_inputs = 28 * 28 
n_hidden1 = 300 
n_hidden2 = 150 # codings 
n_hidden3 = n_hidden1 
n_outputs = n_inputs 

learning_rate = 0.01 

noise_level = 1.0 

X = tf.placeholder(tf.float32, shape=[None, n_inputs]) 
X_noisy = X + noise_level * tf.random_normal(tf.shape(X)) 

hidden1 = tf.layers.dense(X_noisy, n_hidden1, activation=tf.nn.relu, 
          name="hidden1") 
hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, # not shown in the book 
          name="hidden2")       # not shown 
hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, # not shown 
          name="hidden3")       # not shown 
outputs = tf.layers.dense(hidden3, n_outputs, name="outputs")  # not shown 

reconstruction_loss = tf.reduce_mean(tf.square(outputs - X)) # MSE 

optimizer = tf.train.AdamOptimizer(learning_rate) 
training_op = optimizer.minimize(reconstruction_loss) 

init = tf.global_variables_initializer() 
saver = tf.train.Saver() 

n_epochs = 10 
batch_size = 150 

with tf.Session() as sess: 
    init.run() 
    for epoch in range(n_epochs): 
     n_batches = mnist.train.num_examples // batch_size 
     for iteration in range(n_batches): 
      print("\r{}%".format(100 * iteration // n_batches), end="") 
      sys.stdout.flush() 
      X_batch, y_batch = mnist.train.next_batch(batch_size) 
      sess.run(training_op, feed_dict={X: X_batch}) 
     loss_train = reconstruction_loss.eval(feed_dict={X: X_batch}) 
     print("\r{}".format(epoch), "Train MSE:", loss_train) 
     saver.save(sess, "./my_model_stacked_denoising_gaussian.ckpt") 


show_reconstructed_digits(X, outputs, "./my_model_stacked_denoising_gaussian.ckpt") 
What do you mean by loops or straight lines? –

For example, the lower-dimensional features of an 8 would be two loops, or of a 9 a loop and a straight line. –

Answer:

In an autoencoder, each layer of the encoding part learns discriminative features, and then in the reconstruction phase (the decoding part) it tries to use those features to shape the output. However, if you want to use an autoencoder to extract lower-dimensional local features, a Convolutional Autoencoder (CAE) is more efficient.

Intuitively, the answer to your question is to use the feature maps produced by the convolutional part of a CAE as the extracted low-dimensional features. I mean: train an N-layer CAE on your dataset, then ignore the output layer and use the outputs of the convolutional layers for clustering.

[Image: convolutional feature maps (S_2) of a CAE]

Just to be clearer: each 5×5 feature map (S_2) in the image above can be considered one feature. You can find a quick demo and an implementation of a CAE here.

Finally, it would be better to ask questions like this on the Data Science community.


TensorFlow error using data exported with JSON


I am using the following code, derived from the documentation:

import tensorflow as tf 
import matplotlib.pyplot as plt 
import numpy as np 
import json 
from pprint import pprint 

with open('/root/ml/2017110508.training.json') as text: 
    data = json.load(text) 
    features = np.array(data['input']['values']) 
    labels = np.array(data['output']['values']) 
    pprint(features.shape) 
    pprint(labels.shape) 
    pprint(features[0:3]) 
    pprint(labels[0:3]) 

# Assume that each row of `features` corresponds to the same row as `labels`. 
assert features.shape[0] == labels.shape[0] 

dataset = tf.data.Dataset.from_tensor_slices((features, labels)) 

In the data, data['input']['values'] and data['output']['values'] are just rows of floats, but I get:

TypeError: Expected binary or unicode string, got [0.6, 0.0, 0.6, 0.0, 0.0, 0.0, 0.0, 0.3, 0.6, 1.5, 0.0, 0.4, 7.7, -8.5, 158.0, 6.2, 55.3, 203.4, 205.7, 156.5, -8.5, 7.3, -8.8, 53.5, -0.9, -31.2, 15.3, -1.9, -87.6, 21.3, -21.6, -34.7, -17.1, -85.0, 28.6, -19.1]

What format does from_tensor_slices expect?

Thanks.

Output from the pprint calls:

(58502,)

(58502, 5)

array([ list([0.6, 0.0, 0.6, 0.0, 0.0, 0.0, 0.0, 0.3, 0.6, 1.5, 0.0, 0.4, 7.7, -8.5, 158.0, 6.2, 55.3, 203.4, 205.7, 156.5, -8.5, 7.3, -8.8, 53.5, -0.9, -31.2, 15.3, -1.9, -87.6, 21.3, -21.6, -34.7, -17.1, -85.0, 28.6, -19.1]), list([1.3, 0.0, 1.2, 0.0, 0.0, 0.0, 0.0, 0.6, 1.0, 2.3, 0.0, 0.6, 7.7, -8.5, 158.0, 6.2, 55.3, 203.4, 205.7, 156.4, -8.5, 7.5, -8.8, 53.4, -0.9, -31.2, 15.3, -1.9, -87.6, 21.3, -21.6, -34.7, -17.0, -85.0, 28.6, -19.1]), list([2.0, 0.0, 1.6, 0.0, 0.0, 0.0, 0.2, 0.8, 1.1, 2.9, 0.0, 0.9, 8.0, -8.5, 158.2, 6.2, 55.3, 203.4, 205.7, 156.3, -8.5, 8.0, -8.8, 53.3, -0.9, -31.2, 15.1, -1.9, -87.6, 21.3, -21.6, -34.8, -16.8, -84.9, 28.6, -19.1])], dtype=object)

array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]])

Could you add the output of features.shape and labels.shape, and the first few rows of each? – Stephen

Hi Stephen, I updated the post above to answer your question. –

Answer:

It looks like the problem is actually in your JSON file. The output for your labels variable is fine; the output for your features variable should have the same structure, but it contains some lists. If "list" appears multiple times as an attribute in your JSON file, you should remove those and their parentheses (and possibly some extra square brackets). If you generated the JSON yourself, it is probably easier to make the change there.

An unrelated problem is that your labels are all 0; if that is not what you expect, you may have lost some data somewhere.
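The dtype=object in the question's pprint output is the telltale sign: numpy could not build a rectangular float array, so it stored Python lists instead. A small numpy sketch of the symptom (data values made up):

```python
import numpy as np

# Rows of equal length produce a proper 2-D float array.
good = np.array([[0.6, 0.0, 0.6], [1.3, 0.0, 1.2]])
print(good.dtype, good.shape)      # float64 (2, 3)

# A ragged row forces numpy to fall back to a 1-D object array of lists,
# which tf.data.Dataset.from_tensor_slices cannot convert to a tensor.
ragged = np.array([[0.6, 0.0, 0.6], [1.3, 0.0]], dtype=object)
print(ragged.dtype, ragged.shape)  # object (2,)
```

So a features array with shape (58502,) and dtype=object means the input rows are not all the same length (or are nested one level too deep), while the (58502, 5) labels array parsed cleanly.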


Node.js TensorFlow Serving client error 3


I am serving a pre-trained Inception model, and I have followed the official tutorial for serving it up to this point. Currently I get an error code 3, as follows:

{ Error: contents must be scalar, got shape [305] 
    [[Node: map/while/DecodeJpeg = DecodeJpeg[_output_shapes=[[?,?,3]], acceptable_fraction=1, channels=3, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/TensorArrayReadV3)]] 
    at /server/node_modules/grpc/src/client.js:554:15 code: 3, metadata: Metadata { _internal_repr: {} } } 

I am using prediction_service.proto as-is from the TensorFlow Serving API. This is my Node.js file where I define the functions:

const PROTO_PATH = "./pb/prediction_service.proto"; 
const TensorflowServing = grpc.load(PROTO_PATH).tensorflow.serving; 

const testClient = new TensorflowServing.PredictionService(
    TF_TEST, grpc.credentials.createInsecure() 
); 

function getTestModelMsg(val){ 
    return { 
     model_spec: { name: "inception", signature_name: "predict_images", version: 1}, 
     inputs: { 
      images: { 
       dtype: "DT_STRING", 
       tensor_shape: { 
        dim: [{size: 220}, {size: 305}], 
        unknown_rank: false 
       }, 
       string_val: val 
      } 
     } 
    } 
} 


function predictTest(array, callback) { 
    testClient.predict(getTestModelMsg(array), (error, response) => { 
     if(error) 
      return callback(error); 

    callback(null, response.outputs) 
})} 

And I am passing the image as binary, as follows:

fs.readFile('./test/Xiang_Xiang_panda.jpg', (err, data) => { 
    if(err) { 
     return res.json({message: "Not found"}); 
    } 

    predictTest(data.toString('binary') , (error, outputs) => { 
     if (error) { 
      console.error(error); 
      return res.status(500).json({ error }); 
     } 
     res.status(200).json({ outputs }); 
    }) 
}) 

I have been stuck on this for a while, so I would be grateful for any help! Thanks in advance! :)

Answer:

OK, I finally managed to solve this. Posting it as an answer in case someone runs into exactly the same problem.

So, since the model expects base64-encoded images:

fs.readFile('./test/Xiang_Xiang_panda.jpg', (err, data) => { 
    if(err) { 
     return res.json({message: "Not found"}); 
    } 

    predictTest(data.toString('base64') , (error, outputs) => { 
     if (error) { 
      console.error(error); 
      return res.status(500).json({ error }); 
     } 
     res.status(200).json({ outputs }); 
    }) 
}) 

Then, looking at inception_client.py from TensorFlow Serving, I found that the tensor actually has shape=[1]. That makes getTestModelMsg as follows:

function getTestModelMsg(val){ 
return { 
    model_spec: { name: "inception", signature_name: "serving_default", version: 1}, 
    inputs: { 
     images: { 
      dtype: "DT_STRING", 
      tensor_shape: { 
       dim: [{size: 1}], 
       unknown_rank: false 
      }, 
      string_val: val 
     } 
    } 
    } 
} 

Hope this helps someone. Good luck. :)


TensorFlow: reusing a variable across different name scopes


I have a problem with reusing variables across different name scopes. The code below separates the source embedding and the target embedding into two different spaces. What I want to do is put source and target in the same space, reusing the variables in the lookup table.

''' Applying bidirectional encoding for source-side inputs and first-word decoding. 
''' 
def decode_first_word(self, source_vocab_id_tensor, source_mask_tensor, scope, reuse): 
    with tf.name_scope('Word_Embedding_Layer'): 
     with tf.variable_scope('Source_Side'): 
      source_embedding_tensor = self._src_lookup_table(source_vocab_id_tensor) 
    with tf.name_scope('Encoding_Layer'): 
     source_concated_hidden_tensor = self._encoder.get_biencoded_tensor( 
      source_embedding_tensor, source_mask_tensor) 
    with tf.name_scope('Decoding_Layer_First'): 
     rvals = self.decode_next_word(source_concated_hidden_tensor, source_mask_tensor,  
      None, None, None, scope, reuse) 
    return rvals + [source_concated_hidden_tensor] 


''' Applying one-step decoding. 
''' 
def decode_next_word(self, enc_concat_hidden, src_mask, cur_dec_hidden,  
          cur_trg_wid, trg_mask=None, scope=None, reuse=False,  
          src_side_pre_act=None): 
    with tf.name_scope('Word_Embedding_Layer'): 
     with tf.variable_scope('Target_Side'): 
      cur_trg_wemb = None 
      if None == cur_trg_wid: 
       pass 
      else: 
       cur_trg_wemb = self._trg_lookup_table(cur_trg_wid) 

I want them to be as follows, so that there is only one embedding node in the whole graph:

def decode_first_word_shared_embedding(self, source_vocab_id_tensor, source_mask_tensor, scope, reuse): 
    with tf.name_scope('Word_Embedding_Layer'): 
     with tf.variable_scope('Bi_Side'): 
      source_embedding_tensor = self._bi_lookup_table(source_vocab_id_tensor) 
    with tf.name_scope('Encoding_Layer'): 
     source_concated_hidden_tensor = self._encoder.get_biencoded_tensor( 
      source_embedding_tensor, source_mask_tensor) 
    with tf.name_scope('Decoding_Layer_First'): 
     rvals = self.decode_next_word_shared_embedding(source_concated_hidden_tensor, source_mask_tensor,  
      None, None, None, scope, reuse) 
    return rvals + [source_concated_hidden_tensor] 

def decode_next_word_shared_embedding(self, enc_concat_hidden, src_mask, cur_dec_hidden,  
          cur_trg_wid, trg_mask=None, scope=None, reuse=False,  
          src_side_pre_act=None): 
    with tf.name_scope('Word_Embedding_Layer'):    
     cur_trg_wemb = None 
     if None == cur_trg_wid: 
      pass 
     else: 
      with tf.variable_scope('Bi_Side'): 
       cur_trg_wemb = self._bi_lookup_table(cur_trg_wid) 

How can I achieve this?

Answer:

I solved it by using a dictionary to hold the embedding weight matrix. Hint: https://www.tensorflow.org/versions/r0.12/how_tos/variable_scope/


ValueError: Cannot feed value of shape (TensorFlow)


I am running into some problems with TFLearn/TensorFlow. I have adjusted my np.reshape to the right dimensions, but it crashes with an error.

The error occurs at line 17 of the training code:

ValueError: Cannot feed value of shape (48, 1) for Tensor 'TargetsData/Y:0', which has shape '(?, 2)' 

The line, for reference:

model.fit(X, Y, n_epoch=250, validation_set=(W,Z), show_metric=True) 

My training code is as follows:

import deepneuralnet as net 
import numpy as np 
from tflearn.data_utils import image_preloader 
import os 

model = net.model 
train_path = os.path.abspath('train') 
print(train_path) 
X, Y = image_preloader(target_path=train_path, image_shape=(100, 100), 
mode='folder', grayscale=False, categorical_labels=True, normalize=True) 
X = np.reshape(X, (-1, 100, 100, 3)) 

validate_path = os.path.abspath('validate') 
W, Z = image_preloader(target_path=validate_path, image_shape=(100, 100), 
mode='folder', grayscale=False, categorical_labels=True, normalize=True) 
W = np.reshape(W, (-1, 100, 100, 3)) 
model.fit(X, Y, n_epoch=250, validation_set=(W,Z), show_metric=True) 
model.save('./ZtrainedNet/final-model.tfl') 

And the neural net is:

import tflearn 
from tflearn.layers.core import input_data, dropout, fully_connected 
from tflearn.layers.conv import conv_2d, max_pool_2d 
from tflearn.layers.estimator import regression 
from tflearn.metrics import Accuracy 

acc = Accuracy() 
network = input_data(shape=[None, 100, 100, 3]) 
# Conv layers ------------------------------------ 
network = conv_2d(network, 64, 3, strides=1, activation='relu') 
network = max_pool_2d(network, 2, strides=2) 
network = conv_2d(network, 64, 3, strides=1, activation='relu') 
network = max_pool_2d(network, 2, strides=2) 
network = conv_2d(network, 64, 3, strides=1, activation='relu') 
network = conv_2d(network, 64, 3, strides=1, activation='relu') 
network = conv_2d(network, 64, 3, strides=1, activation='relu') 
network = max_pool_2d(network, 2, strides=2) 
# Fully Connected Layers ------------------------- 
network = fully_connected(network, 1024, activation='tanh') 
network = dropout(network, 0.5) 
network = fully_connected(network, 1024, activation='tanh') 
network = dropout(network, 0.5) 
network = fully_connected(network, 2, activation='softmax') 
network = regression(network, optimizer='momentum', 
loss='categorical_crossentropy', 
learning_rate=0.001, metric=acc) 
model = tflearn.DNN(network) 

My understanding is that it has something to do with the softmax? I am not sure.

So you have a dataset with 2 classes and size 48? Can you post a sample of the Y values here? –

The error says Y doesn't have the expected shape. –

Answer:

It turned out the subfolders were messed up. The 2 corresponds to the number of subfolders I had; I thought I had set them up correctly, but there was only one subfolder inside "train".

Second answer:

Are your Y values also one-hot encoded? I am just trying to guess why Y has shape (?, 2). It would help if you could share some sample labels from your training set.
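To illustrate the guess above: one-hot encoding is what turns integer labels of shape (48, 1) into the (48, 2) shape the network's two-unit softmax expects. A numpy stand-in for Keras' to_categorical (the class count of 2 is taken from the error message):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Minimal stand-in for keras.utils.to_categorical."""
    labels = np.asarray(labels).ravel().astype(int)
    one_hot = np.zeros((labels.size, num_classes), dtype=np.float32)
    one_hot[np.arange(labels.size), labels] = 1.0
    return one_hot

# Integer labels of shape (48, 1) -- the shape TFLearn complained about.
y = np.random.randint(0, 2, size=(48, 1))
Y = to_one_hot(y, num_classes=2)
print(Y.shape)  # (48, 2) -- matches the 2-unit softmax output layer
```

Each row of the result has exactly one 1.0, so it can be fed directly to a categorical_crossentropy loss.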