



Error installing TensorFlow for Go on Ubuntu 17.10


I installed TensorFlow for Go following the steps below, and no error message was shown.

TF_TYPE="cpu" # Change to "gpu" for GPU support 
TARGET_DIRECTORY='/usr/local' 
curl -L \
    "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.4.0.tar.gz" | 
    sudo tar -C $TARGET_DIRECTORY -xz 

sudo ldconfig 
go get github.com/tensorflow/tensorflow/tensorflow/go 

But the test fails: go test github.com/tensorflow/tensorflow/tensorflow/go gives this error message.

2017-11-18 06:25:59.874418: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX 
2017-11-18 06:25:59.877032: F tensorflow/core/framework/tensor.cc:822] Unexpected type: 23 
SIGABRT: abort 
PC=0x7f6c4afd70bb m=7 sigcode=18446744073709551610 
signal arrived during cgo execution 

goroutine 24 [syscall, locked to thread]: 
runtime.cgocall(0x656610, 0xc4200439c8, 0xc4200439f0) 
    /home/qcg/share/go/src/runtime/cgocall.go:132 +0xe4 fp=0xc420043998 sp=0xc420043958 pc=0x405434 
github.com/tensorflow/tensorflow/tensorflow/go._Cfunc_TF_SetAttrTensor(0x7f6c24003c10, 0x7f6c2400e060, 0x7f6c2400e250, 0x7f6c2400e1f0) 
    github.com/tensorflow/tensorflow/tensorflow/go/_test/_obj_test/_cgo_gotypes.go:890 +0x45 fp=0xc4200439c8 sp=0xc420043998 pc=0x52cb25 
github.com/tensorflow/tensorflow/tensorflow/go.setAttr.func18(0x7f6c24003c10, 0x7f6c2400e060, 0x7f6c2400e250, 0x7f6c2400e1f0) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/graph.go:273 +0xec fp=0xc420043a00 sp=0xc4200439c8 pc=0x538d0c 
github.com/tensorflow/tensorflow/tensorflow/go.setAttr(0x7f6c24003c10, 0xc42000e0c0, 0x6ef103, 0x5, 0x6b60c0, 0xc4200e2440, 0x0, 0x0) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/graph.go:273 +0x11b9 fp=0xc420043c00 sp=0xc420043a00 pc=0x52f759 
github.com/tensorflow/tensorflow/tensorflow/go.(*Graph).AddOperation(0xc42000e080, 0x6eef64, 0x5, 0xc42001a8d0, 0x6, 0x0, 0x0, 0x0, 0xc42007af00, 0x4b36cb, ...) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/graph.go:176 +0x4a0 fp=0xc420043d60 sp=0xc420043c00 pc=0x52e3a0 
github.com/tensorflow/tensorflow/tensorflow/go.Const(0xc42000e080, 0xc42001a8d0, 0x6, 0x6827a0, 0xc4200e22c0, 0xc42001a8d0, 0x6, 0x4d463d, 0x7aaad0) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/util_test.go:38 +0x221 fp=0xc420043e38 sp=0xc420043d60 pc=0x529d41 
github.com/tensorflow/tensorflow/tensorflow/go.TestOutputDataTypeAndShape.func1(0xc4200fa780) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/operation_test.go:137 +0x11e fp=0xc420043fa8 sp=0xc420043e38 pc=0x53619e 
testing.tRunner(0xc4200fa780, 0xc4200e2400) 
    /home/qcg/share/go/src/testing/testing.go:746 +0xd0 fp=0xc420043fd0 sp=0xc420043fa8 pc=0x4d46e0 
runtime.goexit() 
    /home/qcg/share/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc420043fd8 sp=0xc420043fd0 pc=0x45f831 
created by testing.(*T).Run 
    /home/qcg/share/go/src/testing/testing.go:789 +0x2de 

goroutine 1 [chan receive]: 
testing.(*T).Run(0xc4200fa000, 0x6f51c9, 0x1a, 0x702600, 0x47b201) 
    /home/qcg/share/go/src/testing/testing.go:790 +0x2fc 
testing.runTests.func1(0xc4200fa000) 
    /home/qcg/share/go/src/testing/testing.go:1004 +0x64 
testing.tRunner(0xc4200fa000, 0xc420053de0) 
    /home/qcg/share/go/src/testing/testing.go:746 +0xd0 
testing.runTests(0xc4200e2220, 0xa413c0, 0x11, 0x11, 0xc420053e78) 
    /home/qcg/share/go/src/testing/testing.go:1002 +0x2d8 
testing.(*M).Run(0xc420053f18, 0xc420053f70) 
    /home/qcg/share/go/src/testing/testing.go:921 +0x111 
main.main() 
    github.com/tensorflow/tensorflow/tensorflow/go/_test/_testmain.go:82 +0xdb 

goroutine 20 [chan receive]: 
testing.(*T).Run(0xc4200fa3c0, 0xc420014860, 0x13, 0xc4200e2400, 0x2) 
    /home/qcg/share/go/src/testing/testing.go:790 +0x2fc 
github.com/tensorflow/tensorflow/tensorflow/go.TestOutputDataTypeAndShape(0xc4200fa3c0) 
    /home/qcg/share/g/src/github.com/tensorflow/tensorflow/tensorflow/go/operation_test.go:136 +0x56e 
testing.tRunner(0xc4200fa3c0, 0x702600) 
    /home/qcg/share/go/src/testing/testing.go:746 +0xd0 
created by testing.(*T).Run 
    /home/qcg/share/go/src/testing/testing.go:789 +0x2de 

rax 0x0 
rbx 0x7f6c37ffeaa0 
rcx 0x7f6c4afd70bb 
rdx 0x0 
rdi 0x2 
rsi 0x7f6c37ffe840 
rbp 0x7f6c37ffea90 
rsp 0x7f6c37ffe840 
r8  0x0 
r9  0x7f6c37ffe840 
r10 0x8 
r11 0x246 
r12 0x7f6c37ffecc0 
r13 0x17 
r14 0x5 
r15 0x7f6c37ffecc0 
rip 0x7f6c4afd70bb 
rflags 0x246 
cs  0x33 
fs  0x0 
gs  0x0 
FAIL github.com/tensorflow/tensorflow/tensorflow/go 0.052s 

Any ideas? Thanks. Need more details?

Environment:

OS: Ubuntu 17.10 64bit 
go version go1.9 linux/amd64 
gcc version 7.2.0 (Ubuntu 7.2.0-8ubuntu3) 

I updated my answer with a revision. – nessuno


I have opened an issue about this.

For your interest: the tensorflow package itself works fine (as you can read from the conversation in the issue). You can comment out the lines that make the tests fail, or just skip the go test command and use the package (perhaps together with tfgo, which will make your life easier).

Update:

To solve the problem, we just need to check out the Go package at version 1.4:

cd $GOPATH/src/github.com/tensorflow/tensorflow/tensorflow/go 
git checkout r1.4 
go test 

TensorFlow Object Detection API print objects found on image to console

I'm trying to return a list of the objects that have been found in an image with the TF Object Detection API.

To do that I'm using print([category_index.get(i) for i in classes[0]]) to print the list of objects that were found, or print(num_detections) to display the number of found objects, but in both cases it gives me a list with 300 values, or simply the value [300.], respectively.

How is it possible to return only the objects that are actually on the image? Or, if there is some mistake, please help me figure out what is wrong.

I was using a Faster RCNN model config file and checkpoints while training. Rest assured it really detects only a few objects on the image; here it is:

[image: detection results on the test image]

My code:

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

PATH_TO_CKPT = 'frozen_graph/frozen_inference_graph.pb'

PATH_TO_LABELS = 'object_detection/pascal_label_map.pbtxt'

NUM_CLASSES = 7

detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)


PATH_TO_TEST_IMAGES_DIR = 'object_detection/test_images/'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 2) ]

IMAGE_SIZE = (12, 8)

with detection_graph.as_default():
  with tf.Session(graph=detection_graph) as sess:
    sess.run(tf.global_variables_initializer())
    img = 1
    for image_path in TEST_IMAGE_PATHS:
      image = Image.open(image_path)
      image_np = load_image_into_numpy_array(image)
      # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
      image_np_expanded = np.expand_dims(image_np, axis=0)
      image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
      # Each box represents a part of the image where a particular object was detected.
      boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
      scores = detection_graph.get_tensor_by_name('detection_scores:0')
      classes = detection_graph.get_tensor_by_name('detection_classes:0')
      num_detections = detection_graph.get_tensor_by_name('num_detections:0')

      (boxes, scores, classes, num_detections) = sess.run(
          [boxes, scores, classes, num_detections],
          feed_dict={image_tensor: image_np_expanded})

      vis_util.visualize_boxes_and_labels_on_image_array(
          image_np,
          np.squeeze(boxes),
          np.squeeze(classes).astype(np.int32),
          np.squeeze(scores),
          category_index,
          use_normalized_coordinates=True,
          line_thickness=8)
      plt.figure(figsize=IMAGE_SIZE)
      plt.imsave('RESULTS/' + str(img) + '.jpg', image_np)
      img += 1

      # Return found objects
      print([category_index.get(i) for i in classes[0]])
      print(boxes.shape)
      print(num_detections)

Which gives the following result:

[{'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 
'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_red', 'id': 7}, {'name': 
'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 
'marlboro_gold', 'id': 5}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_red', 'id': 7}, {'name': 
'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 
'marlboro_gold', 'id': 5}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_red', 'id': 4}]
(1, 300, 4)
[ 300.]

Thanks in advance for any information!

UPD:

A thousand thanks to everyone who helped with this question. The following line of code is exactly what I needed: it gives me the list of objects that were found, so I can do other operations on them.

print([category_index.get(value) for index, value in enumerate(classes[0]) if scores[0, index] > 0.5])
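For reference, that filtering can be exercised in isolation with stand-in arrays shaped like the real sess.run outputs (the category names below are placeholders taken from the output above, not real detections):

```python
import numpy as np

# Stand-in detector outputs: classes is (1, N) float class ids,
# scores is (1, N) confidences. Values here are illustrative only.
category_index = {1: {'name': 'chesterfield_blue', 'id': 1},
                  7: {'name': 'marlboro_red', 'id': 7}}
classes = np.array([[7.0, 1.0, 7.0]])
scores = np.array([[0.9, 0.6, 0.2]])

# Keep only the detections whose score clears the threshold.
found = [category_index.get(int(value))
         for index, value in enumerate(classes[0]) if scores[0, index] > 0.5]
print([f['name'] for f in found])  # ['marlboro_red', 'chesterfield_blue']
```

The third detection (score 0.2) is dropped, which is exactly why the 300-element list shrinks to the objects actually drawn on the image.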

As far as I can tell, you have 300 detections. visualize_boxes_and_labels_on_image_array displays only a few of them because min_score_thresh = .5 (the default value) is too high for most of those detections.

If you want to add the same filtering to your printed output, you can write:

min_score_thresh = 0.5
print([category_index.get(i) for i in classes[0] if scores[0, i] > min_score_thresh])

You can change min_score_thresh to choose whatever threshold you need. It may be useful to print the score value together with the category name.

Oh, I think that's it, I got the main idea... I'll figure it out later, but a first try gives me this error: IndexError: only integers, slices (':'), ellipsis ('...'), numpy.newaxis ('None') and integer or boolean arrays are valid indices. Have you tried your code? – Michael Aug 26 '18 at 22:41

OK, I just modified your code a bit, and the following statement gives me what I needed to print: print([category_index.get(value) for index, value in enumerate(classes[0]) if scores[0, index] > 0.5]) – Michael Jul 27 '18 at 13:05


From the signature of visualize_boxes_and_labels_on_image_array, you have to set the parameters max_boxes_to_draw and min_score_thresh:

visualize_boxes_and_labels_on_image_array(image, boxes, classes, scores,
    category_index, instance_masks=None, keypoints=None,
    use_normalized_coordinates=False, max_boxes_to_draw=20,
    min_score_thresh=.5, agnostic_mode=False, line_thickness=4)

Thanks for the answer. I tried your hint, but unfortunately the result is the same even with those attributes. – Michael Aug 14 '18 at 12:57

The question is: what are the probabilities of the detected classes? – Dat Tran Aug 14 '18 at 14:03

@Dat Tran No, I just want to return a list or dictionary of the detected objects, but the code I posted above gives me 300 values, even though only a few objects are detected on the image – Michael Aug 14 '18 at 17:55

I don't understand what you mean... Of course it will only show you some of the detected objects, because not all of the values in the dictionary are actually displayed. It depends on the probability threshold. – Dat Tran Aug 15 '18 at 9:59

@Dat Tran My goal is to print to the console only the objects, or a list of the objects, found on the image. I can't understand why num_detections returns the value 300, or why [category_index.get(i) for i in classes[0]] returns a list of 300 values. Is it possible to somehow return a list containing only the objects found on the image? – Michael Aug 18 '18 at 7:54


I ran into this problem today. You should change two params in visualize_boxes_and_labels_on_image_array():

  1. max_boxes_to_draw = 20 (draws only 20 boxes)
  2. min_score_thresh = .5 (draws only boxes with score >= .5)

    Change those two numbers for your detections.


Try setting min_score_thresh to 0. Then you might see all 300 detections.


Open visualization_utils.py and add print(class_name) after these lines:

class_name = 'N/A'
display_str = '{}: {}%'.format(class_name, int(100 * scores[i]))

This will print the detected objects.

In the visualization_utils.py file, add print(class_name) after:

class_name = 'N/A'
display_str = '{}: {}%'.format(class_name, int(100 * scores[i]))

This prints the detected objects. I would like to know where to add a print command to print a timestamp along with the accuracy percentage of the output.
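A minimal sketch of what that timestamped print could look like, assuming class_name and scores[i] are in scope at that point in visualization_utils.py (the fixed timestamp and values below are stand-ins for illustration only):

```python
import datetime

# Stand-in values in place of the real class_name and scores[i].
class_name, score = 'marlboro_red', 0.87
# In the real file you would use datetime.datetime.now(); a fixed time
# is used here so the output is reproducible.
stamp = datetime.datetime(2017, 11, 18, 6, 25, 59).isoformat()
line = '%s %s: %d%%' % (stamp, class_name, int(100 * score))
print(line)  # 2017-11-18T06:25:59 marlboro_red: 87%
```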

// This loads the label map, the categories and the category index:

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

// To print the recognized objects, do the following:

Print the categories rather than the category index. The index contains numeric values; the categories contain the names of the objects. Once recognized, with the threshold as above:

min_score_thresh = 0.5
print([categories.get(i) for i in classes[0] if scores[0, i] > min_score_thresh])

This will print the recognized categories.


How to perform k-fold cross validation with tensorflow?

I am following the IRIS example of tensorflow.

My case now is that I have all the data in a single CSV file, not separated, and I want to apply k-fold cross validation to that data.

I have

data_set = tf.contrib.learn.datasets.base.load_csv(filename="mydata.csv",
                                                   target_dtype=np.int)

How can I perform k-fold cross validation on this dataset with a multi-layer neural network, the same as in the IRIS example?

You can put the k-fold split inside a generator and build a tf.data.Dataset from it:

def make_dataset(X_data, y_data, n_splits): 
    def gen(): 
        for train_index, test_index in KFold(n_splits).split(X_data): 
            X_train, X_test = X_data[train_index], X_data[test_index] 
            y_train, y_test = y_data[train_index], y_data[test_index] 
            yield X_train, y_train, X_test, y_test 
    return tf.data.Dataset.from_generator(gen, (tf.float64, tf.float64, tf.float64, tf.float64)) 

dataset = make_dataset(X, y, 10) 

The dataset can then be iterated over, either in graph-based tensorflow or using eager execution. Using eager execution:

for X_train, y_train, X_test, y_test in tfe.Iterator(dataset): 
    .... 

This should be marked as the answer, I think... – AGP Aug 19 '18 at 16:55

What if this snippet assumes X and y cannot be kept in memory? I thought the whole point of using a generator was to load samples on demand rather than loading the entire dataset into memory. – fabiomaia Dec 29 '18 at 17:23


NNs are usually trained on large datasets where CV is not used, since it is very expensive. In the case of IRIS (50 samples per species) you probably need it... Why not use scikit-learn with different random seeds to split your training and testing sets?

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

For k in kfold:

  1. Split the data differently, passing a different value to "random_state"
  2. Train the network with the _train pieces
  3. Test the network with the _test pieces

    If you don't like random seeds and want a more structured k-fold, you can use this:

from sklearn.model_selection import KFold, cross_val_score 
X = ["a", "a", "b", "c", "c", "c"] 
k_fold = KFold(n_splits=3) 
for train_indices, test_indices in k_fold.split(X): 
    print('Train: %s | test: %s' % (train_indices, test_indices)) 
# Train: [2 3 4 5] | test: [0 1] 
# Train: [0 1 4 5] | test: [2 3] 
# Train: [0 1 2 3] | test: [4 5] 

The answer has nothing to do with the question!!! An answer should provide a Tensorflow solution – AGP Aug 19 '18 at 16:53

Since the answer provides a solution that can be used with Tensorflow, I can't see the problem. – Dec 19 '18 at 22:40

How can we make this more randomized? – Mona Jalal Mar 30 at 1:57


How to store a numpy array as a tfrecord?


I want to create a dataset in tfrecord format from numpy arrays. I am trying to store 2D and 3D coordinates.

2D coordinates: a numpy array of shape (2, 10), dtype float64
3D coordinates: a numpy array of shape (3, 10), dtype float64

This is my code:

def _floats_feature(value): 
    return tf.train.Feature(float_list=tf.train.FloatList(value=value)) 


train_filename = 'train.tfrecords' # address to save the TFRecords file 
writer = tf.python_io.TFRecordWriter(train_filename) 


for c in range(0,1000): 

    #get 2d and 3d coordinates and save in c2d and c3d 

    feature = {'train/coord2d': _floats_feature(c2d), 
        'train/coord3d': _floats_feature(c3d)} 
    sample = tf.train.Example(features=tf.train.Features(feature=feature)) 
    writer.write(sample.SerializeToString()) 

writer.close() 

When I run this I get the error:

feature = {'train/coord2d': _floats_feature(c2d), 
    File "genData.py", line 19, in _floats_feature 
return tf.train.Feature(float_list=tf.train.FloatList(value=value)) 
    File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\google\protobuf\internal\python_message.py", line 510, in init 
copy.extend(field_value) 
    File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\google\protobuf\internal\containers.py", line 275, in extend 
new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter] 
    File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\google\protobuf\internal\containers.py", line 275, in <listcomp> 
new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter] 
    File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\google\protobuf\internal\type_checkers.py", line 109, in CheckValue 
raise TypeError(message) 
TypeError: array([-163.685, 240.818, -114.05 , -518.554, 107.968, 427.184, 
    157.418, -161.798, 87.102, 406.318]) has type <class 'numpy.ndarray'>, but expected one of: ((<class 'numbers.Real'>,),) 

I don't know how to fix this. Should I store the features as int64 or bytes? I have no clue how to go about it, since I am completely new to tensorflow. Any help would be great! Thanks


The tf.train.Feature class only supports lists (or 1-D arrays) when using the float_list argument. Depending on your data, you could try one of the following:

  1. Flatten the data in your array before passing it to tf.train.Feature:

    def _floats_feature(value): 
        return tf.train.Feature(float_list=tf.train.FloatList(value=value.reshape(-1))) 
    

    Note that you will probably want to add another feature to indicate how this data should be reshaped when you parse it again (you could use an int64_list feature for that purpose).

  2. Split your multidimensional feature into multiple 1-D features. For example, if c2d contains an N*2 array of x and y coordinates, you could split the feature into separate train/coord2d/x and train/coord2d/y features, each containing the x and y coordinate data, respectively.
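A small numpy-only sketch of option 1, showing the flatten-plus-shape round trip (the tf.train.Feature calls are omitted; flat would go into a float_list and shape into an int64_list):

```python
import numpy as np

# A (2, 10) float64 coordinate array, as in the question.
c2d = np.arange(20, dtype=np.float64).reshape(2, 10)

flat = c2d.reshape(-1)    # 1-D data: this is what float_list accepts
shape = list(c2d.shape)   # store alongside, e.g. in an int64_list feature

# At parse time, rebuild the original array from the two features.
restored = np.asarray(flat).reshape(shape)
assert (restored == c2d).all()
print(restored.shape)  # (2, 10)
```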


TF - object detection with ground-truth boxes


At the moment I am doing some research on object detection with the TensorFlow Object Detection API. For this I followed this tutorial:

https://www.oreilly.com/ideas/object-detection-with-tensorflow

The tutorial describes how to create tfrecords from images and PASCAL VOC XML label files, and how to get started with the Object Detection API.

To generate those tfrecords I adapted some code from the referenced raccoon repository on GitHub:

https://github.com/datitran/raccoon_dataset

I labeled my images with LabelImg (https://github.com/tzutalin/labelImg), which lets you save the labels in PASCAL VOC format.

So, following the tutorial, I did a first test (training) with 60 images, and after an hour (574 steps) I interrupted it to save a checkpoint. After that I ran the inference-graph export script and saved the frozen model (correct me if I am saying something silly, this stuff is all new to me...).

After that I adapted the Jupyter notebook from the tutorial to my needs, and tada: there were some detections in the test images.

So far so good, but now I want to see how good the object detection is. For that I want to add some ground-truth boxes from my PASCAL VOC test dataset, but I am having some trouble reaching that goal.

The first thing I did was to add the boxes manually: I read them from my VOC dataset and added them to the image with https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Rectangle.html

But in my solution that ends up on different plots/figures. ...

So then I thought maybe the Object Detection API provides some functions to add boxes/ground-truth boxes and to evaluate the accuracy of my detections against my VOC test dataset.

So I took a look at https://github.com/tensorflow/models/tree/master/research/object_detection/utils and thought I had found a function (def draw_bounding_box_on_image_array) that draws some boxes onto my image_np, but nothing happened. This is what the API itself uses for visualization:

vis_util.visualize_boxes_and_labels_on_image_array(
     image_np, 
     np.squeeze(boxes), 
     np.squeeze(classes).astype(np.int32), 
     np.squeeze(scores), 
     category_index, 
     use_normalized_coordinates=True, 
     line_thickness=2) 

and this is what I tried to use:

vis_util.draw_bounding_box_on_image(
     image_np, 
     bndbox_coordinates[0][1], 
     bndbox_coordinates[0][0], 
     bndbox_coordinates[0][3], 
     bndbox_coordinates[0][2]) 

But there are no boxes on the image when I try to plot this numpy array.

Am I missing something? Question 2: is there some class in the API that does accuracy evaluation? I have not spotted one... and does such a class/function use PASCAL VOC for the determination? Maybe I can use this: https://github.com/tensorflow/models/blob/master/research/object_detection/utils/object_detection_evaluation.py, but I am not confident, since I am also new to Python and some of the code/comments are hard for me to understand...

Maybe you pro players out there can help me

Thanks in advance

EDIT: https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/

Now:

Having read a little of this article, I know I need IoU (intersection over union). So, does anyone know whether the Object Detection API provides a function for this? I will look over the API again...


I think you are not passing the complete set of arguments:

vis_util.visualize_boxes_and_labels_on_image_array(
    image_np, 
    np.squeeze(boxes), 
    np.squeeze(classes).astype(np.int32), 
    np.squeeze(scores), 
    category_index, 
    use_normalized_coordinates=True, 
    line_thickness=2) 

You need to pass:

  1. image_np = the image
  2. np.squeeze(boxes) = the bounding box coordinates
  3. np.squeeze(classes).astype(np.int32) = the class each object belongs to
  4. np.squeeze(scores) = the confidence score, which will always be 1 for ground-truth boxes
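One pitfall when drawing ground-truth boxes by hand: the visualization utilities expect normalized [ymin, xmin, ymax, xmax] rows when use_normalized_coordinates=True, while PASCAL VOC XML stores pixel coordinates in (xmin, ymin, xmax, ymax) order. A hedged numpy sketch of that conversion (the function name is my own, not part of the API):

```python
import numpy as np

# Convert VOC pixel boxes (xmin, ymin, xmax, ymax) into normalized
# [ymin, xmin, ymax, xmax] rows suitable for
# visualize_boxes_and_labels_on_image_array(..., use_normalized_coordinates=True).
def voc_to_normalized(boxes_px, im_width, im_height):
    boxes_px = np.asarray(boxes_px, dtype=np.float64)
    xmin, ymin, xmax, ymax = boxes_px.T
    return np.stack([ymin / im_height, xmin / im_width,
                     ymax / im_height, xmax / im_width], axis=1)

# One box on a hypothetical 200x200 image.
gt_boxes = voc_to_normalized([(50, 20, 150, 120)], im_width=200, im_height=200)
print(gt_boxes.tolist())  # [[0.1, 0.25, 0.6, 0.75]]
```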


Tensorflow serving No versions of servable found under base path

I was following this tutorial to serve my object detection model with tensorflow serving. I am using tensorflow object detection for generating the model. I have created a frozen model using this exporter (the generated frozen model works when used from a python script).

The frozen graph directory has the following contents (nothing in the variables directory):

variables/

saved_model.pb

Now when I try to serve the model using the following command,

tensorflow_model_server --port=9000 --model_name=ssd --model_base_path=/serving/ssd_frozen/

It always shows me

...

tensorflow_serving/model_servers/server_core.cc:421] (Re-)adding model: ssd 2017-08-07 10:22:43.892834: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:262] No versions of servable ssd found under base path /serving/ssd_frozen/ 2017-08-07 10:22:44.892901: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:262] No versions of servable ssd found under base path /serving/ssd_frozen/

...

Accepted answer (36 votes):

I ran into the same problem. The reason is that the Object Detection API does not assign a version number to the model when exporting a detection model, while TensorFlow Serving requires you to specify a version number so you can choose between different versions of a model to serve. In your case, you should put the detection model (the .pb file and the variables folder) under the folder /serving/ssd_frozen/1/. This assigns the model to version 1, and TensorFlow Serving will load it automatically since you only have one version. By default, TensorFlow Serving serves the latest version (i.e. the largest version number).

Note that after creating the 1/ folder, you still need to set model_base_path to --model_base_path=/serving/ssd_frozen/.
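The directory layout described above can be created with a few lines of stdlib Python; the paths and the helper name here are illustrative, not part of either API:

```python
import os
import shutil

def version_model_dir(export_dir, serving_base_path, version=1):
    """Copy an exported detection model (saved_model.pb plus the variables
    folder) into <serving_base_path>/<version>/ so that TensorFlow Serving
    finds a servable version under the base path."""
    target = os.path.join(serving_base_path, str(version))
    os.makedirs(target, exist_ok=True)
    shutil.copy(os.path.join(export_dir, "saved_model.pb"), target)
    variables_src = os.path.join(export_dir, "variables")
    if os.path.isdir(variables_src):
        shutil.copytree(variables_src, os.path.join(target, "variables"))
    return target
```

After running this with version=1, `--model_base_path` should still point at the base directory (not at the 1/ subfolder), and tensorflow_model_server will pick up version 1 on its own.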

Thank you very much for your help. This solved the problem. :) - Ultraviolet Aug 7 '18 at 19:10

I know I'm late to this thread, but could you post the full expected value of the command that should be used with his initial command? Thanks! - contractorwolf Aug 22 '18 at 2:26

@contractorwolf It's been two years and I don't remember the details, but I believe you don't need any modification to the command above to make it work. I also believe the Object Detection API should have fixed this. Sorry I can't help more. - Xinyao Wang Mar 26 at 0:10

Thanks for your reply @XinyaoWang. I eventually decided your answer was correct, but I didn't fully understand it until I played around and created a numbered folder like you suggested. Then it worked just like your command above. Much appreciated! - contractorwolf Mar 27 at 15:00

God bless your soul - farza Apr 11 at 21:01

+90

For newer versions of TensorFlow Serving, as you may know, the model format exported by SessionBundle is no longer supported; it has been replaced by SavedModelBuilder.

I think it is better to restore a session from the old model format and then export it with SavedModelBuilder, which lets you specify the version of the model.

  def export_saved_model(version, path, sess=None):
      tf.app.flags.DEFINE_integer('version', version, 'version number of the model.')
      tf.app.flags.DEFINE_string('work_dir', path, 'your old model directory.')
      tf.app.flags.DEFINE_string('model_dir', '/tmp/model_name', 'saved model directory')
      FLAGS = tf.app.flags.FLAGS
      # ... (the middle of this snippet was lost in extraction) ...
      #     prediction_signature}, legacy_init_op=legacy_init_op)
      builder.save()
      print('Export SavedModel!')

You can find the main part of the code above in the TensorFlow Serving examples. In the end it will generate a SavedModel in a format that can be served.

Great example. Thanks!! - Hephaestus Apr 14 at 6:29

0
votes
answers
53 views
+10

Where to write the Python script for an image classifier

0

I want to learn how to retrain an image classifier using transfer learning. I followed the steps shown in this tutorial.

I successfully retrained the model, but ran into a problem at the last step, where he writes the Python script that classifies with the newly trained model. In the video he starts writing the code at 4:18, but doesn't specify where. I tried writing it inside the Docker container, but it gave me a no module named platform error and a NameError: name 'sys' is not defined error. I tried writing it on my local machine and got errors because I don't have the dependencies installed locally. I'm not sure where to write the Python code for the last step of this tutorial. Any help is appreciated.

Terminal session and errors:

[email protected]:/tensorflow# python 
Python 2.7.12 (default, Nov 19 2016, 06:48:10) 
[GCC 5.4.0 20160609] on linux2 
Type "help", "copyright", "credits" or "license" for more information. 
>>> import tensorflow as tf, sys 
Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "tensorflow/__init__.py", line 24, in <module> 
    from tensorflow.python import * 
    File "tensorflow/python/__init__.py", line 49, in <module> 
    from tensorflow.python import pywrap_tensorflow 
    File "tensorflow/python/pywrap_tensorflow.py", line 25, in <module> 
    from tensorflow.python.platform import self_check 
ImportError: No module named platform 
>>> image_path = sys.argv[1] 
Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
NameError: name 'sys' is not defined 
Answer 1:

The correct approach in this case is to just use any IDE and write the Python script into a .py file on your local machine, then copy that file into the Docker container. If you don't have the tensorflow library installed locally, tensorflow-specific statements (such as import tensorflow as tf) will raise errors, but you can ignore them since the file will ultimately run inside the container.

Let's assume you named the .py file myScript.py. To transfer the .py file from your computer to the container, run the following command: docker cp myScript.py myContainer:/myScript.py

Make sure to place the .py file somewhere in the container where you can easily find it. When retraining the model, make sure you run the correct script by running: python tensorflow/examples/image_retraining/myScript.py This will point to the correct file, and it will be read just as if you were doing everything locally.

Check out these links for more information:

Copying files from host to Docker container

https://docs.docker.com/engine/reference/commandline/cp/

Answer 2:

Your tensorflow installation in the container is broken. I think the best option is to uninstall and install tensorflow again, but make sure you get the latest version (1.4 as of now).

The second error, "NameError: name 'sys' is not defined", simply follows from the first one. The statement import tensorflow as tf raised an error, which is why the sys module was never imported.
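This chaining of the two errors can be reproduced without tensorflow at all: in `import a, b`, the names are bound left to right, so a failure on the first module leaves the second one unbound (the module name below is deliberately fake):

```python
def try_import():
    try:
        # fails on the first name, so 'sys' is never bound in this scope
        import definitely_not_a_real_module_xyz, sys
    except ImportError:
        return 'sys' in locals()
    return True

print(try_import())  # False
```

That is exactly what happened in the session above: the tensorflow import aborted, so the later `sys.argv[1]` hit a name that was never created.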

Once you have tensorflow running, you can write your code in the Python console, in an IPython notebook, or as standalone Python scripts.

+0

When reinstalling tensorflow, should I still use the 'gcr.io/tensorflow/tensorflow:latest-devel' command? And how do I check my version? –

+0

Have you tried a simple 'pip install tensorflow'? – Maxim

+0

I am using the tensorflow image in a Docker container. Would the pip install command work if I had everything locally on my computer? –

0
votes
answers
35 views
+10

TensorFlow object detection training crashes Python

0

I am training the ssd_mobilenet_v1_coco_2017_11_17 model from the TensorFlow object detection model zoo. My dataset consists of satellite images, and my goal is to detect vehicles in the images. However, training fails with a Python memory problem. I am training on CPU, and my Windows 10 machine has 32 GB of RAM. The TF record file used for training is 1.7 GB.

(screenshot of the crash omitted)

I cannot determine the cause of this failure.

Please help.

+0

github issue: https://github.com/tensorflow/models/issues/2891 – Mandroid

Answer 1:

Try tuning the batch size, the queue capacity, and the number of reader threads.
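These knobs live in the training pipeline.config. The fragment below is an illustrative assumption: the field names are how I remember the Object Detection API's train and input-reader protos from this era, and the values are placeholders, so check them against your own config before relying on them:

```
train_config {
  batch_size: 8          # lower this first if training runs out of memory
  # ...
}
train_input_reader {
  queue_capacity: 500    # smaller queues keep fewer decoded examples in RAM
  min_after_dequeue: 250
  num_readers: 2
  # ...
}
```

Reducing batch_size usually has the largest effect on peak memory; the queue settings mainly bound how many examples are buffered at once.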

0
votes
answers
40 views
+10

TensorFlow(神經網絡)FC輸出大小

0

I'm not sure whether my question is TF-specific or just about NNs, but I have created a CNN using tensorflow, and I'm having a hard time understanding why the size of the output of my fully connected layer is what it is.

X = tf.placeholder(tf.float32, [None, 32, 32, 3]) 
y = tf.placeholder(tf.int64, [None]) 
is_training = tf.placeholder(tf.bool) 

# define model 
def complex_model(X,y,is_training): 

    # conv layer 
    wconv_1 = tf.get_variable('wconv_1', [7 ,7 ,3, 32]) 
    bconv_1 = tf.get_variable('bconv_1', [32]) 

    # affine layer 1 
    w1 = tf.get_variable('w1', [26*26*32//4, 1024]) #LINE 13 
    b1 = tf.get_variable('b1', [1024]) 

    # batchnorm params 

    bn_gamma = tf.get_variable('bn_gamma', shape=[32]) #scale 
    bn_beta = tf.get_variable('bn_beta', shape=[32]) #shift 

    # affine layer 2 
    w2 = tf.get_variable('w2', [1024, 10]) 
    b2 = tf.get_variable('b2', [10]) 


    c1_out = tf.nn.conv2d(X, wconv_1, strides=[1, 1, 1, 1], padding="VALID") + bconv_1 
    activ_1 = tf.nn.relu(c1_out) 

    mean, var = tf.nn.moments(activ_1, axes=[0,1,2], keep_dims=False) 
    bn = tf.nn.batch_normalization(activ_1, mean, var, bn_gamma, bn_beta, 1e-6) 
    mp = tf.nn.max_pool(bn, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') 

    affine_in_flat = tf.reshape(mp, [-1, 26*26*32//4]) 

    affine_1 = tf.matmul(affine_in_flat, w1) + b1 
    activ_2 = tf.nn.relu(affine_1) 

    affine_2 = tf.matmul(activ_2, w2) + b2 
    return affine_2 

    #print(affine_2.shape) 

At line 13, where I set the value of w1, I would have expected to just put:

w1 = tf.get_variable('w1', [26*26*32, 1024]) 

but if I run the code with the line shown above and

affine_in_flat = tf.reshape(mp, [-1, 26*26*32]) 

my output size is (16, 10) instead of the (64, 10) I would expect given the following initialization:

x = np.random.randn(64, 32, 32,3) 
with tf.Session() as sess: 
    with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0" 
     tf.global_variables_initializer().run() 
     #print("train", x.size, is_training, y_out) 
     ans = sess.run(y_out,feed_dict={X:x,is_training:True}) 
     %timeit sess.run(y_out,feed_dict={X:x,is_training:True}) 
     print(ans.shape) 
     print(np.array_equal(ans.shape, np.array([64, 10]))) 

Can somebody tell me why I need to divide the size of w1[0] by 4?

Answer 1:

Adding print statements for bn and mp, I get:

bn<tf.Tensor 'batchnorm/add_1:0' shape=(?, 26, 26, 32) dtype=float32>

mp<tf.Tensor 'MaxPool:0' shape=(?, 13, 13, 32) dtype=float32>

This seems to be due to strides=[1, 2, 2, 1] on the max pool (and to keep 26, 26 you would also need padding='SAME').
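The shapes above can be checked with plain arithmetic; this is a sketch of the standard VALID-padding size formula, not TF code:

```python
def conv_out(size, kernel, stride):
    """Output size along one dimension for VALID-padding conv/pool."""
    return (size - kernel) // stride + 1

h = conv_out(32, 7, 1)          # 7x7 conv, stride 1: 32 -> 26
h = conv_out(h, 2, 2)           # 2x2 max pool, stride 2: 26 -> 13
flat = h * h * 32               # 13 * 13 * 32 = 5408
print(flat, 26 * 26 * 32 // 4)  # both 5408, hence the division by 4
```

This also explains the (16, 10) output: reshaping the (64, 13, 13, 32) pooled tensor with [-1, 26*26*32] must preserve the total element count, so the batch dimension becomes 64 * 5408 / 21632 = 16, while flattening with 13*13*32 (equal to 26*26*32 // 4) keeps the batch at 64.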

0
votes
answers
34 views
+10

Nothing gets detected in the TensorFlow object detection API

8

I am trying to run the TensorFlow object detection API example. I am following the sentdex videos. The example code runs fine and also displays the images used for testing the results, but no bounding boxes are drawn around the detected objects. Just the plain image is shown, without any errors.

I am using this code: This Github link

This is the result after running the example code.

(result image omitted)

Another image without any detections.

(result image omitted)

What am I missing here? The code is included in the link above, and there are no error logs.

These are the results for boxes, scores, classes, num, in that order.

[[[ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.20880508 1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.20934391 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.20880508 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.74907303 0.14624023 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ]]] 
[[ 0.03587547 0.02224986 0.0186467 0.01096812 0.01003207 0.00654409 
    0.00633549 0.00534311 0.0049596 0.00410213 0.00362371 0.00339186 
    0.00308251 0.00303347 0.00293389 0.00277099 0.00269575 0.00266825 
    0.00263925 0.00263331 0.00258657 0.00240822 0.0022581 0.00186967 
    0.00184311 0.00180467 0.00177475 0.00173655 0.00172811 0.00171935 
    0.00171891 0.00170288 0.00163755 0.00162967 0.00160273 0.00156545 
    0.00153615 0.00140941 0.00132407 0.00131524 0.0013105 0.00129431 
    0.0012582 0.0012553 0.00122365 0.00119186 0.00115651 0.00115186 
    0.00112369 0.00107097 0.00105805 0.00104338 0.00102719 0.00102337 
    0.00100349 0.00097762 0.00096851 0.00092741 0.00088506 0.00087696 
    0.0008734 0.00084826 0.00084135 0.00083513 0.00083398 0.00082068 
    0.00080583 0.00078979 0.00078059 0.00077476 0.00075448 0.00074426 
    0.00074421 0.00070195 0.00068741 0.00068138 0.00067262 0.00067125 
    0.00067033 0.00066035 0.00064729 0.00064205 0.00061964 0.00061794 
    0.00060835 0.00060465 0.00059548 0.00059479 0.00059461 0.00059436 
    0.00059426 0.00059411 0.00059406 0.00059392 0.00059365 0.00059351 
    0.00059191 0.00058798 0.00058682 0.00058148]] 
[[ 1. 1. 18. 32. 62. 60. 63. 67. 61. 49. 31. 84. 50. 54. 
    15. 44. 44. 49. 31. 56. 88. 28. 88. 52. 17. 32. 38. 75. 
    3. 33. 48. 59. 35. 57. 47. 51. 19. 27. 72. 4. 84. 6. 
    55. 20. 58. 65. 61. 82. 42. 34. 40. 21. 43. 64. 39. 62. 
    36. 22. 79. 46. 16. 40. 41. 77. 16. 48. 78. 77. 89. 86. 
    27. 8. 87. 5. 25. 70. 80. 76. 75. 67. 65. 37. 2. 9. 
    73. 63. 29. 30. 69. 66. 68. 26. 71. 12. 45. 83. 13. 85. 
    74. 23.]] 
[ 100.] 
[[[ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.00784111 0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.   1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ] 
    [ 0.   0.68494415 1.   1.  ]]] 
[[ 0.01044297 0.0098214 0.00942165 0.00846471 0.00613666 0.00398615 
    0.00357754 0.0030054 0.00255861 0.00236574 0.00232631 0.00220291 
    0.00185227 0.0016354 0.0015979 0.00145072 0.00143661 0.00141369 
    0.00122685 0.00118978 0.00108457 0.00104251 0.00099215 0.00096401 
    0.0008708 0.00084773 0.00080484 0.00078507 0.00078378 0.00076876 
    0.00072774 0.00071732 0.00071348 0.00070812 0.00069253 0.0006762 
    0.00067269 0.00059905 0.00059367 0.000588 0.00056114 0.0005504 
    0.00051472 0.00051057 0.00050973 0.00048486 0.00047297 0.00046204 
    0.00044787 0.00043259 0.00042987 0.00042673 0.00041978 0.00040494 
    0.00040087 0.00039576 0.00039059 0.00037274 0.00036831 0.00036417 
    0.00036119 0.00034645 0.00034479 0.00034078 0.00033771 0.00033605 
    0.0003333 0.0003304 0.0003294 0.00032326 0.00031787 0.00031773 
    0.00031748 0.00031741 0.00031732 0.00031729 0.00031724 0.00031722 
    0.00031717 0.00031708 0.00031702 0.00031579 0.00030416 0.00030222 
    0.00029739 0.00029726 0.00028289 0.0002653 0.00026325 0.00024584 
    0.00024221 0.00024156 0.00023911 0.00023335 0.00021619 0.0002001 
    0.00019127 0.00018342 0.00017273 0.00015509]] 
[[ 38. 1. 1. 16. 25. 38. 64. 24. 49. 56. 20. 3. 28. 2. 
    48. 19. 21. 62. 50. 6. 8. 7. 67. 18. 35. 53. 39. 55. 
    15. 57. 72. 52. 10. 5. 42. 43. 76. 22. 82. 4. 61. 23. 
    17. 16. 87. 62. 51. 60. 36. 58. 59. 33. 31. 54. 70. 11. 
    40. 79. 31. 9. 41. 77. 80. 34. 90. 89. 73. 13. 84. 32. 
    63. 29. 30. 69. 66. 68. 26. 71. 12. 45. 83. 14. 44. 78. 
    85. 46. 47. 19. 65. 74. 37. 27. 63. 88. 28. 81. 86. 75. 
    27. 18.]] 
[ 100.] 

EDIT: As per the referenced answer, this works when we use the faster_rcnn_resnet101_coco_2017_11_08 model. But that model is more accurate, which is why it is slower. I want this application to run at high speed, because I will use it for real-time (webcam) object detection. So I need to use the faster model (ssd_mobilenet_v1_coco_2017_11_08).

+2

Can you show us the values of (boxes, scores, classes, num)? I'd like to see whether any objects were detected at all. – Zephro

+0

How do I do that? @Zephro – Kaushal28

+0

Is it ok if I print the coordinates of the boxes? – Kaushal28

Answer 1 (score -1):

The function visualize_boxes_and_labels_on_image_array contains the following code:

for i in range(min(max_boxes_to_draw, boxes.shape[0])): 
    if scores is None or scores[i] > min_score_thresh: 

So a score must be greater than min_score_thresh (default value 0.5) for a box to be drawn; you can check whether any of your scores are larger than that.
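The effect of that threshold can be sketched in plain Python. This is my own illustration of the filtering logic quoted above, with made-up box values; the real parameter names are min_score_thresh and max_boxes_to_draw:

```python
def boxes_to_draw(boxes, scores, min_score_thresh=0.5, max_boxes_to_draw=20):
    """Keep at most max_boxes_to_draw boxes whose score exceeds
    min_score_thresh, mirroring the loop in the API's visualizer."""
    kept = [box for box, score in zip(boxes, scores) if score > min_score_thresh]
    return kept[:max_boxes_to_draw]

scores = [0.0359, 0.0222, 0.0186]      # like the question's output: all tiny
boxes = [[0.749, 0.146, 1.0, 1.0]] * 3
print(boxes_to_draw(boxes, scores))    # [] -- no score clears 0.5, nothing drawn
```

Since every score in the printed output above is far below 0.5, the visualizer draws nothing, which matches the "plain image, no errors" symptom.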

+0

Then why are the scores not greater than 0.5 even when the detection is correct? – Kaushal28

+0

So if the model 'ssd_mobilenet_v1_coco_2017_11_08' has a problem, does that mean training with it will also have problems? I tried to train it, but it got stuck at the very first step: global_step/sec: 0. It stayed stuck for almost 9 hours. I am training on CPU. – Mandroid

+0

@Kaushal28 You can use the model 'faster_rcnn_resnet101_coco_2017_11_08' instead of 'ssd_mobilenet_v1_coco_2017_11_08' –

Answer 2 (score 2):

The workaround is to change #MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_08' to MODEL_NAME = 'faster_rcnn_resnet101_coco_2017_11_08'.

Answer 3 (score 1):

You can use the older 'ssd_mobilenet_v1...' and your program will run fine with boxes (I just ran it and it works). Here is the link to the old version. Hopefully they fix the newer one soon!

Answer 4 (score 2):

The problem comes from the model 'ssd_mobilenet_v1_coco_2017_11_08'.

Workaround: change to a different version, 'ssd_mobilenet_v1_coco_11_06_2017' (this model type is one of the fastest; changing to another model type would make it slower, which is not what you want).

Just change one line of code:

# What model to download. 
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017' 

When I used your code, nothing was displayed, but when I replaced the model with 'ssd_mobilenet_v1_coco_11_06_2017' from my previous experiment, it worked fine.

Answer 5:

I used to have the same problem.

But a new model, 'ssd_mobilenet_v1_coco_2017_11_17', was uploaded recently.

I tried it and it worked like a charm :)