Calling a YOLO model from multiple threads causes errors (including when the YOLO model is loaded with cv2)

李魔佛 posted an article • 0 comments • 10 views • 2019-12-05 10:27 • from related topics

Placeholder.
 

Keras YOLO object detection: a beginner's tutorial

李魔佛 posted an article • 0 comments • 33 views • 2019-11-28 16:03 • from related topics

Placeholder.

RuntimeError: `get_session` is not available when using TensorFlow 2.0.

李魔佛 posted an article • 0 comments • 156 views • 2019-11-28 15:10 • from related topics

This is a TensorFlow version issue: get_session was removed in 2.0 and later. You either have to adapt the code or downgrade TensorFlow, for example by installing version 1.15:
pip install tensorflow==1.15 --upgrade
Here, we will see how we can upgrade our code to work with TensorFlow 2.0 instead.

This error usually shows up when loading a pre-trained model with a TensorFlow session/graph, or when building a Flask API over a pre-trained model and loading the model into a TensorFlow graph to avoid session collisions while the application handles multiple requests at once, i.e. under multi-threading.

After upgrading to TensorFlow 2.0, I also started seeing the above error in one of my projects, where I had built an API around a pre-trained model with Flask. So I looked through the TensorFlow 2.0 documentation for a workaround, so that I could avoid this runtime error and upgrade my code to work with TensorFlow 2.0 rather than downgrading to TensorFlow 1.x.

I had a project for which I had also written a tutorial on how to build a Flask API on a trained Keras text-classification model and then use it in production.

But this project stopped working after the TensorFlow upgrade and raised the runtime error above.

The stack trace looked something like this:

File "/Users/Upasana/Documents/playground/deploy-keras-model-in-production/src/main.py", line 37, in model_predict
    with backend.get_session().graph.as_default() as g:
File "/Users/Upasana/Documents/playground/deploy-keras-model-in-production/venv-tf2/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 379, in get_session
    '`get_session` is not available '
RuntimeError: `get_session` is not available when using TensorFlow 2.0.
Related code to get the model:
with backend.get_session().graph.as_default() as g:
    model = SentimentService.get_model1()
Related code to load the model:
def load_deep_model(self, model):
    json_file = open('./src/mood-saved-models/' + model + '.json', 'r')
    loaded_model_json = json_file.read()
    loaded_model = model_from_json(loaded_model_json)

    loaded_model.load_weights("./src/mood-saved-models/" + model + ".h5")

    loaded_model._make_predict_function()
    return loaded_model
get_session was removed in TensorFlow 2.0 and is therefore no longer available.
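Before showing the fix used here, note there is also a quick stop-gap if your code goes through tf.keras rather than standalone Keras: TensorFlow 2.x still ships the old session API under its compatibility namespace. This is only a hedged sketch of that option, not what the rest of this article does:

import tensorflow as tf

# The old session API survives under tf.compat.v1 as a temporary bridge;
# it only helps code that uses tf.keras, not the standalone keras package.
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.keras.backend.get_session()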

So, in order to load the saved model, we switched methods: rather than using Keras's load_model, we use TensorFlow to load the model, so that it can be loaded under a distribution strategy.

Note: the tf.distribute.Strategy API provides an abstraction for distributing your training across multiple processing units.

New code to get the model:
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
    model = SentimentService.get_model1()
New code to load the model:
def load_deep_model(self, model):
    loaded_model = tf.keras.models.load_model("./src/mood-saved-models/" + model + ".h5")
    return loaded_model
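For context, here is a minimal sketch of how such a strategy-loaded model might sit behind a Flask route; the route name, input format and model path below are assumptions for illustration, not taken from the original project:

from flask import Flask, request, jsonify
import numpy as np
import tensorflow as tf

app = Flask(__name__)

# Load the model once at startup under a distribution strategy (assumed path).
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.models.load_model("./src/mood-saved-models/model1.h5")

@app.route("/predict", methods=["POST"])
def predict():
    # Assumes the request body already carries a vectorised feature array.
    features = np.array(request.get_json()["features"], dtype="float32")
    scores = model.predict(features[None, ...])  # add a batch dimension
    return jsonify({"scores": scores.tolist()})

if __name__ == "__main__":
    app.run()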
Switching to tf.keras.models.load_model under a distribution strategy solved the runtime error of get_session not being available in TensorFlow 2.0. You can also refer to the TensorFlow 2.0 upgrade article.

Hope this will solve your problem too. Thanks for following this article.

YOLO voc_label source code analysis

李魔佛 posted an article • 0 comments • 28 views • 2019-11-27 15:19 • from related topics

Purpose: read every XML annotation file, convert the bounding-box coordinates into relative coordinates, and save the result under the corresponding file name.
 
import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join


sets = [('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test')]

# the 20 VOC classes
classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

# size: (w, h)
# box: (x_min, x_max, y_min, y_max)
def convert(size, box):
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0  # box centre
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh  # everything converted to relative coordinates
    return (x, y, w, h)


def convert_annotation(year, image_id):
    # open the two matching files: the XML annotation and the output label file
    in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml' % (year, image_id))
    out_file = open('VOCdevkit/VOC%s/labels/%s.txt' % (year, image_id), 'w')

    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)

    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:  # skip objects marked difficult == 1
            continue
        cls_id = classes.index(cls)  # index of the class in the list
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        # pass in the image (w, h) and the box boundaries
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')


wd = getcwd()

for year, image_set in sets:
    # e.g. ('2012', 'train'); loops over the 5 splits
    # create the labels directory once
    if not os.path.exists('VOCdevkit/VOC%s/labels/' % (year)):
        os.makedirs('VOCdevkit/VOC%s/labels/' % (year))

    # image ids for this split
    image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt' % (year, image_set)).read().strip().split()

    # the image list for this split is written to this file
    list_file = open('%s_%s.txt' % (year, image_set), 'w')

    for image_id in image_ids:
        list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n' % (wd, year, image_id))  # full path
        convert_annotation(year, image_id)
    list_file.close()
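To make the normalisation concrete, here is a small sanity check of convert() with made-up numbers (they are not taken from the VOC data):

# A 500x375 image and a box spanning x: 48..240, y: 195..371.
print(convert((500, 375), (48.0, 240.0, 195.0, 371.0)))
# roughly (0.288, 0.7547, 0.384, 0.4693): centre x, centre y, width and height,
# all expressed as fractions of the image size, which is the format YOLO labels expect.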


Computing numerical gradients in a neural network: Python code

李魔佛 posted an article • 0 comments • 522 views • 2019-05-07 19:12 • from related topics

Based on the book Deep Learning from Scratch (《深度学习入门》), implemented in Python.
 
import matplotlib.pyplot as plt
import numpy as np
import time
from collections import OrderedDict

def softmax(a):
    a = a - np.max(a)
    exp_a = np.exp(a)
    exp_a_sum = np.sum(exp_a)
    return exp_a / exp_a_sum


def cross_entropy_error(t, y):
    delta = 1e-7
    s = -1 * np.sum(t * np.log(y + delta))
    # print('cross entropy ', s)
    return s


class simpleNet:
    def __init__(self):
        self.W = np.random.randn(2, 3)

    def predict(self, x):
        print('current w', self.W)
        return np.dot(x, self.W)

    def loss(self, x, t):
        z = self.predict(x)
        # print(z)
        # print(z.ndim)
        y = softmax(z)
        # print('y', y)
        loss = cross_entropy_error(t, y)  # t is the one-hot target, y the predicted probabilities
        return loss


def numerical_gradient_(f, x):  # handles the 2-D case, and higher dimensions too
    h = 1e-4  # 0.0001
    grad = np.zeros_like(x)

    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        idx = it.multi_index
        print('idx', idx)
        tmp_val = x[idx]
        x[idx] = float(tmp_val) + h
        fxh1 = f(x)  # f(x+h)
        print('fxh1 ', fxh1)
        # print('current W', net.W)
        x[idx] = tmp_val - h
        fxh2 = f(x)  # f(x-h)
        print('fxh2 ', fxh2)
        # print('next current W ', net.W)
        grad[idx] = (fxh1 - fxh2) / (2 * h)

        x[idx] = tmp_val  # restore the original value
        it.iternext()

    return grad


net = simpleNet()
x = np.array([0.6, 0.9])
t = np.array([0.0, 0.0, 1.0])

def f(W):
    return net.loss(x, t)

grads = numerical_gradient_(f, net.W)
print(grads)
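As a quick sanity check of the central-difference rule grad = (f(x+h) - f(x-h)) / (2h), the same helper can be run on a function whose gradient is known analytically; this small example is not from the book:

def sum_of_squares(v):
    return np.sum(v ** 2)

p = np.array([3.0, 4.0])
print(numerical_gradient_(sum_of_squares, p))  # roughly [6. 8.], matching the analytic gradient 2*v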

Running Keras fails with No module named 'numpy.core._multiarray_umath'

李魔佛 posted an article • 0 comments • 3068 views • 2019-03-26 18:10 • from related topics

Python here was installed via Anaconda. The error:
ModuleNotFoundError                       Traceback (most recent call last)
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
ImportError: numpy.core.multiarray failed to import

The above exception was the direct cause of the following exception:

SystemError Traceback (most recent call last)
C:\ProgramData\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_)

SystemError: <class '_frozen_importlib._ModuleLockManager'> returned a result with an error set
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
ImportError: numpy.core._multiarray_umath failed to import
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
ImportError: numpy.core.umath failed to import
2019-03-26 18:01:48.643796: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr
 
I had not run into this problem before, so I suspected that the numpy bundled with conda was too old. Upgrading numpy to the latest version with
pip install numpy -U
solved the problem.
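A quick way to confirm the upgrade took effect (a hedged sketch, run in the same environment):

import numpy as np
import numpy.core._multiarray_umath  # raises ImportError again if the upgrade did not fix it
print(np.__version__)                # should now show the freshly installed version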

Thoughts on the book Machine Learning Online: Understanding the Alibaba Cloud Machine Learning Platform (《机器学习在线 解析阿里云机器学习平台》)

李魔佛 posted an article • 0 comments • 374 views • 2019-03-10 20:18 • from related topics

Alibaba Cloud's online machine learning platform now charges a fee (more than 1,700 RMB a year), so after reading the second chapter I was getting ready to try things hands-on on the platform. Unfortunately, I cannot recommend this book; if you have already bought the Alibaba Cloud learning platform, it may be worth a look. Then again, anyone who buys the Alibaba platform is presumably going to deploy on it directly, and this book is only the most basic of introductions, so it is better skipped.
 

The Bunch data type in sklearn

李魔佛 posted an article • 0 comments • 4935 views • 2018-06-07 19:10 • from related topics

sklearn ships with some built-in data, for example in the datasets package. Load one of those datasets and check its type, and you get:
<class 'sklearn.utils.Bunch'>
So what kind of data structure is that?
 
Printing it out shows that it is simply a dictionary-like structure, so a Bunch can be used just like a dict. For example, data['image'] retrieves the value stored under the key image.
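A minimal sketch of what that looks like in practice, using load_digits as an example dataset (the available keys differ from dataset to dataset):

from sklearn.datasets import load_digits

data = load_digits()
print(type(data))            # <class 'sklearn.utils.Bunch'> (module path may vary by sklearn version)
print(data.keys())           # dict-style access to the available keys
print(data['images'].shape)  # the same value is also reachable as data.images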
 

Original post: http://30daydo.com/article/325
Reposting is welcome; please credit the source.

Why does sklearn's SGDClassifier give different results on every run?

李魔佛 posted an article • 0 comments • 2191 views • 2018-06-07 17:14 • from related topics

With the following code:
sgdc = SGDClassifier()
sgdc.fit(X_train, y_train)
sgdc_predict_y = sgdc.predict(X_test)
print('Accuracy of SGD classifier ', sgdc.score(X_test, y_test))
print(classification_report(y_test, sgdc_predict_y, target_names=['Benign', 'Malignant']))
the output is different on every run. Why?
 
 
Because you are relying on a default parameter:
SGDClassifier(random_state=None)
The random seed is therefore different on every run, so the results can differ as well. If you set the random seed to a fixed value, you will get the same result every time.
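A minimal sketch of the fix, assuming the usual train/test arrays from the question are already defined:

from sklearn.linear_model import SGDClassifier

sgdc = SGDClassifier(random_state=42)  # any fixed integer makes the runs reproducible
sgdc.fit(X_train, y_train)
print(sgdc.score(X_test, y_test))      # now identical on every run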
 

Original post: http://30daydo.com/article/323
Reposting is welcome; please credit the source.