tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.

  (0) Internal:  Blas GEMM launch failed : a.shape=(512, 16), b.shape=(16, 16), m=512, n=16, k=16

            [[node sequential/dense/MatMul (defined at O:/PycharmProjects/catdogtf2.2/004.py:74) ]]

  (1) Internal:  Blas GEMM launch failed : a.shape=(512, 16), b.shape=(16, 16), m=512, n=16, k=16

            [[node sequential/dense/MatMul (defined at O:/PycharmProjects/catdogtf2.2/004.py:74) ]]

            [[gradient_tape/sequential/embedding/embedding_lookup/Reshape/_46]]
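This "Blas GEMM launch failed" error on RTX-class cards is very often cuBLAS failing to grab GPU memory because TensorFlow pre-allocated almost all of it. A common workaround (a sketch, not from the original script) is to enable memory growth before any model is built:

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand instead of reserving
# nearly all of it up front; this commonly works around
# "Blas GEMM launch failed" on RTX GPUs. Must run before GPU init.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

On a machine with no visible GPU the loop simply does nothing, so the snippet is safe to leave in place.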

 

 

 

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=53066

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

PyDev console: starting.

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

>>> runfile('O:/PycharmProjects/catdogtf2.2/004.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 05:20:09.882744: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2.2.0

O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\keras\datasets\imdb.py:155: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray

  x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx])

O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\keras\datasets\imdb.py:156: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray

  x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:])
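The two VisibleDeprecationWarnings above come from NumPy 1.19+: building an ndarray from ragged (different-length) sequences without `dtype=object` is deprecated. The warning is harmless here, but for reference the silenced form looks like this (a minimal sketch, not part of the original script):

```python
import numpy as np

# Reviews have different lengths, so the result is a 1-D object array
# whose elements are the original Python lists.
reviews = [[1, 14, 22, 16], [1, 2, 3]]
arr = np.array(reviews, dtype=object)
print(arr.shape)  # (2,)
```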

Training samples: 25000, Labels: 25000

[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]

[   1   14   22   16   43  530  973 1622 1385   65  458 4468   66 3941

    4  173   36  256    5   25  100   43  838  112   50  670    2    9

   35  480  284    5  150    4  172  112  167    2  336  385   39    4

  172 4536 1111   17  546   38   13  447    4  192   50   16    6  147

 2025   19   14   22    4 1920 4613  469    4   22   71   87   12   16

   43  530   38   76   15   13 1247    4   22   17  515   17   12   16

  626   18    2    5   62  386   12    8  316    8  106    5    4 2223

 5244   16  480   66 3785   33    4  130   12   16   38  619    5   25

  124   51   36  135   48   25 1415   33    6   22   12  215   28   77

   52    5   14  407   16   82    2    8    4  107  117 5952   15  256

    4    2    7 3766    5  723   36   71   43  530  476   26  400  317

   46    7    4    2 1029   13  104   88    4  381   15  297   98   32

 2071   56   26  141    6  194 7486   18    4  226   22   21  134  476

   26  480    5  144   30 5535   18   51   36   28  224   92   25  104

    4  226   65   16   38 1334   88   12   16  283    5   16 4472  113

  103   32   15   16 5345   19  178   32    0    0    0    0    0    0

    0    0    0    0    0    0    0    0    0    0    0    0    0    0

    0    0    0    0    0    0    0    0    0    0    0    0    0    0

    0    0    0    0]

2020-08-11 05:20:18.596801: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 05:20:18.637406: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:20:18.637937: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:20:18.645961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:20:18.652048: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:20:18.655131: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:20:18.663508: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:20:18.668992: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:20:18.685127: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:20:18.685528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:20:18.686031: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 05:20:18.697734: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1a1122e60f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:20:18.698127: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 05:20:18.698600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:20:18.699083: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:20:18.699331: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:20:18.699595: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:20:18.699820: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:20:18.700003: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:20:18.700183: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:20:18.700378: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:20:18.700642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:20:19.358531: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 05:20:19.358724: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 05:20:19.358810: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 05:20:19.359171: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 05:20:19.362586: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1a113c59140 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:20:19.362846: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

Model: "sequential"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

embedding (Embedding)        (None, None, 16)          160000   

_________________________________________________________________

global_average_pooling1d (Gl (None, 16)                0        

_________________________________________________________________

dense (Dense)                (None, 16)                272      

_________________________________________________________________

dense_1 (Dense)              (None, 1)                 17       

=================================================================

Total params: 160,289

Trainable params: 160,289

Non-trainable params: 0

_________________________________________________________________

Epoch 1/40

2020-08-11 05:20:20.301884: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

30/30 [==============================] - 1s 23ms/step - loss: 0.6917 - accuracy: 0.5509 - val_loss: 0.6898 - val_accuracy: 0.5002

Epoch 2/40

30/30 [==============================] - 0s 16ms/step - loss: 0.6846 - accuracy: 0.6117 - val_loss: 0.6797 - val_accuracy: 0.7024

Epoch 3/40

30/30 [==============================] - 0s 17ms/step - loss: 0.6703 - accuracy: 0.7325 - val_loss: 0.6630 - val_accuracy: 0.7302

Epoch 4/40

30/30 [==============================] - 0s 16ms/step - loss: 0.6471 - accuracy: 0.7548 - val_loss: 0.6368 - val_accuracy: 0.7576

Epoch 5/40

30/30 [==============================] - 0s 16ms/step - loss: 0.6132 - accuracy: 0.7815 - val_loss: 0.6014 - val_accuracy: 0.7876

Epoch 6/40

30/30 [==============================] - 0s 16ms/step - loss: 0.5710 - accuracy: 0.8108 - val_loss: 0.5609 - val_accuracy: 0.8045

Epoch 7/40

30/30 [==============================] - 1s 17ms/step - loss: 0.5249 - accuracy: 0.8304 - val_loss: 0.5189 - val_accuracy: 0.8153

Epoch 8/40

30/30 [==============================] - 0s 16ms/step - loss: 0.4791 - accuracy: 0.8481 - val_loss: 0.4785 - val_accuracy: 0.8324

Epoch 9/40

30/30 [==============================] - 0s 16ms/step - loss: 0.4364 - accuracy: 0.8610 - val_loss: 0.4428 - val_accuracy: 0.8430

Epoch 10/40

30/30 [==============================] - 0s 16ms/step - loss: 0.3986 - accuracy: 0.8729 - val_loss: 0.4132 - val_accuracy: 0.8495

Epoch 11/40

30/30 [==============================] - 0s 16ms/step - loss: 0.3664 - accuracy: 0.8797 - val_loss: 0.3890 - val_accuracy: 0.8559

Epoch 12/40

30/30 [==============================] - 0s 16ms/step - loss: 0.3392 - accuracy: 0.8879 - val_loss: 0.3684 - val_accuracy: 0.8636

Epoch 13/40

30/30 [==============================] - 0s 16ms/step - loss: 0.3158 - accuracy: 0.8953 - val_loss: 0.3523 - val_accuracy: 0.8677

Epoch 14/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2961 - accuracy: 0.9003 - val_loss: 0.3392 - val_accuracy: 0.8716

Epoch 15/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2785 - accuracy: 0.9069 - val_loss: 0.3283 - val_accuracy: 0.8743

Epoch 16/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2631 - accuracy: 0.9111 - val_loss: 0.3204 - val_accuracy: 0.8760

Epoch 17/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2498 - accuracy: 0.9151 - val_loss: 0.3122 - val_accuracy: 0.8796

Epoch 18/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2375 - accuracy: 0.9199 - val_loss: 0.3064 - val_accuracy: 0.8795

Epoch 19/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2257 - accuracy: 0.9237 - val_loss: 0.3023 - val_accuracy: 0.8803

Epoch 20/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2153 - accuracy: 0.9259 - val_loss: 0.2972 - val_accuracy: 0.8824

Epoch 21/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2055 - accuracy: 0.9297 - val_loss: 0.2950 - val_accuracy: 0.8819

Epoch 22/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1970 - accuracy: 0.9327 - val_loss: 0.2915 - val_accuracy: 0.8834

Epoch 23/40

30/30 [==============================] - 0s 17ms/step - loss: 0.1881 - accuracy: 0.9379 - val_loss: 0.2898 - val_accuracy: 0.8832

Epoch 24/40

30/30 [==============================] - 0s 17ms/step - loss: 0.1804 - accuracy: 0.9411 - val_loss: 0.2882 - val_accuracy: 0.8842

Epoch 25/40

30/30 [==============================] - 1s 17ms/step - loss: 0.1733 - accuracy: 0.9441 - val_loss: 0.2867 - val_accuracy: 0.8850

Epoch 26/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1656 - accuracy: 0.9472 - val_loss: 0.2858 - val_accuracy: 0.8861

Epoch 27/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1593 - accuracy: 0.9504 - val_loss: 0.2857 - val_accuracy: 0.8858

Epoch 28/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1529 - accuracy: 0.9529 - val_loss: 0.2857 - val_accuracy: 0.8862

Epoch 29/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1469 - accuracy: 0.9555 - val_loss: 0.2863 - val_accuracy: 0.8862

Epoch 30/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1416 - accuracy: 0.9579 - val_loss: 0.2871 - val_accuracy: 0.8865

Epoch 31/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1360 - accuracy: 0.9603 - val_loss: 0.2889 - val_accuracy: 0.8858

Epoch 32/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1310 - accuracy: 0.9619 - val_loss: 0.2897 - val_accuracy: 0.8867

Epoch 33/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1260 - accuracy: 0.9637 - val_loss: 0.2917 - val_accuracy: 0.8860

Epoch 34/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1210 - accuracy: 0.9655 - val_loss: 0.2925 - val_accuracy: 0.8856

Epoch 35/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1174 - accuracy: 0.9663 - val_loss: 0.2950 - val_accuracy: 0.8871

Epoch 36/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1123 - accuracy: 0.9689 - val_loss: 0.2974 - val_accuracy: 0.8854

Epoch 37/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1082 - accuracy: 0.9699 - val_loss: 0.3024 - val_accuracy: 0.8819

Epoch 38/40

30/30 [==============================] - 1s 17ms/step - loss: 0.1044 - accuracy: 0.9715 - val_loss: 0.3019 - val_accuracy: 0.8837

Epoch 39/40

30/30 [==============================] - 0s 17ms/step - loss: 0.1002 - accuracy: 0.9731 - val_loss: 0.3048 - val_accuracy: 0.8841

Epoch 40/40

30/30 [==============================] - 0s 16ms/step - loss: 0.0964 - accuracy: 0.9750 - val_loss: 0.3096 - val_accuracy: 0.8812

782/782 - 1s - loss: 0.3282 - accuracy: 0.8718

[0.32820913195610046, 0.8718400001525879]
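For reference, `model.evaluate` returns the loss followed by each metric passed to `compile()`; with `metrics=['accuracy']` that list is `[loss, accuracy]`. A small sketch unpacking the values printed above (the numbers are copied from this run):

```python
# evaluate() returned [binary_crossentropy loss, accuracy] for the test set.
results = [0.32820913195610046, 0.8718400001525879]  # values from the run above
loss, accuracy = results
print(f"test accuracy: {accuracy:.2%}")  # roughly 87%
```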

 

 

 

 Funny how things that used to work fine never work on the first try when I come back to them, lol. Anyway, putting this material together is fun. Send requests for the tutorials source to mynameis@hajunho.com (quietly offloading the work).

 

import tensorflow as tf

from tensorflow import keras

 

import numpy as np

 

print(tf.__version__)

 

imdb = keras.datasets.imdb

 

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

 

print("Training samples: {}, Labels: {}".format(len(train_data), len(train_labels)))

print(train_data[0])

 

len(train_data[0]), len(train_data[1])

 

# Dictionary mapping words to integer indices

word_index = imdb.get_word_index()

 

# The first few indices are reserved

word_index = {k:(v+3) for k,v in word_index.items()}

word_index["<PAD>"] = 0

word_index["<START>"] = 1

word_index["<UNK>"] = 2  # unknown

word_index["<UNUSED>"] = 3

 

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

 

def decode_review(text):

    return ' '.join([reverse_word_index.get(i, '?') for i in text])

 

decode_review(train_data[0])

 

train_data = keras.preprocessing.sequence.pad_sequences(train_data,

                                                        value=word_index["<PAD>"],

                                                        padding='post',

                                                        maxlen=256)

 

test_data = keras.preprocessing.sequence.pad_sequences(test_data,

                                                       value=word_index["<PAD>"],

                                                       padding='post',

                                                       maxlen=256)

 

len(train_data[0]), len(train_data[1])

 

print(train_data[0])

 

# The input size is the vocabulary size used for the movie review dataset (10,000 words)

vocab_size = 10000

 

model = keras.Sequential()

model.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,)))

model.add(keras.layers.GlobalAveragePooling1D())

model.add(keras.layers.Dense(16, activation='relu'))

model.add(keras.layers.Dense(1, activation='sigmoid'))

 

model.summary()

 

model.compile(optimizer='adam',

              loss='binary_crossentropy',

              metrics=['accuracy'])

 

x_val = train_data[:10000]

partial_x_train = train_data[10000:]

 

y_val = train_labels[:10000]

partial_y_train = train_labels[10000:]

 

history = model.fit(partial_x_train,

                    partial_y_train,

                    epochs=40,

                    batch_size=512,

                    validation_data=(x_val, y_val),

                    verbose=1)

 

results = model.evaluate(test_data,  test_labels, verbose=2)

 

print(results)

 

history_dict = history.history

history_dict.keys()

 

import matplotlib.pyplot as plt

 

acc = history_dict['accuracy']

val_acc = history_dict['val_accuracy']

loss = history_dict['loss']

val_loss = history_dict['val_loss']

 

epochs = range(1, len(acc) + 1)

 

# "bo" draws blue dots

plt.plot(epochs, loss, 'bo', label='Training loss')

# "b" draws a solid blue line

plt.plot(epochs, val_loss, 'b', label='Validation loss')

plt.title('Training and validation loss')

plt.xlabel('Epochs')

plt.ylabel('Loss')

plt.legend()

 

plt.show()

 

plt.clf()   # clear the figure

 

plt.plot(epochs, acc, 'bo', label='Training acc')

plt.plot(epochs, val_acc, 'b', label='Validation acc')

plt.title('Training and validation accuracy')

plt.xlabel('Epochs')

plt.ylabel('Accuracy')

plt.legend()

 

plt.show()

 

 

#

# Copyright (c) 2017 François Chollet

#

# Permission is hereby granted, free of charge, to any person obtaining a

# copy of this software and associated documentation files (the "Software"),

# to deal in the Software without restriction, including without limitation

# the rights to use, copy, modify, merge, publish, distribute, sublicense,

# and/or sell copies of the Software, and to permit persons to whom the

# Software is furnished to do so, subject to the following conditions:

#

# The above copyright notice and this permission notice shall be included in

# all copies or substantial portions of the Software.

#

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR

# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,

# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL

# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER

# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING

# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER

# DEALINGS IN THE SOFTWARE.

 

 
