tokenizer = tfds.deprecated.text.Tokenizer()

That's what it has changed to now. The world really moves fast.

www.tensorflow.org/datasets/api_docs/python/tfds/deprecated/text/Tokenizer

 

tfds.deprecated.text.Tokenizer (TensorFlow Datasets): Splits a string into tokens, and joins them back.
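
For reference, here is a minimal sketch of the renamed API (my addition, assuming the class keeps the tokenize/join behaviour described on that docs page):

import tensorflow_datasets as tfds

# The tokenizer moved from tfds.features.text to tfds.deprecated.text in newer tfds releases.
tokenizer = tfds.deprecated.text.Tokenizer()

tokens = tokenizer.tokenize('not a cloud to be seen')
print(tokens)                  # e.g. ['not', 'a', 'cloud', 'to', 'be', 'seen']
print(tokenizer.join(tokens))  # joins the tokens back into a single string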

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=64082

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)]

Type 'copyright', 'credits' or 'license' for more information

IPython 7.17.0 -- An enhanced Interactive Python. Type '?' for help.

PyDev console: using IPython 7.17.0

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

In[2]: runfile('O:/PycharmProjects/catdogtf2.2/010.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 23:43:20.730164: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 23:43:23.946122: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 23:43:24.000619: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 23:43:24.001194: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 23:43:24.011525: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 23:43:24.019517: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 23:43:24.024019: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 23:43:24.032859: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 23:43:24.038094: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 23:43:24.052864: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 23:43:24.053230: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 23:43:24.053758: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 23:43:24.067367: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1c9cd3779b0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 23:43:24.067961: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 23:43:24.068675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 23:43:24.069504: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 23:43:24.069864: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 23:43:24.070004: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 23:43:24.070148: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 23:43:24.070297: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 23:43:24.070429: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 23:43:24.070567: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 23:43:24.070797: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 23:43:25.001396: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 23:43:25.001678: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 23:43:25.001850: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 23:43:25.002247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 23:43:25.006712: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1c9f7068730 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 23:43:25.007107: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

(<tf.Tensor: shape=(), dtype=string, numpy=b'not a cloud to be seen neither on plain nor mountain. These last'>, <tf.Tensor: shape=(), dtype=int64, numpy=2>)

(<tf.Tensor: shape=(), dtype=string, numpy=b'To win the heart; there Love, there young Desire,'>, <tf.Tensor: shape=(), dtype=int64, numpy=1>)

(<tf.Tensor: shape=(), dtype=string, numpy=b'To parching airs beside the running stream;'>, <tf.Tensor: shape=(), dtype=int64, numpy=0>)

(<tf.Tensor: shape=(), dtype=string, numpy=b'Their people as the pastured flock the ram'>, <tf.Tensor: shape=(), dtype=int64, numpy=0>)

(<tf.Tensor: shape=(), dtype=string, numpy=b"A vessel's plank is smooth and even laid,">, <tf.Tensor: shape=(), dtype=int64, numpy=1>)

b'not a cloud to be seen neither on plain nor mountain. These last'

[213, 12965, 228, 9770, 15265, 11378, 3288, 17101, 5332, 13656, 4080, 8818, 14602]

Epoch 1/3

2020-08-11 23:43:42.133082: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 23:43:52.484863: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:184] Filling up shuffle buffer (this may take a while): 35287 of 50000

2020-08-11 23:43:54.470819: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:233] Shuffle buffer filled.

2020-08-11 23:43:54.500071: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

697/697 [==============================] - 13s 19ms/step - loss: 0.5028 - accuracy: 0.7522 - val_loss: 0.3978 - val_accuracy: 0.8140

Epoch 2/3

2020-08-11 23:44:19.091046: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:184] Filling up shuffle buffer (this may take a while): 35584 of 50000

2020-08-11 23:44:21.195127: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:233] Shuffle buffer filled.

697/697 [==============================] - 12s 18ms/step - loss: 0.2949 - accuracy: 0.8707 - val_loss: 0.4052 - val_accuracy: 0.8206

Epoch 3/3

2020-08-11 23:44:43.657965: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:184] Filling up shuffle buffer (this may take a while): 35537 of 50000

2020-08-11 23:44:45.518081: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:233] Shuffle buffer filled.

697/697 [==============================] - 12s 17ms/step - loss: 0.2191 - accuracy: 0.9055 - val_loss: 0.3737 - val_accuracy: 0.8298

79/79 [==============================] - 2s 20ms/step - loss: 0.3737 - accuracy: 0.8298

Eval loss: 0.374, Eval accuracy: 0.830

 

 

import tensorflow as tf

 

import tensorflow_datasets as tfds

import os

 

DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'

FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']

 

for name in FILE_NAMES:

    text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL + name)

 

parent_dir = os.path.dirname(text_dir)

 

parent_dir

 

 

def labeler(example, index):

    return example, tf.cast(index, tf.int64)

 

 

labeled_data_sets = []

 

for i, file_name in enumerate(FILE_NAMES):

    lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name))

    labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))

    labeled_data_sets.append(labeled_dataset)

 

BUFFER_SIZE = 50000

BATCH_SIZE = 64

TAKE_SIZE = 5000

 

all_labeled_data = labeled_data_sets[0]

for labeled_dataset in labeled_data_sets[1:]:

    all_labeled_data = all_labeled_data.concatenate(labeled_dataset)

 

all_labeled_data = all_labeled_data.shuffle(

    BUFFER_SIZE, reshuffle_each_iteration=False)

 

for ex in all_labeled_data.take(5):

    print(ex)

 

tokenizer = tfds.features.text.Tokenizer()  # in newer tfds releases: tfds.deprecated.text.Tokenizer()

 

vocabulary_set = set()

for text_tensor, _ in all_labeled_data:

    some_tokens = tokenizer.tokenize(text_tensor.numpy())

    vocabulary_set.update(some_tokens)

 

vocab_size = len(vocabulary_set)

vocab_size

 

encoder = tfds.features.text.TokenTextEncoder(vocabulary_set)  # moved under tfds.deprecated.text in newer releases

 

example_text = next(iter(all_labeled_data))[0].numpy()

print(example_text)

 

encoded_example = encoder.encode(example_text)

print(encoded_example)

 

 

def encode(text_tensor, label):

    encoded_text = encoder.encode(text_tensor.numpy())

    return encoded_text, label

 

 

def encode_map_fn(text, label):

    # py_func doesn't set the shape of the returned tensors.

    encoded_text, label = tf.py_function(encode,

                                         inp=[text, label],

                                         Tout=(tf.int64, tf.int64))

 

    # `tf.data.Datasets` work best if all components have a shape set

    #  so set the shapes manually:

    encoded_text.set_shape([None])

    label.set_shape([])

 

    return encoded_text, label

 

 

all_encoded_data = all_labeled_data.map(encode_map_fn)

 

train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)

train_data = train_data.padded_batch(BATCH_SIZE)

 

test_data = all_encoded_data.take(TAKE_SIZE)

test_data = test_data.padded_batch(BATCH_SIZE)

 

sample_text, sample_labels = next(iter(test_data))

 

sample_text[0], sample_labels[0]

 

vocab_size += 1  # +1 because 0 is reserved for padding

 

model = tf.keras.Sequential()

 

model.add(tf.keras.layers.Embedding(vocab_size, 64))

 

model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))

 

# One or more dense layers.

# Edit the list in the `for` line to experiment with layer sizes.

for units in [64, 64]:

    model.add(tf.keras.layers.Dense(units, activation='relu'))

 

# Output layer. The first argument is the number of labels.

model.add(tf.keras.layers.Dense(3))

 

model.compile(optimizer='adam',

              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),

              metrics=['accuracy'])

 

model.fit(train_data, epochs=3, validation_data=test_data)

 

eval_loss, eval_acc = model.evaluate(test_data)

 

print('\nEval loss: {:.3f}, Eval accuracy: {:.3f}'.format(eval_loss, eval_acc))
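
As a quick sanity check (my addition, not part of the original run), a single line can be pushed back through the trained model, reusing the encoder and model objects defined above and the example line printed in the log:

sample_line = 'not a cloud to be seen neither on plain nor mountain. These last'
sample_ids = encoder.encode(sample_line)               # list of token ids
logits = model.predict(tf.expand_dims(sample_ids, 0))  # shape (1, 3), one logit per translation
print(tf.argmax(logits, axis=-1).numpy())              # predicted translator index (0, 1 or 2)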

 


O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=63288

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)]

Type 'copyright', 'credits' or 'license' for more information

IPython 7.17.0 -- An enhanced Interactive Python. Type '?' for help.

PyDev console: using IPython 7.17.0

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

runfile('O:/PycharmProjects/catdogtf2.2/009.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 23:38:38.375911: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

Downloading data from https://storage.googleapis.com/applied-dl/heart.csv

16384/13273 [=====================================] - 0s 0us/step

2020-08-11 23:38:41.529035: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 23:38:41.572391: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 23:38:41.572897: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 23:38:41.585503: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 23:38:41.592012: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 23:38:41.595184: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 23:38:41.602528: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 23:38:41.607053: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 23:38:41.619809: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 23:38:41.620202: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 23:38:41.620769: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 23:38:41.630669: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2417f28da80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 23:38:41.631101: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 23:38:41.631514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 23:38:41.631942: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 23:38:41.632082: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 23:38:41.632363: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 23:38:41.632610: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 23:38:41.632981: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 23:38:41.633288: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 23:38:41.633583: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 23:38:41.633919: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 23:38:42.309719: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 23:38:42.309959: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 23:38:42.310107: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 23:38:42.310492: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 23:38:42.313704: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2412eb83730 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 23:38:42.314096: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

Features: [ 63.    1.    1.  145.  233.    1.    2.  150.    0.    2.3   3.    0.

   2. ], Target: 0

Features: [ 67.    1.    4.  160.  286.    0.    2.  108.    1.    1.5   2.    3.

   3. ], Target: 1

Features: [ 67.    1.    4.  120.  229.    0.    2.  129.    1.    2.6   2.    2.

   4. ], Target: 0

Features: [ 37.    1.    3.  130.  250.    0.    0.  187.    0.    3.5   3.    0.

   3. ], Target: 0

Features: [ 41.    0.    2.  130.  204.    0.    2.  172.    0.    1.4   1.    0.

   3. ], Target: 0

Epoch 1/15

WARNING:tensorflow:Layer dense is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

2020-08-11 23:38:43.103814: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

303/303 [==============================] - 1s 2ms/step - loss: 1.0260 - accuracy: 0.6832

Epoch 2/15

303/303 [==============================] - 1s 2ms/step - loss: 0.7020 - accuracy: 0.7558

Epoch 3/15

303/303 [==============================] - 1s 3ms/step - loss: 0.6972 - accuracy: 0.7294

Epoch 4/15

303/303 [==============================] - 1s 3ms/step - loss: 0.6353 - accuracy: 0.7294

Epoch 5/15

303/303 [==============================] - 1s 3ms/step - loss: 0.6456 - accuracy: 0.7492

Epoch 6/15

303/303 [==============================] - 1s 2ms/step - loss: 0.6248 - accuracy: 0.7492

Epoch 7/15

303/303 [==============================] - 1s 2ms/step - loss: 0.4927 - accuracy: 0.7855

Epoch 8/15

303/303 [==============================] - 1s 3ms/step - loss: 0.5099 - accuracy: 0.7756

Epoch 9/15

303/303 [==============================] - 1s 2ms/step - loss: 0.5669 - accuracy: 0.7492

Epoch 10/15

303/303 [==============================] - 1s 2ms/step - loss: 0.5558 - accuracy: 0.7888

Epoch 11/15

303/303 [==============================] - 1s 2ms/step - loss: 0.5408 - accuracy: 0.7624

Epoch 12/15

303/303 [==============================] - 1s 2ms/step - loss: 0.4900 - accuracy: 0.7987

Epoch 13/15

303/303 [==============================] - 1s 2ms/step - loss: 0.4866 - accuracy: 0.8053

Epoch 14/15

303/303 [==============================] - 1s 3ms/step - loss: 0.4382 - accuracy: 0.7855

Epoch 15/15

303/303 [==============================] - 1s 3ms/step - loss: 0.4960 - accuracy: 0.7657

({'age': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([63, 67, 67, 37, 41, 56, 62, 57, 63, 53, 57, 56, 56, 44, 52, 57])>, 'sex': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1])>, 'cp': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 4, 4, 3, 2, 2, 4, 4, 4, 4, 4, 2, 3, 2, 3, 3])>, 'trestbps': <tf.Tensor: shape=(16,), dtype=int32, numpy=

array([145, 160, 120, 130, 130, 120, 140, 120, 130, 140, 140, 140, 130,

       120, 172, 150])>, 'chol': <tf.Tensor: shape=(16,), dtype=int32, numpy=

array([233, 286, 229, 250, 204, 236, 268, 354, 254, 203, 192, 294, 256,

       263, 199, 168])>, 'fbs': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0])>, 'restecg': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([2, 2, 2, 0, 2, 0, 2, 0, 2, 2, 0, 2, 2, 0, 0, 0])>, 'thalach': <tf.Tensor: shape=(16,), dtype=int32, numpy=

array([150, 108, 129, 187, 172, 178, 160, 163, 147, 155, 148, 153, 142,

       173, 162, 174])>, 'exang': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0])>, 'oldpeak': <tf.Tensor: shape=(16,), dtype=float32, numpy=

array([2.3, 1.5, 2.6, 3.5, 1.4, 0.8, 3.6, 0.6, 1.4, 3.1, 0.4, 1.3, 0.6,

       0. , 0.5, 1.6], dtype=float32)>, 'slope': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([3, 2, 2, 3, 1, 1, 3, 1, 2, 3, 2, 2, 2, 1, 1, 1])>, 'ca': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([0, 3, 2, 0, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0, 0, 0])>, 'thal': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([2, 3, 4, 3, 3, 3, 3, 3, 4, 4, 2, 3, 2, 4, 4, 3])>}, <tf.Tensor: shape=(16,), dtype=int64, numpy=array([0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0], dtype=int64)>)

Epoch 1/15

19/19 [==============================] - 0s 3ms/step - loss: 167.4701 - accuracy: 0.2739

Epoch 2/15

19/19 [==============================] - 0s 3ms/step - loss: 139.3064 - accuracy: 0.2739

Epoch 3/15

19/19 [==============================] - 0s 3ms/step - loss: 111.9945 - accuracy: 0.2739

Epoch 4/15

19/19 [==============================] - 0s 4ms/step - loss: 84.2170 - accuracy: 0.2739

Epoch 5/15

19/19 [==============================] - 0s 3ms/step - loss: 53.5864 - accuracy: 0.2739

Epoch 6/15

19/19 [==============================] - 0s 3ms/step - loss: 19.6813 - accuracy: 0.3069

Epoch 7/15

19/19 [==============================] - 0s 3ms/step - loss: 3.5970 - accuracy: 0.6766

Epoch 8/15

19/19 [==============================] - 0s 3ms/step - loss: 3.0850 - accuracy: 0.7030

Epoch 9/15

19/19 [==============================] - 0s 3ms/step - loss: 2.6416 - accuracy: 0.6403

Epoch 10/15

19/19 [==============================] - 0s 3ms/step - loss: 2.4151 - accuracy: 0.6766

Epoch 11/15

19/19 [==============================] - 0s 3ms/step - loss: 2.2261 - accuracy: 0.6766

Epoch 12/15

19/19 [==============================] - 0s 3ms/step - loss: 2.0685 - accuracy: 0.6865

Epoch 13/15

19/19 [==============================] - 0s 3ms/step - loss: 1.9093 - accuracy: 0.6865

Epoch 14/15

19/19 [==============================] - 0s 3ms/step - loss: 1.7673 - accuracy: 0.6865

Epoch 15/15

19/19 [==============================] - 0s 3ms/step - loss: 1.6325 - accuracy: 0.6898

 

import pandas as pd

import tensorflow as tf

 

csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/applied-dl/heart.csv')

 

df = pd.read_csv(csv_file)

 

df.head()

 

df.dtypes

 

df['thal'] = pd.Categorical(df['thal'])

df['thal'] = df.thal.cat.codes

 

df.head()

 

target = df.pop('target')

 

dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))

 

for feat, targ in dataset.take(5):

    print('Features: {}, Target: {}'.format(feat, targ))

 

tf.constant(df['thal'])

 

train_dataset = dataset.shuffle(len(df)).batch(1)

 

 

def get_compiled_model():

    model = tf.keras.Sequential([

        tf.keras.layers.Dense(10, activation='relu'),

        tf.keras.layers.Dense(10, activation='relu'),

        tf.keras.layers.Dense(1)

    ])

 

    model.compile(optimizer='adam',

                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),

                  metrics=['accuracy'])

    return model

 

 

model = get_compiled_model()

model.fit(train_dataset, epochs=15)

 

inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}

x = tf.stack(list(inputs.values()), axis=-1)

 

x = tf.keras.layers.Dense(10, activation='relu')(x)

output = tf.keras.layers.Dense(1)(x)

 

model_func = tf.keras.Model(inputs=inputs, outputs=output)

 

model_func.compile(optimizer='adam',

                   loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),

                   metrics=['accuracy'])

 

dict_slices = tf.data.Dataset.from_tensor_slices((df.to_dict('list'), target.values)).batch(16)

 

for dict_slice in dict_slices.take(1):

    print(dict_slice)

 

model_func.fit(dict_slices, epochs=15)
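
A short follow-up sketch (my addition) showing how the dict-input functional model can be queried with one batch; it assumes the model_func and dict_slices objects defined above:

for features, labels in dict_slices.take(1):
    logits = model_func.predict(features)  # the functional model accepts the feature dict directly
    probs = tf.sigmoid(logits)             # convert logits to probabilities (from_logits=True was used)
    print(probs.numpy()[:5].round(3))
    print(labels.numpy()[:5])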

 

 


O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=61540

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)]

Type 'copyright', 'credits' or 'license' for more information

IPython 7.17.0 -- An enhanced Interactive Python. Type '?' for help.

PyDev console: using IPython 7.17.0

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

runfile('O:/PycharmProjects/catdogtf2.2/008.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 23:31:05.534081: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 23:31:10.139176: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 23:31:10.210855: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 23:31:10.211409: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 23:31:10.348189: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 23:31:10.468280: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 23:31:10.493316: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 23:31:10.595905: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 23:31:10.652508: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 23:31:10.838579: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 23:31:10.838907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 23:31:10.842771: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 23:31:10.877618: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x217e0c4ae60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 23:31:10.877938: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 23:31:10.879107: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 23:31:10.879515: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 23:31:10.879707: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 23:31:10.879947: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 23:31:10.880133: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 23:31:10.880332: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 23:31:10.880536: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 23:31:10.880744: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 23:31:10.881010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 23:31:13.480632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 23:31:13.480840: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 23:31:13.480971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 23:31:13.482523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 23:31:13.488531: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x217aceaba50 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 23:31:13.488852: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

Epoch 1/10

2020-08-11 23:31:14.484151: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

938/938 [==============================] - 2s 2ms/step - loss: 3.1091 - sparse_categorical_accuracy: 0.8768

Epoch 2/10

938/938 [==============================] - 2s 2ms/step - loss: 0.4619 - sparse_categorical_accuracy: 0.9322

Epoch 3/10

938/938 [==============================] - 2s 2ms/step - loss: 0.3516 - sparse_categorical_accuracy: 0.9468

Epoch 4/10

938/938 [==============================] - 2s 2ms/step - loss: 0.2997 - sparse_categorical_accuracy: 0.9552

Epoch 5/10

938/938 [==============================] - 2s 2ms/step - loss: 0.2718 - sparse_categorical_accuracy: 0.9610

Epoch 6/10

938/938 [==============================] - 2s 2ms/step - loss: 0.2567 - sparse_categorical_accuracy: 0.9646

Epoch 7/10

938/938 [==============================] - 2s 2ms/step - loss: 0.2304 - sparse_categorical_accuracy: 0.9681

Epoch 8/10

938/938 [==============================] - 2s 2ms/step - loss: 0.2215 - sparse_categorical_accuracy: 0.9702

Epoch 9/10

938/938 [==============================] - 2s 2ms/step - loss: 0.2072 - sparse_categorical_accuracy: 0.9721

Epoch 10/10

938/938 [==============================] - 2s 2ms/step - loss: 0.2026 - sparse_categorical_accuracy: 0.9743

157/157 [==============================] - 0s 2ms/step - loss: 0.6995 - sparse_categorical_accuracy: 0.9564

 

import numpy as np

import tensorflow as tf

 

DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'

 

path = tf.keras.utils.get_file('mnist.npz', DATA_URL)

with np.load(path) as data:

    train_examples = data['x_train']

    train_labels = data['y_train']

    test_examples = data['x_test']

    test_labels = data['y_test']

 

train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))

test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))

 

BATCH_SIZE = 64

SHUFFLE_BUFFER_SIZE = 100

 

train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)

test_dataset = test_dataset.batch(BATCH_SIZE)

 

model = tf.keras.Sequential([

    tf.keras.layers.Flatten(input_shape=(28, 28)),

    tf.keras.layers.Dense(128, activation='relu'),

    tf.keras.layers.Dense(10)

])

 

model.compile(optimizer=tf.keras.optimizers.RMSprop(),

              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),

              metrics=['sparse_categorical_accuracy'])

 

model.fit(train_dataset, epochs=10)

 

model.evaluate(test_dataset)
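
A minimal sketch (my addition) of pulling predictions for a few test digits, reusing the model, test_examples and test_labels defined above:

logits = model.predict(test_examples[:5])  # raw logits, shape (5, 10)
pred = tf.argmax(logits, axis=-1).numpy()
print('predicted:', pred)
print('actual:   ', test_labels[:5])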

 

 

 


ctx=ctx)

  File "O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute

    inputs, attrs, num_outputs)

tensorflow.python.framework.errors_impl.InternalError:  Blas GEMM launch failed : a.shape=(32, 784), b.shape=(784, 416), m=32, n=416, k=784

            [[node sequential/dense/MatMul (defined at O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\kerastuner\engine\multi_execution_tuner.py:96) ]] [Op:__inference_train_function_612]

Function call stack:

train_function
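
The Blas GEMM launch failed error above is usually a GPU memory problem on this kind of setup; a hedged workaround (my addition, not verified against this exact run) is to enable memory growth before any model is built:

import tensorflow as tf

# Ask TensorFlow to allocate GPU memory incrementally instead of reserving it all up front.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)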

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=55536

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)]

Type 'copyright', 'credits' or 'license' for more information

IPython 7.17.0 -- An enhanced Interactive Python. Type '?' for help.

PyDev console: using IPython 7.17.0

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

In[2]: runfile('O:/PycharmProjects/catdogtf2.2/007.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 05:34:45.873806: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

INFO:tensorflow:Reloading Oracle from existing project my_dir\intro_to_kt\oracle.json

2020-08-11 05:34:50.283008: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 05:34:50.339156: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:34:50.339806: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:34:50.351520: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:34:50.359913: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:34:50.364738: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:34:50.374217: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:34:50.380346: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:34:50.413055: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:34:50.413370: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:34:50.413871: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 05:34:50.427756: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1d956d60970 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:34:50.428254: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 05:34:50.428743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:34:50.429271: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:34:50.429544: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:34:50.429793: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:34:50.430049: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:34:50.430300: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:34:50.430560: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:34:50.430817: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:34:50.431157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:34:51.313967: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 05:34:51.314219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 05:34:51.314470: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 05:34:51.314864: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 05:34:51.319706: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1d9802ad530 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:34:51.319979: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

Epoch 1/2

2020-08-11 05:34:52.438439: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

1875/1875 [==============================] - 6s 3ms/step - loss: 0.6090 - accuracy: 0.7996 - val_loss: 0.4837 - val_accuracy: 0.8321

Epoch 2/2

1875/1875 [==============================] - 5s 3ms/step - loss: 0.4334 - accuracy: 0.8519 - val_loss: 0.4366 - val_accuracy: 0.8480

[Trial complete]

[Trial summary]

 |-Trial ID: aaeeb7bcca2fc76db68c3ad3e310cecd

 |-Score: 0.8479999899864197

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.0001

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 416

Epoch 1/2

1875/1875 [==============================] - 4s 2ms/step - loss: 0.6126 - accuracy: 0.8012 - val_loss: 0.4814 - val_accuracy: 0.8346

Epoch 2/2

1875/1875 [==============================] - 4s 2ms/step - loss: 0.4354 - accuracy: 0.8519 - val_loss: 0.4488 - val_accuracy: 0.8424

[Trial complete]

[Trial summary]

 |-Trial ID: 12814fc7320f16f22710c9a8c269170e

 |-Score: 0.8424000144004822

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.0001

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 352

Epoch 1/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5419 - accuracy: 0.8075 - val_loss: 0.5194 - val_accuracy: 0.8094

Epoch 2/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4433 - accuracy: 0.8429 - val_loss: 0.5086 - val_accuracy: 0.8158

[Trial complete]

[Trial summary]

 |-Trial ID: c4fff7531ca3a6969006c400938b85a1

 |-Score: 0.8158000111579895

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.01

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 96

Epoch 1/2

1875/1875 [==============================] - 4s 2ms/step - loss: 0.5358 - accuracy: 0.8119 - val_loss: 0.6265 - val_accuracy: 0.8015

Epoch 2/2

1875/1875 [==============================] - 4s 2ms/step - loss: 0.4350 - accuracy: 0.8443 - val_loss: 0.4645 - val_accuracy: 0.8439

[Trial complete]

[Trial summary]

 |-Trial ID: ec06e969cbb76d2fb1ad4f264391d014

 |-Score: 0.8439000248908997

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.01

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 416

Epoch 1/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5177 - accuracy: 0.8195 - val_loss: 0.4538 - val_accuracy: 0.8375

Epoch 2/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3902 - accuracy: 0.8606 - val_loss: 0.4393 - val_accuracy: 0.8451

[Trial complete]

[Trial summary]

 |-Trial ID: c42f239d4c723bcf0e7a03a6d08811da

 |-Score: 0.8450999855995178

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 64

Epoch 1/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5298 - accuracy: 0.8122 - val_loss: 0.5086 - val_accuracy: 0.8244

Epoch 2/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4337 - accuracy: 0.8436 - val_loss: 0.4526 - val_accuracy: 0.8363

[Trial complete]

[Trial summary]

 |-Trial ID: 95f9684bb63af658265fd9ff2ae1623a

 |-Score: 0.8363000154495239

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.01

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 192

Epoch 1/2

1875/1875 [==============================] - 4s 2ms/step - loss: 0.5481 - accuracy: 0.8112 - val_loss: 0.4766 - val_accuracy: 0.8269

Epoch 2/2

1875/1875 [==============================] - 4s 2ms/step - loss: 0.4346 - accuracy: 0.8457 - val_loss: 0.4664 - val_accuracy: 0.8368

[Trial complete]

[Trial summary]

 |-Trial ID: 5faf0d2eddbc67fb81570f53b9c9cf09

 |-Score: 0.8367999792098999

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.01

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 448

Epoch 1/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5613 - accuracy: 0.8004 - val_loss: 0.5258 - val_accuracy: 0.8135

Epoch 2/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4710 - accuracy: 0.8332 - val_loss: 0.4863 - val_accuracy: 0.8204

[Trial complete]

[Trial summary]

 |-Trial ID: 50758537cee209ba7ce1005d343d09da

 |-Score: 0.8203999996185303

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.01

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 32

Epoch 1/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5370 - accuracy: 0.8104 - val_loss: 0.4769 - val_accuracy: 0.8300

Epoch 2/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4406 - accuracy: 0.8430 - val_loss: 0.4628 - val_accuracy: 0.8363

[Trial complete]

[Trial summary]

 |-Trial ID: fa9f01ffb454aa479866752e810edbc1

 |-Score: 0.8363000154495239

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.01

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 256

Epoch 1/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4737 - accuracy: 0.8302 - val_loss: 0.4213 - val_accuracy: 0.8463

Epoch 2/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3562 - accuracy: 0.8694 - val_loss: 0.3660 - val_accuracy: 0.8701

[Trial complete]

[Trial summary]

 |-Trial ID: 4c4399268ccd2b7d62689f2e06de9e91

 |-Score: 0.8701000213623047

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 384

Epoch 1/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4978 - accuracy: 0.8265 - val_loss: 0.4572 - val_accuracy: 0.8358

Epoch 2/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3791 - accuracy: 0.8635 - val_loss: 0.4036 - val_accuracy: 0.8550

[Trial complete]

[Trial summary]

 |-Trial ID: ddb27855abe1d4b89719dd06610f0fa0

 |-Score: 0.8550000190734863

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 128

Epoch 1/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.6642 - accuracy: 0.7836 - val_loss: 0.5190 - val_accuracy: 0.8224

Epoch 2/2

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4622 - accuracy: 0.8428 - val_loss: 0.4652 - val_accuracy: 0.8373

[Trial complete]

[Trial summary]

 |-Trial ID: 5d91cbe4452612810982997a0a2a4b28

 |-Score: 0.8373000025749207

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.0001

 |-tuner/bracket: 2

 |-tuner/epochs: 2

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 192

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4765 - accuracy: 0.8317 - val_loss: 0.4468 - val_accuracy: 0.8362

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3597 - accuracy: 0.8690 - val_loss: 0.3860 - val_accuracy: 0.8580

[Trial complete]

[Trial summary]

 |-Trial ID: 3131589c4b900c107a9423455eeaff10

 |-Score: 0.8579999804496765

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 2

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 2

 |-tuner/round: 1

 |-tuner/trial_id: 4c4399268ccd2b7d62689f2e06de9e91

 |-units: 384

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4935 - accuracy: 0.8268 - val_loss: 0.4095 - val_accuracy: 0.8550

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3712 - accuracy: 0.8662 - val_loss: 0.3852 - val_accuracy: 0.8626

[Trial complete]

[Trial summary]

 |-Trial ID: ceafb91456da418a31d16505cd2402e4

 |-Score: 0.8626000285148621

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 2

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 2

 |-tuner/round: 1

 |-tuner/trial_id: ddb27855abe1d4b89719dd06610f0fa0

 |-units: 128

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.6121 - accuracy: 0.7983 - val_loss: 0.4894 - val_accuracy: 0.8328

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4304 - accuracy: 0.8534 - val_loss: 0.4399 - val_accuracy: 0.8473

[Trial complete]

[Trial summary]

 |-Trial ID: 0c04ed5138b9f602660b69378a623a2a

 |-Score: 0.8472999930381775

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.0001

 |-tuner/bracket: 2

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 2

 |-tuner/round: 1

 |-tuner/trial_id: aaeeb7bcca2fc76db68c3ad3e310cecd

 |-units: 416

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5186 - accuracy: 0.8202 - val_loss: 0.4477 - val_accuracy: 0.8440

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3938 - accuracy: 0.8603 - val_loss: 0.4160 - val_accuracy: 0.8508

[Trial complete]

[Trial summary]

 |-Trial ID: d78890f285f61be834ced94336dafd4d

 |-Score: 0.8507999777793884

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 2

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 2

 |-tuner/round: 1

 |-tuner/trial_id: c42f239d4c723bcf0e7a03a6d08811da

 |-units: 64

Epoch 5/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4951 - accuracy: 0.8264 - val_loss: 0.4453 - val_accuracy: 0.8354

Epoch 6/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3694 - accuracy: 0.8663 - val_loss: 0.3749 - val_accuracy: 0.8632

Epoch 7/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3337 - accuracy: 0.8773 - val_loss: 0.3648 - val_accuracy: 0.8698

Epoch 8/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3088 - accuracy: 0.8860 - val_loss: 0.3688 - val_accuracy: 0.8678

Epoch 9/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2924 - accuracy: 0.8916 - val_loss: 0.3695 - val_accuracy: 0.8634

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2780 - accuracy: 0.8967 - val_loss: 0.3497 - val_accuracy: 0.8787

[Trial complete]

[Trial summary]

 |-Trial ID: dccb65692f7afb5fcc4b63a3b7bbaf1d

 |-Score: 0.8787000179290771

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 2

 |-tuner/epochs: 10

 |-tuner/initial_epoch: 4

 |-tuner/round: 2

 |-tuner/trial_id: ceafb91456da418a31d16505cd2402e4

 |-units: 128

Epoch 5/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4855 - accuracy: 0.8264 - val_loss: 0.4290 - val_accuracy: 0.8395

Epoch 6/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3606 - accuracy: 0.8683 - val_loss: 0.3868 - val_accuracy: 0.8615

Epoch 7/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3232 - accuracy: 0.8814 - val_loss: 0.3795 - val_accuracy: 0.8599

Epoch 8/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3010 - accuracy: 0.8876 - val_loss: 0.3692 - val_accuracy: 0.8676

Epoch 9/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2805 - accuracy: 0.8947 - val_loss: 0.3407 - val_accuracy: 0.8771

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2677 - accuracy: 0.8993 - val_loss: 0.3363 - val_accuracy: 0.8806

[Trial complete]

[Trial summary]

 |-Trial ID: 722faad2bf2c23229d312728f586150f

 |-Score: 0.8805999755859375

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 2

 |-tuner/epochs: 10

 |-tuner/initial_epoch: 4

 |-tuner/round: 2

 |-tuner/trial_id: 3131589c4b900c107a9423455eeaff10

 |-units: 384

Epoch 1/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4884 - accuracy: 0.8278 - val_loss: 0.4205 - val_accuracy: 0.8515

Epoch 2/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3679 - accuracy: 0.8651 - val_loss: 0.3766 - val_accuracy: 0.8627

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3297 - accuracy: 0.8788 - val_loss: 0.3615 - val_accuracy: 0.8687

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3039 - accuracy: 0.8879 - val_loss: 0.3549 - val_accuracy: 0.8697

[Trial complete]

[Trial summary]

 |-Trial ID: 8cef999a933a85c276bec9a56319c8c5

 |-Score: 0.869700014591217

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 1

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 224

Epoch 1/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4777 - accuracy: 0.8306 - val_loss: 0.4090 - val_accuracy: 0.8552

Epoch 2/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3593 - accuracy: 0.8694 - val_loss: 0.3658 - val_accuracy: 0.8683

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3244 - accuracy: 0.8803 - val_loss: 0.3450 - val_accuracy: 0.8733

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3016 - accuracy: 0.8889 - val_loss: 0.3361 - val_accuracy: 0.8772

[Trial complete]

[Trial summary]

 |-Trial ID: cbe62524be42f5d78bae019f08ceca2c

 |-Score: 0.8772000074386597

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 1

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 416

Epoch 1/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5410 - accuracy: 0.8098 - val_loss: 0.5298 - val_accuracy: 0.8120

Epoch 2/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4354 - accuracy: 0.8451 - val_loss: 0.5052 - val_accuracy: 0.8351

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4108 - accuracy: 0.8530 - val_loss: 0.4673 - val_accuracy: 0.8400

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4032 - accuracy: 0.8559 - val_loss: 0.4690 - val_accuracy: 0.8374

[Trial complete]

[Trial summary]

 |-Trial ID: 7af2991e92f13605318218c96306888a

 |-Score: 0.8399999737739563

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.01

 |-tuner/bracket: 1

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 352

Epoch 1/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.6267 - accuracy: 0.7936 - val_loss: 0.5030 - val_accuracy: 0.8281

Epoch 2/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4430 - accuracy: 0.8476 - val_loss: 0.4615 - val_accuracy: 0.8394

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4012 - accuracy: 0.8622 - val_loss: 0.4195 - val_accuracy: 0.8545

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3775 - accuracy: 0.8685 - val_loss: 0.4026 - val_accuracy: 0.8601

[Trial complete]

[Trial summary]

 |-Trial ID: 6154ceac2594f8d81d6a2eb16bcd0a7d

 |-Score: 0.8600999712944031

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.0001

 |-tuner/bracket: 1

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 288

Epoch 1/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4829 - accuracy: 0.8287 - val_loss: 0.4142 - val_accuracy: 0.8501

Epoch 2/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3640 - accuracy: 0.8681 - val_loss: 0.3878 - val_accuracy: 0.8580

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3262 - accuracy: 0.8790 - val_loss: 0.3577 - val_accuracy: 0.8677

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3033 - accuracy: 0.8886 - val_loss: 0.3413 - val_accuracy: 0.8765

[Trial complete]

[Trial summary]

 |-Trial ID: c06517aade90662bc6193c1aa08cbbba

 |-Score: 0.8765000104904175

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 1

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 256

Epoch 1/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5961 - accuracy: 0.8044 - val_loss: 0.4801 - val_accuracy: 0.8356

Epoch 2/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4256 - accuracy: 0.8543 - val_loss: 0.4295 - val_accuracy: 0.8484

Epoch 3/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3846 - accuracy: 0.8673 - val_loss: 0.4070 - val_accuracy: 0.8581

Epoch 4/4

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3592 - accuracy: 0.8752 - val_loss: 0.3905 - val_accuracy: 0.8607

[Trial complete]

[Trial summary]

 |-Trial ID: f5cddcedf8442fdb6d564890dff92027

 |-Score: 0.8607000112533569

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.0001

 |-tuner/bracket: 1

 |-tuner/epochs: 4

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 480

Epoch 5/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4768 - accuracy: 0.8308 - val_loss: 0.4162 - val_accuracy: 0.8462

Epoch 6/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3586 - accuracy: 0.8692 - val_loss: 0.3695 - val_accuracy: 0.8682

Epoch 7/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3237 - accuracy: 0.8812 - val_loss: 0.3581 - val_accuracy: 0.8705

Epoch 8/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2994 - accuracy: 0.8888 - val_loss: 0.4071 - val_accuracy: 0.8526

Epoch 9/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2808 - accuracy: 0.8954 - val_loss: 0.3533 - val_accuracy: 0.8739

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2644 - accuracy: 0.9014 - val_loss: 0.3326 - val_accuracy: 0.8796

[Trial complete]

[Trial summary]

 |-Trial ID: 08338fc947066bc24893e649b0b6f4d4

 |-Score: 0.8795999884605408

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 1

 |-tuner/epochs: 10

 |-tuner/initial_epoch: 4

 |-tuner/round: 1

 |-tuner/trial_id: cbe62524be42f5d78bae019f08ceca2c

 |-units: 416

Epoch 5/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4837 - accuracy: 0.8290 - val_loss: 0.4841 - val_accuracy: 0.8230

Epoch 6/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3628 - accuracy: 0.8684 - val_loss: 0.3854 - val_accuracy: 0.8594

Epoch 7/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3270 - accuracy: 0.8798 - val_loss: 0.3702 - val_accuracy: 0.8718

Epoch 8/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2999 - accuracy: 0.8888 - val_loss: 0.3448 - val_accuracy: 0.8793

Epoch 9/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2845 - accuracy: 0.8945 - val_loss: 0.3497 - val_accuracy: 0.8722

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2705 - accuracy: 0.8989 - val_loss: 0.3287 - val_accuracy: 0.8816

[Trial complete]

[Trial summary]

 |-Trial ID: 5ce94ce93658715d3479344b029cfe30

 |-Score: 0.881600022315979

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 1

 |-tuner/epochs: 10

 |-tuner/initial_epoch: 4

 |-tuner/round: 1

 |-tuner/trial_id: c06517aade90662bc6193c1aa08cbbba

 |-units: 256

Epoch 1/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.5468 - accuracy: 0.8072 - val_loss: 0.4499 - val_accuracy: 0.8389

Epoch 2/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4324 - accuracy: 0.8436 - val_loss: 0.4621 - val_accuracy: 0.8389

Epoch 3/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4125 - accuracy: 0.8515 - val_loss: 0.4438 - val_accuracy: 0.8486

Epoch 4/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3960 - accuracy: 0.8562 - val_loss: 0.4496 - val_accuracy: 0.8422

Epoch 5/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3888 - accuracy: 0.8603 - val_loss: 0.4836 - val_accuracy: 0.8350

Epoch 6/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3824 - accuracy: 0.8637 - val_loss: 0.4556 - val_accuracy: 0.8430

Epoch 7/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3764 - accuracy: 0.8641 - val_loss: 0.4791 - val_accuracy: 0.8347

Epoch 8/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3730 - accuracy: 0.8652 - val_loss: 0.4642 - val_accuracy: 0.8479

Epoch 9/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3648 - accuracy: 0.8671 - val_loss: 0.4617 - val_accuracy: 0.8510

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3647 - accuracy: 0.8678 - val_loss: 0.4392 - val_accuracy: 0.8534

[Trial complete]

[Trial summary]

 |-Trial ID: 383e951e645acdb9d8f48b64b88c1518

 |-Score: 0.8533999919891357

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.01

 |-tuner/bracket: 0

 |-tuner/epochs: 10

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 320

Epoch 1/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.6824 - accuracy: 0.7817 - val_loss: 0.5233 - val_accuracy: 0.8219

Epoch 2/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.4649 - accuracy: 0.8431 - val_loss: 0.4741 - val_accuracy: 0.8342

Epoch 3/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.4211 - accuracy: 0.8552 - val_loss: 0.4367 - val_accuracy: 0.8494

Epoch 4/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3959 - accuracy: 0.8637 - val_loss: 0.4180 - val_accuracy: 0.8545

Epoch 5/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3784 - accuracy: 0.8692 - val_loss: 0.4095 - val_accuracy: 0.8555

Epoch 6/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3645 - accuracy: 0.8742 - val_loss: 0.3973 - val_accuracy: 0.8604

Epoch 7/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3521 - accuracy: 0.8771 - val_loss: 0.3914 - val_accuracy: 0.8631

Epoch 8/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3425 - accuracy: 0.8805 - val_loss: 0.3880 - val_accuracy: 0.8657

Epoch 9/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3336 - accuracy: 0.8831 - val_loss: 0.3746 - val_accuracy: 0.8674

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3246 - accuracy: 0.8859 - val_loss: 0.3709 - val_accuracy: 0.8693

[Trial complete]

[Trial summary]

 |-Trial ID: ad2dae77806d10ee1e9164e17739e588

 |-Score: 0.8693000078201294

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.0001

 |-tuner/bracket: 0

 |-tuner/epochs: 10

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 160

Epoch 1/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4730 - accuracy: 0.8315 - val_loss: 0.4586 - val_accuracy: 0.8299

Epoch 2/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3602 - accuracy: 0.8669 - val_loss: 0.3750 - val_accuracy: 0.8652

Epoch 3/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3231 - accuracy: 0.8814 - val_loss: 0.3479 - val_accuracy: 0.8749

Epoch 4/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.2984 - accuracy: 0.8890 - val_loss: 0.3415 - val_accuracy: 0.8781

Epoch 5/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2796 - accuracy: 0.8958 - val_loss: 0.3658 - val_accuracy: 0.8727

Epoch 6/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.2653 - accuracy: 0.9013 - val_loss: 0.3472 - val_accuracy: 0.8750

Epoch 7/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.2520 - accuracy: 0.9063 - val_loss: 0.3212 - val_accuracy: 0.8845

Epoch 8/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.2418 - accuracy: 0.9083 - val_loss: 0.3224 - val_accuracy: 0.8873

Epoch 9/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.2325 - accuracy: 0.9137 - val_loss: 0.3241 - val_accuracy: 0.8874

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2216 - accuracy: 0.9172 - val_loss: 0.3340 - val_accuracy: 0.8868

[Trial complete]

[Trial summary]

 |-Trial ID: 7dc5b8f48a0d9d9a593d4c8d8207e06b

 |-Score: 0.8873999714851379

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 0

 |-tuner/epochs: 10

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 448

Epoch 1/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4765 - accuracy: 0.8311 - val_loss: 0.4336 - val_accuracy: 0.8453

Epoch 2/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.3590 - accuracy: 0.8692 - val_loss: 0.4138 - val_accuracy: 0.8507

Epoch 3/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.3229 - accuracy: 0.8823 - val_loss: 0.3560 - val_accuracy: 0.8707

Epoch 4/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.3002 - accuracy: 0.8882 - val_loss: 0.3403 - val_accuracy: 0.8762

Epoch 5/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2795 - accuracy: 0.8967 - val_loss: 0.3331 - val_accuracy: 0.8806

Epoch 6/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2651 - accuracy: 0.9011 - val_loss: 0.3328 - val_accuracy: 0.8820

Epoch 7/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2540 - accuracy: 0.9050 - val_loss: 0.3346 - val_accuracy: 0.8813

Epoch 8/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2439 - accuracy: 0.9084 - val_loss: 0.3258 - val_accuracy: 0.8838

Epoch 9/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2343 - accuracy: 0.9124 - val_loss: 0.3196 - val_accuracy: 0.8863

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2243 - accuracy: 0.9163 - val_loss: 0.3269 - val_accuracy: 0.8862

[Trial complete]

[Trial summary]

 |-Trial ID: 8bc6752e2d7d1387f1a20986f63026f3

 |-Score: 0.8863000273704529

 |-Best step: 0

 > Hyperparameters:

 |-learning_rate: 0.001

 |-tuner/bracket: 0

 |-tuner/epochs: 10

 |-tuner/initial_epoch: 0

 |-tuner/round: 0

 |-units: 352

INFO:tensorflow:Oracle triggered exit

The hyperparameter search is complete. The optimal number of units in the first densely-connected

layer is 448 and the optimal learning rate for the optimizer

is 0.001.

Epoch 1/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.4745 - accuracy: 0.8312 - val_loss: 0.4404 - val_accuracy: 0.8445

Epoch 2/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3599 - accuracy: 0.8680 - val_loss: 0.3671 - val_accuracy: 0.8655

Epoch 3/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3229 - accuracy: 0.8810 - val_loss: 0.3693 - val_accuracy: 0.8693

Epoch 4/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.2989 - accuracy: 0.8895 - val_loss: 0.3565 - val_accuracy: 0.8695

Epoch 5/10

1875/1875 [==============================] - 4s 2ms/step - loss: 0.2814 - accuracy: 0.8961 - val_loss: 0.3458 - val_accuracy: 0.8747

Epoch 6/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2657 - accuracy: 0.9000 - val_loss: 0.3325 - val_accuracy: 0.8799

Epoch 7/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2531 - accuracy: 0.9050 - val_loss: 0.3556 - val_accuracy: 0.8699

Epoch 8/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2436 - accuracy: 0.9082 - val_loss: 0.3251 - val_accuracy: 0.8872

Epoch 9/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2322 - accuracy: 0.9125 - val_loss: 0.3169 - val_accuracy: 0.8910

Epoch 10/10

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2219 - accuracy: 0.9167 - val_loss: 0.3242 - val_accuracy: 0.8887

 

import tensorflow as tf

from tensorflow import keras

 

import IPython

 

#!pip install -q -U keras-tuner

import kerastuner as kt

 

(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()

 

# Normalize pixel values between 0 and 1

img_train = img_train.astype('float32') / 255.0

img_test = img_test.astype('float32') / 255.0

 

 

def model_builder(hp):

    model = keras.Sequential()

    model.add(keras.layers.Flatten(input_shape=(28, 28)))

 

    # Tune the number of units in the first Dense layer

    # Choose an optimal value between 32-512

    hp_units = hp.Int('units', min_value=32, max_value=512, step=32)

    model.add(keras.layers.Dense(units=hp_units, activation='relu'))

    model.add(keras.layers.Dense(10))

 

    # Tune the learning rate for the optimizer

    # Choose an optimal value from 0.01, 0.001, or 0.0001

    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])

 

    model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),

                  loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),

                  metrics=['accuracy'])

 

    return model

 

 

tuner = kt.Hyperband(model_builder,

                     objective='val_accuracy',

                     max_epochs=10,

                     factor=3,

                     directory='my_dir',

                     project_name='intro_to_kt')
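
# Note (not in the original tutorial): Hyperband trains many candidate models for a
# few epochs, keeps roughly the best 1/factor of them for the next round, and repeats
# until max_epochs; the tuner/bracket, tuner/round and tuner/epochs fields in the
# trial summaries above come from this schedule.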

 

 

class ClearTrainingOutput(tf.keras.callbacks.Callback):
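    # Clears the notebook cell output at the end of each training run so the
    # tuner's log stays readable during the search.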

    def on_train_end(*args, **kwargs):

        IPython.display.clear_output(wait=True)

 

 

tuner.search(img_train, label_train, epochs=10, validation_data=(img_test, label_test),

             callbacks=[ClearTrainingOutput()])

 

# Get the optimal hyperparameters

best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]

 

print(f"""

The hyperparameter search is complete. The optimal number of units in the first densely-connected

layer is {best_hps.get('units')} and the optimal learning rate for the optimizer

is {best_hps.get('learning_rate')}.

""")

 

# Build the model with the optimal hyperparameters and train it on the data

model = tuner.hypermodel.build(best_hps)

model.fit(img_train, label_train, epochs=10, validation_data=(img_test, label_test))
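
The tutorial ends after retraining with the best hyperparameters. As a small follow-up sketch (not part of the original script), the retrained model can be evaluated on the held-out Fashion MNIST images, and the best model already trained during the search can be retrieved with Keras Tuner's get_best_models; everything below reuses the variables defined above.

# Sketch: evaluate the tuned model on the test images
eval_loss, eval_acc = model.evaluate(img_test, label_test, verbose=2)
print("Tuned model test accuracy: {:5.2f}%".format(100 * eval_acc))

# Sketch: pull out the best model already trained during the search
best_model = tuner.get_best_models(num_models=1)[0]
best_model.summary()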

 

 

 

 


ctx=ctx)

  File "O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute

    inputs, attrs, num_outputs)

tensorflow.python.framework.errors_impl.InternalError:  Blas GEMM launch failed : a.shape=(32, 784), b.shape=(784, 512), m=32, n=512, k=784

            [[node sequential/dense/MatMul (defined at O:/PycharmProjects/catdogtf2.2/006.py:51) ]] [Op:__inference_train_function_589]

Function call stack:

train_function

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=54455

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

PyDev console: starting.

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

>>> runfile('O:/PycharmProjects/catdogtf2.2/006.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 05:29:58.544667: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2.2.0

2020-08-11 05:30:01.282588: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 05:30:01.324897: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:30:01.325397: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:30:01.333925: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:30:01.339713: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:30:01.342729: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:30:01.350152: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:30:01.354987: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:30:01.382853: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:30:01.383178: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:30:01.383630: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 05:30:01.397994: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1eabe93ddf0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:30:01.398331: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 05:30:01.398700: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:30:01.399211: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:30:01.399492: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:30:01.399779: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:30:01.400141: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:30:01.400451: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:30:01.400725: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:30:01.401037: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:30:01.401408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:30:02.056830: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 05:30:02.057100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 05:30:02.057241: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 05:30:02.057523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 05:30:02.061005: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1ea874059e0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:30:02.061296: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

Model: "sequential"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

dense (Dense)                (None, 512)               401920   

_________________________________________________________________

dropout (Dropout)            (None, 512)               0        

_________________________________________________________________

dense_1 (Dense)              (None, 10)                5130     

=================================================================

Total params: 407,050

Trainable params: 407,050

Non-trainable params: 0

_________________________________________________________________

Epoch 1/10

2020-08-11 05:30:02.840239: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

28/32 [=========================>....] - ETA: 0s - loss: 1.2189 - accuracy: 0.6618   

Epoch 00001: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 13ms/step - loss: 1.1645 - accuracy: 0.6750 - val_loss: 0.7062 - val_accuracy: 0.7760

Epoch 2/10

24/32 [=====================>........] - ETA: 0s - loss: 0.4462 - accuracy: 0.8750

Epoch 00002: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 7ms/step - loss: 0.4426 - accuracy: 0.8740 - val_loss: 0.5803 - val_accuracy: 0.8140

Epoch 3/10

24/32 [=====================>........] - ETA: 0s - loss: 0.2971 - accuracy: 0.9180

Epoch 00003: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 7ms/step - loss: 0.2959 - accuracy: 0.9170 - val_loss: 0.4929 - val_accuracy: 0.8440

Epoch 4/10

22/32 [===================>..........] - ETA: 0s - loss: 0.2194 - accuracy: 0.9489

Epoch 00004: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 8ms/step - loss: 0.2136 - accuracy: 0.9510 - val_loss: 0.4599 - val_accuracy: 0.8460

Epoch 5/10

22/32 [===================>..........] - ETA: 0s - loss: 0.1591 - accuracy: 0.9631

Epoch 00005: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 7ms/step - loss: 0.1630 - accuracy: 0.9630 - val_loss: 0.4229 - val_accuracy: 0.8590

Epoch 6/10

25/32 [======================>.......] - ETA: 0s - loss: 0.1330 - accuracy: 0.9700

Epoch 00006: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 7ms/step - loss: 0.1269 - accuracy: 0.9710 - val_loss: 0.4153 - val_accuracy: 0.8710

Epoch 7/10

26/32 [=======================>......] - ETA: 0s - loss: 0.0856 - accuracy: 0.9844

Epoch 00007: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 7ms/step - loss: 0.0904 - accuracy: 0.9840 - val_loss: 0.4451 - val_accuracy: 0.8570

Epoch 8/10

23/32 [====================>.........] - ETA: 0s - loss: 0.0675 - accuracy: 0.9959

Epoch 00008: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 7ms/step - loss: 0.0716 - accuracy: 0.9950 - val_loss: 0.4232 - val_accuracy: 0.8630

Epoch 9/10

25/32 [======================>.......] - ETA: 0s - loss: 0.0572 - accuracy: 0.9975

Epoch 00009: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 7ms/step - loss: 0.0555 - accuracy: 0.9980 - val_loss: 0.4201 - val_accuracy: 0.8640

Epoch 10/10

24/32 [=====================>........] - ETA: 0s - loss: 0.0416 - accuracy: 1.0000

Epoch 00010: saving model to training_1/cp.ckpt

32/32 [==============================] - 0s 7ms/step - loss: 0.0432 - accuracy: 0.9980 - val_loss: 0.4192 - val_accuracy: 0.8590

32/32 - 0s - loss: 2.3390 - accuracy: 0.1360

Untrained model accuracy: 13.60%

32/32 - 0s - loss: 0.4192 - accuracy: 0.8590

Restored model accuracy: 85.90%

WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate

WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.

Epoch 00005: saving model to training_2/cp-0005.ckpt

Epoch 00010: saving model to training_2/cp-0010.ckpt

Epoch 00015: saving model to training_2/cp-0015.ckpt

Epoch 00020: saving model to training_2/cp-0020.ckpt

Epoch 00025: saving model to training_2/cp-0025.ckpt

Epoch 00030: saving model to training_2/cp-0030.ckpt

Epoch 00035: saving model to training_2/cp-0035.ckpt

Epoch 00040: saving model to training_2/cp-0040.ckpt

Epoch 00045: saving model to training_2/cp-0045.ckpt

Epoch 00050: saving model to training_2/cp-0050.ckpt

32/32 - 0s - loss: 0.4963 - accuracy: 0.8730

Restored model accuracy: 87.30%

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay

WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate

WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.

32/32 - 0s - loss: 0.4963 - accuracy: 0.8730

Restored model accuracy: 87.30%

Epoch 1/5

32/32 [==============================] - 0s 2ms/step - loss: 1.1610 - accuracy: 0.6710

Epoch 2/5

32/32 [==============================] - 0s 3ms/step - loss: 0.4298 - accuracy: 0.8760

Epoch 3/5

32/32 [==============================] - 0s 2ms/step - loss: 0.2997 - accuracy: 0.9150

Epoch 4/5

32/32 [==============================] - 0s 2ms/step - loss: 0.2052 - accuracy: 0.9560

Epoch 5/5

32/32 [==============================] - 0s 2ms/step - loss: 0.1526 - accuracy: 0.9670

2020-08-11 05:30:14.908899: W tensorflow/python/util/util.cc:329] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.

WARNING:tensorflow:From O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.

Instructions for updating:

If using Keras pass *_constraint arguments to layers.

Model: "sequential_5"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

dense_10 (Dense)             (None, 512)               401920   

_________________________________________________________________

dropout_5 (Dropout)          (None, 512)               0         

_________________________________________________________________

dense_11 (Dense)             (None, 10)                5130     

=================================================================

Total params: 407,050

Trainable params: 407,050

Non-trainable params: 0

_________________________________________________________________

32/32 - 0s - loss: 0.4585 - accuracy: 0.8460

Restored model accuracy: 84.60%

(1000, 10)

Epoch 1/5

32/32 [==============================] - 0s 2ms/step - loss: 1.1752 - accuracy: 0.6670

Epoch 2/5

32/32 [==============================] - 0s 2ms/step - loss: 0.4129 - accuracy: 0.8900

Epoch 3/5

32/32 [==============================] - 0s 2ms/step - loss: 0.2810 - accuracy: 0.9220

Epoch 4/5

32/32 [==============================] - 0s 2ms/step - loss: 0.2027 - accuracy: 0.9490

Epoch 5/5

32/32 [==============================] - 0s 2ms/step - loss: 0.1418 - accuracy: 0.9770

Model: "sequential_6"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

dense_12 (Dense)             (None, 512)               401920   

_________________________________________________________________

dropout_6 (Dropout)          (None, 512)               0        

_________________________________________________________________

dense_13 (Dense)             (None, 10)                5130     

=================================================================

Total params: 407,050

Trainable params: 407,050

Non-trainable params: 0

_________________________________________________________________

32/32 - 0s - loss: 0.4315 - accuracy: 0.8680

Restored model accuracy: 86.80%

 

#pip install -q pyyaml h5py  # required to save models in the HDF5 format

 

import os

 

import tensorflow as tf

from tensorflow import keras

 

print(tf.version.VERSION)

 

(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

 

train_labels = train_labels[:1000]

test_labels = test_labels[:1000]

 

train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0

test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0

 

# Define a simple Sequential model

def create_model():

  model = tf.keras.models.Sequential([

    keras.layers.Dense(512, activation='relu', input_shape=(784,)),

    keras.layers.Dropout(0.2),

    keras.layers.Dense(10)

  ])

 

  model.compile(optimizer='adam',

                loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),

                metrics=['accuracy'])
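  # from_logits=True because the last Dense layer outputs raw logits (no softmax).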

 

  return model

 

# Create a model instance

model = create_model()

 

# Display the model architecture

model.summary()

 

checkpoint_path = "training_1/cp.ckpt"

checkpoint_dir = os.path.dirname(checkpoint_path)

 

# Create a callback that saves the model's weights

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,

                                                 save_weights_only=True,

                                                 verbose=1)

 

# Train the model with the new callback

model.fit(train_images,

          train_labels,

          epochs=10,

          validation_data=(test_images,test_labels),

          callbacks=[cp_callback])  # pass the callback to training

 

# A warning may appear related to saving the optimizer state.

# This warning (and similar ones in this notebook) exists to discourage outdated usage and can safely be ignored.

 

#ls {checkpoint_dir}

 

# Create a basic model instance

model = create_model()

 

# Evaluate the model

loss, acc = model.evaluate(test_images,  test_labels, verbose=2)

print("훈련되지 않은 모델의 정확도: {:5.2f}%".format(100*acc))

 

# Load the weights

model.load_weights(checkpoint_path)

 

# Re-evaluate the model

loss,acc = model.evaluate(test_images,  test_labels, verbose=2)

print("복원된 모델의 정확도: {:5.2f}%".format(100*acc))

 

# Include the epoch number in the file name (uses `str.format`)

checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"

checkpoint_dir = os.path.dirname(checkpoint_path)

 

# Create a callback that saves the model's weights every 5 epochs

cp_callback = tf.keras.callbacks.ModelCheckpoint(

    filepath=checkpoint_path,

    verbose=1,

    save_weights_only=True,

    period=5)
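
# Note (not in the original tutorial): TF 2.x logs a warning that `period` is
# deprecated. An equivalent callback could use `save_freq`, which counts batches
# rather than epochs; with 1,000 training samples and the default batch size of 32
# (32 batches per epoch), saving every 5 epochs would be roughly:
#
# cp_callback = tf.keras.callbacks.ModelCheckpoint(
#     filepath=checkpoint_path, verbose=1, save_weights_only=True,
#     save_freq=5 * 32)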

 

# Create a new model instance

model = create_model()

 

# Save the weights using the `checkpoint_path` format

model.save_weights(checkpoint_path.format(epoch=0))

 

# Train the model using the new callback

model.fit(train_images,

          train_labels,

          epochs=50,

          callbacks=[cp_callback],

          validation_data=(test_images,test_labels),

          verbose=0)

 

#ls {checkpoint_dir}

 

latest = tf.train.latest_checkpoint(checkpoint_dir)

latest
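# Given the 50-epoch run above, `latest` should resolve to 'training_2/cp-0050.ckpt'.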

 

# Create a new model instance

model = create_model()

 

# Load the previously saved weights

model.load_weights(latest)

 

# Re-evaluate the model

loss, acc = model.evaluate(test_images,  test_labels, verbose=2)

print("복원된 모델의 정확도: {:5.2f}%".format(100*acc))

 

# Save the weights

model.save_weights('./checkpoints/my_checkpoint')
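# Note: this writes the same TensorFlow checkpoint format as the callback above
# (an .index file plus .data-* shards), just without an epoch number in the name.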

 

# Create a new model instance

model = create_model()

 

# Restore the weights

model.load_weights('./checkpoints/my_checkpoint')

 

# Evaluate the model

loss,acc = model.evaluate(test_images,  test_labels, verbose=2)

print("복원된 모델의 정확도: {:5.2f}%".format(100*acc))

 

# Create and train a new model instance

model = create_model()

model.fit(train_images, train_labels, epochs=5)

 

# Save the entire model as a SavedModel

#!mkdir -p saved_model

model.save('saved_model/my_model')

 

# The my_model directory

#!ls saved_model

 

# assets folder, saved_model.pb, variables folder

#!ls saved_model/my_model

 

new_model = tf.keras.models.load_model('saved_model/my_model')

 

# Check the model architecture

new_model.summary()

 

# Evaluate the restored model

loss, acc = new_model.evaluate(test_images,  test_labels, verbose=2)

print('Restored model accuracy: {:5.2f}%'.format(100*acc))

 

print(new_model.predict(test_images).shape)

 

# Create and train a new model instance

model = create_model()

model.fit(train_images, train_labels, epochs=5)

 

# Save the entire model to an HDF5 file

# The '.h5' extension indicates that the model is saved in HDF5 format

model.save('my_model.h5')

 

# Recreate the exact same model, including its weights and the optimizer

new_model = tf.keras.models.load_model('my_model.h5')

 

# Display the model architecture

new_model.summary()

 

loss, acc = new_model.evaluate(test_images,  test_labels, verbose=2)

print('Restored model accuracy: {:5.2f}%'.format(100*acc))

 

 

#

# Copyright (c) 2017 François Chollet

#

# Permission is hereby granted, free of charge, to any person obtaining a

# copy of this software and associated documentation files (the "Software"),

# to deal in the Software without restriction, including without limitation

# the rights to use, copy, modify, merge, publish, distribute, sublicense,

# and/or sell copies of the Software, and to permit persons to whom the

# Software is furnished to do so, subject to the following conditions:

#

# The above copyright notice and this permission notice shall be included in

# all copies or substantial portions of the Software.

#

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR

# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,

# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL

# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER

# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING

# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER

# DEALINGS IN THE SOFTWARE.


File "O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute

    inputs, attrs, num_outputs)

tensorflow.python.framework.errors_impl.InternalError:  Blas GEMM launch failed : a.shape=(512, 1000), b.shape=(1000, 16), m=512, n=16, k=1000

            [[node sequential/dense/MatMul (defined at O:/PycharmProjects/catdogtf2.2/005.py:44) ]] [Op:__inference_train_function_802]

Function call stack:

train_function

 

 

Things that used to work fine never work on the first try when I come back to them, haha. Anyway, preparing this material is fun. Requests for the tutorial sources go to mynameis@hajunho.com (which is, low-key, a fair bit of work).

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=53949

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

PyDev console: starting.

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

>>> runfile('O:/PycharmProjects/catdogtf2.2/005.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 05:25:13.752742: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2.2.0

O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\keras\datasets\imdb.py:155: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray

  x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx])

O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\keras\datasets\imdb.py:156: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray

  x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:])

2020-08-11 05:25:22.396835: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 05:25:22.439433: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:25:22.439885: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:25:22.447119: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:25:22.452747: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:25:22.455753: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:25:22.462337: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:25:22.466528: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:25:22.479498: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:25:22.480014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:25:22.480546: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 05:25:22.491175: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2f4b5273130 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:25:22.491609: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 05:25:22.492219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:25:22.492727: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:25:22.493048: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:25:22.493329: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:25:22.493587: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:25:22.493868: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:25:22.494077: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:25:22.494301: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:25:22.494694: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:25:23.230270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 05:25:23.230485: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 05:25:23.230618: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 05:25:23.230958: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 05:25:23.234767: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2f4e029c510 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:25:23.235134: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

Model: "sequential"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

dense (Dense)                (None, 16)                16016    

_________________________________________________________________

dense_1 (Dense)              (None, 16)                272      

_________________________________________________________________

dense_2 (Dense)              (None, 1)                 17       

=================================================================

Total params: 16,305

Trainable params: 16,305

Non-trainable params: 0

_________________________________________________________________

Epoch 1/20

2020-08-11 05:25:24.247888: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

49/49 - 1s - loss: 0.5903 - accuracy: 0.7084 - binary_crossentropy: 0.5903 - val_loss: 0.4558 - val_accuracy: 0.8147 - val_binary_crossentropy: 0.4558

Epoch 2/20

49/49 - 0s - loss: 0.3836 - accuracy: 0.8444 - binary_crossentropy: 0.3836 - val_loss: 0.3563 - val_accuracy: 0.8506 - val_binary_crossentropy: 0.3563

Epoch 3/20

49/49 - 0s - loss: 0.3294 - accuracy: 0.8653 - binary_crossentropy: 0.3294 - val_loss: 0.3359 - val_accuracy: 0.8578 - val_binary_crossentropy: 0.3359

Epoch 4/20

49/49 - 0s - loss: 0.3123 - accuracy: 0.8708 - binary_crossentropy: 0.3123 - val_loss: 0.3299 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3299

Epoch 5/20

49/49 - 0s - loss: 0.3045 - accuracy: 0.8743 - binary_crossentropy: 0.3045 - val_loss: 0.3275 - val_accuracy: 0.8600 - val_binary_crossentropy: 0.3275

Epoch 6/20

49/49 - 0s - loss: 0.3018 - accuracy: 0.8745 - binary_crossentropy: 0.3018 - val_loss: 0.3268 - val_accuracy: 0.8606 - val_binary_crossentropy: 0.3268

Epoch 7/20

49/49 - 0s - loss: 0.2953 - accuracy: 0.8778 - binary_crossentropy: 0.2953 - val_loss: 0.3273 - val_accuracy: 0.8608 - val_binary_crossentropy: 0.3273

Epoch 8/20

49/49 - 1s - loss: 0.2928 - accuracy: 0.8785 - binary_crossentropy: 0.2928 - val_loss: 0.3276 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3276

Epoch 9/20

49/49 - 0s - loss: 0.2884 - accuracy: 0.8806 - binary_crossentropy: 0.2884 - val_loss: 0.3264 - val_accuracy: 0.8593 - val_binary_crossentropy: 0.3264

Epoch 10/20

49/49 - 0s - loss: 0.2860 - accuracy: 0.8817 - binary_crossentropy: 0.2860 - val_loss: 0.3266 - val_accuracy: 0.8591 - val_binary_crossentropy: 0.3266

Epoch 11/20

49/49 - 0s - loss: 0.2815 - accuracy: 0.8814 - binary_crossentropy: 0.2815 - val_loss: 0.3295 - val_accuracy: 0.8576 - val_binary_crossentropy: 0.3295

Epoch 12/20

49/49 - 0s - loss: 0.2768 - accuracy: 0.8832 - binary_crossentropy: 0.2768 - val_loss: 0.3297 - val_accuracy: 0.8582 - val_binary_crossentropy: 0.3297

Epoch 13/20

49/49 - 0s - loss: 0.2729 - accuracy: 0.8856 - binary_crossentropy: 0.2729 - val_loss: 0.3327 - val_accuracy: 0.8566 - val_binary_crossentropy: 0.3327

Epoch 14/20

49/49 - 0s - loss: 0.2700 - accuracy: 0.8874 - binary_crossentropy: 0.2700 - val_loss: 0.3313 - val_accuracy: 0.8568 - val_binary_crossentropy: 0.3313

Epoch 15/20

49/49 - 0s - loss: 0.2649 - accuracy: 0.8894 - binary_crossentropy: 0.2649 - val_loss: 0.3322 - val_accuracy: 0.8566 - val_binary_crossentropy: 0.3322

Epoch 16/20

49/49 - 1s - loss: 0.2597 - accuracy: 0.8914 - binary_crossentropy: 0.2597 - val_loss: 0.3345 - val_accuracy: 0.8560 - val_binary_crossentropy: 0.3345

Epoch 17/20

49/49 - 0s - loss: 0.2553 - accuracy: 0.8941 - binary_crossentropy: 0.2553 - val_loss: 0.3370 - val_accuracy: 0.8565 - val_binary_crossentropy: 0.3370

Epoch 18/20

49/49 - 0s - loss: 0.2515 - accuracy: 0.8936 - binary_crossentropy: 0.2515 - val_loss: 0.3370 - val_accuracy: 0.8548 - val_binary_crossentropy: 0.3370

Epoch 19/20

49/49 - 0s - loss: 0.2483 - accuracy: 0.8973 - binary_crossentropy: 0.2483 - val_loss: 0.3411 - val_accuracy: 0.8539 - val_binary_crossentropy: 0.3411

Epoch 20/20

49/49 - 0s - loss: 0.2432 - accuracy: 0.8991 - binary_crossentropy: 0.2432 - val_loss: 0.3424 - val_accuracy: 0.8540 - val_binary_crossentropy: 0.3424

Model: "sequential_1"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

dense_3 (Dense)              (None, 4)                 4004     

_________________________________________________________________

dense_4 (Dense)              (None, 4)                 20       

_________________________________________________________________

dense_5 (Dense)              (None, 1)                 5        

=================================================================

Total params: 4,029

Trainable params: 4,029

Non-trainable params: 0

_________________________________________________________________

Epoch 1/20

49/49 - 0s - loss: 0.6227 - accuracy: 0.6724 - binary_crossentropy: 0.6227 - val_loss: 0.5254 - val_accuracy: 0.7791 - val_binary_crossentropy: 0.5254

Epoch 2/20

49/49 - 0s - loss: 0.4480 - accuracy: 0.8207 - binary_crossentropy: 0.4480 - val_loss: 0.4032 - val_accuracy: 0.8371 - val_binary_crossentropy: 0.4032

Epoch 3/20

49/49 - 0s - loss: 0.3674 - accuracy: 0.8526 - binary_crossentropy: 0.3674 - val_loss: 0.3607 - val_accuracy: 0.8492 - val_binary_crossentropy: 0.3607

Epoch 4/20

49/49 - 0s - loss: 0.3346 - accuracy: 0.8646 - binary_crossentropy: 0.3346 - val_loss: 0.3421 - val_accuracy: 0.8559 - val_binary_crossentropy: 0.3421

Epoch 5/20

49/49 - 0s - loss: 0.3201 - accuracy: 0.8698 - binary_crossentropy: 0.3201 - val_loss: 0.3375 - val_accuracy: 0.8567 - val_binary_crossentropy: 0.3375

Epoch 6/20

49/49 - 0s - loss: 0.3115 - accuracy: 0.8727 - binary_crossentropy: 0.3115 - val_loss: 0.3325 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3325

Epoch 7/20

49/49 - 0s - loss: 0.3065 - accuracy: 0.8742 - binary_crossentropy: 0.3065 - val_loss: 0.3325 - val_accuracy: 0.8581 - val_binary_crossentropy: 0.3325

Epoch 8/20

49/49 - 0s - loss: 0.3025 - accuracy: 0.8758 - binary_crossentropy: 0.3025 - val_loss: 0.3283 - val_accuracy: 0.8597 - val_binary_crossentropy: 0.3283

Epoch 9/20

49/49 - 0s - loss: 0.3009 - accuracy: 0.8760 - binary_crossentropy: 0.3009 - val_loss: 0.3316 - val_accuracy: 0.8588 - val_binary_crossentropy: 0.3316

Epoch 10/20

49/49 - 0s - loss: 0.3007 - accuracy: 0.8759 - binary_crossentropy: 0.3007 - val_loss: 0.3307 - val_accuracy: 0.8595 - val_binary_crossentropy: 0.3307

Epoch 11/20

49/49 - 0s - loss: 0.2980 - accuracy: 0.8786 - binary_crossentropy: 0.2980 - val_loss: 0.3296 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3296

Epoch 12/20

49/49 - 0s - loss: 0.2969 - accuracy: 0.8774 - binary_crossentropy: 0.2969 - val_loss: 0.3295 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3295

Epoch 13/20

49/49 - 0s - loss: 0.2968 - accuracy: 0.8778 - binary_crossentropy: 0.2968 - val_loss: 0.3314 - val_accuracy: 0.8595 - val_binary_crossentropy: 0.3314

Epoch 14/20

49/49 - 0s - loss: 0.2982 - accuracy: 0.8772 - binary_crossentropy: 0.2982 - val_loss: 0.3294 - val_accuracy: 0.8593 - val_binary_crossentropy: 0.3294

Epoch 15/20

49/49 - 0s - loss: 0.2956 - accuracy: 0.8788 - binary_crossentropy: 0.2956 - val_loss: 0.3311 - val_accuracy: 0.8591 - val_binary_crossentropy: 0.3311

Epoch 16/20

49/49 - 0s - loss: 0.2950 - accuracy: 0.8787 - binary_crossentropy: 0.2950 - val_loss: 0.3299 - val_accuracy: 0.8591 - val_binary_crossentropy: 0.3299

Epoch 17/20

49/49 - 0s - loss: 0.2946 - accuracy: 0.8789 - binary_crossentropy: 0.2946 - val_loss: 0.3298 - val_accuracy: 0.8590 - val_binary_crossentropy: 0.3298

Epoch 18/20

49/49 - 0s - loss: 0.2936 - accuracy: 0.8784 - binary_crossentropy: 0.2936 - val_loss: 0.3297 - val_accuracy: 0.8597 - val_binary_crossentropy: 0.3297

Epoch 19/20

49/49 - 0s - loss: 0.2932 - accuracy: 0.8796 - binary_crossentropy: 0.2932 - val_loss: 0.3300 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3300

Epoch 20/20

49/49 - 0s - loss: 0.2924 - accuracy: 0.8790 - binary_crossentropy: 0.2924 - val_loss: 0.3302 - val_accuracy: 0.8597 - val_binary_crossentropy: 0.3302

Model: "sequential_2"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

dense_6 (Dense)              (None, 512)               512512   

_________________________________________________________________

dense_7 (Dense)              (None, 512)               262656   

_________________________________________________________________

dense_8 (Dense)              (None, 1)                 513      

=================================================================

Total params: 775,681

Trainable params: 775,681

Non-trainable params: 0

_________________________________________________________________

Epoch 1/20

49/49 - 1s - loss: 0.4360 - accuracy: 0.7928 - binary_crossentropy: 0.4360 - val_loss: 0.3299 - val_accuracy: 0.8587 - val_binary_crossentropy: 0.3299

Epoch 2/20

49/49 - 0s - loss: 0.2871 - accuracy: 0.8828 - binary_crossentropy: 0.2871 - val_loss: 0.3241 - val_accuracy: 0.8604 - val_binary_crossentropy: 0.3241

Epoch 3/20

49/49 - 0s - loss: 0.2167 - accuracy: 0.9152 - binary_crossentropy: 0.2167 - val_loss: 0.3428 - val_accuracy: 0.8565 - val_binary_crossentropy: 0.3428

Epoch 4/20

49/49 - 0s - loss: 0.0946 - accuracy: 0.9723 - binary_crossentropy: 0.0946 - val_loss: 0.4560 - val_accuracy: 0.8392 - val_binary_crossentropy: 0.4560

Epoch 5/20

49/49 - 0s - loss: 0.0225 - accuracy: 0.9967 - binary_crossentropy: 0.0225 - val_loss: 0.5348 - val_accuracy: 0.8516 - val_binary_crossentropy: 0.5348

Epoch 6/20

49/49 - 0s - loss: 0.0040 - accuracy: 0.9998 - binary_crossentropy: 0.0040 - val_loss: 0.6235 - val_accuracy: 0.8520 - val_binary_crossentropy: 0.6235

Epoch 7/20

49/49 - 0s - loss: 0.0011 - accuracy: 1.0000 - binary_crossentropy: 0.0011 - val_loss: 0.6615 - val_accuracy: 0.8538 - val_binary_crossentropy: 0.6615

Epoch 8/20

49/49 - 0s - loss: 6.0507e-04 - accuracy: 1.0000 - binary_crossentropy: 6.0507e-04 - val_loss: 0.6893 - val_accuracy: 0.8552 - val_binary_crossentropy: 0.6893

Epoch 9/20

49/49 - 0s - loss: 4.2356e-04 - accuracy: 1.0000 - binary_crossentropy: 4.2356e-04 - val_loss: 0.7128 - val_accuracy: 0.8552 - val_binary_crossentropy: 0.7128

Epoch 10/20

49/49 - 0s - loss: 3.2108e-04 - accuracy: 1.0000 - binary_crossentropy: 3.2108e-04 - val_loss: 0.7312 - val_accuracy: 0.8550 - val_binary_crossentropy: 0.7312

Epoch 11/20

49/49 - 0s - loss: 2.5215e-04 - accuracy: 1.0000 - binary_crossentropy: 2.5215e-04 - val_loss: 0.7479 - val_accuracy: 0.8550 - val_binary_crossentropy: 0.7479

Epoch 12/20

49/49 - 1s - loss: 2.0315e-04 - accuracy: 1.0000 - binary_crossentropy: 2.0315e-04 - val_loss: 0.7639 - val_accuracy: 0.8550 - val_binary_crossentropy: 0.7639

Epoch 13/20

49/49 - 0s - loss: 1.6640e-04 - accuracy: 1.0000 - binary_crossentropy: 1.6640e-04 - val_loss: 0.7787 - val_accuracy: 0.8552 - val_binary_crossentropy: 0.7787

Epoch 14/20

49/49 - 0s - loss: 1.3808e-04 - accuracy: 1.0000 - binary_crossentropy: 1.3808e-04 - val_loss: 0.7912 - val_accuracy: 0.8550 - val_binary_crossentropy: 0.7912

Epoch 15/20

49/49 - 0s - loss: 1.1601e-04 - accuracy: 1.0000 - binary_crossentropy: 1.1601e-04 - val_loss: 0.8047 - val_accuracy: 0.8551 - val_binary_crossentropy: 0.8047

Epoch 16/20

49/49 - 0s - loss: 9.8321e-05 - accuracy: 1.0000 - binary_crossentropy: 9.8321e-05 - val_loss: 0.8168 - val_accuracy: 0.8550 - val_binary_crossentropy: 0.8168

Epoch 17/20

49/49 - 1s - loss: 8.4098e-05 - accuracy: 1.0000 - binary_crossentropy: 8.4098e-05 - val_loss: 0.8283 - val_accuracy: 0.8548 - val_binary_crossentropy: 0.8283

Epoch 18/20

49/49 - 0s - loss: 7.2466e-05 - accuracy: 1.0000 - binary_crossentropy: 7.2466e-05 - val_loss: 0.8398 - val_accuracy: 0.8550 - val_binary_crossentropy: 0.8398

Epoch 19/20

49/49 - 0s - loss: 6.2920e-05 - accuracy: 1.0000 - binary_crossentropy: 6.2920e-05 - val_loss: 0.8506 - val_accuracy: 0.8550 - val_binary_crossentropy: 0.8506

Epoch 20/20

49/49 - 0s - loss: 5.4970e-05 - accuracy: 1.0000 - binary_crossentropy: 5.4970e-05 - val_loss: 0.8606 - val_accuracy: 0.8548 - val_binary_crossentropy: 0.8606

Epoch 1/20

49/49 - 0s - loss: 0.6338 - accuracy: 0.7066 - binary_crossentropy: 0.5913 - val_loss: 0.4948 - val_accuracy: 0.8186 - val_binary_crossentropy: 0.4545

Epoch 2/20

49/49 - 0s - loss: 0.4199 - accuracy: 0.8464 - binary_crossentropy: 0.3789 - val_loss: 0.3895 - val_accuracy: 0.8535 - val_binary_crossentropy: 0.3481

Epoch 3/20

49/49 - 0s - loss: 0.3664 - accuracy: 0.8661 - binary_crossentropy: 0.3253 - val_loss: 0.3720 - val_accuracy: 0.8609 - val_binary_crossentropy: 0.3315

Epoch 4/20

49/49 - 0s - loss: 0.3514 - accuracy: 0.8716 - binary_crossentropy: 0.3116 - val_loss: 0.3676 - val_accuracy: 0.8611 - val_binary_crossentropy: 0.3287

Epoch 5/20

49/49 - 0s - loss: 0.3452 - accuracy: 0.8717 - binary_crossentropy: 0.3072 - val_loss: 0.3643 - val_accuracy: 0.8623 - val_binary_crossentropy: 0.3271

Epoch 6/20

49/49 - 0s - loss: 0.3424 - accuracy: 0.8736 - binary_crossentropy: 0.3059 - val_loss: 0.3626 - val_accuracy: 0.8611 - val_binary_crossentropy: 0.3271

Epoch 7/20

49/49 - 0s - loss: 0.3382 - accuracy: 0.8754 - binary_crossentropy: 0.3032 - val_loss: 0.3647 - val_accuracy: 0.8596 - val_binary_crossentropy: 0.3305

Epoch 8/20

49/49 - 0s - loss: 0.3367 - accuracy: 0.8757 - binary_crossentropy: 0.3031 - val_loss: 0.3611 - val_accuracy: 0.8604 - val_binary_crossentropy: 0.3282

Epoch 9/20

49/49 - 0s - loss: 0.3364 - accuracy: 0.8749 - binary_crossentropy: 0.3040 - val_loss: 0.3624 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3306

Epoch 10/20

49/49 - 0s - loss: 0.3333 - accuracy: 0.8750 - binary_crossentropy: 0.3019 - val_loss: 0.3590 - val_accuracy: 0.8597 - val_binary_crossentropy: 0.3281

Epoch 11/20

49/49 - 0s - loss: 0.3313 - accuracy: 0.8760 - binary_crossentropy: 0.3008 - val_loss: 0.3580 - val_accuracy: 0.8595 - val_binary_crossentropy: 0.3281

Epoch 12/20

49/49 - 0s - loss: 0.3296 - accuracy: 0.8751 - binary_crossentropy: 0.2999 - val_loss: 0.3578 - val_accuracy: 0.8610 - val_binary_crossentropy: 0.3285

Epoch 13/20

49/49 - 0s - loss: 0.3280 - accuracy: 0.8765 - binary_crossentropy: 0.2991 - val_loss: 0.3562 - val_accuracy: 0.8606 - val_binary_crossentropy: 0.3277

Epoch 14/20

49/49 - 0s - loss: 0.3263 - accuracy: 0.8774 - binary_crossentropy: 0.2979 - val_loss: 0.3557 - val_accuracy: 0.8602 - val_binary_crossentropy: 0.3276

Epoch 15/20

49/49 - 0s - loss: 0.3249 - accuracy: 0.8773 - binary_crossentropy: 0.2970 - val_loss: 0.3568 - val_accuracy: 0.8582 - val_binary_crossentropy: 0.3293

Epoch 16/20

49/49 - 0s - loss: 0.3252 - accuracy: 0.8775 - binary_crossentropy: 0.2978 - val_loss: 0.3583 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3312

Epoch 17/20

49/49 - 0s - loss: 0.3237 - accuracy: 0.8764 - binary_crossentropy: 0.2967 - val_loss: 0.3596 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3328

Epoch 18/20

49/49 - 0s - loss: 0.3234 - accuracy: 0.8777 - binary_crossentropy: 0.2966 - val_loss: 0.3560 - val_accuracy: 0.8570 - val_binary_crossentropy: 0.3295

Epoch 19/20

49/49 - 1s - loss: 0.3194 - accuracy: 0.8774 - binary_crossentropy: 0.2928 - val_loss: 0.3551 - val_accuracy: 0.8607 - val_binary_crossentropy: 0.3286

Epoch 20/20

49/49 - 0s - loss: 0.3195 - accuracy: 0.8778 - binary_crossentropy: 0.2931 - val_loss: 0.3539 - val_accuracy: 0.8592 - val_binary_crossentropy: 0.3276

Epoch 1/20

49/49 - 0s - loss: 0.6746 - accuracy: 0.5684 - binary_crossentropy: 0.6746 - val_loss: 0.5901 - val_accuracy: 0.7660 - val_binary_crossentropy: 0.5901

Epoch 2/20

49/49 - 0s - loss: 0.5596 - accuracy: 0.7112 - binary_crossentropy: 0.5596 - val_loss: 0.4331 - val_accuracy: 0.8308 - val_binary_crossentropy: 0.4331

Epoch 3/20

49/49 - 1s - loss: 0.4721 - accuracy: 0.7833 - binary_crossentropy: 0.4721 - val_loss: 0.3661 - val_accuracy: 0.8482 - val_binary_crossentropy: 0.3661

Epoch 4/20

49/49 - 0s - loss: 0.4245 - accuracy: 0.8172 - binary_crossentropy: 0.4245 - val_loss: 0.3439 - val_accuracy: 0.8554 - val_binary_crossentropy: 0.3439

Epoch 5/20

49/49 - 0s - loss: 0.3954 - accuracy: 0.8349 - binary_crossentropy: 0.3954 - val_loss: 0.3323 - val_accuracy: 0.8571 - val_binary_crossentropy: 0.3323

Epoch 6/20

49/49 - 0s - loss: 0.3780 - accuracy: 0.8465 - binary_crossentropy: 0.3780 - val_loss: 0.3271 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3271

Epoch 7/20

49/49 - 0s - loss: 0.3614 - accuracy: 0.8554 - binary_crossentropy: 0.3614 - val_loss: 0.3262 - val_accuracy: 0.8578 - val_binary_crossentropy: 0.3262

Epoch 8/20

49/49 - 0s - loss: 0.3513 - accuracy: 0.8590 - binary_crossentropy: 0.3513 - val_loss: 0.3226 - val_accuracy: 0.8596 - val_binary_crossentropy: 0.3226

Epoch 9/20

49/49 - 0s - loss: 0.3354 - accuracy: 0.8654 - binary_crossentropy: 0.3354 - val_loss: 0.3224 - val_accuracy: 0.8593 - val_binary_crossentropy: 0.3224

Epoch 10/20

49/49 - 0s - loss: 0.3339 - accuracy: 0.8690 - binary_crossentropy: 0.3339 - val_loss: 0.3224 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3224

Epoch 11/20

49/49 - 0s - loss: 0.3263 - accuracy: 0.8711 - binary_crossentropy: 0.3263 - val_loss: 0.3245 - val_accuracy: 0.8585 - val_binary_crossentropy: 0.3245

Epoch 12/20

49/49 - 0s - loss: 0.3184 - accuracy: 0.8758 - binary_crossentropy: 0.3184 - val_loss: 0.3287 - val_accuracy: 0.8588 - val_binary_crossentropy: 0.3287

Epoch 13/20

49/49 - 0s - loss: 0.3149 - accuracy: 0.8778 - binary_crossentropy: 0.3149 - val_loss: 0.3283 - val_accuracy: 0.8596 - val_binary_crossentropy: 0.3283

Epoch 14/20

49/49 - 0s - loss: 0.3117 - accuracy: 0.8770 - binary_crossentropy: 0.3117 - val_loss: 0.3306 - val_accuracy: 0.8570 - val_binary_crossentropy: 0.3306

Epoch 15/20

49/49 - 0s - loss: 0.3030 - accuracy: 0.8826 - binary_crossentropy: 0.3030 - val_loss: 0.3322 - val_accuracy: 0.8574 - val_binary_crossentropy: 0.3322

Epoch 16/20

49/49 - 0s - loss: 0.2985 - accuracy: 0.8822 - binary_crossentropy: 0.2985 - val_loss: 0.3324 - val_accuracy: 0.8566 - val_binary_crossentropy: 0.3324

Epoch 17/20

49/49 - 0s - loss: 0.2943 - accuracy: 0.8840 - binary_crossentropy: 0.2943 - val_loss: 0.3337 - val_accuracy: 0.8568 - val_binary_crossentropy: 0.3337

Epoch 18/20

49/49 - 0s - loss: 0.2948 - accuracy: 0.8846 - binary_crossentropy: 0.2948 - val_loss: 0.3348 - val_accuracy: 0.8559 - val_binary_crossentropy: 0.3348

Epoch 19/20

49/49 - 0s - loss: 0.2897 - accuracy: 0.8843 - binary_crossentropy: 0.2897 - val_loss: 0.3388 - val_accuracy: 0.8554 - val_binary_crossentropy: 0.3388

Epoch 20/20

49/49 - 0s - loss: 0.2836 - accuracy: 0.8870 - binary_crossentropy: 0.2836 - val_loss: 0.3458 - val_accuracy: 0.8563 - val_binary_crossentropy: 0.3458

 

import tensorflow as tf

from tensorflow import keras

 

import numpy as np

import matplotlib.pyplot as plt

 

print(tf.__version__)

 

NUM_WORDS = 1000

 

(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)

 

def multi_hot_sequences(sequences, dimension):

    # Create an all-zeros matrix of shape (len(sequences), dimension)

    results = np.zeros((len(sequences), dimension))

    for i, word_indices in enumerate(sequences):

        results[i, word_indices] = 1.0  # set only those word indices of results[i] to 1

    return results

 

 

train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)

test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
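To make the encoding concrete, here is a quick sanity check of what multi_hot_sequences produces on a tiny made-up input (the sequences and the dimension below are illustrative only, not part of the tutorial):

# Toy check of the multi-hot encoding defined above (hypothetical input).
toy = multi_hot_sequences([[0, 2], [1, 1, 3]], dimension=4)
print(toy)
# [[1. 0. 1. 0.]
#  [0. 1. 0. 1.]]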

 

plt.plot(train_data[0])

 

baseline_model = keras.Sequential([

    # `input_shape` is needed here so that `.summary` can be called

    keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),

    keras.layers.Dense(16, activation='relu'),

    keras.layers.Dense(1, activation='sigmoid')

])

 

baseline_model.compile(optimizer='adam',

                       loss='binary_crossentropy',

                       metrics=['accuracy', 'binary_crossentropy'])

 

baseline_model.summary()

 

baseline_history = baseline_model.fit(train_data,

                                      train_labels,

                                      epochs=20,

                                      batch_size=512,

                                      validation_data=(test_data, test_labels),

                                      verbose=2)

 

smaller_model = keras.Sequential([

    keras.layers.Dense(4, activation='relu', input_shape=(NUM_WORDS,)),

    keras.layers.Dense(4, activation='relu'),

    keras.layers.Dense(1, activation='sigmoid')

])

 

smaller_model.compile(optimizer='adam',

                      loss='binary_crossentropy',

                      metrics=['accuracy', 'binary_crossentropy'])

 

smaller_model.summary()

 

smaller_history = smaller_model.fit(train_data,

                                    train_labels,

                                    epochs=20,

                                    batch_size=512,

                                    validation_data=(test_data, test_labels),

                                    verbose=2)

 

bigger_model = keras.models.Sequential([

    keras.layers.Dense(512, activation='relu', input_shape=(NUM_WORDS,)),

    keras.layers.Dense(512, activation='relu'),

    keras.layers.Dense(1, activation='sigmoid')

])

 

bigger_model.compile(optimizer='adam',

                     loss='binary_crossentropy',

                     metrics=['accuracy','binary_crossentropy'])

 

bigger_model.summary()

 

bigger_history = bigger_model.fit(train_data, train_labels,

                                  epochs=20,

                                  batch_size=512,

                                  validation_data=(test_data, test_labels),

                                  verbose=2)

 

def plot_history(histories, key='binary_crossentropy'):

  plt.figure(figsize=(16,10))

 

  for name, history in histories:

    val = plt.plot(history.epoch, history.history['val_'+key],

                   '--', label=name.title()+' Val')

    plt.plot(history.epoch, history.history[key], color=val[0].get_color(),

             label=name.title()+' Train')

 

  plt.xlabel('Epochs')

  plt.ylabel(key.replace('_',' ').title())

  plt.legend()

 

  plt.xlim([0,max(history.epoch)])

 

 

plot_history([('baseline', baseline_history),

              ('smaller', smaller_history),

              ('bigger', bigger_history)])

 

l2_model = keras.models.Sequential([

    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),

                       activation='relu', input_shape=(NUM_WORDS,)),

    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),

                       activation='relu'),

    keras.layers.Dense(1, activation='sigmoid')

])

 

l2_model.compile(optimizer='adam',

                 loss='binary_crossentropy',

                 metrics=['accuracy', 'binary_crossentropy'])

 

l2_model_history = l2_model.fit(train_data, train_labels,

                                epochs=20,

                                batch_size=512,

                                validation_data=(test_data, test_labels),

                                verbose=2)

 

plot_history([('baseline', baseline_history),

              ('l2', l2_model_history)])
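One detail worth keeping in mind when reading the l2_model log: Keras adds the kernel_regularizer penalty to the reported loss, while the binary_crossentropy metric stays the plain cross-entropy, which is why loss sits slightly above binary_crossentropy for the regularized run. A minimal sketch for inspecting the penalty term directly, assuming the model has already been built (it relies on the model.losses collection that tf.keras fills from the regularizers):

# Sum the L2 penalty terms tracked by the model
# (one entry per Dense layer that was given kernel_regularizer).
l2_penalty = tf.add_n(l2_model.losses)
print('current L2 penalty:', float(l2_penalty))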

 

dpt_model = keras.models.Sequential([

    keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),

    keras.layers.Dropout(0.5),

    keras.layers.Dense(16, activation='relu'),

    keras.layers.Dropout(0.5),

    keras.layers.Dense(1, activation='sigmoid')

])

 

dpt_model.compile(optimizer='adam',

                  loss='binary_crossentropy',

                  metrics=['accuracy','binary_crossentropy'])

 

dpt_model_history = dpt_model.fit(train_data, train_labels,

                                  epochs=20,

                                  batch_size=512,

                                  validation_data=(test_data, test_labels),

                                  verbose=2)

 

plot_history([('baseline', baseline_history),

              ('dropout', dpt_model_history)])
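Dropout(0.5) only zeroes activations during training; at inference the layer passes its input through unchanged. A standalone sketch that shows this behaviour on a made-up tensor:

# With training=True, Dropout zeroes roughly half the entries and rescales
# the survivors by 1/(1 - rate); with training=False it is a no-op.
drop = keras.layers.Dropout(0.5)
x = tf.ones((1, 10))
print(drop(x, training=True))   # about half zeros, the rest 2.0
print(drop(x, training=False))  # all ones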

 

 

#

# Copyright (c) 2017 François Chollet

#

# Permission is hereby granted, free of charge, to any person obtaining a

# copy of this software and associated documentation files (the "Software"),

# to deal in the Software without restriction, including without limitation

# the rights to use, copy, modify, merge, publish, distribute, sublicense,

# and/or sell copies of the Software, and to permit persons to whom the

# Software is furnished to do so, subject to the following conditions:

#

# The above copyright notice and this permission notice shall be included in

# all copies or substantial portions of the Software.

#

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR

# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,

# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL

# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER

# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING

# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER

# DEALINGS IN THE SOFTWARE.

 


 

tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.

  (0) Internal:  Blas GEMM launch failed : a.shape=(512, 16), b.shape=(16, 16), m=512, n=16, k=16

            [[node sequential/dense/MatMul (defined at O:/PycharmProjects/catdogtf2.2/004.py:74) ]]

  (1) Internal:  Blas GEMM launch failed : a.shape=(512, 16), b.shape=(16, 16), m=512, n=16, k=16

            [[node sequential/dense/MatMul (defined at O:/PycharmProjects/catdogtf2.2/004.py:74) ]]

            [[gradient_tape/sequential/embedding/embedding_lookup/Reshape/_46]]

 

 

 

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=53066

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

PyDev console: starting.

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

>>> runfile('O:/PycharmProjects/catdogtf2.2/004.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 05:20:09.882744: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2.2.0

O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\keras\datasets\imdb.py:155: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray

  x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx])

O:\PycharmProjects\catdogtf2.2\venv\lib\site-packages\tensorflow\python\keras\datasets\imdb.py:156: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray

  x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:])

훈련 샘플: 25000, 레이블: 25000

[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]

[   1   14   22   16   43  530  973 1622 1385   65  458 4468   66 3941

    4  173   36  256    5   25  100   43  838  112   50  670    2    9

   35  480  284    5  150    4  172  112  167    2  336  385   39    4

  172 4536 1111   17  546   38   13  447    4  192   50   16    6  147

 2025   19   14   22    4 1920 4613  469    4   22   71   87   12   16

   43  530   38   76   15   13 1247    4   22   17  515   17   12   16

  626   18    2    5   62  386   12    8  316    8  106    5    4 2223

 5244   16  480   66 3785   33    4  130   12   16   38  619    5   25

  124   51   36  135   48   25 1415   33    6   22   12  215   28   77

   52    5   14  407   16   82    2    8    4  107  117 5952   15  256

    4    2    7 3766    5  723   36   71   43  530  476   26  400  317

   46    7    4    2 1029   13  104   88    4  381   15  297   98   32

 2071   56   26  141    6  194 7486   18    4  226   22   21  134  476

   26  480    5  144   30 5535   18   51   36   28  224   92   25  104

    4  226   65   16   38 1334   88   12   16  283    5   16 4472  113

  103   32   15   16 5345   19  178   32    0    0    0    0    0    0

    0    0    0    0    0    0    0    0    0    0    0    0    0    0

    0    0    0    0    0    0    0    0    0    0    0    0    0    0

    0    0    0    0]

2020-08-11 05:20:18.596801: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 05:20:18.637406: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:20:18.637937: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:20:18.645961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:20:18.652048: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:20:18.655131: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:20:18.663508: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:20:18.668992: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:20:18.685127: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:20:18.685528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:20:18.686031: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 05:20:18.697734: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1a1122e60f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:20:18.698127: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 05:20:18.698600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:20:18.699083: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:20:18.699331: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:20:18.699595: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:20:18.699820: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:20:18.700003: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:20:18.700183: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:20:18.700378: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:20:18.700642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:20:19.358531: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 05:20:19.358724: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 05:20:19.358810: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 05:20:19.359171: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 05:20:19.362586: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1a113c59140 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:20:19.362846: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

Model: "sequential"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

embedding (Embedding)        (None, None, 16)          160000   

_________________________________________________________________

global_average_pooling1d (Gl (None, 16)                0        

_________________________________________________________________

dense (Dense)                (None, 16)                272      

_________________________________________________________________

dense_1 (Dense)              (None, 1)                 17       

=================================================================

Total params: 160,289

Trainable params: 160,289

Non-trainable params: 0

_________________________________________________________________

Epoch 1/40

2020-08-11 05:20:20.301884: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

30/30 [==============================] - 1s 23ms/step - loss: 0.6917 - accuracy: 0.5509 - val_loss: 0.6898 - val_accuracy: 0.5002

Epoch 2/40

30/30 [==============================] - 0s 16ms/step - loss: 0.6846 - accuracy: 0.6117 - val_loss: 0.6797 - val_accuracy: 0.7024

Epoch 3/40

30/30 [==============================] - 0s 17ms/step - loss: 0.6703 - accuracy: 0.7325 - val_loss: 0.6630 - val_accuracy: 0.7302

Epoch 4/40

30/30 [==============================] - 0s 16ms/step - loss: 0.6471 - accuracy: 0.7548 - val_loss: 0.6368 - val_accuracy: 0.7576

Epoch 5/40

30/30 [==============================] - 0s 16ms/step - loss: 0.6132 - accuracy: 0.7815 - val_loss: 0.6014 - val_accuracy: 0.7876

Epoch 6/40

30/30 [==============================] - 0s 16ms/step - loss: 0.5710 - accuracy: 0.8108 - val_loss: 0.5609 - val_accuracy: 0.8045

Epoch 7/40

30/30 [==============================] - 1s 17ms/step - loss: 0.5249 - accuracy: 0.8304 - val_loss: 0.5189 - val_accuracy: 0.8153

Epoch 8/40

30/30 [==============================] - 0s 16ms/step - loss: 0.4791 - accuracy: 0.8481 - val_loss: 0.4785 - val_accuracy: 0.8324

Epoch 9/40

30/30 [==============================] - 0s 16ms/step - loss: 0.4364 - accuracy: 0.8610 - val_loss: 0.4428 - val_accuracy: 0.8430

Epoch 10/40

30/30 [==============================] - 0s 16ms/step - loss: 0.3986 - accuracy: 0.8729 - val_loss: 0.4132 - val_accuracy: 0.8495

Epoch 11/40

30/30 [==============================] - 0s 16ms/step - loss: 0.3664 - accuracy: 0.8797 - val_loss: 0.3890 - val_accuracy: 0.8559

Epoch 12/40

30/30 [==============================] - 0s 16ms/step - loss: 0.3392 - accuracy: 0.8879 - val_loss: 0.3684 - val_accuracy: 0.8636

Epoch 13/40

30/30 [==============================] - 0s 16ms/step - loss: 0.3158 - accuracy: 0.8953 - val_loss: 0.3523 - val_accuracy: 0.8677

Epoch 14/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2961 - accuracy: 0.9003 - val_loss: 0.3392 - val_accuracy: 0.8716

Epoch 15/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2785 - accuracy: 0.9069 - val_loss: 0.3283 - val_accuracy: 0.8743

Epoch 16/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2631 - accuracy: 0.9111 - val_loss: 0.3204 - val_accuracy: 0.8760

Epoch 17/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2498 - accuracy: 0.9151 - val_loss: 0.3122 - val_accuracy: 0.8796

Epoch 18/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2375 - accuracy: 0.9199 - val_loss: 0.3064 - val_accuracy: 0.8795

Epoch 19/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2257 - accuracy: 0.9237 - val_loss: 0.3023 - val_accuracy: 0.8803

Epoch 20/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2153 - accuracy: 0.9259 - val_loss: 0.2972 - val_accuracy: 0.8824

Epoch 21/40

30/30 [==============================] - 0s 16ms/step - loss: 0.2055 - accuracy: 0.9297 - val_loss: 0.2950 - val_accuracy: 0.8819

Epoch 22/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1970 - accuracy: 0.9327 - val_loss: 0.2915 - val_accuracy: 0.8834

Epoch 23/40

30/30 [==============================] - 0s 17ms/step - loss: 0.1881 - accuracy: 0.9379 - val_loss: 0.2898 - val_accuracy: 0.8832

Epoch 24/40

30/30 [==============================] - 0s 17ms/step - loss: 0.1804 - accuracy: 0.9411 - val_loss: 0.2882 - val_accuracy: 0.8842

Epoch 25/40

30/30 [==============================] - 1s 17ms/step - loss: 0.1733 - accuracy: 0.9441 - val_loss: 0.2867 - val_accuracy: 0.8850

Epoch 26/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1656 - accuracy: 0.9472 - val_loss: 0.2858 - val_accuracy: 0.8861

Epoch 27/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1593 - accuracy: 0.9504 - val_loss: 0.2857 - val_accuracy: 0.8858

Epoch 28/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1529 - accuracy: 0.9529 - val_loss: 0.2857 - val_accuracy: 0.8862

Epoch 29/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1469 - accuracy: 0.9555 - val_loss: 0.2863 - val_accuracy: 0.8862

Epoch 30/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1416 - accuracy: 0.9579 - val_loss: 0.2871 - val_accuracy: 0.8865

Epoch 31/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1360 - accuracy: 0.9603 - val_loss: 0.2889 - val_accuracy: 0.8858

Epoch 32/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1310 - accuracy: 0.9619 - val_loss: 0.2897 - val_accuracy: 0.8867

Epoch 33/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1260 - accuracy: 0.9637 - val_loss: 0.2917 - val_accuracy: 0.8860

Epoch 34/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1210 - accuracy: 0.9655 - val_loss: 0.2925 - val_accuracy: 0.8856

Epoch 35/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1174 - accuracy: 0.9663 - val_loss: 0.2950 - val_accuracy: 0.8871

Epoch 36/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1123 - accuracy: 0.9689 - val_loss: 0.2974 - val_accuracy: 0.8854

Epoch 37/40

30/30 [==============================] - 0s 16ms/step - loss: 0.1082 - accuracy: 0.9699 - val_loss: 0.3024 - val_accuracy: 0.8819

Epoch 38/40

30/30 [==============================] - 1s 17ms/step - loss: 0.1044 - accuracy: 0.9715 - val_loss: 0.3019 - val_accuracy: 0.8837

Epoch 39/40

30/30 [==============================] - 0s 17ms/step - loss: 0.1002 - accuracy: 0.9731 - val_loss: 0.3048 - val_accuracy: 0.8841

Epoch 40/40

30/30 [==============================] - 0s 16ms/step - loss: 0.0964 - accuracy: 0.9750 - val_loss: 0.3096 - val_accuracy: 0.8812

782/782 - 1s - loss: 0.3282 - accuracy: 0.8718

[0.32820913195610046, 0.8718400001525879]

 

 

 

Coming back to things that used to work fine, nothing runs on the first try anymore, heh. Anyway, preparing this material is fun. Requests for the tutorial sources go to mynameis@hajunho.com (which quietly turns into real work).

 

import tensorflow as tf

from tensorflow import keras

 

import numpy as np

 

print(tf.__version__)

 

imdb = keras.datasets.imdb

 

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

 

print("훈련 샘플: {}, 레이블: {}".format(len(train_data), len(train_labels)))

print(train_data[0])

 

len(train_data[0]), len(train_data[1])

 

# A dictionary mapping words to integer indices

word_index = imdb.get_word_index()

 

# The first few indices are reserved for special tokens

word_index = {k:(v+3) for k,v in word_index.items()}

word_index["<PAD>"] = 0

word_index["<START>"] = 1

word_index["<UNK>"] = 2  # unknown

word_index["<UNUSED>"] = 3

 

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

 

def decode_review(text):

    return ' '.join([reverse_word_index.get(i, '?') for i in text])

 

decode_review(train_data[0])
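The +3 offset exists because indices 0 through 3 are reserved for the special tokens above. A tiny hypothetical vocabulary (the words and indices are invented, not the real IMDB mapping) makes the shift easier to see:

# Hypothetical two-word vocabulary to illustrate the reserved-index shift.
tiny_index = {"good": 1, "movie": 2}                      # raw indices as load_data would return them
tiny_index = {k: (v + 3) for k, v in tiny_index.items()}  # shift past the reserved slots
tiny_index["<PAD>"], tiny_index["<START>"] = 0, 1
tiny_reverse = {v: k for k, v in tiny_index.items()}
print(' '.join(tiny_reverse.get(i, '?') for i in [1, 4, 5]))  # "<START> good movie"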

 

train_data = keras.preprocessing.sequence.pad_sequences(train_data,

                                                        value=word_index["<PAD>"],

                                                        padding='post',

                                                        maxlen=256)

 

test_data = keras.preprocessing.sequence.pad_sequences(test_data,

                                                       value=word_index["<PAD>"],

                                                       padding='post',

                                                       maxlen=256)

 

len(train_data[0]), len(train_data[1])

 

print(train_data[0])
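pad_sequences with padding='post' appends the PAD index until every review is exactly maxlen long (longer reviews get truncated), which is where the trailing zeros in the printed review come from. A toy call on made-up sequences shows the shape it enforces:

# Toy illustration of post-padding to a fixed length of 6 (hypothetical input).
toy_padded = keras.preprocessing.sequence.pad_sequences(
    [[11, 12, 13], [21, 22]], value=0, padding='post', maxlen=6)
print(toy_padded)
# [[11 12 13  0  0  0]
#  [21 22  0  0  0  0]]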

 

# The input size is the vocabulary size used for the movie review dataset (10,000 words)

vocab_size = 10000

 

model = keras.Sequential()

model.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,)))

model.add(keras.layers.GlobalAveragePooling1D())

model.add(keras.layers.Dense(16, activation='relu'))

model.add(keras.layers.Dense(1, activation='sigmoid'))

 

model.summary()
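The parameter counts in the summary can be checked by hand: the Embedding layer stores vocab_size x 16 weights, the pooled 16-dim vector feeds a Dense(16) with 16*16 weights plus 16 biases, and the final Dense(1) adds 16 weights plus 1 bias. A small arithmetic check that just re-derives the numbers printed by model.summary():

# Re-derive the parameter counts reported by model.summary().
embedding_params = vocab_size * 16           # 160,000
hidden_params = 16 * 16 + 16                 # 272
output_params = 16 * 1 + 1                   # 17
print(embedding_params + hidden_params + output_params)  # 160,289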

 

model.compile(optimizer='adam',

              loss='binary_crossentropy',

              metrics=['accuracy'])

 

x_val = train_data[:10000]

partial_x_train = train_data[10000:]

 

y_val = train_labels[:10000]

partial_y_train = train_labels[10000:]

 

history = model.fit(partial_x_train,

                    partial_y_train,

                    epochs=40,

                    batch_size=512,

                    validation_data=(x_val, y_val),

                    verbose=1)

 

results = model.evaluate(test_data,  test_labels, verbose=2)

 

print(results)

 

history_dict = history.history

history_dict.keys()

 

import matplotlib.pyplot as plt

 

acc = history_dict['accuracy']

val_acc = history_dict['val_accuracy']

loss = history_dict['loss']

val_loss = history_dict['val_loss']

 

epochs = range(1, len(acc) + 1)

 

# "bo" "파란색 점"입니다

plt.plot(epochs, loss, 'bo', label='Training loss')

# "b" means "solid blue line"

plt.plot(epochs, val_loss, 'b', label='Validation loss')

plt.title('Training and validation loss')

plt.xlabel('Epochs')

plt.ylabel('Loss')

plt.legend()

 

plt.show()

 

plt.clf()   # clear the figure

 

plt.plot(epochs, acc, 'bo', label='Training acc')

plt.plot(epochs, val_acc, 'b', label='Validation acc')

plt.title('Training and validation accuracy')

plt.xlabel('Epochs')

plt.ylabel('Accuracy')

plt.legend()

 

plt.show()

 

 

#

# Copyright (c) 2017 François Chollet

#

# Permission is hereby granted, free of charge, to any person obtaining a

# copy of this software and associated documentation files (the "Software"),

# to deal in the Software without restriction, including without limitation

# the rights to use, copy, modify, merge, publish, distribute, sublicense,

# and/or sell copies of the Software, and to permit persons to whom the

# Software is furnished to do so, subject to the following conditions:

#

# The above copyright notice and this permission notice shall be included in

# all copies or substantial portions of the Software.

#

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR

# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,

# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL

# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER

# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING

# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER

# DEALINGS IN THE SOFTWARE.

 

 


import numpy as np

 

import tensorflow as tf

 

#!pip install -q tensorflow-hub

#!pip install -q tfds-nightly

import tensorflow_hub as hub

import tensorflow_datasets as tfds

 

print("버전: ", tf.__version__)

print("즉시 실행 모드: ", tf.executing_eagerly())

print("허브 버전: ", hub.__version__)

print("GPU", "사용 가능" if tf.config.experimental.list_physical_devices("GPU") else "NOT AVAILABLE")

 

# Split the training set 60/40.

# This ends up with 15,000 samples for training, 10,000 for validation, and 25,000 for testing.

train_data, validation_data, test_data = tfds.load(

    name="imdb_reviews",

    split=('train[:60%]', 'train[60%:]', 'test'),

    as_supervised=True)

 

train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))

train_examples_batch

 

train_labels_batch

 

embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"

hub_layer = hub.KerasLayer(embedding, input_shape=[],

                           dtype=tf.string, trainable=True)

hub_layer(train_examples_batch[:3])

 

model = tf.keras.Sequential()

model.add(hub_layer)

model.add(tf.keras.layers.Dense(16, activation='relu'))

model.add(tf.keras.layers.Dense(1))

 

model.summary()

 

model.compile(optimizer='adam',

              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),

              metrics=['accuracy'])
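Because the last layer is Dense(1) with no activation, the model outputs raw logits, which is why the loss is built with from_logits=True. If probabilities are needed, a sigmoid can be applied to the predictions afterwards; a rough sketch reusing the example batch loaded earlier (illustration only, not part of the original tutorial):

# The model emits logits; convert them to probabilities explicitly when needed.
logits = model.predict(train_examples_batch[:3])
probs = tf.sigmoid(logits)   # values in (0, 1); ~0.5 is the positive/negative threshold
print(probs.numpy())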

 

history = model.fit(train_data.shuffle(10000).batch(512),

                    epochs=20,

                    validation_data=validation_data.batch(512),

                    verbose=1)

 

results = model.evaluate(test_data.batch(512), verbose=2)

 

for name, value in zip(model.metrics_names, results):

  print("%s: %.3f" % (name, value))

 

  #

  # Copyright (c) 2017 François Chollet

  #

  # Permission is hereby granted, free of charge, to any person obtaining a

  # copy of this software and associated documentation files (the "Software"),

  # to deal in the Software without restriction, including without limitation

  # the rights to use, copy, modify, merge, publish, distribute, sublicense,

  # and/or sell copies of the Software, and to permit persons to whom the

  # Software is furnished to do so, subject to the following conditions:

  #

  # The above copyright notice and this permission notice shall be included in

  # all copies or substantial portions of the Software.

  #

  # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR

  # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,

  # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL

  # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER

  # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING

  # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER

  # DEALINGS IN THE SOFTWARE.

 

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=52110

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

PyDev console: starting.

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

>>> runfile('O:/PycharmProjects/catdogtf2.2/003.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 05:11:11.697892: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

버전:  2.2.0

즉시 실행 모드:  True

허브 버전:  0.8.0

2020-08-11 05:11:14.939954: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 05:11:14.981186: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:11:14.981690: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:11:14.987793: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:11:14.992555: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:11:14.995107: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:11:15.000251: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:11:15.003882: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:11:15.011752: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:11:15.012144: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

GPU 사용 가능

Downloading and preparing dataset imdb_reviews/plain_text/1.0.0 (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\joe\tensorflow_datasets\imdb_reviews\plain_text\1.0.0...


Dl Completed...: 100%|██████████| 1/1 [00:18<00:00, 18.17s/ url]

Dl Size...: 100%|██████████| 80/80 [00:18<00:00,  3.30 MiB/s]

Dl Size...: 100%|██████████| 80/80 [00:18<00:00,  4.40 MiB/s]

Dl Completed...: 100%|██████████| 1/1 [00:18<00:00, 18.19s/ url]

2020-08-11 05:12:33.707697: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 05:12:33.717327: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2cfc5ef4840 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:12:33.717705: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 05:12:33.718160: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:12:33.718695: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:12:33.718988: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:12:33.719280: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:12:33.719484: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:12:33.719627: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:12:33.719774: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:12:33.720091: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:12:33.720416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:12:34.390731: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 05:12:34.391027: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 05:12:34.391187: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 05:12:34.391527: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 05:12:34.395072: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2cfe70ccd90 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:12:34.395451: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

2020-08-11 05:12:34.537614: W tensorflow/core/kernels/data/cache_dataset_ops.cc:794] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.

 

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=52502

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

PyDev console: starting.

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

>>> runfile('O:/PycharmProjects/catdogtf2.2/003.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 05:13:57.568884: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

버전:  2.2.0

즉시 실행 모드:  True

허브 버전:  0.8.0

2020-08-11 05:14:00.879870: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 05:14:00.922800: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:14:00.923333: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:14:00.930349: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:14:00.936246: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:14:00.939358: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:14:00.946195: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:14:00.950313: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:14:00.967528: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:14:00.967911: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

GPU 사용 가능

2020-08-11 05:14:00.973950: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 05:14:00.984209: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2226ccff3e0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:14:00.984530: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 05:14:00.984990: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:14:00.985551: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:14:00.985809: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:14:00.986154: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:14:00.986450: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:14:00.986694: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:14:00.986984: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:14:00.987291: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:14:00.987605: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:14:01.809327: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 05:14:01.809490: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 05:14:01.809577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 05:14:01.810043: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 05:14:01.815096: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2220f136b30 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:14:01.815298: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

2020-08-11 05:14:02.005695: W tensorflow/core/kernels/data/cache_dataset_ops.cc:794] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.

Model: "sequential"

_________________________________________________________________

Layer (type)                 Output Shape              Param #  

=================================================================

keras_layer (KerasLayer)     (None, 20)                400020   

_________________________________________________________________

dense (Dense)                (None, 16)                336      

_________________________________________________________________

dense_1 (Dense)              (None, 1)                 17       

=================================================================

Total params: 400,373

Trainable params: 400,373

Non-trainable params: 0

_________________________________________________________________

Epoch 1/20

2020-08-11 05:14:05.483625: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 4 in the outer inference context.

2020-08-11 05:14:05.483920: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 3 in the outer inference context.

2020-08-11 05:14:05.484223: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 2 in the outer inference context.

2020-08-11 05:14:05.484393: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 1 in the outer inference context.

2020-08-11 05:14:05.661180: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 4 in the outer inference context.

2020-08-11 05:14:05.661578: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 3 in the outer inference context.

2020-08-11 05:14:05.661921: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 2 in the outer inference context.

2020-08-11 05:14:05.662296: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 1 in the outer inference context.

2020-08-11 05:14:05.862861: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

30/30 [==============================] - 2s 77ms/step - loss: 0.9824 - accuracy: 0.4959 - val_loss: 0.7466 - val_accuracy: 0.5309

Epoch 2/20

30/30 [==============================] - 2s 63ms/step - loss: 0.6758 - accuracy: 0.5873 - val_loss: 0.6331 - val_accuracy: 0.6114

Epoch 3/20

30/30 [==============================] - 2s 64ms/step - loss: 0.6032 - accuracy: 0.6542 - val_loss: 0.5875 - val_accuracy: 0.6675

Epoch 4/20

30/30 [==============================] - 2s 64ms/step - loss: 0.5615 - accuracy: 0.6960 - val_loss: 0.5546 - val_accuracy: 0.6944

Epoch 5/20

30/30 [==============================] - 2s 64ms/step - loss: 0.5261 - accuracy: 0.7221 - val_loss: 0.5262 - val_accuracy: 0.7258

Epoch 6/20

30/30 [==============================] - 2s 64ms/step - loss: 0.4938 - accuracy: 0.7515 - val_loss: 0.4988 - val_accuracy: 0.7450

Epoch 7/20

30/30 [==============================] - 2s 64ms/step - loss: 0.4632 - accuracy: 0.7757 - val_loss: 0.4738 - val_accuracy: 0.7636

Epoch 8/20

30/30 [==============================] - 2s 63ms/step - loss: 0.4335 - accuracy: 0.7958 - val_loss: 0.4511 - val_accuracy: 0.7811

Epoch 9/20

30/30 [==============================] - 2s 63ms/step - loss: 0.4051 - accuracy: 0.8124 - val_loss: 0.4291 - val_accuracy: 0.7933

Epoch 10/20

30/30 [==============================] - 2s 64ms/step - loss: 0.3793 - accuracy: 0.8265 - val_loss: 0.4112 - val_accuracy: 0.8127

Epoch 11/20

30/30 [==============================] - 2s 63ms/step - loss: 0.3532 - accuracy: 0.8447 - val_loss: 0.3921 - val_accuracy: 0.8124

Epoch 12/20

30/30 [==============================] - 2s 64ms/step - loss: 0.3298 - accuracy: 0.8587 - val_loss: 0.3778 - val_accuracy: 0.8323

Epoch 13/20

30/30 [==============================] - 2s 64ms/step - loss: 0.3074 - accuracy: 0.8713 - val_loss: 0.3630 - val_accuracy: 0.8384

Epoch 14/20

30/30 [==============================] - 2s 63ms/step - loss: 0.2867 - accuracy: 0.8837 - val_loss: 0.3511 - val_accuracy: 0.8370

Epoch 15/20

30/30 [==============================] - 2s 64ms/step - loss: 0.2676 - accuracy: 0.8936 - val_loss: 0.3404 - val_accuracy: 0.8460

Epoch 16/20

30/30 [==============================] - 2s 64ms/step - loss: 0.2502 - accuracy: 0.9006 - val_loss: 0.3318 - val_accuracy: 0.8513

Epoch 17/20

30/30 [==============================] - 2s 64ms/step - loss: 0.2345 - accuracy: 0.9081 - val_loss: 0.3253 - val_accuracy: 0.8579

Epoch 18/20

30/30 [==============================] - 2s 65ms/step - loss: 0.2197 - accuracy: 0.9150 - val_loss: 0.3219 - val_accuracy: 0.8656

Epoch 19/20

30/30 [==============================] - 2s 64ms/step - loss: 0.2057 - accuracy: 0.9225 - val_loss: 0.3145 - val_accuracy: 0.8608

Epoch 20/20

30/30 [==============================] - 2s 66ms/step - loss: 0.1934 - accuracy: 0.9298 - val_loss: 0.3126 - val_accuracy: 0.8656

49/49 - 2s - loss: 0.3249 - accuracy: 0.8554

loss: 0.325

accuracy: 0.855
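About the cache_dataset warning near the top of this run: it is purely about operator ordering in tf.data. If cache() comes before take(k), the cache never sees the whole dataset and gets thrown away on every epoch. A minimal standalone sketch of the two orderings (toy pipeline, not part of the tutorial script):

import tensorflow as tf

ds = tf.data.Dataset.range(100)

# Pattern the warning complains about: take(10) stops the iterator before the
# full dataset is read, so the partially filled cache is discarded each epoch.
bad = ds.cache().take(10).repeat(2)

# Ordering recommended by the warning: truncate first, then cache the result.
good = ds.take(10).cache().repeat(2)

print(len(list(good.as_numpy_iterator())))  # 20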

 

 

Things that used to work fine never work in one go when I come back to them, lol. Anyway, preparing the data is fun. For the tutorials source code, send a request to mynameis@hajunho.com (it's quietly a fair bit of work).

 

tensorflow.python.framework.errors_impl.InternalError:  Blas GEMM launch failed : a.shape=(32, 784), b.shape=(784, 128), m=32, n=128, k=784

            [[node sequential/dense/MatMul (defined at O:/PycharmProjects/catdogtf2.2/002.py:58) ]] [Op:__inference_train_function_542]

 

Blas GEMM launch failed: turning PyCharm off and on again fixes it. The invincible off & on.
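For the record, a common trigger for "Blas GEMM launch failed" is cuBLAS failing to grab GPU memory because another process (or a leftover Python session) is still holding it, which is presumably why the off-and-on works. A hedged alternative to restarting the IDE is to enable memory growth before building the model (standard TF 2.x calls, not something the tutorial scripts here do):

import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand instead of grabbing it all
# up front; this often avoids cuBLAS/cuDNN initialization failures when
# another process already holds part of the memory.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)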

 

O:\PycharmProjects\catdogtf2.2\venv\Scripts\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=51027

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\catdogtf2.2', 'O:/PycharmProjects/catdogtf2.2'])

PyDev console: starting.

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

>>> runfile('O:/PycharmProjects/catdogtf2.2/002.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-11 05:03:54.636978: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2.2.0

2020-08-11 05:03:59.300858: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-11 05:03:59.345092: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:03:59.345646: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:03:59.354065: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:03:59.359976: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:03:59.363292: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:03:59.370767: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:03:59.375232: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:03:59.402544: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:03:59.402886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:03:59.403354: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-11 05:03:59.413780: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x17b3e2401b0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:03:59.414181: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-11 05:03:59.414663: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-11 05:03:59.415114: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-11 05:03:59.415306: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-11 05:03:59.415512: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-11 05:03:59.415727: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-11 05:03:59.415931: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-11 05:03:59.416156: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-11 05:03:59.416363: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-11 05:03:59.416598: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-11 05:04:00.086495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-11 05:04:00.086665: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-11 05:04:00.086839: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-11 05:04:00.087157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-11 05:04:00.091100: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x17b8d4b5440 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-11 05:04:00.091383: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

Epoch 1/5

2020-08-11 05:04:00.883945: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

1875/1875 [==============================] - 4s 2ms/step - loss: 0.4979 - accuracy: 0.8245

Epoch 2/5

1875/1875 [==============================] - 4s 2ms/step - loss: 0.3748 - accuracy: 0.8647

Epoch 3/5

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3337 - accuracy: 0.8779

Epoch 4/5

1875/1875 [==============================] - 3s 2ms/step - loss: 0.3118 - accuracy: 0.8850

Epoch 5/5

1875/1875 [==============================] - 3s 2ms/step - loss: 0.2945 - accuracy: 0.8916

313/313 - 1s - loss: 0.3482 - accuracy: 0.8756

테스트 정확도: 0.8755999803543091

 

# Import TensorFlow and tf.keras

import tensorflow as tf

from tensorflow import keras

 

# Import helper libraries

import numpy as np

import matplotlib.pyplot as plt

 

print(tf.__version__)

 

fashion_mnist = keras.datasets.fashion_mnist

 

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

 

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',

               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

 

train_images.shape

 

len(train_labels)

 

train_labels

 

test_images.shape

 

len(test_labels)

 

plt.figure()

plt.imshow(train_images[0])

plt.colorbar()

plt.grid(False)

plt.show()

 

train_images = train_images / 255.0

 

test_images = test_images / 255.0

 

plt.figure(figsize=(10,10))

for i in range(25):

    plt.subplot(5,5,i+1)

    plt.xticks([])

    plt.yticks([])

    plt.grid(False)

    plt.imshow(train_images[i], cmap=plt.cm.binary)

    plt.xlabel(class_names[train_labels[i]])

plt.show()

 

model = keras.Sequential([

    keras.layers.Flatten(input_shape=(28, 28)),

    keras.layers.Dense(128, activation='relu'),

    keras.layers.Dense(10, activation='softmax')

])

 

model.compile(optimizer='adam',

              loss='sparse_categorical_crossentropy',

              metrics=['accuracy'])

 

model.fit(train_images, train_labels, epochs=5)

 

test_loss, test_acc = model.evaluate(test_images,  test_labels, verbose=2)

 

print('\n테스트 정확도:', test_acc)

 

predictions = model.predict(test_images)

 

predictions[0]

 

np.argmax(predictions[0])

 

test_labels[0]

 

def plot_image(i, predictions_array, true_label, img):

  predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]

  plt.grid(False)

  plt.xticks([])

  plt.yticks([])

 

  plt.imshow(img, cmap=plt.cm.binary)

 

  predicted_label = np.argmax(predictions_array)

  if predicted_label == true_label:

    color = 'blue'

  else:

    color = 'red'

 

  plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],

                                100*np.max(predictions_array),

                                class_names[true_label]),

                                color=color)

 

def plot_value_array(i, predictions_array, true_label):

  predictions_array, true_label = predictions_array[i], true_label[i]

  plt.grid(False)

  plt.xticks([])

  plt.yticks([])

  thisplot = plt.bar(range(10), predictions_array, color="#777777")

  plt.ylim([0, 1])

  predicted_label = np.argmax(predictions_array)

 

  thisplot[predicted_label].set_color('red')

  thisplot[true_label].set_color('blue')

 

i = 0

plt.figure(figsize=(6,3))

plt.subplot(1,2,1)

plot_image(i, predictions, test_labels, test_images)

plt.subplot(1,2,2)

plot_value_array(i, predictions,  test_labels)

plt.show()

 

i = 12

plt.figure(figsize=(6,3))

plt.subplot(1,2,1)

plot_image(i, predictions, test_labels, test_images)

plt.subplot(1,2,2)

plot_value_array(i, predictions,  test_labels)

plt.show()

 

# Plot the first X test images, their predicted labels, and the true labels

# Color correct predictions in blue and incorrect predictions in red

num_rows = 5

num_cols = 3

num_images = num_rows*num_cols

plt.figure(figsize=(2*2*num_cols, 2*num_rows))

for i in range(num_images):

  plt.subplot(num_rows, 2*num_cols, 2*i+1)

  plot_image(i, predictions, test_labels, test_images)

  plt.subplot(num_rows, 2*num_cols, 2*i+2)

  plot_value_array(i, predictions, test_labels)

plt.show()

 

# Grab an image from the test set

img = test_images[0]

 

print(img.shape)

 

# Add the image to a batch where it's the only member

img = (np.expand_dims(img,0))

 

print(img.shape)

 

predictions_single = model.predict(img)

 

print(predictions_single)

 

plot_value_array(0, predictions_single, test_labels)

_ = plt.xticks(range(10), class_names, rotation=45)

 

np.argmax(predictions_single[0])

 

 

#

# Copyright (c) 2017 François Chollet

#

# Permission is hereby granted, free of charge, to any person obtaining a

# copy of this software and associated documentation files (the "Software"),

# to deal in the Software without restriction, including without limitation

# the rights to use, copy, modify, merge, publish, distribute, sublicense,

# and/or sell copies of the Software, and to permit persons to whom the

# Software is furnished to do so, subject to the following conditions:

#

# The above copyright notice and this permission notice shall be included in

# all copies or substantial portions of the Software.

#

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR

# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,

# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL

# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER

# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING

# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER

# DEALINGS IN THE SOFTWARE.

 

 

 

 

 

 

 

 

테스트 정확도: 0.8712000250816345

(28, 28)

(1, 28, 28)

[[1.18709238e-08 2.98055518e-08 5.65596814e-09 2.69023825e-08

  2.10091784e-08 4.35429800e-04 2.36874556e-07 1.28662065e-02

  5.69078793e-06 9.86692369e-01]]

 

 

import tensorflow as tf

 

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D

from tensorflow.keras.preprocessing.image import ImageDataGenerator

 

import os

import numpy as np

import matplotlib.pyplot as plt

import pydicom

import glob

 

os.environ['CUDA_VISIBLE_DEVICES'] = "0"
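# NOTE: CUDA_VISIBLE_DEVICES="0" hides every GPU except device 0 from the CUDA
# runtime, so TensorFlow will only ever see that card; setting it to "" would
# force CPU-only execution.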

 

tf.debugging.set_log_device_placement(True)

 

try:

  # Deliberately specify an invalid GPU device

  with tf.device('/device:GPU:2'):

    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

    c = tf.matmul(a, b)

except RuntimeError as e:

  print(e)
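# NOTE: instead of catching the RuntimeError above, TensorFlow can be told to
# fall back to an available device automatically when the requested one does
# not exist (optional alternative, not used in this script):
# tf.config.set_soft_device_placement(True)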

 

#from pydicom.data import get_testdata_files

 

#_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'

 

#path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)

 

#PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')

 

train_dir = os.path.join(os.path.curdir, './train')

validation_dir = os.path.join(os.path.curdir, './validation')

 

train_cats_dir = os.path.join(train_dir, 'cats')  # directory with our training cat pictures

train_dogs_dir = os.path.join(train_dir, 'dogs')  # directory with our training dog pictures

validation_cats_dir = os.path.join(validation_dir, 'cats')  # directory with our validation cat pictures

validation_dogs_dir = os.path.join(validation_dir, 'dogs')  # directory with our validation dog pictures

 

num_cats_tr = len(os.listdir(train_cats_dir))

num_dogs_tr = len(os.listdir(train_dogs_dir))

 

num_cats_val = len(os.listdir(validation_cats_dir))

num_dogs_val = len(os.listdir(validation_dogs_dir))

 

total_train = num_cats_tr + num_dogs_tr

total_val = num_cats_val + num_dogs_val

 

print('total training cat images:', num_cats_tr)

print('total training dog images:', num_dogs_tr)

 

print('total validation cat images:', num_cats_val)

print('total validation dog images:', num_dogs_val)

print("--")

print("Total training images:", total_train)

print("Total validation images:", total_val)

 

batch_size = 128

epochs = 15

IMG_HEIGHT = 150

IMG_WIDTH = 150

 

train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data

validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data

 

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,

                                                           directory=train_dir,

                                                           shuffle=True,

                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),

                                                           class_mode='binary')
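# NOTE: flow_from_directory only picks up standard image files (png/jpg/bmp/
# tif, etc.), so the DICOM .dcm files are loaded separately below with pydicom.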

def transform_to_hu(medical_image, image):

    intercept = medical_image.RescaleIntercept

    slope = medical_image.RescaleSlope

    hu_image = image * slope + intercept

 

    return hu_image

 

def window_image(image, window_center, window_width):

    img_min = window_center - window_width // 2

    img_max = window_center + window_width // 2

    window_image = image.copy()

    window_image[window_image < img_min] = img_min

    window_image[window_image > img_max] = img_max

 

    return window_image

 

def load_image(file_path):

    medical_image = pydicom.read_file(file_path)

    image = medical_image.pixel_array

 

    hu_image = transform_to_hu(medical_image, image)

    brain_image = window_image(hu_image, 40, 80)

    return brain_image
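# A small helper (illustrative sketch, not part of the original tutorial): the
# windowed HU image is a single-channel float array with values in
# [center - width/2, center + width/2], so before feeding it to an RGB image
# model it would still need to be rescaled to [0, 1] and given 3 channels.
def normalize_windowed_image(window_img):
    img = window_img.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # rescale to [0, 1]
    return np.stack([img, img, img], axis=-1)                 # fake RGB channels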

 

#files = sorted(glob.glob('qb02/*.dcm'))

files2 = load_image('./train/qb02/1-01.dcm')

#images = np.array([load_image(path) for path in files])

 

#plt.imshow([load_image(path) for path in files])

plt.imshow(files2)

#train_data_gen = train_image_generator.flow(images, images, batch_size=9)

 

val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,

                                                              directory=validation_dir,

                                                              target_size=(IMG_HEIGHT, IMG_WIDTH),

                                                              class_mode='binary')

 

sample_training_images, _ = next(train_data_gen)

 

# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.

def plotImages(images_arr):

    fig, axes = plt.subplots(1, 5, figsize=(20,20))

    axes = axes.flatten()

    for img, ax in zip( images_arr, axes):

        ax.imshow(img)

        ax.axis('off')

    plt.tight_layout()

    plt.show()

 

plotImages(sample_training_images[:5])

 

model = Sequential([

    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),

    MaxPooling2D(),

    Conv2D(32, 3, padding='same', activation='relu'),

    MaxPooling2D(),

    Conv2D(64, 3, padding='same', activation='relu'),

    MaxPooling2D(),

    Flatten(),

    Dense(512, activation='relu'),

    Dense(1)

])

 

model.compile(optimizer='adam',

              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),

              metrics=['accuracy'])

 

model.summary()

 

history = model.fit_generator(

    train_data_gen,

    steps_per_epoch=total_train // batch_size,

    epochs=epochs,

    validation_data=val_data_gen,

    validation_steps=total_val // batch_size

)

 

acc = history.history['accuracy']

val_acc = history.history['val_accuracy']

 

loss=history.history['loss']

val_loss=history.history['val_loss']

 

epochs_range = range(epochs)

 

plt.figure(figsize=(8, 8))

plt.subplot(1, 2, 1)

plt.plot(epochs_range, acc, label='Training Accuracy')

plt.plot(epochs_range, val_acc, label='Validation Accuracy')

plt.legend(loc='lower right')

plt.title('Training and Validation Accuracy')

 

plt.subplot(1, 2, 2)

plt.plot(epochs_range, loss, label='Training Loss')

plt.plot(epochs_range, val_loss, label='Validation Loss')

plt.legend(loc='upper right')

plt.title('Training and Validation Loss')

plt.show()

 

image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)

#image_gen.flow_from_dataframe()

#dcm = pydicom.dcmread(train_dir)

train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,

                                               directory=train_dir,

                                               shuffle=True,

                                               target_size=(IMG_HEIGHT, IMG_WIDTH))

 

augmented_images = [train_data_gen[0][0][0] for i in range(5)]

 

# Re-use the same custom plotting function defined and used

# above to visualize the training images

plotImages(augmented_images)

 

image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)

 

train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,

                                               directory=train_dir,

                                               shuffle=True,

                                               target_size=(IMG_HEIGHT, IMG_WIDTH))

 

augmented_images = [train_data_gen[0][0][0] for i in range(5)]

 

plotImages(augmented_images)

 

# zoom_range from 0 - 1 where 1 = 100%.

image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5) #

 

train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,

                                               directory=train_dir,

                                               shuffle=True,

                                               target_size=(IMG_HEIGHT, IMG_WIDTH))

 

augmented_images = [train_data_gen[0][0][0] for i in range(5)]

 

plotImages(augmented_images)

 

image_gen_train = ImageDataGenerator(

                    rescale=1./255,

                    rotation_range=45,

                    width_shift_range=.15,

                    height_shift_range=.15,

                    horizontal_flip=True,

                    zoom_range=0.5

                    )

 

train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,

                                                     directory=train_dir,

                                                     shuffle=True,

                                                     target_size=(IMG_HEIGHT, IMG_WIDTH),

                                                     class_mode='binary')

 

augmented_images = [train_data_gen[0][0][0] for i in range(5)]

plotImages(augmented_images)

 

image_gen_val = ImageDataGenerator(rescale=1./255)

 

val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,

                                                 directory=validation_dir,

                                                 target_size=(IMG_HEIGHT, IMG_WIDTH),

                                                 class_mode='binary')

 

model_new = Sequential([

    Conv2D(16, 3, padding='same', activation='relu',

           input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),

    MaxPooling2D(),

    Dropout(0.2),

    Conv2D(32, 3, padding='same', activation='relu'),

    MaxPooling2D(),

    Conv2D(64, 3, padding='same', activation='relu'),

    MaxPooling2D(),

    Dropout(0.2),

    Flatten(),

    Dense(512, activation='relu'),

    Dense(1)

])

 

model_new.compile(optimizer='adam',

                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),

                  metrics=['accuracy'])

 

model_new.summary()

 

history = model_new.fit_generator(

    train_data_gen,

    steps_per_epoch=total_train // batch_size,

    epochs=epochs,

    validation_data=val_data_gen,

    validation_steps=total_val // batch_size

)

 

acc = history.history['accuracy']

val_acc = history.history['val_accuracy']

 

loss = history.history['loss']

val_loss = history.history['val_loss']

 

epochs_range = range(epochs)

 

plt.figure(figsize=(8, 8))

plt.subplot(1, 2, 1)

plt.plot(epochs_range, acc, label='Training Accuracy')

plt.plot(epochs_range, val_acc, label='Validation Accuracy')

plt.legend(loc='lower right')

plt.title('Training and Validation Accuracy')

 

plt.subplot(1, 2, 2)

plt.plot(epochs_range, loss, label='Training Loss')

plt.plot(epochs_range, val_loss, label='Validation Loss')

plt.legend(loc='upper right')

plt.title('Training and Validation Loss')

plt.show()

 

_________________________________________________________________

WARNING:tensorflow:From O:/PycharmProjects/catdogtf2.2/001.py:133: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.

Instructions for updating:

Please use Model.fit, which supports generators.
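The warning just means fit_generator is now a thin deprecated wrapper; in TF 2.x, Model.fit accepts the generators directly. A sketch of the equivalent call, reusing the variable names from the script above:

history = model.fit(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)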

 

 

 

failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected

 

D:\Program Files\NVIDIA Corporation\nvsmi

 

https://www.nvidia.co.kr/Download/index.aspx?lang=kr

 

d:\nvidia

 

Failed to initialize NVML: Unknown Error

 

 

 

D:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe

Failed to initialize NVML: Unknown Error

 

D:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe

Mon Aug 10 18:56:04 2020

+-----------------------------------------------------------------------------+

| NVIDIA-SMI 388.13                 Driver Version: 451.67                    |

|-------------------------------+----------------------+----------------------+

| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |

| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |

|===============================+======================+======================|

|   0  GeForce RTX 208... WDDM  | 00000000:09:00.0  On |                  N/A |

| 16%   49C    P8     6W / 250W |    372MiB /  8192MiB |      5%      Default |

+-------------------------------+----------------------+----------------------+

 

+-----------------------------------------------------------------------------+

| Processes:                                                       GPU Memory |

|  GPU       PID   Type   Process name                             Usage      |

|=============================================================================|

Internal error

 

D:\Program Files\NVIDIA Corporation\NVSMI>
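After the driver reinstall, besides nvidia-smi, the quickest check that TensorFlow itself can see the card again is from Python (standard TF 2.x API):

import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))
# Expect something like: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]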

 

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

>>> runfile('O:/PycharmProjects/catdogtf2.2/001.py', wdir='O:/PycharmProjects/catdogtf2.2')

2020-08-10 18:58:07.273739: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-10 18:58:09.472382: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-08-10 18:58:09.522700: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-10 18:58:09.523231: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-10 18:58:09.630640: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-10 18:58:09.739451: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-10 18:58:09.767974: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-10 18:58:09.859263: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-10 18:58:09.901611: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-10 18:58:10.104479: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-10 18:58:10.104874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-10 18:58:10.105523: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2020-08-10 18:58:10.117841: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1ee580c2550 initialized for platform Host (this does not guarantee that XLA will be used). Devices:

2020-08-10 18:58:10.118253: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

2020-08-10 18:58:10.118581: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-08-10 18:58:10.119137: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-08-10 18:58:10.119473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-08-10 18:58:10.119767: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-08-10 18:58:10.120060: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-08-10 18:58:10.120364: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-08-10 18:58:10.120668: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-08-10 18:58:10.120965: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-08-10 18:58:10.121312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0

2020-08-10 18:58:10.979136: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-08-10 18:58:10.979373: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0

2020-08-10 18:58:10.979563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N

2020-08-10 18:58:10.979987: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6198 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-08-10 18:58:10.983679: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1ee20800930 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:

2020-08-10 18:58:10.984072: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 SUPER, Compute Capability 7.5

2020-08-10 18:58:10.986288: I tensorflow/core/common_runtime/eager/execute.cc:501] Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0

2020-08-10 18:58:10.986835: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

 

 

CPU usage went from 100%

down to around 30%.

For GPUs, memory is what matters...
There is a reason the RTX 8000 costs about 9 million won even though its CUDA performance score is the same.

The project has wrapped up, but I want to leave here some of the
fairly good public resources I came across, since pages like these so often disappear after a while...

 

딥러닝을활용한영상기반폐질환진단및유사증례검색SW개발.pdf
0.37MB

https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html

 

Training Custom Object Detector — TensorFlow 2 Object Detection API tutorial documentation

Before we begin training our model, let’s go and copy the TensorFlow/models/research/object_detection/model_main_tf2.py script and paste it straight into our training_demo folder. We will need this script in order to train our model. Now, to initiate a n

tensorflow-object-detection-api-tutorial.readthedocs.io

https://teachablemachine.withgoogle.com/train/image

 

Teachable Machine

Train a computer to recognize your own images, sounds, & poses. A fast, easy way to create machine learning models for your sites, apps, and more – no expertise or coding required.

teachablemachine.withgoogle.com

 

https://emaru.tistory.com/41

 

TensorFlow - (streaming) Object Detection API, Part 3

Hi, this is Maru~ Today I'm posting Part 3 of using Google's Object Detection API. Part 1 ran in a Jupyter notebook and Part 2 ran locally; Part 3 applies Object Detection to a local webcam stream…

emaru.tistory.com

 

https://ballentain.tistory.com/18

 

1.2. Downloading and extracting a pretrained model

Looking at object_detection_tutorial.ipynb, the overall flow is: download the tar.gz of the pretrained model you want to use, extract it, load the pretrained model's computation graph, and run the model…

ballentain.tistory.com

 

https://ukayzm.github.io/python-object-detection-tensorflow/

 

Python Object Detection with Tensorflow

Google releases many models implemented in TensorFlow under the Apache license. Among them, the object detection API is an open-source framework that makes it easy to build, train, and deploy models that detect objects in photos…

ukayzm.github.io

 

https://www.geeksforgeeks.org/object-detection-vs-object-recognition-vs-image-segmentation/

 

Object Detection vs Object Recognition vs Image Segmentation - GeeksforGeeks

A Computer Science portal for geeks. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions.

www.geeksforgeeks.org

 

https://musma.github.io/2019/02/15/tensorflow-on-windows.html

 

Installing the TensorFlow Object Detection API on Windows!!

Installing the TensorFlow Object Detection API on Windows!

musma.github.io

 

https://blog.naver.com/laonple/221327285813

 

4. TensorFlow - Implementing an MLP (multilayer perceptron)!

Hello, this is LAON PEOPLE Inc. This time we will implement an MLP using TensorFlow. MLP…

blog.naver.com

 

https://www.tensorflow.org/tutorials/images/classification

 

Image classification  |  TensorFlow Core  |  TensorFlow

This tutorial shows how to classify images of flowers. It builds an image classifier with a keras.Sequential model and loads the data using preprocessing.image_dataset_from_directory…

www.tensorflow.org

 

https://docs.aws.amazon.com/ko_kr/sagemaker/latest/dg/image-classification.html

 

Image Classification algorithm - Amazon SageMaker

To maintain better interoperability with existing deep learning frameworks, it differs from the protobuf data format commonly used by other Amazon SageMaker algorithms.

docs.aws.amazon.com

 

https://cloud.google.com/ai-platform/training/docs/algorithms/image-classification-start#console_2

 

Getting started with the built-in image classification algorithm  |  AI Platform Training  |  Google Cloud

Beta: this product or feature may change or have limited support before release; see the product launch stages for details. With AI Platform Training's built-in algorithms, you can train models…

cloud.google.com

 

AIaaS is definitely the trend...
I did build the classification part and put it to good use at the company. At the level of "look, I do this kind of thing, I'm pretty good, right? So please leave me alone"?

But I came to realize that without a doctor, this project is nonsense.

Even if you do something with deep learning rather than a purely supervised setup, a doctor is still needed for validation and evaluation.

What all that trial and error did give me: having only ever worked in private repos, I was reminded, for the first time since the FSF, of what open-source contributing feels like. My thinking shifted toward contributing company work to open source and then using it.

I also learned that there is more public medical data on the open internet than I expected (not just CT images, but medical knowledge in general),

and that even though it is out there, without a doctor you cannot weave that knowledge together.

I could pick this up again someday, but unless I join a medical company it seems impossible,

and while more hospitals will surely be built, they are not going to move into ordinary apartment blocks.

Of course, hospitals have their own building standards. From what someone told me, a famous Korean hospital was asked to open a branch in Southeast Asia and was even handed a building, but remodeling it to meet the standards would have cost so much

that they gave up. In a capitalist society the cost is naturally the worry...

The thing is, if the need for a hospital is converted purely into money, building hospitals in poor countries with no mineral resources will always be out of reach.

Globally, it is not just hospitals; right now there are plenty of children dying of food poisoning simply because the water is dirty.

Seen from that angle, the people who go out to barren places and try to create value there, hospital or not, suddenly seem all the more remarkable.

Anyway, that's how it went.

"""
====================
Read DICOM directory
====================

This example shows how to read DICOM directory.

"""

# authors : Guillaume Lemaitre <g.lemaitre58@gmail.com>
# license : MIT

from os.path import dirname, join
from pprint import pprint

import pydicom
from pydicom.data import get_testdata_file
from pydicom.filereader import read_dicomdir

# fetch the path to the test data
filepath = get_testdata_file('DICOMDIR')
print('Path to the DICOM directory: {}'.format(filepath))
# load the data
dicom_dir = read_dicomdir(filepath)
base_dir = dirname(filepath)

# go through the patient record and print information
for patient_record in dicom_dir.patient_records:
    if (hasattr(patient_record, 'PatientID') and
            hasattr(patient_record, 'PatientName')):
        print("Patient: {}: {}".format(patient_record.PatientID,
                                       patient_record.PatientName))
    studies = patient_record.children
    # go through each study
    for study in studies:
        print(" " * 4 + "Study {}: {}: {}".format(study.StudyID,
                                                  study.StudyDate,
                                                  study.StudyDescription))
        all_series = study.children
        # go through each series
        for series in all_series:
            image_count = len(series.children)
            plural = ('', 's')[image_count > 1]

            # Write basic series info and image count

            # Put N/A in if no Series Description
            if 'SeriesDescription' not in series:
                series.SeriesDescription = "N/A"
            print(" " * 8 + "Series {}: {}: {} ({} image{})".format(
                series.SeriesNumber, series.Modality, series.SeriesDescription,
                image_count, plural))

            # Open and read something from each image, for demonstration
            # purposes. For simple quick overview of DICOMDIR, leave the
            # following out
            print(" " * 12 + "Reading images...")
            image_records = series.children
            image_filenames = [join(base_dir, *image_rec.ReferencedFileID)
                               for image_rec in image_records]

            datasets = [pydicom.dcmread(image_filename)
                        for image_filename in image_filenames]

            patient_names = set(ds.PatientName for ds in datasets)
            patient_IDs = set(ds.PatientID for ds in datasets)

            # List the image filenames
            print("\n" + " " * 12 + "Image filenames:")
            print(" " * 12, end=' ')
            pprint(image_filenames, indent=12)

            # Expect all images to have same patient name, id
            # Show the set of all names, IDs found (should each have one)
            print(" " * 12 + "Patient Names in images..: {}".format(
                patient_names))
            print(" " * 12 + "Patient IDs in images..: {}".format(
                patient_IDs))

parisot1.pdf
1.30MB

 

Thanks to pydicom and all the open source around it, I could load the dcm images right away. The problem was that when I ran training, the accuracy curve barely climbed. So I opened the downloaded files in an online dcm viewer... and they were not all the same kind of X-ray. For a start, having no medical background and not being able to read an X-ray was itself a problem. One folder in the data could be one patient, or the same patient at different times, and each folder split into two parts, one of which looked like an MRI. Flipping through the images, a breast outline faintly shows up in a few slices at certain positions, which suggests the "X-ray" is actually made of multiple layers (hence my guess that it is an MRI).

The right move is to go back to the data source and read the description carefully. The X-rays in the lower folder (the ones with a faint black object on the left) are a complete mystery, so I figured they should be dropped from the training data. And if every patient's MRI really is the MRI of a cancer patient, then for the X-rays where the breast shape shows up best,
I needed to check whether the cancer is fully visible there. Only then could I group similar cases together and train them in one batch.

I have reached the stage where I need to ask a specialist. Even thinking about it with common sense, an X-ray is a single plane with only x and y, while cancer cells can presumably also grow along the z axis, so of course every X-ray slice has to be looked at. Having gotten this far, I started to think it might be better to first reconstruct a 3D volume from the 2D slices and then train on the cancer cells within that 3D structure.

No wonder IBM Watson's cancer detection rate is dropping and it is gradually being pushed out of hospitals these days. I have also heard that Asian and Western patient data differ, which is something else that would need verification.

The medical field really gives you a lot to think about. ;;

Fortunately I know a specialist who often calls me with technical questions, so I should be able to ask about the medical data. Looking at CTs of other body parts might also help verify my hypotheses, but asking an expert in person is best. I have three drinking engagements next week, though... so it will probably take about two weeks.

Cancer or not, it would be easier to start by building a classifier that automatically recognizes which body part an image shows.
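Following up on the idea above of assembling the 2D slices into a 3D volume before training, here is a minimal pydicom/numpy sketch. The folder path, the helper name, and the sort key are assumptions (ImagePositionPatient is usually the more robust sort key when present), and it assumes the series carries RescaleSlope/RescaleIntercept like the CT data above:

import glob

import numpy as np
import pydicom


def load_volume(series_dir):
    """Stack every .dcm slice in one series folder into a (slices, H, W) HU volume."""
    slices = [pydicom.dcmread(p) for p in glob.glob(series_dir + '/*.dcm')]
    # Sort slices along the scan axis; InstanceNumber is a simple fallback.
    slices.sort(key=lambda ds: int(ds.InstanceNumber))
    volume = np.stack([ds.pixel_array for ds in slices]).astype(np.float32)
    # Convert raw values to Hounsfield units with the rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept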

#!/usr/bin/env python
"""A very simple cat vs dog classifier.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys
import collections
import os
import datetime

import numpy as np
import PIL
from PIL import Image
# TF1-style graph/session code: run it through the compat layer under TF 2.x.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()
tf.debugging.set_log_device_placement(True)

TrainTestDataset = collections.namedtuple('TrainTestDataset', ['train', 'test'])


class Dataset():
  """Dataset with next_batch capability"""

  def __init__(self, images, labels):
    assert(len(labels) == len(images))
    self.n = len(labels)
    self.images = images
    self.labels = labels

  def next_batch(self, n=100):
    """Return a random subset of n images and labels"""
    n = min(self.n, n)
    if n == 0:
      return (np.zeros([0, 64*64]), np.zeros([0]))
    indices = np.random.choice(self.n, n, replace=False)
    return (self.images[indices], self.labels[indices])


def countPNGs(path):
  return len([name for name in os.listdir(path) if os.path.isfile(os.path.join(path, name)) and name.endswith('png')])


def load_cats_dogs(cat_dir, dog_dir, train_test_ratio=0.75):
  print("Loading %s and %s" % (cat_dir, dog_dir))
  cat_n = countPNGs(cat_dir)
  dog_n = countPNGs(dog_dir)
  print('  %s contains %d files' % (cat_dir, cat_n))
  print('  %s contains %d files' % (dog_dir, dog_n))

  # Labels are one-hot, 2 columns, first is cat second is dog
  labels = np.zeros([cat_n+dog_n, 2], dtype=np.float32)
  labels[:cat_n, 0] = 1
  labels[cat_n:, 1] = 1

  # Images are flattened n x 64*64 of the grayscale pixel values as float32s
  flat_images = np.zeros([cat_n+dog_n, 64*64], dtype=np.float32)

  # Load images into flat_images
  # Add cats
  for i in range(cat_n):
    filepath = os.path.join(cat_dir, "%04d.png" % i)
    flat_images[i, :] = np.array(Image.open(filepath)).flatten()

  # Add dogs
  for i in range(dog_n):
    filepath = os.path.join(dog_dir, "%04d.png" % i)
    flat_images[cat_n + i, :] = np.array(Image.open(filepath)).flatten()

  return splitIntoTrainingAndTestDatasets(flat_images, labels, train_test_ratio)


def splitIntoTrainingAndTestDatasets(flat_images, labels, ratio=0.7):
  n = len(labels)
  cutoff = int(n*ratio)
  if cutoff == 0 or cutoff == n:
    raise Exception('Not enough data to split into training/test')

  # First shuffle images and labels
  new_order = np.random.permutation(np.arange(n))
  flat_images = flat_images[new_order]
  labels = labels[new_order]

  training = Dataset(flat_images[:cutoff], labels[:cutoff])
  test = Dataset(flat_images[cutoff:], labels[cutoff:])
  return TrainTestDataset(train=training, test=test)


FLAGS = None


def weight_variable(shape):
  """Create a weight variable with appropriate initialization."""
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)


def bias_variable(shape):
  """Create a bias variable with appropriate initialization."""
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)


def variable_summaries(var):
  """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
  with tf.name_scope('summaries'):
    mean = tf.reduce_mean(var)
    tf.summary.scalar('mean', mean)
    with tf.name_scope('stddev'):
      stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
    tf.summary.scalar('stddev', stddev)
    tf.summary.scalar('max', tf.reduce_max(var))
    tf.summary.scalar('min', tf.reduce_min(var))
    tf.summary.histogram('histogram', var)


def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
  """Reusable code for making a simple neural net layer.

  It does a matrix multiply, bias add, and then uses relu to nonlinearize.
  It also sets up name scoping so that the resultant graph is easy to read,
  and adds a number of summary ops.
  """
  # Adding a name scope ensures logical grouping of the layers in the graph.
  with tf.name_scope(layer_name):
    # This Variable will hold the state of the weights for the layer
    with tf.name_scope('weights'):
      weights = weight_variable([input_dim, output_dim])
      variable_summaries(weights)
    with tf.name_scope('biases'):
      biases = bias_variable([output_dim])
      variable_summaries(biases)
    with tf.name_scope('Wx_plus_b'):
      preactivate = tf.matmul(input_tensor, weights) + biases
      tf.summary.histogram('pre_activations', preactivate)
    activations = act(preactivate, name='activation')
    tf.summary.histogram('activations', activations)
    return activations


def main(_):
  # Import data
  cat_dog_dataset = load_cats_dogs(FLAGS.cat_dir, FLAGS.dog_dir)
  print("Contains %d Training samples" % cat_dog_dataset.train.n)
  print("Contains %d Test samples" % cat_dog_dataset.test.n)

  sess = tf.InteractiveSession()

  # Create the model
  with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 64*64], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, 2], name='y-input')

  with tf.name_scope('input_reshape'):
    image_shaped_input = tf.reshape(x, [-1, 64, 64, 1])
    tf.summary.image('input', image_shaped_input, 10)

  hidden1 = nn_layer(x, 64*64, 20, 'layer1')

  with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    tf.summary.scalar('dropout_keep_probability', keep_prob)
    dropped = tf.nn.dropout(hidden1, keep_prob)

  # Do not apply softmax activation yet, see below.
  y = nn_layer(dropped, 20, 2, 'layer2', act=tf.identity)

  with tf.name_scope('cross_entropy'):
    diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
    with tf.name_scope('total'):
      cross_entropy = tf.reduce_mean(diff)
  tf.summary.scalar('cross_entropy', cross_entropy)

  with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
        cross_entropy)

  with tf.name_scope('accuracy'):
    with tf.name_scope('correct_prediction'):
      correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    with tf.name_scope('accuracy'):
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  tf.summary.scalar('accuracy', accuracy)

  # Merge all the summaries and write them out to FLAGS.log_dir
  merged = tf.summary.merge_all()
  train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)
  test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')
  tf.global_variables_initializer().run()

  # Train the model, and also write summaries.
  # Every 10th step, measure test-set accuracy, and write test summaries
  # All other steps, run train_step on training data, & add training summaries

  def feed_dict(train):
    """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
    if train:
      xs, ys = cat_dog_dataset.train.next_batch(100)
      k = FLAGS.dropout
    else:
      xs, ys = cat_dog_dataset.test.images, cat_dog_dataset.test.labels
      k = 1.0
    return {x: xs, y_: ys, keep_prob: k}

  for i in range(FLAGS.max_steps):
    if i % 10 == 0:  # Record summaries and test-set accuracy
      summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
      test_writer.add_summary(summary, i)
      print('Accuracy at step %s: %s' % (i, acc))
    else:  # Record train set summaries, and train
      if i % 100 == 99:  # Record execution stats
        run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
        run_metadata = tf.RunMetadata()
        summary, _ = sess.run([merged, train_step],
                              feed_dict=feed_dict(True),
                              options=run_options,
                              run_metadata=run_metadata)
        train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
        train_writer.add_summary(summary, i)
        print('Adding run metadata for', i)
      else:  # Record a summary
        summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
        train_writer.add_summary(summary, i)
  train_writer.close()
  test_writer.close()


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--cat_dir', type=str, default='images/cats',
                      help='Directory for storing input cat images')
  parser.add_argument('--dog_dir', type=str, default='images/dogs',
                      help='Directory for storing input dog images')
  parser.add_argument('--max_steps', type=int, default=1000,
                      help='Number of steps to run trainer.')
  parser.add_argument('--learning_rate', type=float, default=0.001,
                      help='Initial learning rate')
  parser.add_argument('--dropout', type=float, default=0.9,
                      help='Keep probability for training dropout.')
  output_dir = '/tmp/tensorflow/catdog/' + datetime.datetime.now().strftime("%y_%m_%d_%H_%M_%S") + '/'
  print('Default log output dir: %s' % output_dir)
  parser.add_argument('--log_dir', type=str, default=output_dir,
                      help='Summaries log directory')
  FLAGS, unparsed = parser.parse_known_args()

  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
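As written, the script expects 64x64 grayscale PNGs named 0000.png, 0001.png, ... under images/cats and images/dogs (overridable with --cat_dir and --dog_dir), and it writes TensorBoard summaries under the timestamped --log_dir default of /tmp/tensorflow/catdog/, so training can be watched by pointing TensorBoard at that directory.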

 

 

 

=-0=-=0=-0-=0-=0=-0-=0-=0-=0=-0-=0-=0=-0=-0-=0-=0=-0=-0=0-0=0-=0=-

import tensorflow as tf

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator

import os
import numpy as np
import matplotlib.pyplot as plt

#_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'

#path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)

#PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')

train_dir = os.path.join(os.path.curdir, './train')
validation_dir = os.path.join(os.path.curdir, './validation')

train_cats_dir = os.path.join(train_dir, 'cats')  # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')  # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')  # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')  # directory with our validation dog pictures

num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))

num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))

total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val

print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)

print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)

batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150

train_image_generator = ImageDataGenerator(rescale=1./255)  # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255)  # Generator for our validation data

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='binary')

val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                              directory=validation_dir,
                                                              target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                              class_mode='binary')

sample_training_images, _ = next(train_data_gen)

# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
    fig, axes = plt.subplots(1, 5, figsize=(20, 20))
    axes = axes.flatten()
    for img, ax in zip(images_arr, axes):
        ax.imshow(img)
        ax.axis('off')
    plt.tight_layout()
    plt.show()

plotImages(sample_training_images[:5])

model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.summary()

history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)

train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_HEIGHT, IMG_WIDTH))

augmented_images = [train_data_gen[0][0][0] for i in range(5)]

# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)

image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)

train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_HEIGHT, IMG_WIDTH))

augmented_images = [train_data_gen[0][0][0] for i in range(5)]

plotImages(augmented_images)

# zoom_range from 0 - 1 where 1 = 100%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)

train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_HEIGHT, IMG_WIDTH))

augmented_images = [train_data_gen[0][0][0] for i in range(5)]

plotImages(augmented_images)

image_gen_train = ImageDataGenerator(
    rescale=1./255,
    rotation_range=45,
    width_shift_range=.15,
    height_shift_range=.15,
    horizontal_flip=True,
    zoom_range=0.5
)

train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
                                                     directory=train_dir,
                                                     shuffle=True,
                                                     target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                     class_mode='binary')

augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

image_gen_val = ImageDataGenerator(rescale=1./255)

val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
                                                 directory=validation_dir,
                                                 target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                 class_mode='binary')

model_new = Sequential([
    Conv2D(16, 3, padding='same', activation='relu',
           input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Dropout(0.2),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Dropout(0.2),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])

model_new.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])

model_new.summary()

history = model_new.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

(base) O:\yolov3-keras-tf2>conda install -c conda-forge opencv

Collecting package metadata (current_repodata.json): done

Solving environment: done

 

## Package Plan ##

 

  environment location: J:\Anaconda3

 

  added / updated specs:

    - opencv

 

 

The following packages will be downloaded:

 

    package                    |            build

    ---------------------------|-----------------

    ca-certificates-2020.6.20  |       hecda079_0         184 KB  conda-forge

    certifi-2020.6.20          |   py37hc8dfbb8_0         151 KB  conda-forge

    icu-64.2                   |       he025d50_1        14.1 MB  conda-forge

    jpeg-9d                    |       he774522_0         344 KB  conda-forge

    libblas-3.8.0              |           16_mkl         3.6 MB  conda-forge

    libcblas-3.8.0             |           16_mkl         3.6 MB  conda-forge

    libclang-9.0.1             |default_hf44288c_0        20.8 MB  conda-forge

    liblapack-3.8.0            |           16_mkl         3.6 MB  conda-forge

    liblapacke-3.8.0           |           16_mkl         3.6 MB  conda-forge

    libopencv-4.3.0            |           py37_1        45.2 MB  conda-forge

    libwebp-base-1.1.0         |       hfa6e2cd_3         356 KB  conda-forge

    opencv-4.3.0               |           py37_1          20 KB  conda-forge

    openssl-1.1.1g             |       he774522_0         5.7 MB  conda-forge

    py-opencv-4.3.0            |   py37h43977f1_1          22 KB  conda-forge

    pyqt-5.12.3                |   py37h1834ac0_3         4.8 MB  conda-forge

    python_abi-3.7             |          1_cp37m           4 KB  conda-forge

    qt-5.12.5                  |       h7ef1ec2_0       104.4 MB  conda-forge

    ------------------------------------------------------------

                                           Total:       210.4 MB

 

The following NEW packages will be INSTALLED:

 

  libblas            conda-forge/win-64::libblas-3.8.0-16_mkl

  libcblas           conda-forge/win-64::libcblas-3.8.0-16_mkl

  libclang           conda-forge/win-64::libclang-9.0.1-default_hf44288c_0

  liblapack          conda-forge/win-64::liblapack-3.8.0-16_mkl

  liblapacke         conda-forge/win-64::liblapacke-3.8.0-16_mkl

  libopencv          conda-forge/win-64::libopencv-4.3.0-py37_1

  libwebp-base       conda-forge/win-64::libwebp-base-1.1.0-hfa6e2cd_3

  opencv             conda-forge/win-64::opencv-4.3.0-py37_1

  py-opencv          conda-forge/win-64::py-opencv-4.3.0-py37h43977f1_1

  python_abi         conda-forge/win-64::python_abi-3.7-1_cp37m

 

The following packages will be UPDATED:

 

  conda                       pkgs/main::conda-4.8.3-py37_0 --> conda-forge::conda-4.8.3-py37hc8dfbb8_1

  icu                        pkgs/main::icu-58.2-ha925a31_3 --> conda-forge::icu-64.2-he025d50_1

  jpeg                        pkgs/main::jpeg-9b-hb83a4c4_2 --> conda-forge::jpeg-9d-he774522_0

  pyqt                 pkgs/main::pyqt-5.9.2-py37h6538335_2 --> conda-forge::pyqt-5.12.3-py37h1834ac0_3

  qt                     pkgs/main::qt-5.9.7-vc14h73c81de_0 --> conda-forge::qt-5.12.5-h7ef1ec2_0

 

The following packages will be SUPERSEDED by a higher-priority channel:

 

  ca-certificates    pkgs/main::ca-certificates-2020.6.24-0 --> conda-forge::ca-certificates-2020.6.20-hecda079_0

  certifi               pkgs/main::certifi-2020.6.20-py37_0 --> conda-forge::certifi-2020.6.20-py37hc8dfbb8_0

  openssl                                         pkgs/main --> conda-forge

 

 

Proceed ([y]/n)?

 

 

Downloading and Extracting Packages

liblapacke-3.8.0     | 3.6 MB    | ############################################################################################################################################################################################ | 100%

pyqt-5.12.3          | 4.8 MB    | ############################################################################################################################################################################################ | 100%

libopencv-4.3.0      | 45.2 MB   | ############################################################################################################################################################################################ | 100%

libclang-9.0.1       | 20.8 MB   | ############################################################################################################################################################################################ | 100%

qt-5.12.5            | 104.4 MB  | ############################################################################################################################################################################################ | 100%

libcblas-3.8.0       | 3.6 MB    | ############################################################################################################################################################################################ | 100%

ca-certificates-2020 | 184 KB    | ############################################################################################################################################################################################ | 100%

libblas-3.8.0        | 3.6 MB    | ############################################################################################################################################################################################ | 100%

libwebp-base-1.1.0   | 356 KB    | ############################################################################################################################################################################################ | 100%

certifi-2020.6.20    | 151 KB    | ############################################################################################################################################################################################ | 100%

liblapack-3.8.0      | 3.6 MB    | ############################################################################################################################################################################################ | 100%

py-opencv-4.3.0      | 22 KB     | ############################################################################################################################################################################################ | 100%

icu-64.2             | 14.1 MB   | ############################################################################################################################################################################################ | 100%

opencv-4.3.0         | 20 KB     | ############################################################################################################################################################################################ | 100%

python_abi-3.7       | 4 KB      | ############################################################################################################################################################################################ | 100%

openssl-1.1.1g       | 5.7 MB    | ############################################################################################################################################################################################ | 100%

jpeg-9d              | 344 KB    | ############################################################################################################################################################################################ | 100%

Preparing transaction: done

Verifying transaction: done

Executing transaction: \

done

 

 

 

 

pip install tensorflow-gpu==2.2

Collecting tensorflow-gpu==2.2

  Using cached tensorflow_gpu-2.2.0-cp37-cp37m-win_amd64.whl (460.4 MB)

Requirement already satisfied: keras-preprocessing>=1.1.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.1.0)

Requirement already satisfied: protobuf>=3.8.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (3.12.3)

Requirement already satisfied: opt-einsum>=2.3.2 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (3.1.0)

Collecting tensorflow-gpu-estimator<2.3.0,>=2.2.0

  Using cached tensorflow_gpu_estimator-2.2.0-py2.py3-none-any.whl (470 kB)

Requirement already satisfied: termcolor>=1.1.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.1.0)

Collecting gast==0.3.3

  Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)

Requirement already satisfied: wrapt>=1.11.1 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.12.1)

Requirement already satisfied: six>=1.12.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.15.0)

Collecting scipy==1.4.1; python_version >= "3"

  Using cached scipy-1.4.1-cp37-cp37m-win_amd64.whl (30.9 MB)

Requirement already satisfied: tensorboard<2.3.0,>=2.2.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (2.2.1)

Requirement already satisfied: wheel>=0.26; python_version >= "3" in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (0.34.2)

Requirement already satisfied: absl-py>=0.7.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (0.9.0)

Requirement already satisfied: numpy<2.0,>=1.16.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.18.5)

Requirement already satisfied: h5py<2.11.0,>=2.10.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (2.10.0)

Requirement already satisfied: grpcio>=1.8.6 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.27.2)

Collecting astunparse==1.6.3

  Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)

Requirement already satisfied: google-pasta>=0.1.8 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (0.2.0)

Requirement already satisfied: setuptools in j:\anaconda3\lib\site-packages (from protobuf>=3.8.0->tensorflow-gpu==2.2) (47.3.1.post20200622)

Requirement already satisfied: google-auth<2,>=1.6.3 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (1.14.1)

Requirement already satisfied: markdown>=2.6.8 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (3.1.1)

Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (1.6.0)

Requirement already satisfied: requests<3,>=2.21.0 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (2.24.0)

Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (0.4.1)

Requirement already satisfied: werkzeug>=0.11.15 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (0.16.0)

Requirement already satisfied: cachetools<5.0,>=2.0.0 in j:\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (4.1.0)

Requirement already satisfied: pyasn1-modules>=0.2.1 in j:\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (0.2.7)

Requirement already satisfied: rsa<4.1,>=3.1.4 in j:\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (4.0)

Requirement already satisfied: certifi>=2017.4.17 in j:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (2020.6.20)

Requirement already satisfied: chardet<4,>=3.0.2 in j:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (3.0.4)

Requirement already satisfied: idna<3,>=2.5 in j:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (2.10)

Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in j:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (1.25.9)

Requirement already satisfied: requests-oauthlib>=0.7.0 in j:\anaconda3\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (1.3.0)

Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in j:\anaconda3\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (0.4.8)

Requirement already satisfied: oauthlib>=3.0.0 in j:\anaconda3\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (3.1.0)

ERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.

ERROR: tensorflow 2.1.0 has requirement tensorboard<2.2.0,>=2.1.0, but you'll have tensorboard 2.2.1 which is incompatible.

Installing collected packages: tensorflow-gpu-estimator, gast, scipy, astunparse, tensorflow-gpu

  Attempting uninstall: gast

    Found existing installation: gast 0.2.2

    Uninstalling gast-0.2.2:

      Successfully uninstalled gast-0.2.2

  Attempting uninstall: scipy

    Found existing installation: scipy 1.5.0

    Uninstalling scipy-1.5.0:

      Successfully uninstalled scipy-1.5.0

ERROR: Could not install packages due to an EnvironmentError: [WinError 5] 액세스가 거부되었습니다: 'j:\\anaconda3\\lib\\site-packages\\~cipy\\fft\\_pocketfft\\pypocketfft.cp37-win_amd64.pyd'

Consider using the `--user` option or check the permissions.
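That WinError 5 is a Windows permission/file-lock failure: it looks like pip renamed the old scipy folder to ~cipy during uninstall and then could not delete a locked .pyd inside it. One possible cleanup, assuming no open Python console still has scipy imported and the prompt is run as administrator (the path is the one from the log above):

rem assumption: elevated Anaconda Prompt, no Python process holding scipy open
rmdir /s /q j:\anaconda3\lib\site-packages\~cipy
pip install tensorflow-gpu==2.2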

 

 

J:\Anaconda3\python.exe O:\PyCharm\plugins\python\helpers\pydev\pydevconsole.py --mode=client --port=55041

import sys; print('Python %s on %s' % (sys.version, sys.platform))

sys.path.extend(['O:\\PycharmProjects\\test001', 'O:/PycharmProjects/test001'])

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)]

Type 'copyright', 'credits' or 'license' for more information

IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help.

PyDev console: using IPython 7.16.1

Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] on win32

pip install tensorflow-gpu==2.2

Collecting tensorflow-gpu==2.2

  Using cached tensorflow_gpu-2.2.0-cp37-cp37m-win_amd64.whl (460.4 MB)

Requirement already satisfied: wheel>=0.26; python_version >= "3" in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (0.34.2)

Requirement already satisfied: gast==0.3.3 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (0.3.3)

Requirement already satisfied: wrapt>=1.11.1 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.12.1)

Requirement already satisfied: numpy<2.0,>=1.16.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.18.5)

Requirement already satisfied: protobuf>=3.8.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (3.12.3)

Requirement already satisfied: tensorflow-gpu-estimator<2.3.0,>=2.2.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (2.2.0)

Requirement already satisfied: keras-preprocessing>=1.1.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.1.0)

Collecting astunparse==1.6.3

  Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)

Requirement already satisfied: six>=1.12.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.15.0)

Requirement already satisfied: termcolor>=1.1.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.1.0)

Requirement already satisfied: h5py<2.11.0,>=2.10.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (2.10.0)

Requirement already satisfied: tensorboard<2.3.0,>=2.2.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (2.2.1)

Requirement already satisfied: scipy==1.4.1; python_version >= "3" in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.4.1)

Requirement already satisfied: opt-einsum>=2.3.2 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (3.1.0)

Requirement already satisfied: absl-py>=0.7.0 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (0.9.0)

Requirement already satisfied: google-pasta>=0.1.8 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (0.2.0)

Requirement already satisfied: grpcio>=1.8.6 in j:\anaconda3\lib\site-packages (from tensorflow-gpu==2.2) (1.27.2)

Requirement already satisfied: setuptools in j:\anaconda3\lib\site-packages (from protobuf>=3.8.0->tensorflow-gpu==2.2) (47.3.1.post20200622)

Requirement already satisfied: requests<3,>=2.21.0 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (2.24.0)

Requirement already satisfied: google-auth<2,>=1.6.3 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (1.14.1)

Requirement already satisfied: werkzeug>=0.11.15 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (0.16.0)

Requirement already satisfied: markdown>=2.6.8 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (3.1.1)

Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (0.4.1)

Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in j:\anaconda3\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (1.6.0)

Requirement already satisfied: idna<3,>=2.5 in j:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (2.10)

Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in j:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (1.25.9)

Requirement already satisfied: chardet<4,>=3.0.2 in j:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (3.0.4)

Requirement already satisfied: certifi>=2017.4.17 in j:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (2020.6.20)

Requirement already satisfied: rsa<4.1,>=3.1.4 in j:\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (4.0)

Requirement already satisfied: pyasn1-modules>=0.2.1 in j:\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (0.2.7)

Requirement already satisfied: cachetools<5.0,>=2.0.0 in j:\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (4.1.0)

Requirement already satisfied: requests-oauthlib>=0.7.0 in j:\anaconda3\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (1.3.0)

Requirement already satisfied: pyasn1>=0.1.3 in j:\anaconda3\lib\site-packages (from rsa<4.1,>=3.1.4->google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (0.4.8)

Requirement already satisfied: oauthlib>=3.0.0 in j:\anaconda3\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow-gpu==2.2) (3.1.0)

Installing collected packages: astunparse, tensorflow-gpu

Successfully installed astunparse-1.6.3 tensorflow-gpu-2.2.0

Note: you may need to restart the kernel to use updated packages.

import tensorflow as tf

2020-07-01 06:12:31.732542: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

print(tf.__version__)

2.2.0

 

import tensorflow as tf
print(tf.__version__)
2.1.0

 

generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator:

 

https://github.com/tensorflow/tensorflow/issues/37515

 

Tensorflow 2.1 Error “when finalizing GeneratorDataset iterator” - a memory leak? · Issue #37515 · tensorflow/tensorflow

Reopening of issue #35100, as more and more people report to still have the same problem: Problem description I am using TensorFlow 2.1.0 for image classification under Centos Linux. As my image tr...

github.com

(tf-gpu) C:\Users\joe>pip install tensorflow==2.2.0rc3

Collecting tensorflow==2.2.0rc3

  Downloading https://files.pythonhosted.org/packages/af/b6/c634218cd4602e906a922fe8b372d582e29624358f2e997aa1cd097164bf/tensorflow-2.2.0rc3-cp37-cp37m-win_amd64.whl (459.2MB)

     |████████████████████████████████| 459.2MB 42kB/s

Requirement already satisfied: grpcio>=1.8.6 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (1.16.1)

Requirement already satisfied: numpy<2.0,>=1.16.0 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (1.17.3)

Requirement already satisfied: wheel>=0.26; python_version >= "3" in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (0.33.6)

Requirement already satisfied: six>=1.12.0 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (1.13.0)

Collecting h5py<2.11.0,>=2.10.0

  Using cached https://files.pythonhosted.org/packages/a1/6b/7f62017e3f0b32438dd90bdc1ff0b7b1448b6cb04a1ed84f37b6de95cd7b/h5py-2.10.0-cp37-cp37m-win_amd64.whl

Requirement already satisfied: opt-einsum>=2.3.2 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (3.1.0)

Requirement already satisfied: wrapt>=1.11.1 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (1.11.2)

Requirement already satisfied: absl-py>=0.7.0 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (0.8.1)

Requirement already satisfied: keras-preprocessing>=1.1.0 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (1.1.0)

Collecting tensorboard<2.3.0,>=2.2.0

  Using cached https://files.pythonhosted.org/packages/1d/74/0a6fcb206dcc72a6da9a62dd81784bfdbff5fedb099982861dc2219014fb/tensorboard-2.2.2-py3-none-any.whl

Collecting tensorflow-estimator<2.3.0,>=2.2.0rc0

  Using cached https://files.pythonhosted.org/packages/a4/f5/926ae53d6a226ec0fda5208e0e581cffed895ccc89e36ba76a8e60895b78/tensorflow_estimator-2.2.0-py2.py3-none-any.whl

Requirement already satisfied: google-pasta>=0.1.8 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (0.1.8)

Requirement already satisfied: protobuf>=3.8.0 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (3.10.1)

Collecting astunparse==1.6.3

  Using cached https://files.pythonhosted.org/packages/2b/03/13dde6512ad7b4557eb792fbcf0c653af6076b81e5941d36ec61f7ce6028/astunparse-1.6.3-py2.py3-none-any.whl

Collecting gast==0.3.3

  Using cached https://files.pythonhosted.org/packages/d6/84/759f5dd23fec8ba71952d97bcc7e2c9d7d63bdc582421f3cd4be845f0c98/gast-0.3.3-py2.py3-none-any.whl

Collecting scipy==1.4.1; python_version >= "3"

  Using cached https://files.pythonhosted.org/packages/61/51/046cbc61c7607e5ecead6ff1a9453fba5e7e47a5ea8d608cc7036586a5ef/scipy-1.4.1-cp37-cp37m-win_amd64.whl

Requirement already satisfied: termcolor>=1.1.0 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorflow==2.2.0rc3) (1.1.0)

Collecting google-auth-oauthlib<0.5,>=0.4.1

  Using cached https://files.pythonhosted.org/packages/7b/b8/88def36e74bee9fce511c9519571f4e485e890093ab7442284f4ffaef60b/google_auth_oauthlib-0.4.1-py2.py3-none-any.whl

Requirement already satisfied: markdown>=2.6.8 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0rc3) (3.1.1)

Collecting tensorboard-plugin-wit>=1.6.0

  Downloading https://files.pythonhosted.org/packages/b6/85/5c5ac0a8c5efdfab916e9c6bc18963f6a6996a8a1e19ec4ad8c9ac9c623c/tensorboard_plugin_wit-1.7.0-py3-none-any.whl (779kB)

     |████████████████████████████████| 788kB 6.4MB/s

Requirement already satisfied: werkzeug>=0.11.15 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0rc3) (0.16.0)

Collecting requests<3,>=2.21.0

  Using cached https://files.pythonhosted.org/packages/45/1e/0c169c6a5381e241ba7404532c16a21d86ab872c9bed8bdcd4c423954103/requests-2.24.0-py2.py3-none-any.whl

Collecting google-auth<2,>=1.6.3

  Using cached https://files.pythonhosted.org/packages/21/57/d706964a7e4056f3f2244e16705388c11631fbb53d3e2d2a2d0fbc24d470/google_auth-1.18.0-py2.py3-none-any.whl

Requirement already satisfied: setuptools>=41.0.0 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0rc3) (42.0.1.post20191125)

Collecting requests-oauthlib>=0.7.0

  Using cached https://files.pythonhosted.org/packages/a3/12/b92740d845ab62ea4edf04d2f4164d82532b5a0b03836d4d4e71c6f3d379/requests_oauthlib-1.3.0-py2.py3-none-any.whl

Requirement already satisfied: certifi>=2017.4.17 in j:\anaconda3\envs\tf-gpu\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0rc3) (2020.6.20)

Collecting chardet<4,>=3.0.2

  Using cached https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl

Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1

  Using cached https://files.pythonhosted.org/packages/e1/e5/df302e8017440f111c11cc41a6b432838672f5a70aa29227bf58149dc72f/urllib3-1.25.9-py2.py3-none-any.whl

Collecting idna<3,>=2.5

  Downloading https://files.pythonhosted.org/packages/a2/38/928ddce2273eaa564f6f50de919327bf3a00f091b5baba8dfa9460f3a8a8/idna-2.10-py2.py3-none-any.whl (58kB)

     |████████████████████████████████| 61kB ...

Collecting rsa<5,>=3.1.4; python_version >= "3"

  Using cached https://files.pythonhosted.org/packages/1c/df/c3587a667d6b308fadc90b99e8bc8774788d033efcc70f4ecaae7fad144b/rsa-4.6-py3-none-any.whl

Collecting pyasn1-modules>=0.2.1

  Using cached https://files.pythonhosted.org/packages/95/de/214830a981892a3e286c3794f41ae67a4495df1108c3da8a9f62159b9a9d/pyasn1_modules-0.2.8-py2.py3-none-any.whl

Collecting cachetools<5.0,>=2.0.0

  Downloading https://files.pythonhosted.org/packages/cd/5c/f3aa86b6d5482f3051b433c7616668a9b96fbe49a622210e2c9781938a5c/cachetools-4.1.1-py3-none-any.whl

Collecting oauthlib>=3.0.0

  Using cached https://files.pythonhosted.org/packages/05/57/ce2e7a8fa7c0afb54a0581b14a65b56e62b5759dbc98e80627142b8a3704/oauthlib-3.1.0-py2.py3-none-any.whl

Collecting pyasn1>=0.1.3

  Using cached https://files.pythonhosted.org/packages/62/1e/a94a8d635fa3ce4cfc7f506003548d0a2447ae76fd5ca53932970fe3053f/pyasn1-0.4.8-py2.py3-none-any.whl

ERROR: tensorboard 2.2.2 has requirement grpcio>=1.24.3, but you'll have grpcio 1.16.1 which is incompatible.

Installing collected packages: h5py, pyasn1, rsa, pyasn1-modules, cachetools, google-auth, chardet, urllib3, idna, requests, oauthlib, requests-oauthlib, google-auth-oauthlib, tensorboard-plugin-wit, tensorboard, tensorflow-estimator, astunparse, gast, scipy, tensorflow

  Found existing installation: h5py 2.9.0

    Uninstalling h5py-2.9.0:

      Successfully uninstalled h5py-2.9.0

  Found existing installation: tensorboard 2.0.0

    Uninstalling tensorboard-2.0.0:

      Successfully uninstalled tensorboard-2.0.0

  Found existing installation: tensorflow-estimator 2.0.0

    Uninstalling tensorflow-estimator-2.0.0:

      Successfully uninstalled tensorflow-estimator-2.0.0

  Found existing installation: gast 0.2.2

    Uninstalling gast-0.2.2:

      Successfully uninstalled gast-0.2.2

  Found existing installation: scipy 1.3.1

    Uninstalling scipy-1.3.1:

      Successfully uninstalled scipy-1.3.1

  Found existing installation: tensorflow 2.0.0

    Uninstalling tensorflow-2.0.0:

      Successfully uninstalled tensorflow-2.0.0

Successfully installed astunparse-1.6.3 cachetools-4.1.1 chardet-3.0.4 gast-0.3.3 google-auth-1.18.0 google-auth-oauthlib-0.4.1 h5py-2.10.0 idna-2.10 oauthlib-3.1.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.24.0 requests-oauthlib-1.3.0 rsa-4.6 scipy-1.4.1 tensorboard-2.2.2 tensorboard-plugin-wit-1.7.0 tensorflow-2.2.0rc3 tensorflow-estimator-2.2.0 urllib3-1.25.9

 

(tf-gpu) C:\Users\joe>

muellerdo commented on 9 May  

edited 

Update:
Tensorflow 2.2.0 does not support Keras Data Generators for validation, anymore.

I'm probably have to rework the complete Data Generator of MIScnn into TF datasets...
Any suggestions are welcome.

Source: https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
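If the generator path keeps breaking, one way to move toward tf.data, as the comment above suggests, is to wrap the existing Python generator. This is only a minimal sketch with placeholder names (my_batch_generator and the shapes are assumptions, not MIScnn's or any library's actual API):

import numpy as np
import tensorflow as tf

IMG_HEIGHT, IMG_WIDTH, batch_size = 150, 150, 32  # placeholder values

def my_batch_generator():
    # placeholder generator: a real one would read and preprocess image files
    while True:
        images = np.random.rand(batch_size, IMG_HEIGHT, IMG_WIDTH, 3).astype('float32')
        labels = np.random.randint(0, 2, size=(batch_size,)).astype('float32')
        yield images, labels

train_ds = tf.data.Dataset.from_generator(
    my_batch_generator,
    output_types=(tf.float32, tf.float32),
    output_shapes=((None, IMG_HEIGHT, IMG_WIDTH, 3), (None,))
).prefetch(tf.data.experimental.AUTOTUNE)

# model.fit(train_ds, steps_per_epoch=..., validation_data=..., epochs=...) then takes the dataset directly.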

 

(tf-gpu) C:\Users\joe>conda update --all

Collecting package metadata (current_repodata.json): done

Solving environment: done

 

## Package Plan ##

 

  environment location: J:\Anaconda3\envs\tf-gpu

 

 

The following packages will be downloaded:

 

    package                    |            build

    ---------------------------|-----------------

    cudnn-7.6.5                |       cuda10.0_0       164.2 MB

    pyopenssl-19.1.0           |           py37_0          91 KB

    pysocks-1.7.1              |           py37_0          31 KB

    python-3.7.7               |       h81c818b_4        14.3 MB

    qtpy-1.9.0                 |             py_0          38 KB

    ------------------------------------------------------------

                                           Total:       178.6 MB

 

The following NEW packages will be INSTALLED:

 

  blinker            pkgs/main/win-64::blinker-1.4-py37_0

  brotlipy           pkgs/main/win-64::brotlipy-0.7.0-py37he774522_1000

  cachetools         pkgs/main/noarch::cachetools-4.1.0-py_1

  cffi               pkgs/main/win-64::cffi-1.14.0-py37h7a1dbc1_0

  chardet            pkgs/main/win-64::chardet-3.0.4-py37_1003

  click              pkgs/main/noarch::click-7.1.2-py_0

  cryptography       pkgs/main/win-64::cryptography-2.9.2-py37h7a1dbc1_0

  google-auth        pkgs/main/noarch::google-auth-1.14.1-py_0

  google-auth-oauth~ pkgs/main/noarch::google-auth-oauthlib-0.4.1-py_2

  idna               pkgs/main/noarch::idna-2.10-py_0

  importlib-metadata pkgs/main/win-64::importlib-metadata-1.7.0-py37_0

  oauthlib           pkgs/main/noarch::oauthlib-3.1.0-py_0

  packaging          pkgs/main/noarch::packaging-20.4-py_0

  prompt-toolkit     pkgs/main/noarch::prompt-toolkit-3.0.5-py_0

  pyasn1             pkgs/main/noarch::pyasn1-0.4.8-py_0

  pyasn1-modules     pkgs/main/noarch::pyasn1-modules-0.2.7-py_0

  pycparser          pkgs/main/noarch::pycparser-2.20-py_0

  pyjwt              pkgs/main/win-64::pyjwt-1.7.1-py37_0

  pyopenssl          pkgs/main/win-64::pyopenssl-19.1.0-py37_0

  pyparsing          pkgs/main/noarch::pyparsing-2.4.7-py_0

  pysocks            pkgs/main/win-64::pysocks-1.7.1-py37_0

  qtpy               pkgs/main/noarch::qtpy-1.9.0-py_0

  requests           pkgs/main/noarch::requests-2.24.0-py_0

  requests-oauthlib  pkgs/main/noarch::requests-oauthlib-1.3.0-py_0

  rsa                pkgs/main/noarch::rsa-4.0-py_0

  tensorboard-plugi~ pkgs/main/noarch::tensorboard-plugin-wit-1.6.0-py_0

  urllib3            pkgs/main/noarch::urllib3-1.25.9-py_0

  win_inet_pton      pkgs/main/win-64::win_inet_pton-1.1.0-py37_0

 

The following packages will be REMOVED:

 

  more-itertools-7.2.0-py37_0

 

The following packages will be UPDATED:

 

  absl-py                                      0.8.1-py37_0 --> 0.9.0-py37_0

  backcall           pkgs/main/win-64::backcall-0.1.0-py37~ --> pkgs/main/noarch::backcall-0.2.0-py_0

  bleach              pkgs/main/win-64::bleach-3.1.0-py37_0 --> pkgs/main/noarch::bleach-3.1.5-py_0

  colorama           pkgs/main/win-64::colorama-0.4.1-py37~ --> pkgs/main/noarch::colorama-0.4.3-py_0

  cudnn                                    7.6.4-cuda10.0_0 --> 7.6.5-cuda10.0_0

  decorator                                      4.4.1-py_0 --> 4.4.2-py_0

  google-pasta                                   0.1.8-py_0 --> 0.2.0-py_0

  grpcio                              1.16.1-py37h351948d_1 --> 1.27.2-py37h351948d_0

  h5py                                 2.9.0-py37h5e291fa_0 --> 2.10.0-py37h5e291fa_0

  icu                                       58.2-ha66f8fd_1 --> 58.2-ha925a31_3

  importlib_metadata pkgs/main/win-64::importlib_metadata-~ --> pkgs/main/noarch::importlib_metadata-1.7.0-0

  intel-openmp                                   2019.4-245 --> 2020.1-216

  ipykernel                            5.1.3-py37h39e3cac_0 --> 5.3.0-py37h5ca1d4c_0

  ipython                              7.9.0-py37h39e3cac_0 --> 7.16.1-py37h5ca1d4c_0

  jedi                                        0.15.1-py37_0 --> 0.17.1-py37_0

  jinja2                                        2.10.3-py_0 --> 2.11.2-py_0

  jupyter_client     pkgs/main/win-64::jupyter_client-5.3.~ --> pkgs/main/noarch::jupyter_client-6.1.3-py_0

  jupyter_console    pkgs/main/win-64::jupyter_console-6.0~ --> pkgs/main/noarch::jupyter_console-6.1.0-py_0

  jupyter_core                                 4.6.1-py37_0 --> 4.6.3-py37_0

  libprotobuf                             3.10.1-h7bd577a_0 --> 3.12.3-h7bd577a_0

  libsodium                               1.0.16-h9d3ae62_0 --> 1.0.18-h62dcd97_0

  mkl                                            2019.4-245 --> 2020.1-216

  mkl_fft                             1.0.15-py37h14836fe_0 --> 1.1.0-py37h45dec08_0

  mkl_random                           1.1.0-py37h675688f_0 --> 1.1.1-py37h47e9c7a_0

  nbformat           pkgs/main/win-64::nbformat-4.4.0-py37~ --> pkgs/main/noarch::nbformat-5.0.7-py_0

  notebook                                     6.0.2-py37_0 --> 6.0.3-py37_0

  numpy                               1.17.3-py37h4ceb530_0 --> 1.18.5-py37h6530119_0

  numpy-base                          1.17.3-py37hc3f5095_0 --> 1.18.5-py37hc3f5095_0

  pandoc                                          2.2.3.2-0 --> 2.9.2.1-0

  parso                                          0.5.1-py_0 --> 0.7.0-py_0

  pip                                         19.3.1-py37_0 --> 20.1.1-py37_1

  prometheus_client                              0.7.1-py_0 --> 0.8.0-py_0

  prompt_toolkit                                2.0.10-py_0 --> 3.0.5-0

  protobuf                            3.10.1-py37h33f27b4_0 --> 3.12.3-py37h33f27b4_0

  pygments                                       2.4.2-py_0 --> 2.6.1-py_0

  pyrsistent                          0.15.6-py37he774522_0 --> 0.16.0-py37he774522_0

  python                                   3.7.5-h8c8aaf0_0 --> 3.7.7-h81c818b_4

  pywin32                                223-py37hfa6e2cd_1 --> 227-py37he774522_1

  pywinpty                                  0.5.5-py37_1000 --> 0.5.7-py37_0

  pyzmq                               18.1.0-py37ha925a31_0 --> 19.0.1-py37ha925a31_1

  qtconsole                                      4.6.0-py_0 --> 4.7.5-py_0

  scipy                                1.3.1-py37h29ff71c_0 --> 1.5.0-py37h9439919_0

  setuptools                                  42.0.1-py37_0 --> 47.3.1-py37_0

  six                   pkgs/main/win-64::six-1.13.0-py37_0 --> pkgs/main/noarch::six-1.15.0-py_0

  sqlite                                  3.30.1-he774522_0 --> 3.32.3-h2a8f88b_0

  tensorboard                            2.0.0-pyhb38c66f_1 --> 2.2.1-pyh532a8cf_0

  tornado                              6.0.3-py37he774522_0 --> 6.0.4-py37he774522_1

  vs2015_runtime                     14.16.27012-hf0eaf9b_0 --> 14.16.27012-hf0eaf9b_2

  wcwidth            pkgs/main/win-64::wcwidth-0.1.7-py37_0 --> pkgs/main/noarch::wcwidth-0.2.5-py_0

  wheel                                       0.33.6-py37_0 --> 0.34.2-py37_0

  wrapt                               1.11.2-py37he774522_0 --> 1.12.1-py37he774522_1

  zeromq                                   4.3.1-h33f27b4_3 --> 4.3.2-ha925a31_2

  zipp                                           0.6.0-py_0 --> 3.1.0-py_0

  zlib                                    1.2.11-h62dcd97_3 --> 1.2.11-h62dcd97_4

 

 

| DEBUG menuinst_win32:__init__(199): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'J:\Anaconda3', env_name: 'None', mode: 'user', used_mode: 'user'

DEBUG menuinst_win32:create(323): Shortcut cmd is %windir%\System32\cmd.exe, args are ['"/K"', 'J:\\Anaconda3\\Scripts\\activate.bat', 'J:\\Anaconda3']

DEBUG menuinst_win32:__init__(199): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'J:\Anaconda3', env_name: 'None', mode: 'user', used_mode: 'user'

DEBUG menuinst_win32:create(323): Shortcut cmd is %windir%\System32\WindowsPowerShell\v1.0\powershell.exe, args are ['-ExecutionPolicy', 'ByPass', '-NoExit', '-Command', '"& \'J:\\Anaconda3\\shell\\condabin\\conda-hook.ps1\' ; conda activate \'J:\\Anaconda3\' "']

/ DEBUG menuinst_win32:__init__(199): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'J:\Anaconda3', env_name: 'None', mode: 'user', used_mode: 'user'

DEBUG menuinst_win32:create(323): Shortcut cmd is J:\Anaconda3\pythonw.exe, args are ['J:\\Anaconda3\\cwp.py', 'J:\\Anaconda3', 'J:\\Anaconda3\\pythonw.exe', 'J:\\Anaconda3\\Scripts\\spyder-script.py']

DEBUG menuinst_win32:create(323): Shortcut cmd is J:\Anaconda3\python.exe, args are ['J:\\Anaconda3\\cwp.py', 'J:\\Anaconda3', 'J:\\Anaconda3\\python.exe', 'J:\\Anaconda3\\Scripts\\spyder-script.py', '--reset']

done

 

C:\WINDOWS\system32>

Switching a computation from scikit-learn to TensorFlow 2.0 on the GPU took it from 20 minutes down to about 5 seconds.

If a GPU does this, Google's TPU must be even more impressive. Of course, only when the workload fits... The Googled material I saved back in 2017 had comparisons like that. I should look at it in more detail when I have time. For now, the working hypothesis: TensorFlow wins.
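The 20-minutes-versus-5-seconds number is specific to my workload, but the kind of comparison is easy to reproduce. A rough sketch (the matrix size is arbitrary, and the first GPU call includes warm-up, so run it twice for a fair reading):

import time
import tensorflow as tf

x = tf.random.normal((4000, 4000))

for device in ('/CPU:0', '/GPU:0'):
    with tf.device(device):
        start = time.time()
        y = tf.matmul(x, x)
        _ = y.numpy()  # block until the multiply actually finishes
        print(device, time.time() - start, 'seconds')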

2020-06-30 23:56:10.818918: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-06-30 23:56:10.819336: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-06-30 23:56:10.819574: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-06-30 23:56:10.819780: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-06-30 23:56:10.819988: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-06-30 23:56:10.820184: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-06-30 23:56:10.820390: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-06-30 23:56:10.820562: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-06-30 23:56:10.821190: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0

2020-06-30 23:56:10.821391: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-06-30 23:56:10.821600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0

2020-06-30 23:56:10.821752: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N

2020-06-30 23:56:10.822327: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2885 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

 

Scikit-learn vs TensorFlow

Kim Kanu

2017. 10. 13. 18:03


A machine learning framework covers a variety of learning methods for classification, regression, clustering, anomaly detection, and data preparation, and it may or may not include artificial neural network methods.

 

Ex) Scikit-learn and Spark MLlib are machine learning frameworks.

 

 

 

 

A deep learning or deep neural network (DNN) framework covers a variety of neural network topologies with multiple hidden layers; these layers form a multi-stage pattern-recognition pipeline. The more layers a network has, the richer the features that can be extracted for clustering and classification.

 

Ex) Caffe, Microsoft Cognitive Toolkit (CNTK 2), Deeplearning4j (deep learning software for Java and Scala, used with Hadoop and Spark), Keras (a deep learning front end for Theano and TensorFlow), MXNet, and TensorFlow are deep learning frameworks.

 

 

 

The discussion below covers Scikit-learn as the machine learning framework and TensorFlow as the deep learning framework.

 

 

 

Scikit-learn

The Scikit-learn Python framework's strength is its solid set of learning algorithms.

Well-defined algorithms, integrated graphics, and a proven library are also strengths.

It is easy to install, learn, and use, with good examples and documentation.

 

Its weakness is that it does not cover deep learning or reinforcement learning.

It does not support graphical models or sequence prediction.

It cannot be used from languages other than Python, and it does not support the PyPy JIT (Just-in-Time) compiler or GPUs.

 

Scikit-learn supports a wide range of algorithms for classification, regression, clustering, dimensionality reduction, model selection, and preprocessing, and the related documentation and examples are excellent. But there is no guided workflow for completing these tasks.

 

Because it does not support deep learning or reinforcement learning, it is not well suited to problems such as accurate image classification, reliable real-time language parsing, or translation.

 

For general machine learning uses that do not need dozens of layers of neurons, from building prediction functions that link different observations, to classifying observations, to learning the structure of an unlabeled data set, nothing beats Scikit-learn.

 

 

 

TensorFlow

TensorFlow is Google's highly portable machine learning and neural network library.

It is somewhat harder to learn, but its performance and scalability are good.

TensorFlow contains the many models and algorithms widely used in deep learning and performs extremely well on hardware with GPUs (for training) or Google TPUs (for production-scale prediction).

Python support is excellent, the documentation is good, and it ships with software called TensorBoard that displays the data flow graph (DFG) describing the results, which makes them easy to understand.

In a data flow graph, nodes represent mathematical operations and the graph edges represent the multidimensional data arrays (tensors) flowing between them. This flexible architecture lets users deploy to one or more CPUs or GPUs on desktops, servers, or mobile devices without rewriting code.

 

The main language for using TensorFlow is Python.

C++ support is partially limited.

 

The tutorials that ship with TensorFlow include applications for handwritten digit classification, image recognition, word embeddings, recurrent neural networks (RNN), sequence-to-sequence models for machine translation, natural language processing, and simulations based on partial differential equations (PDE).

With TensorFlow you can easily handle all kinds of neural networks, including the deep CNNs and LSTM recurrent models now transforming image recognition and language processing. The code for defining layers is somewhat verbose, but any of the three bundled deep learning interface options removes that inconvenience.

Debugging an asynchronous network solver is not trivial, but the TensorBoard software lets users visualize the graph.

 

Source: "From Google TensorFlow to MS CNTK: a comparative analysis of 6 deep learning / machine learning frameworks" (IDG)


Kim Kanu

Author of a startup white paper and a technology credit evaluation book

 

Googling for medical data led me to a good crawler, and I was proud to find it was written by a Korean developer,

but once I realized the data is openly available anyway, this sort of attempt felt a bit silly. ^^

 

 

They even provide a downloader, and just three of the BREAST CANCER image collections come to 100 GB...

Thanks to that, I deleted GTA5. ㅠㅠ Ughhh... I'll reinstall it someday when my stress peaks.

Since the files are DICOM, I looked around for a viewer. Then again, if I'm going to process them I'll probably have to work at the source level anyway... It's nighttime and I'm half playing Overwatch, so I can't concentrate. Let's not forget this is a toy project: a free hobby with no one's support and no one's planning behind it. A certain amount of immersion is good, but if I get the balance wrong it turns into work again.

https://www.fosshub.com/IrfanView.html?dwl=iview454_x64_setup.exe

 

IrfanView

IrfanView: Free software download for windows.

www.fosshub.com

Hmm... I installed it with only the DICOM option selected, but it still asks me to install a plugin.

http://dk.kisti.re.kr/?q=node/7

 

VIEWER | Digital Korean

 

dk.kisti.re.kr

The viewers are listed here, but installing more of them is pointless for me, so I looked for Python source code instead.

 

https://pydicom.github.io/pydicom/stable/auto_examples/input_output/plot_read_dicom.html

 

Read DICOM and ploting using matplotlib — pydicom 2.0.0 documentation

© Copyright 2008-2020, Darcy Mason and pydicom contributors

pydicom.github.io
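Following the pydicom example linked above, reading and displaying one .dcm slice is only a few lines. A minimal sketch (the file name is a placeholder; pixel_array needs NumPy, and compressed transfer syntaxes may need an extra handler such as GDCM):

import matplotlib.pyplot as plt
import pydicom

ds = pydicom.dcmread('sample_breast_ct.dcm')  # placeholder path
plt.imshow(ds.pixel_array, cmap=plt.cm.gray)
plt.show()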

 

https://anaconda.org/conda-forge/pydicom

 

Pydicom :: Anaconda Cloud

License: MIT 251105 total downloads Last upload: 27 days and 38 minutes ago Installers Info: This package contains files in non-standard labels. conda install linux-64  v0.9.9 win-32  v0.9.9 noarch  v2.0.0 win-64  v0.9.9 osx-64  v0.9.9 To install this

anaconda.org

 

 

(base) C:\Users\joe>conda install -c conda-forge pydicom

Collecting package metadata (current_repodata.json): done

Solving environment: done

 

## Package Plan ##

 

  environment location: J:\Anaconda3

 

  added / updated specs:

    - pydicom

 

 

The following packages will be downloaded:

 

    package                    |            build

    ---------------------------|-----------------

    ca-certificates-2020.6.20  |       hecda079_0         184 KB  conda-forge

    certifi-2020.6.20          |   py37hc8dfbb8_0         151 KB  conda-forge

    conda-4.8.3                |   py37hc8dfbb8_1         3.1 MB  conda-forge

    openssl-1.1.1g             |       he774522_0         5.7 MB  conda-forge

    pydicom-2.0.0              |     pyh9f0ad1d_0        26.5 MB  conda-forge

    python_abi-3.7             |          1_cp37m           4 KB  conda-forge

    ------------------------------------------------------------

                                           Total:        35.6 MB

 

The following NEW packages will be INSTALLED:

 

  pydicom            conda-forge/noarch::pydicom-2.0.0-pyh9f0ad1d_0

  python_abi         conda-forge/win-64::python_abi-3.7-1_cp37m

 

The following packages will be UPDATED:

 

  ca-certificates     pkgs/main::ca-certificates-2020.1.1-0 --> conda-forge::ca-certificates-2020.6.20-hecda079_0

  conda                       pkgs/main::conda-4.8.3-py37_0 --> conda-forge::conda-4.8.3-py37hc8dfbb8_1

 

The following packages will be SUPERSEDED by a higher-priority channel:

 

  certifi               pkgs/main::certifi-2020.6.20-py37_0 --> conda-forge::certifi-2020.6.20-py37hc8dfbb8_0

  openssl                                         pkgs/main --> conda-forge

 

 

Proceed ([y]/n)? y

 

 

Downloading and Extracting Packages

certifi-2020.6.20    | 151 KB    | ############################################################################ | 100%

conda-4.8.3          | 3.1 MB    | ############################################################################ | 100%

pydicom-2.0.0        | 26.5 MB   | ##########################################################################2  |  98%

 

It doesn't show up in Anaconda Navigator, so I installed it from the command line.

 

 

Works fine.

Though somehow... I suspect there is already an AIaaS that runs directly on dcm files.

 

Anyway, deleting GTA5 wasn't going to be enough... the breast cancer data runs to more than 10 bundles (I'm tired of the category terminology, so "bundles" it is). Some bundles are access-restricted and ask for a password.

But with my current disk space I can't take 10 TB anyway ㅠ because the drives are already full ㅋ What should I delete... more than half of it is family photos ㅠ

Some bundles are over 1 TB, others around 100 GB. I don't think I need that much right away, so I probably don't need to mount the 10+ TB of blank hard disks at home (hoarded long ago, so they're 1 and 1.5 TB drives.. ㅠㅠ). I do like hoarding; I even have a Threadripper still in its box ㅠ

 

Anyway...

This is how I'm proceeding. A friend who got his PhD at a top university told me that when he was suddenly asked to build a self-driving car, he also started with Googling. After about a year of research without major progress, they partnered with leading companies around the world...

 

Even grinding away at dead ends is fine if we all grind together. Anyone else out there doing this?

_breast cancer_ CT.7z
1.60MB

All of the data is attached above.

 

The AutoCrawler solution does not work properly when the data set is huge.

I think the saving mechanism is not working.

After one more try without the "naver" option, I will fix the code.

 

current code snapshot

---

"""
Copyright 2018 YoongiKim

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""


import os
import requests
import shutil
from multiprocessing import Pool
import argparse
from collect_links import CollectLinks
import imghdr
import base64


class Sites:
    GOOGLE = 1
    NAVER = 2
    GOOGLE_FULL = 3
    NAVER_FULL = 4

    @staticmethod
    def get_text(code):
        if code == Sites.GOOGLE:
            return 'google'
        elif code == Sites.NAVER:
            return 'naver'
        elif code == Sites.GOOGLE_FULL:
            return 'google'
        elif code == Sites.NAVER_FULL:
            return 'naver'

    @staticmethod
    def get_face_url(code):
        if code == Sites.GOOGLE or Sites.GOOGLE_FULL:
            return "&tbs=itp:face"
        if code == Sites.NAVER or Sites.NAVER_FULL:
            return "&face=1"


class AutoCrawler:
    def __init__(self, skip_already_exist=True, n_threads=4, do_google=True, do_naver=True, download_path='download',
                 full_resolution=False, face=False):
        """
        :param skip_already_exist: Skips keyword already downloaded before. This is needed when re-downloading.
        :param n_threads: Number of threads to download.
        :param do_google: Download from google.com (boolean)
        :param do_naver: Download from naver.com (boolean)
        :param download_path: Download folder path
        :param full_resolution: Download full resolution image instead of thumbnails (slow)
        :param face: Face search mode
        """

        self.skip = skip_already_exist
        self.n_threads = n_threads
        self.do_google = do_google
        self.do_naver = do_naver
        self.download_path = download_path
        self.full_resolution = full_resolution
        self.face = face

        os.makedirs('./{}'.format(self.download_path), exist_ok=True)

    @staticmethod
    def all_dirs(path):
        paths = []
        for dir in os.listdir(path):
            if os.path.isdir(path + '/' + dir):
                paths.append(path + '/' + dir)

        return paths

    @staticmethod
    def all_files(path):
        paths = []
        for root, dirs, files in os.walk(path):
            for file in files:
                if os.path.isfile(path + '/' + file):
                    paths.append(path + '/' + file)

        return paths

    @staticmethod
    def get_extension_from_link(link, default='jpg'):
        splits = str(link).split('.')
        if len(splits) == 0:
            return default
        ext = splits[-1].lower()
        if ext == 'jpg' or ext == 'jpeg':
            return 'jpg'
        elif ext == 'gif':
            return 'gif'
        elif ext == 'png':
            return 'png'
        else:
            return default

    @staticmethod
    def validate_image(path):
        ext = imghdr.what(path)
        if ext == 'jpeg':
            ext = 'jpg'
        return ext  # returns None if not valid

    @staticmethod
    def make_dir(dirname):
        current_path = os.getcwd()
        path = os.path.join(current_path, dirname)
        if not os.path.exists(path):
            os.makedirs(path)

    @staticmethod
    def get_keywords(keywords_file='keywords.txt'):
        # read search keywords from file
        with open(keywords_file, 'r', encoding='utf-8-sig') as f:
            text = f.read()
            lines = text.split('\n')
            lines = filter(lambda x: x != '' and x is not None, lines)
            keywords = sorted(set(lines))

        print('{} keywords found: {}'.format(len(keywords), keywords))

        # re-save sorted keywords
        with open(keywords_file, 'w+', encoding='utf-8') as f:
            for keyword in keywords:
                f.write('{}\n'.format(keyword))

        return keywords

    @staticmethod
    def save_object_to_file(object, file_path, is_base64=False):
        try:
            with open('{}'.format(file_path), 'wb') as file:
                if is_base64:
                    file.write(object)
                else:
                    shutil.copyfileobj(object.raw, file)
        except Exception as e:
            print('Save failed - {}'.format(e))

    @staticmethod
    def base64_to_object(src):
        header, encoded = str(src).split(',', 1)
        data = base64.decodebytes(bytes(encoded, encoding='utf-8'))
        return data

    def download_images(self, keyword, links, site_name):
        self.make_dir('{}/{}'.format(self.download_path, keyword))
        total = len(links)

        for index, link in enumerate(links):
            try:
                print('Downloading {} from {}: {} / {}'.format(keyword, site_name, index + 1, total))

                if str(link).startswith('data:image/jpeg;base64'):
                    response = self.base64_to_object(link)
                    ext = 'jpg'
                    is_base64 = True
                elif str(link).startswith('data:image/png;base64'):
                    response = self.base64_to_object(link)
                    ext = 'png'
                    is_base64 = True
                else:
                    response = requests.get(link, stream=True)
                    ext = self.get_extension_from_link(link)
                    is_base64 = False

                no_ext_path = '{}/{}/{}_{}'.format(self.download_path, keyword, site_name, str(index).zfill(4))
                path = no_ext_path + '.' + ext
                self.save_object_to_file(response, path, is_base64=is_base64)

                del response

                ext2 = self.validate_image(path)
                if ext2 is None:
                    print('Unreadable file - {}'.format(link))
                    os.remove(path)
                else:
                    if ext != ext2:
                        path2 = no_ext_path + '.' + ext2
                        os.rename(path, path2)
                        print('Renamed extension {} -> {}'.format(ext, ext2))

            except Exception as e:
                print('Download failed - ', e)
                continue

    def download_from_site(self, keyword, site_code):
        site_name = Sites.get_text(site_code)
        add_url = Sites.get_face_url(site_code) if self.face else ""

        try:
            collect = CollectLinks()  # initialize chrome driver
        except Exception as e:
            print('Error occurred while initializing chromedriver - {}'.format(e))
            return

        try:
            print('Collecting links... {} from {}'.format(keyword, site_name))

            if site_code == Sites.GOOGLE:
                links = collect.google(keyword, add_url)

            elif site_code == Sites.NAVER:
                links = collect.naver(keyword, add_url)

            elif site_code == Sites.GOOGLE_FULL:
                links = collect.google_full(keyword, add_url)

            elif site_code == Sites.NAVER_FULL:
                links = collect.naver_full(keyword, add_url)

            else:
                print('Invalid Site Code')
                links = []

            print('Downloading images from collected links... {} from {}'.format(keyword, site_name))
            self.download_images(keyword, links, site_name)

            print('Done {} : {}'.format(site_name, keyword))

        except Exception as e:
            print('Exception {}:{} - {}'.format(site_name, keyword, e))

    def download(self, args):
        self.download_from_site(keyword=args[0], site_code=args[1])

    def do_crawling(self):
        keywords = self.get_keywords()

        tasks = []

        for keyword in keywords:
            dir_name = '{}/{}'.format(self.download_path, keyword)
            if os.path.exists(os.path.join(os.getcwd(), dir_name)) and self.skip:
                print('Skipping already existing directory {}'.format(dir_name))
                continue

            if self.do_google:
                if self.full_resolution:
                    tasks.append([keyword, Sites.GOOGLE_FULL])
                else:
                    tasks.append([keyword, Sites.GOOGLE])

            if self.do_naver:
                if self.full_resolution:
                    tasks.append([keyword, Sites.NAVER_FULL])
                else:
                    tasks.append([keyword, Sites.NAVER])

        pool = Pool(self.n_threads)
        pool.map_async(self.download, tasks)
        pool.close()
        pool.join()
        print('Task ended. Pool join.')

        self.imbalance_check()

        print('End Program')

    def imbalance_check(self):
        print('Data imbalance checking...')

        dict_num_files = {}

        for dir in self.all_dirs(self.download_path):
            n_files = len(self.all_files(dir))
            dict_num_files[dir] = n_files

        avg = 0
        for dir, n_files in dict_num_files.items():
            avg += n_files / len(dict_num_files)
            print('dir: {}, file_count: {}'.format(dir, n_files))

        dict_too_small = {}

        for dir, n_files in dict_num_files.items():
            if n_files < avg * 0.5:
                dict_too_small[dir] = n_files

        if len(dict_too_small) >= 1:
            print('Data imbalance detected.')
            print('Below keywords have smaller than 50% of average file count.')
            print('I recommend you to remove these directories and re-download for that keyword.')
            print('_________________________________')
            print('Too small file count directories:')
            for dir, n_files in dict_too_small.items():
                print('dir: {}, file_count: {}'.format(dir, n_files))

            print("Remove directories above? (y/n)")
            answer = input()

            if answer == 'y':
                # removing directories too small files
                print("Removing too small file count directories...")
                for dir, n_files in dict_too_small.items():
                    shutil.rmtree(dir)
                    print('Removed {}'.format(dir))

                print('Now re-run this program to re-download removed files. (with skip_already_exist=True)')
        else:
            print('Data imbalance not detected.')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--skip', type=str, default='true',
                        help='Skips keyword already downloaded before. This is needed when re-downloading.')
    parser.add_argument('--threads', type=int, default=4, help='Number of threads to download.')
    parser.add_argument('--google', type=str, default='true', help='Download from google.com (boolean)')
    parser.add_argument('--naver', type=str, default='true', help='Download from naver.com (boolean)')
    parser.add_argument('--full', type=str, default='false', help='Download full resolution image instead of thumbnails (slow)')
    parser.add_argument('--face', type=str, default='false', help='Face search mode')
    args = parser.parse_args()

    _skip = False if str(args.skip).lower() == 'false' else True
    _threads = args.threads
    _google = False if str(args.google).lower() == 'false' else True
    _naver = False if str(args.naver).lower() == 'false' else True
    _full = False if str(args.full).lower() == 'false' else True
    _face = False if str(args.face).lower() == 'false' else True

    print('Options - skip:{}, threads:{}, google:{}, naver:{}, full_resolution:{}, face:{}'.format(_skip, _threads, _google, _naver, _full, _face))

    crawler = AutoCrawler(skip_already_exist=_skip, n_threads=_threads, do_google=_google, do_naver=_naver, full_resolution=_full, face=_face)
    crawler.do_crawling()
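For reference, the crawler reads its search terms from keywords.txt and takes the argparse flags above; a typical run would look something like this (assuming the snapshot is saved as main.py, the entry file name the upstream repo uses, and the flag values are just an example):

python main.py --threads 4 --google true --naver false --full true --face false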

=-=-=-=-=

err msg

Downloading images from collected links... "breast cancer" CT from google
Exception google:"breast cancer" CT - [WinError 123] 파일 이름, 디렉터리 이름 또는 볼륨 레이블 구문이 잘못되었습니다: 'C:\\morpheus\\JOE\\AutoCrawler-master 2\\AutoCrawler-master\\download/"breast cancer" CT'
Task ended. Pool join.
Data imbalance checking...

 


 

This is the problem.
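WinError 123 means the file name, directory name, or volume label syntax is incorrect: the double quotes in the keyword '"breast cancer" CT' are not legal in a Windows directory name, so make_dir blows up. A small sketch of one possible workaround, stripping the characters Windows forbids before building the download directory (sanitize_keyword is my own helper name, not part of AutoCrawler):

import re

def sanitize_keyword(keyword):
    # drop characters Windows does not allow in file or directory names
    return re.sub(r'[\\/:*?"<>|]', '', keyword).strip()

print(sanitize_keyword('"breast cancer" CT'))  # -> breast cancer CT
# e.g. pass sanitize_keyword(keyword) wherever download_images/make_dir builds the folder path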

 

[4876:14980:0625/200842.641:ERROR:ssl_client_socket_impl.cc(959)] handshake failed; returned -1, SSL error code 1, net_error -200

[4876:14980:0625/200842.641:ERROR:ssl_client_socket_impl.cc(959)] handshake failed; returned -1, SSL error code 1, net_error -200

 

 

 

collect_links.py 48 line

 

chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
self.browser = webdriver.Chrome(executable, chrome_options=chrome_options)

 

still...

 

 

430: https://www.oncologynurseadvisor.com/wp-content/uploads/sites/13/2019/01/c752406354ff50ac055bfccde4105efd_bookmarkImage_335x250_large_original_1.jpg
[0625/211556.818:ERROR:ssl_client_socket_impl.cc(959)] handshake failed; returned -1, SSL error code 1, net_error -100
[0625/211556.820:ERROR:ssl_client_socket_impl.cc(959)] handshake failed; returned -1, SSL error code 1, net_error -100
[0625/211557.085:ERROR:ssl_client_socket_impl.cc(959)] handshake failed; returned -1, SSL error code 1, net_error -100
[0625/211557.088:ERROR:ssl_client_socket_impl.cc(959)] handshake failed; returned -1, SSL error code 1, net_error -100

 

Maybe it's the keyword?

Hmm...

Running it on a variety of systems (Mac, Windows Server, Windows 10), the problem takes different forms. Now it shows up as Click time out - //input[@type="button"].

 

With the chrome_options.add_argument('--headless') option no window appears, which seems faster... but since the errors above occur, launching it with a visible window makes the downloads come through properly.
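Commonly suggested extras for headless Selenium runs are an explicit window size, a normal user agent, and a quieter log level; whether they help with this particular site is untested here, so treat the lines below as a hedged sketch on top of the options above:

chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--window-size=1920,1080')
chrome_options.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)')
chrome_options.add_argument('--log-level=3')  # hides the handshake-failure spam, it does not fix it
self.browser = webdriver.Chrome(executable, chrome_options=chrome_options)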

 

Collecting links... Breast cancer CT from google
Scrolling down
Scraping links
Collect links done. Site: google, Keyword: Breast cancer CT, Total: 434
Downloading images from collected links... Breast cancer CT from google
Downloading Breast cancer CT from google: 1 / 434
Download failed -  Invalid URL 'None': No schema supplied. Perhaps you meant http://None?
Downloading Breast cancer CT from google: 2 / 434
Downloading Breast cancer CT from google: 3 / 434
Downloading Breast cancer CT from google: 4 / 434
Downloading Breast cancer CT from google: 5 / 434
Downloading Breast cancer CT from google: 6 / 434
Downloading Breast cancer CT from google: 7 / 434
Downloading Breast cancer CT from google: 8 / 434
Downloading Breast cancer CT from google: 9 / 434
Downloading Breast cancer CT from google: 10 / 434

 

Only 434 results, on Google? Tsk.. ㅠㅠ Checking in a visible browser window, it says there is nothing more. I should look for more data sources and try a variety of keyword combinations.

 

Breast cancer CT.7z
3.13MB

Still, the image quality is decent.

Instead of breast cancer CT, I switch the keyword to breast cancer ct scan images and try full resolution mode.

That gives about 700 results.

Still too few, though? ...

 

Digging around the internet, there is already far more image data out there than I imagined.

https://www.cancerimagingarchive.net/collections/

 

TCIA Collections - The Cancer Imaging Archive (TCIA)

TCIA data are organized as “collections”; typically these are patient cohorts related by a common disease (e.g. lung cancer), image modality or type (MRI, CT, digital histopathology, etc) or research focus. Supporting data related to the images such as

www.cancerimagingarchive.net

There is even an app for downloading the data. Probably torrent-based?

Anyway, there are quite a few sites besides that one. I should keep them well organized.

 

Since my mother had breast cancer, I'm starting with breast cancer. Out of the 19 choices, it makes sense to prioritize the ones where Googling (Naver image search turns up almost nothing) returns plenty of CT images. If the direction turns out to be wrong, I can come back to this point and start over.

https://en.wikipedia.org/wiki/Breast_cancer

 

Breast cancer - Wikipedia

From Wikipedia, the free encyclopedia Jump to navigation Jump to search Cancer that originates in the mammary gland Breast cancerMammograms showing a normal breast (left) and a breast with cancer (right)SpecialtyOncologySymptomsA lump in a breast, a change

en.wikipedia.org

"Breast cancer" appears to be the standard medical term.

I also found this site.

https://www.nationalbreastcancer.org/types-of-breast-cancer/

 

Types Archives - National Breast Cancer Foundation

Types of breast cancer include ductal carcinoma in situ, invasive ductal carcinoma, inflammatory breast cancer, and metastatic breast cancer.

www.nationalbreastcancer.org

Hmm, I almost missed such a good site.

Googling "breast cancer" also turned up this:

https://learningenglish.voanews.com/a/google-ai-system-could-improve-breast-cancer-detection/5231018.html

 

Google AI System Could Improve Breast Cancer Detection

A Google artificial intelligence system was as good as expert radiologists at discovering which women had breast cancer in a new study.

learningenglish.voanews.com

The article, from January 1 this year, was about a Google AI detecting cancer hidden between muscle tissue.

Hmm... even with my own eyes I cannot tell whether it is cancer. Which means the evaluation method ultimately has to be AI as well: run already-confirmed cancer CT images through the trained model and check whether it classifies them as cancer or not.

Even so... I doubt the quality will hold up unless the training data is captured consistently: always the same position, the same angle, the same machine, the same resolution, and so on. It might even be better to just classify whether a close-up CT of the cells shows cancer cells.

Since partnering with a hospital is difficult, the important first step seems to be determining whether the googled, collected photos are actually breast cancer CT images at all.

 

01010101101010101011101011010100010110001010110010010101010101010101011001010101010101101010

 

After a year I powered the remote computer back on, and it took 30 minutes just for FTP to come up -_-; a reboot normally takes about 1-2 minutes. The check is simple: keep pinging it until it responds (brute-force polling). It took another 20 minutes before MSTSC (Remote Desktop) would connect. Booting is slow, but seeing FTP file transfers work even while the services were still coming up in order, I am convinced the Windows Server edition really is a solution built around stability.

After it came up, I ran the auto crawler. I had only been upgrading Chrome to 83 on the Mac and never upgraded the Windows side, so I updated the driver. 83 is the newest I have, and 84 is already out, heh.

https://chromedriver.storage.googleapis.com/index.html?path=83.0.4103.39/

 


 

chromedriver.storage.googleapis.com

In the meantime, I plan to spend about 30 minutes browsing for sites that provide this kind of data.

And updates as well. On Windows Server you have to apply updates selectively. I once saw a server team that, not knowing this, blocked everyone else's work; sometimes a small piece of knowledge produces a very large result.

I update roughly once a year. (Back when everyone was fretting about servers getting breached, I used to check for and apply security updates every day.) Having done servers for 20 years, starting with a server-hosting business in college, I now understand a little of what security really is.

First of all, as long as the IP stays unknown, you are mostly fine. To crack a system you go after the main server whose IP is tied to the domain, so DNS server security matters more than that of any other server.

 

Downloading breast cancer ct scan images from google: 736 / 743
Downloading breast cancer ct scan images from google: 737 / 743
Downloading breast cancer ct scan images from google: 738 / 743
Downloading breast cancer ct scan images from google: 739 / 743
Downloading breast cancer ct scan images from google: 740 / 743
Downloading breast cancer ct scan images from google: 741 / 743
Downloading breast cancer ct scan images from google: 742 / 743
Downloading breast cancer ct scan images from google: 743 / 743
Done google : breast cancer ct scan images
Task ended. Pool join.
Data imbalance checking...
dir: download/breast cancer ct scan images, file_count: 676
Data imbalance not detected.
End Program

Process finished with exit code 0
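The crawler's last step above simply counts the files in each keyword directory. A minimal sketch of that kind of imbalance check, assuming a download/ layout like the one in the log (this is not AutoCrawler's actual implementation):

import os

def check_imbalance(root: str = 'download', ratio: float = 5.0):
    # Count files per keyword directory and flag classes much smaller than the largest one.
    counts = {d: len(os.listdir(os.path.join(root, d)))
              for d in os.listdir(root)
              if os.path.isdir(os.path.join(root, d))}
    if not counts:
        return
    largest = max(counts.values())
    for name, n in sorted(counts.items()):
        flag = '  <- possible imbalance' if n * ratio < largest else ''
        print(f'dir: {root}/{name}, file_count: {n}{flag}')

check_imbalance()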

After crawling photos for each cancer type, the plan is to

group similar images together, and

train only on the grouped data.

The testbed is the Windows desktop... the other machines are all busy with real work...

 

C:\Users\joe>pip3 install tensorflow

Collecting tensorflow

  Downloading https://files.pythonhosted.org/packages/af/50/d7da24189d95e2084bb1cc350a8e4acdf1b0c9b3d57def7a348f0d9cb062/tensorflow-2.2.0-cp37-cp37m-win_amd64.whl (459.2MB)

     |                               | 15.7MB 204kB/s eta 0:36:06    

Installing.

Huh, wait, where was Python installed again?

Which one is the real one?

C:\Users\joe>echo %PATH%

o:\WinAVR\bin;o:\WinAVR\utils\bin;O:\arm\DS-5 v5.27.1\sw\ARMCompiler5.06u5\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;J:\!!!!!!!!\visualSVN\bin;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files (x86)\Xoreax\IncrediBuild;C:\Program Files (x86)\Bitvise SSH Client;O:\CMake\bin;o:\Git LFS;C:\Program Files\Git\cmd;C:\WINDOWS\System32\OpenSSH\;C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\ePapyrus\Papyrus-PlugIn;C:\Program Files (x86)\ePapyrus\Papyrus-PlugIn\Addins;J:\Program Files\Java\jdk-11.0.3\bin;J:\Program Files\Java\jdk-11.0.3\bin;C:\ProgramData\chocolatey\bin;;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Users\joe\AppData\Local\Microsoft\WindowsApps;;o:\PyCharm\bin;

C:\Users\joe\AppData\Local\Microsoft\WindowsApps\python3.exe

 

This one, then.
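A quick way to confirm which interpreter a pip install is actually tied to is to ask Python itself; a minimal check:

import sys

# Show the interpreter that is actually running and where it looks for packages.
print(sys.executable)
print(sys.version)
print([p for p in sys.path if 'site-packages' in p])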

 

The install that was in progress hit an error, as expected.

 

WARNING: The script f2py.exe is installed in 'C:\Users\joe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\Scripts' which is not on PATH.

  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.

ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: 'C:\\Users\\joe\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python37\\site-packages\\tensorflow_estimator\\python\\estimator\\canned\\linear_optimizer\\python\\utils\\__pycache__\\sharded_mutable_dense_hashtable.cpython-37.pyc'

 

WARNING: You are using pip version 19.2.3, however version 20.1.1 is available.

You should consider upgrading via the 'python -m pip install --upgrade pip' command.

 

But I installed it with pip3... -_-;

Anyway,

C:\Users\joe>python -m pip install --upgrade pip

Collecting pip

  Downloading https://files.pythonhosted.org/packages/43/84/23ed6a1796480a6f1a2d38f2802901d078266bda38388954d01d3f2e821d/pip-20.1.1-py2.py3-none-any.whl (1.5MB)

     |██████████████                  | 655kB 70kB/s eta 0:00:12

It will probably end the same way.

kdown, pyasn1, rsa, cachetools, pyasn1-modules, google-auth, idna, certifi, urllib3, chardet, requests, oauthlib, requests-oauthlib, google-auth-oauthlib, tensorboard, tensorflow

  WARNING: The script wheel.exe is installed in 'C:\Users\joe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\Scripts' which is not on PATH.

  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.

ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: 'C:\\Users\\joe\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python37\\site-packages\\tensorflow_estimator\\python\\estimator\\canned\\linear_optimizer\\python\\utils\\__pycache__\\sharded_mutable_dense_hashtable.cpython-37.pyc'

 

Sure enough.

C:\Users\joe>

C:\Users\joe>pip3 install tensorflow-gpu

Collecting tensorflow-gpu

  Downloading tensorflow_gpu-2.2.0-cp37-cp37m-win_amd64.whl (460.4 MB)

     |                                | 256 kB 192 kB/s eta 0:39:46

 

On Windows, TensorFlow should run on the GPU anyway, right? So I retried with the GPU package.

 

 

sts-oauthlib, google-auth-oauthlib, tensorboard, wrapt, google-pasta, gast, termcolor, scipy, opt-einsum, astunparse, tensorflow-gpu-estimator, tensorflow-gpu

  WARNING: The script chardetect.exe is installed in 'C:\Users\joe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\Scripts' which is not on PATH.

  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.

  WARNING: The scripts pyrsa-decrypt.exe, pyrsa-encrypt.exe, pyrsa-keygen.exe, pyrsa-priv2pub.exe, pyrsa-sign.exe and pyrsa-verify.exe are installed in 'C:\Users\joe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\Scripts' which is not on PATH.

  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.

ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: 'C:\\Users\\joe\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python37\\site-packages\\tensorboard_plugin_wit\\_vendor\\tensorflow_serving\\sources\\storage_path\\__pycache__\\file_system_storage_path_source_pb2.cpython-37.pyc'

 

Looks like I have to go with Anaconda.

Install everything with Anaconda,

then finish up in PyCharm.
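For the record, the pip failures above look like the Windows MAX_PATH limit biting the Microsoft Store build of Python: the failing .pyc path is well over 260 characters. A minimal sketch of an alternative workaround, assuming a regular CPython install and a short environment path such as C:\venvs\tf (a hypothetical location):

import venv

# Create an isolated environment at a short path so installed package paths stay under MAX_PATH.
venv.create(r'C:\venvs\tf', with_pip=True)
# Then: C:\venvs\tf\Scripts\python -m pip install tensorflow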

 

On Linux too, before Ubuntu, you always had to build from source, and when the order or package compatibility was off you would uninstall and reinstall endlessly. These days, if an environment cannot be set up cleanly, I would rather switch systems altogether... which is exactly why now seems like the right time to do AI, with the tooling finally matured.

 

J:\Anaconda3\python.exe O:/PycharmProjects/test001/bs/tensorflow-gpu_test.py

2020-06-24 01:57:56.286036: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-06-24 01:58:00.182514: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll

2020-06-24 01:58:00.220162: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-06-24 01:58:00.220480: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-06-24 01:58:00.228728: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-06-24 01:58:00.233755: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-06-24 01:58:00.235671: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-06-24 01:58:00.240970: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-06-24 01:58:00.244173: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-06-24 01:58:00.255182: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-06-24 01:58:00.255874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0

2020-06-24 01:58:00.256555: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

2020-06-24 01:58:00.261185: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:

pciBusID: 0000:09:00.0 name: GeForce RTX 2080 SUPER computeCapability: 7.5

coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s

2020-06-24 01:58:00.261506: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

2020-06-24 01:58:00.261658: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

2020-06-24 01:58:00.261815: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll

2020-06-24 01:58:00.261989: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll

2020-06-24 01:58:00.262142: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll

2020-06-24 01:58:00.262296: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll

2020-06-24 01:58:00.262461: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll

2020-06-24 01:58:00.263027: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0

2020-06-24 01:58:00.997862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:

2020-06-24 01:58:00.998035: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0

2020-06-24 01:58:00.998133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N

2020-06-24 01:58:00.999020: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6265 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5)

2020-06-24 01:58:01.003901: I tensorflow/core/common_runtime/eager/execute.cc:573] Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0

2020-06-24 01:58:01.004173: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll

tf.Tensor(

[[22. 28.]

 [49. 64.]], shape=(2, 2), dtype=float32)

 

The source is adapted from this guide:

https://www.tensorflow.org/guide/gpu?hl=ko

 

Using a GPU  |  TensorFlow Core

Note: This document was translated by the TensorFlow community. Despite best efforts to keep community translations accurate and up to date, they may not match the content of the official English documentation.

www.tensorflow.org

# import os
# os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
tf.debugging.set_log_device_placement(True)  # log which device (CPU/GPU) each op is placed on
#
# hello = tf.constant('Hello, TensorFlow!')
#
# print(hello.numpy())

# Create the input tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)

print(c)

 

Wait, I thought the 2080 SUPER I bought had 8GB, so why does it report 6GB? --> Ah, it is allocating about 6GB for TensorFlow. With only ~2GB left over, the game I had running crashes.
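The GPU guide linked above also covers controlling how much memory TensorFlow reserves; a minimal sketch using the TF 2.2-era experimental API (the 4096 MB cap is an arbitrary example value, and either call must run before the GPU is first used):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Grow GPU memory on demand instead of reserving most of the 8 GB up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Alternative: hard-cap TensorFlow at e.g. 4096 MB so other programs keep the rest.
    # tf.config.experimental.set_virtual_device_configuration(
    #     gpus[0],
    #     [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])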

https://developer.nvidia.com/cuda-gpus#compute

 

 

CUDA GPUs

Recommended GPU for Developers NVIDIA TITAN RTX NVIDIA TITAN RTX is built for data science, AI research, content creation and general GPU development. Built on the Turing architecture, it features 4608, 576 full-speed mixed precision Tensor Cores for accel

developer.nvidia.com

As for Compute Capability...

GPU                 Compute Capability
Quadro RTX 8000     7.5

The same as that card?

As an ARM firmware instructor, I know that digging into how exactly these things are connected would leave my life bruised, so I will just accept it and move on for now.

Hmm... I built gaming PCs for 15 years... then used only cloud machines, and now I am back on a personal PC, so all I can do is feel out the performance directly... Still, it is a 2080, so surely it will not mean endless waiting again.

https://www.ncc.re.kr/

 

National Cancer Center

This website is the official website of Korea's National Cancer Center.

www.ncc.re.kr

But I cannot find a proper catalogue of cancer types anywhere.

Searching the papers on DBpia,

윤담희, 이종훈, 손창규, 유화승, 이연월, 방선휘, 조종관, 이남헌, 조정효, LEE. (2006). 암의 종류 및 증상에 따른 삶의 질 평가 척도에 관한 연구 현황 분석. , 27(3), 555-560.

came up, but it cannot be downloaded.

Searching Google for the title instead,

https://www.jikm.or.kr/upload/pdf/200609/1%5b1%5d.pdf

The cancer types were not laid out in a table there.
From the body of the paper:

It can be used together with cancer-site-specific modules such as lung cancer (EORTC QLQ-LC14), breast cancer (EORTC QLQ-BR23), colorectal cancer (EORTC QLQ-CR38), esophageal cancer (EORTC QLQ-OES18), stomach cancer (EORTC QLQ-ST22), head and neck cancer (EORTC QLQ-H&N38), ovarian cancer (EORTC QLQ-OV28), cervical cancer (EORTC QLQ-CX24), and liver cancer (EORTC QLQ-HCC18).

breast cancer (FACT-B), bladder cancer (FACT-Bl), colorectal cancer (FACT-C), head and neck cancer (FACT-H&N), lung cancer (FACT-L), ovarian cancer (FACT-O), prostate cancer (FACT-P)

Judging from these keywords, that seems to be roughly the set of cancer types. Learning that "carcinoma" is the word for a cancerous mass made searching a bit easier. However, the link in the paper,

http://www.proqolid.org/

returns a 404 error. I was curious what it was.

Searching for "carcinoma" instead,

https://www.plusmedical.ro/en/articole/tratament-cancer-oncologie/?gclid=EAIaIQobChMIp_ntxc-X6gIVWqqWCh3aoQCQEAAYASAAEgJcSvD_BwE

 

Tratament cancer (oncologie) - PlusMedical

Liver transplantation consists of replacing the diseased liver, which can no longer perform its functions, with a healthy liver from either a deceased donor or a living person

www.plusmedical.ro

this page turned up.

 

The most common types of cancer

  • Oral cavity cancer
  • Lung cancer
  • Tumors on the brain
  • Skin cancer
  • Throat cancer (larynx)
  • Liver cancer
  • Bone tumors
  • Colorectal cancer
  • Lymphoma
  • Breast cancer
  • Bladder cancer
  • Stomach cancer
  • Pancreatic cancer
  • Pediatric oncology and hematology
  • Prostate cancer
  • Endometrial cancer (uterus)
  • Uterine cervical cancer
  • Thyroid cancer
  • Ovarian cancer

More types than I expected: 19.

And since these are only the "most common types", there must be even more, right?

A friend of mine once bought his parents a multi-million-won "filial piety" checkup at a university hospital, and it failed to find the cancer. Afterwards, even when symptoms appeared, they skipped a separate cancer screening because the checkup had "already been done"; it quickly progressed to a terminal stage and the parent eventually passed away. My friend prepared a lawsuit, but no other hospital would help, so in the end he gave up. He said that in itself shows how much medical malpractice there is in Korea.

Beyond that, it is well known that hospitals do not share data with one another, so a solution contracted with one hospital probably cannot be contracted with several. So while trying to obtain CT data from the internet instead,

I found a good open-source project called AutoCrawler, whose main developer is Korean.

 

So that is how it works.

Not knowing the answer, I left a question with an acquaintance: "No rush on this; just let me know whenever you have time. I am thinking about a solution where a CT scanner for cancer screening is installed in an apartment complex, a licensed radiographer living in the same complex takes the scan when an appointment is booked, and an AI screens it automatically before anyone goes to the hospital. Can that kind of medical device even be installed in an apartment complex, or is it only allowed in hospitals? My mother fully recovered after cancer surgery, but she avoids hospitals so much that it nags at me, so I want to spend a year trying to build this. Or, if the idea spreads and someone else builds it faster, even better, which is why I am leaving the question here on KakaoTalk. Have a good day!"

[ 30 seconds later... ]

"A CT scanner is radiation equipment, so separate legal requirements govern its installation and operation. And apart from registering the equipment, it cannot be used without a hospital or a doctor's diagnosis and prescription."

[ T_T ]

Hmm...

That is how it began. Since it is an impossible project, it is a perfect fit for a TOY PROJECT.

 

It should become a service where everyone wins: doctors, hospitals, patients, and the companies selling the related products.

First of all, cancer is said to be a common disease, so there seems to be real potential for making diagnosis available close to home.

Started June 23, 2020.

 

 

 

https://github.com/thedataincubator/data-science-blogs/blob/master/output/DL_libraries_final_Rankings.csv

 

thedataincubator/data-science-blogs

A Handful of D(u)S(t). Contribute to thedataincubator/data-science-blogs development by creating an account on GitHub.

github.com

 

Library    Rank    Overall    Github    Stack Overflow    Google Results

tensorflow 1 10.8676777173 4.25282914794 4.371905768 2.24294280139
keras 2 1.92768682345 0.613405340454 0.830444013135 0.483837469861
caffe 3 1.85536658344 1.00172325244 0.301598379669 0.552044951334
theano 4 0.757142065184 -0.156657475854 0.361637072631 0.552162468406
pytorch 5 0.481418742361 -0.198079135346 -0.30225967424 0.981757551946
sonnet 6 0.427865682184 -0.326074511957 -0.361634296039 1.11557449018
mxnet 7 0.0987996914674 0.121327235453 -0.306328604959 0.283801060973
torch 8 0.00559731666893 -0.153332101969 -0.00824393023136 0.167173348869
cntk 9 -0.0205203098963 0.0965088202554 -0.282173869559 0.165144739407
dlib 10 -0.599823512154 -0.39578194316 -0.223382454956 0.0193408859617
caffe2 11 -0.671062928351 -0.274071118159 -0.359648165565 -0.0373436446266
chainer 12 -0.70151841136 -0.400397905813 -0.234603397931 -0.0665171076164
paddlepaddle 13 -0.833003782881 -0.267123408237 -0.366884083295 -0.198996291348
deeplearning4j 14 -0.893319117931 -0.0575131634759 -0.321347169592 -0.514458784863
lasagne 15 -1.10606125475 -0.381150749139 -0.287853956451 -0.437056549158
bigdl 16 -1.12821350465 -0.458674544538 -0.367555905286 -0.301983054824
dynet 17 -1.25088837288 -0.465671394541 -0.367690269684 -0.417526708658
apache singa 18 -1.33963459336 -0.502246959001 -0.367824634082 -0.469563000276
nvidia digits 19 -1.39248467556 -0.407011549848 -0.346078273813 -0.639394851898
matconvnet 20 -1.41327975079 -0.487125591647 -0.346308395531 -0.579845763615
tflearn 21 -1.44982650865 -0.226089464016 -0.282710110548 -0.941026934086
nervana neon 22 -1.65176202195 -0.39497574163 -0.366989720498 -0.889796559818
opennn 23 -1.97015587693 -0.53381703821 -0.366068321175 -1.07027051754

 

"Engine" is a grand word, but it is really just a kind of API. For 3D engines Unreal seems to be the best, and for AI, TensorFlow, which has absorbed Keras, looks like it has unified the field. Having waited long enough while teaching AI courses, I plan to build this and that with it. All of it as TOY projects.

This is an [advertisement].

The purpose of writing is to share my thoughts and information about my experiences.

 

I used to think the real value of life was getting credit for what I did from someone I truly wanted to be recognized by. I don't think that has changed significantly even NOW, but there is one small change: the person I want to be recognized by has become myself. I know what I really want; like Diogenes with his sunlight, I feel the king can be ignored as long as I am free and comfortable.

 

 

http://m.yes24.com/Goods/Detail/61762

 

The Sunlight of Diogenes

This book is organized as episodes, so few difficult concepts or terms appear. Written as dialogue, it feels as if you are talking with the philosophers right beside you. Their lives...

m.yes24.com

I don't even have to use this blog; anyone can use other media with far greater ripple effects.

 

https://www.youtube.com/watch?v=lbKFh0xVxIM

 

Thanks to that, I met two good people fighting for justice, and they helped me a lot. They respected what I did, which became material for a book listing the BAD people. At my level of life, it is an honor just to know them.

Miss Oh earned her degree at Yonsei University Law School. It would be fine for her to concentrate on earning lots of money and an outstanding career at a place that pays, like KBS, MBC, or JTBC. I can almost hear her say,

"It's okay, as long as you are comfortable working with the best and nicest people."

I take every chance to meet her. She knows a lot, and whenever I meet her I learn something new about the humanities.

My philosophy of learning is to meet people and learn from them in person. It is because of people like these.

I am seeing Miss Oh again this weekend, and I expect to hear another good story.

 

Mr. Choi got a degree from Korea University and could have lived comfortably. COULD have. But did NOT, because he cannot bear it when he sees injustice.

However, he became a person who can genuinely speak of justice and truth to his children.
I consider that a better life than a rich man's with hundreds of trillions, because he has "nothing to hide", which becomes ever harder to keep as one grows older.

The KBS1 radio show bearing his name is also still on the air.

http://www.podbbang.com/ch/16839

 

[KBS] 최경영의 경제쇼

Radio 1, Mon-Fri 16:10-16:55. An economics talk show that goes for "fun" over knowledge!! Laugh along and the economic knowledge piles up bit by bit ^^

www.podbbang.com

 

I appeared on it on March 28, 2019; it has been over a year now. Both of them are public figures, so if you search for articles by reporters 오은지 and 최경영 you will find plenty of information worth knowing. At the very least it is all "truth", so you can treat it as "real news" from a completely neutral position.

Among my Samsung Electronics acquaintances there was a friend who told me over drinks that he had been hauled off in a Samsung van for union activity. When I said he could have phoned someone, he said his phone had been confiscated. When I suggested reporting it after he was released, he said that since they only took the phone and never laid a hand on him, there was nothing to charge them with. I remember what people tell me when it matters, so nine years after hearing this I thought it would be good to introduce him to reporter 최경영 and started arranging it. Partway through... I learned that everything he had told me was a lie.

The funny thing is that although it has been six years since I appeared on Newstapa, I never once mentioned it while seeing that acquaintance regularly, so he had no idea I had that kind of connection.

Something similar happened recently. I have known both A and B for over ten years but had never introduced them to each other. Then A proposed founding a company and B proposed founding a company, and since I had no intention of founding one myself, I introduced A and B. Yet most of what they did was suspiciously interrogate each other about how long we had each known one another.

It is not that I do not understand ripple effect. I wrote long ago that I have no ripple effect precisely so that the people who were harmed by Samsung and were cursing it would know it. To those asking how they should fight, I said that if they were willing to take the "Future Strategy Office list" and hold each person on it individually accountable, I would help. But they said those people are salaried workers just like us. It is not that the people cursing Samsung fail to throw stones at the key figures because they do not know how to fight properly. They are not cursing 이재용 as an individual; they treat him as Samsung's representative and curse the huge persona called Samsung in order to set it right. That is the Korean sense of jeong. And onlookers belittle that behavior with "tsk, they must be doing it for money." There is no need to explain legitimate protest to people who sneer like that. Like the Future Strategy Office list, ripple effect also needs targeting. I have never circulated the Future Strategy Office list, but it sits in my Google Drive. I will probably never log into Google Drive from a Samsung phone. That is why I use an iPhone.

I was not ignorant of ripple effect either, but I learned about it far more deeply thanks to reporters 최경영 and 오은지. The biggest lesson was "timing". I already knew, from movies, news, and online communities, about the timing at which news breaks, but only much later did I learn how crucial it is. The video in the 이건희 prostitution case was turned down for ten months by every media outlet except Newstapa, and once it was finally released, after even obtaining the consent of a state agency, it had an enormous ripple effect.

 

https://www.youtube.com/watch?v=jZMdXqa_Vko

It has already passed 10 million views. Current affairs is an unpopular genre, so in BTS-type video terms that is effectively more than a billion. It was possible only because it was Newstapa, and only because it was YouTube. After that, JTBC's 최순실 scandal broke. Probably nothing short of the 최순실 scandal could have buried this story, and so, big as it was, it got buried. JTBC belongs to 중앙일보, and 중앙일보 belongs to Samsung. That is why 손석희 stepped down once the 최순실 affair was wrapped up. From where I stood I only saw the ripple effects; there are no doubt many complicated stories behind it, but to my eyes it was fairly simple.

Having worked at Samsung for a full six years myself, I do fit the Samsung style: results-oriented.

https://www.mk.co.kr/news/society/view/2017/02/136013/

Samsung to quit government-relations work... this year's first-half group-wide open recruitment will be the last - 매일경제

Hiring will move to individual affiliates... a board-centered system of "autonomous affiliate management". Samsung is abolishing its "government relations" organization, which handled lobbying and petitions toward the government, the National Assembly, and local governments, and is withdrawing from that work...

www.mk.co.kr

It produced the result I wanted.

Thanks to that, I lost many acquaintances at Samsung. Of course, there were also people who figured contacting me would hurt them, then got back in touch once they learned that their group leader was a close friend of mine. And now that I have been living quietly, some of them have gone silent again.

 

Well, I always loved meeting people over drinks, but health aside, the funny truth is that whenever I did the right thing, the people close to me all drifted away and only righteous people like reporters 최경영 and 오은지 helped. Of course, there are many others I cannot mention here; I am naming those two only because they are public figures and we have stayed in touch for a long time.

It is also time to tell my daughter that a life like this exists: a life where, instead of immediate comfort and money, you take losses but pursue justice and truth, live uprightly, and keep an easy mind.

It may look like I have cursed Samsung a lot, but in truth the people who most want Samsung to change are Samsung Electronics employees themselves. Sometimes that cannot be changed from the inside and has to be changed by outside force. If I had done that work with no one needing it...

And since I have seen people spread the rumor that I was fired from Samsung because I say these things, let me put it on record: it is not that I quit Samsung and then did a film. And that film came before the movie 내부자들 (Inside Men) was released.

I am no match for people like 김구, 윤봉길, 안중근, or 김재규, who are so renowned that no honorific is needed, but in my own way I lived diligently and with pure intent. I also met plenty of crowds who tried to exploit that. Even though we cannot meet often, simply knowing reporters 최경영 and 오은지 and sharing genuinely human conversation is enough for me to have no regrets in life.

 

Anyone who has had to deal with a coworker they could not stand to share a meal or even a word with will probably relate.

The reason I write about these people in detail, and only belatedly, is that the ripple effect I know of

does not come from knowing other people. Not that it never does. There have been times when I happened to know a famous person these reporters wanted to meet and, because the need arose, arranged a meal together, but at my level that carries no ripple effect. You would have to be at the level of Xi Jinping's wife for knowing people to generate ripple effect, the kind that is both powerful and lasting. But I am not someone who dreams of succeeding in high society; I am just a pot-bellied developer who likes coding and envies a hikikomori-like life, so why did I do these things? Because it seemed that if I did not, truly no one would, so I strayed from my lane for a while. Everyone wants to get into Samsung Electronics, and from there into headquarters. I am probably the only person who voluntarily left headquarters with nothing in return. And if Samsung set its mind to it, ruining my life would be nothing. I think the reason nothing more happened after the insider revelations is what I said when I later met someone from Samsung's internal-security arm: that I hoped no one involved would be punished. Because there was another matter, besides this one, where everyone around knew and the evidence was secured, but punishment was only possible if I admitted it. Still, Samsung grew this big precisely because those people did the dirty work, and they are not people who would not have known it was dirty (well... they have the pedigree), so I said that as long as they finally acknowledged it and fixed it, I did not want anyone punished.

They say hate the sin, not the sinner. Honestly, I am not great like reporters 오은지 or 최경영; I am just a hikikomori developer, and I saw the 이건희 case as another casualty of how badly Korea's sexual culture has spread. Of course, I did recommend releasing the original footage... ^^;;; Even so, I find meaning in having added one fact at a time when 이건희's name was being assessed purely as a brilliant businessman. And being rich does not make a person free of shame, nor immune to hurt when others curse him, so I also hope the related material is taken down after he passes away.

Reality is hard, though. Working in many countries, I experienced first-hand that foreigners may not know Korea but they know Samsung, so making information about its key figure disappear is probably close to impossible. And a great many people want the truth; even the people who manufacture lies want to know the truth themselves, presumably so they can run their cons better.

 

I have just rambled about many things.

What I wanted to say is that doing what is right and bringing the truth to light is something an ordinary person like me can do too. And there is Newstapa.

https://newstapa.org/

NEWSTAPA | Korea Center for Investigative Journalism (KCIJ) | Independent media of the 99%

The Korea Center for Investigative Journalism / Newstapa is a non-profit, non-partisan, independent news organization for the 99% of citizens.

newstapa.org

Even so, it can never have BTS-level ripple effect. The important question is whether you want BTS-level reach, or whether you want to target the people who are searching for the truth and have it reach them.

Now, back to the topic of the blog.

 

Ripple effect comes down to:

1. The needs of the person who wants the ripple effect

2. The message that person wants to deliver

3. Targeting

That is all.

The person willing to pay money for items 1 and 2 is precisely the person who wants the ripple effect.

That is why Facebook and Google sell targeting. To target, they collect information: where you live, your gender, interests, and age group. And they analyze what people write in order to classify what kind of person each one is.

And because that method gets it wrong so often, I am filing this post about ripple effect under the "미래전략실" category.

 

More exposure raises the CPC. But if the exposure is targeted badly, the conversion rate drops, which means that, impression for impression, the ad was delivered to people who did not want to see it.

That is why Google's front page is nothing but a single search box: just find what you want. And through the add-on service called AdSense, it has bloggers do the targeting themselves. Of course it also analyzes cookies to match interests automatically... but it offers fine-grained control through ad review: a blogger specializing in medicine can approve only medical ads. There is also the technical goal of keeping traffic down, of course.

Unless visitors type the domain directly or come from a bookmark, most arrive via search. To get those search visitors to click ads, some people cunningly bury ads inside the text so they get clicked... it is truly the worst method, yet for people who live off ads alone this "scam" trick is seen as a necessary technique.

I know a developer who has since earned enough to drop such ads entirely, but back when he was poor he built his apps so that an ad would pop up suddenly and get clicked. He used to go around describing the trick of having an ad cover the OK button at the exact moment you press it. I was at a big company at the time and just thought, so that kind of life exists too... and now that I am planning to dig into blogging properly, those stories come back to me.

And I will have ripple effect. As I wrote in the blog history, I will write up the future plans after the work is done.

I know very well that Samsung Electronics is having a hard time, because many people around me work there. The bonuses ...

 

https://www.youtube.com/watch?v=eDe2SILQaro

With iPhones I kept buying unlocked units: 2,000,000 won (waiting on an overseas purchasing agent), then 1,800,000, 1,500,000, 1,300,000 won. So from my perspective I do understand this model of releasing phones nearly for free and putting ads in them.

But the brand image definitely takes a hit. Fashion houses like Chanel, Louis Vuitton, and Hermès live off brand image, and building that kind of brand image is genuinely hard and takes a very long time.

The Galaxy series is a work that countless people built while pulling all-nighters, losing their health, even losing relationships. Overseas, in certain countries, it carried an even more premium image than the iPhone.

Watching this video, I thought: in the end it is Samsung that takes the blame. So let me explain a few simple things.

 

First, the phone maker is the subordinate party ("eul"); the carrier is the one in power ("gap"). The iPhone is the exception, but every manufacturer is subordinate to whoever buys from it.

It is hard to notice that the preloaded apps differ by carrier, because one person rarely uses phones from several carriers. I learned it by using two phones on two carriers, and by building products for various carriers in Samsung Electronics' mobile division.

Each carrier gets different preloaded apps and different customization.

In other words, there is a better than 90% chance that Samsung did not put that ad there. I left Samsung long ago, so I do not know exactly how things have changed since, but the fact that the carrier holds the power will not have changed.

 

After phones are ordered, the carrier has a testing period called LAB ENTRY, which runs in a first and a second round, and through the final USER TEST stage the carrier buys tens of thousands of units at each step. That is why manufacturers are squeezed on schedule: the carrier has already lined up thousands of contract test users who are waiting, and depending on the carrier's size, slipping by a single day can mean a penalty of around a billion won.

Anyway, with a "gap" above it, the "eul" cannot build whatever it wants. An app I built alone was once released in the European market; if I could have done as I pleased, I would have bolted on AdColony, which was hot at the time, made tens of billions of won, and gone to prison. Or made a clean getaway.

The "gap" has its REQUIREMENTs, and the "eul" goes through countless stages when releasing not only hardware but software as well. Testing alone has many phases: QA, reliability, aging tests, outgoing inspection, and so on...

If you read my other posts about Samsung, you will see I am not blindly on Samsung's side.

If you want the ads gone, cursing Samsung is not the way. Just as I once said not to curse Samsung as a whole but to go after the Future Strategy Office people...

 

Criticize the carrier whose phones show the ads. Start a movement to switch to another carrier, and the preloaded-app ads will disappear.

Otherwise, the preloaded-app ads will only get smaller, not go away.

Well... by now Samsung's position is easy to picture. People curse the "eul". You need the "gap" to sell phones at all, yet the "gap" pockets the ad money and stands back with its hands clasped behind it while someone else takes the blame. And hey, the carrier is not a global company but Samsung is, so just absorb it with that big frame of yours~

Look closely. The person at KT who cuts that kind of deal and the person at Samsung who accepts it are surely tied by blood, region, school, or an after-work you-push-me-I-pull-you relationship.

Well, I could look into it myself, but I will not bother; it would wreck my life. I wish there were a fund that supported this kind of work.

Then you could pay informants to identify the people in charge and eventually learn the real story.

 

 

=-0=-=0=0=-0=-0=-0=-0=-0=-=-0=0=-0=-0=-

 

 

What I have realized, from my illusion of knowing a little more than others, is this:

even without any major pushback, if those ads bring the carrier big revenue and that revenue flows back to Samsung,

then Samsung will probably keep putting ads in the preloaded apps.

I put ads on my own blog, so I have no standing to curse them, but

I use an iPhone because I hate seeing ads.

So even though my readers see ads, and even if I cannot give them a premium brand image, I should at least try to write down knowledge that comes from hard-to-get experience.

It would be great if ads could actually become information people need. I will just have to hope AdSense targets them well.
