
Deep Learning: 3DUnet


Contents

1 TOOL
  1.1 Installation
  1.2 Python3 modules
  1.3 Image structure

2 TRAINING
  2.1 Algorithm
  2.2 Errors
    2.2.1 Running out of memory (OOM)
    2.2.2 Argument name
    2.2.3 Fixed

3 TRAINING RESULT



1 TOOL

1.1 Installation

• Installation with git

– Machine: fpglab
– User: upf
– cd /home/upf/
– git clone https://github.com/tkuanlun350/3DUnet-Tensorflow-Brats18.git

* Repository contents at /home/upf/3DUnet-Tensorflow-Brats18:
  config.py  data_loader.py  eval.py  gifs  preprocess.py  README.md  train.py
  custom_ops.py  data_sampler.py  generate_5fold.py  model.py  pycache  train_log  utils.py
* pip install scikit-image
* pip install --upgrade git+https://github.com/tensorp
* From the book "KERAS PYTHON DEEP LEARNING" → BUT IT IS NOT NECESSARY:
  pip install opencv-python

1.2 Python3 modules

• config.py : tool parameters

– BASEDIR = "/disco/biomedical/deep_learning_brain/" : folder containing the images

• train.py : training algorithm
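A minimal sketch of how this configuration is consumed; only BASEDIR comes from config.py, while TRAIN_DIR, HGG_DIR and LGG_DIR are illustrative names assumed here, not taken from the repository:

```python
# Sketch: config.py exposes BASEDIR and other modules derive paths from it.
# BASEDIR is the documented value; the derived names below are assumptions.
import os

BASEDIR = "/disco/biomedical/deep_learning_brain/"

# Derived locations, following the data layout shown in section 1.3.
TRAIN_DIR = os.path.join(BASEDIR, "training")
HGG_DIR = os.path.join(TRAIN_DIR, "HGG")
LGG_DIR = os.path.join(TRAIN_DIR, "LGG")
```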

1.3 Image structure

• File structure for the images:

– ls /disco/biomedical/deep_learning_brain/training
/disco/biomedical/deep_learning_brain/training
├── HGG
│   ├── BraTS19_2013_10_1
│   │   ├── flair.nii.gz
│   │   ├── t1ce.nii.gz
│   │   ├── t1.nii.gz
│   │   ├── t2.nii.gz
│   │   └── truth.nii.gz
└── LGG
    ├── BraTS19_2013_0_1
    │   ├── flair.nii.gz
    │   ├── t1ce.nii.gz
    │   ├── t1.nii.gz
    │   ├── t2.nii.gz
    │   └── truth.nii.gz
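A quick way to confirm that every subject folder matches this layout is to walk the tree and report missing modality files; this checker is our own sketch, not part of the repository:

```python
# Sketch: verify that each HGG/LGG subject folder contains the five
# expected NIfTI files from the layout above. Hypothetical helper.
import os

MODALITIES = ["flair.nii.gz", "t1ce.nii.gz", "t1.nii.gz", "t2.nii.gz", "truth.nii.gz"]

def missing_files(training_dir):
    """Return {subject_path: [missing file names]} for all subjects."""
    problems = {}
    for grade in ("HGG", "LGG"):
        grade_dir = os.path.join(training_dir, grade)
        if not os.path.isdir(grade_dir):
            continue
        for subject in sorted(os.listdir(grade_dir)):
            subject_dir = os.path.join(grade_dir, subject)
            missing = [f for f in MODALITIES
                       if not os.path.isfile(os.path.join(subject_dir, f))]
            if missing:
                problems[subject_dir] = missing
    return problems
```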

2 TRAINING

2.1 Algorithm

Training script train.py from https://github.com/tkuanlun350/3DUnet-Tensorflow-Brats18 :

python3 train.py --logdir=./train_log/unet3d --gpu 0
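As a rough sketch of this command-line interface (the real parser lives inside the repository's train.py; this mirror of its --logdir and --gpu flags is an assumption made here for illustration):

```python
# Hypothetical mirror of the train.py flags used in the command above.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="3DUnet training (sketch)")
    parser.add_argument("--logdir", default="./train_log/unet3d",
                        help="directory for checkpoints and log.log")
    parser.add_argument("--gpu", default=None,
                        help="comma-separated GPU ids, e.g. '0'")
    return parser.parse_args(argv)
```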

2.2 Errors

2.2.1 Running out of memory (OOM)

• Training exhausts the system RAM and the operating system kills the training process →
KILLED
• SOLUTION: change the parameters in config.py, or REDUCE THE NUMBER OF SUBJECTS (patients)

– Reduced HGG to 20 subjects and LGG to 10
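The subject reduction can be sketched as a small helper that keeps only the first N subjects per grade; how the repository actually limits subjects (a config value vs. pruning folders) is an assumption here:

```python
# Sketch: select the first 20 HGG and 10 LGG subject folders, as in the
# OOM workaround above. Hypothetical helper, not part of the repo.
import os

def pick_subjects(training_dir, n_hgg=20, n_lgg=10):
    """Return the subject folders to train on, sorted for reproducibility."""
    keep = []
    for grade, n in (("HGG", n_hgg), ("LGG", n_lgg)):
        grade_dir = os.path.join(training_dir, grade)
        subjects = sorted(os.listdir(grade_dir)) if os.path.isdir(grade_dir) else []
        keep.extend(os.path.join(grade_dir, s) for s in subjects[:n])
    return keep
```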

2.2.2 Argument name

TypeError: get_data_format() got an unexpected keyword argument 'tfmode'

vi custom_ops.py

@layer_register()
def InstanceNorm5d(x, epsilon=1e-5, use_affine=True, gamma_init=None, data_format='channels_last'):

Reproducing the error in an interactive python3 session:

>>> from tensorpack.utils.argtools import get_data_format
>>> data_format = 'channels_last'
>>> get_data_format(data_format, tfmode=False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: get_data_format() got an unexpected keyword argument 'tfmode'

dir() → list the imported names. import inspect → https://docs.python.org/3/library/inspect.html
inspect.getsource(get_data_format) → retrieves the source code of get_data_format:

def get_data_format(data_format, keras_mode=True):
    if keras_mode:
        dic = {'NCHW': 'channels_first', 'NHWC': 'channels_last'}
    else:
        dic = {'channels_first': 'NCHW', 'channels_last': 'NHWC'}
    ret = dic.get(data_format, data_format)
    if ret not in dic.values():
        raise ValueError("Unknown data_format: {}".format(data_format))
    return ret

THE ERROR IS THE ARGUMENT NAME: in the function prototype the second argument is "keras_mode", while the call passes "tfmode".
Where is get_data_format defined? get_data_format.__globals__['__file__'] → /home/upf/.local/lib/python3.6/site-packages/tensorpack
EDIT: in custom_ops.py, in the call get_data_format(data_format, tfmode=False), change the argument tfmode to keras_mode.
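The fix can be verified against a standalone copy of the function (reproduced from the source recovered with inspect.getsource; the check at the end is ours):

```python
# Standalone copy of tensorpack's get_data_format, as recovered above,
# used to confirm that the corrected keyword works.
def get_data_format(data_format, keras_mode=True):
    if keras_mode:
        dic = {'NCHW': 'channels_first', 'NHWC': 'channels_last'}
    else:
        dic = {'channels_first': 'NCHW', 'channels_last': 'NHWC'}
    ret = dic.get(data_format, data_format)
    if ret not in dic.values():
        raise ValueError("Unknown data_format: {}".format(data_format))
    return ret

# The corrected call from custom_ops.py now resolves:
print(get_data_format('channels_last', keras_mode=False))  # NHWC
```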

2.2.3 Fixed

With these changes in place, training runs.

3 TRAINING RESULT

• 20 subjects were used in HGG and 10 in LGG: the subjects BraTS19_2013_*

• Information about the training is in train_log/unet3d/log.log
• upf@fpgalab:~/3DUnet-Tensorflow-Brats18$ cat train_log/unet3d/log.log
[0525 06:11:27 @logger.py:92] Argv: train.py --logdir=./train_log/unet3d --gpu 0
[0525 06:11:27 @gpu.py:43] WRN Found non-empty CUDA_VISIBLE_DEVICES. But TensorFlow was ←-
not built with CUDA support and could not use GPUs!
[0525 06:11:50 @parallel.py:340] [MultiProcessRunnerZMQ] Will fork a dataflow more than ←-
one times. This assumes the datapoints are i.i.d.
[0525 06:11:50 @gpu.py:43] WRN Found non-empty CUDA_VISIBLE_DEVICES. But TensorFlow was ←-
not built with CUDA support and could not use GPUs!
[0525 06:11:50 @input_source.py:221] Setting up the queue 'QueueInput/input_queue' for CPU ←-
prefetching ...
[0525 06:11:50 @training.py:108] Building graph for training tower 0 on device /gpu:0 ...
[0525 06:11:52 @registry.py:90] 'unet3d': [1, 4, 128, 128, 128] --> [1, 128, 128, 128, 4]
[0525 06:11:53 @regularize.py:97] regularize_cost() found 30 variables to regularize.
[0525 06:11:53 @regularize.py:21] The following tensors will be regularized: unet3d/ ←-
init_conv/kernel:0, unet3d/down0_conv_0/kernel:0, unet3d/down0_conv_1/kernel:0, unet3d ←-
/stride2conv0/kernel:0, unet3d/down1_conv_0/kernel:0, unet3d/down1_conv_1/kernel:0, ←-
unet3d/stride2conv1/kernel:0, unet3d/down2_conv_0/kernel:0, unet3d/down2_conv_1/kernel ←-
:0, unet3d/stride2conv2/kernel:0, unet3d/down3_conv_0/kernel:0, unet3d/down3_conv_1/ ←-
kernel:0, unet3d/stride2conv3/kernel:0, unet3d/down4_conv_0/kernel:0, unet3d/ ←-
down4_conv_1/kernel:0, unet3d/up_conv1_3/kernel:0, unet3d/lo_conv0_3/kernel:0, unet3d/ ←-
lo_conv1_3/kernel:0, unet3d/up_conv1_2/kernel:0, unet3d/lo_conv0_2/kernel:0, unet3d/ ←-
lo_conv1_2/kernel:0, unet3d/deep_super_2/kernel:0, unet3d/up_conv1_1/kernel:0, unet3d/ ←-
lo_conv0_1/kernel:0, unet3d/lo_conv1_1/kernel:0, unet3d/deep_super_1/kernel:0, unet3d/ ←-
up_conv1_0/kernel:0, unet3d/lo_conv0_0/kernel:0, unet3d/lo_conv1_0/kernel:0, unet3d/ ←-
final/kernel:0
[0525 06:11:56 @model_utils.py:67] List of Trainable Variables:
name shape #elements
---------------------------------- ------------------- -----------
unet3d/init_conv/kernel [3, 3, 3, 4, 16] 1728
unet3d/init_conv/bias [16] 16
unet3d/init_conv/ins_norm/beta [16] 16
unet3d/init_conv/ins_norm/gamma [16] 16
unet3d/down0_conv_0/kernel [3, 3, 3, 16, 16] 6912
unet3d/down0_conv_0/bias [16] 16
unet3d/down0_conv_0/ins_norm/beta [16] 16
unet3d/down0_conv_0/ins_norm/gamma [16] 16
unet3d/down0_conv_1/kernel [3, 3, 3, 16, 16] 6912
unet3d/down0_conv_1/bias [16] 16
unet3d/down0_conv_1/ins_norm/beta [16] 16
unet3d/down0_conv_1/ins_norm/gamma [16] 16
unet3d/stride2conv0/kernel [3, 3, 3, 16, 32] 13824
unet3d/stride2conv0/bias [32] 32
unet3d/stride2conv0/ins_norm/beta [32] 32
unet3d/stride2conv0/ins_norm/gamma [32] 32
unet3d/down1_conv_0/kernel [3, 3, 3, 32, 32] 27648
unet3d/down1_conv_0/bias [32] 32
unet3d/down1_conv_0/ins_norm/beta [32] 32
unet3d/down1_conv_0/ins_norm/gamma [32] 32
unet3d/down1_conv_1/kernel [3, 3, 3, 32, 32] 27648
unet3d/down1_conv_1/bias [32] 32
unet3d/down1_conv_1/ins_norm/beta [32] 32
unet3d/down1_conv_1/ins_norm/gamma [32] 32
unet3d/stride2conv1/kernel [3, 3, 3, 32, 64] 55296
unet3d/stride2conv1/bias [64] 64
unet3d/stride2conv1/ins_norm/beta [64] 64
unet3d/stride2conv1/ins_norm/gamma [64] 64
unet3d/down2_conv_0/kernel [3, 3, 3, 64, 64] 110592
unet3d/down2_conv_0/bias [64] 64
unet3d/down2_conv_0/ins_norm/beta [64] 64
unet3d/down2_conv_0/ins_norm/gamma [64] 64
unet3d/down2_conv_1/kernel [3, 3, 3, 64, 64] 110592
unet3d/down2_conv_1/bias [64] 64
unet3d/down2_conv_1/ins_norm/beta [64] 64
unet3d/down2_conv_1/ins_norm/gamma [64] 64
unet3d/stride2conv2/kernel [3, 3, 3, 64, 128] 221184
unet3d/stride2conv2/bias [128] 128
unet3d/stride2conv2/ins_norm/beta [128] 128
unet3d/stride2conv2/ins_norm/gamma [128] 128
unet3d/down3_conv_0/kernel [3, 3, 3, 128, 128] 442368
unet3d/down3_conv_0/bias [128] 128
unet3d/down3_conv_0/ins_norm/beta [128] 128
unet3d/down3_conv_0/ins_norm/gamma [128] 128
unet3d/down3_conv_1/kernel [3, 3, 3, 128, 128] 442368
unet3d/down3_conv_1/bias [128] 128
unet3d/down3_conv_1/ins_norm/beta [128] 128
unet3d/down3_conv_1/ins_norm/gamma [128] 128
unet3d/stride2conv3/kernel [3, 3, 3, 128, 256] 884736
unet3d/stride2conv3/bias [256] 256
unet3d/stride2conv3/ins_norm/beta [256] 256
unet3d/stride2conv3/ins_norm/gamma [256] 256
unet3d/down4_conv_0/kernel [3, 3, 3, 256, 256] 1769472
unet3d/down4_conv_0/bias [256] 256
unet3d/down4_conv_0/ins_norm/beta [256] 256
unet3d/down4_conv_0/ins_norm/gamma [256] 256
unet3d/down4_conv_1/kernel [3, 3, 3, 256, 256] 1769472
unet3d/down4_conv_1/bias [256] 256
unet3d/down4_conv_1/ins_norm/beta [256] 256
unet3d/down4_conv_1/ins_norm/gamma [256] 256
unet3d/up_conv1_3/kernel [3, 3, 3, 256, 128] 884736
unet3d/up_conv1_3/bias [128] 128
unet3d/up_conv1_3/ins_norm/beta [128] 128
unet3d/up_conv1_3/ins_norm/gamma [128] 128
unet3d/lo_conv0_3/kernel [3, 3, 3, 256, 128] 884736
unet3d/lo_conv0_3/bias [128] 128
unet3d/lo_conv0_3/ins_norm/beta [128] 128
unet3d/lo_conv0_3/ins_norm/gamma [128] 128
unet3d/lo_conv1_3/kernel [1, 1, 1, 128, 128] 16384
unet3d/lo_conv1_3/bias [128] 128
unet3d/lo_conv1_3/ins_norm/beta [128] 128
unet3d/lo_conv1_3/ins_norm/gamma [128] 128
unet3d/up_conv1_2/kernel [3, 3, 3, 128, 64] 221184
unet3d/up_conv1_2/bias [64] 64
unet3d/up_conv1_2/ins_norm/beta [64] 64
unet3d/up_conv1_2/ins_norm/gamma [64] 64
unet3d/lo_conv0_2/kernel [3, 3, 3, 128, 64] 221184
unet3d/lo_conv0_2/bias [64] 64
unet3d/lo_conv0_2/ins_norm/beta [64] 64
unet3d/lo_conv0_2/ins_norm/gamma [64] 64
unet3d/lo_conv1_2/kernel [1, 1, 1, 64, 64] 4096
unet3d/lo_conv1_2/bias [64] 64
unet3d/lo_conv1_2/ins_norm/beta [64] 64
unet3d/lo_conv1_2/ins_norm/gamma [64] 64
unet3d/deep_super_2/kernel [1, 1, 1, 64, 4] 256
unet3d/deep_super_2/bias [4] 4
unet3d/up_conv1_1/kernel [3, 3, 3, 64, 32] 55296
unet3d/up_conv1_1/bias [32] 32
unet3d/up_conv1_1/ins_norm/beta [32] 32
unet3d/up_conv1_1/ins_norm/gamma [32] 32
unet3d/lo_conv0_1/kernel [3, 3, 3, 64, 32] 55296
unet3d/lo_conv0_1/bias [32] 32
unet3d/lo_conv0_1/ins_norm/beta [32] 32
unet3d/lo_conv0_1/ins_norm/gamma [32] 32
unet3d/lo_conv1_1/kernel [1, 1, 1, 32, 32] 1024
unet3d/lo_conv1_1/bias [32] 32
unet3d/lo_conv1_1/ins_norm/beta [32] 32
unet3d/lo_conv1_1/ins_norm/gamma [32] 32
unet3d/deep_super_1/kernel [1, 1, 1, 32, 4] 128
unet3d/deep_super_1/bias [4] 4
unet3d/up_conv1_0/kernel [3, 3, 3, 32, 16] 13824
unet3d/up_conv1_0/bias [16] 16
unet3d/up_conv1_0/ins_norm/beta [16] 16
unet3d/up_conv1_0/ins_norm/gamma [16] 16
unet3d/lo_conv0_0/kernel [3, 3, 3, 32, 16] 13824
unet3d/lo_conv0_0/bias [16] 16
unet3d/lo_conv0_0/ins_norm/beta [16] 16
unet3d/lo_conv0_0/ins_norm/gamma [16] 16
unet3d/lo_conv1_0/kernel [1, 1, 1, 16, 16] 256
unet3d/lo_conv1_0/bias [16] 16
unet3d/lo_conv1_0/ins_norm/beta [16] 16
unet3d/lo_conv1_0/ins_norm/gamma [16] 16
unet3d/final/kernel [1, 1, 1, 16, 4] 64
unet3d/final/bias [4] 4
Number of trainable variables: 114
Number of parameters (elements): 8269676
Storage space needed for all trainable variables: 31.55MB
[0525 06:11:56 @base.py:207] Setup callbacks graph ...
[0525 06:11:56 @argtools.py:138] WRN Starting a process with 'fork' method is efficient ←-
but not safe and may cause deadlock or crash.Use 'forkserver' or 'spawn' method ←-
instead if you run into such issues.See https://docs.python.org/3/library/ ←-
multiprocessing.html#contexts-and-start-methods on how to set them.
[0525 06:11:56 @argtools.py:138] WRN "import prctl" failed! Install python-prctl so that ←-
processes can be cleaned with guarantee.
[0525 06:11:56 @summary.py:47] [MovingAverageSummary] 2 operations in collection ' ←-
MOVING_SUMMARY_OPS' will be run with session hooks.
[0525 06:11:56 @summary.py:94] Summarizing collection 'summaries' of size 4.
[0525 06:11:56 @base.py:228] Creating the session ...
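The parameter counts in the log can be sanity-checked by hand: a tensor of shape [k, k, k, c_in, c_out] has k³·c_in·c_out elements. A quick check against a few of the shapes listed above:

```python
# Verify some of the "#elements" values in the log from the listed shapes.
from functools import reduce
from operator import mul

def n_elements(shape):
    """Number of weights in a tensor of the given shape."""
    return reduce(mul, shape, 1)

assert n_elements([3, 3, 3, 4, 16]) == 1728        # unet3d/init_conv/kernel
assert n_elements([3, 3, 3, 256, 256]) == 1769472  # unet3d/down4_conv_0/kernel
assert n_elements([1, 1, 1, 16, 4]) == 64          # unet3d/final/kernel
```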
