
Ch5 Transfer Learning

Training a deep-learning model properly requires a large amount of data, but building a sufficiently large dataset costs time and money. Transfer learning is a way around this: take the weights of a model trained on a large dataset such as ImageNet and adapt them to the task at hand. With a pre-trained model, the target task can be solved with relatively little data.

(figure omitted)

There are two approaches to transfer learning: feature extraction and fine-tuning.

5.3.1 Feature Extraction

Feature extraction takes a model pre-trained on the ImageNet dataset and rebuilds only the final fully connected (FC) layer, the part that decides the image's category. Only that FC layer is trained; the rest of the network stays frozen.

  • Convolutional layers: extract features from the data (the feature extractor)
  • Data classifier: takes the extracted features as input and classifies the image

So new data is passed through the pre-trained model's convolutional layers (weights frozen), and only the data classifier is trained on their output. Image-classification models commonly used for this include:

• Xception

• Inception V3

• ResNet50

• VGG16

• VGG19

• MobileNet
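
Most of these are exposed through `torchvision.models` (Xception is the exception: it is not part of torchvision). A minimal sketch of loading a few of them, using the same legacy `pretrained` flag as the rest of this post:

```python
import torchvision.models as models

# each call downloads the ImageNet weights on first use
vgg16 = models.vgg16(pretrained=True)
resnet50 = models.resnet50(pretrained=True)
inception = models.inception_v3(pretrained=True)
mobilenet = models.mobilenet_v2(pretrained=True)
```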

(figure omitted)

  1. Import the required libraries
```python
import os
import time
import copy
import glob
import cv2      # OpenCV library
import shutil

import torch
import torchvision                           # computer-vision package
import torchvision.transforms as transforms  # data-preprocessing utilities
import torchvision.models as models          # pre-defined network architectures
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader

import matplotlib.pyplot as plt
```
  2. Pre-process the image data used in the example

The process of downloading the dataset from Kaggle:

```bash
!pip install kaggle --upgrade
```

```
Requirement already satisfied: kaggle in /usr/local/lib/python3.10/dist-packages (1.5.16)
... (dependency lines omitted)
```
```python
from google.colab import files
files.upload()
```

```bash
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
```

```bash
!ls -1ha kaggle.json
```

```bash
!kaggle competitions download -c dogs-vs-cats
!unzip dogs-vs-cats.zip
!unzip test1.zip
!unzip train.zip
```

In this example, we will use the file uploaded directly instead.

```python
from google.colab import files  # upload the data
file_uploaded = files.upload()  # select the chap05/data/catanddog.zip file
```

```
Saving catanddog (1).zip to catanddog (1).zip
```

```bash
!unzip catanddog.zip -d catanddog/
```
```
Archive:  catanddog.zip
   creating: catanddog/test/
   creating: catanddog/test/Cat/
  inflating: catanddog/test/Cat/8100.jpg
  ...
  inflating: catanddog/test/Cat/8148.jpg
   creating: catanddog/test/Dog/
  inflating: catanddog/test/Dog/8100.jpg
  ...
  inflating: catanddog/test/Dog/8148.jpg
   creating: catanddog/train/
   creating: catanddog/train/Cat/
  inflating: catanddog/train/Cat/0.jpg
  ...
   creating: catanddog/train/Dog/
  inflating: catanddog/train/Dog/0.jpg
  ...
```

(unzip listing truncated: 98 test images and 385 training images in total)
```python
data_path = './catanddog/train'  # path where the data is stored

# transforms convert the images into model-ready input
transform = transforms.Compose(
    [
        transforms.Resize([256, 256]),
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor()  # convert the image data to tensors
    ]
)

# the dataset defines what the DataLoader loads (the path) and how (the preprocessing)
train_dataset = torchvision.datasets.ImageFolder(data_path, transform=transform)
# wrap the ImageFolder and choose batch_size, shuffling, etc.
train_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=32,
    num_workers=8,  # number of subprocesses used to load the data
    shuffle=True
)

print(len(train_dataset))
```
```
385

/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
```

+) One thing I hadn't known

ImageFolder treats each subfolder name as a label and assigns class indices in order (0, 1, 2, …). It expects a layout like this:

```
data_path/
    class1/
        image1.jpg
        image2.jpg
        ...
    class2/
        image1.jpg
        image2.jpg
        ...
    ...
```
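
A quick way to verify the folder-to-index mapping (a small check of my own, not in the original notebook):

```python
# ImageFolder sorts the class folders alphabetically: Cat -> 0, Dog -> 1
print(train_dataset.classes)       # ['Cat', 'Dog']
print(train_dataset.class_to_idx)  # {'Cat': 0, 'Dog': 1}
```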
```python
import numpy as np

# iter() and next() are needed to step through the DataLoader manually
# samples, labels = iter(train_loader).next()  # this form no longer works in recent PyTorch
data_iter = iter(train_loader)
samples, labels = next(data_iter)  # fetch one batch: samples and labels each hold 32 items
classes = {0: 'cat', 1: 'dog'}
fig = plt.figure(figsize=(16, 24))  # create a new figure (16 inches wide, 24 inches tall)
for i in range(24):
    a = fig.add_subplot(4, 6, i + 1)  # 4x6 grid of subplots; i+1 is the subplot index
    a.set_title(classes[labels[i].item()])  # map the label value (0 or 1) to a class name for the title
    a.axis('off')  # hide the axes and show only the image
    # PyTorch image tensors are (C, H, W) but matplotlib's imshow expects (H, W, C),
    # so swap the dimensions with np.transpose
    a.imshow(np.transpose(samples[i].numpy(), (1, 2, 0)))  # show the i-th sample of the current mini-batch
plt.subplots_adjust(bottom=0.2, top=0.6, hspace=0)  # adjust subplot positions and spacing
```

(sample-image grid omitted)

+) An iterator is an object that can return the next value in sequence, i.e., an object that supports the `next` method.

An iterable is an object whose elements can be returned one at a time (lists, tuples, and anything else a for loop can walk over).

So in the example above, iter() turns the iterable DataLoader into an iterator, and next() pulls batches from it.
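
A minimal illustration of the same idea with a plain list:

```python
nums = [10, 20, 30]  # a list is iterable, but not itself an iterator
it = iter(nums)      # iter() builds an iterator from the iterable
print(next(it))      # 10
print(next(it))      # 20
print(next(it))      # 30 -- one more next(it) would raise StopIteration
```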

```python
# np.transpose()
print(samples.shape)     # shape of one whole batch
print(samples[1].shape)  # shape of a single sample within the batch
p = np.transpose(samples[1].numpy(), (1, 2, 0))  # conversion needed for matplotlib's imshow
print(p.shape)           # shape after transposing the dimensions
```
```
torch.Size([32, 3, 224, 224])
torch.Size([3, 224, 224])
(224, 224, 3)
```

Now that the data is ready, let's load a pre-trained ResNet18 model.

```python
resnet18 = models.resnet18(pretrained=True)
```
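
Note: newer torchvision releases (0.13+) deprecate the `pretrained` flag in favor of explicit weight enums; the equivalent call would be:

```python
from torchvision.models import ResNet18_Weights

# same ImageNet weights, new-style API (torchvision >= 0.13)
resnet18 = models.resnet18(weights=ResNet18_Weights.DEFAULT)
```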

We use the loaded ResNet18's convolutional layers, but freeze their parameters so they are not updated during training.

(figure omitted)

```python
def set_parameter_requires_grad(model, feature_extracting=True):
    if feature_extracting:
        for param in model.parameters():  # iterate over the model's parameters one by one
            param.requires_grad = False   # freeze them so they are not trained

set_parameter_requires_grad(resnet18)
```
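
As a sanity check (my addition, not in the book), count how many parameters are still trainable:

```python
n_trainable = sum(p.numel() for p in resnet18.parameters() if p.requires_grad)
print(n_trainable)  # 0 -- everything is frozen until a new FC layer is added below
```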

Now, finally, replace ResNet18's FC layer so the model performs binary classification:

```python
# resnet18.fc is ResNet18's fully connected layer
resnet18.fc = nn.Linear(512, 2)  # 512 is the FC layer's input dimension, 2 the output dimension
```

Printing the parameters whose `param.requires_grad` is True, i.e., the trainable FC-layer parameters, gives the following (a freshly created `nn.Linear` is trainable by default):

```python
for name, param in resnet18.named_parameters():
    if param.requires_grad:
        print('name:', name, '/data:', param.data)
```
```
name: fc.weight /data: tensor([[ 0.0099,  0.0340,  0.0292,  ...,  0.0397, -0.0191,  0.0128],
        [ 0.0381, -0.0113,  0.0061,  ...,  0.0014, -0.0304, -0.0157]])
name: fc.bias /data: tensor([-0.0393, -0.0140])
```

We have seen how to keep the FC layer trainable while freezing the feature extractor; now let's write the code to actually use it.

```python
model = models.resnet18(pretrained=True)  # create the model object

for param in model.parameters():  # freeze the convolutional-layer weights
    param.requires_grad = False

model.fc = torch.nn.Linear(512, 2)
for param in model.fc.parameters():  # the FC layer is trained
    param.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters())  # let the optimizer update the FC-layer weights
cost = torch.nn.CrossEntropyLoss()  # loss function
print(model)
```
```
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=2, bias=True)
)
```

With the model ready, let's create the training function.

```python
def train_model(model,
                dataloaders,
                criterion,
                optimizer,
                device,
                num_epochs=13,
                is_train=True
                ):
    since = time.time()  # record the current time
    acc_history = []     # list that stores the per-epoch accuracy
    loss_history = []    # list that stores the per-epoch loss
    best_acc = 0.0       # variable that stores the best accuracy

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))  # show the current epoch
        print('-' * 10)

        running_loss = 0.0
        running_corrects = 0

        for inputs, labels in dataloaders:
            inputs = inputs.to(device)  # move the input images to the GPU
            labels = labels.to(device)

            model.to(device)  # the model and the data must live on the same device (GPU)
            optimizer.zero_grad()  # reset the gradients to zero
            outputs = model(inputs)  # forward propagation; outputs holds the per-class scores
            # CrossEntropyLoss applies softmax internally, so raw (even negative) scores are fine
            loss = criterion(outputs, labels)

            # torch.max(outputs, 1) returns the maximum values along dim=1 together with
            # their indices; '_, preds' keeps only the indices, i.e., the predicted classes
            _, preds = torch.max(outputs, 1)

            loss.backward()   # back propagation: compute each parameter's influence on the loss
            optimizer.step()  # adjust the parameters to optimize the loss function

            # Tensor.size(0) is the size of dimension 0; with batch size 32 a batch is 32xCxHxW, so size(0) is 32
            running_loss += loss.item() * inputs.size(0)  # loss is the batch average, so multiply by the batch size
            running_corrects += torch.sum(preds == labels.data)  # add 1 for each prediction that matches the label

        epoch_loss = running_loss / len(dataloaders.dataset)  # average loss over the whole dataset after the epoch
        epoch_acc = running_corrects.double() / len(dataloaders.dataset)  # average accuracy

        print('Loss:{:.4f} Acc:{:.4f}'.format(epoch_loss, epoch_acc))  # format spec: {position:width.precision}

        if epoch_acc > best_acc:
            best_acc = epoch_acc

        acc_history.append(epoch_acc.item())
        loss_history.append(epoch_loss)
        torch.save(model.state_dict(), os.path.join('./my_ckpt/', '{0:0=2d}.pth'.format(epoch)))  # save a checkpoint

        print()

    time_elapsed = time.time() - since  # compute the elapsed (training) time
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best Acc: {:4f}'.format(best_acc))
    return acc_history, loss_history
```
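
One pitfall worth flagging: `torch.save` does not create missing directories, so `./my_ckpt/` must exist before the first checkpoint is written. A one-line guard (my addition, not in the book):

```python
os.makedirs('./my_ckpt', exist_ok=True)  # torch.save fails if the target directory is missing
```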

The training code is complete. Now let's actually train the model.

```python
params_to_update = []
for name, param in model.named_parameters():
    if param.requires_grad:
        params_to_update.append(param)  # collect the parameters that will be trained
        print("\t", name)

optimizer = optim.Adam(params_to_update)  # hand only those parameters to the optimizer
```
```
	 fc.weight
	 fc.bias
```

The output confirms that only the FC layer's parameters will be trained. Now let's train the model.

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
criterion = nn.CrossEntropyLoss()
train_acc_hist, train_loss_hist = train_model(model, train_loader, criterion, optimizer, device)  # 'model' is the ResNet18 built above
```
```
Epoch 0/12
----------


/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(


Loss:0.5743 Acc:0.6857

Epoch 1/12
----------
Loss:0.3762 Acc:0.8416

Epoch 2/12
----------
Loss:0.3174 Acc:0.8545

Epoch 3/12
----------
Loss:0.2587 Acc:0.8935

Epoch 4/12
----------
Loss:0.2533 Acc:0.9065

Epoch 5/12
----------
Loss:0.2895 Acc:0.8545

Epoch 6/12
----------
Loss:0.2878 Acc:0.8727

Epoch 7/12
----------
Loss:0.2395 Acc:0.8883

Epoch 8/12
----------
Loss:0.2635 Acc:0.9013

Epoch 9/12
----------
Loss:0.2480 Acc:0.8857

Epoch 10/12
----------
Loss:0.1860 Acc:0.9325

Epoch 11/12
----------
Loss:0.1450 Acc:0.9662

Epoch 12/12
----------
Loss:0.1689 Acc:0.9351

Training complete in 0m 34s
Best Acc: 0.966234
```

The model reaches about 96% training accuracy. Now let's evaluate it on the test set.

```python
test_path = './catanddog/test'

transform = transforms.Compose(
    [
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]
)

test_dataset = torchvision.datasets.ImageFolder(
    root=test_path,
    transform=transform
)
test_loader = torch.utils.data.DataLoader(
    test_dataset,
    batch_size=32,
    num_workers=1,
    shuffle=True
)

print(len(test_dataset))
```
```
98
```

The code above builds the test_loader and shows that the test dataset holds 98 images. Next, let's write the evaluation code.

```python
def eval_model(model, dataloaders, device):
    since = time.time()
    acc_history = []
    best_acc = 0.0

    # glob extracts the files you want from a directory
    saved_models = glob.glob('./my_ckpt/' + '*.pth')  # collect the files with a .pth extension
    saved_models.sort()  # sort the loaded .pth files
    print('saved_model', saved_models)

    for model_path in saved_models:
        print('Loading model', model_path)

        # torch.load reads the .pth file; load_state_dict copies the parameters into the model
        model.load_state_dict(torch.load(model_path))
        model.eval()  # switch the model to evaluation mode
        model.to(device)
        running_corrects = 0

        for inputs, labels in dataloaders:
            inputs = inputs.to(device)
            labels = labels.to(device)

            # with torch.no_grad() disables autograd;
            # during evaluation there is no back propagation, so gradients are not needed
            with torch.no_grad():
                outputs = model(inputs)

            _, preds = torch.max(outputs.data, 1)
            # preds already holds class indices, so threshold lines such as the book's
            # preds[preds >= 0.5] = 1 / preds[preds < 0.5] = 0 seem unnecessary here

            # preds.eq(labels) checks element-wise whether the predictions match the labels;
            # sum() counts the number of matching labels
            running_corrects += preds.eq(labels).int().sum()

        epoch_acc = running_corrects.double() / len(dataloaders.dataset)
        print('Acc:{:.4f}'.format(epoch_acc))

        if epoch_acc > best_acc:
            best_acc = epoch_acc

        acc_history.append(epoch_acc.item())
        print()

    time_elapsed = time.time() - since
    print('Validation complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best Acc: {:4f}'.format(best_acc))

    return acc_history
```
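
If the checkpoints were saved on a GPU but later loaded on a CPU-only machine, `torch.load` needs a `map_location` argument; a hedged tweak to the loading line above would be:

```python
model.load_state_dict(torch.load(model_path, map_location=device))
```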

With the test function written, let's run the evaluation.

```python
val_acc_hist = eval_model(model, test_loader, device)
```
```
saved_model ['./my_ckpt/00.pth', './my_ckpt/01.pth', './my_ckpt/02.pth', './my_ckpt/03.pth', './my_ckpt/04.pth', './my_ckpt/05.pth', './my_ckpt/06.pth', './my_ckpt/07.pth', './my_ckpt/08.pth', './my_ckpt/09.pth', './my_ckpt/10.pth', './my_ckpt/11.pth', './my_ckpt/12.pth']
Loading model ./my_ckpt/00.pth
Acc:0.9388

Loading model ./my_ckpt/01.pth
Acc:0.8980

Loading model ./my_ckpt/02.pth
Acc:0.9490

Loading model ./my_ckpt/03.pth
Acc:0.9388

Loading model ./my_ckpt/04.pth
Acc:0.9286

Loading model ./my_ckpt/05.pth
Acc:0.9490

Loading model ./my_ckpt/06.pth
Acc:0.9286

Loading model ./my_ckpt/07.pth
Acc:0.9388

Loading model ./my_ckpt/08.pth
Acc:0.9694

Loading model ./my_ckpt/09.pth
Acc:0.9184

Loading model ./my_ckpt/10.pth
Acc:0.9388

Loading model ./my_ckpt/11.pth
Acc:0.9694

Loading model ./my_ckpt/12.pth
Acc:0.9388

Validation complete in 0m 8s
Best Acc: 0.969388
```

The results again show about 96% accuracy. Now let's visualize the results as graphs with the matplotlib library.

```python
plt.plot(train_acc_hist)
plt.plot(val_acc_hist)
plt.show()
```

(accuracy plot omitted)

The following plots the loss on the training dataset at each epoch. The loss keeps decreasing, which indicates the training went well.

```python
plt.plot(train_loss_hist)
plt.show()
```

(loss plot omitted)

Beyond accuracy and loss, let's check whether the model predicts well on actual data.

First, create a pre-processing function for displaying the predicted images.

```python
def im_convert(tensor):
    # clone() creates a tensor that copies the original tensor's contents;
    # PyTorch records every tensor operation (the computational graph), and
    # detach() produces a tensor that is cut off from gradient propagation
    image = tensor.clone().detach().numpy()
    image = image.transpose(1, 2, 0)
    # the book's line; it is meant to undo transforms.Normalize(0.5, 0.5) via image*std + mean,
    # but with these parentheses it multiplies by (0.5 + 0.5) = 1.0, and since our transform
    # never normalized, the no-op is harmless either way
    image = image * (np.array((0.5, 0.5, 0.5)) + np.array((0.5, 0.5, 0.5)))
    image = image.clip(0, 1)  # clamp the image values to the range 0..1
    return image
```

+) Computational Graph

A computational graph expresses a computation as a graph.

(Figure 5-39: computational-graph example; image omitted)

There are two reasons to use a computational graph:

• Local computation is possible: in Figure 5-39, if the value of Z changes, the results computed from X and Y stay as they are, and only F = A×Z, the part that needs the new Z, has to be recomputed.

• Differentiation via backpropagation is convenient: the orange arrows in Figure 5-39 show the backward pass, where the chain rule lets the derivatives be computed quickly and simply.

+) Chain Rule

(figure: chain-rule derivation; image omitted)
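
For reference, the rule the figure illustrates: for a composition $F = f(q)$ with $q = g(x)$,

$$\frac{\partial F}{\partial x} = \frac{\partial F}{\partial q} \cdot \frac{\partial q}{\partial x}$$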

Now let's use the test dataset to see whether the model actually classifies well.

```python
classes = {0: 'cat', 1: 'dog'}

dataiter = iter(test_loader)
images, labels = next(dataiter)
output = model(images.to(device))  # the model may live on the GPU, so move the batch there too
_, preds = torch.max(output, 1)
preds = preds.cpu()                # bring the predictions back to the CPU for plotting/comparison

fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx + 1, xticks=[], yticks=[])
    plt.imshow(im_convert(images[idx]))
    # title shows predicted(actual); green if correct, red if wrong
    ax.set_title("{}({})".format(str(classes[preds[idx].item()]), str(classes[labels[idx].item()])),
                 color=("green" if preds[idx] == labels[idx] else "red"))
plt.subplots_adjust(bottom=0.2, top=0.6, hspace=0)  # adjust spacing before show(); calling it afterwards just opens a new empty figure
plt.show()
```

(prediction-grid plot omitted)


5.3.2 Fine-tuning

Fine-tuning goes a step beyond feature extraction: it updates the weights of the pre-trained model's convolutional layers as well as the classifier. Feature extraction performs well only when the extracted features already suit the new task; if they do not (e.g., the objects ImageNet was trained on differ from the target domain), fine-tuning can update the weights and re-train the model for the new purpose.

In short, fine-tuning makes small adjustments to a pre-trained model's parameters so that it fits the dataset being analyzed.

Different fine-tuning strategies apply depending on the size of the training dataset and its similarity to the pre-trained model's data:

Large dataset, low similarity to the pre-trained model: retrain the entire model. Because the dataset is large, retraining everything is a good strategy.

Large dataset, high similarity: train the later convolutional layers (the ones near the fully connected layer) and the data classifier. Because the data is similar, optimal performance can be reached by retraining only the layers that express strong, high-level features plus the classifier, rather than the whole network.

Small dataset, low similarity: train part of the convolutional layers and the data classifier. With little data, fine-tuning only a few layers may not be effective, so you have to choose carefully how far back into the convolutional layers to retrain.

Small dataset, high similarity: train only the data classifier. With little data, applying fine-tuning to many layers risks overfitting, so fine-tune only the final fully connected classifier. (A code sketch of this partial freezing follows below.)
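
As a rough sketch of the partial-freezing idea (my own illustration, reusing the same ResNet18 as above, not code from the book):

```python
import torchvision.models as models
import torch.nn as nn
import torch.optim as optim

model = models.resnet18(pretrained=True)

# freeze everything first...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze the last convolutional stage (layer4) and replace the classifier
for param in model.layer4.parameters():
    param.requires_grad = True
model.fc = nn.Linear(512, 2)  # a fresh FC layer is trainable by default

# optimize only the unfrozen parameters, typically with a small learning rate
optimizer = optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
```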

(figure: fine-tuning strategies by dataset size and similarity; image omitted)

This post is licensed under CC BY 4.0 by the author.