
Exercise set 2

Deadline: 11-3-2025 10:00

Note that these networks will run substantially faster on dedicated hardware, available via Google Colab (for which you can make a free account) or Surf (for which you should have received accounts).


Exercise 1:


In the 1998 paper by LeCun et al., entitled Gradient-based learning applied to document recognition, the LeNet-5 convolutional neural network is introduced. This was one of the first trainable CNNs and a landmark paper in the world of deep learning (50k+ citations). Using the architecture figure from the paper and the information provided there, please explain the number of trainable parameters for each layer:

C1: 156

S2: 12

C3: 1,516 (check Table 1!)

S4: 32

C5: 48,120

F6: 10,164
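As a sanity check, the fully connected convolutions and fully connected layers follow directly from filters × (kernel weights + 1 bias), the subsampling layers have one trainable coefficient and one bias per feature map, and C3 deviates because of its partial connectivity scheme (Table 1 in the paper). A minimal sketch that reproduces the numbers above:

# Sanity check of the LeNet-5 parameter counts listed above.
def conv_params(n_filters, kernel, n_inputs):
    # fully connected convolution: every filter sees all input maps
    return n_filters * (kernel * kernel * n_inputs + 1)  # +1 bias per filter

print(conv_params(6, 5, 1))       # C1: 6*(25+1)              = 156
print(6 * 2)                      # S2: (coeff + bias)*6 maps = 12
# C3 follows Table 1: 6 maps see 3 inputs, 6 see 4, 3 see 4, 1 sees all 6
print(6*(3*25+1) + 6*(4*25+1) + 3*(4*25+1) + 1*(6*25+1))  # C3 = 1516
print(16 * 2)                     # S4: (coeff + bias)*16 maps = 32
print(conv_params(120, 5, 16))    # C5: 120*(400+1)           = 48120
print(84 * (120 + 1))             # F6: 84*(120+1)            = 10164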


Exercise 2: U-Net

In a U-net, one has to make sure that the dimensions of the feature maps work out. In this exercise, you will calculate the dimensions of different layers within a U-net. Below, we have drawn a toy U-net.


A: Assuming the input image has spatial dimensions 69x69x34 (x1 feature channel), what will be the dimensions of A-C? Are the dimensions of D defined? Why (not)? When padding is applied, assume a value of 1 in each direction.

Tip: if you do not know what stride and zero-padding mean, please look them up first.
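The standard output-size formula is all you need here: for input size n, kernel size k, stride s and padding p, the output size along one axis is floor((n + 2p - k) / s) + 1. A small illustrative helper:

def out_size(n, k, s=1, p=0):
    # spatial output size of a convolution or pooling along one axis
    return (n + 2 * p - k) // s + 1

print(out_size(69, k=3, s=1, p=1))  # 69: a padded 3x3 stride-1 conv preserves size
print(out_size(69, k=2, s=2, p=0))  # 34: a 2x2 stride-2 max pool halves, rounding down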

B: The U-net shown above is fairly useless, especially as the input and output dimensions do not match. In practice, U-nets are implemented more symmetrically: most modern U-net architectures use zero-padding at each step, typically with stride 1x1, so that the resolution of the input and output remains the same. Nevertheless, it is important to think about the dimensions of your input image with respect to strided/non-zero-padded convolutions and pooling operations, to ensure the dimensions match. In the 2D U-net architecture below (taken from the original U-net paper, cited almost 60k times), what would be the minimum size of the input image? Assume 3x3 max pooling (stride 3x3), stride 1x1 for all conv operations, and zero-padding throughout. A sketch of this check follows below.
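One way to reason about this: with zero-padded stride-1 convolutions, only the 3x3 (stride 3x3) max pooling changes the resolution, so the spatial size must be divisible by 3 at every pooling step for the upsampled maps to line up with the skip connections. Assuming the depicted architecture has four pooling steps like the original U-net (check the figure; the count below is an assumption), a quick brute-force check:

def valid_unet_input(n, n_pool=4, pool=3):
    # True if size n survives n_pool 3x3/stride-3 poolings without remainder
    for _ in range(n_pool):
        if n % pool:      # the skip connection would not line up
            return False
        n //= pool
    return n >= 1         # the bottleneck must be at least 1x1

print(min(n for n in range(1, 200) if valid_unet_input(n)))  # 81 = 3**4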


Exercise 3: CNN (week 4)

On Canvas (Surf: Deep Learning for Medical Image Analysis) you can find instructions for connecting to Snellius (a Surf service).

Exercise 2 in the Git repository contains code (ISIC 2019) that loads the challenge data and trains a classification network. Study the code on your local machine (e.g. in Spyder or PyCharm). You can run a local copy to investigate the data, but your computer will probably not support training the network. Note that you need to download the data and point to the right folder on your computer in line 51 to run it locally:

#set data location on your local computer. Data can be downloaded from:

# https://surfdrive.surf.nl/files/index.php/s/epjCz4fip1pkWN7

# PW: deeplearningformedicalimaging

# use a raw string so the Windows backslashes are not treated as escapes
data_dir = r'C:\scratch\Surf\Documents\Onderwijs\DeepLearning_MedicalImaging\opgaven\opgave 2\AI-Course_StudentChallenge\data\classification'


You can also choose to run a copy on Google Colab for debugging purposes, but we advise you to run most of the work on Snellius, which has dedicated hardware.

Note that you will need to log in to W&B first, using an interactive Slurm session on Snellius:

srun --partition=rome --ntasks=1 --cpus-per-task=9 --time=00:10:00 --pty bash -i

module purge

module load 2023

module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1


source /gpfs/work5/0/prjs1312/venv/bin/activate

wandb login

then press Ctrl+D, and leave the interactive session with

exit


Copy your code to Snellius and run it (using Slurm) to train the classification network (main_CNN.py). Congratulations, you have trained your first network. But how well did it perform?

First, you can try to check the training progress in W&B.


To assess the final performance, first specify where the model is saved during training:

python main_CNN.py --checkpoint_folder_save path/to/folder

and then run the same script again, pointing it to the saved checkpoints with --checkpoint_folder_path:

python main_CNN.py --checkpoint_folder_path path/to/folder

Next, you will try to improve the network's performance. Feel free to boast about your performance and compare it to your peers' at the Canvas discussion board Boasting CNN and U-net.

This exercise (A-D) should be handed in as:

- a short (max 2 A4 pages of text) scientific report containing at least:

  o a description of what you implemented and why;

  o the results (how did it change the network's performance);

  o an interpretation of the results: why do you expect certain approaches were better than others?

- some additional figures/tables are welcome, but only those that are relevant:

  o give tables/figures a caption;

  o refer to the figure/table from the main text, so that we know what we are looking at, why they are relevant, and how to interpret the data;

- the code you used (especially when implementing new features).

Note that you will probably not be able to do all suggestions under A-D.

A: Change the network structure in CNNs.py. For example, you can add layers, change convolutional filter sizes, change activation functions, or add skip connections. Try to develop a network that performs better than the provided one. Please consider which metrics are relevant and to what data they apply (train/val/test?) for the statements you are making, and present the data in an easy-to-understand fashion; uploading tens of training curves for all metrics is generally not needed for your message.
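The exact class layout of CNNs.py is not reproduced here, but a typical structural change looks like the sketch below: an extra convolutional block with batch normalization spliced into the model (the channel sizes and placement are illustrative assumptions, not the provided code):

import torch.nn as nn

# Illustrative only: an extra conv block you might splice into the model
# in CNNs.py (the channel sizes 64 -> 128 are assumptions).
extra_block = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
)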

B: Tune the training parameters and hyperparameters (e.g. number of epochs, learning rate, loss function, optimizer, schedulers) to optimize the network. For most of these, this can be done by passing additional command-line arguments, e.g.

python main_CNN.py --optimizer_lr <lr> --batch_size <bs> --optimizer_name <on> --max_epochs <me>

where <lr> is your desired learning rate, <bs> your desired batch size, etc.

You may also want to try different losses, which may require some programming. Currently, the loss is defined as:

loss = F.binary_cross_entropy_with_logits(y_hat, y.float())
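If the classes are imbalanced (common in skin-lesion data), one option that needs almost no extra code is to up-weight the positive class in this same loss via its pos_weight argument. The replacement line below is a sketch that reuses the y_hat and y from the surrounding code; the weight value is an illustrative assumption and should be derived from your class frequencies:

import torch

# sketch: weighted replacement for the loss line above
pos_weight = torch.tensor([3.0], device=y_hat.device)  # assumed value
loss = F.binary_cross_entropy_with_logits(y_hat, y.float(), pos_weight=pos_weight)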


C: The current data augmentation consists of simple rotations. You can add additional data augmentation to improve the network's performance/generalizability. To do so, look at Data_loader.py from line 127 onwards, which currently shows the rotation example. You can either adapt this to include additional augmentations, or write your own augmentation (a sketch follows after the snippets below). Note that the augmentation is applied to the data at line 21, and to the masks at line 49, of that same file:

if transform:
    self.train_transforms = transforms.Compose([Random_Rotate(0.1), transforms.ToTensor()])
else:
    self.train_transforms = transforms.Compose([transforms.ToTensor()])
self.val_transforms = transforms.Compose([transforms.ToTensor()])

and

if transform:
    self.train_transforms = transforms.Compose([Random_Rotate_Seg(0.1), ToTensor_Seg()])
else:
    self.train_transforms = transforms.Compose([ToTensor_Seg()])
self.val_transforms = transforms.Compose([ToTensor_Seg()])
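A custom augmentation can follow the same callable pattern as Random_Rotate_Seg. The exact sample format passed through Data_loader.py is not shown here, so the sketch below assumes the transform receives an (image, mask) pair of numpy arrays; adapt it to whatever Random_Rotate_Seg actually receives:

import random
import numpy as np

class RandomFlip_Seg:
    # Horizontal flip applied identically to image and mask, so the
    # segmentation stays aligned. The sample format is an assumption.
    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, sample):
        image, mask = sample
        if random.random() < self.p:
            image = np.flip(image, axis=1).copy()  # flip along the width axis
            mask = np.flip(mask, axis=1).copy()    # flip the mask the same way
        return image, mask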


If you perform augmentation, please add augmented images to your report and discuss them: what do we see, and why?

D: Transfer-learn from an existing CNN; a common recipe is sketched below.
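This is not tied to the provided code; one standard recipe is to start from an ImageNet-pretrained torchvision model, freeze the backbone, and replace the classification head. The single-logit head below is an assumption chosen to match the binary loss used above:

import torch.nn as nn
from torchvision import models

# Illustrative transfer-learning sketch (not the provided code).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 1)   # new trainable head: one logit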



Exercise 4: U-Net (week 5)

This exercise (A-D) should be handed in as a short (max 2 A4 pages including figures) report containing a description of what you implemented and the results (how did it change the network's performance), alongside the code you used (especially when implementing new features).

Note that you will probably not be able to do all suggestions under A-D.

The same ISIC challenge also contains segmentation examples, and a U-net allows you to segment them.

A: In CNNs.py, complete the code for a U-net (e.g. following the paper provided in the link, although contrary to the original paper you may prefer to use padding) and train the segmentation network by running main_Unet.py. A sketch of the typical building blocks follows below.
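Your CNNs.py skeleton may look different, so treat this as a minimal sketch only (channel counts, depth, and class names are assumptions): a padded double-convolution block plus one down/up level with a skip connection, which keeps input and output resolution equal:

import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    # two padded 3x3 convolutions; padding=1 preserves the resolution
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class TinyUNet(nn.Module):
    # illustrative one-level U-net: encoder, bottleneck, decoder, skip
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.enc = DoubleConv(in_ch, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = DoubleConv(64, 128)
        self.up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec = DoubleConv(128, 64)           # 128 = 64 (skip) + 64 (up)
        self.head = nn.Conv2d(64, out_ch, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                          # full-resolution features
        b = self.bottleneck(self.pool(e))        # half resolution
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # upsample + skip
        return self.head(d)                      # per-pixel logits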

Similar to exercise 3, optimize the network by changing:

B: Change the network structure. For example, you can add layers, change convolutional filter sizes, change activation functions, or add skip connections. Does this improve the performance?

C: Tune the training parameters and hyperparameters (e.g. number of epochs, learning rate, loss function, optimizer, schedulers) to optimize the network.

D: Adapt the data augmentation (you may use the code from the previous exercise).

