MATLAB: Want to add regularization (L2) in Resnet50 code

Tags: MATLAB, neural networks, transfer learning

Hi guys,
I am training my data with ResNet-50 (a CNN), but the model is overfitting. I want to reduce the overfitting, so I would like to add L2 regularization. Can anybody tell me how to add L2 regularization to my code? You can see my code below.
clear all
close all
imds = imageDatastore("E:\test\data", ...
    'IncludeSubfolders',true,'LabelSource','foldernames');
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized'); % 70% for training, 30% for validation
net = resnet50; % the first time, you have to download the support package from the Add-On Explorer
%Replace Final Layers
numClasses = numel(categories(imdsTrain.Labels));
lgraph = layerGraph(net);
newFCLayer = fullyConnectedLayer(numClasses,'Name','new_fc','WeightLearnRateFactor',10,'BiasLearnRateFactor',10);
lgraph = replaceLayer(lgraph,'fc1000',newFCLayer);
newClassLayer = classificationLayer('Name','new_classoutput');
lgraph = replaceLayer(lgraph,'ClassificationLayer_predictions',newClassLayer);
%Train Network
inputSize = net.Layers(1).InputSize;
augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain);
augimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);
options = trainingOptions('sgdm', ...
    'MiniBatchSize',10, ...
    'MaxEpochs',20, ...
    'InitialLearnRate',1e-3, ...
    'Shuffle','every-epoch', ...
    'ValidationData',augimdsValidation, ...
    'ValidationFrequency',5, ...
    'Verbose',false, ...
    'Plots','training-progress');
trainedNet = trainNetwork(augimdsTrain,lgraph,options);
YPred = classify(trainedNet,augimdsValidation);
accuracy = mean(YPred == imdsValidation.Labels)
C = confusionmat(imdsValidation.Labels,YPred)
cm = confusionchart(imdsValidation.Labels,YPred);
cm.Title = 'Confusion Matrix for Validation Data';
cm.ColumnSummary = 'column-normalized';
cm.RowSummary = 'row-normalized';

Best Answer

  • You can specify the L2 regularization factors for the weights and biases of convolutional and fully connected layers by setting their WeightL2Factor and BiasL2Factor properties, respectively. trainNetwork multiplies these per-layer factors by the global L2Regularization value that you specify with trainingOptions, so you can control regularization globally, per layer, or both.
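For example, here is a minimal sketch against the code in the question, reusing the same variable and layer names. The regularization strength 5e-4 and the per-layer factor of 2 are only illustrative values to tune for your data (the trainingOptions default for L2Regularization is 1e-4):

% Option 1: set a global L2 regularization factor in trainingOptions.
% trainNetwork applies it to all learnable weights in the network.
options = trainingOptions('sgdm', ...
    'MiniBatchSize',10, ...
    'MaxEpochs',20, ...
    'InitialLearnRate',1e-3, ...
    'L2Regularization',5e-4, ...              % illustrative value; default is 1e-4
    'Shuffle','every-epoch', ...
    'ValidationData',augimdsValidation, ...
    'ValidationFrequency',5, ...
    'Verbose',false, ...
    'Plots','training-progress');

% Option 2: raise the per-layer factors on the new fully connected layer.
% The effective factor for this layer = WeightL2Factor * global L2Regularization.
newFCLayer = fullyConnectedLayer(numClasses,'Name','new_fc', ...
    'WeightLearnRateFactor',10,'BiasLearnRateFactor',10, ...
    'WeightL2Factor',2, ...                   % doubles the global factor for this layer's weights
    'BiasL2Factor',0);                        % biases are typically not regularized (default 0)
lgraph = replaceLayer(lgraph,'fc1000',newFCLayer);

With either option the rest of your script stays the same; trainNetwork(augimdsTrain,lgraph,options) picks up the regularization automatically.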