Multilayer Perceptron

In the following, we give details on our implementation of a Multilayer Perceptron (MLP), also known as a feedforward neural network. References providing a more detailed theoretical background, which were also used for writing this text, can be found at the end of this page. We use PyTorch for our implementation. For more information on specific PyTorch objects that we use, e.g. layers, see the PyTorch documentation.

Some of the methods and attributes relevant for the MLP are already defined in its parent class TorchModel. There, you can find, for example, the epoch- and batch-wise training loop. In the code block below, we show the constructor of TorchModel.

class TorchModel(_base_model.BaseModel, abc.ABC):
    def __init__(self, task: str, optuna_trial: optuna.trial.Trial, encoding: str = None, n_outputs: int = 1,
                 n_features: int = None, width_onehot: int = None, batch_size: int = None, n_epochs: int = None,
                 early_stopping_point: int = None):
        self.all_hyperparams = self.common_hyperparams()  # add hyperparameters commonly optimized for all torch models
        self.n_features = n_features
        self.width_onehot = width_onehot  # relevant for models using onehot encoding e.g. CNNs
        super().__init__(task=task, optuna_trial=optuna_trial, encoding=encoding, n_outputs=n_outputs)
        # use values passed to the constructor if available, otherwise suggest them via optuna
        self.batch_size = \
            batch_size if batch_size is not None else self.suggest_hyperparam_to_optuna('batch_size')
        self.n_epochs = n_epochs if n_epochs is not None else self.suggest_hyperparam_to_optuna('n_epochs')
        self.optimizer = torch.optim.Adam(params=self.model.parameters(),
                                          lr=self.suggest_hyperparam_to_optuna('learning_rate'))
        self.loss_fn = torch.nn.CrossEntropyLoss() if task == 'classification' else torch.nn.MSELoss()
        # self.l1_factor = self.suggest_hyperparam_to_optuna('l1_factor')
        # early stopping if there is no improvement on validation loss for a certain number of epochs
        self.early_stopping_patience = self.suggest_hyperparam_to_optuna('early_stopping_patience')
        self.early_stopping_point = early_stopping_point
        # train on a GPU if available, otherwise on the CPU
        self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

In this constructor, we define attributes and suggest hyperparameters that are relevant for all neural network implementations, e.g. the optimizer to use and the learning_rate to apply. Some attributes are set to fixed values, for instance the loss function (self.loss_fn), which depends on the detected machine learning task. Furthermore, we parametrize early stopping, which we use as a measure to prevent overfitting: the validation loss is monitored, and if it does not improve for a certain number of epochs (self.early_stopping_patience), training is stopped. When working with our MLP implementation, keep in mind that some relevant code and hyperparameters are located in TorchModel.
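
As an illustration, the following is a minimal, self-contained sketch of such an epoch- and batch-wise training loop with validation-based early stopping. All names (train_with_early_stopping, train_loader, val_loader etc.) are illustrative assumptions; the actual loop in TorchModel differs in detail.

import torch

def train_with_early_stopping(model, optimizer, loss_fn, train_loader, val_loader,
                              n_epochs, patience, device):
    # illustrative sketch, not the actual TorchModel implementation
    best_val_loss = float('inf')
    epochs_without_improvement = 0
    for epoch in range(n_epochs):
        model.train()
        for inputs, targets in train_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
        # monitor the loss on the validation set
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for inputs, targets in val_loader:
                inputs, targets = inputs.to(device), targets.to(device)
                val_loss += loss_fn(model(inputs), targets).item()
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # no improvement for `patience` epochs: stop training
    return model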

The definition of the MLP model itself as well as of some specific hyperparameters and their ranges can be found in the Mlp class. In the code block below, we show its define_model() method. Our MLP model consists of n_layers blocks, each comprising a Linear(), an activation function, a BatchNorm1d() and a Dropout layer. The last of these blocks is followed by a Linear() output layer. The number of units in the first linear layer is determined by the hyperparameter n_initial_units_factor, which is multiplied by the number of input features. With each subsequent block, the number of units decreases by the percentage given by the hyperparameter perc_decrease_per_layer. Further, we use Dropout for regularization, with the dropout rate optimized via the hyperparameter dropout. Finally, we transform the list to which we added all network layers into a torch.nn.Sequential() object.

def define_model(self) -> torch.nn.Sequential:
    """
    Definition of an MLP network.

    Architecture:

        - N_LAYERS blocks of (Linear + activation + BatchNorm + Dropout)
        - Linear output layer

    Number of units in the first linear layer and percentage decrease after each may be fixed or optimized.
    """
    n_layers = self.suggest_hyperparam_to_optuna('n_layers')
    model = []
    act_function = self.get_torch_object_for_string(string_to_get=self.suggest_hyperparam_to_optuna('act_function'))
    in_features = self.n_features
    out_features = int(in_features * self.suggest_hyperparam_to_optuna('n_initial_units_factor'))
    p = self.suggest_hyperparam_to_optuna('dropout')
    perc_decrease = self.suggest_hyperparam_to_optuna('perc_decrease_per_layer')
    for layer in range(n_layers):
        # block of Linear + activation + BatchNorm + Dropout
        model.append(torch.nn.Linear(in_features=in_features, out_features=out_features))
        model.append(act_function)
        model.append(torch.nn.BatchNorm1d(num_features=out_features))
        model.append(torch.nn.Dropout(p=p))
        # decrease the number of units by perc_decrease for the next block
        in_features = out_features
        out_features = int(in_features * (1 - perc_decrease))
    # linear output layer
    model.append(torch.nn.Linear(in_features=in_features, out_features=self.n_outputs))
    return torch.nn.Sequential(*model)
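
To make the decreasing layer widths concrete, the following small example computes them for assumed hyperparameter values (chosen for illustration only, not taken from the actual search space).

# assumed values for illustration
n_features = 1000
n_initial_units_factor = 0.5
perc_decrease = 0.25
n_layers = 3

in_features = n_features
out_features = int(in_features * n_initial_units_factor)
for layer in range(n_layers):
    print(f'block {layer}: Linear({in_features} -> {out_features})')
    in_features = out_features
    out_features = int(in_features * (1 - perc_decrease))
print(f'output layer: Linear({in_features} -> n_outputs)')
# block 0: Linear(1000 -> 500)
# block 1: Linear(500 -> 375)
# block 2: Linear(375 -> 281)
# output layer: Linear(281 -> n_outputs)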

The implementations for 'classification' and 'regression' differ only in the out_features of the output layer (and in the loss function, as shown in the first code block). self.n_outputs is inherited from BaseModel, where it is set to 1 for regression (one continuous output) or to the number of different classes for classification.
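
As a short, self-contained illustration (all values assumed), the following shows how the output layer width and the loss function pair up for both tasks.

import torch

batch_size, n_classes = 16, 3

# regression: n_outputs = 1, MSELoss compares continuous predictions and targets
predictions = torch.randn(batch_size, 1)  # output of the final Linear layer
targets = torch.randn(batch_size, 1)
regression_loss = torch.nn.MSELoss()(predictions, targets)

# classification: n_outputs = number of classes, CrossEntropyLoss expects raw logits
logits = torch.randn(batch_size, n_classes)  # output of the final Linear layer
class_targets = torch.randint(0, n_classes, (batch_size,))
classification_loss = torch.nn.CrossEntropyLoss()(logits, class_targets)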

References

  1. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer, New York.

  2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. Available at https://www.deeplearningbook.org/