easypheno.model._template_torch_model
Module Contents
Classes
Template file for a prediction model based on TorchModel
- class easypheno.model._template_torch_model.TemplateTorchModel(task, optuna_trial, encoding=None, n_outputs=1, n_features=None, width_onehot=None, batch_size=None, n_epochs=None, early_stopping_point=None)
Bases: easypheno.model._torch_model.TorchModel

Template file for a prediction model based on TorchModel.

See BaseModel and TorchModel for more information on the attributes.

Steps you have to do to add your own model:
Copy this template file and rename it according to your model (this will be the name used to call it later on the command line)
Rename the class and add it to easypheno.model.__init__.py
Adjust the class attributes if necessary
Define your model in define_model()
Define the hyperparameters and ranges you want to use for optimization in define_hyperparams_to_tune().
CAUTION: Some hyperparameters are already defined in common_hyperparams(), which you can directly use here. Some of them are already suggested in TorchModel.
Test your new prediction model using toy data
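The steps above can be sketched as a minimal subclass. Since the example should run without easypheno or torch installed, `_TorchModelStub` below is a hypothetical stand-in for `TorchModel` that fakes Optuna suggestions; the class name, encoding names, and hyperparameter names are illustrative assumptions, not part of the real template.

```python
# Minimal sketch of the workflow above. _TorchModelStub is a hypothetical
# stand-in for easypheno.model._torch_model.TorchModel so the example runs
# without easypheno or torch; encoding and hyperparameter names are
# illustrative assumptions only.

class _TorchModelStub:
    """Hypothetical stand-in for TorchModel (fakes Optuna suggestions)."""

    def __init__(self):
        # pretend values an Optuna trial might have suggested
        self._suggested = {'n_layers': 2}

    def suggest_hyperparam_to_optuna(self, param_name):
        # in easypheno this forwards to the Optuna trial; faked here
        return self._suggested[param_name]


class MyOwnTorchModel(_TorchModelStub):
    # step 3: adjust the class attributes if necessary (values assumed)
    standard_encoding = 'onehot'
    possible_encodings = ['onehot']

    # step 4: define your model in define_model()
    def define_model(self):
        n_layers = self.suggest_hyperparam_to_optuna('n_layers')
        # a plain list of layer names stands in for a torch.nn module
        return ['linear_{}'.format(i) for i in range(n_layers)]

    # step 5: define the hyperparameters and ranges to tune
    def define_hyperparams_to_tune(self):
        return {
            'n_layers': {'datatype': 'int', 'lower_bound': 1, 'upper_bound': 5},
        }


print(MyOwnTorchModel().define_model())  # ['linear_0', 'linear_1']
```

The real subclass would inherit from `TorchModel` instead of the stub and return a `torch.nn` module from `define_model()`.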
- standard_encoding = Ellipsis
- possible_encodings = [Ellipsis]
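In the template both attributes are left as `Ellipsis` placeholders. For illustration, filled-in values might look like the sketch below; the encoding names `'012'` and `'onehot'` are assumptions based on easypheno's conventions elsewhere, not stated on this page.

```python
# Illustrative attribute values only; '012' and 'onehot' are assumed
# encoding names, not taken from this page.
standard_encoding = 'onehot'
possible_encodings = ['012', 'onehot']
```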
- define_model(self)
Definition of the actual prediction model.
Use param = self.suggest_hyperparam_to_optuna(PARAM_NAME_IN_DEFINE_HYPERPARAMS_TO_TUNE) if you want to use the value of a hyperparameter that should be optimized. The function needs to return the model object.
See BaseModel for more information.
- define_hyperparams_to_tune(self)
Define the hyperparameters and ranges you want to optimize. Caution: they will only be optimized if you add them via self.suggest_hyperparam_to_optuna(PARAM_NAME) in define_model()
See BaseModel for more information on the format and options. Check TorchModel for already defined (and in some cases also suggested) hyperparameters.
- Return type: dict