Estimated Time of Arrival Executor
This executor class is responsible for the training and evaluation process of all estimated-time-of-arrival (ETA) models.
Executor Settings
The following introduces the parameters that this executor class accepts:

max_epoch
: The maximum number of training rounds. The default value varies with the model.

epoch
: The number of initial training rounds. If it is greater than 0, the model saved at that epoch is first loaded from `./libcity/cache/model_cache`, and training or evaluation then continues from it.

learner
: The name of the optimizer to use. Defaults to `'adam'`. Range in `['adam', 'sgd', 'adagrad', 'rmsprop', 'sparse_adam']`. (A sketch of how these settings map to `torch.optim` follows this list.)

learning_rate
: Learning rate. Defaults to `0.01`.

weight_decay
: Parameter of the optimizer. Defaults to `0.0`.

lr_epsilon
: Parameter of the optimizer. Defaults to `1e-8`.

lr_beta1
: Parameter of the optimizer. Defaults to `0.9`.

lr_beta2
: Parameter of the optimizer. Defaults to `0.999`.

lr_alpha
: Parameter of the optimizer. Defaults to `0.99`.

lr_momentum
: Parameter of the optimizer. Defaults to `0`.
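
As a concrete reference, below is a minimal sketch of how the optimizer settings above could be mapped onto `torch.optim` constructors. The helper name `build_optimizer` and the `config.get(...)` access pattern are illustrative assumptions, not LibCity's actual code.

```python
import torch.optim as optim

def build_optimizer(model, config):
    """Sketch only: map the executor settings above to a torch.optim optimizer."""
    learner = config.get('learner', 'adam')
    lr = config.get('learning_rate', 0.01)
    wd = config.get('weight_decay', 0.0)

    if learner == 'adam':
        return optim.Adam(model.parameters(), lr=lr, weight_decay=wd,
                          eps=config.get('lr_epsilon', 1e-8),
                          betas=(config.get('lr_beta1', 0.9), config.get('lr_beta2', 0.999)))
    if learner == 'sgd':
        return optim.SGD(model.parameters(), lr=lr, weight_decay=wd,
                         momentum=config.get('lr_momentum', 0))
    if learner == 'adagrad':
        return optim.Adagrad(model.parameters(), lr=lr, weight_decay=wd,
                             eps=config.get('lr_epsilon', 1e-8))
    if learner == 'rmsprop':
        return optim.RMSprop(model.parameters(), lr=lr, weight_decay=wd,
                             alpha=config.get('lr_alpha', 0.99),
                             momentum=config.get('lr_momentum', 0))
    if learner == 'sparse_adam':
        # SparseAdam does not accept weight_decay or momentum.
        return optim.SparseAdam(model.parameters(), lr=lr,
                                eps=config.get('lr_epsilon', 1e-8),
                                betas=(config.get('lr_beta1', 0.9), config.get('lr_beta2', 0.999)))
    raise ValueError(f'unsupported learner: {learner}')
```
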
lr_decay
: Whether to use an lr_scheduler. Defaults to `False`.

lr_scheduler
: The type of lr_scheduler. Range in [`MultiStepLR`, `StepLR`, `ExponentialLR`, `CosineAnnealingLR`, `LambdaLR`, `ReduceLROnPlateau`]. (A sketch of how these settings map to `torch.optim.lr_scheduler` follows this list.)

lr_decay_ratio
: Parameter of `MultiStepLR`, `StepLR`, `ExponentialLR`, and `ReduceLROnPlateau`.

steps
: Parameter of `MultiStepLR`.

step_size
: Parameter of `StepLR`.

lr_lambda
: Parameter of `LambdaLR`. (Note: this parameter must be specified as a function, which JSON-based configuration files currently cannot express.)

lr_T_max
: Parameter of `CosineAnnealingLR`.

lr_eta_min
: Parameter of `CosineAnnealingLR`.

lr_patience
: Parameter of `ReduceLROnPlateau`.

lr_threshold
: Parameter of `ReduceLROnPlateau`.
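
Similarly, here is a minimal sketch of how the scheduler settings could be mapped onto `torch.optim.lr_scheduler` classes. The helper name `build_lr_scheduler` is hypothetical and the fallback values are only examples; LibCity's actual defaults may differ.

```python
import torch.optim.lr_scheduler as lr_sched

def build_lr_scheduler(optimizer, config):
    """Sketch only: map the scheduler settings above to torch.optim.lr_scheduler."""
    if not config.get('lr_decay', False):
        return None
    name = config.get('lr_scheduler', 'multisteplr').lower()
    if name == 'multisteplr':
        return lr_sched.MultiStepLR(optimizer, milestones=config['steps'],
                                    gamma=config.get('lr_decay_ratio', 0.1))
    if name == 'steplr':
        return lr_sched.StepLR(optimizer, step_size=config['step_size'],
                               gamma=config.get('lr_decay_ratio', 0.1))
    if name == 'exponentiallr':
        return lr_sched.ExponentialLR(optimizer, gamma=config.get('lr_decay_ratio', 0.1))
    if name == 'cosineannealinglr':
        return lr_sched.CosineAnnealingLR(optimizer, T_max=config['lr_T_max'],
                                          eta_min=config.get('lr_eta_min', 0))
    if name == 'lambdalr':
        # lr_lambda must be a callable, which a JSON config cannot express directly.
        return lr_sched.LambdaLR(optimizer, lr_lambda=config['lr_lambda'])
    if name == 'reducelronplateau':
        return lr_sched.ReduceLROnPlateau(optimizer, mode='min',
                                          factor=config.get('lr_decay_ratio', 0.1),
                                          patience=config.get('lr_patience', 10),
                                          threshold=config.get('lr_threshold', 1e-4))
    raise ValueError(f'unsupported lr_scheduler: {name}')
```
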
clip_grad_norm
: Whether to use `clip_grad_norm_`. Defaults to `False`.

max_grad_norm
: Parameter of `clip_grad_norm_`: the maximum norm to which the model's gradient norm is clipped (a usage sketch follows this list).
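
Gradient clipping is typically applied between the backward pass and the optimizer step. The following is a self-contained sketch with a placeholder model and data; only the placement of `clip_grad_norm_` is the point here.

```python
import torch

# Illustrative settings; in practice these come from the JSON config.
config = {'clip_grad_norm': True, 'max_grad_norm': 5.0}

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

optimizer.zero_grad()
loss.backward()
if config['clip_grad_norm']:
    # Rescale gradients so that their total norm does not exceed max_grad_norm.
    torch.nn.utils.clip_grad_norm_(model.parameters(), config['max_grad_norm'])
optimizer.step()
```
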
use_early_stop
: Whether to use the early-stopping mechanism. Defaults to `False`.

patience
: The number of tolerated non-improving rounds for the early-stopping mechanism. A counter is incremented by 1 whenever the validation error exceeds the minimum validation error seen so far, and reset to 0 otherwise; training ends once the counter reaches `patience` (illustrated after this list).
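
A minimal sketch of the early-stopping rule described under `patience`, using hypothetical per-epoch validation losses:

```python
# Illustrative settings and losses; only the counter logic matters here.
config = {'use_early_stop': True, 'patience': 3}

best_val_loss = float('inf')
wait = 0
val_losses = [1.0, 0.9, 0.95, 0.93, 0.94, 0.96]  # hypothetical per-epoch values

for epoch, val_loss in enumerate(val_losses):
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        wait = 0          # improvement: reset the counter
    else:
        wait += 1         # no improvement: accumulate
    if config['use_early_stop'] and wait >= config['patience']:
        print(f'early stopping at epoch {epoch}')
        break
```
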
train_loss
: The loss function used during training. Range in `['mae', 'mape', 'mse', 'rmse', 'masked_mae', 'masked_mape', 'masked_mse', 'masked_rmse', 'r2', 'evar']`.

log_level
: The logging level. Defaults to `INFO`; all messages at or above this level are output. Please refer to the `logging` library for details.

log_every
: Log training information once every `log_every` rounds during training (see the example below).
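
A small sketch of how `log_level` and `log_every` could drive training-time logging with the standard `logging` module. LibCity configures its own logger, so this only illustrates the meaning of the two settings.

```python
import logging

# Illustrative settings; in practice these come from the JSON config.
config = {'log_level': 'INFO', 'log_every': 2, 'max_epoch': 6}

logging.basicConfig(level=getattr(logging, config['log_level']))
logger = logging.getLogger('executor')

for epoch in range(config['max_epoch']):
    train_loss = 1.0 / (epoch + 1)  # placeholder loss value
    if epoch % config['log_every'] == 0:
        logger.info('epoch %d: train_loss=%.4f', epoch, train_loss)
```
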
saved_model
: Whether to save the trained model. Defaults to `True`.

gpu
: Whether to use the GPU. Defaults to `True`.

gpu_id
: The ID of the GPU to use. Defaults to `0`.

device*
: Cannot be specified externally; it is determined jointly by `gpu` and `gpu_id`. In model code, obtain it through `config['device']` instead of reading `gpu` and `gpu_id` directly, as sketched below.
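
A minimal sketch of how `device` is derived from `gpu` and `gpu_id` and then consumed through `config['device']`; the exact derivation in the executor may differ in detail.

```python
import torch

# Illustrative settings; in practice these come from the JSON config.
config = {'gpu': True, 'gpu_id': 0}

# Fall back to the CPU when no CUDA device is available.
use_gpu = config['gpu'] and torch.cuda.is_available()
config['device'] = torch.device(f"cuda:{config['gpu_id']}" if use_gpu else 'cpu')

# Inside a model, the device is then read from the config:
x = torch.zeros(2, 3, device=config['device'])
```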