# runn.models.base

Module attributes:

- `optimizers = {'adadelta': Adadelta, 'adafactor': Adafactor, 'adagrad': Adagrad, 'adam': Adam, 'adamw': AdamW, 'adamax': Adamax, 'ftrl': Ftrl, 'lion': Lion, 'nadam': Nadam, 'rmsprop': RMSprop, 'sgd': SGD}`
- `warning_manager = WarningManager()`
## BaseModel(attributes=None, n_alt=None, layers_dim=[25, 25], regularizer=None, regularization_rate=0.001, learning_rate=0.001, optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'], filename=None, warnings=True)
Abstract base class for all choice models.
| Parameter | Description |
|---|---|
| `attributes` | List with the attribute names in the model, in the same order as in the input data. If None, the model cannot be initialized unless it is loaded from a file. Default: None. |
| `n_alt` | Number of alternatives in the choice set. If None, the model cannot be initialized unless it is loaded from a file. Default: None. |
| `layers_dim` | List with the number of neurons in each hidden layer; the length of the list is the number of hidden layers. Default: [25, 25]. |
| `regularizer` | Type of regularization to apply. Possible values: 'l1', 'l2' or 'l1_l2'. Default: None. |
| `regularization_rate` | Regularization rate if `regularizer` is not None. Default: 0.001. |
| `learning_rate` | Learning rate of the optimizer. Default: 0.001. |
| `optimizer` | Optimizer to use. Can be either a string or a `tf.keras.optimizers.Optimizer`. Default: 'adam'. |
| `loss` | Loss function to use. Can be either a string or a `tf.keras.losses.Loss`. Default: 'categorical_crossentropy'. |
| `metrics` | List of metrics to be evaluated by the model during training and testing. Each of these can be either a string or a `tf.keras.metrics.Metric`. Default: ['accuracy']. |
| `filename` | Load a previously trained model from a file. If None, a new model will be initialized. When loading a model from a file, the previous parameters will be ignored. Default: None. |
| `warnings` | Whether to show warnings or not. Default: True. |
Source code in runn/models/base.py
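The constructor contract above (either `attributes` and `n_alt` are supplied, or `filename` points to a saved model and the other parameters are ignored) can be sketched with a plain-Python stand-in. The class below is hypothetical and only mirrors the documented argument handling, not the real implementation:

```python
# Hypothetical stand-in mirroring BaseModel's documented argument rules.
class BaseModelSketch:
    def __init__(self, attributes=None, n_alt=None, layers_dim=None,
                 optimizer='adam', filename=None):
        if filename is not None:
            # Loading from file: the remaining parameters are ignored.
            self.loaded_from = filename
            return
        if attributes is None or n_alt is None:
            raise ValueError(
                "attributes and n_alt are required unless loading from a file")
        self.attributes = attributes
        self.n_alt = n_alt
        # Two hidden layers of 25 neurons each by default:
        self.layers_dim = layers_dim if layers_dim is not None else [25, 25]
        self.optimizer = optimizer

m = BaseModelSketch(attributes=['price', 'time'], n_alt=3)
```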
## evaluate(x, y, **kwargs)

Returns the loss value and metrics values for the model for a given input.

| Parameter | Description |
|---|---|
| `x` | Input data. Can be a `tf.Tensor`, `np.ndarray` or `pd.DataFrame`. |
| `y` | The alternative selected by each decision maker in the sample `x`. Can be either a `tf.Tensor` or `np.ndarray`. It should be a 1D array with integers in the range [0, n_alt-1] or a 2D array with one-hot encoded alternatives. |
| `**kwargs` | Additional arguments passed to the keras model. See `tf.keras.Model.evaluate()` for details. |

| Returns | Description |
|---|---|
| `Union[float, list]` | Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). See `tf.keras.Model.evaluate()` for details. |
Source code in runn/models/base.py
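As noted above, `y` accepts either integer labels in `[0, n_alt-1]` or a one-hot matrix. A minimal NumPy sketch of the equivalence between the two encodings (here assuming `n_alt = 3`):

```python
import numpy as np

n_alt = 3
y_int = np.array([0, 2, 1, 2])    # 1D: chosen alternative per decision maker
y_onehot = np.eye(n_alt)[y_int]   # 2D: the one-hot encoded equivalent

# Both encodings identify the same choices:
assert (y_onehot.argmax(axis=1) == y_int).all()
```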
## fit(x, y, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, **kwargs)

Train the model.

| Parameter | Description |
|---|---|
| `x` | Input data. Can be a `tf.Tensor`, `np.ndarray` or `pd.DataFrame`. |
| `y` | The alternative selected by each decision maker in the sample `x`. Can be either a `tf.Tensor` or `np.ndarray`. It should be a 1D array with integers in the range [0, n_alt-1] or a 2D array with one-hot encoded alternatives. |
| `batch_size` | Number of samples per gradient update. If unspecified, `batch_size` will default to 32. |
| `epochs` | Number of epochs to train the model. An epoch is an iteration over the entire `x` and `y` data provided. Default: 1. |
| `verbose` | Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. Default: 1. |
| `callbacks` | List of `tf.keras.callbacks.Callback` instances to apply during training. See `tf.keras.callbacks` for details. Default: None. |
| `validation_split` | Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the `x` and `y` data provided, before shuffling. Default: 0.0. |
| `validation_data` | Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. This can be a tuple `(x_val, y_val)` or a tuple `(x_val, y_val, val_sample_weights)`. Default: None. |
| `**kwargs` | Additional arguments passed to the keras model. See `tf.keras.Model.fit()` for details. |
Source code in runn/models/base.py
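The `validation_split` rule above (the validation set is taken from the *last* samples, before any shuffling) can be sketched in NumPy; the split index calculation is an assumption that mirrors the documented Keras behavior:

```python
import numpy as np

x = np.arange(10).reshape(10, 1)   # 10 samples, 1 feature
validation_split = 0.2

# The last fraction of the data is held out, before any shuffling:
split_at = int(len(x) * (1 - validation_split))
x_train, x_val = x[:split_at], x[split_at:]
```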
## get_history()

Return the history of the model training.

| Returns | Description |
|---|---|
| `dict` | A dictionary containing the loss and metrics values at the end of each epoch. |
Source code in runn/models/base.py
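Since `get_history()` returns a plain dict keyed by loss/metric name with one value per epoch, it can be inspected directly. The history values below are made up for illustration:

```python
# Hypothetical dict in the shape get_history() is documented to return,
# after three training epochs:
history = {
    'loss': [1.02, 0.81, 0.74],
    'accuracy': [0.55, 0.63, 0.68],
}

# Epoch with the lowest training loss (0-indexed):
best_epoch = min(range(len(history['loss'])), key=history['loss'].__getitem__)
final_accuracy = history['accuracy'][-1]
```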
## get_utility(x, name)

*abstractmethod*

## load(path)

*abstractmethod*
## plot_model(filename=None, expand_nested=True, dpi=96)

Generate a graphical representation of the model.

| Parameter | Description |
|---|---|
| `filename` | File to which the plot will be saved. If None, the plot will only be displayed on screen. Default: None. |
| `expand_nested` | Whether to expand nested models into clusters. Default: True. |
| `dpi` | Resolution of the plot. Default: 96. |
Source code in runn/models/base.py
## predict(x, **kwargs)

Predict the choice probabilities for a given input.

| Parameter | Description |
|---|---|
| `x` | Input data. |
| `**kwargs` | Additional arguments passed to the keras model. See `tf.keras.Model.predict()` for details. |

| Returns | Description |
|---|---|
| `ndarray` | Numpy array with the choice probabilities for each alternative. |
Source code in runn/models/base.py
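Since `predict` returns one probability per alternative, turning those probabilities into a hard choice is a row-wise argmax. A NumPy sketch with made-up probabilities standing in for the model's output:

```python
import numpy as np

# Hypothetical predict() output for 3 decision makers and n_alt = 3;
# each row sums to 1:
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.2, 0.5, 0.3]])

chosen = probs.argmax(axis=1)   # index of the most likely alternative per row
```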
## save(path='model.zip')

*abstractmethod*
## summary(line_length=100, **kwargs)

Print a summary of the keras model.

| Parameter | Description |
|---|---|
| `line_length` | Total length of printed lines. Default: 100. |
| `**kwargs` | Additional arguments passed to the keras model. See `tf.keras.Model.summary()` for details. |