SuperLearner wrapper for neural network using torch
A SuperLearner wrapper that implements a neural network using torch, with a softplus output activation to encourage strictly positive predictions. By default, the neural network uses a single hidden layer with 10 hidden units, the Adam optimizer with a learning rate of 0.01, an MSE loss function, and 100 training epochs.
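A minimal sketch of the architecture the description implies, using torch's `nn_sequential` API. The hidden-layer activation (here `nn_relu()`) and the input dimension `p` are assumptions for illustration; only the single 10-unit hidden layer, the softplus output, and the Adam optimizer with `lr = 0.01` come from the description above, and the wrapper's actual implementation may differ.

```r
library(torch)

p <- 2  # hypothetical number of input features (ncol of X)

# Single hidden layer (10 units), softplus output for strictly positive predictions
model <- nn_sequential(
  nn_linear(p, 10),
  nn_relu(),         # hidden activation assumed; not stated in the documentation
  nn_linear(10, 1),
  nn_softplus()      # maps outputs to (0, Inf)
)

# Adam optimizer and MSE loss, matching the stated defaults
optimizer <- optim_adam(model$parameters, lr = 0.01)
loss_fn   <- nn_mse_loss()
```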
Usage
SL.nnet.torch.softplus(
Y,
X,
newX,
n_hidden = 10,
learning_rate = 0.01,
loss_fn = torch::nn_mse_loss(),
epochs = 100,
lower = 1e-04,
...
)
Arguments
- Y
A numeric vector of outcome values.
- X
A numeric matrix or data.frame of covariates and treatment.
- newX
A numeric matrix or data.frame of predictors.
- n_hidden
Number of hidden units in the hidden layer, default is 10.
- learning_rate
Learning rate for the Adam optimizer, default is 0.01.
- loss_fn
The loss function used during training, default is MSE.
- epochs
The number of epochs for training, default is 100.
- lower
Lower bound to clip predictions to, default is 1e-4.
- ...
Any additional arguments.
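A short usage sketch showing the wrapper inside a SuperLearner ensemble. The data here are simulated and purely illustrative, and the sketch assumes the wrapper function and the `SuperLearner` and `torch` packages are available in the session:

```r
library(SuperLearner)

# Simulated data with a strictly positive outcome (hypothetical example)
set.seed(1)
n <- 200
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
Y <- exp(0.5 * X$x1 - 0.2 * X$x2 + rnorm(n, sd = 0.1))

# Combine the torch wrapper with a simple benchmark learner
fit <- SuperLearner(
  Y = Y, X = X,
  SL.library = c("SL.mean", "SL.nnet.torch.softplus")
)

# Ensemble predictions; the softplus output keeps them strictly positive
head(fit$SL.predict)
```

Because the wrapper follows the standard `SL.*` signature (`Y`, `X`, `newX`, `...`), it can also be called directly or passed to `CV.SuperLearner` like any other candidate learner.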