mlr3torch 0.3.0
Breaking Changes:
- The output dimension of neural networks for binary classification tasks is now
  expected to be 1 and not 2 as before. The behavior of `nn("head")` was also
  changed to match this. This means that for binary classification tasks,
  `t_loss("cross_entropy")` now generates `nn_bce_with_logits_loss` instead of
  `nn_cross_entropy_loss`. This also came with a reparametrization of the
  `t_loss("cross_entropy")` loss (thanks to @tdhock, #374).
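A minimal sketch of what the new behavior looks like in practice; `tsk("sonar")` and the shown parameter values are only illustrative assumptions, not part of the change itself:

```r
library(mlr3torch)

# Binary task: under the new behavior the network head has a single output
# neuron and t_loss("cross_entropy") dispatches to nn_bce_with_logits_loss.
learner <- lrn("classif.mlp",
  loss = t_loss("cross_entropy"),
  epochs = 1, batch_size = 32, neurons = 16
)
learner$train(tsk("sonar"))
```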
New Features:
PipeOps & Learners:
- Added `po("nn_identity")`.
- Added `po("nn_fn")` for calling custom functions in a network (see the sketch
  after this list).
- Added the FT Transformer model for tabular data.
- Added encoders for numeric and categorical features.
- `nn("block")` (which allows repeating the same network segment multiple
  times) now has an extra argument `trafo`, which allows modifying the
  parameter values per layer.
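A hedged sketch of how the new PipeOps could be combined in a graph; the `fn` argument name of `po("nn_fn")` and the chosen function are assumptions used purely for illustration:

```r
library(mlr3torch)
library(mlr3pipelines)

# A small network fragment: a linear layer followed by a custom function
# (here simply scaling the activations), applied via po("nn_fn").
graph <- po("torch_ingress_num") %>>%
  nn("linear", out_features = 16) %>>%
  po("nn_fn", fn = function(x) x * 0.5) %>>%
  nn("head")
```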
Callbacks:
- The context for callbacks now includes the network prediction (`y_hat`); see
  the sketch after this list.
- The `lr_one_cycle` callback now infers the total number of steps.
- The progress callback got a `digits` argument for controlling the precision
  with which validation/training scores are logged.
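A sketch of how the extended callback context could be used; `torch_callback()` and `t_clbk()` are existing mlr3torch helpers, while passing `digits` at construction time is an assumption based on the entry above:

```r
library(mlr3torch)

# Custom callback that inspects the network prediction (y_hat) now available
# in the callback context.
cb_debug <- torch_callback("debug_yhat",
  on_batch_end = function() {
    cat("prediction shape:", paste(dim(self$ctx$y_hat), collapse = "x"), "\n")
  }
)

# Progress callback with reduced logging precision (digits assumed to be
# settable at construction).
cb_progress <- t_clbk("progress", digits = 3)
```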
Other:
- `TorchIngressToken` can now also take a `Selector` as argument `features`
  (see the sketches after this list).
- Added function `lazy_shape()` to get the shape of a lazy tensor.
- Better error messages for MLP and TabResNet learners.
- TabResNet learner now supports lazy tensors.
- The `LearnerTorch` base class now supports the private method
  `$.ingress_tokens(task, param_vals)` for generating the `torch::dataset`.
- Shapes can now have multiple `NA`s, so not only the batch dimension can be
  missing. However, most `nn()` operators still expect only one missing value
  and will throw an error if multiple dimensions are unknown.
- Training no longer fails when encountering a missing value during validation;
  `NA` is used instead.
- It is now possible to specify parameter groups for optimizers via the
  `param_groups` parameter.
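Two hedged sketches for the entries above; aside from the `features` argument named in the changelog, the remaining `TorchIngressToken()` arguments and the `as_lazy_tensor()` conversion are assumptions used for illustration:

```r
library(mlr3torch)
library(mlr3pipelines)

# Ingress token whose features argument is a Selector instead of a fixed
# character vector of feature names.
ingress <- TorchIngressToken(
  features = selector_type("numeric"),
  batchgetter = batchgetter_num,
  shape = c(NA, 4)
)

# lazy_shape() returns the shape of a lazy tensor; the batch dimension is NA.
lt <- as_lazy_tensor(torch::torch_randn(10, 3))
lazy_shape(lt)  # c(NA, 3)
```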
Bug Fixes:
- fix: lazy tensors of length 0 can now be materialized.
- fix: `NA` is now a valid shape for lazy tensors.
- fix: The `lr_reduce_on_plateau` callback now works.
mlr3torch 0.2.1
Bug Fixes:
- `LearnerTorchModel` can now be parallelized and trained with encapsulation
  activated.
- `jit_trace` now works in combination with batch normalization.
- Ensures compatibility with `R6` version 2.6.0.
mlr3torch 0.2.0
Breaking Changes:
- Removed some optimizers for which no fast (‘ignite’) variant
exists.
- The default optimizer is now AdamW instead of Adam.
- The private `LearnerTorch$.dataloader()` method no longer operates on the
  task but on the dataset generated by the private `LearnerTorch$.dataset()`
  method.
- The `shuffle` parameter during model training is now initialized to `TRUE`
  to sidestep issues where data is sorted.
- Optimizers now use the faster (‘ignite’) implementations, which leads to
  considerable speed improvements.
- The `jit_trace` parameter was added to `LearnerTorch`, which when set to
  `TRUE` can lead to significant speedups. This should only be enabled for
  ‘static’ models; see the `torch` tutorial for more information. (A
  configuration sketch follows this list.)
- Added parameter `num_interop_threads` to `LearnerTorch`.
- The `tensor_dataset` parameter was added, which allows stacking all batches
  at the beginning of training to make subsequent batch loading faster.
- Use a faster default image loader.
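A hedged configuration sketch for the speed-related options named above; the parameter values are arbitrary and `tensor_dataset = TRUE` is assumed to be an accepted value:

```r
library(mlr3torch)

learner <- lrn("classif.mlp",
  epochs = 10, batch_size = 64, neurons = 32,
  jit_trace = TRUE,         # trace the static network with torch's JIT
  num_interop_threads = 2,  # size of torch's inter-op thread pool
  tensor_dataset = TRUE     # stack all batches once at the start of training
)
```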
Features:
- Added a `PipeOp` for adaptive average pooling.
- The `n_layers` parameter was added to the MLP learner.
- Added multimodal melanoma and cifar{10, 100} example tasks.
- Added a callback to iteratively unfreeze parameters for
finetuning.
- Added different learning rate schedulers as callbacks.
Bug Fixes:
- Torch learners can now be used with `AutoTuner`.
- Early stopping now uses `epochs - patience` for the internally tuned values
  instead of the trained number of `epochs` as before.
- The `dataset` of a learner must no longer return the tensors on the specified
  `device`, which allows for parallel dataloading on GPUs.
- `PipeOpBlock` should no longer create ID clashes with other `PipeOp`s in the
  graph (#260).
mlr3torch 0.1.2
- Don’t use deprecated `data_formats` anymore.
- Added `CallbackSetTB`, which allows logging that can be viewed by
  TensorBoard.
mlr3torch 0.1.1
- fix(preprocessing): fixed the construction of some `PipeOp`s such as
  `po("trafo_resize")`, which failed in some cases.
- fix(ci): tests were not run in the CI
- fix(learner):
LearnerTabResnet
now works correctly
- feat: added the `nn()` helper function to simplify the creation of neural
  network layers, as sketched below.
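A minimal illustration of the shorthand (shown here as a hedged sketch; the equivalence to the `po("nn_*")` constructors is how the helper is described above):

```r
library(mlr3torch)

nn("linear", out_features = 32)  # shorthand for po("nn_linear", out_features = 32)
nn("relu")                       # shorthand for po("nn_relu")
```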
mlr3torch 0.1.0