TensorFlow Federated 0.87.0
Added
- Added an implementation of AdamW to `tff.learning.optimizers`.
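
A minimal sketch of building and stepping the new optimizer. The factory name `build_adamw` and its `weight_decay` parameter are assumptions here, following the naming convention of the existing factories such as `build_adam`:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Assumed factory name, mirroring tff.learning.optimizers.build_adam.
optimizer = tff.learning.optimizers.build_adamw(
    learning_rate=0.01,
    weight_decay=0.004,
)

# TFF optimizers are functional: state is created explicitly from the
# weight specs and threaded through each call to `next`.
weights = (tf.constant([1.0, 2.0]),)
specs = tf.nest.map_structure(
    lambda t: tf.TensorSpec(t.shape, t.dtype), weights)
state = optimizer.initialize(specs)

gradients = (tf.constant([0.1, -0.2]),)
state, weights = optimizer.next(state, weights, gradients)
```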
Changed
- Support `None` gradients in `tff.learning.optimizers`. This mimics the
  behavior of `tf.keras.optimizers`: gradients that are `None` will be
  skipped, and their corresponding optimizer output (e.g. momentum and
  weights) will not be updated (see the first sketch after this list).
- The behavior of `DPGroupingFederatedSum::Clamp`: it now sets negatives
  to 0. Associated test code has been updated. Reason: the sensitivity
  calculation for DP noise was calibrated for non-negative values.
- Changed tutorials to use `tff.learning.optimizers` in conjunction with
  `tff.learning` computations.
- `tff.simulation.datasets.TestClientData` now only accepts dictionaries
  whose leaf nodes are not `tf.Tensor`s (see the second sketch after this
  list).
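
A sketch of the new `None`-gradient behavior using `build_sgdm` with momentum; the weight paired with a `None` gradient, along with its momentum slot, passes through unchanged. The weight values and structure are illustrative:

```python
import tensorflow as tf
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_sgdm(
    learning_rate=0.1, momentum=0.9)

weights = {'a': tf.constant([1.0]), 'b': tf.constant([2.0])}
specs = tf.nest.map_structure(
    lambda t: tf.TensorSpec(t.shape, t.dtype), weights)
state = optimizer.initialize(specs)

# 'b' has no gradient: it is skipped, and its weight and momentum
# accumulator are left untouched, mirroring tf.keras.optimizers.
gradients = {'a': tf.constant([0.5]), 'b': None}
state, weights = optimizer.next(state, weights, gradients)
# weights['a'] is updated (1.0 - 0.1 * 0.5 = 0.95); weights['b'] stays 2.0.
```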
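And a sketch of the `TestClientData` constraint: leaf values should be NumPy arrays (or Python lists) rather than `tf.Tensor`s. The client IDs and data below are made up:

```python
import numpy as np
import tensorflow_federated as tff

# Leaf nodes are NumPy arrays, not tf.Tensors, per the new constraint.
client_data = tff.simulation.datasets.TestClientData({
    'client_0': {'x': np.array([[1.0], [2.0]]), 'y': np.array([0, 1])},
    'client_1': {'x': np.array([[3.0]]), 'y': np.array([1])},
})
dataset = client_data.create_tf_dataset_for_client('client_0')
```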
Fixed
- A bug where `tff.learning.optimizers.build_adafactor` would update its
  step counter twice upon every invocation of `.next()`.
- A bug where tensor learning rates for `tff.learning.optimizers.build_sgdm`
  would fail with mixed-dtype gradients.
- A bug where different optimizers had different behavior on empty weight
  structures. TFF optimizers now consistently accept and function as no-ops
  on empty weight structures (see the sketch after this list).
- A bug where `tff.simulation.datasets.TestClientData.dataset_computation`
  yielded datasets of indeterminate shape.
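
A sketch of the now-uniform empty-structure behavior, assuming an empty tuple is a valid weight structure for `initialize` and `next`; `build_sgdm` stands in for any TFF optimizer:

```python
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.1)

# Empty weight structures are now accepted uniformly across optimizers;
# `initialize` and `next` act as no-ops on them.
state = optimizer.initialize(())
state, weights = optimizer.next(state, (), ())
assert weights == ()
```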
Removed
- `tff.jax_computation`; use `tff.jax.computation` instead.
- `tff.profiler`; this API is not used.
- Various stale tutorials.
- `structure` from `tff.program.SavedModelFileReleaseManager`'s
  `get_value` method parameters.
- Support for `tf.keras.optimizers` in `tff.learning` (see the migration
  sketch after this list).
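
Since `tff.learning` no longer accepts Keras optimizers, training algorithms should be given `tff.learning.optimizers` factories instead. A hedged migration sketch, with a made-up toy model and input spec:

```python
import tensorflow as tf
import tensorflow_federated as tff

def model_fn():
  # A tiny illustrative Keras model wrapped for TFF (shapes are made up).
  keras_model = tf.keras.Sequential(
      [tf.keras.layers.Dense(1, input_shape=(2,))])
  return tff.learning.models.from_keras_model(
      keras_model,
      input_spec=(
          tf.TensorSpec(shape=[None, 2], dtype=tf.float32),
          tf.TensorSpec(shape=[None, 1], dtype=tf.float32),
      ),
      loss=tf.keras.losses.MeanSquaredError())

# Before (no longer supported): a callable returning a Keras optimizer,
# e.g. client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.1).
# After: pass a TFF-native optimizer instead.
training_process = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn,
    client_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=0.1),
)
```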