Meta
- ivy.fomaml_step(batch, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps, inner_learning_rate, /, *, inner_optimization_step=<function gradient_descent_update>, inner_batch_fn=None, outer_batch_fn=None, average_across_steps=False, batched=True, inner_v=None, keep_inner_v=True, outer_v=None, keep_outer_v=True, return_inner_v=False, num_tasks=None, stop_gradients=True)
Perform a step of first-order MAML.
- Parameters:
  batch (Container) – The input batch.
  inner_cost_fn (Callable) – Callable for the inner loop cost function, receiving the sub-batch, inner vars and outer vars.
  outer_cost_fn (Callable) – Callable for the outer loop cost function, receiving the task-specific sub-batch, inner vars and outer vars. If None, the cost from the inner loop will also be optimized in the outer loop.
  variables (Container) – Variables to be optimized during the meta step.
  inner_grad_steps (int) – Number of gradient steps to perform during the inner loop.
  inner_learning_rate (float) – The learning rate of the inner loop.
  inner_optimization_step (Callable, default: <function gradient_descent_update>) – The function used for the inner loop optimization. Default is ivy.gradient_descent_update.
  inner_batch_fn (Optional[Callable], default: None) – Function to apply to the task sub-batch before passing it to the inner_cost_fn. Default is None.
  outer_batch_fn (Optional[Callable], default: None) – Function to apply to the task sub-batch before passing it to the outer_cost_fn. Default is None.
  average_across_steps (bool, default: False) – Whether to average the inner loop steps for the outer loop update. Default is False.
  batched (bool, default: True) – Whether to batch along the time dimension and run the meta steps in batch. Default is True.
  inner_v (Optional[Container], default: None) – Nested variable keys to be optimized during the inner loop, with the same keys and boolean values. Default is None.
  keep_inner_v (bool, default: True) – If True, the key chains in inner_v will be kept; otherwise they will be removed. Default is True.
  outer_v (Optional[Container], default: None) – Nested variable keys to be optimized during the outer loop, with the same keys and boolean values. Default is None.
  keep_outer_v (bool, default: True) – If True, the key chains in outer_v will be kept; otherwise they will be removed. Default is True.
  return_inner_v (Union[str, bool], default: False) – Either 'first', 'all', or False. If 'first', the variables for the first task inner loop will also be returned; if 'all', the variables for all tasks will be returned. Default is False.
  num_tasks (Optional[int], default: None) – Number of unique tasks to inner-loop optimize for the meta step. Determined from the batch by default.
  stop_gradients (bool, default: True) – Whether to stop the gradients of the cost. Default is True.
- Return type:
  Tuple
- Returns:
  ret – The cost and the gradients with respect to the outer loop variables.
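As a usage sketch (mirroring the maml_step example below, with toy cost functions of our own choosing), a first-order meta step can be taken as follows; passing outer_cost_fn as None means the inner cost is also optimized in the outer loop:

>>> import ivy
>>> from ivy.functional.ivy.gradients import _variable
>>> ivy.set_backend("torch")
>>> def inner_cost_fn(sub_batch, v):
...     return sub_batch.mean().x / v.mean().latent
>>> num_tasks = 2
>>> batch = ivy.Container({"x": ivy.arange(1, num_tasks + 1, dtype="float32")})
>>> variables = ivy.Container({
...     "latent": _variable(ivy.repeat(ivy.array([[1.0]]), num_tasks, axis=0))
... })
>>> # first-order meta step: 5 inner steps with inner learning rate 0.01
>>> result = ivy.fomaml_step(batch, inner_cost_fn, None, variables, 5, 0.01)

Here result holds the cost and the gradients with respect to the outer loop variables (the printed values are omitted in this sketch).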
- ivy.maml_step(batch, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps, inner_learning_rate, /, *, inner_optimization_step=<function gradient_descent_update>, inner_batch_fn=None, outer_batch_fn=None, average_across_steps=False, batched=True, inner_v=None, keep_inner_v=True, outer_v=None, keep_outer_v=True, return_inner_v=False, num_tasks=None, stop_gradients=True)
Perform a step of vanilla second-order MAML.
- Parameters:
  batch (Container) – The input batch.
  inner_cost_fn (Callable) – Callable for the inner loop cost function, receiving the sub-batch, inner vars and outer vars.
  outer_cost_fn (Callable) – Callable for the outer loop cost function, receiving the task-specific sub-batch, inner vars and outer vars. If None, the cost from the inner loop will also be optimized in the outer loop.
  variables (Container) – Variables to be optimized during the meta step.
  inner_grad_steps (int) – Number of gradient steps to perform during the inner loop.
  inner_learning_rate (float) – The learning rate of the inner loop.
  inner_optimization_step (Callable, default: <function gradient_descent_update>) – The function used for the inner loop optimization. Default is ivy.gradient_descent_update.
  inner_batch_fn (Optional[Callable], default: None) – Function to apply to the task sub-batch before passing it to the inner_cost_fn. Default is None.
  outer_batch_fn (Optional[Callable], default: None) – Function to apply to the task sub-batch before passing it to the outer_cost_fn. Default is None.
  average_across_steps (bool, default: False) – Whether to average the inner loop steps for the outer loop update. Default is False.
  batched (bool, default: True) – Whether to batch along the time dimension and run the meta steps in batch. Default is True.
  inner_v (Optional[Container], default: None) – Nested variable keys to be optimized during the inner loop, with the same keys and boolean values. Default is None.
  keep_inner_v (bool, default: True) – If True, the key chains in inner_v will be kept; otherwise they will be removed. Default is True.
  outer_v (Optional[Container], default: None) – Nested variable keys to be optimized during the outer loop, with the same keys and boolean values. Default is None.
  keep_outer_v (bool, default: True) – If True, the key chains in outer_v will be kept; otherwise they will be removed. Default is True.
  return_inner_v (Union[str, bool], default: False) – Either 'first', 'all', or False. If 'first', the variables for the first task inner loop will also be returned; if 'all', the variables for all tasks will be returned. Default is False.
  num_tasks (Optional[int], default: None) – Number of unique tasks to inner-loop optimize for the meta step. Determined from the batch by default.
  stop_gradients (bool, default: True) – Whether to stop the gradients of the cost. Default is True.
- Return type:
  Tuple
- Returns:
  ret – The cost and the gradients with respect to the outer loop variables.
Examples

With ivy.Container input:

>>> import ivy
>>> from ivy.functional.ivy.gradients import _variable
>>> ivy.set_backend("torch")
>>> def inner_cost_fn(sub_batch, v):
...     return sub_batch.mean().x / v.mean().latent
>>> def outer_cost_fn(sub_batch, v):
...     return sub_batch.mean().x / v.mean().latent
>>> num_tasks = 2
>>> batch = ivy.Container({"x": ivy.arange(1, num_tasks + 1, dtype="float32")})
>>> variables = ivy.Container({
...     "latent": _variable(ivy.repeat(ivy.array([[1.0]]), num_tasks, axis=0))
... })
>>> cost = ivy.maml_step(batch, inner_cost_fn, outer_cost_fn, variables, 5, 0.01)
>>> print(cost)
(ivy.array(1.40069818), {
    latent: ivy.array([-1.13723135])
}, ())
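The returned gradients can then be used to complete the outer (meta) update. As a minimal sketch, unpacking the returned tuple and assuming a hypothetical meta learning rate meta_lr (not part of the API above), the update can be applied with ivy.gradient_descent_update:

>>> cost, grads, _ = ivy.maml_step(batch, inner_cost_fn, outer_cost_fn, variables,
...                                5, 0.01)
>>> meta_lr = 0.1  # hypothetical meta (outer loop) learning rate
>>> # apply the outer loop gradients to the meta variables
>>> variables = ivy.gradient_descent_update(variables, grads, meta_lr)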
- ivy.reptile_step(batch, cost_fn, variables, inner_grad_steps, inner_learning_rate, /, *, inner_optimization_step=<function gradient_descent_update>, batched=True, return_inner_v=False, num_tasks=None, stop_gradients=True)
Perform a step of Reptile.
- Parameters:
  batch (Container) – The input batch.
  cost_fn (Callable) – The cost function that receives the task-specific sub-batch and variables, and returns the cost.
  variables (Container) – Variables to be optimized.
  inner_grad_steps (int) – Number of gradient steps to perform during the inner loop.
  inner_learning_rate (float) – The learning rate of the inner loop.
  inner_optimization_step (Callable, default: <function gradient_descent_update>) – The function used for the inner loop optimization. It takes the learnable weights, the derivative of the cost with respect to the weights, and the learning rate as arguments, and returns the updated variables. Default is gradient_descent_update.
  batched (bool, default: True) – Whether to batch along the time dimension and run the meta steps in batch. Default is True.
  return_inner_v (Union[str, bool], default: False) – Either 'first', 'all', or False. If 'first', the variables for the first task inner loop will also be returned. If 'all', variables for all tasks will be returned. Default is False.
  num_tasks (Optional[int], default: None) – Number of unique tasks to inner-loop optimize for the meta step. Determined from the batch by default.
  stop_gradients (bool, default: True) – Whether to stop the gradients of the cost. Default is True.
- Return type:
  Tuple
- Returns:
  ret – The cost, the gradients with respect to the outer loop variables, and additional information from the inner loop optimization.
Examples

With ivy.Container input:

>>> from ivy.functional.ivy.gradients import gradient_descent_update
>>> import ivy
>>> from ivy.functional.ivy.gradients import _variable
>>> ivy.set_backend("torch")
>>> def inner_cost_fn(batch_in, v):
...     return batch_in.mean().x / v.mean().latent
>>> num_tasks = 2
>>> batch = ivy.Container({"x": ivy.arange(1, num_tasks + 1, dtype="float32")})
>>> variables = ivy.Container({
...     "latent": _variable(ivy.repeat(ivy.array([[1.0]]), num_tasks, axis=0))
... })
>>> cost, gradients = ivy.reptile_step(batch, inner_cost_fn, variables, 5, 0.01,
...                                    num_tasks=num_tasks)
>>> print(cost)
ivy.array(1.4485182)
>>> print(gradients)
{
    latent: ivy.array([-139.9569855])
}

>>> batch = ivy.Container({"x": ivy.arange(1, 4, dtype="float32")})
>>> variables = ivy.Container({
...     "latent": _variable(ivy.array([1.0, 2.0]))
... })
>>> cost, gradients, firsts = ivy.reptile_step(batch, inner_cost_fn, variables, 4,
...                                            0.025, batched=False, num_tasks=2,
...                                            return_inner_v='first')
>>> print(cost)
ivy.array(0.9880483)
>>> print(gradients)
{
    latent: ivy.array([-13.01766968, -13.01766968])
}
>>> print(firsts)
{
    latent: ivy.array([[1.02197957, 2.02197981]])
}
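As with the MAML variants, the returned gradients can be applied to the variables to complete the meta update. A minimal sketch, reusing the gradient_descent_update imported at the start of this example and a hypothetical meta learning rate:

>>> meta_lr = 0.1  # hypothetical meta (outer loop) learning rate
>>> # apply the Reptile gradients to the meta variables
>>> variables = gradient_descent_update(variables, gradients, meta_lr)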
This should have hopefully given you an overview of the meta submodule! If you have any questions, please feel free to reach out on our Discord!