optimize¶
- torch_sim.runners.optimize(system, model, *, optimizer, convergence_fn=None, max_steps=10_000, steps_between_swaps=5, trajectory_reporter=None, autobatcher=False, pbar=False, init_kwargs=None, **optimizer_kwargs)[source]¶
Optimize a system using a model and optimizer.
- Parameters:
  - system (StateLike) – Input system to optimize (ASE Atoms, Pymatgen Structure, or SimState)
  - model (ModelInterface) – Neural network model module
  - optimizer (Optimizer | tuple) – Optimization algorithm function
  - convergence_fn (Callable | None) – Condition for convergence; should return a boolean tensor of length n_systems
  - trajectory_reporter (TrajectoryReporter | dict | None) – Optional reporter for tracking the optimization trajectory. If a dict, it will be passed to the TrajectoryReporter constructor.
  - autobatcher (InFlightAutoBatcher | bool) – Optional autobatcher to use. If False, the system will assume infinite memory and will not batch, but will still remove converged structures from the batch. If True, the system will estimate the available memory and batch accordingly. If an InFlightAutoBatcher, the system will use the provided autobatcher, but will reset its max_iterations to max_steps // steps_between_swaps.
  - max_steps (int) – Maximum number of total optimization steps
  - steps_between_swaps (int) – Number of steps to take before checking convergence and swapping out states
  - pbar (bool | dict[str, Any], optional) – Show a progress bar. Only works with an autobatcher in an interactive shell. If a dict is given, it is passed to tqdm as kwargs.
  - init_kwargs (dict[str, Any], optional) – Additional keyword arguments for the optimizer init function
  - **optimizer_kwargs – Additional keyword arguments for the optimizer step function
- Returns:
Optimized system state
- Return type:
T
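
A minimal usage sketch is shown below. It passes an ASE Atoms object directly to optimize; the Lennard-Jones model, the ts.unit_cell_fire optimizer, and their constructor arguments are illustrative assumptions rather than requirements of this function, and any ModelInterface implementation with a compatible optimizer can be substituted.

```python
# Sketch: relax a small ASE structure with torch_sim.runners.optimize.
# The model/optimizer names and constructor arguments below are assumptions
# for illustration; swap in any ModelInterface and supported optimizer.
from ase.build import bulk

import torch_sim as ts
from torch_sim.models.lennard_jones import LennardJonesModel  # assumed import path

atoms = bulk("Ar", "fcc", a=5.26, cubic=True)  # StateLike input (ASE Atoms)
model = LennardJonesModel(sigma=3.4, epsilon=0.0104)  # assumed constructor args

final_state = ts.optimize(
    system=atoms,                 # ASE Atoms, Pymatgen Structure, or SimState
    model=model,                  # any ModelInterface implementation
    optimizer=ts.unit_cell_fire,  # assumed optimizer; pass your chosen algorithm
    max_steps=1_000,              # cap on total optimization steps
    steps_between_swaps=5,        # check convergence every 5 steps
)
print(final_state.positions)     # optimized state returned by the runner
```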