fix: typos in python files of directory qiskit/algorithms/optimizers/ (#11080) (#11081)

(cherry picked from commit 2c01ff8)

Co-authored-by: Surav Shrestha <[email protected]>
mergify[bot] and shresthasurav authored Oct 23, 2023
1 parent 2f76ef0 commit b27fd11
Showing 5 changed files with 10 additions and 10 deletions.
qiskit/algorithms/optimizers/gradient_descent.py (6 changes: 3 additions & 3 deletions)

@@ -197,7 +197,7 @@ def __init__(
perturbation in both directions (defaults to 1e-2 if required).
Ignored when we have an explicit function for the gradient.
Raises:
- ValueError: If ``learning_rate`` is an array and its lenght is less than ``maxiter``.
+ ValueError: If ``learning_rate`` is an array and its length is less than ``maxiter``.
"""
super().__init__(maxiter=maxiter)
self.callback = callback
@@ -250,7 +250,7 @@ def perturbation(self, perturbation: float | None) -> None:

def _callback_wrapper(self) -> None:
"""
- Wraps the callback function to accomodate GradientDescent.
+ Wraps the callback function to accommodate GradientDescent.
Will call :attr:`~.callback` and pass the following arguments:
current number of function values, current parameters, current function value,
@@ -295,7 +295,7 @@ def ask(self) -> AskData:

def tell(self, ask_data: AskData, tell_data: TellData) -> None:
"""
- Updates :attr:`.~GradientDescentState.x` by an ammount proportional to the learning
+ Updates :attr:`.~GradientDescentState.x` by an amount proportional to the learning
rate and value of the gradient at that point.
Args:
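For context on the ``tell`` docstring changed above: the update it describes is the standard gradient descent step, x_next = x - learning_rate * gradient. A minimal standalone sketch in plain NumPy (illustrative only; not code from this diff):

```python
# Sketch of the update tell() applies, assuming the standard gradient
# descent rule described in the docstring above.
import numpy as np

def gradient_descent_step(x, gradient, learning_rate):
    # Move against the gradient, scaled by the learning rate.
    return np.asarray(x) - learning_rate * np.asarray(gradient)

# Example: f(x) = x^2 has gradient 2x; five steps from x = 1.0
x = np.array([1.0])
for _ in range(5):
    x = gradient_descent_step(x, gradient=2 * x, learning_rate=0.1)
# x is now about 0.33, approaching the minimum at 0
```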
qiskit/algorithms/optimizers/scipy_optimizer.py (2 changes: 1 addition & 1 deletion)

@@ -122,7 +122,7 @@ def minimize(
jac: Callable[[POINT], POINT] | None = None,
bounds: list[tuple[float, float]] | None = None,
) -> OptimizerResult:
- # Remove ignored parameters to supress the warning of scipy.optimize.minimize
+ # Remove ignored parameters to suppress the warning of scipy.optimize.minimize
if self.is_bounds_ignored:
bounds = None
if self.is_gradient_ignored:
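The comment fixed above exists because scipy.optimize.minimize warns when it receives arguments the chosen method cannot use, such as a gradient for a gradient-free method. A hedged usage sketch of the ``minimize`` wrapper shown in this hunk (method name and result fields assume the qiskit.algorithms.optimizers API of this release):

```python
# Sketch only: calling SciPyOptimizer.minimize() as shown above.
import numpy as np
from qiskit.algorithms.optimizers import SciPyOptimizer

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

# COBYLA is gradient-free, so any jac argument would be ignored; the
# cleanup above keeps scipy from warning about such parameters.
optimizer = SciPyOptimizer(method="COBYLA")
result = optimizer.minimize(fun=objective, x0=np.array([0.0, 0.0]))
print(result.x, result.fun)  # approximately [1.0, -2.0] and 0.0
```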
qiskit/algorithms/optimizers/spsa.py (2 changes: 1 addition & 1 deletion)

@@ -328,7 +328,7 @@ def calibrate(
steps = 25
points = []
for _ in range(steps):
- # compute the random directon
+ # compute the random direction
pert = bernoulli_perturbation(dim)
points += [initial_point + c * pert, initial_point - c * pert]

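For context on the calibration loop above: SPSA samples a random direction with +/-1 entries and evaluates the objective at two points placed symmetrically around the current iterate. The real helper is ``bernoulli_perturbation`` in spsa.py; the standalone version below is an illustrative assumption, not the library's code:

```python
# Sketch of a Bernoulli +/-1 perturbation like the one sampled above.
import numpy as np

def bernoulli_perturbation(dim, rng=None):
    # Each entry is -1 or +1 with equal probability.
    rng = rng or np.random.default_rng()
    return 1 - 2 * rng.integers(0, 2, size=dim)

c = 0.1  # perturbation magnitude, as in the calibrate() loop
initial_point = np.zeros(3)
pert = bernoulli_perturbation(3)
# The two symmetric evaluation points used by SPSA:
points = [initial_point + c * pert, initial_point - c * pert]
```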
qiskit/algorithms/optimizers/steppable_optimizer.py (8 changes: 4 additions & 4 deletions)

@@ -71,7 +71,7 @@ class OptimizerState:
njev: int | None
"""Number of jacobian evaluations so far in the opimization."""
nit: int | None
"""Number of optmization steps performed so far in the optimization."""
"""Number of optimization steps performed so far in the optimization."""


class SteppableOptimizer(Optimizer):
@@ -81,7 +81,7 @@ class SteppableOptimizer(Optimizer):
This family of optimizers uses the `ask and tell interface
<https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/009_ask_and_tell.html>`_.
When using this interface the user has to call :meth:`~.ask` to get information about
- how to evaluate the fucntion (we are asking the optimizer about how to do the evaluation).
+ how to evaluate the function (we are asking the optimizer about how to do the evaluation).
This information is typically the next points at which the function is evaluated, but depending
on the optimizer it can also determine whether to evaluate the function or its gradient.
Once the function has been evaluated, the user calls the method :meth:`~..tell`
@@ -180,7 +180,7 @@ def ask(self) -> AskData:
It is the first method inside of a :meth:`~.step` in the optimization process.
Returns:
- An object containing the data needed to make the funciton evaluation to advance the
+ An object containing the data needed to make the function evaluation to advance the
optimization process.
"""
@@ -217,7 +217,7 @@ def evaluate(self, ask_data: AskData) -> TellData:

def _callback_wrapper(self) -> None:
"""
- Wraps the callback function to accomodate each optimizer.
+ Wraps the callback function to accommodate each optimizer.
"""
pass

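The class docstring edited above describes the ask-and-tell interface. A minimal driver loop following the pattern in the SteppableOptimizer documentation, with GradientDescent as the concrete optimizer (treat the exact method signatures as an assumption drawn from that docstring, not something verified by this commit):

```python
# Sketch of the ask/evaluate/tell loop described in the docstring above.
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def objective(x):
    return float(np.sum(x**2))

optimizer = GradientDescent(maxiter=20, learning_rate=0.1)
optimizer.start(fun=objective, x0=np.array([1.0, -0.5]))
for _ in range(20):
    ask_data = optimizer.ask()                          # where/how to evaluate
    tell_data = optimizer.evaluate(ask_data=ask_data)   # do the evaluation
    optimizer.tell(ask_data=ask_data, tell_data=tell_data)  # update state
result = optimizer.create_result()
print(result.x)  # close to [0, 0]
```

Calling ``optimizer.step()`` in the loop body is equivalent to the three explicit calls; the decomposed form is what the ask-and-tell interface exposes to users who want to control evaluation themselves.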
qiskit/algorithms/optimizers/umda.py (2 changes: 1 addition & 1 deletion)

@@ -49,7 +49,7 @@ class UMDA(Optimizer):
have been obtained [1]. UMDA seems to provide very good solutions for those circuits in which
the number of layers is not big.
- The optimization process can be personalized depending on the paremeters chosen in the
+ The optimization process can be personalized depending on the parameters chosen in the
initialization. The main parameter is the population size. The bigger it is, the final result
will be better. However, this increases the complexity of the algorithm and the runtime will
be much heavier. In the work [1] different experiments have been performed where population
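The docstring above presents population size as UMDA's main knob. A hedged construction sketch (the ``size_gen`` parameter name is an assumption about the UMDA API of this release):

```python
# Sketch only: a larger generation size explores more candidates per
# iteration, which the docstring says improves results at a runtime cost.
import numpy as np
from qiskit.algorithms.optimizers import UMDA

def objective(x):
    return float(np.sum((x - 0.5) ** 2))

optimizer = UMDA(maxiter=100, size_gen=40)
result = optimizer.minimize(fun=objective, x0=np.zeros(4))
print(result.x, result.fun)
```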