- For an ansatz of the form $H = (1-\gamma)\,H_{prob} + \gamma\,H_{mix}$, we expect better results when sampling from the distribution generated by the Hamiltonian by (a small construction sketch follows this list):
    - choosing $H_{mix}$ specifically to respect the symmetries present in $H_{prob}$, and
    - then fine-tuning the relative weight $\gamma$ to enhance the sampling procedure.
- Intuitively, we expect that a specifically designed $H_{mix}$ concentrates the sampling on the relevant part of the state space. However, restricting the sample space considerably might also shrink the search space too much, which we can get over by fine-tuning $\gamma$.
- We strongly believe that even for other $\gamma$ ranges the transition probability depends significantly on the structure of $H_{mix}$.
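The minimal sketch below shows how such a proposal Hamiltonian could be assembled as a dense matrix for a small instance, using the $H = (1-\gamma)H_{prob} + \gamma H_{mix}$ form stated above together with the vanilla mixer. The helper names (`single_site`, `random_ising`, `proposal_hamiltonian`) are illustrative, not part of any library.

```python
import numpy as np

# Single-qubit Paulis.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def single_site(op, i, n):
    """Operator `op` acting on qubit i of an n-qubit register (identity elsewhere)."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else I2)
    return out

def random_ising(n, rng):
    """H_prob = sum_{i<j} J_ij Z_i Z_j + sum_i h_i Z_i with Gaussian couplings."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        H += rng.normal() * single_site(Z, i, n)
        for j in range(i + 1, n):
            H += rng.normal() * single_site(Z, i, n) @ single_site(Z, j, n)
    return H

def vanilla_mixer(n):
    """H^v_mix = sum_i X_i."""
    return sum(single_site(X, i, n) for i in range(n))

def proposal_hamiltonian(H_prob, H_mix, gamma):
    """Assumed convention: H = (1 - gamma) * H_prob + gamma * H_mix."""
    return (1.0 - gamma) * H_prob + gamma * H_mix

# Small example instance.
rng = np.random.default_rng(0)
n = 4
H = proposal_hamiltonian(random_ising(n, rng), vanilla_mixer(n), gamma=0.5)
```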
**Confirming the effect of $\gamma$ variation without enhanced (informed?) $H_{mix}$ design.**
- Take the usual $H^{v}_{mix} = \sum_{i=0}^{n} X_{i}$ (a.k.a. the vanilla mixer) and obtain sampling results for a generic random Ising model Hamiltonian.
- Average the effect over different instances of the randomly generated Ising models.
- Run the simulations for different ranges of gamma: $\gamma_{perturbative} \approx \{0.01\}$, $\gamma_{low} \approx \{0.1, 0.2\}$, $\gamma_{mid} \approx \{0.5, 0.6\}$, $\gamma_{high} \approx \{0.8, 0.9\}$ (a sweep sketch follows this list).
- To observe: at high $\gamma$ the sampling algorithm will lose track of the information about the landscape it is supposed to sample. Aim to get an idea of the mid-$\gamma$ range in which we still manage to get good samples, or an upper bound $\gamma_{max}$ such that $\gamma > \gamma_{max}$ fails to produce good samples.
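A sweep over these $\gamma$ ranges could be scripted as below, assuming a quantum-enhanced-MCMC-style procedure (evolve the current basis state under $H$ for a fixed time, measure in the computational basis, then Metropolis-accept with respect to $H_{prob}$). The evolution time `t`, inverse temperature `beta`, and step counts are placeholders, and `random_ising` / `vanilla_mixer` are the helpers from the sketch above.

```python
import numpy as np
from scipy.linalg import expm

def quantum_proposal(H, s, t, rng):
    """Evolve |s> under H for time t, then measure in the computational basis."""
    dim = H.shape[0]
    psi = np.zeros(dim, dtype=complex)
    psi[s] = 1.0
    probs = np.abs(expm(-1j * t * H) @ psi) ** 2
    return rng.choice(dim, p=probs / probs.sum())

def run_chain(H_prob, H_mix, gamma, beta=1.0, t=2.0, steps=500, rng=None):
    """Metropolis chain with quantum proposals; returns the visited H_prob energies."""
    rng = rng or np.random.default_rng()
    H = (1.0 - gamma) * H_prob + gamma * H_mix
    s = rng.integers(H_prob.shape[0])
    energies = []
    for _ in range(steps):
        s_new = quantum_proposal(H, s, t, rng)
        dE = (H_prob[s_new, s_new] - H_prob[s, s]).real
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s = s_new
        energies.append(H_prob[s, s].real)
    return np.array(energies)

# Sweep the gamma ranges listed above, averaged over random Ising instances.
gammas = [0.01, 0.1, 0.2, 0.5, 0.6, 0.8, 0.9]
rng = np.random.default_rng(1)
n, n_instances = 4, 5
for gamma in gammas:
    mean_E = np.mean([run_chain(random_ising(n, rng), vanilla_mixer(n), gamma, rng=rng).mean()
                      for _ in range(n_instances)])
    print(f"gamma={gamma:.2f}  mean sampled energy={mean_E:+.3f}")
```

The mean sampled energy printed here is only a placeholder figure of merit; whatever metric is ultimately used to judge "good samples" can be swapped in at that point.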
**Confirming the effect of enhanced $H_{mix}$ design, for random Ising models, with $\gamma$ variation.**
- Design $H_{mix}$ from the set of multi-qubit interactions, i.e. $\{X_{i}X_{i+1},\ X_{i}X_{i+1}X_{i+2},\ X_{i}X_{j}, \dots\}$. Restrict the design to $X$ mixers only, for better interpretability (a mixer-construction sketch follows this list).
- Check the sampling results for the choice of enhanced mixers against their Pauli weight.
- Rerun the simulations for different ranges of gamma: $\gamma_{perturbative} \approx \{0.01\}$, $\gamma_{low} \approx \{0.1, 0.2\}$, $\gamma_{mid} \approx \{0.5, 0.6\}$, $\gamma_{high} \approx \{0.8, 0.9\}$.
- To observe: using $H_{mix}$ terms of higher Pauli weight allows for transitions of larger Hamming distance $d_{h}$. Although this lets the walk traverse local minima, it does not directly guarantee better sampling of the energy landscape, because the algorithm may fail to make small transitions when required and thus bypass the ground states. We need a strategy for picking the right $H_{mix}$ (or a scheme for varying it over iterations); combined with the information about the mid-$\gamma$ range from the previous experiment, this will let us balance the different types of transitions and drive the sampling algorithm more accurately through the landscape.
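One way to build the $X$-only mixers of increasing Pauli weight, and to check the maximum Hamming-distance jump each one couples directly, is sketched below (names are illustrative; these matrices plug straight into `run_chain` above in place of the vanilla mixer).

```python
import numpy as np

def x_string(indices, n):
    """Tensor product of X on the given qubits, identity elsewhere."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, X if k in indices else I2)
    return out

def x_mixer(weight, n):
    """Translation-invariant X mixer of a given Pauli weight,
    e.g. weight=2 -> sum_i X_i X_{i+1}, weight=3 -> sum_i X_i X_{i+1} X_{i+2}."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - weight + 1):
        H += x_string(set(range(i, i + weight)), n)
    return H

def max_hamming_jump(H_mix, tol=1e-12):
    """Largest Hamming distance d_h between basis states coupled by H_mix."""
    rows, cols = np.nonzero(np.abs(H_mix) > tol)
    return max(bin(int(s) ^ int(t)).count("1") for s, t in zip(rows, cols))

n = 5
for w in (1, 2, 3):
    print(f"Pauli weight {w}: max single-step Hamming distance = {max_hamming_jump(x_mixer(w, n))}")
```

Non-adjacent terms such as $X_{i}X_{j}$ from the set above can be added the same way by passing arbitrary index sets to `x_string`.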
**Confirming the effect of enhanced $H_{mix}$ design, for structured datasets, with $\gamma$ variation.**
- Get a close approximation of the ideal Hamiltonian $H_{D}$ for the dataset $D$, by some means (?).
- Design $H_{mix}$ such that it enhances transitions $\ket{s} \to \ket{s'}$ with $\ket{s}, \ket{s'} \in D$ over all other transitions (one hypothetical construction is sketched below).
- Compare the sampling by using $H_{D}$ as the problem Hamiltonian and contrasting an enhanced $H_{mix}$ against the vanilla mixer $H^{v}_{mix}$.
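Since the construction of the data-informed mixer is still open, the sketch below shows one hypothetical choice: directly couple every pair of dataset strings, $H^{D}_{mix} \propto \sum_{s \neq s',\ s,s' \in D} \ket{s'}\bra{s} + \mathrm{h.c.}$, so that transitions inside $D$ are enhanced over all others. This deliberately ignores the $X$-only restriction of the previous experiment and is meant only as a baseline to compare against $H^{v}_{mix}$.

```python
import numpy as np

def dataset_mixer(dataset_bitstrings, n):
    """Hypothetical data-informed mixer: couple every pair of dataset basis states
    |s>, |s'> in D, so transitions within D are enhanced over all other transitions."""
    dim = 2**n
    H = np.zeros((dim, dim), dtype=complex)
    idxs = [int(b, 2) for b in dataset_bitstrings]
    for a in idxs:
        for b in idxs:
            if a != b:
                H[a, b] = 1.0   # fills |a><b| and |b><a| symmetrically -> Hermitian
    return H

# Toy dataset D on n = 4 qubits (placeholder bitstrings).
D = ["0011", "0101", "1100"]
H_mix_D = dataset_mixer(D, n=4)
print("Nonzero couplings:", np.count_nonzero(H_mix_D))
```

Running `run_chain` with `H_mix_D` versus `vanilla_mixer(n)` on the same $H_{D}$ then gives the comparison described in the last bullet.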
- Run local sampling for multiple ~3-second runs.
- Sampling with interleaved mixers of different Pauli weights (see the sketch below).
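The interleaving note above could be prototyped by cycling through mixers of different Pauli weight across iterations and rebuilding the proposal Hamiltonian at each step; a sketch under the same assumptions as the earlier chain (round-robin schedule, placeholder parameters, `x_mixer` / `random_ising` from the sketches above):

```python
import numpy as np
from scipy.linalg import expm

def run_interleaved_chain(H_prob, mixers, gamma, beta=1.0, t=2.0, steps=500, rng=None):
    """Metropolis chain that cycles through a list of mixer Hamiltonians,
    rebuilding H = (1 - gamma) * H_prob + gamma * H_mix at every step."""
    rng = rng or np.random.default_rng()
    dim = H_prob.shape[0]
    s = rng.integers(dim)
    energies = []
    for step in range(steps):
        H_mix = mixers[step % len(mixers)]              # round-robin interleaving
        H = (1.0 - gamma) * H_prob + gamma * H_mix
        psi = np.zeros(dim, dtype=complex)
        psi[s] = 1.0
        probs = np.abs(expm(-1j * t * H) @ psi) ** 2
        s_new = rng.choice(dim, p=probs / probs.sum())
        dE = (H_prob[s_new, s_new] - H_prob[s, s]).real
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s = s_new
        energies.append(H_prob[s, s].real)
    return np.array(energies)

# Example: alternate small hops (weight 1) with large hops (weight 3).
# n = 4
# rng = np.random.default_rng(2)
# energies = run_interleaved_chain(random_ising(n, rng),
#                                  [x_mixer(1, n), x_mixer(3, n)], gamma=0.3, rng=rng)
```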