Comparing changes

base repository: LemurPwned/cmtj
base: 1.6.0
head repository: LemurPwned/cmtj
compare: master

Commits on Dec 6, 2024

  1. ea42662
  2. Merge pull request #86 from LemurPwned/feat/workflow-fixes

     docker should install extra deps + workflow trigger fix
     LemurPwned authored Dec 6, 2024 (13a504e)

Commits on Dec 10, 2024

  1. a3bc83d
  2. Update cmtj/__init__.pyi

     Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
     LemurPwned and sourcery-ai[bot] authored Dec 10, 2024 (ea21ef3)
  3. Merge pull request #87 from LemurPwned/feat/add-noise-pyis

     adding 1/f noise binding .pyis
     LemurPwned authored Dec 10, 2024 (ff146a2)

Commits on Dec 13, 2024

  1. updating iDMI formula

     LemurPwned committed Dec 13, 2024 (b0392b8)

Commits on Dec 20, 2024

  1. 638bd57
  2. ui-fixes

     LemurPwned committed Dec 20, 2024 (a685105)
  3. Merge pull request #88 from LemurPwned/feat/stremlit-updates

     Feat/stremlit updates
     LemurPwned authored Dec 20, 2024 (f7164a9)

Commits on Dec 22, 2024

  1. add return field

     LemurPwned committed Dec 22, 2024 (ba2491e)
  2. 4b5aeeb
  3. adding a back scan

     LemurPwned committed Dec 22, 2024 (01cd4c3)
  4. adding a back scan

     LemurPwned committed Dec 22, 2024 (5961c63)
  5. remove invalid scaling

     LemurPwned committed Dec 22, 2024 (9fc916e)
  6. UI changes

     LemurPwned committed Dec 22, 2024 (09b2f1b)
  7. help updates

     LemurPwned committed Dec 22, 2024 (cb2b93a)
  8. Merge pull request #89 from LemurPwned/streamlit/ui

     Streamlit/UI -- branch sync
     LemurPwned authored Dec 22, 2024 (70210f0)

Commits on Dec 27, 2024

  1. 621f391

Commits on Dec 28, 2024

  1. notebook update

     LemurPwned committed Dec 28, 2024 (f7c2da0)

Commits on Jan 30, 2025

  1. 3547db3

Commits on Feb 8, 2025

  1. doc updates

     LemurPwned committed Feb 8, 2025 (434a73e)
  2. d5c5192
  3. ab4fe6b
  4. 3774445
  5. 159e2cc

Commits on Feb 14, 2025

  1. version bump

     LemurPwned committed Feb 14, 2025 (1fb9f14)
  2. Merge pull request #92 from LemurPwned/feat/new-vsd

     Feat/new vsd
     LemurPwned authored Feb 14, 2025 (870dbda)
  3. update license files

     LemurPwned committed Feb 14, 2025 (b7534de)
  4. Merge pull request #93 from LemurPwned/feat/update-setp

     Update setup license file field
     LemurPwned authored Feb 14, 2025 (45cb637)
  5. workflow fix

     LemurPwned committed Feb 14, 2025 (3f9deb0)
  6. Merge pull request #94 from LemurPwned/feat/update-setp

     workflow fix
     LemurPwned authored Feb 14, 2025 (b9c4706)

46 changes: 24 additions & 22 deletions .github/workflows/main.yml
@@ -1,36 +1,31 @@

name: Python Package Publication

on:
  pull_request:
    types: [closed]
    branches: [master]
  workflow_dispatch:
    inputs:
      release-version:
        required: true
      linux:
        type: boolean
        required: true
        default: true
      other-os:
        type: boolean
        required: true
        default: true
    paths:
      - '**.cpp'
      - '**.hpp'
      - '**.py'
      - 'setup.py'
      - 'setup.cfg'
      - 'pyproject.toml'

jobs:
  linux-build:
    if: ${{ inputs.linux }}
    runs-on: ubuntu-latest
    env:
      TWINE_USERNAME: __token__
      TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
        with:
          submodules: 'true'
      - name: Get version
        id: get_version
        run: |
          echo "version=$(python setup.py --version)" >> $GITHUB_OUTPUT
      - name: Python wheels manylinux stable build
        uses: RalfG/python-wheels-manylinux-build@v0.5.0
        with:
@@ -39,12 +34,11 @@ jobs:
      if: github.event.pull_request.merged == true || github.event_name == 'workflow_dispatch'
      run: |
        python -m pip install --upgrade pip
-       python -m pip install wheel setuptools twine
+       python -m pip install wheel setuptools twine packaging>=24.2
        twine upload dist/*-manylinux*.whl
      continue-on-error: false

  other-os-build:
    if: ${{ inputs.other-os }}
    runs-on: ${{ matrix.os }}
    env:
      TWINE_USERNAME: __token__
@@ -57,14 +51,18 @@ jobs:
      - uses: actions/checkout@v3
        with:
          submodules: 'true'
      - name: Get version
        id: get_version
        run: |
          echo "version=$(python setup.py --version)" >> $GITHUB_OUTPUT
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: build wheel
        run: |
          python -m pip install --upgrade pip
-         python -m pip install wheel setuptools twine
+         python -m pip install wheel setuptools twine packaging>=24.2
          python setup.py bdist_wheel
      - name: upload wheel
        if: github.event.pull_request.merged == true || github.event_name == 'workflow_dispatch'
@@ -77,12 +75,16 @@ jobs:
    needs: [ linux-build, other-os-build ]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Get version
        id: get_version
        run: |
          echo "version=$(python setup.py --version)" >> $GITHUB_OUTPUT
      - name: Create release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          tag: ${{ github.event.inputs.release-version }}
        run: |
-         gh release create "$tag" \
+         gh release create "v${{ steps.get_version.outputs.version }}" \
            --repo="$GITHUB_REPOSITORY" \
-           --title="${GITHUB_REPOSITORY#*/} ${tag#v}" \
+           --title="${GITHUB_REPOSITORY#*/} ${{ steps.get_version.outputs.version }}" \
            --generate-notes
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,10 @@
# Changelog

# 1.6.1

- Small fixes to the noise interfaces and resistance functions.
- Documentation updates: added a tutorial on custom dipole interactions + interface fixes and typing.

# 1.6.0

- Extended the `Stack` models allowing for non-symmetric coupling between devices.
13 changes: 13 additions & 0 deletions cmtj/__init__.pyi
@@ -479,6 +479,19 @@ class Layer:
"""
...

def createBufferedAlphaNoise(self, bufferSize: int) -> None:
"""Create a buffered alpha noise generator."""
...

def setAlphaNoise(self, alpha: float, std: float, scale: float, axis: Axis = Axis.all) -> None:
"""Set alpha noise for the layer.
:param alpha: Alpha parameter
:param std: Standard deviation
:param scale: Scale
:param axis: Axis, by default all axes are used
"""
...

def setAnisotropyDriver(self, driver: ScalarDriver) -> None:
"""Set anisotropy driver for the layer.
It's scalar. The axis is determined in the layer constructor"""
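
For orientation, a minimal sketch of how the new `Layer` noise bindings could be called; construction of the `Layer` itself is omitted because it is not part of this diff, and positional arguments are used since keyword support in the binding is not shown here.

    from cmtj import Axis, Layer

    def attach_alpha_noise(layer: Layer) -> None:
        # 1/f^alpha damping noise: exponent alpha, standard deviation, scale, axis (all axes)
        layer.setAlphaNoise(1.0, 1e-3, 1e-2, Axis.all)
        # pre-generate an internal sample buffer for the run (buffer size is illustrative)
        layer.createBufferedAlphaNoise(10000)
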
26 changes: 25 additions & 1 deletion cmtj/models/general_sb.py
@@ -861,8 +861,32 @@ def _compute_numerical_inverse(self, A_matrix):
        A_inv_np = np.linalg.inv(A_np)
        return sym.Matrix(A_inv_np)

    @lru_cache(maxsize=1000)  # noqa: B019
    def _compute_A_and_V_matrices(self, n, Vdc_ex_variable, H, frequency):
        A_matrix = sym.zeros(2 * n, 2 * n)
        V_matrix = sym.zeros(2 * n, 1)
        U = self.create_energy(H=H, volumetric=False)
        omega = sym.Symbol(r"\omega") if frequency is None else 2 * sym.pi * frequency
        for i, layer in enumerate(self.layers):
            rhs = layer.rhs_spherical_llg(U / layer.thickness, osc=True)
            alpha_factor = 1 + layer.alpha**2
            V_matrix[2 * i] = sym.diff(rhs[0] * alpha_factor, Vdc_ex_variable)
            V_matrix[2 * i + 1] = sym.diff(rhs[1] * alpha_factor, Vdc_ex_variable)
            theta, phi = layer.get_coord_sym()
            fn_theta = (omega * sym.I * theta - rhs[0]) * alpha_factor
            fn_phi = (omega * sym.I * phi - rhs[1]) * alpha_factor
            # the functions are only valid for that row i (theta) and i + 1 (phi),
            # so we only need to compute the derivatives for the other layers;
            # for the other layers, the derivatives are zero
            for j, layer_j in enumerate(self.layers):
                theta_, phi_ = layer_j.get_coord_sym()
                A_matrix[2 * i, 2 * j] = sym.diff(fn_theta, theta_)
                A_matrix[2 * i, 2 * j + 1] = sym.diff(fn_theta, phi_)
                A_matrix[2 * i + 1, 2 * j] = sym.diff(fn_phi, theta_)
                A_matrix[2 * i + 1, 2 * j + 1] = sym.diff(fn_phi, phi_)
        return A_matrix, V_matrix

    @lru_cache(maxsize=1000)  # noqa: B019
    def _compute_A_and_V_matrices_old(self, n, Vdc_ex_variable, H, frequency):
        A_matrix = sym.zeros(2 * n, 2 * n)
        V_matrix = sym.zeros(2 * n, 1)
        U = self.create_energy(H=H, volumetric=False)
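
Read as a linear-response construction (an interpretation of the code above, not a formula quoted from this diff), the cached method builds, for layer i with spherical coordinates (theta_i, phi_i), LLG right-hand side f and damping alpha_i,

    A_{2i,\,2j}   = \partial_{\theta_j}\left[(i\omega\,\theta_i - f_{\theta,i})\,(1+\alpha_i^2)\right], \qquad
    A_{2i,\,2j+1} = \partial_{\phi_j}\left[(i\omega\,\theta_i - f_{\theta,i})\,(1+\alpha_i^2)\right],
    V_{2i}   = \partial_{V_\mathrm{dc}}\left[f_{\theta,i}\,(1+\alpha_i^2)\right], \qquad
    V_{2i+1} = \partial_{V_\mathrm{dc}}\left[f_{\phi,i}\,(1+\alpha_i^2)\right],

with the analogous phi rows at index 2i+1; the harmonic amplitudes then presumably follow from x = A^{-1} V, using the numerical inverse helper above.
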
17 changes: 14 additions & 3 deletions cmtj/noise/__init__.pyi
@@ -7,10 +7,10 @@ class BufferedAlphaNoise:
    def fillBuffer(self) -> None:
        """Fill the buffer with the noise. This method is called only once."""
        ...

    def tick(self) -> float:
        """Produce the next sample of the noise."""
        ...
    pass

class VectorAlphaNoise:
    """Create a vector alpha noise generator. Alpha can be in [0, 2]."""
@@ -22,15 +22,26 @@ class VectorAlphaNoise:
        std: float,
        scale: float,
        axis: cmtj.Axis = cmtj.Axis.all,
-   ) -> None: ...
+   ) -> None:
+       """Kasdin algorithm for vector alpha noise generation.
+       :param bufferSize: Buffer size
+       :param alpha: Alpha parameter
+       :param std: Standard deviation
+       :param scale: Scale
+       :param axis: Axis, by default all axes are used
+       """
+       ...

    def getPrevSample(self) -> cmtj.CVector:
        """Get the previous sample of the noise in a vector form."""
        ...

    def getScale(self) -> float:
        """Get the scale of the noise."""
        ...

    def tick(self) -> float: ...
    def tickVector(self) -> cmtj.CVector:
        """Get the next sample of the noise in a vector form."""
        ...
    pass
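
A minimal usage sketch of the stubs above, assuming the `VectorAlphaNoise` constructor takes (bufferSize, alpha, std, scale, axis) in the order its docstring lists; the numeric values are illustrative only.

    from cmtj import Axis
    from cmtj.noise import VectorAlphaNoise

    # 1/f-type vector noise: 2**14-sample buffer, exponent alpha = 1.0,
    # unit standard deviation, small overall scale, applied on all axes
    noise = VectorAlphaNoise(2**14, 1.0, 1.0, 1e-3, Axis.all)

    samples = [noise.tickVector() for _ in range(1000)]  # one cmtj.CVector per call
    previous = noise.getPrevSample()   # most recent sample, in vector form
    scale = noise.getScale()           # scale of the noise
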
11 changes: 10 additions & 1 deletion cmtj/reservoir/__init__.pyi
@@ -159,5 +159,14 @@ def computeDipoleInteractionNoumra(
    ...

def nullDipoleInteraction(r1: cmtj.CVector, r2: cmtj.CVector, layer1: cmtj.Layer, layer2: cmtj.Layer) -> cmtj.CVector:
-   """Compute null dipole interaction between two junctions."""
+   """Compute null dipole interaction between two junctions.
+   This is a placeholder function that returns a zero vector.
+   :param r1: Position vector of the first junction
+   :param r2: Position vector of the second junction
+   :param layer1: Magnetic layer of the first junction
+   :param layer2: Magnetic layer of the second junction
+   :return: Zero vector
+   """
    ...
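
A hypothetical sketch of where the placeholder fits: it shares its signature with the other dipole-interaction functions in this module, so it can stand in wherever a dipole-coupling callback is expected and the coupling should be switched off. The `Layer` construction and the consuming API are not part of this diff.

    import cmtj
    from cmtj.reservoir import nullDipoleInteraction

    r1 = cmtj.CVector(0.0, 0.0, 0.0)      # position of the first junction
    r2 = cmtj.CVector(100e-9, 0.0, 0.0)   # position of the second junction
    # layer1 and layer2 would be existing cmtj.Layer instances (construction omitted)
    # zero_coupling = nullDipoleInteraction(r1, r2, layer1, layer2)  # returns a zero CVector
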
38 changes: 37 additions & 1 deletion cmtj/utils/parallel.py
@@ -5,7 +5,9 @@
from multiprocess import Pool
from tqdm import tqdm

-__all__ = ["distribute"]
+from ..models.general_sb import LayerDynamic
+
+__all__ = ["distribute", "parallel_vsd_sb_model"]


def distribute(
@@ -47,3 +49,37 @@ def func_wrapper(iterable):
            iterable, output = result
            indx = indexes[iterables.index(iterable)]
            yield indx, output


def parallel_vsd_sb_model(
    simulation_fn: Callable,
    frequencies: list[float],
    Hvecs: list[list[float]],
    layers: list[LayerDynamic],
    J1: list[float] = None,
    J2: list[float] = None,
    iDMI: list[float] = None,
    n_cores: int = None,
):
    """
    Parallelise the VSD SB model.
    :param simulation_fn: function to be distributed.
        This function must take a tuple of arguments, where the first argument is the
        frequency, then the H vectors, the list of layers, and finally the lists of J1, J2 and iDMI values.
    :param frequencies: list of frequencies
    :param Hvecs: list of H vectors in Cartesian coordinates
    :param layers: list of layers
    :param J1: list of J1 values
    :param J2: list of J2 values
    :param iDMI: list of iDMI values
    :param n_cores: number of cores to use.
    :returns: list of simulation_fn outputs for each frequency
    """
    if J1 is None:
        J1 = [0] * (len(layers) - 1)
    if J2 is None:
        J2 = [0] * (len(layers) - 1)
    if iDMI is None:
        iDMI = [0] * (len(layers) - 1)
    args = [(f, Hvecs, *layers, J1, J2, iDMI) for f in frequencies]
    with Pool(processes=n_cores) as pool:
        return list(tqdm(pool.imap(simulation_fn, args), total=len(frequencies)))
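
A minimal sketch of driving the new helper. The worker `vsd_step` is hypothetical: it only demonstrates the tuple unpacking order the docstring describes (frequency, H vectors, the layers, then J1, J2, iDMI) and returns placeholder values; the actual SB/VSD computation and the `LayerDynamic` construction are outside this diff.

    from cmtj.utils.parallel import parallel_vsd_sb_model

    def vsd_step(args):
        # unpacking below assumes a two-layer stack; adjust to the number of layers passed in
        frequency, Hvecs, layer_a, layer_b, J1, J2, iDMI = args
        # ... run the SB/VSD calculation for each H vector here (omitted) ...
        return [0.0 for _ in Hvecs]  # placeholder output, one value per field point

    # layer_a and layer_b would be cmtj.models.general_sb.LayerDynamic instances (construction omitted)
    # results = parallel_vsd_sb_model(
    #     vsd_step,
    #     frequencies=[f * 1e9 for f in range(1, 21)],
    #     Hvecs=[[0.0, 0.0, h] for h in range(0, 100000, 5000)],
    #     layers=[layer_a, layer_b],
    #     n_cores=4,
    # )
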