Update magnetic STEM-DPC notebook #78

Open · wants to merge 1 commit into base: master
10 STEM DPC Analysis of Magnetic Sample.ipynb (120 changes: 45 additions & 75 deletions)
@@ -2,7 +2,9 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"tags": []
},
"source": [
"# Analysing magnetic materials using STEM-DPC\n",
"\n",
@@ -11,18 +13,22 @@
"The data we'll be looking at there is from the paper **Strain Anisotropy and Magnetic Domains in Embedded Nanomagnets**, and is STEM data recorded on a Merlin fast pixelated electron detector system, where the objective lens has been turned off.\n",
"This allows for magnetic information to be extracted, by carefully mapping the beam shifts.\n",
"\n",
"More documentation about pyXem is found at https://pyxem.github.io/pyxem-website/\n",
"More documentation about pyXem is found at https://pyxem.readthedocs.io/en/latest/\n",
"\n",
"Journal article:\n",
"* **Strain Anisotropy and Magnetic Domains in Embedded Nanomagnets**\n",
"* Nord, M., Semisalova, A., Kákay, A., Hlawacek, G., MacLaren, I., Liersch, V., Volkov, O. M., Makarov, D., Paterson, G. W., Potzger, K., Lindner, J., Fassbender, J., McGrouther, D., Bali, R.\n",
"* Small 2019, 15, 1904738. https://doi.org/10.1002/smll.201904738\n",
"* Open data, including scripts: https://zenodo.org/record/3466591\n",
"\n",
"The full dataset and scripts used in analysing this data is found at Zenodo: https://zenodo.org/record/3466591\n",
"-----------------------\n",
"\n",
"This notebook has been modified to use a cropped version of the data with only the infromation about the zero beam.\n",
"This notebook has been modified to use a cropped version of the data with only the direct beam.\n",
"\n",
"Refer to the link above to look at the entire dataset."
"For the full datasets, see: https://zenodo.org/record/4312960:\n",
"* Small version, 640 MB: https://zenodo.org/record/4312960/files/fe60al40_stripe_pattern_very_small_dataset.hspy?download=1\n",
"* Medium version, 2 GB: https://zenodo.org/record/4312960/files/fe60al40_stripe_pattern_small_dataset.hspy?download=1\n",
"* Large version: 6 GB: https://zenodo.org/record/4312960/files/fe60al40_stripe_pattern.hspy?download=1"
]
},
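As a hedged aside that is not in the notebook itself: the reason mapping beam shifts gives magnetic contrast is the Lorentz deflection of the probe by the in-plane magnetic induction. For a sample of thickness $t$ with uniform in-plane induction $B_\perp$, the usual small-angle estimate of the deflection is

$$\beta_L = \frac{e \lambda t B_\perp}{h}$$

where $e$ is the elementary charge, $\lambda$ the electron wavelength and $h$ Planck's constant; the thickness and induction values are sample-specific and not taken from this dataset.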
{
@@ -80,7 +86,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As the file is pretty big, we load it lazily."
"Datasets such as these are typically very large, so we would load them lazily. This means the whole dataset is not loaded into memory at the same time. \n",
"\n",
"The dataset included in this demo has been greatly reduced, so we don't really have to use `lazy` processing here. But this notebook will use `lazy` processing to show how pyxem can be used to processed large datasets.\n",
"\n",
"If you want to look at the full dataset, see https://zenodo.org/record/4312960."
]
},
{
@@ -89,7 +99,7 @@
"metadata": {},
"outputs": [],
"source": [
"s = hs.load('data/10/fe60al40_stripe_pattern_very_small_dataset_cropped.hspy')"
"s = hs.load('data/10/fe60al40_stripe_pattern_very_small_dataset_cropped.hspy', lazy=True)"
]
},
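A minimal sketch (not part of the notebook) of what the lazy workflow gives you, assuming the cropped file shipped with this demo; for lazy signals `s.data` is a dask array, and only the slices you ask for are read from disk:

```python
import hyperspy.api as hs

# Load lazily: only metadata and the dask chunk layout are read at this point
s = hs.load('data/10/fe60al40_stripe_pattern_very_small_dataset_cropped.hspy', lazy=True)
print(s)              # lazy 4-D signal, no data in memory yet
print(s.data.chunks)  # dask chunking of the underlying array

# Slicing first, then computing, reads only a small piece from disk
s_small = s.inav[:10, :10]  # first 10 x 10 probe positions
s_small.compute()           # now an ordinary in-memory signal
```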
{
@@ -101,15 +111,6 @@
"s"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s.plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -121,7 +122,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Lets first have a look at the data, using the `slider` navigator."
"Lets first have a look at the data, using the `slider` navigator. This means only a small part of dataset is loaded at the same time."
]
},
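The plotting cell that follows in the notebook is collapsed in this diff; as an illustrative sketch (assuming HyperSpy's built-in `'slider'` navigator option), the call being described is along these lines:

```python
# Step through the probe positions with sliders; only the currently
# selected diffraction pattern needs to be read from disk.
s.plot(navigator='slider')
```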
{
@@ -137,7 +138,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The direct beam is at (x, y) = (128, 125). Lets create a navigation signal from a single pixel in the direct beam."
"The direct beam is at (x, y) = (30, 31). Lets create a navigation signal from a single pixel in the direct beam."
]
},
{
@@ -146,14 +147,14 @@
"metadata": {},
"outputs": [],
"source": [
"s_nav = s.isig[33, 29].T"
"s_nav = s.isig[30, 31].T"
]
},
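A short, hedged note on the indexing used in the cell above: `isig` slices the signal (detector) axes, `inav` slices the navigation (probe) axes, and `.T` swaps the two roles, so a single detector pixel becomes a 2-D image over probe positions that can serve as a navigator:

```python
# Intensity of detector pixel (30, 31) at every probe position, transposed
# into a 2-D navigation-style image (this mirrors the cell above)
s_nav = s.isig[30, 31].T
s_nav.compute()  # optional: it is tiny, so it can comfortably live in memory
```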
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This was pretty quick, thanks to how the chunking is for this dataset: (16, 16, 16, 16). So only about 0.4% of the full dataset had to be loaded into memory.\n",
"This was pretty quick, as only a very small part of the dataset had to be loaded into memory.\n",
"\n",
"Plotting it, we can see the nanocrystalline structure of the material."
]
@@ -187,33 +188,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"What we're interested in here is the movement of the centre beam. To make the plotting more responsive, we can grab just the centre of the diffraction patterns.\n",
"What we're interested in here is the movement of the centre beam. Move the navigator up and down in the centre of the sample, and look for small changes in the beam position.\n",
"\n",
"Move the navigator up and down in the centre of the sample, and look for small changes in the beam position."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s.plot(navigator=s_nav)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It can be really tricky noticing any systematic beam shifts. A better way is transposing the navigation dimensions."
]
},
@@ -231,8 +207,7 @@
"A powerful feature in pyXem, which is inherited from HyperSpy, is the ability to \"flip\" the navigation dimensions.\n",
"So instead of navigating across the probe position, you can navigate over the detector positions.\n",
"\n",
"Firstly, lets make a navigation signal for that, by averaging several of the diffraction patterns.\n",
"Here we use the `s_crop` signal from earlier."
"Firstly, lets make a navigation signal for that, by averaging several of the diffraction patterns."
]
},
{
@@ -249,7 +224,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we transpose `s_crop`, and use `s_diff_nav` for navigation"
"Then we transpose `s`, and use `s_diff_nav` for navigation"
]
},
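The cells that actually build `s_diff_nav` and `s_transpose` are collapsed in this diff. A rough sketch of the idea, assuming HyperSpy's default of reducing over the navigation axes when `mean()` is called without arguments; the exact probe-position block used in the notebook may differ:

```python
# Average the diffraction patterns from a block of probe positions to get a
# single detector-plane image that can act as a navigator
s_diff_nav = s.inav[:10, :10].mean()
s_diff_nav.compute()  # small 2-D image, fine to hold in memory

# Swap navigation and signal dimensions: we now navigate over detector pixels,
# and each "signal" shows that pixel's intensity as a function of probe position
s_transpose = s.T
# Plotted with s_transpose.plot(navigator=s_diff_nav) in the next code cell
```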
{
@@ -274,7 +249,6 @@
"metadata": {},
"outputs": [],
"source": [
"#%matplotlib notebook\n",
"s_transpose.plot(navigator=s_diff_nav)"
]
},
@@ -298,9 +272,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"There are many ways of extracting the beam shift. One way is using center of mass, which is a fairly simple method.\n",
"\n",
"Again, we use `s_crop`, since we're only interested in the direct beam."
"There are many ways of extracting the beam shift. One way is using center of mass, which is a fairly simple method."
]
},
{
@@ -309,7 +281,7 @@
"metadata": {},
"outputs": [],
"source": [
"s_crop_com = s.center_of_mass()"
"s_com = s.center_of_mass()"
]
},
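For readers who want the arithmetic spelled out, here is a hedged NumPy sketch of what a centre-of-mass estimate means for a single pattern; pyxem's `center_of_mass()` does the equivalent for every probe position at once:

```python
import numpy as np

def center_of_mass_2d(image):
    """Intensity-weighted mean position (x, y) of a single 2-D pattern."""
    y, x = np.indices(image.shape)
    total = image.sum()
    return (x * image).sum() / total, (y * image).sum() / total

# Toy example: a single bright pixel at (x, y) = (3, 2)
pattern = np.zeros((5, 5))
pattern[2, 3] = 1.0
print(center_of_mass_2d(pattern))  # -> (3.0, 2.0)
```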
{
@@ -327,27 +299,18 @@
"metadata": {},
"outputs": [],
"source": [
"s_crop_com.plot()"
"s_com.plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, we're also getting contrast from the other effects, such as structural effects. Since the sample is nanocrystalline, some of the grains will be close to some zone axis, giving heavy Bragg scattering. While the Bragg spots themselves won't be visible at such low scattering angles as we have in `s_crop`, it will still change the intensity distribution _within_ the direct beam. Essentially, the direct beam becomes non-uniform, which will have an effect similarly to beam shift.\n",
"However, we're also getting contrast from the other effects, such as structural effects. Since the sample is nanocrystalline, some of the grains will be close to some zone axis, giving heavy Bragg scattering. While the Bragg spots themselves won't be visible at such low scattering angles as we have in `s`, it will still change the intensity distribution _within_ the direct beam. Essentially, the direct beam becomes non-uniform, which will have an effect similarly to beam shift.\n",
"\n",
"One way of reducing this is by using thresholding and masking. However, we first need to find reasonable values for these.\n",
"\n",
"For this, we use `threshold_and_mask` on a subset of the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s_subset = s.inav[:20, :20]"
"For this, we use `threshold_and_mask` on a subset of the dataset. Currently, `threshold_and_mask` does not work for lazy signals. So we extract a small part of the dataset, and load it into memory using `compute()`"
]
},
{
@@ -356,7 +319,8 @@
"metadata": {},
"outputs": [],
"source": [
"s_subset.plot()"
"s_subset = s.inav[:10, :10]\n",
"s_subset.compute()"
]
},
{
@@ -374,7 +338,7 @@
"metadata": {},
"outputs": [],
"source": [
"s_threshold_mask = s_subset.threshold_and_mask(threshold=3., mask=(16, 16,16))"
"s_threshold_mask = s_subset.threshold_and_mask(threshold=2, mask=(32, 32, 32))"
]
},
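A rough NumPy sketch of the idea behind thresholding and masking on a single pattern, assuming the mask is a circle `(x, y, r)` around the direct beam and the threshold suppresses intensity relative to the mean; the exact conventions of pyxem's `threshold_and_mask` may differ, so treat this as illustrative only:

```python
import numpy as np

def threshold_and_mask_2d(image, threshold=2.0, mask=(32, 32, 32)):
    """Keep only the bright core of the direct beam inside a circular mask."""
    x0, y0, r = mask
    y, x = np.indices(image.shape)
    inside = (x - x0) ** 2 + (y - y0) ** 2 <= r ** 2
    masked = np.where(inside, image, 0.0)
    # Zero out everything weaker than `threshold` times the mean masked intensity
    return np.where(masked > threshold * masked.mean(), masked, 0.0)
```

With this kind of cleaning, the subsequent centre-of-mass step only sees the bright core of the direct beam rather than the diffuse scattering around it.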
{
@@ -399,7 +363,7 @@
"metadata": {},
"outputs": [],
"source": [
"s_crop_com_threshold = s.center_of_mass(threshold=3., mask=(16, 16, 16), lazy_result=False)"
"s_com_threshold = s.center_of_mass(threshold=2, mask=(32, 32, 32), lazy_result=False)"
]
},
{
@@ -408,14 +372,16 @@
"metadata": {},
"outputs": [],
"source": [
"s_crop_com_threshold.plot()"
"s_com_threshold.plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This seems to have improved it a little bit.\n",
"Comparing `s_com` and `s_com_threshold`, there isn't really an improvement. This is due to the reduced nature of the dataset, with the electron beam being very small. When combined with thresholding, it leads to many of the probe positions having exactly the same beam shift.\n",
"\n",
"Try this with the full dataset (6 GB), and compare the results: https://zenodo.org/record/4312960/files/fe60al40_stripe_pattern.hspy?download=1\n",
"\n",
"For doing more advanced processing of this type of data, see https://fpdpy.gitlab.io/fpd/, which has better edge detection algorithms in the `fpd.fpd_processing.phase_correlation` function. For a complete example on how to use this, see https://zenodo.org/record/3466591/files/d001_get_dpc_raw_signal.py which processes the same type of dataset as used in this example."
]
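As an illustration of the phase-correlation idea mentioned above (a generic NumPy sketch, not the `fpd` implementation): normalising the cross-power spectrum of each pattern against a reference pattern to unit magnitude and transforming back gives a sharp peak at the relative shift:

```python
import numpy as np

def phase_correlation_shift(image, reference):
    """Whole-pixel (dy, dx) shift of `image` relative to `reference`."""
    cross_power = np.conj(np.fft.fft2(reference)) * np.fft.fft2(image)
    cross_power /= np.abs(cross_power) + 1e-12  # keep only the phase information
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts larger than half the pattern wrap around to negative values
    if dy > corr.shape[0] // 2:
        dy -= corr.shape[0]
    if dx > corr.shape[1] // 2:
        dx -= corr.shape[1]
    return dy, dx
```

In a DPC context the reference would typically be a pattern (or the mean pattern) from a region without magnetic contrast, and sub-pixel refinement is needed for the small deflections seen here.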
@@ -445,7 +411,7 @@
"metadata": {},
"outputs": [],
"source": [
"s_dpc = s_crop_com_threshold.correct_ramp(corner_size=0.05)"
"s_dpc = s_com_threshold.correct_ramp(corner_size=0.05)"
]
},
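The docstring of `correct_ramp` is not visible in this diff; my understanding, stated as an assumption, is that it removes a linear plane estimated from the corners of the beam-shift maps, compensating for descan or other slowly varying offsets. A NumPy sketch of that kind of correction for one 2-D shift map:

```python
import numpy as np

def subtract_corner_ramp(shift_map, corner_size=0.05):
    """Fit a plane a*x + b*y + c to the four corner regions and subtract it."""
    ny, nx = shift_map.shape
    cy, cx = max(1, int(ny * corner_size)), max(1, int(nx * corner_size))
    y, x = np.indices(shift_map.shape)
    corners = np.zeros(shift_map.shape, dtype=bool)
    corners[:cy, :cx] = True
    corners[:cy, -cx:] = True
    corners[-cy:, :cx] = True
    corners[-cy:, -cx:] = True
    # Least-squares plane fit through the corner pixels only
    A = np.column_stack([x[corners], y[corners], np.ones(corners.sum())])
    coeffs, *_ = np.linalg.lstsq(A, shift_map[corners], rcond=None)
    plane = coeffs[0] * x + coeffs[1] * y + coeffs[2]
    return shift_map - plane
```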
{
@@ -476,7 +442,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"nbsphinx-thumbnail"
]
},
"outputs": [],
"source": [
"s_color = s_dpc.get_color_signal()\n",
@@ -565,7 +535,7 @@
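A hedged sketch of how a colour DPC image of this kind is usually built: the direction of the beam shift sets the hue and its magnitude sets the brightness. This mirrors what `get_color_signal()` produces, although pyxem's exact colour mapping may differ, and the `s_dpc.inav[0]`/`s_dpc.inav[1]` usage below is an assumption about how the x and y shift components are stored:

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def shifts_to_rgb(shift_x, shift_y):
    """Map beam-shift components to RGB: hue = shift direction, value = magnitude."""
    angle = np.arctan2(shift_y, shift_x)         # -pi .. pi
    magnitude = np.hypot(shift_x, shift_y)
    hsv = np.zeros(shift_x.shape + (3,))
    hsv[..., 0] = (angle + np.pi) / (2 * np.pi)  # hue in 0 .. 1
    hsv[..., 1] = 1.0                            # full saturation
    hsv[..., 2] = magnitude / (magnitude.max() + 1e-12)
    return hsv_to_rgb(hsv)

# Hypothetical usage, assuming the two shift components sit in the navigation axis:
# rgb = shifts_to_rgb(s_dpc.inav[0].data, s_dpc.inav[1].data)
```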
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -579,7 +549,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
"version": "3.9.11"
}
},
"nbformat": 4,