
Functioning HEDM workflow via CLI

@joelvbernier released this 11 Jun 20:55 · 28 commits to master since this release

Overview

This release provides a functioning CLI for ff-HEDM data reduction. Recent fixes also affect line position extraction from powder diffraction data. A sample config file for the HEDM workflow is given below. macOS and Linux conda packages are available here.

This release also incorporates the nf-HEDM utilities from CHESS (see this paper).

Example ff-HEDM config

This sample YAML file is taken from the new hexrd examples repo.

analysis_name: results_hexrd06_py27  # defaults to analysis

# working directory defaults to current working directory
# all relative paths specified herein are assumed to be in the working_dir
# any files not in working_dir should be specified with an absolute path
#
# working_dir:

# "all", "half", or -1 (all but one core); defaults to -1
multiprocessing: all

material:
  definitions: materials_py27.hexrd
  active: ruby

image_series:
  format: frame-cache
  data:
    - file: ../imageseries/mruby-0129_000004_ff1_000012-cachefile.npz
      args: {}
      panel: ff1  # must match detector key
    - file: ../imageseries/mruby-0129_000004_ff2_000012-cachefile.npz
      args: {}
      panel: ff2  # must match detector key

instrument: dexelas_id3a_20200130.yml

find_orientations:
  orientation_maps:
    # A file name must be specified. If it doesn't exist, one will be created
    file: null

    threshold: 250
    bin_frames: 1 # defaults to 1

    # "all", or a list of hkl orders used to find orientations
    # defaults to all orders listed in the material definition
    active_hkls: [0,1,2,3,4,5]

  # either search the full quaternion grid, or seed the search based on sparse
  # orientation maps.  To specify an input search space:
  #
  # use_quaternion_grid: some/file/name
  #
  # otherwise defaults to seeded search:
  seed_search: # this section is ignored if use_quaternion_grid is defined
    hkl_seeds: [0,1,2] # hkl ids to use; must be defined for seeded search
    fiber_step: 0.5 # degrees, defaults to ome tolerance
    # Method selection:
    #   There are now 3 choices: 'label' (the original), 'blob_dog', and 'blob_log'
    #   Each has its own parameter names; examples below.
    #
    # method: 
    #   label: 
    #     filter_radius: 1
    #     threshold: 1 # defaults to 1
    #
    # method:
    #   blob_dog:
    #     min_sigma: 0.5
    #     max_sigma: 5
    #     sigma_ratio: 1.6
    #     threshold: 0.01
    #     overlap: 0.1
    #
    method:
      blob_log:
        min_sigma: 0.5
        max_sigma: 5
        num_sigma: 10
        threshold: 0.01
        overlap: 0.1
  # this is the on-map threshold used in the scoring
  # defaults to 1
  threshold: 1
   
  omega:
    tolerance: 1.0  # in degrees, defaults to 2x ome step

    # specify the branch cut, in degrees. The range must span 360 degrees.
    # defaults to full 360 starting at the first omega value in imageseries.
    period: [0, 360]

  eta:
    tolerance: 1.0  # in degrees, defaults to 2x ome step
    mask: 5  # degrees, mask angles close to ome rotation axis, defaults to 5

  clustering:
    # algorithm choices are:
    #   sph-dbscan
    #   ort-dbscan
    #   dbscan (default)
    #   fclusterdata; this is a fallback and won't work for large problems
    radius: 1.0
    completeness: 0.85 # completeness threshold
    algorithm: dbscan  

fit_grains:
  do_fit: true # if false, extracts grains but doesn't fit. defaults to true

  # estimate: null

  npdiv: 4 # number of polar pixel grid subdivisions, defaults to 2

  panel_buffer: 2 # don't fit spots within this many mm of the panel edge

  threshold: 25

  tolerance:
    tth: [0.25, 0.20] # tolerance lists must be the same length
    eta: [3.0, 2.0]
    omega: [2.0, 1.0]

  refit: [2, 1]

  tth_max: false # true, false, or a non-negative value, defaults to true
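
As a quick sanity check before running the CLI, a config like the one above can be loaded and its image series opened from Python. The sketch below is illustrative only: it assumes PyYAML, the hexrd.imageseries module's open(filename, format, **kwargs) entry point, and a hypothetical config file name ff_hedm_example.yml.

# Minimal sketch (not prescribed by this release): parse the sample config and
# open each frame-cache image series listed under image_series.
import yaml

from hexrd import imageseries

with open("ff_hedm_example.yml", "r") as f:  # hypothetical file name
    cfg = yaml.safe_load(f)

fmt = cfg["image_series"]["format"]  # "frame-cache" in this example
for entry in cfg["image_series"]["data"]:
    ims = imageseries.open(entry["file"], fmt, **entry["args"])
    # each panel key must match a detector key in the instrument file
    print("%s: %d frames" % (entry["panel"], len(ims)))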