
FENDL-3.2b Retrofitting #42

Draft · wants to merge 59 commits into main
Conversation

@eitan-weinstein (Contributor):

This commit includes a user-oriented set of Python scripts that take FENDL 3.2B-formatted GENDF or ENDF/PENDF file pairs as input and extract the PKZA, DKZA, emitted-particle, and cross-section data from them. The scripts allow direct downloads of GENDF files from the FENDL database (https://www-nds.iaea.org/fendl/data/neutron/index.html), but that database does not include data on nuclear isomers. Therefore, if the user inputs an isomer (e.g. Ti-48m), the scripts will instead download from the TENDL 2017 database (https://tendl.web.psi.ch/tendl_2017/tendl2017.html), on which FENDL 3.2B is based anyway. Given that TENDL 2017 does not provide GENDF-formatted files and that ALARA requires group-wise, as opposed to point-wise, data, data taken from TENDL 2017 will be processed with the NJOY GROUPR module (https://github.com/njoy/NJOY2016) and converted into group-wise data.
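The FENDL/TENDL fallback described above could be sketched roughly as follows. This is an illustrative helper, not the actual code in this PR; the function name and the isomer test are assumptions based on the description:

```python
FENDL_GENDF_URL = 'https://www-nds.iaea.org/fendl/data/neutron/index.html'
TENDL_2017_URL = 'https://tendl.web.psi.ch/tendl_2017/tendl2017.html'

def select_source(A):
    """Pick a database for a mass-number string A (e.g. '48' or '48m').

    Isomers are absent from the FENDL GENDF library, so they fall back
    to TENDL 2017 (point-wise data, later grouped with NJOY GROUPR).
    """
    return TENDL_2017_URL if 'm' in A else FENDL_GENDF_URL
```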

The main executable Python script (fendl3_gendf.py) outputs a CSV file containing the extracted data described above and saves it in the current directory.

There are areas that I know will still need modification, either within these scripts or externally, including:

  • Changing the format from a user CLI input to one consistent with ALARA input files such that these programs can be automatically called given an ALARA input file that contains FENDL 3.2B formatted data.
  • Another processing script or a modification to an existing C++ library in ../DataLib that can take the data from the gendf_data.csv output and make it readable by ALARA.

@eitan-weinstein eitan-weinstein marked this pull request as ready for review June 14, 2024 16:39
@eitan-weinstein eitan-weinstein changed the title First commit for FENDL3.2B retrofitting FENDL 3.2B Retrofitting Jun 14, 2024
@gonuke (Member) left a comment:

This looks like a fabulous start @eitan-weinstein. Once I understood your design strategy, the overall style is more modular than my first impression suggested - and a lot cleaner than the first code you wrote a year ago :)

I'm going to publish this first comment now and then dig into the details a little more in a subsequent review.

Building on the comment about logger: print statements are reasonable when debugging and developing something, but by the time we get to a PR, it's best to either remove them or convert them to use logger, primarily to make them easy to hide.

We also try to avoid any interactive code and rely instead on command-line arguments, input files, and/or calling methods from custom user Python code.

Member:

Separation of concerns - make this more modular with each function performing a specific task.

  • argparse for command-line input/options
  • read JSON or YAML for more complicated input
  • separate methods for reading data, processing data, outputting data
  • logging for user output

@gonuke (Member) left a comment:

Here are a variety of suggestions to clean up and streamline things.


    # Download the GENDF file
    def gendf_download(element, A, M=None, save_path=None):
        Z = str(elements.index(element) + 1).zfill(2)
Member:

rather than searching with index you could process this into something like:

    elements = dict(zip(elements, range(1, len(elements) + 1)))

and then you can just look up elements[element]
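As a quick illustration of this suggestion, with a hypothetical abbreviated elements list standing in for the real one:

```python
# Abbreviated stand-in for the real elements list (ordered by atomic number)
elements = ['H', 'He', 'Li', 'Be', 'B', 'C']

# One-time conversion from a list to a {symbol: Z} lookup table
elements = dict(zip(elements, range(1, len(elements) + 1)))

Z = str(elements['C']).zfill(2)  # direct lookup instead of .index() search
```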

Member:

Also, since you use this more than once, you could define it once and import it in each place.


    print(f"Final download URL: {download_url}")

    subprocess.run(['wget', download_url, '-O', save_path])
Member:

can't you use requests to do the download rather than wget?
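A minimal requests-based replacement could look like this. The function name and the injectable session parameter are illustrative, not part of the PR:

```python
import requests

def download_file(url, save_path, session=None):
    # Accept any object with a requests-like .get() so tests can inject a stub
    http = session or requests
    response = http.get(url, stream=True, timeout=60)
    response.raise_for_status()  # surface HTTP errors instead of failing silently
    with open(save_path, 'wb') as f:
        # Stream in chunks so large GENDF files never sit fully in memory
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```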

    def gendf_download(element, A, M=None, save_path=None):
        Z = str(elements.index(element) + 1).zfill(2)
        A = str(A).zfill(3)
        gendf_gen_url = 'https://www-nds.iaea.org/fendl/data/neutron/group/'
Member:

Ah! These GENDF files are NOT activation files. They are transport files and not reliable for our activation purposes.

Member:

I'll hold off reviewing the GENDF stuff until we are sure it's useful

Comment on lines 47 to 49
    # Suppress unnecessary warnings from ENDFtk
    @contextlib.contextmanager
    def suppress_output():
Member:

I'm definitely curious to see which output you are suppressing here...

Comment on lines 29 to 33
    if 'm' in A:
        m_index = A.find('m')
        A = A[:m_index].zfill(3) + 'm'
    else:
        A = A.zfill(3)
Member:

Isn't the m always last? If so, then I think this will do it:

Suggested change:

    -   if 'm' in A:
    -       m_index = A.find('m')
    -       A = A[:m_index].zfill(3) + 'm'
    -   else:
    -       A = A.zfill(3)
    +   A = A.zfill(3 + ('m' in A))
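This works because `'m' in A` is a bool, which Python treats as 0 or 1 in arithmetic: isomer strings get a pad width of 4 and ground states get 3. A quick check:

```python
def format_A(A):
    # bool('m' in A) adds 1 to the pad width for isomer strings like '48m'
    return A.zfill(3 + ('m' in A))
```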

Comment on lines 186 to 191
    # Write the input deck to the groupr.inp file
    with open('groupr.inp', 'w') as f:
        f.write('groupr\n')
        for card_name, card_content in zip(deck_names, deck):
            f.write(format_card(card_name, card_content, MTs))
        f.write(' 0/\nstop')
Member:

separation of concerns: make a separate function to write the output from the one that generates the data

    matb, MTs = grpt.endf_specs(endf_path)

    # Write out the GROUPR input file
    mt_table = pd.read_csv('./mt_table.csv')
Member:

can you read this into a dictionary instead of a dataframe?

see: https://docs.python.org/3/library/csv.html#csv.DictReader
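For example, the MT lookup could be built directly into a dict with csv.DictReader. The `MT`/`Reaction` column names follow the snippet elsewhere in this review; the function name is illustrative:

```python
import csv

def load_mt_table(csv_path):
    # Returns {MT number (str): reaction description}; no pandas needed
    with open(csv_path, newline='') as f:
        return {row['MT']: row['Reaction'] for row in csv.DictReader(f)}
```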

Comment on lines 166 to 170
    card9 = []
    for mt in MTs:
        # description of the section to be processed
        mtname = mt_table[mt_table['MT'] == mt]['Reaction'].values[0]
        card9_line = f'{mfd} {mt} "{mtname}"'
        card9.append(card9_line)
Member:

if mt_table is a dictionary, this could be as simple as:

    card9 = [f'{mfd} {mt} "{mt_table[mt]}"' for mt in MTs]

Comment on lines 116 to 117
    for line in card_content:
        card_str += f' {line}/\n'
Member:

Suggested change:

    -   for line in card_content:
    -       card_str += f' {line}/\n'
    +   card_str = ' ' + '/\n '.join(card_content) + '/\n'
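Both forms produce the same string; a quick check with illustrative card lines:

```python
card_content = ['1 2 3', '4 5 6']  # illustrative card lines

# Loop version from the original code
card_str = ''
for line in card_content:
    card_str += f' {line}/\n'

# Single-expression version from the suggestion
joined = ' ' + '/\n '.join(card_content) + '/\n'

assert card_str == joined
```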

Member:

Definitely want to replace this with a python script.

@gonuke (Member) left a comment:

More pythonic use of dictionary

src/DataLib/fendl32B_retrofit/GROUPR/groupr_tools.py (outdated; resolved)
@eitan-weinstein eitan-weinstein changed the title FENDL 3.2B Retrofitting FENDL-3.2b Retrofitting Jun 19, 2024
    # Set conditionals for local file input
    if args.method == 'I':
        gendf_data = handle_local_file_input(args, mt_dict)
        cumulative_df = pd.concat([cumulative_df, gendf_data], ignore_index=True)
@eitan-weinstein (author):

Execute pd.concat outside of the if-else block

Comment on lines 116 to 122
    # Extract fundamental data from the GENDF file
    pKZA = tpp.extract_gendf_pkza(gendf_path, M=M)
    matb, MTs, file_obj = tpp.extract_endf_specs(gendf_path, 'gendf')

    logger.info(f"GENDF file path: {gendf_path}")
    logger.info(f"Parent KZA (pKZA): {pKZA}")
    logger.info(f'MTs: {MTs}')
@eitan-weinstein (author):

Have this all be its own function that can be called by both

'Cross Sections' : []
})

def handle_local_file_input(args, mt_dict):
@eitan-weinstein (author):

Get this function to go only up to the point that it can be treated the same way as a local file.


    # Extract data for each MT
    for MT in MTs:
        try:
@eitan-weinstein (author):

Instead of using a try block, check beforehand to make sure that the MT is in the mt_dict and only proceed if True

        return pKZA

    @contextlib.contextmanager
    def redirect_ENDFtk_output():
@eitan-weinstein (author):

See how things work when avoiding this function and letting the ENDFtk messages come out organically.

@gonuke (Member) left a comment:

Here are some changes we discussed today.

    logger = logging.getLogger(__name__)

    # Redirect stdout and stderr to the logger
    class LoggerWriter:
Member:

Probably don't need this - can just access the default logger you created above

Comment on lines 27 to 28
    sys.stdout = LoggerWriter(logger.info)
    sys.stderr = LoggerWriter(logger.error)
Member:

I don't think you need to reassign these, and if you don't you won't need so many gymnastics in the arg parsing

    pKZA = tpp.extract_gendf_pkza(gendf_path, M=M)

    # Recalibrate MT list after GENDF conversion
    matb, MTs, file_obj = tpp.extract_endf_specs(gendf_path, 'gendf')
Member:

Check for any MTs that are not in mt_dict:

    if set(MTs) - set(mt_dict):
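In runnable form, with illustrative values (the real MTs and mt_dict come from the GENDF file and the MT table):

```python
MTs = [16, 17, 102]                         # MTs found in the file (illustrative)
mt_dict = {16: '(n,2n)', 102: '(n,gamma)'}  # known reaction lookup (illustrative)

# The set difference is truthy only when something is missing from the lookup
missing = set(MTs) - set(mt_dict)
if missing:
    print(f'MTs not in mt_dict: {missing}')
```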

Comment on lines 204 to 209
    A = A.split(' ')[0]
    if 'm' in A:
        m_index = A.find('m')
        A = A[:m_index]
    M = str(str(M).count('M')) or '0'
    pKZA = Z.zfill(2) + A.zfill(3) + M
Member:

Suggested change:

    -   A = A.split(' ')[0]
    -   if 'm' in A:
    -       m_index = A.find('m')
    -       A = A[:m_index]
    -   M = str(str(M).count('M')) or '0'
    -   pKZA = Z.zfill(2) + A.zfill(3) + M
    +   Z = int(Z)
    +   M = 1 if 'm' in A.lower() else 0
    +   A = int(A.strip().lower().split('m')[0])
    +   pKZA = (Z * 1000 + A) * 10 + M

    Defaults to None and will be otherwise defined internally.

    Returns:
        pKZA (str): Parent KZA identifier.
Member:

If making the pKZA change suggested above, also update the docstring:

Suggested change:

    -   pKZA (str): Parent KZA identifier.
    +   pKZA (int): Parent KZA identifier.


    try:
        section = file.section(MT).content
        lines = section.split('\n')[2:-2:2]
Member:

Suggested change:

    -   lines = section.split('\n')[2:-2:2]
    +   # Only every 2nd line starting at the 3rd line has cross-section data.
    +   lines = section.split('\n')[2:-2:2]

            particle
        )
        for particle in particle_types
        if count_emitted_particles(particle, emitted_particles)
Member:

Suggested change:

    -   if count_emitted_particles(particle, emitted_particles)
    +   if particle in emitted_particles)

"""

# Neutron capture
nucleus_neutrons = A - nucleus_protons + 1
Member:

    #                 emitted : [delta N, delta P]
    delta_NP = { 'n' : np.array([-1, 0]),   # neutron emission: delta N = -1, delta P = 0
                 'p' : np.array([0, -1]),
                 't' : np.array([-2, -1]),
                 ...
               }

    N_P = np.array([A - Z + 1, Z])

    for particle, num_particles in emission_tuples.items():  # if changed to a dictionary
        N_P += num_particles * delta_NP[particle]
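A self-contained version of this sketch (the particle list is abbreviated, and the emission-counts dict mirrors the `emission_tuples` dictionary suggested below; the function name is illustrative):

```python
import numpy as np

# emitted particle : np.array([delta N, delta P]) applied per emission
delta_NP = {'n': np.array([-1, 0]),    # neutron emission
            'p': np.array([0, -1]),    # proton emission
            'd': np.array([-1, -1]),   # deuteron emission
            't': np.array([-2, -1])}   # triton emission

def residual_NP(Z, A, emission_counts):
    # Neutron capture first: the compound nucleus gains one neutron
    N_P = np.array([A - Z + 1, Z])
    for particle, count in emission_counts.items():
        N_P += count * delta_NP[particle]
    return N_P
```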

Comment on lines 152 to 159
    emission_tuples = [
        (
            count_emitted_particles(particle, emitted_particles),
            particle
        )
        for particle in particle_types
        if count_emitted_particles(particle, emitted_particles)
    ]
Member:

Suggested change:

    -   emission_tuples = [
    -       (
    -           count_emitted_particles(particle, emitted_particles),
    -           particle
    -       )
    -       for particle in particle_types
    -       if count_emitted_particles(particle, emitted_particles)
    -   ]
    +   emission_tuples = {
    +       particle : count_emitted_particles(particle, emitted_particles)
    +       for particle in particle_types
    +       if particle in emitted_particles
    +   }

Member:

And build this as soon as you load the MT CSV data

    with open(csv_path, 'r') as f:
        csv_reader = csv.DictReader(f)
        for row in csv_reader:
            mt_dict[row['MT']] = row['Reaction']
Member:

process all the emitted particles here once and for all, and store them in a data structure; you could actually store a dictionary:

    mt_data[MT] = { 'delKZA' : (change_P * 1000 + change_P + change_N) * 10 + change_M,
                    'emitted_particles' : emitted_particles }

@gonuke (Member) left a comment:

A few more comments here

Member:

This may be able to just be moved into your main program now....

    read out as {'n': 1, 'p': 1}.
    """

    particle_types = ['n', 'd', 'α', 'p', 't', '3He', 'gamma']
Member:

Probably makes sense to somehow generate this from the keys of NP_dict below?


    #           emitted : [delta N, delta P]
    NP_change = array([ 1, 0 ])         # neutron activation
    NP_dict = {'n' : array([-1, 0 ]),   # neutron emission
Member:

What if emission_dict has a gamma in it?

Comment on lines 50 to 56
        number_str = int(number_str)
    elif particle in emitted_particle_string:
        number_str = 1
    else:
        number_str = None

    return number_str
Member:

This isn't a string any more so why call it number_str?

        None
    """

    # In ENDF-6 formatted files, there are 66 lines of whitespace before
Member:

Suggested change:

    -   # In ENDF-6 formatted files, there are 66 lines of whitespace before
    +   # In ENDF-6 formatted files, there are 66 characters/columns of whitespace before

Comment on lines 149 to 150
    if not os.path.exists(save_directory):
        os.makedirs(save_directory)
Member:

Use pathlib instead of os
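With pathlib, the exists-check plus makedirs pair collapses to one call. The directory name below is illustrative:

```python
import tempfile
from pathlib import Path

# Illustrative target; the real code derives save_directory from user input
save_directory = Path(tempfile.gettempdir()) / 'fendl_outputs' / 'gendf'

# parents=True creates intermediate directories; exist_ok=True makes the
# call a no-op (instead of an error) when the directory already exists
save_directory.mkdir(parents=True, exist_ok=True)
```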

@eitan-weinstein eitan-weinstein marked this pull request as draft July 18, 2024 14:34