FENDL-3.2b Retrofitting #42
Conversation
This looks like a fabulous start @eitan-weinstein. Once I understood your design strategy, the overall style is more modular than my first impression suggested - and a lot cleaner than the first code you wrote a year ago :)
I'm going to publish this first comment now and then dig into the details a little more in a subsequent review.
Building on the comment about logger: print statements are reasonable when debugging & developing something, but by the time we get to a PR, it's best to either remove them or convert them to something that uses logger, primarily to make it easy to hide them.
We also try to avoid any interactive code and rely instead on command-line arguments, input files, and/or calling methods from custom user Python code.
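For example, a minimal sketch of that print-to-logger conversion (the function and message here are hypothetical):

```python
import logging

# Module-level logger; nothing prints unless the logging level allows it.
logger = logging.getLogger(__name__)

def download_data(url):
    # Was: print(f"Downloading {url}")
    logger.debug("Downloading %s", url)

# Show debug messages while developing; raise the level to WARNING
# (or drop this line) to hide them.
logging.basicConfig(level=logging.DEBUG)
```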
Separation of concerns - make this more modular, with each function performing a specific task (see the sketch after this list).
- argparse for command-line input/options
- read JSON or YAML for more complicated input
- separate methods for reading data, processing data, outputting data
- logging for user output
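A minimal sketch of that layout, with hypothetical function and option names:

```python
import argparse
import json
import logging

logger = logging.getLogger(__name__)

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--config', required=True,
                        help='Path to a JSON input file.')
    return parser.parse_args()

def read_data(path):
    with open(path) as f:
        return json.load(f)

def process_data(raw):
    # Stand-in for the actual processing step.
    return raw

def write_data(result, path):
    with open(path, 'w') as f:
        json.dump(result, f)

def main():
    args = parse_args()
    result = process_data(read_data(args.config))
    write_data(result, 'output.json')
    logger.info('Wrote output.json')

if __name__ == '__main__':
    main()
```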
Here are a variety of suggestions to clean up and streamline things.
```python
# Download the GENDF file
def gendf_download(element, A, M=None, save_path=None):
    Z = str(elements.index(element) + 1).zfill(2)
```
Rather than searching with `index`, you could process this into something like:

```python
elements = dict(zip(elements, range(1, len(elements) + 1)))
```

and then you can just look up `elements[element]`.
Also, since you use this more than once, you could define it once and import it in each place.
```python
    print(f"Final download URL: {download_url}")

    subprocess.run(['wget', download_url, '-O', save_path])
```
Can't you use `requests` to do the download rather than `wget`?
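A minimal sketch with requests, reusing the variables from the quoted code above:

```python
import requests

def download(download_url, save_path):
    # Stream the response to disk instead of shelling out to wget.
    response = requests.get(download_url, stream=True)
    response.raise_for_status()
    with open(save_path, 'wb') as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```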
```python
def gendf_download(element, A, M=None, save_path=None):
    Z = str(elements.index(element) + 1).zfill(2)
    A = str(A).zfill(3)
    gendf_gen_url = 'https://www-nds.iaea.org/fendl/data/neutron/group/'
```
Ah! These GENDF files are NOT activation files. They are transport files and not reliable for our activation purposes.
I'll hold off reviewing the GENDF stuff until we are sure it's useful
```python
# Suppress unnecessary warnings from ENDFtk
@contextlib.contextmanager
def suppress_output():
```
I'm definitely curious to see which output you are suppressing here...
```python
if 'm' in A:
    m_index = A.find('m')
    A = A[:m_index].zfill(3) + 'm'
else:
    A = A.zfill(3)
```
Isn't the `m` always last? If so, then I think this will do it:
```python
# Before:
if 'm' in A:
    m_index = A.find('m')
    A = A[:m_index].zfill(3) + 'm'
else:
    A = A.zfill(3)

# After:
A = A.zfill(3 + ('m' in A))
```
```python
# Write the input deck to the groupr.inp file
with open('groupr.inp', 'w') as f:
    f.write('groupr\n')
    for card_name, card_content in zip(deck_names, deck):
        f.write(format_card(card_name, card_content, MTs))
    f.write(' 0/\nstop')
```
Separation of concerns: make a separate function to write the output from the one that generates the data.
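A minimal sketch of that split (the helper names are hypothetical; format_card is from the code above):

```python
def generate_groupr_deck(deck_names, deck, MTs):
    # Build the complete input text without touching the filesystem.
    cards = [format_card(name, content, MTs)
             for name, content in zip(deck_names, deck)]
    return 'groupr\n' + ''.join(cards) + ' 0/\nstop'

def write_groupr_input(deck_text, path='groupr.inp'):
    with open(path, 'w') as f:
        f.write(deck_text)
```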
```python
matb, MTs = grpt.endf_specs(endf_path)

# Write out the GROUPR input file
mt_table = pd.read_csv('./mt_table.csv')
```
Can you read this into a dictionary instead of a dataframe? See: https://docs.python.org/3/library/csv.html#csv.DictReader
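A minimal sketch, assuming the CSV has the MT and Reaction columns used below:

```python
import csv

mt_dict = {}
with open('mt_table.csv', 'r') as f:
    for row in csv.DictReader(f):
        # Map each MT number to its reaction description.
        mt_dict[row['MT']] = row['Reaction']
```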
```python
card9 = []
for mt in MTs:
    mtname = mt_table[mt_table['MT'] == mt]['Reaction'].values[0]  # description of section to be processed
    card9_line = f'{mfd} {mt} "{mtname}"'
    card9.append(card9_line)
```
If `mt_table` is a dictionary, this could be as simple as:

```python
card9 = [f'{mfd} {mt} "{mt_table[mt]}"' for mt in MTs]
```
```python
for line in card_content:
    card_str += f' {line}/\n'
```
```python
# Before:
for line in card_content:
    card_str += f' {line}/\n'

# After:
card_str = ' ' + '/\n '.join(card_content) + '/\n'
```
Definitely want to replace this with a Python script.
…ard formatting function
More pythonic use of dictionary
Co-authored-by: Paul Wilson <[email protected]>
…d fendl3_gendf.py
…tain argparse messages in terminal
…g over multiple isotopes.
```python
# Set conditionals for local file input
if args.method == 'I':
    gendf_data = handle_local_file_input(args, mt_dict)
    cumulative_df = pd.concat([cumulative_df, gendf_data], ignore_index=True)
```
Execute `pd.concat` outside of the if-else block.
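Something like the following sketch (the name of the non-local handler is hypothetical):

```python
if args.method == 'I':
    gendf_data = handle_local_file_input(args, mt_dict)
else:
    gendf_data = handle_download_input(args, mt_dict)  # hypothetical name
# Single concat, shared by both branches.
cumulative_df = pd.concat([cumulative_df, gendf_data], ignore_index=True)
```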
```python
# Extract fundamental data from the GENDF file
pKZA = tpp.extract_gendf_pkza(gendf_path, M = M)
matb, MTs, file_obj = tpp.extract_endf_specs(gendf_path, 'gendf')

logger.info(f"GENDF file path: {gendf_path}")
logger.info(f"Parent KZA (pKZA): {pKZA}")
logger.info(f'MTs: {MTs}')
```
Have this all be its own function that can be called by both input paths.
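A minimal sketch, with a hypothetical function name:

```python
def extract_gendf_data(gendf_path, M=None):
    # Shared by both callers.
    pKZA = tpp.extract_gendf_pkza(gendf_path, M=M)
    matb, MTs, file_obj = tpp.extract_endf_specs(gendf_path, 'gendf')
    logger.info(f"GENDF file path: {gendf_path}")
    logger.info(f"Parent KZA (pKZA): {pKZA}")
    logger.info(f'MTs: {MTs}')
    return pKZA, matb, MTs, file_obj
```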
```python
    'Cross Sections' : []
})

def handle_local_file_input(args, mt_dict):
```
Get this function to go only up to the point that it can be treated the same way as a local file.
```python
# Extract data for each MT
for MT in MTs:
    try:
```
Instead of using a `try` block, check beforehand to make sure that the MT is in the `mt_dict` and only proceed if `True`.
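That is, something like:

```python
# Extract data only for MTs that have an entry in the lookup table.
for MT in MTs:
    if MT not in mt_dict:
        continue
    # ... process the MT ...
```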
```python
    return pKZA

@contextlib.contextmanager
def redirect_ENDFtk_output():
```
See how things work when you avoid using this function and let the ENDFtk messages come out organically.
Here are some changes we discussed today.
```python
logger = logging.getLogger(__name__)

# Redirect stdout and stderr to the logger
class LoggerWriter:
```
Probably don't need this - can just access the default `logger` you created above.
```python
sys.stdout = LoggerWriter(logger.info)
sys.stderr = LoggerWriter(logger.error)
```
I don't think you need to reassign these, and if you don't, you won't need so many gymnastics in the arg parsing.
```python
pKZA = tpp.extract_gendf_pkza(gendf_path, M = M)

# Recalibrate MT list after GENDF conversion
matb, MTs, file_obj = tpp.extract_endf_specs(gendf_path, 'gendf')
```
Check for any MTs that are not in `mt_dict`; something like:

```python
if set(MTs) - set(mt_dict):
    ...
```
```python
A = A.split(' ')[0]
if 'm' in A:
    m_index = A.find('m')
    A = A[:m_index]
M = str(str(M).count('M')) or '0'
pKZA = Z.zfill(2) + A.zfill(3) + M
```
```python
# Before:
A = A.split(' ')[0]
if 'm' in A:
    m_index = A.find('m')
    A = A[:m_index]
M = str(str(M).count('M')) or '0'
pKZA = Z.zfill(2) + A.zfill(3) + M

# After:
Z = int(Z)
M = 1 if 'm' in A.lower() else 0
A = int(A.strip().lower().split('m')[0])
pKZA = (Z * 1000 + A) * 10 + M
```
```python
        Defaults to None and will be otherwise defined internally.

    Returns:
        pKZA (str): Parent KZA identifier.
```
If changing below:

```python
# Before:
        pKZA (str): Parent KZA identifier.

# After:
        pKZA (int): Parent KZA identifier.
```
```python
try:
    section = file.section(MT).content
    lines = section.split('\n')[2:-2:2]
```
```python
# Before:
lines = section.split('\n')[2:-2:2]

# After:
# Only every 2nd line starting at the 3rd line has cross-section data.
lines = section.split('\n')[2:-2:2]
```
```python
        particle
    )
    for particle in particle_types
    if count_emitted_particles(particle, emitted_particles)
```
```python
# Before:
    if count_emitted_particles(particle, emitted_particles)

# After:
    if particle in emitted_particles)
```
""" | ||
|
||
# Neutron capture | ||
nucleus_neutrons = A - nucleus_protons + 1 |
```python
import numpy as np

#                          delta N  delta P
delta_NP = {'n': np.array([-1,  0]),  # neutron emission: dN = -1, dP = 0
            'p': np.array([ 0, -1]),
            't': np.array([-2, -1]),
            # ...
            }

N_P = np.array([A - Z + 1, Z])
for particle, num_particles in emission_tuples.items():  # if changed to a dictionary
    N_P += num_particles * delta_NP[particle]
```
```python
emission_tuples = [
    (
        count_emitted_particles(particle, emitted_particles),
        particle
    )
    for particle in particle_types
    if count_emitted_particles(particle, emitted_particles)
]
```
```python
# Before:
emission_tuples = [
    (
        count_emitted_particles(particle, emitted_particles),
        particle
    )
    for particle in particle_types
    if count_emitted_particles(particle, emitted_particles)
]

# After:
emission_tuples = {
    particle: count_emitted_particles(particle, emitted_particles)
    for particle in particle_types
    if particle in emitted_particles
}
```
And build this as soon as you load the MT CSV data
```python
with open(csv_path, 'r') as f:
    csv_reader = csv.DictReader(f)
    for row in csv_reader:
        mt_dict[row['MT']] = row['Reaction']
```
Process all the emitted particles here once and for all, and store them in a data structure; you could actually store a dictionary:

```python
mt_data[MT] = {'delKZA': (change_P * 1000 + change_P + change_N) * 10 + change_M,
               'emitted_particles': emitted_particles}
```
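A minimal sketch of building that at CSV-load time; `emitted_particles_from()` and `particle_changes()` are hypothetical stand-ins for the parsing and N/P bookkeeping discussed in the other comments:

```python
import csv

def load_mt_data(csv_path):
    # Build the per-MT lookup once, when the CSV is loaded.
    mt_data = {}
    with open(csv_path, 'r') as f:
        for row in csv.DictReader(f):
            # Hypothetical helpers; see the other comments for the logic.
            emitted_particles = emitted_particles_from(row['Reaction'])
            change_N, change_P, change_M = particle_changes(emitted_particles)
            mt_data[row['MT']] = {
                'delKZA': (change_P * 1000 + change_P + change_N) * 10 + change_M,
                'emitted_particles': emitted_particles,
            }
    return mt_data
```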
A few more comments here
This may be able to just be moved into your main program now....
```python
    read out as {'n': 1, 'p': 1}.
    """

particle_types = ['n', 'd', 'α', 'p', 't', '3He', 'gamma']
```
Probably makes sense to somehow generate this from the keys of `NP_dict` below?
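For example (assuming `NP_dict` is defined before this list is needed):

```python
# Derive the particle list from the dict that defines the N/P changes.
particle_types = list(NP_dict.keys())
```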
```python
# emitted particle -> (delta N, delta P)
NP_change = array([1, 0])        # neutron activation
NP_dict = {'n': array([-1, 0]),  # neutron emission
```
What if `emission_dict` has a `gamma` in it?
```python
    number_str = int(number_str)
elif particle in emitted_particle_string:
    number_str = 1
else:
    number_str = None

return number_str
```
This isn't a string any more, so why call it `number_str`?
```python
        None
    """

# In ENDF-6 formatted files, there are 66 lines of whitespace before
```
```python
# Before:
# In ENDF-6 formatted files, there are 66 lines of whitespace before

# After:
# In ENDF-6 formatted files, there are 66 characters/columns of whitespace before
```
```python
if not os.path.exists(save_directory):
    os.makedirs(save_directory)
```
Use `pathlib` instead of `os`.
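A minimal sketch of the pathlib equivalent of the quoted code:

```python
from pathlib import Path

# mkdir with exist_ok replaces the exists()/makedirs() pair.
Path(save_directory).mkdir(parents=True, exist_ok=True)
```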
…tings for the logger.
This commit includes a user-oriented set of Python scripts that, for a given input of FENDL 3.2B formatted GENDF or ENDF/PENDF pairs, extract the pKZA, dKZA, emitted particles, and cross-section data from the new files. The scripts allow for direct downloads of GENDF files from the FENDL database (https://www-nds.iaea.org/fendl/data/neutron/index.html), but this does not include data on nuclear isomers. Therefore, if the user inputs an isomer (e.g. Ti-48m), the script will instead download from the TENDL 2017 database (https://tendl.web.psi.ch/tendl_2017/tendl2017.html), on which FENDL 3.2B is based anyway. Given that TENDL 2017 does not have GENDF file formats and that ALARA requires group-wise, as opposed to point-wise, data, data taken from TENDL 2017 will be processed using the NJOY GROUPR module (https://github.com/njoy/NJOY2016) and converted into group-wise data.

The output of the main executable Python script (fendl3_gendf.py) is a CSV file containing the aforementioned extracted data, saved in the current directory.

There are areas that I know will still need modification, either within these codes or externally, including:
- Something in ../DataLib that can take the data from the gendf_data.csv output and make it readable by ALARA.