FENDL-3.2b Retrofitting #42

Draft · wants to merge 59 commits into main · changes from 1 commit
dbef6ca
First commit for FENDL3.2B retrofitting
Jun 14, 2024
cdd7bcd
Replacing NJOY Bash script with subprocess execution in Python
Jun 17, 2024
f3d010f
Simplifying mass number formatting in tendl_download() function
Jun 17, 2024
6c5ac24
Simplifying endf_specs() function
Jun 17, 2024
03e3af3
Remove now-obsolete ENDFtk warning suppression
Jun 17, 2024
fa4f29e
Simplify tendl_download() function using data structures
Jun 17, 2024
0190096
Switching tendl_download() function over to urllib dependence
Jun 17, 2024
413ae46
Moving card deck formatting from Pandas DataFrame to dictionary
Jun 17, 2024
2eb9ffd
Separating out a write function for the GROUPR input from the input c…
Jun 17, 2024
1247db3
Removing now-obsolete Pandas dependence
Jun 17, 2024
1d35b79
Simplifying card writing for groupr_input_file_writer()
eitan-weinstein Jun 18, 2024
de3cbb4
Fixing indexing on groupr_input_file_writer()
Jun 18, 2024
d20eed8
Storing elements in a single dictionary to be referenced across both …
Jun 18, 2024
58a4ede
Removing now-obsolete ENDFtk warning supression from gend_tools.py an…
Jun 18, 2024
b83be55
Updating gendf_download() function -- notably switching away from wge…
Jun 18, 2024
e135311
Switching CSV reading from Pandas DataFrame to dictionary
Jun 18, 2024
29528dd
Moving away from direct input to argparse input/options
Jun 18, 2024
0abb51b
Expanding argparse usage
Jun 18, 2024
582424b
Moving away from print statements towards logging
Jun 18, 2024
77e9a65
Removed unnecessary file from file cleanup list
Jun 19, 2024
493a35c
Expanding logger to capture 'No bottleneck testing available' message
Jun 19, 2024
a29bd66
Improving readability of NJOY run message for logger
Jun 19, 2024
69fe5f0
Updating the logging to redirect ENDFtk messages to the logger and re…
Jun 21, 2024
d0f7d3b
Removing stand-alone groupr script -- unnecessary and not called indi…
Jun 21, 2024
4318ae4
Reorganizing folder structure -- separate GROUPR folder no longer see…
Jun 21, 2024
b1b63f9
Finalizing move out of GROUPR/
Jun 21, 2024
1edd251
Moving the rest of fendl3_gendf.py to the main() function
Jun 21, 2024
fb2d548
Forgot to include mt_table in main()
Jun 21, 2024
4065d00
Streamlining endf_specs usage and placement.
Jun 24, 2024
4250c44
Removing direct GENDF download function -- all downloads need to be p…
Jun 24, 2024
93d469f
Moving GROUPR parameters to global constants.
Jun 24, 2024
98dcc93
Logging error if NJOY run is unsuccessful.
Jun 24, 2024
2460d72
Cleaning up package imports
Jun 24, 2024
6fcf5e5
Removing unnecessary package imports on fendl3_gendf.py
Jun 24, 2024
5ec6bbf
Fixing KZA formatting.
Jun 26, 2024
f490d38
Addressing low-level comments from most recent review.
Jul 1, 2024
45df27f
Improving readability
Jul 1, 2024
b76634f
Beginning high-level overhaul and restructuring
Jul 1, 2024
121e57a
Improving readability for nuclear_decay()
Jul 1, 2024
fb1b796
Increasing readability of argparser
Jul 1, 2024
c8e6cea
Major overhaul of modularity and including functionality for iteratin…
Jul 9, 2024
4d99f41
Removing time package.
Jul 9, 2024
e0529dc
Removing specific example file from GENDF files.
Jul 9, 2024
cc064b6
Making the file saving more versatile.
Jul 9, 2024
14c5730
Responding to a majority of the high-level comments from Tuesday's re…
Jul 11, 2024
95815b2
Fixing docstring for ensure_gendf_markers() function.
Jul 11, 2024
a5997b5
Improving isotope identification methods.
Jul 12, 2024
f83a646
Improving isotope identification methods.
Jul 12, 2024
98f23c3
Simplifying logging method and usage.
Jul 12, 2024
498c824
One more logging fix.
Jul 12, 2024
a807f1e
Completing response to last review and making arg processing more mod…
Jul 16, 2024
76b9aa1
Improving ability to iterate over all elements.
Jul 16, 2024
eebeea3
Fixing minor bug in execution of handle_TENDL_downloads().
Jul 16, 2024
912530f
Small formatting change to fit in max line length.
Jul 16, 2024
c374494
More minor formatting adjustments and simplifying the line length set…
Jul 16, 2024
f50b617
Allowing for fendle_retrofit.py to be executed from DataLib.
Jul 16, 2024
4ba725e
Removing unnecessary print statement.
Jul 17, 2024
6257033
Ensuring that NJOY output is properly handled when program is execute…
Jul 17, 2024
4921dba
Small formatting changes before moving over to project individual PRs.
Jul 18, 2024
20 changes: 20 additions & 0 deletions src/DataLib/fendl32B_retrofit/GROUPR/groupr.py
@@ -0,0 +1,20 @@
import groupr_tools as grpt
import pandas as pd

# Call TENDL download function by user CLI input
element = input('Select element: ')
A = input('Select mass number: A = ')
endf_path = grpt.tendl_download(element, A, 'endf')
pendf_path = grpt.tendl_download(element, A, 'pendf')
print(f'ENDF file can be found at ./{endf_path}')
print(f'PENDF file can be found at ./{pendf_path}')

# Extract necessary MT and MAT data from the ENDF file
matb, MTs = grpt.endf_specs(endf_path)

# Write out the GROUPR input file
mt_table = pd.read_csv('./mt_table.csv')
Member:

can you read this into a dictionary instead of a dataframe?

see: https://docs.python.org/3/library/csv.html#csv.DictReader

card_deck = grpt.groupr_input(matb, MTs, element, A, mt_table)

# Run NJOY
grpt.run_njoy(endf_path, pendf_path, card_deck, element, A)
230 changes: 230 additions & 0 deletions src/DataLib/fendl32B_retrofit/GROUPR/groupr_tools.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,230 @@
# Import packages
import ENDFtk
import os
import requests
Member:

I think urllib is part of the standard library - can't it do what you want?

Contributor Author:

It probably could. I'll look into switching this over to urllib
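A sketch of what that switch could look like with the standard library (the helper names are hypothetical; the URL pattern is copied from tendl_download() in this diff):

```python
from urllib.request import urlretrieve

TENDL_BASE = 'https://tendl.web.psi.ch/tendl_2017/neutron_file/'

def build_tendl_url(element, A, ext):
    # Same URL layout as tendl_download() constructs below
    return TENDL_BASE + f'{element}/{element}{A}/lib/endf/n-{element}{A}.{ext}'

def download_tendl(element, A, ext, save_path):
    # urlretrieve fetches the URL straight to disk, no wget subprocess needed
    urlretrieve(build_tendl_url(element, A, ext), save_path)
```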

import contextlib
import subprocess
import pandas as pd

# List of elements in the Periodic Table
elements = [
'H', 'He', 'Li', 'Be', 'B', 'C', 'N', 'O', 'F', 'Ne',
'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'Cl', 'Ar', 'K', 'Ca',
'Sc', 'Ti', 'V', 'Cr', 'Mn', 'Fe', 'Co', 'Ni', 'Cu', 'Zn',
'Ga', 'Ge', 'As', 'Se', 'Br', 'Kr', 'Rb', 'Sr', 'Y', 'Zr',
'Nb', 'Mo', 'Tc', 'Ru', 'Rh', 'Pd', 'Ag', 'Cd', 'In', 'Sn',
'Sb', 'Te', 'I', 'Xe', 'Cs', 'Ba', 'La', 'Ce', 'Pr', 'Nd',
'Pm', 'Sm', 'Eu', 'Gd', 'Tb', 'Dy', 'Ho', 'Er', 'Tm', 'Yb',
'Lu', 'Hf', 'Ta', 'W', 'Re', 'Os', 'Ir', 'Pt', 'Au', 'Hg',
'Tl', 'Pb', 'Bi', 'Po', 'At', 'Rn', 'Fr', 'Ra', 'Ac', 'Th',
'Pa', 'U', 'Np', 'Pu', 'Am', 'Cm', 'Bk', 'Cf', 'Es', 'Fm',
'Md', 'No', 'Lr', 'Rf', 'Db', 'Sg', 'Bh', 'Hs', 'Mt', 'Ds',
'Rg', 'Cn', 'Nh', 'Fl', 'Mc', 'Lv', 'Ts', 'Og'
]

# Define a function to download the .tendl file given user inputs for element and mass number
def tendl_download(element, A, filetype, save_path = None):
# Ensure that A is properly formatted
A = str(A)
if 'm' in A:
m_index = A.find('m')
A = A[:m_index].zfill(3) + 'm'
else:
A = A.zfill(3)
Member:

Isn't the m always last? If so, then I think this will do it:

Suggested change:
-    if 'm' in A:
-        m_index = A.find('m')
-        A = A[:m_index].zfill(3) + 'm'
-    else:
-        A = A.zfill(3)
+    A = A.zfill(3 + ('m' in A))
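A quick check that the one-liner matches the original branching for both ground-state and metastable mass numbers (assuming, as the reviewer notes, that 'm' is always the final character):

```python
# Original branch logic from the diff above
def fmt_branching(A):
    if 'm' in A:
        return A[:A.find('m')].zfill(3) + 'm'
    return A.zfill(3)

# Reviewer's one-liner: bool is an int, so pad to 4 when a trailing 'm' exists
def fmt_oneliner(A):
    return A.zfill(3 + ('m' in A))
```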


# Define general URL format for files in the TENDL database
tendl_gen_url = 'https://tendl.web.psi.ch/tendl_2017/neutron_file/'

# Construct the filetype-specific URL for the data file
if filetype == 'endf' or filetype == 'ENDF':
# Construct the URL of the ENDF file to be downloaded
download_url = tendl_gen_url + f'{element}/{element}{A}/lib/endf/n-{element}{A}.tendl'

# Define a save path for the ENDF file if there is not one already specified
if save_path is None:
#save_path = f'tendl_2017_{element}{A}_{filetype}.endf'
save_path = 'tape20'

elif filetype == 'pendf' or filetype == 'PENDF':
# Construct the URL of the PENDF file to be downloaded
download_url = tendl_gen_url + f'{element}/{element}{A}/lib/endf/n-{element}{A}.pendf'

# Define a save path for the PENDF file if there is not one already specified
if save_path is None:
#save_path = f'tendl_2017_{element}{A}_{filetype}.pendf'
save_path = 'tape21'
Member:

build some data structures to help populate a template with fewer conditionals and less repeated code

Suggested change:
-    # Construct the filetype-specific URL for the data file
-    if filetype == 'endf' or filetype == 'ENDF':
-        # Construct the URL of the ENDF file to be downloaded
-        download_url = tendl_gen_url + f'{element}/{element}{A}/lib/endf/n-{element}{A}.tendl'
-        # Define a save path for the ENDF file if there is not one already specified
-        if save_path is None:
-            #save_path = f'tendl_2017_{element}{A}_{filetype}.endf'
-            save_path = 'tape20'
-    elif filetype == 'pendf' or filetype == 'PENDF':
-        # Construct the URL of the PENDF file to be downloaded
-        download_url = tendl_gen_url + f'{element}/{element}{A}/lib/endf/n-{element}{A}.pendf'
-        # Define a save path for the PENDF file if there is not one already specified
-        if save_path is None:
-            #save_path = f'tendl_2017_{element}{A}_{filetype}.pendf'
-            save_path = 'tape21'
+    file_handling = {'endf':  {'ext': 'tendl', 'tape_num': 20},
+                     'pendf': {'ext': 'pendf', 'tape_num': 21}}
+    download_url = (tendl_gen_url +
+                    f'{element}/{element}{A}/lib/endf/n-{element}{A}.' +
+                    file_handling[filetype.lower()]['ext'])
+    if save_path is None:
+        save_path = f"tape{file_handling[filetype.lower()]['tape_num']}"


# Check if the file exists
response = requests.head(download_url)
if response.status_code == 404:
# Raise FileNotFoundError if file not found
raise FileNotFoundError(f'{download_url} not found')

# Download the file using wget
subprocess.run(['wget', download_url, '-O', save_path])
Member:

can you use requests to get this file? Also, why not urllib?


return save_path

@contextlib.contextmanager
def suppress_output():
"""Suppress all output to stdout and stderr."""
with open(os.devnull, 'w') as fnull:
old_stdout = os.dup(1)
old_stderr = os.dup(2)
os.dup2(fnull.fileno(), 1)
os.dup2(fnull.fileno(), 2)
try:
yield
finally:
os.dup2(old_stdout, 1)
os.dup2(old_stderr, 2)
os.close(old_stdout)
os.close(old_stderr)

# Define a function to extract MT and MAT data from an ENDF file
def endf_specs(endf_path):
# Read in ENDF tape using ENDFtk
tape = ENDFtk.tree.Tape.from_file(endf_path)

# Determine the material ID
mat_ids = tape.material_numbers
matb = mat_ids[0]

# Set MF for cross sections
xs_MF = 3

# Extract out the file
file = tape.material(matb).file(xs_MF)

# Extract the MT numbers that are present in the file
MTs = []
for i in range(1000):
Member:

Magic number: why 1000?

Contributor Author:

This is based on the list of MT numbers from the ENDF manual (https://www.oecd-nea.org/dbdata/data/manual-endf/endf102_MT.pdf). The range goes up to 999.

Member:

So best to define a variable max_endf_MT = 1000 or such

with suppress_output():
try:
file.section(i)
MTs.append(i)
except:
continue
Member:

It looks like you may be able to query the list of sections directly:

MTs = file.sections.to_list()

I noticed in the ENDFtk readme that this may require parsing the file first (🤷‍♂️):

file = tape.material(matb).file(xs_MF).parse()


return matb, MTs

# Define a function to format GROUPR input cards
def format_card(card_name, card_content, MTs):
card_str = ''
gen_str = ' ' + ' '.join(map(str, card_content))
if card_name == 'Card 9':
for line in card_content:
card_str += f' {line}/\n'
Member:

Suggested change:
-        for line in card_content:
-            card_str += f' {line}/\n'
+        card_str = ' ' + '/\n '.join(card_content) + '/\n'

elif card_name == 'Card 4':
card_str += gen_str + '\n'
else:
card_str += gen_str + '/\n'
return card_str
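For a non-empty card_content, the loop and the suggested join produce identical text; a small check (the sample lines follow the Card 9 format built in groupr_input() below):

```python
# Original Card 9 loop from format_card()
def card9_loop(lines):
    out = ''
    for line in lines:
        out += f' {line}/\n'
    return out

# Reviewer's join-based one-liner
def card9_join(lines):
    return ' ' + '/\n '.join(lines) + '/\n'
```

The two differ only for an empty list, which cannot occur here since endf_specs() always returns at least one MT.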

# Define a function to create the GROUPR input file
def groupr_input(matb, MTs, element, A, mt_table):

# INPUT PARAMETERS

# Set Card 1
nendf = 20 # unit for endf tape
npend = 21 # unit for pendf tape
ngout1 = 0 # unit for input gout tape (default=0)
ngout2 = 31 # unit for output gout tape (default=0)

card1 = [nendf, npend, ngout1, ngout2]
Member:

Suggested change:
-    card1 = [nendf, npend, ngout1, ngout2]
+    cards = {}
+    cards[1] = [nendf, npend, ngout1, ngout2]


# Set Card 2
# matb -- (already defined) -- material to be processed
ign = 17 # neutron group structure option
igg = 0 # gamma group structure option
iwt = 11 # weight function option
lord = 0 # Legendre order
ntemp = 1 # number of temperatures (default = 1)
nsigz = 1 # number of sigma zeroes (default = 1)
iprint = 1 # long print option (0/1=minimum/maximum) -- (default=1)
ismooth = 1 # switch smoothing on/off (1/0, default=1=on)

card2 = [matb, ign, igg, iwt, lord, ntemp, nsigz, iprint]
Member:

Suggested change:
-    card2 = [matb, ign, igg, iwt, lord, ntemp, nsigz, iprint]
+    cards[2] = [matb, ign, igg, iwt, lord, ntemp, nsigz, iprint]

and so on...


# Set Card 3
Z = str(elements.index(element) + 1).zfill(2)
title = f'"{Z}-{element}-{A} for TENDL 2017"'
card3 = [title]

# Set Card 4
temp = 293.16 # temperature in Kelvin
card4 = [temp]

# Set Card 5
sigz = 0 # sigma zero values (including infinity)
card5 = [sigz]

# Set Card 9
mfd = 3 # file to be processed
mtd = MTs # sections to be processed
card9 = []
for mt in MTs:
mtname = mt_table[mt_table['MT'] == mt]['Reaction'].values[0] # description of section to be processed
card9_line = f'{mfd} {mt} "{mtname}"'
card9.append(card9_line)
Member:

if mt_table is a dictionary, this could be as simple as:

    card9 = [f'{mfd} {mt} "{mt_table[mt]}"' for mt in MTs]


# Set Card 10
matd = 0 # next mat number to be processed
card10 = [matd]

# Create a card deck
deck = [card1, card2, card3, card4, card5, card9, card10]
deck_names = ['Card 1', 'Card 2', 'Card 3', 'Card 4', 'Card 5', 'Card 9', 'Card 10']
deck_df = pd.DataFrame({
Member:

dataframe seems like overkill here, perhaps just a dictionary like the one I've already suggested?

'Card' : deck_names,
'Contents' : deck
})

# WRITE INPUT FILE FROM CARDS

# Write the input deck to the groupr.inp file
with open('groupr.inp', 'w') as f:
f.write('groupr\n')
for card_name, card_content in zip(deck_names, deck):
f.write(format_card(card_name, card_content, MTs))
f.write(' 0/\nstop')
Member:

separation of concerns: make a separate function to write the output from the one that generates the data
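One hypothetical shape for that split, building on the cards dictionary suggested earlier in this review (function names are illustrative, and the Card 4/Card 9 special cases handled by format_card() are omitted for brevity):

```python
def build_deck(cards):
    """Assemble (name, contents) pairs in write order from a {number: list} dict."""
    return [(f'Card {n}', contents) for n, contents in sorted(cards.items())]

def write_deck(deck, path='groupr.inp'):
    """Write an already-built deck to file; no data generation happens here."""
    with open(path, 'w') as f:
        f.write('groupr\n')
        for _, contents in deck:
            f.write(' ' + ' '.join(map(str, contents)) + '/\n')
        f.write(' 0/\nstop')
```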


return deck_df

# Define a function to execute NJOY bash script
def run_njoy(endf_path, pendf_path, card_deck, element, A):
# Read the template file
try:
with open('run_njoy_template.sh', 'r') as f:
script_content = f.read()
except:
with open('./GROUPR/run_njoy_template.sh', 'r') as f:
script_content = f.read()

# Replace placeholders with actual file paths
script_content = script_content.replace('<ENDF_PATH>', endf_path)
script_content = script_content.replace('<PENDF_PATH>', pendf_path)

# Write modified script content to run_njoy.sh file
with open('run_njoy.sh', 'w') as f:
f.write(script_content)

# Make the script executable
subprocess.run(["chmod", "+x", "run_njoy.sh"])

# Execute the modified script
njoy_run_message = subprocess.run(["./run_njoy.sh"], capture_output=True, text=True)
print(njoy_run_message.stdout)
print(njoy_run_message.stderr)

# If the run is successful, print out the output and make a copy of the file as a .GENDF file
if njoy_run_message.stderr == '':
output = subprocess.run(['cat', 'output'], capture_output=True, text = True)
title = card_deck[card_deck['Card'] == 'Card 3']['Contents'].values[0][0][1:-1]
title_index = output.stdout.find(title)
print(output.stdout[:title_index + len(title)])

gendf_path = f'tendl_2017_{element}{A}.gendf'
subprocess.run(['cp', 'tape31', gendf_path])
return gendf_path
42 changes: 42 additions & 0 deletions src/DataLib/fendl32B_retrofit/GROUPR/run_njoy_template.sh
Member:

Definitely want to replace this with a python script.
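A hedged sketch of that replacement, mirroring the checks in run_njoy_template.sh: verify njoy is on PATH, feed it the input deck on stdin, and raise on failure. This is illustrative only, not the implementation eventually merged in this PR:

```python
import shutil
import subprocess

def run_njoy(input_path='groupr.inp', output_path='groupr.out'):
    # Equivalent of the script's `command -v njoy` check
    if shutil.which('njoy') is None:
        raise RuntimeError('NJOY could not be found on PATH')
    # Equivalent of `njoy < groupr.inp > groupr.out`
    with open(input_path) as fin, open(output_path, 'w') as fout:
        result = subprocess.run(['njoy'], stdin=fin, stdout=fout)
    # Equivalent of the `$? -eq 0` exit-status check
    if result.returncode != 0:
        raise RuntimeError(f'NJOY encountered an error; check {output_path}')
```

The file-existence checks in the bash template become ordinary FileNotFoundError exceptions from open().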

@@ -0,0 +1,42 @@
#!/bin/bash

# Check if NJOY is installed and accessible
if ! command -v njoy &> /dev/null
then
echo "NJOY could not be found. Please make sure it is installed and added to your PATH."
exit 1
fi

# Define the input files
TAPE20="<ENDF_PATH>"
TAPE21="<PENDF_PATH>"
INPUT="groupr.inp"
OUTPUT="groupr.out"

# Check if input files exist
if [ ! -f "$TAPE20" ]; then
echo "ENDF file not found!"
exit 1
fi

if [ ! -f "$TAPE21" ]; then
echo "PENDF file not found!"
exit 1
fi

if [ ! -f "$INPUT" ]; then
echo "Input file not found!"
exit 1
fi

# Run NJOY with the input file
echo "Running NJOY..."
njoy < "$INPUT" > "$OUTPUT"

# Check if NJOY ran successfully
if [ $? -eq 0 ]; then
echo "NJOY ran successfully. Output written to /output."
else
echo "NJOY encountered an error. Check /output for details."
exit 1
fi