diff --git a/.gitignore b/.gitignore index a51106b..387eb2c 100644 --- a/.gitignore +++ b/.gitignore @@ -1,18 +1,23 @@ -# Intermediate data files -acropolis/data/*.dat +# Files from personal projects +acropolis/prj/* +acropolis/data/cn.dat + +# Reference paper +paper* # AlterBBN files -alterbbn/* +alterbbn* # Temporary files -tmp/* +CHANGES +TODO +manual/v* +plots/v* -# Byte-compiled / optimized / DLL files -__pycache__ -*.py[cod] -*$py.class +# Testing scripts +test.py -# Plot data +# Plots plots/data/* # Build files @@ -20,6 +25,11 @@ build/* dist/* ACROPOLIS.egg-info +# Byte-compiled / optimized / DLL files +__pycache__ +*.py[cod] +*$py.class + # TeX files *.tex *.sty diff --git a/ACROPOLIS.png b/ACROPOLIS.png new file mode 100644 index 0000000..c525b16 Binary files /dev/null and b/ACROPOLIS.png differ diff --git a/README.md b/README.md index 050d29e..774af71 100644 --- a/README.md +++ b/README.md @@ -3,8 +3,8 @@ **A** generi**C** f**R**amework f**O**r **P**hotodisintegration **O**f **LI**ght element**S** ![Language: Python3](https://img.shields.io/badge/Language-Python3-blue.svg?style=flat-square) -![Version: 1.2.1](https://img.shields.io/badge/Current_Version-1.2.1-green.svg?style=flat-square) -![Dersion: 1.3](https://img.shields.io/badge/Current_Dev_Version-1.3-orange.svg?style=flat-square) +![Version: 1.2.2](https://img.shields.io/badge/Current_Version-1.2.2-green.svg?style=flat-square) +![DevVersion: 1.3](https://img.shields.io/badge/Current_Dev_Version-1.3-orange.svg?style=flat-square) ![Logo](https://acropolis.hepforge.org/ACROPOLIS.png) When using this code for your own scientific publications, please cite @@ -27,10 +27,21 @@ The remarkable agreement between observations of the primordial light element ab # Changelog +v1.2.2\ +(April 6, 2022) + - Implemented fixes for the issues #10 and #11 on GitHub + - Made some initial plotting functions available in ``acropolis.plots``, which can be used to easily plot the results of parameter scans + - Improved the output that is printed to the screen (especially for parameter scans if ``verbose=True``) + - Updated the neutron lifetime to the PDG 2020 recommended value + - Included some example files, e.g. for parameter scans, in the directory examples/ + - Included a new c-file tools/create_sm_abundance_file.c, which can be used with [``AlterBBN``](https://alterbbn.hepforge.org/) to generate the file ``abundance_file.dat`` for sm.tar.gz + - Fixed a bug that prohibited running 2d parameter scans without 'fast' parameters + - Fixed a bug that caused INFO messages to be printed even for ``verbose=False`` + v1.2.1\ (February 16, 2021) - Fixed a bug in ``DecayModel``. Results that have been obtained with older versions can be corrected by multiplying the parameter ``n0a`` with an additional factor ``2.7012``. All results of our papers remain unchanged. 
- - Updated the set of initial abundances to the most recent values returned by [``AlterBBN``](https://alterbbn.hepforge.org/) v2.2 (explcitly, we used ``failsafe=12``) + - Updated the set of initial abundances to the most recent values returned by [``AlterBBN``](https://alterbbn.hepforge.org/) v2.2 (explicitly, we used ``failsafe=12``) v1.2\ (January 15, 2021) @@ -43,7 +54,7 @@ v1.2\ v1.1\ (December 1, 2020) - For the source terms it is now possible to specify arbitrary monochromatic and continuous contributions, meaning that the latter one is no longer limited to only final-state radiation of photons - - By including additional JIT compilation steps, the runtime without database files was drastically increased (by approximately a factor 15) + - By including additional JIT compilation steps, the runtime without database files was drastically decreased (by approximately a factor 15) - The previously mentioned performance improvements also allowed to drop the large database files alltogether, which results in a better user experience (all database files are now part of the git repo and no additional download is required) and a significantly reduced RAM usage (∼900MB → ∼20MB) - Fixed a bug, which could lead to NaNs when calculating heavily suppressed spectra with E0 ≫ me2/(22T) - Added a unified way to print the final abundances in order to declutter the wrapper scripts. This makes it easier to focus on the actual important parts when learning how to use ``ACROPOLIS`` @@ -56,24 +67,28 @@ v1.0\ # Installation from PyPI -This is the recommended way to install ACROPOLIS. To do so, make sure that ``pip`` is installed and afterwards simply execute the command +*This is the recommended way to install ACROPOLIS.* + +To install ACROPOLIS from PyPI, first make sure that ``pip`` is installed on your system and afterwards simply execute the command ``` python3 -m pip install ACROPOLIS --user ``` -After the installation is completed, the different modules of ACROPOLIS can be directly imported into our own Python code (just like e.g. numpy). Using this procedure also ensures that the executable ``decay`` and ``annihilation`` are copied into your ``PATH`` and that all dependencies are fulfilled. +Once the installation is completed, the different modules of ACROPOLIS can directly be imported into our own Python code (just like e.g. ``numpy``). Additionally, the installation also ensures that the two executable ``decay`` and ``annihilation`` are copied into your ``PATH`` and that all dependencies are fulfilled. + +If any dependencies of ACROPOLIS conflict with those for other programs in your work environment, it is strongly advised to utilize the capabilities of Python's virtual environments. # Installation from GitHub -To install ACROPOLIS from source, first clone the respective git repository by executing the command +To install ACROPOLIS directly from source on GitHub, start by cloning the respective git repository via the command ``` -git clone https://github.com/skumblex/acropolis.git +git clone https://github.com/hep-mh/acropolis.git ``` -Afterward, switch into the main directory and run +Afterward, switch into the newly created main directory and run ``` python3 -m pip install . --user @@ -81,19 +96,19 @@ python3 -m pip install . --user # Usage without installation -If you just want to use ACROPOLIS without any additional installation steps, you have to at least make sure that all dependencies are fulfilled. 
As specified in ``setup.py``, ACROPOLIS depends on the following packages (older versions might work, but have not been thoroughly tested) +In case you just want to use ACROPOLIS without any additional installation steps, it is necessary to manually check that all dependencies are fulfilled. As specified in ``setup.py``, ACROPOLIS depends on the following packages (older versions might work, but have not been thoroughly tested) - NumPy (> 1.19.1) - SciPy (>1.5.2) - Numba (> 0.51.1) -The most recent versions of these packages can be collectively installed at user-level, i.e. without the need for root access, by executing the command +The most recent versions of these packages can be collectively installed via the command ``` python3 -m pip install numpy, scipy, numba --user ``` -If these dependencies conflict with those for other programs in your work environment, it is strongly advised to utilise the capabilities of Python's virtual environments. +Afterwards, you can import the different modules into your own Python code, as long as said code resides in the ``acropolis`` directory (like ``decay`` and ``annihilation``). If you instead want to also use the different modules from other directories, please consider using one of the two previously mentioned installation methods. # Using the example models @@ -112,4 +127,4 @@ annihilation 10 1e-25 0 0 0 1 # Supported platforms -ACROPOLIS should work on any platform with a working Python3 installation. +ACROPOLIS should work on any platform that supports ``python3`` and ``clang``, the latter of which is required for ``numba`` to work. diff --git a/acropolis/cache.py b/acropolis/cache.py index 3675d18..9f91433 100644 --- a/acropolis/cache.py +++ b/acropolis/cache.py @@ -5,7 +5,7 @@ def cached_member(f_uncached): # Define the cache as a dictionary cache = {} - cT = {"_": -1.} + Tc = {"_": -1.} # Define the wrapper function @wraps(f_uncached) @@ -17,8 +17,8 @@ def f_cached(*args): # For each new temperature, # clear the cache and start over - if T != cT["_"]: - cT["_"] = T + if T != Tc["_"]: + Tc["_"] = T cache.clear() if pargs not in cache: diff --git a/acropolis/cascade.py b/acropolis/cascade.py index 4e4253c..946f75d 100644 --- a/acropolis/cascade.py +++ b/acropolis/cascade.py @@ -18,10 +18,10 @@ # pprint from acropolis.pprint import print_warning, print_error # params -from acropolis.params import me, me2, alpha, re, hbar, tau_m +from acropolis.params import me, me2, mm, mm2, alpha, re, hbar, tau_m from acropolis.params import zeta3, pi2 -from acropolis.params import Emin -from acropolis.params import approx_zero, eps, Ephb_T_max, E_EC_cut +from acropolis.params import FX +from acropolis.params import Emin, approx_zero, eps, Ephb_T_max from acropolis.params import NE_pd, NE_min @@ -57,6 +57,10 @@ def _JIT_G(Ee, Eph, Ephb): dE_sqrt = (Eph - Ephb)*sqrt( 1. - me2/( Eph*Ephb ) ) Ee_lim_m = ( Eph + Ephb - dE_sqrt )/2. Ee_lim_p = ( Eph + Ephb + dE_sqrt )/2. + # ATTENTION: White et al. impose the range in the soft + # photon limit, which is more difficult to handle but + # should lead to the same results, since the pair production + # kernel ensures that Ephb ~ T << Eph ~ O(MeV) if not ( me < Ee_lim_m <= Ee <= Ee_lim_p ): # CHECKED to never happen, since the intergration @@ -65,13 +69,14 @@ def _JIT_G(Ee, Eph, Ephb): # Split the function into four summands # and calculate all of them separately + # Ee + Eep = Eph + Ephb sud = 0. sud += 4.*( (Ee + Eep)**2. )*log( (4.*Ephb*Ee*Eep)/( me2*(Ee + Eep) ) )/( Ee*Eep ) sud += ( me2/( Ephb*(Ee + Eep) ) - 1. 
) * ( (Ee + Eep)**4. )/( (Ee**2.)*(Eep**2.) ) # ATTENTION: no additional minus sign in sud[2] # It is unclear whether it is a type or an artifact # of the scan (in the original paper) - sud += 2.*( 2*Ephb*(Ee + Eep) - me2 ) * ( (Ee + Eep)**2. )/( me2*Ee*Eep ) + sud += 2.*( 2.*Ephb*(Ee + Eep) - me2 ) * ( (Ee + Eep)**2. )/( me2*Ee*Eep ) sud += -8.*Ephb*(Ee + Eep)/me2 return sud @@ -80,7 +85,10 @@ def _JIT_G(Ee, Eph, Ephb): # _PhotonReactionWrapper ###################################################### @nb.jit(cache=True) -def _JIT_ph_rate_pair_creation(y, x, T): +def _JIT_ph_rate_pair_creation(logy, logx, T): + # Return the integrand for the 2d integral in log-space + x, y = exp(logx), exp(logy) + # Define beta as a function of y b = sqrt(1. - 4.*me2/y) @@ -92,13 +100,15 @@ def _JIT_ph_rate_pair_creation(y, x, T): # (the written limit is unitless, which must be wrong) # This limit is a consequence of the constraint on # the center-of-mass energy - return ( 1./(pi**2) )/( exp(x/T) - 1. ) * y * .5*pi*(re**2.)*(1.-b**2.)*( (3.-b**4.)*log( (1.+b)/(1.-b) ) - 2.*b*(2.-b**2.) ) + sig_pc = .5*pi*(re**2.)*(1.-b**2.)*( (3.-b**4.)*log( (1.+b)/(1.-b) ) - 2.*b*(2.-b**2.) ) + + return ( 1./(pi**2) )/( exp(x/T) - 1. ) * y * sig_pc * (x*y) @nb.jit(cache=True) -def _JIT_ph_kernel_inverse_compton(y, E, Ep, T): +def _JIT_ph_kernel_inverse_compton(logx, E, Ep, T): # Return the integrand for the 1d-integral in log-space; x = Ephb - x = exp(y) + x = exp(logx) return _JIT_F(E, Ep, x)*x/( pi2*(exp(x/T) - 1.) ) * x @@ -112,17 +122,17 @@ def _JIT_el_rate_inverse_compton(y, x, E, T): @nb.jit(cache=True) -def _JIT_el_kernel_inverse_compton(y, E, Ep, T): +def _JIT_el_kernel_inverse_compton(logx, E, Ep, T): # Define the integrand for the 1d-integral in log-space; x = Ephb - x = exp(y) + x = exp(logx) return _JIT_F(Ep+x-E, Ep, x)*( x/(pi**2) )/( exp(x/T) - 1. ) * x @nb.jit(cache=True) -def _JIT_el_kernel_pair_creation(y, E, Ep, T): +def _JIT_el_kernel_pair_creation(logx, E, Ep, T): # Define the integrand for the 1d-integral in log-space; x = Ephb - x = exp(y) + x = exp(logx) return _JIT_G(E, Ep, x)/( (pi**2.)*(exp(x/T) - 1.) ) * x @@ -159,7 +169,7 @@ def _JIT_dsdE_Z2(Ee, Eph): # SpectrumGenerator ########################################################### @nb.jit(cache=True) -def _JIT_set_spectra(F, i, Fi, cond): +def _JIT_set_spectra(F, i, Fi, cond=False): F[:, i] = Fi # In the strongly compressed regime, manually # set the photon spectrum to zero in order to @@ -168,10 +178,7 @@ def _JIT_set_spectra(F, i, Fi, cond): @nb.jit(cache=True) -def _JIT_solve_cascade_equation(E_rt, G, K, S0, Sc, T): - EC = me2/(22.*T) - Ecut = E_EC_cut*EC - +def _JIT_solve_cascade_equation(E_rt, G, K, E0, S0, Sc, T): # Extract the number of particle species... NX = len(G) # ...and the number of points in energy. 
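The hunks above and below belong to `_JIT_solve_cascade_equation`, which discretizes the cascade equation on a log-spaced energy grid and solves it by sweeping downwards from the injection energy E0, inverting a small NX-by-NX linear system at every grid point. The stand-alone sketch below only illustrates that backward-substitution idea under simplifying assumptions (toy trapezoidal weights, a single delta source per species in the highest energy bin, no suppression or boundary handling); the helper name `toy_backward_sweep` and the random test inputs are illustrative and not part of ACROPOLIS.

```python
import numpy as np

def toy_backward_sweep(E, G, K, S):
    # E: log-spaced energy grid (NE,); G: total interaction rates (NX, NE)
    # K: kernels (NX, NX, NE, NE), zero for Ep < E; S: delta sources (NX,)
    NX, NE = G.shape
    dy = np.log(E[1]/E[0])              # constant spacing in log(E)
    F  = np.zeros((NX, NE))

    # Highest energy bin: only the injected (delta) sources contribute
    F[:, -1] = S/G[:, -1]

    # Sweep downwards in energy; at each E_i the discretized equation reads
    # F_i = a_i + B_i F_i, i.e. F_i follows from solving (1 - B_i) F_i = a_i
    for i in range(NE - 2, -1, -1):
        a = np.zeros(NX)
        for j in range(i + 1, NE):      # contributions from higher energies
            w  = 0.5*dy if j == NE - 1 else dy
            a += w*E[j]*(K[:, :, i, j] @ F[:, j])
        a /= G[:, i]

        B = 0.5*dy*E[i]*K[:, :, i, i]/G[:, i][:, None]
        F[:, i] = np.linalg.solve(np.identity(NX) - B, a)

    return F

# Quick smoke test on random, positive toy inputs
rng = np.random.default_rng(0)
E = np.logspace(0., 2., 50)
G = rng.uniform(1., 2., (3, 50))
K = np.triu(rng.uniform(0., 1e-3, (3, 3, 50, 50)))  # zero below the (E, Ep) diagonal
S = np.array([1., 0., 0.])
F = toy_backward_sweep(E, G, K, S)
```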
@@ -181,12 +188,12 @@ def _JIT_solve_cascade_equation(E_rt, G, K, S0, Sc, T): # Generate the grid for the different spectra # First index: X = photon, electron, positron - F_rt = np.zeros( (3, NE) ) + F_rt = np.zeros( (NX, NE) ) # Calculate F_X(E_S), NE-1 _JIT_set_spectra(F_rt, -1, np.array([ Sc[X,-1]/G[X,-1] + np.sum(K[X,:,-1,-1]*S0[:]/(G[:,-1]*G[X,-1])) for X in range(NX) - ]), E_rt[-1] > Ecut) + ])) # Loop over all energies i = (NE - 1) - 1 # start at the second to last index, NE-2 while i >= 0: @@ -194,39 +201,39 @@ def _JIT_solve_cascade_equation(E_rt, G, K, S0, Sc, T): a = np.zeros( (NX, ) ) # Calculate the matrix B and the vector a - for j, X in enumerate( range(NX) ): + for X in range(NX): # Calculate B - B[j,:] = .5*dy*E_rt[i]*K[X,:,i,i]/G[X,i] + B[X,:] = .5*dy*E_rt[i]*K[X,:,i,i]/G[X,i] # Calculate a - a[j] = Sc[X,i]/G[X,i] + a[X] = Sc[X,i]/G[X,i] - a0 = K[X,:,i,-1]*S0[:]/G[:,-1] + .5*dy*E_rt[-1]*K[X,:,i,-1]*F_rt[:,-1] - for k in range(i+1, NE-2): - a0 += dy*E_rt[k]*K[X,:,i,k]*F_rt[:,k] + a0 = K[X,:,i,-1]*S0[:]/G[:,-1] + .5*dy*E_rt[-1]*K[X,:,i,-1]*F_rt[:,-1] + for j in range(i+1, NE-1): # Goes to NE-2 + a0 += dy*E_rt[j]*K[X,:,i,j]*F_rt[:,j] - for a0i in a0: - a[j] += a0i/G[X,i] + for a0X in a0: + a[X] += a0X/G[X,i] # Solve the system of linear equations for F _JIT_set_spectra(F_rt, i, np.linalg.solve(np.identity(NX)-B, a) - , E_rt[i] > Ecut) + ) i -= 1 # Remove potential zeros - F_rt = F_rt.reshape( 3*NE ) + F_rt = F_rt.reshape( NX*NE ) for i, f in enumerate(F_rt): if f < approx_zero: F_rt[i] = approx_zero - F_rt = F_rt.reshape( (3, NE) ) + F_rt = F_rt.reshape( (NX, NE) ) # Define the result array... - res = np.zeros( (4, NE) ) + res = np.zeros( (NX+1, NE) ) # ...and fill it - res[0 , :] = E_rt - res[1:4, :] = F_rt + res[0 , :] = E_rt + res[1:NX+1, :] = F_rt return res @@ -246,6 +253,8 @@ def __init__(self, Y0, eta, db): # NUMBER DENSITIES of baryons, electrons and nucleons ##################### def _nb(self, T): + # gs does not change anymore for the relevant temperature, + # hence (R0/R)^3 = gs(T)T^3/( gs(T0)T0^3) = (T/T0)^3 return self._sEta * ( 2.*zeta3/pi2 ) * (T**3.) @@ -264,16 +273,27 @@ class _PhotonReactionWrapper(_ReactionWrapperScaffold): def __init__(self, Y0, eta, db): super(_PhotonReactionWrapper, self).__init__(Y0, eta, db) + + # CONTINUOUS ENERGY LOSS ################################################## + # E is the energy of the loosing particle + # T is the temperature of the background photons + + # TOTAL CONTINUOUS ENERGY LOSS ############################################ + def total_eloss(E, T): + return 0. + + # RATES ################################################################### # E is the energy of the incoming particle # T is the temperature of the background photons # PHOTON-PHOTON SCATTERING ################################################ def _rate_photon_photon(self, E, T): - if E > me2/T: - return 0. + #if E > me2/T: + # return 0. + expf = exp( -E*T/me2 ) - return 0.151348 * (alpha**4.) * me * (E/me)**3. * (T/me)**6. + return 0.151348 * (alpha**4.) * me * (E/me)**3. * (T/me)**6. * expf # COMPTON SCATTERING ###################################################### @@ -285,19 +305,37 @@ def _rate_compton(self, E, T): # BETHE-HEITLER PAIR PRODUCTION ########################################### def _rate_bethe_heitler(self, E, T): - # In general, it is necessary to use a different formula close to - # the reaction threshold, i.e. for E < 2me. 
However, this case never - # occurs as Emin = 1.5 > 2me (see 'acropolis.params') - - # For small energies, the rate is best approximated by a constant - # (cf. 'hep-ph/0604251') - if E < 4.: E = 4. + # For small energies, the rate can be approximated by a constant + # (cf. 'hep-ph/0604251') --- NOT USED HERE + #if E < 4.: E = 4. k = E/me + # Below threshold, the rate vanishes + # This case never happens since Emin = 1.5 > 2me + # (see 'acropolis.params') + if k < 2: + return 0. + + # Approximation for SMALL energies + if 2 <= k <= 4: + r = ( 2.*k - 4. )/( k + 2. + 2.*sqrt(2.*k) ) + + return ( alpha**3./me2 ) * self._nNZ2(T) * (2.*pi/3.) * ( (k-2.)/k )**3. * ( \ + 1 + r/2. + (23./40.)*(r**2.) + (11./60.)*(r**3.) + (29./960.)*(r**4.) \ + ) + + + # Approximation for LARGE energies log2k = log(2.*k) - # We implement corrections up to order (2./k)**2 ('astro-ph/9412055') - return ( alpha**3./me2 ) * self._nNZ2(T) * ( (28./9.)*log2k - 218./27. + (2./k)**2.*( (2./3.)*log2k**3. - log2k**2. + (6. - pi2/3.)*log2k + 2.*zeta3 + pi2/6. - 7./2. ) ) + # We implement corrections up to order (2./k)**6 ('astro-ph/9412055') + # This is relevant in order to ensure a smooth transition at k = 4 + return ( alpha**3./me2 ) * self._nNZ2(T) * ( \ + (28./9.)*log2k - 218./27. \ + + (2./k)**2. * ( (2./3.)*log2k**3. - log2k**2. + (6. - pi2/3.)*log2k + 2.*zeta3 + pi2/6. - 7./2. ) \ + - (2./k)**4. * ( (3./16.)*log2k + 1./8. ) \ + - (2./k)**6. * ( (29./2304.)*log2k - 77./13824. ) \ + ) # DOUBLE PHOTON PAIR PRODUCTION ########################################### @@ -310,13 +348,18 @@ def _rate_pair_creation(self, E, T): # Define the integration limits from the # constraint on the center-of-mass energy - llim = me2/E # < 30*T (see above) - ulim = Ephb_T_max*T # ~ 100*T - # ulim > llim, since me2/E < 30*T + llim = me2/E # < 50*T (see above) + ulim = Ephb_T_max*T # ~ 200*T + # ulim > llim, since me2/E < 50*T # CHECKED! - # Perform the integration in lin space - I_fso_E2 = dblquad(_JIT_ph_rate_pair_creation, llim, ulim, lambda x: 4.*me2, lambda x: 4.*E*x, epsrel=eps, epsabs=0, args=(T,)) + # Perform the integration in log-log space + # The limits for s are always in ascending order, + # i.e. 4*me2 < 4*E*x, since x > me2/E + I_fso_E2 = dblquad(_JIT_ph_rate_pair_creation, log(llim), log(ulim), \ + lambda logx: log(4.*me2), lambda logx: log(4.*E) + logx, \ + epsrel=eps, epsabs=0, args=(T,) + ) return I_fso_E2[0]/( 8.*E**2. ) @@ -344,7 +387,12 @@ def total_rate(self, E, T): # PHOTON-PHOTON SCATTERING ################################################ def _kernel_photon_photon(self, E, Ep, T): - return 1112./(10125.*pi) * (alpha**4.)/(me**8.) * 8.*(pi**4.)*(T**6.)/63. * Ep**2. * ( 1. - E/Ep + (E/Ep)**2. )**2. + #if Ep > me2/T: + # return 0. + expf = exp( -Ep*T/me2 ) + + return 1112./(10125.*pi) * (alpha**4.)/(me**8.) * 8.*(pi**4.)*(T**6.)/63. \ + * Ep**2. * ( 1. - E/Ep + (E/Ep)**2. )**2. * expf # COMPTON SCATTERING ###################################################### @@ -417,7 +465,7 @@ def __init__(self, Y0, eta, db): # RATES ################################################################### - # E is the energy of the outgoing particle + # E is the energy of the incoming particle # T is the temperature of the background photons # INVERSE COMPTON SCATTERING ############################################## @@ -538,6 +586,7 @@ def _kernel_pair_creation(self, E, Ep, T): # '_PhotonReactionWrapper._rate_pair_creation' if Ep < me2/(50.*T): return 0. + # Ep is the incoming(!) energy dE, E2 = Ep - E, E**2. 
z1 = Ep*( me2 - 2.*dE*( sqrt(E2 - me2) - E ) )/( 4*Ep*dE + me2 ) @@ -592,7 +641,7 @@ def __init__(self, Y0, eta, db): # RATES ################################################################### - # E is the energy of the outgoing particle + # E is the energy of the incoming particle # T is the temperature of the background photons # INVERSE COMPTON SCATTERING ############################################## @@ -648,8 +697,9 @@ def total_kernel_x(self, E, Ep, T, X): ) -class _MuonReactionWrapper(_ReactionWrapperScaffold): # TODO - pass +# TODO: Not yet fully implemented +# Goal is ACROPOLIS v1.3 +class _MuonReactionWrapper(_ReactionWrapperScaffold): # RATES ################################################################### # E is the energy of the incoming particle @@ -657,7 +707,7 @@ class _MuonReactionWrapper(_ReactionWrapperScaffold): # TODO # MUON DECAY ############################################################## def _rate_muon_decay(self, E, T): - return hbar/tau_m + return hbar*mm/(tau_m*E) # INVERSE COMPTON SCATTERING ############################################## @@ -677,15 +727,18 @@ def __init__(self, Y0, eta): # no data in the folder 'data/', db = (None, None) db = import_data_from_db() - self._sY0 = Y0 # A dictionary containing the BBN parameter + # Define a dictionary containing the BBN parameter + self._sY0 = Y0 - self._sRW = { # A dictionary containing all reaction wrappers + # Define a dictionary containing all reaction wrappers + self._sRW = { 0: _PhotonReactionWrapper (self._sY0, eta, db), 1: _ElectronReactionWrapper(self._sY0, eta, db), 2: _PositronReactionWrapper(self._sY0, eta, db) } - self._sNX = 3 # The number of particle species (in the cascade) + # Set the number of particle species (in the cascade) + self._sNX = 1 + 2*FX def _rate_x(self, X, E, T): @@ -700,7 +753,41 @@ def rate_photon(self, E, T): return self._rate_x(0, E, T) - def universal_spectrum(self, E0, S0, Sc, T): + def get_spectrum(self, E0, S0, Sc, T, allX=False): + # Define the dimension of the grid + # as defined in 'params.py'... + NE = int(log10(E0/Emin)*NE_pd) + # ... but not less than NE_min points + NE = max(NE, NE_min) + + # Generate the grid for the energy + E_rt = np.logspace(log(Emin), log(E0), NE, base=np.e) + + # Generate the grid for the rates + G = np.array([[self._rate_x(X, E, T) for E in E_rt] for X in range(self._sNX)]) + # first index: X, second index according to energy E + + # Generate the grid for the kernels + K = np.array([[[[self._kernel_x_xp(X, Xp, E, Ep, T) if Ep >= E else 0. for Ep in E_rt] for E in E_rt] for Xp in range(self._sNX)] for X in range(self._sNX)]) + # first index: X, second index: Xp + # third index according to energy E + # fourth index according to energy Ep; + # For Ep < E, the kernel is simply 0. 
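+        # Resulting shapes (from the comprehensions above):
+        #   G -> (NX, NE),  K -> (NX, NX, NE, NE)
+        # Since K vanishes for Ep < E, it is upper triangular in (E, Ep),
+        # which is what allows the cascade equation to be solved by a
+        # single downward sweep from E0 in _JIT_solve_cascade_equation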
+ + # Generate the grids for the source terms + # injection + final-state radiation + S0 = np.array([S(T) for S in S0]) + Sc = np.array([[ScX(E, T) for E in E_rt] for ScX in Sc]) + + # Calculate the spectra by solving + # the cascade equation + res = _JIT_solve_cascade_equation(E_rt, G, K, E0, S0, Sc, T) + + # 'res' always has at least two columns + return res[0:2,:] if allX == False else res + + + def get_universal_spectrum(self, E0, S0, Sc, T, offset=0.): # Define EC and EX as in 'astro-ph/0211258' EC = me2/(22.*T) EX = me2/(80.*T) @@ -720,11 +807,12 @@ def universal_spectrum(self, E0, S0, Sc, T): F_rt = np.zeros(NE) # Calculate the spectrum for the different energies - S0N = lambda T: S0[0](T) + S0[1](T) + S0[2](T) + # TODO: Perform integration + S0N = lambda T: sum(S0X(T) for S0X in S0) for i, E in enumerate(E_rt): if E < EX: F_rt[i] = S0N(T) * K0 * (EX/E)**1.5/self.rate_photon(E, T) - elif E > EX and E < EC: + elif E >= EX and E <= (1. + offset)*EC: # an offset enables better interpolation F_rt[i] = S0N(T) * K0 * (EX/E)**2.0/self.rate_photon(E, T) # Remove potential zeros @@ -737,36 +825,3 @@ def universal_spectrum(self, E0, S0, Sc, T): res[1, :] = F_rt return res - - - def nonuniversal_spectrum(self, E0, S0, Sc, T, allX=False): - # Define the dimension of the grid - # as defined in 'params.py'... - NE = int(log10(E0/Emin)*NE_pd) - # ... but not less than NE_min points - NE = max(NE, NE_min) - - # Generate the grid for the energy - E_rt = np.logspace(log(Emin), log(E0), NE, base=np.e) - - # Generate the grid for the rates - G = np.array([[self._rate_x(X, E, T) for E in E_rt] for X in range(self._sNX)]) - # first index: X, second index according to energy E - - # Generate the grid for the kernels - K = np.array([[[[self._kernel_x_xp(X, Xp, E, Ep, T) if Ep >= E else 0. for Ep in E_rt] for E in E_rt] for Xp in range(self._sNX)] for X in range(self._sNX)]) - # first index: X, second index: Xp - # third index according to energy E - # fourth index according to energy Ep; - # For Ep < E, the kernel is simply 0. - - # Generate the grids for the source terms - # injection + final-state radiation - S0 = np.array([S(T) for S in S0]) - Sc = np.array([[ScX(E, T) for E in E_rt] for ScX in Sc]) - - # Calculate the spectra by solving - # the cascade equation - res = _JIT_solve_cascade_equation(E_rt, G, K, S0, Sc, T) - - return res[0:2,:] if allX == False else res diff --git a/acropolis/data/pythia8.cmnd b/acropolis/data/pythia8.cmnd new file mode 100644 index 0000000..481a19b --- /dev/null +++ b/acropolis/data/pythia8.cmnd @@ -0,0 +1 @@ +ProcessLevel:all = off diff --git a/acropolis/db.py b/acropolis/db.py index f1cdae9..6cdd919 100644 --- a/acropolis/db.py +++ b/acropolis/db.py @@ -28,7 +28,8 @@ def import_data_from_db(): start_time = time() print_info( "Extracting and reading database files.", - "acropolis.db.import_data_from_db" + "acropolis.db.import_data_from_db", + verbose_level=1 ) ratefl = gzip.open(db_file, "rb") @@ -37,7 +38,9 @@ def import_data_from_db(): end_time = time() print_info( - "Finished after " + str( int( (end_time - start_time)*1e4 )/10 ) + "ms." 
+ "Finished after {:.1f}ms.".format( 1e3*(end_time - start_time) ), + "acropolis.db.import_data_from_db", + verbose_level=1 ) return ratedb @@ -74,7 +77,7 @@ def _get_E_index(E_log): index = int( ( Enum - 1 ) * ( E_log - Emin_log ) / ( Emax_log - Emin_log ) ) # For points at the upper boundary, i+1 does not exist - return index if index != Enum -1 else index - 1 + return index if index != Enum - 1 else index - 1 @nb.jit(cache=True) diff --git a/acropolis/em.py b/acropolis/em.py deleted file mode 100644 index 2468b89..0000000 --- a/acropolis/em.py +++ /dev/null @@ -1,74 +0,0 @@ -# math -from math import pi, log10 - -# input -from acropolis.input import InputInterface -# model -from acropolis.model import AbstractModel -# params -from acropolis.params import hbar, alpha - -class EmModel(AbstractModel): - - def __init__(self, input_file): - # Initialize the Input_Interface - self._sII = InputInterface(input_file) - - # The mass of the decaying particle - self._sMphi = self._sII.parameter("mphi") # in MeV - # The lifetime of the decaying particle - self._sTau = self._sII.parameter("tau") # in s - # The injection energy - self._sE0 = self._sMphi/2. - - # The number density of the decaying particle - # as a function of temperature - self._number_density = lambda T: self._sII.cosmo_column(5, T) - - # The branching ratio into electron-positron pairs - self._sBRee = self._sII.parameter("bree") - # The branching ratio into two photons - self._sBRaa = self._sII.parameter("braa") - - # Call the super constructor - super(EmModel, self).__init__(self._sE0, self._sII) - - # ABSTRACT METHODS ################################################################## - - def _temperature_range(self): - # The number of degrees-of-freedom to span - mag = 2. - # Calculate the approximate decay temperature - Td = self._sII.temperature( self._sTau ) - # Calculate Tmin and Tmax from Td - Td_ofm = log10(Td) - # Here we choose -1.5 (+0.5) orders of magnitude - # below (above) the approx. decay temperature, - # since the main part happens after t = \tau - Tmin = 10.**(Td_ofm - 3.*mag/4.) - Tmax = 10.**(Td_ofm + 1.*mag/4.) - - return (Tmin, Tmax) - - - def _source_photon_0(self, T): - return self._sBRaa * 2. * self._number_density(T) * (hbar/self._sTau) - - - def _source_electron_0(self, T): - return self._sBRee * self._number_density(T) * (hbar/self._sTau) - - - def _source_photon_c(self, E, T): - EX = self._sE0 - - x = E/EX - y = me2/(4.*EX**2.) - - if 1. - y < x: - return 0. - - _sp = self._source_electron_0(T) - - # Divide by 2. since only one photon is produced - return (_sp/EX) * (alpha/pi) * ( 1. + (1.-x)**2. 
)/x * log( (1.-x)/y ) diff --git a/acropolis/info.py b/acropolis/info.py new file mode 100644 index 0000000..421ece1 --- /dev/null +++ b/acropolis/info.py @@ -0,0 +1,14 @@ +# The current version of ACROPOLIS +version = "1.2.2" + +# The current dev version of ACROPOLIS +dev_version = "1.2.2" + +# The short description of ACROPOLIS +description = "A generiC fRamework fOr Photodisintegration Of LIght elementS" + +# The main webpage of ACROPOLIS +url = "https://acropolis.hepforge.org" + +# The authors of ACROPOLIS +authors = "Paul Frederik Depta, Marco Hufnagel, Kai Schmidt-Hoberg" diff --git a/acropolis/input.py b/acropolis/input.py index 12ae65d..f7a00a0 100644 --- a/acropolis/input.py +++ b/acropolis/input.py @@ -4,11 +4,10 @@ from math import log10 # numpy import numpy as np -# scipy -from scipy.interpolate import interp1d -from scipy.integrate import cumtrapz # tarfilfe import tarfile +# abc +from abc import ABC, abstractmethod # util from acropolis.utils import cumsimp @@ -26,28 +25,76 @@ def locate_sm_file(): return sm_file -class InputInterface(object): +def data_from_file(filename): + # Read the input file + tf, tc = tarfile.open(filename, "r:gz"), {} + + # Extract the different files and + # store them in a dictionary + for m in tf.getmembers(): tc[m.name] = tf.extractfile(m) + + # READ THE PREVIOUSLY GENERATED DATA + cosmo_data = np.genfromtxt(tc["cosmo_file.dat"] ) + abund_data = np.genfromtxt(tc["abundance_file.dat"]) + param_data = np.genfromtxt(tc["param_file.dat"], + delimiter="=", + dtype=None, + encoding=None + ) + + return InputData(cosmo_data, abund_data, param_data) + + +class AbstractData(ABC): + + @abstractmethod + def get_cosmo_data(self): + pass + + @abstractmethod + def get_abund_data(self): + pass + + @abstractmethod + def get_param_data(self): + pass + + +class InputData(AbstractData): + + def __init__(self, cosmo_data, abund_data, param_data): + self._sCosmoData = cosmo_data + self._sAbundData = abund_data + self._sParamData = param_data + + + def get_cosmo_data(self): + return self._sCosmoData + + + def get_abund_data(self): + return self._sAbundData + + + def get_param_data(self): + return self._sParamData + - def __init__(self, input_file): - # Read the input file - tf, tc = tarfile.open(input_file, "r:gz"), {} +class InputInterface(object): - # Extract the different files and - # store them in a dictionary - for m in tf.getmembers(): tc[m.name] = tf.extractfile(m) + def __init__(self, input_data): + # If input_data is a filename, extract the data first + if type(input_data) == str: + input_data = data_from_file(input_data) - # READ THE PREVIOUSLY GENERATED DATA - self._sCosmoData = np.genfromtxt(tc["cosmo_file.dat"] ) - self._sAbundData = np.genfromtxt(tc["abundance_file.dat"]) - self._sParamData = np.genfromtxt(tc["param_file.dat"], - delimiter="=", - dtype=None, - encoding=None - ) + # Extract the provided input data + self._sCosmoData = input_data.get_cosmo_data() + self._sAbundData = input_data.get_abund_data() + self._sParamData = input_data.get_param_data() # Calculate the scale factor and add it sf = np.exp( cumsimp(self._sCosmoData[:,0]/hbar, self._sCosmoData[:,4]) ) - self._sCosmoData = np.column_stack( [self._sCosmoData, sf] ) + self._sCosmoData = np.column_stack( [self._sCosmoData, sf] ) # Log the cosmo data for the interpolation # ATTENTION: At this point we have to take the @@ -95,22 +142,42 @@ def _check_data(self): # 1. 
COSMO_DATA ########################################################### + def _find_index(self, x, x0): + # Returns an index ix such that x0 + # lies between x[ix] and x[ix+1] + ix = np.argmin( np.abs( x - x0 ) ) + + # Check the edge of the array + if ix == self._sCosmoDataShp[0] - 1: + # In this case, the condition + # below is always False + # --> No additional -1 + ix -= 1 + + # If x0 is not between ix and ix+1,... + if not (x[ix] <= x0 <= x[ix+1] or x[ix] >= x0 >= x[ix+1]): + # ...it must be between ix-1 and ix + ix -= 1 + + return ix + + def _interp_cosmo_data(self, val, xc, yc): + # ATTENTION: To ensure maximal performance, + # it is assumed that x is already sorted in + # either increasing or decreasing order x = self._sCosmoDataLog[:,xc] y = self._sCosmoDataLog[:,yc] - N = self._sCosmoDataShp[0] val_log = log10(val) - # Extract the index corresponding to - # the data entries above and below 'val' - ix = np.argmin( np.abs( x - val_log ) ) - if ix == N - 1: - ix -= 1 + + # Extract the index closest to 'val_log' + ix = self._find_index(x, val_log) m = (y[ix+1] - y[ix])/(x[ix+1] - x[ix]) b = y[ix] - m*x[ix] - return 10**(m*val_log + b) + return 10.**(m*val_log + b) def temperature(self, t): @@ -141,6 +208,10 @@ def cosmo_column(self, yc, val, xc=1): return self._interp_cosmo_data(val, xc, yc) + def cosmo_range(self): + return ( min(self._sCosmoData[:,1]), max(self._sCosmoData[:,1]) ) + + # 2. ABUNDANCE_DATA ####################################################### def bbn_abundances(self): diff --git a/acropolis/models.py b/acropolis/models.py index c5d655e..c6b1edc 100644 --- a/acropolis/models.py +++ b/acropolis/models.py @@ -14,8 +14,8 @@ # params from acropolis.params import zeta3 from acropolis.params import hbar, c_si, me2, alpha, tau_t -from acropolis.params import Emin -from acropolis.params import NY +from acropolis.params import Emin, NY +from acropolis.params import universal # pprint from acropolis.pprint import print_info, print_warning @@ -32,40 +32,40 @@ def __init__(self, e0, ii): # The temperature range that is used for the calculation self._sTrg = self._temperature_range() - # Calculate the relevant physical quantities, i.e. ... - # ...the 'delta' source terms and... - self._sS0 = [ - self._source_photon_0 , - self._source_electron_0, - self._source_positron_0 - ] - # ...the ISR source terms - self._sSc = [ - self._source_photon_c , - self._source_electron_c, - self._source_positron_c - ] + # The relevant source terms + (self._sS0, self._sSc) = self.get_source_terms() # A buffer for high-performance scans self._sMatpBuffer = None def run_disintegration(self): - # Print a warning of the injection energy + # Print a warning if the injection energy # is larger than 1GeV, as this might lead # to wrong results - if int( self._sE0 ) > 1e3: + if not universal and int( self._sE0 ) > 1e3: print_warning( "Injection energy > 1 GeV. Results cannot be trusted.", "acropolis.models.AbstractMode.run_disintegration" ) + # Print a warning if the temperature range + # of the model is not covered by the data + # in cosmo_file.dat + cf_temp_rg = self._sII.cosmo_range() + if not (cf_temp_rg[0] <= self._sTrg[0] <= self._sTrg[1] <= cf_temp_rg[1]): + print_warning( + "Temperature range not covered by input data. Results cannot be trusted.", + "acropolis.models.AbstractMode.run_disintegration" + ) + # If the energy is below all thresholds, # simply return the initial abundances if self._sE0 <= Emin: print_info( "Injection energy is below all thresholds. 
No calculation required.", - "acropolis.models.AbstractModel.run_disintegration" + "acropolis.models.AbstractModel.run_disintegration", + verbose_level=1 ) return self._squeeze_decays( self._sII.bbn_abundances() ) @@ -90,6 +90,24 @@ def run_disintegration(self): return Yf + def get_source_terms(self): + # Collect the different source terms, i.e. ... + # ...the 'delta' source terms and... + s0 = [ + self._source_photon_0 , + self._source_electron_0, + self._source_positron_0 + ] + # ...the continous source terms + sc = [ + self._source_photon_c , + self._source_electron_c, + self._source_positron_c + ] + + return (s0, sc) + + def _pdi_matrix(self): if self._sMatpBuffer is None: # Initialize the NuclearReactor @@ -175,9 +193,8 @@ def _source_positron_0(self, T): return self._source_electron_0(T) - @abstractmethod def _source_photon_c(self, E, T): - pass + return 0. def _source_electron_c(self, E, T): @@ -266,7 +283,6 @@ def _source_photon_c(self, E, T): _sp = self._source_electron_0(T) - # Divide by 2. since only one photon is produced return (_sp/EX) * (alpha/pi) * ( 1. + (1.-x)**2. )/x * log( (1.-x)/y ) @@ -299,6 +315,7 @@ def __init__(self, mchi, a, b, tempkd, bree, braa, omegah2=0.12): # Call the super constructor super(AnnihilationModel, self).__init__(self._sE0, self._sII) + # DEPENDENT QUANTITIES ############################################################## def _number_density(self, T): @@ -363,5 +380,4 @@ def _source_photon_c(self, E, T): _sp = self._source_electron_0(T) - # Divide by 2. since only one photon is produced return (_sp/EX) * (alpha/pi) * ( 1. + (1.-x)**2. )/x * log( (1.-x)/y ) diff --git a/acropolis/nucl.py b/acropolis/nucl.py index 4543100..fcfe1a7 100644 --- a/acropolis/nucl.py +++ b/acropolis/nucl.py @@ -19,8 +19,9 @@ from acropolis.pprint import print_error, print_warning, print_info # params from acropolis.params import me, me2, hbar, tau_n, tau_t -from acropolis.params import approx_zero, eps -from acropolis.params import NT_pd +from acropolis.params import approx_zero, eps, E_EC_max +from acropolis.params import NT_pd, NY +from acropolis.params import universal # cascade from acropolis.cascade import SpectrumGenerator @@ -85,6 +86,29 @@ 16: 5.605794, 17: 9.304680 } +# A dictionary containing the theoretical errors for +# the different reaction rates (taken from 2006.14803) +# in terms of a relative deviation from the mean value +# 8 (He4->d+d); 10, 11; (Li6->...); 12, 14 (Li7->...) 
+_rdev = { + 1: 0.00, + 2: 0.00, + 3: 0.00, + 4: 0.00, + 5: 0.00, + 6: 0.00, + 7: 0.00, + 8: 0.00, + 9: 0.00, + 10: 0.00, + 11: 0.00, + 12: 0.00, + 13: 0.00, + 14: 0.00, + 15: 0.00, + 16: 0.00, + 17: 0.00 +} # A dictionary containing all relevant decays @@ -361,8 +385,9 @@ def get_cross_section(self, reaction_id, E): def _pdi_rates(self, T): EC = me2/(22.*T) - # Calculate the maximal energy - Emax = min( self._sE0, 10.*EC ) + # Set the maximal energy, serving + # as a cutoff for the integration + Emax = min( self._sE0, E_EC_max*EC ) # For E > me2/T >> EC, the spectrum # is strongly suppressed @@ -372,15 +397,22 @@ def _pdi_rates(self, T): pdi_rates = {rid:approx_zero for rid in _lrid} # Calculate the spectra for the given temperature - xsp, ysp = self._sGen.nonuniversal_spectrum( - self._sE0, self._sS0, self._sSc, T - ) + if not universal: + xsp, ysp = self._sGen.get_spectrum( + self._sE0, self._sS0, self._sSc, T + ) + else: + xsp, ysp = self._sGen.get_universal_spectrum( + self._sE0, self._sS0, self._sSc, T, offset=5e-2 + ) + # For performance reasons, also + # cut the energy at threshold + Emax = min(self._sE0, EC) # Interpolate the photon spectrum (in log-log space) # With this procedure it should be sufficient to perform # a linear interpolation, which also has less side effects - # lgFph = interp1d( np.log(xsp), np.log(ysp), kind='linear' ) - Fph = LogInterp(xsp, ysp) + Fph = LogInterp(xsp, ysp) # Interpolation on: Emin -> E0 # Calculate the kernel for the integration in log-space def Fph_s(log_E, rid): E = exp( log_E ); return Fph( E ) * E * self.get_cross_section(rid, E) @@ -391,28 +423,31 @@ def Fph_s(log_E, rid): # Calculate the different rates by looping over all available reaction_id's for rid in _lrid: - # Do not perform the integral for energies below - # threshold or for strongly suppressed spectra - if _eth[rid] > Emax: - continue - - # Perform the integration from the threshold energy to Emax - with warnings.catch_warnings(record=True) as w: - log_Emin, log_Emax = log(_eth[rid]), log(Emax) - I_Fs = quad(Fph_s, log_Emin, log_Emax, epsrel=eps, epsabs=0, args=(rid,)) - - if len(w) == 1 and issubclass(w[0].category, IntegrationWarning): - print_warning( - "Slow convergence when calculating the pdi rates " + - "@ rid = %i, T = %.3e, E0 = %.3e, Eth = %.3e" % (rid, T, self._sE0, _eth[rid]), - "acropolis.nucl.NuclearReactor._thermal_rates_at" - ) - - # Calculate the 'delta-term' + # Calculate the 'delta-term'... I_dt = self._sS0[0](T)*self.get_cross_section(rid, self._sE0)/rate_photon_E0 + # ... and use it as an initial value + pdi_rates[rid] = I_dt # might be zero due to exp. suppression! + + # Only perform the integral for energies above threshold, + # i.e. 
do not consider strongly suppressed spectra + if Emax > _eth[rid]: + # Perform the integration from the threshold energy to Emax + with warnings.catch_warnings(record=True) as w: + log_Emin, log_Emax = log(_eth[rid]), log(Emax) + I_Fs = quad(Fph_s, log_Emin, log_Emax, epsrel=eps, epsabs=0, args=(rid,)) + + if len(w) == 1 and issubclass(w[0].category, IntegrationWarning): + print_warning( + "Slow convergence when calculating the pdi rates " + + "@ rid = %i, T = %.3e, E0 = %.3e, Eth = %.3e" % (rid, T, self._sE0, _eth[rid]), + "acropolis.nucl.NuclearReactor._thermal_rates_at" + ) - # Add the delta term and save the result - pdi_rates[rid] = I_dt + I_Fs[0] + # Add the result of the integral to the 'delta' term + pdi_rates[rid] += I_Fs[0] + + # Avoid potential zeros + pdi_rates[rid] = max(approx_zero, pdi_rates[rid]) # Go home and play return pdi_rates @@ -434,16 +469,18 @@ def get_pdi_grids(self): start_time = time() print_info( "Calculating non-thermal spectra and reaction rates.", - "acropolis.nucl.NuclearReactor.get_thermal_rates" + "acropolis.nucl.NuclearReactor.get_pdi_grids", + verbose_level=1 ) # Loop over all the temperatures and # calculate the corresponding thermal rates for i, Ti in enumerate(Tr): + progress = 100*i/NT print_info( - "Progress: " + str( int( 1e3*i/NT )/10 ) + "%", - "acropolis.nucl.NuclearReactor.get_thermal_rates", - eol="\r" + "Progress: {:.1f}%".format(progress), + "acropolis.nucl.NuclearReactor.get_pdi_grids", + eol="\r", verbose_level=1 ) rates_at_i = self._pdi_rates(Ti) # Loop over the different reactions @@ -452,7 +489,9 @@ def get_pdi_grids(self): end_time = time() print_info( - "Finished after " + str( int( (end_time - start_time)*10 )/10 ) + "s." + "Finished after {:.1f}s.".format(end_time - start_time), + "acropolis.nucl.NuclearReactor.get_pdi_grids", + verbose_level=1 ) # Go get some sun @@ -533,8 +572,9 @@ def get_matp(self, T): start_time = time() print_info( - "Running non-thermal nucleosynthesis.", - "acropolis.nucl.MatrixGenerator.get_matp" + "Calculating final transfer matrix.", + "acropolis.nucl.MatrixGenerator.get_matp", + verbose_level=1 ) nt = 0 @@ -543,10 +583,12 @@ def get_matp(self, T): # Columns: Loop over all relevant nuclei for nc in range(_nnuc): nt += 1 + + progress = 100*nt/_nnuc**2 print_info( - "Progress: " + str( int( 1e3*nt/_nnuc**2 )/10 ) + "%", + "Progress: {:.1f}%".format(progress), "acropolis.nucl.MatrixGenerator.get_matp", - eol="\r" + eol="\r", verbose_level=1 ) # Define the kernels for the integration in log-log space @@ -559,11 +601,45 @@ def get_matp(self, T): end_time = time() print_info( - "Finished after " + str( int( (end_time - start_time)*1e4 )/10 ) + "ms." 
+ "Finished after {:.1f}ms.".format( 1e3*(end_time - start_time) ), + "acropolis.nucl.MatrixGenerator.get_matp", + verbose_level=1 ) return (mpdi, mdcy) + def get_all_matp(self): + NT = len(self._sTemp) + + start_time = time() + print_info( + "Calculating transfer matrices for all temperatures.", + "acropolis.nucl.MatrixGenerator.get_all_matp", + verbose_level=2 + ) + + all_mpdi = np.zeros( (NT, NY, NY) ) + all_mdcy = np.zeros( (NT, NY, NY) ) + for i, temp in enumerate(self._sTemp): + progress = 100*i/NT + print_info( + "Progress: {:.1f}%".format(progress), + "acropolis.nucl.MatrixGenerator.get_all_matp", + eol="\r", verbose_level=2 + ) + + all_mpdi[i, :, :], all_mdcy[i, :, :] = self.get_matp(temp) + + end_time = time() + print_info( + "Finished after {:.1f}s.".format(end_time - start_time), + "acropolis.nucl.MatrixGenerator.get_all_matp", + verbose_level=2 + ) + + return self._sTemp, (all_mpdi, all_mdcy) + + def get_final_matp(self): return self.get_matp( self._sTmin ) diff --git a/acropolis/obs.py b/acropolis/obs.py new file mode 100644 index 0000000..2e99dc6 --- /dev/null +++ b/acropolis/obs.py @@ -0,0 +1,13 @@ +class AbundanceObservation(object): + + def __init__(self, mean, err): + self.mean = mean + self.err = err + + +pdg2020 = { + "Yp" : AbundanceObservation( 2.45e-1, 0.03e-1), + "DH" : AbundanceObservation(2.547e-5, 0.035e-5), + "HeD": AbundanceObservation( 8.3e-1, 1.5e-1), + "LiH": AbundanceObservation( 1.6e-10, 0.3e-10) +} diff --git a/acropolis/params.py b/acropolis/params.py index c4bba07..85b6291 100644 --- a/acropolis/params.py +++ b/acropolis/params.py @@ -4,6 +4,12 @@ from scipy.special import zeta +# ATTENTION !!!!!111elf +# Only parameters that specify a default value are +# meant to be changed by the user, i.e. everything +# under FLAGS and ALGORITHM-SPECIFIC PARAMETERS + + # FLAGS ############################################################# # If this flag is set to 'True', @@ -25,6 +31,16 @@ # Default: False debug = False +# If this flag is set to 'True', +# the universal spectrum is used +# for all points in parameter space +# ATTENTION: +# Change with caution and only if +# you know what you are doing. +# World destruction possible! +# Default: False +universal = False + # PHYSICAL CONSTANTS ################################################ @@ -37,6 +53,12 @@ # The electron mass squared (in MeV^2) me2 = me**2. +# The muon mass (in MeV) +mm = 105.658 + +# The muon mass squared (in MeV^2) +mm2 = mm**2. 
+ # The classical electron radius (in 1/MeV) re = alpha/me @@ -53,7 +75,7 @@ tau_m = 2.1969811e-6 # The neutron lifetime (in s) -tau_n = 8.802e2 +tau_n = 8.794e2 # pre PDG2020: 8.802e2 # The lifetime of tritium (in s) # T_(1/2) = 3.885e8 @@ -82,18 +104,33 @@ # ALGORITHM-SPECIFIC PARAMETERS ##################################### -# The number of elements/isotops that -# are considered in the calculation +# A specifier to descide which particles +# are considered in the Boltzmann equation +# for the electromagnetic cascade +# ATTENTION: Do not use a value that does +# not include all injected particle types +# 0: Photons +# 1: Photons, Electrons/Positrons +# 2: Photons, Electrons/Positrons, Anti-/Muons (not yet implemented) +# Default: 1 +FX = 1 + +# The number of nuclei that are +# considered in the Boltzmann equation +# for non-thermal nucleosynthesis +# Default: 9 NY = 9 # The number of mandatory columns in # 'cosmo_file.dat' +# Default: 5 NC = 5 -# Minimum energy for the different spectra (in MeV) +# The minimum energy for the different spectra (in MeV) # This value should not be larger than the minimal # nucleon-interaction threshold of 1.586627 MeV # (reaction_id: 15 in 'astro-ph/0211258') +# Default: 1.5 Emin = 1.5 # The value that is used for 'approximately' zero @@ -109,10 +146,10 @@ # Default: 200 Ephb_T_max = 200. -# The energy in units of EC at which to -# cutoff strongly suppressed spectra -# Default: 500. -E_EC_cut = 500. +# The maximal value of E/EC up to which the +# integration is performed when using the +# full spectrum with exponential suppression +E_EC_max = 10. # The number of points per decade for # the energy grid, which is used within diff --git a/acropolis/plots.py b/acropolis/plots.py new file mode 100644 index 0000000..203ae5f --- /dev/null +++ b/acropolis/plots.py @@ -0,0 +1,352 @@ +# math +from math import log10, floor, ceil +# numpy +import numpy as np +# matplotlib +import matplotlib.pyplot as plt +from matplotlib.ticker import FixedLocator, FixedFormatter +# warnings +import warnings + +# obs +from acropolis.obs import pdg2020 +# pprint +from acropolis.pprint import print_info +# params +from acropolis.params import NY + + +# Set the general style of the plot +plt.rc('text', usetex=True) +plt.rc('font', family='serif', size=14) + +# Include additional latex packages +plt.rcParams['text.latex.preamble'] = r'\usepackage{amsmath}\usepackage{mathpazo}' + + +# A global variable counting the +# number of created plots, in order +# to provide unique plot identifiers +_plot_number = 0 + + +# The number of sigmas at which a +# point is considered excluded +_95cl = 1.95996 # 95% C.L. + + +# DATA EXTRACTION ################################################### + +def _get_abundance(data, i): + # Add + 2 for the two parameters in the first two columns + i0 = i + 2 + + # Extract the different abundances... + mean, high, low = data[:,i0], data[:,i0+NY], data[:,i0+2*NY] + # ...and calculate an estimate for the error + diff = np.minimum( np.abs( mean - high ), np.abs( mean - low ) ) + + return mean, diff + + +def _get_deviations(data, obs): + # Extract and sum up neutrons and protons + mn, en = _get_abundance(data, 0) + mp, ep = _get_abundance(data, 1) + mH, eH = mn + mp, np.sqrt( en**2. + ep**2. ) + + # Extract and sum up lithium-7 and berylium-7 + mLi7, eLi7 = _get_abundance(data, 7) + mBe7, eBe7 = _get_abundance(data, 8) + m7, e7 = mLi7 + mBe7, np.sqrt( eLi7**2. + eBe7**2. 
) + + # Extract deuterium + mD , eD = _get_abundance(data, 2) + + # Extract and sum up tritium and helium-3 + mT , eT = _get_abundance(data, 3) + mHe3, eHe3 = _get_abundance(data, 4) + m3 , e3 = mT + mHe3, np.sqrt( eT**2. + eHe3**2. ) + + # Extract helium-4 + mHe4, eHe4 = _get_abundance(data, 5) + + # Calculate the actual deviations + with warnings.catch_warnings(record=True) as w: + # Calculate the relevant abundance ratios + mYp , eYp = 4.*mHe4, 4.*eHe4 + mDH , eDH = mD/mH, (mD/mH)*np.sqrt( (eD/mD)**2. + (eH/mH)**2. ) + mHeD, eHeD = m3/mD, (m3/mD)*np.sqrt( (e3/m3)**2. + (eD/mD)**2. ) + mLiH, eLiH = m7/mH, (m7/mH)*np.sqrt( (e7/m7)**2. + (eH/mH)**2. ) + + # Calculate the corresponding deviations + Yp = (mYp - obs[ 'Yp'].mean) / np.sqrt( obs[ 'Yp'].err**2. + eYp**2. ) + DH = (mDH - obs[ 'DH'].mean) / np.sqrt( obs[ 'DH'].err**2. + eDH**2. ) + HeD = (mHeD - obs['HeD'].mean) / np.sqrt( obs['HeD'].err**2. + eHeD**2. ) + LiH = (mLiH - obs['LiH'].mean) / np.sqrt( obs['LiH'].err**2. + eLiH**2. ) + + if len(w) == 1 and issubclass(w[0].category, RuntimeWarning): + # Nothing to do here + pass + + # Take care of potential NaNs + HeD[ mDH < obs['DH'].err ] = 10 + DH [ np.isnan(DH) ] = -10 + + # Return (without reshaping) + return Yp, DH, HeD, LiH + + +# LATEX INFORMATION ################################################# + + +_tex_data = { + # DecayModel + 'mphi' : (r'm_\phi' , r'\mathrm{MeV}' ), + 'tau' : (r'\tau_\phi' , r'\mathrm{s}' ), + 'temp0' : (r'T_0' , r'\mathrm{MeV}' ), + 'n0a' : (r'(n_\phi/n_\gamma)|_{T=T_0}' , r'' ), + # AnnihilationModel + 'braa' : (r'\text{BR}_{\gamma\gamma} = 1-\text{BR}_{e^+e^-}', r'' ), + 'mchi' : (r'm_\chi' , r'\mathrm{MeV}' ), + 'a' : (r'a' , r'\mathrm{cm^3/s}'), + 'b' : (r'b' , r'\mathrm{cm^3/s}'), + 'tempkd': (r'T_\text{kd}' , r'\mathrm{MeV}' ), +} + + +def add_tex_data(key, tex, unit): + global _tex_data + + _tex_data[key] = (tex, unit) + + +def tex_title(**kwargs): + global _tex_data + + if _tex_data is None: + return + + eof = r',\;' + + # Define a function to handle values + # that need to be printed in scientific + # notation + def _val_to_string(val): + if type(val) == float: + power = log10( val ) + if power != int(power): + # TODO + pass + + return r'10^' + str( int(power) ) + + return str( val ) + + title = r'' + for key in kwargs.keys(): + # Extract the numerical value + val = kwargs[ key ] + val_str = _val_to_string( val ) + # Extract the latex representation + # of the parameter and its unit + tex, unit = _tex_data[ key ] + # If the value is 0, do not print units + unit = '\,' + unit if val != 0 else r'' + + title += tex + '=' + val_str + unit + eof + + if title.endswith(eof): + title = title[:-len(eof)] + + return r'$' + title + r'$' + + +def tex_label(key): + if key not in _tex_data.keys(): + return '' + + tex, unit = _tex_data[ key ] + + if unit != r'': + unit = r'\;[' + unit + r']' + + return r'$' + tex + unit + r'$' + + +def tex_labels(key_x, key_y): + return ( tex_label(key_x), tex_label(key_y) ) + + +# FIGURE HANDLING ################################################### + +def _init_figure(): + fig = plt.figure(figsize=(4.8, 4.4), dpi=150, edgecolor='white') + ax = fig.add_subplot(1, 1, 1) + + ax.tick_params(axis='both', which='both', labelsize=11, direction='in', width=0.5) + + ax.xaxis.set_ticks_position('both') + ax.yaxis.set_ticks_position('both') + for axis in ['top','bottom','left','right']: + ax.spines[axis].set_linewidth(0.5) + + return fig, ax + + +def _set_tick_labels(ax, x, y): + nint = lambda val: ceil(val) if val >= 0 else floor(val) + 
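+    # The helper above rounds away from zero (ceil for non-negative values,
+    # floor for negative ones); below it is applied to log10 of the data
+    # limits to pick the decades that carry the major ticks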
+ xmin, xmax = np.min(x), np.max(x) + ymin, ymax = np.min(y), np.max(y) + + xmin_log = nint( log10(xmin) ) + xmax_log = nint( log10(xmax) ) + ymin_log = nint( log10(ymin) ) + ymax_log = nint( log10(ymax) ) + + nx = abs( xmax_log - xmin_log ) + 1 + ny = abs( ymax_log - ymin_log ) + 1 + + # Set the ticks on the x-axis + xticks_major = np.linspace(xmin_log, xmax_log, nx) + xticks_minor = [ log10(i*10**j) for i in range(1, 10) for j in xticks_major ] + xlabels = [ r'$10^{' + f'{int(i)}' + '}$' for i in xticks_major ] + + xticks_major_locator = FixedLocator(xticks_major) + xticks_minor_locator = FixedLocator(xticks_minor) + xlabels_formatter = FixedFormatter(xlabels) + + ax.xaxis.set_major_locator(xticks_major_locator) + ax.xaxis.set_minor_locator(xticks_minor_locator) + ax.xaxis.set_major_formatter(xlabels_formatter) + ax.set_xlim(xmin_log, xmax_log) + + # Set the ticks on the y-axis + yticks_major = np.linspace(ymin_log, ymax_log, ny) + yticks_minor = [ log10(i*10**j) for i in range(1, 10) for j in yticks_major ] + ylabels = [ r'$10^{' + f'{int(i)}' + '}$' for i in yticks_major ] + + yticks_major_locator = FixedLocator(yticks_major) + yticks_minor_locator = FixedLocator(yticks_minor) + ylabels_formatter = FixedFormatter(ylabels) + + ax.yaxis.set_major_locator(yticks_major_locator) + ax.yaxis.set_minor_locator(yticks_minor_locator) + ax.yaxis.set_major_formatter(ylabels_formatter) + ax.set_ylim(ymin_log, ymax_log) + + +def save_figure(output_file=None): + global _plot_number + + # If no name for the output file is given + # simply enumerate the different plots + if output_file is None: + output_file = 'acropolis_plot_{}.pdf'.format(_plot_number) + _plot_number += 1 + + plt.savefig(output_file) + + print_info( + "Figure has been saved as '{}'".format(output_file), + "acropolis.plot.save_figure" + ) + + +def plot_scan_results(data, output_file=None, title='', labels=('', ''), save_pdf=True, show_fig=False, obs=pdg2020): + # If data is a filename, load the data first + if type(data) == str: + data = np.loadtxt(data) + + # Get the set of input parameters... + x, y = data[:,0], data[:,1] + + # ...and determine the shape of the data + N = len(x) + Ny = (x == x[0]).sum() + Nx = N//Ny + + shape = (Nx, Ny) + + # Calculate the abundance deviations + Yp, DH, HeD, LiH = _get_deviations(data, obs) + + # Reshape the input data... + x = x.reshape(shape) + y = y.reshape(shape) + # ...and the deviation arrays + Yp = Yp.reshape(shape) + DH = DH.reshape(shape) + HeD = HeD.reshape(shape) + LiH = LiH.reshape(shape) + + # Extract the overall exclusion limit + max = np.maximum( np.abs(DH), np.abs(Yp) ) + max = np.maximum( max, HeD ) + + # Init the figure and... 
+ fig, ax = _init_figure() + # ...set the tick labels + _set_tick_labels(ax, x, y) + + # Plot the actual data + cut = 1e10 + # Deuterium (filled) + ax.contourf(np.log10(x), np.log10(y), DH, + levels=[-cut, -_95cl, _95cl, cut], + colors=['0.6','white', 'tomato'], + alpha=0.2 + ) + # Helium-4 (filled) + ax.contourf(np.log10(x), np.log10(y), Yp, + levels=[-cut, -_95cl, _95cl, cut], + colors=['dodgerblue','white', 'lightcoral'], + alpha=0.2 + ) + # Helium-3 (filled) + ax.contourf(np.log10(x), np.log10(y), HeD, + levels=[_95cl, cut], # Only use He3/D as an upper limit + colors=['mediumseagreen'], + alpha=0.2 + ) + + # Deuterium low (line) + ax.contour(np.log10(x), np.log10(y), DH, + levels=[-_95cl], colors='0.6', linestyles='-' + ) + # Deuterium high (line) + ax.contour(np.log10(x), np.log10(y), DH, + levels=[_95cl], colors='tomato', linestyles='-' + ) + # Helium-4 low (line) + ax.contour(np.log10(x), np.log10(y), Yp, + levels=[-_95cl], colors='dodgerblue', linestyles='-' + ) + # Helium-3 high (line) + ax.contour(np.log10(x), np.log10(y), HeD, + levels=[_95cl], colors='mediumseagreen', linestyles='-' + ) + # Overall high/low (line) + ax.contour(np.log10(x), np.log10(y), max, + levels=[_95cl], colors='black', linestyles='-' + ) + + # Set the title... + ax.set_title( title, fontsize=11 ) + # ...and the axis labels + ax.set_xlabel( labels[0] ) + ax.set_ylabel( labels[1] ) + + # Set tight layout + plt.tight_layout() + + if save_pdf == True: + save_figure(output_file) + + if show_fig == True: + plt.show() + + # Return figure and axis in case + # further manipulation is desired + return fig, ax diff --git a/acropolis/pprint.py b/acropolis/pprint.py index 155bc83..81f7433 100644 --- a/acropolis/pprint.py +++ b/acropolis/pprint.py @@ -3,6 +3,25 @@ # params from acropolis.params import verbose, debug +# info +from acropolis.info import version, dev_version, url + + +_max_verbose_level = 1 + + +def print_version(): + if verbose == True: + # Differentiate between stable and dev version + version_str = "" + # Stable version + if version == dev_version: + version_str = "v{}".format(version) + # Development version + else: + version_str = "v{} [dev]".format(dev_version) + + stdout.write( "\x1B[38;5;209mACROPOLIS {} ({})\x1B[0m\n\n".format(version_str, url) ) def print_Yf(Yf, header=["mean", "high", "low"]): @@ -60,10 +79,26 @@ def print_warning(warning, loc="", eol="\n"): stdout.write("\x1B[1;33mWARNING\x1B[0m: " + warning + locf + eol) -def print_info(info, loc="", eol="\n"): +def print_info(info, loc="", eol="\n", verbose_level=None): + global _max_verbose_level + + if verbose_level is None: + verbose_level = _max_verbose_level + + _max_verbose_level = max( _max_verbose_level, verbose_level ) + locf = "" if debug == True and loc != "": locf = " \x1B[1;35m(" + loc + ")\x1B[0m" - if verbose: + if verbose and verbose_level >= _max_verbose_level: stdout.write("\x1B[1;32mINFO \x1B[0m: " + info + locf + eol) + + +def set_max_verbose_level(max_verbose_level=None): + global _max_verbose_level + + if max_verbose_level is None: + max_verbose_level = 1 + + _max_verbose_level = max_verbose_level diff --git a/acropolis/scans.py b/acropolis/scans.py index 7f8fbce..ce91f48 100644 --- a/acropolis/scans.py +++ b/acropolis/scans.py @@ -1,18 +1,27 @@ # numpy import numpy as np +# time +from time import time, sleep # itertools from itertools import product # multiprocessing from multiprocessing import Pool, cpu_count # pprint -from acropolis.pprint import print_error +from acropolis.pprint import print_info, print_error # 
params from acropolis.params import NY # models from acropolis.models import AbstractModel +class _Batch(object): + + def __init__(self, length, is_fast): + self.length = length + self.is_fast = is_fast + + class ScanParameter(object): def __init__(self, ivalue, fvalue, num, spacing="log", fast=False): @@ -38,20 +47,26 @@ class BufferedScanner(object): def __init__(self, model, **kwargs): # Store the requested model - # self._sWrapper(...) creates - # a new instance of this class + # self._sModel(...) afterwards creates + # a new instance of the requested model if not issubclass(model, AbstractModel): - print_error(str(model) + " is not a subclass of 'AbstractModel'") + print_error( + model.__name__ + " is not a subclass of AbstractModel", + "acropolis.scans.BufferedScanner.__init__" + ) self._sModel = model - # Define the various sets + ####################################################################### + + # Initialize the various sets self._sFixed = {} # Fixed parameter self._sScanp = {} # Scan parameters... - self._sFastf = {} # ...that allow for fast scanning + self._sFastf = {} # ...w/o fast scanning - # Define the number of scan parameters - self._sNp = 0 + # Initialize the number of scan parameters... + self._sNP = 0 # (all) + self._sNP_fast = 0 # (only fast) # Parse the keyword arguments and build up the # sets 'self._sFixed' and 'self._sScanp' @@ -66,15 +81,22 @@ def __init__(self, model, **kwargs): # In case there is a 'fast' parameter, this whould be # one of the 'non-fast' parameters # - # Sort the keys in order for the fast parameter - # to be at position 0 + # Sort the keys in order for the fast parameters + # to be at he beginning of the array list.sort( self._sScanp_id, key=lambda id: self._sFastf[id], reverse=True ) # Choose the last parameter, which in any case is not the # 'fast' parameter and therefore can be calculated in parallel - self._sPP_id = self._sScanp_id[-1] + self._sId_pp = self._sScanp_id[-1] - # Determine the 'fast' parameter - self._sFP_id = self._sScanp_id[ 0] + ####################################################################### + + # Extract the dimension of parallel/sequential jobs + self._sDp, self._sDs = 0, 0 + for id in self._sScanp_id: + if id == self._sId_pp: + self._sDp += len( self._sScanp[id] ) + else: + self._sDs += len( self._sScanp[id] ) def _parse_arguments(self, **kwargs): @@ -87,98 +109,157 @@ def _parse_arguments(self, **kwargs): self._sFixed[key] = float(param) # Extract the scan parameters elif isinstance(param, ScanParameter): - self._sNp += 1 + self._sNP += 1 - # Save the 'is_fast' status of all parameters - self._sFastf[key] = param.is_fast() # Save the relevant range of all paremeters self._sScanp[key] = param.get_range() + # Save the 'is_fast' status of all parameters + self._sFastf[key] = param.is_fast() else: print_error( "All parameters must either be 'int', 'float' or an instance of 'ScanParameter'", - "BufferedScanner._parse_arguments" + "acropolis.scans.BufferedScanner._parse_arguments" ) - if list( self._sFastf.values() ).count(True) > 1: + # Get the number of 'fast' parameters (Np_fast <= Np - 1) + self._sNP_fast = list( self._sFastf.values() ).count(True) + + # ERRORS for not-yet-implemented features (TODO) ################################ + if self._sNP_fast > 1 or self._sNP != 2: print_error( - "Using more than one 'fast' parameter is not yet supported", - "BufferedScanner._parse_arguments" + "Currently only exactly 2 scan parameters with <= 1 fast parameter are supported!", + 
"acropolis.scans.BufferedScanner._parse_arguments" ) + # TODO!!! + def _build_batches(self): + # Generate all possible parameter combinations, thereby + # NOT! including the parameter used for the parallelisation + scanp_ls = product( *[self._sScanp[id] for id in self._sScanp_id[:-1]] ) + # Right now: One sequential parameter, which is either fast or not + scanp_bt = [ _Batch(self._sDs, self._sNP_fast != 0), ] + + return scanp_ls, scanp_bt + + def rescale_matp_buffer(self, buffer, factor): return (factor*buffer[0], buffer[1]) def _perform_non_parallel_scan(self, pp): - # Generate all possible parameter combinations, thereby - # NOT! including the parameter used for the parallelisation - scanp_ls = product( *[self._sScanp[id] for id in self._sScanp_id[:-1]] ) + # Build the relevant batches + scanp_ls, scanp_bt = self._build_batches() - # TODO: Extend to more than two parameters - dx = len( self._sScanp[self._sFP_id] ) - dy = len( self._sScanp_id ) + 3*NY # + 1 + # Determine the dimensions of the 'result grid' + dx = self._sDs # rows + dy = self._sNP + 3*NY # columns results = np.zeros( ( dx, dy ) ) - matpb, matpf = None, False - print(pp) + # Initialize the buffer + matpb = None + + nb, ib = 0, 0 # Loop over the non-parallel parameter(s) - for count, scanp in enumerate(scanp_ls): + for i, scanp in enumerate(scanp_ls): + # Store the current batch + batch = scanp_bt[nb] + + # Check if a reset is required + reset_required = (ib == 0) + # Define the set that contains only scan parameters scanp_set = dict( zip(self._sScanp_id, scanp) ) - scanp_set.update( {self._sPP_id: pp} ) - + scanp_set.update( {self._sId_pp: pp} ) # Define the set that contains all parameters fullp_set = scanp_set.copy() fullp_set.update( self._sFixed ) - # Initialize the model wrapper of choice + # Initialize the model of choice model = self._sModel(**fullp_set) scanp_set_id_0 = scanp_set[self._sScanp_id[0]] # Rescale the rates with the 'fast' parameter - # TODO: Only do this if a fast parameter exists - if count != 0 and matpf == True: - factor = scanp_set_id_0/fastp - model.set_matp_buffer( self.rescale_matp_buffer(matpb, factor) ) + # but only if the current parameter is 'fast' + if batch.is_fast and (not reset_required): + if matpb is not None: + # matpb might still be None if E0 < Emin + # save, since parameters determining the + # injection energy, should never be fast + factor = scanp_set_id_0/fastp + model.set_matp_buffer( self.rescale_matp_buffer(matpb, factor) ) ############################################################## Yb = model.run_disintegration() ############################################################## - # Rescale the nuclear-rate buffer existent - if count == 0: + # Reset the buffer/rescaling + if batch.is_fast and reset_required: matpb = model.get_matp_buffer() - matpf = matpb is not None - # matpb might still be None if E0 < Emin fastp = scanp_set_id_0 # For the output, use the following format - # 1. The 'parallel' parameter - # 2. All 'non fast/parallel' parameters - # 3. The 'fast' parameter + # 1. The 'non fast' parameters + # 3. 
The 'fast' parameters sortp_ls = list( zip( scanp_set.keys(), scanp_set.values() ) ) - list.sort(sortp_ls, key=lambda el: self._sFastf[ el[0] ]) + list.sort(sortp_ls, key=lambda el: self._sFastf[ el[0] ]) # False...True sortp_ls = [ el[1] for el in sortp_ls ] - results[count] = [*sortp_ls, *Yb.transpose().reshape(Yb.size)] + results[i] = [*sortp_ls, *Yb.transpose().reshape(Yb.size)] + + # Update the batch index + if ib == batch.length - 1: # next batch + ib = 0 + nb += 1 + else: + ib += 1 return results def perform_scan(self, cores=1): num_cpus = cpu_count() if cores == -1 else cores + + start_time = time() + print_info( + "Running scan for {} on {} cores.".format(self._sModel.__name__, num_cpus), + "acropolis.scans.BufferedScanner.perform_scan", + verbose_level=3 + ) + with Pool(processes=num_cpus) as pool: # Loop over all possible combinations, by... # ...1. looping over the 'parallel' parameter (map) # ...2. looping over all parameter combinations, # thereby exclusing the 'parallel' parameter (perform_non_parallel_scan) - parallel_results = pool.map(self._perform_non_parallel_scan, self._sScanp[self._sPP_id], 1) + async_results = pool.map_async( + self._perform_non_parallel_scan, self._sScanp[self._sId_pp], 1 + ) + + progress = 0 + while ( progress < 100 ) or ( not async_results.ready() ): + progress = 100*( self._sDp - async_results._number_left )/self._sDp + print_info( + "Progress: {:.1f}%".format(progress), + "acropolis.scans.BufferedScanner.perform_scan", + eol="\r", verbose_level=3 + ) + + sleep(1) + + parallel_results = async_results.get() pool.terminate() parallel_results = np.array(parallel_results) old_shape = parallel_results.shape parallel_results.shape = (old_shape[0]*old_shape[1], len( self._sScanp_id ) + 3*NY) # + 1) + end_time = time() + print_info( + "Finished after {:.1f}min.".format( (end_time - start_time)/60 ), + "acropolis.scans.BufferedScanner.perform_scan", + verbose_level=3 + ) + return parallel_results diff --git a/acropolis/tmp/eloss.py b/acropolis/tmp/eloss.py new file mode 100644 index 0000000..e9c4958 --- /dev/null +++ b/acropolis/tmp/eloss.py @@ -0,0 +1,174 @@ +# math +from math import log, exp, sqrt +# numpy +import numpy as np +# numba +import numba as nb +# scipy +from scipy.integrate import quad + +# params +from acropolis.params import pi, pi2, zeta3 +from acropolis.params import alpha, me, me2 +from acropolis.params import eps, Ephb_T_max + + +@nb.jit(cache=True) +def _JIT_phi(x): + a = [ + 0.8048, + 0.1459, + 1.1370e-3, + -3.8790e-6 + ] + b = [ + -86.07, + 50.96, + -14.45, + 8./3., + ] + c = [ + 2.910, + 78.35, + 1.837e3, + ] + + if x <= 25: + asum = 0 + for i in range(4): asum += a[i]*( (x-2.)**(i+1) ) + + return (pi/12.)*(x-2.)**4./( 1. + asum ) + + bsum, csum = 0, 0 + for j in range(4): bsum += b[j]*(log(x)**j) + for k in range(3): csum += c[k]/( x**(k+1) ) + + return x*bsum/( 1. - csum ) + + +@nb.jit(cache=True) +def _JIT_eloss_bethe_heitler(logx, E, T, m): + x = np.exp(logx) # kappa + + # Calculate gamma + ga = E/m + # Calculate nu (https://journals.aps.org/prd/pdf/10.1103/PhysRevD.1.1596) + nu = me/(2*ga*T) + + # log + return x * _JIT_phi(x)/( np.exp(nu*x) - 1. 
) + + +class InteractingParticle(object): + + def __init__(self, m, q=1, a=0): + # The mass of the particle + self._sM = m # in MeV + # The charge of the particle + self._sQ = q # in units of e + # The anamolous mangentic moment + self._sA = a + + + # TODO: Interface correctly with ACROPOLIS + def _ne(self, T): + # The number density of photons + na = 2.*zeta3*(T**3.)/pi2 + + if T >= me: + return 1.5*na + + if me > T >= me/26.: + return 4.*exp(-me/T)*( me*T/(2.*pi) )**1.5 + + # The baryon-to-photon ratio + eta = 6.137e-10 + # The abundance of helium-4 + Y = 0.2474 + + return (1. - Y/2.)*eta*na + + + # CHARGED PARTICLES ####################################################### + + # TODO + def _dEdt_coulomb(self, E, T): + # The plasma frequency + wp2 = 4.*pi*self._ne(T)*alpha/me + wp = sqrt(wp2) + + # The gamma factor of the charged particle + ga = E/self._sM + + # The velocity of the charged particle + v = sqrt(1. - 1./ga**2.) if ga > 1 else 0 + + if v < sqrt( 2*T/me ): + # TODO + return 0. + + Z = self._sQ + # The 'b-factor' + b = max( 1, Z*alpha/v )/( ga*me*v ) + + return -(Z**2.)*alpha*wp2*( log( 0.76*v/(wp*b) ) + v**2./2. )/v + + + def _dEdt_thompson(self, E, T): + # The gamma factor of the charged particle + ga = E/self._sM + + Z = self._sQ + # This holds true also for non-relativistic + # particles, in which case gamma^2-1 = 0 + return -32.*(pi**3.)*(alpha**2.)*(ga**2. - 1)*(T**4.)*(Z**4.)/(135.*self._sM**2.) + + + def _dEdt_bethe_heitler(self, E, T): + # The gamma factor of the charged particle + ga = E/self._sM + + # The velocity of the charged particle + v = sqrt(1. - 1./ga**2.) if ga > 1 else 0 + + Z = self._sQ + # Define the prefactor + pref = (alpha**3.)*(Z**2.)*me2*v/( 4.*(ga**2.)*pi2 ) + + # Calculate the appropriate integration limits + Emax = Ephb_T_max*T + xmax = 2*ga*Emax/me + # --> + xmin_log, xmax_log = log(2), log(xmax) + + # Perform the integration + I = quad(_JIT_eloss_bethe_heitler, xmin_log, xmax_log, + epsabs=0, epsrel=eps, args=(E, T, self._sM)) + + return -pref*I[0] + + + def _dEdt_charged(self, E, T): + return self._dEdt_thompson(E, T) + self._dEdt_bethe_heitler(E, T) + self._dEdt_coulomb(E, T) + + + # NEUTRAL PARTICLES ####################################################### + + def _dEdt_magnetic_moment(self, E, T): + return 0. + + + def _dEdt_neutral(self, E, T): + return self._dEdt_magnetic_moment(E, T) + + + # COMBINED ################################################################ + + def dEdt(self, E, T): + if E <= self._sM: + return 0. + + if self._sQ == 0: + return self._dEdt_neutral(E, T) + + return self._dEdt_charged(E, T) diff --git a/acropolis/tmp/lhe.py b/acropolis/tmp/lhe.py new file mode 100644 index 0000000..8d1c8b6 --- /dev/null +++ b/acropolis/tmp/lhe.py @@ -0,0 +1 @@ + diff --git a/acropolis/utils.py b/acropolis/utils.py index c943252..d9a5533 100644 --- a/acropolis/utils.py +++ b/acropolis/utils.py @@ -17,6 +17,10 @@ def __init__(self, x_grid, y_grid, base=np.e, fill_value=None): self._sXminLog = self._sXLog[ 0] self._sXmaxLog = self._sXLog[-1] + if self._sXmaxLog <= self._sXminLog: + raise ValueError( + "The values in x_grid need to be in ascending order." 
+ ) self._sN = len(self._sXLog) @@ -36,6 +40,9 @@ def _perform_interp(self, x): ix = int( ( x_log - self._sXminLog )*( self._sN - 1 )/( self._sXmaxLog - self._sXminLog ) ) + # Handle the case for which ix+1 is out-of-bounds + if ix == self._sN - 1: ix -= 1 + x1_log, x2_log = self._sXLog[ix], self._sXLog[ix+1] y1_log, y2_log = self._sYLog[ix], self._sYLog[ix+1] @@ -52,21 +59,23 @@ def __call__(self, x): return self._sCache[x] +# Cummulative numerical Simpson integration def cumsimp(x_grid, y_grid): n = len(x_grid) - delta_z = log(x_grid[-1] / x_grid[0])/(n-1) - g_grid = x_grid*y_grid - integral = np.zeros(n) + delta_z = log( x_grid[-1]/x_grid[0] )/( n-1 ) + g_grid = x_grid*y_grid + + i_grid = np.zeros( n ) last_even_int = 0. - for i in range(1, int(n/2 + 1)): + for i in range(1, n//2 + 1): ie = 2 * i io = 2 * i - 1 - integral[io] = last_even_int + 0.5 * delta_z * (g_grid[io-1] + g_grid[io]) + i_grid[io] = last_even_int + 0.5 * delta_z * (g_grid[io-1] + g_grid[io]) if ie < n: - integral[ie] = last_even_int + delta_z * (g_grid[ie-2] + 4.*g_grid[ie-1] + g_grid[ie])/3. - last_even_int = integral[ie] + i_grid[ie] = last_even_int + delta_z * (g_grid[ie-2] + 4.*g_grid[ie-1] + g_grid[ie])/3. + last_even_int = i_grid[ie] - return integral + return i_grid diff --git a/annihilation b/annihilation index b25252c..9eb9a75 100755 --- a/annihilation +++ b/annihilation @@ -5,10 +5,13 @@ import sys # pprint from acropolis.pprint import print_Yf -from acropolis.pprint import print_error +from acropolis.pprint import print_error, print_version # models from acropolis.models import AnnihilationModel +# Print version information +print_version() + # Extact the number of command line arguments... N = len(sys.argv) diff --git a/decay b/decay index 2cdbb17..98eed2b 100755 --- a/decay +++ b/decay @@ -5,10 +5,13 @@ import sys # pprint from acropolis.pprint import print_Yf -from acropolis.pprint import print_error +from acropolis.pprint import print_error, print_version # models from acropolis.models import DecayModel +# Print version information +print_version() + # Extact the number of command line arguments... N = len(sys.argv) diff --git a/download_db b/download_db index 63ccd55..d28c077 100755 --- a/download_db +++ b/download_db @@ -1,7 +1,9 @@ #! /usr/bin/env python3 # pprint -from acropolis.pprint import print_info +from acropolis.pprint import print_info, print_version + +print_version() print_info( "Since v1.1 this operation is no longer needed. So you can simply proceed from here!\n" + diff --git a/examples/scan_decay_mphi_aa b/examples/scan_decay_mphi_aa new file mode 100755 index 0000000..99a6fa8 --- /dev/null +++ b/examples/scan_decay_mphi_aa @@ -0,0 +1,42 @@ +#! /usr/bin/env python3 + +# sys (needed, since the script is in a different directory) +import sys; sys.path.append('..') +# numpy +import numpy as np + +# models +from acropolis.models import DecayModel +# scans +from acropolis.scans import ScanParameter, BufferedScanner +# pprint +from acropolis.pprint import print_version, print_info + +# Print version information +print_version() + +# Set/Extract the mass of the mediator +mphi = float( sys.argv[1] ) if len( sys.argv ) != 1 else 50 + +# Define the number of points +N = 200 + +# Perform the scan... +scan_result = BufferedScanner( DecayModel, + mphi = mphi, + tau = ScanParameter( 3, 10, N), + temp0 = 10., + n0a = ScanParameter(-15, -3, N, fast=True), + bree = 0., + braa = 1. + ).perform_scan(cores=-1) + +# ...specify the output-file... 
+results_file = 'decay_mphi_{:.0e}MeV_aa.dat'.format(mphi) +# ...and save the results +np.savetxt(results_file, scan_result) + +# Finally, print the output-file location +print_info( + "Results have been written to '{}'.".format(results_file) +) diff --git a/examples/scan_decay_tau_aa b/examples/scan_decay_tau_aa new file mode 100755 index 0000000..841c6eb --- /dev/null +++ b/examples/scan_decay_tau_aa @@ -0,0 +1,42 @@ +#! /usr/bin/env python3 + +# sys (needed, since the script is in a different directory) +import sys; sys.path.append('..') +# numpy +import numpy as np + +# models +from acropolis.models import DecayModel +# scans +from acropolis.scans import ScanParameter, BufferedScanner +# pprint +from acropolis.pprint import print_version, print_info + +# Print version information +print_version() + +# Set/Extract the lifetime of the mediator +tau = float( sys.argv[1] ) if len( sys.argv ) != 1 else 1e7 + +# Define the number of points +N = 200 + +# Perform the scan... +scan_result = BufferedScanner( DecayModel, + mphi = ScanParameter( 0, 3, N), + tau = tau, + temp0 = 10., + n0a = ScanParameter(-14, -3, N, fast=True), + bree = 0., + braa = 1. + ).perform_scan(cores=-1) + +# ...specify the output-file... +results_file = 'decay_tau_{:.0e}s_aa.dat'.format(tau) +# ...and save the results +np.savetxt(results_file, scan_result) + +# Finally, print the output-file location +print_info( + "Results have been written to '{}'.".format(results_file) +) diff --git a/examples/scan_pwave_ee b/examples/scan_pwave_ee new file mode 100755 index 0000000..7a2df27 --- /dev/null +++ b/examples/scan_pwave_ee @@ -0,0 +1,42 @@ +#! /usr/bin/env python3 + +# sys (needed, since the script is in a different directory) +import sys; sys.path.append('..') +# numpy +import numpy as np + +# models +from acropolis.models import AnnihilationModel +# scans +from acropolis.scans import ScanParameter, BufferedScanner +# pprint +from acropolis.pprint import print_version, print_info + +# Print version information +print_version() + +# Set/Extract the kinetic decoupling temperature +tempkd = float( sys.argv[1] ) if len( sys.argv ) != 1 else 1e0 + +# Define the number of points +N = 200 + +# Perform the scan... +scan_result = BufferedScanner( AnnihilationModel, + mchi = ScanParameter( 0, 3, N), + a = 0., + b = ScanParameter(-21, -10, N, fast=True), + tempkd = tempkd, + bree = 1., + braa = 0. + ).perform_scan(cores=-1) + +# ...specify the output-file... +results_file = 'annih_pwave_Tkd_{:.0e}MeV_ee.dat'.format(tempkd) +# ...and save the results +np.savetxt(results_file, scan_result) + +# Finally, print the output-file location +print_info( + "Results have been written to '{}'.".format(results_file) +) diff --git a/examples/scan_swave_ee b/examples/scan_swave_ee new file mode 100755 index 0000000..59c2625 --- /dev/null +++ b/examples/scan_swave_ee @@ -0,0 +1,39 @@ +#! /usr/bin/env python3 + +# sys (needed, since the script is in a different directory) +import sys; sys.path.append('..') +# numpy +import numpy as np + +# models +from acropolis.models import AnnihilationModel +# scans +from acropolis.scans import ScanParameter, BufferedScanner +# pprint +from acropolis.pprint import print_version, print_info + +# Print version information +print_version() + +# Define the number of points +N = 200 + +# Perform the scan... +scan_result = BufferedScanner( AnnihilationModel, + mchi = ScanParameter( 0, 3, N), + a = ScanParameter(-27, -16, N, fast=True), + b = 0., + tempkd = 0., + bree = 1., + braa = 0. 
+ ).perform_scan(cores=-1) + +# ...specify the output-file... +results_file = 'annih_swave_ee.dat' +# ...and save the results +np.savetxt(results_file, scan_result) + +# Finally, print the output-file location +print_info( + "Results have been written to '{}'.".format(results_file) +) diff --git a/hepforge/include/sidebar b/hepforge/include/sidebar index c1df0e1..107ca2a 100755 --- a/hepforge/include/sidebar +++ b/hepforge/include/sidebar @@ -1,10 +1,10 @@
- ACROPOLIS is a generic framework to calculate the evolution of the light-element abundances due to photodisintegration reactions induced by different BSM particles. With ACROPOLIS, the widely discussed cases of decays as well as annihilations can be run without prior coding knowledge within example programs. However, its modular structure also makes it possible to easily implement other BSM physics scenarios. ACROPOLIS is free software licensed under GPL3 and the full source code of the project is available at GitHub, but it can also be installed from PyPI or by downloading one of the different .tar.gz archives from the downloads sections of this website.
+ ACROPOLIS is a generic framework to calculate the evolution of the light-element abundances due to photodisintegration reactions induced by different BSM particles. With ACROPOLIS, the widely discussed cases of decays as well as annihilations can be run without prior coding knowledge within example programs. However, its modular structure also makes it possible to easily implement other BSM physics scenarios. ACROPOLIS is free software licensed under GPL3 and the full source code of the project is available at GitHub, but it can also be installed from PyPI or by downloading one of the different .tar.gz archives from the downloads sections of this website.
 The easiest way to install ACROPOLIS is to fetch it from PyPI via pip, i.e. by running the command
python3 -m pip install ACROPOLIS
- The most recent version of the manual can be found here. In this document, you also find the most recent installation instructions.
+ The most recent version of the manual can be found here. In this document, you will also find the most recent installation instructions.
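The sidebar text in the hunk above advertises that the decay and annihilation cases can be run without prior coding knowledge. As a programmatic counterpart, here is a minimal sketch of driving DecayModel directly from Python, pieced together from interfaces visible elsewhere in this diff (print_version, print_Yf, run_disintegration, and the keyword arguments used in examples/scan_decay_mphi_aa); the parameter values below are placeholders, not recommendations.

```
# Minimal sketch; parameter values are placeholders taken from the ranges
# scanned in examples/scan_decay_mphi_aa
from acropolis.models import DecayModel
from acropolis.pprint import print_version, print_Yf

# Print the version banner (new in this diff; only shown if verbose is enabled)
print_version()

# The keyword arguments mirror those passed through BufferedScanner
# in examples/scan_decay_mphi_aa
model = DecayModel(mphi=50., tau=1e7, temp0=10., n0a=1e-10, bree=0., braa=1.)

# Run the photodisintegration calculation and print the final abundances
Yf = model.run_disintegration()
print_Yf(Yf)
```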
@@ -30,6 +30,21 @@
+
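The example scan scripts above store their results with np.savetxt, and plot_scan_results accepts a plain filename, so the two pieces can be chained directly. Below is a minimal post-processing sketch, assuming the scan from examples/scan_swave_ee has already produced annih_swave_ee.dat in the working directory and that the new plotting helpers are importable as acropolis.plots; the title and the axis labels are placeholders.

```
# Minimal post-processing sketch (file name as written by examples/scan_swave_ee;
# title and axis labels are placeholders)
from acropolis.plots import plot_scan_results

fig, ax = plot_scan_results(
    'annih_swave_ee.dat',              # file written via np.savetxt by the example script
    output_file='annih_swave_ee.pdf',  # if omitted, an enumerated default name is used
    title='s-wave annihilation into electron-positron pairs',
    labels=(r'$m_\chi\,[\mathrm{MeV}]$', r'$a\,[\mathrm{cm^3/s}]$')
)
```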