Variables #700
Conversation
The new classes introduce name conflicts with existing classes.
Still trying to answer @simoneliuzzo's questions concerning parameters:
This is a major point. I couldn't find any solution such that a numpy array could turn itself into a ParamArray. But since I agree with Simone that it is really frustrating, I am considering another solution: each array attribute of an Element would initially be set as a ParamArray at element creation. The performance penalty should be very small; I'll prepare this modification.
Directly assigning a parameter to an array item is now allowed:

```python
p5 = Param(-0.3, name='QD strength')
qd1.PolynomB[1] = p5
```
(force-pushed from c7abf94 to 020d8f8)
Documentation largely improved, please check again…
Hello @lfarv, I would encourage you to do your own timing tests; however, when I ran my standard suite of timing tests, this branch came out noticeably slower than master (full timing data available for reference).
@T-Nicholls: you raise a very important point. I tried to go deeper into the details of the timings, down to the root of the problem, which is the modified access to the attributes of elements. There are 3 different test cases. All the tests are run on a lattice without parameters, and I give all the results as ratios with respect to the master branch.

linopt6: I first tried to reproduce @T-Nicholls's results by measuring the time spent in linopt6. The ratio is between 1.54 and 3.06, similar to @T-Nicholls's results. The last modification (array attributes) is indeed worse, but not by much. Remark: refpts=All is better because it only adds python computations, without accessing lattice elements.

Tracking: I tried pure tracking, 10000 turns with a non-zero test particle. As expected, there is no effect on tracking, because the attribute values are initially stored in C structures, so the overhead over 10000 turns is negligible.

Attribute access: finally, I measured a simple access to an attribute, scalar or array. On the 1st version, array access looks better, probably because indexing PolynomB itself takes time in both branches, which dilutes the relative overhead.

Conclusion: the overhead comes from python attribute access; it is negligible for tracking but significant for optics functions like linopt6, which access element attributes many times.
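For reference, a minimal sketch of the kind of attribute-access micro-benchmark described above, assuming a standard pyAT quadrupole (the element and the iteration count are illustrative):

```python
import timeit
import at

# Illustrative element; at.Quadrupole is assumed to be exposed at the package level
qd = at.Quadrupole('QD', 0.5, k=-1.2)

# Simple access to a scalar attribute vs. one item of an array attribute
t_scalar = timeit.timeit(lambda: qd.Length, number=1_000_000)
t_array = timeit.timeit(lambda: qd.PolynomB[1], number=1_000_000)
print(f"scalar: {t_scalar:.3f} s, array item: {t_array:.3f} s (per 1e6 accesses)")
```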
Dear @lfarv @T-Nicholls @swhite2401,
To get more user-friendly matching we generally lose a factor 1.5–3 in speed (for an already slow matching). Maybe when GPU tracking is available we can think about it again, but as it is I would not want to use it. Could we add the timing tests to the global tests of AT? If slower than before, give a warning/error?
Best regards
@simoneliuzzo: I'm not sure I understand your comment. I understand that you propose to postpone the introduction of Parameters (until we find a better way to implement them, if it is possible), and keep only Variables? That's a good solution to go on without affecting the performance! If there is a global agreement on this, I can prepare it. Adding timing tests to the test sequence? Good idea. How could it be done?
For info: apparently the problem is for a large part in python. The simple fact of adding a pass-through override:

```python
def __getattribute__(self, key):
    return super(Element, self).__getattribute__(key)
```

slows down significantly the access to the attributes. Tested with python 3.9 on macOS.
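A self-contained sketch of this effect, using nothing but the standard library (class names are illustrative and unrelated to AT):

```python
import timeit

class Plain:
    """Normal attribute lookup."""
    def __init__(self):
        self.Length = 1.0

class Overridden:
    """Same class, but with a pass-through __getattribute__ override."""
    def __init__(self):
        self.Length = 1.0

    def __getattribute__(self, key):
        return super().__getattribute__(key)

p, o = Plain(), Overridden()
print("plain     :", timeit.timeit(lambda: p.Length, number=1_000_000))
print("overridden:", timeit.timeit(lambda: o.Length, number=1_000_000))
```

On a typical interpreter the pass-through version is several times slower, since every access now goes through a Python-level call instead of the optimised default lookup.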
@lfarv, apologies for the delay in getting back to you, I was off most of last week.
Re: timing tests, I don't see why it wouldn't be possible to create a new GitHub workflow to run the tests on the current branch and on master, comparing how long they take. I can add this to my todo list if no-one else is keen to do it, but it'd be low priority, i.e. it'd be several months before I got around to it.
As for our slowdown, it is to be expected that overriding a default Python method with a custom one would be slower; after all, most of the default operations are optimised "under the hood". But I must admit that I was surprised by the magnitude of the slowdown. As you note, this, combined with the sheer number of attribute accesses involved, explains what we see.
What are the primary use cases for Parameters: are they intended to be used mainly when optimising a lattice, or elsewhere too? If they are only for optimisation, then I could see us having separate Parameterised element classes as a solution.
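Picking up the timing-test idea, a hedged sketch of what such a check could look like as an ordinary test in the python test suite (the fixture name, the analysed lattice and the time budget are illustrative, not an existing AT test):

```python
import time
import at


def test_linopt6_timing(hmba_lattice):
    """Fail if linopt6 over the whole reference lattice exceeds a fixed budget."""
    # 'hmba_lattice' stands for an existing test fixture; the 0.5 s budget would
    # have to be calibrated against the CI runners (or against the master branch).
    start = time.perf_counter()
    at.linopt6(hmba_lattice, refpts=range(len(hmba_lattice)))
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5
```

A GitHub workflow could then run the same test on both branches and flag a regression if the timings differ by more than a chosen factor.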
Dear all, this slow-down is clearly not acceptable and this feature cannot be merged as is. Concerning Parameters specifically, I think it is necessary to evaluate them in a separate branch. To be honest I am also surprised by this huge slow-down, but it seems we cannot do much about it with the present implementation.
My feeling is that a workaround can be found: when parametrizing an attribute, only that attribute would go through the new access mechanism. In this case the efficient python attribute access is used for standard attributes, and the slow access is used only where a parametrized attribute is declared. This should considerably speed up methods like linopt and not affect lattices without parametrized attributes (99% of the usage). What do you think?
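One possible reading of this workaround (an illustrative sketch only, not the actual AT implementation) is to store parametrized attributes in a separate container and resolve them through __getattr__, which Python calls only when normal lookup fails, so plain attributes keep the fast path:

```python
class Param:
    """Minimal stand-in for the AT Param class (illustrative only)."""
    def __init__(self, value, name=''):
        self.value = value
        self.name = name


class ParametrizedElement:
    """Illustrative element: plain attributes use the normal (fast) lookup,
    parametrized ones fall through to __getattr__."""
    def __init__(self, **kwargs):
        self._params = {}            # parametrized attributes only
        self.__dict__.update(kwargs)

    def parametrize(self, name, param):
        # Remove the plain attribute so lookup falls through to __getattr__
        self.__dict__.pop(name, None)
        self._params[name] = param

    def __getattr__(self, name):
        # Only called when 'name' is not found by the normal lookup
        try:
            return self._params[name].value
        except KeyError:
            raise AttributeError(name) from None


qd = ParametrizedElement(FamName='QD', Length=0.5, K=-1.2)
qd.parametrize('K', Param(-0.3, name='QD strength'))
print(qd.Length)   # fast path, untouched
print(qd.K)        # resolved through the parameter: -0.3
```

With this layout, lattices without parametrized attributes never hit the slow path, which is what the comment above aims for.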
Hello all,
Yes, and it suggests that there may be some potential improvement in that direction. For now: …
@swhite2401's …
(force-pushed from 61add12 to 81e9cc2)
This PR now concerns only Variables, and Parameters have been removed. The title and description have been updated. The documentation is here. I recall that to use the new classes, one must specify:

```python
from at.future import ElementVariable, RefptsVariable, CustomVariable, VariableList
```

to avoid conflicts with existing classes.
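As a quick usage sketch based on the linked documentation (the constructor arguments are taken from it and may differ in detail; the element and values here are made up):

```python
import at
from at.future import ElementVariable

# Illustrative element; in practice this would be an element of your lattice
qf = at.Quadrupole('QF', 0.5, k=1.2)

# Variable pointing at one item of an array attribute of that element
kvar = ElementVariable(qf, 'PolynomB', index=1, name='QF strength')

kvar.set(1.25)           # modifies qf.PolynomB[1] wherever this object appears
print(kvar.get())        # -> 1.25
print(qf.PolynomB[1])    # -> 1.25
```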
Hello @lfarv, thanks for this! I have been quite busy with other stuff, but this looks very nice at first sight and the documentation also seems very good. Now I have to find time for testing... Other question/comment: did you set up a branch for the Parameters? Is there some implementation we could use to test ideas to solve this slow-down issue?
The parameters are still available in a separate branch.
(force-pushed from a02317f to 29f3d6b)
I have tested these features and overall they work very well. Here is a first round of comments: …
I might have more with further testing.
More comments: …
(force-pushed from ea88ce8 to ebcd2d2)
Ok, I can start looking at that as soon as this PR is merged, which will unlock other branches: maintaining several active branches is starting to become difficult since …
Ok then, we can merge; this is anyway in the "future" imports, so we can make some modifications even after the merge, right?
You are right. There may be modifications resulting from reviewing the upcoming "new matching" PR, which is closely linked to the "Variables" and "Observables" branches. Can you have a look at #738 while I'm preparing the "matching" one?
Ok, I will do. Are you still looking at your ESRF mail? I asked you a question some time ago...
This replaces #603 and #696. Upon request, #696 was split into 4 parts; here is the first one.
We introduce standard ways of parametrising a lattice and varying any quantity.
Variables
Variables are references to any scalar quantity. AT includes two predefined variable classes referring to scalar attributes of lattice elements:
- ElementVariable is associated with an element object and acts on all occurrences of this object. But it will not affect any copy, neither shallow nor deep, of the original object.
- RefptsVariable is not associated with an element object, but with an element location in a Lattice. It acts on any copy of the initial lattice. A ring argument must be provided to the set and get methods to identify the lattice.
Variables referring to other quantities may be created by:
- deriving the Variable base class; usually this consists in overloading the abstract methods _setfun and _getfun (see the sketch below),
- using the CustomVariable class, accepting user-defined get and set functions.
The documentation for this branch is available here.
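As a rough illustration of deriving the Variable base class (the import path and the exact signatures of _setfun/_getfun are assumptions based on the description above, not the definitive API):

```python
# The base class is assumed to be importable alongside the other new classes.
from at.future import Variable


class CavityVoltageVariable(Variable):
    """Illustrative variable acting on the Voltage attribute of one cavity element.
    Not part of AT; it only shows the idea of overloading _setfun and _getfun."""

    def __init__(self, cavity, **kwargs):
        super().__init__(**kwargs)      # name, bounds, ... forwarded to the base class
        self.cavity = cavity

    def _setfun(self, value, ring=None):
        # 'ring' is accepted because set/get may receive a ring argument
        self.cavity.Voltage = value

    def _getfun(self, ring=None):
        return self.cavity.Voltage
```

Such a variable would then be used like any other one, e.g. `v = CavityVoltageVariable(cav, name='RF voltage'); v.set(6.5e6)` (values illustrative).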