node-ical processes the whole file and converts it into one big object structure.
I think we could add a "per event" callback/event: when it is provided, simply invoke the callback (or emit an event) for each parsed object and skip collecting all events in that case ... this approach could reduce RAM usage a lot. A rough sketch follows below.
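Something like this (a minimal illustration only, not existing node-ical API; `parseIcsStreaming` and `parseVevent` are made-up names, and `parseVevent` just stands in for the real per-object parsing in ical.js):

```js
const { EventEmitter } = require('events');

// Very simplified per-event parsing; the real logic lives in ical.js.
function parseVevent(lines) {
  const event = {};
  for (const line of lines) {
    const idx = line.indexOf(':');
    if (idx > 0) event[line.slice(0, idx)] = line.slice(idx + 1);
  }
  return event;
}

// Emit each parsed VEVENT instead of collecting everything into one object.
function parseIcsStreaming(icsText) {
  const emitter = new EventEmitter();
  // Defer so callers can attach listeners before parsing starts.
  setImmediate(() => {
    let current = null;
    for (const line of icsText.split(/\r?\n/)) {
      if (line === 'BEGIN:VEVENT') {
        current = [];
      } else if (line === 'END:VEVENT' && current) {
        emitter.emit('event', parseVevent(current)); // hand off one object, keep nothing
        current = null;
      } else if (current) {
        current.push(line);
      }
    }
    emitter.emit('end');
  });
  return emitter;
}

// Usage: parseIcsStreaming(icsText).on('event', ev => console.log(ev.UID));
```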
Hmmm ... ok, in any case we need to remember the UIDs we already processed, and maybe use events with "newEntry" and "updateEntry" (https://github.com/jens-maus/node-ical/blob/master/ical.js#L462). Alternatively, switch to a "two stage" parsing approach: first find the UIDs and their start/end positions in the input string, then parse all entries belonging to the same UID together in the second stage.
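The UID bookkeeping could sit on top of the per-event emitter sketched above (again, illustrative names only, "newEntry"/"updateEntry" are not existing API):

```js
const { EventEmitter } = require('events');

// Wrap a per-event emitter and distinguish first-seen UIDs from repeats
// (e.g. recurrence overrides that reuse the same UID).
function trackUids(eventEmitter) {
  const seen = new Set();
  const tracked = new EventEmitter();
  eventEmitter.on('event', event => {
    if (seen.has(event.UID)) {
      tracked.emit('updateEntry', event);
    } else {
      seen.add(event.UID);
      tracked.emit('newEntry', event);
    }
  });
  eventEmitter.on('end', () => tracked.emit('end'));
  return tracked;
}
```

The Set of UIDs still grows with the calendar size, but it only holds strings instead of the full event objects.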
An additional idea to reduce RAM usage: do not split the whole data into an array right at the beginning, but work on the raw text and repeatedly search for the next line end. It would be interesting to see how such an approach behaves runtime-wise ... one regex split at the beginning vs. many indexOf calls on the big string with a moving start index. A quick comparison sketch is below.
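Something along these lines could be used to measure it (a standalone sketch, not node-ical code; actual numbers will vary with input size and Node/V8 version):

```js
// Variant 1: materialize every line up front with one regex split.
function splitAll(text, onLine) {
  for (const line of text.split(/\r?\n/)) onLine(line);
}

// Variant 2: walk the big string with indexOf and a moving start index,
// so only one line string exists at a time.
function scanLines(text, onLine) {
  let start = 0;
  while (start < text.length) {
    let end = text.indexOf('\n', start);
    if (end === -1) end = text.length;
    // Trim a trailing \r so both variants yield identical lines.
    const stop = end > start && text[end - 1] === '\r' ? end - 1 : end;
    onLine(text.slice(start, stop));
    start = end + 1;
  }
}

// Rough timing comparison on a synthetic calendar.
const sample = 'BEGIN:VEVENT\r\nUID:1\r\nEND:VEVENT\r\n'.repeat(100000);
console.time('split'); splitAll(sample, () => {}); console.timeEnd('split');
console.time('scan');  scanLines(sample, () => {}); console.timeEnd('scan');
```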