From 31b8823ef520b4d18f8615c246c64869b8a16005 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?= Date: Thu, 1 May 2025 16:46:52 -0700 Subject: [PATCH 1/3] 107th Meeting of TC39 notes --- meetings/2025-04/april-14.md | 1070 ++++++++++++++++++++++++++++++++++ meetings/2025-04/april-15.md | 974 +++++++++++++++++++++++++++++++ meetings/2025-04/april-16.md | 994 +++++++++++++++++++++++++++++++ meetings/2025-04/april-17.md | 592 +++++++++++++++++++ 4 files changed, 3630 insertions(+) create mode 100644 meetings/2025-04/april-14.md create mode 100644 meetings/2025-04/april-15.md create mode 100644 meetings/2025-04/april-16.md create mode 100644 meetings/2025-04/april-17.md diff --git a/meetings/2025-04/april-14.md b/meetings/2025-04/april-14.md new file mode 100644 index 0000000..de99126 --- /dev/null +++ b/meetings/2025-04/april-14.md @@ -0,0 +1,1070 @@ +# 107th TC39 Meeting + +Day One—14 April 2025 + +## Attendees + +| Name | Abbreviation | Organization | +|:-----------------------|:-------------|:-------------------| +| Waldemar Horwat | WH | Invited Expert | +| Daniel Ehrenberg | DE | Bloomberg | +| Ashley Claymore | ACE | Bloomberg | +| Jonathan Kuperman | JKP | Bloomberg | +| Ben Lickly | BLY | Google | +| Bradford C. Smith | BSH | Google | +| Chris de Almeida | CDA | IBM | +| Daniel Minor | DLM | Mozilla | +| Jesse Alama | JMN | Igalia | +| Chip Morningstar | CM | Consensys | +| Michael Saboff | MLS | Apple | +| Nicolò Ribaudo | NRO | Igalia | +| Erik Marks | REK | Consensys | +| Richard Gibson | RGN | Agoric | +| Josh Goldberg | JKG | Invited Expert | +| Luca Forstner | LFR | Sentry | +| Philip Chimento | PFC | Igalia | +| Christian Ulbrich | CHU | Zalari | +| Mikhail Barash | MBH | Univ. of Bergen | +| Eemeli Aro | EAO | Mozilla | +| Chengzhong Wu | CZW | Bloomberg | +| Dmitry Makhnev | DJM | JetBrains | +| J. S. 
Choi | JSC | Invited Expert |
| Keith Miller | KM | Apple Inc |
| Aki Rose Braun | AKI | Ecma International |
| Luca Casonato | LCA | Deno Land Inc |
| Samina Husain | SHN | Ecma International |
| Istvan Sebestyen | IS | Ecma International |
| Duncan MacGregor | DMM | ServiceNow Inc |
| Mathieu Hofman | MAH | Agoric |
| Mark Miller | MM | Agoric |
| Ron Buckton | RBN | Microsoft |
| Andreas Woess | AWO | Oracle |
| Romulo Cintra | RCA | Igalia |
| Andreu Botella | ABO | Igalia |
| Ruben Bridgewater | | Invited Expert |
| Michael Ficarra | MF | F5 |
| Ulises Gascon | UGN | Open JS |
| Kevin Gibbons | KG | F5 |
| Shu-yu Guo | SYG | Google |
| Jordan Harband | JHD | HeroDevs |
| John Hax | JHX | Invited Expert |
| Stephen Hicks | | Google |
| Tom Kopp | TKP | Zalari GmbH |
| Veniamin Krol | | JetBrains |
| Rezvan Mahdavi Hezaveh | RMH | Google |
| Luis Pardo | LFP | Microsoft |
| Justin Ridgewell | JRL | Google |
| Ujjwal Sharma | USA | Igalia |
| James Snell | JSL | Cloudflare |
| Jack Works | JWK | Sujitech |

## Opening & Welcome

Presenter: Ujjwal Sharma (USA)

USA: Perfect, great. Thank you. Then I will start with this and prompt folks as we go. Hello and welcome to the 107th meeting. It's the 14th of April. This is a fully remote meeting in the New York timezone. I'd like to introduce you all to your chairs group, which you might remember from the last meeting; or if you missed the last meeting, here is some news for you. There's me, RPR, and CDA, and the facilitators JRL, DLM and DRR. On behalf of all of us, I'd like to welcome you all and kick off this meeting. Make sure you're signed in. If you're here, I'm assuming you're already signed in; if you haven't yet, please go back and sign in. The responses to this form are really helpful for us to track attendance. TC39, as you know, has a code of conduct; please be mindful of it and follow it at all times. It applies to this meeting.
Since it's online, note that our communication mediums and chat rooms are also governed by the code of conduct. The daily schedule is pretty straightforward for these meetings. We start now, which in this case is 10 in New York time, and we finish in five hours: we have a two-hour session until the break, then an hour break, and another two hours until three in New York time.

USA: A quick rundown of our comms tools before we begin. There's TCQ, which is by far one of the most unique and important tools that we use for communicating. You should have the link to TCQ already. As you can see, the entire agenda is there. This is how any individual agenda item looks: there's a queue and a bunch of things. This is how it looks for a participant; if you're a speaker, this is how the view looks for you. I'll quickly discuss the different options you have. They go from right to left in order of decreasing priority, so "point of order" is the highest priority; that's why it's red. But the important part here is: please use it sparingly, for emergencies such as the notes not updating for you, serious technical glitches, or anything else you believe urgent enough that the meeting should halt for it to be resolved. Next you have clarifying questions. These jump to the top of the queue, apart from points of order, obviously; in this case, you are interrupting the running discussion to ask a clarifying question regarding the point currently being discussed.
Next you have "discuss current topic", where you add another item for discussing the current topic. So if there's a topic that's going on, you can add another point to that list; it doesn't go to the bottom, but to the end of this particular discussion. And then you can introduce a new topic, which puts you at the bottom of the list, so you can start a new topic after the most recent one has been finished. So that's all for adding yourself to the queue. There's another button, only visible if you're already speaking, which says "I'm done speaking". Please do not use it at the moment, because the problem with this button is that it can sometimes be double-clicked: for instance, if the Chairs are running the queue and you also press this button, you might skip the person after you. So because of TCQ's technical glitches at this moment, we do not recommend using this button. That's all for TCQ. We also have Matrix. You might enjoy any of these channels. Of course, "delegates" is supposed to be for the most technical discussions; Temporal, quite the opposite. All these channels are different and have their own sort of vibe, but overall, there's a group of channels dedicated to specific subjects, and you might want to be on them. So join the TC39 space on Matrix, and ask us for joining details if you don't have them. Next is the IPR policy. Basically, this is a quick reminder of Ecma's IPR policy. Everybody who is part of this meeting at this moment is supposed to be either a delegate from an Ecma member, in which case your organization has collectively signed and approved the Ecma IPR policy, or an invited expert, in which case you have done so yourself. If you have not, please contact us, and be aware that your contributions in this meeting are going to be covered by this royalty-free IPR policy.
So, yeah, I'm not a lawyer myself, but make sure that you have reviewed this. Observers, on the other hand, by not making any spoken contributions to the meeting themselves, are not subject to this. Notes are live; I believe we are being transcribed right now. And remember to summarize key points at the end of each topic. For instance, if you have a presentation and you think you have a pretty good idea of what the conclusion or the summary is going to be, feel free to include it in the presentation itself, or take a few minutes at the end of your presentation to go over a quick summary. Actually, I'm supposed to read this out: a detailed transcript of the meeting is being prepared and will eventually be posted on GitHub. You may edit it at any time during the meeting in Google Docs for accuracy, including deleting comments which you do not wish to appear. You may request corrections or deletions after the fact by editing the Google Doc in the first two weeks after the TC39 meeting, or subsequently by making a PR in the notes repository or contacting the TC39 chairs. The next meeting, the 108th, is from the 28th to the 30th of May in A Coruña, hosted by Igalia, in Central European Summer Time. Yay for that. And let's move on to the rest of the agenda.

USA: So first of all, let's ask for note takers. Any volunteers? Let me switch.

JMN: I can help out. This is Jesse Alama.

USA: Thank you, Jesse. Anyone else who would like to help out with the notes? The very first slot of the day. And if I may, this is probably one of the easiest ones, really, given how relaxed the topics seem to be, as opposed to later parts of the meeting where things can get quite complicated.

ACE: I'll take an easy slot.

USA: Thank you, Ashley. So, yeah, let's—noted down, yeah, perfect. And move on. Okay, so let's approve the previous minutes.
I'll give a minute for—well, a few seconds for anyone to mention any thoughts on the previous minutes. A reminder that you can always edit them in the notes repo if you'd like. Anyone?

CDA: Yeah, so the minutes are still not published. There's a PR out, but there are still a bunch of open, unresolved suggestions. We should direct those folks to just make those commits directly, because this commonly happens where somebody's waiting for, I guess, the PR author to approve the suggestions, but they should just feel free to make them. We should make a point to get this done as soon as possible.

USA: Right. Yeah. Thank you, Chris. I guess in this case, then, the previous minutes are part of the PR. We should merge it soon, but since it's still not merged, you have a great moment to go through it, approve it if you'd like, or just post any corrections. All right, then let's say that the previous minutes have been approved. Let's make sure that we merge them in soon. Next, let's adopt the current agenda. I'll give a few seconds for folks to raise any concerns about the current agenda. Sounds like consensus, so we have adopted the current agenda. Next we have the secretary's report. Hello, Samina.

## Secretary's Report

Presenter: Samina Husain (SHN)

SHN: Thank you for the start of the meeting, and welcome to everybody. I have a relatively short slide deck covering the activities that have taken place since our last meeting. The opt-out period is open for ECMA-262 and ECMA-402 ahead of their approval at the GA, and I'd like to give you a bit of an update on some new discussions we're having around new topics and work for Ecma. Ecma has a code of conduct, and you can review the invited expert rules. Some documents have been recently published; if you want access to those documents, you just have to ask your chair.
Dates for the next meetings are also noted. Ujjwal already mentioned that the very next plenary is going to be in May; the next important date for us to be aware of is the June GA, which is the 25th of June this year.

SHN: All right, so as I mentioned, very important for the June meeting that's coming up, we have the opt-out period open for 60 days, as always. It does tend to run very smoothly and I anticipate the same, and there are two approvals, for both 262 and 402: the 16th edition and the 12th edition. I think they've already been frozen for some time, so thank you very much for all of that work, and we will proceed to the approval in June.

SHN: On the new work that's going on, there has already been a good discussion on forming a new TC, TC57. There's been some discussion of this in the ExeCom. I think we are moving forward well; we are on the second cycle of discussion, and it will be excellent to have a new TC in the work items of Ecma.

SHN: Just some other items, as a reminder: a number of invited experts have recently joined TC39, not to mention other TCs. As always, I will review them in the third quarter of this year. Many of the new invited experts are part of organizations, and I look forward to seeing those organizations ready to make decisions to join, or to assess how they want to participate in activities with Ecma. I was reminded by W3C about the horizontal review. I've left a note that this is still an open discussion, so as TC39 deems fit, we would then come back to them on how to better be involved in the horizontal review.

SHN: I'm going to pause there, because that is the extent of the report based on what we discussed at our last meeting, which was just six weeks ago. I'll stop here to ask if there's anything I missed that you would have expected me to present, or if you have questions on what I have presented.
DE: Once there is input from the committee, the new TC will give that feedback back to the open-source community so that they can digest it, make a new proposal, and everyone can agree on a common standard. I think this could be a really useful tool for unifying the whole community ecosystem. I would encourage everyone here who is interested to participate. Please get in touch with me if you're interested or if you have feedback on this idea.

AKI: I don't think I have any specific comments. I have been asked about our process for collecting information from participants, how we utilize forms and handle that data. That's something I'm working on, and I will have something for in the near future, but it's not anything slide-worthy at the moment.

SHN: Thank you, AKI. I want to recognize and thank AKI for her work looking into some tools for the future, so we understand some of the requirements we've had. We've just had a meeting on it. So please just be a little patient; we'll come back to you with some proposals on how we're going to help improve that, and AKI is going to be involved in that. I'm also thanking AKI in advance for the PDF versions of the documents once they are approved in June. Thank you. Ujjwal, thank you very much.

AKI: Thanks to the 262 editors, by the way, for their help with the direction we're going to go for the PDF. They've put a lot of work in as well. Thank you.

SHN: Yes, thank you very much. Ujjwal, thank you. That's the extent of my presentation. I will be online if there are any further questions.

### Speaker's Summary of Key Points

A brief overview of current activities and upcoming milestones was presented:

* The opt-out period is open for ECMA-262 (16th Edition) and ECMA-402 (12th Edition), which are both scheduled for final approval at the June General Assembly.
* An update was shared regarding the progress of new technical work, specifically the ongoing discussions around the formation of a new TC.
There is positive momentum within the ExeCom, and it was highlighted that this initiative represents a promising addition to Ecma's future work program.
* Reminders were given about Ecma's Code of Conduct, access to recently published documents, and upcoming meetings, including the next plenary in May and the June GA. Also mentioned were a number of newly added invited experts across various TCs, with a formal review of all IE statuses scheduled for Q3.
* AKI reported on ongoing work related to information collection for tools and confirmed upcoming contributions related to PDF document formatting.
* AKI and the ECMA-262 editors were thanked for their continued support and collaboration.

## TC39 Editors' Update

Presenter: Kevin Gibbons (KG)

KG: There have been a fair handful of normative changes, partly because we are in the process of cutting ES2025 and we wanted to make sure we got as many of the outstanding things in as we could. So I'll run through all of these very briefly, just so everyone is aware. This first one is a fairly technical change. It makes it so there's not a distinction between variables declared with `var` inside `eval` and declarations without a `var`, so engines don't have to keep track of whether something is a `var` declaration or not, which is just useless work. The second thing was an oversight: when you `for await` over a synchronous iterator and the synchronous iterator is yielding promises, if the synchronous iterator yields rejected promises, then the for-await treats that as an exception, and when iterators have exceptions, you don't close them. But this isn't an exception from the point of view of the synchronous iterator; it's only an exception from the point of view of the async consumer. So the synchronous iterator should be closed in this case. We had consensus for this literally years ago and were waiting to merge it until there were implementations; the implementation landed in Safari a few months ago, which is why that landed.
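The for-await behavior being described can be sketched with a hypothetical generator. The rejected promise surfaces as an exception in the async consumer either way; per the spec fix, the synchronous iterator should additionally be closed (so its `finally` block runs), though engines without the fix may not do that yet:

```javascript
// A sync iterator (a generator) that yields a rejected promise.
function* source() {
  try {
    yield Promise.resolve(1);
    yield Promise.reject(new Error("boom")); // a rejected promise, not a thrown error
  } finally {
    // Per the spec fix, for-await should close this iterator, running this block.
    console.log("iterator closed");
  }
}

async function consume() {
  const seen = [];
  try {
    for await (const v of source()) {
      seen.push(v); // the rejected promise is awaited and throws into the loop
    }
  } catch (e) {
    seen.push("caught: " + e.message);
  }
  return seen;
}

consume().then(console.log); // e.g. [1, "caught: boom"]
```

Whether "iterator closed" is logged is exactly the observable difference the pull request fixed.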
I'll have more to say about that later.

KG: We added `RegExp.escape`. We made another iterator-closing tweak: if you pass an invalid value to an iterator helper, that should close the underlying iterator. And we added `Float16Array`. And then #3559, this was a bugfix. In the process of updating the spec towards merging iterator helpers, we tweaked some of the machinery, and in the process we made an accidental normative change to array and RegExp string iterators, where they became observably not reentrant, which was not our intention and not what engines implemented. So ABL, I believe, opened a PR to rewrite this so we restore the original behavior. I did want to mention this is a bug fix, and sometimes we backport bug fixes when we very recently cut a new edition of the spec that's still in the IPR opt-out. The editors intend to do this unless there's some particular reason not to. I don't believe this should affect the IPR opt-out, especially because the behavior that we are restoring was in fact already part of the specification as of a couple of years ago. So this was strictly a bug fix, but it is technically a normative change, so I just want to give a heads-up that there will be one errata normative change to ES2025.

KG: Okay. So that's all the normative changes. There's been a handful of editorial changes I want to call out. We now have dark mode, thanks, again, to Arai. So you'll see that if you have your browser set to prefer dark mode.

KG: And then #3353, I want to call out only because it's a tweak to the async module machinery, which is extremely complicated stuff. If you work with that, I recommend taking a look at this change; although it's a fairly small change, I expect you'll consider it an improvement. If you don't work with the machinery, you don't care about this at all. And finally, as Aki already mentioned, there's been a bunch of changes towards making the printable document less crap.
So it's looking much nicer now. Thank you AKI and also MF for work on that.

KG: We have a fairly similar list of upcoming work, although I wanted to call out that we've actually gone through—well, mostly MF has gone through—and documented the editorial conventions that we follow. It's currently just a wiki page, and there's a link here if you're interested in that. This is things like particular phrasing we use, or decisions that we make when editing the document that can't be captured by Ecmarkup. We try to codify as many as we can in code, but of course that's not practical for everything. And the last thing, of course, is just to call out that ES2025 has been cut. Apart from the one minor tweak I mentioned, it's done; the link is on the reflector, and the IPR opt-out has begun ahead of the GA in June. If you or any of your lawyers have any objections, speak now or forever hold your peace. And that's all I've got. Thanks so much.

## ECMA-402 Editors' Update

Presenter: Ujjwal Sharma (USA)

USA: Anyway, all right, I'll be very quick. Hello everyone again. This is a brief update from the ECMA-402 editors. As KG mentioned earlier for 262, the new edition is out—well, it is in opt-out. Please check it out, and let us know as soon as possible if you have any concerns regarding this; otherwise it's good from our end. We have done a bunch of editorial improvements; this is the edition that includes duration format.

USA: But here are three big editorial improvements (972, 983, 984). One restructures the unit style and display handling in the format: instead of having multiple slots for style and display, we have one slot for each unit, holding the options that correspond to that unit. So that's a record that contains the style and the display, a bit more structured than it used to be, basically. Then we have cleaned up NumberFormat a bit. Some of this is still being discussed.
So if you're interested, please check out that PR, but most of those editorial improvements have been merged. And then we have abstracted away the locale resolution part of the constructors into a single AO. So all around, there are a few different editorial improvements. It should be a lot easier now to make sense of the spec, and, yeah, that's it for 402. So thanks.

## ECMA 404

Presenter: Chip Morningstar (CM)

* no slides

CM: Yeah, ECMA 404. Well, I looked. It's still there.

USA: That's as good as it could be, right?

CM: Yes, it's excellent.

## Test262 Status Update

Presenter: Philip Chimento (PFC)

PFC: We've continued to have many nice smaller contributions from many people. We've been chipping away at the large pull request for tests for the Explicit Resource Management proposal, with many thanks to a contributor from Firefox as well. And I think that's all there is to report this time.

## TG3 Status Update

Presenter: Chris de Almeida (CDA)

CDA: Yes, TG3 continues to meet to discuss security impacts of proposals in various stages. Please join us if you are interested.

## TG4 Status Update

Presenter: Jonathan Kuperman (JKP)

JKP: This is a pretty quick update. Just a reminder, the working mode that we've been using is seeking annual approval on things, so we've been meeting frequently in the meantime, working on our newer features as well as normative changes. Mostly, between the previous plenary and today, we've been working on editorial updates.

JKP: The big one is we converted the TG4 source map spec from Bikeshed to Ecmarkup, and we've added formatting and linting for it, as well as improving the experience for dark mode users.

We've made a few normative updates. A reminder that these slides and the links are in the agenda. We had a typo in the VLQ decoding algorithm, and another issue with the continuation bit when decoding VLQs.
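As background for the VLQ fixes mentioned above: source map mappings are encoded as base64 VLQs, where each 6-bit digit carries 5 value bits plus a continuation bit (bit 5), and the lowest bit of the fully decoded value is the sign. A minimal illustrative decoder (a hypothetical sketch, not TG4's normative algorithm):

```javascript
const BASE64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Decode a string of base64 VLQ digits into an array of signed integers.
function decodeVLQ(str) {
  const out = [];
  let value = 0;
  let shift = 0;
  for (const ch of str) {
    const digit = BASE64.indexOf(ch);
    value += (digit & 31) << shift; // low 5 bits are value bits
    if (digit & 32) {               // continuation bit set: more digits follow
      shift += 5;
    } else {                        // last digit: low bit of the value is the sign
      out.push(value & 1 ? -(value >>> 1) : value >>> 1);
      value = 0;
      shift = 0;
    }
  }
  return out;
}

console.log(decodeVLQ("AAAA")); // [0, 0, 0, 0]
console.log(decodeVLQ("C"));    // [1]
console.log(decodeVLQ("D"));    // [-1]
console.log(decodeVLQ("2H"));   // [123]
```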
We also moved our algorithm examples to the ECMA "syntax-directed operations" grammar.

As far as our proposals, we've been continuing to work on range mappings and scopes. For range mappings, we have a few small changes, like allowing multi-line mappings; for scopes, we have more work, and we've got larger PRs discussing how to future-proof scope encoding and decoding, as well as where to use relative versus absolute indices.

## TG5: Experiments in Programming Language Standardization

Presenter: Mikhail Barash (MBH)

* [slides](https://docs.google.com/presentation/d/1We23iI6oOg5jViJZOB4EtUWexoQTvKlGDR7csxSxsT4/edit?usp=sharing)

MBH: We had a very successful workshop at the plenary in Seattle. We had 21 attendees, I think from 11 different organizations. We continue to hold monthly meetings, and we are currently arranging two TG5 workshops. The one which is confirmed as of now is in A Coruña the day before the plenary starts, so the 27th of May. It will be hosted at the University of A Coruña, and they have prepared some presentations for us. I will also post later in the reflector and in the Matrix channel a call for presentations from the delegates, if you want to give a presentation at that workshop. And we are currently planning a TG5 workshop in Tokyo for the November meeting.

MBH: One more thing, on outreach: there will be a workshop on programming language standardization and specification, co-located with the European Conference on Object-Oriented Programming, which will be held in July in Bergen. The keynote will be on SpecTec, the mechanized approach for the WebAssembly specification, and I would like to bring your attention to this. We encourage you to submit proposals for talks. It's a 300-word abstract, and the links will be shared in the reflector and also in the Matrix channel. So please consider submitting.
That's all.

## Updates from the CoC

Presenter: Chris de Almeida (CDA)

CDA: There are no updates from the CoC committee. There is nothing new to report. As always, a reminder that we are always welcoming new members to the CoC committee, so if that's something you're interested in, please reach out.

## Normative: add notation to PluralRules

Presenter: Ujjwal Sharma (USA)

* [proposal](https://github.com/tc39/ecma402/pull/989)
* [notes](https://notes.igalia.com/p/UpmK0K8eo)

USA: This is my presentation about a small normative pull request that we made on ECMA-402. I'd like to quickly introduce it, and by the end of the presentation, hopefully you'll have enough background and confidence about this that you would agree to putting it into ECMA-402. The title says notation support for `PluralRules`. What does that mean?

USA: Okay, so here was the problem. `Intl.PluralRules`, for the uninitiated, is a constructor on the Intl object that is slightly different from all of the existing constructors. While there's a bunch of formatters—`DateTimeFormat`, `NumberFormat`; we add formatters, we love formatters—this is actually an API that does selection, so it's a bit more of an interesting building block. What it does is expose the locale-specific pluralization rules to the user, so you can input a number and ask, for any given locale, what the plural category is going to be for it. Now, for English speakers, this doesn't sound super impressive, given there are only two. Languages like Spanish, for instance, have three—there's a separate category for bigger numbers, for example—but there are more complex languages that can have up to five or six plural categories, so it can be quite an involved process to build an application that takes all of these into account in a way that works across locales. That's what PluralRules does.
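The selection behavior described above is a long-shipped part of Intl, so a quick sketch of what `select` returns today:

```javascript
// Cardinal plural categories in English: just "one" and "other".
const cardinal = new Intl.PluralRules("en");
console.log(cardinal.select(1)); // "one"
console.log(cardinal.select(2)); // "other"

// Ordinal rules are richer even in English (1st, 2nd, 3rd, 4th...).
const ordinal = new Intl.PluralRules("en", { type: "ordinal" });
console.log(ordinal.select(1)); // "one"   -> "1st"
console.log(ordinal.select(2)); // "two"   -> "2nd"
console.log(ordinal.select(3)); // "few"   -> "3rd"
console.log(ordinal.select(4)); // "other" -> "4th"
```

An application would typically use the returned category as a key into a table of translated message variants.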
USA: The problem is that it doesn't take the notation into account. Why are notations important? I'll give a quick history lesson on this. Notations weren't originally in NumberFormat, but they were one of the more frequently requested features, so, in May 2018—and I know that these kinds of timings can be complicated, but I say May 2018 because of this issue; shout-out to SFC, by the way, for the heavy lifting. Spanish has a third category for "millones", and every time you are in the millions, there's a different plural category.

USA: Fun fact, but, yeah, in May 2018, unified NumberFormat added this notation support to NumberFormat. This means that NumberFormat can now format numbers in scientific notation or other compact notations. This was nifty, and pretty much right away—or let's say within two years—we wanted to support them in PluralRules too. It looks like a long time had passed, because unified NumberFormat took a while to happen, but as you can see, unified NumberFormat was still not merged at that point. The idea was that once unified NumberFormat was merged and had notations, we would simultaneously start supporting number notations in PluralRules. It somehow slipped through the cracks, however, and it didn't happen. But the idea was that something as simple as this could be accepted, and given that notation was already being supplied to NumberFormat, a similar options bag could be used for both.

USA: So, yeah, not only should PluralRules support notations, but it should probably stick to the same options that NumberFormat does.
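For reference, this is the existing NumberFormat notation option that the PR proposes to mirror. The PluralRules line at the end shows the proposed options bag; note that engines without the change simply ignore the unrecognized option today, so constructing it does not throw:

```javascript
// Intl.NumberFormat already accepts a notation option.
const compact = new Intl.NumberFormat("en", { notation: "compact" });
console.log(compact.format(1500)); // "1.5K"

const scientific = new Intl.NumberFormat("en", { notation: "scientific" });
console.log(scientific.format(1500)); // "1.5E3"

// Proposed: the same option on Intl.PluralRules (per the PR; current
// engines ignore it, so select() still uses standard-notation rules).
const pr = new Intl.PluralRules("en", { notation: "compact" });
console.log(pr.select(1_500_000)); // a plural category string
```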
And thinking of a solution more recently, I thought, well, if we have a notation slot on the PluralRules object, then we can just pass it to ResolvePlural, and given that this operation is not really specified—it's implementation-defined, so to say—the final result is that we just need to start storing this information and passing it into the AO, and that would pretty much be it.

USA: Now, while I call it a minimal solution, the PR is quite minimal by normative PR standards as well, which is why I don't think it deserves to be a proposal by any stretch. But condensing it even further—removing, for example, the part where I add the new slot, put it in the constructor, and put it in the list of slots—this is the change: in the spec, you would perform the shared NumberFormat options AO with these options and "standard", where "standard" is a notation. This is an AO that is shared between NumberFormat and PluralRules. Now what we're doing is getting the notation from the options object, setting the internal slot that I talked about earlier, and then calling this. So we perform the NumberFormat options AO with that notation instead of it always being "standard"—"standard" being the standard notation, so the default is still standard.

USA: There are a few options here that I clipped out for readability, but the standard, engineering, scientific, and compact options are all available for notation. And in April 2025, which is slightly less than two weeks ago, we got approval from TG2. So here I am. I hope that this was informative enough and that you all feel confident. I would like to ask now for consensus.

DLM: Yeah, we support this normative change.

DE: The change sounds good to me.
I think we should treat this similarly to staged proposals, in terms of merging it once we have multiple implementations and tests. We could track PRs like this. Anyway, this seems like a very good change to me.

USA: Just FYI, we have tracking for all normative PRs for ECMA-402, but noted. [https://github.com/tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs](https://github.com/tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs)

DE: Okay, great.

CDA: Awesome. That's it for the queue, so it sounds like you have support. Are there any other voices of support for this normative change?

USA: Awesome. Thank you. And I have a proposed conclusion for the notes: a normative pull request on ECMA-402 was presented to the committee for consensus; this PR adds support for a notation option in the PluralRules constructor for handling different non-standard notations.

DE: Do you want to say the part about how we had consensus?

USA: And yeah, with, I guess, a couple of supporting opinions, we achieved consensus for this pull request.

### Speaker's Summary of Key Points

Normative pull request [https://github.com/tc39/ecma402/pull/989](https://github.com/tc39/ecma402/pull/989) on ECMA-402 was presented to the committee for consensus. This PR adds support for a notation option in the PluralRules constructor for handling different non-standard notations.

### Conclusion

The committee reached consensus on [the pull request](https://github.com/tc39/ecma402/pull/989), with explicit support from DE and DLM.
+ +## Normative: Mark sync module evaluation promise as handled + +Presenter: Nicolò Ribaudo (NRO) + +* [proposal](https://github.com/tc39/ecma262/pull/3535) +* [slides](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU) + +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_1#slide=id.g34836646ca1_0_1) + +NRO: I’m presenting a pull request fixing a bug around module promise rejection handling. Just a little bit of background: how does promise rejection tracking work, and what is the problem? Rejection tracking is basically the machinery that lets us fire some sort of event when you have a promise that gets rejected, and then when it gets handled. For example, browsers do this through an unhandledRejection event. So how does this work in detail? + +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_8#slide=id.g34836646ca1_0_8) + +NRO: Well, whenever you reject a Promise, either through calling the reject function from the constructor or using `Promise.reject`, and also for promises created internally by the spec and rejected: if, when the promise gets rejected, it’s not handled yet, so if it does not have a callback registered through .then or .catch, then we call HostPromiseRejectionTracker. + +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_16#slide=id.g34836646ca1_0_16) + +NRO: And then later when you actually handle the promise, so when you call .then or .catch, it will tell the host, “now this promise has been handled”, and the host can then avoid firing the event, or do whatever it wants to track which promises are not being properly handled.
+ + [Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_25#slide=id.g34836646ca1_0_25) + +NRO: So that was Promises; how does this interact with modules? + +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_44#slide=id.g34836646ca1_0_44) + +There are multiple types of modules in the spec, or rather Module Records, which represent modules. There is a Module Record base class and two main types of actual Module Records: Cyclic Module Records and Synthetic Module Records. Cyclic Module Records are modules that support dependencies. This is some sort of abstract base class, and our spec provides Source Text Module Records, the variant for JavaScript; for example, the WebAssembly ESM integration proposal is proposing a new type of Cyclic Module Record. Synthetic Module Records are modules where you already know the exports and you have to wrap them with some sort of module to make them importable. The way module evaluation works changed over the years. Originally there was this Evaluate method on all Module Records, and it would trigger evaluation; if there was an error it returned a throw completion, otherwise a normal completion. But then, when we introduced top-level await, we changed the method to return a promise, with the detail that only Cyclic Module Records can actually await. For any other type of Module Record, like any type of custom host module, the promise returned by the Evaluate method must already be settled. So the promise there is just to have a consistent API signature, and it is not actually used as a promise.
+ +NRO: And given that this promise is going to be already settled, in the module evaluation machinery, whenever we have a module record that’s not a Cyclic Module Record, we just look at internal slots of this promise to see if it’s rejected, and extract the value that it’s rejected with. You can see from here, we only use `promise.[[PromiseResult]]` to get the value inside the promise, and we look at its internal state. + +NRO: And this causes a problem. Because given that we’re not reading this promise using the normal promise abstract operations, when this promise is created by the host, if this promise rejects, it will call HostPromiseRejectionTracker, telling the host “hey, this promise is rejected and not handled”, and then we never tell the host that the promise has been handled, because we never call PerformPromiseThen, which is the AO responsible for calling the host hook. So the host doesn’t know that we actually took care of this completion here. For example, take these three modules. We have a module that does a dynamic import of `a.js`, and it depends on some B module. This B module is not a JavaScript Source Text Module Record; it’s a module record managed by the host. It creates a promise, rejects it, and returns the promise as part of its evaluation. So when the promise is rejected, it calls the HostPromiseRejectionTracker hook, telling the host that the promise has been rejected. + +NRO: Then during the evaluation of `a.js`, we perform the steps from the slide before, and we look at the error, and we do not call HostPromiseRejectionTracker—oh, here it says "rejected", it should be "handled"—in the promise hook. And then, in the meanwhile, dynamic import creates another promise for the evaluation, not just of B, but of the whole module graph of `a.js`; and in the module on the left, we handle this other promise. So the promise for the whole graph of A is handled, and we never handle the promise for module B.
+ +NRO: So the fix here is to just change the InnerModuleEvaluation abstract operation to explicitly call the host hook that marks the promise as handled, when we extract the rejection from the promise. And, well, editorially, I’m doing this as a new AO because it’s used by the import defer proposal; otherwise we would have it inline in the module evaluation algorithm. + +NRO: Are there observable consequences to this? Yes and no. Technically this is a normative change; as in the example before, this is observable because it changes the way host hooks are called, and usually those affect how some events are fired. However, on the web, the only non-cyclic Module Records we have are Synthetic Module Records, and for those we already have the values; we’re just packaging them in a module after creating them, so that promise is never rejected, and this is not observable. Outside of the web, we have CommonJS: when you import from a .cjs file, it is wrapped in its own Module Record, and the CJS module is evaluated in the `.Evaluate()` method of that Module Record. However, Node.js does not expose the promise for that internal module as rejected through their rejection event; maybe they don’t actually create the promise, I don’t know how it’s implemented. So Node.js already implements the behavior that we would get by fixing this; Node does not implement the bug. So, yeah, to conclude, is there consensus on fixing this? There’s the pull request ([3535](https://github.com/tc39/ecma262/pull/3535)), already reviewed, in the 262 repository. + +MM: Great. So I’ll start with the easy question. You mentioned the situation where there exists a promise that when born is already settled, and I understand why, and it all makes sense. I just want to verify that it does not violate the constraint, the invariant, that user code cannot tell synchronously whether a promise is settled or not.
The only thing that user code can sense is, asynchronously, finding out that a promise is settled. Is that correct? + +NRO: It’s correct, because the way you can check this is through dynamic import, and you get a promise anyway. And also this promise is not a promise that was provided by the user; it was just a promise that was provided by the spec, to the spec. + +MM: Great. And the concept of internal promises, or promises which are spec fictions, leads me to the more interesting question, which is the one that MAH posted on the PR that you already responded to. Could you recount that, and then I’ll respond after that. + +NRO: Yes. So MAH was asking if internal spec promises are observable to the host hook. And I believe unfortunately the answer is yes, because if you reject a promise, it will call this host hook, and it’s just the host that will have to know, "oh, this is an internal promise, let’s not give it to the user"—which I know is not the answer you’re hoping for. And it’s not just this specific module case; it’s about all internal spec promises. + +MM: You’re right, it’s not the answer I’m hoping for. It’s only being directly made observable to the host hook, and it’s only being indirectly observable to JavaScript code according to the behavior of the host hook. The problem is that right now, the behavior of the existing host hooks for this does reflect it back to JavaScript code: these internal spec promises do get [INAUDIBLE], as do promises that can be observed by JavaScript code. And I’ll just say, we’re rather aghast at the idea of the spec causing what were spec fiction concepts to need to be reified as promises that become observable by user code. + +NRO: Yeah, I guess I agree. I don’t know if hosts actually expose any of these promises, though. I didn’t check, outside of this one use case. + +MM: Okay.
+ +MM: Can the promises that are spec fictions in the module machinery remain unobservable, and not be reported as either handled or not handled—just not be tracked? + +NRO: Are you asking me, or in general to the committee? I feel like this is a larger discussion. + +MM: I am asking you first. I think it could be; I think we should—I just don’t know the machinery in depth. If you think we should and could, I recommend that we do. + +SYG: I have a clarifying question: what is a spec fiction promise in this case? Is it something that is synchronously observable? Like, you can write an async loop that counts how many times you went back to the microtask queue, and that is observable; so whether a tick is scheduled or not scheduled becomes observable when you basically race it with something like a `for await` loop that counts. + +MM: That is an interesting intermediate case; thanks for raising that, I was not thinking about that. What I think of as an object which is a spec fiction versus not is whether the user code itself can get access to the object, get connected to the object: does the object become reified? An object whose only behaviour observable by user code is additional ticks on the queue—those could be explained by just advancing the ticks by some other means, or they could be explained by promises that are spec fictions, and we could still call them spec fictions the same way that other objects are spec fictions. They have observable effects, which is why we have them at all, but user code can never get a hold of them. And the original example from which I became aware of the distinction is the sync-to-async adapter thing, which is only ever explained as an additional object, but there is no way for user code to get ahold of the object.
SYG: I think, to clarify the remedy in this case: the question you had asked NRO is whether we can get rid of them and whether we can do so. Your preference would be, if you can get rid of them, to get rid of them; and by getting rid of them, you mean remove the spec from even constructing such promises, but keep the observable behaviour the same with other explanatory means? + +MM: That would satisfy my constraints, and I am not suggesting that we need to do that. If there is some other means by which the spec fiction promises can be distinguished by this PR, so that the spec promises’ rejected-or-not-rejected status is not reflected to the JavaScript user code, that would be satisfying. The mildest thing that would be satisfying—and I am not sure that I am happy with this, but I will suggest it, to put it on the table—is this: since it depends on the behaviour of the host hook whether to reflect the report back to JavaScript code, simply making the spec promise observable to the host is not yet in violation of the language invariant, and that would leave it to the host whether or not to violate it. The path of least resistance, if we just accept this PR without a non-normative note, is that hosts will reflect the spec fiction promises back to the user code, the way they reflect other promises back to user code. So if somehow we were able to make clear that we are advising hosts not to reflect these back to the user code, and to provide, in the host hook, enough information for hosts to make that decision, that would likely satisfy the concern that I have. + +CDA: Okay, a quick point of order: we are about 8 minutes past time for this topic. + +NRO: In this specific case the promise is provided to us by the host in the `Evaluate()` method of the module record, and so we don’t know if the promise is fictional or a promise that is supposed to be used in some other way; it may have been created just for this call.
+ +MM: I understand that technically, but in terms of what the practical status quo on the ground is, do we know of any host behaviors where these particular promises do get exposed to JavaScript user code, other than by the rejection tracking? + +NRO: I don’t know. + +MM: Okay, so let me say, I am in favor overall of the direction of this, but I do feel like, with us being out of time, I need to withhold consensus until we resolve this issue. + +NRO: Let me see if we can talk and come back to the meeting on the last day. + +MM: Okay. + +KG: MM, I cannot imagine any outcome here where the particular behaviour being changed isn’t part of whatever it is that you are looking for. So it seems like we have consensus on this particular change, even though there are other changes that you would like as well; the change here is a change to a particular piece of behaviour. + +MM: Okay, that is a very good point. The things I want to not be observable are already observable, just with the wrong tracking, and we are fixing how these inappropriately observable promises are tracked, rather than fixing whether they are observable—is that correct? + +KG: This is causing them to be tracked in a way that results in them not being observable in practice, even though in some sense they are actually observable to the host. + +MM: I am not worried about a malicious host; the issue is existing hosts, and hosts that follow the path of least resistance in implementing this once it is in the spec, causing inadvertent observability of these promises. And so, yes, we might agree to consensus on this theory, and I will withhold now for one final reason, which is simply that this objection was raised by MAH, who cannot be present at this moment but will be present during the plenary. So, KG, given your point: if MAH agrees, I am happy with consensus. + +CDA: We are at time and we need to move on, and SYG is on the queue.
+ +SYG: Thank you NRO for a very clear presentation on the problem; a lot of this machinery is messy, and this was extremely clear. Thank you. + +CDA: Yes, DE is asking about a follow-up topic, and yes, we can schedule a continuation for this. + +### Summary + +* When using some types of non-JavaScript modules that throw during evaluation, the current spec does not call the HostPromiseRejectionTracker hook to mark the promise returned by .Evaluate() as handled. +* The normative PR fixes it by explicitly calling the host hook. + +### Conclusion + +Explicit support from multiple TC39 members, including SYG. Blocked by MM due to a concern from MAH about spec-internal promises being exposed to user code through host hooks; a follow-on topic will continue the discussion later in the meeting. + +## Note about changed behavior of `Array.fromAsync` after landing #2600 + +Presenter: Kevin Gibbons (KG) + +* [proposal](https://github.com/tc39/proposal-array-from-async/issues/39#issuecomment-1526744932) +* (no slides) + +KG: Okay, so I mentioned during the updates that we have this very old PR ([#2600](https://github.com/tc39/ecma262/pull/2600)), and to recap what this PR does: when you have a `for await` loop that iterates over a synchronous iterator or iterable that yields a promise that rejects, the original behaviour was that the `for await` loop would treat that as the async iterable throwing, which is a violation of the iterator protocol—which is to say, the loop assumes the iterable has had a chance to do any cleanup that it needs to do before yielding such a promise. And this is not the case for sync iterables. And so, to ensure the synchronous iterator will have time to clean itself up, the change here was that now we close the iterator when it yields a rejected promise.
The wrapper which lifts the sync iterator to an async iterator checks if the sync iterator yields a rejected promise and closes the underlying iterator, on the assumption that the consuming `for await` loop will not close the iterator itself. + +KG: And that is a very good change. There is an invariant that we are supposed to close iterators 100% of the time when we are done with them, and this is a necessary change to achieve that. + +KG: So there is also an outstanding proposal, `Array.fromAsync`, which is Stage 3, and I do believe it has implementations in all browsers; it is basically a `for await` loop which collects values. And it in fact uses the same spec machinery as `for await` loops. So when we made this change to the machinery for `for await` loops, it affected the behavior of `Array.fromAsync` when consuming a sync iterator which yields rejected promises. + +KG: So this PR had the consequence of the behavior of `Array.fromAsync` changing. It’s not obvious from looking at the PR, because `Array.fromAsync` is not in the specification, and it is not obvious if you are looking at `Array.fromAsync`, because nothing changed in `Array.fromAsync` itself. But we changed a bit of the machinery `Array.fromAsync` was using, and the machinery was not in the same place as the thing that was using it, and so I wanted to put that on the agenda to call out the distinct change that happened, so no one is surprised. + +KG: I believe the champions are in the process of getting tests written for this behavior—I don’t know if there was a test for the old behavior—and it hopefully should be a straightforward change; in some engines, they might have been using the same machinery internally as well, and it might have gotten fixed automatically. But this is a heads up about this weird case where we made a number of changes to the machinery that the proposal was using, and that changed the proposal, and I don’t know if that has come up before.
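A minimal sketch of the behavior KG describes; the iterator here is hypothetical, and in engines that have landed #2600 its `return()` is called (closing the iterator) before the loop rethrows the rejection:

```javascript
// A sync iterable whose next() yields a rejected promise.
function makeLoggingIterable(log) {
  return {
    [Symbol.iterator]() {
      return {
        next() {
          log.push("next");
          return { value: Promise.reject(new Error("boom")), done: false };
        },
        return() {
          // After #2600, the async-from-sync wrapper closes the sync
          // iterator when an awaited value rejects, so this runs.
          log.push("return");
          return { done: true };
        },
      };
    },
  };
}

async function demo() {
  const log = [];
  try {
    for await (const _ of makeLoggingIterable(log)) { /* never reached */ }
  } catch (e) {
    log.push(e.message);
  }
  return log; // with #2600: ["next", "return", "boom"]; before: ["next", "boom"]
}
```

Since `Array.fromAsync` shares this machinery, passing the same iterable to `Array.fromAsync` should likewise close the iterator when the returned promise rejects.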
Yeah, that is all I had to say. + +JSC: Like KG said, there is a test pull request for test262 already open. This is a testable, observable change. V8 already should pass it, while other engines I tested do not yet pass it. Also, work on `Array.fromAsync` has resumed. Hopefully it will reach Stage 4 within the year. That is all. + +USA: That was it for the queue. KG, would you like to conclude? + +KG: There was no request for consensus, so that's all. + +USA: Yeah. All right, I guess that is it then. + +### Summary + +The committee is advised that landing [tc39/ecma262#2600](https://github.com/tc39/ecma262/pull/2600/) resulted in a change in the behavior of the widely implemented `Array.fromAsync` proposal, despite no changes in its spec text. Test262 tests have been updated at https://github.com/tc39/test262/pull/4450. + +### Conclusion + +This was just a notification to the committee; no consensus was needed. + +## `AsyncContext` Stage 2 Update + +Presenter: Andreu Botella (ABO) + +* [proposal](https://github.com/tc39/proposal-async-context) +* [slides](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/) + +ABO: So this is an update on `AsyncContext`, focusing on the use cases and on some updates to the web integration, after some negative feedback that we got from Mozilla. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_13#slide=id.g3484e1b5507_0_13) + +ABO: And first of all, on the use cases. When we were talking about the proposal previously, the things we were focusing on were: `AsyncContext` is a power-user feature meant for library authors (such as OpenTelemetry maintainers) and not so much for the majority of web developers. And one use case is enabling tracing in the browser, which is currently only possible in Node.js through AsyncLocalStorage, or in other runtimes that implement it, such as Deno or Bun.
With AsyncContext, this would be possible on the web as well. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_19#slide=id.g3484e1b5507_0_19) + +ABO: And all of that is correct. However, there are two clarifications on the use cases that we have not made as strongly as we are making now: + +* `AsyncContext` would be used by library authors to improve the user experience of library users. +* `AsyncContext` is incredibly useful in many front-end frameworks, regardless of the tracing use case. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_25#slide=id.g3484e1b5507_0_25) + +ABO: And so we have actually had some conversations with some frontend frameworks, and we are covering here some of the highlights. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_47#slide=id.g3484e1b5507_0_47) + +ABO: The current status in frameworks is that you have some things with confusing and hard-to-debug behaviour. For example with React, if you have an async function as the transition callback and you have an `await` inside of it, anything after the `await` gets lost and is not marked as a transition. And the React documentation says this is a limitation of JavaScript, and it’s waiting on `AsyncContext` to be implemented to fix this. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_70#slide=id.g3484e1b5507_0_70) + +ABO: Another thing some frameworks do to avoid this is to transpile all async code. This can be as simple as wrapping `await` with this `withAsyncContext` function, in the case of Vue. And that lets them deal with things, but you need to transpile everything across the entire code base, possibly including third-party code.
+ + [Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_1_1#slide=id.g304d6459cbf_1_1) + +ABO: So, about the use cases for certain frameworks: React has transitions and actions. If you have an `await` inside one of those, React would need to understand that it is a series of state changes that should be coordinated together into a single UI transition. The alternatives are having developers pass a context object through to every related API, which would be easy for them to forget, or transpiling everything, which for React would be invasive and a non-starter. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_1_16#slide=id.g304d6459cbf_1_16) + +ABO: In the case of Solid.js, they have a tracking scope and an ownership context. Since this is a signal-based framework, they use these to collect nested signals and handle automatic disposal of them. And if you have `await` in them, you will lose both contexts. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_1_32#slide=id.g304d6459cbf_1_32) + +ABO: For Svelte, on the server they have a `getRequestEvent` function to read information about the current request context. They’d also like to have a similar thing for client-side navigations, but that is currently impossible. Once again they could do this by transforming await expressions—again, transpilation—but they can only do that in certain contexts, which would lead to confusing discrepancies.
+ + [Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_1_44#slide=id.g304d6459cbf_1_44) + +ABO: In the case of Vue, there is an active component context which can be propagated across await, but it only works when you have a build step with Vue single-file components, and not in plain JS. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_79#slide=id.g3484e1b5507_0_79) + +ABO: If you have any use cases that are relevant to front-end frameworks and would like to share them, please jump on the queue. It would be good to share them to convince implementers that this is really useful and would be worth the complexity. + +CZW: I would like to highlight Bloomberg’s internal use cases. We have an internal application framework called R+, and we actually use a mechanism to instrument the internal engine so that we don’t need to transpile user code and can run multiple application bundles in a single JavaScript environment. We call this co-location, and it allows us to save resources and improve performance, given that we don’t have to create a bunch of new environments for each application bundle, and there is no RPC between them. + +CZW: In order to support co-location, we use this internal mechanism, which is similar to `AsyncContext`, to track callbacks and promises created by each application bundle, and we use this context information to associate app metadata. And this is crucial for improving our web application and developer experience, because developers don’t have to pass any of this application metadata around to support our co-location feature. So this feature is really important for Bloomberg’s use cases. + +SHS: Google uses a polyfill of this for interaction tracing and, secondarily, performance tracing.
It's critical for us because we use frameworks with a lot of loose coupling, so that there aren't a lot of direct function calls where you could expand the parameters to pass additional tracer data explicitly. Examples of this kind of loose coupling would be event listeners, signal handlers/effects, and RPC middleware. In all these cases there is no way to pass tracer data explicitly. Beyond that, we are hoping that once the proposal is further along, we have a number of other possible use cases, like cancellation: an ambient AbortSignal would be a really useful thing to have, but that is lower priority, and so we're less interested in taking quite as big a risk by using it while it is still experimental. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_98#slide=id.g3484e1b5507_0_98) + +ABO: Thank you for sharing your use cases; now I will give an update on the web integration. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_103#slide=id.g3484e1b5507_0_103) + +ABO: So the last time that we presented this in full was in Tokyo, and we gave a brief summary of the changes since then in December; but basically, one of the things that Mozilla highlighted for this proposal was that it increases the size of potential memory leaks. + +ABO: If you have this on the web, this code used to only keep alive the callback and any scopes it closes over. If there can be any click event, the callback is not a leak, and for the scopes it closes over, it is only a leak if it keeps alive things that are not used by the function. And I know that sometimes engines keep more things alive than they should for closed-over scopes, but that is a trade-off they make.
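A userland sketch of the leak concern; `MiniVariable` and `MiniSnapshot` below are illustrative stand-ins (not the proposal's full semantics) showing that a snapshot captured at registration time retains every variable's value for as long as the listener stays registered, even values the callback never reads:

```javascript
let current = new Map(); // the "current context": variable -> value

class MiniVariable {
  run(value, fn) {
    const prev = current;
    current = new Map(current).set(this, value);
    try { return fn(); } finally { current = prev; }
  }
  get() { return current.get(this); }
}

class MiniSnapshot {
  constructor() { this.saved = current; } // retains EVERY entry, used or not
}

// Registration-time capture, as in the Tokyo model of addEventListener:
// the snapshot pins `big` for the listener's whole lifetime, even though
// the callback never touches it.
const v = new MiniVariable();
const listeners = [];
const big = new Array(1_000_000).fill(0);
v.run(big, () => {
  listeners.push({ snapshot: new MiniSnapshot(), cb: () => "clicked" });
});
```

The dispatch-context model avoids this retention: the context flows from the code that triggers the event, so nothing needs to be held for the lifetime of the registration itself.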
+ +ABO: In the proposal as we presented it in Tokyo, `addEventListener` implicitly captures an `AsyncContext.Snapshot`, and a lot of the entries in the snapshot, a lot of those values, will not be used by the callback, even if the snapshot itself is used, so this could be a leak; or will be a leak, in most cases. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_119#slide=id.g3484e1b5507_0_119) + +ABO: And so the proposal has moved towards a model where the context always propagates from the place where the callback is triggered. So here you have a `click()` method on `HTMLElement` which causes a click event to be dispatched synchronously. And as part of that click, the context propagates from the `click` call to inside the callback, and it only stays alive while the event listeners are being dispatched, and that is it. + +ABO: If you have events that are dispatched async, like on an `XMLHttpRequest` object, when you call `send()` that context will be stored for the duration of the HTTP request, and when it fires the final event it can be released. So this is what we are calling the dispatch context. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_142#slide=id.g3484e1b5507_0_142) + +ABO: For some APIs there is no difference, and the callback is passed at the same time that the work starts which will eventually cause it to trigger. The simplest example is `setTimeout`: in the old mental model, you pass the callback into the web API and thus it captures the context. In the new mental model, `setTimeout` starts an async operation to wait and then call the callback, and it propagates the context through that async operation. The behavior is the same, and it’s just like that for any APIs that take a callback and schedule it to be called at a later point.
They will have the same behavior, so we can think of them with the new mental model. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g304d6459cbf_0_0#slide=id.g304d6459cbf_0_0) + +ABO: And for any API, the new behaviour should be what you would get if the API were internally implemented in JavaScript using promises and no manual context management. You could have an implementation of `setTimeout` that does a sleep and then calls the callback, and this would have the same behaviour. And if every API works like this, if we make all web platform APIs behave like most other APIs that developers will interact with, it will reduce the cognitive overhead of having to think about the context. + +[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_161#slide=id.g3484e1b5507_0_161) + +ABO: Now, in some cases execution of JavaScript code is not caused by other JavaScript code, and then there is no context. So if you have a user click that triggers the click listener, then there is no context, because the source of that event does not come from JavaScript but comes from the browser or the user. And this would be the case for events coming from outside the current agent. In this case, this JS code would run in the “root context”, with all variables set to their initial values: the same context an agent would have when it first starts. + +ABO: Now, there are some cases where you have regions of code—for example, on the server side, to track a particular request—and you want to identify the different regions of code. And if you have something like one of these events that run in the root context, you would lose track of which region you’re in. So because of this we have a scoped fallback mechanism to provide fallback values, which would be independent for each `AsyncContext.Variable`.
And you have an API that would set this for each `AsyncContext.Variable`, and it would store the value at that point and set it for any event listeners that are registered within that region. So the context would have all variables set to their initial values, except for the variables which have fallback values.

[Slide](https://docs.google.com/presentation/d/1YkSQIWxCQCLSe1WKFWpndEd-gdf9coE1HRYa_v_z5J0/edit?slide=id.g3484e1b5507_0_174#slide=id.g3484e1b5507_0_174)

ABO: And here you can read more details about the web integration or the memory aspects of the proposal.

SYG: A clarifying question: I did not quite understand how the new mental model keeps working for `setTimeout`. Maybe it helps if we go to the proposal slide (17). Could you explain: if the callback is no longer a thing that captures the `AsyncContext` at the point when `addEventListener` is called, something still has to propagate the original `AsyncContext`. How can the behaviour not be changed from the current mental model if the callback no longer captures the AsyncContext?

ABO: Because in the Ecma-262 part of the proposal, `await` propagates the context from before the `await` to after the `await`.

SYG: Oh I see. And my follow-up here—maybe just walk me through how this meaningfully reduces the time for which a context is kept alive, given the leak concern?

ABO: In the previous proposal for web integration, which we covered in Tokyo, calling `addEventListener` with the callback would store the context that was current when `addEventListener` was called, and that would stay alive forever unless you called `removeEventListener` with the same callback.

SYG: Does this apply to `setTimeout`? The click thing I understand, because you changed it to propagate the root context instead of the captured context—you just removed the capturing—but does this behaviour change `setTimeout`? 

ABO: For `setTimeout` there is no difference, but I have it here because we are describing this with a new mental model, and with this mental model the `setTimeout` behaviour is the same.

SYG: That makes sense.

DLM: First off, I would like to thank the champions for the work they have done in putting together this presentation and reaching out to people involved in frameworks. With that being said, we still have some concerns around the web integration; our concern is that it’s going to be a large amount of work to implement. The use cases are better stated now, but I don’t know if that has fully changed our calculation of whether the use cases justify what we see as a very large implementation. One thing I do see in the framework use cases is that people don’t necessarily seem to be looking for web integration of the APIs so much as the basic—not basic, but the more linguistic JavaScript functionality. I think we have represented our point of view, and I would like to hear, not necessarily in this meeting, whether other implementers share our concerns about the amount of work that might be involved in the web integration.

ABO: I think SHS’s was one example of a framework that did need the web integration, if I understood correctly?

NRO: Yes, so, thank you. DLM, is there any suggestion of how to change the web integration? Knowing what the right way would be would make it easier to adapt it the right way.

DLM: I don’t have a specific suggestion. Some work has been done to address our concerns about memory leaks, but I think there is an issue on the queue as well. It feels like we are going to have to change a very large number of APIs, with the two different potential contexts, and this would be a manual process—we would have to change a lot of it, yes. 
I don’t have a simple suggestion for how this would work.

SYG: That sounds like confirmation of the answer on the queue: a lot of the work is in the number of APIs whose implementations need to be made context-aware.

DLM: Yes, that is correct. At least in our initial analysis, it felt like a case-by-case basis—it was not just one place to change things. It feels like we would have to make changes not once per API, but in a number of different places depending on the type of API.

SHS: I don’t remember quite what you are thinking about in terms of web integration, but I will say that we do want to make sure that the context actually propagates coherently across both language built-ins and web APIs.

DE: Definitely a goal for the web integration design was to be consistent and hopefully mostly inferable from WebIDL. All of the stuff about falling back to the root context is a simplification versus previous versions, and is towards the “doing nothing” direction, away from trying to solve all of the things. I hope that Igalia can show this in a generic way, rather than implying per-API work. That work on generic framing has not been completed yet, but that is the direction. We have a principle that everyone can intuit the behavior. In writing the spec, it could be centralized in one or two places in WebIDL, and similarly for implementations. That is the goal, and I guess the presenters are being conservative now because it has not been totally proven out, but I understand it would be necessary to meet those criteria before this can be accepted. DLM, if the context were propagated in this regular way, would that make it acceptable? Is that the kind of thing you are looking for?

DLM: I am not sure if it can be done that way, but yes, I think that would address a lot of our concerns.

NRO: So it is not possible to do fully in general: it is possible for `setTimeout` but not for events. 
It can be done semi-automatically in specs, but not through something you can auto-generate with WebIDL.

DE: If events can be changed in one uniform way, then that still meets the criteria I am describing. So let’s think offline. This logical principle, one that developers and spec writers can both follow, is a positive step. I am looking forward to seeing how you propose to update the specs; I know this is something you have been working on.

SYG: DLM, to your earlier question about our position: in the beginning, Chrome shared a lot of the concerns about memory and about complexity—not just leaks, but also that you would need machinery to keep contexts alive, maybe a tree of contexts—the usual implementation concerns. Those concerns have not gone away, and we remain engaged on them, but we are currently positive despite them. The champions have collected testimonials, and it is true that each individual API change is small, but the frameworks, libraries, and products that are eager to adopt this ASAP have, between them, a pretty wide reach among users of the web. Because of that alone, I think it is worth being positive on it, granted the amount of work. I can’t say I am happy about that either, but I am happy with the way it is going, provided it does not hurt the primary goal. To reiterate, our position is positive, and the payoff here has, I think, been demonstrated to not be speculative—multiple people are on the record saying they will adopt this, which is relatively rare for the things we are pushing.

DLM: Thank you.

CDA: That is it for the queue.

ABO: Yup, so, this was basically it. This was the Stage 2 update, and you can read more details at these links. Thank you. 
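
The `setTimeout` equivalence ABO described—capture-at-registration versus propagation through an async wait—can be sketched in a few lines. This is an illustrative shim only, not the proposed `AsyncContext` API: a hand-rolled context variable plus a toy task queue standing in for the host's timer queue, so the example stays synchronous.

```javascript
// Illustrative shim (NOT the proposed AsyncContext API): one context
// variable with explicit snapshot/restore, plus a toy task queue that
// stands in for the host's timer queue.
class ContextVariable {
  #current;
  constructor({ defaultValue } = {}) { this.#current = defaultValue; }
  get() { return this.#current; }
  run(value, fn) {
    const prev = this.#current;
    this.#current = value;
    try { return fn(); } finally { this.#current = prev; }
  }
}

const requestId = new ContextVariable({ defaultValue: "root" });
const taskQueue = [];
const flushTasks = () => { while (taskQueue.length) taskQueue.shift()(); };

// Old mental model: the API captures the context when the callback is
// registered, and restores it when the callback later runs.
function timeoutCapturing(cb) {
  const snapshot = requestId.get();
  taskQueue.push(() => requestId.run(snapshot, cb));
}

// New mental model: the API is conceptually an async function that waits
// and then calls the callback, with the context flowing across the wait
// the way `await` carries it. For setTimeout the two models coincide.
function timeoutPropagating(cb) {
  const carried = requestId.get(); // what `await` would propagate
  taskQueue.push(() => requestId.run(carried, cb));
}

const seen = [];
requestId.run("req-42", () => {
  timeoutCapturing(() => seen.push(requestId.get()));
  timeoutPropagating(() => seen.push(requestId.get()));
});
flushTasks();
// Both callbacks observe "req-42", not "root".
```

Both functions end up doing the same thing, which is exactly ABO's point: for `setTimeout`-style APIs the two mental models are observably identical, and the difference only matters for APIs like event dispatch where registration and triggering happen at different times.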
+ +### Speaker's Summary of Key Points + +This presentation focused on two main updates, addressing part of Mozilla's negative feedback about the complexity of the proposal and lack of use cases: + +* feedback from frameworks, about their use cases and about their need for `AsyncContext` to improve the DX for their users +* some changes to the web integration to reduce the amount of snapshots that get captured and kept alive for too long + +### Conclusion + +Multiple frontend web frameworks are eagerly waiting for `AsyncContext` to ship in browsers, to enable async/await in developers’ codebases without breaking framework-level tracking. However, while the use cases have been found convincing, it's still not clear yet that they are worth the implementation cost required by the proposal’s web integration. Different browsers have opposite opinions about this tradeoff. + +## Temporal Update + +Presenter: Philip Chimento (PFC) + +* [proposal](https://github.com/tc39/proposal-temporal) +* [slides](https://ptomato.name/talks/tc39-2025-04/#1) + +PFC: One day early which is something you can calculate with Temporal! My name is Philip Chimento and I work at the TC39 member Igalia, and we are doing this work in partnership with Bloomberg. I brought the news last time that Temporal is shipping in Firefox and it is available in nightly builds now. There have been some open questions raised about how to coordinate specifically the behaviors that in the spec we call "locale-defined." We are making sure that those are sufficiently coordinated between implementations and TG2 is addressing those questions. We will continue to analyze the code coverage and answer any questions that implementations have. + +[Slide](https://ptomato.name/talks/tc39-2025-04/#3) + +PFC: I have this graph every time showing the percentage of test conformance for each implementation that has implemented Temporal. We added more tests since last time so the baseline goes down slightly. 
But it looks like GraalJS and Boa—particularly Boa—have made specific gains in conformance to the spec. Some of the bars have gone down by imperceptible amounts, but the graph looks on the whole fuller than it did last time. Obligatory note that this is not percentage done but percentage of tests passing.

[Slide](https://ptomato.name/talks/tc39-2025-04/#4)

PFC: I wanted to highlight some new information about the use of BigInts in the spec. Previously there were concerns about this, and I showed in a previous presentation that you do not need to use BigInts internally, but can use 75 integer bits and divide that however you like over 64- or 32-bit integers. I ran across an interesting paper recently, which is cited here, and did a quick proof of concept that represents epoch nanoseconds and time durations each as a pair of 64-bit floats. So you don’t have to deal with nonstandard-size integers. Just give me a shout if this is interesting to you for your implementation. There is a proof of concept in JavaScript using two JavaScript Numbers, and it does all the necessary calculations correctly, including the weird extra-precise floating-point division in `Temporal.Duration.prototype.total`.

DLM: I just wanted to say that we're planning to ship this in Firefox 139.

SYG: What is this locale dependence?

PFC: This one specifically is about the era codes that CLDR provides. I can link the issue if you want to read up on it.

SYG: I am wondering, given that all of Intl depends on locale data, what is special about this case?

PFC: Let me pull up the issue. So there are a couple of issues in the Intl Era and Month Code proposal, which is a separate proposal that we hope to present at the next meeting. One of the issues is where year zero starts in the eras of various calendars. Another one is the constraining behaviour for nonexistent leap months, which is calendar-dependent. 
These are things that CLDR does not necessarily define currently, and it should. So the issue is agreeing on the behaviour that CLDR should have, so that it gets reflected in the various internationalization libraries that will get pulled in by the implementations. ([tc39/proposal-intl-era-monthcode#32](https://github.com/tc39/proposal-intl-era-monthcode/issues/32), [tc39/proposal-intl-era-monthcode#30](https://github.com/tc39/proposal-intl-era-monthcode/issues/30), [tc39/proposal-intl-era-monthcode#27](https://github.com/tc39/proposal-intl-era-monthcode/issues/27), plus various bikeshedding threads about updating the era codes provided by CLDR)

SYG: I see, makes sense.

PFC: If there are no more questions, I think we can conclude and I will put a summary in the notes.

### Speaker's Summary of Key Points

* Firefox 139 will ship Temporal.
* Boa and GraalJS have substantially increased their conformance with the test suite.
* There's a proof of concept available for doing all the BigInt or mathematical value calculations in the spec, using a pair of JS Numbers.
* TG2 is discussing some locale-specific behaviour in the Intl Era and Month Codes proposal.

### Conclusion

Temporal is at Stage 3 and ready to ship.

## Composite Keys for Stage 1

Presenter: Ashley Claymore (ACE)

* [proposal](https://github.com/tc39/proposal-composites)
* [slides](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/)

ACE: So hi, I am Ashley. I am one of the Bloomberg delegates, and I am excited to actually be proposing something today. I have presented, I think, three times on this design space, never proposing anything, just trying to share my current thoughts and elicit feedback. And particularly, based on the feedback and the conversations we had in Seattle, I felt like the time had come for a proposal, and here we are. 
+ +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g334e668a325_0_0#slide=id.g334e668a325_0_0) + +ACE: So this follows on very much from the previous presentation I gave in February, and the ones before that. So I don’t want to recap too much stuff from those. I will do my best to make this accessible to as wide a group as possible, but I would encourage people to look at the previous slides if they feel like they do need more context. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g34a6082a0da_0_0#slide=id.g34a6082a0da_0_0) + +ACE: So I will be asking for Stage 1 and some people might think, "a lot of this is very similar to records and tuples, and that’s a Stage 2 proposal, so what is going on?". Separately from this session, I put on the agenda a request to withdraw the Records & Tuples proposal, and this current agenda item is for a new proposal that I see as a reimagining of a very similar problem space. And I think it’s significant enough of a reimagining that it just makes sense and it’s easier all around to start from the start as Stage 0, see if we want to do Stage 1. With a new kind of branding, even if we end up calling things records or tuples this is the best way process-wise, not only for us in the committee but the general JavaScript ecosystem to help everyone follow what is happening. + +ACE: So I don’t want to focus too much on Records & Tuples being withdrawn, I have a separate item on the agenda for that, which is currently set for tomorrow (note: it ended up happening later the same day). + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g2af82517ce6_0_26#slide=id.g2af82517ce6_0_26) + +ACE: This problem space I keep referring to, it’s about this situation you may find yourself in. You have got objects that represent the same data. 
Two positions, both representing the same coordinates. But when you put them in a Set, you still have two things in that Set. I am using Sets here because it’s easier to talk about, but it’s the same with Maps. Sets and Maps work great with strings and numbers, but when you have an object it really only works if the thing you care about is the object’s identity, not the data it represents. This is unlike other languages, where it's common to be able to override that behavior for objects.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_7#slide=id.g3479f757b84_0_7)

ACE: So what do JavaScript developers do today? There could be a library solving this, but what I see a lot of is: no need to reach for a library when we have `JSON.stringify`. So this gives people a seemingly really quick fix for this problem, because now I add my two positions to the set and the set is size 1. But I now have so many other problems that I am perhaps not even aware of, because I am copying how I see other code handle this and am just falling into the same trap as everyone else.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g2af82517ce6_0_39#slide=id.g2af82517ce6_0_39)

ACE: So `JSON.stringify` is impacted by key order. If you have two objects implementing the same interface but created in different areas of a codebase with different key orders, they stringify to different strings—it’s not safe. Also, some values will throw, BigInt for example. Other values can be lossy: `NaN` becomes `null`, and there are other examples of things losing information when they become a string. Also, not in all cases, but in lots of cases the string representation of something occupies a lot more memory. And at the end of the day, you have a string. So your sets and maps are now filled with strings. 
If you want to iterate over those and do something with it, then you want to go back the other way, turning the string back into an object. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g34a0f634362_0_0#slide=id.g34a0f634362_0_0) + +ACE: So this is all not great. And it’s a bit of a problem. CZW actually reached out to me after seeing these slides and said that they do exactly this in the OpenTelemetry package, and this is a snippet of it—they have this whole custom HashMap but I am just showing part of that code here. It uses `JSON.stringify` and stores two maps so you can do the reverse mapping. And you can see here, they have taken into account one level of sorting the keys. Because they know that these objects just have one level. So I am not just making this up. This is what people do today. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_17#slide=id.g3479f757b84_0_17) + +ACE: So what am I proposing? So I am going to propose something that maybe looks more like a solution. And that’s maybe wrong, why am I proposing a solution when we should, at Stage 1, be focussing on a problem? The reason I am proposing something that looks like a solution is, one, we have been talking about this problem space for, like, at least four years while I’ve been in the committee and I am sure it dates further back than that. So I think the thing that is really needed here is actually, what are we doing, especially as there have been other proposals in this space, so I think it’s important that this proposal is not only saying the problems, but how it’s intending to address them. Also, because even with the things I am going to propose, there’s plenty of design space to talk about—this is by no means a complete solution. It’s just the core of the idea. The names and API can change. 
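
The `JSON.stringify` pitfalls ACE listed earlier can be reproduced in a few lines (a quick editorial illustration, not taken from the slides):

```javascript
const a = { x: 1, y: 2 };
const b = { y: 2, x: 1 }; // same data, different key insertion order

// 1. Key order changes the string, so equal data produces different keys.
console.log(JSON.stringify(a) === JSON.stringify(b)); // false

// 2. Some values throw outright.
try {
  JSON.stringify({ n: 10n });
} catch (e) {
  console.log(e instanceof TypeError); // true: BigInt cannot be serialized
}

// 3. Others are silently lossy.
console.log(JSON.stringify({ v: NaN })); // '{"v":null}'

// 4. So a Set of stringified keys fails to deduplicate equal data.
const set = new Set([JSON.stringify(a), JSON.stringify(b)]);
console.log(set.size); // 2, even though a and b represent the same position
```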

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g336fbd25823_0_0#slide=id.g336fbd25823_0_0)

ACE: So what is that idea? The idea is a new thing in the language that I am calling “composites” for now. When I put one into a Set, the Set sees that the things I am putting in are composites and switches to the new behavior, where it treats these things as equal according to how composites are equal, which I will explain later. And now I only have one in the Set.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g337bd48536b_3_0#slide=id.g337bd48536b_3_0)

ACE: So what are these composites? These are objects, not new primitives being added to the language. Parts of this proposal are driven by feedback we got on Records & Tuples: not only the implementation complexity (hopefully you can see how this design has lower implementation complexity) but also the developer understanding of the language. There was concern about introducing new primitives on both sides, the developer experience and the implementer experience. So these are not new primitives. They are objects.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_33#slide=id.g3479f757b84_0_33)

ACE: And you always get back a new object from this thing. There’s no reliance on garbage collection and GC semantics to trick the sets into saying these things are equal.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_38#slide=id.g3479f757b84_0_38)

ACE: And they don’t modify the object; it isn’t like `Object.freeze`. The argument I am passing in is—MM gave me a useful word—we can see this as *coercing* the input to a composite, in a way. 
Or it’s taking the argument as a “template” for what the composite should contain. It’s not modifying the input to become a composite itself.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_44#slide=id.g3479f757b84_0_44)

ACE: Here I show that the function throws when called with `new`. Maybe this bit should change. But the way I was thinking of them, they’re not classes with a prototype. Instead, they are like a factory function. Maybe this is something we should discuss, maybe during Stage 1. But that is what I was thinking; it’s not like a class hierarchy.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_50#slide=id.g3479f757b84_0_50)

ACE: The argument you’d pass has to be an object.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_56#slide=id.g3479f757b84_0_56)

ACE: And the composite is frozen from birth. So you can never observe a composite in a mutable state. A composite is always frozen.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_62#slide=id.g3479f757b84_0_62)

ACE: And they’re not opaque. You can see the things that a composite holds as its constituents. So I have created a composite that has 'x' set to '1'. And then if I look at the keys on that composite, it has a key of 'x' and I can read that and get '1' back out. If you have a map or set with composites as keys, you can iterate over them and use them as data without having to do a reverse mapping.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_68#slide=id.g3479f757b84_0_68)

ACE: They’re generic, and by generic I mean they can store `T`, they can store any value. 
They’re not like Records and Tuples, which were primitives that could only contain more primitives. So here, you can put a Date object in, and then if I read that property back out I get the original reference to that object. It’s not deeply converting everything. It’s saying: here I have a property 'd', and that stores the reference to that Date. And I am also thinking you should be able to store negative zero, and maybe that’s another thing we should discuss, maybe during Stage 1.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_68#slide=id.g3479f757b84_0_68)

ACE: So yes, there’s two things. One, it’s not doing a deep conversion. Two, you can store any value in here. So that means these things aren’t necessarily deeply immutable, but they could be if everything you put in them is deeply immutable. They don’t give you that guarantee, but you certainly can construct deeply immutable data from them.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_242#slide=id.g3479f757b84_0_242)

ACE: There will be a way to check if an object is one of these special composites. If you created a proxy of one of these, it would be false. It’s not like `Array.isArray`, where you can check the proxy's target.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_156#slide=id.g3479f757b84_0_156)

ACE: So that is what they are on their own. I guess the thing that is more exciting about them is how they are equal to each other. So the simplest possible case…

DLM: There’s a clarifying question in the queue.

JLS: Just a question there. The properties passed in, are they also frozen deeply? So if I have an existing object, and it’s one of the properties I am passing in—I have an example there in the queue… in the question itself. 

ACE: There’s no deep conversion. In your example, if you create a composite with a property `foo` that is an object, that object is not modified or touched in any way. The composite only contains a reference to that original object.

JLS: Okay.

ACE: So the composite itself is frozen, but the things it references don’t necessarily need to be.

JLS: So the equality, then, that you spoke of using a composite in a Set—is that equality a deep equality? Or…

ACE: I will come on to that.

JLS: Okay. Thank you.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_162#slide=id.g3479f757b84_0_162)

ACE: So yes, a more interesting example is these two things. Both have an X and a Y, and the key order doesn’t matter. There is a choice in how that is achieved: does it just ignore the ordering when comparing, or does it try to sort the keys when it creates them? That gets us into a bunch of questions about symbol keys. So at the moment I am thinking it doesn’t sort the keys. So here I have two composites. If you ask the first one for its keys, it gives X and Y; the second one gives Y then X. But when you’re comparing them, that wouldn’t matter. There’s an issue on the proposal about whether we want to do something different. But in general, the goal, however we achieve it, is that you shouldn’t have to worry about key order. That’s one of the problems we are trying to solve.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_168#slide=id.g3479f757b84_0_168)

ACE: So the equality is symmetric. Checking if A equals B is no different from asking if B equals A. And one being a subset of the other is not enough: these aren’t equal, because one has extra keys that the other lacks.

ACE: So to JLS's question, "is it deep?". It is deep while the backbone it’s following is still a composite. 
So as it’s walking, every time it sees a composite, it keeps using recursion to check if they are equal. If you have two big trees made of composites then it’s doing deep comparison. But as soon as you have something that is a regular object then you are back to pointer equality. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_180#slide=id.g3479f757b84_0_180) + +ACE: So here, these are not equal. Because the composites are referencing two different objects. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_185#slide=id.g3479f757b84_0_185) + +ACE: Whereas this is equal because they’re both referring to the same object. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_190#slide=id.g3479f757b84_0_190) + +ACE: So what does that look like in pseudo-JavaScript code? `Composite.equals` starts with this base case of, SameValueZero. Though, again maybe this is something we should discuss in Stage 1. Maybe it shouldn’t be SameValueZero. The alternative here is we have SameValue. One of those two operations. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g34c13560fa8_0_28#slide=id.g34c13560fa8_0_28) + +ACE: Then if either one of the arguments isn’t a composite, then it’s not going to be equal. Otherwise, both arguments are composite, so let’s compare them using this secondary 'equalComposites' function. + +ACE: So we first get the keys of one. Compare to the keys of the other. They have to have the same set of keys. Otherwise, we return false. + +ACE: And then we loop through the keys and recurse back to the beginning - are the values of the two keys equal? + +ACE: The main thing I want to show here is that when you are comparing composites, you have lots of opportunities to return false early. 
The worst case for a comparison is when the two things are equal: that is when you have to get all the way to the end to be confident of it. Unless you’re literally comparing a composite to itself, in which case that is an immediate `return true`.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_80#slide=id.g3479f757b84_0_80)

ACE: So the really good things about this equality are all of these. It’s guaranteed to have no side effects. These things can’t be proxies. They don’t have any traps; asking for the keys and reading those values is always safe. The words I was looking for earlier: symmetric, reflexive. All of the things required to be well-behaved Map and Set keys.

[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_200#slide=id.g3479f757b84_0_200)

ACE: So where would this equality appear? It definitely appears if you did `Composite.equals`. And then the real key part of this idea is that it works out of the box for Maps and Sets. And then also the other places that currently use SameValueZero, which would be `Array.prototype.includes`; and if we do `includes`, it feels wrong not to also do `indexOf` and `lastIndexOf`. We wouldn’t be changing those for existing values—they would still use strict equality unless the thing you are passing as the argument is a composite.

ACE: So there’s no web-compatibility-breaking change to any of these things. The semantics are identical to the current semantics; it’s only when the argument is a composite that the new semantics are used. I guess that asterisk applies to all of them. Mainly, I am trying to say that for `indexOf` we are definitely not changing from strict equality when the arguments are anything else. 
So `NaN`s are still not in arrays according to `indexOf`, but a composite containing `NaN` would be. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g3479f757b84_0_208#slide=id.g3479f757b84_0_208) + +ACE: So they might also appear in future bits of spec which don’t exist yet, like MF’s proposal `Iterator.prototype.uniqueBy`, you can imagine this is the—here when you pass in the callback to say how things should be worked out if they are equal, then here, it can return a composite. So under the hood it is using a set-like thing to then filter out the duplicate values from the iterator. So there’s opportunity for this to appear in more places in the future. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g347a4abe357_0_1#slide=id.g347a4abe357_0_1) + +ACE: So equality is linear. But in some negative cases it will be faster. Internally the way people would need to implement these, and the way the example polyfill implements things, there is hashing under the hood. But it doesn’t expose that hash value in any way. When you are putting these things in a map and a set, it wouldn’t literally be scanning every composite and doing a fully linear scan. It would be doing, like, an initial hash lookup first, and then only needing to compare when there is a hash collision, when the composites are equal for example. And because these things are immutable from birth, there’s no way to create cycles in this equality. So you can kind of traverse them safely without needing to keep track of where you have already been. + +[Slide](https://docs.google.com/presentation/d/1n7lj_y02f4QjrTMvRGs3aZXt_zji5nSVYGhmVchMpvI/edit?slide=id.g2c6eebea946_0_97#slide=id.g2c6eebea946_0_97) + +ACE: So I have a bunch of bonus slides. But I will use them if the topic comes up. I would like to go to the queue. So MM is up first. + +MM: Yeah. 
So the fact that `isComposite` as well as `Composite.equals` operates on the argument instead of `this`, and that compositeness is not transparent through proxies, looks like a dangerous precedent if it’s allowed to go by without examination. So I want to take a moment and explain why that’s okay in this case, and why it’s not symptomatic of a more general principle that leads to a wider precedent.
+
+MM: First of all, the reason why it might not be okay is the risk of ruining practical membrane transparency. Indeed, I imagine this does actually ruin practical membrane transparency for existing membrane code—membranes built out of proxies—when faced with composites, where the membrane code did not know about the possibility of composites.
+
+MM: The reason why, going forward, this is repairable by the authors of the membrane code is that a viable way to restore practical transparency exists, due to the very passive nature of composites. None of these operations on composites trigger user code. The contents of a composite are simply those of an object with frozen data-only properties. And therefore a proxy participating in a membrane, when faced with a real target that is a composite, can simply produce another composite on the other side of the membrane, not a proxy on a composite, which would have to go through all of the same issues as creating a shadow target. If the original composite referred to X, then the composite on the other side of the membrane generally has to refer to a proxy for X, and vice versa. But because there’s no user code involved, that will restore practical membrane transparency. I also want to just remind everyone that there is the big issue that—I see somebody mentioned that. Okay. That’s all I had to say.
+
+ACE: Yeah. Thanks. I agree.
We shouldn’t naively see this as precedent for changing a general design constraint. This is an exception, not a new rule.
+
+MM: Good.
+
+LCA: Hey. Your slide on storing a Date object in a composite: many times those are objects that you want to compare by value, not by identity. So I was just wondering whether you had put any thought into how that could work? Like, do you think the approach here is that users should turn these values, like these Date objects, into strings before putting them into composites? Or do you imagine a toComposite method on these objects that would give you something to put in a composite, or anything else?
+
+ACE: Yeah. If we were starting Temporal in two years’ time and we already had composites, I think it would be really nice if Temporal objects were composite objects. Unfortunately, just the order in which the language has evolved means they aren’t.
+
+ACE: And I think at least for Temporal, the good thing there is that—in my understanding, please correct me if I am wrong—all Temporal values do have a canonical lossless string representation, especially now that we don’t have custom calendars. So yes, if you want to create a composite that has a start date and an end date, then to get the equality you probably want, you turn them into strings in that case. Or construct your own different type of composite specifically for Temporal types, where the constituents of the composite are the parts of the Temporal type, so it’s not flattened to a string. But yeah, because we can’t make all Temporal things composites, it’s my understanding that this doesn’t just work out of the box, unfortunately.
+
+PFC: I agree, it won’t work out of the box. But there are probably ways to accommodate this use case with special cases in the composite factory function.
It would be web compatible to have special cases there, because nothing has ever been passed to the composite function on the web before. You could, for example, say that if you pass a Temporal object to `Composite`, it will ignore expando properties and read the internal slots. I haven’t thought through the implications of the idea, but it’s an example of something we could consider in the realm of special cases to make these use cases work. I do see that people want to use Temporal objects as hash keys.
+
+ACE: Yeah. I think this problem already exists. It’s already the case that if I have a `Temporal.PlainMonthDay`, I can’t use it as a domain-specific Map key. So composites don’t introduce the issue, but perhaps only compound it, in that now if I have two of them, I also can’t compose them together, because even on their own their equality in a Map is that of object identity.
+
+ACE: I would like to move on to WH.
+
+WH: I just wanted to double-check that, no matter what you pass to `Composite.equals`, it will not run user code?
+
+ACE: Correct. Assuming a well-behaved implementation that isn’t going to do something the spec says it shouldn’t, there’s no user code. You can check if something is a composite, and it would only read the properties and interact with the object if the object is a composite. If the thing is a composite, then none of the operations used during equality checks can trigger user code.
+
+WH: Thank you.
+
+SYG: So we chatted about this with the V8 folks. The biggest piece of feedback we had was an alternative design which canonicalizes in the factory function: de-duplicate in the constructor function, using the equality semantics you have laid out, and return the canonical, de-duplicated object. With that, the performance profile is different. You have a different bottleneck, where the canonicalization is slow.
Because, for the same reason `equals` is linear in the worst case, in the worst case here you would have to check against this table. And because this is canonical with respect to everything that you might create, the domain you are comparing against is possibly larger. On the other hand, you get other very nice benefits: you can continue to just use === everywhere, because it’s a canonical copy of the object, and as an object, it’s just pointer identity. Nothing else needs to change. The comparisons are fast.
+
+SYG: This tradeoff makes sense if indeed it’s for keys; it stands to reason you are creating keys less often than you are checking them for equality. So what are your thoughts on that alternative design instead of the current one?
+
+ACE: Yeah. I guess one thing about that design, whenever we have discussed it in the past, is that one of the constituents has to be an object. You can’t create a pair of numbers, because if you return the canonical representation of that, its lifetime is infinite. What if you try to put that object in a WeakMap, a WeakRef, or a FinalizationRegistry? It has to live forever, because the canonicalization of two numbers has no expiry. I wouldn’t want to say you can’t create a pair of two numbers; that doesn’t feel great. It also moves all of the work to object creation, which was one of the concerns with Records & Tuples. Yes, the comparison is now cheaper, but if you are creating lots of these, you have to eagerly do all the work up front. Whereas maybe you are just checking `Composite.equals`, and if the very first two keys are different, then you don’t need to traverse the whole object and canonicalize them; you can see immediately they are not equal and stop working.
+
+ACE: I had assumed that this was off the table because of the discussions around Records & Tuples, since it has a lot of the same implementation complexity, minus introducing a new `typeof`.
But if that’s on the table, I’m certainly up for discussing it.
+
+SYG: I will respond to some of the points. The—remind me of the first response you said?
+
+ACE: …having an object constituent.
+
+SYG: The WeakRef thing is true. My response to that is, I think it would make sense in the eager canonicalization alternative that it would not be usable as a weak target, to avoid the same confusion as with `Symbol.for`. Even with composite keys as presented, that potential for surprise already exists. If people are using composite keys as, you know, a pair of two numbers, and they use that composite as a key in a WeakMap, it may surprise them that the entry may be collected out from under them.
+
+SYG: Right? If the mental model is just a composite key of two numbers, my intuition there is that whether we do canonicalization or the current proposal, there is potential for confusion. I am not sure how successfully we can communicate that it’s an object that looks like any other object. That goes into a small point I see somewhere in the queue about the `new` keyword, if I am leaning toward that. The reason the canonicalization feedback was bad for Records & Tuples on the V8 embedder side was that, with a different type, it’s not pay-as-you-go; if it’s a canonicalized ordinary object, it is pay-as-you-go: it’s just an object. And I want to dig into that some time later in the queue, if we have time.
+
+SYG: And the other issue Records & Tuples gave us was that the use cases were a lot broader than just composite keys. And when I chatted with people about composite keys as a use case, it seemed to be about relatively few, shallow object graphs, which are very different performance-wise from many objects with arbitrarily complex object graphs. I don’t think people are keying things on arbitrarily complex object graphs.
So if that is an assumption, a use case we’re designing for, it seems a lot less problematic to bottleneck all the expensive work in the constructor. Now, if you think it is still worthwhile solving for the many-objects, arbitrarily-complex-object-graph cases, then I have my doubts that this is still the composite key proposal, but there’s a longer conversation we can pick up later.
+
+DE: So I guess I have two questions for SYG. One question is: do you see this proposal as pay-as-you-go? Because it’s only hit in kind of this extra branch to make a comparison. Or is that extra branch considered more expensive? And also, I’m wondering how confident you are that people won’t want the allocation of these to be cheap. Is there a hope derived from the use case?
+
+SYG: We don’t see the first case as pay-as-you-go because of the extra branch—or rather, the combination of the extra branch across multiple data structures. And this becomes a thing that would be common in all data structures we design that check equality. We would need a protector here or something like that. For the second question—
+
+DE: In case you never use the feature?
+
+SYG: You never—
+
+DE: Yes.
+
+SYG: Yeah. The—yeah. Okay.
+
+SYG: So the second question: yes. I have no idea. I don’t want to say I am confident or not confident. I have no idea. If we believe people are reaching for this for composite keys, there is less concern about the key creation being cheap than about the lookups being cheap.
+
+ACE: Part of me doesn’t want to think of these things as only composite keys. That certainly is the primary reason for adding this to the language. But what I wouldn’t love happening is if you have to completely separate your data from the composite key, because then what ends up happening is every object has, like, a 'getComposite' or 'toComposite', which is annoying if the thing is a person, and the person has an inner company field, and the company is a composite.
So it gets deep. It’s easier in that case to use the composites as your data, so you don’t have to keep converting things to composites when you do want to put them in a Map as a key. So I do want the proposal to focus on the use case of keys for Maps, but I would like to keep in mind the potential of where the language could go over the next, you know, 10, 20 years. Maybe these things do become something that actually forms more of the way you model an application. I could see application development going there, but that would then necessitate the creation being cheap, because you wouldn’t necessarily pay off the cost with actual comparisons. So that’s where I am thinking about it. Yeah. I wasn’t expecting your comment; I will need time to think about it.
+
+MM [on queue]: I am uncomfortable with canonicalization.
+
+KG: I support Stage 1. This gives me everything I wanted from Records & Tuples. This is the use case I had for Records & Tuples. I am worried that this is not the use case for, like, half of the people who wanted Records & Tuples. There’s a lot of people wanting Records & Tuples for reasons I didn’t fully understand, wanting immutable objects but not liking `Object.freeze` or something like that. And I confess that I just never really figured out what people were excited about there. I am hesitant to completely dismiss all that. But like I said, it gives me everything I personally want. So I am supportive of this.
+
+ACE: Thanks, KG.
+
+CZW: Yeah. I want to echo that this also matches the OpenTelemetry use case. It does not only provide equality comparison, but also allows iterating the keys inside a map without keeping a reverse map to the original key object. So this is really helpful to us as well. And I support this for Stage 1.
+
+SYG: We touched on this a little bit.
But I want to reiterate here what KG was saying, given that so much of what we heard on Records & Tuples came from different camps: people who are really, really excited about immutable functional data, and people who were not clear on the performance implications, which is where a lot of the implementers were pushing back. I would be very uncomfortable if this API were designed to be flexible enough that people just use it for immutable functional data structures and then end up finding it’s not a good fit. So, thoughts on taking the use-case lane that this API says it’s going for and then sticking to it?
+
+ACE: Okay. Yeah.
+
+SYG: Your thoughts on that. You said both—
+
+ACE: Yeah. Like, the core thing I want is this kind of new capability. So I think the initial API can make that very clear, that when you are constructing these things, it’s to create a composite key. And for example, I am not, during Stage 1, going to sneak in "let’s add some syntax for this", because when you have nice syntax, we are saying as a committee: use these as your immutable data. While the API is more verbose (maybe we should make it `Composite.create` to be even more verbose), it’s not going to become the default data structure. But I think if we want to explore adding more immutable data structures to the language in future, composites seem like the perfect base for that. I want the door to be open. It would feel wrong if in the future we added immutable data structures and they weren’t composites. As per my previous presentations, composite keys need to be immutable; so if you then add immutable data structures, they might as well work as keys. So I hear what you are saying. It’s a difficult part of language design, and small changes in this could impact where they are used. It also impacts how they should work from a performance perspective. I want to discuss this more with you.
But maybe not when we only have 10 minutes left. I definitely hear what you are saying.
+
+DE: Is the reason you are concerned about this being a poor fit, SYG, pursuing the canonicalization alternative and making sure that is workable? Or are there other particular concerns you have if this gets overused?
+
+SYG: I think the canonicalization thing is a possible solution to the deeper performance tradeoff concerns. I can see very different implementation strategies for dealing with the shallow and few—like, if you believe composites are shallow and composites are few, that’s very different than if you expect most of them to be deep and have many composite pieces.
+
+SYG: I am not convinced that—
+
+DE: That’s the pattern I would expect.
+
+SYG: I would then just not use composites for the other cases, is my preference.
+
+DE: Right. But we are talking about application code, not your code. I am not sure what we could use to prevent this. Do you think we should be making this not transparent, like you can’t access what’s in it?
+
+SYG: It’s one such strategy, where the cost is upfront during creation. Therefore, it favors one kind of pattern. We want that pattern to be fast and to be the happy path. That is—
+
+DE: My question was, are there other things besides canonicalization that come to mind for you?
+
+SYG: Not at this point. No.
+
+DE: Okay.
+
+ACE: Yeah. I see there’s a reply. One thing I was imagining engines would possibly do is calculate the hash value of these things. The equality check is obviously still not as cheap as pointer equality, but in the cases where you don’t have a hash collision, hashing reduces the number of times you fall into that deeper comparison case. Yeah. JSC has a reply.
+
+JSC: It’s a reply to KG’s topic, which popped off the queue before I could finish my reply. Does anyone else have replies to SYG’s?
If not, I want to say to KG, who was wondering about the people wanting Records & Tuples for immutable data structures but not finding `Object.freeze` acceptable: I was one of the people who was eagerly waiting for Records & Tuples. I am a huge fan of efficient persistent immutable data structures, like you see in Scala, Clojure, and Immutable.js (which was inspired by them), all of which can quickly create new versions of data structures, with fast deep changes to inner keys, without any copying. `Object.freeze` would not address this, because creating changed versions requires deep copying. However, I accept that the engines say that adding persistent immutable data structures to the core language is not practical. So I’m also fine with just being able to use composite keys in Maps and Sets. That’s my view, as someone who was eagerly watching the old proposal for immutable data structures. Thanks.
+
+MF: You asked about key sorting during the presentation. I want to give some feedback on that. I think key sorting would be important if `Composite.equals` weren’t doing its own equality comparison; otherwise you could tell the difference between two composites even though they are equal. But because it’s `Composite.equals` and not `Object.is`, I don’t think key sorting is important. It’s not important that the keys also sort the same.
+
+ACE: Yeah. I agree.
+
+MF: Yeah. That’s my opinion on that.
+
+MF: The next one is on the base case of `Composite.equals`. You had SameValueZero in the slides; I am of the opinion that SameValue would be better here. With the comparison that we have in Maps and Sets today, you can’t actually tell: they do a normalization of -0 to 0. When you put in a -0, you get a 0, so it doesn’t matter whether they do SameValue or SameValueZero. But it would matter with composites, because when you put in a -0, you can observe it; you get a -0 back. Which means you should use SameValue to compare them.
This would also make it way easier to—if you have a Map that’s already doing SameValueZero, it’s very hard to get a SameValue Map. I have a library that does that. I don’t know if it would be significantly harder or just impossible to do that if composites were also doing SameValueZero. I would strongly support SameValue here.
+
+ACE: Thank you. A decent part of Stage 1 will unfortunately be talking about -0. I thought that -0 was a thing of my past; I can see it’s a thing of my future too. MM has a reply?
+
+MM: Yeah. The sorting of keys is a good open-and-shut case for why canonicalization is impossible if we admit anonymous symbols as property names. There is no possible canonical sorting of anonymous symbols, and if you canonicalize, then if you simply go with whichever one became canonical first, that’s history-dependent and opens a global communications channel. So I think you can’t have that.
+
+ACE: I was imagining them to be able to have symbol keys. If we do that, sorting is off the table.
+
+MM: Therefore, canonicalization is off the table?
+
+ACE: Yes, if we want them to have non-registered symbol keys.
+
+DLM: So, a point of order. We have 8 minutes left. We have heard pretty positive things so far, and I don’t think we have heard anything that would block asking for Stage 1, so it might be a good idea to do that shortly.
+
+ACE: Yeah. Yes. It’s a great suggestion. If we could do that now, then if we have any time left, we can pick a favorite topic from the queue.
+
+ACE: So I am asking for Stage 1 for this new composites proposal.
+
+WH: I support this.
+
+ACE: Thanks, WH.
+
+[In the queue] Explicit support from JH. CDA, SpiderMonkey team/DLM, MF, CZW, NRO, MM, SYG.
+
+ACE: Any objections?
+
+[silence]
+
+ACE: Thanks. That’s really great! I’ve been mulling over this for some time. I am really pleased. We have 3 minutes left, so we can still keep chatting a little bit more. But I am shaking with excitement right now!
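MF’s `-0` point above can be checked against today’s semantics: Map and Set normalize a `-0` key to `+0` on insertion, so SameValue vs SameValueZero is unobservable for plain keys, whereas a composite that preserves `-0` as a property value would make the choice observable. A runnable sketch of the current behavior, using a frozen plain object as a stand-in for a composite (the proposed `Composite` API does not exist yet, so this is illustrative only):

```javascript
// Today: Map.prototype.set normalizes a -0 key to +0, and lookup uses SameValueZero.
const m = new Map();
m.set(-0, "zero");
console.log(m.has(0));                       // true: -0 and +0 are SameValueZero-equal
console.log(Object.is([...m.keys()][0], 0)); // true: the stored key was normalized to +0

// A frozen object standing in for a composite preserves -0 as a property value,
// so the choice between SameValue and SameValueZero becomes observable:
const pseudoComposite = Object.freeze({ x: -0 });
console.log(Object.is(pseudoComposite.x, 0)); // false: SameValue distinguishes -0
console.log(pseudoComposite.x === 0);         // true: ===, like SameValueZero, does not
```

Whether `Composite.equals` would treat composites containing `-0` and `0` as equal is exactly the SameValueZero-vs-SameValue question left open for Stage 1.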
+
+EAO: Could you speak briefly about why Composite is not a class?
+
+ACE: It was initially because, as I gradually evolved my thinking from the Records & Tuples proposal to this, I was still thinking of records and tuples. I was thinking that there would just be this one factory, and it would switch its behavior based on what you passed in: if you passed in a plain object, you would get back something like a record; if you passed in an array, you would get back something like a tuple. That wouldn’t make sense as a class, with the prototype of the composite changing. But after a bunch of conversations since then about this whole space (should there be tuple-like composites, for when you literally just have a list of things and giving names doesn’t make sense? or composites with a prototype, so you can have methods and things?), it makes more sense to me to ask whether it should actually be `new Composite`. And this loops back to SYG’s point: if you can do `new Composite`, that means you can do, like, `class Position extends Composite`, and with `Reflect.construct` and NewTarget the prototype is now my `Position.prototype`. Some people think that’s cool; some people also think that’s not going to be good. And I think this is going to be one of the main things to talk about. One, -0. And two, how we should design this API to encourage the behaviors we want as a committee in the way these are used. Should it be really thought about as this particular use case, or should they be used as a general data model? And `new Composite` should be a part of that conversation.
+
+MAH: On the previous topic, I thought there was a comment from WH on the queue and I would like to hear it. I don’t know why it disappeared in the shuffling. Or maybe it was removed?
+
+DLM: No, that was my fault, sorry.
+
+WH: `Composite.equals` is a replacement for SameValueZero.
If we switched the semantics to `SameValue`, it would break Map and Set semantics.
+
+MAH: I think the idea here would be to use SameValue to compare the composites themselves, maybe not at the top level. If there is a concern that a plain -0 should still be SameValueZero-equal to 0, we can keep that; but once you put a -0 inside a composite, you don’t need to keep that rule.
+
+WH: That would confuse users. We talked about this extensively in Records & Tuples.
+
+ACE: I felt this topic was behind us because of the 350-comment thread on Records & Tuples, but I think I will end up doing a slide deck on this particular topic, because it sounds like there is a variety of opinions amongst the committee.
+
+MM: So the reason why I think it needs to be called `Composite`, not `new Composite`, is that if I saw `new Composite`, I would expect it to give me something fresh even if the input was already a composite. But if it is a plain function, the expectation is that it acts as a coercer: if you feed it the kind of thing that it produces, it will return that thing directly, without creating a fresh wrapper.
+
+ACE: Yeah, though then we maybe lose the ability to do that kind of optimization.
+
+DLM: Okay, I think we will stop the conversation there, as we are almost out of time. Congratulations, ACE!
+
+### Speaker's Summary of Key Points
+
+* The problem of working with composite data in Maps and Sets was presented
+* A proposal for adding a special object type that is compared structurally when used in Maps, Sets and some other APIs
+* There was discussion on whether this helps with existing types such as Temporal, which the initial proposal does not
+* There was discussion on an alternative design which eagerly interns the objects, instead of introducing new logic into existing equality APIs
+* There was discussion on SameValueZero vs SameValue
+
+### Conclusion
+
+* Consensus for Stage 1 was achieved
+* Discussion about canonicalization and handling of negative zero will continue as part of Stage 1
+
+## Immutable ArrayBuffer for Stage 3
+
+Presenter: Mark Miller (MM), Peter Hoddie (PHE)
+
+* [proposal](https://github.com/tc39/proposal-immutable-arraybuffer)
+* [keynote slides](https://github.com/tc39/proposal-immutable-arraybuffer/blob/main/immu-arraybuffer-talks/immu-arrayBuffers-stage3.key)
+* [pdf slides](https://github.com/tc39/proposal-immutable-arraybuffer/blob/main/immu-arraybuffer-talks/immu-arrayBuffers-stage3.pdf)
+* [recorded presentation](TODO: link)
+
+MM: I would like to ask everyone’s permission to do my normal thing and record the presentation, including questions asked during the presentation. And then I will turn the recording off when we get into the explicit Q&A section.
+
+USA: Let’s wait maybe a few seconds. Seems like nobody has objected.
+
+MM: Great. Thank you.
+
+MM: In the last several meetings we have, with a little bit of effort, proceeded quickly through Stage 1, Stage 2, and Stage 2.7, and today I would like to ask the committee for Stage 3.
+
+MM: I think it is a simple enough proposal that I don’t need to recap, but if anybody wants to ask questions about the content of the proposal, or for clarification, please do, so that people can understand where we are. This is the checklist based on the Stage 3 criteria. We have written test262 tests and submitted them as a PR, but we have not yet gotten reviews on that, and therefore, of course, we have not yet merged it.
And implementer feedback: we would like implementer feedback from the high-speed engines. The XS engine has already implemented the proposal and run the test262 tests, and all of their feedback is good; we have not yet received feedback from other implementations.
+
+MM: So there are two things listed as normative issues. One was to document the permanent bidirectional stability of immutable ArrayBuffer contents: immutable meaning not just read-only, but a bilateral guarantee that not only can you not mutate it, but what you are seeing will be permanently stable.
+
+MM: So RGN added that text, documenting the permanent bidirectional stability, after the feedback from last time. The remaining thing, which we have not checked off yet, is that we purposely did not declare the order of operations resolved, because we wanted to find out whether there were any implementation concerns. For example, sometimes a difference in order of operations which is observable, but does not matter at all to the JavaScript programmer, would enable a simpler implementation. We have not received any such feedback, and by the explanation of the purpose of the stages, we don’t have to get that feedback before 2.7 if we are willing to accept this as the normative spec until we get feedback to the contrary. The spec that we have adopted was a reaction to the previous feedback we got, which was that `sliceToImmutable`, the only case where this arises, should be literally as close as possible to `slice`, so we added this exception over here to keep these things as close as possible, including both the order of operations and whether they throw or do nothing. Basically, this matches how all the engines implement `slice` right now, and I have not heard any complaints about `slice` doing anything inefficiently. So I would like the committee to approve this as the one spec in Stage 3, still of course subject to Stage 3 implementer feedback.
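The "bilateral stability" guarantee described above is what distinguishes an immutable ArrayBuffer from a merely read-only view: with today’s APIs, the only way to get contents guaranteed not to change is to copy them, for example with `slice`. A runnable illustration of that status quo (this sketches only the motivation with existing APIs; the proposed `transferToImmutable`/`sliceToImmutable` would provide the stability guarantee without each consumer making its own defensive copy):

```javascript
const buf = new ArrayBuffer(4);
const writer = new Uint8Array(buf);
writer[0] = 1;

// A typed-array view merely aliases the mutable buffer: it observes later writes.
const aliasingView = new Uint8Array(buf);
// slice() copies, so the result is stable, at the cost of an allocation per consumer.
const snapshot = buf.slice(0);

writer[0] = 2;
console.log(aliasingView[0]);             // 2: the view saw the mutation
console.log(new Uint8Array(snapshot)[0]); // 1: the copied snapshot is stable
```

An immutable buffer gives the stability of the snapshot with the zero-copy sharing of the view, which is why the proposal pins down `sliceToImmutable` to behave as closely as possible to `slice`.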
+
+MM: This is the PR against test262, written by PHE, co-champion and part of Moddable’s XS team. And this is the actual formal description of Stage 3 and the purpose it serves. So, any questions? And may I have Stage 3?
+
+MM: Now I will stop recording.
+
+NRO: So, I would prefer that we wait—
+
+MM: Can you repeat that?
+
+NRO: I would prefer if, before moving to Stage 3, we would wait for the tests to be merged. The reason I am saying this is that there are two proposals in Stage 3 with tests pending to be merged: the decorators and the using declarations proposals. In both cases implementers were confident about the coverage because the proposals were Stage 3, and in both cases there were bugs that would have been caught by tests that had not yet been merged. It does not need to happen during plenary, but if you [INDISCERNIBLE]. It will be done automatically when it is merged. I would prefer to wait, and I am comfortable asking this because, as of a few weeks ago, there has not been much material for such reviewing.
+
+MM: So can I ask the people involved in test262, those with committer status in test262: please do review it; I am eager for your feedback. And is there anybody who thinks they might actually get to do that before this plenary is over?
+
+MM: Okay. Once we proceed on the test262 tests to the point where they’re merged, we’d come back and ask for Stage 3. And it is a valid objection; that is why I brought it up in the presentation.
+
+USA: There is a question about process by JSL?
+
+JSL: Yeah, the question is straightforward: can we agree to advance automatically once the tests are merged? If we have consensus and the tests are the only reason to withhold, could that advancement be automatic?
+
+USA: I can help answer: we can get conditional consensus on Stage 3, such that the proposal advances to Stage 3 once the tests are merged.
+
+MM: I would certainly like to have that conditional approval, sure.
+
+SYG: The significant thing is that for Stage 3 the tests should be merged; whether that is on trunk or in staging, they need to be executable. And as for conditional Stage 3 on merging the tests, I guess that’s okay if this is the only thing. In general I would like to minimize the number of conditional advancements, because that just increases the likelihood of things falling through the cracks, and we can come back, since this is not in a particular hurry.
+
+MM: I am happy to come back as well. I don’t think postponing it for one meeting will materially affect anything. SYG, let me ask you: has there been any exploratory implementation work on this proposal at Google?
+
+SYG: No, and we will not look at it until it reaches Stage 3.
+
+MM: I see. That is the reason why it would be nice to get to Stage 3 earlier than next meeting, but it is not a big deal.
+
+SYG: Without the tests, even if it is conditional Stage 3, we can’t see if it works.
+
+USA: All right, we have a reply by MLS?
+
+MLS: JavaScriptCore won't start looking at it until Stage 3 as well.
+
+DE: How complete are the tests that are out for review? I think it’s important that we have some tests merged, but are they complete enough for Stage 3?
+
+MM: I cannot speak to that myself. I certainly recognize the importance of the question; I just don’t know.
+
+RGN: I can speak to it. I am a test262 maintainer, but I am not putting formal approval on the tests; I did review them, and I think they are complete enough for Stage 3. Follow-ups are expected for addressing the order-of-operations cases with respect to error handling, but that is common even in mature tests. There is coverage for `transferToImmutable` but not yet for `sliceToImmutable`, which will be largely analogous. We could push for inclusion of that in this pull request, but either way it will come during Stage 3.
+
+DE: I don’t have an opinion on whether those are in this pull request or in a separate pull request, but before getting to Stage 3, we should complete all of those follow-up items and not have any known gaps. Just because we have existing coverage gaps overall does not mean that proposals can reach Stage 3 with known coverage gaps.
+
+RGN: In this case I think it is fine, because those are the very things for which implementer feedback is requested.
+
+DE: Right, so the tests will be helpful to get that feedback.
+
+MM: So, I think all of these lines are pointing to bringing it back for Stage 3 next meeting, which I am happy to do. And what RGN says does raise an issue for committee feedback. RGN is both a test262 committer and a co-champion of the proposal; he did not write the tests (PHE wrote them) but reviewed them. Is there any problem with RGN reviewing this as an official test262 committer despite the fact that he is a champion? I don’t know—
+
+DE: As a non-maintainer of test262, I think that’s fine, for anyone prepared to do an intellectually honest job. RGN just volunteered points for further work, which is a great demonstration of that honesty.
+
+MM: Good, awesome.
+
+DE: Unless anyone else has opinions here?
+
+PFC: In addition to the point I wanted to make, I will also answer the immediate question, which is: I think it's fine for RGN to review that. Having the specific test262 reviewer not be a champion is not a standard that we have required for anything else.
+
+PFC: I wanted to take the opportunity to make a point about how to facilitate test262 reviews, not just for this proposal specifically but in general. We have some documentation about testing plans that my colleague IOA wrote, which we will hopefully merge soon.
I recommend to all proposal champions: before you write the tests, open an issue with a testing plan, because that will help us as reviewers to get a sense of how complete the coverage is without having to dive into every corner of every proposal, because that is the thing that really takes the most time when we are reviewing. And also, once you have a testing plan with a checklist, that will make it easier to open multiple smaller pull requests rather than one large one, and that helps us, because currently we have a lot of maintainers with limited time for reviews. So if the choice is between reviewing three small PRs or 20% of a large one, I think people will naturally want to review the smaller pull requests. Having them be small, and marking them as done in the testing plan as they get merged, helps us get around to things faster and merge them faster.
+
+NRO: I think RGN should be allowed to review the request, and it is better that champions of a proposal review the tests; having an approval from RGN is better than just having an approval from PFC, because RGN has more context on the proposal.
+
+DMM: +1 on more smaller PRs for the tests rather than one giant one. The current PR is almost 1K lines and we will miss stuff in trying to review that.
+
+MM: Okay, I will communicate to PHE offline.
+
+USA: That was the queue, MM. Would you like to ask for consensus?
+
+MM: There is no consensus to ask for; I think what we settled on is that I will come back next meeting and ask for Stage 3, assuming that we get the test262 tests merged. And I will suggest to PHE that the tests be divided into smaller PRs.
+
+### Speaker's Summary of Key Points
+
+* Immutable ArrayBuffer was presented for Stage 3. test262 tests have been written and submitted as a PR, but reviews and feedback about the tests are still pending, so Stage 3 was deferred.
+* There was a discussion about facilitating the test262 review process by opening an issue with testing plans first and then submitting smaller pull requests to make reviews more manageable.
+
+### Conclusion
+
+The proposal will be brought for Stage 3 at a future meeting, once tests are landed and known coverage gaps are filled. For now, it remains at Stage 2.7.
+
+## Upsert for Stage 2.7
+
+Presenter: Daniel Minor (DLM)
+
+* [proposal](https://github.com/tc39/proposal-upsert)
+* [slides](https://docs.google.com/presentation/d/1Mfc7jl2Rbe8K8LCJWtjNZS94tQpzgvQIBfrq2e_iRcU/)
+
+DLM: So it has been a little bit since I have talked about upsert, and I am presenting it for Stage 2.7.
+
+DLM: … you are using a map and you want to do an update, but you are not sure if there is already a value associated with your key or not. What people do today is roughly along the lines of the example code snippet: see if the map has the key; if it is there, you are going to do one thing, and if it is not there, you are going to do another.
+
+DLM: The proposed solution is to add two methods to Map and WeakMap. One is `getOrInsert`, which will take a key and a value, and search for the key in the map; if it is found, it will return the value associated with the key, and otherwise it will insert the value into the map and return that. And there is a computed variant, which takes a callback function. We discussed this last time, where we decided that we cannot prevent the callback from modifying the map, but we will insert the computed value and keep the modifications that it made.
+
+DLM: Last time there was one outstanding issue, issue #60, which was a discussion about naming: the suggestion was `getOrSet`, versus `getOrInsert`; an insert you will do once, while a set can be done multiple times. We resolved that issue with the idea of continuing to use `getOrInsert` and `getOrInsertComputed`.
+
+DLM: And that was the last issue.
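The behavior described for the two methods can be sketched as standalone functions. This is a rough polyfill-style sketch, not the proposal's spec text: the proposal adds these as methods on `Map.prototype` and `WeakMap.prototype`, and the overwrite behavior when the callback mutates the map is assumed from the discussion above.

```javascript
// Rough sketch of the proposed semantics, written as standalone functions
// (the actual proposal specifies Map.prototype/WeakMap.prototype methods).
function getOrInsert(map, key, value) {
  if (map.has(key)) return map.get(key);
  map.set(key, value);
  return value;
}

function getOrInsertComputed(map, key, callbackfn) {
  if (map.has(key)) return map.get(key);
  const value = callbackfn(key);
  // The callback may have mutated the map; per the discussion above, the
  // computed value is still inserted for this key.
  map.set(key, value);
  return value;
}
```

For example, building a multimap becomes `getOrInsertComputed(index, key, () => []).push(item)` instead of the usual has/get/set dance.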
So I would like to ask for consensus for Stage 2.7.
+
+MF [on queue]: +1 support for 2.7
+
+DMM [on queue]: +1 support for 2.7. EOM.
+
+USA: Anyone oppose?
+
+USA: Congratulations, you have Stage 2.7.
+
+DLM: Thank you very much. I would like to thank everyone that helped out, especially my Stage 2.7 reviewers.
+
+USA: You are pretty swift. That was less than five minutes.
+
+### Speaker's Summary of Key Points
+
+The last remaining item prior to Stage 2.7 was the issue about what to call the two new methods on Map and WeakMap, which has been resolved to use `getOrInsert` and `getOrInsertComputed`. Consensus was asked for and reached for Stage 2.7.
+
+### Conclusion
+
+The upsert proposal advanced to Stage 2.7.
+
+## Withdrawing Records & Tuples
+
+Presenter: Ashley Claymore (ACE)
+
+* [proposal](https://github.com/tc39/proposal-record-tuple)
+* [slides](https://docs.google.com/presentation/d/1afxyqJthBWsOpBvmPFP-VOhT8KyVF_AQlXLj0nkY6v4/)
+
+ACE: All right. As I mentioned earlier, the Records & Tuples proposal has a new reimagining, 'Composites', which is now Stage 1. While Composites is looking at a similar design space, the real core of this proposal was adding these new primitives, which is fundamentally different. This slide is a nice quick little montage of the previous times we spent talking about Records & Tuples, and there were a lot of other talks outside of plenary too. But it is clear, at least to everyone in plenary, and to a decent amount of the community out in the ecosystem too, that Records & Tuples is not going to be progressing any time soon. Adding these new primitives did not find a way to move forwards, and we have Composites as a new way of looking at this problem space. So I am proposing that we withdraw the Records & Tuples proposal.
+
+NRO [on queue]: RIP R&T, you'll be missed. I support withdrawing. EOM
+
+MM: Just taking the opportunity: I think Composites is ready for Stage 2. The question is if anything is outstanding?
+
+JHD: There is no spec.
+
+MM: Oh.
+
+JHD: We cannot go for Stage 2 today even if they asked for it.
+
+MM: Oh, I did not notice that, okay, thank you.
+
+USA: Okay, I suppose that is not on the table then?
+
+EAO: I like the composite approach. I don’t really like “composite” as a name for it. It does not really feel like it means anything, and in my head it is hard to remember if it’s “composite” or “composable” or something similar like that. And since we have cleared out the “record” space, that could be one direction in which to go here. But I would like to note that there is also another direction available here, of leaning into the use case as presented for composite keys, which could be better than “composite”: to use “key” as the term here. The decision here ought to clarify whether we are going more for “this is the thing you use as a key” or “this is the thing you use as a generic immutable thing”. Being in this middle ground and using a weird word for it I think is awkward, and we need to pick one of these directions. And the primary way of doing this is by bikeshedding on what the name of this thing is.
+
+ACE: Yes, definitely. I can't remember exactly where the name composite came from. I think it reemerged when we were in Seattle. Maybe from DE. The proposal can end up with a different name; the only name I think we should not use is “Record”, because I think it just has too much precedent in TypeScript and we cannot ignore that fact about the ecosystem. So I would not be keen on the word “Record”, but I would be keen to chat about other names. And just because it is called "proposal-composites", the API name does not need to be composite.
+
+EAO: Do you have any initial thoughts on “key” as the name here?
+
+ACE: I think 'key' is part of that conversation of which way we want to push this. Do we push it where it firmly sits where you use these things as keys, or use these things as part of your data model?
If we really want to push people in the key direction, then yes, calling them keys would be the way to do that. But first we need to decide which direction we want to push it in.
+
+EAO: Haven’t we kind of done that already, by agreeing to the use cases and needs that you have presented here, which are quite explicitly about this being a composite key sort of a thing? And maybe this goes a little meta, but if we go for something way more generic, like records and tuples, that should be something we explicitly agree on, because that would effectively change the use cases that the proposal is aiming for.
+
+ACE: Yeah, as I said, this is still something we should discuss. I think that most of the value we get at the beginning is a proposal that is focused on the composite-key case. But I really want us to think long-term as well, so that the committee that sits around a future version of this call isn’t annoyed at the decisions that we make now. And I want to really make sure that we are thinking about some what-if scenarios. Of course we cannot predict the future, and we can’t over-invest in coming up with something perfect that can fit all possible futures; it's not possible. But I want us to take a moment to pause and think a little bit about it, and not get too focused on just this one case and then end up missing something that we end up regretting. I think there is a conversation to be had there. And, as I said earlier, I think it would be a shame if people have to keep converting their data into these things, and one way of avoiding that is if you can use these things more generically. But I can see why some people think that we should not go in that direction.
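For context on the use case being debated: today, Map compares object keys by identity, so structurally equal keys miss. A small illustration (the `JSON.stringify` workaround shown is a common ad hoc substitute, not part of the proposal):

```javascript
// Map keys are compared by identity, so a structurally equal array misses:
const points = new Map();
points.set([1, 2], 'origin-ish');
console.log(points.get([1, 2])); // undefined: a fresh array is a different key

// A common workaround is stringified keys, which is lossy and easy to get wrong:
const points2 = new Map();
points2.set(JSON.stringify([1, 2]), 'origin-ish');
console.log(points2.get(JSON.stringify([1, 2]))); // 'origin-ish'
```

Composites would aim to give this structural lookup directly, without string round-trips.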
+
+USA: So yeah, congratulations, so to say, to ACE on consensus, however bittersweet this might be, and we look forward to composites.
+
+### Speaker's Summary of Key Points
+
+Following on from the Composites proposal achieving Stage 1, and the Records & Tuples proposal not managing to gain further consensus for adding new primitives, it was proposed that the Records & Tuples proposal be withdrawn.
+
+### Conclusion
+
+The Records & Tuples proposal has been withdrawn.
diff --git a/meetings/2025-04/april-15.md b/meetings/2025-04/april-15.md
new file mode 100644
index 0000000..2b7758b
--- /dev/null
+++ b/meetings/2025-04/april-15.md
@@ -0,0 +1,974 @@
+# 107th TC39 Meeting
+
+Day Two—15 April 2025
+
+## Attendees
+
+| Name | Abbreviation | Organization |
+|------------------------|--------------|--------------------|
+| Waldemar Horwat | WH | Invited Expert |
+| Daniel Ehrenberg | DE | Bloomberg |
+| Samina Husain | SHN | Ecma International |
+| Josh Goldberg | JKG | Invited Expert |
+| Daniel Minor | DLM | Mozilla |
+| Chris de Almeida | CDA | IBM |
+| Jesse Alama | JMN | Igalia |
+| Michael Saboff | MLS | Apple |
+| Aki Rose Braun | AKI | Ecma International |
+| Dmitry Makhnev | DJM | JetBrains |
+| Bradford C. Smith | BSH | Google |
+| Ron Buckton | RBN | Microsoft |
+| Eemeli Aro | EAO | Mozilla |
+| J. S. Choi | JSC | Invited Expert |
+| Istvan Sebestyen | IS | Ecma International |
+| Ben Lickly | BLY | Google |
+| Philip Chimento | PFC | Igalia |
+| Richard Gibson | RGN | Agoric |
+| Jonathan Kuperman | JKP | Bloomberg |
+| Mark Miller | MM | Agoric |
+| Gus Caplan | GCL | Deno Land Inc |
+| Zbigniew Tenerowicz | ZBV | Consensys |
+| Mikhail Barash | MBH | Univ. 
of Bergen |
+| Ruben Bridgewater | | Invited Expert |
+| Ashley Claymore | ACE | Bloomberg |
+| Luca Forstner | LFR | Sentry.io |
+| Ulises Gascon | UGN | Open JS |
+| Matthew Gaudet | MAG | Mozilla |
+| Kevin Gibbons | KG | F5 |
+| Shu-yu Guo | SYG | Google |
+| Jordan Harband | JHD | HeroDevs |
+| John Hax | JHX | Invited Expert |
+| Stephen Hicks | | Google |
+| Peter Hoddie | PHE | Moddable Inc |
+| Mathieu Hofman | MAH | Agoric |
+| Peter Klecha | PKA | Bloomberg |
+| Tom Kopp | TKP | Zalari GmbH |
+| Kris Kowal | KKL | Agoric |
+| Veniamin Krol | | JetBrains |
+| Rezvan Mahdavi Hezaveh | RMH | Google |
+| Erik Marks | REK | Consensys |
+| Chip Morningstar | CM | Consensys |
+| Justin Ridgewell | JRL | Google |
+| Daniel Rosenwasser | DRR | Microsoft |
+| Ujjwal Sharma | USA | Igalia |
+| Jacob Smith | JSH | Open JS |
+| Jack Works | JWK | Sujitech |
+| Chengzhong Wu | CZW | Bloomberg |
+| Andreas Woess | AWO | Oracle |
+| Romulo Cintra | RCA | Igalia |
+
+## Don't Remember Panicking Stage 1 Update
+
+Presenter: Mark Miller (MM)
+
+* [proposal](https://github.com/tc39/proposal-oom-fails-fast/tree/master)
+* [slides](https://github.com/tc39/proposal-oom-fails-fast/blob/master/panic-talks/dont-remember-panicking.pdf)
+
+MM: So last time we brought this to the committee, it was called must fail fast, and it got Stage 1, and then it got blocked from advancing from there for reasons I will explain. This is a Stage 1 update. Since then, we renamed the proposal “Don’t Remember Panicking.” So I’m going to linger on this slide for a little bit, because this is the code example I’m going to use throughout the entire talk, so it’s worth all of us getting oriented in this code example. This is a simple money system. Don’t worry if you spot bugs; it’s purposefully a little bit buggy, in order to illustrate some of the points.
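A rough reconstruction of the money example, pieced together from the description in this talk (field names, the brand checks, and the exact overdraw check are assumptions; the code on the slides may differ):

```javascript
// Sketch only, reconstructed from the spoken description; not the slide code.
const Nat = (n) => {
  // Validity check: throws unless n is non-negative; otherwise returns it.
  if (n < 0n) throw RangeError(`${n} is negative`);
  return n;
};

class Purse {
  #units;     // number of units of this purse's currency
  #unitValue; // worth of one unit, in a fine-grained quantum currency
  constructor(units, unitValue) {
    this.#units = Nat(units);
    this.#unitValue = Nat(unitValue);
  }
  get units() { return this.#units; }
  deposit(myDelta, src) {
    // Prepare phase: all validation and all possible early exits happen
    // here, before any effects (the "none" of the all-or-none).
    if (!(#units in this)) throw TypeError('not a Purse'); // "this dot sharp"
    if (!(#units in src)) throw TypeError('not a Purse');
    Nat(myDelta);
    Nat(src.#units * src.#unitValue - myDelta * this.#unitValue); // src not overdrawn
    // Commit point. Fragile phase (the "all"): every effect must happen,
    // with no bailout in the middle.
    this.#units += myDelta;
    src.#units -= (myDelta * this.#unitValue) / src.#unitValue;
  }
}
```

With BigInts, a `src` purse whose unit value is `0n` together with a `myDelta` of `0n` slips past the prepare phase (`5n * 0n - 0n * 1n` is `0n`), but the fragile-phase division by `0n` then throws a RangeError after the destination has already been updated — the data coincidence discussed later in this talk.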
+
+MM: At the top here, this is just a validity check: a Nat function that takes a number, throws unless the number is a natural number, i.e. non-negative, and otherwise returns it. Instances of class Purse each represent a holder of money, such that money can be moved from one Purse to another. This money system has exchange-rate conversion built into it, so for each Purse there’s a number of units of some currency, which is the units field, and then there’s a unit value, which is how much each unit of that currency is worth in some fine-grained quantum unit of currency. And we use Nat on construction to ensure that both of these are not negative. All of the action, everything interesting, is in the deposit method, because the deposit method is there to implement transactional totality: all the effects happen or none. The effects are moving myDelta units into this purse, the destination purse, and withdrawing the worth of those units from the source purse, and we’re trying to keep the total worth approximately conserved.
+
+MM: So this is implementing the transactional totality using the prepare-commit pattern, which I do recommend. The prepare-commit pattern has a prepare phase that provides the “none” of the “all or none”: it does all the input validation, all the precondition checking, such that all possible throws or early returns happen here, and in particular, no effects happen here. So if you throw or return early, no damage has been done. And this particular prepare phase checks, with the “this dot sharp” check here, that `this` is an instance of the Purse class, that `src` is an instance of the Purse class, that myDelta, the number of units we’re transferring into this Purse, the destination purse, is not negative, and this outer Nat is checking that `src` would not be overdrawn.
So if any of those are reasons to bail out early, we bail out early, not doing any damage; otherwise, we go past the commit point into the fragile phase. The fragile phase implements the “all” of the all-or-none: once you start into it, you’re performing effects, and the correctness of the system depends on all of these effects happening, with no bailout in the middle after you have started to perform some effects.
+
+MM: Okay. So the JavaScript spec does not admit the possibility of out-of-memory or out-of-stack. But if, for example, the numbers we’re computing with here are BigInts, then even a multiply allocates; and depending on how it is implemented, even a Number multiply might allocate. It also might need a new C stack frame. Any time there’s an allocation, there’s always the possibility that there’s no more memory to allocate, or that you’re out of budget for the total stack space or total number of stack frames. Because this can happen anywhere, and the possibility is not part of the acknowledged semantics of JavaScript, it’s just unreasonable for the programmer writing something like this to have to think to be defensive against errors like this. And if it happens here, then the destination Purse, this Purse, has been incremented, but the source Purse’s units were not decremented, because of this failure.
+
+MM: So you could think that maybe the programmer should just be defensive in general in the fragile phase by putting it into a try/catch, which makes a lot of sense, except that the programmer has no idea why this block might have failed. Maybe it failed in the plus-equals here rather than the multiply. If they don’t know why it failed, then without an extraordinary amount of bookkeeping to consult, they cannot know what to do to repair the damage.
We have unknown corrupted state, and to proceed with execution with unknown corrupted state is to compute forward with corrupted state and continue to do damage; and since we don’t know what’s corrupted, we don’t know what further damage will happen. So this is not a tenable situation.
+
+MM: So what the first version of this proposal was exclusively about was out-of-memory and out-of-stack policies. What we were advocating is that when such a problem happens, by default the agent exits immediately, the agent immediately terminates, because any further execution of JavaScript code after this point is just too dangerous. However, when we presented this, we ran into the objection, from browser makers in particular, that browsers currently do throw on out-of-memory, and they were unwilling to change that as the default policy, because there’s too much code out there that counts on being able to continue after such things happen.
+
+MM: And in slides coming up I’ll explain why that’s actually quite a sensible policy, especially given the view of JavaScript at the time the browsers arrived at that policy. So instead of proposing that immediate termination be part of the JavaScript spec, we’re instead proposing that the policy decision is delegated to a host hook. And we’re generalizing it from just out-of-memory and out-of-stack to a bunch of different faults, where we provide the host with a fault type and an argument that can provide additional information per fault type. So in order to make sense of this, we need a taxonomy of fault types. Oh, wrong taxonomy. Those are earthquake faults. Let’s go to software faults.
+
+MM: After unrepairable corruption has happened, the most important part of our taxonomy is that it’s not possible to continue with both availability and integrity.
You can’t compute forward from unrepairable corruption preserving both availability and integrity. So one possible choice for the host is fail-stop, which is to sacrifice availability for integrity. That’s certainly what you want for a transactionally-total system like a money system, where user assets are at stake. But what the browsers expressed as their desired default policy is what we would instead call best efforts, which is to sacrifice integrity for availability, to remain responsive.
+
+MM: And so remember that the browsers arrived at this behavior during the ECMAScript 3 days or earlier (I arrived at ECMAScript 3), when the entire language was what we are now calling sloppy mode: even on a failed assignment, you would just continue past the failed assignment silently, going on to the next instruction. The reason for that was the engineering goal, in the view of JavaScript and browser behavior at the time, that the most important thing was to preserve availability of the page, to keep the page interactive, even if the price of that was to compute forward with corrupt state.
+
+MM: So the next part of our taxonomy is these four levels of severity of a software fault. The first level of severity is that the host (and now I’m being a little sloppy with the word “host”; I mean “host or JavaScript engine, but not JavaScript code”, the code that implements what JavaScript code sees) detects that its internal state is corrupted, for example, an internal assert is violated. I’m just guessing here that the browsers have something like an internal assert as well; I’ve only studied the internal faults for XS, and I’m guessing about the browsers. But if the internal state of the host or JavaScript engine is corrupt, then I’m assuming that we all agree that those cases do call for fail-stop.
And in browsers you receive the blue tabs of death; for XS, which is meant primarily for devices, there’s the quite sensible policy of just rebooting the device rather than computing forward with corrupted state, and they’ve found that to be a much more robust way to continue. For XS, out-of-memory and out-of-stack are in this severity category, because the XS machine is not built to guard its own integrity against these conditions.
+
+MM: The browser engines, we’re guessing based on the objection previously raised, are built so that following out-of-memory and out-of-stack, the JavaScript engine and host have not lost their internal consistency. Their internal invariants still hold, but they’re now in a position where they cannot continue executing JavaScript while upholding the semantics of the JavaScript spec. The error that they throw is outside of the JavaScript spec, and because it’s outside of the JavaScript spec, it’s outside of what the JavaScript programmer thought they could count on; therefore, we should assume that the JavaScript code is now continuing to compute with its state corrupted, even though the C state, so to speak, is uncorrupted.
+
+MM: So for this severity level, it makes sense for the host to decide between best efforts and fail-stop, depending on whether integrity or availability is the overriding goal. But if the default policy is best efforts, we’re advocating that the host provide some API (we’re not proposing what the concrete API for this would be) such that the JavaScript code can opt into fail-stop to protect itself, such as for the banking examples. And in fact, we would propose that whatever this API for opting into fail-stop is, it become standardized as part of JavaScript.
+
+MM: Okay.
The next level of severity is: the host is fine, the host can proceed within the JavaScript spec, but something happened such that we can think of the JavaScript code itself as likely being in trouble; there is some symptom the host can react to, based on the assumption that it indicates the JavaScript code might be in trouble. Unhandled exception and unhandled rejection are the well-known ones. XS also has metering built in, such that code can be out of time. The browsers, in order to cope with an infinite loop happening within JavaScript code, might time that out and then have some strategy for continuing execution after the timeout. The next lower level of severity I will come back to. But at this point, to motivate that remaining level of severity, let’s go back to our example.
+
+MM: So now that we’re familiar enough with this code as a whole, let’s scroll further to where we see both the deposit method and some code for testing it. There is an obscure bug in this code, or maybe not so obscure to some of you, but the nature of this bug is such that it can survive zillions of test cases like this. It might survive development, review, and testing, because it requires a weird data coincidence in order to reach the bug. So the result is that the data coincidence might happen first in deployment at a customer site, having survived development and testing. And the data coincidence is shown over here: it happens if we’re providing BigInts rather than Numbers, and there’s a zero here in the unit value of the source Purse, which is not something the developer might have thought to try, and a zero in the amount we’re trying to increment the destination Purse by.
If there’s a zero in both of these positions, then this divide operation will throw a RangeError, and if we continue computing past that throw, then, because we never noticed the possibility of this bug during development, we now again have corrupted state, in deployment, in user code, at a customer site, which is really bad. And we might again try to do something about this by putting a try/catch around it, but, again, if we don’t know what the problem is, we don’t know what to do to repair the state without an extraordinary amount of extra bookkeeping. So the most we can do is log it, or try to do some kind of diagnostic that ultimately makes it back to the developers, so that at least we know why Zalgo is laughing at that point.
+
+MM: Now, surprisingly, there actually is a way in JavaScript today for this code to defend its own integrity, to sacrifice availability to preserve integrity, at least as far as the spec is concerned. It can go into an infinite loop at that point, and as far as the spec is concerned, that blocks all further execution in this agent. Zalgo never gets to observe the corrupted state, never gets to do damage from continued computing with the corrupted state, and we’re safe.
+
+MM: But there are two problems with this.
+
+MM: One is that the price of the safety is very expensive, and it’s expensive for the customer, since this happens first at a customer site. The other is that if the host already has a policy of engaging in some remedial action, like throwing or aborting the current turn if it times out, as we believe the browsers do, and then continuing execution, then that leaves the semantics of the JavaScript spec and continues computing anyway with corrupted state. So what we’re proposing instead is that the JavaScript code have some way to say: somebody stop me!
Which is this `Reflect.panic` operation, a new API that we are proposing, so that it can become a practice, when engaging in the prepare-commit pattern, to do a try/catch and then abort the agent as soon as possible. Since the assumption up front is that no early exits happen in the fragile block, if an early exit does happen there, that’s enough of a symptom to say: okay, we violated our basic assumption of the fragile block, we don’t know how to repair the damage, just terminate immediately.
+
+MM: Another thing to do with `Reflect.panic` is that the JavaScript code itself can now do an assert-like operation. Current asserts might throw an error when the assert condition is violated; a more severe JavaScript assert might say: well, if the assert gets violated, I have no idea how to continue, so just panic at that point.
+
+MM: So that brings us to the next lower severity level, which is that the JavaScript code notices some corruption, through a failed JavaScript-level assert or an early exit from a fragile block, and calls `Reflect.panic`, which in turn calls the host fault handler with the fault type “user panic”, and whatever arg is provided here becomes the extra data provided there.
+
+MM: Now, throughout this talk to this point, I’ve been saying repeatedly “abort the agent”, but there’s been a conversation on this WHATWG thread in the HTML repo, going back to 2017 across three different bug threads, including a new bug thread as of a few days ago, about what the actual unit of computation is that needs to be aborted, what we’re calling in this talk the minimal abortable unit of computation. And what these threads are discussing is: do we have to abort an entire agent cluster?
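Returning to the fragile-block pattern described above, it might look like the following. `Reflect.panic` is only proposed and does not exist in any engine, so this sketch uses a hypothetical `panic` stand-in that records the fault and rethrows, purely to make the pattern runnable; the real API would fail-stop the agent and never return control:

```javascript
// Hypothetical stand-in for the proposed Reflect.panic: a real host would
// terminate the agent here; for illustration we record the fault and rethrow.
let lastFault = null;
const panic = (reason) => {
  lastFault = reason; // the real Reflect.panic(reason) would never return
  throw reason;
};

// Prepare/commit with a guarded fragile phase: any exit from the fragile
// block means unknown corrupted state, so we ask the host to stop us
// rather than attempt repair.
function guardedFragile(effects) {
  try {
    return effects(); // fragile phase: no early exit is expected here
  } catch (err) {
    panic(err); // basic assumption violated: do not repair, just stop
  }
}
```

An assert built on the same idea would call `panic` instead of throwing when its condition fails, since a violated invariant means the code has no idea how to continue.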
+
+MM: So a way to visualize the dilemma is this: before the introduction of SharedArrayBuffers, the agent was indeed the minimal abortable unit of computation, because objects within an agent are synchronously coupled to each other, and in general computation within an agent is synchronously coupled, so you would have to abort at least the agent. But back then, agents were only asynchronously coupled to other agents, so I could abort an agent, and other agents were in a position to react to the sudden absence of an agent they had been talking to.
+
+MM: With the introduction of SharedArrayBuffers, agents could be synchronously coupled to other agents. So the agent cluster, which is what those HTML threads are about, is what we’re calling in this talk the static agent cluster: all of the agents that might be synchronously coupled to each other because they might share a SharedArrayBuffer. That is certainly a sound unit for jointly aborting, but it’s not really satisfying as the minimal unit, because there can be a tremendous number of agents within the agent cluster, and sacrificing all of them to preserve consistency seems unfortunate.
+
+MM: So fortunately, there is a smaller unit to abort, which is what we’re calling here the dynamic agent cluster. Say the fault happens within this agent. First of all, the processing of the fault is clearly something that can be on the slow path; nobody cares how long it takes to kill a bunch of tabs. What we can do, if the fault happens in that agent, is ask, at that moment: what is the transitive closure of agents synchronously coupled to the agent in which the fault happens at that moment?
And then kill the transitive closure of those agents, the ones in that dynamic agent cluster: this one would be killed, and this one would not be killed, even though it’s in the same static agent cluster.
+
+MM: So the assumption that we’re making in providing this host hook that can choose to abort this minimal abortable unit is that the new host hook is allowed not to return control to JavaScript. However, the actual text of the spec says something that JavaScript engines in fact generally violate, which is that a host hook must return either with a normal completion or a throw completion. Instead, execution today, depending on what goes wrong, might core-dump or produce some other kind of diagnostic snapshot, depending on the host. On Node.js, `process.exit` (which granted is not actually a host hook, but just a host-provided built-in, `process` being a host object) does not return control to JavaScript. And of course, the browsers have the blue tabs of death.
+
+MM: So we want to acknowledge that by allowing, in particular, the host fault handler not to resume JavaScript execution. But there’s actually another way to not resume, other than simply death before confusion. It’s certainly the case that you can’t resume by simply allowing computation to proceed forward, but one of the reasons why the browsers do the blue tab of death preserving the URL in the URL bar is to give the user the choice to just refresh the page. The host hook could conceivably decide to refresh the page on its own, although I don’t recommend it; I think giving the choice to the user is more sensible. But in this case, the reason why refreshing the page, whether by user choice or browser choice, makes sense, is that you’re falling back to a previous consistent state. You’re immediately forgetting, abandoning, all of the corrupted data state.
You’re abandoning it immediately, no further damage happens, and you’re falling back to a previous consistent state. XS, in rebooting the device, does exactly that: the previous consistent state is the state in ROM. + +MM: A friend of mine who works on an extremely reliable operating system was one day talking to somebody who writes software for pacemakers, which clearly need to be extremely reliable, and asked: okay, how do you deal with the various kinds of faults that might happen in your pacemaker software? And he said: you know, the heart is an extremely fault-tolerant device. It can miss a beat or two without much worry, so we just reboot the pacemaker, and that works. And by rebooting, we restart from exactly that previous consistent state that’s in ROM. But if it took ten beats to reboot the pacemaker, that would be a very different story; at that point, you might prefer best efforts. And then Agoric does a full transactional abort: between transactions, Agoric has stored enough snapshot and log information that we can completely abandon the corrupted state—the state of the aborted transaction—restore from a previous consistent state, and continue to compute forward from there. + +MM: So this kind of amnesia before confusion is a way to preserve both availability and integrity, and this policy of providing this kind of abort of the minimal abortable unit supports hosts that want to have some kind of fallback to a previous consistent state. + +MM: That brings us to the larger question of how one builds fault-tolerant systems, and what faults fault-tolerant systems are trying to survive.
So there is the conventional dichotomy of building fault-tolerant systems out of Byzantine components versus out of fail-stop components. Usually this is discussed in the context of hardware faults, where the faults are assumed to be non-replicated, so that you can have multiple replicas running the system and the fault occurs in only a minority of the replicas. With Byzantine faults, the component is assumed not to fail stop; it’s assumed to continue computing forward with corrupted state and, therefore, be unpredictable. Furthermore, more generally, a Byzantine fault means that that piece of hardware may indeed be malicious, which is the assumption behind Byzantine fault tolerance and blockchain. This is hard, but there are zillions of systems now that do exactly that, and it copes with the supply chain risk that some of the hardware running the computation might indeed be malicious. + +MM: And then there’s the more common hardware assumption that you can build the hardware to act in a fail-stop manner, and then all you need is simple redundancy and voting. That is represented by systems like the Tandem NonStop, where the replica that loses the vote just drops out; various failover schemes are essentially in the same category. But there’s the more interesting category of failable applications: what if there’s a bug in the application code, or a fault in the interaction between the application code and the software it is running on, where all of the replicas are running the same software? In that case, any of those faults are replicated, and if they’re replicated, hardware redundancy is of no help at all. We have to engage in other coping mechanisms.
+ +MM: So what Agoric has mostly been focused on is replicated faults that are Byzantine faults—for example, library supply chain risks, where a library linked into your software might itself be malicious. We can’t mask those faults, but we can reduce their severity with the principle of least authority—providing the library with no more ability to cause effects than it needs to do its job, which is often tiny compared to the status quo of what’s provided to libraries today—with object capabilities, with defensive consistency, where individual components are programmed to maintain their own consistency in the face of malicious callers, and with compartmentalization, which is what the Compartments proposal was about. + +MM: But today, what we’re raising is this other category of application faults, where the faults are not malicious, and we would like to turn the faults into fail-stop faults so that we can do fault containment. That is what the Erlang philosophy is about: fail-only programming, meaning that the process, which is very much like our agent, terminates immediately if something goes wrong, and it leaves it to other agents—including an agent serving as the supervisor of the agent that failed, but in general, other agents interacting with the failed agent—to react to the sudden absence of the failed agent. And that fits with the postmortem finalization philosophy that we followed when we introduced weak references, as opposed to the Java finalize method, which is a method on the object being garbage collected. If you’re being torn down or if you’re confused, you’re the last one the system should ask to cope with the consequences of your going away—because you’re confused, you’re the least capable of coping with your own corruption.
Rather, we should just kill you immediately without consulting you, and then let other code elsewhere deal with your sudden absence. So we’re doing that for garbage collection with postmortem finalization within an agent, and we’re proposing, with this panic and fault handling, to apply that to the agent as a whole, with other agents reacting. + +MM: And that brings me to the end of this Stage 1 update, and now I will take questions after turning recording off. + +DE: Hi. Interesting presentation. So `Reflect.panic`—you make interesting arguments for it, but overall, it seems like a pretty strong capability to be giving everyone, making it extremely easy to halt the program. Given points that you’ve raised before about how people compose programs without thinking so much about giving those components less privilege, it makes me worry that libraries in the software ecosystem could call `Reflect.panic` when that isn’t what users of those libraries expect or intend to enable. What do you think? + +MM: I think that’s really the crucial question, and we went back and forth over this. In fact, if you take a look at the Agoric software today, it’s all built on the opposite assumption. The Agoric operating system gives to the start compartment—the compartment that’s able to hold privileges—a capability to terminate the agent immediately, and then we go through all the trouble of threading that exactly and only to the code that should be able to exercise that capability. And, yeah, that’s a lot of trouble, but we did it and it’s okay. The thing that got us thinking the other way is that any code has the ability to go into an infinite loop anyway.
So the infinite loop is the [INAUDIBLE] that says we can’t stop code from going into an infinite loop, so why are we treating specially the ability to stop the agent—or the dynamic agent cluster, the minimal abortable unit—while indicating a fault? And specifically indicating a fault: the Agoric operating system actually provides two capabilities. There’s stop indicating that something went wrong, and there’s stop indicating a normal termination. For the second one, stop indicating a normal termination, we still treat that as a protected capability, and we intend to keep treating it as a protected capability, because exiting indicating that you’re done is not equivalent to an infinite loop. So panic is basically just a cheaper form of the infinite loop that also provides diagnostic information. + +MM: And then I’ve got a PR, which I will link to from the proposal repo, on the Agoric software where I’ve unthreaded the panic capability and provided panic as an ambient thing, as ambient as `Reflect.panic`. In our case, it’s importable from a module right now, but if this proposal makes it to Stage 3, then wherever that API ends up, we’ll move it there. It was very pleasant making it ambient, because we’ve got a lot of these fragile blocks of prepare/commit patterns—over a dozen of them, I think—with an internal assert that says: if this assert fails, I have no idea what the problem is, just kill me now. It was certainly very convenient; it got rid of a lot of lines of code, and because of the equivalence with infinite loops, it didn’t actually create any danger that we did not already have. + +DLM: I’ll just be quick since we have limited time. I also have some concerns about `Reflect.panic`.
+ +PFC: I think there’s going to be a popular understanding of what `Reflect.panic` is for, if it becomes available, that I think will be pretty harmful for the web. I could just see it happening that somebody ships an assert library that panics when an assertion fails—which, as you pointed out in your presentation, has real use cases; there are times when you would want to use such a thing. But, you know, the library’s going to be available, and the narrative that people will take away from it is “oh, that’s more secure.” I can just see that panicking assert library being used in all sorts of situations where it’s not necessary, which I think would really degrade the experience for users of the web. + +MM: I agree with you. That’s a real danger. But the availability of the panic, and the need to kill the minimal abortable unit if certain asserts fail, is an actual need. You know, the other thing that such an assert library could do today, besides the infinite loop, is, if they know enough about the host to know what causes the host to panic—how to induce blue tabs of death—they could do that. + +MM: But I agree with you, and this is why we went back and forth over this issue. One possibility going forward—and I’ll go ahead and ask for the committee’s reaction right now—is that we separate the user panic into a separate follow-on proposal that the rest of this proposal explicitly makes room for. And this proposal—everything else that you heard today, which is generalizing from just out-of-memory to a fault-handling taxonomy with severity levels, the host policy, and the ability of JavaScript code to opt into fail-stop if the host defaults to best efforts—I think all of that holds up well in the face of this criticism. So we could keep that, with room for panic, and then have panic itself be a follow-on proposal. What does the committee think of that?
+ +SYG: I don’t think `Reflect.panic` is a good idea. I think the UX is going to be different from an infinite loop. With an infinite loop, your page hangs for a bit, things don’t work, and a thing pops up that says this page is not responsive, do you want to stop it? With `Reflect.panic`, the only realistic way I can imagine implementing it, if you want to kill the entire agent cluster, is an actual process crash, and there is no world where a browser vendor is going to ship an API that user code can call that makes it look like the browser’s renderer process crashed. There’s no world where that is going to happen. There are other ways to implement it, I suppose, but those are very invasive. We’re talking about a way to communicate to all the other running threads, like workers and things like that, to stop at the next point. If you don’t want to kill them right then and there, you have to communicate something that says: check this interrupt and stop. I’m not sure that’s what you want anyway, because if you have SharedArrayBuffers, you have shared memory, and if you don’t kill the process right there and then in whatever thread hit the fault, you don’t know how long the other workers are going to keep running until they receive that message. Maybe that’s what you don’t want, and that’s a very invasive implementation technique that is not likely to fly either. I don’t see how we can ever have a `Reflect.panic`. It comes down to: we’re not going to ship something that lets user code make it look like there’s a bug in our product, right? That’s not something we’re going to ship. + +MM: Okay. Noted. Thank you. I understand the nature of the objection. + +MAH: Really quick—and I think this is a reply not just to panic, but to others. We seem to focus a lot on what the behavior of the browser would be for the main thread.
But this can also be called in any other agent that is part of that cluster, for example a worker, where the main thread could survive and act as the supervisor, in which case the application itself can still programmatically handle this. + +SYG: Sorry, was that a question or a comment? + +MAH: Yeah, the question is: is there a world where `Reflect.panic`, for example, would be acceptable in an agent that is not the main thread? + +SYG: I have no idea what the question is. I’m sorry. + +MAH: Is it reasonable to imagine that `Reflect.panic` would be—like— + +SYG: It doesn’t kill the agent cluster, it only kills the agent? + +MAH: `Reflect.panic` would only kill the dynamic agent cluster—the agents that are sharing SharedArrayBuffers—so it’s entirely possible that the main thread would not be affected and only workers would be affected, if they’re not sharing a SharedArrayBuffer. + +SYG: I see. Yeah, it sounds possible on paper. It still seems highly unlikely to be implemented. + +DE: Yeah, you were suggesting we have a host hook, and we’ve also discussed how the behavior from browsers is kind of complicated and varying. So how would you want that host hook to be defined in HTML? + +MM: So let’s start with just taking what browsers are doing right now, which violates the JavaScript spec, and instead, with this proposal, explaining what browsers are doing right now as a host policy expressed by the behavior of the host hook. As we all understand, there doesn’t have to be a piece of software which is the host hook; the host hook is an explanatory device for dividing responsibility between JavaScript and the host. Essentially, different hosts can express different policies—different hosts in fact have different policies with regard to fault handling. Let’s bring the possibility of those different policies into the language by attributing them to the behavior of the host hook. + +MM: Was my answer clear?
+ +DE: No, because I don’t think there’s a common enough or well-defined enough behavior. Like, I still don’t know what you would want to actually write. I guess you described a general approach. + +SYG: Yeah. Let me interrupt you there, Mark. We don’t have interop among the browsers for what happens for what kinds of out-of-memory. + +MM: Ah. + +SYG: There is no universal behavior, and I think it is inaccurate to say that we violate the spec, because this is extra-spec. It is just not a specified behavior. + +MM: Well, JavaScript code continues outside of the semantics of JavaScript that the spec promised the JavaScript programmer they could count on. + +DE: Well, sure. So the spec doesn’t say that there are any resource limitations, and by not being an infinite unbounded machine, it’s violating the spec. Is that what you mean, Mark? + +MM: That’s what I mean. The previous time I brought this to the committee, my first thought was: mostly there are two kinds of languages in the world—languages that cannot be implemented correctly, and languages in which it’s impossible to write a correct program. JavaScript is a language that cannot be implemented correctly except on an infinite-memory machine. Java is one that can be correctly implemented, but in which it’s impossible to write a correct program, because a virtual machine error might happen at any time. + +MM: So for those particular hosts—maybe I am misusing the term host, but since it’s not universal across browsers—each browser expresses its own way of coping with out-of-memory. We attribute that to the browser’s behavior and make it something that is acknowledged by the language as being according to the host’s choice. Just make it explicit, so the JavaScript programmer knows that when out-of-memory happens, we ask the host what to do. That doesn’t seem like a big ask to me. Let’s go on with the queue. + +DLM: Sure. I will be quick.
It’s not clear to me that, even with SharedArrayBuffer, computation is guaranteed to continue on another worker in a corrupted state, just from a scheduling point of view. And the other thing—I don’t expect an answer—is that it sounds like this is a building block for transactions, so why not just consider bringing a transactions proposal? That’s it for me. Thanks. + +MM: You’re exactly right. This is a low-level proposal that facilitates JavaScript code creating transactions and such things at a higher level, and there are many possible transactional semantics. If we did bring transactions directly to the committee—the kinds of things you need to support genuine transactions, including falling back to a previous consistent state, I just don’t see engines being willing to implement in general, or hosts being willing to provide in general. So I don’t see that that would be better able to advance. This is a much lower-level mechanism that enables a much wider variety of coping strategies. + +DLM: Yeah. That’s fair. I agree—transactions seem problematic in JavaScript’s future. But I wanted to see if you had considered that, given there are some concerns about this proposal, especially with `Reflect.panic`. + +MM: Altogether, my sense is that this proposal without the user-level panic is still plausible, and that the user-level panic could be a follow-on proposal which, because of these objections, might not advance. Okay. Let’s go on. + +[out of timebox] + +### Summary + +See topic continuation for summary. + +### Conclusion + +No conclusion; we’ll discuss further in a continuation topic, including a temperature check on the viability of this proposal without the panic API.
+ +## Enums for Stage 1 + +Presenter: Ron Buckton (RBN) + +* [proposal](https://github.com/rbuckton/proposal-enum) +* [slides](https://1drv.ms/p/c/934f1675ed4c1638/EYypvengQohMlG52w1qseW8BCwCkSG0Y-2ip8Zq7pxoOFw?e=Aklyqu) + +RBN: Today I want to discuss enum declarations. I am Ron Buckton; I work on the TypeScript team. Enum declarations are essentially enumerated types: they provide a finite domain of constant values that are often used to indicate choices, discriminants, and bitwise flags, and they’re a staple of C-style languages—VB.NET, C#, Java, PHP, Rust, Python; the list goes on and on. The reasons we are discussing this are several. One, the ecosystem is rife with enumerated values. ECMAScript’s `typeof` is string-based. The DOM has `Node.nodeType`, whose enumerated values live on the Node constructor. Buffer encodings are string-based—essentially a string-based enumerated type. And Node.js has constants that provide enumerated-type-like functionality, but there’s no grouping. For users there is really no standardized mechanism to define an enumerated type, one that can be used reliably by static type systems. We talked about ObjectLiterals, but there’s a reason why that’s not really the best choice for this; I will go into that in a moment. + +RBN: Another reason to bring this up: Node.js shipped a feature that allows for type stripping in TS files, to allow both user code and third-party packages to potentially run TS files directly within a program. In those cases, the types are stripped off. However, enums are a TypeScript feature that is not erasable type syntax; they have runtime behavior.
So if Node.js developers want to use enums, they are forced to use something else that doesn’t work well with TypeScript, because enums are not supported by type stripping. We have had developers as well as members of the Node.js committee reach out to the team to ask us to consider bringing this proposal to ECMAScript, to have some form of TypeScript enums potentially standardized. + +RBN: So I mentioned why we might not want to use an ObjectLiteral. An enum declaration has a number of advantages over a plain old ObjectLiteral. The goal is to have a closed domain by default: enum members would be non-configurable and non-writable, and the enum declaration itself would be non-extensible and have a null prototype. The null prototype is to avoid collisions, and non-extensibility is to prevent somebody making changes to the enum later, which also opens the door to runtime optimization. Another advantage of an enum declaration is that it is restricted to a specific domain of values, limited to a subset of primitives. Number and string are what is supported in TypeScript; we have also been considering BigInt and Boolean, as well as symbol-based values. + +RBN: One other capability of an enum declaration—at least of TypeScript enums—that you cannot get with ObjectLiterals is self-reference during the definition. In an ObjectLiteral you can’t reach out to other members of the ObjectLiteral while defining it, because it doesn’t exist yet. However, it’s fairly common within an enum declaration, one that works with numbers like bit flags and bitmasks, to use bitwise combinators to create bitmasks by referencing prior members within the definition of that enum.
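A minimal sketch of this self-reference pattern in today's TypeScript enum syntax, which the proposal aims to remain broadly compatible with; the `FileAccess` enum and its members here are illustrative, not taken from the proposal text:

```typescript
// Illustrative bit-flags enum in today's TypeScript syntax.
// Later members reference earlier ones, which an ObjectLiteral
// cannot do while it is still being defined.
enum FileAccess {
  None = 0,
  Read = 1 << 0,  // 1
  Write = 1 << 1, // 2
  // A bitmask built by self-reference; the qualified form
  // FileAccess.Read also works, which matters for reserved-word names.
  ReadWrite = Read | Write, // 3
}

// Typical bitwise test against the flags:
const mode: FileAccess = FileAccess.ReadWrite;
const canRead: boolean = (mode & FileAccess.Read) !== 0; // true
```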
+ +RBN: And again, one of the other major advantages of enum declarations is that they can be specially recognized by tooling such as a static type system like TypeScript. Not only is an enum something in value space—a JavaScript runtime value that can be accessed—but it is also a type that can be restricted in a static type system, used to discriminate in a union, provide documentation in hovers, et cetera. + +Other really interesting advantages of an actual enum declaration over an ObjectLiteral: there are ways we could extend enum declarations that wouldn’t make sense for a normal ObjectLiteral. One of the big areas we have investigated is introducing algebraic data type, or ADT-style, enums—creating a structured object using a very concise syntax. These are often option or result types. They are frequently used in languages like Rust, but also in Python, which may not have ADT enums, but uses something like Option. + +RBN: Some other areas we want to investigate might be decorators—if you use ADT enums to specify data used as a wire format and translate it to something that is more usable. If you are using something like protobuf, or doing WASM interop, you might want to say: I want these values to be stored in memory in this way, but serialize or deserialize them differently. Decorators are a way that could be accomplished; I will go into more detail in later slides. Another area is opt-in auto-initializers; I will explain why that’s important later. + +RBN: And there is the potential for shared enums, which would have further restrictions so that they can be used with shared memory multithreading and shared structs. + +RBN: This is a Stage 1 proposal, so in many cases we have some leeway in what we are considering, as far as both the syntax we’re supporting and the runtime semantics. Since one of the goals is to allow TypeScript developers to have some form of enum in native JavaScript code, there are some restrictions to what we’re looking for as far as syntax.
In many cases, there are things that TypeScript looked at and said: there are behaviors of enum that are not desirable that we might be able to change, or that we might be able to build on top of the same functionality with a more restricted or MVP approach. The syntax we are proposing: an enum has an identifier as its base name, and its members have a name—an identifier or a StringLiteral—and an initializer. StringLiterals are not used as often as identifier names, but they are used. It is something that we don’t have a strong preference on, and we may consider dropping the support because it has some complications when it comes to self-references. + +RBN: I will say, a brief GitHub search shows, among public projects on GitHub, 250,000 cases of enum declarations across numerous projects. It’s a popular and heavily used feature. And that’s why, again, the goal is for the syntax to be compatible with TypeScript, so that we can co-evolve the syntax, as long as we avoid conflicts. One area that we are considering not supporting, because of discussions I have had with various committee members over the years, is TypeScript’s default auto-numbering; we might have more specific or opt-in mechanisms for that. We will go into that more later as well. + +RBN: As far as the proposed runtime semantics: we’re currently looking at an enum declaration producing an ordinary object with a null prototype. That’s not what we do in TypeScript today, because in general in TypeScript we tend to lean on the type system to tell you when you’re doing something wrong—when you are accessing something that is not a member of the enum domain, for example `valueOf`, which is an inherited property from Object. We don’t let you use that.
But JavaScript doesn’t have a type system, so it’s likely more reliable to introduce some of these additional restrictions and semantics, to avoid having to depend on a type system for that kind of behavior. + +RBN: Another thing to consider is that enum declarations have a `Symbol.iterator` method that yields the enum’s [key, value] pairs. The reason why: some of the directions we are considering for ADT enums, as a potential future capability, would have the potential to introduce new static members to an enum that aren’t necessarily part of the enum domain. We don’t believe `Object.entries` would be reliable long-term for something like that, and it’d be better to have a more specific capability for yielding key/value pairs like this. This is a feature that is present in Python enums: you can loop over the enum members of a Python enum. + +RBN: Enum members are properties of the object produced by the enum declaration, and they are configurable: false and writable: false. This isn’t a behavior that TypeScript implements, but for an actual native runtime enum, it would be more reliable to make sure the members are fixed and unchangeable. + +RBN: Next, enum members have identifier or StringLiteral names. Again, we are considering removing the ability to have StringLiteral names; there are very few cases where that occurs in practice, and it’s likely that such members would have complex names that don’t match an identifier—we only see things like dashes in the StringLiteral names from the GitHub searches we have done. Generally we want to avoid things like NumericLiteral names or computed property names. TypeScript currently uses a reverse mapping for numeric enums that, while problematic in and of itself, means that having NumericLiteral names increases the potential for collisions with those reverse mappings, which is why we don’t support them.
But you could have a StringLiteral name that happens to be an integer string, for example—that’s again not something we really run into that often, but it’s been a concern. The reverse mapping is something that we are rethinking, due to limitations that it has; I will discuss that more in a later slide. + +RBN: As mentioned before, enum initializer values are limited to String, Number, BigInt, Boolean, and Symbol. We don’t allow all JavaScript types, for various reasons. For example, we want to forbid functions as we consider a potential future of ADT enums, which are structured: those would be enum members that are constructor functions, and we want to avoid confusion about the actual enum domain—is this value a constructor function for an ADT value, or is it just some function value that doesn’t make sense there? So in general we limit it to a subset of primitive types that would be allowed in those places. + +RBN: And the other interesting semantic is that enum initializers may refer to the enum by name, or to prior enum members. This is most commonly used for bitmasks, or when I need to alias an enum member to a different name so that I can refactor without breaking old code; it’s useful to reference those values. Referencing the enum declaration itself is useful for enum members that can’t be referenced bare because their name is a reserved word. If you created an enum member named `default` and then referenced that member as just `default`, that doesn’t work, because `default` is a reserved word. There are cases where we would need to reference the enum declaration itself, which is why we would want to support that. + +RBN: The desugaring is not final. The simplest desugaring you could consider is the ObjectLiteral approach, where you just define enum members A and B as 1 and 2 and freeze the result.
That’s one possibility, but then you cannot do something like enum member C, which references the other members in a bitmask. + +RBN: So instead, what we would do is define these one at a time. That’s also helpful when you look to a future where we support decorators and need to handle each evaluation independently. + +RBN: One other way to consider doing this desugaring would be to predefine all of the properties with a value of undefined, but still configurable: true; then, after evaluating each enum value, assign it; and at the end, mark the properties as non-configurable, or just freeze the object. This roughly emulates some of the behavior we want for the structs proposal, where the shape is fixed. A fixed layout established early allows for runtime optimization in engines; the kinds of things we are looking at in the structs proposal, we might want to leverage here to avoid costly lookups. We know these objects are unchangeable—that the members themselves can’t change—therefore, it’s possible for the engine to do inlining. We are not depending on that behavior, but these are some of the advantages we are looking at, as with the structs proposal’s fixed layout. + +RBN: Some other considerations are not currently in the proposal, but we’re willing to consider them, depending on what the value might be. TypeScript doesn’t support enum expressions, and it’s fairly common in JavaScript for a declaration form to have an expression form. In the structs proposal, specifically for shared structs, it’s not possible to have an expression form, given the type of correlation mechanisms we are considering. + +RBN: And since TypeScript doesn’t support this, we are not that strongly motivated to add support for it. Enums are generally one-time operations, defining constants that are application-wide, or at the very least are frequently used within a single file. Therefore, enum expressions aren’t that important.
And if you do need them, you could still define an enum declaration and return or access its enum object as the expression. But if there’s enough motivation, we could consider rolling that into the proposal. + +RBN: TypeScript does support `export` for enums, but it doesn’t support `export default`. We would consider adding support for `export default`, like class has today. + +RBN: Shared structs: primitives like number and string can be passed to a shared struct. However, we have had discussions in the past about what the default value should be for enum members that don’t have initializers. TypeScript’s current behavior is to use numeric enums and auto-increment the enum values, as the simplest approach to uniqueness. We had discussions about whether to use Symbol instead. Symbol is not reliable when working with shared structs, as evaluating the declaration twice—once in the main thread and once in a worker thread—would result in different values for those enum members. You would have to use something like `Symbol.for`. So we generally discourage using symbols as default enum values, but we are not opposed to allowing symbol values themselves. + +RBN: There are some differences in this proposal from what we have today in TypeScript. These differences are accepted—we’ve been discussing them with the team—and we’re even willing to consider further differences, and eventually adapt TypeScript to support them and deprecate certain functionality if necessary. One thing that is very heavily used in TypeScript is auto-initializers: in TypeScript, if you write enum members that don’t have an initializer, we choose a number, and those values are auto-incremented. We do this because that’s generally the practice used by every language that does enums.
With few exceptions, C# enums produce a value that does something like this auto-numbering capability, even though the enum type information is stored along with the value—at least when you box an enum value, the enum type information is stored within it, so it’s more complex than just numbers. But it’s generally the practice that numbers are used in these cases, and it’s fairly common for applications to use auto-numbering—I shouldn’t say common; it’s common for users to not want to put values in the initializers because those values obviously aren't of consequence. When you are writing high-performance code, like a compiler such as TypeScript, having numeric values is extremely useful for writing high-performance conditions to filter out certain values. One approach discussed, if we don’t have numeric auto-initializers, is using string- or symbol-based values and mapping (?) them through a function to get the number, but that is not efficient or performant. It’s likely that a native implementation might not support auto-initializing, as some delegates expressed concern that it's a footgun they don’t want to make easy to reach for. TypeScript would likely continue to support auto-initializers on enums written in TypeScript and would down-level them to a native enum that has explicit initializers. + +RBN: Another TypeScript feature we are considering deprecating for the 6.0 release later this year is declaration merging. TypeScript lets you declare the same enum twice, and the new members are merged into the old declaration. This is not a desirable feature, and it’s something we are considering deprecating. We have looked at the top ~1,000 TypeScript projects that have sources available on GitHub, and we ran into I think one major case, and it was a declaration file that was the result of a bug in how the declaration was produced. So really, this is not a practice that is commonly used in applications today.
So it’s not really a concern. + +RBN: Another thing TypeScript has that we are considering deprecating is reverse mapping. However, reverse mapping is actually very valuable. Reverse mapping is when you map an enum member to a numeric value, and you then have a mapping back from that numeric value to the enum member. This is used for debugging, diagnostics, serialization, and formatting. It’s unreliable because it only works for number-based enums; in other cases, that reverse mapping could produce a collision, so we don’t support it in those cases. Since it’s unreliable, we are looking at things like `Symbol.iterator` to produce the entire domain of the enum, and you can use your own functions to filter through that to find the reverse mapping and do filtering and formatting, for diagnostics and the like. An early version of this proposal had a global Enum object that could have used this data to provide simple mechanisms to get this information. We have removed that from the proposal to have a more MVP approach. So it’s something we can consider, but not at the moment. + +RBN: Another major difference from TypeScript enums: TypeScript has `const enum`, something we added to do inlining of enum values using whole-program optimization and knowledge. We don’t intend to bring that to TC39, as it’s not really necessary. Any type of optimization that could be done with `const enum` could be done by the runtimes themselves. `const enum` is a mechanism TypeScript uses to do this type of inlining for performance reasons, but a caveat is that if you change the dependency without rebuilding, the values are not updated, because they are inlined at compile time. If engines are able to, or have an interest in, optimizing enum members in a way that can do this inlining, such a feature would not be necessary. + +RBN: And as mentioned, TypeScript doesn’t support Symbol, BigInt, and Boolean values.
Symbols have been discussed off-line with a number of delegates as a potential option; Boolean is something that has been discussed within the TypeScript team. BigInt is one that is useful because it has been the case that, working with bitmasks, you can run out of space in a 32-bit integer. BigInt is an option, but it has fairly poor performance because of how it’s implemented. Plus, it’s a variable-length integer; you are not fixed to something like a 64-bit int. So these are areas where we’re considering potentially adding support, but we are again weighing the specific motivations and whether these are things we want to bring to TypeScript enums. + +RBN: There are areas for future enhancement: opt-in auto-initializers, ADT enums, interaction with pattern matching, and decorators. + +RBN: Opt-in auto-initializers: it’s been argued, as I’ve talked about this proposal with various delegates over the years, that such behavior could be a footgun by default. It can cause issues with package versioning. As with TypeScript enums today: if someone inlines a value for performance reasons because they knew what the value was in version 1 of a package, and upgrades to version 2, it might no longer match if new members were introduced in the middle of existing members. + +RBN: However, even with that footgun, this is still a highly desirable feature for users. A large number of the public enums I looked at on GitHub were using this auto-initializer capability. One way we have considered to bring that capability back would be, instead of having an implicit behavior, to have an explicit syntax: a function or an object with a built-in symbol that could map things based on the current position and information. + +RBN: So `enum of Number` would auto-number with values of 1, 2, and 3 and so on; `enum of String` would auto-initialize each member to the name of the enum member itself.
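To make the shape of those opt-in auto-initializers concrete, here is a sketch of the values they might produce, simulated with plain frozen objects, since the `of` syntax does not exist anywhere yet:

```javascript
// Hypothetical results only; neither `enum ... of Number` nor
// `enum ... of String` is real syntax today.

// `enum Level of Number { Low, Mid, High }` would auto-number members
// (shown starting at 1, as in the presentation):
const Level = Object.freeze({ Low: 1, Mid: 2, High: 3 });

// `enum Flag of String { Read, Write }` would use each member's own
// name as its value:
const Flag = Object.freeze({ Read: "Read", Write: "Write" });

console.log(Level.High); // 3
console.log(Flag.Write); // "Write"
```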
The other way we might consider doing this would be an `auto` keyword as an opt-in. It's a fairly small opt-in, a small hurdle to get over to get back to the TypeScript-style enum approach. But in that case it’s numbers only; if you want to add more complexity to specify what type of auto-initializer you get, you should consider things like `of Number`, `of String`, et cetera. + +RBN: One of the reasons we don’t have this in the core proposal is that there’s been some discussion about things like the of clause. If it is dynamic, then it can’t really be optimized by a native implementation. If it is to be optimized by a native implementation and is not dynamic, you don’t need the symbol for it, but it still runs into things like aliases and shadowing, where something could be declared `Number` and still not necessarily be statically analyzable. + +RBN: But again, a lot of these things are capabilities we might consider during Stage 1 of the proposal, as we continue to discuss which features we want to see advance to Stage 2. + +RBN: Another big area of interest, especially amongst a number of JavaScript developers that I have talked to in various forums, especially Twitter, has been things like ADT enums: the ability to specify something like an Option type—an enum with Some holding a value and None with no value—or a Result type, with Ok holding a success value and Error holding a reason. This came up recently in off-line discussion about a possible try expression: what would the result be? (?) If we had something like ADT enums, we would want to use something like a Result type as the way to define those values. + +RBN: In addition, these types of behaviors would fit in nicely with extractors and pattern matching, as in the example from the extractors proposal in earlier slides, where you could match on the enum members and extract values using extractors.
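The ADT flavor being described can be approximated today with plain objects; this is only an illustration of the idea, not the proposed syntax or semantics:

```javascript
// Hypothetical simulation of an ADT-style Option enum. The real
// proposal would use `enum` syntax and integrate with pattern matching;
// here we fake the variants with a factory and a singleton.
const Option = Object.freeze({
  Some: (value) => Object.freeze({ kind: "Some", value }),
  None: Object.freeze({ kind: "None" }),
});

// Stand-in for what a `match` expression might do:
function unwrapOr(opt, fallback) {
  return opt.kind === "Some" ? opt.value : fallback;
}

console.log(unwrapOr(Option.Some(42), 0)); // 42
console.log(unwrapOr(Option.None, 0));     // 0
```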
So it’s definitely an area we have been investigating, along with extractors and pattern matching, for a while. + +RBN: One of the reasons ADT enums aren't part of the proposal is that we want them to interact with structs and shared structs. There are dependencies and tie-ins to consider, and we want to pursue this as a follow-on to this proposal. + +RBN: As mentioned, there are some strong tie-ins to future pattern matching, but also to things like normal enum members, where we might specify a `Symbol.customMatcher` as part of the enum declaration itself that would allow us to use `is`, or whatever infix we might settle on, to ask: is this part of the domain of `Color`? It’s only 0, 1, or 2. There are some additional advancements to pattern matching to consider with that approach. + +RBN: Decorators—I don’t want to spend too much time on this. Once we get decorators in the ecosystem, beyond their use in TypeScript code or Babel, there are other avenues to consider in the future for using decorators on other declarations. For example, having decorators on enum members, with the distinction of whether this is an enum member or a field, so the decorator knows what metadata to look at—for example, to control serialization/deserialization, formatting, marshalling, etcetera, when defining the declarations. This is a feature used in C# when doing JSON serialization as well as serializing to other wire formats; it’s fairly commonly used in those cases. But it’s not a critical-path capability we are looking for now. And again, this is something we might consider once we see implementations of decorators in runtimes and have some time to see what the user uptake of the capability is. + +RBN: So in conclusion, one of the main things we are looking at here is to eventually standardize some form of TypeScript enum, so that developers using JS with type stripping could use enums today.
To have some of the advantages of the enum syntax over something like an ObjectLiteral, and to have some flexibility going forward for introducing ADT enums. + +RBN: What we are looking for today is potential advancement to Stage 1, to investigate enumerated types and determine which of these semantics we might want to adopt, what we want to consider, whether this still feels like a good fit for ECMAScript, what direction we need to go, and what changes we might need to make to TypeScript to make these things possible. + +MM: Could you go back to the slide with the desugaring? I understand you are not committed to this desugaring. In this desugaring, there are several things that I think are problematic. One is that if, for example, you swapped the line where you’re initializing B with the one initializing C, then C would be doing a property get on E.B, and with this desugaring, that would not produce a TDZ error—it would produce undefined. And then `A | undefined`, whatever that evaluates to, becomes the value that C gets initialized to. Along the same lines, if, for example, the right-hand side were calling a function, providing E as a value, `F(E).B`, then it would be making E available to other code while it is in an uninitialized state. These are both the same problem: the visibility of E before it is fully initialized. + +MM: Now, with all that said, could you now bring up issue #25? Which some of you have already looked at. + +RBN: I am not currently set up to bring that up, because I am sharing just the presentation. + +MM: Okay. + +RBN: But I would like to talk with you more about this issue off-line. One of the main reasons we currently support referencing the enum declaration by name in TypeScript is so that you can reference values that aren’t identifiers.
If you have a StringLiteral enum member that is something like—hold on a second. Looking at it again, one of the examples I found in a simple code search… I was looking at a code search of enums with dashes in the member names, mostly because the enum member was referencing the name of the thing itself. You can’t reference those as identifiers. One thing we considered in TypeScript is deprecating support for StringLiteral names, but there are still reserved words that couldn’t be used on the right-hand side. So if you used “return” or “default”, lower case, as your enum member name, you couldn’t reference it in an enum value. That’s why we were considering it. This is something we want to look into more as we go. + +MM: You agree that the desugaring you are showing doesn’t deal with enum member names that are not valid identifiers as well? + +RBN: Yes. The desugaring doesn’t handle that case, and you could call a function passing in the enum declaration. One way we could address this, at least in part, is that we could predeclare the shape of the enum so it has a fixed shape, even if the members themselves are not yet marked as configurable: false; as you initialize them, we then mark them as configurable: false. So there’s potential to get the error at declaration time, as opposed to getting an error later on when using the enum. These are things to consider. Also, we have been talking about the same thing with the structs proposal, around being able to pass a shared struct to something while it might be uninitialized, and discussing the possibility of having read-only fields inside a shared struct. We will continue to discuss. + +MM: Because all of the issues I am raising, you are acknowledging, are open and to be revisited—altogether, let me say, I like this proposal, I like this direction.
In particular, I appreciate that you are not trying to reproduce exactly the existing TypeScript semantics, that you are willing to propose here a more principled, better-behaved semantics. And for TypeScript code that stays within the semantics that work both ways—that work with this proposal and with existing TypeScript—that code would also be erasable TypeScript, once this proposal is accepted into JavaScript. And I am very excited about erasable TypeScript. Enums were the biggest hole in the valuable things that had to be removed from TypeScript, and this would address that. I will leave it there. + +SYG: So RBN, could you walk me through how you would adopt the new enums, if they don’t have the exact semantics of today's TS enums, in erasable mode? + +RBN: In most cases where the syntax or semantics differ from TypeScript, we are considering making changes to TypeScript. The one case where we are not is how auto-numbering works—that is too much of a breaking change. TypeScript has a mode called erasableSyntaxOnly, designed to only allow the syntax in your TypeScript code that is also allowed with type stripping in Node.js. If we were to support native enums, because they are now standardized and thus available in Node.js applications, we would instead only restrict the parts of an enum declaration that are still TypeScript-only, which would be the default auto-initialization behavior. If we introduce an opt-in auto-initialization capability, we would guide users to it, including an error and a quick fix that migrates to it. We have a story going forward for how to adopt these changes—some of the other changes to runtime semantics, such as enum declaration merging, are things we were already considering deprecating for TypeScript 6.0 later this year—and we are also looking into other potential changes to the semantics that we might want to later forbid.
That gives people time to transition to new things, like moving from the old reverse-mapping approach to `Symbol.iterator`, if we decide to advance. We have leeway there to make some of the changes. The number of people that use things like reverse mapping is relatively small, but it is an important capability in those use cases. In general, we are going to match syntax and semantics as much as possible and preserve auto-numbering. + +SYG: Two follow-up questions. One: is it then a fair characterization to say there is a constraint that if we standardize a piece of syntax that is exactly the same as a piece of TypeScript syntax, the semantics we standardize must also be exactly the same as what TS currently exposes? Like, is the auto-numbering versus non-auto-numbering semantics currently syntactically distinguished? + +RBN: It is—in TypeScript, by not having an initializer on the enum member. That’s how it’s been for a decade. + +SYG: Does that mean then that if we standardize—just as a hypothetical, no value judgment here—if we standardize a JS enum without auto-numbering, it must be syntactically distinguished from the default auto-numbering enum that TS has today? + +RBN: If you are saying that it has no auto-initialization at all, then no, because you can write a native enum with no auto-initialization in TypeScript today by putting in an initializer. So that is the syntactic distinction there. If the concern is that we chose auto-initialization with a different default—which was discussed several years ago, say with Symbol as the default—that is something we would have to call out as something we explicitly don’t support, since we want to maintain backward compatibility with TypeScript. There are a lot of declaration files that exist across numerous applications, and right now, if a declaration file is handwritten and uses enum, we have an assumption of what the results are. We wouldn’t want to break that.
Which is why we say instead: if we wanted to change the auto-initialization behavior so that it doesn’t match TypeScript, it would be better to do it through an opt-in approach that is syntactically distinct—let me jump to… something like having `auto` or an `of` clause, to be syntactically distinct from the default auto-initialization behavior. That’s the approach we would recommend. + +SYG: So then the adoption story is: if we need to make changes, we will add new syntactic ways to distinguish the changes, but that will obviously require the TypeScript libraries and apps—to keep being erasable—to also update their code. That’s an accurate assessment, right? + +RBN: That would be an accurate assessment, yes. It isn’t a concern when it comes to referencing an enum that someone else is publishing as part of their declarations; we assume that’s a property access on an import. The only case where it differs is a `const enum`, and that doesn’t make sense in plain JavaScript. But yes, if we want to do something that differs, then we would want to use a syntactic mechanism, and developers would have to adopt that mechanism to support it. Yes. + +SYG: Okay. Thanks. + +PFC: Hi. I think this is a great proposal and I would like to see it advance. Did you have any thoughts on whether enum declarations should be able to be decorated? Not the individual members, but the whole declaration? + +RBN: Yes, they would be. That’s not clear—I don’t show an example of a decorator on the declaration here, but in the second bullet point, I say we would want to distinguish between kind enum and kind class when decorating the declaration itself. So yes, that’s something to consider. Most likely, if decorators are a feature we add to enums, it would be a follow-on proposal that comes after decorators reach Stage 4, alongside some of the other decorator proposals.
And there’s hesitation to advance other decorator proposals until the logjam around decorator support in runtimes is addressed and we get some feedback on implementations in the wild and people starting to use them. It’s likely that decorators wouldn’t come to enums until decorators exist in runtimes, but we would definitely support it. Yes. + +PFC: Yes. Sorry, I missed the second bullet point here. Thanks. + +DE: Hi. I am wondering about the pros and cons of this feature versus a purely TypeScript-level feature, where we focus on making sure that you can have the ObjectLiteral with the types declared. You mentioned a few advantages, such as being able to do self-references, the way it could be frozen afterwards, the way it could be a host for other possible features. And there’s obviously the aesthetics of sticking with what developers have found to be intuitive. So yeah, could you speak to this? + +RBN: Yeah. So the approach that’s outlined in the pull request you mention is the `as enum`— + +DE: Yeah. Now it’s called … (?) + +RBN: For all the reasons I have listed on this slide, we were generally less than enthusiastic about ObjectLiteral enums on the TypeScript team. There are a number of limitations that make them not usable for a lot of enum cases today. It doesn’t give us that avenue for advancing potential future directions like ADT enums, which is a very popular capability that I’ve been discussing with a number of folks. Unless you are using a static type system, it doesn’t give you the ability to do self-referencing, which is extremely useful and necessary for anything that works with bit flags and bitmasks—which, if Node.js were to adopt this for specifying flags for file open modes, those are all bitmasks. You’d want something that works in those cases and is easy to define. ObjectLiterals don’t give you that capability.
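The self-reference point can be illustrated with the one-member-at-a-time desugaring from earlier in the presentation; the flag names here are made up for the example:

```javascript
// Sketch: later enum members referencing earlier ones to build a
// bitmask, which a plain ObjectLiteral cannot express without a
// static type system. Simulated with defineProperty, one member at
// a time, then frozen.
const OpenFlags = {};
Object.defineProperty(OpenFlags, "Read",  { value: 1 << 0, enumerable: true });
Object.defineProperty(OpenFlags, "Write", { value: 1 << 1, enumerable: true });
Object.defineProperty(OpenFlags, "ReadWrite", {
  // Self-reference to earlier members, as in `Read | Write`:
  value: OpenFlags.Read | OpenFlags.Write,
  enumerable: true,
});
Object.freeze(OpenFlags);

console.log(OpenFlags.ReadWrite);                   // 3
console.log(OpenFlags.ReadWrite & OpenFlags.Write); // 2 (flag test)
```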
ObjectLiterals would need additional work after the fact, like freezing the object, and wouldn’t give you the opportunity for potential things like inlining and runtime capabilities, where the runtime might be able to look at the object shape and—if it knows this variable can’t change because it’s an import, and it knows this property member can’t change because it’s frozen—might be able to inline in native code. We are not depending on that, but it's something we want to see in the future. `const enums` have shown they can be significantly faster when you can do that inlining. And there are some performance enhancements that runtimes are looking at as we have been looking at the structs proposal, and we won’t get that with ObjectLiterals, I don’t think. + +DE: So the restricted domain of values part is what I am having trouble understanding. Why is that a benefit? + +RBN: I mentioned before that we may eventually want to support things like ADTs, which I think we may really want to investigate—I think they’re an alternative to things like records and tuples, and they slot in well with extractors, pattern matching, and the like, as well as other proposals that we have been discussing for a little while now. Having those Option and Result types and the stronger capabilities means that if we don’t limit it now, and tooling is built up that says, “I am going to do something with enum members, and the enum value could be anything,” then it makes it really hard to later say, now we support ADT enums. Is this an ADT enum member because it has typeof function, or is it some other function? It makes it hard for runtime tools to make those types of distinctions, because there’s no way to indicate it—just like, other than looking at toString, there’s no way to tell the difference between a function and a class, other than trying to construct it.
This is simplified from earlier versions, but limiting the surface area of enums gives us flexibility for advancing in the future; the more surface area we leave open, the more we paint ourselves into a corner with other capabilities later. + +DE: My other question was about the transition from TypeScript’s current enums to this. What we saw with the set -> define transition was that there wasn’t a syntactic difference, but there was a semantic difference. It made it pretty difficult, because locally you couldn’t switch between the two; it had to be a global flag. Are we falling into the same issue here? + +RBN: I don’t think we will be. There were two problems that came up with set versus define. One was how we did the downlevelling to be compliant, because there are certain things—like if you tried to override a method with a field—where how that works differs under set semantics; it had to deal with inheritance. The other was around producing those types of error messages: how do you know it’s doing the right thing? We had to have ways of knowing that you are trying to override a getter/setter, and we did introduce syntactic differences to help with that: previously, in declaration files, we would emit a get/set as a field, whereas we introduced ambient getters and setters to distinguish them and produce the right results. + +RBN: The difference there is also that we needed the flag to control emit behavior, because people had a dependency on how these fields were declared. It’s still an issue today, because people are using legacy decorators that have expectations on how fields are handled. You could put an access modifier on a constructor parameter and it becomes a field of the object, avoiding the boilerplate of doing the assignment and adding the field declaration. All those things ran into issues when it comes to set versus define semantics. None of them are issues with enums.
People don’t look at whether an enum member is configurable in TypeScript, because we don’t allow you to overwrite them when using the type system. As for the break between when we support this and when we don’t: right now, if you want to use enums with erasableSyntaxOnly, it fails. It will continue to fail until there is a version of TypeScript that has support for emitting native enums, which won’t be until the proposal has advanced to Stage 2.7 or 3—in which case, you would be using a version of the compiler that knows how to emit that. In general, when referencing an enum from another file, all you care about is that it’s an `identifier.propertyName` access. There are no different emit semantics for access; it’s always going to be self-contained within your own code, with no subclassing or similar concerns. So I think all of those concerns we had for set versus define are not relevant to enums. Any of those behaviors that need to change within TypeScript for runtime semantics are things we have a deprecation plan for. We release changes over iterations so folks have time to make them. If someone says, “I need a way to emit old enums,” we would provide a flag giving them that capability while they modify their code, because consumers use `propertyName.memberName` to access those enum members. We don’t think there’s going to be a concern there. + +DE: So I didn’t quite get it. Are you saying nobody uses enum introspection from outside of the module where the enum is defined? + +RBN: The only times we have seen that happen were really around reverse mapping. And reverse mapping, again, is not fully reliable, because it only works for number enums and not StringLiteral enums. We are willing to change that behaviour and give developers some time to address it.
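A user-built reverse lookup over an iterable enum might look like the following sketch; the `[name, value]` iteration shape is an assumption, since the proposal's iteration protocol is not yet settled:

```javascript
// Simulating an iterable enum with a frozen object whose iterator
// yields [name, value] pairs (an assumed protocol, not the spec'd one).
const Color = Object.freeze({
  Red: 0,
  Green: 1,
  Blue: 2,
  [Symbol.iterator]() {
    // Object.entries skips the symbol-keyed iterator method itself.
    return Object.entries(this)[Symbol.iterator]();
  },
});

// Reverse mapping done in userland, instead of TypeScript's built-in,
// number-only reverse mapping:
function nameOf(enumObj, value) {
  for (const [name, v] of enumObj) {
    if (v === value) return name;
  }
  return undefined;
}

console.log(nameOf(Color, 2)); // "Blue"
console.log(nameOf(Color, 9)); // undefined
```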
+ +RBN: And that’s again why we introduce things like `Symbol.iterator`: as a way to give you the guaranteed set of the domain of the enum, regardless of the static methods we use or capabilities with ADT enums. So you always have the fixed set of elements we support. And if this is something we decide to advance, we’ll get that into TypeScript as early as we can so people can start transitioning. + +DLM: You’re at time. Shu is on the queue. If you could be quick, we could do a call for consensus before we stop here—general support for Stage 1. + +SYG: A reminder to RBN and other folks in the committee. This is not a Stage 1 blocker, but as I have talked about before, I will certainly be assessing, from the browser point of view, any new proposals on how much is a pure DX thing that can be desugared, versus whether there are concrete benefits for the browser at runtime. So just keep that in mind. You have alluded to some runtime benefits, such as the fixed layout that can be taken advantage of. That is definitely something I am looking at closely. + +RBN: Fixed layout, and the other one is type stripping. Node type stripping right now— + +SYG: That’s not a runtime benefit. + +RBN: It is, in that Node.js is not going to do a downlevelling of enums. So this is a feature that cannot come to Node.js type stripping support unless there is a runtime capability. Even if it is a desugaring— + +SYG: We can take this offline. + +RBN: I would like to ask the committee for Stage 1. + +DLM: Support from MM, PFC, JHX, CDA, NRO. + +DLM: Okay. I think that’s it then. Thank you very much. And yes, congratulations on Stage 1, RBN. + +RBN: If anyone has any other feedback, add it to the enum proposal. I will have that migrated over shortly. Thank you. + +### Summary + +Proposed adoption of enums into ECMAScript, roughly based on TypeScript’s `enum` declaration. Like TypeScript’s `enum`, a native `enum` would support enum members with initializers limited to a subset of primitive types.
Unlike a TypeScript `enum`, a native `enum` would not support auto-numbering by default. So long as backwards compatibility is not affected with respect to auto-numbering, TypeScript has expressed a willingness to adopt a number of semantic changes to align with native support. Some concerns were raised regarding some of the proposed self-referencing behavior, which will be further explored during Stage 1. + +### Conclusion + +Advanced to Stage 1 + +## `Object.propertyCount` for stage 1 or 2 + +Presenter: Ruben Bridgewater (RBR) + +* [proposal](https://github.com/ljharb/object-property-count) +* [slides](https://github.com/tc39/agendas/blob/main/2025/2025.04%20-%20Object.propertyCount%20slides.pdf) + +JHD: Hi, everyone. RBR just became an Invited Expert. He and I are co-championing this proposal. The problem `Object.propertyCount` is solving, which RBR is going to talk about, is something I run into frequently, so I was very excited to walk through this when he approached me with the idea. RBR is a [Node TSC (Technical Steering Committee)](https://github.com/nodejs/TSC) member and core collaborator. And I will hand it over to him to present better than I would have been able to. Go for it. + +RBR: Thank you very much for having me here. It’s the first time for me to be on the call, so it's very nice that I am able to present. I am pretty certain every one of you has heard multiple times that JavaScript is a slow language. Thanks to JITs this is mostly no longer true in most situations. One thing, however, has bothered me: the language doesn’t provide any way to implement a lot of algorithms in a very performant way, and one of them relates to counting the properties of an object in different ways. It’s a very common JavaScript performance bottleneck I have run into.
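The bottleneck in question, and the allocation it implies, looks like this (`Object.propertyCount` itself does not exist yet, so the call at the end is only a sketch of the idea):

```javascript
// Today: counting own enumerable string-keyed properties allocates a
// throwaway array just to read its length.
const user = { id: 1, name: "Ada", admin: true };

const count = Object.keys(user).length; // allocates ["id", "name", "admin"]
console.log(count); // 3

// The proposal's idea, sketched: return the number directly, with no
// intermediate array (the name and options are not settled):
// Object.propertyCount(user); // would return 3
```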
+ +RBR: And the motivation is pretty much that any library or framework that you look at is going to use at least `Object.keys` on some object and then `.length` directly. So what we are doing is allocating an array for all the different properties on it—even if it’s an array, for example, and yes, arrays are also passed to `Object.keys()` for multiple reasons: an array can also contain additional string keys that are not indexes, and as such, you want to know whether they exist, and then you have to do that. + +RBR: So that is something very frequently called. We allocate that array, and we then need the garbage collector to remove it, just to get the number of properties. Something similar is done with `Object.getOwnPropertySymbols` and `Object.getOwnPropertyNames`; especially with the symbols, what most algorithms I looked at are doing is filtering out non-enumerable ones. That is something very frequently happening, and it could be a fast path, depending on the data structure these implementations are using, to check whether there are non-enumerable symbols or enumerable ones. So generally, it’s mostly used as a fast path for things. And now let’s think about the performance cost in total. What do we actually have to do when wanting to call such an API, or when we do something like `Object.keys()` and the array creation? Performance measurement is hard because we have a just-in-time compiler and a garbage collector. We have C++ and JavaScript boundaries to cross—maybe it’s not C++, but cross-platform assembler. All of these different aspects in different runtimes can have a huge impact on the actual performance. + +RBR: And still, there are a couple of things we can determine for a given part in a given runtime. We have an initial API call cost, which is mostly CPU time, and we will not be able to overcome that cost with a new API; there is going to just be the new API call cost.
One interesting thing: `Object.getOwnPropertySymbols`, for example, is currently implemented in V8, if I am not mistaken, in C++ and not in cross-platform assembler. What happens there is that the initial call is already very expensive in itself. It would definitely be possible to optimize that further. That is one reason for me to propose a single API, which I will come back to later, instead of multiple ones: then we overcome any of these implementation difficulties. The cost of traversing the keys is again mostly CPU bound. In this case it could theoretically be improved if the engine, for example, tracks how many keys were added since creation and then just returns that number instead of actually iterating over all keys. That is a theoretical optimization inside the engine. There are other things, like proxy methods and so on, that it would have to be checked against, and I am curious about discussions with implementers about potential ways of dealing with that. What we can be pretty certain is possible to optimize is the cost of allocating the array, which is both CPU and memory bound. That allocation would be completely gone afterwards; we don't need it, and the garbage collector won't be necessary for it anymore. Depending on the concrete implementation and the runtime, there might also be costs involving, for example, index keys that are internally represented only by a length. The engine may optimize internally: "I have index keys from 0 to 100", and then there is no concrete key stored anymore; instead, each time it has to actually create the string for that specific number. So that's an additional cost, depending on implementation details.
+
+RBR: Yeah, effective performance. I thought about whether to show numbers, and I decided against it, because this depends so much on what the algorithm that is used really looks like.
And also on the concrete implementation in the runtime. From what I have seen, the improvement mostly starts at two-digit percentages on average, and it can become very significant, depending on the algorithm and on the object or array in question.
+
+RBR: What use cases do we have? Definitely something like input validation: I want to know whether an object has a specific number of keys before even continuing to look into it. This is a common need for a lot of APIs. Let's say, on the back-end side, for HTTP calls: if I want to know whether the mandatory number of properties is there, I don't have to check each one; I can bail out immediately, which is great. In general, guarding against too-large input is something many APIs could use. Object comparison is a very frequently used case. In a lot of different algorithms that do object comparison, we currently have to do the comparison from both sides. For one side we have to get the keys out of the object, but for the other side that is actually not necessary: you just have to compare the counts, and that can be optimized.
+
+RBR: Sparse array detection is something I would definitely like to be able to do in the language, and that is one option that I am going to propose, because I want to determine how many index keys, non-index string keys, or symbol keys exist, as a count. Right now most APIs and use cases don't do that, because it is very costly to determine. Either they accept that they might have a performance overhead for sparse arrays by iterating over all the holes, or they just return something like `undefined` for the holes. It really depends on the concrete implementation and what guarantees they want to give for that API.
+
+RBR: It is also good for detecting extra properties on an array-like object.
On an array, for example, there could be an additional property, and it would now be easy to determine whether that exists or not. Telemetry data could be another use case, because you want to gather it fast; also testing utilities, and in general a lot of fast paths are where it could be used.
+
+RBR: The name `propertyCount` was deliberately chosen to be relatively open. It does not contain `own`, for example, for a reason: theoretically, it would be possible to add an option for inherited properties at a later point. I don't believe that's a common use case, but the name at least keeps it open. Otherwise, it is pretty clear about what it is doing: it's counting the properties of any kind of object. We have the target that we apply it to; if that is not an object, a TypeError would be thrown. Something similar applies to the optional options object. If that is passed, it is possible to differentiate between a couple of different things. First of all, the key types. I know of three different key types: there are index keys, which are different for arrays and typed arrays (a specialty that we would still have to look into), and also non-index string keys and symbol keys. At least V8, for example, already differentiates exactly these three types of keys internally. That is to my knowledge; I worked on this part a few years back and I believe it didn't change since then. The default should align with `Object.keys`, so it would be the combination of index and non-index string keys, to reflect the most common use case without providing any options. Then there is an additional option to check for enumerable properties. It has three different values: `true`, `false` and `'all'`. From what I have seen in the wild, `true` is common, `'all'` is also used sometimes, and `false` I didn't find, which is an interesting aspect.
That's something we might look into. The default is `true`, to reflect the same behavior as `Object.keys`. As with the target, if any option is invalid, a TypeError should be thrown; it doesn't matter whether it's a wrong property key or a wrong value.
+
+RBR: I also considered alternatives. For example, the options could avoid an array for the key types: we could instead have a flat object that has index keys, non-index keys and symbol keys as direct properties, each with a boolean value, instead of the nested array. Otherwise it's identical to the former. So I am continuing.
+
+RBR: I already spoke briefly about options—did I? I am not sure. Options versus multiple methods is something I thought about. The shape of the API is very important for me personally: I want anyone to be able to use the API without having to think about the default use case, which would match `Object.keys`. So it is very simple and straightforward to use, and any expert could then provide additional options to gain even more benefit in a couple of more complex algorithms.
+
+RBR: It is also my experience, speaking about V8 in this case, that the implementation of fast APIs is actually slower when we have multiple APIs, because there is additional implementation overhead to provide all of them. With something like cross-platform assembler, having a single API means far less implementation overhead, because the work only has to be done once. As soon as we overcome the cost of the JavaScript-to-C++ boundary crossing, that is something positive for performance.
+
+RBR: Why only own properties? I couldn't think of any use cases for inherited properties so far; I did not check for them. But if someone believes something is necessary there, at some point it could still be added in a way compatible with this proposal.
If this API would ever be extended in that fashion, that was mainly my thought, so I kept `own` out of the name for that purpose.
+
+RBR: In the repository I actually provided a lot of different examples; DE asked for the different variations of the algorithms and the options they would map to. Angular uses the regular `Object.keys` pattern, but they also do `Object.getOwnPropertySymbols` and filter out non-enumerable ones. React has an implementation using `Object.keys`; they also use `Object.getOwnPropertyNames` as-is, and they use `getOwnPropertySymbols` both with the enumerable filter and without it. As far as I have seen, those are the different use cases for React. Lodash has it as well. Across Next.js and all these different codebases, all possible combinations show up besides `enumerable: false`. Node, for example, also has a specific index check; it's similar to `Object.keys` plus filtering. We actually use a C++ API from V8 instead, because it allows a couple of APIs to be significantly faster by skipping non-index strings in those cases, or the other way around.
+
+RBR: With index and non-index string as before, I believe Node has the biggest variety of different options that I found. I also know about them because I mainly work on Node, so I knew most of these use cases before.
+
+RBR: All these examples are taken only from production code. I tried to exclude any test code, because while tests are also important to run fast, they would not be as crucial as other situations. So this is all production code. As for real-world examples that set the enumerable value to `false`, I don't know whether those exist; I cannot tell. Index properties, I believe—I am certain that mode would be used as soon as it exists.
The reason I haven't found it so far is probably just that when you have to determine how much overhead it takes to do something like that in an algorithm, it's so expensive that people decide they are not going to support that case. It would still be possible in the future.
+
+RBR: A couple of edge cases. Index properties are difficult to determine because array indices and typed array indices have different limits, and I didn't check indices on any other kind of object; I believe they also have specific behavior, but I am not sure about this one. Null prototypes, for example, are something that might have to be looked into. In that case, I believe it's relatively natural to just work as with any other object: only a property that is actually there is going to be counted.
+
+RBR: The API suggestion is meant to be backwards compatible and performant, simple to use, and flexible. Someone brought up whether this could be addressed with Maps instead of objects. First of all, Maps normally address a different need than objects. For example, when you have just one configuration object, you normally don't want to use a Map; you want to use an object. So this is something where the fast path would benefit from this API. On top of that, for a Map you always have to compute the hash value, which is, if I am not mistaken, a little more difficult to calculate for a Map than for object keys, because object keys are only strings and symbols. So the algorithm behind it may be simpler (I don't say it is, I say it may be simpler) than for Maps, because a Map also has to differentiate between other key types to accept them, while for objects everything would just be coerced to a string.
+
+RBR: Next steps would be pretty much getting your feedback and input, addressing the comments, and deciding whether this could become a Stage 1 or 2 proposal. All requirements for Stage 2 are, as far as I know, already addressed.
So I am thankful for your input now.
+
+JHD: Point of order, just to jump in: we only have 5 minutes left in the timebox, and based on some of the discussion in Matrix, it would be great if we could first focus on queue items that might block Stage 1, and save the Stage 2 stuff for later or another time.
+
+CDA: Just a quick note on that: we do have some time available in this afternoon's session, so we can run over and hopefully get through the entire queue.
+
+USA: Yeah. So based on this suggestion, I am going to go through the queue one by one and ask if there are any Stage 1 concerns. KG, is yours—okay. So, MF, what about yours?
+
+MF: I think mine might be a Stage 1 concern. I am coming at this from the viewpoint that this proposal is entirely performance-motivated; it doesn't seem to be the case that there's additional power here. With that in mind: 20 years ago, when you wanted to loop over an array (because that was the only way to enumerate; there was no for-of or forEach), we would write a for loop with a variable that increments until it reaches `array.length`. Right? And it was common for a lot of developers to take that `array.length` and cache it ahead of the loop, so that it could hint or guarantee to the engine that the number of iterations of the for loop is not going to change, that you are not modifying the array during the loop. That was a commonly known and used optimization. But there were a lot of JavaScript developers, and a lot of people weren't doing that; they were still just looping until `array.length`. And engines ended up detecting that the array wasn't modified in the loop and optimizing it. It's no longer the case that, if you write a for loop and don't use the modern facilities, you would need to do this length caching. I think it's the same case with this.
I think this is a fairly simple pattern to detect, where you don't have to actually realize these intermediate data structures and the engine can provide the result efficiently. And since it's performance-motivated, I would rather put my eggs in that basket: having the engine optimize it if it's truly common in the ecosystem and commonly done in places where performance matters a lot. If engines would like to speak to the difficulty of that, I would love to hear about it, but I have confidence there. If we hear negative feedback, I don't think it's worth pursuing this proposal at all.
+
+RBR: Since I'm not an engine implementer, I'm not the best person to answer that.
+
+SYG: I will just jump in. Your particular example, MF, is something that sounded like a hot loop. The optimization opportunity for hot loops like that is pretty different, because the intuition there is that the code will eventually hit the optimizing tier. The examples for this proposal seem to be kind of all over the place, not necessarily in hot code, and the optimization opportunities for non-hot code are a lot fewer. I don't think it's basically possible, or worth it, to ever optimize this in the non-optimizing tiers. I am convinced by this proposal's performance data based on the intuition that this is a popular thing that people do all over the place, and the cost kind of adds up in aggregate. If it were only ever used in hot loops or hot code, I would agree with your argument, Michael, that we can lean on the optimizing tiers to do the fancy optimization to get rid of the allocation, but that's not the sense I am getting of where this pattern is being used.
+
+RBR: One additional part to that: a couple of the algorithms are not doing `Object.keys(obj).length` directly. Sometimes they have calls in between, or they just use a different algorithm implementation because nothing like this API exists so far.
So that is definitely also something that could never be addressed by any engine, because it would just not be detected.
+
+USA: Okay. Let's move on with the queue. If there's no more after that… next is MM. MM, is your topic—
+
+MM: My topic is not a Stage 1 blocker. It is a Stage 2 blocker.
+
+USA: Okay. Shu, what about your topic later?
+
+SYG: Similar. Stage 2, not Stage 1.
+
+USA: I think this is it for Stage 1 discussions. What do you folks propose?
+
+JHD: DLM, was your point something that needed to be addressed? I think we wanted to hear from you.
+
+USA: DLM's point is no longer on the queue.
+
+JHD: If it's not relevant for Stage 1, at this point—
+
+DLM: I will just jump in. We do optimize for this, as Shu pointed out, but only in hot code. So the optimization in the engine won't apply in non-hot paths. Shu provided a good answer as to why my point wasn't really relevant for this proposal.
+
+JHD: Thank you, DLM.
+
+JHD: So then I think, based on the time, we should—let's ask for Stage 1, and we will defer to a later time or a future meeting to discuss the rest of the queue items and potential Stage 2.
+
+USA: All right. The champions are requesting Stage 1. We have one statement of support by DE, support for Stage 1.
+
+KG: I also support.
+
+DE [on queue]: +1 for stage 1
+
+CDA [on queue]: +1 stage 1
+
+DJM [on queue]: +1 for Stage 1
+
+USA: Sounds like congratulations. You have Stage 1.
+
+RBR: Thank you.
+
+JHD: Thank you very much.
+
+JSH [on queue]: +1 for Stage 1
+
+[returning to non-stage 1 blockers]
+
+KG: I was skeptical of the proposal, but I was convinced that the basic use case comes up frequently enough that it makes sense to be in the language. I also searched our own codebases and got, you know, more than a dozen hits, so it's not something that only random amateurs are doing. It's a common thing, even among people who are familiar with the language. So I am very happy to support the basic use case.
I am extremely skeptical of everything that is not the basic use case. The key types option (index versus non-index versus symbol) is an explosion in complexity in the API, and it's nowhere near as motivated. I couldn't find any cases where I or my codebases have needed any of those patterns. You had a couple of examples on the slides, but I think they are much, much more obscure. In particular, some of the things you mentioned I would like to actively discourage: having a fast path for sparse arrays: I specifically don't want people doing that. Looking for non-index keys on arrays: I specifically don't want people doing that either. I think code should, with very few exceptions, be agnostic about sparse arrays and should not put non-integer keys on random arrays. That this is something added to support those use cases makes me want it less, not more.
+
+RBR: May I address that part directly? Actually, I am 100% aligned with what you were saying: this should never be a use case. There is no doubt about it. The motivation is actually to guard against that usage. For most APIs, you want to detect these as outliers and probably just reject them during input validation, for example.
+
+KG: Well, no. What I want is for you to not do that. I want your code to not be aware that anyone might do such a thing. If they do it, that's on them. Libraries should generally be written so that if someone passes in a sparse array, they treat it like a non-sparse array. If the library is slower, that is the problem of the person using the API.
+
+USA: On the queue we have JHD.
+
+JHD: Yeah, this is about the expando properties on arrays. A RegExp match, for example, creates one of those, and we do it in the language itself. I am in complete agreement that good code doesn't ever have sparse arrays in it.
It doesn't create sparse arrays or make arrays sparse, doesn't attach named properties to arrays, and treats arrays differently than objects: lists are different than property bags. I think we are actually largely aligned on what people should do. But the use cases here are for any code that cares about the real world instead of the idealized world that we all want. Such code needs to do these checks, and that often makes things slower even for the good people who are not doing the bad thing, because we still have to check for the bad thing. Making those checks faster allows code to be faster for the people doing the good thing. It's still going to be slow for the bad cases, because we have to do the slow thing if they are doing the bad thing.
+
+KG: I disagree. I don't think you need to accommodate people who are doing weird things. You can just not. It's fine.
+
+SYG: (Index vs non-index) I don't know about that. It does not seem like a good idea to me to have a mode that counts index versus non-index properties. It is true there are optimizations within V8 that distinguish the use of index properties versus non-index properties, but the only concept we can align on, if we make this a language feature, is the spec notion, and the spec notion is not how these are represented in the runtime; it is "here is a string that happens to round-trip to an integer value within this range". If that's a filter you want built in, that has complexity, and I would need to be convinced by the use cases that we ought to add it. Concretely, you may have seen there's a field called is_integer_index on our Name objects, and if it is set, we have already parsed the name into an integer (a size_t, 32 bits). The integer index concept in the spec goes up to 2^53 − 1, so that is not going to work directly. There's going to be more code required if you support that mode, and I don't think that particular mode is well-motivated.
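For context, the spec-style per-key test that such a filter implies looks roughly like this (a sketch of the ECMAScript "array index" notion written for these notes, not V8 code):

```javascript
// Sketch: is `key` an array index in the spec sense?
// An array index is a canonical numeric string whose numeric value
// is an integer in the range [0, 2**32 - 2].
function isArrayIndex(key) {
  if (typeof key !== "string") return false;
  const n = Number(key);
  return Number.isInteger(n) && n >= 0 && n < 2 ** 32 - 1 && String(n) === key;
}
```

A count of "index keys" would have to apply a check like this (or an equivalent cached representation) to every own key, which is part of the complexity described above. Typed arrays use the wider integer-index notion, up to 2^53 − 1, instead.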
+
+RBR: I mean, in this case, it's more that currently, when we compare objects (like, Kevin, you said it's ??? which is not detected), some algorithms do check for extra properties on an array: for printing them, for object validation, for equality checks and similar. This is where I know them from, and that's why the index versus non-index one is an optimization for these cases. I do understand that it's probably not as frequently used; that's completely understandable. Theoretically, what could be done is to remove that specific mode initially, and consider a different mode at a later point, potentially as another proposal, if more use cases come together, or look into it further to see where more uses in the wild exist for this particular…
+
+SYG: To KG's point: because this is an optimization proposal, not a new-capability proposal, if you choose to not handle a use case, that does not mean the use case becomes impossible; it just remains slower. On net, how much of a problem is it to have that one use case remain slower? My hunch is that dropping the index versus non-index case is not going to be that big a harm to the proposal, given the relative popularity of the base use case.
+
+KG: And to be clear, I am open to being convinced that this comes up enough in cases where the performance matters. But the printing use case is not motivating to me.
+
+RBR: On printing: in the browser it's not as crucial. However, in Node, for example, anything that is logged is super crucial to have the lowest possible CPU overhead, because a lot of users actually log a ton, and inspecting those objects is going to block the event loop. That is something that should not happen in any server application. So there, the most important part is to avoid the CPU work.
+
+KG: Right. I am not convinced by cases which are only relevant to Node.
Like, I think that those cases matter, but they are not sufficient on their own to warrant an addition to the language.
+
+MM: Defensive code that is trying to defend against any possible input that JavaScript code could send it is a very important use case, and that kind of input validation for our code certainly needs to be much faster than it is. This proposal would take some things that are performance-critical for us and change them from O(n) to O(1). We appreciate it. The main concern is around the distinction of index properties versus not. That distinction is crucial for our input validation and for what is performance-critical. Without it, I would have much less motivation to see this move forward at all. However, not all JavaScript objects have the same notion of index, and that raises an issue with regard to how this works across proxies.
+
+RBR: I am aware of those issues, and I don't have a solution for them myself. What we briefly discussed in an issue around this was that we would, depending on the target, determine the limit automatically, because there are normally specific limits for those targets; that would be one possible way of handling it. I am certain we could make it more explicit as well. I am very open to input on this.
+
+MM: So let me ask a specific question here which just might settle it: if the target-based distinction is only normal arrays versus everything else, which I think Mathieu told me it is, then since there is `Array.isArray`, which punches through proxies, there might not be a problem here, except for the cost of doing this through a proxy.
+
+SYG: Yeah. Mark, counting index versus non-index properties in the spec sense, if you include both the array and the typed array notion of index, will not be O(1).
It's cheaper than the current way, but it won't be O(1).
+
+MM: If it's not O(1), then the question remains: what practical speedup would this proposal give us in general? And if the answer is "not much", then again I don't find the proposal very motivating.
+
+SYG: Are you talking about just the index case, or the normal case of just counting properties?
+
+MM: I am speaking specifically about the index case. Specifically, the case that we find frustrating performance-wise is: here is an array-like, something that looks like an array; does it have any non-index properties? That's the check that we need to be fast, not proportional to the number of index properties in the array, which might be enormous.
+
+SYG: So you don't care about the number of properties; you just care whether it has non-index properties?
+
+MM: That's correct.
+
+MAH: Yeah. We only care about that for array and typed array objects.
+
+SYG: I am going to say that sounds to me like a different problem statement than the problem statement presented.
+
+RBR: That's fair. Actually, that is also how it is used in Node, so to speak, in a very similar way, and I believe that is pretty much the common use case for the differentiation; that is the main one. The second one would be determining that something is a sparse array, to have a fast path for that, which is less crucial; that does not have as big an impact. But counting, or generating, the index properties is very costly. Now, I know this for V8 at least (the other implementations are a little bit different, and I don't know what they would look like), but in V8 this specific question, "does it have any non-index string properties?", could be answered in O(1).
+
+RBR: All right. Thank you very much, and thank you very much for the input. I am going to check the remaining comments and would like to then follow up with everyone to see—
+
+SYG: I have a concern now.
It relates to Stage 1, because what has been teased out in conversation with Mark about the mode use case is that there is a different problem statement, particularly about arrays, that is not actually about counting properties. What is the thing we got Stage 1 on?
+
+RBR: I thought all of the API?
+
+JHD: Exploring the problem space of all of these things. That allows for disregarding some of these things during Stage 1.
+
+SYG: Please enumerate all of these things.
+
+JHD: If you want them enumerated, I would have to defer to RBR. But my example would be: trying to have a fast path for non-sparse arrays is one of the problem statements, and it would be fine if we decided during Stage 1 that it is not a problem we're trying to solve while we continue to solve the other ones. I will pass it to Ruben.
+
+RBR: Yeah. So I proposed the three different values, right? Index, non-index string, and symbol. I know of four relatively frequent use cases. The most frequent one—
+
+CDA: We are five minutes over at this point. Is this something we can do by bringing this back later in the meeting?
+
+RBR: Yeah. Sorry. I understand there are other things, so I guess we should continue that later.
+
+DE: Yeah. Maybe you could write that enumeration in the notes.
+
+RBR: Mm-hmm.
+
+MM: SYG, are you okay with this continuing with Stage 1?
+
+SYG: Let me talk to you later. I think so, but let's talk later.
+
+DE: Yeah. Maybe you should record in the conclusion that not everyone in committee was convinced of some aspects of the broader scope, and some people wanted the scope to be narrower. That would reflect the state of discussion at Stage 1.
+
+### Speaker's Summary of Key Points
+
+Summary to be provided on continuation topic.
+
+### Conclusion
+
+Not everyone in committee was convinced of some aspects of the broader scope, and some people wanted the scope to be narrower.
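For reference, the base use case that advanced can be approximated in today's JavaScript (a hypothetical polyfill sketch; the name `Object.propertyCount`, its options, and its final semantics are all still subject to Stage 1 discussion):

```javascript
// Hypothetical sketch of the base use case only: count own enumerable
// string-keyed properties, matching Object.keys(target).length.
// A native implementation could return this count without allocating the array.
function propertyCount(target) {
  if (Object(target) !== target) {
    throw new TypeError("propertyCount target must be an object");
  }
  return Object.keys(target).length;
}

// Example fast path in an equality check: bail out early when the
// property counts differ, before comparing any values.
function sameShape(a, b) {
  return propertyCount(a) === propertyCount(b);
}
```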
+
+## Explicit Resource Management
+
+Presenter: Daniel Minor (DLM)
+
+* [proposal](https://github.com/tc39/proposal-explicit-resource-management)
+* [slides](https://docs.google.com/presentation/d/1F4kLwEUvBmyyTWq06HQgiJypcCWm3uwOzVDzFQ0xauE/edit#slide=id.p)
+
+DLM: Sure, thanks. I would like to present some implementer feedback about the explicit resource management proposal. A quick reminder about what explicit resource management is: the basic idea is to add a `using` keyword, along with `Symbol.dispose` and the concept of `DisposableStack`. Generally, it allows for automatic disposal of resources when the `using` variable leaves scope, for example with this simple little thing here. Where are we in SpiderMonkey? It's fully implemented; it's currently shipped in Nightly but disabled behind a pref, and the current implementation follows the spec. In particular, it's currently maintaining an explicit list of resources to dispose at runtime.
+
+DLM: A while back, SYG opened this issue. There's a lot of conversation there, and it basically evolved into this: we would like to disallow `using` in bare `case` statements. So the example on the left, where you have fallthrough from case 1 to case 2, is what we would like to no longer allow. You can insert braces instead; in that case, it's clear that the `using` is in a block scope that corresponds to that one case.
+
+DLM: My colleague IID provided a nice example of how this could desugar. With case fallthrough, as you can see, things get a little bit weird. We would argue this isn't implementation weirdness, but actually a weirdness in how things are specified. On the right-hand side we have the desugaring without fallthrough, which makes everything fairly clean and straightforward.
+
+DLM: So why make this change? Basically, it would mean that we would be able to know the scope of the `using` statically.
So in our implementation, we could get rid of the runtime dispose list that we are currently maintaining, and just synthesize try-finally blocks. This is more efficient and simpler. We believe it's dubious at best that people want this kind of C-style fallthrough behavior with `using`, and we are willing to rewrite our implementation if this change is made. We also heard support from V8, who said they are willing to rewrite their implementation for this change.
+
+DLM: Alternatively, why not just create a scope outside of the switch? We are doing our best to be efficient when handling switch statements. We currently do this in one pass; handling fallthrough would require adding a second pass, or maybe making a new scope and removing it if not needed (doing time travel). This is possible, but it's definitely extra work and complexity for code that is most likely written by mistake, not on purpose.
+
+DLM: Recently, RBN was kind enough to put up a pull request with these changes. I would like to ask for consensus on making this change, pull request #14.
+
+MM [on queue]: Strong support of prohibition. Thanks! EOM
+
+USA: All right. We have MM on the queue saying strong support. That was all. Let's give it a minute or two to see if folks have more thoughts.
+
+DLM: Yeah. There are comments with implicit support in the pull request as well. I will share that NRO was positive with regards to this change from the Babel point of view.
+
+JHD [on queue]: switch is bad and it's ok if people can't use a new feature in it, +1 .
+
+PFC: I support this.
+
+USA: We have a lot of support, then, and no negative comments. So, DLM, you have consensus.
+
+DLM: Okay. Great! Thank you very much.
+
+### Summary
+
+Allowing the `using` statement in a switch statement with fallthrough complicates implementations. If we disallow this use case, implementations can desugar to try/finally blocks, which is simpler and more efficient.
The proposal champion put together a pull request for this change: [https://github.com/rbuckton/ecma262/pull/14](https://github.com/rbuckton/ecma262/pull/14).
+
+### Conclusion
+
+Consensus to merge [https://github.com/rbuckton/ecma262/pull/14](https://github.com/rbuckton/ecma262/pull/14).
+
+## Non-extensible applies to Private for stage 1, 2, 2.7
+
+Presenter: Mark Miller (MM)
+
+* [proposal](https://github.com/syg/proposal-nonextensible-applies-to-private)
+* [keynote slides](https://github.com/syg/proposal-nonextensible-applies-to-private/blob/main/no-stamping-talks/non-extensible-applies-to-private.key)
+* [pdf slides](https://github.com/syg/proposal-nonextensible-applies-to-private/blob/main/no-stamping-talks/non-extensible-applies-to-private.pdf)
+
+MM: So, my normal request: recording on during the presentation, including Q&A during the presentation, and then recording off afterwards. Okay!
+
+MM: This is primarily by SYG and me. The actual proposal text was written by SYG. This particular proposal has several motivations, but first, for its history: it is extracted from the stabilize proposal. So, a very quick recap: stabilize was proposing new integrity traits, and broke them into these five integrity traits to consider. And in the last meeting, when we talked about stabilize, we explained our hopes and dreams, which is that the fixed integrity trait—which is the one that we’re talking about today—be bundled into the existing non-extensible integrity trait. It would not be a new integrity trait, but new behavior associated with the existing non-extensible.
+
+MM: And what this new behavior is about is illustrated by the following code, the contrast between the subclass on the left and the subclass on the right. In this case, as an expository example, they both extend the same superclass, `FrozenBase`, and the superclass constructor for whatever reason just freezes `this`. 
And on the left, the `AddsProperty` subclass adds a public named property to `this`—but it is doing it, of course, after `super` returns; before `super` returns there is no `this` in scope. And once `super` returns, this will, as you expect, throw a `TypeError`.
+
+MM: On the right, we have what is essentially the same code, except that instead of adding a public property, we’re adding a private field. And today, this does not throw. This actually adds the private field, even though the object is already frozen, and because it is frozen it is already non-extensible. The reason we get the `TypeError` on the left is not because the object is frozen per se, but specifically because it is non-extensible. We think the difference in behavior is counterintuitive; that is one motivation.
+
+MM: So what we’re proposing is that the meaning of non-extensible be extended in a way that we claim is already intuitive—it is the thing that would be the least surprise—such that attempting to add a private property to an object that is already non-extensible would throw a `TypeError`.
+
+MM: That is a nice motivation, but it probably wouldn’t by itself have motivated us to do something as dangerous as this. It’s dangerous in that what we’re proposing is not backwards compatible; we will come back to that. Another motivation is that the struct proposal is proceeding as a separate proposal; a lot of the rationale for it is that structs are essentially better classes, better in particular in ways that enable them to have a high-speed implementation. And the problem with the current semantics is that this extensibility by private properties, combined with the return override, can be composed to force the engine to add a private field to an already constructed struct instance. 
And given the way private fields are implemented by, as far as I know, all of the high-speed engines, they would then have a choice: give up on structs necessarily having a fixed shape, which would hurt the performance promise—or have a completely different path through the engine for adding private fields to structs than for adding private fields to objects. Neither of which we like.
+
+MM: So with this proposal, this attempt to change the shape of structs—which is the only thing right now in the language that would imply that structs’ shape can change at runtime—would instead throw a `TypeError`. With that, as far as we can tell, structs can, faithfully to the spec, have a fixed-shape high-speed implementation.
+
+MM: The other motivation is mostly hinted at by this piece of code. Which is that the ability to add private properties via return override to existing objects essentially gives the language something that is very much like a WeakMap, but it makes it accessible by syntax. And therefore, also fairly global.
+
+MM: So over here, when we’re trying to reason about communication channels, this weak map reachable by syntax is a problem. Because you might freeze the class and freeze the prototype and all of the methods, so it all looks like it has no hidden state. And then you take two other objects that you know not to have hidden state, for the normal meaning of state, and thereby not to represent a communications channel. And then one party might use this hidden map functionality to create a surprising mutation of state that the other party might not expect. And this hints at, you know, larger problems with virtualization that I can get into if there are questions. Let’s just say there are several different motivations that are quite crucial for both parties, that all have the same simple solution.
+
+MM: And the solution is indeed quite simple. 
It is so simple that SYG initially raised the possibility of doing this just as a needs-consensus PR, which I will agree is reasonable. I prefer that anything that has a semantically observable effect, especially when there is a danger of incompatibility, just go through the discipline of a proposal, but still one that we can hope to advance fairly quickly. These two changes are the entirety of the proposal.
+
+MM: These are the two operations in the spec that can cause a private field to be added to an object. And we’re just proposing that both of these do a precondition check, an input validation check, to verify that the object is extensible, and otherwise throw a `TypeError`.
+
+MM: Now, with regard to the potential danger of incompatibility: would this break the web? Google has generously already deployed usage counters to find out. And the bad news is that over time this has still been growing. It has not been asymptoting. But the numbers here are like 0.000015%. So, they’re tiny. And a little bit more, by the way, with desktop than mobile; I think the 0% showing on both here is just rounding error on the display. But these are the six websites, all in Germany, where a problem was detected. And for all of these six sites, there are only two cases.
+
+MM: One case is this weird piece of code that we don’t quite know why it’s—oh, the class is named `_`. So over here, it’s enumerating the public enumerable fields of `_` in order to add a private field to `_`. But during the enumeration it is freezing `_` itself. The disturbing thing about the proposal is that this code, for whatever weird reason it might exist, is currently correct. And the price of accepting this proposal is that this code would start misbehaving despite the fact that today this is correct code.
+
+MM: And likewise with the other case, which is perhaps more understandable. 
Which is a superclass constructor that freezes `this`, and then the subclass adds private fields. So given that in both of these cases we would be breaking correct code, which exists out there on this very small portion of the web, it is conceivable that browsers would not be willing to go forward on this. Google, as a cosponsor of this proposal, looking at these numbers, has decided that they themselves are willing to go forward. And it is also conceivable that non-browser implementations might object to the non-backwards-compatible change as well. That’s the question we have for the committee today.
+
+MM: And what we would like to ask for is first stage one, but since this was something that was reasonable as a needs-consensus PR, if we get stage one we’re going to ask for more. Which we may or may not get.
+
+MM: So first of all, may we have stage 1? This is the actual, official statement of stage 1. So first of all, any objections to stage 1?
+
+MM: Okay. Any support for stage 1?
+
+[on queue] - support for stage 1 from DLM, DE, and DJM
+
+DLM: SpiderMonkey team is favorable about this change.
+
+MM: Great. Thank you. So, at this point, I think we can say we have stage one. The stage two checklist we made for ourselves, derived from the official statement, is “committee approved” and “spec reviewers selected”. This is the actual official statement of stage two. Can we have two non-champion volunteers to review?
+
+MF: I’m—I’m confused. We didn’t reach stage two, right?
+
+MM: Right. I’m asking—what I wrote down over here is that to get to stage two, we need reviewers selected. Am I just wrong about that?
+
+MF: When we grant stage two, we assign reviewers.
+
+MM: Ah. Excellent. Excellent. So can we—first of all, are there any objections to stage two?
+
+SYG: I just wanted to give some more color on the data that was shown. So—
+
+MM: Great. 
+
+SYG: MM is certainly accurate that Chrome is willing to try to ship this, per our suggestion. But that said, while the absolute percentage does seem small, do keep in mind this is sampling from page loads, and page loads are on the order of many, many billions. So even very tiny percentages can end up causing concentrated pain for a small percentage of folks who keep hitting the same error over and over, which might be bad. But in this case, the plan is: we already did reach out about this German GIS software, Cadenza; we have not yet heard back. I will try to ping and follow up with them, but the hope is that since this is looking like an officially sold and supported product, they would be responsive to changing that one piece of code to a static initializer, which would be a very easy workaround for their code.
+
+SYG: The other two websites that were broken used the same Axial framework, which I cannot find any references to; if anybody is familiar with the German web design scene and firms that do that kind of service, any clues there would be much appreciated. But I don’t know how to do any outreach for that at all. But given it is just two, and one of them is a music festival website, traffic for which I expect will die down after the music festival has passed, it really just comes down to this one other site. And I think that is not sufficient cause to consider it a breakage and not try to ship. So really, right now, it comes down to trying to reach out to this Disy company that makes the mapping software.
+
+SYG: I welcome any help from anyone who wants to volunteer to also do the outreach, if they are interested in seeing this change happen.
+
+MM: Any objections to stage two? Great. Is there any support for stage two?
+
+CDA: Do we have any explicit support for stage two? So far there’s nobody explicitly expressing objections. 
+
+[on queue] support for stage 2 from DJM
+
+JHD: Yeah, I mean, I think it should be fine. Like, I understand all of the rationales here and all of the cross-proposal, crosscutting concerns as to why this is valuable. And I see worse consequences if we don’t do this. So I like it, but it is unfortunate, because I really liked the simplicity of the weak map analogy for private fields. But this does explain it, you know, somewhat cleanly. So as long as it is web compatible, like, go for it.
+
+MM: Okay. So I kind of take that as support?
+
+JHD: Yeah. Yeah. Yeah. It’s—support, plus I wanted to grumble a little bit.
+
+MM: Oh, yeah. Okay. I had the same discomfort when the idea first arose in stabilize. Okay. So, good. Now—
+
+CDA: Now, you need reviewers. Given there were no objections earlier, not seeing objections now, and multiple voices have explicit support, you now have stage two. Which means now you need stage two reviewers.
+
+CDA: JHD has volunteered.
+
+MM: Great. Thank you.
+
+CDA: Do you think you need one more? Typically we like to have—
+
+MM: Yeah. I don’t know what the requirement is, but certainly two is traditional.
+
+CDA: I think two is the minimum.
+
+DE: I’m happy to review.
+
+CDA: And Daniel will review.
+
+MM: Excellent! Excellent! Thank you. And now, could we possibly, in the same meeting, with two reviewers—do we need the reviews to happen before we ask for stage 2.7? If so, obviously, we can’t get stage 2.7 this instant.
+
+CDA: Acceptance criteria for 2.7 is reviewers signing off on the spec. This is required for 2.7.
+
+DE: Yeah. I did review the spec before the meeting. And I would sign off on it. But it would need those other sign-offs, too.
+
+MM: Okay. Can we get those other—so the other people that would need to sign off: JHD, you said you’re a reviewer, would you sign off on the spec text you saw? It is really the entirety of the spec text.
+
+JHD: I would have to go back and look at it. 
But I’m comfortable with conditional approval, and I will check in the next 20 minutes. But the editors are the ones that definitely need to sign off. Yeah. So yeah, I approve that spec. That’s fine.
+
+MM: Okay. Great. And—are there editors who could weigh in in realtime?
+
+KG: Yes. Seems good.
+
+MM: I’m sorry, who was that?
+
+CDA: That was KG.
+
+MM: KG, hi. So, do you approve?
+
+KG: Yes.
+
+MM: Great. Is that sufficient? Do we need another editor?
+
+MF: I mean, I would personally prefer to have until tomorrow. I hadn’t looked at the spec yet. But I’m also comfortable deferring to KG. So that’s fine.
+
+MM: Okay. That’s great. That means that there is still a chance we can get it this plenary, which is really the only thing I care about. It doesn’t have to happen in real time. All right.
+
+CDA: Okay. For the record, are we saying we are granting conditional advancement to 2.7, predicated on the editors’ sign-off? Now, KG just said he approved. MF was a little bit more ambivalent; we haven’t heard from SYG.
+
+SYG: I wrote the text.
+
+CDA: I forgot you are cochampion on this. Your sign-off is implied.
+
+MM: Okay. And once we have all of the sign-offs, does anyone in the committee object to 2.7?
+
+MM: Great. And does anybody on the committee support 2.7?
+
+CDA: Nothing on the queue so far. JHD supports 2.7.
+
+JHD [on queue]: same support
+
+MM: Great, that means we do have conditional 2.7. Waiting on MF, correct?
+
+CDA: I believe you need two explicit supports.
+
+DE: I also explicitly support 2.7.
+
+MM: Okay. Great. Thank you. Okay! Great. So MF, I look forward to hearing more from you later.
+
+MF: Yep.
+
+CDA: All right. I believe, if I’m not mistaken, that concludes your topic.
+
+MM: Okay. Great. 
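The contrast MM presented can be sketched in code (a hypothetical illustration following the `FrozenBase` example from the slides; in today's engines the private-field case succeeds, and this proposal would make it throw a `TypeError` as well):

```javascript
// A superclass that freezes `this` in its constructor (MM's expository example).
class FrozenBase {
  constructor() {
    Object.freeze(this);
  }
}

// Adding a *public* property after super() returns throws today,
// because the object is frozen (class bodies are strict).
class AddsProperty extends FrozenBase {
  constructor() {
    super();
    this.x = 1; // TypeError today
  }
}

// Adding a *private* field currently succeeds despite the object being
// non-extensible; the proposal would make this throw a TypeError too.
class AddsPrivateField extends FrozenBase {
  #x = 1; // field initializer runs after super() returns
  static hasX(obj) {
    return #x in obj; // ergonomic brand check
  }
}

let publicThrew = false;
try {
  new AddsProperty();
} catch (e) {
  publicThrew = e instanceof TypeError;
}

const instance = new AddsPrivateField(); // no throw in today's engines
console.log(publicThrew);                     // true
console.log(AddsPrivateField.hasX(instance)); // true (current semantics)
```

Under the proposed change, `new AddsPrivateField()` would throw the same `TypeError` as the public-property case, since both attempts add a property to a non-extensible object.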
+
+### Speaker’s Summary
+
+* MM presented a [new proposal](https://github.com/syg/proposal-nonextensible-applies-to-private), broken off from proposal-stabilize and co-championed by SYG and others. It proposes to make private fields respect `Object.preventExtensions`.
+* This proposal would patch up the current counterintuitive behavior of private fields not obeying non-extensibility, prevent hidden state creation via private fields, and improve performance by letting non-extensible objects have fixed memory layouts.
+* The proposal is not backwards compatible and might, rarely, break existing correct code.
+* Google has deployed usage counters and found minimal impact, but some websites in Germany (some of which use German GIS software called Cadenza) might be affected. One website has minimal likely impact; it is for a temporary music festival. Google is trying to reach out to the affected German websites and the Cadenza vendor, but SYG requested further help with outreach.
+
+### Conclusion
+
+* The proposal reached Stage 1.
+* It reached Stage 2 (reviewers JHD and DE, who already signed off).
+* It reached conditional Stage 2.7 (conditional on pending editor approval from MF; editor approvals from KG and SYG were already given).
+* It reached Stage 2.7 later in the meeting when it got that approval from MF.
+
+## Continuation of Object.propertyCount for stage 1 or 2
+
+Presenter: Ruben Bridgewater (RBR)
+
+* [proposal](https://github.com/ljharb/object-property-count)
+* [slides](https://github.com/tc39/agendas/blob/main/2025/2025.04%20-%20Object.propertyCount%20slides.pdf)
+
+JHD: We have been discussing it in Matrix. So for the notes I can just say that it seems like what we need to do is potentially come up with a proposal for arrays that are sparse, then remove the index stuff from the property count, and then come back and try to address the concerns that folks have indicated. 
Like, the potential API surface and the potential solutions for the property count proposal are large enough, and there has been enough varied pushback, that I’m not sure it is productive in plenary to go back and forth further right now. But we have a lot of people to talk to in the interim; I’m sure it would be a lively discussion at a future plenary.
+
+RBR: My suggestion for now would be just to remove the differentiation between index and non-index string keys. I assume that would address the concerns in the room, but I could be wrong. I would go for that for now.
+
+SYG: I think we’re running ahead a bit. While a few of us did express specific concern about the API shape you presented, I’m very supportive of one problem statement I heard: the performance issue of counting properties. You showed a bunch of examples that motivated it, the problem in the wild. During the discussion, another problem, which sounded like a very different problem to me, came up, which was around slow paths with arrays—whether that is sparse arrays or native arrays with non-indexed properties on them. That is a different problem to me than counting properties. Perhaps the best way to solve it is to count properties, but the problem that you’re trying to solve in the arrays case is not actually counting properties. Right? That is what I would like clarity on. The stage one that we got agreement on—is that for counting properties, or counting properties plus whatever problem with arrays? My personal preference, because they sound like very different problems to me, is that they be treated as different proposals. But that’s my personal opinion. It is up to you all to decide how to frame the problem statement.
+
+JHD: My interpretation here is that we phrased this as `Object.propertyCount`, counting properties, because that seemed like the only solution to all of these use cases at once. 
I would say that a broader statement of what I was originally hoping to solve is generally comparing and describing objects and arrays, and avoiding performance cliffs whenever possible. So that’s how I would—and Ruben can talk to this as well—that’s how I would personally describe the problem statement. Maybe workshop it, and try to come up with a shorter version. And I think it is completely reasonable to say, well, why don’t we narrow that, within stage one, into two separate problem statements, one about arrays and one about non-arrays, and then have two separate proposals. Where the one about arrays might, for example, do something like an `isSparse`, because it doesn’t necessarily need to count them, it just needs to determine whether there are any. Things like that. Does that broader problem statement of avoiding performance cliffs when comparing and describing objects work for you, SYG?
+
+SYG: I would like it more scoped than that. The general problem of avoiding performance cliffs I think extends to a lot of implementation details that may be undesirable to expose. In particular, you might care if an object is in dictionary mode, which is usually much slower than fast mode. That is not anything we would ever want to expose to the web, but it evidently affects fast paths and slow paths, and I would categorically reject it as out of scope. Whereas if it were scoped to avoiding slow paths based on the shape an object is currently in, that sounds like it would be in scope. That broader statement—while it would encompass the array issue and the property counting issue—is too broad for me to really figure out what is in scope that you’re thinking of.
+
+RBR: Yeah. So I agree with what you were just saying. And, like, exposing whether something is in dictionary mode or not, I wouldn’t be interested in. I don’t believe that is useful, because that’s something I believe is really up to the engine.
+
+SYG: Right. 
So that’s why I was saying earlier that I would like an enumeration of what you consider to be in scope. I’m totally happy with the broad problem statement “we want to avoid performance cliffs—in particular, these performance cliffs”. But all performance cliffs—that is pretty hard for me to think about.
+
+RBR: Yeah. So I believe there are already a couple mentioned. The question is whether we want to address them all with the one API, or whether a couple should be separated out.
+
+CDA: That is it for the queue.
+
+JHD: I mean, I think—I understand we want a specific problem statement that everyone is happy with for stage one. We should have this before the end of this plenary. SYG, it sounds like in spirit you’re okay with it, but we haven’t come up with a wording that avoids including things like dictionary mode and all of that stuff. Right? Does that resonate? That we just haven’t come up with the phrasing, but we’re probably on the same page as to what we want to describe?
+
+SYG: It sounds like you care about property counting and something to do with arrays, concretely, and nothing else. Yeah, that sounds right.
+
+JHD: Yes.
+
+SYG: So, I am very enthusiastic about property counting, solving that performance issue with the allocations. I’m very skeptical about how we can solve the array part at all. I still don’t think that is a stage one blocker. But if you choose to just glom together those two problem statements into one for the proposal, then just be clear that I’m very skeptical of the second part.
+
+JHD: Right. I would say for the time being we do. But based on all of the discussions, it is highly likely that we would want to come back with a narrower problem scope in the future for this proposal, and perhaps a new proposal to account for the part that was removed.
+
+SYG: Yeah.
+
+RBR: Good. And I do have one question. 
That is the differentiation between an array and an object. Because in the end, for me as a user, an array is always an object. I personally try to prevent it, but I have seen a lot of code just accepting any input, which could be an array or an object, and they just use, for example, `Object.keys()` on it. That’s very, very expensive to do. So that’s where I’m not certain about arrays versus `Object.keys()`—how to differentiate them?
+
+SYG: Sorry, was that a question for me?
+
+RBR: Yeah.
+
+SYG: I mean, you differentiate them by—I see, okay. So, if the problem statement were improving the performance of counting properties, because you think, in your experience, the performance of counting properties of arrays versus non-array objects is very different, does that distinction fall under the problem statement of counting properties?
+
+SYG: Like, your problem statement is: I want to solve performance of counting properties. Now you’re saying: I want to solve distinguishing arrays and objects. Is the distinguishing thing a necessary step to solve the counting-of-properties performance?
+
+RBR: No. They are just different kinds of algorithms. And for input validation, for example, you probably want to make sure that the array does not contain any additional properties on it. Yeah?
+
+SYG: Let’s take a step back. Now I heard a third problem, which is input validation.
+
+RBR: Yeah, that is something that I mentioned.
+
+SYG: Is the goal that you want one API? You have a list of use cases and want one API that fits all of them? Is that the actual goal?
+
+RBR: The API just fits in different aspects. It is used as a fast path. And that was, I believe, also on the first or second slide in this case. For many algorithms, and from the use cases, I believe I mentioned seven. A fast path in general is a very big one for a lot of things—for example, input validation as well. 
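For reference, the filtering semantics under discussion can be approximated in userland (a hypothetical sketch only: `Object.propertyCount` does not exist, its API shape is not settled, and this `propertyCount` helper is invented purely to illustrate the key-type and enumerability options that were presented):

```javascript
// Hypothetical userland approximation of the proposed Object.propertyCount.
// A real built-in would avoid allocating intermediate key arrays; this
// sketch only mirrors the described filtering semantics.
function propertyCount(obj, { keyTypes = ["index", "nonIndexString", "symbol"],
                              enumerable = true } = {}) {
  let count = 0;
  for (const key of Reflect.ownKeys(obj)) {
    let type;
    if (typeof key === "symbol") {
      type = "symbol";
    } else {
      const n = Number(key);
      // canonical non-negative integer strings count as index keys
      const isIndex = Number.isInteger(n) && n >= 0 && String(n) === key;
      type = isIndex ? "index" : "nonIndexString";
    }
    if (!keyTypes.includes(type)) continue;
    if (enumerable !== "all" &&
        Object.getOwnPropertyDescriptor(obj, key).enumerable !== enumerable) {
      continue;
    }
    count++;
  }
  return count;
}

// Today's common (allocating) idiom vs. the sketched call:
const obj = { a: 1, b: 2 };
console.log(Object.keys(obj).length);                           // 2
console.log(propertyCount(obj));                                // 2
console.log(propertyCount([1, 2, 3], { keyTypes: ["index"] })); // 3
```

Note that `length` on the array is a non-enumerable, non-index string key, so it is excluded both by the `keyTypes` filter and by the default `enumerable: true` filter.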
+
+JHD: I guess to answer your question as well, SYG: it doesn’t matter how many APIs solve these problems; the more of them they solve, the better. This specific solution happens to be one API that addresses all of them. If that API seems too complex for the subset of use cases that you or any other delegate finds compelling, then it’s fine, we can split it up into multiple separate proposals and APIs. You know, it is not a binary thing. Right? As RBR said, there are seven use cases; solving one is better than zero, and solving six is better than one. Right? So, it is more that we looked at these problems and this API seemed to address them all. And especially given recent engine concerns about the number of methods being added, it seemed desirable to come up with one somewhat general method that covered all of the use cases. It is fine if that isn’t palatable.
+
+SYG: Okay. So the problem statement is: here are these use cases. The problem statement is just, like, here’s a burn-down list, we want to fix these. Is that the most accurate framing?
+
+JHD: Yes.
+
+RBR: Yeah.
+
+SYG: Okay. Okay. I see. Yeah, I personally don’t have an issue with stage one for that problem statement.
+
+RBR: Thank you.
+
+### Speaker’s Summary
+
+* The proposed problem space: Developers need a performant way to count properties on an object, without allocating intermediate arrays.
+  * `Object.keys(obj).length` is very common in real-world JavaScript code.
+  * Other use cases presented included input validation, object comparison, sparse-array detection, and telemetry, especially in hot paths.
+* A proposed API solution: an `Object.propertyCount` function that takes an options object allowing filtering by key type (`'index'`, `'nonIndexString'`, `'symbol'`) and enumerability (`true`, `false`, or `'all'`).
+* There was broad support for the core use case, counting enumerable “own” properties. 
+* There was pushback about various proposed features, especially those dealing with sparse arrays, as well as the distinction between index keys and non-index string keys.
+
+### Conclusion
+
+* Consensus to progress to Stage 1.
+
+## Continuation: Don't Remember Panicking
+
+Presenter: Mark Miller (MM)
+
+* [proposal](https://github.com/tc39/proposal-oom-fails-fast/tree/master)
+* [slides](https://github.com/tc39/proposal-oom-fails-fast/blob/master/panic-talks/dont-remember-panicking.pdf)
+
+SYG: I wanted to understand—since it is a host hook, it does not call back into JS? Which I’m not at all sure of; I think that would be disastrous. But as just a host hook, I’m not quite sure how this would help Agoric’s code.
+
+MM: So, right now, Agoric runs our critical code on XS. XS does immediately stop executing on out of memory, out of stack, or internal assert violation. And XS is willing to give us an explicit panic built-in, but it would prefer to do it if the committee agrees on it. Which is not an answer to your question, because we are already talking about postponing the panic built-in until later. So, altogether, the host hook by itself doesn’t directly help Agoric. It helps the language, it helps programmers writing in the language, it helps the committee, and I would argue it helps the wider world of standardization, because it gives all of those standards processes a place to talk about realities, like out of memory, that right now the spec is in denial of. You made a claim previously that I have continued to not understand; maybe you can clarify. The claim was that the host continuing execution after an out of memory—continuing to execute JavaScript code—is not a violation of the spec. I certainly agree that it is reality that memory does exhaust, but I don’t see how it cannot be a violation of the spec. 
And I would also ask you: if a particular host continued not by throwing an exception, but by simply having the place where the fault occurred return 7, would you also consider that to be not a violation of the spec, and something that programmers should know to be prepared for?
+
+SYG: So, I do retract my statement that it is not a violation of the spec. I agree with you that it is a violation of the spec. What I was driving at was that it is not a violation of the spec that is useful or can be acted upon in any way; it is impossible to conform to the spec, as you have previously pointed out, for that particular violation. So, while it is, pedantically speaking, a violation, I don’t think of it as—
+
+MM: So, since it is reality, and since we are a standards committee that has basically two primary audiences—people writing JavaScript code and people implementing JavaScript engines—and since, whatever the hosts do, the people writing JavaScript code need to know about it, this gives the hosts an opportunity to explain the behavior of their fault handler, so that JavaScript programmers can consult that if the host documents it. We’re not insisting that the host document it. But, for example, in the wider world of web standards, the purpose of all of these standards committees in the first place is to reduce the gratuitous behavior differences between browsers. That was sort of one of the core initial motivations for both web standards and TC39 originally. This would explicitly make it discussable in web standards, without web standards having to say: this is how we violate the spec. Instead, within the JavaScript standard, the web standard could say: here is the behavior of our fault handler. We are not demanding that they do that; it provides the opportunity. 
+
+MM: And for those who are formalizing JavaScript—like the KAIST folks in South Korea, who are doing a heroic job of turning JavaScript into something with a formal semantics such that you can do proofs about JavaScript code—right now, the path of least resistance, which I believe everyone who has done a formal semantics of JavaScript is following, is the spec text, and to assume that it actually covers the contract with JavaScript code. Even leaving what the host does in the fault handler unspecified, simply saying that these conditions delegate to the host handler would be a very good hint to those formal semantics that they should include that possibility as part of the semantics, so that proofs of correctness of JavaScript code do not prove correct code that does not work in reality.
+
+CDA: MAH?
+
+MAH: Yeah. So I think the host hook by itself, as MM said, doesn’t achieve much. But it gives us a place to discuss what happens when the situation occurs. And in particular, the hope is to also have a mechanism to configure the behavior of the host, so that if we encounter an out-of-memory condition, or a user panic, or something like that, the creator of the agent or the first-run script is saying: “if I encounter these conditions, please kill me instead of continuing”. This is in particular really useful for workers that have been spawned from the main thread, as it may not always be possible to reliably notify the main thread that such a condition occurred. For user code on the main thread, you could maybe do it; but for a worker, I would prefer for the worker to be killed, and then for the supervisor—the main thread—to learn that this has happened, so it can take further actions.
+
+ABO: Yeah. The HTML standard has a section about aborting a running script, which in the case of HTML is sometimes needed for killing the current document. It also discusses things like memory limits and so on. 
Or like, if an API blocks the main thread, such as `window.alert`, the user can block script execution. Or in particular, if script execution is disabled in the middle of an infinite loop, the HTML spec describes that the running script should be killed. So, maybe this should be moved into Ecma-262, I don’t know. But yeah, this currently isn’t in the Ecma-262 spec, though it’s not like it isn’t spec’d for web browsers. [MM shows slide 28, with HTML issue] This is not related to the three HTML issues that MM is currently showing on the current slide. This is something that has been part of the spec for a while, but I guess it is kind of related.
+
+[https://html.spec.whatwg.org/multipage/webappapis.html#killing-scripts](https://html.spec.whatwg.org/multipage/webappapis.html#killing-scripts)
+
+MM: Yeah. So if there are other places in web standards that I should know about that discuss this issue, please let me know offline. These were the three that I found. But these three are specifically talking about what the minimum abortable unit is; they are talking about what I’m here calling the static agent cluster, which is just, in my opinion, unfortunately large. But once again, the fundamental ask here is the host hook. And if the host wants to terminate something larger than the minimum abortable unit, I would just like to give it the opportunity to document that that’s what it is doing.
+
+ABO: I was mentioning this in response in particular to the concern about having the host hook not respond.
+
+MM: Oh, oh.
+
+ABO: So I think that would be allowed by this section of the spec that allows aborting a running script.
+
+MM: Right. Okay. So yeah. Right now, the actual text in the ECMAScript spec says that the host hook must return a normal completion or throw completion.
So you’re saying the HTML description of browser behavior is exactly that the host, whether through the host hook or not, under some conditions neither returns a normal completion nor a throw completion; it doesn’t proceed into JavaScript at all. I would like the JavaScript spec to acknowledge that that might happen. Otherwise the HTML spec and the JavaScript spec are simply logically incompatible.
+
+MAH: I’m not sure that is quite true. If there is no further execution in that environment, it is not observable; it is basically equivalent to the host having never returned.
+
+MM: Well, yeah. But this says that the host hook must return. I’m being picky on language here. Maybe it is not what it meant to say, but that is what the actual text in the JavaScript spec says.
+
+MAH: Yeah, I’m not sure how that would be observable anyway, if it doesn’t return. Yeah—I’m done.
+
+MF: Yeah. I don’t think I have the same hang-up as you do about the phrasing here with the host hook. We say *what* it must return. We don’t say *when* it must return.
+
+[Laughter]
+
+MF: It could take until just before the heat death of the universe and then return. Is that the same for you? I don’t think it is as strong a requirement as you’re reading it.
+
+MM: That’s—okay. The spec is trying not simply to be denotationally correct, but to be explanatory. But anyway, I’m not terribly hung up on the particular text here. What I do feel strongly about is that the spec itself should somewhere, whether it is here or not, be clear that, going back to the original motivation, in particular out-of-memory, which is still the most problematic case, this is part of reality, and that different hosts might choose different policies.
But somebody doing a semantics of JavaScript, and other people trying to prove correctness of their programs with reference to that mechanized semantics of JavaScript—any such semantics of JavaScript needs to take into account the possibility that hosts might resume JavaScript execution, if they indeed might. Otherwise, you prove correct programs that then misbehave without there being a bug anywhere.
+
+CDA: All right. MF, you’re also next on the queue.
+
+MF: Yeah. I just wanted to get an understanding of the relationship between what browsers do today when there’s an unresponsive main thread and they kill that, versus what your proposed minimum abortable unit is. Have you looked into what the various browsers kill? What that unit is when it is unresponsive?
+
+MM: I don’t know. From the three HTML discussions, I take it that they are at least considering standardizing on the static agent cluster. Which is sound. But it just seems unfortunately large compared to the minimum that they could do instead. But I don’t know.
+
+MF: But they are considering it in theory, right? Have they discussed at all what is done today?
+
+MM: Okay. So—enough browser makers in the room. Could some browser makers comment on what they think they do today?
+
+SYG: Not authoritatively, but I think it kills the process. There is no notion of a dynamic agent cluster. I think that would be pretty much unimplementable and nondeterministic. Like, we’re talking about figuring out what portion may be live and reachable from which thread, finding the minimum set of such threads, and then somehow just joining those threads. So I don’t think that happens. So it is just a process.
+
+MM: Okay. So Chrome likely—well, I’m sorry. Are static agent cluster and process the same thing?
+
+SYG: I don’t know. Like, I think so. There may be some origin stuff in play, like same-origin agent clusters and stuff like that.
Outside of those details it is pretty much a process, I’m sure.
+
+MLS: I believe it is the same thing with Safari: it kills the web content process that is running the page.
+
+MM: Oh, the other place with a correctness issue, in the absence of the spec admitting the various faults that came up in my talk, is the infinite loop. Code might actually engage in an infinite loop in order to prevent further progress from that point. Engines might, in violation of the spec, continue processing in that agent past that point anyway. And the code trying to protect itself now does damage with corrupted state, state that observers outside of the program should never have been able to see.
+
+CDA: SYG?
+
+SYG: I’m supportive of figuring out how to better talk about real-world resources in our specification. I’m not supportive of the goal of adding a host hook in the hopes of eventually exposing it as a configurable toggle.
+
+MM: Okay. Can we divide that into separate questions? I certainly want to expose it as one way you can opt into something other than the default behavior. But let’s separate out that question. Since you’re supportive in general of the JavaScript spec being more explicit with regard to these problematic conditions, specifically resource exhaustion: would simply the host hook as an explanatory mechanism, some place where, for example, the HTML spec could explain what browsers agree on, or individual browsers could explain what they do—is there anything about that by itself that you would object to?
+
+SYG: As an editorial explanatory device. Sorry, I see MLS is on the queue.
+
+MM: But yeah, I would like to get your response.
+
+SYG: As an editorial explanatory device, I would prefer that we reflect reality through other editorial means. Because a host hook here suggests two things. One, that it is somehow configurable by the host and programmable by the host.
Whereas I think the right way to reflect reality is to just say this is implementation-defined.
+
+MLS: Yeah, just a quick comment. We’ve actually written tests to make sure that we can recover from an out-of-memory exception when there’s a proper try/catch that would pop off the frames that are responsible for it. So it’s kind of tricky to code that in a reliable way, but we can recover from that. The engine itself doesn’t die when the user creates an out-of-memory exception due to what they’re doing.
+
+CDA: All right, we are past time, but Mathieu is on the queue.
+
+MAH: Yeah, really quick. I didn’t quite understand SYG’s reply at the end. A host already has a choice today of either raising an error when an out-of-memory condition occurs, or taking the choice of panicking the agent. So it is already a reality that the host is free to decide this.
+
+SYG: Yeah, a host, loosely speaking, is a collection of implementations; HTML is a host of the JavaScript spec. There is nothing we can write in HTML that would be beyond it if it is just implementation-defined.
+
+MM: I think we can adjourn at this point.
+
+MM: I think we received a lot of good feedback, and clearly SYG and the champions can continue a lot of this conversation offline as well.
+
+CDA: Okay, thanks. That brings us to the end of day two. See everyone tomorrow.
+
+CDA: Big thanks to everyone and especially our notetakers for the day. Really appreciate it.
+
+### Speaker's Summary of Key Points
+
+* MM presented the “Don’t Remember Panicking” proposal, renamed from the Stage 1 proposal “OOM Must Fail Fast”.
+* The presented problem is that robust transactional code (e.g., financial applications or medical devices that need integrity more than availability) needs to be able to explicitly request termination when unrecoverable runtime faults occur, yet JavaScript hosts today handle these fault conditions inconsistently.
+* The new proposed solution: + * A HostFaultHandler hook to deal with internal faults within the current “minimal abortable unit of computation”. + * A built-in Reflect.panic function for developers to explicitly invoke the HostFaultHandler hook. +* There was pushback against Reflect.panic and giving web developers the capability to excessively halt programs, particularly webpages. It was proposed to split Reflect.panic into its own proposal to allow the rest of the host-handling mechanism to be considered separately. +* It was pointed out that there is no current common interoperable behavior defined for when browsers run out of memory. There was extensive discussion over the extent to which real-world resource management and fault conditions are already currently specified by Ecma262 and HTML, and whether they should be developer configurable. +* There was general agreement that Ecma262 should more robustly specify the current reality of how memory and other real-world resources should be handled. + +### Conclusion + +* Extensive discussion. +* Still in Stage 1. diff --git a/meetings/2025-04/april-16.md b/meetings/2025-04/april-16.md new file mode 100644 index 0000000..38de110 --- /dev/null +++ b/meetings/2025-04/april-16.md @@ -0,0 +1,994 @@ +# 107th TC39 Meeting + +Day Three—16 April 2025 + +## Attendees + +| Name | Abbreviation | Organization | +|------------------------|--------------|--------------------| +| Waldemar Horwat | WH | Invited Expert | +| Nicolò Ribaudo | NRO | Igalia | +| Michael Saboff | MLS | Apple | +| Samina Husain | SHN | Ecma International | +| Eemeli Aro | EAO | Mozilla | +| Jesse Alama | JMN | Igalia | +| Dmitry Makhnev | DJM | JetBrains | +| Richard Gibson | RGN | Agoric | +| Philip Chimento | PFC | Igalia | +| Daniel Minor | DLM | Mozilla | +| J. S. Choi | JSC | Invited Expert | +| Bradford C. 
Smith | BSH | Google |
+| Ben Lickly | BLY | Google |
+| Ashley Claymore | ACE | Bloomberg |
+| Istvan Sebestyen | IS | Ecma International |
+| Ron Buckton | RBN | Microsoft |
+| Chris de Almeida | CDA | IBM |
+| Jonathan Kuperman | JKP | Bloomberg |
+| Aki Rose Braun | AKI | Ecma International |
+| Shane Carr | SFC | Google |
+| Zbigniew Tenerowicz | ZBV | Consensys |
+| Gus Caplan | GCL | Deno Land Inc |
+| Mikhail Barash | MBH | Univ. of Bergen |
+| Ruben Bridgewater | | Invited Expert |
+| Daniel Ehrenberg | DE | Bloomberg |
+| Michael Ficarra | MF | F5 |
+| Ulises Gascon | UGN | Open JS |
+| Kevin Gibbons | KG | F5 |
+| Shu-yu Guo | SYG | Google |
+| Jordan Harband | JHD | HeroDevs |
+| John Hax | JHX | Invited Expert |
+| Stephen Hicks | | Google |
+| Peter Hoddie | PHE | Moddable Inc |
+| Mathieu Hofman | MAH | Agoric |
+| Tom Kopp | TKP | Zalari GmbH |
+| Kris Kowal | KKL | Agoric |
+| Veniamin Krol | | JetBrains |
+| Rezvan Mahdavi Hezaveh | RMH | Google |
+| Erik Marks | REK | Consensys |
+| Keith Miller | KM | Apple |
+| Mark S. Miller | MM | Agoric |
+| Chip Morningstar | CM | Consensys |
+| Justin Ridgewell | JRL | Google |
+| Daniel Rosenwasser | DRR | Microsoft |
+| Ujjwal Sharma | USA | Igalia |
+| Henri Sivonen | HJS | Mozilla |
+| James Snell | JSL | Cloudflare |
+| Jan-Niklas Wortmann | | JetBrains |
+| Chengzhong Wu | CZW | Bloomberg |
+
+## Intl Era Month Code Stage 2 Update
+
+Presenter: Shane Carr (SFC)
+
+- [proposal](https://github.com/tc39/proposal-intl-era-monthcode)
+- [slides](https://docs.google.com/presentation/d/1wvJoRFa8nRjlYSHuVLpxx-wCfwt4H9NIw2fsGJ72gxs/edit#slide=id.p)
+
+SFC: I’m going to be going through the Stage 2 update on Intl era month code. First I’ll start with a little bit of a reminder: what is this proposal? The goal of this proposal is to make operations in Temporal interoperable across calendars and eras.
For example, there are a lot of things on this slide that are not specifically covered in the Temporal specification, and yet are things that we think developers should expect to be interoperable. So, for example, if you specify the year 10 in the era BH in the calendar “islamic”, that should correspond to the year 10 BH in the Islamic calendar that you wanted. Although using calendar “islamic” here is a little bit misleading, as you’ll see later in the presentation, we believe that that type of operation is something that should be able to be made interoperable. So in other words, every conformant implementation of Temporal should have the behavior listed on this slide, and it should work.
+
+SFC: There have been a lot of changes since the last time this has come up. So just a little bit of a preface here: FYT, my colleague, did a lot of work on this proposal a couple of years ago and then took a break from working on it and Temporal in general. More recently, I’ve taken up the mantle of this proposal, so I’ll be sharing the updates today. One of the biggest changes was in era codes.
+
+SFC: So previously we had been using a scheme for era codes that had certain properties. It favored a general framework that would apply to all the different calendar systems without having to dive into the details of any individual calendar, but also had the property of all era codes being globally unique. Basically what we did was we took the BCP 47 ID for the calendar and used it as the era code, and then for calendars that had reverse eras, like the BC era in Gregorian, we tacked on "-inverse" at the end (of the name). That’s what we had previously done.
+
+SFC: However, we got feedback from a lot of different delegates that this scheme was confusing, and one piece of feedback that resonated with me was that, since these are also the names of the calendars, using them as the names of eras is a category error: having to repeat the same word twice, and read the same word twice when you’re reading your code and looking at debug output, doesn’t really tell you what an era is. Using the actual names of the eras is more useful there.
+
+SFC: So we’ve now adopted the scheme on the right, in the new column here, which uses the names of the eras as the identifiers. In order to generate these, basically the rule was: if there is a commonly known English/Latin-script acronym for the era, use that, and if not, use a transliteration. Many of these calendars have well-understood Latin-script acronyms, and for the ones that don’t, such as the Indian and Republic of China (ROC) calendars, we use the transliteration, as you can see there.
+
+SFC: One thing I want to point out is that the codes are no longer globally unique with this scheme. For example, the era named "am" means three different things depending on what calendar you’re in. In Coptic it means anno martyrum, in Ethiopian it’s amete-alem, and in Hebrew it’s anno mundi, and at least one source suggests that "am" can also mean minguo in ROC. So we definitely lose that property by adopting this new scheme.
+
+SFC: Next slide: era of arithmetical year. This is a concept we have in Temporal. It means that, when you create a date with a year but without an era, what era do we use as the index for the year? We had previously not clearly defined this. The main thing here is that, for Chinese and Dangi, we’re now using the overlapping ISO year as the arithmetical year, so it means you can write code such as what’s shown on the bottom.
This is what the Temporal polyfill is already doing, even though it wasn’t written in the spec anywhere, and I asked users on the ground and this seems to be straightforward to people. Basically, we use the western year that has the greatest overlap with the lunar year. So as an example here, the Chinese year 2024 starts some time early in 2024 and ends early in 2025. It means if you write code such as this (and I wanted to highlight this), month 12, day 8 ends up being in ISO year 2025 even though it’s in Chinese year 2024, but this seems to work the way people expect it to work. I wanted to highlight that here.
+
+SFC: Next slide. Hijri calendars. We did a lot of research on Hijri calendars, and it turns out most Islamic countries rely on physical observations of the moon. We can’t accurately predict what the date in the Hijri calendar will be for any region. There was an interesting situation with Eid this year, the celebration at the end of Ramadan: half of the countries observed the crescent moon on the 29th and half of them observed it on the 30th, and it meant that half of the people in the world ended their fast a day earlier than the other half. This kind of thing is not something that we’re currently able to represent in software, because it requires basically realtime live data. So a simulated Hijri calendar is at best an approximation and not something that can be used for displaying an actual reliable date, because of that problem.
+
+SFC: I’ll point out that some operating systems, such as Windows, actually solve this problem by allowing users to go and set a number in their operating system to adjust; I guess every new moon you’ll go in there and set your adjustment to plus one or minus one or zero. That’s a proposal we could possibly entertain for ECMA-402.
We’re not currently doing that, but this is a direction we could possibly explore. The problem of simulated Hijri calendars not really working is still there, though.
+
+SFC: Another problem with these simulated calendars is round-trip-ability. The Hijri calendar simulations are subject to change over time, so dates might not round-trip. For example, if you were to create a date in year 3000 and you tried to recover it later, it may or may not actually be recovered, because the simulations are subject to change in order to better match ground truth, even though it’s really hard to match ground truth.
+
+SFC: Here is a draft solution. We discussed it briefly in TG2 and we’re not 100% aligned on it, but I want to show you the direction we’re thinking of, and there are links so you can comment on this. The direction we’re thinking about is focusing on the three Hijri variants that have some sort of grounding in truth. There’s the official Hijri calendar of Saudi Arabia: they publish their almanac, even into the future, using astronomical calculations, so we basically ship the results of their calculations, and that works for a range of several hundred years. There are two others which are based on a well-known arithmetic cycle algorithm, tbla and civil, which are the tabular calendars with different epochs, and those are ones we can ship. In certain regions these calendars are used as reference points when you can’t do the observation, and then you might use these. So I want to show on the next slide an example of what this might look like in code.
+
+SFC: So currently the calendar named “islamic” is the Islamic simulation-based calendar, but the proposal is that the calendar “islamic” will then resolve in Intl.DateTimeFormat into a calendar such as umalqura. For example, if you’re in Saudi Arabia, it might resolve to umm-al-qura.
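Calendar resolution of this kind can be observed in today's Intl.DateTimeFormat: unsupported (but well-formed) calendar identifiers already fall back to the locale's default. A minimal sketch, using a made-up calendar ID to stand in for an unsupported one (the proposed mapping of "islamic" to a regional variant is not shipped anywhere yet):

```javascript
// A syntactically valid but unsupported calendar ID resolves to the
// locale's default calendar rather than throwing.
const df = new Intl.DateTimeFormat("en-US", { calendar: "foobar" });
console.log(df.resolvedOptions().calendar); // "gregory" (the en-US default)

// A supported calendar resolves to itself:
const df2 = new Intl.DateTimeFormat("en-US", { calendar: "hebrew" });
console.log(df2.resolvedOptions().calendar); // "hebrew"
```

This resolution step is what the draft would hook into: "islamic" would resolve to a concrete, shippable variant instead of the simulation.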
As a reminder, this type of thing is already done by Intl.DateTimeFormat. Intl.DateTimeFormat already has the behavior of mapping calendars when calendars are not supported. If I put in a calendar that is not supported, like here, in the en-US locale I’ll get the default calendar, gregory. There’s nothing new here in terms of algorithm; everything here is already conformant with the spec, which I want to call out because it’s very important. And on the Temporal side, Temporal is strict. It will support only the calendars that it supports; it won’t do the fallback.
+
+SFC: And the constraint that we’ll have is that any calendar Intl.DateTimeFormat resolves to will be supported by Temporal. That’s the constraint, so your code that loads the locale calendar and passes it to Temporal should continue to work. We’ll continue to make sure that that constraint is upheld.
+
+SFC: Let me keep going through the slides. I want to highlight that the spec text is still in progress. A lot of these things are written out in issues, but not yet reflected in the spec text; if you go check the spec text, it’s still the version that was there two years ago. Hopefully this will be resolved soon. Yeah, relationship with CLDR: we made recommendations to CLDR. What happened before is that we asked CLDR, “can you come up with era codes?”, and CLDR came up with era codes, and they’re the ones we ended up not liking in Temporal. In order to not repeat that mistake, this time we, as the champions of this proposal and in TG2, came up with these recommended codes and sent them to CLDR to adopt.
+
+SFC: And so far it looks like they’re likely to adopt most or all of our recommendations. And this will be a better outcome.
+
+SFC: I also want to highlight issue #2869, which is not specific to era and month codes. There’s a problem here about what you do with distant dates.
Say you’re in year 25,000 in the Chinese calendar. That’s very far away. We cannot accurately predict the ground truth that far into the future. And in general, dates more than a few hundred years away are not that widely used. Let’s just say 99% of dates represented in computers are probably within a span of 100 years, right? And then it goes down, and there’s a long tail. So these are very rarely encountered dates. It’s already somewhat unusual to encounter dates in the Chinese calendar, but it’s even more rare to encounter them more than a few hundred years away. This is definitely an edge case.
+
+SFC: This leads to two different camps we had in TG2. One camp says, “this is an edge case, and it doesn’t matter what we do in the edge case, so let’s go ahead and fall back to an approximation”. The other camp says, “this is an edge case, and we should inform the developer it is an edge case by throwing an exception”. The exact same facts lead to two different interpretations. The philosophy the Temporal champions have generally employed is no data-driven exceptions, and given that developers are not likely to thoroughly test locales in their application, we apply best-effort behavior. In Intl, you pass whatever locale you want and it will give you some result, and that result could improve as more data gets added, as more locales get added. We call that best-effort behavior. So this code here, like I’ve written here, is code that I think should always work: you take your locale calendar and then you give it some date, which could come from an external source, and you’re able to get a Temporal date in that calendar. That code should not just break randomly, in an implementation-dependent way.
+
+SFC: So my preferred approach, which is currently the approach posted in the issue (and I haven’t seen a viable alternative to this; if you have one, post it in the issue), is that we fall back to the approximation for distant dates and do the best-effort behavior, and we can have a follow-on proposal for users who care about this: we go ahead and expose information about whether a date is a “safe” date (“safe” is a word we can debate), basically whether the date is backed by an almanac or a reliable source. The answer for Gregorian would be true for the whole range, and for Hijri it would be true for however long the almanac goes, and so forth. This could also be reflected in the Intl formatting and such. That’s the end of my presentation. Happy to answer questions. And, yeah—
+
+USA: Let’s go to the queue. First on the queue we have Stephen Hicks.
+
+SHS: Is there an option for requesting Hijri adjustments from the OS?
+
+SFC: Yeah, we don’t currently have the Hijri adjustments, but that could definitely be something we could explore. We should make an issue about that. So thanks for bringing that up.
+
+DLM: Thank you. First I wanted to thank SFC and the members of TG2 for taking the time to investigate the Hijri calendar and the astronomical simulations; that was something that in some ways arose from our implementation of Temporal. I just wanted to talk a little bit more about that, to bring color to what Shane said. We’re definitely concerned about the Islamic calendar, which is an astronomical simulation of moonrise. The implementations in ICU4C and ICU4X do not agree.
It’s a simulation, not an observation, and there’s currently no way of specifying the observation point. ICU4C and ICU4X, the last time I checked, were using separate observation points, and I don’t believe ICU4C’s is specified. There have been at least some reports from users that we were generating not just inaccurate dates, but also impossible dates, for example, months with the wrong number of days. So I agree with everything that SFC said, and I wanted to make sure that others in the committee were aware that the simulations are definitely problematic, and at least we’re exploring the possibility of not shipping them, which also aligns with what SFC presented. We have very little evidence that these are being used on the web, and that’s by examining a corpus of websites; I’m planning on adding telemetry to see how much use we see.
+
+USA: Next is you again, DLM.
+
+DLM: Yeah, separate point. I’d just add on to what SFC presented about out-of-range dates, using the example of the Chinese calendar where things are maybe 25,000 years in the future or something, which sounds like a lot. But as he also alluded to, for Hijri, if we’re using a tabular data source, we might only have a few hundred years in the past and a limited window in the future, or no window in the future, for which the dates will be accurate.
+
+USA: And that was it for the queue. Would you like to make any concluding remarks, Shane?
+
+SFC: Sure.
+
+SFC: Since I’ve taken over this, we probably should have another Stage 2.7 reviewer. It seems like Dan Minor has been quite involved with this kind of thing, so he might be a good choice.
+
+DLM: I’d be willing to do that, and I can also ask if Henri would like to have a look at it, since he’s also been quite involved. But I can volunteer, and perhaps Henri will take it over.
+
+SFC: Thank you.
+
+EAO: I’m happy to continue.
+
+USA: For the notes, that was Eemeli. And that was it, I guess, right, Shane?
+
+SFC: That’s all I have for this topic.
+
+### Speaker's Summary of Key Points
+
+SFC: I gave an update on Intl era month code, focusing on the changes in terms of era codes, arithmetical years, simulated Hijri calendars, and out-of-range dates. The exact details have yet to be actually written down in spec text, but I anticipate that that will happen soon. I hope to come back to committee for this proposal going to Stage 2.7 in an upcoming meeting this year. I believe the Stage 2.7 reviewers from the last time we presented this were me and Eemeli.
+
+### Conclusion
+
+An update was given.
+
+## Compare Strings by Codepoint for stage 1 or 2
+
+Presenter: Mathieu Hofman (MAH)
+
+- [proposal](https://github.com/tc39/proposal-compare-strings-by-codepoint)
+- [slides](https://docs.google.com/presentation/d/1eTuB1jjgb2_xG_zMNmkhleJx1F0QviMEwkkBUL9ezPQ/)
+- [pdf slides](https://raw.githubusercontent.com/tc39/proposal-compare-strings-by-codepoint/19c5470bfb02acb4988708f5979d12720fa4c4c7/compare-codepoint-talks/compare-by-codepoint.pdf)
+
+MAH: I am here to talk about string comparisons. So first, a little reminder: what exactly are strings? In the ideal, Unicode says they’re a sequence of values, which have code points between U+0000 and U+10FFFF, minus a range that is used for UTF-16 surrogates. In JavaScript, we represent strings as a sequence of 16-bit code units. That is the UTF-16 encoding of those Unicode code points, while allowing lone surrogates, so we can have technically malformed Unicode strings in JavaScript. Any Unicode values outside of the basic multilingual plane are encoded as a surrogate pair, as two code units. For humans, what strings are is just a sequence of graphemes. It’s what they can visually recognize as characters, and that can actually be a series of multiple Unicode code points. A classic example is emojis, which are usually a combination of Unicode values.
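The three levels (graphemes, code points, code units) can be counted in today's JavaScript; a minimal sketch using `Intl.Segmenter`, available in modern engines:

```javascript
// "👍🏽" is thumbs-up (U+1F44D) followed by a skin-tone modifier (U+1F3FD):
// one grapheme for humans, two code points, four UTF-16 code units.
const emoji = "👍🏽";

const graphemes = [...new Intl.Segmenter("en", { granularity: "grapheme" }).segment(emoji)];
console.log(graphemes.length);    // 1 grapheme

console.log([...emoji].length);   // 2 code points (string iteration is by code point)
console.log(emoji.length);        // 4 code units (.length counts UTF-16 code units)
```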
Here is a bit of an example of how a string that appears to humans decomposes into graphemes and code points. The letters that I used in the word “emoji” here are not all in the Latin range; they are lookalike letters, some of them full-width, some of them in the mathematical range, and then I put an actual emoji, which decomposes into multiple code points.
+
+MAH: In JavaScript, if you look at the code units, you can see what the composition actually looks like: the code points that were in the higher range decompose into multiple code units. All right, so code units are unfortunately a concept that, for historical reasons, shows up in a bunch of places throughout the language. Whenever you access a string through an index property, you are actually talking about the code unit position in the string, and that means all String APIs that talk about offsets or lengths are related to the code unit representation of the string. When you try to match or test a string with a RegExp, similarly, by default it matches by code units unless you’re using the specific Unicode RegExp flags.
+
+MAH: When you’re comparing strings, with array sort or just the regular less-than or greater-than operators, you are also comparing the strings based on their code unit representation. But these days, there are alternatives that allow you to actually work with a string’s code points. When you take a string and you iterate over it, you end up iterating over the string as a series of code points. You can ask what the code point at a certain code unit position is: while the input offset is in code units, what you get out is the full code point, without breaking the value up. To be able to match or test by code points in a string, you can use the “u” or “v” flags for RegExp, and then you have recovered the ability to match the string by Unicode code point.
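These code-unit-facing APIs and their code-point-aware alternatives can be contrasted directly; a small sketch:

```javascript
const str = "a😀"; // "😀" is U+1F600, encoded as the surrogate pair D83D DE00

// Index access and .length speak in code units:
console.log(str.length);                      // 3
console.log(str[1] === "\uD83D");             // true: a lone high surrogate

// codePointAt takes a code-unit offset but reads the full code point:
console.log(str.codePointAt(1).toString(16)); // "1f600"

// Without the "u" flag, "." matches one code unit; with it, one code point:
console.log("😀".match(/^.$/) === null);      // true: two code units, "." matches one
console.log("😀".match(/^.$/u) !== null);     // true: one code point with /u
```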
For comparing strings, though, it’s less clear what you can do if you want to compare a string by using its code points. There are some comparators in the language that are code point aware, but let’s look a little bit closer at exactly what these comparators are. + +MAH: There’s two of them: there’s localeCompare on the string prototype, and also the new Collator compare in Intl. Effectively, these are the same, as far as I understand, maybe someone can correct me, but as far as I understand, they behave the same. They are both locale-dependent. Because they’re locale-dependent, how Unicode says a locale should treat some characters can change over time. The other thing is that, since they’re locale-dependent, the result varies with the environment in which JavaScript runs and what the locale implementation is. This is another way of not being stable: it depends on what the current implementation is, and that can also change. There’s actually a proposal in Stage 1 about having a stable locale, but that wouldn’t quite help, because there’s also another issue with locale comparators: they’re meant for humans. What that means is that they do some special processing for some characters. + +MAH: So there is a series of characters that are defined by Unicode to be confusable to a human; that means they basically look the same. As I mentioned earlier, I used the word emoji, but I used characters from different ranges that, for a human in some conditions, often look the same, but actually are not the same Unicode value, and the locale comparators group those together: so they will not compare the same, but they will be next to each other in the comparison. It also collapses characters in the same equivalence class. So in Unicode, there’s often different ways of representing the exact same character. + +MAH: I’ll give a couple examples now.
These are the results that you get from using the locale comparators with some of these values. Here I used a full-width Latin letter, whose code point is in the basic multilingual plane but above the surrogate range. I also used the mathematical character, which is not in the basic multilingual plane, and then I used just a Latin-1 character. If you sort them through the built-in comparison for strings, you end up getting something that is not in the Unicode order, because the mathematical character ends up being encoded as two surrogate code units and ends up sorting before the full-width letter. If you sort them by locale comparators, because the locale comparators do group confusable characters, you end up sorting them in what humans would consider the sort order, which is ABC in this case. + +MAH: And finally, here is an example of characters in the same equivalence class. This is E with an accent. If you compare these two Unicode characters, even though they are represented differently, they end up comparing the same. + +MAH: So what this proposal is about is the request for a portable comparator. Why do we need a comparator that compares by code points? Well, we need it for data processing, really. As I mentioned, the locale comparators are code point aware, but they’re meant for humans, and have sorting rules for them. We need something that is meant for computer systems. And mostly, for compatibility with other systems. There are many languages these days that represent strings as a series of UTF-8 code units. Some examples are Swift, Go, Rust; there’s probably a bunch of others. + +MAH: In particular, to us, SQLite uses it for string representation by default. And the property of UTF-8 is that a byte comparison of the UTF-8 code units yields the same order as the Unicode code points, so all these languages and systems end up sorting strings by their Unicode code points.
So what I’m proposing here is something like `String.codePointCompare`, a comparator that compares by code point values. The exact name can be decided, but the outcome that I want is that when we’re applying it to the example values I had previously, the sort order would end up being CBA, which no other comparator currently gives me. + +MAH: Why do we need this? This is an example for our use case—in the proposal repo, I have also linked to some Discourse discussions about some requests that are similar to ours. But in our case, we implement custom collections. These collections have a well-defined sort order: each type comes before another in that sort order, but within a type, we use the intrinsic order for that type. For numbers it is obvious how they sort. For strings we want to use a well-defined string order. And then for types that we cannot compare, like object references, we either use insertion order or we don’t allow incomparable values in these collections in the first place. The exact rules of our collections are not very relevant; basically, what we need to understand is that systems have collections that don’t use insertion order but use a well-defined sort order, and strings need to have a well-defined order in that case. + +MAH: What is interesting to know about our collections is that they can have different backing stores. To users, they have the same interface, they work the same, but some are backed by JavaScript in ephemeral memory—when the program restarts they’re gone—while others are durable and backed by a SQLite DB under the hood. And this is where a compatibility question comes up. We need compatibility between iteration of these two implementations. And that is it for my presentation. Any questions? I see a lot on TCQ. + +KG: Yeah, I’m in favor of this. I’m also in favor of having more easy comparators in general.
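A userland comparator along these lines comes up later in the discussion (the shim on slide 13). This is a sketch, not the proposal's actual shim; `codePointCompare` is the tentatively proposed name:

```javascript
// Compare two strings by Unicode code point value. Iterating a string yields
// one code point per step (lone surrogates come out as themselves), so this
// works even though the backing storage is UTF-16.
function codePointCompare(left, right) {
  const rightIter = right[Symbol.iterator]();
  for (const leftChar of left) {
    const { value: rightChar, done } = rightIter.next();
    if (done) return 1; // right is a proper prefix of left
    const l = leftChar.codePointAt(0);
    const r = rightChar.codePointAt(0);
    if (l !== r) return l < r ? -1 : 1;
  }
  return rightIter.next().done ? 0 : -1; // equal, or left is a prefix of right
}

// With the example letters, this yields code point order: C, Ｂ, 𝐀 ("CBA").
console.log(["\u{1D400}", "\uFF22", "C"].sort(codePointCompare).join(" "));
```

Note that for strings in the problematic range, e.g. `"\uE000"` versus `"\u{10000}"`, this comparator disagrees with the built-in `<` operator, which is exactly the point of the proposal.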
Like, we don’t even have a way of comparing numbers in the language, for example. And if I can find time, I will try to pursue something along those lines in the near future, and I think that that might end up affecting the design of this proposal. Probably not, but if we are going to add a bunch of comparison operators, I think it would make sense for them to be as coherent as possible. I don’t think this is a blocking concern. Certainly not at this stage. It might be something that we would like to think through before Stage 2.7, though. Anyway, I support this going forward. + +MAH: Thank you. + +SFC: Yeah, so you sort of talked a little bit in your presentation about, you know, the use cases involving SQLite, et cetera. I guess I wanted to—I was wondering if you could elaborate a little bit more on, like, what are the advantages you see in terms of having this implemented in the standard library as opposed to in userland. You have your shim on slide 13 which is, like, you know, 10 or 12 lines, it’s not that hard to write in userland. Are your concerns about, like, a built-in being more efficient? Are you worried about this code here being, like, tricky to use correctly, or are you more concerned about, like, this is a very widely needed use case that is, you know, motivated because everyone should be needing it? + +MAH: A little bit different parts of this. So let’s actually start with the last one. I believe most people don’t realize that they’re doing the wrong comparison on strings and they’re using UTF-16 code units when they should be doing some other comparison, depending on the intended sort use case they are looking for. In general, the regular sort comparison is not what they would want. The other part is performance. Yes, you can implement this in userland. However, not all engines implement strings the same way under the hood.
Sometimes it’s more efficient to iterate over multiple strings like this using iterators, sometimes it actually is more efficient to use an index and use codePointAt. This one is tailored to the engine that we use the most, but that doesn’t mean this is going to be the most efficient throughout. And no matter what, a native implementation is of course going to be more efficient than the userland one. + +SFC: Yeah, I have just two replies to those. The first one was that, so you said that UTF-16 sort order is—I forgot the exact adjective you used, I think unexpected or wrong—but it’s a well-defined sort order, and it’s the most efficient sort order that’s going to be possible, you know, from UTF-16 strings. And it’s perfectly fine if you need the property of strings being sorted, for example, if you’re using sort of like a b-tree map that requires the property of a total order of strings; code order is fine, UTF-16 order is fine. Right? So then this gets a little bit into my comments that I’m coming up with later, but I guess I’m, you know, a little bit confused by your assertion that UTF-16 order is wrong. Like, because it is fine as a total ordering, and if you want a human ordering, that’s what Intl Collator is for. And UTF-8 order is no more correct than UTF-16 order, because they’re both total orderings of strings. + +MAH: It’s okay if the only systems your program is interacting with are systems with a similar encoding. Any time you have to deal with another system and need to process your data in the same way, the UTF-16 encoding is most likely not going to be appropriate. + +SFC: Okay. Yeah, I’ll save more, because I have another topic about this later. The other thing—so my next comment was, yeah, if performance is a concern here, I think it would be obviously helpful to see benchmarks.
If this proposal is being motivated by performance, it would be nice to, you know, maybe have, like, a WebAssembly implementation versus, like, this shim and then see if one is significantly faster than the other, or some other way to, like, give a ballpark for what the performance is going to be somehow. But, yeah— + +MAH: I’m not sure how—I mean, besides having this implemented in the engine that we use, I don’t see how I can get performance numbers, because with WebAssembly, there’s a bunch of other overhead that would come into play. WebAssembly doesn’t have a string representation, so it’s a can of worms. I’m not sure how I can get performance numbers for a proposal at Stage 1, besides doing the implementation in an engine. + +SYG: Just a clarifying question. I think folks in the Matrix helped me clear this up, but I want to check with MAH. By portability, what do you mean? I thought you meant some code doesn’t work exactly the same across systems. Do you mean that, or do you want the semantics to be easily understood without surprise by JS programmers working across both JS and, say, SQLite? + +MAH: The second part. As I mentioned, we have collections that have two backing implementations. One is a heap representation using JS maps. And another one is backed by SQLite. And so when we’re iterating over that collection, because the collection has a well-defined sort order, we end up iterating according to the encoding system of the backing implementation. So in JS we use Maps, but we sort the keys, which ends up using the native sort order if we’re not careful. We actually had some issues where we forgot and ended up using the native sort order in the heap implementation, and that would iterate over keys in one order—if you use the three letters from the example, it would come out as CAB.
If we used our SQLite implementation and relied on the SQLite implementation order, we would end up with what I actually expect, which is CBA. + +SYG: And the point, with your custom backing store collection, is that for the one stored by SQLite, at some point SQLite is sorting and gives you the sorted order? + +MAH: Correct. When we get the results from a SQLite query, we’re asking SQLite to sort by keys, and it automatically sorts the keys according to its string representation. + +SYG: Okay, that clarifies for me, thanks. + +USA: Reminder that we have around five minutes to go, and then a few items on the queue, so let’s be brief and quick. Next we have—oh, I assume, MAH, that you want to proceed with the queue. But you can ask for Stage 1 at any point or sort of prioritize the queue as you see fit. + +MAH: Yeah, let’s go over a few more items. + +ABO: Yeah, so I think this is needed because it’s not just that the regular comparison gives a different result. It’s that most developers are not aware of the details of encoding and would not expect JavaScript to give a different result than Python or SQLite or Ruby or so on. And, like, even I, who am familiar with encodings, UTF-8 versus UTF-16 and surrogates and so on—I was implementing the, like, sorting of strings in Nova—I don’t know if you remember, Aapo Alasuutari had a talk at the Helsinki plenary last year, and I was implementing string comparison on that engine, and we have strings as UTF-8—or, well, WTF-8, extending UTF-8 to have lone surrogates—and I didn’t realize that the regular comparison would not match JS. And if I didn’t realize that, when I’m comfortable with encodings, definitely the average developer would not be expected to realize it.
+ +ABO: We not only need to add this, but we need to let developers know that they should not use regular comparison when they’re interfacing with other systems, unless they know that the other systems are using UTF-16 code units—which is JavaScript, Java, I think C#, and not much more. Well, everything else pretty much uses the equivalent of comparing with UTF-8 or code points. + +MAH: Yeah, I mean, it’s the same for us. Like, we know about Unicode, and we forgot about the comparison when we were sorting. So I take your point that this is going to require some developer outreach. + +MLS: I think your shim answered my question. And that is: do you plan that code point compare would sort multi-code-point emojis, for example? And I think the shim does that. + +MAH: I mean, it sorts them by their individual code points. + +MLS: Right. But if you have two emojis and they differ in the third code point, it’s going to sort them based upon that comparison of the third code point? + +MAH: Yeah, correct. + +SFC: Yeah, this follows a little bit from my previous question, but if you’re interoperating with something like SQLite that uses UTF-8, presumably you have UTF-8 strings in memory, like in an ArrayBuffer, using a TextEncoder. If you already have the UTF-8 strings, you should be sorting on the UTF-8 strings, not the JavaScript strings. I was just wondering if you could address, like— + +MAH: We actually don’t—so what happens is that the UTF-8 strings are stored in SQLite, but when we read them out, they basically come out as JavaScript strings. Mostly through JSON parsing. So I’m not going to go very deep into details, but yes, at some point, we have it in binary form, but I’m not even sure in our system we actually ever end up seeing an ArrayBuffer of those. + +SFC: Okay. + +MAH: I think the only place where it shows up is in bindings of the SQLite library. + +USA: We are almost at time. There’s two more items on the queue.
But MAH, you might want to— + +MAH: I will ask for Stage 1 here first. Do I have some support for Stage 1? + +WH: I support Stage 1. This should have been done long ago. It fixes a bug that dates back to the beginnings of Unicode. + +MAH: Thank you. + +USA: Also—thanks, KG. Also on the queue, we have— + +USA: We have MF with support, JHD supports Stage 1, and CDA also says support Stage 1. Let’s maybe give a couple more seconds for any more comments. Also on the chat, MLS with more supporting comments. Congratulations, you have Stage 1. Would you— + +MAH: Let’s go maybe to—WH, do you have anything else to say? + +WH: The only other comment I had was that this really has nothing to do with UTF-8, since UCS-4 also sorts in the same way as UTF-8. + +MAH: Just UTF-8 was the most common case, yeah. + +WH: This happened because surrogates were added late to UTF-16 when Unicode folks realized that they’d need more than 64K characters. They couldn’t use the encodings at the end of the 16-bit range, which were already used for other things. This causes the irregularity when you compare surrogate pairs with characters between U+E000 and U+FFFF. + +MAH: That’s exactly the problem. That’s the problematic range, exactly. + +MAH: MF, I would love to hear your question, if we have time. + +MF: Yeah, I can do it quick, sure. + +MF: So, yeah, I generally support the proposal. But in your examples, you showed the kind of assumption that there would be a single function that compares your strings, and I think there might be some more general thing underlying here that we could do. I would like to see during the Stage 1 process that you explore solutions that maybe are a bit more general, where we take just two arbitrary iterables and, to make it more ergonomic for the string use case, have a string iterator that yields numeric code points rather than the single-code-point strings that the string iterator yields today. I think we could have a generic solution that's still sufficiently ergonomic.
We could probably do both, but I would like to see that explored to see how good it would be, if that’s a possible solution on its own. + +MAH: I think KG expressed something similar in the past. My main concern with that approach is that, because it’s relying on iterators, I am not sure how well engines might be able to optimize for it. Here, at the very least, the engine can recognize the comparator function being passed to sort and doesn’t technically have to invoke it. Iterators are notoriously hard to optimize. + +WH: I don’t understand MF’s comment. I don’t know the generalization of — + +USA: Unfortunately, we are at time, though. We would have to bring this back as a continuation. + +MAH: Michael, can you file an issue maybe—that will help Waldemar understand the request? + +MF: Yeah. Will do. (opened [#6](https://github.com/tc39/proposal-compare-strings-by-codepoint/issues/6)) + +MAH: Thank you. + +USA: And thank you, MAH. + +### Speaker's Summary of Key Points + +I presented a proposal for Stage 1 to explore comparing strings by their Unicode code points. The motivation is compatibility with other languages and systems that use that sort order. There were some clarifying questions regarding when different string comparators should be used, and a request to explore the intersection with iterator-based comparators. Some delegates highlighted that the default sort order can be surprising for any developer not familiar with JavaScript string encoding, and that there is a need to document this better. + +### Conclusion + +Stage 1 + +## Update to Consensus policy + +Presenter: Michael Saboff (MLS) + +- [slides](https://github.com/msaboff/tc39/blob/master/TC39%20Consensus%20Apr%202025.pdf) + +MLS: This is a continuation of our conversation that we had in Seattle. And I asked for an hour; I don’t think this is going to take an hour.
But we will see. This has caused conversation in the past. I think from Seattle there’s general agreement that there is a problem: we need to deal with single dissenters. It’s rare, but there have been some issues in the past. There’s also, I took away from Seattle, no desire for, like, a major process change. Our social norms seem to be enough to guide us for 9X% where X is a pretty big number. 98%. And also, I took away that there’s no need to have more than two objectors. I originally proposed 5% at Seattle and people thought that was too onerous—you have to figure out what 5% is, and so on and so forth. + +MLS: There was some sensitivity to having dissenters from the same ECMA member, or possibly different members with a financial arrangement between them. So, for example, speaking as part of Apple: if the two dissenters were both Apple delegates, that doesn’t seem right. I agree that’s something we should figure out how to handle. + +MLS: And last, I think MM brought this up, and others: any system we come up with can be gamed. So any changes we make are not going to change that. Maybe we can make it more difficult to game, but it’s my hope that the TC39 members are acting in good faith. And generally, I believe that that is the case. So these are the kinds of take-aways I took from Seattle. + +MLS: So the goal for TC39 decision-making is, in my mind, an orderly, deliberate, open, welcoming and inclusive process, so that the delegates in attendance, and the experts, can discuss and evolve JavaScript for the whole ecosystem. You know, not just developers or implementers, but everybody, including end-users. And it should be based more on social norms than on rigid rules, with flexibility in the system. We agree on general consensus—for example, we just had a proposal go to Stage 1 where there was general agreement among people: let’s investigate codePointCompare. + +MLS: So I am going to propose a minor change.
Withholding consensus, and the check marks, we already do—maybe we need to remind ourselves—but delegates clearly explain the reasons, including possibly acceptable changes to a proposal so that they would support it. I am going to skip the second line and come back to that. We do want the reasons for withholding consensus to be recorded in the minutes; that is helpful not only for the champions, but also for other people to go back and remember why something didn’t move forward. And withholding delegates should be willing to discuss a possible path forward with champions. The last two things are already being done. + +MLS: What I would like to propose, and I think MM came up with this, is that we don’t necessarily have to have a second delegate withhold consensus. They could also basically voice support. Think of somebody making a motion and somebody seconding it—that’s what we are discussing in the case of somebody that voices support for somebody else that is withholding consensus. + +MLS: And so we basically have two people, and we don’t want them to be from the same ECMA member. Again, this could be gamed, because people from different members could agree to withhold consensus. But again, the expectation is for this to be done in good faith. + +MLS: And so that’s one thing I would like to discuss: if we have a second delegate who says “I second that” or “I support that”, they don’t necessarily have to think that they themselves would withhold consensus, but they understand the reasons why a dissenter does withhold consensus. + +MLS: And the last point is: can an invited expert withhold consensus?
I think they could be a supporter, second it, but the reason I am bringing this up is because if you look at ECMA bylaws, only members are allowed to vote. For example, probably at the May meeting we will vote on—I think we already voted on ECMAScript 2025, sending it to the Executive Committee and the GA—but only members can vote on that. + +MLS: So this is what I am proposing. It’s two things: one, that we have a second dissenter, a second person withholding consensus, or a person that supports the sole dissenter; and that neither of these can be from the same member company or have an obvious financial relationship between themselves. And then I would like to discuss where invited experts fit into this policy. + +MLS: So that is it. I don’t have the queue available because of how I am sharing the screen. But I will leave this slide up. So… That was 9 minutes. Let’s see how long we discuss this. + +DE: Well, I think it’s good to have a way to overcome certain vetoes, and good to acknowledge the state of our decision-making procedure. One thing that Rob and the chairs suggested last meeting was around having—you could call it a cooling-off period. Anyone can block a proposal—including, maybe, an invited expert—during one meeting. Say, it’s not going to advance this meeting. And then we cool off. The objector or objectors clearly state their reason. And then, as a follow-on topic at a subsequent meeting, we can have an agenda item which is, you know, considering moving past or overriding the objection. The person who decides they want to specifically invoke this procedure makes a presentation, explains the objection, and explains why they think it shouldn’t be a blocker. And then we see if there are multiple people objecting to it. So this procedure could be invoked no matter how many people, whether one or multiple, gave the specific objection.
And then the committee, given sufficient time to think things over, could make a collective decision on whether to move past it. I think this thing about taking time to overcome objections is more important than some of the details about the threshold, whether it’s two people or more people, whether it includes invited experts. I think the most important thing is that we’re very conscious and resolute when we make these decisions. + +MLS: DE, if you want to bring that forward, that’s fine. I haven’t thought about that and worked through it. I generally agree that something like that would be useful. But I think that’s for another time. + +DE: I could make another presentation about this, where I propose this. I was— + +MLS: That’s what I am saying. + +DE: I do think we should adopt these two things together, though. + +MLS: Okay. + +DE: Because if we just do this kind of weakening without this other safeguard, it could leave us in tricky situations. I do want to understand better why you think that this is kind of separate from what you are proposing. + +MLS: Because I haven’t thought about it. I would like to— + +DE: Okay. Sounds good. + +SHN: I just want to make a comment; it’s not necessarily a question. It’s come up on MLS’s slide, and it is the question of invited experts. As you are all aware, the invited expert role is based on ECMA rules. They don’t vote. I understand that in TC39, when you do temperature checks, it’s different than voting. I think here, in this particular discussion, perhaps we need to think about the invited experts and whether they can withhold consensus. Ideally, I don’t think that would be the way to go forward, but I leave it there for discussion.
+ +CDA: So just on this point, I am not on the queue, but it’s sort of been long-standing practice that, in the spirit of the committee, blocking concerns from invited experts are respected, as are blocking concerns from people who are delegates for ECMA members that are not able to vote. So I don’t think it would be practical or fruitful to go down the slippery slope of determining whose voices are more important than others. + +SYG: Could you point me to some examples? I am not exactly sure when invited experts have actually blocked. + +MLS: We could. + +CDA: Long-standing historical precedent comment… + +DLM: We’re opposed to the financial relationship qualification. We have a financial relationship with Google, as does Apple. And our current process basically requires implementations in V8, SpiderMonkey and JavaScriptCore. So if we move ahead with not being able to do anything based upon financial relationships, that affects a proposal that has advanced to Stage 3 and then needs 2 to 3 implementations. So I don’t think this is right. + +MLS: DLM, I wasn’t thinking about the financial relationship that you have with Google, and Apple has with Google—I was thinking of contract financial arrangements. But yeah, you bring up a good point. + +CDA: All right. NRO? + +NRO: Yeah. I agree with not having a blanket ban on blocks from companies with financial relationships, exactly for the reason DLM just said. But it would simply be good to have some wording about that. There are, like, cases—I work for Igalia and we are paid to work on this. And, like, it just should be disallowed for some other company in the committee to, like, try hiring us to block, say, a Google proposal for them. And, like, all of this needs to be somewhat based on good faith, because, like, we cannot enforce this. But at least having some guidelines, some wording on this, would be good.
Regarding the financial relationship thing, DLM already brought up, you know, the three browser implementer problem. But the other thing is, basically, very many of the organizations here, you know, have financial relationships with, you know, companies like Igalia and so forth. But Igalia is also quite a big company with a lot of different, you know, delegates working on a bunch of different proposals, and it doesn’t necessarily, you know, make sense that, like, if one delegate working on one proposal, you know, backs a delegate from a different proposal, that should be disallowed. Like, it’s almost a thing if they’re in a very tight relationship—which I think is sort of the spirit. They are in a tight relationship. But that’s just very, very difficult to define. Yeah. That’s all. + +CDA: Thank you. There’s Michael Ficarra with a +1 to NRO’s comment. Let’s go further on the queue, to DLM again. + +DLM: I wanted to point out that the current process allows for blocking solely on something being added late to the agenda. And I think that’s important to maintain. I don’t think we should need a person to second that, if people legitimately haven’t had a chance to review something because it was added late to the agenda. That should continue to be a sufficient condition to withhold consensus. + +DE: Yeah. DLM, I didn’t think about including that here. But I agree that if you are not in the ten-day window, that’s more of a process thing that’s not based upon “we don’t like this” or “we don’t want this change”, kind of thing. I support that. + +DE: Yeah. If we do say that it takes multiple meetings to overcome a block, then I think this follows naturally, and that helps with the fundamental reason why Dan is raising this, because everyone should have a lot of time to review things and think them over. + +CDA: For the record, for some reason, your mic went really quiet that time. Anyway… Waldemar is next.
+ +WH: I am a bit uncomfortable with creating second-class citizens out of invited experts. Can invited experts still review proposals? + +MLS: I don’t have a problem with them reviewing proposals. I think the issue is more one of keeping with ECMA’s bylaws and policies. And you can think of the case, again, in my mind, of gaming the system, where an invited expert comes to one meeting to block a certain proposal. Not that that is going to happen. But it could. + +WH: A lot of things _could_ happen. But I think we’re focusing too much on the identity of whoever is supporting or opposing rather than the rationale. I think the rationale is more important. I don’t see that much of a difference between invited experts and academics, other than official standing within ECMA. TC39 has explicitly not done formal voting other than the annual votes to push out a new version of a standard — + +MLS: Well, actually, TC39 does more voting than probably all the other TCs, because a dissenter is a negative vote. It’s a veto. So we vote far more often than the other TCs. All the other TCs work by consensus and don’t take votes except when they are advancing a new version of a standard. So we do it quite often. Not every meeting do we have a dissenting vote on a proposal moving forward, but that’s a vote. So I want us to recognize that. + +WH: I disagree with this characterization of everything as a formal vote. I am also uncomfortable with invited experts not being able to support proposals. Or are you saying invited experts can support, but cannot oppose? + +MLS: I am saying that invited experts should not be a lone dissenter, but certainly they could give support. + +WH: That seems wrong.
+
+SYG: I’ve been somewhat uncomfortable throughout—not just this discussion, but throughout the whole history I’ve been involved in TC39. I have never really quite understood to what extent we are to uphold the ECMA bylaws, because we seem to operate in opposition to a bunch of the bylaws. I understand we have a lot of sway, and I understand we have been operating in our own way for a long time. But we are still a body within ECMA, and we have a legal and IP umbrella through ECMA. So I don’t even understand what flexibility we are afforded here. It seems to me that for the invited expert question, the ECMA bylaws are pretty clear. So if this is actually under discussion, as SHN herself said, I would like to hear from the ECMA administration what they see as the flexibility that TC39 has to operate in a way that isn’t according to the bylaws.
+
+SHN: SYG, thank you. A fair question. This discussion is raising a point, or multiple points, on how TC39 works versus, I would say, all the other technical committees. Voting is typically something that we do more at the General Assembly, and that’s only the ordinary members. Within TCs, as MLS said, it’s done by consensus. Mind you, other TCs are much smaller and much less frequently have to find this point of consensus—mostly when they are finalizing standards. This is different for TC39. I am always trying to be pragmatic and ensure that the work that every technical committee does is bringing value. I do think that as we think more and more about this topic of consensus, it’s becoming tricky. I also understand WH’s comment: you don’t want two classes of citizens. SYG, I don’t have a clear answer, and I appreciate that I may have the chance to give a much clearer answer at the next plenary in person. And I may go and think more deeply about this in a broader way for TC39.
And I’m sorry for that. I know it’s beyond the agenda today. But some points brought up today have touched on some very important points of our rules.
+
+MLS: So let me see—like I say, I can’t see the queue because I am displaying full screen. Let me see if what I say now is acceptable: that we need a second delegate, either to withhold themselves, or to second, as it were, or support a sole dissenter; and that they can’t be from the same ECMA member company. Taking those statements together, is that an acceptable change to our policy?
+
+WH: Who are you asking?
+
+MLS: I am asking the committee —
+
+CDA: I just wanted to respond on that particular aspect and some of those other aspects, the ECMA rules, and what invited experts can and cannot do. I think details like that are important, and especially relevant if there’s going to be such a significant process change. But I think we’re putting the cart before the horse a little bit there, because I don’t think those particular details are going to move the needle on whether this committee wants to adopt the higher-level process change to begin with. So with that, I would like to keep moving through the queue.
+
+DE: So briefly, we have been operating at—you could call it a superposition of multiple different possible policies. Different chairs in the chair group, even, have different opinions on whether invited experts can block. And when invited experts do block, it’s ambiguous whether the block is real and the proposal is getting blocked, or whether the proposal champion is voluntarily taking the proposal back because they got strong feedback. I’ve been telling some of the people in the chair group privately for a while that this should be made unambiguous, but it’s politically fraught, as we are seeing now. I think overall, TC39 does follow ECMA rules, and I don’t see any mismatches.
ECMA has a voting procedure that TCs can use, but most TCs don’t use it, and we are similar. If there’s some other mismatch with the rules, we should definitely get it changed in the rules. We have already gotten several changes made in the ECMA rules to accommodate TC39. It’s straightforward to make such changes: a rule change takes a simple majority of the General Assembly. And, as president of that General Assembly, I am happy to help you get a new rule change through our process.
+
+JHD: Yeah. I mean, separate from the ambiguity, I think it’s important that invited experts and delegates are afforded equal rights. But ECMA exists to serve committees. If its bylaws are not serving the committee, then they must be changed. And we should pursue that if it turns out there is a conflict—which it doesn’t seem like there is. But I wanted to state that. We aren’t here for ECMA; ECMA is here for us, and all the other committees.
+
+DLM: I wanted to second what CDA said. We can go down a deep rabbit hole talking about invited experts, and it’s important that we go back and talk about the overall proposed change to the process.
+
+MM: So first of all, let me make it unambiguous that I object to this overall thing. But I am very glad to see that what is being asked for has been whittled down substantially from the thing that I objected to much more strongly—in particular, the fact that the supporter does not need to be objecting, just lending support to something. I wanted to clarify that. MLS, when you raised that, you cited me as the suggestor, so I am going to clarify the suggestion. MLS started his proposal, maybe it’s in the previous slide, by simply saying there’s general agreement that there’s a problem with the lone objector. What there’s not general agreement on is that there’s any cure for that that is not worse than the disease.
A sole objector, together with the assumption that members are working in good faith, I don’t think is a problem. The danger is a sole objector that everyone else suspects is not objecting in good faith. And, therefore, the thing that I was suggesting was that the supporter, if you will, is not so much supporting that the proposal should not proceed, and is not seconding—I think those are both misleading ways to put it, even if procedurally they are correct. The way to put it is that someone else on the committee—and it’s fine to say not a member of the same company, if we go forward with this suggestion—agrees that the objector is objecting in good faith. And as long as the objector is objecting in good faith, I think that deals with the only legitimate issue with the sole objector. And I would certainly object to anything stronger than that.
+
+MM: And like I said, I think the main reason this whole direction is counterproductive is that under the current rules, we all get to work on the problem. And when there’s a sole objector in good faith, they’re normally objecting to the particular solution to the problem. They are usually not objecting to the idea of the underlying motivating problem being solved somehow. And I have seen this over and over again—which I think TC39 is brilliant at—let’s see if we can find some other way to solve the problem that overcomes the reason why the objector is objecting. And then we move forward. Any attempt to weaken that distracts from technical work and focuses activity instead on political work: can I get somebody else to support my objection? And that’s just counterproductive. So I object to this whole thing.
+
+MLS: MM, I think both you and I have been subject to people who have objected in bad faith.
+
+MM: Yeah.
+
+MLS: And we have seen it.
+
+MM: I agree. Let me respond immediately to that.
+
+MM: Every case where I have been blocked in bad faith—that I believe was in bad faith; obviously, there’s no objective test—has been by a browser-maker. And I see SYG has an item later on the queue about the implicit veto that browser-makers have anyway. If there’s no way to overcome that, there’s no way to have overcome the bad faith objections that I’ve been subject to.
+
+MLS: So for the voices of support for a sole withholder: I would like somebody, if they are not willing to object themselves, to assert or offer to the committee that they believe the objection is in good faith. And I do agree with you that we want a collaborative process as we evolve the language. And you’re right, good faith is a subjective thing in most cases. Although I think there have been cases in the past where it was pretty clear to a majority present that it was in bad faith.
+
+MM: Since I did mention that I think I’ve been blocked in bad faith by browser makers: it’s not someone on the committee at this moment. I am not saying that about anybody here now.
+
+SFC: Yeah. So I largely agree with the perspective that MM is bringing here. And I just wanted to ask: it seems like the real problem is a delegate acting in bad faith, by some definition of bad faith. And it seems to me like that’s a problem more for the code of conduct committee than anything else. If there’s a delegate acting in bad faith, then we kind of have a process for handling that.
+
+MM: I did not bring the particular case to the code of conduct committee, and would not, because I can’t imagine that would have been productive.
+
+MLS: Yeah. I agree with MM there. I believe there have been cases where I thought there were code of conduct violations, but I didn’t think it was worth reporting. We have seen in the past—and it hasn’t been, I would say, in the last several years—where withholding consensus has caused somebody to stop attending.
Whether they were the champion of something, or even if they were just a bystander. And I think we also obviously have people—we talked about this in Seattle—who have more initiative to speak up, and others that are more reticent. And we have to take that into account if we want all voices to be involved in the technical discussion in the committee.
+
+SFC: Yeah. I mean, all I am saying is that it feels like, if the problem is really acting in bad faith, then maybe we should look more into that—I agree with what MM said, that’s a direction we should look more into. Handling it from that angle.
+
+MLS: And I agree with MM. I don’t want the cure to be worse than the problem.
+
+PFC: I would like to register my explicit disagreement with the assertion that the status quo doesn’t admit any politics. Either way you slice it, the process is a political process, whether you have sole dissenters or not. If you have sole dissenters, there’s an intense amount of politics around things like: which ECMA member is that dissenter from? How much soft power do they have in the committee? I agree with the goal of building a process that minimizes the politics and maximizes the technical discussions we can have. I just disagree that the status quo is that process.
+
+SYG: So MLS, I want to entertain this hypothetical to the extent that you would like. I wanted to talk about—if we all do agree that de facto vetoes kind of do exist for the browsers—let’s not bring individual technical stuff into it. Let’s just say, for this hypothetical, that we somehow get a top-down direction for some other reason completely out of my control. Like, this product, blah, blah; therefore, we cannot agree to some particular proposal. It doesn’t have anything to do with the technical merits at all. It’s just forced by some other constraints—it’s not shippable for us, or something like that. And for this hypothetical, this is a problem only I have.
Apple doesn’t have it, Mozilla doesn’t have it, other implementations don’t have it. In that world, there is zero technical reason in this hypothetical for any other implementor to support the veto. It’s not technical. It’s from the top down. Given that, if we can’t have that veto, we are still going to go into a world where the feature might be non-interoperable, because for external constraints I can’t ship the feature. How do we address that failure mode?
+
+MLS: So I think if you’re going to act in good faith, you would let the committee know that that is the issue. That you can’t share—
+
+SYG: [inaudible]
+
+MLS: Without revealing any internal information, you can say that you can’t ship this, with whatever justification you can provide; then the committee knows that. And the committee can respond to that. Various implementers of various technologies have what they see as their market, and they do or do not agree to certain changes in standards. But communication is the most important thing here: “this is why we are not supporting this, and this is what we would support”.
+
+SYG: Typically, I think hypotheticals like the one I brought up will be exceedingly rare, and I am supportive of this change. But there are some new edge cases that may arise in the process discussion that I wanted to point out. That’s all.
+
+MLS: Okay.
+
+WH: I am concerned that we’re focussing too much on folks objecting to things in bad faith and we’re throwing the baby out with the bath water. There are a lot of scenarios which arise much more frequently. Those include proposals which simply haven’t met the entrance criteria for the stage they are going for, or bugs which have been identified in proposals, which should be fixed before advancing. And this change would not be helpful in those situations. I think the reasons for not advancing at a particular meeting are more important than how many delegates state those reasons.
+
+MLS: WH, wouldn’t you say that, for example, if there’s a bug found, or there aren’t enough reviews done, it would be easy to get a second person to agree to block? And it’s clear the reasons for withholding consensus would be stated and recorded, and those could be easily overcome in that case. Also, with a bug: if it’s pointed out, other people could see that there’s a bug, and they would support withholding consensus. And that bug would either be addressed in the spec or algorithm or whatever, or, if it’s a fatal flaw, I think that would be able to be shown to the champions.
+
+WH: That has not been my experience. Typically what happens is, somebody identifies a bug. And the other delegates are not really familiar with it. They need to think about it. There is no time left in the timebox to explain the bug. No, you would not get support from other delegates for that. This change is counterproductive in such situations. And it’s also unnecessary, since in that situation nobody is trying to actually block something from getting into the standard. It’s just not ready at that meeting.
+
+MLS: I wouldn’t say that’s true. I think things have been blocked with a desire to never bring them into the standard.
+
+WH: You misunderstood me. I am talking about the more common situations in which the discussion identifies a problem, and nobody has had time to work on fixing the problem yet.
+
+MLS: But again, I think that is something that the others in the room can be made aware of, and it shouldn’t take a huge amount of convincing for them to also support blocking. And that blocking would be considered temporary.
+
+WH: This asks people to block based on things they don’t understand. I am very uncomfortable with that.
+
+CDA: All right. Thanks, WH. That’s it for the queue.
+
+MLS: So I sense that we’re not willing to move forward with even part of this, which is that we would have somebody who would support a sole withholder?
+
+MM: That’s correct. I think that we’re fixing a non-problem, and even a step in this direction is worse than the disease.
+
+MLS: Okay.
+
+CDA: Thank you. If you would like, MLS, you could formally ask for consensus for your proposed change. But if I am a betting man, it doesn’t sound like it’s—
+
+MLS: I don’t think I need to ask that question, because I think I already know the answer.
+
+CDA: Okay.
+
+MLS: MM and WH’s last two comments were sufficient to convince me of that. But I think the other comments made during this discussion show this is a problem that does need to be addressed.
+
+SYG: To MM, we heard a direct disagreement with your understanding of the status quo. I wonder if you have any thoughts on how your interpretation of how political the work required in TC39 is—your view is at least not universally shared.
+
+MM: Okay. Any time you get human beings together under any circumstances at all, there are some politics. I don’t disagree that in the status quo there are some politics. But I also agree with the point made at the same time, that we shouldn’t do anything to amplify the politics at the expense of technical points. And any step in the direction that MLS is proposing amplifies politics and diminishes good faith technical involvement.
+
+SYG: Can you explain the thought process that makes you think that? Compared to the current way—and I agree more with what PFC said—one way the single veto, or at least the threat of a single veto, has turned extremely political is that it focuses all the engagement on either heading off and anticipating known repeat folks who would like to block, or reactively dealing with it after the fact. It concentrates a lot of procedural and political power into the hands of those folks.
And that’s where I see a lot of political work—if you are not involved in a particular proposal—it changes from proposal to proposal is what I am saying. It’s not a constant thing that is always happening in the committee at large. So I think it’s very disproportionate, and some people get exposed to it a lot worse than others, especially those who need to have some involvement in every single proposal. So I would like to understand it in comparison with that. How does MLS’s proposal make it worse?
+
+MM: Okay. So, first of all, with regard to those issues, I am glad this slide is on the screen. MLS’s check marks—the status quo—are really essential to making the current process as reasonable as possible: the objector has to support technical engagement, has to make their reasons clear, and has to engage with the delegates to see if there’s a way forward with the purpose of the proposal that meets the objector’s objections. I think all of that is great, and I think we have been doing that. And beyond that, I frankly did not understand the question.
+
+MLS: So MM, let me add that I think we do that almost all the time. There are times when we don’t do that. And that gets to the political side.
+
+MM: Okay. When you say we should do the things that are political—
+
+MLS: Yes.
+
+MM: Is that in our how-we-work document—the check mark things—explicitly?
+
+MLS: I would have to look, but yeah, I think it’s the general ethos of the committee.
+
+MM: How do we write down the check marks of how we work in a way that is not more damaging than the status quo?
+
+SYG: I mean, I think the plus sign here is the proposal to make it better. Right?
+
+MLS: Yeah.
+
+MM: I don’t understand why you think that would make it better.
+
+SYG: Because I read this as—
+
+MM: What is the problem with the status quo? Can you explain the problem with the status quo such that the plus sign thing would address that problem without introducing worse problems?
+
+MLS: Because one person could have a non-technical, political reason that they want to block something. It’s happened in the past, we have seen it; and there’s no technical resolution that will allow something to move forward.
+
+MM: Okay.
+
+MLS: If you have a second person added to that—whether they support it, or also withhold consensus for maybe the same or a different reason—and they articulate it, you reduce the likelihood that it’s done for non-technical reasons, in my mind, especially if they are from different member companies.
+
+SYG: The way I phrase it is: if it’s a good technical reason to object, you should be able to convince at least one other person of the technical objection. If it is not a technical objection, then you have a lower likelihood of being able to convince someone else to also see your point of view, because it’s not actually a technical objection.
+
+MLS: This is what I have been saying for over a year.
+
+MM: Let me come back to a point that is certainly always prominent in my head when we discuss this. I did not understand MLS’s answer to SYG’s earlier question, which is about the unilateral browser veto as a reality on the ground. A browser-maker can unilaterally block, because the committee would do a disservice to everyone by proceeding to put something in the standard that a browser maker has announced they are not going to implement. That, to me, is a primary issue here. Any attempt to weaken the ability of anybody other than that browser to block—without weakening the ability of the browser to block, which is impossible—simply disempowers the community compared to the browser makers.
+
+SYG: You are missing the converse of this. Browsers don’t only have a de facto veto, but a de facto anti-veto. We can unilaterally ship things as well.
+
+MM: Yeah. That’s happened. And I don’t—
+
+SYG: There is no weakening here.
+
+MM: That’s happened.
That's the reality. I agree that—I mean, in general, one of the things that I think is right about the whole TC39 phenomenon, as with most standards groups, is that we have no enforcement power. If we move forward in a way that is at odds with what prominent JavaScript engine implementers agree with each other to implement or not implement, we make ourselves irrelevant. So yes, the browser-makers do have the unilateral ability to implement something anyway. And we have seen that kind of thing happen, in fact. I don’t understand what the implication of that is. I haven’t played that out.
+
+SYG: Your argument was that MLS’s proposed change here would weaken every other non-browser delegate’s withholding power. As I understood your argument, because browsers have this unilateral single veto power de facto, the process should also enshrine and give every other non-browser delegate the same power, to have a single veto. Is that a fair characterization of your argument, first of all?
+
+MM: Yes. I think I see where you are going. So let me get to that. This goes back to—let me play out some more of the implications of disempowering the committee, and what it means for the committee not to have any enforcement power.
+
+CDA: MM, sorry to interrupt, we just have a couple of minutes left before the break. Please continue.
+
+MM: So are there other things on the queue? I can’t see.
+
+CDA: No.
+
+MM: Okay. So the browsers got together at one point, because of disagreements with W3C, to form WHATWG. And in so doing, they made it clear that they were going to proceed forward with agreement among the browser vendors, leaving the non-vendor voices that were in W3C, rather than in WHATWG, powerless. And that was publicly visible, as it should have been.
The power that TC39 as a standards process has comes from the fact that the engine-makers and the community are both on it. The browsers can certainly go off and do another WHATWG, or in fact go to WHATWG, to decide among themselves—but then they would have to make it public that they are making a decision just among the browsers, leaving the community out of it. And that should be costly. That should be costly in the public visibility that the browser-makers have decided to do that.
+
+SYG: Sorry. And that is an argument for not accepting MLS’s proposed change here? I think I am missing a few steps.
+
+MLS: I’m not sure how MM’s comments are tied together.
+
+SYG: Yeah.
+
+DE: So TC39 works well today because we collectively do this technical development in alignment within the committee. If we stopped doing things, then things would be done in other places. But we can preserve our position and our ability to contribute to the web platform by continuing to operate effectively, making good designs, and coordinating them.
+
+### Speaker's Summary of Key Points & Conclusion
+
+- Some delegates were in favor of these changes or something similar.
+- It is thought by some on the committee that going forward with this process change would be worse than the status quo.
+- It makes sense to continue discussing our consensus process at future plenaries.
+
+## Stage 1 update for decimal & measure: Amounts
+
+Presenter: Jesse Alama (JMN)
+
+- [proposal](https://github.com/tc39/proposal-decimal)
+- [slides](https://notes.igalia.com/p/tc39-2025-04-decimal-intl-integration#/)
+
+JMN: Okay. Good afternoon, good morning, good evening. We are talking about the decimal proposal. There is a measure proposal in here too; this is going to come up in the presentation. The decimal and measure proposals are, at least in part, being developed side by side at the moment. My colleague BAN, who has been working on the measure proposal for some time, is on sick leave.
You may remember it coming up in the November plenary. He’s with us in spirit today, as he has been helping with decimal and progress on the measure proposal.
+
+JMN: The status quo is that we have settled on a lot of the semantics in the API for decimal. That’s not new. We have settled on that for quite some time now. In the meantime, the internationalization side of things is a work in progress. What I am here to tell you about today is some of the progress we have made there.
+
+JMN: We think we have settled on a solution to many of the problems there. This presentation is a bit awkward because I am going to deliberately not name a class that I propose to add to decimal. You see, I am calling it `Decimal.Something`. It’s a bit tongue-in-cheek. But I hope you understand my intention.
+
+JMN: The point is that the name is important. And we don’t quite know what name we should use yet. The name is TBD. Maybe things like `Decimal.Amount` or `Decimal.WithPrecision` would be good. The name is a bit up in the air. We welcome any suggestions that you might have, but I hope we can avoid too much bikeshedding about that. If I say “amount”, that’s usually what I mean. But in fact, that’s not really the official name here. Think of `Decimal.Something` as a placeholder.
+
+JMN: And this idea of a `Decimal.Something` or `Decimal.Amount` is a small class that really rounds out internationalization. This class can unblock us with some issues there.
+
+JMN: And looking forward — or looking sideways, depending how you think about it — if this class is accepted, then this is something that the measure proposal might also use. It’s something that the measure proposal might add fields to, to store things like a unit or currency indicator.
+
+JMN: Just to recap what the issue is with decimal: so for decimal, I think the story is clear. We are interested in numbers. And when we say "numbers", in that context, we settled on the notion of a point on the number line or mathematical value.
And that has a number of use cases; we have settled on IEEE Decimal128 for that. That’s all kind of old. But for the internationalization story, when we think of decimal, there’s a bit more to the story there. We need to have a concept of a number that somehow knows its own precision. So think of it as something like a number plus a precision, or a sequence of digits of a certain form, if you like.
+
+JMN: You might say, well, what is going on? Why can’t we use JS numbers for this? That’s kind of no big deal, right? The problem is that using numbers currently with Intl is error prone, especially with NumberFormat and PluralRules and the mixture of the two. For example, if we do this with decimal, a type shouldn’t be created that has the same problems. And these needs for internationalization exist in parallel to the needs that exact decimal values—that is, mathematical values, essentially—currently meet. So we have a kind of version of Decimal128. We call that Decimal. We think we understand the use cases and the needs there. But for internationalization we need a bit more. And that’s what we are here to tell you about.
+
+[slide 4]
+
+JMN: The idea has been bouncing around for quite some time in various forms. In the last plenary we talked about the overlap between the Decimal and Measure proposals. And in fact, as a somewhat radical suggestion, we even put on the table the idea of merging the proposals—thinking that, well, their use cases and needs overlap to some extent; maybe that overlap is large enough to warrant thinking of this as one proposal. But the consensus was that they should remain separate. The use cases are too different. They might overlap, but the non-overlap here is big. So we keep them separate. One of the suggestions that we had there, for talking about the intersection between measure and decimal, was to have something like 3 classes.
Something like decimal, which we already had; some kind of number with precision; and measure. But that didn’t get much traction either. And so coming out of the last plenary, we were stuck. We had the internationalization use cases, but we kind of didn’t have a path forward.
+
+[slide 5]
+
+JMN: But what I am about to tell you about today is something I think might be a way forward worth thinking about. And the idea here is to try to take more seriously the idea that measure and decimal are just separate proposals. The thinking here is that if we want to talk about units or currency codes, that naturally suggests a number of issues that can be separated from just talking about the underlying number. Decimal the proposal is all about numbers, with or without precision. So this discussion of units, although related, feels like a kind of foreign object added to the discussion. It’s maybe interesting to think about, but it’s not really about just numbers by themselves, which already have their own package of problems and issues.
+
+JMN: So the thinking is that what we have in mind here with this `Decimal.Something` or Amount is that the measure proposal could then take that ball and roll with it. So rather than introducing a new class, what we have in mind is that the measure proposal could expand the `Decimal.Something`.
+
+[slide 6]
+
+JMN: The API for this thing is very small. It’s very thin. It’s deliberately kept quite minimal. We construct the data using convenience functions on `Decimal.prototype`. Maybe we should also allow construction using `new`. Maybe there should be a static Temporal-style `.from` method. That’s a bit open for discussion. Interesting questions to think about there. There’s an accessor for the underlying decimal and the precision, of course. Just a toString. Critically for us, there’s no arithmetic here. The thinking is that we already have arithmetic sitting in decimal.
It would be a bit awkward to reproduce that somehow in `Decimal.Something`. And besides, we have discussed many times that this issue of propagating the precision of numbers through arithmetic is a bit odd in IEEE 754. And we know there are other ways to do it, so we just skip it, and say there’s no arithmetic on these things. The main thing is that we have some kind of integration with NumberFormat and PluralRules. And again, just like decimal, our `Decimal.Something` would be immutable.
+
+[slide 7]
+
+JMN: So again, I have talked about how we have a bit of a tongue-in-cheek placeholder name here. These are also placeholders, but the thinking is: if I have a decimal, then I can try to create one of these `Decimal.Somethings` using some kind of method that attributes, or just imputes, some kind of precision to the thing. So, for instance, if I have `new Decimal("42.56")`, and I say, let’s consider that number as a number with two significant digits, then we essentially are talking about 42 there. Or I can take the same number and consider it with 5 fractional digits. Let’s say I know out of band that whatever number I have has a precision of 5 fractional digits, and I impute that to the number. Then I am talking about 42.56000. The names here are awkward, I admit. They’re placeholders, just to get your creative juices flowing.
+
+[slide 8]
+
+JMN: Again, we think that we have made a bit of progress with the internationalization side of things. There’s been some discussion in the champions call, which happens every couple of weeks. You should see it in the TC39 calendar. There is also a channel for this, if you would like to join the discussion. The current thinking is that PluralRules shouldn’t handle bare Decimal values. And there was also a discussion about whether NumberFormat should continue to handle bare Decimal, which it did in earlier iterations of this proposal. But we also thought about possibly banning bare decimals from NumberFormat.
+ +DLM: I’m sorry, JMN. There are two clarifying questions in the queue. + +CDA: They aren’t meant to be asked immediately, but WH, did you… ? + +WH: Yes. On the previous slide—yeah, that one—you said that the first example produces 42. Shouldn’t that be 43? + +JMN: That one would produce 43, because there would be rounding. We look at the 5 to do the rounding, and it rounds up. Sorry about that. + +WH: Okay. + +JMN: Does that clarify? + +WH: Yeah. So this then raises the issue of rounding modes and how to specify them. But that’s a different ball of wax. + +MM: Okay. I have a question. So there are two different methods here. The meaning of each of the methods is clear. But in terms of the representation of precision within the objects that the methods produce, are you thinking of two different kinds of representation of precision, or do both of these somehow produce the same kind of representation for precision? + +JMN: Very good question. So the thinking at the moment is that there’s just one notion of representation. In the current discussions, we are working with significant digits. That’s the one and only underlying notion. Then if you want to use fractional digits there’s a calculation to convert that. Does that answer your question? + +MM: Yes, it does. Thank you. + +JMN: So we were talking about how this thing, this amount, this `Decimal.Something`, fits into the internationalization picture. And the thinking is that wherever we used to have Decimals sitting in Intl, namely in PluralRules and NumberFormat, they should be banned. Maybe it’s a bit of a discussion whether some parameters should also be mandatory. In general the thinking is banning bare decimals, but handling these cases with `Decimal.Something`s instead. So the idea is that for PluralRules and NumberFormat, `Decimal.Something` is going to be the thing that contains the information that is likely needed in the internationalization use cases. 
+ +[slide 9] + +JMN: We also have a bit of a story here about how this would fit in with the measure proposal. I said this was going to be about decimal, but that’s like 95% true. The discussion would be incomplete if we didn’t say something about measure. And the current thinking is the measure proposal can be slotted in later, with some kind of unit or currency attached to an amount. So let’s look at a bit of code. If we have some decimal, 5.613, we can attribute some kind of unit to that. And then we can, perhaps, convert that to an amount, and then attribute a unit to it later. You can see, looking carefully at the number there, before kilograms, there’s a slight difference there. Again, we’re still bikeshedding a lot about the names and the exact API shape. But we think that something like this should be possible. + +JMN: So what we are thinking here—there is an assumption there that decimal happens before measure, or at the same time. But measure doesn’t actually need decimal. So if decimal doesn’t happen, we can still work on measure. + +[slide 10] + +JMN: That’s it. This is just a short update about our current thinking. So for those of you who are worried about our suggestion of merging the measure and decimal proposals: we’re not doing that. They remain separate. There’s some spec text available, if you would like to take a look. And in our view, if this `Decimal.Something` or amount is something that looks good to committee or seems reasonable, then I think we are in a pretty good position to ask for Stage 2 for decimal at the next plenary. And that’s it. I am very interested to hear what you have to say. I will take a look at the queue. + +WH: I am a bit confused about some of the points raised here. You said that there wasn’t much interest in having 3 classes, but what I see here is 3 classes. The only thing that changed is that the name of one of the classes has moved to be on the Decimal object, rather than being an independent class. 
+ +JMN: Yeah. I understand the concern. I think the current thinking is to lean towards having the two classes, the idea being that— + +WH: Which two classes? + +JMN: The Decimal and then the `Decimal.Amount` . The idea is that we would add some kind of unit and possibly other methods later in the measure proposal. Or we can also think again about the three-class solution. That naturally arises in this case as well: in other words, I am prompting us to rethink that also. + +WH: Okay. I have some observations here. Precision is not necessarily specific to Decimal. You could have Numbers or other types with precision also, so having Amount use Decimal might be foreclosing options here which we don’t necessarily want to foreclose. + +WH: You also mentioned that some of the internationalization methods might throw if passed a Decimal instead of an Amount. How do those methods behave when passed a Number—do they also throw if you pass Numbers? + +JMN: Numbers are fine if you pass them in. + +WH: So they would accept Numbers but not Decimals? + +JMN: Yeah. You are right. I mean, there is a bit of ambiguity there. We could accept the Decimals. But the thinking is that this might open up the door to some kinds of errors and footguns that Number currently allows. So I mean, yes, it’s allowed to use Numbers. But the thinking is, this is a chance to perhaps fix some issues or prevent some problems that would come up. But if we can see a clear need for allowing Decimals, that’s also fine. It’s just something we are leaning towards right now. + +WH: It’s not clear to me that providing bare Numbers or bare Decimals is always a bug. I can think of many cases where it just makes sense. There are cases where you might want to specify precision, and other cases where you don’t. 
+ +WH: The other concern I have goes back to having the three classes, which is that, once you add precision and units, it’s more ergonomic to have the operations of setting units and setting precision commute with each other. By separating the classes, you make these non-commutative. You must set precision before you set the units, rather than the other way around. Things become awkward. + +JMN: Yeah. That’s an interesting consideration. I am not sure I have a solution off the top of my head, except things like allowing some kind of options bag as an argument where both can be specified. But yeah, you are absolutely right. We should perhaps think about that. + +WH: Okay. + +NRO: Yeah. So when WH was asking about Numbers, and asking if it’s weird that Numbers work with Intl and bare Decimals do not: passing Numbers to the various Intl classes can cause problems, because you need to make sure to pass the same precision options to the various separate Intl classes, and it’s just very easy to miss that if the precision doesn’t come with the number. And also, depending on what locale you are testing with, a mistake might show up in one locale and not in another. This is a long-standing issue with `Intl.NumberFormat` and PluralRules. Which is why, in the numerics calls, the thinking was: with a new number type, let’s make it difficult to make a mistake. Maybe there could be a way to say, oh, here, I actually am sure I am passing a decimal and I do not care about precision. But we should direct people towards doing the safe thing, which is the opposite of Numbers right now. + +WH: Can I reply to that? + +NRO: Yes. + +WH: I am not sure I believe the claim that providing a precision is safer. I can think of plenty of instances where you don’t know anything about the Number or Decimal you are providing, and adding a precision can make things worse. 
It can change the value of it. It’s unclear what happens if you escape into exponential notation. + +NRO: Yeah. You are right. I think we should still make it explicit that you don’t want to provide the precision, rather than making it the easy thing to do. + +WH: Yeah. I want there to be a simple way of _not_ specifying a precision, if that is something that makes sense for that application. + +JMN: If I may reply briefly to that. We still have things like toString and toPrecision and stuff like that. So it is possible to have a decimal and just format it without any notion of significant digits. I guess the question is: does Intl also want to be totally open to decimals? And I guess that’s still something we need to resolve. + +NRO: I think I was the only one pushing for three classes, just because it felt like a better solution to me, I guess. I have been convinced multiple times that it’s absolutely fine to have two classes. And this discussion keeps coming back. But nobody has spoken in support of three separate classes. + +SFC: My comment is that the two-class solution allows the phases to be commutative, as WH mentioned. Also, an amount built from a decimal without an explicit precision has its precision be the number of significant digits in the decimal. So if you specify the unit on a decimal without specifying precision, you’re inheriting the precision of the bare decimal. So I do think that’s a well-defined operation. + +EAO: I am echoing some of what SFC just said, but maybe from a different angle. The way I see it, I understand one of the main reasons for Decimal to be representing better the values we are getting from external sources than Numbers do. And those values are, then, more exact with decimal and therefore have a precision that we don’t need to define. These values really ought to be formattable without needing to be specially wrapped and having their precision determined. + +JMN: May I reply to that quickly? 
I guess—are you also going with WH's suggestion? The idea that Intl should accept bare decimals? Do I understand you correctly? + +EAO: Yes. I disagree with the reasons for not supporting bare Decimal in `Intl.NumberFormat` and `Intl.PluralRules` . + +SFC: Yeah. I think that—I mean, this is a discussion we should have in TG2, once we get to that point. But I think that, you know, there are definitely valid arguments to be made that bare decimals are formattable finite values by themselves. That’s a discussion we should have: whether that’s a natural place to draw the line. + +MM: Okay. So I think what I am about to say overlaps with a lot of what has already been said. I won’t repeat the rationale. The position I find attractive is the two-class one, with there being a Decimal class and a measure class. For the sake of both immutability and commutativity, both the precision portion of a measure and the units portion of a measure should each be optional. But the key thing is: there’s nothing about any of this machinery that should be specific to decimal. Decimal is just a number without precision. And then the thing that adds precision for display purposes to a number should simply apply to all numbers, including floating point numbers and BigInt. One part of my rationale is that from our point of view, being a participant in the blockchain crypto ecosystem, we would certainly want to represent `numbers.withUnits` , where the units could be cryptocurrency, but we would never use decimal because even at a 128-bit mantissa, we would not want to take a chance on the loss of precision. 
You would do what is already the convention in crypto, which is you just take the smallest quantum for any currency—it could be incredibly small, like the satoshi—and then we just use BigInts. Having BigInts plus display information plus units for which currency is the only thing that we would use for that use case. + +JMN: I see that NRO has a response to this. + +NRO: Yeah. So when this first came up, like, maybe 6 or 8 months ago, the proposal did discuss every single number type. Like, there was something for attaching significant digits to the other number types too, and we would give something similar for each of them. The feedback was that it was just, like, a lot of stuff. And I think this presentation, showing the constructor, would potentially leave the door open for something like `BigInt.Something`, if that’s, like, motivated. + +MM: I think—as I mentioned, anything specific to decimal is just not motivated. I would object to leaving the extension to other number types on the table while something specific to decimal proceeds forward. + +NRO: Okay. I think there is value in having, like, the class be a stronger type—you have the object that contains, like, a decimal with something, so you don’t have to look into the object to then figure out what type it is. Like, I can think about how to use this with TypeScript. + +MM: TypeScript has parameterized types. + +CDA: SFC? + +NRO: Yeah. So I have [inaudible] remove it, because it’s what Shane said. I was going to say the second point and reply to this. + +SFC: Yeah. We looked at the polymorphic approach a while ago. On the decimal-backed amount approach, I asked, you know, FYT and others about the implementation difficulty of this. Having a single amount class where we know what the backing type is means that that class has properties available that a polymorphic amount wouldn’t be able to have. 
So when I say polymorphic amount, I mean an amount that has a numeric field that can be many different types. It means that basically every interaction with that type needs to have branching code, because it has to have different behavior based on the underlying numeric type. It likely will use more memory, because you have discriminants and such. Another advantage of a decimal-backed amount is that the precision is free to represent, because Decimal128 already represents the number of significant digits. It doesn’t require adding any slots, which is another nice advantage. Yeah. I guess that’s my comment. + +EAO: Replying, and asking maybe a clarifying question for MM here: given that your interest and needs are for working with numbers that have higher precision than what decimal can provide, you’re maybe natively working with BigInts, but you would like to format these, and presumably when formatting these, you would like to be formatting not an integer but a number with a fraction. If I understand the case you are describing there, are you effectively saying that you would need to be able to represent the number, for instance, as a numeric string, or you would need a dividing scaling factor to be applicable somewhere, in order for this thing to work for your purposes? + +MM: So the answer is yes. And I will agree that that weakens my case that one notion of measure will cover my use case. The point that I was making, though, is that the combination of units together with an underlying number—there’s certainly nothing about that that is specific to decimal. And also, the general notion of precision combined with some underlying number is also a notion that is not specific to decimal, even if our particular use case for BigInts takes us in a bit of a different direction. Certainly for Numbers, the notion of a way to associate precision for purposes of display makes perfect sense. 
And with regard to the representational economy, I think that’s exactly the kind of implementation detail that programming language implementations generally strive to hide from users. As to what should be exposed to users, especially for a language at the level of JavaScript rather than the level of C: you could have a measure class that internally took advantage, behind the scenes in a given implementation, if it chooses to do that, of a more economical representation when the underlying number was decimal. But I don’t see any reason to make that visible to the user. And certainly, some implementations would choose not to do that, and I think they should be welcome not to do that. + +MM: One more point on this. Which is, the pun that you are doing—using the precision that is inside the non-normalized Decimal128 representation—is a semantics-violating pun, because the actual IEEE semantics of that implicit precision is the number of trailing zeros, not the number of trailing digits or the number of significant digits. By using it for display purposes for the number of significant digits, or the number of trailing digits, you are making use of a representation whose documented purpose is something else. + +CDA: REK on the queue: "plus 1 to the BigInt and precision in the context of cryptocurrency" + +DE: Yeah. I am not sure about the cryptocurrency use case. Isn’t that one with fractional digits? Does anyone have use cases for other kinds? I could picture this Number-with-precision use case, but I want to question whether this is independent from decimal. Precision is base-10 precision—especially of fractions. It does go in this positive direction as well. I would want to dispute the comment that MM made about this being an invalid pun on the IEEE data model. We convert to the quanta in IEEE, and that’s a reasonable representation. 
Precision is a base-10-based concept, and the whole point of the proposal is to encourage you to move away from representing numbers that logically contain these base-10 fractional parts with binary floating point, and toward using precision properly. In a sense it is analogous: if you are using Number for that, you are going to be broken. + +MM: I’m sorry. I don’t understand that comment at all. People use binary floating point with rounded decimal displays to different numbers of significant digits all the time. Is that a use of floating point numbers that we should, in general, try to discourage—that if you are going to display a number with decimal digits at all, you shouldn’t be using floating point? That seems like too great a— + +DE: To a large extent, yeah. Logically, what you should be doing is kind of two phases: one, round to an appropriate decimal, and then display the decimal. It’s okay that we have all this tradition of those operations being elided and grouped. But I think the whole point of the decimal proposal is to focus on giving accuracy and reliability to the common case, which isn’t the result of sine or something like that. + +MM: If the IEEE definition is all about trailing zeros, I understand that argument. But for very good reasons that’s not what we’re doing here. We are interpreting it as a number of significant digits, and if you drop significant digits, then you’re showing an approximation anyway. I don’t see why a decimal display approximation of a floating point number is less sound than a decimal display approximation of a decimal number. + +DE: So maybe WH can clarify more about the IEEE alignment. I don’t think it represents a number of trailing zeroes in IEEE either. I think significant digits are interchangeable with the particular IEEE, you know, quanta concepts for a given decimal. We could allow this, but it doesn’t seem motivated, given that the main use case was this crypto thing, which is not what the proposal is for. And— + +MM: Sorry. 
I withdraw the crypto example as anything but illustrative, and agree with the ways it doesn’t fit with what I was saying. I certainly don’t withdraw floating point numbers. There’s existing software that does this with floating point numbers, and I consider that software to be correct. I do not want to retroactively declare that software to be incorrect. + +DE: We have APIs for dealing with that: NumberFormat takes various precision parameters, and you can format Numbers this way. Even though something is logically sound and has a well-defined meaning, when adding something to the standard library, we are making a judgment call about whether this thing is especially pertinent. And I think we’re allowed to make judgment calls that don’t correspond exactly to, like, whether this is a logically meaningful thing or not. + +MM: You are certainly allowed to make such judgment calls. I am free to arrive at the opposite conclusion on that call. + +DE: That's fair. Just to sum it up: the main use case I heard from you is when you are doing something you might today do with NumberFormat—providing the precision and giving a double—and having that in one unit is a logically meaningful thing. And that’s that. + +MM: Yeah. And to flip it around: to the extent that we’re willing to live with NumberFormat providing the display precision, why not use NumberFormat to provide the display precision for decimal values as well? + +DE: Well, that’s—yeah. SFC has made that argument as well, that often this is logically wrapped up in the human-legible, human-interpreted meaningful decimals in a way that— + +MM: Okay. So once what is being displayed is an approximation of an underlying number, then I don’t see the distinction between the case being made for doing it with decimal versus floating point numbers. + +DE: Okay. I will leave it at that. I think I made the argument. + +WH: There are a few things that I think are incorrect or we have been neglecting. 
There was the claim made that we could optimize the `Amount` class to use the IEEE quantum precision. That doesn’t really work, because the precision varies depending on the magnitude of the number you have. So if we want simple semantics for what the precision value could be, then we must store the precision separately. Trying to fit it into an IEEE quantum is just premature optimization, which wouldn’t work anyway. + +WH: The other crucial thing that we’ve not even discussed here is where rounding takes place and how that rounding works. Rounding modes are important for a number of applications. I don’t understand in this proposal how that would be specified, and that makes a huge difference in what representations we can use in the `Amount` class. + +DE: Yeah. Could you work through an example of where this doesn’t line up? I am just trying to understand the terms of the IEEE logic. Why can't we use these quanta for the precision? Is it too complicated to figure out which would apply? + +WH: For example, denormals. + +DE: Do you think you could talk through an example just so I could picture it better? + +WH: So for the semantics I am imagining for precision, you can set the number of digits after the decimal point independently of the value you have. This is not true for IEEE quanta. We only have 10 minutes left. I don’t want to digress into explaining examples of that. + +CDA: Right. We have less than 10 minutes left and several items in the queue. + +EAO: I agree with WH that I don’t think that packaging the precision into the IEEE754 representation works. Separately, what I would like to note: the current `Intl.NumberFormat` supports formatting a string representation of a number with fractional digits, with limits on the precision that go well beyond those of Decimal. 
So, for example, for the use case that MM was representing earlier, where there is a value with a precision greater than the precision allowed for by Decimal: that ought to still be formattable with a precision. This is currently supported by explicitly setting the precision in the NumberFormat constructor, but it would not be supported by the `Decimal.Something` or the `Decimal.Amount` that is proposed here, if that value is based on Decimal rather than, for example, a string representation of a number. + +NRO: Yeah. It was said that IEEE talks about trailing zeros and we talk about significant digits, as if one were an approximation. I disagree with that. You can convert between one and the other. You can look at Wikipedia: it actually talks all over the place about significant digits and trailing zeros, because they’re just interchangeable once you deal with the data. + +NRO: And then I have a question. If you start with a floating point Number and say, this is actually to be interpreted as if it were a base-10 number, with this amount of precision: are there any of those float64 numbers that cannot be represented as a Decimal number together with some precision? + +WH: The answer is yes. + +NRO: Okay. Thank you. + +SFC: Yeah. I will mostly echo what NRO said, which is that the quantum representation—the cohort representation for precision—is equivalent to pairing a normalized decimal with a number of significant digits between one and 34. If this is not a true statement, we can just discuss some counterexamples on GitHub. But as far as my understanding of how this works goes, this is a true statement. Maybe there are edge cases involving subnormals. But for most numbers, these two representations are equivalent to one another. + +SFC: I am also next on the queue again. I can also take this offline to discuss with MM and WH. But the— + +MM: I see the question, "are MM and WH motivated by the other use cases not mentioned". 
I can just give you a quick answer, which is that although the cryptocurrency case was the thing that sensitized me to this, and I have withdrawn it as more than illustrative, my objections are not motivated by anything that has anything to do with Agoric or anything I want to do with this. It’s that the non-orthogonality of what is proposed, compared to the blatant orthogonality of the underlying concepts, just offended me as a language designer. A lot of my feedback in general is me trying to uphold the quality of the language, whether it has anything to do with a particular use case I want to engage in or not. + +SFC: Yeah. I will just respond a little bit there, MM, which is that in terms of a decimal-specific abstraction here: Decimal128 itself, and most other programming languages that use Decimal128, are able to represent the number with precision, with the quanta, in a decimal representation. And it seems like there’s value in having a type in the language that is able to interoperate with the other platforms and systems that use Decimal128. And `Decimal.Amount` is the natural place to put that. A polymorphic amount is not a natural place to put that interoperability type, because that’s very much a decimal-specific functionality. + +WH: I would like to understand why we keep bringing up the IEEE754 representation of 'quantum'. I don’t see how it’s connected to anything we are doing here. A use case that doesn’t work is specifying a precision of, let’s say, 15 digits after the decimal point and having that work for any number. I just don’t understand the motivation for trying to force this into the IEEE quantum model. As far as internationalization is concerned, it’s the number of digits you want to display after the decimal point. That could be arbitrary. It could be 40. It could be 15. + +NRO: Like, you could want to represent any precision, like, saying I have a number with 1000 significant digits. 
But in practice, when it comes to showing numbers to users, you don’t deal with that. Like, for a number that has more than 34 digits of precision, you are going to find some other way to explain that concept to the user—for example, splitting it into multiple subunits, like hours and seconds and so on, rather than a single very long number. And so putting a limit on how much precision this can represent is not, in practice, when it comes to Intl and showing the thing to users, a real limiting factor. + +WH: It is. Like, even two decimal digits: reliably emitting two digits after the decimal point doesn’t work if a number is large enough. So far I have heard plenty of discussion about, you know, how we could work around the limitations of the IEEE quantum, but I haven’t heard any reason why we should be using it in the first place, rather than storing the precision as a number that is independent of the value that’s being stored. I have yet to hear any motivation other than trying to save a byte or two. + +NRO: We don’t really have a use case. Like, this is personal and not something discussed in the champions group, but if there were a need to represent a precision beyond what an IEEE Decimal128 number can represent, that would personally be fine with me. We heard from implementers that the restriction makes sense for them because it makes things easier. + +SFC: I didn’t mean for the discussion to go in the direction of quanta. I brought that up as a way that implementations could choose to represent this more efficiently. + +WH: Yeah. I still don’t have a good answer to how you would print a bunch of numbers, each with two digits after the decimal point, and have them line up. + +CDA: Okay. Thanks, everyone. We are past time. 
+ +### Speaker's Summary of Key Points + +- We presented a new class that solves problems with Intl and decimal +- We suggested using this new class instead of bare decimals in Intl + +### Conclusion + +- There was some concern about the commutativity of the application of a unit and a precision +- We discussed problems about the representation, in Decimal128, of very large/precise numbers such as those arising in cryptocurrency. +- There were some concerns about our proposed “banning” of bare decimal values in Intl + +## Guidelines for Locale-Sensitive Testing in Test262 + +Presenter: Philip Chimento (PFC) + +- [slides](https://ptomato.name/talks/tc39-2025-04/#8) + +PFC: Hi again, everybody. This is a topic that I gave an informal presentation on in TG2 a few months ago, and I thought it would be helpful to bring it here as well and get feedback. This is not a normative thing for the specification. It’s just a discussion of what kinds of tests are helpful to have for parts of the language that are locale-dependent. ILD is an abbreviation that stands for things that are implementation- and locale-defined. This is about ILD behavior in JavaScript. + +[slide 9] + +PFC: Here's an example. You use the toLocaleString method of Date and you pass some arguments to it. And you get back an answer that says, "in the afternoon". That is obviously dependent on language and culture. The spec text says about this, + +> Let _fv_ be a String value representing the day period of _tm_ in the form given by _f_; the String value depends upon the implementation and the effective locale of _dateTimeFormat_. + +PFC: So taken in the most literal way, the specification says that any string can come out of this code—even a series of 1,024 `X` characters concatenated together, or something like that. That would be legal, but we don’t want that. 
So, you know, implementations make their own choices, and they largely agree on what should come out of here, but that functionality is often expressed in third-party libraries such as ICU4C and ICU4X. + +[slide 10] + +PFC: I would argue that it is good for users of the web when the ILD behaviour is stable, and websites don’t break and suddenly produce different results. But I would also argue that it is good for the web when ILD behaviour is updated to reflect current cultural practices, so that websites are localized in a way that users find comfortable. As an example of that, the locale-dependent formats in data repositories like CLDR are often wrong because somebody in the past made an arbitrary guess as to how a locale represents dates and numbers and they guessed wrong; then somebody who actually has more knowledge of that complains and submits a change, and the behaviour is updated. + +PFC: So ILD behaviour being stable and ILD behaviour being unstable are both good, and obviously diametric opposites. So that brings me to the more practical consideration of what we do when we are testing this behaviour in test262. + +[slide 11] + +PFC: Obviously, if we stuck to this spec text and only tested literally what the spec text says, we could not make any assumptions about the behaviour, because arbitrary strings can come out. That seems like it is certainly not very helpful for implementations and not good for users of the web. We do want test coverage of these APIs, and we do have existing test coverage of these APIs in test262. We will talk about what we want out of that test coverage, what is helpful, and whether it should be a goal to cover every locale and option for every API. My opinion is no. And I think if you do that, after a certain point, you reach diminishing returns and you are not testing the JavaScript implementation with the ILD test anymore—you're just testing the underlying data source. 
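For concreteness, the kind of ILD call PFC describes (the slide-9 dayPeriod example) can be exercised like this. This is an illustrative sketch, not test262 harness code, and the exact output shown in the comment is typical of ICU-backed engines rather than guaranteed by the spec:

```javascript
// The dayPeriod output is implementation- and locale-defined (ILD):
// the spec only says it is *a* String, so tests can't assert exact text.
const d = new Date(2025, 3, 14, 15, 0, 0); // 3 PM local time
const out = d.toLocaleString("en-US", { dayPeriod: "long" });
// Typical ICU-based output: "in the afternoon" — but not guaranteed.
```

This is exactly why the strategies discussed in the following slides avoid pinning the full output string.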
+ +[slide 12] + +PFC: We do have tests in test262 for this sort of behaviour, and there are two strategies that are often used that I consider not ideal. One is called 'golden output' and the other one I will call 'mini-implementation'. Golden output is kind of testing jargon and means comparing the output of the method under test against known-good output. I think this is undesirable in test262, because what is the golden output? It varies between implementations. Each major browser has their own human interface guidelines where they amend some data in these data sources in CLDR and ICU. Golden output will also vary over time as they update the data in the data sources. All of these variations are permitted by the specification. We don’t want to ban variation, but make sure that the variations are limited to things that make sense to vary. And then finally, if you build in golden output that means that the test can only reasonably be run by an implementation using a particular version of CLDR. If you are using another version or another data source altogether, forget it. + +[slide 14] + +PFC: The other strategy that is often used I will call 'mini-implementation', and you can see this in some of the files in the harness directory of test262. It is basically writing a polyfill for part of the spec in the test code, and then comparing what that polyfill outputs to what the implementation outputs for the method under test. I think this is undesirable in test262, because it makes it difficult to understand what is being tested and it is unclear when the test fails, is that a problem with the implementation or is it a problem with the polyfill? + +[slide 15] + +PFC: That was a bunch of slides on what not to do. What should we do instead actually? Here are some ideas that I have collected or thought of. + +[slide 16] + +PFC: One option would be to use stable substrings. 
So this is not quite golden output, but you identify a part of the output that is reasonably expected to be stable across versions of the third-party libraries and data sources, and across implementations, even taking into account their own human interface guidelines. In the example here on this slide, you want to test date-time formatting with `dateStyle: 'full'`. Instead of asserting that the result is equal to some string that you have predetermined, you assert that the result contains the month name written out in full in English. This is more robust than comparing against golden output, but it does share some of the disadvantages of golden output: it may be more stable across implementations and time, but it is not entirely so.

[slide 17]

PFC: There is comparative testing. This is a principle where you say that each setting for an input option must produce a distinct output, and this could be good for getting coverage of all the code paths in implementations and making sure that each line is exercised, which in some cases is a goal. There is an example here on this slide. You can format a date with the weekday either narrow, short, or long, and you can reasonably say that the narrow weekday should not be equal to the short weekday, which should not be equal to the long weekday. But that assumption does not hold in all cases. The second stanza in the code sample does the same thing for the day parameter, where the options are numeric and 2-digit; if you have a day that is greater than nine, the numeric day will be the same as the 2-digit day, because no zero-padding is necessary. So that approach would fail, and you need to apply it judiciously.

[slide 18]

PFC: And there is metamorphic testing, which RGN pointed out to me, where you find invariant properties of the output that must hold across multiple inputs. 
And this is nice because there is no need to actually specify what those properties are exactly; you just have to specify that they hold. That sounds easy, but it is not easy in all cases. Here's an example in the code sample on this slide. You format a date with just the day. You format the long month name. And then you format with `dateStyle: 'full'`. The property is that the full dateStyle should include the day, which is not zero-padded, and include the long month name. I think that is a reasonable assumption if you want to test the full dateStyle without hardcoding golden output. But again, it does not hold in all cases, and sometimes finding these relationships can be difficult.

[slide 19]

PFC: So, that is an overview of the things that I look at when I am reviewing ILD tests in test262, and I would love to hear further thoughts. There is an issue here that you can click through to and continue the discussion on as well. I'm especially interested to know what kinds of guidelines are helpful for implementations here. I am assuming the most helpful thing is for each implementation to test that the output is exactly what they expect. That is probably not feasible for test262, because we permit variation between implementations in certain cases. So what would be the next best thing? I would be particularly interested in hearing that. So I will open up the floor to questions.

SYG: I did not understand the 'mini-implementation': how can you test the polyfill against the actual method?

PFC: I can put a link to an example of this in test262 in Matrix.

https://github.com/tc39/test262/blob/61fcd7bd565e01f795e55080ed9af70b71adb27e/harness/testIntl.js#L2517

SYG: I can read the link, no need to explain.

PFC: Okay.

EAO: Your presentation reminded me of testing that I think we ought to be doing, in particular for `Intl.DateTimeFormat`. 
Two or three years ago, one of the spaces in en-US date formatting changed from a simple space to a thin space, and that format was being relied on by sites that presumed they could format a date using the 'en-US' locale and have the result accepted by the built-in datetime parser. I think it would be appropriate for test262 to test for changes that would impact users who are using internationalization APIs for non-internationalization purposes. Other examples include the tricks currently used to get a year-month-day representation, by formatting dates in the Swedish locale or with the `calendar: 'iso8601'` option. If these things change due to CLDR data and ICU implementation changes, that theoretically ought to be fine because this is internationalization, but in practice things will break, and test262 should be pointing to that stuff breaking.

SYG: I don’t know about this Swedish thing, but I do agree about the en-US thing: given its reach, and given that it is basically the default, chances are that people already depend on it on the web, so it should probably be treated as stable. If there is no intersection among implementations currently, that is a good signal that maybe stability is not as needed; for things where there is intersection among different browsers, it would be good to get an early warning that something changed in en-US. And I am talking about actual goldens in this case—anything that would give us a guarantee that something is stable. For your possible alternatives—stable substrings, comparative testing, and all of that stuff—I am not exactly sure yet how I would think about what kind of guarantees they give me as an implementor when I see a test break and it is a stable substring. 
It might tell me that there is less likelihood that parsers will break, but I have no idea how people rely on that specific output; and I guess the same goes for comparative testing. That is all to say, the most important thing to me as an implementor is stability for en-US, for sure.

SFC: As a reply to what SYG was saying: I think the gist of this line of thinking is that developers are making assumptions about invariants of the standard library that were never intended to be invariants—that is sort of our definition of abusing Intl APIs—and if we can identify what those assumptions are, then there is a reasonable argument that they could go into test262, because that would basically be an early warning signal. However, I don’t know if test262 is necessarily the right venue for that purpose, because it is trying to test conformance to a specification, not whether the web will break. Maybe test262 can be that thing, but I want to be clear that this is a different use case from testing whether implementations conform to the spec. And regarding the en-US thing: I know that is a proxy for the real problem, which is code that abuses Intl APIs. We have evidence of that—there is a popular Stack Overflow question about the Swedish thing, which is maybe one reason I thought about it—and I don’t necessarily believe that every API that accepts an en-US locale needs to live up to the same standards as the ones we have found. For example, based on the question of how you do timezone conversion in JavaScript, you can use an en-US DateTimeFormat and parse the output, and then you have that assumption built in everywhere. 
And I think that proactively testing en-US in test262 with goldens is not the best solution; it could be a shortcut, if we believe that en-US carries a different weight, but that is not the long-term goal. We should be identifying what the use cases are, and those are the things we should probably test. Sorry, that was a bit of a circular argument.

SYG: I will say something stronger than that: my argument is really about risk management. It is not about doing the right thing for a locale at all—that is an orthogonal problem worked on by other people. But we keep getting burned—'keep' is perhaps too strong a word—especially after the en-US date format changed and so many things broke. Our thinking has already shifted to: how do we de-risk this for future data changes? Whether something is technically a good thing for a locale is going to be weighed against the risk of it breaking again, and right now, not breaking is very much the highest priority. That is the lens I will be looking at this from. You can make all the arguments you want about not compromising the long-term vision of the data, but when the question is whether we should update the data and take in the new changes you think are great, the lens it is going to be judged from is: what is the risk of accepting the update, and will it break stuff?

SFC: You have a valid point about de-risking. I'm saying "test all en-US with goldens" is not a great solution.

SFC: Moving on from developer assumptions to spec assumptions, regarding the part about what you called metamorphic testing: a lot of things that are encoded in the spec are safe to test. DurationFormat says that it is composed of number formats and list formats. That is a safe thing to test. 
Beyond that, testing whether a datetime string contains the date string as a substring may work sometimes, but it is not necessarily a spec assumption; it only holds from time to time.

SFC: What are we actually testing? We should be testing that a thing conforms to the specification, and maybe we should write into the specification, in a computable way, the spirit of what an Intl function conveys to users. That is what we are trying to test, and maybe we should shape our assumptions around that. For example, could we ask an LLM: here’s the output of DateTimeFormat—can you round-trip it back? That would check whether the output conveys the goal. I am not necessarily very supportive of putting an LLM in the testing pipeline, but that is the spirit of what the API should do.

SFC: And my last comment: comparison against ICU. It is a little bit like the polyfill thing that you had earlier on, the mini-implementation—you could just fire up ICU4X or whatever and use that as a reference implementation. You still have the golden-output problem, but the scope gets smaller, right?

PFC: Okay, thanks. I see that maybe I should have requested a larger timebox, but I have to go now. I would invite everybody to continue giving their thoughts in test262 issue #3786. Thanks for the discussion.

### Speaker's Summary of Key Points

- With the specification permitting almost any results of ILD (implementation- and locale-defined) behaviour, test262 has to strike a balance between stability and adaptability, as locale data sources such as CLDR are often updated.
- When writing tests for ILD behaviour, testing against golden output or a 'mini-implementation' is not recommended.
- We discussed several other strategies that live somewhere around the middle of that balance: stable substrings, comparative testing, and metamorphic testing. 
- The en-US locale, and to a lesser extent sv-SE, may need to meet higher stability requirements than other locales due to the prevalence of popular copypaste code that expects certain output from those locales.
- After CLDR replaced ASCII spaces with thin spaces, implementations became more acutely aware of compatibility risk.

### Conclusion

- Please feel free to continue the discussion on [tc39/test262#3786](https://github.com/tc39/test262/issues/3786).

## `export defer` extracted from `import defer`: stage 2 update or for stage 1

Presenter: Nicolò Ribaudo (NRO)

- [proposal](https://github.com/nicolo-ribaudo/proposal-deferred-reexports)
- [slides](https://docs.google.com/presentation/d/1ats5CbsgalobhnfFIR2b1QAdaLRe4yVI55meo_ARqdU)

NRO: Hello, yes. So, this proposal was originally presented as part of `import defer` a while ago. While the two look similar on the surface, `export defer` turned out to be much more complex than `import defer`, so about one year ago I proposed—and we discussed and decided—to leave `export defer` behind, because the `import defer` part, with all of its open questions resolved, was ready to go to Stage 2.7.

[slide 4]

NRO: So, I will define what barrel files are. Libraries like lodash, or web component libraries whose components you can compose together, commonly have a single entry point that re-exports everything; the reason is that it is a much nicer experience for users to import everything from a single module. These entry files usually contain no code of their own, just `export … from` declarations. And that’s actually problematic, because with the current semantics it causes unnecessary code loading and execution: you are loading the whole library even when you are just using two or three functions. 
And unfortunately, people use this a lot, because the developer-experience advantages are so great.

[slide 8]

NRO: So obviously we don’t want to always load a lot of files in the browser, and there are some current users [INDISCERNIBLE]. There were similar issues for other libraries. This is less of a problem now, because there is another solution which is a bit better: tree shaking. Bundlers try to analyze the code to see which imports are used, and they try to detect which code does not have side effects, so they know which `export … from` or `import` statements can actually be removed. They have different ways of doing so: for example, webpack looks at the `sideEffects` field in package.json, and some tools check the code itself. But all of these are very difficult, because JavaScript is dynamic and you cannot always determine whether there are side effects or not. This was discussed a lot during the first designs of Rollup and Parcel—figuring out what side-effect-free means for a module—and the answer was no, it is just not possible in general.

[slide 10]

NRO: When it comes to Node.js, when using CommonJS you can defer `require` calls: you basically export an object with a bunch of getters. This does not work with ESM.

[slide 12]

NRO: So, to understand what the proposal is about, let’s look at a quick example. We have here a page that loads a button from a components library, and the components library re-exports a bunch of components from a bunch of different files. Like I mentioned before, if you just look at the entry file of the component library on the left, it pulls in components the page does not actually need. One could say that we already have a solution for this problem, but that solves a different problem: the goal of `import defer` is to defer the necessary work, not to skip it, so at some point you would still execute those sub-modules even though you didn't need to load them in the first place. 
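The CommonJS lazy-getter pattern NRO mentions can be sketched in plain JavaScript. This is a hedged illustration only: `loadButton()` is a hypothetical stand-in for a `require("./button.js")` call, so that the load can be observed without a real module on disk.

```javascript
// A barrel object whose "exports" are only initialized on first access —
// the deferral that export defer aims to bring to ESM.
const loadLog = [];

function loadButton() {
  loadLog.push("button"); // side effect standing in for module evaluation
  return { name: "Button" };
}

const barrel = {};
Object.defineProperty(barrel, "Button", {
  enumerable: true,
  configurable: true,
  get() {
    const mod = loadButton();
    // Cache so the "module" is only evaluated once.
    Object.defineProperty(barrel, "Button", { value: mod });
    return mod;
  },
});

console.log(loadLog.length); // 0 — nothing evaluated yet
barrel.Button;               // first access triggers "loading"
console.log(loadLog);
```

If the consumer never touches `barrel.Button`, the "module" is never evaluated, which is the behaviour ESM `export … from` cannot express today.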
[slide 15]

NRO: The question here is not when we need to execute something, but whether we execute it at all, and whether we need to load it at all. So we would mark these exports as deferred: `defer` in export position means that this module is only needed if somebody imports one of the bindings exported here. This specifically tells consumers of the entry file that if the deferred binding is not imported, they can just skip that module entirely.

[slide 16]

NRO: So it is the same idea as the previous slide, and I guess this is exactly how it would work with CommonJS.

[slide 17]

NRO: `export defer` is different from `import defer`: with `import defer` you want the module available, loaded, and ready, so that it can later be synchronously executed. And this actually lets us defer much more, so in

[slide 18]

NRO: the example given before, loading the button no longer loads all of these other modules, and they are not executed; you don't even read those files from the hard disk. So the goal here is to improve startup performance by reducing unnecessary initialization work. And there are different loading semantics, so we are not going to check—it is good to show [INDISCERNIBLE].

[slide 19]

NRO: With `export defer` on a star re-export, because we are deferring an `export * from` a module, we do not know which of its bindings will be imported, or whether they will be imported at all, so we need to handle it unconditionally and keep its bindings exported.

[slide 22]

NRO: So why put this in the language? One first advantage is that it provides guaranteed tree shaking that everyone can rely on: if a module is marked, the author of the module is explicitly telling us we can ignore its side effects, even if they are observable. 
In some cases this is less precise than what tools can do—some tool analyses are more granular—but the two things can work together, and this can provide a baseline. It also works when using ESM natively, so you get one step closer to relying on the browser implementation alone. And it is useful when combined with `import defer`, which specifically defers evaluation until property access: instead of just deferring loading, it prefers, when possible, to skip execution too.

[slide 23]

NRO: So, I'm in a bit of a weird situation. `import defer` got to Stage 2.7, and this part was presented at other times, so I don’t know if today I should be presenting this as a Stage 2 update or asking for Stage 1. I will present it, and if there is a preference for treating this as a new Stage 1 proposal, that is fine too; the proposal now lives in its own repository, branched off the `import defer` proposal. There is almost complete spec text—there are a couple of to-dos and some bugs, but it is almost complete. We can go to the queue; I have a couple more slides, but I can go faster.

[slide 25]

NRO: How does this integrate with `import defer`? You could have the `defer` keyword on both the import side and the export side. This example on screen is not affected by this proposal: we have a mod.js, and we `import *` from mod.js, so in this case everything is going to be loaded—both dep-foo and dep-bar will be loaded—and for execution of mod.js—wait, this slide is wrong. We execute both dep-foo and dep-bar.

[slide 26]

NRO: Instead, if we have an import with an explicit list of names, and `export defer`—like in this case here on your screen—there is no deferred execution on access (this proposal does not add deferred execution by access); however, we know that from mod.js we’re only importing foo, so we can skip executing dep-bar. 
When it comes to `import *` combined with `export defer`, what you get is a namespace object whose various names can be individually deferred for execution thanks to the keyword—in this case dep-foo and dep-bar—except that we avoid executing mod.js itself up front. And note that in this case, as in the case where you have import without defer, potentially both foo and bar will be executed later, so we need to execute the async dependencies of both of them.

[slide 31]

NRO: And there is a proposal—this is during Stage 2, and I am mentioning it in case you have opinions about it. And now let’s go to the queue.

JHD: So, this is not currently capable, in any implementation I have seen, of removing as much code as just importing directly from the files you need instead of from a barrel file. So we should still be telling people and encouraging people not to use barrel files. But this proposal is great because it makes tree shaking do less of a bad job—maybe we will even see it do a good job with this change, though I am skeptical. I would like to see this advance, but want to underscore for the group that tree shaking is currently, and will always be, a subpar solution.

NRO: I talked to tool maintainers. I plan to keep interacting with them individually to make this as good as possible for tree shaking.

JHD: Thank you.

WH: On the “import * & export defer” slide you mentioned that, even though `dep-bar` is not imported, its async dependencies are?

NRO: Yes, like in this case.

WH: How common is the situation in the ecosystem that, even though you avoid getting `dep-bar`, you get the nest of its dependencies which are executed anyway? 
NRO: The reason here is the same as in the `import defer` proposal: if you defer execution, you still have to evaluate the async dependencies, because you cannot defer an asynchronous module to be synchronously executed on access. In this case, if we are just doing this `import *` and a later synchronous access will go through bar, the only way to make that work is by executing the async dependencies up front.

WH: If `dep-bar` synchronously imports some stuff, that does not get evaluated, right?

NRO: If it is async, it does get evaluated. If it is synchronous, it does not.

SYG: So, do we know why tree shaking does such a poor job? Is it the 'do I care about side effects or not' question, or are there other issues as well?

JHD: I think side effects are the crux of it: any import can be side-effecting. You could assume that an import that only pulls in bindings has no side effects, but that is not a safe assumption for bundlers, only for linting rules. I would assume that it is really difficult to do the safe analysis—or whatever the appropriate terminology is—to figure out which code you can delete and which you can’t, before determining whether you can delete the actual import of the file.

SYG: You just said how it is hard or difficult to delete—how is that solved by this?

JHD: In my view at least, it is that you don’t need to even traverse the deferred subdependency graph unless the binding is used or passed to a function or whatever, which on some level is statically determinable. Not perfectly, but sufficiently.

SYG: So if I have an annotation saying I don’t care about any of the side effects of the module a binding comes from, is that equivalent to `import defer`? Would tools use that for tree shaking? 
JHD: It is possible—I don’t use tree-shaking tools myself, so you might be right that it would not help that use case—but even if it definitely didn’t, I would still enjoy it as a syntactic marker of that property. But yes, I don’t have the answer to that.

NRO: About the syntactic marker: with this proposal, a tool does not need to check the contents of the deferred module to try to figure out whether it has side effects or not. It knows for sure, because of the keyword, and can just blindly remove it. [INDISCERNIBLE]

SYG: But what I mean is: is it a fair characterization that the reason tree shaking does so poorly is that it requires correct, out-of-band, explicit annotation? Like in this case, 'dep-bar does not have anything I care about'—but with `export defer` I signal my intent about dep-bar in-band. Does it come down to that?

NRO: Yes, when it comes to some tools. Other tools actually try to detect, in some cases better, whether there are side effects or not, without relying on the annotation.

SYG: I am sympathetic to the problem you present of improving performance, but it comes down to this: the current ways the ecosystem works around it are insufficient, basically due to a lack of good annotations, because this solution—for at least the tree-shaking problem—comes down to letting programmers annotate. Right? You will still have to annotate, except with `export defer` rather than some other tool-specific thing. 
Have you thought about, instead of bundling this into deferral behaviour, signaling that the exporter does not care about the side effects of the top level of the module they are exporting—or, in this case, importing and re-exporting? Since we have import attributes, off the top of my head one possible way to signal that annotation would be something like 'I don’t care about side effects: true'. Have you thought about that?

NRO: One alternative I considered was to have a dedicated annotation saying that side effects don’t matter, without introducing this `export defer` keyword. But that does not give exactly these semantics; with this proposal we said, okay, let’s mark this with a keyword and lean into it, because it affects semantics in a way that you would not otherwise be able to represent in JavaScript. With only an annotation, the imported file would still export everything; this proposal gives semantics that you could not normally express.

SYG: Okay. My general concern here is the feature matrix of what ESM features can be combined—now adding TLA and defer and export defer, with different phases—the feature matrix for ESM is getting very complicated, and in general that is a thing I want to simplify. As I said at the beginning, I am sympathetic to the problem that you presented, I think it is important to solve, and perhaps `export defer` is the best way to solve it. A lot of these things are motivated when you look at them in isolation, but the ESM story overall is not something I think is in a good place narrative-wise, and Wasm needs to be cognizant of that too.

NRO: I understand the concern about complexity, but the change—

SYG: There would be no extra transitive deferral behaviour, and it does not result in that simplification. 
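For context, the existing out-of-band annotation being contrasted here is webpack's `sideEffects` field in package.json—a bundler convention, not part of the language. A hedged sketch of what such a manifest can look like (field semantics per webpack's documentation; package name is made up):

```json
{
  "name": "some-component-library",
  "main": "index.js",
  "sideEffects": false
}
```

Instead of `false`, a package can list the specific files that do have top-level side effects (e.g. `"sideEffects": ["./polyfill.js"]`), and the bundler treats everything else as safe to drop when its bindings are unused.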
ACE: Sorry, on the tree-shaking aspect: it is doing two things. You are saying, one, you can skip loading this at all if I am not referring to the binding; but then also the other thing: if I am doing `import *`, it is saying you can lazily evaluate these things. So it is not purely a replacement of the package.json […] marker; it also makes evaluation lazy.

ACE: I do agree that this is adding more things to ESM. In my opinion, the real shame here is that this should have been the semantics of re-exporting a binding from the start. Ideally we could do a breaking change—when you are re-exporting something, it is like an alias, and this optimization should have been the default.

NRO: And given that is not possible, I would say we should just recommend `export defer … from` over `export … from`, unless you are relying on the side effects of the re-exported module—which, if you are, you shouldn't be. So I see MM saying 'happy to go to Stage 2, end of message'. And I see JWK saying 'support for advancement, no need to speak'. And DJM saying 'support, end of message'.

NRO: So, can we confirm Stage 2—do we have consensus for this? Does anybody prefer that it go through the stages from the start?

SYG: To clarify: at Stage 2 this is a separate proposal from `import defer`?

NRO: Yes.

CDA: You have explicit support for Stage 2 from MM, DLM. Does anybody not support this for Stage 2?

CDA: There is nothing in the queue, and that brings us to the end of the day. Thank you Nicolò, thank you everyone, and thank you to our notetakers; we will see everyone tomorrow.

### Speaker's Summary of Key Points

- `export defer` has been presented before when it was combined with the `import defer` proposal. It aims at reducing the overhead caused by 'barrel files' that re-export values from many other modules. 
- `import defer` was advanced to Stage 2.7 without `export defer`, due to the additional complexity of handling re-exports.
- An explanation was given of how `export defer` differs from, and composes with, `import defer`.
- One significant difference is that `export defer` allows module loading (network requests) to be skipped, whereas `import defer` only defers execution.

### Conclusion

- Reaffirmed that `export defer` is at Stage 2, continuing from where it was when `import defer` was split off to proceed on its own.
diff --git a/meetings/2025-04/april-17.md b/meetings/2025-04/april-17.md
new file mode 100644
index 0000000..0d6a49a
--- /dev/null
+++ b/meetings/2025-04/april-17.md
@@ -0,0 +1,592 @@
# 107th TC39 Meeting

Day Four—17 April 2025

## Attendees

| Name | Abbreviation | Organization |
|------------------------|--------------|--------------------|
| Chris de Almeida | CDA | IBM |
| Waldemar Horwat | WH | Invited Expert |
| Michael Saboff | MLS | Apple |
| Nicolò Ribaudo | NRO | Igalia |
| Luca Casonato | LCA | Deno |
| Dmitry Makhnev | DJM | JetBrains |
| Bradford C. Smith | BSH | Google |
| Samina Husain | SHN | Ecma International |
| Ron Buckton | RBN | Microsoft |
| Istvan Sebestyen | IS | Ecma International |
| Daniel Minor | DLM | Mozilla |
| Jesse Alama | JMN | Igalia |
| J. S. Choi | JSC | Invited Expert |
| Ashley Claymore | ACE | Bloomberg |
| Gus Caplan | GCL | Deno Land Inc |
| Zbigniew Tenerowicz | ZBV | Consensys |
| Eemeli Aro | EAO | Mozilla |
| Mikhail Barash | MBH | Univ. of Bergen |
| Ruben Bridgewater | | Invited Expert |
| Shane F Carr | SFC | Google |
| Daniel Ehrenberg | DE | Bloomberg |
| Dominic Farolino | DMF | Google |
| Michael Ficarra | MF | F5 |
| Luca Forstner | LFR | Sentry.io |
| Kevin Gibbons | KG | F5 |
| Josh Goldberg | JKG | Invited Expert |
| Shu-yu Guo | SYG | Google |
| Jordan Harband | JHD | HeroDevs |
| Stephen Hicks | | Google |
| Mathieu Hofman | MAH | Agoric |
| Artem Kobzar | AKR | JetBrains |
| Tom Kopp | TKP | Zalari GmbH |
| Kris Kowal | KKL | Agoric |
| Ben Lickly | BLY | Google |
| Rezvan Mahdavi Hezaveh | RMH | Google |
| Erik Marks | REK | Consensys |
| Keith Miller | KM | Apple |
| Mark S. Miller | MM | Agoric |
| Chip Morningstar | CM | Consensys |
| Justin Ridgewell | JRL | Google |
| Daniel Rosenwasser | DRR | Microsoft |
| Ujjwal Sharma | USA | Igalia |
| Chengzhong Wu | CZW | Bloomberg |
| Andreu Botella | ABO | Igalia |
| Andreas Woess | AWO | Oracle |
| John Hax | JHX | Invited Expert |
| Jon Kuperman | JKP | Bloomberg |
| Philip Chimento | PFC | Igalia |
| Richard Gibson | RGN | Agoric |
| Romulo Cintra | RCA | Igalia |

## Disposable AsyncContext for Stage 1

Presenters: Chengzhong Wu (CZW), Luca Casonato (LCA), snek (GCL)

- [proposal](https://github.com/legendecas/proposal-async-context-disposable)
- [slides](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit#slide=id.gc6f73a04f_0_0)

CZW: This is CZW from Bloomberg, with LCA and GCL from Deno, and we are going to present disposable `AsyncContext.Variable` today. What we already have with `AsyncContext.Variable` is that it provides strong encapsulation to both users and frameworks: their mutations of a single `AsyncContext.Variable` cannot leak out of the function scope they provide to `Variable.prototype.run`. So this provides a strong guarantee and a 
mental model that their mutation can only be seen by a subtask inside of the function scope. And this API and pattern also fits in well in many web APIs and frameworks, given that they can use `AsyncContext.Variable.prototype.run` as a drop-in replacement in existing code. So the code can just replace what they have as a listener to wrap the listener, and there’s no change to the function shape or function parameters. However, if a user wants to modify the `AsyncContext.Variable`’s value, they encounter an issue: they cannot use this pattern in arbitrary contexts like generators or constructors, given that we cannot move keywords like `super`, `break`, or `continue` into a new inner function scope with a very naive replacement.
+
+CZW: So let’s have a look at a recap of how `Variable.prototype.run` works. In a short overview, an `AsyncContext.Variable`’s `run` can be seen as equivalent to a try/finally block that replaces the `AsyncContext` mapping with a cloned new mapping in which the `AsyncContext.Variable`’s new value is swapped in, and when the function finishes evaluating, even if there are interruptions, we swap the `AsyncContext` mapping back to the previous mapping. So we might find it kind of similar to what we have with using declarations: can we enter variables without creating a new function scope, to address the problem that we presented?
+
+CZW: So we would like to present that. Can we introduce `using` declaration support to the `AsyncContext.Variable`, given that they are semantically similar? It can also address the problems that we just raised and fit existing function contexts well, without requiring users to refactor their functions in order to use `yield` or any of these keywords. And the question is: can we do something similar to what we have when we introduced the `using` declaration support to the `AsyncContext.Variable`?
We still want to preserve the encapsulation that the `AsyncContext.Variable.run` supports, and we want the usability improvements that it can be done with the use integrations, so we will visit the problems that we might face with user integrations on an `AsyncContext.variable` later, but let’s see what we can do with this support. + +CZW: So the primary use case that we have is that what—if we want users to use, to create their performance tracing spans on the web, so we have said that `variable.prototype.run` fits good in what frame works where these frameworks takes a user function and sets up the context for users, but if a user wants to perform mutations on the `AsyncContext.variable`, they need to refactor the code in order to use the run pattern, but in the tracings case, users don’t want to refactor the code just to add tracing spans to record how their operations perform, so this `.run` pattern is kind of harder for users to adopt in the existing functions context. So what we want is not just to add declaration support for `AsyncContext.variable`, but all the library wrappers that could wrap under the `AsyncContext.variable` . So library wrappers can extend the functionality of an `AsyncContext.variable` and provide additional support for a tracing library, they can wrap their span around the `AsyncContext.variable` and provide methods like setAttribute for users to conveniently access this functionality without refactoring their code heavily. + +CZW: So comparing to the `AsyncContext.variable` proposal, which is at Stage 2, this new proposal builds on top of the existing `AsyncContext.variable`, but we want to introduce syntax integration to improve usability, just like how we did with promise and async/await. Like, we have promise and we have `promise.then` , and it’s kind of an improvement to introduce async/await syntax support on top of promise, but we still need promises functionality inside the language. 
So given this similar idea, we want to improve the `AsyncContext.Variable` with the `using` declaration, to help users avoid refactoring the code in order to mutate a single `AsyncContext.Variable`.
+
+CZW: Before we go to detailed solutions, I would like to go to the queue to see if there are any questions regarding the motivations of the proposal.
+
+SHS: In some cases it’s impossible to adopt run, such as in test frameworks like jasmine and others.
+
+CZW: Yeah, I think that’s kind of an observation that we found in the test frameworks: they provide before-test and after-test hooks, so all of these functions are separated into, like, different function scopes, so in this case the `AsyncContext.Variable` run pattern does not fit in. But to be honest, in real-world use cases like tracing, the tracing library can provide alternatives to address the testing facilities. So even though it’s not the primary concern for which we raised this new proposal, I think this new proposal can also help to improve that use case.
+
+SHS: Yeah, I think it’s more general just in terms of being able to use any `AsyncContext.variable`, not having to do a special thing for every `AsyncContext.variable`, and it just automatically works with this. A much more general solution, I guess, is what I’m trying to say.
+
+MM: Yeah, so I just want to see if I can rephrase what’s been said so far in terms that I more strongly relate to, just to make sure I’m oriented. The current `AsyncContext` run, you have to give it a function, and then the new temporal scope, the new binding of the variable, applies over the execution of that function. And—sorry, thinking of the nested scoping as scoping, the variable is only shadowed within that function; there’s no equivalent of assigning to the variable. The variable does not change within the prior scope.
Now, there’s several constructs in the language that can be understood in terms of transforming to continuation-passing style. Not that it could literally be implemented that way, necessarily, but yield within generators, await within async functions, and the using for disposables all can be understood as doing something to the continuation of the execution. Dispose is different from the others because the continuation is only within the block.
+
+MM: Now, the question is: are you proposing something that would change the using mechanism itself, or is the change just riding on the using mechanism as it exists? And if so, I don’t understand how the using would introduce the new shadowing scope, because once again, it’s important for `AsyncContext` that it only be shadowing, not assignment. For one thing, if it was an assignment, then a snapshot of the context could change its meaning when the snapshot was revisited after an assignment. So that’s it.
+
+CZW: Yeah, in the coming slides, we will explore different solutions. Definitely, if it is possible, we want to only add, like, the `Symbol.dispose` or `Symbol.enter` on the `AsyncContext.Variable` and reuse all the functionality of the using declaration. And we will also explore how to avoid, like, being able to leak out of the encapsulation of the current scope, so maybe we can revisit this question when we go through all the slides.
+
+MM: Okay, that sounds good. On this slide in particular, `AsyncContext` swap, is that something new that’s coming in with this proposal? Or is it anything existing?
+
+CZW: It’s an abstract operation in the `AsyncContext` proposal specification. They are not exposed to users. It’s written here as an illustration of how the current run works.
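The swap-based model described above can be sketched as follows. This is an illustrative model only, not spec text: `currentMapping` stands in for the agent-wide context mapping, and the plain `asyncContextSwap` function stands in for the AsyncContextSwap abstract operation.

```javascript
// Illustrative model of `AsyncContext.Variable.prototype.run` semantics.
let currentMapping = new Map(); // stands in for the agent-wide mapping

function asyncContextSwap(snapshotMapping) {
  const previousMapping = currentMapping;
  currentMapping = snapshotMapping;
  return previousMapping;
}

class Variable {
  run(value, fn, ...args) {
    const mapping = new Map(currentMapping); // clone, then shadow this variable
    mapping.set(this, value);
    const previous = asyncContextSwap(mapping);
    try {
      return fn(...args);
    } finally {
      asyncContextSwap(previous); // always restored: shadowing, not assignment
    }
  }
  get() {
    return currentMapping.get(this);
  }
}
```

Because the restoration happens in `finally`, the new value can only be observed by code running inside the callback, which is the encapsulation property being discussed.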
+
+MM: Okay, could you remind me what the—I don’t remember an `AsyncContext` swap, and the name certainly sounds imperative rather than, like, you know, like an assignment more than a shadowing. Could you explain what `AsyncContext` swap is?
+
+CZW: It is not an operation exposed to users; it’s an underlying abstract operation to replace the mapping on the agent that contains the variable value slots. And what is exposed to users is that when run finishes evaluating the given function, it will swap back to the previous mapping from when run was invoked.
+
+MM: Okay, I’m not sure I understand, but I think I’ll postpone further questions about it.
+
+GCL: `AsyncContext` swap just takes the current `AsyncContext`, returns it, and then sets the new value to whatever you pass to the AO. So the way it’s used in the specification text is to build a stack, basically, where you, you know, push a new value to the stack by assigning it to the global `AsyncContext`, and then later pop that value by assigning the previous value that was returned.
+
+DE: And it effectively is enforced because only the two run methods ever call `AsyncContextSwap`.
+
+MM: Okay. So it’s always stacking, it’s always balanced, and when you do a snapshot, it’s always of the current bindings—I think I’ll postpone until I have more context. But thank you.
+
+USA: That was the queue.
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_22#slide=id.g3483f5889db_0_22)
+
+CZW: Okay. Cool. Maybe I can go with the slides. So let’s explore solutions that we could have. Right now we have three possible solutions.
The first solution, A, reuses the current using declaration mechanism, and potentially we would like to also use the Stage 1 "enforced using declaration" proposal, which is the `Symbol.enter`. Solutions B and C enhance the using declaration to allow `AsyncContext.Variables` to be used with using integration while still being enforced in the current scope.
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g34c9fd99034_11_14#slide=id.g34c9fd99034_11_14)
+
+CZW: And we would like to enforce the scoping of the using declaration of an `AsyncContext.Variable` in all three solutions, wherever possible. The proposal for solution A could be seen as transformed to this code [slide "Proposal A"]: when `Symbol.enter` is invoked, it will swap the mapping with the value being snapshotted, and it will reset the value of the variable when the `Symbol.dispose` method is being invoked. So in short, the `Symbol.enter` captures the variable’s current value, and then enters the async variable with the new value so the user can observe the new value after the using declaration. And when the dispose method is being invoked by the using integration, it checks whether this `AsyncContext.Variable` is the last one that was entered; if not, it should throw, to enforce this scoping, and the value is not reset. So if the user invokes the dispose correctly with the using integration, we expect that these `AsyncContext.Variable` using disposables are correctly stacked, just like with `AsyncContext.Variable.run`.
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_25#slide=id.g3494191011f_1_25)
+
+CZW: And so what are the context leaks that we mentioned?
It’s not memory leaks; it’s only when a variable value is not encapsulated within a synchronous function call boundary, so it is only possible when a user invokes `Symbol.enter` manually without a using declaration. So this is only possible in synchronous function calls. It’s not possible in async function calls, because async functions are wrapped by a promise, and the promise will continue to behave like `AsyncContext.Variable.run` and properly encapsulate it. And so, what if a synchronous user function really leaks, and the user manually invoked the `Symbol.enter` without invoking the `Symbol.dispose`? The issue is that if we introduce such a capability, we may assume that any function code can leak. But in use cases like Stephen mentioned earlier, like in test frameworks, these test frameworks may want the leaks to happen, because their before-test and after-test hooks are split into separate function scopes, so this could be their intention. So even though it’s not our intention to allow users to do this, it might be someone’s use case to do it, like in test frameworks. And in the equivalent `AsyncLocalStorage.enterWith`, leaks are possible. Because proposal A does not enforce use of the `using` declaration, we recognize that synchronous leaks can cause unexpected behaviors, and we would like to call for general use cases for such behavior. And we also would like to highlight that this is not unsafe, as these value leaks are only observable if you have access to the `AsyncContext.Variable` instance; you cannot observe any synchronous leaks if you don’t have access to the `AsyncContext.Variable` instance, the `AsyncContext.Variable` object itself. And we will propose solutions that cannot leak synchronously, which we will continue to explore next.
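A rough sketch of the proposal A shape discussed above, under stated assumptions: plain `enter`/`dispose` methods stand in for the proposed `Symbol.enter`/`Symbol.dispose` well-known symbols, the private `#value` field stands in for the agent-wide mapping, and all names are illustrative, not spec text. Dispose throws when stack discipline is broken, as described.

```javascript
// Illustrative sketch of proposal A: withValue() returns a scope object
// whose dispose() enforces that enters and disposes are correctly stacked.
const enterStack = []; // tracks entered scopes to detect unbalanced dispose

class Variable {
  #value;
  get() { return this.#value; }
  withValue(value) {
    const variable = this;
    let savedValue;
    return {
      enter() {
        savedValue = variable.#value;
        variable.#value = value; // visible immediately (the possible sync leak)
        enterStack.push(this);
        return this;
      },
      dispose() {
        if (enterStack[enterStack.length - 1] !== this) {
          // broken stack discipline: throw rather than corrupt the stack
          throw new Error("unbalanced enter/dispose");
        }
        enterStack.pop();
        variable.#value = savedValue;
      },
    };
  }
}
```

Calling `enter()` without ever calling `dispose()` is exactly the synchronous leak being discussed, which is why enforcing the use of a `using` declaration via `Symbol.enter` matters for this shape.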
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_39#slide=id.g3483f5889db_0_39)
+
+LCA: Thank you, yes. So as CZW said, the synchronous leak problem with A is that they expose a function to user code that can enter a context without it being forcibly exited, which means that a user can enter a context and possibly leak it out of a synchronous function scope. Proposals B and C both try to prevent this by two different mechanisms, tying the enter and exit of the `AsyncContext.Variable` value directly to the using declaration syntax, so making the enter and dispose methods behave in sort of a special way when called from the using syntax, and not behaving in a way that would let you synchronously leak out a value when manually called by the user.
+
+LCA: So the way that proposal B does this is by still having the `AsyncContext.Variable.prototype.withValue` method that returns an object with an enter and a dispose method, but calls to this enter method do not actually enter immediately: if you look at the value of the `AsyncContext.Variable` directly after calling `Symbol.enter` manually, no value will have been entered. You will still see the previous value. Instead, the `Symbol.enter` method records, using some internal state, whether it was called or not. And then the `using` machinery, when it is done calling the `Symbol.enter` method on the object that was passed to it, will actually perform the entering, so that the entering happens within the using machinery and not within the synchronous function call. And then the `Symbol.dispose` method doesn’t actually do anything; instead the `AsyncContext` restoration happens entirely within the using machinery. Can you go to the next slide?
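The deferred-entry behavior just described for proposal B can be sketched roughly as follows. All names here are illustrative, not proposed API: `pendingEnter` models the internal state, `usingBlock` models the `using` machinery, and plain `enter`/`dispose` stand in for the well-known symbols.

```javascript
// Illustrative sketch of proposal B: a manual enter() only records the
// pending mutation; the (modeled) using machinery applies and restores it.
let pendingEnter = null;

class Variable {
  #value;
  get() { return this.#value; }
  withValue(value) {
    const variable = this;
    return {
      enter() {
        // Record the pending mutation; nothing is applied yet.
        pendingEnter = {
          apply: () => {
            const saved = variable.#value;
            variable.#value = value;
            return () => { variable.#value = saved; }; // restorer
          },
        };
        return this;
      },
      dispose() { /* no-op: restoration lives in the using machinery */ },
    };
  }
}

// Models `using scope = scopable; body()` under proposal B's semantics.
function usingBlock(scopable, body) {
  scopable.enter();
  const restore = pendingEnter ? pendingEnter.apply() : null;
  pendingEnter = null;
  try {
    return body();
  } finally {
    scopable.dispose();
    if (restore) restore(); // restoration is tied to the block, not to dispose
  }
}
```

The key observable point is that a bare `enter()` call changes nothing, so the value cannot be leaked out of a synchronous scope by user code.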
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_1#slide=id.g3494191011f_1_1)
+
+LCA: And I’m sorry for the small code here, but the way that this would work is essentially as described here. So there would be some changes to the actual behavior of the using declaration to be aware of AsyncContext, and as you can see here there is, sort of, a stack of snapshots that the `Symbol.enter` method of the object returned by `AsyncContext.Variable.prototype.withValue` can use to record values to be entered at the end of the `Symbol.enter` call. And then in the dispose part of the using declaration, we dispose, and then reset the AsyncContext variables to the previous value. And this enforces that the values that can be set are syntactically bound to the lexical scope where you’re writing the `using` declaration. You cannot manually leak out an AsyncContext Variable from a function scope or any other scope, because it will always be reset by the `using` declaration.
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_47#slide=id.g3483f5889db_0_47)
+
+LCA: Proposal C does something very similar, but with a slightly different approach, where instead of there being sort of a behavioral change inside of using, we instead add new internal enter and dispose slots to the `AsyncContext.Scopable` object, which is the object that would be returned from `AsyncContext.Variable.prototype.withValue`, and which using would call instead of the `Symbol.enter` or `Symbol.dispose` methods if they’re present. And these are internal methods that cannot be called by user code. They’re only callable by using syntax, so the user cannot manually enter and exit. And this has exactly the same implications as proposal B.
It just works through a slightly different mechanism: instead of it sort of being a side effect of calling `Symbol.enter` that the using machinery sets, it’s an internal slot on this object. And that has implications for ShadowRealm boundaries, which I’ll get to in just a second.
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_59#slide=id.g3483f5889db_0_59)
+
+LCA: Yeah, so let’s start with the cons of each of these. Proposal A can leak, as we’ve discussed. Because we do not want to allow interleaving of variables, which means enter and exit must always be balanced, there has to be some slightly more complicated logic in the enter and exit functions to ensure that you cannot call enter and exit in an unbalanced fashion. But, yeah, you cannot prevent the actual scope leak; you can just prevent the exits happening in an interleaving fashion.
+
+LCA: Proposal B adds this sort of new global mutable state into the using declaration, but it’s not really problematic. It has exactly the same user-observable semantics as using `AsyncContext.Variable` right now. Like, you can only see the mutability for your own variables, which is not different from giving somebody the ability to enter an `AsyncContext.Variable` using `AsyncContext.Variable.prototype.run`.
+
+LCA: And proposal C has unforgeable internal slots that probably cannot work through a ShadowRealm callable boundary, because they cannot pierce proxies. So this is the main drawback of proposal C. And as proposed, you can only set one variable per Scopable, but this is something you could change if there were use cases for this.
It is just three methods: one `withValue` method on `AsyncContext.Variable`, and the object that is returned would just work with using syntax, assuming there’s a `Symbol.enter`. And it works well with proxies, with no special logic anywhere. But, yeah, it has the ability to leak, which some may also consider a use case, for example this test use case. We’ll have to see about that.
+
+LCA: And with proposal B, you cannot scope leak; and proposal C cannot scope leak in synchronous calls, and it’s simpler to explain than proposal B because these are internal slots, and we already have behavior like that elsewhere. But, yeah, we discussed the cons already.
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g34833c460bf_0_34#slide=id.g34833c460bf_0_34)
+
+LCA: So I do want to quickly cover the thing from earlier where we don’t just want this to happen for the `AsyncContext.Variable` itself, but also for objects that wrap `AsyncContext.Variables`, and we think that this is something that can be done through composition of `Symbol.dispose`. It’s slightly different for proposal C, but it’s still possible: you could have an object that internally contains an AsyncContext variable, and you call `withValue` and `Symbol.enter`, and manually call `Symbol.dispose` inside of the `Symbol.dispose` of the object you’re actually passing to using. You can see that illustrated here in the code on the left. Next slide.
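The wrapper composition described above might look roughly like this hypothetical tracing span. `Span`, `setAttribute`, and `currentSpan` are invented for illustration, and plain `enter`/`dispose` methods stand in for the proposed `Symbol.enter`/`Symbol.dispose`; none of this is spec text.

```javascript
// Minimal variable with enter/dispose scopes, as in the proposal A shape.
class Variable {
  #value;
  get() { return this.#value; }
  withValue(value) {
    const variable = this;
    let saved;
    return {
      enter() { saved = variable.#value; variable.#value = value; return this; },
      dispose() { variable.#value = saved; },
    };
  }
}

const currentSpan = new Variable(); // hypothetical tracing-library variable

// Hypothetical wrapper: its enter/dispose delegate to the inner variable's
// scope, so the span composes with `using` the same way the variable does.
class Span {
  #scope;
  constructor(name) {
    this.attributes = { name };
    this.#scope = currentSpan.withValue(this);
  }
  setAttribute(key, value) { this.attributes[key] = value; }
  enter() { this.#scope.enter(); return this; }
  dispose() { this.#scope.dispose(); }
}
```

With using integration this would read something like `using span = new Span("db.query")`, and nested code could find the active span via `currentSpan.get()` without any callback refactoring.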
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g34833c460bf_0_47#slide=id.g34833c460bf_0_47)
+
+CZW: Yeah, so `Symbol.enter` is optional for proposals A and C, but we would like to say that it’s favourable, because we can enforce that an `AsyncContext.Variable`’s using integration is enforced: it must be invoked with a `using` declaration, and it’s not expected to be invoked without dispose. And it also allows library integration, like the previous slide showed, with convenient extensions.
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3483f5889db_0_68#slide=id.g3483f5889db_0_68)
+
+LCA: Sorry for that. I just—my Internet stopped. Okay, so then, yeah, disclaimer: this proposal only works with `AsyncContext.Variable`, and there’s no using integration for `AsyncContext.Snapshot`. This is something we can talk about. We can talk about that offline.
+
+[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_49#slide=id.g3494191011f_1_49)
+
+LCA: Yeah, so the summary is: `AsyncContext.Variable.prototype.run` provides new behavior that is very useful for developers, in frameworks especially, where you’re not directly dealing with this but wrapping existing callbacks to a framework. But run requires a new function scope, which means widely using it in a code base that is not already using callbacks heavily often requires heavy refactoring, especially when using constructs like break or return. And we do expect there to be wide use of `AsyncContext.Variable`, specifically for tracing, which is helped by a lot of instrumentation all across the user’s code base, so it would be good to make it as easy as possible for users to adopt this without requiring heavy refactoring.
And using integration does support this, the same way async/await made it easier to adopt promises. And we’re specifically not looking to introduce new syntax, but to use the existing syntax, because it already does lexical binding, which is what we’re after. And there’s currently three possible solutions we’re exploring, each with different tradeoffs, and we’d be very interested to hear from you all what your thoughts are on these different solutions, and also, obviously, on everything else in the proposal.
+
+LCA: So let’s go to the queue.
+
+RBN: I know I’ve discussed this with the champions for the AsyncContext proposal in the past, but my biggest concern with options B and C is that they will break intuition with the disposal stack and how composition is intended to work with using declarations. Mainly, I’ve talked with a number of delegates in the past who, when they talk about the using declaration proposal, generally think of the actual semantic behavior of using declaration syntax as essentially syntactic sugar over just working with DisposableStack. But these two options, B and C, grant specific capability to `AsyncContext.Variable` and specific interactions with using that prevent these `AsyncContext.Variables` being used with DisposableStack or AsyncDisposableStack, which means they can’t be used in composition. So that’s one of my biggest concerns: this is introducing a break in intuition with how any other disposable works.
+
+GCL: So, sorry, I would agree with you that C is not immediately composable with a disposable stack, but I believe the other two are.
+
+SHS: B doesn’t work either, because the using exits before you pass the enter or the variable itself to the stack.
+
+RBN: Yeah, can you go back to the example of the desugaring of option B. Yeah, so here there’s two issues that I see with this. One is you’re introducing a syntactic transformation over a runtime value.
There’s no way that you can know, when doing any type of static analysis and parsing, without a full, strong, reliable type system, that the thing that you are passing in is an `AsyncContext.Variable`. This would require runtime evaluation to know that it needs to do something special, unless you’re doing this for everything, and then you’re adding it to every single dispose. And then this AsyncContext enter/exit restoration functionality is not, again, tied in: if I took an `AsyncContext.Variable` and stuffed it into a disposable stack and I did a using around that, then the runtime detection is not necessarily going to detect that that’s actually part of that disposable context or disposable stack.
+
+GCL: You’re referring to whether the implementation would want to optimize this to not always take a snapshot around using syntax?
+
+RBN: Your position is that with this approach it will always snapshot at every using?
+
+GCL: I’m not entirely sure—I’m trying to clarify what you’re trying to claim.
+
+RBN: Let me go back for a second and say I’m trying to understand this slide. Is this asserting that every single using declaration would introduce an AsyncContext snapshot?
+
+GCL: It is asserting the observable semantics of this proposal. Like, are you asking about whether an implementation could optimize it to not do that?
+
+RBN: No, I’m not asking about optimizations or anything. I’m asking: if line 1 was not `new AsyncContext.Variable`, or line 4 was not calling `withValue`, would this be doing the same thing for any other value?
+
+LCA: Within the spec, yes.
+
+RBN: So this will add this behavior for every single person using dec—
+
+LCA: No, no, within the spec. I’m not saying this cannot be optimized away.
+
+CZW: Yes, within the specification, this would just always perform the AsyncContext machinery.
+
+LCA: Yes, the same way we do for async functions, for example.
+
+RBN: For async functions I kind of understand the overhead, because when we’re working with async/await, you’re not necessarily expecting the highest performance; there’s going to be some context switching and overhead from continuations and everything else that’s associated with that. But one of the goals we have, or I have at least, when it comes to the shared structs proposal and shared memory multithreading, is high-performance applications that need to be able to work with locks and will be using declarations to lock and unlock, and they do not want overhead.
+
+RBN: The other point to my topic is that in both B and C, there’s discussion about `Symbol.enter`, and the proposal for `Symbol.enter` that’s currently at Stage 1 is specifically about `Symbol.enter` being a more complex extra step, enforcing that you’re actually acquiring the resource with using; if you really, really, really want to, or need to, or have a very specific reason, you can call out to that method and invoke it, which, yes, could result in the potential context leaks you’re discussing. But the point of it being a built-in symbol that 99.9% of people won’t have to look at, because they’ll be typing `using whatever =` and that value, means that people aren’t going to be reaching for this unless they really need to, and people who really need to will most likely be taking extra care about stack discipline. I’m not really certain that B and C are necessary in that context.
+
+NRO: Yeah, on the overhead that was mentioned: this is not a requirement all the time. Like, part of the AsyncContext proposal design was that AsyncContext snapshots will be very cheap to get; they’re just copying a pointer, and you don’t actually need to, like, iterate through a structure to copy its values, which is why it’s okay to do it, for example, at every single await. So I wouldn’t worry about that too much.
It’s just literally copying a pointer to somewhere else.
+
+SHS: Yeah, just wanted to point out the overhead question, and there’s the issue of the order it happens in. If finishing the using is where we’re making the mutation happen, that does break DisposableStack, because you use using to enter the stack, and then you can kind of put more disposables on the stack without using syntax. And so that would not trigger the mutation, and that does break the intuition and the composability.
+
+DE: Yeah, I’m surprised by this suggestion of using AsyncContext Snapshot. I thought we were going to solve this with a general solution to making `Symbol.enter` more reliable, like RBN had proposed previously. I also want to say I think such a reliable `Symbol.enter` mechanism can work with DisposableStack, though it can be pretty complicated. It would mean that a lot of things that would previously be just a function call now would return a Scopable that could only be used with `using` again. The composition still makes sense, it just changes the interface, if the stack ever includes anything that has an enter that must be called.
+
+SYG: This was originally a clarifying question, and it has been clarified: proposal B is hard coding using to be aware of when the right-hand side is an `AsyncContext.Variable`? Like, hard coding in the sense that no matter what the right-hand side of the using is, there’s this AsyncContext machinery that now happens both at the using site itself and in the finally block that does the dispose. I want to triple check that. That is what you’re proposing for proposal B?
Like, AsyncContext has been designed for a while, but there’s zero baking time. Using is barely shipped, only in Chrome, I think; I don’t think it’s shipped anywhere else. There’s not enough baking time. However I personally feel about this design, it just feels like, given the maturity of the proposal dependency chain here, AsyncContext does not rise to the level of needing special casing in syntax that is itself very new. There’s too much risk here; I don’t think something that ties `AsyncContext.Variable` into a piece of syntax is warranted.
+
+LCA: I do want to respond to this before we move on to your topic, Dan. There has been an essentially equivalent API to AsyncContext that we have experience with through AsyncLocalStorage for a long time, which is already being used for tracing. And we’re seeing a lot of the problems discussed in this presentation there right now. Particularly the very heavy need to use callbacks, and the very difficult refactoring hazard when you’re using some of these syntactic constructs like break and return. So this is, like, not coming out of nowhere with no practical experience. It is based on the practical experience from using AsyncLocalStorage. Without having sort of—
+
+SYG: I think you misunderstood my position. I find proposals B and C deeply unpalatable because they basically hard couple AsyncContext variables to using syntax. Proposal A is palatable. I hear your problem statement. Proposals B and C are what I’m objecting to.
+
+LCA: Got it.
+
+USA: Less than ten minutes remain, so I would suggest everyone be quick.
+
+DLM: Yeah, I just wanted to second SYG’s point. It came up in our review that this might be moving quickly. We have no concerns about this going to Stage 1, and we do kind of feel that more experience is warranted with both AsyncContext and using.
+
+CZW: Yeah, I think I can clarify that.
This is the reason that we don’t want to couple this proposal with the Stage 2 AsyncContext proposal, and I don’t think we will proceed with this any faster, because we really see the benefits that this proposal can bring on top of `.run`, enforced by using declarations with `Symbol.enter`. So, yeah, this is the reason that we want to propose a new proposal and ask to advance to Stage 1.
+
+DE: I want to expand on what Luca was explaining about the motivation for this. First, I don’t think this proposal is essential for AsyncContext. I think `AsyncContext.Variable.run` is completely good enough and already corresponds to the, you know, common best practices for using AsyncLocalStorage. There are some uses of `AsyncLocalStorage.prototype.enterWith`, which is a different method that ends up letting you set a variable without entering the scope, breaking the stack discipline. So this proposal is an effort to get back some of those ergonomics, which I really do think are essential for AsyncContext. There are a few people in the Node.js community who were especially interested in maintaining these ergonomics, so bringing this proposal to committee helps to get feedback on that as a possible future direction. That doesn’t mean we need to do it now. But it will be helpful to see this actually considered by the committee. That will be helpful to bring back to the Node.js community and talk through what that means. So it’s helpful whether or not we adopt the proposal.
+
+SHS: I agree that B and C are unpalatable. We should focus on A. And I think one of the main issues with A is that dispose can throw. Are we okay with that? Is that something that is acceptable?
+
+USA: Okay. Reminder that we have close to 5 minutes remaining. There are three items on the queue.
+
+LCA: We want to ask for Stage 1. Do we have enough time to go through the queue and then ask for Stage 1, or should we ask for that first and—
+
+USA: I think so.
I mean, if… you know, it depends how much time the items take, basically. You could ask for Stage 1 and go through the queue later. +

DE: Let’s go through the queue first. We are almost done. +

RBN: Yeah. I had a reply to SHS’s comment that dispose could throw if the stack broke. There is not really anything wrong per se with dispose throwing. You shouldn’t throw if you can avoid it. But the spec is designed to capture errors, and having dispose throw because you broke stack discipline is a way to inform users that they broke stack discipline, so they can resolve that by adjusting their source code. So it seems like that’s a good thing rather than a bad thing. I don’t have an issue with dispose throwing in that case. +

SHS: Excellent. +

RBN: LCA mentioned composition during the presentation. You said composition was feasible with both proposal A and B. I was trying to ask how it was feasible, because it didn’t seem like it was. I think I might better understand that now, with the explanation that the snapshots are always happening around `using`. I do have some concern around the complexity of how that works with enter, but I’m not that concerned about this topic anymore. +

LCA: Okay. Yeah. I think the way it’s possible is because `using` calls `Symbol.enter` on the outer object, and the outer object can call that on the inner object, which has the side effect of mutating the snapshot that is set at the end of the original enter call. It’s possible with B or with C if you do some prototype shenanigans on the return type of enter. But I don’t have— +

RBN: I am not sure—C does not seem reliable. You are forcing the user to use a supertype and class to do this, which might not work well in compositional cases. +

LCA: Yeah. I tend to agree. +

DE: How are we preserving this stack discipline with generators?
If you yield in the middle of a `using` block, and you never resume that generator, does this break stack discipline? Or does something about how generators work restore the previous AsyncContext snapshot at that point? +

CZW: The current Stage 2 proposal ensures that generators also preserve the encapsulation of AsyncContext. So in the Stage 2 proposal, before and after a yield statement, an `AsyncContext.Variable` observes the same value, regardless of how the caller changes the context outside the generator. +

DE: Yeah. That’s inside of the generator, but outside of it, if you call `.next()` and that puts it inside of a using block, how do you prevent that from being unbalanced? +

GCL: I would expect that we will specify all suspends for generators, async functions, and async generators to restore the AsyncContext. But we haven’t discussed the exact details there. I think not doing that, as you say, would sort of bring the same problem back. But yes, that’s what I would expect. +

LCA: Okay. Shane is in the queue. Do you want to go to the final slide, Chengzhong? +

[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g34833c460bf_0_52#slide=id.g34833c460bf_0_52) +

LCA: Yeah. Yeah. Ask for Stage 1 first. +

USA: Let’s see if we have any comments from the queue. Also feel free to support. Okay. We have support from CDA on the queue. +

RBN: Just briefly. I do support the idea of disposable `AsyncContext.Variable`s. I have concerns about options B and C. And I am still a little iffy on `Symbol.enter` on its own. With a using declaration, the idea is that you do the initialization when you do the acquisition. So the call that acquires the variable is essentially what would be a good place to actually change the context.
So I am still a little bit up in the air on whether option A is necessary, as long as you have a `Symbol.dispose`. But considering it is a proposal we are still investigating and looking at, I don’t have an issue with continuing to look at option A in that case. +

CZW: Thank you. +

SYG: I am going to ask you to clarify: is the reason this is a separate proposal, and not folded into the existing AsyncContext proposal, that the champions think this is not integral to AsyncContext? I would say that this is an improvement to the AsyncContext Stage 2 proposal. The Stage 2 proposal can work on its own and provides the functionality that we need for context; this new proposal is essentially to improve the usability. +

CZW: That doesn’t answer why it needs to be a separate proposal. For it to be a separate proposal—as mentioned, it depends on `Symbol.enter`, which is also Stage 1, and we don’t think it’s necessary to block the Stage 2 proposal on it. +

USA: There are responses, but I believe you have already answered them—yes, they have gone away. That’s the queue. We are 2 minutes over. Let’s give a few more seconds to see if somebody has thoughts on Stage 1. +

DE: Do we have a definition of the scope or problem statement? Does it differ based on the different options? We have heard some opposition to some of them. +

[Slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_18#slide=id.g3494191011f_1_18) +

CZW: Well, I think this page of the slides explains the intention. The ultimate goal is to allow AsyncContext integration, and we could explore that, like we said, with `Symbol.enter` or `Symbol.dispose`. Even with solutions B and C, I think this page shows that we want to explore the feasibility of the solutions.
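To make the stack-discipline discussion above concrete, here is a minimal userland sketch of the "option A" idea: an entry object that restores the previous value on dispose, and whose dispose throws when stack discipline is broken (as RBN described). Everything here is hypothetical—`ContextVariable` and `withValue` are stand-in names, and the `using` declaration with `Symbol.enter`/`Symbol.dispose` from the actual proposal is emulated with an explicit `dispose()` call.

```javascript
// Hypothetical sketch, not the proposal's API: a context variable whose
// entry object pops its value on dispose, and throws if disposal happens
// out of order (broken stack discipline).
class ContextVariable {
  #stack = [undefined];
  get() {
    return this.#stack[this.#stack.length - 1];
  }
  // Stand-in for what `using entry = asyncVar.withValue(x)` might do.
  withValue(value) {
    const stack = this.#stack;
    stack.push(value);
    const depth = stack.length;
    return {
      dispose() {
        // Dispose throwing here signals broken stack discipline to the user.
        if (stack.length !== depth) {
          throw new Error('stack discipline broken');
        }
        stack.pop();
      },
    };
  }
}

const asyncVar = new ContextVariable();
const entry = asyncVar.withValue('span-1'); // emulates `using _ = …`
const during = asyncVar.get();              // 'span-1' inside the scope
entry.dispose();                            // emulates end-of-block disposal
const after = asyncVar.get();               // restored to undefined
```

The key property is that disposal is tied to scope exit, so the value can never leak past the block the way `enterWith` can.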
+

LCA: I think a more written-out version of this is on the third-to-last page, Chengzhong, the summary slide [[https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_49#slide=id.g3494191011f_1_49](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_49#slide=id.g3494191011f_1_49)] +

USA: I’m sorry. We are past time. Can we focus on Stage 1 for now? +

ABO: +1 from ABO. Stage 1 as well. +

USA: We have not heard any negative comments. Please let it be known if you have any. +

MM: To be clear, you have heard negative comments. You have not heard objections to Stage 1. I will put myself in that category. I am very concerned about this and doubtful there’s actually a feasible solution, but I am not objecting to Stage 1. +

CZW: Thank you, MM. I think we can bring this up at the SES meeting. Thank you. +

USA: All right. I guess with that, we can conclude with Stage 1. And I hope you folks have a good chat async afterwards and try to work out some of these things. +

LCA: Thank you. +

### Speaker's Summary of Key Points +

- There are concerns with solutions B and C, as they change the semantics of the `using` syntax.
- Solution A allows composition in libraries and integration with the syntax. +

### Conclusion +

- Proposal advances to Stage 1 +

## WHATWG Observables +

Presenter: Dominic Farolino (DMF) +

- [proposal](https://github.com/WICG/observable)
- [slides](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/) +

DMF: Okay. Perfect. All right. So my name is Dominic Farolino. I work on Google Chrome, and I am working on the Observable API, which is currently a WICG specification. Before we go into the slides, I want to give some context. This is a pretty informal presentation—this is not incubated or proposed in TC39.
We are not asking for specific stage feedback or anything like that. But because we are pursuing this API, which used to be pursued in TC39, and we moved it over to WICG with the goal of upstreaming it into the WHATWG DOM specification, myself and other browser vendors felt it was important to run the proposal and the design by folks in TC39, and to keep everyone on the platform side updated and ask for opinions from that perspective. That’s what I am doing here. +

DMF: I will start with the history of Observables. Like I mentioned, in 2015 it was a Stage 1 TC39 proposal, I believe championed by Ben Lesh—he’s the author of the RxJS userland Observables implementation. In 2017, it was proposed to instead move to the WHATWG DOM standard and incubate there. A lot of platform editors agreed with this approach and felt it was the right place for it, and that it would be the best way to get it into developer hands faster. Some years later, I retook this proposal, formally moved it to WICG, created a specification out of it, and wrote the implementation in Chromium. That’s the context for why we are here today. I want to start by discussing what an Observable is, before we cover some of the specific design details of the proposal. +

DMF: The best way to think about an Observable is that it’s like a promise, but for multiple values. Like promises, they are synchronously available handles that represent async work. That means you can act on them right when you create them, and call methods and operators on them, even before the underlying source starts emitting values for their consumption. The main way the Observable proposal integrates with the web platform is through the EventTarget interface. A big part of the specification is a new method on EventTarget called `when`, which creates an Observable that represents an asynchronous stream of web platform events fired at the EventTarget.
It’s like a better addEventListener, integrating with the Observable API instead of just callbacks. +

DMF: So this will enable you to write code like this—this should be `element.when('click')`. You get an Observable by calling it; this represents all the click events being fired on the element. And then you can start calling all of the operators on Observable, like `filter`: take all the click events, filter them, map each event to the data inside the event that you really care about, and then subscribe and add a handler. This is the linear pipeline that they offer, more convenient than clunky addEventListener callbacks. We think it helps you get out of the same callback hell that promises help you get out of, but for async streams of values instead of the one-shot values that promises work on. +

DMF: So where do `filter` and `map` and so forth come from? There’s a list in our spec of all the operators. Some of them return Observables and some made more sense returning Promises. You can check out the spec for the list of them. I want to cover some design details and talk about the internals of how this proposal actually works. Promises have two components: the producer, which is the callback that the promise constructor consumes and which produces values, and the consumer, which consumes the value—in red, the thing in `.then`. Observables are similar. They are constructed very similarly, taking a callback. But instead of just calling a resolve function or something, you get access to the subscriber object and `.next()` values to it. And the consumer subscribes and passes in various handlers—not just `next` to receive values. You can pass `complete` to signal completion, because whenever you have multiple values, you need a way to signal that the stream is complete. And you can pass `error` as well, to signal an error to the consumer.
So the consumer has the ability to respond to each of these events by passing different callbacks that represent them. +

DMF: Some key behaviors of Observables are different from promises. The first one is synchronous delivery. Back to this example: when you `.next()` a value on the right there, in the producer, it synchronously goes to the consumer and triggers the `next` handler. There’s no asynchronous microtask delay like promises have. +

DMF: The second one is that it’s lazy. This is a deviation from how promises work. When you construct a promise and you give it a callback, that producer callback runs immediately. With Observables, the producer callback, which produces values, actually gets saved as private state inside the Observable, and it runs later, when a consumer actually subscribes. In that sense they’re lazy compared to promises. +

DMF: Here’s an example of how that works with the Observable produced by the `when` method on EventTarget. You have this Observable and it listens to an event. What that translates to is: this constructs an Observable with an internal callback, and whenever that callback eventually runs, under the hood it adds an event listener and forwards events to the subscriber. The benefit being, you can call operators on the Observable immediately. +

[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_278) +

DMF: One interesting design detail of our Observable proposal is that the producer is essentially multicast. This is a little complicated, but what it means is: you can see up top, the producer—the callback it takes—every 500 milliseconds will produce an incremented value. When the first consumer comes along and subscribes—that’s `source.subscribe` on the left—it will fire the callback internally and run it. But because the producer is multicast, that is not the case for subsequent consumers.
They just listen in on the existing, already-created producer. +

[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_261) +

DMF: We will talk about what happens when consumers signal that they are no longer interested in values, and how this listening mechanism works. But that’s part of the next section: cancellation and teardown. An Observable producer can stop producing values, and can be told to stop. Basically, an Observable shuts down in one of two ways. There’s producer-initiated teardown: the producer callback, under some conditions, calls `subscriber.complete` or `subscriber.error`, signaling to the consumer that it’s done producing values—"you are not going to hear from me again"—either with `complete` or with `error`. Or, consumer-initiated: a consumer starts a subscription and then ends it by aborting its subscription with an AbortController. Here’s an example of that. +

[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_193) +

DMF: On the right, a producer, which registers a teardown to shut itself down and ensure that it knows how to stop producing values. And on the left, the consumer passes in an AbortSignal associated with its subscription; at any time it can abort the controller for that signal, and that triggers all the teardowns in the producer to run, so that the producer knows to stop producing values for the consumer. +

[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_118) +

DMF: Now, this is tricky when we have multiple consumers, because we can’t just stop producing values if not every consumer has aborted its subscription. The producer is ref-counted for this reason. So, to reiterate, Observables can have multiple consumers.
Multiple consumers share that single, individual producer. Once the refcount hits zero, that is finally when the producer will tear itself down. It can’t do that earlier, because there could be other consumers that are still interested in getting values from the producer. +

[Slide](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_238) +

DMF: Once it’s torn down, the Observable is not dead—resubscription is possible; it can be reignited. Here’s a concrete example of that, playing off the last example we saw. If we have three consumers interested in the values of this Observable, then the ref count of the producer function is basically three. When the first consumer at the bottom left aborts its subscription, we mark the ref count down to two. Same thing in the middle—down to one. And finally, when the last one aborts its subscription, the ref count is 0, and then we tell the producer that it’s safe to tear down. It tears down and stops producing values. +

DMF: This was a design change we made after TPAC discussion last year and after shopping it around for developer feedback at different venues. It was seen as one of the bigger footguns in userland Observables that they did not do this. So it made sense to consider that feedback and deviate from community precedent in a way, and it’s been received well so far. +

DMF: As for the current status of this proposal—this is a little out of date, but basically, yeah, we would like input. It’s the number-one reacted-to web standards issue on GitHub, given the [?] spec reaction tool. People are interested in it. There’s a lot of developer hype at conferences and on Twitter and so on. So we felt it was important to prioritize this proposal and bring it to developers. We are gathering feedback from Node and from WinterCG, and have had no negative feedback so far—either neutral or positive.
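The lazy, refcounted multicast behavior described above can be sketched in userland. This is a hedged approximation, not the spec's `Observable` class: `MulticastSource`, `addTeardown`, and the handler shapes are invented names for illustration, but the mechanics mirror the presentation—the producer starts on the first subscribe, later subscribers share it, delivery is synchronous, and teardowns run only when the last consumer aborts.

```javascript
// Userland sketch (not the spec's Observable) of a refcounted, multicast
// producer: lazy start on first subscribe, shared by later subscribers,
// torn down only when the subscriber count drops to zero.
class MulticastSource {
  #producer;                 // starts the work; may register teardowns
  #teardowns = [];
  #subscribers = new Set();
  #active = false;
  constructor(producer) {
    this.#producer = producer; // lazy: not called until first subscribe
  }
  subscribe(next, { signal }) {
    this.#subscribers.add(next);
    signal.addEventListener('abort', () => {
      this.#subscribers.delete(next);           // refcount drops by one
      if (this.#subscribers.size === 0) {       // refcount hit zero:
        this.#active = false;
        for (const fn of this.#teardowns) fn(); // run producer teardowns
        this.#teardowns.length = 0;
      }
    });
    if (!this.#active) {                        // first subscriber starts producer
      this.#active = true;
      this.#producer({
        // synchronous delivery to every current subscriber
        next: (v) => { for (const fn of this.#subscribers) fn(v); },
        addTeardown: (fn) => this.#teardowns.push(fn),
      });
    }
  }
}

// Usage: two consumers share one producer run; teardown fires only after both abort.
let starts = 0, teardowns = 0;
const source = new MulticastSource((subscriber) => {
  starts++;
  subscriber.addTeardown(() => teardowns++);
  subscriber.next(42); // emitted synchronously during the first subscribe
});
const a = new AbortController(), b = new AbortController();
const seen = [];
source.subscribe((v) => seen.push(['a', v]), { signal: a.signal });
source.subscribe((v) => seen.push(['b', v]), { signal: b.signal });
a.abort(); // refcount 2 → 1: producer keeps running
b.abort(); // refcount 1 → 0: teardowns run
```

Note that the second subscriber only receives values emitted after it joins; here the single `42` went out synchronously during the first subscribe, which is exactly the "listening in on an existing producer" behavior from the slides.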
+

DMF: And with that, I would like to thank some of the folks in TC39—JHD and KG in particular—who have been active on the repository, giving us feedback and helping us shape some of the nuanced points of the proposal into what it’s become. At this point, it’s pretty much done. And like I said, myself and other browser vendors felt it was important to run this by TC39 folks, formally update them on the proposal, see if there are any interesting discussion points that come out of this, and basically keep everyone updated and see if there are any major red flags that people spot. With that, I think I am pretty much done with the presentation. We can open up for discussion, or just end with a call for any feedback to be registered on the GitHub repository there. +

DMF: And yeah. With that, I think I am done with the slides. +

CDA: Great. Thanks for coming to the committee to talk about the proposal. We have a number of folks on the queue with some questions. First, we have MM. +

MM: Could you go back to the history, where this started in TC39? +

DMF: Yes. Yeah. This slide? [[https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_27](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_27)] +

MM: Yeah. What I remember is that there was an Observable proposal in TC39, and I don’t remember the time frame. Does it say Jafar (JH)? It does say that. Good. That history is correct—I thought I had heard a different name. I wanted to make sure this was co-championed by Jafar Husain (JH) and me. When he left the committee, I didn’t have the energy to keep going with it, which I assume is part of why it moved outside of TC39. I do want to express that, although I didn’t have the energy for it, I wish that once the energy arose to pursue it somehow, it had been pursued in TC39.
I do not understand why the right venue is outside of TC39. +

DMF: I think—so this was discussed a little bit. We have a section specifically on this topic, venue choice, in the WICG explainer. The gist is basically that the proposal’s primary integration point with the platform was EventTarget, and it made sense to have a dependency also on AbortController and AbortSignal as a cancellation mechanism to unregister one's subscription. And I believe the cancelable token proposal in TC39 was contentious; given the layering of how we expected this to be integrated with the platform, I think that motivated this change—it felt more appropriate for a WHATWG working group. It wasn’t necessarily a new language primitive. +

MM: I understand. +

DE: I think there are multiple ways that layering can work for both Observable and AbortController. I hope we can work together between TC39 and WHATWG in the future, rather than trying to claim territory in both directions. In particular, this could have been done with EventTarget being the HTML integration on top [designed together, but not necessarily determining the venue of the core of Observable, similar to many other TC39 proposals]. With AbortController, I think there’s a possibility that we could make an API that’s even improved in terms of usability, as you and I have discussed, and that could be done in potentially either venue. So I hope we can keep working together on these things. +

DMF: Yeah. I would love that. I think that was very much our intention in starting up this discussion and trying to shop it around here. I very much second that sentiment. +

MAH: Could you maybe go back to the example slide where you showed the subscriber? +

DMF: Let me see. Which subscriber? +

[Subscription slide with consumer and producer examples] +

MAH: So this surprised me a little bit.
I was expecting—when I think Observable, I think of basically a mechanism that could be built on top of iterators, with some sugar to make them multicast. But I didn’t expect the subscriber to look somewhat different from iterators. It seems like, for example, here, the complete and the error are very equivalent to the return and throw that an iterator is understood to have. So I am wondering: in the design space, has there been consideration of Observable as some sugar around iterators, for producing and consuming? +

DMF: The closest thing related to this: we have the `Observable.from` method—I wish I had a slide on it; maybe, I don’t think so—that takes a Promise, an Observable, an iterable, or an async iterable and converts it to an Observable. So there’s a lot of adaptation and conversion mechanisms between those. But is it the naming of the functions that you are commenting on, or… +

MAH: Yeah. I mean, I am wondering what is different about Observables in their behavior such that they wouldn’t follow the iterator shape and protocol? +

DE: This is something that was discussed by Jafar when he was explaining Observable to TC39. Iterators are pull-based—you get values by calling next on them—whereas Observables are push-based: the event is sent. So iterators can only work for things that are buffered, whereas with Observables, for events, you often don’t want to buffer them. +

MAH: I don’t know the production—so okay. What is the behavior when you’re producing a value? Is the producer expected to block until all the consumers have consumed the value? +

DE: It doesn’t block. It just calls synchronously. Right. +

MAH: I see. So the consumption is not iterator-based; it’s callback-based. Got it. +

JSC: This is an exciting proposal. Thank you for presenting it to TC39.
I want to bring up interoperability, conversions, and dataflow between DOM Observables and ES Signals. ES Signals are another proposal we have, I think, at Stage 1 right now. They’re similar, but they’re different. I wanted to ask: have there been explorations of how an Observable could feed a Signal? Or vice versa, a Signal feeding an Observable? Said conversion API would need to live in WHATWG DOM, not ECMAScript, since Observables are coupled to the DOM. The situation is somewhat analogous to WHATWG streams and ECMAScript async iterables. I understand that Observables are closer to shipping than Signals are, so interoperability APIs could be deferred to a future DOM proposal. But this kind of interoperability and interchange should be explored early on. +

DMF: Yeah. I know DE and Ben Lesh have the most context on the interoperability between Signals and Observables. I unfortunately remain mostly ignorant of the specifics of the Signals proposal, so I mostly let them speak on it. Maybe Daniel has thoughts on it, but I don’t know about Signals, and I defer to the other folks helping me design this. +

DE: Signals represent a current value, whereas Observables represent more like a stream of events. When you have an Observable which represents "the value changed to this new value", then you can make a Signal which represents the current value—the last one emitted. Signals are not about making sure a callback gets called on every update; instead, they enable you to have a calculation that’s dependent on the current value, and you can refresh that calculation when you want. So based on the Signal proposal API—in particular, based on the `Signal.subtle.watched` and `Signal.subtle.unwatched` callbacks that you have in the `Signal.Computed` constructor—you can make, in just a couple of lines of code, a conversion function which takes an Observable and exposes it as a Signal.
It would subscribe to the Observable when the Signal is watched by a Watcher. And conversely, the conversion could go in the other direction: you could install a Watcher on a Signal that fires an event to the Observable whenever the value changes. So I think the conversion makes sense in both directions. +

DE: It’s important not to confuse them in terms of use cases, in particular. Observables have been misused for reactive rendering, and that doesn’t work well in practice because it causes glitches. I really hope the way Observables are explained to developers makes clear that it’s not the core reactivity approach for the web. For certain cases, it does make sense to translate like that. +

JSC: Yeah. I understand that these two concepts, Observables and Signals, are complementary. I think that coordination by both standards’ champions will be really important in developer messaging, to make their use cases clear to developers on MDN, on Web.dev, on other developer blogs, and so on—so that developers know what Observables’ and Signals’ respective roles should be. +

JSC: With that said, I know DE mentioned that feeding an Observable into a Signal or vice versa takes only a couple lines of code. I am hoping that in the future, in DOM, there will be one-line ergonomic APIs that make such conversion/interchange from one into the other very easy. +

DE: So, DMF, you mentioned that Chrome shipped this already. (It’s not a standard, because WICG things are not standards-track in principle.) I am wondering what the feedback has been from other browsers. +

DMF: We’ve had some neutral or positive feedback from Mozilla folks in person at standards conferences, but [?] on the repository that have taken a lot of time to analyze it directly, and that’s relatively common… WebKit almost flipped their position bit [to positive] but really wanted it to be shopped to TC39 first, before feeling comfortable doing so.
I expect a positive response from them, provided there are no red flags or concerns. And there’s an almost-complete implementation of Observables in WebKit as well, so it wouldn’t be hard to get them to ship it. They have been reviewing it pretty promptly. So yeah—informally, positive, and zero negativity. +

DE: Are you interested in input from TC39? You said it’s done. How would you like to work together from here? +

DMF: Yeah. For context, the intent was to give this presentation a while ago; for logistical reasons, it was not gotten to in one of the last meetings, so it is being given a little late. I guess an informal kind of check would be useful, so I can report back to the other browsers that there are no major concerns about this from folks—if that is true. I think we’re close to that point, because we have gotten good feedback from TC39 folks informally, JHD and KG particularly. Just a quick temperature check to make sure this doesn’t jump out as a completely horrible idea, or that these things need to change—and that no one is moved to file fundamental issues against the repository. Just to keep folks updated. I don’t think we need a formal thing; I think that would be sufficient. +

DE: I am somewhat worried that people are going to use Observables for situations where they really mean to use Signals. And I think that was a big ecosystem problem when Observables were discussed in the past: people thought they were confusing, and a lot of the confusion comes from this category misuse. It’s good for us to be adding it. But I hope that the educational materials about Observables avoid misdirecting people in that particular way. +

DMF: Yeah. I think that’s a good point. Ben Lesh, the creator of RxJS, seems to strongly agree with that. And I think all of the messaging he has been doing continually, about how they interact with the platform and how they compare to Signals, is pretty aligned with that.
It’s possible to do more messaging to hammer down that point. But for what it’s worth, everyone discussing it externally from the platform perspective is, I believe, on the same page. +

DMF: What is that thing you mentioned, the temperature check tool? I don’t know anything about that, but… +

DE: We have this thing where we can give emoji reactions. I don’t know if that’s what you want, but we use that sometimes for informal polls that are supposed to be non-binding. +

CDA: I don’t know if we need to have a check, but… usually we have something more concrete that we’re trying to get a temperature check on, rather than just "how do we feel about the proposal?" But let’s keep moving through the queue. +

JRL: Can you show me the slide where you call—this is perfect. The `observable.subscribe` here. As far as I can see, there isn’t any slide that uses the return value from this; maybe it doesn’t return anything. We also have other slides where we were shown passing in an AbortSignal separately during the `subscribe`. The `subscribe` could return a subscription, and the subscription could have the thing to cancel at that point. We have also discussed the integration with iterables—the subscription, the return value here, is the thing that could hold the async-iterable method that lets you get an AsyncIterable back, to convert back and forth. It seems like `subscribe` should return something, is my point. +

DMF: So you said you expect `subscribe` to return a subscription that you can cancel. That’s not the case, and the reason is that if you are the first subscriber, we run the producer function synchronously. And if it happens to produce an unlimited number of values synchronously, then until the consumer aborts the subscription, you would never get a chance to abort, because `subscribe` would never have returned in the first place.
So you pass the signal into `subscribe` up front, and abort the subscription if you need to, with the controller, inside any of the subscription handlers. There’s an open issue about—because that’s such an obscure case—possibly having `subscribe` return something that’s cancelable, limited to that case. But for now it doesn’t return anything, and you pass in a signal in a dictionary as the second argument to `subscribe`. +

JRL: The case you are describing is that the subscription itself immediately calls the producer, the producer could start pushing in an infinite loop, and you would want to cancel at that point. Doesn’t that mean the producer is in an infinite loop and we never run any other code anyway? +

DMF: No, because you could imagine an example where, on the call stack, I call `subscribe`, that calls the producer, and a for loop inside it keeps calling `subscriber.next`. +

JRL: Why would the producer on this side ever yield so that the consumer could get values? Like, if we are in an infinite loop, it doesn’t seem like a valid use case. +

DMF: If it’s calling next synchronously in an infinite loop, the next handler would run, and the next handler could abort the subscription after it receives the number of values it wants. It could be waiting for one particular value, and once it finally receives it, it aborts the subscription. This gives the user a way to tell the producer to stop producing values even if it produces them synchronously. +

JRL: So the subscription’s `next` callback is being invoked immediately during the `subscriber.next()` call. +

DMF: Yes. +

JRL: Okay. If we were to yield the task thread one tick before calling the producer’s producing function, does that solve the same case without forcing us to have the AbortSignal separately? +

DMF: It does, but then it produces—we discussed this a bit.
It produces a tricky situation where like sometimes the producers—like, yeah. You are saying, we don’t call the producer callback synchronously during `subscribe`? You are saying there’s a microtask gap or something. + +JRL: Right. + +DMF: We discussed that and we rejected it for reasons that are not inside my head right now. There’s discussion about this, though, on one of the issues. I can try and pull it up and put it in some notes or something. But this was discussed as part of the ref producer discussion. And yeah. So I think maybe… yeah. I don’t have all the context in my head, but we did discuss this and decided not to go that path. + +DE: We in TC39 had trouble working with the web platform in the past on ergonomics. For example, with Temporal, we designed Temporal to work well with having types that model different things in the DOM. And the principle, from the feedback we have gotten from WHATWG so far, is that we could add things to the DOM later, post initially shipping, if that makes sense, if Temporal proves itself out—rather than betting against each other. I am wondering how this will work when applied here: for example, if we add more iterator helper methods, do we then add them at the same time to Observables? Or would we just kind of see if it works out, shipping one and adding to the other? When we make changes at the TC39 level, we try to make them coordinated at the same time. But even for older features like Promises: when promises were created, there was the idea to have a `.loaded` accessor that would return a Promise, and that never happened. I am wondering how we should work together across the venues on these ergonomic issues, now that DOM is getting into those? + +DMF: I mean, it would be my intention to keep the set of like Iterator/AsyncIterator helpers up to date with the Observable operators. I don’t have a particular reason or appetite to hold off on one that makes sense just to see if it kind of goes well in TC39 land.
So I would like to keep these pretty up to date and pretty synchronized when possible. Which is kind of why we started with that initial list right off the bat and really didn’t deviate from it. Because you know, there’s some operators that you could do without [?] observables, but we felt the consistency between the helpers was important. So yeah. It would be my intention to keep them up to date and have enough cross-talk between the orgs to kind of synchronize the introduction of those changes. Does that answer the question at all? + +DE: Yeah. That sounds perfect. + +DMF: Cool. + +CDA: MF? + +MF: Yeah. This is a—generally, I am in favor of doing this work across venues. When developing iterator helpers, we might not be taking into account the needs of observables, and in our design there I think we may fail to account for something that is important. So I do hope that even if it is being developed in a separate venue, we keep in communication on those topics in particular, so that we are involved in the process early and don’t forget to take into account your needs. + +DMF: That’s perfect. I am glad we are on the same page on that. + +DE: So you described the proposal as done. How could TC39 make itself a more attractive venue for discussing proposals, even across venues, before they are done? + +DMF: So I think—for all web APIs, or things that have started incubation in TC39? + +DE: We wouldn’t be interested in bringing every web API here. Only ones that have clear overlap. This one had a particular history; people here were interested in it. + +DMF: I think, yeah, your question applies to the ones that, as you mentioned, have some significant overlap or some history in TC39. I think I probably should have just like—we were—we got the feedback to shop this around to TC39 a little later than I would have liked. And I tried to do this, I think it was December or October, and it didn’t work out last minute.
So I am presenting this rather late. How to make TC39 an attractive venue? The rare proposal that has this larger overlap should probably come here earlier. I don’t think there’s anything about TC39 that was off-putting or unwelcoming. I feel like we didn’t consider it early enough, probably. And the fact it was moving—I don’t have much experience with TC39, the process, and it was moving out of TC39; in my head, you know, we discussed with SYG and okay, do it and do it over here. I think some earlier cross-talk would have been better. I mean, yeah. I don’t know. Maybe just informing editors in both groups that we should talk more is the best thing. But that’s the lesson I learned from this, at least. + +DE: That makes sense. Even if nothing comes to mind now, if there’s something off-putting about the group or anything in the organization you can think of later, I want to figure out how to address it at the TC39 level. + +SYG: Let me jump in a little bit here. I think reputationally TC39 could improve, and I see proposals that tried to move in that direction, though there was strong disagreement from both sides on MLS’s procedural consensus-process improvement proposal yesterday. Those are the proposals that would move TC39’s reputation for its process and deliberation to be more welcoming for web proposals. + +CDA: And on that note, we are at time. So thank you, Dominic. Thanks, everyone. Great discussion. + +DMF: Thank you so much. Appreciated. + +### Speaker's Summary of Key Points + +- DMF presented WHATWG Observables for general feedback to the TC39 plenary. +- DMF outlined the major design decisions made over the past 6 months or so, and asked if there are any general thoughts or big concerns. +- Originally a TC39 Stage 1 proposal in 2015, Observables moved to WICG/WHATWG to integrate more closely with DOM Events. +- There was some discussion about the history behind the original authors of the TC39 Observables proposal.
+- It has been implemented in Chrome and partially implemented in [WebKit, which wished first for feedback from TC39](https://github.com/WebKit/standards-positions/issues/292#issuecomment-2682983190). +- The Committee raised questions on cancellation design, observable/iterator/Signal interoperability, and possible developer confusion between observables and Signals, but Committee reception was largely positive. +- Feedback emphasized the need to have good developer messaging, to help the community understand the complementary and distinct use cases for Signals and Observables, and for ergonomic APIs that allow easy conversion/interoperation of data from Signals to Observables and vice versa. +- There was discussion about: +  - How to maintain a positive relationship between WHATWG and TC39. +  - How to encourage more cross-venue discussion for future relevant APIs earlier in the process. +  - How to improve TC39’s reputation and make it more welcoming for relevant web proposals. + +### Conclusion + +- Positive overall feedback. +- Discussion about how to increase early collaboration between WHATWG and TC39. + +## Continuation: Normative: Mark sync module evaluation promise as handled (#3535) + +Presenter: Nicolò Ribaudo (NRO) + +- [proposal](https://github.com/tc39/ecma262/pull/3535) +- [slides](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU) + +NRO: So this was presented on Monday, I believe; it was blocked because there was some confusion. The problem specifically was that this HostPromiseRejectionTracker hook exposes some promises to the host that are not exposed to JavaScript code—the promises were internal, and the host hook would expose them. I went through the various promises that get rejected in the spec, and there are a bunch of internal promises that get exposed to the host hook. Like, the ones from missing from—these are internal—and a bunch of promises in the module stuff that—but this is one way to expose them.
And I don’t—I talked with MAH and we resolved this; it seemed to be fine. It’s not that there’s no issue, but this change does not make it worse. Is that correct, MAH? + +MAH: Yeah. I think from what I understood, this doesn’t make it worse. It actually makes it better. In the sense that, like, most hosts will not synchronously notify—give user-land the ability to interact with an unhandled rejection. They will usually queue that up until the promise queue drains and then fire events or callbacks or whatever mechanism the host has. And what I understand about this change is that it basically makes the promise handled before draining the queue in this case, and so this guarantees effectively that the host would not—if the host has that behavior, it would not expose that internal promise to the—to any user callback. + +MAH: There might be other places in the spec where we’re creating promises that are assumed to only appear internally and that may end up being exposed to user code through the rejection mechanism. However, as NRO mentioned, that’s probably something we should more holistically review and see if there’s anything we want or should do about it. + +MAH: My main concern, and the reason I raised it in the first place, is that when you expose a promise object that is meant to be internal to userland, userland can go and modify the promise object, and given the way promises work, I am not convinced that couldn’t interfere with the host or spec implementation later trying to observe the result—the resolution—of the promise, and somehow cause some synchronous reentrancy. We have talked about this before in committee. And we need to be more careful with how we handle promises. And until we have a way for spec or host code to safely handle promises while guarding itself from potential user code interference and causing reentrancy, we don’t want to create these situations in the first place. + +NRO: Yeah. All of that matches my understanding. So yeah.
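[Editorial note: the host behavior described above — a rejected promise that gets a handler attached before the host reports unhandled rejections — can be sketched in Node.js. Names here are illustrative; the real change operates on spec-internal module evaluation promises via the HostPromiseRejectionTracker host hook.]

```javascript
// Node.js reports unhandled rejections only after the microtask queue
// drains, so attaching a handler "in time" keeps a rejection internal.
let reported = false;
process.on("unhandledRejection", () => {
  reported = true;
});

// Stand-in for a spec-internal promise, e.g. the promise returned by a
// non-cyclic Module Record's Evaluate() (hypothetical name for the sketch).
const evaluationPromise = Promise.reject(new Error("evaluation failed"));

// The proposed fix in spirit: mark the promise as handled at the point
// the rejection is extracted, so the host never surfaces it to user code.
evaluationPromise.catch(() => {});

setTimeout(() => {
  console.log(reported ? "rejection was surfaced" : "rejection stayed internal");
}, 0); // prints "rejection stayed internal"
```

If the `catch` line is removed, the host observes the rejection once the queue drains — which is exactly the exposure of internal promises that the change avoids.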
I would like us to—I want to note that we are still zeroing in on that. We should only tell the host the promise was handled if it wasn’t handled before. But yeah. We will review this again. So do we have consensus now for this normative change? + +MAH: I am definitely—I think MM withheld consensus for me. + +MM: Yeah. I was convinced; I withheld consensus specifically for MAH, so you have support from Agoric. + +NRO: Thank you. We have consensus now. + +### Speaker's Summary of Key Points + +- NRO and MAH discussed MAH’s concern about spec-internal promises being exposed to user code through host hooks. +- In fact, the proposed change would reduce the risk of spec-internal promises being exposed to developers. +- There might be other places in the specification that create promises that are assumed to only be used internally and could potentially be exposed to user code through the rejection mechanism. This will need to be reviewed more holistically in the future. +- MAH and MM are no longer concerned about the proposal. + +### Conclusion + +- Positive consensus for the pull request. + +## Continuation: Reviewers for Export Defer + +NRO: I still need reviewers for export defer— + +CDA: The next topic: who would like to review export defer for Stage 2? + +USA: I can offer to help out. + +NRO: You are a colleague of mine. + +USA: Yeah. To help out with the technical stuff, doing the review. + +CDA: Looking for two stage 2 reviewers for export defer. + +CZW: I can help with reviewing. + +CDA: Chengzhong. Yes. Looking for one more. + +NRO: I will review one of the proposals back. + +CDA: You got a quid pro quo offer from Nicolo reviewing. Can we get one individual to help review the export defer spec? + +ACE: I will review it. + +CDA: All right. Thank you, Ashley. + +CDA: MM asks to confirm that non-extensible got 2.7. I believe that is true. + +CDA: It was conditionally approved pending that review, and then you got that. So it officially is 2.7.
+ +MM: Thank you. + +### Speaker's Summary of Key Points + +- Stage 2 specification reviewers are required for the export defer proposal. + +### Conclusion + +- CZW and ACE will review. USA will also help. + +## Plenary conclusion + +CDA: All right. With that, our next topic is technically lunch, but we have nothing else scheduled for the afternoon. This brings the 107th meeting of TC39 to a close. Thank you, everyone. And we will see you in the—the next one is coming up quick, in May. Yes. + +USA: Yes, and please sign up for it. + +CDA: Are we still looking for people to volunteer to—for talks at the community event? + +USA: At the community event as well. Like, if you are even somewhat motivated to do this, please let me know; I will be happy to help out. Basically, having an idea who would be available to do this, or for a panel, for that matter, would be really helpful, because it would help us set an agenda, start inviting people, and put it up somewhere, so that people can sign up. Yes. + +MM: What about people who are attending only remotely? + +USA: I don’t believe you need to register, then. Like, we can accommodate you. + +MM: Is the nature of the community event such that someone who is attending only remotely can still appear at the community event? + +USA: That’s a great point. Thank you. I hadn’t considered this, but I think it should be possible. I’ll confirm this to you in person—or, I mean, on DM. + +MM: Thank you. + +CDA: And another huge thanks to all the people that volunteered to help with the notes at this meeting. That would be ACE, ABO, BLY, CDA, DLM, EAO, JMN, JSC, CZW, DE, NRO, and SFC. Thank you so much. + +USA: Thanks, everyone. And thanks also to our transcriptionist.
From 6d12414e5ae12a265cc070bd702bc92ae27c789f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?= Date: Thu, 1 May 2025 17:35:04 -0700 Subject: [PATCH 2/3] Fix links that got gobbled up by conversions --- meetings/2025-04/april-14.md | 12 ++++++------ meetings/2025-04/april-15.md | 16 ++++++++-------- meetings/2025-04/april-16.md | 4 ++-- meetings/2025-04/april-17.md | 6 +++--- 4 files changed, 19 insertions(+), 19 deletions(-) diff --git a/meetings/2025-04/april-14.md b/meetings/2025-04/april-14.md index de99126..e4aaca8 100644 --- a/meetings/2025-04/april-14.md +++ b/meetings/2025-04/april-14.md @@ -186,7 +186,7 @@ CDA: There are no updates from the CoC committee. There is nothing new to report ## Normative: add notation to PluralRules -Presenter: Ujjwal Sharmna (USA) +Presenter: Ujjwal Sharma (USA) * [proposal](https://github.com/tc39/ecma402/pull/989) * [notes](https://notes.igalia.com/p/UpmK0K8eo) @@ -209,7 +209,7 @@ DLM: Yeah, we support this normative change. DE: In change sounds good to me. I think we should treat this similar to staged proposals in terms of merging it once we have multiple implementations and test. We could track PRs like this. Anyway, this seems like a very good change to me. -USA: Just FYI, we have tracking for everything, basically, sorry, for all normative PRs for ECMA 402, but noted. [https://github.com/tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs](https://github.com/tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs) +USA: Just FYI, we have tracking for everything, basically, sorry, for all normative PRs for ECMA 402, but noted. [tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs](https://github.com/tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs) DE: Okay, great. 
@@ -223,11 +223,11 @@ USA: And yeah, and with I guess a couple of supporting opinions, we achieved con ### Speaker's Summary of Key Points -Normative pull request [https://github.com/tc39/ecma402/pull/989](https://github.com/tc39/ecma402/pull/989) on ECMA 402 was presented to the committee for consensus and this PR added support for a notation option in the plural rules constructor for handling different non-standard notations. +Normative pull request [tc39/ecma402#989](https://github.com/tc39/ecma402/pull/989) on ECMA 402 was presented to the committee for consensus and this PR added support for a notation option in the plural rules constructor for handling different non-standard notations. ### Conclusion -The committee reached consensus on [the pull request]([https://github.com/tc39/ecma402/pull/989](https://github.com/tc39/ecma402/pull/989)), with explicit support from DE and DLM. +The committee reached consensus on [the pull request](https://github.com/tc39/ecma402/pull/989), with explicit support from DE and DLM. ## Normative: Mark sync module evaluation promise as handled @@ -252,7 +252,7 @@ NRO: And then later when you actually handle the promise, so when you call .then NRO: So that was Promises, and how does this interact with modules? -[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_44#slide=id.g34836646ca1_0_44)] +[Slide](https://docs.google.com/presentation/d/1kheOg1AZDj-T9n0O-0sbd5IBwvnjiODyJcK1Ez6Q0JU/edit?slide=id.g34836646ca1_0_44#slide=id.g34836646ca1_0_44) There are multiple types of modules in this spec, or well Module Records, which represent modules. There are a Module Record base class and two main types of actual Module Records. There are Cyclic Module Records andSynthetic Module Records. Cyclic records are modules that support dependencies. 
And this is some sort of abstract extract base class and our spec provides Source Text Module Records that are variant for JavaScript. For example, the web assembly imports proposals in the WebAssembly is proposing a new type of cyclic on the record, and for synthetic module records, and it’s just modules where you already know the exports and you have to wrap them with some sort of module to make them importable. The way module evolution works changed over the years. Like, originally there was this Evaluate method that would—it was on all module records, and it would trigger evaluation, and if there was an error returned a throw completion, otherwise a normal completion. But then when we introduced the top-level await, we changed the method to return the promise with the detail that only cyclic module records can actually await. If there’s any other type of the module records, like any type of custom host module, there’s a promise in there, returned by the Evaluate method, and this promise must already be settled. So the promise there is just to have a consistent API signature, and not actually used as a promise. @@ -264,7 +264,7 @@ NRO: Then during the evaluation of `a.js`, we perform the steps from the slide b NRO: So the fix here is to just change these InnerModuleEvaluation abstract evaluation to explicitly call the host hook that marks the promise as handled when we extract the rejection from the promise. And, well, editorially, I’m doing this as a new AO because it's used by the import defer proposal, and we’re going to have it inline in the Module evaluation algorithm. -NRO: Are there observable consequences to this? Yes and no. Technically this is a normative change, as example before, this is observable because it changes the way host hooks are called, and usually they affects how some events are fired. 
However, on the web, the only non-cyclic module records we have are syntactic model records and we already have the values, we already—we’re just packaging them in a module after creating them, so that promise is never rejected, and this is not observable. Outside of the web, we have commonJS, and when you import from a .cjs file, it would be wrapped in its own Module Record and we evaluate the particular CJS module in the `.Evaluate()` methodevaluation of the module record. However, NodeJS does not expose as rejected through their rejection event the promise for that internal module, because maybe they don’t actually create the promise, and don’t know how it’s implemented. So Node.js already implements the behavior that would be—that we will get by fixing this. Node does not implement the bug. So, yeah, to conclude, is there consensus on fixing this? There’s the pull request ([3535](https://github.com/tc39/ecma262/pull/3535)) already reviewed in the 262 repository. +NRO: Are there observable consequences to this? Yes and no. Technically this is a normative change, as example before, this is observable because it changes the way host hooks are called, and usually they affects how some events are fired. However, on the web, the only non-cyclic module records we have are syntactic model records and we already have the values, we already—we’re just packaging them in a module after creating them, so that promise is never rejected, and this is not observable. Outside of the web, we have commonJS, and when you import from a .cjs file, it would be wrapped in its own Module Record and we evaluate the particular CJS module in the `.Evaluate()` methodevaluation of the module record. However, NodeJS does not expose as rejected through their rejection event the promise for that internal module, because maybe they don’t actually create the promise, and don’t know how it’s implemented. So Node.js already implements the behavior that would be—that we will get by fixing this. 
Node does not implement the bug. So, yeah, to conclude, is there consensus on fixing this? There’s the pull request ([#3535](https://github.com/tc39/ecma262/pull/3535)) already reviewed in the 262 repository. MM: Great. So I’ll start with the easy question. The—you mentioned the situation where the promise—there exists a promise that when born is already settled, and I understand why, and it all makes sense, I just want to verify that it does not violate the constraint, the invariant that user code cannot tell synchronously whether a promise is settled or not. That the only way—the only anything that user code can sense is asynchronously. It finds out that a promise is settled. Is that correct? diff --git a/meetings/2025-04/april-15.md b/meetings/2025-04/april-15.md index 2b7758b..e4c6cab 100644 --- a/meetings/2025-04/april-15.md +++ b/meetings/2025-04/april-15.md @@ -62,7 +62,7 @@ Day Two—15 April 2025 Presenter: Mark Miller (MM) -* [proposal]([https://github.com/tc39/proposal-oom-fails-fast/tree/master](https://github.com/tc39/proposal-oom-fails-fast/tree/master)) +* [proposal](https://github.com/tc39/proposal-oom-fails-fast/tree/master) * [slides](https://github.com/tc39/proposal-oom-fails-fast/blob/master/panic-talks/dont-remember-panicking.pdf) MM: So last time we brought this to the committee, it was called must fail fast, and it got stage 1, and then it got blocked from advancing from there for reasons I will explain. And this is a Stage 1 update. Since then, the proposal—we renamed the proposal “Don’t Remember Panicking.” So I’m going to linger on this slide for a little bit because this is the code example I’m going to use throughout the entire talk, so it’s worth all of us getting oriented in this code example. This is a simple money system. Don’t worry if you stop bugs, it’s purposefully a little bit buggy, which is—in order to illustrate some of the points. 
@@ -207,7 +207,7 @@ No conclusion; we’ll discuss further in a continuation topic, including a temp Presenter: Ron Buckton (RBN) -* [proposal]([https://github.com/rbuckton/proposal-enum](https://github.com/rbuckton/proposal-enum)) +* [proposal](https://github.com/rbuckton/proposal-enum) * [slides](https://1drv.ms/p/c/934f1675ed4c1638/EYypvengQohMlG52w1qseW8BCwCkSG0Y-2ip8Zq7pxoOFw?e=Aklyqu) RBN: Today I want to discuss enum declarations. I am Ron Buckton, I work on the TypeScript team. Enum declarations are essentially enumerated types. Provide a finite domain of constant values that are obvious to indicate choices, discriminants and bitwise flags. And a staple from C-style languages, VB .NET. C#, Java, PHP, Rust, Python, the list goes on and on. The reasons we are discussing this are several. One, the ecosystem is rife with enumerated values. ECMAScript is `typeof`--String based. The DOM has `Node.type`, which has its enumerated values on the Node constructor, this is the same. Buffer encodings are string based, or a string based enumerated type essentially. And Node.js has constants that are enumerated type or value-like functionality. But there’s no grouping. For users there is really no standardized mechanism to define enumerated type, ones that can be used reliably by static type. We talked about ObjectLiterals. But there’s a reason why that’s not really the best choice for this. I will go into that in a moment. @@ -389,9 +389,9 @@ Advanced to Stage 1 Presenter: Ruben Bridgewater (RBR) * [proposal](https://github.com/ljharb/object-property-count) -* [slides]([https://github.com/tc39/agendas/blob/main/2025/2025.04%20-%20Object.propertyCount%20slides.pdf](https://github.com/tc39/agendas/blob/main/2025/2025.04%20-%20Object.propertyCount%20slides.pdf)) +* [slides](https://github.com/tc39/agendas/blob/main/2025/2025.04%20-%20Object.propertyCount%20slides.pdf) -JHD: Hi, everyone. RBR just became an Invited Expert. He and I are co-championing this proposal. 
`Object.propertyCount` is solving this problem that RBR is going to talk about is something I run into frequently, and so I was very excited to walk-through this when he approached me with the idea. RBR is a [Node TSC [Technical Steering Committee]]([https://github.com/nodejs/TSC](https://github.com/nodejs/TSC)) and core collaborator. And I will hand it over to him to present better than I would have been able to do. Go for it. +JHD: Hi, everyone. RBR just became an Invited Expert. He and I are co-championing this proposal. `Object.propertyCount` is solving this problem that RBR is going to talk about is something I run into frequently, and so I was very excited to walk-through this when he approached me with the idea. RBR is a [Node TSC Technical Steering Committee](https://github.com/nodejs/TSC) and core collaborator. And I will hand it over to him to present better than I would have been able to do. Go for it. RBR: Thank you very much also for having me here. It’s the first time for me to be on the call. So very nice to—I am able to present. So like JavaScript I am pretty certain, every one of you has multiple times heard that JavaScript is a slow language. And thanks to JITs this is mostly no longer true, in most situations, and one thing is, however, that has bothered me, and because the language doesn’t provide any way to implement a lot of algorithms in a very performant way. And one is relating to counting the properties of an object in different ways. So it’s a very common JavaScript performance bottleneck I have run into. 
@@ -576,7 +576,7 @@ Not everyone in committee was convinced of some of the aspect of the broader sco Presenter: Daniel Minor (DLM) * [proposal](https://github.com/tc39/proposal-explicit-resource-management) -* [slides]([Explicit Resource Management: Implementer Feedback](https://docs.google.com/presentation/d/1F4kLwEUvBmyyTWq06HQgiJypcCWm3uwOzVDzFQ0xauE/edit#slide=id.p)) +* [slides](https://docs.google.com/presentation/d/1F4kLwEUvBmyyTWq06HQgiJypcCWm3uwOzVDzFQ0xauE/edit#slide=id.p) DLM: Sure. Tough. I would like to present some feedback about the explicit resource management proposal. Quick reminder about what a specific resource management is. Basic idea the idea is to add a `using` keyword, along with a `Symbol.dispose` and the concept of `DisposableStack`. And generally the idea allows for automatic disposal of resources when the use—when using variable leaves scope. For example this simple little thing here. Where are we in SpiderMonkey. It’s fully implemented. It’s currently shipped in Nightly, but disabled behind a prop and the current implementation follows the spec. In particular, it’s currently maintaining an explicit list of resources to dispose at runtime. @@ -606,11 +606,11 @@ DLM: Okay. Great! Thank you very much. ### Summary -Allowing the `using` statement in a switch statement with fallthrough complicates implementations. If we disallow this use case, implementations can desugar to try/finally blocks which is simpler and more efficient. The proposal champion put together a pull request for this change: [https://github.com/rbuckton/ecma262/pull/14](https://github.com/rbuckton/ecma262/pull/14). +Allowing the `using` statement in a switch statement with fallthrough complicates implementations. If we disallow this use case, implementations can desugar to try/finally blocks which is simpler and more efficient. The proposal champion put together a pull request for this change: [rbuckton/ecma262#14](https://github.com/rbuckton/ecma262/pull/14).
### Conclusion -Consensus to merge [https://github.com/rbuckton/ecma262/pull/14](https://github.com/rbuckton/ecma262/pull/14). +Consensus to merge [rbuckton/ecma262#14](https://github.com/rbuckton/ecma262/pull/14). ## Non-extensible applies to Private for stage 1, 2, 2.7 @@ -766,7 +766,7 @@ MM: Okay. Great. ### Speaker’s Summary -* MM presented a new proposal, broken off from [proposal-stabilize]([https://github.com/syg/proposal-nonextensible-applies-to-private](https://github.com/syg/proposal-nonextensible-applies-to-private)), co-championed by SYG and others. It proposes to make private fields respect `Object.preventExtensions` . +* MM presented a new proposal, broken off from [proposal-stabilize](https://github.com/syg/proposal-nonextensible-applies-to-private), co-championed by SYG and others. It proposes to make private fields respect `Object.preventExtensions` . * This proposal would patch up the current counterintuitive behavior of private fields not obeying non-extensibility, prevent hidden state creation via private fields, and improve performance so that nonextensible objects can have fixed memory layouts. * The proposal is not backwards compatible and might rarely break existing correct code. * Google has deployed usage counters and found minimal impact, but some websites in Germany (some which use a German GIS framework called Cadenza) might be affected. One website has minimal likely impact; it is for a temporary music festival. Google is trying to reach out to the affected German websites and Cadenza, but further help with outreach was requested by SYG. diff --git a/meetings/2025-04/april-16.md b/meetings/2025-04/april-16.md index 38de110..56d483b 100644 --- a/meetings/2025-04/april-16.md +++ b/meetings/2025-04/april-16.md @@ -253,7 +253,7 @@ USA: Unfortunately, they are on time, though. We would have to bring this back t MAH: Michael, can you file an issue in maybe—that will help Waldemar understand the request? -MF: Yeah. Will do. 
(opened [#6]([https://github.com/tc39/proposal-compare-strings-by-codepoint/issues/6](https://github.com/tc39/proposal-compare-strings-by-codepoint/issues/6))) +MF: Yeah. Will do. (opened [#6](https://github.com/tc39/proposal-compare-strings-by-codepoint/issues/6)) MAH: Thank you. @@ -271,7 +271,7 @@ Stage 1 Presenter: Michael Saboff (MLS) -- [slides]([https://github.com/msaboff/tc39/blob/master/TC39%20Consensus%20Apr%202025.pdf](https://github.com/msaboff/tc39/blob/master/TC39%20Consensus%20Apr%202025.pdf)) +- [slides](https://github.com/msaboff/tc39/blob/master/TC39%20Consensus%20Apr%202025.pdf) MLS: This is a continuation from our conversation that we had in Seattle. And I asked for an hour, I don’t think this is going to take an hour. But we will see. This is caused conversation in the past. I think from Seattle there’s general agreement there is a problem we need to deal with single dissenters. It’s rare, but there’s been some issues in the past. There’s also, I took away from Seattle, there’s no desire for like a major process change. That we—our social norms seem to be enough to guide us for 9X% where X is a pretty big number. 98%. And it also, I took at here’s no need to have two objectors. I originally proposed 5% at Seattle and people thought that was too onerous and have to figure out what is 5% so on and so forth. diff --git a/meetings/2025-04/april-17.md b/meetings/2025-04/april-17.md index 0d6a49a..177d201 100644 --- a/meetings/2025-04/april-17.md +++ b/meetings/2025-04/april-17.md @@ -278,7 +278,7 @@ DE: Do we have a definition of this scope or problem statement? Does it differ b CZW: Well, I think the—this page of the slides explains the iteration. Ultimate goal is to allow AsyncContext integration and we are—we could explore that, like, we said with `symbol.enter` or `symbol.dispose` that I think—even with solution B and C, I think this is—this page shows that we want to include the feasibility and the solutions. 
-LCA: I think a more written out version of this is on the third to last-page, Chengzhong, the summary slide [[https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_49#slide=id.g3494191011f_1_49](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_49#slide=id.g3494191011f_1_49)]
+LCA: I think a more written out version of this is on the third-to-last page, Chengzhong, the [summary slide](https://docs.google.com/presentation/d/1p_rQ5UagJ3Bgwbds0NL-nBaR3ovJLmyHmKuRMJejs_Y/edit?slide=id.g3494191011f_1_49#slide=id.g3494191011f_1_49)

USA: I’m sorry. We are past time. Can we focus on Stage 1 for now?
@@ -307,7 +307,7 @@ LCA: Thank you

Presenter: Dominic Farolino (DMF)

-- [proposal]([https://github.com/WICG/observable](https://github.com/WICG/observable))
+- [proposal](https://github.com/WICG/observable)
- [slides](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/)

DMF: Okay. Perfect. All right. So my name is Dominic Farolino. I work on Google Chrome, and I am working on the Observable API, which is currently a WICG specification. Before we go into the slides, I want to give some context. This is a pretty informal presentation; the API is not being incubated or proposed in TC39, and we are not asking for specific stage feedback or anything like that. But because we are pursuing this API, which used to be pursued in TC39, and we moved it over to WICG with the aim of upstreaming it into the WHATWG DOM specification, myself and other browser vendors felt it was important to run the proposal and the design by folks in TC39, to keep everyone on the platform updated and to ask for opinions from that perspective. That’s what I am doing here.
@@ -358,7 +358,7 @@ CDA: Great. Thanks for coming to the committee to talk about the proposal. We ha

MM: So could you go back to the history where this started in TC39?

-DMF: Yes. Yeah. This slide? [[https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_27](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_27)]
+DMF: Yes. Yeah. [This slide?](https://docs.google.com/presentation/d/1i5_zneksrU7i7ZHcl5EQRzUHGkmXRIQKd-bLfrPRNXY/edit#slide=id.g30a04a42395_0_27)

MM: Yeah. What I remember is that there was an observable proposal in TC39, and I don’t remember the time frame. Does it say Jafar (JH)? It does say that. Good. Good. Good. That history is correct. I thought I had heard a different name; I wanted to make sure this was co-championed by Jafar Husain (JH) and me. When he left the committee, I didn’t have the energy to keep going with it, which I would assume is part of why it moved outside of TC39. I do want to express that although I didn’t have the energy for it, I wish that once the energy arose to pursue it somehow, it had been pursued in TC39. I do not understand why the right venue is outside of TC39.

From 689c41499131ee2c5b082e2059ae9970315ccbab Mon Sep 17 00:00:00 2001
From: Aki
Date: Tue, 13 May 2025 17:31:43 -0700
Subject: [PATCH 3/3] Update meetings/2025-04/april-14.md

Co-authored-by: Andreu Botella
---
 meetings/2025-04/april-14.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meetings/2025-04/april-14.md b/meetings/2025-04/april-14.md
index e4aaca8..c37dee1 100644
--- a/meetings/2025-04/april-14.md
+++ b/meetings/2025-04/april-14.md
@@ -430,7 +430,7 @@ ABO: Thank you for sharing your use cases, and now I will give an update on the

ABO: So the last time that we presented this in full was in Tokyo, and we gave a brief summary of the changes since then in December; but basically, one of the things that Mozilla highlighted for this proposal was that it increases the size of potential memory leaks.
-ABO: If you have this in the web, this code used to only keep alive the callback and any scopes that closes over. If there can be any click event, the callback is not a leak, and for the scopes it crosses over, it is only a leak if it keeps alive things that are not used by the function. And I know that sometimes engines keep more things alive than they should for closed over scopes, but that is a trade-off they make.
+ABO: If you have this on the web, this code used to only keep alive the callback and any scopes it closes over. If there can be any click event, the callback is not a leak, and for the scopes it closes over, it is only a leak if it keeps alive things that are not used by the function. And I know that sometimes engines keep more things alive than they should for closed-over scopes, but that is a trade-off they make.

ABO: In the proposal as we presented it in Tokyo, `addEventListener` implicitly captures an `AsyncContext.Snapshot`, and for a lot of the entries in the snapshot, those values will not be used by the callback, even if the snapshot itself is used, so this could be a leak—or will be a leak in most cases.
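The leak shape ABO describes can be sketched with a toy model in plain JavaScript. Everything here (`FakeSnapshot`, `fakeAddEventListener`, the `ambient` map) is a hypothetical stand-in for the proposed `AsyncContext.Snapshot` machinery, not the real API: registering a listener implicitly copies the entire ambient state, so values the callback never reads stay reachable for as long as the listener is registered.

```javascript
// Toy model: ambient context as a plain Map (hypothetical, not the real API).
const ambient = new Map();

class FakeSnapshot {
  constructor() {
    // Implicitly captures *every* ambient entry at construction time.
    this.captured = new Map(ambient);
  }
  run(fn) {
    // Swap the captured state in around the callback, then restore.
    const previous = new Map(ambient);
    ambient.clear();
    for (const [key, value] of this.captured) ambient.set(key, value);
    try {
      return fn();
    } finally {
      ambient.clear();
      for (const [key, value] of previous) ambient.set(key, value);
    }
  }
}

const listeners = [];
function fakeAddEventListener(callback) {
  // As in the Tokyo presentation: a snapshot is captured at registration.
  const snapshot = new FakeSnapshot();
  listeners.push({ snapshot, dispatch: () => snapshot.run(callback) });
}

// Registration happens while a large value is in the ambient state.
ambient.set("traceId", "abc123");
ambient.set("bigBuffer", new Uint8Array(1024)); // never read by the callback
let observed;
fakeAddEventListener(() => {
  observed = ambient.get("traceId");
});
ambient.clear(); // the registering code moves on

// Dispatching later still runs inside the snapshot, so the callback works...
listeners[0].dispatch();
console.log(observed); // "abc123"

// ...but the snapshot also pinned bigBuffer, which nothing ever reads.
console.log(listeners[0].snapshot.captured.has("bigBuffer")); // true
```

Only `traceId` is ever read, yet `bigBuffer` stays reachable through the snapshot for the listener’s whole lifetime; that unused-entry retention is the enlarged-leak concern Mozilla highlighted.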