diff --git a/meetings/2023-11/november-27.md b/meetings/2023-11/november-27.md
new file mode 100644
index 00000000..755ea59b
--- /dev/null
+++ b/meetings/2023-11/november-27.md
@@ -0,0 +1,849 @@

# 27 Nov 2023 99th TC39 Meeting

-----

Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream.

You can find abbreviations in delegates.txt

**Attendees:**

| Name | Abbreviation | Organization |
| ---------------------- | ------------ | ----------------- |
| Ashley Claymore | ACE | Bloomberg |
| Ben Allen | BAN | Igalia |
| Waldemar Horwat | WH | Google |
| Chris de Almeida | CDA | IBM |
| Daniel Minor | DLM | Mozilla |
| Samina Husain | SHN | Ecma International|
| Cam Tenny | CJT | Igalia |
| Jesse Alama | JMN | Igalia |
| Jirka Marsik | JMK | Oracle |
| Rezvan Mahdavi Hezaveh | RMH | Google |
| Sean Burke | | |
| Philip Chimento | PFC | Igalia |
| Nicolò Ribaudo | NRO | Igalia |
| Romulo Cintra | RCA | Igalia |
| Agata Belkius | - | Bloomberg |
| Istvan Sebestyen | IS | Ecma International|
| Chip Morningstar | CM | Agoric |
| Peter Klecha | PKA | Bloomberg |
| Luca Casonato | LCA | Deno |
| Eemeli Aro | EAO | Mozilla |
| Shane F Carr | SFC | Google |
| Jordan Harband | JHD | Invited Expert |
| Ujjwal Sharma | USA | Igalia |
| Mark Cohen | MPC | Invited Expert |
| Michael Saboff | MLS | Apple |
| Ron Buckton | RBN | Microsoft |
| Jack Works | JWK | Sujitech |
| Daniel Ehrenberg | DE | Bloomberg |
| Mikhail Barash | MBH | Univ. of Bergen |
| Ethan Arrowood | EAD | Vercel |

RPR: So we have a code of conduct. It’s on our main website. Please make sure you have read it, and that you interpret it in the best possible spirit. If anything happens that you think perhaps goes against it, or that you’re concerned about, you can always tell us, the chairs, in private.
And we will keep everything anonymous. Likewise, we have the code of conduct committee that you can go to as well. Our schedule this week is the four-day meeting: two hours in the morning and two hours in the afternoon, so very simple. We have our usual comms tools. For anyone that’s new here: from the Reflector post you’ll be able to get to TCQ. TCQ is our primary tool for controlling the conversation and for making sure that we stick to our agenda. We do have a very packed agenda — we always do — so let’s respect the queue to keep the conversation orderly. You can switch to the active topic by clicking the queue button up top, and that will take us to the active agenda item — like so, testing out TCQ. Whenever we have an agenda item, we always have a current topic, and if you wish to talk about something new, you would normally click the new topic button. Always prefer the buttons towards the left, the blue ones, and only use the ones to the right when needed, because they increasingly interrupt the flow of the conversation. If you want to add something extra — a reply to the current topic — use the reply button while we are discussing that topic. Let’s see, at the moment there’s no one on the queue; I’m just checking it myself. Likewise, if we’re not sure about something that’s been said, we can ask a clarifying question, just to check our understanding. And finally, if something really serious is happening — if something is disrupting the meeting, such as we don’t have note takers or we can’t hear something — then please use a point of order. That’s the most urgent way to get through. All right, when you’re speaking you will have an extra button called “I’m done speaking”. Please do your best to make use of this, as it’s a good way of yielding control and making sure everyone knows it’s time to move on. We also have our equivalent of Internet Relay Chat: our chat rooms on Matrix.
And within that, the main rooms that you’ll be interested in, for this meeting at least, are TC39 Delegates — that’s the privileged room where only delegates can speak. If you’re not in there, then please see the TC39 business repo and make sure your details are registered there, or contact one of the chairs. And then there’s also Temporal Dead Zone. Please use that for anything off topic, so that rather than the conversation descending in the main delegates chat, it descends into the Temporal Dead Zone. All right. We have an IPR policy. Everyone here is generally expected to be either an Ecma member delegate, in which case your organization has already signed up to the relevant IPR policy, or an invited expert who has gone through our process and likewise signed the invited expert form. For everyone else, please make sure you have signed the contributing agreement; anyone who is not in those categories and has not signed the form is expected to purely be an observer, so please do not speak. You can also contact the chairs or the Ecma Secretary here to find out more about ways to get involved. And, yes, we do welcome observers.

Moving on to notes: note taking is a critical part of the meeting. We have, I think, a transcriptionist — is the transcriptionist here? Yes. Excellent. We’re very grateful to them for taking all our notes, but they do need help, so there is the job of the notes chaperone, the notes editor, to fix up the notes. We will be asking for volunteers; I think perhaps now is a good time to do so. We’ve got a thumbs up immediately from Ashley, long-time contributor to the notes, and also Ben Allen. Thank you very much. Instantly we get two people — that’s a very good start. Chris is happy. And obviously there is the IP notice that is above there.
You’ll see it just so that you know that note taking is happening and all your words are being taken down, and you have the opportunity to correct them afterwards, as we encourage. The next meeting coming up will be in San Diego, hosted by ServiceNow, so looking forward to that. The invites with all the details for that should be going out very shortly; we are just finalizing that now. And I will say thank you to everyone who has contributed to the interest survey. It looks like we’re going to have a very healthy attendance there, so I’m looking forward to it. There is also going to be something extra. I don’t want to give away the surprise, but Samina is working on something that could be exciting for people. So that’s coming up in a little over two months. So finally, let’s get on to the regular stuff. Oh, a clarifying question — here we go. Ashley asks on TCQ — Ashley, would you like to speak? — pointing out that the Google names don’t always have the company. Right, yes. If you’re using a Google account, then it will just show the name you’ve used with that account.

If you use incognito mode, then you get to specify whatever name you would like. I think at this point, Chris, I don’t think we will ask people to rejoin.

CDA: Yeah, no, Google Meet doesn’t have a display name you can just change like you can in Zoom, so we will have to just proceed thusly. But it’s a good reminder to encourage people: please add yourself to the attendee list in the notes doc if you have not already done so. I will paste the link in the Google Meet chat so it’s handy.

And I guess if anyone appears in the participants where we don’t know the affiliation, we may just have to ask.

RPR: Okay. On our housekeeping items, the first thing is: do we have approval of the previous meeting’s minutes? I will take silence to mean that, yes, we do have approval. Okay, approved.
And on to this meeting and the current agenda — we have there the packed agenda. Is everyone happy with that? No complaints? Hopefully we will get through it swiftly and be able to fit in the overflow items.

## Github Delegate Teams

Presenter: Jordan Harband (JHD)

JHD: Theoretically, there’s at least one person for each Ecma member who considers themselves responsible for their company’s or member’s interaction with Ecma. Please review the GitHub team for your company or member, and make sure that everyone on that team is still employed at your company; if anyone you want on that team isn’t there, please file the appropriate issue to add them. Thank you.

RPR: Perfect. Thank you. All right. The other thing I will say about today’s agenda, or this week’s agenda, is that the chairs generally try to reach out to people via Matrix, via direct messages or in the chat, in order to bring items forward and back and so on, so please be on the lookout. That helps us get everything in. And it looks like, actually, we’re on track to fit everything in, because some people have updated their time boxes, so thank you to whoever gave up a little bit of their time. So, yeah, we should be in for a good meeting. Thank you for sitting through this opening. Next up is the Secretary’s report.

## Secretary’s Report

Presenter: Samina Husain (SHN)

- [slides](https://github.com/tc39/agendas/blob/main/2023/tc39-2023-048.pdf)

SHN: Very good. Welcome everybody to the meeting. We are on San Francisco time, so earlier for some people and later for others. You said international orange for the bridge — hmm, maybe Ecma is also international orange; I’m going to have to investigate.

A summary from the Secretariat. As always, a reminder that the summaries and conclusions are great — please do not forget them. They are a huge support when we work on the minutes.
And then the final technical notes — I just want to recognize here ACE, who supported me hugely in ensuring that the final minutes and the technical reports were appropriately finished almost on time (delays due to me). I thank him for his support and will probably need it again, but slowly but surely I am learning. I want to make just a small comment on invited experts. If some of the TG4 members are on the call: I have been sending out a few emails just to confirm your status as invited experts and the expectations for next year. As a friendly reminder, I will review all of the invited experts so that we are better aligned with the Ecma Secretariat. We may have missed some on our list, so I will be reaching out just to make sure that we are aligned on the overall invited experts, and I’ll do that probably in the new year. I also want to point out that regularly, on an annual basis, Ecma confirms the chairs of the TCs, the TG conveners, and the editors. You may do that just verbally in the meeting; we are flexible in that. I know one TC chair was just added this year, and I think some of the conveners were also new this year; it’s absolutely fine if you are all doing the same next year. But perhaps by the end of this meeting — either today, tomorrow, or on Thursday — if you could just verbally confirm to me that your TC chairs will remain as they are, the TG conveners will remain as they are, and the editors that I have noted are the same, then I will make sure that that’s also noted at the General Assembly as we do a full review of all TCs.

SHN: I have a few slides to go through, which cover some of the items I listed in my table of contents. I want to remind everybody that we will have elections at the General Assembly coming up in December — December 5th, to be exact. These are the candidates for the roles as shown on the slide. I believe you know all the names.
These are the current candidates that we have for this year, 2023. They have all chosen to participate for one more year, 2024; all of them have the opportunity to do so. You can, of course, also nominate another candidate, even now, until the very last day, if you wish. If you would like to self-nominate, that’s also fine. This is just to let you know the current slate for our management and Executive Committee. I also want to remind you, from a timeline perspective: as every year, mid-year we have the approval of a new edition of 262. Our April ExeCom is coming up on the 24th and 25th of April, and typically it’s during the ExeCom that the TC chairs bring in the recommendation of the next edition. I want to point out that the dates are very tight. We have a 60-day opt-out period and a 60-day open-for-comments publication. We do them almost in parallel; the dates are time-shifted by ten days. A suggestion you may consider is to freeze the specification in the early part of April to ensure that we meet these Ecma 60-day rules. Typically you’ve always done that; this year, the dates are quite tight for the 60 days. And I would imagine, since the work you’re doing has been moving so well, that you would have the 15th edition and 11th edition up for approval at the upcoming June 2024 meeting.

SHN: An update on the TC39 standards: we had an ISO vote that ended on the 3rd of September. We didn’t know the results at our last meeting, which was in Tokyo, but the results are favorable, so we have been approved as an Ecma suite for the next five years, which is very good for us. That means we don’t have another periodic review until 2028, and the same happened for JSON, so that’s a very good position for us with all of our standards.

SHN: I want to point out some workshops that we’re planning.
We have wanted to meet and bring information to all our members, to tell them about new things that are happening, where there potentially are gaps, and where they may be able to participate. There are two workshops coming up; they’re hybrid. If you would like to join, you absolutely can. You will note there’s an email address that you can use to register — that would be the easiest. The first workshop will be on December 6th. It will be a TC53 workshop; the details are highlighted there. I will ensure that my slides are uploaded so that you may see them, but you may also find this information on the website. If you find it interesting, please register; it will be very good and informative, and we would appreciate your participation. The second workshop will be on data and cloud standardization. We tried to do this earlier this year, but we had to cancel. We are attempting to do it again. We also have some invited speakers from JTC1, from different study groups, who will be participating and providing updates on what has been going on in ISO in the space of cloud standardization. We would then, of course, look for where the gaps are and see how Ecma could potentially play a role there. So this is a general outline of that particular workshop. It will be on December 7th, the following day. Again, registration is very easy — by email; the address shown there is the help desk at Ecma International. I would highly recommend that if you’re interested in either of these workshops, you register today or tomorrow at the latest. If we don’t have enough participants, unfortunately I will have to cancel one of the workshops. If the time zone works, please attend. I was asked if it was going to be recorded — it will be a WebEx session. In the event that we move forward, I will of course confirm with the speakers that they’re okay with the recording, and we hopefully will be able to have a recording.
It will be a three-hour session, so if you wish to participate live, that would be great.

SHN: A short update on some new projects. I had mentioned at the Tokyo meeting that we were already having these discussions. We have a new not-for-profit member, which is OWASP — I think you’re familiar with them. They have already put in their application and royalty-free requirements, and we have already provisionally approved them; we will just have procedural approval during the GA. So we are very happy to have a new member. Our other exciting news, which you may also be aware of and which we’ve talked about, is a new TC for CycloneDX. If you would like to know more details about it, the scope of TC54 is one of the documents uploaded from the GA, available to all members; it’s on my annex list. I will leave it to you to read that, as there isn’t time for a presentation right now. So those are all very good. I’ll also highlight that we have a new member as a result of the CycloneDX project, and that is Lockheed Martin, which is excellent. They’ve already joined. We still have to approve them procedurally, but I think we will move forward without any hiccups. It’s very good to have the new member, and I want to recognize DE here for his huge support; he has been instrumental in making this possible for us. So thank you for that.

SHN: We have also started a new ad hoc group, which is called Ecma Growth. The ad hoc has members who are supporting me and the Secretariat in looking at future activities and new strategies that Ecma needs to think about or adjust itself toward. We are looking at areas of voting and membership governance, and our objective, of course, is to encourage new projects — to be attractive to new projects and new members. This ad hoc has just started — we’ve had some initial meetings — and we anticipate that it will continue into next year, and that we will be able to bring some solid input that we can also share in this meeting.
SHN: I think that may be one of my last slides. I just want to remind you that my usual annex has the statistics, which you may look at at your leisure, and documents that you may want to review — as I mentioned, CycloneDX is there. My next slide touches on our PDF version. We had a discussion at the last meeting, and Kevin Gibbons was going to have a look to see if the recommendations received from Allen were of use, and whether we could move forward with that. KG, if you’re on the call and would like to make a comment, I can pause right now.

KG: Yes, I think it’s enough to work with. The next step is that I need to actually go through the process and confirm I can build a PDF, which will take me a little while, but I think I can. So I will have a trial PDF at some point, and we can see how it looks and confirm it’s good enough.

SHN: Okay, great, thank you very much, KG. If there’s anything you need — if you need to converse with Allen and I need to moderate anything and get you together — just let me know. I’m happy to support you.

KG: Will do.

SHN: Thank you very much for that. I’ll go to the next slide. We are on meeting 99, and meeting 100 is coming up, so I have a bit of a challenge for the entire committee, if you choose to take it. When I was at the last meeting in Tokyo, I saw this wonderful baseball cap that SHU was wearing, and I understand that you were the designer of it. I think it’s great. So I have a challenge for the whole committee — it’s up to you to decide how you’d like to do it. I would be more than happy to have a new set of baseball caps or beanies — or tell me what you’d prefer — for the 100th meeting, to commemorate that milestone. I just request that there is a logo of TC39 and a logo of Ecma on that baseball cap or beanie, or socks. I would like to have TC39 socks too.
If those are two items that you would be happy with and would like, I leave you the challenge of coming back to me with an idea of what the design on the cap could be. If you can give me your ideas by the 18th of December, I will take the initiative and have them prepared and made so they can be ready to share at our 100th meeting coming up in San Diego — along with a few other things, but those I can manage on my own. I would love your support and ideas on the baseball cap or beanie or whatever you like. I’ll wait for your feedback on that. And I think that is my last slide. So are there any questions?

RPR: Nothing on the queue. I think everyone’s just very impressed with the hat and the socks.

SHN: I really would love some feedback on what you would like. So that would be great. Thank you very much.

RPR: All right. Great. And in the Matrix delegates chat, Ron has, like, a display cabinet of past hats of TC39.

## Ecma262 Updates

Presenter: Kevin Gibbons (KG)

- [spec](https://github.com/tc39/ecma262)
- [slides](https://docs.google.com/presentation/d/14oO9ACSx66ChJK-5-dxhuKYmmh5XFaE1uC_4MS6tcEw/)

KG: Editors’ update: we have landed a couple of normative changes. The first one is the resizable array buffers proposal. The second one is a long-standing web reality issue where GetSubstitution, which is what is used for processing the replacement strings in `String.prototype.replace`, was specified to have slightly different semantics than it in fact had, and we have corrected that so the specified behavior matches the behavior in real life. Also, a couple of editorial changes. The first is something we’ve been working towards for a long time: it completes a bit of work where the RegExp semantics now work exactly the same way as the rest of the specification.
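For context, GetSubstitution is the operation behind the `$`-patterns in replacement strings passed to `String.prototype.replace`. A minimal sketch of the observable behavior it specifies (the particular divergence that was corrected is not detailed in the transcript):

```javascript
// GetSubstitution drives the "$"-patterns in replacement strings:
//   $& — the matched substring
//   $` — the text before the match
//   $' — the text after the match
//   $$ — a literal dollar sign
console.log("2023-11-27".replace("11", "[$&]")); // "2023-[11]-27"
console.log("abc".replace("b", "$'"));           // "acc" (after-match text "c" is inserted)
console.log("abc".replace("b", "$$"));           // "a$c"
```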
Previously there was some unbound state and some named tuples, and now the RegExp semantics use Records the same way literally everything else in the spec uses Records. So that’s just a small cleanup, but I believe that was the last place we had any such cleanup, so thank you to jmdyck for the pull request.

KG: And the second thing is something that we’ve talked about for a long time and finally got around to. As you know, the specification defines a number of kinds of exotic object, and usually the names of the exotic objects are basically what you would expect — you know, a proxy exotic object is a Proxy. But there was one exception: "integer indexed exotic object" was the name for TypedArrays. These things were synonymous: a TypedArray was an integer indexed exotic object, and an integer indexed exotic object was a TypedArray. But they had a different name, and that was silly. Now there are just TypedArrays. If you go looking for that phrase and it’s gone, it’s because they’re TypedArrays now.

KG: Okay, a very similar list of upcoming work, except you will see I have struck through this one here, because we have now finally finished the work of making all internal values consistent — we now use Records everywhere. We also have a couple of new items. I particularly want to call out this first one: getting rid of the terms and definitions section. This is a section in the spec that contains, like, 2% of the terms and definitions in the spec. That’s kind of silly. We are going to try to relocate those definitions to where they make sense, just like every other definition. If you feel strongly about that, though, let us know.
Otherwise, the other notable thing here is the last item, Annex F, the documentation of breaking changes. Someone pointed out we haven’t been keeping it up to date, although it is kind of unclear what it means to keep Annex F up to date, because in practice we have found that, for example, adding a new method to `Array.prototype` is much more likely to break code in the real world than most of the things in Annex F. So it’s not totally clear what the criteria for something counting as a breaking change are — of course, we don’t list new methods in Annex F, or at least we haven’t historically. We are going to think about the role Annex F should serve and what should go into it, and YSV volunteered to help with that.

KG: Another thing — this is a reminder that ecmarkup has a bunch of useful features, and it got a couple of new ones recently. The first thing is sort of a meta reminder: if you’re on the spec and press question mark (that is, shift plus slash), you will get a little popup that tells you the keyboard shortcuts available in the specification. Particularly relevant are the pins — pinning things is incredibly useful. You can see on the left-hand side what pins look like, and thanks to Michael, there’s a new feature where you can clear out pins from the little menu by clicking the little X, and the clear button will clear all of them. If you are using a particular section of the spec a lot, even just during a single presentation or something, I highly recommend using them. In general, a lot of the keyboard shortcuts are extremely useful. Last thing — this isn’t from me — if Jack Works is on the call and able to demo?

JWK: Hello. Here is the VSCode plugin called ecmarkup. If you search, you’ll find there are two extensions; the other one is from RBN and has not been updated since 2017. You can install the version I wrote, and it looks like this.
For a .emu file, it will have syntax highlighting for the grammar and also for the algorithms. There are also some useful features like variable completion within the current algorithm, and completion of the productions defined in the main spec. You can also have AO completion, with function signatures and links; if you hover the cursor over one, you can see the signatures. This plugin also provides many useful snippets — if you type "early error", it can complete the whole section of HTML. That’s it. This extension is built on a language server, so it can be extended to other editors. For now there is only a VSCode version, and it’s still at a very early stage — we may have bugs — but it’s already helpful when editing the spec.

KG: Very cool. And that is the end of the editors’ update as well.

RPR: Thank you — yeah, that looks really good, JWK. There are some good compliments in the delegates’ chat about it, and you’ve posted the link there as well, so thank you.

## Ecma402 Updates

Presenter: Ben Allen (BAN)

- [spec](https://github.com/tc39/ecma402)

BAN: Yes. This time we have a fairly quiet update. We do have one normative PR whose scope is such that we want to get feedback from the plenary on whether it’s the right size for a PR or whether it should be broken out into its own proposal, so I’ve opened a timebox on that, which I believe will be on the fourth day. Just to give a quick preview of what’s going on — let me share the correct screen.

BAN: Currently, ISO 4217 is normative for the number of digits after the decimal separator to be used for currencies. But in practice, the definition they use for those numbers of digits diverges from actual day-to-day practice; in some cases it’s the largest number of digits used in a financial or accounting context.
CLDR has its own data that separates out these financial or accounting uses — where you’ll sometimes use more digits than you will in day-to-day life — from what they call cash rounding: the number of digits that matches the smallest denomination actually in circulation. So we have a PR in for this, but it’s sufficiently large that, like I said, we wanted to break it out into its own timebox. Beyond that, I yield the remainder of our time.

RPR: All right, any questions for BAN? I don’t see anything on the queue. No? All right. Thanks, BAN.

## Ecma404 Update

Presenter: Chip Morningstar (CM)

- [spec](https://www.ecma-international.org/publications/standards/Ecma-404.htm)

CM: Everything is awesome. Well, not everything everything, but everything to do with ECMA-404.

RPR: Thank you. So ECMA-404 is doing well.

## Test262 Update

Presenter: Philip Chimento (PFC)

PFC: I can keep this very short as well. From the last meeting of the test262 maintainers, the message we want to deliver is that we are working on clearing the backlog of PRs that need maintainer review. So thanks, everybody, for your patience, and if you’re a proposal champion and you see something in the test262 review queue that is about your proposal, we’d also appreciate your help in reviewing it. That’s all.

RPR: All right. And nothing on the queue. So we shall move on. Chris, are you ready with TG3?

Presenter's Summary: We are working on clearing the backlog of PRs that need maintainer review.

## TG3: Security Updates

Presenter: Chris de Almeida (CDA)

CDA: I do not have slides, but there is not a ton to report from TG3.
The only things we really have: we incorporated the feedback we got from plenary last time on the vulnerability disclosure policy — just a slight language change, which we made. We understood clearly what the ask was, felt that what we did met it, and didn’t need to come back to plenary about it. That has been published in the .github repo for the organization, in the SECURITY.md, which is a community health file that now propagates to all of the repos across the org. The other item is that we enabled GitHub private vulnerability reporting across the TC39 org, so that now applies to all the TC39 repos that exist, and it will apply to any future ones that are created as well. That’s it.

RPR: Thank you, CDA. So we are getting more secure. All right, there’s nothing on the queue, so I shall move on. And you’re up again with code of conduct committee updates.

## CoC Committee Updates

Presenter: Chris de Almeida (CDA)

CDA: Yep, all quiet on the code of conduct front. No new reports to report. There’s really nothing else, but just as a reminder, as always: we are always welcoming folks who would like to join the CoC committee. If that is something anyone is interested in, please reach out to somebody on the committee. Thank you.

RPR: Thank you, CDA. So next up is MF to talk about publishing an FAQs document. MF, are you there?

## Publishing an FAQs document

Presenter: Michael Ficarra (MF)

- [repo](https://github.com/tc39/faq)

MF: So a couple of weeks back, I proposed that we create an FAQs document, somewhat informal, as a solution to the repeated explaining we have to do for some very common questions we get in our various communication channels — on Matrix, on the discourse, et cetera.
I wanted to create something kind of informal, because I don’t want this to have to go through consensus every time and be like an official statement from the committee, but rather just be something that we can use to say: hey, this is how people normally explain this to people who ask this question. So I created these suggestions, and they got put into this repo. So now there’s a TC39 FAQ repo. Right now it’s private; the ask is going to be to make it public and to start using it for its intended purpose. I wanted to point you to the disclaimer here at the top — I’ve hopefully measured our wording enough that people should be okay with it. It says: this document contains typical responses to questions that are commonly raised about JavaScript language development, both within the community and to the committee, on the various discussion platforms. The information in this document is curated by individual TC39 delegates with minimal review; it may reflect biases, is not authoritative, and is not endorsed by TC39. Regardless, it may be helpful for those with questions, or at least a handy reference for delegates or anyone else fielding these questions. The process that I’m proposing here is that any member of the public can suggest a new topic that they see frequently asked, or that maybe they have a question about, by opening an issue; and any delegate can add new topics or make the changes they want with a pull request and a single approving review. Yeah, so that’s what I’m asking for. Hopefully we can be pretty chill about it and not have to fight over it. I want to keep things, like, just completely uncontroversial. I have a question in here that mentions PTCs, and hopefully it makes only, like, factual statements about them and is completely uncontroversial, and nobody should have a problem with it. I wanted to see if anybody has feedback, or if we can make this public and start using it.
+ +NRO: I think it’s great that there’s a process here, but we still need to establish some process on how to handle disagreement between different delegates. I have a specific example here. There is a question about annotations that says JavaScript will never have them and that parsing them takes valuable time, and I found that misleading given that we’re working on the proposal. And I opened a pull request to reword this answer. Then two other delegates disagreed with each other on how to word my reworded answer. So, like, do you have any idea on how to handle these cases? + +MF: I would say if there’s disagreements like that, we should just omit it. We can have a useful document that is some subset of all topics, right? The uncontroversial subset, and if controversy happens, the easy way out of it is to not include it in the meantime until we can have delegates work it out. I didn’t see your PR, but I’ll be happy to address it. These were all written just by me, or I guess by me and KG. So happy to take any feedback about what is on there so far. I don’t expect these to be the totality of questions. There are plenty more that we’ll add. Does that cover your -- + +NRO: Yes, just maybe you could document somewhere that when [INAUDIBLE] given a way to handle this question and then add it -- get the delegates to, like, find some compromise with the people discussing it, and add it back once, like, you know that -- + +MF: Yes. Your audio is a little bit choppy, but I think I understood: you’re saying that in the event of conflict, don’t put it on the document, work it out first before it gets put on the document? + +NRO: Yes. + +MF: Yes. That’s the process I’m hoping for. Of course, if this all goes to hell, you know, we can take it down. But I think it’s worth at least trying for now and trying to be civil with each other. + +DE: +1. I agree with what NRO said about working things out, and excluding things from the FAQ when there’s disagreement.
At the same time, documenting these different points of view and different disagreements that exist is also a very useful project. It has a different audience, though. It’s more for us, or for advanced JavaScript users, than, you know, people who are trying to look for a quick answer. And this would fit in really nicely with what YSV has proposed previously about documenting invariants. I would very much encourage people to work on that in parallel, with the FAQ being for non-controversial stuff. + +RPR: That’s all the questions. Oh, and SYG? + +SYG: I have a quick question. So is the expectation that delegates monitor this for possible disagreement that may come up when someone proposes a new FAQ item? + +MF: That’s a good question. I don’t have a solution for that. + +SYG: Like, on the one hand, if there’s no hurry, you could do a quick update during plenary of things that may have disagreement. But it seems also kind of weird to tie this document to, like, a consensus-seeking activity or something. + +MF: I could do a quick summary, like the editors' summaries, of things that have been added, if people want. Or you think -- + +SYG: I would appreciate that. Yeah, I’m just worried -- like, given that this says TC39 FAQ, external folks will read this in an authoritative voice, and if you add something, because we’re not used to monitoring this regularly, people who might disagree could just miss it, and if we then rescind it later, that would be kind of strange as well. + +RPR: Yeah, that will get quoted. + +MF: Yeah, I don’t know how much more clear I can make the disclaimer about it being not authoritative. I could make it bigger and in colors. I don’t know. They can just not read it. + +SYG: I think you have to remove TC39 from the repo for people to not read it authoritatively. + +MF: Which is an option.
The plan was that if we failed to get approval for this to be within the TC39 org, I would put it somewhere where I can point to it and where other people can point to it, and then it’s definitely not seen as authoritative, or at least mostly not seen as authoritative; but then other delegates have no control over it, and it's solely at my discretion, which isn’t the greatest process. + +RPR: Okay, about two minutes. CDA? + +CDA: Yeah, so maybe this is something that was in the original issue -- before this repo was created -- that maybe needs to be added in here. But one of the tenets that was listed in there was that this document should not contain anything that’s perceived to be controversial or anything that’s not generally agreed upon. So the repo right now, I think, has the setting to only require one review per pull request. We can consider increasing that number, but I think it’s important that everyone proposing a change to it, or creating a PR for it, as well as any reviewers, keep that in mind: if there’s any kind of debate as to whether something belongs in there, then the answer is probably no, it doesn’t. So I think that’s a shared responsibility. I would also favor the suggestion that anything new be reported up to plenary, perhaps, but this is also a good repo, I think, for everyone to follow if you’re not already following the entire organization, which most people probably aren’t. + +DE: Yeah, I think I agree with others that there’s no way to get everybody to read the disclaimer, or even, you know, the majority of people, even if we repeated it on every answer. We can have a disclaimer, but just keep that in mind, no matter how we word it. I’m happy with this default of removing things when there’s controversy. I like the idea of requiring multiple reviewers and, extremely briefly, noting new additions before TC39 meetings, without this suddenly being a consensus-seeking thing. + +RPR: And perhaps go to SYG.
I know you’ve got a reply to NRO. + +SYG: Yeah, just a quick suggestion for reviewers: I feel like the chair group, as a group, ought to have a good overview of things that might have disagreements. Perhaps it would be enough to loop in one of the chairs to review, as a check on whether something is or is not controversial. + +NRO: Okay. So while first going through this document, I was reading through the questions and answers, and I only noticed the disclaimer way later. And, like other people mentioned, it’s very likely that will happen with this document. I was wondering how you feel about having a way shorter disclaimer in every single answer saying “this might not represent a shared opinion, please read the disclaimer”. That way, even if I link somebody to a specific question on the page, when they scroll to it there is a way for them to actually notice there’s a disclaimer. + +MF: So DE had already discussed that, saying he thinks that may be insufficient, that people still won’t comprehend the disclaimer. I kind of agree with that, but I don’t oppose it -- I could even put just a link that says “disclaimer” next to every header and people can -- + +KG: If you put in italics a sentence that says “this answer does not represent the views of the committee as a whole” just after every single question, I think that would be read, at least. + +MF: Maybe. But, yeah, I don’t want to take up any more time. I think we’re out of time. So do I have approval to make this public, with the condition that I improve the disclaimer somewhat to add directions for delegates so that nothing controversial gets added, and also to somehow improve linking to the disclaimer from each and every question? + +CDA: I support making it public, with a caveat -- a lot of people might just be finding out today about this.
So I would favor making it public after allowing, I don’t know, a week or so for any feedback on the content that exists right now, in case anybody does take issue with anything that’s there, and then if everything is fine after a week, then I think it's good to go. + +MF: I’m happy to wait a week. + +RPR: All right. There’s not been any objections to that. All right, I think that approach is good to go. Thank you, MF. + +## Requesting collaborators for writing and publishing a paper on the TC39 Process at IEEE + +Presenter: Mikhail Barash (MBH) + +- TODO - get link to slides + +RPR: All right. Onwards to MBH, who is requesting collaborators for writing and publishing a paper on the TC39 process at the IEEE. + +MBH: We plan to submit, and hopefully get published, a paper about how TC39 works and what TC39 is. The venue is IEEE Software, a major venue for practitioners who are interested in a deeper understanding of software processes; I quote this from the journal web page: “. . . manage the production of systems”. So it’s a magazine, not a journal. Still scientific, but the important differences are that the articles are shorter and have a broader appeal. They are more tutorial in nature, and should be written with both experts and nonexperts in mind. They also look different, as you can see from these images here: more visually appealing. And here is an example of an article with 25 coauthors. That is a good thing, because we could have a complete overview of how TC39 works if we have many, many coauthors on this paper. The main goal is that there will be a scientifically rigorous explanation of what TC39 is and how it works. The motivation for this comes from our experience with YSV at SPLASH 2023, a conference on programming languages, where we had a poster about how the committee gets feedback from users. We understood that the community is unaware of how the process works and even what TC39 is.
We also have in mind several papers, a series of papers, about standardization, and this paper would be the foundational one that all the other papers will refer to. It’s important to note that this is a purely descriptive effort, so we are not introducing anything new about TC39. + +MBH: So here is a preliminary outline of the paper, which YSV and I thought about. This is preliminary; nothing is set in stone. There is something about what Ecma is, and TC39 from the Ecma International perspective. Some sections about the TC39 process: we explain what a proposal is in the understanding of TC39, how the plenary meetings go, and what kind of voting schemes we use; consensus is interesting to explain to the scientific community. What kind of roles we have: co-chairs, members, delegates, users, and so on. How to navigate the standard document itself: what grammar format it uses, how the semantics are specified. And some sections about the engines that implement the specification: which experimental engines are important, what the major browser engines are, and the similarities and differences in how they implement the specification. Something about Test262. And then a comparison between the standardization of ECMAScript and that of other web languages. So we would like to invite TC39 delegates, co-chairs, and the Ecma secretariat to collaborate. You can send your ideas, say, in the form of bullet lists. No need to polish anything. The major writing effort will be done by myself, so I can try to formulate a coherent text from that. And everything will be done openly with the committee, with all the participating collaborators. I will have a shared document where everyone will be welcome to proofread. Your ideas can be about any part of the paper, and suggestions about the outline of the paper are welcome. And everyone who collaborates will become a coauthor on the paper. We are looking at something from 12 to 15 pages.
And schedule-wise, we are now here at the 99th meeting. So, say, in a couple of months, we would get input from the TC39 delegates. And then I would like to present something already at the next meeting, the 100th meeting: a similar presentation, which would summarize the input we got and the writing effort so far, and then we see where we go from there. So that’s my presentation. + +RPR: All right. Thanks, lots of celebration images flying by right now. People are happy. + +MBH: Thank you. + +RPR: Are there any questions for MBH on this effort? Sounds like this will be very good for popularizing the committee and the process, and educating more people. So thank you for this. Thank you. + +## Array Grouping for Stage 4 + +Presenter: Jordan Harband (JHD) + +- [proposal](https://github.com/tc39/proposal-array-grouping/) +- https://github.com/tc39/proposal-array-grouping/issues/60 +- no slides + +JHD: All right. Let me pull up my page and I will share a link. I don’t have slides or anything. I will put in Matrix the URL that I am looking at - it’s this issue on the array grouping proposal repo. The tests are merged. It has shipped in Chrome, Firefox, and SerenityOS, and polyfills exist. It’s in the Safari Technology Preview; not in the current version of Safari, but presumably it’s in an upcoming one. And the spec PR is up and editor-approved. So my hope is to ask for Stage 4. + +RPR: DLM says he supports Stage 4 with no need to speak. + +JHD: Thank you. + +SYG: Sounds good. + +RPR: +1 from CDA. MLS: it should be in the next Safari release. And +1 from DE. Okay. So I think this is all positive. Just a final check: any objections? No, there are not. Congratulations, you have Stage 4! + +JHD: Thank you, everybody. And thanks to Justin, who championed the majority of the way. I don’t believe he’s here today. + +RPR: So yeah, we thank Justin as well. Okay, good.
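For readers of the notes: the API that advanced to stage 4 here is `Object.groupBy` (with a `Map.groupBy` counterpart). A minimal illustrative sketch, guarded because engines without the feature lack the method:

```javascript
// Illustrative use of the array grouping API (Object.groupBy), guarded
// because engines predating the feature will not have the method.
const inventory = [
  { name: "asparagus", type: "vegetables" },
  { name: "banana", type: "fruit" },
  { name: "cherry", type: "fruit" },
];

if (typeof Object.groupBy === "function") {
  // Groups array elements by the key returned from the callback.
  const byType = Object.groupBy(inventory, (item) => item.type);
  console.log(byType.fruit.length); // 2
  console.log(byType.vegetables[0].name); // "asparagus"
} else {
  console.log("Object.groupBy not supported in this engine");
}
```

Note that `Object.groupBy` returns a null-prototype object whose keys are the group names.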
+ +### Speaker's Summary of Key Points + +### Conclusion + +- Achieved Stage 4 + +## Promise.withResolvers for Stage 4 + +Presenter: Peter Klecha (PKA) + +- [proposal](https://github.com/tc39/proposal-promise-with-resolvers) +- [slides](https://docs.google.com/presentation/d/1UvJSnt5B6tsXs-5A0zgg01cdFCtk6X2rAok6Z4Wp4Mo) + +PKA: So this is Promise.withResolvers. Shipping in Chrome. Implemented in Firefox. And also in the Safari Technology Preview, as far as I can tell. The tests are merged and the spec is approved by the editors. So I, too, am asking for Stage 4. + +USA: There’s support on the queue. First up we have Daniel Minor. + +DLM: Looks good. Thanks. + +USA: Thanks, Dan. Next we have CDA on the queue with support as well. And that’s pretty much it. Let’s wait for a second. Nothing else on the queue. Congratulations, Peter. You have Stage 4. + +PKA: Thank you. + +### Speaker's Summary of Key Points + +- Shipping in Chrome, Firefox and Safari Technology Preview + +### Conclusion + +- Achieved Stage 4 + +## ShadowRealm Stage 2 update + +Presenter: Leo Balter (LEO) + +- [proposal](https://github.com/tc39/proposal-shadowrealm) +- TODO get link to slides + +LEO: Thanks, everyone. Just as very recent news: I am now the product owner for the team working on ShadowRealm internally at Salesforce. This news is two workdays old, so this is very recent for me. But it means I will be more active here at TC39 with this proposal, as we want to move it ahead. These are updates as we work at Stage 2; this is not a request for Stage 2 advancement, so please don’t expect that to happen. Thank you. We continue the work; we are sponsoring the work to meet the requirements needed to return to Stage 3. And this is the very high-level summary, an overview of the implementation work that was done. We have some small improvements to the ECMAScript proposal spec; most of the work resides within the HTML integration and the WebKit implementation.
Right now, for the V8 implementation, this overview is a few weeks old. ShadowRealm is complete as far as V8 goes; of course, there are small changes here and there being discussed. We are waiting right now on the Blink integration and on how the APIs should be included in the global for the ShadowRealm; the `Exposed` annotation is what is being used right now. And Blink has made some progress in completing the snapshot for ShadowRealm. This is a big chunk of our work: HTML integration. There was a huge PR rebase; thanks to Igalia, we got that done, and we are keeping it renewed as the changes keep coming to HTML. There is also work to align with the new modules integration. There is an approach discussed with Nicolò Ribaudo and Daniel Ehrenberg being applied to what we have with the HTML integration of ShadowRealm. We have some preliminary editorial changes to the HTML specification. We have been going to SES discussions for the host integration changes for the CSP unsafe-eval protection. And there is a separate discussion, at this plenary, for the HostEnsureCanCompileStrings change. As far as I understand, there’s been a pre-discussion with the editors, and there’s a separate discussion at plenary. It’s a longer one and I am not going into details right now. Yes, please join in if interested in ShadowRealms. + +LEO: For the WebKit implementation, we have updates on the -- yeah, we still have updates on the web platform tests, where we want to continue with quality work: testing the current branch against WPT and Test262, and continuing to look at coverage review. There have been a few requests that needed updates. It’s about keeping everything renewed and good to land when ready. And moving forward, we have this work being done for the global object. We have other work that is going to be discussed here at this plenary, and we have work that we intend to do for the WebKit implementation.
And, of course, everything else is about keeping everything green for when it’s ready, so maintenance for the proposal is also implicit. That is all the updates I planned for today; it was intentionally short and sweet, hopefully for everyone. If there are any comments or questions from anyone, I am here to try to answer them. + +USA: There’s a comment by Ashley, who says: please update the agenda with links to slides. And that’s not just for this item; previous items were missing that too. That’s just general. + +LEO: Of course I will. As I mentioned, it’s only been a couple of days since I came back to the team, so this has been a little bit last minute, and I was only able to do this thanks to the continuous updates from Igalia. This is a compiled list of the updates that we had from Igalia. And I will do it today. + +SYG: Hi, I want to point out that the areas of focus this update shows seem to not be aligned with the original concerns, which were around which APIs to include and exclude, and, of all the APIs on the web, how we get confidence that they do not need special normative changes to be handled in ShadowRealms, given that’s what Mozilla found. Do you have updates on that area? + +LEO: Shu, the answer we have for that is part of the HTML integration. Once the HTML integration is resolved, I understand the API list will be clearer. So that is the very next work; the global object is the next step after that. The APIs are also part of the resolution that we want for HTML integration; when I say that, it includes fulfilling this. I am aware we cannot go to Stage 3 without resolving it. Thanks for the reminder, and sorry about the brevity of my updates not including that and making that clear to you. But yes, you are totally right. + +DE: +1 to SYG’s comment.
I did want to note that one of the concerns raised by Daniel Minor was the lack of tests for the integration beyond the generated WebIDL tests, and I am happy to see that Rick Waldron has begun work on the tests. I am looking forward to more progress there. I am also waiting to hear the results of which APIs will be included. For example, should an API that involves IO, but not access to the DOM, be included? That’s a key question. + +LEO: This is part of it. Right now we are using the WebKit implementation for assertions and making sure the tests provide good coverage. We work together, so when APIs are defined, we will need more updates on the WPT part. And once again, with the resolution of HTML integration, we need to get these assertions concluded as well. + +### Speaker's Summary of Key Points + +Active work is again taking place on the Stage 2 ShadowRealms proposal, supported by Salesforce’s sponsorship of Igalia. This includes: + +- Development of the WebKit implementation +- HTML spec editing, including a PR rebase +- Discussions around CSP and unsafe-eval, in SES meetings + +### Conclusion + +ShadowRealms remain at Stage 2. Multiple concerns were raised in discussion, and these will also be addressed as part of ongoing work towards Stage 3: + +- Revisiting and listing the APIs supported in ShadowRealms +- Web platform test coverage + +## RegExp Modifiers Stage 3 update + +Presenter: Ron Buckton (RBN) + +- [proposal](https://github.com/tc39/proposal-regexp-modifiers) +- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkpR3y23lo5uqnkyQVA?e=UIpIZP) + +RBN: All right. RegExp modifiers was discussed several sessions ago, when there was a review of Stage 3 proposals and Stage 2 proposals that had not been updated. I wanted to circle back. There have been a few updates to the proposal since that time.
+ +RBN: Before I get into that, just a brief summary of what this proposal is, for those who haven’t looked at it in a while: the idea is to be able to introduce, within a regular expression, new modifiers, which are a subset of the RegExp flags available, and to remove existing modifiers, so that you can set specific flags for a specific portion of a regular expression pattern. The flags that are supported are currently limited to only the IgnoreCase, Multiline, and DotAll modifiers. So, updates in the proposal repo and the specification text: it’s been updated to match the latest version of ECMA262, including the changes that Kevin mentioned and the recent changes related to the addition of the RegExp “v” flag. A new Record now captures what was previously ambiguously closed over: the input and what flags were currently set. So now we are using the new RegularExpressionRecord in place of the Modifiers Record, which served the same purpose. In addition, because this RegularExpressionRecord has been threaded through the AOs for regular expression semantics, I can now use those to dramatically reduce the size of the spec text necessary for this feature. There is a pull request to ECMA262 for the proposal and a pull request for Test262. I would like to note that, prior to 5 minutes ago, this link was incorrect and was pointing to the wrong test PR; the slides have been updated. There is currently an implementation pull request for engine262, but I am seeking additional implementations. If there are any engines that are currently looking at the possibility of adding this, please reach out to me after the session, either via GitHub or Matrix, to discuss which issues have been filed to track this proposal so I can follow any progress updates. And that’s all I really have to share. Thank you. + +RPR: Okay. Thanks, Ron. There’s no queue at the moment. I know we didn’t have any debate, but could you give a couple of sentences of conclusion, or the key points, for the notes?
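For readers of the notes, the modifier syntax RBN described can be sketched as follows. This is a hedged example: engines without the stage 3 feature reject the pattern with a SyntaxError, so it is guarded.

```javascript
// Sketch of RegExp modifiers: (?i: ... ) enables IgnoreCase for just that
// group; (?-i: ... ) would disable it. Guarded, because engines that do not
// implement the proposal throw a SyntaxError when compiling the pattern.
let re = null;
try {
  re = new RegExp("^(?i:abc)def$"); // "abc" case-insensitive, "def" case-sensitive
} catch (e) {
  // SyntaxError: this engine does not implement RegExp modifiers yet
}
if (re) {
  console.log(re.test("ABCdef")); // true: the modifier applies inside the group
  console.log(re.test("abcDEF")); // false: outside the group stays case-sensitive
}
```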
+ +### Speaker's Summary of Key Points + +The proposal spec text was modified to account for the new RegExp “v” flag feature. As a result, the proposal spec text itself is now much simpler, without requiring any semantic changes. The champion is seeking implementers to provide feedback on potential implementations. + +### Conclusion + +RegExp Modifiers remains at Stage 3, as no advancement was sought at this time. + +## Provide source text to HostEnsureCanCompileStrings PR #3222 + +Presenter: Nicolò Ribaudo (NRO), Philip Chimento (PFC) + +- [PR](https://github.com/tc39/ecma262/pull/3222) +- [slides](https://docs.google.com/presentation/d/1MRItYS_b1hwKstlqlfoD8mgbecS2OkTSiPFVWHs3Y_8/edit?usp=sharing) + +NRO: We are here today to propose some changes to how ECMA262 exposes information to embedding specs, so that they can enforce security policies. Before diving into the changes we propose, let me recap how CSP works. It’s the mechanism used in browsers to decide which resources can be used: which code can be evaluated, and where images, styles, or other types of web resources can be loaded from. When it comes to JavaScript, there are multiple ways of running code, and multiple ways to configure your content security policy to block different things. In browsers we have external scripts, where the browser loads a file and executes it. Workers are the same, but execute in a different context. Then there is inline code: script text and inline event handlers. And there are eval and eval-like functions: the eval function, the various Function constructors, and things like setTimeout provided by the web platform. For each one of those, we can restrict scripts in different ways. When it comes to external script tags, there is a directive called script-src that can be used to disable all scripting, or to only allow loading from the same domain, or from specific domains or protocols. And the same applies for workers, just with a different directive, worker-src.
You might want to have different restrictions for different contexts, given that workers are more isolated. For inline scripts, there is no URL, because the code is right there, and we have three main ways of controlling what can be executed: using the unsafe-inline value; using a nonce, a specific value marking the code so that it can be executed just once, so nobody else can reuse it to try to evaluate other code; and finally, and this is the most important one for this presentation, a way to pass the browser some hashes. When we try to execute an inline script, the browser will hash the code, check if it matches a hash provided in the content security policy, and only allow executing it if it’s the expected code. + +NRO: It works similarly for inline event handlers, except that by default hashes do not apply to them, unless we explicitly enable that with the unsafe-hashes policy. If there is an unsafe-hashes policy, then the host, the browser, will also check the hashes for inline event handlers. Then we have the various evaluators, like eval. For those there is just a Boolean switch: either it’s enabled, in which case eval and eval-like functions are allowed, or they are disallowed. There is no finer way to control this. We are hoping to be able to change that in the future. Philip, do you want to continue from here? + +PFC: I want to talk about what things we want to enable by making this change that’s in the normative PR. The first one is that sometimes eval is okay. This slide shows a reason why you might want to allow a use of eval with a particular string known at compile time, and not block it indiscriminately: feature detection. Knowing whether a certain syntax feature throws a SyntaxError when you evaluate it in your environment, and then using that feature detection to import code that uses the syntax feature, or code that is transpiled or otherwise written to not use it.
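The feature-detection pattern PFC describes can be sketched as follows. The helper name `supportsSyntax` is a hypothetical illustration, not from the slides:

```javascript
// Sketch of feature detection via eval: probe whether a piece of syntax
// parses in this engine. Under CSP this call would need 'unsafe-eval'
// (or, with the proposed change, an allowed hash for the probed string).
function supportsSyntax(src) {
  try {
    eval(src);
    return true;
  } catch (e) {
    if (e instanceof SyntaxError) return false; // the syntax is not supported
    return true; // it parsed fine but threw at runtime: syntax is supported
  }
}

console.log(supportsSyntax("class A { #x; }")); // true in engines with private fields
console.log(supportsSyntax("] not js ["));      // false: cannot be parsed
```

An application could then choose between importing modern source or a transpiled build based on the result.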
+ +PFC: The other one, which is in particular my motivation for introducing this, is that the only way to synchronously execute code inside a ShadowRealm from the outside of it uses the mechanism of eval. You can asynchronously import a file, which would be subject to different CSP policies, but using the `ShadowRealm.prototype.evaluate` method goes through the HostEnsureCanCompileStrings hook and is subject to the `unsafe-eval` CSP directive. So you can’t indiscriminately block this without also affecting code inside ShadowRealms. + +PFC: One idea that exists on the CSP side is to introduce a new value for the `script-src` directive, called `unsafe-hashes-eval`, where you can specify precomputed hashes for code that you want to allow executing eval on. So this would give you the ability to evaluate strings known at compile time, while still disallowing anything else that was unexpected or computed at runtime. + +PFC: The way this works on the ECMA262 side is, we have a host hook called HostEnsureCanCompileStrings, which allows the host to block evaluation of strings containing JavaScript code. Currently, this host hook takes a reference to the realm in which the compilation is requested. It doesn’t receive the actual string. Another thing to note: if you pass something that is not a string to eval, it is returned immediately, before the host hook is called. The host hook is never called in that case. The change being proposed is that we would pass the string argument to HostEnsureCanCompileStrings, which would allow the host to block or not block based on the contents of the string. Should we let the host distinguish between direct and indirect eval? There's a valid use case for direct eval when you want to do the syntax feature testing example from a couple of slides ago, but there’s also a case for indirect eval because of the scoping rules.
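The scoping difference between direct and indirect eval that PFC alludes to can be shown in a few lines (an illustrative sketch, not from the slides):

```javascript
// Direct eval sees the enclosing lexical scope; indirect eval (any call
// where `eval` is not invoked by that exact name, e.g. (0, eval)(...))
// evaluates in the global scope instead.
function demo() {
  const secret = 42;
  const direct = eval("secret");                // direct: resolves the local binding
  const indirect = (0, eval)("typeof secret");  // indirect: global scope, no `secret`
  return [direct, indirect];
}

const [direct, indirect] = demo();
console.log(direct);   // 42
console.log(indirect); // "undefined" (the string)
```

This is why a host might plausibly want to treat the two forms differently when deciding what to allow.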
+ +PFC: The other time when code is executed and consults this host hook is the Function constructor and its friends AsyncFunction, etc. So these might also be blocked by HostEnsureCanCompileStrings. However, these constructors take parameters as well as the body of the function. We stringify the parameters and join them into a function declaration, and then evaluate that. So this is a different situation from the simple case of having eval and passing the string argument to the host hook. What could we pass to the host hook here? Should we pass the dynamically synthesized function code, or should we pass the stringified arguments, or should we only pass the body if there are no parameters? There are cases to be made for all of these. The question: should we let hosts distinguish between the Function constructor and eval? + +JHD: I realize this doesn’t help ShadowRealms. When I have done feature tests, I have never used eval, but I have used `Function`. But it seems to me like the safety concerns around eval are mostly around direct eval, if I am using the term correctly, and indirect eval is only a problem in that it looks like direct eval; `Function` isn’t a problem, but it gets lumped in with eval. In general, I think it would be great if sites that blocked eval didn’t block `new Function`. And regardless, I like the idea of a site being able to say “these strings are safely `eval`-able”, so those can be allowed. Unless you allow that for eval, or treat ShadowRealm differently from direct eval, I don’t know how this fixes it; but these all sound good to me, and except for ShadowRealms I don’t need direct eval at all, or indirect eval: you can use `Function` for everything. But yeah. + +PFC: That’s good input. + +RPR: Go ahead. + +NRO: We will finish going through the slides and discuss this then. There are a couple more slides. + +PFC: Shall we move on to the next slide: Trusted Types.
Trusted Types is a W3C proposal that is pretty closely related to this. It wouldn’t be part of ECMAScript, but it’s a proposal for objects that allow you to mark a string as trusted. You can create a policy object with a method, and the policy acts like an object capability that you can pass to code, allowing it to pass in some HTML string or some JavaScript string; the policy does whatever it needs to do to ensure that the string is safe, and then you have an object that encapsulates the sanitized HTML or JavaScript or whatever. The code sample here on this slide shows what you would do for JavaScript code. And then the idea is that you pass this TrustedScript object, which contains the sanitized JavaScript code, to `eval`. I initially thought that this normative change would also enable this; in fact, the pull request it was based on, which Mike Samuel worked on a few years ago, had the goal of enabling Trusted Types. But that’s actually not the case, because currently with `eval` we just return the value directly if it’s not a string. So it would require another normative change to enable Trusted Types in this host hook, and that would actually be a more breaking change, if I can characterize it like that. We’re not proposing that right now, but I want it to be on the radar that further changes may be required if we want to support Trusted Types in this host hook. Should we move to the discussion? + +NRO: I have an answer to what JHD said earlier. JHD mentioned that in general `new Function` is better because it doesn’t have the special behavior that eval has. While thinking about this spec change with PFC, something we realized, as mentioned here, is that `new Function()` has this problem of being dynamically constructed: the code that is executed is not statically available . . .
and the way it’s done with hashing is that you have some tool hash the code for you at build time, code that you might have marked in some way. And if we just have `eval()`, the code is there: you can see the code being passed to `eval()` in many cases. At least in the examples we gave before, it’s easy to get the code and hash it. While for `new Function()`, it is more complex, because the final code that gets executed is not there yet. We are missing a slide for this. But what we are trying to get consensus on with this discussion is, specifically, changes to HostEnsureCanCompileStrings. The decision is how much more information to explicitly provide to the host. Right now, it’s nothing. The questions are: can we expose the source code to the host hook? And if yes, should we let the host distinguish between `eval()` and `new Function()`, or direct eval and indirect eval? And should we expose the source code of `new Function()` to the host at all, given that it’s dynamically constructed?
+ +NRO: I see a clarifying question from SYG: “is the unsafe-eval-hashes thing already in CSP, or being proposed in parallel?” That’s being proposed and worked on in parallel to the changes here; these changes would unlock unsafe hashes. It’s currently just impossible to specify them, because the host hook doesn’t expose enough.
+ +SYG: All things being equal, I guess my preference would be that you have a host hook that is expressive enough that — there was another slide for Trusted Types — you could make an orthogonal change to the same host hook. Because as I understand it from this presentation, the current proposal is to pass the string contents, which would not give the host hook the Trusted Types expressivity of needing to check, like, the bit on an object. Is that correct?
+ +NRO: Yes. That is correct.
+ +SYG: Could you go into a bit about why your preference is the current shape of the proposal instead of just passing whatever value was given? Which for a string would be the string itself.
+ +NRO: Yes. 
Because right now, if you pass an object to `eval`, it is returned immediately, before calling the host hook. With Trusted Types, you would call the host hook first, to check if it is allowed to execute some code, and then return. So it changes the order of operations. That is why we are starting with this minimum version that doesn’t make any changes to the current semantics.
+ +PFC: Unless we wanted to somehow specify Trusted Types in ECMA-262 as well — if only the host hook knows about the TrustedScript object, then the host hook needs to give you back the string of code that `eval()` would need to evaluate. Otherwise, on the ECMA-262 side, we couldn’t tell the difference between some object with a `toString()` that returns a fragment of code and the TrustedScript object, unless we know the internal slots of the object.
+ +SYG: Okay. Thanks.
+ +NRO: So let me phrase exactly what we are asking for. There are two points: one is the `new Function()` case, where we don’t know if we want to make the change, or if the change makes sense. And for the `eval()` case, it’s how much info is exposed to the host. If there are other questions, I can try to phrase a specific question for consensus to see if we agree on that.
+ +DE: +1 to SYG’s point. If we pass the original argument of `eval`, before coercion to a string, that could be more powerful in the context of Trusted Types. If we added `Array.isTemplateObject`, then you could make a branded object that wraps a string passed to `eval` [NB: DE meant to say, the Function constructor], indicating it’s a literal string. It’s an example of what SYG was talking about with Trusted Types integration. I think it’s very good to bring up this topic and to add more information for the hosts. But honestly, I don’t think that this committee should be caring too much about limiting the information of what is passed to hosts. 
As long as the information makes sense on JavaScript’s side, and as long as there’s interest from the hosts in potentially using it, that should be enough to justify this.
+ +NRO: One problem with passing the object as-is, as you mentioned for Trusted Types, is that it doesn’t work for extending hash-based eval, which operates on the actual string: the host would need to stringify the object, and that would require a further change.
+ +DE: What if you pass both the object and the string?
+ +NRO: That would work. For now, we can start with passing the string, and if someone picks up Trusted Types, then the object can also be passed.
+ +DE: Well, I hope we can have this communication about Trusted Types so it can move forward, because this isn’t asking for very much. The design space is small. Let’s try to go through it.
+ +RPR: That’s the end of the queue. And Nicolò?
+ +NRO: Let’s separate the `eval()` and `new Function()` parts. For `eval()`, I want to ask if we have consensus on passing the string to the host, and alongside that, also passing some enum to distinguish between direct and indirect eval, given the concerns with direct eval. This is just passing the string, to avoid changing the order in which operations currently happen in eval. In the future, if we want to potentially change the order, we can also pass the object. And this is just for `eval()`, not `new Function()`, for now. So is there any concern with passing this info to the host?
+ +SYG: Responding to the enum: is there a use case from the CSP side or the Trusted Types side for distinguishing direct eval? Does anything use that?
+ +NRO: There have been some ideas about potentially treating direct eval differently, because it’s much easier to validate security properties of indirect eval than of direct eval. But it’s not a concrete proposal yet. 
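For reference, the scoping difference between direct and indirect eval that makes indirect eval easier to reason about can be sketched as follows (standard ECMAScript semantics, purely illustrative):

```javascript
// Direct eval can read the caller's local scope; indirect eval runs in
// the global scope, which is why its security properties are easier to
// validate.
function demo() {
  const secret = 42;
  const direct = eval("secret"); // direct eval: sees the local `secret`
  let indirect;
  try {
    // Calling eval through any reference other than the plain name
    // `eval` (here via the comma operator) makes the call indirect.
    indirect = (0, eval)("secret");
  } catch (e) {
    indirect = e.constructor.name; // the global scope has no `secret`
  }
  return { direct, indirect };
}
```

Here `demo().direct` is `42`, while the indirect call fails with a `ReferenceError`.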
It’s mostly ongoing discussion, and we would like to avoid having to come back here again asking for this change, given that it’s very minimal and doesn’t affect how things currently work.
+ +SYG: I will voice my concern, which may be more on the CSP side. The concern with surfacing the enum is that CSP is something of a failure in terms of crafting understandable policies; crafting the policies is the perennial challenge, and the more bits you expose, the more knobs, the harder it is to craft. Exposing `Function` versus indirect versus direct eval is a knob that may make sense in some cases, and it doesn’t harm us to expose that to the host, but I think that is more an editorial concern. I want a conservative default if the host does not do anything, rather than opening up the design space with this extra knob in a way that invites misconfigured CSP even more than now. As long as it’s surfaced in some way, I am happy. I think the actual semantic difference is the scoping difference. What is the difference between `new Function()` and `eval()`?
+ +NRO: It has access to the local scope.
+ +SYG: Are you asking for a two-way or three-way enum—
+ +NRO: Just for `eval()`. Not for `new Function()`.
+ +SYG: I see.
+ +NRO: Then I’ll move to the next question and ask for something there. The main reason is that we didn’t have an answer for what to do with `new Function()`, but we do have consensus for `eval()`. So this would be a two-way enum for now. Yes, if this is exposed, we will make sure to relay your comment to the CSP folks.
+ +RPR: All right. We are coming up on time. A minute or two. If you wish to summarize.
+ +PFC: Shall we ask for consensus on the `new Function()` stuff? We could propose passing only the body, and in the case where there are params, at this time, revert to the previous behaviour, because the params may contain executable code as well; so we avoid making the host dynamically construct its own function. 
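To illustrate why the parameters may contain executable code, note that a default-parameter expression in the parameter list runs when the function is called (illustrative sketch; `sideEffect` is a made-up name):

```javascript
// Hashing only the body string would miss code hidden in the
// parameter list: a default-parameter expression is executable.
const f = new Function(
  "a = (globalThis.sideEffect = 'ran')", // parameter list with a default
  "return a;"                            // body
);
const result = f(); // calling with no argument evaluates the default
```

After `f()` runs, both `result` and `globalThis.sideEffect` are `'ran'`.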
Maybe that’s a good consensus for now? I don’t know. What do you think, Nicolò?
+ +NRO: Okay. Let me rephrase this, and to be clear: neither I nor Philip feels strongly. The idea is to pass the string of the body, and only in the case where `new Function()` receives just one argument. And if we have consensus for that, the remaining question is whether we should add a third value to the enum distinguishing `new Function()`, or whether it should share the same value as indirect eval.
+ +RPR: I would say there’s a lot of questions here. We have about 30 seconds left.
+ +NRO: Okay. Do we have consensus on passing the stringified body to the host, if there are no parameters?
+ +KG: Sounds good to me.
+ +NRO: Okay.
+ +NRO: And I guess there’s a question specifically for SYG. Consensus on distinguishing `new Function()` from indirect eval with that enum?
+ +SYG: I don’t understand why we would distinguish `new Function()` from indirect eval, yeah. Semantically, don’t they have the same properties?
+ +NRO: One difference is that the parsing is slightly different; one returns a function, the other one does not. Let’s just not distinguish them.
+ +PFC: Yeah. The host could distinguish anyway, because we have to pass the parsing goal in the case of `new Function()` –
+ +NRO: No, we don’t – wait. We don’t need to pass the parsing goal. Like, passing the parsing goal is equivalent to distinguishing them with an enum. That’s an editorial question, how to do that specifically. So the question, I guess, is whether we want to let the host distinguish between `new Function()` and indirect eval or not.
+ +SYG: I am going to say no. I am going to say we should only distinguish direct versus indirect eval — the actual scoping difference — until such a use case presents itself.
+ +### Speaker's Summary of Key Points
+ +In HostEnsureCanCompileStrings, we are going to pass the string to be evaluated to the host when using `eval()`. 
When using `new Function()`, we will pass the function body, if and only if there are no parameters. We will also pass some information to the host, specifically a two-valued enum distinguishing between direct eval and indirect eval.
+ +## Base64 Uint8Arrays discussion
+ +Presenter: Kevin Gibbons (KG)
+ +- [proposal](https://github.com/tc39/proposal-arraybuffer-base64/)
+- [slides](https://docs.google.com/presentation/d/1kq4AyZquZAObuG4Z4099FZo7emYUi7JnR07SZ4sue6k/edit?usp=sharing)
+ +KG: So I put this on the agenda originally as potentially going for Stage 3. Please ignore that. I am definitely not going for Stage 3. This is just an update and a request for feedback on some things. So, recap: the proposal is on GitHub. There's a playground, there’s spec text, but some of that is in flux, so don’t expect too much from it. Fundamentally, the thesis is that I think it’s worth having a built-in mechanism for converting to and from Base64, and that's the purpose of this proposal. There's a basic API, which hasn't changed very much, although none of this has changed recently; some of it has changed since the first iteration of the proposal. Basically, a prototype method for turning a Uint8Array into a Base64 string or a hex string, and conversely static methods on Uint8Array for taking a Base64 or hex string and giving you a Uint8Array instance. There are a lot more details which we'll get into. I do want to explicitly call attention to the fact that in the current iteration of this proposal, this is only on Uint8Array, so this is the first time you would have something that is specific to one typed array and not present on all of them. We could instead put it on ArrayBuffer, for example, although that's annoying because Uint8Arrays are views, and so putting it on Uint8Array means that you are able to encode only a portion of the buffer. In the first version of this proposal it was on ArrayBuffer, but I moved it to Uint8Array by request. 
An alternative is to have a new global, like a Base64 global, that would have toBase64 and fromBase64 and so on. If someone feels strongly that this would be a better path, please get on the queue. Right now I personally prefer the placement on Uint8Array. And we can discuss things either interrupting or at the end, whenever people want to talk about stuff.
+ +KG: Also, there’s an options bag argument — the second argument to fromBase64, the first argument to toBase64 — that allows you to specify an alphabet. Currently the only two alphabets are “base64” and “base64url”; the default is “base64”. There has been a request for the ability to write to an existing buffer. We may or may not want to leave that for a separate proposal or a separate API, but it’s within scope for this proposal, something to think about. I’ll come back to that topic later.
+ +KG: Among the ways that Base64 encoders and decoders across languages differ is in their handling of characters that are outside the set of legal characters in the alphabet, and of whitespace as well; whitespace is handled differently. Some APIs unconditionally reject everything outside of the alphabet. Some allow you to accept whitespace, usually by default. Some allow you to accept any character outside of the alphabet and just ignore it, the same way they ignore whitespace. My current plan is to make it an error to encounter anything that is not in the alphabet and not whitespace, with the default being to accept whitespace, but to have an option to allow you to reject whitespace as well. That’s consistent with some, but not all, other languages. I personally have never encountered a use case for accepting non-alphabet characters and just ignoring them. That seems like behavior that I would only adopt with a very good reason, and I don’t have one. I don’t consider matching PHP and Python and Node a good enough reason for this without a particular reason to do it. 
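The plan KG describes matches what `atob` already does today under the WHATWG forgiving-base64 algorithm (illustrative sketch):

```javascript
// atob strips ASCII whitespace before decoding, but rejects any other
// character outside the standard base64 alphabet.
const ok = atob("aGVs bG8="); // whitespace ignored; decodes to "hello"

let rejected = false;
try {
  atob("aGVs_bG8="); // "_" is base64url, not base64, so this throws
} catch (e) {
  rejected = true;
}
```

This is the "accept whitespace, reject everything else" behavior, without the "ignore arbitrary characters" mode found in some other languages.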
But, again, if you disagree, please get on the queue.
+ +KG: A somewhat larger topic: how do we handle inputs whose length is not a multiple of four? As you may or may not be aware, standard Base64 includes padding to ensure that the length of the output is always a multiple of 4 — I should say padding characters, `=`s. Some implementations allow you to omit those equal signs. For example, `atob` on the web allows you to omit them; atob in particular says you can omit the equal signs, but the input still has to be something that would be valid if you had appended equal signs. Because padding, when present, is precisely one or two equal signs, this means that if the length is congruent to one mod four, then atob will throw. It will still consider that invalid, not just unpadded, because there is no amount of padding that would make it legal.
+ +KG: So those are two possible behaviors: to just throw if it’s not padded, or to throw only if no amount of padding could make it valid. But another possible behavior is to stop and give you the extra characters somehow. This is common in some parsing paradigms — recoverable parsing: parse as much as you can, and then allow the user to decide how to handle the exceptional case. This is only relevant to decoding Base64 strings, because when you are generating strings you can just ensure that the output is correctly padded. So it’s only relevant to, specifically, the Base64 decoding API. And, okay, if we are going to allow the third behavior, we need some way of getting the trailing input to the user. There are a few possibilities, and I’ll run through some of them and give my preference. One possibility is to say that if you request this kind of handling for extra characters in the input, then those extra characters get attached as an additional data property on the returned Uint8Array instance. 
So this avoids an allocation in the common case where you are not trying to do anything special with the handling: you’re just throwing, or assuming it’s well padded; you don’t have to know about these options; you always get a Uint8Array, and maybe there’s an empty string attached, and that doesn’t affect your life at all. Another possibility is to always return a pair. Probably we wouldn't want this to be based on the presence of a flag, so it might make more sense to have a separate API that gives you this pair — the standard design question of a Boolean flag versus a separate API. But this allows the base API, you might call it, not to have to worry about this option. And you also avoid sticking data properties onto a Uint8Array, which some people find distasteful. Another possibility is to always give you this pair, but only populate the property in the pair if the flag is specified, so the API shape would be consistent; it would just be slightly more annoying in the case that you don't care about the extra characters.
+ +KG: And then finally, if you recall, I talked about the possibility of having a bring-your-own-buffer method, a method that allows you to write into an existing buffer. The web platform has such an API with TextEncoder's encodeInto. The way the web platform handles that is that it returns a pair ‘read’ and ‘written’, an object with two numeric properties that tells you how many bytes it was able to consume from the input and how many bytes it was able to write into the output. The input might be too large or too small to exactly fill the target buffer. And if we were going to have such an API anyway, then the `read` property gives users the ability to figure out what the extra characters were themselves, because it tells you exactly where in the string it stopped reading. 
If we have this option that says “stop reading before the final chunk if the final chunk is not a multiple of four characters”, then by specifying this option your `read` would potentially be slightly smaller, and you would be able to recover the extra characters by looking at that property.
+ +KG: A nice consequence of this is that it would allow you to build a performant streaming decoder in userland. The error-recovery information that I’m suggesting in the previous slides is sufficient to give you streaming decoding in userland, and by performant, what I mean specifically is that it doesn’t require an iteration over the entire string in userland. You can just slice at the relevant point or decode the relevant characters, and get that information out of the API.
+ +KG: That was a few topics. We should go to the queue.
+ +WH: I support the decision not to allow and ignore other characters, since this could go badly wrong if somebody has, for example, a Base64 string interspersed with comments. If you ignore the character which starts a comment and start parsing the comment as Base64, you can cause all kinds of mayhem.
+ +KG: Also, it’s awkward if you specify the wrong alphabet: if you say it’s a base64 string, but the string is in fact base64url, there’s a similar problem there. Okay, so noted. I will continue with my plan to reject them unless someone speaks up to the contrary.
+ +LCA: I like the direction. I like the idea that we would have sort of one encoding API, one decoding API, and then additionally the decode-into API. I think this solves the streaming case; this solves, as you mentioned, the return-extra-bytes or the APX (?) use case; and people that already know how to do sort of streaming without a streaming primitive, like web streams, would already know how to use this because it’s similar to the TextEncoder model. It’s good.
+ +KG: You’re supporting specifically variation 4, or any of these variations?
+ +LCA: Specifically variation 4.
+ +KG: Okay. 
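A userland streaming decoder along the lines KG describes can be sketched with today's `atob`; the proposal's `read`-reporting API would let engines do the same thing without the manual leftover buffering (rough illustration, not the proposal's API):

```javascript
// Streaming base64 decode in userland: buffer leftover characters until
// a complete 4-character chunk is available, then decode with atob
// (global in browsers and Node.js >= 16).
function makeBase64Stream() {
  let leftover = "";
  return function push(chunk) {
    const input = leftover + chunk;
    const usable = input.length - (input.length % 4);
    leftover = input.slice(usable); // the "extra characters"
    const binary = atob(input.slice(0, usable));
    return Uint8Array.from(binary, c => c.charCodeAt(0));
  };
}
```

For example, splitting `"aGVsbG8gd29ybGQ="` (`"hello world"`) across two `push` calls decodes correctly regardless of where the split falls.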
+ +LCA: From Base64 into, and then additionally with that flag.
+ +SYG: I requested BYOB as a thing to work on here. I want to preface this by saying I have no quarrel with having this as a follow-on API if that’s what people would like; I think the standalone one-shot API stands alone and does not need to be held up on the BYOB use case. KG, when you and I were chatting about this, there was a half (?) idea that we have an in-and-out param. Why am I requesting BYOB? Because there are use cases on the web platform where you really want to maximize the performance you get from a decoding API — not so much for Base64, but we’re also thinking about an even more performant UTF-8 decode API that is separate from TextEncoder/TextDecoder, and for that kind of API you really want to be able to bring your own buffer. It would be nice, since we’re doing Base64, that the Base64 BYOB API we design now is consistent and nicely symmetric with a future one that I would like to propose for UTF-8. That said, to maximize efficiency, the most efficient API is one where you can decode into a buffer at an offset, incurring no object allocation beyond maybe an options bag, because you can just cache that to be a singleton object. Zero allocation here goes to the extent of not even allocating the result object that holds read and written. One of the ideas that KG floated, which I really kind of liked, was that we could instead have an in-out param, where it is the responsibility of the caller to pass in an object that encodes parameters like the offset at which to start decoding, and that object is also an out param, in that the written bytes would be updated on that object. So it’s up to the caller to decide whether they would like to allocate a fresh object each call or — yes, okay, great, you already have a slide made, or maybe you just made it right now. But basically something like that. 
So, it is decidedly a less ergonomic API, but I hope it is uncontroversial that the BYOB, chunking, and streaming use cases are advanced use cases that can live with the tradeoff of having a less ergonomic API for maximal efficiency. And something like this is more efficient than having an intermediate-style API that is somewhat ergonomic, like returning a result object with read and written. While I can live with variant 4, though with the caveat of requiring an output offset in the input as well, I would actually be much happier with something like variation 4B. So for people supportive of the BYOB use case, I would appreciate your thoughts on this, if you could take a minute to read the slide and read this API.
+ +KG: To be really clear about what’s going on: the call in the second line would mutate the object, the `opts` object from the first line. It would be updating a parameter rather than merely reading from it. I don’t think we do this on the web very much, but to give my own opinion, I’m fine with it. I kind of prefer matching TextEncoder — this is also a request from Anne — but this is decidedly more efficient, so I’m fine with it.
+ +PHE: Yeah, sorry. SYG, you showed and mentioned the offsets into those buffers, but not the available length for the input and output. It seems to me that that would be necessary in some of these cases.
+ +SYG: They will be updated on return from Base64-into, such that by comparing the two updated offsets — if you still have input left but the output offset is at the end of the buffer — you would know that you could not finish fully decoding. But the information should be there. So I agree with you that you need to be able to derive that information from the output, even if it’s not straightforwardly available. 
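The in-out pattern SYG describes can be simulated in userland today. This sketch uses hex (also part of the proposal) and hypothetical property names (`inputOffset`, `outputOffset`, `read`, `written` are assumptions, not the proposal's actual API), purely to show the zero-allocation shape under discussion:

```javascript
// Userland simulation of the "variation 4B" in-out options object.
// The caller allocates `state` once; the decode updates it in place,
// so a tight streaming loop performs no per-iteration allocation.
function decodeHexInto(input, target, state) {
  let i = state.inputOffset;
  let o = state.outputOffset;
  while (i + 2 <= input.length && o < target.length) {
    target[o++] = parseInt(input.slice(i, i + 2), 16);
    i += 2;
  }
  state.read = i - state.inputOffset;     // characters consumed
  state.written = o - state.outputOffset; // bytes produced
  state.inputOffset = i;                  // advance for the next call
  state.outputOffset = o;
}
```

Comparing the updated offsets after a call tells the caller whether the output buffer filled up before the input was exhausted, as discussed above.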
+ +PHE: I see, so the intent here, then — I mean, it’s a little — you’re not providing a general-purpose way to decode into an arbitrary slice of a buffer, but you’re kind of using this as a way to decode into a full buffer?
+ +SYG: I’m sorry, I don’t quite follow. I think this is a fully general way to decode into a buffer.
+ +PHE: Imagine you didn’t want to write into the last byte of the output buffer, okay? You have no way to — I don’t see a way, sorry. I’ll state that differently. I don’t see a way to express that when using this API.
+ +SYG: Sorry, okay, I’m still trying to understand the use case. You have an input that you do not want to fully decode, is that the use case?
+ +PHE: No, you’re decoding it into a buffer that has other things in it, right? So you might not want to write past a particular offset.
+ +SYG: I see, regardless of how long the input is, you want to stop decoding at some length?
+ +PHE: Right. Basically what you’re doing with the input offset is providing a sub-array that runs from the input offset — or output offset in this case, sorry — to the end of the array. And I am saying, why would we limit it to that?
+ +SYG: I see. I don’t want to put words into KG’s mouth here. I have no issue with this — it’s pretty unergonomic already. I see no issue adding more offsets to make it fully generalized.
+ +PHE: Your goal is to have it fill the whole buffer in that case?
+ +SYG: Yes.
+ +PHE: The assumption is you would write to the end of the buffer?
+ +SYG: Yeah.
+ +PHE: Okay. Thank you.
+ +MLS: I think MF and I are kind of in agreement here. I can’t think of any existing API where we modify an argument that we pass to the API. Can we think of any? I mean, it just seems kind of weird to do this, to modify `opts`. And especially since it’s the last parameter, not the first. 
And the buffer, sure, we’re going to put something into it, but we don’t modify a property of an argument in any API that I can think of.
+ +KG: I am not aware of any precedent, and I agree with you it’s weird. It’s just a tradeoff between how much we care about avoiding this allocation versus that weirdness.
+ +MLS: I think it’s not a JavaScript pattern, and I’m wondering if it ends up being a footgun somehow. I can’t think of one, but it just seems kind of weird.
+ +MF: Yeah, I just want to also say I don't have a horse in this race, so take it for what it is. But you did ask for feedback, and I'd say that without some extreme justification, it is just too weird to have the options object being written to. Also, as a separate point, when we make these designs in the language, at least historically, we've done so with an acknowledgment of implementers' abilities to do things efficiently, but not with that as one of the top goals. And I have hopes that possibly in the future you could avoid that allocation even with the original design in variation 4. Obviously not when the object itself is used, but when it's destructured and used, you know how it is.
+ +SYG: So what was I going to say? Oh, yes, I agree with MLS: it is unprecedented and weird. I just come down on the other side of the tradeoff. Regarding MF's point, I do think there’s a difference between efficiency for APIs that we intend to be widely, generally useful utilities, and efficiency for admittedly more niche use cases that may enable a lot of performance — basically, the one-shot API I see as the former and this API I see as the latter, which makes it more acceptable for the weird tradeoff space. But that said, I won’t necessarily die on this hill either. I think variation 4 we can live with. I am somewhat skeptical of the destructuring optimization — like, that is doable. It’s just that it has to be a general thing. 
We can’t build that optimization just for this API, and it’s unclear to me now how viable it is, I guess. I just don’t know. Yeah, okay, I’ll just leave it at that.
+ +KG: It’s maybe worth giving some additional color here, in that apparently there are requests specifically to avoid this allocation with the TextEncoder API — allocating the result object for the TextEncoder API is not just theoretically expensive, it’s expensive in practice, to the extent that it’s possible that the web platform will have an update to the TextEncoder API in some form to allow skipping the allocations. So while this variation 4B is currently unprecedented, it’s at least conceivable that the web platform might want to do something like this at some point. On the other hand, TextEncoder already exists and is precedent for the result-object form, so maybe we could do the zero-allocation version at some point if indeed TextEncoder adds a zero-allocation version.
+ +EAO: Sorry if I’m repeating something that was mentioned earlier; I think I just managed to catch up with the notes and so on. I didn’t see in any of these variations the possibility of the user giving a callback function, like `onExtraChars`, that would be called with the extra characters if it’s defined. Is there a reason we couldn’t do this? Because that would feel, to me, like the most JavaScripty way to solve this.
+ +KG: I’m not aware of anything that would make that not work; the idea hadn’t occurred to me. My own opinion is that I’m not sure something like this would be more JavaScript-y — a callback to return half of the values is kind of weird. But it’s certainly an option.
+ +EAO: The benefit of the callback would be that you could basically do anything there, and you could also leave it out if you’re not interested in the extra characters. 
And to counter the JavaScript-iness of this: here, effectively, either we’re always returning this extra wrapper object, or we are changing the shape of what is being returned based on the option. But I presume you’d be saying that this API would always have this extra wrapper object around it?
+ +KG: In this variation 2, yes. There’s also the variation 1 version where you just stick a data property on the Uint8Array, and then you could do that unconditionally regardless of the presence of the flag. But, yes, these are all feasible. It sounds like, though, people are broadly in support of variation 4 specifically — SYG wanted 4B, but could live with 4. This is missing the offset parameters, so it would be a little more complicated than it appears here, but only a little. So can we all live with this? I would like to go this route, if so.
+ +LCA: I replied to EAO’s question or comment. Do implementers think that a callback API would be slower or faster — enough to make the difference between variation 4 and 4B? I can imagine that there could be a footgun here where somebody creates a closure for every call, which I’m sure would be slower, but I guess if you have a closure that is reused over and over, maybe it’s the same speed as 4B? This is not a question to KG, but to SYG and MLS.
+ +SYG: Can I answer, or is someone ahead of me in the queue?
+ +SYG: I would like to respond; then there’s MLS’s response; and then I’ll take my original other item after MLS’s response. I do think the callback will be slower, because, yes, there’s definitely the closure footgun, but also you’re calling something. In the best case, even if you ignore it, even if the function is a no-op, you’re still calling a function. 
And in a built-in, you would need to depend on some deep inlining to actually get rid of the call, to figure out that the thing is a no-op, and that doesn’t happen until later tiers in the JITs. But also, if the common case of what you want to do in the callback is literally to exfiltrate the extra characters so that you can loop around again in the streaming case to process them, that is for sure a slower way of doing it than to directly return the leftovers. MLS, would you like to —
+ +MLS: If you’re going to make a call, it’s going to take a lot more time. Putting a property on an object — or updating a property on an object — is a lot cheaper than making a call. And the question is, okay, not calling if there isn’t any extra data versus doing a call — yeah, maybe it would be faster, but the whole reason that you want to have the extra characters, with the callback or with a property, is because you expect them.
+ +LCA: Okay, thank you.
+ +SYG: So I can live with variant 4. So, KG — and to the rest of the committee — I want to be explicit about something, which is that I’ve alluded to this maximal-efficiency API that we would like to design in the future for UTF-8. This does not preclude that. So one concrete scenario here is that we ask for consensus for variant 4 exactly as it says here: namely, a result object with read and written properties, and it does not take offsets as inputs — you are expected to make a Uint8Array view to decode into, and you are expected to create a sliced (?) string if you want to limit the input in some way. Is that what you’re asking for, KG?
+ +KG: No, I’m proposing to include offsets. Sorry, I should have written this out. Also, it will probably look slightly different. I don’t think I’m going to have a Boolean; I’ll have a three-state “how should I handle the final chunk?” option. 
But the intent would be to support offsets. I don’t see any reason not to. + +SYG: Okay. + +KG: I’ll bring that up real quick. + +SYG: Then I will refine my comment. I think I can still live with it. But it makes it much harder, I think, to introduce a truly zero-allocation API in the future, because there will be too much overlap with the current variant 4. If the current variant 4 does not include any offsets, then you can say, okay, maybe we really want a zero-allocation one -- a static method that takes all the offsets, as well as an in-out param or some other trick, so we have really zero allocations. Since you’re not asking for Stage 3, I would like to think more about this between this meeting and the next, but I think I can still live with variant 4 with offsets and the result object. But if the performance constraints are too great to really have zero allocations, what are the committee’s thoughts on “does this variant preclude future ones that are slight variations on this one that would have actually zero allocations?” + +KG: Yeah, I’m fine with that, for what it’s worth. I think that slight variants for higher performance are a pretty normal thing. It’s unfortunate that TextEncoder’s encodeInto already exists. But having a slight variation of TextEncoder’s encodeInto and also a slight variation of the Uint8Array Base64 methods is not actually a problem for me. + +PHE: Yeah, sorry, I wanted to come back to a passing mention that SYG made, just for clarification. It’s related to KG’s earlier question about putting functionality on -- excuse me, on Uint8Array or somewhere else. I’m personally not super excited about starting to have TypedArrays where some have certain methods and some don’t. It’s been consistent for a very long time. As for the question of having a separate global, I believe I’d prefer to see a separate global. 
One of the points that was raised against that is that if there was a Base64 global, it might only have two or four methods, which doesn’t seem like much. But we should also keep in mind that this proposal includes hex. We never talk about it in committee, but it does have hex methods here. So that would be some more methods, potentially, depending on the name of the global. RGN in an offline email had pointed out an example of a language where there’s basically an encoding namespace, so an encoding global could contain Base64 and hex and other things, which might make it a little more general. + +PHE: The question to SYG, coming back around: SYG, you mentioned in passing a future proposal about more efficient UTF-8 encoding, and it wasn’t clear to me when you said that whether that was a proposal intended for this committee or another body. I just wanted to understand that, because it might have some bearing on the separate-global topic. + +SYG: It’s not entirely clear to me either. I think probably this body, but I think there are also good arguments to be made that the more efficient thing ought to extend TextEncoder/TextDecoder. The thing we want for the proponents’ use case will probably look significantly different, though, so I consider it also possible to propose it here in TC39. That said, I don’t have strong feelings. + +SYG: Wait, hold on. I just lost my network connection, but I also just heard Peter’s response to me. So you can still hear me? + +PHE: Yes. + +SYG: I won’t close this window, and when I finish my thought, I’ll try to restart. I don’t have a strong feeling about where these methods live. I originally suggested Uint8Array because of the windowing we get for free, since it’s already a view. But I don’t have a strong -- so long as the API accepts TypedArrays, I think that’s the high-order bit. What the namespace is, I don’t feel strongly about. 
+ +PHE: Okay, so, you know, a maybe. But sure, appreciate it. + +KG: Yeah. I’d like to request other people’s feedback on the global versus Uint8Array. I have a mild preference for placement on Uint8Array, but it’s pretty mild. Peter, as he says, prefers a separate global. I am okay with anything that the committee is okay with here. But I would like more opinions than just, well, PHE kind of wants a global and I kind of don’t. Can anyone else give us something to go on here? + +LCA: Yeah, so for the encoding, from bytes to text, I would prefer it as a prototype method, just because that reads more naturally, and we can call something like `buffer.toBase64()` or `buffer.toHex()`, which is especially useful in the cases where this matters -- like, you just hashed some data, you got back a Uint8Array, and you can do it in a single line, just by chaining methods. Which is nice. For the decoding use case, I don’t have as strong a preference. But I feel like if we’re going to put something on the prototype for encoding, we may as well put the decoding one on the object too. So, yeah. + +KG: Okay. Thanks. Anyone else have opinions about whether it makes more sense -- or whether you would aesthetically prefer, or whatever -- to have a new global versus methods on the prototype and the constructor? + +MLS: So I think a new global Base64 object makes sense if we consider what the sources and destinations are for each of the encoding and decoding operations. I can envision that you could do string-to-string in either direction, not just Uint8Arrays. If it’s more generic than just going between an array and a string and back -- yeah, I don’t know. + +KG: Yeah, I should say also, I put in the slides -- no, I didn’t. Okay. Yes, as PHE mentions, this isn’t just Base64. It’s also hex. 
So presumably, if we were doing a new global, either we would have an encoding global that could contain Base64 and the hex versions, or we would have both Base64 and hex globals. It’s not as simple as just a Base64 global. + +KG: Okay, well, surprisingly few opinions from this body. I’m going to assume that means that no matter what I come back with, everyone is fine with it, within the space discussed. So I will figure it out offline. Tentatively I’m planning to stick with what I’ve presented, because I heard a voice in support of it from LCA, and PHE objected and I don’t know how strongly; anyway, we’ll talk about it offline. Tentatively, I plan to leave it on the prototype, but that may change. And no one else has opinions currently, so I’m hoping that is fine with everyone. + +KG: The other thing I most wanted to discuss: it sounds like we’re all good with approximately variation 4. I should mention this partial-chunk handling that says, you know, stop before the final chunk. Probably it makes sense for this option to be on the non-`into` version as well, just for completeness. It’s not something you can easily recover from in the non-`into` version unless you know that your string doesn’t contain whitespace. But I don’t see any reason to restrict this option to just the `into` version, so the idea would be that there would be some option, potentially like this, that says the handling of the final partial chunk should be “stop before”, and that option would be present on both the `into` and non-`into` APIs. On the `into` API specifically, you could recover from it using the `read` property, and thereby implement streaming in userland if you want, or do any sort of error-recovery behavior that you care to. Okay. That was everything I had to discuss. + +WH: I just wanted to add my support to the choices you indicated. I have a slight preference for putting these things on the `Uint8Array` prototype. + +KG: Okay. Thanks very much. 
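KG’s `{ read, written }` result and the “stop before a partial final chunk” option can be modeled outside of JS. Below is a minimal Python sketch of those semantics for the streaming case; the function name `from_base64_into` and the exact chunking rules are illustrative assumptions, not the proposal’s specified behavior:

```python
import base64

def from_base64_into(s, buf, offset=0):
    # Illustrative model only (not the proposed JS API): decode complete
    # 4-character base64 chunks into buf starting at offset, stopping
    # before a final partial chunk and before overflowing buf.
    # Returns (read, written): input characters consumed, bytes stored.
    space = len(buf) - offset
    chunks = min(len(s) // 4, space // 3)  # whole chunks that fit; 4 chars -> up to 3 bytes
    read = chunks * 4
    out = base64.b64decode(s[:read]) if read else b""
    buf[offset:offset + len(out)] = out
    return read, len(out)

buf = bytearray(8)
read, written = from_base64_into("aGVsbG8", buf)
# read == 4, written == 3: one full chunk decoded, "bG8" left over
leftover = "aGVsbG8"[read:]
```

The leftover characters (`s[read:]`) are what a streaming caller would prepend to the next chunk of input -- the userland recovery that KG describes via the `read` property.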
+ +CDA: KG, SYG has entered the queue. + +SYG: I just want to reiterate, for the delegates other than KG: if you have thoughts on a future weird zero-allocation, high-performance variant -- which would be a follow-up proposal -- and you come to think of something between now and the next meeting, please let me know. I want to be perfectly open that that is part of my future goal here, that I’m still thinking about some weirder, maximally efficient API. If you don’t want that to happen because we are advancing this at the next meeting, please let me know. + +KG: Okay. Well, I will hopefully come back next meeting with something that looks approximately like what’s on the screen here. I’m not going to commit to the precise names of any of the input options, but probably something like this. And, again, this partial-chunk-handling option, or whatever it ends up being, will probably be on both fromBase64 and fromBase64Into. + +KG: The last thing, which I realized I forgot to put in the slides, is that there will probably be the same thing for hex: a fromHexInto, so that if you have a hex string, you can write it into an existing Uint8Array. I assume everyone is fine with that, but if you’re not, let me know. And I look forward to coming back next meeting with something everyone can live with. + +### Speaker's Summary of Key Points + +KG: We heard mild support from Waldemar and Luca for putting these on the prototype rather than on a new global. Peter still prefers to have a new global. We also heard that something like the variation 4 that I presented -- where you have a fromBase64Into method that returns an object with read and written properties -- would be acceptable to everyone, as long as, per SYG, it doesn’t preclude the possibility of some future weirder API that accomplishes the same thing with no allocations. 
And no one expressed objection to the idea of having this sort of handling for the length, where you stop before a partial final chunk, or potentially other behaviors. So the intention is to work out the details and come back. Okay, that’s all, thanks. + +## Decimal Stage 1 Update & Request for feedback + +Presenter: Jesse Alama (JMN) + +- [proposal](https://github.com/tc39/proposal-decimal/) +- [slides](https://docs.google.com/presentation/d/1ecK7CzrgSO5t8-gYQnNWUSHcnWltJKWqTolgJsAIwqI/) + +JMN: Yeah, my name is Jesse Alama. I’m presenting a Stage 1 update for decimal. I’m at Igalia, working on this with Bloomberg. I wanted to give you a bit of a survey of how other languages tackle the problem of decimal numbers and the ways they approach this issue. I know that we’ve discussed this several times in the last several plenaries, and I thought one of the things that might help the discussion is to see how other languages deal with these issues, or don’t. Then, given that information, we can see what the current suggestion is for how JavaScript might approach this problem. In particular, there’s one topic that was decided, or essentially quasi-decided, many years ago, and has always been taken as settled, but increasingly I wonder if it is something we want to reopen, and that is the topic of normalization. I’ll explain a little more later about what that is. Those are the three parts of what I want us to talk about. + +JMN: There’s a bit of a bonus here. We have a new co-champion on the decimal proposal, JMK from Oracle, who I think is with us now and is working on decimal. He’s a great guy to have around, and for any questions or comments, he’s also there with me. 
Just to recap what the issue with decimals is: the problem, you might say, is that in all sorts of use cases on the web, whether we’re talking front end or back end, JavaScript’s numbers -- that is, binary floating-point numbers -- are really not a great fit for handling human-consumable numeric quantities. In particular, things like money, things like measurements, or, if you think about graphical representations, things like axis labels don’t fit JavaScript’s built-in numbers, because humans typically use base 10, and the best we can do with JavaScript is base 2 -- binary floats. Clearly they work in many use cases; JavaScript has come this far without having decimals built in. But the issue is that this can lead to user-visible rounding errors. You can really get things simply wrong because of the mathematical mismatch between base 2 and base 10. All sorts of numbers are simply not exactly representable in base 2 that are obviously representable in base 10, because there we just write down a string of digits. And even with careful programming, these kinds of rounding errors can pop up, especially when the calculations start to get a little more complex. Decimals arise in both front-end and back-end scenarios: if you think about something like handling financial quantities, that could be front end or back end. All sorts of quantities are coming into JavaScript, whether through, say, attributes on the web, or a database connection on the backend. JavaScript is often surrounded by systems that do handle decimals natively, so JavaScript is sometimes in an awkward state there, where some information might get lost, and the user may see this. + +JMN: I was doing a bit of research here; there are all sorts of graphics libraries out there. I was taking a look at Chart.js, and I got in touch with one of the maintainers there. And they pointed out a couple of interesting bugs that show up when generating charts with Chart.js. 
This [on screen] is just one example of many where the ticks on the graph get labeled in a pretty weird way. You can kind of guess what’s going on there. There’s another discussion that came up where someone was working with a calculation and it seemed the precision was wrong; the maintainer realizes this is a kind of long-standing, almost meme-level issue in the JavaScript world -- look at the response down below there. If you take 0.1 and multiply it by 3, you just don’t get what you think you’re going to get, and this can be visible to the user. And sometimes this kind of lack of precision is not acceptable. It’s not okay to be close enough; it has to be exactly right. + +JMN: Google Sheets -- how many people use Google Sheets? Well, I came across this one, thanks to another co-champion, Andreu, taking a look at the user forums for Google Sheets. Here is a topic in the help forums by someone who apparently did some kind of calculation on their own to compute their monthly mortgage balance, and noticed that something was slightly off. This is probably not a JS programmer, but someone who is using a complex application for the web, and it is just getting something wrong, and the user is puzzled about this. So it’s not just programmers who are going to encounter these kinds of things; even users can see them too. I mean, the people running Google Sheets are, I’m sure, very careful with these things, and nonetheless a lot of this stuff can leak through if one starts to poke and prod. + +JMN: So this is the issue of errors. So there are a couple of motivations. One is the abstract mathematical motivation of simply representing things correctly. And then you might say, well, what difference does that make? And the answer is: because this can lead to user-visible errors that really can be quite embarrassing for us. Okay. So how do other languages support this? 
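The 0.1 × 3 meme mentioned above is easy to reproduce, and it also shows what a decimal type buys you. A quick illustration using Python’s stdlib `decimal` module (one of the libraries surveyed below) as a stand-in for the proposed JS type:

```python
from decimal import Decimal

# Binary floats: 0.1 has no exact base-2 representation, so the error
# accumulates and becomes user-visible, e.g. in chart tick labels.
print(0.1 * 3)                                # 0.30000000000000004
print(0.1 * 3 == 0.3)                         # False

# Decimal arithmetic works in base 10, so the same computation is exact.
print(Decimal("0.1") * 3)                     # 0.3
print(Decimal("0.1") * 3 == Decimal("0.3"))   # True
```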
Actually, I took a look at a number of popular languages here. I just went to Stack Overflow and looked at the top 10 or 12, plus a couple of extensions. This is an arbitrary list, not intended to be the best list ever, but I picked some, rolled with it, and I think it’s a pretty good sample of what’s out there. The interesting thing is that a lot of languages do support decimals, either out of the box, in all programs, or as part of a standard library. And there are a couple where standardization is under way, so the problem is recognized. The case of C++ comes to mind: it is very close to being done as a standard. There was supposed to be a meeting of the C++ committee last year where someone was going to present a final proposal, but they didn’t attend, so it’s still kind of circling the airport there. And for the languages that do support decimals, what I wanted to do is take a brief look at each, to give a sense of the ergonomics -- whether the programs are lean and elegant or bulky and ugly. And the question would be: given these languages and how they tackle the problem, what does our proposed solution look like? + +JMN: This part of the presentation is not so nice, because I don’t really want to read these slides, but just to give you a sense, I have as a running example the task of calculating a final bill, where I have two items and a tax rate of 7%. One item is, let’s say, 1.25 -- or 25 cents, whatever -- times 5. Then another thing costs, let’s say, 5 bucks. And I want to add them together, apply the tax, and get the total. Nothing very complicated here, but the example is nice because we can keep repeating it in different languages. Here is C#. You can see that in C#, just out of the box, without importing anything, you have got a literal form, `1.25M`, on line 5, and then again a couple of lines down. And then you can just use `+` and `*` as you normally would. 
These are overloaded. It looks elegant there. Java also comes with support in the standard library; they call it BigDecimal. You have to do this with `new` and string arguments, and everything is quoted. But it’s not too bad. We have the literals 5 and 1 there, and then the quoted string representations of the numbers. And you have to do a bit of bulky `.multiply` and `.add` and whatnot, but you can see that it’s still, I would say, not too bad; I guess it could be worse. Kotlin -- for all you Android programmers out there, you might have seen Kotlin -- basically has support for decimals because of its connection with Java, so this is essentially the same as the previous slide. Python has had decimal in the standard library for a long time, from the early 2000s. You can see it’s a little leaner, because I don’t have to use `new` or something like that; I have a Decimal type there, again with strings, where you have to quote the values. There are no literals there like in C#. Ruby also has support for decimal in the standard library; there’s a kind of `to_d` method that gets applied to strings. If you have used SQL databases, you will find decimal support there as well. This is a contrived example, but it gives a flavour of the syntax and what is going on. Notice that there’s a bit of overloading there, so that’s kind of lean and elegant. What is interesting is that there’s already support for decimals in the browsers, as part of the UI code -- it’s just not part of the JS engine. You can follow these links: Firefox, Chrome, WebKit -- they have decimals sitting in their codebases. They are just not plugged into the JS side of things. All of them are ad hoc decimal floating point, with varying minimum and maximum exponents and precision. They all seem to work. + +JMN: I am trying to make an argument with these examples, which is that decimal support already exists in many languages. 
I guess you have to trust me that for the languages that don’t have it built in, third-party libraries are available, because people have to come in contact with money and they need to handle it properly. Browsers already have decimal in their UI code. This is something I kind of like, because I consider my task here to look at browsers and the language and see how this might work in a browser, and it was nice to stumble across decimals already there. The various data models are a bit ad hoc. Very few languages, except perhaps Python and Java, claim to follow any kind of documented standard. That’s interesting, because it suggests to me there’s a sort of understood notion of decimal -- a range of values that we think we need to handle -- and we just kind of roll our own, and it’s fine. And it seems to work. The focus is usually on basic arithmetic, not things like trigonometric functions. And the conclusion of this, I would say, is that somehow all these things work. You don’t hear that Python doesn’t have the right decimals; people are happy with what they have. C# has its own ad hoc approach. It’s fine for pretty much all the use cases for which decimals are intended. + +JMN: So now we have seen a few examples of what decimals look like in other languages. What would I propose? Well, the idea is to follow IEEE 754 -- not the binary part, but the decimal part, which has been there since around 2008 or 2009. And the idea would be to support the full Decimal128, with NaN, -0, and the infinities. The proposal here is not to have a new primitive datatype: you create values from something like "3.14", a quoted string. There’s no intermingling with the built-in operators -- addition, multiplication and so on, and comparisons, all throw. It’s a new standard library object; the name is subject to bikeshedding, but call it Decimal for now. Values are created with `new`, and there’s only basic arithmetic -- division, multiplication, square root, et cetera -- and rounding. 
This is inspired by Temporal and the Intl.NumberFormat folks. So the running example using the current design would look something like this -- actually unchanged from the previous slide. You can get a sense that this is basically in the right neighbourhood of all the things we have seen so far. Yes, there’s a `new Decimal` there with a string, which is a bit bulky, I admit. But other languages often have this too, or are only a little better. + +JMN: So now it comes to the part of the presentation where I would like some feedback from you folks, and that is the topic of normalization. To set the stage: we have talked about decimals since 2018, or maybe 2017. Very early on in our discussions, we settled on the idea of exposing what are called normalized values. Normalized means that trailing zeros would always be removed: you can give a number with trailing zeros as input, but those would never be stored. So storing a number with trailing zeros would lose information, with no way to recover it. Those are called normal numbers; non-normal numbers are ones that have at least one trailing zero -- like 1.20 versus 1.2. What is interesting is that IEEE 754 Decimal128 works on all values, those with and without trailing zeros. The zeros represent significant digits. They are not a joke, someone hitting zero on the keyboard for fun; they are supposed to represent measurements, or some kind of numeric quantity, where the extra zero is intended to be stored somehow. In other words -- here is another way to rephrase what I said -- decimals are not just mathematical values. Yes, mathematically, 1.20 is just another notation for 1.2; those are simply the same value. But decimals are digit strings with a mathematical interpretation, not simply mathematical values. And there are understandable use cases for such numbers. So if you think about those kinds of use cases, then killing the trailing zeros might be an irrecoverable loss of information. 
Or one has to go to some other source to find out what that information was, and that might not always be reliable or even available. Interestingly, many of the languages we presented do respect trailing zeros. So the question that I would like to pose is whether we really want to ban trailing zeros. I realize that is probably a controversial subject, but it seems to me that we haven’t fully nailed down the arguments for it. + +JMN: Now, anticipating that there might be a discussion about this, I propose a middle ground where, in a way, you can have both. Not really, but it looks like it. There are some assumptions: we won’t have decimal literals in the near future -- this is something we have been discussing for the last few plenaries, and it’s not something that we propose at this stage of the proposal. So there are no literals; `==`, `<`, `+` -- all those things throw when given a decimal argument; they don’t work. And `===` has object semantics. The question is whether we might be able to find some kind of middle path, where something like a less-than or equals method works with mathematical values, but the `toString` method would return all the digits the value initially had, possibly with trailing zeros, and there would be a normalize method for those cases when you do want to kill the trailing zeros. But the point is, they would be there. And then some other methods would work with mathematical values. It’s a bit of having our cake and eating it too, sort of. And that’s it for me. I wanted to give you a sense of how our current proposal matches, or doesn’t match, how other languages tackle this problem. And now I would like to open the floor for discussion, either about what I said earlier about the language comparison, or about this issue of normalization. + +WH: You listed a bunch of language libraries. Which are IEEE decimal and which are BigDecimal? + +JMN: Okay. 
Decimal128 is available in Java. And in Python, you also sort of have Decimal128, in a generalized sense, because it allows things like specifying the number of significant digits you want; overall it follows the Decimal128 approach. The approach Python takes is by the same person who designed Decimal128 -- you might say that Decimal128 is an instance of Python’s general decimal arithmetic. + +WH: The Java BigDecimal library in the example on the slides has arbitrary precision. + +JMN: This is the arbitrary-precision one, yes. The name of the class is misleading: BigDecimal in Java can mean either Decimal128 or indeed some kind of arbitrary, unlimited-precision decimal. So actually, Java has both. + +WH: Okay. Which leads me into my second topic: you made the claim that transcendental functions are typically omitted. But all of the libraries which support decimal128 that I looked at support transcendental functions. I can’t reconcile how they are omitted when the ones I looked at include those transcendental functions. + +JMN: The Python one -- + +WH: It has transcendental functions. + +JMN: It does? + +WH: Yeah. + +JMN: Okay. That’s news to me. Maybe I missed that. + +WH: I also looked at Julia, which includes transcendental functions. The Intel library, which is decimal128 for things like C and C++, has transcendental functions. + +JMN: That’s good to know. That’s fine. Good. Thank you. I mean, if I missed something with the Python one, that’s really an error on my part. And then the other libraries -- + +WH: The ones which are based on arbitrary precision do not support transcendental functions. But the ones based on decimal128 do. + +DE: Python is based on arbitrary-precision decimals. The default precision for a new decimal is a context variable, similar to AsyncContext. 
[Note: DE incorrectly said the precision of decimals was a global during the meeting; the previous sentence corrects that while making the same point.] So the precision can be 60 digits or whatever. Right? + +JMN: The Python one? + +WH: I thought it’s decimal128. + +JMN: For the Python one, it’s more like a family of standards, where Decimal128 is an instance given certain parameter settings: you say you want 34 significant digits, and this min and max exponent, and then you can work with Decimal128. + +WH: Okay. + +JMN: Yeah. That’s good to know. If indeed these functions are present in more places than I think, then that’s an argument for including them. + +WH: Yeah. + +WH: Regarding normalization: before resurrecting a topic, it’s good to know why it was settled. Lately at every meeting somebody wants to resurrect denormalization without understanding why we settled on normalizing numbers. If you’re proposing to resurrect it, why was it settled? + +DE: Sorry, WH, you know the history and why it was settled. Maybe you could share that. + +WH: I am going to, but this comes up at every meeting and I don’t like repeating the rationale at every meeting. So I am asking if anybody else remembers. + +JMN: Well, in my mind, one of the reasons to work with normalized values only made sense in a world in which there were decimal literals, because then indeed you could write some unusual things that might have weird results, if 1.2 and 1.20 are somehow handled differently as literals. But in a world without decimal literals, the question is: does that argument still apply? Does that thinking still apply? + +WH: That’s not actually one of the reasons. Literals are not the reason to not distinguish cohort members. The two main points that settled this last time were: one, the IEEE implementation of cohort members does not correspond to the mathematical notion of precision. That’s one. And two, what should `toString` do? 
Should `toString` produce normalized values, or should it show significant digits? + +JMN: Well, just to continue this line of thinking, you would say it produces the extra digits, if available. + +WH: Okay. Let’s explore that case and see that it will cause a lot of subtle bugs and system failures. Let me go through a specific example to illustrate why. Let’s say you have the number 300 with 4 significant zeros. What should that print as? + +JMN: 300.0. + +WH: I said 4 significant zeros. + +JMN: Well, I guess you might say undefined, because -- what are we talking about? There’s a confusion because there are two mental models. Are we talking about a mathematical subset of the rational numbers, or are we talking about digit strings? + +WH: I am talking about how it’s done in IEEE decimal128. The number 300 with 4 significant zeros. What should that toString as? + +JMN: `300.00`. Is that what you’re asking? + +WH: Yes. What about 300 with three trailing zeros? + +JMN: Well, 300.0. + +WH: Yes. Two trailing zeros would be what? + +JMN: 300. + +WH: Okay. 300 with one trailing zero is what? + +JMN: 300. + +WH: Nope. That’s 300 with two trailing zeros. + +JMN: Okay. What is it? + +WH: 30e1. + +JMN: Okay. Okay. Well, okay. So then it sounds like decimal128 as a standard is somehow odd for most programmers. + +CDA: SFC, did you want to reply here? + +SFC: I do have a general reply, which is that IEEE 754 defines the behaviour, and we can follow that behaviour. We don’t need to figure out what it is, because it’s defined, and we just have to follow it. + +WH: Okay. Well, we can do that. The problem is that now, in the middle of your financial calculations, you might get 30e1, and that will cause a lot of mayhem. + +PFC: [on queue] This seems like it should have been a GitHub discussion before the plenary. 
A further comment suggests working this out in an offline text discussion, rather than live here. + +WH: I will second that. I don’t want this resurrected unless it’s worked out. I don’t want this coming up at every meeting; this is not a good use of our time. + +SFC: Yeah, I agree, offline. But I can speak in response to the trailing zeros question. + +SFC: I guess since we have 60 minutes in the queue, I will go ahead and make some of the points. + +SFC: First, regarding the question about odd string formatting: this is a question we have also had in Temporal and elsewhere. It’s totally reasonable for a stringification function to have different modes, depending on the needs of different users. We can have one stringification function or mode that follows the spec and another that gives consistent, reliable output, and so forth. I don’t see that as an existential problem with supporting trailing zeros. + +SFC: Second, as I have said, trailing zeros are important for internationalization. There’s a difference between having 1 star versus 1.0 stars, even in English, and that’s more so in other languages. + +SFC: Third, as I posted on GitHub before the meeting, other implementations of Decimal128 and BigDecimal mostly support trailing zeros: Java, C#, Python, and Postgres support trailing zeros and the concept of significant digits. + +SFC: Fourth, as Jesse already said, dropping the trailing zeros loses information, which doesn’t seem like information we want to lose -- especially since dropping it wouldn’t cover all use cases. There are use cases for having trailing zeros, and if we want decimal to help represent those use cases, we can’t lose that information. 
You can still serve the use cases where you don’t care about trailing zeros by having certain functions, as Jesse proposed in the slides – a middle ground where some functions respect them and some ignore them, and that’s totally fine, as long as the data model is able to represent them and the users who do need them have access. That’s the preview of the 4 points. But I agree, it seems reasonable to continue the discussion offline. + +SFC: I guess one other point, number 5, is that the decimal proposal, I think, predates my time on TC39, and I think the time of many others, including JMN, the current champion. Any discussion about the normalization of digits happened about five years ago, which predates my time on TC39. The points that I raise, especially about the internationalization effects, were not well articulated at that time. So there is new information. It’s not just resurrecting an old topic without new information. So that’s my soapbox. Thank you for listening. + +DE: The reason it’s important to raise this in TC39 is because it has been raised at different TC39 meetings by SFC, who gave various reasons for this change, and despite trying, we haven’t been able to arrange an offline technical conversation on this topic. The next recourse when having trouble getting engagement is to raise it at plenary and talk it through in plenary. So I would appreciate Waldemar focusing on making the point, rather than this being a sort of Socratic method of conversation. + +DE: Also, I think it’s reasonable for us to subset IEEE 754 through normalization, or through not supporting all the different signaling modes, which very few implementations support – I had trouble finding the documentation for the Python trigonometric functions, for example. I think this is the right place to have this discussion. + +DE: I want to focus on the overall motivation, because in previous plenaries, the motivation has been questioned.
JMN gave detail about that. Do delegates see the motivation here as being well-supported? + +WH: We do a lot of things which lose information. For example, the IEEE standard has status flags like "inexact". We don’t support those, so we lose information about which operations are exact and which ones are not. We choose not to expose that to users. We choose not to expose signaling NaNs to users. Many other implementations of IEEE arithmetic do expose these. In my mind, unnormalized values fall into that category as well. It’s a really obscure use case that causes issues and surprises, and it’s better omitted from the language. One other thing I would like to understand is, for the internationalization claim, why couldn’t that be done by supplying something like the number of significant digits? + +JMN: Right. It seems like that could also be a parameter. But my knowledge of the internationalization stuff is not very strong, so I can see that maybe finding that number of digits is awkward or impossible or something. I would also like to hear that. + +SFC: My response to that question is: it’s part of the data model. It’s the same sort of issue as, for example, having a date in Temporal. Let’s say today is November 27th, 2023, and you want to take that data and format it as a month and day. You might format it as November 27. That’s a different operation than saying, I am going to only have a month-day, November 27, and then format that. One has the configuration set in the formatter and the other in the data model. It’s been a long-standing pain point with internationalization: what belongs in the data model versus in the options? One direction that we have very much been moving in, in this space, is that the idea of trailing zeros should be part of the data model.
Currently, we have the capacity to do it as part of formatting options, which is fine. That exists. But it should be part of the data model. This is an example – the one I always pull out is one star versus 1.0 stars. In order to perform logic on this, you need to make sure you pass the same set of options both to the plural rules, to generate the plural form, and to the number format. And it’s a common bug that these options aren’t in sync with each other: you might end up getting the wrong plural because the formatting options were different from the options used for generating the plural form. By representing this in the data model, we eliminate those problems. If both of those, when they consume ECMAScript decimal, say, okay, we will respect the number of significant digits in the decimal passed to us, they can do the right thing every time, without having to worry about keeping options bags in sync with each other – because, again, the concept of trailing zeros in formatting is a data model concern. + +WH: I don’t understand the answer. If you are printing a decimal128, it will print as “1” so you will get “1 star”. If what you are printing is “1.00”, you did something else which altered the output string and, whatever it is you’re doing, you should communicate it to the internationalization library. + +SFC: If you have a decimal 1 versus a decimal 1.00, we can handle both of those correctly through the internationalization APIs, if we are able to represent that as part of the data model.
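The options-sync bug SFC describes can be reproduced with today's Intl APIs: plural selection and number formatting each take their own digit options, and they must agree. A minimal sketch (the "star"/"stars" strings are illustrative, not from any API):

```javascript
// The formatter is told to show one fraction digit: "1.0".
const nf = new Intl.NumberFormat("en", { minimumFractionDigits: 1 });

// Out of sync: plural rules see plain 1, category "one".
const prPlain = new Intl.PluralRules("en");
console.log(`${nf.format(1)} ${prPlain.select(1) === "one" ? "star" : "stars"}`);
// "1.0 star" -- wrong

// In sync: pass the same digit options to the plural rules too.
const prSynced = new Intl.PluralRules("en", { minimumFractionDigits: 1 });
console.log(`${nf.format(1)} ${prSynced.select(1) === "one" ? "star" : "stars"}`);
// "1.0 stars" -- correct
```

SFC's argument is that if trailing zeros lived in the decimal value itself, both consumers could read the precision from the data instead of from two options bags that can drift apart.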
If both 1 and 1.00 are represented as the same value in the data model, but it’s desired to output with significant digits, then in addition to passing the data through as part of the formattable object, you also need to be able to pass through options – and options are intended to be things that are fully configurable based on the desired display. I use grouping separators as an example of this: whether to turn grouping separators on or off is a configuration which definitely belongs in the formatting options bag. Another example of a style is the width of strings – for example, the name of the currency or measurement units or the percentage sign. Those widths are also display concerns. But the actual value being formatted is not a formatting options concern. It’s a data concern. If the data carries additional information, say a date, the formatting concern changes. There’s a difference between a formatting concern and a data model concern. My position has been that trailing zeros are a data model concern, not a formatting options concern. + +WH: We should take this offline because I don’t know what to make of that. + +CDA: Okay. Let’s move on to JHD. + +JHD: Yeah. My understanding is that the various browser representatives have indicated constraints that this can’t be a primitive until they change their position. I am trying to phrase this as forward-looking as possible: beyond “it’s built in and you don’t have to install anything”, what are the advantages of shipping this at all? Of adding a non-primitive, without-syntax-support Decimal to the language? Is it going to be faster? Is the status quo easy to screw up and this hard to screw up? What are the motivations to add it? I am having trouble seeing any. + +JMN: Yeah. I think one is, as you said yourself, it’s harder to screw up. I think if this were in the language, then programmers would be able to reach for it.
Knowledge would spread in the community that this is something you can do. It might take some time, but as time goes on, I would expect that more and more sophisticated JavaScript programmers would be aware of it and use it, and have an effect throughout the community – rather than the status quo, which is "LOL, JS numbers suck" . . . followed by strange workarounds, which themselves are incomplete and buggy. In one Google Sheets forum post, a sophisticated user ends up reimplementing a BigDecimal entirely within a user-defined function. Whether it’s correct, I don’t know. + +JHD: I guess my follow-up question, based on that answer: assuming what you are saying is true, if there is a userland library that has achieved dominance, that would be very good evidence of how easy or hard this is to screw up. E.g., what sorts of bugs are common in there? What API designs should we avoid or seek out? Are there any clear winners, even if it’s a couple? I am sure there are 50 or 100 decimal libraries. But I would expect to be able to count on one hand the number of dominant players, and then I would want both that information from them, and also something that is better than they can do, as the reason to add it to the language. + +JMN: Yeah. I like that. You’re right. There are two or three NPM packages winning the decimal wars, for sure. I can give links to these after: decimal.js and big.js. They implement arbitrary precision in one case, and limited/configurable precision in the other. And I guess you’re right, the question might be something like: what is missing from the libraries? Why don’t they get the job done? Why are there still these issues? One might expect that being out of the language actually is an issue. There might be cases where, because it’s not in the language, we can’t import whatever we like into our project.
That could be a blocker in many use cases. The speed issue is valid to raise, but it’s not that big of an issue, because in the use cases I have seen, speed is not so much the issue; it’s more simply the existence of this datatype in the language, one which accurately represents decimal numbers. And indeed it won’t be as fast as binary floats, but that’s okay. It’s not, you know, a million times slower. It’s, say, 2 or 3 times slower. Things of that order. + +WH: I have the same question. The alternative we have is just using userland libraries, and why is that so bad? Has a dominant decimal userland library emerged or not, and why, or why not? + +JMN: That’s a good question. I am not sure I have an answer for that. I will have to look at the statistics and see which one is dominant and why it’s dominant. Maybe there are features one has versus the other. Another question might be: what is the reason for the apparent ignorance of these packages, or the inability to deploy them? Why can’t we just slip them into the JS? Why not? + +ACE: An advantage is that it gives people a way of converging on one type. Otherwise – and we see this at Bloomberg, not with decimal but with other types – one library says, okay, when you parse, we will turn dates into this one type. But then some other library that is going to consume that data and serialize it to something else says, you want dates in this other type. If it’s a value on its own, you can do the conversion. But what you see is these really large nested objects describing the details of a particular transaction, and deeply nested in there are these types that are slightly wrong. What the teams end up doing is writing things that traverse the whole object, looking for things that are incompatible and converting them – lots of traversal, lots of object creation, switching from one type to the same type but in a different library, to make things compatible.
And that happens when there isn’t just one de facto type that all the libraries use – instead each one ships its own, like the MySQL library coming with its version of decimal, and the YAML library coming with its decimal library. Ideally everyone would agree on one userland library rather than having their own, but that doesn’t happen in practice on a large scale. + +DE: Continuing on what ACE was saying, I think an example here is RPC libraries. If you have a generic RPC library for sending and receiving messages, and its schema includes decimals, it has to choose a decimal datatype for JavaScript – analogous to the database case or the kind of UI presentation-oriented one. It makes sense to include this in a standard library, even if it doesn’t depend on primitives or operator overloading, because this proposal is analogous to `Temporal` and to `Object.groupBy`. Those are bigger and smaller proposals respectively; decimal is medium-sized. In all these cases, JS developers use userland libraries, which have some medium level of adoption. And then there are some developers who try to avoid adopting any library and roll their own – for decimal, that means strings or numbers representing big calculations, with supporting logic sprinkled throughout the program. Bloomberg uses a mix of these strategies. And this split causes overhead, lower-quality implementations, and less consistency. In the ecosystem we do see decimal libraries, but everyone has to choose between two or three which are not updated frequently and are written by the same person. There is a quality issue which, in principle, could be addressed by the ecosystem. + +DE: But in the case of `Object.groupBy` and in the case of Temporal, we decided: even though we have lots of precedent in the ecosystem, we take that as positive – it shows this is useful. So we won’t just say everyone should use lodash or use moment.js. We will add something to the language because we have seen it’s useful.
And decimal is particularly useful because it comes up in so many UIs and things that interact with people. That’s a common thing with JavaScript: it is commonly used to build such interfaces and represent such quantities. So that’s why we have this ecosystem of libraries. + +DE: It’s normal that an ecosystem library is lower quality than what we can achieve with something built in. And that is the current state of the world. None of this has to do with whether it’s a primitive. + +CDA: We have just under 10 minutes remaining. + +MLS: So I will ask and speak a little bit to why it should be in the language . . . I appreciate that the slides you presented, Jesse, show that other languages have decimal support, and that they have that support as libraries, not as built-in primitives. So I think it makes sense for interoperability to have one version. Yes, there are a lot of things that we have added to the language, or are adding, that came out of libraries. But I think there’s a case here for adding this to the language as part of the standard library. Earlier in the discussion, about 20 minutes ago, there was a suggestion that we should discuss this on GitHub. I am looking at your slide 17, and I hope there are things there to agree upon: the data model, IEEE decimal128; no primitive datatype; no built-in operators; and a new standard library object. I am hoping we can agree on all of the things on that slide. Normalization seems to be something that needs to be discussed. + +JMN: We are getting stuck in the mud with a couple of the issues here. What is interesting is that I wanted to make the argument – let me rephrase it – that decimals do exist in some form in other languages and they seem to work somehow. I didn’t really try to prove that claim, but the idea is that we pick some reasonable, responsible decimal representation and put it in, and that makes a big difference. Mission accomplished, basically. Maybe there are some fine details to be discussed.
But overall, exactly – to repeat what you are saying – this decimal128 is probably going to be just fine. The existence of other alternatives doesn’t have to slow us down or block the discussion. + +DE: Although I support the conclusion that MLS is proposing – that we go with objects and not primitives – I was hoping that my two withdrawal topics, such as custom literal suffixes, would help us reach conclusions. So could we call for consensus on that question once we go over those topics? Because I want to review more of why we are not doing user-defined literals in general. + +SYG: I can say that Chrome will not support a primitive. We can live with an object, but we are not in explicit support of the object-based solution. I will block on it being a primitive; you have the negative side of that. + +JMN: Some of the design decisions taken over the course of the year, over the meetings that we have been through here, are designed to make it both implementer-friendly and cover enough use cases to be attractive and usable for developers as well. Yeah. Initially, when I started to think about this stuff, the proposal was in a much grander state, and we retreated from things like suffixes. I appreciate the withdrawal from these as well. + +DE: To be clear, I was just advocating for this in order to make sure that the committee had a thorough understanding of the motivation for this [concluding on objects and methods], rather than only hearing the fact that it [primitives and operator overloading] would be blocked. + +DE: I want to ask the committee: what are people looking for, for Stage 2? What are the questions that people want answers on? Are there things that need to be more concrete? The proposal is already quite concrete, and the details might change during Stage 2. Are there any major discussions blocking Stage 2, or is it really a question of agreeing on the motivation? + +WH: I am still unconvinced this cannot be done by a userland library.
+ +DE: Oh, hmm. Could you elaborate? Of course it can be done by a userland library – so could Temporal or Object.groupBy. What is it that you’re looking for? + +WH: It’s qualitatively different from Temporal. It’s a fairly self-contained library. And my nightmare scenario is that we ship this and get pushback from users who say they want this to be a primitive, and a few years later we wind up with two versions of decimal in the language — that’s what I am afraid of. + +JMN: The intention with this proposal is to be future-proof, to allow for that future, even though that’s not at all what we are looking at right now. + +WH: You can’t make this future-proof. + +DE: As JMN says, this proposal is future-proof in the sense that decimals could be retroactively understood as primitive wrappers. The only “overhang” or superfluous part would be the existence of the static methods on the Decimal object. Would it even be a bad thing to have static methods for arithmetic operations while operators are available? And this is a theoretical situation, in which engines change their analysis of the cost/benefit tradeoff of operator overloading – which we see no signs of changing. + +CDA: 2 minutes left. I want to get to JHD. + +JHD: My understanding is that Chrome and others won’t allow a primitive or operator overloading (e.g. writing `a + b` and having it produce a Decimal). And so my current position – and I am open to being convinced otherwise – remains that it’s not valuable enough otherwise to have it in the language at all. I certainly hear the argument about coordination – I argued that point for Temporal’s inclusion. To answer Dan’s question: there are unknowns, but one of the things that would help convince me is a reliable userland implementation with broad usage for a reasonable period of time.
We had moment and other date libraries for well over a decade – enough time to exercise all of the important use cases, work out the bugs, and figure out the problems with their API design. If we had just shipped moment in 2015 or something, we would have been stuck with a bad API. But nobody knew it was bad until it was in use for a long period of time. That’s what I am hoping to see in the future. + +DE: JHD, we do have that list of libraries, as Jesse said. A few years ago, I got in touch with MikeMcl, who wrote them, and he gave us initial feedback. We can ask him particular questions. It’s great to follow up offline about particular points to discuss with the well-known dominant packages in the area. + +JHD: Yeah. I don’t have any concrete points at the moment. I will let you know offline if I think of some. It’s a general sense of… with dates, basically the entire ecosystem was like, “we use moment, but don’t want to have to. Can the language make something better?” – and that’s what Temporal is. That’s not the same with decimals: it isn’t “we use this and want something better.” What they use is Numbers, and what they want to use is something like Numbers. I am not yet convinced otherwise – that a non-syntactic Decimal would give them what they actually want. We can talk offline for sure. + +DE: It would be good to know what is insufficient about the previously presented survey responses, which showed a bunch of unanimity. + +CDA: We’re out of time. I don’t know if you want to have a continuation on this or not. I have captured the queue either way. Did you want to quickly dictate key points for the summary in the notes, JMN? + +### Speaker's Summary of Key Points + +JMN presented the state of affairs of decimal numbers in a variety of languages. He compared the current proposal to those languages and how they solve the problem of decimals. He then opened up discussion of the core issue of normalization, covering its pros and cons.
He also discussed conditions for perhaps reaching Stage 2 in the future. + +The decimal proposal is based on objects and methods, rather than primitives and operator overloading. It uses IEEE 128-bit decimal semantics. WH and JHD raised doubts about whether Decimal would be better done in a library, rather than being built-in. DE argued that this was a natural evolution, analogous to other proposals, supported by the current state of the ecosystem, and an improvement on it. + +### Conclusion + +The decimal proposal remains at Stage 1 diff --git a/meetings/2023-11/november-28.md b/meetings/2023-11/november-28.md new file mode 100644 index 00000000..45cc7edc --- /dev/null +++ b/meetings/2023-11/november-28.md @@ -0,0 +1,1016 @@ +# 28th Nov 2023 99th TC39 Meeting + +----- + +Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. + +You can find Abbreviations in delegates.txt + +**Attendees:** +| Name | Abbreviation | Organization | +| ---------------------- | ------------ | ----------------- | +| Samina Husain | SHN | Ecma International| +| Istvan Sebestyen | IS | Ecma International| +| Ashley Claymore | ACE | Bloomberg | +| Waldemar Horwat | WH | Google | +| Linus Groh | LGH | Invited Expert | +| Nicolò Ribaudo | NRO | Igalia | +| Ben Allen | BAN | Igalia | +| Rezvan Mahdavi Hezaveh | RMH | Google | +| Ujjwal Sharma | USA | Igalia | +| Chris de Almeida | CDA | IBM | +| Daniel Minor | DLM | Mozilla | +| Tom Kopp | TKP | Zalari GmbH | +| Romulo Cintra | RCA | Igalia | +| Kris Kowal | KKL | Agoric | +| Mark S. Miller | MM | Agoric | +| Kevin Gibbons | KG | F5 | +| Philip Chimento | PFC | Igalia | +| Jesse Alama | JMN | Igalia | +| Frank Yung-Fong Tang | FYT | Google | +| J. S. 
Choi | JSC | Invited Expert | +| Christian Ulbrich | CHU | Zalari GmbH | +| Devin Rousso | DRO | Invited Expert | +| Eemeli Aro | EAO | Mozilla | +| Sean Burke | SBE | Mozilla | +| Anthony Bullard | ABU | ServiceNow | +| Ron Buckton | RBN | Microsoft | +| Daniel Ehrenberg | DE | Bloomberg | +| Ethan Arrowood | EAD | Vercel | +| | | | +| | | | +| | | | +| | | | + +## TCQ Reloaded + +Presenter: Christian Ulbrich (CHU) + +- [slides](https://cloud.zalari.de/s/xkr7PLobd2w8pbT) + +CHU: Good morning, everyone. Some might not have met me in person. We are a German software development company that I own, and we have finally put our stuff to good use and reloaded TCQ. I might be a little bit nervous because this is my first TCQ presentation, so please bear with me. First, the good parts of TCQ. We all know and love this. So TCQ does its thing: we are using it for organizing the meetings, and it is a well-proven, tailor-made solution. But there are some bad parts as well. What are the bad parts? Well, first and foremost, the development has stalled: there are a lot of open PRs which have never been merged. It also has, of course, its own fair share of bugs, and there seem to be performance issues. I am not sure where they are coming from, but we might find out later. I don’t really know where it runs – I have the feeling it’s underneath Brian’s desk. Certainly, in past meetings there were problems with TCQ; it goes down sometimes, so we needed to reach out to somebody from Microsoft, to call them or whatever, to get it up and running again. So there is at least a group of people that are actually able to restart it, but the group is pretty small. And it has no logo. So let’s talk about TCQ reloaded. What did we do? Well, first and foremost, we made a logo. Well, actually DALL-E made the logo. 
(showing totally broken logo, having no resemblance whatsoever to TCQ) Either I'm not a good AI prompter or the singularity has not happened yet, so we did not do the logo. But the idea was not to reinvent everything, but to build on the basis of TCQ – basically, to take it as what it is. We are using it and it does its thing, but currently it’s very hard to maintain. + +CHU: So we set out to get it to run locally, and to have a basis for developing it further. So we externalized all of the hard-coded stuff, and we finally got it to run locally. The development is open. We are now able to run TCQ locally with an externally provided CosmosDB from Azure. The way we have it currently running, we did not really touch anything apart from the externalized things; we didn’t change much. So it also supports the current auth flow from GitHub. For the sake of testing we had to become chairs – no offense meant – and that’s how it looks. + +CHU: And next up will be my colleague Tom (TKP), who will show you the current state. Tom, if you could please give us a small demo – you are welcome to take over. + +TKP: So I have this thing running locally. I am already logged in as a chair on the left side, and have a private window on the right side. Typical things like we discussed on Monday – the agenda, the queue – everything is there. I didn’t change anything there. And I can also log in with a GitHub account; like all the other things, the IDs don’t change. So let’s just take it. And so we also joined as a normal user, and the queue should also have been updated. So this just runs on a, let’s say, private CosmosDB – DocumentDB does not exist anymore. Everything is persisted there as a simple JSON document. So yeah, further development should be easy, I guess. And even migrating away from a distributed Azure database might be possible. + +CHU: Thanks for the presentation. So I will take over again. What are the next steps? 
The next step is a staging environment. We want to have a publicly deployed environment, so that whenever we change the code it gets publicly deployed. Another step is that we want to decouple it from the currently required services, so that it is no longer bound to, so to say, the CosmosDB stuff. And then we want to modernize the stack, because the current stack is hand-crafted; the frontend is based on Vue.js, so the obvious thing is to migrate to Nuxt.js. Then we want to set up a CI-backed staging environment that, as I said, is deployed publicly. The other stuff will be modeled in issues in the [repo](https://github.com/zalari/tcq/) and I will do this within the next days. So what is the outlook for TCQ? My idea is, once we have a modernized staging environment, to take it for a test at the next meeting. So for the next meeting we can test and use the TCQ from the staging environment; if there are problems, we can always switch over to the normal TCQ. After the next meeting, if the staged one has proved to work sufficiently, we could collect and actually implement future improvements that are already there in the old repository. + +CHU: So, coming to the end of my presentation, are there any questions? One thing is, right now the repo is in our organization, but we can move it to the TC39 organization. And my idea would simply be that we continue developing it on GitHub, but there might be other ideas for collaboration – probably just modeling it with issues, talking about the issues, and doing the PRs and so on. And, of course, I am open for general feedback. + +KG: I have no questions. But I am very excited that you have done this work. Looking forward to using this and having a TCQ that is maintained and that we can all contribute to. + +MF: Yeah. I am also looking forward to being able to fix some of the many issues I've opened on TCQ. 
I am happy to contribute, and I want to say the most important part is to make sure you have documented the development process, so that I can run it on my local machine without whatever experience was needed to figure out the previous TCQ. + +RPR: I am very pleased by this. In general we have suffered problems in the past when TCQ has been unreliable; in some cases it has been down and stayed down, and other times it has been flickering – it seems to be up, and then it seems to be down. So in general, broadening the ability for people to maintain it and test it, all of that sounds good. I am sure there are feature requests that people will have. My personal one is that sometimes two people click at the same time – because it’s a multiplayer app, two people can conflict – and if we could have an undo button, that would solve a lot. + +USA: Rob, I just did that. + +RPR: You just added a feature request for that. + +USA: I accidentally clicked, so you’re not in the queue at the moment. + +RPR: So yeah, thank you for doing this work. + +CDA: This is really great. When you first mentioned rewriting it – I think it was in the TDZ channel – I thought you were joking. I was really surprised to see this. I think it’s great. Excited about this. Obviously, there’s a lot that can be improved upon. So that’s great. On my next topic: I don’t have strong feelings on when to move the repo to the TC39 org; whether you do it now or later shouldn’t make too much difference, I don’t think. + +CHU: Yeah. Perfect. So I think for the time being we will stay in our organization. But everyone is invited to contribute. Everyone is invited to join and have a look at it. + +CHU: And yeah, MF, I hope TKP already provided the documentation to get it to run locally. That was at least a requirement. + +TKP: I hope that it’s sufficient. + +USA: And that seems to be it for TCQ. 
Thank you, Christian and Tom, for all your work. + +CHU: Yeah. Thank you very much. As I have said, I will add the link to the presentation, and you can just join the repo and also reach out to us, on Matrix or preferably on the repo, because that is the way we coordinate. So thank you very much. + +## Stage 3 update of Intl Locale Info API + +Presenter: Frank Yung-Fong Tang (FYT) + +- [proposal](https://github.com/tc39/proposal-intl-locale-info) +- [slides](2023 Nov Intl LocaleInfo API Stage 3 Update) + +USA: Next is the Stage 3 update for the Intl Locale Info API. I can also see Frank. Frank, are you ready? Perfect. + +FYT: I apologize, I was sick yesterday and I don’t think I put up the slides link on time, so apologies for that. You can find the link from the repo: it was not added to the agenda, but it is linked from the proposal page, which has a link to the presentation. + +FYT: So today I am going to talk about an issue with the Stage 3 proposal Intl Locale Info. We discussed this in July and September, but I have to come back, because the solution we discussed in September, after we merged it, caused some problems that we didn’t consider. And there was information that, when we made the decisions in July and September, I missed and didn’t disclose, which means we need to think about the issue more deeply. So again, Intl Locale Info has been at Stage 3 for a while. Back to the main purpose of the proposal: to expose locale information, such as the first day of the week and the hour cycle used in a locale. We have been discussing this for a while, having advanced to Stage 3 in April 2021. One of the issues we have discussed in recent months, starting in July, is the firstDayOfWeek keyword. We have come back and forth a couple of times, and you can see the slides about our discussion. + +FYT: But I will give you the background to this. So in Intl.Locale, we have a getter function called getWeekInfo. 
And one of the fields is firstDay, whose value is supposed to be numeric, from 1 to 7, reflecting the value of the locale ID's u-fw keyword. This is the resolved information about what, for a particular locale, is considered the first day of the week. For example, an easy thing to understand: in the US, the firstDayOfWeek is Sunday. In Europe, usually that’s Monday. Well, the complication comes in that in UTS 35, when you define a locale, there’s a keyword fw. The fw keyword is currently defined with 7 values, Sunday to Saturday. Right? But syntactically, the fw value could be anything from 3 to 8 alphanumeric characters; it is not limited to those 7 values. The problem is that, as with other similar keywords, if we have a value but the value is not defined, we usually handle that at the parsing level of the API: we return the value as-is. So it’s kind of garbage in, garbage out at the parsing level. At the resolution level, we resolve to the correct one, if we don’t throw. + +FYT: So what happened in July is that we agreed to return that value in firstDayOfWeek, which, remember, is not the resolved value. That is the parsed value. Garbage in, garbage out: if you put in any string, it returns that string. But in September, I think because DE considered that we should return the kind of value ECMA-402 and ECMA-262 usually return – a value between 1 and 7 representing the weekday – there was some back-and-forth discussion. We said, okay, maybe the firstDayOfWeek value should be canonicalized to 1 to 7, or undefined if the string didn’t define it. But there’s another issue, which is what will happen if someone puts in something that is a syntactically legal value, but not one of those 7 defined values. For example, let’s say one day in the future we are going to support more than 7 days a week. Right? For example, the French revolutionary calendar, or French republican calendar, has 10 days in a week. We don’t currently have a plan to support it, but it’s hypothetically possible to add additional values for that.
Right? Currently, if we have the locale "fr-u-fw-octidi", that is a valid locale ID. We don’t need to support such a calendar, but we should be able to construct it and pass it through. With what we defined in September, the question is: if we construct that locale, what value will be returned? Should we throw, should we fall back to the default value for French, or should we do something else? What should the return value be if we canonicalize everything to 1 to 7? That is not a good answer; we have to consider whether we should throw, or what. With the September resolution this becomes a very strange result, because firstDayOfWeek returns 1 to 7, or undefined if people didn’t define it. So it becomes a problem: it can never represent anything that is syntactically legal but semantically undefined. The interesting consideration at that time was that people calling this should get a return value useful for passing into Intl APIs. Whatever got resolved could be used for that purpose, and that is already happening in another place: when you construct that locale, getWeekInfo's firstDay actually always returns 1 to 7. Right? And that is what the other TC39 members were asking for; that particular value is already present somewhere else. In this particular place, the firstDayOfWeek getter is parallel to other information from parsing the locale ID. For example, if you have a calendar keyword for something like the French republican calendar, and you don’t support the French republican calendar, it returns back the value as-is. It’s garbage in, garbage out. So it can be used as a transport layer for passing through information, maybe to some future work that might be able to handle it, but not now. We don’t want to canonicalize that. + +FYT: After a lot of discussion, and after I sent an email to DE and we discussed it in TG2, we think we would like to backtrack to something slightly different from what we proposed in July. What does that mean?
We are going to take the first-day-of-week value, anything that fits 3 to 8 alphanumeric characters. In addition, if the value is numeric, or is a string from "0" to "7", we map it to a weekday string, with 0 and 7 both mapping to Sunday. Right? So we first map the number, and then if whatever got put in there doesn’t fit the syntax following the UTS 35 definition, we throw. Right? That is what other similar APIs do to fit into the syntactic model. Otherwise, the getter returns undefined if the keyword is not defined, or returns the string that people put into the fw keyword or the firstDayOfWeek option bag value. Notice that getWeekInfo's firstDay still remains returning only 1 to 7, and that was, as I remember, the purpose people were asking for. So that is our proposed fix. + +FYT: So here is the PR; people can take a look at it. I apologize that it came out pretty late. We discussed this kind of idea in TG2 at our meeting; unfortunately, at that time the PR was not ready yet, so TG2 members didn’t have a chance to look through it. But I think we agree that’s the right way to solve both issues. Still, the PR needs a much more careful look. I sent an email to DE, and I think DE understands our consideration that there are really two levels of API. One level is merging the string and option bag; another layer is really the resolved value. He understands that the resolved value still returns the numeric form, which I think is what people asked for. He has some comments on that in PR 79; people can take a look at PR 79 or the issue linked to it. My request is to ask the committee for consensus on PR 79.
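As a hedged sketch of the two levels FYT describes: the literal `fw` pass-through shown below is the behavior *proposed* in PR 79, not necessarily what a given engine ships, so the example feature-detects and guards against engines that reject unknown values.

```javascript
// Illustrative only: behavior under the fix proposed in PR 79 of the
// Intl Locale Info proposal. Engines without the proposal, or with the
// older canonicalizing behavior, may throw, so everything is guarded.
if (typeof Intl !== "undefined" && Intl.Locale) {
  // Parsing level: the firstDayOfWeek getter reflects the fw keyword
  // literally, as long as it is syntactically valid (3-8 alphanumerics).
  try {
    const fr = new Intl.Locale("fr-u-fw-octidi");
    console.log(fr.firstDayOfWeek); // "octidi" under the proposed semantics
  } catch (e) {
    console.log("fw value rejected by this engine:", e.constructor.name);
  }

  // Numeric input is first mapped to a weekday string (0 and 7 both
  // meaning Sunday), per the proposed option handling.
  try {
    const us = new Intl.Locale("en-US", { firstDayOfWeek: 7 });
    console.log(us.firstDayOfWeek); // "sun" under the proposed semantics
  } catch (e) {
    console.log("firstDayOfWeek option unsupported:", e.constructor.name);
  }

  // Resolution level: getWeekInfo() is unchanged and only returns 1-7.
  const loc = new Intl.Locale("en-US");
  if (typeof loc.getWeekInfo === "function") {
    console.log(loc.getWeekInfo().firstDay); // a number 1 (Monday) to 7 (Sunday)
  }
}
```

The point of the sketch is the two-level split: the getter is a transparent parsing-level view of the locale ID, while `getWeekInfo()` stays the canonical, numeric, resolution-level API.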
+ +DE: As FYT said, I am in favor of landing this PR, but for a somewhat different reason: when I previously suggested normalizing to day numbers, I hadn’t fully considered how Intl.Locale’s already-shipping getters handle other options in a sort of literal, parsing-level way. For example, hourCycle is treated similarly: an invalid value is still accessible directly via the Intl.Locale.prototype getters. So just for consistency with that, this all makes sense, and you can use locale info, or the resolved options, if you want to get a higher-level understanding. So this seems good to land to me. I want to ask FYT, are there other issues in this, or do we have everything in the proposal closed and resolved at this point? + +FYT: You mean Intl locale info itself? + +DE: Yeah. The Stage 3 proposal. + +FYT: We have another issue that I am still working on. Sorry, I couldn’t put it here. One of the issues is how to resolve the order of firstDayOfWeek considering other keywords, for example region or subdivision. The reason is that we were waiting for UTS 35 to define the resolution algorithm; we wanted to defer to that, and they just put it out last month. I need to add additional changes to clearly define the order there. And there are several – like, 2 or 3 – other small issues. Intl locale info, from my point of view, is not ready for the 2024 version. + +DE: Okay. It sounds like, for the one issue that you went into, you have a solution in mind. Do you have solutions to the other issues in mind, or decisions to make yet? + +FYT: Not yet. + +DE: In this case, we might want to consider retracting to Stage 2 at the next meeting, if you don’t have solutions to those problems at that point. I don’t think we should have proposals that are at Stage 3 for a longer period of time where we still have open questions that we’re not on the path to resolving really quickly. + +FYT: Okay. That’s fair. That’s a pretty fair ask.
+ +DE: This is kind of a change from how we were operating a couple years ago, but it’s kind of the new trend we have been following. + +FYT: I totally understand. I feel it’s dragging on too long. I agree with that. + +DE: If it’s a couple of questions, maybe it’s possible to come back next meeting with solutions to them. If you could propose solutions, I don’t think there’s a need to retract. + +FYT: Yeah. + +USA: All right. Next up is Waldemar. + +WH: Just curious, does Intl have a notion of a week length? + +FYT: So far, no. Because – so far we haven’t touched anything beyond 7 days. + +WH: Okay. There are a few calendars that have other week lengths. The decimal calendar the French played with is one case. Another example is the Roman 8-day week. + +FYT: I don’t think the Unicode Consortium has worked on that part yet. + +WH: Okay. + +SFC: I wanted to second fully using the staging process. I think it’s very important to be more clear on – I think there’s nothing wrong, and there should be no shame or anything, in taking a Stage 3 proposal back to Stage 2 when we still have, you know, work to do on it. And I think we should do that more often. And I think this is almost a good example of one that is a fairly clear-cut case, where it can come back next meeting and ask for Stage 3 again with the new proposed changes. I think that as a group, we should be doing this more often than we currently do. And there’s no shame or anything in bringing a proposal from Stage 3 back to Stage 2. It reflects, and makes more clear, what the current status of the proposal is. + +PFC: We could theoretically support weeks that are not 7 days. That would also be possible with Temporal. But there are currently no calendars in CLDR that have weeks that are not 7 days. So it’s unclear what the correct answer would be in this case for the first day of the week.
I personally think that whether the first day of the week is Monday or Sunday is a question that is related to this particular 7-day week. I think if there was, you know, a calendar that had a week that wasn’t 7 days, the number of days in the week would need to be implemented as well. But it’s probably not relevant to the question that this parameter tries to solve, which is "given the 7-day week, which most of the world shares, is Sunday or Monday the first day?" + +SFC: Yeah. The number of days per week is something that is, I think, much more in scope for the Temporal proposal, because that’s where, you know, we sort of measure week lengths and support arithmetic and things on week lengths. In terms of this particular proposal, this is just returning CLDR data, and it does not have any concept of a week length. The only place I see a concept that could possibly be considered a week length is this conversion from numbers to strings, but again, that’s based on the ISO mapping. We’re not inventing anything here. So the concept of week length is not part of this proposal. In fact, now that we are supporting arbitrary strings in the fw keyword, if CLDR does in the future add support for an 8th or 9th day of the week, that will be easy to add within this framework. So yeah. + +FYT: Yeah. I mean, with modern-day calendars we don’t see this issue. DE actually summarized the issue well: we need this to be in string form to align with the pre-existing API. The alignment is the stronger motivation, and this is kind of an example dealing with historical calendars that I don’t think most people have shown interest in supporting. Theoretically, a lot of historical calendars may have this issue, but in the near future I don’t see people working on that. Still, we shouldn’t create this API in a way that prevents future expansion. Right? So that’s really the motivation.
We want to align, not to block. The worst part is, if we stay with the current approach, we just don’t have a good answer for what will happen if people put that in there. + +USA: I have a reply, and I am not quite sure about this one. So FYT or SFC, you can correct me on this, but my understanding is that the Tibetan lunar calendar, used in certain places in the world, does have the possibility of weeks longer than 7 days. And there has been an ongoing Google Summer of Code project for Unicode to add that calendar. So it’s certainly something that could be on the horizon soon. + +FYT: That’s beyond my knowledge. Sorry. + +SFC: Just one other quick note is that we don’t always need to call these "weeks". We could call them "calendar-week" or something similar to that. So . . . just noting. + +DLM: I would like to briefly express my support for what Dan Ehrenberg and Shane mentioned. I think with this proposal, this has been blocked for a year. In this case, it is probably appropriate to look at returning to Stage 2 to work through the open issues and then return to Stage 3. And I would like to echo what SFC said: it’s not a failure or anything like that. It’s important that the stage of a proposal reflects its reality, especially when it comes to implementation. + +CDA: I just wanted to piggyback off that as well, and ask SFC – I realize my question is kind of phrased like a false dichotomy, which is not my intent, but: is this more that we should be regressing proposals to Stage 2 more often, or a symptom that we are advancing proposals to Stage 3 prematurely? I am just trying to get a sense of what your thought is on that, and if so, would what MF is proposing as far as a process change move the needle on this as well at all? + +SFC: Yeah. My response here: this is largely a question of just, you know, more information.
After we advanced to Stage 3, we got more eyes on the proposal, and as a result a number of issues were opened that brought into question some of the decisions made when we went to Stage 3. This is a story that happens a lot, especially with Intl proposals. So would MF's stage process help with this? Well, it might potentially help get more eyes on proposals, if we have the new stage where we have to make sure that we have tests and other things written. Possibly. I don’t know for sure if it will solve all the problems; we will have to see. Maybe it will help, maybe it won’t. But yeah, the reason is just that after the proposal got to Stage 3, a lot more people gave very important and useful feedback that they didn’t give during Stage 2, and I wish they would give it earlier. But it’s not bad to get feedback on proposals, and if feedback comes at Stage 3 we need to deal with it. So yeah. That’s all. + +USA: That’s all for the queue. FYT, is there anything you would like to ask? + +FYT: Yeah, for consensus on this PR. + +USA: I think we heard a few comments supporting this. Let’s give a minute or so for people to comment. + +CDA on TCQ: +1 + +DE: I explicitly support merging the PR, and I believe that any consideration of a downgrade to Stage 2 should happen only at the next meeting, and only with advance notice. Not this meeting. + +FYT: Sorry. I don’t understand the last part of your comment. + +USA: I can paraphrase. Downgrading should only be considered next meeting. We shouldn’t be considering downgrading this meeting, which is aligned with the process that we have. + +USA: So we have consensus. Congrats, Frank. And that’s it for this topic.
+ +### Speaker's Summary of Key Points + +- FYT explained how the previous idea of making Intl.Locale.prototype.firstDayOfWeek be normalized to a value 1-7 is not consistent with how the rest of the Intl.Locale getters work. Therefore, he proposed that the getter would instead return the corresponding substring of the literal locale; the new locale info API can be used to access the high-level 1-7 value. +- There continue to be other open issues for locale info, in particular integration of some recent UTS 35 changes around defining an algorithm for determining regions/subregions, as well as 2-3 other small issues. Multiple delegates encouraged FYT to consider a downgrade at the next meeting if these problems aren’t solved promptly, and FYT agreed. + +### Conclusion + +- Consensus reached to merge [PR 79](https://github.com/tc39/proposal-intl-locale-info/pull/79) to align with the other Intl.Locale APIs +- The proposal remains at Stage 3 for now, but may be downgraded to Stage 2 if outstanding issues aren’t fixed promptly. + +## Temporal normative PR #2718 & general update + +Presenter: Philip Chimento (PFC) + +- [proposal](https://github.com/tc39/proposal-temporal/pull/2718) +- [slides](http://ptomato.name/talks/tc39-2023-11) + +PFC: This is an update on Temporal, which is a proposal currently at Stage 3. We have one small normative change to request consensus on, and otherwise this is a progress update. + +PFC: I am Philip Chimento, I work at Igalia, and this work is done in partnership with Bloomberg. + +PFC: You haven’t heard from me in a while. The last time I gave a presentation on Temporal was in July. At that time, we achieved consensus on a number of changes to the way that calculations are done with durations, which were intended to address the concerns from engines about being able to do the calculations using floating point or integers of a certain bit width. These changes are on the way to being merged into the spec.
ABL had some comments from the perspective of implementing it, which we are working to incorporate. These should be ready to merge fairly soon, and they have coverage in Test262. + +PFC: As I said before, we have one normative change to propose today, which stems from usage experience in the JavaScript developer community. If you want to know the status of everything, you can follow [issue 2628 in the proposal-temporal repo](https://github.com/tc39/proposal-temporal/issues/2628) for the detailed information. If you don’t want the detailed information, the proposal champions will give a loud signal when this checklist is complete, and then we will consider the implementation shippable. + +PFC: I measured about a month ago how conformant the in-progress implementations were to the Test262 tests that we have. I made a little list of percentages here. There are 5 implementations. SpiderMonkey and V8 are nearing full compliance, and the others are somewhat incomplete. I should know, because I worked on part of the JSC one; there are some things still missing. This graph is how many of the total Temporal test262 tests pass. It should not be interpreted as 92% finished or 31% finished, because that’s not what it measures. It measures what percentage of the tests pass. I think it’s useful nonetheless to look at. + +PFC: Also, an update on the standardization of the string format in the IETF. This is complete. It’s been accepted for publication, and so this is no longer a blocker for shipping Temporal unflagged. + +PFC: All right. On to the normative change that we wanted to present. This is a pretty simple one. It changes two lines in the spec. If you took a look at the PR during your preparations for this meeting, it’s about converting PlainMonthDay to PlainDate. A PlainMonthDay is an object with a month and a day. If you want to convert it to a date, you need to supply a year. The example here is, you know when your birthday is.
You want to know what day of the week your birthday is in 2030. So you take a PlainMonthDay with the birthday. (This is my birthday.) You convert that to a PlainDate in the year 2030, and then you query that for the day of the week. So unfortunately, if your birthday is February 29, this will throw an exception. This is a reasonable behavior in some cases. You might want that, but you also might not want it. One of the design principles we had when designing the API of Temporal is no data-driven exceptions. If the shape of the data is correct, then it should produce a result. This behavior violates that principle. The way we adhere to the principle is by returning February 28th of the year 2030, because 2030 is not a leap year and doesn’t have a February 29th. I said that throwing would be a reasonable expectation in some cases. And you can still get that; you just have to be more verbose. So that’s the change we would like to make. That’s all. Are there any questions on this? + +JHD: Yeah. So it’s totally fine that a design principle is “no data-driven exceptions by default”. However, I feel like I would want to, for my uses, always get data-driven exceptions. So is there a reason I can’t just use toPlainDate and pass in an option like `overflow: reject` or whatever it is and get that directly? As opposed to having to come up with a complex workaround? + +PFC: Yes. Personally, I think it would be a good addition to the API to have an options parameter like that, so you could select the behavior you wanted. We didn’t want to add that at this point – we didn’t want to have this change be an entirely new parameter for an API. We felt like Stage 3 was not the time for that. But there’s a repo that tracks possible additions to the API for a follow-up proposal, and adding this parameter is tracked there. It’s something that I think would make a good follow-up proposal.
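The requested change can be sketched as follows. This is a hedged illustration: the non-throwing default is the behavior *proposed* in PR 2718, and it assumes an engine or polyfill providing Temporal, so everything is guarded.

```javascript
// Illustrative only: the constraining default is the behavior proposed in
// PR 2718, not necessarily what a given Temporal implementation ships.
if (typeof Temporal !== "undefined") {
  const birthday = Temporal.PlainMonthDay.from("02-29");

  // Proposed default: constrain instead of throwing. 2030 is not a leap
  // year, so the result is the nearest valid date, February 28.
  const date = birthday.toPlainDate({ year: 2030 });
  console.log(date.toString());  // "2030-02-28" under the proposed behavior
  console.log(date.dayOfWeek);   // a number 1 (Monday) through 7 (Sunday)

  // The throwing behavior stays available, more verbosely:
  try {
    Temporal.PlainDate.from(
      { year: 2030, monthCode: birthday.monthCode, day: birthday.day },
      { overflow: "reject" }
    );
  } catch (e) {
    console.log(e.constructor.name); // "RangeError": no Feb 29 in 2030
  }
}
```

The verbose form is exactly the trade-off PFC describes: `overflow: "reject"` opts a caller back in to data-driven exceptions.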
+ +JHD: So you are saying that as a potential follow-up proposal, it’s possible to add some sort of option in every place where data-driven exceptions are avoided, in order to get them? (As a general principle; I don’t know what specifics it implies) + +PFC: It sounds about right. I don’t think we are tracking that with an issue to add it everywhere. In most places it does already exist. In this case it does not, because it's a shorthand API. + +JHD: Awesome. Thank you. + +WH: I am trying to understand the claim of “no data-driven exceptions”. So does this mean you could have a February 30th? What would that map to? + +PFC: February 30 is never valid. So yeah. In this case, you would not be able to create a PlainMonthDay object with February 30th. + +WH: But you can create one with February 29th, right? + +PFC: That’s right. Because in the ISO calendar, February 29th is a day that occurs in a year. You know, once every four years it occurs, so . . . The idea was PlainMonthDay— + +WH: I am trying to figure out how that’s distinguished from a plain date of February 29th, 2030. + +PFC: The date of February 29th, 2030, doesn’t exist. You can see a PlainMonthDay as a square that could possibly be on your desk calendar for a certain year. + +WH: That’s not what I am asking. What happens if you create the PlainMonthDay of February 30th? + +PFC: That’s not possible; the constructor or the from method will just throw, because that date doesn’t exist in the calendar. + +WH: Okay. What happens when you create a plain date of February 29th, 2030? + +PFC: That will also throw, because that date doesn’t exist in the calendar. + +WH: Okay. Then I am confused about what “no data-driven exceptions” means. + +PFC: Right. Well, it has to do with the option that JHD was talking about.
So if you – + +JHD: I think what WH is asking is: if I can get an exception based on which day and month I pass in, such as February 30th or March 75th, that is a data-driven exception, because some data throws and some doesn’t. How is that compatible with the principle of no data-driven exceptions? + +PFC: I see what you are asking. March 75th is, like, data of the wrong shape. Right? + +SFC: Can I respond to that? So my mental model is that once you’re inside the Temporal type system, we apply the principle. Entering the Temporal type system – that’s the layer where we already, for a very long time, have enforced validity. And once you’re inside the Temporal type system, we do not throw; that’s what is happening here. You can’t create a date that doesn’t exist, because you’re going from basically untyped data into typed data, into the Temporal type system, through the constructor. That’s where we have already, for a very long time, done the validity checking. This change only applies once you’re inside the type system. And that’s the difference between the two code examples: the line that we are fixing, shown at the top, is toPlainDate. It’s not a constructor. That’s the difference. (addendum: constructors support validity checking but the default behavior for options bags is to constrain.) + +PFC: Yeah. I realized I misspoke. If you try to create a plain date of February 29th, 2030, what it does depends on the overflow parameter you can see on the bottom line of code. If you pass `overflow: "reject"`, you are saying: I am fine with data-driven exceptions, I would like this to throw. The default is actually that you get a plain date of February 28th, 2030. I misspoke earlier. The reason why March 75th is not a data-driven exception is because that’s the wrong shape of data. That’s not data you would have in a list of months and days.
The idea is, if you are processing a whole list of data that is valid in one domain, like the month-day domain, you don’t want to sometimes have a list that processes fine and sometimes have a list that throws halfway through. That’s the principle. If you have March 75 in the list, the data is faulty. That’s how we think about that difference. + +WH: If you have February 29, 2030, then your data is faulty too. + +PFC: Right. + +WH: So why do you need `overflow: "reject"` there? + +PFC: toPlainDate is a shortcut for creating those objects, so you can choose the behavior you want with PlainDate.from. + +WH: Yeah, your explanation left me much more confused than when I started. So it’s hard for me to support this until I understand — + +PFC: Okay, I’m sorry about that. Is there another point that I can try to clarify? + +KG: So I have maybe a related question. Maybe the same question that WH has. So it makes sense to me why a plain month day of February 29th should be accepted by default. Is -- I guess the question is what’s the -- hmm. All right. Let me back up. I think phrasing this as "no data driven exceptions" is a confusing way to phrase this principle, because clearly some data causes exceptions. And I think that if you could elucidate this principle in a way that doesn’t sound like no data can cause exceptions, that might be an easy way to understand what is guiding all of the design decisions. It seems like it is probably coherent. But I think that phrasing this as "no data driven exceptions" is confusing, because of the data-driven exceptions. + +PFC: Okay, yeah, I understand that feedback. Let me try to phrase this in another way. Let’s forget the 'no data driven exceptions' and just say you have a list of people with their birthday, and it could be December 15th or it could be February 29th.
People who are born on February 29th are not very common, so there's a fair chance, if you have a list of 50 random people with their birthdays, that you won’t have somebody born on February 29th. So you write code to find the day of the week of everybody’s birthday in 2030, and you write it in such a way that February 29th would throw, but you don’t know that, because nobody thinks about February 29th usually. And if February 29th is not in your list of people’s birthdays, then you won’t notice. So you’ll have this code with this potentially throwing bug that lies dormant until you add somebody to your list that was born on February 29th, and suddenly your data processing stops working. And this was actually the motivation of the person from the developer community who reported this issue to us, who said, 'look, you have this example in your documentation of finding the day of the week that somebody was born on, but it doesn’t work for certain dates.' Which is the motivation for changing this. I apologize for the confusing line about the design principle, but does that explanation make more sense? + +KG: That explanation does make more sense, and I think the part that I am confused about now is not precisely about this PR. It’s about the design principles in general, and a way to put this is: you’ve said that February 30th unconditionally rejects, like a plain month day of February 30th unconditionally rejects, and my question is what distinguishes a plain month day of February 30th from a plain date of February 29th, 2030? Both of those seem wrong in exactly the same way, and I don’t understand why they’re handled differently, or perhaps they’re not handled differently and I’ve just misunderstood. + +PFC: I said earlier that I had confused the issue by misspeaking about the `from()` method, so let me give a rundown of all the ways that you can create a PlainMonthDay.
You can create it by passing a string to the `from()` method, like in this example here. You can create it by passing a property bag to the `from()` method. Or you can create it using the constructor. The constructors are the low-level ways to do things. We expect people using the API generally to not use the constructor directly unless they are doing something in particular that they need it for. The constructor only takes dates from the ISO 8601 calendar, so if you pass the PlainMonthDay constructor a month of February and a day of 30, that will unconditionally reject. If you use the `from()` method, you can pass a string like in this example here. If you pass `"02-30"`, that’s an invalid ISO string, so the parser should reject it. We can’t really do anything about that unless we want to accept some ISO strings which are invalid, which I don’t think is a good precedent to set. If you pass a property bag to `from()`, like `Temporal.PlainMonthDay.from({ monthCode: "M02", day: 30 })`, then you can pass the same overflow option that you see in the bottom line here: `reject` or `constrain`. So you can select which behavior you want. The default is to constrain, which would move the date back to the nearest valid date. You could also choose to pass `reject` there, which says I do want an exception.
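PFC's rundown of the creation paths might be sketched like this (a hedged illustration assuming a Temporal implementation or polyfill is available; these particular behaviors are long-standing and separate from the PR under discussion):

```javascript
// A sketch of the three PlainMonthDay creation paths PFC describes;
// guarded so it is a no-op on engines without Temporal.
if (typeof Temporal !== "undefined") {
  // 1. ISO strings: the ISO 8601 grammar itself rejects impossible dates.
  try {
    Temporal.PlainMonthDay.from("02-30"); // no Feb 30 in the ISO calendar
  } catch (e) {
    console.log(e.constructor.name); // "RangeError"
  }

  // 2. Property bags: the default is overflow: "constrain", which moves
  //    the value back to the nearest valid date instead of throwing.
  const md = Temporal.PlainMonthDay.from({ monthCode: "M02", day: 30 });
  console.log(md.toString()); // "02-29" (the ISO reference year is a leap year)

  // ...and overflow: "reject" opts back in to data-driven exceptions.
  try {
    Temporal.PlainMonthDay.from({ monthCode: "M02", day: 30 }, { overflow: "reject" });
  } catch (e) {
    console.log(e.constructor.name); // "RangeError"
  }

  // 3. The low-level constructor takes ISO values only and always rejects
  //    invalid ones.
  try {
    new Temporal.PlainMonthDay(2, 30);
  } catch (e) {
    console.log(e.constructor.name); // "RangeError"
  }
}
```

This matches the symmetry KG confirms next: strings reject per the grammar, while property bags default to constraining and accept an `overflow` option.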
+ +KG: On the other hand, if you are passing a string, then the string 02-30-2030 or whatever the relevant string is – 2030-02-30, I guess – if you’re passing that string to PlainDate.from, it will not reject because that’s a valid ISO string, even though it represents a date that is not real; whereas if you pass the string 02-29 to PlainMonthDay.from, that will throw because it doesn’t match the grammar. Is that accurate? + +PFC: No. Well, the first part of what you said is accurate. The bit about the string is not. For a month-day, `02-29` is a valid string. + +KG: Sorry, it should have been 02-30. If you pass the string "02-30" to PlainMonthDay, it will throw. + +PFC: Right. ISO 8601 says that is an invalid month-day string, because the ISO calendar never has 30 days in February. If you pass a string to `PlainDate.from()`, like `2030-02-29`, ISO 8601 also disallows that as a valid string, because that date doesn’t exist in the ISO calendar. + +KG: So they are consistent in that if you pass an invalid date or if you pass February 30th -- if you pass an invalid date as a string to PlainDate, or an invalid date as a string to PlainMonthDay, in both cases they throw, and in both cases you can instead pass a property bag and specify the behavior with overflow reject. Is that accurate? + +PFC: That’s accurate. + +KG: That sounds good to me. Thank you for clarifying. + +NRO: I just got my answer. SFC said that errors are only when entering the Temporal type system, and I was asking if the second line in the second code block here is entering the Temporal type system or not. From the discussion right now, I think my answer is that this is considered to be within that system already? + +SFC: Yeah, I should clarify that: string parsing versus a structured options bag are sort of different. I think Philip said it much better than I said it. + +WH: When does Temporal.PlainDate.from throw if you don’t specify any options to it? + +PFC: Let me look that up.
Okay, if you don’t specify any options, it depends on the parameter that you pass. It could be a string or a property bag or another Temporal object.

WH: So let’s say you have `{ year: 2030, month: 2, day: 29 }`, would that throw or not if you don’t specify options?

PFC: If you don’t specify any options, that would not throw. It would give you February 28th.

KG: If you specify it as an options bag, but not if you specify it as a string, right?

PFC: Right. Because ISO 8601 makes that an invalid string, which is not in our power to do anything about.

WH: Okay, so what about `{ year: 2030, month: 2, day: 30 }`?

PFC: So if you specify that as a string, it would also --

WH: Not as a string. As an options bag.

PFC: If you specify it as an options bag, you would get February 28th, if you don’t specify the option of overflow `reject`.

WH: Okay, so you can get February 32nd?

PFC: I mean, you can specify, like, day infinity -- no, you can’t do that. You can specify, like, `day: Number.MAX_SAFE_INTEGER` with the default option and then get February 28th.

WH: Okay, if you have a negative day, what do you get?

PFC: Negative numbers are not allowed. It has to be a positive integer.

WH: Okay, so zero is not allowed either?

PFC: Right.

WH: Okay — what would help me is just a few slides explaining what the rules are, because right now I’m asking about some cases I can think of, and the more cases I ask, the weirder this seems, so without understanding the principles, it’s hard for me to say whether this is a worthwhile change or not.

CDA: Well, we are out of time. So are you -- sorry, are you seeking consensus to land this PR?

PFC: Well, it sounds like we don’t have that. I don’t know, I’m happy to prepare an overview of how Temporal objects are created? Would that be an appropriate thing to do for the next plenary? What’s your suggestion?
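The constrain behavior in WH's examples can be illustrated with a small sketch. `constrainDay` is a hypothetical helper, not Temporal's actual algorithm; note that non-positive days are rejected outright, matching what PFC says:

```javascript
// Toy model of overflow: "constrain" for a full ISO date (illustration only).
function constrainDay(year, month, day) {
  if (!Number.isInteger(day) || day < 1) {
    throw new RangeError("day must be a positive integer");
  }
  // Date.UTC(year, month, 0) with a 1-based month gives the last day of that month
  const daysInMonth = new Date(Date.UTC(year, month, 0)).getUTCDate();
  return { year, month, day: Math.min(day, daysInMonth) };
}

constrainDay(2030, 2, 29); // → { year: 2030, month: 2, day: 28 }
constrainDay(2030, 2, Number.MAX_SAFE_INTEGER); // also constrains to day 28
```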
CDA: We do have a little bit of time available in the meeting if you wanted to do a continuation, if you felt like that would be valuable at all. We could take that offline, but at this point, we are past time, so do you want to -- I’ll capture the queue as it stands right now. Do you want to dictate key points and/or a summary, or would you rather do that asynchronously?

PFC: I had some in the slides here. But I guess if we delete the last sentence, this can still be the summary. I will paste it into the notes.

### Speaker's Summary of Key Points

The blocker on IETF standardization of the string format has been resolved. The champions will give a signal when outstanding changes have been merged, and at that point implementations will be encouraged to continue their work and ship unflagged when ready.

## Intl.DurationFormat stage 3 update and normative PRs

Presenter: Ben Allen (BAN)

- [proposal](https://github.com/tc39/proposal-intl-duration-format/)
- [slides](https://docs.google.com/presentation/d/1_e1qU8toLiXCR3IB-JEqXMnsV_iYpt9z)

BAN: All right, so we have a number of new normative PRs for DurationFormat, some that have been sort of long coming. Before I get to them, first, the silly thing is to call back to Frank’s presentation: you probably can’t see it, but behind me there is a French Republican calendar, so I want to wish everybody a happy 8 Frimaire. So now let me present.

BAN: Okay, so before I get to the normative PRs, I just want to say we do have a fairly large editorial refactor coming in. Some of these problems, especially the ones fixed by the last two PRs on this list, are ones where the errors slipped by us because it was a fairly tangled spec. So anyway, that’s coming down the pike. I believe I’ve got six normative PRs; they sort of go from relatively small to relatively less small.
But so the first one: we have an option "fractionalDigits" that’s meant to determine how many digits are displayed after the decimal separator when using certain styles. We had previously been outputting that in the resolved options even if fractionalDigits was undefined, and we have a PR here that, for consistency with the other Intl objects, doesn’t display it in the resolved options if it is undefined. And this, like all of the PRs I’ll be discussing, has been approved by TG2. So, again, for consistency with I believe all the other Intl objects except for maybe NumberFormat, we don’t output anything in resolved options if it is undefined.

BAN: The next one is -- this is one we likely should have presented at the last plenary. But -- oh, pardon, this is the one. Okay, so this one is for consistency with all other Intl objects -- all Intl objects other than NumberFormat. So, yeah, previously numberingSystem had been in the wrong place. If I go to the PR (172), let me see. Yeah. So previously, it had been output at the bottom of the list of resolved options. And now, for consistency with other Intl objects, it’s going to be after locale. And this was based on implementer feedback, from both Frank and Anba.

BAN: In this one (PR 173), we want DurationFormat to work correctly with Temporal.Duration objects. Those objects have limits on the maximum values that the different components can have. If I recall correctly -- let me look at my notes actually to make sure I’ve got these numbers correct. Yeah, the absolute value of weeks, months and years in Temporal.Duration is capped at 2^32, and the absolute value of subweek fields, when expressed in seconds, is capped at 2^53. We have TG2 approval for this.
The actual PR that’s going to be merged -- thanks to Anba for finding problems with how those limits were calculated in Temporal.Duration -- the actual PR that’s going to be merged is going to match the new version of Temporal.Duration, and, yeah, so approved. We are going to be making some changes to it to keep it aligned with Temporal.

BAN: (PR 176) We’re getting relatively larger here. Okay, so previously -- and actually, this is an error on the slide, this applies to all durations, not just negative durations -- previously, when dealing with negative durations, we had been outputting the minus sign indicating that it’s a negative duration on every single component. Which, simply put, looks weird. And after consulting with CLDR people, we have decided to follow their guidance and, for a negative duration, so a negative amount of time, only include the negative sign before the first unit. We can do this because we don’t allow mixed durations in DurationFormat, so we don’t allow, like, negative one hour, positive one minute, positive one second. That will throw. So if any of the units are negative, all of them must be. So the new version is unambiguous, and again, we are following CLDR’s guidance on that one. And like many of these, it was first discovered by Anba. This should apply to all styles.

BAN: (PR 178) Okay, this one specifically pertains to the numeric style. The numeric style is used back here -- it’s used by the overall digital style. It’s used to represent things as on a digital clock. It’s also used to indicate that a subsecond value should be appended to the value before it. So if I go to the examples -- thank you again to Anba for these examples -- here is how things are formatted if we’re not using the numeric style.
So you notice that if we don’t set seconds or milliseconds to display always, they will not be displayed if the value is going to be zero. So in this case, if we didn’t have milliseconds display always here on the third line -- pardon me, here on the fourth example -- if we didn’t have that milliseconds display always in there, it would display as just one second. So this is how it works for non-numeric styles. For numeric styles -- so, again, displaying subsecond units as a fraction after the decimal separator on the next largest unit -- we get some output that is just confusing to users. The thing to focus on here is the fourth example. In the previous version, the one for the non-numeric style, setting milliseconds display to always ensures that the milliseconds will always display even if the value is zero. Here, if someone says, oh, well, I want to format this without that MS there, I want to format it as just the raw number, so I’m going to use the numeric style, and I want to make sure that that will show up regardless of whether it’s zero -- well, it turns out that this milliseconds display option is ignored if the style is numeric. So currently, it’s set up such that if the previous style is numeric -- you know, indicating that the milliseconds are going to be appended after the decimal separator of seconds -- requesting `millisecondsDisplay: "always"` just kind of doesn’t make sense. Because it will be displayed; it’s just going to be displayed a different way, as that fractional component. So saying we’re always going to display it -- well, it is always going to be displayed. Currently -- I believe I may have already said this -- currently, if the previous style is numeric, specifying always for one of these subsecond units will throw.
What this PR does -- because this is confusing behavior; this will not throw, but you’ll notice that the milliseconds display option is in fact ignored in the current version of the spec -- what this PR does is: if you’re using a numeric style and you’re dealing with a subsecond unit, and you specify that it should always display, it should always throw. This sort of always doesn’t make sense. Currently it throws only if the previous style is numeric; if the previous style is numeric, it doesn’t make much sense to do this, but otherwise the value is simply ignored. What our PR will do for this last line here -- well, in this case this one would throw for the same reason that it throws when the previous style is numeric. So essentially this PR -- this one is kind of fun because all it does is remove a level of indentation. Previously we would do the check for display always and numeric styles conflicting only if the previous style was numeric; now we’re going to be doing that check always.

BAN: Okay, and this is the final one, I believe (PR 180). And as with the previous one, this pertains to the numeric style. The numeric style, when we’re dealing with hours, minutes and seconds, is sort of most often used to display as on a digital clock. But in the current version of the spec, if we have hours, minutes and seconds specified, and the minutes are zero, instead of displaying as one would expect -- this is what the new PR displays -- it instead displays as a comma-separated list of values.
So in this case, because there are hours, minutes are zero and seconds are one, the current version of the spec says, ah, we have two hours and one second, so I’m going to display that as the raw number 2 and then the raw number 1. If you have one of the larger units specified -- if you have hours specified -- then minutes and seconds display as two digits, as on a clock. So here this is output that’s sort of both confusing and ugly. So what we have done is -- and, again, this only pertains right now to the minutes -- if you have a quantity of zero minutes between an hour and a second that get displayed (in this case, they get displayed because they have non-zero values; they could also get displayed if they’re set to always display), so if you have hours and seconds displayed and minutes are zero, even if minutes aren’t set to always display -- so typically we have two options for display: either always, which always displays, or auto, which generally displays if we’re dealing with a non-zero value. So now in this case, where zero minutes are sandwiched between hours and seconds that get displayed, instead of this sort of confusing output, we instead display it as on a digital clock. So we’ll force these minutes to get displayed even if we’re dealing with zero minutes.

BAN: And so, thanks. So these are, as I said, six PRs that have been sort of long overdue. So, yeah, I suppose I’m asking for consensus on these six PRs. So -- well, first, questions.

DLM: Yeah, I just want to say we support these normative PRs.

BAN: Fantastic. All right. So requesting consensus.

CDA: Is there any more explicit support for these pull requests? Are there any objections? Sounds like you have it.

BAN: All right, fantastic. Thank you.
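The rule from that last PR can be sketched as follows. `formatDigital` is a hypothetical helper, not the spec's formatting algorithm:

```javascript
// Sketch of PR 180's behavior: when hours and seconds are displayed in the
// numeric style, zero minutes are forced to display anyway, padded to two
// digits as on a digital clock. Illustration only, not spec text.
function formatDigital(hours, minutes, seconds) {
  const pad = (n) => String(n).padStart(2, "0");
  // minutes render even when 0, because they sit between displayed units
  return `${hours}:${pad(minutes)}:${pad(seconds)}`;
}

formatDigital(2, 0, 1); // → "2:00:01", rather than the confusing "2, 1"
```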
### Speaker's Summary of Key Points

Many small PRs for `Intl.DurationFormat`:

- Change `resolvedOptions` order to align with the rest of Intl
- Don’t output `fractionalDigits` when `undefined`
- Limits on `Duration` values as in `Temporal.Duration`
- For negative durations, only display the negative sign on the largest value (per CLDR guidance)
- Always throw when the invalid `always` display is used in numeric-like styles for subsecond units
- Avoid surprising/nonsensical output by always displaying the `minutes` value when both `hour` and `second` are displayed

### Conclusion

- TG1 consensus on all PRs

## Math.sum

Presenter: Kevin Gibbons (KG)

- [proposal](https://github.com/bakkot/proposal-math-sum)
- no slides presented

KG: Okay. Great. Hi, everyone. I don’t have slides for this because I’m only going for Stage 1. But I do have a repo, and a spec, and so on. So the proposal, the thesis, is that this should be a more convenient and ideally higher precision way to sum multiple values in JavaScript. I’m sure everyone who has used Python has used `sum`. It’s incredibly useful. Right now if you have a list of values and you want to sum it, you have to write a reducer. It is in fact one of the very few cases that you have to write a reducer, which is sort of a big hammer; it’s a shame to have to pull out a hammer that large for something so simple. And I would like us to do better. To have a more convenient way of summing a list of values. The other thing, of course, is that as I am sure we are all aware, floating point math is imprecise and, as we might not be aware, you can in fact do better when summing a list of floating point numbers than the naive just-add-from-left-to-right. And by better, I mean you can get an answer which is closer to the true mathematical sum of the values represented, because floating point errors can accumulate and you can have an algorithm that accounts for some of the error accumulation.
I’m not, at this time, certainly not for Stage 1, proposing any particular algorithm, although I do have one in mind. I will probably write down that algorithm and say engines should just use this, rather than leaving it up to engines. I got feedback to that effect, but that doesn’t need to be decided at Stage 1. And probably this would be a method. I have a few design questions I would like feedback on if this does go to Stage 1, but first I would like to ask if we have consensus for solving the problem of summing multiple values in a more convenient and more precise way?

WH: What is it that you are proposing to do here?

KG: For Stage 1, I’m proposing to explore the problem of summing multiple values in JavaScript in a more convenient and precise way. For Stage 2, I will propose a particular function, `Math.sum`, and a particular algorithm that accomplishes that. It will take a list of values, to be consistent with `Math.max`. Again, that will happen at Stage 2, not Stage 1.

WH: The more convenient way doesn’t really make much sense to me, since you can do the `reduce` fairly easily. So the key thing is the more precise, and you gave an example in the proposal. Now, consider that example. You’ve set length to 10 to show that `reduce` produces `0.9999999999999999`, whereas `Math.sum` would presumably produce `1` given the algorithms you’re exploring. But consider what happens when you change the example to use length 7 instead of 10. In this case, naive `reduce` will give you `0.7` while `Math.sum` will give you `0.7000000000000001`.

KG: So, yes, perhaps I should phrase this as on average more precise, rather than more precise in every case.

WH: That is also a subjective thing, since it depends on whether you mean `.1` exactly or whether you mean the actual double value that’s closest to `.1`. The answers differ depending on which one you mean.
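The length-10 example WH refers to is easy to check; the naive reducer one has to write today accumulates rounding error:

```javascript
// Summing ten copies of 0.1 with a reducer, as required today.
const naive = Array(10).fill(0.1).reduce((a, b) => a + b, 0);
naive === 1;                  // → false
naive === 0.9999999999999999; // → true
```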
KG: Yes, since this function will work with Numbers, I think we are constrained to more precise in terms of floating point arithmetic.

WH: Okay, yes; in this case, examples like what you gave will go either way, depending on which definition you mean, and that wasn’t clear.

KG: Yes, what I mean is specifically floating point arithmetic.

WH: Okay. So in that case, one of the questions is how would something like this be specified? That’s really the one big thing in this proposal: do we want to add something to the language for which we have no good way to test it?

KG: Yes. And that question will certainly need to be resolved by Stage 2. But my original plan was to leave it up to implementations, in the same way that most of the transcendental functions are left up to implementations. I know I heard feedback from V8 that while they see some wisdom in that approach, in practice, they think it will be better to write down a precise algorithm, because they have found that people come to rely on those precise details across engines. So I think in practice, we will need to choose a particular algorithm, and I’m certainly not proposing a particular algorithm for consensus right now, although I do have one in mind, which is linked from the proposal. Or I guess not linked, but, yes, Neumaier's algorithm, which is like compensated summation, except you do it in the opposite order if the summand is smaller than the error, or something like that.

WH: Okay. I would strongly object to picking a particular algorithm here. I think the right solution is to specify what the output should be exactly.

KG: Right, for every possible list of inputs?

WH: Yes. A plausible specification would be the mathematical sum of the inputs rounded to the nearest double, breaking ties to even, which is what double rounding does.

KG: I am not aware of a way of doing that which requires less than logarithmic overhead. Although, maybe you are.
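For reference, the algorithm KG mentions above -- Neumaier's variant of compensated summation -- can be sketched like this. This is only an illustration of the technique; no algorithm has been chosen for the proposal:

```javascript
// Neumaier's compensated summation: track the low-order bits lost by each
// addition in a running compensation term, and fold them in at the end.
function neumaierSum(values) {
  let sum = 0;
  let compensation = 0; // accumulated rounding error
  for (const v of values) {
    const t = sum + v;
    if (Math.abs(sum) >= Math.abs(v)) {
      compensation += (sum - t) + v; // low-order digits of v were lost
    } else {
      compensation += (v - t) + sum; // low-order digits of sum were lost
    }
    sum = t;
  }
  return sum + compensation;
}

neumaierSum(Array(10).fill(0.1)); // → 1, where naive reduction gives 0.9999999999999999
```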
WH: I posted a link in the TCQ to do that. http://www-2.cs.cmu.edu/afs/cs/project/quake/public/papers/robust-arithmetic.ps

KG: I’ll check it out, although --

WH: The one issue, it appears, with any of these things is that you might get intermediate overflow to infinity. That’s another can of worms.

KG: Yes, well, I mean, if you specify a particular algorithm, then you just do.

WH: Yeah, but unless the algorithm gives exact answers, I would not support this if the intent is to specify a particular algorithm.

KG: Okay. Well, I am happy to explore the feasibility of requiring it to be precise, although, can you say more about why?

WH: If you specify something like Neumaier’s algorithm, it will give wrong answers in some cases.

KG: Yes.

WH: And then implementations cannot improve on it if it turns out -- and it’s likely that it will turn out -- that there exists a better and faster algorithm which gives more precise answers than the one you specified.

KG: Yes, that’s --

WH: So if what you specify is the exact answer, then you don’t have that problem.

KG: Yes, you just have the problem of it being slower. But the --

WH: And so this is what I’m curious about, if you’re proposing this for Stage 1: is there concern about the speed of this, or concern about precision?

KG: What I would like to do is explore having something that is more precise. However, as with all things, there’s always a tradeoff between various concerns that one might have. I don’t think it would be worth adding something if it is infeasibly slow. So it’s not that I want something that is faster than the reduce. It is that I want something that is more precise and reasonably efficient, where reasonably is not a well-defined term.

WH: Okay, there exist algorithms which are reasonably efficient.

KG: Okay.

WH: Where reasonably is not a well-defined term.
KG: I am happy to specify it to be precise, if that turns out to be practical. I’d like to explore that during Stage 1.

CDA: All right. We do still have some time. We’re technically past time, but we also sort of shortchanged this agenda item because it was thought we could get it done in less than the original allotted time. I just had a question in the queue: aren’t we getting into things that are really post Stage 1 here to begin with?

WH: I don’t think so, since the entire meat of this proposal is precision.

KG: In any case, I think Waldemar and I understand each other. So I think we should keep going through the queue.

MM: I mean, precision is a nice value, but the more important constraint from my perspective is determinism: that all engines that conform to the spec necessarily give exactly the same answer. That is the case with reduce right now. If `sum` can be both more precise but still have its output be exactly in agreement across all conforming engines, that’s great. If it’s a best-efforts kind of thing, where the spec allows engines to improve the precision over time, bringing them not into exact agreement, that would be terrible from my perspective, and I would think we would be better off without the proposal than having a best-efforts proposal. If there’s a feasible algorithm that we can all agree on, and if the algorithm produces a result that obeys some nice mathematical invariant, like the invariant that Waldemar stated, then we can have a normative assert in the spec that the algorithm produces a result that obeys the invariant, and then both the invariant and the algorithm become normatively equivalent in the spec. And I think that’s a fine place to land if there’s a feasible algorithm we can agree on that produces a result that obeys an invariant. If not, I think we’re better off without the proposal. That’s it.
JHD: Yeah, so this seems similar to the feedback here. Basically, if we don’t specify the exact answer, then what will happen is that users will end up relying on whatever algorithm implementations choose right now, and we will never be able to change it in the future anyway. This was similar to the feedback I was given when I was suggesting it would be acceptable for a predicate to -- we were talking about shipping the symbol predicates and later -- I forget the exact detail, but something about shipping a predicate for which an existing code value would suddenly change from true to false or false to true, and my argument was that that was fine, because the predicate answers a question and the answer to that question changes and, therefore, it’s correct that the return value changes. But V8’s feedback specifically was: that will break people and we will not do that. So I think that it’s idealistic thinking to think that we can ever improve algorithms over time in JavaScript. I think experience has shown us that whatever algorithm people happen to choose, that’s the one we have to use forever no matter what, so we might as well go with determinism now, is my perspective.

KG: I’m happy to say that this will have to be specified fully deterministically.

DLM: The SpiderMonkey team discussed this internally and we’re in favor of this proposal for Stage 1. The general concern that has been raised about inconsistent behavior between implementations leading to future web compatibility problems was also a concern for us. That’s something to be explored in a later stage and doesn’t have to be decided right now.

CDA: And you have similar messages from JHD and MM supporting Stage 1 with an explicit deterministic algorithm. That’s it for the queue.

KG: To be clear, this may or may not end up with writing down an algorithm.
Maybe instead we just say that it has to be the mathematically precise answer, with round-ties-to-even, as Waldemar suggested. And we’ll look into whether that’s feasible. And then there may not be an algorithm, because it may be just "get the right answer".

MM: I think that’s worse than specifying an algorithm and using a normative assert to constrain that the algorithm must be consistent with the invariant.

KG: In any case, it’s an editorial concern for the future.

CDA: You have another message of support on the queue from DRO: love this proposal. And that’s it on the queue.

KG: Okay, well, I would like to ask for Stage 1 for a convenient and more precise `Math.sum`, having taken the feedback that for it to be acceptable, it must be fully deterministic, and from WH, that WH considers this to only be worth doing if it is in fact precisely the right answer, rather than merely more precise. With that, I’d like to ask for consensus for Stage 1.

CDA: Okay. EAO is on the queue supporting Stage 1. I think you already got some other explicit support earlier. You’ve got JSC supporting Stage 1.

KG: Okay, thanks very much. Do we have anything for the remaining 7 minutes, or can I ask a couple of follow-up questions?

CDA: Yeah, go ahead and use the time, because our smallest next items need at least 15 minutes. So please go ahead.

KG: All right. So the main thing that I want to ask the committee’s feedback on: `Math.max` already exists and is var-args, and this is kind of annoying because if you want to take the maximum of a potentially very long list, you can’t with `Math.max`. If you try to spread 70,000 items to the stack, every engine will throw a `RangeError`. Of course, that’s not what the spec says, because the spec has no notion of memory limits, but in practice, you just can’t use `Math.max` to take maximums of very long lists.
So I might make a separate proposal for a `maxFrom` or something similar that takes an iterable as its first argument, so that you can actually take the maximum of long lists. And I would do the same for sum, of course, in whatever order they landed. Does anyone think this is not a direction worth pursuing? Or does anyone think that in fact `Math.sum` should take an iterable and be inconsistent with `Math.max`? Unfortunately I think the consistency is worth it.

KKL: I have run into this problem in the past, and I am in support of establishing a precedent for `from`-style methods receiving an array of elements instead of spreading them onto the stack. Specifically, I have in the past been burned by attempting to splice 10,000 elements into an array.

KG: Cool. Okay.

EAO: Yeah, I was wondering, could we just make it work with `Math.max`? If it gets one argument and it’s an array -- is there actual code somewhere that would actually break from this?

KG: I guarantee yes, because in particular, if you pass an array containing exactly one number, what do you think happens? It stringifies it, and the stringification of an array is just the contents of the array joined with commas. If you have exactly one number, then you get that element, and `Math.max` turns that into a number. So if you do a `Math.max` of the array containing 12, you get 12. And I guarantee someone is depending on that.

EAO: But I mean, that behavior would not change?

KG: Sure, that particular behavior would not change. But if you take `Math.max` of the empty array, you get zero. And that behavior would change to negative infinity.

EAO: Sure, unless we explicitly defined that to be zero.

KG: You can’t; that’s the wrong answer.

EAO: It would have been nice to not need to have another method for it.

KG: Yeah, unfortunately, I guarantee people are depending on the horrible coercion behavior.
Okay, well, those are the things I wanted your feedback on. Thanks, all, and I look forward to reading this PDF from WH. Some other context for this: Python has a sum function and also has a precise sum function, fsum. In version 3.12, they changed the regular sum function to be more precise, in particular to use Neumaier's algorithm, because they found that it was only a little slower. Their fsum, I believe -- I’m pretty sure -- is log overhead, log in the length of the list. But maybe I’m wrong. So if anyone has other PDFs for precise floating point summation, please send them my way.

WH: About the algorithm which I linked here: if the numbers have reasonably similar magnitudes, then it behaves very well. If the numbers have wildly different magnitudes, like you’re adding, let’s say, a few integers as well as 10^200, then you accumulate intermediate results which span kind of both sets of bits in the representation. In practice, it works quite well.

KG: Okay.

WH: The other mistake in the proposal is that if you have a `Math.sum` of no numbers, the result should be the identity for addition, which is `-0`.

KG: Yes, I already fixed that. I just always forget about negative zero because I hate it.

CDA: All right, we are right about at time.

KG: Okay, thanks. I believe I have consensus. I’ll go update the notes with the summary.

CDA: Okay, thank you. All right, thanks, everyone. Thanks especially to the note takers. We will see you all back here in an hour and 55 seconds.
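The coercion behavior KG describes earlier in this item can be confirmed directly: a single array argument to `Math.max` is stringified and then converted back to a number:

```javascript
// Why Math.max can't simply be extended to accept an array: the existing
// coercion already assigns these calls a meaning.
Math.max([12]);   // → 12  (String([12]) is "12", which coerces to 12)
Math.max([]);     // → 0   (String([]) is "", which coerces to 0; the correct
                  //        identity for max would be -Infinity)
Math.max([1, 2]); // → NaN ("1,2" is not a number)
```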
### Speaker's Summary of Key Points

- It would be nice to have a more convenient and more precise way to do summation
- WH thinks it is not worth doing if it is only sometimes more precise, rather than as precise as possible
- Many people think it's important that the result be fully specified
- Some support for a follow-on with Math.sumFrom/maxFrom to sum iterables instead of needing to spread them to the stack

### Conclusion

- Stage 1
- KG to read more about Shewchuk's algorithm for precise floating point summation

[Lunch]

## Iterator helpers web compat continuation

Presenter: Michael Ficarra (MF)

- [issue](https://github.com/tc39/proposal-iterator-helpers/issues/286)

MF: So I am presenting this along with SYG. SYG is not here, but we talked about our options before the meeting. This topic is about the web incompatibility discovered with regenerator-runtime and a popular product from a company named Transcend. They sell a product that, when used in conjunction with regenerator-runtime, causes a problem only when the Iterator global is present. When Chrome tried shipping iterator helpers, it was discovered that this incompatibility existed on many larger websites. We discussed some of our options last time. We decided to try outreach. We worked with this company to try to deploy fixes and update their customers. But their delivery method is such that each customer individually has to be upgraded, and in the latest sample that was given to the V8 team, some of the websites still had not been upgraded. So this was going to be a conversation about whether, after the outreach was complete, we think it’s worth shipping again or taking the backup approach. Because of the unfortunate failure of that outreach, we are instead asking just for merging the PR that was considered the backup approach.
To remind you, that PR creates two of what we are calling funky or weird accessors: one on the `constructor` property of Iterator.prototype and the other on the Symbol.toStringTag property of Iterator.prototype. These are accessors instead of data properties. The getter returns the value we want for the property, and the setter does weird setting behavior so as to not be affected by the override mistake.

MF: So hopefully this will be a shorter item than what we expected, just because we are taking this unfortunate step. I am also hoping that if we want to follow up in the future, at some point, when maybe more outreach is done or the web has evolved, this does allow us at some point in the future to change these accessors to data properties. I don’t have a huge urge to go about that, but maybe sometime in the future we would, and this doesn’t prevent that from happening. So that’s the request. I would like to request consensus on merging this PR, which is a change to the Stage 3 proposal iterator helpers. Is there a queue item?

JHD: Yeah. I think that rather than shipping something strange that we may not be able to change later, we should omit these two properties temporarily. And that way, we can still add them later, if we want, but then we don’t have something weird potentially in the language forever.

MF: We had discussed that option last time, and discovered that that wasn’t an option for us. I don’t remember --

JHD: That’s not my recollection. I would love to hear more about that.

JHD: My recollection was just that the PR was the approach that was being recommended by the champions and that there hadn’t been much consideration of omitting it. And I thought when we last discussed it, a number of folks were comfortable with that idea.
If there's a concrete reason, that would be great to know. The risk of not being able to change from an accessor to a data property later is, I think, a riskier world than the risk of not being able to add the absent properties later.

MF: Yes. If we were going to consider that, I am going to have to think about it and possibly go back to the notes from the last meeting. I believe it was NRO who originally suggested that at the last meeting. So if anybody remembers, feel free to jump in. I wouldn't feel comfortable making that decision without going back and looking at that again.

JHD: Yeah. So if it's not clear, I do not like the idea of merging that PR. Unless there's a concrete reason that omitting them can't work, I would prefer not to have consensus for that.

MM: So I am next on the queue. Let me offer a concrete reason why omitting them won't have the beneficial effect that you're hoping for. In general, when we omit something to enable us to add it later, the hope is that code that depends on it would break because of the absence of the thing that was omitted. In this case, omitting `constructor` means that you just see the inherited `constructor`; and omitting `toStringTag` means you may inherit one from something else, and in any case `toStringTag` is really there to affect the behavior of `toString`, which has a fallback behavior. So in both cases, code that depends on the behavior under omission does not get an error, and would see a change of behavior when you go from omission to providing them, and that change in behavior is more violent than the change in behavior from accessor to data property.
So I agree with the skepticism: having introduced them as accessors, we might not ever be able to make them data properties, but that change is less violent. Having omitted them, we are even less likely to be able to add them later.

JHD: I agree that for any change, one could break code with that change. But I have written code that would break when things change between data properties and accessors – SES, by making that kind of change, breaks my code in a number of places. I think that likelihood is quite high compared to the likelihood of someone depending on the exact value of the `toString` output. I don't think anybody is going to be writing code that uses `.constructor` in general as a practitioner for any of these cases. And in the kind of transitive dependency case, it's more likely they will depend on the kind of property it is than use `constructor` or depend on the exact output of `toStringTag`.

JHD: If all of these things are considered risks and we are not all convinced one is less risky than the other, then the correct thing is to wait longer.

MM: I think you're making a plausible case, specifically with regard to `toStringTag` and `constructor`. I am skeptical, but it is an empirical question to try to bring evidence on. But I am certainly skeptical that omission is the easier, earlier state to not break when we add data properties later. I am done.

CDA: Nothing else in the queue.

MF: Okay. If nobody else is offering reasons why we might not want to omit the properties, I think I might prefer to yield the rest of my time so that I can take a look at the notes and think about it for a little bit before I bring up this topic for consensus later in the meeting, if that's okay.

DE: Let's come back to this as an overflow topic. I think that's a good idea.

MF: Let's get to it.
It is important to achieve consensus at this meeting, whichever direction we choose. Chrome is trying to ship again and this would be the only thing holding them up.

MM: With regard to JHD's and my contrary points – is there any way to gain evidence in support of either hypothesis?

DE: So we're talking about whether instanceof will work, for example? If we see somebody doing instanceof on other iterators, would that be relevant evidence? [Note: This comment was confused and wrong. instanceof does not reference `.constructor`.]

MM: That would be relevant evidence.

DE: I agree with what MF is saying, that we should come to a conclusion. This is a very small point. It makes sense to take time to get things right, and a couple of meetings is a good amount of time.

KG: I think leaving out `constructor` is pretty risky. I would be much happier including the getter. I think it's likely that if we leave it out, we wouldn't be able to add it back in. It's unlikely that if we make it a getter, we wouldn't later be able to make it a data property. And in addition, the outcome of leaving it as a getter is much more palatable than leaving it absent. So I much prefer to make it a getter.

MF: Okay. Personally, I would probably agree with KG there. But I will yield my time on this topic and hopefully we can bring it back at the end of the meeting.

CDA: Yeah. I mean, all the time you're freeing up now is time available, providing we can move things up, which I am sure we can. The next one will be easy. I think you are up next as well, with the joint iteration Stage 1 update.

MF: Okay.

RBN: I was talking about this in Matrix, and there's a lot of discussion about not having a `.constructor` breaking `instanceof`, but `instanceof` doesn't care – it does not look at the thing on the left for a `.constructor`. It doesn't care about the `constructor` property at all.
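RBN's point can be demonstrated directly; the class name below is hypothetical:

```javascript
// `instanceof` is answered by walking the left operand's prototype chain
// (OrdinaryHasInstance); it never reads the `.constructor` property.
class MyIterator {}
const it = new MyIterator();

// Even clobbering `constructor` entirely does not affect instanceof.
Object.defineProperty(MyIterator.prototype, 'constructor', { value: null });

console.log(it instanceof MyIterator); // true — unaffected
console.log(it.constructor);           // null — but checks like
                                       // `x.constructor === MyIterator` would break
```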
KG: To be clear, my expectation that people will come to rely on constructor is not based on instanceof. I am expecting people to write something like `X.constructor === Iterator` or whatever.

### Speaker's Summary of Key Points

### Conclusion

- Continued on day 4

## Joint iteration stage 1 update

Presenter: Michael Ficarra (MF)

- [proposal](https://github.com/tc39/proposal-joint-iteration)
- [slides](https://docs.google.com/presentation/d/1sgqXgWBsDF0S43wVuFgIyOC8Y3AMFt1qxBIFbzEq9Vg)

MF: Okay, so joint iteration. I presented it last time for Stage 1. Was it last meeting? Yeah. Last meeting, for Stage 1. And it reached Stage 1. For this proposal, I presented a survey of the languages that have a joint iteration construct in them, and libraries providing the same facility within JavaScript. And I also considered all of the different use cases that were served by joint iteration. And this is the API shape that I have come up with, and I want to run this by the committee before starting to do spec text and everything and getting it fully ready for Stage 2.

MF: So I want to explain what I have come up with. It is one static method called zip. Two parameters: the latter is an options bag. The former can take two shapes: I am calling them positional and named. The positional shape is when the first parameter is an iterable of iterators or iterables. That yields tuples whose size is the number of iterables yielded by the input iterable. The second shape it can take is an object whose values are all iterators or iterables and whose names are, well, any names you like. And then what is yielded as the result is an object that has the same names as the input object, where each value is the next yielded value from each of the iterators. So this is kind of like what we do with Promise.all, and what we want to do with the named variant of Promise.all, both in the same API.
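The positional, shortest-wins behavior just described can be hand-rolled as a rough sketch. The helper name `zipShortest` is hypothetical; the real proposal is a static `Iterator.zip` with more careful handling (e.g. closing the remaining iterators when one finishes):

```javascript
// Minimal illustration of positional zip with the default "shortest"
// behavior: stop as soon as any input is exhausted.
function* zipShortest(iterables) {
  const iterators = Array.from(iterables, (it) => it[Symbol.iterator]());
  while (true) {
    const results = iterators.map((it) => it.next());
    // Stop at the shortest input (the proposal also closes the others).
    if (results.some((r) => r.done)) return;
    yield results.map((r) => r.value);
  }
}

console.log([...zipShortest([[0, 1, 2], ['a', 'b', 'c'], [true, false]])]);
// → [[0, 'a', true], [1, 'b', false]]
```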
As far as the options bag, it supports the other kinds of use cases that we saw existing in the ecosystem, in libraries and other languages. Without any options, the default is to stop iterating when the first iterator has finished. But you can give the `longest` option to continue until the final iterator has finished. This uses filler values from the `fillers` option. So depending on the shape of your first parameter, your fillers are either an iterator or iterable of fillers, or a record whose names correspond to the same names used in the first parameter. And the third option is `strict`, which, unlike shortest and longest, actually ensures all of them finish at the same time. If they don't – if one finishes before another – it throws. And you can't use this together with the `longest` option.

MF: So that covers all the use cases that I found when doing that survey. These are some examples of what it looks like. You see on the left here, the positional one. An array is the iterable, and it contains three iterables. You can see that the first thing yielded contains the first things yielded from each of the input iterators, and likewise for the second one and the third one. This is giving no options; if one were shorter, it would have finished whenever the first one finished. On the right is the named one, a record where each name in each of the output values corresponds to one of the input iterators.

MF: By the way, I didn't mention it, but in the proposal repo I have a full implementation of this proposal with tests. It's not incredibly pedantic about everything, like making sure to use the captured built-ins and stuff. But it should give you the answers to any questions you might have about the behavior of this proposal. Things that we could discuss that I have questions about: are these covered use cases okay? Have I missed use cases? Am I covering use cases that don't matter?
The things I am considering are: I want to support zero or more iterators or iterables, so that's covered.

MF: I wanted to support both positional and named shapes because, just like we see with Promise.all, when you have more than a few positional ones it's unwieldy, and the names are nice. I would use the named form pretty much all the time when it's not just two iterables or iterators. The `longest` option is useful. Shortest is the default in everything we saw. The `strict` option I'm kind of so-so on. I think it existed in only one or two libraries, and the only language that had it was Python. I am okay either including or omitting that. But assuming we do have all of those, what should the options bag look like? Right now I have two Booleans for the `longest` option and the `strict` option, which gives us four states, but one is invalid. That's not the greatest thing. We could have a three-state enum, like a string, but technically accepting arbitrary string inputs creates many more invalid states. I'm not sure. So I would like to hear from you on that as well. All of these inputs I have been calling iterator-or-iterable because that's the same kind of input that the `Iterator.from` function takes. I think that's the right thing to do here. But there is a design decision for what we do with strings. Right now, I took the iterate-strings approach: if you intermingle strings, it will iterate the strings rather than throwing. It's not like `flatMap`, which throws on strings because it assumes those are errors. We can go either way on that. I think this is more like `Iterator.from`, where we have chosen the iterate-strings approach. And parameter evaluation order: I don't want to get into the details of this right now in the overview, but this is the first time we are doing an options bag here, and I think there could be some strange evaluation orders.
You could see it if you look through the polyfill I have on the proposal repo: effects happen wherever was convenient. When we pull the `fillers` property off the options object and enumerate those values, it might make sense to do all of that before we ever get the iterator out of any of these iterables. I think that probably nobody has any kind of expectations about the order. There might be some loose expectations, but not a very strict expectation of ordering. I think it doesn't matter. But if you have opinions on that, I would like to hear them as well. We could pretty much choose anything. I just chose what was most convenient when implementing it, and that probably aligns with what is most convenient for engine implementers. I'll also probably figure that out as I write the spec text. If you have opinions, I would like to hear them.

MF: This slide has the links if you want to check them out. I have the proposal itself, the polyfill, and the tests, which are not super thorough. Obviously, there will be more thorough tests of ordering and stuff in test262.

JHD: Yeah. So maybe I missed it, but the positional approach makes sense to me. The named one seems confusing. Is that something there are concrete use cases for in JavaScript, or is it something you collected from other languages that have this?

MF: This one is inspired by the proposal for the Promise.all named variant. The reason why we want to do that is because when you pass multiple promises to Promise.all, and then you get out of it a big array and destructure that, it's tricky to line up the destructuring with the inputs and really easy to get that wrong. The named variant allows you to give it a name on the input and output, and those line up. Where you put it in the list doesn't matter. I think that same kind of convenience is desirable here.

JHD: Okay.
So to make sure I am getting it: the Promise.all one seems straightforward to me, because you're putting individual items in and getting individual items out. But in this case, the convenience shows up in, say, a `.map` callback's argument signature, where you destructure by name. I see this example, but it doesn't tell me where it's useful. Am I understanding this right: if I stuck a `.map` on the end, the callback to `.map` could destructure `a`, `b`, and `c` out of each iteration, and that's the convenience?

MF: Exactly. Yes. Yes.

KG: Yeah. Names are just super convenient. Why do we have objects instead of just arrays? Because it's nice to have names for things.

ACE: Yeah. I enjoyed that you have referenced the await-dictionary proposal. Assuming we get `AsyncIterator.zip`, you don't strictly need await-dictionary if you're super comfortable with these APIs, because you could do `AsyncIterator.zip` with the named promises and then take element zero of the array. I still think await-dictionary would be nice. There's a nice synergy between them. I would definitely use this named approach whenever there's more than two of them.

MF: Yeah. I see what you're saying. That is kind of neat. But you also want this kind of discoverability aspect. And even if you can do it like that, it might still be valuable to add the conveniently discoverable version of it for people. But that's a conversation to have during that proposal's time.

CDA: Nothing on the queue.

MF: Okay. I will give it another minute or so for people to think about the discussion topics on screen. And if I don't receive any other feedback, I will just go with what my initial design was.

ACE: My assumption when I first saw this proposal was that `fillers` was a single value – undefined, or maybe I would want it to be null.
I am wondering about being able to name each filler – I am trying to think how often I would want a different filler for each input. I can see why you need it, because you can't just concat the iterators, since you don't know which one is going to be the shortest. I am trying to think of the use cases for a per-input filler.

MF: You could do it with a single filler, and that's how some languages have done it. This was inspired by languages with strict type systems, where if you have an iterator of A and an iterator of B and you want to zip them, you need to produce an iterator of tuples of As and Bs. And if you give a filler that's a single thing and there's no common supertype of A and B, you couldn't produce a tuple of A and B. So they make you give a filler for the As and one for the Bs, and depending which is shorter or longer, it uses the appropriate filler; they just don't have the single-filler option. I chose to go along with that because I know a lot of people want to use this in TypeScript and are going to want those types to check out nicely. But we don't have to. We could do a single filler. I prefer – not just I prefer, I think TypeScript users will prefer – having the fillers match up. We could also probably figure out a way to do either, where you have a filler per input iterable or a single filler as a convenience mechanism. I am not sure if that would be worth it or not.

ACE: Yeah. You could put like a proxy there if you wanted to do a catch-all thing, so you didn't have to actually statically list all of them or do `Object.keys` and then map it. I can imagine myself wanting an option to just say: use this one value. And I am happy that TypeScript will union that value across all of them.

KG: Well, if you want the value to be undefined, you just pass an empty object and then –

ACE: If I don't want it to be. I guess you're right. I probably always want it to be undefined in that case.

MF: Yeah.
And that's what it will do as presented here.

KG: Right. You omit the `fillers` property and you get undefined. I have a hard time imagining you want a particular value that is the same for all of them, and that needs to not be undefined. The cases where I am more precise than undefined are when there's a sensible default. But it feels rare that you have the same sensible default for each of the items in the zip.

CDA: Nothing on the queue.

### Speaker's Summary of Key Points

MF: Thank you for the feedback. I expect to write this up a little bit more. I don't think there was any feedback requiring changes; I will go through the notes to make sure. I will write this up in spec text and present for Stage 2 at the next meeting. Thank you.

### Conclusion

- MF presented a preview of the `Iterator.zip` API that will be proposed for Stage 2 in a future meeting

## Iterator sequencing

Presenter: Michael Ficarra (MF)

- [proposal](https://github.com/tc39/proposal-iterator-sequencing)
- [slides](https://docs.google.com/presentation/d/1wMUfikXIIz7woLN-5MbYbW8an40c8ZPrN1ehzWVf4zw)

MF: Iterator sequencing. This is also a proposal I presented for Stage 1, and it reached Stage 1 at the last meeting, in Tokyo. As a reminder, the goals of this proposal are these five that I have identified. We want to conveniently compose two iterators; that's the most important one. We also would like to conveniently compose zero or more iterators, or, if we have an infinite sequence of iterators, compose those as well. It would be nice to have a convenient way to interleave other values among the iterators. And we want there to be something really intuitive to reach for, with a discoverable pattern, not something esoteric. With that in mind, I have made this table here. I have gone back and forth on this a lot in the last two months. I will explain this.
MF: The approaches that I presented in the Stage 1 presentation are the first three listed here. The first one is a combination of `Iterator.of` and `Iterator.prototype.flat`. We discussed that. The second one is an `Iterator.prototype.append`, which would take one or more iterators and append them to the this value. And the third one was a variadic `Iterator.from`. `Iterator.from` currently supports exactly one argument, and this would extend it to support zero or more arguments, with each of those iterators composed with the others.

MF: One other possible player in this space that I had not presented last time is `Iterator.concat`, which takes an iterator or iterable of these iterators. So I considered all of these different possible solutions and the goals we were trying to accomplish, and no one solution fully accomplishes the goals that we wanted. But I think that the combination of variadic `Iterator.from` and `Iterator.prototype.flat` does, and at the moment, that is what I would prefer. So the mental model here would be: when you have some small number of iterators or iterables that can fit in a parameter list, you just pass them to `Iterator.from`, or if you have them in a small list, it can be spread to `Iterator.from`. If you have an infinite iterator, or any other iterable or iterator that contains these sub-iterators, you can use `Iterator.prototype.flat`. That splits it into two. That's kind of my realization after having thought about this a lot: what I was thinking was one problem is instead two closely-related problems. And I think that these two together handle all of the situations well. I am not going to go through what each covers, but you can see it in the table. That's what I am thinking at the moment.

MF: And is that good? I guess another thing to consider is that we don't have to stop at the minimal solution that solves all of the problems or meets the goals.
If somebody is a fan of `Iterator.prototype.append` because they think it's going to be used in chaining a lot, and they want that ergonomics in chaining, we could have that as well. Or – I think `Iterator.concat` is a super discoverable name – we could have that as well. But if we want to go with the minimal solution, I think it's these two additions. Also, remember that `Iterator.prototype.flat` is just `flatMap` with the identity function. Again, this is an ergonomics thing. If we didn't want to add any more than we had to, we could just continue requiring that users use `flatMap` with the identity function, but I think that's pretty unergonomic and probably more confusing, especially for more novice users; `flat()` is easier to get their head around than `flatMap`. Yeah. That's all I want to present. How do we feel about that? If we feel good, similarly to the last proposal, I would like to start writing up spec text and present it for Stage 2 at the next meeting. I didn't put it on the slides, but I also have an implementation of these things – `Iterator.from`, `Iterator.prototype.flat`, and `Iterator.concat` – in the proposal repo. You can check out the implementation for any small details that you might want answered.

MF: With that, I would like to go to the queue.

JHD: Yeah. I am trying to understand. Right now if I do `Iterator.from` and pass one thing into it, I get an iterator of whatever that thing yields directly. If I pass two things, what do I get an iterator of?

MF: An iterator of everything that the first thing yields, followed by everything the second thing yields.

JHD: Okay. I must be thinking of zip still. It's literally just one after the other.

MF: Yeah. That's the goal of the proposal: get one after the other. That's what we are trying to do here.

JHD: Then I like this direction. Thank you.

KG: Variadic from seems fine. Flat is tricky.
I think that we are going to have a hard time agreeing on semantics for flat, because I suspect that you, Michael, want it to throw if one of the things yielded is not an iterator or iterable, and I suspect that JHD wants it not to do that. And I doubt we would be able to reconcile them, but I may be wrong.

MF: I think there's no other option for flat than for it to behave like `flatMap` with the identity function. If it behaved any differently than that, I would be very surprised.

KG: Well, yes. But I still think that some people might not want that behavior. Which would mean there's no route to have flat, is what I am saying. There may not be a route to have flat.

MF: Another thing I realized thinking about this: `Iterator.concat` is basically just flat. So we could get rid of flat and have concat instead. It's just a static version of flat.

JHD: I think Kevin's prediction of my opinions may be right, but we can discuss this offline. Concat might be tricky: people may not want to bring along its semantics, but the name might encourage bringing them along. I am happy to work with you offline so that at least Kevin's predicted concerns can be addressed and this advanced.

CM: I was thinking it was doing one thing and now I'm thinking it's doing something else. Is it strictly concatenating the output of the various iterators, or is it actually flattening them? In other words, if one of the iterators itself returns an iterator, does that iterator itself get iterated, or is it simply an element of what gets returned?

MF: It's just one level of flattening.

CM: One level of flattening only?

MF: Yes.

CM: Very good. Thank you.

MF: By the way, you can take a look at the implementation in the repo to see that.

CDA: ACE?

ACE: Basically, unless you get a lot of push back, I would like you to include append as the convenience for chaining. If other people really want to keep it minimal, that's fine. But if you can include it, that would be lovely.
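The single level of flattening that CM asked about above can be sketched as follows; `flatOnce` is a hypothetical helper name, not the proposal's implementation:

```javascript
// One level of flattening only: each inner iterable is iterated, but the
// values it yields are passed through untouched (flatMap with identity).
function* flatOnce(iteratorOfIterables) {
  for (const inner of iteratorOfIterables) {
    yield* inner;
  }
}

console.log([...flatOnce([[1, 2], [3], [[4]]])]);
// → [1, 2, 3, [4]] — the nested [4] is an element, not iterated further
```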
MF: I am happy to – I would like to hear other opinions on that, either in favour or against; I don't care which. But I would like to hear at least one other person say something about it.

CDA: Nothing on the queue.

### Speaker's Summary of Key Points

MF: It sounds like variadic `from` was popular. The other portion, which is optional, of adding flat seems like it will probably work. I will have to work that out with JHD offline to make sure that it is the kind of thing that he would want it to be. Failing that, we could try to replace it with concat, or try to drop it. But any of those routes forward seems to be acceptable for this proposal. Additionally, if we hear any other feedback about append, that could sway whether it gets included in this proposal as well. Without hearing any other feedback, I will probably just not include it, and we can add it as a follow-up if more people get behind that.

### Conclusion

- MF plans to come back at a future meeting proposing advancement to Stage 2
- The proposal stays at Stage 1

## Allow users to specify rounding based on cash denominations in common use

Presenter: Ben Allen (BAN)

- [proposal](https://github.com/tc39/ecma402/pull/839)
- [slides](https://notes.igalia.com/p/nxMdcUtbb#/)

BAN: So hello again, everyone. For this one, I am presenting a PR that we decided to leave out of the editor's round-up for 402, because it's the sort of change that TG2 considered larger than a PR should be. So it's either a large PR or a small proposal. And getting feedback from this group – specifically, though not exclusively, feedback from implementers – would be nice. Let me share my slides.

BAN: Okay. So this concerns how currency values are rounded in Intl number formatting. If I go forward here. Okay. So currently, the normative reference for how currency is rounded – for how many digits there should be after the decimal separator when dealing with currencies – is ISO 4217.
It's the normative reference for the currency codes, but it also contains data on how many digits after the separator should be used, and this data is treated as normative by the spec. But in many cases, the values in ISO 4217 differ dramatically from the values that are in actual use. The Afghan afghani is listed as using 2 digits after the decimal separator, but no subdivision of it has ever been issued. And there are a number of other currencies for which ISO 4217 specifies greater precision than is used in any reasonable use of the currency.

BAN: CLDR, though, also supplies data on the number of digits used after the decimal separator when dealing with specific currencies. This data, at least now, appears to better reflect the on-the-ground reality than the ISO 4217 data does. I have seen the ISO data described as legalistic and pedantic.

BAN: So this might be out of date, but this issue was originally raised back in 2018, and at least as of then, implementations varied on their data source for the precision to which currencies should be presented. V8 uses, or used, the CLDR data; SpiderMonkey used the ISO 4217 data. This was spotted back in 2018 because V8 was failing tests. Okay. So, yeah, the ISO 4217 data is legalistic and sometimes detached from reality. Norbert Lindenberg pointed out, for example, that although the smallest denomination issued for the Indonesian rupiah is 50 rupiah, which is about half a cent USD, ISO 4217 says 2 units after the separator should be used. And someone – I can't remember who – contacted them and was told that this minor unit is sometimes used in banking transactions and accounting, so from that perspective it is the value to use, even though no one exchanging cash will use those quantities.

BAN: So this is a point where the CLDR data appears to be just better than the ISO 4217 data, because it supplies two sets of values for each currency.
So it's got a digits attribute, which is the greatest number of digits used – for accounting or financial uses – and also a cash digits attribute, which is fixed to the smallest denomination currently in use. In the case of the Indonesian rupiah, where the smallest denomination is 50 rupiah, the CLDR data – it might not actually be complete on this – should have zero as its cash digits value; I think it does. So ISO 4217 does not distinguish between rounding for cash and rounding for financial uses, and doesn't provide these two separate values, while CLDR does. Additionally, CLDR has a cash rounding attribute, though this one is unfortunately incomplete. The smallest denomination for most currencies is one unit – one yen, one penny, or what have you – but a lot of places, and this list is expanding by the year, have stopped minting their smallest unit. For example, the smallest denomination issued in Canada is the nickel. So if we were rounding a cash value in Canada, we would want to round to the nearest nickel rather than the nearest penny. It's not clear cut, though: in a non-cash, non-financial transaction – online shopping, say – values can be rounded to the penny, even though there are no physical pennies. So the cash rounding attribute is another way the CLDR data is more useful on the ground than the ISO 4217 data when dealing with cash: with just the ISO 4217 data it's not possible to say "actually, there are no pennies in this currency; round to the nearest nickel", and with the CLDR data it is. So, yeah, in many cases these values are preferable. They are often preferable for online shopping; cash digits is very obviously preferable; cash rounding, if the data were more complete, would be preferable, also taking into account that in online transactions you're not limited to the physical currency denominations. But that seems like a benefit.

BAN: Okay.
CLDR is not a normative reference; it is recommended in several parts of the spec, but not required, unlike ISO 4217, which, because of the currency codes, is normative. So – this may or may not go through the entire timebox; I am expecting not – we have considered a lot of options. The three options that are most obviously apparent are these. The first is making CLDR a normative reference: saying that the CLDR data, which is more complete and able to separate between financial uses involving small fractions of a currency that you never see in common use and the standard transactions you would see day-to-day, is the data to use. Option 2 – I will come back to it, as it's our preferred one – is to allow using the CLDR data for currency rounding without making it normative. And option 3 is the status quo, which we don't assume it is useful to keep. Option 2 is the TG2-preferred solution. The reasons not to make CLDR normative are, first, that it's not currently normative, but more significantly that the data are not complete, especially for the cash rounding attribute. The cash precision data is good – not perfect, but good. Cash rounding is incomplete, but good where it is complete. If CLDR is not normative, implementations can use more accurate data where available; if the CLDR data is wrong somewhere, making it normative would require preferring that wrong data.

BAN: So, in the interest of making this possible, we have a PR up which provides the following new options for Intl.NumberFormat. One of the reasons why I wanted to break this out into its own separate timebox is that we're bikeshedding these names, and feedback from the whole group on which names seem most intuitive, and which values for those names are most intuitive, seems like a good idea. In the current version we have a new option, currencyPrecision. This new option indicates that, if available, we want to use the currency precision data from CLDR.
And, if available, to use cash rounding based on the smallest denomination, rather than rounding to some other value. So currencyPrecision turns on both cash rounding and cash digits. We considered other names: currencyCash, cashPrecision (since we want to format as cash rather than for financial uses), and currencyDigits. For currencyPrecision, the values that were considered are cash, for cash rounding, and financial, for other uses, which might be more precise, with the default being financial, to avoid breaking things. We also considered the value accounting for currencyPrecision; given that accounting uses are specifically cited as reasons why the greater precision might be wanted, that seemed like a good idea. But we already use the value accounting for currencySign, and that does something orthogonal to currencyPrecision: there, accounting says that if you have a negative value, you shouldn’t format it with a negative sign, but in parentheses, as is common in accounting. One could very easily want one of these but not the other, and if we use the same value for both, it might get confusing. + +BAN: So, one, we are asking questions about what names to use: whether these names seem all right, or whether people have ideas that seem more intuitive. I am guessing keeping financial as the default for currencyPrecision is a good idea; if people think having the default be cash wouldn’t break that many things, that would be lovely. So, one: names. We want feedback on whether the names we have chosen are good, or suggestions for better names. And two: this is very large for a PR, not in terms of lines of code, but in terms of the change that it makes. If people feel that this is actually too big for a PR and should be a proposal, we wanted to get that feedback as soon as possible. So . . . I think that’s time to go to questions.
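As a concrete sketch of the distinction being discussed: today’s behavior is shown running, with the proposed option in comments. Note that `currencyPrecision` and its values are exactly what is being bikeshedded above, not a shipped API.

```javascript
// Today, Intl.NumberFormat takes its fraction digits from the ISO 4217 data,
// so CAD is always formatted to the penny, even for cash amounts.
const financial = new Intl.NumberFormat("en-CA", {
  style: "currency",
  currency: "CAD",
});
console.log(financial.format(10.27)); // "$10.27"

// Under the PR being discussed (hypothetical option, subject to renaming),
// opting in to CLDR's cash data could instead round to the nearest nickel:
//
//   new Intl.NumberFormat("en-CA", {
//     style: "currency",
//     currency: "CAD",
//     currencyPrecision: "cash", // proposed; default would stay "financial"
//   }).format(10.27); // would give "$10.25"
```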
+ +CDA: Nothing on the queue at this point. + +BAN: All right. Then unless someone jumps in on the queue, I’d like to just get consensus on the PR. + +EAO: Apologies for not having looked into this more deeply before. You were describing something I would highlight: for example, in Finland, we use the Euro as currency. When dealing with cash, everything is rounded to the nearest 5 cents; the one- and two-cent Euro coins were never really in distribution in Finland. But when payments are made not in cash, but by card or online, they legally have to be accounted to the actual cent. So JavaScript defaulting to cash rounding in some situations could be surprising, or even worse. I would be happy to look more into what the data for this looks like, but I have some slight concerns that there are cases like this that might prove problematic. + +BAN: Yes. That’s very good to hear. That is a concrete reason why having financial as the default is absolutely the right idea. Anyone else in the queue? + +DLM: Yeah. It is the same situation in Canada: a cash transaction is rounded to the nickel; otherwise, it’s to the penny. + +BAN: This is making me think that perhaps the right thing to do here is to have separate options for cash digits and cash rounding, since it might be fairly common for people to want the number of digits after the decimal separator as commonly used for the currency, but not round to the smallest circulating unit; so round to the penny instead of the nickel in Canada, for example. + +EAO: From what I can see, it looks like the CLDR data is indexed by currency. Specifically for the Euro, this means that CLDR is not going to have all of the data available, given that different countries using the Euro have different habits about how cash rounding, in particular, happens. + +BAN: Yes. + +DLM: Yeah. I’m sorry, I’m in the same situation: I haven’t had much time to look at this in detail before the meeting.
I feel maybe this should actually be a small proposal rather than asking for consensus on the PR at this point, just to give people more time to think about this and examine it. I am going to take the lack of activity on the queue as an indication that maybe other people need more time to think about it. + +BAN: That is also something I expected to hear. Would it be fair to say that the feeling, if not the consensus, of the group is that this should be a smallish proposal instead of one PR? + +DLM: I am not going to block it. But I am not really hearing strong support for it either. + +CDA: Right. And we do ask that there’s at least some support. And nobody has spoken up for consensus, unfortunately. + +BAN: I am going to provisionally plan on bringing this back as a proposal at the next plenary, rather than asking for consensus right now. + +### Speaker's Summary of Key Points + +- ECMA-402 specifies that the ISO 4217 data set is used to set default rounding for currencies, which is well-defined by central banks and appropriate for certain financial purposes. +- CLDR defines more pragmatic values reflecting actual cash usage, including cash digits and cash rounding based on the smallest denomination in circulation. +- This proposal adds an option to Intl.NumberFormat to switch between the ISO 4217 and CLDR modes, called `currencyPrecision: "cash"` (recommended CLDR, but may be tailored) vs `"financial"` (ISO 4217). +- Reception was generally positive, but some delegates had not yet taken the time to review it well, and no one explicitly supported it for consensus. + +### Conclusion + +- BAN will bring this back at a future meeting, likely as a proposal rather than a single PR, so delegates have more time to review.
+ +## Withdrawing Operator Overloading + +Presenter: Daniel Ehrenberg (DE) + +- [proposal](https://github.com/tc39/proposal-operator-overloading) +- [slides](https://docs.google.com/presentation/d/1mT2VmZlC3YmhDsqdxrCxQ5GpLFHFntsb3XCM762eDvg/edit#slide=id.p) + +DE: Operator overloading: I introduced it a while ago to do something for decimal and other data types; this could be, say, vectors or matrices or data with units. Different sorts of data where it makes sense to have a notion of using numerical-style operators with them. It is really for mathematical things. + +DE: Some use cases: + +- CSS object model: you could have `1px + 3em` and it looks like CSS `calc`. Data with units like that could really avoid bugs. +- Explaining Decimal: I think it’s important to have Decimal built in, but it would be nicer if it were built using more generally available mechanisms. It’s kind of a design smell if we need to appeal to something being a special magical built-in thing that’s not part of a built-in mechanism. +- Python has widespread use in the data-science space, and operator overloading may have had something to do with the ergonomics of its use there. + +DE: There was a concern raised about operator overloading in particular, which is that operators are one of the few things in JavaScript which have locally reliable behavior. Whenever `+`, `*`, or `===` appears in current JavaScript code, you know what that will do. It’s not like a method call. It may call some methods, like `valueOf()` to convert to a number, but the expressiveness of these is limited: no method is called with both arguments together. + +DE: So a goal of the operator overloading proposal that I proposed was to avoid this injection of operator semantics into code that wasn’t locally expecting it. So I came up with this syntax: `with operator from Decimal`.
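A rough sketch of how that opt-in was described in this discussion (hypothetical: the proposal’s syntax and semantics were never finalized, and this is not valid JavaScript today):

```js
// Hypothetical module providing a type with an operator table:
import { Decimal } from "decimal";

with operator from Decimal;  // lexical opt-in for the code below

const a = Decimal("1.20"), b = Decimal("3.45");
a + b;    // uses Decimal's overloaded `+`
a == b;   // `==` may be overloaded...
a === b;  // ...but `===` keeps its built-in semantics, for integrity

// In code *without* a `with operator from` statement in scope, applying
// `+` to values with overloaded operators would throw, so existing code
// keeps locally reliable operator behavior.
```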
This statement enables operator overloading to work on decimals within the following lexical extent; where decimal operator overloading is not expected, you get the built-in semantics. Operator overloading is supported on `*`, `+` and `==`, but not `===`, to preserve the integrity of `===`. + +DE: But there were several problems with this idea. There’s an unclear cost-benefit tradeoff. In particular, the cost is really high for operator overloading, and the benefit might not be high enough; people have been skeptical about whether even BigInt had sufficient value to pay for its cost. And there are many pieces of code to be updated at various compiler tiers to make this work. With this particular lexical mechanism, there are extra problems: the lexical mechanism adds extra overhead, and the syntax was confusing. People kind of had this idea that the syntax had to be applied at the module level, when it is actually more flexible than that. But there is additional overhead to do this checking, to make sure you opted in. + +DE: The reason that I want to withdraw this is, first, we heard repeatedly, especially loudly from V8, but also from some other engines, that operators won’t happen [they would block consensus], according to them. Everybody can change their mind, but that’s the current status. I would rather that this be an explicit conclusion that the committee reaches. At least a conclusion for now. We can revisit this, but at least this is our plan for now: “We are not doing operators.” Rather than leaving things hanging and leaving everyone to have their own interpretation of what is going on. Someone could introduce operator overloading later, but it shouldn’t be considered on the table now. And this conclusion is important for us to move forward, both on Decimal and on Records and Tuples. In both cases, I personally was really hopeful that we could get the syntax, which I think is nicer.
But given the cost, and definitely given the difficulty of the implementation, the tradeoff just isn’t quite there [for these proposals to include operator overloading]. So I want to withdraw this proposal, and I want to ask for consensus on that. + +DLM: SpiderMonkey is in favor of withdrawing this proposal. I can definitely see the argument from the syntax side: it would make working with Decimal nicer, and I could also see it being nice for Temporal, for dealing with time ranges and that sort of thing, since it’s a general-purpose mechanism. But with the implementation concerns that we encountered and the lessons learned both from implementing BigInt and from the work in progress on Records and Tuples, I don’t think we can support operator overloading at this time, and we are in favour of it being withdrawn. + +CDA: I am next in the queue. +1 for support. And I agree that if somebody wants to resurrect it, or something similar, they can bring that to committee. + +CDA: MM reluctantly supports, and RMH +1 on behalf of Google. + +DE: I would be interested to hear more from MM. In making this proposal, I was really trying to follow through on a particular promise I made during BigInt, that I would follow up with a general operator overloading proposal. That was my intention; it took me longer than I intended. And the hope with BigInt was that it would set a trend to do operator overloading going forward. At the time, MM was very interested in this future story. What are your thoughts now? + +MM: That’s why I say reluctantly. I thought this was brilliantly principled. It solved problems I did not know how to solve until you suggested it. And I think it provides a principled basis for not just operator overloading, but other things we have talked about when we want to avoid global things in a principled manner. I will bring it up again when registries (?) become a topic of conversation.
I think JavaScript would be a much better language if we could do numeric operator overloading for vectors and matrices and complex and rational numbers and all those user-defined data types. But I understand the reality. My concern, which you did successfully address in the wording you used, is that this not go on the record as any kind of irreversible decision: it is quite explicitly the case that if the implementer objections go away, or if other constraints change, the committee is free to change its mind later. With that as part of the record, I am okay even making it an explicit consensus decision to withdraw. + +DE: Thanks for that, MM. The other side of the coin, I mean, you noted this, but I want to emphasize, is the explicitness of reaching consensus as a committee on withdrawal. This isn’t just a case of “well, no one is picking it up right now.” It’s a case of: the committee has thought about this collectively and agreed to put this aside. + +MM: Yeah. I am okay with that, with the language that you stated remaining explicit. + +DE: Okay. Great. I want to mention, as far as what MM might have been alluding to: this lexical scoping, this `with operators from` statement. The idea of this statement is that, within the lexical extent that comes after it, you’re allowed to do operator overloading when the operands are decimals. So if you use `+` on something that’s not a number or a string or a plain object, but something that has operators overloaded on it, and there is no `with operators from` statement preceding, it will throw an exception. Another case where we could apply such logic, I don’t know if we want to, is user-defined primitives: the conversion operation from a primitive to an object could be opted into lexically with a similar construct. Overall, the downsides raised by engines on operator overloading would also apply there, so I don’t expect to see that.
Primitives and operators, in particular how you can get from a primitive to an operator table without having a global registry, was this kind of mystery, or sort of self-contradictory thing, and that’s what this statement is about. MM, is that an accurate depiction? + +MM: Yeah, that’s accurate. There are other proposals that have been discussed that are thinking about making use of global registries of some kind, and if they are global, I will object to them. I think that we should all keep the sense of what you invented here in mind, as a way to deal with some of those problems. So I think it’s still a very useful invention to keep in mind. And that’s in addition to my general, you know, sad reluctance to let go of operator overloading specifically, as an addition to the usability of the language. + +DE: Okay. I do want to emphasize that there is a particular cost to adding such a mechanism. It’s difficult to avoid the cost of the lexical scope, which is referenced all over the place; only higher compiler tiers could prove that it doesn’t need to be queried for each operator usage. So that’s unfortunate. And I don’t know if that makes this invention totally impractical. + +DE: SYG, did you have any further comments on this? + +CDA: SYG is not here. + +DE: Was there somebody else that went in the queue but didn’t have comments to make? + +CDA: Yeah, the other was an end-of-message . . . RMH from Google. + +RMH: No. I don’t have anything to discuss. No. + +DE: Okay. JHD, you have made the point that without operator overloading, it may be implied that certain other proposals are not justified. What do you think about withdrawing operator overloading, and how does that relate to what we do going forward? + +JHD: I mean, I think the conclusion you referenced, I still hold. Withdrawal is appropriate because it matches the reality that it won’t happen unless multiple implementers change their position.
I don’t think the conversations on the affected proposals block this change. It’s the same conversation whether operator overloading is withdrawn or not; having it withdrawn just clarifies the discussion, since there’s no more confusion about whether that might be a thing we can do. So I agree to withdraw it. You’re right to bring this item in. And I think that further conversations on the proposals that might have otherwise used it will still need to happen. + +DE: Do you have any thoughts on DLM’s comment that it would have been useful anyway for things like Temporal? That applies to comparisons, for equality or less than, as well as to adding and subtracting durations. + +JHD: There are dozens of things that operator overloading would be glorious for, especially with some form of the scoped version that this proposal had pivoted to. And I think that we’re going to end up with a worse language than we could have, because we can’t have operator overloading. But the engine implementers said we can’t, so we can’t have a better language in that regard. That’s not tractable – that’s where we are. + +JSC: I am fine with withdrawing this proposal. I wanted to make a comment as one of the champions of the BigInt Math proposal, which is about taking several functions that are in the `Math` object and extending them for BigInts. One of the design problems it’s facing is whether to overload existing `Math` functions versus adding new functions to the `BigInt` object. I was slightly in favor of overloading the `Math` functions; part of the reasoning was in the same extending spirit as the operator overloading proposal. And I know that we already have overloaded operators for BigInt. But this presentation, and the sentiments from the engine implementers that it shows, are making me lean towards pushing the BigInt Math functions proposal to `BigInt`, rather than overloading `Math` functions.
I wanted to comment on this because of the knock-on effect that this withdrawal has, but I don’t object to the withdrawal itself. + +DE: Okay. I agree with putting such operations as properties on `BigInt` [and not `Math`], but for different reasons. The design goal around BigInt was that, in general, when working with BigInts versus Numbers, you should remain aware of which ones you’re using; otherwise, you will quickly run into an error. In general, code should not try to be generic between BigInt and Numbers. Overloading a function is unrelated to overloading operators: operators have a high implementation cost, and there is a low implementation cost to overloading functions. The reason we overloaded operators for BigInt was that it was seen as essential to decent ergonomics, which is not as necessary for functions. For now we are deciding that those operator ergonomics aren’t worth it; that’s a separate thing. Even just for BigInt, where operators are overloaded, developers are supposed to be keeping track of what they’re doing, and not mixing things up, apart from the operator case. + +JSC: Sounds good, thanks. + +WH: For `Math`, you have an issue of what happens when there are no arguments to some of the `Math` functions, like for `Math.sum()`, which we discussed earlier, where the result would be different depending on whether you want double `sum` or BigInt `sum`. + +DE: Okay. Great. We are all in agreement on not overloading `Math` for BigInts. + +SFC: Yeah. You are asking for feedback from SYG on the consensus. I wanted to note that I double-checked, and this was not on the agenda, so this was not discussed in the Google sync. SYG is not here, and I am just saying that I don’t think Google has a position right now, because this was not discussed before the meeting. + +DE: I apologize for not putting this on the agenda before the deadline.
I had thought that the overflow topics were added automatically, and I was mistaken. The thing is, Google probably formed a position before the Japan meeting, because it was on the agenda there. Do you have any prior notes from that? + +SFC: I can look. + +DE: Okay. I am fine to delay the withdrawal. But honestly, this is very heavily motivated by Google’s feedback, in particular V8’s. Anyway, if you haven’t formed an opinion on this and anyone wants me to delay this until there is more of a chance before the agenda deadline, I am fine to delay it. + +CDA: We have support for withdrawal. I want to be really clear for the record. SFC, are you saying that you would block on the basis of the late addition at this point? + +SFC: Let’s see. It looks like from September, it didn’t bubble up all the way. + +SFC: I mean, I don’t feel comfortable speaking one way or another for Google or Chrome’s position on this topic. I am just saying that if DE’s purpose is to get someone like SYG to say “yes, I agree with it”, we are not in a position to do that. + +DE: SYG said it yesterday. I don’t need that message from SYG right now. It’s more that I want to hear from advocates of operator overloading. + +DE: Is anybody else in the queue? + +CDA: No. Nothing on the queue. + +DE: Okay. So do we have consensus on this? + +CDA: I am looking at SFC again. I think we had support, but . . . + +SFC: From my position, I am currently neutral. I don’t think this impacts a lot of Intl proposals. My area is Intl, so I have no position on this. + +DE: Okay. + +DE: RMH, you put yourself on the queue as supporting the withdrawal on behalf of Chrome? + +RMH: Yeah. I did that – I remember talking with you about it. I think I am supportive of withdrawal, but now SFC’s comment makes me unsure. I don’t know. + +DE: Okay. Well, let’s consider this withdrawn. And if people raise extra doubts at the next meeting, then that’s okay. We can bring it back.
+ +CDA: Can we say, then, that we are asking for consensus for withdrawal on the condition of just confirming with SYG upon his return? Would that satisfy? + +DE: It’s quite absurd to confirm with SYG, because he loudly advocated for this yesterday. If SYG were a blocker, that shouldn’t be a condition. But if you want that to be the written conclusion, then sure. + +CDA: Yeah. It’s not so much what I want, but I feel like there’s a lack of clarity here on the potential blocking or not blocking, and we are not getting an affirmative confirmation or denial from the Google representatives. + +DE: Let’s confirm with SYG, then; we will be able to do this before the notes are published. [NB: SYG later confirmed] + +CDA: You had many messages of support for the withdrawal earlier. Yes from JHD. And +1 from DLM. Is anybody opposed to this? Going once . . . going twice . . . operator overloading is withdrawn. Thank you. + +### Speaker's Summary of Key Points + +- Operator overloading was proposed to solve many problems with JS, including enabling different numeric types, unit-bearing calculations including CSS units, more ergonomic vector/matrix processing, and generalizing BigInt/Decimal. +- Several JavaScript engine maintainers held that operator overloading does not have the right cost/benefit tradeoff to be worth implementing. It would be very hard to implement efficiently, especially with the scoping/safety features. +- Given the positions in committee on operator overloading, DE proposed withdrawing operator overloading, which may clear the way for proposals such as Decimal, where “whether to overload operators” is a pertinent design discussion. + +### Conclusion + +- TC39 reached consensus to withdraw the operator overloading proposal, pending SYG’s confirmation, which was given after the meeting. +- The withdrawal is due to the high cost to implementations and the high complexity of the operator overloading proposal, or any other possible proposal in this space.
+- Operator overloading may be re-introduced to TC39 in the future (especially if implementers' perception of the cost/benefit tradeoff changes), but the committee currently does not expect to spend time investigating operator overloading. + +## Withdrawing custom numeric suffixes + +Presenter: Daniel Ehrenberg (DE) + +- [proposal](https://github.com/tc39/proposal-extended-numeric-literals) +- [slides](https://docs.google.com/presentation/d/1me-RkloXmBJhDJKG3rl_q0CYW2KO_QFnvIPmIRmQhsw/edit#slide=id.g27efdfda19b_0_0) + +DE: So, custom numeric literal suffixes. The motivation is similar use cases to operator overloading: CSS units, decimals defined in JavaScript, and encouraging the use of strong types in general. And even to explain BigInt, or to explain Decimal if it had operator overloading. For example, there was once a syntax that I wrote where I was going crazy with keywords. There were ideas for how to squeeze this in, but the idea is basically that we would have something sort of like string tag templates: the suffix handler would get a frozen object and be able to pull out the string that came before the suffix, and also the preparsed number, if that made it faster. + +DE: So the motivation for withdrawing: this presentation presumes that withdrawing operator overloading previously got consensus, which it just did. What was raised in September, in Tokyo, was that developers expect that suffixes imply operator overloading. If you see a decimal with a decimal suffix, and then you have to use methods on it, and plus doesn’t work or equals doesn’t work, that could be confusing. So the direct result is that custom numeric literals violate developer expectations. Do we have consensus on this? I think this was raised first in July, and then the decimal proposal was updated towards this in September. + +DE: Just a quick note: numeric literal suffixes are not expected to have a high implementation cost. There is the question of how to deal with this funny scoping.
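As an aside, a sketch of the tag-template-like handler shape described above (hypothetical: the handler name, the frozen record’s fields, and the desugaring are illustrative only; no such syntax exists in JavaScript):

```javascript
// A suffix handler would be an ordinary function in scope. A literal like
// `123.45m` would (roughly) desugar into a call passing a frozen object
// carrying the raw string before the suffix, and possibly a pre-parsed
// number so handlers can be faster.
function m(parts) {
  return `handler got "${parts.string}" (preparsed: ${parts.number})`;
}

// What the engine would effectively do for `123.45m`:
const result = m(Object.freeze({ string: "123.45", number: 123.45 }));
console.log(result); // handler got "123.45" (preparsed: 123.45)
```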
But that’s more of a design issue. From an implementation perspective, I think it will be simple no matter which way we do it. We can go to the queue. + +DLM: Yeah. I agree with your argument. I support withdrawing this. I don’t think it makes much sense if we don’t have operator overloading. + +CDA: All right. And we have got a +1 end-of-message from MM. And we have PFC on the queue. + +PFC: I support withdrawing this if we are not going to pursue it. But I am a bit skeptical of the claim that developers expect that suffixes imply operator overloading. I was wondering if you could say a bit more about that. + +DE: Honestly, I was surprised about this claim as well. I was hoping that this proposal would work. I think it’s just the idea that when you see something that looks like number syntax, you expect you can use it with the other parts of number syntax; and multiple people seemed to share that thought. + +DE: Well, I was hoping it would be used with operator overloading, so the connection isn’t quite so surprising in itself. + +USA: All right. PFC sent a +1. JHD is on the queue with a +1 for withdrawal. That’s it. That’s the queue. + +DE: Okay. Do we have consensus on this withdrawal, then? + +USA: We only heard positive comments regarding the withdrawal. So yeah, I would say we have consensus on withdrawal. + +DE: Okay. Thank you. + +### Summary + +- Custom numeric suffixes were proposed to enable more numeric-like data types to be usable with better ergonomics. +- But operator overloading was withdrawn, and the use cases of custom numeric suffixes tend to also involve operator overloading. +- Several committee members explicitly supported withdrawal. + +### Conclusion + +- The custom numeric suffixes proposal is withdrawn, due to the widespread (but not unanimous) developer expectation that values produced with numeric suffixes would work with operator overloading, and the withdrawal of the operator overloading proposal. + +## Decimal overflow + +USA: Thanks for ending super early. We have 10 minutes.
And not necessarily anything that would fit in these 10 minutes. So what do you all say we take this large amount of time for ourselves and end early? + +DE: Was there anything overflowing from previous topics that we could go into? Like decimal overflow? + +USA: There is definitely overflow from previous topics, but it’s all 15 minutes at least, and I am not convinced that – well, I didn’t check if people are prepared, but I could ask. Is Jesse around? + +JMN: I am around. What’s the suggestion? + +USA: Could you be prepared for overflow, to continue with decimal? With only 10 minutes? + +JMN: I think so. The idea was to capture – we had captured the queue at the end of the presentation yesterday, and there were two items there. So I guess it may depend a little bit on whether those people are here. + +USA: That is true. Let me quickly go and find the queue from the notes. + +JMN: I believe CM was one of those who had an item there. And Shu. + +USA: Oh, but then SYG is not going to be around. + +DE: CM is on the call. After Chip goes, it would be great to discuss any implications for decimal from the withdrawal of those two topics. JHD made a comment about that just now, and it would be great to go into that in more detail. Chip, do you want to elaborate on your point? + +CM: I am trying to swap back into my head what the point was . . . + +JMN: I think I can help you there. If I remember correctly, you were giving a +1 to the question of why decimal should be in the language rather than a library. Does that sound right? + +CM: No. We were expressing skepticism about the argument that having it be a userspace library was unrealistic. We think having it be a userland library would be fine. That was about the depth of the remark I had. It just happened to be when we cut off the queue. + +JMN: Right. I guess that aligns with what others were saying.
Namely, we take it as a challenge for Decimal to formulate some kind of argument for why being in the language is worth it – worth the cost. One suggestion that I might have, doing a little bit of freestyling here, would be this: JHD has mentioned that numbers are a very primitive datatype for a language to support. So unlike some other datatypes that one might consider, it makes sense to think of numbers as something in the language, especially given the kinds of errors and use cases we see with the numbers that are built in already. So that would be one argument one might consider. + +CM: Yeah. I mean, I think that the benefit of the standard being a point of common understanding for developers is a valid argument. I just don’t think it has enough weight to overcome the cost of the added complexity and extra complication to the language. I could also see an argument about performance – if the implementation is in cahoots with the engine, it can run faster – although I don’t think that the anticipated applications for decimal are in that space of things where performance is a primary consideration. I am not an expert in the application space, but I could imagine such an argument. + +DE: CM, I definitely see how adding anything to the standard library has costs. What sorts of considerations should we take into account when doing this cost-benefit comparison? + +CM: I think it’s just literally what you said there: a cost-benefit comparison. And obviously, both the costs and the benefits are really difficult, if not impossible, to quantify. So I think it’s going to come down to making an argument and having that argument be persuasive. And at this point, a lot of us are unconvinced that the benefit outweighs the cost.
To continue – for example, I consider that the fact that we dropped operator overloading and dropped numeric suffix syntax in the last two discussions may, in fact, detract from the potential of having decimal in the language in the first place. So it’s all of a piece. I understand all of the issues with IEEE floating point; those are all real. But we are at this point juggling a very large and complex language standard, and we are in the death-of-a-thousand-cuts stage with small incremental improvements. We just have to be pretty ruthless in terms of our demands for things being significant improvements. I am not actually personally taking a strong position for or against decimal here. It’s just that I think we are becoming increasingly sensitive to the overall complexity issues. + +DE: Okay. Yeah. We all care about complexity, and this has been a constant concern of the committee as long as I have been involved. + +CM: Yeah. + +DE: And I share this concern. Still, we have been adding things to the standard library. SYG raised the idea of thinking about JavaScript as a product and bringing in more end-to-end product conversations. This is something that JMN and I will work on for the next meeting. Another thing that JHD raised is developing learnings from the library ecosystem: we have experience from the library ecosystem, and we can talk to the maintainer of the dominant set of libraries for decimal [again]. That’s something we can come back with next meeting, while proposing Decimal for Stage 2, answering these concerns. + +CM: Framing from an overall product perspective seems like a very compelling way to structure the argument – note this is my personal take on this; it does not reflect a position on the part of Agoric. But I think the case could be stronger if it’s framed in that holistic mode. + +DE: JHD, did I capture correctly what you meant about learning from libraries? + +JHD: Yeah.
I mean, I think it’s that – so there’s a missing piece, what you said combined with what Jesse was remembering. Numeric things, math, is conceptually primitive. It means that I think that there would need to be a really, really compelling argument for why it would belong in the language rather than just being a userland library for a time. And those types of arguments usually revolve around performance, having a coordination point, or that it’s easy to screw up and so we can solve it for them, and those are just 3 possibilities. I am sure there are dozens more. The more compelling reasons we have, the more people’s correct complexity fear is overridden and the more value there is in the language. I am continuing to be open to being shown those compelling things. But at the moment, it doesn’t seem to me that Decimal meets those criteria without the ergonomics boost from syntactic overloading, is the way to phrase it. It’s possible. I am not saying it will never get there, but I can’t see it right now. I can’t give a concrete, “if you check these boxes, I will be happy with Stage 2”, but that’s the thing I would hope to see and expect other delegates to be convinced by as well. + +CDA: We are at time. I know you might have gotten shortchanged on the continuation, Jesse. There is more time if you would want a continuation. + +JMN: No. This is a good discussion. Thanks for squeezing me in. + +CDA: Do you want to dictate any key points for the notes? + +### Speaker's Summary of Key Points + +We discussed a general set of objections or further questions that remain for decimal, possibly hinting at a future Stage 2 if those objections can be satisfactorily addressed: + +- JHD asked about an analysis of the dominant ecosystem libraries for Decimal, and what we can learn from them. It would be great if the proposal fixed issues that users and maintainers of these libraries encounter today. 
+- CM noted that adding anything to the standard library has cost, and asked about the cost/benefit analysis for this proposal. The proponents should explain why Decimal should be built into JavaScript, as opposed to being a user-level library. diff --git a/meetings/2023-11/november-29.md b/meetings/2023-11/november-29.md new file mode 100644 index 00000000..81493d1a --- /dev/null +++ b/meetings/2023-11/november-29.md @@ -0,0 +1,898 @@ +# 29 Nov 2023 99th TC39 Meeting + +--- + +Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. + +You can find Abbreviations in delegates.txt + +**Attendees:** +| Name | Abbreviation | Organization | +| ---------------------- | ------------ | ----------------- | +| Nicolò Ribaudo | NRO | Igalia | +| Michael Saboff | MLS | Apple | +| Linus Groh | LGH | Invited Expert | +| Rob Palmer | RPR | Bloomberg | +| Jesse Alama | JMN | Igalia | +| Ron Buckton | RBN | Microsoft | +| Daniel Minor | DLM | Mozilla | +| Istvan Sebestyen | IS | Ecma International| +| Ujjwal Sharma | USA | Igalia | +| Ashley Claymore | ACE | Bloomberg | +| Chris de Almeida | CDA | IBM | +| Jordan Harband | JHD | Invited Expert | +| Kristen Hewell Garrett | KHG | Invited Expert | +| Rezvan Mahdavi Hezaveh | RMH | Google | +| Devin Rousso | DRO | Invited Expert | +| Waldemar Horwat | WH | Google | +| Philip Chimento | PFC | Igalia | +| Chip Morningstar | CM | Agoric | +| Samina Husain | SHN | Ecma International| +| Jack Works | JWK | Sujitech | +| Daniel Ehrenberg | DE | Bloomberg | +| Ethan Arrowood | EAD | Vercel | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | + +## Recruiting people interested in JSX to join the matrix room + +Presenter: Jack Works (JWK) + +- [matrix](https://matrix.to/#/#jsx:matrix.org) + +JWK: If you’re interested in JSX, if you want to see it standardized please click 
the link in the agenda so we can make it happen. And that’s all. We have no proposal now. Then let’s go to the next topic. + +RPR: Okay. So there’s a matrix room that everyone can join if they are interested in talking about the possibility of a proposal for JSX. + +## JSON.tryParse + +Presenter: Jack Works (JWK) + +- [proposal](https://github.com/Jack-Works/proposal-json-tryParse) +- No slides + +JWK: As you can see, it’s very simple. The problem to solve is this one: trying to parse something that might not be valid JSON. And the proposal – the spec is the same as this one. So are there any questions? + +RPR: There’s no one in the queue. + +KG: Can you say more clearly what the point is? Like, why you want this instead of writing try catch? + +JWK: Because this is a statement and usually you have to write code like this. + +```javascript
+try {
+  return JSON.parse(v);
+} catch {
+  return defaultValue;
+}
+```
+
+JWK: For example. So because this is a statement, it forces you out of the expression world into the statement world, and it interrupts your expression. It might be annoying. So if we are going to have this, it will be easier to handle possibly-JSON strings. As you can see, it’s the same as URL.canParse. Btw I don’t know why they made canParse but not tryParse. + +CDA: RMH? + +RMH: I think the same as the question that he already answered. I don’t know if you want to add anything or what. I feel like it’s the same. + +RPR: Yeah. It’s shorter and an expression. All right. Mark Miller? + +MM: So in general, a function call in JavaScript might throw, so the motivation behind this would seem to apply to everything. So what’s unique about JSON.parse that distinguishes it from pretty much every other expression in the language that might throw? + +JWK: This is because `JSON.parse` is handling unknown input; although every other expression might handle unknown input too, this is a clear signal. + +MM: Okay. + +NRO: Yes. Like, I agree that having to write the statement is annoying. 
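(For reference, a userland sketch of the shape being proposed – failure-as-`undefined` is one possible semantic here; the exact behavior of the proposal is still an open question:)

```javascript
// Userland sketch of a JSON.tryParse-style helper: returns the parsed
// value, or undefined if the input is not valid JSON.
function tryParse(text, reviver) {
  try {
    return JSON.parse(text, reviver);
  } catch {
    return undefined;
  }
}

// Being an expression, it composes with optional chaining and ??:
const name = tryParse('{"name": "Ada"}')?.name ?? "anonymous";
console.log(name); // "Ada"
console.log(tryParse("not json")?.name ?? "anonymous"); // "anonymous"
```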
I wonder if you think that `do` expressions would also be a possible solution to this problem, instead of having a special case for JSON.parse? + +JWK: Do expressions would solve this problem, but it looks like that proposal hasn't had activity for a long time. And also, I saw a new idea in the Discourse that’s called try expressions that can also solve this problem. But that is not a proposal yet. Do expressions or something like try expressions could solve this, but we don’t have them yet. + +NRO: Okay. I wonder if the champions of do expressions could say whether there is any update coming on the proposal in the near future? + +KG: Sure. I haven’t been working on do expressions mostly because there are a lot of syntax proposals in flight and I don’t think it makes sense to have all of them. And I think my time and probably the committee’s time as a whole would be better spent on standard library things rather than new syntax. So I haven’t been working on `do` and I am not planning to work on it in the immediate future. Someone else is welcome to pick it up, although like I said I think standard library stuff is probably more valuable. + +RPR: And there’s 4 minutes left. Two people in the queue. + +RBN: This is partially a response to something I think Mark was asking about, whether there might be some generalized solution and why we would need a `tryParse` specifically for this. Having a `tryParse` method is more useful as it can be used as a callback, since it would be a function, versus a generalized “`try` but don’t throw” syntax. While I’m not opposed to the addition of that kind of syntax, `tryParse` and `canParse` are generalized mechanisms that are employed for methods on things that have a representation that could potentially fail to parse, but you wouldn’t want to throw. So it’s not necessarily something new. + +RPR: All right. That’s the end of the queue. A couple of minutes to go. Do you want to ask for Stage 1, Jack? + +JWK: Yes. 
Do we think this is a problem to solve and we should have this one? + +KG: I will speak in support. I think that this is worth trying to solve. But mostly because I think that it is quite cheap to add stuff like this, and it’s convenient. So I am in favour of us having more cheap convenient things. + +MM: I am not. Every cheap convenient thing that you add is more complexity in the language and has to pay for itself. And in this case, the specialness of this to `JSON.parse` is solving, you know, an issue that does come up pervasively in the language and for which the language already has adequate support: you define a wrapping function that does a try catch, if you want to do something that is conditional on the catch rather than propagate the throw. I mean, that’s why we have try catch in the language. So I think this is solving a nonproblem. And it’s adding special-case complexity in the language for something that is just a pervasive language issue, because the language is already working as intended. + +JWK: All right. So maybe – maybe we should go this way, or via do expressions? + +MM: ‘do expressions’ have some real reach. It’s a fairly simple mechanism that solves many problems. So I am receptive to ‘do expressions’. ‘try expressions’ are in the middle. And altogether, I think I would still object to them, but more weakly than I object to special-casing `JSON.parse`. It’s solving a narrower problem than ‘do expressions’, but solving a problem that comes up a lot. If the problem is that try catch takes you out of the expression world into the statement world, then that is exactly the motivation for do expressions. And do expressions are no more complicated than any of these other proposals while solving many more problems. + +KG: Definitely disagree with that claim. I think this proposal is much, much simpler than do expressions. + +RPR: All right. From DLM we have support for MM. End of message. 
Meaning, DLM is saying that he does not support Stage 1. + +RPR: We are basically at time now. We have got a couple of people on the queue. Nicolò, can you be quick? + +NRO: Yes. Just – given Stage 1, would you also consider in scope, for example, a test to check whether a string is valid JSON or not, rather than already deciding on this option of returning an object or undefined . . . + +JWK: Yes. It was `canParse`. JHD raised that when you do canParse, the next step is to parse it. So `canParse` isn’t meaningful in this case because you always want to parse it after you test it. + +NRO: All right. I was asking because the example I saw was wrong: you had “if not the JSON result”, but the parsed result could also be `false` or `null`. So a Boolean “can we parse it or not” would be useful. Like, I am also open to exploring this solution. Is it possible to include – + +JWK: Yes. If it’s `tryParse` you can do optional chaining on it. It would be more useful than `canParse`. + +RPR: All right. I think we should move on. Ron, did you want to speak? + +RBN: Yeah. I would say that I somewhat disagree with Mark. In the ecosystem, if you’re a third-party library with a custom DSL or syntax, it’s often convenient for you or for the libraries to include a mechanism to allow them to parse these things. I have written similar things many times. In the ECMAScript specification, in the library, there are very few things that we parse on behalf of the user. Basically, dates, new regular expressions, and functions, and in many of those cases they throw; in some cases, for dates, they return NaN – or something that has a value of NaN. So we don’t have that many places in the language where we need to do parsing, and this would be valuable. So it doesn’t seem, to me, that it’s worth the expense of adding a new syntactic feature to try to evaluate an expression as opposed to simply adding a convenient method that works with optional chaining, things like that. 
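(For reference, a sketch of RBN’s point that a method, unlike syntax, can be passed around as a callback – the `tryParse` helper here is a userland stand-in for the proposed method:)

```javascript
// A "try but don't throw" method composes as a callback; syntax would not.
const tryParse = (s) => {
  try {
    return JSON.parse(s);
  } catch {
    return undefined;
  }
};

const inputs = ['{"ok":true}', "not json", "[1,2,3]"];
// Parse everything, dropping the inputs that failed to parse:
const parsed = inputs.map(tryParse).filter((v) => v !== undefined);
console.log(parsed); // [ { ok: true }, [ 1, 2, 3 ] ]
```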
So I am not really – I don’t think I agree with the idea of increasing syntactic complexity. And I agree with Kevin that do-expressions overcomplicate the solution. It doesn’t solve the issue of having to use try catch and write this complicated set of statements. All it does is put the complicated set of statements in something that returns an expression, which you could do with an inline immediately-invoked arrow function without do-expressions. It’s still not convenient. I don’t really see why this should be considered for Stage 1. + +RPR: I think we have to stop there now. Yeah. So I think the conclusion is that this has been blocked. + +### Speaker's Summary of Key Points + +- KG spoke in support of Stage 1, as a cheap and convenient addition. +- MM objected that this adds special-case complexity for a pervasive language issue that try catch already handles; DLM supported MM’s position. +- RBN argued that a `tryParse`-style method would be more valuable than new syntax such as do-expressions or try-expressions. + +### Conclusion + +- Not going to stage 1. Listed in the rejected proposals. + +## Module sync assert for stage 1 + +Presenter: Jack Works (JWK) + +- [proposal](https://github.com/Jack-Works/proposal-module-sync-assert) +- no slides + +JWK: Yeah. So the problem here is that if some module in the dependency graph becomes async, the importing module becomes async too, and then the evaluation time is deferred, and it might cause some bugs in the real world. This is a real bug that we have encountered in our products. We register an ‘onInstalled’ event that opens the welcome page to new users. Then there is a new top-level await module in the graph. So this whole module becomes async. But this event handler only gets registered at the end of the first event loop turn, so when the callback runs, it has already missed the event. And the welcome page no longer opens. This is very hard to find because when you debug the app, it is already not the first time the app is opened. Here is a possible solution. We can solve it at the bundler level, but every bundler needs to invent its own convention, and this does not fit projects that do not use a bundler or linter. 
This (import attributes) is not an option because the keyword has already been renamed: with `with`, it does not assert anymore. And also, the attributes are not an option because when you attach this, you always want to attach it to every import and export in the current file. But if you are using import attributes, it allows you to write some imports with the sync assert and some others without, which you never want. I think the best is to add a new directive. If a module appears with this directive, then it is an early error if there is an async module in the graph or this module itself is async, so you will fail early and find the bug earlier. That’s all. + +NRO: Yes. I wanted to share one use case. There is one JavaScript platform [i.e. bun] that supports both CommonJS and ESM, and to make it easy to use them together, they decided to allow using require of ESM. But obviously, that only works when the ESM does not use await, because otherwise it can be asynchronous. Having a way to ensure that they do not accidentally introduce await into the dependency graph, making sure that their module keeps being synchronously requirable, would be helpful. + +DLM: Yeah. This is solving a real problem. This is something that has come up internally at Mozilla as well with the module loading mechanism. We are in favour of Stage 1 for this. This is good. Thanks. + +NRO: Yeah. Another reason is how this composes with the deferred imports proposal: right now, async modules need to be evaluated eagerly. When you import with defer, you have to traverse the whole graph to look for asynchronous leaves. This proposal would allow engines to stop traversing the graph at specific points, so it helps make that feature more performant for developers. + +JWK: This idea also jumped out when I was implementing deferred imports in webpack. + +MM: I don’t need to speak, but just enthusiastic support. 
The problem is real and you explained it well. It’s worth a Stage 1 investigation. For the record, the directive approach is plausible and I am glad you stated several plausible approaches. + +MAH: Yeah. There can be observable effects from an async module showing up in the dependency graph. But I am confused by the example provided here, for basically workers or anything like that. That feels to me like a bug in the environment: the environment could very well only trigger the event once it realizes that the top-level module – the module providing the entry point – has fully loaded. I am confused how this specific case is not a platform bug. + +JWK: Yes. In this case, maybe Chrome can fix it. But you also mentioned service workers. In a service worker, you also need to register all listeners during the first run, and if the module becomes async, it might take several event loop turns to register the event listeners, and that becomes problematic. + +MAH: I mean, even for the service worker, I don’t understand why service workers do not consider the service worker’s load as incomplete until the main module has loaded. + +JWK: I have no idea, but that’s what the Web currently does. + +MAH: Yeah. Anyway, but I support the use case in general because I know it can have observable effects. I encourage you to file platform tickets for these specific cases. + +EAO: I like this. Just a quick question on the syntax: my preference would be to go with something like assert sync true, which is in the middle of the first two options. I understand this requires repetition. Is there a reason why you kept that specific form out? + +JWK: Yes. I have already explained why this is not ideal, but maybe I should re-explain it. When you write a sync assert, what you want is for the current module to be sync. If you write code like this (`import 'a' with { assertSync: true }; import 'b';`), "a" must be sync and "b" does not have to be sync. This case does not make sense. 
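(For reference, the two shapes being contrasted here, sketched side by side – the `"assert sync"` directive string is hypothetical, since no exact spelling has been chosen:)

```js
// Directive form: one marker asserts that this module, and transitively
// its whole import graph, is synchronous — an early error otherwise.
"assert sync"; // hypothetical spelling
import { a } from "./a.js";
import { b } from "./b.js";

// Per-import attribute form: must be repeated on every import/export, and
// permits the mixed usage ("a" asserted, "b" not) that JWK argues never
// makes sense for this use case.
import "a" with { assertSync: true };
import "b";
```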
The general intention of writing a sync assert is to assert that the current module itself is sync, and you would need to write the sync assert on every import/export in this file if we take this approach. This would be annoying. + +EAO: Okay. + +GB: Sorry, I'm just trying to follow the discussion here, so I don't want to rehash the same point again, but while we're on the topic, is the intention that the assertion form asserts that the direct import is a synchronous module that does not have any await, or does it assert that the entire graph, all the way to the bottom leaves, does not have await? + +JWK: Yes. The whole graph. + +GB: Thanks for clarifying. + +ACE: To reiterate what NRO said, I can see the issue where if you’re using this feature because of the bug you described, then yes, it’s wrong to only put it on one of the imports; you would want it on all of them. However, as NRO said, there are also use cases for wanting to use this on an individual-import basis, which the directive then wouldn't let you do – like the combination of this with deferred imports, saying I want to defer this import and there should be no asynchronous pre-loading. So you can still achieve the fixing of this bug by wrapping all of the modules: move all of your imports into one module, and then just have one import asserting that the one module that contains everything is sync. So maybe that gives you the best of both worlds, whereas the directive only really handles this one use case. + +JWK: Okay. That’s – that’s a way, but we need an extra file for this. I’ll add that after the meeting. + +RMH: Yeah. In our meeting before TC39, we got the feedback that it seems like a Chrome extension API issue. And we were not convinced why we need a new language-level feature for that. But we are fine with exploring this proposal more. + +JWK: Yeah. We have already raised some other use cases. I can update this after the meeting. E.g. 
to assert my library is always sync; otherwise it might break customers’ code. + +RMH: Yeah. Thank you. + +GCL: Hello. The same thing that was said. For one thing, I don’t find this particular example to be super motivating because I feel like it’s just a bad API – like, it’s basically a race condition. I understand there are other use cases, but I am kind of concerned that we’re making the ecosystem a world where it’s not possible to use an async module at all in any dependency ever, because somebody somewhere is going to put a sync assertion on it and then your module is a problem module because you used async. + +JWK: If you write this, you will have some reason that the module should be sync. So if an async module comes up, you know it will cause a problem, and you will have to rearrange how you import those modules so they don’t break the intention that led you to add this sync assert in the first place. + +GCL: I mean, I could see that. I am also seeing places where stuff is already being changed to not be async anymore. Like the WebAssembly ESM integration, which was not done specifically because of the use cases presented in this proposal, but somebody had a reason that an async module broke what they were trying to do. So they changed the ESM integration to no longer use async modules. I am just sort of a bit concerned that’s a thing that we will see more and more. Especially if we add more places where we are excluding async modules. I am not sure if this is enough to block Stage 1, but I want to investigate what the ecosystem direction will start to be here. + +JWK: Yeah. Because if you are writing a library and your module becomes async, it will be a breaking change. So adding this one can be a promise in your library that we currently – + +GCL: I would not consider that to be a breaking change. I am sure you can write code that makes that a breaking change. But I would generally consider that to be buggy code. 
I guess that’s where we are running into this problem. + +GB: Since it came up on the last topic, I can clarify briefly: the reason that the WebAssembly ESM integration switched to no longer using top-level await was to support the deferred import proposal, because there is benefit in being able to defer the start initialization in a similar way to how deferred modules work. And so definitely the interaction with deferred is an interesting point to consider for this proposal. And I think expanding the use cases beyond those which are currently listed would be very beneficial as well. So the deferred interaction, and then there is also potential benefit for Node.js, which is that in Node.js you cannot require an ECMAScript module because it might have top-level await and it might be asynchronous. And the loader pipeline has a way to be specified. But having some kind of implicitly endorsed behavior for asserting a synchronous graph may be needed for Node.js to implement the semantics. This has come up, so that might be something that is a tangential use case, but it would set a precedent to enable Node.js as well. So it’s worth mentioning. + +EAO: Yeah. I just thought I would highlight, as Jack said previously, that this directive syntax allows, I think, slightly earlier determination when traversing the graph of whether the directive itself is valid. Because if it hits a module that asserts it is sync, it doesn’t need to look further; it can trust that. Otherwise it needs to check all of the imports of that one module. And with the style of using something like with assert sync true, for instance, it would feel clumsy to include that in all of the places one already knows are sync. Effectively, I think the directive syntax will be quicker to work with. I don’t think that’s going to have any real-world effect, but I realized that I actually do like it more than the import assertion one. I thought I would highlight that. + +JWK: Sorry. 
But the engines do need to analyze the whole graph, because they need to raise an early error for an async module used in the subgraph. So I don’t understand how it can be faster. + +RPR: Yeah. Normally, earlier, with the engine doing more work . . . Nicolò, do you have a clarifying question? + +NRO: Yeah. Clarifying comment. There is an initial discussion about how to handle this. I have not checked the status of that yet, but it’s still open. So let’s not decide too much here. + +NRO: Yes. I want to share another use case I have for this: when writing polyfills, you need to make sure that the polyfill installs itself before running any other code, because usually you import the polyfill first, and the libraries and other modules you use will assume that the polyfill is globally available. And so any polyfill shipped as ESM would make use of this directive to make sure the polyfill is installed before other modules are executed. + +ACE: I think I am right in saying that if I have multiple imports at the top of the file, the module itself isn’t going to be executed until all of those are. The third import can execute before the first one. But, like you were just saying, you would import the polyfill and then import the rest of the code, thinking that because I have sync-asserted the first polyfill import, it will evaluate before I evaluate the second import. I imagine that’s the way it works: it forces it, and if a subgraph is synchronous, it’s guaranteed to be executed before other imports – + +ACE: This isn’t saying everything is synchronous; the evaluation is synchronous. Like, if we were using this code in a browser, every import is async now. It can evaluate before the first one. + +NRO: The way ordering works is that first everything is loaded. And then after . . . So loading asynchronously does not affect order; it’s only affected by the order that the import statements are in your code, and by whether those modules use await or not. + +ACE: Great. 
This is me being negatively influenced by how the Bloomberg module loader doesn’t work that way. Okay. That’s great. That makes a lot more sense. So the assertion does imply evaluation order then. + +RPR: Do we have any more questions? Or can we have Stage 1? + +JWK: All right. An explicit request for Stage 1. + +RPR: We have support from NRO. Any other support? From Eemeli. And Daniel Minor. + +MM: Reiterating the support statement I already made. + +RPR: All right. Would you like to just summarize the key points? + +### Speaker's Summary of Key Points + +JWK: I don’t think we have something to summarize. But I will update the README about the motivation. Thank you. + +RPR: Okay. It sounds like there was another point to clarify about early errors as well. Okay. Thank you, Jack. + +### Conclusion + +- Achieved Stage 1 + +## Decorators normative update re: extra initializers + +Presenter: Kristen Hewell Garrett (KHG) + +- [proposal](https://github.com/tc39/proposal-decorators) +- [PR](https://github.com/pzuraq/ecma262/pull/12) +- (no slides) + +KHG: So the update here to the decorator spec: basically, something somebody pointed out as we were working through it is that we initially didn’t have initializers – like the addInitializer extra initializers that decorators can add – for class fields and accessors. The logic being that they could just use the accessor or the initializer that actually initializes the value of the field. In discussion in committee, we decided to add the addInitializer method to those decorators because that way it would just be simpler and we would have, like, a uniform API everywhere. So that was the state of things. And then somebody pointed out that all of these initializers currently run prior to class field assignment, and right after super(). Which is what we agreed on in committee for methods and getters and setters. Which makes sense, because those are the prototype values. For fields and accessors it doesn’t make sense. 
The logic was that fields and accessors – code in general should not observe a period where a value has not been fully initialized by the time that code runs. And for methods, that means that all of the methods need to be initialized before the field initializer runs. But the extra initializers for the field are currently running before the field. So this change would rectify that; it would have them run immediately after each field is defined. Which would kind of bring that in line conceptually. There is an additional benefit to this, which is that currently, with the ordering of initializers, the value initializers are run in reverse order. This was discussed at a previous meeting; it was the last change we made. + +KHG: And overall, as we discussed in the last meeting, that makes sense – that is, you know, valuable for setting initial values and whatnot. But you can’t write a decorator for a method and then use the same decorator on a field assigned to, like, an arrow function, and have the output be the same. Because they are decorated in opposite orders. Methods are decorated, I believe, inside out: like, baz, bar, foo. And for field decorators, the initializers would run foo, bar, baz, as if the value is getting set from the outside in when it is being initialized. So yeah. This would basically allow two stages for fields. So there would be the initial stage of outside in – foo, bar, baz – and then the next stage where the extra initializers run: baz’s extra initializer, then bar’s, then foo’s. And that would allow you to write a decorator that would have the same behavior for a method or for a field that is assigned to a function, and that would eliminate a refactoring hazard, because as the user, you might expect that you could just, you know, refactor a method to be a field assigned to an arrow function, if you wanted it bound or something. And yeah. Currently you would not necessarily be able to do that. Any questions? 
That’s the change. + +RPR: All right. So any questions about this? At the moment, there are none. + +KG: I have no questions, but that makes sense to me. + +KHG: Okay. + +RPR: Thank you for the positive statement, Kevin. + +KHG: I thought there might be more discussion. I put too much time in the queue. + +RPR: It would be nice to have other messages of support. Question from Daniel Minor. + +DLM: Sorry, I was hoping to dig up the specification before this, but I didn’t manage to. So do the extra initializers that are added by field and accessor decorators have access to the value of the field? + +KHG: Yeah. So the extra initializers added via addInitializer run after the field has been defined and are given the value – like, they run with the context of the class; I believe the `this` of the function is the class instance. So using that, and using the access API, which is an API to access the value of the field, either public or private, you can get the initial value of the field and set the initial value of the field to an updated value. + +DLM: Okay. Yes. That makes sense. In general, I support this change. And this is actually the way I thought things were working. I had missed the old behavior. I think this makes a lot of sense. + +KHG: I think I had thought it was working this way too, and I looked at the spec and I was like, oops. Yeah. + +RPR: Happy accident. + +KHG: Yeah. Cool. So can I ask for – I guess, are there any other questions? + +RPR: No questions in the queue. + +KHG: Cool. Can I ask for consensus to merge this normative change? + +RPR: DLM has explicit support. We have already heard support from KG. CDA has a +1 as well. + +KHG: Excellent. Cool. I think that’s – we just need to write – + +RPR: Yeah. And MAH as well. This has consensus. + +KHG: Perfect. 
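(For reference, a sketch of the refactoring hazard this change addresses – the `@bound` decorator here is a hypothetical example, not part of the proposal:)

```js
// Hypothetical decorator that binds a function-valued member to the instance.
function bound(value, context) {
  context.addInitializer(function () {
    // With this normative change, for a field this extra initializer runs
    // immediately after the field's value is assigned (not before it), so
    // this[context.name] is already initialized here.
    this[context.name] = this[context.name].bind(this);
  });
}

class Example {
  @bound handleClick() { /* ... */ }
  // With the change, refactoring to a field holding an arrow function can
  // behave equivalently under the same decorator:
  // @bound handleClick = () => { /* ... */ };
}
```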
+ +### Speaker's Summary of Key Points + +I guess the key points are: the current behavior in the spec of extra initializers with regard to fields and accessors was not in line conceptually with what those values did. They were not running after the fields were defined; they were running before, creating this weird, kind of unintuitive timing issue. And then also, the ordering of the standard initializers versus the field value initializers (I wish we could come up with a better way to distinguish those two) led to potential refactoring hazards, if you were wanting to, for instance, convert a method into a field. And with this change, the initializers will conceptually align with what they are supposed to do for methods and getters, and why they were initially added in the first place. It will no longer be a refactoring hazard. + +### Conclusion + +- Achieved consensus for changing the order + +## Deferred import evaluation: deferred re-exports + +Presenter: Nicolò Ribaudo (NRO) + +- [proposal](https://github.com/tc39/proposal-defer-import-eval/) +- [slides](https://docs.google.com/presentation/d/1l-H2ntEDZGAWvtuOup1TJdylZsV1epKVSejVM-GwHLU/edit#slide=id.g29b94779710_0_0) + +NRO: So this presentation is looking for feedback from the committee on a possible feature of the deferred imports proposal. And specifically what we can do about `export … from` statements. + +NRO: So let’s just remind everybody of the goal of the deferred import proposal. It’s to transparently avoid as much initial work as possible, to, like, improve startup time. We don’t require any big change. So an example for the proposal is that we might have some function using an import. But this function is not immediately used. In this example, the library’s JS file is executed immediately after that, so you need to load the JS library and parse the styles. But this is only used once the MyButton function is actually used. 
So this proposal gives us a way to mark the import so that the module will only be executed when we actually need it.

NRO: Okay. So deferred modules – modules imported with a deferred import – are evaluated synchronously, on first use. And there are some restrictions: it only supports namespace imports, because we want side effects to only happen on namespace access and not on binding access. And the modules still need to be part of the module graph, because we don’t want a synchronous blocking fetch in the browser.

NRO: And also, some dependencies of the deferred modules still need to be eagerly executed, because if they use top-level await, we can’t execute them synchronously later.

NRO: Okay. So that was the recap of what the proposal currently is. So what can we do about `export … from` statements? Well, the reason I started to think about `export … from` is because they are basically an import statement combined with an export statement: they still load external code, they're still somehow related to imports. So could they give us another opportunity to skip unnecessary initial work that might affect startup time? Let’s look at an example. On the right, we have a components library that is using the `export … from` syntax. This file might have some code other than the exports, or might not. And then, in our application on the left, we import something from this library and use one of the exported utilities.

NRO: And this whole components library will be executed at startup, adding execution time that will slow down our startup. So one possible solution for the developer would be to stick the defer keyword on the import statement, so the library is loaded but not executed until it’s actually necessary. This might work in some cases, when the library is not used for the initial render. It doesn’t work every time, but sometimes it gives benefits.

NRO: However, there is still a bunch of unnecessary work going on.
Specifically, we are not using all the exports of this file, but we still need to eagerly load them just in case we will need to execute them later, even if the developer knows that will not actually happen. So let’s focus not on *when* the code is needed, but on *whether* the code is needed at all. We have an example here. Well, I am not presenting a fully settled idea of what the solution would look like exactly, so let’s just say that we have a keyword to mark an export statement saying: hey, when importing this module, only load this re-export statement if it’s actually necessary. In CommonJS, this would put a bunch of getters on the exports object, and you would use object destructuring when requiring your library so that only the things you use are actually required – so that, for example, tooltip.js is not loaded at all.

NRO: So how is this export-some-keyword syntax different from import defer? As I said, it’s not about *when* the work happens; it’s about whether some code is needed at all. So it doesn’t need to have the module synchronously available, because if some code is not used, we will never need to evaluate it later. This means it does not need to be eagerly loaded. So can we avoid loading that code altogether? Well, this would be possible only by not reporting early errors for the modules referenced by these export-keyword-from statements. This is also why, as I mentioned before, this is still an open question for the import defer proposal. So in our example, the tooltip.js and form.js files would never be loaded or executed. This is very similar to what we already know as tree shaking. It has the same goals as the import defer proposal, that is, to shave off as much work as possible and to make startup faster, but it has somewhat different semantics around loading and delay. This is why I’m calling it "some keyword" for now.
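As a rough illustration of the CommonJS strategy mentioned above – getters on the exports object, so that a re-exported module is only loaded when its binding is actually read – here is a minimal runnable sketch, where the hypothetical `load` function stands in for `require("./tooltip.js")`:

```javascript
let loaded = false;

// Stand-in for require("./tooltip.js"); a real transform would call require here.
function load() {
  loaded = true;
  return { Tooltip: function Tooltip() {} };
}

const lib = {};
// The library's entry point exposes Tooltip through a lazy getter.
Object.defineProperty(lib, "Tooltip", {
  enumerable: true,
  get() { return load().Tooltip; },
});

console.log(loaded);     // false — nothing has been loaded yet
const { Tooltip } = lib; // destructuring reads the getter...
console.log(loaded);     // true — ...which triggers the load
```

Bindings that are never destructured never trigger their getter, so their backing modules are never required at all.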
Maybe the keyword should be `defer`, because the reason it’s used is very similar, but given the semantics are different, maybe it should be something else. And there’s an interaction with `defer`: if you re-export a star as a namespace, are you creating a deferred namespace, or only using this lazy-loading syntax? These are all cases that would need to be gone through somehow, even if we don’t end up using the same keyword. So, about this keyword – well, I was thinking about this export syntax and how it interacts with deferred imports. This is more about the current deferred imports proposal, but let’s take a look at this example for a second, because it shows different variations with the `defer` keyword in different places. So we have an app.js file that imports module.js and later uses one of the values exported from module.js, and module.js re-exports from two different dependencies. If we just use import defer, the modules in the `export … from` statements in all the files will be loaded, and in the initial execution we would execute app.js. We would not execute module.js, because it’s deferred, but we would need to execute the asynchronous dependencies of module.js. And later, when this callback fires, we need to read a value exported from module.js, so we need to evaluate the remaining modules – module.js and all of its dependencies.

NRO: If we have a normal named import together with this potential export keyword, what happens in this example is that there is no more eager/later distinction: we simply never load and never execute bar, because we’re only using the `foo` export of module.js. And if we’re using an import star instead of a named import, we would still need to load everything, because we need to be able to populate the namespace object.
But maybe, if the keyword were `defer` – the same keyword as for deferred imports – we could avoid executing foo and bar until they’re actually needed, except obviously for the asynchronous transitive dependencies. And if we put `defer` on the import and also use the keyword on the re-exports, in this case we would also be deferring the execution of module.js itself, because it’s marked as deferred in the import statement.

NRO: So this is mostly an idea of how we could extend the benefits of – better address the goals of – the import defer proposal. I’m looking for feedback on, first, whether this is something we want to explore at all. If so, should it be part of this proposal, given that it shares some of the same goals, or should it be something else, given it’s not exactly the same thing? And then, related to that, should this be the same keyword with different semantics – the keyword could be changed to something that describes both – or should we use a different keyword? And this is it. Is there anything on the queue yet?

JWK: Yes, I like this, because the previous deferred proposal solves this problem on the consumer side, and this also provides a mechanism to solve the defer problem on the library provider side. If the library provider uses this feature, then the consumer does not have to change anything in their code. That would make it much easier to get this feature widely adopted. I think, first, yes, we need to explore this, and I think we can have it in this proposal because, as you said, it shares the same goal. And I think the `defer` keyword is okay, but if we have other options, they might also be valuable to investigate. I also have other feedback about the current deferred proposal, because we are already widely using it in our products.
And I found you still need to do a lot of refactoring to make some dependency deferred, because we usually use named imports, and if you want to defer one, you need to rewrite it into a namespace import, which is annoying. I really think it is an unnecessary restriction, and I think we should solve it in the spec instead of letting everyone rewrite their imports into namespace imports.

NRO: Thank you for the feedback.

ACE: Expressing explicit support from Bloomberg. This is great for us where we have large libraries, as Rob said in the chat, dynamically linked in, so people will always get the latest version of things rather than statically bundling the whole app ahead of time. This kind of runtime dynamic tree shaking is something we internally implemented and rely on, and it would be so much better if it was something built into ESM, because its absence prevents us from using ESM in certain places, which is a real shame. And I could also see a lot of people using this on the web, again where they’re pulling in large libraries on the fly rather than building everything once ahead of time. So, yeah, big support.

JHD: Yeah, the importing side always made me nervous in a way I’m still struggling to vocalize, but the keyword on exports seems really nice. As I was saying in the Matrix chat, tree shaking is not a reliable thing; it doesn’t work completely. The stat that I continue to cite is that Coinbase started prohibiting barrel files and doing deep imports, and their React Native app size dropped by 71% just by that change alone. I basically don’t believe any claims that tree shaking is a reliable tool; however, this seems like a way we could turn it into a reliable tool in a syntactically reliable way. It seems great. So I really like that direction.

MM: So previously we’ve gone through the whole import defer proposal, before this additional feature, and we understand how it plays with the overall module harmony effort. From internal discussions right now at Agoric, this one’s taking all of us by surprise, although we’re getting it, and I think we see the value in it. But could you explain how, or whether, this fits into module harmony, and whether it adds additional complexity in how it fits, over import defer without this feature?

NRO: So, yeah, the way this interacts with the rest of the module harmony proposals is different from how the existing import defer does. For import defer, basically all the other proposals don’t really need to worry about it, because the deferral happens after linking, and after linking that’s not something you can, for example, virtualize anymore. For this export defer feature: one of the things we’re exploring with harmony is which metadata to expose – for example, which metadata module sources, or virtual module sources, need to expose themselves – and one of these would be the list of exports and imports. The fact that a given `export … from` is deferred needs to be represented in that metadata, because the way you would then manually load and link this is that executing the module containing the deferred export statement itself would not load these dependencies; these dependencies would instead need to be loaded while looking for the dependencies of the importing module.

MM: Okay, I’m somewhat getting that, and somewhat it’s too much information to absorb in real time. Let me invite you to present this as well at the SES meetings.

NRO: Yeah, I will be happy to come.

CDA: Nothing else on the queue at this moment.

NRO: If the queue is empty, I want to propose that we should explore this as part of the import defer proposal, and I would leave open the question of whether it’s a different keyword or the same one. If it ends up being a different keyword and the semantics end up being too different, we can then consider splitting it into a separate proposal at some point.

KKL: I explicitly support investigating this direction. I like that it shifts the responsibility for this concern to the provider of the module.

NRO: Okay. Then thanks to everybody that gave feedback on this.

### Speaker's Summary of Key Points

- List
- of
- things

### Conclusion

- List
- of
- things

>> Would you like to dictate a key points summary for the notes?

>> Can I maybe do it asynchronously?

## TG4 Sourcemaps update

Presenter: Nicolò Ribaudo (NRO)

- [RFCs](https://github.com/tc39/source-map-rfc)
- [slides](https://docs.google.com/presentation/d/18DtsUGrXPOY1Hp6aLqGlaOGYColCUzBa7Y5BRzGBYN4/edit)

NRO: So, TG4 report. For those who don’t know what TG4 is, it’s the task group where we’re working on properly standardizing source maps. This is the status update on what happened since the last TC39 meeting. There have been two main normative changes to the spec – reminder that the spec is still a draft; it’s not published yet. One of them is that we had the `x_google_ignoreList` field, which was originally added in Chrome, and then other browsers started prototyping it, so we added it to the spec; it has now been renamed to `ignoreList`, given that it’s becoming a standard. This field allows source maps to mark some pieces of code as generated by tools, so that they can be skipped while debugging the code. And then the other major normative change is that we defined how WebAssembly files should link to source maps.
This was already what was happening with the `sourceMappingURL` custom section; we just wrote it down explicitly. Then there’s been great progress on a proposal for new source map features. We call it "scopes". It allows encoding scope information in source maps – information that’s not possible to rebuild from what’s in source maps today: the list of bindings that may have been removed or renamed, the type of each scope, and which bindings are visible in which scope. For example, if you’re compiling some Rust: Rust functions do not capture bindings from outer scopes. This will basically let debuggers reconstruct the original variables when developers are stepping through code. And lastly, we defined a process for how to work on proposals for new source map features. I was going to make that a separate presentation, but it’s right after this one, so unless there is something in the queue, I could probably just go directly to it.

CDA: Nothing in the queue.

## TG4 Proposals Process

Presenter: Nicolò Ribaudo (NRO)

NRO: Okay, then let me get those slides. Okay, so, the TG4 process – what exactly does TG4 do with source maps? We have two main work streams. First of all, we are completing the existing specification. There are many parts not specified yet, and there are many ambiguities; it’s not possible to implement the spec as-is because it’s not clear what it means. We’re also making sure that what the spec says matches what implementations do, because, especially given all the underspecified areas and all these ambiguities, implementations ended up behaving in different ways, so we have to make sure the spec is aligned with them. And aside from this, we’re working to introduce new features, such as the scopes proposal I just mentioned. Source maps have been stale for the past many years.
There have been all sorts of different experiments at various companies introducing new capabilities to source maps, and now we’re trying to coordinate in this task group rather than everybody going in their own direction. The way our process works in TG4 is different depending on which of the two work streams we’re talking about. Fixes to the existing spec – clarifications, or aligning with reality – we basically handle the same way we handle normative PRs for Ecma-262: most discussion is on GitHub, and we ask for consensus in the TG4 meetings for merging these pull requests into the spec. For new features, we just introduced a TG4-internal stage process, which is heavily inspired by how TG1 works, so we have four numbered stages, going from just an idea to something complete. We should probably start adding names to them, like we’re doing in TG1. Those stages are not the same as TG1’s, because for source maps there are different requirements, different ways implementations are used to experiment with proposals, and different types of implementations, so we had to adapt the process to our needs.

NRO: What we have at Stage 1 is that we need an explainer describing the problem we’re trying to solve. In practice, all the proposals we had were already introduced with some idea of a solution, so we just acknowledge that there might be some sketch of a solution, but we don’t need to have consensus on the solution yet – it can be completely rewritten. So we have a problem statement for Stage 1. Then we have Stage 2; that’s when we have a solution with more concrete details, though they’re not final and further iteration is expected. And at Stage 2, we already encourage implementations.
And the reason is that source maps are mostly meant to be machine-to-machine communication, so it’s not user experience that matters, but whether the format in this form is actually usable by the various tools. So we need to start implementations early, and we encourage experimental implementations at this point. These implementations should not be shipped, because there’s still a high risk of breaking changes, given it’s just Stage 2. Then at Stage 3, we have a complete description of the solution, and we have at least one implementation. It’s important to note that we basically have three different categories of tools: source map generators, such as bundlers or minifiers; step-by-step debuggers, like your browser dev tools or many editors; and stack trace decoders, such as [...] tools that apply source maps to error stack traces, or tools like (indiscernable) that take errors from production code and map them back to a place that helps developers understand where the problem is. So we need to be able to generate source maps to actually implement and test the tools. And obviously, tests – that part hasn’t been started; we are still figuring out exactly how to write shared tests for source maps. So once a proposal satisfies these requirements, it reaches Stage 3, and we expect that further improvements are still possible from actually implementing the proposal in these tools and testing it. And finally, we have Stage 4, when something is complete, which means that we have two implementations per category of tools and a complete test suite. At this point, the proposal can land in the spec draft.

NRO: So this is an internal TG4 process – how does this interact with what TG1 does? First of all, everybody from this group is very welcome to join the meetings if you think you have useful expertise or are just interested in it. We meet monthly online, and meetings are one hour long.
It’s important to notice that, differently from how other task groups in TC39 work, source maps don’t actually affect the runtime behavior of the language. This is why we’re keeping most of this process TG4-internal. We still need to go to TG1, because TG1 represents the committee at a certain level, so we need TG1 consensus to publish the final draft, and then to publish updates to the published spec. We hope to have this work similarly to how TG1 approves the Ecma-262 spec snapshots: there is the staging process where we discuss proposals, and then what matters for actual publishing, from my perspective, is that TG1 must have consensus on the final spec version. So what we plan to do is keep this process internal to TG4 and have TG1 (indiscernable) the final source maps spec. And once the spec is published, we will come to TG1 for further updates to the spec, once proposals reach Stage 4 in the TG4 process. And this is it.

NRO: Is there anybody on the queue with questions or clarifications about how this process would work?

CDA: Nothing just yet.

NRO: Okay, so just to be clear, we would not come to TG1 for stage advancements of source map features within this TG4 process. Thank you very much, everybody.

DE: Does anybody have concerns about this process? Does anybody disagree with the decisions made here?

KG: I have no concerns. This seems good to me. In particular, source maps are pretty different from, like, Intl, where I think it makes sense that Intl proposals go through a process where TG1 has oversight, and I think source maps are sufficiently different that it makes sense to follow a different process with TG4. So this sounds good to me.

CDA: Nothing else on the queue right now. Yeah, I support this as well. It makes sense.

NRO: Okay, then thank you, everybody. Again, I will get you a summary asynchronously.

### Speaker's Summary of Key Points

The new TG4 process was presented. The source map spec will have two types of changes:

- "Normative PRs", focusing on correctness of the current spec, which require one-shot consensus
- Proposals, which will go through 4 TG4-specific stages

The proposed relationship between TG4 and TG1 is that TG1 will ratify/approve the documents prepared by TG4 once they are ready, without TG4 coming for consensus at every step of the TG4-internal process.

### Conclusion

There is support for the proposed process and relation between TG1 and TG4.

## Provide source text to HostEnsureCanCompileStrings PR [continuation]

Presenter: Nicolò Ribaudo (NRO)

- [slides](https://docs.google.com/presentation/d/1MRItYS_b1hwKstlqlfoD8mgbecS2OkTSiPFVWHs3Y_8/edit?usp=sharing)

NRO: Okay, great. Let me share these other slides. Okay. So, after updating the normative text to match the consensus that we got on Monday, we noticed that there is actually one case that we didn’t clearly go through, so we need to come here again to understand what we want to do in that case.

RPR: Nicolò, your audio is occasionally going fuzzy. I think we can understand you 80 to 90%.

NRO: Okay. Let’s see if it starts working better, because nothing changed from before, so I don’t know how to fix it. Okay, so it’s specifically about the behavior of `new Function` with one argument, when this argument is not a string. The current version of `new Function` is that first it calls the host hook to, like, verify CSP; then it stringifies the parameters and the body; then it concatenates everything to get the function source; and finally it evaluates the function.
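As a quick runnable reminder of the stringify-and-concatenate behavior just described (a generic example, not from the slides):

```javascript
// The parameters and body are each coerced to strings, then concatenated
// into a complete function source, which is compiled and evaluated.
const fn = Function("a", "b", "return a + b;");
console.log(fn(2, 3)); // 5
console.log(fn.name);  // "anonymous" — the name used in the assembled source
```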
And from the discussion that we had on Monday, this is what the new version would look like: if there are no parameters, we pass the source text to the host hook. However, the host hook runs before stringifying the body, so we don’t yet have the stringified body to pass to the host hook. Currently, there are a few ways this could be solved. One is that we only pass the body if it is already a string, which means that CSP would only be able to check or hash actual strings and not objects that coerce to strings; this is probably okay, because CSP hashes are meant for code known in advance, not for dynamic strings. Another option is to first stringify the parameters and body, and then (indiscernable); this is an observable change in when you stringify the parameters, but it might be okay since it only slightly affects the (indiscernable). And then we would also have the option of only making the change for `eval` and keeping the old `Function` behavior, given that all the use cases for `new Function` can also be solved by building one function string. If anybody has a preference, please speak up. Otherwise, we want to go with option 1, just because it’s the simplest one.

MM: So why not simply pass all of the arguments to the host hook without stringification – just pass them without any coercion, so the host has the actual arguments that were passed?

NRO: Because the motivation for this is to let the host compute the hash of the code, so that in your CSP header you can specify the hashes of the code that’s allowed to be executed, and the host can know whether it can be executed or not. So you need to have access to the string.
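The non-string edge case under discussion behaves differently today in `new Function` (which coerces its body to a string) and direct `eval` (which returns non-string arguments unchanged); a quick runnable check of both current behaviors:

```javascript
// new Function coerces its body argument with ToString before compiling,
// so an object with a toString method is accepted today.
const body = { toString() { return "return 6 * 7;"; } };
console.log(new Function(body)()); // 42

// eval, by contrast, returns any non-string argument unchanged,
// without stringifying it at all.
const obj = { toString() { return "1 + 1"; } };
console.log(eval(obj) === obj);    // true — the object itself is returned
```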

MM: Maybe I'm remembering something that we ended up not doing, but with eval, I know Mike Samuel had been proposing that we pass the uncoerced argument to the host so that you can basically brand the argument – you know, Trusted Types and all that kind of stuff. That would not be for a CSP use, but for other possible host behaviors. Are we doing that with eval, and if we are, why doesn’t that apply equally to Function?

NRO: So that change ended up never happening. There have been requests, and it’s something we’re thinking about for the future; it’s just not something that exists yet. What eval does is: if the argument is not a string, it doesn’t call the host hook, it doesn’t stringify the argument, and it just returns it, as it does now.

MM: I’m sorry, the audio is fuzzy. If the argument is not a string, it just what?

NRO: If the argument is not a string, eval returns the argument immediately, without calling the host hook and without stringifying it.

MM: Oh, right. Okay. Okay. Thank you. I don’t know what I prefer with regard to this issue, and I would like to spend more time talking about it. I think this is also a good one for you to bring to the SES meeting, because obviously this is security relevant.

NRO: I probably will at some point in the future.

PFC: I wanted to agree with MM. Especially if we need the host hook to be able to accommodate Trusted Types, this will be something we need to change in the future: you would have to pass the uncoerced argument to the host hook as well. But it looks like there’s not yet a concrete use case on the host side for that.

KG: Yeah, I’m fine with anything here. If we could see the future and tell which of these options would be more easily adapted to Trusted Types, that would be my inclination – choose whatever would make Trusted Types easier.
In the absence of any other reason to prefer one or the other, option 2 seems a little bit more natural, but option 1 is fine as well since it’s simpler. To be clear, what option 1 would mean is that the only way you could evaluate something with the Function constructor is if unsafe-eval was allowed, right?

NRO: Yes. Well, if you pass an object to the constructor, you would need unsafe-eval, yes.

KG: Yeah, option 2 seems a little nicer, but, you know, if you think option 1 is sufficiently simpler, I’m fine with that too.

PFC: I think, Kevin, what you’re talking about is option 3, where evaluating anything with the `new Function()` constructor would not work unless you had unsafe-eval enabled. With option 1, if you did `new Function()` with a static string --

KG: Sorry, I did mean specifically the case in which you’re passing an object.

PFC: Oh, okay, if you’re passing an object, that would not be -- if you wanted to do that with option 1, you’d have to have unsafe-eval enabled.

NRO: To be clear, we’re perfectly fine with option 2. We’re proposing option 1 mostly to pick something, given it’s the easiest one, because it doesn’t require moving a line around; but if there is even weak support for option 2, then we’re very happy to go with it.

DE: Could the Function constructor pass both the value before and after stringification to the host hook? I know that requires moving the line around, and I know that for eval that doesn’t work.

NRO: Yeah, if we go with option 2, then as soon as we need to also pass the original object for Trusted Types, we could just add the object to the values we are passing to the hook. There is just no need for it right now for this CSP change.

DE: Does this CSP change have implementer support?

NRO: It has support from, or at least interest from, editors of CSP, I think, and they were from at least two different browser companies.

JHD: I don't like option 3; I prefer 1 or 2.

KG: So I do want to give a concrete reason to prefer option 2, which is that the whole point of this change to CSP is to make it easier to adopt CSP in the case that you have some use of `new Function` in your code base, and you’re okay with it because you statically know what it is. And with option 2, I could imagine – I hope it is not the case, but I could imagine – someone passing an object that gets stringified to Function right now, and such a person would not be able to adopt the stricter CSP with option 1, but they would with option 2. And since that’s the goal of the proposal, it seems like option 2 better serves that goal.

NRO: Okay. Then, well, let’s ask: do we have consensus for option 2, which is the one being projected on screen right now? Okay, Mark?

MM: Yeah, I don’t want to agree to consensus on any option until I understand all of this better, so I would like to not decide this meeting. That said, beyond that, I’m not resistant to option 2. I just want to understand it better.

NRO: Okay, then given that nobody else had concerns with either option 1 or 2, other than Kevin preferring option 2, would it be okay for the rest of the committee if we resolve the choice between these two options on GitHub with Mark, and if we can agree on one of them, just go with whatever it will be?

MM: That sounds fine to me.

KG: I’m fine with that.

PFC: +1 (from chat)

JHD: +1

RPR: All right, I think we’ve heard support for allowing Nicolò and Mark to figure out the choice between 1 and 2 offline. Are you happy with that as a resolution, Nicolò?

NRO: Yes.

### Speaker's Summary of Key Points

Following up from the discussion on Monday on extending the HostEnsureCanCompileStrings arguments, we found an edge case: when you call `new Function` with an object. We discussed two ways of solving this.
One is to only pass the body to the host hook if it’s a string, so if it will not need to be coerced; the other possible solution is to call the host hook after coercing the parameters and body to a string. We will work offline to pick one of these two options.

### Conclusion

HostEnsureCanCompileStrings will be updated following either option 1 or option 2; the choice will be discussed offline.

[Lunch]

## Slice notation stage 1 update and `[^a]`

Presenter: HE Shi-Jun (JHX)

- [proposal](https://github.com/tc39/proposal-slice-notation)
- TODO: get link to slides

JHX: Okay. Yeah. Okay, good afternoon, good evening, and good morning. It’s 5 a.m. for me, so I’m not sure I will be very clear, but anyway, let’s start. I have two topics today, which are highly related, so I will discuss them together. My total time is one hour, and I hope I won’t need that much time.

JHX: First, let’s recap the slice notation proposal. This is the link to the proposal repo and the last presented slides. During the TC39 meeting on March 22nd, 2018, the slice notation proposal was presented by Sathya. The proposal aimed to introduce a more explicit and consistent way to extract subarrays, using a notation with square brackets and a colon. You can see the example here; the start index is inclusive and the end index is exclusive, maintaining similar semantics to the slice method, at least in the first draft at that time.

JHX: After discussion and consideration of various points, the committee agreed to move the slice notation proposal to Stage 1. During the meeting on July 21st, 2020, the committee discussed the proposal again, also presented by Sathya, with the aim of advancing it to Stage 2. Several concerns were raised by the delegates, and as a result, the proposal remained at Stage 1. Here are the main concerns. JHD raised a concern about the proposal not addressing string support.
He did not block the proposal at Stage 2, but indicated potential future opposition if string support isn’t resolved. GB expressed worries about the implications for string slicing, specifically the mismatch between Unicode code points and code units. YSV questioned the motivation behind the proposal: whether the ergonomic benefits are sufficient to justify new syntax. WH shared concerns regarding the necessity of new syntax for an operation that's already achievable, and emphasized the importance of language orthogonality. And SYG brought up the issue of negative indexes and the inconsistency with current bracket indexing: the current bracket syntax doesn’t support negative indexes. So the committee concluded that further discussion and consideration were necessary before the proposal could move to the next stage. + +JHX: So that’s the status so far. Let’s also recap the index-from-end proposal. Here is the link, and the last presented slides. During the meeting in January 2021, I introduced a proposal to add a new syntax that allows referencing the last element of an array, reusing the caret (`^`) character syntax. This syntax is inspired by C#, but aims to be minimalistic. In such syntax, `a[^1]` will return the last element of an array, the same as indexing with `a.length - 1`. The motivation behind this proposal is that it would solve the blocking issue SYG raised for slice notation. The syntax solution also offers better ergonomics, especially in mutation cases, and it avoids the negative zero edge case. The proposal didn’t advance because several delegates, including SYG, suggested that the proposal should potentially be merged with the slice notation proposal to provide a unified syntax. There are some other discussions, but as I understand, none of them are Stage 1 blocking issues. + +JHX: Afterward, I communicated with Sathya (code??) and she agreed that adopting the syntax in the slice notation is a good direction to push forward. I have been co-champion of slice notation starting from July 2022.
Today I will restate these two proposals. My main goal is to restate the problem space of these proposals, explain the potential follow-up proposals to the committee, and show the advantages of the syntax solution in addressing these issues, and I hope the committee will agree to move in this direction and allow index-from-end to move on from Stage 1. But, yeah, that’s it. + +JHX: By the way, there is a potential follow-up proposal, mentioned here, to use the slice notation syntax to replace part of an array. This is mentioned in the FAQ of the slice notation proposal. So, for example, here is the slice notation: the current slice notation is supposed to get the elements from index 1 to 3 exclusively, and a possible future follow-up proposal would also allow replacing that slice, so it just becomes, yeah, 100, 20 and 4. Let’s talk about the problems. A collection of elements accessed by integer index is probably one of the most commonly used structures used by programmers, and slice is one of the most commonly used array operations. Here is a simple -- I’m not sure. + +JHX: Yeah, it’s a very simple search, and it shows that the usage of slice is very common. Yeah, I don’t list all the methods; these methods are here because they represent some kind of operation, so we can see that. And these statistics also indicate, as we see with slice here, that `splice` seems to be used more than we thought. + +JHX: So slice and splice are very common operations, and so their problems also scale. Actually, SYG has already introduced the first problem in previous presentations. First, it’s not clear whether the parameters of these methods are indexes or lengths; in particular, the semantics of the second parameter are inconsistent between slice and splice, with the former being an index and the latter being a length, even though these two methods have very similar names and corresponding uses.
Slice is for obtaining a part of the array and splice is for removing and inserting, that is, replacing a part of the array, yet their parameters are inconsistent. + +JHX: So considering that both splice and slice are very commonly used, symmetric operations, having more consistent and clear syntax and semantics would be clearer for developers, especially beginners. Another important issue is negative indexes. Slice/splice and some other methods support negative indexes. We have also recently added the `at` method, whose only motivation is to support a negative index. However, there are some problems with negative indexes. First, only methods support them, while the subscript syntax, the square brackets, does not. This contradiction has become more apparent with the introduction of the `at` method: it’s consistent with the other methods, but it creates a semantic difference from the basic square bracket array syntax, with negative indexes being one of the most significant differences. And although many methods support negative indexes, not all do. In particular, methods on strings such as `slice` and `at` support negative indexes, but many other methods do not: `indexOf`, `lastIndexOf`, and `includes` on strings do not support negative indexes, but the same-named array methods do. And, I think very importantly, we have two very important sets of APIs, `indexOf`/`lastIndexOf` and `findIndex`/`findLastIndex`, which return an index and use negative one to indicate “not found”; this conflicts with the value range of negative indexes and can easily lead to bugs. You can search any code base; it’s very easy to find code like that. This is a piece of code in VSCode: here we use slice, and here `indexOf` plus one. Yeah, this code even has a comment, but its writing is definitely not quite correct.
To be honest, I’m not sure if there is really a bug; if it just so happens that it’s written as negative one, maybe it doesn’t need to be considered. But obviously, even if it doesn’t cause an actual bug, such uncertainty is not a good thing for the person writing the code or the person reviewing it. + +JHX: And there is also the negative zero edge case. I already presented this in the last presentation for index-from-end, and here is a very simple example. People actually expect an empty result, but actually get all the elements. This is because of the negative zero edge case. Developers might be accustomed to using `slice` with a negative m to express dropping the first m or last m items, but the m might be zero, so the truly correct way to write it should be like this. It’s a little bit complex, and maybe you should write code like that; however, in practice, I feel people just ignore this case, and the result is hidden risks and bugs. + +JHX: And in addition, methods like `slice`, `splice`, `at`, et cetera, also have a series of strange behaviors caused by coercion. KG has already demonstrated this in a series of discussions, so I won’t repeat it here. What I want to add is that, in comparison to the coercing behavior of the methods, although the subscript operator also coerces, it basically doesn’t have unexpected behaviors: an illegal index does not turn into zero, and BigInt works without throwing an error. By the way, these edge behaviors of the methods have even led to incorrect documentation on MDN (Mozilla Developer Network). Here it says that if the index is omitted, undefined, or cannot be converted to a number, it extracts to the end of the string, but that’s wrong: if it cannot be converted to a number, it actually gives you zero.
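Both pitfalls JHX describes, the negative-zero slice and the `indexOf` sentinel, are reproducible in today's JavaScript. A standalone sketch (not taken from the slides):

```javascript
const arr = [10, 20, 30, 40];

// "Take the last m items" via slice(-m) breaks when m is 0,
// because -0 behaves the same as 0:
const lastM = (a, m) => a.slice(-m);
console.log(lastM(arr, 2)); // [30, 40], as expected
console.log(lastM(arr, 0)); // [10, 20, 30, 40]: expected [], got everything

// The truly correct version must special-case zero:
const lastMFixed = (a, m) => (m === 0 ? [] : a.slice(-m));
console.log(lastMFixed(arr, 0)); // []

// indexOf's -1 "not found" sentinel also composes badly with slice:
const after = (s, sep) => s.slice(s.indexOf(sep) + 1);
console.log(after('key=value', '=')); // 'value'
// When the separator is missing, -1 + 1 === 0, so the whole string
// comes back silently instead of an error:
console.log(after('novalue', '=')); // 'novalue'
```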
+ +JHX: So, the problems I have summarized this time: the clarity and consistency of the semantics of the parameters; the conflict between the value range of negative indexes and the return values of the `indexOf` and `findIndex` APIs; the edge case of negative zero; and the very strange coercion behaviors. These problems have been mentioned to some extent in previous presentations, and they already appear in many discussions of various proposals. For example, when I was recently reorganizing the slice notation issues, I found that the problem of negative indexes had already been mentioned by these people. I think he explained it well, and it tells us that Python already has these problems. Okay, of course, I must honestly say that even with these problems, it’s not the end of the world. Developers have always tolerated such bad design and continued to develop. However, I think that, if possible, we should try to solve some of these issues. + +JHX: So the solutions. As these proposals show, the solution is simple: we just extend the square bracket syntax and follow its semantics. We add index-from-end, add slice notation, and might add future support for modifying arrays with slice notation. These are very simple examples, and I think I do not need to explain them; I think most developers can get them, you know, in one minute. And I also had an interesting experience: I used ChatGPT, and it actually recognized them, and it even added some comments, even though such syntax does not exist in the language. + +JHX: So the previous list of problems has corresponding solutions. The syntax uses square brackets, so it’s very clear that it’s about indexes, and it’s symmetric on both sides, so both parameters read as indexes. And negative indexes: in the syntax of these proposals, they just follow the original square brackets, so it’s consistent with them.
So there are no problems with negative zero or negative one, nor with the coercion behaviors. In addition to this, we also get two extra benefits. The first is that we could support syntax to replace a slice, serving the role of the `splice` API, and in all examples there are relatively better ergonomics; in some cases, as in this example, there is a significant improvement. + +JHX: There are two concerns that were raised in previous meetings, which were not blocking issues for Stage 1 or even Stage 2. The main purpose of my update is to restate the more fundamental motivation issues, so I’ll just briefly state my thoughts here without going into detailed discussion. First, the issue of string slicing. This is a somewhat tricky issue. On one hand, most existing methods for strings are based on code units; on the other hand, allowing cutting in the middle of surrogate pairs is very bad from an internationalization perspective. In the past, some proposals, including the draft of slice notation, hoped to avoid this problem by not including strings, which was rejected by some members of the committee. I understand the position of these delegates. In fact, I also hope to include string slicing to provide developers with an appropriate default behavior. But I must state that, given that this issue involves some inherent constraints and conflicts, tradeoffs are inevitable. If no tradeoffs are allowed, I can only give up on the proposal. + +JHX: Second, as many of you may already know, the caret syntax is borrowed from C#. However, in C#, caret returns a value, while in the current proposal it’s just a syntactic structure, similar to the three dots of rest/spread syntax. Making it a reified value would be a big change in the current language, and I do not intend to introduce such a change. But we’ll try to keep this possibility open in the design.
If someone has some context, please tell me, but I think it’s very hard to include that, at least in the current status. So maybe I would just need to give up the whole proposal. Okay, so that’s it. + +RBN: This goes with what JHX was just talking about, with the caret syntax in C# being an actual value. I posted comments on both the slice notation and the index syntax proposals saying that, for one, I don’t think it necessarily needs to exist in the core proposal; I think the syntax itself is extremely valuable and useful. I do think that, potentially as a follow-on, the ability to actually have a reified representation of a slice and index-from-end is extremely useful. This has been shown in languages like Python, where being able to have reified slices is very useful because you can pass slices around, construct them, and have an actual data representation for a slice into something, which is profoundly useful in the programs that need it. It’s a good data representation, and having syntax and semantics that actually make it work correctly without having to implement it yourself is very useful. Also, producing a reified value from the index-from-end syntax exists in C#, as you mentioned, and it’s useful to be able to pass around an object that represents a relative index from the end of a list that isn’t tied to Array or String, so that you could use it with custom collections. But it does add complexity to specify something like this, which I don’t think should be a blocker. I do have thoughts about how this can work with other values outside of just a reified slice and index-from-end as well. + +WH: Yeah, with reification, this might be okay. Without reification, I think this proposal is harmful because it impedes abstractions. This would be yet another way to index into arrays, one which cannot be abstracted over. That is, if you have `a[x]`, people will want to pass `^something` as x, and you can only do it directly as `a[^something]`.
You cannot create a function which does that. To use this for a position inside an array, if you’re writing the code directly in your function, you can do `a[^0]`, but you cannot abstract the `^0` into a variable x and then have somebody pass `^0` to your function as x. And adding syntax to produce yet another way of indexing which cannot be abstracted over, I think, is harmful. With reification, it would be okay. + +JHX: Actually, I think even without reification, it’s very much like the three dots: you can’t assign `...` something to a variable, so it’s just the same. And I don’t intend to include reification, at least at this stage, because in my observation, in most code, if people use a negative index, they use it in a static way. That means it’s very unlikely they would pass in something that is sometimes negative and sometimes not. So at least in practice, I think it’s not a big problem, because when people use an index, it’s very clear whether it’s an index or an index from the end. + +WH: I disagree with the argument. Sometimes people don’t use abstractions, but that doesn’t mean that people should not be allowed to define abstractions. + +JHX: Okay. I guess we -- do you think it’s a Stage 1 blocker? + +WH: Yeah, for me it is. + +JHX: Okay. + +MF: Okay. Yeah, this is kind of awkward because my queue item was a response to RBN's. But if you remember what RBN was talking about, about reified ranges, I just want to point people to the iterator range proposal, where we have discussed reified ranges. I think if we are considering something like that in the future, those proposals need to align on whether or not we actually want something like that.
+ +MM: I don’t understand how reification plus abstraction could possibly work in JavaScript. Given the non-reified syntax, the reification-with-abstraction syntax would almost necessarily simply be: open square bracket, x, close square bracket, which already has a meaning across the language, and not just for arrays: either x is a symbol, or anything else gets stringified and then looked up. If you reify the range into an object and you treat that object in the square brackets specially, you’ve changed something very pervasive in the language. I think reification plus abstraction, were it to happen at all, would have to start with different syntax, so that the abstracted syntax would not collide with something that already has a meaning in the language. And that is a response to WH. + +WH: MM, that’s an excellent point, which is why we should not be adding syntax without reification: if it turns out that the problem you raised, MM, is insurmountable, then we would design ourselves into a corner. + +MM: I agree with that. + +MF: Yeah, just speaking about the slicing portion of this kind of combined proposal here. I definitely do support having the slicing syntax. There are other languages at a similar level of expressiveness that have this kind of syntax, and I think developers who are moving between those languages and JavaScript would be looking specifically for that and be surprised to find that there is no slicing done via syntax, only an API. Is it a deal breaker? No. But I do think there’s probably a large number of developers who expect to find that, would go looking specifically for it, not find it, and have a hard time finding what they actually want. So for that reason, I think that slicing syntax is worth exploring for sure, as we have already agreed with it being at Stage 1. + +JHX: Thank you for the support.
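MM's point about `a[x]` is observable in today's semantics: any non-symbol value used in square brackets is stringified before lookup, so a reified index object could not flow through `a[x]` without changing that. A small illustration:

```javascript
const a = ['first', 'second', 'third'];

// An object used as a computed property key is simply stringified:
const pseudoIndex = { toString: () => 'surprise' };
a.surprise = 'collision!';
console.log(a[pseudoIndex]); // 'collision!' (ordinary property lookup)

// So a hypothetical reified ^1 value passed as x in a[x] would not mean
// "last element"; it would be stringified and looked up as a key, which
// is why reification-plus-abstraction would need different syntax.
```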
+ +DLM: I apologize if I missed this earlier, but I’m just wondering what happens if slice notation is used on a non-array? + +JHX: The current draft only defines slice notation on Array and TypedArray. But, yeah, I think it might be okay to apply it to all array-likes, though I still need to investigate whether that would cause any unexpected results. Of course, strings need to be considered separately, because, as I mentioned, the current code-unit-based behavior is maybe not what people want, yeah. + +DLM: Thank you. A related question: I’m not sure if this has been said yet, but I’m wondering how this would be specified. Is this going to be syntactic sugar around the existing slice method, or is this going to be a new set of operations? + +JHX: The current draft has a symbol, so it can delegate to the method, but as of today’s update, I still need to investigate. I tend to drop the symbol, because if the semantics should be consistent with the original square brackets, then delegating (?) to the current slice method is not good. So I think the simple solution is to make it just syntax sugar, yeah. + +DLM: Okay, thank you. + +KG: I was working through this in chat, and I think that I am still in support of exploring this area, but I think there are some other possibilities that we haven’t gotten to. For example, I could see support for something more general, where the caret syntax is just a symbol protocol that allows the container to do whatever it wants with the argument; I think there might be something useful there. I am also not super inclined to change normal bracket access. I think that if we expand the scope to more general property access or more general container access using some new syntax, we might come up with something that’s really quite broadly useful, without affecting normal property accesses at all.
+ +JHX: Yes. I think this is -- it’s a hard problem. I’m not sure -- yeah. Okay. + +RBN: In the issue on the Slice Notation proposal where I originally discussed the potential for reification, I’d also discussed the potential for making this customizable so that you could have classes that could understand these objects that aren’t just Array or String. That part of that suggestion was implemented into the proposal, so there is a `Symbol.slice` method that you can put on any object, and the slice syntax would redirect to that. The reification side of the discussion took things one step further by saying that, if you had a reified Slice object that could be passed around as a variable and then used as an element of an array, then the reified Slice would essentially be an indirection to call the `Symbol.slice` method on whatever argument it gets, passing that argument through. So it would kind of be a duality of Symbol-based protocols: one that says “here’s how I get sliced”, and another one that says “I am a slice, here is how I apply to an Object”. + +JHX: Okay, yeah. Personally, I’m not a fan of delegating it to some symbol, but if people really like that, could I ask others who are interested in this area to co-champion this proposal? + +MF: Yeah, so I’m not volunteering at the moment to co-champion the proposal, but, you know, possibly something I would consider down the line. My reply here was to KG. I like the idea KG suggested of trying to -- or at least as I understand it, maybe I'm putting words in his mouth -- trying to change what we have as this slice-from-end feature you're proposing to be more general, something that can just derive an index in any configurable way via a protocol. So I think that would be a much more valuable use of that syntax space because it's more general. So I definitely support exploring that.
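A rough sketch of the two-sided protocol RBN describes. Note that `Symbol.slice` does not exist in the language today, so a local symbol stands in for it, and all names here are hypothetical:

```javascript
// Stand-in for the proposed Symbol.slice protocol key (hypothetical).
const SLICE = Symbol('slice');

// "I am a slice, here is how I apply to an object": a reified slice
// holds its bounds and delegates to the container's protocol method.
class Slice {
  constructor(start, end) { this.start = start; this.end = end; }
  applyTo(container) { return container[SLICE](this.start, this.end); }
}

// "Here's how I get sliced": any collection can opt in, not just Array.
class Ring {
  constructor(items) { this.items = items; }
  [SLICE](start, end) { return new Ring(this.items.slice(start, end)); }
}

const middle = new Slice(1, 3);       // a slice as a first-class value
const r = new Ring([1, 2, 3, 4]);
console.log(middle.applyTo(r).items); // [2, 3], applied via the protocol
```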
+ +MF: And, since there's nobody following me on the queue, I want to continue this into a follow-on topic. I appreciate that you've explored the relationship between these two proposals as you were asked to do, and having done that, I feel pretty good that Slice could move forward without being impeded by this additional work we've now asked for on the other, the slice-from-end proposal, or whatever comes of it. If anybody thinks otherwise, I would like them to speak up. And I want to hear from you, Hax, whether we've answered enough questions about that proposal to unblock it and allow you to continue moving forward. I know you had some questions about whether it should apply to strings. Personally, if I were to answer that for you, I would say it should apply to strings just the way slice already works today, with code units. Have all of your questions been answered, or is there anything we've not covered that you need to make progress to Stage 2? + +JHX: Yeah, thank you. I think I need some help, because my original intention was just to have a minimal proposal. But, okay, I will try, yeah. + +CDA: So we are almost out of time, less than a minute left on scheduled time. KG, did you want to reply? + +JHX: I think I have one hour, because these two topics are -- I just merged them into one. + +CDA: Yeah, yeah, I mean, that’s totally fine as well. We can continue to consider your two as one. So, yeah, KG? + +KG: This is a reply to MF. I think that the slice proposal could well be a subset of a more general indexing proposal, so I don’t think it can really move independently. + +WH: SYG couldn’t make it today, but channeling SYG, he is also uninterested in the caret syntax proposal. + +RBN: I’m sorry, I apologize for not going on the queue, but is this based on a conversation with Shu or are you just anticipating his -- + +WH: Based on a conversation with SYG from last week. + +RBN: Thank you.
+ +JHD: Yeah, just trying to understand, like, to summarize what I’ve been hearing: it sounds like there’s some interest in a more general access mechanism and symbol protocol that could subsume some of these problems and apply to a lot more data types. But it also kind of sounds like the existing proposals being discussed are dead in the water without the major changes I just referred to. Am I missing something, or is that an accurate inference from this discussion? + +JHD: Oh, yeah, sorry, and I see Waldemar on the queue asking me to clarify. So I’m combining -- + +WH: No, I want to clarify. + +JHD: Please, go ahead. + +WH: I suspect that however many people there are in this conversation are all talking about slightly different things in subtle but important ways. I can state my position. I suspect there are many different positions here. + +WH: I have no particular issue with the slice notation as it is. The caret notation will cause problems unless we reify it, and Mark raised a point that if we do it without reification first, we are likely to block our ability to reify it. So I don’t want to introduce syntax into the language where you can use `^0` syntax directly on array or slice access, but you cannot pass `^0` to a function. + +WH: At the moment, I’m taking no position on whether the slice notation should be reified or not. But I am taking a position on the caret notation. + +EAO: Agree completely with WH. + +JHX: All right, I’m not sure what I should do. As I understand it, WH will not support the index-from-end syntax without reification, but I’m not sure whether I can solve this issue, so maybe I will not ask for Stage 1. I have to be clear that my intention was to make a minimal proposal, and I don’t think I have the ability to solve this; I think it’s a very complex problem. You know, I would like it to happen, but I don’t think I’m the right one to solve that issue.
So it seems that I can’t advance both proposals. If someone is still interested in this direction, please tell me, and let us check if there are any other possible ways. Though I’m not sure, yeah, thank you. + +### Speaker's Summary of Key Points + +- List +- of +- things + +### Conclusion + +- List +- of +- things + +## Stop Coercing Things (pt 3) + +Presenter: Kevin Gibbons (KG) + +- [slides](https://docs.google.com/presentation/d/1AFzFeVtbUCpPcMXTER0Zzb5l5c5oPdXCF4Yi_9B1EEM/edit) + +KG: Let’s go. Okay. So stop coercing things. I’ve been bringing this for a while. It’s the third time I’m presenting it, so a recap. I am suggesting that we should change our philosophy about how we design APIs in the future to align with the principle on the screen here: if you pass something of the wrong type, that’s probably a bug, and bugs should be loud. The example that I am keeping in mind throughout all of this is this snippet of code here, which does indeed access the first item of the array. And I wish it didn’t. Of course, there’s very strong precedent for working this way, but precedent doesn’t have to bind us in cases where it’s sufficiently bad, and I think a lot of the coercing that we’re currently doing is sufficiently bad that it’s worth breaking with precedent. + +KG: And I want to emphasize, I’m not proposing any hard and fast rules, just defaults. So we have talked about this before, and I already got consensus for a few principles. One is that we shouldn’t coerce NaN to zero in APIs unless there’s some particular reason it makes sense to do so. An API that expects an actual number, if it gets NaN, should treat that as a range error or a type error, not as zero. It is not reasonable to interpret NaN as meaning zero in general.
Also, we got consensus that when there’s an argument that doesn’t have a default value, and the argument is required, we shouldn’t coerce undefined to match the type of that argument. For example, if you have a method that takes a string and the user fails to pass an argument at all, that should not behave equivalently to passing the string "undefined"; that's silly. And the last one we have consensus for is to stop rounding non-integral numbers: in general, APIs that expect integral numbers should throw if they are given a non-integral number. Now, for that last one, there are some APIs, especially in Intl, for example, where behavior is only strongly defined for integral numbers, but where, if you pass a non-integral number, there is some reasonable behavior, which is usually rounding; again, this is not required to be a hard and fast rule, just a general default. For example, if you’re indexing a string, it doesn’t make sense to index by a non-integral number, so if we added a new, you know, string indexing, I would hope that if the user passes a non-integral number, that would throw instead of, as currently, rounding or truncating to zero or whatever we do. Those are the ones we have consensus on so far. + +KG: I’m going to talk about it some more. In particular, I’m going to ask that we stop coercing objects to primitives, except coercing to Boolean, which is probably fine, and then if we have time, I might get to stop coercing between some primitive types, but I’ll leave that for later. So this is the one that I most want to talk about. I’d like us to stop coercing objects to primitives. If you do have a new API that takes a string or literally any primitive that’s not a Boolean, I would like us to throw a type error immediately. Not invoke `valueOf`, not invoke `toString`, not invoke `Symbol.toPrimitive`, not in fact invoke any user code whatsoever. Just throw if the user passes an object or a function.
If they want the coercion, they can do it before passing the argument. This would remove one of the major sources of side effects in the language, in places where you wouldn’t necessarily expect side effects. And in fact, in places where implementations often don’t notice that side effects are possible, which is a fairly large class of engine bugs. It is very easy to write an implementation which assumes that user code is not going to run between two points, when in fact it can. There are, like, at least a half dozen real-world examples of cases where there’s been a security issue in one of the major engines because of stuff like this. + +KG: I don’t think there’s as much of a problem with coercing to Boolean. Coercing to a Boolean never invokes user code, and it’s a fairly common pattern. People are used to writing `if (object)` or whatever, and this sort of coercion is sometimes more natural. So I’m not proposing to restrict coercion to Boolean. So if a parameter accepts a Boolean and the user passes an object, well, it’s probably fine to treat that as being true, in the same way that `if` would treat that as being truthy. I do want to note explicitly that there are some ways in which this would be giving up flexibility. These examples are from Shane, so thank you to SFC. In the example on screen, we have the newtype pattern, or one way to write the newtype pattern in JavaScript, where you have a class that constrains or otherwise validates some primitive, but instances of that class still act like the primitive when passed to APIs which do coercion. You know, if the WholeNumber class defines a `valueOf` method which returns the primitive it was constructed with, then you can use instances of this WholeNumber class anywhere that accepts a number. And if, you know, Intl.NumberFormat did not coerce, the maximumFractionDigits argument would not work in this place.
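A minimal runnable version of the newtype pattern described above (this `WholeNumber` class is a simplified stand-in for the example on the slides):

```javascript
// A validated wrapper whose instances still act like their primitive
// wherever today's coercing APIs call valueOf:
class WholeNumber {
  constructor(n) {
    if (!Number.isInteger(n) || n < 0) {
      throw new RangeError(`not a whole number: ${n}`);
    }
    this.n = n;
  }
  valueOf() { return this.n; }
}

const three = new WholeNumber(3);
console.log(Math.max(three, 2));                   // 3 (coerced via valueOf)
console.log('ab'.padEnd(new WholeNumber(5), '.')); // 'ab...' (same mechanism)

// Under the proposed convention, a NEW API in these positions would
// throw a TypeError instead, and callers would unwrap explicitly:
console.log(Math.max(three.valueOf(), 2));         // 3
```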
+ +KG: So you get values that are more useful than a primitive but can be used in place of a primitive, because it coerces to the relevant primitive. Again, this would be giving up the flexibility to write a RoundingMode or WholeNumber class and expect them to work with language APIs in the future. Users would need to do the coercion themselves. I think that’s worth it, but I want to make it clear what we would be giving up by making this choice. I have examples on the screen of cases I consider particularly silly. Ending up with `[object Object]` in the output of one’s methods is almost a meme at this point because so many people run into it. You join an array and it gives you `[object Object]` in the result. You didn’t want that; it would have been better for you to get an error. Similarly, there are cases where there’s some reasonable behavior you might want with `Math.max`: I would want a date back, but that’s not how it works; if you do that, the date is coerced to a number, so you get a number out of it, not your date. If you pad a string with a function, you get a stringification of the function. That’s not what you wanted. + +KG: So I would like new APIs (of course, I am not proposing to change the existing ones), in general, to not do this sort of thing, but to throw if they have an argument that expects a primitive. I would like to discuss this topic now. Specifically, consensus for not coercing objects and functions to non-Boolean primitives. + +WH: My question on the queue is about something further down in the proposal which contradicts something further up. But I also have a clarifying question: go back a slide. I don’t understand the polarity of this example. Would that be or not be allowed? + +KG: I am saying that if the second line gives you an instance of the RoundingMode class, then the third line would throw, because it would be getting an object rather than a string. + +WH: Okay. From the example, it’s not clear, the rounding mode – + +KG: Sorry.
I had to align some of it to fit on the screen. I apologize. The idea is that in the second case, it’s an instance of the `RoundingMode` class. + +WH: Okay. Thank you. + +SFC: Yeah. The example on the screen: the `RoundingMode` class implements `Symbol.toPrimitive`; when coercing to a string, it returns the string expected in that position. And `RoundingMode.ceil` is basically a static field – an example of an enum – an instance of the class `RoundingMode` that has the `Symbol.toPrimitive` method. The full code is in the pull request that KG linked. + +MM: I was very much struck by one of the things that you said with regard to stop coercing objects to primitives, which is exactly the potential side effect in places you would not expect side effects. So this more generally is the issue of surprising hazards. And if I understand the set of rules that you’re proposing, together, they would eliminate surprising re-entrancy hazards when the argument is a proxy – the object argument being a proxy or exotic object is the hard case for testing whether you avoided re-entrancy hazards. If that is consistent with what you’re trying to achieve here, which I think it is, I would suggest making it explicit as part of the proposal, since all of the proposal is advisory and allows exceptions to any one rule in particular cases where it’s needed to make an exception. So I would make this explicit as an additional rule that, you know, certainly could still have an exception. But it would be an additional rule to consult, even if you made an exception to one of the rules that currently imply it.
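The re-entrancy hazard MM describes can be made concrete with a small sketch (this example is not from the notes; the object and variable names are invented). Any built-in that coerces an argument can hand control to user-defined code mid-operation:

```javascript
// Coercion hands control to user-defined code at a point where the
// API author may not expect side effects.
let reentered = false;

const sneaky = {
  [Symbol.toPrimitive]() {
    reentered = true; // arbitrary user code runs here, mid-operation
    return 2;
  },
};

// String.prototype.repeat coerces its count argument via ToNumber,
// which invokes the object's Symbol.toPrimitive method.
const result = "ab".repeat(sneaky);
console.log(result, reentered); // "abab" true
```

Under the guideline being proposed, a new API in this position would throw a TypeError on the object instead of running its `Symbol.toPrimitive` method.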
+ +KG: Yes and no. So in the case that you are receiving an object, which is pretty common, and perhaps getting more so as we have more cases where we think that the appropriate design is an options bag . . . It is not feasible to avoid re-entrancy, if the – you know, if you have – + +MM: If you are destructuring an options bag, it’s not surprising re-entrancy. You are working with the contents of the object. + +KG: Okay. Sure. I guess if that’s not intended to be covered by your rule, I don’t understand how this is different from the rule. I am happy to say something like this is consistent with a general principle of avoiding unnecessary re-entrancy. + +MM: That completely deals with my concerns. Simply making that explicit as part of the rationale of the – the set of advice to be taken into account with this as an advisory proposal. + +KG: Yeah. Okay. + +EAO: Yeah. Could you clarify whether I understood right: effectively, what you are asking for is for us to deprecate `Symbol.toPrimitive`? + +KG: It will continue to be used in all of the places it is currently used. Yes, I am proposing that in new places, we would not be invoking it, or valueOf, or toString. + +JHD: I am popping on the queue to reply to that. It would still be important to invoke it in explicit user-invoked coercion. Built-in methods wouldn’t be the ones invoking it. + +KG: Yes. Of course, this applies to things that are just dealing with values in general, rather than things which are explicitly intended for coercion. + +EAO: Okay. So would it be possible to consider something like, when we have something like the whole number example you showed, we have a class that is kind of intended in certain ways to be able to be used as a primitive, and it provides its own `Symbol.toPrimitive`-based method. Would it be possible to consider that in cases where the object that we are dealing with actually has a custom `Symbol.toPrimitive` method, then we are okay with that,
But that we would not end up calling the sort of default `Symbol.toPrimitive` in the new interfaces. + +KG: No. That’s like explicitly what I wanted to stop doing. I really want us to not invoke user-defined `Symbol.toPrimitive`. That’s the whole point of this. + +KG: Like this thing on the screen, I want us to stop having . . . Like, it is possible you meant it, if you are passing something that has a `Symbol.toPrimitive` method as a user, but I think it’s still frequently going to be a bug, and of course it’s trivial to do the coercion yourself and not rely on the method to invoke the user code for you. The method shouldn’t invoke the user code if it doesn’t need to. + +EAO: I am not sure how I feel about this, but I don’t really have a strong opinion here either. + +SFC: Yeah. I mean, if – so okay. First, thanks, Kevin, for putting the additional examples in the slides. I understand more about the hazards when it comes to, you know, re-entering into code, and that’s definitely something important that we should be thinking about when designing a robust language that is easy to build robust engines for. I was wondering about this idea of, you know, in general, the principle of having interfaces where the – basically, the definition of how we interpret a particular option or argument is to call a function on that interface. I am going to pull up the third queue item about MessageFormat. During the `MessageFormat` incubator call last month, we were discussing, you know, well – it provides better ergonomics to the user if, for example, we used valueOf or some other interface function when passing arguments into placeholders in a `MessageFormat` string. And it’s the same concept here, where we were calling `Symbol.toPrimitive`.
So I was wondering if we sort of – if we establish that pattern, it sort of avoids a lot of the – you know, if – I guess it’s – I guess it’s still technically re-entrancy, but if the re-entrancy is the explicit behavior expected at that time, we could avoid this and make it narrow – the coercion operation is: is this thing the correct type? Fine. If not, that’s it. Not any of the other special cases that we currently have with coercion. Like, you know, for number and object, we wouldn’t do the number toString and all the other things in the spec. If we made it this and none of the other things, would that help address some of the concerns we have about re-entrancy hazards when writing engines? + +KG: No, not at all. The concern is addressed if and only if there is no possibility of invoking any user-defined code. It maybe helps with program correctness for the user, but it doesn’t do anything for the re-entrancy, unless there’s no possibility of invoking user code. I didn’t fully understand what you were saying about MessageFormat? Can you elaborate on that? + +SFC: Maybe EAO or JHD can help me. + +EAO: The specific case we were considering and are considering for MessageFormat is to have cases where, for example, a number with a currency is sort of this – a compound object-type thing. Other cases where the value that we would be expecting for a – for example, a number formatter within MessageFormat would, in most cases, be a primitive number, but there are cases where the exact same message with the same internal number formatter could or would externally be called sometimes with a primitive number and sometimes with an object that contains this number and some basket of formatting options. So effectively, this is why the proposal has, and we discussed last month, using the valueOf method to get the value and having a separate options value on that object for the options basket that might be there as well.
So effectively, we would need to be able to call the valueOf method in that case, to be able to support this compound value at the same point we are supporting primitive numbers or other primitive values. + +KG: Okay. I haven’t seen that API, but from the description, that sounds like a place where it might be reasonable to have an exception to this rule, because there are specific kinds of objects you are expecting to receive there. So that seems like a fine place to have an exception. + +EAO: Yes. That was in fact my understanding from earlier as well, that in this case, for the specific needs that we have for this interface, no matter the default, we have a decent story for why we need to do the specific thing we need to do. + +SFC: Okay. My next queue item is sort of a general – a general sort of possible middle-ground position we could have, which is, what – after, you know, looking more closely at the different types of use cases, we have these two that are in the slides: the whole number example, a new type wrapper, and the enum. I do feel that the enum example is a more compelling example, because JavaScript does not have enums. The only way to do a typed enum in JavaScript is to have a class with a bunch of fields – that’s, you know, how enums are modeled in other languages that don’t have them – and JavaScript coerces them to strings. Like, all APIs in the JavaScript standard library that take enums take strings. There’s a couple in the standard library, but a lot in Intl, of this type. And it seems very useful, you know, because these things are not actually strings, they’re enums; there’s no enum type, therefore they’re strings. It seems nice for userland to specify an enum class.
So between these two use cases, I do feel that this particular one here, you know, does seem somewhat compelling: it would be a pity to lose the ability for library authors to design type-safe enums that coerce to strings when passed into string-enum APIs. So I wanted to hear what other members of the committee thought about this type of use case, and whether it’s something we want to either support or to explicitly reject in the future, or if there is a type of case where we can sort of look at it on a case-by-case basis and not be bound by the hard-and-fast rule. + +KG: Can you say more about why you can’t just have `RoundingMode.ceil` be a string? + +JHD: I have a reply on the queue that elaborates that as well. So Shane, I run into this a lot - when you say “type safe” (I am sure I am saying this wrong, so please, room full of pedants, jump on it) TypeScript is a typing system and my personal philosophy prefers nominal typing. The way I usually do this is, I would type the primitive with an `as unique`, something. I forget the syntax. There’s a way to type it so that it’s a primitive, but other primitives that are the same value won’t be recognized. It has to be that specific value, and so then in this case, the `RoundingMode.ceil` would be one of these tagged strings. If you pass that same string from elsewhere, the type system wouldn’t allow it. And in that way, you don’t have to construct an extra object for no reason and you still get the type safety. I don’t know if that – to me that feels like a better approach than trying to support some sort of object-oriented, “let’s create a class instance” approach. It’s definitely not the most ergonomic thing to do in TypeScript, but it works well in my experience. + +DRR: Yeah. I think that that is a valid approach. We see some sort of mixed reception to that. But because enums are not viewed as a standard feature within JavaScript, people take the view they should avoid them.
And as a result, they end up trying to resort to these sorts of hacks, where you have a primitive type, but it is tagged in a special way to get those semantics, and they are pretty difficult for a lot of people. It’s cumbersome. We have tried to find a design space within TypeScript, but the best solutions still end up being enums for a lot of people. So they have been brought up, and they go into this Catch-22 where people don’t want to use them in the static type checkers because they are not part of the standard. So there’s a little bit of this – truly an ergonomics mismatch there. It would be nice to fix that, and not just in TypeScript. I don’t have a solution there yet. + +KG: I want to hear from Shane about why this example on the screen, `RoundingMode.ceil`, can’t be a string. + +SFC: The enum could be branded. You would have additional functions and other things on it. And if there are other APIs around rounding mode, you might want a convert method that turns – You know, I feel like it’s a pretty – it’s fairly common sense that branded objects are generally, like, you know, more ergonomic to use than flat strings for things. But if that’s not the sentiment, and if we are okay saying we are not going to support type-safe enums via this `Symbol.toPrimitive` model in JavaScript, and rely on some higher-level abstraction like TypeScript – either of the approaches JHD and DRR mentioned – you know, you can’t have it in JavaScript. You have to go one level above. If we are okay with that, making that type of statement, then that’s the way it is. So that’s fine. You know, I would sort of just . . . I do think that this is the type of thing where, like, in a particularly strong compelling case, maybe we want to consider exceptions to this rule for this particular case, but otherwise I think – that was mainly the key item: to discuss this case.
I think if RoundingMode were something in the language, and NumberFormat were explicitly aware of that thing, that would be totally reasonable, although the way that I would expect NumberFormat to interact with the RoundingMode class, or rather with instances of the RoundingMode class, would be to directly read an internal slot out of it rather than by coercing. So even in that case, I would not expect coercion. + +CM: I was wondering about whether this larger discussion has prospects for converging at some point. I think Kevin framed this explicitly at the beginning as a proposal to shift our default mode, as to what happens when we have not otherwise thought about it explicitly. It accepts the idea that there will be exceptions on a case-by-case basis, as makes sense based on the circumstances at hand. And I’ve been hearing a lot of discussion of “what about possible exception X or possible exception Y”? And all of the exceptions are definitely topics for discussion on their own merits, but I don’t see how they connect up to the fundamental essence of this proposal, as I understand KG was trying to make it. + +KG: Yeah. The way I put it is, if we agree to the fundamental essence of my proposal, and – like, in general, stop doing coercion, without making exceptions for things like Intl NumberFormat, that means we give up the ability to have a fully generic rounding mode class, where the instances could be used anywhere the corresponding string could be used. So if we're okay with giving that up, and perhaps some new APIs could be aware of RoundingMode-style classes, we go with that. + +EAO: I was starting to wonder here: do we need something to call the default behavior here, or could we determine that whether or not coercion to primitives happens is something that needs to be explicitly explained anew whenever it comes up? + +KG: We could do that, but I would prefer to have a default and only make exceptions where the champion thinks there’s a reason it’s unusual.
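For concreteness, the enum pattern discussed above can be sketched like this (hypothetical userland code, not from any spec or proposal; the class shape is assumed from the slides' description):

```javascript
// A userland "enum" class whose instances coerce to the expected string.
class RoundingMode {
  static ceil = new RoundingMode("ceil");
  #name;
  constructor(name) { this.#name = name; }
  [Symbol.toPrimitive]() { return this.#name; }
}

// Today, a coercing context sees the string "ceil":
const asString = `${RoundingMode.ceil}`;
console.log(asString); // "ceil"

// Under the proposed guideline, a new string-taking API passed the
// RoundingMode.ceil object would throw a TypeError instead of coercing;
// callers would coerce explicitly, e.g. String(RoundingMode.ceil).
```
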
+ +CDA: Nothing on the queue. + +KG: Okay. So I would like to ask for consensus for this thing on the slides: a general guideline that, by default, new APIs that are not close cousins of existing APIs, taking non-Boolean primitives, throw a TypeError when passed an object or function. MM says, “+ 1,000 end-of-message.” Thanks. + +CDA: Do we have any more voices of explicit support for stopping coercing objects to primitives? We have support from SFC, I think. + +SFC: Yeah. My position here, you know, I like what we did with the integer rounding thing, where we had the specific case in Intl listed out as a possible explicit exception. You know, if we have an exception like that in the case of the enums or new typed classes, within cases where it makes sense, that’s nice. But, you know, given that it’s only a default and a proposal champion can make a case to deviate from the default, and also because there are well-documented examples of how the re-entrancy causes security issues, real-world security issues in major engines, there’s motivation to say this is a reasonable default in the general case. + +JHD: Yeah. I typed in there, I support it. But wherever we document it, we should add a line saying something such as “case-by-case exceptions may be permitted” or something. So it’s clear it’s not like an absolute law. It’s just a default. But otherwise, it sounds great. + +KG: Definitely, I am saying this is only a guideline. I am happy to try to work something like that in, if you want to help me wordsmith it on the thing, on the pull request. I am not aware of somewhere we have something like rounding mode in the language. So it’s a little hard to document. If there is one, I am happy to use that as an example. We can work on that in the pull request. + +CDA: Okay. You have support from PFC as well and from CM. + +CDA: Quick question: where is this being documented? + +KG: In ‘how we work’. This is something new. I would be adding a normative-conventions.md. Okay.
I will take that as consensus on this specific topic of stop coercing objects to non-Boolean primitives, as a guideline. The remaining half-hour is mine? + +CDA: You have the rest of the day. + +KG: Excellent. That means we can get into the many, many kinds of coercing among primitive types. So originally, I had something about not coercing in general, and I am convinced we need to be more granular than that. We got consensus to not coerce `undefined` to things like string. We didn’t talk about coercing to Boolean, and perhaps we will revisit coercing to Boolean when we talk about `null`. + +KG: I have some – the following specific cases that I want us to consider, and I have slides for all of them. They can be considered independently, but I am going to talk about them all at once before I go back to the queue. Because I think it might make sense to consider their collective effect rather than their individual effect. + +KG: So the first thing is: don’t coerce `null` to anything. If you get null, and you pass it where there’s a string expected, I don’t think you should get the string “null”. If a number is expected, you shouldn’t get zero. Although you wouldn’t anymore. I would hope that we can agree that if you pass `null` where a number is expected, you shouldn’t get any particular number value. That should be an error. The third is more controversial. Where there is a Boolean argument, does it make sense to allow coercing null to the Boolean false? Maybe it does. On the other hand, maybe we allow `undefined` to also coerce to Boolean false, those being similar in general. Right. I guess that’s what I just said. + +KG: Another case, converting Boolean to number. This is straightforward. True is not one. Those are different things. And if there is an API that expects a number and you pass true, that shouldn’t be treated as one. It just shouldn’t. Similarly, the value false and the string “false” are fundamentally different.
If you have an API that takes a string and you get the Boolean false, you have messed up. The program shouldn’t continue; it should throw an exception. Is that really all of my cases? Yes. Converting between strings and numbers, and between strings and BigInts – there are cases where that might be okay. I am not proposing to do anything about them now, but others may feel differently. + +KG: I have at the end a summary which again – so undefined is a special case, because a lot of places have default values for things, and so when you get undefined that actually means “give me the default value”, and particularly for Boolean arguments, the default is almost always false. So it makes sense for undefined. I am only talking about required arguments when talking about undefined. + +KG: Okay. So summary. If we kept these proposals, number-taking inputs accept a number or a string, with some caveats on number. In particular, if you are an integer-taking API, you should reject non-integral numbers and strings which coerce to non-integral numbers, and if you are an API which takes a non-NaN value, for which NaN is not a sensible argument, it should throw if it gets NaN or a string that coerces to NaN. And then reject every other type: BigInt, objects, functions, undefined, null, and Boolean. + +KG: String-taking inputs accept strings, and numbers and BigInts, which are stringified; others are rejected. Boolean-taking inputs – it depends how you want to handle null and undefined. This slide is silly. But every other value is accepted. And I expect we will talk about undefined and null in the queue. BigInt-taking inputs would take BigInts and strings. They reject numbers, symbols – they start rejecting objects and functions, Booleans, all these things. + +KG: Symbol-taking inputs would accept symbols. The only difference: object-taking inputs can take objects and functions. Object-taking inputs take objects and functions and reject primitives. If you have an options bag and pass a string, that’s a TypeError. That’s what we are doing already.
But just for completeness, listing out that case as well. Again, these are only guidelines. When there is a case where you think another behavior makes sense, make a case for it. + +KG: All right. Let’s get to the queue. + +SFC: Yeah. So in this specific case of string to number, I will point out that there is a longstanding issue – like, people thought it was a bug – that they passed strings to the Intl number API and the strings rounded to the nearest float instead of respecting all digits. This converts to a mathematical value instead. I don’t necessarily – I don’t necessarily know if string to number, like, if there are cases of string to number, something like Intl mathematical value should be something to be considered as an option there for accepting a string, and I think that’s a compelling case. If an API accepting a number accepts a numeric string, then it should use Intl mathematical value for that purpose. + +SFC: For BigInt, it seems fine because they don’t lose precision accidentally. + +KG: Yeah. That’s a particularly interesting case. I think I might regard that as a reason for number-taking inputs to reject strings, since NumberFormat at least was giving the wrong behavior by coercing. But I do think accepting strings in general is the right case for number-taking inputs. I think that is the right case – yeah. I think that in cases where you want full precision for a number, it might make sense. + +WH: I am really surprised that you want Number- and BigInt-taking inputs to accept strings by default. I can imagine there are special cases where you would want to do that, but I am really surprised that you want all or nearly all Number-taking inputs to accept and coerce strings as well. + +KG: Yeah. + +WH: Why is that? + +KG: That’s a good question. I went back and forth on that. My main concern is that I think it’s something that people are pretty used to, and in most of the cases that we have talked about, it’s usually a bug.
You have done something wrong if you are passing a Boolean to a number-taking input. It’s obviously not a bug if you are passing a string to a number-taking input. So that was the distinction in my head. + +WH: How is the string interpreted? + +KG: ToNumber. And I should say, I guess another relevant point here is, it does ToNumber, and if the string is a random string as opposed to a string of digits, it will be NaN, and we have consensus for rejecting things that coerce to NaN for APIs for which NaN is not a sensible argument. So for most cases, it actually – the strings that you end up accepting are only specifically digit strings, which I like. Almost certainly not a bug. + +WH: Well, all kinds of things will be accepted, like `0x52`. + +KG: Yeah. Yes, that’s true. + +WH: Yes. And I have concerns about this. But my biggest concerns are about treatment of Booleans, both to and from Booleans. If I have an API which takes a Boolean, it should accept anything and just call `ToBoolean` on it. No special cases for null and undefined. + +KG: That sounds reasonable to me. + +WH: Also, in the reverse direction, for things which accept strings, we should accept Booleans and get the strings “true” and “false”. + +KG: Why? + +WH: Because you accept numbers there and convert them to strings. + +KG: So I think the . . . Okay. Sorry, you're talking about specifically string-taking inputs? + +WH: Specifically string-taking inputs. If these accepted only strings, I’d be fine with that. But if these accept strings, numbers, BigInts, they should accept Booleans too. + +KG: That does make sense. I guess – + +WH: You could present a couple of alternatives, depending on the type of API: either accept only strings, or strings plus things which can be converted to strings. + +KG: Yeah. Okay. I guess . . . So accepting – changing the slide so everything is on the left and nothing on the right, I am quite happy with. Moving Boolean from the right to the left column, I am fine with.
I think it’s a fair point that there’s nothing special about number and BigInt versus Boolean here. So I would be fine with making that change as well. I guess I would like to hear if anyone else has opinions. + +MM: I do, but I am already on the queue. + +KG: Okay. Michael has a reply. + +MF: Yeah. And maybe, KG, you should live-update the slides as we go so we can do a review at the end and everybody can be on the same page. So as a reply to WH’s first point, about numeric APIs accepting strings, I think the DOM is probably a huge use case here, in that numeric strings are pervasive and people expect those to work with numbers. There’s a big clash to try to reject strings there. On the points of – the other two points, I agree. So we should – and KG, you might want to phrase it – because of the point about NaNs, it’s numeric strings or digit strings. That’s all that is accepted. The other two points, I agree with WH there. + +CDA: EAO has a reply: Boolean ToString should be okay. + +MM: I disagree with a lot of what has been said and with the resulting changes. I don’t think it’s justified for advice going forward to include these coercions ToString or coercions ToNumber. I think that there might be exceptional cases, and I want to introduce a principle for the exceptional cases . . . Which is, we already have the existing bad behavior for coercing things to a target type elsewhere in the language, where we do coerce. So I would recommend not just that the default is – that string only accepts strings and numbers only accept numbers. But in addition, I think we should be explicit, as a, you know, as a recommendation – again, it’s just a recommendation – that if you make an exception and coerce other things to string, you should use the existing ToString that also coerces objects to string. Because otherwise, we have got too many cases in the language. Finally, I wanted to ask a question about ToBoolean. I like this outcome; this is what I was going to suggest . . .
Because the notion of truthy and falsey is kind of pervasive to JavaScript, and programmers in JavaScript learn what values are truthy and falsey. The question is: as far as I am aware, the existing ToBoolean behavior presents no possibility of re-entrancy, because objects and functions are necessarily truthy. Sorry – KG, that was correct, you said? + +KG: Yes. + +WH: Is it correct for a DOM falsey thing? + +KG: Yes. `document.all` is special-cased, but doesn’t invoke user code, only a special path. + +MM: The more general thing is, for exotic objects in general, at the limit of what the behavior of an exotic object can be, given the object invariants, I think ToBoolean also doesn’t present the possibility of a re-entrancy hazard. + +KG: That’s correct. There is no interaction with the MOP (meta-object protocol). The only special case is `document.all` and that’s a slot check. + +MM: My recommendation is that nothing else coerces by default. Furthermore, for exceptions that do coerce, the default should be to use the existing coercion logic, not to introduce yet more cases for the programmer to think about. + +KG: I will have to think about that. + +KG: Well, I guess, okay. Exceptions that do coerce – yes, you probably do want to use the existing coercion. That doesn’t say whether, you know, if – what we should do about string-taking inputs which receive a number. Because perhaps we will have two rules: either you coerce, where it makes sense, or you don’t coerce – except you have specific types that you accept. + +MM: What I am suggesting is – let me be explicit. It’s not what you just said. + +KG: No. I know it’s not what you just said. + +MM: Okay. + +KG: To be clear, what I am saying is that if we do what I am presenting on the screen, there would still only be two kinds of behavior to learn: the legacy ToString which coerces everything aggressively, and this. There would not be a third, string-specific thing.
So either way, there would only be two kinds of things to learn. + +MM: Okay. I understand. + +WH: Yeah. I’m partial to MM’s position. So okay. What you have in the slide deck currently is okay. But another alternative, which I’d also be okay with, would be for the table currently on screen to accept only strings, unless it’s an API that prefers legacy behavior, in which case, `ToString` of anything. And BigInts should accept only BigInts and not strings, unless they fall into the legacy category. And I imagine a lot of the DOM would fall into the legacy category. + +KG: So part of the problem with the DOM is that sometimes it vends strings that are numbers – numeric strings. And if you want to pass those into other things, the other things need to be able to deal with them. But these can always be coerced explicitly. That is viable. + +CDA: Just a quick note. We are just under 10 minutes left. RBN? + +RBN: Yeah. I wanted to say that while I kind of agree in principle with what MM is presenting, I do have a concern with falling back to ToString as the alternative to only accepting strings. One of the values of these slides where we stop coercing things is that it gives us more room to expand an API in ways that are web compatible – such as if we want to add a new argument, or replace an argument to an existing API with something like an options bag. If we say that the general principle is you only accept strings and nothing else, or you fall back to legacy behavior, then – when only string, number, BigInt, and Boolean make sense – by saying we just fall back to legacy behavior, that brings in objects, which then prevents us from having the benefit of potentially overloading an API to add new functionality by adding an extra arm. So I am – I mildly disagree with falling back to ToString specifically there, because if we have this implicit coercion, we couldn’t overload whatever that API might be in the future. + +MM: I’m sorry. You can skip me.
I just understood the full implications of what Ron was saying. + +SFC: If we moved Boolean from the reject column to the accept column, I don’t immediately see why we aren’t doing the same with null. I know that it’s a fairly – it’s fairly common in JavaScript for things to be undefined when you didn’t know they were going to be undefined, and to get “undefined” printed in things like getters and logs and things like that, and that’s how JavaScript has worked for a very long time. And if we are doing this with Boolean, like, I think – I don’t see why undefined and null are a separate case here. I agree with the general sentiment that, like, if things are going to be accepting strings and we are taking this approach of being more strict with coercion, then we should just take strings or we should take everything. I don’t know why we are putting number, BigInt, and Boolean in a different class than undefined and null. Those are primitives like the other things are primitives. Now, one exception here is that if you have an API that takes number-like strings, then maybe it makes sense to put number and BigInt in the left column. Then I don’t know why Boolean would be there. + +KG: Yeah. So undefined – well, and null . . . I think are much more likely to be a bug; that is the main reason. If you are passing the number 42 to a message formatting API, it’s probably because you meant the string “42”. Whereas if you were passing undefined, it’s probably because you, like, were missing a property and didn’t realize it. So the practical reason to distinguish undefined is that it’s much more likely to be a bug. That said, I am okay with the outcome of moving everything to the right-hand column here. At least – sorry, everything except string, of course. That’s a little more radical than I was originally proposing, but I am fine with that outcome. It’s not like it’s hard to coerce a number or BigInt or Boolean to a string.
And perhaps it is useful discipline for the programmer to do that explicitly. Yeah. I see MF is suggesting maybe we don’t talk about strings yet – leave strings a little longer. Yeah. That sounds good. + +KG: If we were going to change strings, I will also change numbers and BigInts. Probably. Although, frankly, because the number and BigInt conversions are so much more restrictive about the kinds of strings they accept, those are palatable. If you are passing a digit string to a BigInt input, there’s only one thing that you meant. Even if we do change the string column, we might leave the BigInt and number columns alone, because they are already restrictive. But since we are short on time, I am not going to ask for consensus on any of the primitive things – except that we should not coerce null to string, or – well, no, I am not going to ask for consensus. I will come back next time and present the remaining parts of this proposal about coercing primitive types. + +WH: Did we get consensus on the changes we made to the slide on converting to Boolean? I didn’t hear anybody objecting. + +KG: I am not proposing any changes to converting to Boolean. + +WH: The slide – yeah. Converting things – yeah, this one. + +KG: I am not asking for consensus because I am not proposing any changes. This is the existing world we live in. + +WH: Okay. + +MM: When you do make an exception and coerce more things, the default choice ought to be to use the existing coercion behavior, to avoid introducing more cases into the language. + +WH: Yeah. That sounds good to me. + +KG: RBN, I believe, spoke against that. + +MM: No. RBN spoke against just the choice between strings accepting nothing versus the full toString. So there’s an open question – and so that does conflict with the combination of my preferences. But if you’re coercing more things to string by default, then it would only conflict with the exceptional case. + +KG: Yeah. + +MM: But I agree. 
Ron’s point is a valid point and it is in conflict with the totality of my preferences. + +KG: I think I am going to not try to give guidance on what to do in exceptional cases and we can talk about them as they arise. I think I agree with you, we should fall back to coercing. But it’s hard to think about exceptional cases, in general, by definition. So . . . + +CDA: We are just about out of time. KG, I think you dictated a good summary a couple moments ago. Is there anything you want to add at this point . . . ? + +KG: Look for part 4 at your next meeting. + +KG: All right. Thanks, everyone. We will see you tomorrow. + +### Speaker's Summary of Key Points + +We got consensus on the first item, that new APIs which take non-boolean primitives should throw when passed an object or function. And the second half we need to ruminate more. + +(end of day 3) diff --git a/meetings/2023-11/november-30.md b/meetings/2023-11/november-30.md new file mode 100644 index 00000000..062b8989 --- /dev/null +++ b/meetings/2023-11/november-30.md @@ -0,0 +1,852 @@ +# 30th Nov 2023 99th TC39 Meeting + +----- + +Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. 
+ +You can find Abbreviations in delegates.txt + +**Attendees:** +| Name | Abbreviation | Organization | +| ---------------------- | ------------ | ----------------- | +| Istvan Sebestyen | IS | Ecma International| +| Michael Saboff | MLS | Apple | +| Nicolò Ribaudo | NRO | Igalia | +| Rezvan Mahdavi Hezaveh | RMH | Google | +| Jordan Harband | JHD | Invited Expert | +| Rob Palmer | RPR | Bloomberg | +| Waldemar Horwat | WH | Google | +| Chris de Almeida | CDA | IBM | +| Ujjwal Sharma | USA | Igalia | +| Linus Groh | LGH | Invited Expert | +| Daniel Minor | DLM | Mozilla | +| Philip Chimento | PFC | Igalia | +| Ron Buckton | RBN | Microsoft | +| Daniel Ehrenberg | DE | Bloomberg | +| Samina Husain | SHN | Ecma International| +| Ethan Arrowood | EAD | Vercel | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | +| | | | + +## continuation of the new stage discussion + +Presenter: Michael Ficarra (MF) + +- [proposal](https://github.com/tc39/process-document/pull/37) +- [slides](https://docs.google.com/presentation/d/1vdps2Ga2eHYhCSDN6pmYYWtPKfAsgP_i88sLlqEq-Xo) + +MF: All right, so this is the continuation of our discussion about adding a new stage to our process. A reminder, last meeting, we reached consensus on, in principle at least, adding a stage between 2 and 3 for testing or whatever we deem necessary experience with a proposal before advancing it to be recommended for implementation. The reason why this is an agreement in principle and not actual consensus is just that I didn’t have any specific wording written up to agree to, so today I have that. So things I’m looking to do today are first approve the specific process documentation changes that I’ve made. So I have an open pull request right there that you can go take a look at. We’ll look at it in a second ourselves. So that will officially include that stage as part of our proposal process. 
Then I want to come to a decision on how we are going to name stages now and hopefully into the future, in case we need to make any additional changes. And then lastly, I have a recommendation to revert some of the current Stage 3 proposals to this new stage. So first this is the pull request to the process document. I think this is the entirety of it on the screen. It’s very short. You can read it there, but that’s the thing I’m asking us to agree to. Effectively we’ve just moved the Stage 3 entrance requirements to this new stage and changed the Stage 3 entrance requirements to be appropriate pre-implementation experience. + +MF: Yeah, that’s it. And this is what it looks like rendered, so it adds a row to the table in the process document, which has all of the same content that you saw in the pull request. So that should be fairly straightforward, since it is just the things that we talked about last time, in writing this time. + +MF: As far as naming, I think I have spent probably too much time thinking about this topic, and I wanted to start with, you know, I looked around at some of what other standards bodies do for their stage naming. You can see here is an example from ISO, which is fairly elaborate, to say the least. They have about 50 different stages, and the interesting part about this is that they number their stages with, like, 10, 20, 30, 40, which conveniently doesn’t overlap with how we have numbered our stages up to this point. We could adopt something similar to that, which would give us more freedom in the future for putting stages between other stages. It’s not perfect, but that is an option that we would have, that I considered while doing this -- while I think about how we name stages. + +MF: Mikhail from University of Bergen has also helped out. 
He sent me some information that, as you know, and as you saw earlier in this meeting, he has been doing a survey of all language design groups, whether they’re standards bodies or committees or whatever, and how they have stages and move between the stages, and here we see examples of how they name some of their stages, which I was using for inspiration as well. He also aligned them in a way that they aligned to what our stages are, so we could better select from those, if we wanted to. So thank you, Mikhail, for sending that over and helping out in that way. + +MF: So what I’ve come up with here is that there are, at least what I’ve thought of, there are three naming schemes that are possible, and so the first naming scheme is purpose-based, and this describes what the committee does during that stage. So Stage 0, we do exploration. When they’re given two names here, I’ve underlined what I would probably prefer calling that stage, if we are to name them. So Stage 0, we do exploration. During Stage 1, we do design. During Stage 2, we refine the selected design. During the new stage we do validation of that design. During stage 3 we implement it, and during Stage 4, we integrate it into the draft spec. + +MF: The middle column I call status-based. This describes the current status of the proposal as of this stage. So instead of what we do during that stage, it's kind of like the fence-post problem, this describes what we've achieved. So in stage 0, it's completely new. We've not achieved anything. In stage 1, we've defined the problem space. In stage 2, we've chosen a particular solution. In the new stage, we have approved, in principle, the design of that solution. All the details should have been worked out. Once the proposal reaches stage 3, we've recommended that proposal to be implemented by implementers. And at stage 4, that proposal is now a draft standard. So it is being integrated into the draft spec. 
And lastly, in the third column, these terms are prioritizing what we would like to communicate to the community, people who are not participating in this process themselves, so that we can make it clear what we want them to think of it. So Stage 0 means we just are not yet considering this proposal. It hasn’t been run by the committee. So it’s not being considered. Stage 1 proposals are under consideration, but that’s about it. All we’ve seen is the -- what is in the proposal README. Stage 2, we have a draft design, so that’s -- that emphasizes it that it is not final. And this -- in the new stage, it is undergoing validation. I think people understand what that means. During Stage 3, this was a little more difficult to come up with, I think people generally, especially in the software field, understand what staging is and this would be a staging proposal. And then Stage 4, effectively for everybody, that means complete. So complete works, and that also is in common use in other language design bodies. + +MF: So those are the three naming schemes that I was able to come up with, and I think they’re all useful in their own way. So I have come up with a concrete proposal for what to do for naming. This is my preference here. So I think that the left column, the purpose-based names should be the stage names that we use for our process. So that's exploration, design, refinement, validation, implementation and integration. Small note here that, like, when I’m saying them, kind of in my head, I like using the word phase with them instead of stage. It sounds a little more natural to me, and that’s a terminology that’s been used with the WasmCG and maybe why it’s sticking with me. But anyway, that doesn’t matter. It’s just something I have experienced and wanted to share. Additionally, to help with this transition from stage numbers to stage names, we can assign a number, which won’t be too important. 
It will probably just be used internally in tooling and such in the meantime, while they are on their way to adopting the stage names. All public messaging that we have, where we talk about stage names, we should use -- where we talk about stages of proposals, we should use the names instead of the numbers. We should phase that out in how we communicate to the public and how we communicate internally. And also, I -- as I said, I think the other two columns are useful in their own way. And the process document already has a status column. So I’m recommending that we update it to, like, match the naming -- or like the descriptions I’ve used there. And incorporate what we want to communicate with that stage, both of those names into the process document. So that’s the concrete proposal there related to naming. + +MF: And then finally, related to reverting proposal stages, I looked through all Stage 3 proposals. A lot of them didn’t have tests listed, but I found tests for them. The tests were implemented in test262 for almost all of them. So I’ve now updated the proposals list, and there are only two proposals that are completely missing tests, as far as I’ve been able to determine, and two proposals that currently have an open pull request with tests, which is effectively having tests. So because of that, my recommendation there is to revert just the two proposals on the left, the ones that have no tests, to the validation stage. This is the new stage. And for the two proposals that have open test PRs, we can leave them in the implementation stage, Stage 3, for now, just to not kind of disrupt whatever, you know, implementation or process might have been in place right now. I also want to note, there are other proposals that are at Stage 3, the implementation stage, that just have really, you know, poor or insufficient testing, so one or two tests. That wouldn’t be sufficient for moving to Stage 4, which is where our current requirement for testing is today. 
But I chose not to make a request for those to be reverted to the validation stage. I’m happy to do that in the future or, you know, if that is the committee’s desire, we can do that as soon as we want. But for now, my recommendation was at least just these two that do not have tests at all, and no indication of having worked on tests. + +MF: So a recap of the things I’m asking for consensus for today, I want to address these three things in this order. One, officially adopting the new stage into our process. I think this should be pretty much pro forma. We’ve already, in principle, agreed to all these things that I’ve written down here and I hope to have accurately reflected those things. So the new stage as described in the process document pull request. And agree to merging that pull request. If we are successful there, I would like to see if we can agree to that naming plan that I have proposed or possibly with some modification to it, if we have a discussion that necessitates it. So this would include the changes to the process document to improve communication that I recommend, as well as updates to our communication in the proposals list, the how-we-work repo, on the website tc39.es, within ecmarkup for tooling support, and in each of the proposal repos where they list a stage number today, as appropriate. And, after that, assuming we have worked through that, I would like to ask for consensus on reverting those Stage 3 implementation stage proposals to the new validation stage. And that is it for the presentation. + +JHD: Okay, I’m first on the queue. So we’ve discussed this before. Like, no matter what we do, numbers are never going away. Everyone will continue to refer to them as numbers. But then separately, and there’s a number of other items on the thing, is I would not offer consensus for that. The numbers are better. We already had names, nobody used them. We used the numbers because the progression is clearer; the ordering is clearer. 
And so I’m happy to add names back, new names. There’s nothing wrong with most of your names; there’s a bunch of bikeshedding that will happen on the name for the new stage, but a name is fine. But I think it’s important that we keep the numbers, in terms of making sure that what we say in our external communication is aligned with what the entire community who pays attention to this stuff has grown to expect. + +CDA: Yeah, I agree partially with JHD. And I just feel like we skipped a step here because you’ve come up with these different names, and I think you’ve done a good job of it, but I don’t see the compelling reason of why we’re moving back to names or adding back names. When this came up previously, I looked at the discussion from some years ago where there were originally names and they switched to numbers, and I found that discussion to make sense and to be compelling. And I don’t know -- I don’t understand why we have a need or compelling reason to add the names back into it. So I’d like to understand that motivation before even talking about what the ideal naming scheme is. + +MF: I can offer a brief answer to that. I showed in the presentation that we could follow some numbering schemes, and even numbering schemes that don’t overlap with our current numbering scheme today. The reason why I went the naming route is because in our last discussion, which was probably an hour, hour and a half, on this at the last meeting, we were unable to come to any agreement or even any set of constraints that permitted any choice of number for this new stage. There were people who said it must not start with a 2, and there were people who said it must not start with a 3, and people who said it must be between 2 and 3. These are not constraints that I can resolve. So instead of going that route and having the same fruitless discussion during this meeting, I decided to try to see if we can use a name-based scheme. We’ll see how that works. 
+ +NRO: I was going to say exactly what MF said. + +EAO: Yeah, just reiterating my thoughts on this. We should make use of the fact that, if we’re going to have any number assigned here between the integer values 2 and 3, there is the mathematical constant _e_, which has a value of 2.718 blah, blah, blah, which is roughly in line with the sort of number that I think we might most want if we want to actually specify a number. So if we’re not renumbering the later stages, and if we are in particular attempting to use a name like validation, which by the way, I kind of like, then I would very, very much like for the number that is used here to be the mathematical constant _e_. In particular because it injects, I think, an appropriate amount of whimsy into who we are and how we present ourselves, and it’s also the only value between 2 and 3 that I am at least aware of that has a single-character representation. That’s it. + +USA: Thank you, Eemeli. Next up we have Nicolo. + +NRO: Okay, yes. So specifically for the names proposed, I like them. I prefer the names in the third column, but I’m fine with any column. I just dislike integration, because we use the integration term when talking about HTML integration, and that’s something that very often happens before Stage 4. I understand this is about integrating the proposal into the main spec text, but we can probably find another name for this, if not just completed, even if that’s not the purpose of Stage 4, just because of how, in practice, everybody thinks about Stage 4 proposals, except maybe for the spec editors. + +USA: All right. Next up we have WH. + +WH: I find the names in the purpose-based column very confusing, and it looks like I’m not the only one here. I would very much prefer to use the status-based names, which are much more familiar and describe what stage a proposal is at right now. 
For the short form, number _e_ sounds like a great idea. And there is another thing which wasn’t addressed, which is whether moving from stage _e_ to Stage 3 will or will not require committee consensus? + +MF: What was the question? + +WH: Will moving from stage _e_ to Stage 3 require committee consensus or will it be automatic as soon as the tests are done? + +MF: Of course it requires committee consensus. Each of the stage advancements requires committee consensus. + +KG: But I hope we can have a convention that it’s basically pro forma. We’re not revisiting previous questions, we’re just saying that, yes, we agree it is sufficiently validated that we can move on to Stage 3, and no other questions about the proposal at that time. + +WH: Can you tell me why it shouldn’t be automatic? + +MF: Because the requirement is whatever experience is sufficient. Some proposals require other kinds of experience or feedback, possibly via, like, polyfills that get community feedback, or whatever it is that is necessary. Like, those criteria end up being determined per proposal. Yes, for some proposals, you will be able to just say, like, whenever test262 maintainers have decided that the tests are mergeable, that this will advance. But that is definitely not always the case. So I don’t think we could have some sort of automatic process associated here. + +WH: Okay. + +USA: Next up we have MLS. + +MLS: So I agree with Waldemar. I like the status-based better than the purpose-based, although for the new stage, since the name should say what the stage is for, I prefer that the new stage be called test development. + +USA: Next we have DLM. + +DLM: Yeah, I would just like to say that as an implementer, I don’t think this advancement should be automatic. I’d like to examine in detail whether the testing or considerations in place are sufficient before advancing to Stage 3, since that’s when we begin our implementation. 
And I know it’s in the queue right now: I would also prefer to continue with numbers. I think we’ll spend a lot of time coming up with names, and people are already familiar with numbers. + +USA: Next we have RBN. + +RBN: So this actually started as a response to the topic that Chris had, which you apparently moved to later, but one concern that I have was that “validation” and “implementation” are often too tightly coupled. You can write tests, but you often need to run an implementation against those tests to validate that the tests themselves are written correctly. They kind of go hand in hand, and it’s kind of hard to separate these things. So I’m wondering if the names here are wrong. Like, “implementation” to me is -- if the goal is that no one should start any implementation until tests are written, it’s, again, hard to write those tests. If the goal is that we should not generally implement this across all implementations, I think that’s also what Stage 4 means. It just feels like these names represent too much of something that is not quite as delineated as they seem. I know that when writing tests for the various proposals that I’ve had, I usually have to use something like an implementation in engine262, which is not caught up on all of the proposals that are currently in Stage 4, so sometimes that implementation might not actually be perfectly valid, because the proposal has dependencies on things that haven’t been implemented yet. So there’s not really a guarantee that the actual implementation is correct, and it’s really hard to write these tests without something to run them against. + +MF: I just want to make it clear, we’re validating the proposal here, we’re not validating an implementation. So validation of the proposal means that you have -- let’s say it is an API proposal -- you have considered all reasonable inputs and the expected behavior for those. 
You know, up to this point, you may not have considered all inputs that you would during the testing phase, or all orderings of effects. These things are fairly nuanced, and even though you’ve written the spec text, you may not have understood the consequences of that spec text. You know, just simple step-ordering things that you don’t notice until you’ve been forced to consider, like, inputs and outputs themselves. So an implementation – a correct implementation to confirm that you’ve written your tests correctly – isn’t really necessary for that. + +RBN: It sounds like what you’re wanting are tests against the specification, not tests against an implementation, which is something you kind of said, but we have no way -- well, there have been research projects that have been discussed in committee to, like, turn the spec itself into something approximately evaluatable. That seems like something you would want, to be able to test and verify the inputs and outputs and derive implementation tests from that. But actually writing, and having a dependency on having, full tests in test262 when there is no implementation to validate against is very difficult. + +KG: So I think even in a world in which we did have a fully productionized ESMeta to produce the tests, that wouldn’t accomplish what I want the stage to accomplish and what I think MF wants the stage to accomplish: a human, ideally the champions of the proposal, has sat down and thought really hard about the trivial little consequences of decisions about, like, the order in which things happen. And while I agree, Ron, that it’s very hard to write correct tests without an implementation to test against, like, fully correct tests, I think that it is very important to do the work of thinking through all those little consequences, and that that work is normally done only when writing tests, and that that work can be done without an implementation. 
Like, I have done that for several of my proposals, and I do end up putting some bugs in the tests where I do something wrong. But I work through that process, and the work of thinking through the consequences of the trivial parts of the proposal otherwise doesn’t happen. While I agree that it should be expected that the tests continue to be refined once implementations are running and the tests can be run against the implementations, I think that they can and should be written prior to an implementation actually existing. + +RBN: I’m not going to argue on this, but I think when you say writing tests is the only way to validate that these inputs are correct, or to get that understanding of inputs and outputs, that isn’t necessarily true. I’ve found cases where I wrote a bunch of tests, and then when I wrote the implementation, I realized there were pieces missing in the specification text that couldn’t be validated by the tests, because the tests were assuming certain things worked. These two things do kind of go hand in hand. + +KG: I’m not saying that writing tests is either necessary or sufficient. I’m saying in practice, that tends to be the point at which it happens. Like, we have seen this across many, many proposals. And of course, not all of it happens there. Some will only come up in implementations. But the bulk of the work tends to be in the test-writing phase, in my experience. + +RBN: That goes to the point of my topic: these names feel a little bit too cut in stone. Names like “validation” and “implementation” are chosen as if those two things were cleanly delineated, when they aren’t quite as delineated as they seem. So I’m not trying to say that these are the wrong phases. I’m saying these are the wrong names. + +USA: All right. If that topic is finished, next we have a reply from PFC. + +PFC: This chicken-and-egg problem with writing tests isn’t a new problem with this process change. 
This is something that basically anyone who writes or reviews test262 tests will have encountered before. It would be nice if we could take a look at the test pull request and take a look at the spec text and say, 'these tests are correct' or 'these tests are not correct.' But in practice, that’s not possible unless the proposal is really trivial. There’s always this chicken and egg problem, where you write the tests, and then you test them against a polyfill, which may have bugs or may be incomplete. Or you test them against a partial implementation you build yourself from a branch of the engine, and if there are discrepancies, it could be a bug in the test or the spec or it could be a bug in the implementation. Or like KG and RBN said, ESMeta or engine262. There’s never going to be a clean separation of these, so personally, I think it’s fine that we acknowledge that there’s not going to be a clean separation in the stages. + +NRO: I also, when writing proposals, validate them with either -- I usually write a Babel implementation or sometimes an engine262 implementation. And if this helps the person writing tests, we can still work on some implementation to help us write tests. We’re just not expecting browsers to actually start putting in the work for it. + +USA: Thank you. Next we have CDA. + +CDA: Yeah, I’ll try to be brief, because I’m stacking up the queue now. And I think RBN already spoke to this, but I think a lot of trouble with these names are the overloading of the terms, validation to me seems like something that comes later. You know, maybe review is better, but we use that term in other contexts as well. So I’ll leave that at that. The -- you know, when I asked why are we looking at names now, the answer was that we couldn’t agree on a stage number, but I think we’re making the problem larger. 
You know, we couldn’t agree on a stage number, now we’ve -- you’re introducing the naming and now we’ve got to figure out names for four or five or six stages. That seems like a more difficult problem to solve than just the one with the new stage while retaining the numerals. But I don’t want to ignore the -- I was trying to look for the notes on the original conversation of when the move to numeric stages happened. But I don’t want us to just ignore that and pretend like that’s not an issue anymore, so if we are going to use names, we should be able to explain why the concerns about the names that we moved away from originally, why that’s either not an issue now or how we are still solving for that issue, and it’s not regressed to the original problem that was solved years ago. + +MF: I can clarify that for you. If the one you’re talking about is the one from, I don’t know, five or six years ago or whatever it was, that wasn’t a move from names to numbers. Those names that were listed in the process document were not used by anybody, and I don’t remember who it was, but somebody pointed out that they also didn’t accurately reflect how we used each of those stages. So the names were just dropped from the process document because of their inaccuracies. The numbers were always what was used, at least during my tenure of about ten years. + +JHD: That’s right, the names and numbers landed together with the post ES6 stage process. We never didn’t have the numbers, is my understanding. + +CDA: Okay, yeah, so I misspoke. This is, again, why it’s important that I find these notes, to be able to accurately speak about it. Thanks. + +LGH: [on queue] official stage names for external communication emphasis here, seemed like a good idea regardless of what the committee prefers to use internally. End of message. + +JHD: I don’t agree. 
I think the things people are already using outside of TC39 are the things we should use in our communication, otherwise it’s just being aspirational. No matter what we do, we’re not going to shift what people use to describe them, so we should be, thus, consistent. + +>> Next we have MLS. + +MLS: Following up on what JHD just said, the Venn diagram doesn’t have an intersection. Why don’t we borrow from what ISO does – not going into great detail and filling it all out, but multiplying our current stage numbers by 10, so we have 0, 10, all the way up to 40, and the new stage is 25. I think it’s pretty easy to train the community: old Stage 2 is now stage 20, old Stage 3 is now stage 30. I do think we need to come up with a good name for the new stage. But let’s try to make it a solvable problem with both numbers and names. + +JHD: Yeah, I mean, that certainly is possible, but I don’t see why it’s not easier to put in a 2.5. I don’t think anybody cares that they’re integers. There’s no type here. And the current disagreement is about, like, 2.5 versus 2.7 and stuff like that, and we would have the same discussion on 25 and 27 and so on. + +MLS: So _e_ is cute, but I don’t think it makes sense, because it doesn’t sort nicely, you know, 2, _e_, 3 -- + +>> That’s fair. + +MLS: _e_ is nice and cute. And with 2.5, okay, so you’re adding a significant digit; why do you care where it is relative to the decimal point? Why not make it 25 and have 0, 10, 20, and 30? + +JHD: That adds one thing instead of changing four or five existing things. + +MLS: Right. + +JHD: I agree it keeps the right ordering. + +MLS: Yeah, I think the community is smarter than we’re giving them credit for. We say old Stage 1 is new stage 10, boom, we’re done. + +USA: Okay, so, Michael, that was it for that topic. 
We have Nicolò’s item up ahead, but that doesn’t really pertain to this whole discussion of stages.

MF: Well, I don’t think any of it really pertained to the first thing I was asking for. We seemed to jump the gun, and I guess because everybody is so interested in naming, I assume that means everyone already agrees on the first point: that we do want to adopt this stage, include it in our process, and merge that PR. I would like to ask for consensus officially on that first point.

USA: To clarify, you’re asking for consensus on the introduction of a new stage between the existing stages 2 and 3? And, yeah, that’s it, I guess.

MF: Exactly, as described in the process document PR I linked to on the slides.

USA: Right. Okay, Chris has support for point 1. Then there is Michael Saboff, who asks, didn’t we agree to that at the last meeting? I guess that counts as support.

MF: Yes, MLS, the agreement was in principle to add that stage, but I didn't have any specific wording. So we're agreeing on the specific wording, which would now officially include it in our process by adding it to the process document. As of the last meeting, it's not part of our process. It was just an agreement that we would explore this route. Yeah, and I was given the parameters of that.

USA: Nothing new on the queue. I would say that we have consensus on the first statement.

MF: Great. Maybe, then, I can skip from 1 to 3. We can address NRO's queue item and try to reach consensus on point number 3 on this slide before going back to the whole naming debacle.

JHD: Yeah, the way I would describe it, off the top of my head, is consensus on number 1, with the caveat that we can’t merge it until we’ve settled on number 2. Hopefully we don’t have to argue about number 1 anymore, only the naming.

MF: I don’t see a reason why that prevents us from having it as part of our process.
It’s an inconvenience that will hopefully motivate us to make a decision. But it’s not strictly necessary.

JHD: In the history of software, the most permanent thing is “temporary”, so I feel like it’s pretty important to wait.

USA: Note that it says TBD at the moment.

JHD: But, like, people will externally start describing things as stage TBD. It will happen, I guarantee it. I would encourage us to wait until we’ve settled the naming thing.

USA: Next up we have Nicolò.

NRO: Yes. For point 3 here, I guess. One of the current Stage 3 proposals mentioned there does not have test262 tests: source phase imports. The reason it does not have the tests is that it’s incredibly difficult to write them, because this proposal only works when the host platform exposes some type of module from which JavaScript can get the source, and 262 explicitly forbids a source for its modules. There are WPT tests for this proposal that specifically use wasm as the example external module that has a source. Given how this proposal is on the edge between 262 and HTML, maybe we can consider not demoting it, given that there are some tests, just not in test262.

MF: Yeah, it sounds to me like WPT tests would be sufficient for this kind of proposal. So I would support not including it – so that would mean that only decorator metadata would be recommended for demotion to the new stage.

USA: That’s it. That’s the entire queue. I think we have a long bikeshed ahead of us, but apart from that.

MF: Yeah, so I am actually kind of unclear on whether JHD has withheld consensus for number 1. It seems like we reached consensus. I’m not sure.

JHD: No, I do not think that we should be merging the PR yet. I think that we should agree that we will adopt the new stage into our process, and I would call it conditional consensus on resolving number 2.
MF: Does that mean that if we’ve not decided on the new name, we cannot have proposals at this new stage?

JHD: I’m talking about merging the new PR into the process document. I’m talking about updating the public sources of truth. We can internally consider things to be in those stages in the meantime. That might make it more confusing – that’s a different question. But I think we should settle on the naming before we make public communication.

MF: Okay, well, let’s hope that in the next 45 minutes that we have allocated to this topic, we can come to a resolution here.

JHD: Fair enough.

MF: I think one of the biggest sticking points was that during the last meeting there were a lot of incompatible constraints voiced about choosing a number between 2 and 3. And that made it seem to me to be impossible to do. Would some of the people who voiced constraints be willing to speak about their current positions? Maybe they have changed, maybe it is possible to stick to numbering, and we only have to decide this one number and be done with it. Maybe let’s explore that route first.

KG: Well, I wasn’t such a person. I think JHD has at least expressed strongly that he doesn’t want 3 in the name. I don’t really care about the name of this too much. I do want to say I think we should try to leave the other stages alone, no matter what we do. And so no matter what we do, I think we should only try to pick one name right now, number or name. And I am fine with picking a number, and I’m also fine with picking names and having 0, 1, 2, the test stage, and 3 and 4. I think it’s okay to be inconsistent. I think the times-ten suggestion from Michael, or whatever it was, sounds good to me. Although I absolutely liked your approved-in-principle option. I’m fine with other things. I just want to emphasize I think we should focus on this and not try to touch other things.

USA: Next we have Waldemar.

WH: I’m fine with stage _e_.
And if we do Michael Saboff’s suggestion to multiply the numbers by 10, I’d be fine with stages 0, 10, 20, 27, 30, 40.

>> Okay, anything that’s a number, really. That’s it for the queue.

KG: Does anyone not like stage _e_?

MF: I hate the whimsy, but I would accept it.

USA: Do you want to ask for consensus on this?

MLS: I specifically don’t like it because it is not a literal number.

USA: Okay. Well, there’s a clarifying question from Nicolò.

NRO: Yes, would people who use E actually call it stage E, or 2.7, or whatever the value of E is? Like, would you call it a number with digits, or would you call it a letter?

WH: We’d call it stage _e_.

USA: I guess E is the proposal for the number, right? Like, it would still have another name, going with the idea of also introducing names. But E would be specifically the number. Chris, you’re on the queue.

CDA: Yeah, there’s a question of whether anybody did not like E. I do not like E. Also, didn’t we discuss E previously?

>> Yes. At the last meeting, it was suggested by EAO.

USA: We have a reply by PFC.

PFC: Yeah, I also don’t like it. I feel like whimsical things tend to be barriers to external communication. But I don’t want to hold up the process, so if people want E, that’s great.

MF: Also the point that Nicolò brought up concerns me, about having some people refer to it as E and some people refer to it as approximately 2.7. Some people will be just confused. That kind of worries me. And I think we’re just bringing problems on ourselves for no reason, where we could just choose, like, a decimal number that is rounded evenly to the tenths place.

USA: Next we have Eemeli.

EAO: Yeah, I just thought I’d mention that I at least would be happy with something like 2.718 as an explicit number for this.

USA: We have a reply by Ron.

RBN: I kind of concur with the comments; I don’t think the whimsy makes sense.
As much as we’re all developers, and many of us have backgrounds in mathematics, it’s not necessarily obvious to many people what E is going to mean. Something like 2.718 is also going to be relatively awkward to both write and remember when you’re trying to write things down. I think it just makes sense to call the stage something like 2.b or, as I discussed in the chat, pre-3. It doesn’t necessarily have to be an integral number, it doesn’t have to be a floating point number – it can be text, it can be whatever. We can call it 2.b, we can call the current stage 2 “2.a”, and just consider those to be interchangeable. Just something that unblocks us but doesn’t necessarily imply something whimsical, or a specific closeness or distance, and if we need to add another stage after it, we can.

USA: We have Chris up next.

CDA: 2.b implies the existence of 2.a, which is not necessarily a problem. I see MF’s response there. I guess if you’re changing 2 to be the A, then sure, but that leads to some ambiguity when people refer to things as 2: does 2 mean 2.b now, or does 2 mean 2.a now?

RBN: I mean, I think the way that this has been discussed and presented, it feels like what we’re really doing is moving the testing phase to Stage 2. But we want people to be able to advance beyond Stage 2 to the point where we don’t relitigate all of the things that were discussed in Stage 2 just because we needed to make a change in tests and get approval for that. So it does feel like making it 2-dot-something makes sense.
And like I said, using 2 and 2.a interchangeably I think would be acceptable. I put in the chat that we could call it 2.testing – I think Rob just said that – or 2.validation or 2.v, or something that lets us indicate this without trying to make a huge change to the process that we have in place, since, again, it seems like what we’re doing is basically moving the tests back to Stage 2.

MLS: We’re really concerned about confusing the community, hence not changing the existing names. But we’re willing to confuse the community by calling it E/2.71828…? This is the last time I make this case: I think we need to be consistent with our numbering, and if we want to introduce a decimal, I think that’s confusing, because only one stage is going to have it. I think we’ve got to give the community more credit than we are. We’re adding a new stage, and to fit the new stage in we have to change something, because it’s in the middle and not at the beginning or the end. So let’s just bite the bullet and do this in such a way that we can do it again if we want to. That’s the last time I speak on this.

>> All right, and that was the queue. So I think the big options here are E, 2.5, 2.9, or renaming all the existing stages altogether and then making it something like 25 or 29, possibly. How do we do this?

MF: Maybe this is an opportunity for a temperature check along those directions. I don’t know how to use the temperature check thing.

>> Yeah, but, like, what do we check the temperature on?

MF: No, what I mean is not that I don’t mechanically know how to use the temperature check thing. I’m not sure how the temperature check might be appropriately used to resolve the problem.

WH: Would anybody be opposed to 2.7?

??: I think you have three temperatures.
You have _e_, 2.x (2.5 and 2.7 and 2.9 have been discussed), or changing the numbering system altogether: 0, 10, 20, 25, 30, 40.

MF: Can we get a response to Waldemar’s question first? Would anybody be opposed to 2.7?

MLS: I think it's confusing. I don’t think it’s a good idea.

MF: Okay. Then yeah, we will need to do a temperature check along the lines of what NRO suggests.

USA: Essentially, Michael, what you are suggesting is that we start the temperature check, designate each emoji as a certain option, and then people vote for their favorite option?

MLS: There need to be 3 options. There’s E – a lot of people want E. There’s 2.X, and I don’t know what X is, 2.5, 2.7, whatever. And there’s a new numbering scheme, my suggestion, where you multiply things by 10. I think that’s only three choices. Let’s not confuse it more than that.

MF: Michael, haven’t you already rejected two of those options?

MLS: I know what I am going to select on the temperature check.

MF: Haven’t you stated just now that you would not permit 2.7, and that you would not permit E?

MLS: Well, I think 3 of us spoke up that we don’t like E.

MF: Yes.

MLS: I don’t believe in a lone veto, so we need to have a discussion sometime before consensus. If I am the only one that doesn’t want 2.X, then I’ll live with it.

MF: Well, I will re-ask WH's question: does anyone in addition to MLS object to 2.7 in particular?

RBN: I really don’t think 2.7 is very clear. I don’t think it’s the right direction. I also object.

??: So we have objections to 2.7. Do we want to try the temperature check?

??: I am willing to, but I am afraid it feels like we have incompatible constraints again.

??: Right. People don’t like E. People don’t like 2.7. People don’t like the naming.

??: It sounds like 2.7, arguing against 2.X or anything.

??: There are a few more options: “2+”. Would you like to try that?
How do you folks feel about that?

??: I think more options dilute intention.

RPR: Could we just check, are there objections – let’s start with 2.testing. Are there any hard objections to 2.testing?

WH: Yes, I object. That sounds like Stage 2.

??: Okay.

MLS: Agreed.

RBN: Are there hard objections to “2+”?

WH: Yes. Same reason.

MLS: Yeah. I don’t know –

WH: “2.” followed by any letter is much worse.

USA: We have a bunch of replies, but I think they’re not like . . . Eemeli, would you like to speak to that?

EAO: It sounds like we have multiple options, and the temperature check process isn’t necessarily the best choice for deciding between them. Do we have some place, like the notes document, where we could enter a preferential vote across the candidates – after we identify a subset of all the candidates – as a vehicle giving us information on how we stand on this? And possibly we could pre-agree to go with the winner of this vote, with losers eliminated, basically single-transferable-vote-style balloting.

USA: I feel it’s difficult to do the counting of the vote synchronously, especially on a topic like this, where there’s no clear preference.

EAO: The counting for things like this turns out to be very easy, especially as we have only – what are we – 20ish people here. That could be done in a couple of minutes manually.

USA: All right. Michael, you had a reply?

MF: I am not sure of the details, but there are some rules that Ecma has any time a vote is taken, and the voting system used might be included in that. If we go that route, we need to make sure that that is actually acceptable.

USA: That would make things even more significantly difficult, I feel. Going by the Ecma bylaws, voting is per member. Yeah. I am not sure ranked voting is considered in that. Nicolò has a proposal?

NRO: Yeah. I guess it’s been discussed.
I don’t really think consensus is – we can just follow whatever rules we have for voting, which is, I think, as mentioned, one vote per member, and the majority wins. We are wasting a lot of time on bikeshedding a number. And I don’t think the consensus process we have applies to this type of thing.

JHD: I think things that are communicated externally pretty much do require consensus. This isn’t like how we use TCQ or something; this is part of the public stance of the committee. So in this particular case, maybe we can all agree to defer our consensus to a vote. And that would be great, because the wasted time is unfortunate. But in general terms, I think something like this does require consensus.

KG: So I happen to have been reading the Ecma bylaws recently. I do think that it’s fine to try to proceed with consensus here. But the bylaws specifically call out that voting is to be used when there’s no other way to make progress. And it seems like this might be such a case. So I don’t object to trying to seek consensus first. I just think at some point we may have to give up on that, and voting should be the fallback.

JHD: Yeah. If this were a language thing, I think – and hopefully all of us would – consider having to vote a huge process failure. But since this isn’t a language thing, I think maybe it would be an acceptable consequence if we can’t come to consensus. I was reacting to the implication that this is something to be dictated by chairs. I agree, if we can’t get any progress, and we all agree we want the stage, it makes sense to do a vote. I don’t think we should necessarily go to the Ecma one-vote-per-member thing.

MLS: I suggest you don’t want to do that, because it’s only ordinary members that vote. I believe at this meeting we currently have Bloomberg, Apple, Google, IBM –

JHD: Ordinary members vote in Ecma.
I thought all members vote here – that’s how it’s been for editors and chairs. Either way, going by Ecma member means that anybody who has a co-worker as a delegate, their votes will count less, because they have to smoosh them together into one vote. And invited experts don’t get votes. That might not be a satisfactory way to do voting.

DE: Non-ordinary members can vote for committee-level things.

DE: If we did want to have a case where we didn’t want to go by strict consensus, I hope we only use it if there’s a clear supermajority, and not go by anything on the boundary. The Ecma bylaws give us leeway in how to run things and how to make decisions. So we have flexibility here.

MM: Even though we could invoke that Ecma bylaw, I am scared to do anything that sets a precedent of making decisions not by consensus, and I think this one is easily solvable by asking for consensus to resolve this particular issue by voting. What I am hearing is that there’s general agreement that we’re willing to resolve it by voting, so if we just get consensus on that, then we haven’t established a precedent of weakening consensus as the decision procedure.

USA: All right. Michael, what about we do this: open up a HedgeDoc or something on your screen share, and we could make a quick legend and do a mock vote, or a vote of sorts, using the temperature check. And then ask for consensus on whatever gets the most points or . . . votes.

CDA: Could we do a mock vote for fun, to inform whether I want to join the consensus for the vote? (joke) I think before you ask for consensus for the vote, we need to define: are we doing a vote that is simple majority, based on the Ecma rules where it’s ordinary members voting or not – I actually don’t think it’s limited to ordinary members – or is it 75%? You know, what the actual vote is needs to be defined before we ask for consensus.

?USA?: Well, this is why I think it’s better to either go for a simple temperature check and ask for consensus on whatever wins the temperature check, or just go for a proper vote with a simple majority. Because otherwise we have to get consensus on how to vote, and then – yeah.

IS: [in chat] In TC39 practice we go with consensus; we try to avoid voting. The voting according to Ecma rules would be based on TC39 membership present in the meeting (ordinary, associate, NFP, etc… one per organization) and then a simple majority. Alternatively, though not in the Ecma rules (but compatible with them…), we could go for a so-called “indicative vote” (by organizations present), which is basically similar to the “temperature check” (by anybody present in person in the meeting, except for observers).

USA: (Reads IS’s chat message) Michael, what do you prefer in that case?

MF: Don’t make me choose. I put my opinion in the queue. I think if we are able to reach consensus on a vote, that is equivalent to some of the people holding up what is, I think, the majority opinion here just deciding not to hold that up. By consenting to a vote you would say, it’s okay to override my opinion. I would say maybe we try that first. Otherwise, I am not sure what to do after that.

USA: Okay.

USA: Well, would somebody who has been against 2.7 so far like to speak to that?

USA: Oh, yeah. Sorry. Ron? I think your topic . . .

RBN: My reply. I was asking – and I’m beginning to think we need to solve this temporarily, at least, for this discussion, but we might want to have a larger discussion separately. We have been incorrectly using the temperature check since the day it was added to TC39, and it’s been, “this means this thing, this means that thing.”
We instead need to – as work has been going on, and might go on in the future, on improvements to TCQ – include an informal polling mechanism. When we talk about voting, it sounds very formalized, when a lot of the time we just want to give the champion more feedback, expressed more concisely than a lot of people individually expressing their opinions via +1. Is this a nonbinding poll, and should we continue to use the temperature check in this way? I don’t think that’s the way it was designed.

USA: I think a combination of your concern plus what Michael just suggested could be done properly through a temperature check. We could ask for temperature on 2.X or 2.7, and then people could adequately vote – for example, indifferent, or in favor, and so on. Sorry. Eemeli, you have raised a hand. But . . .

EAO: Yeah. I mean, according to TCQ, I’ve been talking for the last couple of minutes, but – yeah. My strong request would be to get consensus first on binding to the results of the vote or poll before we do the actual vote or poll or indicative poll or whatever we call it. Because we are currently in a situation where each of the options that has been presented has had somebody strongly objecting to it, so if we vote first and then try to bind ourselves to the result by consensus, I would be very surprised if whoever lost the vote would agree to that rather than refuse to submit to that consensus. Whereas I do believe it’s easier for each of us to bind ourselves to a vote or a poll before we know the results of it.

USA: Can we move past – oh, there are . . . more replies. So maybe let’s just quickly go through the queue. Nicolò, you had a reply?

NRO: A way to vote, given the 50% majority we need, is to first do a temp check and then vote between the two winners.

IS: Yeah. Okay. So once again, Ecma has one rule regarding the whole thing in a TC, and this is the 50%. Yes. So the simple majority. So that can be done.
In the practice of a temperature check tool, et cetera, what you are doing is not explicitly supported by Ecma policy, but Ecma policy also does not say “no” to it. It is up to you; it’s possible. That’s the reason we always left it there. And the other point is the unanimous agreement, the consensus agreement – that was again a TC39 practice. And so there’s a little bit of contradiction between the TC39 practice and the Ecma rule. The Ecma rule is very clear: you should try to get a consensus. If that’s not possible, and you want to move ahead, then it is the simple majority vote. But now you have to be careful about what you want to do. In the first case, you can of course fall back to the Ecma policy. But my preference would be that you do it also this time in the TC39 way: you should be trying with temperature checks, maybe to come to a compromise, et cetera, and then if the consensus of TC39 doesn’t work, you have to go back to the Ecma rules. So I don’t know if that was very clear or if it was just confusing matters.

CDA: I feel pretty strongly that we shouldn’t jump between the extremes – going from “we can’t get unanimous consensus” straight to one member, one vote, 50% majority voting. We should try first to get unanimous consensus, or a general consensus; for the purpose of voting, if we have an overwhelming majority in favor of one of the options, we do that, and not one member, one vote. We should find something in the middle ground before we go to the opposite extreme.

USA: All right. MLS says consensus for a vote lowers the bar for approval. What do you folks say we do a nonbinding temperature check regarding one or more of these options? Let’s start with one of the options, right? In the chat, MF, you say a temperature check on what we don’t like would be useful. Do you want to do a reverse temperature check?

MF: Yeah.
I think Nicolò suggested this originally in the chat. I think that would be most helpful: we just find the number of people opposed to each of the currently considered options, and that would allow us to focus our effort on the least-opposed option and make progress. You know, talk to the people who oppose it, see how strong the opposition is, that kind of thing.

MF: I see some thumbs up in the Meet.

>> All right. So let’s set up a legend, if you will, and then I can start the temperature check on TCQ. It seems like people are happy with the idea of eliminating the least popular options. What do you believe is the least popular option here? Because it’s very hard to get a feel of the committee at the moment.

>> No, that’s why we need the temperature check – or whatever we are calling it, the vote – to determine the least popular option. We can’t see what the least popular option is until then . . .

>> Okay.

>> You know what I mean? We have to . . .

>> Should I, in the meantime – there is a legend –

WH: There are lots of biased ways of trying to arrange the votes or assign emojis to favor particular outcomes. I would like to see a temperature check for each alternative presented here.

USA: So you – Waldemar, you would prefer individual temperature checks?

WH: Yes. A separate temperature check for each of the main alternatives.

NRO: Strong support for Waldemar.

MF: Is there a concern that a particular choice of emoji would bias people’s decision here?

USA: I believe so.

NRO: I also support this because, beyond the winner, for each option we learn how much it’s liked and how much it’s disliked.

USA: All right. Then let’s pick the first one, and we can record the results for each of them, and in the end we get a breakdown. I guess we have come back to what Eemeli suggested, ranked voting essentially. So Michael, which one do you think we should talk about first?

MF: Please don’t include me in this.

[_option 1_ temperature check: Stage “e”]

USA: Let’s start with E. That was one that came up very early on, and then we have not talked about it for a while. I will open the temperature check now, and let’s vote on _e_. So . . . okay. Signing in. Let me check. There are 35 people.

CDA: Not 35. We have the transcriptionists – at least 2 people are transcriptionists, because you have CART files – and there are two IS, two MF. There’s not 35.

USA: So, maybe 25. We should wait for maybe 25 people to vote. So far we have 6 and 9, so –

USA: So now we have 15 votes. It doesn’t seem to change.

CDA: Now, point of order. If JHD is correct, we might need to start over. EAD didn’t have TCQ open.

JHD: If you could show the current state of TCQ. But EAD, if you see the emojis – never mind. You can vote.

EAD: I’m sorry. Guys, I am new to this. The strong-positive-to-unconvinced scale – we are voting on, like, Option 1, and –

USA: Yes.

EAD: Okay. And then what are we going to do? Option 2 next, and then revote?

USA: Yes.

EAD: Okay. Cool. Thank you.

USA: All right. With the idea being that we should be voting on each one in a vacuum.

RPR: Clarification from the chat. It’s E. Just E, the option.

CDA: Right. Not 2.E. Okay. So who is screen sharing right now? RPR?

RPR: Yes, I can.

CDA: Can you update your option document to reflect what we are actually voting on, which is just E alone?

RPR: Just E, sorry. If anyone has been voting based on 2.E, please change it. Stage E.

USA: Ron asks about 2A, 2B. I would have thought it’s part of Option 2, but I guess not – that implies a number, 2.N more like. So perhaps that could be Option 4.

USA: It has stabilized with around 21 votes. I will just take a quick screenshot, and yeah, we can remember what that was. Copy.

USA: Okay. So we have 8 positive, one indifferent, and 12 unconvinced on my screen. I will stop the temperature check and move on to Option 2. Okay. It’s done.
And it’s starting again now. Also, I just realized I didn’t vote. But I have given up at this point. I would happily accept whatever all of you vote for.

EAO: What are we currently voting for?

[_option 2_ temperature check: Stage “2.x”, where “x” is a digit to be decided later]

USA: Option 2. Getting the temperature check on how you feel about Option 2.

EAO: Where can I see what Option 2 is?

USA: On the screen share.

CDA: In the Google Meet.

EAO: Thank you.

USA: So it is 15. Yeah, 4 more – so 19. 20, 21. We’ve more or less reached the same number of votes. Let’s give it a couple more seconds. Okay. Anybody still voting? It seems like not. So . . . all right, I will stop it again.

[_option 3_ temperature check: Renumber stages to 0, 10, 20, 25, 30, 40]

USA: Now let’s switch to Option 3, and I will open the temperature check again. This is for renumbering the existing stages from 0 to 40, and then adding 25. 19 so far. We’re missing a couple of votes. Somebody still in there? Okay. Another one. All right. This looks like we might have – yeah, we have – okay. If nobody is still voting, I will screenshot this as well.

[_option 4_ temperature check: Renumber stage 2 to 2.a and insert stage 2.b]

USA: And now, finally, we have the 4th option. So I will stop the temperature check and start it again. Rob, if you could – yeah. Thank you. And it’s on.

>> Could you clarify if this option means renaming the current stage 2 to be 2A –

>> Yes.

>> And the new one to be 2B?

>> Yes. To clarify, the existing stage 2 would be renamed to stage 2A moving forward, but all the other existing stages would remain unchanged. So we have 10, 12 . . . 21 again. I suppose it’s stable now. I will take a screenshot again. And that’s the last one. So, okay, I will finally stop the temperature check. Let’s see. We have . . .
So now, the most complicated thing about temperature checks is how to interpret them. I mean . . . there are a few that are matched in terms of unconvinced. But then there’s indifferent, and I suppose that affects the support. Purely based on support – strong positive plus half of positive, or just strong positive plus positive – the winner is Option 2, stage 2.N. It’s also the lowest in terms of unconvinced. So, talking of that, Samina, you have your hand raised?

SHN: Yes. Can you share the screenshots of the 4 options with the votes so everyone can see them?

CDA: They’re in the delegates chat.

SHN: Sorry. Thank you.

USA: Good point. I could copy them into this document.

>> They could be kept in the notes, if we want.

>> Sure.

>> But in the meantime, how do folks feel about Option 2, then? Can we perhaps ask for consensus on Option 2? Let’s see. The queue . . . Ron proposed one temperature check. We did that –

>> I think that’s old.

>> Yes. So, empty queue. And –

>> My topic was still valid.

>> Sorry.

>> The question I wanted to ask –

>> Yeah. I’m sorry. I skipped that by accident. Ron?

>> Yeah. I know there’s also the discussion going on about what the number that comes after is, but I also wonder if the topic I had up made sense to discuss prior to that. Well, we can ask it after, but my question is whether we could couch this as conditional advancement pending tests being written, rather than having to introduce a new stage. But . . .

JHD: So I replied to this in Matrix as well, but the reason for this whole effort is that it is really important to have a completely distinct category for proposals whose design is almost exclusively finalized but which are not yet ready for implementation. And many proposals would hopefully be there for a short time. If we had had this new stage 5 years ago, Temporal would have been sitting in it for many years, appropriately.
Because the main reason that Temporal wanted to be at Stage 3 was so tests could be written and people could implement it and try it out. And the main reason they didn’t want to stay at stage 2 is that they didn’t want to relitigate the design. That’s what a new stage provides. There’s not a lot of benefit in just making the test requirements come sooner. + +USA: And next we have Eemeli. To answer your question, I can see a poll going on right now in delegates, so people are voting for what the X should be. It ranges from 0.5 to 0.9, including 0.75. + +KG: We can’t use the thing in the chat to determine this. + +USA: No. But it’s – I guess a temperature check for what you could possibly ask consensus on. +>> Yeah. +>> So what I was also going to say on this, is that given that we have not chosen the X, it means that when we voted for this option, each of us was voting for the best of these 6 or 7 different options that we considered could be the number. So that probably explains at least in part the higher popularity of this option compared to the others. But sure, let’s go with this. +>> There is a very significant point of order by Chip. I have no idea how we passed this timebox, but we have. + +CDA: Yeah. We deliberately ran past the timebox. + +>> Yeah. That’s true. +But at least let’s ask for consensus on Option 2 and then we can figure out the X later. What do you folks think? +>> Can you repeat that? +>> I was proposing we ask for final consensus on Option 2 and figure out the X later. Another question would be, JHD, would that help your concern of not having a name before we merge this? +JHD: Like it was 2.question-mark or something and we figure out the question mark later? +>> Yes. +>> So I – I am throwing in a point of order. The fact is that before the lunch break, we can’t go to the next topic anyway without short-changing it, so I think we should just try and finish up here, if we feel like we can in the next 20 minutes.
+Because again the next item is 30 minutes and we don’t have that much before the break. +>> All right. + +WH: I support consensus on Option 2. + +>> Okay. There’s nothing in the queue. But let’s give it a second or so before we finalize it. + +WH: I was one of the initial objectors to it. + +??: Yeah. Thank you, Waldemar. Really appreciate it. + +??: I’m sorry, so are we – apologies, I am confused. Are we relying on the vote in the delegates’ chat in Matrix at this point? + +??: No. + +??: Okay. + +??: No, we just – Eemeli is asking a clarifying question, I believe, regarding this. Yes, we are asking for consensus for 2.X, where X is anywhere from 5 to 9. + +EAO: So specifically, what I would like to ask is: the range of choices we are considering, is that 2.5, 2.6, 2.7, 2.8, 2.9 exactly and only, or any number in the range from 2.5 to 2.9? Those are two different questions. + +USA: That’s a good question. We could have made Option 2 more specific. + +WH: Looking at the delegates’ channel poll, there are three options with more than one vote, which are 2.5, 2.7 and 2.9. So we should pick one of those three. + +MF: Yeah. People expressed that it should only be a single digit after the decimal place. I think that’s a good idea. It’s more convenient to talk about. Also, in the last meeting, a lot of people wanted to emphasize that it was mostly 3, so something closer to 3 than to 2, which 2.5 does not qualify for. Of the ones in the poll that have more than one vote, as WH said, the ones that seem to meet those desires are 2.7 and 2.9. So I would say we should try to consider those two options and see if there’s any opposition there. + +USA: All right. On the queue we have 3 messages. Michael Saboff, who supports 2.5. JHD supports consensus on one of them, with a preference for 2.9, failing that 2.5. And lastly, Daniel Minor supporting 2.5 also, or 2.7, 2.8 or 2.715. + +>> We have a good mix of everything. + +USA: Ron says that they have a preference for 2.9.
2.5 does not adequately indicate relative progress of the proposal. + +WH: I also have a preference, 2.5 is too evenly spaced. + +USA: Eemeli says – well, Eemeli, you have a comment? + +EAO: Yeah. At this point, I don’t really care where we end up on this. But I would like to note explicitly, this has been way, way too messy. And we need to come up with some way of making decisions like this in the future, where there are multiple options between which we need to choose. This needs to be written down in an internal policy. I don’t know what exactly. So we don’t end up in this mess later on again. + +USA: Thank you, Eemeli. Well, we can either decide amongst ourselves which one of these to go for, or make two final temperature checks. At this point, why not. Michael, do you have a preference for one of the two? Would you like to just straight away ask for consensus on one, or would you – + +WH: Let’s do temperature checks on the two options. If somebody wants to propose another option, we can do a temperature check on that one too. I don’t want to discuss the process any more. + +USA: Okay. Okay. Fair enough. + +USA: So all right. Then I will open the temperature check yet again. Twice. This time, we first start with 2.5 and then go to 2.9. As you can see on your screens, I will start now. + +MLS: Can we do one temperature check with both? And people can weigh in – strong positive for 2.5 and unconvinced for 2.9. And you can put whatever. + +WH: No. Because the scale is biased. + +USA: The worst positive. Yeah. + +USA: CDA says, split the difference: 2.7. Perhaps we have an option C. + +CDA: Yeah. Would the people who prefer 2.9 be happier with 2.7 rather than 2.5? And same question reversed: for the 2.5 supporters, is 2.7 better than 2.9? + +WH: Let’s just do three temperature checks. One temperature check for each of the three options. + +USA: Okay. First, we are now running a temperature check on 2.5. The votes are coming in.
Folks let me know – well, okay. Let’s see. We have 11. 15 votes at the moment. This is 16. Or, well . . . Yeah. All right. So more votes. At some point . . . 9, 10, 15. 20. We are missing one. Okay. We have one less vote now. 5, 6. 12. 20. Okay. No, don’t do two options, please. Unless somebody is still voting, I would take a screenshot here again. And we – okay. And then I will stop the temperature check . . . And start for option B. In the meantime, I will copy this in the delegates’ chat. So that was option A, everyone. This time, I did vote. Let’s see. We have 9, 17. I think we are missing a vote. Or two. 9, 14, 20. Okay. Wait. All right. Unless somebody is voting at the moment, I will lock this one down as well. Okay. And stop the check. + +USA: And then finally, for 2.7. Here we go. We have 16. 17. Wait. No. 16. Yes. For some reason, we have more votes this time. Somebody . . . voted multiple times. I don’t know. Let’s hope not. + +NRO: Be careful when voting: if you want to change your vote, you have to click the button to remove the old vote and then do a new vote. I think that’s at least what I have seen. + +WH: Yikes, I didn’t realize this thing would record multiple votes from the same person on the same poll if they changed their vote! + +USA: I am not sure if it’s supposed to do that. But – + +NRO: Yeah. Nobody voted more than once. Like, just be aware. + +USA: I think we have pushed the limits of TCQ – + +CDA: I think the results are past the margin of error as well. + +USA: All right. I will stop the temperature check now. And based on this, I think the winner is option C. Is that correct? Yes. Right? Like . . . It did actually manage to split the difference. So let’s – okay. Yes. Before we finally call it a day for this particular discussion topic, let’s finally ask for consensus on naming the stage 2.7. The queue is open for your comments and support, as well as objections, if you have strong objections against this. + +WH: I support this.
+ +WH: The other thing that we should resolve is the stage naming, since there was the discussion about which column of names to use. + +USA: We shall indeed get to that next. I see only support in the queue. So – 2.7 + 1. Okay. Okay. Michael, would you like to speak to your concerns? + +MLS: Well, I think this discussion has shown that our process is broken – the consensus process. I am not going to block on 2.7, although I could be a lone dissenter and block it. I don’t think it’s worth our time. My big concern is, we are trying to convey some kind of ranking of this stage relative to the stages around it, which I think is kind of a false thing to do. Each stage stands on its own, with its own entrance and exit criteria; how do we convey something by giving it a significant digit that the others don’t have? I put that in the chat. That’s enough said. + +USA: That’s fair. Well, apart from that, we see support for 2.7. Michael, I think you have your answer, despite all the conversation that went down. + +MF: I mean, thank you everyone for being so patient, and I know it was as painful for me as it was for you. Nobody enjoyed that. But the good thing is that we have this additional change to the process, which it did seem everybody was in favor of. So I would like to do a wrap up here. + +MF: As for consensus on the new stage adoption, I don’t think it is necessary because it was conditionally adopted based on choosing a name and we have done that. Can I ask for consensus on point 3 on this slide, for now reverting just decorator metadata to stage 2.7? [silence] Okay. I don’t hear any objection. + +RPR: Just to check, Michael, on this reversion: I think in the past you have talked about how we are going to separate out the committee decision to rename from the public messaging. We will prepare the public messaging so it’s done in an orderly way. So – + +MF: Rename, what do you mean by rename? + +RPR: When we announce the new stage.
When we do the public communications of what all this means, we are not going to rely on the notes going up; there’s going to be a proper communications plan. And so I want to check that this reversioning of the stage proposals will be delayed, in terms of when we actually announce it, until we have figured out the communications plan. + +MF: I'm fine with that. I guess that kind of skips ahead to what would now be point two. In point two, we don't have to be concerned about changing any existing references to existing stages, because they have all remained the same. But I do still want to pursue adding additional information to the process document along these lines. I know that there were some problems that some people had with some of it, so I will not ask for that today, but I plan to open a pull request. Hopefully, we can do most of the discussion about problems people had with that on that follow-up pull request that adds some of this additional information, and I can bring that just as a needs-consensus PR in the future. So, at that point, we would have external communication information in there. + +RPR: This sounds fine. We do it atomically in a planned way. That was my only concern. Thank you. + +MF: Okay. Does anyone want to speak against reverting just the one decorator metadata proposal? + +CDA: Yes. The proposal is – I don’t like that it’s snuck in here. I think a proposal shifting between stages needs to be explicitly called out on the agenda. + +MF: It was. You mean the – you mean just – + +CDA: Decorator metadata. + +MF: The agenda should have said there were stage changes happening as part of this, not just linked to the slides. + +CDA: Yes. + +MF: Okay. I will ask for it next time, then. + +MF: Okay. That’s it, then. I think that’s all I have. + +USA: Thank you, MF, for what will go down as the favorite item throughout this plenary. + +WH: Point of order. We never resolved the naming column issue.
+ +CDA: I think the number choice meant that it is resolved or at least can be deferred to later because it unblocks the process change + +KG: We don’t have names until we have a discussion about names + +WH: We are not going to change the names from the status quo to any of the other columns? + +MF: Yes. For now. I will be opening a pull request. Hopefully we will do the discussion on the pull request. I will be happy to ping you so you’re aware of it when it opens. + +WH: Okay. + +USA: All right. Then that’s it. Let’s break for lunch. Thank you, everyone, especially our note-takers. +(lunch) + +### Speaker's Summary of Key Points + +### Conclusion + +- new stage officially adopted, numbered 2.7, sits between stages 2 and 3 +- MF will merge the process document PR +- MF will pursue improving the process document with additional information to improve communication +- MF will ask to revert decorator metadata proposal to stage 2.7 in January + +## Continuation of Temporal + +Presenter: Philip Chimento (PFC) + +- [proposal](https://github.com/tc39/proposal-temporal/) +- [slides](http://ptomato.name/talks/tc39-2023-11/#9) + +PFC: Thanks for permitting me to present this follow-up item to the Temporal item I had on Tuesday. This will not be very long. But I will use the time to clear up the questions that we had on Tuesday. So I will be presenting this, but I would like to acknowledge JGT who is also here, who contributed a lot to the presentation. + +PFC: So a lot of confusion on Tuesday was around data-driven exceptions. First of all, I apologize for confusing the issue by using the jargon. I thought it came from TC39, but I guess we only used it in Temporal meetings. We will take an action item in a future Temporal champion meeting to come up with a name for the design principle to avoid confusion in the future and then maybe document the improved term somewhere. 
+ +PFC: I will give an overview of how data processing works in Temporal to clear up the confusion, and then at the end, I will present the normative change again. And hopefully that makes it clear exactly what is being changed and why we are changing it. + +PFC: All right. The context behind the principle of avoiding data-driven exceptions is that date/time data always contains lots of weird edge cases. People normally don’t think about them. So things like leap days, daylight saving time transitions, but also things like non-Gregorian calendars where there are leap months. It’s common when writing code with Temporal to not test with the weird cases. If the weird cases throw exceptions, then the code will work fine in development and testing, but it will break when confronted with real-world data in production. And there’s precedent: code that breaks when confronted with valid but unusual data is not how most JavaScript APIs work. We avoided that in Temporal. This is also not unique to Temporal. All software that deals with dates and times needs to deal with these. For example, if you buy a yearly subscription on February 29th, you still have to pay your bill the next year, you don’t wait until the next leap year to pay it. Or if you have an email system that sends an email every day at 2:30 am, it shouldn’t skip the day when daylight saving time starts or ends, and so on. So across many of the real world use cases we observed that these data dependent edge cases are handled by an existing software-defined default behavior for how to resolve ambiguity. If you don't have a leap year, February 29 is automatically clamped to February 28. And if a time of 2:30 AM is requested on a day when the hour from 2 to 3 is skipped because of daylight saving time, we use 3:30 AM instead. And a lot of these defaults were inherited from elsewhere, including the JavaScript `Date` object already.
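The clamping default described above can be sketched in plain JavaScript. This is a hand-rolled illustration of the idea, not the spec algorithm or the Temporal API; `daysInIsoMonth` and `clampIsoDate` are made-up helper names:

```javascript
// Illustrative sketch of clamp-by-default for ISO calendar dates; not the
// spec algorithm. Each field is clamped independently (per-field).
function daysInIsoMonth(year, month) {
  // Day 0 of the following month is the last day of `month`.
  return new Date(Date.UTC(year, month, 0)).getUTCDate();
}

function clampIsoDate(year, month, day) {
  const m = Math.min(Math.max(month, 1), 12);
  const d = Math.min(Math.max(day, 1), daysInIsoMonth(year, m));
  return { year, month: m, day: d };
}

clampIsoDate(2030, 2, 29); // → { year: 2030, month: 2, day: 28 } (2030 is not a leap year)
```

With per-field clamping like this, an out-of-range input such as year 2023, month 17, day 952 clamps the month to 12 first and then the day to 31, giving December 31, 2023.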
+ +PFC: We have been using the term 'no data-driven exceptions' as a shorthand in discussions about Temporal. I have tried to state the principle more fully here in italics. "Data-dependent ambiguous cases should default to reasonable behavior instead of throwing". In retrospect, 'data-driven exceptions' is not a good name. But it basically means, if it doesn’t throw for a normal date and time, by default it shouldn’t throw for a weird date and time. I say by default, because there are cases where you do want to throw when confronted with weird data. It depends on the use case. So if your use case is sending an email at 2:30 am, then it’s fine to send it an hour later because of daylight saving time. But if your use case is determining what time to write on a baby's birth certificate, then the software should warn the user that the time is invalid. So because it depends on the use case, Temporal has a way to opt in to throwing an exception and it’s usually by passing an options bag to the method with some option that has the value of `"reject"`. + +PFC: A lot of other APIs like the existing Date object, they don’t let you opt in to throwing like this. They silently fix the weird data and return the date, which we are calling 'clamping' here. So 'no data-driven exceptions' isn’t a new thing, but how a lot of other date/time APIs work. The new thing is that we try also to provide an option for 'yes data-driven exceptions' when you want to support the unusual case where the clamping is not acceptable. + +PFC: Something else we talked about on Tuesday is 'valid data'. I have a couple of slides on what valid data means. Temporal objects can be a form of data that we pass into a Temporal API. Temporal objects are immutable and always contain valid data in the internal slots. This contrasts with the JavaScript `Date` object: you can say `new Date(NaN)` and get a Date object that has NaN in its internal slot representing an invalid date.
You cannot do this with Temporal objects. And in the normative change we are talking about here, we are converting from one type of Temporal object to another, from PlainYearMonth to PlainDate or PlainMonthDay to PlainDate. This is the valid data we are talking about. For completeness, I will go over the other data that we consider valid. + +PFC: Property bags can also be valid data. The way we consider these is that for each property there is a validity domain and the property bag is valid if each property is individually valid. So, for example, the domain of an hour property is an integer in the inclusive range of 0 to 23. The domain of the month property is a finite positive integer. Now, that might seem weird. Months go from 1 to 12. But values like month 13, you can see in the next line here, might be valid in a non-Gregorian calendar. So we don’t consider positive out-of-range day and month values to be an invalid property bag, they are valid for the property bag and then later the calendar will do some validation on whether that date actually exists or not, and clamp by default or allow you to opt in to throwing. + +PFC: Calendars can be custom objects, and accept all sorts of month and day values that even the built-in calendars don’t accept. But what’s always not valid, you can see in the bottom line here, is things that are obviously bad data, like a non-integer day or year or a negative hour or a zero month. These are not valid property bags, they will always be rejected. You can’t choose to clamp or reject. They are not weird data, but just plain wrong. So the general principle here is if an input could be valid in some day, month, and calendar, etc., then we don’t throw by default, we clamp by default and let the developer opt in to throwing.
Even if the result isn’t present in a particular day/month/year calendar, if we can determine without doing a calendar calculation that an individual value is invalid, like a negative day, then the property bag is not valid. If it could be valid but needs a calendar calculation to be sure, then it’s valid and subject to clamping. Now, this is a messy principle. I think that’s okay because dates and times and calendars and time zones are messy. But it is consistent. + +PFC: The third kind of data that we consider valid is strings. We accept strings that adhere to a specific grammar: what ISO 8601 defines, extended by the new IETF RFC we are standardizing. The strings we accept are compliant with these standards. The flexibility discussed where the user can opt into clamping or throwing only applies to cases where we are interpreting number input, not string inputs. These standards are unambiguous about what is a syntactically valid string and what is not. Here are some examples. The top one, 02-29, is a valid month-day string. The next one, 2024-02-29, is a valid date string. 00-00 is not a valid month-day string. 12-32 also is not, and 2030-02-29 is also not a valid date string. + +PFC: The standards I talked about, ISO 8601 and the new RFC, don’t include time zone transitions in their definition of validity. That would be impossible, because time zone data changes. So we do still occasionally have to deal with strings that are syntactically valid but don’t represent an existing time. The string on the second-to-last line represents a nonexistent time; it’s in the middle of the skipped hour in a DST change. For a syntactically valid string like this, when converting, you can choose using an option to clamp or throw, and clamping is the default. The string on the last line, with the time 99:99, also doesn’t exist, but it's not a time that clocks can display in the first place. It’s not a valid string.
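As a rough illustration of syntactic validity for the month-day examples above, here is a hand-rolled, deliberately simplified check. The real grammar is defined by ISO 8601 and the IETF RFC, not by this sketch, and `isValidMonthDayString` is a made-up name:

```javascript
// Simplified month-day syntax check. Without a year attached, February 29
// is syntactically valid, so February's maximum here is 29.
function isValidMonthDayString(s) {
  const match = /^(\d{2})-(\d{2})$/.exec(s);
  if (match === null) return false;
  const month = Number(match[1]);
  const day = Number(match[2]);
  const maxDay = [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
  return month >= 1 && month <= 12 && day >= 1 && day <= maxDay[month - 1];
}

isValidMonthDayString('02-29'); // → true
isValidMonthDayString('00-00'); // → false
isValidMonthDayString('12-32'); // → false
```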
+ +PFC: Here is an overview — the slide's a bit full — but these are the ways that you can do a data conversion in Temporal that could end up being invalid in the result domain. So this is where the idea of clamping by default and letting the user choose to opt into throwing applies. There are a lot of things not shown here; most conversions can’t fail at all, at least not because of being invalid in the result domain, like February 29, 2030. For example, if you convert from a PlainDateTime to a PlainDate, every PlainDateTime object can be converted to a valid PlainDate. + +PFC: We are talking about conversions that can fail due to being invalid in the result domain here. On the left, most of these conversions clamp by default and have an option to throw. The code examples here all show what it looks like to opt in to throwing. If you leave out the options bag, or if you supply the appropriate different value for the option, you get the clamping behavior. This is crammed in, I hope everybody can read it. + +PFC: On the top right, these ones don’t let you opt in to throwing. We consider them convenience methods or convenience conversions. So they always use the default option. If you don’t want to use that, you can specify it manually by using the longhand way. For example, in the case of these property bag arguments to `.until()` and `.equals()` and other entry points, if you don’t want the default behavior for the conversion, you can convert the property bag yourself using a `.from()` method. Converting a PlainDate plus PlainTime, you can do that by first converting to a PlainDateTime, and specifying the option, and then converting to a ZonedDateTime and again specifying the option. It depends on which conversion you want it to throw on. + +PFC: Then we have these odd ones out, the two down in the bottom right.
So now with this context, let’s go back to the normative change, which would be basically to empty the bottom right category of conversions that always throw, which we consider a bug, into the category above it: the "always clamp" category. + +PFC: So if the conversions in the bottom right, the throwing category, would follow the same default behavior as the rest of the Temporal API, they would clamp the result to a valid date, because the desired date doesn’t exist when the receiver is combined with the input. + +PFC: So, back to the normative PR that I presented on Tuesday. It moves those throwing items into the always-clamping category. This was a bug discovered by a user in actual usage of a polyfill for this proposal. And we want to ensure that the default behavior of the similar Temporal APIs is consistent. + +PFC: If we were still working on the API design at Stage 2, we would probably want to add the option there. I don’t know for sure. But we will track that for a follow-on proposal. So if we didn’t make the change, then a common use case like 'what day of the week is my birthday next year' would throw when somebody’s birthday is February 29, which is what I referred to earlier as weird data. Valid data, but data that developers are likely not testing with. So that would be an unexpected result. + +PFC: Okay. We are a bit less than 20 minutes into the timebox. This is what I had to present. I will be happy to answer questions now. + +WH: Thank you for the description of philosophy and rationale. The list of examples and cases is a bit lacking. While I understand the philosophy, I don’t understand the boundaries of what is changing. So let me ask about a few cases to check whether my understanding is correct. I see it allows November 31st. So by the philosophy, the month and day could be anything positive. `plainDate.until` would also allow `plainDate.until({year: 2023, month: 17, day: 952})`? + +PFC: That’s right.
The reason for this— I didn’t go into this detail on this slide, but the principle here is that PlainDate.until conceptually takes another PlainDate as an argument. For convenience, and this is what I referred to as 'convenience conversion', anywhere an API takes a PlainDate as an argument we allow you to pass a plain date property bag instead, or a plain date string. And that is just simply so that you don’t have to type `Temporal.PlainDate.from(...)` every place you have data like that. It is treated as if you had called `Temporal.PlainDate.from` without any options. So you get the clamping behavior. + +WH: Okay. To check if my understanding is correct, this is not just the convenience methods? `Temporal.PlainDate.from` would also allow `Temporal.PlainDate.from({year: 2023, month: 17, day: 952})`? + +PFC: Right. So that goes back to what I said here on this slide with day 9999. If we can determine without consulting the calendar, that it’s invalid, then the property bag is not valid. If we would have to consult the calendar on whether that month or day is valid, the property bag is okay and subject to the clamping behavior. + +WH: So that is clamped to December 31 of 2023? Example is, year 2023, month 17, day 952. + +PFC: Yes. That is clamped to December 31st. + +WH: And if the month were 3, it would be March 31, not December 31? + +PFC: Right. + +WH: Clamping is per-field and not per-day-of-the-year? + +PFC: That’s right. + +WH: Okay. So yeah. In the `toPlainDate` conversion example on the slide, these would no longer throw and you could write `plainYearMonth.toPlainDate({day: 44})` and it would clamp it to the last day of the month? + +PFC: That’s correct. Yeah. + +WH: Okay. Is there a limit to how large of an integer you could provide for a day? + +PFC: I am not positive about whether it’s MAX_SAFE_INTEGER or MAX_VALUE. Let me check. + +WH: Okay. While you’re checking, thank you for the explanation. I am good with this. + +PFC: Okay. Glad to hear it. 
I will have an answer to that question about the day in just a moment. + +PFC: It has to be an integral number value and it has to be one or greater. So I guess Number.MAX_VALUE is integral. And infinity is not. + +WH: Okay. Thank you. + +CDA: Nothing else on the queue. + +PFC: Okay. Yeah. If we don’t have anything else in the queue, then I would like to ask for consensus on that change. + +CDA: You have a + 1 for consensus from DLM. Do we have any other voices of support for landing this PR? Do we have any objections to landing the PR? [silence] CDA: You have consensus, PFC. + +### Speaker's Summary of Key Points + +Overview of what constitutes "valid data" in Temporal APIs and when the clamp-by-default, opt-in-to-throwing behavior applies. + +### Conclusion + +- A normative change to overflow behavior in PlainYearMonth/PlainMonthDay.p.toPlainDate (PR #2718) reached consensus. + +Philip please link the slides above + +## Iterator helpers (continued) + +Presenter: Michael Ficarra (MF) + +- [proposal](https://github.com/tc39/proposal-iterator-helpers/) + +MF: So we talked about this a couple days ago. Iterators have an issue with web compatibility. I have a pull request, one possible solution for resolving that compatibility issue by replacing two data properties, the `constructor` and `Symbol.toStringTag` data properties on the iterator prototype with accessors that do weird stuff. + +MF: JHD had brought up that another possible temporary solution here is to just omit the properties. I had said that I needed to go look at the previous discussion because we discussed that at the last meeting. I looked at those notes and that was indeed what was suggested by NRO at the last meeting. JWS had raised an issue with it, but was mistaken. So that would also be a valid way to solve this problem in the interim. So I think personally, I am okay with either way forward. 
I think I do have a slight preference for going the accessor route because I do think it is a little bit less risky than omitting the properties. It seems that omitting the properties is more easily observable: you just toString any built-in iterator, or you ask for the constructor property of any built-in iterator. Whereas observing the fact that we have these accessors requires you to actually do getOwnPropertyDescriptor on Iterator.prototype itself, which I think is an incredibly unlikely thing for someone to be doing compared to the other things. Both routes should preserve our ability to replace them with data properties if we care to in the future. We would definitely care to in the future if we went the route of omitting it because one way or the other, we would like those properties to exist. + +MF: So I personally don’t see the upside of going that route. Either one is fine by me. I just want to pick one of these ways forward because we do want to get iterator helpers shipped; apparently there’s a lot of demand for it. One more thing: if we decide to omit the properties, going the route of adding data properties later does require an implementation to be willing to do that experiment again, and possibly risk running into a web compatibility issue. Rezvan has spoken on behalf of Chrome, saying that they would do that, but that’s obviously non-binding, and also things may change between now and whenever it is – like 6 months from now – that we can try this again. + +MF: So we don’t know if that would get us to the case where we have data properties. It might end up with the accessors anyway. That’s all I have to say on the topic. + +JHD: When we are talking about toStringTag, zero risk. Period. People don’t depend on the exact value of toStringTag. We have added toStringTag to things and it hasn’t broken anything. That’s not the concern – `constructor` is the property that is more often used.
It’s been pointed out that `instanceof Iterator` doesn’t care about the constructor property. So that pattern will still work. It would only matter if someone writes what I would find odd code, `X.constructor === Iterator`. With omission, that would always be false, and with the accessor it’d be true. But nonetheless, I feel like adding properties – especially properties that match what everybody thinks, what the intuition is – is a much less risky change than changing a property between data and accessor in either direction. + +JHD: So I would much prefer to just omit them. And it seems like in the coming months the remaining sites will upgrade, and I can coordinate with the representative from Transcend to keep pushing to try and get the sites upgraded and to do the follow-up proposal to add the properties back at that time. So yeah. That’s my two cents. + +MM: So I find JHD's explanation very plausible. But it would be very nice if there was some way to get evidence one way or the other in a timely manner. The thing that strikes me is that whichever way we go, when we want to make the transition from whatever it is we did in the meantime to a data property, we might get stuck. We might not be able to transition to the data property. And the cost of omission is that if we can’t move from omission to the data property, we have locked in a behavior that we don’t like. Whereas, if we can’t move from the accessor to a data property, we are actually getting the behavior that we want, rather than locking in the wrong behavior. We are just getting the behavior we want with the undesired meta-level representation, which strikes me as a smaller cost. I am not blocking either way; whichever side of this can gain consensus, I am fine with. Because I think that either way, the risk is small.
But given the cost of getting stuck with omission, in the absence of evidence my inclination is still towards the accessor rather than omission. + +NRO: Yes. So let’s say we omit it, and in 6 months or one year we still cannot move to the data property. Moving then to an accessor is less risky. So we could still in the future decide "okay, we tried, we failed, let’s move to the accessor", rather than remaining stuck in the omission state. + +KG: This was a response to JHD; I didn’t get in the queue in time. The claim about constructor was that to run into this you have to be doing `X.constructor === Iterator`, and that’s weird code. It’s bad code, but I put a link in Matrix to 150,000 instances of it on GitHub in JavaScript. It’s a thing that people write a lot. I think that the chance of not being able to add the properties later, if we omit them now, is unacceptably high, and therefore we should go with accessors. + +CDA: Nothing on the queue. + +KG: Also, just as another point: no one will notice the change from accessors to data properties. That won’t come up for any normal programmer. Whereas going from omitted to not omitted will come up, and I don’t think we need to expose users to this sort of thing. + +CDA: JHD? + +JHD: Is there any chance we could not make a change to the proposal, but see if Chrome is willing to ship, let’s say, the accessors, with the expectation that they will be changed later? I mean, I know ideally the spec matches reality, so that’s the downside, but the intention would be that in the near term we would be bringing the implementations to match the spec, and if that became impossible, we would have to change. I don't know if that’s much better. I prefer omission, but I thought I would bring it up. + +MF: JHD, I think that is the plan with the accessors. We ship accessors, and we revisit this and see if implementations are still willing to try to switch it to data properties. + +JHD: I was asking about skipping the "spec the accessor" step. 
I see why that could be worse; it’s just something that popped into my head. + +CDA: Queue is empty. + +MF: Okay, with the queue being empty: I think MM raised some good points that I hadn't covered in my initial presentation, that there are different kinds of risks. One is getting stuck with a representation we don't like, and one is getting stuck with values we don't like, and it seems like getting stuck with values we don't like is the much worse case. I also just still don't see the upside of the omission route other than giving us warm fuzzies, right? It makes us feel better, but to the user, there's no upside. So I still lean pretty strongly to preferring the accessors approach and would like to see if we can get consensus on that. But if we can't, obviously the number one priority is to just move forward in some way. So I'd be willing to do either one. + +MM: Just support – I support the accessors. As I said, I am not blocking the other, but yes, I support accessors. I hope we can get consensus on that. + +NRO: Yes. I originally, like last meeting, spoke in favor of just omission. But if even with accessors the goal is to eventually try to get the data properties, then I am fine with either temporary solution. + +MM: JHD – are you okay with accessors rather than omission? + +JHD: Yeah. I mean, I don’t think I would block on it. Is it worth doing a quick pair of temperature checks on those two options? That might just settle it. + +MF: I would leave it up to the chairs to decide whether we go the route of asking for consensus or going to the temperature check. + +CDA: The chairs do not want to decide. + +MM: MF, you’re the champion. I think it’s your call. + +MF: Well, yeah, then I would ask for consensus for going with the accessor approach. I do think, and I want to state this, that I'm not trying to steamroll anybody; I, in full good faith, think that we will be able to move to data properties in a couple of months. 
So I just think, I want to go with what I feel is the more sure route of getting there. I think we'll all be happy about this one in a few months. + +MM: So we have some explicit support. Any objections? + +JHD: Yeah. I guess we can go with it then. I hope all the implementers in the room are hearing clearly that as soon as the sites are not broken anymore, we want to move to data properties. Separately, for the room: I assume that there is nobody that would have any objection to trying to move to data properties in the future, provided browsers are willing to ship? If you have an objection, now is the time to surface it, I think. If not, then yeah, I think we might as well go ahead with the accessor, even though it’s not my preferred option. + +CDA: So . . . A moment ago we were talking about consensus in support and objections. Are we talking about the accessor? + +MM: Yes. + +JHD: Consensus for accessors is what I have heard. And I haven’t heard anyone voice any objection to eventually moving to data properties. It seems like I am the person most strongly against accessors, and I am not blocking on it, so we can call it consensus. + +CDA: Yeah. Okay. I want to be clear for the notes. So we have explicit support from MM and from others? + +KG: I support accessors. + +CDA: Okay. + 1 from KG + +MM: NRO, didn’t you explicitly support accessors? + +NRO: I support either option, as long as we agree that we want to then move to data properties. + +MM: Okay. + +CDA: Okay. I think it was clear that there was no objection. Last call for objections. All right. 
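For the notes, the observable difference between the three options discussed above can be sketched as follows. This is a minimal illustration only; `makeProto` and `FakeIterator` are hypothetical names and not part of the proposal or any implementation:

```javascript
// Sketch of the three options discussed for Iterator.prototype.constructor:
// a data property, an accessor pair, or omitting the property entirely.

function makeProto(option, Ctor) {
  const proto = {};
  if (option === 'data') {
    // Ordinary data property: the eventual goal.
    Object.defineProperty(proto, 'constructor', {
      value: Ctor, writable: true, enumerable: false, configurable: true,
    });
  } else if (option === 'accessor') {
    // Getter returns the constructor; setter shadows it on the receiver,
    // so `obj.constructor = x` still works and doesn't throw in strict mode.
    Object.defineProperty(proto, 'constructor', {
      get() { return Ctor; },
      set(v) {
        Object.defineProperty(this, 'constructor', {
          value: v, writable: true, enumerable: false, configurable: true,
        });
      },
      enumerable: false, configurable: true,
    });
  }
  // option === 'omit': define nothing; `constructor` comes from Object.prototype.
  return proto;
}

class FakeIterator {}
const results = {};
for (const option of ['data', 'accessor', 'omit']) {
  const instance = Object.create(makeProto(option, FakeIterator));
  // The common pattern KG cited (~150,000 GitHub hits):
  results[option] = instance.constructor === FakeIterator;
}
console.log(results); // { data: true, accessor: true, omit: false }
```

Only omission changes what ordinary user code observes; distinguishing the accessor from a data property requires `Object.getOwnPropertyDescriptor` on the prototype itself, which is the basis of MF's risk argument.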
+ +### Speaker's Summary of Key Points + +- + +### Conclusion + +- Merge iterator-helpers PR #287 +- JHD to follow up with Transcend and pursue the transition to data properties when the time is right + +## Incubation call chartering + +EAO: So this is re-advertising an incubation call from the last meeting. Because of scheduling constraints, I haven’t been able to get it up until relatively late, but if you look now on the Reflector, you will find an issue for this that is polling for dates for the stable formatting proposal sometime before the next meeting. If you are interested, please participate, because I have only one reply so far. If I don’t hear anything by the end of day tomorrow, I am going to basically cancel it and figure out some other way to take that one forward. Also, this LA has been very cold. + +CDA: All right. I have shared the link in the chat, in the Google Meet, as well as in the channel in Matrix. Anything else for this item? I don’t think so. + +CDA: All right. If there is nothing else, and no comments on the queue, I think that is it for this plenary. Thank you, everyone. + +RPR: And also, just to say, we are working on the invite for the next plenary. Obviously the next one is San Diego. The dates are set, so there’s no issue there. At the moment, Rodrigo is working on getting hotel recommendations. In particular, we had a discussion in the Matrix chat where people want to avoid hiring cars, so we are going to try and select an interesting part of town where we can share taxis, to make that a bit easier. + +KG: A reminder to fix up the notes, please and thank you. Find the places you spoke and correct them. + +CDA: Yes. Please ensure comments are accurate, especially those attributed to you. All right. If there’s nothing else, I think we can call it. Thanks, everyone.