diff --git a/meetings/2024-02/February-6.md b/meetings/2024-02/February-6.md index 69938a4e..35ad5555 100644 --- a/meetings/2024-02/February-6.md +++ b/meetings/2024-02/February-6.md @@ -1,6 +1,7 @@ -100th TC39 Meeting -6th Feb 2024 +# 6th Feb 2024 100th TC39 Meeting + ----- + Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. You can find Abbreviations in delegates.txt @@ -46,7 +47,6 @@ You can find Abbreviations in delegates.txt | Rodrigo Fernandez | ROF | ServiceNow | | Samina Husain | SHN | Ecma | - RPR: San Diego is America's finest city. RPR: we will start with approval of the last meeting’s minutes. The 99th meeting. Are there any observations to considering those approved? Not hearing anything in the room. And nothing on line we will consider that approved. @@ -54,25 +54,20 @@ RPR: we will start with approval of the last meeting’s minutes. The 99th meeti RPR: We also have this week’s agenda. So I believe that ready to adopt. Any objections? We shall adopt the agenda. ## Secretary's Report + Presenter: Samina Husain (SHN) -- [slides]() +- Slides: See Agenda SHN: So welcome to the 100th meeting. Thank you very much ServiceNow for hosting it. Thank you very much Rodrigo and also Reny who has been supportive to me. -Fantastic venue, the weather hopefully is getting better each day. -My report, I will keep it short. We have a busy agenda. So let me do the usual update of the things we want to do. +Fantastic venue, the weather hopefully is getting better each day. My report, I will keep it short. We have a busy agenda. So let me do the usual update of the things we want to do. SHN: I want to recognize the appointments that we have for ECMA management, the last time we had a meeting we had voted on the 2024 management. So we will talk about that. I also want to recognize the different members of TC39 the chairs and editors. If I have made a mistake on the slide, correct me. We have new projects that we have, exciting, our statistics, and then the usual annex of our latest documents. At any time you can ask me a question or meet me anywhere, I will be here for the next 3 days and clarify. -SHN: For our management, our management line has not changed as you are familiar with for 2023. Jochen Friedrich from IBM will remain the president. The Vice-president will be DE from Bloomberg. And treasurer Luoming Zhang from Huawei. The executive team, Michael Saboff, will be the chair as he was previously. We do have a vacant position in the current executive committee. If you are interested, you may nominate yourself or you think somebody else would be interested, and you want to nominate, please do let me know. -From the ordinary members. Thank you for the clarification. And the non-ordinary members, we will have from EPFL. And PHE from Moddable. It’s also on the website it’s clearly noted. Let me know if you want to volunteer and be nominated. I want to recognize the chairs and editors. -Thank you for the chair’s, CDA, for your work. I understand you will continue to be chair for the next year. -Thank you very much for everything you have done and all of the reports you published. It’s done timely. For the editors, also thank you for your efforts. I have listed the editors as I understood them through going through GitHub. Please correct me if I have any errors. 
Thank you, all of your efforts are appreciated and I assume you will continue on for 2024. -If I have missed somebody, do let me know. I don’t know who the editors are for 414 and the TRs or if there are editors right now or any activity going on. But you may let me know that off-line. +SHN: For our management, our management line has not changed as you are familiar with for 2023. Jochen Friedrich from IBM will remain the president. The Vice-president will be DE from Bloomberg. And treasurer Luoming Zhang from Huawei. The executive team, Michael Saboff, will be the chair as he was previously. We do have a vacant position in the current executive committee. If you are interested, you may nominate yourself or you think somebody else would be interested, and you want to nominate, please do let me know. From the ordinary members. Thank you for the clarification. And the non-ordinary members, we will have from EPFL. And PHE from Moddable. It’s also on the website it’s clearly noted. Let me know if you want to volunteer and be nominated. I want to recognize the chairs and editors. Thank you for the chair’s, CDA, for your work. I understand you will continue to be chair for the next year. Thank you very much for everything you have done and all of the reports you published. It’s done timely. For the editors, also thank you for your efforts. I have listed the editors as I understood them through going through GitHub. Please correct me if I have any errors. Thank you, all of your efforts are appreciated and I assume you will continue on for 2024. If I have missed somebody, do let me know. I don’t know who the editors are for 414 and the TRs or if there are editors right now or any activity going on. But you may let me know that off-line. -SHN: Very important coming up, the approval process, as you are all working extremely hard and have an addition in the June time frame. Our deadlines are the executive meeting on the 24th and 25th of April. By that point we want to of course propose to the committee the new edition you will be edition 15 of 262 and the 11th edition of 402. You do remember we have a 60-days opt-out period for the RF policy and 60-day review period. We start this in advance of the 60-days review period. So they are not completely overlapping, in case there’s any issues we have a bit extra buffer to handle it. -So again, the suggestion is that late March, early April you are ready to propose the edition 15 and 11. If you chose to – short notice and you need more time, do approvals on postal ballot and that can happen. We do the in person review twice a year. +SHN: Very important coming up, the approval process, as you are all working extremely hard and have an addition in the June time frame. Our deadlines are the executive meeting on the 24th and 25th of April. By that point we want to of course propose to the committee the new edition you will be edition 15 of 262 and the 11th edition of 402. You do remember we have a 60-days opt-out period for the RF policy and 60-day review period. We start this in advance of the 60-days review period. So they are not completely overlapping, in case there’s any issues we have a bit extra buffer to handle it. So again, the suggestion is that late March, early April you are ready to propose the edition 15 and 11. If you chose to – short notice and you need more time, do approvals on postal ballot and that can happen. We do the in person review twice a year. 
SHN: Some new projects going on which is really very nice for ECMA and I want to thank everybody. There are a number of people in this committee that have supported to make some of the projects happen and continue to support in making future projects happen so I want to highlight some things. TC54 "software and systems transparency", which is CycloneDX, has had its third meeting. We meet every 2 weeks. We’re getting organized. It’s a good committee. 5 different members there. So if you or your organizations are interested, learn more about the details on the website. Come talk to me for more information. @@ -84,47 +79,43 @@ SHN: Some new members. Very good and thank you for the support of this committee SHN: There is an ad hoc that I spoke about the last time we were in the meeting. It will continue through 2024, which is the governance and now we work, better engage new projects, better engage new communities. So this is ongoing. I don’t have any specific updates to bring at this time. Of course, any input you would like to share or think we should bring to our governance, I am always open for that information. -SHN: I want to highlight a couple of things. We did some workshops at the end of the year. It was very good. We had a number of participants. We had very good presentations. And I think it generates good conversations for how TC53 will continue with the technical work and new inputs. -We tried to do another one, a workshop on data and cloud standardization. We postponed it. It’s hard to get people to participate. But it may have been timing. My point to bring these two slides up is if you have thoughts on other topics that you think would be interesting, let us know. We will put something together. These are all hybrid or mostly virtual. It enables us to bring in new ideas, some new members, and new projects. +SHN: I want to highlight a couple of things. We did some workshops at the end of the year. It was very good. We had a number of participants. We had very good presentations. And I think it generates good conversations for how TC53 will continue with the technical work and new inputs. We tried to do another one, a workshop on data and cloud standardization. We postponed it. It’s hard to get people to participate. But it may have been timing. My point to bring these two slides up is if you have thoughts on other topics that you think would be interesting, let us know. We will put something together. These are all hybrid or mostly virtual. It enables us to bring in new ideas, some new members, and new projects. -SHN: Some statistics. This is the year-end statistics of the access of 262. So still, there was a peak in 2015. It’s difficult to judge why. This was some time ago. If I look at last year, we’re consistent. A bit of a decline, but consistent and strong interest. It’s a good indication of what to expect with the next editions that will be coming. -This is the statistics on the download. And also, a little bit of a decline but not terribly bad in. In the yellow, that’s the percentage based on all downloads at ECMA. ECMAScript remains a very strong – very strong committee at ECMA international. +SHN: Some statistics. This is the year-end statistics of the access of 262. So still, there was a peak in 2015. It’s difficult to judge why. This was some time ago. If I look at last year, we’re consistent. A bit of a decline, but consistent and strong interest. It’s a good indication of what to expect with the next editions that will be coming. 
This is the statistics on the download. And also, a little bit of a decline but not terribly bad in. In the yellow, that’s the percentage based on all downloads at ECMA. ECMAScript remains a very strong – very strong committee at ECMA international. -SHN: I have just come to my very last slide and I am going to look over to KG,I don’t know if you have any feedback that you would like to share regarding the PDF, if you do, I would be happy to share some now. Thank you. +SHN: I have just come to my very last slide and I am going to look over to KG,I don’t know if you have any feedback that you would like to share regarding the PDF, if you do, I would be happy to share some now. Thank you. -KG: MF, you were going to make the PDF. +KG: MF, you were going to make the PDF. MF: No? -KG: You were talking about - we are going to follow Allen’s process. +KG: You were talking about - we are going to follow Allen’s process. -MF: Yeah. We – we are going to go through AWB’s documents and incorporate as much as we can into ecmarkup and into the spec document itself wherever possible. And then we will get feedback on any remaining differences between what we can automatically produce and what we can manually produce at the following meeting. But that hasn’t been done yet. +MF: Yeah. We – we are going to go through AWB’s documents and incorporate as much as we can into ecmarkup and into the spec document itself wherever possible. And then we will get feedback on any remaining differences between what we can automatically produce and what we can manually produce at the following meeting. But that hasn’t been done yet. SHN: Thank you. Can I ask, would you be ready to do this for the next edition, coming up in June? MF: Yes. - -SHN: Thank you. So we will keep following that. It is relevant. You can see from the statistics, it’s still quite relevant. So thank you for your efforts. -And that’s the end of my core slides. My next slides provide you with information on where the documents lie. There’s a list of documents. I will upload the slides after tomorrow. There are a bunch of documents you may wish to look at, minutes of the last meetings and what is going on in ECMA TC39 and GA. I won’t read them to you. There’s a list of them. -Our stats, So these are our stats for the last 4 years. And clearly, this is the highest in person meeting in the last while. This is an indication of how we move forward with other meetings. Also, participation. The last one we had, it was good. But this meeting may take the cake. + +SHN: Thank you. So we will keep following that. It is relevant. You can see from the statistics, it’s still quite relevant. So thank you for your efforts. And that’s the end of my core slides. My next slides provide you with information on where the documents lie. There’s a list of documents. I will upload the slides after tomorrow. There are a bunch of documents you may wish to look at, minutes of the last meetings and what is going on in ECMA TC39 and GA. I won’t read them to you. There’s a list of them. Our stats, So these are our stats for the last 4 years. And clearly, this is the highest in person meeting in the last while. This is an indication of how we move forward with other meetings. Also, participation. The last one we had, it was good. But this meeting may take the cake. SHN: The next meetings are highlighted there. Already RPR has mentioned them. And that is the last slide that I have. Reminder when the general assembly and our ExeCom. 
Tomorrow will continue with the slide as we finish the agenda, do some activities on the 100th meeting.

-SHN: We have some celebrations and we have swag. There is a swag item for everyone. But if you choose to do your summaries and conclusions, you may get a second swag. If you choose to support note-taking and somebody tells me who they are, you may get a third swag.
-Bribery, but that’s the mode I work in today. So let me know. I will be watching. Thanks to SYG for helping me with the swag and design. If there are any questions, please ask.
+SHN: We have some celebrations and we have swag. There is a swag item for everyone. But if you choose to do your summaries and conclusions, you may get a second swag. If you choose to support note-taking and somebody tells me who they are, you may get a third swag. Bribery, but that’s the mode I work in today. So let me know. I will be watching. Thanks to SYG for helping me with the swag and design. If there are any questions, please ask.

RPR: Thank you Samina. That was excellent.

### Speaker's Summary of Key Points
+
Summary:

- Timelines 2024: A reminder of the timeline for the approval of a new edition of 262 was noted. At the April ExeCom on 24-25 April, the TC chairs bring in the recommendation of the next edition. The 60-day opt-out period and a 60-day open-for-comments period were noted. The anticipated editions are ECMA-262 15th edition and ECMA-402 11th edition for approval at the upcoming June 2024 GA meeting.
-- Approval vote accepted by acclamation of ES2024 and opt-out period, for both ECMA-262 and ECMA-402 was agreed by the committee.
+- Approval by acclamation of ES2024 and the opt-out period, for both ECMA-262 and ECMA-402, was agreed by the committee.
- New work items:
-New work items were reviewed, i.e. TC54 Software and system transparency (Cyclone/DX), and a the potential TC55 WinterCG to be discussed at the next Execom. The 2023 workshops were also highlighted to encourage the committee to consider future topics which may be of interest and may generate new work items.
+New work items were reviewed, i.e. TC54 Software and systems transparency (CycloneDX), and the potential TC55 WinterCG to be discussed at the next ExeCom. The 2023 workshops were also highlighted to encourage the committee to consider future topics which may be of interest and may generate new work items.
- Statistics: The yearly statistics were reviewed, and it was noted that both the HTML access and downloads of ECMA-262 continue to have strong demand.

@@ -136,106 +127,115 @@ The pending new members were noted, Replay.io (SPC), HeroDevs (SME), and Sentry

Feedback on the ES2024 PDF version solution was provided, based on the Allen W-B process, from Kevin Gibbons and Michael Ficarra. The process is being reviewed and incorporated into ecmarkup where possible. Feedback will be provided on what can be automatically and manually produced; this has not been done yet, but the aim is to be ready for the next edition coming up in June.

## ECMA262 Status Updates
+
Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1CxVe7IC5Nie1kvm688bX6We0xH7YEmpFAJHC44cac9k/edit)
-
+
KG: Editors update. This will be brief. Only a couple of normative changes. We landed Object.groupBy and Map.groupBy, and Promise.withResolvers which got stage 4 at the last meeting. And only one editorial change worth calling out. We have introduced an AO to simplify using the iterator protocol in the common case.
I am mainly calling this out because if you have a proposal that is using the iterator protocol, you may wish to use this abstract operation. I have already PR’d several proposals. If you want I can send a PR for any others. It really is much nicer, I think. And no other editorial changes to call out. KG: I don’t think I am going to go through the list of upcoming and planned work again. It’s pretty much the same, but I want to call out that last time, I claimed that we were done replacing complex spec values with records and just making the spec internally consistent in that way. And it turns out, there’s a couple left. So this fourth one on the list is still here. And not removed. So yeah. Same things we have always been working on. -KG: I want to especially call out the first one on the list: the spec has a terms and definitions section, containing 10 or 15% of the terms and definitions of the spec. And that’s silly. wW will get rid of that and spread the definitions in the appropriate places. -If you don’t think we should do that, let us know. But that’s the plan. +KG: I want to especially call out the first one on the list: the spec has a terms and definitions section, containing 10 or 15% of the terms and definitions of the spec. And that’s silly. wW will get rid of that and spread the definitions in the appropriate places. If you don’t think we should do that, let us know. But that’s the plan. KG: The last thing, SHN mentioned we need to cut the next edition of the spec. Because of the timelines, that should probably happen before the next meeting. The intention is to freeze the spec at the end of this meeting. We will send out a link. There’s only two things on the agenda that are likely to land in the spec before that happens, which is ArrayBuffer.transfer goes to stage 4 and the `-->` HTML comment bugfix for web reality. If there is consensus, we will land them, and then either way, we will cut the spec after the meeting, possibly including these changes. No other normative changes anticipated. We will send the link and the 60-day IPR opt out can start at that point, which should give us plenty of time. If you have something that you want to get in the spec, for some reason, let us know and know why. But otherwise, the plan is that it’s basically as it is today, plus those two changes. MLS: KG, do you have a rough date when you think you will do that, and can you send that out to everybody? KG: Yeah. We will send the link when it’s out. The rough date is the end of the meeting. So Friday. Whatever Friday is. Maybe next week. But within ten days of now. The plan is to post the link on the reflector as an issue. - -MLS: Sounds good. -PFC: Is there any guidance for proposal authors regarding the complex spec values versus records thing that we should be following? +MLS: Sounds good. + +PFC: Is there any guidance for proposal authors regarding the complex spec values versus records thing that we should be following? KG: You are almost certainly doing the right thing already. If you have a named records with lists of fields or whatever, that’s the right thing. If you have, like, a tuple of values whose fields are referred to as, like, "the matcher component of this tuple", rather than using the `.[[Name]]` field access - don’t do that. Do the “you have fields in a record” thing. But you are extremely unlikely to be doing the complex spec values thing. That was just an artifact of history. RPR: Okay. So I think we are on time. Thank you KG. 
+ ## ECMA402 Status Updates + Presenter: Ben Allen (BAN) -- [slides]() +- Slides: See Agenda -BAN: So this is a very short update. We only have a couple of minor editorial changes. Let’s see, is that sharing correctly? It looks like it is. +BAN: So this is a very short update. We only have a couple of minor editorial changes. Let’s see, is that sharing correctly? It looks like it is. BAN: Fantastic. So one is related to the new iterator step value AO. Thank you, KG for bringing that over into 402. So yeah. Previously we had used the more elaborate process. This one clarifies things greatly. BAN: And the other very minor change is some of the ordering used for cable iteration was inconsistent. Most notably, like DatetimeFormat, set row where the rest of 402 and 262 would say current row. And that’s it. -SFC: We also have a few topics for proposals at various stages in the agenda as well. So we can look forward to those coming up throughout the course of the week. +SFC: We also have a few topics for proposals at various stages in the agenda as well. So we can look forward to those coming up throughout the course of the week. RPR: Excellent. All right. Thank you, BAN. ## ECMA404 Status Updates + Presenter: Chip M (CM) -- [slides]() +- Slides: See Agenda -RPR: We move on. The next one is ECMA402 status update from Chip. Chip, are you there? +RPR: We move on. The next one is ECMA402 status update from Chip. Chip, are you there? -CM: I am. So JSON continues its decades long tradition of stability and relentless backward compatibility. +CM: I am. So JSON continues its decades long tradition of stability and relentless backward compatibility. -RPR: Thank you. Always good to hear. +RPR: Thank you. Always good to hear. RPR: All right. I think that’s the most reliable topic on the agenda. ## Test262 Status Updates + Presenter: Philip Chimento (PFC) -- [slides]() +- Slides: See Agenda PFC: In Test262, we’ve merged some tests for proposals recently, including set methods, iterator helpers, accessors, and a test for a normative change that has been waiting for a couple of years to merge into ECMA262, the sync-to-async-iterator changes. PFC: I will basically repeat what I have said at the past few of these updates: we have more review work than we can comfortably assign to all the maintainers. We do thank you for reviewing tests. As a proposal author, if you could look at tests that people write for your proposal and give a signal about whether you think they’re correct or not. That helps us have more confidence as maintainers to merge them. And I think that’s it on our part. + ## TG3 (Security) Report + Presenter: Jordan Harband (JHD) -- [slides]() +- Slides: See Agenda -JHD: We have had two meetings of TG3 since the last plenary. We spoke about PR that Nicolò, I believe that will be talking about this week. We also decided to publish an agenda at least 24 hours before each of our meetings, and then if there’s nothing on the agenda, to cancel it. -However, we have a long enough backlog to not likely cancel for a while. If you have security-related input, please join. That’s all. +JHD: We have had two meetings of TG3 since the last plenary. We spoke about PR that Nicolò, I believe that will be talking about this week. We also decided to publish an agenda at least 24 hours before each of our meetings, and then if there’s nothing on the agenda, to cancel it. However, we have a long enough backlog to not likely cancel for a while. If you have security-related input, please join. That’s all. 
## TG4 (Source Maps) Report + Presenter: Jon Kuperman (JKP) -- [slides]() +- Slides: See Agenda -JKP: Cool. Sorry, if this is a bit of a different format, but I recently had to do a quick year-end view of the source maps we have been doing and it’s the first year for us in TC39, I thought that summary would make a good update. -Last year in the beginning part of the year we formed the group, talked about ideas for new features, recruited members from open source tools, browsers, different companies and tools. In June, I was at the TC39 plenary, presented and became the official TG4 task group. So since then, we have gotten bigger. This is not all of the members, but a lot of them are the most active members. Anyone in the space that wants to collaborate on any work we are doing, it would be fantastic to have, but I am pretty happy with the group that we manage to get so far. We have been getting quite a bit done. +JKP: Cool. Sorry, if this is a bit of a different format, but I recently had to do a quick year-end view of the source maps we have been doing and it’s the first year for us in TC39, I thought that summary would make a good update. Last year in the beginning part of the year we formed the group, talked about ideas for new features, recruited members from open source tools, browsers, different companies and tools. In June, I was at the TC39 plenary, presented and became the official TG4 task group. So since then, we have gotten bigger. This is not all of the members, but a lot of them are the most active members. Anyone in the space that wants to collaborate on any work we are doing, it would be fantastic to have, but I am pretty happy with the group that we manage to get so far. We have been getting quite a bit done. JKP: The first big thing we do is that we worked through and approved our official process document. The basic ideas that we sort of mimic TC39 officials stage process, where things go from Stage 1 to Stage 4, that being said, we still will be planning on presenting everything through the TC39 plenary, once we have an internal Stage 4 reached. JKP: We got a lot done with regards to the existing specification. I have links to the best examples of this here, like directionsing around how to handle invalid versions. Instead of using our own language, source maps, using things like that. Removing X prefixes. So there’s quite a few links here. And then the new features, we have got 3 main proposals that are in flux right now. One of them is adding function scopes and variable names to source maps. So this is big for a lot of different tools. And inspired by a lot of work done previously at Bloomberg and Google. We have a range mappings proposal, so instead of linking to a specific mapping, tools can have a range of mappings, for tools combining through build processes. And then a proposal for debug ID’s, using a unique identifier to source maps. So a lot of ecosystem tooling can easily find the source file it’s referring to. -JKP: The last thing we have done is made our contributing guide. So I have a link here to our CONTRIBUTING.md. It has like everything that I could think of that people need to know to join and contribute. So where we are meeting, on the TC39 calendar, all of the chats, open source repositories, a list of the customers and the big effort right now, which is getting tests in place for all the existing source map functionality. Actively looking for anything who wants to contribute. I keep these short, but we would love to have more folks involved. 
Thank you very much. +JKP: The last thing we have done is made our contributing guide. So I have a link here to our CONTRIBUTING.md. It has like everything that I could think of that people need to know to join and contribute. So where we are meeting, on the TC39 calendar, all of the chats, open source repositories, a list of the customers and the big effort right now, which is getting tests in place for all the existing source map functionality. Actively looking for anything who wants to contribute. I keep these short, but we would love to have more folks involved. Thank you very much. MF: It looks like you said the proposals, once they reach your internal Stage 4, come to TG1 for review. Is your internal stage 4 post-implementation? Does it match up with our TG1 stage 4? -JKP: For a lot – I can share the process doc in particular, but the way that we had been thinking about it and presented it earlier, the answers are yes. Internally, we would have implementations complete before bringing it to TG1. But the implementations would not exist in the browsers for the language they exist. UI for the debuggers and other tools like that. +JKP: For a lot – I can share the process doc in particular, but the way that we had been thinking about it and presented it earlier, the answers are yes. Internally, we would have implementations complete before bringing it to TG1. But the implementations would not exist in the browsers for the language they exist. UI for the debuggers and other tools like that. MF: The reason I ask is because of a web compatibility concern. The source maps are distributed over the web. And tools would then become reliant on the format. I don’t see what the purpose of the review by TG1 is after that point. DE: I think Jon accidentally left out a part of the process. He was describing how things advance through stages in TG4. Additionally, each TC39 meeting will keep having an update for TG4’s work, including a summary of all the things that we are doing in TG4. There, we will be inviting people to come to TG4 and do a more detailed technical review. The point we are going to make an annual version cut, then we present the whole annual version to TC39 and ask for consensus about it. We didn’t want to make too much of a back and forth, get consensus in two different places. Because if you want to engage in the more detailed technical discussions, towards design, you should join the group that is doing that. But still, to make sure people have a table of contents, so they can engage in the discussions that they want to, that are relevant to them, that’s why we have these updates every meeting. Hopefully that’s less scary than you were imagining. -DE: Also, I want to point out that implementation by this process that JKP was mentioning is defined as not just browsers. But also, tools. We require both. Because they have complementary forms of implementation. +DE: Also, I want to point out that implementation by this process that JKP was mentioning is defined as not just browsers. But also, tools. We require both. Because they have complementary forms of implementation. JKP:The big thing to speak on, we are trying to find the right balance here. NRO: I want to try to answer that. Tools are happy to ship experiments, knowing we have to change them. For example, chrome did have tools already shipping the scopes proposal even if it’s a stage 2 in our process. Shipping and breaking changes there are much less bad than normative features. 
In that, breaking the debugging experience, so we have less web compatibility constraints compared to TG1. + ## Updates from CoC committee + Presenter: Chris de Almeida (CDA) -CDA: Nothing new to report from the code of conduct committee, other than we are always happy to welcome new participants who would be interested in joining us on the code of conduct committee. +CDA: Nothing new to report from the code of conduct committee, other than we are always happy to welcome new participants who would be interested in joining us on the code of conduct committee. + ## Needs consensus PR: refactoring the process document + Presenter: Michael Ficarra (MF) - [PR](https://github.com/tc39/process-document/pull/38) @@ -246,12 +246,16 @@ MF: The first column, the left one, is targeting more of the outsider just wanti MF: There’s minor, what I call, normative changes. They are listed right here. The proposal document describes all high-level API and syntax and illustrative examples of usage have moved from stage one entrance criteria to stage two entrance criteria. I think that actually just more correctly reflects how we actually do those things. I've replaced references of ECMAScript editors and ECMA 262 with the relevant editor group and ECMA-262 or ECMA-402 so that it applies to both of the documents we work on. And I removed the test262 entrance criteria from stage 4 since it's required for stage 3 now as per the last meeting's changes. So we've gotten feedback and reviews and a couple of approvals. It's been open since basically the last meeting. -MF: There has only recently been one additional piece of feedback from DE, but it seems mostly positive. I will let him speak to that, if he wants to. Yeah. I kept the timebox really short because I am not interested in litigating this in committee. That's not a valuable use of our time. If anybody has significant concerns, we will withhold consensus and we will address it and bring it back in the next meeting. So I am looking to make this consolidation and clarification to the process document. +MF: There has only recently been one additional piece of feedback from DE, but it seems mostly positive. I will let him speak to that, if he wants to. Yeah. I kept the timebox really short because I am not interested in litigating this in committee. That's not a valuable use of our time. If anybody has significant concerns, we will withhold consensus and we will address it and bring it back in the next meeting. So I am looking to make this consolidation and clarification to the process document. RPR: +1s from CDE and DE on the queue. CDE says "Thanks to MF for doing this. Let’s capture the follow along comments from DE". DE says that his comments are already in the thread. Likewise, DLM says +1. So lots of support. + ### Conclusion + Consensus on merging the PR. + ## Down with [[VarNames]] + Presenter: Shu-yu Guo (SYG) - [PR](https://github.com/tc39/ecma262/pull/3226) @@ -263,8 +267,7 @@ SYG: So what is [[VarNames]]? In general, this is taking a step back, high-level SYG: This is disallowed: - -``` +```html @@ -277,21 +280,19 @@ SYG: So, direct sloppy eval, can introduce new vars in the outer scope. Direct s SYG: We thought that we should also disallow this because after all we already disallow var and let bindings have the same name on the global scope. If you introduce a var, why not disallow it? It seems fine. Except, this is a giant pain in the ass to implement it. -SYG: So why is it a giant pain in the ass to implement? 
We should remember the direct eval var semantics here. When you introduce a new var, that binding is deletable. This is true of direct eval introduced var in both functions and global scope.
+SYG: So why is it a giant pain in the ass to implement? We should remember the direct eval var semantics here. When you introduce a new var, that binding is deletable. This is true of direct eval introduced var in both functions and global scope.

SYG: So at the global scope, this means it adds a property to globalThis, because remember all global vars are properties on globalThis. This basically means that it adds a property to globalThis that is configurable. But, wait a second: if you actually manually do this, if you manually add a configurable binding to globalThis, you can redeclare a lexical binding with the same name. We need a way to distinguish configurable global properties on globalThis, whether they are introduced by normal property assignment like this or via direct eval var statements like this. And that is [[VarNames]].

-SYG: It’s basically a list that is on the global environment whose purpose is to distinguish what are the `var`s that are introduced via direct eval.
-Strictly speaking that is not true, it also attracts var that come by way of var statements of the global scope that don’t come from direct eval. But you don't actually need [[VarNames]] for those, because var declarations at the top level that are not from direct eval introduce non-configurable global properties. [[VarNames]] exists because you need to distinguish which of the configurable properties are actually var.
-Because not all vars are non-configurable. Hopefully that is clear.
+SYG: It’s basically a list that is on the global environment whose purpose is to distinguish which `var`s are introduced via direct eval. Strictly speaking that is not true: it also tracks vars that come by way of var statements at the global scope that don’t come from direct eval. But you don't actually need [[VarNames]] for those, because var declarations at the top level that are not from direct eval introduce non-configurable global properties. [[VarNames]] exists because you need to distinguish which of the configurable properties are actually vars. Because not all vars are non-configurable. Hopefully that is clear.

SYG: This extra name list is kind of annoying. It’s annoying to understand at the spec level. It’s also annoying to implement, because you have to reserve a bit on all global properties basically to remember whether each one is in fact a var or not. And I argue, or my claim is, that this complexity serves no one: what are the actual use cases here? You shouldn’t be using sloppy direct eval to introduce global vars anyway, and you can redeclare sloppy vars, you just have to delete them first. If you want to redeclare them, type `delete x` and do a `let x` after. That doesn’t seem that great.

SYG: So the current semantics is that we have three cases at the global scope to catch redeclaration errors. 1, it is a SyntaxError to redeclare a let or const with a like-named let or const. 2, it is a SyntaxError to redeclare a non-configurable property with a like-named let or const; this covers the normal var case, because those are non-configurable properties. Number 3, we have the extra case, which says it is a SyntaxError to declare a let or const with a name present in [[VarNames]].

-SYG: My proposal is to get rid of the third one. So basically, the upshot of this is that at the global scope, this is now allowed.
`let x will shadow. Shadow in the sense that if you do a direct eval of `var x`, and then do a `let x`, `globalThis.x` will refer to the “binding”, refer to the property introduced by the direct eval while normal `x` will refer to `let x`. And I hope this will simplify the spec a little bit. And simplify implementations.
+SYG: My proposal is to get rid of the third one. So basically, the upshot of this is that at the global scope, this is now allowed. `let x` will shadow. Shadow in the sense that if you do a direct eval of `var x`, and then do a `let x`, `globalThis.x` will refer to the “binding”, refer to the property introduced by the direct eval while normal `x` will refer to `let x`. And I hope this will simplify the spec a little bit. And simplify implementations.

-SYG: As a FYI to the other implementers, SM and JSC are conformant to the current spec and will need to change. V8 was unfortunately never conformant here, and then we wouldn’t do anything for V8 because the case -- this is currently incorrectly allowed in V8, but my claim is that we should just allow it because I don’t really see why we have this extra thing that doesn’t really serve anybody.
+SYG: As a FYI to the other implementers, SM and JSC are conformant to the current spec and will need to change. V8 was unfortunately never conformant here, and then we wouldn’t do anything for V8 because the case -- this is currently incorrectly allowed in V8, but my claim is that we should just allow it because I don’t really see why we have this extra thing that doesn’t really serve anybody.

SYG: Okay, that is what I am proposing. I will be open to questions.

@@ -301,8 +302,7 @@ SYG: Let me respond quickly to that. Would there be a difference in your mind pr

RGN: Sorry, what were the A and B there?

-SYG: The alternative is that, one, we pull this out into an actual proposal, go through the staging process, treat it as a proposal, and ask for Stage 2 this meeting. That’s a bunch of work. The alternative is to keep it as a PR, we simply do not have consensus for it now because you want to wait to evaluate.
-We keep it as a PR and we can communicate offline or whenever you are ready, you have looked at it and I will bring it back for consensus and we ask for consensus then. Is there a difference between those two alternatives in your mind?
+SYG: The alternative is that, one, we pull this out into an actual proposal, go through the staging process, treat it as a proposal, and ask for Stage 2 this meeting. That’s a bunch of work. The alternative is to keep it as a PR, we simply do not have consensus for it now because you want to wait to evaluate. We keep it as a PR and we can communicate offline or whenever you are ready, you have looked at it and I will bring it back for consensus and we ask for consensus then. Is there a difference between those two alternatives in your mind?

RGN: There is some benefit in tracking tests and implementation progress, but in practical terms if you feel strongly about introducing a delay without giving it stages, I think that would be fine.

@@ -314,15 +314,15 @@ SYG: Okay, I’m happy to pull it out into a proposal.

MM: Thank you.

-DLM: Yeah, so we had a lot at this internally, and in general, we are in support of this. Removing it from our implementation will not be difficult.
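To make the behavior change concrete, here is a minimal sketch of the semantics SYG describes above (the values and the two-script split are illustrative; the second script assumes the proposed removal of the [[VarNames]] check):

```js
// Script 1 (sloppy mode): a direct eval'd `var` creates a configurable,
// i.e. deletable, property on globalThis.
eval("var x = 1;");
Object.getOwnPropertyDescriptor(globalThis, "x").configurable; // true

// Script 2, evaluated later: per the current spec this `let` is a
// SyntaxError because "x" is recorded in [[VarNames]]; with the proposed
// change it is allowed, and the lexical binding shadows the global property.
let x = 2;
console.log(x);            // 2 (the `let` binding)
console.log(globalThis.x); // 1 (the property created by the direct eval)
```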
We do sort of share the concerns about maybe there’s some unforeseen implications, but the fact that V8 never implemented this makes me think that we’re not likely to run into problems if we were to do it, that being said, we’re fine with this going through as a staged proposal. Thank you. +DLM: Yeah, so we had a lot at this internally, and in general, we are in support of this. Removing it from our implementation will not be difficult. We do sort of share the concerns about maybe there’s some unforeseen implications, but the fact that V8 never implemented this makes me think that we’re not likely to run into problems if we were to do it, that being said, we’re fine with this going through as a staged proposal. Thank you. MLS: I support a staged proposal as well. SYG: Okay, that is the queue. I would like to change this, ask for consensus for Stage 2, given that the PR contains the spec text changes. I guess the slight difference here is that the – the request from other delegates to consider the indications, there’s nothing actionable on my side as champion, so can I request for folks who want to think through the implications that they do so before the next plenary. -RPR: I’m seeing a thumbs up from RGN in the room and thumbs up from Michael. DE? Thumbs up from you as well. Okay, that seems clear. And a thumbs up from DE as well. +RPR: I’m seeing a thumbs up from RGN in the room and thumbs up from Michael. DE? Thumbs up from you as well. Okay, that seems clear. And a thumbs up from DE as well. -RPR: NRO is asking do we need Stage 3? +RPR: NRO is asking do we need Stage 3? SYG: Sure. Yeah. Who wants to review a small PR? @@ -340,9 +340,7 @@ SYG: Say I ask for Stage 2.7 at a future meeting, entrance into 2.7 requires tes DE: After you’re at 2.7, then you may land a test. -SYG: Then for 3, at the point where test 2.62 tests are required, given there are tests - testing the opposite of this behavior currently, what do we do for tests for proposals that are - proposing backwards breaking changes? +SYG: Then for 3, at the point where test 2.62 tests are required, given there are tests testing the opposite of this behavior currently, what do we do for tests for proposals that are proposing backwards breaking changes? MLS: So change the test so that they test the exact opposite. Which no longer is a syntax error and you can create a variable with the same name as this. @@ -356,8 +354,7 @@ MLS: In 2.7. 2.7 is when you write those tests. RPR: So, Michael, just to clarify what by -- I should not have called a technicality, but it’s whenever we are asking for Stage 3 reviewers, as we always have done, really we’re asking for Stage 2.7 reviewers. Yeah. I think hopefully that answers NRO questions on the queue as well, I think. And then Kevin Gibbons. It says send a PR. It’s still on TCQ. -KG: Yeah, the test requirement doesn’t require that the tests be landed. They can just be in PR. -You can have a change to the tests that removes the old tests and adds the new ones and we can review that and then say it’s good. +KG: Yeah, the test requirement doesn’t require that the tests be landed. They can just be in PR. You can have a change to the tests that removes the old tests and adds the new ones and we can review that and then say it’s good. SYG: So to recap, the action on me is to move the PR into a proposal repo. I think I got Stage 2. I didn’t hear -- @@ -371,7 +368,7 @@ RPR: I saw a thumbs up from KG. RGN gives a +1. 
In the room there is a +1 from M SYG: And the final thing to record in the conclusion is I have asked the folks that said they would like time to consider the implications to please do so before the next plenary. And on the queue, that was MM and MLS and perhaps others. -KG: So I guess we didn’t formally discuss, but we have previously allowed proposals to advance multiple stages, and I think that achieving stages 2.7 and 3 at the same meeting should be reasonable if the requirements for both are met. Of course, that risks that you are writing tests for a proposal that will not in fact get consensus to go forward, but in this case, where the tests are reasonably small, I think that would be a reasonable thing to did if you’re willing to take that risk and want to move a little faster and present fewer times. So I guess I want to hear if anyone in the room objects to the possibility of moving a proposal like this or any other proposal to stages 2.7 and 3 at the same meeting, assuming the requirements are met prior to the meeting. +KG: So I guess we didn’t formally discuss, but we have previously allowed proposals to advance multiple stages, and I think that achieving stages 2.7 and 3 at the same meeting should be reasonable if the requirements for both are met. Of course, that risks that you are writing tests for a proposal that will not in fact get consensus to go forward, but in this case, where the tests are reasonably small, I think that would be a reasonable thing to did if you’re willing to take that risk and want to move a little faster and present fewer times. So I guess I want to hear if anyone in the room objects to the possibility of moving a proposal like this or any other proposal to stages 2.7 and 3 at the same meeting, assuming the requirements are met prior to the meeting. RPR: No objections on the queue. This all seems very reasonable. Okay, I think that’s -- that answers that. @@ -380,14 +377,15 @@ RPR: And a plus 1, positive, from SFC. Thank you. We are done within the time fr ### Speaker's Summary of Key Points ### Conclusion -Stage 2 -Reviewers: RGN, DRR -Delegates to consider implications before next plenary: MM, MLS + +Stage 2 Reviewers: RGN, DRR Delegates to consider implications before next plenary: MM, MLS + ## Allow Annex B scripts to start with --> + Presenter: Nicolò Ribaudo (NRO) - [proposal](https://github.com/tc39/ecma262/pull/3244) -- [slides]() +- Slides: See Agenda NRO: So normative PR, we have these B specific HTML comments, there is the `-->` or the opposite version, and there are some rules that are strict where they can happen, specifically for these, like, HTML comment, we require it to be at the beginning of the line or potentially precede only by the comments. So we require a newline, followed by these HTML closed comment production, which is basically white spaces. This is, line, block comments without lines in the middle and, like, the comment marker. The problem with the way this is specified is we always require a line terminator in front, so it cannot be on the first line. It has to be at least on the second line. However, all engines I tested that do support HTML comments also support these closing comments in the first line, both in eval, in script tags, and in external files. 
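A minimal sketch of the case NRO describes (behavior as reported for the engines tested; this only applies to sloppy-mode scripts covered by Annex B):

```js
--> everything after the arrow on this first line is treated as a comment
console.log("this still runs");
// The current Annex B grammar requires a line terminator before `-->`,
// so it rejects the first line above even though engines accept it.
```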
so this PR is to update the spec to match what these implementations do, to basically add a new case that says -- we already have some special cases for comments at the beginning with the hashbang comment, and so these would also add these HTML-like comment in Annex B together with the PR at the beginning of a file. @@ -395,51 +393,62 @@ NRO: I would like to ask for consensus for this change, which again is not chang RPR: DLM has support for the change. RGN is also +1. Did you want to say more? -RGN: Yeah, it strikes me as an oversight that we don’t include this in the grammar already and we are in favor of updating the spec to match the implementations. +RGN: Yeah, it strikes me as an oversight that we don’t include this in the grammar already and we are in favor of updating the spec to match the implementations. RPR: Plus one from SYG. Okay, it seems like this has consensus. Thank you, NRO. -### Speaker's Summary of Key Points +### Speaker's Summary of Key Points ### Conclusion -* Consensus on merging the PR + +- Consensus on merging the PR + ## Allow locale based ignorePunctuation default for Collator + Presenter: Frank Yung-Fong Tang (FYT) -- [proposal]() -- [slides]() +- [PR](https://github.com/tc39/ecma402/pull/833) +- Slides: See Agenda -FYT: So first of all, this -- thank you. So this is for a PR that -- an interesting one. So in ECMA 402 Intl.Collator has an option that’s called ignorePunctuation. And the -- currently in the ECMA 402 spec the default is false, but what really happens that we find out during our testing V8 increment for very -- I mean, I think from the very beginning, in -- when the locale is in the Thai locale, the default behavior is true. For all other locale, it’s currently that all the browser implement it actually is false. What happens is if we really look at what happens what happend in the last several years, that when chromium implemented that, the [[IgnorePunctuation]] depended on the location. But for the Thai language in the CLDR the default is true. So, therefore, it actually is not -- is not exactly as the specs specify in the implementation, so we have a compatibility issue only for the Thai locale. So whenever it does not piece if I the option for the Thai locale which is true in chromium. And I think we have [INAUDIBLE] -- we have talked with this too about this. We have this PR that we’re changing, and the change here basically saying that instead of treating the default as false, the default is actually locale dependent from the locale data, and I think Mozilla already look at that, and I think TG2 already have support for this, so I want to bring this to TG1 for approval. So the default value for this ignore punctuation instead of always hardcode it to false, is actually reading from the locale data. Any questions? +FYT: So first of all, this -- thank you. So this is for a PR that -- an interesting one. So in ECMA 402 Intl.Collator has an option that’s called ignorePunctuation. And the -- currently in the ECMA 402 spec the default is false, but what really happens that we find out during our testing V8 increment for very -- I mean, I think from the very beginning, in -- when the locale is in the Thai locale, the default behavior is true. For all other locale, it’s currently that all the browser implement it actually is false. What happens is if we really look at what happens what happend in the last several years, that when chromium implemented that, the [[IgnorePunctuation]] depended on the location. 
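As a rough illustration of the mismatch FYT describes (results as reported in the meeting; actual output depends on the implementation and its CLDR data):

```js
// Per the current spec text the default is always false; Chromium has been
// reading the default from locale data, so the Thai locale differs.
new Intl.Collator("en").resolvedOptions().ignorePunctuation; // false everywhere
new Intl.Collator("th").resolvedOptions().ignorePunctuation; // true in Chromium,
                                                             // false per the spec default
```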
But for the Thai language in the CLDR the default is true. So, therefore, it actually is not -- is not exactly as the specs specify in the implementation, so we have a compatibility issue only for the Thai locale. So whenever it does not specify the option, for the Thai locale the default is true in chromium. And I think we have [INAUDIBLE] -- we have talked about this too. We have this PR that we’re changing, and the change here is basically saying that instead of treating the default as false, the default is actually locale dependent, from the locale data, and I think Mozilla already looked at that, and I think TG2 already has support for this, so I want to bring this to TG1 for approval. So the default value for this ignorePunctuation, instead of always being hardcoded to false, is actually read from the locale data. Any questions?

RPR: Plus one for this change from DLM.

-SFC: Yeah, I just -- thanks, Frank, for putting this together, and I think that this style and structure for making locale data available is a good structure and makes it very explicit which parts of the algorithm are data-dependent, and that is a good -- a good precedent to set, both here and elsewhere. So this style of change is a change that I think we should be embracing here and also anywhere elsewhere it is needed in 402.
+SFC: Yeah, I just -- thanks, Frank, for putting this together, and I think that this style and structure for making locale data available is a good structure and makes it very explicit which parts of the algorithm are data-dependent, and that is a good -- a good precedent to set, both here and elsewhere. So this style of change is a change that I think we should be embracing here and also anywhere else it is needed in 402.

RPR: Thanks, SFC. USA is also plus 1 on this change, so I think you have good support.
+
### Speaker's Summary of Key Points

### Conclusion
+
FYT: Okay, so I think the conclusion for these notes is that we got consensus to merge ECMA 402 PR 833.
+
## ApplyUnicodeExtensionToTag and ResolveLocale set the result record's internal slots to non-canonical values
+
Presenter: Frank Yung-Fong Tang (FYT)

-- [proposal]()
-- [slides]()
+- [PR](https://github.com/tc39/ecma402/pull/846)
+- Slides: See Agenda

-FYT: The next one, I think, this is accidentally merged by Ben last night. He got confused and clicked the button, literally just within 12 hours. But I think we still need to bring it here. And I put it on the agenda and he got confused. There’s 846. So what happened is that inECMA 402, there are two places that we deal with uniKo extension processing. One is for most of the Intl object. The other one is for the Intl locale itself, the operations are slightly different for agood reason. But the problem is that within one of that particular part, we actually are inconsistent across that too. Which one -- the one that have the problem with that is whenever we have that value currently, we didn’t really -- we didn’t lower case and we didn’t -- we didn’t -- sorry, we didn’t do the canonicalizations for that. Therefore, let me see, where is that?
So this is called only by Intl locale object, and this operation that somehow in here, we did not do canonicalization, and what will happen is that some of the calendar value, I think, for example, Islamic civil, which have a different value that didn’t get canonicalized, so this particular PR is a normative PR. We have to put in explicit language we basically copy from the other place in ECMA 402 already where we do the canonicalization. These two operations have to be operated because the process is different, and we do exactly the right thing. So we’re replacing just strictly reading from that one to a canonicalization process that deferred to UTS35 plus the other place in ECMA 402.
+FYT: The next one, I think, was accidentally merged by Ben last night. He got confused and clicked the button, literally just within 12 hours. But I think we still need to bring it here. And I put it on the agenda and he got confused. This is 846. So what happened is that in ECMA 402, there are two places where we deal with Unicode extension processing. One is for most of the Intl objects. The other one is for the Intl Locale itself; the operations are slightly different for a good reason. But the problem is that within one particular part, we actually are inconsistent across those two. The one that has the problem is that whenever we have that value, currently we didn’t really -- we didn’t lower case and we didn’t -- we didn’t -- sorry, we didn’t do the canonicalization for it. Therefore, let me see, where is that? So this is called only by the Intl Locale object, and in this operation, somehow, we did not do canonicalization, and what will happen is that some of the calendar values, I think, for example, Islamic civil, which have a different value, didn’t get canonicalized. So this particular PR is a normative PR. We have to put in explicit language that we basically copy from the other place in ECMA 402 where we already do the canonicalization. These two operations have to stay separate because the process is different, and we do exactly the right thing. So we’re replacing just strictly reading that value with a canonicalization process that defers to UTS35, plus the other place in ECMA 402.

FYT: The impact shouldn’t be too much, except whenever the -- the U extension has a calendar with a couple of them. In particular, I think Islamic civil and maybe two or three other calendars.

FYT: Any questions or any support?

RPR: Nothing on the queue at the moment. Does FYT have support, objections? DLM has +1. Anyone else? Okay, if there’s nobody else, then it looks like this has consensus.
+
### Conclusion

-FYT: The conclusion is that we reach consensus to merge ECMA 402 PR 846, which was accidentally mentioned last night, so we’re not going to revert it. Thank you.
+FYT: The conclusion is that we reached consensus to merge ECMA 402 PR 846, which was accidentally merged last night, so we’re not going to revert it. Thank you.

RPR: Excellent. That was a good prediction to merge it already. Super fast. Thank you, Frank.
+
## Iterator sequencing for Stage 2
+
Presenter: Michael Ficarra (MF)
+
- [proposal](https://github.com/tc39/proposal-iterator-sequencing)
- [slides](https://docs.google.com/presentation/d/1KhdGLNXOxWFEg3EhDDv9P-dkLxPKBTuI0EkEUc_fdNA/edit#slide=id.p)

@@ -447,64 +456,60 @@ MF: All right, iterator sequencing for Stage 2, at the last meeting, my plans he

MF: So I have written the full spec text for both of those methods.
The one open issue is whether -- for that first solution, whether we continue with variadic `Iterator.from` as I said I would at the last meeting, or go with KG’s suggestion here of instead -- if you look at the spec text, I guess, you can see `Iterator.from` is kind of split into two sections. It’s the one case and the zero or more case. And this is what Kevin is getting at with the issue here. I can let him explain it more, but the quick summary is just that if you pass one thing to Iterator.from, there’s kind of a fast path, and you’re not getting that with Iterator.from passed many things because the work is delayed because it’s treating that as sequencing those things. So there’s that slight noticeable difference. I’m very neutral on this. I could go either way, whether we introduce a new method called `concat` that is always doing sequencing and doesn’t treat one as a special case or whether we just ignore that -- it’s not really meaningful that one is a special case here. -MF: The other point there is that there is a difference from Array.from which, if you recall, has a second parameter, which is a mapper. I don’t think that’s really related, other than I guess people can see the name of the method and kind of assume its behavior. But if you’re reading code that calls it, I think it’s going to be pretty obvious that people are doing sequencing using Iterator.from. It makes no sense to introduce a mapper to Iterator.from because unlike arrays, we don’t have to worry about doing a second pass over the iterator. So that’s the open design question. And I think that’s actually the only thing that I wanted to discuss here. Assuming we resolve that discussion of either going for this first path over the solution going with variadic form or introducing a new method, probably called Iterator.concat, assuming we can resolve that, I would like to ask for Stage 2 today and Stage 2 reviewers. So do we have any feedback on that one open issue or any other issues about the proposal? +MF: The other point there is that there is a difference from Array.from which, if you recall, has a second parameter, which is a mapper. I don’t think that’s really related, other than I guess people can see the name of the method and kind of assume its behavior. But if you’re reading code that calls it, I think it’s going to be pretty obvious that people are doing sequencing using Iterator.from. It makes no sense to introduce a mapper to Iterator.from because unlike arrays, we don’t have to worry about doing a second pass over the iterator. So that’s the open design question. And I think that’s actually the only thing that I wanted to discuss here. Assuming we resolve that discussion of either going for this first path over the solution going with variadic form or introducing a new method, probably called Iterator.concat, assuming we can resolve that, I would like to ask for Stage 2 today and Stage 2 reviewers. So do we have any feedback on that one open issue or any other issues about the proposal? -DLM: Thank you. So in general, when we discussed this, we thought it makes a lot of sense. -We weren’t certain about flat. I was wondering if there’s examples from other languages that are using something like this or if you have, like, you know, some example use cases that you could mention. +DLM: Thank you. So in general, when we discussed this, we thought it makes a lot of sense. We weren’t certain about flat. 
I was wondering if there’s examples from other languages that are using something like this or if you have, like, you know, some example use cases that you could mention. MF: Use cases of trying to sequence infinite sequences? DLM: Yeah, the infinite sequence. It wasn’t clear to us why that was useful enough to be added to the language. -MF: Yeah, any generator that produces iterators, you know, these iterators themselves wouldn’t be infinite, but the sequence would be. I don’t have specific concrete use cases in mind off the top of my head, but I can provide you with them if you want. +MF: Yeah, any generator that produces iterators, you know, these iterators themselves wouldn’t be infinite, but the sequence would be. I don’t have specific concrete use cases in mind off the top of my head, but I can provide you with them if you want. DLM: No, honestly, we’re fairly happy with this proposal, so I don’t think you really need to do extra work. I was just curious if you had something kind of off the top of your head. In general, we’ll be supportive of this. It looks good. -MF: Yeah, in my development of this, for that particular one, I had only been doing artificial things. I had been mapping over the Nats and repeating that many times, but then the natural numbers, and then you’d have an infinite sequence of iterators of lengths based on the number. But, I hadn’t done anything real with that yet. +MF: Yeah, in my development of this, for that particular one, I had only been doing artificial things. I had been mapping over the Nats and repeating that many times, but then the natural numbers, and then you’d have an infinite sequence of iterators of lengths based on the number. But, I hadn’t done anything real with that yet. -DLM: no, I think that’s fine. I don’t think it’s really worth arguing about. So, yeah, in general, we support this proposal for Stage 2. Thank you. +DLM: no, I think that’s fine. I don’t think it’s really worth arguing about. So, yeah, in general, we support this proposal for Stage 2. Thank you. MF: Okay. -SYG: I would also like to hear some concrete use cases for the infinite stuff. I think in general, if you’re going down the road of making iterators really featured and more ergonomic, all kinds of combinators, I would like to hear more concrete use cases across the board. It would be -- so the broader context here, and there’s nothing I have against this particular proposal or your other ones, it’s just that if we introduce all this stuff, and come time when people actually want to ship this to production and it turns out that there are performance issues for the ready ergonomic forms that can’t be easily optimized away, that is generally a concern where we want people -- we want the code that looks nice to also have the potential to be fast, so you can address that concern a number of ways. One is concrete use cases where we can look at something and say, while this -- given this concrete use case, ergonomic gains here are actually worth the possibility of it being not as highly optimized. Another way is you could decide that you could show that it is in fact very optimizable and that performance will not be a concern. You could demonstrate that with micro benchmarks or something like that. +SYG: I would also like to hear some concrete use cases for the infinite stuff. I think in general, if you’re going down the road of making iterators really featured and more ergonomic, all kinds of combinators, I would like to hear more concrete use cases across the board. 
It would be -- so the broader context here, and there’s nothing I have against this particular proposal or your other ones, it’s just that if we introduce all this stuff, and come time when people actually want to ship this to production and it turns out that there are performance issues for the ready ergonomic forms that can’t be easily optimized away, that is generally a concern where we want people -- we want the code that looks nice to also have the potential to be fast, so you can address that concern a number of ways. One is concrete use cases where we can look at something and say, while this -- given this concrete use case, ergonomic gains here are actually worth the possibility of it being not as highly optimized. Another way is you could decide that you could show that it is in fact very optimizable and that performance will not be a concern. You could demonstrate that with micro benchmarks or something like that. SYG: So per usual, the overall general concern here is performance for the kind of code that we want people to write and eventually to write and ship in production untranspiled. So, hopefully that concern makes sense. -LCA: I just wanted to follow up on the request of, is there other language precedent for this? And if there is, RUST has an iterator.flatten method, which does pretty much this. I was looking for some results on GitHub where this is used, and it doesn’t seem like it’s very widely used. There’s in all of GitHub 520 files that use this method, just wanted to give some context. Links: RUST Iterator.flatten: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.flatten results in GH code search: https://github.com/search?q=.flat%28+language%3ARust+&type=code +LCA: I just wanted to follow up on the request of, is there other language precedent for this? And if there is, RUST has an iterator.flatten method, which does pretty much this. I was looking for some results on GitHub where this is used, and it doesn’t seem like it’s very widely used. There’s in all of GitHub 520 files that use this method, just wanted to give some context. Links: RUST Iterator.flatten: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.flatten results in GH code search: https://github.com/search?q=.flat%28+language%3ARust+&type=code -KG: Just to say more about Iterator.from versus concat, the `from` method just seems pretty unintuitive, that if you pass it two things, it’s going to change the behavior and then also stick them together. Like, that’s not really how any of the existing `from` methods work and not how we’re expecting any future from methods to work, as far as I’m aware. Like, if we have a Set.from that’s going to take a list from its first parameter; or I’m hoping to introduce a version of Math.max that takes an iterable as its first argument and it's definitely not going to take more than one iterable. That would just be weird. So I like the consistency of the `from` methods generally taking the one thing you are coercing and then the second parameter customizes the behavior in the way that Array.from does. But using it as a variadic thing with sequencing seems like it’s going to be really inconsistent with the rest of the language. I would be much happier if there was a static or prototype concat method. +KG: Just to say more about Iterator.from versus concat, the `from` method just seems pretty unintuitive, that if you pass it two things, it’s going to change the behavior and then also stick them together. 
Like, that’s not really how any of the existing `from` methods work and not how we’re expecting any future from methods to work, as far as I’m aware. Like, if we have a Set.from that’s going to take a list from its first parameter; or I’m hoping to introduce a version of Math.max that takes an iterable as its first argument and it's definitely not going to take more than one iterable. That would just be weird. So I like the consistency of the `from` methods generally taking the one thing you are coercing and then the second parameter customizes the behavior in the way that Array.from does. But using it as a variadic thing with sequencing seems like it’s going to be really inconsistent with the rest of the language. I would be much happier if there was a static or prototype concat method. -JHD: So I put my topic that I’m neutral on variadic vs separate, but I think I’m slightly leaning towards separate because of the things KG said. The reason I’m on the queue is because if it’s called `concat`, then I will expect based on the precedent of array concat, I can give it a thing or a container of things and it will give me a container of all the things, and so if it can either do that, or have a different name and it would be fine, I just think that that -- like, I don’t think anybody has any assumptions about `@@isConcatSpreadable` or intuition about it, but -- so setting that aside, just that’s -- that is how concat works. I would like a different name if we can’t have those semantics. +JHD: So I put my topic that I’m neutral on variadic vs separate, but I think I’m slightly leaning towards separate because of the things KG said. The reason I’m on the queue is because if it’s called `concat`, then I will expect based on the precedent of array concat, I can give it a thing or a container of things and it will give me a container of all the things, and so if it can either do that, or have a different name and it would be fine, I just think that that -- like, I don’t think anybody has any assumptions about `@@isConcatSpreadable` or intuition about it, but -- so setting that aside, just that’s -- that is how concat works. I would like a different name if we can’t have those semantics. DE [on the queue]: +1 Iterator.from. MF: thank you. That was helpful feedback. For SYG, I’ll definitely get more use case information and just do as much performance work as I can. I hear really mixed feedback, it seems, on concat versus from. To directly address JHD’s feedback, I’m not interested in bringing a version of the method that has that behavior, so I will look to avoid the name concat so that we can bring a version of the method that has the behavior that I desire. There’s some -- there’s a lot of precedent, and I remember somebody was asking for precedent in another language, I think maybe DLM, there’s a lot of precedent here. You can look in the README or in the slides that I had presented for Stage 1 both in other languages and in libraries. Maybe we can look there for some naming inspiration as well. Maybe append or something. We’ll see. Yeah, and I think that’s all I plan to address for the next one. I will not be asking for Stage 2 based on this feedback. Thank you. + ### Speaker's Summary of Key Points -For SYG, MF to get more use case information on the use cases for flat, and do performance work. -Feedback was mixed on concat vs from. -JHD argued, anything named concat should use the same array flattening semantics, which MF isn’t interested in adopting, so he will investigate other names. 
-DLM and others asked about precedent in other languages; this was presented in Stage 1 previously. + +For SYG, MF to get more use case information on the use cases for flat, and do performance work. Feedback was mixed on concat vs from. JHD argued, anything named concat should use the same array flattening semantics, which MF isn’t interested in adopting, so he will investigate other names. DLM and others asked about precedent in other languages; this was presented in Stage 1 previously. + ### Conclusion + Did not seek consensus for Stage 2, due to critical feedback. The proposal remains at Stage 1. ## Iterator unique for stage 1 + Presenter: Michael Ficarra (MF) - [proposal](https://github.com/michaelficarra/proposal-iterator-unique) - [slides](https://docs.google.com/presentation/d/1381O5-rNH72MheHOIiTDfzentOn4APPps3R2MYeLzWY/edit#slide=id.p) -MF: So this is a new follow on for iterator helpers, iterator unique. The problem it’s trying to address is getting distinct values out of an iterator, getting the first value in some identified equivalence class. So if you see the example there, we’re calling `Iterator.from` on the string Mississippi, which puts it into code points using the string iterator, and then calling whatever this method is called. For the purposes of this example, it’s called distinct, it yields four values, which are the four unique code points. It happens to be the first instances of those, but that’s not really observable here. You can see in the second part of the example, I have an array of strings, each of them containing a name of one of the US states in alphabetical order. And we’re calling a distinctBy variant, which is passing a mapping function, which maps to some exemplar of the equivalence class, this being the first letter of the state, so then if you iterate that resulting iterator, you get the first alphabetically ordered state starting with that letter. +MF: So this is a new follow on for iterator helpers, iterator unique. The problem it’s trying to address is getting distinct values out of an iterator, getting the first value in some identified equivalence class. So if you see the example there, we’re calling `Iterator.from` on the string Mississippi, which puts it into code points using the string iterator, and then calling whatever this method is called. For the purposes of this example, it’s called distinct, it yields four values, which are the four unique code points. It happens to be the first instances of those, but that’s not really observable here. You can see in the second part of the example, I have an array of strings, each of them containing a name of one of the US states in alphabetical order. And we’re calling a distinctBy variant, which is passing a mapping function, which maps to some exemplar of the equivalence class, this being the first letter of the state, so then if you iterate that resulting iterator, you get the first alphabetically ordered state starting with that letter. -MF: So, yeah, this is a thing that I commonly need to do. There’s not too much design -space here, I don’t think. Something we definitely want to address at some point is -composite keys because you can’t always map to some exemplar that easily. Sometimes you want to compose two aspects of the thing and use that as the exemplar. But there’s no good -way to do that right now. 
I did have a good conversation with ACE, who I don’t think is here at this meeting, might be online, recently about his plans for composite keys going forward after records and tuples have been stalled, and that seems to be compatible with how we’re doing it via mapper in those examples. So I think a mapper is a good solution, assuming we go with those value-based composite keys in the future. I don’t want to go too far down that rabbit hole but if you have questions, I’m happy to answer. +MF: So, yeah, this is a thing that I commonly need to do. There’s not too much design space here, I don’t think. Something we definitely want to address at some point is composite keys because you can’t always map to some exemplar that easily. Sometimes you want to compose two aspects of the thing and use that as the exemplar. But there’s no good way to do that right now. I did have a good conversation with ACE, who I don’t think is here at this meeting, might be online, recently about his plans for composite keys going forward after records and tuples have been stalled, and that seems to be compatible with how we’re doing it via mapper in those examples. So I think a mapper is a good solution, assuming we go with those value-based composite keys in the future. I don’t want to go too far down that rabbit hole but if you have questions, I’m happy to answer. -MF: I did research in other languages, and some of them were using the mapper style -approach that I showed. Some of them were using a comparator where you pass in a function of two parameters and it tells you whether those things are considered to be the same or not. I think the comparator approach is actually pretty bad. It requires a much less efficient implementation. It also permits comparators that are nonsensical, whereas the mapper doesn’t, just by construction. So I strongly prefer a mapper-based approach, but I would be interested in hearing motivations for including a comparator version. You’ll see later that some languages and libraries do have that, so there must be motivation somewhere, right? And another question is for the mapper, would we pass in an index? We typically have been doing that since the original iterator helpers MVP, but I don’t really see a reason. So if anybody has a reason to do that, let me know. And there’s a lot of names for this kind of thing, some common ones, so we’d have to choose naming preferences as well. But overall, not too big of a design space here. So here you can see examples of other languages that have a uniquing method on their various sequence-like data structures. The summary here is that most of them have the mapping variant, a couple have a comparator. And you can see some of the common names distinct and unique are common, but also “nub” in Haskell for some reason. I don’t think anybody knows. There’s just some funny names there. +MF: I did research in other languages, and some of them were using the mapper style approach that I showed. Some of them were using a comparator where you pass in a function of two parameters and it tells you whether those things are considered to be the same or not. I think the comparator approach is actually pretty bad. It requires a much less efficient implementation. It also permits comparators that are nonsensical, whereas the mapper doesn’t, just by construction. So I strongly prefer a mapper-based approach, but I would be interested in hearing motivations for including a comparator version. 
You’ll see later that some languages and libraries do have that, so there must be motivation somewhere, right? And another question is for the mapper, would we pass in an index? We typically have been doing that since the original iterator helpers MVP, but I don’t really see a reason. So if anybody has a reason to do that, let me know. And there’s a lot of names for this kind of thing, some common ones, so we’d have to choose naming preferences as well. But overall, not too big of a design space here. So here you can see examples of other languages that have a uniquing method on their various sequence-like data structures. The summary here is that most of them have the mapping variant, a couple have a comparator. And you can see some of the common names distinct and unique are common, but also “nub” in Haskell for some reason. I don’t think anybody knows. There’s just some funny names there. -MF: And in JavaScript libraries, you can see that it’s pretty much the same story. Unique and distinct are both very common. Unique both with the comparator and without. They’re both very common names, and also the mapping variant is more common than the comparator variant. And, yeah, all but one have a mapping version, so I think people find value in that. So I kind of -gave clues as to my preferences earlier, but I’ll go over them again, I guess. I think a comparator style API would be really inefficient, and as I said before, problematic because you can define nonsensical notions of equality using it. So I’d be interested in hearing if anybody has a reason to have that. So I think a single method with an optional mapper is probably best, and I don’t see a reason for an index yet, so also please let me know. And I really don’t have a preference on names among the ones we saw there or whether we use two methods or one, like distinct or distinctBy. So anyway, that’s iterator uniquing, and I’m just going for a Stage 1, and I'd love to hear any feedback. +MF: And in JavaScript libraries, you can see that it’s pretty much the same story. Unique and distinct are both very common. Unique both with the comparator and without. They’re both very common names, and also the mapping variant is more common than the comparator variant. And, yeah, all but one have a mapping version, so I think people find value in that. So I kind of gave clues as to my preferences earlier, but I’ll go over them again, I guess. I think a comparator style API would be really inefficient, and as I said before, problematic because you can define nonsensical notions of equality using it. So I’d be interested in hearing if anybody has a reason to have that. So I think a single method with an optional mapper is probably best, and I don’t see a reason for an index yet, so also please let me know. And I really don’t have a preference on names among the ones we saw there or whether we use two methods or one, like distinct or distinctBy. So anyway, that’s iterator uniquing, and I’m just going for a Stage 1, and I'd love to hear any feedback. RPR: We have some queue. So starting with GCL. @@ -514,15 +519,13 @@ MF: Yeah, actually KG had raised that point to me personally earlier, and I forg RBN: I kind of wanted to ask for a little bit more clarity on when you say that a comparator is not efficient, that’s not a very detailed statement as to why it’s not efficient. I do want to say that I have for a number of years been interested in pursuing proposals that would look and address this within this language in general. 
And while a comparator that’s just a function, if you pass two values, are they equal, that’s definitely not efficient, but most languages that do have comparators or at least languages like, for example, any language that uses .NET that has comparators, they are generally a combination of an equality function that tests whether A and B are equal, but also a hash generation function that produces a hash code you can use for hashtable lookup, which is much more efficient in most cases. -MF: Yeah, if there’s a more principled way to define a comparator, which, you know, both is efficient using hashes and does not permit nonsensical notions of equality, I’m totally open to that. When I was making those statements, I was only referring to when passed a -function of 2 of the values. And that would be inefficient. +MF: Yeah, if there’s a more principled way to define a comparator, which, you know, both is efficient using hashes and does not permit nonsensical notions of equality, I’m totally open to that. When I was making those statements, I was only referring to when passed a function of 2 of the values. And that would be inefficient. RBN: Yeah, I do also want to say to the idea of nonsensical notions of equality, I think it’s valuable to say that that’s not something you would want, but also you can already have nonsensical notions of relational equality when you pass things to array.sort. So I don’t think it’s necessarily that critical to completely block something based on the idea of nonsensical equality when a principled developer that’s actually using this properly would not create a nonsensical notion of equality for it to actually matter. MF: I just want to clarify that with `Array.prototype.sort`, the implementation is allowed to sort them in any order when you define a comparator that does not work properly. So I don’t think that really compares here. -KG: Just on the specific topic of the comparator versus a mapper, I mean, we already have -groupBy. groupBy is already a mapper. It is a little different in that the mapper is used for the key of the result, although there’s some possible designs where you have a second thing that’s used for the key of the result. But I think the consistency with groupBy really suggests using a mapper. Especially if we get composite keys. +KG: Just on the specific topic of the comparator versus a mapper, I mean, we already have groupBy. groupBy is already a mapper. It is a little different in that the mapper is used for the key of the result, although there’s some possible designs where you have a second thing that’s used for the key of the result. But I think the consistency with groupBy really suggests using a mapper. Especially if we get composite keys. DLM: We discussed this internally. We also had a few concerns about this. We definitely share the concern about the potentially unbound memory use behind the scenes. That’s already been raised. I had a few other questions. One is I assume there’s no array.unique and I was wondering if you considered that as opposed to an iterator.unique. A few other things, like if we had an infinite iterator that only ever produces one decision digit value, that would be an infinite loop. So there’s plenty of ways to write infinite loops, but this might be an unexpected one. And I guess another thing would be side effects when producing values. I assume you would still get that side effect even if a value ends up being discarded and that also ends up being unexpected? 
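A minimal sketch of the mapper-based shape discussed above, for illustration only: the proposal is at Stage 1, so the name (`uniqueBy` here), the default identity mapper, and the keying semantics are placeholders rather than settled design, and `states` in the usage comments stands in for the array of US state names from the slides.

```js
// Illustration only; the proposal has not settled a name or exact semantics.
// Seen keys are tracked in a Set (SameValueZero), so memory grows with the
// number of distinct keys -- the memory concern raised in this discussion.
function* uniqueBy(iterable, mapper = (value) => value) {
  const seen = new Set();
  for (const value of iterable) {
    const key = mapper(value);
    if (!seen.has(key)) {
      seen.add(key);
      yield value; // the first value encountered for each key is the one kept
    }
  }
}

// Usage, mirroring the examples from the slides:
// [...uniqueBy("Mississippi")];              // ["M", "i", "s", "p"]
// [...uniqueBy(states, (name) => name[0])];  // first state per initial letter
```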
@@ -534,42 +537,23 @@ MLS: Yeah, SYG, I think you got it. You know, so we talk about complexity, and i MF: Can I ask for clarification on that? Are you concerned about the complexity of an implementation of this method, like, in your engine? -MLS: Well, yes. You know, if you have a large object that you’re iterating, then, you know, -you have to determine the best way to do -- especially for the distinct, you know, how are you -going to do that internally. And with -- then with a callback, which I actually am more -comfortable with a callback because then you’re leaving it on whatever you’re calling back to -do the actual determining of whether or not something should be in the resulting list. So, -yeah, I’m concerned of both memory and complexity. It just seems to be something that probably -is better served by user code than by implementation code. - -MF: I think I’m still confused by your feedback a little bit. I saw two points. One was about -implementation complexity, and I -- I assume that the implementation in your engine will be -very similar to the way that I’ve polyfilled it, and I may be wrong there, but just backing by -a set with a couple of Booleans for special cases, and you already have those in your -implementation. For the second thing, I didn’t understand why a mapper would be better than a -no mapper variant, and again, this is an assumption of mine, the only difference is that your -tree is -- which value you’re treating as the, like, set key, whether that’s the -- the value -itself or the result of the mapper. But either way, it should be the same for your implementation, so I’d like -- if you could explain that to me more. - -MLS: Well, just that we’re adding that into an API, and optimizing that would probably require -more work than it’s worthwhile to do. Yes, we just back it with a set, simple. But there’s -memory implications to that and other implications as far as complexity in terms of the other -tiers of the higher engine. +MLS: Well, yes. You know, if you have a large object that you’re iterating, then, you know, you have to determine the best way to do -- especially for the distinct, you know, how are you going to do that internally. And with -- then with a callback, which I actually am more comfortable with a callback because then you’re leaving it on whatever you’re calling back to do the actual determining of whether or not something should be in the resulting list. So, yeah, I’m concerned of both memory and complexity. It just seems to be something that probably is better served by user code than by implementation code. + +MF: I think I’m still confused by your feedback a little bit. I saw two points. One was about implementation complexity, and I -- I assume that the implementation in your engine will be very similar to the way that I’ve polyfilled it, and I may be wrong there, but just backing by a set with a couple of Booleans for special cases, and you already have those in your implementation. For the second thing, I didn’t understand why a mapper would be better than a no mapper variant, and again, this is an assumption of mine, the only difference is that your tree is -- which value you’re treating as the, like, set key, whether that’s the -- the value itself or the result of the mapper. But either way, it should be the same for your implementation, so I’d like -- if you could explain that to me more. + +MLS: Well, just that we’re adding that into an API, and optimizing that would probably require more work than it’s worthwhile to do. 
Yes, we just back it with a set, simple. But there’s memory implications to that and other implications as far as complexity in terms of the other tiers of the higher engine. RPR: Okay, on to JHD. JHD: Yeah, to echo what DLM said, I would really like to see this solved for arrays as well. I think in general, any problem we want to solve on either arrays or iterators, we kind of want to solve on both. If, you know -- based on the experience with groupBy, maybe a prototype method isn’t even worth trying ever again, although, that’s not -- we got a lot of feedback from implementations that that would be a tough sell, but not that it’s impossible. But if we don’t want to go down that road, a static method is perfectly fine. But it would be nice to have a solution that doesn’t require an iterator. -LCA: To clarify, don’t we already have a solution for array, which is just to pass it to new set -and array from the set? +LCA: To clarify, don’t we already have a solution for array, which is just to pass it to new set and array from the set? -JHD: That does not allow you to give the mapping functionality that would be -- and either way, -it does still use an iterator. So, yeah, you can write code kind of similar to the polyfill that’s suggested that would use sets or maps and work. But it would be nice to have a straightforward method. +JHD: That does not allow you to give the mapping functionality that would be -- and either way, it does still use an iterator. So, yeah, you can write code kind of similar to the polyfill that’s suggested that would use sets or maps and work. But it would be nice to have a straightforward method. LCA: Okay. -SFC: Yeah, so the groupBy proposal, and this one seemed quite similar in the types of use cases -they serve. The groupBy would allow you to perform the mapping and perform uniqueness on the mapping. It’s just that, you know, it may be less efficient because you collect, you know, all the items and then if you just want one of them, then you have to throw away the rest. So it feels like if we think that this is well motivated enough to have an additional method for it, an additional iterator method for it, it seems groupBy is the place to look for inspiration and precedent for how we designed and thing, and I would sort of focus on why is this motivated independently of groupBy and then follow along that line of reasoning. +SFC: Yeah, so the groupBy proposal, and this one seemed quite similar in the types of use cases they serve. The groupBy would allow you to perform the mapping and perform uniqueness on the mapping. It’s just that, you know, it may be less efficient because you collect, you know, all the items and then if you just want one of them, then you have to throw away the rest. So it feels like if we think that this is well motivated enough to have an additional method for it, an additional iterator method for it, it seems groupBy is the place to look for inspiration and precedent for how we designed and thing, and I would sort of focus on why is this motivated independently of groupBy and then follow along that line of reasoning. RPR: Thank you, Shane. So we’ve got one minute left. @@ -577,8 +561,7 @@ LCA: I didn’t have time to write this on the queue, but there’s an alternati MF: Thank you. That was a lot of valuable feedback. I think that given that, I still would like to ask for Stage 1 to further explore this problem space that I’ve described here. -RPR: Okay. So we have a call for consensus for Stage 1 for this proposal. Any support or objections? 
We’ve got a plus one from LCA. JHD, is that a typo or is plus exclamation -mark special? +RPR: Okay. So we have a call for consensus for Stage 1 for this proposal. Any support or objections? We’ve got a plus one from LCA. JHD, is that a typo or is plus exclamation mark special? JHD: That’s a plus one, yes. @@ -588,84 +571,60 @@ SYG: MLS raised a question of why not library? Is that in scope for your explora MF: Yes, that will always be a possible outcome of this, is that we can’t provide anything that’s more efficient or more ergonomic than what we would do anyway with a library. And in this README, I’ve already written -- I think it might be a full fidelity polyfill, only about a dozen lines or so. It’s a bit inconvenient because it uses some temporaries that have to be introduced every time it’s called. But, yeah, that’s definitely a possible outcome. -RPR: With that clarification, we also have JHX and JSC with plus one and JSC -saying they also agree with adding to array. Okay, I’ve heard no objections to Stage 1. Congratulations, you have Stage 1. - +RPR: With that clarification, we also have JHX and JSC with plus one and JSC saying they also agree with adding to array. Okay, I’ve heard no objections to Stage 1. Congratulations, you have Stage 1. ### Speaker's Summary of Key Points ### Conclusion -* Consensus on stage 1 + +- Consensus on stage 1 ## Intl.MessageFormat: I have some questions + Presenter: Eemeli Aro (EAO) - [proposal](https://github.com/tc39/proposal-intl-messageformat) - [slides](https://docs.google.com/presentation/d/1c_6VoCMJdSP59LNYEUTjCNZi8nKEw_GvMQMlvEmD91s/edit#slide=id.p) -EAO: Hi, I presented an update on Intl.MessageFormat in September and now I’m back to ask a couple of well, one, maybe two different questions about what are we really doing? -So just as a refreshers thai proposal is about adding a new formatter Intl.MessageFormat. Unlike the other existing formatters, takes in a source parameter and its constructor, from which the source of the message that it’s then allowing to be formatted. And very roughly, it has normal source cases. It’s used roughly like this. Where you have a message that is defined as a string in the MessageFormat 2 syntax that you then use to build a message format instance. And then you can call .format() or .formatToParts() on that, feeding some specific values that you want to be taken into account in the formatting. -There are further details on this as well, but I have talked about this previously a bunch. +EAO: Hi, I presented an update on Intl.MessageFormat in September and now I’m back to ask a couple of, well, one, maybe two different questions about what we are really doing. So just as a refresher, this proposal is about adding a new formatter, Intl.MessageFormat. Unlike the other existing formatters, it takes in a source parameter in its constructor, which is the source of the message that it then allows to be formatted. And very roughly, in the normal use cases, it’s used roughly like this. Where you have a message that is defined as a string in the MessageFormat 2 syntax that you then use to build a message format instance. And then you can call .format() or .formatToParts() on that, feeding some specific values that you want to be taken into account in the formatting. There are further details on this as well, but I have talked about this previously a bunch.
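For concreteness, a rough sketch of the usage EAO describes, based on the proposal repository at the time; the constructor signature (source first, then locales) and the `{$name}` placeholder form are still subject to the MessageFormat 2 tech preview and should be read as illustrative rather than final.

```js
// Rough sketch of the proposed usage; argument order and MF2 syntax details
// follow the proposal README at the time and may change during tech preview.
const source = "Hello, {$name}!"; // a message in MessageFormat 2 syntax

const mf = new Intl.MessageFormat(source, "en");

mf.format({ name: "TC39" });
// e.g. "Hello, TC39!"

mf.formatToParts({ name: "TC39" });
// e.g. an array of parts, in the spirit of the other Intl formatToParts() methods
```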
EAO: Since my last presentation, the underlying Unicode MessageFormat 2 specification is in a feature freeze and due to be released as a tech preview in hopefully about 2 months, as a part of this spring’s ICU/CLDR releases. Further than that, the Intl.MessageFormat specification itself is now almost completely described by spec language. A couple of missing pieces, but almost all there. The polyfill for all of this is updated to match the spec and again no changes to the API from what I have presented before. -EAO: But so given this situation, I actually earlier was planning on asking for Stage 2 at this meeting. But when that process started, it turns out that there is this fundamental underlying question that I think we need to answer here before we can really move forward. -When can we consider MessageFormat 2, a new domain-specific language, to be supported in JavaScript with an actual parser for that format? -So roughly speaking, as far as I am aware, there are two different viewpoints on this, and one is presented here as option A is that when the underlying specification is finalized and it has a sufficient stability guarantee that we can be sufficiently certain that it will not be changing or breaking in the future. Or then we have option B, where we consider this to not be a sufficient criteria by itself, and that we need effectively years of experience with it in order to be certain that this is really the thing and that it’s got a large amount of signals of external support and usage and it hasn’t changed for quite a long while.
-And really, as one of the viewpoints that I have highlighted from the discussion back in 2019 is from Jan, the maintainer for i18next, one of the largest internationalization and localization libraries in the ecosystem at the moment. I am just going to read this out loud. “For some time I know thought about the next version of i18next [...] but with the current uncertainty what the decision here is - I better do nothing before picking the one that gets not adopted by browsers. [...] If there is a chance that one format gets the defacto standard for web [...] my bet is the tooling will improve. Currently, as a vendor, you just have too many formats for web you need to support (and the web is only one piece of what you need to support)” -And this, I wanted to highlight because it really – I felt like it describes the sort of situation in which we have been easily for the past 4 or 5 or 8 years in terms of localization for the web, we are all stuck and waiting for something better to come along in order for anything at all to happen. -So the pitch for MessageFormat 2 is that it is angling to be that better thing. -And in that context, then, two years ago, we accepted this proposal for Stage 1, and the motivation here, I would like to call your attention to the second item here, which is to “Introduce a native parser and message formatter for MessageFormat 2, a spec currently being developed under the Unicode Consortium.” -So another way of putting this question is, is that I think we need to reaffirm whether this is a valid motivation for this proposal for us to continue working on. -And so – what this is leading up to is that for the underlying MessageFormat 2 syntax, we’re currently at a stage where it’s entering into a tech preview phase which I estimate and expect that it comes out and becomes an official finalized part within half a year or maybe a year. But, of course, this is a forward-looking statement. So do with it as you may. -And the basis always has been that whatever – that the underlying format does need to stabilize in order for us to really consider ever advancing something like Intl.MessageFormat to Stage 4. -But the question here is, effectively, this same slide from earlier: I would like us to discuss and to come to some sort of an idea for when we can consider a MessageFormat 2 parser in JavaScript? And these, as far as I know, are the two options. And I’d like to open the queue on this. +EAO: But so given this situation, I actually earlier was planning on asking for Stage 2 at this meeting. But when that process started, it turns out that there is this fundamental underlying question that I think we need to answer here before we can really move forward. When can we consider MessageFormat 2, a new domain-specific language, to be supported in JavaScript with an actual parser for that format? So roughly speaking, as far as I am aware, there are two different viewpoints on this, and one is presented here as option A is that when the underlying specification is finalized and it has a sufficient stability guarantee that we can be sufficiently certain that it will not be changing or breaking in the future. Or then we have option B, where we consider this to not be a sufficient criteria by itself, and that we need effectively years of experience with it in order to be certain that this is really the thing and that it’s got a large amount of signals of external support and usage and it hasn’t changed for quite a long while. 
+ +EAO: So before I really open the question on this, I have, hopefully, an informative view of what the history of this proposal is. It started effectively presenting something very similar to the current API, back in 2013 already. And from then, there was some discussion which culminated in action being taken finally under the TG2 in 2019. To start organizing, to put together a working group for coming up with a message format for JavaScript that works for the Web. And relatively soon after that, this subgroup was effectively reorganized and moved to go under the unicode CLDR TC because it was identified that what we are coming up isn’t good for just JavaScript, but the whole industry, and for something like localization and internationalization the right place for the home is the Unicode Consortium. And going on from there, the specific proposal that I am here championing was accepted for Stage 1 about two years ago now and then it’s had a couple of updates to it – while in Stage 1, where also back in ‘22, the message resources part was spun off. It’s now a separate Stage 1 proposal. And then I gave the update in September, and there was a follow-up incubator call from there. So the kind of idea that we really had and still have back from 2019 onwards is that we need a more capable format than existed then or exists now, and that the whole web is really suffering from the lack of any such good solution here. So the work on MessageFormat 2 has been building a syntax for Intl.MessageFormat to use. It’s a side benefit where the work started, but it is of course really important for the future of the syntax and all of its parts. And now, one important part of why we think that what we have come up is a good solution is that the message format syntax and the data model as well, of how it represents messages, is capable, as far as we know, of representing any message in any other format in addition to itself. And really, as one of the viewpoints that I have highlighted from the discussion back in 2019 is from Jan, the maintainer for i18next, one of the largest internationalization and localization libraries in the ecosystem at the moment. I am just going to read this out loud. “For some time I know thought about the next version of i18next [...] but with the current uncertainty what the decision here is - I better do nothing before picking the one that gets not adopted by browsers. [...] If there is a chance that one format gets the defacto standard for web [...] my bet is the tooling will improve. Currently, as a vendor, you just have too many formats for web you need to support (and the web is only one piece of what you need to support)” And this, I wanted to highlight because it really – I felt like it describes the sort of situation in which we have been easily for the past 4 or 5 or 8 years in terms of localization for the web, we are all stuck and waiting for something better to come along in order for anything at all to happen. So the pitch for MessageFormat 2 is that it is angling to be that better thing. And in that context, then, two years ago, we accepted this proposal for Stage 1, and the motivation here, I would like to call your attention to the second item here, which is to “Introduce a native parser and message formatter for MessageFormat 2, a spec currently being developed under the Unicode Consortium.” So another way of putting this question is, is that I think we need to reaffirm whether this is a valid motivation for this proposal for us to continue working on. 
And so – what this is leading up to is that for the underlying MessageFormat 2 syntax, we’re currently at a stage where it’s entering into a tech preview phase which I estimate and expect that it comes out and becomes an official finalized part within half a year or maybe a year. But, of course, this is a forward-looking statement. So do with it as you may. And the basis always has been that whatever – that the underlying format does need to stabilize in order for us to really consider ever advancing something like Intl.MessageFormat to Stage 4. But the question here is, effectively, this same slide from earlier: I would like us to discuss and to come to some sort of an idea for when we can consider a MessageFormat 2 parser in JavaScript? And these, as far as I know, are the two options. And I’d like to open the queue on this. DE: Thanks for the presentation. I am very excited about the MessageFormat 2 proposal, and really happy that it’s progressing like this. I think option A makes more sense. Not as a general principle for how we should approach new formats or new efforts, but as a tradeoff given how mature and well-supported this effort is, given that we have people from many different companies working together, working on at least three different implementations of formatting, precise definitions that are being put into things adopted by unicode, I think this makes sense for us to standardize relatively soon. Not today. But not several years after it ships in ICU. I think that would be too conservative. You have to remember that other ECMA402 features have actually resulted in the need to add features to ICU–things that weren’t there. And this is one where it will probably be shipping in ICU4C before it goes into ECMA402. So you know, it’s important we get all these formats right. But we also don’t have to unduly delay the development of JavaScript with excessive caution, if this thing is being developed well and it’s on a path to being broadly supported. -CP: As DE mentioned, I am also supporting option A. As one of the champions of these ten years ago, we are never going to get to do option B, in my opinion. And we already have good experience with unicode CLDR. Over the years, we haven’t had many issues, so if they are finalizing the specification, that means it’s good. And MessageFormat 2 has been developed by many people who have been champions of these proposals in this committee, people from Mozilla and so on. And I think it’s a culmination of a lot of work, it’s not something that just pops up and we decide to just upgrade to MessageFormat 2. So I am very supportive of going with option A. +CP: As DE mentioned, I am also supporting option A. As one of the champions of these ten years ago, we are never going to get to do option B, in my opinion. And we already have good experience with unicode CLDR. Over the years, we haven’t had many issues, so if they are finalizing the specification, that means it’s good. And MessageFormat 2 has been developed by many people who have been champions of these proposals in this committee, people from Mozilla and so on. And I think it’s a culmination of a lot of work, it’s not something that just pops up and we decide to just upgrade to MessageFormat 2. So I am very supportive of going with option A. MF: Yeah. I guess I’m the dissenting opinion here. The parsers that we add to JavaScript should be extremely time-tested, things that are effectively permanently relevant. In earlier discussions about this, I gave the example of JSON being there. 
JSON is, you know, ubiquitous. It’s used as a format for exchanging messages between entirely different ecosystems, it will forever be important whether we like that or not. I don’t think that we can make that claim about MessageFormat 2 yet. It will look very foolish, you know, if 8 years from now, we want to add MessageFormat 3. I am not trying to diminish this effort; I understand that it’s a significant effort defining MessageFormat 2 from a lot of interested and appropriate parties, but I think that to enshrine it in JavaScript, we have to have that confidence of permanent relevance that I don’t see we have until at least some number of years of experience with it. -SYG: I have not had good experience with CLDR. This is responding earlier to CP. CLDR, you had good experience with stability stuff. In 2023, this may have been a one-off event. But in 2023 Chrome experienced a pretty big issue with CLDR changing, and all the other browsers, with CLDR changing the space in English formatting the DATE-TIME. This is something that should have never flown at all. If someone with a passing experience with stability guarantees of the web were in the room, that does not give me a lot of confidence in CLDR as a body, that has the right – the people with the right experience in the room. -You can argue – I say this as someone who has experienced and outage due to this thing, but do not participate regularly in CLDR. So obviously, it’s a very limited view. I am happy to be corrected, but I don’t get the sense that CLDR cares about stability in the same sense that we care about it. +SYG: I have not had good experience with CLDR. This is responding to what CP said earlier about having had good experience with CLDR stability. This may have been a one-off event, but in 2023 Chrome, and all the other browsers, experienced a pretty big issue with CLDR changing the space in English date-time formatting. This is something that should have never flown at all if someone with even a passing experience with the stability guarantees of the web were in the room, and that does not give me a lot of confidence in CLDR as a body that has the right – the people with the right experience in the room. You can argue – I say this as someone who has experienced an outage due to this thing, but who does not participate regularly in CLDR. So obviously, it’s a very limited view. I am happy to be corrected, but I don’t get the sense that CLDR cares about stability in the same sense that we care about it. -SFC: I could respond to SYG, but I will stick with my queue item, which is to emphasize again what EAO said earlier, in large part the effort to design the message came out of this body, out of TC39 because it’s been a goal of developing the internationalization specification since early on that we were going to work toward a message formatting library. You know, as like the crowning achievement in the ECMA402 space, and we don’t want to just standardize any random message formatting syntax. But make sure it’s the right one. That’s why we spent so much time and effort in the working group developing the syntax that EAO is presenting here. -So I want to emphasize that the syntax is largely built for ECMA. And you know, this is not really taking a position on, you know, like does it need also be used in the wild for a certain number of years?
But a lot of the other things that we built here, APIs that we introduce into – into the Intl specification are also based on feedback that we have gotten from the CLDR and the ICU including the different features that we add all the different APIs and those are all based on – those are all new APIs added. No one has a relative formatter that looks exactly like the Intl one. A list formatter that looks exactly like the Intl one. -So these are all things introduced. And the message format is much in the same sense. No one has a message formatter like this one yet, but we are introducing it. We are designing it in order to serve the needs of the web platform users. -So yeah. That’s my angle on this. +SFC: I could respond to SYG, but I will stick with my queue item, which is to emphasize again what EAO said earlier, in large part the effort to design the message came out of this body, out of TC39 because it’s been a goal of developing the internationalization specification since early on that we were going to work toward a message formatting library. You know, as like the crowning achievement in the ECMA402 space, and we don’t want to just standardize any random message formatting syntax. But make sure it’s the right one. That’s why we spent so much time and effort in the working group developing the syntax that EAO is presenting here. So I want to emphasize that the syntax is largely built for ECMA. And you know, this is not really taking a position on, you know, like does it need also be used in the wild for a certain number of years? But a lot of the other things that we built here, APIs that we introduce into – into the Intl specification are also based on feedback that we have gotten from the CLDR and the ICU including the different features that we add all the different APIs and those are all based on – those are all new APIs added. No one has a relative formatter that looks exactly like the Intl one. A list formatter that looks exactly like the Intl one. So these are all things introduced. And the message format is much in the same sense. No one has a message formatter like this one yet, but we are introducing it. We are designing it in order to serve the needs of the web platform users. So yeah. That’s my angle on this. -KG: To second what MF said and put a slightly different spin on it, getting domain-specific languages right without usage experience is between inhumanly difficult and impossible. I accept that it’s possible in principle, but I think that experience has shown that most domain-specific languages that are not minuscule - people end up wanting them to be a little bit different after a couple of years of using them. Like, JSON doesn’t have comments. That was on purpose. But now everyone is using JSONC. It doesn’t have trailing commas, etc. And that is the oldest and most stable DSLs. And everyone is familiar with YAML, of course, and there are newer versions which are better. It's not that the first version was not carefully designed, but it’s just that it’s really hard to get these things right without using them in the wild for a long time. -And we can do our best sitting around thinking about what this ought to be, but the set of designers is several orders of magnitude smaller than the set of users and they will run into use cases and ways of using things that could not have occurred to the designers of a DSL. So the idea of standardizing a DSL without it having been used in practice, in the field, is – I don’t like that idea. 
I would be much happier if there years of widespread usage experience before it was codified. At least in JavaScript, because we can’t ever change anything. +KG: To second what MF said and put a slightly different spin on it, getting domain-specific languages right without usage experience is between inhumanly difficult and impossible. I accept that it’s possible in principle, but I think that experience has shown that most domain-specific languages that are not minuscule - people end up wanting them to be a little bit different after a couple of years of using them. Like, JSON doesn’t have comments. That was on purpose. But now everyone is using JSONC. It doesn’t have trailing commas, etc. And that is the oldest and most stable DSLs. And everyone is familiar with YAML, of course, and there are newer versions which are better. It's not that the first version was not carefully designed, but it’s just that it’s really hard to get these things right without using them in the wild for a long time. And we can do our best sitting around thinking about what this ought to be, but the set of designers is several orders of magnitude smaller than the set of users and they will run into use cases and ways of using things that could not have occurred to the designers of a DSL. So the idea of standardizing a DSL without it having been used in practice, in the field, is – I don’t like that idea. I would be much happier if there years of widespread usage experience before it was codified. At least in JavaScript, because we can’t ever change anything. -SYG: This is partly a clarifying question. So help me game out the two options here. So for option A, it seems to imply that MessageFormat 2 as a syntax is currently independently standardized. Like, ignoring JS for a second. MF2 is going to be a thing that the Unicode Consortium recommends as part of its standard body; is that correct? +SYG: This is partly a clarifying question. So help me game out the two options here. So for option A, it seems to imply that MessageFormat 2 as a syntax is currently independently standardized. Like, ignoring JS for a second. MF2 is going to be a thing that the Unicode Consortium recommends as part of its standard body; is that correct? -EAO: Yes. +EAO: Yes. -SYG: Okay. And then this option is that – sorry, and this slide is about that given that MF2 is an independent standard, when should JS implement a parser that is part of its standard library? +SYG: Okay. And then this option is that – sorry, and this slide is about that given that MF2 is an independent standard, when should JS implement a parser that is part of its standard library? -EAO: Yes. +EAO: Yes. SYG: Okay. So then help me game out what the alternative – these two options don’t seem to comprise the universe of alternatives. If it is an independent new standard where other tools and stuff will be writing to consume and produce it, you don’t include any user library that would provide an MF2 parser as an option here. I would like to hear more about why that is not even considered. -DE: Definitely user libraries are important. This is what EAO did through the whole process, maintaining a JavaScript implementation of this. Definitely the idea here is not to put it in the language today. I imagine that even if we go full steam ahead, it’s at least a year out. -Or if we go a little bit slowly, it’s two years out. The thing that doesn’t make sense to me is this “several years out” idea. +DE: Definitely user libraries are important. 
This is what EAO did through the whole process, maintaining a JavaScript implementation of this. Definitely the idea here is not to put it in the language today. I imagine that even if we go full steam ahead, it’s at least a year out. Or if we go a little bit slowly, it’s two years out. The thing that doesn’t make sense to me is this “several years out” idea. DE: My question is, what do we actually want to be occurring during that time? What kinds of validation steps? A lot of companies have already adopted a format for this sort of templating. And the work can be invested to upgrade to a different format and retrain the translators once there is an ecosystem consensus around it. We don’t have to be on the leading edge of that. But years is more conservative than “let’s see where this is going and figure out how strongly we support this”. In Bloomberg we are working on end to end testing with translators, using this format, integrating into applications. I would encourage others to do that kind of testing. I think we can do a more active strategy that falls slightly in between these options. It’s not standardizing today, but also not waiting ten years SYG: Hold on. Help me game out option A. What is the alternative – game out the user library thing… what are the concrete downsides, if MF2 is an independent standard and JS library instead of via built in. -DE: It depends which version we go with. With the version that is later in the slide deck, if we include AST but not the surface syntax, that – I see the difference between these two things as superficial. The more significant thing is the data model here. -The data model differs a bit from any other data model for templating and for doing the formatting. -So to me, it would seem silly to go with the AST but keep the surface syntax out. If we are gaming out the different options, another option would be to just have the things that we have in ECMA402 and go with, you know, people can use that library that already exists. And, you know, the fact is that it does exist. And because it’s not this second that we’re standardizing it, we can over the course of this proposal over the next couple of years, evaluate how its usage is +DE: It depends which version we go with. With the version that is later in the slide deck, if we include AST but not the surface syntax, that – I see the difference between these two things as superficial. The more significant thing is the data model here. The data model differs a bit from any other data model for templating and for doing the formatting. So to me, it would seem silly to go with the AST but keep the surface syntax out. If we are gaming out the different options, another option would be to just have the things that we have in ECMA402 and go with, you know, people can use that library that already exists. And, you know, the fact is that it does exist. And because it’s not this second that we’re standardizing it, we can over the course of this proposal over the next couple of years, evaluate how its usage is -SYG: What is the downside to using that library? I don’t understand. +SYG: What is the downside to using that library? I don’t understand. 
DE: So this is part – part of what I hope is a broader effort, which EAO presented at a previous TPAC of bringing in localization as a first-class resource on the web, so this is building the stack, this has been what the ECMA 402 group has been doing, but the whole time, we have the primitive formatters, higher level APIs, message format, then eventually we have – maybe, it’s not settled – but higher-level understandings of dictionaries of strings and formatting instructions as a web platform ability. By standardizing the individual elements we can help application developers, you know, have a very solid option in front of them that’s well-supported. This is kind of similar to why we have iterator helpers or why we have anything added to the standard library. Because it helps developers solve a need that they have. This is a commonly occurring need.

@@ -681,24 +640,19 @@ SBE: So I wanted to comment largely on the idea that something needs to be perma

RCA: Well, one of the motivations I had years ago, when I joined this group, was message format. And I was looking at slides I presented to TG 2 about the needs and about the users and the developer experience of internationalization and localization on the web. And back then, one of the libraries that EAO mentioned, i18next, had around half a million downloads weekly. I was checking those numbers. And it’s approximately 5 million weekly, if npm doesn’t lie to me. Well, numbers probably are not a motivation. But since 3 or 4 years ago when we first presented this message format working group, things evolved. But the need to internationalize and to localize the web is still the same. And as people and developers are building, they need the right tools to do this work. So I do believe that to provide those tools, it’s extremely important, and on the other hand, those numbers also show us there is some way to go at the level of battle-proven APIs or syntax for the libraries for the interoperation for use cases for message format. Here, I do believe we need to understand what this committee wants regarding the stability because I do believe that everything on the web right now with a large user base demonstrates the need, the motivation and the certain way to go.

-SFC: So to respond to some of the concerns that have been brought up, it sounds like there’s, you know, there’s two questions that I think we should treat separately and one question is, motivation of do we want to have this syntax parser in the ECMAScript standard? And I think that that is worthwhile discussions to have, to make sure we are aligned on that. The second one is, assuming we have a message format syntax like this in the language, then what is the stability policy? And I think that’s the question that EAO is trying to ask of the committee here.
-This is I think the third, maybe fourth time, that we have presented a message format to the committee. So, you know, I have not heard any big concerns about like is this well-motivated. If there are concerns about it being well motivated we should hear us and discuss them. Have incubator calls to make sure we understand better about if there’s any motivational concerns for the proposal. But that’s all.
+SFC: So to respond to some of the concerns that have been brought up, it sounds like there’s, you know, there’s two questions that I think we should treat separately and one question is, motivation of do we want to have this syntax parser in the ECMAScript standard? 
And I think that that is worthwhile discussions to have, to make sure we are aligned on that. The second one is, assuming we have a message format syntax like this in the language, then what is the stability policy? And I think that’s the question that EAO is trying to ask of the committee here. This is I think the third, maybe fourth time, that we have presented a message format to the committee. So, you know, I have not heard any big concerns about like is this well-motivated. If there are concerns about it being well motivated we should hear us and discuss them. Have incubator calls to make sure we understand better about if there’s any motivational concerns for the proposal. But that’s all. -ZB: So I just wanted to bring up a response to SYG’s comment about CLDR stability. MessageFormat in itself does not really depend on CLDR. It kind of binds together a lot of formatters that we already have and those that already exist in ECMA402 do depend on CLDR and are prone to the web capability issues that we experienced a couple times with CLDR updating and - This function is a black box, but of course that’s unrealistic to expect that everyone will respect and we know that it causes web compat issues. But MessageFormat, you can think of this more of a system that binds together a number of formatters that already exist and we control which ones are automatically included and excluded. The conversation as pointed here, is a question of are we comfortable bringing in a new DSL? And the downside of not bringing a DSL is that we will continue the JavaScript ecosystem as multiple DSLs for localization world and prevent ourselves from going up the stack and standardizing any system. Our position, as internationalization experts, we don’t think there is a sufficient justification to continue exploring DSL space. -But it’s unprecedented and for that reason EAO’s question is rational. But are we comfortable with the DSL being brought up rather than, if the dependency in CLDR is going to be somehow mitigated by the – lack of DSL here? +ZB: So I just wanted to bring up a response to SYG’s comment about CLDR stability. MessageFormat in itself does not really depend on CLDR. It kind of binds together a lot of formatters that we already have and those that already exist in ECMA402 do depend on CLDR and are prone to the web capability issues that we experienced a couple times with CLDR updating and - This function is a black box, but of course that’s unrealistic to expect that everyone will respect and we know that it causes web compat issues. But MessageFormat, you can think of this more of a system that binds together a number of formatters that already exist and we control which ones are automatically included and excluded. The conversation as pointed here, is a question of are we comfortable bringing in a new DSL? And the downside of not bringing a DSL is that we will continue the JavaScript ecosystem as multiple DSLs for localization world and prevent ourselves from going up the stack and standardizing any system. Our position, as internationalization experts, we don’t think there is a sufficient justification to continue exploring DSL space. But it’s unprecedented and for that reason EAO’s question is rational. But are we comfortable with the DSL being brought up rather than, if the dependency in CLDR is going to be somehow mitigated by the – lack of DSL here? -PFC: I have two topics back to back. 
The first one I wanted to point out, there is a precedent set by the IETF for TimeZone and calendar annotations that we used in Temporal. But you could see reasons in it to choose either option A or option B. What this precedent says about option A is – in Temporal’s case, we decided that as soon as this annotation is standardized, we are willing to use it in string parsing for Temporal. -What is relevant to option B in this precedent is that most of the annotation proposal was already used informally for decades in Java date-time strings. So there are really things that are relevant to both of these options. What we did add to the proposal that wasn’t part of the Java behavior is the calendar annotations. And that’s simply because we had a need that no other library addressed and so we needed a way to express that in a string. -It looks like there’s a reply to that. I will pause for that before moving on to the next topic. +PFC: I have two topics back to back. The first one I wanted to point out, there is a precedent set by the IETF for TimeZone and calendar annotations that we used in Temporal. But you could see reasons in it to choose either option A or option B. What this precedent says about option A is – in Temporal’s case, we decided that as soon as this annotation is standardized, we are willing to use it in string parsing for Temporal. What is relevant to option B in this precedent is that most of the annotation proposal was already used informally for decades in Java date-time strings. So there are really things that are relevant to both of these options. What we did add to the proposal that wasn’t part of the Java behavior is the calendar annotations. And that’s simply because we had a need that no other library addressed and so we needed a way to express that in a string. It looks like there’s a reply to that. I will pause for that before moving on to the next topic. SFC: Yeah, I think that’s me. So like PFC said with the IETF proposal, like, we are like – we are here to work on programming language syntax and the other DSL involved with it. That’s kind of our job. And we did this in Temporal. And those couple of corners that PFC pointed out, we do this in RegExp where we invented mini DSLs. The difference is the syntax is bigger than those. But it’s still a new DSL like the other DSLs that we invent and modify and tweak. And it’s totally not out of the realm of responsibility of a body like ours, to move forward on those types of decisions. PFC: Okay. I will move on to the other topic. So just to try and understand like where people’s deal breakers are for this, I was wondering, would anybody’s opinion change if the DSL being proposed were the DSL used by gettext, which is another message format tool? I don’t really know the background of what this proposal’s champions consider that the relevance to gettext is, but I am assuming we did not choose to use the gettext DSL because the web platform needed other things. But that DSL has been around for 30 years. As a thought experiment, I am wondering whether that changes anybody’s opinion, if what was being proposed here was the gettext DSL. Like would it be better to have a DSL with 30 years of field experience but is less relevant to the use cases of the web platform? EAO: The reason why gettext is not being considered by us at the moment is that it’s insufficient for the needs of the web. I could go into details, but they are technical details that are not relevant for the question you are actually asking. 
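For readers of the notes, here is a rough sketch of the kind of MessageFormat 2 message being debated, paired with a hypothetical `Intl.MessageFormat` call. The message body follows the Unicode MessageFormat 2 draft syntax; the constructor and `format()` shape shown are assumptions for illustration, not settled proposal text.

```js
// Sketch only: the message source uses the draft Unicode MessageFormat 2 syntax;
// the Intl.MessageFormat API shape below is an assumption, not approved spec text.
const source = `.input {$count :number}
.match $count
one {{You have {$count} new message}}
*   {{You have {$count} new messages}}`;

const mf = new Intl.MessageFormat(source, "en"); // assumed constructor signature
console.log(mf.format({ count: 3 }));            // e.g. "You have 3 new messages"
```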
- -USA: I am not sure PFC, if you meant that rhetorically or seriously. But I think based on what I have known about gettext, it is actually a great example why being too conservative isn’t exactly helpful. Gettext was designed in different interfaces and as EAO pointed out, it doesn’t quite support any of the use cases of the modern web. This API, on the other hand, was designed with the current practices across the industry and sort of the dynamic nature of the web. -Holding it for a very long time would just mean that we end up again considering something that is outdated for its time. It wouldn’t be as outdated as gettext, from the late ‘80s, I think. But similar to, perhaps, message format 1, which is in 20 years, it aged, not just because of the time, but because of how the nature of the interface – interface in general has changed + +USA: I am not sure PFC, if you meant that rhetorically or seriously. But I think based on what I have known about gettext, it is actually a great example why being too conservative isn’t exactly helpful. Gettext was designed in different interfaces and as EAO pointed out, it doesn’t quite support any of the use cases of the modern web. This API, on the other hand, was designed with the current practices across the industry and sort of the dynamic nature of the web. Holding it for a very long time would just mean that we end up again considering something that is outdated for its time. It wouldn’t be as outdated as gettext, from the late ‘80s, I think. But similar to, perhaps, message format 1, which is in 20 years, it aged, not just because of the time, but because of how the nature of the interface – interface in general has changed PFC: It wasn’t a rhetorical question, but I am not proposing that we use the gettext DSL here. I would like to know, of the people who are saying we need to have years of field experience before having a DSL, would it change your mind if the DSL being discussed here was gettext, that has the field experience but is less appropriate to our use case? What changes or what doesn’t change? @@ -706,8 +660,7 @@ KG: I think that gettext is like a pretty good example of having a DSL and then USA: One thing that I wanted to mention to respond to your point, KG, is that this has – to some extent – this incubation has already happened within various projects and maybe like open source tools or within organizations’ internal tooling. This effort is essentially bringing all of that learned experience from the various stakeholders into a single DSL. So the exact DSL is not already tested because it’s still being fleshed out. But all the motivations, the use cases that it actually addresses have been well-documented. -ZB: Thank you for the thought experiment. I understand it and I think it’s actually a very good example because it’s such a bad example. In other words, I feel like it exposes us to the core of the question here. Gettext and MessageFormat 2 have a completely different history of industry backing in the development cycle. So while gettext, yes, became a dominant localization format for a decade, while being a bad format, and a lesson learned from it is why we shouldn’t create a data model that gettext enforced, it was developed outside of mainstream industry internationalization bodies. It was not designed by Microsoft’s internationalization group in collaboration with Apple’s. MessageFormat 2 is in a different place. 
From the beginning, it has representation of all major industry players of the core internationalization industry. It benefits from the experience of unicode, but not just unicode, also Google, Mozilla, Apple and other industry players, including my current employer, Amazon, and quite frankly, my position here that I would like to offer is that there’s no one else in the world who cares and understands how localization formats are evolving than the people who are designing MessageFormat 2. -And getting a DSL in JavaScript is one of the vehicles for us to flip the script on how we can build a single localization system for the most popular programming language in the world, and then force and popularize the tooling for it. I agree with the concern, which is that if we get it wrong, we don’t want to have message format 3. But I would also advocate that by not getting a DSL in JavaScript, we are not really creating an opportunity for some other group of experts to create a MessageFormat 3 or some other syntax; we are just tragically slowing down by a decade development of a web localization system. And I think that would be unfortunate and I think that the opportunity here is very illusionary. I don’t think there’s anything else that can be done or anyone else that can produce an alternative for this group to consider two years or three or five years down the road. It’s MessageFormat 2 or, honestly, status quo. +ZB: Thank you for the thought experiment. I understand it and I think it’s actually a very good example because it’s such a bad example. In other words, I feel like it exposes us to the core of the question here. Gettext and MessageFormat 2 have a completely different history of industry backing in the development cycle. So while gettext, yes, became a dominant localization format for a decade, while being a bad format, and a lesson learned from it is why we shouldn’t create a data model that gettext enforced, it was developed outside of mainstream industry internationalization bodies. It was not designed by Microsoft’s internationalization group in collaboration with Apple’s. MessageFormat 2 is in a different place. From the beginning, it has representation of all major industry players of the core internationalization industry. It benefits from the experience of unicode, but not just unicode, also Google, Mozilla, Apple and other industry players, including my current employer, Amazon, and quite frankly, my position here that I would like to offer is that there’s no one else in the world who cares and understands how localization formats are evolving than the people who are designing MessageFormat 2. And getting a DSL in JavaScript is one of the vehicles for us to flip the script on how we can build a single localization system for the most popular programming language in the world, and then force and popularize the tooling for it. I agree with the concern, which is that if we get it wrong, we don’t want to have message format 3. But I would also advocate that by not getting a DSL in JavaScript, we are not really creating an opportunity for some other group of experts to create a MessageFormat 3 or some other syntax; we are just tragically slowing down by a decade development of a web localization system. And I think that would be unfortunate and I think that the opportunity here is very illusionary. I don’t think there’s anything else that can be done or anyone else that can produce an alternative for this group to consider two years or three or five years down the road. 
It’s MessageFormat 2 or, honestly, status quo. KG: Can you say more about why actually having it in JavaScript matters so much, if – as you say – everyone who could potentially be involved is already involved in the MF2 standardization effort? Like, if there were just a library, but all of the people used and then in a couple of years you came back and said, yeah. It turns out the syntax we are quite happy with. And then we standardize it at that point. What’s – what is the cost you are seeing with that? And also to be clear, it’s not I am saying, I think there will be necessarily some other syntax that people will come up with; it’s that I would give reasonably high odds that it turns out that there will be some changes that you would want to make with the syntax, the same way there have been charges that people want to to make to YAML and SQL and JSON. And we only figured out that once we started to use them. @@ -729,48 +682,29 @@ CDA: Okay. The queue is clear. DE: Maybe we could have an overflow topic to go into more the ideas that people have for what kinds of investigation would be useful? I feel like we don’t have a solid conclusion yet. -EAO: I was just about to also be asking for overflow time. +EAO: I was just about to also be asking for overflow time. -CDA: Okay. There is no queue to capture, but … +CDA: Okay. There is no queue to capture, but … -EAO: I would be also happy at this time to continue with my presentation, because I never got to -my second question. +EAO: I would be also happy at this time to continue with my presentation, because I never got to my second question. -CDA: Okay. Is there anything you want to dictate for the notes at this point, or does that not -make sense? +CDA: Okay. Is there anything you want to dictate for the notes at this point, or does that not make sense? EAO: It’s -- later when we actually conclude this discussion might be better. ### Speaker's Summary of Key Points - ### Conclusion -* We will have an overflow topic to cover the second half of the slides, and + +- We will have an overflow topic to cover the second half of the slides, and ## status of the IEEE Software paper about TC39 -Presenter: Mikhail Barash (MBH) -Slides: https://docs.google.com/presentation/d/1m_Yq3BFaZzSMk-eXzZ8EeLCVwU8RPvW2BYvkS-OAmoc/edit?usp=sharing - -MBH: I want to present an update for the IEEE magazine paper. So just a quick reminder, this is -where the delegates would explain with a scientific rigor how TC39 works. This is purely a -descriptive effort, not introducing anything new about the work, so there’s this reflector -issue and the detailed outline of the paper. What I would like to summarize here is that we -essentially think of having a view on the work on the committee from several perspectives, the -ECMA perspective from TC39, what the standard, how the meetings are going, how the decisions are made, the interplay between ECMA and TC39, and there’s an engine perspective, web perspective, formal perspective and the “people” perspective. And I’ll come back to this slide in a moment, but for now, I want to say that we are now seeking more input from the delegates on these three -particular topics, so: ECMAScript and the web and its relation to the standard and test- -262. A summary of the three major browser engines would be much appreciated. 
-In the “web perspective”, we would like to have more input from the delegates on the relation
-between ECMA262 and HTML and WebAssembly, also things like CG, web constraints, and we would also like to have some input on the test-262, for example, how the tests are grouped. And you can find more detailed outline as I mentioned in -- using this link. So what what we
-are looking for is just a very rough draft quality itemized list. This would be perfect. No
-need the polish anything. If you have any input on that, it would be greatly appreciated
-and I would like to thank very much those delegates who have already expressed their interest
-in participating in this effort and especially IS and MF, who have given
-a lot of input on this. So, yeah, just if you have any input on this, please either do post on
-the reflector post or send me an email directly. That’s it for this part.
-
-
-USA: Thank you, MBH, there’s nothing on the queue yet. We can give it a minute or so. Are
-you expecting any comments?
+
+Presenter: Mikhail Barash (MBH) Slides: https://docs.google.com/presentation/d/1m_Yq3BFaZzSMk-eXzZ8EeLCVwU8RPvW2BYvkS-OAmoc/edit?usp=sharing
+
+MBH: I want to present an update for the IEEE magazine paper. So just a quick reminder, this is where the delegates would explain with scientific rigor how TC39 works. This is purely a descriptive effort, not introducing anything new about the work, so there’s this reflector issue and the detailed outline of the paper. What I would like to summarize here is that we essentially think of having a view on the work on the committee from several perspectives, the ECMA perspective from TC39, what the standard is, how the meetings are going, how the decisions are made, the interplay between ECMA and TC39, and there’s an engine perspective, web perspective, formal perspective and the “people” perspective. And I’ll come back to this slide in a moment, but for now, I want to say that we are now seeking more input from the delegates on these three particular topics, so: ECMAScript and the web and its relation to the standard and test-262. A summary of the three major browser engines would be much appreciated. In the “web perspective”, we would like to have more input from the delegates on the relation between ECMA262 and HTML and WebAssembly, also things like CG, web constraints, and we would also like to have some input on the test-262, for example, how the tests are grouped. And you can find a more detailed outline as I mentioned in -- using this link. So what we are looking for is just a very rough draft quality itemized list. This would be perfect. No need to polish anything. If you have any input on that, it would be greatly appreciated and I would like to thank very much those delegates who have already expressed their interest in participating in this effort and especially IS and MF, who have given a lot of input on this. So, yeah, just if you have any input on this, please either do post on the reflector post or send me an email directly. That’s it for this part.
+
+USA: Thank you, MBH, there’s nothing on the queue yet. We can give it a minute or so. Are you expecting any comments?

MBH: Not actually. Maybe just saying here that we would very much appreciate delegates’ opinion and input on these topics.

@@ -778,54 +712,28 @@ USA: All right. Thank you, then. I suppose -- oh, yeah, there is a clarifying qu

EAD: Hello. What is the best way to get involved with this effort?

-MBH: Right, so there is this -- the link here, the detailed outline. 
So this is the outline of -the papers, there’s a bit of subsections level. If you have some comments about any -of these items, it would be nice if you can leave some comment in this document. -If you want to have more input, maybe posting it on the reflector issue would be good, or just -sending me an email. +MBH: Right, so there is this -- the link here, the detailed outline. So this is the outline of the papers, there’s a bit of subsections level. If you have some comments about any of these items, it would be nice if you can leave some comment in this document. If you want to have more input, maybe posting it on the reflector issue would be good, or just sending me an email. -USA: Thank you. I encourage everyone to reach out in private to -- if you have any comments. Moving on, I think, MBH, -you’re also next on the queue? +USA: Thank you. I encourage everyone to reach out in private to -- if you have any comments. Moving on, I think, MBH, you’re also next on the queue? ## chartering TG5 on “Experiments in Programming Language Standardization” -Presenter: Mikhail Barash (MBH) -Slides: https://docs.google.com/presentation/d/1UUCJTCztvP8kYt4pycrxQKF65sT_3eS8w7hZ8nKPFdg/edit?usp=sharing - - -MBH: Yes. So just to make it clear, this presentation now is unrelated to the previous one. So -though there is only my name here on the slide, this presentation is a result of discussions -with YSV and she participated in formulating the scope and the program of work of this new TG -that we had proposed on experiments in programming language standardization. So the scope -would be to provide a forum for the discussion, development and dissemination of research work -on the standardization of ECMAScript and related technologies. Essentially, this is an -evolution of the research group that YSV started some time ago. This would be a space for -asking questions on how formal approaches could be applied to ECMAScript standardization, so -this would bring in academia and provide them with an official venue where they can discuss -research related to JavaScript specifically and on standardization of programming languages -more broadly. This would also be a place where academia could get early feedback from delegates without bringing it to the entire committee first. We have discussed this with several of ECMA member organizations, and on the slide, you can see the ones who have expressed interest. There is also some interest from organizations who are not part of ECMA. We also had adjacent interest from the Rust community, though to make it clear, we have not spoken to them about this in the form of a task group, as I present now. Also, YSV and I had a number of discussions with other institutions during our presentation last year at the SPLASH 2023 conference, and we expect this list to grow. So the work includes four items. To summarize and present ongoing research work from the academic community on JavaScript and JSON technologies, to investigate and discuss state-of-the-art approaches to aid the development of TC39 proposals, to produce documentation on best practices from research work to ECMA262, and to produce and present tools and technologies to aid in the understanding of -- and the design of ECMA262 and adjacent technologies. I could also say that in terms of kind of focus, this TG would be most probably closely related to TG3 in the sense that it’s not about new proposals, but rather assisting documentation, approaches, technologies, and so on. 
I would also like to mention that there was a previous discussion on this during the ECMA General Assembly in December 2023. So there was a potential discussion on whether it could be a part of TC49 “Programming Languages”, but now we see most work is probably related to TC39 and it would be too broad otherwise, and also TC39 has an active research community, and we think that it would benefit from sort of an official discussion forum. -. -MBH: And the work about experiments in programming language standardization still is largely -applicable to TC39, as it was actually inspired here. So this is just some example work that -in particular the University of Bergen is interested - looking at different aspects of -standardization documents themselves, such as navigation, customizability, and -modularization of the standard, extracting sublanguage standards, slicing a standard document, specification verification, consistency checking and executability, implementability, so this is where we can see more relevance to the work done by KAIST. Things like refactoring the standard and so on. The work is planned to be done in GitHub. We would have monthly Zoom calls. We would have external input, from users, other technical committees and standardization organizations, industries, academia, we would use TC39’s Alternative Copyright Notice, and just as a first item on what we already have been working on is that we are summarizing the state-of-the-art in programming language standardization. So I would like to request consensus on chartering the new Task Group with this scope and program of work. Thank you. + +Presenter: Mikhail Barash (MBH) Slides: https://docs.google.com/presentation/d/1UUCJTCztvP8kYt4pycrxQKF65sT_3eS8w7hZ8nKPFdg/edit?usp=sharing + +MBH: Yes. So just to make it clear, this presentation now is unrelated to the previous one. So though there is only my name here on the slide, this presentation is a result of discussions with YSV and she participated in formulating the scope and the program of work of this new TG that we had proposed on experiments in programming language standardization. So the scope would be to provide a forum for the discussion, development and dissemination of research work on the standardization of ECMAScript and related technologies. Essentially, this is an evolution of the research group that YSV started some time ago. This would be a space for asking questions on how formal approaches could be applied to ECMAScript standardization, so this would bring in academia and provide them with an official venue where they can discuss research related to JavaScript specifically and on standardization of programming languages more broadly. This would also be a place where academia could get early feedback from delegates without bringing it to the entire committee first. We have discussed this with several of ECMA member organizations, and on the slide, you can see the ones who have expressed interest. There is also some interest from organizations who are not part of ECMA. We also had adjacent interest from the Rust community, though to make it clear, we have not spoken to them about this in the form of a task group, as I present now. Also, YSV and I had a number of discussions with other institutions during our presentation last year at the SPLASH 2023 conference, and we expect this list to grow. So the work includes four items. 
To summarize and present ongoing research work from the academic community on JavaScript and JSON technologies, to investigate and discuss state-of-the-art approaches to aid the development of TC39 proposals, to produce documentation on best practices from research work to ECMA262, and to produce and present tools and technologies to aid in the understanding of -- and the design of ECMA262 and adjacent technologies. I could also say that in terms of kind of focus, this TG would be most probably closely related to TG3 in the sense that it’s not about new proposals, but rather assisting documentation, approaches, technologies, and so on. I would also like to mention that there was a previous discussion on this during the ECMA General Assembly in December 2023. So there was a potential discussion on whether it could be a part of TC49 “Programming Languages”, but now we see most work is probably related to TC39 and it would be too broad otherwise, and also TC39 has an active research community, and we think that it would benefit from sort of an official discussion forum. +. MBH: And the work about experiments in programming language standardization still is largely applicable to TC39, as it was actually inspired here. So this is just some example work that in particular the University of Bergen is interested - looking at different aspects of standardization documents themselves, such as navigation, customizability, and modularization of the standard, extracting sublanguage standards, slicing a standard document, specification verification, consistency checking and executability, implementability, so this is where we can see more relevance to the work done by KAIST. Things like refactoring the standard and so on. The work is planned to be done in GitHub. We would have monthly Zoom calls. We would have external input, from users, other technical committees and standardization organizations, industries, academia, we would use TC39’s Alternative Copyright Notice, and just as a first item on what we already have been working on is that we are summarizing the state-of-the-art in programming language standardization. So I would like to request consensus on chartering the new Task Group with this scope and program of work. Thank you. JHD: When I originally reviewed the slides, they had a much broader scope. It seemed like it was general for programming languages. I appreciate that the scope seems to have been largely narrowed down to EMCAScript. I think that’s appropriate for this ask. But like you mentioned the Rust language community – I think it’s great if learnings can be applied across programming languages, obviously. But, like, Rust has nothing to do with TC39, so, like, I don’t see that -- other than communicating results back and forth or coordinating research or something, like, I don’t see any reason for the TG you’re asking for to interact with rust or any other programming language that’s not this one. -MBH: Right, so in the current program of work, this is -- yeah, as I mentioned, this is narrowed -down to TC39. And I think it’s a good place to start with this, and then -- yeah. +MBH: Right, so in the current program of work, this is -- yeah, as I mentioned, this is narrowed down to TC39. And I think it’s a good place to start with this, and then -- yeah. JHD: So I mean, I’m glad to see is the scope narrowed, because from the slides, my suggestion was going to be that it should be its own TC, it’s not specific to this committee. 
But it sounds like the approach has been refined so that doesn’t really apply. So I guess that was sort of the question was, then, why not go for a TC to generally cover that topic? Why have a TG specifically under TC39 and only do this research in that narrow scope? -MBH: Well, I think here it’s -- it’s good to start with something sort of more manageable and we -think that there is significant experience and significant expertise, I would say, in PL -standardization in TC39, so maybe it’s a good idea to start within TC39, and then expand if we -see that this is not broad enough for the kind of work we are trying to do. +MBH: Well, I think here it’s -- it’s good to start with something sort of more manageable and we think that there is significant experience and significant expertise, I would say, in PL standardization in TC39, so maybe it’s a good idea to start within TC39, and then expand if we see that this is not broad enough for the kind of work we are trying to do. JHD: Okay. My other point, which I put separate, but it basically overlaps, I think if you could go a slide or two farther down, yeah, here. So it sounds like some of this stuff overlaps with what the editors generally do. I mean, I’m not going to speak for editor group, but I would assume that editors would generally welcome research on how to do things differently. Do you see this TG as making recommendations, or would the results of some things perhaps dictate the way that the spec is organized and so on. -MBH: I guess it would be more on the level of a recommendation and -- and as maybe sort of trying to report on best practices, and then it’s of course, up to the editors on whether they would -follow this recommendation or not. +MBH: I guess it would be more on the level of a recommendation and -- and as maybe sort of trying to report on best practices, and then it’s of course, up to the editors on whether they would follow this recommendation or not. JHD: Thank you. @@ -841,28 +749,17 @@ MF [on the queue]: +1, love the new program of work CDA: Yeah, just wanted to say plus one to chartering this task group. Similar to some of the comments JHD was making I was somewhat skeptical when I saw the original slides. It sounded like a whole new technical committee, but the new scope and program of work, it’s a lot easier to see how it falls within the scope of TC39 itself, so definitely would be great to see this. Thanks. -USA: And in the end, I -- there’s me on the queue, and I wanted to sort of plus one the point -that DE made. This is very necessary work. If we aim to serve the needs of all the -committee, as well as the greater community, so thank you. We also have SFC, who says plus -one. And he’s endorsed the formation of this new task group. So for all -the positive comments and nothing -- +USA: And in the end, I -- there’s me on the queue, and I wanted to sort of plus one the point that DE made. This is very necessary work. If we aim to serve the needs of all the committee, as well as the greater community, so thank you. We also have SFC, who says plus one. And he’s endorsed the formation of this new task group. So for all the positive comments and nothing -- JHD: I just wanted to add, instead of just typing this out on the queue, so I definitely support this as a TG with the narrowed scope. My original reaction to the previous version of the slides was because I thought that it would be valuable for the scope to be broader and more impact would be able to be had if it was actually a full TC. 
If this, however, is just kind of a start and eventually it could move to a full TC and have broader impact beyond just this one language, that would be great as well. Thanks.

-USA: Thank you, JHD. So I think we have consensus on chartering this new TG. One thing, the
-first order of business would be to pick chairs for this TG. So there is something for that.
-I think it was part of the presentation, or was it not?
+USA: Thank you, JHD. So I think we have consensus on chartering this new TG. One thing, the first order of business would be to pick chairs for this TG. So there is something for that. I think it was part of the presentation, or was it not?

-MBH: Right, so the original idea was that it would be YSV and myself, but YSV’s focus is
-currently on other projects at Mozilla, so for the time being, it would primarily be
-myself who would coordinate the work of this new TG, if the committee is fine with that.
+MBH: Right, so the original idea was that it would be YSV and myself, but YSV’s focus is currently on other projects at Mozilla, so for the time being, it would primarily be myself who would coordinate the work of this new TG, if the committee is fine with that.

-You would -- my understanding is that this is okay with respect to ECMA process. The code convener or vice convener is recommended but optional role. I think SHN could clarify. One thing, however, if you’re looking for somebody to be co-convener, this would be the
-right place to ask.
+You would -- my understanding is that this is okay with respect to ECMA process. The co-convener or vice convener is a recommended but optional role. I think SHN could clarify. One thing, however, if you’re looking for somebody to be co-convener, this would be the right place to ask.

-SHN: Just to comment, that would be fine. If you get a co-convener, it would be even
-better so you have the support you need, Mikhail, and maybe next week when you put a little
-news together regarding this TG5. Thank you.
+SHN: Just to comment, that would be fine. If you get a co-convener, it would be even better so you have the support you need, Mikhail, and maybe next week when you put a little news together regarding this TG5. Thank you.

USA: RGN also expresses plus one from all of Agoric for chartering this TG. So, yeah, I think we’re all in agreement. Thank you, MBH.

MBH: Thank you very much.

MBH: Right. The decision has been made to charter a new task group on experiments in programming language standardization. The convener of the group is MBH.

-SYG: Can I ask a meta question about charting these TGs? Are there -- so I know that charters in other groups like W3C are -- they have an expiry and then you come back and recharter to see if the scope is still relevant. Do we do that here, or once we charter something, is it in
-perpetuity until something else happens?
+SYG: Can I ask a meta question about chartering these TGs? Are there -- so I know that charters in other groups like W3C are -- they have an expiry and then you come back and recharter to see if the scope is still relevant. Do we do that here, or once we charter something, is it in perpetuity until something else happens?

-DE: Unless otherwise stated, it would be in perpetuity by default. I think this is one reason
-that I’ve encouraged things like sort of Snaps or WinterTC to work within ECMA, because I
-don’t want to work through the bureaucracy of rechartering and arguing all the time. 
I think
-that the mitigation that we adopted for source maps was we would have status updates in the --
-in each plenary. It could be very short. And so if we find that the activity ceases, we can,
-you know, discuss it easily at the status update. It should be very easy to raise at that
-point.
+DE: Unless otherwise stated, it would be in perpetuity by default. I think this is one reason that I’ve encouraged things like sort of Snaps or WinterTC to work within ECMA, because I don’t want to work through the bureaucracy of rechartering and arguing all the time. I think that the mitigation that we adopted for source maps was we would have status updates in the -- in each plenary. It could be very short. And so if we find that the activity ceases, we can, you know, discuss it easily at the status update. It should be very easy to raise at that point.

-SHN: I confirm that would be correct, DE. All the TCs or TGs that we have, we do not
-have expiry dates. We always work and they stay open.
+SHN: I confirm that would be correct, DE. All the TCs or TGs that we have, we do not have expiry dates. We always work and they stay open.

-SYG: Thank you. That sounds good. I think for most of ECMA’s activities where there’s, like --
-where the participants have some incentive to also participate in perpetuity, by which I
-mean, like, browsers, like products are going to continue to exist, there’s no expiry on those,
-as long as they exist, people will continue to participate. That makes sense to me. For TGs
-where they have a more academic slant, I know that from personal experience, things tend to
-fizzle out once some cohort of folks graduate, and I’m wondering if that’s a concern here.
+SYG: Thank you. That sounds good. I think for most of ECMA’s activities where there’s, like -- where the participants have some incentive to also participate in perpetuity, by which I mean, like, browsers, like products are going to continue to exist, there’s no expiry on those, as long as they exist, people will continue to participate. That makes sense to me. For TGs where they have a more academic slant, I know that from personal experience, things tend to fizzle out once some cohort of folks graduate, and I’m wondering if that’s a concern here.

SHN: So this TG, it would most likely result in a technical report. MBH, if you’re on the call, can you confirm that.

MBH: Yes, yes, this is correct.

-SHN: And often times we always maintain these technical reports. They may have different
-additions. So you may have changes in your committee, but it could continue. So, SYG, your
-concern is fair, you but I think that other TGs we have do continue with the work. They do
-sometimes have a pause, they do get revived. We watch them. We never let them sit completely
-idle.
+SHN: And often times we always maintain these technical reports. They may have different editions. So you may have changes in your committee, but it could continue. So, SYG, your concern is fair, but I think that other TGs we have do continue with the work. They do sometimes have a pause, they do get revived. We watch them. We never let them sit completely idle.
+
+USA: All right. We are over time. Thank you so much, MBH and everyone else, for participating and for the organization. Thank you.

-USA: All right. We are over time. Thank you so much, MBH everyone and else for
-participating in an organization. Thank you.
### Speaker's Summary of Key Points

-* A new Task Group TG5 will be chartered. 
-* The scope of the task group is to provide a forum for discussion, development and dissemination of research work on standardization of ECMAScript and related technologies.
+
+- A new Task Group TG5 will be chartered.
+- The scope of the task group is to provide a forum for discussion, development and dissemination of research work on standardization of ECMAScript and related technologies.

### Conclusion

-* TG5 is convened with the stated scope/program.
+
+- TG5 is convened with the stated scope/program.

## ArrayBuffer transfer for stage 4
+
Presenter: Jordan Harband (JHD)
+
- [proposal](https://github.com/tc39/proposal-arraybuffer-transfer)
-- [slides]()
+- Slides: See Agenda
- [PR](https://github.com/tc39/ecma262/pull/3175)

-USA: Before you start, we are going to the break a little bit during this, so, yeah, we
-can just add -- but, yeah, please go ahead.
+USA: Before you start, we are going to the break a little bit during this, so, yeah, we can just add -- but, yeah, please go ahead.

JHD: Okay. Hi, everyone. I’m Jordan. I’m presenting the ArrayBuffer transfer proposal. Basically, I’m asking today for Stage 4, tests have all been merged. It has been shipped in Chrome for quite a while, since version 114. It is unflagged in Firefox version 122. I believe it is merged into WebKit, but not yet released. That may have changed in the last couple weeks, but I haven’t heard anything yet. SerenityOS has it, and there are polyfills that are published, and there’s a specification PR that is partially editor approved. So essentially, what I would ask for is conditional Stage 4 on the rest of the editors approving that PR.

JHD: SYG is a co-champion on this as well, and has reviewed this as well. So this is adding an accessor to the ArrayBuffer prototype, detached, that tells you true or false, whether that buffer is detached or not; a transfer method, which transfers the buffer to a new one, and that preserves resizability or growability; and a transferToFixedLength method, which does not preserve it. It produces an ArrayBuffer that is not resizable or growable. That’s all. Hopefully I can ask for conditional Stage 4 for the remaining editor on the PR.

-USA: There’s DLM who says it’s for Stage 4. That’s it. Anybody else would like to add a
-vote of support? Okay, RGN mentions they support Stage 4. All right.
+USA: There’s DLM who says it’s for Stage 4. That’s it. Anybody else would like to add a vote of support? Okay, RGN mentions they support Stage 4. All right.

JHD: Cool. Thank you.

USA: Well, congratulations, Stage 4.

### Conclusion
+
Reaches stage 4
+
## Set Methods bugfix and update
+
Presenter: Kevin Gibbons (KG)
+
- [PR](https://github.com/tc39/proposal-set-methods/pull/105)

KG: So I have an extremely brief update for the set methods proposal. The first and most important thing is that it’s implemented in Chrome, and will be shipping to stable in I believe a couple of weeks. It’s also shipping in Safari since 17, I want to say, and I know Firefox has an implementation underway. I’m not sure what the status there is. But so my hope is to go for Stage 4 at the next meeting.

-KG: But unfortunately, one small order of business that we need to take care of first is that I noticed a normative issue with a specification. You may recall that the way sets are implemented internally is as a list of items that never
-shrinks and only ever grows. This is editorially convenient, but obviously not what actually is done in practice. 
So there were a couple of places where the spec was failing to account for the markers that are left behind when you delete an element, when computing the size of a set. Those need to be skipped over, and were not being skipped over. This is technically an observable change to the behavior, and since this is Stage 3, I need to ask for consensus for it. But it’s definitely a bugfix and no one would ever have done the other thing. So I am hoping for a rubber stamp on landing this fix to the proposal and then hopefully next meeting I will ask for Stage 4. Can I have consensus on this pull request? +KG: But unfortunately, one small order of business that we need to take care of first is that I noticed a normative issue with a specification. You may recall that the way sets are implemented internally is as a list of items that never shrinks and only ever grows. This is editorially convenient, but obviously not what actually is done in practice. So there were a couple of places where the spec was failing to account for the markers that are left behind when you delete an element, when computing the size of a set. Those need to be skipped over, and were not being skipped over. This is technically an observable change to the behavior, and since this is Stage 3, I need to ask for consensus for it. But it’s definitely a bugfix and no one would ever have done the other thing. So I am hoping for a rubber stamp on landing this fix to the proposal and then hopefully next meeting I will ask for Stage 4. Can I have consensus on this pull request? CDA: Plus one from JHD. You also have a plus one from RGN. KG: Okay, that’s all I got. Thanks very much. + ### Speaker's Summary of Key Points -* Set methods is shipping in Safari and will soon be shipping in Chrome, hoping to ask for stage 4 next meeting -* There is a normative bugfix to the spec + +- Set methods is shipping in Safari and will soon be shipping in Chrome, hoping to ask for stage 4 next meeting +- There is a normative bugfix to the spec + ### Conclusion -* Consensus on the bugfix https://github.com/tc39/proposal-set-methods/pull/105 + +- Consensus on the bugfix https://github.com/tc39/proposal-set-methods/pull/105 + ## Temporal update & proposed normative changes + Presenter: Philip Chimento (PFC) + - [proposal](https://github.com/tc39/proposal-temporal) - [slides](https://ptomato.name/talks/tc39-2024-02/) @@ -977,9 +869,9 @@ PFC: (Slide 9) Finally, this question was raised by ABL as part of the Firefox i PFC: We tested this on the difference between every date in a 4-year Gregiorian calendar leap year cycle with every other date and it affects 0.2% of those results. And the ones it does affect, they all make sense in our opinion. So we think this is a good change to make. PFC: Are there any questions so far? - + CDA: Nothing on the queue. No. DLM? - + DLM: Sorry. It was end of message. + 1 for normative changes. CDA: Okay. Thanks. @@ -988,15 +880,16 @@ PFC: I would like to move on to requesting consensus formally then on the PRs. CDA: You have a + 1 DE and the previous one from DLM. -PFC: Sounds good. I have a proposed conclusion for the note here, which I will copy into the notes in a moment. And I would like to say thanks. And looking forward to hopefully not presenting something like this again. +PFC: Sounds good. I have a proposed conclusion for the note here, which I will copy into the notes in a moment. And I would like to say thanks. And looking forward to hopefully not presenting something like this again. 
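For readers of the notes, a minimal sketch (not taken from the slides) of the kind of end-of-month date difference that PR #2759 adjusts, assuming the Stage 3 Temporal API as currently proposed:

```js
// Illustrative only: Jan 31 -> Feb 29 is one of the end-of-month edge cases
// whose month/day balancing PR #2759 changes. The exact duration reported for
// cases like this depends on the normative change approved above.
const start = Temporal.PlainDate.from("2024-01-31");
const end = Temporal.PlainDate.from("2024-02-29");
console.log(start.until(end, { largestUnit: "month" }).toString());
```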
### Speaker's summary of key points / Conclusion -Consensus was reached on a normative change making week numbering optional for calendars (PR #2756), a normative change to fix a bug in duration rounding (PR #2758), a normative change to return more useful results from date differences in end-of-month edge cases (#2759), and a normative change to fix a bug in ZonedDateTime differences (PR #2760) reached consensus. -The proposal champions are not aware of any further outstanding bugs, and expect implementations to be able to use the proposal as a stable base in the coming weeks with only editorial changes expected. Follow the checklist in #2628 for updates. +Consensus was reached on a normative change making week numbering optional for calendars (PR #2756), a normative change to fix a bug in duration rounding (PR #2758), a normative change to return more useful results from date differences in end-of-month edge cases (#2759), and a normative change to fix a bug in ZonedDateTime differences (PR #2760) reached consensus. The proposal champions are not aware of any further outstanding bugs, and expect implementations to be able to use the proposal as a stable base in the coming weeks with only editorial changes expected. Follow the checklist in #2628 for updates. ## Micro and mini waits in JS for stage 1 + Presenter: Shu-yu Guo (SYG) + - [proposal](https://github.com/syg/proposal-atomics-microwait) - [slides](https://docs.google.com/presentation/d/1XYn7rgPw-WYAnH3X10GboMwn8xLH3oUx2TlDe6f6lSY/edit) @@ -1016,7 +909,7 @@ SYG: Instead, I am proposing that we do some harm reduction. It’s not that peo SYG: I have some ideas on how to clamp the timeout. I think this space needs more exploration. But the idea is that, if you are on the main thread, in an agen who cannot block field is true, for the agent specifier, agent record, whatever it’s called, the time out value will be clamped to an implementation-defined positive finite number. Currently, if you cannot block and you pass it – any timeout, I guess, it will just throw. But this is saying, even if you pass it a number, `Infinity`, if you pass that option, it will clamp it to some implementation-defined value under the hood. Possibly zero, which means it will return time value immediately. But the idea there is that, it lets us await – it lets you sleep the thread, for short amounts of time, hopefully not long enough to harm the responsiveness goal of why we made this policy choice to begin with. -SYG: You might be wondering, who is this for? Basically one user for this and that’s Emscripten. This is not hypothetical. This is the tool that people use to compile to WebAssembly. WebAssembly, like JS, doesn’t do anything by itself. You need to embed it somewhere and then pass it in API, so it can do things like paint on the screen, whatever. JS acts as the embedder for WASM. And then the web embeds JS. That’s the layering there. Emscripten has basically an emulation layer written in JS that emulates LibC and kernel syscalls. So that when you run your C++ program, if you need to do thing, like block the thread to wait, it call us out to the JS library that emscripten, if you compile something with pthreads, it’s not something you can do to do, we have SharedArrayBuffers, shared memories, web workers… if you’re compiling a C++ program, pthread mutexes, they are – implemented with futexes. Futexes basically look like atomic style wait, except we can’t use atomic style wait on the main thread. And this emulates this with a busy loop. This is really bad. 
Because it is inefficient for power. It’s just not desirable. And because we can’t relax blocking on the main thread, the hope is, maybe we do some harm reduction instead.
+SYG: You might be wondering, who is this for? Basically there is one user for this, and that’s Emscripten. This is not hypothetical. This is the tool that people use to compile to WebAssembly. WebAssembly, like JS, doesn’t do anything by itself. You need to embed it somewhere and then pass it an API, so it can do things like paint on the screen, whatever. JS acts as the embedder for WASM. And then the web embeds JS. That’s the layering there. Emscripten has basically an emulation layer written in JS that emulates libc and kernel syscalls. So when you run your C++ program, if you need to do things like block the thread to wait, it calls out to the JS library that Emscripten provides. If you compile something with pthreads (which is something you can do, since we have SharedArrayBuffers, shared memory, and web workers), then pthread mutexes are implemented with futexes. Futexes basically look like `Atomics.wait`, except we can’t use `Atomics.wait` on the main thread. So this is emulated with a busy loop. This is really bad, because it is inefficient for power. It’s just not desirable. And because we can’t relax blocking on the main thread, the hope is, maybe we do some harm reduction instead.

SYG: I also independently heard this is useful for game engines with frame budgets. If you are writing a game engine or a rendering library, and you want to hit a 60 FPS frame budget, you can’t afford to wait on a lock for very long anyway, so they would also use something like this.

@@ -1024,17 +917,17 @@ SYG: The open question is how do you actually clamp the value? For HTML, tying t

SYG: So asking for Stage 1. The problem statement is a two-fold problem statement. To be clear, depending on the queue, it is possible to split these two up. These are pretty different. But I think there’s enough connection to try to look at them together. The problem statement is to explore solutions to, one, the performance of locking code fast paths, and two, to improve the status quo of busy-loop workarounds in code that decides to block the main thread anyway. This is for the contention slow path for trying to put something to sleep.

-SYG: And that’s the presentation. I will go to the queue.
+SYG: And that’s the presentation. I will go to the queue.

-MLS: It seems like the pseudocode that you presented, the last slide, that you think it’s okay, but I am going to say you think it’s okay. You think that people will block no matter what and we need to do something to reduce the harm. Even if we clamp it, we’re still going to block the main thread while it’s waiting to try to get the lock. Correct?
+MLS: It seems like, with the pseudocode that you presented on the last slide, you think it’s okay, but I am going to say you think it’s okay. You think that people will block no matter what and we need to do something to reduce the harm. Even if we clamp it, we’re still going to block the main thread while it’s waiting to try to get the lock. Correct?

-SYG: That is correct.
+SYG: That is correct.

MLS: Okay. So I have problems with the fact that we’re putting in a footgun, effectively, for a JavaScript developer who could maliciously block the VM from doing some other useful work. And I don’t have a good answer for that. It sounds like you don’t necessarily have a good answer for that either, except for clamping it. 
Again, you probably need to allow some waiting, if we will allow people to write their own mutexes. That’s my first question. -SYG: sorry. The question is… ? +SYG: sorry. The question is… ? -MLS: I think you answered yes. It does allow people to block on the main thread for at least a short period of time. +MLS: I think you answered yes. It does allow people to block on the main thread for at least a short period of time. SYG: Yeah. As for – so I agree that it definitely could block – if you call this, it could block the VM from doing unuseful things. To clarify, this is – you’re responding to the clamping thing only. Is there any – that’s not to the microwaits? @@ -1044,52 +937,50 @@ SYG: Yes. MLS: If I give a huge spin value, and I put in a loop in my case spin count is pretty large, I am effectively blocking for a percentage of time, even though I go back to block and microwait it again. -SYG: You have written a infinite loop – +SYG: You have written a infinite loop – MLS: Sure. Agreed. Agreed. If the loop doesn’t do anything and just – which can be done today. Certainly. -SYG: Right. +SYG: Right. MLS: So I have a bit of a problem with that. My higher-level question, which is the second thing in the queue, is doesn’t it make sense to provide some optimized lock primitives active instead of giving people building blocks? And the reason I say that, we have found at least on our team, that the people trying to implement locks don't fully understand what is going on in the CPU, you know, you were describing something going on. When you have multiple cues, you have cache line thrashing. Cache lines need to move between cores. Or if you have snoopy caches, activities between CPUs and not making any progress. MLS: And you’re right. Different CPU respond in different ways, depending on the architecture. Does it make sense to have a higher level goal, some lock primitives and those can be implemented efficiently on different devices so we are not – somewhat at the whim – we provide the appropriate higher-level tools so we are not at the whim of the JavaScript programmer that is detrimental. SYG: My answer so that is definitely. That is part of the structs proposal. We want high-level mutexs or a condition instead of a futex-like API. But I don’t see that it’s mutually exclusive if the higher-level things are for JS authors. Whereas, this is basically for JS as a compiled target for C in the case of Emscripten. This is a very scoped use case. I completely agree that your average JS programmer is just not served by this. And they really should not use it because you really shouldn’t be writing your own mutex. In the case of Emscripten, there is no choice. Even if we give them a JS mutex, they can’t use that – they are compiling e.g. pthreads mutex's implementation that bottoms out at a syscall like futex or os_unfair_lock or whatever it is on macOS. -. -MLS: So I have some problems with what you said. So the use case is at this point Emscripten, a single use case. However, we put in the standard and people – +. MLS: So I have some problems with what you said. So the use case is at this point Emscripten, a single use case. However, we put in the standard and people – SYG: We already put futexes in the standard. MLS: I understand that. I am just thinking if we should have – if we come up with something that has less detrimental issues for where it’s used. I think you understand where I am coming from -SYG: I agree with you. +SYG: I agree with you. + +MLS: I know you do. -MLS: I know you do. 
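To make the pattern under discussion concrete, here is a rough, hypothetical sketch of the spin-then-block acquire loop that the slides' pseudocode describes. `Atomics.microwait` is the proposed, not yet standardized, CPU-level yield hint, and the iteration-count argument is a guess at the proposed shape rather than a settled signature; the clamped main-thread timeout for `Atomics.wait` is likewise only proposed; `SPIN_COUNT` and the 1 ms timeout are arbitrary illustrative values.

```js
// Hypothetical sketch only: today Atomics.microwait does not exist, and
// Atomics.wait throws on the main thread instead of clamping the timeout.
const UNLOCKED = 0;
const LOCKED = 1;
const SPIN_COUNT = 128; // arbitrary; real code would tune or adapt this

function acquire(i32 /* Int32Array on a SharedArrayBuffer */, index) {
  for (let spins = 0; ; spins++) {
    // Fast path: try to take the lock without yielding to the OS at all.
    if (Atomics.compareExchange(i32, index, UNLOCKED, LOCKED) === UNLOCKED) {
      return;
    }
    if (spins < SPIN_COUNT) {
      // Proposed micro wait: a CPU-level pause hint, not an OS sleep.
      Atomics.microwait(spins);
    } else {
      // Contention slow path: sleep until notified, or until the timeout.
      // On the main thread the proposal would clamp this timeout rather than throw.
      Atomics.wait(i32, index, LOCKED, 1);
      spins = 0;
    }
  }
}
```

A matching `release` would store `UNLOCKED` and call `Atomics.notify` on the same index.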
- RBN: This is to MLS’s question about or talking about lock primitives instead of building blocks. If you want to write efficient lock-free algorithms you can’t depend only on synchronization and locking primitives. These algorithms often need spin waiting to be efficient even if you are using these other lock primitives or other synchronization primitives. It’s a building block that is essential to the algorithms. It needs to exist on its own unless it could be wrapped up into another thing, such as a SpinWait primitive, which is basically what this is doing. -MLS: I have already talked about it. I agree that, yeah, there are lock free primitives that do require spin waiting. It’s more difficult to make those as primitives. We can deliver. There are some possibilities, but it depends on the API that we’d want to surface. +MLS: I have already talked about it. I agree that, yeah, there are lock free primitives that do require spin waiting. It’s more difficult to make those as primitives. We can deliver. There are some possibilities, but it depends on the API that we’d want to surface. DLM: We discussed this internally, I think we have casual support for atomic stop microwait. We want that explored more. What I am less concerned about is the clamp time out. If you could expand a little bit on what you’re trying to accomplish with that. Is it easier to make code from a worker thread to a main thread. Trying to make it so that you could put the main thread to sleep. Because that doesn’t seem like a good idea. -SYG: It’s a non-goal to enable you to write the same code that can both work on the main thread and the worker. I think that is architecturally not possible and we should never encourage that anyway. -Because as Firefox has raised when we added SharedArrayBuffers, it’s not a matter of responsiveness, not not blocking the main thread, architecturally, Gecko at the time, but the main thread was responsible for doing some like IO stuff, in support – like in the service of worker threads. So if you blocked the main thread, you could also deadlock your worker threads unknowingly because of how certain Web APIs were implemented under the hood. And for that reason, code on the web, it is not possible for them to ignore the fact that they are on the main thread versus a worker thread. This is not a goal of this. The goal of this is pretty narrow. The goal of `clampTimeout` is basically just to not spin the CPU and heat up your battery in the – in the locking emulation layer in Emscripten. It’s a very narrow goal. At least that’s the starting point. There is like I said, on this slide, it’s possible. It's useful for game engines with a frame budget, where they need exclusive access to some piece of SharedArrayBuffer stuff. A worker has calculated and put the results in. And they are okay with just dropping a frame, if they can’t get the lock in time. +SYG: It’s a non-goal to enable you to write the same code that can both work on the main thread and the worker. I think that is architecturally not possible and we should never encourage that anyway. Because as Firefox has raised when we added SharedArrayBuffers, it’s not a matter of responsiveness, not not blocking the main thread, architecturally, Gecko at the time, but the main thread was responsible for doing some like IO stuff, in support – like in the service of worker threads. So if you blocked the main thread, you could also deadlock your worker threads unknowingly because of how certain Web APIs were implemented under the hood. 
And for that reason, for code on the web, it is not possible to ignore the fact that it is on the main thread versus a worker thread. This is not a goal of this. The goal of this is pretty narrow. The goal of `clampTimeout` is basically just to not spin the CPU and heat up your battery in the locking emulation layer in Emscripten. It’s a very narrow goal. At least that’s the starting point. There is, like I said on this slide, the possibility that it’s useful for game engines with a frame budget, where they need exclusive access to some piece of SharedArrayBuffer stuff. A worker has calculated and put the results in. And they are okay with just dropping a frame, if they can’t get the lock in time.

-SYG: But it’s not meant to block the thread, the main thread in any real sense of block. And the max here is probably like 16 milliseconds. We are thinking 60FPS. The max that might be acceptable to block with a `clampTimeout` clamped to 16 ms. Possibly shorter.
+SYG: But it’s not meant to block the main thread in any real sense of blocking. And the max here is probably like 16 milliseconds. We are thinking 60 FPS, so the max that might be acceptable to block with a `clampTimeout` is probably clamped to 16 ms. Possibly shorter.

-DLM: Just make sure I understand then, clamp time out is basically a longer wait than what you get with microwait.
+DLM: Just to make sure I understand then, the clamped timeout is basically a longer wait than what you get with microwait.

SYG: Right. So yes. It’s mini – I am calling it mini. Micro versus mini wait. Architecturally, at the CPU versus OS level, there are two yields. One is a CPU hint that says, I will not yield the core, but I yield these other shared units – it has to do with the reading from memory that MLS was referring to. That is a CPU yield. The OS yield says, put my thread to sleep and wake me up later. That could be very long or relatively short. It’s longer than a few nanoseconds, longer than 100 CPU cycles, and it requires the OS to actually put the thread to sleep. The clamped timeout on `Atomics.wait` is an OS-level yield. And it’s totally fair that you might look at this later and say, we think it’s too much, it’s not useful, it’s too narrow, or whatever. Architecturally undesirable. That’s possible. These are things I want to explore during Stage 1. And it’s possible we come back and promote only the microwait part to Stage 2 and drop the clamping, or split out the proposal.

-DLM: Sure. Thank you. That answers my question. We support investigating this for Stage 1.
+DLM: Sure. Thank you. That answers my question. We support investigating this for Stage 1.

-SYG: Okay. I think the queue is empty. In which ways, I will ask again for Stage 1 for both right now. But I did telegraph that we might split or progress independently after the investigation.
+SYG: Okay. I think the queue is empty. In which case, I will ask again for Stage 1 for both right now. But I did telegraph that we might split or progress independently after the investigation.

-CDA: I support Stage 1.
+CDA: I support Stage 1.

KG: For the record, is exploring alternative locking primitives in scope?

-SYG: Not in this. That is explicitly in the structs proposal. So like I guess, yes, for TC39. No for this.
+SYG: Not in this. That is explicitly in the structs proposal. So like I guess, yes, for TC39. No for this.

CDA: All right. Any more voices of support for Stage 1? Hearing nothing. But also hearing no objections. I believe you have Stage 1. 
Do you want to dictate any key points, conclusions, or summary for the notes? @@ -1098,28 +989,32 @@ SYG:I don’t think so. I think that’s fine. CDA: Ok, thank you. We have another + 1 from RBN. ### Speaker's summary of key points / Conclusion -* Proposal reached Stage 1 + +- Proposal reached Stage 1 + ## Promise.try for stage 2 + Presenter: Jordan Harband (JHD) + - [proposal](https://github.com/tc39/proposal-promise-try) JHD: So back way, way, way back in the year 2016, I proposed `Promise.try`. Essentially, I have a function, it might be synchronous or asynchronous; it might return a promise or not; throw an exception or not. I don’t want to care, but I want to wrap it in a promise. So if it is a throw, it does the right thing. JHD: The easy to remember way to do this is here. `Promise.resolve` and in the `.then`, you run your function. This works fine. But it runs asynchronously when you don’t want that the more modern way is immediately invoked async function where you await the function, and then that has the actually desired semantics. -JHD: At the time that I made this presentation, the general response was that in order for it to qualify for Stage 2, more sort of convincing would need to be done about the utility. Given that you could just use the await syntax and solve the problem. At the time, the userland versions of this proposal were sort of mildly used and I think the general expectation was that or the general hope at least was that nobody would need this functionality and the syntax would be sufficient. However, since that time, two years later, this was published and 46 million downloads a week, this graph over time continues to go up and modulo a few NPM data hiccups are steady at 45 million a week. Pretty clearly there is some use here. It is one package, and it’s from one author. And that author certainly has a lot of other packages, and perhaps the usages are just because they stuck it in one of other packages that also has a lot of usage. I still find myself having a need for this functionality. The workaround I do is this `new Promise` snippet here. Where in the `new Promise` executor, I pass the function to `resolve`. It works. It’s ugly. It’s easy to mess up. And it is confusing when people first encounter it. Given I had looked up this package, and saw that it was now actually used very, very heavily, I thought I would bring it back and either ask for Stage 2 or get a fresh response from the committee about what the committee would see it to have to qualify for Stage 2. That’s all. +JHD: At the time that I made this presentation, the general response was that in order for it to qualify for Stage 2, more sort of convincing would need to be done about the utility. Given that you could just use the await syntax and solve the problem. At the time, the userland versions of this proposal were sort of mildly used and I think the general expectation was that or the general hope at least was that nobody would need this functionality and the syntax would be sufficient. However, since that time, two years later, this was published and 46 million downloads a week, this graph over time continues to go up and modulo a few NPM data hiccups are steady at 45 million a week. Pretty clearly there is some use here. It is one package, and it’s from one author. And that author certainly has a lot of other packages, and perhaps the usages are just because they stuck it in one of other packages that also has a lot of usage. I still find myself having a need for this functionality. 
The workaround I do is this `new Promise` snippet here. Where in the `new Promise` executor, I pass the function to `resolve`. It works. It’s ugly. It’s easy to mess up. And it is confusing when people first encounter it. Given I had looked up this package, and saw that it was now actually used very, very heavily, I thought I would bring it back and either ask for Stage 2 or get a fresh response from the committee about what the committee would see it to have to qualify for Stage 2. That’s all. -NCL: Yeah. I put – the topic, maybe it needs to be a clarifying question. And so I assume that this would show why this is needed. But from – is it needed in this case? Couldn’t it just do `value = await synchronousfunction`? Will it just work the same? +NCL: Yeah. I put – the topic, maybe it needs to be a clarifying question. And so I assume that this would show why this is needed. But from – is it needed in this case? Couldn’t it just do `value = await synchronousfunction`? Will it just work the same? -JHD: In this specific snippet, yes, it would work the same, given that top-level await exists. If the goal is to have a promise, on which you want to use the promise combinators. It’s not as trivial as doing that. There will always be use cases where you want the promise and not the awaited value and that’s where `Promise.try` comes into play. +JHD: In this specific snippet, yes, it would work the same, given that top-level await exists. If the goal is to have a promise, on which you want to use the promise combinators. It’s not as trivial as doing that. There will always be use cases where you want the promise and not the awaited value and that’s where `Promise.try` comes into play. -CDA: The queue is empty. +CDA: The queue is empty. JHD: If the queue is empty, I would like to ask for Stage 2. The spec is very straightforward. That is the entirety of it, recently rebased on the latest version of the spec. -CDA: KG? +CDA: KG? -KG: Yeah. It’s just – I still don’t understand why this comes up. Can you say more about why this comes up? +KG: Yeah. It’s just – I still don’t understand why this comes up. Can you say more about why this comes up? JHD: Yeah. In particular, when I am authoring an API where the consumer passes me a callback function, and then as I indicated when I was responding to NCL, that I want to essentially produce a promise, from that, and then do further work on it. Maybe I want to race it against something else, or `Promise.all` and further work with it, at some point I am going to get to a place where await syntax handles the rest of it. But the initial set up requires in many of my use cases, working with Promises. I do have a work around, so this isn’t a new capability. This is just kind of a more straightforward and elegant way to represent that thing that I find myself having to do now and then. @@ -1127,65 +1022,65 @@ KG: Specifically, when having an API, it takes an async function that you want u JHD: It’s a function that I don’t know its color.So yes, the user is passing a function, and I don’t know for sure if it’s sync or async or throws or not throws, and I don’t want to have to care. I want to just be given a Promise, and do my best to handle it. -KG: Okay. +KG: Okay. + +JRL: Another case https://github.com/ampproject/amphtml/pull/15107 we had for this, in AMP we had asynchronous error handling. If the error were wrapped in a promise, we handled everything properly. Because of the code change we had a promise.resolve and invoked a function. 
That function itself threw an error synchronously. Because we didn’t have synchronous catch handling, only async catch on the promise chain, we failed to properly handle this case. My developers didn’t understand the difference. They thought it would be caught by the promise and handled in the asynchronous promise handing. We forced everyone, if you had a Promise.resolve(fn()), it had to use our version of promise.try and that fixed the bugs for us. We were able to rely on asynchronous error handling from then on. -JRL: Another case https://github.com/ampproject/amphtml/pull/15107 we had for this, in AMP we had asynchronous error handling. If the error were wrapped in a promise, we handled everything properly. Because of the code change we had a promise.resolve and invoked a function. That function itself threw an error synchronously. Because we didn’t have synchronous catch handling, only async catch on the promise chain, we failed to properly handle this case. My developers didn’t understand the difference. They thought it would be caught by the promise and handled in the asynchronous promise handing. We forced everyone, if you had a Promise.resolve(fn()), it had to use our version of promise.try and that fixed the bugs for us. We were able to rely on asynchronous error handling from then on. +CDA: Shu? -CDA: Shu? +SYG: I guess I will do the queue item first. Any name concerns? Given try seems like a common word, given there’s other – these packages? -SYG: I guess I will do the queue item first. Any name concerns? Given try seems like a common word, given there’s other – these packages? +JHD: I mean, given it’s a static method, I don’t anticipate any issue. But I am also not – this seems like the most reasonable name to me, but I am not really attached to it. If it turns out there’s an issue, I am happy to dive back into the web compat mines and figure out something that makes sense and be less risky. I feel this is very low risk personally. -JHD: I mean, given it’s a static method, I don’t anticipate any issue. But I am also not – this seems like the most reasonable name to me, but I am not really attached to it. If it turns out there’s an issue, I am happy to dive back into the web compat mines and figure out something that makes sense and be less risky. I feel this is very low risk personally. +SYG: The queue is empty, if I can ask a follow-up question from before. I am still trying to understand it. Use case is you have an API, that takes a callback, that might throw synchronously, but is otherwise async? -SYG: The queue is empty, if I can ask a follow-up question from before. I am still trying to understand it. Use case is you have an API, that takes a callback, that might throw synchronously, but is otherwise async? - JHD: Yeah. I mean, it could be. It sort of depends – basically, I can’t trust the user will pass the thing that I think they should (in general, ever). So I would hope – this is a case where sometimes the user needs to do it synchronously, or asynchronous. And obviously, if they are using an async function, it will never throw. That’s ideal when async; I always return a promise. But I can’t rely on the user to exactly match that, and there’s not like an `AsyncFunction.is` predicate - there isn’t a reasonable or meaningful way to check that. Generally what I do is just invoke the function and catch whatever they throw or return and go from there. -SYG: So the two responses, I still would like to read a concrete thing where this happens. 
Second, if that is the use case, this – if I understand correctly, if that is the use case, you would – the recommendation would be that you always use promise.try, never promise.resolve, when the value producing thing is a call back function. But then this API seems rigid if you can only have it do nonlinear functions. What if you need to pass to generate the value? +SYG: So the two responses, I still would like to read a concrete thing where this happens. Second, if that is the use case, this – if I understand correctly, if that is the use case, you would – the recommendation would be that you always use promise.try, never promise.resolve, when the value producing thing is a call back function. But then this API seems rigid if you can only have it do nonlinear functions. What if you need to pass to generate the value? -JHD: I mean, that’s the same as anything else that takes a callback. Right? You bind it or you wrap in an arrow function. And I can show you something concrete. So this is a test framework I maintain called `tape`. And I have to find where in the file it is. But essentially, a tape callback and – I don’t know if this is the right code. I pulled it off the top of my head. A test call back can be synchronous and asynchronous. If it returns a promise, I want that promise to control the result of the test. But if it throws, I still need to catch it. If it returns something synchronously that is not a promise, I ignore it. I just searched for promise, I may be at the wrong spot in the file. I think that’s right. +JHD: I mean, that’s the same as anything else that takes a callback. Right? You bind it or you wrap in an arrow function. And I can show you something concrete. So this is a test framework I maintain called `tape`. And I have to find where in the file it is. But essentially, a tape callback and – I don’t know if this is the right code. I pulled it off the top of my head. A test call back can be synchronous and asynchronous. If it returns a promise, I want that promise to control the result of the test. But if it throws, I still need to catch it. If it returns something synchronously that is not a promise, I ignore it. I just searched for promise, I may be at the wrong spot in the file. I think that’s right. SYG: in this case, callback returns a thenable JHD: Yes. The – again, I pulled this off the top of my head so it may not be the concrete thing you’re looking for. I will get back to you regardless of whether or not that delays Stage 2. But essentially when I call the user callback and throw an exception, I do want to be able to catch it. Again, this may not be a good example. I probably should not have tried to pull one up off the cuff. But I can certainly dig up more of them. - -SYG: Like the cost is so low here, I am not really – I don’t have anything against it. I just don’t quite understand – like the kind of thing you might use it for sounds reasonable, but then there’s things in matrix saying, that’s bad and you shouldn’t do that and I just don’t know. -JHD: Sure. I mean, there’s lots of editorializing about the goodness and badness of patterns, either way. The other sort of nice effect of this proposal is there remain only two more ways or two more reasons I know of that you would use – sorry, three more ways, reasons I know of that you use `new Promise` in modern code. One is, of course, the one that always is: to wrap a callback taking API. Another this proposal replaces. 
And then the third is inverting a promise, you know, turning failure into success and success into failure. And I like the idea of, in general, of getting rid of all – you know, all but the one reason for using `new Promise` because it’s confusing when users run across a new promise construction in code. That’s not the primary motivation for the proposal. It’s just a nice benefit, I think, of landing this pattern that people do use. +SYG: Like the cost is so low here, I am not really – I don’t have anything against it. I just don’t quite understand – like the kind of thing you might use it for sounds reasonable, but then there’s things in matrix saying, that’s bad and you shouldn’t do that and I just don’t know. -CDA: There’s a clarifying question from GCL. +JHD: Sure. I mean, there’s lots of editorializing about the goodness and badness of patterns, either way. The other sort of nice effect of this proposal is there remain only two more ways or two more reasons I know of that you would use – sorry, three more ways, reasons I know of that you use `new Promise` in modern code. One is, of course, the one that always is: to wrap a callback taking API. Another this proposal replaces. And then the third is inverting a promise, you know, turning failure into success and success into failure. And I like the idea of, in general, of getting rid of all – you know, all but the one reason for using `new Promise` because it’s confusing when users run across a new promise construction in code. That’s not the primary motivation for the proposal. It’s just a nice benefit, I think, of landing this pattern that people do use. -GCL: Sorry, it’s not a question. Shu for an example, that’s more concrete, if you imagine AsyncIterator.prototype.map, that makes a map function. If the map function throws an error, you don’t want to like bubble that up. You want to put that into the async machinery happening. Imagine writing that function in JavaScript instead of spec text, you take the if abrupt reject or whatever logic and that is the same thing that promise.try is. +CDA: There’s a clarifying question from GCL. + +GCL: Sorry, it’s not a question. Shu for an example, that’s more concrete, if you imagine AsyncIterator.prototype.map, that makes a map function. If the map function throws an error, you don’t want to like bubble that up. You want to put that into the async machinery happening. Imagine writing that function in JavaScript instead of spec text, you take the if abrupt reject or whatever logic and that is the same thing that promise.try is. SYG: Mechanically, I understand. For the iterator case, like that – that is a case where you already produced a promise, then you must always produce a promise. I am looking for a concrete example where you have a single value – like an API that takes the single value that returns, always returns a promise. And that single value is wrapped, like produced by that factory function, that might throw synchronously, like – you use promise.try inside the in sync loop things -GCL: The map function you pass to that can be synchronous. And you could accidentally throw in that – you might not intend to. Or maybe you do. If you throw inside that function, that shouldn’t break AsyncIterator. +GCL: The map function you pass to that can be synchronous. And you could accidentally throw in that – you might not intend to. Or maybe you do. If you throw inside that function, that shouldn’t break AsyncIterator. -SYG: But okay. But in the loop case, you already have a promise. 
You would be on the lookout for – these exceptions and then reject that promise. Or like you couldn’t be like chaining these individual rejected promises. Right? +SYG: But okay. But in the loop case, you already have a promise. You would be on the lookout for – these exceptions and then reject that promise. Or like you couldn’t be like chaining these individual rejected promises. Right? JHD: So I mean my general usage here would just be to kick myself into the world of promises and then I would do stuff that I would hope people find normal after that. However, I mean when I have an array of things, `Promise.all`, I am obviously doing some unique things to respond to individual rejections differently than the aggregate and so on. So there certainly may be some use cases, even though I don’t have any off the top of my head SYG: [inaudible] on paper, on paper it seems like yes, it could come up. I would like to just read some things so I can better see. Even one of your, perhaps, polyfills where you have written to reach for this and it wasn’t there and you had to get the work around. I wanted to see some instances of that and then I can better understand -JHD: I am happy to provide those regardless. It would be – it would be nice if I could advance this to Stage 2 and then providing those could be a Stage 3 requirement. But I am also comfortable if you would rather not advance until you have those things. As I said, in the beginning, right, I would love to have Stage 2, but I am also fine with getting an updated response for what I need to get to Stage 2. +JHD: I am happy to provide those regardless. It would be – it would be nice if I could advance this to Stage 2 and then providing those could be a Stage 3 requirement. But I am also comfortable if you would rather not advance until you have those things. As I said, in the beginning, right, I would love to have Stage 2, but I am also fine with getting an updated response for what I need to get to Stage 2. -SYG: I think you could, given that there doesn't seem to be a lot of design room here, I think you could, for folks who have expressed reservation, work with us off-line and go straight to 2.7 or 3. If you want to get to Stage 2 today, and not 2.7, like are you planning on more design work for promise.try? +SYG: I think you could, given that there doesn't seem to be a lot of design room here, I think you could, for folks who have expressed reservation, work with us off-line and go straight to 2.7 or 3. If you want to get to Stage 2 today, and not 2.7, like are you planning on more design work for promise.try? -JHD: No. That’s a fair point. I really just haven’t learned to think in the new stage yet. Realistically jumping to 2.7 makes sense to me. But it has already been 7 years, 8 years since I last presented so I was trying to get a little bit of advancement at a time. Yeah. Certainly, if Stage 2 is fine, I am also content to go to 2.7 and write the test for this before tomorrow. So we have a – +JHD: No. That’s a fair point. I really just haven’t learned to think in the new stage yet. Realistically jumping to 2.7 makes sense to me. But it has already been 7 years, 8 years since I last presented so I was trying to get a little bit of advancement at a time. Yeah. Certainly, if Stage 2 is fine, I am also content to go to 2.7 and write the test for this before tomorrow. So we have a – LCA: The tests are for stage 3 -JHD: Yeah. I mean, I can have those prepared. +JHD: Yeah. I mean, I can have those prepared. 
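As a reference point for this part of the discussion, here is a minimal sketch of the behaviour being described, built on the `new Promise` workaround JHD mentions; `tryCall` and `runTest` are illustrative names, not part of the proposal.

```js
// Minimal sketch: call fn synchronously, turn a synchronous throw into a
// rejection, and otherwise adopt whatever fn returns (promise or plain value).
const tryCall = (fn) => new Promise((resolve) => resolve(fn()));

// Example: a test runner that accepts either a sync or an async callback.
function runTest(userCallback) {
  return tryCall(userCallback)
    .then((result) => ({ ok: true, result }))
    .catch((error) => ({ ok: false, error })); // synchronous throws land here too
}
```

Whether `Promise.try` should also forward extra arguments to the callback is the open question about argument passing raised in the queue.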
-CDA: We have a couple of people in the queue. JRL?
+CDA: We have a couple of people in the queue. JRL?

-JRL: Shu’s comment this could only be a unary function, we’re addressing that bypassing the argument that is are called to AsyncVariable.run. And async? Shot.run to the call back. So you can handle any function with any number of parameters?
+JRL: Shu’s comment was that this could only be a unary function; we’re addressing that by passing the arguments that are given to AsyncVariable.run and AsyncSnapshot.run along to the callback. So you can handle any function with any number of parameters?

-CDA: KG?
+CDA: KG?

-KG: I am just curious if this comes up for other people in the room very much? Just because I have never run into this pattern. And I know that JHD and I write code in a pretty different style. So I am not surprised that it does come up for JHD. The fact that it doesn’t come up for me isn’t very informative. So I would just like to hear if this is a thing that other people run into very much. It’s a pretty simple bit of sugar. Our bar for adding sugar should be low, but it should be this comes up pretty often, not like this comes up for 2 or 3 cases.
+KG: I am just curious if this comes up for other people in the room very much? Just because I have never run into this pattern. And I know that JHD and I write code in a pretty different style. So I am not surprised that it does come up for JHD. The fact that it doesn’t come up for me isn’t very informative. So I would just like to hear if this is a thing that other people run into very much. It’s a pretty simple bit of sugar. Our bar for adding sugar should be low, but the bar should be that this comes up pretty often, not that this comes up in 2 or 3 cases.

-CDA: Did you want to answer the question?
+CDA: Did you want to answer the question?

JHD: I agree with what Kevin is saying. I think it comes up pretty often in my experience. I don’t expect that to match everybody else’s experience, and I am happy when more people have experienced a problem than fewer. So if anyone else – I mean, I think Justin has shared an example, so it’s at least not just me. But certainly the more the merrier.

@@ -1209,15 +1104,14 @@ JHD: So then I would still like to ask for Stage 2. Yes?

CDA: Do we have any explicit support for Stage 2 for promise.try?

-SYG: What I was proposing was show me, and we can go to 2 or 2.7 directly, although maybe 2 --
-I’m not comfortable with 2 right now until I see some concrete line. Basically any line so I
-can just read what the -- where you would use it. And then which could be, like, you know, now
-or tomorrow or something, and then it seems like we wouldn’t -- I want to think more about
-passing the arguments in addition to nothing.
+SYG: What I was proposing was show me, and we can go to 2 or 2.7 directly, although maybe 2 -- I’m not comfortable with 2 right now until I see some concrete line. Basically any line so I can just read what the -- where you would use it. And then which could be, like, you know, now or tomorrow or something, and then it seems like we wouldn’t -- I want to think more about passing the arguments in addition to nothing.
JHD: Okay, so then to reiterate for the notes, I’m -- `Promise.try` is not advancing right now, however, I will provide Shu and, you know, the wider committee with some more concrete -- with some concrete examples to evaluate, and if Shu, since he’s the only one who has expressed this concretely, but if anyone else as well lets me know that they’re comfortable, then I may come back later in the meeting assuming there’s time, and request Stage 2 or possibly 2.7, even though I didn’t put that on the agenda in advance. Thank you. + ## Joint iteration for stage 2 + Presenter: Michael Ficarra (MF) + - [proposal](https://github.com/tc39/proposal-joint-iteration) - [slides](https://docs.google.com/presentation/d/150lLig7sNDr173RVzRgNRKrrUBKzKPImrHjGnfrETzQ/edit) @@ -1247,11 +1141,11 @@ MF: All right. Would anybody like to provide any guidance on those two biggest q CDA: Nothing on the queue. -GCL: Yeah, I just wanted to say that I agree that we should not try to do that in this proposal. We should stick to what you’ve suggested +GCL: Yeah, I just wanted to say that I agree that we should not try to do that in this proposal. We should stick to what you’ve suggested MF: yeah, just iteration. -JHD: Yeah, and just to respond to that, I could certainly try and make a follow-up proposal, but like every single semantic of how things are jointly combined would be the same and would have to follow it. And the array proposal could never, like -- either could never advance beyond the iteration one or the two would have to stay in, you know -- whichever was behind would have to follow the one ahead of it and so on. If that’s a procedural thing I’m required to do, I can do that, but it seems strange to separate it since they’re going to be so tightly coupled regardless. +JHD: Yeah, and just to respond to that, I could certainly try and make a follow-up proposal, but like every single semantic of how things are jointly combined would be the same and would have to follow it. And the array proposal could never, like -- either could never advance beyond the iteration one or the two would have to stay in, you know -- whichever was behind would have to follow the one ahead of it and so on. If that’s a procedural thing I’m required to do, I can do that, but it seems strange to separate it since they’re going to be so tightly coupled regardless. MF: I wouldn't go assuming right away that it's going to be trivial to figure out the semantics of this array variant. Yes, it'll probably be heavily influenced, but you never know if there are, like, special cases that need to be handled or whatever. I wouldn't just jump to that conclusion. @@ -1300,16 +1194,16 @@ MF: Thank you. Then I have at least two. Any others are welcome, but I have the MF: All right, thank you, everyone. ## revisit Promise.try + Presenter: Jordan Harband (JHD) JHD: So Shu was given a diff of -- like a concrete example of code in Matrix and indicated he’d be willing to reconsider Stage 2, and the chairs were kind enough to slot me in right now. So I’m again going to ask for Stage 2, but not any higher because of the open issue about passing arguments. So there’s no one -- since there’s no one on the queue, I guess I’ll take that to mean there’s consensus for Stage 2. -CDA: Do we have some explicit support for Stage 2 for promise.try? I am not seeing or hearing -anything. +CDA: Do we have some explicit support for Stage 2 for promise.try? I am not seeing or hearing anything. JHD: Sitting next to Kit Kats, if that helps. 
-CDA It sounds like this is not going to progress to Stage 2. +CDA It sounds like this is not going to progress to Stage 2. CDA: I mean, I guess if everybody here -- @@ -1328,19 +1222,21 @@ JHD: Thank you. ### Conclusion Stage 2. + ## Math.sum + Presenter: Kevin Gibbons (KG) + - [proposal](https://github.com/tc39/proposal-math-sum) - [slides](https://docs.google.com/presentation/d/13S_WcLPhJ43El9dXCfC0uO4d1PakHmJbNVr-S4g3K3Q/edit) -KG: We should have a built-in mechanism for summing a list of values. This is my thesis. And if we are going to do this, then we can do something that’s a little better than naive summation, because naive summation accumulates floating point summation in a really bad way. And in the last meeting, WH pointed out there are a number of algorithms for doing full precision addition for floating point, and since then I have gone and implemented one of them, and, yeah, it’s pretty straightforward. So since at the last meeting WH expressed the preference that this be fully precise and a couple of other people expressed the opinion that while it may or may not need to be fully precise, it does need to be fully specified, which means either being fully precise or picking one algorithm for the specification to bless, the simplest thing and I think best for user seems to me to be to choosing fully precise summation. That’s what I’m proposing, an API for summing a number of arguments and giving you the full precision result. I have an implementation in JavaScript. Python also has fsum, which uses the same algorithm except that for some reason theirs doesn’t handle intermediate overflow. Like, if you sum, you know, 2 to the 52nd plus 2 to the 52nd plus 2 to the 52nd plus, minus 2 to the 52nd and so on, it will overflow to infinity. But you can just keep track of that in an easy way and not overflow to infinity in that case and as long as the resulting sum ends up finite, you can just give the right answer. +KG: We should have a built-in mechanism for summing a list of values. This is my thesis. And if we are going to do this, then we can do something that’s a little better than naive summation, because naive summation accumulates floating point summation in a really bad way. And in the last meeting, WH pointed out there are a number of algorithms for doing full precision addition for floating point, and since then I have gone and implemented one of them, and, yeah, it’s pretty straightforward. So since at the last meeting WH expressed the preference that this be fully precise and a couple of other people expressed the opinion that while it may or may not need to be fully precise, it does need to be fully specified, which means either being fully precise or picking one algorithm for the specification to bless, the simplest thing and I think best for user seems to me to be to choosing fully precise summation. That’s what I’m proposing, an API for summing a number of arguments and giving you the full precision result. I have an implementation in JavaScript. Python also has fsum, which uses the same algorithm except that for some reason theirs doesn’t handle intermediate overflow. Like, if you sum, you know, 2 to the 52nd plus 2 to the 52nd plus 2 to the 52nd plus, minus 2 to the 52nd and so on, it will overflow to infinity. But you can just keep track of that in an easy way and not overflow to infinity in that case and as long as the resulting sum ends up finite, you can just give the right answer. 
KG: Unfortunately, it’s hard for me to quantify exactly how expensive this would be. There’s obviously a fair amount of overhead relative to the simple method of just adding things up in a C style for loop. There’s like at least five to ten times more arithmetic operations per value, but, like, in practical, that’s probably not a problem until you have a huge number of values and the case that you have a huge number of values is precisely the case that you care most about accumulation of floating point errors, so I’m inclined to say just do full precision summation. That’s the proposal. KG: So, questions. First, why not take an iterable? And the answer is that we already have math.max and math.max is variadic. I think we should have an iterable taking version of math.max, say math.maxFrom and math.sumFrom, and if this advances, I will probably follow up with that right away. Or maybe we should only have the iterable-taking version of sum because the varargs version encourages to you spread an array and that doesn’t work if you have more than 36,000 or so items in your array. Of course, it depend on the implementation, but at least in some implementations, once your array gets too big, you’ll blow the stack and get a range error, and that’s like a really annoying case to run into. So maybe we should just not have the varargs version and only have the iterable taking version. I’m open to either, having both or having only the iterable taking version. I’d like to hear from anyone if they have opinions on that. -KG: Another open question is whether to coerce arguments to numbers. Math.max coerces to numbers. -You may remember at the presentation last time, we talked about not doing coercion to numbers or primitives in general anymore, at least to types other than Boolean. The argument for consistency with math.max is pretty strong, but this kind of coercion is pretty nasty in a number of other cases that we’ve discussed. Again, I could go either way. I genuinely don’t know how I am leaning on this, so I’d like to hear opinions on the merits of consistency versus avoiding the weird coercion cases. +KG: Another open question is whether to coerce arguments to numbers. Math.max coerces to numbers. You may remember at the presentation last time, we talked about not doing coercion to numbers or primitives in general anymore, at least to types other than Boolean. The argument for consistency with math.max is pretty strong, but this kind of coercion is pretty nasty in a number of other cases that we’ve discussed. Again, I could go either way. I genuinely don’t know how I am leaning on this, so I’d like to hear opinions on the merits of consistency versus avoiding the weird coercion cases. KG: This one isn’t an open question, but just to mention this can’t work with bigints. You can’t -- it has to work with an empty list or it’s really hard to use, and you can’t get a value that works for both numbers and BigInts when you have an empty list. So if you need to sum a list of BigInts, it has to be in its own method, and of course BigInts don’t have to deal with the floating point precision anyway, so it’s kind of a different beast. @@ -1472,5 +1368,4 @@ KG: Okay. You can review it right now if you can read fast. CDA: All right, you’ll have to settle for just JHD at the moment, I think. -KG: Okay, thanks, all. - +KG: Okay, thanks, all. 
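For readers of the notes, here is a sketch of the kind of full-precision summation discussed above, in the spirit of Shewchuk's algorithm as used by Python's `math.fsum`. It is illustrative only and is not the specification's algorithm; like Python's version, it does not handle intermediate overflow or non-finite inputs, which KG notes the proposal intends to address.

```js
// Illustrative only: keep a list of non-overlapping partial sums so that no
// precision is lost, then add them up at the end.
function preciseSum(values) {
  const partials = []; // exact, non-overlapping partials, smallest first
  for (let x of values) {
    let i = 0;
    for (let j = 0; j < partials.length; j++) {
      let y = partials[j];
      if (Math.abs(x) < Math.abs(y)) {
        const t = x; x = y; y = t;
      }
      const hi = x + y;        // rounded sum
      const lo = y - (hi - x); // exact rounding error ("two-sum" trick)
      if (lo !== 0) partials[i++] = lo;
      x = hi;
    }
    partials.length = i;
    partials.push(x);
  }
  // Summing the partials smallest-to-largest is exact except for some
  // halfway-rounding cases that a real implementation would also handle.
  return partials.reduce((sum, p) => sum + p, 0);
}

preciseSum([1e20, 0.1, -1e20]); // 0.1, where naive left-to-right addition gives 0
```

The variadic-versus-iterable question above is about call-site ergonomics rather than the algorithm: spreading a very large array into a variadic method can exceed engine argument limits, which is why an iterable-taking form was floated.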
diff --git a/meetings/2024-02/February-7.md b/meetings/2024-02/February-7.md index a0b28231..650e80d0 100644 --- a/meetings/2024-02/February-7.md +++ b/meetings/2024-02/February-7.md @@ -1,5 +1,5 @@ -100th TC39 Meeting -7th Feb 2024 +# 7th Feb 2024 100th TC39 Meeting + ----- Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. @@ -42,7 +42,9 @@ You can find Abbreviations in delegates.txt | Mikhail Barash | MBH | Univ. Bergen | | Samina Husain | SHN | Ecma | | | | | + ## Continuation: Intl.MessageFormat update and discussion + Presenter: Eemeli Aro (EAO) - [proposal](https://github.com/tc39/proposal-intl-messageformat) @@ -52,141 +54,146 @@ EAO: The discussion yesterday ended up covering this Q1 that I presented here. I EAO: And in order to kind of work around this issue of being stuck here, one possibility that I would be willing to consider here is, what if we left out the syntax? And what if we left out the MessageFormat syntax parser, specifically, from the proposal? And this is actually a surprisingly small change to the whole external API or the implementation itself. Even as it’s currently proposed, the source could not be just a string, but also this MessageData structure, which is defined as a part of the MessageFormat to work, as this feels like a relatively ergonomic and efficient way of representing all currently available localizable message that is we have been able to find as in formats and specific messages, even. And it is universal in that we can take a representation of a message that, for example, in ICU MessageFormat, would read something as above for formatting a message total price and then the price as a currency. And it would allow you had this represented in the MessageData model. And the significant bit here is that this representation is not just for ICU MessageFormat, not just for MessageFormat 2. Fluent works with it, really, all formats that we have been able to identify work with this data model. And this is something that we would be willing to consider as the next step here, to leave out the syntax and work based on the data model, which in very briefly, is JSON-representable, as TypeScript interfaces and types for convenience. -EAO: So at the top level, the whole of the message, it’s an object, that has a type and a declarations field. And then if it’s a simple message, that means it doesn’t have any variance, then it has a single pattern in it, if it does have variance, then it has the selector or selectors that define how the choice is made between the variants and the variants themselves with the keys and value as a pattern. And they can include some variable declarations effectively at the start of this. And within a pattern, which is a sequence either of literal strings or expressions or markup, we have these sorts of values. And, yeah. The question, then, effectively becomes on which I would like us to consider here. Is that if we leave out the syntax, is what remains really motivated enough for inclusion in JavaScript the language? And overall, I mean, the – the proposal in its current form questions like, how do we express a message in syntax and a for yours form and work with a message in a data structure, how do we format the message and how do we define custom operations on messages? And now, leaving out the syntax leaves out just effectively the first part of this. 
And it leaves us with a very, in fact, valuable runtime definition for how MessageFormatting really works or ought to work on the web and provides an interface definition for that, and provides a sort of focal point for the discussions about localization to continue and for the work in this whole field, to to – to coalesce around. And hence my question here: is this sufficient motivation, if we leave out the syntax for the proposal to – for the proposal to continue, so that we can later, when there’s more of a sense that there’s – there are these sorts of external indicators for success, that we could bring in the syntax string form as an alternative form of the source to be used. +EAO: So at the top level, the whole of the message, it’s an object, that has a type and a declarations field. And then if it’s a simple message, that means it doesn’t have any variance, then it has a single pattern in it, if it does have variance, then it has the selector or selectors that define how the choice is made between the variants and the variants themselves with the keys and value as a pattern. And they can include some variable declarations effectively at the start of this. And within a pattern, which is a sequence either of literal strings or expressions or markup, we have these sorts of values. And, yeah. The question, then, effectively becomes on which I would like us to consider here. Is that if we leave out the syntax, is what remains really motivated enough for inclusion in JavaScript the language? And overall, I mean, the – the proposal in its current form questions like, how do we express a message in syntax and a for yours form and work with a message in a data structure, how do we format the message and how do we define custom operations on messages? And now, leaving out the syntax leaves out just effectively the first part of this. And it leaves us with a very, in fact, valuable runtime definition for how MessageFormatting really works or ought to work on the web and provides an interface definition for that, and provides a sort of focal point for the discussions about localization to continue and for the work in this whole field, to to – to coalesce around. And hence my question here: is this sufficient motivation, if we leave out the syntax for the proposal to – for the proposal to continue, so that we can later, when there’s more of a sense that there’s – there are these sorts of external indicators for success, that we could bring in the syntax string form as an alternative form of the source to be used. -EAO: Now, I would like to open the queue for discussion. And I think Nicolo is first. +EAO: Now, I would like to open the queue for discussion. And I think Nicolo is first. -NRO: Yeah. One reason for having this proposal as built in the language so that people don’t have to all – shipments with polyfills, implementation. If you remove parsing – is it just very little that could be effectively just added as a third party library. How big is the remaining part? +NRO: Yeah. One reason for having this proposal as built in the language so that people don’t have to all – shipments with polyfills, implementation. If you remove parsing – is it just very little that could be effectively just added as a third party library. How big is the remaining part? -EAO: I don’t have a number in kilobytes to give you. It is not huge because the whole – whole of of it is structured in a way that it’s relying on the Intl formatters for effectively their activities. 
This is entirely intentional, in fact: one of the key points here is something – I don’t know if you Zibi mentioned it yesterday, but the JavaScript layers and one of the layers we’re working on to make the localization of the whole web much easier. And one of the places where we like to very much continue the conversation after TC39 is WHATWG and W3C, and define DOM localization or at least open up the discussion about that. And there, for instance, being able to rely on JavaScript providing an interface for the imperative API for localization and MessageFormat would be hugely valuable. +EAO: I don’t have a number in kilobytes to give you. It is not huge because the whole – whole of of it is structured in a way that it’s relying on the Intl formatters for effectively their activities. This is entirely intentional, in fact: one of the key points here is something – I don’t know if you Zibi mentioned it yesterday, but the JavaScript layers and one of the layers we’re working on to make the localization of the whole web much easier. And one of the places where we like to very much continue the conversation after TC39 is WHATWG and W3C, and define DOM localization or at least open up the discussion about that. And there, for instance, being able to rely on JavaScript providing an interface for the imperative API for localization and MessageFormat would be hugely valuable. -DE: So, you mentioned that it would be unfortunate if this proposal were to stay at Stage 1 for many years. Could you elaborate on that? +DE: So, you mentioned that it would be unfortunate if this proposal were to stay at Stage 1 for many years. Could you elaborate on that? EAO: We have a certain amount of momentum here, and we have put the whole JavaScript ecosystem of internationalization or localization immensely on hold for the past 4 or 5 years while we have been working on this. It would be really nice to be able to have the work that goes on beyond TC39 and the users beyond TC39 be able to start getting some utility out of the work we have done so far. And also, be able to as I mentioned previously, progress the work in WHATWG and W3C on top of this. If this is stuck at Stage 1 for multiple years, then it’s not really – it’s a much more difficult proposition to expect much to happen here. In the JavaScript scope. -DE: So as far as getting utility out of this work in JavaScript, can we deploy this when it’s at Stage 1 with the polyfill? Would that be too risky for organizations? +DE: So as far as getting utility out of this work in JavaScript, can we deploy this when it’s at Stage 1 with the polyfill? Would that be too risky for organizations? -EAO: Deploy – what do you mean? +EAO: Deploy – what do you mean? -DE: An organization like Bloomberg or Mozilla could just already adapt this format through the JavaScript implementation of it. Personally, for Bloomberg, I see this as more risky, if it’s not kind of co-validated by standards processes. But it seems like that’s what they are proposing. Not the developments stop, but development take place in prototyping outside of, you know, the native JavaScript implementation. Do you think that is possible? +DE: An organization like Bloomberg or Mozilla could just already adapt this format through the JavaScript implementation of it. Personally, for Bloomberg, I see this as more risky, if it’s not kind of co-validated by standards processes. But it seems like that’s what they are proposing. 
Not the developments stop, but development take place in prototyping outside of, you know, the native JavaScript implementation. Do you think that is possible? -EAO: Anything is possible because this is code afterall. My sense is that if we don’t have the solidity and authority of TC39 backing this, any alternative is going to be facing much, much of a harder road to get forward. We will still advance. This will – the work we have been doing will still be useful. It’s just be less so and less unified for the whole ecosystem. +EAO: Anything is possible because this is code afterall. My sense is that if we don’t have the solidity and authority of TC39 backing this, any alternative is going to be facing much, much of a harder road to get forward. We will still advance. This will – the work we have been doing will still be useful. It’s just be less so and less unified for the whole ecosystem. -DE: Okay. So my second point was, talking with KG and MF this morning, we were talking about rather than saying, many years, if we could elaborate kind of a definition of what kind of experience we want to get to Stage 2.7. And roughly, I think we were talking about, and those two can clarify their own positions, once we get a number of organizations at a number of sizes trying this out end-to-end in production with some number of applications, then we have gotten experience. We have different ideas about how long we think that will take. They can continue to think that that will take many years and we can try to organize for that to occur faster. But I was hoping that we could articulate a kind of objective criteria for Stage 2.7. Stage 2 then is a little bit more complicated. In my opinion, it should be a somewhat intermediate level of experience. But maybe we can derive that in a future meeting. So if we come up with these measures, then hopefully what we could tell the internationalization communities, we’ve just maybe implicitly over-promised on how quickly this could be done. We thought it would take five years. Well, you know, it will be longer. But the committee could explicitly endorse the development of this and continue prototyping, and layout what it needs for Stage 2.7, and in terms of experience, we can deliver on that experience. Do you think that’s a path we can take? +DE: Okay. So my second point was, talking with KG and MF this morning, we were talking about rather than saying, many years, if we could elaborate kind of a definition of what kind of experience we want to get to Stage 2.7. And roughly, I think we were talking about, and those two can clarify their own positions, once we get a number of organizations at a number of sizes trying this out end-to-end in production with some number of applications, then we have gotten experience. We have different ideas about how long we think that will take. They can continue to think that that will take many years and we can try to organize for that to occur faster. But I was hoping that we could articulate a kind of objective criteria for Stage 2.7. Stage 2 then is a little bit more complicated. In my opinion, it should be a somewhat intermediate level of experience. But maybe we can derive that in a future meeting. So if we come up with these measures, then hopefully what we could tell the internationalization communities, we’ve just maybe implicitly over-promised on how quickly this could be done. We thought it would take five years. Well, you know, it will be longer. 
But the committee could explicitly endorse the development of this and continue prototyping, and layout what it needs for Stage 2.7, and in terms of experience, we can deliver on that experience. Do you think that’s a path we can take? -EAO: It sounds like yes, there is opportunity for the metastructure here to advance. But my question is, given that there is, I believe, significant identifiable utility from the proposal, even if the syntax part of it is left out, and given that the syntax part of it is the part that would be slowing this down by some amount of time, that could be quite significant – could be years, could be less, but is unknowable, effectively, at this time. My strong preference would be to advance the part that is we can at this time, and then return later with a follow-up proposal adding the syntax parsing to this. +EAO: It sounds like yes, there is opportunity for the metastructure here to advance. But my question is, given that there is, I believe, significant identifiable utility from the proposal, even if the syntax part of it is left out, and given that the syntax part of it is the part that would be slowing this down by some amount of time, that could be quite significant – could be years, could be less, but is unknowable, effectively, at this time. My strong preference would be to advance the part that is we can at this time, and then return later with a follow-up proposal adding the syntax parsing to this. -DE: Yeah. I agree that the more significant part of the proposal is the data model and the syntax is maybe more likely to be unstable. So yeah. I would want to call on the three people who raised concerns in the past, if they – you know, none of them are in the queue. What do you think of what EAO is proposing? +DE: Yeah. I agree that the more significant part of the proposal is the data model and the syntax is maybe more likely to be unstable. So yeah. I would want to call on the three people who raised concerns in the past, if they – you know, none of them are in the queue. What do you think of what EAO is proposing? -KG: I would certainly be less concerned with going to Stage 2 with just the API portions of it. Most of my concerns are about the DSL specifically. And not the API. That type – I agree with Dan, that I don’t think it’s necessarily going to be forever, to get the DSL to be sufficiently established, that we want to add it to the language. I don’t think it’s necessarily going to be ten years. But if you would prefer to focus on the proposal without the DSL, I do think that’s likelier to – I am more comfortable with that going forward. +KG: I would certainly be less concerned with going to Stage 2 with just the API portions of it. Most of my concerns are about the DSL specifically. And not the API. That type – I agree with Dan, that I don’t think it’s necessarily going to be forever, to get the DSL to be sufficiently established, that we want to add it to the language. I don’t think it’s necessarily going to be ten years. But if you would prefer to focus on the proposal without the DSL, I do think that’s likelier to – I am more comfortable with that going forward. -MF: I’m not as convinced that the data model alone would be unproblematic, that all the possible things that we would want to change are syntax stuff. That’s maybe my – you know, unfamiliarity, though, with the proposal. But otherwise, I agree with the other points from DE and KG. 
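[Note: a rough TypeScript sketch of the data model shape EAO described above, where a message is either a simple pattern message or a select message with selectors and keyed variants, and a pattern is a sequence of literals, expressions, and markup. The interface and field names here are illustrative only; the normative definition is the MessageFormat 2 data model, not this sketch.]

```ts
// Illustrative shapes only, following EAO's description above;
// the normative definition is the MessageFormat 2 data model, not this sketch.
type Message = PatternMessage | SelectMessage;

interface PatternMessage {
  type: 'message';
  declarations: Declaration[];   // variable declarations at the start of the message
  pattern: Pattern;              // a simple message has a single pattern
}

interface SelectMessage {
  type: 'select';
  declarations: Declaration[];
  selectors: Expression[];       // how the choice between variants is made
  variants: Variant[];           // keys plus a pattern as the value
}

interface Declaration {
  name: string;
  value: Expression;
}

interface Variant {
  keys: Array<string | { type: '*' }>;   // '*' as a catch-all key
  value: Pattern;
}

// A pattern is a sequence of literal strings, expressions, and markup.
type Pattern = Array<string | Expression | Markup>;

interface Expression {
  type: 'expression';
  arg?: string | { type: 'variable'; name: string };
  functionRef?: { name: string; options?: Record<string, unknown> };
}

interface Markup {
  type: 'markup';
  kind: 'open' | 'standalone' | 'close';
  name: string;
}
```

Under this illustrative shape, the ICU-style example EAO mentioned (a total price formatted as currency) would be a single PatternMessage whose pattern mixes literal text with one expression carrying a currency-formatting function reference.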
+MF: I’m not as convinced that the data model alone would be unproblematic, that all the possible things that we would want to change are syntax stuff. That’s maybe my – you know, unfamiliarity, though, with the proposal. But otherwise, I agree with the other points from DE and KG. KG: Stage 2, we often make changes to the API of things. And like, I think it’s much easier. The main reason I am more comfortable with this going forward with just the data model is that it’s much easier to augment or tweak the data model than to tweak the syntax. If you need to add a new type, that’s trivial to do in the data model and much harder with a set syntax. So I think those changes are to some extent to be expected during Stage 2, not like massive rewrites of the whole thing. But if we decide that there is additional bool, or, I don’t know, a new type, that can happen in Stage 2 and wouldn’t be shocking or particularly problematic -MF: That’s true. Only considering Stage 2, I do think that it helps to advance just the data model without the surface syntax. +MF: That’s true. Only considering Stage 2, I do think that it helps to advance just the data model without the surface syntax. -DE: That’s great. So we could both advance the data model sooner and articulate maturity criteria for the surface syntax. What would we need to understand that the data model is mature enough for Stage 2? What criteria? What would you be looking for? Because MF, you seem to express some concerns. And I want to know, like, what the champion ground should be doing in response to your concern, besides just going away. +DE: That’s great. So we could both advance the data model sooner and articulate maturity criteria for the surface syntax. What would we need to understand that the data model is mature enough for Stage 2? What criteria? What would you be looking for? Because MF, you seem to express some concerns. And I want to know, like, what the champion ground should be doing in response to your concern, besides just going away. -MF: So this is kind of what we were talking about earlier. I don’t think I have concerns directly, like, within our process for advancing the data model to Stage 2. The concerns are about the signaling to the community, and there’s two-fold concerns there as – the one side of it is that the kinds of changes we expect during Stage 2 and the other side of it is the duration we typically expect between Stage 2 and 2.7. A clear path forward. And I am not sure we are making the right communication to the community with this proposal at Stage 2. +MF: So this is kind of what we were talking about earlier. I don’t think I have concerns directly, like, within our process for advancing the data model to Stage 2. The concerns are about the signaling to the community, and there’s two-fold concerns there as – the one side of it is that the kinds of changes we expect during Stage 2 and the other side of it is the duration we typically expect between Stage 2 and 2.7. A clear path forward. And I am not sure we are making the right communication to the community with this proposal at Stage 2. -DE: Okay. It’s clear that communication with the community is important for everyone, for the presenter as well. Keeping it at Stage 1 would be also a signal to the community that it might discourage investment, in particular. Making things go more slowly as Eemeli was saying. Can you elaborate on what the benefits of keeping it at Stage 1 in terms of communication. +DE: Okay. 
It’s clear that communication with the community is important for everyone, for the presenter as well. Keeping it at Stage 1 would be also a signal to the community that it might discourage investment, in particular. Making things go more slowly as Eemeli was saying. Can you elaborate on what the benefits of keeping it at Stage 1 in terms of communication. -MF: I am fully in agreement with you, keeping it at Stage 1 with no other communication would possibly be seen as a negative signal to the community. It discourages further adoption. I think it would be best, if it was to stay at Stage 1, it’s best to – and I think we talked about this earlier – encourage the use of the polyfill and whatever other positive signals we need to try to gain the additional data points we need to further advance it. +MF: I am fully in agreement with you, keeping it at Stage 1 with no other communication would possibly be seen as a negative signal to the community. It discourages further adoption. I think it would be best, if it was to stay at Stage 1, it’s best to – and I think we talked about this earlier – encourage the use of the polyfill and whatever other positive signals we need to try to gain the additional data points we need to further advance it. DE: Yeah, I think it’s great that Eemeli has created a polyfill for all this and is promoting its use. I think going to advance to 2 would increase people’s likelihood to use the polyfill and give us feedback. Do you have significant concerns? Would you be opposed if the champion group came back and said, we are promoting the data model for Stage 2. What kinds of things would go into your consideration whether it’s the wrong signal -MF: No. I would not have strong opposition to that. The – and sorry. We have had a lot of this conversation earlier and are kind of repeating it to each other in committee. So it’s kind of awkward -But yes. I think that like – a lot of the stronger criteria is more appropriate at 2.7. So like this wider adoption by a variety of different consumers, consumers who were not participating in the development of the standard itself. That kind of stuff should be gating 2.7 and not 2. Yeah. Most of my concerns about 2 were just about what kind of signaling to the community that does. And sorry, I don’t think I directly addressed your question. +MF: No. I would not have strong opposition to that. The – and sorry. We have had a lot of this conversation earlier and are kind of repeating it to each other in committee. So it’s kind of awkward But yes. I think that like – a lot of the stronger criteria is more appropriate at 2.7. So like this wider adoption by a variety of different consumers, consumers who were not participating in the development of the standard itself. That kind of stuff should be gating 2.7 and not 2. Yeah. Most of my concerns about 2 were just about what kind of signaling to the community that does. And sorry, I don’t think I directly addressed your question. -CDA: We have less than 7 minutes left. If we could keep moving through the queue. EAO? +CDA: We have less than 7 minutes left. If we could keep moving through the queue. EAO? -EAO: Yeah. Just thought I would note I put together today a PR draft status, what the changes are required in order to get rid of the syntax parser in the spec in its current form and it’s about 5 or 10 lines of a div removing about one phrase of functionality from implementation to find method. The change is minimal. I invite anyone here to look at that change, if they are so interested. 
Because the spec itself is already defined completely. The runtime operations in terms of the data model, rather than the syntax. +EAO: Yeah. Just thought I would note I put together today a PR draft status, what the changes are required in order to get rid of the syntax parser in the spec in its current form and it’s about 5 or 10 lines of a div removing about one phrase of functionality from implementation to find method. The change is minimal. I invite anyone here to look at that change, if they are so interested. Because the spec itself is already defined completely. The runtime operations in terms of the data model, rather than the syntax. RCA: I am in agreement with the current proposal. But I would like to understand or have a clear definition of the next steps regarding the syntax. As we mentioned yesterday, we needed and it wasn’t then clear what were the requirements to that. So I think that point would be extremely important to work on the proposal for the syntax parser in parallel. - -SFC: Yeah. So Stage 2, we keep talking about we need this, you know, to have the experience using the syntax and all that, which is fine. But that seems like, to me, a requirement for Stage 2.7. But Stage 2 basically means that we as the committee think that the proposal is motivated. And we do want to see a syntax-based MessageFormat in the language. I see no reason that Stage 2 needs to be blocked on the syntax having, you know, on the ground experience for a certain number of years. Right? -So I think it might be good to clarify that if we as a committee believe that the proposal is motivated, does that really need to block Stage 2, maybe only 2.7. We can then continue to progress with the – having a higher understanding of the proposal and knowing that the committee is behind us. You know, as we develop – continue to develop the proposal. And to get this experience that we are looking for. -LCA: Yeah. I was wondering, how useful do you think the data structure is for using MessageFormat with the data structure even after the syntax is introduced? Like is there still going to be users that use MessageFormat with the data structure, even after there’s a surface syntax? +SFC: Yeah. So Stage 2, we keep talking about we need this, you know, to have the experience using the syntax and all that, which is fine. But that seems like, to me, a requirement for Stage 2.7. But Stage 2 basically means that we as the committee think that the proposal is motivated. And we do want to see a syntax-based MessageFormat in the language. I see no reason that Stage 2 needs to be blocked on the syntax having, you know, on the ground experience for a certain number of years. Right? So I think it might be good to clarify that if we as a committee believe that the proposal is motivated, does that really need to block Stage 2, maybe only 2.7. We can then continue to progress with the – having a higher understanding of the proposal and knowing that the committee is behind us. You know, as we develop – continue to develop the proposal. And to get this experience that we are looking for. -EAO: Short answer here is yes, absolutely. Largely, this is something like MessageFormat 2 is interesting to people and organizations and about projects and applications that are already interested in localization. 
And this means that all of these things, all of these interested parties already have some solution for the localization and one of the important things that the data model things is that it brings this ability to bring your own parser. And this means that a user could bring in effectively – keep their messages in their current shape only, air quotes only, the runtime formatting of how those messages are being used. And this is likely to make it much, much easier for migration from existing or legacy formats to MessageFormat 2, for instance. +LCA: Yeah. I was wondering, how useful do you think the data structure is for using MessageFormat with the data structure even after the syntax is introduced? Like is there still going to be users that use MessageFormat with the data structure, even after there’s a surface syntax? -LCA: Okay. Thank you. That clarifies things. +EAO: Short answer here is yes, absolutely. Largely, this is because something like MessageFormat 2 is interesting to people and organizations and projects and applications that are already interested in localization. And this means that all of these interested parties already have some solution for localization, and one of the important things that the data model brings is this ability to bring your own parser. And this means that a user could effectively keep their messages in their current shape and change – air quotes – “only” the runtime formatting of how those messages are being used. And this is likely to make it much, much easier for migration from existing or legacy formats to MessageFormat 2, for instance. -JSC: Yeah. Real quick. The data model is abstract enough to also add JSON and XML forms. That’s all. +LCA: Okay. Thank you. That clarifies things. -DA: Within the MessageFormat, development group, campaign group, Mozilla like, Eemeli has been taking the position of everyone should adopt the surface syntax whereas other people in the group have been advocating for, well, the important thing is the data model because I want to stay with the XML format. Or other things. The data model will be useful. It doesn’t add expressiveness because you could always serialize it. But that’s – it’s not even something that everybody is completely set on, they want to adopt this particular DSL as opposed to adopting other serialization. Is that accurate, Eemeli? +JSC: Yeah. Real quick. The data model is abstract enough to also add JSON and XML forms. That’s all. -EAO Yes. +DA: Within the MessageFormat development group – champion group – Mozilla, like, Eemeli has been taking the position that everyone should adopt the surface syntax, whereas other people in the group have been advocating for, well, the important thing is the data model, because I want to stay with the XML format. Or other things. The data model will be useful. It doesn’t add expressiveness because you could always serialize it. But that’s – it’s not even something that everybody is completely set on, that they want to adopt this particular DSL as opposed to adopting another serialization. Is that accurate, Eemeli? + +EAO: Yes. CDA: All right. We have 1 minute left. -EAO: Is it valid for me to ask for Stage 2 possibly at this point or should that be done at the next meeting?
That would be Stage 2 without the syntax parser in the proposal as it currently is. -CDA: The agenda did not call for a proposed stage advancement. +CDA: The agenda did not call for a proposed stage advancement. -EAO: If that’s saying that it’s not valid for me to ask for that, then I shall not. And I notice a reply from asking to discuss more in TC39 2. This is fine. This is what we will do. +EAO: If that’s saying that it’s not valid for me to ask for that, then I shall not. And I notice a reply from asking to discuss more in TC39 2. This is fine. This is what we will do. -CDA: You can certainly ask. It’s not invalid to ask for it. KG? +CDA: You can certainly ask. It’s not invalid to ask for it. KG? -KG: Just for signaling purposes, I guess I am fine regardless of whether it happens at this meeting – I would be fine going to Stage 2 with just the data model. So I don’t know if other people object to it happening at this meeting, but yeah, if you want to go with just the data model, for Stage 2, that’s fine with me. +KG: Just for signaling purposes, I guess I am fine regardless of whether it happens at this meeting – I would be fine going to Stage 2 with just the data model. So I don’t know if other people object to it happening at this meeting, but yeah, if you want to go with just the data model, for Stage 2, that’s fine with me. -SFC: In TG2 we reviewed the proposal and had similar discussions now, but I think that, you know, this idea of a data-model-only proposal was briefly mentioned as a possibility that we might consider, but I don’t know if all the TG2 delegates have reviewed that portion of the proposal. So I think it would be – there’s no harm in waiting a couple of months to have everyone on board and have a strong thumbs up next meeting for Stage 2 with that form of the proposal. +SFC: In TG2 we reviewed the proposal and had similar discussions now, but I think that, you know, this idea of a data-model-only proposal was briefly mentioned as a possibility that we might consider, but I don’t know if all the TG2 delegates have reviewed that portion of the proposal. So I think it would be – there’s no harm in waiting a couple of months to have everyone on board and have a strong thumbs up next meeting for Stage 2 with that form of the proposal. DE: It would be good if I could ask for concerns that anybody has with this resolution next meeting. Especially for non-TG2 people, Michael or Shu, raised concerns, have thoughts on this. Maybe we could do that off-line. [In chat, SYG confirmed no objection to Stage 2 without the syntax.] -CDA: Okay. You got a + 1 for Stage 2 with the data model from Luca and + 1 from JS Choi. Some folks don’t want to advance at this time. +CDA: Okay. You got a + 1 for Stage 2 with the data model from Luca and + 1 from JS Choi. Some folks don’t want to advance at this time. -EAO: Yeah. I am not going to ask for Stage 2. The point SFC made is valid. We need to discuss in TG2 before we ask for Stage 2 here. +EAO: Yeah. I am not going to ask for Stage 2. The point SFC made is valid. We need to discuss in TG2 before we ask for Stage 2 here. ### Speaker's Summary of Key Points -* Some TC members have strong concerns about adopting any novel DSL before it has seen significant real-world usage. -* This means that with the syntax parser, it could take multiple years until the proposal is standardized. -* Leaving out the syntax parser, and only initially supporting a data model representation of messages, would unblock progress. 
-* To standardize the syntax of a DSL, it would be meaningful/persuasive to see around a dozen organizations of various sizes, including ones which were not involved in MF2 development, make significant use in production of MF2 syntax across their stack (engaging application developers, translators, infrastructure developers, …). This will likely be required for Stage 2.7. It remains to be defined whether an intermediate, lower amount of experience would be sufficient for Stage 2. +- Some TC members have strong concerns about adopting any novel DSL before it has seen significant real-world usage. +- This means that with the syntax parser, it could take multiple years until the proposal is standardized. +- Leaving out the syntax parser, and only initially supporting a data model representation of messages, would unblock progress. + +- To standardize the syntax of a DSL, it would be meaningful/persuasive to see around a dozen organizations of various sizes, including ones which were not involved in MF2 development, make significant use in production of MF2 syntax across their stack (engaging application developers, translators, infrastructure developers, …). This will likely be required for Stage 2.7. It remains to be defined whether an intermediate, lower amount of experience would be sufficient for Stage 2. + ### Conclusion -* In a future TC39 meeting, there will be a presentation on Intl.MessageFormat for Stage 2, leaving out the parser. The committee has not expressed any concerns about this approach, but it remains to be reviewed in TG2. -* TC39 encourages continued development, prototyping and deployment of MessageFormat 2 syntax, e.g., implemented in a JS-level library + +- In a future TC39 meeting, there will be a presentation on Intl.MessageFormat for Stage 2, leaving out the parser. The committee has not expressed any concerns about this approach, but it remains to be reviewed in TG2. +- TC39 encourages continued development, prototyping and deployment of MessageFormat 2 syntax, e.g., implemented in a JS-level library ## RegExp.escape hex escape discussion + for stage 2.7 + Presenter: Jordan Harband (JHD) -- [proposal]() -- [slides]() +- [Issue](https://github.com/tc39/proposal-regex-escaping/issues/58) +- Slides: See Agenda JHD: All right. Good morning, everybody. I am here to talk about RegExp.escape. So there is one outstanding issue, I am going to talk about that in a minute . . . but setting that aside, this is the entirety of the spec. It’s adding a few more escapes. This is adding a few more escapes that are valid in unicodeMode RegExps, and here is the RegExp.escape function that does the escaping. Essentially, asides from this other issue, I would say this is ready for Stage 2.7. To start writing tests. -JHD: The remaining issue, however, is about hex escapes. So the current specification . . . so there was a question about using hex escapes. So, for example, you can see this example here, where if you put an ampersand in there, it puts a slash in front of hex escapes. So the sort of – the preference of myself, the champion, and KG, as well, would be to have the more readable escapes which are not the hex escapes. MF has indicated that this – he prefers to have the hex escapes because it makes the regular expression grammar less complex and MF feel free to step in if I was misstating your question. It is that "the readability of the output of RegExp.escape doesn’t matter". There you go. Do we change this to use hex escapes or not? My preference is not to. 
But I would want to hear thoughts before the committee about the pros or cons of the two approaches. If we decide to make the change, I will not be asking for stage advancement today. That is only in the case that we decide not to make the change. +JHD: The remaining issue, however, is about hex escapes. So the current specification . . . so there was a question about using hex escapes. So, for example, you can see this example here, where if you put an ampersand in there, it puts a slash in front of hex escapes. So the sort of – the preference of myself, the champion, and KG, as well, would be to have the more readable escapes which are not the hex escapes. MF has indicated that this – he prefers to have the hex escapes because it makes the regular expression grammar less complex and MF feel free to step in if I was misstating your question. It is that "the readability of the output of RegExp.escape doesn’t matter". There you go. Do we change this to use hex escapes or not? My preference is not to. But I would want to hear thoughts before the committee about the pros or cons of the two approaches. If we decide to make the change, I will not be asking for stage advancement today. That is only in the case that we decide not to make the change. MF: Yeah. I guess I want to clarify my position here. My position is not based on the RegExp.escape output. I frankly could not care less what the RegExp.escape output looks like. In fact, I like negative care about – I want nobody else to care about what the output of RegExp.escape is. The concern should have been expressed as this feature for RegExp escaping adds RegExp syntax unnecessarily. There are new identity escapes for a bunch of ASCII characters being added by this proposal, so that the output of RegExp.escape can then use those identity escapes. We should not be putting those two together. If we want identity escapes added, add identity escapes. What this means is, people who never intend to use RegExp.escape are still getting this feature. They still have to be on the lookout for identity escapes inside the RegExp. They are able to use this. I don’t want to encourage people to use this. So we can add RegExp.escape with identical functionality as far as the behavior of the RegExp without doing that. So that’s why. And I don’t see a downside of using hex escapes for RegExp.escape. -KG: Yeah. First thing is just, I don’t think JHD mentioned polyfillability. This is a concern some other people made which is that because the use of the more readable version requires changes to the RegExp syntax, it can’t be polyfilled. It cannot be easily polyfilled because you have to replace the RegExp parser and all that, which is a valid concern. The second thing is, I think we should prioritize the output of this being readable, even though it is intended to be executed rather than read because you will still end up reading it. Like, you’re going to debug this and you’re going to print stuff. Like, it is code, but code is data. And data gets consumed by humans a lot of the time. I think it’s worth putting some effort into making the output more legible. That’s all. +KG: Yeah. First thing is just, I don’t think JHD mentioned polyfillability. This is a concern some other people made which is that because the use of the more readable version requires changes to the RegExp syntax, it can’t be polyfilled. It cannot be easily polyfilled because you have to replace the RegExp parser and all that, which is a valid concern. 
The second thing is, I think we should prioritize the output of this being readable, even though it is intended to be executed rather than read because you will still end up reading it. Like, you’re going to debug this and you’re going to print stuff. Like, it is code, but code is data. And data gets consumed by humans a lot of the time. I think it’s worth putting some effort into making the output more legible. That’s all. MLS: Is it meant for humans or tooling/APIs, and I think it’s meant for tooling/APIs. This part, as far as escaping things, there’s – the different modes of RegExp allow different escapes, IdentityEscapes, you don’t have characters here, but it could be the case that at some point, some of these characters that we would escape would be part of the fourth mode of regular expressions (and I hope not). So I think we have to be a little careful about what precedent we set with what escapes we generate. So going back to my rhetorical question: I think that, yes, humans will read this, but we have to be careful that in making it readable, even though it’s for machine consumption, we don’t shoot ourselves in the foot with something future. MF: So I agree with MLS here. Tooling is obviously the consumer. I think that you said it was a rhetorical question. Not all of us are actually in agreement about that. But I agree with that, that tooling is the consumer, passing directly to an evaluator, or to do further processing, and God forbid you ever actually have to look at a RegExp. People can’t read them to begin with. You’re already not reading a RegExp. You put it in a visualizer. If you are concerned your Chrome DevTools doesn’t have a visualizer for your regex or an explainer, they can add that, right? That’s a tooling solution. You don’t need to read RegExp. You shouldn’t read RegExp. You should use explainers anyway. -SYG: We also support not adding – agree with MF, and probably should not add new IdentityEscapes here. For the same reasons, I think. Everything seems pretty reasonable. +SYG: We also support not adding – agree with MF, and probably should not add new IdentityEscapes here. For the same reasons, I think. Everything seems pretty reasonable. JRL: Justin prefers hex escapes. We need hex escapes for digitals. -RBN: Yeah. Kind of piling on with this as well, I also don’t think that we should be trying to introduce new IdentityEscapes. Because as has also been said, introduce a new mode for RegExp like we have for unicode and the new kind of extended unicode that supports set notation. We may, in those more restrictive modes, want to introduce escapes from the occasions meaning something different than what they mean in non-unicodeModes. Therefore, having the escape mechanism generate IdentityEscapes for things that might have a different meaning and different modes is not a good idea. And I think using or adding these here would essentially preclude us from ever using them for some other meaning. - +RBN: Yeah. Kind of piling on with this as well, I also don’t think that we should be trying to introduce new IdentityEscapes. Because as has also been said, introduce a new mode for RegExp like we have for unicode and the new kind of extended unicode that supports set notation. We may, in those more restrictive modes, want to introduce escapes from the occasions meaning something different than what they mean in non-unicodeModes. 
Therefore, having the escape mechanism generate IdentityEscapes for things that might have a different meaning and different modes is not a good idea. And I think using or adding these here would essentially preclude us from ever using them for some other meaning. + MM: Agree no regular expression syntax changes. -KG: I am in the strong minority, so I will let it go. I will respond to RBN. I would object to any use of any of these punctuators meaning anything other than IdentityEscape. `\-` cannot mean anything other than `-` in any mode ever. So it's fine for `\-` to mean `-` instead of being an error. But since there’s other concerns, I am fine letting this go and getting unreadable output. Whatever. +KG: I am in the strong minority, so I will let it go. I will respond to RBN. I would object to any use of any of these punctuators meaning anything other than IdentityEscape. `\-` cannot mean anything other than `-` in any mode ever. So it's fine for `\-` to mean `-` instead of being an error. But since there’s other concerns, I am fine letting this go and getting unreadable output. Whatever. + +JHD: Well, then, in that case, I will come back at hopefully the next meeting to request 2.7 with the changes that remove the syntax changes and add the additional hex escaping, and that’s it for today. -JHD: Well, then, in that case, I will come back at hopefully the next meeting to request 2.7 with the changes that remove the syntax changes and add the additional hex escaping, and that’s it for today. ### Conclusion + - Will incorporate hex escaping, and return at a future meeting to request stage 2.7. + ## WasmGC shared memory proposal and shared structs proposal convergence update -Presenter: Shu-yu Guo (SYG) -- [proposal]() -- [slides]() +Presenter: Shu-yu Guo (SYG) +- [proposal](https://github.com/WebAssembly/shared-everything-threads/blob/main/proposals/shared-everything-threads/Overview.md) +- Slides: See Agenda SYG: This is not asking for stage advancement. This is an FYI to committee about some changes that are happening in the WasmGC space with respect to shareholder memory and our plans to basically converge with that compatibility. And to be clear, this is not new. This has always kind of been the plan for shared structs. And giving an update now that the WasmGC side of things have picked up steam and there is a formal proposal there. @@ -204,117 +211,51 @@ SYG: The last bullet point is that for user-defined WasmGC structs and array in SYG: Here is a breakdown of where things converge and where things diverge. So core to both the JS and the Wasm proposals, things that have to be shared and have their and have the exact same form, is the memory model. I think that is a non-negotiable, like we have the same memory model for SharedArrayBuffers and linear memory – we must have the same memory model structured data as well. -SYG: Both proposals propose objects that are actually shared and not rewrapped. Both need some kind of run time checking for shared to unshared edges. JS doesn’t have a type system, so we must do this checking, and JS reflections, have Wasm things, must do this checking. And as I’ve explained, both need some notion of thread and possibly thread and run local stores. Both thread and run local. This is somewhat open to discussion still, but some kind of local storage, scope to either thread or around that can bridge the shared and unshared worlds. And a consequence of that last bullet point is that shared objects must be usable as keys in WeakMaps. 
So that’s what’s core to both. What only the JS proposal needs to concern itself with is the JS author and experience. The syntax of shared structs, the type -registry idea I presented last time that we were shopping around with with RBN and MAH and -other folks, that’s a JS only concern, because there’s a DX concern for folks working in JS and -TS. +SYG: Both proposals propose objects that are actually shared and not rewrapped. Both need some kind of run time checking for shared to unshared edges. JS doesn’t have a type system, so we must do this checking, and JS reflections, have Wasm things, must do this checking. And as I’ve explained, both need some notion of thread and possibly thread and run local stores. Both thread and run local. This is somewhat open to discussion still, but some kind of local storage, scope to either thread or around that can bridge the shared and unshared worlds. And a consequence of that last bullet point is that shared objects must be usable as keys in WeakMaps. So that’s what’s core to both. What only the JS proposal needs to concern itself with is the JS author and experience. The syntax of shared structs, the type registry idea I presented last time that we were shopping around with with RBN and MAH and other folks, that’s a JS only concern, because there’s a DX concern for folks working in JS and TS. [technical problems :) ] -SYG: All right, so where was I? The JS only concerns, so the author and -experience part and also high level synchronization primitive, MLS brought this up -last time. I do believe that JS needs high level synchronization primitives beyond what we have today, which are just few texts, and I’ve been thinking of things like not just text of condition, but maybe instead of a non-recursive single mutex we can have something like a slim read-write lock, as the high level lock. Condition variables, I think, are a foregone conclusion as well and need async locking for the JS side. So for the Wasm side -- its outside the purview of us and the core concerns. There is the Wasm -authoring experience for how Wasm tool chains would take advantage of that proposal. We don’t -care about that. The static check on shared stuff, that is also only a Wasm concern. And very low level synchronization primitives I’m going to claim for now are Wasm only concerns. This might change, but Wasm -- something like a few texts or what I’m thinking -of as a managed waiter queue, which is basically the only reason the few -- it is -basically a waiter queue, except it’s indirected through the memory address, and as far as I -can tell, that’s because memory addresses are the only keys that you have to work with in, -like, C, but if you have direct references, you could just give people managed waiter queues -directly, which are basically few texts that are a little faster, and I think that remains the -right level of abstraction for Wasm. Because they are compiling high level synchronization -primitives that down level to something else directly. They are not going to be able to use -whatever high level things we provide them. So this a pretty high level view of separation and -where the concerns are separate and where the concerns are not separate. - -SYG: So what are the next steps for the Wasm proposal and for this proposal? So for the Wasm -proposal is -- well, okay, before I get to that, the goal here is that it’s good for the -platform to converge. 
I don’t think we want to be in a future state where something like this -capability only exist on the JS side or only exists on the Wasm side. We should be thinking -about convergence from the get-go, because it’s going to leak abstraction-wise anyway. If only -happens in Wasm, it’s going to be usable on the web via JS through the Wasm JS API, maybe in a -really weird way, so we should converge the two proposals. And there is -- there’s commitment -that -- from both sides that this is what we do. And Wasm and JS share some of the goals, but -not all the goals, namely what kind of authoring experience we want to enable, like, our -constituency are, you know, JavaScript programmers, TypeScript programmers. Wasm’s main -constituency is not the Kotlin programmers and the Java programmers but the toolchain -authors to be able to compile Java. We share some goals but not all of them. I think next -step for the JS side is to -- is that we split the feature set of the current proposal, because -it’s already getting large -- fairly large, is to have an MVP feature set that ensures the Wasm -conversions and a base authoring experience. I don’t want something that -- I think I don’t -want something that is extremely unergonomic to the author. But we can probably pair some nice to -haves and we can cut those onto it later, and the rest of the stuff that is post MVP we can -deprioritize while we’re working in lock step with Wasm, basically. - -SYG: And the plan is to come back for Stage 2 for this MVP feature set. So, okay, before I dive -into some deeper dive -- deeper dive into some interesting technical kings that came up with -the Wasm discussions that have not come up in the JS discussions, any queue items about the -high level overview I have provided? I see two things on the queue. - -LCA: Yeah, I have a question about the syscall table. Is the syscall table scoped to the thread -by itself or to the thread plus the instance, and sort of the real question I’m asking is is -the instance locked to a given thread or is the instance shared across the threads, Wasm -instance? - -SYG: I’m pretty sure the instance is shared across threads, so the things that, like, per thread -has a view of would be this table and whatever TLS thing. - -LCA: Okay, so that makes it impractical to run two Wasm instances with different trust levels in -the same JavaScript thread, is that correct? +SYG: All right, so where was I? The JS only concerns, so the author and experience part and also high level synchronization primitive, MLS brought this up last time. I do believe that JS needs high level synchronization primitives beyond what we have today, which are just few texts, and I’ve been thinking of things like not just text of condition, but maybe instead of a non-recursive single mutex we can have something like a slim read-write lock, as the high level lock. Condition variables, I think, are a foregone conclusion as well and need async locking for the JS side. So for the Wasm side -- its outside the purview of us and the core concerns. There is the Wasm authoring experience for how Wasm tool chains would take advantage of that proposal. We don’t care about that. The static check on shared stuff, that is also only a Wasm concern. And very low level synchronization primitives I’m going to claim for now are Wasm only concerns. 
This might change, but Wasm -- something like a futex, or what I’m thinking of as a managed waiter queue, which is basically the only reason for the futex -- it is basically a waiter queue, except it’s indirected through the memory address, and as far as I can tell, that’s because memory addresses are the only keys that you have to work with in, like, C, but if you have direct references, you could just give people managed waiter queues directly, which are basically futexes that are a little faster, and I think that remains the right level of abstraction for Wasm. Because they are compiling high level synchronization primitives that down-level to something else directly. They are not going to be able to use whatever high level things we provide them. So this is a pretty high level view of separation and where the concerns are separate and where the concerns are not separate. + +SYG: So what are the next steps for the Wasm proposal and for this proposal? So for the Wasm proposal -- well, okay, before I get to that, the goal here is that it’s good for the platform to converge. I don’t think we want to be in a future state where something like this capability only exists on the JS side or only exists on the Wasm side. We should be thinking about convergence from the get-go, because it’s going to leak abstraction-wise anyway. If it only happens in Wasm, it’s going to be usable on the web via JS through the Wasm JS API, maybe in a really weird way, so we should converge the two proposals. And there is -- there’s commitment from both sides that this is what we do. And Wasm and JS share some of the goals, but not all the goals, namely what kind of authoring experience we want to enable; like, our constituency are, you know, JavaScript programmers, TypeScript programmers. Wasm’s main constituency is not the Kotlin programmers and the Java programmers but the toolchain authors to be able to compile Java. We share some goals but not all of them. I think the next step for the JS side is that we split the feature set of the current proposal, because it’s already getting fairly large, to have an MVP feature set that ensures the Wasm convergence and a base authoring experience. I don’t want something that -- I think I don’t want something that is extremely unergonomic to the author. But we can probably pare off some nice to haves and add those on later, and the rest of the stuff that is post MVP we can deprioritize while we’re working in lock step with Wasm, basically. + +SYG: And the plan is to come back for Stage 2 for this MVP feature set. So, okay, before I dive into a deeper dive into some interesting technical things that came up in the Wasm discussions that have not come up in the JS discussions, any queue items about the high level overview I have provided? I see two things on the queue. + +LCA: Yeah, I have a question about the syscall table. Is the syscall table scoped to the thread by itself or to the thread plus the instance, and sort of the real question I’m asking is: is the instance locked to a given thread or is the instance shared across the threads, Wasm instance? + +SYG: I’m pretty sure the instance is shared across threads, so the things that, like, per thread has a view of would be this table and whatever TLS thing. + +LCA: Okay, so that makes it impractical to run two Wasm instances with different trust levels in the same JavaScript thread, is that correct? SYG: I mean, you can -- then you can make two instances with different tables.
LCA: But you said the table is -- like, okay, so -- -SYG: So the instance has a table and the table has a thread local -- has a per thread view, but -you can have another instance with another table. This -- there’s a separate table and that -table has its own thread local view. +SYG: So the instance has a table and the table has a thread local -- has a per thread view, but you can have another instance with another table. This -- there’s a separate table and that table has its own thread local view. LCA: Okay, how would this table be populated on a different thread? -SYG: There would be a handshake initialization phase, basically, where, like, initially -- -currently, I actually don’t they how Emscripten generates this stuff, but it generates some -bootstrap code basically, so when you send over your instance to run before it can run, there -would need some boot call strap thing like populate my table, and this was identified -- this -the exact handshake thing was identified as a real bad DX problem for JS author and experience. -I’ve raised the same thing with the Wasm folks and they said, well, we already do this -handshake phase today, so we think we can live with it, but this is early days still, maybe -I’ll also find that it’s problematic. +SYG: There would be a handshake initialization phase, basically, where, like, initially -- currently, I actually don’t know how Emscripten generates this stuff, but it generates some bootstrap code basically, so when you send over your instance to run, before it can run, there would need to be some bootstrap call, like “populate my table”, and this was identified -- this exact handshake thing was identified as a real bad DX problem for the JS authoring experience. I’ve raised the same thing with the Wasm folks and they said, well, we already do this handshake phase today, so we think we can live with it, but this is early days still, maybe I’ll also find that it’s problematic. -LCA: Because I was talking to some Wasm folks a couple weeks ago, and they had mentioned that -there was thought about, like, removing this handshake and being able to start the thread from -the Wasm itself, which, yeah, I don’t know, I’m not sure how exactly I feel about that. +LCA: Because I was talking to some Wasm folks a couple weeks ago, and they had mentioned that there was thought about, like, removing this handshake and being able to start the thread from the Wasm itself, which, yeah, I don’t know, I’m not sure how exactly I feel about that. SYG: Yeah, I think it’s still early days. -MAH: Yeah, my question is somewhat related. Currently if Wasm moves towards removing the -handshake and then it most likely would have to rely on the global table that is not by the -module instances, and that effectually would end up creating an observable, mutable states for -not even the realm, but for the whole agent. And that is problematic, so it’s -- like, this is a -Wasm concern, but it has impact on JavaScript or it can have an impact on JavaScript, depending -on which route Wasm takes. +MAH: Yeah, my question is somewhat related. Currently, if Wasm moves towards removing the handshake, then it most likely would have to rely on a global table that is not scoped by the module instances, and that effectively would end up creating observable, mutable state scoped not even to the realm, but to the whole agent. And that is problematic, so it’s -- like, this is a Wasm concern, but it has impact on JavaScript or it can have an impact on JavaScript, depending on which route Wasm takes.
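[Note: a minimal sketch of the “handshake initialization phase” SYG describes above, using only today’s Worker/postMessage APIs and a plain Map standing in for a per-thread view of the table; the actual shared-struct/WasmGC table and TLS APIs are not final, and the names below are invented for illustration.]

```ts
// worker.ts – illustrative only; a Map stands in for this thread's view of the table.
const threadLocalTable = new Map<string, (...args: any[]) => any>();

onmessage = ({ data }) => {
  if (data.type === 'bootstrap') {
    // Handshake step: populate this thread's view of the table before anything runs.
    threadLocalTable.set('log', (msg: string) => console.log(`[worker] ${msg}`));
    threadLocalTable.set('now', () => Date.now());
    postMessage({ type: 'ready' });
  } else if (data.type === 'run') {
    // Only now is it safe to run code that assumes the table is populated.
    threadLocalTable.get('log')!(`started at ${threadLocalTable.get('now')!()}`);
  }
};

// main.ts – send the bootstrap message, wait for the handshake, then run:
//   const worker = new Worker('worker.js');
//   worker.onmessage = ({ data }) => {
//     if (data.type === 'ready') worker.postMessage({ type: 'run' });
//   };
//   worker.postMessage({ type: 'bootstrap' });
```

The DX concern in the discussion is exactly this extra round trip: every thread that receives a shared instance has to complete the bootstrap step before any shared code can safely run.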
-SYG: I hear your concern. I did not share that concern, as you know. But I think the most -a productive thing is if you’re not already engaged in the CG, to bring this concern up there. -Because whatever we say here, we still don’t have any force over what they did. +SYG: I hear your concern. I did not share that concern, as you know. But I think the most productive thing is, if you’re not already engaged in the CG, to bring this concern up there. Because whatever we say here, we still don’t have any force over what they do. MAH: I mean, at some point, WebAssembly is going to be exposed to the JavaScript realm. Is that a host question, then? SYG: What do you mean it’s a host question for, like, blink? -MAH: Yeah, so it’s WebAssembly’ is technically something introduced by being better of the -JavaScript engine. +MAH: Yeah, so it’s -- WebAssembly is technically something introduced by the embedder of the JavaScript engine. -SYG: Well, in the web setting that is true. In other settings, JS is not the embedder of Wasm. -It’s not true everywhere, anyway. But in the web it is. +SYG: Well, in the web setting that is true. In other settings, JS is not the embedder of Wasm. It’s not true everywhere, anyway. But in the web it is. -MAH: Yeah, we’re going to continue engaging there. We had a condition, I hope, that they -- that -we can continue engaging there, because that’s -- it would be a shame to have to -- for -environments are concerned about global state like that, to have the remove Wasm for it. +MAH: Yeah, we’re going to continue engaging there. We had a condition, I hope, that they -- that we can continue engaging there, because it would be a shame for environments that are concerned about global state like that to have to remove Wasm for it. -SYG: So I think this global mutable state is a concern from you, I’m month sure where you should -best raise that. If it’s -- maybe you’re right, though, it is a -- you know, if you get the -HTML side to agree that this is a concern, a principle they want to uphold, that might have -some force on what Wasm does. But I don’t think that’s a widely shared concern everywhere. +SYG: So I think this global mutable state is a concern from you; I’m not sure where you should best raise that. If it’s -- maybe you’re right, though, it is a -- you know, if you get the HTML side to agree that this is a concern, a principle they want to uphold, that might have some force on what Wasm does. But I don’t think that’s a widely shared concern everywhere. MAH: Yeah. I mean, WebAssembly is not web only, so that’s the concern. @@ -322,30 +263,15 @@ SYG: True too. MAH: All right, thanks. -DMM: So you mentioned condition variables, and I’m concerned that if you don’t allow blocking on -the main thread, they’re going to be extremely difficult to use from that main thread to get -the coordination of releasing the locks, waiting and reacquiring the locks to be correct. I -wonder if we should actually be aiming for some higher level constructs that will be less prone -to getting it very wrong.
-SYG: The main thread will basically need special handling for all of this. As it is today, the -main thread can’t straightforwardly use mutexes because it can’t block, unless it wants to -emulate blocking. That will remain the case, so your concern is valid, and I think the -- -these APIs will still -- like, most of the uses will be workers with other workers. Not -workers with the main thread. The main thread would just need to be special. +SYG: The main thread will basically need special handling for all of this. As it is today, the main thread can’t straightforwardly use mutexes because it can’t block, unless it wants to emulate blocking. That will remain the case, so your concern is valid, and I think the -- these APIs will still -- like, most of the uses will be workers with other workers. Not workers with the main thread. The main thread would just need to be special. -DMM: I tend to agree, but I think something like a blocking queue or something like that is -easier to implement with a minimal sort of timeout for getting things rather than -- +DMM: I tend to agree, but I think something like a blocking queue or something like that is easier to implement with a minimal sort of timeout for getting things rather than -- -SYG: Thank you for calling that out. That’s an even longer term proposal, to have something like -concurrent collections. +SYG: Thank you for calling that out. That’s an even longer term proposal, to have something like concurrent collections. -MM: Okay, so first of all, let me say I appreciate the Wasm update, in particular the -- seeing -progress on was WasmGC is very exciting, I’m been excited about WasmGC from the very beginning -did, this concurrency model in in general shared memory multithreading is perfectly reasonable -and attractive thing for the growth of Wasm. It’s very much in line with the design sense of -Wasm. It’s completely abhorrent to the design sense of JavaScript as well as the contamination -that it would do the existing ecosystem, as I’ve explained before. So I do understand that JavaScript and Wasm are going to -- you know, do co-exist and will continue to co-exist after shared state multithreading getting introduced to Wasm. I think it’s really important to keep it out of JavaScript. That I think that the shared structs proposal should, as a proposal for JavaScript should never happen, and that the -- if the shared structs are only on the Wasm side, well, the Wasm side already has its own story for providing behavior on the Wasm side as well, so the whole issue of having functions on the JavaScript side, having behavior somehow associated on the JavaScript side, can be avoided if we simply avoid shared structs completely on the JavaScript side. And finally, the kind of engine integrity concerns that motivated you in particular to recommend the the creation of TG3, I think any introduction, any further introduction and shared state multithreading into JavaScript, especially of on the heap with regard to objects that are visible to WeakMaps, et cetera, I think those things should be very scary from an engine integrity point of view, and this should be taken to TG3 for critical review on those grounds for exactly the reasons that motivated you to want TG3 created in the first place. 
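A minimal sketch of the constraint DMM and SYG discuss above: blocking waits are disallowed on the browser main thread, so main-thread code has to fall back to `Atomics.waitAsync` (or message passing), while workers can block with `Atomics.wait`. This uses only existing standard APIs and is not part of the proposal being presented:

```js
const sab = new SharedArrayBuffer(4);
const flag = new Int32Array(sab);

// In a worker: blocking is allowed, so mutex/condition-variable style waiting
// can be built on Atomics.wait.
function workerWait() {
  Atomics.wait(flag, 0, 0); // blocks until notified, as long as flag[0] is still 0
}

// On the main thread: Atomics.wait throws, so the "wait" has to be emulated
// asynchronously instead.
async function mainThreadWait() {
  const { async, value } = Atomics.waitAsync(flag, 0, 0);
  if (async) await value; // resolves once another thread notifies
}

// Whoever changes the shared value wakes the waiters.
function signal() {
  Atomics.store(flag, 0, 1);
  Atomics.notify(flag, 0);
}
```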
+MM: Okay, so first of all, let me say I appreciate the Wasm update, in particular the -- seeing progress on WasmGC is very exciting, I’ve been excited about WasmGC from the very beginning. This concurrency model, and in general shared memory multithreading, is a perfectly reasonable and attractive thing for the growth of Wasm. It’s very much in line with the design sense of Wasm. It’s completely abhorrent to the design sense of JavaScript, as is the contamination that it would do to the existing ecosystem, as I’ve explained before. So I do understand that JavaScript and Wasm are going to -- you know, do co-exist and will continue to co-exist after shared state multithreading gets introduced to Wasm. I think it’s really important to keep it out of JavaScript. I think that the shared structs proposal, as a proposal for JavaScript, should never happen, and that the -- if the shared structs are only on the Wasm side, well, the Wasm side already has its own story for providing behavior on the Wasm side as well, so the whole issue of having functions on the JavaScript side, having behavior somehow associated on the JavaScript side, can be avoided if we simply avoid shared structs completely on the JavaScript side. And finally, the kind of engine integrity concerns that motivated you in particular to recommend the creation of TG3, I think any introduction, any further introduction of shared state multithreading into JavaScript, especially on the heap with regard to objects that are visible to WeakMaps, et cetera, I think those things should be very scary from an engine integrity point of view, and this should be taken to TG3 for critical review on those grounds for exactly the reasons that motivated you to want TG3 created in the first place. SYG: I have a reply, but I see other replies on the queue. @@ -353,7 +279,7 @@ DRR: Is your suggestion that shared structs can only be created by wasm and cons MM: So the -- so I don’t know. Let me float a hypothesis that may or may not be consistent with the rest of the picture, and maybe you can tell me. The hypothesis is that, you know, the host already exposes host objects to JavaScript, as long as the -- what’s necessary on the JavaScript side to account for the behavior of an object exposed to JavaScript as if it is, you know, in the category of host object, as long as that’s consistent with all of the constraints that anything exposed as the host object in the spec right now must maintain. And excuse me for using non-spec terminology. I know the host object is no longer the terminology. But all the object invariants, et cetera, that apply to these things, the concurrency of what happens on the host side of invoking a host object isn’t necessarily visible to JavaScript and hopefully can be -- can remain non-visible to JavaScript, so I think the answer is, yes, they could be accessible as things for JavaScript to invoke, but to keep all of the shared state multithreading out of JavaScript and not have it infect the JavaScript spec itself. The main concern there would be the TG3 one, which is: is even that level of interoperation with Wasm dangerous to JavaScript at the integrity level? And for that one, I do not know the answer. -KG: Yeah, so, Mark, I first want to express agreement with the concerns that this is, like, a very scary thing to do and kind after different from something in JavaScript before, which with the exception of course of SharedArrayBuffers, which are multithread state.
But I do want to really strongly disagree with the thesis that therefore we should not do it. I think that the threading story in JavaScript is currently very poor and that that is, like, doing major harm to users of the web. It causes people’s experience to be worse in a really concrete way, that using the web is bad because of our failure to make threading more convenient in JavaScript. And shared state is a big part of how one makes threading more convenient. So I think this is a really, really important direction for us to go, despite sharing your concerns about the security story, both in terms of the complexity and potential for bugs in browser engines and the additional complexity for JavaScript authors and people trying to analyze JavaScript programs. I think it’s just too important not to do, despite sharing those concerns. +KG: Yeah, so, Mark, I first want to express agreement with the concerns that this is, like, a very scary thing to do and kind of different from anything in JavaScript before, with the exception of course of SharedArrayBuffers, which are multithreaded state. But I do want to really strongly disagree with the thesis that therefore we should not do it. I think that the threading story in JavaScript is currently very poor and that that is, like, doing major harm to users of the web. It causes people’s experience to be worse in a really concrete way, that using the web is bad because of our failure to make threading more convenient in JavaScript. And shared state is a big part of how one makes threading more convenient. So I think this is a really, really important direction for us to go, despite sharing your concerns about the security story, both in terms of the complexity and potential for bugs in browser engines and the additional complexity for JavaScript authors and people trying to analyze JavaScript programs. I think it’s just too important not to do, despite sharing those concerns. MM: We are in complete disagreement here. The experience with languages that have shared state multithreading, in particular, conventional shared state multithreading with fine-grain locking, which is what we’re talking about here, like Java and C# and others, is that a large number of programmers at the same skill level as the average JavaScript programmer -- and even at substantially higher skill levels -- feel like they can, you know, look at this, think they can program it correctly and make a mess. The history of programs in shared memory multithreading languages has just been awful with regard to how bug prone they are, both in regard to inconsistency and in regard to deadlock. And the -- if we were talking about something like the Rust approach to shared memory multithreading, that would be very different, but we’re not. And obviously it would be -- it would be a tremendous research effort to try -- to try to conceive of something like that in JavaScript and probably would not work. I think that the friendliness of JavaScript to the web and the fact that so much code out there written by programmers of normal skill works is largely to the credit of the communicating event loop concurrency model, especially communicating event loops with promises, the whole -- you know, the paradigm that JavaScript really brought to the world has just been incredibly more successful at enabling programmers at reasonable scale to write code that deals with concurrency.
And I think SharedArrayBuffers was a terrible, terrible mistake, and I think so far the reason it hasn’t caused more disaster is because it’s so unusable in its current form that it largely goes unused. And every step that we make towards making it more usable will create more actual practical problems on the web. @@ -371,98 +297,53 @@ MM: I did not -- in that case, I did not understand your explanation. SYG: Let me try again. If you have Wasm objects that are shared, you have to get them out of Wasm to do something useful with them such as pass them to the web APIs or other embedder APIs to actually have effects on the world like I/O. Once you get them out, you would need to explain some behavior that those Wasm objects on the JS side have. You would need to be able to get data out of those objects. You would need to post message them. You would need to possibly pass them back into Wasm. So there’s no way to contain the observation of the parallelism and the non-determinism, because you need to actually get stuff out -- you have to get data out to -- unless you’re saying the boundary itself is restricted. Then we can have your world, but that ship has sailed. -MM: No, the -- let me repeat something that I explained, that I said in an answer to an earlier question, that itself I’m uncertain about. But my hypothesis is that the Wasm shared objects can be exposed to JavaScript as if they’re host objects without any change to the JavaScript language, that they can fit within all of the constraints that the JavaScript language specifies that host objects must be constrained by, the object invariants, et cetera, and that the JavaScript’s point of view, the -- you know aux of the concurrency in those host objects is internal to those host objects, and if the JavaScript language did not -have to be aware of that concurrency. +MM: No, the -- let me repeat something that I explained, that I said in an answer to an earlier question, that itself I’m uncertain about. But my hypothesis is that the Wasm shared objects can be exposed to JavaScript as if they’re host objects without any change to the JavaScript language, that they can fit within all of the constraints that the JavaScript language specifies that host objects must be constrained by, the object invariants, et cetera, and that from JavaScript’s point of view, the -- you know, all of the concurrency in those host objects is internal to those host objects, and the JavaScript language would not have to be aware of that concurrency. -SYG: Then it sounds like your concern is spec and purity and not worried about what might happen -in the ecosystem. +SYG: Then it sounds like your concern is spec purity and you’re not worried about what might happen in the ecosystem. -MM: No, I am worried about what might happen in the ecosystem. The particular -- that -particular way of co-existing keeps all behavior, all shared -- all of the expression of -behavior under concurrency on the Wasm side. It does not -- it never has a JavaScript function -having to think about the shared state concurrency, having to think about the, you know -- the -locking versus race conditions. You keep all of that on the Wasm side of the behavior of those -objects. And you expose red safe objects to the JavaScript side. +MM: No, I am worried about what might happen in the ecosystem. The particular -- that particular way of co-existing keeps all behavior, all shared -- all of the expression of behavior under concurrency on the Wasm side.
It does not -- it never has a JavaScript function having to think about the shared state concurrency, having to think about the, you know -- the locking versus race conditions. You keep all of that on the Wasm side of the behavior of those objects. And you expose thread safe objects to the JavaScript side. -SYG: That is not possible. Like, you can export Wasm functions that are callable from JS that -can act on Wasm objects that exhibit data races. +SYG: That is not possible. Like, you can export Wasm functions that are callable from JS that can act on Wasm objects that exhibit data races. -MM: You can. The -- I mean, you know, you can write buggy code in anything. The idea would be -that the -- to -- is that on the Wasm side, the way you use -- the way you would use this -co-existence is to do all of your concurrency handling and variant maintenance on the Wasm side -of the behavior and expose to the JS side APIs implemented on the Wasm side where the JS side -just sees thread safe APIs. +MM: You can. The -- I mean, you know, you can write buggy code in anything. The idea would be that the -- to -- is that on the Wasm side, the way you use -- the way you would use this co-existence is to do all of your concurrency handling and invariant maintenance on the Wasm side of the behavior and expose to the JS side APIs implemented on the Wasm side where the JS side just sees thread safe APIs. RPR: Okay, we are almost at time. And we have both KG and DE in the queue. -KG: Yeah, so I guess I was mostly just restating what RBN said. MM, you’re -correct that JavaScript model of concurrency has worked really well for it and it’s enabled -people to write programs - not always bug-free programs, because the concurrency still gives issues - -but it’s enabled regular people to write concurrent programs, and that’s great. But that’s not -enough. We need parallelism. Concurrency is just not sufficient. The experience of a user of a web page if the web page does not have a good way to be parallel is worse, because everything the contending for the main thread including the UI. I don’t think we should consider that an acceptable state affairs. I think we really -do need to have some story for parallelism as well in JavaScript. Not just concurrency. +KG: Yeah, so I guess I was mostly just restating what RBN said. MM, you’re correct that JavaScript’s model of concurrency has worked really well for it and it’s enabled people to write programs - not always bug-free programs, because the concurrency still gives issues - but it’s enabled regular people to write concurrent programs, and that’s great. But that’s not enough. We need parallelism. Concurrency is just not sufficient. The experience of a user of a web page, if the web page does not have a good way to be parallel, is worse, because everything is contending for the main thread, including the UI. I don’t think we should consider that an acceptable state of affairs. I think we really do need to have some story for parallelism as well in JavaScript. Not just concurrency. MM: Once again, if a program doesn’t need to be correct, I can make it arbitrarily fast. -. -KG: It does need to be correct, and it also needs to be fast. And if there’s no way to do -parallelism, it can't be fast. And if it can be parallel, it’s possible - -difficult, but possible - to be both correct and fast, and that’s a state that is necessary. - -MM: The shared structs proposal is not a route for regular programmers to write code that is -correct. - -DE: Thanks for doing the extension.
So this proposal seems really good to me. I’m still curious which parts you’re going to put in -the MVP and which parts not. But even just starting with the Wasm API only part, I think that -would give some building blocks that would allow adoption of this within JavaScript in a way -that’s easier adopt than rewriting your whole program in Wasm. The shared struct type registry -and making prototype the actual -- the actual prototype of a JavaScript object, accessing the -thread local storage, that seems quite important to avoid the need for wrappers. And syntax -would -- I’m in favor of the syntax if we can figure it out. I’m glad -- I’m really glad that -you’re maintaining the correspondence of these two proposals and thinking about them together. - -SYG: Thanks. For what’s in the core MVP, that is still -- we’re still hashing that out with RBN -and -- yeah, with RBN. But I guess depending on Mark’s veto here, it is possible that we get -experience first via the JS-Wasm API and depending on how things go, that we -- like, that is -how you consume these things in the future, I think then we’re doing the language a disservice -and we’re doing our users a disservice, but we’re not closing off the capability. And frankly, -MM, that’s not in your purview the block on the Wasm side. So I feel like that is just -coming one way or another, so at least in response to Kevin, I think we will have this on the -web perhaps in a really crappy DX way, and maybe we can try to repair that later. - -MM: I’m sorry, just what in what I said sounded like there was something on the Wasm side I was -interested in blocking? +KG: It does need to be correct, and it also needs to be fast. And if there’s no way to do parallelism, it can't be fast. And if it can be parallel, it’s possible - difficult, but possible - to be both correct and fast, and that’s a state that is necessary. + +MM: The shared structs proposal is not a route for regular programmers to write code that is correct. + +DE: Thanks for doing the extension. So this proposal seems really good to me. I’m still curious which parts you’re going to put in the MVP and which parts not. But even just starting with the Wasm API only part, I think that would give some building blocks that would allow adoption of this within JavaScript in a way that’s easier to adopt than rewriting your whole program in Wasm. The shared struct type registry and making the prototype the actual -- the actual prototype of a JavaScript object, accessing the thread local storage, that seems quite important to avoid the need for wrappers. And syntax would -- I’m in favor of the syntax if we can figure it out. I’m glad -- I’m really glad that you’re maintaining the correspondence of these two proposals and thinking about them together. + +SYG: Thanks. For what’s in the core MVP, that is still -- we’re still hashing that out with RBN and -- yeah, with RBN. But I guess, depending on Mark’s veto here, it is possible that we get experience first via the JS-Wasm API, and depending on how things go -- like, if that is how you consume these things in the future, I think then we’re doing the language a disservice and we’re doing our users a disservice, but we’re not closing off the capability. And frankly, MM, it’s not in your purview to block the Wasm side. So I feel like that is just coming one way or another, so at least in response to Kevin, I think we will have this on the web perhaps in a really crappy DX way, and maybe we can try to repair that later.
+ +MM: I’m sorry, just -- what in what I said sounded like there was something on the Wasm side I was interested in blocking? SYG: Well, I heard one concern from you earlier that we ought to have more guard rails for the kind of correct programs that we enable JavaScript programmers to write, and shared-memory multithreading programming is taking a significant chunk of guardrail off in the name of performance, which you don’t find desirable. I don’t even disagree with that, and the need and demand is there and that’s why it’s being proposed in Wasm and originally why I proposed it in JS. And if the concern is whether this will proliferate wacky behaviors because of libraries that have wacky bugs on the web, that future may be coming anyway, and I think blocking the JS side of things does not address that concern, if that is your concern. -DE: Sorry, I also wanted to add that I found that the handshake part very interesting, the fact -that in WebAssembly, multithreading people are okay with this explicit handshake. And maybe -that gives us a way forward for the shared struct type registry, which is kind of one of the -difficult points. +DE: Sorry, I also wanted to add that I found the handshake part very interesting, the fact that in WebAssembly, multithreading people are okay with this explicit handshake. And maybe that gives us a way forward for the shared struct type registry, which is kind of one of the difficult points. -SYG: I did not hear. So that was a previous difficult point, but this seems more fundamental and -what MM said today in -- +SYG: I did not hear. So that was a previous difficult point, but this seems more fundamental and what MM said today in -- DE: Oh, sure. I was passing MM's point, this would be the next one, maybe. -SYG: Right, the global mutable state via the registry, if the handshake were acceptable, then -there will be less of a need for that. I agree. Yeah. +SYG: Right, the global mutable state via the registry, if the handshake were acceptable, then there would be less of a need for that. I agree. Yeah. ### Speaker's Summary of Key Points -SYG: Well, this is -- just an update. So no consensus was asked for. The -- I’m still -interested in bringing a proposal back for Stage 2, but Mark has telegraphed basically that he -will veto such a thing. So it may be unproductive. On the other hand, I have also the signal -that Mark may be in the minority here, and depending other discussions today, we’ll see how -that goes. +SYG: Well, this is -- just an update. So no consensus was asked for. The -- I’m still interested in bringing a proposal back for Stage 2, but Mark has telegraphed basically that he will veto such a thing. So it may be unproductive. On the other hand, I have also the signal that Mark may be in the minority here, and depending on other discussions today, we’ll see how that goes. ### Conclusion - ## iterator chunking for stage 1 + Presenter: Michael Ficarra (MF) -- [proposal]() -- [slides]() +- [proposal](https://github.com/michaelficarra/proposal-iterator-chunking) +- Slides: See Agenda MF: All right, this is chunking. Okay, so I’m trying to solve two problems in this proposal. The first problem is consuming non-overlapping subsequences of an iterator. For example, if you want to consume this iterator of digits, 0 through 9, two digits at a time, you would get a resulting iterator that yields arrays that are indicated by these orange outlines, so you would have an iterator that yields the array containing 0 and 1 and then yields the array containing 2 and 3 and so on.
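A minimal sketch of the chunking operation MF describes here (and of the related “windows” operation discussed below), written as plain generator functions; `chunks` and `windows` are illustrative stand-ins, not the proposal’s final API or semantics:

```js
// Yield non-overlapping chunks of up to `size` elements; the final chunk may be
// shorter when the underlying iterator runs out. How to treat a size of zero is
// one of the open questions, so this sketch simply rejects it.
function* chunks(iterable, size) {
  if (!Number.isInteger(size) || size <= 0) throw new RangeError('invalid chunk size');
  let chunk = [];
  for (const value of iterable) {
    chunk.push(value);
    if (chunk.length === size) {
      yield chunk;
      chunk = [];
    }
  }
  if (chunk.length > 0) yield chunk;
}

// Yield overlapping windows of exactly `size` elements, advancing by one
// element at a time (a step of one).
function* windows(iterable, size) {
  if (!Number.isInteger(size) || size <= 0) throw new RangeError('invalid window size');
  const window = [];
  for (const value of iterable) {
    window.push(value);
    if (window.length > size) window.shift();
    if (window.length === size) yield [...window];
  }
}

console.log([...chunks([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 2)]);
// [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
console.log([...windows([0, 1, 2, 3, 4], 2)]);
// [[0, 1], [1, 2], [2, 3], [3, 4]]
```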
And that’s how I’ve visualized things within this presentation. So if you wanted to consume things three at a time, you could also do that, this chunking operation is parameterized by the length of subsequences you want to consume. Notice how also once you reach the end of the iterator, you have to do something with the remaining items, if they don’t fully fill up your chunk, and we’ll get to that later, but I think that we should probably include those just as a smaller chunk. And similarly, if you were to do four, it would look like this. And also five. @@ -476,60 +357,59 @@ MF: So here is a summary of prior art in languages outside of JavaScript. This i MF: So, yeah, pretty big design space, but very common operation, very useful. So I do have preferences here which I’ll go over. I do think we should have a chunks method, which is that kind of specialization of windows. There are some other good names. You know, we can consider those later. I don’t have a preference on whether verb or noun naming is appropriate, but if anybody has relevant precedent, I would like to hear that. The final chunk can be truncated if necessary. I think that that’s non-problematic. I think the appropriate way to handle these invalid inputs is to throw, and on a chunk size of zero, I think the natural thing that falls out is infinite empty arrays. But I'm willing to compromise on such a thing. Oh, and the optional chunk size, I don’t have a preference on. As far as windows, I think it’s probably worthwhile to also have windows, the use cases seemed compelling enough to me. I don’t think that it’s necessary to actually support a step other than one. The vast majority of windows uses were using a step of one. And I think that kind of simplifies the things that we’re concerned with and we don’t have to be concerned with other really exotic use cases, and I think people can also simulate if they want to use a step other than one. -MF: And sliding in and out, I think I’ll leave that to somebody else to figure out later if that’s a -thing that we need to solve. +MF: And sliding in and out, I think I’ll leave that to somebody else to figure out later if that’s a thing that we need to solve. MF: That's my summary of this problem space that I wanted to explore. And I am looking for Stage 1. -DLM: We discussed this internally, we were in favor of this. I used chunking and sliding windows before. They both have value. And we support this for Stage 1. Thank you. +DLM: We discussed this internally, we were in favor of this. I used chunking and sliding windows before. They both have value. And we support this for Stage 1. Thank you. JHD: Yeah. So definitely supported for Stage 1 and as to no surprise, I am sure, I want the feature for arrays as well, not just not iterators, but that doesn’t block anything today, of course. Thank you. -KG: This is much less necessary on arrays because you can index around. - -LCA: An additional thing for the design space is whether the chunks are arrays or iterators this is not too relevant for synchronous iterators here, but for AsyncIterators it may be relevant whether the chunk is returned once the first element is yielded from the underlying iterator or ones for elements have been collected, you then yield the iterator that yields those elements immediately. Maybe that is an additional space for the design space. Also, very much in favour of this. - -SFC: Yeah. 
I looked through a lot of my codebase and found dozens and dozens of times when I use functions and chunks, 0 copy, converting from an array of bytes to an array of i32 or something like that. For windows, useful for looking at segments. If you have a list of breakpoints, for example. Also, for validating like if a list is sorted with windows length, those types. There’s a lot of use cases here and I definitely would like to see this built into the language. Because it seems useful. +KG: This is much less necessary on arrays because you can index around. -CDA: JRL with a support for both. LGH + 1 for Stage 1. +LCA: An additional thing for the design space is whether the chunks are arrays or iterators this is not too relevant for synchronous iterators here, but for AsyncIterators it may be relevant whether the chunk is returned once the first element is yielded from the underlying iterator or ones for elements have been collected, you then yield the iterator that yields those elements immediately. Maybe that is an additional space for the design space. Also, very much in favour of this. -DRR: Soft preference for throwing if the chunk size is zero. It feels like you get into trouble. But I don’t have a strong preference and I think I would support Stage 1 in general. +SFC: Yeah. I looked through a lot of my codebase and found dozens and dozens of times when I use functions and chunks, 0 copy, converting from an array of bytes to an array of i32 or something like that. For windows, useful for looking at segments. If you have a list of breakpoints, for example. Also, for validating like if a list is sorted with windows length, those types. There’s a lot of use cases here and I definitely would like to see this built into the language. Because it seems useful. -LCA: + 1 for Daniel’s point about throwing. +CDA: JRL with a support for both. LGH + 1 for Stage 1. -CDA: Queue is clear. +DRR: Soft preference for throwing if the chunk size is zero. It feels like you get into trouble. But I don’t have a strong preference and I think I would support Stage 1 in general. + +LCA: + 1 for Daniel’s point about throwing. + +CDA: Queue is clear. MF: On that point, I would say that it seems like – yeah. Really 50-50 here. For prior art on how to handle zero, if anybody else has more information on why you would have the preference of throwing or why you might have the other preference of infinite empty arrays, that would be helpful, put it on the issue tracker or something. We can have that discussion and try to figure that out. -CDA: Okay. Nothing else in the queue. Sounds like you have ample support for Stage 1. No objections I am hearing or seeing in the room. MF, did you want to dictate any key points summary, conclusion for the notes? +CDA: Okay. Nothing else in the queue. Sounds like you have ample support for Stage 1. No objections I am hearing or seeing in the room. MF, did you want to dictate any key points summary, conclusion for the notes? + ### Speaker's Summary of Key Points + At Stage 1, there’s still so much up in the air. It seems like there was strong support for exploring both those directions, the non-overlapping and the overlapping subsequence problems + ### Conclusion -* Stage 1 + +- Stage 1 ## Can we reach consensus on what is Consensus? 
+ Presenter: Michael Saboff (MLS) - [slides](https://github.com/msaboff/tc39/blob/master/TC39%20Consensus.pdf) MLS: Okay, so what I have today is kind of a meta discussion, specifically I wanted to talk about how we work with regards to consensus and how it applies to TC39. So you look in the dictionary and you find a bunch of different definitions for consensus. The word consensus is actually from the Latin and it means agreement. -MLS: I provided a few of the dictionary definitions, generally accepted opinion, judgment arrived at with most of those concerned. So on and so forth. There are some definitions that use the word unanimity. Since, you know, we’re part of ECMA, TC39, well, what does ECMA have to say about consensus? And it turns out -that ECMA doesn’t really say much about consensus. In fact, it’s silent. But in the ECMA rules, we read three sections here where it talks about voting. Majority voting by TC members, each member only has one vote. Recommended that in the course that voting is -- I guess you’d say that we try not to use voting. It’s kind of frowned upon, but if needed, we use votes. +MLS: I provided a few of the dictionary definitions, generally accepted opinion, judgment arrived at with most of those concerned. So on and so forth. There are some definitions that use the word unanimity. Since, you know, we’re part of ECMA, TC39, well, what does ECMA have to say about consensus? And it turns out that ECMA doesn’t really say much about consensus. In fact, it’s silent. But in the ECMA rules, we read three sections here where it talks about voting. Majority voting by TC members, each member only has one vote. Recommended that in the course that voting is -- I guess you’d say that we try not to use voting. It’s kind of frowned upon, but if needed, we use votes. MLS: And then sometimes the output of a TC is actually a report, and there may be the desire for a minority report if there’s some disagreement among a committee. So let’s talk about what we do here at TC39. In our terminology. Now, in most cases, we do follow the notion of general agreement. Oftentimes, at the end of a presentation, we hear, do we have consensus for X, advancement for stage 2, blah blah, blah, and we look for delegates to explicitly support something, and somebody puts a thumbs up in TCQ and that seems all well and good. Although, we now rarely ask “does anyone block consensus”, we changed the terminology there and say “withhold consensus”, or when somebody asks do we have consensus, somebody may reply, "I withhold consensus and, it’s a positive way to ask that question. Fundamentally, our decision making process at TC39 is unanimity. We must all agree for something to move forward, and one (indiscernible) -MLS: Ontarioer can block consensus, and that’s what I want to speak about today. Basically, you know, "I withhold consensus" is identical to a veto. And I don’t think that should be contentious, that statement. A single member in the committee has the power to decide what we do or actually in most cases what we don’t do. I don’t want to impugn at this point the motives of anybody that’s used that, this meeting or other meetings. But we can all probably think of past instances in our own, you know, review of things where we would question the motives of somebody that would want to block something. And for me, it’s more of a principle and the general impact of the working atmosphere of the committee. 
Now there are some people -that may say, well, but I’m right.Thoreau wrote an essay called on the duty of civil disobedience, and there’s a quote in there that says "any man more right than his neighbor constitutes a majority of one already". I don’t think that this is a proper application of Thoreau in our committee because he wrote this essay specifically to talk about the evils of slavery and his disagreement with the Mexican-American war. That was in the 1800s. So I doubt that our deliberations have the same moral considerations, although some may disagree with me. +MLS: Ontarioer can block consensus, and that’s what I want to speak about today. Basically, you know, "I withhold consensus" is identical to a veto. And I don’t think that should be contentious, that statement. A single member in the committee has the power to decide what we do or actually in most cases what we don’t do. I don’t want to impugn at this point the motives of anybody that’s used that, this meeting or other meetings. But we can all probably think of past instances in our own, you know, review of things where we would question the motives of somebody that would want to block something. And for me, it’s more of a principle and the general impact of the working atmosphere of the committee. Now there are some people that may say, well, but I’m right.Thoreau wrote an essay called on the duty of civil disobedience, and there’s a quote in there that says "any man more right than his neighbor constitutes a majority of one already". I don’t think that this is a proper application of Thoreau in our committee because he wrote this essay specifically to talk about the evils of slavery and his disagreement with the Mexican-American war. That was in the 1800s. So I doubt that our deliberations have the same moral considerations, although some may disagree with me. -MLS: There may be some that we view that it’s our individual responsibility to safeguard JavaScript -for the future. I actually see that as one of my responsibilities. But as a member of the larger committee. That we need to work together with other members. Now, even if one has honorable intentions when they withhold consensus, there’s a dark side to this sole dissenter policy we use. At times, I believe that some members have weaponized this withholding consensus and to step forward as an autocrat for the current topic, whatever the topic is. These are usually rare cases of withholding consensus where it seems in my mind, it seems more like a code of conduct violation than it is a good decision-making process. So here are some of the issues that I have with our current consensus process. And these are my observations. +MLS: There may be some that we view that it’s our individual responsibility to safeguard JavaScript for the future. I actually see that as one of my responsibilities. But as a member of the larger committee. That we need to work together with other members. Now, even if one has honorable intentions when they withhold consensus, there’s a dark side to this sole dissenter policy we use. At times, I believe that some members have weaponized this withholding consensus and to step forward as an autocrat for the current topic, whatever the topic is. These are usually rare cases of withholding consensus where it seems in my mind, it seems more like a code of conduct violation than it is a good decision-making process. So here are some of the issues that I have with our current consensus process. And these are my observations. 
MLS: Typically withholding consensus is usually by a smaller number of committee members. And I would add that those withhold consensus, they tend to be more vocal. Maybe that’s their personality. I think a lot has to be their time on the committee, and they feel comfortable speaking up. I’m going to say that it appears that some also think that they wield a greater authority than others on the committee. Now, I do want to say that there are certainly people that have been on this committee far longer than I have, and I think I’m pushing eight years at this point, and they have experience working not only on the committee, but in the language. They understand JavaScript probably better than most people that attend. And there are cases where a single withhold has ended discussion of a particular topic, not just for that meeting, but going forward. The topic typically is not brought up again in some cases. I want you to consider a newcomer, somebody that, you know, they come to a meeting for the first time, and they see this single dissent policy in action, and I think that there are some cases where they would be -- it might energize them, hey, I’m part of this committee, and I have some authority to change things by blocking, but I think it’s more of the case that someone comes to committee the first time, they’re starting to feel things out, how does the committee work, where to I fit in, when is it okay for me to talk and things like that, and it may turn them off. And the last issue is I actually think that the lone veto policy, it hurts the working relationship with our committee. I will stipulate that we have competitors, I work for a browser -- a company that makes a browser. And there are other people in this room, I’m sitting next to one, his company makes a browser and SYG, we actually have a pretty good relationship with these folks, but we have different aims. There’s also people that have experience with using JavaScript as developers, and it’s part of their day-to-day work, and they come with different desires and different backgrounds than I do. And we come together from these diverse JavaScript backgrounds to try to shepherd the language in a way that benefits the whole community of developers and implementers. I might add that JavaScript, originally it was designed for web browsers, but it’s also used in servers and it’s also used in embedded devices. That JavaScript truly is probably the most used programming language in the world, and it’s being used in more and more cases. -MLS: So to be clear, I want to make it very clear that I am not advocating that we increase the number of proposals that we move forward in the stage process – that we open the floodgates, as it were. And that means I don’t think we should reduce the rigor with which we decide what’s in our standards. But I think that we need to collectively see if we can come up with a better – some modifications to our consensus -process. I present four options here. There could be others. I’m fully aware that any change to our consensus process requires the current consensus process to make that change. That we need 100% agreement if we make any change to the consensus process. So in some aspect, I’m providing some information as – how we can maybe modify things or not, but I don’t know how successful it would be. The four options would be, we maintain our current process. One person can veto any proposal or any other current thing we discuss. 
When I say that, there is time, like I believe it was the last meeting, we were talking about the naming of the new stage we were adding that we used consensus to lower the bar to a majority process and use the majority process that we agreed upon via our current con says to actually reach the name that we would use for the new stage. So there are times when we do temporarily reduce the threshold for a decision to be made. So we can make in our current process we will do that occasionally. We can increase the number of consensus withholders required, and make it more than one, and we can talk about what that would be. But, you know, we just increase it. Voting, okay, so voting, -If we vote, what’s the majority? You know, is it a simple majority? Is it some kind of supermajority? And then general consensus, which actually I believe it’s how most of the other TCs work, and lot of other standards organizations work, general consensus, we move forward if there’s no or minimal dissent, and if we stop moving forward, we stop with a decision or we don’t agree to a decision if there’s significant concerns. I favor the fourth option here. But that’s -- this is up for discussion, and I brought this for discussion. +MLS: So to be clear, I want to make it very clear that I am not advocating that we increase the number of proposals that we move forward in the stage process – that we open the floodgates, as it were. And that means I don’t think we should reduce the rigor with which we decide what’s in our standards. But I think that we need to collectively see if we can come up with a better – some modifications to our consensus process. I present four options here. There could be others. I’m fully aware that any change to our consensus process requires the current consensus process to make that change. That we need 100% agreement if we make any change to the consensus process. So in some aspect, I’m providing some information as – how we can maybe modify things or not, but I don’t know how successful it would be. The four options would be, we maintain our current process. One person can veto any proposal or any other current thing we discuss. When I say that, there is time, like I believe it was the last meeting, we were talking about the naming of the new stage we were adding that we used consensus to lower the bar to a majority process and use the majority process that we agreed upon via our current con says to actually reach the name that we would use for the new stage. So there are times when we do temporarily reduce the threshold for a decision to be made. So we can make in our current process we will do that occasionally. We can increase the number of consensus withholders required, and make it more than one, and we can talk about what that would be. But, you know, we just increase it. Voting, okay, so voting, If we vote, what’s the majority? You know, is it a simple majority? Is it some kind of supermajority? And then general consensus, which actually I believe it’s how most of the other TCs work, and lot of other standards organizations work, general consensus, we move forward if there’s no or minimal dissent, and if we stop moving forward, we stop with a decision or we don’t agree to a decision if there’s significant concerns. I favor the fourth option here. But that’s -- this is up for discussion, and I brought this for discussion. 
JHD: So what I wrote was the -- like, a temporary cost of a delay or block or something versus, like, the eternal cost of shipping something that’s harmful to someone. The value that I see from our current consensus process is that everyone can’t be in the room. Everybody who is affected by JavaScript isn’t necessarily represented in the room either. But the hope is that for everybody out there, at least one person in this room can represent their interests in some way. And allowing one person to essentially stop progress on something and ensure that their considerations are taken into account, makes sure that even the minority is given a voice in that regard. The options you presented up here, I mean, I’m sure we could come up with more, but just these, the issue with B is that some companies have a lot of people in the room. So disadvantaging invited experts who are one person or companies that only have one delegate in favor of the companies that can afford to send or sponsor more people to join, like that’s tricky. And then if you try and restrict to “by member company”, what do you do about invited experts? Then, you know, and what about non-profits, like the OpenJS Foundation that represents many interests? It’s sort of difficult to quantify how much impact anyone should have, and so for me, it seems like allowing one person to veto it makes -- you know, whatever terminology you want to use for that, is making sure that no one’s overlooked. That no one is crushed by the wheel of human -- the natural human desire for progress. Whether it’s better or worse. @@ -549,8 +429,7 @@ MLS: Right. JHD: So that is absolutely a tradeoff of option A here. That we have to consider. But the -- generally when I look at two options, like, you know, the current process or something else, I would look at what is the worst and best case down each path. The worst case of the current path is that sometimes something that should happen doesn’t. And the best case is that things that shouldn’t happen, don’t happen. It’s not like I’ve thought out this best and worst statement to where I’m probably not being fully articulate, but I think I see that that as a worthy tradeoff, in that we can always revisit something, like, there’s -- the quote that I like is “in software, yes is permanent, no is temporary”, and I think that for JavaScript, that’s especially true. So we can always revisit a block. We cannot revisit something we’ve shipped. -MLS: So let me add just a couple comments. Sometimes that block ends up being permanent because -of the fortitude of the presenter and how they felt, you know, they were underappreciated. The other thing is that the tradeoff, if we do maintain the current process, is I think it has detrimental effect on newcomers +MLS: So let me add just a couple comments. Sometimes that block ends up being permanent because of the fortitude of the presenter and how they felt, you know, they were underappreciated. The other thing is that the tradeoff, if we do maintain the current process, is I think it has detrimental effect on newcomers JHD: I completely agree with all of that, but I think the same principle can apply if it’s in fact of value, then surely another presenter will materialize at times, right? A block is only permanent if its value is questionable. @@ -564,8 +443,7 @@ MLS: So we use D almost all the time. It is -- is there consensus, you know, do LEO: If we want to explore that, we want to present how we want option A from happening again. 
-MLS: And anything that we come up with, any way that by come up with it, there’s probably ways to -weaponize it. It becomes more difficult with some of the other -- with just one person, it’s pretty easy. I can decide I don’t like it and I’m going to withhold, withhold, withhold. If we increase it, now you have to get more people involved, so on and to sort and a majority and now you have to get a block. I don’t want us to become political, and I think in some ways we are political. +MLS: And anything that we come up with, any way that by come up with it, there’s probably ways to weaponize it. It becomes more difficult with some of the other -- with just one person, it’s pretty easy. I can decide I don’t like it and I’m going to withhold, withhold, withhold. If we increase it, now you have to get more people involved, so on and to sort and a majority and now you have to get a block. I don’t want us to become political, and I think in some ways we are political. LEO: Thank you. @@ -579,8 +457,7 @@ MLS: No. USA: There is a reply to that by NRO. Sorry, DE, we’ll get to your reply later, if that’s okay. -NRO: Yeah, if you’re concerned with, like, being a lone objector because other objectors are not -in the room, we can have rules there place of that. We can say if there’s a lone objector, like, other delegates have time until the meeting after to object or something like that. Like, once we -- like, once we agree on some sort of rule, like, applying it in a way that’s still equitable and doesn’t advantage some companies, some delegates compared to others, so it’s still doable. +NRO: Yeah, if you’re concerned with, like, being a lone objector because other objectors are not in the room, we can have rules there place of that. We can say if there’s a lone objector, like, other delegates have time until the meeting after to object or something like that. Like, once we -- like, once we agree on some sort of rule, like, applying it in a way that’s still equitable and doesn’t advantage some companies, some delegates compared to others, so it’s still doable. SFC: All right, yeah, just pointed out that the thing about the lone objector that RGN brought up, is that I think there’s definitely cases where, like, you know, the lone objectors have very good points, and, you know, like, if you were to almost temperature check a lot of delegates many the room, there might be other delegates that agree with them, but might not agree strong enough and want to, you know, put their position out there to actually join formally as a second objector. I know that that’s, you know -- I’ve sort of been in that situation before, you know, like I don’t like to be a lone objector for things other than things of internationalization concerns, because that’s what I’m here for. Right? So I think that that’s, you know -- like, if there is a lone objector, like, you know, I think that that’s a very strong, you know, position to have, and, like we should respect that because it already is kind of difficult to be that lone objector. @@ -600,13 +477,11 @@ MLS: Okay. PFC: Not having been a participant at the time that this model was established, I would like to understand the rationale better for why we went with this model in the first place. Is there anybody who can talk about that and then I might have some things to say after that, depending on whether it matches what my understanding is. -MM: I can speak somewhat to historical issues. 
WH is the only person I know of on the committee that has a longer historical memory than I do. I joined the committee in 2007. And this was already the rule at the time that I joined. But the history right at that moment is very interesting because shortly before I joined, most of the rest of the committee, in fact, all but one of the committee, wanted to go forward with ECMAScript 4 as the next version of JavaScript. And there was one sole objector at one point, which was -DC, who I think very correctly said this -- you know, "this language proposal is really bad", and like the Henry Fonda character in 12 Angry Men, having blocked unanimity with an articulate objection, I think that’s key to why number D is still the best characterization of our process, but he then gradually convinced other members of the committee to move over 20 to his position, and we collaborated ECMAScript 3.1, which eventually became ECMAScript 5. If we had even a requirement of a vote of two, at the moment when everybody wanted to do ECMAScript 4 except for DC, we would have ended up with ECMAScript 4, which I think this very much speaks to JHD’s point regarding “yes is permanent, no is temporary” in software. The way I would put it is everything we -- you know, every possible process we might come up with can be weaponized. The only thing that restrains weaponization of a process is social norms, which we need to lean into perhaps more than we have been. But the social norms around this -- from the beginning is that if you’re a lone objector, it’s your responsibility to very clearly address your objection in enough -- you know, clearly enough that people understand what it is they need to argue with, and that has been largely the norm that I’ve seen us engage in with regard to lone objectors. The -- so the way I would put JHD’s point is that the rule that we’ve got now fails safe. Any other rule fails unsafe, but all rules are able to be weaponized, so under weaponization, something that fails safe is better than the alternative. +MM: I can speak somewhat to historical issues. WH is the only person I know of on the committee that has a longer historical memory than I do. I joined the committee in 2007. And this was already the rule at the time that I joined. But the history right at that moment is very interesting because shortly before I joined, most of the rest of the committee, in fact, all but one of the committee, wanted to go forward with ECMAScript 4 as the next version of JavaScript. And there was one sole objector at one point, which was DC, who I think very correctly said this -- you know, "this language proposal is really bad", and like the Henry Fonda character in 12 Angry Men, having blocked unanimity with an articulate objection, I think that’s key to why number D is still the best characterization of our process, but he then gradually convinced other members of the committee to move over 20 to his position, and we collaborated ECMAScript 3.1, which eventually became ECMAScript 5. If we had even a requirement of a vote of two, at the moment when everybody wanted to do ECMAScript 4 except for DC, we would have ended up with ECMAScript 4, which I think this very much speaks to JHD’s point regarding “yes is permanent, no is temporary” in software. The way I would put it is everything we -- you know, every possible process we might come up with can be weaponized. The only thing that restrains weaponization of a process is social norms, which we need to lean into perhaps more than we have been. 
But the social norms around this -- from the beginning is that if you’re a lone objector, it’s your responsibility to very clearly address your objection in enough -- you know, clearly enough that people understand what it is they need to argue with, and that has been largely the norm that I’ve seen us engage in with regard to lone objectors. The -- so the way I would put JHD’s point is that the rule that we’ve got now fails safe. Any other rule fails unsafe, but all rules are able to be weaponized, so under weaponization, something that fails safe is better than the alternative. PFC: So I actually did want to continue on my point about the rationale here. I’m not familiar with the details of the argument at the time about ECMAScript 4. But I guess more generally, my impression, and you can correct me if I’m wrong, was that this kind of process was instituted because no browser vendor wanted it to be possible for all of the other browser vendors to gang up on them to force something through that would, for example, work in an anticompetitive way. So I don’t know if that’s accurate. But if that was the goal, I definitely support that goal. I don’t think that should be possible. -MM: I can speak to that some. The -- I did not see that -- I mean, you know, I’ve had lots and lots of offline conversations. Obviously having been part of this process since 2007, lots and lots of conversations online and offline with many of the parties. I did not -- that particular thing as a rationale for this rule does not ring a bell, especially because, and this is a point I want to emphasize, it was always understood that any of the -- that any of the major browsers have a veto anyway. It was always understood that if something gets accepted in committee, and a browser maker says “well, I just won’t do that”, that it’s dead. -And we have had -- we’ve seen that in practice where one browser vendor has just objected to something and everybody else understood it was, you know -- it was dead if we couldn’t get the browser vendor to agree to it, and that was -- that was whether or not we have the lone dissenter rule. There’s this, you know, obvious dynamic in practice, which is without the up of all the major browsers, it doesn’t really matter what TC39 says. The reality is what the browser makers do. So all the browser makers have a single voice veto no matter what rule we adopt in committee. +MM: I can speak to that some. The -- I did not see that -- I mean, you know, I’ve had lots and lots of offline conversations. Obviously having been part of this process since 2007, lots and lots of conversations online and offline with many of the parties. I did not -- that particular thing as a rationale for this rule does not ring a bell, especially because, and this is a point I want to emphasize, it was always understood that any of the -- that any of the major browsers have a veto anyway. It was always understood that if something gets accepted in committee, and a browser maker says “well, I just won’t do that”, that it’s dead. And we have had -- we’ve seen that in practice where one browser vendor has just objected to something and everybody else understood it was, you know -- it was dead if we couldn’t get the browser vendor to agree to it, and that was -- that was whether or not we have the lone dissenter rule. There’s this, you know, obvious dynamic in practice, which is without the up of all the major browsers, it doesn’t really matter what TC39 says. The reality is what the browser makers do. 
So all the browser makers have a single voice veto no matter what rule we adopt in committee. MLS: MM, do you think that that has changed now that we have node and XS and the other engines? @@ -624,10 +499,9 @@ MLS: They embed, right. DE: But they could embed it in a way that violates the spec, because the spec does put constraints on posts. And they also are involved a lot with modules. -JHD: To be clear, node definitely has representation. Multiple of us are in the standards group -for OpenJS, so they definitely have representation. +JHD: To be clear, node definitely has representation. Multiple of us are in the standards group for OpenJS, so they definitely have representation. -MM: I’ll state what I think I observe just in terms of the social dynamics of the committee, I think in the absence of the current rule, not only would the major browser vendors have a veto, effectively, because everybody would understand that the thing is dead if it’s not implemented by all the major browser vendors, I think node would also effectively have a veto, if node says they won’t do it, somehow disable it, everybody would still understand it will therefore not actually be part of JavaScript no matter what TC39 says, I do not think based on the social dynamics that I’m see on TC39, I do not think that we would extend the same consideration to Moddable, much as I would like to think we should. I think that the view of JavaScript from embedded is dismissed by enough of the committee that in the absence of the current rules, I could imagine Moddable saying they won’t ship something as just being -- as not being taken to be a de facto veto under the -- under this more relaxed consensus rule. +MM: I’ll state what I think I observe just in terms of the social dynamics of the committee, I think in the absence of the current rule, not only would the major browser vendors have a veto, effectively, because everybody would understand that the thing is dead if it’s not implemented by all the major browser vendors, I think node would also effectively have a veto, if node says they won’t do it, somehow disable it, everybody would still understand it will therefore not actually be part of JavaScript no matter what TC39 says, I do not think based on the social dynamics that I’m see on TC39, I do not think that we would extend the same consideration to Moddable, much as I would like to think we should. I think that the view of JavaScript from embedded is dismissed by enough of the committee that in the absence of the current rules, I could imagine Moddable saying they won’t ship something as just being -- as not being taken to be a de facto veto under the -- under this more relaxed consensus rule. USA: Thank you, MM. Next we have a topic by DE. @@ -635,7 +509,6 @@ PFC: Sorry, I was not given the opportunity to finish my topic. My point about t USA: There’s a reply by LEO. - LEO: Okay, just to move on, one of the things that I’d just like to highlight, and I’m going to be short, less than 30 seconds, one of the things we are talking about a lot of concerns, if not all, most of them are valid, like from all the perspectives. 
But trying to go back and bring into the perspective of, like, why we are doing this consensus process, as we were discussing yesterday, there’s a whole importance of, like trying to find consensus so we can actually guarantee as a standards process that, like, things that we move on are actually going to have like commitment from all the participants of TC39, that things are going implemented and consistent and reliable as a standards process. Yes, we want to guarantee that part. This is actuality non-negotiable, and what we’re discussing here is how we actually mitigate bad actors that there’s been perceived, which I agree with Michael at this point, like, yes, there are some episodes that we recall, like might not be on everyone’s perspective, but I agree some -- many people here will recall some episodes that we feel like an aggression to the process, like, people bad acting to it. Thank you. USA: Thank you, Leo. DE, before we move on with the queue, we have under ten minutes left. I request you all to be quick. But, yeah, DE, please go on. @@ -658,14 +531,12 @@ MLS: And we can discuss this at a future date as well. CM, I agree with you in p USA: All right, the next on the queue, we have SYG. Again, I encourage you all to be brief. -SYG: Hello. Thank you, MLS, for bringing this topic. I also agree with -- I agree with your whole program, basically, that I think there is serious dysfunction and deficiencies in the process and we should try to improve it. I’ll give a little spiel on why I find the lone veto thing uncomfortable. I don’t think it’s actually the -- like, why do people not like stop energy? Personally, I think it’s because you get the sense that the person giving out stop energy is not open to incentives. Like, I’m here as part of my job. I’m not here trying to do what’s right in my heart by the language. I’m here representing a particular set of JS constituents that responds to a set of incentives, a lot of which is economic, like we want Chrome to succeed, we want web platform to succeed, and that is -- that kind of sets the rules of engagement. That means that I think -- I think that means that fundamentally, I would like to present myself, and I hope other people see me this way, I give evidence convincible. Where I find the lone veto really uncomfortable is that we give the veto power to folks who I do not feel respond to incentives. They are trying to do what is a deep conviction on what they think the language ought to do or ought to be that is at odds with what I see in the incentives, in the demand, in the data that we see out in the ecosystem, and if I don’t believe I can convince those people, that says it’s not productive to engage with those lone vetoes, and I find that deeply harmful to a standards body that is at the end of the day about businesses coming together to agree on an interoperable thing that hopefully encourages a platform, that flourishes with more new businesses get built, whatever, whatever, there’s obviously a neolib slant it to, but I’m not here because I love JavaScript. I’m here because I think this is good for the world in some material sense. And I would like to work with other people who also feel that way. And the easiest way for me to feel that I work with other people that feel that way is that they come with, I guess, you know, thing that are more rooted in something in the real -world than I believe this is the right thing and it looks right and things -- you know, strong convictions that just I don’t know how to work with. 
And I think that is the thing that even if we get rid of the lone veto process and we improve that, that is something I would like to -- maybe that’s the social norms that MM was talking about. That’s something I would like to see shifted. Because it’s difficult for know really weigh the different opinions if everyone is literally just supposed to be the same weight. But that’s not how the world actually works. For JS, even in our small corner of the world. Anyway, I think I’ve said enough. +SYG: Hello. Thank you, MLS, for bringing this topic. I also agree with -- I agree with your whole program, basically, that I think there is serious dysfunction and deficiencies in the process and we should try to improve it. I’ll give a little spiel on why I find the lone veto thing uncomfortable. I don’t think it’s actually the -- like, why do people not like stop energy? Personally, I think it’s because you get the sense that the person giving out stop energy is not open to incentives. Like, I’m here as part of my job. I’m not here trying to do what’s right in my heart by the language. I’m here representing a particular set of JS constituents that responds to a set of incentives, a lot of which is economic, like we want Chrome to succeed, we want web platform to succeed, and that is -- that kind of sets the rules of engagement. That means that I think -- I think that means that fundamentally, I would like to present myself, and I hope other people see me this way, I give evidence convincible. Where I find the lone veto really uncomfortable is that we give the veto power to folks who I do not feel respond to incentives. They are trying to do what is a deep conviction on what they think the language ought to do or ought to be that is at odds with what I see in the incentives, in the demand, in the data that we see out in the ecosystem, and if I don’t believe I can convince those people, that says it’s not productive to engage with those lone vetoes, and I find that deeply harmful to a standards body that is at the end of the day about businesses coming together to agree on an interoperable thing that hopefully encourages a platform, that flourishes with more new businesses get built, whatever, whatever, there’s obviously a neolib slant it to, but I’m not here because I love JavaScript. I’m here because I think this is good for the world in some material sense. And I would like to work with other people who also feel that way. And the easiest way for me to feel that I work with other people that feel that way is that they come with, I guess, you know, thing that are more rooted in something in the real world than I believe this is the right thing and it looks right and things -- you know, strong convictions that just I don’t know how to work with. And I think that is the thing that even if we get rid of the lone veto process and we improve that, that is something I would like to -- maybe that’s the social norms that MM was talking about. That’s something I would like to see shifted. Because it’s difficult for know really weigh the different opinions if everyone is literally just supposed to be the same weight. But that’s not how the world actually works. For JS, even in our small corner of the world. Anyway, I think I’ve said enough. DRR: I think I want to echo SYG’s sentiment there. One of the issues that I often find is if there is sort of like this disengagement following the block. There’s not a willingness to elaborate in some capacity. 
A block can be "I find this distasteful" and it would be okay if it was left at that. Or, like, that can be sort of a follow-up in some way. I often don’t feel like I can accomplish things because iit becomes a sort of chasing game of "come to the next committee meeting", try to hash it out, and then still not getting the level of detail that you’re expecting out of something like a block. So I think that that is another fundamental problem with the model as it stands. -CP: My comments about this mostly are on risk. Clearly we want to protect the outcome of what we do here. And it seems that some of the sentiments, and I do agree with them from the presentation from MLS, is can we get more people to put energy on this effort, can we get more people to participate or not feel that they are intimidated by this process is. So maybe there’s more practical things we can do in terms of maybe there is no veto on Stage 1, Stage 2. I also feel that sometimes when someone is dissenting on particular proposal, whether that’s a proposal in general or they have certain things that they want to be added to the proposal in order to advance, I don’t feel that we have a process today to document that in such a way that people who are following the proposal, even all the delegates that are not here for the presentations, to really follow up. So you have to go to the notes and see where the notes -are, try to understand what happened in the discussion, so I think we might be able to put in place a process that requires the champions of the proposal to have very well documented the dissent in the documentation that they have for the proposal, and maybe some of that will also help to alleviate some of these problems that we have. Because if you are weaponizing this process, well, that will get documented in the actual proposal, not just the notes. And so I think that there is an incentive for people to sort of avoid that kind of situation as well, if that’s the angle that they have, which I have seen some of it, but not really to the point that would be fatal for this committee. So I think more practical things like that would be things that we could implement really easy, and maybe see what has happened with that. - +CP: My comments about this mostly are on risk. Clearly we want to protect the outcome of what we do here. And it seems that some of the sentiments, and I do agree with them from the presentation from MLS, is can we get more people to put energy on this effort, can we get more people to participate or not feel that they are intimidated by this process is. So maybe there’s more practical things we can do in terms of maybe there is no veto on Stage 1, Stage 2. I also feel that sometimes when someone is dissenting on particular proposal, whether that’s a proposal in general or they have certain things that they want to be added to the proposal in order to advance, I don’t feel that we have a process today to document that in such a way that people who are following the proposal, even all the delegates that are not here for the presentations, to really follow up. So you have to go to the notes and see where the notes are, try to understand what happened in the discussion, so I think we might be able to put in place a process that requires the champions of the proposal to have very well documented the dissent in the documentation that they have for the proposal, and maybe some of that will also help to alleviate some of these problems that we have. 
Because if you are weaponizing this process, well, that will get documented in the actual proposal, not just the notes. And so I think that there is an incentive for people to sort of avoid that kind of situation as well, if that’s the angle that they have, which I have seen some of it, but not really to the point that would be fatal for this committee. So I think more practical things like that would be things that we could implement really easy, and maybe see what has happened with that. + CDA: If we’re trying to make a change to our process, I would like to see us articulate a precise problem definition. I think everybody has perhaps a different version of what the shape of the problem looks like. So it’s something I’d like to articulate specifically before we decide what we want to do about it, if anything. NRO: Yeah, MLS already quickly mentioned this, but, like, moving away from this model where a single person can object will have more people objecting. I think in this meeting, both in person and online, there are probably less than ten people that would be comfortable with blocking a proposal. I’m not one of those. And we already have, like -- we already discussed a couple meetings ago about, like, disagreeing with the proposal explicitly while blocking it, but it feels like daunting, and being, like, knowing that if we hold consensus for a prose poll without necessarily stopping it unless other people agree with me, that would help surfacing these disagreements. @@ -673,17 +544,17 @@ NRO: Yeah, MLS already quickly mentioned this, but, like, moving away from this USA: All right, thank you, NRO. That was all. MLS, would you like to make a conclusion? MLS: First of all, I’d like to thank the committee for listening to what I brought up. I believe there are some issues, but there’s issues on both sides. There’s concerns that we would allow things in a language of the fail safe that Mark talks about. But we -- I think we do need to address the issues raised. And I think it bears further discussion at future plenary meetings. No -- nothing concrete to -- for a conclusion. + ## ShadowRealms update + Presenter: Leo Balter (LEO) - [proposal](https://github.com/tc39/proposal-shadowrealm) - [slides](https://docs.google.com/presentation/d/1fd5-VKtl0LxYitLHr_bJ82_xaLs1w5xSmD0dCdh2TvU) -LEO: I’m going to move this eventually to CP. Hi, everyone. We are here again to talk about ShadowRealm. If I have the historic points, I think this proposal has been here for 15 years now. Is that correct? Yes, and -- all right. So a couple TC39 meetings ago, so ShadowRealm got demoted from Stage 3 to Stage 2. We didn’t have Stage 2.7 by that time. As long as I recall. But the whole idea was trying to make sure the Stage 3 would be meaningful, as readiness for implementation, in which we all agreed and, these are a screenshot from the nodes of the previous meeting and it consists of a note from Stage 2 with advancement to Stage 3 dependent upon having a list of suitable APIs exposed to ShadowRealm along with sufficient tests to ensure correct behavior in implementations. So this is the work that we’ve been -trying to do and making sure it is going well. So I’m just, again, capturing parts of the conclusions from that meeting. And what we want to do here is making sure we provide at least suitable APIs to be exposed to sufficient tasks. We presented a thread where we presented some lists and we already collected some feedback. 
And so most of, like, our update today is that, yes, we have documented the selection criteria. We completed the work. We have an initial list of API names to be included. And we increased the test coverage to match the list of API names that are included. CP will follow up with the rationale for the API inclusion and exclusion. We mostly focus on the known use case, if it preserves confidentiality, and if it operates with the boundary as a model we shaped within the EMCAScript proposal. And we can go and revisit that, but, like, for 262 itself, there is one actual change to report, but it’s mostly, like, that we already adapt a ShadowRealm spec text to the recent improvement for host ensure can compile string, and it’s just going to be an adaptation. +LEO: I’m going to move this eventually to CP. Hi, everyone. We are here again to talk about ShadowRealm. If I have the historic points, I think this proposal has been here for 15 years now. Is that correct? Yes, and -- all right. So a couple TC39 meetings ago, so ShadowRealm got demoted from Stage 3 to Stage 2. We didn’t have Stage 2.7 by that time. As long as I recall. But the whole idea was trying to make sure the Stage 3 would be meaningful, as readiness for implementation, in which we all agreed and, these are a screenshot from the notes of the previous meeting and it consists of a note from Stage 2 with advancement to Stage 3 dependent upon having a list of suitable APIs exposed to ShadowRealm along with sufficient tests to ensure correct behavior in implementations. So this is the work that we’ve been trying to do and making sure it is going well. So I’m just, again, capturing parts of the conclusions from that meeting. And what we want to do here is making sure we provide at least suitable APIs to be exposed and sufficient tests. We presented a thread where we presented some lists and we already collected some feedback. And so most of, like, our update today is that, yes, we have documented the selection criteria. We completed the work. We have an initial list of API names to be included. And we increased the test coverage to match the list of API names that are included. CP will follow up with the rationale for the API inclusion and exclusion. We mostly focus on the known use case, if it preserves confidentiality, and if it operates with the boundary as a model we shaped within the ECMAScript proposal. And we can go and revisit that, but, like, for 262 itself, there is one actual change to report, but it’s mostly, like, that we already adapted the ShadowRealm spec text to the recent improvement for HostEnsureCanCompileStrings, and it’s just going to be an adaptation.

-LEO: Yes, there are things that we are seeking today. One of them is a big question. For me, it’s like the question that, like, I’m just learning about the Stage 2.7, and I’m going to be asking everyone here, does -it -- like, the meaning of this, for what we have as the next steps, does it -- do we meet this proposal as a Stage 2.7? And also, like, we would love to get commitment from two implementers as we described from the previous conclusion to reveal and provide feedback about the HTML integration. Just want to be clear, part of this review is mostly going through what week, in which we are doing this work, but we also, like, given that this opportunity here to also consult with TC39. +LEO: Yes, there are things that we are seeking today. One of them is a big question.
For me, it’s like the question that, like, I’m just learning about the Stage 2.7, and I’m going to be asking everyone here, does it -- like, the meaning of this, for what we have as the next steps, does it -- do we meet this proposal as a Stage 2.7? And also, like, we would love to get commitment from two implementers as we described from the previous conclusion to reveal and provide feedback about the HTML integration. Just want to be clear, part of this review is mostly going through what week, in which we are doing this work, but we also, like, given that this opportunity here to also consult with TC39. LEO: CP, do you want to go on a technical aspect? @@ -695,49 +566,49 @@ CP: There are other cases where we have a family of APIs that needs to be includ CP: Then we have another family of APIs that are potentially exposing information that we don’t – we’re not sure about. And this is basically in the confidentiality aspect of the ShadowRealm, with we have many APIs that are giving you information about the outside world, specifically, the outer realm or information that can be modified by the outer world and then you have access to that information. Because we’re not sure about those, we prefer to keep those excluded for now. And then having a process of looking at them individually and deciding whether or not we can do something different about those APIs, for example we can try to censor some of that information. A good example is the performance global object, which gives you access to certain APIs that expose information about the memory allocation, including the different scripts that are using in the outer realm, to denote how much memory they are using, that is the kind of information that we are not sure, we better censor that or go for a different approach, which there are some precedence of. For example, the iframe sandbox attribute in HTML allows you to control what can be observed or accessed. We can introduce something like that in the future for the ShadowRealm constructor as well. The developer will have the controls, what are the things that can be used or observed from inside the ShadowRealm as a way to relax the confidentiality of the ShadowRealm. -CP: I think that’s the last bucket of APIs we looked at. Again, we are committed to continue looking at these APIs, if there is an API that was excluded in the first list, open an issue and we will revisit them and find solutions. As I said, the list of APIs exposed inside the ShadowRealm is going to grow. That is more or less the process that we have to follow to decide what APIs to be included in the initial list for implementers to look at and tell us whether they agree or not, whether they think this is implementable or not, which is another aspect of this process. And then getting the thumbs up from two of them at least to get to a Stage 3. So for now, all we are asking for is to revisit the current stage, which is 2, to see if we can advance to Stage 2.7. Based on what we're reading about the new definition of 2.7, it seems fine. But more important for us is to get the commitment from implementers to help us to review those pull requests we have opened for spec and for the test, some of the test hasn’t been merged because the spec is not merged yet… that’s a problem. We need implementers to pay attention to prioritize this so we can get feedback as soon as possible, preferable before the next meeting, so we can try to get to Stage 3. That’s all we are asking for today. 
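[Editor's note: a minimal sketch of the ShadowRealm API under discussion, following the proposal README. Which globals are available inside a realm is exactly what the inclusion criteria above are meant to settle, and the module path below is hypothetical.]

```js
const realm = new ShadowRealm();

// Only primitives and callables cross the callable boundary; objects do not.
const add = realm.evaluate(`(a, b) => a + b`); // a wrapped function
add(2, 3);                                     // 5
// realm.evaluate(`({})`);                     // would throw a TypeError

// Probing the realm's globals: `performance` is one of the APIs the champions
// propose to exclude (or censor) for confidentiality reasons.
realm.evaluate(`typeof performance`);          // "undefined" under the proposed list

// Modules are evaluated inside the realm; './task.js' is an invented specifier.
realm.importValue('./task.js', 'run').then((run) => run());
```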
+CP: I think that’s the last bucket of APIs we looked at. Again, we are committed to continue looking at these APIs, if there is an API that was excluded in the first list, open an issue and we will revisit them and find solutions. As I said, the list of APIs exposed inside the ShadowRealm is going to grow. That is more or less the process that we have to follow to decide what APIs to be included in the initial list for implementers to look at and tell us whether they agree or not, whether they think this is implementable or not, which is another aspect of this process. And then getting the thumbs up from two of them at least to get to a Stage 3. So for now, all we are asking for is to revisit the current stage, which is 2, to see if we can advance to Stage 2.7. Based on what we're reading about the new definition of 2.7, it seems fine. But more important for us is to get the commitment from implementers to help us to review those pull requests we have opened for spec and for the test, some of the test hasn’t been merged because the spec is not merged yet… that’s a problem. We need implementers to pay attention to prioritize this so we can get feedback as soon as possible, preferable before the next meeting, so we can try to get to Stage 3. That’s all we are asking for today. DE: For the HTML integration, I am not the person who has to approve the review. But there were a couple of things that were already requested that have not yet been done. One is on ensuring the global object for ShadowRealm can have methods and attributes like `atob` or crypto. And one is a piece of documentation summarizing the criteria for web spec authors on whether to expose an interface or something like that to ShadowRealms. I really would encourage browsers to review. Somebody in the champion group or browsers have to complete the other pieces of work from the HTML integration is complete. -CP: Are you talking about the rationale? I think we have some of them here. But we also have an apis.md in the repo. Is that what you’re referencing? +CP: Are you talking about the rationale? I think we have some of them here. But we also have an apis.md in the repo. Is that what you’re referencing? DE: I haven’t seen a single – I mean, you’ve posted one recently, very recently before this presentation about a rubric that people could use. I think that needs to be socialized and made sure that it is intelligible to web spec authors and I have given you feedback about how this could be made more intelligible. About globals – maybe this was finished like 12 hours ago. Okay. Thank you. PFC, for pointing this out. -PFC: It was finished far more than 12 hours ago. 12 hours ago was the last time that PR was updated with other stuff. The ability to have properties in the global object and have them deletable, that’s been done for I think a couple of weeks now. +PFC: It was finished far more than 12 hours ago. 12 hours ago was the last time that PR was updated with other stuff. The ability to have properties in the global object and have them deletable, that’s been done for I think a couple of weeks now. -DE: Sorry. Last time I looked at it, there were TODOs in the specification. +DE: Sorry. Last time I looked at it, there were TODOs in the specification. JHD: I support Stage 2.7. I think that’s exactly the appropriate signal to send. Like the design for ShadowRealm is basically finished. But please don’t ship it until, you know, it hits stage 3 because we are working out integration issues. So yay for 2.7. DLM: Yeah. 
So we agree that this meets the requirements for Stage 2.7. And we have been actively looking at HTML integration, but we will do our best to complete that before the next meeting, but we can’t really commit to that because we don’t know what sort of issues we might uncover. -SYG: I want to point out that there are two kinds of reviews. I don’t have any concerns with 2.7. This seems like it meets the criteria as written. But there’s kind of the spec side review where we can look at the PR that the champions are written for the list of APIs and give a judgment. That looks good, that doesn’t, there are issues. Then I want to put Mozilla on the spot here (sorry). MAG’s feedback earlier I thought came from during actual implementation where like it looked like maybe the spec side of something – I forget which API it was – looked fine. When you tried to implement it, he found architectural assumptions or something else that became problematic. And I want to kind of point out that that kind of – the second kind of review is one very labour intensive, two, it’s unlikely to happen during 2.7. And I would welcome the champions kind of trying to – try to move the needle there. We are not going to go out and implement everything right there and then to see if it actually works out. I am wondering, how is Mozilla feeling about that? +SYG: I want to point out that there are two kinds of reviews. I don’t have any concerns with 2.7. This seems like it meets the criteria as written. But there’s kind of the spec side review where we can look at the PR that the champions are written for the list of APIs and give a judgment. That looks good, that doesn’t, there are issues. Then I want to put Mozilla on the spot here (sorry). MAG’s feedback earlier I thought came from during actual implementation where like it looked like maybe the spec side of something – I forget which API it was – looked fine. When you tried to implement it, he found architectural assumptions or something else that became problematic. And I want to kind of point out that that kind of – the second kind of review is one very labour intensive, two, it’s unlikely to happen during 2.7. And I would welcome the champions kind of trying to – try to move the needle there. We are not going to go out and implement everything right there and then to see if it actually works out. I am wondering, how is Mozilla feeling about that? DLM: Yeah. So that’s exactly when I – yeah. That’s why I avoid using review in my comment earlier. Yeah, that’s the part that we are most concerned about, looking at the implementation of these and it’s true, at least within our codebase, there’s assumptions that things are either made thread or worker, that might be violated by add to go ShadowRealm so we need to go step by step. MAG, is sick this week, but I saw his comment where he thought, exposing console would be difficult because of relying assumptions in the codebase. That’s the review we are most interested in. The spec review is handled by other people, that detail, going through and see what issues to encounter and maybe that’s something more for Stage 3, but it’s something we would like to get started on now -SYG: The V8 in Chrome might not get to that part of the actual implementation of the specific APIs. Until later, and I understand that is – that puts a wrench in things in terms of what stage that makes sense. 2.7 makes sense. 
If Chrome does it – usually Chrome doesn’t do it until Stage 3, but given this has come up before and expected to possibly be a problem, that needs significant remedy, not just "oh, we fixed this part here". Maybe it requires re-architecture here and there, or rewriting specs. But that could change the shape of the API, of the proposal again. I want to be transparent about that. I don’t have a good recommendation for what we ought to stage-wise. The shape is what we would design. But to actually discover the implementation issues might still be delayed until like the spec review are all done, but there’s some kind of loop here. After it’s done, we implement and then maybe we need to come back, but hopefully we don’t. But it’s a large list of APIs, so it will be a while. +SYG: The V8 in Chrome might not get to that part of the actual implementation of the specific APIs. Until later, and I understand that is – that puts a wrench in things in terms of what stage that makes sense. 2.7 makes sense. If Chrome does it – usually Chrome doesn’t do it until Stage 3, but given this has come up before and expected to possibly be a problem, that needs significant remedy, not just "oh, we fixed this part here". Maybe it requires re-architecture here and there, or rewriting specs. But that could change the shape of the API, of the proposal again. I want to be transparent about that. I don’t have a good recommendation for what we ought to stage-wise. The shape is what we would design. But to actually discover the implementation issues might still be delayed until like the spec review are all done, but there’s some kind of loop here. After it’s done, we implement and then maybe we need to come back, but hopefully we don’t. But it’s a large list of APIs, so it will be a while. LEO: Sorry. To things to answer here. One of them, I just think it’s like most reasonable to make sure before we request Stage 3 we have concerns addressed, especially as you mentioned, he’s not here, we won’t seek anything without his seeing like – it’s also okay. I think it’s a good response to like his being actively working is one of the expert on it. So it’s totally reasonable to make sure all the concerns are clear out. The other part like implementation wise on the loop, we actually – we have taken measures to make sure this work is supported as well. Like from our end. And hopefully we can mitigate concerns about delay. Happy to share more details after. -DE: I just want to withdraw the concerns I stated about 2.7 is mistaken. At least with respect to the mechanics of globals. Yeah. I still look forward to this explanation about criteria. +DE: I just want to withdraw the concerns I stated about 2.7 is mistaken. At least with respect to the mechanics of globals. Yeah. I still look forward to this explanation about criteria. -SYG: And for the notes, to be perfectly explicit, I think some necessary criteria for 3 are the tests, which you are working on, the PRs for the APIs which you are working on, given it’s basically all X TC39 concerns, and the pickle we got ourselves into last time, we moved to 3 without integration, done, and integration being a large part of the semantics. This time, to readvance to 3, I would like to see or hear explicit sign off from the HTML folks. So at least the editors. So AVK and DD from Apple and Google respectively. And the tests, and then so at least those are – I don’t know go that’s it, but those are at least necessary conditions. 
+SYG: And for the notes, to be perfectly explicit, I think some necessary criteria for 3 are the tests, which you are working on, the PRs for the APIs which you are working on, given it’s basically all X TC39 concerns, and the pickle we got ourselves into last time, we moved to 3 without integration, done, and integration being a large part of the semantics. This time, to readvance to 3, I would like to see or hear explicit sign off from the HTML folks. So at least the editors. So AVK and DD from Apple and Google respectively. And the tests, and then so at least those are – I don’t know go that’s it, but those are at least necessary conditions. LEO: Yeah. Would it be better for clarification to get a sign off from HTML (WhatWG) as an entity. I know we agree in one signs off as this is HTML -SYG: That’s fine. My sign off doesn’t mean much. I need – this is not my area of expertise. I want the external signoff. +SYG: That’s fine. My sign off doesn’t mean much. I need – this is not my area of expertise. I want the external signoff. LEO: Great. With that, I want to ensure we resolve the concerns pointed out by Mozilla as well. I think it’s a nice commitment. Do we have Stage 2.7? -USA: That’s a positive silence. +USA: That’s a positive silence. DE: Is your concern we should hear from HTML by Stage 3 or by Stage 2.7 -SYG: (?) Stage 3. Ready to implement signal, I would like to hear from the HTML folks that they find the principles for what APIs ought to be included and the PRs acceptable. +SYG: (?) Stage 3. Ready to implement signal, I would like to hear from the HTML folks that they find the principles for what APIs ought to be included and the PRs acceptable. -DE: I feel a little bit cautious about using this space between 2.7 and 3 for this. Because we previously – maybe do you want to use this space for this, for host concerns? But in theory, the initial difference 2.7 and 3 was about whether we had tests. +DE: I feel a little bit cautious about using this space between 2.7 and 3 for this. Because we previously – maybe do you want to use this space for this, for host concerns? But in theory, the initial difference 2.7 and 3 was about whether we had tests. -SYG: I see. +SYG: I see. MF: That’s not entirely true. The requirement was "Tests and any necessary experience". Whatever we deemed necessary when it reached 2.7. My intent was to include things like this. @@ -745,25 +616,23 @@ DE: I am fine using this to establish precedent as we say of host concerns or by SYG: I want to avoid the outcome where we re-advance to Stage 3 and come back and say we still don’t know how to implement parts of it. -DE: Right, that’s the reason not to advance to Stage 3, it should be 2.7 today. Okay. So sorry. Neither SYG nor I are objecting to it moving to 2.7. +DE: Right, that’s the reason not to advance to Stage 3, it should be 2.7 today. Okay. So sorry. Neither SYG nor I are objecting to it moving to 2.7. -USA: Thank you, everyone for the discussion. And congratulations, LEO. +USA: Thank you, everyone for the discussion. And congratulations, LEO. ### Summary ### Conclusion/Resolution -Consensus for Stage 2.7 -Proposal champions will seek external signoff from WHATWG HTML stakeholders and Mathew Gaudet (Mozilla) regarding the proposed list of suitable APIs to be exposed to ShadowRealms, along with sufficient tests to ensure correct behaviour in implementations. -The external signoff on HTML integration is set as entrance criteria to meet Stage 3 as ready to implement signal for the ShadowRealm proposal. 
-In general, in some cases, it may be OK to delay full validation of host integration issues to Stage 3, rather than everything being resolved by Stage 2.7. +Consensus for Stage 2.7. Proposal champions will seek external signoff from WHATWG HTML stakeholders and Matthew Gaudet (Mozilla) regarding the proposed list of suitable APIs to be exposed to ShadowRealms, along with sufficient tests to ensure correct behaviour in implementations. The external signoff on HTML integration is set as entrance criteria to meet Stage 3, as a ready-to-implement signal for the ShadowRealm proposal. In general, in some cases, it may be OK to delay full validation of host integration issues to Stage 3, rather than everything being resolved by Stage 2.7.
+
## Raw String Literals for Stage 1
+
Presenter: John Hax (JHX)

- [proposal](https://github.com/hax/proposal-raw-string-literals)

-JHX: Hello, everyone. I am JHX. And today I would like to process raw string literal for Stage 1. -The problem developers often need to include some hexing there in the programs, which may contain things like quotation marks. These symbols require escaping or regular or double quote strings. Further we can’t use 'String.raw' built-in function to escape. However, there’s one important symbol that can be avoided for escaping which is the backtick. Here is a very simple example of a database query in SQL. It requires fields to be wrapped in tactics. We can’t do this in JS. We have to escape them. The reason we must escape is that the backtick itself is the delimiter for the strings and you can’t include it directly in the text. +JHX: Hello, everyone. I am JHX. And today I would like to propose raw string literals for Stage 1. The problem is that developers often need to include text in their programs which may contain things like quotation marks. These symbols require escaping in regular single or double quote strings. Further, we can’t use the 'String.raw' built-in function to escape. However, there’s one important symbol for which escaping cannot be avoided, which is the backtick. Here is a very simple example of a database query in SQL. It requires fields to be wrapped in backticks. We can’t do this in JS. We have to escape them. The reason we must escape is that the backtick itself is the delimiter for the strings and you can’t include it directly in the text.

JHX: This problem is even worse when you need to use String.raw because you simply can’t express a backtick directly in the template. If you are using String.raw, even escaping is gone. Of course, we can use interpolation to insert backticks. But obviously I think it’s not a pleasant writing or reading experience.

@@ -779,27 +648,27 @@ JHX: In addition to these 3 things, I also uncovered other needs while borrowing

JHX: And the – it will be nice to have a mechanism to do comments. People may put user interpolation for that purpose, but if we can’t have higher mechanisms to do comments, I think that’s good. And would also – it would be nice to have an escape in a specified place. This point is – for example, we support that. And the final point, if you nest, I mean, I write a nest – if the syntax allow, a short delimiter outside and long inside, if you write nester, to go back to this part and change the delimiter. This also helps like generators [inaudible]. And modify this already.

-JHX: So here are some things I want to achieve but maybe we cannot get everything. But I just have a list here. The possible solution, the current starts with @sken130's draft.
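[Editor's note: a small JavaScript illustration of the escaping problem described above; the SQL identifiers are invented for the example.]

```js
// Today a backtick inside a template literal must be escaped:
const query = `SELECT \`name\` FROM \`users\``;   // SELECT `name` FROM `users`

// String.raw does not help: the backslash of that escape is kept verbatim,
// so the output still contains \` rather than a bare backtick.
const raw = String.raw`SELECT \`name\` FROM \`users\``;

// The usual workaround interpolates the backtick, at a readability cost:
const bt = "`";
const viaInterpolation = String.raw`SELECT ${bt}name${bt} FROM ${bt}users${bt}`;
// viaInterpolation === "SELECT `name` FROM `users`"
```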
However, there are many possible syntaxes. Here are the documents that will have many languages and Swift and Rust use different style. They wrap the string with the hash. For example, if we adopt the style design, the code – the previous example will look like this. The hash to wrap the string and so you can use the backtick in the… if you wrap it, left the curl brace, it makes normal strings. If you want to do in place, you should use – must use the hash here. And a number of hashes should – in the source style, you can use any number of hash here. For example, if there are three hash, you can use three hash with any and use 3 hash here to label the interpolation like that. +JHX: So here are some things I want to achieve but maybe we cannot get everything. But I just have a list here. The possible solution, the current starts with @sken130's draft. However, there are many possible syntaxes. Here are the documents that will have many languages and Swift and Rust use different style. They wrap the string with the hash. For example, if we adopt the style design, the code – the previous example will look like this. The hash to wrap the string and so you can use the backtick in the… if you wrap it, left the curl brace, it makes normal strings. If you want to do in place, you should use – must use the hash here. And a number of hashes should – in the source style, you can use any number of hash here. For example, if there are three hash, you can use three hash with any and use 3 hash here to label the interpolation like that. JHX: So I plan to investigate the different syntax options if this proposal approved as stage 1 and I plan to discuss the possible syntax design in future meetings. -USA: The queue is currently empty. But let’s give it a minute or so to – Justin? +USA: The queue is currently empty. But let’s give it a minute or so to – Justin? -JRL: There’s three cases, and template literals that are representable. Two which you have solved. The closing ticks. If you want to enter a backtick anywhere, you solve that by requiring an extra backticks, the interpolation sigil, which you solve by requiring the sigil in interpolations. But also backslash at the end of the string. It can’t be represented in the new format. +JRL: There’s three cases, and template literals that are representable. Two which you have solved. The closing ticks. If you want to enter a backtick anywhere, you solve that by requiring an extra backticks, the interpolation sigil, which you solve by requiring the sigil in interpolations. But also backslash at the end of the string. It can’t be represented in the new format. JHX: Yeah. This is another case. -DE: I am not sure whether this topic is worthy of further committee investigation. We have template literals. We could consider some of these, I think, extra new syntax is a complicated way to go about it. Representing more things within template strings, I could see that, but if we have something at Stage 1, I would want to make sure that the changes to syntax stay pretty simple. +DE: I am not sure whether this topic is worthy of further committee investigation. We have template literals. We could consider some of these, I think, extra new syntax is a complicated way to go about it. Representing more things within template strings, I could see that, but if we have something at Stage 1, I would want to make sure that the changes to syntax stay pretty simple. -JHX: Yeah. There are many – many possible syntax solution. It needs some time to investigate all the options. 
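[Editor's note: the hash-wrapped strawperson sketched above mirrors what several other languages already ship; the comparison below is the editor's summary, shown only in comments so the snippet stays valid JavaScript.]

```js
// Rust:   r#"fields like `name` need no escaping"#   (add more #s if the text contains "#)
// Swift:  #"inside extended delimiters, \n is just two characters"#
// C# 11:  """raw string literals are wrapped in three or more quotes"""
// JavaScript today has no equivalent; the backtick always terminates the template:
const onlyToday = `the one character we still have to escape: \``;
```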
There are already many other language supports raw string literals. So I think we can try to find some way to, some simple way to add too many costs. +JHX: Yeah. There are many – many possible syntax solution. It needs some time to investigate all the options. There are already many other language supports raw string literals. So I think we can try to find some way to, some simple way to add too many costs. -LCA: So I think that it is generally useful to support the string literals that support is wide of raw string data in them as possible. I have run into some use cases you have described myself. I do agree with DE, that I think the syntax here should be incredibly minimal, and should not come at the cost of any other future syntax. I don’t think it’s worth taking one of the ASCII characters that we have not used yet to start a new string literal or anything like that. Which I think rules out some of the design options. But yeah. I do think it may be worth continuing to investigate, if there is something that is relatively minimal that would solve this use case. +LCA: So I think that it is generally useful to support the string literals that support is wide of raw string data in them as possible. I have run into some use cases you have described myself. I do agree with DE, that I think the syntax here should be incredibly minimal, and should not come at the cost of any other future syntax. I don’t think it’s worth taking one of the ASCII characters that we have not used yet to start a new string literal or anything like that. Which I think rules out some of the design options. But yeah. I do think it may be worth continuing to investigate, if there is something that is relatively minimal that would solve this use case. -SYG: For the – moving it Stage 1, I would like to understand all of the requirements, I think you gave a list of like 7 or 8 requirements. It seems that they’re not actually all requirements. I would like to better understand what are the non-negotiables given there are other constraints like minimal syntax, what is the non-negotiable requirements for you that a solution must have? +SYG: For the – moving it Stage 1, I would like to understand all of the requirements, I think you gave a list of like 7 or 8 requirements. It seems that they’re not actually all requirements. I would like to better understand what are the non-negotiables given there are other constraints like minimal syntax, what is the non-negotiable requirements for you that a solution must have? JHX: I might have used the wrong words. Actually, only the first three I think is the – the most important. All others goes – if we can without much cost, I would like to try to achieve them. But they are – they’re not must have. -SYG: Okay. Thanks. So those 3. +SYG: Okay. Thanks. So those 3. DRR: Yeah. I know this is proposed for Stage 1. So the syntax is still not a concrete thing. But from what you have shown on the slides right now, I would kind of describe it as sort of kicking the can down the road when it comes to being able to escape specific text. Meaning, if you want to have a new kind of string, that uses a different scheme, you now have to decide how you escape out of that new scheme as well in some capacity. And so one thing to consider is that if you have something like this, the embedder needs to be to say – meaning the *author* of the actual file containing the string – needs the ability to say what the start and the end of the string looks like. 
Because otherwise, now you have the entire issue of, "how do I want to say I wanted a pound and a backtick in this specific location?" or whatever. So basically, I don’t think it – it holds its weight unless it gives that affordance – unless you were able to do that as an author in JavaScript. @@ -811,21 +680,21 @@ USA: I think there was at least one instanceof asking the syntax to be more or l DE: There seemed to be agreement that the syntax should be simple. Maybe it would – it would be thighs to have an example of proposed syntax that would be simple. But I don’t know whether or not that should be a Stage 1 requirement. Certainly a Stage 2 requirement. -JHX: For example, do you think this syntax is simple or ? +JHX: For example, do you think this syntax is simple or ? -DE: I am not – sorry. I am not a fan of that syntax. We have hash being one thing already. There’s a second meeting proposed – +DE: I am not – sorry. I am not a fan of that syntax. We have hash being one thing already. There’s a second meeting proposed – -JHX: But it is also possible to use some other symbols. +JHX: But it is also possible to use some other symbols. -USA: Yeah. I think one important thing you mentioned, it’s something to be discussed before Stage 2. So I think hex, would you like to ask for Stage 1 with the understanding that you would have further discussions and possibly like an incubator call or something to discuss in more detail what possible syntax could be viable? +USA: Yeah. I think one important thing you mentioned, it’s something to be discussed before Stage 2. So I think hex, would you like to ask for Stage 1 with the understanding that you would have further discussions and possibly like an incubator call or something to discuss in more detail what possible syntax could be viable? -JHX: Yes. +JHX: Yes. -USA: All right. So let the queue sit for that just a little bit. LCA? +USA: All right. So let the queue sit for that just a little bit. LCA? LCA: Yeah. For what it’s worth, you asked whether this syntax is minimal enough? I don’t think so. I think the hash sign should be used for something that is more useful than another string literal. And if we used it here we probably can’t use it anywhere else so I don’t think we should use it. So something else. Like my definition of minimal does not necessarily mean that it has to be a single character. I think it has – this is a string literal that fewer people are going to write than regular string literals. It could be more verbose to write. But it shouldn’t – like, use a production that we are – nice to use for some more useful feature in the future. -JHX: Yeah. But I want to do some explanation. Here, we just use a hash as an example. Actually, it could be some more symbol. But even hash here, it doesn’t conflict with current – with any – any proposal I know would use hash. Because it’s actually a combination of hash and the backtick. +JHX: Yeah. But I want to do some explanation. Here, we just use a hash as an example. Actually, it could be some more symbol. But even hash here, it doesn’t conflict with current – with any – any proposal I know would use hash. Because it’s actually a combination of hash and the backtick. LCA: This would conflict with any proposal that would use hash as a like – in a position where an identifier is valid. For example, the replacement character in pipe. Which I think we settled on another character for that, but yeah. @@ -841,11 +710,11 @@ JHX: I am not sure. 
I still want to ask for Stage 1 because I think it’s a pro USA: So what if you ask for Stage 1 and enumerate a list of suggestions that you will sort of work through within Stage 1 and would present before the committee before Stage 2. How does this sound? -JHX: Yeah. +JHX: Yeah. -USA: So I suppose you are asking for consensus? The meantime, could you enumerate the things you will work on. I suppose syntax is one of them, as you mentioned. While we wait for the queue, you could enumerate the conclusions. DE is proposing a scope. +USA: So I suppose you are asking for consensus? The meantime, could you enumerate the things you will work on. I suppose syntax is one of them, as you mentioned. While we wait for the queue, you could enumerate the conclusions. DE is proposing a scope. -DE: The proposed scope would be, I believe, correct me – an investigation on how to deal with the limits of template strings, especially when it comes to including backticks or escapes for raw strings. Is this the scope, or are there more issues? +DE: The proposed scope would be, I believe, correct me – an investigation on how to deal with the limits of template strings, especially when it comes to including backticks or escapes for raw strings. Is this the scope, or are there more issues? JHX: I think this could be the scope. @@ -853,17 +722,17 @@ USA: All right. Then with this scope in mind, let’s get consensus for raw stri DE: Could we name the proposal something different because – or…? -USA: Do you have any suggestions? +USA: Do you have any suggestions? -DE: Overcoming template literal restrictions . . . +DE: Overcoming template literal restrictions . . . -USA: Okay. +USA: Okay. DE: So “improve escaped template literals”. That was LCA’s suggestion that I like. JHX: Okay. Like improve escape – okay. Yeah. We can change the name. Yeah. -USA: All right. Congratulations JHX. And you have – while we break for the break, would you like to go to the notes and write down a conclusion? +USA: All right. Congratulations JHX. And you have – while we break for the break, would you like to go to the notes and write down a conclusion? JHX: Okay. @@ -871,27 +740,14 @@ JHX: Okay. Proposal reaches Stage 1 with the scope of “an investigation on how to deal with the limits of template strings, especially when it comes to including backticks or escapes for raw strings” and the new title “improve escaped template literals”. -## Uint8Array Base64 for stages 2.7 and 3 +## Uint8Array Base64 for stages 2.7 and 3 + Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/tc39/proposal-arraybuffer-base64) - [slides](https://docs.google.com/presentation/d/1c4-RAJsGcmvzFClOn3ia26njuEqoecl43WmK5u1XAmE/edit#slide=id.g106f4536d9_0_109) - -KG: Okay. Hello. -I am coming to the committee with the “`Uint8Array` to and from Base64” proposal for stages 2.7 and 3. The proposal is on GitHub. As a reminder, the thesis statement is that we should have a built-in mechanism for converting binary data to and from Base64. It’s grown, but that’s the scope. -The basic API hasn’t changed in a long time. There are methods on `Uint8Array.prototype` called `toBase64` and `toHex`, and static methods for `fromBase64` and `fromHex`. -These have options, and some details that we will get to later. -There is also a pair of methods for writing into an existing Unit8Array. These take a target and give a `read` and `written` pair that tells us how many characters from the input you read and how many bytes to the output you have written. 
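[Editor's note: a sketch of the basic API described so far, using the names from the proposal README; engines without the proposal will not have these methods yet. The write-into-an-existing-buffer method is shown with a prototype spelling, although its exact name and placement were still the open question at this point in the meeting.]

```js
const bytes = new Uint8Array([72, 101, 108, 108, 111]); // "Hello"

bytes.toBase64();                     // "SGVsbG8="
bytes.toHex();                        // "48656c6c6f" (output case is fixed, not configurable)

Uint8Array.fromBase64("SGVsbG8=");    // Uint8Array [72, 101, 108, 108, 111]
Uint8Array.fromHex("48656c6c6f");     // same bytes

// Decoding into an existing buffer reports how much was consumed and produced.
const target = new Uint8Array(8);
const { read, written } = target.setFromBase64("SGVsbG8=");
// read: characters of input consumed; written: bytes stored into `target`
```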
-And this is again for both Base64 and hexing. There was a question whether this should be a static or prototype method. As static, it takes the TypedArray as an argument, and as prototype it takes it as receiver. -I am okay with either. There was some support for the prototype method. We already have `TypedArray.prototype.set`; it’s similar to that. I am open to either. And I am hoping that this is a small enough issue that wherever we go, the proposal can advance to stage 3 at this moment. Although if the fact that this is still open means it can’t, that’s okay. And I will just come back again. -But this is the – the only open question that I have for the committee, unless I am forgetting something. I want to call attention to the fact that this is the first time that there would be something that is on one specific kind of TypedArray. -I think that’s fine. It’s a sequence of bytes, this is the sequence-of-bytes type. But it is the first time we're doing this so we should take special care. All right, getting into some of the details. The first is the base64 methods take an alphabet parameter which takes either the string base64 or the string base64 URL. The default is base64. The hex methods do not support an alphabet parameter. The input can be mixed case hex, the output is either always uppercase or always lowercase. I am blanking on which it is but it's not customizable. There are also some details about the handling of invalid characters. Base64 and not hex supports ascii whitespace because the standard base64 implementations all support ascii whitespace. Not other kinds of whitespace, this is a very small list, not the whole unicode edition of whitespace. And if you encounter any non-alphabet character with the exception of ascii whitespace in the case of base64, then the decoding methods throw an exception. So in addition to input being invalid because of having invalid characters in it, in the case of base64 it can also be invalid because you don't have a full chunk. Recall that base64 requires chunks of four characters to decode. What happens if the input, the length of the input is not a multiple of four characters, ignoring whitespace. So we decided ultimately that this should be customizable by an options bag argument, which I've chosen to spell last chunk handling. It can, there's three valid values for it. Either loose handling, which is sort of the most permissive and matches what `atob` does on the web, which allows you to omit padding or to include padding. So it will just assume that if it gets to a partial chunk it will assume that this should have padding. Now there's a caveat there, which is that for valid base64 data you can have either one or two padding characters, you can't have three. So you can't have precisely one alphabet character and then three padding characters, which means that even in loose mode if you have precisely one additional character that's going to be an exception. So this doesn't allow all possible final chunks, only final chunks that could conceivably have been produced by omitting padding. In addition to this there's the strict option, which requires the padding characters to be present and also enforces that the two or four additional bits that are represented in the final two or three characters of the base64 that don't map to the decoded byte stream requires those to be zero. 
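[Editor's note: a sketch of the options just described, assuming the spellings presented: `alphabet` with "base64" or "base64url", and `lastChunkHandling` with "loose", "strict", or "stop-before-partial".]

```js
const data = new Uint8Array([251, 255, 191]);

data.toBase64();                           // "+/+/"
data.toBase64({ alphabet: "base64url" });  // "-_-_"

// "loose" (the default) accepts a final chunk with or without padding:
Uint8Array.fromBase64("SGVsbG8");          // ok, decodes 5 bytes
// "strict" requires the padding to be present (and trailing bits to be zero):
// Uint8Array.fromBase64("SGVsbG8", { lastChunkHandling: "strict" }); // throws
// The third value, "stop-before-partial", leaves a trailing partial chunk
// unconsumed, which is what the streaming discussion below relies on.
```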
This helps to enforce that your base64 encoding is canonical, although you will also have to enforce absence of whitespace if you want to enforce canonical encodings. There's no option to enforce absence of whitespace but it's easy to do with a regex or something. And then the last and sort of most interesting way that you might choose to handle the last chunk not being complete is to just stop. And in particular this is something that you might want to do if you are expecting to get more input in the future, which might allow you to complete the chunk. So in the "stop-before-partial" case you stop decoding without an error and just return or write the bytes that you have decoded thus far. In the case of producing a new uint8array value, this is hard to use because you don't know exactly where in the string you stopped reading, unless you know that there's no whitespace in which case you can figure it out. But in the case of the API that gives you a read-written pair you can just use the "read" to say okay this is how far into the string I read before I stopped before the partial chunk and then you can pick up from there when decoding in the future. So it's not as useful on the not writing into an existing -buffer methods. But it's present on both for consistency. And it's not present on the hex methods because the hex things don't have nearly the same complexity. So it didn't seem worth having the additional complexity to the API surface there. Okay. -Another detail that we discussed and decided not to do was supporting writing to a specific offset of a given buffer. You can of course do this using subarray to create a view and then writing to the subarray, but this is something that I think we could add as an additional options bag option later. It's a little hard to feature test, but you can feature test for it, or of course you can just use the subarray. Or, you know, instead of adding this parameter later, if we decided we wanted that, we could maybe do something more holistic like having a view that you can retarget. You can shift the offset, the byte offset of an existing typed array without allocating a new one. There's lots of things we could do here, so it seems like it doesn't need to be solved for this version of the proposal. And the last thing to note is that there's no explicit support for streaming, but it is fairly straightforward to do it in userland using the stop before partial mode for handling final chunks. You have to use a little code in userland, but it's efficient in the sense that it never requires you to do an additional pass over the input in userland. -So you can keep most of the work in the engine's implementation of the basic C4 decoder. All right. There's spec text. It's been looked at by a few people. Anbo's raised a couple of issues which I've addressed. There's tests in PR. I don't think anyone formally signed up to review, but several people have given reviews. -And that's it. That's the proposal. I'm asking for, well, first, I am asking the committee for opinions and or consensus on this last open issue [Uint8Array.fromBase64Into vs Uint8Array.prototype.setFromBase64Into]. In particular, people seem to be leaning towards the second form. So I'm hoping that we can just adopt the second form and then go to stage three. I'll need to update the tests, but it's a very small diff to the tests. -And then I'm hoping to ask for a stage three after that. So let's get to the queue. +KG: Okay. Hello. 
I am coming to the committee with the “`Uint8Array` to and from Base64” proposal for stages 2.7 and 3. The proposal is on GitHub. As a reminder, the thesis statement is that we should have a built-in mechanism for converting binary data to and from Base64. It’s grown, but that’s the scope. The basic API hasn’t changed in a long time. There are methods on `Uint8Array.prototype` called `toBase64` and `toHex`, and static methods for `fromBase64` and `fromHex`. These have options, and some details that we will get to later. There is also a pair of methods for writing into an existing Uint8Array. These take a target and give a `read` and `written` pair that tells us how many characters from the input you read and how many bytes to the output you have written. And this is again for both Base64 and hexing. There was a question whether this should be a static or prototype method. As static, it takes the TypedArray as an argument, and as prototype it takes it as receiver. I am okay with either. There was some support for the prototype method. We already have `TypedArray.prototype.set`; it’s similar to that. I am open to either. And I am hoping that this is a small enough issue that wherever we go, the proposal can advance to stage 3 at this moment. Although if the fact that this is still open means it can’t, that’s okay. And I will just come back again. But this is the – the only open question that I have for the committee, unless I am forgetting something. I want to call attention to the fact that this is the first time that there would be something that is on one specific kind of TypedArray. I think that’s fine. It’s a sequence of bytes, this is the sequence-of-bytes type. But it is the first time we're doing this, so we should take special care. All right, getting into some of the details. The first is that the base64 methods take an alphabet parameter which takes either the string `base64` or the string `base64url`. The default is `base64`. The hex methods do not support an alphabet parameter. The input can be mixed-case hex; the output is either always uppercase or always lowercase. I am blanking on which it is but it's not customizable. There are also some details about the handling of invalid characters. Base64, and not hex, supports ASCII whitespace because the standard base64 implementations all support ASCII whitespace. Not other kinds of whitespace, this is a very small list, not the whole Unicode definition of whitespace. And if you encounter any non-alphabet character, with the exception of ASCII whitespace in the case of base64, then the decoding methods throw an exception. So in addition to input being invalid because of having invalid characters in it, in the case of base64 it can also be invalid because you don't have a full chunk. Recall that base64 requires chunks of four characters to decode. What happens if the length of the input is not a multiple of four characters, ignoring whitespace? So we decided ultimately that this should be customizable by an options bag argument, which I've chosen to spell `lastChunkHandling`. There are three valid values for it. Either loose handling, which is sort of the most permissive and matches what `atob` does on the web, which allows you to omit padding or to include padding. So if it gets to a partial chunk, it will just assume that this should have padding. Now there's a caveat there, which is that for valid base64 data you can have either one or two padding characters, you can't have three.
So you can't have precisely one alphabet character and then three padding characters, which means that even in loose mode if you have precisely one additional character that's going to be an exception. So this doesn't allow all possible final chunks, only final chunks that could conceivably have been produced by omitting padding. In addition to this there's the strict option, which requires the padding characters to be present and also enforces that the two or four additional bits that are represented in the final two or three characters of the base64, and that don't map to the decoded byte stream, are zero. This helps to enforce that your base64 encoding is canonical, although you will also have to enforce absence of whitespace if you want to enforce canonical encodings. There's no option to enforce absence of whitespace but it's easy to do with a regex or something. And then the last and sort of most interesting way that you might choose to handle the last chunk not being complete is to just stop. And in particular this is something that you might want to do if you are expecting to get more input in the future, which might allow you to complete the chunk. So in the "stop-before-partial" case you stop decoding without an error and just return or write the bytes that you have decoded thus far. In the case of producing a new Uint8Array value, this is hard to use because you don't know exactly where in the string you stopped reading, unless you know that there's no whitespace in which case you can figure it out. But in the case of the API that gives you a read-written pair you can just use the "read" to say okay this is how far into the string I read before I stopped before the partial chunk and then you can pick up from there when decoding in the future. So it's not as useful on the methods that don't write into an existing buffer. But it's present on both for consistency. And it's not present on the hex methods because the hex things don't have nearly the same complexity. So it didn't seem worth having the additional complexity to the API surface there. Okay. Another detail that we discussed and decided not to do was supporting writing to a specific offset of a given buffer. You can of course do this using subarray to create a view and then writing to the subarray, but this is something that I think we could add as an additional options bag option later. It's a little hard to feature test, but you can feature test for it, or of course you can just use the subarray. Or, you know, instead of adding this parameter later, if we decided we wanted that, we could maybe do something more holistic like having a view that you can retarget. You can shift the offset, the byte offset of an existing typed array without allocating a new one. There's lots of things we could do here, so it seems like it doesn't need to be solved for this version of the proposal. And the last thing to note is that there's no explicit support for streaming, but it is fairly straightforward to do it in userland using the stop-before-partial mode for handling final chunks. You have to use a little code in userland, but it's efficient in the sense that it never requires you to do an additional pass over the input in userland. So you can keep most of the work in the engine's implementation of the basic base64 decoder. All right. There's spec text. It's been looked at by a few people. Anba has raised a couple of issues which I've addressed. There are tests in a PR.
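For illustration, a rough sketch of the API as described (method and option names as presented; the exact semantics are in the proposal's spec text):

```js
// Encoding: Uint8Array -> string
const bytes = new Uint8Array([72, 101, 108, 108, 111]);
bytes.toBase64();                           // "SGVsbG8="
bytes.toBase64({ alphabet: "base64url" });  // URL-safe alphabet
bytes.toHex();                              // "48656c6c6f"

// Decoding: string -> new Uint8Array
Uint8Array.fromBase64("SGVsbG8=");
Uint8Array.fromBase64("SGVsbG8", { lastChunkHandling: "loose" });   // padding may be omitted
Uint8Array.fromBase64("SGVsbG8=", { lastChunkHandling: "strict" }); // padding required, extra bits must be zero
Uint8Array.fromHex("48656c6c6f");

// Decoding into an existing buffer; the result says how much was consumed and produced.
const target = new Uint8Array(1024);
const { read, written } = target.setFromBase64("SGVsbG8=");
// read === 8 (characters consumed), written === 5 (bytes written)
```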
I don't think anyone formally signed up to review, but several people have given reviews. And that's it. That's the proposal. I'm asking for, well, first, I am asking the committee for opinions and or consensus on this last open issue [Uint8Array.fromBase64Into vs Uint8Array.prototype.setFromBase64Into]. In particular, people seem to be leaning towards the second form. So I'm hoping that we can just adopt the second form and then go to stage three. I'll need to update the tests, but it's a very small diff to the tests. And then I'm hoping to ask for a stage three after that. So let's get to the queue. GCL: Yeah, I think this is a good proposal. I’m a fan. I think I would tend towards the prototype version, just because it matches what we’re doing in the language a little more. But, yeah, they not a super strong opinion. so, yeah. @@ -905,13 +761,13 @@ SYG: Okay. I would still -- so V8 is happy with 3 regardless, but we would still KG: Yeah, and I do want to say that I’m explicitly wanting to leave open the possibility of adding an offset later. Especially if it is done for TextEncoder's encodeInto as well. I think that it is something that we could do; despite it being slightly difficult to feature-test, it is possible. -KKL: [on the queue] favors the prototype version. +KKL: [on the queue] favors the prototype version. LGH: I already submitted some feedback on the spec text, but I’m happy to do an official review if you need more people, and we also had various people within Bloomberg express excitement, about the flexibility of the API, so the option bags having different options on how to treat the behavior instead of hard coding those things. So definitely excited to see this advance as well. -RGN: [on the queue] says +1 for Stage 2.7 and 3, end of message. +RGN: [on the queue] says +1 for Stage 2.7 and 3, end of message. -DLM: [on the queue] says also +1 for Stage 3. +DLM: [on the queue] says also +1 for Stage 3. JHD: [on the queue] also +1 for Stage 3 and being a reviewer. @@ -919,8 +775,7 @@ DE: SYG expressed support for adding this outputOffset option. Is there a partic KG: Yeah, it was contentious, although mostly among Anne, who doesn’t participate in TC39, and DD, who also doesn’t participate in TC39. And I guess Peter expressed that he saw both sides of it. I don’t want to speak for him. But I mostly took this out because the web platform people said that they didn’t really like it, and the hope is to maintain rough consistency with the text encoder and encodeInto method, which is somewhat similar. I don’t think we have to be absolutely identical, but it would be kind of a shame if we added this parameter here and then the web platform did something different for offsets for text encoder. -DE: I want to suggest that when we’re considering this accelerated UTF-8 API that we include -reconsideration of this issue and, for example, maybe at that point, we’ll decide we really want this parameter and add it to both the new API and the old one [meaning, both base64 and utf8]. Thanks. +DE: I want to suggest that when we’re considering this accelerated UTF-8 API that we include reconsideration of this issue and, for example, maybe at that point, we’ll decide we really want this parameter and add it to both the new API and the old one [meaning, both base64 and utf8]. Thanks. KG: And I want to -- I don’t know if Peter’s in attendance. 
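For context, a rough sketch of the userland streaming pattern described above, using the into-style method and the "stop-before-partial" option (names as presented; details may still change):

```js
// Decode base64 that arrives in string chunks, without re-scanning input.
// For simplicity, assumes each decoded piece fits into the scratch buffer below.
function* decodeBase64Chunks(chunks) {
  const scratch = new Uint8Array(4096);
  let pending = "";
  for (const chunk of chunks) {
    pending += chunk;
    const { read, written } = scratch.setFromBase64(pending, {
      lastChunkHandling: "stop-before-partial",
    });
    pending = pending.slice(read); // keep only the unconsumed partial chunk
    if (written > 0) yield scratch.slice(0, written);
  }
  if (pending.length > 0) {
    // Final flush: the remainder must now be a complete (or padded) chunk.
    const { written } = scratch.setFromBase64(pending);
    yield scratch.slice(0, written);
  }
}

// e.g. [...decodeBase64Chunks(["SGVsbG8sIHdvc", "mxkIQ=="])] yields the bytes of "Hello, world!"
```

The same into-style method, combined with `subarray`, also covers the write-at-an-offset case discussed above without a dedicated option.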
I want to make sure that since previously the proposal did not meet Moddable’s needs, I want to make sure we give Peter a chance to say if this meets his needs. I know there’s some design questions that not everyone is completely happy with. But, yeah. @@ -928,19 +783,17 @@ PHE: Thanks for checking. I am content with where this has landed. It addresses KG: Okay. Thanks very much. So I’ve heard a fair bit of support, and everyone who spoke about the bikeshedding issue preferred the prototype. So I would like to ask for Stage 3 for this proposal with the caveat that it will be updated to accept my PR to do the prototype placement and I will separately do the tests; so tests are not 100% complete since they are testing the slightly wrong form of this, but I think I can still ask for Stage 3 with that caveat. I’ve already heard several explicit supports, so just a final chance for people to object. -RPR: Any objections to stage 3 with the prototype method choice? A repeated +1 from JHD for -the Stage 3 and existing tests and prototype. I think we can call it. Congratulations, you -have consensus for Stage 3. +RPR: Any objections to stage 3 with the prototype method choice? A repeated +1 from JHD for the Stage 3 and existing tests and prototype. I think we can call it. Congratulations, you have consensus for Stage 3. Everybody: Yay. ### Conclusion/Resolution + - The base64 proposal reaches Stage 3 - On the bikeshedding question, the proposal will switch to a prototype method from a static method https://github.com/tc39/proposal-arraybuffer-base64/pull/45 ## Extractors update - Presenter: Ron Buckton (RBN) - [proposal](https://github.com/tc39/proposal-extractors) @@ -948,7 +801,7 @@ Presenter: Ron Buckton (RBN) RBN: I wish I could be there for the 100th meeting. But that didn’t pan out on my side. But I am happy to present a few topics at this meeting. I’d like to briefly talk about Extractors. This is a proposal that I brought to committee over a year ago in discussion around how we wanted to handle things like custom matchers and pattern matching, and having a syntax that was consistent across pattern matching and destructuring and some of those use cases. The motivation for this proposal is that there’s currently no way to evaluate user defined logic during destructuring. The pattern matching proposal has a mechanism for user defined logic when matching via custom matchers. The idea with the extractor syntax is to leverage a common pattern that you see in multiple languages to interject into that matching process in a way that would be consistent for both pattern matching and destructuring, and this is again present in multiple different languages in different ways. Scala uses extractor objects, which I presented in depth in a previous plenary, Rust uses pattern matching, C# uses `Deconstruct`, and the list goes on and on. Now, what I had proposed previously included the concept of what essentially looks like a call expression, but in a declaration position. Here you see `const Parse(a, b, c) = input`. In the Scala world this is what is called an extractor, and it would leverage something called the `unapply` method. The basic concept that is calling the constructor is "application" of arguments, while extraction is the “unapplication” of a result into its arguments, pulling that one value out into the multiple things that may have produced it. 
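For illustration, a rough sketch of the extractor forms being described. This is proposed syntax, not valid JavaScript today, and `Parse`, `Option` and `Coordinate` are hypothetical examples in the spirit of the slides:

```js
// Extractor in a binding pattern: `Parse` is resolved, its custom matcher is
// invoked with `input`, and the result is destructured like an array.
const Parse(year, month, day) = input;

// Dotted names resolve the root identifier and then do property accesses,
// the same way decorator references do.
const Option.some(value) = getSetting();

// Extractors nest inside ordinary destructuring patterns.
const { location: Coordinate(lat, long) } = record;
```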
This is extremely useful for things like algebraic data types, for patterns, for custom parsing, and for validation, and this proposal introduces these mechanisms through the use of this syntax. So examples here show parse, which uses an identifier reference, Option.some, which is a dotted name, and nested destructuring, so you could have an element that then is further destructured using normal destructuring syntax or using these extractor patterns inside of object literal destructuring nesting them within themselves, et cetera. -RBN: I last presented on this in September of 2022. At the time, this proposal included two concepts, the idea of array extractors, which used the argument syntax, and object extractors, which used a Rust-like curly brace syntax for object destructuring. One of the goals was to match some thinking and direction we had for proposals like algebraic data type based enums. We’ve previously presented a proposal for enums, and I’m planning to re-present at some point in the future, and we were trying to find ways to make a syntax that would match the end goals for each of these proposals to have something consistent in the long term. +RBN: I last presented on this in September of 2022. At the time, this proposal included two concepts, the idea of array extractors, which used the argument syntax, and object extractors, which used a Rust-like curly brace syntax for object destructuring. One of the goals was to match some thinking and direction we had for proposals like algebraic data type based enums. We’ve previously presented a proposal for enums, and I’m planning to re-present at some point in the future, and we were trying to find ways to make a syntax that would match the end goals for each of these proposals to have something consistent in the long term. RBN: Now, in between when this was adopted at Stage 1 in September of 2022 and now, we’ve been mostly having discussions about this syntax within the pattern matching champions group. One of the things that we’ve decided is that the object extractor syntax is something that we don’t want to continue to include going forward. It takes up too much syntax space and introduces some complexity that we’d like to consider removing. So that’s basically the change that I’m here to discuss today. @@ -956,7 +809,7 @@ RBN: Let’s talk about extractors and binding and assignment patterns. This is RBN: As for the things we are still keeping, we’re still looking at this novel syntax. We still have the same main motivations for doing things like data validation, transformation and normalization in the midst of destructuring, to be able to leverage things like prior art from scala extractors, rust variable pattern, C#’s `Deconstruct` and many other languages, and align with what we’re doing in the pattern matching proposal. We’re also keeping the ability to reference a qualified name, which is similar to what we currently allow for decorators as decorators can have an identifier or a dotted identifier, so A.B.C, et cetera, and this uses that same mechanism. When you would run into a dotted name, we would resolve the root identifier of that dotted name in the current lexical environment as an identifier reference and then the dotted members are then just property access off of it. 
That result that it resolves to is then used as a custom matcher -RBN: When you have binding patterns, you would be able to, say, parse input, you would be able to emulate the list-like semantics restructuring, so here List(a, b) would allow you extract something that produces a list of two elements or an option type where you might say Option.some(value) is the thing on the right. In the pattern matching space, you could have multiple branches ot match against, where if you don’t match one branch, you move on to the next potential branch. In the destructuring case, you have only a single branch that can be the thing that matches. If it fails to match, you would throw an error and this is consistent with if you tried to do array destructuring on null or on an object that doesn't have an iterator, we would throw an error. And we would also have the same capability in assignment patterns. +RBN: When you have binding patterns, you would be able to, say, parse input, you would be able to emulate the list-like semantics restructuring, so here List(a, b) would allow you extract something that produces a list of two elements or an option type where you might say Option.some(value) is the thing on the right. In the pattern matching space, you could have multiple branches ot match against, where if you don’t match one branch, you move on to the next potential branch. In the destructuring case, you have only a single branch that can be the thing that matches. If it fails to match, you would throw an error and this is consistent with if you tried to do array destructuring on null or on an object that doesn't have an iterator, we would throw an error. And we would also have the same capability in assignment patterns. RBN: Right now it’s already legal syntax to write a call on the left side, but we throw because it’s not a reference, so you can’t actually utilize this syntax in JavaScript today. One of the advantages of removing the curly brace syntax is again we don’t have to be concerned about the potential conflicts, future conflicts with cover grammars as a result of trying to use identifier curly in one wave, it looks like it’s on the left side of an assignment and another way if it’s just a normal expression. @@ -966,10 +819,9 @@ RBN: This example here is a relatively simple data structure. But while this say RBN: This is another example I showed earlier in the September meeting in 2022. We could, for example, support nested destructuring and pattern matching against regular expressions. This is something we’re also pursuing in the pattern matching group. Which is the ability to do pattern matching on regular expressions. This pulls this out into an object so you can use it for reference. The pattern matching syntax would actually theoretically allow you to embed the RegExp pattern directly within destructuring or pattern matching. This basically is a custom matcher that returns a single element that either you can pick out the group and a group value that you need or you can look at things by ordinal position within the RegExp match result. -RBN : This proposal has some relationships to a few other proposals that have either been discussed or are upcoming. As I’ve mentioned before, this is very strongly tied to the pattern matching proposal which is currently at Stage 1. This is a preferred syntax versus a prior syntax that was being considered for kind of doing nested matching after doing a custom nested pattern matching after doing a custom match against something. 
So you can see there’s some parallels here. The basic patterns of doing object-based patterns, doing array-based patterns, doing string-based patterns, et cetera, and involving custom matchers, for example, looks -somewhat similar. +RBN : This proposal has some relationships to a few other proposals that have either been discussed or are upcoming. As I’ve mentioned before, this is very strongly tied to the pattern matching proposal which is currently at Stage 1. This is a preferred syntax versus a prior syntax that was being considered for kind of doing nested matching after doing a custom nested pattern matching after doing a custom match against something. So you can see there’s some parallels here. The basic patterns of doing object-based patterns, doing array-based patterns, doing string-based patterns, et cetera, and involving custom matchers, for example, looks somewhat similar. -RBN: Another proposal that has been discussed and is currently Stage 1 is the parameter decorators proposal. There is a small amount of overlap between these two in that they both can target a parameter, but there are definitely different use cases for these. Parameter decorators are designed to face outward and can only appear at the top level of a parameter declaration and run very -- and they’re useful for reflection, attaching metadata and meta programming when you’re dealing with the function declaration itself. Things outside control of the body, such as doing registration of things or binding something from a route parameter to a -- I’m sorry, binding something from an HTTP route to a specific parameter or of the body is not something an extractor can do because the extractor has to run when the function is invoked. Extractors face inward and can be nested anywhere in a parameter and much more in depth and closer to the code and, again, only run during invocation, you can’t use them for reflection, metadata or metaprogramming. These two proposals, while they have a place within a function declaration where they touch at the parameter level, they’re really not designed to be conflicting. They’re designed to be complementary to each other. +RBN: Another proposal that has been discussed and is currently Stage 1 is the parameter decorators proposal. There is a small amount of overlap between these two in that they both can target a parameter, but there are definitely different use cases for these. Parameter decorators are designed to face outward and can only appear at the top level of a parameter declaration and run very -- and they’re useful for reflection, attaching metadata and meta programming when you’re dealing with the function declaration itself. Things outside control of the body, such as doing registration of things or binding something from a route parameter to a -- I’m sorry, binding something from an HTTP route to a specific parameter or of the body is not something an extractor can do because the extractor has to run when the function is invoked. Extractors face inward and can be nested anywhere in a parameter and much more in depth and closer to the code and, again, only run during invocation, you can’t use them for reflection, metadata or metaprogramming. These two proposals, while they have a place within a function declaration where they touch at the parameter level, they’re really not designed to be conflicting. They’re designed to be complementary to each other. RBN: As far as upcoming proposals, we have presented before around this idea of an enum proposal. 
We have been rethinking that proposal and what its goals are. Our original intent was around producing something that was more aligned with TypeScript enums, which essentially are either string-to-number or string-to-string based mappings, but we found that there’s a lot more potential in the idea of algebraic data types and more capabilities we could express there. There’s still some interest in pursuing that and bringing it back again to TC39 to discuss further, so that’s still something that’s on our agenda. @@ -977,13 +829,13 @@ RBN: This is an update. I just wanted to give everyone an idea of where the extr RGN:I think this is really nicely general. I appreciate it’s separation from pattern matching, it looks like it’s got a lot of foundational support that can really assist with a number of patterns, if you’ll excuse the pun, that come up in lots of places. So I’m excited about it. Thank you for the update. -DE: Similarly to Richard, I’m very happy about the current shape of the proposal. I think the object extractors made it a little difficult for people to understand the first time, but this is very natural evolution. My biggest concern about pattern matching besides complexity was that this sort of custom destructuring wasn’t available outside of pattern matching, and this resolves that very well. I think it stands on its own, but also would be fine to advance with pattern matching. So, yeah, thank you. +DE: Similarly to Richard, I’m very happy about the current shape of the proposal. I think the object extractors made it a little difficult for people to understand the first time, but this is very natural evolution. My biggest concern about pattern matching besides complexity was that this sort of custom destructuring wasn’t available outside of pattern matching, and this resolves that very well. I think it stands on its own, but also would be fine to advance with pattern matching. So, yeah, thank you. RBN: And I would like to say that the goal is, right now, I still have some things that I need to do to work towards this being ready for Stage 2, but the goal is to have this proposed for advancement to Stage 2 around the time that we’re ready to propose pattern matching to Stage 2, if not before, but at least by that point. MM: Yeah, so first I’d like to express my strong, strong support for this. As you and many people on the committee know, I’m often a skeptic about new syntax added to the language, that it does not pay for itself. This one pays for itself. This one has a tremendous amount of reach. And it’s fairly small and elegant for something with this much reach, this many different things you can apply it to in a coherent and unified manner. I do have some questions. First of all, I want to clarify, the object extractor, the thing that you withdrew was simply the way to express the syntax. There was no loss of power, is that correct? -RBN: Essentially. So the difference -- let me see if I can go back to a slide that relates to that. Previous extractor syntax would -- it would still have done the same resolution mechanism and still have called the symbol.custom matcher method. At the time the return value of that method was an object that had either a `matched: true` or `matched: false` and then a `value` property. For an array extractor, which is the syntax that we kept, the value would be an array, and for an object extractor the value would be an object. So you would have these two branching paths based on the kind of extraction you want to do. 
But that really doesn’t fall out of how the syntax is used. So we now pass in some type of hint that indicates how you are using the extractor, and that’s something we still need to do for other reasons. But it does mean that you have this -- this dichotomy between “what I am requesting” and “what can I actually give”. So you could give a value that was incorrect. By switching things to just the array pattern syntax, we’re actually able to vastly simplify the logic for a custom matcher.

+RBN: Essentially. So the difference -- let me see if I can go back to a slide that relates to that. Previous extractor syntax would -- it would still have done the same resolution mechanism and still have called the `Symbol.customMatcher` method. At the time the return value of that method was an object that had either a `matched: true` or `matched: false` and then a `value` property. For an array extractor, which is the syntax that we kept, the value would be an array, and for an object extractor the value would be an object. So you would have these two branching paths based on the kind of extraction you want to do. But that really doesn’t fall out of how the syntax is used. So we now pass in some type of hint that indicates how you are using the extractor, and that’s something we still need to do for other reasons. But it does mean that you have this -- this dichotomy between “what I am requesting” and “what can I actually give”. So you could give a value that was incorrect. By switching things to just the array pattern syntax, we’re actually able to vastly simplify the logic for a custom matcher.

RBN: I had an example here showing Point. Before this, the example here would have been if it’s a match, then I am returning a more complex result that has `matched: true`. If it’s not, I have to return `matched: false`. Now since it’s only really ever just array extractors, you just return either an array value, or something that’s truthy, or something that’s falsy. There’s a hint that gets passed in that indicates whether the pattern is expecting you to return an Array or a Boolean, but that’s primarily an optimization mechanism. If you say that something is a match, and in the pattern matching case you’re just using `when Point: …`, you don’t need to do the extra work to allocate the array. Note though that this isn’t something you can do in the destructuring case, but it’s something you would do in the pattern matching case for type/brand tests.

@@ -995,9 +847,9 @@ MM: Do you have an example that uses the hint?

RBN: I do not have one in this proposal at the moment because it was not necessary, and I’m using the semantics we’ll be using for pattern matches. The pattern matching case would be `when Point:`. And that’s not something you can do in the destructuring case because that would just be an identifier that you would be declaring. So it’s not necessary for the cases that we’re actually presenting for this proposal. In the pattern matching case you could use the hint as a way to not produce the array object if I’m querying, is this a match, yes or no, and not further destructuring.

-MM: So I am agnostic on that.
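For illustration, a rough sketch of the simplified matcher shape being described, assuming the `Symbol.customMatcher` spelling from the pattern matching proposal; the hint mechanism and the exact protocol are still in flux:

```js
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  // Called when `Point` is used as an extractor or as a pattern. Return an
  // array of extracted values on a match, or a falsy value when there is no
  // match. (A hint argument, not shown here, would let a matcher skip
  // allocating the array when the pattern only needs a yes/no answer,
  // e.g. `when Point:`.)
  static [Symbol.customMatcher](subject) {
    return subject instanceof Point ? [subject.x, subject.y] : false;
  }
}

// Destructuring (proposed syntax):    const Point(x, y) = p;  // throws if p is not a Point
// Pattern matching (proposed syntax): match (p) { when Point(x, y): …; }
```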
I’d like to dive into that more, but that certainly, with regard to going for Stage 2, which I understand you’re not even doing today, that certainly is a fine for an investigation that can proceed within Stage 2. -MM: The other questions I had with -- was with regard to the pattern matching proposal. The places in this slide deck where you showed pattern matches was just showing the match when as a -- the switch-like alternative for doing a bunch of sequential pattern matches. I certainly want extractors combined with the match when syntax. What -- given match and extractors, is there any other functionality from pattern matches that’s left on the table or could we just do match when plus extractor and get all the functionality? +MM: The other questions I had with -- was with regard to the pattern matching proposal. The places in this slide deck where you showed pattern matches was just showing the match when as a -- the switch-like alternative for doing a bunch of sequential pattern matches. I certainly want extractors combined with the match when syntax. What -- given match and extractors, is there any other functionality from pattern matches that’s left on the table or could we just do match when plus extractor and get all the functionality? CDA: Sorry, MM, we’ve got a point of order from JHD. @@ -1015,7 +867,7 @@ SYG: So in that case, I’m wondering -- so, one, that puts my mind a little bit RBN: It depends, really. Actually one of the goals I’m looking at is to find ways of optimizing that experience by being able to declare a closed set of potential object shapes that you might expect. But just like any function, you would have to expect that the -- the thing that you come in can differ from what you expect, because -- but it’s the nature of JavaScript. But just like anything in destructuring today, if you had an object literal or array destructures anding, you don’t know what the thing on the right-hand side is until you go to perform that, so you’re still having to run code that gets the symbol.iterator method, and if doesn’t exist, to throw and if it does exist you have to run possibly user to code the evaluate that. There might be performance optimizations that could be made by a run time by known shapes and known expectations, but in the -- at the end of the day, this is still basically just array destructuring. It just has that -- there’s that extra bit of user defined code that you could run that you might not be running, but you still could if it’s a getter or has all this other behavior behind symbol iterator. -SYG: Side bar comment, the array destructuring is, Array destructuring is so much slower than object destructuring. +SYG: Side bar comment, the array destructuring is, Array destructuring is so much slower than object destructuring. RBN: Yes, and that’s something that we -- I’ve been talking about with the pattern matching group about, is there a potential future for a -- if we in the future, say, had an algebraic data type data structure and question could define something that says you have a return value that is fail, a unary value or an n-ary value, the unitary value could be a simpler case for doing nested object destructuring having to deal with all of the array destructuring wrappers around it. Again, there’s not much progress there, plus we’d need to get an ADT enum proposal to Stage 1 to really start considering what the implications would be there. @@ -1066,120 +918,82 @@ RPR: Thank you. I think that’s the end of the queue. RBN: All right. 
I appreciate the feedback. I hope to have -- this back at committee again in the near future, and I wanted to give an update on where things were to keep the committee in the loop. We’ve been discussing this regularly in the pattern matching calls that we meet quite often, we discuss this also in the pattern matching discussion on Matrix. Most of that discussion has been happening there because the goal is to have both the binding pattern, assignment pattern version and the pattern matching version work together and be consistent in both respects, so there’s a lot of discussion around cross-cutting concerns and how this all works happening there. So thank you. ### Conclusion/Resolution + - Only an update, no consensus needed at this time. - Object extractors removed from proposal. - Investigating the performance characteristics of iterator-destructuring vs. explicit array indexing for extractors. -## Intl.MessageFormat +## Intl.MessageFormat RPR: Thank you, Ron. Okay, next we have the continuation of -- from Eemeli, although I don't actually know if Eemeli is here on the messageformat. -DE: I’ll present if Eemeli isn’t here. So I wanted to review a potential summary for Intl.MessageFormat next steps. Right now the proposal is at Stage 1, and I wanted to lay out what future work would be like. So here this big paragraph I have, I wrote this and I wanted to verify it with the group. So one piece of feedback that we’ve gotten is that it would be meaningful or persuasive to see, maybe a dozen organizations of various sizes, including ones that were not involved in MessageFormat 2 development to make significant use in production of MessageFormat 2 syntax across their stack. And that this will likely be required for Stage 2.7. The requirements for Stage 2, the exact requirements, I think are a little bit undecided. I think people were a bit more -- em Eemeli during his presentation expressing maybe the strongest possible interpretation of the need for a delay. But I think we could discuss that more in the future. Anyway, I wanted to put this in the summary to encourage this kind of future development. And in the conclusion noting that probably in the following TC39 meeting, there will be a presentation on Intl.MessageFormat for Stage 2, leaving out the parser and focusing on the data model. The committee has not expressed concerns about this approach. But it remains to be reviewed by TG2. And that TC39 encourages continued development prototyping and deployment of MessageFormat 2 syntax, for example, as implemented in the JS level library. +DE: I’ll present if Eemeli isn’t here. So I wanted to review a potential summary for Intl.MessageFormat next steps. Right now the proposal is at Stage 1, and I wanted to lay out what future work would be like. So here this big paragraph I have, I wrote this and I wanted to verify it with the group. So one piece of feedback that we’ve gotten is that it would be meaningful or persuasive to see, maybe a dozen organizations of various sizes, including ones that were not involved in MessageFormat 2 development to make significant use in production of MessageFormat 2 syntax across their stack. And that this will likely be required for Stage 2.7. The requirements for Stage 2, the exact requirements, I think are a little bit undecided. I think people were a bit more -- em Eemeli during his presentation expressing maybe the strongest possible interpretation of the need for a delay. But I think we could discuss that more in the future. 
Anyway, I wanted to put this in the summary to encourage this kind of future development. And in the conclusion noting that probably in the following TC39 meeting, there will be a presentation on Intl.MessageFormat for Stage 2, leaving out the parser and focusing on the data model. The committee has not expressed concerns about this approach. But it remains to be reviewed by TG2. And that TC39 encourages continued development prototyping and deployment of MessageFormat 2 syntax, for example, as implemented in the JS level library. DE: I really want to avoid giving the impression that TC39 kind of thinks that development is a bad idea. I don’t think anybody here has argued that. But taking out the syntax might be misinterpreted as that, so this is why I am proposing such a shared conclusion text. Any thoughts? Any concerns? Do we have consensus on this statement of, you know, encouragement? -MF: thumbs up from Michael Ficarra +MF: thumbs up from Michael Ficarra JHD: thumbs up MS: Just a nit, don’t stick to the following meeting, exchange at a future meeting, that will include the following meeting. - -DE: Okay, done. -JSC: And JS Choi has a + 1, end of message. +DE: Okay, done. JSC: And JS Choi has a + 1, end of message. ### Conclusion/Resolution +RPR: Okay, all wrapped up. Thank you. Okay. I think we’ve kind of come to end of the normal agenda. I know Chris wanted to do ECMA recognition awards, but I think we’ve actually got now at least a 20 minute gap, maybe a bit more. Dan, would like -- we don’t have enough time to fit in any of the usual topics. But, Dan, would you like to use this to do some scrubbing of Stage 1 proposals? -RPR: Okay, all wrapped up. Thank you. Okay. I think -we’ve kind of come to end of the normal agenda. I know Chris wanted to do ECMA recognition -awards, but I think we’ve actually got now at least a 20 minute gap, maybe a bit more. Dan, -would like -- we don’t have enough time to fit in any of the usual topics. But, Dan, would you -like to use this to do some scrubbing of Stage 1 proposals? +## Scrub Stage 1 proposals -## Scrub Stage 1 proposals -Presenter: Daniel Ehrenberg +Presenter: Daniel Ehrenberg -DE: Yeah, let’s do it. So I’ll share my screen again. This is continuing something that my -colleague in Bloomberg, Peter, has kind of started of looking through past Stage 2 and 3 -proposals to figure out what our next steps should be. In the chat, in the delegates chat, in -the past week or two, the question was raised what about this Stage 1 proposal, should we -withdraw it? And so that kind of raises the question, well, we have a lot of Stage 1 proposals. Why don’t we do the scrubbing process through them. And this is intended to be completely informal, no particular preparation expected. But don’t feel like have to answer if you’re put on the spot. So let’s just talk this through, and then in a future meeting, we can figure out -- we can actually put something on the agenda to propose a particular action on the proposals. So “export V from mod”. +DE: Yeah, let’s do it. So I’ll share my screen again. This is continuing something that my colleague in Bloomberg, Peter, has kind of started of looking through past Stage 2 and 3 proposals to figure out what our next steps should be. In the chat, in the delegates chat, in the past week or two, the question was raised what about this Stage 1 proposal, should we withdraw it? And so that kind of raises the question, well, we have a lot of Stage 1 proposals. Why don’t we do the scrubbing process through them. 
And this is intended to be completely informal, no particular preparation expected. But don’t feel like you have to answer if you’re put on the spot. So let’s just talk this through, and then in a future meeting, we can figure out -- we can actually put something on the agenda to propose a particular action on the proposals. So “export V from mod”.

JHD: So I brought this one up within the last handful of meetings. I will have to check the notes to remember the conclusion. The purpose of bringing it up was to say is this going to be worth my effort to champion and bring it back. So if -- if the notes contain an encouragement of that path, then I will update the presented date and the champion date. But if not, then I would withdraw it or I would want to mark it as withdrawn.

DE: That’s great. Thanks for bringing it up. Does anyone want to encourage or discourage JHD from progressing this proposal? Feel free to also provide feedback asynchronously.

DE: Observables, so observables, as many of you know, are under discussion as a WHATWG HTML proposal. They were previously proposed as a TC39 proposal. I think the move to WHATWG was partly because there was somebody in -- somebody who was excited to work on it within that forum, partly might have been due to a misunderstanding about TC39 having rejected the proposal, which I don’t think we did. Anyway, should we withdraw this proposal and be content with it proceeding in WHATWG, or what would you all like to do?

MM: So as the co-champion with Jafar, I am not interested in putting effort into this by myself. So if that’s the conditions under -- only conditions under which it would advance, then it won’t advance. Jafar, as far as I know, has not been active for a very, very long time. I haven’t heard from him in a very, very long time. I was not -- I do think that this -- you know, that this thing, if it happens, should be, you know, just abstractly, if -- if given a choice between WHATWG and TC39, TC39 is the appropriate venue, but obviously only if someone’s willing to push it here, which I’m not. I’m not willing to do -- to do it alone.

JHD: I'm on the queue for, like, basically +1ing that.
If I thought that it would -- that the -- how would I put it? The folks who are enthusiastic about working on it in the web, if I thought that they would pause their efforts and, you know, and help us or not advance a solution -- a more general solution in the language, then I would be happy to join as a champion and help with that. But I have had a few backchannel conversations and do not get the sense that they are willing to do that, which, you know, makes that -- the effort of doing it here complicated.

MM: Okay. I also want to take the opportunity to compliment Jafar on the extraordinary job he did. I was really along for the ride. He was the driving champion, and he just did a great job of this. Very powerful and elegant.

-DE: So is anybody interested in getting involved in the WHATWG standards effort and having -trouble finding a path towards that, or is anybody interested in -- no one’s interested in reviving this, we should probably move.

+DE: So is anybody interested in getting involved in the WHATWG standards effort and having trouble finding a path towards that, or is anybody interested in -- if no one’s interested in reviving this, we should probably move on.

MLS: Is Jafar part of the WHATWG effort?

-DE: No, I don’t think -- at least not visibly involved in any of these sort of standards. But -Len LicSh, the current maintainer of RX JS is very much involved.

+DE: No, I don’t think -- at least not visibly involved in any of these sort of standards. But Ben Lesh, the current maintainer of RxJS, is very much involved.

-JHD: I’ve spoken to Ben, And if he got enough of a signal this committee that convinced it would happen -roughly as quickly in TC39, he would prefer it here, but his overarching priority is to get it -shipped on the web, one way or another.

+JHD: I’ve spoken to Ben, and if he got enough of a signal from this committee that convinced him it would happen roughly as quickly in TC39, he would prefer it here, but his overarching priority is to get it shipped on the web, one way or another.

-DE: Right. I suspect that Ben would have enough community sway to make that transition occur. -So it really comes down to if anybody wants to actively work on this, and if not, then we -should, you know, just be happy that it’s progressing in this other forum. Okay,.

+DE: Right. I suspect that Ben would have enough community sway to make that transition occur. So it really comes down to if anybody wants to actively work on this, and if not, then we should, you know, just be happy that it’s progressing in this other forum. Okay.

-LEO: DE, if you’re planning to go through all the list, I recommend we just organize some -verification work and we can reach out to all the implementers, otherwise we might take like -a -- just like too many long minutes to go through each one of them. Like, we can do some -verification to see, like, if there is intent to continue work on those proposals. I can help -on that, but yes.

+LEO: DE, if you’re planning to go through all the list, I recommend we just organize some verification work and we can reach out to all the implementers, otherwise we might take like a -- just like too many long minutes to go through each one of them. Like, we can do some verification to see, like, if there is intent to continue work on those proposals. I can help on that, but yes.

-DE: So, yeah, like, I don’t have anything else that we should use the time for, and this is a -thing that’s been on all of our shared backlog for a really long time.
+DE: So, yeah, like, I don’t have anything else that we should use the time for, and this is a thing that’s been on all of our shared backlog for a really long time.

-RPR: In the previous meetings when we’ve done these scrubs, I think they have proved effective use of time.

+RPR: In the previous meetings when we’ve done these scrubs, I think they have proved effective use of time.

-DE: For secure EMCAScript

+DE: For secure ECMAScript

-MM: We are currently working on it. And as well as a lot of issues around it. We will be changing the name from secure EMCAScript to hardened JavaScript. That’s the name we’ve been using. That name’s worked much better and it avoids some of the political issues around using the term security. Not just political, but also clarity. Hardened is clearly more evocative of integrity, and integrity better names what this -is about rather than the vague security. So, yes, this is -- this stays active, and we can do -the rename later.

+MM: We are currently working on it. And as well as a lot of issues around it. We will be changing the name from secure ECMAScript to hardened JavaScript. That’s the name we’ve been using. That name’s worked much better and it avoids some of the political issues around using the term security. Not just political, but also clarity. Hardened is clearly more evocative of integrity, and integrity better names what this is about rather than the vague security. So, yes, this is -- this stays active, and we can do the rename later.

-RPR: Just to address Michael’s point of order on the queue, yes, please use the queue to question -whoever is leading the topic. Yes. And there’s just under 10 minutes left.

+RPR: Just to address Michael’s point of order on the queue, yes, please use the queue to question whoever is leading the topic. Yes. And there’s just under 10 minutes left.

-DE: Math extensions. So this proposal started with a couple things 2 degrees to radiants. Last -- when it was present, I wasn’t sure of the motivation of these particular things. Does anyone want to pick up this proposal? Okay. So we could move that to withdrawn.

+DE: Math extensions. So this proposal started with a couple of things, like degrees to radians. Last -- when it was presented, I wasn’t sure of the motivation of these particular things. Does anyone want to pick up this proposal? Okay. So we could move that to withdrawn.

-LEO I personally, I have no interest in continuing this work(array.of AND array.FROM). Is it this work? I have no intention to continue the work on these other proposals right now.

+LEO: I personally, I have no interest in continuing this work (Array.of and Array.from). Is it this work? I have no intention to continue the work on these other proposals right now.

-KG: I still think they’re worth doing, and would pick them up if I get through everything else. So array.of is a variadic argument way of creating an array. Array.from takes an iterable -and gives you an array. This would be the same on map and set and WeakMap and WeakSet. So you -could do set.of 2, 3, 4 or whatever.

+KG: I still think they’re worth doing, and would pick them up if I get through everything else. So array.of is a variadic argument way of creating an array. Array.from takes an iterable and gives you an array. This would be the same on map and set and WeakMap and WeakSet. So you could do set.of 2, 3, 4 or whatever.

RPR: And there is support from MF.

DE: Generator arrow functions.
Yeah, isn’t it kind of weird that you can make a function or an async function as an arrow, but not a generator? And one idea here was that we make this syntax, which I like a lot, generator instead of a star. But I don’t know how often this comes up. So does anybody want to work on this? It’s pretty tricky syntactically if we try to go with the star. And then the generator keyword feels redundant, so I think that was the conundrum we ran into. Any interest? Okay. So we will withdraw.

-DE: Math.signbit proposal So this was a floating point function that could help make it kind of easier to see the sign of positive and negative zero. And JF Bastien proposed it. I think it was kind of -- got a negative reception due to it being a bad thing to go, maybe, I’m not sure. Mark, were you one of the people with opinions on this?

+DE: Math.signbit proposal. So this was a floating point function that could help make it kind of easier to see the sign of positive and negative zero. And JF Bastien proposed it. I think it was kind of -- got a negative reception due to it being a bad thing to do, maybe, I’m not sure. Mark, were you one of the people with opinions on this?

MM: Well, I don’t remember.

@@ -1189,7 +1003,7 @@ MM: I certainly do not remember vetoing it and just looking at the short summary

DE: Okay. That’s -- that’s nice to hear.

-DE: Error stacks, if I could put you on the spot, do you want to give a status update.

+DE: Error stacks, if I could put you on the spot, do you want to give a status update?

JHD: Yeah, the -- we were discussing this in Matrix last week, I think. So it still should remain Stage 1. I would like to advance it. The last time I did, when I thought it was ready for Stage 2, I was given new feedback and a requirement, which is that it -- the current -- currently all it does is strictly specify the format of the stack and not the exact contents. And the new ask I was given was to extend it to fully specify everything, like, the union of all browsers’ behavior for the contents of the stack, and I have not yet had time to boil an ocean, so I haven’t gotten back to it yet, but it still remains something a lot of people are interested in. So I would love some help or some signal that I should bring it back and that requirement will no longer be imposed.

@@ -1199,15 +1013,13 @@ MM: Yeah, I should -- it says spec drafted by the two of us, which is correct. I

JHD: You are, and if that’s not listed, I will try and update that so it lists you.

-MM: Okay. And the table -- it was not listed. But that’s fine. Good. And with regard -- and since it’s Stage 1, I can postpone all my other questions. Thank you. Yes, I’m also interested there continuing to collaborate on this.

+MM: Okay. And the table -- it was not listed. But that’s fine. Good. And with regard -- and since it’s Stage 1, I can postpone all my other questions. Thank you. Yes, I’m also interested in continuing to collaborate on this.

-RPR: Thank you, Mark. The -- oh, and Chris. Chris says it might be worth bringing back error -stacks to committee without further work, even if just for discussion refresher.

+RPR: Thank you, Mark. The -- oh, and Chris. Chris says it might be worth bringing back error stacks to committee without further work, even if just for a discussion refresher.

JHD: Yeah, I mean, I can certainly do that. I will just re-ask for Stage 2 at a future meeting and see what the feedback is at that time. I’m happy to do that.
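For context on the stack-format divergence discussed above, engines already disagree on the shape of `error.stack`; roughly:

```js
try {
  null.f();
} catch (e) {
  console.log(e.stack);
}
// In V8 (Chrome, Node) this logs something like a message line followed by
// "    at ..." frames:
//   TypeError: Cannot read properties of null (reading 'f')
//       at file:///example.js:2:8
// SpiderMonkey (Firefox) and JavaScriptCore (Safari) instead emit only frames,
// in a "functionName@file:line:column" shape, with no leading message line.
```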
Just please someone, if that would be a waste of my and committee time, please let me know asynchronously, so I don’t do that. -DE: Can I ask to clarification for Chris what they would like to see and what -- what discussion -they’re interested in having. +DE: Can I ask for clarification from Chris on what they would like to see and what -- what discussion they’re interested in having. CDA: It’s really just, there’s a possibility that the -- without further work, that it would be received better by the committee as it’s made up of today. Put it that way. @@ -1229,8 +1041,7 @@ DE: Do expression. I’m really excited about these. KG: I’ve mentioned this to a couple people, but just to repeat, we have, like, fully half a dozen fairly large syntax proposals in the works. And I think that the time of the committee as a whole and the time of me personally, at the current margin, would be better spent on new APIs than new syntax. However, that doesn’t mean I don’t think it’s a good proposal. I don’t think it necessarily needs to be withdrawn. I just don’t think it’s a good use of our time right now given how much syntax stuff is going on. If we wanted to, like, spend a little less effort on syntax in general and maybe concentrate on certain parts of the syntax, I think this would be a good candidate for it. But I am not currently working on this. That doesn’t mean I think it should be withdrawn, just that I’m not currently working on it. -DE: Cool, so you seem to be making a lot of progress towards resolving the previous open issues. How far along did you get? Like, what would be the next steps if someone else wanted to pick -this up? +DE: Cool, so you seem to be making a lot of progress towards resolving the previous open issues. How far along did you get? Like, what would be the next steps if someone else wanted to pick this up? KG: Major things are specifying the semantics of break and continue, making sure those work in arbitrary expression position, making sure the list of things that are prohibited is comprehensive so you don’t have weird edge cases if you put a loop at the end of the do expression etc, and potentially investigating the precise syntax - there was discussion of maybe using `expr {}` instead of `do {}` or various things like that. But mostly small stuff. @@ -1242,176 +1053,41 @@ JWK I’m the same as JHD. RPR: Okay, I think that’s time for now. So thanks, Dan, for progressing those. All right, great. So we’re getting towards the end of the day. And we’re through the kind of like regular agenda topics. Chris I know has a few words to say on our process for how we interact with ECMA recognition awards. So we’re nearly there. We’re just having some technical difficulties. Okay. Screen -- we’re figuring out screen sharing. Now is the time of the day to wake up. We’re almost there. Sure, you’re allowed in. All right, you’re sharing your screen. -## Ecma Recognition Awards - -CDA: All right. Ecma recognition awards. The Ecma award program was established to recognize -contributions by individuals to development of standards and the benefit of the ICT and consumer electronics industries. So we have a number of lovely people from TC39 that have previously received this award. I’m sure you all recognize some of the names here. We dropped the ball in 2022. We did not have any nominations -- or nominees despite definitely having some folks that could have been nominated. So let’s not repeat that. We can nominate at any time of the year and these get reviewed at the GA meetings.
If you have someone you would like to nominate, please let the chairs know. It’s helpful to have the nomination written up with the justification. But we can also help you with that. For the year 2023, we had three nominations. It’s not unheard of to have multiple nominations in a year, but we felt like we needed to make up a little bit for not putting anyone up in 2022. We were kind of spoiled for choice on our shortlist of people to nominate. Which is great. But, yeah, can’t nominate everybody. So there are still some folks probably deserving that maybe we can put up for this year. We will do better about reminding people to think about nominations for this every year so we do not drop the ball again. So start thinking about -that. If you have anyone in mind, please let us know. And now I will kick it over to Samina. - -SHN: I’ll share my screen. Okay, thank you, thank you for that lead, CDA. So nominations for awards, yes, we had three nominations. And, the December general assembly unanimously approved and are excited to accept the three that we have here. So Myles Borins, I don’t think Myles is on the call. And he -- -I did invite him to join, but he wasn’t able to attend. He did receive his recognition. We’ve -got Shane who is sitting here. And we have Mark Miller online, and I had the chance to give Mark his recognition award in words when I saw him in December. -But I’d like to be through a little bit about the citation that was written for each of them, -so we’ll start with Myles. Who in this room has worked with Myles? Would anybody like to share any other words than what’s on the screen about Myles? An anecdote? - -RPR: I would definitely follow up by saying I worked with Myles probably most intensely on -getting ES module support into node. As he chaired the subgroup, the task group, well, -associated with that. And I just can’t believe how determined and diligent he was about seeing -that process through to completion, because, you know, we standardized ES modules a long time -ago, getting it into node was a very big deal, but loss of people had opinions, and -- but lots -of people had opinions and I’m just super grateful and impressed with his both technical and -social skills to, you know, to rally people together to make that a success. - -SHN: It’s nice to hear. Also at the GA, I heard nothing but really positive words from some of -the people that were there. So I think Myles is very much appreciative of this and very -humbled and surprised when he received the award. He has received his award. The next -individual is Shane, who is sitting here. And this is a citation that you all put together for -Shane. Shane, would you join me for a moment so I can give you the award in person, please. -Thank you for all your contributions. - -SFC: Oh, wow, look at this. It’s made of glass. It is, it’s made of glass, and we’ll take some official professional Hollywood -- please. So first of all, it’s very much of an honor to be recognized for this. And to have had the opportunity to work with so many talented individuals to make web platform more -accessible to users around the world. We -- what we are building the most thoroughly designed -internationalization API in the industry. 
Not only does it form the base for -internationalization on the web platform, but now it’s even being used to form the basis for -internationalization in other programming languages on including rust and the ICFRX project I -also work on to bring internationalization to rust and to client side and low resource devices -and WebAssembly. So just wanted to extend my thanks first of all to the Google -internationalization team for continuing to sponsor my participation in this group. To the -ECMA 402 editors and proposal champions that I’ve had the opportunity to work with, including -Daniel Ehrenberg, Richard, Ujjwal, Ziibi, Ben, Justin, Romulo and many more and also to ECMA -402 implementers including the IC4X team and of course to ECMA, including Samina and the chairs -of the -- of TC39. So thank you again for the honor and recognition. It means lot to me and I -look forward to continuing to be a champion for internationalization for some time to come. -Thank you very much. Thank you. - -SHN: Okay, and last but not least, Mark, you’re on the call. I can see you there. So Mark -Miller, I had the pleasure to meet you in Cupertino. I’m glad you’re online. This is the citation that the team put together for you for your outstanding contributions of many years of work in TC39. Mark, would you like to share a few words. - -MM: Yeah, actually, I’d like to -- first of all, I’ll just -- just all of the thanks that you -would expect, I’d like to just express all of that, and then use my moment to relay a -conversation that Samina and I had, which is a few years ago, we had around TC39 over a period -of about a year where people gave their vision talks, their vision for what the future of the -language holds. And then we stopped, and in the absence of that, we’re just sort of each -working on these point proposals, each advancing different agendas. And I think those vision -talks help us understand the variety of agendas people are interested in advancing and help get -shared enthusiasm for some of those agendas. So I would like to suggest that that kind of -orienting activity, that’s also addressing the larger world, that that’s something we should do -again. - -SHN: Thank you, Mark. Mark, your vision talks, you’ve sent me your links, so I will be watching -them and we’re going to find a way to put good use to them and use that to continue to develop -further. Any anecdotes or any meeting where Mark was difficult and you want to remind him of that? All right. Mark, thank you very much. I’m sorry you’re not here. We’re going to have a little bit of cake later. We can certainly save some, but I’m not sure it will make it in the mail to you. But we have some -swag and I’ll make sure I get one to you. Thank you. - - -LEO: Like, if that’s not to interrupt swag time, I’d like to say a thing -- So my first TC39 meeting -- like, 8 years ago, I came in to TC39 because I was -- I juster started -working with test262, and I found some bugs on typed a arrays, bugs including on specifications -and implementations, and I was totally horrified to be in that meeting. Like, I was like, I’m -just a web developer, what should I do here? And Mark was the first person to actually talk to -me. But I used to work with -- I was working with Rick at the time and he actually gave me, -like, some notion who would be in the room. 
And he also told me who was Mark Miller, like, if -you use any micro S, if you have the finder, the Miller columns are named after him, and there -are so many other things that you can just come in deep. Mark is a person who is not only part -of the EMCAScript, but the web. He’s a very important part, and I totally recommend everyone -to go have a chat with Mark to, like, know more of, like project Xanadu. There’s so many -brilliant stories to discuss with Mark. It’s extracting some bits of the history of the web. -So so amazing. And all the awards here are very well deserved. - -SHN: Thank you, Leo. That’s great. Thanks for that comment. Mark, you’ll be inundated with -conversation and maybe I should and to do a video series, adventures with Mark or historic -moments of TC39 with Mark. If you’re good with that. - -MM: Certainly doing highlights on the TC39 history starting in 2007, I’d be very happy to do -that. - -SHN Thank you. Well, that leads me to my next little few slides, if I may, Mark. Thank you -very much. And we’ll continue the conversation. I’m just going to go to my next slides. So -100th meeting, we had this conversation at the last plenary or the one before. We couldn’t -quite agree this was 100, but we like to number. It’s pretty close to 100. Maybe it’s 102, -with you but we’ll stick with the 100. Little bit of history. With the help of Istvan, who -you all know and Allen who has done a lot of work on history, I just took a caption of one of -the information from his book, which is the very first TC39 meeting, nearly 30 years ago. It’s -impressive how sustaining this particular TC39 is. And the impact it’s made on the world, on -what we do, how we communicate, how we interact. And I looked at the list of attendees for -that meeting, there were 30 attendees for that meeting, we have double that by now. I don’t -recognize the names, but I do recognize the organizations. But who in this room recognizes -names of the persons who attended that very first meeting? Oh, that’s good. I’m sure -- Mark, -I see your hand is up. I’m sure -- Mark, you weren’t in that first meeting, though. You’re -mute. +## Ecma Recognition Awards + +CDA: All right. Ecma recognition awards. The Ecma award program was established to recognize contributions by individuals to development of standards and the benefit of the ICT and consumer electronics industries. So we have a number of lovely people from TC39 that have previously received this award. I’m sure you all recognize some of the names here. We dropped the ball in 2022. We did not have any nominations -- or nominees despite definitely having some folks that could have been nominated. So let’s not repeat that. We can nominate at any time of the year and these get reviewed at the GA meetings. If you have someone you would like to nominate, please let the chairs know. It’s helpful to have the nomination written up with the justification. But we can also help you with that. For the year 2023, we had three nominations. It’s not unheard of to have multiple nominations in a year, but we felt like we needed to make up a little bit for not putting anyone up in 2022. We were kind of spoiled for choice on our shortlist of people to nominate. Which is great. But, yeah, can’t nominate everybody. So there are still some folks probably deserving that maybe we can put up for this year. We will do better about reminding people to think about nominations for this every year so we do not drop the ball again. So start thinking about that. 
If you have anyone in mind, please let us know. And now I will kick it over to Samina. + +SHN: I’ll share my screen. Okay, thank you, thank you for that lead, CDA. So nominations for awards, yes, we had three nominations. And the December general assembly unanimously approved and are excited to accept the three that we have here. So Myles Borins, I don’t think Myles is on the call. And he -- I did invite him to join, but he wasn’t able to attend. He did receive his recognition. We’ve got Shane who is sitting here. And we have Mark Miller online, and I had the chance to give Mark his recognition award in words when I saw him in December. But I’d like to go through a little bit about the citation that was written for each of them, so we’ll start with Myles. Who in this room has worked with Myles? Would anybody like to share any other words than what’s on the screen about Myles? An anecdote? + +RPR: I would definitely follow up by saying I worked with Myles probably most intensely on getting ES module support into node. As he chaired the subgroup, the task group, well, associated with that. And I just can’t believe how determined and diligent he was about seeing that process through to completion, because, you know, we standardized ES modules a long time ago, getting it into node was a very big deal, but lots of people had opinions, and I’m just super grateful and impressed with both his technical and social skills to, you know, to rally people together to make that a success. + +SHN: It’s nice to hear. Also at the GA, I heard nothing but really positive words from some of the people that were there. So I think Myles is very much appreciative of this and very humbled and surprised when he received the award. He has received his award. The next individual is Shane, who is sitting here. And this is a citation that you all put together for Shane. Shane, would you join me for a moment so I can give you the award in person, please. Thank you for all your contributions. + +SFC: Oh, wow, look at this. It’s made of glass. It is, it’s made of glass, and we’ll take some official professional Hollywood -- please. So first of all, it’s very much of an honor to be recognized for this. And to have had the opportunity to work with so many talented individuals to make the web platform more accessible to users around the world. What we are building is the most thoroughly designed internationalization API in the industry. Not only does it form the base for internationalization on the web platform, but now it’s even being used to form the basis for internationalization in other programming languages, including Rust and the ICU4X project I also work on to bring internationalization to Rust and to client-side and low-resource devices and WebAssembly. So just wanted to extend my thanks first of all to the Google internationalization team for continuing to sponsor my participation in this group. To the ECMA 402 editors and proposal champions that I’ve had the opportunity to work with, including Daniel Ehrenberg, Richard, Ujjwal, Zibi, Ben, Justin, Romulo and many more and also to ECMA 402 implementers including the ICU4X team and of course to ECMA, including Samina and the chairs of the -- of TC39. So thank you again for the honor and recognition. It means a lot to me and I look forward to continuing to be a champion for internationalization for some time to come. Thank you very much. Thank you. + +SHN: Okay, and last but not least, Mark, you’re on the call.
I can see you there. So Mark Miller, I had the pleasure to meet you in Cupertino. I’m glad you’re online. This is the citation that the team put together for you for your outstanding contributions of many years of work in TC39. Mark, would you like to share a few words? + +MM: Yeah, actually, I’d like to -- first of all, I’ll just -- just all of the thanks that you would expect, I’d like to just express all of that, and then use my moment to relay a conversation that Samina and I had, which is a few years ago, we had around TC39 over a period of about a year where people gave their vision talks, their vision for what the future of the language holds. And then we stopped, and in the absence of that, we’re just sort of each working on these point proposals, each advancing different agendas. And I think those vision talks help us understand the variety of agendas people are interested in advancing and help get shared enthusiasm for some of those agendas. So I would like to suggest that that kind of orienting activity, that’s also addressing the larger world, that that’s something we should do again. + +SHN: Thank you, Mark. Mark, your vision talks, you’ve sent me your links, so I will be watching them and we’re going to find a way to put good use to them and use that to continue to develop further. Any anecdotes or any meeting where Mark was difficult and you want to remind him of that? All right. Mark, thank you very much. I’m sorry you’re not here. We’re going to have a little bit of cake later. We can certainly save some, but I’m not sure it will make it in the mail to you. But we have some swag and I’ll make sure I get one to you. Thank you. + +LEO: Like, if that’s not to interrupt swag time, I’d like to say a thing -- So my first TC39 meeting -- like, 8 years ago, I came in to TC39 because I was -- I just started working with test262, and I found some bugs on typed arrays, bugs including on specifications and implementations, and I was totally horrified to be in that meeting. Like, I was like, I’m just a web developer, what should I do here? And Mark was the first person to actually talk to me. But I used to work with -- I was working with Rick at the time and he actually gave me, like, some notion who would be in the room. And he also told me who was Mark Miller, like, if you use macOS, if you have the Finder, the Miller columns are named after him, and there are so many other things that you can just come in deep. Mark is a person who is not only part of ECMAScript, but the web. He’s a very important part, and I totally recommend everyone to go have a chat with Mark to, like, know more of, like project Xanadu. There’s so many brilliant stories to discuss with Mark. It’s extracting some bits of the history of the web. So, so amazing. And all the awards here are very well deserved. + +SHN: Thank you, Leo. That’s great. Thanks for that comment. Mark, you’ll be inundated with conversation and maybe I should do a video series, adventures with Mark or historic moments of TC39 with Mark. If you’re good with that. + +MM: Certainly doing highlights on the TC39 history starting in 2007, I’d be very happy to do that. + +SHN: Thank you. Well, that leads me to my next little few slides, if I may, Mark. Thank you very much. And we’ll continue the conversation. I’m just going to go to my next slides. So 100th meeting, we had this conversation at the last plenary or the one before. We couldn’t quite agree this was 100, but we like the number. It’s pretty close to 100.
Maybe it’s 102, with you but we’ll stick with the 100. Little bit of history. With the help of Istvan, who you all know and Allen who has done a lot of work on history, I just took a caption of one of the information from his book, which is the very first TC39 meeting, nearly 30 years ago. It’s impressive how sustaining this particular TC39 is. And the impact it’s made on the world, on what we do, how we communicate, how we interact. And I looked at the list of attendees for that meeting, there were 30 attendees for that meeting, we have double that by now. I don’t recognize the names, but I do recognize the organizations. But who in this room recognizes names of the persons who attended that very first meeting? Oh, that’s good. I’m sure -- Mark, I see your hand is up. I’m sure -- Mark, you weren’t in that first meeting, though. You’re mute. MM: I was not. The name that jumped out at me is the name that everybody here recognizes, Mr. Eich. That’s Brendan. If Waldemar is not here, I probably don’t recognize anybody else. -SHN: No, Waldemar is not there. I think at that time he did represent Netscape, but he wasn’t on -this particular meeting. I think he started to coming to later ones. Of course Brandon’s name -I recognize. That’s a small anecdote. You’ve been around for a while and it has sustained -itself extremely well. If I just go to my next slide. So this is just a little bit of a -timeline of what’s been happening with the different EMCAScripts. So you had the first one in -1997. And it’s continued since. And there were some gaps. It was not every single year -directly. There was even a year that it was abandoned for some time, and then picked up again. -But basically since 2016, it’s been every year. So it’s also extremely impressive. So just -the foundation that you’re working on is extremely strong. All right. So I didn’t know this, -but I learned it, and maybe all of you in this room already know about the book. Do you all -know about the book, the first 20 years of the -- the first 20 years? No? So the book can be -downloaded for free. And we mentioned Brandon’s name just a moment ago, he is one of the -authors, together with Allen. Allen’s the one that pointed out the book. He and I had some -good conversations regarding this history and I found some other link, which I will share these -leads. Why have them posted on GitHub, and it gives a little bit of history and archive which -information about TC39 and egg EMCAScript it’s a journey to where the came today, and I asked -Allen will he write the next chapter? He’s done the first 20 years and 10 years have passed -since then and who is going to do the next 10 years? Allen didn’t bite. I couldn’t entice him -enough to do it, and I thought maybe somebody in this room is going to pick that up. And I’ll -leave that as a challenge to you. The next chapter and where will it take us, it would be -really great if somebody in room wrote that next chapter, unless of coursing Mark, you’d like -to write it. You’re online. Would you like to do the next 20 years? - -MM: I’m not signing up for that. If somebody else wants to take the lead, I’m certainly happy -to contribute. - -SHN: Thank you. I was afraid of that answer. But anyway, we’ll leave it as a challenge. But it -could be interesting. And I think it would be very interesting, and I can imagine that the -stories will continue. Does anybody else have any anecdotes on this? Istvan, I see that -you’re on the call. You’re on mute. 
I know it’s extremely late for you, but you know TC39 -also for many years. If you would like to share a few words, you’re welcome. But no pressure. - -IS: First of all, if you can go back to the previous slide, I can give you a couple of names. -Okay, one more. Yeah, here. Yeah. So first of all, the two people who were absolutely -essential in creating TC39, it was my predecessor, is secretary-general of ECMA, Jan van -denberg, and he knew everybody on earth and one of those gentlemen is also here in the list of -participate, and that was Carl [INAUDIBLE] from Netscape. And Carl was really an absolutely -great guy, and then he continued after -- afterwards also in other companies, and we always had -contact with him. And even a couple of years ago until lately he worked all in the ECMA -executive committee. And Carl knew also very well Jan van denberg, and when Netscape wanted to -standardize JavaScript, and first to all, they tried in IETF and tried in the worldwide web -consortium, but somehow they didn’t accept the proposal that it was necessary and to have some -kind of scripting languages on the web, which was a big mistake, in my opinion. And then -recognized, and then Jan van denberg, and they convinced ECMA to take over this task, and -then this is the results of it, and I recognize several people from ISO and also from the ECMA -page like Mike Skarb from Hewlett-Packard and William Meyer from GTB associates who was -actually a lawyer, and he worked at that final for Microsoft, so it is very, very interesting. -And then of course, you know, Gary Robson, who was then elected as the first chairman of TC39, -he was a very well-known figure and he worked at that time for Sun. And then the vice chairman -was Carl Carl gait, et cetera. I knew some of the people and they were absolutely great guys -and did a very good job, not only here in TC39, but also generally in standardization. So -maybe that’s, you know, what I can tell you from this first -- from this -- from this table, -you know, participation in the first meeting and the first elected officers. And if you go to -the next one, next slide, yeah, so if you -- yeah, here. So actually, at the beginning, it was -extremely difficult standardization. I was not here, but you can see from this -- from the -- -that -- how difficult it was standardized. The first version, it was very quickly -standardized, actually, so it must have been an absolutely great performance of the group that -they managed within half a year to come out with a first version of the standard. Also, the -second version they came up. And if I remember, maybe for 250 pages or even less, something -like that. Now, today we are between 800 and 900. And then later on, the development, it -really slowed down, so when I came to ECMA, it was 2006. Then it was that they had already -been work on it and fighting with each other and it was Phil pieman who was then the chairman -of TC39, and John, like myself, didn’t understand anything about JavaScript or ECMAScript. But -he was a great standardization guy and he made it possible, you know, to come up with a -compromise solution, and that was actually when we came out with JavaScript finally in December 2009. Which then 5.1, 2011, et cetera. And when the decision was made, actually, in -2016 to come up with the yearly release, honestly, I didn’t believe that it would be possible -to come out year by year. 
So I have to congratulate to the current members of the group that -you are still able after so many -- after about 8 years after 2016, you know, to come out so -precisely every June with a new edition of the standard. So I really would like to congratulations to everybody on that. And maybe that was the -- that was the end of what I wanted -to say about the standardization that I have seen here. +SHN: No, Waldemar is not there. I think at that time he did represent Netscape, but he wasn’t on this particular meeting. I think he started to coming to later ones. Of course Brandon’s name I recognize. That’s a small anecdote. You’ve been around for a while and it has sustained itself extremely well. If I just go to my next slide. So this is just a little bit of a timeline of what’s been happening with the different EMCAScripts. So you had the first one in 1997. And it’s continued since. And there were some gaps. It was not every single year directly. There was even a year that it was abandoned for some time, and then picked up again. But basically since 2016, it’s been every year. So it’s also extremely impressive. So just the foundation that you’re working on is extremely strong. All right. So I didn’t know this, but I learned it, and maybe all of you in this room already know about the book. Do you all know about the book, the first 20 years of the -- the first 20 years? No? So the book can be downloaded for free. And we mentioned Brandon’s name just a moment ago, he is one of the authors, together with Allen. Allen’s the one that pointed out the book. He and I had some good conversations regarding this history and I found some other link, which I will share these leads. Why have them posted on GitHub, and it gives a little bit of history and archive which information about TC39 and egg EMCAScript it’s a journey to where the came today, and I asked Allen will he write the next chapter? He’s done the first 20 years and 10 years have passed since then and who is going to do the next 10 years? Allen didn’t bite. I couldn’t entice him enough to do it, and I thought maybe somebody in this room is going to pick that up. And I’ll leave that as a challenge to you. The next chapter and where will it take us, it would be really great if somebody in room wrote that next chapter, unless of coursing Mark, you’d like to write it. You’re online. Would you like to do the next 20 years? + +MM: I’m not signing up for that. If somebody else wants to take the lead, I’m certainly happy to contribute. +SHN: Thank you. I was afraid of that answer. But anyway, we’ll leave it as a challenge. But it could be interesting. And I think it would be very interesting, and I can imagine that the stories will continue. Does anybody else have any anecdotes on this? Istvan, I see that you’re on the call. You’re on mute. I know it’s extremely late for you, but you know TC39 also for many years. If you would like to share a few words, you’re welcome. But no pressure. + +IS: First of all, if you can go back to the previous slide, I can give you a couple of names. Okay, one more. Yeah, here. Yeah. So first of all, the two people who were absolutely essential in creating TC39, it was my predecessor, is secretary-general of ECMA, Jan van denberg, and he knew everybody on earth and one of those gentlemen is also here in the list of participate, and that was Carl [INAUDIBLE] from Netscape. And Carl was really an absolutely great guy, and then he continued after -- afterwards also in other companies, and we always had contact with him. 
And even a couple of years ago until lately he worked all in the ECMA executive committee. And Carl knew also very well Jan van denberg, and when Netscape wanted to standardize JavaScript, and first to all, they tried in IETF and tried in the worldwide web consortium, but somehow they didn’t accept the proposal that it was necessary and to have some kind of scripting languages on the web, which was a big mistake, in my opinion. And then recognized, and then Jan van denberg, and they convinced ECMA to take over this task, and then this is the results of it, and I recognize several people from ISO and also from the ECMA page like Mike Skarb from Hewlett-Packard and William Meyer from GTB associates who was actually a lawyer, and he worked at that final for Microsoft, so it is very, very interesting. And then of course, you know, Gary Robson, who was then elected as the first chairman of TC39, he was a very well-known figure and he worked at that time for Sun. And then the vice chairman was Carl Carl gait, et cetera. I knew some of the people and they were absolutely great guys and did a very good job, not only here in TC39, but also generally in standardization. So maybe that’s, you know, what I can tell you from this first -- from this -- from this table, you know, participation in the first meeting and the first elected officers. And if you go to the next one, next slide, yeah, so if you -- yeah, here. So actually, at the beginning, it was extremely difficult standardization. I was not here, but you can see from this -- from the -- that -- how difficult it was standardized. The first version, it was very quickly standardized, actually, so it must have been an absolutely great performance of the group that they managed within half a year to come out with a first version of the standard. Also, the second version they came up. And if I remember, maybe for 250 pages or even less, something like that. Now, today we are between 800 and 900. And then later on, the development, it really slowed down, so when I came to ECMA, it was 2006. Then it was that they had already been work on it and fighting with each other and it was Phil pieman who was then the chairman of TC39, and John, like myself, didn’t understand anything about JavaScript or ECMAScript. But he was a great standardization guy and he made it possible, you know, to come up with a compromise solution, and that was actually when we came out with JavaScript finally in December 2009. Which then 5.1, 2011, et cetera. And when the decision was made, actually, in 2016 to come up with the yearly release, honestly, I didn’t believe that it would be possible to come out year by year. So I have to congratulate to the current members of the group that you are still able after so many -- after about 8 years after 2016, you know, to come out so precisely every June with a new edition of the standard. So I really would like to congratulations to everybody on that. And maybe that was the -- that was the end of what I wanted to say about the standardization that I have seen here. SHN: Thanks. Thank you, Istvan, and yes, it’s definitely a big congratulations to the whole committee here. I’m going to move forward to the celebration part. And I’m sorry that there are people online that are not here. @@ -1419,16 +1095,8 @@ RPR: Kevin has a fun fact. KG: You presented the attendees of the first meeting. Here's attendees of the second [on screen], which I’m amused by because that’s my dad. -SHN: That’s right. You’re the only father and son combination here. 
Yeah, congratulations. -Should have invited your father here. All right. If I move forward. Oh, I need to share -again. Thank you. Thank you for that anecdote. All right. So the 100th meeting. So I want to thank Shu for helping. I’m certainly not an artist and he is much more of one, and Shu put together the 100th logo and the idea, and we got these hats. So baseball caps and beanies and some stickers for the swag, and when we go to room on the other side, you may all share the -- you may all have the ones you want and please collect something for your colleagues, but let me know how many are collecting and we’ll make sure that everybody gets something. Somehow, if not here, then in Helsinki, if not in Helsinki, in Tokyo, if not in Tokyo, through some mail. But fear not, you will have something. Thank you, that’s the end of the presentation. - -There is cake to celebrate with. I should point out I reached out to a number of people that have won awards previously and have been active in TC39 for a long time and one of them was Jory, who you all -know very well. -We’ve got cake on the other side, so whenever you want to join the meeting, we can do that. Thank you. - -RPR: Thanks all. So that’s wrapping up for today. Let’s head over to the break room. And -response to Justin, the cake is definitely not a lie. - +SHN: That’s right. You’re the only father and son combination here. Yeah, congratulations. Should have invited your father here. All right. If I move forward. Oh, I need to share again. Thank you. Thank you for that anecdote. All right. So the 100th meeting. So I want to thank Shu for helping. I’m certainly not an artist and he is much more of one, and Shu put together the 100th logo and the idea, and we got these hats. So baseball caps and beanies and some stickers for the swag, and when we go to room on the other side, you may all share the -- you may all have the ones you want and please collect something for your colleagues, but let me know how many are collecting and we’ll make sure that everybody gets something. Somehow, if not here, then in Helsinki, if not in Helsinki, in Tokyo, if not in Tokyo, through some mail. But fear not, you will have something. Thank you, that’s the end of the presentation. +There is cake to celebrate with. I should point out I reached out to a number of people that have won awards previously and have been active in TC39 for a long time and one of them was Jory, who you all know very well. We’ve got cake on the other side, so whenever you want to join the meeting, we can do that. Thank you. +RPR: Thanks all. So that’s wrapping up for today. Let’s head over to the break room. And response to Justin, the cake is definitely not a lie. diff --git a/meetings/2024-02/February-8.md b/meetings/2024-02/February-8.md index ad82c683..166c16ab 100644 --- a/meetings/2024-02/February-8.md +++ b/meetings/2024-02/February-8.md @@ -1,5 +1,7 @@ -100th TC39 Meeting -8th Feb 2024 +# 8th Feb 2024 100th TC39 Meeting + +----- + Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. You can find Abbreviations in delegates.txt @@ -36,32 +38,37 @@ You can find Abbreviations in delegates.txt | Mikhail Barash | MBH | Univ. 
Bergen | | Samina Husain | SHN | Ecma | | | | | + ## call for reviewers for Promise.try + Presenter: Jordan Harband (JHD) - [proposal](https://github.com/tc39/proposal-promise-try) -- [slides]() +- Slides: See Agenda JHD: Yes. So I – it slipped my mind to request reviewers for `Promise.try` the other day, so I would like volunteers for people to review `Promise.try`. -RGN: I’m in. +RGN: I’m in. RKG: I'll do it. JHD: Thank you, RKG. And RGN. Anyone else is welcome, but I will start with those two. Thanks. + ### Conclusion -* RGN and RKG have volunteered to review. + +- RGN and RKG have volunteered to review. ## Intl.DurationFormat stage 3 update + Presenter: Ben Allen (BAN) - [proposal](https://github.com/tc39/proposal-intl-duration-format) - [slides](https://notes.igalia.com/p/nxMdcUtbb/) - BAN: Okay. So for this update, there are no normative changes, with an asterisk we'll get to in the end. We finished up a fairly large refactor of more or less the entire spec based on feedback from implementers about it being just difficult to implement. So let me go through with the slides. BAN: So the first thing is as I said, the current version of the proposal has an AO partial duration format pattern. That we have received a lot of frustrations from implementers about it. Most are related to how when we’re formatting durations, there are two kinds of radically different ways that we can format it: + - One is the standard way, which lists out each unit with designators for what the units are. For example, "one day, one hour, one minute and 100 microseconds", something like that. - There’s a separate way of formatting, which is equivalent to the formatting you would find on a digital clock. So instead of using words… so this is the – where I am saying `{style: "short"}`, this is sort of how most styles work. So if it were not "short", if it were "long", it would say "hour", "minute" and "second", rather than "hr", "min" and "sec". That’s the way using anything but what I am referring to as numeric-like styles, that’s how it works. Numeric like styles, it looks like a digital clock, the bare numbers separated by separators, the appropriate separators. This is if this were a locale that uses a different separator, it uses the different separator. @@ -79,31 +86,31 @@ BAN: So we are Stage 3. And this can be interpreted as a normative change, and a BAN: Okay. And that is it for the slide set.I welcome feedback on this potentially bugfix, potentially normative change business going on with this. And while I am waiting for questions, I will copy in the link to the draft PR. -KG: Yeah. This seems like a good change. I think it’s fine to make the normative change at this point and say, like, we can just do the right thing. It happens that sometimes we miss something, especially with localization given how complicated it is. Relaxing it at this point is good. +KG: Yeah. This seems like a good change. I think it’s fine to make the normative change at this point and say, like, we can just do the right thing. It happens that sometimes we miss something, especially with localization given how complicated it is. Relaxing it at this point is good. BAN: That’s fantastic to hear -DE: The PRs sound good to me. Are there any further issues that you are aware of for DurationFormat? Or does this resolve all remaining ones? +DE: The PRs sound good to me. Are there any further issues that you are aware of for DurationFormat? Or does this resolve all remaining ones? -BAN: I believe it actually resolves all remaining ones. 
+BAN: I believe it actually resolves all remaining ones. DE: That’s great to hear. That’s a great milestone to have reached. -BAN: We are still missing some tests. And we have got some decisions to make if we want to reflect recent changes in Temporal. But yeah, these are the – these are the meaningful changes, the meaningful issues still we’re dealing with. +BAN: We are still missing some tests. And we have got some decisions to make if we want to reflect recent changes in Temporal. But yeah, these are the – these are the meaningful changes, the meaningful issues still we’re dealing with. -DE: Could you elaborate on the last one? Reflecting changes in Temporal? +DE: Could you elaborate on the last one? Reflecting changes in Temporal? -BAN: Yes. So, for example, Temporal recently provided limits on the – basically, it explicitly forbids using utilities above 2^53rd and we will reflect that change, but the current state of the spec doesn’t. +BAN: Yes. So, for example, Temporal recently provided limits on the – basically, it explicitly forbids using utilities above 2^53rd and we will reflect that change, but the current state of the spec doesn’t. -DE: I imagine that that’s also – I see that also a bugfix that doesn’t require further discussion. Maybe today we would call for consensus on you implementing that bugfix. +DE: I imagine that that’s also – I see that also a bugfix that doesn’t require further discussion. Maybe today we would call for consensus on you implementing that bugfix. BAN: Okay. -DE: We have done this previously with Temporal, we agreed on this before the patch was fully done. +DE: We have done this previously with Temporal, we agreed on this before the patch was fully done. -BAN: Fantastic. +BAN: Fantastic. -SFC: Yeah. I just paste a link to the issue that BAN is referencing. And we have done a fairly good job over the last – since the last update of making sure that the bug tracker is properly labelled and that all the issues are properly, you know, triaged. So I think based on what we know, right now, Ben is correct that these two issues he presented plus this Temporal limits issue are the three open issues that could affect the proposal in its current state. There are other issues that are either tests or labelled for V2, which means we will do this in the future maybe, but not for the current proposal. So, yeah. It would be – maybe if we have enough time, maybe the chairs, if we have enough time in the slot we could go through this issue 157. +SFC: Yeah. I just paste a link to the issue that BAN is referencing. And we have done a fairly good job over the last – since the last update of making sure that the bug tracker is properly labelled and that all the issues are properly, you know, triaged. So I think based on what we know, right now, Ben is correct that these two issues he presented plus this Temporal limits issue are the three open issues that could affect the proposal in its current state. There are other issues that are either tests or labelled for V2, which means we will do this in the future maybe, but not for the current proposal. So, yeah. It would be – maybe if we have enough time, maybe the chairs, if we have enough time in the slot we could go through this issue 157. PFC [on the queue]: Strongly support consensus on this in advance. @@ -111,42 +118,43 @@ BAN: All right. So this is what – this is the issue as it stands. 
This – thi SFC: So, Ben, it looks we have a PR open this, number 173 -BAN: That reflects the handling before the fix to the bug that ABL identified with the new behavior of Temporal. Which we have TG2 consensus for the old version, the only thing that needs to be done is update that to fix the bug that ABL fixed with the Temporal solution. +BAN: That reflects the handling before the fix to the bug that ABL identified with the new behavior of Temporal. Which we have TG2 consensus for the old version, the only thing that needs to be done is update that to fix the bug that ABL fixed with the Temporal solution. -SFC: I see. Okay. The pull request says we brought to the last TG1 meeting and it got consensus there. +SFC: I see. Okay. The pull request says we brought to the last TG1 meeting and it got consensus there. -BAN: There we go. +BAN: There we go. DE: So I don’t think you should have to come back to committee to lend the union of these two bugfixes, I suggest we ask for consensus today on, you know, if someone want to review the patch, I feel like we can defer reviewing that to TG2 because this is well understood. Can we call for consensus on that? Does anybody have any concerns? -BAN: All right. Sounds like we have no concerns. And just to confirm, this would be for the version without the notes saying that hours, minutes, separators and minutes, second separators have to be the same and that [[HoursDigits]] has to be one. So removing those notes, regarding this as a normative bugfix to accommodate those locales better, and moving on. +BAN: All right. Sounds like we have no concerns. And just to confirm, this would be for the version without the notes saying that hours, minutes, separators and minutes, second separators have to be the same and that [[HoursDigits]] has to be one. So removing those notes, regarding this as a normative bugfix to accommodate those locales better, and moving on. -DE: Okay. I was suggesting for this duration thing. We should get consensus on that. If we haven’t already. +DE: Okay. I was suggesting for this duration thing. We should get consensus on that. If we haven’t already. BAN: Yes. So I would echo DE on that. I am requesting consensus for both of those things. -CDA: Yes. Can we get some explicit support? +CDA: Yes. Can we get some explicit support? SFC: In the notes we should list exactly what we got consensus on. -CDA: Yes. Ben, do you want to dictate for the notes? That would be helpful. +CDA: Yes. Ben, do you want to dictate for the notes? That would be helpful. ### Conclusion + Consensus on + - Adding additional slots to DurationFormat to handle locales that always use 2 digital hours, and for locales that use different separators between units and seconds and hours and minutes. (https://github.com/tc39/proposal-intl-duration-format/pull/186) - Incorporating the recent changes to Temporal that limit the range of values that can be used. (https://github.com/tc39/proposal-intl-duration-format/pull/173 + a not-yet-drafted bug fix parallel with Temporal) ## ESM Phase Imports for stage 1 + Presenter: Guy Bedford (GB) - [proposal](https://github.com/lucacasonato/proposal-module-instance-imports) -- [slides]() - +- Slides: See Agenda GB: So I am going to present today ECMAScript model phase imports or ESM phase imports. And this is an extension of the work that we did on the phasing process for source phase imports. That was for the WebAssembly use case. 
We were looking at the current WebAssembly model today which doesn’t benefit from a lot of the nice features of the module system because of its dynamic instantiation. With the source phase, we can import this as an actual Module, get it object, an existing object, that we updated to extent from abstract model source, in TC39, and you can then instantiate it normally and get some nice guarantees of static analyzability, ergonomics, improved tooling support, and some security-related features. -GB: So this is currently at Stage 3 and progressing, whereas what we don’t have is a phase import for JavaScript itself right now. The question is what about JavaScript model with as and is this useful for JavaScript modules and what use cases can we solve with that and how does it fit into the overall module. -And so the use case that we would like to focus on is actually the worker instantiation and this is because it has very similar properties to what the problem was for Wasm, which is the new worker, it’s not ergonomic, it’s not a static capability, the string passed to `new Worker` is not a module specifier, but a path. It has limited tooling support. If you write exactly this [on screen], webpack will build it. Other build tools won’t. Esbuild won’t. Rollup you need a special plug in for it. If you are off the path and a different worker, the build tool will miss it. That also creates portability frictions, libraries are discouraged from using worker patterns because tooling won't necessarily support it as easily. And the idea is, if we support a phase for imports from JavaScript, we can solve that problem just like we did for Wasm and supporting importing this source from JavaScript statically through the ESM module system. That source is then loaded through normal module resolutions so you are getting module specifier resolution in the host. That represents the module to pass directly to the worker. Furthermore because we know it’s a module we don’t need to put the `{type: "module"}` into the `new Worker` invocation. We can get a much more ergonomic worker indication out of exposing the source phase for JavaScript modules. +GB: So this is currently at Stage 3 and progressing, whereas what we don’t have is a phase import for JavaScript itself right now. The question is what about JavaScript model with as and is this useful for JavaScript modules and what use cases can we solve with that and how does it fit into the overall module. And so the use case that we would like to focus on is actually the worker instantiation and this is because it has very similar properties to what the problem was for Wasm, which is the new worker, it’s not ergonomic, it’s not a static capability, the string passed to `new Worker` is not a module specifier, but a path. It has limited tooling support. If you write exactly this [on screen], webpack will build it. Other build tools won’t. Esbuild won’t. Rollup you need a special plug in for it. If you are off the path and a different worker, the build tool will miss it. That also creates portability frictions, libraries are discouraged from using worker patterns because tooling won't necessarily support it as easily. And the idea is, if we support a phase for imports from JavaScript, we can solve that problem just like we did for Wasm and supporting importing this source from JavaScript statically through the ESM module system. That source is then loaded through normal module resolutions so you are getting module specifier resolution in the host. 
That represents the module to pass directly to the worker. Furthermore because we know it’s a module we don’t need to put the `{type: "module"}` into the `new Worker` invocation. We can get a much more ergonomic worker indication out of exposing the source phase for JavaScript modules. GB: And this is just copying what we said for Wasm. If you cross out Wasm, we get a lot of benefits. Seriously, it’s very specifically the proposal is that the phase import for JavaScript modules can solve for these worker ergonomics and static guarantees and worker capability should be the constraining design space for JS module phases. Workers bring up problems around how you want to share modules and share various properties of modules. And so solving those as the sort of next foundational step within the modules harmony effort seems like the right direction. So that’s the overall proposal. @@ -154,20 +162,11 @@ GB: One of the questions we have is, which phase we want to primarily specify wh GB: So we could also potentially explore using an "instance" phase for the use case and pass an instance to the worker. There’s some tradeoffs to be considered in that. We are leaving that as a slightly open design question for now. -GB: To explain some of the tradeoffs, but this is part of the design exploration space, while instances represent the full linkage graph, they are actually linked – it represents providing every single module in that graph to the worker. And if you have got a Wasm module instance, with shared instances, you can actually share the instance fully. And so that could potentially make sense. For JavaScript modules we don’t have the shared semantics. So when you share an instance with the worker, you will be copying it and then you have to draw an equivalence relation of a graph of modules in one agent and the graph of modules in another agent. And creating that equivalence relation and maintaining it in a way that is well-defined is quite difficult. On the other hand, instances might be more amenable to pre-loading because they represent the entire graph. You could preload it. And sources don’t represent the full graph up front. Although a source can be associated with the resolution model because every phase in the phase import system has all of the information of the previous phase. -So when I have a source, I also know the resolved module key. So I can extract from the sources fully resolve module key and resolve relative specifiers relative to the source. And if I have got an instance, I can extract the source from it. +GB: To explain some of the tradeoffs, but this is part of the design exploration space, while instances represent the full linkage graph, they are actually linked – it represents providing every single module in that graph to the worker. And if you have got a Wasm module instance, with shared instances, you can actually share the instance fully. And so that could potentially make sense. For JavaScript modules we don’t have the shared semantics. So when you share an instance with the worker, you will be copying it and then you have to draw an equivalence relation of a graph of modules in one agent and the graph of modules in another agent. And creating that equivalence relation and maintaining it in a way that is well-defined is quite difficult. On the other hand, instances might be more amenable to pre-loading because they represent the entire graph. You could preload it. And sources don’t represent the full graph up front. 
Although a source can be associated with the resolution model because every phase in the phase import system has all of the information of the previous phase. So when I have a source, I also know the resolved module key. So I can extract from the sources fully resolve module key and resolve relative specifiers relative to the source. And if I have got an instance, I can extract the source from it. So the information that gets associated with these phase is progressively added to the information available. So when you pass a source through worker, you could potentially actually treat the resolution information as part of that transfer and that’s another thing that we want to explore within the proposal -GB: The overall scope is to -Solve for the ergonomics, portability, and security of the worker instantiation -Specify a built in phase object for JS which associates the registry key, and that reflects the module. -Support the module harmony layering, so the object we specify can be used by module expressionion, module declarations and other proposals in the effort -Investigate whether we want to be able to share the resolution model. -Explore which phase we want to expose exactly. -Whether we want to also specify post-message transfer for these objects. -Whether we want to consider import source directly because it represents a key, so it could act as a capability for an import key that is static analyzable. -The last thing is to consider reflection APIs. +GB: The overall scope is to Solve for the ergonomics, portability, and security of the worker instantiation Specify a built in phase object for JS which associates the registry key, and that reflects the module. Support the module harmony layering, so the object we specify can be used by module expressionion, module declarations and other proposals in the effort Investigate whether we want to be able to share the resolution model. Explore which phase we want to expose exactly. Whether we want to also specify post-message transfer for these objects. Whether we want to consider import source directly because it represents a key, so it could act as a capability for an import key that is static analyzable. The last thing is to consider reflection APIs. So in the sense that we are enabling better static analysis of worker invocation in JavaScript, tooling is always doing analysis of module graphs and things like that. And we could very easily provide some primitives that make it easier to do module analysis in the language. @@ -175,7 +174,7 @@ GB: So this is a topic I touched on briefly, but it’s sort of very closely rel There may well be something like this [on screen], this is just a shape of the proposal. There’s no current active efforts towards this direction. But in discussion, something like this has been discussed before, where you could pass in the import map to the worker invocation and have the ability to pass `{ importMap : "inherit" }` to share the import map from the main parent page context. Because there’s currently also no way to read off the import map from the top level page. -So the idea is here just like we were able to pass `{ type : "module" }` by default, it would be nice to pass ` { importMap: “inherit” }` by default. And explore that as well. +So the idea is here just like we were able to pass `{ type : "module" }` by default, it would be nice to pass `{ importMap: “inherit” }` by default. And explore that as well. And `{ importMap : "inherit" }` hasn’t been specified. 
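For orientation, a rough sketch of the worker-invocation shape being discussed: the `import source` syntax comes from the source phase imports proposal, while passing a module source to `new Worker` and the `importMap: "inherit"` option are exactly the ideas under discussion here, not specified or shipping APIs.

```js
// Sketch only, not a real API:
// `import source` is the proposed source phase import syntax; passing the
// resulting module source to `new Worker` is what this proposal explores,
// and `importMap: "inherit"` is the hypothetical host-level option above.
import source workerSource from "./worker.js";

const worker = new Worker(workerSource, {
  // `type: "module"` would be implied by passing a module source object.
  importMap: "inherit",
});
```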
But there may be a way to specify in ECMA262, even though it doesn't exist, based on the concept that a source carries a reference to its original host context and, when you create a worker, you could potentially share the resolution context and define some concept of resolution context that could be set up for the worker agent when you create that worker and that might be a nice way to effectively define `{ importMap : "inherit" }` by default without it existing yet. When it does exist, we would support it. So that’s the other thing that would be nice to explore. @@ -183,8 +182,7 @@ GB: And then finally, just to explain how this fits into the overall module harm GB: Deferred imports are pretty much entirely orthogonal at this point. They represent the last phase. It doesn’t affect anything to do with the source phase. -GB: Module expressions. Depending on whether we define an instance phase or a source phase or both, module expressions are currently specified as an instance phase. They could potentially be specified as a source phase. They would likely have a getter to go from the module instance object, read off of it the `.source` to get the source object. -So we have layering in either of those directions. But they combine nicely with this proposal. Because you could basically just have `new Worker` off the module or the `module.source`. And it would allow working like this for workers which is much more ergonomic and solve a lot of the problems we have around data: URLs for workers today. So this layers nicely with module expressions, this is designed to be a first step towards module expressions, to get us towards a better ergonomics and they combine very well. +GB: Module expressions. Depending on whether we define an instance phase or a source phase or both, module expressions are currently specified as an instance phase. They could potentially be specified as a source phase. They would likely have a getter to go from the module instance object, read off of it the `.source` to get the source object. So we have layering in either of those directions. But they combine nicely with this proposal. Because you could basically just have `new Worker` off the module or the `module.source`. And it would allow working like this for workers which is much more ergonomic and solve a lot of the problems we have around data: URLs for workers today. So this layers nicely with module expressions, this is designed to be a first step towards module expressions, to get us towards better ergonomics and they combine very well. GB: As I say, there’s various design discussions to explore. But the layering working out however things play out is the important point. @@ -198,17 +196,15 @@ We have worker interactions for the earlier phases that would potentially be tra Import attributes and model sync attributes very much don’t have too much cross-layering. Exactly because import attributes are orthogonal. -I will open up to any questions or discussion. +I will open up to any questions or discussion. USA: Let’s give it a couple of minutes for the queue. But there’s nothing in there yet. -DE: That was a great presentation describing a pretty complicated area. -I think it’s important for the various reasons, guy stated, we move forward on exposing JavaScript modules that are not, you know, already imported at one of the earlier phases. I hope that within this, we can get a maximally simple interface.
-For module declarations and module expressions, it will be really important to package that resolution context together with the declarations/expression. I previously thought that this meant we had to use instances. But if sources contain the resolution context, then that might also be a good way to do it. And it would provide us with a nice unification of all the concepts. Guy has successfully changed my mind about this. Anyway, good. +DE: That was a great presentation describing a pretty complicated area. I think it’s important for the various reasons Guy stated, we move forward on exposing JavaScript modules that are not, you know, already imported at one of the earlier phases. I hope that within this, we can get a maximally simple interface. For module declarations and module expressions, it will be really important to package that resolution context together with the declarations/expression. I previously thought that this meant we had to use instances. But if sources contain the resolution context, then that might also be a good way to do it. And it would provide us with a nice unification of all the concepts. Guy has successfully changed my mind about this. Anyway, good. -USA: Thank you. Next we have SYG. +USA: Thank you. Next we have SYG. -SYG: Thank you for the presentation. I agree with Dan, it was a well-stated problem statement and summary of the details. And I want to express my support for the worker use case. I believe it’s an important use case. I have some questions about which phase to choose. It’s somewhat clear to me or I can imagine how it would work in a straightforward way, to postMessage the source phase. It’s not clear what it means to postMessage the instance phase. Do you have thoughts there for why that is even part of the - why it is even a choice for the open question? +SYG: Thank you for the presentation. I agree with Dan, it was a well-stated problem statement and summary of the details. And I want to express my support for the worker use case. I believe it’s an important use case. I have some questions about which phase to choose. It’s somewhat clear to me or I can imagine how it would work in a straightforward way, to postMessage the source phase. It’s not clear what it means to postMessage the instance phase. Do you have thoughts there for why that is even part of the - why it is even a choice for the open question? GB: So I guess there’s two invocations to define. One is the direct `new Worker` – and we are only specifying a top-level worker instantiation, you are passing in place of the current string path to the worker. So there’s that, and then the transfer question. So generic instance transfer is – you know, related to that, or you could imagine saying, for instance, they only support the new worker, but not arbitrary transfer. If we do that, it’s similar to the source phase worker invocation because you’re passing at the top level. @@ -226,56 +222,57 @@ LCA: OK, it is difficult to do. And we are not able to do it right now. Is that LCA: And as such, it is difficult to – when you post message an instance over to a different thread, and then pass it back to the main thread and pass it back to that thread to have that instance have the same identity on that thread without having a cross-thread garbage collector. And because of this we are constraining ourselves – also discussions with YSV – to not go any direction where it would be necessary to have cross thread GC.
If we say instance transfer, this would be a "clone serialized parts of the instance" rather than a transfer that maintains the identity. -DE: Yeah. I want to re-enforce what LCA said. This is kind of the point we got to a couple of years ago with respect to postMessage on module expressions, which is that it would only postMessage the source and some things about the resolution context, it would not post-message the module identity. We had meetings about this, which are captured in the notes. Tracking the identity back and forth is pretty – is too difficult . We reached that conclusion in the past. I think overall, this doesn’t really represent a difference between instances and source. Because in practice, we would just get the source and resolution context, do the transfer and reconstruct it on the other side. -If there’s – the resolution context is packaged with the source, as I think is necessary for whatever module expressions evaluate to, then there’s no difference that I am aware of, between instance and source in terms of post-message. +DE: Yeah. I want to re-enforce what LCA said. This is kind of the point we got to a couple of years ago with respect to postMessage on module expressions, which is that it would only postMessage the source and some things about the resolution context, it would not post-message the module identity. We had meetings about this, which are captured in the notes. Tracking the identity back and forth is pretty – is too difficult . We reached that conclusion in the past. I think overall, this doesn’t really represent a difference between instances and source. Because in practice, we would just get the source and resolution context, do the transfer and reconstruct it on the other side. If there’s – the resolution context is packaged with the source, as I think is necessary for whatever module expressions evaluate to, then there’s no difference that I am aware of, between instance and source in terms of post-message. -SYG: I will pass on this. It looks like I will follow up with some folks off-line. +SYG: I will pass on this. It looks like I will follow up with some folks off-line. -LCA: We'll follow up off line. +LCA: We'll follow up off line. -USA: Well, then the next topic is by Gus. +USA: Well, then the next topic is by Gus. -GCL: Big + 1. The problem here is well defined and is something we want to solve. There is clearly a lot of stuff to work out. -I am interested to see how the embedding with HTML goes. Very positive and excited. +GCL: Big + 1. The problem here is well defined and is something we want to solve. There is clearly a lot of stuff to work out. I am interested to see how the embedding with HTML goes. Very positive and excited. -KG: Yeah. Also excited for this. I just want to make sure that the other parts of the ecosystem are looped in early. In particular HTML, but also CSP. CSP is not super maintained, so it’s good to pull them in as early as possible. +KG: Yeah. Also excited for this. I just want to make sure that the other parts of the ecosystem are looped in early. In particular HTML, but also CSP. CSP is not super maintained, so it’s good to pull them in as early as possible. -GB: The Wasm source phase very much has CSP interactions. So in the same way that CSP applies to Wasm, there should be some story there and that will be part of the – part of – certainly feedback taken on looping that in early. Certainly, we’re defining the object and then hoping that we can make that progress in HTML. 
So the – creating that compatibility and progressing those discussions certainly feedback taken. +GB: The Wasm source phase very much has CSP interactions. So in the same way that CSP applies to Wasm, there should be some story there and that will be part of the – part of – certainly feedback taken on looping that in early. Certainly, we’re defining the object and then hoping that we can make that progress in HTML. So the – creating that compatibility and progressing those discussions certainly feedback taken. -KKL: And I am in favor of Stage 1 for this. And really, thanks Gus for the excellent summary of the conversations we have had. I am leaning – I remain leaning towards having source not package resolution. But otherwise, I don’t recall where we settled on it some months ago. Looking forward to having conversations again. +KKL: And I am in favor of Stage 1 for this. And really, thanks Gus for the excellent summary of the conversations we have had. I am leaning – I remain leaning towards having source not package resolution. But otherwise, I don’t recall where we settled on it some months ago. Looking forward to having conversations again. -NRO: I would like to refresh with what – like, to remind you what we talked about months ago about this. One possible solution that we might have found was to split what we originally called resolution context into two different pieces. Like data and behavior, where data is, for example, the URL to resolve things from. And to put the data in the source, like we want to – like, proper to the private field and then have the – the behavior be in the instance so that you can still have source error roadway source data and realize how they are allotted . . . with custom loaders, that just read the data and decide if they want it use it, how they want to use it. But yeah, we will be happy to talk about this, more on that. +NRO: I would like to refresh with what – like, to remind you what we talked about months ago about this. One possible solution that we might have found was to split what we originally called resolution context into two different pieces. Like data and behavior, where data is, for example, the URL to resolve things from. And to put the data in the source, like we want to – like, proper to the private field and then have the – the behavior be in the instance so that you can still have source error roadway source data and realize how they are allotted . . . with custom loaders, that just read the data and decide if they want it use it, how they want to use it. But yeah, we will be happy to talk about this, more on that. -KKL: Right. Yeah. That’s exactly the right thing to be thinking about, how this virtualizes if we get farther into the harmony proposals. Yeah. Thanks. +KKL: Right. Yeah. That’s exactly the right thing to be thinking about, how this virtualizes if we get farther into the harmony proposals. Yeah. Thanks. -DE: Another aspect here is that even if we do package things, we could still have virtualization-related APIs which taking a source that has the extra stuff, and disregard part of that and, you know, operate on or even return a different source that has different values or just do things with certain components of it. This is in fact what the post-message semantics for module instances would do if we went on the instance path. 
+DE: Another aspect here is that even if we do package things, we could still have virtualization-related APIs which taking a source that has the extra stuff, and disregard part of that and, you know, operate on or even return a different source that has different values or just do things with certain components of it. This is in fact what the post-message semantics for module instances would do if we went on the instance path. -GB: One more clarification on the resolution context topic. Since we have these two invocations now, previously within the module group we always had a single way that you would import the source. And consider the resolution context being used. With the new worker style of worker creation, the worker creation is like – the process of creating the worker could be associated with picking up a resolution context that is provided to the creation as separate from the concept of just getting a source from a loader that you want to virtualize. So the difference between creating a loading environment and passing a source, whereas the source can provide the resolution context when you’re creating a loading environment is one way you might be able to split that. +GB: One more clarification on the resolution context topic. Since we have these two invocations now, previously within the module group we always had a single way that you would import the source. And consider the resolution context being used. With the new worker style of worker creation, the worker creation is like – the process of creating the worker could be associated with picking up a resolution context that is provided to the creation as separate from the concept of just getting a source from a loader that you want to virtualize. So the difference between creating a loading environment and passing a source, whereas the source can provide the resolution context when you’re creating a loading environment is one way you might be able to split that. -GB: And, yeah. I am going to formally ask for Stage 1. So yeah. Requesting Stage 1 for the proposal. And that would be great to get some support. +GB: And, yeah. I am going to formally ask for Stage 1. So yeah. Requesting Stage 1 for the proposal. And that would be great to get some support. -USA: All right. I guess, KKL is the first supporter. Let’s give folks some time – or Nicolo, do you want to speak to that? +USA: All right. I guess, KKL is the first supporter. Let’s give folks some time – or Nicolo, do you want to speak to that? -NRO: Yes. Obviously, I am very involved in this case, but I support the proposal. If this goes ahead enough to give me the necessary building blocks, I will then start working on module expressions building on top of this. +NRO: Yes. Obviously, I am very involved in this case, but I support the proposal. If this goes ahead enough to give me the necessary building blocks, I will then start working on module expressions building on top of this. -USA: Next we have support from DE and DLM on the queue. I would like to remind folks this is also the correct time to share any concerns that you might have. But it sounds like you have a strong consensus. Congratulations for Stage 1! +USA: Next we have support from DE and DLM on the queue. I would like to remind folks this is also the correct time to share any concerns that you might have. But it sounds like you have a strong consensus. Congratulations for Stage 1! GB: Thanks everyone. 
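For readers following the layering discussion above, here is a sketch of how this could combine with module expressions, per GB's earlier point that you could hand `new Worker` either the module or its `.source`; both features are proposals and the syntax below is illustrative only:

```js
// Illustrative only: a module expression (separate proposal) used to
// define an inline worker module.
const workerMod = module {
  self.onmessage = (event) => {
    self.postMessage(event.data * 2);
  };
};

// Depending on which phase ends up being exposed, either shape could be
// what gets passed to the Worker constructor:
const workerFromInstance = new Worker(workerMod);
const workerFromSource = new Worker(workerMod.source);
```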
### Speaker's Summary of Key Points -* List -* of -* things + +- List +- of +- things ### Conclusion -* Stage 1 + +- Stage 1 + ## Throw expressions update or stage 2.7 + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/tc39/proposal-throw-expressions) - [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkqMid5YKobUhLWZJkA?e=2xqwWq) - RBN: I wanted to circle back to a discussion that we had in the September plenary session, regarding the ThrowExpressions proposal. And just for some – a brief recap – the idea of throw expressions: this allows you to use the throw keyword to have exceptions thrown in situ within an expression context. This is not dissimilar from regular function calls, in that functions can also produce a ThrowCompletion that results in the same type of logic that a throw would. It fits right in within existing logic that we already maintain within the specification for how those completions are handled. One of the motivations, these are convenient. You don’t need a statement context to throw an exception. But also, the alternative to that is to use a function or method that you would invoke to throw an expression but that has issues with the debugging experience, especially if it’s not a built-in capability. So if you were to use a throw helper function, then you are having to deal with stack traces and consistency of capabilities across different engines, and it affects how debugging stops at breakpoints. And there are some other potential benefits for things like static analysis and control flow analysis as well. @@ -288,59 +285,43 @@ RBN: Prior to the 2023 plenary, one of the biggest concerns with the syntax was We chose UnaryExpression because it does – because the most predominant use cases, throwing a new error or throwing an existing error object, are satisfied by the UnaryExpression precedence without requiring excess wrapping of the expression in parentheses or nested parentheses. Tooling like minifiers that might support ThrowExpression might do inlining in a way that looks different from what you expect with a throw statement and could potentially be a hazard to those reading code - With a ThrowStatement. This would still maintain that unary statement but avoid potential user footguns where they have expectation of difference in precedence +And one of the other things that came up during the September plenary was pointed out by WH, that there were certain restrictions or issues due to ASI – we put together PR #18 to address that. -And one of the other things that come up during the September plenary was appointed out by WH that there were certain restrictions or issues due to ASI – we put together PR #18 to address that. - -One of our primary rationales behind this being something viable is that the ASI, the rules used to express +One of our primary rationales behind this being something viable is that the ASI, the rules used to express Given that it had been blocked previously by WH, one alternative we presented was one that not only mandated the use of outer parenthesis and left it as UnaryExpression so we would have the flexibility to relax that restriction in one direction or the other. But that approach was also blocked by WH at the time. So we have gone back to the drawing board and considered two other alternatives: so we still again want to maintain the statement precedence that ThrowStatement has for its expression within ThrowExpressions. Both require parentheses. One we move ThrowExpression into expression.
It’s already in the current spec text banned from being used in expression statements. So there is no conflict with a ThrowStatement in those cases. But by having it be nested in expression, that means that parentheses are only nested inside another one. -I will give an example of each one. In the first option, moving ThrowExpression into expression, this would require parentheses in almost all cases. -So pretty much everywhere you use an expression inside another expression, would result in needing to use parentheses. -However, if you were to use a ThrowExpression in the head of if statement with a while switch, the expression of a case statement, inside a ThrowStatement or in – as the top level, this would not require it. +I will give an example of each one. In the first option, moving ThrowExpression into expression, this would require parentheses in almost all cases. So pretty much everywhere you use an expression inside another expression, would result in needing to use parentheses. However, if you were to use a ThrowExpression in the head of if statement with a while switch, the expression of a case statement, inside a ThrowStatement or in – as the top level, this would not require it. Better ways of expressing through code than those types of expressionsBy this would avoid that in those cases. The second alternative we considered is only allowing ThrowExpression inside of a parenthesized expression. In this case, parentheses would be always required. So if for some reason you felt that you wanted to do the nonsensical thing of throwing in the head of an `if`, you would need to wrap in double parentheses. -Now, I will give a couple examples for these cases. The syntax legal in both cases, level in both is, for example, if I say parenthesis throw B, C, then it would have the semantics that ThrowStatement does. To parenthesis throw B and comma C, it throws B. In both of the expression, it would be illegal to not have a parenthesis in these cases -And again, going back to why this was originally in UnaryExpression precedent, illegal here that are scenical uses, it has a trailing throw, or more specifically, having the conditional expression here, that has the null coalesce expression, a trailing throw, these are cases that you might want to use parenthesis for, but now required in these – with this change. +Now, I will give a couple examples for these cases. The syntax legal in both cases, level in both is, for example, if I say parenthesis throw B, C, then it would have the semantics that ThrowStatement does. To parenthesis throw B and comma C, it throws B. In both of the expression, it would be illegal to not have a parenthesis in these cases And again, going back to why this was originally in UnaryExpression precedent, illegal here that are scenical uses, it has a trailing throw, or more specifically, having the conditional expression here, that has the null coalesce expression, a trailing throw, these are cases that you might want to use parenthesis for, but now required in these – with this change. Versus regular expression is, everything here on the left would be essentially legal in both cases. But the things that are on the right would be illegal only if we chose Option 2, which is parenthesized expression. -So going back to all this, we were left with three options to consider as a way forward: one is Option 1, move ThrowExpression into expression. Another is to move ThrowExpression to parenthesized expression. And the third is to continue as is with the look ahead restriction. 
As champion, Option 3 is still my preference. I still believe that WH’s concerns at the time were related to spec complexity and I think that is something that is more within the domain of the editors to determine if they believe that that complexity is warranted. And assuming we are actually maintaining – maintain whatever semantics we chose for and the syntax we chose, if we’re to go with the existing version of the spec, that is today, we could write that actual specification text in multiple different ways, based on editorial feedback. But have the same syntactic rules going forward. I would like to discussion. We want to believe that the third option we have been discussing is something to consider using going forward. +So going back to all this, we were left with three options to consider as a way forward: one is Option 1, move ThrowExpression into expression. Another is to move ThrowExpression to parenthesized expression. And the third is to continue as is with the look ahead restriction. As champion, Option 3 is still my preference. I still believe that WH’s concerns at the time were related to spec complexity and I think that is something that is more within the domain of the editors to determine if they believe that that complexity is warranted. And assuming we are actually maintaining – maintain whatever semantics we chose for and the syntax we chose, if we’re to go with the existing version of the spec, that is today, we could write that actual specification text in multiple different ways, based on editorial feedback. But have the same syntactic rules going forward. I would like to discussion. We want to believe that the third option we have been discussing is something to consider using going forward. -With that, I can go to the queue. I have had both my – my reviewers taking a look at both of the alternatives presented here as well as the versions we have presented in the past. So we have again 3 different directions to consider. So now I can go to the queue. +With that, I can go to the queue. I have had both my – my reviewers taking a look at both of the alternatives presented here as well as the versions we have presented in the past. So we have again 3 different directions to consider. So now I can go to the queue. -KKL: Yeah. On behalf of Agoric, I can say that we strongly favor solving this problem without new syntax, given the price of new syntax. -And that we find a helper function to be a tolerable solution and sidesteps much of the issues. It’s possible to have a throw constructor method that is inherent to the other constructor errors that accepts the constructor arguments. As opposed to accepting an error constructed otherwise, which is ergonomically enticing. And worth consideration. That’s my piece. +KKL: Yeah. On behalf of Agoric, I can say that we strongly favor solving this problem without new syntax, given the price of new syntax. And that we find a helper function to be a tolerable solution and sidesteps much of the issues. It’s possible to have a throw constructor method that is inherent to the other constructor errors that accepts the constructor arguments. As opposed to accepting an error constructed otherwise, which is ergonomically enticing. And worth consideration. That’s my piece. RBN: So to the second point that you made about prototype inheritance, that’s valuable to anything that inherent from the native error, JavaScript has no requirements to not throw B in error. It doesn’t satisfy that. 
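For context on the helper-function alternative being weighed here, a minimal, purely illustrative sketch of the kind of inherited throw helper KKL describes; nothing like this is specified, and the names are hypothetical:

```js
// Hypothetical userland helper: a static method on Error that constructs
// and throws, picked up by subclasses through static inheritance.
Object.defineProperty(Error, "throw", {
  value: function (...args) {
    // `this` is whichever constructor it was called on (Error, TypeError, ...)
    throw Reflect.construct(this, args);
  },
  writable: true,
  configurable: true,
});

// Used in an expression position, e.g. the right-hand side of ??:
const config = { port: 8080 };
const port = config.port ?? TypeError.throw("port is required");

// For comparison, the throw-expression syntax under discussion (not part
// of the language today) would read:
//   const port = config.port ?? throw new TypeError("port is required");
```

As the surrounding discussion notes, a helper inherited this way only covers values constructed from Error subclasses, whereas the syntax form can throw any value.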
Error.throw as a method to consider has been discussed in the past in this proposal. It is also discussed in summary form within the proposal explainer. -Our main rationale for not using a method is the problems with methods and functions when it comes to optimizing ability, static analysis, which has error throw as a function, then there’s the Prince Edward Island tension – but it has the – static analysis capabilities in pipe languages like TypeScript that can benefit. I have been discussing with SYG if there is any potential for effecting escape analysis. The way V8 works today, they don’t go to the level of death, but there is an opportunity for that in the future - having it be a method or a function -does have complexity involved around the debugging experience works, you’re essentially have to -have debuggers informed about the blessed specific function or have a function have the -mechanism around stack phrases and frames and debuggability that isn’t unique to( +Our main rationale for not using a method is the problems with methods and functions when it comes to optimizability and static analysis. If you have Error.throw as a function, then there’s the Prince Edward Island tension – but it has the – static analysis capabilities in typed languages like TypeScript that can benefit. I have been discussing with SYG if there is any potential for affecting escape analysis. The way V8 works today, they don’t go to that level of depth, but there is an opportunity for that in the future. Having it be a method or a function does have complexity involved around how the debugging experience works: you essentially have to have debuggers informed about the blessed specific function, or have a function have the mechanism around stack traces and frames and debuggability that isn’t unique to it. -You were having to have debuggers informed about a blessed-specific function or a function have some specific mechanism to affect stack traces and a debugability. (switch) `Error.throw` and pull something off to make sure what I’m doing at run time is able to make thedecisions and adds a level of complexity isn’t there when we consider throw as syntax. -With throw as syntax it’s something that run times understand statements work and don’t require the type of complexity to interrogate whether this function when optimized is inlinable as just throw and do this type of complexities. It’s a perfectly viable selection for the debugging experience but in the actual feature in the language is not something that I would recommend. +You would have to have debuggers informed about a blessed-specific function, or have a function have some specific mechanism to affect stack traces and debuggability. You would have to take `Error.throw` and pull something off to make sure what I’m doing at run time is able to make the decisions, and that adds a level of complexity that isn’t there when we consider throw as syntax. With throw as syntax it’s something that runtimes understand – statements work – and they don’t require this type of complexity to interrogate whether this function, when optimized, is inlinable as just a throw, and to do those types of complexities. It’s a perfectly viable selection for the debugging experience, but as the actual feature in the language it is not something that I would recommend. -KKL: I think that to respond to that, I think that the feeling around the water cooler is those -down sides are lesser than the down sides to syntax.
+KKL: I think that to respond to that, I think that the feeling around the water cooler is that those downsides are lesser than the downsides of syntax. USA: We have a response by RGN. -RGN: Right. The first thing that you mentioned is that a hypothetical Error.throw couldn’t support -a non-error and I want to explicitly state that I don’t care. I’m positive on an approach that lacks sugar for throwing a non-error. You can already do it with existing syntax, -and I don’t think it needs to be made easy or that a proposal needs to be compromised in order to -support it. +RGN: Right. The first thing that you mentioned is that a hypothetical Error.throw couldn’t support a non-error and I want to explicitly state that I don’t care. I’m positive on an approach that lacks sugar for throwing a non-error. You can already do it with existing syntax, and I don’t think it needs to be made easy or that a proposal needs to be compromised in order to support it. RBN: I think you misunderstood. I wasn’t saying that error.throw wouldn’t work for non-error, but that the `Error.prototype.throw` inherited thing wouldn’t work for those cases. @@ -348,106 +329,37 @@ RGN: That doesn’t change my position. Next we have a new topic by Nicolo. -NRO: Yes. So just to make it clear to everybody here in the past we talked about potentially relaxing -parenthesis in the future, I believe if we go with option 1 or option 2 we have parentheses forever because the expression. +NRO: Yes. So just to make it clear to everybody here: in the past we talked about potentially relaxing parentheses in the future, and I believe if we go with option 1 or option 2 we have parentheses forever, because of the expression. -RBN: You essentially have it forever because the moment that we choose option 1 or option 2, we allow all expressions inside of throw, I’ll go back to an example of syntax for those. It’s just throw then expression. -As a result we can never restrict it to Uniary without breaking body in the world and in the September meeting we had a potential of this parentheses and UnaryExpression and that was allowed and the narrow focus and something that WH did not or was not willing to support. +RBN: You essentially have it forever because the moment that we choose option 1 or option 2, we allow all expressions inside of throw – I’ll go back to an example of syntax for those. It’s just throw, then an expression. As a result we can never restrict it to UnaryExpression without breaking code in the wild, and in the September meeting we had a potential option of outer parentheses with UnaryExpression allowed inside, with that narrower focus, and that was something that WH was not willing to support. LCA: Yeah, I want to echo again the points that I made in September already that I have a preference for option 3. I think the use case that you present with the double question mark and the default function arguments motivates the case that you don’t want to wrap every throw in parentheses. -And I think these are probably the most common use cases here throwing when a default argument -is not present or when an argument is not passed. Throwing when something is null. -So I think we should make these use cases as simple as possible.If this increases spec complexity in some small amount it is a trade off that is worth it. The spec is read by much fewer people, then people that would read or write this code. +And I think these are probably the most common use cases here: throwing when a default argument is not present or when an argument is not passed.
Throwing when something is null. So I think we should make these use cases as simple as possible.If this increases spec complexity in some small amount it is a trade off that is worth it. The spec is read by much fewer people, then people that would read or write this code. -RBN: I also did not agree with WH’s perspective that increased complexity of the specification, I don’t have it in these slides or actually I have it in some hidden slides for reference but these were also presented in September that the actual spec complexity is essentially this. It’s early errors for three specific cases, or rather four because there’s a minor function that needs to be added to an SDO that -needs to be added to validate these cases and then the restrictions that we already discussed previously which was the trailing look ahead. -That actually isn’t that complicated of a change. The specification was built on the existing mechanism that we used to restrict optional chain template expression to avoid ASI that we had for several years now. +RBN: I also did not agree with WH’s perspective that increased complexity of the specification, I don’t have it in these slides or actually I have it in some hidden slides for reference but these were also presented in September that the actual spec complexity is essentially this. It’s early errors for three specific cases, or rather four because there’s a minor function that needs to be added to an SDO that needs to be added to validate these cases and then the restrictions that we already discussed previously which was the trailing look ahead. That actually isn’t that complicated of a change. The specification was built on the existing mechanism that we used to restrict optional chain template expression to avoid ASI that we had for several years now. USA: Next we have mark. -MM: I just want to raise and I won’t take much time during this presentation for this. -But we need to treat this I think as an emergency as a committee because WH has shown -himself over and over again to be the only person who is able to in realtime spot problems with -syntax proposals because of the complexity of analyzing the implications of job description -syntax proposals with the look ahead and the semicolon insertion and just the weird not context -dependent lexical rules. -JavaScript is incredibly hard to figure out what problems are introduced by a syntax proposal -and WH has saved our butts over and over again and proved himself over and over again to -be the only person who could. -So I would like the committee to find some arrangement like permanent invited expert or something -or whatever it is that we did to permanently invite Brendan no matter what he is. -I think we need to do that with WH and I don’t feel comfortable accepting any syntax -proposal without seeing WH’s reaction to it. - -DE: Yeah, WH has definitely been a valuable member of the committee over the years. -And he’s definitely found some syntax issues. I think that it would be unfair to say that, no, we don’t have anybody else in the committee who is capable. -It would be very unfair to say we don’t have anybody else in committee capable of also -detecting these issues. -There are many people who implement and maintain these and work out the ambiguities and WH’s -style of reporting the issues that he found was very in the meeting and we found a lot of -issues both before and after his analyses that weren’t reported with the same kind of brash, -hey, you find the error in the meeting. 
-So I think if we were in a place where we didn’t have implementer support and many different -people looking at the grammar would be an issue. -But I think we’ll be okay as a committee grammar-wise. - -MM :What I meant to say and I’m not sure if I said it is that WH shown to be the only person who -can evaluate it in realtime. -He really has a genius level uncanny genius level ability to spot problems that I think is -really, you know – I’ve just never seen anything like it. - -DE: I think we should really be focusing on detailed offline review of these sorts of things. -And also clear explanations of what our goals are with grammar. - -SYG: I’m also somewhat – to be clear I’m pretty neutral on the syntax versus the helper function -thing. -I admit I’m not fully convinced by the syntax use cases over a helper function but I don’t see -much harm for a language usability point of view. So I think for me basically comes down to which of the three grammar options is the simplest to -implement and the fastest by which I mean really that it’s the most localized, that there’s the -smallest likelihood that it will change parts and performance in any other way? -So which option would you say that is? - -RBN: The option that’s likely to have the least impact on parts or performance most likely in my -opinion would be option 1. -Because it just is parsed as a top level expression with the single restriction on expression -statement. -That said, it’s also the least convenient for users, both option 1 and 2 are inconvenient for -the 95% use case which are the things that are – which are many of the things that are marked -as currently illegal on this slide. The 90% use case for ThrowExpressions is the right-hand side of a null coalesce operator, and in the true and false branches of a ternary, and initializer of the parameter. -Those are the places that are the most useful. -Also potentially useful in a concise error body, but that’s not – you can just write curlies -versus parentheses in those cases, it doesn’t really matter. -Those cases coalesce and conditional and parameters are the most likely conditions you see in -use. -And again there’s an issue with – my concern is user convenience when it comes to man dating -parentheses because that adds additional typing that you have to do for any place that you want -to throw. -It potentially unnecessary complexity and the 95% use case is throwing an existing thing or -throwing a new error. -You rarely will throw a comma separated list or throw A and B. -That can happen. -But again it’s not the majority use case. -And goal with the parsing restrictions was to have the ergonomics and convenience for users -with a hopefully limited effect on parsing since parsing can continue to parse as sequence of -tokens and as it producers the production says the left-hand side is the ThrowExpression and -you check is the right-hand side wants a set of specific tokens. -The only corner cases being ASI handling and I’m sure that how the spec is written to discuss -ASI handling isn’t necessarily how parsers often will implement it if consistent with it and -easier and cheaper ways than we do it in the specification as well. - -KG: I don’t think any of these options would affect the performance of parsing code that is -not using throw expressions. It is really an obvious token. So once you are in the branch where you’re parsing a ThrowExpression, option 3 is definitely a little bit more complicated. 
-But it’s like the difference between 20 lines in your parser and 5 lines in your parser. -Since it doesn’t affect anything outside of ThrowExpressions, I wouldn’t worry too much about -it. +MM: I just want to raise and I won’t take much time during this presentation for this. But we need to treat this I think as an emergency as a committee because WH has shown himself over and over again to be the only person who is able to in realtime spot problems with syntax proposals because of the complexity of analyzing the implications of job description syntax proposals with the look ahead and the semicolon insertion and just the weird not context dependent lexical rules. JavaScript is incredibly hard to figure out what problems are introduced by a syntax proposal and WH has saved our butts over and over again and proved himself over and over again to be the only person who could. So I would like the committee to find some arrangement like permanent invited expert or something or whatever it is that we did to permanently invite Brendan no matter what he is. I think we need to do that with WH and I don’t feel comfortable accepting any syntax proposal without seeing WH’s reaction to it. + +DE: Yeah, WH has definitely been a valuable member of the committee over the years. And he’s definitely found some syntax issues. I think that it would be unfair to say that, no, we don’t have anybody else in the committee who is capable. It would be very unfair to say we don’t have anybody else in committee capable of also detecting these issues. There are many people who implement and maintain these and work out the ambiguities and WH’s style of reporting the issues that he found was very in the meeting and we found a lot of issues both before and after his analyses that weren’t reported with the same kind of brash, hey, you find the error in the meeting. So I think if we were in a place where we didn’t have implementer support and many different people looking at the grammar would be an issue. But I think we’ll be okay as a committee grammar-wise. + +MM :What I meant to say and I’m not sure if I said it is that WH shown to be the only person who can evaluate it in realtime. He really has a genius level uncanny genius level ability to spot problems that I think is really, you know – I’ve just never seen anything like it. + +DE: I think we should really be focusing on detailed offline review of these sorts of things. And also clear explanations of what our goals are with grammar. + +SYG: I’m also somewhat – to be clear I’m pretty neutral on the syntax versus the helper function thing. I admit I’m not fully convinced by the syntax use cases over a helper function but I don’t see much harm for a language usability point of view. So I think for me basically comes down to which of the three grammar options is the simplest to implement and the fastest by which I mean really that it’s the most localized, that there’s the smallest likelihood that it will change parts and performance in any other way? So which option would you say that is? + +RBN: The option that’s likely to have the least impact on parts or performance most likely in my opinion would be option 1. Because it just is parsed as a top level expression with the single restriction on expression statement. That said, it’s also the least convenient for users, both option 1 and 2 are inconvenient for the 95% use case which are the things that are – which are many of the things that are marked as currently illegal on this slide. 
The 90% use case for ThrowExpressions is the right-hand side of a null coalesce operator, and in the true and false branches of a ternary, and initializer of the parameter. Those are the places that are the most useful. Also potentially useful in a concise error body, but that’s not – you can just write curlies versus parentheses in those cases, it doesn’t really matter. Those cases coalesce and conditional and parameters are the most likely conditions you see in use. And again there’s an issue with – my concern is user convenience when it comes to man dating parentheses because that adds additional typing that you have to do for any place that you want to throw. It potentially unnecessary complexity and the 95% use case is throwing an existing thing or throwing a new error. You rarely will throw a comma separated list or throw A and B. That can happen. But again it’s not the majority use case. And goal with the parsing restrictions was to have the ergonomics and convenience for users with a hopefully limited effect on parsing since parsing can continue to parse as sequence of tokens and as it producers the production says the left-hand side is the ThrowExpression and you check is the right-hand side wants a set of specific tokens. The only corner cases being ASI handling and I’m sure that how the spec is written to discuss ASI handling isn’t necessarily how parsers often will implement it if consistent with it and easier and cheaper ways than we do it in the specification as well. + +KG: I don’t think any of these options would affect the performance of parsing code that is not using throw expressions. It is really an obvious token. So once you are in the branch where you’re parsing a ThrowExpression, option 3 is definitely a little bit more complicated. But it’s like the difference between 20 lines in your parser and 5 lines in your parser. Since it doesn’t affect anything outside of ThrowExpressions, I wouldn’t worry too much about it. USA: Next in the queue is Michael. MF: I wanted to clarify a couple of things that are bothering me during this discussion. So the first thing is that I don’t believe option one requires more parentheses than option two. Option one requires more parentheses than option two, only if you do not change any of the current uses of the expression production to the new nested expression production. Option one gives us the opportunity to choose every single place where we want to not require parentheses. So that's just up to us. The most restrictive form of option one requires parentheses everywhere, but you can granularly, location by location, choose where not to use parentheses. So that's the first point I wanted to make, and that's why option one is strictly superior to option two. But I don't think that these should be the options in the first place. This is a false choice. The committee should be deciding which cases are important to write in which ways, and then leaving it up to the editors to figure out how to structure the grammar. We shouldn't be dictating a grammar or choosing a grammar and having some set of semantics fall out of that. That's just not how to do this design. Choose the things that are important and you want to write, and then the editors will figure out how to encode that in spec. -RBN: I agree. -I don’t have one clarification, now you seem to favour option 1 but in the discussions we had on the issue tracker, you seem to prefer option 2. I wanted to clarify what your preference was if we had to choose one of these. +RBN: I agree. 
I don’t have one clarification, now you seem to favour option 1 but in the discussions we had on the issue tracker, you seem to prefer option 2. I wanted to clarify what your preference was if we had to choose one of these. MF: Which is the one that I’m looking at on the screen? @@ -457,58 +369,39 @@ MF: You’re numbering this what? RBN: This is option 2 on the screen. This is one that you expressed preference for. -MF: It's possible that I said things backwards when I was making my statements. -I apologize for that. +MF: It's possible that I said things backwards when I was making my statements. I apologize for that. RBN: I just wanted to clarify which direction you were – MF: I was numbering them incorrectly, sorry. -RBN: Yes. -Really it is a minor difference between options 1 and 2 here. It’s just a matter of do you want to allow it in certain nonsensical places? But this does show that one or the other of the actual spec for this is fairly simple. The specification necessary for option 3 which is kind of the status quo as the proposal is -right now and the fixes for ASI that we discussed in September is slightly more complicated from the spec text perspective. But I think it still provides the best usability for users. +RBN: Yes. Really it is a minor difference between options 1 and 2 here. It’s just a matter of do you want to allow it in certain nonsensical places? But this does show that one or the other of the actual spec for this is fairly simple. The specification necessary for option 3 which is kind of the status quo as the proposal is right now and the fixes for ASI that we discussed in September is slightly more complicated from the spec text perspective. But I think it still provides the best usability for users. NRO: I agree with Michael here that regardless of how this is, we can use granularly where we want option if it’s an option and double parentheses and if possible to specify that and the specs are similar and just a matter of choosing which places to put which double expression. USA: Next in the queue is MM. -MM: So first of all just want to be clear. I think we have been clear. I just want to make sure that we’re clear that I didn’t find any of the arguments for syntax over helper compelling and based on that we would -not be willing to see this advanced as a syntax proposal. +MM: So first of all just want to be clear. I think we have been clear. I just want to make sure that we’re clear that I didn’t find any of the arguments for syntax over helper compelling and based on that we would not be willing to see this advanced as a syntax proposal. -We would support exploration of helper function proposals having a standard helper function than just say write your own helper function would be a valuable addition to the language. I want to ask two questions with regard to one of your statements about the problems with the helper function. Regarding static analysis. There were two static analysis arranged. The first is TypeScript point of view and keep it specific to TypeScript but probably applies to other TypeScript like systems if we’re concerned about those. +We would support exploration of helper function proposals having a standard helper function than just say write your own helper function would be a valuable addition to the language. I want to ask two questions with regard to one of your statements about the problems with the helper function. Regarding static analysis. There were two static analysis arranged. 
The first is TypeScript point of view and keep it specific to TypeScript but probably applies to other TypeScript like systems if we’re concerned about those. -TypeScript has a never return type, and it’s in statement position when you call a function that has a never return type, TypeScript seems to understand the implications of that well in expression position for like a nested expression after an or. We know that TypeScript doesn’t infer as much about a never return type as -we would like it to. I believe it is filed a bug on TypeScript about that or found there was already a bug filed. I don’t remember which one. So I want to verify that a never return type on a helper function solves the TypeScript Problem. That’s the first of my questions about the static analysis. +TypeScript has a never return type, and it’s in statement position when you call a function that has a never return type, TypeScript seems to understand the implications of that well in expression position for like a nested expression after an or. We know that TypeScript doesn’t infer as much about a never return type as we would like it to. I believe it is filed a bug on TypeScript about that or found there was already a bug filed. I don’t remember which one. So I want to verify that a never return type on a helper function solves the TypeScript Problem. That’s the first of my questions about the static analysis. -RBN: It does not currently. So the reason why TypeScript – let me back up for just a moment. -If you have a throw statement or you have an expression that returns never and you invoke that -at a statement level, we treat that as program termination. So if you were to declare a variable before that, not reference it before the expression that returns never and then do an assignment to it afterwards or try to use it afterwards for some reason, that would as part of the control flow analysis be considered a variable that has not yet been referenced and affects certain other behaviours that fall out of that. -So if you report on errors related to unused locals, if you want to use, do type analysis or control flow analysis to determine what is the type of this thing after I have gone through an if statement and then or else branches might throw, only at the statement level we check that. We don’t currently check that at the expression level because it’s too expensive. It is too costly for us to descend through the entire expression tree and do a control flow also through the node to see if something is program terminating and instead we used things fall out of saying the never expression is returned and if you try to do something on it you -get an error on the subsequent line. It won’t be unused levels and we don’t do the tracking on that level because of the cost. The error.throw would only work at the statement position and in this case might just use a throw statement. Doesn’t help us in an expression case. +RBN: It does not currently. So the reason why TypeScript – let me back up for just a moment. If you have a throw statement or you have an expression that returns never and you invoke that at a statement level, we treat that as program termination. So if you were to declare a variable before that, not reference it before the expression that returns never and then do an assignment to it afterwards or try to use it afterwards for some reason, that would as part of the control flow analysis be considered a variable that has not yet been referenced and affects certain other behaviours that fall out of that. 
So if you report on errors related to unused locals, if you want to use, do type analysis or control flow analysis to determine what is the type of this thing after I have gone through an if statement and then or else branches might throw, only at the statement level we check that. We don’t currently check that at the expression level because it’s too expensive. It is too costly for us to descend through the entire expression tree and do a control flow also through the node to see if something is program terminating and instead we used things fall out of saying the never expression is returned and if you try to do something on it you get an error on the subsequent line. It won’t be unused levels and we don’t do the tracking on that level because of the cost. The error.throw would only work at the statement position and in this case might just use a throw statement. Doesn’t help us in an expression case. MM: What I don’t understand about that answer is that in order to make the adjustments to TypeScript so that you get the TypeScript benefit from the ThrowExpression syntax, whatever those adjustments that you make the TypeScript would have the expense that you’re currently trying to avoid – -RBN: No, they would not. -I’m sorry for interrupting. -Please continue. +RBN: No, they would not. I’m sorry for interrupting. Please continue. -MM: No, no. -If that’s wrong, then that’s the high priority to focus on. +MM: No, no. If that’s wrong, then that’s the high priority to focus on. -RBN: The difference is we can perform static analysis and analysis of the code based on syntax and know it's a ThrowExpression with a throw keyword and keep track of the AST as we do with many other things. -We know as we parse the tree, that is what is going to happen at that point so we know we can dig into that specific part of the expression to know the things after it that follow are, might have the no unused local errors, et cetera. For any given call expression, that is literally everything. -That is every single function you call anywhere. If you do that for every single expression it would increase the compile time drastically. Now we’re having to do these other checks we don’t normally do for expressions everywhere and too expensive for that to work. Syntax is extremely beneficial for TypeScript in this case because we don’t require type analysis, we only can require analysis of the syntax that is a much smaller area to look at. +RBN: The difference is we can perform static analysis and analysis of the code based on syntax and know it's a ThrowExpression with a throw keyword and keep track of the AST as we do with many other things. We know as we parse the tree, that is what is going to happen at that point so we know we can dig into that specific part of the expression to know the things after it that follow are, might have the no unused local errors, et cetera. For any given call expression, that is literally everything. That is every single function you call anywhere. If you do that for every single expression it would increase the compile time drastically. Now we’re having to do these other checks we don’t normally do for expressions everywhere and too expensive for that to work. Syntax is extremely beneficial for TypeScript in this case because we don’t require type analysis, we only can require analysis of the syntax that is a much smaller area to look at. -MM: Okay. -I’m glad I asked. -That answer is a surprise to me and new information. +MM: Okay. I’m glad I asked. 
That answer is a surprise to me and new information.

-SYG: This is one of the reasons why I reached out to SYG prior to the presentation to kind of get some idea if there would be any benefit to escape analysis. Currently there isn’t.
-As I understand it, the escape analysis that it uses is fairly coarse grained. I see there is a potential future for that. And other mechanism for static analysis that I think would benefit from the syntax that you really can’t get from the expression case, with the method case.
+RBN: This is one of the reasons why I reached out to SYG prior to the presentation, to kind of get some idea if there would be any benefit to escape analysis. Currently there isn’t. As I understand it, the escape analysis that V8 uses is fairly coarse grained. I see there is a potential future for that, and other mechanisms for static analysis that I think would benefit from the syntax, that you really can’t get from the expression case, with the method case.

-MM: So I think that these static – and my second question was going to be about the what the
-optimization opportunity for the engine, so that was relevant. So I’m open to doing a deeper investigation into the difficulties with the helper function. I’m not willing to write those problems off as impossible to solve in TypeScript and clearly the engine thing since it’s not an opportunity that can yet be realized at least in V8 what do we take to realize it and whether it can also be realized for helper function given the kinds of Git optimization that we have that are conditional on very static methods being the original
-static method like the math functions. So I would be open to doing a deeper investigation into both of those static issues to see what the genuine costs are. Until I’m convinced that those costs are unsolvable with the helper function, I’m unwilling to give up on the helper function.
+MM: So I think that these static – and my second question was going to be about what the optimization opportunity for the engine is, so that was relevant. So I’m open to doing a deeper investigation into the difficulties with the helper function. I’m not willing to write those problems off as impossible to solve in TypeScript, and clearly for the engine side, since it’s not an opportunity that can yet be realized, at least in V8, what would it take to realize it, and whether it can also be realized for a helper function given the kinds of JIT optimization that we have that are conditional on very static methods being the original static method, like the math functions. So I would be open to doing a deeper investigation into both of those static issues to see what the genuine costs are. Until I’m convinced that those costs are unsolvable with the helper function, I’m unwilling to give up on the helper function.

RBN: I think there’s a clarifying question and there’s one other topic I wanted to discuss in relation to this.

@@ -518,38 +411,23 @@ RBN: Not more than – not that it is more difficult for error.throw but that it

GCL: Okay, thank you.

-RBN: And to a point that I made earlier that we couldn’t just look at error.throw and give it special handling because if anyone says const equals error.throw we can’t handle that case. The moment that you step off of the path you have issues with it not doing things that you expect.
-We don’t want to break user expectations in those cases. So I wanted to go before we continue with the queue, I wanted to go briefly to another point around syntax related to Mark’s question.
+RBN: And to a point that I made earlier that we couldn’t just look at error.throw and give it special handling because if anyone says const equals error.throw we can’t handle that case. The moment that you step off of the path you have issues with it not doing things that you expect. We don’t want to break user expectations in those cases. So I wanted to go before we continue with the queue, I wanted to go briefly to another point around syntax related to Mark’s question. I have and this has been something that I brought to committee and I have considered in the past a proposal to allow you to have an expression-less throw that could be used inside of an catch clause, especially a catch clause that has no binding, where I don’t necessarily need the binding because I’m not going to use the expression, I’m going to do some type of work related to an exception being thrown and just rethrow. There are a number of languages that have that capability. That’s not something we could do in the expression case if we use a method because that would require the method having some understanding of whatever the current ambient exception is when I’m being called and so I don’t think that’s viable in my – well, again, this hasn’t been presented. It’s something that I might bring to committee in the future.The only way that that might eventually work and still be supported in both the statement of expression cases that you have a syntactic equivalent. I think we can continue with SYG. -SYG: So I want to – a bunch of the conversation just happened around speculative benefits to Gites, I don’t think there’s much. I’m kind of confused why this is touted as a big benefit now. For throws, it makes the rest of the code dead, there’s some pretty simple unreachable thing that happens for throw statements since their statements that clearly makes the rest of the thing dead. -For expressions, that might help there but I’m not sure what benefit that is. I guess who is writing code where the rest of the body is obviously dead in expression? An expression context. Like, I’m not sure that would help any performance. And I think I just didn’t understand what the escape analysis question was that you were asking me, Ron. -What was it that +SYG: So I want to – a bunch of the conversation just happened around speculative benefits to Gites, I don’t think there’s much. I’m kind of confused why this is touted as a big benefit now. For throws, it makes the rest of the code dead, there’s some pretty simple unreachable thing that happens for throw statements since their statements that clearly makes the rest of the thing dead. For expressions, that might help there but I’m not sure what benefit that is. I guess who is writing code where the rest of the body is obviously dead in expression? An expression context. Like, I’m not sure that would help any performance. And I think I just didn’t understand what the escape analysis question was that you were asking me, Ron. What was it that -RBN: I was trying to better understand if there are specific optimizations that happen in V8 related to if you were to early return or early throw and aware that V8 has escape analysis of object construction and object properties if the values don’t escape the method or function, they could be converted into local variables, but then I guess in my question was, are there constraints around when it can do this such as branching? 
If I throw in one branch, if I throw something that might use the value in one branch but not -another or if I can – are there any type of optimizations how that happens? Does it have it in an option in the branch where it needs it as an object or try to maintain it as just local variables for those cases and if that is the case for something like throw and return as a statement, is that something that could conceivably be applied to throw in an expression case to assist with the escape analysis in a branch? -That is what that question was related to. In general I think the syntax is more interesting for static analysis cases even above and beyond escape analysis. If that’s something that V8 has optimizations for throw statements already. +RBN: I was trying to better understand if there are specific optimizations that happen in V8 related to if you were to early return or early throw and aware that V8 has escape analysis of object construction and object properties if the values don’t escape the method or function, they could be converted into local variables, but then I guess in my question was, are there constraints around when it can do this such as branching? If I throw in one branch, if I throw something that might use the value in one branch but not another or if I can – are there any type of optimizations how that happens? Does it have it in an option in the branch where it needs it as an object or try to maintain it as just local variables for those cases and if that is the case for something like throw and return as a statement, is that something that could conceivably be applied to throw in an expression case to assist with the escape analysis in a branch? That is what that question was related to. In general I think the syntax is more interesting for static analysis cases even above and beyond escape analysis. If that’s something that V8 has optimizations for throw statements already. -SYG: At the point where escape analysis is usually applied in JSVMs is pretty late or pretty high tier optimization that would apply and escape analysis is expensive and not do it in earlier with byte code analysis or compilation. By the time you get to the higher tier JIT you have intermediate representation than a syntax obviously and this could be something – V8 has something like sea of nodes that JVM compilers use and I think it’s more widespread to use an SSA like form but at that point, you have some -IR, and once you have the IR which is likely to be post in-lining in the higher tiered JITs, whether you have the helper function with the ThrowStatement or ThrowExpression, I think the difference between those is basically nothing unless like you’re helping function is really not in lineable for whatever reason. If it is a built in helper, then that gets straight forwardly in line as like a node in our IR that says throw and there’s no difference between the statement and expression there. If it is a user written helper, that is not used in some weirdly wild metamorphic way chances are it can get inlined and it will be a tiny helper and that ThrowStatement will be a throw node in the IR of the function it was inlined into and I don’t think it will be a big difference or any difference between the statement and the expression. So in terms of enabling more optimizations, I don’t think this would enable any more than what would otherwise be possible today via a user written helper. 
As for what optimizations can utilize, you know, try to leverage dead branches for narrowing or something and I think that’s orthogonal question and I don’t see how this would really make a
-difference there.
+SYG: The point where escape analysis is usually applied in JS VMs is pretty late, a pretty high tier optimization; escape analysis is expensive and we don’t do it earlier with bytecode analysis or compilation. By the time you get to the higher tier JIT you have an intermediate representation rather than syntax, obviously – V8 has something like the sea of nodes that JVM compilers use, and I think it’s more widespread to use an SSA-like form – but at that point, you have some IR, and once you have the IR, which is likely to be post-inlining in the higher tiered JITs, whether you have the helper function with the ThrowStatement or a ThrowExpression, I think the difference between those is basically nothing, unless your helper function is really not inlineable for whatever reason. If it is a built-in helper, then that gets straightforwardly inlined as a node in our IR that says throw, and there’s no difference between the statement and expression there. If it is a user-written helper that is not used in some wildly megamorphic way, chances are it can get inlined, and it will be a tiny helper, and that ThrowStatement will be a throw node in the IR of the function it was inlined into, and I don’t think there will be a big difference or any difference between the statement and the expression. So in terms of enabling more optimizations, I don’t think this would enable any more than what would otherwise be possible today via a user-written helper. As for what optimizations can, you know, try to leverage dead branches for narrowing or something, I think that’s an orthogonal question and I don’t see how this would really make a difference there.

RBN: Thank you.

-NRO: This is regarding the topic from earlier when like Kevin said that actually we could have
-an expression of throw and you mentioned WH objected to that. I was checking the notes and WH was actually pushing for a solution with the lower precedence at some point and all of his concerns that are captured in the notes are about like how syntax and make it look like some sort of weird precedence for the expression. Like, I want to not speak for him, so I’m trying to – I have notes in front of me.
-He explicitly said the solution using the grammar and not having verse or restrictions is his
-preferred solution. And there was the discussion about syntax or function and syntax we could still consider having an expression of a throw, regardless of whether we go with option 1 or 2 and in the future we could still like once this gets used we can see if we can relax parentheses and how much is in
-practice.
+NRO: This is regarding the topic from earlier when Kevin said that actually we could have an expression form of throw and you mentioned WH objected to that. I was checking the notes, and WH was actually pushing for a solution with the lower precedence at some point, and all of his concerns that are captured in the notes are about how the syntax would make it look like there is some sort of weird precedence for the expression. Like, I don’t want to speak for him, so I’m trying to – I have the notes in front of me. He explicitly said the solution using the grammar and not having lookahead restrictions is his preferred solution.
And there was the discussion about syntax or function and syntax we could still consider having an expression of a throw, regardless of whether we go with option 1 or 2 and in the future we could still like once this gets used we can see if we can relax parentheses and how much is in practice. -RBN: I wanted to point out – not point out, rather, I would say I attempted to have this discussion with Kevin prior to plenary. -When I checked my chat history in matrix I have a bunch of things that say that the message was -unable to be sent. -But I was – I have actually been investigating a way of doing this without the look ahead restriction. Potentially feasible. So it’s something that I need to talk more with the editors in which case again this becomes if we went with the approach it’s more editorial discussion if we write it. I know it would be more complex how the grammar is specified because it requires using something akin to the in parameter production where we allow it in some cases and not and some things related to how we do things like multiplicative expression – right associative that would actually be able to solve this without the look ahead restriction. There is a potential for that to work in that vain where it’s just how it falls out in the grammar. -So WH did say that his preference would be throw UnaryExpression and like just working out something in the grammar, but we did also discuss the specific alternative of having parentheses throw and in that case didn’t want to have the right-hand to be UnaryExpression. That is sort of the intersection of all of the proposals is parent thesed expression of UnaryExpression and that can be relaxed to the proposed alternatives and we discussed the specific alternative and WH specifically did not like it for reasons that are not totally clear to me but I think that would be a weird thing to do, not on any practical concerns. +RBN: I wanted to point out – not point out, rather, I would say I attempted to have this discussion with Kevin prior to plenary. When I checked my chat history in matrix I have a bunch of things that say that the message was unable to be sent. But I was – I have actually been investigating a way of doing this without the look ahead restriction. Potentially feasible. So it’s something that I need to talk more with the editors in which case again this becomes if we went with the approach it’s more editorial discussion if we write it. I know it would be more complex how the grammar is specified because it requires using something akin to the in parameter production where we allow it in some cases and not and some things related to how we do things like multiplicative expression – right associative that would actually be able to solve this without the look ahead restriction. There is a potential for that to work in that vain where it’s just how it falls out in the grammar. So WH did say that his preference would be throw UnaryExpression and like just working out something in the grammar, but we did also discuss the specific alternative of having parentheses throw and in that case didn’t want to have the right-hand to be UnaryExpression. That is sort of the intersection of all of the proposals is parent thesed expression of UnaryExpression and that can be relaxed to the proposed alternatives and we discussed the specific alternative and WH specifically did not like it for reasons that are not totally clear to me but I think that would be a weird thing to do, not on any practical concerns. 
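For concreteness, the shapes being weighed look roughly like the sketch below. The expression forms are hypothetical syntax – neither is valid JavaScript today – and the names (`getPort`, `config`) are purely illustrative; the exact precedence and any parenthesization requirement are precisely what is still being worked out.

```js
// Statement form, valid today:
function getPort(config) {
  if (config.port === undefined) {
    throw new TypeError("port is required");
  }
  return config.port;
}

// Hypothetical throw-expression forms (not valid today):
//
//   // UnaryExpression precedence, the shape the champion prefers:
//   const port = config.port ?? throw new TypeError("port is required");
//
//   // The parenthesized alternative discussed with WH:
//   const port = config.port ?? (throw new TypeError("port is required"));
```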
RBN: This is the sliding question from the September plenary where we basically were proposing so mething akin to what we’re talking about today but maintaining UnaryExpression precedence. And I believe his response at the time was that this results in a confusing precedence where we parenthesizing and still only allowing Unary and he suggested we pick a precedence and our position at the time was that we essentially had picked the precedence. The precedence generally preferred by the champion and generally preferred also by Kevin despite his objection around the restriction, the restriction here is UnaryExpression is the right precedence and the most convenient for users and fits the majority use case but we would have to find if we wanted to meet that have to find an alternative syntax that would leverage just the grammar as it is. That’s something that I need to continue pursuing if that’s the direction we want to keep looking at. @@ -559,67 +437,60 @@ RBN: So the thing that I would like to get to before I close is that again we ha USA: I would take that as consensus. -KG: I don’t think you necessarily need to rule out using lookaheads in option 3. -I think we should consider the grammar that we want and what is the editorial simplest way to get it; that might not be lookaheads but it might be lookaheads. I think that the important thing is we figure out which programs we want to be legal and which programs we don’t want legal and then it is a strictly editorial concern how we write that down. +KG: I don’t think you necessarily need to rule out using lookaheads in option 3. I think we should consider the grammar that we want and what is the editorial simplest way to get it; that might not be lookaheads but it might be lookaheads. I think that the important thing is we figure out which programs we want to be legal and which programs we don’t want legal and then it is a strictly editorial concern how we write that down. RBN: I agree with that. So in summary we’ll continue to investigate the syntax options with the editors to find something that we think will generally be palpable and try to continue along the UnaryExpression path if we we find this is viable alternative or option and we will also be investigating the static method case and look for some feedback on to the possible caveats with that and I would appreciate if implementers can reach out to me after the meeting and I regularly speak with Shu and can talk to him about this as well and if anyone else can reach out to me afterwards to get an idea – so I can get an idea are there concerns they would have or benefits to syntax versus expression in those engines, that I won’t necessarily have direct visibility to myself. -RBN: I have it on here potential for seeking advancement and only if we had a definitive conclusion -as a result of this. -I did also catch this as potentially just an update because there are some depending on whether we went with any of these options may have been open questions and I’m happy to just leave this as an update for now as we continue to pursue these options. Nothing else, I think I am done, thank you. - +RBN: I have it on here potential for seeking advancement and only if we had a definitive conclusion as a result of this. I did also catch this as potentially just an update because there are some depending on whether we went with any of these options may have been open questions and I’m happy to just leave this as an update for now as we continue to pursue these options. 
Nothing else, I think I am done, thank you. ### Speaker's Summary of Key Points -* List -* of -* things + +- List +- of +- things ### Conclusion -* List -* of -* things +- List +- of +- things ## Incubation call chartering -Presenter: Shu-yu Guo (SYG) -- [proposal]() -- [slides]() +Presenter: Shu-yu Guo (SYG) +- Slides: See Agenda -USA: Is somebody ready to do incubation call chart, Shu, do you have any pointers as to who took -over from you? +USA: Is somebody ready to do incubation call chart, Shu, do you have any pointers as to who took over from you? SYG: I do not. USA: It seems like we don’t have anybody to take this idea on. -SYG: It’s been ad hoc for what it is worth. -Since I’ve stepped away due to lack of cycles, I think there have been a few like about decimal and maybe something elsewhere it’s just people coordinate on the GitHub issue if given that it's been fewer proposals wanting to do that, with having just a few like zero to two between meetings I think coordinating on GitHub might work out well instead of this formal thing and shelf this formal chartering for now. +SYG: It’s been ad hoc for what it is worth. Since I’ve stepped away due to lack of cycles, I think there have been a few like about decimal and maybe something elsewhere it’s just people coordinate on the GitHub issue if given that it's been fewer proposals wanting to do that, with having just a few like zero to two between meetings I think coordinating on GitHub might work out well instead of this formal thing and shelf this formal chartering for now. USA: Sounds about right, but just in case I could do a quick call again: Are there any proposal champions on this call who would like to have an incubation call or during this meeting that mention the possibility of having an incubation call later? [silence] ## Function and Object Literal Element Decorators for stage 1 -Presenter: Firstname Lastname (RBN) -- [proposal]() -- [slides]() +Presenter: Ron Buckton (RBN) + +- [proposal](https://github.com/rbuckton/proposal-function-decorators) +- Slides: See Agenda -RBN: Good afternoon. So I am Ron Buckton from Microsoft corporation. I wanted to speak to you for the next hour about extending the decorators proposal further than we currently have. -So a brief overview of how this presentation is laid out: I am going to talk about the motivations and then there are two parts to the proposal I want to discuss. Function decorators, and then kind of a more specific focus on object literal element decorators and then discuss the broad scope of the proposal towards the end. +RBN: Good afternoon. So I am Ron Buckton from Microsoft corporation. I wanted to speak to you for the next hour about extending the decorators proposal further than we currently have. So a brief overview of how this presentation is laid out: I am going to talk about the motivations and then there are two parts to the proposal I want to discuss. Function decorators, and then kind of a more specific focus on object literal element decorators and then discuss the broad scope of the proposal towards the end. -RBN: The main goal here is to kind of look at the things that are not currently decorable, but are – have similarities to what you can do with classes, class fields and methods. So that’s why these are grouped together as opposed to two separate proposals. There’s a lot of cross-cutting concerns across these that are worth addressing together. 
The main motivations for this is function and object literal element [dem] rarities, would allow the meta programming capabilities beyond classes and class elements. Those capabilities that we now provide with or will be soon providing with decorators are the ability to attach or create reusable blocks providing logging, facing and entwined things for HTTP or REST API routing. Authorizes flows, paired with things like AsyncContext in a multiuser service.
+RBN: The main goal here is to kind of look at the things that are not currently decorable, but that have similarities to what you can do with classes, class fields and methods. So that’s why these are grouped together as opposed to two separate proposals. There are a lot of cross-cutting concerns across these that are worth addressing together. The main motivation for this is that function and object literal element decorators would allow the metaprogramming capabilities to extend beyond classes and class elements. Those capabilities that we now provide with, or will soon be providing with, decorators are the ability to attach or create reusable blocks providing logging, tracing, and endpoint things for HTTP or REST API routing, and authorization flows, paired with things like AsyncContext in a multiuser service.

-RBN: The ability to perform registration of classes such as HTML custom elements. The recent addition of metadata reaching stage 3. The subscription or recording of metadata of objects that can later be used.
-And some other additional things interesting like the ability to write generator trampolines, that the generator mechanisms.
+RBN: The ability to perform registration of classes such as HTML custom elements. The recent addition of metadata reaching stage 3. The subscription or recording of metadata on objects that can later be used. And some other additional interesting things, like the ability to write generator trampolines on top of the generator mechanisms.

RBN: Also, we want to really be able to promote decorator reuse regardless of your programming style. Decorators today, as they are tailored towards classes and class methods, are focussed on OOP-based development – the object-oriented style. But that kind of leaves in the lurch things like functional programming, folks who tend to use object literals versus classes for instance values. And there’s some benefit in having a shared and reusable mechanism for defining how these decorators apply. Because if we are able to define a decorator that can be used on a class method and on a function in a way that is consistent, it simplifies how we do validation for decorator placement: where can this be used, or testing whether the thing that the decorator itself is decorating is valid? We get API consistency with the decorator context, allowing us to have essentially simple switches at the top of a decorator to determine, am I targeting the right element? And generally the improvements that we get with a context over just DIY unary function pipelining.

RBN: So with those in mind, again this first section is about the function decorators. Here, decorators we already know are useful for class methods, as there are not convenient wrapping mechanisms for methods in class syntax. A field, a function, you could pipe through a function, using a callback-like approach or any other type of function wrapping, but that doesn’t work for overriding base class methods that defer actual logic to the base class definition. It doesn’t work well with field-like definitions in those cases.
So we have this decorator syntax for class methods to give us the metaprogramming capabilities that aren’t convenient to achieve when you’re talking about an object literal or a function. And for regular functions we can and do write things like what is pictured here: calling a function with the thing you want to wrap. And that’s kind of like a decorator. It can do something similar. Why would we want or need the decorators? There are a number of reasons that we want to consider this. One is decorator reusability. We can write decorators that are designed to work with a class method, a class static method, designed to work with a class constructor. Those could also be reused for functions. Things like logging and tracing. If I have an @log decorator I could apply it to a method, I could apply it to a function, and have that same benefit. The API consistency is key. Unlike the regular function pipelining approach, I don’t have to – I can make a determination as to whether the thing I am decorating is a valid decorated thing by looking at it and accessing the kind property. That makes the programming model for decorators applying to function declarations and FunctionExpressions consistent with methods today. And how those checks are performed also allows us to look at other things we might want to decorate in the future, and if those things were not to have – not to also have some type of differentiating context, then how do we determine that the thing I was called with as a function is not actually a decorator? Today we do that by validating the arguments provided to us.

-RBN: One of other things you don’t get with pipelining functions to be function wrapping is, this post-wrap registration compatibility that you can get today with `context.addInitializer.` It’s a condition text used for custom elements where you can define the class context; other decorators that run, might do a constructor replacement. And at the end of the day, what is the final result is what is passed to the add extra initializer, to those call backs, allowing you to do this post-hoc registration of classes. You won’t get something that is partially ready because there are other deck rates to be replied after it.
+RBN: One of the other things you don’t get with pipelining functions to do function wrapping is this post-wrap registration capability that you can get today with `context.addInitializer`. It’s a pattern used for custom elements where you can define the class; other decorators that run might do a constructor replacement, and at the end of the day, the final result is what is passed to the addInitializer callbacks, allowing you to do this post-hoc registration of classes. You won’t get something that is partially ready because there are other decorators to be applied after it.

RBN: Some other things we get as a benefit that we don’t get with the function wrapping is reflection. We get access to lexically assigned names after declaration via context.name. If you are some number of levels deep with multiple decorators, and one of them doesn’t forward the function name, but you want it for logging purposes, the function pipelining approach that you could do today is not really sufficient for that. It requires quite a bit of trust or work on other function decorators, and ensuring you get the right name for this.
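As a sketch of the API shape being described: the decorator below follows the existing `(target, context)` convention, but the `"function"` kind value, the ability to apply it to a plain function, and all names used are illustrative assumptions rather than anything specified.

```js
// A hypothetical @logged decorator written against the (target, context) shape.
function logged(target, context) {
  // Placement validation via context.kind, as with class element decorators today;
  // a "function" kind is assumed here purely for illustration.
  if (context.kind !== "function") {
    throw new TypeError(`@logged cannot be applied to a ${context.kind}`);
  }
  // Post-decoration registration, analogous to context.addInitializer today.
  context.addInitializer(() => {
    console.log(`registered ${context.name}`);
  });
  // Return a wrapper; context.name gives reliable reflection of the declared name.
  return function (...args) {
    console.log(`enter ${context.name}`);
    return target.apply(this, args);
  };
}

// Hypothetical application to a function declaration (not valid syntax today):
//
//   @logged
//   function fetchUser(id) { /* ... */ }
```

Because this shape matches what method decorators receive today, the reuse argument above is that the same decorator body could also be applied to a class method unchanged.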
@@ -627,8 +498,7 @@ RBN: Another – compatibility we provide here is the ability to define metadata

RBN: Function decorators are something I have for a long time considered a separate proposal. It was originally discussed as part of the original decorators proposal when it was first introduced. And the only reason why parameter decorators came first is that there was an apparent and immediate need on the TypeScript side to determine whether or not those are something that is viable, something we have the capability to do. Function decorators are something we don’t support in TypeScript, not because we did not think it was a valuable direction, but because by the time we were implementing this, the idea of function decorators had already been kind of pulled out of what we were considering for this proposal.

-RBN: So this gives us a way to provide the future extensibility. What would it look like? A couple of examples here on the screen. You might have an event handler, you want to add debounce semantics to a method or a function, in this case, than arrow functionment I might asynchronously retry in the error during IO operations. I might have an authorization mechanism in a multiuser application in something like node JS, where I want to have multiple incoming callers access information at the same time and be able to thread through context right armedless as to whether I pass it as an argument. I could use this for that
-And the idea of being able to use a generator tramp [poe] lean that does something like data flow, that you might hold in the body of a generator function and then have that be presented as just a regular function to the person evoking it.
+RBN: So this gives us a way to provide that future extensibility. What would it look like? A couple of examples here on the screen. You might have an event handler where you want to add debounce semantics to a method or a function, in this case an arrow function. I might asynchronously retry on error during IO operations. I might have an authorization mechanism in a multiuser application in something like Node.js, where I want to have multiple incoming callers access information at the same time and be able to thread through context regardless of whether I pass it as an argument. I could use this for that. And there is the idea of being able to use a generator trampoline that does something like data flow, that you might hold in the body of a generator function and then have that be presented as just a regular function to the person invoking it.

RBN: Now there are some other interesting use cases. So one that I was pointed to, to look into, was AWS Chalice – this is to simplify the creation of services in AWS. Chalice uses Python, and the class decorator we have today was very heavily influenced by the decorator in Python. And Python does have the ability to decorate regular functions.

@@ -636,17 +506,13 @@ RBN: And this here is an example of the same example you can find on the README

RBN: Another capability this gives is the ability to write function decorators that decorate decorators, which sounds redundant, but it is the ability to define mechanisms to reduce boilerplate. I mentioned before that one of the things that is valuable with the decorator syntax and context is the ability to validate that the thing that you’re decorating is correct. We have `context.kind` for that. That’s boilerplate to add to every function.
In a language like C#, the attributes have an attribute you can put on an attribute, that allows you to specify what the valid targets are for the attribute. Those are processed at compile time versus what we support in JavaScript would have to be validated at runtime. This gives you the same capability. I can have a decorator and decorate it with additional context that can simplify the process of defining these and easier to see at a glance by putting in documentation, you document those are to see them and know what effects they will have. Same thing happens here with the allowed decorator, on another decorator that says, I am allowed to be placed on methods and functions and then reduce and remove that boiler plate that you have to otherwise provide. And have the other additional context. Here, you can see that I can infer from the context of the decorator that this is targeting the name of the function that I am decorating, which is useful for producing a TypeError that can be caught and reported upon and gives you more insight into what the method or function or whatever it was that you were decorating, whether that is valid. This is a function inside of the method without having to be repeatable and rewrite the name yourself or using stacked traces to try to figure it out. This is a more reliable and clear mechanism for it. And give the types of tests. So you can see with this example here, if I tried to decorate a function with this decorator, it’s okay because it would be allowed to decorate a class with this, it’s not okay and throw an error. It gives us those capabilities. -RBN: So, how this works, we expect or would anticipate that decorator evaluation order and application order are the same as what specified for class elements. Decorator evaluation is when we evaluate the expressions that are part after the decorator. So if you see `@foo`, it’s the `foo` part. Whatever that expression is, is evaluated at that time. In order, in document order, essentially left to right, top to bottom. As we expect. Decorator application is the reverse. It preserves the application order it sees for this function application mechanism so we apply them right to left, like you have nested them. Evaluation order and application order differ. We want to make them consistent throughout the language. This is similar to how methods or classes work. You replace the thing with something that wraps it using a function. Return and defining it has no change; anything else is an error. -And much like with class decorators we could have extra initializers after they are applied and get the final functions for things like registration. And we again would have the ability to define metadata on the function itself. +RBN: So, how this works, we expect or would anticipate that decorator evaluation order and application order are the same as what specified for class elements. Decorator evaluation is when we evaluate the expressions that are part after the decorator. So if you see `@foo`, it’s the `foo` part. Whatever that expression is, is evaluated at that time. In order, in document order, essentially left to right, top to bottom. As we expect. Decorator application is the reverse. It preserves the application order it sees for this function application mechanism so we apply them right to left, like you have nested them. Evaluation order and application order differ. We want to make them consistent throughout the language. This is similar to how methods or classes work. 
You replace the thing with something that wraps it using a function. Return and defining it has no change; anything else is an error. And much like with class decorators we could have extra initializers after they are applied and get the final functions for things like registration. And we again would have the ability to define metadata on the function itself. RBN: So a lot of what is here, it might be more than what you normally see for a Stage 1 proposal because it goes into the API shape, but this is all designed to – borrows and base from the existing syntax and existing API, the goal to show we are intended if this is adopted to align completely with how decorators are defined today. In that case, just like with any other decorator, the function get the target you’re decorating and the context, which is a kind that we switch on, the name, the object that can be used to attach metadata and the post-wrap, post-decorators initializers like registration. -RBN: And as we said before, just like normal decorator, you can return something that IsCallable, that is the wrapper or the function replacing it or undefined which means no change. -So for most functions, the FunctionExpressions and error functions, this is fairly simple. It’s just taking what we already can do and have already specified for decorators and applying it to these types of functions. -And for the FunctionExpression cases, this is fairly simple: there’s not anything beyond that that we have to consider to be all terribly complicated, but there is a problem with function declarations. And this is something that is a little bit difficult because of hoisting semantics. +RBN: And as we said before, just like normal decorator, you can return something that IsCallable, that is the wrapper or the function replacing it or undefined which means no change. So for most functions, the FunctionExpressions and error functions, this is fairly simple. It’s just taking what we already can do and have already specified for decorators and applying it to these types of functions. And for the FunctionExpression cases, this is fairly simple: there’s not anything beyond that that we have to consider to be all terribly complicated, but there is a problem with function declarations. And this is something that is a little bit difficult because of hoisting semantics. -RBN: Now this is something we have discussed in previous plenary sessions outside of plenary as well, since 2016. When decorators were advanced first, first proposed and Stage 1, we have had numerous discussions about this and a lot of time was spent on trying to figure out does this mean introducing a pre-evaluation step? What would pre-evaluation step mean? How does that affect imports and exports and there’s complexity here that needs to be further explored that we really made progress on since the idea of function decorators was passed on for the MVP portion of the proposal -There are issues with a pre-evaluation step that we have investigated. +RBN: Now this is something we have discussed in previous plenary sessions outside of plenary as well, since 2016. When decorators were advanced first, first proposed and Stage 1, we have had numerous discussions about this and a lot of time was spent on trying to figure out does this mean introducing a pre-evaluation step? What would pre-evaluation step mean? 
How does that affect imports and exports? There’s complexity here that needs to be further explored, and that we really haven’t made progress on since the idea of function decorators was passed on for the MVP portion of the proposal. There are issues with a pre-evaluation step that we have investigated.

RBN: Decorated functions would suddenly become observably initialized one at a time relative to others – this isn’t the case for functions today. That makes it problematic. For pre-evaluation to work, decorators couldn’t reference local variables; they would have to hoist to the top of the block, which might have a const declaration – a constant variable – that we might want to reference from a decorator. This works fine for a class decorator, or a field one, fine even for a function expression, but for a hoisted function declaration, that variable reference would be in TDZ and it would not be apparent why. We would have changed the order of evaluation from the user’s perspective. Is this workable? I don’t really think it is. But I think there are other alternatives. And that’s one of the things we want to explore further in Stage 1. And hoisting is a thing we have talked about in the past. I do have another topic on the agenda to further explore this – some of the direction, further exploration, and getting committee feedback as well – that follows this topic.

@@ -654,114 +520,101 @@ RBN: If you do have topics specifically about function hoisting, I would – unl

RBN: So as I said, there are some options that we have been considering. Again, introducing a pre-evaluation step is one. Another is something that I initially experimented with, way back before decorators were proposed – it was based on one that I had written and handed off, and myself and Eric worked together on the original decorators proposal. But my earliest implementation, as I was fleshing out the design, tried to do something that hoists dynamically, on demand. There are problems. But it is something to consider.

-RBN: The third bullet I have here is an option which is that you don’t hoist decorated functions. This is the champion’s preferred preference. There’s a reason for that, it fixes a lot of issues. That decorates on functions might have a one-time cost of possibly having to move that code. But there is – that cost of having to move mode is potentially there even if you were to do a pre-validation step due to decorator order observable. So it’s a cost you might have to pay, even if we allowed hoisting. In general, and the conversations I have had over several years now in this proposal – as I have been working putting this together, the best solution is not to hoist. But again we will go more on that, shortly.
Also, we could decide to not allow decorators on function declarations. We can make those metaprogram capabilities reasonable across the language. RBN: We really don’t think that’s a viable solution. It may be the one we have to pick, if that’s the case. But I think as we go through more investigation into this, it’s – I think it’s a fairly easy thing to rule out. Another thing I have considered over the years is a opt-in marker. I can get to more of that later. My preference is do not hoist. And what means is that non-decorated functions continue to do the thing they have always done, hoisted. Decorating function declaration would basically end that value hoisting behavior. Still hoisting the variable declaration like we do with any function or var. The actual value that that is initialized to is not yet defined. That would mean that code authorization might need to reorder the code, and again that could be something that you actually need to do anyways, even if we did hoisting because of the observability of evaluation in those cases. That also means that this is something that you – you’re paying the cost for it, when you use it, you don’t have to pay the cost for existing code. You don’t have to pay the cost for – somewhere else in the code, that just happens to be in the same lexical scope and that causes other confusing things to happen that aren’t expected. We think this is the clearest and most reliable mechanism. You don’t use second-guess evaluation order and it means all the decorator applications would align with other usages. This in general, the champion, hopes to take this approach. Everything is under discussion as we continue to do this investigation. RBN: So we do believe this hoist issue can be addressed, and we believe it’s warranted. This is something we have discussed, on and off over the years since the initial proposal for decorators that’s not resolved and it’s worth getting into, finding out is this feasible? Something we can do? I would like to have a longer, more focussed discussion on the hoisting issues that’s been scheduled later after this topic. So we can kind of get some perspectives and hear some of the concerns that folks have or might have raised in the past, or raised off-line. So we have a direction to go if and when this advances to Stage 1. Given that and couching this in the exact mechanics of hoisting are something I want to get into later. I would be perfectly happy to take discussion topics to the function decorators part of this now is if anyone has anything they want to add to the queue. -DE: There’s a – there’s a lot of discussion on matrix about the fundamental motivation for this proposal. And I think we should probably discuss that before getting to the hoisting discussion. In my opinion, this is a lot clear and very different from how a functional application feels. And will encourage the construction and use of libraries that do these sorts of, you know, function application over – over function declarations. Anybody else want to talk about this? +DE: There’s a – there’s a lot of discussion on matrix about the fundamental motivation for this proposal. And I think we should probably discuss that before getting to the hoisting discussion. In my opinion, this is a lot clear and very different from how a functional application feels. And will encourage the construction and use of libraries that do these sorts of, you know, function application over – over function declarations. Anybody else want to talk about this? -KG: Yes. 
I would like to hear more about the motivation. The slides were not - it did not feel like there was motivation other than "I want a new way of calling functions" and I don’t think we need to have a new way of calling functions that only takes functions arguments. And most of the examples on the slides, like I was doing the rewrite in my head to what that looks like if you do it as existing JavaScript, and it's almost indistinguishable. Maybe I am missing something, but . . . +KG: Yes. I would like to hear more about the motivation. The slides were not - it did not feel like there was motivation other than "I want a new way of calling functions" and I don’t think we need to have a new way of calling functions that only takes functions arguments. And most of the examples on the slides, like I was doing the rewrite in my head to what that looks like if you do it as existing JavaScript, and it's almost indistinguishable. Maybe I am missing something, but . . . RBN: Again, one of the – the motivations for why decorators in general apply to functions as much as they apply to classes, class methods and fields. The same capabilities and expressivity you want to have there are the same types of expressivities to have for functions. As to specifically if we already have this for classes, and class constructors and methods, why do we extend to have it on functions? And a lot of that is described in the slide. The specific motivations that are on top of the existing decorator motivations are primarily around consistency. You could take a decorator that you would write specifically to use on a function, and write one that is specifically – specifically used on method and write a version that is specifically for use on a function, that works conveniently. But if I wanted to take a decorator, that I write to be used on methodthat has – a I want to reuse the compatibility on a function, it’s not so simple. It’s not just I am going to call it with a function. It’s now, the person who wrote the decorator now has to be defensive around am I being beaten with one argument or 2. If there are other things we do in the future, we also would need some type of additional context to know, is the one argument call for a function or for something else. We don’t know yet. -RBN: So trying to write a decorator that is defensive to be called in both cases is not easy to do and it’s easy to forget how things work, easy to piece the pieces. Having a consistent API design is very efficient and helpful for end-users writing them. There are things that you cannot easily do with function pipelining that registration case, which is heavily used in the examples I showed relating to AWS Chalice. These are function registration. They do something after the fact and they are simple cases that show one decorator. You could have multiple. If you have a decorator that does registration that has to be held at the end of a decorator call, you have to be careful about how you document it and explanatory about what this means and have to be concerned about if someone later on comes and adds another decorating, not looking at the documentation, but for the thing decorator they are applying, so there’s a convenience with the function call – function lending approach, `F o G o H` of the thing I am doing as a the function to make sure I don’t put the G in the wrong place. It’s one of the values of add initialization that you get with decorators you don’t get with regular function calls. 
The other thing too is that – +RBN: So trying to write a decorator that is defensive to be called in both cases is not easy to do and it’s easy to forget how things work, easy to piece the pieces. Having a consistent API design is very efficient and helpful for end-users writing them. There are things that you cannot easily do with function pipelining that registration case, which is heavily used in the examples I showed relating to AWS Chalice. These are function registration. They do something after the fact and they are simple cases that show one decorator. You could have multiple. If you have a decorator that does registration that has to be held at the end of a decorator call, you have to be careful about how you document it and explanatory about what this means and have to be concerned about if someone later on comes and adds another decorating, not looking at the documentation, but for the thing decorator they are applying, so there’s a convenience with the function call – function lending approach, `F o G o H` of the thing I am doing as a the function to make sure I don’t put the G in the wrong place. It’s one of the values of add initialization that you get with decorators you don’t get with regular function calls. The other thing too is that – -KG: Hang on. You have listed three things already. I am not going to be able to remember all of them and my responses to all of them. So maybe we can talk about them one at a time. +KG: Hang on. You have listed three things already. I am not going to be able to remember all of them and my responses to all of them. So maybe we can talk about them one at a time. -RBN: All right. +RBN: All right. KG: So let’s see. The first thing you said was that this is just a generalization of existing decorators. But that’s not true. There’s no convenient way to do class element decorators or the things they do. Because you can't declare a method that is the result of a function call. But you can have a function, a binding, that holds the result of a function call and that was the only reason I was okay with class element decorators at all. They are adding expressivity and therefore generally useful. KG: With regards to the registration case, I grant this is something that is not 100% straightforward to do by function application. Because with function application you have to care about the order you’re doing things in and here you don’t. And so maybe that’s the case we think is worth adding new syntax to solve that particular problem. If so, it should be the focus of what we are doing and not like this is an incidental benefit. When you showed the AWS Chalice example, I am not familiar with AWS Chalice and it was not obvious it doing something other than straightforward application. Now, it sounds like the only benefit there is just that if you have a thing which needs to be called last, you have to call it last and document that. And that doesn’t really sound like something that is worth adding new syntax to solve. But maybe that’s something to talk about. -KG: And there was a third one, I have forgotten sorry. I have to check the notes. +KG: And there was a third one, I have forgotten sorry. I have to check the notes. -RBN: And again, one of the things that I touched on that you didn’t there was the ease of reuse of existing decorators. As people are writing the decorators, being able to apply them in multiple cases without having to significantly overcomplicate the – the loading behavior. 
+RBN: And again, one of the things that I touched on that you didn’t there was the ease of reuse of existing decorators. As people are writing the decorators, being able to apply them in multiple cases without having to significantly overcomplicate the – the loading behavior. -KG: That’s right. So the – on that point, there’s two relevant things to say there: the first is that it’s not obvious to me that this is something that really comes up a lot. Like, decorating a class field is a super different operation than decorating a function. And like I would expect to be treated differently most of the time. And it’s not like it’s hard to extract a common thing that can be used in that case, if that’s something that comes up. You also mentioned the possibility of more things to decorate in the future and that this would be consistent with the other things. And granted, in a world in which you can decorate many other things, it could be weird not to decorate functions, but I don’t think that’s a motivation for decorating functions. If we had bunches of other stuff, and then we decorate functions, sure – but that is not a motivation to decorate functions in the absence of decorating a bunch of other things. I am quite skeptical of the other things that we have discussed; of decorating parameters, especially. So I don’t think that can be an argument for doing function decorators. +KG: That’s right. So the – on that point, there’s two relevant things to say there: the first is that it’s not obvious to me that this is something that really comes up a lot. Like, decorating a class field is a super different operation than decorating a function. And like I would expect to be treated differently most of the time. And it’s not like it’s hard to extract a common thing that can be used in that case, if that’s something that comes up. You also mentioned the possibility of more things to decorate in the future and that this would be consistent with the other things. And granted, in a world in which you can decorate many other things, it could be weird not to decorate functions, but I don’t think that’s a motivation for decorating functions. If we had bunches of other stuff, and then we decorate functions, sure – but that is not a motivation to decorate functions in the absence of decorating a bunch of other things. I am quite skeptical of the other things that we have discussed; of decorating parameters, especially. So I don’t think that can be an argument for doing function decorators. -RBN: So regarding . . . this is something I hope to address more, when we talk about parameter decorators in the future, but we have experience in TypeScripts with folks that love this feature that make heavy use, large applications use this. So we do have experience with shipping this feature that shows that there is definitely a use case for it, a need for it and it has a very valid place. -And this proposal itself is going to be in the second half discussing the [voe] – many other things to decorate. And we want – if we do go down this path, and I really do think we should, we should be considering classes and functions only differ one is called . . . well, both are constructible things. The main difference is the body of the syntax in these cases. But there is no reason if I can decorate a class that I could not theoretically apply the same decorator to a function and that is designed to look like a class and the same type as the output. That reuse is extremely important. I think there are broader things to consider. 
Will is a limit. I don’t think we should decorate variables or any random expression. But I think that many declarative things have value. Specifically functions because we already have multiple types of functions you can decorate, you can decorate a constructor, a method, a setter, we have a lot of the cases in the language, and having this consistency across the things to reuse the functionality, I think is extremely important. Especially logging, tracing, those are things that make no sense to do functions that you can use methods.
+RBN: So regarding . . . this is something I hope to address more when we talk about parameter decorators in the future, but we have experience in TypeScript with folks that love this feature and make heavy use of it; large applications use this. So we do have experience with shipping this feature that shows that there is definitely a use case for it, a need for it, and it has a very valid place. And this proposal itself is going, in the second half, to discuss the many other things to decorate. And we want – if we do go down this path, and I really do think we should, we should consider that classes and functions only differ in how one is called . . . well, both are constructible things. The main difference is the body of the syntax in these cases. But there is no reason, if I can decorate a class, that I could not theoretically apply the same decorator to a function that is designed to look like a class and produces the same type as the output. That reuse is extremely important. I think there are broader things to consider. There is a limit: I don’t think we should decorate variables or any random expression. But I think that many declarative things have value. Specifically functions, because we already have multiple types of functions you can decorate: you can decorate a constructor, a method, a setter; we have a lot of those cases in the language, and having this consistency across them, so you can reuse the functionality, I think is extremely important. Especially logging and tracing: those are things where it makes no sense that you can use them on methods but cannot use them on functions.

-SYG: Okay. So before I say my personal take here, my understanding of the two sides here is that it basically boils down to vibes. So given it is not new expressivity, one side basically likes this kind of declarative metaprogramming and one side does not. Given it might be helpful to do the old day exercise, if people recall, when – what is that thing called?
-Comprehension expressions were killed, the exercise was, a white board, and you would write the comprehension expression on one side, and then the nested for loop on the other side and then that was enough of a visceral thing, to get a more honest, signal from people instead of just thinking in your head, I like decorates type of programming. Or I don’t like it. It might help for Ron or somebody else to provide side by side examples. In this case, for the function case, because it’s not new expressivity, because in the matrix there were cases where someone said, here is how you do it with functions and some people said that’s terrible. Others said that is fine. I don’t know how to weigh that.
+SYG: Okay. So before I say my personal take here, my understanding of the two sides is that it basically boils down to vibes. Given it is not new expressivity, one side basically likes this kind of declarative metaprogramming and one side does not. Given that, it might be helpful to do the old exercise, if people recall, from back when – what is that thing called? – comprehension expressions were killed. The exercise was: at a whiteboard, you would write the comprehension expression on one side and the nested for loop on the other side, and that was enough of a visceral thing to get a more honest signal from people, instead of just thinking in your head that you like decorator-style programming or you don’t. It might help for Ron or somebody else to provide side-by-side examples for the function case, because it’s not new expressivity: in the matrix there were cases where someone said, here is how you do it with functions, and some people said that’s terrible and others said that is fine, and I don’t know how to weigh that.
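In the spirit of that whiteboard exercise, here is one possible side-by-side for the function case, added as a sketch (it is not from the slides). `deprecated` is a hypothetical wrapping decorator, and the decorated-declaration form is the proposed syntax, not something that runs today, so it is shown in a comment:

```js
// Hypothetical wrapping decorator; it also works as a plain higher-order function.
function deprecated(fn, context) {
  const name = String(context?.name ?? fn.name);
  return function (...args) {
    console.warn(`${name} is deprecated`);
    return fn.apply(this, args);
  };
}

// Proposed form (this syntax does not exist yet):
//
//   @deprecated
//   function parseConfig(text) {
//     return JSON.parse(text);
//   }

// What you write today, with a function expression and ordinary application:
const parseConfig = deprecated(function parseConfig(text) {
  return JSON.parse(text);
});
```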
-SYG: So secondly, I agree with Kevin on the skepticism for parameter decorators. I am not completely convinced they are a good idea. And I would – I am more neutral on function decorators. But even if function decorators were to advance, the last bullet point – I will like there to be some explicit recognition that that does not in any way open the door to parameter decorators.
+SYG: So secondly, I agree with Kevin on the skepticism about parameter decorators. I am not completely convinced they are a good idea. And I would – I am more neutral on function decorators. But even if function decorators were to advance – the last bullet point – I would like there to be some explicit recognition that that does not in any way open the door to parameter decorators.

-RBN: I’m sorry, I am trying to clarify. Are you stating that you would – that this proposal should advance – if this proposal were to advance, it should not care about parameter decorators or something that you would block?
+RBN: I’m sorry, I am trying to clarify. Are you stating that if this proposal were to advance, it should not care about parameter decorators, or is that something you would block?

-SYG: More the latter, but I don’t want to use such strong language right now. I am unconvinced we should have parameter deck rates. Parameter decorates, if that changes, that speculative changes how you think, we should talk about it.
+SYG: More the latter, but I don’t want to use such strong language right now. I am unconvinced we should have parameter decorators. If that changes – if that, speculatively, changes how you think about this – we should talk about it.

-RBN: If possible, I would like to chat with you more about parameter decorators off-line. I will say that the way that TypeScript implements parameter decorators for legacy decorator support, you could declare them on parameters, they were essentially transformed into a function decorator that had additional condition text about the parameter there. If we already have method decorators, they are a convenient syntax for doing more things. I can go into more in depth conversation off-line.
+RBN: If possible, I would like to chat with you more about parameter decorators off-line. I will say that in the way TypeScript implements parameter decorators for legacy decorator support, you could declare them on parameters and they were essentially transformed into a function decorator that had additional context about the parameter. If we already have method decorators, they are a convenient syntax for doing more things. I can go into a more in-depth conversation off-line.

-SYG: Sounds good.
+SYG: Sounds good.

GCL: So first, let me qualify this by saying I am speaking purely about function decorators.
I don’t really – I am neutral on all the other decorators. But coming from the perspective of other languages, I started working at a company that uses a lot of python a few years ago and was new to the decorator pattern at that point. And since then, I think it’s just like it’s not expressing anything that is impossible, which I think has been a point of contention here. But the way it allows you to structure your code is just, frankly, different from anything you can do. In the language. Not by the literal expressivity of what values you can pass around the VM, but in the – like, how you are expressing your code to humans. And I think that is a very important thing that we should not discount. -GCL: Simply by this argument of the – I don’t know, exact semantics to achieve this behavior, even though it’s completely readably different. +GCL: Simply by this argument of the – I don’t know, exact semantics to achieve this behavior, even though it’s completely readably different. -KG: Just responding somewhat narrowly . . . I think python decorators make sense because it doesn’t have function expressions and if it did, decorators would not make sense in python. My experience of decorators in python is they're not meaningfully different from function calls. They just are functions. And the structure of the code is really not that different, except that you can't have function expressions in python. So decorators are a workaround. They’re not anything other than that. +KG: Just responding somewhat narrowly . . . I think python decorators make sense because it doesn’t have function expressions and if it did, decorators would not make sense in python. My experience of decorators in python is they're not meaningfully different from function calls. They just are functions. And the structure of the code is really not that different, except that you can't have function expressions in python. So decorators are a workaround. They’re not anything other than that. GCL: You are correct that they are mechanically function calls. But they are like in this case in front of you reading code, they are different. Different ways of organizing the way you are structuring your code. -RBN: I will also agree that the case of – function method, the way you read and approach them is preferable to the pipelining case because in the pipelining case, especially if you have arguments that you have to supply before or after, or you need to get to the actual function definition, depending on how many levels deep of decoration you’re performing, the pipelining case is only convenient at one level. If I need to apply two decorators or three, the pipelining case quickly becomes much harder to read. The decorating case is much more convenient and I wish I had a slide that illustrated that as well. - +RBN: I will also agree that the case of – function method, the way you read and approach them is preferable to the pipelining case because in the pipelining case, especially if you have arguments that you have to supply before or after, or you need to get to the actual function definition, depending on how many levels deep of decoration you’re performing, the pipelining case is only convenient at one level. If I need to apply two decorators or three, the pipelining case quickly becomes much harder to read. The decorating case is much more convenient and I wish I had a slide that illustrated that as well. + KG: Yeah. This is another sort of narrow response. 
Ron, you mentioned that people have adopted decorators in VSCode or found they liked them. I am not denying that there are people that like decorators. Every feature of every language has proponents. But we should not add features purely on the basis that there exist people who like that feature in other languages. That’s – that would be far too much. So there needs to be something more than that. -RBN: In a way you’re presupposing that parameter decorators a new thing in TypeScript. But they were designed – I don’t know where things ended up when we first proposed it, but this is something proposed for TypeScript – JavaScript not nor TypeScript, but they adopted it because there was a need, we had feedback from folks like the angular team who had an interest in this. So we pushed for that within TypeScript but it was based on a design initially included in the original decorators proposal. This isn’t new. And we have been making changes to kind of unify how decorators work in the TypeScript case, use parameters decorators, many of these people use that in a language that is essentially JavaScript with some extra – with a few extra bells and whistles. But it ends up being an adoption blocker. The folks that use parameter decorators can’t. So ideally, we would like to evolve. It was part of the initial design, but the focus we had was on specific need for classes and parameters, so function decorators got pushed back a bit and by the time the proposal for decorators was getting to Stage 2, function decorators were not part of the design. We wanted to wait until it shook out. Things that are related to decorators, need to hold until they reach Stage 3. We can get implementation experience. A lot of the proposals have been waiting in the wings for five or six years. Till the point we could start getting that broader adoption and bringing these in. Again, these aren’t – this is a feature that another language has that I would like to bring this. This is a feature that is literally designed for JavaScript from the get-go. +RBN: In a way you’re presupposing that parameter decorators a new thing in TypeScript. But they were designed – I don’t know where things ended up when we first proposed it, but this is something proposed for TypeScript – JavaScript not nor TypeScript, but they adopted it because there was a need, we had feedback from folks like the angular team who had an interest in this. So we pushed for that within TypeScript but it was based on a design initially included in the original decorators proposal. This isn’t new. And we have been making changes to kind of unify how decorators work in the TypeScript case, use parameters decorators, many of these people use that in a language that is essentially JavaScript with some extra – with a few extra bells and whistles. But it ends up being an adoption blocker. The folks that use parameter decorators can’t. So ideally, we would like to evolve. It was part of the initial design, but the focus we had was on specific need for classes and parameters, so function decorators got pushed back a bit and by the time the proposal for decorators was getting to Stage 2, function decorators were not part of the design. We wanted to wait until it shook out. Things that are related to decorators, need to hold until they reach Stage 3. We can get implementation experience. A lot of the proposals have been waiting in the wings for five or six years. Till the point we could start getting that broader adoption and bringing these in. 
Again, these aren’t – this is a feature that another language has that I would like to bring this. This is a feature that is literally designed for JavaScript from the get-go. -KG: I mean, you have designed it because people like the feature in other languages. -Yes. I recognize that this has been a long project, but I’ve been clear in my opposition, as long as we have been discussing it. Method decorators had a reasonable use case. Parameter decorators I have always opposed. And function decorators, I have been skeptical of. I recognize this is a long discussion, but it’s not been – yes, people use this feature in TS and other languages, but there are lots in other languages that people use and like features of; sometimes, anyway. Decorators especially I would not describe as universally beloved in every language or by every user. So I don’t think that can be a reason on its own to add something to the language. It has to be argued on its own merits. +KG: I mean, you have designed it because people like the feature in other languages. Yes. I recognize that this has been a long project, but I’ve been clear in my opposition, as long as we have been discussing it. Method decorators had a reasonable use case. Parameter decorators I have always opposed. And function decorators, I have been skeptical of. I recognize this is a long discussion, but it’s not been – yes, people use this feature in TS and other languages, but there are lots in other languages that people use and like features of; sometimes, anyway. Decorators especially I would not describe as universally beloved in every language or by every user. So I don’t think that can be a reason on its own to add something to the language. It has to be argued on its own merits. -RBN: I will say on the function decorators case that even once decorators reached Stage 2 in multiple discussions, that we had, in the various decorators calls over the years as things were getting to the point to get to Stage 3 we had a number of individuals ask about function decoration. The only solution with the proposal was you write as a static method and tear off the method. It’s not really a convenient way to do that. And this is – I have seen this discussed in I matrix and Mathieu’s as well, one of the main goal is improving the ergonomics. I have a use case . . . that ends up unergonomic to do from an I am – implementing those decorators. I am hoping to dig into this more and provide feedback and more of the discussions. +RBN: I will say on the function decorators case that even once decorators reached Stage 2 in multiple discussions, that we had, in the various decorators calls over the years as things were getting to the point to get to Stage 3 we had a number of individuals ask about function decoration. The only solution with the proposal was you write as a static method and tear off the method. It’s not really a convenient way to do that. And this is – I have seen this discussed in I matrix and Mathieu’s as well, one of the main goal is improving the ergonomics. I have a use case . . . that ends up unergonomic to do from an I am – implementing those decorators. I am hoping to dig into this more and provide feedback and more of the discussions. -MAH: Yeah. I am not convinced that decorators would be really – introducing new syntax, because the syntax costs are paid when we add them for classes. 
I view this as making the language more consistent, or authors being able to use the decorators they use on a class method, and being able to use it on an object literal function. -This is actually a use case that I believe – we would have to be able to similarly apply the same decorator on an ObjectLiteral function, ObjectLiteral the same way you use it on method. +MAH: Yeah. I am not convinced that decorators would be really – introducing new syntax, because the syntax costs are paid when we add them for classes. I view this as making the language more consistent, or authors being able to use the decorators they use on a class method, and being able to use it on an object literal function. This is actually a use case that I believe – we would have to be able to similarly apply the same decorator on an ObjectLiteral function, ObjectLiteral the same way you use it on method. -MAH: If you can apply it there, why not on freestanding functions as well? -I think in my opinion, this is [inaudible] exploration for Stage 1. As extending the already existing decorators to other places where authors feel natural to use them. +MAH: If you can apply it there, why not on freestanding functions as well? I think in my opinion, this is [inaudible] exploration for Stage 1. As extending the already existing decorators to other places where authors feel natural to use them. -LCA: Yeah. I also don’t want to – like have a specific preference here for parameter decorators or not, but I totally agree with Mathieu that like Kevin’s point about we don’t need to add any feature that anyone likes in any language, I totally agree with this. We already have the syntax, for classes and methods, it only makes sense we add the syntax to functions. It is – like this comes up that people ask us, why can’t I decorate a class, a method, a function? And RBN has shown that there’s real use cases to do this. There is affordability, use cases of – yeah. Decorator reusability. And like if we were in a situation where we had to decide whether to do decorators at all or not, I think Kevin your point makes sense. Yes, we don’t need to add every feature. But we already have this feature. It’s a matter of extending the feature to functions, to make it consistent across different pieces of the language. +LCA: Yeah. I also don’t want to – like have a specific preference here for parameter decorators or not, but I totally agree with Mathieu that like Kevin’s point about we don’t need to add any feature that anyone likes in any language, I totally agree with this. We already have the syntax, for classes and methods, it only makes sense we add the syntax to functions. It is – like this comes up that people ask us, why can’t I decorate a class, a method, a function? And RBN has shown that there’s real use cases to do this. There is affordability, use cases of – yeah. Decorator reusability. And like if we were in a situation where we had to decide whether to do decorators at all or not, I think Kevin your point makes sense. Yes, we don’t need to add every feature. But we already have this feature. It’s a matter of extending the feature to functions, to make it consistent across different pieces of the language. KG: I really don’t think that class element decorators are that similar to function declarations. You are not – class element decorators can affect placement and the relationship of the thing to the class. They are not just a transformation. -LCA: Yeah. But they can also just log out whenever the function is called. 
Or do things like that. Right? And these are case that are totally valid on regular functions. And people do this with class methods. And it would be really nice if they can use the same – the same bit of code to add tracing or to add logging or whatever to class methods, but if they could use that for functions. Like there’s – yeah. This is not any more complex for users to read, they know about it because of classes. +LCA: Yeah. But they can also just log out whenever the function is called. Or do things like that. Right? And these are case that are totally valid on regular functions. And people do this with class methods. And it would be really nice if they can use the same – the same bit of code to add tracing or to add logging or whatever to class methods, but if they could use that for functions. Like there’s – yeah. This is not any more complex for users to read, they know about it because of classes. -KG: Other than the hoisting thing. +KG: Other than the hoisting thing. -RBN: To respect the remaining timebox and I don’t have much time left. I have more slides to get to. I want to object to literal syntax and come back to this more. -Since I think the object literal syntax also has some bearing on this discussion. +RBN: To respect the remaining timebox and I don’t have much time left. I have more slides to get to. I want to object to literal syntax and come back to this more. Since I think the object literal syntax also has some bearing on this discussion. RBN: I don’t have that many slides for object literals. So we can come back to some of the topics in a moment. The other half of the proposal was to talk about the other things that are also things to decorate. Again this is all things I would like to explore in a Stage 1 proposal. So having the same types of benefit that for class field decorators and getters, et cetera, this would be nice to extend to object literals. We got requests over the years, why don’t we support object-methods? This was considered in and out of the proposal at various times for the normal decorators as well. -RBN: We want to investigate supporting decorators on object literal methods, including getters and setters. Property assignments and shorthand property assignments they have – they are essentially the same as a singleton instance class field. They can be evaluated in a similar way to have some of the same benefits there. Possible investigating using accessors avoid the – anyone doing any type of like – define property shenanigans to turn a property into a getter setter instead of the same level of syntactic transformation. Not on the object literal itself, with the exception of fields. Decorators today generally apply to funky things. The accessors are added because there’s a get set. But producing a pair of functions and fields are still function like because even though it’s not effectively refiled, the function initialize – other, the field initializer is treated as a function in the specification as far as span semantics and everything else works. We think there’s a value there. But not in the object literal itself. We think that might be a step too far. Getting closer to the put a decorator in any value and I think that’s too far afield. If we allowed this, we would allow it to spread too. There’s other things that happen there and it’s not – wanting to bring in something else’s property, to not mutate that thing. That’s another area to look at. This is roadway iterating before evaluation and application order. 
That we want these things to be consistent across the language. Wherever they are used. So methods getters and setters would behave the same as class methods do. Property assignments would behave the way class fields do. We reuse the same capabilities on a property assignment. Allow accessor, the same thing for accessor fields and classes and the same return semantics in those cases. Extra initializers are run after the decorators are applied, and than you can reuse the same functionality to run on whatever the final – what the object literal looks like at the end. And get their own metadata referenced as necessary. These are all things to investigate the viability of. And as we talked about with functions, they would have again the same type of shapes so we use the same programming model around how we write decorators in those cases. There are a couple of questions we want to investigate here: like do we need to indicate specific differences between object-methods and gets versus regular methods and getters. [sop] other distinction? Metadata, where does it get placed for object literals versus where it’s placed for classes? -Things like private and static make any sense or if they do how and that relates to the overlap of methods, getters and fields and how these work. Those are things we want to address there. - -RBN: And since we are almost out of time, I will skip the object literals-specific discussion and go back to a general purpose discussion here to say, we are looking for adoption of Stage 1 to continue to explore this. I am hoping that at Stage 1 we will have the opportunity to have more of these in-depth discussions about viability of the proposal, about the specific areas we think have some contention and need consensus and resolution on and have a chance to really dig in and explore of the features with anyone who has concerns or specific needs to have addressed. I am hoping we get Stage 1 to keep the investigation going. I will open up to the queue. I don’t know if it’s possible to request a short extension for the topics remaining. +RBN: We want to investigate supporting decorators on object literal methods, including getters and setters. Property assignments and shorthand property assignments they have – they are essentially the same as a singleton instance class field. They can be evaluated in a similar way to have some of the same benefits there. Possible investigating using accessors avoid the – anyone doing any type of like – define property shenanigans to turn a property into a getter setter instead of the same level of syntactic transformation. Not on the object literal itself, with the exception of fields. Decorators today generally apply to funky things. The accessors are added because there’s a get set. But producing a pair of functions and fields are still function like because even though it’s not effectively refiled, the function initialize – other, the field initializer is treated as a function in the specification as far as span semantics and everything else works. We think there’s a value there. But not in the object literal itself. We think that might be a step too far. Getting closer to the put a decorator in any value and I think that’s too far afield. If we allowed this, we would allow it to spread too. There’s other things that happen there and it’s not – wanting to bring in something else’s property, to not mutate that thing. That’s another area to look at. This is roadway iterating before evaluation and application order. 
That we want these things to be consistent across the language. Wherever they are used. So methods getters and setters would behave the same as class methods do. Property assignments would behave the way class fields do. We reuse the same capabilities on a property assignment. Allow accessor, the same thing for accessor fields and classes and the same return semantics in those cases. Extra initializers are run after the decorators are applied, and than you can reuse the same functionality to run on whatever the final – what the object literal looks like at the end. And get their own metadata referenced as necessary. These are all things to investigate the viability of. And as we talked about with functions, they would have again the same type of shapes so we use the same programming model around how we write decorators in those cases. There are a couple of questions we want to investigate here: like do we need to indicate specific differences between object-methods and gets versus regular methods and getters. [sop] other distinction? Metadata, where does it get placed for object literals versus where it’s placed for classes? Things like private and static make any sense or if they do how and that relates to the overlap of methods, getters and fields and how these work. Those are things we want to address there. -MAH: Yeah. So I am actually interested in object literal methods and being able to decorate them. One thing is not possible with – even if you had a property that had a literal function, if you were trying to wrap using a function wrapper, you couldn’t get the property at that point. So with decorators, from what I understand, get it the context of did it name. There are a lot of things for object literal, it seems deck rares would be more similar to classes there and that’s what I am looking for. +RBN: And since we are almost out of time, I will skip the object literals-specific discussion and go back to a general purpose discussion here to say, we are looking for adoption of Stage 1 to continue to explore this. I am hoping that at Stage 1 we will have the opportunity to have more of these in-depth discussions about viability of the proposal, about the specific areas we think have some contention and need consensus and resolution on and have a chance to really dig in and explore of the features with anyone who has concerns or specific needs to have addressed. I am hoping we get Stage 1 to keep the investigation going. I will open up to the queue. I don’t know if it’s possible to request a short extension for the topics remaining. -SYG:So, I said this in Matrix as well, just for the record, RBN mentioned angular in passing for some of the use cases. And talking with the Angular folks internally at Google, what I have heard is that there are at least Some contingent of angular people that can sit at the current state of affairs +MAH: Yeah. So I am actually interested in object literal methods and being able to decorate them. One thing is not possible with – even if you had a property that had a literal function, if you were trying to wrap using a function wrapper, you couldn’t get the property at that point. So with decorators, from what I understand, get it the context of did it name. There are a lot of things for object literal, it seems deck rares would be more similar to classes there and that’s what I am looking for. +SYG:So, I said this in Matrix as well, just for the record, RBN mentioned angular in passing for some of the use cases. 
And talking with the Angular folks internally at Google, what I have heard is that there are at least Some contingent of angular people that can sit at the current state of affairs -SYG:With its decorators to be full of foot guns and unwieldy, and they have regrets there. They told me the at inject decorator is on the way out. And I’m not aware that there are plans for angular to move -Off of it’s dependence of nonstandard decorators. And I say that as, as – as a point that Angular is not likely to be a motivating customer. Not saying there are not others, but I don’t think angular is one in this case. +SYG:With its decorators to be full of foot guns and unwieldy, and they have regrets there. They told me the at inject decorator is on the way out. And I’m not aware that there are plans for angular to move Off of it’s dependence of nonstandard decorators. And I say that as, as – as a point that Angular is not likely to be a motivating customer. Not saying there are not others, but I don’t think angular is one in this case. -RBN: I would like to talk about that a little bit, I mentioned Angular. I was not talking to them specifically as a motivator, but they were the initial motivation for the original decorators proposal and what we discussed. We had numerous discussions about angular decorators over the years. The one distinction I recall was that the big push for decorators both on the typeScript side and being proposed in plenary was around something that allowed you to do met programming, and allow you to have runtime impact on behavior. At the time angular primary focus was on annotations, just things that would effect compile time behavior. And I believe most of angular current use of decorators is something get transported away at compile time. They don’t generally have runtime dependence on runtime decorators I do agree that angular is not a particular motivating case right now, but I also think the way that angular chose to use decorators within their framework and applied it really didn’t speak to the benefits of decorators to begin with. And like, the idea around Metadata and how that works was initially informed by some of those early -requests for how do we do things like attach annotations when they didn’t want to have or didn’t, their use cases weren’t concerned with full metaprogramming capabilities. But full metaprogramming capabilities is where we are with decorators are powerful and flexible. Without angular as a motivating use case, -There are a significant number of existing use cases in the ecosystem today that are using TypeScript decorators or using Babel decorators and using decorators or transformed with ESbuild and other, and other compilers and generators and, and whatnot, and found them extremely useful. So regardless of the Angular Case, I think there is more than enough motivating cases in the long-term. +RBN: I would like to talk about that a little bit, I mentioned Angular. I was not talking to them specifically as a motivator, but they were the initial motivation for the original decorators proposal and what we discussed. We had numerous discussions about angular decorators over the years. The one distinction I recall was that the big push for decorators both on the typeScript side and being proposed in plenary was around something that allowed you to do met programming, and allow you to have runtime impact on behavior. At the time angular primary focus was on annotations, just things that would effect compile time behavior. 
And I believe most of angular current use of decorators is something get transported away at compile time. They don’t generally have runtime dependence on runtime decorators I do agree that angular is not a particular motivating case right now, but I also think the way that angular chose to use decorators within their framework and applied it really didn’t speak to the benefits of decorators to begin with. And like, the idea around Metadata and how that works was initially informed by some of those early requests for how do we do things like attach annotations when they didn’t want to have or didn’t, their use cases weren’t concerned with full metaprogramming capabilities. But full metaprogramming capabilities is where we are with decorators are powerful and flexible. Without angular as a motivating use case, There are a significant number of existing use cases in the ecosystem today that are using TypeScript decorators or using Babel decorators and using decorators or transformed with ESbuild and other, and other compilers and generators and, and whatnot, and found them extremely useful. So regardless of the Angular Case, I think there is more than enough motivating cases in the long-term. KG: Yeah. As you can probably infer from my reasons for not liking function decorators, I’m much happier about object literal element decorators because they do in fact add considerable expressivity. So I’m fine with going forward to stage one for the proposal as a whole. I’m much less skeptical of this part of it. Since it's not just function application. RBN: I do think it would be unfortunately, even if we don’t get parameter decorators, which I still hope we do it, it would be unfortunately to decorate every other function except a function, I think it would be a huge inconsistency in the language in the long-term and even if we don’t get object literal decorators, it’s still that inconsistency that I think is unfortunate that I would love for us to eventually be able to address in some way. - -KG: I just don’t see that you are decorating functions. You are decorating class elements or object elements, and in addition classes, but that is primarily to interact with class element decorators. The proper understanding of most decorators is that they are of class and object elements, not of functions. Most of the kinds of decorators are of class fields or with this proposal it would be object fields. It’s not of -functions. It is not that it is like this weird kind of function application that only applies when the thing you are applying it to is a function; that would be super weird. It is elements that you are decorating. -RBN: I would argue that I have seen many decorator implementations other classes that just use functioning. They are working with them as if they are Functioning. And callable decorator could theoretically produce a class that works like built in contractors like arrow, you can call it get the right thing. -People are doing this, it does exist. +KG: I just don’t see that you are decorating functions. You are decorating class elements or object elements, and in addition classes, but that is primarily to interact with class element decorators. The proper understanding of most decorators is that they are of class and object elements, not of functions. Most of the kinds of decorators are of class fields or with this proposal it would be object fields. It’s not of functions. 
It is not that it is like this weird kind of function application that only applies when the thing you are applying it to is a function; that would be super weird. It is elements that you are decorating. + +RBN: I would argue that I have seen many decorator implementations other classes that just use functioning. They are working with them as if they are Functioning. And callable decorator could theoretically produce a class that works like built in contractors like arrow, you can call it get the right thing. People are doing this, it does exist. KG: Granted that people are doing this. I’m just making the narrow point about consistency. The consistency that would be present, is that you can decorate class and object Elements. That's a coherent story to tell. @@ -771,12 +624,7 @@ CDA: We have supporters for stage one from LCA and from DE. I also support Stage MF: Okay. If this goes to stage one, given the discussion we’ve just had it seems like the exploration is not on literally these particular solutions to this problem. Is it maybe more appropriate to formulate your problem statement a little bit differently and possibly rename the proposal based on that? So that you can possibly address these concerns in other ways like some of the many solutions we’ve discussed on the matrix throughout this – -RBN: Well, I haven’t been able to follow Matrix. I want to have a chance to look at that. I will look at that, but I still would like to couch this as like my main interest here is around the Decorator meta programmatic space and the capabilities that this provides. I have considered potentially splitting function and object literal element decorators into two separate proposals, but really I think they need to be discussed. We already have a decorator propose. For classes and class elements. We have a proposal for parameter decorators. But I would like to be able to discuss moose other kind of core decoratable things in the context of each other. Because I think it is important to be able to talk about object literal methods versus -decorating a function expression that is attached to a property assignment and what the differences are. Are of those things are part and parcel to the same Package. So without having any context of what is going on in Matrix right now. I feel this is essentially the right scope for this. Because I think that this -Is something that, if it is going to advance. It’s likely to advance together. I know parameter decorator have other concerns that are different from these here, that make them better off as a separate proposal. But that proposal, I expect if this advances, would increase its scope to cover parameters on -Function decorators. Right now the parameters decorator proposal is very specifically scoped the class constructers and class elements. Those are the only places you can have decorator today. In the March -plenary, I brought it forward and reached stage one, one of the field back, I believe from Kevin, there is no way parameters decorators could be advance if there were not function decorators. Therefore, there’s a little bit of a dependency order there, but I also don’t think that they are specifically in lock stem with each other. Because we could possibly have a future where we only – potentially have a future where we have parameter decorators and not this or function decorators and not this. I do believe this is the appropriate -Context. 
Because we’re talking about, I want to introduce net new syntax and new capabilities to your decoration or decorator like for the functions, I don’t think this is in line with the proposal. The proposal is aiming for a broad consistent mechanism for programming across all functionality things basically expanding what we already have. I’m worried about reframing it that would push scope and direction that is not consistent with the rest of the language. I am not sure I’m comfortable changing it. I will sail the URL at the end of my slide says function decorators. But it is not, I will change the name of that before this would be adopted into TC39 so it matches. +RBN: Well, I haven’t been able to follow Matrix. I want to have a chance to look at that. I will look at that, but I still would like to couch this as like my main interest here is around the Decorator meta programmatic space and the capabilities that this provides. I have considered potentially splitting function and object literal element decorators into two separate proposals, but really I think they need to be discussed. We already have a decorator propose. For classes and class elements. We have a proposal for parameter decorators. But I would like to be able to discuss moose other kind of core decoratable things in the context of each other. Because I think it is important to be able to talk about object literal methods versus decorating a function expression that is attached to a property assignment and what the differences are. Are of those things are part and parcel to the same Package. So without having any context of what is going on in Matrix right now. I feel this is essentially the right scope for this. Because I think that this Is something that, if it is going to advance. It’s likely to advance together. I know parameter decorator have other concerns that are different from these here, that make them better off as a separate proposal. But that proposal, I expect if this advances, would increase its scope to cover parameters on Function decorators. Right now the parameters decorator proposal is very specifically scoped the class constructers and class elements. Those are the only places you can have decorator today. In the March plenary, I brought it forward and reached stage one, one of the field back, I believe from Kevin, there is no way parameters decorators could be advance if there were not function decorators. Therefore, there’s a little bit of a dependency order there, but I also don’t think that they are specifically in lock stem with each other. Because we could possibly have a future where we only – potentially have a future where we have parameter decorators and not this or function decorators and not this. I do believe this is the appropriate Context. Because we’re talking about, I want to introduce net new syntax and new capabilities to your decoration or decorator like for the functions, I don’t think this is in line with the proposal. The proposal is aiming for a broad consistent mechanism for programming across all functionality things basically expanding what we already have. I’m worried about reframing it that would push scope and direction that is not consistent with the rest of the language. I am not sure I’m comfortable changing it. I will sail the URL at the end of my slide says function decorators. But it is not, I will change the name of that before this would be adopted into TC39 so it matches. MF: Can I suggest an alternative formulation possibly? @@ -789,27 +637,20 @@ RBN: I’m sorry. 
Which two parts separately? MF: Exploring the space of applicability of decorators to other constructs, which I think you summarized just before I spoke. And the space of the ergonomics around modifying function behavior or augmenting functions. RBN: I’m not sure if I’m willing to, I would be interested in trying to split the proposal at this point in that way, because I still don’t feel like I have enough context what is going on in the matrix. But if we need to make changes to this as part of this further discussion at stage one, I’m not opposed. I'll say also that we have had many discussions in plenary and out of plenary over the years about the applicability of decorators to other things. My understanding and a conversation of others and my own hard line in the sand, as far as I know, what is currently proposed here in the parameters decorator proposal and in the existing class elements proposal is essentially it. I don’t see a value in decorating variables original imports and exports, at least not yet. And many of the cases are things I had gone into presentations and plenary of things that were completely nonviable. I’m not sure a broader discussions on all of the things we can decorate, it might be too broad because we already have ruled out many of those cases and these, basically what we have at stage one and what is I’m proposing here is the net sum of the things that we have considered. - -DE: I agree that this is probably the set of next things we should consider. Since there was the question of how should we define the scope or the problem statement, I think we could consider this as extensions to decorators. I think it would be confusing or counterproductive to make like a number of other proposals that we’re kind of juggling. We have parameters here. We have like function modifying there. We are just thinking about what should we do next in Decorators. Where decorators are about, you know, modifying a particular construct. And a way to do that. So we could call that decorators V2. Or Decorator’s extensions and within that, you know, we heard strong objections to, you know, parameter decorators. So that’s, like, a piece of input. Considering parameters decorator out of scope, we would be concluding no rather than -maintaining a proposal at stage one or something. And – yeah. I think it is definitely worth investigating what should be decoratable next. +DE: I agree that this is probably the set of next things we should consider. Since there was the question of how should we define the scope or the problem statement, I think we could consider this as extensions to decorators. I think it would be confusing or counterproductive to make like a number of other proposals that we’re kind of juggling. We have parameters here. We have like function modifying there. We are just thinking about what should we do next in Decorators. Where decorators are about, you know, modifying a particular construct. And a way to do that. So we could call that decorators V2. Or Decorator’s extensions and within that, you know, we heard strong objections to, you know, parameter decorators. So that’s, like, a piece of input. Considering parameters decorator out of scope, we would be concluding no rather than maintaining a proposal at stage one or something. And – yeah. I think it is definitely worth investigating what should be decoratable next. RBN: I’m a little bit concerned about saying decorator V2. Because it provide as very broad scope. I think having more focused scope is more likely to succeed. 
Big, big broad proposals have that, had this problem about getting weighed down by all of the differences of opinion across all of the different capabilities. And having more focused proposals that pay attention to crosscutting concerns I think are more likely to actually succeed. So we’ll be considering, so as I think, Kevin as on here, he wants them to be explicitly out of scope. And functional decorators are not part of this proposal. I’m not planning to bring them into this proposal. Having both of the proposal as stage one allow us to discuss the crosscutting concerns, if the concerns are met, we are advancing proposals that are consistent. We don’t push something one way that completely prevents something another way, even if we were be able to make convincing arguments for the other’s case. I think it is important for us to achieve the level of consistency across the proposals and again not get bogged down not taking too much. Which is why I was considering breaking this one up into two separate proposals. -KG: Yeah, the things RBN said sound good to me. I’m fine with this going forward as function and object literal element decorators for stage one. And then exploring – it’s not a commitment to that being the precise scope at stage one certainly. As I have expressed, my personal preference would be during stage one we narrow the down to object literal Elements, but certainly I’m not asking for that to be done now. And I think that it does form a sort of coherent body of work with just the things Ron has -presented today. So I’m happy for that to go to stage one. As a bit offeedback for going to stage two, I would like to see more discussion of the motivation; and in particular, I think it would be very helpful to compare what one writes with decorators to what one writes without decorators. There are a lot of people transforming functions today. Someone in Matrix mentioned that they have a web component framework that does registration despite not having decorators and it's been fine. But maybe not is not true for other frameworks. So more discussion of the before and after and how it is better, since it is being primarily proposed as an ergonomics feature. Again that can happen within stage one. +KG: Yeah, the things RBN said sound good to me. I’m fine with this going forward as function and object literal element decorators for stage one. And then exploring – it’s not a commitment to that being the precise scope at stage one certainly. As I have expressed, my personal preference would be during stage one we narrow the down to object literal Elements, but certainly I’m not asking for that to be done now. And I think that it does form a sort of coherent body of work with just the things Ron has presented today. So I’m happy for that to go to stage one. As a bit offeedback for going to stage two, I would like to see more discussion of the motivation; and in particular, I think it would be very helpful to compare what one writes with decorators to what one writes without decorators. There are a lot of people transforming functions today. Someone in Matrix mentioned that they have a web component framework that does registration despite not having decorators and it's been fine. But maybe not is not true for other frameworks. So more discussion of the before and after and how it is better, since it is being primarily proposed as an ergonomics feature. Again that can happen within stage one. 
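As one concrete before/after of the kind KG is asking for, here is a sketch with hypothetical `traced`/`cached` helpers (not taken from the proposal repository); the main difference is where the declaration sits once several wrappers are stacked:

```js
// Plain higher-order functions standing in for wrapping decorators.
const traced = (fn) => (...args) => {
  console.log("call", fn.name, args);
  return fn(...args);
};
const cached = (fn) => {
  const memo = new Map();
  return (arg) => {
    if (!memo.has(arg)) memo.set(arg, fn(arg));
    return memo.get(arg);
  };
};

// Today: inside-out application, with the declaration buried at the innermost position.
const fetchUser = traced(cached(function fetchUser(id) {
  return { id };
}));

// Proposed (not valid syntax today); reads top-down and leaves the declaration intact:
//
//   @traced
//   @cached
//   function fetchUser(id) {
//     return { id };
//   }
```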
-RBN: I can reach out, there’s a number of groups that I think use thisFunction application mechanism toed that, that already use this today in shipping code that would be able to provide feedback on their preference around Decorators and how they would find them useful. I will reach out to some of the -Groups. Material UI is one of them, react has things they can do them in their Code base, but cannot use decorators for higher function components. A lot of Cases where it looks like people really want to use it, but haven’t been able to, I can reach out and get more feedback there as well. +RBN: I can reach out, there’s a number of groups that I think use thisFunction application mechanism toed that, that already use this today in shipping code that would be able to provide feedback on their preference around Decorators and how they would find them useful. I will reach out to some of the Groups. Material UI is one of them, react has things they can do them in their Code base, but cannot use decorators for higher function components. A lot of Cases where it looks like people really want to use it, but haven’t been able to, I can reach out and get more feedback there as well. RBN: I would like to add one more thing to Michel’s point. I think this is the right scope for the proposal, but as we discuss stage one, if we need to broaden or decrease the proposal or split the proposal before this can ever get to stage two, that could be a perfect time to discuss the scope, right now, my primarily focus is on the things presented here. I’m hoping that is enough to at least get us to stage one and then, again, have these discussions as we continue along the process. -CDA: Okay. It seems like your call for consensus was ages ago. So I’m going to ask everyone again for explicit support to advance stage one. We have a thumbs up from LCA. I also explicitly support stage one from this. Do we have any other voices? Do we have any objections to advancing to stage one? Do we -have any dissenting views that anyone would like to record? -Hearing nothing, seeing nothing, Ron, you have stage one. +CDA: Okay. It seems like your call for consensus was ages ago. So I’m going to ask everyone again for explicit support to advance stage one. We have a thumbs up from LCA. I also explicitly support stage one from this. Do we have any other voices? Do we have any objections to advancing to stage one? Do we have any dissenting views that anyone would like to record? Hearing nothing, seeing nothing, Ron, you have stage one. -RBN: Thank you. - [ Applause ] +RBN: Thank you. [ Applause ] CDA: Which I believe means we get to segue into your next topic. @@ -817,7 +658,8 @@ RBN: Yeah, let me bring that proposal up or the presentation up in just a moment CDA: We can accommodate your 30 minutes. Yeah. Don’t worry. -#### Summary +### Summary + - KG believes that decorators should focus on cases where they actually create new capabilities that wouldn’t be syntactically possible otherwise, the way that object field decorators are. He pointed out that function decorators are generally equivalent to a function call, meaning that they don’t add new capabilities. Asked for more motivation for function decorators, especially before/after demonstrations of improved ergonomics. 
- DE, GCL, LCO expressed support for function decorators due to mental model, ergonomics and analogy to class decorators - Future work on decorators should include more example cases for using decorators, and comparing to the baseline case, possible today, of function calls. @@ -825,59 +667,41 @@ CDA: We can accommodate your 30 minutes. Yeah. Don’t worry. #### Conclusion/Resolution Function and object property decorators reaches Stage 1 + ## Decorated Function Declarations and Hoisting + Presenter: Ron Buckton (RBN) - [slides](https://onedrive.live.com/view.aspx?resid=934F1675ED4C1638%21299455&authkey=!AHrmKC0xH_s815Q) +RBN: All right. So let me start this up. All right. I asked people to hold off with object hosting. This is likely a topic with mixed opinions on, and I wanted to get more feedback and need to start collecting that feedback. We discussed this multiple times in plenary over the years it is worthwhile for us to bring all of the people together to discuss this complexly. I will say that we’ve already been discussing various opinions on the viable of function Decorators in general. And I’d like to couch this in keeping those discussions, we can have the discussions offline on the issue tracker and try to keep this more focused on the hosting scenarios that I’m, that we need, would need to resolve if this were to move forward. So this is originally pulled from the function and object limit are decorator presentation when I first posted it on the agenda. I made minor changes just to move a lot of content here to make sure this could be more focused about the specific need. As I mentioned before, se talked about evaluation order. Right now decorator evaluation order is left-to-right, top to bottom. Document order. That’s important to be able to preserve the user’s expectation around when things happen, if they capture a temporary variable in a decorator that uses the parenthesized syntax and do certain things or evaluate in a decorator that has a side effect that effect as later, a later computer property name or vice-versa, that is also important to be able to preserve that evaluation order. And application order occurs on decorators right-to-left, preserving function application, like G here. And it happens on, in a specific order currently based on the order which those declarations are defined on a class as part of the class definition evaluations. So that class order is disconnected from the order, but evaluation order is really important to discuss in this case. We talked about arrow functions and function expressions have semantics, because classes are evaluated in the code. They don’t have any value hosting semantics like function declarations do, but Function declarations do have this interesting capability that they had for a while it doesn’t matter where you write them within the body, within the body or block, they get moved to the top of that block essentially. you can call functions that haven’t actually been reach to point of being declared or might return before you get to the declarations. Those things are still successful. And introducing decorators does complicate that. There are various options we are considering. The five we talked about in the previous presentation I can kind of go into a little bit more detail here and then start taking some questions and getting some feedback. So one option is if you introduce a preevaluation Step. 
This is something that, I think, has been discussed at lengths in prior plenary sessions many years ago, probably the thing we are spent the most time on for function declarations and no progress has been made. there is talk about introducing a step, and deal with circularities in the input graph, how does that work with decorated functions and then there’s other issues So make it so this doesn’t really work. At least not anyway that seems consistent. One thing that is decorator, if you decorate a function declaration it seems hoisted, the variable is, but you’re moving execution to the top. You’re not moving just the value initialization to the top. You’re moving actual executionables. But decorators function that run actual code. And things you would have normally expected to work because of hosting, such as if I want F to be able to reference itself, I can do this with a function, I have a function in the body references F. I can even take a reference to this here, because it actually doesn’t have a value yet. That value is dependent on the execution of decorators. And the same vein, this function F can’t use the G decorator, because the G decorator is itself decorator, and the G function is decorated and hoisted and doesn’t have a value. So hosting decorated functions doesn’t work if you’re trying to work around that case. -RBN: All right. So let me start this up. All right. I asked people to hold off with object hosting. This is likely a topic with mixed opinions on, and I wanted to get more feedback and need to start collecting that feedback. We discussed this multiple times in plenary over the years it is worthwhile for us to bring all of the people together to discuss this complexly. I will say that we’ve already been discussing various opinions on the viable of function Decorators in general. And I’d like to couch this in keeping those discussions, we can have the discussions offline on the issue tracker and try to keep this more focused on the hosting scenarios that I’m, that we need, would need to resolve if this were to move forward. So this is originally pulled from the function and object limit are decorator presentation when I first posted it on the agenda. -I made minor changes just to move a lot of content here to make sure this could be more focused about the specific need. As I mentioned before, se talked about evaluation order. Right now decorator evaluation order is left-to-right, top to bottom. Document order. That’s important to be able to preserve the user’s expectation around when things happen, if they capture a temporary variable in a decorator that uses the parenthesized syntax and do certain things or evaluate in a decorator that has a side effect that effect as later, a later computer property name or vice-versa, that is also important to be able to preserve that evaluation order. And application order occurs on decorators right-to-left, preserving function application, like G here. And it happens on, in a specific order currently based on the order which those declarations are defined on a class as part of the class definition evaluations. So that class order is disconnected from the order, but evaluation order is really important to discuss in this case. We talked about arrow functions and function expressions have semantics, because classes are evaluated in the code. 
They don’t have any value hosting semantics like function declarations do, but Function declarations do have this interesting capability that they had for a while it doesn’t matter where you write them within the body, within the body or block, they get moved to the top of that block essentially. you can call functions that haven’t actually been reach to point of being declared or might return before you get to the declarations. Those things are still successful. And introducing decorators does complicate that. There are various options we are considering. The five we talked about in the previous presentation I can kind of go into a little bit more detail here and then start taking some questions and getting some feedback. So one option is if you introduce a preevaluation Step. This is something that, I think, has been discussed at lengths in prior plenary sessions many years ago, probably the thing we are spent the most time on for function declarations and no progress has been made. there is talk about introducing a step, and deal with circularities in the input graph, how does that work with decorated functions and then there’s other issues -So make it so this doesn’t really work. At least not anyway that seems consistent. One thing that is decorator, if you decorate a function declaration it seems hoisted, the variable is, but you’re moving execution to the top. You’re not moving just the value initialization to the top. You’re moving actual executionables. But decorators function that run actual code. And things you would have normally expected to work because of hosting, such as if I want F to be able to reference itself, I can do this with a function, I have a function in the body references F. I can even take a reference to this here, because it actually doesn’t have a value yet. That value is dependent on the execution of decorators. And the same vein, this function F can’t use the G decorator, because the G decorator is itself decorator, and the -G function is decorated and hoisted and doesn’t have a value. So hosting decorated functions doesn’t work if you’re trying to work around that case. - -RBN: So that’s a problem. Another problem I mentioned is that decorators would be able to reference local variables. Even in class decorators this is a very common thing. You have, you will have decorators that have like specific, specific values that they support. Those might be declared as constants, Numeric value that are new raimented object literal. And in this case, I have part of an A Mi8 I want to reuse across multiple decorators. This is something you can do in collapses, but not with function. Decorators, because they would be hoisted above the base and in TDZ. They would not work and be able to maintain evaluation and declaration decorators that you would use on function decorators. You withed not be able to say cage decorator equals arrow function, or await import and use a conditional thing from another module. None of those things would work from functional decorators, that means hosting does not work in this case. And the other thing that function hosting hosting is complicated by decorators is how you expect a evaluation to work. Introducing a valuation, and hoisted functions are run before decorators on other code, that breaks the evaluation order expectations that users already have or would already have for decorators, that they are evaluated Top-down, left-to-right, in document order. That has consequences on ordering Side effects. Etc. 
So we generally, as we discussed before, do not recommend this approach if we move forward, because it doesn’t work the way you would think. We find there are too many issues to make it viable.

+RBN: So that’s a problem. Another problem I mentioned is that decorators would be able to reference local variables. Even with class decorators this is a very common thing. You will have decorators that support specific values: those might be declared as constants, numeric values that are enumerated, or an object literal. And in this case, I have part of an API I want to reuse across multiple decorators. This is something you can do with classes, but not with function decorators, because they would be hoisted above those declarations and land in the TDZ. They would not work, and you would not be able to maintain the evaluation and declaration order for the decorators that you would use on function declarations. You would not be able to say `const decorator =` an arrow function, or `await import()` something and use it conditionally from another module. None of those things would work with function decorators, which means hoisting does not work in this case. And the other way that function hoisting is complicated by decorators is how you expect evaluation to work. If hoisting moves decorator evaluation to the top, then decorators on hoisted functions run before other code, and that breaks the evaluation order expectations that users already have, or would already have, for decorators: that they are evaluated top-down, left-to-right, in document order. That has consequences on ordering, side effects, etc. So we generally, as we discussed before, do not recommend this approach if we move forward, because it doesn’t work the way you would think. We find there are too many issues to make it viable.

+RBN: That said, we do think there are other alternatives to consider. One is some type of dynamic mechanism, where function decorators get run when the declaration gets hit, as if execution of the program had actually reached that point in the code, and it is then evaluated like it would be with classes. But for things that are not yet available, they would suddenly pick up and do their thing. We first looked at this for function decorators in TypeScript back in 2016. But that was harder to reason about than the other semantics.
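A rough sketch of the kind of case being described here, using the proposed function decorator syntax (`memoize` is a made-up decorator, not part of any proposal):

```js
function outer() {
  // Today this call works because `helper` is hoisted. Under a "dynamic"
  // scheme, the decorated declaration below is never reached by statement
  // execution, so it is unclear when @memoize would ever run.
  return helper(2);

  @memoize
  function helper(x) {
    return x * x;
  }
}
```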
So you have code that never gets called or reached by statement execution. So what happens with registration, when do those things get called? It is too complicated. Finally, it is not deterministic: you cannot be sure what order your code will run in. And if we did an approach that is similar to that, it would require a suboptimal first-use check for all references. I might take a reference to it and pass it to something; that’s just probably completely impossible, or the performance would be so terrible, because it would have to be applied to every variable reference, everywhere in everyone’s code, and no one wants that.

+RBN: So I left off discussing first-use checks. This is definitely something we don’t want. We had similar discussions in the past. We generally consider dynamic application of these things to preserve a level of hoisting to be problematic and not a recommended approach. So the approach that I have recommended, and been discussing with others over the years and even recently, as recently as, I think, in Matrix around the last plenary or the March plenary, talking about what we can do about hoisting to find something that works: in the champion’s opinion, the most reliable approach is that we just don’t hoist decorated functions. It is like how class declarations aren’t hoisted, or at least their values are not hoisted. Decorators require evaluation, because you’re attaching syntax to them that requires evaluation, and therefore they can no longer be hoisted. Hopefully we deem this an acceptable case because this is new syntax, and new syntax means new semantics; there is no cost paid by existing code bases, and none of the cost of the dynamic execution case. Readers have no reason to guess evaluation order because it would follow the class element decoration order. You don’t have to second guess evaluation order.
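For comparison, class declarations already behave this way today, and option three would hypothetically give decorated function declarations the same treatment; `withLogging` is a made-up decorator and the function decorator syntax is the proposed syntax:

```js
new Point(); // ReferenceError today: a class binding is not initialized
             // until its declaration has been evaluated
class Point {}

// Under option three (hypothetical), a decorated function declaration would
// likewise not be usable above its declaration:
f(); // would fail here rather than silently running un-decorated

@withLogging
function f() {}
```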
What you do have to do is reorder your code relative to the definition, but you probably have to do that in any of these scenarios. What this does give us is that we definitely align with other decorator usages. If we move forward with this, this is the approach that the champion strongly recommends; it is definitely preferred in this case.

+RBN: Another option: you cannot decorate function declarations. You can decorate methods, you can theoretically decorate object literal methods and maybe even function expressions, but not function declarations. And the question is why; that break in consistency would be a wart in the language that would be very unfortunate, and it doesn’t meet the goal of decorator reuse, that being a major motivator for this: being able to leverage decorators that apply to multiple function-like things without having to have very specific cases for those. So, not being able to decorate function declarations, and not being able to reuse those decorators on them, maybe that is not so bad; you would have to apply the decorator another way and use it that way. But I don’t think that meets the reusability goals for this proposal, so it is not an approach we would generally recommend either. One other approach that I kind of shopped around a little bit and considered is that we kind of mix the no-hoisting and no-decorators approaches. That is, to have an opt-in keyword, so that essentially saying `let function f` is roughly transposed to `let f =` a function expression, and carries over the assigned name from the variable declaration like you would have with a function expression, but as a more convenient way to label a function. So you have to opt into it, and by opting into it, more so than by adding the decorator, you have to understand evaluation order.
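A rough sketch of that opt-in idea as described (all of this is hypothetical syntax; `withLogging` is a made-up decorator):

```js
// Hypothetical: `let function f() {}` would behave roughly like:
let f = function () {};
// f.name === "f": the variable name carries over, there is no hoisted value,
// and evaluation follows ordinary let/const order. A decorator on such a
// declaration could then evaluate in document order (hypothetical syntax):
@withLogging
let function g() {}
```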
It has the same benefits as the option three approach, which is to not hoist decorated functions, but there’s maybe a little bit of value in it beyond that: you can just use a `let function` or a `const function` to declare functions that have the same let and const semantics. I don’t know if that is enough to justify the marker, but that is another option we have considered over the years.

+RBN: That gets to the discussion portion. With the focus on, if we do eventually get consensus for advancing function decorators beyond stage one, what are the opinions and concerns that people have with the various choices? Is there agreement with the champion recommendation, or are there options we haven’t considered? I would like to keep this a little bit focused on what we talked about here. These are ideas, and we can go into more depth on the issue tracker as we go.

DE: I’m happy with this “do not hoist decorated functions” option. I have to admit, when we reviewed this internally within Bloomberg, multiple people found the non-hoisted version unfortunate -- the lack of capability to hoist. That is feedback; some of us felt that there is a technically correct answer, but that is not universal. It is unclear how to proceed.

+RBN: A lot of people I talked to about this, their first reaction is, I don’t want to break hoisting. It is usually after showing the problems that I described in the earlier slides that it really becomes clear that if you decorate, you can’t hoist. It just does not work. There are too many things that break expectations, evaluation expectations and whatnot. And most of the people, everyone that I talked to that has brought up whether there is a way to make hoisting work with the way we understand things as they are right now, they come to the conclusion that this approach doesn’t work. I’m open to hearing if there are other ways of working this. But from my understanding, we spent two to three years, across various plenary sessions with updates, on this, and never came to a solution to make hoisting work.

GCL: Yeah, overall, plus one to option three.
I think, mostly in agreement with DE here, probably a little bit less concerned about the loss of hoisting. But yeah, I think as long as people have to modify their code to add decorators, that is the appropriate point at which they will need to do other things and I’m comfortable with that trade-off. -RBN: Yeah, I think I heard from the others in the past, many people generally think it is a bad idea to rely on hosting anyways. Anything that uses method. So decorating those are kickier, because we have to move things around. Bought that is a cost you end up having to pay, it is just much clearer with, with this third option, that, that cost has to be paid and you should be paying it, then the does with one where -things seem like they are working and suddenly stop working. I think it is more consiste +RBN: Yeah, I think I heard from the others in the past, many people generally think it is a bad idea to rely on hosting anyways. Anything that uses method. So decorating those are kickier, because we have to move things around. Bought that is a cost you end up having to pay, it is just much clearer with, with this third option, that, that cost has to be paid and you should be paying it, then the does with one where things seem like they are working and suddenly stop working. I think it is more consiste CDA: Nothing else on the queue. -RBN: I actually expected a lot more commentary on just the hoisting scenario, I know it was a contentious topic in the past. I’m not 100% ready yet to specifically request consensus for option three, there are other things we need to figure out with the proposal. But I think option three is the most likely to move forward with researching and looking into more of this topic. I can maybe ask for consensus, but I’m going to couch that in, I’m not expecting consensus or specifically need it at this point. But it would be helpful as we make these types of determinations. -So, if I can be more specific, I would like to ask if, if we do end up moving forward with function decorators, we would consider consensus with going with option three, if not, we will consider this is something that needs to be more broadly researched. +RBN: I actually expected a lot more commentary on just the hoisting scenario, I know it was a contentious topic in the past. I’m not 100% ready yet to specifically request consensus for option three, there are other things we need to figure out with the proposal. But I think option three is the most likely to move forward with researching and looking into more of this topic. I can maybe ask for consensus, but I’m going to couch that in, I’m not expecting consensus or specifically need it at this point. But it would be helpful as we make these types of determinations. So, if I can be more specific, I would like to ask if, if we do end up moving forward with function decorators, we would consider consensus with going with option three, if not, we will consider this is something that needs to be more broadly researched. DE: As much as I personally would be happy with settling on option three, it feels somehow a little early to conclude on this. Both because we haven’t gotten further in our investigation into motivation and whether we want to do this at all. And also because we – I don’t know, we haven’t come to a real reckoning with the feelings of missing hoisting would be bad. So we – unfortunately, I think, have to take seriously the possibility that no hoisting is fatal to the proposal. I don’t think it should be considered that. 
But somehow, we have to evaluate this. So I think it is just too early to, you know, call for committee consensus one way or the other on the conclusion. -RBN: I think that is a completely fair position. It is one that I expected. The main reason I even entertained the possible tale is that there has been a history of having this discussion in the past, I knew some folks on the committee would bring that to this discussion and might have an impact there. But again, I’m not, I wasn’t expecting consensus. I’m perfectly fine moving forward without it. But this gets -the topic on the table and this is something to consider, and I do need additional feedback on and can provide more of the context to the research that has gone into this as well. I’m happy to do this offline as well. +RBN: I think that is a completely fair position. It is one that I expected. The main reason I even entertained the possible tale is that there has been a history of having this discussion in the past, I knew some folks on the committee would bring that to this discussion and might have an impact there. But again, I’m not, I wasn’t expecting consensus. I’m perfectly fine moving forward without it. But this gets the topic on the table and this is something to consider, and I do need additional feedback on and can provide more of the context to the research that has gone into this as well. I’m happy to do this offline as well. DE: This is a very good analysis. I hope this is a step forward. I hope we can develop this shared understanding that basically we won’t be able to get hoisted decorated functions and we have to decide whether we can live with that. So I’m glad that you brought this so we could – so people can think about it. @@ -893,8 +717,8 @@ CDA: Did you want to dictate any key points or summary or conclusion for the not RBN: The main key points or summary, I think on this one, I have introduced five alternatives we have currently been considering to address issues with hoisting decorated function declarations. We don’t have a specific consensus discussion as it may be too early on this. And champions current preference is to not hoist decorated functions. But it needs further discussion. - ## Approval of ES2024 and opt-out period + Presenter: Jordan Harband (JHD) JHD: Hi. So, in order to have the office specification every year and follow all of the appropriate steps, there is a 60-day opt out period that, it is a royalty free patent opt out period is what it is called that nobody has ever exercised in the past, but we need to provide to make sure that if any company has contributed something that they need to be removed from the spec, they have to opportunity to do so. That period must be completed before the General Assembly meeting which is typically in June. But also our second meeting of the year is typically in March, this year it is in April. That’s a tight timeline. So on behalf of the editors we’re going to ask plenary for official approval of the specification, which would be whatever is merged in main plus the normative, there are three possible, or three normative changes that would be added in addition to that, one is a ray buffer transfer, one of them is NRO’s normative PR about host, or sorry, is about HTML comments. And another one is PFC compile strings normative PR, open in November and received consensus in a previous plenary. The three normative changes are open and expected to be reviewed and merged in the ES2024 specification produced within the next week. 
So that, that and at that time, I wid file an issue on the reflector as I have done in previous years to notify everyone that the spec is available. I will also create a GitHub release on the repo if you are perhaps watching releases on the spec repo. So that’s the ask for the room. Also plenary approval for that. @@ -923,7 +747,7 @@ MLS: Okay. Thank you. [Break] -RPR: Okay. Let’s begin then. I think we’re back with RBN. And before we start that, it’s – you have a clarifying question about ES2024. +RPR: Okay. Let’s begin then. I think we’re back with RBN. And before we start that, it’s – you have a clarifying question about ES2024. IS: Yes. So actually, I didn’t understand it. So “ES2024”, which standard is it? “ECMA-262” or “ECMA-262 + ECMA402”. That’s all. @@ -932,7 +756,9 @@ JHD: That’s a good question. I was asking about 262, but today, 402 editors sh RPR: SFC, maybe that’s something you can do later? Please keep that in mind and we can return back to it. RPR Okay. Thank you, IS. + ## "Discard" (void) Bindings for stage 1 + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-void-binding) @@ -948,15 +774,15 @@ RBN: Some of the places we discussed, and considering, are function and method p RBN: Right now, we are proposing the use of the void key word. In place – as something you would use in place of the BindingIdentifier. We are open to other other suggestions. I don’t have a specific need for it to be void. There are reasons and rationale as to why I picked this key word for use. In this case, I will go into that a bit. The idea is not a new thing in this programming. It’s been around forever. C and C++ allow functions to have unnamed parameters. C# has the ability to introduce discards in pattern matching. And python has this in some spaces. Rust has this and Go. And in all of these cases they use the underscore character. -RBN: There is a little bit of a problem with underscore that we will get into here in a moment. But to really set the motivations for looking for this, there are often needed for declaration side effects without the variable binding. For places that I have to have the variable – a binding to have things handled properly, I don’t need the first parameter of a callback but there has to be something in place. And given need back in committee over the years, anything that could be legal JavaScript identifier is essentially off-limits for these use cases. We can’t use underscore. There is existing code that does that. We have had discussions about the possibility of letting you repeat underscore if you’re not going to use it. There are side effects that could be problematic with that and there is still legacy code in the wild using lodash and that wouldn’t be consistent with this use. +RBN: There is a little bit of a problem with underscore that we will get into here in a moment. But to really set the motivations for looking for this, there are often needed for declaration side effects without the variable binding. For places that I have to have the variable – a binding to have things handled properly, I don’t need the first parameter of a callback but there has to be something in place. And given need back in committee over the years, anything that could be legal JavaScript identifier is essentially off-limits for these use cases. We can’t use underscore. There is existing code that does that. We have had discussions about the possibility of letting you repeat underscore if you’re not going to use it. 
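A small sketch of what that relaxation would look like (hypothetical, and not what is being proposed here; `values` is a made-up variable):

```js
// Hypothetical relaxation (not proposed): allow `_` to be bound repeatedly
// when the value is unused.
const [_, x, _, y] = values; // SyntaxError today: duplicate declaration of `_`

// ...and `_` is very often a real, meaningful binding already:
import _ from "lodash";
```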
There are side effects that could be problematic with that and there is still legacy code in the wild using lodash and that wouldn’t be consistent with this use. -RBN: Another thing is tooling often with a – and parameters and things where you have to use disable line comments or prefix parameter names with underscores or variable names where things you may not use right now. More of this – this tends to be the case with parameter declarations but things you want to ignore but you have to give it a binding. And there’s been cases of this in the past where we have had this compatibility. We introduced bindingless catch and that was – that practically made it to Stage 3 in one meeting. It didn’t. But to Stage 2 when it was first adopted. Because it was so valuable to people to have a mechanism in declaring this without a user's binding I am not going to use. That is a single purpose. Bindingless works with catch because of how it’s written. We also have elision, which works with arrays and we will get to the ish in a minute. +RBN: Another thing is tooling often with a – and parameters and things where you have to use disable line comments or prefix parameter names with underscores or variable names where things you may not use right now. More of this – this tends to be the case with parameter declarations but things you want to ignore but you have to give it a binding. And there’s been cases of this in the past where we have had this compatibility. We introduced bindingless catch and that was – that practically made it to Stage 3 in one meeting. It didn’t. But to Stage 2 when it was first adopted. Because it was so valuable to people to have a mechanism in declaring this without a user's binding I am not going to use. That is a single purpose. Bindingless works with catch because of how it’s written. We also have elision, which works with arrays and we will get to the ish in a minute. RBN: There are alternatives. Could you use an empty object pattern instead? No. Empty object patterns have semantics associated with them. You can use an empty object pattern as a parameter if – to replace a parameter if it might be null or undefined. Using declaration specifically has very specific handling for null and undefined: it doesn’t throw, because it’s not an object with a [goes]d method. And those specific semantics avoid complicated cases using switch and other things or having to duplicate code for resources only conditionally available. So null and undefined are valuables for using, soes not sufficient there. In addition, using doesn’t allow binding patterns. This was a decision during Stage 2. There are differences of opinion as to what a binding pattern actually becomes the resource tracked? Variable you’re assigning – the value you’re reading from to be assigned to the variable or the individual properties? And change there is an – ordering semantics around how resources get tracked that don’t work if I can change the order of the parameters in those cases. And the decision was there to not allow it. Object literal patterns don’t work here. And array patterns are less likely to work because not everything has a Symbol.iterator. RBN: Using simple elision is not sufficient. If I had _lock = new whatever locking mechanism or primitive, object to create, I can’t just do the underscore lock. = is syntax. We can’t leverage elision. And we can’t use open close paren, like it doesn’t exist, it looks like a CallExpression. 
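A few of these non-options, sketched out (the `using` lines assume the explicit resource management proposal; `acquireLock` is a made-up helper):

```js
// An empty object pattern is not a discard: it throws on null/undefined,
// while `using` deliberately tolerates them (proposal semantics):
const {} = null;            // TypeError
// using res = null;        // allowed: nothing to dispose

// Elision only exists in array patterns, and `_` is a single real binding:
const [, second] = [1, 2];  // fine, but nothing comparable for parameters
// using _ = acquireLock(); // works once; a second `_` is a SyntaxError
```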
We run into cases where we just can’t use the existing mechanisms that we have in ECMAScript to define those. This was originally part of the resource management proposal. This proposal again has had a non-binding form since Stage 1. And it was cut just before we got to – reach Stage 3. And the reason it was cut was not because we didn’t think the feature was viable or important, but because we needed the scope to the proposal that would reach consensus and we needed more investigation into the void because the – the discard portion of this as a broader proposal. -RBN: And as I mentioned before, there are numerous use cases for these bindingless things in using await and using. Scope locking with workers and SharedArrayBuffers, logging, tracing, and avoiding the prefixes. +RBN: And as I mentioned before, there are numerous use cases for these bindingless things in using await and using. Scope locking with workers and SharedArrayBuffers, logging, tracing, and avoiding the prefixes. RN: And again one of other things that is interesting and useful with discards and used in language, to skip parameters I don’t need. They don’t need a name. One of the hardest problems in programming is naming things. Like calculating the leap years for February 29. Naming things is hard. Especially when you don’t care what the name is. So underscore really isn’t that useful because I can’t use it multiple times, you end up with `_`, `__`, `___`, et cetera. And it just throws and is unreadable and isn’t that useful. So the idea here is that you could replace the binding you use for variable with some type of thing that indicates that there’s nothing there. Again we are using void here. I will get to the rationale for that shortly. @@ -970,8 +796,7 @@ RBN: There is something how assignment patterns, that I think I go into more det RBN: The other case that this is useful for is in pattern matching. We would like a mechanism for an irrefutable match. It always matches. Always true no matter what. Every single pattern matching system that I can think of across multiple languages has some mechanism for discard pattern. The idea here is because something always matches, you can use it to test for the shape of a thing without necessarily needing to care about what the value or type of that property is. So this mechanism, you can do things like matching against "is this something that has a Y and X property". Draw a point of that shape. This can’t be done in pattern matching without it. Unless you do something else. I might have to introduce a custom matcher that just returns true and now the overhead of following this in the runtime to interpret its results. So get the Symbol.matcher property. It’s expensive when you don’t care what the result; but want to say always. And there is no really – really no other way to do this pattern matching without some mechanism of disregarding that result. -RBN: Pattern matching also has cases like the terminal case where if nothing matches, what do you do? -And this is something that requires some discussion in the pattern matching proposal and more focus on that proposal when it comes back to plenary for an update, but one thing that it does currently is uses the default key word, like a case statement, this is what we do last, it’s potentially not necessary, when void. Because that would be the irrefutable match and does the same thing. We don’t necessarily need that case specifically. We might use default for that case. 
Being able to use void anywhere in a pattern and useful and almost a core principle. It’s something we can’t do without having some syntax. The main discussion we had in pattern matching: if we have this in pattern matching, make sure it’s consistent with destructuring in other places to use this so we use the same syntax. +RBN: Pattern matching also has cases like the terminal case where if nothing matches, what do you do? And this is something that requires some discussion in the pattern matching proposal and more focus on that proposal when it comes back to plenary for an update, but one thing that it does currently is uses the default key word, like a case statement, this is what we do last, it’s potentially not necessary, when void. Because that would be the irrefutable match and does the same thing. We don’t necessarily need that case specifically. We might use default for that case. Being able to use void anywhere in a pattern and useful and almost a core principle. It’s something we can’t do without having some syntax. The main discussion we had in pattern matching: if we have this in pattern matching, make sure it’s consistent with destructuring in other places to use this so we use the same syntax. RBN: Other places we might want to use this have similar needs that we might see with the structures and pattern matching. Extractors, I might not care about the first argument that comes out after an extractor. I need a discard binding. You could use elision and spec does, but this runs into the same issues with elision that the regular rules do, having to count trailing commas depending on how things go with pattern – or with the custom matching result. This is something we need for extractors, but not do that in the proposal, because it’s a broader-reaching area to discuss. @@ -989,77 +814,77 @@ RBN: Then there’s the possibility for other places. We could do, do we need vo RBN: So that’s basically what I am trying to propose. I did mention void -- the reason why we are using the void keyword in the proposal, I meant to circle back to this but jump back to an earlier slide. We can’t use underscore, which is again the de facto standard across every language that almost every language I checked discards. It’s very common to use underscore, but – for reasons discussed in the past and prior plenaries using anything that is an identifier is out. We could use a different hash or anything symbol, but what – they don’t have a semantic meaning to provide. Anything, using at = really doesn’t convey what that means. Using void though, we feel the void does have a semantic meaning that is representable. Because a void expression evaluates the expression and disregard the result. This is to evaluate the expression or initializer and discard the binding. There is semantic meaning maintained between void as a discard versus void as an expression. That is well-maintained with this proposal. But that, like I said, I am open and welcome to looking and exploring other tokens, other punctuators, if necessary, other key words, I don’t think there’s any reserved words in JavaScript that are as semantically similar to discard more than void is. I am using void because that’s what was in the – using declarations proposal, but not tied to that specific name or token going forward. -RBN: So at that point, I will go to the queue. We can discuss what is on there and I can talk about potential advancement. +RBN: So at that point, I will go to the queue. 
We can discuss what is on there and I can talk about potential advancement. -NRO: I strongly support this. I was recently trying to use `using` declarations for real. And I find myself writing `using _1 = something`, `using _2 = something` because I was forced to give a name and I didn’t care about naming them. And there are other cases in which I use something similar. I tend to use just the underscore void, that works well, except when you need it twice. But it’s – like, it’s good to not have like – when I have to use underscore, then I have to add something else in the same scope, I need to go – like, the second one has just to be something different, following a different pattern. Strong support for Stage 1 for this. +NRO: I strongly support this. I was recently trying to use `using` declarations for real. And I find myself writing `using _1 = something`, `using _2 = something` because I was forced to give a name and I didn’t care about naming them. And there are other cases in which I use something similar. I tend to use just the underscore void, that works well, except when you need it twice. But it’s – like, it’s good to not have like – when I have to use underscore, then I have to add something else in the same scope, I need to go – like, the second one has just to be something different, following a different pattern. Strong support for Stage 1 for this. RPR: Thank you. JSC says: “This is interesting. And I support exploring this. End of transmission.” JRL: Also support. However, I hate to void keyword. It looks too much like a regular parameter name. And one – the underscore works so well, I came from Rust, is because you scan over it. It doesn’t take any weight when you’re looking at it – a list of other things that are actually important. If you go back to any of the slides that have multiple void key words and the things that you want, void looks like everything else that’s in that list. Sorry. The parameters or destructures. It makes it more obvious. Exactly. Void looks like a parameter here. And it’s on syntax highlighting or colouring to figure out it’s not. Underscore didn’t have the issue. It’s absent. It looks insignificant in comparison to everything else around it. If we can find a non-ASCII keyword, or underscore, like, a symbol, or a – one of the operators or anything like that, that is small and insignificant in relation to the rest of the code, I think that would be much better. -RBN: One of the values that I think it has as the key word is essentially the exact opposite of what you said, which is you won’t see it in – most often case you see it in a using declaration, you can have using declarations that spans multiple lines, you can have comma delimited declarations. And having it formatted with the void keyword keeps it balanced with more things that look like identifiers. If you do need to scan, because it is a reserved word in JavaScript, it will show up with syntax highlighting in that respect, and it is possible, something you want to scan over, that people could update a TextMate language file, within the context, the void keyword is coloured as something that is harder to see if you really don’t want to look at it. I am not opposed to looking at other symbols. I have had a hard time finding one that might have better use as something else in the future. So I’ve been weary about taking up that syntax space. 
Things like tilde don’t work because I have not yet fully abandoned partial application, and tilde is important in partial application as one of the few things I can make infix within a call to give it that indicator that partial application needed early on in its proposal process. Other symbols, most of them just don’t really speak to the semantic meaning of discard. Maybe star is an option. But I am wary about using star for something, where other languages might consider that to be something that is a pointer declaration. Like there’s a lot of contention with the semantic meaning behind a lot of the symbols that could appear in that case that I am wary of running afoul of. That’s why I chose void. It’s a similar semantic meaning to what we are trying to accomplish and doesn’t require someone to like mentally leap over what is the specific sigil in this case mean. But again, I agree we should explore that. +RBN: One of the values that I think it has as the key word is essentially the exact opposite of what you said, which is you won’t see it in – most often case you see it in a using declaration, you can have using declarations that spans multiple lines, you can have comma delimited declarations. And having it formatted with the void keyword keeps it balanced with more things that look like identifiers. If you do need to scan, because it is a reserved word in JavaScript, it will show up with syntax highlighting in that respect, and it is possible, something you want to scan over, that people could update a TextMate language file, within the context, the void keyword is coloured as something that is harder to see if you really don’t want to look at it. I am not opposed to looking at other symbols. I have had a hard time finding one that might have better use as something else in the future. So I’ve been weary about taking up that syntax space. Things like tilde don’t work because I have not yet fully abandoned partial application, and tilde is important in partial application as one of the few things I can make infix within a call to give it that indicator that partial application needed early on in its proposal process. Other symbols, most of them just don’t really speak to the semantic meaning of discard. Maybe star is an option. But I am wary about using star for something, where other languages might consider that to be something that is a pointer declaration. Like there’s a lot of contention with the semantic meaning behind a lot of the symbols that could appear in that case that I am wary of running afoul of. That’s why I chose void. It’s a similar semantic meaning to what we are trying to accomplish and doesn’t require someone to like mentally leap over what is the specific sigil in this case mean. But again, I agree we should explore that. NRO: I go with JRL, this feels too visible. It should be more visible than the elision. Like one of the strengths is that elision is to see because you need to do the commas. But also, I would very much prefer this to not extract me when I am reading the variables that I have. -JHD: Yeah. Just a comment about the specific example of a tilde. It means something different. `void` is an existing operator that means basically the same thing, void. It’s fine if we want to explore alternatives and non-ASCII. If it’s something that has an existing meaning, that’s probably going to be too confusing to be an option. +JHD: Yeah. Just a comment about the specific example of a tilde. It means something different. 
`void` is an existing operator that means basically the same thing, void. It’s fine if we want to explore alternatives and non-ASCII. If it’s something that has an existing meaning, that’s probably going to be too confusing to be an option. GCL: Yeah. I feel like overall, I feel good about this. One thing I am a little worried about is like there are some places in the language where introducing this makes sense, and there’s some other places in the language where it seems like we were just sprinkling it in there because we can. I would like to be explicit about which things we are trying to at least target at this point before we, you know, just to define the problem space before we start getting more into like any other, you know, stage 2-type concerns. So I think usage in object literals was an example, to me, where we were just putting it there, rather than actually needing it there. -RBN: The object literal example is something that I expressly was interested in providing. It provides a benefit – +RBN: The object literal example is something that I expressly was interested in providing. It provides a benefit – -GCL: Not – on the right-hand side. +GCL: Not – on the right-hand side. -RBN: On the right-hand? I don’t know if I had an example on the right-hand side. I had array literals. The array literal example, I think, was one – +RBN: On the right-hand? I don’t know if I had an example on the right-hand side. I had array literals. The array literal example, I think, was one – -GCL: I believe it was void – it’s not. I’m sorry. It says not. +GCL: I believe it was void – it’s not. I’m sorry. It says not. -RBN: Other than this is already legal syntax and creates a property named void. +RBN: Other than this is already legal syntax and creates a property named void. GCL: Yeah. Okay. I did not see the little tiny not proposed text -RBN:I tried to make this more obvious by colouring them in red. But yes, this is not something I think is worth pursuit in this case. It doesn’t make sense for these. +RBN:I tried to make this more obvious by colouring them in red. But yes, this is not something I think is worth pursuit in this case. It doesn’t make sense for these. -GCL: Okay. That sounds good then. +GCL: Okay. That sounds good then. -RBN: Static blocks and object literals you can spread or use parenthesized. It’s not something that comes up in those cases. +RBN: Static blocks and object literals you can spread or use parenthesized. It’s not something that comes up in those cases. -NRO: Yes. So this was proposed in the past in matrix. I would say like for it to be discussed generally how we feel about it. We could consider representing a valid identifier for this. Consider using underscore, how it’s used in other languages and advantages, and it already happens to be used like in different projects, to just avoid a value. We determined it only works for a single value. Even if it's an identifier, we could make it still be backwards compatible by relaxing the restriction to declare a variable with the same name and use const. And in this case, where we – when referencing the binding, +NRO: Yes. So this was proposed in the past in matrix. I would say like for it to be discussed generally how we feel about it. We could consider representing a valid identifier for this. Consider using underscore, how it’s used in other languages and advantages, and it already happens to be used like in different projects, to just avoid a value. We determined it only works for a single value. 
Even if it's an identifier, we could make it still be backwards compatible by relaxing the restriction to declare a variable with the same name and use const. And in this case, where we – when referencing the binding, RBN: I know what JHD is going to say. But this is something because this proposal also overlaps with pattern matching, we discussed that there and it’s a direction that no one in the champions' group wants to go with for the reasons discussed before. And having underscore, you can’t reference anything that is outside of that scope if you tried to use that, which is something like lodash and underscore member commonly use as `import _ from` – from I am 0.er star from lodash, et cetera. I will let JHD speak to that, if he wants. -JHD: Yeah. I mean, so there’s a number of problems there. First of all, you can already in array and object destructuring, repeat a binding name. And only the last one will win. I verified with arrays at least. I am assuming it’s the same with objects. And that’s not really – that’s fine. That’s a workaround. I do that. But it’s not clear or readable or ergonomic. It’s a hack. Repurposing an existing valid identifier, or existing valid one, I think in general that’s a bad idea for confusion and learnability of the language and google-ability of features we add. But also, underscore in particular is used probably second only to the dollar sign in terms of like deny I in terms of single letter variables in JavaScript codebases. Underscore if lodash, is underscore. And I have used it for other things. It violates TCP: If you move around code, it would – mean different things, because the underscore takes on a new meaning. So even if were web compatible, that’s aesthetically nice, I think that should be a non-starter. Not a viable option to even consider. +JHD: Yeah. I mean, so there’s a number of problems there. First of all, you can already in array and object destructuring, repeat a binding name. And only the last one will win. I verified with arrays at least. I am assuming it’s the same with objects. And that’s not really – that’s fine. That’s a workaround. I do that. But it’s not clear or readable or ergonomic. It’s a hack. Repurposing an existing valid identifier, or existing valid one, I think in general that’s a bad idea for confusion and learnability of the language and google-ability of features we add. But also, underscore in particular is used probably second only to the dollar sign in terms of like deny I in terms of single letter variables in JavaScript codebases. Underscore if lodash, is underscore. And I have used it for other things. It violates TCP: If you move around code, it would – mean different things, because the underscore takes on a new meaning. So even if were web compatible, that’s aesthetically nice, I think that should be a non-starter. Not a viable option to even consider. -JHD: All right. Just to be clear, MF has clarified that duplicate bindings are an error in strict mode. My test was in sloppy mode. Yeah. +JHD: All right. Just to be clear, MF has clarified that duplicate bindings are an error in strict mode. My test was in sloppy mode. Yeah. -RPR: Yeah. You are next with seems like it only makes sense. +RPR: Yeah. You are next with seems like it only makes sense. JHD: So yeah. My next topic is, in general, I feel like this proposal only makes sense in places that are conceptually a comma separated list. 
The only places -- off the top of my head, where that bucket might not – be something to apply is in array and object literals as opposed to destructuring patterns. The literals creating a value, being referred to as the right-hand side. Even if it’s not always in that exact position. So like I think it’s fine if we like – in object literals, this is clear why it’s not an option. Similarly with class declarations. And in array literals, it seems kind of nice to me that the destructuring and construction of the array could both have the void, but I don’t care that much. And like I still think there’s value even if we exclude it from array literals, as long as array destructuring syntax – that’s a mental model, a simple way to describe it. Places comma separated list except for these two items. RBN: The mental model is things that can declare something or assign to something. Things that seem a little bit arbitrary like whether you would support it in let const, and var, here, this is still a list. But it follows the guidance things that could declare or assign to a thing so it's maybe not as useful here. Array literals, you are provided a value. So if the rule of thumb is that you could have a binding or assignment pattern, then I think it makes sense. -NRO: Yeah. Once before, like – I want to say that I think we should not have another syntax for array elision because people should not use that. We should not encourage it. +NRO: Yeah. Once before, like – I want to say that I think we should not have another syntax for array elision because people should not use that. We should not encourage it. -RBN: And I also agree. I have had this discussion with others. Elision is a bad idea. Because of the conNetflix with trailing comma. So elision when constructing an array literal generally is bad because you end up with a holey array. It’s not a packed array so it’s not quite as efficient and you'll have weird things happen. So in general, I don’t think making this a better – elision better here makes sense. But on the other side where I have to deal with elision because it’s the left-hand side of an assignment or declaration, then I think it makes sense to have something that lets me take the place of elision. Instead of saying, let’s make elision better on this side – for expressions, it's let’s make elision more explicit on the declaration side. I don’t really want this to be part of this proposal. It’s included here so I can specifically state that if we do want this, I think that it must be consistent with how elision works, not with how void 0 works. I am perfectly fine with not continuing with that as part of the proposal. +RBN: And I also agree. I have had this discussion with others. Elision is a bad idea. Because of the conNetflix with trailing comma. So elision when constructing an array literal generally is bad because you end up with a holey array. It’s not a packed array so it’s not quite as efficient and you'll have weird things happen. So in general, I don’t think making this a better – elision better here makes sense. But on the other side where I have to deal with elision because it’s the left-hand side of an assignment or declaration, then I think it makes sense to have something that lets me take the place of elision. Instead of saying, let’s make elision better on this side – for expressions, it's let’s make elision more explicit on the declaration side. I don’t really want this to be part of this proposal. 
RPR: DE is +1. LCA has a strong +1 about exploring this, “I don’t like void”. NRO has done the research on whether underscore is possible and suggests maybe using it.

RBN: And just to speak to this again, JHD had responded as to why it’s not viable. I would love to use underscore, but there’s too much prevalence in JavaScript for it to be usable: someone would inadvertently use the underscore discard within their code and break another binding in the same scope that used underscore or lodash. And that means they either can’t use underscore or lodash, or they can’t use the underscore discard. I would avoid anything that is an identifier in those cases. I don’t think that’s an option.

DMM: So in Java, when we introduced the underscore for this purpose, it was done over a period of time. The compiler started to emit warnings long before the use of it was banned, and then – then you have a break as it was introduced with a new meaning, and this gave everybody time to switch over. And the two cases are a formalization of how people were sort of using it anyway. I agree, we can’t do this in JavaScript; we cannot use underscore, because it’s too widely deployed. void has a plus point: it at least kind of – it looks identifier-ish. It doesn’t look like an operator or anything, which I think almost all of the other candidates will. If we find an operator that has not got any other valid use in the relevant context – especially, I think, pattern matching is going to be an important one moving forward – I think something with brevity is useful because, as people say, it skips over, even though we can’t make it look identifier-like in the way underscore does.

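A hypothetical illustration (not from the proposal) of the collision RBN and DMM are describing if `_` were given a built-in discard meaning:

```js
// Today `_` is an ordinary identifier, and very commonly it is the lodash or
// Underscore namespace in a codebase:
import _ from 'lodash';

export function demo(pair) {
  // Perfectly legal today: this destructuring binds `_` to the first element,
  // shadowing the imported lodash `_` within this function.
  const [_, second] = pair;
  console.log(_);      // the first element of `pair`, not lodash
  console.log(second);
}

// If `_` in binding position later meant "discard this value", the destructuring
// above would change meaning or become an error, so codebases that import lodash
// or Underscore as `_` could not safely mix the library with the discard form.
```
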
RGN: This is another one of those syntax proposals that is unusual in actually seeming to pay for itself. I appreciate the plethora of examples and strongly support exploring it.

RBN: Thank you.

MLS: Maybe I am missing something: for right-hand side, non-bindings, why is undefined not acceptable?

RBN: Are you talking about this slide?

MLS: For this slide, why can’t you replace, you know, the void 0 with undefined?

RBN: It’s not the same as elision. You might have elision for a reason, where you want the semantics that, in the bottom example, `1 in ar2` is false, because it doesn’t create the property – because you want a holey array and not a packed one. What void 0 is, is undefined without using the `undefined` identifier – it’s not a keyword, it’s an identifier. It will give it a value at that place and create a property in that position.

MLS: Okay. Elision just creates the hole?

RBN: Yes. It creates a hole. There is a semantic meaning to elision in the literal. It differs here. I use void 0 because it uses the void keyword.

MLS: Certainly, in function parameters, undefined works fine.

RBN: No. In sloppy mode, undefined is perfectly valid as a parameter name. And it’s one of the reasons why strict mode made that restriction. In the ES3 days, people would declare functions with a parameter named undefined specifically to make sure there was something called undefined, because it wasn’t consistently implemented in earlier versions of the language. Undefined – I think it was ES3 or 5? I’m a little hazy on that one. Undefined was not reliable, and there is existing code out there that is who knows how old. We couldn’t change the meaning of that without possibly breaking old code somewhere.

RBN: Also, you couldn’t use it on the left-hand side because again –

MLS: Yeah. Yeah. Left-hand side, I agree.

RBN: I consider a parameter to essentially be a left-hand side; the assignment is coming through a declaration, like a variable declaration would be.

NRO: Yeah. I think you are talking past each other. When calling a function, undefined is fine; not when declaring a function.

MLS: Yes. That’s what I was talking about.

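For readers unfamiliar with the old idiom RBN recalls, here is a sketch of it; the `isMissing` name and the surrounding shape are invented for illustration:

```js
// Classic pre-ES5 idiom: `undefined` is not a keyword, only a property of the
// global object, and in very old engines scripts could overwrite it. Declaring
// an extra parameter named `undefined` and never passing an argument for it
// guaranteed the real undefined value inside the closure.
(function (global, undefined) {
  // Here `undefined` is just a parameter name, bound to "no argument passed".
  global.isMissing = function (value) {
    return value === undefined;
  };
})(globalThis);

console.log(isMissing(void 0)); // true
console.log(isMissing(0));      // false
```
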
NRO: Okay. So this slide is, I believe, mostly for exposition; the main goal of the proposal is not the right-hand side.

RBN: Yes. This is just expository, to explain why it’s a bad idea. And just for that corner case: if we did decide on it, there are semantics that I think we would have to maintain, but I don’t believe we should consider this.

DE: Yeah. I agree we shouldn’t consider this. This is a left-hand side feature we are talking about. In expression context, void is already defined; it’s an operator. On the right-hand side, I agree with you that if it were to be defined, sure, it might have to be a particular way, but this whole discussion has gone off in a confusing direction. Let’s just focus on the left-hand side destructuring use cases.

RBN: I am going to change this. It says proposed, but there is no proposed syntax; it is current ECMAScript. We're spending a lot of time on this slide for something we don’t intend to include. It’s important to keep a historical record, but this definitely won’t be – I will remove anything related to this in the proposal.

RPR: And MF says, +1 for support. Void is a great solution.

RBN: At this point, I would like to propose adoption at Stage 1 and seek consensus.

RPR: I think we heard quite a lot of support, but please be explicit. NRO is +1. JWK has explicit support as well. RGN has +1. JHD, +1. Lots of support. Any objections to Stage 1? There have been no objections. So congratulations, RBN. You have Stage 1!

RBN: Thank you very much.

## Approval of ECMA-402 for 2024

Presenter: Ujjwal Sharma (USA)

USA: Thank you, JHD, for reminding us that it’s actually time to do this. But yeah, as you might be able to see quickly, there are very few normative pull requests in ECMA-402, as mentioned earlier in the meeting, on the first day – quite a while ago, all things considered. There are no normative issues open right now that are not blocked for various reasons, so I won’t waste your time going into why each of them is blocked. But there are some major editorial updates that would substantially improve the readability of the spec. You may feel free to go through any of them; this is just one example. And actually, on that note, we really appreciate any reviews. The goal would be to get the signoff of the committee on the current version – the main branch, if you will – of the ECMA-402 spec, which is available at the regular URL, and without any normative changes. We plan to go through and merge any editorial changes that began before the freeze, but yeah. That’s it. No –

(technical difficulties)

USA: Okay. All right. I will just quickly go through them again. While talking about the normative pull requests, I went through this when I mentioned the big editorial pull request. This is the one I was going through, but actually, there is another by RGN. But yeah, there are a few editorial updates that we believe make the readability of the spec better, and therefore we are trying to get them in before the freeze. But no normative changes in sight. That’s it. I would like to ask for your approval of the – not this one. Okay. The master branch for ECMA-402.

RPR: This is only approval of what has already landed?

USA: Yes, no normative changes.

RPR: Okay. Do we have approval on calling it ES2024? Okay. JHD is +1 for approval. Thank you, JHD. Any objections? I think we are good. All right. Congratulations. And we have a +1 from Dan.