From dffacc7ce0b82746a3b6892877df5e421d66568d Mon Sep 17 00:00:00 2001 From: ctcpip Date: Fri, 20 Oct 2023 10:22:22 -0500 Subject: [PATCH] =?UTF-8?q?=E2=9C=8F=EF=B8=8F=20fix=202020=20notes?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .markdownlint-cli2.jsonc | 1 - meetings/2020-02/february-4.md | 179 ++++++++++------------ meetings/2020-02/february-5.md | 96 ++++++------ meetings/2020-02/february-6.md | 107 ++++++------- meetings/2020-03/april-1.md | 107 +++++++------ meetings/2020-03/april-2.md | 35 ++--- meetings/2020-03/march-31.md | 52 ++++--- meetings/2020-06/june-1.md | 67 +++++--- meetings/2020-06/june-2.md | 67 ++++---- meetings/2020-06/june-3.md | 124 +++++++-------- meetings/2020-06/june-4.md | 189 ++++++++++------------- meetings/2020-07/july-20.md | 78 +++++----- meetings/2020-07/july-21.md | 47 ++++-- meetings/2020-07/july-22.md | 63 +++++--- meetings/2020-07/july-23.md | 74 +++++---- meetings/2020-07/summary.md | 4 +- meetings/2020-09/sept-21.md | 109 +++++++------ meetings/2020-09/sept-22.md | 74 ++++----- meetings/2020-09/sept-23.md | 41 ++--- meetings/2020-09/sept-24.md | 144 ++++++++++-------- meetings/2020-11/nov-16.md | 254 ++++++++++++++++-------------- meetings/2020-11/nov-17.md | 271 ++++++++++++++++++--------------- meetings/2020-11/nov-18.md | 169 ++++++++++---------- meetings/2020-11/nov-19.md | 185 +++++++++++----------- 24 files changed, 1318 insertions(+), 1219 deletions(-) diff --git a/.markdownlint-cli2.jsonc b/.markdownlint-cli2.jsonc index e81a6dd0..e7af140b 100644 --- a/.markdownlint-cli2.jsonc +++ b/.markdownlint-cli2.jsonc @@ -5,7 +5,6 @@ "ignores": [ "node_modules/**", "meetings/201*/*.md", - "meetings/2020*/*.md", "scripts/test-samples/*" ] } diff --git a/meetings/2020-02/february-4.md b/meetings/2020-02/february-4.md index b819f3b1..417762cb 100644 --- a/meetings/2020-02/february-4.md +++ b/meetings/2020-02/february-4.md @@ -1,4 +1,5 @@ # February 4, 2020 Meeting Notes + ----- **In-person attendees:** Aki Braun (AKI), Andrew Paprocki (API), Rob Palmer (RPR), Waldemar Horwat (WH), Chip Morningstar (CM), Shane F Carr (SFC), Shu-yu Guo (SYG), Jordan Harband (JHD), Michael Saboff (MLS), Keith Miller (KM), Michael Ficarra (MF), Jonathan Keslin (JKN), Kevin Gibbons (KG), Andrew Paprocki (API), Richard Gibson (RGN), Justin Ridgewell (JRL), Zibi Braniecki (ZB), Myles Borins (MBS), Bradley Farias (BFS), Bradford C. Smith (BCS) Rick Button (RBU) @@ -6,6 +7,7 @@ **Remote attendees:** Dan Ehrenberg (DE), Brian Terlson (BT), David Rudin (DRN), Jason Nutter (JAN), Ron Buckton (RBN), Pieter Ouwerkerk (POK), István Sebestyén (IS), Min Qi Wu(WMQ), Leo Balter (LEO), Valerie Young (VYG), Jack Works (JWK), Mathieu Hofman (MAH), John Hax (JHX), Caridy Patino (CP), Sergey Rubanov (SRV), Rajiv Batra (!!!), Yulia Startsev (YSV), Caio Lima (CLA) ## Housekeeping + ### Adoption of the agenda Adopted by consensus. @@ -17,22 +19,24 @@ Adopted by consensus. ### Next meeting Cupertino, CA hosted by Apple. Will be at the Infinite Loop campus. Host requests that folks register via Doodle ASAP so Apple Security can review. + ## Process changes to accommodate US members and US delegates + Presenter: Michael Ficarra (MF), Myles Borins (MBS) - [Slides](https://docs.google.com/presentation/d/1Om59leOYIgGBbQtVRKpC4dnhljhGUD5cTSI1Q876PDQ/edit) -MBS: There has been a lot of discussion around export controls in ECMA. We’ve discussed with a number of councils and our suggestions are based on that, and this is not legal advice. 
One other thing is that while we are making a number of suggestions, these suggestions were made by enumerating over the spaces in which we collaborate. It does not mean that these need to land in the way we present them. But as we go through each of the items, we have clear reasons for why we are making these decisions. +MBS: There has been a lot of discussion around export controls in ECMA. We’ve discussed with a number of councils and our suggestions are based on that, and this is not legal advice. One other thing is that while we are making a number of suggestions, these suggestions were made by enumerating over the spaces in which we collaborate. It does not mean that these need to land in the way we present them. But as we go through each of the items, we have clear reasons for why we are making these decisions. MF: None of this is legal advice. -MF: We have a 2-part proposal. First, the goal of this proposal is to publish… +MF: We have a 2-part proposal. First, the goal of this proposal is to publish… CM: What is the problem we're trying to solve? MF: We are trying to make US delegates feel more comfortable. -MBS: The export control guidance as they were put out by the BIS (Bureau of Industry and Security, part of the US Department of Commerce) are ambiguous about how to collaborate. A member of ECMA is on the export control list. Counsel who have reviewed the guidelines have a variety of risk tolerance. The purpose of these proposals is for risk reductions for US delegates. +MBS: The export control guidance as they were put out by the BIS (Bureau of Industry and Security, part of the US Department of Commerce) are ambiguous about how to collaborate. A member of ECMA is on the export control list. Counsel who have reviewed the guidelines have a variety of risk tolerance. The purpose of these proposals is for risk reductions for US delegates. CM: No, I feel like there is some fundamental context I am missing or not understanding. @@ -48,23 +52,22 @@ MF: (presents slide "Proposal 1") Yeah, so the first proposal is to issue a comm MBS: ECMA itself is not concerned with TC39 operating openly. What we are trying to do is find cases where we can improve our existing properties of being open. -MF: We want to clarify what is an existing channel for communication in TC39. We’ve identified some private channels, the TC39 IRC channels, have logs published. All of the channels that are currently private, (#tc39-delegates), those should be changed to moderated, which would allow anyone to join, but only delegates can contribute. We would still log those channels. The discourse has a delegates forum, which we will need to make publicly visible. And the chairs will need to move technical discussions from the Reflector to a public repo. +MF: We want to clarify what is an existing channel for communication in TC39. We’ve identified some private channels, the TC39 IRC channels, have logs published. All of the channels that are currently private, (#tc39-delegates), those should be changed to moderated, which would allow anyone to join, but only delegates can contribute. We would still log those channels. The discourse has a delegates forum, which we will need to make publicly visible. And the chairs will need to move technical discussions from the Reflector to a public repo. MBS: This is specifically related to technical discussion. Things in the reflector, for example “where is the room”, there is no concern about that. 
Issues that are primarily administrative or non-technical do not need to be published. Any communications that are non technical can remain non-disclosed. -MF: The next step is to live-stream meetings? We should take detailed notes on whether we decide to do that or not. +MF: The next step is to live-stream meetings? We should take detailed notes on whether we decide to do that or not. MBS: Where all of this is coming from, is specifically the word public in the export control guidance. The word public is really ambiguous. It is really unclear what constitutes as public and what constitutes as private. This intends to minimize the amount of things that fall into that ambiguous territory. -MF: (presents slide "Proposal 2") For cases when it's not possible to make technical discussion public, we should have a plan in place. I know there are some discussions we've had that are sensitive, like embargoed or export-controlled topics. Sometimes these are time-sensitive. The proposal we have is to create a limited-membership TG. We can define what is in the scope of that TG for handling those scenarios. +MF: (presents slide "Proposal 2") For cases when it's not possible to make technical discussion public, we should have a plan in place. I know there are some discussions we've had that are sensitive, like embargoed or export-controlled topics. Sometimes these are time-sensitive. The proposal we have is to create a limited-membership TG. We can define what is in the scope of that TG for handling those scenarios. MBS: One of the really good examples here is disclosure of security vulnerabilities. Enumerating these spaces can improve our process, independent of the situation we find ourselves in. Currently there is no way for someone to report a security vulnerability to the committee.In the NodeJS project, we have a lot of collaborators, but there are only 10 people that receive security vulnerability reports. -MF: (presents slide "Appendix: terms") We have some official guidance from the government on the terms, and we also have a working definition of "private". +MF: (presents slide "Appendix: terms") We have some official guidance from the government on the terms, and we also have a working definition of "private". BT: Invited our attorney that has been working on this issue, DRN. - DRN: Hi, I can help answer questions. AKI: I’d like this to be more like the TC39 discourse forum where chairs/editors move content out of the private space to the public as needed. @@ -75,7 +78,7 @@ AKI: That's a really good point. MF: That's the only reason I can see to have the delegates channel currently. -AKI: Yeah, to discuss things that don't really have a comments thread. I'm convinced. +AKI: Yeah, to discuss things that don't really have a comments thread. I'm convinced. MLS: You talked about taking reflector issues, and at some point the chair group will publish them in a public forum. Does publishing after the fact preclude discussion on them as part of what we are doing, or do we need to move all technical discussion into a public forum? @@ -93,18 +96,17 @@ MF: That has to be worked out, we have not discussed it with them yet. IS: The limited membership task group would mean, I don’t think that is easily possible. ECMA has accepted members and currently there is no way to exclude them from technical work in the TC they are allowed to participate (Note: Not said in the meeting, but SPC and NFP members can per definition only participate in one TC. But if they selected e.g. 
TC39, then you can not say in a special TG of TC39 you can not participate). Some accepted members not being able to participate is not compatible with the current Ecma Bylaws and Rules. This is only my personal view. You should discuss it with the current management of ECMA. -MBS: If the limitation of the TG is that there was a limited number of participants, if it was just a subset of the group, maybe that would be okay. If no Huawei members participated in that TG, then it is okay, and if we accept the Huawei delegate by their own merits into the TG, then US delegates can choose not to participate. +MBS: If the limitation of the TG is that there was a limited number of participants, if it was just a subset of the group, maybe that would be okay. If no Huawei members participated in that TG, then it is okay, and if we accept the Huawei delegate by their own merits into the TG, then US delegates can choose not to participate. IS: Limited membership task group is, a group of 5 people and no more? Is it something like that? - MBS: Yeah something like that. IS: ??? (Note: I am not sure what I said under ???, but certainly I missed some supporting concrete explanation and contribution to understand what the real rational for proposal 2 was) MBS: I think we are trying to be extra careful. -IS: The import control and export control belongs to certain government policies. It is not international. It is critical for an international organization to not put these policies into our charter. I'm okay with having the size of TGs be limited, where only the best experts would participate. But it should not have restrictions on membership from embargoed organizations. I have seen one concrete example of this type in my long standardization work, the solled “Okubo Group” in the ITU that defined in 1988-1992 the first modern video codec that became ITU-T H261. In order to have effective and speedy standardization each participating ITU State Member was allowed to send max. 2 experts in the group. So e.g. in Germany my company Siemens had no room among the two from our national Administration. But no Administration that wanted to participate in the work was refused. +IS: The import control and export control belongs to certain government policies. It is not international. It is critical for an international organization to not put these policies into our charter. I'm okay with having the size of TGs be limited, where only the best experts would participate. But it should not have restrictions on membership from embargoed organizations. I have seen one concrete example of this type in my long standardization work, the solled “Okubo Group” in the ITU that defined in 1988-1992 the first modern video codec that became ITU-T H261. In order to have effective and speedy standardization each participating ITU State Member was allowed to send max. 2 experts in the group. So e.g. in Germany my company Siemens had no room among the two from our national Administration. But no Administration that wanted to participate in the work was refused. MBS: If we removed export control, and clarified that security embargo means not disclosed publicly yet, that would alleviate your concerns correct? @@ -112,21 +114,21 @@ IS: For me, this entire first bullet point (the first sub-bullet of item 1 on Pr AKI: Would you feel better if we came up with a specific framework for what this TG is responsible for and why? It seems you are concerned with the import/export control. 
-MBS: I think we gathered the correct feedback to adjust Proposal 2. I don't think more time to discuss Proposal 2 is a good use of time. +MBS: I think we gathered the correct feedback to adjust Proposal 2. I don't think more time to discuss Proposal 2 is a good use of time. MF: The acceptance I’m looking for is not of any specific detail of this proposal, but instead that we should have a solution prepared for these scenarios. -IS: About the private IRC channels, I cannot say anything, you know much better than I know. We are, in TC39, rather open. ECMA as an organization is membership based, so anyone can be a member who qualifies for it. If they qualify, then they can become members. Depending on the topic, we are opening up our communication to the external world. TC39 is an excellent example of collaborating with the open source community, web standards, etc. My feeling is that in principle we do this. For public live-streaming the meeting, I don't know if that level of openness is necessary. Member organizations can join ECMA and get access to internal documents and the de-facto internal live-streaming of TC39 if they have registered in the TC39 RFTC. +IS: About the private IRC channels, I cannot say anything, you know much better than I know. We are, in TC39, rather open. ECMA as an organization is membership based, so anyone can be a member who qualifies for it. If they qualify, then they can become members. Depending on the topic, we are opening up our communication to the external world. TC39 is an excellent example of collaborating with the open source community, web standards, etc. My feeling is that in principle we do this. For public live-streaming the meeting, I don't know if that level of openness is necessary. Member organizations can join ECMA and get access to internal documents and the de-facto internal live-streaming of TC39 if they have registered in the TC39 RFTC. AKI: Now that we’ve discussed the ECMA stance on proposal 2, why don’t we move on to the next item in the queue. IS: If someone has undesired features, I won't keep my mouth shut. (Note: I am not sure if I said this and if so in what context?) -WH: I’m very uncomfortable with the idea of live streaming meetings. We can satisfy the requirements of export controls by publishing notes. But live-streaming is just an invitation for harassment, considering past events such as "smooshgate". I would not want people setting up Twitter mobs because of something someone said at the meeting 20 minutes ago. I think live-streaming is going overboard. +WH: I’m very uncomfortable with the idea of live streaming meetings. We can satisfy the requirements of export controls by publishing notes. But live-streaming is just an invitation for harassment, considering past events such as "smooshgate". I would not want people setting up Twitter mobs because of something someone said at the meeting 20 minutes ago. I think live-streaming is going overboard. MBS: I think that is really reasonable. As someone who has been a host, adding that requirement to the host is not a good requirement to put on people. Is there anyone here that wants live streaming, or do we have consensus that we don’t want it. -AKI: I agree with WH's concern. The two details being harassment, and the host burden. I am affected by both of those things and agree with them. +AKI: I agree with WH's concern. The two details being harassment, and the host burden. I am affected by both of those things and agree with them. 
KM: It seems like in order for US trade law, these meetings are already public, as we don’t expose ourselves to ant-trust law because we publish notes. So I would assume that by the same logic, and maybe it's a different law, that would also apply to export-control-related things. @@ -134,11 +136,11 @@ WH: Export control is specific. AKI: Let’s let the lawyer answer that one. -DRN: I think it's fair to say that we're trying to interpret vague language. We (Microsoft) would consider a process to create publicly available material to be a public meeting under the regulations. But some companies don't share that view. We think we're currently in the clear for what we're doing here, but other companies might not share that interpretation. +DRN: I think it's fair to say that we're trying to interpret vague language. We (Microsoft) would consider a process to create publicly available material to be a public meeting under the regulations. But some companies don't share that view. We think we're currently in the clear for what we're doing here, but other companies might not share that interpretation. API: I do think that it creates a host burden, not only from the video conferencing side, if its open to anyone in the world, or if there is some limit. There might need waivers to be involved if there are employees walking by or other tenants. It makes it very difficult to pull off in a safe way. -DE: I'm really happy about the work that was put into this presentation. On Proposal 1, it looks great to me. I think it's good to build on our property of being open. On point 2, I'm really strongly in favor of that. On point 3, livestreaming meetings, that seems good to me, too. I understand the harassment concern, but if people were able to see our meetings, they would understand better how we work, and that might address some of those concerns. (something about people watching meetings causing those people to not want to attend meetings) (audible laughter.) +DE: I'm really happy about the work that was put into this presentation. On Proposal 1, it looks great to me. I think it's good to build on our property of being open. On point 2, I'm really strongly in favor of that. On point 3, livestreaming meetings, that seems good to me, too. I understand the harassment concern, but if people were able to see our meetings, they would understand better how we work, and that might address some of those concerns. (something about people watching meetings causing those people to not want to attend meetings) (audible laughter.) WH: Regarding livestreaming meetings, I’m not worried about the people who sit through the entire meeting. But what would happen is that someone posts 10-second soundbites where someone says something awkward, and those would go viral — most people would see them out of context. @@ -148,9 +150,9 @@ JHD: I think it's critical not just for meetings but also on IRC and reflector t MBS: Thinking to that, I agree with the point you are raising with having a, I’m not sure if these need to be officially sanctioned TC39 spaces or not. We should be able to have discussions in official TC39 spaces. With this proposal and 2, are there specific spaces you have in mind. This probably comes up to the TC39 IRC channel. Could you clarify? -JHD: I'm primarily interested in meetings and the delegates IRC channel. That's where we don't censor ourselves as much as when we make a GitHub post. But I think we should also have the reflector as a place to have technical discussions before we bring them public. 
If we don’t have a sanctioned place, this will result in an unsanctioned space, which can be exclusionary. +JHD: I'm primarily interested in meetings and the delegates IRC channel. That's where we don't censor ourselves as much as when we make a GitHub post. But I think we should also have the reflector as a place to have technical discussions before we bring them public. If we don’t have a sanctioned place, this will result in an unsanctioned space, which can be exclusionary. -MBS: If we wanted to keep the delegates channel closed, we could (1) have someone go through the logs, turns them into notes, and publish those, which seems like a ton of labor, or (2) maintain logs of the private channel, and maintain them on ECMA servers. Part of the guidance is that anyone can join ECMA, and anyone can join and access that data. It's also keeping a record of propriety so that we can go back and make sure that nothing went wrong. +MBS: If we wanted to keep the delegates channel closed, we could (1) have someone go through the logs, turns them into notes, and publish those, which seems like a ton of labor, or (2) maintain logs of the private channel, and maintain them on ECMA servers. Part of the guidance is that anyone can join ECMA, and anyone can join and access that data. It's also keeping a record of propriety so that we can go back and make sure that nothing went wrong. JHD: I would be fine with logs of these spaces that are not intended to be public, and are auditable. If we wanted to record meetings, that should be fine, as ECMA members are available to view them. I'm concerned about widely publishing off-the-cuff comments that delegates might not have self-censored properly. @@ -160,36 +162,35 @@ AKI: The distinction between whether we are conversing in public or what we prod **Rajiv Batra**: I just wanted to emphasize the standard I would love US companies to meet. Less about being open, but instead being public. Everything is public by default in proposal 1, and if there needs to be a space where not every word is published, then that becomes the subject of proposal 2, a second subgroup with limited participation is created. That is what helps meet the standard. -MBS: One thing I want to throw out there. Just a thought. We could always choose to take a more conservative approach, which is applying these restrictions for two months. Then we can understand whether these rules would limit participation by other members. And none of these changes need to be permanent. And then we can re-evaluate in a few months whether these changes work. I would rather we err on the side of allowing all members to participate. +MBS: One thing I want to throw out there. Just a thought. We could always choose to take a more conservative approach, which is applying these restrictions for two months. Then we can understand whether these rules would limit participation by other members. And none of these changes need to be permanent. And then we can re-evaluate in a few months whether these changes work. I would rather we err on the side of allowing all members to participate. MLS: We are talking about being public, but ECMA is a membership organization. You can’t participate in discussion or other forums without being a member. Is the result being public, or the discussion itself? -DRN: I think, if we're talking about the language of the regulations, (indecipherable). It's not clear what "public" is. Companies differ. 
From our perspective, a standards organization that is open to any companies is public, and meetings are public, even if those meetings are not open to non-members. +DRN: I think, if we're talking about the language of the regulations, (indecipherable). It's not clear what "public" is. Companies differ. From our perspective, a standards organization that is open to any companies is public, and meetings are public, even if those meetings are not open to non-members. MBS: The one thing that would distinguish this meeting from the public IRC channel, is that this meeting has notes that are published, where the IRC channel is a black box. The problem that we run into is a disagreement about how to interpret that guidance. - -SYG: To clarify, what are you asking here? These changes are not motivated by making our work more efficient. They're motivated by our desire to reduce legal risk. And that's not something that we can answer. Is this what your company counsel is recommending? +SYG: To clarify, what are you asking here? These changes are not motivated by making our work more efficient. They're motivated by our desire to reduce legal risk. And that's not something that we can answer. Is this what your company counsel is recommending? MF: Proposal 1 without the live streaming meetings is OK with us. Proposal 2 is a measure that we would really like to be in place so that we don’t have to scramble if the situation arises. It also lowers the risk for this process to be used. Proposal 2 is a like to have, Proposal 1 except for point 3 is a must have. -IS: In Proposal 1, "publish all technical discussion", organizations like ISO, ITU, etc. DO NOT publish everything. There is no formal SDO that does that, to my knowledge. (Note: Did not say in the meeting but we have a conflict here: TC39 has a RF patent policy which includes creating a “walled garden” and members have to register to be inside of that “garden” committing themselves to RF policies. We opened the garden to the public already as much as possible, where also parties - maybe only in a listening mode - are sitting with no patent licensing commitments at all, which can be a danger for a RF project). In TC39, we've had an excellent practice. So we can change it a little, but why do we have to change it completely? Did we get signals from US authorities that we have to change? Or is it only an internal interpretation? +IS: In Proposal 1, "publish all technical discussion", organizations like ISO, ITU, etc. DO NOT publish everything. There is no formal SDO that does that, to my knowledge. (Note: Did not say in the meeting but we have a conflict here: TC39 has a RF patent policy which includes creating a “walled garden” and members have to register to be inside of that “garden” committing themselves to RF policies. We opened the garden to the public already as much as possible, where also parties - maybe only in a listening mode - are sitting with no patent licensing commitments at all, which can be a danger for a RF project). In TC39, we've had an excellent practice. So we can change it a little, but why do we have to change it completely? Did we get signals from US authorities that we have to change? Or is it only an internal interpretation? MBS: I think it’s less about the authorities and more about that the counsels will not allow delegates to continue participating without these changes. -IS: 40 years ago, I was in Austria involved in a much more sensitive process. That was when we had East and West. 
We did set up the first computer network connections between East and West. What we did was we talked with the involved governments, security guys, technical guys on both sides to have a common understanding what you can do in sucha project. I have the feeling we are talking only to ourselves. ECMAScript is a web scripting language specification. It's not a security product, just a tool for many purposes. +IS: 40 years ago, I was in Austria involved in a much more sensitive process. That was when we had East and West. We did set up the first computer network connections between East and West. What we did was we talked with the involved governments, security guys, technical guys on both sides to have a common understanding what you can do in sucha project. I have the feeling we are talking only to ourselves. ECMAScript is a web scripting language specification. It's not a security product, just a tool for many purposes. MBS: There are council on this call that have done a ton of research. Would anyone like to share their opinion? -DRN: What this comes down to is that each company needs to arrive at their own decision. We came up with an interpretation that allows our delegates to continue participating in TC39 as it currently stands. I can't point to past precedents because this hasn't happened before in this way. +DRN: What this comes down to is that each company needs to arrive at their own decision. We came up with an interpretation that allows our delegates to continue participating in TC39 as it currently stands. I can't point to past precedents because this hasn't happened before in this way. *Rajiv*: To clarify, are we interpreting this in a vacuum. Always, but also we are responding to specific published guidance from a US regulator that sets standards. They are probably thinking of certain more sensitive technology. The regular published this. There is a question about how risk averse the council is. Some people would not be able to participate because of this. The history is useful but this is real. -DRN: I don't think there's specific guidance on what standards bodies can and cannot do. There's no official guidance. +DRN: I don't think there's specific guidance on what standards bodies can and cannot do. There's no official guidance. *Rajiv*: There is always ambiguity. -DRN: I think the issue is that many companies are having good minds that can differ. So the goal is how we can find the lowest common denominator where all organizations can participate. +DRN: I think the issue is that many companies are having good minds that can differ. So the goal is how we can find the lowest common denominator where all organizations can participate. MF: Our goal here is not to give advice to companies. I was very open about what my company is comfortable with. I want to see if those changes are OK and to see if there are other changes that will make companies more comfortable. @@ -213,7 +214,7 @@ AKI: A lot of these things are regarding ECMA, which we can’t speak for. I don API: One thing that came up, individual members asked the BIS for guidance, that would not necessarily be acceptable to other members. If the end result is that the notes and logs are published and they came back with guidance, the only risk is that you might not like the answer they give. Why are we not just asking ECMA to ask BIS. -DRN: (indecipherable) People are asking not for point-by-point guidance but more general guidance. They may release additional guidance. 
+DRN: (indecipherable) People are asking not for point-by-point guidance but more general guidance. They may release additional guidance. MBS: With ECMA being a European org, is it appropriate for them to ask BIS for guidance? @@ -227,13 +228,13 @@ AKI: It’s the part of the US government that handles …. Regarding exports. KM: Could we in order to move forward, by applying this for this meeting, or instead not use private channels for the duration of this meeting, that way the problem is sidestepped for the next three days. -MF: We don't want any of the changes we're requesting to rely on delegates acting in good faith. If you're asking delegates not to use private channels, you're relying on them not to do that. +MF: We don't want any of the changes we're requesting to rely on delegates acting in good faith. If you're asking delegates not to use private channels, you're relying on them not to do that. KM: We could just close the IRC channel for the duration of the meeting. MBS: We could agree on these more conservative approaches today, and move forward with discussing these other topics. -KM: We say that from this moment to the end of the meeting, we publish the IRC logs. That would enable us to have a productive meeting. +KM: We say that from this moment to the end of the meeting, we publish the IRC logs. That would enable us to have a productive meeting. API: Wouldn’t that only be a concern if they (Huawei) are actually here? @@ -253,13 +254,13 @@ MBS: We should get through this, lunch is in 8 minutes. API: We have a pass for this meeting because they haven’t been added yet. -DRR: I think there's a meta-question here. Even if we say, let's shut down the channel for the duration of the meeting, there are more questions we can ask. If someone makes a new channel, if we have conversations in the hallway, are those things we need to publish as well? There's a weird inability to make a distinction between a TC39 channel and "Daniel's BFF" channel. +DRR: I think there's a meta-question here. Even if we say, let's shut down the channel for the duration of the meeting, there are more questions we can ask. If someone makes a new channel, if we have conversations in the hallway, are those things we need to publish as well? There's a weird inability to make a distinction between a TC39 channel and "Daniel's BFF" channel. MBS: Not a lawyer. But that we clearly enumerate the sanctioned channels. Members can communicate in other channels, but they are not official. This is about an enumeration of these channels. DRR: You could imagine that there was discussion on a non-official channel as well. -MBS: JHD brought that up, having the undesired effect of creating many back-channels, which is exactly the kind of thing we want to avoid. I think for the sake of this particular conversation, we should scope it to the list of channels here. +MBS: JHD brought that up, having the undesired effect of creating many back-channels, which is exactly the kind of thing we want to avoid. I think for the sake of this particular conversation, we should scope it to the list of channels here. MBS: What’s next on the queue? @@ -269,12 +270,11 @@ MF: Is it OK if we look for consensus on individual points? MLS: Is this permanent or this meeting? - AKI: We could call it going forward. MLS: Permanent, until we change it. -MBS: In our next meeting and the following meeting, we will discuss in a timebox the effects of these changes. Can we reserve a timebox in both of the next two meetings to discuss this again? 
+MBS: In our next meeting and the following meeting, we will discuss in a timebox the effects of these changes. Can we reserve a timebox in both of the next two meetings to discuss this again? (No objections) @@ -284,7 +284,7 @@ MF: Is there opposition to TC39 creating a communication about these public chan AKI: Yeah, we can do that. I hear no opposition, so we’re good. -BT: Sounds like consensus to write documentation. We'll do that. +BT: Sounds like consensus to write documentation. We'll do that. MF: For the IRC channels. @@ -302,7 +302,7 @@ WH: #tc39-delegates is not public. MBS: First I would like to propose that we maintain public logs for all currently public channels. -JHD: We already do that. The logs are in the subject line of #tc39. +JHD: We already do that. The logs are in the subject line of #tc39. MBS: We should commit to it going forward. @@ -378,7 +378,7 @@ AKI: For (2d) [in the slides], we have consensus? MLS: I have some concerns about names being used. -AKI: This is only for going forward. When have we actually had a technical discussion on the Reflector? +AKI: This is only for going forward. When have we actually had a technical discussion on the Reflector? MBS: Can I make a suggestion that can allow us to move forward. Would you be comfortable with this for at least this meeting, and we can come back with a proposal for publishing summaries, and see if that is acceptable. @@ -459,9 +459,6 @@ API: We can submit a PR that includes this as well. DE: We can also discuss this. - - - ## Conclusion - "TC39 issues a public communication on its existing property of being open" is approved @@ -476,8 +473,8 @@ DE: We can also discuss this. - Chairs will document TC39's implementation of Ecma's invited expert policy, as it's put in practice in TC39. - We will work on a proposal for limited participation technical groups for discussion of technical topics that cannot be disclosed publicly in a reasonable timeframe. - ## Test262 Report + Presenter: Leo Balter (LEO) - [Test-262 repo](https://github.com/tc39/test262) @@ -490,7 +487,9 @@ LEO: Any questions? (silence) AKI: I'm impressed by the number of contributions you're responsible for. + ## Elections at TC39: Introducing a process + Presenter: Yulia Startsev (YSV) [Slides](https://docs.google.com/presentation/d/1u435-e43kQNWfYONE89CdzpPitCKLItXLObeuvdhRr4/edit#slide=id.p) @@ -499,31 +498,31 @@ YSV: (presents slides) AKI: It’s a simple majority, from ECMA. -YSV: There's still some discussion going on. I just wanted to bring this to the room to discuss it. +YSV: There's still some discussion going on. I just wanted to bring this to the room to discuss it. -IS: We would need a bylaws change if voting were not a simple majority. It could be done but for now it's not done, so we would only be able to do a simple majority. But there are bylaws changes relatively often. The overriding Swiss law is quite specific that it is a simple majority, which is currently what we use. +IS: We would need a bylaws change if voting were not a simple majority. It could be done but for now it's not done, so we would only be able to do a simple majority. But there are bylaws changes relatively often. The overriding Swiss law is quite specific that it is a simple majority, which is currently what we use. YSV: I think that’s a good information position to have. -JHD: In 2018, at one point, KS and I told the committee that we wanted to be co-editors and skip the hassle of an election. Then we were sent out of the room for an hour and a half. 
Two months ago, deciding that we all had enough time and commitment from our organizations, we wanted to run as a slate, but unannounced to us, the committee decided to vote on us individually. I find it bad that process decisions are made while delegates are out of the room. It’s disturbing to me that multiple years in a row, the preferences of the candidates has been disregarded for the editor elections. I’m really glad that Yulia has put this together. I would expect that delegates, even if they are running for a position, get a say in how that election is conducted. +JHD: In 2018, at one point, KS and I told the committee that we wanted to be co-editors and skip the hassle of an election. Then we were sent out of the room for an hour and a half. Two months ago, deciding that we all had enough time and commitment from our organizations, we wanted to run as a slate, but unannounced to us, the committee decided to vote on us individually. I find it bad that process decisions are made while delegates are out of the room. It’s disturbing to me that multiple years in a row, the preferences of the candidates has been disregarded for the editor elections. I’m really glad that Yulia has put this together. I would expect that delegates, even if they are running for a position, get a say in how that election is conducted. -YSV: Your second topic is addressed by the presentation. On the first topic, it would be helpful for the candidates to give a presentation on what they intend to be doing. It wasn't clear that there was a plan of work that would be happening over the year. Another issue is that we could get into a situation where someone nominates themselves to the chair group, and then all the chairs should feel they should go all-in or nothing, and someone could be brought into that role by social pressure rather than qualifications. I want to impose specific goals about how to discuss candidates. If it happens that we have an unsuccessful election, I think we could work through it in this way. +YSV: Your second topic is addressed by the presentation. On the first topic, it would be helpful for the candidates to give a presentation on what they intend to be doing. It wasn't clear that there was a plan of work that would be happening over the year. Another issue is that we could get into a situation where someone nominates themselves to the chair group, and then all the chairs should feel they should go all-in or nothing, and someone could be brought into that role by social pressure rather than qualifications. I want to impose specific goals about how to discuss candidates. If it happens that we have an unsuccessful election, I think we could work through it in this way. -KG: I wanted to confirm, you showed a picture of a ballot. For both editor and chair group, we've previously said that we are looking for a specific number. But this would mean that everyone who reaches a majority would be able to be added to the group? +KG: I wanted to confirm, you showed a picture of a ballot. For both editor and chair group, we've previously said that we are looking for a specific number. But this would mean that everyone who reaches a majority would be able to be added to the group? YSV: Yes that is correct. -CM: I somewhat concur with JHD’s reservations, particularly the editorship. If you have a group of more than 1 person who has a concept that they're going to divvy up responsibilities in some way and present themselves as a slate, we should allow for that. 
And requiring that we vote individually seems unjustified. I think that if people want to run a group, they should be able to run as a group. +CM: I somewhat concur with JHD’s reservations, particularly the editorship. If you have a group of more than 1 person who has a concept that they're going to divvy up responsibilities in some way and present themselves as a slate, we should allow for that. And requiring that we vote individually seems unjustified. I think that if people want to run a group, they should be able to run as a group. -YSV: I agree it could be disturbing if you want to make plans. I think it could help for the candidates to give a presentation on their plans if elected together. +YSV: I agree it could be disturbing if you want to make plans. I think it could help for the candidates to give a presentation on their plans if elected together. AKI: Where does that leave us on this proposal? -YSV: I would propose that we do this iteratively. If it comes to the point that this process creates problems, then we can fix it later. For now, maybe we can adopt what I’ve suggested and use that for the chair elections in this meeting. +YSV: I would propose that we do this iteratively. If it comes to the point that this process creates problems, then we can fix it later. For now, maybe we can adopt what I’ve suggested and use that for the chair elections in this meeting. JHD: Feedback is great. In my job, the way I get feedback is not people talking in a room, instead by my manager collecting private feedback from colleagues, and then organizing it, and then providing it to me. I would feel this way even if I wasn’t someone this applied to. It really sucks to have to leave the room for an hour, and have people potentially talk negatively about me. I would really prefer a process where this feedback is delivered in a more polite setting than discussed openly in this room. I don’t think that someone leaving a room and having people talk about them is fair. -KG: I share some of your feelings about the social dynamics of the thing, but especially for chairs, I think it is important that if someone has something they are concerned about, that might interfere with someone's ability to serve as an effective chair that they bring that up with the rest of the group before the rest of the group votes. That affects the entire committee. It is certainly something I would want to know about before voting. For editors, less so, because the editor role is more technical. Editors have to coordinate with members of the public and committee in a somewhat similar way. So I don't think performance reviews are the right analogy to draw here. +KG: I share some of your feelings about the social dynamics of the thing, but especially for chairs, I think it is important that if someone has something they are concerned about, that might interfere with someone's ability to serve as an effective chair that they bring that up with the rest of the group before the rest of the group votes. That affects the entire committee. It is certainly something I would want to know about before voting. For editors, less so, because the editor role is more technical. Editors have to coordinate with members of the public and committee in a somewhat similar way. So I don't think performance reviews are the right analogy to draw here. JHD: I agree with you that if there is something that impacts the ability to do the job, that should be brought up. It seems like it is “alright, they are out of the room, what can we say about them”. 
@@ -563,7 +562,7 @@ MLS: Who is the host? AKI: IBM in Budapest in November -DE: There was a plurality of IBM at Budapest, there were comments on the reflector about a 3rd place for a runoff, but it would be hard to get everyone to fill out the form again. Ultimately it seemed like the best option with advanced notice. There's an issue, which is that we want to preserve the confidentiality of the vote. I hope this can be a way to gather feedback from the committee. I agree with AKI that, I mean, I want to learn more about MLS's concerns here so if we do this again, we know what the problems were. +DE: There was a plurality of IBM at Budapest, there were comments on the reflector about a 3rd place for a runoff, but it would be hard to get everyone to fill out the form again. Ultimately it seemed like the best option with advanced notice. There's an issue, which is that we want to preserve the confidentiality of the vote. I hope this can be a way to gather feedback from the committee. I agree with AKI that, I mean, I want to learn more about MLS's concerns here so if we do this again, we know what the problems were. MLS: This is the first time that I heard that the official meeting was in Budapest. @@ -581,8 +580,7 @@ YSV: Do we go to elections, or do we want time to think? AKI: I don’t know the answer to that question. - -YSV: If we follow the process I suggested, we're a bit behind. We have a presentation for the chair group. Should we follow up with that? +YSV: If we follow the process I suggested, we're a bit behind. We have a presentation for the chair group. Should we follow up with that? AKI: I certainly would like to get that out of the way. The schedule is in a hilarious mess. @@ -664,14 +662,12 @@ YSV: Everyone who is volunteering to take on this role please leave the room. (candidates leave the room) -## Conclusion +### Conclusion KG: Ya’ll are chairs. (Note: the Chair Team 2020 (all four in a “package”) has been elected by consensus by TC39.) - ## Editor’s Report (ECMA262) - Presenter: Jordan Harband (JHD) - [Slides](https://j.mp/262editor202002) @@ -694,7 +690,6 @@ DE: We can follow up offline. I mentioned earlier that I prefer if we got implem Presenter: Istvan Sebestyen (IS) - - [Slides](https://github.com/tc39/agendas/blob/master/2020/02.GA-2020-12_R1.pdf) IS: (presents slides) will focus on new / value add items. Many points in the slides were already discussed in the earlier discussions, so he is only bringing those topics which are additional. @@ -707,14 +702,13 @@ IS: This is new to me. I thought he was doing it voluntarily :-). MBS: The way in which I frame it may not be totally accurate. - IS: Indeed it was for the JS foundation, not yet during the Bocoup membership..He did a brilliant job. -MBS: So one thing that I think we can look at, and I don't think it fills all the gaps, I think support from ECMA would be good here so we have consistency, but that we are floating at OpenJSF, about trying to see if there are people from our foundation that we could sponsor to come. For what it's worth, I don't think that would be sufficient to call this "solved", but I think it would be a good way to bring in new community members. But I still think we need to follow up with ECMA and Istvan. +MBS: So one thing that I think we can look at, and I don't think it fills all the gaps, I think support from ECMA would be good here so we have consistency, but that we are floating at OpenJSF, about trying to see if there are people from our foundation that we could sponsor to come. 
For what it's worth, I don't think that would be sufficient to call this "solved", but I think it would be a good way to bring in new community members. But I still think we need to follow up with ECMA and Istvan. IS: That would be great. I always accept any sort of help and appreciate anybody that did work. -MBS: The big thing to me is that we have something identifiable and scalable. I think it would be good to set it up in such a way that there isn't a single point of failure. +MBS: The big thing to me is that we have something identifiable and scalable. I think it would be good to set it up in such a way that there isn't a single point of failure. IS: I could also imagine that it is not one person, but it really should be someone in the meeting room, or remotely participating. Aki has asked if we can publish the last ECMA TC39 minutes, but in these minutes we have the technical notes and it starts with “you cannot publish” this..., So we cannot publish it publicly with that notice. Obviously Ecma internally it is published as a TC39 and GA document. But TC39 wants full public transparency. @@ -761,14 +755,12 @@ MBS: Does anyone have objections to delaying the coffee break? MBS: Thank you IS - ## ECMA-402 Update Presenter: Valerie Young (VYG) - [Slides](https://docs.google.com/presentation/d/19w-MiEmxsrGEp8F4LR6DfLWadimt8fW1wgwSE33FCK8/edit?usp=sharing) - VYG: (presents slides) SFC: Note that the MessageFormat Working Group is formalized under the Unicode Consortium, not TC39, so it's governed by the Unicode Consortium rules. @@ -789,15 +781,12 @@ MBS: My understanding regarding public vs private, there are no plans or need fo (scheduled break) - - ## ToInteger normalizes -0 to +0 Presenter: Jordan Harband (JHD) - [PR](https://github.com/tc39/ecma262/pull/1827) - JHD: (Presents PR) JHD: Are we ok with this largely-editorial change? @@ -842,24 +831,21 @@ JHD: Are there any objections for proceeding with this change? This also needs t (silence) - MM: This is great! - ### Conclusion consensus reached ## Async initialization for stage 1 - Presenter: Bradley Farias (BFS) - [Slides](https://docs.google.com/presentation/d/1DsjZAzBjn2gCrr4l0uZzCymPIWZTKM8KzcnMBF31HAg/edit?usp=sharing) BFS: (presents slides) -JHD: `new Promise` _does_ catch exceptions. +JHD: `new Promise` *does* catch exceptions. MM: It causes the promise to become a rejected promise. @@ -899,7 +885,7 @@ JHD: I understand why returning a promise in the super-class constructor would b BFS: It will still not install the fields on anything but the promise. -JHD: The super class would define a then method on the prototype. You can’t wait in a constructor, your subclass can have a then method that calls super.then. What can you not do with those sorts of patterns that you need async constructor for? +JHD: The super class would define a then method on the prototype. You can’t wait in a constructor, your subclass can have a then method that calls super.then. What can you not do with those sorts of patterns that you need async constructor for? BFS: You can’t really coordinate in a way that prevents people from touching your class before it’s initialized. You would be trusting your subclasses to not do anything to your fields before it’s initialized @@ -907,7 +893,7 @@ JHD: And when you say fields, you mean setters in particular? BFS: No, anything. If a subclass were to change the length property, and then it gets initialized later in the superclass async initialization it would be bad. The problem is just a coordination problem. 
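(Note: not said in the meeting; the following JavaScript sketch only illustrates the status-quo "coordination problem" BFS describes, for readers of the notes. `openSocket` and the class names are illustrative stand-ins, and the proposed `async` constructor / `await.super` syntax is deliberately not shown because its semantics were still open at this point.)

```js
// A stand-in for some asynchronous resource acquisition, used only so the
// sketch below runs on its own.
const openSocket = (url) => Promise.resolve({ url, send() {} });

// Option A today: the constructor cannot await, so the instance escapes in a
// "not ready yet" state and every consumer (including subclasses) must
// remember to await `ready` before touching `socket`.
class ConnectionA {
  constructor(url) {
    this.socket = null; // observable before initialization finishes
    this.ready = openSocket(url).then((socket) => {
      this.socket = socket;
    });
  }
}

// Option B today: a static async factory hides the partially constructed
// instance, but only if every subclass and caller cooperates with it.
class ConnectionB {
  static async create(url) {
    const conn = new ConnectionB();
    conn.socket = await openSocket(url);
    return conn;
  }
}

const conn = await ConnectionB.create("wss://example.invalid"); // top-level await
```
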
-KG: Usually with classes, you don't want to expose your class in a partially constructed state. The way you normally do that is hide data until you are initialized yet. With this proposal, you get an instance, it just isn't "ready yet". +KG: Usually with classes, you don't want to expose your class in a partially constructed state. The way you normally do that is hide data until you are initialized yet. With this proposal, you get an instance, it just isn't "ready yet". JHD:I can understand why you don’t want a partially-constructed instance out there. But in the async constructor approach, how are you preventing someone from synchronously receiving something that’s not ready? @@ -921,7 +907,7 @@ JHD: So, to paraphrase, with just async constructor alone, you're saying that th BFS: Correct, that is the idea -MM: In the baseline design here, and you do an await.super call instead of a super call, so that you have what is effectively a normal construction process that is simply spread out over multiple turns, where you are always coordinating with awaits and asyncs, I would argue that the consistency of that design extends into that you need `await.new` not `await new`; with the space, it is not adequate. By the time you continue, you need to have the instance. Just like you can't do a normal super chain, you shouldn't be able to do a promise chain. +MM: In the baseline design here, and you do an await.super call instead of a super call, so that you have what is effectively a normal construction process that is simply spread out over multiple turns, where you are always coordinating with awaits and asyncs, I would argue that the consistency of that design extends into that you need `await.new` not `await new`; with the space, it is not adequate. By the time you continue, you need to have the instance. Just like you can't do a normal super chain, you shouldn't be able to do a promise chain. BFS: I am neutral. I’m just trying to make this easy. That seems fine to discuss at stage 1 or 2 @@ -941,32 +927,31 @@ BFS: If we were trying to make a subclass X of Y, there is no way to coordinate BCS: I’m proposing that it just be a normal class. You wouldn’t have the constructor return anything. -BFS: I don't understand how it would work. Let's discuss offline. +BFS: I don't understand how it would work. Let's discuss offline. -JKN: I would argue that this makes APIs less consistent. It seems like if we can have a language feature that makes this more straightforward, we can say, sometimes this returns a promise, sometimes a value, but at least it's a more consistent and familiar interface. +JKN: I would argue that this makes APIs less consistent. It seems like if we can have a language feature that makes this more straightforward, we can say, sometimes this returns a promise, sometimes a value, but at least it's a more consistent and familiar interface. BCS: So, you're saying the idea is for building apis, to be consistent. But then you would need the entire API to be built using async classes otherwise it wouldn’t be consistent. JKN: You already have classes where some methods are async and others are sync -BCS: You could say that you just always call methods to create objects. Don't worry about the constructor. +BCS: You could say that you just always call methods to create objects. Don't worry about the constructor. -JKN: With precedence for some classes that have async and sync methods on it. Look at Node `fs`, for example, which has some Promise methods. 
Sometimes when I’m writing code some data is sync and some is async. +JKN: With precedence for some classes that have async and sync methods on it. Look at Node `fs`, for example, which has some Promise methods. Sometimes when I’m writing code some data is sync and some is async. BCS: We are talking about methods, not constructors. -BFS: Just to clarify, this proposal is about developers having a better experience in JavaScript. I'm not arguing that you can't make async stuff work. I'm saying that it's hard to do it properly and we should make it easier. +BFS: Just to clarify, this proposal is about developers having a better experience in JavaScript. I'm not arguing that you can't make async stuff work. I'm saying that it's hard to do it properly and we should make it easier. KG: I’m in favor of this, not any particular solution, but that is a post stage 1 concern. I am strongly in favor of stage 1. This is not a stage 1 concern, but I want to mention it. An advantage of putting the async keyword on the class instead of the constructor, would allow you to use await in initializers for fields. - MM: Yeah. JHX: At first it seems useful, but I don't know how it would work. I really hope we can see what other languages do, especially languages like C# that have had async for a while. BFS: C# does not, but for other languages we can certainly investigate. -DRR: This is probably a later-stage concern, but there are some questions about the axes of whether asynchronous classes can extend a non-async class and vice-versa. My assumption is that you can have both types of classes extend the opposite type. But I think there will be some confusion around that. How do you have a sync side of an async class extend another type? Because of similar considerations, like where to the prototype methods get installed and the like, would be hard to think about as well. I have teams at Microsoft who ask me about async initializers (?), but I’m not sure how deep a person gets before this gets confusing. I’m concerned that while we may fix some problems, we may make some things harder as well. +DRR: This is probably a later-stage concern, but there are some questions about the axes of whether asynchronous classes can extend a non-async class and vice-versa. My assumption is that you can have both types of classes extend the opposite type. But I think there will be some confusion around that. How do you have a sync side of an async class extend another type? Because of similar considerations, like where to the prototype methods get installed and the like, would be hard to think about as well. I have teams at Microsoft who ask me about async initializers (?), but I’m not sure how deep a person gets before this gets confusing. I’m concerned that while we may fix some problems, we may make some things harder as well. BFS: My inkling is on how this is going to go, is that we make this somewhat a limited feature. I would not expect that you are able to mix these kinds of classes. @@ -974,13 +959,12 @@ BFS: Can we reach consensus for stage 1? JWK: I'm wondering if this kind of async class works with high-level class (clz => class A extends clz {}) that the subclass doesn't know who the parent class is. Does it throw when a normal class extends a async class, if not, how to call the super in the subclass? -BFS: I would look at this in a later stage. I don’t have concrete semantics. My inkling is that it would throw. 
I don't expect mixing an abstract subclass or superclass to work if it does not understand the initialization timing. +BFS: I would look at this in a later stage. I don’t have concrete semantics. My inkling is that it would throw. I don't expect mixing an abstract subclass or superclass to work if it does not understand the initialization timing. RPR: No objections? (silence) - ### Conclusion Consensus reached for stage 1 @@ -1002,19 +986,19 @@ DE: (continues to present slides) DE: Questions? -WH: The trouble with tribbles and precision, is that they multiply. I've had a number of conversations with DE about this. I'm strongly in the `Decimal128` camp for a few reasons. (1) `BigDecimal` is hard to use. The precision can get away from you if you are not careful, getting into the hundreds of thousands of digits. You will be forced to have functions which do arithmetic on decimals also take a precision parameter. Unfortunately that doesn't work well either, except in simple cases. For example, if you have a function that computes the harmonic mean of N numbers (take the reciprocal of every number, take the mean, and take the reciprocal again) and just take a precision parameter for the intermediate divisions, you’ll get the correct value `1.6m` if you take the harmonic mean of `1m` and `4m` with a precision of two digits after the decimal point, but this will blow up if you take the harmonic mean of `1000000m` and `4000000m`. You end up with nonsensical results with only a single precision argument. +WH: The trouble with tribbles and precision, is that they multiply. I've had a number of conversations with DE about this. I'm strongly in the `Decimal128` camp for a few reasons. (1) `BigDecimal` is hard to use. The precision can get away from you if you are not careful, getting into the hundreds of thousands of digits. You will be forced to have functions which do arithmetic on decimals also take a precision parameter. Unfortunately that doesn't work well either, except in simple cases. For example, if you have a function that computes the harmonic mean of N numbers (take the reciprocal of every number, take the mean, and take the reciprocal again) and just take a precision parameter for the intermediate divisions, you’ll get the correct value `1.6m` if you take the harmonic mean of `1m` and `4m` with a precision of two digits after the decimal point, but this will blow up if you take the harmonic mean of `1000000m` and `4000000m`. You end up with nonsensical results with only a single precision argument. -DE: I don’t know if you saw, the rounding parameter had two modes for significant digits as well as maximum decimal places. +DE: I don’t know if you saw, the rounding parameter had two modes for significant digits as well as maximum decimal places. WH: Yes. It’s used like that in financial calculations, which brings us to the next topic. I don’t think it’s ok to say that we’ll never support things like sqrt, exp and so on on decimals. I don’t think it’s ok to say that you need to implement those as a user library. -DE: I didn't say never. I said as a follow-on proposal, but I want to hear more feedback about if it should be included in this one. +DE: I didn't say never. I said as a follow-on proposal, but I want to hear more feedback about if it should be included in this one. WH: You said as a follow on proposal but there is an enormous gap between the complexity of supporting those in BigDecimal vs. Decimal 128. 
If we’re doing Decimal128, we can just get those from the standard library and be done. If we go with BigDecimal, we’d need to compute those to arbitrary precision, and I don't want to include Mathematica in every copy of the language (audible chuckles). Just the complexity difference is large if you want to get the value pi. In Decimal128 it’s just a constant — we know the value of pi to 34 decimal digits. In BigDecimal, pi would take a precision argument, and you know that people will ask for the first million decimal places of pi. This puts me firmly in the position of supporting `Decimal128`. Doing math will be much harder to implement if we choose `BigDecimal`. Furthermore, `BigDecimal` is much harder to *use* because of the proliferation of precision arguments. WH: The next question is, what happens when you convert a Number to a Decimal, regardless of the variant of Decimal we pick. If it is an exact number, OK, but what happens if you do 1/10 using regular Numbers, and then ask to convert to Decimal. “You can’t do that” is a fine answer. Another answer is that you get whatever mathematical value you had but in a Decimal format, which happens to be `0.1000000000000000055511151231257827m` for Decimal128 or `0.1000000000000000055511151231257827021181583404541015625m` for BigDecimal. -DE: We have an open issue about this question. The QuickJS author implemented BigDecimal, and he raised this particular question. You're right: there are multiple possible answers, and I'm not sure what the best possible answer is. +DE: We have an open issue about this question. The QuickJS author implemented BigDecimal, and he raised this particular question. You're right: there are multiple possible answers, and I'm not sure what the best possible answer is. WH: Either you don’t do that, or you give an exact value. Are you including a precision/rounding parameter in the conversion? What I would not be OK with is, if you do this then you convert a Number to a String, and then to a Decimal. @@ -1022,7 +1006,7 @@ DE: There are some people on the issue tracker that have supported that specific WH: ok -SFC: Regarding conversion from numbers to decimals, the other way that I understand is well-defined is basically to take the number with the fewest significant digits that round-trips when converting back and forth to the double. See [double-conversion ToShortest](https://github.com/google/double-conversion/blob/master/double-conversion/double-to-string.h#L142). That’s the conversion that ICU and V8 uses. But again, this is not a stage 1 blocker, so I am in favor of stage 1 for the proposal. +SFC: Regarding conversion from numbers to decimals, the other way that I understand is well-defined is basically to take the number with the fewest significant digits that round-trips when converting back and forth to the double. See [double-conversion ToShortest](https://github.com/google/double-conversion/blob/master/double-conversion/double-to-string.h#L142). That’s the conversion that ICU and V8 uses. But again, this is not a stage 1 blocker, so I am in favor of stage 1 for the proposal. API: I posted a link in IRC for the C++ implementation of this stuff. We use a parameter or a standard conversion. @@ -1052,7 +1036,7 @@ DE: The current operator overloading proposal does permit operator overloading o MM: I am. I put the extended literal syntax in the same category as operator overloading. I would really like to see the prelude for the existing numeric types as well as Decimal. 
My inclination would be that once we have operator overloading and extended literals. After that we don’t need new values types. -DE: The cross-realm registry was the part I couldn't piece out. Is that what you were thinking about, too? +DE: The cross-realm registry was the part I couldn't piece out. Is that what you were thinking about, too? MM: Yeah, that is the main one. I actually do have something in mind, but it is not the kind of thing I would take to the committee for stage advancement. @@ -1060,12 +1044,10 @@ DE: Please get in touch if you have further feedback. (thumbs up/silence) - -## Conclusion +### Conclusion Stage 1 reached - ## Preserve Host Virtualizability Presenter: Mark Miller @@ -1080,25 +1062,24 @@ MM: Could you give me an example of the kind of thing you're worried about? MF: I’m not worried about a particular thing. Can I get an idea of where you would like to draw this line on how much virtualizability you would like? -MM: What I would like to see is that the criterion is host-virtualizability, and the syntactic issues are the user-provided machine instruction set, that the virtualizability does not change the instruction set. But I would want it ideally to be flawless. Through this research we will discover what we can't virtualize, and enumerate/grandfather them. The ideal is that when JS runs on any host, JS can act as any other host. +MM: What I would like to see is that the criterion is host-virtualizability, and the syntactic issues are the user-provided machine instruction set, that the virtualizability does not change the instruction set. But I would want it ideally to be flawless. Through this research we will discover what we can't virtualize, and enumerate/grandfather them. The ideal is that when JS runs on any host, JS can act as any other host. KM: Can you go back to your slide on peek/poke? You don’t require enumerability. How do you find them if they are not enumerable? -MM: We introduced in ES5 exactly for this purpose `Object.getOwnPropertyNames`. This is the reason we introduced that. In ES3, the primordial objects had a dangerous amount of crap on them. We consulted with the browser vendors and were able to determine that that method and `Object.getPrototypeOf` make it safe to walk the prototype chain and not miss any dangerous properties. +MM: We introduced in ES5 exactly for this purpose `Object.getOwnPropertyNames`. This is the reason we introduced that. In ES3, the primordial objects had a dangerous amount of crap on them. We consulted with the browser vendors and were able to determine that that method and `Object.getPrototypeOf` make it safe to walk the prototype chain and not miss any dangerous properties. SYG: If the threat to virutalizability is not in theory but in practice. If we tighten the spec language, that the host shouldn’t do this, what teeth do we have? It might not be malicious. If I want to provide peek/poke for my users or my runtime. -MM: That the decision by the host is to provide peek/poke and be virtualizable? The committee would take the stance that any implementation, unless it gets an exception, would be a non-conformant implementation. +MM: That the decision by the host is to provide peek/poke and be virtualizable? The committee would take the stance that any implementation, unless it gets an exception, would be a non-conformant implementation. SYG: If you point out that the threat is in practice, not in theory... 
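For illustration, the kind of primordial sweep MM describes with `Object.getOwnPropertyNames` and `Object.getPrototypeOf` might look roughly like the sketch below; the `isAllowed` predicate and the starting roots are hypothetical, not part of any proposal.

```js
// Walk every reachable primordial and enumerate its own properties,
// including non-enumerable ones, so nothing dangerous is missed.
function* walkPrimordials(roots) {
  const seen = new Set();
  const queue = [...roots];
  while (queue.length > 0) {
    const obj = queue.shift();
    const isObject = typeof obj === "object" || typeof obj === "function";
    if (obj === null || !isObject || seen.has(obj)) continue;
    seen.add(obj);
    for (const name of Object.getOwnPropertyNames(obj)) {
      const desc = Object.getOwnPropertyDescriptor(obj, name);
      yield { obj, name, desc };
      if ("value" in desc) queue.push(desc.value);
      if (desc.get) queue.push(desc.get);
      if (desc.set) queue.push(desc.set);
    }
    queue.push(Object.getPrototypeOf(obj)); // also walk the prototype chain
  }
}

// Hypothetical allowlist check: delete anything not known to be safe.
// Reflect.deleteProperty returns false (rather than throwing) when a
// property is non-configurable, which is exactly the deletability
// question raised below.
function removeUnknownProperties(roots, isAllowed) {
  for (const { obj, name } of [...walkPrimordials(roots)]) {
    if (!isAllowed(obj, name)) {
      Reflect.deleteProperty(obj, name);
    }
  }
}
```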
-MM: Right now, we're safe in practice because we've succeeded at getting rid of all this dangerous crap. But the spec does not state that these things should be deletable. +MM: Right now, we're safe in practice because we've succeeded at getting rid of all this dangerous crap. But the spec does not state that these things should be deletable. MM: For example, on Firefox, the legacy RegExp statics are not deletable. Was that decided because users wanted it to be non-deletable, or was it an accident? I want to make it clear that there is a way for JS to provide a host for JS. -SYG: In general, I am for this research. In general, I am wary of restricting the host that we don’t know…. What changes are you asking for? +SYG: In general, I am for this research. In general, I am wary of restricting the host that we don’t know…. What changes are you asking for? -MM: That's what I'm going to do the research on. I will try to determine which invariants I'm asking for. Before, there was a lot of crazy stuff hosts did. There was a re-write of WebIDL so that it could only specify behaviors that obey the object invariants, and which can be faithfully emulated by proxies. With two exceptions, document.all and the browser WindowProxy (which, btw, is not a proxy). And WindowProxy is now very close. +MM: That's what I'm going to do the research on. I will try to determine which invariants I'm asking for. Before, there was a lot of crazy stuff hosts did. There was a re-write of WebIDL so that it could only specify behaviors that obey the object invariants, and which can be faithfully emulated by proxies. With two exceptions, document.all and the browser WindowProxy (which, btw, is not a proxy). And WindowProxy is now very close. MBS: We're at time for the day, so we'll continue tomorrow. - diff --git a/meetings/2020-02/february-5.md b/meetings/2020-02/february-5.md index 1800fb97..70dbdf74 100644 --- a/meetings/2020-02/february-5.md +++ b/meetings/2020-02/february-5.md @@ -1,11 +1,11 @@ # February 5, 2020 Meeting Notes + ----- **In-person attendees:** Aki Braun (AKI), Andrew Paprocki (API), Rob Palmer (RPR), Waldemar Horwat (WH), Chip Morningstar (CM), Shane F Carr (SFC), Shu-yu Guo (SYG), Jordan Harband (JHD), Michael Saboff (MLS), Keith Miller (KM), Michael Ficarra (MF), Jonathan Keslin (JKN), Kevin Gibbons (KG), Richard Gibson (RGN), Justin Ridgewell (JRL), Zibi Braniecki (ZB), Myles Borins (MBS), Bradford C. Smith (BCS) Rick Button (RBU), Mary Marchini (MAR), Guilherme Hermeto (GHO) **Remote attendees:** Dan Ehrenberg (DE), Brian Terlson (BT), David Rudin (DRN), Jason Nutter (JAN), Ron Buckton (RBN), Pieter Ouwerkerk (POK), István Sebestyén (IS), Min Qi Wu (WMQ), Leo Balter (LEO), Valerie Young (VYG), Jack Works (JWK), Mathieu Hofman (MAH), John Hax (JHX), Caridy Patiño (CP), Sergey Rubanov (SRV), Rajiv Batra (!!!), Yulia Startsev (YSV), Bradley Farias (BFS), Gus Caplan (GCL), Caio Lima (CLA) - ## Preserve Host Virtualizability (Continue from Day 1) Presenter: Mark Miller @@ -20,7 +20,7 @@ SYG: I’ve done a little more research. When I first put it yesterday, I was ex MM: There has been some change since I first heard that proposed. Could you define it? -SYG: Unforgeable is an extended attribute, non-static regular property that is non-writable/non-configurable. This is used in legacy stuff and in the trusted types proposal, making some things unforgeable. This is something I would like to retain. 
+SYG: Unforgeable is an extended attribute, non-static regular property that is non-writable/non-configurable. This is used in legacy stuff and in the trusted types proposal, making some things unforgeable. This is something I would like to retain. CP: Trusted types was actually the thing that triggered us to look into this problem. @@ -36,7 +36,7 @@ MM: No, I'm not. WH: So you want to support the cases of scripts hostile to virtualization? There is quite a bit of an industry of people who write scripts that don’t want to run virtualized, and I’m sure you can think of reasons why you might want to do that. So I’m curious how this proposal deals with that use case. -MM: I don't understand that use case, and I'm not familiar with it. I'll say that there are 2 aspects, that we are unlikely to get to perfect virtualizability in JS on the web platform in any case. We're holding pure virtualizability as a goal to approach as close as we can get. If a script can detect that it is not on the host we don't consider that a violation of the goals. The core goals are around controlling access to the outside world, and in general control of integrity. And furthermore, for scripts that are not actively trying to subvert virtualizability, there's a nice part about testing. Being able to create a test harness and run the tests on another host. So there are a variety of benefits of virtualizability that does not achieve perfection. +MM: I don't understand that use case, and I'm not familiar with it. I'll say that there are 2 aspects, that we are unlikely to get to perfect virtualizability in JS on the web platform in any case. We're holding pure virtualizability as a goal to approach as close as we can get. If a script can detect that it is not on the host we don't consider that a violation of the goals. The core goals are around controlling access to the outside world, and in general control of integrity. And furthermore, for scripts that are not actively trying to subvert virtualizability, there's a nice part about testing. Being able to create a test harness and run the tests on another host. So there are a variety of benefits of virtualizability that does not achieve perfection. WH: OK @@ -49,8 +49,8 @@ MM: Does anyone object to stage 1? Consensus reached for stage 1 ## Update on Realms -Presenter: Caridy Patiño (CP) +Presenter: Caridy Patiño (CP) - [Proposal](https://github.com/tc39/proposal-realms) - [Slides](https://github.com/tc39/proposal-realms/#presentations) @@ -69,7 +69,7 @@ CP: Yes SYG: Good job pairing down the parts of the proposal. I like the current proposal without the hooks and with the lightweight creation. And I would like to review it for stage 3. -GHO: I would like to see some examples about error handling. How would error handling happen across realms? +GHO: I would like to see some examples about error handling. How would error handling happen across realms? CP: there was an issue open a while ago about error handling. The error handling would happen at the agent level rather than the realm. There was another issue for completion record access. The compartment’s evaluator is really an API that allows you to control everything. We still don’t have that formalized. In terms of errors I don’t think the Realms will provide this. @@ -89,7 +89,6 @@ CP: I plan in the next couple of weeks to resolve the open issues and then go fo DE: Are you going to make a mechanism for CSP style environments before advancement? - CP: We are not changing the mechanism in the spec for CSP rules. 
In future we will have a hook to control that. DE: Let’s follow up offline. I think it’s important to have a method for environments that can’t evaluate strings. @@ -100,7 +99,7 @@ CP: yes, I was hoping to. SYG: Why is it a problem? -DE: It’s fine. And I can help a review. +DE: It’s fine. And I can help a review. SYG: Is it a precedent that editors cannot review? @@ -108,7 +107,7 @@ MF: When we go for stage 3, we like to have reviewers that are not editors as we MM: CP was vague on compartments, my presentation on SES will be on the compartment API. To answer DE’s question, part of the renaming of evaluator to compartment, it is designed to be used in situations where there are no runtime evaluators. -GCL: On the first slide you mentioned ES module support, and I couldn't find any info on that in the repo. Is that a follow-on proposal? +GCL: On the first slide you mentioned ES module support, and I couldn't find any info on that in the repo. Is that a follow-on proposal? CP: We do have some work that was done by Dave Herman around the APIs, maybe 3 years ago, but we don’t have a champion for that proposal. That is something you would be able to use via the compartment api, using the hooks via the API you can control the modules. @@ -119,6 +118,7 @@ GCL: Ok, thank you Not seeking stage advancement this meeting. Stage 3 reviewers: + - MF (@michaelficarra on GitHub, @smooshMap on Twitter) - SYG (@_shu on Twitter) - DE (littledan@igalia.com, @littledan on GitHub and Twitter) @@ -147,6 +147,7 @@ SFC: Stage 4? Approved for Stage 4 ## `Intl.Segmenter` Stage 2 update + (Richard Gibson) RGN - [Proposal](https://github.com/tc39/proposal-intl-segmenter) @@ -166,7 +167,7 @@ RGN: The concept of a sentence is as universal as any concept can be in language KG: Do you intend to resolve the open issues before stage 3? -RGN: I was prepared to ask for Stage 3 with these issues. Now that we have the extra time, I expect them to all be resolved. +RGN: I was prepared to ask for Stage 3 with these issues. Now that we have the extra time, I expect them to all be resolved. RPR: Queue is empty. @@ -175,6 +176,7 @@ RPR: Queue is empty. Update done, no stage advancement requested. Likely being requested at next meeting ## `Intl.Locale` for Stage 4 + (Zibi Braniecki) ZB - [Proposal](https://github.com/tc39/proposal-intl-locale/) @@ -191,6 +193,7 @@ ZB: I would like to request Stage 4. Consensus reached for Stage 4 ## Legacy reflection features for functions in JavaScript for Stage 1 + (Mark Miller) MM - [Proposal](https://github.com/claudepache/es-legacy-function-reflection) @@ -251,6 +254,7 @@ MM: Do we have any objections to Stage 1? Consensus reached for stage 1 ## Updates on Explicit Resource Management + (Ron Buckton) RBN - [Proposal](https://github.com/tc39/proposal-explicit-resource-management) @@ -302,9 +306,9 @@ RBN: I hadn’t considered that point. I do think it’s useful to have the decl MM: I like that answer. I think that the issue of burying that operation so it’s easy to miss is a good point. Thank you. -RBN: It can get even more confusing if you have await in the mix. Now you have an `await` in the mix that might look like I’m awaiting some value. So I'm a little bit concerned about the expression form. +RBN: It can get even more confusing if you have await in the mix. Now you have an `await` in the mix that might look like I’m awaiting some value. So I'm a little bit concerned about the expression form. -MM: Yeah, that one kills it for me. 
Now that you pointed it out, it would exactly lead to that misreading. +MM: Yeah, that one kills it for me. Now that you pointed it out, it would exactly lead to that misreading. JHD: Could it be `using async value`? @@ -314,7 +318,7 @@ RBN: async today is a marker keyword. If you look like something like `for await MM: I agree. Right now we’ve succeeded in saying that all interleaving points, except some esoteric cases, are marked with `await` or `yield`. -WH: To MM's suggestion, `using value expr`, I can think of a lot of havoc this could cause. Did you mean this to dispose locally within the subexpression … +WH: To MM's suggestion, `using value expr`, I can think of a lot of havoc this could cause. Did you mean this to dispose locally within the subexpression … MM: That is not what I would expect it to mean. @@ -322,7 +326,7 @@ WH: … or the end of the block containing the expression? MM: Yes. -WH: Unless you mean to scope it specifically to the sub-expression, this causes havoc. You have to answer things like, what if I am using this in a function argument initializer? Or a class that has a `using value expr` subexpression in its `extends` clause? +WH: Unless you mean to scope it specifically to the sub-expression, this causes havoc. You have to answer things like, what if I am using this in a function argument initializer? Or a class that has a `using value expr` subexpression in its `extends` clause? RBN: There is a lot of complexity there that I would like to avoid. @@ -330,11 +334,11 @@ MM: I think my suggestion has been killed, I would like to retract. WH: How do these relate to the concept of things in the tail position? -RBN: Are you specifically talking about tail call optimizations? I'd have to think about that more. The idea with TCO is that you can re-use the stack. The values can’t be replaced on the stack, they must still exist when the function exits. It's as if there are more statements that happen that have to return. I'd have to look at how TCO handles `try..finally`. I think these are calls that can't be tail-optimized. +RBN: Are you specifically talking about tail call optimizations? I'd have to think about that more. The idea with TCO is that you can re-use the stack. The values can’t be replaced on the stack, they must still exist when the function exits. It's as if there are more statements that happen that have to return. I'd have to look at how TCO handles `try..finally`. I think these are calls that can't be tail-optimized. WH: That’s the answer I would expect. Syntactically, `try..finally` wraps around whatever it is. Meanwhile you can stick `using value` in the middle of some block somewhere, it doesn’t wrap around the sequel. It is a TCO defeating mechanism that’s not syntactically apparent. -RBN: One of the reasons I became more interested in pursuing the possibility of the `using` declaration form was Dean Tribble (DT) who originally pioneered this work at Microsoft. He said that there were concerns from some developers about the using block form that there were certain cases that weren't meeting the needs of developers. It introduced block scope in areas where they didn't want to introduce block scope. And the using block form was a long request from the C# language. The motivation for `using value` was that, if we could abandon the `try using` block form in favor of this, it would introduce benefits such as `try using`, it would allow you to have a catch or finally block when those are released. 
By making these block scoped, it is very explicit when they are released. +RBN: One of the reasons I became more interested in pursuing the possibility of the `using` declaration form was Dean Tribble (DT) who originally pioneered this work at Microsoft. He said that there were concerns from some developers about the using block form that there were certain cases that weren't meeting the needs of developers. It introduced block scope in areas where they didn't want to introduce block scope. And the using block form was a long request from the C# language. The motivation for `using value` was that, if we could abandon the `try using` block form in favor of this, it would introduce benefits such as `try using`, it would allow you to have a catch or finally block when those are released. By making these block scoped, it is very explicit when they are released. WH: I agree with the ergonomics of `using value` and friends. C++ does this everywhere with the RAII idiom. Ergonomically, it does work quite nicely, though there are some corner cases. @@ -350,17 +354,17 @@ RBN: If the dispose throws an error? This is defined in the proposal spec text t JRL: IF you are using async expose, does it create a rejected AggregateError? -RBN: If you are in an async function and use are using `try using await`, or `using const` declaration form, `AggregateError` would be thrown after all promises are evaluated. There are cases where if you have a Promise that never completes, there are concerns of what that means. Does it suspend the execution of other disposes that are pending? But that's no different than if you tried to do this manually with try..finally. +RBN: If you are in an async function and use are using `try using await`, or `using const` declaration form, `AggregateError` would be thrown after all promises are evaluated. There are cases where if you have a Promise that never completes, there are concerns of what that means. Does it suspend the execution of other disposes that are pending? But that's no different than if you tried to do this manually with try..finally. JRL: Ok RBN: I’m not looking for stage advancement. Just an update. Looking to get feedback on the using value syntax in general, and whether we should consider moving specifically to this form and abandoning the try using form. -MM: I like the using forms, I’m fine with using value. The destructuring issue is best dealt with by prohibiting it. With those things in, the try syntax should be out. We should have only the using syntax. +MM: I like the using forms, I’m fine with using value. The destructuring issue is best dealt with by prohibiting it. With those things in, the try syntax should be out. We should have only the using syntax. RBN: Are there any other questions? -GHO: I agree with MM. The `using value` syntax is preferred for us as well. +GHO: I agree with MM. The `using value` syntax is preferred for us as well. RBN: I don’t have anything else for this proposal @@ -391,9 +395,9 @@ MM: What happens if the object is modified during iteration? JKN: It will match other collection types. For example, in an array [modifying the collection during iteration] will mess with things. For example, deleting an array item will cause it to get skipped. -MM: The spec text you were showing used `getEnumerableOwnProperties` or something. Those spec operations take snapshots. I would prefer those semantics to having iteration be disrupted by mutation. 
Were we to have it be sensitive to object mutation, it would have to be precisely and deterministically specified, such that we don't re-create the `for..in` nightmare. If it were a snapshot, there would be no memory benefit at all. I’m in the same camp as KG, the only case where you get a memory benefit is the more problematic semantics. +MM: The spec text you were showing used `getEnumerableOwnProperties` or something. Those spec operations take snapshots. I would prefer those semantics to having iteration be disrupted by mutation. Were we to have it be sensitive to object mutation, it would have to be precisely and deterministically specified, such that we don't re-create the `for..in` nightmare. If it were a snapshot, there would be no memory benefit at all. I’m in the same camp as KG, the only case where you get a memory benefit is the more problematic semantics. -JKN: You have a good point that these two spec versions are different: one takes a snapshot at the beginning, and the other takes a new snapshot at every iteration. I prefer the second. It is a fraught concern, but it is consistent. +JKN: You have a good point that these two spec versions are different: one takes a snapshot at the beginning, and the other takes a new snapshot at every iteration. I prefer the second. It is a fraught concern, but it is consistent. MM: It means that something that currently refactoring keys to `iterateKeys` introduces a semantic change that might surprise people. @@ -411,9 +415,9 @@ BFS: That seems fine to me. JKN: The proposal makes it very clear for developer intent to work on the live object and not a copy. The more developer intent is clear, the easier it is for an engine to optimize it. -WH: I'm confused by what you just said. Since this thing does a snapshot of the object at the beginning, how does this express intent to work on the live object? +WH: I'm confused by what you just said. Since this thing does a snapshot of the object at the beginning, how does this express intent to work on the live object? -JKN: The first alternative in the proposal snapshots at the beginning, but the second alternative in the proposal expresses intent at the spec level because it snapshots at every iteration. It's almost pulled from what we currently do with `Map` iterator. +JKN: The first alternative in the proposal snapshots at the beginning, but the second alternative in the proposal expresses intent at the spec level because it snapshots at every iteration. It's almost pulled from what we currently do with `Map` iterator. WH: So if you delete a property you haven't seen it, then you won't see it? @@ -435,11 +439,11 @@ JKN: I’m referring to arrays. JHD: Sure, but iterating over objects does not. -JKN: This differs from what you can currently do when iterating over objects today. Objects are treated as a collection type very frequently. +JKN: This differs from what you can currently do when iterating over objects today. Objects are treated as a collection type very frequently. YSV: We did a review of both spec texts. The second one is one we wouldn’t advise, the first makes more sense. It simply creates an array iterator. We don’t think this will achieve the goal you have. It is unclear how much benefit there will be for developers. There is not enough information for us to justify this going to stage 2. -JKN: I hear the concerns over the first set of spec text. I spent time chatting with BFS about this. Our thought was that since the array was not observable, then it didn't really need to be an array. 
Obviously deeper spec text concerns like what KG mentioned is a Stage 3 concern. When we're looking at Stage 2 advancement, we're looking at whether this is a valuable thing. I believe it is helpful to make objects more consistent [with other collections]. Seems like the next step is to work with implementers to find where the value lies. +JKN: I hear the concerns over the first set of spec text. I spent time chatting with BFS about this. Our thought was that since the array was not observable, then it didn't really need to be an array. Obviously deeper spec text concerns like what KG mentioned is a Stage 3 concern. When we're looking at Stage 2 advancement, we're looking at whether this is a valuable thing. I believe it is helpful to make objects more consistent [with other collections]. Seems like the next step is to work with implementers to find where the value lies. YSV: I wonder if perhaps this would encourage anti-patterns, where if another data structure would be better than an object. @@ -451,7 +455,7 @@ ZB: Stage 2 requires precisely described semantics, which we do not have. Kumavis?: Yulia stated what I wanted to say. Adding these utilities encourages using objects as collections. Should we do that? -JKN: Whether it's appropriate or not, it's done very commonly today. If you're going to represent key-value data in JSON, it's an object. +JKN: Whether it's appropriate or not, it's done very commonly today. If you're going to represent key-value data in JSON, it's an object. Kumavis?: Do we want to make it easier to use collections? @@ -473,7 +477,6 @@ Not requested for stage advancement. Presenter: Justin Ridgewell (JRL) - - [proposal](https://github.com/tc39/proposal-logical-assignment) - [slides](https://docs.google.com/presentation/d/1XbYMm7IkHef6hpvwQlLSxb_b5gSkf1g6iuNL9WM0DQ8/edit?usp=sharing) @@ -493,11 +496,11 @@ SYG: I think the value of short-circuiting, given the additional use cases that YSV: So, we support this overall. But we also feel that the nullish coalescing operator is more useful than the other two. -JRL: That was brought up in March 2018. The additional complexity is that if we only have some operators, but not others, that's confusing to users. The nullish version (`??=`) is most useful, but I have also used the `&&=` and `||=` forms with booleans in for loops. +JRL: That was brought up in March 2018. The additional complexity is that if we only have some operators, but not others, that's confusing to users. The nullish version (`??=`) is most useful, but I have also used the `&&=` and `||=` forms with booleans in for loops. DRR: I think there were two things brought up here and on GitHub. The first, people have a mental model of the compound assignment operators having consistent semantics. I was pointed out that it was a lie because you never re-eval the LHS. It’s a “yes sure, technically” detail, but it doesn’t correspond to how I as a user think about things. Either way, I don’t think that is a sufficient reason to weigh us one way or the other. I also wanted to mention that some of the use cases seem like they're optimizations but they also potentially hide bugs at a later stage when you do perform the assignment. E.g. if you were conditionally triggering a set, and that can do something destructive like changing focus, that might be undesirable behavior that you would want to know about ahead of time. So personally I would think you would always want the effect to happen, to avoid these sort optimizations that might lead to unpredictable code. 
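As a rough illustration of the difference being debated (assuming an engine that implements the proposed operators; the object below is contrived):

```js
const el = {
  _html: "",
  renders: 0,
  get innerHTML() { return this._html; },
  set innerHTML(v) { this._html = v; this.renders++; },
};

el.innerHTML = "<p>content</p>";      // renders === 1

// Short-circuiting semantics, roughly
// `el.innerHTML || (el.innerHTML = "<p>fallback</p>")`:
// the setter is skipped when the current value is already truthy.
el.innerHTML ||= "<p>fallback</p>";   // renders is still 1

// Simple-assignment reading of a compound operator: the setter always
// runs, which is the hidden re-render cost discussed here.
el.innerHTML = el.innerHTML || "<p>fallback</p>";  // renders === 2
```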
-JRL: I think that these stem from the fact that whether or not we should do short circuiting. The destructive nature of setters is important. I think people would be surprised if `innerHTML ||= ` is a huge performance penalty just because they didn't understand they would be getting a re-render. +JRL: I think that these stem from the fact that whether or not we should do short circuiting. The destructive nature of setters is important. I think people would be surprised if `innerHTML ||=` is a huge performance penalty just because they didn't understand they would be getting a re-render. I think that the large majority of code that I have ever written, I don’t use setters. The fact that this internally has an optimization, it won’t affect 99% of the code I write. Whether it uses a simple or short-circuit set, the users won’t care one way or another. But in cases where it does matter I think people would want short-circuiting. @@ -537,7 +540,7 @@ KG: I agree with the idea of trying to avoid surprises in edge cases, but you ha DRR: I think we need to use some of the use cases. Without a concrete example, it is hard to argue one way or the other. -SFC: How does this proposal relate to optional chaining? It seems reasonable that you would want to be able to do `a?.b ??= c`. +SFC: How does this proposal relate to optional chaining? It seems reasonable that you would want to be able to do `a?.b ??= c`. JRL: Optional chaining is not a valid LHS. It is left for a future proposal. @@ -636,13 +639,11 @@ RBG: If we do extend serialization, it would be a requirement that that is maint MM: What is the thing you want to emit that you can't right now? -[Misc]: BigInt - MM: I would be in favor of just changing BigInt. -YSV: I think there's a lot of good ideas. Revivers need some help to be more useful for JavaScript developers. But because the spec text was not available on time, I can't support stage advancement right now. +YSV: I think there's a lot of good ideas. Revivers need some help to be more useful for JavaScript developers. But because the spec text was not available on time, I can't support stage advancement right now. -MF: I don’t think that we should have the parse index included, because we should not treat the thing we're parsing differently in different positions: e.g. that could allow a deserializer to change behaviour based on whitespace. I also don't think serialization should be a part of this proposal. I'm also generally against supporting serialization at all. So if you would raise that proposal, I have reasons I wouldn't want to do that. We don't need to be able to generate all valid JSON, just parse all valid JSON. We only need to be able to generate things we want to accept as input to JSON.stringify. That may or may not include BigInts. +MF: I don’t think that we should have the parse index included, because we should not treat the thing we're parsing differently in different positions: e.g. that could allow a deserializer to change behaviour based on whitespace. I also don't think serialization should be a part of this proposal. I'm also generally against supporting serialization at all. So if you would raise that proposal, I have reasons I wouldn't want to do that. We don't need to be able to generate all valid JSON, just parse all valid JSON. We only need to be able to generate things we want to accept as input to JSON.stringify. That may or may not include BigInts. 
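To make the BigInt use case concrete, here is a hypothetical sketch of the kind of reviver extension being discussed; the extra `context.source` argument is illustrative only, not the proposal's settled API.

```js
// Hypothetical: a reviver that can see the raw source text of each
// primitive value could revive large integers losslessly instead of
// going through a lossy Number.
const text = '{"id": 12345678901234567890}';

const obj = JSON.parse(text, (key, value, context) => {
  if (typeof value === "number" && context && /^\d+$/.test(context.source)) {
    return BigInt(context.source); // exact digits, no precision loss
  }
  return value;
});

// obj.id === 12345678901234567890n under these assumed semantics
```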
RGN: The noteworthy aspect of BigInt is that there is no language ability to emit lossless representations of BigInts. You cannot emit a sequence of digits that is a BigInt. @@ -652,7 +653,7 @@ RBN: About parse indices: I think it's less useful to look at the index as how t MF: I think if that's the case we're considering we need the information of where you're already parsing it. I think there's more discussion to be had. -API: Because BigInt's toJson throw by default, keep in mind if one piece of code attached a function to toJSON, and then fromJSON has a different function written by a different author, what happens? That could be a problem. +API: Because BigInt's toJson throw by default, keep in mind if one piece of code attached a function to toJSON, and then fromJSON has a different function written by a different author, what happens? That could be a problem. RGN: THat's one of the reasons serialization doesn't currently appear. I personally think it's worth pursuing but not worth pursuing together. It sounds like there's loose agreement to that. @@ -668,7 +669,7 @@ RGN: It's parsing subject to ECMAScript minus Annex B. WH: In the spec you are saying you are parsing it with respect to ECMA-404. -CM: Putting on my hat as the guardian of the perpetual immutability of JSON, one of the frustrations with BigInt is why can't you just serialize a BigInt as a BigInt, and you're starting to address that on the read side. … I think what you're doing is the right flavor of approach to dealing with the issue of the immutability of JSON. I'm not sure I buy your particular mechanism but I'm not sure I have a better one. I've always found the reviver/replacer mechanism awkward. I don't know if you want to sign up for more than you've signed up for but I think it would be worth investigating, could this entire parsing pathway be made better with the right dimensions of control and flexibility with some of the context sensitive things you’ve flagged as use cases. But I'm also a little concerned that we risk ending up with something clunky and complicated in its own way that is a nightmare. So it really calls for a tour-de-force of integrated thinking. In summary, I’m not sure I buy this particular proposal, but I buy the direction of this proposal. I don’t think I’m prepared to support stage 2, I’m not sure if I like the specific mechanism but totally buy the goal. +CM: Putting on my hat as the guardian of the perpetual immutability of JSON, one of the frustrations with BigInt is why can't you just serialize a BigInt as a BigInt, and you're starting to address that on the read side. … I think what you're doing is the right flavor of approach to dealing with the issue of the immutability of JSON. I'm not sure I buy your particular mechanism but I'm not sure I have a better one. I've always found the reviver/replacer mechanism awkward. I don't know if you want to sign up for more than you've signed up for but I think it would be worth investigating, could this entire parsing pathway be made better with the right dimensions of control and flexibility with some of the context sensitive things you’ve flagged as use cases. But I'm also a little concerned that we risk ending up with something clunky and complicated in its own way that is a nightmare. So it really calls for a tour-de-force of integrated thinking. In summary, I’m not sure I buy this particular proposal, but I buy the direction of this proposal. 
I don’t think I’m prepared to support stage 2, I’m not sure if I like the specific mechanism but totally buy the goal. MLS: I don't want to take serialization off the table. I don't like that we can't round-trip the JSON that we can parse. Since we can’t serialize it, how do we expect to generate JSON that we can parse with these constructs? My concern is that we're talking about half the problem, not the whole problem. @@ -676,7 +677,7 @@ RGN: On where does it come from, the answer is elsewhere. Not every language has API: We have a cross language/cross env situation, with a lowest common denominator. We have to be vigilant that we don’t … It’s a real problem. There is no kind of versioning or capability flags. We’ve run into this with C++ code trying to emit Decimals where there was nothing you could do in other environments. -KG: To reply to MLS: There are lots of other ways we can parse JSON that can't be produced: weird escaping, whitespace, etc. The parser is already more liberal than the serializer. +KG: To reply to MLS: There are lots of other ways we can parse JSON that can't be produced: weird escaping, whitespace, etc. The parser is already more liberal than the serializer. RGN: I’m hearing objections. I will not push for advancement. There is enough positive reception that I will continue pursuing this. @@ -684,12 +685,10 @@ RGN: I’m hearing objections. I will not push for advancement. There is enough Given the objections, not requesting stage advancement. But will pursue further - ## `ArrayBuffer.fillRandom` for Stage 1 Presenter: Ron Buckton (RBN) - - [proposal](https://github.com/rbuckton/proposal-arraybuffer-fillrandom) - [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkeB85XK3cCAu49gunA?e=Ksabft) @@ -699,7 +698,7 @@ MM: I like a lot of things about this and am in favor of Stage 1. I especially l RBN: I did investigate the possibility of creating a global for this. The most obvious namespace is `Crypto`, I had a concern that this would conflict with the WebCrypto API, but could have conflicts with the Node REPL. I was concerned about introducing a new global that would conflict with or compete with the WebCrypto API existing definitions. I felt `ArrayBuffer` was a satisfactory place for it because it deals with mutating `ArrayBuffers`. Please open an issue on the tracker. I wouldn't want to tie this to something like builtin modules - I am concerned with having to many things tied together. -MM: Ok. I agree that this shouldn't go on Crypto. I detest adding new global variables. I’m very concerned that we keep polluting the global namespace because we haven’t advanced builtin modules. +MM: Ok. I agree that this shouldn't go on Crypto. I detest adding new global variables. I’m very concerned that we keep polluting the global namespace because we haven’t advanced builtin modules. KG: You may not have noticed that this is a CSPRNG, which is a lot like not having state. @@ -741,7 +740,7 @@ DE: WebCrypto is not under active development. We can contact people who work on DE: If we are talking about moving WebCrypto into 262, I suggested that we use an IDL. For the JS stdlib I was suggesting that we use an IDL, possibly using WebIDL. It differs a bit from TC39 conventions, and would take a lot of work to get something that we would agree on. So these are things to consider. -MBS: There is active work in Node.js core to add support for Web Crypto to Node, that's hooking into the platform tests. If there are questions about what we're doing, we can reach out. 
With regard to duplicating APIs, with Node adopting these APIs, they are becoming more reliable cross-platform. So that probably plays to SYG's point about not introducing more redundancy. One thing we've done when making things that might be redundant is trying to make an API which is slightly lower level, though I don't know if that works for crypto in particular. I wonder if there is a space that we could explore where we bring something into our spec that serves as building blocks. But I do know that we run a risk of "standards poaching", where if something is happening that we like, then we pull it in, so I don't know if we can reference some other spec. +MBS: There is active work in Node.js core to add support for Web Crypto to Node, that's hooking into the platform tests. If there are questions about what we're doing, we can reach out. With regard to duplicating APIs, with Node adopting these APIs, they are becoming more reliable cross-platform. So that probably plays to SYG's point about not introducing more redundancy. One thing we've done when making things that might be redundant is trying to make an API which is slightly lower level, though I don't know if that works for crypto in particular. I wonder if there is a space that we could explore where we bring something into our spec that serves as building blocks. But I do know that we run a risk of "standards poaching", where if something is happening that we like, then we pull it in, so I don't know if we can reference some other spec. RBN: There are different ways you would reach the API, in WebCrypto it is a global. In node it would be a module. We would need to make sure we have a consistent mechanism to address that. @@ -767,7 +766,7 @@ WH: I get 404s on those: https://rbuckton.github.io/proposal-arraybuffer-fillran RBN: On the proposal repo, the links are correct. I just clicked on them all, they worked, it might be a networking issue. -WH: I’m going to strongly disagree with SYG here. A cryptographically secure random number generator is basic functionality, this should be accessible within the core ES language. Referring people to other specs to implement this is a little like not having subtraction because some other web spec defines it. We don’t need to coordinate with Node or whatever to define subtraction in the language. The definition of a secure pseudorandom number is something we need in the language. It needs to go in before UUID to keep folks from parsing UUIDs to get random numbers. In addition, relative to some of the other options for where to put the random number generating functionality, I think ArrayBuffer is a fine fit; I can’t think o a better one at the moment. I really like this proposal as-is. +WH: I’m going to strongly disagree with SYG here. A cryptographically secure random number generator is basic functionality, this should be accessible within the core ES language. Referring people to other specs to implement this is a little like not having subtraction because some other web spec defines it. We don’t need to coordinate with Node or whatever to define subtraction in the language. The definition of a secure pseudorandom number is something we need in the language. It needs to go in before UUID to keep folks from parsing UUIDs to get random numbers. In addition, relative to some of the other options for where to put the random number generating functionality, I think ArrayBuffer is a fine fit; I can’t think o a better one at the moment. I really like this proposal as-is. 
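For reference, the capability under discussion is available today through the host's Web Crypto API; the `ArrayBuffer.fillRandom` call shown commented out is only the proposed shape, and its exact signature is still open.

```js
// Today, via the host (Web Crypto, also exposed by Node's webcrypto):
// fill a typed array view with cryptographically secure random bytes.
const bytes = new Uint8Array(16);
crypto.getRandomValues(bytes);

// Proposed (shape illustrative): the same capability in the language
// proper, operating on an ArrayBuffer or a view over one.
// ArrayBuffer.fillRandom(bytes.buffer);
```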
KG: It sounded like SYG would be ok with the outcome that we define it in this spec, so that they reference us. @@ -795,7 +794,7 @@ WH: What is bad about that? SYG: In isolation, you can argue that any thing we duplicate is ok, but it does not set a good precedent for the platform. Is your argument specifically for CSPRNGs. -WH: Yes, this is specifically for CSPRNGs. This would be very different if it had a larger API surface like crypto or Intl or something. +WH: Yes, this is specifically for CSPRNGs. This would be very different if it had a larger API surface like crypto or Intl or something. SYG: what is the harm in then a possible path of getting WebCrypto layered with ours? @@ -825,7 +824,7 @@ WH: I’m not sure what you mean. RBN: The WebCrypto algorithm is intentionally vague. It is mostly host defined. There is a certain amount of entropy, but it doesn’t specify where the entropy comes from. -So it's not even well-defined in the web platform. It's more of a host implementation, where when you call this function, you expect the results to have a certain amount of entropy. So the reason this is proposed as a function on this object is ???. +So it's not even well-defined in the web platform. It's more of a host implementation, where when you call this function, you expect the results to have a certain amount of entropy. So the reason this is proposed as a function on this object is ???. KG: I have opened WebCrypto repo, there is an issue opened by the current maintainer saying, “If getRandomValues is upstreamed in ECMA-262, we should update the Web Crypto API to reference their definition (assuming it's compatible).”, so we should upstream it. @@ -856,7 +855,6 @@ RPR: Is there anyone opposed to stage 1? Advancement to stage 1 assuming changes: proposal is renamed to clarify that it is just exploring the space of making crypto random numbers available to users - ## ArrayBuffer view stride argument for Stage 1 Presenter: Shu-yu Guo (SYG) @@ -912,7 +910,6 @@ KM: Maybe this isn’t worth fighting now. MM: Proxies can be made a lot faster. Despite that, I agree with SYG. Once you have a view with an offset, you’ve bought into views. For array data, strides are natural. The issue is not how common it is. The issue is, when you need it, how important is it for it to be fast? It would never be fast enough to use in an inner loop. - MM: One of the things that is nice about the shimability of JS is that shimmable proposals give us the ability to get a feeling for the API. Even though this should not be implemented for real with a proxy, it should be shimmed with a proxy so that we can start using it experimentally. JRL: I don’t think proxies can ever be fast enough to justify not having a stride parameter. Can you explain the proposed API slide? I'm confused by bytes. @@ -965,7 +962,6 @@ Consensus reached for stage 1 Presenter: Daniel Ehrenberg (DE) - - [proposal](https://github.com/tc39/proposal-weakrefs) - [slides](https://docs.google.com/presentation/d/1a4hrdlEcpyKmBj6VtAVYDkokeW_HLFXcg11xIxhwlWM/edit#slide=id.p) @@ -1031,9 +1027,7 @@ MM: The line of reasoning KM used is wrong, and is an example of what is so tric KG: I think MM’s point is that you are correct in that you can do this, but not the way you described. 
-### Revisited as the first agenda item TOMORROW: -1 New Topic: Think about what references your finalization group -Daniel Ehrenberg (Igalia) -2 New Topic: Weakref-oblivious execution -Waldemar Horwat +### Revisited as the first agenda item TOMORROW +1 New Topic: Think about what references your finalization group Daniel Ehrenberg (Igalia) +2 New Topic: Weakref-oblivious execution Waldemar Horwat diff --git a/meetings/2020-02/february-6.md b/meetings/2020-02/february-6.md index ed0ecce5..5a9f18f1 100644 --- a/meetings/2020-02/february-6.md +++ b/meetings/2020-02/february-6.md @@ -1,4 +1,5 @@ # February 6, 2020 Meeting Notes + ----- **In-person attendees:** Aki Braun (AKI), Andrew Paprocki (API), Rob Palmer (RPR), Waldemar Horwat (WH), Chip Morningstar (CM), Shane F Carr (SFC), Shu-yu Guo (SYG), Jordan Harband (JHD), Michael Saboff (MLS), Keith Miller (KM), Michael Ficarra (MF), Jonathan Keslin (JKN), Kevin Gibbons (KG), Richard Gibson (RGN), Justin Ridgewell (JRL), Zibi Braniecki (ZB), Myles Borins (MBS), Bradford C. Smith (BCS) Rick Button (RBU), Mary Marchini (MAR), Guilherme Hermeto (GHO) @@ -24,39 +25,37 @@ MM: Now I will take questions. DE: You talked about host hooks. Where can I learn more about host hooks? -MM: There are 2 incomplete lists of host hooks. Host hooks are, at this point in time, a neglected part of this effort. We need to get back to it. There was a much earlier version in which we had an API specified for a small number of host hooks. Since then, there was a gathering of things that look like host hooks. There was an issue posted on the Realms proposal. I did a separate gathering on one of the pages in the SES proposal repo, not in the READMEs, summarizes my attempt to gather host hooks. I only became aware of the other list on the repo this morning, but I think they are in agreement. Most host hooks will transform to APIs relatively easily, but we need your help. +MM: There are 2 incomplete lists of host hooks. Host hooks are, at this point in time, a neglected part of this effort. We need to get back to it. There was a much earlier version in which we had an API specified for a small number of host hooks. Since then, there was a gathering of things that look like host hooks. There was an issue posted on the Realms proposal. I did a separate gathering on one of the pages in the SES proposal repo, not in the READMEs, summarizes my attempt to gather host hooks. I only became aware of the other list on the repo this morning, but I think they are in agreement. Most host hooks will transform to APIs relatively easily, but we need your help. DE: I’m curious about modules. The module host hooks are not laid in a way that corresponds to a way you might implement it. We might want to refactor. -MM: Suggestions certainly welcome. Just need to keep in mind all the simultaneous needs we need to satisfy. The important thing, which may not be obvious, is the staging separation, which is what I tried to emphasize in explaining the MMU. Things that are already loaded, you are just doing instantiation and linking. So in terms of a full module loading API, there are a lot of things that are not relevant. It was interesting to untangle that from what you want to do during runtime. +MM: Suggestions certainly welcome. Just need to keep in mind all the simultaneous needs we need to satisfy. The important thing, which may not be obvious, is the staging separation, which is what I tried to emphasize in explaining the MMU. 
Things that are already loaded, you are just doing instantiation and linking. So in terms of a full module loading API, there are a lot of things that are not relevant. It was interesting to untangle that from what you want to do during runtime.

DE: Are you saying that you don't want these virtualized environments to have the same capabilities as the more general module graph case?

MM: I’m saying that I want them separated, so that in a TC53 configuration, the loading abilities might be absent, whereas the instantiation and linking abilities might be present. In a full web context, the loading will still be there. Loading takes source files in directories, loaded with IO, to static module records. I want the Compartment API to go from that phase (static module records) to linked module instances. Those two things are really conceptually different. TC53 brought home the huge payoff of separating them in the API. So what I'm presenting here with the compartment API is only the second phase of that. I'm expecting a separate loading API that can provide arguments to the compartment API.

DE: Thanks for explaining.

JRL: Can you go back to the endowments slide? So you were saying that the endowments object contains things that are copied to the new global object. Wouldn't that leak the outside state? Doesn't that give the compartment access to the parent?

MM: Yes. That is the intention. One of the reasons we separated Realms and Compartments is that we found it much harder to prevent leakage over a realm boundary than we expected. It is hard to do controlled sharing of what you intend across a realm boundary without leaking other objects from the realm that you did not intend. When you are within one set of shared frozen primordials, we found that not to be a danger. The things you share via endowments are the things you are sharing on purpose.

GHO: Thanks Mark. When you have code being evaluated in a compartment, and the code has an unhandled rejection, what happens?

MM: So you and I talked last night. For the rejections that are about turn boundaries, about what happens at the top of a turn --- the motivating case is the Node unhandled rejection, where you have a Promise that entered a rejected state and no one was listening for it --- since it's about the turn, it is on most grounds not the right ergonomics or division of concerns to put a hook on that, neither at the realm nor the compartment level. What the turn is really about is the agent.
And just like we're taking the spec concept of a realm record and turning it into a reified Compartment, and taking the spec concept of the intrinsics record and reifying it into a Realm, what we decided last night was to create a proposal repo about the agent API where we can put host hooks that are per-agent. That is about installing the error diagnostics that are about turn boundaries as well as host hooks for defining scheduling policy. +MM: So you and I talked last night. For the rejections that are about turn boundaries, about what happens at top of turn --- such the motivating case is the Node unhandled rejection, where you have a Promise that entered a rejected state and no one was listening for it --- since it's about the turn, is on most grounds not the right ergonomics, division of concerns, to put a hook on that, neither at the realm nor compartment level. What the turn is really about is the agent. And just like we're taking the spec concept of a realm record and turning it into a reified Compartment, and taking the spec concept of the intrinsics record and reifying it into a Realm, what we decided last night was to create a proposal repo about the agent API where we can put host hooks that are per-agent. That is about installing the error diagnostics that are about turn boundaries as well as host hooks for defining scheduling policy. BFS: I just wanted to say that at least with unhandled rejections, it would be pertinent to move this to the older proposal called Zones. -MM: There is one option for how you could put it into this Compartment proposal. I don't remember who suggested it. You could associate with the realm that created the rejected promise. You still couldn’t invoke this hook until the turn boundary. But on the turn boundary, the invoked hook could be the one associated with the promise’s realm of origin. +MM: There is one option for how you could put it into this Compartment proposal. I don't remember who suggested it. You could associate with the realm that created the rejected promise. You still couldn’t invoke this hook until the turn boundary. But on the turn boundary, the invoked hook could be the one associated with the promise’s realm of origin. DE: When we were talking about realms and interaction with modules. It sounds like a realm can create a compartment inside of it, with its own module map. Is there any way that realms can use modules without SES compartments? -MM: The realms proposal is now delegating many things to the compartments proposal. The original motivation for compartments proposal was for SES, and in SES is where you get a lot of nice properties for why it's organized this way. But the compartment API I presented could be made available as a standard API whether or not you are in SES. Although you don't have frozen primordials, it still lets you have separate global objects, host hooks, etc. So although it doesn't provide separation, it gives you the ability to customize an execution context. +MM: The realms proposal is now delegating many things to the compartments proposal. The original motivation for compartments proposal was for SES, and in SES is where you get a lot of nice properties for why it's organized this way. But the compartment API I presented could be made available as a standard API whether or not you are in SES. Although you don't have frozen primordials, it still lets you have separate global objects, host hooks, etc. So although it doesn't provide separation, it gives you the ability to customize an execution context. 
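
(Note: not said at the meeting. For readers, a rough sketch of the kind of Compartment usage MM describes above, based on the draft API shape in the SES/Compartments proposal at this time; the constructor arguments and method names were still under discussion, so treat them as illustrative rather than final. `configModuleRecord` is a hypothetical stand-in for an already-loaded static module record.)

```js
// Illustrative only: draft Compartment API as discussed here, not a shipped feature.
const compartment = new Compartment(
  // Endowments: copied onto the compartment's fresh global object on purpose,
  // so any sharing with the outside is explicit rather than leaked.
  { print: console.log },
  // Module map: specifiers mapped to already-loaded static module records,
  // keeping loading separate from instantiation and linking.
  { './config.js': configModuleRecord }
);

// Code evaluated here sees the compartment's own global plus the endowments.
compartment.evaluate(`print("hello from inside the compartment")`);
```
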
DE: Are you set on realms not having access to the parent realm’s module graph? -MM: Quite the opposite. This was a late realization. (slide on frozen intrinsics). The start compartment has not lost it’s host objects, the powers associated with being in the initial evaluation. There is a separation between it and the intrinsics, the fact that the document is available here is interesting and surprising. The start compartment is in complete control on how it passes forward the powers it has. By the same token, the start compartment can have as its same namespace, in its loading behavior, provided by the host compartment. The import map provided by the browser is fixed before code starts running in that frame. So it's a fixed mapping provided essentially to that start compartment, and then the start compartment can create new compartments, either providing mappings to names it has, or giving a separate loading mechanism, e.g., loading things from a packed set of modules (you don't completely ignore the host-given modules, like built-in modules), and then you provide to the child compartment mappings only to those, insulating them from host-granted powers. So this is less than an abrupt transition from how JavaScript runs on hosts now than we were expecting. +MM: Quite the opposite. This was a late realization. (slide on frozen intrinsics). The start compartment has not lost it’s host objects, the powers associated with being in the initial evaluation. There is a separation between it and the intrinsics, the fact that the document is available here is interesting and surprising. The start compartment is in complete control on how it passes forward the powers it has. By the same token, the start compartment can have as its same namespace, in its loading behavior, provided by the host compartment. The import map provided by the browser is fixed before code starts running in that frame. So it's a fixed mapping provided essentially to that start compartment, and then the start compartment can create new compartments, either providing mappings to names it has, or giving a separate loading mechanism, e.g., loading things from a packed set of modules (you don't completely ignore the host-given modules, like built-in modules), and then you provide to the child compartment mappings only to those, insulating them from host-granted powers. So this is less than an abrupt transition from how JavaScript runs on hosts now than we were expecting. DE: It sounds like you are saying that with the combination of the compartments, host hooks, there should be flexibility among the uses of the API to grant host capabilities. In the near term there would be no way to grant host abilities? @@ -64,7 +63,6 @@ MM: I’m going to provisionally say that no, what is a start compartment immedi ### Conclusion - Update without stage advancement ## Time Duration Format Proposal for Stage 1 @@ -78,7 +76,7 @@ YMD: (presents slides) SFC: I just wanted to provide some context. We’ve been discussing this in TG2, ECMA402 that we have monthly. This is a longstanding feature request. The fact that Temporal.Duration is coming gives this some urgency, but this is a longstanding request. We’ve discussed some ins and outs of this API, this is only a stage 1 proposal, before stage 2 there are still some open questions on the repo. I encourage you to look there for additional feedback. -JRL: Throughout the presentation it said "1 hours". That's supposed to be "1 hour", right? +JRL: Throughout the presentation it said "1 hours". 
That's supposed to be "1 hour", right? YMD: Yeah, that's a typo. @@ -108,7 +106,6 @@ WH: But you just said there is a proposal for formatting feet/inches? SFC: An open issue we want to address in 2020, but not part of a currently active proposal. - YMD: Asking for Stage 1. RPR: Any objections? @@ -119,10 +116,8 @@ RPR: Any objections? Consensus reached for stage 1 - ## Request for reviewers for Logical Assignment - JRL: I need stage 3 reviewers for logical assignment. Kevin, Daniel. KG and DRR @@ -131,7 +126,6 @@ Kevin, Daniel. KG and DRR Presenter: Daniel Ehrenberg (DE) - - [proposal](https://github.com/tc39/proposal-weakrefs) - [slides](https://docs.google.com/presentation/d/1a4hrdlEcpyKmBj6VtAVYDkokeW_HLFXcg11xIxhwlWM/edit#slide=id.p) @@ -144,7 +138,7 @@ DE: I’m confused by the question. Either the object refers to the storage or t MAH: With a WeakRef, I suppose, And if it doesn’t that means the store isn’t collected anyway. -DE: I don't see why you would use a WeakRef there. Can you explain? +DE: I don't see why you would use a WeakRef there. Can you explain? MAH: If you don’t use a WeakRef, that means your object has a strong reference to a backend store. @@ -152,13 +146,13 @@ DE: That is what I think would make sense for the cases I can think of. If you h MAH: No, we can use that one. I just don’t understand the advantage of letting the group die if the backing store isn’t collected. -SYG: I think, MAH, you can imagine for the WASM case that the things you hand out are WeakRefs, and you manually dispose the memory somehow from the main linear memory. you would not need to run the finalizers anymore. +SYG: I think, MAH, you can imagine for the WASM case that the things you hand out are WeakRefs, and you manually dispose the memory somehow from the main linear memory. you would not need to run the finalizers anymore. MAH: Right, so that is what I am saying. The live object would have a weakref to the backing store. Which means when the object tries to access the backing store it would blow up. DE: This just seems independent--whether or not the instance holds the WASM memory, you’re already buying into this blowing up design decision—and opt-into doing the finalizer work. -SYG: Imagine your API was that you attach the finalizer per WeakRef. In that API, I think it is unsurprising that if the WeakRef died before the finalizer died, then the finalier would not die. But now with a FinalizationGroup as an aggregation, it seems like the same behaviour would happen. The idea with things like fetch() is that you fire away an operation that may be completed, and you just don't have that with GC. +SYG: Imagine your API was that you attach the finalizer per WeakRef. In that API, I think it is unsurprising that if the WeakRef died before the finalizer died, then the finalier would not die. But now with a FinalizationGroup as an aggregation, it seems like the same behaviour would happen. The idea with things like fetch() is that you fire away an operation that may be completed, and you just don't have that with GC. DE: I think the practical effect of not having independent lifetimes, would be that the object has a reference to the backing store. If not, you can just hold the reference yourself. @@ -168,11 +162,11 @@ DE: I agree we need to follow up with improvements to the documentation. KM: We should consider if there is a better way to name things, to make it clear that it is not going to behave the same way. -DE: I like that idea. Do you have ideas for a name? +DE: I like that idea. 
Do you have ideas for a name? KM: I don’t have any right now, but I’m happy to brainstorm. -WH: I'd like to re-raise the issue I raised yesterday. What is a WeakRef oblivious execution? I read the document and it says that all WeakRefs pointing to a specific object are null and whether the object is reachable. +WH: I'd like to re-raise the issue I raised yesterday. What is a WeakRef oblivious execution? I read the document and it says that all WeakRefs pointing to a specific object are null and whether the object is reachable. SYG: Not reachable, but if the identity is observed. @@ -182,9 +176,9 @@ SYG: That means that if you continue with the evaluation, with the extra rule th WH: What does “identity gets observed” mean? -MM: A `===` observes an identity. A map lookup on a key observes the identity. The idea is that two objects that are otherwise identical will act in otherwise the same ways; they can still differ in identity. For example, executing a function does not observe identity, because you can have two functions with the same execution with different identity. +MM: A `===` observes an identity. A map lookup on a key observes the identity. The idea is that two objects that are otherwise identical will act in otherwise the same ways; they can still differ in identity. For example, executing a function does not observe identity, because you can have two functions with the same execution with different identity. -WH: I have an empty object, and a WeakRef pointing to the empty object. What observes the identity of that object? +WH: I have an empty object, and a WeakRef pointing to the empty object. What observes the identity of that object? MM:Triple equals, using it as a key in a map, etc. @@ -194,7 +188,7 @@ MM: That’s correct. WH: In that case, the definition of liveness is completely broken. If you have a cycle of objects with weak references pointing to more than one member of that cycle, you can't collect that cycle given the definition you just stated. -SYG: People found this issue yesterday. Issue #179 On GitHub +SYG: People found this issue yesterday. Issue #179 On GitHub KG: I have a proposed fix on GitHub. @@ -202,7 +196,7 @@ WH: Ok. DE: Any thoughts on the name FinalizationHandler or FinalizationRegistry? -SYG: Do people feel that if the handler to handle something dies, the thing that it is supposed to handle doesn't happen? Does that make more sense than if a FinalizationGroup dies? +SYG: Do people feel that if the handler to handle something dies, the thing that it is supposed to handle doesn't happen? Does that make more sense than if a FinalizationGroup dies? JHD: It seems to me that it is a FinalizationGroup but it does not group finalizers. @@ -216,12 +210,11 @@ DE: Any concerns with moving forward with FinalizationRegistry? Not pushing for KM: It sounds better. -JHD: I'm not sure I like FinalizationRegistry. I'll post more on GitHub. +JHD: I'm not sure I like FinalizationRegistry. I'll post more on GitHub. ### Conclusions -Retained consensus for independent lifetimes -Contingent consensus on FinalizationRegistry, meaning barring feedback from folks on the exact name of FinalizationRegistry, implementations reserve the right to ship WeakRefs with the FinalizationRegistry name before the next meeting. Please give feedback on #180. 
+Retained consensus for independent lifetimes Contingent consensus on FinalizationRegistry, meaning barring feedback from folks on the exact name of FinalizationRegistry, implementations reserve the right to ship WeakRefs with the FinalizationRegistry name before the next meeting. Please give feedback on #180. ## Syntax for Explicitly this argument for Stage 1 @@ -248,7 +241,7 @@ BF: Even though we have .length, there are some bugs around default values in ce BF: I do like a lot of the motivations of this proposal, of trying to solve different kinds of cases. I don’t think that putting it on the syntax of this is the best way, but if all we are trying to do is solve for motivating use cases. Things like marking functions as methods rather than something constructable, and likewise things to be constructed and not called. Furthermore this proposal adds a lot of complexity to the syntax, and can be solved by adding a statement. -WH: You raised the issue of `Promise.reject` as a motivation for this feature. I agree `Promise.reject` is a nasty gotcha. However, this feature fails to do anything to solve that problem, so I don’t know why that is in the presentation. The problem that this proposal does solve is that there is only one way to refer to `this` in ECMAScript. Now we’d have two. And now we can rename `this` and change its value by assigning to it. I have yet to see a good motivation for doing this. +WH: You raised the issue of `Promise.reject` as a motivation for this feature. I agree `Promise.reject` is a nasty gotcha. However, this feature fails to do anything to solve that problem, so I don’t know why that is in the presentation. The problem that this proposal does solve is that there is only one way to refer to `this` in ECMAScript. Now we’d have two. And now we can rename `this` and change its value by assigning to it. I have yet to see a good motivation for doing this. JHX: The issue of Promise.reject is in the second part, the next proposal. @@ -258,7 +251,7 @@ JHX: Why don't I finish the second part and then we can discuss. JWK: Allowing destructuring on this binding might be new problems if a property is a method, you would lose the this binding. -JHX: It's possible, but you already can do it now. So I don't think it's adding any new requirements. +JHX: It's possible, but you already can do it now. So I don't think it's adding any new requirements. SYG: I agree with WH, it does not meet the bar for new syntax. @@ -272,9 +265,8 @@ JHX: Ok. RPR: Thank you. - - ## Remote plenaries and SLTG/incubator calls + Presenters: Shu-yu Guo (SYG) and Dan Ehrenberg (DE) - [proposal](https://github.com/tc39/Reflector/issues/264#issuecomment-577316380) @@ -284,30 +276,29 @@ SYG & DE: (presents slides) JHD: When I’ve been on calls where quorum is an issue, typically its X members on a team, and there are some required number of members on the team, we count them. In the past, whether delegates or members, I don’t remember counting for quorum. Have we ever done that in the past? How would it be structured? -DE: I'm not aware of that kind of counting. I suggest that the calls be treated just like the in-person meetings. +DE: I'm not aware of that kind of counting. I suggest that the calls be treated just like the in-person meetings. MBS: In theory, quorum here, could be the subset of people with interest in a topic. We can say, for this topic, who are the delegates who want to participate, that group can become the group quorum is based around? -DE: I disagree with that proposal. This is plenary. 
This is not a way to set up smaller groups. SYG's half of the presentation is about that. So everyone who is interested in the consensus process is expected to join these calls. +DE: I disagree with that proposal. This is plenary. This is not a way to set up smaller groups. SYG's half of the presentation is about that. So everyone who is interested in the consensus process is expected to join these calls. IS: There is a need for a quorum among Ordinary Members the GA (see Bylaws 8.5), but in a TC there is no quorum, independently from the number of participants, every decision has to be a simple majority. WH: You do have rules about notice for what decisions are up for a meeting. - DE: About ECMA having a 3 week deadline, TC39 uses a 10 day deadline, IS and I were talking about this. Given that we have been making decisions using a 10 day deadline, it makes sense to change the bylaws. I’m not proposing to weaken that. IS: On the paper it is 3 weeks, 10 days in practice for TC it is ok, in my opinion it is not even necessary to change the bylaws. If TC39 finds that 10 days is better, then do it… JHD: The reason I am asking is with 6 opportunities a year, there are 6 times I have to call in to impact a decision. -IS: Regarding the proposal for remote GAs my feeling is that we have to find out first what is practical and what not. First, we have to find out whether for 40-60 people, the conference call may work or not work. And how long it should be. The proposal had 2-3 hours in mind, which is tiring, but if we look at the timing of it, according to what other bodies do, that might be okay. But if we have participants from all the different major time zones (US East, US West, Europe, Asia Far East), someone is always suffering. So what I see in other organizations what they are doing is quite different. Then they are rotating from meeting to meeting. So these are the things, before making a final time zone decision, we have to hash out a little bit. +IS: Regarding the proposal for remote GAs my feeling is that we have to find out first what is practical and what not. First, we have to find out whether for 40-60 people, the conference call may work or not work. And how long it should be. The proposal had 2-3 hours in mind, which is tiring, but if we look at the timing of it, according to what other bodies do, that might be okay. But if we have participants from all the different major time zones (US East, US West, Europe, Asia Far East), someone is always suffering. So what I see in other organizations what they are doing is quite different. Then they are rotating from meeting to meeting. So these are the things, before making a final time zone decision, we have to hash out a little bit. JHD: With 6 opportunities a year, that is how many times I have to make myself available. If there are 14-16 times a year that I have to make myself available, or risk making myself not available, then that has consequences of my not being available to push one way or another on proposals. The general concern I have is that people interested in these topics will miss them. DE: This seems tied to the baking period. -JHD: Sometimes in the past, proposals have jumped multiple stages in one meeting. Often, people have expressed concern about high velocity advancement. They would prefer to wait for the next meeting for the next stage. Because we have 2 months between meetings, we have a default imposed buffer, where people have time to discuss issues on GitHub, etc. 
If we have the potential for advancement every 2 weeks, in the span of 2 months, the current period between meeting, there are more opportunities for advancement. You could push a proposal from stage 0 to stage 4 in that time! +JHD: Sometimes in the past, proposals have jumped multiple stages in one meeting. Often, people have expressed concern about high velocity advancement. They would prefer to wait for the next meeting for the next stage. Because we have 2 months between meetings, we have a default imposed buffer, where people have time to discuss issues on GitHub, etc. If we have the potential for advancement every 2 weeks, in the span of 2 months, the current period between meeting, there are more opportunities for advancement. You could push a proposal from stage 0 to stage 4 in that time! DE: It might allow things to be more fluid, if we want to talk about proposals for limits then that would be interesting. @@ -315,7 +306,7 @@ SYG: I think allowing for a higher velocity is a good thing. There are proposals AKI: Nothing is going to make its way to Stage 4 in two months given that they need implementations. -JHD: Given that an npm package could be considered an implementation? And even getting to Stage 3 in that time is too fast. +JHD: Given that an npm package could be considered an implementation? And even getting to Stage 3 in that time is too fast. DE: There is a lot we do leading up to stage 3 where we want to be sufficiently open to feedback. @@ -327,7 +318,7 @@ JRL: Is that ok with ECMA? SYG: That is a great question. That stuff can be streamlined. -IS: I think we would have to basically ……. They don’t have to go to the meeting, unless it is a crisis situation. Certainly it gives a lot more flexibility. (Note: not said at the meeting. About 4-5 years ago we had already the complaint at the TC39 meeting that we take up too much time with process and administrative tasks. For many participants that is annoying. So, we have generally decided to have minimal process and administration related discussions in the F2F TC39 meeting, but push those items to the GitHub Reflector. I think we have to go back to that situation). +IS: I think we would have to basically ……. They don’t have to go to the meeting, unless it is a crisis situation. Certainly it gives a lot more flexibility. (Note: not said at the meeting. About 4-5 years ago we had already the complaint at the TC39 meeting that we take up too much time with process and administrative tasks. For many participants that is annoying. So, we have generally decided to have minimal process and administration related discussions in the F2F TC39 meeting, but push those items to the GitHub Reflector. I think we have to go back to that situation). JRL: Ok @@ -341,7 +332,7 @@ WH: Let’s go through the comments first. SYG: Sure. -MLS: 1 hour meeting - 1 item meeting, I’m not going to show up if I don’t care about it. Even if we have a 3-hour meeting, it's a 3-topic or 4-topic meeting, and I'm not going to attend if I'm not interested in the topics. I think it's important that all delegates have an understanding of what's going on. Do I like to travel? No. I think there is something to be said about getting together for several days, including side conversations. +MLS: 1 hour meeting - 1 item meeting, I’m not going to show up if I don’t care about it. Even if we have a 3-hour meeting, it's a 3-topic or 4-topic meeting, and I'm not going to attend if I'm not interested in the topics. 
I think it's important that all delegates have an understanding of what's going on. Do I like to travel? No. I think there is something to be said about getting together for several days, including side conversations. SYG: It is not going to eliminate that. @@ -349,7 +340,7 @@ MLS: well, we will still have 4, I’m concerned how this would work. I think th DE: Would you recommend that we limit the scope of what we discuss in these calls? -MLS: I have the same concern as JHD, having to be present for many calls in the year. It’s a catch-22. I don’t want the greater frequency/less time, I think it is important for all delegates to participate. +MLS: I have the same concern as JHD, having to be present for many calls in the year. It’s a catch-22. I don’t want the greater frequency/less time, I think it is important for all delegates to participate. SFC: I agree with MLS. @@ -369,7 +360,7 @@ WH: The rule needs to be, we will not add last-minute agenda items, instead of t SYG: I agree with that. Part of the reason is that the next chance we have is in two months, if we have more calls, the burden is lifted. -YSV: From my perspective, it would be better to have longer, less-frequent meetings, because I schedule review periods for my team. If they were more frequent, it would be harder to schedule review periods. It would be a much higher burden to handle that going forward. I would propose 5 (?) calls that are 5 hours long. +YSV: From my perspective, it would be better to have longer, less-frequent meetings, because I schedule review periods for my team. If they were more frequent, it would be harder to schedule review periods. It would be a much higher burden to handle that going forward. I would propose 5 (?) calls that are 5 hours long. GHO: This opens the door for companies that are in the bay area to attend the remote plenary in together. @@ -385,7 +376,7 @@ SYG: It doesn’t seem forbidden, if you can show up to someone’s office at 9A MBS: I think this is a yes-and situation, considering co-located plenaries. I know how much work it is. If we have hubs for NY, SF, and Europe, maybe we can run co-located plenaries for them. It would limit the amount of travel required. I don’t think this would be perfect, but it would be less disruptive. -JHD: The downside is that if a meeting is not 100% remote, the inevitable outcome is that the remote experience degrades, whereas if the experience is equal to everyone, people have an incentive to make sure the A/V works, etc. If we have a remote plenary it would be a better experience if everyone is remote. +JHD: The downside is that if a meeting is not 100% remote, the inevitable outcome is that the remote experience degrades, whereas if the experience is equal to everyone, people have an incentive to make sure the A/V works, etc. If we have a remote plenary it would be a better experience if everyone is remote. API: In my mind it couldn’t work if there were hub, because one region is always going to be out. If you shift to accommodate Europe, Asia is out. @@ -403,11 +394,10 @@ API: I have a suggestion, it’s my queue item. MF: We could do these remote meetings at the same time as the in person, but not have stage advancement, have commitment from everyone on the call for stage advancement, but do the stage advancement at the next meeting, so that it doesn’t immediately start out moving the ability to attend. -SYG: I disagree with starting off with calls without stage advancement. 
The feedback thing could be handled in the context of the SLTG, which I haven't presented yet. If the trial run is to see if people can show up for a call, I hope people can do that. +SYG: I disagree with starting off with calls without stage advancement. The feedback thing could be handled in the context of the SLTG, which I haven't presented yet. If the trial run is to see if people can show up for a call, I hope people can do that. KM: +1 - YSV: +1 SYG: I would like to present the second part of the presentation. Is that OK? @@ -426,7 +416,7 @@ CM: I’m uncomfortable with getting squeezed by the timebox. I’m sympathetic YSV: We do still have the other 4 meetings. Many delegates don’t go to all six. By reducing the number, we might increase the number of delegates that go to the same meeting. -CM: If there are only 4 meetings, that makes it more likely that delegates attend all four? That's plausible. I’m just very concerned about any time you have delicate social dynamics disturbing it is risky. +CM: If there are only 4 meetings, that makes it more likely that delegates attend all four? That's plausible. I’m just very concerned about any time you have delicate social dynamics disturbing it is risky. SFC: I echo CM and MLS and WH. In general I am supportive of the idea of incubator calls. I also feel that alone solves a lot of the goals of remote plenaries, identifying the issues being raised. Even starting with just incubator calls, and seeing if it makes in person plenaries more efficient, I don’t see why you don’t need both. If it does make it more efficient, we could go to 4 in-person plenaries per year and not have remote plenaries. @@ -491,7 +481,7 @@ BF: It seems like this would need to affect any host provided API, not just thro JHX: that would be discussed later. it is not very hard to define this argument. e.g. most methods on prototype should be true and all constructors it should be null. -BF: I do not believe that is the case, but we can discuss that later. +BF: I do not believe that is the case, but we can discuss that later. BF: This proposal is leading towards a much more rich code expectation/reflection, I don’t think it should be limited to the usage of this. You can’t tell if a thing is intended to be constructed. You can sometimes tell via prototype, but not 100%, this should be generalized. @@ -503,7 +493,6 @@ JHD: The question I put on the queue is Array.of, thisArgumentExpected => false, JHX: The feature is to mark the intention of api. Frameworks/libraries and language can use it for defensive programming, depends on author of library how it should use this information. - JHD: You are intending use cases where someone receives a function and dynamically decides whether to give it a receiver based on this data property? JHX: For example, when you add an event listener, it can throw in an early stage to give better errors. @@ -538,7 +527,7 @@ JHX: Yes SYG: This is similar to the other proposal I asked to be rebranded (ArrayBuffer.fillRandom). In order to get stage 1, we have ignored the presentation, which is mostly about solution. The only part we are getting to stage 1. I am completely uncomfortable without paring down the proposal. If people look at the proposal list and see thisArgumentExpected, that is not at all what reached stage 1. -MF: That's a common theme, with delegates overworking their Stage 1 request. And it's useful for the committee to see the solution they have in their mind, but I agree that from a marketing perspective, we need to be clear. 
+MF: That's a common theme, with delegates overworking their Stage 1 request. And it's useful for the committee to see the solution they have in their mind, but I agree that from a marketing perspective, we need to be clear. SYG: When people discuss this when they want to participate, if the explainer discusses this, I don’t think we should encourage the particular shape. @@ -566,7 +555,6 @@ JHX: Yes. I would like to ask for stage 1? RPR: Any objections to stage 1? - (several objections) RPR: Will they be provided? @@ -577,7 +565,6 @@ WH: This does not solve any problem and introduces a new way of saying `this` in RPR: Thank you for your presentation JHX. - ### Conclusion - `thisArgumentExpected` proposal not advanced to stage 1, waiting for additional clarification of intent of proposal and renaming explainer. @@ -587,7 +574,6 @@ RPR: Thank you for your presentation JHX. Presenter: Myles Borins (MBS) - - [proposal](https://github.com/tc39/proposal-module-attributes) - [slides](https://docs.google.com/presentation/d/1m6J33TeFHnkFOKXqBnkhS6RqBVsJV4n70X_j5ALI47g/edit) @@ -597,13 +583,13 @@ MBS: We are not looking for stage advancement, but are looking for stage advance BFS: I’m not comfortable to ship the unkeyed form. Using two keywords instead of one is vital. Having as type instead of as seems important for a variety of reasons. We don’t know if other environments are going to use this. It remains specific to web concerns. I would want to hear from other environments. If we do ship an unkeyed form, and need to upgrade to a keyed form. I am totally OK with it being a following for the keyed form. -DE: I thought that for one, going backwards, with the keyed/unkeyed form, I thought environments could be responsible for taking a string and considering it equivalent to the object. So I wonder if it would be okay to have both the string and the object literal being accepted. +DE: I thought that for one, going backwards, with the keyed/unkeyed form, I thought environments could be responsible for taking a string and considering it equivalent to the object. So I wonder if it would be okay to have both the string and the object literal being accepted. -BFS: I am not okay with that. It is a major interoperability concern. +BFS: I am not okay with that. It is a major interoperability concern. DE: I care alot about interop. We can discuss this further. You are characterizing this as web specific, but this applies to any environment where you load modules based on mime type. -BFS: The type check I have no problems with. I don't like the framing of your security concerns. For example, I've talked with the WASI folks, and normally WASM should have no side-effects, but now that's no longer true with this proposal. +BFS: The type check I have no problems with. I don't like the framing of your security concerns. For example, I've talked with the WASI folks, and normally WASM should have no side-effects, but now that's no longer true with this proposal. DE: I don’t know who claimed that. The WASM stuff is about it being equivalently powerful to JS. We’ve left whether WASM should be marked as an open question. @@ -619,7 +605,7 @@ BFS: But they want web compatibility. MBS: With that in mind, if we want to have, in the case you mentioned, if the web is unable to support this without this mechanism. -BFS: That's not true, because there are out-of-band solutions. We can discuss offline. +BFS: That's not true, because there are out-of-band solutions. We can discuss offline. MBS: Ok. 
@@ -635,7 +621,7 @@ GCL: Like CommonJS has done that for years. DE: Ok, but in that case you would change the file extension. -GCL: Yeah, but the consumer doesn't necessarily type in the file extension. If the host didn't want to subscribe to the thing about not branding a module when you import/export it, how do you deal with this? +GCL: Yeah, but the consumer doesn't necessarily type in the file extension. If the host didn't want to subscribe to the thing about not branding a module when you import/export it, how do you deal with this? DE: I think if you want to use modules that would also work on the web, you would use a module with a JS mime type, and would have a module that redirects one to another. @@ -651,7 +637,7 @@ MF: I would like to see this proposal. MLS: Since we have a generalized constant form, we need a better keyword than as. -DE: Myles and I were debating whether to show the keyword bikeshedding slide. We agree and aren't sure what keyword to use. +DE: Myles and I were debating whether to show the keyword bikeshedding slide. We agree and aren't sure what keyword to use. KM: With would make more sense. @@ -679,7 +665,6 @@ Not asking for stage advancement, will follow up with MF, BFS, GCL, RGN. Presenter: Shane F. Carr (SFC) - - [proposal](https://github.com/tc39/proposal-temporal/issues/310) - [slides](https://docs.google.com/presentation/d/1nx3Gq2orWoKYbjeQJuJQFEIh1Rk_bqf9Rb4o-D8I3x0/edit) SFC: (presents slides, quickly). @@ -716,7 +701,6 @@ MM: I’m not understanding some depth here. My reaction is that this is normal SFC: Similarity to? - MM: Similarity to symbol usage in the cancel token proposal. SFC: I’m not familiar with that. @@ -732,4 +716,3 @@ SFC: In general we have support for strings. We will follow up on the 1 symbol a SFC will follow up with RBN regarding using a similar pattern to cancellation tokens, where one symbol is used, returning this. General feeling to using strings in meeting. - diff --git a/meetings/2020-03/april-1.md b/meetings/2020-03/april-1.md index c54c6017..d0dfcdc2 100644 --- a/meetings/2020-03/april-1.md +++ b/meetings/2020-03/april-1.md @@ -1,11 +1,11 @@ # April 01, 2020 Meeting Notes + ----- **In-person attendees:** **Remote attendees:** Yulia Startsev (YSV), Mark Cohen (MPC), Jeff Long (JHL), Bradley Farias (BFS), Rick Button (RBU), Michael Ficarra (MF), Mathias Bynens (MB), Myles Borins (MBS), Caio Lima (CLA), Dave Poole (DMP), Jason Williams (JWS), Kevin Gibbons (KG), Chip Morningstar (CM), Philip Chimento (PFC), Mary Marchini (MAR), Rob Palmer (RPR), Ross Kirsling (RKG), Waldemar Horwat (WH), Pieter Ouwerkerk (POK), Bradford C. Smith (BCS), Ujjwal Sharma (USA), Richard Gibson (RGN), Felienne Hermans (FHS), Nicolò Ribaudo (NRO), Shane F Carr (SFC), Justin Ridgewell (JRL), Jack Works (JWK), Philipp Dunkel (PDL), Robin Ricard (RRD), Ben Newman (BN), Sergey Rubanov (SRV), Jordan Harband (JHD), Guilherme Hermeto (GHO), Robert Pamely (RPY), Edgar Barragan (EB), Mark Miller (MM), Hemanth HM (HHM), Aki (AKI), HE Shi-Jun (John Hax) [JHX], Daniel Rosenwasser (DRR) - ## Record and Tuple Update Presenter: Rick Button (RBU) and Robin Ricard (RRD) @@ -34,7 +34,7 @@ RBU: Yes. RBU: (returns to slides) -MM: There is a way to delete. I presume you're doing the natural extension of this for pattern matching. (Note: I clarified below that I meant destructuring.) If you pattern match against the pattern `#{foo, ...rest}` then `rest` is the thing you've pattern-patched against without `”foo”`. Pattern-match removes properties from things. 
So your deep update proposal is very nice - I also want to ask have you thought about providing the same kind of deepness on pattern matching? And does the deepness on both the expression and matching side - is that also an extension we should apply to regular objects? +MM: There is a way to delete. I presume you're doing the natural extension of this for pattern matching. (Note: I clarified below that I meant destructuring.) If you pattern match against the pattern `#{foo, ...rest}` then `rest` is the thing you've pattern-patched against without `”foo”`. Pattern-match removes properties from things. So your deep update proposal is very nice - I also want to ask have you thought about providing the same kind of deepness on pattern matching? And does the deepness on both the expression and matching side - is that also an extension we should apply to regular objects? RBU: The first statement about deleting in a pattern match: that sounds reasonable. I'm not super familiar with the pattern matching proposal but I think that makes sense. There has been discussion about what to do about a record that it spread and destructured produces an object or a record - right now we’re just making it a record, but there’s active discussion on Github. @@ -86,8 +86,7 @@ RRD: Same here. And to pre-empt one question, we have pure userland solutions th BFS: I want to state that there are various semantics for RefCollection, we can discuss it elsewhere and it can mostly be done in userland. -1 Reply: Userland can do RefCollection - Bradley Farias -2 Reply: Userland cannot do RefCollection with JS as it is. This proposal is an attempt to support "boxing" objects as many people have requested - Daniel Ehrenberg (Igalia) +1 Reply: Userland can do RefCollection - Bradley Farias 2 Reply: Userland cannot do RefCollection with JS as it is. This proposal is an attempt to support "boxing" objects as many people have requested - Daniel Ehrenberg (Igalia) SFC: To override a field deep in the structure, can we have a Record.assign() like Object.assign()? @@ -117,13 +116,13 @@ GCL: I’ll follow up on Github. RBU: Issues 20 or 65 that I mentioned in the slides. -WH: I like the clean solution for equality, with one caveat: I'm curious about the interaction between this proposal and value types, which was mentioned as a way to implement complex numbers by MM in yesterday’s discussion. Would we need something that's similar but different for value types, or could we re-use records and tuples for complex value types? The dilemma being that for complex numbers, if you want IEEE complex numbers, they need the IEEE notion of equality across the real and imaginary parts. +WH: I like the clean solution for equality, with one caveat: I'm curious about the interaction between this proposal and value types, which was mentioned as a way to implement complex numbers by MM in yesterday’s discussion. Would we need something that's similar but different for value types, or could we re-use records and tuples for complex value types? The dilemma being that for complex numbers, if you want IEEE complex numbers, they need the IEEE notion of equality across the real and imaginary parts. RBU: I’m not as deeply familiar with value types as DE, so I'll defer to him on that. DE: Record and Tuple is sort of a more programmer-friendly name for value types. Records and Tuples are kind of a data model for value types. The idea of IEEE semantics for complex numbers is an interesting one. 
We could start with more NaN-like equality, and later on introduce equality for complex numbers. But for now we could say that these object-like things and array-like things don’t support these things and later on expose something supporting that. Value types will require many additional capabilities, that Records and Tuples don't provide, but will build on the same data model. -WH: It seems like a reasonable evolutionary path. I want to see something in the proposal to remind us about that direction. +WH: It seems like a reasonable evolutionary path. I want to see something in the proposal to remind us about that direction. RBU: I think there’s a small snippet but we could expand it. @@ -131,7 +130,7 @@ DE: Yeah, I would be happy to discuss this in greater detail. I was planning to BN: It feels like there's a lot in this proposal right now. Some people will say that syntax is expensive. I wonder if the runtime part (i.e. the record and tuple functions, which I think are totally adequate for getting the sort of immutable triple equals objects you want) . Might be something that we can prioritize as a proposal in itself. Then a follow-up proposal to add the shorthand hash syntax, or do we need to include everything that's eventually going to be in the proposal up front? -RBU: From the champions group perspective, I think there’s always a fine line you have to draw between how fine-grained and how large you want these proposals to be. We're trying to strike that balance by splitting out RefCollections, removing with syntax, etc. I still think the syntax is very valuable for two reasons: 1) you don't just repeat the word Record or Tuple over and over again. 2) We already have significant runtime experience with this, with libraries that implement immutable data structures in userland. So the goal with record and tuple is to not only provide an alternative to those, but to provide something with a very terse syntax integrated into the language so you use it as much as possible. The addition of the shorthand syntax adds a lot of value to the proposal. +RBU: From the champions group perspective, I think there’s always a fine line you have to draw between how fine-grained and how large you want these proposals to be. We're trying to strike that balance by splitting out RefCollections, removing with syntax, etc. I still think the syntax is very valuable for two reasons: 1) you don't just repeat the word Record or Tuple over and over again. 2) We already have significant runtime experience with this, with libraries that implement immutable data structures in userland. So the goal with record and tuple is to not only provide an alternative to those, but to provide something with a very terse syntax integrated into the language so you use it as much as possible. The addition of the shorthand syntax adds a lot of value to the proposal. BN: I agree and I like this syntax, but it’s important to mention that it’s completely separable. @@ -148,6 +147,7 @@ JRL: I just wanted to voice strong support for RefCollection. The ability to pla Chair: Champions group, please ensure that the remaining comments about RefCollection are discussed. ### Conclusion/Resolution + - https://github.com/rricard/proposal-refcollection/ - https://github.com/rickbutton/proposal-deep-path-properties-for-record @@ -161,13 +161,14 @@ Presenter: Jack Works (JWK) JWK: (presents slides) WH: [Discussing the “Wait for discussions” slide]: -* Decreasing ranges seem fine. -* `Number.range(0, 10, -5)` should yield nothing. 
-* I wouldn’t worry about preventing infinite loops such as `Number.range(42, 100, 1e-323)`; there are lots of ways to get an infinite loop, including the obvious `Number.range(0, Infinity)`. -* `BigInt.range(0, Infinity)` seems useful, so we should allow an `Infinity` limit when making BigInt ranges. -* I wouldn’t bother with `includes`. -SFC: [Regarding issue #17](https://github.com/Jack-Works/proposal-Number.range/issues/17), when you have a Number.range(), does it return an iterator that has the next() method directly, so it's consumable once? Richard wrote that one way to think about Number.range() is that it’s an iterator factory - you call it and it gives you an iterator that’s consumable. The other model is that you call it and it gives you a Number.range first class object that’s not an iterator, but has a `[Symbol.iterator]` method that gives the iterator. The advantage of it returning an immutable is you can take it and use it in other places, e.g. with a NumberRangeFormatter or as an argument to a function. Generally, you may want to reuse it. I think both approaches have clear advantages and disadvantages. +- Decreasing ranges seem fine. +- `Number.range(0, 10, -5)` should yield nothing. +- I wouldn’t worry about preventing infinite loops such as `Number.range(42, 100, 1e-323)`; there are lots of ways to get an infinite loop, including the obvious `Number.range(0, Infinity)`. +- `BigInt.range(0, Infinity)` seems useful, so we should allow an `Infinity` limit when making BigInt ranges. +- I wouldn’t bother with `includes`. + +SFC: [Regarding issue #17](https://github.com/Jack-Works/proposal-Number.range/issues/17), when you have a Number.range(), does it return an iterator that has the next() method directly, so it's consumable once? Richard wrote that one way to think about Number.range() is that it’s an iterator factory - you call it and it gives you an iterator that’s consumable. The other model is that you call it and it gives you a Number.range first class object that’s not an iterator, but has a `[Symbol.iterator]` method that gives the iterator. The advantage of it returning an immutable is you can take it and use it in other places, e.g. with a NumberRangeFormatter or as an argument to a function. Generally, you may want to reuse it. I think both approaches have clear advantages and disadvantages. GCL: I think reuse via iterable as you’ve described it is actually a downside, because you end up having implicit reuse instead of say, passing around a function, where you call it each time and it’s explicit that you would get a fresh iterator each time. I think reuse is good but I don’t understand why you’d want it implicit. @@ -192,6 +193,7 @@ RBN: I’m not willing to block stage 1, but I think we should consider explorin No other objections. ### Conclusion/Resolution + - Number.range and BigInt.range advances to Stage 1. ## Introducing: `this` argument reflection functions for stage 1 @@ -255,7 +257,7 @@ JHD: That was regarding making sure we have a repository. Materials were not agr AKI: At least if there’s a repository we can get more information. -SYG: I'm not sold on the motivation from last time, and it seems like the motivation remains the same from last time this was presented. The only difference I can tell is that the APIs are different. As far as I can see the motivation is for better error handling when it’s used by user code. I think we can say something pretty narrow about function ??? 
I don't think we can use this in the way the proposal is motivated, which is for throwing better errors. Are we going to be mandating that something like this is provided for ?? methods? For other things? We can say something narrow like “this binding is in user code”, but that doesn't’ make sense for a method provided as a native thing on the platform. And if this were in the language, how would we adopt it to do what it’s designed to do? Would you need to monkey patch all platform code to check for this? I'm confused how this would actually be used. If it's only for solving user code, I'm not sure why you cannot check this in user code today. +SYG: I'm not sold on the motivation from last time, and it seems like the motivation remains the same from last time this was presented. The only difference I can tell is that the APIs are different. As far as I can see the motivation is for better error handling when it’s used by user code. I think we can say something pretty narrow about function ??? I don't think we can use this in the way the proposal is motivated, which is for throwing better errors. Are we going to be mandating that something like this is provided for ?? methods? For other things? We can say something narrow like “this binding is in user code”, but that doesn't’ make sense for a method provided as a native thing on the platform. And if this were in the language, how would we adopt it to do what it’s designed to do? Would you need to monkey patch all platform code to check for this? I'm confused how this would actually be used. If it's only for solving user code, I'm not sure why you cannot check this in user code today. JHX: THe problem here is that currently we do not have the mechanism. THere’s no way frameworks and libraries can give the better error message, and not only that but to allow them to throw other errors in earlier stages. Currently it only triggers a reference error at runtime in a very late stage. For example, only when you click it and the code runs to the callsite, only then will it be a reference error. It's hard for users to find their bug in the first place. And about the platform and native things, I think it could also - it should be provided, this feature. Currently, the native APIs have the behavior, for example the constructors never expect a this argument. Arrow functions never expect this argument. No API tool exposes this information. @@ -279,16 +281,15 @@ SYG: Happy to discuss offline. AKI: Use IRC to set up a time to clarify. I’ll save the queue. -1: Reply: fundamentally undecidable -Gus Caplan (@nodejs @tc39 @WebAssembly @OpenJS-Foundation) +1: Reply: fundamentally undecidable Gus Caplan (@nodejs @tc39 @WebAssembly @OpenJS-Foundation) -2: New Topic: function proxies are interesting -Mark S. Miller (@Agoric ) +2: New Topic: function proxies are interesting Mark S. Miller (@Agoric ) 3: New Topic: `if (this !== undefined) throw 'did not expect this'` Gus Caplan (@nodejs @tc39 @WebAssembly @OpenJS-Foundation) ### Conclusion/Resolution + - Not proposed for Stage 1, will discuss offline JHX/SYG ## Relax hashbang syntax for Stage 1 @@ -327,35 +328,31 @@ KM: If I had ???... before it was at the global scope AKI: We are at time and the queue is full. We can try to come back tomorrow, but I don’t think a 15 minute time box was enough. 
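
(Note: not said at the meeting. A minimal illustration of the case debated above: today a hashbang is only recognized as the very first characters of the source, so any leading whitespace or prepended text, for example from naive concatenation, makes it a SyntaxError. The relaxation under discussion would tolerate at least the leading-whitespace form.)

```js
#!/usr/bin/env node
// Valid today only because the hashbang is the very first thing in the file.
// If whitespace or any other text preceded the "#!" line (e.g. after a simple
// concatenation step), current syntax rejects it; the proposed relaxation would
// tolerate at least leading whitespace before it.
console.log("hello");
```
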
### Conclusion/Resolution + - Not proposed for stage advancement Remaining queue: 1 New Topic: the context (an inline include, eg) is required information when authoring `test.js` Jordan Harband -2 New Topic: I'm in favor of tolerating initial whitespace. But not other places -Mark S. Miller (@Agoric ) +2 New Topic: I'm in favor of tolerating initial whitespace. But not other places Mark S. Miller (@Agoric ) 3 New Topic: why should we cater to simplistic/brittle code transform tools? Jordan Harband -4 New Topic: Should not do this -Waldemar Horwat +4 New Topic: Should not do this Waldemar Horwat -5 New Topic: We should not set a precedent that simple concatenation preserves semantics -Shu-yu Guo (@google) +5 New Topic: We should not set a precedent that simple concatenation preserves semantics Shu-yu Guo (@google) 6 New Topic: Are there examples in other languages that don't use # for comments, but ignore hashbangs? Philip Chimento -7 New Topic: There's already some precedent with `sourceURL` and `sourceMappingURL`, which only have an effect once -Devin Rousso +7 New Topic: There's already some precedent with `sourceURL` and `sourceMappingURL`, which only have an effect once Devin Rousso 8 New Topic: Hashbang starting with a space is not a valid hashbang on Linux (and probably other UNIX systems) Mary Marchini (@Netflix) -9 New Topic: Even simple ASCIIfiers need to understand JS semantics -Justin Ridgewell (@google -> @ampproject ) +9 New Topic: Even simple ASCIIfiers need to understand JS semantics Justin Ridgewell (@google -> @ampproject ) ## WeakRefs FinalizationRegistry API change @@ -370,18 +367,18 @@ DE: Consensus on removing the FinalizationRegistry iterators? MM: There was some terminology about microtask checkpoints in the materials. -DE: I've done my best to eliminate references to microtask checkpoints. If there are any remaining, let's work offline to remove it. +DE: I've done my best to eliminate references to microtask checkpoints. If there are any remaining, let's work offline to remove it. MM: OK, so the issue is not to remove it, but what is it you’re trying to explain? I don’t know the details of the web scheduling semantics and how that translates to JavaScript. DE: So, in the WeakRef proposal, I want to apologize for any layering errors, the spec doesn’t refer to microtask checkpoints, but the documentation does refer to the web embedding. In particular, the WeakRef spec deferst to the host in how to to handle the finalization callbacks. -The way the web does it, and all the embedders such as Node.js, is that these don't interrupt a chain of Promise resolutions. Promise jobs are treated as higher priority than WeakRef cleanup jobs. The spec leaves it open to JS hosts to treat them at the same priority. My understanding is that Moddable wants to treat them at the same priority. I don't see a problem with that. But the web, it’s motivated by wanting to not allow observability of the GC at higher granularity to prevent compatibility issues. +The way the web does it, and all the embedders such as Node.js, is that these don't interrupt a chain of Promise resolutions. Promise jobs are treated as higher priority than WeakRef cleanup jobs. The spec leaves it open to JS hosts to treat them at the same priority. My understanding is that Moddable wants to treat them at the same priority. I don't see a problem with that. But the web, it’s motivated by wanting to not allow observability of the GC at higher granularity to prevent compatibility issues. 
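
(Note: not said at the meeting. A minimal sketch of the callback-per-value shape agreed on this item, with the cleanup iterator removed; the cache and the registered objects are illustrative stand-ins.)

```js
// The held value is now passed directly to the cleanup callback; there is no
// cleanup iterator to loop over.
const cache = new Map(); // key -> WeakRef(obj); illustrative stand-in

const registry = new FinalizationRegistry(heldValue => {
  // heldValue is whatever was passed to register(); here it is the cache key.
  cache.delete(heldValue);
});

function remember(key, obj) {
  cache.set(key, new WeakRef(obj));
  registry.register(obj, key);
}

// Per the scheduling described above, on the web this callback runs in its own
// job, after pending promise reactions, and never in the middle of a running job.
```
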
-MM: So, I want to make sure that this particular invariance is preserved—the host given more freedom. The host cannot schedule a callback other than at a turn/job boundary. It cannot do the callback in the middle of an existing job. +MM: So, I want to make sure that this particular invariance is preserved—the host given more freedom. The host cannot schedule a callback other than at a turn/job boundary. It cannot do the callback in the middle of an existing job. DE: That’s correct. I don’t know if we’ve ever had language to prevent code from interrupting for any reason. It would be interesting to investigate adding such a language to the specification. When I read the spec, I always think in terms of that invariant being preserved. -MM: So first of all, I agree that we should tighten the spec, but I think it's urgent on this one especially. For this one, the callback happens spontaneously in a way that is unrelated with what the user code is doing at that moment., Because garbage collection happens interleaved at fine grain with what the user is doing. So if the language in the spec is phrased such that the host has to choose the order, as jobs, they can only be scheduled among jobs. That language would not imply that the host can do it during a job. +MM: So first of all, I agree that we should tighten the spec, but I think it's urgent on this one especially. For this one, the callback happens spontaneously in a way that is unrelated with what the user code is doing at that moment., Because garbage collection happens interleaved at fine grain with what the user is doing. So if the language in the spec is phrased such that the host has to choose the order, as jobs, they can only be scheduled among jobs. That language would not imply that the host can do it during a job. DE: The plan for this is to respect that JavaScript works in the framework that we have established. I think we should review that language offline and see if we need to add any clarification that jobs are not to run in parallel. Let’s follow up about this offline. I have not rebased WeakRefs to the new thing that has landed for Promises. As a part of the rebase we should handle that. @@ -391,7 +388,7 @@ SYG: To your earlier concern about the current language in the spec text, specif MM: Ok. It does. If you don’t have a good answer to KM about motivating cleanupSome I can do that. -YSV: The proposal to use an iterator comes from the Mozilla side. We've been talking about this in depth. We totally support moving this to a per-item callback. We are waiting to hear back from one person, but this won’t change that, so we can go ahead with this. +YSV: The proposal to use an iterator comes from the Mozilla side. We've been talking about this in depth. We totally support moving this to a per-item callback. We are waiting to hear back from one person, but this won’t change that, so we can go ahead with this. DE: Thank you. @@ -399,7 +396,7 @@ SYG: The test262 PR does exist, if we have consensus on this it should be ready KM: I was trying to figure out the use case for cleanupSome. -MM: When we were first doing the WeakRef proposal, there were two use cases that needed cleanupSome, but we didn’t have cleanupSome. We invented cleanupSome for those two use cases: the two use cases are similar. One is long running WASM computations that never return to the event loop. The other is, in a worker, a long running JS computation that simply doesn’t use the JS event queue, instead some internal control structure. 
That might arise when the JS code is transpiled form some other language that doesn't have loop event loop semantics. Any such long-running program would know based on its semantics where the safe points are. It should be able to say, here are the safe points. +MM: When we were first doing the WeakRef proposal, there were two use cases that needed cleanupSome, but we didn’t have cleanupSome. We invented cleanupSome for those two use cases: the two use cases are similar. One is long running WASM computations that never return to the event loop. The other is, in a worker, a long running JS computation that simply doesn’t use the JS event queue, instead some internal control structure. That might arise when the JS code is transpiled form some other language that doesn't have loop event loop semantics. Any such long-running program would know based on its semantics where the safe points are. It should be able to say, here are the safe points. DE: In JS workers as well as WASM threads, you can use SAB i.e WASM shared memory for computation that is long running And might communicate with shared memory with Atomics in other threads. @@ -418,7 +415,7 @@ MM: An individual worker computation can be quite specialized. What are the func KM: The postMessage ones are already… you can't even initialize a shared memory without using postMessage first. -DE: Once you initialize it and start the computation, then you could use this API. I'm wondering how we should proceed from here. You've raised this concern some time ago, and we've addressed it as the champions group. +DE: Once you initialize it and start the computation, then you could use this API. I'm wondering how we should proceed from here. You've raised this concern some time ago, and we've addressed it as the champions group. KM: I’m very concerned about this API on the main thread for sure in the web. It feels like a big compatibility risk. I expect my thing to be cleaned up because I called cleanupSome after a GC. The main place you would care about this, at least on the web, but certainly other platforms, Where the main use case for this is probably things like "I have some giant existing code base which cares nothing about how to make a good ux on the web, and I want to just open a socket on that page and then draw something on a canvas on that page". I don’t want to enable APIs that enable that to be easier. @@ -439,13 +436,13 @@ WH: It says that, but it also says that implementations must empty weakrefs in a DE: But that’s if they choose that larger set. They could also choose [that smaller set?] -WH: Ok. So the choice is that's the part that's missing from the wording. +WH: Ok. So the choice is that's the part that's missing from the wording. DE: Right, so… SYG: Implementations do not have the obligation to choose the maximal set. -DE: I'd like to follow up offline to refine the wording. I don't understand exactly what. +DE: I'd like to follow up offline to refine the wording. I don't understand exactly what. WH: I will file an issue on that. @@ -453,11 +450,11 @@ DE: Perfect SYG: Can we come back to KM’s question on cleanupSome given that this has been stage 3 for a while, and we have made many late stage changes based on implementation feedback? For cleanupSome, the consensus is that in general we agree cleanupSome is useful. I hear some performance concerns that it might be abused to degrade main thread performance in a web setting. I think that concern is not something we should change about cleanupSome. 
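To make the long-running-computation use case above concrete, a rough sketch; `processTask` and `releaseHandle` are hypothetical stand-ins for application code, and the call to `cleanupSome` reflects the API as proposed at this point in the discussion:

```js
// Hypothetical stand-ins for application code:
function processTask(task) { /* work that allocates and drops registered objects */ }
function releaseHandle(handle) { console.log('released', handle); }

const registry = new FinalizationRegistry(releaseHandle);

// A worker-style computation that never returns to the event loop,
// draining pending finalization work at safe points it chooses itself.
function longRunningComputation(tasks) {
  for (const task of tasks) {
    processTask(task);

    // Safe point: ask for some pending cleanup now instead of waiting
    // for the host to schedule a cleanup job between turns.
    registry.cleanupSome(releaseHandle);
  }
}

longRunningComputation([1, 2, 3]);
```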
Concretely I want to ask KM, because I would like to ship WeakREf soon to get real world feedback, do you feel strongly about cleanupSome to ask for a change in the API? -KM: I'm torn on this. On one hand, I think this is an API we may regret for compatibility reasons. It's a concern as a historically smaller marketshare engine… Chrome does one thing, Safari does another, then you have to hack your website to clean up an object that's still live. (???) +KM: I'm torn on this. On one hand, I think this is an API we may regret for compatibility reasons. It's a concern as a historically smaller marketshare engine… Chrome does one thing, Safari does another, then you have to hack your website to clean up an object that's still live. (???) SYG: It’s not my understanding that removing cleanupSome would have consensus. If you are asking for a change it would be asking for something on the main thread? -KM: I haven't thought about that yet. I should speak to the architecture people on WebKit about it. I think they're currently unhappy with potential compatibility risks for the tradeoff of value proposition because the main use case they envisioned were the things I pointed out. That's why it came up so late. But I don't have an answer for you right now. +KM: I haven't thought about that yet. I should speak to the architecture people on WebKit about it. I think they're currently unhappy with potential compatibility risks for the tradeoff of value proposition because the main use case they envisioned were the things I pointed out. That's why it came up so late. But I don't have an answer for you right now. RPR: we’ve come to the end of the time box. @@ -473,15 +470,12 @@ RPR: Is that the conclusion? DE: I’d say we have consensus to remove the cleanup iterators and pass values directly. - -1 -Clarifying Question: Cleanup some doesn't mean "please perform cleanup", it means "do you have anything I can clean" +1 Clarifying Question: Cleanup some doesn't mean "please perform cleanup", it means "do you have anything I can clean" Mathieu Hofman (@stripe) -2 -New Topic: Hosts can choose to have cleanuoSome be empty on main thread -Mark S. Miller (@Agoric ) +2 New Topic: Hosts can choose to have cleanuoSome be empty on main thread Mark S. Miller (@Agoric ) ### Conclusion/Resolution + - consensus to remove FinalizationRegistry cleanup iterators and pass values directly - SYG will follow up with KM on cleanupSome before the end of this meeting ([link](https://github.com/tc39/notes/blob/master/meetings/2020-03/april-2.md#revisit-weakrefs-finalizationregistry-api-change)). @@ -502,7 +496,7 @@ USA: Yes, it returns a new fresh date with a different Calendar at this time. USA, JWS: (returns to slides) -MM: This is not a blocker; I'm curious. With the operations being named things like "plus", I'm wondering: If we already had DE's operator overloading proposal in the language, would it make sense to have dates overload some arithmetic operators so you could express date arithmetic? +MM: This is not a blocker; I'm curious. With the operations being named things like "plus", I'm wondering: If we already had DE's operator overloading proposal in the language, would it make sense to have dates overload some arithmetic operators so you could express date arithmetic? USA: I just wanted to point out that according to my understanding right now, operator overloading works when adding two objects of the same type. @@ -550,11 +544,10 @@ DE: It's not like all the champions have the same point of view on this. 
SFC: I don’t mean to say all champions have the same point of view, I’m sorry for saying that. It’s a meta question. I encourage people to participate more on these Github threads, especially this one. ----- +----- MBS: I’d like to update on the current plan for voting on the 2020 specification as well on the plan around surrogate pairs in regular expressions. We have the return of surrogate pairs in regular expression capture groups, it has 30 minutes available at the end of the day today, having talked to a few folks discussing this matter, it seems hopeful to reach consensus on this discussion. After that discussion, the plan is for the editor group to make a PDF of the spec that we would be voting on, and we have the vote on the agenda for tomorrow later in the day. If anyone has concerns about this specific plan of action please reach out to me or other chairs. I want to make sure we are all aware and can comment on it. - ## Ergonomic brand checks for Private Fields for Stage 1 Presenter: Jordan Harband (JHD) @@ -662,7 +655,7 @@ BFS: These have use cases without security concerns. SYG: For stage 1 that's perfectly sufficient. -MM: The Moddable implementation of Compartments is in a single Realm, so it does not have a reified Realms API, so Compartments really is independent of Realms. +MM: The Moddable implementation of Compartments is in a single Realm, so it does not have a reified Realms API, so Compartments really is independent of Realms. MF: Yesterday, we were talking about host hooks in the import.meta discussion, and DE brought up a good point that we shouldn’t just directly expose the host hooks we have in the spec today. When we proceed with this proposal, we should actually intentionally design how we want to expose the functionality that the host hooks provide. I just wanted to make that suggestion. @@ -697,7 +690,6 @@ MAR: My question is more on how we decide which hooks we’re going to have? Cou BFS: I agree there is some historical complaint on the web side about exposing such a hook. We could reevaluate that if we go discuss it, but it would have to be a coordinated discussion about whether we want it in the language because it would be disabled in a number of environments. - MAR: Sure but if it was less constrained on the compartment it would be a less contentious contentious thing? BFS: We would need to coordinate with the people who would want to disable it. @@ -733,7 +725,7 @@ GCL: I just wanted to be very careful because BFS said something about layering, BFS: The concern with layering is about what we would make to make an import map extension to ecma262. If we tried to remove it from ECMA262 it would be ??? for TC53 in such a way that it doesn’t ??? our use for promise values. -SYG: This is not a stage 1 blocker, but as it moves forward I think the implementability of it is not entirely clear on a hook-by-hook basis. If the current implementation is implemented in such a way that it's not easy to invoke a host hook, then... You are asking to make a bunch of things that are not observable now observable if you make compartments. +SYG: This is not a stage 1 blocker, but as it moves forward I think the implementability of it is not entirely clear on a hook-by-hook basis. If the current implementation is implemented in such a way that it's not easy to invoke a host hook, then... You are asking to make a bunch of things that are not observable now observable if you make compartments. BFS: That’s not necessarily true. I think that’s true of this rough draft design. 
Some of these may actually be static values where we expose the ability to ??? at runtime. @@ -760,7 +752,9 @@ SYG: I’d be happy to answer questions but I don’t have the cycles to proacti BFS: That’s perfect, thank you. ### Conclusion/Resolution -- Consensus reached for Compartments to Stage 1. + +- Consensus reached for Compartments to Stage 1. + ## Intl.NumberFormat v3 for Stage 1 Presenter: Shane F. Carr (SFC) @@ -787,7 +781,8 @@ AKI: I’m over here marveling on IRC because every time anything from Intl come RPR: No objections. You have stage 1. ### Conclusion/Resolution -- Consensus reached for Intl.NumberFormat V3 for Stage 1. + +- Consensus reached for Intl.NumberFormat V3 for Stage 1. ## import.meta for Stage 4 ([continued from previous day](https://github.com/tc39/notes/blob/master/meetings/2020-03/march-31.md#importmeta-for-stage-4)) @@ -879,6 +874,7 @@ DRR: I just want to know because for our compiler, and we have ES versions we ca MBS: The chair team and the editor team have been talking about this process, and the hope would be after we get through this meeting and the 2020 spec we would make better documentation about this process to ensure that we have clear times and clear process about when the spec can be relied on. This has been tribal knowledge, and that’s part of why things were delayed this year, so we wanted to make sure that it’s clearly documented and followed with good intent in future iterations of the spec. DRR: Thanks. + ### Conclusion/Resolution - Consensus to move import.meta to Stage 4, as is currently in the PR. @@ -908,7 +904,7 @@ WH: I concur with KG’s choices. The tooling argument dominates. Symmetry also MPC: While all of those examples are true, I think it's subjective that consistency within regexp is more important. Intuitively, at a base level, I would expect this thing to behave like an identifier. It is reasonable to expect that this `/(?<\u…\u.../>.)/` thing behaves like an identifier. -WH: I stand by my position. In the interest of time I don't want to digress to debating the relative importances of symmetries. If we want to fix the symmetry between regexes, strings, and identifiers, we could expand identifier token escape syntax to allow surrogate pairs, but that would be a different proposal. +WH: I stand by my position. In the interest of time I don't want to digress to debating the relative importances of symmetries. If we want to fix the symmetry between regexes, strings, and identifiers, we could expand identifier token escape syntax to allow surrogate pairs, but that would be a different proposal. KG: Before we get more into this particular topic, I don’t want to resolve the question of which kind of symmetry is more important. My hope is that the other arguments outweigh the symmetry goals. @@ -952,15 +948,14 @@ BFS: I have a slight preference for allowing `/(?<\u{...}>./` because it seems l KG: Ok. I am leaning towards this is legal too. Unless anyone has strong feelings. - RGN: Strong opinions against making it legal. SFC: I have preferences for allowing it, wearing my unicode hat. KG: I’m asking for consensus on making all of these [see ‘Open questions’ slide] legal, despite concerns from RGN. - ### Conclusion/Resolution + - Consensus that both kinds of escape sequences in both kinds of regular expression, as well as unescaped identifier characters even outside BMP, will be legal and will be included in the 2020 cut of the spec. 
- `/(?<\ud835\udc9c>.)/` will be legal - `/(?<\ud835\udc9c>.)/u` will be legal diff --git a/meetings/2020-03/april-2.md b/meetings/2020-03/april-2.md index 3c5f284b..6dda6ac6 100644 --- a/meetings/2020-03/april-2.md +++ b/meetings/2020-03/april-2.md @@ -1,4 +1,5 @@ # April 02, 2020 Meeting Notes + ----- **Notetakers:** Mark Cohen (MPC), Jason Williams (JWS), Rick Button (RBU), Dave Poole (DMP), Philip Chimento (PFC) @@ -36,7 +37,7 @@ MPC: I wanted to express being in favour of 4 days 5 hours in June. Regarding se CM: The long meeting is much more draining in the remote mode, because you just sit in a chair not moving the whole time. MBS: So i guess a question to that, do people have a preference for 5 days 4 hours vs. 4 days 5 hours? -MLS: I like 4 days 5 hours. +MLS: I like 4 days 5 hours. CM: Yes. @@ -75,7 +76,7 @@ MBS: One of the things I want to mention - I agree that this meeting has been su WH: To reply to MPC, the hallway track didn't work for me. Hubs makes my computer go haywire. I had a really bad experience with Zoom breakouts — it took a long time and many attempts to get the right people into a breakout session, and that was with extensive moderator help. I wasn't able to set one up on my own without the help of moderators. There is just no way to grab somebody in a hallway and chat with them during a break. We have no good solution for that. -MLS: I second what MBS said, I believe this did work because of the rapport we have, especially for the people who have been attending a year or two, or longer. I agree that 2020 I don’t think we’re going to be able to see each other face-to-face, given the way things are going. I concur that hubs - didn’t give it much chance. I had IRC open, I had the presentation, everybody's face, hubs is one too many things to figure out how to work. I agree the hallway channel - we need to figure out a better solution until we meet face to face. I’d advocate that we meet face to face regularly. Originally I was saying that the 3 days and longer days would be better, but because of time zones, I actually think that 4 days shorter schedule would be better. +MLS: I second what MBS said, I believe this did work because of the rapport we have, especially for the people who have been attending a year or two, or longer. I agree that 2020 I don’t think we’re going to be able to see each other face-to-face, given the way things are going. I concur that hubs - didn’t give it much chance. I had IRC open, I had the presentation, everybody's face, hubs is one too many things to figure out how to work. I agree the hallway channel - we need to figure out a better solution until we meet face to face. I’d advocate that we meet face to face regularly. Originally I was saying that the 3 days and longer days would be better, but because of time zones, I actually think that 4 days shorter schedule would be better. CM: I second the various comments about how we are leveraging our personal relationships that are rooted in our face to face meetings. It will be difficult for new people to build those kinds of relationships. I think that Hubs as an experiment was very much worth trying, but in contrast to MPC’s comment I think it was a total and utter failure, it just does not work on any level for me. Maybe some of this is colored for me by the fact that I’ve been doing VR stuff since the 1990s and this felt like a 1990s VR demo to me. 
There may be other tools, and the selective pressure for people to come up with alternatives will get stronger so I don’t think it’s a complete write-off, but I think I'd pronounce the Hubs experiment a failure. @@ -195,7 +196,6 @@ DE: Yes, maybe anyone who disagrees can - no, I’m not going to call for volunt AKI: Every single week for months! - DE: Right, so anyone who wants to join that process can try to join the chair group next year. It’s not like you make some comment that’s so insightful that everyone agrees. MBS: Summarizing that there are things that we are discussing. I’d like to make a formal proposal, to what DE was saying I’m not asking for consensus. Just feedback. I suggest we do a 4 day 5 hour format, at the end of the June meeting we can discuss the 3 day vs 4 day format. @@ -261,7 +261,7 @@ SYG: It will wait for the unlock. And then if it’s in the contended state, the RW: Real quick, sorry, in that case isn’t the atom index 2 so when you get to await/async it’ll say -- nevermind, it will unlock -SYG: The fail fast case in the async version, it has to wait for a ??? +SYG: The fail fast case in the async version, it has to wait for a ??? SYG: (continues to present PR) @@ -366,14 +366,13 @@ SYG: Is number 3, to satisfy the performance goal I layed out, the wrapper has a RPR: Are there any objections? No objections. ### Conclusion/Resolution -- Consensus reached for Option 3. +- Consensus reached for Option 3. WH: [After lunch break] I really wish we had a good solution for the hallway track. There were a few difficulties that arose this morning and I had no good way of resolving them. Some of them are doable by GitHub but many are not; I’ve tried. An example of a conversation I wish I could do in a hallway track is the performance implications of always allocating a fresh object by `waitAsync`. RW: I also just had some of the same conversations on that topic. - ## Revisit WeakRefs FinalizationRegistry API change Presenter: Keith Miller (KM) @@ -407,7 +406,7 @@ MM: I appreciate that it might be a controversial position. But I want to hear t MM: Let’s say we had not bought into it at all. Why would it be controversial? What is the reason to avoid introducing a long running single job application in a worker? -KM: There are a couple of different things. One of the biggest historical problems is how do you handle termination of the main page (if I understand it correctly). It's not spec-complaint to kill a worker if not waiting on the event loop. +KM: There are a couple of different things. One of the biggest historical problems is how do you handle termination of the main page (if I understand it correctly). It's not spec-complaint to kill a worker if not waiting on the event loop. There is a whole list of things - there is the quesiton of idle sleep, if the user is not doing something, does the long running event thing handle sleeping so that it doesn’t burn through battery life? MM: It blocks on communication through a SAB. @@ -474,9 +473,9 @@ SYG: So narrowly normative optional means allowing behavior that can be there or DE: I think it would be better to be there or not there. If we make it not there it is easier to feature detect. I think normative optional gives the exact right amount of power to hosts. We already have hosts that require things that are normative optional to be present, for example HTML requires Intl to be present. 
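A short sketch of the feature detection DE mentions, assuming `cleanupSome` is simply absent on hosts that opt out of the normative-optional method:

```js
const registry = new FinalizationRegistry((held) => {
  console.log('released', held); // regular host-scheduled cleanup path
});

function drainCleanupIfSupported() {
  // Present only on hosts that choose to provide the normative-optional method.
  if (typeof registry.cleanupSome === 'function') {
    registry.cleanupSome(); // run some pending cleanup at this safe point
    return true;
  }
  return false; // otherwise rely on cleanup jobs scheduled between turns
}

drainCleanupIfSupported();
```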
I think this is the most parsimonious host hook we could provide - we were previously talking about parsimony of host hooks, so the cases where it is present or not present or whether it is never present on the web, that is the simplest way to do it. -GCL: I wanted to point ot the distinction of main thread at a spec level might not be ideal, because Node is OK putting this API on its main thread. And JS setting things aside for the main thread might be a problem in that regard. +GCL: I wanted to point ot the distinction of main thread at a spec level might not be ideal, because Node is OK putting this API on its main thread. And JS setting things aside for the main thread might be a problem in that regard. -KM: That wasn't the intention. Just like you have a canBlock flag, you would have a can synchronized. +KM: That wasn't the intention. Just like you have a canBlock flag, you would have a can synchronized. GCL: Ok @@ -493,6 +492,7 @@ SYG: Are there any objections to making cleanup on FinalizationRegistration.prot KM: One quick note. I would like to apologize for bringing up this so late. ### Conclusion/Resolution + - Consensus on making cleanup on FinalizationRegistration.prototype literally normative optional. - Stakeholders to follow up on the HTML spec for the layering PR to debate on what to do on different threads. @@ -508,7 +508,7 @@ DE: (presents slides) No comments. -AKI: The last time decorators were on the agenda, it was a deep deep queue… very surprising that there's no queue today. Is the lack of a queue because people are good and going to work on this offline, or is it because people are tired? +AKI: The last time decorators were on the agenda, it was a deep deep queue… very surprising that there's no queue today. Is the lack of a queue because people are good and going to work on this offline, or is it because people are tired? MM: Dan was not proposing anything, but just updating on what has happened so far. @@ -520,8 +520,7 @@ DE: Please email me if you want an invitation. Presenter: Jordan Harband (JHD) - -JHD: I have created a GH release for 2020, I have posted it on the reflector. The URL for the main spec is https://github.com/tc39/ecma262/releases/download/es2020/ECMA-262.11th.edition.June.2020.pdf (GH is down at the moment, I will upload the PDF) We are asking if there is consensus for approval to send this to ECMA. +JHD: I have created a GH release for 2020, I have posted it on the reflector. The URL for the main spec is https://github.com/tc39/ecma262/releases/download/es2020/ECMA-262.11th.edition.June.2020.pdf (GH is down at the moment, I will upload the PDF) We are asking if there is consensus for approval to send this to ECMA. MBS: Thank you JHD. Let’s look at the queue. @@ -590,7 +589,9 @@ AKI: Just think about the fame and prestige! (silence) MBS: If you are interested in helping with the editorship, it would be much appreciated. + ### Conclusion/Resolution + - The vote for ECMAScript 2020 passes ## Incubator call chartering @@ -643,7 +644,6 @@ WH: Are you talking about participants or proposers when recommending that folks SYG: I’m talking about participants. - WH: Ok, that wasn’t clear. WH: What is the mechanism for, if when you look at the agenda for a particular meeting, you think there’s an item that you should be in the discussion for, but you cannot make that particular meeting? What’s the mechanism to request deferral? @@ -675,6 +675,7 @@ JRL: One question. 
Is this going to be open to external people or is it just res SYG: For now, restrict to delegates and invited experts. If they’re currently not a TC39 member we go through the same IPR policy as for invited experts. Part of the intended goal is that there are less surprises and less off the cuff things during plenary. If the feedback that's given ends up not having relevance in TC39 itself, that ends up not serving that goal as well. ### Conclusion/Resolution + - SYG will create a Reflector thread, move the agenda repo into the TC39 org, and reach out to stakeholders interested in attending the incubator calls. ## Discuss process changes we implemented in February to accommodate US members and US delegates @@ -685,7 +686,7 @@ Presenter: Myles Borins (MBS) MBS: (Presents slides) -JHD: The sense I had at the end of that was that there was not consensus, that we were doing it as a forced emergency response. So I am pretty sure I made clear that it was in the notes that this was begrudging acceptance because it’s more important to allow all our delegates to participate than to preserve the way things are, but my question is now it has been two months, has there been any updates from those delegates on relaxing that interpretation? +JHD: The sense I had at the end of that was that there was not consensus, that we were doing it as a forced emergency response. So I am pretty sure I made clear that it was in the notes that this was begrudging acceptance because it’s more important to allow all our delegates to participate than to preserve the way things are, but my question is now it has been two months, has there been any updates from those delegates on relaxing that interpretation? MF: I don’t think it was a minority of member companies that had concerns of participation due to this. We haven’t asked member companies to state that they have concerns about it but I spoke to a number of other representatives who voiced concerns, and then when we discussed this there were more people who had concerns, so I really don’t think it is just going to go away. @@ -804,13 +805,13 @@ DE: And to follow through with them posting the proposed resolution to the refle MBS: I guess the ast thing there would be: do we want to add this again in the June agenda as a follow up, or is it ok to just follow up on the Reflector. I think we should just handle it on the reflector, I don’t think this needs more committee time unless there’s an explicit proposal for changes. Does anyone disagree with that? -DE: I just want to say, even if we agree with this now, we should agree with the concrete policy on the reflector, and if anyone wants to bring it to the -But we would have it be async only by default. +DE: I just want to say, even if we agree with this now, we should agree with the concrete policy on the reflector, and if anyone wants to bring it to the But we would have it be async only by default. MBS: I just want to make sure that if there is anything else about this, that can be on the Reflector. + ### Conclusion/Resolution -- Further discussion will continue on the reflector for now. +- Further discussion will continue on the reflector for now. 
## Hubs Hubs Hubs diff --git a/meetings/2020-03/march-31.md b/meetings/2020-03/march-31.md index d22787b8..f2d852bf 100644 --- a/meetings/2020-03/march-31.md +++ b/meetings/2020-03/march-31.md @@ -1,4 +1,5 @@ # March 31, 2020 Meeting Notes + ----- **In-person attendees:** @@ -6,9 +7,8 @@ **Remote attendees:** Yulia Startsev (YSV), Mark Cohen (MPC), Jeff Long (JHL), Bradley Farias (BFS), Rick Button (RBU), Michael Ficarra (MF), Mathias Bynens (MB), Myles Borins (MBS), Caio Lima (CLA), Dave Poole (DMP), Jason Williams (JWS), Kevin Gibbons (KG), Chip Morningstar (CM), Philip Chimento (PFC), Mary Marchini (MAR), Rob Palmer (RPR), Ross Kirsling (RKG), Waldemar Horwat (WH), Pieter Ouwerkerk (POK), Bradford C. Smith (BCS), Ujjwal Sharma (USA), Richard Gibson (RGN), Felienne Hermans (FHS), Nicolò Ribaudo (NRO), Shane F Carr (SFC), Justin Ridgewell (JRL), Jack Works (JWK), Philipp Dunkel (PDL), Robin Ricard (RRD), Ben Newman (BN), Sergey Rubanov (SRV), Jordan Harband (JHD), Guilherme Hermeto (GHO), Robert Pamely (RPY), Edgar Barragan (EB), Mark Miller (MM), Hemanth HM (HHM), Aki (AKI), Daniel Rosenwasser (DRR) Not present, but reviewed the notes: Istvan Sebestyen (IS) - - ## Housekeeping + ### Adoption of the agenda Adopted by consensus. @@ -22,6 +22,7 @@ Adopted by consensus. AKI: The meeting at PayPal in early June will be canceled, unfortunately. The meeting will be held remotely. The chairs will continue to work on managing remote plenaries going forward. There will be more discussion later in this meeting. ## Secretary’s Report + Presenter: Rob Palmer (RPR) in place of Istvan - [slides](https://drive.google.com/open?id=1Orrdz7YMZIVmmy5IjKpDKLLk0egYFU9X) @@ -175,7 +176,6 @@ RW: (presents without slides) RW: Any questions? - (silence) ## Updates from Coc Committee @@ -300,6 +300,7 @@ RW: So there is this hypothetical scenario where you have a piece of code; If I SYG: From the codegen point of view, there’s no reason to require separate codegen here, whereas for wait it’s just not .?? That was Thomas’ motivation. RW: Works for me. + ### Conclusion/Resolution - No objections, consensus on the PR. @@ -323,7 +324,7 @@ Presenter: Kevin Gibbons (KG) KG: (presents slides) -WH: You mentioned that the spec contains some missing production. Can you be more specific? +WH: You mentioned that the spec contains some missing production. Can you be more specific? KG: Specifically says that it is an early error if the SV of the capture group is not an identifier or not IdentifierStart, It refers to the SV of this capturing group name production, but that operation is not defined. It's linked in the PR. @@ -333,9 +334,9 @@ KG: All of the grammar productions exist. There's a reference to a syntax-direct WH: I can take a good guess as to why it’s there. It’s the same in the identifier grammar. The idea is to prevent creating identifiers that contain things like spaces or nulls. -MPC: Is the spec coherent about the other two ways about writing the code point? Is it incoherent about all three, or just the surrogate pairs? +MPC: Is the spec coherent about the other two ways about writing the code point? Is it incoherent about all three, or just the surrogate pairs? -KG: It is incoherent about all of them. The reason why I am presenting this as the only place: This is the place where unicode regexes disagree with identifiers on how things work. In Unicode regexes, the two surrogate pairs are equal to the long form all of the time. +KG: It is incoherent about all of them. 
The reason why I am presenting this as the only place: This is the place where unicode regexes disagree with identifiers on how things work. In Unicode regexes, the two surrogate pairs are equal to the long form all of the time. The two surrogate halves is the place where it seems like the intent was not clear. MF: Why do we care that they are identifier names? @@ -430,7 +431,6 @@ WH: I don’t see what you gain by disallowing the `\u` surrogates. They are all KG: They do expect unicode regular expressions. - MB: What you gain is symmetry between `match.groups.whatever` and the capture group name. WH: There are lots of things you can do in a regex that you cannot do in other source code. @@ -526,7 +526,9 @@ WH: This is blocking the 2020 release. We need time to draft the resolution and DE: I agree that the 2020 release is high priority, but we can continue to resolve this for 2021. KG: Let’s come back in an hour and follow up with SFC in the meantime. + ### Conclusion/Resolution + - Return to this discussion later. - This is blocking ES2020. @@ -631,6 +633,7 @@ SYG: Are there any objections to strictly optional? Again it’s just for the SA PHE: I wanted to add a voice in support of making this optional: MM said the blockchain case, but in embedded JavaScript there are plenty of scenarios where SABs don't make any sense. Making it optional is great for us in terms of code size and complexity. ### Conclusion/Resolution + - Consensus is to make the global property SharedArrayBuffer strictly optional for the host, without contingencies. ## Add support for 'OptionalChain'.PrivateIdentifier in class features proposals @@ -676,7 +679,7 @@ MM: Class C, you said to see the proposal for the semantics, please state the se JHD: This is relevant to my queue topic. The semantics of o?.anything, if o is non-nullish, return the rest of the chain. -MM: Maybe I’m confused about the semantics of optional chaining, what is the semantics of o?.c.f; +MM: Maybe I’m confused about the semantics of optional chaining, what is the semantics of o?.c.f; JHD: If o is nullish, undefined, otherwise it returns o.c.#f. @@ -703,9 +706,12 @@ MM: With the second bullet semantics? CLA: Yes. (silence) + ### Conclusion/Resolution + - Consensus for both syntax forms - `o?.#f` still throws if `o` is a non-null/undefined object and`#f` is not present. + ## Process: require public repo for stage 1 Presenter: Jordan Harband (JHD) @@ -776,6 +782,7 @@ JHD: Do we have a consensus to make a repo somewhere on GH a requirement? And th (silence) ### Conclusion/Resolution + - Consensus reached to require public repos for stage 1 proposals. ## TypedArray stride parameter for Stage 2 @@ -805,7 +812,7 @@ YSV: There's another use case brought up by the graphics team, which is interlea SYG: That sounds good. I will review those issues and work through those. As for the JIT concerns, what exactly are Firefox’s JIT concerns here? That you would need to track the exact instance of the TypedArray to get the stride out to it? -YSV: The stride parameter will have influence on the JIT because accessing an index of a TypedArray currently calculates index * byte size where (index & bite size ???). With this proposal, it would need to become index * stride * typedArray.byteSize. We would not be able to use the constant as it exists now. +YSV: The stride parameter will have influence on the JIT because accessing an index of a TypedArray currently calculates index *byte size where (index & bite size ???). 
With this proposal, it would need to become index* stride * typedArray.byteSize. We would not be able to use the constant as it exists now. SYG: You already need to save the offset of the TypeArray somewhere. @@ -889,14 +896,15 @@ SYG: I see. That is a good data point. RPR: The queue is empty. SYG: My take-away is that there is suspicion and concern around the graphics use-case. If it’s not a sufficiently expressive API to address many common graphics use-cases, is it still useful enough? There seems to be arguments for both sides. As for the implementability, the operative question is the one that KM raised; a lot of work needs to go into it to make it performance-neutral with user code, what's the price we're willing to pay for the convenience of it being built in? It sounds like we don't have consensus for Stage 2 today. It sounds like it’s not useful enough for the simple stride case. + ### Conclusion/Resolution + - Not asking for Stage 2. ## import.meta for stage 4 Presenter: Gus Caplan (GCL) and Myles Borins (MBS) - - [PR](https://github.com/tc39/ecma262/pull/1892) - [slides](https://docs.google.com/presentation/d/1dXono-H8VjmihAM9bel1RuPvHoSFOqRZ-WprVWUQ3EI/edit#slide=id.p) @@ -916,7 +924,6 @@ MF: That’s fair. MM: First, I agree that one host hook rather than two, if it covers what we need to cover is an improvement. If one is more powerful than the other, is there any reason we need the more powerful one, because the less powerful one has stronger guarantees?. I'd be in favour of reducing it to the less powerful one if it covers the actual need. The big question - the request that I have is that I think this is a simple enough proposal that I would like the presenter to actually summarize the proposal itself, because as presented it mixes what’s being proposed and what’s in various uses of this. For example the URL is not actually part of the ES proposal, but exists in the use of it in both the browser and node. I make the request that the presenter actually describe the proposal. - GCL: The reason that there are two hooks is ~3 years old at this point, so I'm going to call it historical. There was this idea that the object returned from import.meta should be a plain object with a null prototype. Then hosts would just add properties to it, not change the prototype or do anything weird. But then later, the second hook HostFinalizeImportMeta was added - so when the host wants to finalize the prototype they can’t make it an ordinary object but they can add other things to it. So we ended up with two. MM: Was there a motivation for the second one? Was there something that people actually wanted to do with it, or was it that the host might want to do something with it just in case? @@ -1031,11 +1038,11 @@ MM: It sounded like anyone who would be fine with both host hooks should be fine MBS: We have more time tomorrow. Let’s put a 15 minute time box for tomorrow at the end of the day. ### Conclusion/Resolution + - Will continue discussion [tomorrow](https://github.com/tc39/notes/blob/master/meetings/2020-03/april-1.md#importmeta-for-stage-4-continued-from-previous-day) in a 15 minute timebox. New Topic: Do we need to worry about this prior to Compartments actually exposing hooks? 
-Jordan Harband -New Topic: I would prefer a stronger argument to preclude hosts from doing things because we can happen to enumerate use cases today +Jordan Harband New Topic: I would prefer a stronger argument to preclude hosts from doing things because we can happen to enumerate use cases today ## Decimal update @@ -1084,12 +1091,11 @@ DE: Yeah, and I agree with you. KM: MM, in what way would having special engine hooks for all the various operator overloading - how do you envision that being incompatible with a future operator overloading proposal? All of the work we do for add - hooking into an add would be pretty intuitive. Why is it so important that we have the actual operator overloading proposal before we add quote-unquote magic here? -MM: The object or primitive is an example of this, but in general the operator overloading thing, took a long time for DE to convince me that it was a good proposal. Decimal could very well on edge cases do something different on those subtle issues. In particular, if user written rationals and complex numbers, matrices, suffer from the === problem because they are objects and not values, then decimal should also suffer from the problem. There’s lots of other subtle issues where the only way to get them right is to see where operator overloading lands to make sure that all the edge cases land the same way. +MM: The object or primitive is an example of this, but in general the operator overloading thing, took a long time for DE to convince me that it was a good proposal. Decimal could very well on edge cases do something different on those subtle issues. In particular, if user written rationals and complex numbers, matrices, suffer from the === problem because they are objects and not values, then decimal should also suffer from the problem. There’s lots of other subtle issues where the only way to get them right is to see where operator overloading lands to make sure that all the edge cases land the same way. DE: That's one possible goal for language design, but you talk about solving a lot of problems; I think decimal itself would solve a lot of problems for programmers. -MM: I’m not saying that we should never have decimal, but it should be made consistent with -user written complex/rational/matrix etc. +MM: I’m not saying that we should never have decimal, but it should be made consistent with user written complex/rational/matrix etc. RPR: We’ve got 8 minutes left on this topic. @@ -1124,7 +1130,6 @@ Remainder of the queue: 3 Reply: Regardless of what you think of IEEE, have to mention it + history - Andrew Paprocki (Bloomberg L.P.) 4 New Topic: Given reservation around operator overloading, could BigDecimal decisions influence operator overloading instead of the other way around? - Shu-yu Guo (@google) - ## LogicalAssignment for stage 3 Presenter: Justin Ridgewell (JRL) @@ -1151,6 +1156,7 @@ WH: LGTM! ??: Awesome stuff. ### Conclusion/Resolution + - Stage 3. ## Pattern Matching Update @@ -1179,7 +1185,7 @@ KG: We have had people uncomfortable with making the current spec the 2020 candi JHD: The other option is that since this was an issue in 2018 and 2019, we cut the draft now. -WH: I'm uncomfortable shipping a standard with known bugs. It's one thing if we don't know about the bug, but it's another thing if we know about it. +WH: I'm uncomfortable shipping a standard with known bugs. It's one thing if we don't know about the bug, but it's another thing if we know about it. 
JHD: The challenge here is that we have the 2-month opt-out period, and we have 2 weeks until that must start, and we have to agree in a plenary. @@ -1193,15 +1199,15 @@ WH: We should have a longer timebox than that. MBS: The spec does have known bugs. A: Do we think we are going to get through this, B: If we can’t reach a consensus does that block the 2020 spec? -WH: We shouldn't talk about talking about things. It's not a productive use of time. +WH: We shouldn't talk about talking about things. It's not a productive use of time. MBS: I’m not talking about hypotheticals. I’m asking very literally, should we cut it now? -MF: At the same time, I'm not comfortable making a change I haven't fully thought through. I don't think there's going to be enough time. +MF: At the same time, I'm not comfortable making a change I haven't fully thought through. I don't think there's going to be enough time. MBS: Would it be safe to say that if we pick a time box tomorrow, and if it doesn’t reach consensus by the end of the time box, it doesn’t make it into the ES2020 spec. -WH: If you make the timebox something reasonable, like 30 minutes, we'll see what happens. I don't want to just say yes to what MBS said because it may encourage people to block consensus. +WH: If you make the timebox something reasonable, like 30 minutes, we'll see what happens. I don't want to just say yes to what MBS said because it may encourage people to block consensus. MLS: WH, If we say 30 minute time box, and we don’t reach consensus, are you ok with not making it into 2020, and revisit for 2021? @@ -1213,7 +1219,7 @@ WH: I don't know what you mean. KG: The proposal is to pick a semantics even though not everyone agrees with it and come to a consensus later. -KM: We normally allow forward changes with exceptions. It won't prohibit improving the semantics in the future if the committee decides to do that. +KM: We normally allow forward changes with exceptions. It won't prohibit improving the semantics in the future if the committee decides to do that. WH: There are a couple cases that are ok, 0-3, or 0,2,3 from the IRC discussion. @@ -1231,7 +1237,7 @@ WH: I don’t want to ship a standard that breaks ASCII-fiers, they are possible MF: Does that include tagged template rules? I doubt that. -WH: I want ASCII-fiers to be possible with regular expressions. I can't see anyone writing this code on their own. +WH: I want ASCII-fiers to be possible with regular expressions. I can't see anyone writing this code on their own. KG: We do not have agreement on the semantics. We need to cut the specification with some answer, either no answer or an answer someone is not happy with. 
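For reference, a small sketch of the escape forms this exchange (and the ASCII-fier concern) is about, matching the conclusion recorded earlier; however the name is spelled in the pattern, the match result exposes it under the unescaped name:

```js
// The mathematical script capital A (U+1D49C) written three ways as a
// named-capture-group name; per the recorded consensus, all are legal.
const literal    = /(?<𝒜>.)/u;            // unescaped, outside the BMP
const codePoint  = /(?<\u{1d49c}>.)/u;     // \u{...} escape
const surrogates = /(?<\ud835\udc9c>.)/;   // escaped surrogate pair, no u flag

// An ASCII-fier rewrites the first form into one of the escaped forms
// without changing which group the match result exposes:
console.log('x'.match(literal).groups['𝒜']);    // "x"
console.log('x'.match(codePoint).groups['𝒜']);  // "x"
console.log('x'.match(surrogates).groups['𝒜']); // "x" (expected under the consensus above)
```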
diff --git a/meetings/2020-06/june-1.md b/meetings/2020-06/june-1.md index 6995dd11..6e904105 100644 --- a/meetings/2020-06/june-1.md +++ b/meetings/2020-06/june-1.md @@ -1,9 +1,10 @@ # June 01, 2020 Meeting Notes + ----- -**In-person attendees:** (none) +**In-person attendees:** (none) -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Yulia Startsev | YSV | Mozilla | @@ -56,15 +57,18 @@ | Rick Waldron | RW | Bocoup | | Daniel Ehrenberg | DE | Igalia | - ## Housekeeping + ### Adoption of the agenda + Adopted ### Approval of minutes + Approved ### Volunteers for note taking + * Robin Ricard (RRD) * Mark Cohen (MPC) * Shane Carr (SFC) @@ -72,6 +76,7 @@ Approved * Ilias Tsangaris (IT) ## Next meeting host and logistics + [Slides](https://docs.google.com/presentation/d/1NyD7mS7qFXUPVWtUhCsR7gPGEZJKCRwznx4a6efz9yU) RPR: (presents slides) @@ -81,6 +86,7 @@ RPR: no questions at the moment, please give feedback to the chair group via ref RPR: chair group will have final say ## Secretary’s report + Presenter: Istvan Sebestyen (IS) * [slides](https://github.com/tc39/agendas/blob/master/2020/06.tc39-2020-06-slides_Istvan.pdf) @@ -88,9 +94,11 @@ Presenter: Istvan Sebestyen (IS) IS: (presents slides) It is published as TC39/2020/026. In general very quiet since the April TC39 meeting, which is a good sign at this phase of the year. The two drafts (ECMA-262 and ECMA-402 2020 editions) have been published within Ecma. The RF “Opt-out” has been launched on April 2, 2020 and closes tomorrow on June 2, 2020. So far we have received nothing, which is good. I think the ES2020 on June 16-17 by the Ecma GA will go without problems. Thanks for the hard work. ## ECMA404 Status Update + CM: No news is good news. ## ECMA262 Status Update + [Slides](https://docs.google.com/presentation/d/1PxrkXXrtgnTgE14k8WnuKtjjyNDJ9ce15ZL_uvl-P9U/) JHD and KG: (present slides) @@ -99,7 +107,7 @@ SYG: agenda item later - we’d like to settle how we layer with other specs - l BSH: BigInt spec issues - any effect expected on JS engine behavior? Are they inconsistent as a result? -KG: mostly editorial - assuming I have the right idea, it’s just the arithmetic in the spec that is badly defined but everyone is already the right thing. In some cases the thing the spec says not even defined. In other cases there are arguably wrong things that show up with arbitrarily large numbers in counters, etc. +KG: mostly editorial - assuming I have the right idea, it’s just the arithmetic in the spec that is badly defined but everyone is already the right thing. In some cases the thing the spec says not even defined. In other cases there are arguably wrong things that show up with arbitrarily large numbers in counters, etc. BSH: But you’re not expecting your work to result in engine teams having to go change things? @@ -107,9 +115,9 @@ JHD: No, but the things in 262’s release candidate are worth paying attention BSH: So the tests tested the correct thing, even though the spec said the wrong thing? -MF: The spec was updated to the wrong thing about a year ago - engines have been doing the shift op correctly for many years +MF: The spec was updated to the wrong thing about a year ago - engines have been doing the shift op correctly for many years -WH: Regarding your last point. There were lots of spec bugs that were introduced involving abstract numbers. Those are being addressed. +WH: Regarding your last point. 
There were lots of spec bugs that were introduced involving abstract numbers. Those are being addressed. WH: On the topic of Syntax-Directed Operations, whichever way you gather them, I’d like there to be links to them from the other place. If you gather all the productions for an SDO in one place, I’d like there to be a link from the grammar production to the SDOs that they affect. It would make it much easier to read the spec. @@ -118,19 +126,22 @@ KG: I can make that happen automatically in ecmarkup probably so thank you for t WH: If you're reading a production and you want to know everything about what a grammar production does, this will help. Thank you. MF: please leave a comment on the issue + ## ECMA402 Status Update + * [slides](https://docs.google.com/presentation/d/1leorSs4oYKFh7WYxoR5H2YtYANM8YgYpGQuEW1jMFBc) SFC: (presents slides) Items seeking consensus: + * ecma402#430: https://github.com/tc39/ecma402/pull/430 * ecma402#438: https://github.com/tc39/ecma402/pull/438 * ecma402#444: https://github.com/tc39/ecma402/pull/444 - not had the chance to discuss that PR yet SFC: No one on the queue - if there are no comments by end of presentation, we can record a tc39 consensus on those PRs -LEO: A note about the new Editorship. Thank you for this work. SFC hugely facilitated that work. Very thankful and will help Richard Gibson. Thanks to everyone in ECMA402! +LEO: A note about the new Editorship. Thank you for this work. SFC hugely facilitated that work. Very thankful and will help Richard Gibson. Thanks to everyone in ECMA402! AKI: Clapping! @@ -139,16 +150,19 @@ RG: Thanks to Leo for assistance! ### Conclusion Consensus on: + * ecma402#430: https://github.com/tc39/ecma402/pull/430 * ecma402#438: https://github.com/tc39/ecma402/pull/438 * ecma402#444: https://github.com/tc39/ecma402/pull/444 ## Test262 + Presenter: Leo Balter (LEO) LEO: (presents active proposals in stage 3) everything is good so far. I ask the champions to frequently review the Stage 3 proposals table at the tc39/proposals repo as it guides us on what needs to be tested. ## Updates from CoC committee + Presenter: Aki (AKI) * [CoC](https://tc39.es/code-of-conduct) @@ -156,13 +170,16 @@ Presenter: Aki (AKI) AKI: the coc committee met twice since the last meeting, fortnightly meeting, we did get 1 report in the past 2 months, was discussed it, all parties involved are satisfied. Importantly, reminding that the code of conduct exists, I should remind better but please review and understand what it asks of you and be prepared to bring that sense of respect to each others to every meeting… It has been a bit of a distracting world in the last few days. Each of us is trying to find ways to communicate better. We had a budget for training but couldn’t use it due to situation but are going to look for a communication training online. That requires time though so please stay in timeboxes. If you have been optimistic about them, please tell the chairs. Now requiring more note takers Additional notetakers: + * Kris Kowal (KKL) * Mark Cohen (MPC) * Robin Ricard * Shane Carr (SFC) * Ujjwal Sharma (USA) * Ilias Tsangaris (IT) -## Promise.{all,allSettled,race} should check "resolve" before iterating + +## Promise.{all,allSettled,race} should check "resolve" before iterating + https://github.com/tc39/ecma262/pull/1912 KM: (discusses PR) @@ -175,7 +192,7 @@ RPR: There is no-one on the queue. KM: I will take that no-body has joined the queue and that there are no objections. -LEO: We need to coordinate someone to write a test for it. 
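A rough sketch of what such a test can observe: with the change, `resolve` is read off the constructor once, before the iterable is consumed, instead of once per element (the exact counts assume the PR's semantics):

```js
// Count how many times Promise.all reads `resolve` off the constructor.
class Tracked extends Promise {}
let reads = 0;
Object.defineProperty(Tracked, 'resolve', {
  get() {
    reads++;
    return (v) => Promise.resolve.call(Tracked, v);
  },
});

Tracked.all([Promise.resolve(1), Promise.resolve(2)]).then(() => {
  // With the change: `resolve` was read a single time, up front.
  // Without it: the getter ran once per element (twice here).
  console.log(reads);
});
```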
We have tests for the behavior you want to change. +LEO: We need to coordinate someone to write a test for it. We have tests for the behavior you want to change. KM: yea we need but I did not bother doing that without consensus but I will do that @@ -211,21 +228,21 @@ RPR: Congratulations on Stage 2 ### Conclusion -Stage 2 -Reviewers: WH, KG, and SYG. +Stage 2 Reviewers: WH, KG, and SYG. ## Logical assignment status update + Presenter: Justin Ridgewell (JRL) * [proposal](https://github.com/tc39/proposal-logical-assignment) -* [slides]() +* slides * [issue for impl status traking](https://github.com/tc39/proposal-logical-assignment/issues/25) * [main issue today](https://github.com/tc39/proposal-logical-assignment/issues/23) -JRL: Currently spidermonkey has this behind a flag, JSC has a flag, V8 has a flag. Spidermonkey is waiting for resolution before they unflag. We have implementations in Engine262 … Babel has had this forever. We are just waiting for unflagged implementations to move on to stage 4.Onto [the main topic](https://github.com/tc39/proposal-logical-assignment/issues/23) We realized a gotcha during implementation in Babel around the transform. When we assign foo with a logical assignment to an anonymous function, what do we expect the name after assignment? +JRL: Currently spidermonkey has this behind a flag, JSC has a flag, V8 has a flag. Spidermonkey is waiting for resolution before they unflag. We have implementations in Engine262 … Babel has had this forever. We are just waiting for unflagged implementations to move on to stage 4.Onto [the main topic](https://github.com/tc39/proposal-logical-assignment/issues/23) We realized a gotcha during implementation in Babel around the transform. When we assign foo with a logical assignment to an anonymous function, what do we expect the name after assignment? -The transform that all of the implementations Babel and TS do is transform foo to a logical operator and on the RHS foo = anonymous function actually assigns a name to the function. After this the foo function will have the name foo. The easy workaround is to use a sequence expression (0, fn) to skip the named evaluation of the function. The topic is whether we should use named evaluation for this case. It would make named evaluation easier. Asking the committee to help decide. +The transform that all of the implementations Babel and TS do is transform foo to a logical operator and on the RHS foo = anonymous function actually assigns a name to the function. After this the foo function will have the name foo. The easy workaround is to use a sequence expression (0, fn) to skip the named evaluation of the function. The topic is whether we should use named evaluation for this case. It would make named evaluation easier. Asking the committee to help decide. WH: Does somebody know under which conditions an anonymous function gets its name? Especially related to parentheses, comma expressions, and the like? @@ -251,7 +268,7 @@ WH: The analogous code for `??=` would be `foo = foo ?? someFunction()`. What ha Named evaluation does not carry through to the function in that case. -``` +```js foo = function(){} || true foo = foo ?? function() {} assert(foo.name === “”) @@ -267,7 +284,7 @@ Perhaps we should discuss this with code samples. KG: I have a response for that: we decided... The desugaring suggests named evaluation. 
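A compact sketch of the distinction being weighed, assuming the named-evaluation semantics the desugaring suggests (the direction tentatively recorded below):

```js
let a;
a ??= function () {};
console.log(a.name); // "a" with named evaluation applied to logical assignment

// A plain logical expression does not name the function:
let b;
b = b ?? function () {};
console.log(b.name); // "" (NamedEvaluation does not carry through `??`)

// Transpiler-style workaround to skip the naming on purpose:
let c;
c ??= (0, function () {});
console.log(c.name); // "" (the sequence expression defeats NamedEvaluation)
```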
-JRL: (writing on issue) https://github.com/tc39/proposal-logical-assignment/issues/23 +JRL: (writing on issue) https://github.com/tc39/proposal-logical-assignment/issues/23 WH: I haven’t had time to think about this. @@ -298,10 +315,11 @@ MLS: Those of us on JSC would need to talk about it internally to make sure it JTO : SM will implement ### Conclusion/Resolution -* Tentatively ok to go forward w/ named evaluation +* Tentatively ok to go forward w/ named evaluation ## Iterator Helpers + Presenter: Jason Orendorff [Mozilla] (JTO) * [proposal](https://github.com/tc39/proposal-iterator-helpers) @@ -309,12 +327,11 @@ Presenter: Jason Orendorff [Mozilla] (JTO) JTO: (presents slides) - JHD: regarding opt 1, those iterators should not be generators in any way. They need to all be consistent incl. Strings iterator methods, don’t know if the existing onex could be iterator methods -JTO: it would be a change +JTO: it would be a change -RPR: Point of order, note taking is lagging... +RPR: Point of order, note taking is lagging... JHD: will fill in my own comments later @@ -337,7 +354,7 @@ BSH: ok JHD: would we want `return` and/or `throw` to only be present on the iterator object when they’re present on the iterator object? -BSH the object always call through the iterator +BSH the object always call through the iterator GCL: return and throw is more than just passing but ensuring it is used correctly, if you are in a loop it is not just about calling but about the life-cycles, so the path of directly mimicking them. So for example, option 3, if you call .return on the map wrapper, it doesn’t directly call .return on the iterator that is mapped, but .return is called because of the way generators work. So it’s a balance. @@ -349,7 +366,7 @@ https://gist.github.com/jorendorff/35504c2553170be98fc2810ccf60c608 JTO: This is untested code, just a sketch really. -YSV: I do have a working version of this if anyone wants to play around with it. +YSV: I do have a working version of this if anyone wants to play around with it. JTO: It may be surprising that option 3 is about 68loc but that is the result of the spec just having a little bit more control over looking at completion values and seeing what they are, and being able to separate them out into algorithms as needed, it’s a little bit harder to do that in JavaScript the language. @@ -398,6 +415,7 @@ JTO: thank you for your time Feedback given; no approach has consensus yet. Please see this [issue](https://github.com/tc39/proposal-iterator-helpers/issues/97) ## Do expressions for stage 2 + Presenter: Kevin Gibbons (KG) * [proposal](https://github.com/tc39/proposal-do-expressions/) @@ -447,7 +465,6 @@ MPC: Would it be possible to add some sort of keyword, effectively a return for KG: In principle, but I would prefer not to do that. It makes do expressions less like current statements. Main advantage is that the body is like any other collection of statements. If there is this new statement specifically for do expressions there is less value in them, so yes it would be possible but my preference is to not do that. - SYG: Strongly agree that we should not apply any Annex B hoisting behavior to do expressions. WH: I agree with SYG. @@ -468,7 +485,6 @@ DRO: I don’t see how this provides a benefit over an anonymous IIFE. I think i KG: They won’t be able to write that. - DRO: Right, they’re going to try that and it won’t work, and they’ll be confused. DRO: I just don’t think that this provides much benefit over an IIFE. 
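For comparison, a sketch of the IIFE rewrite being debated here; the do-expression form is the proposed syntax and does not run in today's engines, so it is shown only in comments, and the `await` case is where the rewrite stops being equivalent:

```js
// Proposed (not valid syntax in current engines):
//   const status = do {
//     if (!user) { 'anonymous' }
//     else if (user.admin) { 'admin' }
//     else { 'member' }
//   };

// The closest rewrite that runs today is an arrow IIFE with explicit returns:
const user = { admin: false };
const status = (() => {
  if (!user) return 'anonymous';
  if (user.admin) return 'admin';
  return 'member';
})();
console.log(status); // "member"

// The rewrite breaks down around `await`: a do expression inside an async
// function could await directly, whereas the IIFE has to become async itself
// and be awaited, changing the code into a promise chain.
```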
I think it introduces a huge amount of complexity and foreknowledge to be useful. @@ -516,6 +532,7 @@ BSH: I would like this best if it worked like an inline function, so you have to * Pattern Matching group will include discussions on the interactions with do expressions ## Record & Tuple (status update) + Robin Ricard (RRD) (Bloomberg) * [proposal](https://github.com/tc39/proposal-record-tuple) @@ -557,7 +574,7 @@ DE: I suppose for a protocol that wasn't methods but instead data properties, th JHD: Is there value for allowing Records to participate in protocols, with Symbol-keyed properties containing methods? Or would that break equality semantics? -DE: [Records cannot have objects (e.g., functions) as Symbol-keyed properties of Records because it wouldn’t interact with membranes well. A membrane typically wraps an object such that property accesses have their results themselves membrane-wrapped. Primitives, such as Records, cannot be wrapped this way, so there would be no way to membrane-wrap the function reached from it.] We can't pierce membranes. We have this split between primitives and objects. It gets complicated with proxies. I don't think it would make sense to have objects hanging off of records via symbol keys. +DE: [Records cannot have objects (e.g., functions) as Symbol-keyed properties of Records because it wouldn’t interact with membranes well. A membrane typically wraps an object such that property accesses have their results themselves membrane-wrapped. Primitives, such as Records, cannot be wrapped this way, so there would be no way to membrane-wrap the function reached from it.] We can't pierce membranes. We have this split between primitives and objects. It gets complicated with proxies. I don't think it would make sense to have objects hanging off of records via symbol keys. DE: The champion group presented a preference for strict equality semantics, where we don't recurse deeply. A tuple containing -0 wouldn't be equal to a tuple containing +0. Do people have concerns with this semantic direction? In terms of, would people be okay with going for this as the initial direction for stage 2? diff --git a/meetings/2020-06/june-2.md b/meetings/2020-06/june-2.md index 1aa198cc..e48c7389 100644 --- a/meetings/2020-06/june-2.md +++ b/meetings/2020-06/june-2.md @@ -1,7 +1,8 @@ # June 02, 2020 Meeting Notes + ----- -**In-person attendees:** (none) +**In-person attendees:** (none) **Remote attendees:** | Name | Abbreviation | Organization | @@ -57,6 +58,7 @@ | Ron Buckton | RBN | Microsoft | ## Hallway track update + YSV: Online towns did not really work, there was another alternative, shane are you in the call? We can try that alternative or Mozilla Hubs today SFC: A couple alternatives are posted on the GitHub issue on the Reflector. @@ -68,6 +70,7 @@ MPC: The alternative was spatial.chat. YSV: I'll get a link set up for that before lunch. ## String.prototype.replaceAll for Stage 4 + Presenter: Mathias Bynens (MB) * [proposal](https://github.com/tc39/proposal-string-replace-all) @@ -82,6 +85,7 @@ MB: Does anyone have any objections moving this to Stage 4? AKI: Sounds like consensus to me ### Conclusion + Consensus for Stage 4! ## `AggregateError` `errors` update @@ -132,13 +136,16 @@ YSV: Any objections to that? MB: In that case we can merge #64, and move onto the next topic. ### Conclusion -- Consensus on PR 64. -- Need to resolve the SES concern on PR 59; no consensus on this general constraint. + +* Consensus on PR 64. 
+* Need to resolve the SES concern on PR 59; no consensus on this general constraint. ## `AggregateError` constructor update + Presenter: Shu-yu Guo [Google] (SYG) -- [Proposal](https://github.com/tc39/proposal-promise-any) -- [Slides](https://docs.google.com/presentation/d/1juwk662pDATPCPqPxlE8M9rBGeA9zAp0_sJBoxu3eMc/edit) + +* [Proposal](https://github.com/tc39/proposal-promise-any) +* [Slides](https://docs.google.com/presentation/d/1juwk662pDATPCPqPxlE8M9rBGeA9zAp0_sJBoxu3eMc/edit) SYG: (presents slides) @@ -168,7 +175,7 @@ SYG: I can give some background on the order. There is a long GitHub discussion KG main reason for pushing - aggregate err obj should not treat error prop as optional same for message but there are other prop as opt. Optional should come always after required arguments. I also want to mention that there are some types on the web platform that take an optional argument first, and a non-optional argument second. But those have already been shipped. -KKL: counter arg already recorded, super class constructor should be a … I do agree …??? Would make sense for messages to be required in spirit. +KKL: counter arg already recorded, super class constructor should be a … I do agree …??? Would make sense for messages to be required in spirit. DE: About WebIDL, we are not currently following WebIDL conventions, I don’t feel like I have enough interest in the complexity it would entail that I am not currently pursuing it. So I don't think WebIDL should be considered a reason for adopting any particular conventions currently in JS. @@ -177,9 +184,11 @@ SYG: queue empty, asking for consensus for slide about #59, should be trivial to (Silence) ### Conclusion -- New semantics for AggregateError constructor received consensus + +* New semantics for AggregateError constructor received consensus ## Temporal Update + Presenter: Philip Chimento (PFC) * [proposal](https://github.com/tc39/proposal-temporal) @@ -279,7 +288,7 @@ TAB: Okay, so it’s always like +/- 24 hours, so adding a day cannot change the SFC: The way that the spec is written is that adding a civil day is equivalent to incrementing the day counter by 1, which may or may not be 24 hours. -DE: Temporal is about having datetimes in different logical manipulation spaces. So it’s not like moment.js, where you have one type that represents a date and time with a timezone. With Temporal, if you’re presenting a DateTime and you add a day, then you’re just adding a day. If you do DST calculations it will happen when you switch to a timezone [with the .inTimeZone method]. +DE: Temporal is about having datetimes in different logical manipulation spaces. So it’s not like moment.js, where you have one type that represents a date and time with a timezone. With Temporal, if you’re presenting a DateTime and you add a day, then you’re just adding a day. If you do DST calculations it will happen when you switch to a timezone [with the .inTimeZone method]. JHD: Is the concept that DST changes are not calendar-dependent because they only deal with the date and the day, whereas timezones deal with the clock? @@ -290,7 +299,9 @@ JHD: I’m just trying to understand because it seems like the concepts of timez PFC: I would be happy to go into that somewhere else. We are almost out of time and it looks like SFC has another remark. SFC: I’ve spent a great deal of time, as have other champions, on developing options for the Calendar system, specifically for the default calendar. 
It would be great to have more reviews on that and comparing and contrasting all the different options. We do feel we understand what all the different pros and cons are, but we need more voices on what’s best for end developers both from an ergonomic point of view and from an i18n correctness point of view. So any time committee members have to look at the documentation would be much appreciated. + ## Introducing: Unicode support + Presenter: Michael Ficarra (MF) * [discussion](https://github.com/tc39/ecma262/pull/1896#issuecomment-628271681) @@ -339,7 +350,7 @@ MF: The difference is in eventual inclusion of those properties. MB: We don’t currently guarantee eventual inclusion, and I don’t think we want to given what I said earlier. -DE: This comes back to the other thing MB was saying before, where we don’t include - for the key-value properties, we don’t include all of them. So we may have a property that comes along in a future unicode release that we don’t want to support. If we allow this to be expanded by different implementations then we would be cutting ourselves off from that path. We'd be assuming that everything that gets into the Unicode standard will eventually be making it in. +DE: This comes back to the other thing MB was saying before, where we don’t include - for the key-value properties, we don’t include all of them. So we may have a property that comes along in a future unicode release that we don’t want to support. If we allow this to be expanded by different implementations then we would be cutting ourselves off from that path. We'd be assuming that everything that gets into the Unicode standard will eventually be making it in. MB: Exactly. @@ -358,6 +369,7 @@ MB: What do you think about the idea of doing what you were planning on doing, b MF: I would be fine with that. Thank you. Think we’re done with this topic. ## Decorators update + Presenter: Kristen Hewell Garrett (KHG) * [proposal](https://github.com/tc39/proposal-decorators) @@ -365,8 +377,7 @@ Presenter: Kristen Hewell Garrett (KHG) KHG: (presents slides) -KHG: Decorators Design Space Analysis - https://docs.google.com/document/d/1DSuLlEbAjBImDutX_rhjnA6821EUyj9rANzDVJS3QV0 -Decorator Use Case Analysis - https://docs.google.com/spreadsheets/d/1QP0hfXkkkAXTktGrI7qrt-RUqKp2KtsVKuPo4yuoZZI/edit?ouid=115900510010132195082&usp=sheets_home&ths=true +KHG: Decorators Design Space Analysis - https://docs.google.com/document/d/1DSuLlEbAjBImDutX_rhjnA6821EUyj9rANzDVJS3QV0 Decorator Use Case Analysis - https://docs.google.com/spreadsheets/d/1QP0hfXkkkAXTktGrI7qrt-RUqKp2KtsVKuPo4yuoZZI/edit?ouid=115900510010132195082&usp=sheets_home&ths=true RPR: Empty queue, which is weird for decorators. @@ -374,9 +385,8 @@ AKI: I am stunned that the queue is empty. SYG: Thanks for taking implementer feedback to this level of seriousness. - - ## Function Implementation Hiding for stage 3 + Presenter: Michael Ficarra (MF) * [proposal](https://github.com/tc39/proposal-function-implementation-hiding) @@ -428,8 +438,7 @@ MF: We could ask for confirmation between implementing and shipping. JHD: Ok, cool. -YSV: We reviewed it again this month and the feedback was much more negative this time. Basically the opinion is that there isn’t enough of a justification for this proposal by itself. We’re talking about introducing potentially just one directive, and that hide implementation by itself is better than sensitive, ???. 
In our opinion, both of them not being done would be better -But now we’re getting into smaller and smaller use cases and the implication of moving forward with this proposal is to add a new directive which is something we said we wouldn’t do, and I think there should be a high bar for introducing a new directive. +YSV: We reviewed it again this month and the feedback was much more negative this time. Basically the opinion is that there isn’t enough of a justification for this proposal by itself. We’re talking about introducing potentially just one directive, and that hide implementation by itself is better than sensitive, ???. In our opinion, both of them not being done would be better But now we’re getting into smaller and smaller use cases and the implication of moving forward with this proposal is to add a new directive which is something we said we wouldn’t do, and I think there should be a high bar for introducing a new directive. I looked at the issue from React where this was raised as a use case, and that use case can be achieved much better through developer tools. Black-boxing is much easier from developer tools than it is from the engine. So we are not convinced that this proposal is worth the precedent of adding a new directive. DRO: I’d second all that, from Safari Web Inspector. @@ -499,9 +508,8 @@ LEO: Yep, I understand, I just want to remove the ambiguity. ### Conclusion/Resolution -- “hide source” Blocked from advancement for stage 3 -- “sensitive” raised concerns from implementers - +* “hide source” Blocked from advancement for stage 3 +* “sensitive” raised concerns from implementers ## Intl.NumberFormat V3 for stage 2 @@ -572,15 +580,18 @@ LEO: It’s definitely something where we can get reviewers from TG2, because no SFC: Ok sounds good, I will reach out later in the summer when ready for stage 3 review. So for now there’s no work for those reviewers. RPR: So for the notes, this has achieved stage 2. Congratulations, SFC. + ### Conclusion/Resolution -- Stage 2 -- Stage 3 reviewers: - - DE working with USA - - YSV; will shadow JSW (via IRC) - - WH, only for the decimal portion - - SRV (via IRC) + +* Stage 2 +* Stage 3 reviewers: + * DE working with USA + * YSV; will shadow JSW (via IRC) + * WH, only for the decimal portion + * SRV (via IRC) ## Intl.DurationFormat for Stage 2 + Presenter: Younies Mahmoud (YMD) * [proposal](https://github.com/tc39/proposal-intl-duration-format) @@ -649,14 +660,17 @@ RPR: The queue is empty. YMD: So we are asking for stage 2. -RPR: No objections to stage 2? +RPR: No objections to stage 2? + ### Conclusion/Resolution + * Stage 2 * Stage 3 Reviewers: * MF (via IRC) * RBN ## Symbols as WeakMap keys for Stage 1 + Presenter: Daniel Ehrenberg (DE) * [proposal](https://github.com/rricard/proposal-symbols-as-weakmap-keys) @@ -721,9 +735,10 @@ BFS: Didn’t we have a request in the comments here to not move it past Record DE: Right. I think it would make sense for us to do both the things where people say don’t move it past, and move it past, where the plan would be to advance both this and records & tuples. So I think we could hopefully advance both for the next meeting. Do we have consensus for Stage 1? AKI: Sounds like consensus to me. I’m going to call it consensus—congrats on stage 1! + ### Conclusion/Resolution -* Stage 1! +* Stage 1! 
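[Editor's note: a minimal sketch of what the Symbols-as-WeakMap-keys proposal enables and why it pairs with Record & Tuple. At the time of this meeting the `set` call below throws, since WeakMap keys must be objects; the key name and stored value are purely illustrative.]

```js
const wm = new WeakMap();

// A symbol can live inside a Record or Tuple (which cannot hold objects),
// so letting it act as a WeakMap key gives a weakly-held "handle" back to
// an object kept outside the record.
const handle = Symbol("external object handle");

wm.set(handle, { payload: "object reachable only through the symbol" });
// Before this proposal the line above throws a TypeError, because
// WeakMap keys must be objects; the proposal is to allow symbols here.

console.log(wm.get(handle)); // { payload: ... } once symbols are permitted
```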
## Arbitrary Module Namespace Names diff --git a/meetings/2020-06/june-3.md b/meetings/2020-06/june-3.md index 0ddfe619..d6243ea5 100644 --- a/meetings/2020-06/june-3.md +++ b/meetings/2020-06/june-3.md @@ -1,10 +1,10 @@ # June 03, 2020 Meeting Notes ------ +----- -**In-person attendees:** (none) +**In-person attendees:** (none) -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Robin Ricard | RRD | Bloomberg | @@ -48,8 +48,8 @@ | Jordan Harband | JHD | Invited Expert | | Daniel Ehrenberg | DE | Igalia | - ## Module Attributes for Stage 2 + Presenters: Sven Sauleau (SSA), Daniel Ehrenberg (DE), Myles Borins (MBS), Dan Clark (DDC) - [proposal](https://github.com/tc39/proposal-module-attributes) @@ -57,11 +57,11 @@ Presenters: Sven Sauleau (SSA), Daniel Ehrenberg (DE), Myles Borins (MBS), Dan C DE: (presents slides) -DE: Asking for Stage 2 +DE: Asking for Stage 2 JHD: the name of the proposal is “module attributes”. regardless of the name, to me the current proposal is not defining attributes of a module, it’s defining attributes of an import, because it’s being used at the import site. It's like an assertion, checking the module. It's up to the provider of the module to decide what kind of module it is. So I don’t have any mismatch around that, but around a lot of the possible attributes that can be added, they seem like they’re defining attributes about the module or they're affecting its evaluation as opposed to merely checking or halting it. I'm concerned about that conceptual mismatch. I feel it’s a mismatch to me. -DE: there are many proposals people have raised on the issue tracker that I don’t personally like either. [That doesn't invalidate the whole proposal; I'm not championing their attributes.] +DE: there are many proposals people have raised on the issue tracker that I don’t personally like either. [That doesn't invalidate the whole proposal; I'm not championing their attributes.] I want to have this basic concept validated. If there’s no way we can have inline module attributes at all, that’s a useful signal to have from the committee. We're only proposing one attribute, "type", in a way that makes it possible to add new attributes in the future. JHD: Right, and that’s totally fair - leaving syntactic extensibility is wise in most proposals. What concerns me about that though is because of the current lack of host invariance or the current situation around them, there won't be anything preventing individual hosts adding semantics that won’t be creating the mismatch that I’m talking about. That also concerns me. @@ -72,8 +72,7 @@ JHD: Right and I think specifiers are under the control of hosts meaning they ca DE: So you know, going back to this I think it would be a bad thing if we had hosts adding more syntax within their module specifiers. Generally they are complex things, urls & paths already have complex grammars. So that’s why I think it would be cleaner for us to have separate syntax for module attributes that really separates out these other things, we can see in the history of security issues that different syntax for strings is a frequent source of security bugs. -RBN: I wanted to comment on JHD's statement that it must check but not define but -For a module to be able to inform the host on how to handle an import absent an adequate extension or MIME type. I’m curious if you’re restricting this ??? 
- if I wanted to be able to have the module attribute be of type JSON and have it treat the module differently and modify the Accepts header so there is multiple types of a resource so that I receive the JSON vs the HTML version of the resource, that seems like that is part of the point of this. +RBN: I wanted to comment on JHD's statement that it must check but not define but For a module to be able to inform the host on how to handle an import absent an adequate extension or MIME type. I’m curious if you’re restricting this ??? - if I wanted to be able to have the module attribute be of type JSON and have it treat the module differently and modify the Accepts header so there is multiple types of a resource so that I receive the JSON vs the HTML version of the resource, that seems like that is part of the point of this. DE: Yes, that’s what I presented on in this slide. @@ -119,7 +118,7 @@ BFS: my recommendation for stage 3 and it is to split evaluator attributes and c DE: We’re only proposing a check attribute for now. What are you proposing? -BFS: I would be more comfortable if we had a carve-out saying you should not do evaluator attrs currently. +BFS: I would be more comfortable if we had a carve-out saying you should not do evaluator attrs currently. DE: I’d be comfortable with that, and that would solve this particular thing about making a separate copy. You’d never have to make a separate copy if you only had check attributes. However, many people have proposed evaluator attributes, so it's worth investigating them before rejecting them. @@ -138,15 +137,12 @@ But then the corollary to that is that the motivating use case for that, as I un Assuming that my understanding of the motivation is correct, then all you need is some privilege that says 'this thing can execute,' or to fit well with the default of everything can execute, to say “ “this thing can’t execute”. And it seems like it’s adding a lot of complexity to add type because the potential module types are very large. But if they will all fall into full perms or cannot execute; or if we envision additional privileges then it seems like it would be better to talk about that category, to have this proposal deal with ways to designate the privileges of the module, and not the type of the module. -DE: SYG proposed that last year and proposed to use `noexecute` but people in thread found that type or as was more like what they expected. The earlier form of this proposal left all the attributes up to the host, which allows them to go with the no execute option. I think we’ve heard a similar concern already that just restricting this proposal to being one single bit would not fit with the extensibility goals that other people had. +DE: SYG proposed that last year and proposed to use `noexecute` but people in thread found that type or as was more like what they expected. The earlier form of this proposal left all the attributes up to the host, which allows them to go with the no execute option. I think we’ve heard a similar concern already that just restricting this proposal to being one single bit would not fit with the extensibility goals that other people had. JHD: We can preserve the syntactic / object like nature & put a boolean in there... DE: Is this something that we could discuss within Stage 2? The possibility to switch from `type` to a `noexecute` boolean? 
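[Editor's note: as a concrete point of reference, the two shapes being weighed look roughly like this. The `with type:` form follows the syntax sketched in the proposal repo at the time (the exact keyword and punctuation were still in flux), and `noexecute` is purely hypothetical, named here only to mirror the discussion above.]

```js
// Check attribute actually being proposed: the host must treat the resource
// as JSON (and fail if it is served as something executable).
import config from "./config.json" with type: "json";

// Hypothetical single-bit alternative raised above (not part of the proposal
// as presented): mark the import as forbidden from executing.
import data from "./config.json" with noexecute: true;
```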
- - - JHD: If additional attributes would require future proposals then yes, the nature of the thing that determines whether to execute or not seems fine to iterate on within stage 2. BFS: I just want to state that implementing a `noexecute` is actually pretty nontrivial. I don’t think we should try to block progress on trying to get that completely figured out. @@ -216,8 +212,6 @@ JHD: I doubt we'll come up with a resolution in the next few minutes. DE: I’m comfortable adopting this checker restriction. Concretely we did come to the committee last meeting with this single-attribute version, and that was met with the exact opposite concern that it should be extensible. So I would be comfortable going to Stage 2 conditionally requiring that the module attributes be this check attribute. Could we do that? I’d be uncomfortable going to stage 2 with a repudiation of the feedback from last meeting. - - JHD: In spirit what you're saying is OK, but I don’t think it’s reasonable to apply pressure on me or on folks that want extensibility to resolve this in a few minutes. DE: We’ve been discussing this for months, in committee and offline, so I don’t think this is unreasonable pressure. You’re raising this blocker, I’m saying let’s go with much of what you're saying. @@ -231,32 +225,33 @@ The text is already there for `type`, we would just raise it to all module attri RPR: Our timebox has been exceeded, since there is no immediate consensus, please talk offline today so we can bring that back tomorrow. ### Conclusion/Resolution + - No consensus on stage 2, but people will talk offline to potentially revisit tomorrow. ## Built in modules Update Towards Stage 2 + Presenter: Michael Saboff (MLS) -* [proposal](https://github.com/tc39/proposal-built-in-modules) -* [slides](https://github.com/msaboff/tc39/blob/master/Built%20In%20Modules%20TC39%20June%202020.pdf) +- [proposal](https://github.com/tc39/proposal-built-in-modules) +- [slides](https://github.com/msaboff/tc39/blob/master/Built%20In%20Modules%20TC39%20June%202020.pdf) MLS: (presents slides) WH: You’re proposing adding a `BuiltInModule` object that lets you shim and then import built-in modules via `BuiltInModule.import`. I couldn’t tell from the presentation: Is that the only way to import a built-in module? -MLS: I should have stated that. Both the import declaration and import() function work with built-in modules. What is described here is a Built In Module specific API. +MLS: I should have stated that. Both the import declaration and import() function work with built-in modules. What is described here is a Built In Module specific API. The `BuiltInModule.import` would not allow you to import anything else, it would throw on a module specifier that doesn’t match a built-in module, and it would probably throw on anything else. WH: How would shimming work if you used an import declaration? MLS: If you don't have the module in the module map you do a different kind of shimming. -If you do have the module, then you’re not going to provide an initial implementation. I expect that shimming code will do something like: - if (!BuiltInModule.hasModule(“js:Foo”)) { - … // provide shim for js:Foo - BuiltInModule.export(“js:Foo”, myFooExports); - } +If you do have the module, then you’re not going to provide an initial implementation. 
I expect that shimming code will do something like: +if (!BuiltInModule.hasModule(“js:Foo”)) { +… // provide shim for js:Foo BuiltInModule.export(“js:Foo”, myFooExports); +} - // Any shimming to be applied to the base js:Foo +// Any shimming to be applied to the base js:Foo WH: Would the import declaration take effect first? @@ -284,8 +279,7 @@ BFS: I just want to clarify that a script tag is not the only work-around, you c KG: MLS, first thanks for moving this forward. I especially like this design, it seems like it does a good job of making things shimmable in ways the web relies on. Can you go into what you see the advantages of this design are over sticking more stuff on the global object? -MLS: There has been lots of discussion on why we’d grow a global object -To me it seems like there are a couple advantages, people may contend with this, but I believe that an implementation, if it provides an implementation of a builtin module on the fs instead of in memory It makes it a bit easier, someone would say you would use something on the global object, you can do that in the global object if you intercept the first reference. I think yes you can do that, but having built in modules in the file system is a little easier on implementations. So you would save startup memory and things like that. +MLS: There has been lots of discussion on why we’d grow a global object To me it seems like there are a couple advantages, people may contend with this, but I believe that an implementation, if it provides an implementation of a builtin module on the fs instead of in memory It makes it a bit easier, someone would say you would use something on the global object, you can do that in the global object if you intercept the first reference. I think yes you can do that, but having built in modules in the file system is a little easier on implementations. So you would save startup memory and things like that. If you think about it, a built-in module map would map moduleSpecifier keys to some notion of a location in a file system, that’s going to use a lot less memory than having code that will bring in the module’s implementation. It’s also a good way to organize things, that you have modules that are self contained, that can be implemented as contained modules. It’s standard software practice to build things on libraries or other modules, and this is more in line with that from that perspective. @@ -296,7 +290,7 @@ BFS: I had a question about the lifecycle of builtinmodule and import. If we imp MLS: The export is going to change the built in module map. Any subsequent import will get the updated map. I’m not sure I understand the question. -BFS: So say we have the primordial form of a module and a shim form. +BFS: So say we have the primordial form of a module and a shim form. If we import the primordial form and then we shim it, when we import the same specifier, we would see the shimmed module namespace and not the primordial form, correct? When we import the same specifier we’d see the shimmed module namespace, not the original one? @@ -306,21 +300,18 @@ BFS: OK that’s all I wanted to know. TLY: It seems one of the advantages that built-in modules was going to have is that you can trust them to be more intact than random keys on the global. -Once the application runs, one thing that we would always want is to make sure that the other things they import dont shim it -It would make more sense to have a separate phase for shimming rather than a workaround with a script tag that does it. 
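[Editor's note: putting the shimming pseudo-code above together with the freezing step discussed next, the intended flow looks roughly like this. API names (`hasModule`, `export`, `import`, `freezeModules`) are taken from the presentation; this is an illustrative sketch of the proposed design, not an implemented API.]

```js
// Shim only when the host does not already provide the built-in module.
if (!BuiltInModule.hasModule("js:Foo")) {
  BuiltInModule.export("js:Foo", myFooExports); // myFooExports: the shim's exports
}

// Later imports — the declarative form or BuiltInModule.import — resolve to
// whatever is in the built-in module map at that point, shim included.
const Foo = await BuiltInModule.import("js:Foo");

// Once no further shimming is wanted, the application can lock the map.
BuiltInModule.freezeModules();
```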
+Once the application runs, one thing that we would always want is to make sure that the other things they import dont shim it It would make more sense to have a separate phase for shimming rather than a workaround with a script tag that does it. - -MLS: We’ve discussed this, yes the design is that you have to call freezeModules(). It was discussed that shimming was only possible in the first script tag, but that isn’t how all shimming code works, so that isn’t acceptable. If you don’t call `freezeModules` the application code would change things itself. +MLS: We’ve discussed this, yes the design is that you have to call freezeModules(). It was discussed that shimming was only possible in the first script tag, but that isn’t how all shimming code works, so that isn’t acceptable. If you don’t call `freezeModules` the application code would change things itself. Given that we need to allow shimming that builds upon other shimming, we need to allow modules to generally not be frozen and leave it up to the application's discretion of if / when the modules get frozen. -JHD: To answer TLY’s question. Currently if you want to lock things down you need to call a freeze or do an SES.lockdown or something like that -And most apps just don’t do that, like there’s even a technique from MetaMask that will freeze things like that but only in tests, so that you can verify that you’re not modifying things later on but then you don’t actually freeze them in production. So while if everybody needs to lock things down, it becomes an ergonomics cost. It’s already the case that most apps don’t do that. In other words I’m not worried about that ergonomics cost. +JHD: To answer TLY’s question. Currently if you want to lock things down you need to call a freeze or do an SES.lockdown or something like that And most apps just don’t do that, like there’s even a technique from MetaMask that will freeze things like that but only in tests, so that you can verify that you’re not modifying things later on but then you don’t actually freeze them in production. So while if everybody needs to lock things down, it becomes an ergonomics cost. It’s already the case that most apps don’t do that. In other words I’m not worried about that ergonomics cost. RGN: I noticed that for module specifiers you have ASCII letters and then a colon, and I’m curious what the relationship between those and URI schemes (which are registered) will be. MLS: Originally we wanted the specifier of built-in modules to have an IANA registered prefix. -There was pushback, we want them in the form of a URI to match other module specifiers, but there currently isn’t an intent to register one. We are open to registering “js:”. +There was pushback, we want them in the form of a URI to match other module specifiers, but there currently isn’t an intent to register one. We are open to registering “js:”. RGN: Does that mean you can never use http: inside of them? @@ -382,7 +373,7 @@ SYG: So you are actually proposing not integration with modules maps but to a se Then I will modify my comment. This is not stage 2 blocker, but you probably need buy-in from import maps folks. I'll try to build bridges there. If you’re not using them, then you’re proposing something else that could be complementary. I think we need some conciliation there. -MLS: I'll give you a little background. When we last discussed, we talked about building a JS version in import maps. 
When we started thinking about it, we determined that the built in module shimming feature is actually a self-contained problem and didn’t require integration with import maps. if you bound the problem to just builtin modules then you can do it synchronously and eliminate the problem. Import maps operate above the built in module map and the API’s we are proposing. +MLS: I'll give you a little background. When we last discussed, we talked about building a JS version in import maps. When we started thinking about it, we determined that the built in module shimming feature is actually a self-contained problem and didn’t require integration with import maps. if you bound the problem to just builtin modules then you can do it synchronously and eliminate the problem. Import maps operate above the built in module map and the API’s we are proposing. So that’s why we want this to be a self-contained solution. @@ -390,7 +381,7 @@ SYG: That clarifies for me. I am looking forward to reading the full semantics t JWK: if we have a BuiltinModules object, why do we need to have a “builtin module” when we could just use the BuiltinModules global object to get them? -MLS: Yes, but in a module centric world where every script is also a module you would probably want to use the declarative form as a statement because that is more familiar with a module developer. So that’s why that method would be available as well. +MLS: Yes, but in a module centric world where every script is also a module you would probably want to use the declarative form as a statement because that is more familiar with a module developer. So that’s why that method would be available as well. JWK: I have another question. Now the proposal is proposing a special prefix “js:”. Today the module specifier has no meaning and the meaning is left to the host. If we make this change, the specifier will have a meaning. If we could move it to the module attributes Like “with “std”: “js””… @@ -400,7 +391,7 @@ Adding it as a module attribute, I don’t want to make it dependent upon anothe TLY: I think you can avoid all of this ambiguity by making it not a URI. Have something that is not valid syntactic URI syntax. -MLS: We've done lots of bikeshedding on this. In other module loading schemes, they use a @ or other characters We thought it would make sense to be a URI. Using “js:” is compatible with the mostly de-facto URI scheme of ModuleSpecifiers. +MLS: We've done lots of bikeshedding on this. In other module loading schemes, they use a @ or other characters We thought it would make sense to be a URI. Using “js:” is compatible with the mostly de-facto URI scheme of ModuleSpecifiers. TLY: If you want it to look like a URI, it should be a valid registered IANA URI, I believe. @@ -414,7 +405,8 @@ SFC: Being involved with existing global object standard libraries like Intl and MLS: I’m also troubled by that, but the language should have had a built-in module or library scheme many years ago. -I don't think we will take new features and make them both new globals and new builtins. New built in module features should be contained and not impact the engine internals. Proposals typically implementable completely as JS would be builtin modules. Other things like new language syntax and language features their API’s would appear as part of the global object. That’s my opinion of how the committee would move forward. +I don't think we will take new features and make them both new globals and new builtins. 
New built in module features should be contained and not impact the engine internals. Proposals typically implementable completely as JS would be builtin modules. Other things like new language syntax and language features their API’s would appear as part of the global object. That’s my opinion of how the committee would move forward. + ## Deep path properties Presenter: Rick Button (RBU) @@ -459,19 +451,19 @@ WH: If the first item (`counters` in the example) is a computed prop name, how w RBU: Instead of the counter's identifier you would use the computer property’s identifier name? It would work the same way in that you would have the computed property—if the computed property evaluated to counter, the If you change it further it would update inside of counters. -WH: One last question: How do you delete a property this way? +WH: One last question: How do you delete a property this way? RBU: Good question, came up in calls, we don’t know—this is a missing part of this proposal and a missing part of shallow spread. I’m interested in investigating how that would be possible. WH: OK. Thank you. -DE: To respond to those questions, you were saying it would be inefficient to do the repeated updates. There’s no way to observe the intermediate values of the record. So even though logically one thing happens after the other. Although there’s no structural sharing in the record and tuple proposal, the record and tuple proposal is made so engines can do structural sharing that could mitigate that impact. +DE: To respond to those questions, you were saying it would be inefficient to do the repeated updates. There’s no way to observe the intermediate values of the record. So even though logically one thing happens after the other. Although there’s no structural sharing in the record and tuple proposal, the record and tuple proposal is made so engines can do structural sharing that could mitigate that impact. For deleting a property, I think this would logically be a part of destructuring [which we explained is omitted, but could be added later]. … -I don’t know if I’m convinced we need a delete syntax, You could also do that by calling Object.entries, manipulating it procedurally, and I don’t know if we need a syntax for every single thing you could do. +I don’t know if I’m convinced we need a delete syntax, You could also do that by calling Object.entries, manipulating it procedurally, and I don’t know if we need a syntax for every single thing you could do. WH: DE, your first point is not true, it is possible to observe intermediate mutations. If you have the mutations `[foo]: bar, counters[0].value: 2`, then the first mutation is observable when making the second one. The second one will do different things depending on whether `foo` is "counters" or not. @@ -489,7 +481,7 @@ I don’t know of other dynamic languages that have this. Normally the way you optimise this is that you observe that you are the only person looking at this tuple and you can just mutate the underlying object. This is very hard for us to do in JS core, we only do that analysis in the most optimizing compiler, so you’d have very bad interpreter performance. Hard to get right. -RBU: I sympathize. My argument is not against but I believe this is already a used pattern with nested spreads +RBU: I sympathize. My argument is not against but I believe this is already a used pattern with nested spreads I believe that this pattern, I mentioned, people are already doing this but still doing nested spreads. 
This is still a thing because of nested spreads with worse performance because you can do it already. @@ -503,7 +495,6 @@ TLY: It’s an idiom, not a library feature. KM: Sure, it’s an idiom but you’re not getting it from the language itself. - KG: I like exploring the problem space, but given that it’s focused on records and tuples, I don't want to advance past stage 1 until records and tuples advance. RBU: I would 100% agree with that if this proposal only applied to records, which I don’t imagine it would. @@ -516,13 +507,14 @@ RBU: Not familiar with the syntax, please open an issue on the repo. MF: I appreciate early stage proposals bringing examples that explain what a solution could look like. But I want to make sure we’re not committing to a solution like this. I have other ideas for what a solution would look like. I want to ask the champions to look into an API based solution, something like lenses, because I think that it might not be worth the syntactic space. -RBU: 100% agree, The overall goal of the proposal is to solve the problem of providing an ergonomic way of providing this computation, and ideally not breaking the performance ideals of record/tuple, i.e. not block optimizing compilers from doing good work. So as long as we stay as close to that as possible. If this results in a library, I think it’s perfectly valid. +RBU: 100% agree, The overall goal of the proposal is to solve the problem of providing an ergonomic way of providing this computation, and ideally not breaking the performance ideals of record/tuple, i.e. not block optimizing compilers from doing good work. So as long as we stay as close to that as possible. If this results in a library, I think it’s perfectly valid. MF: I think that would be exactly what we should have as the conclusion in the notes. (notetaker: spooky meta) AKI: do we have stage 1? (silence) Sounds like consensus to me? RBU: thank you very much! + ### Conclusion/Resolution Consensus on Stage 1 @@ -538,9 +530,8 @@ YSV: (presents slides) YSV: asking for stage 1, staging process to track research -KG: I wanted to say that I am in support of this. I wanted to mention an additional type of complexity that this brings that YSV kind of mentioned, and I wanted to emphasize it - it’s not just new builtins but proposals to add methods to Set.prototype and Map.prototype -Stalled in part because I brought up what we would need to do about Symbol.species. -If I were designing a language with no concern for runtime performance, then the purist in me is like “well, it’s nice to make things pluggable and subclassable”, but that’s not what we’re doing. The decisions that we make on performance affects billions of people. So I’m strongly in support of removing this despite the language purist in me being slightly sad. +KG: I wanted to say that I am in support of this. I wanted to mention an additional type of complexity that this brings that YSV kind of mentioned, and I wanted to emphasize it - it’s not just new builtins but proposals to add methods to Set.prototype and Map.prototype Stalled in part because I brought up what we would need to do about Symbol.species. +If I were designing a language with no concern for runtime performance, then the purist in me is like “well, it’s nice to make things pluggable and subclassable”, but that’s not what we’re doing. The decisions that we make on performance affects billions of people. So I’m strongly in support of removing this despite the language purist in me being slightly sad. 
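[Editor's note: for readers less familiar with the machinery being debated, this is the `Symbol.species` lookup that `Array.prototype.map` performs today; the proposal explores removing or simplifying this dynamic lookup. The example reflects current spec behavior.]

```js
class MyArray extends Array {
  // Point built-in methods at plain Array when they create derived results.
  static get [Symbol.species]() {
    return Array;
  }
}

const m = MyArray.of(1, 2, 3);
console.log(m instanceof MyArray);                   // true
console.log(m.map(x => x * 2) instanceof MyArray);   // false — species redirected it
```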
JHD: I think I might know the answer but I wanted to hear your explanation for why we would need to remove the static methods, because they can just look up the receiver? If the receiver is MyArray then Array.from makes MyArray. @@ -587,7 +578,7 @@ If I really wanted to have map return an A but I dont have the instance creation YSV: Thank you for that comment, that’s a very interesting thing that you bring up. One thing we’ve been looking at a lot, in a bit of disbelief, is that with symbol species, one thing that’s kind of amazing is that we haven't really seen it being used in user code that we've examined. We haven’t looked at the entire web of course so we don’t really know if it is being used. If you know of a case where this is being done ,and would love to take a look at what they are doing in the code base. Maybe we can take a look at that, that’d be interesting too. -DE: Question for RBN, would you be okay with type 1? Where you’re saying you construct this subclass and inject logic to determine the class bieng constructed in the middle, is that essential functionality to you? +DE: Question for RBN, would you be okay with type 1? Where you’re saying you construct this subclass and inject logic to determine the class bieng constructed in the middle, is that essential functionality to you? RBN: What i was trying to point out (using the type 2 slides), but the issue with Type 1 - @@ -597,18 +588,16 @@ RBN: So if we don’t use Symbol.species and instead use `this.constructor`, I h DE: I agree [that there is this implication and difficulty with Type II]. I was just asking are you okay with that option where you have to override the map thing [Type I]? -BFS: We have a crawler at work that we’ve been rewriting to do audits of what the actual usage type of species is in the browser. It would be good if anyone with specific hooks or traps they want to see usage amounts for or if they have lists of sites they’re concerned with -I think we need to be very careful there. I think a lot of what we’ve seen from the crawler is false positives. It is absolutely stunning how many false positives there are. Also things are never using @@species properly, when they do use it. Which is interesting, ANother topic which we can discuss elsewhere. The problem is that people are assigning values to @@species, and not delegating it like ???, which ??? the `this` value, and then that kills subclassing, and people are subclassing their subclass & it’s broken. So, this is not just about performance. There is nobody using species correctly except for that one library by Feross & he removed support for it. It would be good to know what statistics people want to see, because there are so many noisy statistics going on. +BFS: We have a crawler at work that we’ve been rewriting to do audits of what the actual usage type of species is in the browser. It would be good if anyone with specific hooks or traps they want to see usage amounts for or if they have lists of sites they’re concerned with I think we need to be very careful there. I think a lot of what we’ve seen from the crawler is false positives. It is absolutely stunning how many false positives there are. Also things are never using @@species properly, when they do use it. Which is interesting, ANother topic which we can discuss elsewhere. The problem is that people are assigning values to @@species, and not delegating it like ???, which ??? the `this` value, and then that kills subclassing, and people are subclassing their subclass & it’s broken. 
So, this is not just about performance. There is nobody using species correctly except for that one library by Feross & he removed support for it. It would be good to know what statistics people want to see, because there are so many noisy statistics going on. -RW: So you were asking for examples of Type III. In Test262, we use Type III extensively for testing the behavior of built-ins across realms. We are heavily reliant on setting the species with a cross realm copy of the constructor to make sure the lookup chain of the constructor is preserved correctly. To make sure that the lookup chain is preserved correctly. If you look at it, I don’t want to rathole into that, we can look at it together offline. But that’s a pretty substantial example of where it’s being used in the wild. And I don’t know how else we would test cross realm behavior which is important to the language cause we have access to multiple realms in any given runtime. So I just wanted to put that on the board and say let’s chat about it offline. +RW: So you were asking for examples of Type III. In Test262, we use Type III extensively for testing the behavior of built-ins across realms. We are heavily reliant on setting the species with a cross realm copy of the constructor to make sure the lookup chain of the constructor is preserved correctly. To make sure that the lookup chain is preserved correctly. If you look at it, I don’t want to rathole into that, we can look at it together offline. But that’s a pretty substantial example of where it’s being used in the wild. And I don’t know how else we would test cross realm behavior which is important to the language cause we have access to multiple realms in any given runtime. So I just wanted to put that on the board and say let’s chat about it offline. YSV: Sounds great. WH: I also have concerns about how you would remove type 3 without also removing type 2. If you remove both of them, when you subclass Array to make MyArray, then .map will create Array instances. If you just remove type 3 without type 2, then there is no way to make it return Array instances instead of MyArray [slide link needed] — that’s currently done by setting a null species. I don’t consider replacing all of the array methods a good solution. SYG: I think that is correct, I think it is not realistic. I don’t think we are proposing removing 3 without removing 2. I want to add more data points, I wasn’t around for this, but when this was discussed in ES6 era, the species machinery wasn’t around, and they used ??? And microsoft actually shipped that and found it to be incompatible, and had to un-ship it, and the species machinery kind of came out of that data point. -I don’t think it is realistic to remove 3 without 2. Also from the original motivation of decreasing complexity -Removing 3 without removing 2 is not worth the tradeoff anyway. Like if you just remove 3, you remove maybe 1 branch. But the complexity comes from that you'd look up a property at all in this case ?? and then you’d look up species. And because you look up a dynamic property by constructor, that adds all of the complexity and maintenance headaches. 4 can be split out, but 3 and 2 are a package deal. But I thought it was useful to highlight the slight difference in expressivity. +I don’t think it is realistic to remove 3 without 2. Also from the original motivation of decreasing complexity Removing 3 without removing 2 is not worth the tradeoff anyway. Like if you just remove 3, you remove maybe 1 branch. 
But the complexity comes from that you'd look up a property at all in this case ?? and then you’d look up species. And because you look up a dynamic property by constructor, that adds all of the complexity and maintenance headaches. 4 can be split out, but 3 and 2 are a package deal. But I thought it was useful to highlight the slight difference in expressivity. WH: I agree with what you said but what you said contradicts what’s on the final slides of the presentation. @@ -616,14 +605,14 @@ YSV: I agree with everything that SYG just said, and in fact - I wasn’t entire WH: If you just remove species, then the scenario that I pointed out arises, in that there’s no way to not delegate to a subclass, which you can currently do by setting species to null. -KM: KM: I’m just curious, how did symbol.species, IE just shipped ?? +KM: KM: I’m just curious, how did symbol.species, IE just shipped ?? How did species fix that problem? Because species is just a getter that returns whatever you called it with. WH: Species can be used to turn off the getting of the constructor. KM: I understand that, my understanding was that IE, before species shipped, they changed all the Array prototypes to use this.constructor. -SYG: I think it was something like, I’m speculating here, I looked at some code, I think there is ES5 era code, due to mixin pattern override the constructor with their own constructor. +SYG: I think it was something like, I’m speculating here, I looked at some code, I think there is ES5 era code, due to mixin pattern override the constructor with their own constructor. And that constructor function doesn’t have a species. KM: Oh and because when a species doesn’t get found, you default to [crosstalk] @@ -655,14 +644,15 @@ YSV: Yes, asking for stage 1. RPR: Congratulations, you have stage 1. ### Conclusion/Resolution + Stage 1 - + Remaining queue items + elaboration: I believe that core-js and other shims contribute a significant amount of false positives to measuring web compat data - specifically, I’m convinced that all use of Symbol.match on a non-regex object is core-js doing a feature detection. If browser telemetry can account for these false positives, that would be very helpful for lots of other potential spec cleanups in the future. -RW: https://github.com/tc39/notes/blob/master/meetings/2014-11/nov-18.md#46-zepto-broken-by-new-thisconstruct-usage-in-some-arrayprototype-methods -RW: use of Symbol.species when testing cross realm behavior of built-ins (will follow up) +RW: https://github.com/tc39/notes/blob/master/meetings/2014-11/nov-18.md#46-zepto-broken-by-new-thisconstruct-usage-in-some-arrayprototype-methods RW: use of Symbol.species when testing cross realm behavior of built-ins (will follow up) ## Async Context + Presenter: Chengzhong Wu (CZW) - [proposal](https://github.com/legendecas/proposal-async-context) @@ -702,14 +692,11 @@ I agree that the problem that motivates this proposal — in particular nested e I don’t think this proposal gets us there. I think this just looks like a bug farm to me. Taking a step back and tracking causality (say, along the promise graph), there may be real value to be had in that. -CZW: regarding cancelation discussion, as far as I can tell there is already an in-progress discussion in the NodeJS community. So if the cancelation design is going to be designed in the async context model are there major issues regarding cancelation? 
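[Editor's note: as background for the "async local storage" pattern that comes up below, this is roughly how Node.js's existing `AsyncLocalStorage` is used today. The API shape shown is Node's, not the TC39 proposal's, which was still being explored; the request-handling names are illustrative.]

```js
const { AsyncLocalStorage } = require("async_hooks");

const requestContext = new AsyncLocalStorage();

async function doWork() {
  // No request id is passed in; it is read from the ambient store.
  console.log("working on", requestContext.getStore().requestId);
}

function handleRequest(requestId) {
  // Everything awaited inside the callback — across promise hops and
  // timers — observes the same store without threading a parameter through.
  requestContext.run({ requestId }, async () => {
    await doWork();
    console.log("finished", requestContext.getStore().requestId);
  });
}

handleRequest(42);
```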
- - - +CZW: regarding cancelation discussion, as far as I can tell there is already an in-progress discussion in the NodeJS community. So if the cancelation design is going to be designed in the async context model are there major issues regarding cancelation? CM: I'm not talking about there being an issue relative to cancellation, I'm talking about they’re having most of the same problems. This requires the same machinery as cancellation. If it’s done right they should share about 90% of their underlying machinery. -RBN: Just wanted to say that I’m not entirely sure that I agree with that notion about cancellation, mostly because I’m still coming from the perspective that cancellation comes from a token that gets passed along, so what actually happens when you cancel is very explicit. Though there have been suggestions. There have been requests to implicitly pass along a cancellation token along the stack. An async local store could provide some capabilities for level of complexity that async tasks & locals do. +RBN: Just wanted to say that I’m not entirely sure that I agree with that notion about cancellation, mostly because I’m still coming from the perspective that cancellation comes from a token that gets passed along, so what actually happens when you cancel is very explicit. Though there have been suggestions. There have been requests to implicitly pass along a cancellation token along the stack. An async local store could provide some capabilities for level of complexity that async tasks & locals do. CM: RBN was talking about where you’re explicitly passing a token - I think that’s exactly the sort of thing that’s called for here, which is why I say it shares a lot of mechanism in common with cancellation. @@ -724,7 +711,6 @@ One of those is something that exposes the lifetimes of tasks. I am not sure if BFS: I just want clarification - SYG, do you still have those concerns with async local storage if we allow WeakRefs on async local storage? You were concerned about exposing the lifetime of tasks, but if we can expose a finalizer on the duration of the task (?), do you have the same concerns? - SYG: I hadn't thought of that, I'll have to think it through. I was talking about the hooks like async tasks like before it runs pre/post async hooks CZW: Regarding SYG’s concern, we expose the lifetime status through pre hooks, and there are no collecting let’s say weak refs regarding the object garbage collection state. That's what weak refs do. So the task lifetime provided in the hooks is just pre hooks there are no gc hooks in the async hooks API. @@ -742,9 +728,8 @@ RPR: Would someone like to summarize the objection? CM: I would say from my perspective that the problem domain is compelling but this solution is sufficiently tangled up that it’s not ready. If CZW does further exploration of the problem space I’d be open to new solutions, but anything that has the whiff of dynamic scope is a non-starter for me. ### Conclusion/Resolution -Stage 1 not achieved. - +Stage 1 not achieved. ## Intl Enumeration API for Stage 1 @@ -775,7 +760,6 @@ RW: By choice or by requirement? I ask that because when I was doing the initial WH: Who is “us”? - RW: Us was Toby Lenjel(?) and myself, and the editors that came and went afterwards as well. You can think about how there can be potential for figuring out ambient light patterns in a room in a browser. Comes down to slipping permission door-hangers on everything. 
WH: In the context of things that we do in TC39, whose job is it to limit the fingerprinting surface? @@ -802,8 +786,7 @@ RPR: Who’s that question to WH? FYT: You mentioned the fingerprinting budget - is that part of a standard or part of a feature in Chrome? -JRL: It’s intended to be implemented as a part of the DOM or HTML, something in WHATWG -As an actual specification for multiple browsers to implement. So it’s not something that JavaScript specifically needs to concern itself with, but something that browser implementations of JavaScript need to concern themselves with. +JRL: It’s intended to be implemented as a part of the DOM or HTML, something in WHATWG As an actual specification for multiple browsers to implement. So it’s not something that JavaScript specifically needs to concern itself with, but something that browser implementations of JavaScript need to concern themselves with. FYT: So is that in… what committee is talking about that? @@ -820,4 +803,5 @@ FYT: I would like to ask for stage 1 advancement. And we can resolve the issue d RPR: Congratulations on achieving stage 1. ### Conclusion/Resolution -Stage 1 achieved. \ No newline at end of file + +Stage 1 achieved. diff --git a/meetings/2020-06/june-4.md b/meetings/2020-06/june-4.md index 7194d091..537e504a 100644 --- a/meetings/2020-06/june-4.md +++ b/meetings/2020-06/june-4.md @@ -1,22 +1,22 @@ # June 04, 2020 Meeting Notes ------ +----- -**In-person attendees:** (none) +**In-person attendees:** (none) -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Robin Ricard | RRD | Bloomberg | | Yulia Startsev | YSV | Mozilla | | Jack Works | JWK | Sujitech | | Rick Waldron | RW | Bocoup | -| Caridy Patiño | CP | Salesforce | +| Caridy Patiño | CP | Salesforce | | Ross Kirsling | RKG | Sony | | Sergey Rubanov | SRV | | | Rick Button | RBU | Bloomberg | -| Sven Sauleau | SSA | Babel | -| Istvan Sebestyen | IS | Ecma | +| Sven Sauleau | SSA | Babel | +| Istvan Sebestyen | IS | Ecma | | Keith Miller | KM | Apple | | Michael Saboff | MLS | Apple | | Waldemar Horwat | WH | Google | @@ -24,7 +24,7 @@ | Bradford C. Smith | BSH | Google | | Mark Cohen | MPC | PayPal | | Chip Morningstar | CM | Agoric | -| Jason Williams | JWS | Bloomberg | +| Jason Williams | JWS | Bloomberg | | Felienne Hermans | FHS | Leiden University | | Richard Gibson | RGN | OpenJS Foundation | | Ukyo Pu | PSY | Alibaba | @@ -44,8 +44,8 @@ | Michael Ficarra | MF | F5 Networks | | Justin Ridgewell | JRL | Google | - ## Iterator helpers update + Presenter: Jason Orendorff (JTO) JTO: Iterator helpers: want to make sure that the proposal champions/spec authors - compromised approach, want to briefly present that @@ -59,6 +59,7 @@ JHD/KG: ok KM: Why have one function object? Answered in IRC please report here ## Realms, Stage 2 update + Presenter: Caridy Patiño (CP) - [proposal](https://github.com/tc39/proposal-realms) @@ -66,7 +67,7 @@ Presenter: Caridy Patiño (CP) CP: (presents slides) -GCL: Clarifying question: import() on realm vs compartment - +GCL: Clarifying question: import() on realm vs compartment - CP: I think that they are analogous. When you import inside a realm, you are running the realm itself with the intrinsics of the realm and the module graph of the realm, versus in a compartment, when you run import, you are importing in the existing module graph, where you incubate the compartment, does that answer your question? 
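[Editor's note: a rough sketch of the plugin-isolation usage described in the slides. The constructor and `import` method reflect the Realms API as presented at this meeting, which continued to evolve afterwards; `exposedPluginAPI` and `init` are hypothetical names for illustration only.]

```js
// Incubating code creates a realm with its own fresh intrinsics and its
// own module graph, then decides what (if anything) to hand in.
const realm = new Realm();

// Importing inside the realm evaluates the module against the realm's
// globals, not the incubating page's.
const plugin = await realm.import("./plugin.js");

// In practice the champions describe passing capabilities through a
// membrane rather than sharing the incubator's globals wholesale.
plugin.init(exposedPluginAPI);
```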
@@ -101,19 +102,16 @@ CP (continues presenting slides) SYG: Thank you for taking feedback to heart, I heard you were getting a TAG review as well. I think speaking purely as an engine implementer, I think realms are not very problematic, the concern here is more integration with the web platform, and it seems like there is a path forward there maybe, given the use cases you have presented which I found very useful, I was missing that from the explainer. There is one thing from the template use case - it seems like you have some performance expectations around how realms are implemented. I don’t use lodash tpls, don’t know how they work, And you are saying that with realms, for each template, it would create a new realm? Or it would create a new realm - I can’t imagine it would create a new realm given that the point is to not reuse globals. -CP: we have JDD in the call, he created Lodash so he can provide some feedback -I think the point is that you want this generated code to run in an environment that is as clean as possible, and that way the code that you’re running there, which in this case is a template which is compiling to a function, ??? +CP: we have JDD in the call, he created Lodash so he can provide some feedback I think the point is that you want this generated code to run in an environment that is as clean as possible, and that way the code that you’re running there, which in this case is a template which is compiling to a function, ??? You could set the `_.template` to reuse the same Realm as you can freeze the Global object from it, so you don't need a new Realm for each usage. You could do this in an iframe today, you’d need to keep it connected, which leaks access into the iframe’s global object, plus the iframe is a lot heavier. SYG: So there could be different expectations around how lightweight realms are. V8 in particular - this is not a blocker or even really a concern - but currently V8 does not lazily load any globals, including the intrinsics, it has made the tradeoff that instead of lazy-loading stuff it would eagerly do everything to save on latency later. So if we have realms as a way that folks think are a lighter weight solution to creating this stuff (because iframes are currently very heavy-weight), that might force changes I guess, it's a implementation concern but its not really a problem with the design, it could force architecture changes and that might be a bigger ask then than what it seems like right now. I just wanted to put that out there, it’s not really an issue for stage 2. I’m happy with this for stage 3 in the future, I’d like to see acceptance from the web platform with the HTML folks and the TAG review. - - JWK: [slide 15](https://docs.google.com/presentation/d/1TfVtfolisUrxAPflzm8wIhBBv_7ij3KLeqkfpdvpFiQ/edit?ts=5ed5d3e7#slide=id.g86384024ee_3_0) In this slide, it imports the plugin API from the main realm, and expose it to the subrealm, so it’s possible to get the main realm’s Object constructor from the subrealm. Is this another concern of the realms API? or another of the SES proposal? -CP: As I mentioned when I stopped by this particular example, if you really want to have a clear separation between the two sides you could use a membrane, at Salesforce we do use a membrane there, that way the identity of the objects coming through the membrane are fixed, so that you don't leak constructors as a way to access globals from the other sides. At this point we have very advanced membranes. We think a membrane there would work just fine. 
This hazard is not something new, this hazard exists today when you go to load the VM module in node, when you go to create a new Context to evaluate code. By piping new globals into the new realm. And in many cases you don’t really care because the code that you’re going to evaluate there, you’re not worried about it doing its own thing, because it’s not about security it’s about boundaries, but if you really want to have the separation you can still do it with some fancy code, like a membrane in this case will just do the job. +CP: As I mentioned when I stopped by this particular example, if you really want to have a clear separation between the two sides you could use a membrane, at Salesforce we do use a membrane there, that way the identity of the objects coming through the membrane are fixed, so that you don't leak constructors as a way to access globals from the other sides. At this point we have very advanced membranes. We think a membrane there would work just fine. This hazard is not something new, this hazard exists today when you go to load the VM module in node, when you go to create a new Context to evaluate code. By piping new globals into the new realm. And in many cases you don’t really care because the code that you’re going to evaluate there, you’re not worried about it doing its own thing, because it’s not about security it’s about boundaries, but if you really want to have the separation you can still do it with some fancy code, like a membrane in this case will just do the job. JWK: So is the membrane included in the SES proposal? @@ -141,13 +139,12 @@ LEO: I actually have a question for the group, just want to make sure, I’m goi YSV: mention that we agree a lot with what SYG mentioned, there is skepticism on our side regarding how this fits in with the broader web architecture and we have some concerns there, but we’ve been in touch with the champion group and will continue to do so. - - - ### Conclusion/Resolution -* Capture thumbs up reviews from the HTML and W3C TAG groups before Stage 3 advancement + +- Capture thumbs up reviews from the HTML and W3C TAG groups before Stage 3 advancement ## Smart Unit Preferences in Intl.NumberFormat for Stage 1 + Presenter: Younies Mahmoud (YMD) - [proposal](https://github.com/younies/proposal-intl-number-format-usage) @@ -183,7 +180,7 @@ YMD: Do you mean like for knowing where the user is or what is their preference RRD: Yes. Using all of that preference system to actually uniquely identify that user. -SFC: What I was just talking about in terms of user preferences, that’s specific to the user preferences proposal, and fingerprinting is going to be one of the top things to discuss. When it comes to the proposals that are only dependent on CLDR data for reasons i've mentioned before…ss This CLDR data is based only on the browser version. The browser chooses which version of CLDR to ship, so we’re not exposing a new fingerprinting vector. +SFC: What I was just talking about in terms of user preferences, that’s specific to the user preferences proposal, and fingerprinting is going to be one of the top things to discuss. When it comes to the proposals that are only dependent on CLDR data for reasons i've mentioned before…ss This CLDR data is based only on the browser version. The browser chooses which version of CLDR to ship, so we’re not exposing a new fingerprinting vector. 
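**Editor's note:** a hypothetical sketch of the `usage` option direction described above — the option value and output are illustrative, not settled API. It also illustrates SFC's point: the result is a pure function of the requested locale plus the CLDR data the browser ships, with no per-user input involved.

```js
// Hypothetical sketch — the "usage" option and the exact output are illustrative only.
const height = new Intl.NumberFormat("en-US", {
  style: "unit",
  unit: "meter",
  usage: "person-height", // the proposed "smart unit preference" hook
});
// For en-US CLDR data this might localize to feet and inches (e.g. "5 ft 9 in");
// the same code with "de-DE" would stay metric. Only locale + CLDR version matter.
console.log(height.format(1.75));
```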
RRD: that absolutely answered my question @@ -193,12 +190,11 @@ MPC: back to RRD question, not sure SFC answered, idk if the individual preferen the attack I'm envisioning is you check for edge cases to find a combination of preferences that uniquely identify the user. - SFC: I think DE can speak a bit to this as well. I think there’s 2 places where user preferences can originate from. The one source is CLDR data which is deterministic based on the browser version, the other source is the user preferences, which is currently not available on the web platform. Currently in the web platform, the only available locale information are language, region, and scripts. There's no other way to access the user preferences. That vector simply doesn't exist. Supporting additional preferences, such as preferred units or first day of week, is what the user preferences proposal is hoping to add. We want to champion that proposal because this is one of the top feature requests that we get. It's a separate proposal - currently the scope of that proposal is to add a new property called navigator.locales, and navigator.locales would fully encompass all user preferences. So in terms of crawling the API for edge cases in user preferences, those would only come from two sources, CLDR or navigator.locales, the latter of which does not exist yet but will be proposed. MPC: It sounds like this proposal doesn't add any fingerprinting possibilities, but the user preferences proposal might. -SFC: The attack is potentially relevant to the user preferences proposal. On the one hand user preferences is a feature request that we get over and over again, but on the other hand it's a fingerprinting vector. We want to support user preferences while balancing the new fingerprinting vector. So, not for this proposal. +SFC: The attack is potentially relevant to the user preferences proposal. On the one hand user preferences is a feature request that we get over and over again, but on the other hand it's a fingerprinting vector. We want to support user preferences while balancing the new fingerprinting vector. So, not for this proposal. DE: This proposal exposes non-preferenced locale user data. These are separate things that are both valuable. If you pick through a server environment, it would never make sense to expose user preferences to a server environment. Instead, you'd thread through preferences from some other source, such as saved user preferences or, maybe in the future, HTTP headers. @@ -230,14 +226,16 @@ RPR: Congratulations, you have stage 1. YMD: Thanks to my colleague Hugo as well. ### Conclusion/Resolution + - Stage 1 ## Intl.Segmenter for Stage 3 + Presenter: Richard Gibson (RGN) -* [proposal](https://github.com/tc39/proposal-intl-segmenter) -* [slides](https://docs.google.com/presentation/d/1Pe9eVhgK93cgB3KCufTQvzqCjIYj3RRxJaOeNIbWN_A) -* [spec text](https://tc39.es/proposal-intl-segmenter/) +- [proposal](https://github.com/tc39/proposal-intl-segmenter) +- [slides](https://docs.google.com/presentation/d/1Pe9eVhgK93cgB3KCufTQvzqCjIYj3RRxJaOeNIbWN_A) +- [spec text](https://tc39.es/proposal-intl-segmenter/) RGN: (presents slides) @@ -280,7 +278,7 @@ DE: Why are getters different from other things that access internal slots? RGN: Because if you're accessing an internal slot directly as an own-data property, the proxy gets to intercept that. The getter bypasses the proxy. There’s no handler that’s invoked when I call the getter and pass the proxy as the receiver. 
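**Editor's note:** a minimal illustration of the behaviour RGN describes, using an existing built-in accessor that reads an internal slot.

```js
// Map.prototype.size is an accessor whose getter requires the [[MapData]] internal slot on `this`.
const map = new Map([["a", 1]]);
const proxy = new Proxy(map, {}); // no traps: [[Get]] forwards, with the proxy as receiver

console.log(map.size); // 1
try {
  proxy.size;
} catch (e) {
  // TypeError: the `size` getter ran with `this` === proxy, which has no [[MapData]]
  // internal slot; that check happens inside the getter, where no proxy handler runs.
  console.log(e instanceof TypeError); // true
}
```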
-DE: But that's what a membrane is the membrane wraps the object and proxies it what it gets, +DE: But that's what a membrane is the membrane wraps the object and proxies it what it gets, I still don’t understand what’s different between getters and other methods that access internal slots. Just because getters have a way of getting around it, doesn't mean that you have to have a different membrane unwrapping. RGN: There is no opportunity to intercept anything if I invoke the getter with the proxy as a receiver. @@ -305,7 +303,7 @@ DE: Why are we suddenly talking about internal slots being exotic? RGN: That is vocabulary from MM, I’m also a little concerned as to why it’s being called exotic, but it doesn’t change the nature of the issue -JRL(KM?): I thought that was what we just called objects that have internal slots per the spec. +JRL(KM?): I thought that was what we just called objects that have internal slots per the spec. SYG: Exotic objects are not those with internal slots. Any object can have those. Exotic means the object has special behavior. @@ -318,11 +316,10 @@ You mentioned this precedent from AggregateError. I mentioned in that discussion RGN: I think that is a good idea and I am willing to table that in pursuit of; in order to not make a regrettable mistake here. -So there are 4 options for ways to sidestep the issue: +So there are 4 options for ways to sidestep the issue: Change from accessor properties to own data properties, the same as we did for AggregateError; Change from an accessor property to a method that returns a fresh segmenter; -Strip off the segmenter property altogether; or -Strip off all properties (“segmenter” and “string”) +Strip off the segmenter property altogether; or Strip off all properties (“segmenter” and “string”) DE: My current feeling is that we do none of these, and I’d like to discuss this more in the context of the internal slot hazard. @@ -362,8 +359,7 @@ RGN: There's no code like that, I can never pass a closure as an argument. WH: I don’t think you understand what a closure is. -JHD: First of all, to clarify terminology, the issue is not objects in internal slots. Tons of internal slots already hold objects. The issue is things that expose objects held in internal slots -Closures do not do this because the code that is in the function already has all that access, it’s just preserving that access. What this means is that if I’m holding a closure I cannot get access to the variables it can see unless it returns them, and that can be wrapped by membrane-like patterns. The separate thing is, the fact that most, but not all, internal slots that are exposed to users are primitives, does not change the fact that… there actually is an internal slot, [TypedArray.prototype.buffer](https://tc39.es/ecma262/#sec-get-%typedarray%.prototype.buffer). +JHD: First of all, to clarify terminology, the issue is not objects in internal slots. Tons of internal slots already hold objects. The issue is things that expose objects held in internal slots Closures do not do this because the code that is in the function already has all that access, it’s just preserving that access. What this means is that if I’m holding a closure I cannot get access to the variables it can see unless it returns them, and that can be wrapped by membrane-like patterns. 
The separate thing is, the fact that most, but not all, internal slots that are exposed to users are primitives, does not change the fact that… there actually is an internal slot, [TypedArray.prototype.buffer](https://tc39.es/ecma262/#sec-get-%typedarray%.prototype.buffer). The other question is, there was some conversation earlier about the throwing behavior, everything except Array and Error methods have prototype ??? So that is a decision or flaw in the design of Proxy that is not relevant at all, and I think we should leave it to MM or maybe KKL to present arguments about communication channels around that. @@ -381,12 +377,15 @@ RPR: On the subject of tabling, we’re at the end of the timebox. RGN do you wa RGN: Obviously we need to bring this up as a distinct issue for conversation at the next meeting. In the meantime, I would appreciate input on the preferred means of bypassing it for Intl.Segmenter. https://github.com/tc39/proposal-intl-segmenter/issues/96 -KKL: If I may, I can volunteer MM to give a presentation on this at the next meeting. +KKL: If I may, I can volunteer MM to give a presentation on this at the next meeting. + ### Remaining items in the queue 1. New Topic: Presentation on hazard (KKL) 2. New Topic: Please move the SES discussion offline, and we can discuss API changes in TG2 (SFC) + ## Announcements + YSV: I intended to announce a research call that is happening. If you have questions about collecting data or the psychology of the programmer feel free to join. The first one is going to be June 25th at 5:45pm CEST. JHD: Istvan posted on the Reflector that the opt out period for ES2020 is over. That will then be going to the Ecma GA to be the final version of ES2020. So just a heads up for the group. @@ -396,10 +395,11 @@ MLS: Did we vote on this contingent on the opt-out period? JHD: Yes, at the last meeting. ## Generic Comparison + Presenter: Hemanth HM (HHM) -* [proposal](https://github.com/hemanth/generic-comparison) -* [slides](https://docs.google.com/presentation/d/1OO3QwtP4S0SOXGW9m4pdgG_CHo2eCz0sA6u3NXAgb9M) +- [proposal](https://github.com/hemanth/generic-comparison) +- [slides](https://docs.google.com/presentation/d/1OO3QwtP4S0SOXGW9m4pdgG_CHo2eCz0sA6u3NXAgb9M) HHM: (presents slides) @@ -409,13 +409,12 @@ JHD: Given that we’re going for stage 1, we’re proposing that we continue to WH: “Exploring the problem of generically comparing values” is too vague. -JHD: I phrased it that way because stage 1 is about addressign a problem. spaceship operator is what I would like as a solution. The point of stage 1 is to address a problem. If we showed up with a problem and no idea for a solution, I agree that would be too vagueThat’s what we hope we can all explore during stage one +JHD: I phrased it that way because stage 1 is about addressign a problem. spaceship operator is what I would like as a solution. The point of stage 1 is to address a problem. If we showed up with a problem and no idea for a solution, I agree that would be too vagueThat’s what we hope we can all explore during stage one WH: Okay. I’m still not satisfied with that answer because Array.prototype.compare and the spaceship operator are two very different things with two very different use cases. JHD: the intention would be that if we had the op object.compare could still exist and would delegate to the operator. -It would take 1 arg and return this spaceship argument and delegate to the protocol for the operator -That’s what we have in mind but no spec text is written or whatnot. 
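**Editor's note:** since no spec text existed, the following is a purely hypothetical sketch of the "method delegating to an operator protocol" idea JHD outlines; the symbol, the helper, and the `<=>` mapping are all invented here for illustration.

```js
// Purely hypothetical — none of these names exist in the language or in any spec text.
const compareSymbol = Symbol("compare"); // stand-in for a hypothetical well-known symbol

class Temperature {
  constructor(deg) { this.deg = deg; }
  [compareSymbol](other) {
    return this.deg < other.deg ? -1 : this.deg > other.deg ? 1 : 0;
  }
}

// An Object.compare-style function could take one argument and delegate to the
// protocol, just as a spaceship operator would:
const compare = (a, b) => a[compareSymbol](b); // conceptually, `a <=> b`

console.log(compare(new Temperature(10), new Temperature(20))); // -1
```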
+It would take 1 arg and return this spaceship argument and delegate to the protocol for the operator That’s what we have in mind but no spec text is written or whatnot. WH: Okay, so you want to explore adding the spaceship operator to the language? @@ -423,12 +422,7 @@ JHD: Yes. The intention for the slide show was to discuss the process that HHM a WH: Is the intention of the proposal that <=> be consistent with <, <=, >, >=, ==, and !=? -JHD: that question came up in the hallway track -it would be weird if they didn't agree -I'm nervous about suggesting that we change the way < > work -1 one is allow them to disagree -2 change the way < > work -3 if return result doesn't agree, then throw +JHD: that question came up in the hallway track it would be weird if they didn't agree I'm nervous about suggesting that we change the way < > work 1 one is allow them to disagree 2 change the way < > work 3 if return result doesn't agree, then throw Very personally nervous of changing how the less-than and greater-than operators work. @@ -442,20 +436,19 @@ WH: Supporting incomparable is not an edge case. JHD: Sorry, let me rephrase. All the cases with -0, NaN, infinities, and so on, we would have to answer those (before stage 2?) - -If the result does not agree with less and greater it should fail +If the result does not agree with less and greater it should fail all of the core cases around special values (nan etc) we would address but aren't prepared to do that now. WH: What would the result be for `3 <=> NaN`? -JHD: I don't know. We would think about that and come back to the community with an answer. Unless the discussion is that you think it’s impossible for us to come up an answer to that, which would tank the whole proposal, in which case let’s discuss it now. +JHD: I don't know. We would think about that and come back to the community with an answer. Unless the discussion is that you think it’s impossible for us to come up an answer to that, which would tank the whole proposal, in which case let’s discuss it now. WH: Yes I do. For other languages four possible <=> results (less-than, equal, greater-than, or incomparable) are sufficient to match the behavior of <, <=, >, >=, ==, and !=. In ECMAScript they’re not. There are cases that don’t fall into any of the {less-than, equal, greater-than, or incomparable} buckets. JHD: Given that ???. perhaps when <=> used with NaN it would return NaN -In js you can’t take the result of ??? +In js you can’t take the result of ??? WH: In ECMAScript that’s not sufficient. In addition to NaN, there are comparisons among primitives for which there is nothing sensible the spaceship operator can give. @@ -486,8 +479,7 @@ SYG: I mean yes, you could supply a comparator, that would be a more scoped solu JHD: If we only - the primary use case is arrays, like you said, if we had a ??? that took an optional comparator function, certainly that would work. The user has to handle recursion in there a little bit. And separately there is no way for userland types to generically participate in comparison. You’d have to, as the author of the comparator function, know how to compare every kind of value, and you may not have an opinion on everything, and if you don’t, it’s nice to delegate to the implementation. -SYG: to be more concrete, the can of worms of all languages is not worth the time right now when array comparison is what we’re looking for right now, I’m ok to explore that -then that seems fine to me. 
If the stage 1 is “let’s figure out how to generically compare anything in JavaScript”, I’m not comfortable with that. +SYG: to be more concrete, the can of worms of all languages is not worth the time right now when array comparison is what we’re looking for right now, I’m ok to explore that then that seems fine to me. If the stage 1 is “let’s figure out how to generically compare anything in JavaScript”, I’m not comfortable with that. RBN: I’m going to focus more on the symbol than the operator. I’m not convinced on the operator at the moment, but I have for some time now been discussing interest in investigating equality in certain other cases. We've talked about things like wanting to provide Map keys that allow using a complex object as a key but allow you to have another complex object that uses that key but has a different reference identity. They are very different ??? one thing I'm interested in is adding symbols for equals and comparison not related to the operators. That are not related to the operators, but merely a means of defining a protocol for a common API that library authors and developers could use to say, if you want to determine if I am equal to something else that is not necessarily satisfied by ===, then you could use these symbols, and it would be useful in cases in a map or a set for determining equality. @@ -505,7 +497,7 @@ WH: Subtraction doesn’t work, it doesn’t always give you the correct result JHD: for finite numbers perhaps -WH: For finite numbers it works but if you include infinities it doesn’t. For finite numbers, you might get an overflow. However, `+Infinity == +Infinity` is true, but `+Infinity - +Infinity` gives you a NaN. +WH: For finite numbers it works but if you include infinities it doesn’t. For finite numbers, you might get an overflow. However, `+Infinity == +Infinity` is true, but `+Infinity - +Infinity` gives you a NaN. JHD: So yeah, we’d have to handle the infinities and the NaNs, just like all the Math operations in the spec, but the rest of it would be mostly subtraction and that’s the sort of thing we’d have to handle in stage 2. @@ -513,11 +505,9 @@ RPR: Queue is empty JHD: Sounds like there is pretty strong opposition to spend committee time exploring generic comparison, but people are roughly okay with addressing the problem of comparing arrays. I am unclear how we can compare arrays without requiring a comparator without also addressing generic comparison of values. But either way it seems like this problem space has even more to talk about even though we’ve been given strong feedback on which parts to focus on and which parts to avoid. So it seems worthy of exploring further in stage 1. Can we have stage 1 for the proposal given that all the strong feedback we’ve received would be weighted highly? - TLY: The original pb was how do you compare equality of arrays rather than comparing any value. I think it’s a lot easier to talk about a new way of determining equality than it is to try to give a total ordering to all values in JavaScript, or even a partial ordering. -JHD: Okay. So you’re just saying that the spaceship operator doesn’t give an ordering (?) but if we’re trying to - you’re talking about the first problem we focused on, ordering arrays, which would recurse into arrays, but then not knowing which is bigger than the other -Would they spaceship to zero or not is that you want to explore right? +JHD: Okay. So you’re just saying that the spaceship operator doesn’t give an ordering (?) 
but if we’re trying to - you’re talking about the first problem we focused on, ordering arrays, which would recurse into arrays, but then not knowing which is bigger than the other Would they spaceship to zero or not is that you want to explore right? TLY: I wouldn’t phrase it that way but yes. @@ -527,10 +517,9 @@ SYG: not comfortable to go to stage 1 with “we will take your feedback strongl WH: I find the framing of defining array equality by invoking <=> on the elements and checking if it returns 0 or not to be very strange. In other languages the concept of equality generally does not depend on the existence of any ordering defined between unequal elements. Array equality should depend only on element equality. -KKL: Briefly echoing WHs point, take care not to consider equality equivalent to <=> returning zero, because zero has the meaning of incomparability so it’s not a bijection +KKL: Briefly echoing WHs point, take care not to consider equality equivalent to <=> returning zero, because zero has the meaning of incomparability so it’s not a bijection -HHM: So to confirm again are we going to pause the 3 way comparison operator for now? Or rephrase the proposal -And probably if there is support for three-way we can take it as a different proposal in the future. +HHM: So to confirm again are we going to pause the 3 way comparison operator for now? Or rephrase the proposal And probably if there is support for three-way we can take it as a different proposal in the future. JHD: But we are very aware that in order to bring it back to the future we have to take in account all of that feedback. Would not want to waste committee time until we can persuade all people that have given feedback. @@ -538,7 +527,7 @@ DE: Can I ask that before this is moved into the tc39 org that there’s an expl JHD: What we will likely do is call this proposal withdrawn or rejected, and make a brand new one with array equality pieces of this one and say that is stage 1. And then there’s no confusion about what you just talked about. We'll put a note on this one to point to the new one. Does that seem like an ok approach? -DE: Yeah that sounds like a great way to clarify publicly, glad you’re being careful about that. +DE: Yeah that sounds like a great way to clarify publicly, glad you’re being careful about that. JHD: I think the title of proposals, particularly early proposals, should reflect the problem space. So if we’re agreeing on array equality, then that’s what we should title it. @@ -547,21 +536,20 @@ DE: In general I’m happy with an early proposal proposing a concrete straw-per JHD: Do we have consensus for stage 1 for array equality, and we will consider this other thing withdrawn? ### Conclusion/Resolution + - Stage 1 with reframing to array equality ## .item() for Stage 1 -Presenter: Shu-yu Guo (SYG) -* [proposal](https://github.com/tabatkins/proposal-item-method) -* [slides](https://docs.google.com/presentation/d/1vRjhR1Vl9GeOeXno-s8DkQppeZFE3xx59Od91HG6db4/edit) +Presenter: Shu-yu Guo (SYG) +- [proposal](https://github.com/tabatkins/proposal-item-method) +- [slides](https://docs.google.com/presentation/d/1vRjhR1Vl9GeOeXno-s8DkQppeZFE3xx59Od91HG6db4/edit) SYG: (presents slides) - Just to make note taking easier please use the queue for everything & please not to talk over each other. Harder to capture cross talk in a remote format. - MF: Prefacing this, I am totally on board with this proposal. Are you considering arguments exotic objects to be indexable, and do they ... ? SYG: Damn fine question. 
What is the argument exotic object's prototype? @@ -572,47 +560,42 @@ SYG: It seems like probably not, I don’t have a good answer for you. MF: I’d love for them to be able to get it but I don’t see a technical way for how to do it. -SYG: I’m not sure we want to start saying like all remote exotic objects get their own copy of an item method? That seems to be undesirable if they don’t have a prototype right now. It seems okay to me right now that you’d have to cast it to an array to get that, but - +SYG: I’m not sure we want to start saying like all remote exotic objects get their own copy of an item method? That seems to be undesirable if they don’t have a prototype right now. It seems okay to me right now that you’d have to cast it to an array to get that, but - MF: We could put an own-property on arguments exotic objects with the value of a shared intrinsic. SYG: That’s kinda weird and magical, thanks for raising it, I hadn’t thought about it. -RBN: I noticed you mentioned WebIdl for concern. But ActiveX/COM has similar concerns, it exposes collection indexers as `item` to javascript and it also exposes it as `.Item` with a capital `I` to languages that are not javascript. +RBN: I noticed you mentioned WebIdl for concern. But ActiveX/COM has similar concerns, it exposes collection indexers as `item` to javascript and it also exposes it as `.Item` with a capital `I` to languages that are not javascript. I think most of the times when it works with JS, it expects the 'i' in item to be lowercase. -I don't think it would be an issue with Array.prototype, though there might be some possibly issues with activeX objects using the DOM -But then I haven’t looked at how or whether there are any differences in how Chakra handles ActiveX anymore, but I think that applied to IE/old Edge, so I’m not sure if that still applies with Chromium Edge. +I don't think it would be an issue with Array.prototype, though there might be some possibly issues with activeX objects using the DOM But then I haven’t looked at how or whether there are any differences in how Chakra handles ActiveX anymore, but I think that applied to IE/old Edge, so I’m not sure if that still applies with Chromium Edge. SYG: Fortunately, I know nothing about exposing ActiveX and COM to JS. RBN: they get a - in old IE you would get an ActiveX object that looks like a JS object. the API was complicated it's hard to explain. -[...?] with methods that were exposed from the ActiveX or COM object -And it’s hard to explain how that works, the APi was somewhat complicated and it was treated as an object, but it’s also where we’ve run into issues in the past with things like Document.all, and things like that. +[...?] with methods that were exposed from the ActiveX or COM object And it’s hard to explain how that works, the APi was somewhat complicated and it was treated as an object, but it’s also where we’ve run into issues in the past with things like Document.all, and things like that. SYG: Ok, thanks for the heads up. DE: So I think it’s great that this proposal is coming along, indexing from end of the array comes up all the time & you have to type `length` -And also simplifying something with the web is good. 
I want to raise a related proposal that could potentially be taken on, which is getting the last element of the array. If we just add `item`, we're going to have a lot of code that uses `item(-1)`, and I think it's ugly to use that sentinel just to get the last item. It would be nice to have a method just to get the last element. We've had investigations into having a getter for the last element, but that's kind of poisoned by ???. If someone wanted to champion an `Array.prototype.lastElement`, that would be good. It could be more ergonomic than `item(-1)`. Or maybe we do just want people to write `.item(-1)`. I don't want to expand the scope of this proposal, but it comes up because the `item(-1)` idiom would result from this. This could set that idiom; that's what raises it mentally for me.

SYG: I see. I think for that I would like to… Technically I don't see having it in addition to `.item()`. I am not sure if `list[-1]` in the Python ecosystem is considered a usability issue.

DE: I don't think it is an issue there, but I don't think it is a JavaScript idiom either. Bringing that in where we didn't previously have it, I don't think that's bad, and I don't think it should slow down this proposal, but I would like it if someone made progress on the last-element proposal, because I think it could be independently valuable.

RBN: I was interested in the proposal for `last()` when it came up before, and more interested in changing the name at the time; library issues prevented "last". I was interested in "peek" as a parallel to "push" and "pop". Having "item" and "peek" or "lastItem" might make sense, so that we have methods that can index from the front of the array and from the end of the array: peek/pop/shift/unshift.

JHD: In response to DE, passing -1 in arrays or slices is a very common idiom. The workaround that people are already using for `.last()` is `.slice(-1)[0]`. I pasted a link in IRC: Rails has a `.forty_two` method on arrays as a joke. The cheekiness aside, it raises the question of why one of those methods is special. There's a slippery slope argument: if we have "last", do we need "first", "second", etc.? We could still add a last, but it seems like an improvement to have it. (?)
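**Editor's note:** the idioms being weighed in this exchange, side by side; `item` is the method name as presented at this meeting.

```js
const arr = ["a", "b", "c"];

// Today's workarounds for "last element":
arr[arr.length - 1]; // "c" — requires naming the array twice
arr.slice(-1)[0];    // "c" — the workaround JHD mentions above

// With the proposal as presented here (the method was called `item` at this meeting):
// arr.item(-1);     // "c" — negative indices count back from the end
```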
TLY: JHD covered my question @@ -636,18 +619,17 @@ SYG: Again, asking stage 1 RPR: Okay, no objections to stage 1. Congratulations on stage 1. - ### Conclusion/Resolution -- Stage 1 +- Stage 1 ## Incubation call chartering + Presenter: Shu-yu Guo (SYG) SYG: introducing incubation calls again -Every 2 week hourly call -there is an every 2 week hour call in which we call out proposals that could benefit from video call feedback. in order to come back to committee with a better understanding of what probable issues may be. +Every 2 week hourly call there is an every 2 week hour call in which we call out proposals that could benefit from video call feedback. in order to come back to committee with a better understanding of what probable issues may be. With the champions and other stakeholders, so as to come back to committee with a more polished picture or a better understanding of what possible issues may be. So the chartering process here is that I’m going to ask for participation for the earlier proposals here at this plenary and proposals that may have gone stagnant for a while. And see if the stakeholders and the champions are willing to be on the lookout for a schedule for the incubator call where we discuss these in between plenaries. Does that all make sense? @@ -656,14 +638,10 @@ There is a "how the incubation call works" to explain how it works. in the refle For the previous set of proposals, check the Reflector. Previously, we talked about realms, we talked about the this reflection proposal, and then we talked about module attributes as well. And I think largely other than some scheduling mishaps, it was a net benefit and it was hopefully useful especially to the realms folks to get more time to hear the feedback. So this time I’m calling out the following 3 proposals: -UUID -some concerns there around webcrypto -BC has agreed to participate in an incubation call about the UUID proposal, so if you are interested in that space, be on the lookout for that. +UUID some concerns there around webcrypto BC has agreed to participate in an incubation call about the UUID proposal, so if you are interested in that space, be on the lookout for that. -.item -I was going to put `.item` on it, but given that it didn’t seem very controversial, it doesn’t seem like there’s a need for high-bandwidth feedback from delegates, so I will strike that one. -Generic Comparison -The other was generic comparison, given there is contention around the problem space to be explored. Especially for the champions who still are interested in generic comparison. Are the champions of UUID and generic comparison open to incubation call participation? +.item I was going to put `.item` on it, but given that it didn’t seem very controversial, it doesn’t seem like there’s a need for high-bandwidth feedback from delegates, so I will strike that one. +Generic Comparison The other was generic comparison, given there is contention around the problem space to be explored. Especially for the champions who still are interested in generic comparison. Are the champions of UUID and generic comparison open to incubation call participation? JHD: For comparison, certainly. @@ -680,11 +658,9 @@ SYG: there may be other folks who feel strongly in the opposite direction. I thi YSV: OK I would like to offer one other thing, we discussed our approach to security in the chat as something we should nail down. not as a proposal, but I think it’s worth discussing. 
-In parallel to that discussed to people in mozilla and there have been eyes on UUID -It would be really cool to have a holistic view on that, so that’s another option. +In parallel to that discussed to people in mozilla and there have been eyes on UUID It would be really cool to have a holistic view on that, so that’s another option. -SYG: for UUID item it could be expended in scope -Because there is a desire to move it to another venue, but to talk about the crypto space in general, which I think BCE would be very open to discussing as well, plus perhaps another item discussing our approach to security in general. Given the light list of proposals I would accept both for folks who have an interest in the security model of js for which we as a committee want to design our language around. I could see that being very contentious, so it would be good to have a lot of high-bandwidth discussion before coming to committee. +SYG: for UUID item it could be expended in scope Because there is a desire to move it to another venue, but to talk about the crypto space in general, which I think BCE would be very open to discussing as well, plus perhaps another item discussing our approach to security in general. Given the light list of proposals I would accept both for folks who have an interest in the security model of js for which we as a committee want to design our language around. I could see that being very contentious, so it would be good to have a lot of high-bandwidth discussion before coming to committee. YSV: We might want to limit it more but that’s all I’ll say. @@ -703,6 +679,7 @@ YSV: Aware of that, will respond in chat RBU: we would want to bring up deep path properties and specifically how it interacts with objects SYG: Ah yes, of course. Thank you for bringing that up, I completely forgot, in fact I talked to some folks and that was on the original list. So to recap: + - UUID (cut off) @@ -716,16 +693,18 @@ SYG: Sure? I don’t have anything against that. The original intent of the call DE: I agree with that prioritization. -LEO: As SYG said, it applies for any discussion if anything already have a regular meeting then it is not needed to go in incubator calls since they can’t really benefit from the incubation more than smaller proposals. __Note: I made this horrible choice of calling "smaller proposals". I don't mean smaller, but any proposals without frequent meetings that need to solve specific challenges before stage advancement.__ +LEO: As SYG said, it applies for any discussion if anything already have a regular meeting then it is not needed to go in incubator calls since they can’t really benefit from the incubation more than smaller proposals. **Note: I made this horrible choice of calling "smaller proposals". I don't mean smaller, but any proposals without frequent meetings that need to solve specific challenges before stage advancement.** ### Conclusion / Resolution SYG: Thanks. To sum up, the five topics we have identified are UUID, security at large, generic comparison, deep path properties, and Temporal. If you are a stakeholder in any of these topics, please look out for Reflector threads for these issues including the scheduling and video calls. The scheduling is done ad-hoc, because constraints vary proposal to proposal, and we want to accommodate timezone needs of champions and stakeholders. 
+ ## (Continuation) Module attributes for Stage 2 + Presenter: Daniel Ehrenberg (DE); Sven Sauleau, Myles Borins, Dan Clark -* [proposal](https://github.com/tc39/proposal-module-attributes) -* [slides](https://docs.google.com/presentation/d/1MOVBh0gw7-tqEx-maEvS2HsgwXd5X5pcwL80V67xCIg/edit#slide=id.g8634fc5940_28_0) +- [proposal](https://github.com/tc39/proposal-module-attributes) +- [slides](https://docs.google.com/presentation/d/1MOVBh0gw7-tqEx-maEvS2HsgwXd5X5pcwL80V67xCIg/edit#slide=id.g8634fc5940_28_0) [PR](https://github.com/tc39/proposal-module-attributes/pull/66) DE: (presents slides) @@ -741,14 +720,14 @@ AKI: That seems like consensus to me. Congratulations on stage 2 and on compromi DE: Thank you. ### Conclusion/Resolution -* Reached consensus for Stage 2. - +- Reached consensus for Stage 2. ## Editorial Direction + Presenter: Shu-yu Guo (SYG) -* [slides](https://docs.google.com/presentation/d/14NsIoRhr-z7HvRG0laq_F2c4iNPHF-Ld17-Yibshdo0/edit?usp=sharing) +- [slides](https://docs.google.com/presentation/d/14NsIoRhr-z7HvRG0laq_F2c4iNPHF-Ld17-Yibshdo0/edit?usp=sharing) SYG: (presents slides up to “Normatively, they all mean the same thing” slide and asks if this is contentious) @@ -792,7 +771,7 @@ SYG: It is useful to the readers of the spec to see, for example, job scheduling We conflate what is a host and what is an implementation. A host to the HTML folks is what is specified by HTML whereas an implementation is a particular browser. It would be good to record that intention. -WH: I am concerned about recording that implementation-defined is too much of a catch-all. +WH: I am concerned about recording that implementation-defined is too much of a catch-all. Implementation-defined means that it has a number of options that are equally good. @@ -816,7 +795,6 @@ WH: Yes. SYG: I take your point is that OOMs are observable, therefore would all points be implementation-defined? - WH: Yes, if implementation-defined were the only choice of wording. SYG: Currently we say nothing. @@ -833,7 +811,7 @@ SYG: Completely agree. DE: I think this is a great clarification. I think it is useful for layering with HTML, which benefits us because it is a part of many TC39 proposals, how it makes it to many JS users. The idea of host hooks makes things clearer both for the web and for other places where JavaScript is used, engines often have APIs that don’t correspond exactly to host hooks, but there is often some kind of layering that relies on the spec. We have a coherent thing we are looking at. Separating host hooks from implementation-defined things solidifies that a bit. It’s a net positive for our definition of the language. Even though this is editorial it is a significant clarification. I want to thank the editorial group. -CM: I think the distinction you are calling out is useful in clarifying, simply saying we are going to be more explicit is a good thing. You gave the example of the embedded people hypothetically wanting to nail down something that would cause something to change from being impl defined to host defined. I want to make sure that it is not regarded as purely an editorial choice, it would be a normative change that should run through TC39. +CM: I think the distinction you are calling out is useful in clarifying, simply saying we are going to be more explicit is a good thing. You gave the example of the embedded people hypothetically wanting to nail down something that would cause something to change from being impl defined to host defined. 
I want to make sure that it is not regarded as purely an editorial choice, it would be a normative change that should run through TC39. SYG: My question to you, CM, is to say that it’s a normative change, my understanding of a normative change is that it does not change the behaviour. I’m not sure how changing something from implementation-defined to host-defined would change the behaviour. @@ -845,8 +823,6 @@ CM: This is a case where the committee as a whole is deferring to the judgement SYG: OK, noted. - - KM: What qualifies as a host here? If I have like “Keith’s dope spec” and I come to TC39 and I want it to be host defined, is that sufficient? SYG: At least CM feels strongly that calling something host vs. implementation-defined changes the intent enough that it should be brought to committee deliberation anyway. @@ -863,14 +839,13 @@ KM: I see. Is there plan or record for when something requests such a change, I SYG: I wanted to leave that to the editor group to make a judgement, and that is a case by case basis. It doesn't sound like we have agreement for that, though. I guess you’re asking what is required to add a host hook [??] to a place that is currently implementation defined - YSV: make sure that I understand fully where we’re going here. The goal here will be to make it clear what parts will be further detailed by ECMA 262 spec and which ones are going to be sort of static, I understood from the issue. Like the things that are implementation-defined would eventually change from the TC39-defined implementation. Did I understand that correctly? SYG: I don’t understand the question. Both host & impl defined stuff will both be deliberated upon within TC39. -YSV: Yes we would still decide which parts are going to be host defined and impl defined, but if I understood impl defined specifically means if someone from HTML sees the spec and they see “host” defined they would be able to say this is something that I can understand as “our” area and things that are specific might change like array.sort +YSV: Yes we would still decide which parts are going to be host defined and impl defined, but if I understood impl defined specifically means if someone from HTML sees the spec and they see “host” defined they would be able to say this is something that I can understand as “our” area and things that are specific might change like array.sort SYG: That’s the intention, yes. @@ -883,7 +858,7 @@ JHD: It sounds like WH, that you’re not concerned about differentiating betwee WH: I also have the same concerns about implementation-defined vs. host-defined that were stated by other people, so I am not going to repeat those. The implementation-defined vs. host-defined distinction is unclear for some cases. My main point here is there’s a big difference between implementation-defined and implementation-dependent. -JHD: So could we call a third answer “implementation-approximated”? +JHD: So could we call a third answer “implementation-approximated”? WH: We could. I’m also not saying all existing usages of implementation-dependent are correct. A lot of international stuff falls into that category for example, which may be better written as implementation-defined or host-defined. @@ -915,9 +890,9 @@ WH: “Thing we were going to do” is not taking into account the distinction b KG: Can I make a proposal? The main thing the Editors want is clarity about when each of the terms are used and the list of the terms to use. 
I would be happy enough to write down the definition WH used, that -* “Implementation-defined” means that the spec does not have a notion of the objectively best behavior and implementations are free to choose within whatever constraints the spec puts on them without preference between them. -* “Implementation-dependent” means there is a best possible behaviour that implementations should strive for as best they can, but there's no normative requirement on how much they should strive. -* “Host-defined” is what Shu has in the slides and would, to Chip's point, only change with discussion in plenary. +- “Implementation-defined” means that the spec does not have a notion of the objectively best behavior and implementations are free to choose within whatever constraints the spec puts on them without preference between them. +- “Implementation-dependent” means there is a best possible behaviour that implementations should strive for as best they can, but there's no normative requirement on how much they should strive. +- “Host-defined” is what Shu has in the slides and would, to Chip's point, only change with discussion in plenary. Editors would fix up the spec to ensure the terms are used consistent with those definitions. I would be happy with that outcome since that gives us a way to proceed on this kind of question. diff --git a/meetings/2020-07/july-20.md b/meetings/2020-07/july-20.md index 62429729..36d29885 100644 --- a/meetings/2020-07/july-20.md +++ b/meetings/2020-07/july-20.md @@ -1,9 +1,10 @@ # July 20, 2020 Meeting Notes + ----- -**In-person attendees:** +**In-person attendees:** -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Yulia Startsev | YSV | Mozilla | @@ -38,24 +39,28 @@ | Mary Marchini | MAR | Netflix | | Rob Palmer | RPR | Bloomberg | - ## Adoption of the Agenda + No objections to adopting the agenda as proposed. ## Approval of the minutes from last meeting + No objections to approving the minutes from the previous meeting. ## Next meeting host and logistics + BT: The next meeting will be remote. More to come. ## Secretary's Report + Presenter: Istvan Sebestyen (IS) - [Slides](https://github.com/tc39/agendas/blob/master/2020/tc39-2020-036.pdf) -IS: (presents slides) ECMAScript 2020 has been approved by the Ecma GA on June 16, 2020. Many, many thanks to everybody in TC39 who has made this possible. Otherwise the GA was via conference call, very short. For TC39 relevant: to elaborate a liaison Agreement with CalConnect was encouraged by the GA. +IS: (presents slides) ECMAScript 2020 has been approved by the Ecma GA on June 16, 2020. Many, many thanks to everybody in TC39 who has made this possible. Otherwise the GA was via conference call, very short. For TC39 relevant: to elaborate a liaison Agreement with CalConnect was encouraged by the GA. ## ECMA262 Update + Presenter: Jordan Harband (JHD) - [Slides](https://docs.google.com/presentation/d/1O8wGWehzMhqb_Jz2JfmyUxyUepxanc8sEVMlTRUVnfo) @@ -63,6 +68,7 @@ Presenter: Jordan Harband (JHD) JHD: (presents slides) ## ECMA402 Update + Presenter: Shane F. Carr (SFC) - [Slides](https://docs.google.com/presentation/d/1C54jVjcuE27wq658CbMi0KEfa5ded_WIWvdq1JP8QDI) @@ -73,7 +79,7 @@ WH: Does #471 affect output or only avoid errors? I looked at it, it’s full of SFC: It only affects the RangeError, there is no change to the rounding behaviour. 
The current problem is that if you have this code, this throws a range error because the currency EUR requires two fraction digits, but you set maximumFractionDigits to 0, and that’s a conflict. However, if your currency had been JPY, for example, then there wouldn't have been a problem since the maximumFractionDigits is already 0 for that currency. It doesn’t fix any other behavior other than eliminating this RangeError. maximumFractionDigits will win out and override the currency. The current workaround is to specify both minimum and maximum at the same time, but that’s undesirable. -``` +```js new Intl.NumberFormat("en", { style: "currency", currency: "EUR", maximumFractionDigits: 0 }); ``` @@ -92,33 +98,26 @@ SFC: I’ll take that as consensus. And we’ll get consensus on 471 when it com Consensus on PR 459. ## Test262 status update + Presenter: Rick Waldron (RW), Leo Balter (LEO) - [slides](https://docs.google.com/presentation/d/1tsqTUZioHi8YxRF_CapxcZTjZQYDClEgXOFUm0W4gHg) RW, LEO: (presents slides) -### Updates by the numbers, since last TC39 Meeting... +### Updates by the numbers, since last TC39 Meeting -113 new test files -92 commits -50 Merged PRs -26 Closed issues +113 new test files 92 commits 50 Merged PRs 26 Closed issues ### General Outstanding Updates -Coverage improvement for Atomics, e.g. relaxation and waitAsync -Coverage improvement for Promise functions -Intl on the fast track -`eval?.()` w/ further discussions in this TC39 meeting -Improvements to IsHTMLDDA -Improvements to Optional Chaining tests +Coverage improvement for Atomics, e.g. relaxation and waitAsync Coverage improvement for Promise functions Intl on the fast track +`eval?.()` w/ further discussions in this TC39 meeting Improvements to IsHTMLDDA Improvements to Optional Chaining tests ### Meta: Renaming master branch to main The etymology of "master" in git branch name conventions has been traced to the master/slave metaphor. -Tracker issue: https://github.com/tc39/test262/issues/2699 -Current status: Test262 default's branch is now `main` +Tracker issue: https://github.com/tc39/test262/issues/2699 Current status: Test262 default's branch is now `main` ### Extras @@ -140,9 +139,11 @@ CM: And that’s it! BT: Okay, thank you for the update on ECMA404. ## Update from the Code of Conduct Committee + JHD: There was a tense discussion on the pipeline repo which Aki was able to step in and moderate. Otherwise, it's been uneventful. ## Retroactive consensus on Unicode 13 property names and aliases (#1896, #1939) + Presenter: Michael Ficarra (MF) - [PR #1896](https://github.com/tc39/ecma262/pull/1896#issuecomment-642301441) @@ -165,6 +166,7 @@ MF: Okay, great, thank you. BT: Consensus that these two PRs stand as merged. ## Specify \8 and \9 in sloppy (non-template) strings (#2054) + Presenter: Ross Kirsling (RKG) - [PR](https://github.com/tc39/ecma262/pull/2054) @@ -188,6 +190,7 @@ BT: Okay, any objections to merging this PR? BT: This PR is good to go. ## Adding Reflect[Symbol.toStringTag] (#2057) + Presenter: Jordan Harband (JHD) - [PR](https://github.com/tc39/ecma262/pull/2057) @@ -211,6 +214,7 @@ BT: Any concerns? BT: Sounds like consensus. ## Should eval?.() be direct eval? 
(#2062, #2063) + Presenter: Ross Kirsling (RKG) - [Issue #2062](https://github.com/tc39/ecma262/issues/2062) @@ -218,11 +222,11 @@ Presenter: Ross Kirsling (RKG) RKG: (presents PR) -MM: First, let me clarify that with regard to the specific syntax on the table, I don’t care, in the sense that I don’t think that people are going to be accidentally writing this, so the hazards of having it mean the wrong thing is not a big issue. But the reason why I’m going to spend airtime debating is because of the issue of taking this as a precedent for a general policy, in particular what the consideration should be, and whether or not it should be a syntax error. +MM: First, let me clarify that with regard to the specific syntax on the table, I don’t care, in the sense that I don’t think that people are going to be accidentally writing this, so the hazards of having it mean the wrong thing is not a big issue. But the reason why I’m going to spend airtime debating is because of the issue of taking this as a precedent for a general policy, in particular what the consideration should be, and whether or not it should be a syntax error. -MM: One of the criteria that we have applied and should continue to apply is what I’m going to call the hierarchy of painful surprises. If there’s a programming construct that some people who have never seen it before will assume means one thing, and some people who have never seen it before will assume means something else, then whichever one of those choices we make, some part of that population will be surprised. With something like optional chaining, it’s not a rare construct, and the thing that has a potential surprise has tremendous utility. So the cost of making it a syntax error is incredibly high. Once it becomes available in the language, it will be used a lot, and those who use the language will rapidly become familiar with it. +MM: One of the criteria that we have applied and should continue to apply is what I’m going to call the hierarchy of painful surprises. If there’s a programming construct that some people who have never seen it before will assume means one thing, and some people who have never seen it before will assume means something else, then whichever one of those choices we make, some part of that population will be surprised. With something like optional chaining, it’s not a rare construct, and the thing that has a potential surprise has tremendous utility. So the cost of making it a syntax error is incredibly high. Once it becomes available in the language, it will be used a lot, and those who use the language will rapidly become familiar with it. -MM: With -2**3, the issue for me was not that the person might make different decisions based on whitespace. The issue was that the surprise in both directions would lead to programs silently proceeding with wrong data, leading to painful runtime surprises possibly after code is deployed. The utility of allowing it (i.e. the cost of disallowing it with a syntax error) was very small. The extra cost of having to put it into parentheses to disambiguate it was very small and it avoids anyone having to face the cognitive overhead of remembering which of these two choices to take. The eval?. is not going to be common, it’s going to be very rare, when you’re reading someone else’s code you’ve probably never seen it before, so the reason I don’t care is that you’re not going to assume what it means either way. 
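**Editor's note:** a small illustration of the semantic question at issue — whether `eval?.()` behaves like direct or indirect eval. The resolution recorded below keeps the existing spec behaviour (indirect).

```js
globalThis.x = "global";

function f() {
  var x = "local";
  const direct = eval("x");        // "local"  — direct eval sees the local scope
  const indirect = (0, eval)("x"); // "global" — indirect eval evaluates in the global scope
  const optional = eval?.("x");    // "global" — `eval?.()` is not a direct eval per the
                                   // spec text discussed here
  return [direct, indirect, optional];
}
console.log(f()); // ["local", "global", "global"]
```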
+MM: With -2**3, the issue for me was not that the person might make different decisions based on whitespace. The issue was that the surprise in both directions would lead to programs silently proceeding with wrong data, leading to painful runtime surprises possibly after code is deployed. The utility of allowing it (i.e. the cost of disallowing it with a syntax error) was very small. The extra cost of having to put it into parentheses to disambiguate it was very small and it avoids anyone having to face the cognitive overhead of remembering which of these two choices to take. The eval?. is not going to be common, it’s going to be very rare, when you’re reading someone else’s code you’ve probably never seen it before, so the reason I don’t care is that you’re not going to assume what it means either way. BT: Point of order, there are 2 minutes left in this timebox. @@ -261,6 +265,7 @@ RKG: Okay. Keep spec as it is (indirect eval). ## Forbid Numeric Separators in NonOctalDecimalIntegerLiteral fractional / exponent parts + Presenter: Leo Balter (LEO) - [Proposal issue](https://github.com/tc39/proposal-numeric-separator/issues/49) @@ -300,7 +305,6 @@ LEO: Yes. WH: What are we agreeing to here? We’re only discussing LegacyNonOctals, not numeric separators. So what is being proposed? - LEO: I’m asking to not block numeric separators from advancing to stage 4, because a change would ??? and I’m not really a big fan of that. Also we can tackle a follow-up for a better solution to disallowing exponential parts - perhaps fractional parts as well - from NonOctals, which I believe is also what WH and SYG expressed as what they would want. In this case we would not try to fix numerical separators on Non-octals and ??? and in this case the issue should be closed WH: I’m fine with that. @@ -312,6 +316,7 @@ RPR: Sounds like consensus. LEO: PR will be closed, numeric separators will be unblocked on this matter. ## Cognitive Dimensions of Notation: a framework for reflecting on language design + Presenters: Yulia Startsev (YSV), Dr Felienne Hermans (FHS) - [slides](https://docs.google.com/presentation/d/1OpKfS5UYgcwmBuejoSOBpbgsYXXzO0gG7GJHo65UXPE) @@ -356,29 +361,31 @@ FHS: I don't think I'd say that it's purely about syntax. Syntax, or even a diag SYG: This would help us have a shared language with the PL community. But coming from ACM on programming languages, I am not familiar with this framework of languages. Which PL community were you referring to? -FHS: It’s not necessarily the PL community as in PoPL, it’s more at PPIG (Psychology of Programming Interest Group) for example. This is one of the conferences where much of the work in this area has been appearing. But also for example, the international conference for program comprehension that looks at existing comprehension/codebases (...???). So maybe it's less PoPL side of programming and more ??? that I'm referring to. I do think it's sad that the PoPL crowd doesn't embrace this yet. +FHS: It’s not necessarily the PL community as in PoPL, it’s more at PPIG (Psychology of Programming Interest Group) for example. This is one of the conferences where much of the work in this area has been appearing. But also for example, the international conference for program comprehension that looks at existing comprehension/codebases (...???). So maybe it's less PoPL side of programming and more ??? that I'm referring to. I do think it's sad that the PoPL crowd doesn't embrace this yet. -SYG: I'm unfamiliar with that side of PL academia. 
I'm more familiar with POPL, PLDI, etc. Are there papers from the PPIG side for example, that have had impacts on academic prototypes, industry prototypes, etc, where they put this framework to work and build things out and see how it goes? If so I’d love to hear more about that. +SYG: I'm unfamiliar with that side of PL academia. I'm more familiar with POPL, PLDI, etc. Are there papers from the PPIG side for example, that have had impacts on academic prototypes, industry prototypes, etc, where they put this framework to work and build things out and see how it goes? If so I’d love to hear more about that. -FHS: Some of my work, which I showed here, designing an alternative interface for spreadsheet formulas, is being picked up by Microsoft Excel. Also lots of the evolution papers might not necessarily impact the design of a programming language, but they might impact the design of an API, or the design of how an IDE works to help with understanding. Because if a language has low visibility, then an IDE might help increase that. (???) It's probably because this framework is not so very well known, and also this type of research is not very well-regarded by the technical side of the programming language design community. There's some part of people who publish in PoPL who would say oh, well we just design programming languages, whereas people in software engineering like to look at how people actually use programming languages. +FHS: Some of my work, which I showed here, designing an alternative interface for spreadsheet formulas, is being picked up by Microsoft Excel. Also lots of the evolution papers might not necessarily impact the design of a programming language, but they might impact the design of an API, or the design of how an IDE works to help with understanding. Because if a language has low visibility, then an IDE might help increase that. (???) It's probably because this framework is not so very well known, and also this type of research is not very well-regarded by the technical side of the programming language design community. There's some part of people who publish in PoPL who would say oh, well we just design programming languages, whereas people in software engineering like to look at how people actually use programming languages. CM: First of all, very interesting talk. Certainly TC39 has no shortage of peculiar local jargon, though I’m not sure any of the examples you gave are actually examples of that, because most of those are importations from outside TC39. I think there's an anthropology dimension: someone will take things from another group they're a member of, and it will spread like contagion. And I think tracing the patterns of those might be interesting. But the thing I wanted to focus on was error-prone-ness. I think it's a slippery idea. A lot of language design features, you might think of them as speed bumps, because they increase the probability that you'll make certain kinds of errors that are easier to deal with, and decrease the probability that you'll make certain kinds of errors that occur later down the road and are more difficult to deal with. A lot of the programming tools and IDEs and things we've been developing have been designed to try to front-load things, so you tend to have more errors earlier but they’re simpler and easier to deal with. So I think error-prone-ness is way too slippery to be tossed off as a dimension in the way you have. FHS: Yeah, so I can be brief. 
I think the only thing I can say there is that error proneness can be clearer than things we use now like “foot-gun” which has the same slipperiness. -CM: Footgun is… it has an extended penumbra of cultural meaning. But that refers to something where it's not only easy to make a mistake, but it leads you down a garden path to making decisions where you'll be sorry later. +CM: Footgun is… it has an extended penumbra of cultural meaning. But that refers to something where it's not only easy to make a mistake, but it leads you down a garden path to making decisions where you'll be sorry later. YSV: We’re focusing very much on specific dimensions, and I think one thing that was very interesting when I started looking at the framework is that many communities develop their own dimensions as they go. But the framework has three parts: there’s the dimension, there’s the user, and there’s the activity. And having a list of activities that interact in this space. And not all dimensions interact in all spaces equally or in the same way. And for me what was really interesting was the emphasis that this framework puts on how people experience a programming language. Specifically, when we design, we design for its use, and it's being used by people. The emphasis in this framework on the user, who the user is, and what they're doing would be useful when we are evaluating and designing and asking questions. I see the framework as a starting point for asking questions, like how is this affecting people, and who is it affecting, and what can we find out about those things. FHS: For anyone remaining on the queue, please reach out through the discourse page (link?), IRC, email, or the August 27 call (details?). -### Remaining queue: +### Remaining queue + 1. New Topic: This will lead to time-consuming taxonomical debates. (WH) 2. New Topic: These words seem more precise and easier to learn than our current jargon. I support gradually adopting these terms. (DE) 3. New Topic: While this introduces a more formal framework to discuss language constructs, the impact is still subjective. (MLS) ## Class static blocks for Stage 2 + Presenter: Ron Buckton (RBN) - [proposal](https://tc39.es/proposal-class-static-block/) @@ -387,7 +394,8 @@ Presenter: Ron Buckton (RBN) KG: About private declarations in particular, I think that most of the examples with private declarations, I feel like those would be nicer using private declarations outside of classes - that proposal. In particular - RBN: This example? (link) -``` + +```js private #x; class C { outer #x = 1; } class D { constructor(c) { c.#x; } } @@ -421,7 +429,7 @@ RBN: No contention about that at all. I agree that’s the case. MM: Oh! Okay then I misunderstood something. -RBN: That's how it currently works. I'm trying to say the class will be TDZ outside the class definition until the class is fully initialized. Which should align with how static fields are initialized. +RBN: That's how it currently works. I'm trying to say the class will be TDZ outside the class definition until the class is fully initialized. Which should align with how static fields are initialized. MM: Great. I very much support this for stage 2. 
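A rough sketch of the construct as proposed (evaluation-order details were still being settled at this point); the member names are illustrative:

```js
class C {
  static #config = { answer: 42 };
  static answer;
  static {
    // Runs once when the class definition is evaluated; `this` is the class
    // constructor, and private static members are in scope here.
    this.answer = this.#config.answer;
  }
}
```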
@@ -431,8 +439,7 @@ RBN: My question would be, how would you imagine that `var` would work in that c SYG: Yes -RBN: I’ll be honest, if I could’ve written this as `static(){`, then I would have, but that's already legal JS -Because it has special semantics around how `this` works, because otherwise who knows what `this` would be referring to, and then we have the complexity around what happens with return, await, yield. I saw on IRC that there’s some discussion about the class access expressions, that one, I may eventually bring back, but that one had some strange things around evaluation, since its main motivation was around private static and subclassing. +RBN: I’ll be honest, if I could’ve written this as `static(){`, then I would have, but that's already legal JS Because it has special semantics around how `this` works, because otherwise who knows what `this` would be referring to, and then we have the complexity around what happens with return, await, yield. I saw on IRC that there’s some discussion about the class access expressions, that one, I may eventually bring back, but that one had some strange things around evaluation, since its main motivation was around private static and subclassing. So again the main reason that it’s essentially treated kind of like a function but I can’t put parens in because it would conflict with existing syntax, and as such vars would not be hoisted out. SYG: So you really want this to be thought of as a static constructor, not a static block. @@ -459,6 +466,7 @@ DE: I think this is a really great proposal. I’m definitely sympathetic to YSV Not advancing at this meeting. ## Host hooks for Job callbacks + Presenter: Shu-yu Guo (SYG) - [Slides](https://docs.google.com/presentation/d/19S97ZqhibJABqzeP5ZU6Flk6TVgWzXuvJWFbNTTfpWs) @@ -512,16 +520,16 @@ YSV: It's been a bit mischaracterized as Firefox being the only browser doing th SYG: That's correct. If there's no JavaScript on the stack then we are aligned. (???) - - SYG: The conclusion is that I will chat with BFS about the polyfill point and will talk with MM about dynamic import but I don’t really know what conclusion to get there MM: The issue is understanding what dynamic import does according to the EcmaScript spec, because if it implied dynamic scoping, the language would be badly broken in ways we never intended. ### Conclusion / Resolution + - Meet together offline to continue discussing the issue ## Handle awkward rounding behaviour + Presenter: Ujjwal Sharma (USA) - [PR](https://github.com/tc39/ecma402/pull/471) @@ -529,8 +537,7 @@ Presenter: Ujjwal Sharma (USA) USA: (presents slides) -SFC: I'm really happy with the work USA has been doing on this and -I think it definitely fixes a bug. It’s never good when the same code works in some locales and not in others - this code can throw an exception in some locales and not in others. So I’m very happy with the work USA has been doing in this PR. +SFC: I'm really happy with the work USA has been doing on this and I think it definitely fixes a bug. It’s never good when the same code works in some locales and not in others - this code can throw an exception in some locales and not in others. So I’m very happy with the work USA has been doing in this PR. USA: Thank you SFC. I believe I can ask for consensus pending the remaining editorial changes and the test262 PR. @@ -539,4 +546,5 @@ USA: Thank you SFC. I believe I can ask for consensus pending the remaining edit RPR: Congratulations, you have consensus. 
### Conclusion/Resolution + - Consensus diff --git a/meetings/2020-07/july-21.md b/meetings/2020-07/july-21.md index 8527801a..eac6c85e 100644 --- a/meetings/2020-07/july-21.md +++ b/meetings/2020-07/july-21.md @@ -1,4 +1,5 @@ # July 21, 2020 Meeting Notes + ----- **In-person attendees:** @@ -40,6 +41,7 @@ None :( | Rob Palmer | RPR | Bloomberg | ## Promise.any & AggregateError for stage 4 + Presenter: Mathias Bynens (MB) - [proposal](https://github.com/tc39/proposal-promise-any) @@ -53,9 +55,13 @@ MB: Any objections to stage 4? [silence] RPR: You have consensus, congratulations. + ### Conclusion/Resolution + - Stage 4! + ## Strictness check for object's SetMutableBinding + Presenter: Leo Balter (LEO) - [PR](https://github.com/tc39/ecma262/pull/2094) @@ -104,7 +110,7 @@ KG: Safari might not. SYG: Ok. -KG: But it would be - the `has` trap would be triggered at least twice, once in the initial lookup and once in the assignment. Yes. I should also point out that there are a lot of cases where the reference type, which is sort of what’s involved in this, doesn’t match engine behavior. It’s one of the oldest web reality issues on 262, and this fixes it in one particular case. So yes, engines would have work to do, but engines already have a bunch of work to do if they want to be correct about all the edge cases for references. +KG: But it would be - the `has` trap would be triggered at least twice, once in the initial lookup and once in the assignment. Yes. I should also point out that there are a lot of cases where the reference type, which is sort of what’s involved in this, doesn’t match engine behavior. It’s one of the oldest web reality issues on 262, and this fixes it in one particular case. So yes, engines would have work to do, but engines already have a bunch of work to do if they want to be correct about all the edge cases for references. SYG: To confirm again, this is for the object environment records, which are only `with` scopes and not global scopes? @@ -151,9 +157,13 @@ LEO: Any objections? I am still asking for consensus. LEO: I believe this is consensus. RPR: Yes, consensus on this PR. + ### Conclusion/resolution + - Consensus on the PR. + ## Intl.ListFormat for Stage 4 + Presenter: Zibi Braniecki (ZB) - [proposal](https://github.com/tc39/proposal-intl-list-format) @@ -166,9 +176,13 @@ ZB: Stage 4? [silence] RPR: You have stage 4. + ### Conclusion/resolution + - Stage 4! + ## Intl.DateTimeFormat dateStyle/timeStyle for Stage 4 + Presenter: Zibi Braniecki (ZB) - [proposal](https://github.com/tc39/proposal-intl-datetime-style) @@ -201,9 +215,11 @@ RPR: Any objections to Stage 4? RPR: Congratulations, you have stage 4. ### Conclusion/resolution + - Stage 4! ## Fix Function.toString for builtins + Presenters: Gus Caplan (GCL), Jordan Harband: (JHD) - [PR](https://github.com/tc39/ecma262/pull/1948) @@ -274,7 +290,7 @@ KM: there’s definitely a web compatibility risk, I agree. But it seems like th GCL: I agree. -SYG: Relaying a point from a colleague: if we get rid of the get and set keywords now, the higher level question is - who are we trying to serve with "toString"? If it's round-tripping through an "eval" that's not possible anyway. If it's diagnostics, ??? If you get rid of `get ` and `set`, I imagine it’s common for getters and setters to share the same name, so if you print it out and there's no `get` or `set` in the name, how do you know which is which? I think that’s an argument for whatever we do here should probably keep `get` or `set` in the name. 
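For concreteness, a sketch of the kind of output being discussed; the exact NativeFunction source text is implementation-defined, so the string shown is only an example:

```js
const { get } = Object.getOwnPropertyDescriptor(Map.prototype, "size");
get.toString();
// e.g. "function get size() { [native code] }"
// Keeping `get`/`set` in the text is what distinguishes a native getter from a
// setter sharing the same name, which is the point raised above.
```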
+SYG: Relaying a point from a colleague: if we get rid of the get and set keywords now, the higher level question is - who are we trying to serve with "toString"? If it's round-tripping through an "eval" that's not possible anyway. If it's diagnostics, ??? If you get rid of `get` and `set`, I imagine it’s common for getters and setters to share the same name, so if you print it out and there's no `get` or `set` in the name, how do you know which is which? I think that’s an argument for whatever we do here should probably keep `get` or `set` in the name. GCL: I’d be fine with consensus on removing `function` or even keeping `get` as it is right now, I just don’t know what the effects of that would be. @@ -358,6 +374,7 @@ RPR: Are you happy with that? GCL: Yeah. ## WeakRefs for Stage 4 / CleanupSome for Stage 2/3 + Presenters: Daniel Ehrenberg (DE), Yulia Startsev (YSV) - [WeakRefs proposal](https://github.com/tc39/proposal-weakrefs/) @@ -438,10 +455,12 @@ AKI: Consensus? AKI: I’m going to call that yes. ### Conclusion/Resolution + - Stage 2 for cleanupSome - Stage 4 for WeakRefs + FinalizationRegistry ## Logical Assignment for Stage 4 + Presenter: Justin Ridgewell (JRL) - [proposal](https://github.com/tc39/proposal-logical-assignment) @@ -469,9 +488,11 @@ AKI: Consensus for stage 4? AKI: Congratulations! Another one! Yay! ### Conclusion/Resolution + - Stage 4! ## Decorators status update + Presenter: Kristen Hewell Garrett (KHG) - [proposal](https://github.com/tc39/proposal-decorators/) @@ -520,6 +541,7 @@ DE: A lot of people are talking about tooling solutions, as KHG mentioned in the KHG: DE summed it up very well and we’d like to continue in that direction and the group will continue to work that way ## NumericLiteralSeparator for Stage 4 + Presenter: Rick Waldron (RW) - [proposal](https://github.com/tc39/proposal-numeric-separator) @@ -559,9 +581,11 @@ RW: So do I have stage 4? RPR: Congratulations, you have stage 4. ### Conclusion / Resolution + - Stage 4! ## Slice notation for Stage 2 + Presenter: Sathya Gunasekaran (SGN) - [proposal](https://github.com/tc39/proposal-slice-notation) @@ -620,7 +644,7 @@ And those are likely to be different sets. And then I see that we’re discussin LEO: (queue reply: This improves Developer Experience) -DRR: To further WH’s point, the fact that slice already exists as a method and has the semantics I intend means that this is a nice-to-have for me. With this syntax, when I show it to people, they’re like “oh cool, I can get the last element of an array”, but then it was weird, because everyone would realize that, oh wait, I still have to re-index with 0 to get the last element. Maybe this leads into SYG’s topic, but the fact that you can’t do a negative index on these things makes it a little confusing. I know that there is a related proposal that we'll discuss about later, but I think that that is something to consider. +DRR: To further WH’s point, the fact that slice already exists as a method and has the semantics I intend means that this is a nice-to-have for me. With this syntax, when I show it to people, they’re like “oh cool, I can get the last element of an array”, but then it was weird, because everyone would realize that, oh wait, I still have to re-index with 0 to get the last element. Maybe this leads into SYG’s topic, but the fact that you can’t do a negative index on these things makes it a little confusing. I know that there is a related proposal that we'll discuss about later, but I think that that is something to consider. 
SGN: Can you repeat more concisely the problem? @@ -643,10 +667,12 @@ WH: I and YSV were not the only people who expressed concerns. There were others SGN: I will talk with the others as well. If anyone else has strong opinions, please contact me. ### Conclusion / Resolution + - Remains at Stage 1 - SGN to follow up with people who have concerns ## Temporal stage 2 update + Presenter: Philip Chimento (PFC) - [proposal](https://tc39.es/proposal-temporal/) @@ -659,6 +685,7 @@ YSV: Thanks, one thing I’d like to suggest is to translate the survey so that PFC: That's a really great idea, thanks. ## Import Conditions for Stage 3 + Presenters: Daniel Ehrenberg (DE), Sven Sauleau (SSA), Dan Clark (DDC), Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/proposal-import-conditions) @@ -803,13 +830,10 @@ DE: Okay, thank you very much. ### Conclusion / Resolution -Land patches for s/if/assert/ and permitting quotes around condition keys -Split JSON modules into a separate Stage 2 proposal -Land weakening of the host constraint, iterating on the wording in cooperation with SYG and BFS -SYG, BFS, and WH to review before Stage 3 -Proposal remains at Stage 2 +Land patches for s/if/assert/ and permitting quotes around condition keys Split JSON modules into a separate Stage 2 proposal Land weakening of the host constraint, iterating on the wording in cooperation with SYG and BFS SYG, BFS, and WH to review before Stage 3 Proposal remains at Stage 2 ## Intl.Segmenter for Stage 3 + Presenter: Richard Gibson (RGN) - [proposal](https://github.com/tc39/proposal-intl-segmenter) @@ -836,9 +860,11 @@ YSV: I’d like to explicitly say I support stage 3 and congratulations. AKI: Congratulations! ### Conclusion/Resolution + - Stage 3! ## Iterator Helpers update + Presenter: Adam Vandolder (AVR) - [proposal](https://github.com/tc39/proposal-iterator-helpers) @@ -865,9 +891,11 @@ AKI: Do we need a third stage 3 reviewer? MF: I think two should be sufficient. ### Conclusion -* RGN and MPC are reviewers + +- RGN and MPC are reviewers ## .item() for Stage 2 + Presenters: Shu-yu Guo (SYG), Tab Atkins (TAB) - [proposal](https://github.com/tabatkins/proposal-item-method) @@ -908,5 +936,6 @@ MLS: These corner cases are for stage 2 right? So before stage 3? JHD: Correct. ### Conclusion/Resolution + - Stage 2! - RGN, KM, LEO to review for stage 3 diff --git a/meetings/2020-07/july-22.md b/meetings/2020-07/july-22.md index dd27309a..e56f25dd 100644 --- a/meetings/2020-07/july-22.md +++ b/meetings/2020-07/july-22.md @@ -1,11 +1,11 @@ # July 22, 2020 Meeting Notes + ----- Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. +**In-person attendees:** -**In-person attendees:** - -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Yulia Startsev | YSV | Mozilla | @@ -25,7 +25,7 @@ Delegates: re-use your existing abbreviations! If you’re a new delegate and do | Daniel Ehrenberg | DE | Igalia | | Nicolò Ribaudo | NRO | Babel - Invited Expert | | Hemanth HM | HHM | PayPal | -| Ben Newman | BN | Meteor/Apollo | +| Ben Newman | BN | Meteor/Apollo | | Jordan Harband | JHD | Invited Expert | | Bradley Farias | BFS | GoDaddy | | Mattijs Hoitink | MHK | Apple | @@ -37,6 +37,7 @@ Delegates: re-use your existing abbreviations! 
If you’re a new delegate and do | Rob Palmer | RPR | Bloomberg | ## Ergonomic brand checks for private fields for stage 3 + (Jordan Harband, JHD) * https://github.com/tc39/proposal-private-fields-in-in/issues/7 @@ -71,10 +72,11 @@ JHX: I don't have the confidence that the current syntax would work without it. BT: We are overtime. Maybe you two can discuss this offline and then update the notes. ### Conclusion/Resolution -* Will be discussed offline +* Will be discussed offline ## Upsert (now renamed emplace) updates ~& for Stage 3~ + Presenter: Bradley Farias (BFS) BFS: (presents slides) @@ -110,7 +112,7 @@ BFS: [interrupts] WH: I don’t want to debate this here. We have a long queue; let’s let others speak. MM: Generally with options bags, the choice to provide parameters with an options bag is driven by a certain expectation of supporting introduction of new options over time. -Code written for later versions that recognise new options would still work in older versions, where these options would just be ignored. I don't think that kind of evolution expectation is an issue with regard to this operation. I also want to draw a hard distinction between option bags and handles. Maybe proxy handlers should have been an option bag with eager sampling, but the key thing there is the `this` binding. The fact that it's always looked up on demand is (???) the handle object is an API. I think it's important to keep those psychologically very different from each other. So I agree with WH that the option bag in this case is overkill. +Code written for later versions that recognise new options would still work in older versions, where these options would just be ignored. I don't think that kind of evolution expectation is an issue with regard to this operation. I also want to draw a hard distinction between option bags and handles. Maybe proxy handlers should have been an option bag with eager sampling, but the key thing there is the `this` binding. The fact that it's always looked up on demand is (???) the handle object is an API. I think it's important to keep those psychologically very different from each other. So I agree with WH that the option bag in this case is overkill. BFS: We have a comment in the repo about using what we call this param, let’s say a “handler”, as the "this" value in order to achieve a specific use case. We didn't initially put it in the spec text, and this is where we got the comment that people wanted it. @@ -183,9 +185,11 @@ BFS: So we have a lot of conflicts about people wanting specific behaviors or no We have to agree either to solve problems with different priorities, or just to abandon it. ### Conclusion/Resolution + Follow up on conflicts on the repo. ## Number.range for Stage 2 + Presenter: Jack Works (JWK) * [proposal](https://github.com/tc39/proposal-Number.range) @@ -242,13 +246,14 @@ BT: We are out of time. SFC: Do we have consensus? WH: No; we still have several items in the queue. There is a completely different issue no one has mentioned yet that I wanted to discuss. + ### Conclusion/Resolution + * No consensus for Stage 2 * Needs more discussion that didn't fit the timebox. - - ## await operations for Stage 1 + Presenter: Jack Works (JWK) * [proposal](https://jack-works.github.io/proposal-await.ops/) @@ -306,10 +311,12 @@ BT: Stage 1? 
[no objections] ### Conclusion/Resolution + * Approved for Stage 1 * Follow up in a possible incubator call ## Array.prototype.unique() proposal for Stage 1 + Presenter: Jack Works (JWK) * [proposal](https://github.com/TechQuery/array-unique-proposal) @@ -358,11 +365,12 @@ BT: JWK is asking for Stage 1. Any objections? [silence] ### Conclusion/Resolution + * Stage 1 * The name might not be web compatible - ## Record and Tuple for Stage 2 + (Robin Ricard (RRD) , Nicolò Ribaudo (NRO) and Rick Button (RBU) present) * [proposal](https://github.com/tc39/proposal-record-tuple) @@ -380,7 +388,7 @@ RRD: Yes WH: I like how you dealt with ±0. It’s very important to not silently alter values stored into records and tuples, and this proposal avoids that. The proposal diverges from existing practice by making a record containing `NaN` equal to itself. While different, I see the rationale for it and I don’t think it will cause significant problems. -WH: I see that element order is significant in the Record equality algorithm and you sort elements by their property names when creating the Record. As long as the property names are strings, you can always sort them, so that works. Do you have any plans for allowing record property names that are something other than strings, like symbols? +WH: I see that element order is significant in the Record equality algorithm and you sort elements by their property names when creating the Record. As long as the property names are strings, you can always sort them, so that works. Do you have any plans for allowing record property names that are something other than strings, like symbols? RRD: Good point. Two things. First, last meeting, it's not in the slides, but you can't use symbols as keys in records. Second thing is, the way records are created when we create those literals is that they're stored as a sorted list of keys. You can create them in any order, and in the spec, we sort the keys before creating the structure. @@ -420,23 +428,23 @@ MM: I don’t feel I should do an example now, but there is a side channel, I wi DE: You have provided the example, but the actual communication is missing. [We can discuss with offline as well] -MM: The example is communication. I agree this is a post-stage-2 concern. +MM: The example is communication. I agree this is a post-stage-2 concern. DRR: Concerns with cognitive overhead of deciding between objects and records. I think as we've look at this, there is a little bit of potential decision fatigue if you have to decide if you're passing objects or records. Maybe that's not as much of a user concern. But if you are using a static type checker, then you need to be able to predict whether or not you're going to pass an object, record, or potentially both. Maybe from the type system perspective, that's not something where you'll have to find a good workaround. But it is something that might end up frustrating users quite a bit. I hope that we can find something there. -RBN: Your concern is valid, that Robin mentioned, but I am interested to know more about TypeScript. RRD said at the top of the presentation that records and tuples are intended to be parallel with objects and arrays in terms of prototypes, methods, etc. I would like more feedback from TypeScript about how this interacts with type systems. +RBN: Your concern is valid, that Robin mentioned, but I am interested to know more about TypeScript. 
RRD said at the top of the presentation that records and tuples are intended to be parallel with objects and arrays in terms of prototypes, methods, etc. I would like more feedback from TypeScript about how this interacts with type systems. RRD: I see a record as a subset of what an object could do (?). We can go over this at another moment. DE: I think RRD and RBN are articulating an interesting hypothesis. We heard from FHS about doing research to investigate these hypotheses. I think that this is a mental model that jibes with people would be tested. I also think we need to investigate type systems. If some type system is interested. -DRR: If we aren’t shipping at stage-3. It is fine to wait to stage-3. We can also collaborate more with the proposal. +DRR: If we aren’t shipping at stage-3. It is fine to wait to stage-3. We can also collaborate more with the proposal. RRD: That's very good for us. DRR: We would entertain a pull request, for example, and then discuss as we have implementation. Either way would work. -SYG: I'd like to say that we will review; V8 is neutral on the implementability of it. We need to do more research here. V8 will research and comment on the implementability before stage 3. I think getting implementer sign-off is important before going to Stage 3. +SYG: I'd like to say that we will review; V8 is neutral on the implementability of it. We need to do more research here. V8 will research and comment on the implementability before stage 3. I think getting implementer sign-off is important before going to Stage 3. DRR: Yes, thank you, we will work with you. @@ -455,11 +463,14 @@ RBU: It would be during implementation phase feedback. WH: Yes, I don’t know if we’ll run into implementation concerns. We will cross that bridge if we run into problems, but I’m hoping we won’t need to cross any further bridges. RBU: Yes, we too. + ### Conclusion/Resolution + * Approved for Stage 2! * Reviewers: BN, SYG, ?? ## Symbols as WeakMap keys for stage 2 + Presenter: Daniel Ehrenberg (DE) * [proposal](https://github.com/tc39/proposal-symbols-as-weakmap-keys) @@ -491,7 +502,7 @@ MM: Moddable is a single-realm JS implementation (for memory overhead reasons). BFS: Looking back at the composite keys proposal, there is a workflow you can use to store the data on objects rather than on the realm. Interested parties should follow up offline w/ me. -YSV: We're now accepting symbols through Symbol.for(). I know MM was a strong proponent for not having permanent entries in the weakmap. I would like to hear what the argument that convinced him on this is, beyond wanting to avoid having long-lived keys in a WeakMap. +YSV: We're now accepting symbols through Symbol.for(). I know MM was a strong proponent for not having permanent entries in the weakmap. I would like to hear what the argument that convinced him on this is, beyond wanting to avoid having long-lived keys in a WeakMap. MM: I don't like registered symbols being weakmap keys, initially against that. The usability of trying to distinguish what can be done w/ a symbol regarding being registered or not causes bad surprises. There is a fundamental cross-Realm memory leak problem here, with a primitive (the registered symbol) that is immune from GC while the weakmap exists. Tradeoff is eating the cost of the memory leak. @@ -517,7 +528,7 @@ WH: A tiny point of order: Link to this spec on the agenda is broken. It produce DE: Yes. We’ve been having problems with diffs. [Gives alternate instructions on how to get to the spec.] 
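For reference, a sketch of the behavior under discussion; today both `set` calls throw a TypeError, and the registered-symbol case is the contentious one described above:

```js
const wm = new WeakMap();

const unique = Symbol("app metadata");
wm.set(unique, { attached: true });      // would be allowed under the proposal

const registered = Symbol.for("shared"); // the contentious case: a registered
wm.set(registered, { attached: true });  // symbol is never collected, so this
                                         // entry lives as long as the map does
```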
-YSV: For me, what's important is *what* we intend to get to Stage 2. Because stage 2 signifies we have a problem that we want to solve, it’s important to know what problem we’re solving here. Need more investigation into problems this would solve. But I don't feel super comfortable saying that this should go forward on its own merit yet. That's why I want to clarify what the motivation is exactly. +YSV: For me, what's important is _what_ we intend to get to Stage 2. Because stage 2 signifies we have a problem that we want to solve, it’s important to know what problem we’re solving here. Need more investigation into problems this would solve. But I don't feel super comfortable saying that this should go forward on its own merit yet. That's why I want to clarify what the motivation is exactly. DE: Other delegates have said we shouldn’t let this advance past R&T. I'm in no rush for this proposal; I'm fine making it dependent on R&T. I believe the R&T proposal stands on its own merit but I am happy to keep these proposals in lock-step. @@ -530,7 +541,9 @@ RGN: Records and Tuples definitely can move independently of symbols-in-weakmaps DE: Can you elaborate on why? BT: No, because we are out of time.= + ### Conclusion/Resolution + * No Stage 2 advancement * Work to make the motivation more clear * Follow up more on use cases / motivation on the proposal repository @@ -540,8 +553,7 @@ BT: No, because we are out of time.= * [proposal](https://tc39.es/proposal-json-parse-with-source/) * [slides](https://docs.google.com/presentation/d/1MGJhUvrWl4dE4otjUm8jXDrhaZLh9g7dnasnfK-VyZg/edit?usp=sharing) -RG: presents slides -MM: I don't understand the motivation for the serialization slide, the enhanced replacer. If there is a reason to allow it to generate JSON. What is the motivation on the replacer side? +RG: presents slides MM: I don't understand the motivation for the serialization slide, the enhanced replacer. If there is a reason to allow it to generate JSON. What is the motivation on the replacer side? RG: Motivation hovers around BigInt , the JSON I received I would like to generate values of the same fidelity. @@ -593,7 +605,7 @@ WH: Just like MM, I would very much prefer to keep the proposals together, since JRL: I'm confused what serialization solves for this proposal. For stage 3, we need much better examples of what serialization is actually trying to do. -RG: In that example, we’re trying to preserve the precision of a BigInt being serialized. Without this facility, you would either see a 1 followed by a bunch of zeros, or a string with quotes inside it. +RG: In that example, we’re trying to preserve the precision of a BigInt being serialized. Without this facility, you would either see a 1 followed by a bunch of zeros, or a string with quotes inside it. MF: This could be represented in other ways than a number in JSON. You can use a 2-layer approach where you describe the type of everything you're encoding using a wrapper. @@ -623,19 +635,20 @@ BFS: The spec text doesn't mention UTF-8. I mentioned it to ensure that it's com RGN: The concept within Unicode that we’re looking for is “well-formed Unicode.” That covers all forms of Unicode (UTF-16, UTF-8, etc.). Also, I agree this proposal can manifest as a PR. 
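Returning to the BigInt round-tripping motivation RG described, a sketch of the reviver source-text access the proposal explores; the `context.source` shape follows the proposal README and was still subject to change at this point:

```js
const text = '{"balance": 9007199254740993}'; // too precise for a Number
const revived = JSON.parse(text, (key, value, context) =>
  key === "balance" ? BigInt(context.source) : value);
revived.balance; // 9007199254740993n, with no precision lost
```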
### Conclusion/Resolution + * This can be presented as a "Needs consensus" PR * Will need to be approved in a future committee meeting - ## Host hooks for Job callbacks (consensus-seeking continuation from day 1) * [PR](https://github.com/tc39/ecma262/pull/2086) * [slides](https://docs.google.com/presentation/d/19S97ZqhibJABqzeP5ZU6Flk6TVgWzXuvJWFbNTTfpWs/edit?usp=sharing) SYG: Addressed concerns with concerned parties. The two new host hooks that I am proposing will be browser only. The prose now says that hosts that are not browser must follow the default behavior. I also added a note that any host cannot override behavior that ECMA262 specifies. The concern here was from mark that this would allow dynamic scoping. With that, I would like to ask consensus again for adding these two host hooks for adding callbacks that are passed by apis asynchronously. That is promises and finalization registry. Any objections? + ### Conclusion/Resolution -* Host hooks for job callbacks has Consensus +* Host hooks for job callbacks has Consensus ## Function toString for builtins (consensus-seeking continuation from day 1) @@ -647,11 +660,11 @@ GCL: What was discussed last time was to explicitly allow the `get` and `set` in WH: What would the output be? -GCL: the keyword `function` followed by the keyword `get` or `set`, followed by the usual output. +GCL: the keyword `function` followed by the keyword `get` or `set`, followed by the usual output. RGN: Having it specified like this results in the observable difference between user code and native code? -GCL: That is also my opinion but that wasn’t the goal here. The goal here is to align with implementations. +GCL: That is also my opinion but that wasn’t the goal here. The goal here is to align with implementations. RGN: Does this do that? @@ -663,12 +676,12 @@ JHD: I agree with WH. We should land this now, but it is important to do the fol GCL: Does anybody object? -Robert: No objections, you actually have consensus this time. - +Robert: No objections, you actually have consensus this time. GCL: Whoo! [no objections] ### Conclusion/Resolution + * Consensus to merge the PR diff --git a/meetings/2020-07/july-23.md b/meetings/2020-07/july-23.md index 5b1f62a3..756481a3 100644 --- a/meetings/2020-07/july-23.md +++ b/meetings/2020-07/july-23.md @@ -1,11 +1,11 @@ # July 23, 2020 Meeting Notes + ----- Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream. +**In-person attendees:** -**In-person attendees:** - -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Yulia Startsev | YSV | Mozilla | @@ -33,6 +33,7 @@ Delegates: re-use your existing abbreviations! If you’re a new delegate and do | Rob Palmer | RPR | Bloomberg | ## Examining Structural Racism in TC39 + Presenter: Mark Cohen (MPC) - [issue](https://github.com/tc39/Reflector/issues/305) @@ -47,6 +48,7 @@ AKI: To everyone: if you have questions about this type of things, don’t ask y MPC: Thanks everyone, and please come to #tc39-inclusion to continue the discussion ## *continuation* Ergonomic brand checks for private fields for stage 3 + Jordan Harband (JHD) - JHX’s position: https://gist.github.com/hax/5e94c7959703ea95d4ff9c9deac12988 @@ -64,7 +66,7 @@ JHX: I think we can check our documents about that. 
If we use the object access JHD: There’s two worlds we’re talking about, one where the reification proposal doesn’t advance and one where it does. If it does advance, #x would be a private symbol (based on the current shape of that proposal), `obj[#x]` and `obj.#x` would work identically, and `#x in obj` would work as JHX expects. In that world, when reification happens, everything is that consistent. -JHD: If it _doesn’t_ happen, what pieces of that world are missing, and what happens when a user runs into them? If they type `#x` by itself (not in front of `in`) or in square brackets, they’ll get a syntax error. Which seems consistent with other places in the language that have “missing pieces” that are errors. +JHD: If it *doesn’t* happen, what pieces of that world are missing, and what happens when a user runs into them? If they type `#x` by itself (not in front of `in`) or in square brackets, they’ll get a syntax error. Which seems consistent with other places in the language that have “missing pieces” that are errors. JHD: If we don't do reification, than obj[#x] would just be a syntax error everywhere, and tools would just crash. It's not possible to get it wrong. As for many things in the language, if a developer might expect something to work but then it just crashes early, it's something we have generally considered acceptable. @@ -73,12 +75,12 @@ JHD: It might be different if we knew reification was never possible, in that wo JHD: The committee already made a choice about private static ???. You could make a table, see that there is a missing piece, try to use that and it just crashes. And that's something we have been considering acceptable. -JHX: When we overload the `.` the private fields use a very different matching/semantics (?) of the property. +JHX: When we overload the `.` the private fields use a very different matching/semantics (?) of the property. We current don't have a simple line that if you only use the obj.#x and do not care about reification about #x it's ok. Now we overload the second one, the "in" syntax, which is much dynamic and it calls to reification. This does not only give you a syntax error, it makes the mental model more complex. This is the concern, and I think that we can not accept that. If we reification there would still be a different line between the syntaxes, but this proposal is in the middle of them. SYG: My understanding of the linked document is that it uses a farly pedantic understanding of the semantics of the "in" operator, where the LHS is any dynamic value. -In my experience, this is not the mental model of developers. You are not checking if that value exists in the object, you are checking if it exists as a _key_. This proposal is not at all inconsistent, it makes it _more_ consistent because… obj.#x looks like a property access, and I feel like allowing it on the lhs of "in" would make it more consistent. +In my experience, this is not the mental model of developers. You are not checking if that value exists in the object, you are checking if it exists as a *key*. This proposal is not at all inconsistent, it makes it *more* consistent because… obj.#x looks like a property access, and I feel like allowing it on the lhs of "in" would make it more consistent. MM: I support this proposal, but I think that the discussion of the interaction with possible reified private names was too simplistic. If you'll be able to reify them, they wouldn't be private symbols because of security reasons. 
It would be more something like a PrivateName object, which is a weakmap-like object. This proposal would preclude the #x syntax from being how you reify a PrivateName. @@ -94,7 +96,7 @@ I really think that this is a constraint on future proposals and not on the curr The objection isn't for the current proposal, but rather it creates requirements for future proposals. Although there's a valid objection, it is not actually on this proposal. It would be a burden on the ability to reify private names. -There is no clear reason why it clear rea +There is no clear reason why it clear rea DE: maybe at this point summarize consistency argument: @@ -156,18 +158,16 @@ JHD: Any objections? JHD: I’ll be happy to work with you JHX during stage 3. ### Conclusion/Resolution + - ~~Advance to stage 3?~~ Nope, challenged in IRC ## Async Context updates & for Stage 1 + Chengzhong Wu (CZW) - [proposal](https://github.com/legendecas/proposal-async-context) - [slides](https://docs.google.com/presentation/d/1Ef2JI4ntkWd-M8fDqOGZGGh7CiPD05L39CZRSv1II_0/edit?usp=sharing) - - - - CM: Can you describe what it is that this does, and how it does it? “The problem with being confused is that, first of all, you’re confused,” and I’m confused. DE: I wanna suggest that people take a look at the readme and they described APIs and what APIs does @@ -176,32 +176,29 @@ CZW: last time there was a major security issue with async hooks and the ability CM: Rather than explaining this in terms of how it differs from the previous proposal, let’s talk about it in its own right. First with an example of how it gets used? -CM: wanted a walkthrough of the examples +CM: wanted a walkthrough of the examples DE: sounds like a good plan to me to do that before going to queue items [Presents slide 13] https://docs.google.com/presentation/d/1Ef2JI4ntkWd-M8fDqOGZGGh7CiPD05L39CZRSv1II_0/edit#slide=id.g86607955ef_0_435 -CZW: This is an example of a tracking server, with AsyncLocal stuff. Getting the request object for each incoming HTTP request. So, we do a lot of Async operations and here we can get the context from the async local. And there are no additional params to the function, so the request is lost. +CZW: This is an example of a tracking server, with AsyncLocal stuff. Getting the request object for each incoming HTTP request. So, we do a lot of Async operations and here we can get the context from the async local. And there are no additional params to the function, so the request is lost. [Presents slide 14] https://docs.google.com/presentation/d/1Ef2JI4ntkWd-M8fDqOGZGGh7CiPD05L39CZRSv1II_0/edit#slide=id.g4446cf2007b006c1_0 We can get the exact initiating request from the database. So this can address the issue. - - CM: In this case, when we call `getContext`, where does `getContext` get the context from? What asyncLocal object is being used? Is it a global variable? Where is the scoping control? -CZW: Each instance of asyncLocal has to declare their own instance, and the instances will not conflict with each other. - +CZW: Each instance of asyncLocal has to declare their own instance, and the instances will not conflict with each other. CM: So the context is being held in a variable inside the context.js module? I’m just trying to understand the scoping behavior. I’m not sure thrashing through this is the most productive use of the committee’s time. CZW: So we in Node.js handle the request concurrently. One request may be active for a short time, and yield for the I/O operation. 
In the access of the request object, in the context for the global, they may conflict with each other because they are concurrently accessed. Async logic flows let each request store things. -CM: If I understand, previously we were keeping it in a global, and now we’re keeping it in a module scope var in context.js but there is nothing in the queryDatabase call. If someone else invokes import `queryDatabase`, whose execution is interleaved with this, there’s a hidden dep between db.js and context.js module and that ambient state is being passed between both implicitly. +CM: If I understand, previously we were keeping it in a global, and now we’re keeping it in a module scope var in context.js but there is nothing in the queryDatabase call. If someone else invokes import `queryDatabase`, whose execution is interleaved with this, there’s a hidden dep between db.js and context.js module and that ambient state is being passed between both implicitly. CZW: The db.js has to explicitly depend on context.js to get the state passed through. @@ -209,13 +206,13 @@ CM: gonna have to think about this one some more; I think there are some not ver WH: A lot of us are in the same boat. -GCL: If this helps for those of you who are confused about what async local storage is. In Node, we have Async hooks, which track when promises are created, resolved, or rejected. When the promise goes through fulfillment or rejection, and through the fulfillment queue. One level up abstraction of this is AsyncLocalStorage, where you store the state of these promises. Having an async local instance means that you must explicitly participate in holding this data and if you do async code, async stuff that access your data, ??? One of the big reasons this is useful is application performance management (APM) and tracking how resources are used and how long they are being used. Basically one of the big reasons for async hooks and async local storage is to track the reading of async files, HTTP requests, and then, recording that data. But one of the problems here is that, doing this without support from the engine, (or even with the support of the engine in the case of Node), when the abstraction has to be such that it is not part of the language itself, but some kind of sidechannel thing, results in a large amount of overhead and I think there was an issue in node.js that reported 99perc overhead when using async local storage. By adding this to the language itself, we can integrate these behaviors directly into the language itself and get rid of the overhead. +GCL: If this helps for those of you who are confused about what async local storage is. In Node, we have Async hooks, which track when promises are created, resolved, or rejected. When the promise goes through fulfillment or rejection, and through the fulfillment queue. One level up abstraction of this is AsyncLocalStorage, where you store the state of these promises. Having an async local instance means that you must explicitly participate in holding this data and if you do async code, async stuff that access your data, ??? One of the big reasons this is useful is application performance management (APM) and tracking how resources are used and how long they are being used. Basically one of the big reasons for async hooks and async local storage is to track the reading of async files, HTTP requests, and then, recording that data. 
But one of the problems here is that, doing this without support from the engine, (or even with the support of the engine in the case of Node), when the abstraction has to be such that it is not part of the language itself, but some kind of sidechannel thing, results in a large amount of overhead and I think there was an issue in node.js that reported 99perc overhead when using async local storage. By adding this to the language itself, we can integrate these behaviors directly into the language itself and get rid of the overhead. CM: That has a little bit of a "doctor it hurts when I do this" vibe. What you're saying is, it’s very expensive to do this thing you shouldn’t do. In the example in front of us here the database module is completely non reentrant in a dangerous way so I’m very concerned DE: For the reasons Gus explained, I think this is an important problem space, I think this is a really important problem space and there were examples where people were saying “I don’t really understand where scopes are made…” The fact that you have these distinct AsyncLocal object seems like a good basis for figuring out these details. I’ve worked in other programming languages that have dynamically-scoped ??? and they seemed useful. We have a presentation of incumbent realms and better tracking about baseUrl and imports. I think the underlying primitive is really something like this. I know there are problems for us to work out, I know there are a lot of problems to solve and figure out before stage 2 but it seems to be a very important problem for us as a committee to solve. -SYG: I want to push back a little about what DE said about incumbents—I think there’s a big difference between fully defined semantics and exposing programmatic control with something similar to dynamic scoping, even when the concepts are similar. +SYG: I want to push back a little about what DE said about incumbents—I think there’s a big difference between fully defined semantics and exposing programmatic control with something similar to dynamic scoping, even when the concepts are similar. I really could not understand the audio but one of the main thing is the removal of the async hook but the dynamic scoping thing remains and it is still not addressed, which was the main committee concern last time. And that’s it CZW: AsyncLocal needs to be explicitly referenced to get a value or to change it. The dynamic scoping issue DE mentioned, ??? Dynamic scoping for the issue correctly ??? @@ -224,7 +221,7 @@ SYG: I did not understand, sorry CZW: Explicitly reference to get the value and there is a better asynclocal ??? Has to be triggered by referencing a single instance. So ??? AsyncLocal provide any dynamic scoping (it was very difficult to understand with the audio feedback) That’s my point for the concern -MM: I will keep it short, because of sound issues my understanding is incomplete, but this still seems like dynamic scoping. The behavior of a callee depends on elements of a calling context that are not explicitly passed as calling arguments. And any such implicit context breaks many algebraic properties of the language. The question is, can a closure capture the dynamic context that is relevant at a given moment in time? Both the answer “yes” and “no” lead to unpleasant consequences. There is no good answer to the closure question when you have dynamic scoping. 
We have a large complex language that already has a complex computational model for people to form intuitions, and adding dynamic scoping to it pushed it too far. I am against seeing anything like this advance. +MM: I will keep it short, because of sound issues my understanding is incomplete, but this still seems like dynamic scoping. The behavior of a callee depends on elements of a calling context that are not explicitly passed as calling arguments. And any such implicit context breaks many algebraic properties of the language. The question is, can a closure capture the dynamic context that is relevant at a given moment in time? Both the answer “yes” and “no” lead to unpleasant consequences. There is no good answer to the closure question when you have dynamic scoping. We have a large complex language that already has a complex computational model for people to form intuitions, and adding dynamic scoping to it pushed it too far. I am against seeing anything like this advance. CZW: Async local storage can be treated as a value store, where the value has to explicitly reference the variable. It can also get and set the global. It is also possible to not set it in the global, I’m unsure what issue can be with asynclocal and async scoping. Since the async local has to be explicitly referenced in the code, So, I’m not sure if this can solve the concern. The async local itself has to be explicitly referenced, to set the value or get the value. It can be treated as an exclusion global. @@ -236,7 +233,7 @@ MM: The second. Both answers are wrong, but in opposite ways. If you say the clo GCL: Got it, thank you. -DRO: This is not an objection. It is a question. I am not sure how any of this is not already doable with existing language features. It seems like AsyncLocal is just a wrapper around a static global, that you can get and set with function wrappers in those specific circumstances. Is there anything special about AsyncLocal that isn’t just a wrapper around a value? I'm fine with introducing a language construct that it's already doable with other language constructs, and I just want to understand what makes this special. +DRO: This is not an objection. It is a question. I am not sure how any of this is not already doable with existing language features. It seems like AsyncLocal is just a wrapper around a static global, that you can get and set with function wrappers in those specific circumstances. Is there anything special about AsyncLocal that isn’t just a wrapper around a value? I'm fine with introducing a language construct that it's already doable with other language constructs, and I just want to understand what makes this special. CZW: The difference between asynclocal and a global is ??? the global may be overwritten by another js execution and they are interleaved in the async operations. They can be kept safely in the async local. It’s not accessible from another execution or another request—they are different for each run of the request. This is different from the global, which is global to every execution. @@ -246,9 +243,9 @@ CZW: Yes. DRO: I think that is a potential point of massive confusion for developers.It breaks the expectation that one variable has one value. Now you can have an AsyncLocal, and in your debugger, you switch to one promise exec to another and something that should be global is now something completely different. -CZW: before the feature we had to implement the same function in the host nod.js and it is hard to have devtools understand that concept. 
It will be harder to implement without the concept existing in the language itself. This would expand the use-case by adding the ability to the language. +CZW: before the feature we had to implement the same function in the host nod.js and it is hard to have devtools understand that concept. It will be harder to implement without the concept existing in the language itself. This would expand the use-case by adding the ability to the language. -DRO: Right, but getting it into the language still requires that developers understand why it's there. There’s still that first part of explaining to developers this is one reference to multiple values. +DRO: Right, but getting it into the language still requires that developers understand why it's there. There’s still that first part of explaining to developers this is one reference to multiple values. CZW: this is not a new concept, there are many prior art on this , an??? Language like java, I’m believing it would be possible since there are many prior art examples for this @@ -258,7 +255,7 @@ WH: I also had a hard time following the presentation due to audio quality issue MBS: CM are you ok deferring your question? -CM: My topic entry in the queue, “thread local may be a good point of comparison”, captures what I had to say. Thread local is indeed a feature that many languages have, but it’s still a very bad idea. This proposal seems to have the analogous features with the analogous hazards. +CM: My topic entry in the queue, “thread local may be a good point of comparison”, captures what I had to say. Thread local is indeed a feature that many languages have, but it’s still a very bad idea. This proposal seems to have the analogous features with the analogous hazards. MBS: Any final decision or consensus? @@ -269,19 +266,22 @@ MLS: I agree. YSV: We have hitten an invariant here that was not written down. A later presentation will cover this. ### Conclusion/Resolution + - Not going to stage 1 -## *continuation* Ergonomic brand checks for private fields for stage 3 +## *continuation 2* Ergonomic brand checks for private fields for stage 3 JHX: I wasn’t able to respond earlier; I would like to object to this proposal in its current form, because it seems that obj[#x] will never happen. We can try to find alternatives in the future. JHD: to summarize: JHX believes that there should be an invariant that `x in o` implies `o[x]`, and this proposal does not support that in its current form. I will be discussing this with JHX over the next two months, and will bring it back for discussion in September with a longer timebox. ### Conclusion/Resolution + - Does not yet advance; remains at stage 2 - No consensus yet on this invariant in either direction ## Flex Incubator Calls to weekly meetings + Leo Balter (LEO) LEO: I’m proposing weekly incubator calls. SYG has been organizing these meetings, and I’d like to have more. @@ -293,7 +293,7 @@ SYG: Part of the intention of incubator calls is that because they are so focuse I understand that as implementers our nets are wider and feel more responsibility to attend all of them. -YSV: I feel like I have a responsibility to attend all of them, actually. +YSV: I feel like I have a responsibility to attend all of them, actually. SYG: I understand that for myself too, as the V8 representative, as I should have some familiarity in what's going on since we'll eventually be going to implement that. I want to understand how the other people in the room feel, because I would preferably not want you to feel that way. 
@@ -305,13 +305,12 @@ While for some bigger topics we can have this, for other smaller topics or speci I do not know want to make implementers feel to be overburdened byt hose calls and it could alleviate burden in tc39 meetings before going to the meetings -SYG: I want to apologize for how badly I’ve been scheduling the calls. +SYG: I want to apologize for how badly I’ve been scheduling the calls. Specially for the last one where I misread my own doodle, it is not a skillset that I have to smoothly schedule things. Having a weekly cadence sounds good to me, but I would like someone else to sign up as an additional facilitator to help with scheduling and running the calls. - LEO: I’m in this spot although I feel like I’m pretty bad to do this, I hope someone else could help here. I don't think that I would do a very good job, but I can try if there is noone else MF: I think LEO might have misunderstood the written topic. What I’m suggesting is that we schedule those meetings in pre-dedicated timeslots, but not necessarily every week. @@ -328,7 +327,7 @@ SYG: ??? that seems the most sensible point for us to fill in the schedule. MBS: I have seen in many cases that if you keep up a regular cadence and then drop things if they are not needed, it ends up working better. If you have like one hour per week dedicated to this,... YSV, in your case if you know that there is this possibility at a fixed time every week and then it can be cancelled, ??? If we’re not proactively chartering off that time, it’s more likely not going to happen at all. -YSV: I guess I should also respond to that. I would be more open to this if there is demonstrated pressure to have more meetings rather than doing it preemptively. I would prefer not to have it aggressively scheduled and then cancelled, but I'm open if it _has_ to be done on a weekly basis because we need it. +YSV: I guess I should also respond to that. I would be more open to this if there is demonstrated pressure to have more meetings rather than doing it preemptively. I would prefer not to have it aggressively scheduled and then cancelled, but I'm open if it *has* to be done on a weekly basis because we need it. I’m also concerned that if we’re doing too much work between meetings that the meetings themselves will be overburdened. @@ -354,10 +353,13 @@ LEO: you’re important for us so I don’t want to go forward if you don’t wa I'm not comfortable proposing a change if you are discomfortable with it. It's important for me that people are onboard, and I would prefer to wait. AKI: We can discuss this in an issue further. + ### Conclusion/Resolution + - Incubator calls remain bi-weekly ## Incubation call chartering + Presenter: Shu-yu Guo (SYG) SYG: At the end of each plenary I call out a few proposals that could benefit from a higher frequency feedback loop. To either hear feedback or iterate feedback and bring it back to committee. First, overflow: the security model that we want JS to have. This is a conversation for plenary but it would be good to hash out preliminaries in a call first. That is overflow due to scheduling mishaps. Before I get into nominating specific proposals, with a fortnightly cadence, we usually have 5-6 slots depending if the plenaries are farther apart. So I think a comfortable number of proposals is probably 6. So with that, `Number.range` is probably the first one I’d like to have an incubator call for. There was a lot of back and forth between the iterator and iterable design. 
There were a lot of voices on either side, so that’d be a good thing to hash out. If the stakeholders for that are still on the call, champions, it would be good to get confirmation to participate with `Number.range`. I think await operations could also use some feedback given that there were some concerns about the DX improvement of these await ops, and if we should have them, if we should have something that’s just for Promise.all, or if we should not have them at all because maybe the DX thing is not as clear cut as we thought. So the second is await operations. The third one is Array.prototype.unique. Everyone seemed to agree from my reading of the room that having a unique operation on arrays would be useful, but disagreed on the particular semantic and especially the proposal as currently written. So that’d be a good item to get feedback on. Right now we have 4 items, to recap, security, Number.range, await operations, and Array.prototype.unique. Are there other proposals that people would like to discuss? I know LEO said earlier in the meeting that there are a bunch of proposals that he wanted to see discussed in the calls, do you have anything to say? @@ -411,6 +413,7 @@ MM: I want to expand on the question that was answered between WH and SYG. A lot SYG: Thank you. Ok, I think that's it. Be active on the Github issues if you're interested in any one of those six topics. Also there's a TC39 calendar, if you're not signed up, please do. ## ResizableArrayBuffer and GrowableSharedArrayBuffer for Stage 1 + Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/syg/proposal-resizablearraybuffer) @@ -506,9 +509,11 @@ AKI: Do we have consensus on Stage 1? WH: I think you do! ### Conclusion/Resolution + - Stage 1! ## Documenting invariants + Presenter: Yulia Startsev (YSV) - [repo](https://github.com/codehag/documenting-invariants) @@ -536,7 +541,7 @@ YSV: That’s a very good point. SYG: So I think the rationale point is really key here, we have to be really careful. -YSV: I consider the rationale to be crucial to this, and hope that we can add rationale for the invariants we already have. One of the reasons why I want us to start having this discussion is so that we do start talking about the invariants that we are bringing up to the committee. So that either- and I think we will have to come up with a process for rejected invariants that people deem important that are not held up by the committee, those should be recorded as well. That is a good point. I don’t have an idea yet about how we are going to do that. +YSV: I consider the rationale to be crucial to this, and hope that we can add rationale for the invariants we already have. One of the reasons why I want us to start having this discussion is so that we do start talking about the invariants that we are bringing up to the committee. So that either- and I think we will have to come up with a process for rejected invariants that people deem important that are not held up by the committee, those should be recorded as well. That is a good point. I don’t have an idea yet about how we are going to do that. SYG: Thanks @@ -587,7 +592,7 @@ AKI: You are always so thoughtful about this kind of things that I'm sure that m CM: I just want to endorse your proposal YSV, in the whole vein of documenting our invariants, these are sort of meta-invariants, and I like that we have this quite a bit. -WH: I like the idea, I don’t agree with the content. This is too strict and it’s different from what we’ve actually been doing in practice. 
We’ve blocked things from stage 1 for stage 2 reasons if we really thought they wouldn’t have much of a chance to advance past stage 2. But that wouldn’t be allowed under this proposal, so we wouldn’t be able to do that. +WH: I like the idea, I don’t agree with the content. This is too strict and it’s different from what we’ve actually been doing in practice. We’ve blocked things from stage 1 for stage 2 reasons if we really thought they wouldn’t have much of a chance to advance past stage 2. But that wouldn’t be allowed under this proposal, so we wouldn’t be able to do that. YSV: That's good feedback. Do you have other comments about the other stages? @@ -626,21 +631,24 @@ MF: My concern was with general public contribution. I don’t think it is neces YSV: That is a good clarification, thank you. I will think about this. I’ll post this on the reflector. Remaining queue: + 1. Reply: We could limit participation in a PR by locking it (MBS) 2. New Topic: where is its home? (AKI) ### Conclusion/Resolution + - We will start work on documenting invariants. - We will start iterating on the process documentation on a private Github repository. ## Many specific invariants to consider + Mark S. Miller (MM) - [slides](https://github.com/tc39/agendas/raw/master/2020/07-slides-some-invariants.pdf) JHD: The typeof ===/== invariant. People have a wide assumption that they are interchangable. Eslint has the eqeqeq rule, it doesn’t autofix. I think it is an important invariant to maintain. -MM: +MM: BFS: There is ecosystem reliance on emergent behavior as if it were an invariant. Particularly minifiers. I think it would be important to evaluate the current ecosystem tooling. While we may want to keep invariant themselves somewhat private while we discuss them, we need to do an ecosystem audit before add invariants for operators in particular. diff --git a/meetings/2020-07/summary.md b/meetings/2020-07/summary.md index d9dbddfd..ab1501ab 100644 --- a/meetings/2020-07/summary.md +++ b/meetings/2020-07/summary.md @@ -21,7 +21,7 @@ Ecma Technical Committee 39 held a four day meeting hosted remotely on July 20th ## Advancing Proposals -### No Stage +### No Stage - Arbitrary Module Namespace Identifiers: [proposal](https://github.com/bmeck/proposal-arbitrary-module-namespace-identifiers). Consensus, but this is becoming an immediate PR to ECMA-262 without full proposal process due to its size. @@ -85,5 +85,3 @@ Mainstream browsers were shipping `eval?.(str)` as a direct eval, using the func Link: tc39/ecma262#2090 The numeric separators are forbidden in legacy (_Annex B_) number notations, like non-octals `081`. Although, TC39 will not disallow separators in the exponential parts and fraction parts of these legacy numbers due to current implementations support and excessive tailoring of the spec text. This means `08.1_0` and `08e1_0` are allowed and will remain allowed by the specs. Non octals are only possible in non-strict mode anyway. 
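
[Note: a small illustrative sketch of the separator rule described above; these forms only parse in sloppy mode and the examples are not taken from the PR itself.]

```js
// Legacy (Annex B) non-octal decimal literals, sloppy mode only:
08.1_0;  // allowed: the separator sits in the fraction part
08e1_0;  // allowed: the separator sits in the exponent part
// 0_8;  // SyntaxError: separators remain forbidden right after the leading 0
// In strict mode, `081` and similar legacy literals are rejected outright.
```
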
- - diff --git a/meetings/2020-09/sept-21.md b/meetings/2020-09/sept-21.md index 58c6b3cb..483238ba 100644 --- a/meetings/2020-09/sept-21.md +++ b/meetings/2020-09/sept-21.md @@ -1,9 +1,10 @@ # September 21, 2020 Meeting Notes + ----- -**In-person attendees:** +**In-person attendees:** -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -33,26 +34,28 @@ | Istvan Sebestyen | IS | Ecma International | | Shu-yu Guo | SYG | Google | - ## Opening, welcome, housekeeping + Presenter: Aki Braun (AKI) AKI: (presents slides) + ## Secretary’s Report + Presenter: Istvan Sebestyen (IS) - [slides](https://github.com/tc39/Reflector/files/5246401/tc39-2020-041.pdf) -IS: (presents slides). The usual information, about TC39 standards access and download, status of meeting planning and news in Ecma, etc. The only relevant news are that formal liaison with CalConnect not dot done yet due to the incompatibility of the two patent policies (Ecma TC39 = RF and CalConnect = RAND). Suggestion was discussed with CalConnect that CalConnect should establish an “experimental RF patent policy” (takes of course some time), until then CalConnect experts should participate in TC39 work as invited experts in personal capacity accepting the TC39 RF patent policy. +IS: (presents slides). The usual information, about TC39 standards access and download, status of meeting planning and news in Ecma, etc. The only relevant news are that formal liaison with CalConnect not dot done yet due to the incompatibility of the two patent policies (Ecma TC39 = RF and CalConnect = RAND). Suggestion was discussed with CalConnect that CalConnect should establish an “experimental RF patent policy” (takes of course some time), until then CalConnect experts should participate in TC39 work as invited experts in personal capacity accepting the TC39 RF patent policy. ## ECMA262 Status Updates + Presenter: Jordan Harband (JHD), Kevin Gibbons (KG) - [slides](https://j.mp/262editor202009) JHD: (presents slides) - (Re. normative PR slide) KG: Re. 2094, this is something that we discussed last time where if you do `x = delete x, 1` then you can get into a situation where the implementation's concept of references don't match the spec's concept of references. There are still a lot more of those that we will need to sort out eventually. @@ -77,24 +80,22 @@ KG: Thank you for pointing it out! This uncovered a number of places where imple MF: (presents slides, “GitHub Project: Major Editorial Work”) - - ## ECMA402 Status Updates + Presenter: Shane F. Carr (SFC) - [slides](https://docs.google.com/presentation/d/1FeyW-QdZqAQ0xvrPacI347gYRuBtZ8NR6lV6n0bca4M/edit?usp=sharing) SFC: (presents slides) - ## ECMA404 Status Updates -Presenter: CM +Presenter: CM CM: ECMA404 sleeps happily, let’s try not to wake it up. - ## ECMA TC53 Liaison Report + Presenter: Peter Hoddie (PHE) - [slides](https://www.icloud.com/keynote/0_6IXVVSlbeV1Dm2OdEJt-5kQ#tc53_liaison_tc39_-_september_2020) @@ -102,12 +103,13 @@ Presenter: Peter Hoddie (PHE) PHE: (presents slides) ## Updates from the CoC Committee -Presenter: Jordan Harband (JHD) +Presenter: Jordan Harband (JHD) JHD: (presents CoC updates) ## The last 5 years of Test262, a brief review + Presenter: LEO LEO: (presents slides) @@ -117,6 +119,7 @@ RKG: Bravo. Let’s give applause. 
(many people give virtual applause over webcam) ## Explicitly specify order of operations in MakeTime + Presenter: Kevin Gibbons (KG) - [pull request](https://github.com/tc39/ecma262/pull/2120) @@ -133,7 +136,8 @@ KG: I will take that as consensus. Consensus Reached. -## Move __proto__ out of Annex B +## Move `__proto__` out of Annex B + Presenter: Gus Caplan (GCL) - [pull request](https://github.com/tc39/ecma262/pull/2125) @@ -144,7 +148,7 @@ JHD: Annex B represents two things. 1 is web browsers need to do this thing, but GCL: To clarify, your concern is about the spec sending a message to non-implementers, people who use JavaScript? -JHD: Yes, not to implementers, but to practitioners. +JHD: Yes, not to implementers, but to practitioners. GCL: I don’t know if we care about that, but it is interesting. @@ -175,17 +179,17 @@ BFS: There was something interesting I noted, we discussed if it is something re BT: PHE wants to clarify on this topic, if no objections I’d like to let him skip. -PHE: On the specific point raised by BFS, he is absolutely correct that XS does implement these functions. The reason why is important. We never needed them until we wanted to pass certain tests in test262. There is an unexpected dependency in test262 on Annex B. We would be thrilled to take it out. +PHE: On the specific point raised by BFS, he is absolutely correct that XS does implement these functions. The reason why is important. We never needed them until we wanted to pass certain tests in test262. There is an unexpected dependency in test262 on Annex B. We would be thrilled to take it out. GCL: You actively want them to be optional? PHE: Yes. I have no problem with them being normative optional, but I don’t think that devices should be required to carry them around. -JWK: Why do you need to implement in engine if you can manually define in userland? Deno does not implement this, and if we are using a library and it’s broken, I manually define __proto__ to fix the lib. +JWK: Why do you need to implement in engine if you can manually define in userland? Deno does not implement this, and if we are using a library and it’s broken, I manually define `__proto__` to fix the lib. -BFS: Deno does ship with __proto__, but they delete it on purpose. +BFS: Deno does ship with `__proto__`, but they delete it on purpose. -GCL: For what it's worth, that's the normative optional part here. I don't think that conflicts with anything. It's very clear you do not need to implement __proto__ to be a ??? engine. +GCL: For what it's worth, that's the normative optional part here. I don't think that conflicts with anything. It's very clear you do not need to implement `__proto__` to be a ??? engine. SYG: For `__proto__`, it is uncontroversial, everyone thinks it ought to be discouraged. I know we had a lot of discussion about ASI, and remain neutral in the spec language. If we're to set a precedent here, if we go with an "icky" note, this would be the first thing in ECMA-262 that we say is deprecated and discouraged. I’m trying to clarify what we would be agreeing to. It sounds like case-by-case, we would discourage things in the main body, we would need to get consensus that it is a thing that we want to discourage? @@ -193,11 +197,11 @@ JHD: That matches what I'm asking for. MM: I would certainly that the “icky” designation should require consensus. ASI did not acheive consensus. -GCL: Can we skip past the ickyness and switch directly to the __proto__ business? 
+GCL: Can we skip past the ickyness and switch directly to the `__proto__` business? BT: 15 minutes left, up to you GCL. -KG: Which __proto__? +KG: Which `__proto__`? GCL: All the methods and syntax. @@ -209,7 +213,7 @@ GCL: The syntax is also moved. It is not normative optional. KG: The only normative change is making syntax mandatory? -GCL: __proto__ is normative optional. __defineGetter__ etc is not optional. The syntax is moved into the main body, it is not optional. +GCL: `__proto__` is normative optional. **defineGetter** etc is not optional. The syntax is moved into the main body, it is not optional. MM: That seems right to me. @@ -217,11 +221,11 @@ JHD: My topic isn’t ickyness, but I can skip the ickyness part. BT: Take it away. -JHD: BFS made the point that "icky" and "optional" are different things. Should we make everything required except what we can't? Or should we make everything optional except what we have to make required? __proto__ has to be required, but the _defineGetter__ stuff, I’m asking the general question. +JHD: BFS made the point that "icky" and "optional" are different things. Should we make everything required except what we can't? Or should we make everything optional except what we have to make required? `__proto__` has to be required, but the _defineGetter__ stuff, I’m asking the general question. MM: I think it’s complicated. My original presentation on moving things out of Annex B goes into some of the tradeoffs. I don’t think there is a single answer; it depends on why we think something may be optional. -GCL My goal is to have as much as possible be required. The only reason __proto__ is not required is because implementations actively get rid of it. Node has an option. deno gets rid of it. I think we should aim for one JS as much as possible. +GCL My goal is to have as much as possible be required. The only reason `__proto__` is not required is because implementations actively get rid of it. Node has an option. deno gets rid of it. I think we should aim for one JS as much as possible. WH: I don’t think we have consensus on that. That (make as much as possible be required) is an opinion and we have different opinions on that in the committee. @@ -245,7 +249,7 @@ PHE: I want to make sure where we landed. I don’t want to summarize, I don’t KG: We have not agreed to anything yet. -MM: Somebody said that __proto__ should be normative optional because of a security concern. That one is surprising to me because we do have Object.setPrototypeOf and Reflect.setPrototypeOf, both of which are required, which provide all the capabilities of __proto__ through other means. I don’t see what security concern __proto__ would raise. +MM: Somebody said that `__proto__` should be normative optional because of a security concern. That one is surprising to me because we do have Object.setPrototypeOf and Reflect.setPrototypeOf, both of which are required, which provide all the capabilities of `__proto__` through other means. I don’t see what security concern `__proto__` would raise. GCL Its not setting the prototype, it’s that it is a property of objects. Lodash had a bug, an interaction with JSON.parse, it would overwrite the prototype if you weren’t careful. It isn’t setting the prototype itself but a domain problem. @@ -255,11 +259,11 @@ BFS: You can look up papers on prototype pollution. MM: This is not prototype pollution. -BFS: The easiest way is what GCL said. 
The difference is between method call and assignment, when you do a JSON.parse, and has a __proto__ property, people will do that as a dynamic access using that as a key. Deep cloing libraries will naively use that __proto__ key and assign something to an object. This replaces the object’s prototype. For example, Node introduced a flag to disable the proto getter and setter, because there are so many notifications of these bugs, because there are so many find, that they are prohibitive to actually fixing the bug. +BFS: The easiest way is what GCL said. The difference is between method call and assignment, when you do a JSON.parse, and has a `__proto__` property, people will do that as a dynamic access using that as a key. Deep cloing libraries will naively use that `__proto__` key and assign something to an object. This replaces the object’s prototype. For example, Node introduced a flag to disable the proto getter and setter, because there are so many notifications of these bugs, because there are so many find, that they are prohibitive to actually fixing the bug. MM: Thank you, I have a sense of it now. -BFS: As a personal preference, we should move the syntax for __proto__ that is in Annex B into the main body regardless of the methods. You can have two JS source texts that execute differently based on their environments. Because there is wide enough usage it is problematic that we maintain this divergence. There are inconsistencies with JSON.parse. I think that’s fine. I like moving the syntax into the main spec. +BFS: As a personal preference, we should move the syntax for `__proto__` that is in Annex B into the main body regardless of the methods. You can have two JS source texts that execute differently based on their environments. Because there is wide enough usage it is problematic that we maintain this divergence. There are inconsistencies with JSON.parse. I think that’s fine. I like moving the syntax into the main spec. KG: Agree on not marking “icky”. @@ -269,7 +273,7 @@ BFS: Sounds good that’s all. KG: Can you list all of the things that we are asking for consensus on? -GCL: The __proto__ accessor, to be normative optional. +GCL: The `__proto__` accessor, to be normative optional. KG: Is it also being marked as icky? @@ -277,7 +281,7 @@ GCL: Do we need consensus on that here? MM: We need consensus on anything in the main spec that is marked as “icky”. We should not assume in migrating things from Annex B into the main spec that Annex B assumes that it is "icky". It does not. -GCL: I’m just gonna list this off. __proto__ accessor will be normative optional and icky, the __define and __lookup {Getter/Setter} methods will be non-optional and “icky”. Do we have consensus on them being optional? +GCL: I’m just gonna list this off. `__proto__` accessor will be normative optional and icky, the __define and__lookup {Getter/Setter} methods will be non-optional and “icky”. Do we have consensus on them being optional? JWK: I agree. @@ -285,7 +289,7 @@ JHD: Is there anyone that wants the define/lookup methods to be required? GCL: That's what I'm asking. (silence) Seems like no. -GCL: the syntax for __proto__ will be required and will not be icky. +GCL: the syntax for `__proto__` will be required and will not be icky. MM: All of that sounds exactly right to me. @@ -299,13 +303,13 @@ GCL: Seems like we have consensus for optional. Let’s move forward with that? KG: Can I recap? To make sure we all know what we agree to? 
-- The __proto__ syntax will be required and not marked as discouraged -- The __proto__ accessor will be optional and marked as discouraged. -- The __defineGetter__, __defineSetter__, __lookupGetter__, and __lookupSetter__ will be optional and discouraged. +- The `__proto__` syntax will be required and not marked as discouraged +- The `__proto__` accessor will be optional and marked as discouraged. +- The **defineGetter**, **defineSetter**, **lookupGetter**, and **lookupSetter** will be optional and discouraged. Note: icky means discouraged -SYG: Given that this is the first time that we are marking these things as “icky”, I would like some acknowledgement that the editor group be given some independence over ??? +SYG: Given that this is the first time that we are marking these things as “icky”, I would like some acknowledgement that the editor group be given some independence over ??? Vs. if you are bootstrapping a greenfield ecosystem you are discouraged from implementing this. JHD: SYG, this is not the first time we’re marking things as “icky”. There is precedence for that, for example Annex B itself (quotes the beginning of Annex B on the definition of "discouraged") That said, I agree with your request for editorial leeway. I just wanted to clarify. @@ -319,12 +323,13 @@ WH: MM, read the beginning of Annex B. It currently states that everything in th MM: Ok. ### Conclusion/Resolution -- The __proto__ syntax will be required and not marked as discouraged -- The __proto__ accessor will be optional and marked as discouraged. -- The __defineGetter__, __defineSetter__, __lookupGetter__, and __lookupSetter__ will be optional and discouraged. +- The `__proto__` syntax will be required and not marked as discouraged +- The `__proto__` accessor will be optional and marked as discouraged. +- The **defineGetter**, **defineSetter**, **lookupGetter**, and **lookupSetter** will be optional and discouraged. ## Align detached buffer semantics with web reality + Presenter: Ross Kirsling (RKG) - [pull request](https://github.com/tc39/ecma262/pull/2164) @@ -333,7 +338,7 @@ RKG: (presents PR) PHE: I don't have a strong position. I remember the GitHub issue. There are 2 issues that pass this: Two implementations, Moddable and ???, both get this right, conforming to the spec. I completely agree that the spec should address web reality. I appreciate the work you and others have done to pin that down. A different way to look at this is that this web reality is another Annex B behavior. The web is looser with enforcing some of the requirements in the spec than the language would prefer. These differences could be put into Annex B as required for web browsers. It would maintain the intent of the specification better. -RKG: That makes sense. My understanding (which may be imperfect) is this has been pretty consistent in browser-hosted implementations all along; that TC39 inherited the spec’ing out of ArrayBuffer et al. from Khronos during the ES6 era and wanted to make these cases more stringent, but the ship had already sailed in engines. It's just taken this long to be an official web compat issue. +RKG: That makes sense. My understanding (which may be imperfect) is this has been pretty consistent in browser-hosted implementations all along; that TC39 inherited the spec’ing out of ArrayBuffer et al. from Khronos during the ES6 era and wanted to make these cases more stringent, but the ship had already sailed in engines. It's just taken this long to be an official web compat issue. 
GCN: This should not be in Annex B especially for node because there is ton of code to handle it. It would be more appropriate to do the “icky thing” but I think it should not be added to Annex B. @@ -363,7 +368,7 @@ RKG: I will say that I’ve already been planning on correcting JSC’s behavior KM: I don't think it would be a problem. -SYG: I think it the issue that turning an error to a non-error has minimal compat risk. +SYG: I think it the issue that turning an error to a non-error has minimal compat risk. I am willing to add Chrome to be the first do that change. KM: Compat risk is always a problem. @@ -377,7 +382,9 @@ RKG: Seems like everyone is onboard? ### Conclusion PR Approved. + ## Specify order of name and length for built-in functions + Presenter: Kevin Gibbons (KG) - [pull request](https://github.com/tc39/ecma262/pull/2116) @@ -408,14 +415,15 @@ KG: I might come back with a prototype in a future meeting. Consensus. -## Arbitrary Strings as export/import names +## Arbitrary Strings as export/import names + Presenter: Bradley Farias (BFS) - [pull request](https://github.com/tc39/ecma262/pull/2154) BFS: (presents slides) -WH: Is this the first place in the spec where you check for “valid unicode”? +WH: Is this the first place in the spec where you check for “valid unicode”? BFS: To my knowledge, yes, but there is iteration of code points for a specific operation. The spec doesn't check for isValidUnicode anywhere, but it does iterate code points. @@ -459,16 +467,16 @@ JHD: MM, there are plenty of references to the UTF-8 concept in the spec: valid MM: I didn’t know that; thanks! -BFS: We can also change it in some way that we - I don’t like the idea of phrasing it, that it doesn’t contain any lone surrogates explicitly, although to the most technical letter what it is doing. If we name it specifically that, it won’t get updated to valid problems in the future, +BFS: We can also change it in some way that we - I don’t like the idea of phrasing it, that it doesn’t contain any lone surrogates explicitly, although to the most technical letter what it is doing. If we name it specifically that, it won’t get updated to valid problems in the future, If there is a problem found with this method, that basically asserts that it does return code points that are complete, then we wouldn't get that if it states "doesn't contain lone surrogates". MM: I have no objection to either way of phrasing it, I prefer the lone surrogate but if you prefer UTF-8, I am ok with this. -YSV: There is no objection from our side, but to JHD’s point, We’d like to prefer UTF-16 wherever possible unless there is a conscious decision. In this case UTF-8 does make sense due to the WASM case. +YSV: There is no objection from our side, but to JHD’s point, We’d like to prefer UTF-16 wherever possible unless there is a conscious decision. In this case UTF-8 does make sense due to the WASM case. DE: I think we’ve been talking about UTF8/16 for a while, we should focus on editorial issues on the PR. I don’t think we should focus on UTF8/16, I like the feature and it makes sense. I'm the champion on the WebAssembly side of the ESM integration. I think it's good that we have this field in. It makes sense that we only import or export valid Unicode code point strings, as this does. -AKI: Queue empty. BFS? +AKI: Queue empty. BFS? BFS: I would like to ask for consensus, remaining editorial issues to be discussed with people on it. @@ -493,6 +501,7 @@ GCL: Engine262 has this implemented already. 
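
[Note: an illustrative sketch of the syntax under discussion; the module and the exported names are hypothetical, not from the PR.]

```js
// mod.js
let counter = 0;
export { counter as "not an identifier" };

// main.js
import { "not an identifier" as counter } from "./mod.js";
export * as "namespace with spaces" from "./mod.js";
```
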
Approved, pending the Test262 and implementations. ## Numeric literal suffixes update: separate namespace version + Presenter: Daniel Ehrenberg (DE) - [proposal](https://github.com/tc39/proposal-extended-numeric-literals) @@ -562,7 +571,7 @@ DE: more about engines to be able to keep their structures. Literals should also MM: Any developer who wants to understand their code should stick to strict mode. -DE: I see two options: (1) making a separate namespace, with no prefix, or (2) making the underscore an explicit separation. I don’t think we can make it so that it only refers to the lexical scope with no prefix like `_`. I was laying motivations for why those were the two options. +DE: I see two options: (1) making a separate namespace, with no prefix, or (2) making the underscore an explicit separation. I don’t think we can make it so that it only refers to the lexical scope with no prefix like `_`. I was laying motivations for why those were the two options. MM: I very much prefer the polyfillable principle: that whatever new suffixes for built-in types going forward, the things you can do at the user level can look like that. It's a good design principle that the mechanisms available to language designers, as much as that power can be given to the user can be good, that doesn’t mean bare m can’t be the suffix could be ??? instead of _m. So, as this discussion has proceeded, I'm leaning toward the underbar, even though I wasn't used to. That way, we solve the need for the separate namespace without creating a separate namespace mechanism. I'd say that new builtins after BigInt all have the same preceding underbar. @@ -602,9 +611,10 @@ WH: A couple quick observations: Excluding a bunch of different letters `a`, `b` DE: I don’t know who made the claim. There would be problems even with an underscore. -WH: Second observation: I do very much like the separate namespace. For example, a good use case of this would be representing complex numbers, `3+4i` shouldn't conflict with an index variable `i`. I think of the `i` in the number more as syntax than a variable name, just like I don’t think of the `e` in `3e-4` as a variable. +WH: Second observation: I do very much like the separate namespace. For example, a good use case of this would be representing complex numbers, `3+4i` shouldn't conflict with an index variable `i`. I think of the `i` in the number more as syntax than a variable name, just like I don’t think of the `e` in `3e-4` as a variable. ## Need another stage 3 reviewer for iterator helpers + Presenter: Michael Ficarra (MF) No slides @@ -630,6 +640,7 @@ JHD: in general authors reviewing what they wrote can be a problem but here it i YSV/JTO will review ## Withdrawing TypedArray stride + Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/proposal-typedarray-stride) @@ -638,18 +649,18 @@ SYG: The feedback from engines is that this would slow down the index operator, AKI: You already answered my question, is this still a problem to be solved? The answer seems to be yes. -SYG: Archive the proposal repo? +SYG: Archive the proposal repo? AKI: Archive and remove it from the proposals listing. JHD: Already got that. + ### Conclusion/Resolution Proposal will be withdrawn - - ## F.p.bind with infinite-length functions + Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/tc39/ecma262/issues/2170) @@ -679,7 +690,7 @@ That is the only case you need to deal with here (showing up spec patch). MM: I have mild distaste for adding another branch to the spec. 
I was hoping that you had phrasing in mind that would not add a separate condition. -KG: Unfortunately no; everywhere toInteger is used, it must check for infinity. I have renamed it to ToIntegerOrInfinity to take up on that case. +KG: Unfortunately no; everywhere toInteger is used, it must check for infinity. I have renamed it to ToIntegerOrInfinity to take up on that case. MM: Ok, yes @@ -694,6 +705,7 @@ KG: I would like to ask for a consensus for infinity as a correct answer to this Consensus. We will use Infinity. ## Date arithmetic + Presenter: Kevin Gibbons (KG) - [slides](https://docs.google.com/presentation/d/1gePsNmlP2u0pYXm0LWO3d7eM4Q_y5Ozx0qXN1zWOv58/) @@ -756,6 +768,7 @@ Consensus on part 1: IEEE arithmetic in Dates No consensus on part 2 on grounds of late addition to the agenda. ## Move outreach groups to the TC39 org, like incubator calls? + Presenter: Daniel Ehrenberg (DE) - [outreach groups](https://github.com/js-outreach/js-outreach-groups/) diff --git a/meetings/2020-09/sept-22.md b/meetings/2020-09/sept-22.md index dc0c76c6..241994ab 100644 --- a/meetings/2020-09/sept-22.md +++ b/meetings/2020-09/sept-22.md @@ -1,9 +1,10 @@ # September 22, 2020 Meeting Notes + ----- -**In-person attendees:** +**In-person attendees:** -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -28,7 +29,7 @@ | Michael Saboff | MLS | Apple | | Keith Miller | KM | Apple | | Bradford C. Smith | BSH | Google | -| Jem Young | JZY | Netflix | +| Jem Young | JZY | Netflix | | Philip Chimento | PFC | Igalia | | Richard Gibson | RGN | OpenJS Foundation | | Robin Ricard | RRD | Bloomberg | @@ -39,11 +40,8 @@ | Pieter Ouwerkerk | POK | RunKit | | Shu-yu Guo | SYG | Google | - - - - ## Intl.DisplayNames for Stage 4 + Presenter: Frank Yung-Fong Tang (FYT) - [proposal](https://github.com/tc39/proposal-intl-displaynames) @@ -63,10 +61,11 @@ FYT: Can we reach consensus? MBS: I’m not hearing any objections, so congratulations on stage 4! ### Conclusion/Resolution -- Stage 4! +- Stage 4! ## .item() for Stage 3 + Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/proposal-item-method) @@ -108,7 +107,6 @@ WH: But, there has to be a way to index over strings. KG: That’s the iteration that strings have, they are by code points. - JTO: The committee already addressed this: the iteration on strings is by code point, not code unit. I was actually persuaded to not to String.p.item by SYG’s slide. The team consensus was that everyone would be confused by what we do. But the benefit - I don’t understand what the benefit is of doing this. We should not be in the business of facilitating bugs because users already don’t understand. SYG: In the interest of time, can we drop the code point vs code unit thing? I am presenting either code units or no String.prototype.item. @@ -148,10 +146,11 @@ SYG: It isn’t a unanimous glowing consensus, but it sounds like we have consen MBS: I’m not hearing any blocks so I think we can call this stage 3. ### Conclusion/Resolution -- Stage 3 including String.prototype.item with code unit indexing. +- Stage 3 including String.prototype.item with code unit indexing. ## Numeric literal suffixes - continued + Presenter: Daniel Ehrenberg (DE) - [proposal](https://github.com/tc39/proposal-extended-numeric-literals) @@ -160,18 +159,10 @@ Presenter: Daniel Ehrenberg (DE) DE: (presents last slide, “Summary of feedback”) Should we have this syntax at all? 
-Waldemar: Yes, a requirement for Decimal -Michael F: Better to use template strings -Should we use a separate namespace, or lexical scope with _? -Waldemar: Separate namespace seems fine -Mark: Lexical scope with _ preferred, but the separate namespace is well-formed -Some others: Separate namespace is not acceptable -Is it important to have this feature generalized for user definition? -Mark: Yes, a requirement for Decimal -Yulia: Asking if the value is sufficient -Is it acceptable to omit so many identifier start characters? -Waldemar: This might be too restrictive and bad for future-proofing -Chip: Permit more identifiers if you're not in a hex literal? +Waldemar: Yes, a requirement for Decimal Michael F: Better to use template strings Should we use a separate namespace, or lexical scope with _? +Waldemar: Separate namespace seems fine Mark: Lexical scope with_ preferred, but the separate namespace is well-formed Some others: Separate namespace is not acceptable Is it important to have this feature generalized for user definition? +Mark: Yes, a requirement for Decimal Yulia: Asking if the value is sufficient Is it acceptable to omit so many identifier start characters? +Waldemar: This might be too restrictive and bad for future-proofing Chip: Permit more identifiers if you're not in a hex literal? My feeling: Decimal is sufficiently important for JS developers that we should work through these issues one way or the other WH: This is a tough design area. You recorded my position as “being fine” with a separate namespace, but it’s actually stronger than that. I think that a separate namespace is pretty much essential, otherwise you get too many conflicts with accidentally captured variable names, especially if you want to use commonly-used index names like `i`, `n`, or `in` which is a keyword. Consistency with existing names of units and such also makes it bothersome to forbid various identifier start characters. I feel that a separate namespace is pretty much essential here. @@ -181,7 +172,9 @@ DE: Is it fair to say that we have these conflicting requirements from TC39 memb WH: Yes. And the slide you’re showing is a great summary of them. DE: Thank you everybody for giving this consideration and extra time. + ## Import Assertions for Stage 3 + Presenters: Sven Sauleau (SSA), Dan Clark (DDC) - [proposal](https://github.com/tc39/proposal-import-assertions) @@ -227,7 +220,7 @@ This came up in a recent framework outreach call. Someone said that they want to GCL: Sure, I’m not trying to solve HTML, I’m just trying to expand my understanding. So within the same source text, the same module with two different assertions, can lead to two different modules? -DE: If you use an assertion that is not recognized, that is what it would do. [Note: after revisiting the PR, it is clear that the HTML PR does *not* do this. The debate is between ignoring unrecognized assertions (not keying off of them) and erroring on them. Unrecognized attributes will not cause duplication in HTML.] +DE: If you use an assertion that is not recognized, that is what it would do. [Note: after revisiting the PR, it is clear that the HTML PR does _not_ do this. The debate is between ignoring unrecognized assertions (not keying off of them) and erroring on them. Unrecognized attributes will not cause duplication in HTML.] GCL: Ok, thank you. @@ -263,9 +256,11 @@ SYG: I feel somewhat strongly that I like the current status quo of the spec tex AKI: Alright, let’s call that stage 3! Congratulations! 
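
[Note: an illustrative sketch of the assertion syntax as proposed at this point; the module names are hypothetical.]

```js
// Static form: the host checks the assertion, e.g. that the resource really is JSON.
import config from "./config.json" assert { type: "json" };

// Dynamic form: assertions travel in the options bag (inside a module or async function).
const settings = await import("./settings.json", { assert: { type: "json" } });
```
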
### Conclusion/Resolution + - Stage 3 for the status quo spec. ## JSON Modules update + Presenter: Daniel Clark (DDC) - [proposal](https://github.com/tc39/proposal-json-modules) @@ -303,8 +298,7 @@ SYG: GCL was worried that as a general principle, it seems like a bad idea for u CM: I see. So the concern is that well you said JSON, but we know better than you and we’re going to give you some other thing. -GCL: Maybe an example -- it’s not the JSON part, it’s using the name type in the domain of a host boundary, maybe the host wants to use the type -And now all of the sudden we are taking up a string value from them. It’s not about the JSON part, it’s - +GCL: Maybe an example -- it’s not the JSON part, it’s using the name type in the domain of a host boundary, maybe the host wants to use the type And now all of the sudden we are taking up a string value from them. It’s not about the JSON part, it’s - CM: It’s like having a reserved word. @@ -314,7 +308,7 @@ DE: I think the idea of this - obviously we’re not talking about changing what I think it’s really important that you be able to import JSON modules in the same way. We could require ??? syntax in different environments, but it wouldn’t allow those things to be ??? in practice. I think this is a path we could take with future features, either through assertions or evaluator attributes, and this is called out in the Import Assertions explainer, that we could define these in the future, just like we declare globals, and we would just have to pragmatically deal with this potential namespace collision. I think that’s what’s going on here and it’s reasonable. -GCL: So maybe the problem there was my understanding of how hosts should ??? something would be bad form of doing that. +GCL: So maybe the problem there was my understanding of how hosts should ??? something would be bad form of doing that. DE: No, space is definitely free for hosts to use. It’s like globals, where hosts and JavaScript share the namespace and we work through the compatibility issues pragmatically as we need to. @@ -327,7 +321,7 @@ JTO: I think the issue is that JSON.parse is a function; you call it once, it re RBU: Slight favor of immutability. While JSON.parse does return a mutable object, the string that you put into JSON.parse is immutable. When you import a JSON module, you have no way of getting the equivalent of the input string to recreate that mutable(?) structure, so I don’t think this is quite the same as a simple JSON.parse. MBS: I want to remind folks about the history of where this proposal came from, and its relationship to Import Assertions. Many moons ago/about a year ago, some folks including DE worked on standardizing JSON modules. And part of the philosophy behind it was that there are a number of different module types that we’d like to see, including but not limited to JSON, CSS, HTML, and WASM, but that behavior was concerned as far as scope. And JSON was one of the ones that was rather straightforward. We have differing opinions with mutability, but there were at least a limited number of walls of the bikeshed to paint. After last TPAC(?), there was a security concern and we reversed it [TODO: what’s “it”?]. But we still need to allow JSON modules in the language. In node, you’re able to require JSON. Many people use this as a way to bootstrap applications. Configuration files, metadata, tiny replacement for databases -- there are so many uses for JSON in an application. 
And there’s value in asynchronously parsing jSON, which we don’t have right now. -Which is a whole other thing to get into. I will stop proselytizing for a second. But the thing is that when we look into import assertions and all the generic ways to use it, that proposal grew out of this one need to enable JSON modules. We’re getting to a point where we can have more consistency—but I think it’s just really good for us to step back and realize why we got here, what we’re trying to build, and maybe step back from some of the specifics. One thing I think about is enabling programming patterns that people are using today, and making it more consistent across the ecosystem. +Which is a whole other thing to get into. I will stop proselytizing for a second. But the thing is that when we look into import assertions and all the generic ways to use it, that proposal grew out of this one need to enable JSON modules. We’re getting to a point where we can have more consistency—but I think it’s just really good for us to step back and realize why we got here, what we’re trying to build, and maybe step back from some of the specifics. One thing I think about is enabling programming patterns that people are using today, and making it more consistent across the ecosystem. JRL: The mutable vs immutable discussion: first I want to say that I’m not going to block over this. Immutability isn’t in the language - nothing is immutable by default. If we had records already, maybe we can have this discussion. If you import something from node, it will be mutable. If you import from a module, it will be mutable. Surprising behavior should not be the default, and I think immutability would be surprising. @@ -358,15 +352,17 @@ JRL: I can review it. AKI: Thanks JRL. Well that’s two reviewers, more are welcome but two is the minimum. ### Conclusion / resolution + - Further discussion on immutability vs mutability to occur on proposal repo - RGN and JRL to review for stage 3 ## GetOption in ECMA-262 + Presenter: Philip Chimento (PFC) - [issue](https://github.com/tc39/ecma402/issues/480) - [PR](https://github.com/tc39/ecma402/pull/493) -- [explanatory code sample](https://gist.github.com/ptomato/7f13d17f092ab30872f5b5fe663ca507) +- [explanatory code sample](https://gist.github.com/ptomato/7f13d17f092ab30872f5b5fe663ca507) PFC: (presents explanatory code sample) @@ -395,6 +391,7 @@ DE: I’d be happy to consider this to be a separate decoupled change. I think w MM: I think null should be taken as an intentional null and not as a default. ## Intl Enumeration API for Stage 2 + Presenter: Frank Yung-Fong Tang (FYT) - [proposal](https://github.com/tc39/proposal-intl-enumeration) @@ -441,7 +438,9 @@ FYT: I want to request TC39 to advance to stage 2, any objections? ### Conclusion Consensus on Stage 2! + ## Records & Tuples + Presenter: Rick Button (RBU) - [proposal](https://github.com/tc39/proposal-record-tuple/) @@ -451,7 +450,7 @@ RBU: (presents slides) WH: The difference between primitives and identity-less objects is subtle. Is the idea that if you go for the object approach, that these things would be objects, but that you're trying to avoid exposing any way of comparing their identities? -RBU: Yes, the current invariant around identity would maintain ??? +RBU: Yes, the current invariant around identity would maintain ??? WH: How would you specify something like that? We had a problem like that with NaN. How would you spec that these objects would have separate identities but there is no way to compare them? 
That's a negative assertion — you’re trying to assert that something is impossible in the spec. @@ -466,7 +465,7 @@ WH: Why the need for boxing instead of just directly storing arbitrary values in RBU: That’s a great question. We should address this in the context of this proposal. The main reason we wanted to add boxes is ergonomics. The driving force behind immutable data structures is that they're deeply immutable. There's no way to silently escape the immutable world. We talked about things like integrity domains. An integrity domain is a space over which we want to hold an invariant. For example, we think of Records as having an integrity domain of “string properties”, in that you can trust the string properties of the record to lead to immutable things. The trick with Box is that it forces you to verify the box on the way in and the way out. In order to put an object into a record, you need to box it. In order to use the object in a record, you need to unbox it. There's no chance that you accidentally escape immutability. That ergonomic idea is the main reason. RRD: You can assert whether there is a box or not in the R&T structure. If it doesn’t contains a box you can tell it’s immutable. If you see that there is no box in there, you know that you don't need to do a deep copy of the R&T. That could be done whenever someone passes an R&T to a function. - + WH: Does a box have to contain an object or can it contain a primitive? RBU: It's an open question. Do you have an opinion? @@ -483,7 +482,7 @@ MM: ??? JHD: ??? -JWK: I don’t like the idea of identity-less objects. I agree with Jordan in the issue. If it was an object, I think it should work with Proxy. +JWK: I don’t like the idea of identity-less objects. I agree with Jordan in the issue. If it was an object, I think it should work with Proxy. RBU: On the first point, I think we should dive in to whether the axiom is useful, whether the bifurcation between primitive and objects works, ??? We should continue discussion on the issue. @@ -529,7 +528,7 @@ BSH: OK, sorry I didn't catch that, sounds good! JHD: So the concept of identity is already confusing. Before ES6, it was objects. Now it's objects and symbols. This was already hard to explain and teach. One of the primary differences between symbols and objects is that objects have identity and symbols(?) do not. -RRD: I share your concerns about explaining the proposal we would have the same semantics we have so far it's mainly on how we explain things to people. We could even continue to explain things in terms of primitives or at least as a similar concept. I agree this is a step back in terms of explainability though so we have work to do here. +RRD: I share your concerns about explaining the proposal we would have the same semantics we have so far it's mainly on how we explain things to people. We could even continue to explain things in terms of primitives or at least as a similar concept. I agree this is a step back in terms of explainability though so we have work to do here. RBU: I think we should continue this discussion offline @@ -548,6 +547,7 @@ RRD: That would be very very interesting to have this type of optimization. Happ PDL: SYG, get in touch because I have an example. ## Class static initialization block for Stage 2 + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/tc39/proposal-class-static-block) @@ -574,13 +574,14 @@ DE: Me! PFC: Me! 
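
[Note: an illustrative sketch of the feature as proposed; the class and names are hypothetical, not from the slides.]

```js
class Cache {
  static #store = new Map();

  static {
    // Runs once while the class definition is evaluated, with access to
    // private static state that code outside the class could not reach.
    this.#store.set("default", Object.freeze({}));
  }
}
```
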
### Conclusion/Resolution + Stage 2 Reviewers: -DE -PFC +DE PFC ## Process document clarifications + Presenter: Yulia Startsev (YSV) - [pull-request](https://github.com/tc39/process-document/pull/29) @@ -620,15 +621,14 @@ WH: I agree with MM. YSV: I'll bring it back in November then. - ## Class Access Expressions for Stage 2 + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/tc39/proposal-class-access-expressions) - [slides](https://docs.google.com/presentation/d/1ATxFyZUYv9WvmLMFPDIuJ5QSpoVTIdqM0ThH2JPpzFw/edit?usp=sharing) - [spec](https://tc39.es/proposal-class-access-expressions/) - RBN: (presents slides) MM: When is this different than using the name of the class? In all the cases where it is useful, everywhere you use the `class` keyword, what happens if you use the name of the class instead? @@ -663,7 +663,7 @@ JHD: I'm very much in support of this proposal. One of the benefits of object de RBN: I'd like to point out that a lot of my prior experience is with C#, which doesn't let you override a static method (you need to shadow), but it also doesn't allow class name dot for fields. It’s one thing we were discussing for ?? There is no common form or dotless form that you can use to access these fields. ??? -YSV: Feels like a papercut. Does it need a solution? I agree with MM's comments that this doesn't seem to carry its own weight. I've only seen this pattern once while programming. The factory class was named "Foo" and it was hard for someone reading the code to figure out what the class did -- this was a place where this could have been used. But if there had been no name there, it would have been even more for comprehension of the code base. In this case, giving a name may make the code more clear than using a keyword. So I'm pretty -1 on this on the Apache Scale. +YSV: Feels like a papercut. Does it need a solution? I agree with MM's comments that this doesn't seem to carry its own weight. I've only seen this pattern once while programming. The factory class was named "Foo" and it was hard for someone reading the code to figure out what the class did -- this was a place where this could have been used. But if there had been no name there, it would have been even more for comprehension of the code base. In this case, giving a name may make the code more clear than using a keyword. So I'm pretty -1 on this on the Apache Scale. SYG: I argued for private static being not so bad -- naming your classes is indeed a small cost IMO, agree with MM. Agree with this sentiment. Previously I was arguing private static to be not so bad. I don’t think the C# comparison to be that valid. In C# there is no way to refer to the instance of the class isn’t it? diff --git a/meetings/2020-09/sept-23.md b/meetings/2020-09/sept-23.md index c16a0323..ff4face8 100644 --- a/meetings/2020-09/sept-23.md +++ b/meetings/2020-09/sept-23.md @@ -1,4 +1,5 @@ # September 23, 2020 Meeting Notes + ----- **In-person attendees:** @@ -36,8 +37,8 @@ | Shu-yu Guo | SYG | Google | | Hemanth HM | HHM | PayPal | - ## Status update for class fields, private methods, static class features + Presenter: Ujjwal Sharma (USA) - [proposal](https://github.com/tc39/proposal-class-fields) @@ -61,8 +62,8 @@ USA: Maybe we can address this in the chat, or in issues. LZJ: Ok no problem. - ## Ergonomic brand checks for private fields for stage 3 + Presenter: Jordan Harband (JHD) - [proposal](https://github.com/tc39/proposal-private-fields-in-in) @@ -116,7 +117,8 @@ YK: I just want to say positive things. 
JHD did a good job of presenting the men We will want to see how reification interacts with other proposals anyway, and there is no reason for us to wait on this for that to shake out. MM: The reified thing is a separate reflective level which is a much more specialized use, and we don’t know what the demand is. For the same rationale I don’t want to add syntax for other features. If there isn’t special syntax for it, how hard is it to do it yourselves? I pasted code in chat, that is a few lines of code that does this. If you have to do this all the time it will be a pain, but if you need to do it rarely you can roll it yourself. If we find that people end up doing this a lot even if it is 3 lines, then we have established a need. -``` + +```js ({ has: obj => #x in obj, get: obj => obj.#x, @@ -175,14 +177,12 @@ JHD: Can we squeeze in a couple of minutes later? AKI: Before or after lunch. - - - - ### Conclusion/Resolution Decision deferred until JHX can review notes. + ## Decorators: A new proposal iteration + Presenter: Kristen Hewell Garrett (KHG) - [proposal](https://github.com/tc39/proposal-decorators/) @@ -306,6 +306,7 @@ DE: Do people have concerns on the high level, thanks for feedback. (ends) ## String.dedent for Stage 1 + Presenter: Hemanth HM (HHM) - [proposal](https://github.com/mmkal/proposal-multi-backtick-templates) @@ -313,7 +314,7 @@ Presenter: Hemanth HM (HHM) HHM: (presents slides) -DRR: (asks on queue: I understand ``` ``` is not necessarily the syntax, but that code is valid today) +DRR: (asks on queue: I understand `````` is not necessarily the syntax, but that code is valid today) DRR: Weirdly enough, ``` followed by ``` is 2 tagged lit invocation, it is a runtime error, it has no problem with static syntax, it can be a syntax error. I don’t know if it’s worth changing in parser. Just wanted to mention that. @@ -327,8 +328,7 @@ JRL: If we `String.dedent` at the tag we can maintain semantics with syntax, and SYG: it seems there should be an opportunity for optimization? - -JRL: In syntax, yes. But the API form will not be able to do that, since the engine will maintain the underlying strings array and dedent will have to maintain its own cached version of that. +JRL: In syntax, yes. But the API form will not be able to do that, since the engine will maintain the underlying strings array and dedent will have to maintain its own cached version of that. SYG: That answered my question and if that is the main con I also prefer the function over syntax here. @@ -405,18 +405,18 @@ HHM: We will lean more towards the function form. JRL: There are fewer issues with the function form, but I personally prefer the syntax form. There is a risk with web compat here with syntax, but the API form has those memory issues. Would be happy with either. SYG: Now that I understand the actual con there I do have semi-significant impl concerns, but they're certainly not Stage 1 concerns. -### Conclusion/Resolution -Stage 1 +### Conclusion/Resolution +Stage 1 ## Temporal Stage 2 Update + Presenter: Ujjwal Sharma (USA) - [proposal](https://github.com/tc39/proposal-temporal/) - [slides](https://docs.google.com/presentation/d/1wkufbATeIxKvYZmd_hlM80x9zJC-C6pTHVwDFtUSqfM/edit?usp=sharing) - USA: (presents slides) JHD: Luckily you answered my question in the slides. It was suggested in July that the spec would be ready for this meeting because it will take more than 2 months. 
Please make sure the spec is frozen modulo naming concerns, so it won’t be overwhelming to review it and keep track of changes that are happening. That way you can ask in January. Second request is a document that explains the current mental model without the history and that document would be useful for the review. To reiterate, I love this proposal and I’d like it to advance. @@ -455,10 +455,10 @@ JHD: The own properties are only immutable if Temporal makes them immutable, and DE: What people have been saying about time for reviews is true, but a misunderstanding about the time to do that. I like the idea of a call to go over the tutorial with reviewers, the polyfill can help people trying, I also want reviews from web developers and we’ll be accepting feedback. People should raise issues if there are issues with the 3 months review. -PDL: On a side note, there will be a workshop at NodeConf.EU this year to gather some feedback. - +PDL: On a side note, there will be a workshop at NodeConf.EU this year to gather some feedback. ## Intl.DisplayNames V2 for Stage 1 + Presenter: Frank Yung-Fong Tang (FYT) - [proposal](https://github.com/FrankYFTang/intl-displaynames-v2/) @@ -481,10 +481,13 @@ PDL: Strong support for this. RPR: The queue is empty. FYT: Asking for Stage 1. + ### Conclusion/Resolution Stage 1 + ## Intl Locale Info for stage 1 + Presenter: Frank Yung-Fong Tang (FYT) - [proposal](https://github.com/FrankYFTang/proposal-intl-locale-info) @@ -501,10 +504,13 @@ PDL: +1. This is really important, I am responsible for some of those hacky libr RPR: The queue is clear. FYT: Asking for stage 1. + ### Conclusion/Resolution + Stage 1 ## Conformance for enumerable options in 262 and 402 + Presenter: Shane F. Carr (SFC) - [ecma402 PR](https://github.com/tc39/ecma402/issues/467) @@ -540,7 +546,7 @@ JHD: It's potentially a breaking change… people are more likely to pass an obj JHD: With the latest timeStyle/dateStyle change, I had 3 or 4 reports of users passing date value in their option bags because the new version of chrome throws when passing on an unknown but did not before. Every time we decide to start throwing we potentially break people’s code. -SFC: I just wanted to be clear that I agree about “unknown properties” in options bags. We ignore them, which is the same approach we take in 402 and Temporal. The specific question I mentioned here is the case of unknown values to known arguments. Should we allow Chrome to accept “fortnight” as a unit and format it instead of throwing? +SFC: I just wanted to be clear that I agree about “unknown properties” in options bags. We ignore them, which is the same approach we take in 402 and Temporal. The specific question I mentioned here is the case of unknown values to known arguments. Should we allow Chrome to accept “fortnight” as a unit and format it instead of throwing? JHD: if there is a way to feature detect then it becomes easier to deal with throws which is better than to deal with try/catch. @@ -555,8 +561,7 @@ PDL: you ignore it, because anything else would break. SFC: In case of the unknown unit fortnight, if the browser doesn’t have that data it, what is the fallback behavior? What do you do if you just ignore unknown string options? PDL: right. This is a big reason why options bags have a lot of issues (despite of the pros). An unknown value needs to be ignored. -If it’s unknown and requires something, it is actually the new implementation that needs to deal with that -Throwing is breaking the web essentially. 
+If it’s unknown and requires something, it is actually the new implementation that needs to deal with that Throwing is breaking the web essentially. MM: We've been leaning toward ignoring unknowns rather than throwing. The dilemma between the two is addressable. You can feature test. However, it's difficult to feature test for what options a given procedure supports, and what enum options it supports, but good for calling code to detect and address the problem. diff --git a/meetings/2020-09/sept-24.md b/meetings/2020-09/sept-24.md index 90ff476e..de5eb1d9 100644 --- a/meetings/2020-09/sept-24.md +++ b/meetings/2020-09/sept-24.md @@ -1,9 +1,10 @@ # September 24, 2020 Meeting Notes + ----- -**In-person attendees:** +**In-person attendees:** -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -29,16 +30,16 @@ | Shu-yu Guo | SYG | Google | | Ross Kirsling | RKG | Sony | - ## Revisit Ergonomic Brand Checks for Private Fields + Presenter: Jordan Harband (JHD) - [proposal](https://github.com/tc39/proposal-private-fields-in-in) - [issue](https://github.com/tc39/proposal-private-fields-in-in/issues/7) -JHD: The request I made last time was, can this reach Stage 3? I wanted JHX to have time to respond. +JHD: The request I made last time was, can this reach Stage 3? I wanted JHX to have time to respond. -JHX: Thank you, JHD. Thanks everyone for the patience. Last meeting, I was the only one who thought this was not OK. Thank you everyone for giving me a chance to comment, especially JHD and DE. Before the meeting I created an issue in the repo that summarizes my reasons for blocking the proposal. Basically, there are 3 problems. (1) conflict with reification; (2) syntax issue; (3) process. I think I have read the notes and the most important part is… I got new information. I want to summarize them. +JHX: Thank you, JHD. Thanks everyone for the patience. Last meeting, I was the only one who thought this was not OK. Thank you everyone for giving me a chance to comment, especially JHD and DE. Before the meeting I created an issue in the repo that summarizes my reasons for blocking the proposal. Basically, there are 3 problems. (1) conflict with reification; (2) syntax issue; (3) process. I think I have read the notes and the most important part is… I got new information. I want to summarize them. The first is about reification. In the last meeting, I said I was unclear on the future of reification. In this meeting, I think that at least I figured out some opinions of other delegates about reification. One important point was that if there is reification, it shouldn’t use `#x` syntax. I understand that based on two reasons. @@ -46,7 +47,7 @@ First, the potential confusion of `#x` and `this.#x` that may return a different Second, reification is another level. You will not need that in most cases, so if there is reification, it should not use different syntax. I think I agree that… I think… it seems there may still be delegates who want reification in the `#x` syntax. I'm not sure about it. I can probably pass this part. The first reason I think it could be seen as solved. -The second thing is about the syntax: it is likely to have reification. I have to say, it's still not clear what the future of reification is. But I'll skip that because I think the reification problem can be accepted by me. On the second point, the syntax, I still feel the syntax is a problem. 
I think JHD said that private fields don't use the symbol semantics intentionally. I can't say whether this is a good idea. I prefer symbol semantics. The point is, I think that the current syntax, overloading `in`, actually violates this goal, and increases the mismatch of the syntax and semantics. I understand that most delegates think it is not so harmful. It seems it will not cause bugs in practice. I get this. It's hard to say how harmful it is. I think in the last meeting, there were some suggestions that we should discuss whether we want… +The second thing is about the syntax: it is likely to have reification. I have to say, it's still not clear what the future of reification is. But I'll skip that because I think the reification problem can be accepted by me. On the second point, the syntax, I still feel the syntax is a problem. I think JHD said that private fields don't use the symbol semantics intentionally. I can't say whether this is a good idea. I prefer symbol semantics. The point is, I think that the current syntax, overloading `in`, actually violates this goal, and increases the mismatch of the syntax and semantics. I understand that most delegates think it is not so harmful. It seems it will not cause bugs in practice. I get this. It's hard to say how harmful it is. I think in the last meeting, there were some suggestions that we should discuss whether we want… BT: We're over our timebox. JHX, can you summarize? @@ -61,6 +62,7 @@ JHD: At this point, I think JHX is the only one who hasn't provided consensus fo Proposal does not yet have consensus. ## Resizable and growable ArrayBuffers for Stage 2 + Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/proposal-resizablearraybuffer/) @@ -69,7 +71,7 @@ Presenter: Shu-yu Guo (SYG) SYG: (presents slides) -JHD: On slide "Auto-length TypedArrays", when you said that when any part of the array goes out of bounds, then the whole array is out of bounds (?). So If i don't interact with a typed array if the ??? goes out of bounds, does the whole array goes out of bounds. +JHD: On slide "Auto-length TypedArrays", when you said that when any part of the array goes out of bounds, then the whole array is out of bounds (?). So If i don't interact with a typed array if the ??? goes out of bounds, does the whole array goes out of bounds. SYG: The way I wrote the spec, then that case is fine. If you never observed it going out of bounds, then it's ok. @@ -91,7 +93,7 @@ MM: So if we're talking just about JavaScript interacting with these abstraction SYG: Correct. If we implement it badly we will have vulns for these new types and these new types only. But if we upgrade the existing paths, we will have vulnerabilities not only of the new types, but also of the existing types. -MM: You're assuming the users of the broken abstraction are the victims. I was thinking that you were worried about cases where the users of the broken abstraction were the attackers. If you introduce a broken abstraction that can be used for attack that's no less vulnerable than changing the old abstraction so that it's vulnerable to the attack. The attackers will go where the attack is possible. +MM: You're assuming the users of the broken abstraction are the victims. I was thinking that you were worried about cases where the users of the broken abstraction were the attackers. If you introduce a broken abstraction that can be used for attack that's no less vulnerable than changing the old abstraction so that it's vulnerable to the attack. 
The attackers will go where the attack is possible. SYG: I don't quite understand. For example, GoogleMaps uses typed arrays. If we screw this up due to implementation bugs, we break Google Maps. But if we implement a new type, Google Maps will not be affected, because they will not migrate to resizable array buffers. @@ -103,7 +105,7 @@ MM: That's a case where there's a host using the buffer as well. That's why when Once you involve hosts as potential victims, sharing the same abstraction with a potential attacker in JavaScript, then the distinction makes sense. -DE: I can understand this concern about web audio more easily than the Google Maps concern. I want to suggest a parallel concern exists in JavaScript. Previously, if you have an ArrayBuffer, you don't have the ability to change its length. But with RAB, now you have that capability. So that can have sort of downstream effects on other users. I was personally surprised about the initial version of this and making existing array buffers resizable. I'm pleased to see this restriction to be only resizable types. Tying down—not removing existing invariants about existing types. I am in support of the design that SYG has made. +DE: I can understand this concern about web audio more easily than the Google Maps concern. I want to suggest a parallel concern exists in JavaScript. Previously, if you have an ArrayBuffer, you don't have the ability to change its length. But with RAB, now you have that capability. So that can have sort of downstream effects on other users. I was personally surprised about the initial version of this and making existing array buffers resizable. I'm pleased to see this restriction to be only resizable types. Tying down—not removing existing invariants about existing types. I am in support of the design that SYG has made. MM: Having gone through this I'm in favor of it too. The WebAudio where there's a host victim and a JS attacker does make clear the need. @@ -131,8 +133,7 @@ SYG: If there were a clear predicate, I don't have too strong feelings one way o SFC: This question of should we have one type with a slot that determines its behavior etc. has come up over and over again in Temporal, and the approach we’ve taken is multiple types, for discoverability, education, it works better with type systems like TypeScript, vs. methods that throw in odd situations. I think it makes sense to have separate types with separate behaviors here too. -SYG: I do find that compelling. You would need a sophisticated system if all array buffers were resizable for example out of bounds accesses. - +SYG: I do find that compelling. You would need a sophisticated system if all array buffers were resizable for example out of bounds accesses. JWK: I think the maximum size does not make sense to me. Users will set it to a very big number so he won’t get [an out of memory error], this make the limit useless. @@ -152,7 +153,7 @@ SYG: That is a real worry. I agree there. WH: I see this settling one of two ways. Either every implementation will allow 2⁵³-1 and go on their merry way, or implementations with much lower limits will have compatibility issues, or we will have to provide guidance for what people should put in that number. Expecting web authors to get this right is too optimistic. -SYG: I share the concern. I have three sub-answers to that. (1) as pointed out by MLS, this is the problem of, let me reserve a very large number, has been seen in WASM memory. 
I'm hoping to engage with the WASM and Chrome team to see how they deal with it. If it's a problem that exists for them today, then we can learn something there. +SYG: I share the concern. I have three sub-answers to that. (1) as pointed out by MLS, this is the problem of, let me reserve a very large number, has been seen in WASM memory. I'm hoping to engage with the WASM and Chrome team to see how they deal with it. If it's a problem that exists for them today, then we can learn something there. MLS: I think I made that comment at the last meeting. Typically WASM allocates way more than they need, they just allocate the full 32-bit address space. At some point, you have a bunch of WASM web pages and you run out of memory on the system. You don't have enough memory to back it up on small systems. @@ -174,7 +175,7 @@ SYG: I do feel that as practically speaking if it is small, you shouldn't use a WH: If they only test on popular browsers which are permissive, there might be issues with other browsers or less popular implementations where it only fails on those platforms. -SYG: That brings me to (3), I have left a lot of latitude for implementations to throw here, if they can't reserve the new page. I can imagine a system that throws for what feels like unreasonable max sizes. I suppose memory issues are in general not interop but I hope to work with Safari in particular to come up with something that's reasonable as a heuristic. I hope that is sufficient for now. I definitely agree that it is a problem. We need to figure out without letting people exhaust their own virtual memory. +SYG: That brings me to (3), I have left a lot of latitude for implementations to throw here, if they can't reserve the new page. I can imagine a system that throws for what feels like unreasonable max sizes. I suppose memory issues are in general not interop but I hope to work with Safari in particular to come up with something that's reasonable as a heuristic. I hope that is sufficient for now. I definitely agree that it is a problem. We need to figure out without letting people exhaust their own virtual memory. WH: I'd like to either see guidance on this when we reach Stage 3, or assume all implementations will accept 2⁵³-1 and deal with it. @@ -198,7 +199,7 @@ SYG: I do think having JS dereferencing the typed arrays will be a common use ca JTO: You don't anticipate that this would affect the- … even in the IC-driven, non-optimizing tier, you don't expect this to affect the performance of existing tiers not backed by resizable array buffers? -SYG: Because these are new types and the ones that are backed by resizable buffers you would give them a different shape and a different hidden class . That doesn't mean there won’t be a slowdown. +SYG: Because these are new types and the ones that are backed by resizable buffers you would give them a different shape and a different hidden class . That doesn't mean there won’t be a slowdown. Where there's a slowdown is, if you have a program that mixes use of TAs that are backed by both fixed-size and resizable buffers in the same callsite, that will become polymorphic where currently they are monomorphic. That's a worry we will have to benchmark. @@ -216,7 +217,6 @@ JTO: I'll ask our team to look into it. KM: I can review if you want another browser person. - ### Conclusion Stage 2. Reviewer companies: @@ -226,6 +226,7 @@ Stage 2. 
Reviewer companies: - Apple ## Builtin Modules for Stage 2 + Presenter: Michael Saboff (MLS) - [proposal](https://github.com/tc39/proposal-built-in-modules) @@ -238,7 +239,7 @@ WH: You mentioned the `BuiltInModule` (BIM) API is for built-ins only. But in th MLS: Actually you can add any user-defined module you want with a prefix you select and it would work. `BuiltInModule` can store user modules as built-in ones. I can export `Apple:Foo` and into the system so it can accept `Apple:Foo`, no issues. So yes, you're right, but we think this is a way that developers can provide their own modules at start-up. -GCL: To me, this proposal seems strictly worse than just introducing a new `ES` global object that we put built-ins under. I think BIMs are cool, and designing JS from scratch, of course I'd use them, but given all the constraints we have to fulfil with our standard library and all the API we are adding here and all the weird behavior we are running into, we have already solved it using globals. I get the namespace is busy but it seems to be that adding a new object solves better than what is being proposed. +GCL: To me, this proposal seems strictly worse than just introducing a new `ES` global object that we put built-ins under. I think BIMs are cool, and designing JS from scratch, of course I'd use them, but given all the constraints we have to fulfil with our standard library and all the API we are adding here and all the weird behavior we are running into, we have already solved it using globals. I get the namespace is busy but it seems to be that adding a new object solves better than what is being proposed. MLS: How does that deal with the memory implications? @@ -246,9 +247,9 @@ GCL: Explain? MLS: That you don't pay the cost of the library that you don't use. -GCL: Implementations already lazily load all sorts of values and stuff. +GCL: Implementations already lazily load all sorts of values and stuff. -MLS: Implementations typically store only text. You are paying the price in text space, but worse than that you are going to incur some memory overhead before you lazily load features. So you're going to be paying a penalty for that. +MLS: Implementations typically store only text. You are paying the price in text space, but worse than that you are going to incur some memory overhead before you lazily load features. So you're going to be paying a penalty for that. GCL: Whatever magic happens to provide the API when you call the synchronous script API can also happen when you access a property on the global object. @@ -264,11 +265,11 @@ KG: Reload the agenda to get the latest slides. The link changed an hour ago. KG: I want to get more into the memory issue. I don't understand exactly where the benefit for memory in implementations come from. Is there a reason modules are significantly different here? -MLS: In WebKit, we link in all the code. If something is lazilly added to the global object… The reason we are doing this is because. The global object takes a long time to initialize. +MLS: In WebKit, we link in all the code. If something is lazilly added to the global object… The reason we are doing this is because. The global object takes a long time to initialize. MLS: Primarily, our lazy loading is due to startup time for the global object. Which means that we’re paying the price for part of the memory cost of the objects. When you activate an object--when it’s used--you pay more memory cost to create the object, but you pay some anyway. 
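A rough sketch of the lazy-installation pattern being described; this is the general technique, not WebKit's actual implementation, and `Complex` is a stand-in name:

```js
// On first access, the getter replaces itself with the fully initialized value,
// so most of the setup cost is deferred until the feature is actually used.
Object.defineProperty(globalThis, "Complex", {
  configurable: true,
  get() {
    const value = class Complex { // stand-in for an expensive-to-build library object
      constructor(re, im) { this.re = re; this.im = im; }
    };
    Object.defineProperty(globalThis, "Complex", {
      value,
      writable: true,
      configurable: true,
    });
    return value;
  },
});
```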
-KG: The thing I had in the queue was, there are APIs in the web platform that aren't synchronously available. You have to await a Promise. It seems that would solve the memory issue if a big complicated library is async available and you wait for the library to resolve. The only cost you are paying is for the library to load up at the start up. +KG: The thing I had in the queue was, there are APIs in the web platform that aren't synchronously available. You have to await a Promise. It seems that would solve the memory issue if a big complicated library is async available and you wait for the library to resolve. The only cost you are paying is for the library to load up at the start up. MLS: The existing import function fits that, right? @@ -278,7 +279,7 @@ MLS: We already have modules we can bring in from the network. You don't see an KG: The advantage of those is that it allows users to organize their code in a particular way. But engines don’t have the constraints as users-- Engines can organize their code behind the scenes in ways that are not available to users. -MLS: But users they are declaring their need for a particular module. they are going to load their module with an existing need they have. +MLS: But users they are declaring their need for a particular module. they are going to load their module with an existing need they have. KG: Sure, yes @@ -286,27 +287,27 @@ MLS: And, why don't we want to extend that to the local implementation? KG: Because you have to introduce a whole bunch of APIs. -MLS: I am talking about introducing 5 APIs. +MLS: I am talking about introducing 5 APIs. KG: They aren't trivial APIs. I’m not saying it’s a huge problem to add them; I just don’t see any advantage. I understand the memory issue with adding new synchronous complicated APIs to the global, just making the asynchronous. And then we don’t make everyone learn to virtualize things in a new way. BFS: In order to shim things, it must be done eagerly and synchronously. One of the concerns we brought up was loading and parsing source text. Somewhat on the tail end of KG, in order to avoid having shims always load source text, or any built-in that may not be present, we would have to wrap it in a Promise somehow. So, that means anything of sufficient size… the only way to avoid eagerly loading source text would be to have the promise be wrapped in an API, does not have to be a promise, could be something else. So it seems, if the expectation that BIMs are expensive to load, size, computation, etc., how are we expected to eagerly shim these without incurring that cost? -MLS: The synchronous nature of shimming was a requirement for Jordan’s Stage 2 blocker back in Berlin, that it needed to be synchronous, for the classic scripts--they can use Promises, but you need to be able to do your shimming before the rest of the application runs. Unlike a network module, the module is part of the implementation. It may be on the filesystem or some other storage. It could be already compiled as part of a native dynamic library. So I don’t know if we could if we have to think that there’s always going to be a notable time access built in time because it can be optimized by implementation beyond source text. +MLS: The synchronous nature of shimming was a requirement for Jordan’s Stage 2 blocker back in Berlin, that it needed to be synchronous, for the classic scripts--they can use Promises, but you need to be able to do your shimming before the rest of the application runs. 
Unlike a network module, the module is part of the implementation. It may be on the filesystem or some other storage. It could be already compiled as part of a native dynamic library. So I don’t know if we could if we have to think that there’s always going to be a notable time access built in time because it can be optimized by implementation beyond source text. BFS: Are we talking about users shimming it ? I can believe that, for host implementations, they always have it available ambiently, somehow. For shimming, they (???) MLS: If, shim or polyfill, if that comes across the network, you have the latency coming across the network. That no different whether its a built in module or global. There is no benefit in BIMs of something that needs to be shimmed. It shouldn't be any longer than it is now. Does that answer the question? -BFS: I think it does . it seems it is an accepted downside. +BFS: I think it does . it seems it is an accepted downside. GCL: If these builtins must be available synchronously, where does the load async benefit come from? -MLS: You do an async import of a BIM. That Promise… I expect the implementation would return the promise already resolved. I think thats more of allowing a programming model of someone wants to use than a performance issue. If someone wants to use modules whether BIM or network, I mean sure. +MLS: You do an async import of a BIM. That Promise… I expect the implementation would return the promise already resolved. I think thats more of allowing a programming model of someone wants to use than a performance issue. If someone wants to use modules whether BIM or network, I mean sure. GCL: So if you're an implementation that wants to take advantage of loading these async, how do you do that while also providing them sync? You said an implementation can load these modules async, but at the same time, the modules must be available synchrnously for scripts. -MLS: If I had an API that added two numbers and provided a sync version and async version, they take the same time. One returns a value one returns a promise. So, loading the BIM async, it's going to be resolved and loaded, and you get back a Promise. It's a developers choice whether they want to optimize over the network or module. There is no need to restrict async vs sync. +MLS: If I had an API that added two numbers and provided a sync version and async version, they take the same time. One returns a value one returns a promise. So, loading the BIM async, it's going to be resolved and loaded, and you get back a Promise. It's a developers choice whether they want to optimize over the network or module. There is no need to restrict async vs sync. GCL: So there's no benefit on the implementation on the engine side of the async option? @@ -322,51 +323,51 @@ MLS: On the lazy load code slide, if the application already accessed the module MM: I didn't hear; can you repeat? -MLS: JWK was basically asking, do you polyfill the code first, or do you access it? If you polyfill, when you go to lazy load what do you get. Let's suppose you polyfill lazy load, the property on complex is no longer going to be the property you defined. Its going to be the actual thing you polyfilled. +MLS: JWK was basically asking, do you polyfill the code first, or do you access it? If you polyfill, when you go to lazy load what do you get. Let's suppose you polyfill lazy load, the property on complex is no longer going to be the property you defined. Its going to be the actual thing you polyfilled. 
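To make the ordering constraint concrete, a minimal sketch of the classic-script global shim that built-in-module shimming is being compared against; `Complex` and `conjugate` are hypothetical stand-ins, not part of the proposal:

```js
// The shim must run before any application code that relies on the feature.
if (typeof globalThis.Complex === "undefined") {
  // Feature missing entirely: install a userland implementation.
  globalThis.Complex = class Complex {
    constructor(re, im) { this.re = re; this.im = im; }
  };
}
if (typeof globalThis.Complex.prototype.conjugate !== "function") {
  // Feature present but incomplete: patch only the missing piece.
  globalThis.Complex.prototype.conjugate = function () {
    return new globalThis.Complex(this.re, -this.im);
  };
}
```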
JWK: I'm not talking about the global variable. My concern is: what if the partial polyfill is loaded, but the module isn't actually used anywhere else? Only the polyfill is importing the module to see if it needs to be polyfilled. It's not used anywhere else, but the import cost is still paid.

MLS: Yeah, you pay the cost, but the app has to have the discipline to only polyfill the things it's going to use. If you don't use this feature on the web page or application and you never use the module, but you polyfilled it and brought it in anyway, that's just the way the application is written.

SYG: We've been talking a lot about the technical parts of polyfilling, sync/async, etc.
From Chrome's position, and while I roughly agree with the position, I did verify this position up my chain, so don't shoot the messenger… from Chrome's position, let's separate the semantics of the BIMs and, once we have BIMs, are we going to use it for the ecosystem. +To me, that has two sub-questions: Are we going to use this for the web at large (outside of TC39), things that users don’t necessarily know to not be part of the core language--streams, fetch, data blobs, etc. And (2), are we going to use this for the JS standard library in TC39 itself? This is the ecosystem divergence problem. We currently have globals, people disagree on how well they work, but Chrome's position is that they work fine. So given that, if we suddenly have BIMs, users would have to understand, is this thing a BIM or is it a global, and how do I get at it? The thinking from the Chrome web platform leadership is that this is harmful. We should not diverge from standardized platform features to be available in two different ways. Chrome would not use BIM currently for new web APIs standardized outside of TC39. They will continue to use globals. And (2), this is a weaker no, but should we use BIMs in TC39 itself? Chrome's position is a weaker no. I could reopen this question, but the thinking is that the same problem applies to JS features. +From the web platform’s point of view, there is not a clear cut line that is beneficial to web devs between things that were standardardized. For that reason, it is very unlikely that if we were to get BIMs, that Chrome would ship the ability to import them as BIMs. So if Temporal were to be spec'd as a BIM and shipped under js:Temporal, the current thinking is that that wouldn't be made available. Personally speaking I think there is tremendous value in having a built in module machinery for other ecosystems that are not the web. I understand that it is not current champions group desires, so its not a value add for them. But personally speaking, I think it's worth discussing the machinery separately from the ecosystem question. But on the ecosystem question, I think we are unlikely to ship new features as BIMs. If there were a sync capability that made it look like globals, the thinking is that we would ship those. The new BIM would only be available only via global like mechanism if available. -MLS: How do you see that TC39 would spec BIMs if one of the major browsers would not deploy it? Is it a normative optional feature that would be useful for TC53 or NodeJS? How do we spec this? +MLS: How do you see that TC39 would spec BIMs if one of the major browsers would not deploy it? Is it a normative optional feature that would be useful for TC53 or NodeJS? How do we spec this? SYG: I think that the BIM, for the sake of argument, suppose that we agree to ship BIM global that you have proposed here. Suppose it was this thing like you have on the slide, where you have a magic globalThis.Temporal. We would ship that, but you can't get importjs:temporal. I am not sure(?) The contents of Temporal would need to be normative, but the way it were made available to users were flexible, globals or BIMs, the module part would have to be the normative optional. But thats a tech spec thing. I don't mean to imply in this position that, for example, the entire contents of js:Temporal would be normative-optional. I think that would be a bad outcome. -JTO: Mozilla's position is similar. 
The fact that there's little interest from web standards bodies in migrating from globals to BIMs means that it would be harmful for JS standards itself to do that. It would result in splitting the ecosystem. I want to emphasize that I am not opposed to the building blocks. Having a BIM system that could unify how BIMs work across other environments would be valuable even if the JS standard library doesn't go that route. +JTO: Mozilla's position is similar. The fact that there's little interest from web standards bodies in migrating from globals to BIMs means that it would be harmful for JS standards itself to do that. It would result in splitting the ecosystem. I want to emphasize that I am not opposed to the building blocks. Having a BIM system that could unify how BIMs work across other environments would be valuable even if the JS standard library doesn't go that route. MLS: My comment that I made is that I don't see that as providing one 1JS - which I think is the unspoken motto of TC39 - one JS. It is not a browser language or whatever. It is language. And so, to me, it seems like if something's in the spec, and we want to support and implement it, it seems incongruous to me that we would spec a feature that we never intend to use in the ecosystem even if we liked the feature. Why spec something we think is cool but we are not gonna use it? JTO: Mozilla would not be the target audience for that feature. We have no objection to the APIs being added and used by other hosts. -MLS: The other comment is that Apple that participates in web standards would like to implement those [web standards] as modules, not just in TC39. +MLS: The other comment is that Apple that participates in web standards would like to implement those [web standards] as modules, not just in TC39. -JWK: The requirements of “the shim code must be run before the main application” requires a big change in the module execution order. That requires some code be more prior to the other codes. I think this is not look good to me. +JWK: The requirements of “the shim code must be run before the main application” requires a big change in the module execution order. That requires some code be more prior to the other codes. I think this is not look good to me. -MLS: This is a requirement on classic scripts. Classic scripts must do shimming before the script runs. That is a req for classic scripts, and JHD can provide details there. +MLS: This is a requirement on classic scripts. Classic scripts must do shimming before the script runs. That is a req for classic scripts, and JHD can provide details there. JWK: I think it might be possible to work around this problem by providing a new syntax-level block to indicate it is shimming some module, but don't execute the block until the module is first being accessed. -MLS: I'm not sure how you'd spec that. It seems like you'd have a task you queue up that's dependent on the module. And there you would do the shimming on the first use. +MLS: I'm not sure how you'd spec that. It seems like you'd have a task you queue up that's dependent on the module. And there you would do the shimming on the first use. -BFS: Polyfilling globals must run before your main app code. There is some complex nuances to that. We have made it much more complicated to use top level await. Top-level await causes interesting behaviors in sibling modules. There's a double-wrapping thing. 
It's not like this requirement doesn't exist already, so I don't know if we need to do anything about it, because you already have to deal with it. +BFS: Polyfilling globals must run before your main app code. There is some complex nuances to that. We have made it much more complicated to use top level await. Top-level await causes interesting behaviors in sibling modules. There's a double-wrapping thing. It's not like this requirement doesn't exist already, so I don't know if we need to do anything about it, because you already have to deal with it. -MLS: Yeah. CJS I think you need to do it in the engine. +MLS: Yeah. CJS I think you need to do it in the engine. -WH: People were asking about how you lazily shim modules. Would a closure work? What do you mean by “some shims would need engine support”? +WH: People were asking about how you lazily shim modules. Would a closure work? What do you mean by “some shims would need engine support”? MLS: You could probably do it all in JS, but if someone needs to do it in the engine itself, we may not export enough APIs. -In the example on the slide, this is conceptually what you would do. But you need to do more to make the property defined here. It is JS 80% eq of what you would do. The engine would have to know the internal object. +In the example on the slide, this is conceptually what you would do. But you need to do more to make the property defined here. It is JS 80% eq of what you would do. The engine would have to know the internal object. WH: The thing I was thinking of was, you could have a `BuiltInModule` export API that, instead of exporting a module directly, takes a function that it would call to define the module the first time someone imported it. -MLS: okay so, on import kind of thing? +MLS: okay so, on import kind of thing? WH: Yeah. @@ -376,11 +377,11 @@ DE: I like the idea of an API that takes a closure for lazy polyfilling. There's MLS: You made that as a follow-on or separate proposal, right? -DE: I am very much in support of this proposal as is. +DE: I am very much in support of this proposal as is. MLS: For async shimming, you were thinking a separate proposal? -DE: I think it could be part of this or separate proposal. People were talking about the benefit. I feel like this benefit, of reducing the polyfill loading overhead from JS implementations in general that already have the BIM, is a big technical benefit to consider. But do you think it layers on top of the base that you have in a clear way. I can understand the problem of ecosystem split raised by Chrome and Mozilla but I don't understand the alternative raise of specifying something in TC39 for BIM that is not for the web. I think hosts already have the machinery they need to do this. In TC39, we'd be specifying something that's already there in code and common. I don't want us to get into - maybe we can make a task group, I would really like us to be unified. +DE: I think it could be part of this or separate proposal. People were talking about the benefit. I feel like this benefit, of reducing the polyfill loading overhead from JS implementations in general that already have the BIM, is a big technical benefit to consider. But do you think it layers on top of the base that you have in a clear way. I can understand the problem of ecosystem split raised by Chrome and Mozilla but I don't understand the alternative raise of specifying something in TC39 for BIM that is not for the web. I think hosts already have the machinery they need to do this. 
In TC39, we'd be specifying something that's already there in code and common. I don't want us to get into - maybe we can make a task group, I would really like us to be unified. JWK: Check out #67 on the repo. @@ -392,22 +393,22 @@ SYG: Since 1JS is a priority, the suggestion I had made is not coherent. So it's MLS: So what does the rest of the committee think about adding this as a mechanism knowing that multiple implementations won't use it? -DE: I think we are here to make a standard we all agree on somehow. +DE: I think we are here to make a standard we all agree on somehow. AKI: I feel not positive about it. WH: What is TC53’s position on this? Would they use it? -AKI: We are at time. Let's come back and discuss this more. +AKI: We are at time. Let's come back and discuss this more. -MLS: Google and Mozilla are on the record for blocking this. +MLS: Google and Mozilla are on the record for blocking this. ### Conclusion The proposal will not advance. - ## Error Cause for Stage 1 + Presenter: Chengzhong Wu (CZW) - [proposal](https://github.com/legendecas/proposal-error-cause) @@ -417,9 +418,9 @@ CZW: (presents slides) CZW: Asking for stage 1. -JHD: (in queue: why is this better than expandos, or an AggregateError, where the "errors" can be any values explaining the cause? ) Why is this better than one of a few alternatives, like expandos properties, or an Aggregate Error or a UserLand library? You also told us devtools could handle it. At the moment your example spec text allows it to be any value. +JHD: (in queue: why is this better than expandos, or an AggregateError, where the "errors" can be any values explaining the cause? ) Why is this better than one of a few alternatives, like expandos properties, or an Aggregate Error or a UserLand library? You also told us devtools could handle it. At the moment your example spec text allows it to be any value. -CZW: any value can be thrown so we are not limiting, so we can have any caught error as the cause even though js values don’t have much info, this is different from the stack. The cause is why this error happened so we can augment. For users if this array is thrown from deep internal, this might help to ?? these exceptions. The diagnosis will be more pleasant and easy to conduct. And why this should be in the spec, this way users can confidently rely on the error (?) If this is in the spec, the user can safely just construct this property with this argument. We can attach to the error instance and we can analyze it for you and this argument and all is done here is no additional contract between the user and the devtools. +CZW: any value can be thrown so we are not limiting, so we can have any caught error as the cause even though js values don’t have much info, this is different from the stack. The cause is why this error happened so we can augment. For users if this array is thrown from deep internal, this might help to ?? these exceptions. The diagnosis will be more pleasant and easy to conduct. And why this should be in the spec, this way users can confidently rely on the error (?) If this is in the spec, the user can safely just construct this property with this argument. We can attach to the error instance and we can analyze it for you and this argument and all is done here is no additional contract between the user and the devtools. JHD: To make sure I understood correctly, you're saying that even if this was just a convention, by putting it in the language, that encourages interop between languages and tools? 
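For reference, a small sketch of the pattern the proposal targets, using the options-bag shape from the proposal repo (the exact shape could still change at Stage 1); `fetchConfig` and its URL handling are hypothetical:

```js
async function fetchConfig(url) {
  try {
    const response = await fetch(url);
    return await response.json();
  } catch (err) {
    // Re-throw with added context while keeping the original error reachable
    // as `error.cause`, instead of mutating the error or dropping it.
    throw new Error(`Failed to load config from ${url}`, { cause: err });
  }
}
```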
@@ -450,19 +451,21 @@ JRL: (via queue message) Would love this for AMP. We currently mutate the error CZW: Can this advance to Stage 1? (silence) + ### Conclusion/Resolution Stage 1 + ## Double-Ended Iterator and Destructuring for Stage 1 -Presenter: John Hax (JHX) + +Presenter: John Hax (JHX) - [proposal](https://github.com/hax/proposal-deiter) - [slides](https://johnhax.net/2020/tc39-sept-deiter/slide#0) JHX: (presents slides) -MM: (from queue: destructuring better motivated than double iterator) I like the ideas about the better destructuring and having rest in the middle. Doing it both for destructuring and for parameters feels right. Parameters might be more complicated because of all the other semantics on parameters, but at least for destructuring. However, I don’t find that desire worth it for double iterators. I understand the scaling point but in my experience, the destructuring patterns and parameter patterns are not used at the scale where scaling is relevant. It is not worth adding a second iteration and taking it up apart in the iterator match -I think it's a fine way to implement it that doesn't imply any iterator change. +MM: (from queue: destructuring better motivated than double iterator) I like the ideas about the better destructuring and having rest in the middle. Doing it both for destructuring and for parameters feels right. Parameters might be more complicated because of all the other semantics on parameters, but at least for destructuring. However, I don’t find that desire worth it for double iterators. I understand the scaling point but in my experience, the destructuring patterns and parameter patterns are not used at the scale where scaling is relevant. It is not worth adding a second iteration and taking it up apart in the iterator match I think it's a fine way to implement it that doesn't imply any iterator change. JHX: There are two reasons. One is the scale issue, and the other is how you can double write the iterator in userland. @@ -474,7 +477,7 @@ MM: Thank you. KM: I agree it’s probably not a good idea, a footgun to (not??) have a double iterator. -JWK: The performance is ??? +JWK: The performance is ??? KM: If you have any kind of custom iterator, you'd have to run the whole thing. There is no way to optimise other than looping the whole thing. You can probably optimize but it would be complicated. @@ -501,10 +504,13 @@ WH: I'm reluctant about the utility of some of this. That's all I will say. JHX: Asking for stage 1. (silence) + ### Conclusion/Resolution + Stage 1 ## Standardized Debug for Stage 1 + Presenter: Gus Caplan (GCN) - [proposal](https://github.com/devsnek/proposal-standardized-debug) @@ -512,7 +518,7 @@ Presenter: Gus Caplan (GCN) GCN: (presents slides) -JWK: It's possible to make debugger an expression that prints and returns the value. +JWK: It's possible to make debugger an expression that prints and returns the value. GCN: Not entirely sure what you’re suggesting. @@ -565,7 +571,7 @@ WH: The syntax I was objecting to was things like adding a new `!` operator. If MM: Great! -JWK: (via queue) - https://github.com/devsnek/proposal-standardized-debug/issues/3 +JWK: (via queue) - https://github.com/devsnek/proposal-standardized-debug/issues/3 RBN: `debugger` being an expression in 2017-2018: We discussed the possibility of `debugger` being an expression back in 2017 or 18, when we were talking about throw expressions. At the time it seemed like an obvious "why wouldn't we do this?" 
If debugger was just debugger expression without any operands and there is almost no change to semantics it’s just syntax. I would consider using debugger.meta properties. There's a use case for debugger meta-properties that I was thinking about. VS Code shipped a new feature in their debugger for Node, that lets you evaluate an exprsesion in the debugger to give you information about the expression you have in your watch window. (??) I could see a possible use case of debugger. extensions something like debugger.typeProxy and watch it in another window. Or extensions that say I should step over this whenever I encounter it, for example. There's a number of interesting cases I could see that combine debugger metaproperties and decorators. This seems interesting for more use cases, and the idea is good but the syntax should be discussed. @@ -576,10 +582,13 @@ GCL: If the proposal were to go in the direction of moving console into the spec GCL: Does anyone object to Stage 1? (silence) + ### Conclusion/Resolution -* Stage 1 reached. + +- Stage 1 reached. ## Unused Function Parameters for Stage 1 + Presenter: Gus Caplan (GCL) - [proposal](https://github.com/devsnek/proposal-unused-function-parameters) @@ -596,7 +605,8 @@ GCL: there are definitely some thoughts to be put into this, I'm not proposing a WH: Syntactic support for `?` and `*` both seem really scary, for different reasons. RBN: I'm concerned about `?`. It conflicts with the partial application proposal which I still plan to advance at some point. It would lead to an extremely complex cover grammar to make that work. We already had to deal with complexity with arrow functions. Generally I prefer the elision mechanism (,, id) because it's similar to what we already do for elision in arrays. I’m also worried about star, it could be confusing just looking at documentation. Examples already legal since ES2015: -``` + +```js function f(...[, , x]) { } const g = (...[, , x]) => {} @@ -628,11 +638,11 @@ GCL: Just to clarify, underscore is one of the options here. YSV: I mean explicitly naming, that is underscore plus the variable name. -MM: I very much agree with YSV, I have used bare _ in other languages. I found it unpleasant when I came to JS and couldn't repeat the single underscore. In JS not having this, I found it irritating, but using linters and tools we have today led me into the _name pattern. I rapidly came to realize it's actually better than what I wanted to do. I think this proposal is solving a problem that is better addressed by adopting new habits to name unused parameters. The gain is small enough that adding new syntax (even elisions) is not worth it. +MM: I very much agree with YSV, I have used bare _in other languages. I found it unpleasant when I came to JS and couldn't repeat the single underscore. In JS not having this, I found it irritating, but using linters and tools we have today led me into the_name pattern. I rapidly came to realize it's actually better than what I wanted to do. I think this proposal is solving a problem that is better addressed by adopting new habits to name unused parameters. The gain is small enough that adding new syntax (even elisions) is not worth it. DRR: Technically it's possible to ignore something with an empty binding pattern. It looks pretty horrible but it is possible. -JHD: It doesn’t work as it throws on null. +JHD: It doesn’t work as it throws on null. JHX: [] doesn't work. @@ -663,9 +673,13 @@ JHX: if a proposal is still Stage 0, can it have a user babel plugin to test? 
RPR: Babel is not bound by the TC39 stage process. Anyone can add a plugin. JRL: I can speak to Babel. We don't allow people to do syntax plugins. If you want to do a normal transform such as a comment or something you could, but it would be a little bit weird. + ### Conclusion/Resolution + Not advancing + ## Modulus and Additional Integer Math for Stage 1 + Presenter: Peter Hoddie (PHE) - [proposal](https://github.com/phoddie/integer-and-modulus-math-proposal) @@ -673,12 +687,11 @@ Presenter: Peter Hoddie (PHE) PHE: (presents slides) -SYG: to make sure I understand: is the suggested semantics as `(x operator y) | 0`? +SYG: to make sure I understand: is the suggested semantics as `(x operator y) | 0`? PHE: I'm not certain about some of the edge cases of that. -SYG: I would like to know the answer because asm.js (??) is a thing now -That said, I do generally support your proposal and these methods. Even though asm.js is a thing now, I don’t assume engines will continue to special-case asm.js syntax going forward. +SYG: I would like to know the answer because asm.js (??) is a thing now That said, I do generally support your proposal and these methods. Even though asm.js is a thing now, I don’t assume engines will continue to special-case asm.js syntax going forward. WH: To answer SYG, if you’re proving int32 inputs, the answer is yes for some of the operations and no for some of the operations. @@ -727,16 +740,17 @@ PHE: Consensus for Stage 1? GCL: I’d almost say it could go for Stage 2. WH: I support this proposal. I’d like to see the semantics before this progresses to Stage 2. + ### Conclusion/Resolution -* Stage 1! +- Stage 1! ## Incubation call chartering - No notes - ## Mechanisms to lighten the load on note takers + Presenter: Philip Chimento (PFC), Mark Miller (MM) - [issue](https://github.com/tc39/Reflector/issues/227#issuecomment-691092343) diff --git a/meetings/2020-11/nov-16.md b/meetings/2020-11/nov-16.md index 58634e77..5e1d6fee 100644 --- a/meetings/2020-11/nov-16.md +++ b/meetings/2020-11/nov-16.md @@ -1,7 +1,8 @@ # 16 November, 2020 Meeting Notes + ----- -**Attendees:** +**Attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Michael Ficarra | MF | F5 Networks | @@ -12,7 +13,7 @@ | Jack Works | JWK | Sujitech | | Jordan Harband | JHD | Invited Expert | | Chip Morningstar | CM | Agoric | -| Ujjwal Sharma | USA | Igalia | +| Ujjwal Sharma | USA | Igalia | | Daniel Ehrenberg | DE | Igalia | | Michael Saboff | MLS | Apple | | Devin Rousso | DRO | Apple | @@ -32,8 +33,8 @@ | Zhe Jie Li | LZJ | 360 | | Shu-yu Guo | SYG | Google | - ## Opening, welcome, housekeeping + Presenter: Aki Braun (AKI) AKI: (presents slides) @@ -70,7 +71,7 @@ Discuss Current Topic allows you to reply to an active conversation, and should Clarifying Question will jump you to the top of the queue, you should only ever use it if you are otherwise unable to follow the conversation. Abuse of this button will not make you friends, but using it responsibly will. - The Point of Order button is like the nuclear option—it interrupts all conversation to be acknowledged. The best example of using it appropriately is when there are not enough note-takers, though I will come back to that in a moment. +The Point of Order button is like the nuclear option—it interrupts all conversation to be acknowledged. The best example of using it appropriately is when there are not enough note-takers, though I will come back to that in a moment. 
Once it's your turn to speak you'll have a little "I'm done speaking" button added to your view. Please remember to click it when you are done speaking. @@ -78,11 +79,11 @@ TC39 also has several IRC channels. These all have their purpose, but two in par Our hallway track will once again be Mozilla hubs, the link can be found in the Reflector. Thus far it's the closest we've found to the real experience—wander between conversations, sit down to put the finishing touches on your slides, or strike up a conversation about the botched PS5 pre-release. The sky's the limit. If your computer is struggling with the rendering, try tinkering with your settings to force Hubs to render at 800x600. -Now, let's talk IPR, or Intellectual Property Rights. Generally, in order to participate in the TC39 plenary or champion a proposal, you must represent an Ecma member organization as a delegate. There are exceptions, including Invited Experts, who may attend and participate with the permission of both the chair group and the Ecma Secretariat. Any participant who is not an active delegate for a member org must register by signing the R-F-T-G agreement. Those letters stand for Royalty Free Task Group, and I am not a lawyer but I'm pretty sure it means you are relinquishing your IP rights over your contribution to Ecma so we can publish the standard every year. For more information, see the CONTRIBUTING document on the Ecma262 repo. +Now, let's talk IPR, or Intellectual Property Rights. Generally, in order to participate in the TC39 plenary or champion a proposal, you must represent an Ecma member organization as a delegate. There are exceptions, including Invited Experts, who may attend and participate with the permission of both the chair group and the Ecma Secretariat. Any participant who is not an active delegate for a member org must register by signing the R-F-T-G agreement. Those letters stand for Royalty Free Task Group, and I am not a lawyer but I'm pretty sure it means you are relinquishing your IP rights over your contribution to Ecma so we can publish the standard every year. For more information, see the CONTRIBUTING document on the Ecma262 repo. Alright, let's talk notes! Thank you to anyone who signed up on the reflector for note-taking shifts. Your work is immensely appreciated. We're still a bit short for total coverage though, so if we could get some volunteers, I'd be forever grateful. Note-taking is a little different this meeting, due to a phenomenal new tool from Kevon Gibbons. Kevin, do you want to introduce that? -KG: Sure. So as you all know, notes are vitally important; as you also all know, we never have enough note takers. Thank you as always to the people who've been taking notes. I have tried to hopefully relieve some of the burden on y'all - or us all I should say - for this meeting and hopefully future meetings. So we're going to be trying a new tool which just glues the output of the meeting to the Google Cloud speech-to-text API and then glues that into Google Docs somewhat haphazardly. So if you look at the notes, you will notice that they are being taken in real time. That is my other computer doing that. You will also notice that they are full of typos and trying to capture literally what you said instead of what you meant to say, with all of your umms and ahhs and so on in there and with a bunch of typos, so we definitely will continue to need note takers. 
I will let the chairs do the calls for those, but hopefully it will be less work to take notes than it has been in meetings past and I am hoping we can get some people who have been hesitant to take notes because it would take too much of their attention to sort of just follow along and correct typos as they go and maybe swap out long sentences with summaries and that sort of thing. Please ping me if it falls over. +KG: Sure. So as you all know, notes are vitally important; as you also all know, we never have enough note takers. Thank you as always to the people who've been taking notes. I have tried to hopefully relieve some of the burden on y'all - or us all I should say - for this meeting and hopefully future meetings. So we're going to be trying a new tool which just glues the output of the meeting to the Google Cloud speech-to-text API and then glues that into Google Docs somewhat haphazardly. So if you look at the notes, you will notice that they are being taken in real time. That is my other computer doing that. You will also notice that they are full of typos and trying to capture literally what you said instead of what you meant to say, with all of your umms and ahhs and so on in there and with a bunch of typos, so we definitely will continue to need note takers. I will let the chairs do the calls for those, but hopefully it will be less work to take notes than it has been in meetings past and I am hoping we can get some people who have been hesitant to take notes because it would take too much of their attention to sort of just follow along and correct typos as they go and maybe swap out long sentences with summaries and that sort of thing. Please ping me if it falls over. AKI: Thank you very much. Talk about a great example of when it's the right time to use a point of order - if the notes transcriber falls over. Alright, so we will still be calling for note-takers. We will still need no takers. But the task of notetaking hopefully just got quite a bit easier. We'll see. @@ -90,13 +91,13 @@ Our next meeting will be January 25th through the 28th. It was meant to be at th On to some standard housekeeping. Has everyone had an opportunity to review last meeting's minutes? Do we have approval from the 22 of you or whatever who are present? [paused for input; group remained silent] I'm gonna say motion passes. We have approval of minutes for the last meeting. -How about this meeting's agenda which you all have seen because there was a 10-day deadline for the important bits to get onto it. Can we move ahead with the current agenda? [silence] Great, excellent. All right time for secretary and editor reports unless chairs. Do you have anything else you wanted to get in before we move on to two reports and updates? +How about this meeting's agenda which you all have seen because there was a 10-day deadline for the important bits to get onto it. Can we move ahead with the current agenda? [silence] Great, excellent. All right time for secretary and editor reports unless chairs. Do you have anything else you wanted to get in before we move on to two reports and updates? ## Plenary Scheduling -RPR: This is an update on scheduling of the plenaries. So just a recap of the last year's we really kept our usual long-standing meeting cadence of once every couple of months. So six meetings a year and we make sure now we've got our full 16 hours of that over four days. So we switched to fully remote so quite a big change, but actually everything seemed to be basically ok. 
And we've been doing each day has been split into two sessions which are two hours each. The reason for going to four days was basically that was it's a long time to stare at a computer if you are in there for the full six or seven hours and on the time zones, we still kept the notion of a geography and today we are in Budapest thankfully and so that's be retained as well. So looking ahead, for next year we talked about this or we've kind of advertised this a few times over the last last year in our next year's plans, but we're moving to quarterly meetings and this is kind of intended to reflect the full in person meetings that we would have done. Obviously it's not really going to be in person. I think the earliest we might consider going to real life meetings might be the end quarter of next year, but obviously we really have to see how everything plays out +RPR: This is an update on scheduling of the plenaries. So just a recap of the last year's we really kept our usual long-standing meeting cadence of once every couple of months. So six meetings a year and we make sure now we've got our full 16 hours of that over four days. So we switched to fully remote so quite a big change, but actually everything seemed to be basically ok. And we've been doing each day has been split into two sessions which are two hours each. The reason for going to four days was basically that was it's a long time to stare at a computer if you are in there for the full six or seven hours and on the time zones, we still kept the notion of a geography and today we are in Budapest thankfully and so that's be retained as well. So looking ahead, for next year we talked about this or we've kind of advertised this a few times over the last last year in our next year's plans, but we're moving to quarterly meetings and this is kind of intended to reflect the full in person meetings that we would have done. Obviously it's not really going to be in person. I think the earliest we might consider going to real life meetings might be the end quarter of next year, but obviously we really have to see how everything plays out -We then have smaller meetings that are two days. This is still official plenary. So it still means the 10-day advance notice you want if you want stage advancement, but it's a shorter session and perhaps this kind of pro forma - this kind of boilerplate we do at the start - might be a little bit shorter as well and the same structure of the day. So two sessions two hours each and what this amounts to is the same number of hours, but more frequent meetings. This is all based on the feedback that we've had. So the actual schedule is here. We've kind of worked hard to look at the delegate survey to influence the time zones and the locations that we've selected trying to be fair. And so there is a bias towards specific time, but that is because you know half our members are on Pacific Time. And so this is you'll be able to see this. Well, this is already in the tc39 calendar maintained by Yulia and we'll get this into the well. And then yeah just to be clear. This is still full plenary even for the shorter meetings. We haven't completely locked down the start and end times for some of these but we won't be a little bit flexible. We'll make sure that that's always set well in advance. If you have further thoughts, the chairs always like feedback, particularly constructive feedback, and so please let us know on the reflector. +We then have smaller meetings that are two days. This is still official plenary. 
So it still means the 10-day advance notice you want if you want stage advancement, but it's a shorter session and perhaps this kind of pro forma - this kind of boilerplate we do at the start - might be a little bit shorter as well and the same structure of the day. So two sessions two hours each and what this amounts to is the same number of hours, but more frequent meetings. This is all based on the feedback that we've had. So the actual schedule is here. We've kind of worked hard to look at the delegate survey to influence the time zones and the locations that we've selected trying to be fair. And so there is a bias towards specific time, but that is because you know half our members are on Pacific Time. And so this is you'll be able to see this. Well, this is already in the tc39 calendar maintained by Yulia and we'll get this into the well. And then yeah just to be clear. This is still full plenary even for the shorter meetings. We haven't completely locked down the start and end times for some of these but we won't be a little bit flexible. We'll make sure that that's always set well in advance. If you have further thoughts, the chairs always like feedback, particularly constructive feedback, and so please let us know on the reflector. LEO: I just have a question and suggestion to tweek later for the four days meetings. I'm not complaining about anything, but I'm just suggesting we consider some of the four days meetings, especially when they are too far from the Pacific time zone to actually start those meetings on a Tuesday. So from Tuesday to Friday. @@ -114,12 +115,12 @@ IS: Ninth and tenth. Wait a second. What your next month next week next month di WH: Well, this presents a problem because the TC39 meeting will be at the same time as the GA meeting. That's not good for those of us who need to go to both. -IS: Okay, that's that's a good good catch. OK, so apparently we have a conflict with TC39’s new schedule which was presented just half an hour ago and the GA meeting. Yes, so we since it is difficult to move the general assembly meeting. So I would say that we have to move the TC39 meeting. It would be better the week beforeI know can become brings theOut to the general assembly meeting. +IS: Okay, that's that's a good good catch. OK, so apparently we have a conflict with TC39’s new schedule which was presented just half an hour ago and the GA meeting. Yes, so we since it is difficult to move the general assembly meeting. So I would say that we have to move the TC39 meeting. It would be better the week beforeI know can become brings theOut to the general assembly meeting. RPR: Alright, will review that now and make a swift change to the tc39 meeting them? Yeah. thanks Waldemar. Thank you very much. That was a very good catch. - ## ECMA262 status update + Presenter: Kevin Gibbons (KG) - [spec](https://github.com/tc39/ecma262) @@ -129,7 +130,7 @@ KG: Alright, so starting off with an overview of changes since last meeting. The Also, we, or rather an external contributor - jmdyck, whose real name I can never remember - changed a few places in the spec where a built-in function was defined as a series of overloads into a single thing with explicit switches on the type of the first argument, which everyone thinks is much clearer and nicer. The overloads were extremely strange. 
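As an aside for readers of the notes, the shape of that overloads-to-switch change can be sketched with a made-up example (an analogy only — not the actual built-ins that were edited, and not spec text):

```js
// Hypothetical built-in described overload-style in prose:
//   doSomething ( number )  — return the number doubled
//   doSomething ( string )  — return the string repeated
// The refactor expresses the same behavior as one algorithm with an explicit
// switch on the type of the first argument, roughly analogous to:
function doSomething(x) {
  if (typeof x === "number") return x * 2;
  if (typeof x === "string") return x + x;
  throw new TypeError("unsupported argument type");
}
```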
-#2110 is just here so that people are aware of it going forward: the spec has various algorithms that are like some of them are defined in terms of static semantics, meaning purely in terms of the parse tree, and some of them are more runtime, meaning they rely on evaluation context, and some of the latter category were prefixed with the words "runtime semantics" in their name and others were not. The distinction was not clear and not well maintained as new parts have been added to the spec. So we just removed the "runtime semantics" prefix entirely so that an abstract operation is by default runtime semantics and those which are used statically are explicitly prefixed as such. +`#2110` is just here so that people are aware of it going forward: the spec has various algorithms that are like some of them are defined in terms of static semantics, meaning purely in terms of the parse tree, and some of them are more runtime, meaning they rely on evaluation context, and some of the latter category were prefixed with the words "runtime semantics" in their name and others were not. The distinction was not clear and not well maintained as new parts have been added to the spec. So we just removed the "runtime semantics" prefix entirely so that an abstract operation is by default runtime semantics and those which are used statically are explicitly prefixed as such This last one, #1966, is just a very minor tweak to the grammar. I'm sure any of you who have worked on c-like languages are familiar with the dangling "else" problem in grammars. That's where you have an if-else statement within an if statement and you don't necessarily know which if to associate the else with. You want it to be associated with the inner one, but the grammar needs to make that clear somehow. Previously this was done with a normative note that just says it's associated with the closest possible. Now, it's done with a lookahead restriction, to match the rest of this spec. I don't believe there's any problems with that. But if you believe there are, please let us know so we can back that out. @@ -139,7 +140,7 @@ And then 2120. This was one of those that fell out of 2007 where the distinction Upcoming work is basically the same list as it's been the last few meetings. Landing 2007 was the major project this last couple months, but just to recap, we are still intending to refactor syntax directed operations so that all of the definitions for a given syntax directed operation are in a single place rather than being co-located with the production. There will be links from where the syntax directed operations are currently defined under each production under a set of Productions to the SDOs that are defined over it. That hopefully will happen before the next meeting, it’s the next thing we intend to to tackle. There's a few other inconsistencies and general tweaks. I should call out 2045 which is a fairly major change, which is defining the notion of a built-in generator. This is purely editorial. It's just a different way of specifying iterators than what we currently do that allows you to use a Yield macro in the same way that we have an Await macro. That just makes it easier to specify operations. And that's intended to be used in the iterator helpers proposal which adds a lot of iterators and was looking for an easier way to specify them. Personally I think that it's a lot nicer. 
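As a userland analogy for why the generator/Yield-macro style is easier to work with (the helpers below are made up for illustration; they are not the spec's text or the iterator helpers proposal's API):

```js
// Generator style: the control flow is written straight through and `yield` does the rest.
function* mapped(iter, fn) {
  let r;
  while (!(r = iter.next()).done) {
    yield fn(r.value);
  }
}

// Hand-written iterator style: the same behavior, but the state machine is explicit.
function mappedByHand(iter, fn) {
  return {
    [Symbol.iterator]() { return this; },
    next() {
      const r = iter.next();
      return r.done ? { value: undefined, done: true } : { value: fn(r.value), done: false };
    },
  };
}

// Both log [2, 4, 6]:
console.log([...mapped([1, 2, 3][Symbol.iterator](), x => x * 2)]);
console.log([...mappedByHand([1, 2, 3][Symbol.iterator](), x => x * 2)]);
```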
-MM: Generators have this weird bi-directional nature where you can feed values into the next(), whereas iterators are essentially unidirectional - is this really intended just to be sugar for iterators or you are also intending them to be able to accept values? +MM: Generators have this weird bi-directional nature where you can feed values into the next(), whereas iterators are essentially unidirectional - is this really intended just to be sugar for iterators or you are also intending them to be able to accept values? KG: It's the latter. So moving on, a couple of other consistency and clarity things that I'm not going to call out - actually in fact quite a lot of consistency and clarity things that I'm not going to call out. Oh, I should mention that we track these on the projects tab of the ecma262 repo. And then I wanted to call out specifically regarding 2007 the mathematical values versus number thing. Here we have a chart of the various kinds of numbers and operations you might want to do with them. So this middle column is the text that you would write in the specification and this rightmost column is the text that you would see rendered in your browser. So specifically mathematical values are not bolded and do not have a subscript you just write 0 to mean the actual 0. JS number values are bolded and all have this little F subscript to indicate that they are talking about the floating point numbers rather than real numbers, with the exception of NaN. NaN does not get a suffix because NaN could never have been a real number. BigInt values get this little Z suffix, the integer suffix, and are bold. And then you can convert between a number and or a bigint and a real - you must do so explicitly. There's no implicit coercion. You can convert using these three operators, and those are all defined precisely in the algorithm conventions or notational conventions portion of the spec. So, try to get these right. The linter will help you if you do something which is obviously wrong, but the linter does not have a type system. So it's only able to identify things that are obviously wrong. We will do our best in reviews to get these right as well. I'll give it over to Jordan. @@ -148,29 +149,30 @@ JHD: I've decided to step down as editor after ES2020 is completed. I've been se AKI: Thank you, Jordan and Kevin. ## ECMA-402 (Intl) status update + Presenter: Ujjwal Sharma (USA) - [spec](https://github.com/tc39/ecma402) - [slides](https://docs.google.com/presentation/d/1RwyJa7xpA8vn2TVFbHmyay4-SsouG48UGAFxfq9Nn6s/edit) +USA: Good morning everyone if it's morning for you. Shane like a lot of our friends in Pacific Time found it really hard to make it here, so I'm here to make sure that you all don't miss the amazing status update for Intl stuff. Just real quick, what is ECMA 402 for the uninitiated? ECMA 402 is JavaScript's own favorite built-in internationalization Library. So say you have a Date object. Those are fun, right? And you can then if you're printing out that data object to a website you get you could use DateTimeFormat and then you can produce formats that would be satisfying to people on both sides of the pond. How cool would that be? So how is Intl developed? Well Intl is part of a separate specification that is developed by TC39-TG2, which is different from TC39-TG1, and it's called ECMA402 it's a different specification from this specification though we move proposals through the TC39 standard process. And we have a monthly phone call to discuss the issues in greater detail. 
If you want to join this monthly call or if you're interested in getting more involved in the space, just send an email to ECMA 402 at men at chromium.org. He and we'd keep badgering you with all these reminders so that we show up and if you want some more information about the project and about the people who work on it, just follow this link.to the repository TC 39 TG2 is a project that requires a lot of people to keep functioning. There's people from all over the board from Google Mozilla. Igalia and Salesforce are helping out and let's see what we have in store. There's no normative PRs ready for review by this time, so let's go straight to these proposals. The first stage proposal is date format range by PFC. So this one stage 3 shipped in Chrome 76 and is behind a flag right now in Firefox. It's one of the candidates for stage 4 pending a few editorial issues. It's still being worked on in the repository about the issue about the proposal. It's pretty amazing.you can basically format a whole range of 8 so you can say well you're delivering with happen anywhere between January 10th to 20th 2007. I don't know why they use that date. But yeah, okay about Intl Segmenter that is championed by Richard Gibson. It's shipping in Chrome 87 without a flag, and it's enabled on the JSC trunk. This is also super interesting. You can use it to segment words or sentences or paragraphs and then iterate over them. How fun is that and you can also do that irrespective of which look at Kyle Durand, so it's even more fun.stage one and two proposals are there's a few so if you remember format range from date, there's format range for numbers as well. So you can say well your seven-foot-tall Marie plushie would cost anywhere from 70 to 50 dollars or something like that. I don't know it's championed by hey, there's a bunch more group into this than just format range. Go check it out, and it's pending resolution to design issues before promoting these days three the exact design details are still being worked out. So if you care about this proposal do check it out next up we have a duration format which was championed by YMD and myself, which is also pending afew design details, but it's if you're familiar with temporal durations this can help you format those durations. So you can have a duration of two hours and 30 minutes and can format it in Spanish, too. -USA: Good morning everyone if it's morning for you. Shane like a lot of our friends in Pacific Time found it really hard to make it here, so I'm here to make sure that you all don't miss the amazing status update for Intl stuff. Just real quick, what is ECMA 402 for the uninitiated? ECMA 402 is JavaScript's own favorite built-in internationalization Library. So say you have a Date object. Those are fun, right? And you can then if you're printing out that data object to a website you get you could use DateTimeFormat and then you can produce formats that would be satisfying to people on both sides of the pond. How cool would that be? So how is Intl developed? Well Intl is part of a separate specification that is developed by TC39-TG2, which is different from TC39-TG1, and it's called ECMA402 it's a different specification from this specification though we move proposals through the TC39 standard process. And we have a monthly phone call to discuss the issues in greater detail. If you want to join this monthly call or if you're interested in getting more involved in the space, just send an email to ECMA 402 at men at chromium.org. 
He and we'd keep badgering you with all these reminders so that we show up and if you want some more information about the project and about the people who work on it, just follow this link.to the repository TC 39 TG2 is a project that requires a lot of people to keep functioning. There's people from all over the board from Google Mozilla. Igalia and Salesforce are helping out and let's see what we have in store. There's no normative PRs ready for review by this time, so let's go straight to these proposals. The first stage proposal is date format range by PFC. So this one stage 3 shipped in Chrome 76 and is behind a flag right now in Firefox. It's one of the candidates for stage 4 pending a few editorial issues. It's still being worked on in the repository about the issue about the proposal. It's pretty amazing.you can basically format a whole range of 8 so you can say well you're delivering with happen anywhere between January 10th to 20th 2007. I don't know why they use that date. But yeah, okay about Intl Segmenter that is championed by Richard Gibson. It's shipping in Chrome 87 without a flag, and it's enabled on the JSC trunk. This is also super interesting. You can use it to segment words or sentences or paragraphs and then iterate over them. How fun is that and you can also do that irrespective of which look at Kyle Durand, so it's even more fun.stage one and two proposals are there's a few so if you remember format range from date, there's format range for numbers as well. So you can say well your seven-foot-tall Marie plushie would cost anywhere from 70 to 50 dollars or something like that. I don't know it's championed by hey, there's a bunch more group into this than just format range. Go check it out, and it's pending resolution to design issues before promoting these days three the exact design details are still being worked out. So if you care about this proposal do check it out next up we have a duration format which was championed by YMD and myself, which is also pending afew design details, but it's if you're familiar with temporal durations this can help you format those durations. So you can have a duration of two hours and 30 minutes and can format it in Spanish, too. - -USA: Well, okay, next up we have the Intl enumeration API. It's at stage 2, championed by Frank and it's a good API. It's blocked on a bunch of concerns regarding privacy or fingerprinting concerns by certain people. We've been working with certain folks to resolve these consensus. See what we can do within the constraints of a private API. If you're somebody who knows about these subjects and wants to help out, we’d really appreciate this help because we aren't experts in privacy. So yeah any help in this would be really appreciated. Smart unit preferences is a fun one. So it's champion also ins and is Vlog on discussion involving for a to scope. So there are a lot of concerns regarding if this proposal is even in scope or normative, you know, it's because it's not strictly formatting and now we're considering adding it to number format. So that sort of the concern here if you have any thoughts regarding that please chime in on this issue and that's no lie that for this proposal Intl display names v2 is another one. It's championed by Frank and it adds a bunch of amazing things to display names. So that's Wii games trying times earnings calendar names a bunch of things. It also adds two dialects, which is really cool. And there's an update schedule later in this meeting. 
So was the space same for into local info Frank is doing some amazing work on this one and it's going to expose a lot of amazing information.That's going to be really helpful friend off actualization apis for the Locale object and there's an update also scheduled later talking about the stage their proposal the fun one we have in the pipeline right now is user preferences. It's not actually a single coherent proposal yet. It's a bunch of issues that are proposals attached to them which when put together make for a more comsive solution, but we still need to figure out how exactly to navigate the space. So there's proposals like The Navigators Locale proposal and you know adding accept language headers stuff like that. This is really one of the things that we're expected to work on in the next few years and things that this is one of those proposals spaces that we're really excited about. So if this is something you're interested in also,Get involved! So that's that's my last ask for you if you want to help us out with documentation or implementing stuff in js engines and polyfills or if you're a C++ or Java visitor and want to help us with ICU stuff, please help us out and if you want to join our monthly call again this email is going to help you. Thank you. +USA: Well, okay, next up we have the Intl enumeration API. It's at stage 2, championed by Frank and it's a good API. It's blocked on a bunch of concerns regarding privacy or fingerprinting concerns by certain people. We've been working with certain folks to resolve these consensus. See what we can do within the constraints of a private API. If you're somebody who knows about these subjects and wants to help out, we’d really appreciate this help because we aren't experts in privacy. So yeah any help in this would be really appreciated. Smart unit preferences is a fun one. So it's champion also ins and is Vlog on discussion involving for a to scope. So there are a lot of concerns regarding if this proposal is even in scope or normative, you know, it's because it's not strictly formatting and now we're considering adding it to number format. So that sort of the concern here if you have any thoughts regarding that please chime in on this issue and that's no lie that for this proposal Intl display names v2 is another one. It's championed by Frank and it adds a bunch of amazing things to display names. So that's Wii games trying times earnings calendar names a bunch of things. It also adds two dialects, which is really cool. And there's an update schedule later in this meeting. So was the space same for into local info Frank is doing some amazing work on this one and it's going to expose a lot of amazing information.That's going to be really helpful friend off actualization apis for the Locale object and there's an update also scheduled later talking about the stage their proposal the fun one we have in the pipeline right now is user preferences. It's not actually a single coherent proposal yet. It's a bunch of issues that are proposals attached to them which when put together make for a more comsive solution, but we still need to figure out how exactly to navigate the space. So there's proposals like The Navigators Locale proposal and you know adding accept language headers stuff like that. This is really one of the things that we're expected to work on in the next few years and things that this is one of those proposals spaces that we're really excited about. So if this is something you're interested in also,Get involved! 
So that's that's my last ask for you if you want to help us out with documentation or implementing stuff in js engines and polyfills or if you're a C++ or Java visitor and want to help us with ICU stuff, please help us out and if you want to join our monthly call again this email is going to help you. Thank you. ## ECMA-404 (JSON) Status update AKI: Chip, your standard 30 seconds if you don't mind. CM: Nothing to report + ## Chairs group update -Presenter: Rob Palmer (RPR) +Presenter: Rob Palmer (RPR) RPR: We've had feedback that the committee doesn't really want to spend much time on this and so we are streamlining things quite a bit here. This year in 2020 we've had a chair group of four co-chairs. You saw their faces earlier on AKI's wonderful opening slide. You know who they are. And we also had previous chair Yulia Startsev assisting us in the Dowager role. This was something that was brought up in the February meeting, but I think it's also worth clarifying for everyone what that means for the Dowager role. That's someone who has access to pretty much all chair activities, but not have the same kind of final say. And they also have access to things like the weekly chairs meetings, the messaging channels that we use to organize and in set up meetings and so on but obviously it's a lower commitment and a lower time expectation than the chair role itself. And the reason for clarifying this role is that we are taking on a new Dowager in the form of MBS. MBS was a full co-chair this year and for 2021 he's enthusiastic and very excited to take on the title of Dowager. So we thank him for his service. This is the proposed set for 2021. It's not all that much change. So what this means is that we're not planning to have an election. You may already be a little bit weary of elections particularly those in the US. So if there are no objections, then we will adopt this in the January meeting and if you have any feedback, we have an open reflector issue. Thank you. - ## Handling of NaN & side effects in Date. prototype.set* methods + Presenter: Kevin Gibbons (KG) - [pr](https://github.com/tc39/ecma262/pull/2136) @@ -188,27 +190,31 @@ DE: This seems good to change to me. Once there are tests in my opinion it seems KG: Okay, we'll call that consensus. I'll make sure this has tests before it lands. AKI: Excellent. -#### Conclusion/Resolution -* Add tests and land. + +### Conclusion/Resolution + +- Add tests and land. + ## Handling await in left operands of exponentiation + Presenter: Daniel Rosenwasser (DRR) - [pr](https://github.com/tc39/ecma262/issues/2197) -- [slides](https://1drv.ms/p/s!AltPy8G9ZDJdqzSyim_5LybEkGUH?e=0fRaBZ) +- [slides](https://1drv.ms/p/s!AltPy8G9ZDJdqzSyim_5LybEkGUH?e=0fRaBZ) DRR: All right, I can get this started. Gooooood morning Budapest. I was really hoping to be presenting from Budapest this year, but maybe next year. Do you all see stuff? I'm going to assume you all see stuff. Do you see my presentation? Yes. Okay. Great. Okay. So this presentation - I'm here today because Kevin found an inconsistency between implementations and it stems from this original issue, right? 
So way back in the day when we added exponentiation there was this question of what is the order of operations when you have a unary operator on the left hand side and depending on what your background is the order of operations is arguable: in some context you might interpret this as negating first and then exponentiating by Y, in other contexts, you might exponentiate X by Y and then negate at the end and the solution that tc39 came to was to disallow this entirely and so this avoids visual ambiguity. If you want to get either one of those meanings you have to parenthesize, so you can parenthesize negative X then exponentiate, or parenthesize the exponentiation and then the negate. But part of the change also meant that certain other things were disallowed too - so specifically you can't have an operator on the left side of an exponentiation either. Basically this was another visual ambiguity issue and we disallow this as well. delete, typeof, etc were all disallowed. And specifically the one that I want to talk about today is await because await is one of them. You can see that if you just go through the grammar where if you get to ExponentiationExpression, there are two branches one where you're allowed to use the exponentiation operator and when we are not, and unary is the one where you're not allowed to use that and can clearly see "await" there. So where's this done correctly? Well TypeScript correctly disallows await x ** y, right? It doesn't allow that syntax (XS doesn't either) but unfortunately everyone else seems to allow this syntax. And from what KG has told me this sort of stems from everyone taking a very similar implementation approach, but just for completeness. These are the specific ones we know of that have this issue that KG told me about and it's very sad as you can see from all the Emoji. So the proposal that I have is implementations should ideally just fix the bugs and become spec compliant and because this seems to be a common issue across implementations have just some sort of notice within the specification to tell people "hey, just so you know, await really is supposed to be handled specially here" or I mean in a consistent way here. There is an alternative which is that maybe engines just can't change now, maybe there's some sort of like web reality thing that we have to account for here and to reflect that you can make a naive fix but that has some poor effects and specifying that is kind of tough specifically like naively the thing that you might just be prone to say is well, there are those two branches for unary for exponentiation. Just move await into that other branch where you do allow it, right? So you move it into UpdateExpression, but now you have this other issue where now you allow some of the original syntax that we found to be bad; you end up having the same issue as before where you can end up with at least part of that issue of `await -x**y`. And so I'd go as far as to say this is actually worse than what we were trying to avoid in the first place. So my recommendation is still we should try to continue with what the spec currently says today across implementations. So I'll leave the floor open to discussion at this point and I will stop presenting. MM: Historically the best way to get the implementations to follow the spec is to have a test262 test for the non-conformance. Does test262 test for this case? If not, adding that should be at least the same priority as adding a note and talking to the implementations. 
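For readers following the discussion, the grammar cases DRR described above look like this (illustrative snippets, not taken from the slides):

```js
// Unary operators may not appear bare as the left operand of **:
// -x ** y          // SyntaxError today: visually ambiguous
// (-x) ** y        // OK: negate first, then exponentiate
// -(x ** y)        // OK: exponentiate first, then negate
// x ** -y          // OK: the restriction only applies to the left operand

// Inside an async function, the spec treats `await` the same way:
// await x ** y     // should be a SyntaxError per the current grammar
// (await x) ** y   // OK
```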
-DRR: I agree with you that we will need a test for this. I don't know the status of that Kevin might be able to speak to that better than I can. +DRR: I agree with you that we will need a test for this. I don't know the status of that Kevin might be able to speak to that better than I can. KG: I don't believe there is because I think if there were implementations would have gotten it right. MM: Exactly, exactly. -DRR: if that's the best way to go about this then I think that's all right. I think I just want to get a gauge of what implementations feel about this as well at this point, but that sounds like the right path forward. +DRR: if that's the best way to go about this then I think that's all right. I think I just want to get a gauge of what implementations feel about this as well at this point, but that sounds like the right path forward. -BSH: Yes, I just wanted to say that I think that this shouldn't be a problem in terms of web reality at least based on data available to me. I did a search through our very large code base in Google and could not find any JavaScript that actually would break if you change this at all now. We don't use this syntax anywhere. +BSH: Yes, I just wanted to say that I think that this shouldn't be a problem in terms of web reality at least based on data available to me. I did a search through our very large code base in Google and could not find any JavaScript that actually would break if you change this at all now. We don't use this syntax anywhere. DRR: I agree because I think if you wrote this I'd have questions for you. So seconded. @@ -218,33 +224,37 @@ DRR: Yep, I agree. All right. SYG: So yes, so I agree with what people have been saying. It seems unlikely that these would actually cause compat issues. It seems yeah, you just keep that this code is not something you would go right? I think to be extra careful. We can do a query over HTTParchive since this is a syntactic thing that we're trying to change. I am going to fix this without such a thing, but if someone wants to be extra careful, feel free to craft a query and then ask me to run it for you so you don't encourage large GCP costs. Yeah, seems good to me to fix the implementations. -DRR: sweet. +DRR: sweet. -LEO: Yeah, just a quick comment on what is disallowed in the grammar and why there might not be tests for things like this. ECMAScript does allow extensions of the language. So not having a grammar (the lack of a grammar allowing something) doesn't directly mean something is disallowed. So test262 cannot have a test for everything that is just not in the grammar because they are not disallowed. It is allowed to to have extensions conforming to ECMAscript unless it's explicitly said by static errors or forbidden extensions, etc. So this is very complicated in terms of like it doesn't - it's not implicit. So if you want to disallow something I recommend putting this [audio cut off] +LEO: Yeah, just a quick comment on what is disallowed in the grammar and why there might not be tests for things like this. ECMAScript does allow extensions of the language. So not having a grammar (the lack of a grammar allowing something) doesn't directly mean something is disallowed. So test262 cannot have a test for everything that is just not in the grammar because they are not disallowed. It is allowed to to have extensions conforming to ECMAscript unless it's explicitly said by static errors or forbidden extensions, etc. So this is very complicated in terms of like it doesn't - it's not implicit. 
So if you want to disallow something I recommend putting this [audio cut off]

DRR: I think your audio cut off. However, that's a really good point and I'm not exactly sure how to address that. It seems like we have a lot of places, or at least a couple of places, where the intent is to explicitly disallow syntax, maybe the most recent case being nullish coalescing, where we basically crafted the syntax in a way that it couldn't be mixed with other operators at a logical level, but then we punted on the idea of having an early error of some sort. So I still think implementations should switch their behavior up. But I don't know how to - Mark, you seem to have a comment.

DE: So I want to disagree with LEO. I think even though there's some text about disallowed extensions, I don't think the traditional way of looking at JavaScript as allowing extensions is useful the same way today as it was in the ActionScript and JScript era. I think languages that extend JavaScript talk about themselves explicitly as supersets. And I don't think we should worry about the editorial difference between early errors and something not being present in the grammar. I think if we did then we would just have a huge amount of additional early errors to add in order to accurately describe the language, so I want to revisit this at a future point. If people feel that the distinction is significant then I would really like to discuss that further.

LEO: yeah, we always discuss this and we always disagree. I'm not sure we can make a productive use of time during this meeting. Okay.

WH: I agree with DE. I don't want to open the Pandora's box of explicitly specifying everything which should result in a syntax error in the spec.

AKI: And also SYG +1 DE and has nothing more to add beyond that. All right. So where does that leave us?

DRR: It sounds like maybe we need a test but it's not clear to me who or how. I can try to work on it a little bit, but I'd need some guidance. So can we have a volunteer to help coach me on writing a test262 test for now?

KG: Let's just open an issue on that repository and we can talk about it there.

DRR: Sounds good. All right. Thanks Kevin. Thank you all and I'm done. Thank you.

### Conclusion/Resolution

- Add tests.

## `__proto__` normative optional options

Presenter: Jordan Harband (JHD)

- [pr](https://github.com/tc39/ecma262/pull/2125#issuecomment-698502404)

JHD: So this is talking about PR #2125 on the spec.
The current status of the pull request is that it is a normative request that moves `__proto__` out of Annex B. There was an open question on it about whether it's allowed for implementations to do all or only a partial implementation of Annex B - in other words, can you have `__defineGetter__` and omit `__defineSetter__`? The editor group decided that we would land 2125 once it had had appropriate updates without addressing the All or Nothing question, and Gus decided to add this to discuss that question in plenary, which can be done in a separate PR or this one.

JHD: As we move things out of Annex B, the question will come up if normative optionality comes in bundles or not. Meaning, is it acceptable to choose one normative optional item in the entire spec and implement it and then omit the rest, let's say - or, should we instead decide that there are certain packets of normative optional things such that you either have to implement the entire packet or none of the packet. So for example, `__defineGetter__` and `__defineSetter__` are things that I don't think anyone has implemented without doing both or neither of them, and it doesn't seem to make sense to anyone I've spoken to that anyone would want to implement one of them without the other one. Perhaps even more relevant for these, it is likely that if you only implement one of them you will not actually achieve the goal of being compatible with code that expects these things to be present. So essentially the question as I understand it then is: should we, and how do we, specify groups of things to be required wholesale or omitted? Hopefully I explained that well; if anyone wants me to clarify before we go to the queue, please feel free.

@@ -260,7 +270,7 @@ JHD: The challenge, is that making the determination is a normative requirement,

MM: I agree.

AKI: Okay, is the tension resolved?

MM: Yes.

@@ -270,25 +280,27 @@ JHD: When Annex B discussions have come up before, where the committee landed ba

DE: Yeah, I don't want to get us off topic right now. I just want to register my continued disagreement. I previously proposed that we make all these things normative, but we don't have consensus on that right now.

JHD: okay, so there's nothing else on the Queue. What I think would be great to unblock Gus here is, do we have consensus that `__defineGetter__` and `__defineSetter__` should be implemented as a bundle, meaning do them both or don't do either. Let's start with that. Then if people are comfortable with that, the follow-up question would be: should we then assume that we will continue to ask this question for each thing that is brought out of Annex B, so we can make a decision.

MM: The obvious bundle is actually larger, despite my saying that we should err towards smaller bundles. `__lookupGetter__` and `__lookupSetter__` are clearly part of the same bundle as `__defineGetter__` and `__defineSetter__`.
I agree that each time this question comes up it should be brought to tc39 plenary, but I expect quick answers.

JHD: So then to modify the proposal, Mark, you're suggesting that all of `__defineGetter__`, `__defineSetter__`, `__lookupGetter__`, and `__lookupSetter__` be treated as one bundle.

MM: exactly.

KG: But not the `__proto__` accessor; the `__proto__` accessor would be a separate part.

MM: exactly. And in fact, I would be okay with the `__defineGetter__` bundle becoming mandatory. Salesforce had a security concern that surprised me, but that is real, that makes me agree that the `__proto__` accessor should remain optional. And in any case it's quite distinct from the `__defineGetter__` bundle conceptually.

JHD: So I'm going to then assume that we have consensus on making those four items one bundle and leaving the `__proto__` accessor separate. And in the future as these questions come up we will bring them to plenary with the understanding that we are likely to decide on various little bundles ad hoc. Thanks everybody.

### Conclusion/Resolution

- `__defineGetter__`, `__defineSetter__`, `__lookupGetter__`, and `__lookupSetter__` will be treated as one “bundle”: an implementation must implement either all or none of them
- `__proto__` accessor will remain independently normative optional
- Future questions of this nature will be addressed in plenary in an ad-hoc manner as items are lifted out of Annex B

## Re-resolve unresolvable bindings in PutValue

Presenter: Shu-yu Guo (SYG)

- [pr](https://github.com/tc39/ecma262/pull/2205)

AKI: Lovely. Alright, so next up we have SYG with re-resolve unresolved resolvab

SYG: All right, so pretty small item. Basically, it's this snippet you see here. The way the spec is written, references are these things that remember the object that they were resolved on. So when you resolve the left hand side, at the time, there is no binding; the name is not declared. But when you execute the right hand side, you can actually then make a binding with the same name. Because the reference is kept around, the state of whether it was resolved or not is remembered. Come time to do the actual assignment it will double check.
“Well is the left-hand side at the time I was offered before execution”. The right hand side was a result and if it's not resolved, it should throw in script mode. The problem is nobody actually does this and nobody has done this for a decade. There's that part that's like there is from 10 years ago in Firefox and I'm pretty sure this is just web reality now, for other implementations as well. So I am proposing to change the suspect Behavior to match implementations in this case, which is to re- resolve the left-hand side reference when you actually do the assignment. - [transcription error] So if it's defined after you execute the right hand side, you have a binding of name then the assignment succeeds. Yeah, that's about it. Let's go to the queue. +[transcription error] So if it's defined after you execute the right hand side, you have a binding of name then the assignment succeeds. Yeah, that's about it. Let's go to the queue. AKI: Queue is empty. -SYG: All right. Thanks, I don't know. A few items have come up in the queue. Give it maybe half a minute and I'll assume this is not controversial and we have consensus. To repeat here: no engine or other implementations need to do anything. +SYG: All right. Thanks, I don't know. A few items have come up in the queue. Give it maybe half a minute and I'll assume this is not controversial and we have consensus. To repeat here: no engine or other implementations need to do anything. MM: Okay, so it says re-resolve, why not just resolve it once but later rather than re-resolving it. -SGY:Good question, I don't remember. I have to check if the current implementations in fact due two resolutions or it is just late resolution or and if there is different behavior. Can we actually observe this - we can right? +SGY:Good question, I don't remember. I have to check if the current implementations in fact due two resolutions or it is just late resolution or and if there is different behavior. Can we actually observe this - we can right? MM: Yeah, if there's an accessor there the getter would be called twice. That seems very bad. @@ -317,10 +329,14 @@ SYG: No, I'm pretty sure it just doesn't come up with "with" but I also don't re MM: Yeah, I would really want to understand that before this goes forward. -SYG: Yeah. That's all good. I mean that seems fair. So let the notes reflect that no specific resolution for what to do because I forgot some of the details. -#### Conclusion/Resolution -* Revisit with more details. +SYG: Yeah. That's all good. I mean that seems fair. So let the notes reflect that no specific resolution for what to do because I forgot some of the details. + +### Conclusion/Resolution + +- Revisit with more details. + ## IntegerIndexedElementSet should always indicate success + Presenter: Kevin Gibbons (KG) - [PR](https://github.com/tc39/ecma262/pull/2210) @@ -333,13 +349,13 @@ KG: Yup. ¯\_(ツ)_/¯ AKI: Okay, do we have more to add to this line of thought or do we want to move on to Waldemar? -WH: Mine is an actual question. Just for my edification, I want to understand: how did we get into this situation? Did we get this wrong in the spec or did the implementations just implement something other than the spec here? +WH: Mine is an actual question. Just for my edification, I want to understand: how did we get into this situation? Did we get this wrong in the spec or did the implementations just implement something other than the spec here? 
KG: So I believe the history here - and please someone incorrect me if I'm wrong about this because this is before my time. The history is that the typed arrays specification wasn't originally ours. The typed array specification was something that the Khronos group did as part of the webgl stuff, and they did not specify things in quite the way that we would have specified things, and browsers implemented these things before there was a proper tc39 specification, and people started using them in the wild before there was a real specification. This is my understanding of history. And I think that what happened is that it's not exactly that we got the specification wrong or the implementations got it wrong. It's that those two efforts happened in parallel and produced different results. DE: To fill in a little more, browsers have been aware of this mismatch the whole time. So browsers shipped the Khronos typed arrays this way, then ES6 came out and said, please make several changes to typed arrays. It seemed like they would have web compatibility risks. But even though there were test262 tests indicating a lot of these kinds of failures browsers didn't implement those to avoid the web compatibility risks. There were other changes that were implemented, the smaller ones like the toLength change and adding all the methods, but I think it just wasn't really possible for tc39 to adopt the Khronos typed array spec and then make a bunch of changes to it. The plan was - my understanding of the plan of the committee at this time was to write this spec, throw it over the wall, and hope that browsers work things out much like the Annex B function hoisting incompleteness. And I think this is just a mode that we can't work in because we've seen time and time again that it doesn't work instead. We need to do work together and to assess the web compatibility of things before adding them into the specification. -AKI: Did we have an actual conclusion yet? +AKI: Did we have an actual conclusion yet? KG: Yes, so I'd like to ask for consensus for this PR that's been open for a while, which is just as I said: it makes IntegerIndexedElementSet, which is the operation for writing a integer index to a typed array buffer, always succeed regardless of if you are writing past the end of the array or to a detached buffer. @@ -349,7 +365,7 @@ KG: I should mention that no implementation is ever going to change this. SYG: The status quo is worse, I think. -MM: [sighs] I do not object, but very reluctantly. +MM: [sighs] I do not object, but very reluctantly. KG: I will capture that in the notes. @@ -362,11 +378,14 @@ BSH: I'm just curious - it was stated that we know that if we fix this it breaks KG: I don't believe we know why, unfortunately. BSH: Okay. Just wondering, yeah. -#### Conclusion/Resolution -* Accepted, reluctantly. -## clarifying conclusion to __proto__ normative optional agenda item -JHD: Should __proto__ getter/setter be bundled as well? I'm not a hundred percent sure of what to suggest actually because I believe only the center has a security concern not together. +### Conclusion/Resolution + +- Accepted, reluctantly. + +## clarifying conclusion to `__proto__` normative optional agenda item + +JHD: Should `__proto__` getter/setter be bundled as well? I'm not a hundred percent sure of what to suggest actually because I believe only the center has a security concern not together. MM: I feel strongly they should be bundled together. The way to remove the setting behavior to remove the accessor property. 
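A sketch of the mechanism MM refers to (illustrative only, not a recommendation for general code): deleting the accessor removes the getting and setting behavior together.

```js
delete Object.prototype.__proto__;       // the accessor is configurable, so this works

({}).__proto__;                          // undefined: no accessor and no own property
const o = {};
o.__proto__ = Array.prototype;           // now just creates an own data property
Object.getPrototypeOf(o) === Object.prototype; // true: the prototype did not change
```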
@@ -374,7 +393,8 @@ JHD: Does anyone else have thoughts on that? if not, I think we'll just go with MF: My preference aligns with Mark. -#### Conclusion/Resolution +### Conclusion/Resolution + `__proto__` getter and setter will be bundled ## Give %TypedArray% methods explicit algorithms @@ -386,11 +406,11 @@ Presenter: Shu-yu Guo (SYG) SYG: So this is another thing where in Ross's quest to get the spec to reflect reality about typed arrays, we found some other parts that were under specified and have some engine disagreement. So this is one of those follow-on PRs I'm presenting for Ross. So currently on TypedArray.prototype for the different TypedArrays, the first bullet point lists [on the slide] all the things that do not have algorithm steps. They only have prose. The only two methods that have algorithm steps are map and filter. So what the PR does is it tries to give everything algorithm steps, but we ran into some issues. So first of all what the prose currently says is the following three things. I'm paraphrasing of course, but they all basically say to implement the same thing as the Array.prototype counterparts except when you would read that length property, instead read the internal slot [[ArrayLength]] on the typed array, and at the start of every method call validate typed array basically all valid a type. All it does is it throws if the typed array is detached at the beginning of the method call. And what this PR does is then tries to follow the spirit of what the process would be. So it first copies the spec text word for word step for step from the Array.prototype, then inserts ValidateTypedArray call at the start, then substitutes all the reads of the length property with reads of the [[ArrayLength]] internal slot. And this is highlighted because this is the part that has disagreement and is under-specified and in red. The [[HasProperty]] checks because TypedArray doesn't have holes and implementations don't call [[HasProperty]]. Notably the prose doesn't say anything about what you should do about the [[HasProperty]] checks. So the implementations kind of interpreted some way and did something. Luckily only three methods care about the HasProperty checks. So steps 1 to 3 here is sufficient to fully define in a compatible way for all the methods except these three methods: includes indexOf and lastIndexOf. Once a typed array type to raise the underlying buffer is detached, type 3 behaves as if they have no index to come properties. So all calls to the hasProperty, the [[HasProperty]] internal method will return false for its properties. -So first, I'm going to show you how we differ. suppose we have the following code. We make a new Uint8array and then we make a poison pill that sneakily detaches the typed array when you use it as a value and then we check if the typedarray includes the value undefined. and the algorithm steps of aray.prototype.includes where we copied to the algorithm steps from what it does is, it gets the the value of the second argument which is the start index to search for my thing like it does the value of conversion and stuff after it validates that es that this argument so the validation in this case. At first it's not detached, so it doesn't throw but then it detaches when it tries to coerce or convert. So what should this do currently? 
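
The example being walked through is roughly the following sketch; `detachBuffer` is a stand-in for whatever actually detaches the underlying buffer (for example transferring it to a worker), not a real built-in:

```js
const ta = new Uint8Array(8);

// A "poison pill" second argument: coercing it to a number detaches the buffer.
const evilFromIndex = {
  valueOf() {
    detachBuffer(ta.buffer); // hypothetical helper, e.g. postMessage(buf, [buf])
    return 0;
  },
};

// ValidateTypedArray passes (the buffer is not yet detached), then coercing
// the fromIndex argument detaches it. What should this return?
ta.includes(undefined, evilFromIndex);
```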
JSC throws, because JSC throws on essentially all the detached cases; SpiderMonkey doesn't, it returns true for the sneakily detached array; V8 returns false, saying the detached TypedArray does not include undefined. The proposal is to not align step by step with what the Array methods do, but instead to align with the spirit of what we interpret the original prose definition to mean, which is to align with the observable output of the Array method. So I think the observable behavior in the detached case ought to be the same as the analogous case of setting the length of a regular array to zero. I consider that the analogous case to sneakily detaching a typed array: I sneakily set the length of a regular array to zero.
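
The regular-array analogue reads roughly like this sketch: the length is captured before the second argument is coerced, so the sneakily truncated array still reports that it includes `undefined`:

```js
const arr = [1, 2, 3];

const evilFromIndex = {
  valueOf() {
    arr.length = 0; // truncate the array while includes() coerces the argument
    return 0;
  },
};

// Array.prototype.includes reads the length (3) before coercing fromIndex,
// then reads indices 0..2 of the now-empty array, which are all undefined.
arr.includes(undefined, evilFromIndex); // true in every engine
```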
So this is basically the same example as before but instead of detaching, the length is 0. Here the engines all agree. We all say that a sneakily truncated array behaves as if it includes undefined. So recall again, the only engine that agrees with arrays today is spider monkey. So the proposal is to have SpiderMonkey semantics, to align with what the array analogy does. I think the web compat risk is very low. I don't think people are writing code like this. It seems very unlikely to exist in the wild. IndexOf again. Here's two examples that kind of tries to sneakily detach a typed array and truncate a regular array. For indexOf, V8 is the one that aligns with the array behavior. meaning array index of and typedarray index of both have a hasProperty check. and because a detached typedarray behaves as if it has no own properties, it does not find the index of undefined. And same thing for lastIndexOf in reverse. So again aligned with the semantics here which aligns with the array semantics. For spider monkey least I don't think this is controversial. There's a join method that can also sneakily detach or truncate the input array and array that's joined is as if it's array full amount of undefineds. WH: You said that if you call includes on an empty array and you search for undefined it returns true, but it actually returns false. -SYG: what if you if you search on empty array for undefined you get false you right you search an empty array in this fashion where you sneakily truncate the array? in the conversion of the second argument you initially when you called includes the array is not yet empty. Okay. Does that clarify? +SYG: what if you if you search on empty array for undefined you get false you right you search an empty array in this fashion where you sneakily truncate the array? in the conversion of the second argument you initially when you called includes the array is not yet empty. Okay. Does that clarify? WH: Yeah, I'm just trying to figure out what the regular array semantics are that you're trying to match here. @@ -398,21 +418,21 @@ SYG: What I'm matching is that in TypedArrays when I first call includes, the Ty SYG: The thing I'm trying to match is I think the closest thing in regular arrays to a sneaky detached typed array is a sneakily truncated array where I make it empty by setting the length to 0. If you disagree with that we can discuss it after the presentation. I think this is the closest one. -SYG: I don't think there's anything controversial here. We should return empty string comma empty string. Just calling it up that spider monkey for some reason stringify is undefined in a typedarray of a detached typed array to the string "undefined" which does not do for stringifying undefined in regular arrays. And that's about it. We have more questions on the queue. But to be clear the consensus I'm asking for is that everything else, all the methods that are listed here except join, index of lastIndexOf, [?] and includes have agreement among all the engines.Once they're given explicit algorithm steps and for the ones that do disagree that we get consensus on. Specifically for includes to have spider monkey semantics for index of and last index of to have a V8 semantics and for join to have V8 semantics, which all aligned with what I think is the closest analogy to a detached typed array case on regular race. +SYG: I don't think there's anything controversial here. We should return empty string comma empty string. 
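
The join case is the same trick applied to the separator (shown here with the regular-array truncation variant); this is what "empty string comma empty string" refers to:

```js
const arr = [1, 2];

const evilSeparator = {
  toString() {
    arr.length = 0; // truncate while join() coerces the separator
    return ',';
  },
};

// The length (2) is captured before the separator is coerced, so join still
// visits two now-missing elements, each of which stringifies to "".
arr.join(evilSeparator); // ","
```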
Just calling it up that spider monkey for some reason stringify is undefined in a typedarray of a detached typed array to the string "undefined" which does not do for stringifying undefined in regular arrays. And that's about it. We have more questions on the queue. But to be clear the consensus I'm asking for is that everything else, all the methods that are listed here except join, index of lastIndexOf, [?] and includes have agreement among all the engines.Once they're given explicit algorithm steps and for the ones that do disagree that we get consensus on. Specifically for includes to have spider monkey semantics for index of and last index of to have a V8 semantics and for join to have V8 semantics, which all aligned with what I think is the closest analogy to a detached typed array case on regular race. SYG: All right, so it's fine to me. I'll consider this consensus. Yeah, it looks like it's generally positive. AKI: Thank you for being so flexible. Thank you everyone who continues to be flexible. Thank you. All right. -#### Conclusion +### Conclusion Consensus on the PR, to wit: Algorithm steps copied exactly from Array counterparts for all methods where [[ArrayLength]] read instead of "length" and ValidateTypedArray is called at the start except includes, indexOf, and lastIndexOf. For `includes`, the TA version does not have HasProperty checks, just like the Array version. SM is the only web engine that currently implements the proposed behavior. For `indexOf` and `lastIndexOf`, the TA version does have HasProperty checks, just like the Array version. V8 is the only web engine that currently implements the proposed behavior. - ## Concurrent JS: A JS vision + Presenter: Shu-yu Guo (SYG) [slides](https://docs.google.com/presentation/d/1kqtsJfLVC-Nmcm2sveMRdJPjurwKKiiCGilK2_ladpw/edit?usp=sharing) @@ -425,15 +445,15 @@ SYG: The JS ecosystem has two models right now. On the one hand, we have somethi SYG: At the same time. We also have something that's thread-like by thread like, I mean synchronous API is with manual synchronization, right? We added Atomics with this low-level futex API for locking for building your own mutexes. We didn't even give you mutexes we gave you the building blocks of mutexes. It's really low level. By doing something thread-like you opt into Data races in shared memory. You have shared memory. You have workers. They have the same memory model luckily so you don't have to worry about the two different weak memory models, but you are opting into this very difficult to reason about the world. pictorially the web like - pictures this you got some agents. They have their own memory that separate they'll have an event Loop and they pass messages to each other thread like thing is they have their own memory, but they also have some shared memory and they all kind of point to each other and they all have the same execution. There's some locks to do the synchronization. The reality on the web and on node as well, and I can't say for moddable. I think from moddable I guess maybe you don't have multi threading in the IOT environments, but I have no idea. But the reality of where the node is at least that we just have both you have some shared memory with locks, but most of the time we want to stick with the actor-like async everything isolated kind of model. -SYG: That's for a good reason. The web-like model makes it easy to reason about and easier to use your execution is causal. Forget about interleaving like, you know causality applies. 
That's something that you take for granted, but it's something that goes out the window otherwise. When you have the web-like model you don't have to worry about data races, by construction. You have things that are isolated. Asynchronous APIs not only mean more optimization opportunities, but they generally mean that your program will be smoother. You're not accidentally blocking something with a lot of work; you have to think up front about how to deal with things in an asynchronous way. It's less focused on manual synchronization mechanics: you're not blocking things, you're not manually using semaphores, not manually using message queues. This is all kind of built into the system.
The main downside I guess is that it leaves some performance on the table, especially when you're migrating from large existing code base that already is using thread-like things like Wasm. Or rather the wasm use case where you take an existing large code base and try to run it on the web. so thread-like, the good part of the thread like stuff is where webassembly is on the uptic, but the webassembly adopting some of the optic there's a future for webassembly things like wasm GC that in a couple years time two or three years will probably also have threading support and they will have threading support right? There is no like actor like model built into wasm. That's not Wasm’s MO, that MO is to support existing code bases right now.For existing code bases. It's important to have threading support. Plus if we do more work for the thread stuff, we will create a cottage industry of researchers and academics finding bugs in our memory model for years to come. The bad thing of threads is that it's basically what I've said before: you have to manually synchronize stuff you hopped into Data races. every once in a billion executions, there's an execution that's acausal and you're just like what is going on? and to hit home, this is hard to use thing - there's a funny picture I'll show later. And also it exposes climbing channels, but that is spilled milk under the bridge. And on the web att least we have chosen to combat the side channel issue by making sites explicitly opt into to using shared memory with some headers. So that will continue to be the case in the future. so here's a picture on the back 2015 as picture of David Barron [?] at the Mozilla San Francisco office where he printed out a sign that said "must be this tall to write multi-threaded code" and posted it much higher than a person could be. But this is the general sentiment around multithreaded code even by you know, World level expert programmers like David Baron. -SYG: Now, having presented those two models, my central thesis of this talk is that we need to push to improve both models simultaneously mainly because both models already here and the pragmatic person and where they're bad kind of complement each other. What I think should be out of scope is the kind of greenfield redesign right there. There's been good advancements in the PL world, with concurrent programming like Rust ownership is getting good reception, and honestly, it's pretty great. But that kind of heavy dependence on static checking is non-starter for JS. Similarly redesigning JS to being fully actor-like I think is not going to serve our present needs as well. So where I would like to see the road map for concurrent. This is on the web like sito push with like the web like and current model. We need language support for async communication and we have we need to have the ability to spot units of computation on the thread like side wouldn't have shared memory. It's a basic synchronization primitive like few, [?] and the ability to spawn threads. Luckily, we already done we have promises. We spent years doing promises we have async await, which is just really nice. We have workers. We have shared array buffers. We have a compass [?] so we have everything we need for the basic basic building blocks already. I think where we are now is face to where we need to make data transferred to message passing part of the web like model to be more ergonomic and more performant for both code and data. 
And for thread-like, I think we need to have higher level objects that allow concurrent access and higher level synchronization mechanisms in the bare bare minimum close to the metal shared array buffers and atomic that we currently have. because unlike wasm, we're not trying to be purely as a compilation Target. +SYG: Now, having presented those two models, my central thesis of this talk is that we need to push to improve both models simultaneously mainly because both models already here and the pragmatic person and where they're bad kind of complement each other. What I think should be out of scope is the kind of greenfield redesign right there. There's been good advancements in the PL world, with concurrent programming like Rust ownership is getting good reception, and honestly, it's pretty great. But that kind of heavy dependence on static checking is non-starter for JS. Similarly redesigning JS to being fully actor-like I think is not going to serve our present needs as well. So where I would like to see the road map for concurrent. This is on the web like sito push with like the web like and current model. We need language support for async communication and we have we need to have the ability to spot units of computation on the thread like side wouldn't have shared memory. It's a basic synchronization primitive like few, [?] and the ability to spawn threads. Luckily, we already done we have promises. We spent years doing promises we have async await, which is just really nice. We have workers. We have shared array buffers. We have a compass [?] so we have everything we need for the basic basic building blocks already. I think where we are now is face to where we need to make data transferred to message passing part of the web like model to be more ergonomic and more performant for both code and data. And for thread-like, I think we need to have higher level objects that allow concurrent access and higher level synchronization mechanisms in the bare bare minimum close to the metal shared array buffers and atomic that we currently have. because unlike wasm, we're not trying to be purely as a compilation Target. SYG: So for web-like where I would like to see us go is mainly trying to address where I've observed the biggest pain points, which is transferring data. one expensive transferable things very limited thing that many things are transferable. There's weird re-parenting of stuff like it's just not very ergonomic. And if it's not transferable it is structure cloneable, then you're just copying. We could be much better there. It's too expensive to really use in a way that's compelling to spread your workout over the cores. The serialization deserialization point for ergonomic use is about the same point. Is that when you copy not only do you copy? Sometimes you can even copy because the thing you want to copy is not structure-cloneable and you have to manually serialize and deserialize it into something else that is in fact copyable and postMessage-able across workers. And finally transferring code is basically not possible. You can't transfer functions. We can't transfer modules. We can only stringify them and there will be another presentation later in the plenary that specifically addresses this point. And to address the code transfer point, there will be a module blocks proposal by Surma. I thought I was going to present after Surma as I said see Surma's presentation, but I guess moved around but please wait till later to see how that proposal would help the situation. 
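
For reference, this is the status quo being described, using only existing web APIs (the worker script name is illustrative):

```js
const worker = new Worker('worker.js'); // illustrative file name

// ArrayBuffers can be *transferred*: zero-copy, and the sender loses access.
const buf = new ArrayBuffer(1024);
worker.postMessage(buf, [buf]);
console.log(buf.byteLength); // 0: the buffer is detached after the transfer

// Plain object graphs, by contrast, are only structured-cloned (copied), and
// anything that is not cloneable (functions, class instances with behavior)
// has to be manually serialized on one side and rebuilt on the other.
worker.postMessage({ settings: { depth: 3 }, points: [[0, 1], [2, 3]] });
```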
I'll take a quick detour here to talk about how I want now. I think we can make transfering and sharing of data more performing and more ergonomic with a straw present proposal. so the basic problem with the ergonomics of it is that as I said before this transferable thing exists, but it's very limited, array buffers for example are transferable, but you can't transfer regular objects and most JavaScript applications, you are not coming up with your own manual object layout system and layering them on top of array buffers. You are using plain JavaScript object. So if you want to transfer things you end up having to serialize them into the array buffer one side and deserializing them out of the original from the other. and if you're doing this by serializing and deserializing things that are wrappers it's tantamount to a copy so we're not really making it very easy. I think the basic goal should be that plain JavaScript objects should be transferable and shareable across workers, across agent boundaries rather, and in addition to transferring which is really about transferring ownership. We should also look doc the traditional reader/writer lock and say and also rust ownership kind of basic Insight, is that you either allow a single agent exclusive write access.If nobody has write access, then it is safe to have everybody read this case. You cannot havem in this kind of a single writer exclusive [?] off your reader, You cannot have data Ras by Construction. And so that's the ergonomic problem is that we want performance, when we want to transfer a plain JavaScript object. These objects exist in graphs. So fundamentally, what we need to do is not to be able to transfer your object, but we need to be attributed able to transfer a graph. And the problem is if you start with some graph some object to starting point is, you have to find out its closure you have to find out the transitive closure of all objects that are reachable from it, and the problem is that closure discovery from some starting point is linear in the size of the graph and you eventually probably bottomed out at some stuff. You cannot transfer like object.prototype and function.prototype stuff that's intrinsically tied to a particular global. You can't say I'm going to transfer this thing over to another worker. The second performance problem is that, as I alluded to with the serialization deserialization point around array buffers, earlier transferred objects probably should not create wrappers copies that point to the same backing store. That's the thing we are trying to solve because for complex object graphs if you have wrappers on the order of vertices that you have any object graph that costs a lot of memory that really eats into what you're trying to accomplish by sharing things and transparent things to begin with. It should be more or less performance writes in the memory will case be like copying pointers. There might be some overhead costs some constant overhead costs, but they certainly should not - that extra cost should not scale with the size of your object graphs. The straw person proposal is combining all these problems. We'd let the developer separate object heaps into shareable transferable and shareable parts and the not shareable nosferable parts manual. and we maintain the invariant that while the non-shared parts can point to a shared parts the shared parts cannot point back out. 
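
A sketch of that invariant, with entirely hypothetical API names (nothing here is proposed syntax; it is only meant to make the one-way-pointer rule concrete):

```js
// Hypothetical API purely for illustration.
const shared = new SharedRealm();                  // a heap that is closed by construction
const point = shared.evaluate('({ x: 0, y: 0 })'); // allocated inside the shared heap

const local = { label: 'origin' };                 // ordinary agent-local object

local.target = point; // OK: non-shared objects may point into the shared heap
point.x = 1;          // OK: primitives carry no identity
point.owner = local;  // throws: a shared object may not point back out to an
                      //         agent-local object
```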
This invariant is very important because this invariant is what allows us to transfer something without having to discover its closure. We know that by construction at the closure that the object graph of the transferable heap is always closed. So the transferable and the shareable becomes the unit of share instead of individual objects. This is coarser grain. This is less expressive, but I think it's a promising avenue of investigation and it strikes the right balance between performance and ergonomic. Let me show you some code. I've been talking about heaps a lot. But you know, what is this object graph that's closed that we have. It sounds a lot like a Realm. the this is a different use case for a realm like thing then the existing rail proposal into compartments proposal, but the the core thing I'm interested in hearing what I mean by a realm is basically this this idea of a disjoint object graph that's closed under itself that's closed by construction and as an invariant that's maintained not just as a like initial state, which the current runs [?]. So walking through some strawperson code here. So the idea is that you transfer this Anonymous module block, which is unfortunately not yet a proposal. The proposal has not yet shown it hasn't been presented to tc39 yet. But the idea is you have some module block that you then transfer over to t to the shared realm to to run. And you get this object out which is the default support here. And this object, because it is allocated in a shared realm, is in fact shared and transferable. And it has this it has this property that it cannot ever point back out to agent-local stuff.. Any attempts to assign an agent local object to any object that is originated from a shared realm will throw. Primitives are okay though. So the string literal "foo" is okay here. now you make a new worker and you transfer the realm to the worker and after you transfer it you'd lose ownership of it and you can no longer mutate it as you did here. On the worker side you receive the realm you have then you also receive ownership of everything within the realm like this object we just created and you can need to t -WH: You said you can no longer mutate it. I assume you also cannot read it any more — or can you still read it? +WH: You said you can no longer mutate it. I assume you also cannot read it any more — or can you still read it? -SYG: You can no longer read it either. That's correct. That's a good point. If you touch any of the references, you still have access to an object in a shared realm it'll throw. If you want to read it, what you do is you can fork these read-only views. Once you fork a read-only view everybody who has access to a read-only view of the realm can read it but nobody can write. And I ran out of space here. But basically the idea is that all the workers who have read only access can read it, but nobody can write anything and you only regain access once you join all the [?]. Each worker has to explicitly give up access. It's read-only be used as I'm done with the read only part I gave you the view back you join it. And once there are no more outstanding views we gain exclusive write access back, but it's not fully fleshed out yet. These are some early ideas. It may require the re-architecting of memory subsystems of our engines and V8 isolates in particular probably need to be rethought a bit. 
and even though from a language level the thing that makes sense to share, the line we draw for things that are shareable versus not shareable should be primitive versus objects precisely because Primitives don't have identity and objects have identity. From an engine point of view that's not what matters but actually manages its allocated or not whether something is boxed. And implementations differ here. So we need to figure out if your implementation concerns there. For example, V8 basically allocates everything to be a heap object except small integers. Well javascriptcore and SpiderMonkey and and boxes and for example, do not allocate doubles as heap objects. We need to think through the performance implications of the various implementation techniques. Do we have a unified heap or a separate heap in the implementation? This is not the mental model we are talking about here. This is not whether we have one realm or multiple realm, but we're talking about - if we need to allocate primitives like strings out of somewhere. Do we allocate out of a single key or do we allocate out of multiple groups? And they both have trade-offs for how we coordinate GC of these separate workers that they have if they are different they all can have references to a shared GC Realm. This proposal I think should also work for sharing code but more needs to be thought through. like in particular, I think it might work for sharing code because if you make the unit of sharing a realm, the functions always kind of implicitly close over their realm up to the Prototype chain of to the global. So without making the unit of sharing a realm you have to answer the question, well what happens when you transfer function or share a function in read-only mode across different Realms like you re-parent things to the function prototype? What do they close over when they read a global like Math? So these are questions that you can just design out by making the unit of sharing a Realm. So that was a quick detour over a straw person proposal. I plan to propose in the future. +SYG: You can no longer read it either. That's correct. That's a good point. If you touch any of the references, you still have access to an object in a shared realm it'll throw. If you want to read it, what you do is you can fork these read-only views. Once you fork a read-only view everybody who has access to a read-only view of the realm can read it but nobody can write. And I ran out of space here. But basically the idea is that all the workers who have read only access can read it, but nobody can write anything and you only regain access once you join all the [?]. Each worker has to explicitly give up access. It's read-only be used as I'm done with the read only part I gave you the view back you join it. And once there are no more outstanding views we gain exclusive write access back, but it's not fully fleshed out yet. These are some early ideas. It may require the re-architecting of memory subsystems of our engines and V8 isolates in particular probably need to be rethought a bit. and even though from a language level the thing that makes sense to share, the line we draw for things that are shareable versus not shareable should be primitive versus objects precisely because Primitives don't have identity and objects have identity. From an engine point of view that's not what matters but actually manages its allocated or not whether something is boxed. And implementations differ here. So we need to figure out if your implementation concerns there. 
For example, V8 basically allocates everything to be a heap object except small integers. Well javascriptcore and SpiderMonkey and and boxes and for example, do not allocate doubles as heap objects. We need to think through the performance implications of the various implementation techniques. Do we have a unified heap or a separate heap in the implementation? This is not the mental model we are talking about here. This is not whether we have one realm or multiple realm, but we're talking about - if we need to allocate primitives like strings out of somewhere. Do we allocate out of a single key or do we allocate out of multiple groups? And they both have trade-offs for how we coordinate GC of these separate workers that they have if they are different they all can have references to a shared GC Realm. This proposal I think should also work for sharing code but more needs to be thought through. like in particular, I think it might work for sharing code because if you make the unit of sharing a realm, the functions always kind of implicitly close over their realm up to the Prototype chain of to the global. So without making the unit of sharing a realm you have to answer the question, well what happens when you transfer function or share a function in read-only mode across different Realms like you re-parent things to the function prototype? What do they close over when they read a global like Math? So these are questions that you can just design out by making the unit of sharing a Realm. So that was a quick detour over a straw person proposal. I plan to propose in the future. SYG: Coming back to what more needs to be done for thread-like things. The biggest pain point I observed is frankly nobody knows how to use shared array buffers and atomics very well. The impedance mismatch with idiomaticatic JavaScript is just way too high. I think it works fine for wasm integration. It simply does not work that well as the first class things to build more sophisticated libraries on top of it for use as applications. I think Spectre was in large part to blame here because we turned it off for a year or two and we're slowly turning it back on now. but even for Chrome and for software projects at Google with Google level of effort where Chrome desktop we had sight isolation. Shared array buffer were turned on shared array buffers. Were still not. They just aren't good trade-off currently between ergonomics performance the amount of the serialization deserialization you have to do to share things by shared array buffers, it's not something that software projects want to take on just too much maintenance and too slow. So another proposal here that Dan Ehrenberg is spearheading could be typed objects as something with as objects with fixed layout that could be concurrently accessed now typed objects. I think Dan and other champions of that proposal myself included have different goals. I think typed objects solve different problems. This is a particular lens that I would like typed objects to help solve. In the future looking further out. There are many more things we could do. Do we want to have a concurrent standard library that does concurrent stuff? better tooling integration with scheduling apis like the power considerations that talked about earlier as heterogeneous stuff gets more more mainstream via phones we need to be able to schedule better. And while systems of all the OS will be able to schedule things should we also give JS when times get ability to schedule things? 
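
To make the earlier "we gave you the building blocks of mutexes" point concrete, this is roughly the level of code shared memory requires today (a minimal sketch; note that `Atomics.wait` only works off the main thread):

```js
const sab = new SharedArrayBuffer(4);
const lock = new Int32Array(sab); // 0 = unlocked, 1 = locked

function acquire() {
  // Retry until we win the compare-exchange from unlocked to locked.
  while (Atomics.compareExchange(lock, 0, 0, 1) !== 0) {
    Atomics.wait(lock, 0, 1); // park this worker while the lock value is 1
  }
}

function release() {
  Atomics.store(lock, 0, 0);
  Atomics.notify(lock, 0, 1); // wake one waiter
}
```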
And on the thread like side, we're going to need to carefully think through integration with the future of wasm who is definitely going to expose multi-threading in its way and maybe some tools and stuff. That's like a throwaway item. You can always do more work on tooling. now there's related work, of course. To pick a few. I personally worked on it data parallel research project for data parallel JavaScript between Mozilla way back, seven or eight years ago and that ultimately failed because the JIT and the lack of type stability meant to significant warm-up wap was required to really start up the JIT so you can start running the same workload in parallel, but the minute you hit type instability, where you Where you hit a type that your just in time compiled code can't handle you have to de-op and the drop the synchronization points and that just basically killed all data parallel performance. So that experiment ultimately failed. There's this PhD thesis of a blog post from Fil Pizlo, technical lead of javascriptcore, basically laid out an implementation plan for retrofitting concurrent access to plain JavaScript objects kind of like the way Java does where you can lock particular fields for compete access. It was JSC Focus, but he had many great ideas, but I think retrofitting existing objects is a non-starter and I the main difference, with the high level difference with that work is over that I think is that we should instead pursue concurrent access via a different kind of object instead of plain object. @@ -443,17 +463,17 @@ JWK: In the previous slide. I see it's mentioned the ergonomic problem of steril SYG: I'm not thinking about it. I think it could fit, I'm trying to address the ergonomic serialization issue - where people can't share plain object graphs. I'm trying to address that by letting you share plain object graphs. That doesn't mean that all use cases are like that. And sometimes you really want to also sterilize with plain object graphs for different reasons. So I think any serialization improvements here should be complementary and I would to work with that. -MM: One of the benefits of what you call the “web-like model”, the communicating event loops concurrency model, in the absence of shared array buffers, is that communicating event loops are only asynchronously coupled to each other. That makes them a clean unit of preemptive termination. you can kill one one event Loop, one worker, without having to kill all the workers coupled to it, because the synchronously accessible inconsistent state is all partitioned. I wanted to make sure that this is also the case in making this shared realm transferable. In particular, you're thinking about having the transfer only occur at turn boundaries where all the invariants of the mutated objects would be restored before the graph gets transferred, and there would not be any mixed call stack that might involve stack frames from the shared realm. If the stack is empty at the moment of transfer then you can't have a stack with mixed stack frames. +MM: One of the benefits of what you call the “web-like model”, the communicating event loops concurrency model, in the absence of shared array buffers, is that communicating event loops are only asynchronously coupled to each other. That makes them a clean unit of preemptive termination. you can kill one one event Loop, one worker, without having to kill all the workers coupled to it, because the synchronously accessible inconsistent state is all partitioned. 
I wanted to make sure that this is also the case in making this shared realm transferable. In particular, you're thinking about having the transfer only occur at turn boundaries where all the invariants of the mutated objects would be restored before the graph gets transferred, and there would not be any mixed call stack that might involve stack frames from the shared realm. If the stack is empty at the moment of transfer then you can't have a stack with mixed stack frames. -SYG: I am not yet thinking about those. I think there are more fundamental issues to be worked through but I take your point and they would need to be thought through and the implications that they have. I think I'm not yet convinced that it is implementable. But suppose I were convinced of that then yes, I would need to think through some of the issues that you have raised. +SYG: I am not yet thinking about those. I think there are more fundamental issues to be worked through but I take your point and they would need to be thought through and the implications that they have. I think I'm not yet convinced that it is implementable. But suppose I were convinced of that then yes, I would need to think through some of the issues that you have raised. MM: Okay, great. I think this is a very nice speculative direction and I'm glad you're provoking us to think this far ahead. This is great. Thanks. -SYG: And yeah if I ever get there to think through the isolation concerns I definitely would need your review and I would like to work with you on those problems. I think you have a much better grasp of what to look for than I would. +SYG: And yeah if I ever get there to think through the isolation concerns I definitely would need your review and I would like to work with you on those problems. I think you have a much better grasp of what to look for than I would. -MM: Very much looking forward to working with you on this. +MM: Very much looking forward to working with you on this. -JWK: What if you define a function that mutates local states in the shared realm? It seems like it's only preventing mutating the shared realm states with read-only view or transferred shared realm, but the state is “local” for the function defined in the shared realm. So if I call it can I bypass the read-only view limitation? +JWK: What if you define a function that mutates local states in the shared realm? It seems like it's only preventing mutating the shared realm states with read-only view or transferred shared realm, but the state is “local” for the function defined in the shared realm. So if I call it can I bypass the read-only view limitation? SYG: That's a great question. I haven't I don't have a good answer to this. I think ideally what you want if it's possible is that local state like purely local variables that do not Escape. You should be allowed to mutate those however you want. But whether or not that is doable, I don't know yet. Doable both from a spec point of view and from an implementation point of view. I just don't know that but yeah, that is the main problem, right? Now if we were to think of the read-only views naively and want to run functions in them. It just seems like they can't do anything. I hope there's a solution there. I don't have one. @@ -463,7 +483,7 @@ DE: Great presentation. I'm really excited about all the different directions. j SYG: Sorry for putting words in your mouth. I know you've been thinking about it, and I think you were going to champion at some point. 
-DE: I mean, I hope to but also other people have done good work, and I really don't want to be like claiming this out from under efforts. +DE: I mean, I hope to but also other people have done good work, and I really don't want to be like claiming this out from under efforts. WH: I’m quite confused by whether this can actually become a good future direction in terms of how this could be extended to functions. Jack and Mark already presented some of the questions I have which didn't have good answers. Without being able to transfer functions you have issues with composability, because functions are kind of a canary for other kinds of objects which might have hidden state — they all have similar kinds of concerns. I don't see how this can address issues that arise when you try to transfer things like functions or objects with hidden state. @@ -471,19 +491,19 @@ SYG: so I think functions are - so I think the transferring case is much easier WH: I'm using “transfer” more generically. I don't even understand how, when you have a function defined in one agent, you can transfer it to a different agent. What I see is that the realm can define a function that doesn't actually get transferred anywhere — it just stays in the realm it’s defined it in. There are a lot of nice goals to aspire to but I don't understand how this proposal holds together. -DE: This is really answered by multiple blocks. So the problem is not transferring basically bytecodes. It's not transferring behavior. The problem is transferring stuff that you close over. So what module blocks do is they give you realm independent code, it gives you modules modules that are not yet instantiated on a realm so you can share that and then within a realm you can you can instantiate that module and execute so it's all about the references to objects, but code itself can be shared. +DE: This is really answered by multiple blocks. So the problem is not transferring basically bytecodes. It's not transferring behavior. The problem is transferring stuff that you close over. So what module blocks do is they give you realm independent code, it gives you modules modules that are not yet instantiated on a realm so you can share that and then within a realm you can you can instantiate that module and execute so it's all about the references to objects, but code itself can be shared. -WH: I don't understand your answer. I have a closure which I got from somewhere. I want to send the closure to a different agent. +WH: I don't understand your answer. I have a closure which I got from somewhere. I want to send the closure to a different agent. -SYG: I think if you just have a closure that you just got like you made a closure syntactically in your agent local way that you normally would for a function and you would like for it to transfer that to a different agent, that is not possible in the thing I am envisioning. What I am envisioning is that the developer basiasically has to make an up front design choice to separate their heaps their object routes including function instances to be shareable parts and things that you would want to transfer between agents. You would need to instantiate from within the share ground to begin with, and then the thing you pass back and forth is the realm, to share ground level. 
+SYG: I think if you just have a closure that you just got like you made a closure syntactically in your agent local way that you normally would for a function and you would like for it to transfer that to a different agent, that is not possible in the thing I am envisioning. What I am envisioning is that the developer basiasically has to make an up front design choice to separate their heaps their object routes including function instances to be shareable parts and things that you would want to transfer between agents. You would need to instantiate from within the share ground to begin with, and then the thing you pass back and forth is the realm, to share ground level. WH: Yes, this will lead to a lot of fights over slicing complex objects where you have various complex structures,DE including some built-in ones like maybe regular expressions or arrays. You’ll want to instantiate some of their behavior in the transfer realm while keeping some of the behavior local. I'm worried about ecosystem implications of that. -SYG:I think I would need to see the issue to understand the concerns more concretely. It might be possible that the hypothesis of this like letting the user separate out what's shareabe and not shareable is a non-starter, but at the same time That's kind of the road. We went down. That's one of the only viable path forward that I see because I don't think we can actually retrofit existing objects to be shareable. +SYG:I think I would need to see the issue to understand the concerns more concretely. It might be possible that the hypothesis of this like letting the user separate out what's shareabe and not shareable is a non-starter, but at the same time That's kind of the road. We went down. That's one of the only viable path forward that I see because I don't think we can actually retrofit existing objects to be shareable. WH: Yeah, I agree about not trying to retrofit existing objects. -SYG: Yeah, so yeah, it's definitely a question to be answered because I don't know how to answer that exactly. I'm trying to get some early partner feedback from people who might be interested in using this and to see, you know, if we run into any issues +SYG: Yeah, so yeah, it's definitely a question to be answered because I don't know how to answer that exactly. I'm trying to get some early partner feedback from people who might be interested in using this and to see, you know, if we run into any issues WH: I would love to make something like this work, but I would need to understand better what the plan was. @@ -491,25 +511,25 @@ MS: So you started out this presentation talking about scheduling down the CPU l SYG: Understood. Yeah, you definitely have more experience than I do here and I would weigh your opinion very highly here. There's currently attempts and on the web platform to introduce a single thread like within one thread scheduling apis for the browser scheduler to better order it's tasks. and that topic was mainly about thinking if there's such a scheduling API which does exist like two or three years future should that scheduling API also take into consideration in these asymmetric architectures. -MS: I think that we found that actually the best API is more of a hint API -- that we don't want to constrain the scheduler, but we want to hint to the scheduler the intent of particular threads of execution. 
+MS: I think that we found that actually the best API is more of a hint API -- that we don't want to constrain the scheduler, but we want to hint to the scheduler the intent of particular threads of execution. -SYG: I imagine that must be how the schedule API works today. I can't imagine the web API offering any guarantees like you can obviously see here. +SYG: I imagine that must be how the schedule API works today. I can't imagine the web API offering any guarantees like you can obviously see here. -MS: We're not we're not running real time environments for say you don't even you know appear to you want that. So yeah, I want to be scheduled to be a little bit higher level because if we Implement a lower level we actually can cause more damage than good. +MS: We're not we're not running real time environments for say you don't even you know appear to you want that. So yeah, I want to be scheduled to be a little bit higher level because if we Implement a lower level we actually can cause more damage than good. -SYG: Yeah, that makes a lot of sense to me. +SYG: Yeah, that makes a lot of sense to me. -MM: The isolation of the shared realm, while still being able to entangle it in one Direction with the other Realms can point at arbitrary individual objects within the shared realm, but if the shared realm is transferred then all of those pointers have to get cauterized.That's that's it. That can be expensive. It very much reminds me of the problem that Mozilla faced when they wanted to implement the semantics of the weird thing on the web where you can truncate the domain of your web page and then object graphs that used to be entangled now becomes severed from each other. They did that by putting a full membrane between them and paying the overhead of that level of indirection. That's the only way I think you can get this reliable severing of object references on transfer. It's basically a full membrane between the plain Realms and the shared Realms. I want you to consider the alternative of, rather than transferring ownership of mutable objects, might we all together be better off by providing good support for transitively immutable objects and sharing of passing them by sharing in which case there is no loss of access that you need, there's no issue of readers/writers. There's the need to support, you know, the functional programming style of incremental derivation of new objects from old objects at low overhead, but the old objects would be safely shareable. +MM: The isolation of the shared realm, while still being able to entangle it in one Direction with the other Realms can point at arbitrary individual objects within the shared realm, but if the shared realm is transferred then all of those pointers have to get cauterized.That's that's it. That can be expensive. It very much reminds me of the problem that Mozilla faced when they wanted to implement the semantics of the weird thing on the web where you can truncate the domain of your web page and then object graphs that used to be entangled now becomes severed from each other. They did that by putting a full membrane between them and paying the overhead of that level of indirection. That's the only way I think you can get this reliable severing of object references on transfer. It's basically a full membrane between the plain Realms and the shared Realms. 
I want you to consider the alternative of, rather than transferring ownership of mutable objects, might we altogether be better off by providing good support for transitively immutable objects and passing them by sharing, in which case there is no loss of access that you need, and there's no issue of readers versus writers. There's the need to support, you know, the functional-programming style of incremental derivation of new objects from old objects at low overhead, but the old objects would be safely shareable.

SYG: I think it's a bridge too far in my opinion. I think it's very appealing, but I think mutation as a way to program on the web and node is just here to stay, and I think supporting mutation is very important. As for the point that this would be very expensive - it seems like Dan is anticipating this a little bit - I agree that if you were to do a compartment-style implementation of this, the severing of the membrane would be extremely expensive, given how compartments work. How I am envisioning this to actually be implemented - not polyfilled, but how it might be implemented - is that the objects allocated in the shared realm are represented by some kind of fat pointer, and there is code in the engine that knows to check the allocator of those objects against some thread-local value of, basically, "is the current thread on this realm", and if not, throw. And you are not doing a full graph walk when you transfer, but instead you kind of amortize it out over each access. Each access will then contain an extra branch, which probably will be a couple of loads, but I think that is the right trade-off for the performance.

MM: Let me clear up some confusion that I think just went by. Compartments do not bundle in membrane separation; compartments and membranes are orthogonal.

SYG: Sorry. I meant the Firefox implementation called compartments - were you talking about the proposal?

MM: Ah, you're right.

DE: I don't think it makes sense for us to optimize for polyfilled performance any more than we would when designing something like Atomics, because this is just a thing that you should really only use when it's actually having this multi-threaded impact.
If we made modifications like this, it wouldn't take an extension to the object model. I think a membrane could faithfully implement this. So I think that's a relevant lens, but I wouldn't privilege sort of polyfill performance.

SYG: I definitely agree with that, I think. Where I am not clear is whether a proxy can faithfully polyfill whatever we design for running functions in read-only mode, but that's only because it's not known - I don't know how that should work there. But for the data sharing case, I think it definitely can be faithfully polyfilled.

@@ -523,75 +543,73 @@ SYG: Thanks. I would like to read more about it. Is there like a file or somethi

PST: I will send you the link.

SYG: Thank you very much.

DRR: It could be nice for TS-like tools, but I mean, as I watch this presentation, there have been so many speculations of oh, we could probably try to do, you know, something like parallelism of basic tasks, or just be able to serve up data based on the common data structures, but we just haven't been able to do that because of many of the reasons you allude to, of just like the costs being prohibitive or just the memory model not playing well for that. And so we end up often spinning up several servers for editor scenarios, and we end up not being able to do things like, you know, parallel sort of processing of like trees before a join point in an easy way. So this is just sort of like a vote of “I like this basically”. This looks good. It's a good direction. I'm happy to see that we're exploring it. So I think that's it for me. Thanks.

SHU: Yeah. I want to talk to you more about that. It sounds like where you would like more parallelism is this data parallel story where you have this big thing that needs to be processed, that you can chunk up in some logical way, if only you could distribute that over a bunch of threads. Is that accurate?

DRR: It's partially that, but also being able to easily respond over the same data structures, because you can't really do this, you know, sharing across threads, right?
Like if an IDE makes multiple requests some of those requests are independent of each other, right? So like a semantic request is often different from like a syntactic request and they can be answered independently. We're not wired up to take to leverage any of this stuff right? I'm just going to put it out there. I mean, other tools that could take inspiration, could leverage this sort of work and this sort of thing. They do leverage it in the C# ecosystem right like the way that their entire IDE experiences is wired up to leverage a lot of shared data structures across the read-through immutability. So yeah, but I'd be happy to talk through some scenarios with you if you like. -SYG: Yeah. I think I think one very high level thing. I would love to partners about is basically, I think with how the web works and how node works and how JS works with these async communicating event loops I think the granularity is going to be fairly coarse such that.fine grained beta parallel algorithms aren't going to scale. The overhead is going to be too big and I am wondering what are the use cases we can? we can realistically help improve and what we can instead kind of punt to well, let's make the thread like model better. If you really need that kind of power. It just bleeds through your own manual multi-threading. +SYG: Yeah. I think I think one very high level thing. I would love to partners about is basically, I think with how the web works and how node works and how JS works with these async communicating event loops I think the granularity is going to be fairly coarse such that.fine grained beta parallel algorithms aren't going to scale. The overhead is going to be too big and I am wondering what are the use cases we can? we can realistically help improve and what we can instead kind of punt to well, let's make the thread like model better. If you really need that kind of power. It just bleeds through your own manual multi-threading. SYG: All right, times up. Thank you very much everybody. ## RegExp Matches Indices JSC Implementation feedback + Presenter: Michael Saboff [slides](https://github.com/tc39/proposal-regexp-match-indices/blob/master/RegExp%20Match%20Indices%20JSC%20Implementation%20Feedback%20Nov%202020.pdf) - MS: This is implementation feedback to the regex match indices proposal. It''s currently stage 3. I'll probably give them more laborious synopsis of the proposal than everybody wants to hear but maybe some people want to hear it. Basically when you do a match with like for example regex exec this adds an indices property which returns it's basically an array of arrays and indexes where match has occurred for the whole match and then any kind of sub pattern those are included as well. What is not on this slide is that there's if there's named captures then there's also named properties that have these types of arrays. I want to point out that I am not against this proposal, in fact I actually like it. I believe it's very useful for token scanners and other text parsers, especially when you want to report any kind of errors in the stuff you parse because it tells you exactly where you need to report those errors. -MS: So there were some concerns that were raised almost a year ago at the December 2019 meeting. Shu raised those issues that V8 had when they were implementing and resulted in two issues raised. 
First was greater amount of memory used obviously any natural objects of these include these properties and it's more of a it's a tree of properties as it were and the other was that there are some performance issues. Allocations: obviously for every time you create one of these match objects, and then there's that means there's more GC work to do. And so these penalized all regular expression use. And at the end of the day V8 decided that instead of materializing the indices properly greedy/eagerly, that they would do that lazily. And so what they ended up implementing was that they would only materialize indices properties when they when they were accessed and the way that they would do that was they actually re-run the regular expression. so effectively this would only penalize indices usage and it would basically wouldn't hurt the performance for existing use cases. So that's what they decided to do. So, let me explain or walkthrough some of the proposal or some of the implementations that we found with this proposal. We've actually tried four different implementations and have various results from that. the first implementation was basically just follow the spec. Everything's eagerly done. So you just create indices property on every on every match and that was was done just to see hey, what's this going to cost performance wise? JetStream2 is one of the benchmarks that we follow. We think it's fairly indicative of the JavaScript usage on the web. It's a couple of years old. It contains a lot of other benchmarks. One of those which was cited in the V8 experience a year ago is the octane 2 Regex Benchmark. and we slowed down 17 percent In that particular Benchmark. but overall jet stream slowed by about 1%. It wasn't that bad. But clearly for a regular expression heavy usage, this was probably unacceptable. So the second limitation was very similar to V8's proposal and that is, well, since we create the indices during a matching why don't we just save those indices and then when we, you know need to materialize the indices property on the matches object we use those indices to construct the the the tree of arrays and do things that way. Well you're much much better on the Regexp Benchmark of octane 2. It is only 3% slow down in jet stream 2 overall is about 1%. Now. Let me point out that jet stream 2 actually has about - one out of eight of the tests in jet stream 2 are regex sensitive to some degree or another. For example, there's a test we call offline assembler. It's basically we ported the offline independent machine-independent assembler. Similarly we have in Ruby that's part of javascriptcore we ported the front end of that to JavaScript. So it's basically a parser. It doesn't generate the code and it's actually was 8 percent slower on this first implementation and four and a half percent slower on the second. So it's still had some performance implications. There's another test called flight planner. It's a reg ex performance test that I wrote that's part of jet stream to 5 percent slower on this first of mutation the direct implementation about three and a half percent slower on the second and as we get to the o the third it's is in the noise.The third implementation is basically doing exactly what V8 did and that is to save the regular expression and and its input and when we materialize indices, we rerun the regular expression. And it was still a slight slowdown in regex, but it didn't appear overall to impact the jet stream 2 Performance. I think that that's acceptable. 
So we havee success right? Well, here's the concern: we both have the same path forward to implement match indices, but the problem is that we're getting the performance for existing record expressions or nearly so we're actually matching twice for match indices. So the new feature we're basically penalizing the use of this new feature and it for a regular expression that you want indices for you're going to run it twice. And I think the web will be aware of this and it may discourage people from using this feature, which I think we don't want to introduce a new feature. We're basically were implying, at least two implementations are implying, this is a good feature, but don't use it because it'll slow you down - unless you really really want it. And I think it can complicate performance sensitive code because there may be some code where you've had to in the past derive indices and now you want to get them directly, you may be reluctant to do that because it could actually be slower. So the the performance concerns were not raised initially by ourselves or or the V8 engine. It actually has been in the GitHub for this proposal for some time and there's been various alternatives that have been considered to alleviate The performance concerns. One was to add a new regex flagged, there was talk about different ways that we could hint that we want this ant this information, a call back function, new regex functions that would return this, or adding an option value or option bag. So a lot of things have been discussed, some of them have been presented to plenary. So what I'd like to do is, I would like us to revisit this performance mitigation. I had some discussion on the GitHub as well as some private conversation with Ron Buckton who is the champion of this proposal. And I think there's a few principles that I like to guide this discussion. One is that whoever's writing the code that's going to use this feature. They know intent and that intent I think is important. We should use that intent to help us mitigate this performance by using that intent in how we implement this, specifically when do we materialize the indices property. And the intent is actually in two places. There's regular expression itself and then its use. it's quite often the case, but not always that a regular expression is defined in one place and used in many places. There are other cases where regular expressions are defined and used a directly in one place. And there's also a pattern that I've seen is a regular expression is may be defined at one place in the code. - but it's only used in another place in that same code. I propose that that intent with the regular expression, specifically adding a flag is probably the best place to do that. if you look at all the standard - when we go to actually execute a regular expression there's this built-in regex ex function. It takes the regular Expressions as first argument. I imagine that most engines do what we do in JavaScriptCore, and use something like that abstraction. And I think it is a good place to do this. The proposal Champions at one point proposed a flag using the letter o. In my implementation that I tried out where I did this I used 'n'. Note that the Perl and dotnet have 'n' options. They mean a little bit different than what's here. So I don't really care too much just as long as the flag is somewhat meaningful. I mean we can't use I for indices because it's used for ignore case but something makes some kind of sense. 
I would propose we use that we we're talking about using a want indices fly internally for subsequent matches inside of JSC that is that if you have a regular expression, that doesn't have this some kind of new flag and you go to match it today the code that we've posted for review. It's going to take and say the regular expression and the input in a rematch it but then the idea is that we would flag that regular expression that next time it's run go ahead and eagerly populate the indices flag to to have slightly better performance on the all subsequent uses of that regular expression. There's some issues with that having to do with the shapes of The Returned object and optimizations that we make in our engine possibly others have the same kind of issue. But I would like us to revisit my suggestion. We do go with some kind of flag for the regular expression. So in summary, we tried actually for implementations we had similar performance issues as in V8. This is there's a current stage 3 conformant implementation that we posted that hasn't landed yet. But we're a little concerned that we punish people that use this feature. So, like I said, I suggest that we consider the developer intent and add a new flag to the regular expression itself. So with that I'm open for discussions and questions. And like I say, I'm not a champion of this proposal, but I'm providing feedback with recommendations based upon that feedback and I think it's some and I think Ron's on the call something that I think is worth discussing. +MS: So there were some concerns that were raised almost a year ago at the December 2019 meeting. Shu raised those issues that V8 had when they were implementing and resulted in two issues raised. First was greater amount of memory used obviously any natural objects of these include these properties and it's more of a it's a tree of properties as it were and the other was that there are some performance issues. Allocations: obviously for every time you create one of these match objects, and then there's that means there's more GC work to do. And so these penalized all regular expression use. And at the end of the day V8 decided that instead of materializing the indices properly greedy/eagerly, that they would do that lazily. And so what they ended up implementing was that they would only materialize indices properties when they when they were accessed and the way that they would do that was they actually re-run the regular expression. so effectively this would only penalize indices usage and it would basically wouldn't hurt the performance for existing use cases. So that's what they decided to do. So, let me explain or walkthrough some of the proposal or some of the implementations that we found with this proposal. We've actually tried four different implementations and have various results from that. the first implementation was basically just follow the spec. Everything's eagerly done. So you just create indices property on every on every match and that was was done just to see hey, what's this going to cost performance wise? JetStream2 is one of the benchmarks that we follow. We think it's fairly indicative of the JavaScript usage on the web. It's a couple of years old. It contains a lot of other benchmarks. One of those which was cited in the V8 experience a year ago is the octane 2 Regex Benchmark. and we slowed down 17 percent In that particular Benchmark. but overall jet stream slowed by about 1%. It wasn't that bad. 
But clearly for a regular expression heavy usage, this was probably unacceptable. So the second limitation was very similar to V8's proposal and that is, well, since we create the indices during a matching why don't we just save those indices and then when we, you know need to materialize the indices property on the matches object we use those indices to construct the the the tree of arrays and do things that way. Well you're much much better on the Regexp Benchmark of octane 2. It is only 3% slow down in jet stream 2 overall is about 1%. Now. Let me point out that jet stream 2 actually has about - one out of eight of the tests in jet stream 2 are regex sensitive to some degree or another. For example, there's a test we call offline assembler. It's basically we ported the offline independent machine-independent assembler. Similarly we have in Ruby that's part of javascriptcore we ported the front end of that to JavaScript. So it's basically a parser. It doesn't generate the code and it's actually was 8 percent slower on this first implementation and four and a half percent slower on the second. So it's still had some performance implications. There's another test called flight planner. It's a reg ex performance test that I wrote that's part of jet stream to 5 percent slower on this first of mutation the direct implementation about three and a half percent slower on the second and as we get to the o the third it's is in the noise.The third implementation is basically doing exactly what V8 did and that is to save the regular expression and and its input and when we materialize indices, we rerun the regular expression. And it was still a slight slowdown in regex, but it didn't appear overall to impact the jet stream 2 Performance. I think that that's acceptable. So we havee success right? Well, here's the concern: we both have the same path forward to implement match indices, but the problem is that we're getting the performance for existing record expressions or nearly so we're actually matching twice for match indices. So the new feature we're basically penalizing the use of this new feature and it for a regular expression that you want indices for you're going to run it twice. And I think the web will be aware of this and it may discourage people from using this feature, which I think we don't want to introduce a new feature. We're basically were implying, at least two implementations are implying, this is a good feature, but don't use it because it'll slow you down - unless you really really want it. And I think it can complicate performance sensitive code because there may be some code where you've had to in the past derive indices and now you want to get them directly, you may be reluctant to do that because it could actually be slower. So the the performance concerns were not raised initially by ourselves or or the V8 engine. It actually has been in the GitHub for this proposal for some time and there's been various alternatives that have been considered to alleviate The performance concerns. One was to add a new regex flagged, there was talk about different ways that we could hint that we want this ant this information, a call back function, new regex functions that would return this, or adding an option value or option bag. So a lot of things have been discussed, some of them have been presented to plenary. So what I'd like to do is, I would like us to revisit this performance mitigation. 
I had some discussion on the GitHub as well as some private conversation with Ron Buckton who is the champion of this proposal. And I think there's a few principles that I like to guide this discussion. One is that whoever's writing the code that's going to use this feature. They know intent and that intent I think is important. We should use that intent to help us mitigate this performance by using that intent in how we implement this, specifically when do we materialize the indices property. And the intent is actually in two places. There's regular expression itself and then its use. it's quite often the case, but not always that a regular expression is defined in one place and used in many places. There are other cases where regular expressions are defined and used a directly in one place. And there's also a pattern that I've seen is a regular expression is may be defined at one place in the code. - but it's only used in another place in that same code. I propose that that intent with the regular expression, specifically adding a flag is probably the best place to do that. if you look at all the standard - when we go to actually execute a regular expression there's this built-in regex ex function. It takes the regular Expressions as first argument. I imagine that most engines do what we do in JavaScriptCore, and use something like that abstraction. And I think it is a good place to do this. The proposal Champions at one point proposed a flag using the letter o. In my implementation that I tried out where I did this I used 'n'. Note that the Perl and dotnet have 'n' options. They mean a little bit different than what's here. So I don't really care too much just as long as the flag is somewhat meaningful. I mean we can't use I for indices because it's used for ignore case but something makes some kind of sense. I would propose we use that we we're talking about using a want indices fly internally for subsequent matches inside of JSC that is that if you have a regular expression, that doesn't have this some kind of new flag and you go to match it today the code that we've posted for review. It's going to take and say the regular expression and the input in a rematch it but then the idea is that we would flag that regular expression that next time it's run go ahead and eagerly populate the indices flag to to have slightly better performance on the all subsequent uses of that regular expression. There's some issues with that having to do with the shapes of The Returned object and optimizations that we make in our engine possibly others have the same kind of issue. But I would like us to revisit my suggestion. We do go with some kind of flag for the regular expression. So in summary, we tried actually for implementations we had similar performance issues as in V8. This is there's a current stage 3 conformant implementation that we posted that hasn't landed yet. But we're a little concerned that we punish people that use this feature. So, like I said, I suggest that we consider the developer intent and add a new flag to the regular expression itself. So with that I'm open for discussions and questions. And like I say, I'm not a champion of this proposal, but I'm providing feedback with recommendations based upon that feedback and I think it's some and I think Ron's on the call something that I think is worth discussing. WH: I'm curious if anybody has tried machine learning to try to predict ahead of time which regular expression executions will want indices. 
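For reference, a minimal sketch of the result shape under discussion, based on the stage 3 proposal as presented above; the pattern, input, and group names are made up for illustration, and the flag spelling was still an open question at this point.

```js
// Sketch of the match-indices shape described in the presentation:
// `indices` mirrors the match array with [start, end] offsets for the
// whole match and each capture group, plus a `groups` object keyed by
// named capture groups. (Pattern and input are illustrative only.)
const re = /(?<word>\w+) (?<num>\d+)/;
const m = re.exec("token 42");

console.log(m.index);                 // 0 - start offset, already available today
console.log(m.indices?.[0]);          // [0, 8] - whole match, when indices are materialized
console.log(m.indices?.[2]);          // [6, 8] - second capture group
console.log(m.indices?.groups?.num);  // [6, 8] - same range, looked up by group name

// Under the lazy strategy described above, the first read of `indices`
// is the point where the regular expression would be matched a second time.
```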
MS: So we didn't try that. I think the cost of doing that would hurt performance, right? You can consider various different ways you could do that. Certainly the one thing that we're discussing doing, if we keep the current proposal, is that all subsequent matches would produce indices. So, you know, that's the very simple machine learning: last time we used this regex we needed it, so the next time we'll produce it. But doing some kind of flow analysis of the code or things like that would be more expensive than other alternatives. Okay.

SYG: Yeah, I would like to second that.

WH: I am in favor of using the flag here by the way.

DE: I'd like to propose a new method that parallels exec. This is not really feasible today because of the subclassing design, but if we remove that subclassing design, as proposed in Shu and Yulia’s proposal, then this would be a different method with a different signature that returns, you know, the offsets.

MS: But you have other APIs that also produce matches, like String match.

DE: Yeah, this is just those two right?

MS: You've got matchAll - you know, how far do you go down, right? It was discussed at one point. There you're having programmers express intent at time of use, not time of definition. But, you know, as I said, a lot of times they're the same location. But yeah, that's another way of doing it.

DE: It could also be like an options bag argument at the usage site. Anyway the flag does not seem too bad to me. I like the idea of making sure that we're not implicitly giving people this double matching performance cost.

MS: Right, as far as an option bag or option flag that we put on the use, that could have a performance implication for existing regular expressions as well, because now you have to at least check that before you do something. I should have added: that's one of the reasons why I like the flag - the flag is no cost for existing code.
I mean that there's one check you need to do, one compare and branch you need to do when you're materializing the matches object, and no expense, at least in our case, when you're actually doing the matching and things like that.

DE: Yeah, I'm in favor of going ahead with the practical approach. So that's the flag then, good.

SYG: Yes, I would like to give a strong +1 to surfacing intent here. I think that would definitely alleviate the issue. And I completely agree with you that if we were to ship this re-execution strategy, it basically dooms the long term performance viability of the proposal. I would like to surface a concern about the flag which is not a personal one - I am in favor of the flag. I rarely use regular expressions, but when I discussed this internally with the V8 team there were some concerns from devtools folks that an extra flag adds user complexity and that it is perhaps more undesirable than other ways of surfacing intent. I guess I don't really share that view. I wonder if other folks in committee think that an extra flag here is more complexity for the developer to learn about regular expressions, to the extent that we should weigh it against fixing the performance issues here.

MS: So others may have better recollection than I do, but we've added, if I recall correctly, two flags in the last like three or four years: the Unicode flag obviously for Unicode regular expressions, and then we had the y flag, the sticky flag, in the last several years as well. You know, we went from like three or four flags to five or six flags now. So the reason I raise that is, I'm wondering, is the concern with the Google dev folks the number of flags, adding a flag... There's a lot of languages like Perl that have tons of flags.

SYG: Yeah. I think it's twofold: one is that it seems to be indeed the sheer number of flags. I don’t think Mathias is in the call. Maybe he can speak for himself.
I'm not sure, I can't really see the participation list; he might not be dialed in. I think one is the sheer number, as in the complexity. There was another argument that it's kind of a category error for this to be a flag - like it's not a thing that other flags do.

MS: It is arguably a little bit different, right?

SYG: Yeah, again, as I say, I'm not a heavy user of regular expressions. I don't really hold a model of what category of things a regular expression flag ought to do, but that is also not an opinion I share, maybe just because I don't use regular expressions.

MS: I think one can argue that all the current flags modify how matching happens, and this would be the first one for JavaScript which modifies how the results are presented. Other languages already have flags that modify how results are presented in addition to how matches happen. It is definitely a new category, as it were, of flag for regular expressions for JavaScript, but not necessarily for other languages.

+SYG: I see. I think insofar as I'm plenipotentiary for V8, I am happy with the flag to solve this issue.

+RBN: I wanted to say that I've spoken with Michael about this, and there have been some discussions about this in the IRC chat as well. Could you go back to the slide where we talked about the different mitigations that have been discussed on the GitHub before? I think of all of the mitigations that we've investigated for the proposal over the past two years, as it progressed up to stage 2 and up to stage 3: we looked at using a regular expression flag, we looked at adding a callback that could provide different values, adding new methods, and adding an optional value or option bag. We looked at all of these options, and the last three in this list each had significant drawbacks. Callbacks were not obvious as to what they needed to be, what needed to happen, and the kind of thing you wanted to do with the callback; you only ever really wanted one callback, and it was the one that gave you the indices. For most other use cases you could just map over the result if that was necessary. Adding new methods complicates things because you're grossly increasing the size of the RegExp API just for one specific use case, and then you're not able to use things like String.prototype.match, matchAll, etc. And option values and option bags, and even the callback function case, didn't work very well when we have all of these symbol-based methods that handle RegExp subclassing, which we've discussed as being complicated already: we would have to thread these things through, and there are possible issues if anyone has actually tried to subclass a regular expression - this wouldn't work for them if they're not just spreading all the arguments they get. It falls apart in a lot of places. The only solution that is simple is adding a flag, at least as far as I can tell. You mentioned “n” is used in Perl and does something a little different, and in .NET it does something significantly different.
I believe 'n' in Perl and .net handles whether or not captured groups are explicitly captured and only captures named capture groups, so I wouldn't recommend using n because there's a lot of folks who have experience with regular Expressions come from a language like Perl where it's kind of baked into the language and has been. And I have a number of reg ex features that I'm considering proposing and one of them is that capability in I don't want to step on that if that's possible. So I do agree that adding a flag is a good idea is the simplest approach to solving the issue at hand. It doesn't mean we necessarily would be replacing the value that we could just be that this just means that we'll get the indices array in addition to what you would get normally so that adding the flag doesn't break existing code but gives you additional value and it allows you to pay for the cost of allocating indices when you know, you actually want to allocate the indices. So I think that is in my opinion probably the best approach. -SYG: I see I think insofar as I'm plenipotentiary fo V8, I am happy with the flag to solve this issue. - -RBN: I wanted to say that I've spoken with Michael about this, and there have been some discussions about this also in the IRC chat as well. Could you go back to the slide we talked about the different mitigations that have been discussed on the GitHub before? So I think of all of the mitigations that we've investigated for the proposal over the past two years as it progressed to up to Stage 2 and up to stage 3. We looked at using a regular expression flag. We looked at adding a callback that could provide different values, adding new methods, adding an optional value option bag. We looked at all of these options and the last three in this list here. Each of them had significant drawbacks.. Callbacks ere not obvious as to what they need to be. What needed to happen and the kind of thing you wanted to do with the Callback. You only ever really wanted one callback and it was the one that gave you the indices. Most other use cases were, you could just map over the result if that was necessary adding new. It complicates these things because you’re grossly increasing the size of the reg xapi just for one specific use case and then you don't marry not able to use things like string prototype match match all Etc and option values and option bags. And even the Callback function case didn't work very well when we had all of these have all of these symbol based methods that handle reg ex subclassing which we've discussed as being complicated already, but we have to thread these things through and there's possible issues with, if anyone has actually tried to subclass a regular expression. Then this wouldn't work with them if they're not just spreading all the arguments in that they get. It falls apart in a lot of places. The only solution that is simple is adding a flag, at least as far as I can tell. You mentioned “n” is used in Perl and does something a little different, actually end up rolling dotnet to something significantly different. I believe 'n' in Perl and .net handles whether or not captured groups are explicitly captured and only captures named capture groups, so I wouldn't recommend using n because there's a lot of folks who have experience with regular Expressions come from a language like Perl where it's kind of baked into the language and has been. 
And I have a number of reg ex features that I'm considering proposing and one of them is that capability in I don't want to step on that if that's possible. So I do agree that adding a flag is a good idea is the simplest approach to solving the issue at hand. It doesn't mean we necessarily would be replacing the value that we could just be that this just means that we'll get the indices array in addition to what you would get normally so that adding the flag doesn't break existing code but gives you additional value and it allows you to pay for the cost of allocating indices when you know, you actually want to allocate the indices. So I think that is in my opinion probably the best approach. - -DRR: I'm having a hard time in understanding exactly how the flag would be used. I mean I get that you added to the regex itself, but then what do you do differently? Like are all your call sites changed in semantics like if I do matchAll do I get different results, or do I get something extra? Maybe that's something you could answer Michael. +DRR: I'm having a hard time in understanding exactly how the flag would be used. I mean I get that you added to the regex itself, but then what do you do differently? Like are all your call sites changed in semantics like if I do matchAll do I get different results, or do I get something extra? Maybe that's something you could answer Michael. -MS: You're not going to get different match results. You'll just get the indices property filled in with the appropriate values. +MS: You're not going to get different match results. You'll just get the indices property filled in with the appropriate values. -DRR: Okay. I guess I guess from the typescript side like I'm just trying to think about this because there's no concept where we change the type based on the flags. And so giving users editor support where we say “oh, yeah, you'll have the indices”. It's hard to model in that case. It's not undoable, but it could be misleading. and maybe that's just the compromise that we have to do. +DRR: Okay. I guess I guess from the typescript side like I'm just trying to think about this because there's no concept where we change the type based on the flags. And so giving users editor support where we say “oh, yeah, you'll have the indices”. It's hard to model in that case. It's not undoable, but it could be misleading. and maybe that's just the compromise that we have to do. -MS: Are you modeling the type beyond that it's an object and a specific type of object? +MS: Are you modeling the type beyond that it's an object and a specific type of object? DRR: Well, you know whenever you get one of these matches you need to be able to say whether or not it's going to have those those index properties filled in and really that's what we're trying to give people as like a some indication that it's not going to be there unless you use that flag on the regex. So you just kind forward that information along. @@ -599,9 +617,9 @@ RBN: I can also talk to you about this little bit more to offline Daniel because DRR: Yeah, but I don't really see that changing like fundamentally anytime soon. But again, this is something we can discuss offline I think. -RBN: I think there are solutions for this and even if they're not great solutions, there are solutions for this. +RBN: I think there are solutions for this and even if they're not great solutions, there are solutions for this. -MS: Okay, and not just group it's also named captured groups as well on these little deeper than having groups. 
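A small sketch of the opt-in behavior being discussed, including offsets for named capture groups; the flag spelling is deliberately left out here, since no letter had been settled on at this point.

```js
// Hypothetical opt-in behavior: without the flag, `indices` is simply not
// materialized, so existing regular expressions pay no extra allocation.
const plain = /(?<year>\d{4})/.exec("in 2020");
console.log(plain.indices);  // undefined unless the indices flag is requested

// With the flag (letter undecided in this discussion), the same match
// would also carry offsets, including for named capture groups:
//   match.indices[0]            // [3, 7] - whole match
//   match.indices.groups.year   // [3, 7] - the named group's range
```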
+MS: Okay, and not just group it's also named captured groups as well on these little deeper than having groups. DRR: No, I understand that. It's I get that that's always been a limitation on our end. But yeah, I just don't want to make things worse on our end. Okay. Thanks. Any other questions or discussion? Can we get a temperature check? @@ -610,15 +628,19 @@ DRR: What I'd like to do Michael is probably circle back in the next meeting and RBN: I think this is something that we need to get solid solution or a solid answer to before it lands and we can make a patch to the both to the GitHub repositories proposal spec and the pr for the spec that exist today. MS: Okay. Thank you. -#### Conclusion/Resolution -* Revisit next meeting. + +### Conclusion/Resolution + +- Revisit next meeting. + ## Supporting MDN's documentation about TC39's output + Presenter: Daniel Ehrenberg (DE) - [issue](https://github.com/tc39/Reflector/issues/324) - [slides](https://docs.google.com/presentation/d/187-3wKYOJPmK4oLItIVROttt0OObaLx9BOWbKf8JqKU/edit#slide=id.p) -DE: So MDN Staffing to document TC39’s work. People told me not to be too emotional in this presentation, but I'm sad to hear about Mozilla's layoffs, including all but one of the MDN writers including there was somebody who's really focusing on TC39. MDN staffing is important for TC39. +DE: So MDN Staffing to document TC39’s work. People told me not to be too emotional in this presentation, but I'm sad to hear about Mozilla's layoffs, including all but one of the MDN writers including there was somebody who's really focusing on TC39. MDN staffing is important for TC39. DE: So MDN is this documentation website that has documentation for all of the web platform and also detailed documentation for JavaScript and it's really been staying up to date for the most part with things that TC39 does. It's really kind of the premier reference documentation besides the specification for our work. So they document what we do in an accessible way and including an introductory guide and it also documents compatibility for what environments which different JavaScript implementations support which which parts. it's really trusted by JS developers to be neutral and and correct and you know, although documentation across implementations is incomplete it's been increasing over time. Although there's been community volunteers the whole time, most things were not documented until MDN staffing, and I know this because I was involved in. I've been working on on some of these community contributions and it's good, but the staff are really necessary. Even if there is a contributor to mentor and make necessary changes to fix up the work. So I think it would really hurt adoption and intelligibility of TC39's work if we did not have professional technical writers. So we could, you know, expect community contributions, we could ask tc39 members to contribute to a fund to fund writers or even somehow employ a technical writer. Ecma could contribute for a contractor writer. The idea here would be to find a way to fund jobs for the laid-off MDN writers. So I'm kind of pushing on a couple of these one is TC39 members contributing to a writer fund. Another is ECMA contributing. So I'm proposing both within [?] and my level that we jointly sponsor this so been discussing it with the Ecma execcom. They're very interested in supporting TC39 for what I mean sir song is there they're practical so I disc the potential of contribution. 
like this that we could that we could fund a technical writer position for and so based on you know, we discussed this previously on the reflector, there were a lot of thumbs up. I wanted to ask here for TC39's feedback if we could come to consensus to making a request concretely for basically for funding one day a week of work for a technical writer, where we estimate I mean based on discussing with with people involved in this that we would need somebody basically full time to keep up to date if with everything between ecma262 and I can afford to the different kinds of work that we're doing. And then I'm proposing this with igalia that we do this and so hopefully this all together we'll add up to more contributions out of budgeting for technical writers for this. So I want to ask for consensus on that point and also ask more broadly what we want as a committee from JavaScript documentation? How should we work with MDN? I know going back a long time Rick Waldron set up a discussion with them and been acting as a liaison in some sense and they have some project management tools and wondering what people think is MDN's current state for JavaScript documentation. Do eople have input on the current direction, And in general what kinds of documentation are we interested in is TC 39. We have an educator Outreach Group, which is working on introductory documentation for proposals. Should we be integrating documentation into our stage process? I want to come back to this proposal to the ECMA GA. Can we focus on if we have consensus for this and discussing queue items for that and then we can talk about the broader discussion questions. @@ -628,7 +650,7 @@ MS: Okay, but do we know how much staffing they need to take input whether it's DE: The current way that MDN works is it's literally a wiki. So you just edit it and then it's there online. -YSV: I was going to say effectively what dan just said. It's a Wiki that you edit. There's no oversight which is why they want to move to a GitHub repository and then there may be someone checking it. But not all users are treated the same. Not everyone gets a carte blanche for writing their own pages. They had to get approval for that. That's one of the places where extra work comes from. But what will happe if we have a professional writer they will be given the right to edit Pages specific to tc39 and create pages. So they would basically be allowed to operate without a lot of supervision. +YSV: I was going to say effectively what dan just said. It's a Wiki that you edit. There's no oversight which is why they want to move to a GitHub repository and then there may be someone checking it. But not all users are treated the same. Not everyone gets a carte blanche for writing their own pages. They had to get approval for that. That's one of the places where extra work comes from. But what will happe if we have a professional writer they will be given the right to edit Pages specific to tc39 and create pages. So they would basically be allowed to operate without a lot of supervision. MS: Thanks @@ -638,7 +660,7 @@ WH: One Swiss franc is 1.09 dollars. DE: They estimate it probably takes about a hundred thousand of these currency units to hire a full-time Tech writer. I mean if you were in San Francisco there would be more but maybe the budget is somewhat higher because of labor costs, but it gives us a ballpark. 
-WH: Having been on the ECMA GA for a while, I'm really not in favor of asking ECMA for any significant amount of money — it's probably one of the poorest organizations attending this meeting. ECMA has had to lay off staff too. +WH: Having been on the ECMA GA for a while, I'm really not in favor of asking ECMA for any significant amount of money — it's probably one of the poorest organizations attending this meeting. ECMA has had to lay off staff too. DE: ECMA has expressed that they would be accommodating to this request. @@ -658,15 +680,15 @@ MBS: Waldemar are you saying you would bring this up at the GA? Is this a person WH: This is a personal opinion. No, you're misquoting me. I didn't say I would bring this up to the GA. What I'm saying is that the GA would have to decide on something like that. -DE: Okay is to input this input that I can then bring to the execom to form the budget proposal that we went to the GA and then we can then vote on it. And so Google will then take an opinion on whether they support or oppose this but it will be informed by you know, the opinions of tc39 members what's proposed to the GA? So it works procedurally +DE: Okay is to input this input that I can then bring to the execom to form the budget proposal that we went to the GA and then we can then vote on it. And so Google will then take an opinion on whether they support or oppose this but it will be informed by you know, the opinions of tc39 members what's proposed to the GA? So it works procedurally -CM: Yes. I'm just curious about how this would work structurally, assuming that that we can get some combination of ECMA and others in the community to pony up sufficient funds to underwrite this, how would this work? Would this be giving a grant to Mozilla to say “hey, keep doing what you're doing” or would this involve setting up some outside management or accountability for?Just how would it work structurally? +CM: Yes. I'm just curious about how this would work structurally, assuming that that we can get some combination of ECMA and others in the community to pony up sufficient funds to underwrite this, how would this work? Would this be giving a grant to Mozilla to say “hey, keep doing what you're doing” or would this involve setting up some outside management or accountability for?Just how would it work structurally? DE: It would not be a grant to Mozilla. We would have to figure out some kind of outde management with - and I think they're funding structures like open Collective and we could use so I think a lot of this is kind of a TBD. Yeah, that would obviously have to be worked out in detail before it was fully ready. CM: This is obviously early in the process. Who would be responsible for supervising the tech writer? -DE: Yeah, that's something we have to work out. But ultimately I feel like the individuals who were working for MDM before even though we probably should have some kind of oversight. I kind of think we can trust them. I think there should be some way that you see than can give direct feedback to them. Yeah. +DE: Yeah, that's something we have to work out. But ultimately I feel like the individuals who were working for MDM before even though we probably should have some kind of oversight. I kind of think we can trust them. I think there should be some way that you see than can give direct feedback to them. Yeah. DE: Can I get a call for temperature from TCQ? 
@@ -676,6 +698,6 @@ MF: So Dan is this what we would call a zero-sum, you know, if ECMA is funding t DE: I see this as the opposite of zero-sum. This is good. We'll be able to get this funding for other tasks. I joined ECMA partly because I really wanted to work out the invited expert issues partly because I really wanted to work out these funding issues. Many of us are spending thousands or tens of thousands of Swiss Francs a year to provide financing to execom which is great because it helps us, you know, we had expanded by ECMA I feel like TC39 deserves more services provided by execom and the execom come really does want to provide services that help us. We just have to identify these services to them in the right way. So we have to identify the services to them in a way where we clearly state as a committee that we want them and then do it by the time that budgets are created and then we can propose that to the GA. So this is something that I've been trying to work towards for a long time. And my understanding of ECMA finances is not one of them being extremely short on and members can look into financial reports on their website and see more details on the website and I'm happy to explain to people how to do that. -DE: we're at times so if there's time at the end for like a 15-minute overflowed item we can continue discussing this I do believe there will be great. Thanks. Thank you. Thank you very much. I'm very interested in following up on this slide and taking temperature on this, so I'm interested in both. +DE: we're at times so if there's time at the end for like a 15-minute overflowed item we can continue discussing this I do believe there will be great. Thanks. Thank you. Thank you very much. I'm very interested in following up on this slide and taking temperature on this, so I'm interested in both. AKI: I want to mention that we will come back to this. There are questions to be answered. All right, and I think that I think that closes us out for this Monday November 16th. 
diff --git a/meetings/2020-11/nov-17.md b/meetings/2020-11/nov-17.md index c959dd2f..dae79530 100644 --- a/meetings/2020-11/nov-17.md +++ b/meetings/2020-11/nov-17.md @@ -1,7 +1,8 @@ # 17 November, 2020 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Robin Ricard | RRD | Bloomberg | @@ -15,7 +16,7 @@ | Shaheer Shabbir | SSR | Apple | | Chengzhong Wu | CZW | Alibaba | | Richard Gibson | RGN | OpenJS Foundation | -| Istvan Sebestyen | IS | Ecma International | +| Istvan Sebestyen | IS | Ecma International | | Chip Morningstar | CM | Agoric | | Caio Lima | CLA | Igalia | | Sergey Rubanov | SRE | Invited Expert | @@ -26,13 +27,13 @@ | Daniel Ehrenberg | DE | Igalia | | HE Shi-Jun | JHX | 360 | - ## Default constructors and spread operations - Presenter: Gus Caplan (GCL) + +Presenter: Gus Caplan (GCL) - [PR](https://github.com/tc39/ecma262/pull/2216) -GCL: This is a needs consensus PR and it replaces the specification's default class constructors with - they are currently written in JavaScript; they are specified here to be the result of parsing source text and it replaces that with built-in spec steps and the reason that this done is to avoid delegating to Array.prototype[Symbol.iterator] in the case of a derived Constructor because that it uses the spread operator which uses array.prototypes of the iterator, which is I mean as you can see here like this code will just throw a type error like you can't even construct the class. It just breaks it completely. So in terms of this PR, as far as I can tell there are two normative changes. The first one is that it will no longer try to access or use Array.prototype[Symbol.iterator], and the second one is that the typeError created by trying to call the class changes from the callee realm of class from the caller realm to the callee realm which matches everything else in the language. Basically, this is kind of unique in that the type error is thrown in caller Realm. So it's not too complex. Aside from that it's only a few steps and there's still some editorial things that need to be taken care of but it's pretty simple. And so yeah, those are the two items worth worrying about. It should technically be possible to specify this so that the typeError still comes from the caller realm if anybody thinks that sort of really important invariant that we have in the language I don't see the reason to match that. I'm happy to hear from people on that though. So I think that's about it. All right, so yeah SYG the implementations all seem to match what the spec text says here, which is to throw in the caller realm. So this would be changing that. If we don't see that as being a worthwhile change to be made then we can do the necessary spec text to avoid that change. +GCL: This is a needs consensus PR and it replaces the specification's default class constructors with - they are currently written in JavaScript; they are specified here to be the result of parsing source text and it replaces that with built-in spec steps and the reason that this done is to avoid delegating to Array.prototype[Symbol.iterator] in the case of a derived Constructor because that it uses the spread operator which uses array.prototypes of the iterator, which is I mean as you can see here like this code will just throw a type error like you can't even construct the class. It just breaks it completely. 
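A minimal reproduction of the breakage GCL is describing, under the then-current spec text where the default derived constructor is defined as source text that spreads its arguments (the behavior shown is the pre-change semantics):

```js
class Base {}
class Derived extends Base {}  // default constructor: constructor(...args) { super(...args); }

delete Array.prototype[Symbol.iterator];

// Spreading `args` in the synthesized `super(...args)` call looks up
// Array.prototype[Symbol.iterator], which is now gone, so even a
// zero-argument construction throws. With the built-in spec steps from
// this PR, the construction succeeds instead.
try {
  new Derived();
} catch (e) {
  console.log(e instanceof TypeError);  // true under the pre-change semantics
}
```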
So in terms of this PR, as far as I can tell there are two normative changes. The first one is that it will no longer try to access or use Array.prototype[Symbol.iterator], and the second one is that the typeError created by trying to call the class changes from the callee realm of class from the caller realm to the callee realm which matches everything else in the language. Basically, this is kind of unique in that the type error is thrown in caller Realm. So it's not too complex. Aside from that it's only a few steps and there's still some editorial things that need to be taken care of but it's pretty simple. And so yeah, those are the two items worth worrying about. It should technically be possible to specify this so that the typeError still comes from the caller realm if anybody thinks that sort of really important invariant that we have in the language I don't see the reason to match that. I'm happy to hear from people on that though. So I think that's about it. All right, so yeah SYG the implementations all seem to match what the spec text says here, which is to throw in the caller realm. So this would be changing that. If we don't see that as being a worthwhile change to be made then we can do the necessary spec text to avoid that change. SYG: And to be clear the spread operator delegating to Symbol.iterator for the iterator that is also correctly followed by implementations today. @@ -46,7 +47,7 @@ SYG:I think it's fine. Like I don't disagree that it seems better to not delegat YSV: look at this change. It looks like the change will actually be an improvement for things like iterator helpers and that it will remove some hooks as long as this web compatible. I don't know if It's a huge issue to make this change. And it may be a really good simplification from my perspective. -GCL: It's cool to hear. I'd be interested to know how that affects iterator helpers,. Good to hear. +GCL: It's cool to hear. I'd be interested to know how that affects iterator helpers,. Good to hear. DE: [unintelligible] @@ -59,10 +60,14 @@ GCL: Yeah, so it seems we are in agreement on both the normative change which is RPR: Sounds like no objections. GCL: Thank you very much everyone. + ### Conclusion/Resolution -* Consensus for both normative changes. + +- Consensus for both normative changes. + ## .item() rename + revisit inclusion on String -Presenter: Shu-yu Guo (SYG) + +Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/proposal-item-method) - [slides](https://docs.google.com/presentation/d/1UQGlq8t1zfAFa6TPvPpO9j6Pyk4EOv62MFQoC2NshKk/edit?usp=sharing) @@ -71,11 +76,11 @@ SYG: So first the easy part, we got to rename item. We found out that item is no MM: yeah, so when we were getting the uniformity with the DOM this was enough motivation to make it seem worth it without it. I really question whether it's worth doing this at all. What's the payoff that makes it worth adding a new method? -SYG: I think the payoff is still just ergonomics is better [transcription error]. There's evidence that people are reaching for relative indexing via slice because slice lets you express your expressed relative indexing by a negative index and then they're reaching for that by doing the tasks like creating the intermediate array and then getting the first element out. It seems like there's some evidence that people want that. So let's support that directly instead of making them reach for slice, but beyond that I don't have any strong motivation. 
+SYG: I think the payoff is still just ergonomics is better [transcription error]. There's evidence that people are reaching for relative indexing via slice because slice lets you express your expressed relative indexing by a negative index and then they're reaching for that by doing the tasks like creating the intermediate array and then getting the first element out. It seems like there's some evidence that people want that. So let's support that directly instead of making them reach for slice, but beyond that I don't have any strong motivation. MM: Do you have a strong desire to see this happen still? -SYG: on a scale of zero to five maybe a solid 3. +SYG: on a scale of zero to five maybe a solid 3. RPR: Okay, so 60% motivation I think JHX has a similar point. @@ -97,9 +102,9 @@ SYG: Yeah, we could do that. I could open the floor to that as well. But yeah, l Kris: As they say I even like the color. I think that rhymes with charCodeAt which has precedent in the language and I think the relative indexing sufficiently motivates proceeding with it and also there is some possibility of still satisfying the goal of normalizing with DOM if they were to add it. -SYG: So yeah, let me give a little more little bit more color on what the current plan is for ObservableArray and the DOM stuff. So to recap the project was - so we have these existing DOM node collections that have `.item()`. We would like to upgrade as many of them as we can to observable arrays which are proxies around arrays which present as api compatible. Not just compatible, but when used they would show up as JS arrays—like `instanceof Array` and the %Array.prototype% methods. They're basically just arrays except with some proxy behavior under the hood that the spec authors can use because `.item()` cannot be added to regular JS arrays. The plan currently is to probably add a new IDL class like LegacyObservableArray that has - since these are these are proxies that Legacy observable array would magically materialize dot item somehow as like an own method or something and I think that's the current plan where the old stuff we want to upgrade will have to get this legacy observable array and the new stuff hopefully would just use observable array. So whatever we do here or whatever we do any time in the future to arrays. Hopefully that will carry forward to new DOM APIs. And I guess old DOM apis if the old could be because the Legacy observable arrays will presumably also have all the array stuff but with some magical additions to make compat work. So yeah, I think we are still going to try to normalize the DOM but you know in the best way we can get away with. +SYG: So yeah, let me give a little more little bit more color on what the current plan is for ObservableArray and the DOM stuff. So to recap the project was - so we have these existing DOM node collections that have `.item()`. We would like to upgrade as many of them as we can to observable arrays which are proxies around arrays which present as api compatible. Not just compatible, but when used they would show up as JS arrays—like `instanceof Array` and the %Array.prototype% methods. They're basically just arrays except with some proxy behavior under the hood that the spec authors can use because `.item()` cannot be added to regular JS arrays. 
The plan currently is to probably add a new IDL class like LegacyObservableArray that has - since these are these are proxies that Legacy observable array would magically materialize dot item somehow as like an own method or something and I think that's the current plan where the old stuff we want to upgrade will have to get this legacy observable array and the new stuff hopefully would just use observable array. So whatever we do here or whatever we do any time in the future to arrays. Hopefully that will carry forward to new DOM APIs. And I guess old DOM apis if the old could be because the Legacy observable arrays will presumably also have all the array stuff but with some magical additions to make compat work. So yeah, I think we are still going to try to normalize the DOM but you know in the best way we can get away with. -DRR: I think the only thing I have is just if you're not locked into the original semantics anymore. This is something to consider. I'm not necessarily saying it was the right thing but given that in some other languages it also does have throwing semantics on vectors or arrays. This could be an opportunity to give this method those semantics so if you're out of out of bounds you throw. On the other hand, that's not necessarily how things like Maps work on gets. I think the only thing there is typically you know with arrays I typically try to think in terms of staying within the bounds. You always know what they're not you have an element in some position, but I guess it's not always true with sparse arrays. +DRR: I think the only thing I have is just if you're not locked into the original semantics anymore. This is something to consider. I'm not necessarily saying it was the right thing but given that in some other languages it also does have throwing semantics on vectors or arrays. This could be an opportunity to give this method those semantics so if you're out of out of bounds you throw. On the other hand, that's not necessarily how things like Maps work on gets. I think the only thing there is typically you know with arrays I typically try to think in terms of staying within the bounds. You always know what they're not you have an element in some position, but I guess it's not always true with sparse arrays. SYG: First blush it just seems to me the throwing thing is not something any other collection in our small standard library really does. I would like to understand the justification and the motivation better than just maybe it'd be nice. Like it seems more surprising to learn that it would throw than not throw. @@ -111,9 +116,9 @@ SYG: Correct. There are two questions that are open right now. One is, what is t WH: We're taking a temperature check here, but I'm not sure what exactly we're taking the temperature check of. -SYG: This temperature check is: should we have relative indexing as a standalone feature via a prototype method on indexables at all. This was raised a few minutes ago by folks like JHX who seem to say that it no longer seems worth it without the DOM unification motivation to have just a relative indexing feature. +SYG: This temperature check is: should we have relative indexing as a standalone feature via a prototype method on indexables at all. This was raised a few minutes ago by folks like JHX who seem to say that it no longer seems worth it without the DOM unification motivation to have just a relative indexing feature. -WW: My position very much depends on whether strings are included or not. 
So if we're taking a temperature check we should do it on something specific. +WW: My position very much depends on whether strings are included or not. So if we're taking a temperature check we should do it on something specific. AKI: You know this isn't binding, right? @@ -123,25 +128,25 @@ SYG: Fine, so I think Jack has a different question than well. I'll try to wrap Jack: Should we add `at` to DOM collections? -SYG: No, we were never going to add anything to dumb collections. That is beyond our purview at TC39. The whole point of our adding .item() previously was that DOM wanted to stop using their own data structures. They wanted to use just arrays. So I'll try to wrap up here. This room temperature thing seems to be 9 for & 4 unconvinced so we are split in the middle. Maybe I guess we can just continue on to the string question and then I'll try to wrap it up at the end once WH points out we have all the details laid out. All right, so to MLS’s suggestion - if we are to continue with this, it'd be good to have a couple names to try when one of them doesn't work. I think the two obvious top ones are `.at()` and `.itemAt()`. I guess we can go with that. I don't think I have anything better, I don't want to add something like `.get()` to indexables. I think that is too confusing. So let's roll with those two for now and then we'll spend the rest of the time box talking about string inclusion. I'll take a room temperature on inclusion for Strings, or, well let's talk about it first and then we'll get to take a temperature check on that. +SYG: No, we were never going to add anything to dumb collections. That is beyond our purview at TC39. The whole point of our adding .item() previously was that DOM wanted to stop using their own data structures. They wanted to use just arrays. So I'll try to wrap up here. This room temperature thing seems to be 9 for & 4 unconvinced so we are split in the middle. Maybe I guess we can just continue on to the string question and then I'll try to wrap it up at the end once WH points out we have all the details laid out. All right, so to MLS’s suggestion - if we are to continue with this, it'd be good to have a couple names to try when one of them doesn't work. I think the two obvious top ones are `.at()` and `.itemAt()`. I guess we can go with that. I don't think I have anything better, I don't want to add something like `.get()` to indexables. I think that is too confusing. So let's roll with those two for now and then we'll spend the rest of the time box talking about string inclusion. I'll take a room temperature on inclusion for Strings, or, well let's talk about it first and then we'll get to take a temperature check on that. -SYG: So there is no new information presented here. This is exactly the same set of reasons to have it and to not have it as last time. The only reason I am bringing this up again is that people want more time? And that seems pretty legit to hash it out some more with more time. So yeah, the reasons we remain exactly the same… if you think UTF-16 code unit indexing is bad and you should not do that thing, then you would think that you don't want to have a String.prototype.at. 
If you think on the other hand that UTF-16 code unit indexing is already a thing that is possible via bracket and slice in such a way that it is pretty ingrained into the language and preventing the addition of `.at()` is not really moving the needle on recommending people to not do to the bad thing of code unit indexing, then you would want to have `.at()` for consistency with all the other indexables and consistency with just how brackets work. You kind of want `.at()` to be available everywhere. You can index things by practice. And my weak preference as champion remains that we should just have it because all the other indexables have it and I see no super compelling reason not to. I don't think this would actually discourage the bad use case that people are really worried about. +SYG: So there is no new information presented here. This is exactly the same set of reasons to have it and to not have it as last time. The only reason I am bringing this up again is that people want more time? And that seems pretty legit to hash it out some more with more time. So yeah, the reasons we remain exactly the same… if you think UTF-16 code unit indexing is bad and you should not do that thing, then you would think that you don't want to have a String.prototype.at. If you think on the other hand that UTF-16 code unit indexing is already a thing that is possible via bracket and slice in such a way that it is pretty ingrained into the language and preventing the addition of `.at()` is not really moving the needle on recommending people to not do to the bad thing of code unit indexing, then you would want to have `.at()` for consistency with all the other indexables and consistency with just how brackets work. You kind of want `.at()` to be available everywhere. You can index things by practice. And my weak preference as champion remains that we should just have it because all the other indexables have it and I see no super compelling reason not to. I don't think this would actually discourage the bad use case that people are really worried about. JHD: I don't actually agree with one of the things on your slide show. I don't think we have to worry too much about someone making a package that does the bad thing. I think they'll do that anyway if they want to; and they probably won't regardless, but that said, I think it looks like there's more risks than having it that aren't on this slide. As you mentioned, people will reach for `.slice(-1)[0]`, or people will have to manually compute the index that they want. I think that's a risk. That's gross code that's easy to screw up that people do all over the place. Gross is of course subjective, but I hope that that's not a contentious subjective opinion. So I think it's really important to have it. Code points aren't enough to solve what people actually want; which is some form of grapheme clusters. I think that there's a very large amount of strings that aren't human sentences where surrogate pairs matter. There's all sorts of things like code enums, and plenty of ASCII strings in which code unit indexing is exactly what's desired, which is why the “slice” and “length minus 1” patterns are common. So I hope that we can continue including this, just as I said during the original stage 3 promotion. -SYG: Okay. Thank you. +SYG: Okay. Thank you. WH: My position on this is that the ship has sailed — we index strings via code units. Code points really don't offer much of an advantage. 
If you think that the advantage of code points is that you don't get to break up surrogate-like characters into multiple pieces, then you're in for a surprise because lately Unicode has grown to include “characters” which actually take several code points to represent. A good example is flags. There is a Unicode flag for each country. The flag is encoded by two or more Unicode characters, but they really form the equivalent of a “surrogate pair” to define what is essentially one virtual character. Splitting them does really bad things. Not indexing into strings at all I don't see as an alternative either since there are plenty of situations where you need to work with locations within strings. -KG: Yeah, I guess just I'm still mildly against having this for strings. I entirely agree with WH that code points are not significantly better than code units here. It's just that the alternative is that you don't provide this new convenience method at all in the hopes that people who would reach for it will instead learn why it doesn't do the thing that they actually want and find some way of doing whatever it was that they actually wanted. But again, that's weak opposition. +KG: Yeah, I guess just I'm still mildly against having this for strings. I entirely agree with WH that code points are not significantly better than code units here. It's just that the alternative is that you don't provide this new convenience method at all in the hopes that people who would reach for it will instead learn why it doesn't do the thing that they actually want and find some way of doing whatever it was that they actually wanted. But again, that's weak opposition. -SYG: Okay, I see the queue is empty. I'm trying to think of how to craft a question for the room because now there are kind of three questions... one is assuming that we still want to have a prototype method for doing relative indexing at all. We have a question also on String.prototype... should we add a method there? Should we do this proposal at all? I guess. First I would like to open the temperature check regarding inclusion on string and then I'll take the conclusion of that and ask the “should we do this at all?” question. So a different temperature check for strings. Just to clarify: should we include it on string given that this proposal happens. [pause] Okay. We don't have a formal definition of quorum. Well, this is okay, so we have gone over to the other side now with nine total. I see five that are indifferent. +SYG: Okay, I see the queue is empty. I'm trying to think of how to craft a question for the room because now there are kind of three questions... one is assuming that we still want to have a prototype method for doing relative indexing at all. We have a question also on String.prototype... should we add a method there? Should we do this proposal at all? I guess. First I would like to open the temperature check regarding inclusion on string and then I'll take the conclusion of that and ask the “should we do this at all?” question. So a different temperature check for strings. Just to clarify: should we include it on string given that this proposal happens. [pause] Okay. We don't have a formal definition of quorum. Well, this is okay, so we have gone over to the other side now with nine total. I see five that are indifferent. -AKI: It's the people who do care about it broadly speaking don't care about this detail. I think that's probably a good read on it. 
+AKI: It's the people who do care about it broadly speaking don't care about this detail. I think that's probably a good read on it. SYG: So I think the temperature is - please correct me if I'm misinterpreting this - that there is majority support for this feature though. -MM: The feature being conditional on it being included at all should also be included on strings, but I'm still against the feature being included. +MM: The feature being conditional on it being included at all should also be included on strings, but I'm still against the feature being included. MLS: I voted the same way as Mark did it except for I'm indifferent for the whole proposal. @@ -171,7 +176,7 @@ SYG: it is decided to be code unit indexing just like brackets currently works o MLS: So let me weigh in. I raised the web compat issue. I don't think you need to go back to stage two to find something that works. I just hope we're not going to chase our tail and get smooshed. -SYG: Yeah fair enough.so by the silence, I am assuming then I'll wait a little bit more - +SYG: Yeah fair enough.so by the silence, I am assuming then I'll wait a little bit more - JHX I still think maybe we can introduce syntax. I'd like to make a syntax proposal in the next meeting. If not, if so, maybe we can postpone the decision to the next meeting. @@ -202,14 +207,17 @@ SYG: Thank you for the clarification. YSV: I want to clarify that I believe in supporting the stage. Well, I support stage 3 our discussion is around the web compatibility naming of this feature and if we pulled it back into stage two, we would effectively just be spending extra time for no potential benefit because we need to test this to see if it's web compatible. We're going to run into that wall regardless of how much workshopping we do on the name and I think to be realistic about that. SYG: completely agree. Thank you. And I really want to thank Mozilla here; their shipping policy is what's really helping these surfacing these early name conflicts, and I'm trying to see if Chrome can help out better here. + ### Conclusion/Resolution -* Consensus on Stage 3 for arrays and typed arrays and strings, pending a rename + +- Consensus on Stage 3 for arrays and typed arrays and strings, pending a rename ## Standardized Debug for Stage 2 + Presenter: Gus Caplan (GCL) - [proposal](https://github.com/tc39/proposal-standardized-debug/) -- [slides]() [TODO: GCL] +- slides [TODO: GCL] GCL: This is standardized debug for stage 2 and just to recap what we discussed at the last meeting around. This is just it's really simple the motivations here are to have one standard debugging facility everywhere. That's not like, you know print versus console DOT log versus there was another one that I can't remember off the top of my head right now, but basically it's going to be just one thing available everywhere and it fulfills a specific API constraint, which is that the values passed into these this API are then returned out of it so it can be composed into existing code. And so the solution I am bringing here to request for stage two is debugger meta properties, which are syntax. 
And the reason that I have brought syntax here, although it was voiced in the last meeting that syntax was not preferable, was because I wanted to fulfill both the logging and breaking use cases, which you mentioned that there it makes sense to differentiate between those and if these are functions that they can be passed to languages that are not JavaScript, for example, C++ or webassembly whatever is you know, being used in the environment and that is just specifically for the break. That's just not something that I want to deal with and when these are syntax that ties them to a specific source location. It kind of makes it clear that you know, this is this is tied to the like the the expression you're passing into it is it's not just a value but as a piece of code that you're interacting with, so that's basically the approach that I've taken here and if that's totally unpalatable to people I'm happy to come back next time with something that does not introduce new syntax, but I would really like to try for this pathway. @@ -219,9 +227,9 @@ GCL: Yes, so this can obviously be changed in the future. But the reason that I JWK: about that I have an issue in the repo. -DRO: yeah, do you mind showing the slides again with the syntax? it might be easier for me. So based off of this that sort of means that `debugger.log` is like a regular function, which means that `debugger` now has to be an object. So how does this work with the existing behavior where `debugger` as a statement pauses a program? Like, how do you still preserve that if you have this fact that `debugger` now becomes an object instead of being statement. +DRO: yeah, do you mind showing the slides again with the syntax? it might be easier for me. So based off of this that sort of means that `debugger.log` is like a regular function, which means that `debugger` now has to be an object. So how does this work with the existing behavior where `debugger` as a statement pauses a program? Like, how do you still preserve that if you have this fact that `debugger` now becomes an object instead of being statement. -GCL: This is not an object. It's a it's like import meta or new DOT Target. +GCL: This is not an object. It's a it's like import meta or new DOT Target. DRO: So then a developer would not be able to say `let foo = debugger.log` and be able to pass that around? @@ -233,7 +241,7 @@ GCL: Yeah, and I think the expectation there would also be that the value you pa DRO: sure. Yeah in that case then I would probably agree with what someone mentioned earlier that you might want to accept either an object or a list of items so that you could show multiple things. But yeah it sounds fine. It's a little odd that we're kind of repeating the same existing behavior from `console` and `debugger`, but I can understand the desire for having something that returns and having sort of one unified thing. To bike shed a little bit, I think `pause` would be better than `break`. But you know, I could be overruled on that. -JWK: I have a question on this slide. What do you mean by “can only be invoked from JS”? Are there any other languages? +JWK: I have a question on this slide. What do you mean by “can only be invoked from JS”? Are there any other languages? GCL: so as a simple example here if I did promise dot resolve .then debugger dot break. Right assuming that was valid that it is. It's unclear what that should do. You should instead have to write this `(x => debugger.break(x))` to be clear that you want to break here. Does that help clarify? 
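A minimal sketch of the "value in, value out" shape GCL describes, assuming a plain helper function as a stand-in: the proposed `debugger.log` / `debugger.break` forms are meta-properties (syntax), not functions, so the hypothetical `tapLog` helper below only illustrates how the wrapped expression's value would pass through unchanged.

```js
// Illustration only: `tapLog` is a hypothetical userland stand-in for the
// proposed `debugger.log(expr)` form, which is syntax tied to a source
// location and cannot be passed around as a first-class function.
function tapLog(value) {
  console.log(value); // a host or devtools hook would observe the value here
  return value;       // the expression's value flows through unchanged
}

// Because the proposed forms are syntax, they would be wrapped in an arrow
// rather than passed directly, as GCL notes above:
Promise.resolve(42).then(x => tapLog(x * 2)); // logs 84, resolves to 84
```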
@@ -259,19 +267,19 @@ WH: Yeah, some prose which specifies the intent of those things because all we h SYG: Hi, so a couple questions so one. What is the difference between an unadorned debugger statement and debugger.break without any arguments? -GCL: if that's too overlapping I can remove the optionalness. +GCL: if that's too overlapping I can remove the optionalness. SYG: a bunch of stuff seems overlapping and I want to - I'm not necessarily against it, but I want to make sure I understand the motivation because I think the design has significantly changed since you presented it last time. Let me make sure I understand. So for both debugger.log and debugger.break, the evaluation of its argument expression is now unconditional, right? Like that always happens regardless of whatever implementation to debugging action is taken? GCL: Yes, so no matter what these should take a value and return that value and between those two things happening something else may happen. -SYG: okay, so at least it's up to the implementation to - like right now the way I would see this to be mostly due to implemented with not have anything that's like conditional on devtools being open. One of the concerns for the previous iteration was that, I think the general feeling from the from the Chrome devtools team was it would bad to have code that conditionally runs depending on it whether the tab was opened, and this is no longer the case for this current iteration because it says you always have to evaluate your assignment Expressions. +SYG: okay, so at least it's up to the implementation to - like right now the way I would see this to be mostly due to implemented with not have anything that's like conditional on devtools being open. One of the concerns for the previous iteration was that, I think the general feeling from the from the Chrome devtools team was it would bad to have code that conditionally runs depending on it whether the tab was opened, and this is no longer the case for this current iteration because it says you always have to evaluate your assignment Expressions. -GCL: My big motivation is it taking the expression evaluating it and returning it. So I would if this did something else I would not be motivated to continue championing it. +GCL: My big motivation is it taking the expression evaluating it and returning it. So I would if this did something else I would not be motivated to continue championing it. SYG: Given that, when would I use debugger.log versus console.log? -GCL: I mean wherever console.log isn't available or even where it is available if you just feel like being platform independent. +GCL: I mean wherever console.log isn't available or even where it is available if you just feel like being platform independent. SYG: It feels a little bit weird to me to have to achieve platform independence by a debugger meta properties in a way that explicitly do not compose. Yeah, I see that long. @@ -291,7 +299,7 @@ GCL: You mean there's information lacking in the repository. YSV: Yes, okay. Definitely. -GCL: That's fair. +GCL: That's fair. MLS: There is a lack of clarity of whether these are statements or are they expressions and we're talking about them as functions and if the committee's having difficulty making the distinction. I think the developers will also have difficulty making the distinction as to where these can be placed. 
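To make the statement-versus-expression distinction MLS raises concrete, a short sketch follows. Only the existing `debugger` statement is valid syntax today; the proposed expression form appears in a comment because it does not parse in current JavaScript.

```js
// Today, `debugger` is a statement: it can only appear in statement position.
function double(x) {
  debugger;     // valid today: statement position, produces no value
  return x * 2;
}

// The proposal as presented would add expression forms, e.g.
//   return debugger.break(x * 2);
// which is not valid syntax in current JavaScript and is shown only to
// illustrate the placement question being discussed.
```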
@@ -307,7 +315,7 @@ MLS: statements debugger doesn't the debugger dot .whatever is is not a function

KG: Yeah, someone mentioned something similar to this, but - I see motivation for debugger.break. That's genuinely a new capability. I don't see a lot of motivation for debugger.log. I appreciate the desire to have something cross-platform here, but platforms that provide some sort of logging thing generally just call it console.log, and I don't think adding a different way of doing console.log would serve anyone. I would prefer that people who feel the need to provide that kind of I/O just also call it console.log. For debugger.break I see a much stronger motivation.

-GCL: That's fair. I might see motivation for debugger.log is specifically the value in value out like capability that it has which console.log doesn't. 
+GCL: That's fair. The motivation I see for debugger.log is specifically the value-in, value-out capability that it has, which console.log doesn't.

KG: Just use console.tap.

@@ -321,21 +329,21 @@ KG: Right, I guess my point is I don't think it needs a standardization effort. 

MM: Yeah, so the comment was made about the import expression not causing significant confusion. It does, and I think that bears on how we evaluate the scoping of this. import is per module, and the need it creates for a new host hook, in a system where you're trying to enable JavaScript code to act as host to other JavaScript code, means that you have to reify the new host hook for the debugger. There's already a need for a host hook for the debugger as a breakpoint, so extending that host hook to deal with debugger.break would just be elaborating a host hook that we need anyway. But console.log is already lexically bound, which is a better way to parameterize evaluation than a host hook, and debugger.log would require a new host hook for something that could be parameterized.

-GCL: I would just say I want to keep whatever apis were introducing here consistent with each other. So if there are constraints on the, you know, the break API, I'd want to match the log API even if it doesn't need to strictly fulfill those constraints. But yeah, I I see're coming from.Okay, and with that 
+GCL: I would just say I want to keep whatever APIs we're introducing here consistent with each other. So if there are constraints on the break API, I'd want to match the log API to them even if it doesn't strictly need to fulfill those constraints. But yeah, I see where you're coming from. Okay, and with that

YSV: We are at the end of the queue, Gus. Do you want to ask for anything from the committee?

GCL: were you objecting to stage two earlier because of lack of documents?

-YSV: Yes. I am objecting to Stage 2 due to a lack of motivation in the explainer. 
+YSV: Yes. I am objecting to Stage 2 due to a lack of motivation in the explainer.

GCL: Okay, then. Yeah, I don't have anything to ask. Did anyone else have constraints on this for stage 2?

SYG: For the constraints, I would like to agree with what folks like Kevin and Mark Miller have said. Their arguments I think reinforce a hunch I had, which is that I am now unconvinced of the logging use case. It reminds me of, I think, the first time this came up, in a quick discussion with the Chrome devtools team, who are also kind of unconvinced of the logging use case. Log points are a thing in Chrome devtools at least. I don't know about other browser devtools. Perhaps they're also a thing already. 
Doing this in the language does like an odd layering Choice. I'm not sure if that's really well motivated for logging. -GCL: Okay. Well one thing I could ask are there people who would aside from Mozilla objections are there people who would object to the logging API because if that's objectionable I would want to Not continue with this proposal. +GCL: Okay. Well one thing I could ask are there people who would aside from Mozilla objections are there people who would object to the logging API because if that's objectionable I would want to Not continue with this proposal. -MM:I might object to it. I haven't decided yet whether I would have jumped to it, but I'm certainly not willing to not object to it today. +MM:I might object to it. I haven't decided yet whether I would have jumped to it, but I'm certainly not willing to not object to it today. WH: I would like to understand why we can't do these with just library functions. @@ -343,19 +351,22 @@ JWK: because Library functions cannot break the debugger. WH: Okay, let me rephrase that. I'd like to understand why we need new syntax for this. -GCL: The break one has to be new syntax because you the only break in the language right now is a statement and this would be an expression position. The log one does not need new syntax, but I would want it to be the same design as whatever the break one is. +GCL: The break one has to be new syntax because you the only break in the language right now is a statement and this would be an expression position. The log one does not need new syntax, but I would want it to be the same design as whatever the break one is. WH: I don't see why `break` needs to be new syntax. If you were to define it as a function which has implementation defined behavior, the implementation defined behavior could be that it breaks in the caller. GCL: Okay. I am not. I was I mentioned this earlier but it becomes like unclear what that's supposed to do when you you know, pass it directly to a prototype dot then thing, like when when you are involving things that are not JavaScript Source text. That gets weird. WH: Since you’re leaving it as implementation-defined behavior, I don’t see what gets weird here. - + YSV: Please take your concerns to the repository for the proposal and we're going to move on with the next agenda item and Gus if you have further questions, we can revive this if you want to have some more time of the committee. + ### Conclusion/Resolution -* No stage advancement + +- No stage advancement ## Import assertions status update + Presenter: Dan Clark (DDC), Daniel Ehrenberg (DE) - [proposal](https://github.com/tc39/proposal-import-assertions) @@ -367,25 +378,25 @@ DDC: I think I think the update here was just that if I recall correctly that Ba DE: No, that sounds right. I mean there's one webpack plug-in that did a previous version is semantics and it's being updated to the current version. -DDC: Okay. So one issue discussed today previously we had discussed when or two meetings ago this invariant about whether hosts are allowed to use import assertions as part of the cache key. I'm in ier version of this proposal.He had this restriction where hosts must not use import assertions as part of the cache key specifically like for any given specifier requesting module pair we said that must always result in the same module with or without the presence of assertions like changing which module return for that pair. 
HTML had requested that they actually want and it to use their module map mechanism keyed with the import assertions just in terms of how they wanted to integrate this integrated since this in the HTML spec to that end. We got consensus one or two meetings ago to loosen this restriction to where import assertions would be allowed to use them as part of the cache key. However, it turns out that after some back and forth with HTML editors they're going to use this module map mechanism. They actually would still need the earlier stronger restriction. Like the details briefly here would be that like they are ignoring unknown assertion and unknown types would unknown type assertions would always be rejected and like module types of mutually exclusive. So you'll never for a given specify or will only return you'll only return module for at least one type. Value so like they actually do meet this earlier stronger research certain. And so I think we're interested in actually just restoring the original the original restriction where hosts are not allowed to use not allowed Imports or as imported solution thanThe cache key and this would basically just be a straight revert of my earlier change to to loosen the Restriction. I think maybe I do we want to go to the queue now and just discuss that like +DDC: Okay. So one issue discussed today previously we had discussed when or two meetings ago this invariant about whether hosts are allowed to use import assertions as part of the cache key. I'm in ier version of this proposal.He had this restriction where hosts must not use import assertions as part of the cache key specifically like for any given specifier requesting module pair we said that must always result in the same module with or without the presence of assertions like changing which module return for that pair. HTML had requested that they actually want and it to use their module map mechanism keyed with the import assertions just in terms of how they wanted to integrate this integrated since this in the HTML spec to that end. We got consensus one or two meetings ago to loosen this restriction to where import assertions would be allowed to use them as part of the cache key. However, it turns out that after some back and forth with HTML editors they're going to use this module map mechanism. They actually would still need the earlier stronger restriction. Like the details briefly here would be that like they are ignoring unknown assertion and unknown types would unknown type assertions would always be rejected and like module types of mutually exclusive. So you'll never for a given specify or will only return you'll only return module for at least one type. Value so like they actually do meet this earlier stronger research certain. And so I think we're interested in actually just restoring the original the original restriction where hosts are not allowed to use not allowed Imports or as imported solution thanThe cache key and this would basically just be a straight revert of my earlier change to to loosen the Restriction. I think maybe I do we want to go to the queue now and just discuss that like DE: I wanted to make a quick comment about this before I want to say when we talk about cache key. 
We don't mean whether HTML can have a bath which is keyed by the imported search that that's kind of like an implementation detail of HTML and the current HTML integration spec the are does that but what we mean specifically is whether there can be multiple successful module loads with different import assertions and what we found is that because of how other layers of of caching work, you know fetch fetch caching the there we just not be this this kind of disagreement is multiple modules in practice. So we're still waiting to hear back from HTML editors about whether this is acceptable, but this is our current best best understanding based on everything. SYG: Sounds great if it works out, so it's let me see if I understand correctly. The thing that we'll be reverting to will be the prose invariant that still avoids talking about the cache key.but the invariant of this one to one that's of something. -DDC: Yes, it would be. Yeah, so this bullet point I've got highlighted on screen right now would be we would remove this module request dot assertions piece from this restriction. So it would go back to being each time. This is under a host result imported module we go back to beinor each reference descriptor. Jewel module or specifier pair that must always be if it succeeds. It must always resolved to the same module instance without regard to assertion. So like the the assertions are different like it doesn't matter. It's still if it still succeeds, you must get the same thing right? So there cannot be multiple different module records for ddifferent assertions because for HTML the assertions aree mutually exclusive. +DDC: Yes, it would be. Yeah, so this bullet point I've got highlighted on screen right now would be we would remove this module request dot assertions piece from this restriction. So it would go back to being each time. This is under a host result imported module we go back to beinor each reference descriptor. Jewel module or specifier pair that must always be if it succeeds. It must always resolved to the same module instance without regard to assertion. So like the the assertions are different like it doesn't matter. It's still if it still succeeds, you must get the same thing right? So there cannot be multiple different module records for ddifferent assertions because for HTML the assertions aree mutually exclusive. -SYG So, okay, so it seems like a win-win to me.I would love to see that as part of the HTML spec and not as part of this proposal. But I guess in personal communication or something. I would love to see the reasoning that brings it all together with the cash interactions or whatever that ensures that invariant is met on the HTML side. +SYG So, okay, so it seems like a win-win to me.I would love to see that as part of the HTML spec and not as part of this proposal. But I guess in personal communication or something. I would love to see the reasoning that brings it all together with the cash interactions or whatever that ensures that invariant is met on the HTML side. DDC: Yeah, yeah, that sounds good. Yeah,, so I'm not hearing objection -MM: I just you just want to really just for a clarification of the history here in thinking about the semantics of the feature. It seems clear to me that I never understood why HTML fought it needed it as the cache key. So could you clarify why?We thought that HTML needed it in the cache key. +MM: I just you just want to really just for a clarification of the history here in thinking about the semantics of the feature. 
It seems clear to me that I never understood why HTML fought it needed it as the cache key. So could you clarify why?We thought that HTML needed it in the cache key. DDC: This was like they have this mechanism, this module mechanism and it solves some requirements about the integration like their One requirement. They had was the like if you import say a specifier with the given type and that fails it shouldn't like Poison the Well bore future ports of that same specifier. with other types and like the module map that they had is just kind of like natural way to achieve this and that you could write it another way and I had a version of the integration written another way, but just in terms of how they want it to how they want it to look this just seemed like it solves some of the - it's like this mechanism that they have for solving some problems about problems. Well caching about modules and later. We learned that like they're willing to Like something changed with that they're willing to ignore unknown assertions and that's kind of open to open the door for us to retighten this restriction. They are still using this module met mechanism, but just in practice it turned out that some of these propertiesWith the integration is going to look like means that like in practice we can we can restore the Restriction. MM: So the poisoning issue if it's real sounds quite serious. Can you verify that that in fact without including it in the cache key that there is no poisoning issue - -DDC: yes. +DDC: yes. MM: Okay good. That's it. @@ -393,19 +404,19 @@ GCL: If I'm understanding this correctly, it would allow multiple import declara DDC: Yep, I think the key property in HTML is that no type values will ever succeed for the given specifier at most one set of import assertions will succeed in actually giving you a loaded module right? -GCL: I mean not not just necessarily the web like any any random implementation.for whatever myriad of assertions they have young if we saw +GCL: I mean not not just necessarily the web like any any random implementation.for whatever myriad of assertions they have young if we saw -DDC: I suppose if we restore this restriction, then it would like have that like it would have that property where you can if you have disparate sets of assertions, like you'll get it most one for each specify your still get at most one module back maybe more than one of those like sets of assertions will like succeed but like you're going to get the same you're going to get the the most one module from all of those. +DDC: I suppose if we restore this restriction, then it would like have that like it would have that property where you can if you have disparate sets of assertions, like you'll get it most one for each specify your still get at most one module back maybe more than one of those like sets of assertions will like succeed but like you're going to get the same you're going to get the the most one module from all of those. -GCL: Right and as my understanding was that because that's scoped on a per module basis. Not many module that that would be acceptable, but that might be my misunderstanding. +GCL: Right and as my understanding was that because that's scoped on a per module basis. Not many module that that would be acceptable, but that might be my misunderstanding. -DDC: I'm not sure. Yeah, I guess I'm not following the - +DDC: I'm not sure. Yeah, I guess I'm not following the - DE: I think we specifically do allow. 
I mean yes modules in general allow you to import the same as the same about full-time and when you have import assertions, you could apply the assertion to one and not the other the whole module will not load the old module graph won't load if one of the important sections fails. So I don't understand that issue. I think we should go on to the rest of the presentation and come back to more comments on this once it's done. DDC: That's fine. Okay, so I will move on here. The next thing is. so this was an idea raised by one of the HTML spec editor is annevk and the so HTML is as I mentioned earlier for assertion was that they don't care about which currently is everything but type. They're just going to ignore it. They're just going to know them entirely and we expect that like other implementations are likely going to follow in this behavior andThe suggestion is should we kind of build this into this back to ensure that unknown assertions are handled in the same way. Universally the proposed way to do. This would be the hosts actually States at the a state can statically which assertions they care about which assertions they want to get and only those specifying the keys of which assertions that they're interested in and those are the only assertions that they'll actually guess when they that will actually hand back to the host. So kind of enforcing that way that they must ignore unknown assertions, but this is the only one still even see. I have a PR for those that like achieves this one way, or maybe perhaps it could be achieved other ways, but I think we just like to discuss whether that seems like a good idea or not. -DDC: Just on a quick update, at TPAC we discussed CSS modules web components working group. This was one of those proposals that is like the motive one of the motivations for import assertions in the first place. There's that mime type security issue where I'm like loading a CSS module and I and I get a JavaScript module back from a server by surprise because they sent an unexpected mind type. This import assertion is built into CSS modules now to like resolve that problem and there's just general moments from that audience looking for networking group that like this does kind of solve the problem that it was set up to solve nothing more to say on that. Just give a nice validation that we're actually meeting the original goal of the proposal. There's still a couple remaining things open before CSS module would be landed in HTML, but they're related to specific HTML CSS stuff like about at Imports what exactly to do with those and this some questions around the document that aocument that adopted ted style sheets API, but yeah, it's going to evaluate. +DDC: Just on a quick update, at TPAC we discussed CSS modules web components working group. This was one of those proposals that is like the motive one of the motivations for import assertions in the first place. There's that mime type security issue where I'm like loading a CSS module and I and I get a JavaScript module back from a server by surprise because they sent an unexpected mind type. This import assertion is built into CSS modules now to like resolve that problem and there's just general moments from that audience looking for networking group that like this does kind of solve the problem that it was set up to solve nothing more to say on that. Just give a nice validation that we're actually meeting the original goal of the proposal. 
There's still a couple remaining things open before CSS module would be landed in HTML, but they're related to specific HTML CSS stuff like about at Imports what exactly to do with those and this some questions around the document that aocument that adopted ted style sheets API, but yeah, it's going to evaluate. DDC: And then just last thing to mention is evaluator attributes. As I mentioned at the beginning import assertions hosts are not allowed to change how modules are interpreted based on the assertions. It's just like a yes or no decision on whether to load the module we've kind of talked on and off about this idea of an evaluator attribute that actually would be allowed to do transformations on the module has kind of been some ideas kicked around about this, but there's not any super solid use case that I think we have at this point. So like we're not today bringing forward any proposal here, but the point of raising this today I think it's just to say that like if folks have use cases that they want to advocate for this, get in touch with us. We'd be interested in discussing this further, but we're not bringing anything forward today about that. Those are the end of the slides perhaps we could go back to this idea of providing host only with the assertions that they're interested in. @@ -417,11 +428,11 @@ MM: The scenario I have in mind is we as a committee have some future version st DE: So,I think it's always possible that that could happen, like one reason, then this is not very useful to do this compared to other things that we considered. We were previously considering exposing the assertions to the module that's being important, but we decided against that and so this makes it you know, you can import your you can't Implement your own assertions. I think it's always possible that we could retain conflicts and this occurs and in that case, we would choose a dient name for the assertion. So in this case, it's very similar. I don't think feature testing would be a barrier against this because name clashes still occur in the presence of feature testing. And I do expect that. Some sorry, -MM: the issue I was raising wasn't a name clash. But I think I think at this point we can take this offline. +MM: the issue I was raising wasn't a name clash. But I think I think at this point we can take this offline. DE: it doesn't mean that in that case, if that's discovered during the standards development process. We can choose a new name, I don't think it's a big barrier. It's not that the whole space of names is filled up. -MM: Since I don't have an alternative to offer, I think that's an adequate answer. +MM: Since I don't have an alternative to offer, I think that's an adequate answer. GCL: Yeah, just to add on to the like unknown attributes point. It seems that there may be assertions which are unsafe to ignore. For example, like a script which in its security constraints must assert that the the loaded module matches some like, you know, hash or something and they would it would be preferable to you -- [disconnected] @@ -439,9 +450,9 @@ DE:I want to note that this change does give us more guarantees across environme SYG: To clarify my understanding that the change you're asking for consensus for is one a new host hook for the host to fill in to give you the set of assertions that is supported and also a bit of spec text that filters the parsed assertions to for the intersection and to ignore the rest. Is that what you're asking for? -DDC: Yes. Yeah. 
I've got it on this.Here are links in the spec. I've got this written out with like textbook. Sounds yeah +DDC: Yes. Yeah. I've got it on this.Here are links in the spec. I've got this written out with like textbook. Sounds yeah -DE: I was a little surprised that it came out so procedurally, but when I tried to think about how to write this out declaratively I could go I couldn't think of how to do it in a more clear way. We have to flee open to suggestions because he had is a little there's a little weird to see a procedurally but yeah, same same boat for me. +DE: I was a little surprised that it came out so procedurally, but when I tried to think about how to write this out declaratively I could go I couldn't think of how to do it in a more clear way. We have to flee open to suggestions because he had is a little there's a little weird to see a procedurally but yeah, same same boat for me. JHD: More guarantees sounds great to me. I'm on board. @@ -451,13 +462,13 @@ RPR: Okay, you have consensus. Okay, thank you. Okay. Thank you everyone. DE: Can I confirm for the other point that was raised about reverting that patch about the cache key? It sounds also like we have sort of conditional consensus on confirmation from HTML about this being acceptable. Is that accurate? Should we be doing a formal call for consensus? -MM: I would state my opinion stronger that I would I agree to consensus on not having a key part of the cache key at this point if they come back and say no it needs to be part of the cache key after all I would need to be re convinced of that because it just doesn't look +MM: I would state my opinion stronger that I would I agree to consensus on not having a key part of the cache key at this point if they come back and say no it needs to be part of the cache key after all I would need to be re convinced of that because it just doesn't look DE: we've discussed this extensively a different TC39 meetings, you know, she worked hard on this compromise that we felt we needed about them being part of the cache key. So I'd like to maintain that existing consensus resolution and just maintain the option of moving this if it's determined that it's technically feasible. MM: I'm uncomfortable with maintaining my consensus on having it be in the cache key without coming back to an understanding as to why that made any sense. Or why it makes any sense. -DE: Yeah, okay. I think you can have that noted. I also don't think it should be part be part of the cache key. So I don't think we disagree on substance. It's justI agree. I think that this will, in all likelihood, just go forward without it being in the cache key and then there's no conflict. I think we I think the we do we already did establish consensus on this on this compromise, so we didn't we didn't come to committee to ask for that consensus to o be reaffirmed. if it turns out that HTML does have reasons why this change were proposing is not acceptable. Then we will come back and explain them to the committee, butBut my understanding is that we have this established and we've been moving forward on the assumption that that basis is true. +DE: Yeah, okay. I think you can have that noted. I also don't think it should be part be part of the cache key. So I don't think we disagree on substance. It's justI agree. I think that this will, in all likelihood, just go forward without it being in the cache key and then there's no conflict. 
I think we I think the we do we already did establish consensus on this on this compromise, so we didn't we didn't come to committee to ask for that consensus to o be reaffirmed. if it turns out that HTML does have reasons why this change were proposing is not acceptable. Then we will come back and explain them to the committee, butBut my understanding is that we have this established and we've been moving forward on the assumption that that basis is true. SYG: To clarify with Mark. I just want to make sure on the same page the current compromise is not that we have spec text that requires it to be part of the cache key, but we have some relaxed invariant that lets HTML have the right to put it in a cache key. Is that all your understanding? @@ -465,29 +476,31 @@ MM: I would not want to see that go forward without my understanding what the ra DE: Okay, personally. I don't understand the rationale but if we figure out that it is needed then we will come back to the committee and describe the rationale -MM: good. +MM: good. ### Conclusion/Resolution -* Consensus for new host hook -* Conditional consensus on the “cache key” issue -## Grouped Accessors and Auto-Accessors +- Consensus for new host hook +- Conditional consensus on the “cache key” issue + +## Grouped Accessors and Auto-Accessors + Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-grouped-and-auto-accessors) - [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkfZuc_nmAt65-JnCyA?e=1zqtoR) -RBN: a proposal for a change or not change, but some syntactic addition to the JavaScript or ecmascript language. to add grouped and auto accessor properties. This is a proposal that is intended to address a couple issues I've seen. with both accessors and Fields in in general and a number of other concerns that I've raised around the current decorators proposal and some ways that we can address some of those issues So I'm going to jump kind of right into what that syntax looks like just to give everybody an idea of what I mean by grouped accessors a grouped accessors essentially equivalent to a pair of individual get and set accessories except for the fact that they're grouped within a single declaration. There's a couple reasons that I'm looking to introduce this syntax. one is currently Getters and Setters can be spread across a class. It's generally bad practice, but it is perfectly feasible to have a getter at the top of the class etcetera at the bottom and thousands of lines of code in between. Most people. Don't do that. And that's a good thing. but there are some other issues with Getters Getters and Setters in that way one is that if you use Getters and Setters that use computed property names for example symbols that could theoretically require multiple evaluation steps because you have to evaluate the name of the property multiple times. There's also the possibility of side effects that can happen during computer property name access another reason and motivation for the proposal is currently the stage one decorators proposal works using descriptors, and we for stage 1 consensus describe worked on. how we do evaluation for accessors to get both together. the center to provide to The Decorator the updated proposal that was discussed in the last meeting and is still currently being. worked on by the Champions group. uses decorators on Getters and Setters instead of working with the entire descriptor instead now works on a function basis. So a decorator applied to a getter would only be decorating the get function. 
This can be problematic because there are some scenarios where a decorator would want to be able to get access to both the getter and the setter. And the current proposals don't provide a very easy way to actually correlate those two things. on a class without a lot of additional. access code. So this does provide a convenient place to attach decorators that would decorate both the getter and the setter at once. this Syntax for Getters and Setters. There's multiple languages that use it the example that I've used as prior art. Here comes from the C sharp language. Some of the semantics that this would have is that it would be a syntax error if grouped accessors share the same name of another member with the same placement on the class. So it's already an error if you have a get X and another get X this would also be an error if you had a get X and a grouped accessor that contains just a Setter. We're trying to avoid confusion and complexity when it comes to how these decorations are how decorators could be applied and the evaluation works. A grouped accessor can specify either a get or set or both in any order. A grouped accessor cannot specify more than one get or set, and otherwise they are defined in the class in the same way that the individual declarations would have would have been now now just on its own this might might not seem. is like provide it adds a lot to provide capability for lot to the language. being a better Target for decorators that need to work with both the getter and the setter, but it actually provides a stepping stone to the next feature that I'm looking to propose as part of this. There was a concern that grouped accessors would conflict with the static block proposal that I also have and the case that I wanted to describe is that we would probably want to emulate how Constructor is defined. so in JavaScript if you say a direct Function or use the string Constructor. Those are both the Constructor of the class. If you want to add a property named Constructor, you could use a computed property name. So this wouldn't prevent that. but again, so one of the value adds that is built on top of grouped accessors is the other feature that I'm intending to propose, which is these concept called auto accessors. So you can see a brief example of the syntactic space that I'm investigating for auto accessors. The basic idea. Is that an auto accessor is a getter and Setter pair over a private field that is unnamed. essentially you don't have access to the name of that field. It doesn't really matter what the name is. its implementation is stored on the instance just as with any other private field. It could be initialized and that works with same field initialization semantics we have for private Fields, but allows you to condense the ceremony around Getters and setters to allow you to have the same capabilities of accessors such that they can be subclassed properly without having to have the excess code of writing the private slot for the private field and the return in the the getter and the setter. So the idea is that these evolved naturally from the idea of group accessors. It provides a syntactic opt-in for converting Fields into accessors. One of the changes in the updated decorators proposal which is an improvement for performance in most VMs is that a decorated field would just become an accessor. This has some problems though. 
The issues that come out of that are subclassing concerns if I were to have a field defined on a superclass and then the same field on the subclass that would allow me to overwrite whatever these super class would have set for that. But if I then add a decorator to that field it turns it into an accessor pair that the superclass overwrites. So there is a subclassing hazard that can happen when a superclass fields shadows a subclass and as soon as we start saying that a decorator turns a field into an accessor pair then we're possibly running into that hazard. There also a class of decorators that only Witness a property or a field existing. these might be used for cases like dependency injection systems. or a decorator for metadata. but not actually need to observe observe the gutter where Setter of a property or observe the ability to mutate how those actions are performed. So if a decorator always always turns a field into an accessor, then you're adding the overhead of this accessor definition in some cases where decorators don't actually need it. So the idea here, is that auto accessors in addition to being able to simplify very common simple accessor patterns allow you to have a syntactic opt-in for the transformation of a field to a accessor pair rather than a implicit change which allows you to be very explicit about how this works. There is prior art for this, this C# again this again has this auto accessor properties capability. some of the semantics that I wanted to look into investigate as proposed for this feature is that it would again create an unnamed. private field for the accessor. Initializers evaluate during construction at the same time as other private Fields, so they don't pass through the set method so they're not - a subclass would not be able to observe the set of a superclass during construction. and you would have the ability to use a hash prefix on the getter or setter to create a private named getter or setter. with the same spelling as the property so you could have a public getter with a private setteror so that you could actually set values in the constructor or set values later on but still only expose a public getter. So there's a couple things that I see on the queue and I wanted to be able to go to that before I move to the last slide, which is just the summary. +RBN: a proposal for a change or not change, but some syntactic addition to the JavaScript or ecmascript language. to add grouped and auto accessor properties. This is a proposal that is intended to address a couple issues I've seen. with both accessors and Fields in in general and a number of other concerns that I've raised around the current decorators proposal and some ways that we can address some of those issues So I'm going to jump kind of right into what that syntax looks like just to give everybody an idea of what I mean by grouped accessors a grouped accessors essentially equivalent to a pair of individual get and set accessories except for the fact that they're grouped within a single declaration. There's a couple reasons that I'm looking to introduce this syntax. one is currently Getters and Setters can be spread across a class. It's generally bad practice, but it is perfectly feasible to have a getter at the top of the class etcetera at the bottom and thousands of lines of code in between. Most people. Don't do that. And that's a good thing. 
but there are some other issues with Getters Getters and Setters in that way one is that if you use Getters and Setters that use computed property names for example symbols that could theoretically require multiple evaluation steps because you have to evaluate the name of the property multiple times. There's also the possibility of side effects that can happen during computer property name access another reason and motivation for the proposal is currently the stage one decorators proposal works using descriptors, and we for stage 1 consensus describe worked on. how we do evaluation for accessors to get both together. the center to provide to The Decorator the updated proposal that was discussed in the last meeting and is still currently being. worked on by the Champions group. uses decorators on Getters and Setters instead of working with the entire descriptor instead now works on a function basis. So a decorator applied to a getter would only be decorating the get function. This can be problematic because there are some scenarios where a decorator would want to be able to get access to both the getter and the setter. And the current proposals don't provide a very easy way to actually correlate those two things. on a class without a lot of additional. access code. So this does provide a convenient place to attach decorators that would decorate both the getter and the setter at once. this Syntax for Getters and Setters. There's multiple languages that use it the example that I've used as prior art. Here comes from the C sharp language. Some of the semantics that this would have is that it would be a syntax error if grouped accessors share the same name of another member with the same placement on the class. So it's already an error if you have a get X and another get X this would also be an error if you had a get X and a grouped accessor that contains just a Setter. We're trying to avoid confusion and complexity when it comes to how these decorations are how decorators could be applied and the evaluation works. A grouped accessor can specify either a get or set or both in any order. A grouped accessor cannot specify more than one get or set, and otherwise they are defined in the class in the same way that the individual declarations would have would have been now now just on its own this might might not seem. is like provide it adds a lot to provide capability for lot to the language. being a better Target for decorators that need to work with both the getter and the setter, but it actually provides a stepping stone to the next feature that I'm looking to propose as part of this. There was a concern that grouped accessors would conflict with the static block proposal that I also have and the case that I wanted to describe is that we would probably want to emulate how Constructor is defined. so in JavaScript if you say a direct Function or use the string Constructor. Those are both the Constructor of the class. If you want to add a property named Constructor, you could use a computed property name. So this wouldn't prevent that. but again, so one of the value adds that is built on top of grouped accessors is the other feature that I'm intending to propose, which is these concept called auto accessors. So you can see a brief example of the syntactic space that I'm investigating for auto accessors. The basic idea. Is that an auto accessor is a getter and Setter pair over a private field that is unnamed. essentially you don't have access to the name of that field. 
It doesn't really matter what the name is. its implementation is stored on the instance just as with any other private field. It could be initialized and that works with same field initialization semantics we have for private Fields, but allows you to condense the ceremony around Getters and setters to allow you to have the same capabilities of accessors such that they can be subclassed properly without having to have the excess code of writing the private slot for the private field and the return in the the getter and the setter. So the idea is that these evolved naturally from the idea of group accessors. It provides a syntactic opt-in for converting Fields into accessors. One of the changes in the updated decorators proposal which is an improvement for performance in most VMs is that a decorated field would just become an accessor. This has some problems though. The issues that come out of that are subclassing concerns if I were to have a field defined on a superclass and then the same field on the subclass that would allow me to overwrite whatever these super class would have set for that. But if I then add a decorator to that field it turns it into an accessor pair that the superclass overwrites. So there is a subclassing hazard that can happen when a superclass fields shadows a subclass and as soon as we start saying that a decorator turns a field into an accessor pair then we're possibly running into that hazard. There also a class of decorators that only Witness a property or a field existing. these might be used for cases like dependency injection systems. or a decorator for metadata. but not actually need to observe observe the gutter where Setter of a property or observe the ability to mutate how those actions are performed. So if a decorator always always turns a field into an accessor, then you're adding the overhead of this accessor definition in some cases where decorators don't actually need it. So the idea here, is that auto accessors in addition to being able to simplify very common simple accessor patterns allow you to have a syntactic opt-in for the transformation of a field to a accessor pair rather than a implicit change which allows you to be very explicit about how this works. There is prior art for this, this C# again this again has this auto accessor properties capability. some of the semantics that I wanted to look into investigate as proposed for this feature is that it would again create an unnamed. private field for the accessor. Initializers evaluate during construction at the same time as other private Fields, so they don't pass through the set method so they're not - a subclass would not be able to observe the set of a superclass during construction. and you would have the ability to use a hash prefix on the getter or setter to create a private named getter or setter. with the same spelling as the property so you could have a public getter with a private setteror so that you could actually set values in the constructor or set values later on but still only expose a public getter. So there's a couple things that I see on the queue and I wanted to be able to go to that before I move to the last slide, which is just the summary. WH: The semantics I can see being slightly useful in some cases. However, the syntax clashes with `static` and anything like `static` that we might do in the future. -RBN: Are you talking about this concern here? +RBN: Are you talking about this concern here? 
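A rough sketch of the clash under discussion, using the grouped accessor shape from the proposal repository; this is proposed syntax only, and whether a quoted name such as `"static"` would even be allowed is exactly what is being debated here.

```js
// Proposed syntax; none of this parses in today's JavaScript.
class C {
  // Under the separate static blocks proposal, this is a static initialization block:
  static { /* ... */ }

  // So a grouped accessor literally named `static` would need some other spelling,
  // for example a string name, which is the idea WH objects to below:
  "static" {
    get() { return 42; }
  }
}
```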
WH: Yes the idea of using `"static"` to mean the keyword `static` is just not something I would want in language any more than I would want to use `"if"` to start an `if` statement. RBN: The only reason that I have this proposed here is that we already do this for Constructor. Constructor is a special case inside of class. -WH: No we don't. A constructor is not special syntax. At the syntactic level a constructor is a method like any other; it’s treated differently later. +WH: No we don't. A constructor is not special syntax. At the syntactic level a constructor is a method like any other; it’s treated differently later. RBN: This is fine. I'm looking for stage 1 so this is something that we can work out mean if we don't want to allow string for static, then that would be fine, but I probably would probably disallow a string name and static if that were the case just so that we don't have this confusion because you can do this with Constructor, why can't you do this with static? @@ -495,23 +508,23 @@ WH: My issue is broader than that. Had we done this first, we would not have bee RBN: I'd be interested if you have any ideas around if there's a different syntax you'd like to see. I went with the one that seemed the most convenient and similar to how this works and other languages that have grouped accessors. -GCL: I'm on board with grouped accessors. I think they're great. I would just be worried that the auto accessors would - if they were part of the same proposal - they seem like something that could be a lot more in the weeds and contentious and might hold back grouped accessors and I wouldn't want them to do that. +GCL: I'm on board with grouped accessors. I think they're great. I would just be worried that the auto accessors would - if they were part of the same proposal - they seem like something that could be a lot more in the weeds and contentious and might hold back grouped accessors and I wouldn't want them to do that. -RBN: Well, the reason that I'm presenting them together is that there I've been thinking about this proposal was for a while even without decorators, but the decorators proposal, its current direction, which I agree with, most of the current direction of the proposal - and I've been working with champions on places where I'm concerned. But one of the areas that I'm concerned about is the issue about the automatic transformation of fields accessors. And one of the reasons for proposing the auto accessories is a way to give you a syntactic opt-in that has more value than just being a syntactic opt-in. it provides more capabilities than just saying oh I want this to be observed to be an auto accessor. In stage one decorators is theoretically possible to write a decorator that does that for you. but it doesn't give you all the same capabilities because initialization still passes through the set; you can't separate public get and private set for example, so there's a lot of flexibility that this provides and grouped accessors kind of provides some of that value and also has some tie-ins to have decorators get applied. I have a couple extra slides in case I need to bring them into play if I need to the show some examples of the differences, but the reason I'm again presenting together is that there's some motivating scenarios from decorators that these are both intended to solve different use cases in those scenarios. 
+RBN: Well, the reason that I'm presenting them together is that there I've been thinking about this proposal was for a while even without decorators, but the decorators proposal, its current direction, which I agree with, most of the current direction of the proposal - and I've been working with champions on places where I'm concerned. But one of the areas that I'm concerned about is the issue about the automatic transformation of fields accessors. And one of the reasons for proposing the auto accessories is a way to give you a syntactic opt-in that has more value than just being a syntactic opt-in. it provides more capabilities than just saying oh I want this to be observed to be an auto accessor. In stage one decorators is theoretically possible to write a decorator that does that for you. but it doesn't give you all the same capabilities because initialization still passes through the set; you can't separate public get and private set for example, so there's a lot of flexibility that this provides and grouped accessors kind of provides some of that value and also has some tie-ins to have decorators get applied. I have a couple extra slides in case I need to bring them into play if I need to the show some examples of the differences, but the reason I'm again presenting together is that there's some motivating scenarios from decorators that these are both intended to solve different use cases in those scenarios. -GCL: Yeah. I was just just if they shouldn't be separated that's fine. It just was a concern I had. +GCL: Yeah. I was just just if they shouldn't be separated that's fine. It just was a concern I had. -DE: I'm a little concerned about the auto accessors just because of the verbosity. In decorators we've identified that many use cases do want to intercept where you decorate a field and you want to create a getter/getter pair that operates on this underlying storage. So if you had to write at each decoration site this several token syntax -It's like six different tokens - to invoke having a getter setter with an underlying field, that would be quite a burden. The current proposal has that fields are auto converted to these auto accessories. I think I could personally I could accept that they should be explicit and that we should go back to the previous [?] where it was explicit, but I think having such a verbose syntax would be a barrier to adoption. So I would prefer to investigate this terser syntax, but I would also be really interested in hearing the feedback for the rest of the committee. +DE: I'm a little concerned about the auto accessors just because of the verbosity. In decorators we've identified that many use cases do want to intercept where you decorate a field and you want to create a getter/getter pair that operates on this underlying storage. So if you had to write at each decoration site this several token syntax -It's like six different tokens - to invoke having a getter setter with an underlying field, that would be quite a burden. The current proposal has that fields are auto converted to these auto accessories. I think I could personally I could accept that they should be explicit and that we should go back to the previous [?] where it was explicit, but I think having such a verbose syntax would be a barrier to adoption. So I would prefer to investigate this terser syntax, but I would also be really interested in hearing the feedback for the rest of the committee. 
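A hedged sketch of the two shapes being weighed in this exchange; the `@tracked` decorator is purely illustrative, and both classes use proposed syntax rather than anything that runs today.

```js
// Decorators proposal as described above: a decorated field is implicitly
// converted into a getter/setter pair over hidden per-instance storage.
class ImplicitConversion {
  @tracked x = 1;
}

// The explicit opt-in being discussed: the conversion is spelled out at the
// declaration site as an auto-accessor, at the cost of a few more tokens.
class ExplicitOptIn {
  @tracked x { get; set; } = 1;
}
```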
RBN: I find it interesting that you say that this is a verbose syntax. Compared to a single keyword like trap being an option for the decorator use case, it does seem more verbose, but if you then compare it to the number of accessors written in the ecosystem that are a getter over a single field, or a getter and a setter targeting a single field, this is actually a much less verbose syntax. And as a matter of fact, that's why it was introduced in the C# language: to reduce the burden of writing boilerplate for private field access and providing public properties, and providing again the ability to have public gets and private sets and doing the initialization. There's a lot of additional value here beyond just the fact that it provides a syntactic opt-in.

DE: Yeah. I'm thinking a lot about the upgrade from existing decorators. Decorators are largely motivated by providing a standard path for this ecosystem pattern that's in wide use, so it would be asking a lot of people to write this new syntax we're proposing at their decorator use sites. So that's, I think, the thing there. For the public get and private set, that's a very separate and complicated case, because it uses the hash token for something very different from its behavior so far in the language: right now the hash always precedes the exact private identifier, and here it's being used, you know, to reconstruct some other private identifier that's not literally lexically there. That's a little bit confusing.

RBN: Whether we're trying to build the intuition that hash means private is something that I wanted to investigate. I didn't want to use a keyword, just because we don't use a keyword for private state in JavaScript for numerous reasons, and the hash character is essentially becoming synonymous with private state. So I felt that this was a good compromise. One of the other options that I looked at was whether, for like [?] z, you would reserve [?] automatically as being the private state field rather than it being an unnamed field, and those are some things I'd like to investigate if this does move to Stage 1: is there a slightly different approach that we want to take? Is this the right way to go? I mean, those are things that I think we could look at before we reach stage 2 if this is an area we want to investigate.
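The subclassing hazard that comes up in the next exchange can be reproduced in today's JavaScript without any decorators; the following is a minimal sketch with illustrative names, using a hand-written accessor pair to stand in for what a field-to-accessor-converting decorator would generate.

```js
class Base {
  x = 1; // a plain field: installed with [[Define]] as an own data property of the instance
}

class Derived extends Base {
  // Stand-in for what a decorator that converts a field into accessors would
  // produce: a getter/setter pair on the prototype over hidden storage.
  #store = 0;
  get x() { return this.#store; }
  set x(v) { this.#store = v; }
}

const d = new Derived();
console.log(d.x); // 1, not 0: Base's field initializer defined an own "x",
                  // shadowing the accessor pair on Derived.prototype
d.x = 5;          // updates the own data property; the setter never runs
```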
-DE: Okay, so my next comment is you are talking about this subclassing hazard where you make decorative fields and then have a subclass of it that interacts and I think we spent a lot of time talking about these subclassing hazards around 2016, 2017 when we when we ended up setting on the define semantics for fields in the presence of accessors and decorators the ecosystem. and we came to consensus on that and it's been it's been shipping in now in all Evergreen browsers, I think. The interaction really exists independently of what we decide for decorators or grouped accessors and it's the kind of thing that I think would be in to begin to work on with type systems. but also the kind of thing that because we've discussed it. I think we came to this conclusion that we're sort of willing to pay this cost for a potential case and so I think I'm not not really convinced that this should be a point in the design I think. it would still be reasonable for decorated elements to by default become accessories. I can see the case for making it explicit, but I'm not convinced that this hazard is so fatal. +DE: Okay, so my next comment is you are talking about this subclassing hazard where you make decorative fields and then have a subclass of it that interacts and I think we spent a lot of time talking about these subclassing hazards around 2016, 2017 when we when we ended up setting on the define semantics for fields in the presence of accessors and decorators the ecosystem. and we came to consensus on that and it's been it's been shipping in now in all Evergreen browsers, I think. The interaction really exists independently of what we decide for decorators or grouped accessors and it's the kind of thing that I think would be in to begin to work on with type systems. but also the kind of thing that because we've discussed it. I think we came to this conclusion that we're sort of willing to pay this cost for a potential case and so I think I'm not not really convinced that this should be a point in the design I think. it would still be reasonable for decorated elements to by default become accessories. I can see the case for making it explicit, but I'm not convinced that this hazard is so fatal. -RBN: yet in the decorators proposal that you brought last meeting you discussed the fact that in typescript we have the declare field keyword. So you say declare and then field name as a way to attach decorators to a field because otherwise you would possibly be redefining that field. So it is an issue that we are constantly thinking about even if we we know what the cases are. So what we've essentially done is we said we know what these concerns are and we know where we're willing to draw the line, but there are still foot guns for users and the case you described is not the case that I described. The case that I described is that I have a super class that has a field and a sub class where I want to apply a decorator to that field the if I do that in my subclass field gets turned into an accessors the super class will override it because it will be defined on the instance. This is a thing that we know exists and we know is a problem and we are essentially saying that users will have to deal it, but we have also have to make sure that users are aware which is why we've discussed whether or not we should have something like the trap keyword or an opt-in syntax because otherwise applying a decorator just magically makes the field turn into an accessor and possibly causes problems. 
So you end up in a situation where I either can't decorate it or I have to rewrite my superclass hierarchy so that I can change it so that it can be decorated properly. +RBN: yet in the decorators proposal that you brought last meeting you discussed the fact that in typescript we have the declare field keyword. So you say declare and then field name as a way to attach decorators to a field because otherwise you would possibly be redefining that field. So it is an issue that we are constantly thinking about even if we we know what the cases are. So what we've essentially done is we said we know what these concerns are and we know where we're willing to draw the line, but there are still foot guns for users and the case you described is not the case that I described. The case that I described is that I have a super class that has a field and a sub class where I want to apply a decorator to that field the if I do that in my subclass field gets turned into an accessors the super class will override it because it will be defined on the instance. This is a thing that we know exists and we know is a problem and we are essentially saying that users will have to deal it, but we have also have to make sure that users are aware which is why we've discussed whether or not we should have something like the trap keyword or an opt-in syntax because otherwise applying a decorator just magically makes the field turn into an accessor and possibly causes problems. So you end up in a situation where I either can't decorate it or I have to rewrite my superclass hierarchy so that I can change it so that it can be decorated properly. DE: The case of declared decorators is very different from the case that you described as not really results of defining accessors or anything. It's a result of the semantics that we decided on which is that if you have a field declaration, but no initializer it initializes the fields to undefined. If you have a super class and subclass that overrides that you don't want you don't want that field to be set to undefined by the subclass so you're already just deviating from from the JavaScript standard your field semantics and I think what you want is a typescript declare field. @@ -525,27 +538,27 @@ DE: It's a known mismatch with typescript but I don't believe that we can accomm MBS: so next up we have some some data from C# usage. -CLA: I am wondering if you have data about the grouped accessors from C#. I personally I never used languages that have grouped accessors. so I was wondering like if it's widely used and which kind of use cases they're actually used a lot. +CLA: I am wondering if you have data about the grouped accessors from C#. I personally I never used languages that have grouped accessors. so I was wondering like if it's widely used and which kind of use cases they're actually used a lot. -RBN: short primer on C# has grouped accessors. You can define a getter only in c-sharp, but you can't have a separate declaration for the cetera. They're all part of the same declaration. So every accessor like property in C# is a grouped version. It is the case in JavaScript where we have get name or set name as separate declarations. +RBN: short primer on C# has grouped accessors. You can define a getter only in c-sharp, but you can't have a separate declaration for the cetera. They're all part of the same declaration. So every accessor like property in C# is a grouped version. It is the case in JavaScript where we have get name or set name as separate declarations. CLA: Oh. 
Right. Yeah, because JavaScript is one of my main languages, I'm actually pretty comfortable with the get and set stuff. But yeah, looking at the syntax of the automatic initialization, it's way less verbose to use those. I like this a lot. So I was wondering if C# had any kind of similar work to what JavaScript has.

RBN: I don't have any specific numbers on auto accessors. I do know that when the feature was added in C# several years back, more and more code moved to it. I don't have numbers on that specifically. I do know that it was a convenience feature for many users that has improved quality of life for development, because it reduced a lot of the boilerplate for accessors, which I think would still be valuable in the JavaScript ecosystem.

DRR: I think it's just worth bringing up - accessors are really popular in the C# ecosystem in general, but maybe for a slightly different reason than you might expect. Basically, because of the way that C# has to be compiled and the way that these things get resolved, you need to sort of future-proof yourself in case you ever might want a field to become an accessor, so what people will typically do is have these things declared as accessors ahead of the fact, and then if you ever need to implement some behavior in the accessor, you'll add that behavior. In JavaScript it's slightly different; there is no difference in resolving to an accessor versus a field, right? It's mostly opaque to a user unless you start to use some of the APIs to dig in and find out. And so some of the cases, like get and set for the auto accessor, I think are maybe not as valuable as just a get, or a get and a private set, or something like that. But I will say that it is pretty popular in the C# community; it is very prevalently used. I don't have exact numbers though.

DE: Okay. I wanted to ask, since we've been talking about this in the context of decorators, what advice the committee has for the decorator champions. We've discussed three alternatives for how decorators could allow fields to turn into accessors. One is that all decorated fields become accessors. Another is this trap keyword. And a third is this auto accessor. My personal opinion is that the latter one would be too verbose to be ergonomic for the use cases that we've examined, but I would really like to hear the opinion of the committee. Should we be waiting for this proposal to work through the process?
Should we wait for this auto accessors proposal to proceed on decorators in case it provides the kind of infrastructure we should be using. +DE: Okay. I I wanted to ask. We've been talking about this in the context of decorators. if what like what advice the committee has for The Decorator Champion we've discussed three alternatives for how decorators could allow fields to turn into accessors. one. Is that all decorated Fields become accessors Another is this trap keyword? and a third is this Auto accessor? My personal opinion is that the latter one would be too verbose to be ergonomic for the use cases that we've examined. but I would really like to hear the opinion of the committee. Should we be waiting for this proposal to work through the process? Should we wait for this auto accessors proposal to proceed on decorators in case it provides the kind of infrastructure we should be using. -GCL: I guess I'm on the Queue. I would say that the needs of the decorator proposal requires the accessor pair, it shouldn't have to ask for permission. That's an implementation detail leak. so I would say we should go with whatever. Oh. Allows us to work around that without having to like lightless The Decorator to do things. +GCL: I guess I'm on the Queue. I would say that the needs of the decorator proposal requires the accessor pair, it shouldn't have to ask for permission. That's an implementation detail leak. so I would say we should go with whatever. Oh. Allows us to work around that without having to like lightless The Decorator to do things. -DE: The implementation leak is a little bit - This is what we were discussing earlier with static decorators, if we want to allow different decorators to make different code transformations we could do so with static decorators, but that would add lots of other kinds of complexity that were trying to avoid it. Do other people have thoughts on this topic? +DE: The implementation leak is a little bit - This is what we were discussing earlier with static decorators, if we want to allow different decorators to make different code transformations we could do so with static decorators, but that would add lots of other kinds of complexity that were trying to avoid it. Do other people have thoughts on this topic? BSH: Okay, I'm partially addressing Waldemar's concerns. I was wondering if it's been considered to just have a keyword before the name so you can just say accessors name {} instead of just the bare name because that would resolve the problem. WH: That would resolve the problem. -RBN: It's possible that we could investigate that. I don't know that - it wouldn't be just open close curly because we'll still have to define what accessors are allowed. +RBN: It's possible that we could investigate that. I don't know that - it wouldn't be just open close curly because we'll still have to define what accessors are allowed. BSH: I didn't mean maybe empty. I just meant that we have the keyword in front. @@ -555,21 +568,21 @@ WH: If you do it that way then I would not want the static blocks proposal to ad RBN: we already make an exception for Constructor. and static is kind of like constructor except during static initialization. So I am willing to investigate changes to the syntax or the addition of a keyword at that becomes necessary to but the idea was to reduce boilerplates rather than add additional boilerplate if possible. I want to continue investigating if I can. -SYG: this is also about auto accessories. So I think with the ease of having these default. 
Getters and setters, I think I would be remiss, as an engine implementer, not to point out that accessors are not free. They will be slower than data properties, and I think the presence of the auto accessor syntax will nudge folks, or will perhaps sweep some of the performance implications under the rug, by making it seem like it is more free than it actually is. This is in direct opposition to the case in C#. In C#, I should hope, because it is ahead-of-time compiled, making things getter/setter pairs for this future-proofing, as Daniel said, comes more or less for free from the runtime performance perspective.

RBN: That is not going to be true for C# either. This allows me to reduce the syntax burden on the developer for writing an accessor for very common cases: getter only, for example, or get and private set, or something that you want to be treated as a getter and a setter so they can be subclassed, because fields do have this subclassing hazard. It allows you to have a very simple syntax to get to the place where I can do things the way I need them to be with accessors, without all the excess ceremony, but that doesn't change the fact that they are still accessors. They're going to have the same runtime burden that they would if you wrote them out manually.

SYG: Right.
I think the point is that by making it so much terser, we might be encouraging performance patterns that are in fact undesirable. And I think that's a trade-off like any other trade-off: if the actual problem that we would like to solve here calls for trading off the performance implications in favor of the ergonomics, of course the performance implications usually take a back seat, unless performance is central to the proposal. But given that a lot of the ergonomics work seems to be borne out of experience with C#, and given the differences in how accessors are used between the two languages, I just want to call out that we should be more careful in how we weigh the trade-off.

MBS: Next up we have the idea to make get/set create a public grouped accessor.

CLA: Yeah. Oh, sorry if I missed this, but is there a way to allow a grouped accessor for a private field as well? And if so, if I have a #x grouped accessor, would this create a public x accessor?

RBN: Let me clarify: the idea is that you would be able to have private accessors if you wanted them. There's very little reason to do that, because they can't be subclassed. The main reason to leverage an auto accessor is because you want either the hiding semantics of a public get with a private set, or you want the subclassing semantics of accessors rather than the subclassing semantics of field definitions. You don't get either of those with private field grouped accessors, but if we have private accessors there's no reason not to allow them. The design goal for having the prefix hash for a get or a set in a public accessor is to say that the hash basically tightens the privileged access. So if you say #x, the get is going to be private and the set is going to be private. You don't make it public in that way. You wouldn't be able to turn a private accessor into a public one by just adding a get to it. The idea would only be to go from a public one and have one of the members become private, or rather have the get or the set become private.

LEO: Yeah, I was just saying this is causing the confusion. I'm pretty on board with the proposed feature. I know there are a lot of concerns being raised.
But for ergonomics, I don't feel comfortable with the confusion of setting up public or private functionality of a field in the get/set keywords. I would stick with what is in the field name, so mixing them up inside of the group part just seems like it's leading to more confusion. I don't like it.

RBN: Yeah. So the one thing here is a semantic that C# does have that I was trying to consider, which is that C# does allow you to have auto accessors with differing visibility modifiers; there's two ways you can have that in C#. So you can have a public get and private set, but you also have the ability to do initialization in the constructor. So the case that I have on the screen here is the example of Z with just a get and a semicolon. In C#, you can say this.Z in the constructor to initialize it as well, but anywhere else in the class it becomes read-only, which is weird. It essentially has a private setter, but it's syntactically illegal to reference it. So the goal was to provide some similar semantics, but we don't have the ability to use keywords, and we can't just magically make an accessor that has only a getter, for example, with a private backing field. If you needed to set it for any reason, how would you do that? So the idea is to provide some mechanism for defining how I update the state, so that my class can update state but external callers can only read the state and cannot write to it.

LEO: I still have trouble. I'd still like to expand on it and probably give it more time, I don't know, to understand what it means when you have the example of W with a get and then the private set, and what it means for the end result.
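A sketch of one way to read the public get / private set case LEO is asking about, written in today's syntax with a private method standing in for the proposed class-private setter; in the proposal this would be generated from a declaration along the lines of `w { get; #set; } = 0`, and the exact spelling is still open.

```js
class Counter {
  // Backing storage. In the proposal this slot is unnamed; it is named here
  // only so that the sketch runs as plain JavaScript today.
  #w = 0;

  get w() { return this.#w; }         // public getter
  #setW(value) { this.#w = value; }   // stand-in for the private setter

  increment() { this.#setW(this.#w + 1); }
}

const c = new Counter();
c.increment();
console.log(c.w); // 1
// c.w = 5;       // no public setter: ignored in sloppy mode, TypeError in strict mode
```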
@@ -577,15 +590,15 @@ RBN: I think it would be clear if I had an example for what that would look like LEO: That's the part that still feels a little bit confusing but I like to give more time to understand how this will blend. if that actually happens. I'm not fully convinced about this part. I'm totally on board with the general idea of grouped accessors. But I think that's fine for the current - that's fine for me for the current proposal. I know there's other concerns we have here. -RBN: a lot of people seem indifferent a cup at least one person's interest in it. So I'd like to seek stage one. +RBN: a lot of people seem indifferent a cup at least one person's interest in it. So I'd like to seek stage one. GCL: I am okay with stage one, but I would want to have it only be stage one, which is we are accepting the problem space and not the solution space. -RBN: Yeah, I'm still interested in, if there are issues with specific parts of the proposal looking at ways of addressing those whether it's changes to syntax if necessary, but I do think it's interesting, I do think that if targets in a class to decorate that you give us different capabilities. I'm not the happiest with how. the changes for Getters and Setters in the current decorators proposal because I think that that breaks some. scenarios and I think this would open those back up. even if there are other alternatives that we look at and investigate for decorators such as the trap keyword and whatnot. I still think there's some value. +RBN: Yeah, I'm still interested in, if there are issues with specific parts of the proposal looking at ways of addressing those whether it's changes to syntax if necessary, but I do think it's interesting, I do think that if targets in a class to decorate that you give us different capabilities. I'm not the happiest with how. the changes for Getters and Setters in the current decorators proposal because I think that that breaks some. scenarios and I think this would open those back up. even if there are other alternatives that we look at and investigate for decorators such as the trap keyword and whatnot. I still think there's some value. DE: I'd like guidance for how to proceed with the decorators proposal because this as ron notes there's overlap. Like I'm all fine with watching how this investigation goes but I'm wondering whether we should be blocking on it or whether we should be proceeding with other ideas. I see a lot of indifferent reactions. So I don't know if we should do another temperature check or if people have comments, comments might be better. -MBS: I've just just reset the temperature check. Dan. Can you be a little bit more specific about what you're trying to measure the temperature on? +MBS: I've just just reset the temperature check. Dan. Can you be a little bit more specific about what you're trying to measure the temperature on? DE: Sure. so let's say heart is like this is really the way to go for decorators and decorators should should really not proceed until we have this worked out. And unconvinced means that I'm not convinced at these relate to each other at all. And then there can be middle reactions along that scale. Is this a fair kind of thing to to ask for the committee Okay. @@ -595,19 +608,19 @@ DE: I mean, we've been discussing it for an hour like we've made in different po WH: I don't understand what you're taking the temperature of. -DE: okay, we can leave this for later. I tried to explain but if people don't don't like this idea then never mind. 
Oh Bradford asks could Daniel explain why this blocks decorators? Well as Ron noted a lot of what decorators need to do sometimes is decorate the behavior of fields to make it so that when you access a field it calls a getter. and the decorators what installs that? that accessor +DE: okay, we can leave this for later. I tried to explain but if people don't don't like this idea then never mind. Oh Bradford asks could Daniel explain why this blocks decorators? Well as Ron noted a lot of what decorators need to do sometimes is decorate the behavior of fields to make it so that when you access a field it calls a getter. and the decorators what installs that? that accessor BSH:. I realize how that works, but I don't understand why. like, well, we don't think these are - DE: we need aside on the syntax in the decorators proposal for how to make a decorator that that adds this excess or behavior? So the current decorator proposal is basically implicitly an auto accessor. And Ron seems to be proposing that we don't go with those with that choice part of his explicit Auto accessor proposal. -BSH That's not how I understood it. I thought the idea was now, but it ordered but if you wear it automatically turns the field into a getter Setter pair, but as a sort of way to deal with that to make things work more smoothly in some cases you could just go ahead and Define things as a pair up front and then it's less of an issue. +BSH That's not how I understood it. I thought the idea was now, but it ordered but if you wear it automatically turns the field into a getter Setter pair, but as a sort of way to deal with that to make things work more smoothly in some cases you could just go ahead and Define things as a pair up front and then it's less of an issue. -RBN: Part of the reason that that I'm proposing this we've been discussing various this and some other ideas within the decorators Champions meetings. And part of the reason I'm proposing this is I'm not comfortable with how decorators currently in the proposal always make fields into accessors as you said there is a performance cost that comes with that, there is the cognitive burden of this subclassing hazard that I've mentioned and we say that it's not an issue, but it's not an issue because you can't get yourself really into this situation right now without writing a lot of code, but decorators are going to do this magically and that's going to cause issues. It's going to be a foot gun. So part of the reason that I'm proposing this as we were discussing in the Champions group was, if this is more than just adding a keyword, and it's syntax that I've been considering for a while, I felt that it would be valuable to get the committee's feedback on whether this is the the direction that we want to go because that will help frame the discussion for decorators when it comes to Fields. Because even if some think it's somewhat verbose, I still feel that it's very terse Syntax for defining a getter and a Setter back with the field which is what we would be doing automatically with the decorator. providing a terse syntax so that we know what is going to happen. I'm explicitly stating that this it is a field that's actually a getter and Setter. So I'm not having this implicit thing happen underneath me that can cause problems and therefore if this is a direction that we go then my suggestion would be that decorating a field does not turn it into an accessor because then you don't have that cost. 
Decorating a field would only be able to witness the field or add additional metadata, but not intercept get and set, because that is the least surprising to the user if I have a syntactic opt-in, whether it's a keyword or a block with a get/set shorthand, that says that this is automatically populated. Some syntactic opt-in allows the user to make the decision as to whether or not they want to eat that performance cost, or lets them know that this change is happening, because that will affect how subclassing works.

DE: Right. So what I wanted from this presentation and discussion was guidance on what we should do with decorators. 
Does stage one mean that decorators should block until we have this worked out much more or does it mean it's doesn't have a particular endorsement, but it's something that the committe is interested in looking at and decorators should proceed independently. What does the committee - what does the committee think about this? -BSH: Okay, I guess what I'm thinking is I think you could move forward with this proposal and that doesn't necessarily mean that that means decorators can't do what they are saying they do now, I feel like that's a that's a separate discussion. I'm mean that may be one one of the goals that Ron had in mind when he wrote this proposal but this proposal stands on its own without having to change the decorators proposal as it is now, I would think +BSH: Okay, I guess what I'm thinking is I think you could move forward with this proposal and that doesn't necessarily mean that that means decorators can't do what they are saying they do now, I feel like that's a that's a separate discussion. I'm mean that may be one one of the goals that Ron had in mind when he wrote this proposal but this proposal stands on its own without having to change the decorators proposal as it is now, I would think RBN: I'm bringing it up now because of decorators, but this is something that I have been investigating, spending some time on for over a year or more just because I'm interested in the syntax space. not just because of decorators, but it came up in the decorators Champions group prior to this meeting. So I felt it was worth bringing up for discussions. We can determine at this is direction that we want to go @@ -629,7 +642,7 @@ RBN: that's been been my direction within the group discussion but that's one of WH: Okay. -SYG: Let me try to reframe Dan's question and kind of say my piece. Where I'm seeing why it's difficult to get consensus for stage one right now. So stage one is supposed to be about motivating a problem space and us wanting to solve a particular problem staying in that space. Here Ron put forward the interaction with decorators specifically this implicit conversion problem and kind of orthogonally the same general ergonomics thing around getters and setters. And that disagreement comes to I think some of the decorator folks do not agree that the implicit conversion problem is a problem to be solved. Namely if you look at the problem from the framing of decorators the current decorators project in tc39 as a standard upgrade path for something that the ecosystem is already heavily using introducing additional syntax. like in this case is contra to that goal. So like the technical merits of that is they're just moot because the problem is not a problem that you want to solve. On the other hand the ergonomics for getters and setters thing I guess seems less controversial. so my question for Ron is one. Is that an accurate assessment and two it sounds like the two goals are not separable for you. It seems like the problem statement that you want us to advance to stage one on is explicitly like we should solve the implicit field to accessoor conversion issue in the current decorators +SYG: Let me try to reframe Dan's question and kind of say my piece. Where I'm seeing why it's difficult to get consensus for stage one right now. So stage one is supposed to be about motivating a problem space and us wanting to solve a particular problem staying in that space. 
Here Ron put forward the interaction with decorators specifically this implicit conversion problem and kind of orthogonally the same general ergonomics thing around getters and setters. And that disagreement comes to I think some of the decorator folks do not agree that the implicit conversion problem is a problem to be solved. Namely if you look at the problem from the framing of decorators the current decorators project in tc39 as a standard upgrade path for something that the ecosystem is already heavily using introducing additional syntax. like in this case is contra to that goal. So like the technical merits of that is they're just moot because the problem is not a problem that you want to solve. On the other hand the ergonomics for getters and setters thing I guess seems less controversial. so my question for Ron is one. Is that an accurate assessment and two it sounds like the two goals are not separable for you. It seems like the problem statement that you want us to advance to stage one on is explicitly like we should solve the implicit field to accessoor conversion issue in the current decorators RBN: The implicit conversion is something I've argued against for quite a while and I felt that and I feel that this is a possible solution to that issue that I think would be worthwhile. I'm not necessarily hinging the proposal on whether that is the solution for decorators. I do think that there's value in this capability one way or the other. I'm not trying to hinge this on just that specific feature of decorators, but the reason I brought it forward was we've been discussing how do we best handle this case? Is it a keyword? is it some type of syntax? if it is this syntax, then it's more than something than just the decorators proposal itself can just introduce and say here. Here's how you do it. We need more. rationale behind that and more benefit than just having a bunch of extra characters to indicate this transformation. And I brought this to the decorators champions group because I felt that this was not only a valuable feature in its own right but also it helps solve that specific issue and it's an issue that I've been passionate about. So that's kind of why these became intermingled. @@ -648,82 +661,86 @@ RBN: I was going to say if we wanted to come back to this later. That's fine. MBS: Okay, so then Dan just so you don't feel like things are being rushed when we give a proper time. I'd like to revisit this later in the meeting before coming to a conclusion on the item is everyone okay with with that? RBN: I'm fine with that. + ### Conclusion/Resolution -* Revisit before the end of the meeting + +- Revisit before the end of the meeting + ## Realms for Stage 3 + Presenter: Leo Balter (LEO) - [proposal](https://github.com/tc39/proposal-realms) - [slides](https://docs.google.com/presentation/d/1mKdez8FMbJ4QQ2KsOCMXOKVW6QoUnrNQf2cwsLy0MyI/edit?ouid=109846357552457289915&usp=slides_home&ths=true) -LEO: Alright, so this is a shared presentation. Caridy will take over at some point. Hello everyone. I am Leo, I work at Salesforce. And here I am presenting. a status update on the Realms proposal. This is a stage 2 proposal, but we have plenty of updates today. We would like to request Station 3, but unfortunately this is not going to be possible. You might see some of the reasons here. So the API remains the same, the external API Remains the Same ,as you might have seen in the past. So we have the realm we have its own structure. 
and direct access to the realm's object global this in the import method that operates similarly to the equally to the dynamic import. It's not a surprise. the API is quite simple in the external side of it, and it does. enables like control the execution of the program was a new global object It provides a new set of Entry 6. there is no default access to the incubator realm - for clarification the incubator realm would be the top the top realm or the place where you are instantiating a new realm. The new realm gets a separate module graph And yes. it does have sync communication with the incubator realm. Motivations for the is mostly for componentized applications from using programs with multiple sources or multiple. programs within a single web application or an application in general. That's actually seen easily with the advent of package managers, different teams, etc. Most of those programs they contend for Global shared resources. and what we're trying to provide here is integrity of this execution within a web application while still not facing race conditions about the current state of the application. This is also enables like virtualization and portability. And yeah, we currently we are not fully able to virtualize environments where the program should be executed, the least in the web platform, but this solves that problem in general that should work seamlessly for any JS environment. We have some prior art here. here. You might know we have iframes. Yeah, and most of the time they are bundled with unforgeables. We're trying to show today how iframes are unwelcoming to do a full virtualization set up. We're going to go through some use cases for these and some examples. Or you might be familiar with node.js api, the VM module. and the VM module basically provides a new realm creation, and it's pretty similar to what we have here. Maybe the VM module may offer more application settings then we are doing in our API. We also found some interesting Parts like in iOS you can use javascriptcore JS context group create, but also new in Android you have exposure of the V8 context new which basically provides a new realm. For the HTML behavior, synthetic Realms, which are those Realms we're creating here, they behave like a parent realm. So we have like the synthetic Realms that are realm are created through this new API, but also we have the main realms or principal rounds as you already use: the window, a worker or worklet Global Scopes that are created by HTML. and HTML Keys some some State and behavior of principal realms. and they all the synthetic programs have a parent which is a principal Rome. We also might call it the incubator realm here. and we have a look up State and behavior on parent realm, not synthetic Realm. so "current realm" is the current principal realm when we use this these wording. The synthetic realm also have their own module graph. This is a pretty important thing for what we want to to get with these API to fully support the realm import since module close over global object. So we actually don't want to leak the global objects by different modules and modules evaluation and observing execution from all the different modules in different different realms. Yeah. so so we've had some previous key concerns What problems this is trying to solve and if this is a net win over the school of using iframes. those are some these are summarized. question list from Shu which I really appreciate like trying to synthesize this. 
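For reference, a minimal sketch of how the API summarized above might be used, assuming the then-current shape of the proposal (a `Realm` constructor exposing only `globalThis` and an `import()` method); `./counter.js` is a hypothetical module used only to show the separate module graphs.

```js
// Sketch only, based on the API described above.
const realm = new Realm();

// The realm gets its own global object; values set on it do not leak into the
// incubator realm's global, and the realm gets no default access back to us.
realm.globalThis.shared = 42;
console.log('shared' in globalThis); // false

// realm.import() behaves like dynamic import(), but resolves into the realm's
// own module graph, so the same specifier yields a separate module instance
// that closes over the realm's global object.
const outer = await import('./counter.js');       // incubator realm's graph
const inner = await realm.import('./counter.js'); // the new realm's graph
console.log(outer === inner); // false
```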
We've tried to overcome those questions, we'll will bring it up like some use cases. I also know we also going to present those here. and we try to land in a lot of differences this actually present from my frames one of them like being naturally. that naturally lightweight solution, an intentionally lightweight solution compared to what we actually need to roll out today with iframes. And yes, we believe it's a net win. Let's try to go over some of the questions, but before that we also have an ongoing TAG review and guess what we have a lot of we've had a lot of feedback coming in monday or yesterday. +LEO: Alright, so this is a shared presentation. Caridy will take over at some point. Hello everyone. I am Leo, I work at Salesforce. And here I am presenting. a status update on the Realms proposal. This is a stage 2 proposal, but we have plenty of updates today. We would like to request Station 3, but unfortunately this is not going to be possible. You might see some of the reasons here. So the API remains the same, the external API Remains the Same ,as you might have seen in the past. So we have the realm we have its own structure. and direct access to the realm's object global this in the import method that operates similarly to the equally to the dynamic import. It's not a surprise. the API is quite simple in the external side of it, and it does. enables like control the execution of the program was a new global object It provides a new set of Entry 6. there is no default access to the incubator realm - for clarification the incubator realm would be the top the top realm or the place where you are instantiating a new realm. The new realm gets a separate module graph And yes. it does have sync communication with the incubator realm. Motivations for the is mostly for componentized applications from using programs with multiple sources or multiple. programs within a single web application or an application in general. That's actually seen easily with the advent of package managers, different teams, etc. Most of those programs they contend for Global shared resources. and what we're trying to provide here is integrity of this execution within a web application while still not facing race conditions about the current state of the application. This is also enables like virtualization and portability. And yeah, we currently we are not fully able to virtualize environments where the program should be executed, the least in the web platform, but this solves that problem in general that should work seamlessly for any JS environment. We have some prior art here. here. You might know we have iframes. Yeah, and most of the time they are bundled with unforgeables. We're trying to show today how iframes are unwelcoming to do a full virtualization set up. We're going to go through some use cases for these and some examples. Or you might be familiar with node.js api, the VM module. and the VM module basically provides a new realm creation, and it's pretty similar to what we have here. Maybe the VM module may offer more application settings then we are doing in our API. We also found some interesting Parts like in iOS you can use javascriptcore JS context group create, but also new in Android you have exposure of the V8 context new which basically provides a new realm. For the HTML behavior, synthetic Realms, which are those Realms we're creating here, they behave like a parent realm. 
So we have like the synthetic Realms that are realm are created through this new API, but also we have the main realms or principal rounds as you already use: the window, a worker or worklet Global Scopes that are created by HTML. and HTML Keys some some State and behavior of principal realms. and they all the synthetic programs have a parent which is a principal Rome. We also might call it the incubator realm here. and we have a look up State and behavior on parent realm, not synthetic Realm. so "current realm" is the current principal realm when we use this these wording. The synthetic realm also have their own module graph. This is a pretty important thing for what we want to to get with these API to fully support the realm import since module close over global object. So we actually don't want to leak the global objects by different modules and modules evaluation and observing execution from all the different modules in different different realms. Yeah. so so we've had some previous key concerns What problems this is trying to solve and if this is a net win over the school of using iframes. those are some these are summarized. question list from Shu which I really appreciate like trying to synthesize this. We've tried to overcome those questions, we'll will bring it up like some use cases. I also know we also going to present those here. and we try to land in a lot of differences this actually present from my frames one of them like being naturally. that naturally lightweight solution, an intentionally lightweight solution compared to what we actually need to roll out today with iframes. And yes, we believe it's a net win. Let's try to go over some of the questions, but before that we also have an ongoing TAG review and guess what we have a lot of we've had a lot of feedback coming in monday or yesterday. -CP: we have been getting some feedback from the TAG review from mostly coming from Google. we have feedback in the past coming in from Domenic mostly right now. He's also providing a consolidated list of feedback, some of these feedbacks we have here in the past. The majority of the issues are related to the web and and how the web sees this proposal. we haven't gotten much pushback from anyone else. Obviously in the other platforms this feature exist in some degree and is is very similar to what we are proposing. In the case of node in of IOS and Android and similar they exist today. We were using in the apps. even in some of those platforms. But those are severely penalized by having a hidden web view just to have an iframe inside it and that's problematic for applications in those environments so people don't use those. they use the realm creation, the context creation, via the existing apis in native code in order to create a new JavaScript environment. For the web the biggest challenge has been been getting Doomenic and some other falks company conveyed this is useful. This is orthogonal to all the efforts the web is trying to adopt and the path of the web as they call it. we at this point and we could continue waiting for more feedback their shoes leading therefore and I've been great in terms of communication. communication. I would like to go very quickly over over these five major bullet points that Domenic mentioned a couple of days ago, I think. The first one is obviously the saying that in the past we have here. community talking about three years now four years. 
some of that, and the fact that the Realms are not a security boundary. The proposal is not claiming that, obviously, but it seems that there are some concerns from Domenic and some other parties about whether this proposal would influence developers into using these as a security boundary rather than just as a way to create a new evaluation context and control the evaluation in that context. In this case, it is what it is. So we have been very clear for a long time now: this is about integrity. It's not about security; if you want security, if you want to protect against Spectre and so on, you have -- [interrupted]

CP: So this first bullet is just about that. It's trying to disseminate information, to see if this is a real concern. Our position has been that you can create these things today in all these environments; people are using it today. It is very hard to do the same everywhere, you have to do it very differently in some environments: you have to use native code, and on the web the same-domain iframe exists today. We're just trying to normalize that across all the different environments, and Domenic believes that this is in fact problematic because it gives a false sense of a security boundary, which it's not.
+CP: So this first bullet is just about that. It's trying to disseminate information, try to see if this is a real concern. Our position has been involved. You can create these things today to know all these environment people are using it today. It is very hard to do the same. you all environments you have to to do very differently in some environment. You have to do Native code in the web. You have the same domain. I frame it exists today. We're just trying to normalize that across all the different environments and Domenic believe that this is in fact, problematic because it give the false sense of security security boundary, which it's not. -LEO: In addition to this, Domenic makes a reference to an article here from figma where they found out like. issues. The issues are raised from an implementation of a round brush in there was just an approximation of what we want for the round and the issues were well the in purse Percy, they would not be present or existing. in the D. Actual Realms API. and they were dressed in the industry and the shame. but it's not really a problem with in Realms API per si. So there are some subjective assumptions here, but I don't feel them like they are judgmental in terms of being subjective, but I don't see them nice technical. It's hard for us to give a technical aspect of this we can say many times. This is basically a layer of integrity. Not really a security sandbox. You might have heard this word because as Mark Miller has mentioned many times, there are many perspectives, many ways to define and perceive what security means and does. This is not the same for the implementation perspective? +LEO: In addition to this, Domenic makes a reference to an article here from figma where they found out like. issues. The issues are raised from an implementation of a round brush in there was just an approximation of what we want for the round and the issues were well the in purse Percy, they would not be present or existing. in the D. Actual Realms API. and they were dressed in the industry and the shame. but it's not really a problem with in Realms API per si. So there are some subjective assumptions here, but I don't feel them like they are judgmental in terms of being subjective, but I don't see them nice technical. It's hard for us to give a technical aspect of this we can say many times. This is basically a layer of integrity. Not really a security sandbox. You might have heard this word because as Mark Miller has mentioned many times, there are many perspectives, many ways to define and perceive what security means and does. This is not the same for the implementation perspective? -CP: By the way, if anyone has questions or comments about these just jumping we don't have to wait until the end. That's more like a conversation at this point. so this is is the first bullet is it's just that I think his sentiment about that earlier about this particular. block pass from figma. not necessarily about the problems they encounter when trying to do a polyfill. by more about the intent. but they were trying to do do kind of a security secur boundary on top of an iframe or something. Don't poke around from the same domain and so on. +CP: By the way, if anyone has questions or comments about these just jumping we don't have to wait until the end. That's more like a conversation at this point. so this is is the first bullet is it's just that I think his sentiment about that earlier about this particular. block pass from figma. 
not necessarily about the problems they encountered when trying to do a polyfill, but more about the intent: they were trying to build a kind of security boundary on top of an iframe or something, so that you don't poke around from the same domain, and so on.

CP: The second bullet is more about the direction of the web rather than a technical concern. This is mostly saying that the web is moving in a direction where, if you want to do code execution in isolation, you should go async rather than sync, and this proposal is basically going in a different direction. In the past we have said that Realms are complementary to having a worker or having something running in a different process, in a different agent, and such. We cannot speak for the web in general, but it seems complementary in our opinion.

CP: The third bullet is more concrete. It is really about the developer experience and how to teach this to people, how to get developers to understand that when you create a Realm there will be things that you will not be able to access out of the box.
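A small illustration of that point, assuming the clean-slate option under discussion, where a new realm's global carries only the ECMAScript built-ins and no host-provided APIs.

```js
const realm = new Realm();
const g = realm.globalThis;

// Language built-ins are available out of the box...
console.log(typeof g.Array, typeof g.Promise, typeof g.JSON); // "function" "function" "object"

// ...but host APIs are not (under the no-host-additions option), so code run in
// the realm only sees what the incubator realm deliberately passes in.
console.log(typeof g.fetch, typeof g.document); // "undefined" "undefined"
```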
There is a clear separation now between the APIs that are provided by the language versus APIs that are provided by the browser in this case. and this separation. up to today's date is now present in a web application in the web. Our position has been like this feature exists today. We know that and you know the platform distribution model that is more common today for developers is npm. So somehow they are already tuned in to factoring the case of different apis available to you for each individual program. when they use node for distribution and when they use the web or distribution of those of the bundle of those programs that are used by the app and so on. So this already exists today today in the industry. As a developer, you're facing these on the daily basic class. There are tons of tools that we have created in order to mitigate this problem in the first place. like things like linter were. you could specify what environment you're targeting and there are many settings for that where you already know up front? What are the things that are available for you programs? Ooh. use and and the linter will do the rest for you until 1 plus bundlers which or compilers were? They actually do work in terms of accommodating the source and the target. that you want for the program that you're writing. So yes, this is new to the web, but it's not new for developers. I'm not sure what can we say about it, because this is the reality that most people that live in today. -CP: the fourth one, we disagree on this one because the realm has to APIs basically the creation and the into words. and obviously give you access to the global object and you could use eval you want to the proposal does not introduce a new evaluation mechanism. there on the web. There are three main channels for evaluation. You can create a script you could do Eval. you have eval available depending on the settings of the realm, and you could do import Dynamic import to kick up the program and get some code. [transcription error] We're not introducing a new evaluation mechanism. Just what you have already at disposal and if you have a CSP rules that disables eval in the dark. You will not be able to create a Realm and called eval, you will only have available the input for you. And so it seems that this is not a problem. +CP: the fourth one, we disagree on this one because the realm has to APIs basically the creation and the into words. and obviously give you access to the global object and you could use eval you want to the proposal does not introduce a new evaluation mechanism. there on the web. There are three main channels for evaluation. You can create a script you could do Eval. you have eval available depending on the settings of the realm, and you could do import Dynamic import to kick up the program and get some code. [transcription error] We're not introducing a new evaluation mechanism. Just what you have already at disposal and if you have a CSP rules that disables eval in the dark. You will not be able to create a Realm and called eval, you will only have available the input for you. And so it seems that this is not a problem. -LEO: Just to add to this the feedback yours is like introducing a major new code evaluation Vector is that primary entry point into an API sound great. We are not really introducing any major new evaluation. 
You could probably have access to eval from the realm through the realm object's globalThis (realm.globalThis.eval), but there is no direct additional entry point; the primary entry point we do have is through our import method. I think there's some misinformation here, even like the example with a Realm eval method that does not exist.

CP: So the last bullet is obviously somewhat out of our expertise at this point, and I think Daniel is around to maybe provide more details if anyone has questions about these. One of the concerns is how we're going to integrate this into the web in general, from the specification point of view. We have been getting some great help from Daniel and some other folks who have been looking at how to solve this problem. In their opinion this integration is not that complex. But again, it's outside of my expertise; I cannot really say much about it. One thing that I believe is important to note here is that there is a pull request open from Daniel that has all the pieces of the integration, and we'd like to continue exploring that. But in general it doesn't seem like an impossible task [transcription error]

CP: the complexity of that implementation, as you said. We have overcome this one so far.

LEO: There are two different cases here for the complexity. One of them is integration into HTML, and I believe Daniel is doing an amazing job with the HTML integration. There is a pull request trying to solve this and asking for feedback. I'm not sure if Daniel is getting all the answers yet, but it's in the works. At the same time, the complexity also relies on the technical complexity of usage for Realms, and I think this is pretty subjective as well: it applies to those who are opting to use Realms, not to those writing code that will be run in the realm. 
and for what it's worth of of as well, we've been doing a lot of Investigations and finding out the relationships that we have with the web platform and JavaScript. They had the engine integration that are mentioned here. We find they exist today. So we are not like reinventing the wheel, but just reusing what we have today as described in this text in what is implemented. +LEO: There are two different cases here for the complexity. One of them is integration into HTML, which I believe Daniel is doing an amazing job with the HTML integration. There is a pull request trying to solve and asking for feedback. I'm not sure if then is getting like all the answers probably. but it's in the works. but also the same for time the complexity relies on. the technical complexity of usage for Realms, I think this is pretty subjective as well to those who are using it institute the Realms just add is to those who are opting to implement Realms, not for those writing code That will be run in the realm. and for what it's worth of of as well, we've been doing a lot of Investigations and finding out the relationships that we have with the web platform and JavaScript. They had the engine integration that are mentioned here. We find they exist today. So we are not like reinventing the wheel, but just reusing what we have today as described in this text in what is implemented. -CP: So one thing that I want to point out as part of their learning process we choose to use the term realm for the API. And if in fact the API provides a realm Constructor called Realm and because of the way the web is the web has been using the concept or the the word realm in the specification and the way that they have implemented the web platform It has been very controversial in terms of the concepts that are predefined in our heads. Maybe that was a mistake. Maybe we should have chosen context maybe or something similar. which is more likely to match the current implementation. We didn't foresee these but there’s still the possibility to make a change in the name of it people think that this is important but in reality, which is using composite that were well defined 262 and we went with with that kind of naming an API where it has some conflicts with the concept in order platform. So it's still possible to change it if needed. +CP: So one thing that I want to point out as part of their learning process we choose to use the term realm for the API. And if in fact the API provides a realm Constructor called Realm and because of the way the web is the web has been using the concept or the the word realm in the specification and the way that they have implemented the web platform It has been very controversial in terms of the concepts that are predefined in our heads. Maybe that was a mistake. Maybe we should have chosen context maybe or something similar. which is more likely to match the current implementation. We didn't foresee these but there’s still the possibility to make a change in the name of it people think that this is important but in reality, which is using composite that were well defined 262 and we went with with that kind of naming an API where it has some conflicts with the concept in order platform. So it's still possible to change it if needed. -LEO: So those were some of the last Domenic's feedback that we would appreciate it if he would be willing to re-engage into the discussion promptly. 
Unfortunately, that's not totally possible right now, but we would be open to discuss this and try to address this as we actively been trying. There is some other feedback from and questions actually on the tag, [?] some of them we just try to keep address in here. Many of those questions are like regarding it the Realms only have the built-in JS apis available. available. Yeah. Realms are just require to be bundled with the frame modules and or the built-in JS. and if could whole side properties if they want to there's an ongoing discussion that we are probably said in the proposal to not expect any additional properties and to make that a like a full restriction. Our preference is to have a clean slate of the realm object and we just try to make it flexible for the integration with HTML so we are going like if it works if that's preferable. But for the HTML integration, we are definitely up to it. +LEO: So those were some of the last Domenic's feedback that we would appreciate it if he would be willing to re-engage into the discussion promptly. Unfortunately, that's not totally possible right now, but we would be open to discuss this and try to address this as we actively been trying. There is some other feedback from and questions actually on the tag, [?] some of them we just try to keep address in here. Many of those questions are like regarding it the Realms only have the built-in JS apis available. available. Yeah. Realms are just require to be bundled with the frame modules and or the built-in JS. and if could whole side properties if they want to there's an ongoing discussion that we are probably said in the proposal to not expect any additional properties and to make that a like a full restriction. Our preference is to have a clean slate of the realm object and we just try to make it flexible for the integration with HTML so we are going like if it works if that's preferable. But for the HTML integration, we are definitely up to it. -CP: that one is also related to Domenic's third Point like the difference between understanding that there is a web API and there is a language API, and how much of that can be smoothed out by allowing the web to add things to the realm. We believe that will be a little bit more confusing for developers because it's not going to be the same that they will get in other platforms. We try to go with the portability aspect of these where when you create a realm you get the same across all these different platforms. +CP: that one is also related to Domenic's third Point like the difference between understanding that there is a web API and there is a language API, and how much of that can be smoothed out by allowing the web to add things to the realm. We believe that will be a little bit more confusing for developers because it's not going to be the same that they will get in other platforms. We try to go with the portability aspect of these where when you create a realm you get the same across all these different platforms. -LEO: Yeah. and and there's the other question on how many libraries won't work unless they had [transcription error]. Yes as a nation like the clean slate of the Realms is one of the goals. So if you expect your you won't have many big things in the realm like we expect a clean slate and that's how it's mostly used today for those using sandbox or virtualization virtualization approaches and the current realm cannot be accessed from the incubator Realm. 
So the new created realm cannot have direct access to their incubator Realms or to the top level realms. security code doesn't need to know, It's an inner realm the code executed just like we just want this code to run seamlessly without observations, to the mess up in the global. So what we want is integrity. integrity. and Realm API has only important Global [?] as we mentioned. This is like how clear we want to go with this API. We might want to work with extensions in the future, but definitely not for this current proposal. We just want to make it like our MVP just going through Max Min and we may explore extensions in the future, but that's not like any compromise that I am expecting from anyone here. The current API works real well right now and yes, it does have known for further explanation. Yeah. so there is just one that I like to show here which is a Google and a Google and is issue today when I say like componentization. apps componentized. applications where they really need some immediate access to State and synchronize to avoid racing conditions. This is one of the examples with and worker Dom where the like it bounding and claims direct doesn't work over a sync communication channels. This is a real problem. They of to today does he just liked one use case you have. over days but most of them like you have a central DoM any. poor any. important to avoid Racing condition conditions at realms want introduced. +LEO: Yeah. and and there's the other question on how many libraries won't work unless they had [transcription error]. Yes as a nation like the clean slate of the Realms is one of the goals. So if you expect your you won't have many big things in the realm like we expect a clean slate and that's how it's mostly used today for those using sandbox or virtualization virtualization approaches and the current realm cannot be accessed from the incubator Realm. So the new created realm cannot have direct access to their incubator Realms or to the top level realms. security code doesn't need to know, It's an inner realm the code executed just like we just want this code to run seamlessly without observations, to the mess up in the global. So what we want is integrity. integrity. and Realm API has only important Global [?] as we mentioned. This is like how clear we want to go with this API. We might want to work with extensions in the future, but definitely not for this current proposal. We just want to make it like our MVP just going through Max Min and we may explore extensions in the future, but that's not like any compromise that I am expecting from anyone here. The current API works real well right now and yes, it does have known for further explanation. Yeah. so there is just one that I like to show here which is a Google and a Google and is issue today when I say like componentization. apps componentized. applications where they really need some immediate access to State and synchronize to avoid racing conditions. This is one of the examples with and worker Dom where the like it bounding and claims direct doesn't work over a sync communication channels. This is a real problem. They of to today does he just liked one use case you have. over days but most of them like you have a central DoM any. poor any. important to avoid Racing condition conditions at realms want introduced. -CP: AMP is the one of the more complete examples that we have been seeing out there and this is this app from Google. 
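A hedged sketch of the synchronous virtualization pattern being described, in the spirit of the worker-dom/AMP use case above; `VirtualDocument` and `./third-party-widget.js` are hypothetical stand-ins, not part of the proposal.

```js
// Sketch only: the incubator realm hands guest code a virtual DOM implementation
// and observes its effects synchronously, with no async message round-trips.
class VirtualDocument {
  #nodes = [];
  createElement(tag) { const el = { tag }; this.#nodes.push(el); return el; }
  get nodeCount() { return this.#nodes.length; }
}

const realm = new Realm();
const virtualDocument = new VirtualDocument();

// Only the virtualized API is exposed; the real document stays unreachable.
realm.globalThis.document = virtualDocument;

// A hypothetical widget module evaluated inside the realm can now call
// document.createElement(...) and the incubator realm sees the result immediately.
await realm.import('./third-party-widget.js');
console.log(virtualDocument.nodeCount);
```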
The fact that they will be able to use the Realm as a complement to The main iframes and or in general they will be using the Realms inside a secure boundary that they already created by creating a cross domain - inside back inside the main app. And by doing so they don't have to worry about security. They only want to worry about the Integrity of the different code that they want to evaluate inside the AMP app. This separation allows them to virtualize the DOM so they can provide their own implementation of the DOM inside a realm to the creating your own created. a thumbs up Dom apis available for the program to use and they have synchron's access to the actual. DOM structure. that is in the app so they can manipulate it by using the exact same apis that are a model. So I believe then this complete example sort of highlight how these features are complementary and we could just use them in ways that allows to to be able to continue using the apis that exist today. Those apis are mostly synchronous apis that you call and get it result back or you call and they have a side effect right away. That's the kind of thing that you would not be to do if you go through an async boundary like a worker, so where you don't have immediate feedback you have to do a lot of other things in order to accommodate the fact that something like computing the size of an element in the in the screen asynchronous today and you have to go async. I think that's just an an example. +CP: AMP is the one of the more complete examples that we have been seeing out there and this is this app from Google. The fact that they will be able to use the Realm as a complement to The main iframes and or in general they will be using the Realms inside a secure boundary that they already created by creating a cross domain - inside back inside the main app. And by doing so they don't have to worry about security. They only want to worry about the Integrity of the different code that they want to evaluate inside the AMP app. This separation allows them to virtualize the DOM so they can provide their own implementation of the DOM inside a realm to the creating your own created. a thumbs up Dom apis available for the program to use and they have synchron's access to the actual. DOM structure. that is in the app so they can manipulate it by using the exact same apis that are a model. So I believe then this complete example sort of highlight how these features are complementary and we could just use them in ways that allows to to be able to continue using the apis that exist today. Those apis are mostly synchronous apis that you call and get it result back or you call and they have a side effect right away. That's the kind of thing that you would not be to do if you go through an async boundary like a worker, so where you don't have immediate feedback you have to do a lot of other things in order to accommodate the fact that something like computing the size of an element in the in the screen asynchronous today and you have to go async. I think that's just an an example. -LEO: Okay. And just to wrap wrap up. Yes, we may not going to be able to have stage 3 here. SYG is really being helpful and doing communication like working as a communication between us and Google's team, to make this work. I think that the most pushback is from there today. I believe we have some more like neutral position. from other implementations. Not really official but like I seen 10 temperature seems to be neutral. 
Yeah, but I still want to request stage 3 in the next meeting and I like to check up. like if we can add anything to the next steps in January. And I also just want to make sure official eyes for [?] I've had three interesting revealed from Richard Gibson, it's not not not sure if I can can actually consider this as like a totally plus 1 but it's very positive and his opinion is positive about it is I have more reviewers, but I just want to make sure I officialize that the editors also want to follow up on the HTML integration so it's to I'm not saying it's a complete +1 but they are there is ongoing progress towards this. anything you you everyone for +LEO: Okay. And just to wrap wrap up. Yes, we may not going to be able to have stage 3 here. SYG is really being helpful and doing communication like working as a communication between us and Google's team, to make this work. I think that the most pushback is from there today. I believe we have some more like neutral position. from other implementations. Not really official but like I seen 10 temperature seems to be neutral. Yeah, but I still want to request stage 3 in the next meeting and I like to check up. like if we can add anything to the next steps in January. And I also just want to make sure official eyes for [?] I've had three interesting revealed from Richard Gibson, it's not not not sure if I can can actually consider this as like a totally plus 1 but it's very positive and his opinion is positive about it is I have more reviewers, but I just want to make sure I officialize that the editors also want to follow up on the HTML integration so it's to I'm not saying it's a complete +1 but they are there is ongoing progress towards this. anything you you everyone for RGN: I'll make an official +1 -DE: So there was a lot lot of discussion about technical issues in the HTML integration PR and I'm not not aware of problems with the with the current one The main thing is each realm has its own module map. Although that's something to find in the host. I think it's something that we kind of expect to be the case across environments for the for the fundamental reason that modules close over global objects And our realm is a global object. So if you want to be able to run module code in the context of a realm, you need a separate module map. I think the the inline module block proposal that that Shu sort of previewed would fit in very well with this. That's all. +DE: So there was a lot lot of discussion about technical issues in the HTML integration PR and I'm not not aware of problems with the with the current one The main thing is each realm has its own module map. Although that's something to find in the host. I think it's something that we kind of expect to be the case across environments for the for the fundamental reason that modules close over global objects And our realm is a global object. So if you want to be able to run module code in the context of a realm, you need a separate module map. I think the the inline module block proposal that that Shu sort of previewed would fit in very well with this. That's all. -AKI: Thank you, Daniel. Do you have a response? or CP? +AKI: Thank you, Daniel. Do you have a response? or CP? -CP: No. +CP: No. -AKI: Okay. okey-dokey, Shu +AKI: Okay. okey-dokey, Shu -SYG: so I do think there is utility here, but I don't think that all of the counter-arguments to Domenic's arguments are that straightforward as you played out in out in The Proposal? 
So, to give an overview of the Chrome position: we don't yet have an official position; we're still working on that. So thank you for taking that into account and delaying asking for stage 3 to a later meeting while we try to get that worked out. I think we can set the complexity stuff aside, which, you know, Domenic's feedback ranked last because of the priority of constituencies, so we can kind of ignore that for now. But I do think that the separation of JS and the web platform is not a good thing; it is a failure along the same lines as Conway's law, where, because this is how the standards bodies happen to be set up, and as a consequence of that it happens to be how JS engine teams versus web engine teams are set up, the "clean slate" to the champion team seems to be everything that TC39 touches. I do believe that is not a useful distinction to the web platform or to Node programmers at large. I mean, there are efforts to align on a lot of stuff with the web side, and in repeated developer surveys it has really jumped out at me that devs do not know, nor care, nor should they care, about the separation between JS features and web features. Some of that we can separate; we can cut the separation along some other lines, maybe it's I/O, maybe it's some kind of capabilities, but cutting the line according to standards-body boundaries is not the right one. And I think we should have a better answer to that than to push for prohibiting hosts from adding things to the global object.

LEO: There is a distinction there, in something that I brought up in that issue that I mentioned for the discussion. 
so this discussion here [issue #284] we mentioned there are two parts one of them. We have the host initialized synthetic Realm that is almost a clone of hos initializeuserrealm which allows some extensions to the properties of the object. we also do have the settee for Global Bindings that we haven't decided anything yet. in the center for Global [?] is an ecmascript abstraction that actually allows the addition of a Global Properties or global names and that's still still untouched and still like a new discussion that we are having and we probably can improve that for integration. +LEO: There is a distinction there in something that I brought up to that issue that I mentioned for the discussion. so this discussion here [issue #284] we mentioned there are two parts one of them. We have the host initialized synthetic Realm that is almost a clone of hos initializeuserrealm which allows some extensions to the properties of the object. we also do have the settee for Global Bindings that we haven't decided anything yet. in the center for Global [?] is an ecmascript abstraction that actually allows the addition of a Global Properties or global names and that's still still untouched and still like a new discussion that we are having and we probably can improve that for integration. -DE: I don't believe so. I renamed initialize user realm to host initialize synthetic realm anyway, so they're not similar. It's just the same thing, but the main thing is I think we could Define something like how workers Define a certain set of web IDL interfaces that they Implement and they have interface Interfaces Exposed on them. I think we could do that with Realms as well if we decide to do so when I got involved in this proposal it was left completely kind of ambiguous up to hosts what to do, I think it's the job of this Champions group in the committee and this committee to actively propose a solution. So I proposed an initial one, which was nothing is exposed. But I do think that we could also expose things we would pass to consider making significant changes to web IDL to maybe not that significant but some changes to web IDL to accommodate this but I believe it would be possible. We would just have to think through which interfaces are exposed on Realms. This is an exercise that the web has done before in the context of workers and worklets. And I think it would be totally completely completely possible. +DE: I don't believe so. I renamed initialize user realm to host initialize synthetic realm anyway, so they're not similar. It's just the same thing, but the main thing is I think we could Define something like how workers Define a certain set of web IDL interfaces that they Implement and they have interface Interfaces Exposed on them. I think we could do that with Realms as well if we decide to do so when I got involved in this proposal it was left completely kind of ambiguous up to hosts what to do, I think it's the job of this Champions group in the committee and this committee to actively propose a solution. So I proposed an initial one, which was nothing is exposed. But I do think that we could also expose things we would pass to consider making significant changes to web IDL to maybe not that significant but some changes to web IDL to accommodate this but I believe it would be possible. We would just have to think through which interfaces are exposed on Realms. This is an exercise that the web has done before in the context of workers and worklets. 
And I think it would be totally completely completely possible. -SYG: Hi. agree. It would be completely completely possible. I would like to go on the record to say that pushing for instead that the realm API what is available in a room Global is the exactly the same across all JS runtimes that seems like a started to +SYG: Hi. agree. It would be completely completely possible. I would like to go on the record to say that pushing for instead that the realm API what is available in a room Global is the exactly the same across all JS runtimes that seems like a started to DE: this seems like something that we could definitely iterate on before the next meeting because I really want to push for a definition of this that we're thinking through completely more than sure. -CP: this is outside of my expertise or who need some help, from Shu, Daniel and some water parks and parks and defining that we're trying to accommodate the warden and 262 because obviously 262 now less that 262 probably would just have a normative node or something somewhere that finds it if the host will be able to add other things and then we have to work with the web spec to the fine. they will define what they are adding to it. +CP: this is outside of my expertise or who need some help, from Shu, Daniel and some water parks and parks and defining that we're trying to accommodate the warden and 262 because obviously 262 now less that 262 probably would just have a normative node or something somewhere that finds it if the host will be able to add other things and then we have to work with the web spec to the fine. they will define what they are adding to it. -SYG: Okay. right, cool. cool. Yeah, I won't rehash the other points. that I think there are strong counter arguments to your counter arguments as well. I think the point is that there is utility here, but it is not a slam-dunk in my opinion that realm is not like a slam dunk net good for all JS ecosystems. I would like to urge the other browsers to really think through on the complexity point, which arguably you know should be last in the priority of constituencies, that if this really has value add to web developers to large-scale Partners to the health of the web, you know, we should not consider how hard it is for us to be that high on the priority list that said said most of the work for implementing implementing Realms once it reaches stage 3 is not going to be in a JS engine. I think most js engine teams have no qualms about this because it is pretty trivial the things that are already exposed. like you showed on iPhones on IOS and Android. I mean those are exposed because I imagine like like there is no predefined runtime JS runtime except raw access to the JS engine that those are exposed. I wouldn't take that as evidence that people are reaching for that functionality as they need a starting point to run JS code and you do that by by me trying to create some kind of new context with the engine binding that you have available. available the web and node and existing JS run times that are built on top of the JS engines are just completely different. 
well So most of the work is going to be on the for Chrome on the blink side for JSC on the webkit side and for Firefox in the gecko side and part of why I cannot really give stage 3 or no stage 3 right now is I need to talk more with the web side to get their take on it if I say let's go ahead and do stage 3 V8 has not much work, but I cannot speak for the rest of the team and I would urge the other browsers to do so as well. Yes. this this is kind of of unique this proposal in that a stage three here, really should be coming from the entire browser engine team not jus the JS side. Yeah. those were my two my two topics +SYG: Okay. right, cool. cool. Yeah, I won't rehash the other points. that I think there are strong counter arguments to your counter arguments as well. I think the point is that there is utility here, but it is not a slam-dunk in my opinion that realm is not like a slam dunk net good for all JS ecosystems. I would like to urge the other browsers to really think through on the complexity point, which arguably you know should be last in the priority of constituencies, that if this really has value add to web developers to large-scale Partners to the health of the web, you know, we should not consider how hard it is for us to be that high on the priority list that said said most of the work for implementing implementing Realms once it reaches stage 3 is not going to be in a JS engine. I think most js engine teams have no qualms about this because it is pretty trivial the things that are already exposed. like you showed on iPhones on IOS and Android. I mean those are exposed because I imagine like like there is no predefined runtime JS runtime except raw access to the JS engine that those are exposed. I wouldn't take that as evidence that people are reaching for that functionality as they need a starting point to run JS code and you do that by by me trying to create some kind of new context with the engine binding that you have available. available the web and node and existing JS run times that are built on top of the JS engines are just completely different. well So most of the work is going to be on the for Chrome on the blink side for JSC on the webkit side and for Firefox in the gecko side and part of why I cannot really give stage 3 or no stage 3 right now is I need to talk more with the web side to get their take on it if I say let's go ahead and do stage 3 V8 has not much work, but I cannot speak for the rest of the team and I would urge the other browsers to do so as well. Yes. this this is kind of of unique this proposal in that a stage three here, really should be coming from the entire browser engine team not jus the JS side. Yeah. those were my two my two topics -LEO: For the records. I'm not in in disagreement with shu. I agree their topics from the feedback there like interesting. And yes, there are some of them are challenges, although I mentioned some of them, I believe they are just subjective assumptions. and I appreciate your feedback. Thank you. +LEO: For the records. I'm not in in disagreement with shu. I agree their topics from the feedback there like interesting. And yes, there are some of them are challenges, although I mentioned some of them, I believe they are just subjective assumptions. and I appreciate your feedback. Thank you. -Yulia: So I can speak a little bit to what SYG just described also about our investigations within Mozilla on this topic. We've been watching the tag review and we've been following Domenic's comments. 
We have a certain amount of hesitation about this proposal namely whether or not it should be exposed wholesale Because there might be some concern from a web architecture perspective. I'm not going to bring up the same points that Domenic has made but there have been a couple that have been directly echoed -- not by our HTML programmers -- But by people writing JavaScript in the browser. There were certain points like what Domenic raised about security being, because this confusion around the proposal of security has been around for so long, there has been some misconception around this and it was raised as a potential danger. At the same time there are people who are saying something like this would be useful, specifically if it had access to DOM apis, especially in the tooling space. But without access to DOM apis and only having access to JS apis, it wouldn't have the same effect. But again, these are things that are sort of in a very specific realm of JS development tooling and web extensions. We're not talking about regular websites and the feedback that we had from regular website developed type development within the company didn't have direct feedback on this beyond the “oh, I would use this in the wrong way” piece of feedback that sort of lines up with Domenic. We don't have a position where we would say that we would block stage 3 or necessarily support stage 3. We're watching this carefully and we will be following up on this discussion before January for sure. +Yulia: So I can speak a little bit to what SYG just described also about our investigations within Mozilla on this topic. We've been watching the tag review and we've been following Domenic's comments. We have a certain amount of hesitation about this proposal namely whether or not it should be exposed wholesale Because there might be some concern from a web architecture perspective. I'm not going to bring up the same points that Domenic has made but there have been a couple that have been directly echoed -- not by our HTML programmers -- But by people writing JavaScript in the browser. There were certain points like what Domenic raised about security being, because this confusion around the proposal of security has been around for so long, there has been some misconception around this and it was raised as a potential danger. At the same time there are people who are saying something like this would be useful, specifically if it had access to DOM apis, especially in the tooling space. But without access to DOM apis and only having access to JS apis, it wouldn't have the same effect. But again, these are things that are sort of in a very specific realm of JS development tooling and web extensions. We're not talking about regular websites and the feedback that we had from regular website developed type development within the company didn't have direct feedback on this beyond the “oh, I would use this in the wrong way” piece of feedback that sort of lines up with Domenic. We don't have a position where we would say that we would block stage 3 or necessarily support stage 3. We're watching this carefully and we will be following up on this discussion before January for sure. -LEO: I would would be very happy to sync with. anyone interested about about this. as time is less than short, can we just have someone else to apply as a reviewer to them? if any anyone is interested, I would be very happy to sink. 
So if anyone wants to reveal days, I would be more than happy to go through the proposal and shoulder also the HTML integration. integration. +LEO: I would would be very happy to sync with. anyone interested about about this. as time is less than short, can we just have someone else to apply as a reviewer to them? if any anyone is interested, I would be very happy to sink. So if anyone wants to reveal days, I would be more than happy to go through the proposal and shoulder also the HTML integration. integration. RGN: I'm willing to be official on this too if that wasn't already captured. -LEO: Yeah, so we have Richard Gibson and we do have anyone else. Otherwise, I'm going to try to find people offline. I believe Rick Waldron also started a review on this. If there any other next steps please? I urge everyone to let me know anything else that I can do from my side. +LEO: Yeah, so we have Richard Gibson and we do have anyone else. Otherwise, I'm going to try to find people offline. I believe Rick Waldron also started a review on this. If there any other next steps please? I urge everyone to let me know anything else that I can do from my side. MBS: Okay. Thank you. you. I think that that is it for today's agenda. Thank you, everyone for joining us and we'll be back tomorrow. diff --git a/meetings/2020-11/nov-18.md b/meetings/2020-11/nov-18.md index 261196b9..a4d98fdb 100644 --- a/meetings/2020-11/nov-18.md +++ b/meetings/2020-11/nov-18.md @@ -1,7 +1,8 @@ # 18 November, 2020 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -29,30 +30,30 @@ | Mark E. Davis | MED | Google | | Daniel Ehrenberg | DE | Igalia | - ## JSON modules for Stage 3 + Presenter: Daniel Ehrenberg (DE), Dan Clark (DDC) - [proposal](https://github.com/tc39/proposal-json-modules/) - [slides](https://docs.google.com/presentation/d/1veenbLkI0QWjAMkvMpVIDlCY5O2D6LqsVN-Yc_mTOfA/edit?usp=sharing) -DDC: Okay, JSON modules for stage 3. JSON modules is just a recap. This was split off originally from importance exertion. So this is not the end of the import assertions mechanics. This is this is just the bit about saying what should happen What should host to do when the assertions list includes the type JSON assertion if that is the that assertion is present the host must either reject the Imports or they must they must The module must be treated as a JSON module which is to say that the module content must be parse the JSON and the resulting object is like the modules default export. Is that is that resulting JSON exports and then there are no other exports type assertions are not required in all hosts. So like for hosts like the web where there's this mime type security concern that like those hosts will want to require this. This JSON on this slide syntax in the slides out of date, but those hosts will want to have this type equals JSON assertion, but other hosts that don't have those same conservative security concerns is the web can just about can just use this to search this syntax without the without the assertion being present. They can still interpret this as a JSON module. Yeah, so it's optional. I think the big question here from last time that's come up is whether these should be mutable or not. It is a position of the champion group that the JSON object should be mutable. 
That's it's more natural to developers who are used to immutability from like JSON modules where mutability is the default and there's just this issue where if you need mutability if you need immutability, but you need your JSON. Intent not to be changed. You can kind of have some workarounds here to get this by reordering your Imports and assuring that shooting you're able to lock down the JSON before other other modules can import it. But if you need mutability and the JSON is immutable by default, then you're kind of stuck. The only really you can you can then make changes to that imported object is to do like a deep copy I was just at the form of the JSON object expect it to be a just the default export is the entire JSON object. There are no named exports and these are exported. JSON is just made up of objects and arrays like what you'd get from parseJSON, not records or tuples like you get from a parseImmutable. We've previously got external positions on this tag review signed off and then the Zilla position that's worth prototyping these positions were obtained [?] like was paired with the import assertion stopped by some the JSON module as part of The Proposal hasn't really changed since since we got these approvals. Tom there's a HTML integration PR for this up again, most of the complexity here is with the import assertions stuff that JSON modules that it hasn't really changed since the last last time we discussed the proposal. and yeah, and then the spec has been split off from the import assertion spec. It's pretty much just the couple bullet points that say this is what much happened when the type. JSON assertion is present and then that plugs into the web IDL synthetic module spec and that's that's pretty much all there is to it. I think we plan to ask for stage three for this meeting. We should probably open it up to the Queue first that's Which offer the initial presentation not a bunch to this. I think the biggest question here is going to be this if I can find the slide this mutable versus immutable discussion. +DDC: Okay, JSON modules for stage 3. JSON modules is just a recap. This was split off originally from importance exertion. So this is not the end of the import assertions mechanics. This is this is just the bit about saying what should happen What should host to do when the assertions list includes the type JSON assertion if that is the that assertion is present the host must either reject the Imports or they must they must The module must be treated as a JSON module which is to say that the module content must be parse the JSON and the resulting object is like the modules default export. Is that is that resulting JSON exports and then there are no other exports type assertions are not required in all hosts. So like for hosts like the web where there's this mime type security concern that like those hosts will want to require this. This JSON on this slide syntax in the slides out of date, but those hosts will want to have this type equals JSON assertion, but other hosts that don't have those same conservative security concerns is the web can just about can just use this to search this syntax without the without the assertion being present. They can still interpret this as a JSON module. Yeah, so it's optional. I think the big question here from last time that's come up is whether these should be mutable or not. It is a position of the champion group that the JSON object should be mutable. 
That's more natural to developers, who are used to mutability from JSON modules in CommonJS, where mutability is the default. And there's just this issue where, if you need immutability - if you need your JSON content not to be changed - you can kind of have some workarounds here, by reordering your imports and ensuring that you're able to lock down the JSON before other modules can import it. But if you need mutability and the JSON is immutable by default, then you're kind of stuck; the only way you can then make changes to that imported object is to do something like a deep copy. As for the form of the JSON object: expect the default export to be the entire JSON object. There are no named exports. The exported JSON is just made up of objects and arrays, like what you'd get from `JSON.parse`, not records or tuples like you'd get from a parseImmutable. We've previously got external positions on this: TAG review signed off, and the Mozilla position is "worth prototyping". These positions were obtained [?] when this was paired with the import assertions proposal, but the JSON modules part of the proposal hasn't really changed since we got these approvals. There's an HTML integration PR for this up; again, most of the complexity there is with the import assertions stuff, and the JSON modules part hasn't really changed since the last time we discussed the proposal. And then the spec has been split off from the import assertions spec. It's pretty much just the couple of bullet points that say this is what must happen when the `type: "json"` assertion is present, and then that plugs into the Web IDL synthetic module spec, and that's pretty much all there is to it. I think we plan to ask for stage 3 at this meeting. We should probably open it up to the queue first; beyond the initial presentation there's not a bunch to add to this. I think the biggest question here is going to be - if I can find the slide - this mutable versus immutable discussion.

MM: So this is just something that I became curious about, in no sense an objection, which is: why did we - you know, in terms of non-JavaScript resources, it seems like the most natural place to have started would be to read in the contents of a file either as binary, turning it into a Uint8Array, or as UTF-8, turning it into a string, and then, you know, `JSON.parse` of a string could have given us JSON resources indirectly. Not that that would be preferable, but it's certainly a more universal place to start. So, why did we not start with binary and text?

DE: I think that would be an interesting proposal and I'd encourage people to champion it. I mean, we see that pattern in tooling today. It's not really clear which mime type we should be expecting for that; there are several different mime types that we could use for binary. I think JSON is used very widely in JavaScript programs, so it's a natural thing to include, and the relative priority of those two things is just kind of subjective.

MM: Okay.

CM: Yeah, just, yeah, issues about mutability. I'm a little concerned about the “well, just control the order of imports” as being the explanation for why mutability might be okay, since this presumes that you're in complete control of the entire ordering of the import graph, which seems like an implausible state in the general case, and I'm wondering, do you know something I don't about how one controls what order imports happen in?
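A minimal sketch of the shape under discussion (illustrative only, not presented at the meeting; the specifier is hypothetical, and the `assert { type: "json" }` syntax comes from the import assertions proposal):

```js
// Hypothetical specifier; the default export is the parsed JSON value
// (plain objects and arrays, as from JSON.parse), with no named exports.
import config from "./config.json" assert { type: "json" };

config.retries = 3;                    // allowed: an ordinary mutable object
console.log(Object.isFrozen(config)); // false, under the champions' position

// Hosts without the web's MIME-type concern could accept the bare form too:
//   import config from "./config.json";
```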
DE: I can answer that question. Also, I think I can see how this is a concern, and I'm not really sure if it's practical to control this ordering. What you can do is: any caller can freeze it, because every importer gets the same object identity. This is one of those cases where we have conflicting arguments on both sides - if we want to move forward with this proposal we kind of have to agree to disagree and make a compromise.

CM: Because I mean, I think that obviously there's no way you can have a compromise position without making the proposal more complicated somehow, and I understand where that might be a non-starter. But you know, if I import the JSON module and then immediately freeze it, I still have no way of knowing whether somebody else has imported it and modified it when I wasn't looking - you know, somehow somebody else got to it prior to my getting to it.

DE: I want to go back and disagree with you where you say that, you know, allowing both would be too complicated. I think that would be a totally valid thing to do in a follow-on proposal, in particular when we have the evaluator attributes that were mentioned in the import assertions proposal. We're really interested in hearing about more different use cases, and one that MF raised previously was that you could have these module attributes that change how a JSON module is interpreted. So one way of changing how it's interpreted would be to parse it as records and tuples, or as frozen objects and arrays; I think that would be a valid thing to do. Like, to continue to free-associate: how do you know that no one else imported it but then modified it? But also, how do you know that - like, in Node, nobody wrote to the file system before you did your module read, or, on the web, nobody installed a service worker to intercept the network? Ultimately, you kind of have to be in control of things in order to have guarantees about what's going on. There's a lot of things going on.
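A sketch of the identity point above, with hypothetical module names; the behavior follows from ES modules being evaluated once and shared per specifier, not from anything specific to this proposal:

```js
// a.js (hypothetical): the JSON module is a singleton, so a freeze (or any
// mutation) by one importer is visible to every other importer of "./settings.json".
import data from "./settings.json" assert { type: "json" };

function deepFreeze(value) {
  if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
    Object.freeze(value);
    for (const key of Object.keys(value)) deepFreeze(value[key]);
  }
  return value;
}

deepFreeze(data); // any caller can lock it down, because all callers share one object

// b.js (hypothetical), if it evaluates after a.js:
//   import data from "./settings.json" assert { type: "json" };
//   data.retries = 3; // TypeError: module code is strict, and the shared object is frozen
```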
-CM: I mean, it's certainly true that there are lots of places for things to go off the rails and you have to be in control of all of them, but if there were some way to direct the import to give you an immutable form and I think. I think in conjunction with the records and tuples proposal might be a good place to stand to do that or simply some kind of annotation or some kind of attribute, which said “give this to me in a mutable form, please”. I just think that if you want it mutable it should be *you* doing it. You want to be sure the thing imported was actually the thing that you intended to import. But arranging to have a trusted path to the resource is a separate problem and it's something you need for any import. For something which is code, code is in control of itself -- it gets a chance to have a say on what it is that's being returned prior to anybody else getting a handle to it. Whereas something which is pure data has no agency and therefore any kind of qualities that you want to ensure have to be ensured extrinsically. Whether they're insured by the platform or by the consumer of the data can make a lot of difference in how much effort it is, how reliable and trustworthy the mechanism is, whether you push a lot of additional complexity on users. I just generally think that the need for immutability here is real and while I understand some people want to have a mutable form as well, I would be much happier if the need for immutability were given more weight than just "oh, well, that's your problem". +CM: I mean, it's certainly true that there are lots of places for things to go off the rails and you have to be in control of all of them, but if there were some way to direct the import to give you an immutable form and I think. I think in conjunction with the records and tuples proposal might be a good place to stand to do that or simply some kind of annotation or some kind of attribute, which said “give this to me in a mutable form, please”. I just think that if you want it mutable it should be *you* doing it. You want to be sure the thing imported was actually the thing that you intended to import. But arranging to have a trusted path to the resource is a separate problem and it's something you need for any import. For something which is code, code is in control of itself -- it gets a chance to have a say on what it is that's being returned prior to anybody else getting a handle to it. Whereas something which is pure data has no agency and therefore any kind of qualities that you want to ensure have to be ensured extrinsically. Whether they're insured by the platform or by the consumer of the data can make a lot of difference in how much effort it is, how reliable and trustworthy the mechanism is, whether you push a lot of additional complexity on users. I just generally think that the need for immutability here is real and while I understand some people want to have a mutable form as well, I would be much happier if the need for immutability were given more weight than just "oh, well, that's your problem". JHD: In general, this kind of seems like the same issue that exists with any shared object which includes anything that's exported from a JavaScript module. If I export an object like export default an object from a module and I import it. How do I know that another module hasn't a mutated that object before I got to it if I freeze it, how do I know that something that Imports it later doesn't need to modify it right? 
Like the this sort of interaction always happens with a thing that is shared and mutable and @@ -74,13 +75,13 @@ CM: You’re looking for something more useful than "don't argue do it my way". DE: I guess we have to - I mean we could make a decision one way or the other which is kind of like that or we could not do this proposal. We've to decide among those three options. -YSV: Okay, so I just want to note I stepped out for maybe two minutes and I missed the exact topic that's being discussed right now. So to verify we're talking about the decision around whether or not the Singleton object returned from this module is mutable or Frozen, right? +YSV: Okay, so I just want to note I stepped out for maybe two minutes and I missed the exact topic that's being discussed right now. So to verify we're talking about the decision around whether or not the Singleton object returned from this module is mutable or Frozen, right? CM: Indeed. YSV: Okay, so from our side when we reviewed the proposal our feeling was that this should be immutable. And there should be potentially a second proposal like JSON dot module which gives you the behavior that you would expect from JSON Doug parse gives you a copy of the of the JSON module that you can then mutate, but it should be a separate proposal that was appealing. -DE: Could you elaborate on what you meant by this second proposal previously? I mentioned the possibility of using multiple attributes. I have no idea what JSON.module would be. +DE: Could you elaborate on what you meant by this second proposal previously? I mentioned the possibility of using multiple attributes. I have no idea what JSON.module would be. YSV: This would be a completely separate proposal that would leverage the import capability that we currently have, but it would instead give you back something that is a copy and mutable so it breaks from what our module systems do right now, which is to give you back a singleton. And in this case, it would be immutable so we would in this JSON.module version of it, which would take a string like a URL specifier. It would use that same infrastructure, but from that immutable JSON object from that object, that would be mutable coming out of this JSON file. We would create a copy that then is mutable. That's what we were thinking might be an interesting direction. @@ -98,9 +99,9 @@ KG: Great, so Dan was proposing a specific question that we are asking for, like DE: so if it's phrased about positive or negative feelings, the positive feeling would be towards the aspect of this proposal that we're making that the JSON modules be mutable. And so if you vote negative then you think the JSON module should be immutable by default in either option that we choose is use open various different forms of follow-on proposals for making the modules. You know for making the opposite Choice as YSV mentioned in a different form. All right, so we'll leave that open while RPR. I think you're up next. -RPR: so only because we're really being asked to chip in here. From my point of view. I think immutability provides better overall better behavior because, certainly for one of our use cases, it would enable safe sharing across multiple users. However, I don't see that as the high-order bit. The most important thing is that we get something that is acceptable to the host environments that are going to use this. 
So that means something that is acceptable to node and something that is acceptable to browsers and I feel like our steer on this in the past has been guided by input that we've had from those constituencies. So this is a mild push towards immutability, but, you know, could go either way. +RPR: so only because we're really being asked to chip in here. From my point of view. I think immutability provides better overall better behavior because, certainly for one of our use cases, it would enable safe sharing across multiple users. However, I don't see that as the high-order bit. The most important thing is that we get something that is acceptable to the host environments that are going to use this. So that means something that is acceptable to node and something that is acceptable to browsers and I feel like our steer on this in the past has been guided by input that we've had from those constituencies. So this is a mild push towards immutability, but, you know, could go either way. -DE: so great the feedback we've gotten from web browsers so far until right now with most of us feedback has been towards mutable JSON modules. And I can't really speak for node I can't speak to the landscape of opinions there. +DE: so great the feedback we've gotten from web browsers so far until right now with most of us feedback has been towards mutable JSON modules. And I can't really speak for node I can't speak to the landscape of opinions there. RPR: So I worry a little bit that if we just express, you know know the tc39 use here - sure we may wind up, you know preferring immutability, but that could then cause contention down the line. So, that's all. Thank you. @@ -116,15 +117,18 @@ RGN: Okay, so thinking about the motivations for the proposal and it being a bri DRO: So maybe I did miss this before but there was a question earlier about you know, what about other proposals like binary data and whatnot. So my personal preference if you just gave me this feature would be I bought this to be immutable right, But I'm vaguely empathetic like if you give it to either way most my use cases wouldn't make a difference but if you know long-term use the other data formats coming in as coming in as you know, arrays or something like that and those are immutable well it might be strange to have have that difference. So I don't know if there's a specific thing in mind there. But from what I would think that might push the direction towards mutable all the time. -DE: Yeah, I want to agree with you there. I think it's clear that there are lots of module types that we want to add in the future where there's no reasonable way to make them immutable. For example, CSS modules could also be thought of as a data structure which you could make immutable but CSS modules will definitely be normal mutable CSS things. so this could not establish any kind of precedent that everything is immutable by default. +DE: Yeah, I want to agree with you there. I think it's clear that there are lots of module types that we want to add in the future where there's no reasonable way to make them immutable. For example, CSS modules could also be thought of as a data structure which you could make immutable but CSS modules will definitely be normal mutable CSS things. so this could not establish any kind of precedent that everything is immutable by default. YSV: So I just want to clarify what I said earlier: we're not pushing for this to be immutable. 
It might be better if we have it as being immutable because chances are that due to the expectations that users have had from `JSON.parse` - not from `require(JSON)` as Jordan brings up - They may be expecting to work with the copy and this could lead to hard to catch bugs. So for that reason, it may be a good idea to make it immutable. But if we do that then we may want to have a second API that does create the copy. That's more our position rather than it must be immutable. There's a slight preference for immutability and I think Daniel who just spoke made a really good point that we might want to think about future data types that come in whether or not those will be mutable or not and how the decision we make here will impact that later. MM: Yeah, so the UInt8Array for binary I think does not argue that binary should therefore be shared mutability. It more argues that we should have an immutable way to represent binary data. We have an immutable way to represent text data which is strings and people have even used strings for binary data. The bigger point is that moddable has repeatedly raised the desire to have some form of raw representation of various Collections and likewise the moddable way of treating treating pre-compiled modules really naturally want pre-compiled resources like JSON resources to be something they just put in ROM without having to do the extra bookkeeping to shadow that with mutability. So I think that once we extend beyond JSON, I think we're going to continue to have desires for immutability being the default and to adjust the data types that we provide to deal with that. + ### Conclusion/Resolution -* We will revisit this topic later + +- We will revisit this topic later ## Tour of Temporal + Presenter: Ujjwal Sharma (USA) - [proposal](https://github.com/tc39/proposal-temporal) @@ -144,7 +148,7 @@ USA: Then we have PlainMonthDay, which is the opposite of year month really so I USA: Next up we have Instant. Now Instant represents a point on the universal timeline. It replaces sort of the frequent use of UTC in objects like the legacy date object that we see there no time zone instead of having a [?] time zone. Logically it stands in the place of legacy date, which also lacks a time zone for confusingly has methods to operate in the current time zone that really works as you'd acted to in most cases and there's more confusing. So here you are. To access calendar or clock units like month or year a time zone is needed so you can have an Instant from zero epochs seconds, seconds, and then you can convert that to a zoned time in a particular time zone. -USA: Next up you have durations. These are signed ISO 8601 durations. So they have a direction and you can create a duration from just using the from method in durations and you can say for example called the total method and get the exact number of minutes in that duration. +USA: Next up you have durations. These are signed ISO 8601 durations. So they have a direction and you can create a duration from just using the from method in durations and you can say for example called the total method and get the exact number of minutes in that duration. USA: next up we have calendars. As I mentioned we have first class support for calendars and the main sort of calendar we mean is ISO 8601, but and you know you it might sound weird, but it's usually called Gregorian, but it's actually, fun fact, it's not Gregorian. 
It's different slightly from Gregorian, but we have decided to add support for different calendars that are not just this calendar. We are planning to add support for all International calendars provided through ECMA 402 and also you have have the ability to define custom calendars and calendars are special, you can you can have fields in certain calendar and then you can call date from fields on that calendar and get a Date object or you or you can see months in year, days in a year, these operations in the calendar specific context. Now, that would be more useful than just in the ISO calendar. @@ -162,7 +166,7 @@ USA: So as I said talking about the operations, we have a whole lot of interesti USA: Next up we have with. So say for example, you have the current date which is someday and what if you want the first day of the month, well, you cannot mutate and projects write I as I said, they're immutable so you can't change the day variable to 1. What you can do is call the "with" method and this would return your date with these changes made. So it's actually a new date with the changes made and that's what I mentioned when I said it there are pretty functional and it's really fun to make all these changes without actually having to mutate the original object. Next you have to other type, if it's not exactly to other tight, but you can call "to" and then the name of the type you want to convert it to, to convert objects from one type to the other. This is really useful for people who have applications that play around with different temporal types because we have so many of them so if for example you have plain date you can convert that to date time and if you have a point in time, you can also convert that to find a time using to [?] datetime and it accepts strings options bags really the whole deal. Okay, we have basic math. You have add and subtract which are really simple methods that allow you to do addition and subtraction. These accept a duration like you can pass in an options bag that has a number of minutes and number of nanoseconds, maybe a number of days, weeks, whatever you prefer. And you can even use a duration instance which you've carried over from from a previous operation. You can include a ISO 8601 string that can be deserialized into a duration and you can also specify the overflow mode to actually make sure that the arithmetic is as precise as you'd like it to be. -USA: Then we have difference methods. So that includes since and until. If you have two objects of the same type, you can find the difference between those two depending on which direction you want to go from later to earlier or or earlier to later you use these two methods to specify the directionality and it returns a duration object as I mentioned in the last slide. You can then use this direction object to either display it or to use it in further calculation. +USA: Then we have difference methods. So that includes since and until. If you have two objects of the same type, you can find the difference between those two depending on which direction you want to go from later to earlier or or earlier to later you use these two methods to specify the directionality and it returns a duration object as I mentioned in the last slide. You can then use this direction object to either display it or to use it in further calculation. USA: Next up we have compared. 
So of course when we play around with date so much you want to compare them at some point right if you have a long list of dates and times and you want to sort them then you'll also need a compare method. So we have a static compare method on each type that allows you to compare two objects of that type and because it returns -1, 0, or 1 it is compatible with Array.sort. and we have a convenience method called equals, which is sort of a subset of compared but instead of calling compare and checking if the result is 0 you can just use equals that returns a Boolean which is sort of a more common more specific use case of comparing. We also allow rounding so you can for example round the time to the nearest hour, you can run round they time as you see here to the nearest 30 minutes, you can round the day to the nearest ten days. Whatever you want really. Rounding is a really important use case as we identified. While I was working on the duration format proposal it became evident that Rounding is something that people really want especially when displaying stuff because you know, you sometimes really don't need all that precision. I don't want to know in how many Nanoseconds my cab is going to arrive. but you know really if any think about it. It has applications beyond the realm of just formatting. Rounding is really useful for all sorts of arithmetic and it was more relevant for all types. So we added this to the main proposal instead. @@ -180,7 +184,7 @@ DE: I feel like this would be better framed as a question about what the plan is RGN: Stage 4, and it's not necessarily ISO... an IETF RFC would be sufficient in my mind. But again, other people might be comfortable with shipping something even before that and this seemed to me like the kind of thing that the committee should weigh in on. -AKI: To clarify, your question that you're asking people's feelings on right now is, how do you feel about shipping a feature that does not yet have a published standard alongside it. Is that accurate? +AKI: To clarify, your question that you're asking people's feelings on right now is, how do you feel about shipping a feature that does not yet have a published standard alongside it. Is that accurate? RGN: That is accurate. @@ -210,7 +214,7 @@ SFC: So the way that we deal with the problem is that each type's difference fun https://github.com/tc39/proposal-temporal/blob/main/docs/instant.md#instantuntilother-temporalinstant--string-options-object--temporalduration -AKI: Does that fully answer the question you have WH? +AKI: Does that fully answer the question you have WH? WH: Yes, it's kind of a weird space to do things in. There are also leap seconds, which I assume you're still ignoring. @@ -235,6 +239,7 @@ USA: Yay, I guess that's all and thank you everyone for your time and I hope to AKI: Excellent. Thank you very much. Love a good temporal update. ## Intl Enumeration API update + Presenter: Frank Yung-Fong Tang (FYT) - [proposal](https://github.com/tc39/proposal-intl-enumeration) @@ -242,19 +247,19 @@ Presenter: Frank Yung-Fong Tang (FYT) FYT: Cool, so, my name is Frank Yung-Fong Tang from Google. I will just give you an update about this API. It's not for stage advancement of this intl enumeration API. This is a proposal we're working on. I think next time I will probably try to be published more and bring to you again. But this time just try to give you an update of this API originally put on the stand. And agenda, but somehow just not be able to work on that prevents. 
So there's not much new to talk about there.

FYT: So what is the Intl enumeration API? ECMA-402 supports lots of APIs, and for lots of them there are option bags as arguments where the supported option values are not very clear to the caller - [technical issues with the slides]

FYT: So the charter of this API is an effort to make sure we are able to allow the developer to programmatically figure out the supported values for some options in the pre-existing APIs. The motivation originally came from, I think, the Temporal proposal, which needs to identify which time zones are supported. So I put together a Stage 0 proposal; Stage 1 passed in June 2020, and last time, in September, it advanced to stage 2. The scope of this API currently tries to cover not all the values in ECMA-402 but only some of them. For example, locale is not covered - locale or region is a very long list, and it is also hard to define whether one is "supported", because there are multiple places where it could be supported. We are focusing more on whether a calendar is supported for date-time format, whether a currency is supported for number format, or a time zone, or a unit.

After the stage 2 advancement we changed some of the design. Originally the spec used several different functions - one function for supported calendars, another for units, another for currencies - each returning an array. But later we talked about that and folded it into only one method, called supportedValuesOf, which is passed a key that decides whether we are asking about calendar, collation, number system, or time zone.

FYT: So basically that particular function returns an object that supports the iterator symbol, which you can call to iterate; we changed it to be an iterator. `Intl.supportedValuesOf`, passing one of those keys, gives you something you can iterate through, because, for example, timeZone can have a lot of time zone IDs. So this is the new design of the API that we talked about. Some of those keys may also take options: for example, for timeZone, if we don't pass any option it returns all the time zones, but if we pass a region it will only return whatever time zones are used in that region. For the collation key we are also considering adding options; we haven't finalized these thoughts yet, but there may be some way to narrow down what is supported for a particular collation. And I think that's about it. I think one of the issues raised at the stage 2 advancement is that [transcription error] we still need people who understand the privacy concerns to help us to see whether our design currently has a major issue. But I think the fingerprinting potential, I believe, is pretty limited, because it really depends on which version of the software is shipped, so the same versions of a user agent will probably all have a very similar or standard list - until, for example, they get a newer version of the time zone data, say if some government decides to split a time zone and add an additional time zone ID; until that happens they will probably all return very similar values. Of course, there is a chance that some user agents may install different data sets for different languages, and that may become a fingerprinting issue because different users may get different packages, but currently we don't see that happening at this moment, though there are people thinking about maybe going in that direction. Also, if you are interested in this, and you are in particular someone who knows about fingerprinting and privacy concerns, it would be really helpful if you could chip in to take a look at this and give us some feedback. Yeah, that's it. Those are my updates, any questions?

DRO: Yeah, along with what you just mentioned about the potential fingerprinting concerns: I think, as a general suggestion, whenever you have an API that enumerates things you're opening yourself up to those concerns, and as a possibility, perhaps instead of having an API that is equivalent to “give me all of the supported values that match this” you could have an API that says “is this thing supported”, and sort of flip it on its head. I don't really see common situations where someone would need to know the full list of time zones, and in those situations I feel like there are other ways of solving those problems that don't involve exposing the full enumerated list through an API like this, and therefore the potential fingerprinting concern. So I would imagine that possibly a querying API would be a better way forward.

FYT: Okay. One of the issues with, for example, just adding the querying form is: what if people want to build a list of which time zones are supported in, let's say, India - well, not India, for example some bigger area, let's say Russia? That is information we could try to provide.

DRO: Just a suggestion: if you're talking about making a picker or a list view for that area, generally speaking, my recommendation would be to create a new input type (e.g. ``) so that the control over what is shown is given to the browser/user rather than relying on the good faith of the page or app that is using this enumeration. That kind of design has been done for other things that have been similar to this in the past. One example that is similar to this would be like trying to show a list of supported fonts.
Rather than having a enumeration API for all of the fonts that exists on the system you have a way to have the browser give me a font and let the user pick, giving them the control. JHD: I'm in the queue to reply to that. DRO any of these things that need to be available in the browser also need to be available on the server where HTML generated user input is evaluated - so any DOM element is just not a solution to those use cases. I realize that with a SPA that's fully client-rendered that would be sufficient, but many apps don't opt for that for various reasons. @@ -264,35 +269,35 @@ SFC: Yeah, so about the adding new HTML input types. This is an idea that's been DRO: Yeah, for what it's worth I wasn't saying that that was the solution. It was a possible solution. It's just that there is a possible concern here with fingerprinting that we would like to avoid. -FYT: well, actually, I even that one actually could be fingerprinted because the reason I just saw the but relate to that the file picker extension the width of that, depends on the Locale . So actually people are using that to figure out what the width of that weíll take ourselves to decide which language you are using so it actually could go either way. That itself becomes a fingerprinting problem. So just an interesting thing that we bring up. +FYT: well, actually, I even that one actually could be fingerprinted because the reason I just saw the but relate to that the file picker extension the width of that, depends on the Locale . So actually people are using that to figure out what the width of that weíll take ourselves to decide which language you are using so it actually could go either way. That itself becomes a fingerprinting problem. So just an interesting thing that we bring up. -SFC: I'm glad we're having this discussion and I would like to reply about the fingerprinting concern here. So suppose that you were to have a large comprehensive list of, say, 250 timezone IDs, and you pass those 250 timezone IDs to the inverted function. You get a true or false value for all 250 of them. Is that not also a fingerprint itself? Why is that approach less of a fingerprinting vector than having an iterator method such as this one? +SFC: I'm glad we're having this discussion and I would like to reply about the fingerprinting concern here. So suppose that you were to have a large comprehensive list of, say, 250 timezone IDs, and you pass those 250 timezone IDs to the inverted function. You get a true or false value for all 250 of them. Is that not also a fingerprint itself? Why is that approach less of a fingerprinting vector than having an iterator method such as this one? -DRO: I don't think that it's any less of a fingerprinting vector, but I think the difference is that it gives more control to the host to mitigate those fingerprinting concerns if it decides to. I think that signals intent a lot more clearly. There's not much you can reason about when it comes to an enumeration API. If you're given an entire list, it's really hard to derive intention from that. I'm not saying it's perfect to derive intention from repeated query and calls either but at least it enables what I believe is likely the more common use case of “get this enumeration API and find out if this item exists in this list”. I don't really think that either one of them is the same result of what you're describing as it could be done with both, but there are more ways to prevent the bad actors with a querying API then there are with enumeration APIs. 
+DRO: I don't think that it's any less of a fingerprinting vector, but I think the difference is that it gives more control to the host to mitigate those fingerprinting concerns if it decides to. I think that signals intent a lot more clearly. There's not much you can reason about when it comes to an enumeration API. If you're given an entire list, it's really hard to derive intention from that. I'm not saying it's perfect to derive intention from repeated query and calls either but at least it enables what I believe is likely the more common use case of “get this enumeration API and find out if this item exists in this list”. I don't really think that either one of them is the same result of what you're describing as it could be done with both, but there are more ways to prevent the bad actors with a querying API then there are with enumeration APIs. SFC: So the other direction is supported in limited cases; for example, it is possible for units by trying to pass the unit into Intl.NumberFormat; you get an exception if it's not supported. I think that's largely the case for some of these types, but maybe not comprehensively, so that's definitely an interesting suggestion. I think the other direction is also useful; they're two related questions. I wouldn't say necessarily that having one means the other is no less important for its own use cases. I think that maybe we should consider both directions and then we can debate each one on its own merits. -FYT: Yeah, I think what a lot of things are is the last time we were here. I think one of the issues in general not not bigger for this API I think so apart before me we're talking about whether any feature that we have put in at 262-402 will be able to be detected right? So if we have a you know, whether the website they can easily figure out a particular feature is detected or not the supported by that and I think currently a lot of things people just using very high key way the packed it and of course the detection can become fingerprint inspector, too. here, we just try to make the detection easier, but then of course you can say that makes you think you're going to be easier. So if I think there's a contradiction of the direction right whether our old API that ships in including our standard should have a design in a way to be easily for the web developer to figure out whether that exists or not were supportive or not or we shouldn't even allow that to happen. Won't you make sure we make it harder to happen. I think there's a contradiction there. +FYT: Yeah, I think what a lot of things are is the last time we were here. I think one of the issues in general not not bigger for this API I think so apart before me we're talking about whether any feature that we have put in at 262-402 will be able to be detected right? So if we have a you know, whether the website they can easily figure out a particular feature is detected or not the supported by that and I think currently a lot of things people just using very high key way the packed it and of course the detection can become fingerprint inspector, too. here, we just try to make the detection easier, but then of course you can say that makes you think you're going to be easier. So if I think there's a contradiction of the direction right whether our old API that ships in including our standard should have a design in a way to be easily for the web developer to figure out whether that exists or not were supportive or not or we shouldn't even allow that to happen. 
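As a concrete illustration of the detection question FYT raises — this is a sketch of the ad-hoc probing developers do today, not part of the proposal; note that the result of such probing is itself a signal that can feed a fingerprint:

```js
// Probe for the enumeration API before relying on it.
const canEnumerate = typeof Intl.supportedValuesOf === 'function';

// The "other direction" SFC mentions: some support questions can already be
// answered by trying a value and catching the error.
function unitIsSupported(unit) {
  try {
    new Intl.NumberFormat('en', { style: 'unit', unit });
    return true;
  } catch {
    return false; // RangeError for unsupported units
  }
}
```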
Won't you make sure we make it harder to happen. I think there's a contradiction there. WH: I am concerned about fingerprinting, but I don't understand the position that enumeration helps fingerprinting in any significant way here because these sets are small. The set of time zones commonly used in browsers is likely to be well-known and you can just iterate through all of them and find out if they're supported or or not. So if the concern is fingerprinting then the better solution is to reduce the variety of different kinds of subsets you might have. -DRO: One quick reply to that is time zones are entirely man-made and political. It's totally possible for a new time zone to appear tomorrow or a million or a thousand or none of them. So it very well may be that today this is true, but this API that we're designing should be designed for the future and I don't think that it's a safe assumption to say that this truth that is true today will always be the case. That is the concern about fingerprinting. - +DRO: One quick reply to that is time zones are entirely man-made and political. It's totally possible for a new time zone to appear tomorrow or a million or a thousand or none of them. So it very well may be that today this is true, but this API that we're designing should be designed for the future and I don't think that it's a safe assumption to say that this truth that is true today will always be the case. That is the concern about fingerprinting. + WH: I don't see how that's relevant to what I just said. -DRO: I mean my point was more that this list might not be small in the future. +DRO: I mean my point was more that this list might not be small in the future. -FYT: But I think let's say you add 300 new time zones all the browser will ship with the other 300 time zones to fulfill the user right? So then why this becomes a fingerprinting issue, of course in the transition, you know some users upgrade faster than the order will not have that new 300 time zone. But any vendor who supports all the users in the world will have those 300 timezones ship in a reasonable period of time right? +FYT: But I think let's say you add 300 new time zones all the browser will ship with the other 300 time zones to fulfill the user right? So then why this becomes a fingerprinting issue, of course in the transition, you know some users upgrade faster than the order will not have that new 300 time zone. But any vendor who supports all the users in the world will have those 300 timezones ship in a reasonable period of time right? DRO: Well, I think you even gave the example earlier of it's potentially possible that the list would depend on who installs the package or if you add an additional package that contains these types of information. It is an assumption that the host will contain all time zones and that is not an assumption that we want, unless it's codified into the standard and again even then time zones are political so it's very possible that depending on where you are someone might not want to support or acknowledge the existence of a time zone. I know that sounds ridiculous, but other ridiculous things like this have happened. -AKI: I think in that sense in a lot of ways it’s no different than political borders. +AKI: I think in that sense in a lot of ways it’s no different than political borders. -DRO: Exactly. +DRO: Exactly. YSV: I don't think that we've had this come up from our side before and Zibi isn't in the call. 
So I'm going to be representing a hesitation that we have towards this API in general. We are not entirely convinced by the use cases additionally this sort of deviates from how intl has been working so far and that the APIs have been opaque and this is introduced. A kind of transparency to what's available on until rather than say. Oh I need this thing for for the specific users. What is the time zone for the specific user Instead This is an enumeration of everything. This has a pretty heavy cost on implementers, and we're not sure that the use case mandates this cost. So it's hesitation and I wanted to make sure that the committee was aware of that. -LEO: I have a reply regarding fingerprinting. We've had some discussions about fingerprinting in a ECMA-402 Meeting in February 2020, IIRC that happened in perso. We might have notes about it as ended up reaching some conclusion for fingerprinting not being a problem for many different aspects, etc. I would suggest Devin to be in the loop, and reach out to us (ECMA-402 / TG2) so we can bring what we have from those notes and our conclusions. +LEO: I have a reply regarding fingerprinting. We've had some discussions about fingerprinting in a ECMA-402 Meeting in February 2020, IIRC that happened in perso. We might have notes about it as ended up reaching some conclusion for fingerprinting not being a problem for many different aspects, etc. I would suggest Devin to be in the loop, and reach out to us (ECMA-402 / TG2) so we can bring what we have from those notes and our conclusions. SYG: YSV, quick clarifying question for you. What are the implementer costs? @@ -307,17 +312,20 @@ WH: That's correct. AKI: Thank you for that clarification, that helped me a lot. All right. Thanks everyone. We are at time Frank. Do you have any 15 seconds items you want to wrap up with? FYT: Nope, thanks + ### Conclusion/Resolution -* No change -* Request for those who wanted timezone listing in temporal to chime in with use cases on the enumeration repo. + +- No change +- Request for those who wanted timezone listing in temporal to chime in with use cases on the enumeration repo. ## JS Module Blocks + Presenter: Surma (SUR) - [proposal](https://github.com/surma/proposal-js-module-blocks) - [slides](https://drive.google.com/file/d/1RKEKPM2CQSAkhN_EyTJtbIVFGE49hKnz/view?usp=sharing) -SUR: I want to talk a bit about modular blocks. I'm going to talk about it with this in the color palette of the cake that I had recently. So before I get into syntax, which we probably already talked about. I wanted to quickly outline what the actual problems are that I was aiming to solve with this and the whole realm and talking about its responsiveness and whenever someone cares about responsiveness in the low latency sense not in the responsive design sense, JavaScript is kind of facing a challenge because I mean, I'm you all know this but JavaScript is event driven and single-threaded and still you to make sure that you keep your tasks nice and short to allow other tasks to get processed as quickly as possible and that you know, if your responsiveness low, the thing is that developers increasingly care about having low latency in their apps and not only on the web on the server side note as well and the language is not really supporting developers very well in this desire. And so to achieve responsiveness the advice is often to chunky your code or yield to the browser. It's at least when working on the web and which means your chunk your code into multiple smaller chunks. 
As with other tax can be processed in between and that is kind of a form of Cooperative scheduling. Well and apart from fact that often the advice is using settimeout which is especially bad on the web because we have timeout clamping on the web to it to a minimum of four milliseconds, even if you do it, right you are left to figure out what the right chunk size is and it means if you want to make any guarantees or even estimates about responsiveness the capabilities of the environment play a huge role and and any given chunk size can be too big or too wasteful depending on the environment and by that, I mean the device that the code is running on. So I guess it's to no one's surprise at this point what I'm talking about here is workers where you move the actual work to a different thread to keep your main thread, that does the processing of the events, keep that one free and respond quickly to incoming events and you know workers are now in node and have been on the web platform for a long long time, and I've been working on on this off main thread. problem in the JavaScript context for a couple of years now with that perspective on the web, but I also care about it another time and quite passionate about it and I won't go into any more detail about all the stuff that I think there is to talk about here. But if you're curious I'm going to shamelessly link to a talk that I gave in November 2013 as well as a blog post that I wrote on this topic. Now if workers are the solution, and I think developers are picking up on this, developers portals like CSS tricks or Smashing Magazine are starting to cover off main thread as a hot topic, but at the same time workers have often been called an unergonomic and that's a problem because it's perceived. They are perceived as a big hurdle to enter or even as a hurdle to adoption. And so I think we have a chance you could preempt an upcoming developer needs and I think the problem comes with that many people I think learned to think about threads in the world of C++ or C# or Java and expect something similar to this and now I think we know that this requires shared memory and shared memories and that we can't really retrofit to JavaScript as a whole but that doesn't mean that people won't try. And so one big difference to threads is that workers have to be in a different file and people often point out they don't like that. They don't; it seems minor but really people dislike having to break up code that belongs together just so they can use a worker and also in the age of bundler it can be surprisingly hard to stop a bundler from what bundling like to keep a file separate and as a result is a whole bunch of packages out there pretend to run functions in the worker trying to emulate that developer experience for more traditional approaches to threads and this is often reliant on strings or blobification as I call it and those patterns actually kind kinda work until don't so stringification turns a function into a string before sending it over and uses eval or the Function constructor or something similar to turn it back into code and know apart from the obvious performance hit of double parsing. There's actually many invisible trip wires that make code that is technically correct approach that looks correct stop working. 
You can't really close other variables as the verb is looted in is to close over or not the string, some global's are available in one thread, but not the other any form of eval like behavior is often not CSP compatible and tooling, you know has a hard time catching these errors Blob-ification relies on using blob URLs. It's pretty much still the same as blobificiation, but with proper URLs and probably a bit more CSP compatible. Although I'm a bit foggy on the details here either way neither if you're using using either of these techniques, they will especially frustrating the second pass come into the equation because the string if occasion passed an hour relative to the worker where the string got reparsed with blobification means relative and absolute path are completely meaningless. only full paths including the hostname will actually continue to work. And so this piece of code here would not work as a typical might expect it to. And so this entire problem space can actually extract it to think about Realms in general if you create a realm, how do you even think about code execution inside that realm without making it an ergonomic nightmare? Q finally this proposal and it's based on Dominicm and Nikolas and mine all the blocks proposal and there's also enough just an in line models proposal in here. So it's all a new iteration and in its simplest form. It is just a way to declare an inline module and you can then use to import to quote unquote instantiate it. Dynamic import is asynchronous, but that is actually a good thing because at any module block could potentially import other modules potentially from a URL. Or make use of top-level await. So it should be in the synchronous process by making module blocks work as modules. We can actually build on top of a lot of great spec work. It is already well defined well explored and well-understood by developers avoiding a lot of potential problems or questions because it is in the end just a module. With respect to syntax, we can't really introduce new keywords. I've been told and I've been taught taught about contextual keywords by saying there is no line Terminator allowed between module and the opening brace, but this is not set in stone. I'm happy to bike shed syntax with you all if you think it's a bad idea, but I think this actually looks quite idiomatic and nice. We did consider strings and template strings but in the end end they imply that you can close over values from outside the module scope and we did try to do this in the blocks proposal way back when and it just turns out at any time you try to close over values and still keep the code in a way that you can transfer between Realms. It turns into an absolute can of worms and so by using modules, an existing primitive, we naturally solve this by just disallowing closing over values right from the get-go with string application. As I said has this double parsing cost and with modules these molds were just participate in the module caching layer, which as long as in the same realm, but could even potentially be shared or cashed across realms by the engine, I'm not quite sure about the idea, but I think it definitely leaves a lot of room for optimization here because there is no closing over variables. 
Secondary benefits of this approach also include the parsing and compilation of these inline modules cut kick off earlier by the engine even before instantiation or transfer has occurred and by consequence errors of multiple kinds can actually be caught early rather than just an instantiation or at runtime. Now with the Realms proposal we can create multiple Realms within one file and with modular blocks. We then be able to also put the code for each of these Realms in the same file and shared across these Realms however, we like. And so I think that has a really nice bit of synergy here that these two things are kind of like the complement of each other to have state and code that is now nicely encapsulated. And workers in the end benefit you as well because as I said, they're kind of just a realm Well, so to build on top of this and address the ergonomics problem. I mentioned earlier my goal would be that in the end the worker Constructor would not only accept the path to a file but also a module blocks. I can instantiate the worker from within the same file and not only that but we could also make module block structured cloneable meaning that we can send modules using post message and instantiate it on the other side. And in this way we would finally give JavaScript a way to model tasks in a way that workers across realm boundaries that were excited - model task in a way that works across realm boundaries and allow people to build, you know, like a proper scheduler for example on top of this primitive. And to address the path problem I mentioned earlier, at least the path problem you get when you use string application or blobificatoin. The idea would be that the module block inherits the import.meta URL data from the module it is syntactically embedded in. Just in case you don't know import meta URL is the path of the module you are currently in, its like the portable version of document.transcript or __dirname or __file name in node. So this way right Ask would work intuitively for Developers So if you look at this code sample, this would now work as expected even Even if the scheduler package is loaded from a and the resulting worker would run a complete different origin. This would still continue to work as expected. And that's a really nice thing and lastly in terms of compatibility with APIs that have not been updated to consume or take a module block. The idea is to allow model blocks to be turned into object URLs. And this way module blocks will also work with APIs that have not been designed to just accept good old string URLs. objects lifetime is often a bit iffy, but it is usually tied it is tied to the creating realm which should be sufficient for the vast majority of use cases if you want to make use of this kind of technique. You might be thinking about bundlers about now that this might be nice to use with bundlers. For example allowing a file to contain multiple modules that can import each other, but Dan has kind of separate out a separate proposal that is built on the same ideas, but is complementary to address that specific use case in isolation and that target static Imports instead because we need to think about model identifies that point and I think he'll talk about this at some point. And then a couple of open question still and I'm happy to hear your thoughts either here or on the repository. 
But for example, like what exactly is the type of a module block it's probably an object but does have a prototype and if so what kind; I do not know happy to hear ideas here. There's another open question about caching because do we cache the module and if we do that do we cache it by its parse as the cache key or do we create really do create a new module on on each evaluation similar to an object literal. Now if we cache it by the parse node then assertion 1 and 2 would pass by searching 3 & 4 would throw while if it's like an object literal all the assertions will throw. Personally I think making it behave similarly to an object literal is the most intuitive for developers and I can see actually wanting to create the same module multiple times in a loop but might be interesting to hear some more opinions on this and what the implications are. Lastly there I have questions for the engine implementers you are hopefully here if this is actually a simple as I believe it to be what it does. Does this work passing a module block to a worker constructor. What about post messaging module blocks? Is this actually doable or am I breaking a million assumptions in everyone's code base? And with that I am happy to start a discussion and hopefully get some questions from y'all. +SUR: I want to talk a bit about modular blocks. I'm going to talk about it with this in the color palette of the cake that I had recently. So before I get into syntax, which we probably already talked about. I wanted to quickly outline what the actual problems are that I was aiming to solve with this and the whole realm and talking about its responsiveness and whenever someone cares about responsiveness in the low latency sense not in the responsive design sense, JavaScript is kind of facing a challenge because I mean, I'm you all know this but JavaScript is event driven and single-threaded and still you to make sure that you keep your tasks nice and short to allow other tasks to get processed as quickly as possible and that you know, if your responsiveness low, the thing is that developers increasingly care about having low latency in their apps and not only on the web on the server side note as well and the language is not really supporting developers very well in this desire. And so to achieve responsiveness the advice is often to chunky your code or yield to the browser. It's at least when working on the web and which means your chunk your code into multiple smaller chunks. As with other tax can be processed in between and that is kind of a form of Cooperative scheduling. Well and apart from fact that often the advice is using settimeout which is especially bad on the web because we have timeout clamping on the web to it to a minimum of four milliseconds, even if you do it, right you are left to figure out what the right chunk size is and it means if you want to make any guarantees or even estimates about responsiveness the capabilities of the environment play a huge role and and any given chunk size can be too big or too wasteful depending on the environment and by that, I mean the device that the code is running on. So I guess it's to no one's surprise at this point what I'm talking about here is workers where you move the actual work to a different thread to keep your main thread, that does the processing of the events, keep that one free and respond quickly to incoming events and you know workers are now in node and have been on the web platform for a long long time, and I've been working on on this off main thread. 
problem in the JavaScript context for a couple of years now with that perspective on the web, but I also care about it another time and quite passionate about it and I won't go into any more detail about all the stuff that I think there is to talk about here. But if you're curious I'm going to shamelessly link to a talk that I gave in November 2013 as well as a blog post that I wrote on this topic. Now if workers are the solution, and I think developers are picking up on this, developers portals like CSS tricks or Smashing Magazine are starting to cover off main thread as a hot topic, but at the same time workers have often been called an unergonomic and that's a problem because it's perceived. They are perceived as a big hurdle to enter or even as a hurdle to adoption. And so I think we have a chance you could preempt an upcoming developer needs and I think the problem comes with that many people I think learned to think about threads in the world of C++ or C# or Java and expect something similar to this and now I think we know that this requires shared memory and shared memories and that we can't really retrofit to JavaScript as a whole but that doesn't mean that people won't try. And so one big difference to threads is that workers have to be in a different file and people often point out they don't like that. They don't; it seems minor but really people dislike having to break up code that belongs together just so they can use a worker and also in the age of bundler it can be surprisingly hard to stop a bundler from what bundling like to keep a file separate and as a result is a whole bunch of packages out there pretend to run functions in the worker trying to emulate that developer experience for more traditional approaches to threads and this is often reliant on strings or blobification as I call it and those patterns actually kind kinda work until don't so stringification turns a function into a string before sending it over and uses eval or the Function constructor or something similar to turn it back into code and know apart from the obvious performance hit of double parsing. There's actually many invisible trip wires that make code that is technically correct approach that looks correct stop working. You can't really close other variables as the verb is looted in is to close over or not the string, some global's are available in one thread, but not the other any form of eval like behavior is often not CSP compatible and tooling, you know has a hard time catching these errors Blob-ification relies on using blob URLs. It's pretty much still the same as blobificiation, but with proper URLs and probably a bit more CSP compatible. Although I'm a bit foggy on the details here either way neither if you're using using either of these techniques, they will especially frustrating the second pass come into the equation because the string if occasion passed an hour relative to the worker where the string got reparsed with blobification means relative and absolute path are completely meaningless. only full paths including the hostname will actually continue to work. And so this piece of code here would not work as a typical might expect it to. And so this entire problem space can actually extract it to think about Realms in general if you create a realm, how do you even think about code execution inside that realm without making it an ergonomic nightmare? 
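To make the pattern concrete, here is a rough sketch of the Blob-URL approach being described (browser-only APIs; this is the workaround under discussion, not the proposal):

```js
// "Blobification": stringify a function, wrap it in a blob: URL, start a worker.
function runInWorker(fn) {
  const source = `(${fn.toString()})();`; // double parse: stringified here, re-parsed in the worker
  const url = URL.createObjectURL(new Blob([source], { type: 'text/javascript' }));
  return new Worker(url);
}

const worker = runInWorker(() => {
  // This looks like a closure but is not one: the function was re-parsed from a
  // string, so outer variables are gone, relative import paths now resolve
  // against the blob: URL, and eval-like construction can run into CSP limits.
  postMessage('hello from the worker');
});
worker.onmessage = (event) => console.log(event.data);
```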
Q finally this proposal and it's based on Dominicm and Nikolas and mine all the blocks proposal and there's also enough just an in line models proposal in here. So it's all a new iteration and in its simplest form. It is just a way to declare an inline module and you can then use to import to quote unquote instantiate it. Dynamic import is asynchronous, but that is actually a good thing because at any module block could potentially import other modules potentially from a URL. Or make use of top-level await. So it should be in the synchronous process by making module blocks work as modules. We can actually build on top of a lot of great spec work. It is already well defined well explored and well-understood by developers avoiding a lot of potential problems or questions because it is in the end just a module. With respect to syntax, we can't really introduce new keywords. I've been told and I've been taught taught about contextual keywords by saying there is no line Terminator allowed between module and the opening brace, but this is not set in stone. I'm happy to bike shed syntax with you all if you think it's a bad idea, but I think this actually looks quite idiomatic and nice. We did consider strings and template strings but in the end end they imply that you can close over values from outside the module scope and we did try to do this in the blocks proposal way back when and it just turns out at any time you try to close over values and still keep the code in a way that you can transfer between Realms. It turns into an absolute can of worms and so by using modules, an existing primitive, we naturally solve this by just disallowing closing over values right from the get-go with string application. As I said has this double parsing cost and with modules these molds were just participate in the module caching layer, which as long as in the same realm, but could even potentially be shared or cashed across realms by the engine, I'm not quite sure about the idea, but I think it definitely leaves a lot of room for optimization here because there is no closing over variables. Secondary benefits of this approach also include the parsing and compilation of these inline modules cut kick off earlier by the engine even before instantiation or transfer has occurred and by consequence errors of multiple kinds can actually be caught early rather than just an instantiation or at runtime. Now with the Realms proposal we can create multiple Realms within one file and with modular blocks. We then be able to also put the code for each of these Realms in the same file and shared across these Realms however, we like. And so I think that has a really nice bit of synergy here that these two things are kind of like the complement of each other to have state and code that is now nicely encapsulated. And workers in the end benefit you as well because as I said, they're kind of just a realm Well, so to build on top of this and address the ergonomics problem. I mentioned earlier my goal would be that in the end the worker Constructor would not only accept the path to a file but also a module blocks. I can instantiate the worker from within the same file and not only that but we could also make module block structured cloneable meaning that we can send modules using post message and instantiate it on the other side. 
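A sketch of the ergonomics being proposed, using the `module { }` contextual-keyword syntax from the slides — this is early-stage material, so both the syntax and the host integration with `Worker`/`postMessage` are assumptions rather than settled API:

```js
// An inline module block; nothing outside the braces is captured.
const work = module {
  export function fib(n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
  }
};

// Same-realm use: dynamic import "instantiates" the block.
const { fib } = await import(work);
console.log(fib(10)); // 55

// Proposed host integration (not part of ECMA-262 itself): hand the block to a
// worker, or structured-clone it via postMessage, instead of pointing the
// worker at a separate file.
// const worker = new Worker(work, { type: 'module' });
// worker.postMessage(module { export default 42; });
```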
And in this way we would finally give JavaScript a way to model tasks in a way that workers across realm boundaries that were excited - model task in a way that works across realm boundaries and allow people to build, you know, like a proper scheduler for example on top of this primitive. And to address the path problem I mentioned earlier, at least the path problem you get when you use string application or blobificatoin. The idea would be that the module block inherits the import.meta URL data from the module it is syntactically embedded in. Just in case you don't know import meta URL is the path of the module you are currently in, its like the portable version of document.transcript or __dirname or__file name in node. So this way right Ask would work intuitively for Developers So if you look at this code sample, this would now work as expected even Even if the scheduler package is loaded from a and the resulting worker would run a complete different origin. This would still continue to work as expected. And that's a really nice thing and lastly in terms of compatibility with APIs that have not been updated to consume or take a module block. The idea is to allow model blocks to be turned into object URLs. And this way module blocks will also work with APIs that have not been designed to just accept good old string URLs. objects lifetime is often a bit iffy, but it is usually tied it is tied to the creating realm which should be sufficient for the vast majority of use cases if you want to make use of this kind of technique. You might be thinking about bundlers about now that this might be nice to use with bundlers. For example allowing a file to contain multiple modules that can import each other, but Dan has kind of separate out a separate proposal that is built on the same ideas, but is complementary to address that specific use case in isolation and that target static Imports instead because we need to think about model identifies that point and I think he'll talk about this at some point. And then a couple of open question still and I'm happy to hear your thoughts either here or on the repository. But for example, like what exactly is the type of a module block it's probably an object but does have a prototype and if so what kind; I do not know happy to hear ideas here. There's another open question about caching because do we cache the module and if we do that do we cache it by its parse as the cache key or do we create really do create a new module on on each evaluation similar to an object literal. Now if we cache it by the parse node then assertion 1 and 2 would pass by searching 3 & 4 would throw while if it's like an object literal all the assertions will throw. Personally I think making it behave similarly to an object literal is the most intuitive for developers and I can see actually wanting to create the same module multiple times in a loop but might be interesting to hear some more opinions on this and what the implications are. Lastly there I have questions for the engine implementers you are hopefully here if this is actually a simple as I believe it to be what it does. Does this work passing a module block to a worker constructor. What about post messaging module blocks? Is this actually doable or am I breaking a million assumptions in everyone's code base? And with that I am happy to start a discussion and hopefully get some questions from y'all. 
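Paraphrasing the caching question above as a sketch (the exact assertions from the slides are not reproduced in these notes):

```js
// One block value, imported twice — presumably one module instance, since the
// block itself identifies a single module.
const a = module { export default Math.random(); };
const ns1 = await import(a);
const ns2 = await import(a);

// The open question is what happens when the *expression* is evaluated again,
// e.g. from a factory or inside a loop:
function make() {
  return module { export default Math.random(); };
}
const b1 = make();
const b2 = make();
// Keyed by the parse node:  b1 and b2 denote the same underlying module.
// Object-literal semantics: each evaluation creates a fresh module with its own state.
```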
MM: Yep, so I think that there's a lot of convergence between your module block notion and a non syntactic notion in the compartments proposal called Static module records. The module record in the spec language is an odd beast because it starts off with the static and Information but then as it's linked and initialized, it's modified in place. As I understand your module block, the module module block object is not modified in place as it is linked and initialized. Rather it's static and the thing that it corresponds to a module record would be dynamic would be something dynamic that is derived from the module block? So that's very much like our static module record. So I want to verify that the module block object itself does not capture its linkage graph. Rather the resolution of the import specifiers would be per importing context. So the same module block would resolve to different import graphs in different contexts. @@ -341,7 +349,7 @@ DRR: But yeah, you would not not want that. Maybe maybe can I ask a clarifying q SUR: No, like currently in my head. It was there is no capturing. There is no closing over anything that is outside the module blocks because we tried in previous proposal and it was just so hard to work out. So currently you the idea was for now to not close over anything, to make it really clear even for like very simple interest that these are module boundaries between the curly braces basically in there is no referencing to anything outside of those. -DRR: I want to keep talking about this, but I want to let other people make you go. So let's circle back. +DRR: I want to keep talking about this, but I want to let other people make you go. So let's circle back. DE: So I think the the idea is that it closes over the global object, but not the other lexically scoped things, so I think for type systems that need to track this it seems it seems pretty similar to the case where you have scripts that might be included in a worker or in a global object. I agree with what Shu was saying about how you might assume some kind of stability. I don't want to put words in his mouth. when I say close over the global object, I mean an instantiation of a module closes over the global object. So the module block is unbounded and it can reference only things in the global object where it is imported, which is how you instantiate a module. Thanks MM for the clarification. Yeah. That was my comment there. So I think it would be good to work through this and hopefully we can work together before stage 3 to validate the design. @@ -351,11 +359,11 @@ DE: Okay, very briefly, I think one object representation of these multiple bloc KKL: I think that a lot of the design considerations so far have been very good in particular echoing MM's sentiment that a module block could evaluate to a static module record is good and it is also consistent with that design if the module has no lexical capture of the surrounding environment, that it's effectively equivalent to a separate file. A useful design consideration, I think is that the module block would have to also capture the referrer specifier. I think it would have to inherit the import module URL of the environment in which it's declared and that would need to be carried with the static module record to wherever it's evaluated so that the import specifiers can be resolved. -SYG: So to SUR's question about "is structured cloning workable": for V8 it should be very easy. 
How I'm thinking about it for the internal representation is this unbounded module script that then gets bound when you eventually import it. Such a thing exists already in the implementation like in the API itself, when you first compile a module, you get back this unbounded script. And basically what aren't bound to scripts exist and you confine them to a particular context so +SYG: So to SUR's question about "is structured cloning workable": for V8 it should be very easy. How I'm thinking about it for the internal representation is this unbounded module script that then gets bound when you eventually import it. Such a thing exists already in the implementation like in the API itself, when you first compile a module, you get back this unbounded script. And basically what aren't bound to scripts exist and you confine them to a particular context so JWK: You say it supports structured cloning so can it be stored in an IndexedDB? -SUR: am not sure that it needs to be the conclusion. Like I don't know if there's a value in storing a module block inside indexeddb for now for Simplicity sake I would exclude it unless anyone has used case whether it actually becomes valuable. +SUR: am not sure that it needs to be the conclusion. Like I don't know if there's a value in storing a module block inside indexeddb for now for Simplicity sake I would exclude it unless anyone has used case whether it actually becomes valuable. SYG: Yeah, I was understanding the question to be for postmessage. Is that right? That's mainly for postmessage. @@ -369,7 +377,7 @@ SFC: I think that this proposal is really interesting from the perspective of gi SUR: The short answer is yes, there is a lot of practice but I've been working on for the last couple years and I've been working on - I've been maintaining a couple of apps that make use of this multi thread architecture. And so I think I have a fairly good understanding of where the ergonomics problems are, and I'm pretty confident that this would solve the vast majority of them, which is why I've now finally built this all together. But I'm happy to chat more with you about this offline. -SYG: quick reply to SFC. I don't know if you were able to make the meeting yesterday or was it two days ago. I gave talk about my vision for concurrent JS in the future and the model that this will better enable is and the worker model that the web uses is something that is that I'm thinking of it as actor inspired like not quite actors, but it's communicating event loops thing that you said, that's exactly how what this would help and there is a lot of prior art there. +SYG: quick reply to SFC. I don't know if you were able to make the meeting yesterday or was it two days ago. I gave talk about my vision for concurrent JS in the future and the model that this will better enable is and the worker model that the web uses is something that is that I'm thinking of it as actor inspired like not quite actors, but it's communicating event loops thing that you said, that's exactly how what this would help and there is a lot of prior art there. SFC: Yeah. I think I missed your presentation, but I'll review it and talk with you offline. Thanks. @@ -385,9 +393,9 @@ MM: I'm not expressing a desire. I'm just trying to verify my understanding in p SUR: I'm going to defer to to Dan on this one. -DE: So fortunately or unfortunately, I believe this would support live bindings just the same way because multiple namespace objects do support live bindings. 
And so but you're right this would only be useful through dynamic import, not static. If you're interested in bundling static [?]. I have a presentation later in this meeting on that topic. +DE: So fortunately or unfortunately, I believe this would support live bindings just the same way because multiple namespace objects do support live bindings. And so but you're right this would only be useful through dynamic import, not static. If you're interested in bundling static [?]. I have a presentation later in this meeting on that topic. -MM: Thanks. +MM: Thanks. SUR: Okay, since the speaker queue is now empty, I want to ask if there's any opposition or I don't know about the process - but my intention was to move this to stage 1, I'm asking for consensus. @@ -399,7 +407,7 @@ DRO: I like the things that the proposal is trying to solve. Part of me wonders SUR: I'm happy to hear any and all of your ideas and concerns on this but the problem space is just really interesting to me. So yeah. -KM: Defining the problem space is what's needed for stage 1; it's not necessarily a prescribed solution at that point so that's exactly the time to have this conversation. +KM: Defining the problem space is what's needed for stage 1; it's not necessarily a prescribed solution at that point so that's exactly the time to have this conversation. WH: I am also enthusiastic about this proposal. @@ -408,10 +416,15 @@ MBS: So I think with that in mind we have stage 1 potentially with the caveat th AKI: In case you aren't familiar with the process document rubric, Stage 1 is all about TC39 expressing an interest in exploring the problem space. Since this is your first proposal, I want to remind you to be open to being flexible about what the solution looks like as it’s not necessarily what the committee is endorsing for stage 1. SUR: I will definitely keep that in mind. Thank you very much. + ### Conclusion/Resolution -* Stage 1 + +- Stage 1 + ## Process Update + Presenter: Yulia Startsev (YSV) + - [pr](https://github.com/tc39/process-document/pull/29) - [slides](https://docs.google.com/presentation/d/15kaoyGic2yahxdo1TCXOeIR0fYGNMdbLjT1SBc1sSmI/edit#slide=id.ga7260a4d84_0_5) @@ -421,7 +434,7 @@ YSV: Withdrawing proposals, reverting to earlier stages, and adopting proposals MLS: So you said that reject and Block are different things. Could you describe the substantive difference between those two? -YSV: So the okay. we don't say that something will never happen. We don't say that a given proposal when it's blocked has absolutely no possibility of ever being ever being a realistic proposal that comes back to committee and then is adopted instead. We block things and then they don't get advanced. They might not be picked up ever again. So that that functionally is the same as rejecting something but we sort of leave the door open. That's the difference. +YSV: So the okay. we don't say that something will never happen. We don't say that a given proposal when it's blocked has absolutely no possibility of ever being ever being a realistic proposal that comes back to committee and then is adopted instead. We block things and then they don't get advanced. They might not be picked up ever again. So that that functionally is the same as rejecting something but we sort of leave the door open. That's the difference. MLS: It seems in practice they're the same thing. It seems a little interesting that we're not willing to use the word reject. 
@@ -437,7 +450,7 @@ WH: Okay, to understand your position, we should never clean it up, we should ju YSV: I think so. I think it's really interesting as an archival document and it really helps us see how we're developing as committee. -SYG: I want to engage with MLS on on his question. So I think it is true that we have priors that we reevaluate. Case in point, no data races, no shared memory was a pretty strongly held position. Let's not expose GC was a strongly held position. And we have changed our thinking on both given what's happening in ecosystem given stuff that's happened. I think the distinction that YSV is trying to make does make sense to me. I was wondering if your position on the reject versus block thing is - like that the difference to me is that there is a it's a time difference. Like I don't think we can ever say we will never consider something but it might like practically that's probably understood as we're not going to revisit this without significant new information or new changes in the ecosystem or or other external factors. So that might take longer to come to pass then placed work through some of this to concrete issues of then we can can keep progressing. +SYG: I want to engage with MLS on on his question. So I think it is true that we have priors that we reevaluate. Case in point, no data races, no shared memory was a pretty strongly held position. Let's not expose GC was a strongly held position. And we have changed our thinking on both given what's happening in ecosystem given stuff that's happened. I think the distinction that YSV is trying to make does make sense to me. I was wondering if your position on the reject versus block thing is - like that the difference to me is that there is a it's a time difference. Like I don't think we can ever say we will never consider something but it might like practically that's probably understood as we're not going to revisit this without significant new information or new changes in the ecosystem or or other external factors. So that might take longer to come to pass then placed work through some of this to concrete issues of then we can can keep progressing. MLS: The one proposal that comes to mind is SIIMD JS, which I think all of the implementors basically said we won't implement this because we think that that wasm is probably a better forum to provide that functionality. So I think maybe we can say that it's blocked but effectively it's rejected in my mind. @@ -446,13 +459,12 @@ MM: I think we're ignoring the fact that there's also withdrawn the champions th SYG: That's a thing that already exists. MLS: SIMD JS, I don't know whether it's been withdrawn, just it's a dead proposal that hasn't been worked on some time. There's also that we've had history of a few proposals that have been shopped to other venues and they've advanced in that in those venues and effectively they're they're dead now for JavaScript. They were rejected or the champion felt that they were rejected and took an easier path to implementation. - SYG: So my question to you then is do you see value in kind of capturing that limbo state more formally in the process doc? MLS: yeah, I think it'd be good that when proposals are quote unquote blocked if the reasons for blocking from various delegates is strong enough that the whatever constraints need to be overcome. 
I think it'd be good to include those in the proposal repository so that we know, how likely something is to be revived or somebody thinks that that a feature like that, several years later thinks it's worth pursuing again that they understand what they're up against. -YSV: Just a second. I'll share the actual document. I think there might be an interesting text in the actual document. Because I may have misrepresented this just now in terms of what that "reject" concept is. So not all issues with a proposal are easily solvable. Some issues are too fundamental and serious, requiring significant rework of the proposal or may be unsolvable. This might capture what you mentioned and these situations have consensus withheld. It may be referred to colloquially as a block if the proposal will require substantial work to address the concern. It may need to be rethought or may not have enough justification to pursue at this time. So this is how a block is currently being defined. And this next sentence where it says that proposals don't get rejected, it's that there's always the option for a champion to pick it up and make a modification be re-presented to the committee in order to seek consensus. So I may have misrepresented how those two paragraphs interact with one another. +YSV: Just a second. I'll share the actual document. I think there might be an interesting text in the actual document. Because I may have misrepresented this just now in terms of what that "reject" concept is. So not all issues with a proposal are easily solvable. Some issues are too fundamental and serious, requiring significant rework of the proposal or may be unsolvable. This might capture what you mentioned and these situations have consensus withheld. It may be referred to colloquially as a block if the proposal will require substantial work to address the concern. It may need to be rethought or may not have enough justification to pursue at this time. So this is how a block is currently being defined. And this next sentence where it says that proposals don't get rejected, it's that there's always the option for a champion to pick it up and make a modification be re-presented to the committee in order to seek consensus. So I may have misrepresented how those two paragraphs interact with one another. MLS: So if something is that there's there's unsolvable issues doesn't that mean that The proposal should be rejected. @@ -478,7 +490,7 @@ YSV: I would like to make a suggestion here after hearing everyone's thoughts. O MM: I like that refinement. -MLS: One clarifying comment. If the committee generally would encourage a champion to withdraw and they refuse, or vice versa they withdraw and the committee doesn't think it needs to be withdrawn. It seems like that may be kind of a disconnect between - in either case - a disconnect between the champion champion and the committee. +MLS: One clarifying comment. If the committee generally would encourage a champion to withdraw and they refuse, or vice versa they withdraw and the committee doesn't think it needs to be withdrawn. It seems like that may be kind of a disconnect between - in either case - a disconnect between the champion champion and the committee. YSV: Yes, but this would also reflect how we currently work because a champion may withdraw a proposal or member organization may withdraw a proposal on behalf of a champion who's no longer working with the organization. We had that at Mozilla. 
And then another member organization may say we wish to pick this work up again and start the proposal process again using that withdrawn proposal as a starting point. And if a champion chooses not to withdraw proposal in spite of the recommendation of the committee to do so then they also have the rather heavy task of convincing a committee that has gone to the point of saying we have consensus that this is a bad idea. They would have to find a way to convince the committee that said that to accept their proposal. So yeah, I don't know if that answers your question. @@ -488,7 +500,7 @@ SYG: I would like to express general support for this PR. I think it's a pretty MLS: This is a completely separate issue. There have been times when we've talked about removing something from language and I'm not talking about shared arraybuffers and the whole Spectre thing. I think that that was very principled and it was done when shared array buffers were pretty new, but there's been other times, rare, but other times when we've considered removing something from the language. And I think we struggle because we don't have a process for doing that - and this PR probably isn't place to doing it - but we probably should spend some time thinking about what is a principled way of doing that since something is in the standard and in we now consider removing it. Is that just a demotion from stage 4 to some lesser stage stage or is it something else? -YSV: That's a really great point. I would love to continue talking about that. +YSV: That's a really great point. I would love to continue talking about that. [queue is empty] @@ -507,8 +519,10 @@ RPR: Congratulations Yulia you have consensus. WH: For the third item you raised about deleting things from a language: I would view it as a proposal that would have to go through the stages if it's anything more substantive than a bug fix. Proposals are for changes to the language. When something is already in the language standard, deleting it is a change like any other that should go through the stages. YSV: I'm also thinking about that the same way. So what I'll do is - because we have currently the `Symbol.species` removal in flight that's being worked on by Shu and myself, and we're following what WH just suggested. So what I'll do is I'll open an issue on this is a more general topic and we can discuss how we handle that kind of a deletion. + ### Conclusion/Resolution -* PR to be merged + +- PR to be merged ## Adopting Unicode behavior for set notation in regular expressions @@ -517,16 +531,15 @@ Presenter: Mathias Bynens (MB) & Markus W. Scherer (MWS) - [proposal](https://github.com/mathiasbynens/proposal-regexp-set-notation) - [slides](https://docs.google.com/presentation/d/1kroPvRuQ8DMt6v2XioFmFs23J1drOK7gjPc08owaQOU/edit#slide=id.p) - MWS: Hello, good morning. Good evening. My name is Markus Scherer. I work for Google. I think Mathias is going to present and start on this occasion. MB: I'm happy to let you do the presentation if you prefer Markus, I just wanted to say a few words as an intro. Yeah, maybe if you want to get set up with the presentation I can start the intro because I just want to give some context to this proposal. It's a brand new proposal stage 0 were not asking for any stage advancement. So not even for stage 1 today. We're just want to throw some ideas out there and see what the general. 
Yeah, but the general sense is within the committee about some of these ideas that we want to figure out if this is something worth pursuing in one particular way or another and another thing I wanted to make very clear is Is that in both the repository and these slides that prepared prepared from the content in the Repository. Here we have some illustrative example slides. I want to make it very clear that we are not tied to any particular syntax here. So although we do use some example syntax in the slides, I'm really hoping to avoid thread holding on syntactic details too much today. We're really trying to focus on the use cases and whether or not the committee thinks this is a problem worth solving or investigating further. With that out of the way, MWS go ahead when you're ready. -MWS: All right, so this is not actually precisely About Properties or strings or sequence properties. This is also about regular expressions. What we are proposing here today is to add set notations in regular expressions in character classes in regular expressions. Basically starting from where we are. We have Unicode properties and that's a wonderful way of number one making the character class or regular expression future proof because as Unicode ads characters like digits or let us these things naturally grow and we don't have to update them ourselves. It also means that when we have something that would take hundreds of ranges actually in this case. It would be I think 63 range of stress the digits. The regular Expressions don't get totally unwieldy and that's all fine and good but typically you quickly get into a place where you want to have whatever the property says plus a few things, but maybe you want to combine multiple Properties or you want a property like [?] except for something and then you want to remove things and sometimes what you remove his another property sometimes what you remove is just a list of exception cases and it's also also quite common for people to use, basically an intersection of sets meaning that I want characters to match that have both this property and and that other property. +MWS: All right, so this is not actually precisely About Properties or strings or sequence properties. This is also about regular expressions. What we are proposing here today is to add set notations in regular expressions in character classes in regular expressions. Basically starting from where we are. We have Unicode properties and that's a wonderful way of number one making the character class or regular expression future proof because as Unicode ads characters like digits or let us these things naturally grow and we don't have to update them ourselves. It also means that when we have something that would take hundreds of ranges actually in this case. It would be I think 63 range of stress the digits. The regular Expressions don't get totally unwieldy and that's all fine and good but typically you quickly get into a place where you want to have whatever the property says plus a few things, but maybe you want to combine multiple Properties or you want a property like [?] except for something and then you want to remove things and sometimes what you remove his another property sometimes what you remove is just a list of exception cases and it's also also quite common for people to use, basically an intersection of sets meaning that I want characters to match that have both this property and and that other property. 
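(Editorial sketch, not from the slides: the union-only status quo MWS describes, using property escapes with the `u` flag; the exact class is illustrative.)

```js
// Union is the only set operation a character class supports today:
// letters, combining marks, and numeric characters from Unicode
// properties, plus a literal underscore.
const letterish = /[\p{L}\p{M}\p{N}_]/u;
letterish.test('é'); // true  (Letter)
letterish.test('٣'); // true  (Arabic-Indic digit, Number)
letterish.test('!'); // false
```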
-MWS: Currently what we have in JavaScript regular Expressions is that we can make a union inside a character class you can use character class escape, which means you can have the properties in there and you can have characters and ranges and you can make the union of these so basically for one notion of quote and identify your let us for example, you would take all the letter characters Does and all the characters characters all the combining mark that have a numeric kind of property including the digits but also Roman numerals and stuff. And in this case, I also added an underscore just to illustrate that. We can have just a single character [?] as well. that helps for Union, but what if you only want the letters that are in the Khmer script? for the Cyrillic script or some other script and this is a real life example it except for the underscore here that kind of gets thrown out as we intersect with with the queer script. This is the kind of thing that's used in real life. I fished this out of a piece of Google code except what I had to do for EcmaScript. I had to express the intersection as a positive. hit that's the issue and that's pretty unintuitive if when I looked on StackOverflow and other places for how to do intersection and set difference in various regular expression engines for some of them. It works as a built-in feature and for some of them you have to do things that are not obvious and one of one of the suggestions that keeps coming up is to do lookup ahead. +MWS: Currently what we have in JavaScript regular Expressions is that we can make a union inside a character class you can use character class escape, which means you can have the properties in there and you can have characters and ranges and you can make the union of these so basically for one notion of quote and identify your let us for example, you would take all the letter characters Does and all the characters characters all the combining mark that have a numeric kind of property including the digits but also Roman numerals and stuff. And in this case, I also added an underscore just to illustrate that. We can have just a single character [?] as well. that helps for Union, but what if you only want the letters that are in the Khmer script? for the Cyrillic script or some other script and this is a real life example it except for the underscore here that kind of gets thrown out as we intersect with with the queer script. This is the kind of thing that's used in real life. I fished this out of a piece of Google code except what I had to do for EcmaScript. I had to express the intersection as a positive. hit that's the issue and that's pretty unintuitive if when I looked on StackOverflow and other places for how to do intersection and set difference in various regular expression engines for some of them. It works as a built-in feature and for some of them you have to do things that are not obvious and one of one of the suggestions that keeps coming up is to do lookup ahead. -MWS: This is kind of clunky and not intuitive, but it's also slower because it does actually to lookups on us in character. I don't know if engines can optimize that but typically I would expect that to have an impact. We can do the same thing with a negative look ahead kind of for subtraction. So if we have the non-ascii digits we can match on personal decimal number characters, but only if The character is not also an Esky zero through 9 digit. So that kind of works. 
The other thing that people can do of course is they can take the full list of ranges corresponding to the property and can remove the ones that they don't want and basically pre-compute and then hard code the character class. and that works, of course, but now we are back to having lots of ranges. So abbreviated as here. This is actually sixty two ranges of 0 through 9 in its full form currently and every few years Unicode adds a script that has its own set of digits. So this gets out of date we lose the benefit of readability in updating of properties. So probably this is not the best way of doing things. So what we are proposing is basically to add real support for doing intersection and subtraction was that difference and with that also to make it kind of useful and meaningful. We need nested character classes. So we need to be able to not just have a class Escape like a \p quality inside of a character class, but also another square bracket character class for doing things like exceptions like the removing a scalar case so for example, instead of doing the grouping and the positive look ahead we could just write something that looks like well, yeah, there is a syntax for doing an intersection. We have the Khmer script property. And the Letter, Mark, and Number class and those two classes or sets together. The intersection is what we want for having mellitus. I put a note here that this is not necessarily the actual syntax we would be using this is the kind of syntax that's used in other places. There is variation on things like presidents on whether to use a single or double ampersand. and various things like that. We're not trying to settle those kind of things here. We're just presenting an idea for subtraction for non-ascii digits instead of doing a negative look ahead. Can just write a character class and the character class gets computed from decimal numbers except for those asking digits and you could also express it in a different way. could use a pointer here, right? You could have the decimal number - oh - - the property for the ASCII range. And so this could be a \p Calibre zosky. But in this case, it's kind of easy to list the singular points for the digits that we all know and love and of course in a union in order to keep the syntax consistent predictable. It would be also handy if we we could have a square bracket character character class in there even though it's not strictly necessary an example like this it would work just as well if you left out the inner square brackets, but it would be strange if we allowed the character class nested inside of other things are not here. so I dug up a few more examples from Google code where people were doing things like breaking spaces basically taking all the 30 or so space characters in Unicode and removing one set. worked like a non-breaking space and things like that. I'm aware that the line-break properties aren’t currently supported in ecmascript regular Expressions, but that's I think a decent illustration. Anyway, there's an emoji property and there are ASCII characters that are also cut, also have this property for the keycap sequences. And I found code that wanted to remove those it like having the ASCII characters in there. There was code looking for combining marks that were not script specific. So they intersect to combining Mark property with having inherited and common script. 
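(Editorial sketch of the workarounds and the illustrative notation just described; the `&&`/`--` spelling is copied from the example slides and is explicitly not final syntax.)

```js
// Intersection today: a lookahead paired with a union class
// ("letters, marks and numbers that are also in the Khmer script").
const khmerLetters = /(?=\p{Script=Khmer})[\p{L}\p{M}\p{N}]/u;

// Subtraction today: a negative lookahead
// ("decimal digits that are not ASCII 0-9").
const nonAsciiDigits = /(?![0-9])\p{Nd}/u;

// The proposed set notation would state the same sets directly, e.g.
//   [\p{Script=Khmer}&&[\p{L}\p{M}\p{N}]]   (intersection)
//   [\p{Nd}--[0-9]]                         (subtraction)
```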
So that's that's all the characters that are not like the vowel marks in in the clutch more like the acute and graph that we use in French and Spanish and other places or there was a piece of code that was looking for the first letter in a script in it had a starting point of taking things that are quick check. Maybe our yes in normalization form z c and then removing things that have sort of dtc's common inherited script properties meaning punctuation symbols in those kind of stuff. +MWS: This is kind of clunky and not intuitive, but it's also slower because it does actually to lookups on us in character. I don't know if engines can optimize that but typically I would expect that to have an impact. We can do the same thing with a negative look ahead kind of for subtraction. So if we have the non-ascii digits we can match on personal decimal number characters, but only if The character is not also an Esky zero through 9 digit. So that kind of works. The other thing that people can do of course is they can take the full list of ranges corresponding to the property and can remove the ones that they don't want and basically pre-compute and then hard code the character class. and that works, of course, but now we are back to having lots of ranges. So abbreviated as here. This is actually sixty two ranges of 0 through 9 in its full form currently and every few years Unicode adds a script that has its own set of digits. So this gets out of date we lose the benefit of readability in updating of properties. So probably this is not the best way of doing things. So what we are proposing is basically to add real support for doing intersection and subtraction was that difference and with that also to make it kind of useful and meaningful. We need nested character classes. So we need to be able to not just have a class Escape like a \p quality inside of a character class, but also another square bracket character class for doing things like exceptions like the removing a scalar case so for example, instead of doing the grouping and the positive look ahead we could just write something that looks like well, yeah, there is a syntax for doing an intersection. We have the Khmer script property. And the Letter, Mark, and Number class and those two classes or sets together. The intersection is what we want for having mellitus. I put a note here that this is not necessarily the actual syntax we would be using this is the kind of syntax that's used in other places. There is variation on things like presidents on whether to use a single or double ampersand. and various things like that. We're not trying to settle those kind of things here. We're just presenting an idea for subtraction for non-ascii digits instead of doing a negative look ahead. Can just write a character class and the character class gets computed from decimal numbers except for those asking digits and you could also express it in a different way. could use a pointer here, right? You could have the decimal number - oh - - the property for the ASCII range. And so this could be a \p Calibre zosky. But in this case, it's kind of easy to list the singular points for the digits that we all know and love and of course in a union in order to keep the syntax consistent predictable. 
It would be also handy if we we could have a square bracket character character class in there even though it's not strictly necessary an example like this it would work just as well if you left out the inner square brackets, but it would be strange if we allowed the character class nested inside of other things are not here. so I dug up a few more examples from Google code where people were doing things like breaking spaces basically taking all the 30 or so space characters in Unicode and removing one set. worked like a non-breaking space and things like that. I'm aware that the line-break properties aren’t currently supported in ecmascript regular Expressions, but that's I think a decent illustration. Anyway, there's an emoji property and there are ASCII characters that are also cut, also have this property for the keycap sequences. And I found code that wanted to remove those it like having the ASCII characters in there. There was code looking for combining marks that were not script specific. So they intersect to combining Mark property with having inherited and common script. So that's that's all the characters that are not like the vowel marks in in the clutch more like the acute and graph that we use in French and Spanish and other places or there was a piece of code that was looking for the first letter in a script in it had a starting point of taking things that are quick check. Maybe our yes in normalization form z c and then removing things that have sort of dtc's common inherited script properties meaning punctuation symbols in those kind of stuff. MWS: For comparison, We have two versions of this slide. We looked at other regular expression engines. They all basically support unions and nested classes. So the ones we looked at here several of them support either intersection or subtraction. I'm not sure why if they go to the lengths of having one syntax. Why not add the other you can emulate these things? By combinations of intersecting with the negation and things like that, but it's much more obvious if you have dedicated syntax, there are a couple of regex engines that also support a symmetric difference. I'm not really sure what the use case is for that but UTS 18 the Unicode regex describes it and a couple places have implemented it. It is a tub table view view of that. Showing which engine has which feature in a more visual form? Thanks to Matias who put together this nice table form the star here the in and shrug for Java subtraction is basically saying that they don't have Syntax for it, but they documented as you can get it. From doing intersections and something else. And then you can see there are places that do one but not the other and at this point the ecmascript Regex. can only do Union and not even Union of an SAE class and we would like to fill in that bottom row. I'm not sure that we need a symmetric difference. We are basically not proposing to add that but otherwise subtraction intersection and nested classes would be handy and we think it would be handy in summary because the regular expressions with a character class has become a lot more. Show a little more readable, more intuitive and because of that they help avoid errors by having hard-coded classes that are then also hard to check, hard to keep up to date, and people might be tempted to just stick with simple character classes that don't really support internationalization or doing lookaheads that are also unintuitive not easy to use and hurt performance. 
so as Mathias said this is not ready yet for really having something concrete at once to be decided and this is what it's going to look like, but if we could get a thumbs up for continuing this work, that would be great. If people have concerns and think this doesn't fit in EcmaScript regexes. I would like to hear what the line of argument is for that and we can think about that but basically basically what we are asking is a go-ahead to make a real proposal for adding these features into JavaScript ecmascript regular expressions. MED suggested that I put in an example for something that has a lot of ranges. So here we have all the letters that are not lowercase. These are almost a hundred thirty thousand characters with hundreds and hundreds of ranges, so imagine you do this and then you don't just have this one character class, but you do this kind of expression. And have to write it out and someone has to make sense of it. Matthias @@ -534,23 +547,23 @@ MB: yeah at this point. I think we can open the discussion and see if there's an WH: Long ago during the presentation you had a bunch of slides in which you used `/(: …)/`. As far as I can tell that just evaluates to a capturing group whose first character is `:`. What did you mean here? -MWS: I have to admit that I'm not a complete regex Maven, but I've seen suggestions on stack overflow with and without the grouping and I remember there was a reason why the grouping was needed when you were doing a plus oor star Star operator after it. It was something about how these things are getting evaluated. +MWS: I have to admit that I'm not a complete regex Maven, but I've seen suggestions on stack overflow with and without the grouping and I remember there was a reason why the grouping was needed when you were doing a plus oor star Star operator after it. It was something about how these things are getting evaluated. WH: I'm not familiar with that. As far as I can tell it's just a capturing group that requires its first character to be a colon. -MWS: So I think the high point on this kind of slide is that we need to look ahead assertion to emulate the intersection or the subtraction. I had the impression from some of the stackoverflow answers that the grouping was also useful to do some of the work that was wanted to be done. But if that's not the case, then I can remove that part from from the example. +MWS: So I think the high point on this kind of slide is that we need to look ahead assertion to emulate the intersection or the subtraction. I had the impression from some of the stackoverflow answers that the grouping was also useful to do some of the work that was wanted to be done. But if that's not the case, then I can remove that part from from the example. -BSH: I think. ‘:’ actually means that it doesn't really capture it only groups. +BSH: I think. ‘:’ actually means that it doesn't really capture it only groups. MWS: I'm sorry for not copying it correctly. -WH: Okay. I thought you were trying to introduce a new syntax which I wasn’t familiar with here. +WH: Okay. I thought you were trying to introduce a new syntax which I wasn’t familiar with here. MWS: That's just a typo; I'm sorry for any confusion. So I need a question mark before the colon. WH: Yeah, okay. I'm also next on the queue. There are all kinds of syntactic issues with this proposal. `/[[0-9]]/` currently means a character class containing '[' or one of the ASCII digits, followed by a `]` character. 
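(Editorial note: the existing meaning WH points out can be checked directly; this assumes no new flag.)

```js
// Today /[[0-9]]/ is a class matching "[" or an ASCII digit,
// followed by a literal "]" — not a nested character class.
/[[0-9]]/.test('5]'); // true
/[[0-9]]/.test('[]'); // true
/[[0-9]]/.test('5');  // false
```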
Also, `/a--b/` means a range starting with `a` and ending with a `-`, or a `b` character. Introducing those would be breaking changes. I want to get an idea of where you want to go with this. Is your intent to do this without breaking changes to the language or not? -MB: We're definitely not interested in making breaking changes to the language. So again, like the examples were used, we're not married to any particular type of syntax, but if we want to pursue this in some way we either need something that is backwards compatible. So since I said currently breaks that throws an exception or we would need to introduce a new regular expression flag. Those are the two options. +MB: We're definitely not interested in making breaking changes to the language. So again, like the examples were used, we're not married to any particular type of syntax, but if we want to pursue this in some way we either need something that is backwards compatible. So since I said currently breaks that throws an exception or we would need to introduce a new regular expression flag. Those are the two options. WH: I’m concerned because all of all of the examples you gave are breaking changes — they currently mean something different. @@ -564,13 +577,13 @@ WH: I worry about trying to specify syntax in which `--` only works if you have MED: I think it's a balance. You can have a flag and then you can have you could have have a single ampersand, you do all sorts of things, or you can make it less likely and make it easier to migrate migrate Expressions if you also also use syntax like doubled characters that unlikely. I mean unlikely that you would have a character. I mean a double - in between character classes and that's the place where it's really useful. -WH: `--` is used quite a bit. Things like the nested square brackets also mean something already. If you nest square brackets the first closing square bracket will end your character class while the redundant opening square bracket is just a square bracket literal. +WH: `--` is used quite a bit. Things like the nested square brackets also mean something already. If you nest square brackets the first closing square bracket will end your character class while the redundant opening square bracket is just a square bracket literal. MED: I don't want it to talk too much but a lot of programming languages, I mean languages have faced exactly the same problem and I think we can learn by the steps that they've taken. -WH: I'm really worried about breaking syntax. But I do support the concept of doing operations character classes like unions and intersections and whatnot. Also, to clarify, you're proposing to do this only for single characters and not character sequences, right? +WH: I'm really worried about breaking syntax. But I do support the concept of doing operations character classes like unions and intersections and whatnot. Also, to clarify, you're proposing to do this only for single characters and not character sequences, right? -MWS: This is separate but could be combined in the future is if both of these proposals are accepted then they would naturally combined. Like if you have a set of all emojis, which is a sequences property and then you remove the Emoji flag sequences as a subtraction that would work and make sense. +MWS: This is separate but could be combined in the future is if both of these proposals are accepted then they would naturally combined. 
Like if you have a set of all emojis, which is a sequences property and then you remove the Emoji flag sequences as a subtraction that would work and make sense. WH: In some cases yes, in some cases no, but I don't want to rathole on that right now. @@ -588,7 +601,7 @@ MB: Re: how common this is, I'd like to clarify that the examples we've used in MF: Oh, yeah. I don't doubt that. It definitely is happening and happening often enough to warrant inclusion in the language, and inconvenient enough to do within a library. I just think that it doesn't happen commonly enough to warrant addition to the regular expression Pattern grammar, which as we discussed just a minute ago is already hard enough for users to comprehend. If you're doing something here on the more advanced side of the regular expression use, I think you can use an API to construct your regular expression, especially if we have better ability to construct regular expressions in the future with like - as we were discussing in IRC - with like a template tag for regex construction that would make it really easy to compose character classes and stuff. That's my opinion. -MB: Okay, so that would be a third option. So far, we've been talking about a) somehow finding magical syntax that is backwards compatible, or b) adding a new flag, but there's also c) what if we don't do any of those and add an API, okay, I see. +MB: Okay, so that would be a third option. So far, we've been talking about a) somehow finding magical syntax that is backwards compatible, or b) adding a new flag, but there's also c) what if we don't do any of those and add an API, okay, I see. MWS: I don't know in terms of complication of the grammar. It seems like a lot of other regex engines have added this and that tells me that they had motivation to do so and it seems like they document it in ways that aren't all that confusing I think. @@ -602,29 +615,29 @@ MF: I say that without the flag. Yes, with the flag, it's not ambiguous. But the MED: I mean, this is something that ECMAScript used in the past, when it went to using Unicode capabilities and eventually the u flag. I don't think you have to use that flag anymore. -MF: yeah, we have the U flag which I don't think anyone would argue wasn't worthwhile adding the flag because it's for such an incredibly useful purpose. +MF: yeah, we have the U flag which I don't think anyone would argue wasn't worthwhile adding the flag because it's for such an incredibly useful purpose. KG: Just a quick comment there. You do still have to use the u flag and you will have to use the u flag forever because we never change the meaning of anything ever. So if we introduce a new flag here you will have to continue to use that new flag for this syntax forever. MED: Although that new flag could subsume the U flag as well. if you wanted to keep the number of flags in an expression down, so if it were a new flag, then we could say that also implies the U flag. -MF: I don't think the burden is the number of flags. It's the number of individual states that you have to be aware of when reading a regex. +MF: I don't think the burden is the number of flags. It's the number of individual states that you have to be aware of when reading a regex. -MLS: So MF, when you say API you think about an API to construct the pattern and just want to clarify? [transcription error]. +MLS: So MF, when you say API you think about an API to construct the pattern and just want to clarify? [transcription error]. 
MF: The API I'm talking about is doing the like union and intersection operations on character classes and resulting in some representation of the code points that would or would not be matched by that character class and then being able to construct a regex from it via a separate API. -MLS: So I see some problems with that because it considers a regular expression that has multiple logical operations of Union and subtraction intersection that you'd almost have to have like a formatting kind of API that would take some special syntax do it needs to do and construct this pattern or the API. We have the ability to compose separate patterns so you can do it individually and I think that may be just as problematic as coming coming up with syntax and be nonbreaking +MLS: So I see some problems with that because it considers a regular expression that has multiple logical operations of Union and subtraction intersection that you'd almost have to have like a formatting kind of API that would take some special syntax do it needs to do and construct this pattern or the API. We have the ability to compose separate patterns so you can do it individually and I think that may be just as problematic as coming coming up with syntax and be nonbreaking MF: I believe so. I believe MB has a library that does, effectively, this. MB, can you speak on that point? -MB: Yeah. Okay. I have a library called regenerate or regular expression generate. It's supposed to be used at build time (instead of run-time) and part of the reason is that performance is a problem depending on how you implement this. My implementation is pretty basic. It gives you an API to easily operate on sets of code points and the library makes it very easy to add or do Union or subtraction or all of these things and then to finally `toString()` that resulting set into a regular expression pattern, with different options to customize the output depending on whether or not you want to use the `u` flag. I think there's also precedent for this in ICU and MWS can definitely speak more about that in the name for that — UnicodeSet is the name of that API. So this is what I'm imagining when when I heard heard your proposal: conceptually it could be an API that accepts a string that describes this pattern, like for it could be the patterns that we had in the slides as strings and then that produces either a regular expression pattern or it gives you some other type of objects that you can then use to combine as MLS said into the larger regular expression pattern. Does anyone want to speak about UnicodeSet and how ICU handles this? +MB: Yeah. Okay. I have a library called regenerate or regular expression generate. It's supposed to be used at build time (instead of run-time) and part of the reason is that performance is a problem depending on how you implement this. My implementation is pretty basic. It gives you an API to easily operate on sets of code points and the library makes it very easy to add or do Union or subtraction or all of these things and then to finally `toString()` that resulting set into a regular expression pattern, with different options to customize the output depending on whether or not you want to use the `u` flag. I think there's also precedent for this in ICU and MWS can definitely speak more about that in the name for that — UnicodeSet is the name of that API. 
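(Editorial sketch of the build-time approach MB describes with his regenerate library; the calls shown follow its README as recalled here and should be treated as approximate.)

```js
const regenerate = require('regenerate');

// Compose a set with add/remove operations, then serialize it into a
// character class that can be embedded in a pattern at build time.
const set = regenerate()
  .addRange('a', 'z')
  .addRange('0', '9')
  .remove('a', 'e', 'i', 'o', 'u');

const source = set.toString({ hasUnicodeFlag: true });
const re = new RegExp(source, 'u');
re.test('b'); // true
re.test('e'); // false
```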
So this is what I'm imagining when when I heard heard your proposal: conceptually it could be an API that accepts a string that describes this pattern, like for it could be the patterns that we had in the slides as strings and then that produces either a regular expression pattern or it gives you some other type of objects that you can then use to combine as MLS said into the larger regular expression pattern. Does anyone want to speak about UnicodeSet and how ICU handles this? MWS: Yeah, I can speak to that a little bit the UnicodeSet - actually MED I think cooked at up over 20 years ago together with [?] who was working on the translator-inator service in ICU, which is basically a rule-based way of transforming strings from something like Russian Cyrillic to Russian Latin or something like that, for example, but you also do other things with that syntax. And basically it introduced the notion of the Unicode set which is like a regular expression character class and the rules in the transliterator are a lot like regular Expressions but you have potentially hundreds of these kinds of rules. and at the sets are used for most keys at the context like the context before and after something that wants to be replaced kind of like a look ahead assertion a bit, but sort of more describing the context. And for that, that was probably one of the early places that supported Unicode properties in the syntax and one of the early places that something like 20 years ago supported the set operations. And that has been very fruitful in the transliterator framework. It's also been used as a standalone feature in lots and lots of places. You can create a Unicode set based on one of those patterns and people do that all the time and then say, given a string tell me how far I can go from the beginning of the string or from some offset in the string with characters that are in there like whitespace space characters or [?] letter sets that I showed as an example and give me the end point of where that is. So that's been a very useful but also a very popular feature and people are successfully writing their own patterns for these things which are really just basically regex character classes in a standalone implementation. MEd, do you want to add something? MED: I think I think you covered it quite well. The advantage of the Unicode sets is that they can be that they have both - you can create them from a string representation and you can also perform all of these set operations on the Unicode set so that I can use a Unicode defining it goes that later on subtract it from another unit code. and produce the third one one. And so on. That capability turns out to be very useful. It's like sets it's as if you had sets of characters, but you can make a much more compact representation for them, much easier to process, and a smaller footprint. -AKI: Yeah, I think we're ready to move on to Richard. +AKI: Yeah, I think we're ready to move on to Richard. RGN: Yep, we had some sleep deprived chatter in IRC and basically at least convinced ourselves that there is room for syntax here (e.g., `/\U[\p{N}--\p{Nd}]/u`). I don't want to get into the concrete parts of it. Obviously that's a later stage concern, but it's not prima facie dead. @@ -633,12 +646,13 @@ KG: Yeah, just as RGN said, the details of the syntax are definitely a later sta SFC: It looks like there's positive sentiments toward this which is good. 
And I also just wanted to think about - the set operations were sort of one of the biggest areas of improvement for that were ecmascript regular Expressions were farthest behind regular expression engines in other programming languages, but there's going to be room to extend this to some of the other features also that that that Unicode regular Expressions also support, the biggest of which is multi character sets which are important for things like Emoji. I think that this was alluded to a little bit earlier in this conversation, but I like, the idea that Richard posted about the set notation here. That could also be extensible to other area, so I just wanted to sort of get that thought out there because I think it would be really exciting if ecmascript added not only set notation, but also did it in a way that we can also add other Unicode regular expression features at the same time. Sentiment meter: + - Strong positive: SFC, RGN - Positive: DE, KG, BSH MLS: There already is a sequence property proposal. -SFC: There is a sequence property proposal, but that's different than sequence character sets. And the sequence property proposal is indefinitely blocked on the syntax; my hope is that if we were to adopt if we were to agree on adopting Unicode character class behavior, then this could also help us unblock the sequence character proposal, which is really one of the reasons why we're really looking at this problem because the sequence property proposal is not as useful as it could be if we had support for all of these other features, which is in large part why Markus and Matthias are bringing this proposal forward. +SFC: There is a sequence property proposal, but that's different than sequence character sets. And the sequence property proposal is indefinitely blocked on the syntax; my hope is that if we were to adopt if we were to agree on adopting Unicode character class behavior, then this could also help us unblock the sequence character proposal, which is really one of the reasons why we're really looking at this problem because the sequence property proposal is not as useful as it could be if we had support for all of these other features, which is in large part why Markus and Matthias are bringing this proposal forward. WH: Since you brought up sequence properties: If you intend to go to sequence properties, I think it's crucial to consider the the syntax of sequence properties together with this because there are things which are going to come up in the combination of those two which will not come up if you consider this alone and sequence properties alone. @@ -652,11 +666,11 @@ WH: Yes, you’ll have that issue too. Back to the point I was making: Singleton MWS: So if I might jump in here for a moment, in the Unicode set in ICU, we have supported what you call sequence is what we just call strings as part of a set for I don't know something like 15 years. At the time, we added syntax that wasn't quite backward compatible just using a curly brace bracketing around the string as a single element of the set. I understand that that particular syntax is too disruptive. There is a recommendation in the Unicode regular expression spec for doing something like a \q{ and then the string but in general that are definitely options for supporting something like that and these kinds of things do make sense. We use them for example in the Locale data, the CLDR data, for things like to set of characters that you need to write a language. 
and that can include sequences not just single code points. The other thing - someone mentioned negation, and for UTS 18 I really have to credit MED on this one, he came up with a way of resolving some internal negations so that in the end you can test and make sure that you don't end up with a negated set that contains multi-character strings because that just doesn't work, but it is possible (if you wish to do that) to have some indication on the inside and have it be resolved in case that's permissible. For example, you could have a double negation which then falls out and gets resolved away. -WH: Yes, my thinking about this is that there is too much of syntax within the character classes we have today that's already used for existing behaviors. I would prefer to start with a cleaner slate which we can get with a flag that lets us define a straightforward uniform syntax for doing sequence classes and not have to worry about doing really bizarre contortions to avoid breaking stuff. +WH: Yes, my thinking about this is that there is too much of syntax within the character classes we have today that's already used for existing behaviors. I would prefer to start with a cleaner slate which we can get with a flag that lets us define a straightforward uniform syntax for doing sequence classes and not have to worry about doing really bizarre contortions to avoid breaking stuff. MED: and I think that's perfectly reasonable direction to take. A lot of times you make decisions for backwards compatibility that ten years down the line people squaring that because it gets so ugly. -MLS: And, with the U flag, we have syntax available to us because the escapes are the currently unused escapes are syntax errors with the U flag so we can introduce new escapes for new constructs. +MLS: And, with the U flag, we have syntax available to us because the escapes are the currently unused escapes are syntax errors with the U flag so we can introduce new escapes for new constructs. MB: Yeah, we were discussing this on IRC. It was an interesting discussion and we could basically do [`\UnicodeSet{…}`](https://github.com/mathiasbynens/proposal-regexp-set-notation/issues/2), which I think is quite elegant and readable. But anyway we can discuss this on the repository. @@ -664,7 +678,7 @@ AKI: Yeah, I know that we've definitely had this space to discuss syntax, but th MB: All right. Thanks everyone. -SFC: Thank you. This feedback was helpful. And I also recorded in the notes the sentiment meter. +SFC: Thank you. This feedback was helpful. And I also recorded in the notes the sentiment meter. KG: just for the notes. Are we officially calling this stage one? @@ -672,9 +686,9 @@ AKI: So I'm going to just mention this doesn't to my knowledge have a repo. MB: It does have a repo and slides, but we didn't provide those materials before the stage advancement deadline. It's fine, we're not asking for stage 1. -MBS: So I believe but I could be wrong that those requirements don't apply to stage one historically, but someone can correct me if I'm wrong there. I think they do. the committee has the ability to the deadlines not naming the refused. I don't remember now, but I believe we changed some details of that in a recent meeting. I would like to see if something does go for stage advancement. +MBS: So I believe but I could be wrong that those requirements don't apply to stage one historically, but someone can correct me if I'm wrong there. I think they do. the committee has the ability to the deadlines not naming the refused. 
I don't remember now, but I believe we changed some details of that in a recent meeting. I would like to see if something does go for stage advancement. -YSV: So stage one isn't quite as important, but if it doesn't get into the agenda within like before the 10-day limit for example any member organization that relies on their peer review will be able to review it. Yeah, I think it should apply to stage one, but I think whatever. +YSV: So stage one isn't quite as important, but if it doesn't get into the agenda within like before the 10-day limit for example any member organization that relies on their peer review will be able to review it. Yeah, I think it should apply to stage one, but I think whatever. MWS: I apologize for putting this together later. I was in Germany for three weeks with my parents and just got around to doing it early this week. @@ -683,4 +697,5 @@ YSV: I think no problem. Thank you so much for the presentation. Okay. AKI: Yes. Thank you so much and the conversation. thank you. ### Conclusion/Resolution -* No advancement due to late addition to the agenda + +- No advancement due to late addition to the agenda diff --git a/meetings/2020-11/nov-19.md b/meetings/2020-11/nov-19.md index 53e88fe0..42a7ebd7 100644 --- a/meetings/2020-11/nov-19.md +++ b/meetings/2020-11/nov-19.md @@ -1,7 +1,8 @@ # 19 November, 2020 Meeting Notes + ----- -**Remote attendees:** +**Remote attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Waldemar Horwat | WH | Google | @@ -26,8 +27,8 @@ | Daniel Ehrenberg | DE | Igalia | | HE Shi-Jun | JHX | 360 | - ## Extensions for stage 1 + Presenter: HE Shi-Jun (JHX) - [proposal](https://github.com/hax/proposal-extensions) @@ -35,41 +36,40 @@ Presenter: HE Shi-Jun (JHX) JHX: Okay. Hello, everybody. This is a long slide and I try to go through it as fast as I can. This is the contents of my presentation and I will start from our minimal examples. The extensions proposal introduces some syntax to declare adequate extension method and it could be a code like this to this is just like a real mess of it and it could be explained like this, just like you have [transcription error] And it's the double colon double notation have same precedence [transcription error] could be be changed seamlessly. and the example measures are declaring in a separate namespace, which means it will not conflict with the normal normal bindings. So with this syntax you can borrow the binding methods like that because the children probably do not have the forEach you can use the array prototype for each. And it also adds a syntax that not only the binary form, but also ternary that here just have the same effects of this example. Just you don't do not need to extract the do not need the clear it you can just use it here and exists you can use Constructor has extension object or you could also use namespace extension. It's like this we impose the lodash and use the lodash `last` method here. We will also use Global namespace with more mass so This is a very simple example. So basically the syntax of the ternary form works exactly like its Constructor. It will use the prototyping method and if it’s not present it will be treated like the namespace object. This is a very simple part 0. -JHX: Actually this proposal is not a new proposal, it actually is based on the old bind operator. If you are familiar with the old bind operator will find the syntax look very similar to this. 
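(Editorial sketch for context: the old bind-operator forms as implemented by the Babel plugin, with plain-JS equivalents; the `::` lines are in comments because neither proposal's syntax is valid today.)

```js
// Old bind operator (stage 0):
//   obj::fn(arg)                       ≈  fn.call(obj, arg)
//   list::Array.prototype.forEach(f)   — "borrow" a built-in method

// Equivalents that run today:
const last = function () { return this[this.length - 1]; };
last.call([1, 2, 3]); // 3   ≈  [1, 2, 3]::last()

Array.prototype.forEach.call({ 0: 'a', length: 1 }, x => console.log(x));
// logs 'a'                    ≈  arrayLike::Array.prototype.forEach(...)
```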
This is the older proposal and it already has a [transcription error] tc39 report in a day of many discussions on the issue and it also has official bubble support from 2015. And as I know there are some programmers already using it in the production, but the bind operator proposal actually is still in stage 0 and not even stage 1. This is just very surprising. Our chapter the meeting notes of the white operate [?] and Allen said that in 2015 that's yet because of that sort of in that time we decided to get more feet apart from babel. I think the issue Allen raised is no longer an issue, as already five years passed and I think every JavaScript programmer has used ES6 Class. - +JHX: Actually this proposal is not a new proposal, it actually is based on the old bind operator. If you are familiar with the old bind operator will find the syntax look very similar to this. This is the older proposal and it already has a [transcription error] tc39 report in a day of many discussions on the issue and it also has official bubble support from 2015. And as I know there are some programmers already using it in the production, but the bind operator proposal actually is still in stage 0 and not even stage 1. This is just very surprising. Our chapter the meeting notes of the white operate [?] and Allen said that in 2015 that's yet because of that sort of in that time we decided to get more feet apart from babel. I think the issue Allen raised is no longer an issue, as already five years passed and I think every JavaScript programmer has used ES6 Class. JHX: this is the old bind operator proposal looks very close to my proposal but they all had the prefix for like the old ::console.log() form. -JHX: The old bind operator actually has three features. We should note that the second one is actually based on the first one. It has 2 motivations: one is close to my proposal and the other one [transcription error] +JHX: The old bind operator actually has three features. We should note that the second one is actually based on the first one. It has 2 motivations: one is close to my proposal and the other one [transcription error] JHX: The first could be seen as virtual methods. This could be a bad name, as a virtual method usually means that it can be overwritten. In this case, this does not correspond to a real method. -I’d like to use the term extension method to describe that. -The extension method, defined by wikipedia, is a method which is added to an object after the original object was compiled. -I think we understand what that means is the method is on the original object. +I’d like to use the term extension method to describe that. +The extension method, defined by wikipedia, is a method which is added to an object after the original object was compiled. +I think we understand what that means is the method is on the original object. If you follow the idea we now never rely on the bind concept that was a problem of the bind operator that has a motivation issue. And because of that these 2 features if we think about it indiv they cloud have no relationship. So if we restart from the virtual/ext method themselves [shows slide 22] This is the real method and this is the extension method so what is this? JHX: The old bind operator to make this means “bind” but it’s not very [?] it’s actually property. It could be an extension property. A new proposal to allow that extension accessors so you can rewrite code like this. 
[slide 27] If we look back to the old bind operator, it has two ways to invoke a “virtual” method and to extract a real method. -If we treat the virtual method the same as a real method it should use the same syntax like this [slide 28] Of course the syntax is now not any good, too many colons. +If we treat the virtual method the same as a real method it should use the same syntax like this [slide 28] Of course the syntax is now not any good, too many colons. I think this proposal should be split into two proposals, one for virtual methods, and one for method extraction. The third has some use cases, but not strong enough to have its own proposal. In this proposal I focused on the virtual method. I’d like to discuss something about method extracting. One application for this is “partial application proposal” The one parameter but there are some discussions about partial application if we have this syntax so if it could just be used to achieve that Or, we can have an individual proposal for that. My intention here is to use the infix operator instead of the prefix form. Prefix form needs the [transcription error] -Even if we do not have those proposal, we can use the extraction method to address the same issue. And, additionally, you don’t need to write the method yourself, they already exist in libraries like lodash. +Even if we do not have those proposal, we can use the extraction method to address the same issue. And, additionally, you don’t need to write the method yourself, they already exist in libraries like lodash. -You can import the lodash module and create a lodash namespace, and use the extension operator on it. So it could just work. +You can import the lodash module and create a lodash namespace, and use the extension operator on it. So it could just work. Let’s talk about the virtual part. One discussion is we can replace extensions with the pipeline method. However there are some ergonomics issues. [shows example, slide 34] -The precedence of the pipeline operator is very low, and you would need to add parens, or you can change everything to pipeline, but this would require a dramatic rewrite. +The precedence of the pipeline operator is very low, and you would need to add parens, or you can change everything to pipeline, but this would require a dramatic rewrite. -Compared to the original example of the extension method, maybe it is not the best. I think the pipeline operator is very good for functional programming, but in many situations where you want to use the builtin methods in chaining then extensions have a place. +Compared to the original example of the extension method, maybe it is not the best. I think the pipeline operator is very good for functional programming, but in many situations where you want to use the builtin methods in chaining then extensions have a place. -Pipeline can also be a userland implementation, and that can be used with extensions. +Pipeline can also be a userland implementation, and that can be used with extensions. [transcription error] @@ -81,21 +81,21 @@ Other programming languages adopted extensions. Here is a complete timeline of other programming languages adopting similar language features. Here is an example of swift (followed by kotlin). -Ruby is interesting: most PLs have extensions in static typing languages but ruby is a dynamic language just like javascript. It introduces a language feature called “refinement” which can be thought of as a “safe” monkey patch. -It uses different dispatch rules. 
-In a classic extension, they should first look up the real method, and then the extension method. But ruby uses a different rule. +Ruby is interesting: most PLs have extensions in static typing languages but ruby is a dynamic language just like javascript. It introduces a language feature called “refinement” which can be thought of as a “safe” monkey patch. +It uses different dispatch rules. +In a classic extension, they should first look up the real method, and then the extension method. But ruby uses a different rule. I think it is because the design is better than monkey path because monkey patch has higher precedence. -If you look at all of these examples, they still use the `.` notation, and the extension can still be dispatched by the type. This question has been asked in typescript in a very old issue (#9). [presents example from the issue, slide 79] +If you look at all of these examples, they still use the `.` notation, and the extension can still be dispatched by the type. This question has been asked in typescript in a very old issue (#9). [presents example from the issue, slide 79] [transcription error] -JHX: Even if it can generate that code, it is hard to infer what it would do and we would overload the dot operation. So that is why we talk about a new syntax here with different semantics. This avoids runtime dispatching cost. For instance Ruby has a runtime cost with extension. If you look at the spec it is very complex. Ruby is a good example of it, here is the look up rule of the refinements (slide [?]).I think we can’t reuse the `.` and it is very unlikely that we can dispatch by type. I think however extensions are still useful for javascript. For JS, we use 2 different symbols to decouple the behavior. Compared to the classical extension methods, my proposal still keeps the call value of the extension method. And so, we lose something, but we also gain something. Without an IDE also it is hard to know where the method comes from and makes for unpredictable performance. My proposal has a predictable performance. +JHX: Even if it can generate that code, it is hard to infer what it would do and we would overload the dot operation. So that is why we talk about a new syntax here with different semantics. This avoids runtime dispatching cost. For instance Ruby has a runtime cost with extension. If you look at the spec it is very complex. Ruby is a good example of it, here is the look up rule of the refinements (slide [?]).I think we can’t reuse the `.` and it is very unlikely that we can dispatch by type. I think however extensions are still useful for javascript. For JS, we use 2 different symbols to decouple the behavior. Compared to the classical extension methods, my proposal still keeps the call value of the extension method. And so, we lose something, but we also gain something. Without an IDE also it is hard to know where the method comes from and makes for unpredictable performance. My proposal has a predictable performance. -JHX: Part 3. As I said, this lookup based on the ext, is delegated by the ext. There are three forms: invoke, get, set. It could be customized. This is the previous example [presents slide 95] We could rewrite this like so: The extract can make the syntax much better. +JHX: Part 3. As I said, this lookup based on the ext, is delegated by the ext. There are three forms: invoke, get, set. It could be customized. This is the previous example [presents slide 95] We could rewrite this like so: The extract can make the syntax much better. 
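(Editorial sketch of the dispatch rule described earlier — a constructor-like extension uses its prototype method with the receiver as `this`, anything else is treated as a namespace whose function takes the receiver as an argument. The helper name is hypothetical; the proposal expresses this in syntax, not as a function.)

```js
// Rough userland approximation of `receiver::Ext:name(...args)`.
function applyExtension(receiver, ext, name, ...args) {
  if (typeof ext === 'function' && ext.prototype && name in ext.prototype) {
    // Constructor: call the prototype method with `this` = receiver.
    return ext.prototype[name].call(receiver, ...args);
  }
  // Otherwise: treat `ext` as a namespace object (e.g. lodash).
  return ext[name](receiver, ...args);
}

applyExtension('abc', String, 'toUpperCase'); // 'ABC'
applyExtension([1, 2, 3], { last: a => a[a.length - 1] }, 'last'); // 3
```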
JHX: part 4 is use cases. [Presents examples in slides from slide 97] And in this way it could improve code readability. This is extremely useful if you have a very long expression. And, CSS units should just work. With the exception of a hack that we need to use. [slide 111] There is another proposal, first class protocols, which I really like. One issue of this proposal is that it doesn’t have a good syntax. Maybe we can use extension methods here and use the double colon notation for it. Benefit is it could ensure it really implements the protocol. If you only use the symbol, it can be faked. And maybe it could be shorthand syntax for it. And sensitive code is an example for branding at slide 115. On slide 121: Eventual send, Needs better syntax and needs two types of proxy. Talking about wavy dot proposal: For syntax we have a high bar. [transcription error]. Not as good as wavy dot but saves the syntax space. And the last use case is experimental implementations for new APIs on the prototype. Presents Proposal-array-filtering issue #5. -JHX: If we have the extension method, we can have polyfills and experimental implementations. +JHX: If we have the extension method, we can have polyfills and experimental implementations. JHX: This is the whole thing and the summary is [slide 131] I hope we can revive the old virtual methods from the bind operator. @@ -103,13 +103,13 @@ JHX: This is the whole thing and the summary is [slide 131] I hope we can revive MM: I want to express my appreciation for this proposal. I'm very very supportive of this going to stage one. I think this is an excellent investigation. I want to point out a tension here, which is I think what this needs to turn into to go forward from stage one and a viable manner is to identify a very simple core that I think is here where there's a lot. lot of power for very very little mechanism and I think you're focusing on the right starting point there, which was the virtual aspect of the original bind operator. But I also like the fact that you're starting by doing this broader exploration before narrowing down to that simpler core too soon so that we can see what the alternatives are. So I very much like this entire thing. Then I have a question specifically on the ternary form. You're showing me the initial discussion before you showed the symbol dot extension form; you had the thing that switch Behavior based on an is Constructor test where if it's a Constructor you're getting it from the Prototype and otherwise are getting it from the object itself. There's a problem with that is many Constructors or classes have static methods and with regard to the static methods are effectively acting as a namespace object for the static methods. So I'm skeptical that kind of dynamic change in Behavior based on the is Constructor test is a viable thing there ; and the other thing is is I would like you to a show again the definition of your pipe combinator because that went by very very quickly before I was able to absorb the the meaning of it. Where you’re showing how the double colon could be used for pipelining rather than the pipelining operator? You showed that you had a pipe function that was applied to other functions in order to insert that into the in order to use the double colon as if it's a pipeline operator. The at method. (slide 50) Could you explain this? -JWK: yeah, it's receiving the incoming values as it’s “this” value. For example, the first pipe call gets “hello” as it’s “this” value then passes it to the f. 
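(Editorial guess at the `pipe` helper on that slide, based on JWK's description that the piped value arrives as `this`; the `::` call in the comment is the proposal's illustrative syntax, not valid today.)

```js
// Each step receives the current value as `this`, applies `f`,
// and returns the result so it can be piped again.
function pipe(f) {
  return f(this);
}

// Illustrative:  'hello'::pipe(s => s.toUpperCase())::pipe(s => s + '!')
// Equivalent today:
pipe.call(pipe.call('hello', s => s.toUpperCase()), s => s + '!'); // 'HELLO!'
```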
Does this clarify? +JWK: yeah, it's receiving the incoming values as it’s “this” value. For example, the first pipe call gets “hello” as it’s “this” value then passes it to the f. Does this clarify? -MM: I think I don't know the double say I'm not. In this one, but I would have expected the pipe to return a function because double colon expects a function. +MM: I think I don't know the double say I'm not. In this one, but I would have expected the pipe to return a function because double colon expects a function. JWK: Oh. I guess that's a mistake. (Clarify: Actually not, I misunderstood the semantics) -MM: Okay, in any case given what I think you meant pipe to say this makes a lot of sense to me and defend the idea that one operator could subsume the utility of pipelining and subsume the utility of the eventual sand till doc proposal. I find a very nice demonstration that there might be a lot of reach for very little mechanism here. So, thank you. +MM: Okay, in any case given what I think you meant pipe to say this makes a lot of sense to me and defend the idea that one operator could subsume the utility of pipelining and subsume the utility of the eventual sand till doc proposal. I find a very nice demonstration that there might be a lot of reach for very little mechanism here. So, thank you. MF: I saw that a few times you were showing how introducing a new namespace avoids possible collisions with your local scope and I wasn't very convinced by the need for that. I think, considering the pros and cons, it's fine to do the resolution in your local scope instead of introducing a new namespace because I think the the developer burden of having to manage these two separate namespaces is not worth that potential risk of collisions and just having to rename imports on the important side. @@ -117,11 +117,11 @@ JWK: Having a separate namespace and enforcing a stronger rule can help the engi MF: Oh, I didn't realize that. Can you explain more about that? -JHX: My intention here is mostly for the developer experience. I try to make much of the behaviors close to us the real method because of the real method, they would not conflict with what that for. For example, It’s very common that you will see the code in any place like the last JSON one. So if we use the extent method with the actual method. I try to make the two limits different between the real method and the end of the extension classes, so it should be able to refactor your code for a real method to extension method or vice versa. This is my starting point. But anyway, can we discuss this later? Maybe? This part is not the most important part in my proposal. But I prefer that. +JHX: My intention here is mostly for the developer experience. I try to make much of the behaviors close to us the real method because of the real method, they would not conflict with what that for. For example, It’s very common that you will see the code in any place like the last JSON one. So if we use the extent method with the actual method. I try to make the two limits different between the real method and the end of the extension classes, so it should be able to refactor your code for a real method to extension method or vice versa. This is my starting point. But anyway, can we discuss this later? Maybe? This part is not the most important part in my proposal. But I prefer that. -MF: Sure. +MF: Sure. -JYU: Yeah, actually the proposal is good for me and I'm just curious about the double colon symbol. So are there any other options as substitutions? 
Because you know, I do a lot of coding with C++. So, it's just a little bit weird to me to use double colon here this way and actually I'm not sure about what the situation is here in other languages in terms of this double colon, so I just want to call this out here and want to hear what others think about it because I just want to know is it natural to use double colons this way from the perspective of a normal JavaScript developer? So this is my little concern from the form of the grammar. +JYU: Yeah, actually the proposal is good for me and I'm just curious about the double colon symbol. So are there any other options as substitutions? Because you know, I do a lot of coding with C++. So, it's just a little bit weird to me to use double colon here this way and actually I'm not sure about what the situation is here in other languages in terms of this double colon, so I just want to call this out here and want to hear what others think about it because I just want to know is it natural to use double colons this way from the perspective of a normal JavaScript developer? So this is my little concern from the form of the grammar. JHX: Yeah, good question. Just decided on this syntax, because syntax is always a problem and I think that we do not have many choices here. It's which is possible, maybe the arrow is possible. I just followed the old operator because it's already there for many years. So, I just follow that and I think the double colon has slightly better ergonomics than all the other options. But it is just syntax problems which we can always discuss. @@ -131,13 +131,13 @@ JWK: It’s kind of like a language that supports both fp style and oo style. Fo JHX: I think this is really a problem with the pipeline operator, I think actually I like it. Sometimes I use the pure functional programming style and it's over. It’s very helpful, but as my slides showed that if you want to mix styles then the experience is not good. So I think it's hard to say, but my personal opinion about that is if we have the extension methods it may help us to choose which form we adopt as a pipeline operator. For example, I think maybe the abstract style is better if we have the extension method. so we have F# style is very good for mainstream functional programming and the problem is when you use it to waste their [?] methods, but if we have the extension method, we do care about that. You just use the F# style for functional programming. Maybe I'm not sure about it but this is my personal feeling. -SYG: All right. So along the same line as JHD was saying, maybe I have a slightly weaker understanding that the pipeline path and something like this would be mutually exclusive but it seems to certainly lean that way that it would be mutually mutually exclusive and I guess that's okay. So I'll State my high level concern which would be addressed if they were mutually exclusive. My high level concern is that if they were to exist there would be too many. There will be a proliferation of different syntax to do the same things with some of the use cases overlapping but not all and I think that would strictly work for readability. But of course if we only go with one of them then that's fine. So I guess the concrete concern is where the champions for the various pipeline operators are in the room. And when do we expect this question of which path do we take as a committee go forward, I guess from JHD’s point of view. This path has already been decided. It's your take from a previous consensus. 
Yeah, this was the explicit discussion that we had. I can probably dig up the notes from when pipeline went to stage one. I see, thanks.

DE: Can I speak for the pipeline champions? I don't think we have consensus as a committee on whether we want to go forward with pipeline; Mozilla raised significant concerns, and I really don't think it's appropriate for us to be saying that these proposals are mutually exclusive at this point. I disagree with that: when I presented on pipeline in the past I specifically said that they're not necessarily mutually exclusive, though there would be a cost to having both. We just have not agreed on pipeline as a community, so I don't think it's appropriate for us to foreclose discussion about it. I have other concerns further down on the queue, like my reasons for pushing for pipeline rather than bind, but I don't think we have a process reason to pick just one. As for the current status of pipeline, I would really be happy to have additional co-champions on the proposal, because I don't have much time to push it forward and I'm not sure how to move forward based on Mozilla's feedback. So please get in touch with me if you're interested in pipeline.
SYG: Thank you Dan. I think my constraint here. Is stage one. Well right now I guess we're at State 0 for for this proposal, but this does seem too early for us to make a mutual exclusion call, that does seem inappropriate, but I think I would object more strongly come stage 2 proposals become become stage 2 and perhaps by then the shapes of these various proposals have taken in the evolution. Is that they serve completely different purposes of that point and they're no longer so overlapping and that's fine too. But if they remain overlapping by slaves to telegraphing that I will be gravely concerned. -AKI: That's a problem for future tc39 not us. +AKI: That's a problem for future tc39 not us. WH: I did not understand the separate namespace in the presentation. You said that the thing after the `::` is in a separate namespace, which I understand, but then you had examples like `::Math.abs`. Does that mean that `Math` is now in a different namespace? @@ -147,31 +147,31 @@ WH: Looking at the longer term consequences if we adopted this: This is mutually JWK: You can import normal functions as namespaced extension functions in the syntax. -DRO: Generally speaking. I feel like the I believe you're calling it The turn a syntax that really feels like it should be a separate proposal because I understand how you might want it to relate to extensions, but the semantics it has itself of sort of this magical behavior of sometimes depending on whether something is a Constructor or not going from from the Prototype or going Static is like that to me seems something that needs its own discussions first. Is this sort of binding calling approach of the double colon? I'm not really comfortable with the two of them being mixed together because they seem to be very different. +DRO: Generally speaking. I feel like the I believe you're calling it The turn a syntax that really feels like it should be a separate proposal because I understand how you might want it to relate to extensions, but the semantics it has itself of sort of this magical behavior of sometimes depending on whether something is a Constructor or not going from from the Prototype or going Static is like that to me seems something that needs its own discussions first. Is this sort of binding calling approach of the double colon? I'm not really comfortable with the two of them being mixed together because they seem to be very different. JHX: Okay, I understand your point and I think it's possible to divide this proposal into several proposals, but I designed them in a whole way, so what we can discuss is in the repo issues. Personally I like it to be together to keep the consistency but I think it's okay if the committee likes to separate them. -MM: ya I didn't I didn't need to discuss this. I just wanted to weigh in on that. I agree that this is exclusive with Pipeline and I prefer this to Pipeline and do not consider us to have any consensus to do pipeline rather than this. +MM: ya I didn't I didn't need to discuss this. I just wanted to weigh in on that. I agree that this is exclusive with Pipeline and I prefer this to Pipeline and do not consider us to have any consensus to do pipeline rather than this. DE: I wanted to say for `this` this proposal encourages you to write functions that use `this` but lots of JavaScript developers find this confusing that was frequent feedback we got for pipeline. Oh finally, I don't have to use `this`. So I think that's a significant disadvantage of this proposal. RPR: Okay. So thanks the queue is empty. 
Would you like to ask for stage 1?

JHX: Yes, I'd like to ask for stage 1.

RPR: Any objections?

WH: I'm really reluctant about this. It creates a rift in the ecosystem with two different ways of doing the same thing, which means that half of the people will adopt one way and half will adopt the other way. There will be friction at the boundaries. So far, I see this as just a different function calling syntax, but with a separate namespace.

RBN: That did remind me, and I added the topic to the queue: if we're looking to advance this to stage one, would that supplant the existing stage zero bind proposal, at least as it's listed in the tc39 GitHub proposals repo?

JWK: Hax said the namespace isn't a necessary part of this, if people aren't happy with a separate namespace.

RBN: My question is whether, if this moves forward, it just becomes the new direction of the existing bind proposal, and whether the existing champions of that proposal should weigh in if they still have an interest in it.

RPR: This is a slightly divergent topic, so I think we need to conclude whether Waldemar is making a true block. Can you confirm that you are blocking stage one?

WH: I haven't heard a response.

JHX: I'm not sure I understand the concern. If the concern is about the separate namespace, I think it could be discussed in stage one.

JHX: I think if most people think the separate namespace is bad, we can drop it.

RPR: Okay, so there's a potential to work through that in stage one.

WH: I'm not going to block it from stage 1, but I am really dubious about this proposal advancing past stage 1 due to the rift issue, and I also see this as mutually exclusive with the pipeline proposal.

RPR: Yeah, and multiple people have said that last part. Okay, then given that you are not blocking, and I don't think anyone else has objected, we conclude this section with consensus on stage one. Thank you JHX.

RBN: I still don't feel that my concern was addressed.
I my concern is there is an existing proposal for bind and using this syntax and although it is sitting at stage zero this would essentially block that proposal and feel like it would be worthwhile to have even though it's been sitting fallow for a while individuals representing the champions for that proposal determine whether or not they're concerned concerned. mean, I know Jordan has a domestic use to say that there's two competing proposals for it. But these are also doing the essentially the same thing. So should it just be that proposal and that proposal to get updated? -JHX: It seems like for the old proposal, the Champions do not want to push forward. So this is just another request that if this proposal, I would also like to ask if I can reuse the bind operator proposal. +JHX: It seems like for the old proposal, the Champions do not want to push forward. So this is just another request that if this proposal, I would also like to ask if I can reuse the bind operator proposal. RPR: So I think it's state stage 0 / proposals. Don't don't block the stage one, and we've already said that in stage one will figure out whether this conflicts with the pipeline as well. So I don't think this is a stage one concern but this proposal. -RBN: All right. That's all I wanted to make sure. I have no other concern. Thank you, specifically. +RBN: All right. That's all I wanted to make sure. I have no other concern. Thank you, specifically. JHD: I don't think we should be dictating which; I don't think it makes a difference whether this is a new repo in a new entry on the proposals table, versus whether it reuses the existing one or replaces the existing one. I think that at the time when something advances to Stage 2 is when we should be explicit about which pre-stage-2 things are effectively inactive as a result of that stage to advancement. So, not today. RBN: I appreciate the clarification. There was one other question earlier about whether this ternary form should be split. But again, it's probably post stage one and down has a good point that we should clarify this nephritis and don't dogs. -DE: So I think yeah the process docs change that we Agreed on that you live proposed as specifically mentions that it's possible to take on proposals that others have dropped. This could be considered part of that if we want that to be a thing that we prefer to not happen until after stage Let's decide on that as a committee and document it. +DE: So I think yeah the process docs change that we Agreed on that you live proposed as specifically mentions that it's possible to take on proposals that others have dropped. This could be considered part of that if we want that to be a thing that we prefer to not happen until after stage Let's decide on that as a committee and document it. RBR: Okay. All right, so we conclude this item X you have stage 1. Thank you very much. Thank you. ### Conclusion/Resolution -* proposal advances to stage 1 -* stage 1 concerns about pipeline operator and namespace +- proposal advances to stage 1 +- stage 1 concerns about pipeline operator and namespace ## Dealing with TC39 data + Presenter: Yulia Startsev (YSV) - [slides](https://docs.google.com/presentation/d/1RNRJ1pPgta-1nIISo6I8jOR3SDZsaxhyzo9ak9ymfbM/edit#slide=id.gc6fa3c898_0_0) YSV: Good, so let's talk about tc39 data. And what do I mean by data? I'm going to clarify that in a second. 
The problem that I want to raise is that we've got stuff and it gets out of date and I'm talking about looking at the whole range of our proposals plus the website plus other consumers of tc39 data or people who want to programmatically understand what's happening on committee, maybe run their keep their own tabs of what we've been doing. We've had a few people have projects like that from outside of committee and also from inside of committee. How can we make that easier and keep things up to date? Basically, that's the goal. -YSV: What data are we interested in? So generally from my perspective as maintaining the website what I've been been interested in has been the title, stage, Champions and authors, links to past presentations, test 262 status, a link to the spec, a short description and a simplified example. That's what the current tc39 website is interested alternatively as a delegate who keeps a metric of what's going on. In tc39 for my company. I'm also interested in very similar pieces of data. We also have an effort from the JSC IG the JavaScript Chinese interest group who are maintaining a website that also makes use of this data Etc. So the sources of truth that we have for this information - and there might be other information, please let me know if there are other pieces of information we should be tracking - but the sources of Truth are: the meeting itself is the ultimate source of Truth; this is where we make the decisions and that is reflected in the notes. Additionally we have the proposals repository and I mean the proposals as in plural where we aggregate information about all of the proposals. This is usually the most up-to-date location. We've got test 262 and we've got individual proposal repositories. So if you're running an aggregator of some sort, you have to be aware of all four of these resources because if the proposal repository might be out of date for some reason that you might need to go and double check either in test 262 or individual repositories or in the notes. So usually you find yourself bouncing between those different sources of truth, but generally the source of Truth is going to be coming from GitHub. So this slide is just to just to show what we can get directly out of the GitHub API if we're trying to get this information from an API, and what GitHub will give us if we pull for example all of the proposals from tc39, we're going to get the title and the short description description. Additionally we've written that will parse the stage, Champions, authors, and links to past presentations, and the spec can be generated from the title from the proposal URL and we can also And we parse a simple example, that's what get the places where we that's what we've currently got in place. JSCIG also gets this data, this is also from the JSCIG crawler. So they've got their own crawler. They get very similar information from the proposal repo and generate their Json data from that. So and this is what's the website crawler does. We also pull our information from The Proposal repo, so that's our source of truth then we parse individual read me. Is of each proposal to get the spec. Sorry, we generate the spec from the URL and we parse a simple example directly from the repo explainer. +YSV: What data are we interested in? 
So generally from my perspective as maintaining the website what I've been been interested in has been the title, stage, Champions and authors, links to past presentations, test 262 status, a link to the spec, a short description and a simplified example. That's what the current tc39 website is interested alternatively as a delegate who keeps a metric of what's going on. In tc39 for my company. I'm also interested in very similar pieces of data. We also have an effort from the JSC IG the JavaScript Chinese interest group who are maintaining a website that also makes use of this data Etc. So the sources of truth that we have for this information - and there might be other information, please let me know if there are other pieces of information we should be tracking - but the sources of Truth are: the meeting itself is the ultimate source of Truth; this is where we make the decisions and that is reflected in the notes. Additionally we have the proposals repository and I mean the proposals as in plural where we aggregate information about all of the proposals. This is usually the most up-to-date location. We've got test 262 and we've got individual proposal repositories. So if you're running an aggregator of some sort, you have to be aware of all four of these resources because if the proposal repository might be out of date for some reason that you might need to go and double check either in test 262 or individual repositories or in the notes. So usually you find yourself bouncing between those different sources of truth, but generally the source of Truth is going to be coming from GitHub. So this slide is just to just to show what we can get directly out of the GitHub API if we're trying to get this information from an API, and what GitHub will give us if we pull for example all of the proposals from tc39, we're going to get the title and the short description description. Additionally we've written that will parse the stage, Champions, authors, and links to past presentations, and the spec can be generated from the title from the proposal URL and we can also And we parse a simple example, that's what get the places where we that's what we've currently got in place. JSCIG also gets this data, this is also from the JSCIG crawler. So they've got their own crawler. They get very similar information from the proposal repo and generate their Json data from that. So and this is what's the website crawler does. We also pull our information from The Proposal repo, so that's our source of truth then we parse individual read me. Is of each proposal to get the spec. Sorry, we generate the spec from the URL and we parse a simple example directly from the repo explainer. -YSV: so problem is stuff getting out of date or not be usable for this kind of machine reading of the repositories. Let's take a look at how this can happen. So the proposals repository is a high-level aggregate of all the proposals. It's manually edited by people such as JHD and others who are often keeping this stuff up to date. Now things sometimes get out of date or or they're not correct. We require a delegate to do this update manually after the meeting and it doesn't always have all of the information necessary for aggregators such as the website. So the modification that I would propose to how we work on the proposals repository - so we're talking about the aggregate proposals repository - is to incorporate the JSCIG link checker, which makes sure that all links are up to date. 
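(To make the link-check idea concrete — an illustrative sketch, not the actual JSCIG checker: pull the markdown links out of the proposals README and flag any that no longer resolve. Assumes Node 18+ run as an ES module so that `fetch` and top-level `await` are available.)

```js
// Illustrative link check over a proposals README (hypothetical script).
import { readFile } from "node:fs/promises";

const md = await readFile("README.md", "utf8");
const links = [...md.matchAll(/\]\((https?:\/\/[^)\s]+)\)/g)].map(m => m[1]);

for (const url of links) {
  try {
    const res = await fetch(url, { method: "HEAD" });
    if (!res.ok) console.log(`broken (${res.status}): ${url}`);
  } catch {
    console.log(`unreachable: ${url}`);
  }
}
```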
So one question was have there been examples where something's been not correct and the example, is that the JSCIG link checker did find stuff out of date and posted updates manually. Someone had to go and take information from that link checker and then make manual pull requests to fix issues on the proposals repo. We're talking about a lot of information that could potentially go wrong and we've got people who have been very responsible and keeping the proposals repository up to date. The goal here would be, let's make their job easier by making link checker maybe something part of tc39. Let's join forces with the JSCIG and see how much stuff we can share so that we we can keep each other to date as things go ahead. Septs, who was maintaining the JSCIG link Checker, also recommended that we have automated workflows. I think that is something we can continue the discussion in the issue around merging in the JSCIG link Checker. So please take a look at the reflector for more information there. +YSV: so problem is stuff getting out of date or not be usable for this kind of machine reading of the repositories. Let's take a look at how this can happen. So the proposals repository is a high-level aggregate of all the proposals. It's manually edited by people such as JHD and others who are often keeping this stuff up to date. Now things sometimes get out of date or or they're not correct. We require a delegate to do this update manually after the meeting and it doesn't always have all of the information necessary for aggregators such as the website. So the modification that I would propose to how we work on the proposals repository - so we're talking about the aggregate proposals repository - is to incorporate the JSCIG link checker, which makes sure that all links are up to date. So one question was have there been examples where something's been not correct and the example, is that the JSCIG link checker did find stuff out of date and posted updates manually. Someone had to go and take information from that link checker and then make manual pull requests to fix issues on the proposals repo. We're talking about a lot of information that could potentially go wrong and we've got people who have been very responsible and keeping the proposals repository up to date. The goal here would be, let's make their job easier by making link checker maybe something part of tc39. Let's join forces with the JSCIG and see how much stuff we can share so that we we can keep each other to date as things go ahead. Septs, who was maintaining the JSCIG link Checker, also recommended that we have automated workflows. I think that is something we can continue the discussion in the issue around merging in the JSCIG link Checker. So please take a look at the reflector for more information there. -YSV: So that brings us to another issue. How do we ensure that data is correct like throughout all of our workflow? So one place is that we've got the notes repository. The notes repository a full proceedings more or less of this meeting. It records in real time what our discussions are about and also the conclusions that we come to in this meeting. So this is fantastic and it's my source of truth when I have some ambiguous understanding of something I go to the notes. The problem with this is that the notes are not machine readable and the conclusions do not have a consistent scheme. 
So I'll get into the proposed modifications in a second which are to standardize conclusions and make it possible for a GitHub action to verify that meeting notes are up-to-date in the proposals repository and that they have a consistent scheme. That should be pretty easy to do. So, here's an example of one way where we record an advancement. So this is an advancement to stage one: conclusion/resolution is stage one and we have the proposal link here that tells us which proposal we are working with. So we can always match on the basis of the proposal URL because URLs are unique and we can parse the notes for these proposal links and then attach them to a conclusion. Here we have it recorded as stage one. So I want to note that it will be probably impossible to use the titles as machine readable information because there's too much variance in the titles and we can get all of that information from the proposal link and from the conclusion/resolution, assuming that the conclusion resolution is well-formed. We have no rules about how we record conclusions and resolutions at this moment. Here's another example, this is an update that happened in this meeting where Shu asked if it could remain at stage 3 and this is how it was recorded. Now from a machine-readable perspective consensus on stage 3 might sound like an advancement and depending on how we program this it would just be a no op where it advances from stage 3 to stage 3, but we've also got a lot of extra detail here about for arrays and typed arrays and strings Etc. So this information isn't going to be as usable to a program that is scraping the notes, but we could form this in a way that makes sense to a machine and perhaps people who are checking the notes can get more details this way. What about proposals that don't get advancement? We also don't have a clear scheme for this. We can of course do a fuzzy match on no consensus or no matching and then whatever stage is being sought. .And additionally there are pieces of extra information here, details about why this didn't Advance, which in the proposal update presentation I mentioned we should be very clear about why things don't advance so that we can learn from from past presentations. So, what's the proposal for this conclusion segment? I would suggest that we have a scheme that is consistent when we record the conclusion and result so if something gets consensus for advancement to a given stage, we always record it in the same way. So I've proposed this wording: "consensus for advancement to stage one" or "no consensus for advancement to stage one", with additional details being added as extra bullet points and a special segment called "additional comments" for anything that doesn't fit the schema. Same thing goes for - for example if we take SYG’s presentation of item something that remains at stage 3. we could represent this as consensus for the following changes to proposal. So let's say someone's doing an update and they want to get the committee to be aware of certain changes that have happened since the last presentation we should record those as consensus for the following changes and then detail which changes are being are being consented to and if there's no consensus then we say which ones have been rejected and these two fields can exist at the same time in the same conclusion. So we may have consensus for one change and not for another. Again that there will be additional comments in case something doesn't fit into the schema. 
Finally, there might be something where there are no action items, it's just informative updates on a proposal, and we can say that that's no action items with additional comments. So you'll notice that the schema is very strict. Well, it's not very strict. Basically. The first line is the first set of lines are dedicated to what's the status of the proposal and then we've got an additional comment section for anything that can't be captured by what's the status of the proposal? +YSV: So that brings us to another issue. How do we ensure that data is correct like throughout all of our workflow? So one place is that we've got the notes repository. The notes repository a full proceedings more or less of this meeting. It records in real time what our discussions are about and also the conclusions that we come to in this meeting. So this is fantastic and it's my source of truth when I have some ambiguous understanding of something I go to the notes. The problem with this is that the notes are not machine readable and the conclusions do not have a consistent scheme. So I'll get into the proposed modifications in a second which are to standardize conclusions and make it possible for a GitHub action to verify that meeting notes are up-to-date in the proposals repository and that they have a consistent scheme. That should be pretty easy to do. So, here's an example of one way where we record an advancement. So this is an advancement to stage one: conclusion/resolution is stage one and we have the proposal link here that tells us which proposal we are working with. So we can always match on the basis of the proposal URL because URLs are unique and we can parse the notes for these proposal links and then attach them to a conclusion. Here we have it recorded as stage one. So I want to note that it will be probably impossible to use the titles as machine readable information because there's too much variance in the titles and we can get all of that information from the proposal link and from the conclusion/resolution, assuming that the conclusion resolution is well-formed. We have no rules about how we record conclusions and resolutions at this moment. Here's another example, this is an update that happened in this meeting where Shu asked if it could remain at stage 3 and this is how it was recorded. Now from a machine-readable perspective consensus on stage 3 might sound like an advancement and depending on how we program this it would just be a no op where it advances from stage 3 to stage 3, but we've also got a lot of extra detail here about for arrays and typed arrays and strings Etc. So this information isn't going to be as usable to a program that is scraping the notes, but we could form this in a way that makes sense to a machine and perhaps people who are checking the notes can get more details this way. What about proposals that don't get advancement? We also don't have a clear scheme for this. We can of course do a fuzzy match on no consensus or no matching and then whatever stage is being sought. .And additionally there are pieces of extra information here, details about why this didn't Advance, which in the proposal update presentation I mentioned we should be very clear about why things don't advance so that we can learn from from past presentations. So, what's the proposal for this conclusion segment? 
I would suggest that we have a scheme that is consistent when we record the conclusion and result so if something gets consensus for advancement to a given stage, we always record it in the same way. So I've proposed this wording: "consensus for advancement to stage one" or "no consensus for advancement to stage one", with additional details being added as extra bullet points and a special segment called "additional comments" for anything that doesn't fit the schema. Same thing goes for - for example if we take SYG’s presentation of item something that remains at stage 3. we could represent this as consensus for the following changes to proposal. So let's say someone's doing an update and they want to get the committee to be aware of certain changes that have happened since the last presentation we should record those as consensus for the following changes and then detail which changes are being are being consented to and if there's no consensus then we say which ones have been rejected and these two fields can exist at the same time in the same conclusion. So we may have consensus for one change and not for another. Again that there will be additional comments in case something doesn't fit into the schema. Finally, there might be something where there are no action items, it's just informative updates on a proposal, and we can say that that's no action items with additional comments. So you'll notice that the schema is very strict. Well, it's not very strict. Basically. The first line is the first set of lines are dedicated to what's the status of the proposal and then we've got an additional comment section for anything that can't be captured by what's the status of the proposal? -YSV: Okay. So, how do we enforce this? We can have a script in the notes repository that verifies conclusions against agenda items just to make sure that we've got the like for example if something incorrectly gets recorded like something was supposed to advance to Stagwe 2. And in the notes, it was supposed to advance to stage three. Maybe there was a mistake there. It wouldn't change the contents of the notes repository. It would instead set a flag that informs us that there is an inconsistency between some of our documents. Additionally, I would recommend that we have a five minute break between agenda items to properly summarize what's going on in the notes. So that would give people who are about to present a little bit of time to make sure their setup is ready and that would give a chance for note takers to make sure that it's properly summarized and for the people who just presented to verify the conclusions. So this would add a little bit of time to our committee meetings to make sure that the data that we're recording in the notes is correct. Finally, we update how0we-work the document we have on note-taking and link that note-taking document from the agenda. We have a recommendation on how new people should start working on the notes but maybe it should be more explicitly linked to on the repository on the Note so the people who are taking notes are aware of this schema. +YSV: Okay. So, how do we enforce this? We can have a script in the notes repository that verifies conclusions against agenda items just to make sure that we've got the like for example if something incorrectly gets recorded like something was supposed to advance to Stagwe 2. And in the notes, it was supposed to advance to stage three. Maybe there was a mistake there. It wouldn't change the contents of the notes repository. 
It would instead set a flag that informs us that there is an inconsistency between some of our documents. Additionally, I would recommend that we have a five minute break between agenda items to properly summarize what's going on in the notes. So that would give people who are about to present a little bit of time to make sure their setup is ready and that would give a chance for note takers to make sure that it's properly summarized and for the people who just presented to verify the conclusions. So this would add a little bit of time to our committee meetings to make sure that the data that we're recording in the notes is correct. Finally, we update how0we-work the document we have on note-taking and link that note-taking document from the agenda. We have a recommendation on how new people should start working on the notes but maybe it should be more explicitly linked to on the repository on the Note so the people who are taking notes are aware of this schema. -YSV: Okay, so individual repositories, it's the most complete source of Truth for a single proposal, but at the moment we don't have a consistent schema that is machine readable, and we don't have any metadata now. It's not always up to date. I have found proposals that say that they're in stage 1 when they're in stage 2 or they haven't been updated for a long time or they don't have the most recent link to the notes or something else. So there's often a little bit of distance from an individual proposal repository and the general proposals repository. There's a lag. So w3c has an approach to the problem of metadata. They have a JSON file included in every single repository that gives a little bit of information that's machine readable. So I would propose that we have a similar JSON file that's machine readable and that would be - so we can do this in a couple of different ways. One way is a metadata.json file, another way is strict readme rules about which sections are in the readme, which Fields, so that we can verify those fields directly with the parser. Both are fine. Yeah other stuff here and make sense. And the final thing is we can use GitHub topics. This is also recommended by septs to categorize proposals by their stages. So at the moment we record the stage information in the proposal itself as plain text we can actually just have that as a category. 2 proposals and when a proposal advances we can have it as a stage three category proposal. So those are a couple a couple of adjustments that we can make so if we were to go for the metadata change without changing the readme the reason that I'm proposing using JSON metadata is because the readmes are often needing to address human problems rather than machine problems, and I don't think that we should sacrifice communicating to humans in the explainers in favor of making things machine readable. So the compromise here would be to have a dedicated JSON file with the metadata that is machine readable. So that we have that available for those who need to use it. And then again, we would use GitHub actions to make sure that this is up-to-date and use it sort of as a triaging point. Okay, and this is the thing that I mentioned about using GitHub topics to categorize our proposals accordingly. +YSV: Okay, so individual repositories, it's the most complete source of Truth for a single proposal, but at the moment we don't have a consistent schema that is machine readable, and we don't have any metadata now. It's not always up to date. 
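(A sketch of the kind of conclusion-format check being described — hypothetical, not an existing TC39 action: scan a notes file for “### Conclusion/Resolution” sections and flag any whose first bullet doesn’t start with one of the agreed phrasings.)

```js
// Hypothetical conclusion checker for a meeting-notes markdown file.
import { readFile } from "node:fs/promises";

const ALLOWED = [
  /^consensus for advancement to stage \d/i,
  /^no consensus for advancement to stage \d/i,
  /^consensus for the following changes/i,
  /^no consensus for the following changes/i,
  /^no action items/i,
];

const notes = await readFile(process.argv[2], "utf8");
const sections = notes.split(/^### Conclusion\/Resolution\s*$/m).slice(1);

for (const section of sections) {
  const firstBullet = section
    .split("\n")
    .map(line => line.trim())
    .find(line => line.startsWith("- ") || line.startsWith("* "));
  const text = firstBullet ? firstBullet.replace(/^[-*]\s*/, "") : "";
  if (!ALLOWED.some(re => re.test(text))) {
    console.log(`Nonconforming conclusion: "${text || "(missing)"}"`);
    process.exitCode = 1; // flag the mismatch; never rewrite the notes
  }
}
```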
I have found proposals that say that they're in stage 1 when they're in stage 2 or they haven't been updated for a long time or they don't have the most recent link to the notes or something else. So there's often a little bit of distance from an individual proposal repository and the general proposals repository. There's a lag. So w3c has an approach to the problem of metadata. They have a JSON file included in every single repository that gives a little bit of information that's machine readable. So I would propose that we have a similar JSON file that's machine readable and that would be - so we can do this in a couple of different ways. One way is a metadata.json file, another way is strict readme rules about which sections are in the readme, which Fields, so that we can verify those fields directly with the parser. Both are fine. Yeah other stuff here and make sense. And the final thing is we can use GitHub topics. This is also recommended by septs to categorize proposals by their stages. So at the moment we record the stage information in the proposal itself as plain text we can actually just have that as a category. 2 proposals and when a proposal advances we can have it as a stage three category proposal. So those are a couple a couple of adjustments that we can make so if we were to go for the metadata change without changing the readme the reason that I'm proposing using JSON metadata is because the readmes are often needing to address human problems rather than machine problems, and I don't think that we should sacrifice communicating to humans in the explainers in favor of making things machine readable. So the compromise here would be to have a dedicated JSON file with the metadata that is machine readable. So that we have that available for those who need to use it. And then again, we would use GitHub actions to make sure that this is up-to-date and use it sort of as a triaging point. Okay, and this is the thing that I mentioned about using GitHub topics to categorize our proposals accordingly. -YSV: Okay. So this is the enforcement bit. We would have a script in every proposal repository that verifies it against the proposals repo. So if for example the stage is out of date we can say hey, this failing our build step or or our verification step because the proposals repo says that this proposal is now in stage 3 and this proposal is saying that it's in stage 2, that looks like it's wrong, can you check it it? The actual adjustments would have to be verified by humans to make sure that - we don't want the machine accidentally make a mistake for us. So this would all be checked by humans. And in the case that we choose to have JSON metadata to allow crawlers to have an easier time. To get specific machine readable information that they need then we would verify that JSON using this using this GitHub action against the proposal repository itself and the proposals repository. +YSV: Okay. So this is the enforcement bit. We would have a script in every proposal repository that verifies it against the proposals repo. So if for example the stage is out of date we can say hey, this failing our build step or or our verification step because the proposals repo says that this proposal is now in stage 3 and this proposal is saying that it's in stage 2, that looks like it's wrong, can you check it it? The actual adjustments would have to be verified by humans to make sure that - we don't want the machine accidentally make a mistake for us. So this would all be checked by humans. 
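(As a concrete illustration of the metadata idea — field names and values below are hypothetical, loosely modeled on the W3C approach mentioned above, not an agreed TC39 schema.)

```json
{
  "name": "proposal-example",
  "stage": 2,
  "champions": ["A. Delegate"],
  "spec": "https://tc39.es/proposal-example/",
  "notes": ["https://github.com/tc39/notes/..."],
  "tests": null
}
```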
And in the case that we choose to have JSON metadata to allow crawlers to have an easier time. To get specific machine readable information that they need then we would verify that JSON using this using this GitHub action against the proposal repository itself and the proposals repository. YSV: Okay, finally test 262 repository. Conformance test Suites are fantastic in the test. 262 team does an amazing job with this. Yes, they have specific issues related to proposals that are called test plans for large-scale features. Again, this is another case where the issues are not machine readable PR and issues don't have a consistent titling scheme. There is some similarity, but it's not consistent. The problem there is that you can't with certainty link a PR to a given proposal, for example, if we're doing something around updating promises you might have you might have an issue saying "update promise tests" and not the specific promise tests related to the new proposal. So here's an example of - these are the best examples that I have for the naming schemes that currently exist in the test 262 repo. And the reason why we want this is just so that when we are tracking which tests are available for a proposal we can just point to one place because there's often multiple PRS related things. Okay. So here's an example. We've got we weakrefs and finalization group test plan atomics waitAsync testing plan Etc. And here is a detailed view of one of those. So this also doesn't link back to the proposal. Also, for example, if we were to pull all of the issues, which we can do from the test 262 repository we wouldn't be able to verify that this weakrefs or finalization group tests plan is issue for weakrefs. So The proposed modification is we standardize issue titles so that their machine readable and test 262 repo is of course the source of Truth for test data and we explore GitHub projects as another potential way to organize proposal data so we can leverage GitHub in certain ways. We'll need to do a little bit of research about how that might work. But this is another way that we can look at keeping the information up to date. So here's an example issue template. The change here is that we use the exact feature name and the second half of the title is stage 3 acceptance test plan then in this issue issue we have proposal info we can link to the proposal repository and verify that indeed this is the test suite related to the proposal in question, and then the the rest is free to use by test 262 authors as they wish. And again, you can tag things with projects of an organization in an issue. So if for example, we have a project weakrefs we could tag this issue with that project. So we don't really have any enforcement that we can do here, but we can format the issues using the issue template. @@ -235,15 +236,15 @@ JHD: The first question is just for the sake of the notes. Can you please explai YSV: JSCIG is the JavaScript China interest group. So we I believe it is composed of a number of tc39 members from China. So I believe that includes JHX and a few other people. -JHD: I think automated updating a links in the proposals repository and elsewhere is a win, we should just do it. And while it's awesome that it's being talked about with the committee, I hope that that's not something we require consensus for. +JHD: I think automated updating a links in the proposals repository and elsewhere is a win, we should just do it. 
And while it's awesome that it's being talked about with the committee, I hope that that's not something we require consensus for.

JWK: In the slides about gathering data from the meeting notes, it seems like you're trying to analyze natural language to extract things like stage advancement information, and that might be very error prone. Why not just normalize the data in some form so that it's friendly to machine reading?

YSV: That is actually the proposal. The proposal is that we normalize the conclusions in a consistent way so that they are machine readable.

JWK: Oh, thanks.

WH: As part of this presentation you have a proposal to have a 5 minute break between every pair of items. That is incompatible with physical meetings. There is no way you'll be able to get everybody back in the room within 5 minutes. And if you do it with a virtual meeting: we had something like 38 items in this meeting, so this would remove two hours from this week's meeting.

YSV: Yes, we would be sacrificing meeting time; the gain would be that our conclusions are correct from the beginning. The 5 minute breaks wouldn't necessarily mean that people can leave the room in a physical meeting. It would mean that we just take a pause, people set up their laptops or whatever, and it's a designated time for checking the notes. We can say that we don't want to do this, but that means that our note takers will be rushed. That's currently what happens: if you take notes you will find yourself being rushed to concretely record the conclusion, and at times it will be difficult to get everything formatted correctly. So that's why I'm recommending the break. But as I mentioned, we can have a GitHub action so that before any notes are merged into the repository we validate the conclusions by checking them against a template; if they don't match, the PR fails the check and whoever is responsible for merging the PR has to fix it.

WH: Let's move on.

MS: Isn't one minute enough?

YSV: The time that I suggested is arbitrary. What I meant is a short period of time that's dedicated to making sure that that gets checked. Hopefully that answers your question.

JHD: I also think that having structured JSON metadata in the proposals repo that's used to generate the markdown in it is also a great idea, and hope that also wouldn't require consensus.

LEO: (paraphrased) I think it's very cool to have ideas about what we do on test plans. Test262 has received negative feedback when we have tried to add metadata in the past. A template could be proposed on the test262 repo.
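(For illustration, one possible shape for such a template, following the title scheme described earlier — “<exact feature name> stage 3 acceptance test plan”; the sections below are hypothetical, not an existing test262 template.)

```markdown
Title: <exact feature name> stage 3 acceptance test plan

## Proposal info

- Proposal repo: https://github.com/tc39/proposal-<name>
- Spec text: <link>

## Test plan

<free-form; left to test262 maintainers and contributors>
```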
+LEO: (paraphrased) I think it's very cool to have ideas about we do on tests plans. Test262 has received negative feedback when we have tried to add metadata in the past. A template could be proposed on the test262 repo. YSV: I'm interested in talking about this more offline. ## JSON modules for Stage 3 (cont.) -Presenter: Daniel Ehrenberg (DE) +Presenter: Daniel Ehrenberg (DE) DE: So we were previously talking about the mutable versus immutable question for JSON modules. There were a number of people in the queue. Unfortunately, we lost the queue entries. So if you have comments on the mutable versus immutable- @@ -273,7 +274,7 @@ AKI: I have a screen shot! I can drop it in the chat. https://snaps.akibraun.com MM: The issue is that rather than the UInt8Array example pushing us towards mutability, it should instead push us towards having some immutable way to represent a string of octets. That's all. -DE: Yeah, so I want to be clear. If we make JSON module immutabile it would not creating a new convention that in general module types are immutable. I think we will have to expect that hosts which we explicitly enfranchised to make other modules - with this proposal are likely to make other mutable module types. So if tc39 makes a sort of data type. It's immutable. that's that's a decision we could make but we're not we're not going to - +DE: Yeah, so I want to be clear. If we make JSON module immutabile it would not creating a new convention that in general module types are immutable. I think we will have to expect that hosts which we explicitly enfranchised to make other modules - with this proposal are likely to make other mutable module types. So if tc39 makes a sort of data type. It's immutable. that's that's a decision we could make but we're not we're not going to - MM: Why do you think hosts are biased towards making mutable types rather than making immutable types? @@ -291,11 +292,11 @@ BSH: I think I wanted to state that a little more strongly. On the whole really DDC: Yes, first also have kind of preference for default mutability here. Like I agree with what JHD said that there's kind of a history here for mutability. It's what users expect from node and from JS modules like I might impression kind of is that the main argument against this is that like if you do need guaranteed unchanged fresh copy, like the workarounds for this were somewhat weak like I need to maybe reorder the Imports to make sure I can be the first and lock it down, or I need to to like proxy through a JS module to lock down the object or get a fresh copy. It seems like maybe there's value in having like a stronger guarantee here. This is sort of the use case I'd vision for evaluator attributes, like I'd be really interested in exploring this space if it turned out to be something that is needed. So that's kind of why I like I'd like to have this default of mutability that like follows the historical case here, but like if it turned out that like this was actually a problem like evaluator attribute seem like an interesting tool to explore to like solve some of the workarounds there if we need something better, but just going with going with the historical Direction so far. That said it's not a super strongly held opinions if the temperature of the room is the other way. My main priority here is just getting consensus on one way way or another. -PST: So on microcontroller XS already has both mutable and immutable modules. 
The decision is made outside the language; it is made in a manifest or something like that, but both are useful in practice, especially when memory is very constrained, and so I expect the same will apply to JSON modules when XS implements them. I mean, we use both. Modules that export binary data and things like that usually need to be immutable, most of the time because of the memory constraints. But I mean both; that's just my experience, because we already did it. Thank you.
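As a concrete illustration of the question being debated (not part of PST's remarks), here is a minimal sketch using the import-assertions syntax the JSON modules proposal was layered on at the time; the file names are made up and the exact syntax should be treated as an assumption. Whether the second importer observes the first importer's write is exactly the mutable-versus-immutable choice under discussion.

```js
// config.json
//   { "retries": 3 }

// a.js - hypothetical importer
import config from "./config.json" assert { type: "json" };
config.retries = 10; // only allowed if JSON modules are mutable

// b.js - gets the same cached module record from the module map,
// so under mutable semantics it observes a.js's change; under
// immutable (frozen) semantics the assignment above throws instead
import config from "./config.json" assert { type: "json" };
console.log(config.retries); // 10 if mutable, 3 (and a.js throws) if frozen
```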
If your module exports a Date, you can still change the date even if you freeze the Date object. So there is something else there; it's not just freezing. That's all.
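PST's observation can be checked directly in plain JavaScript: `Object.freeze` locks property bindings, not the internal state behind methods like `setFullYear`. A minimal, runnable sketch:

```js
"use strict";

// Freezing a Date does not stop its internal time value from changing.
const start = Object.freeze(new Date(0));

start.setFullYear(1999);          // still works: [[DateValue]] is internal state, not a property
console.log(start.getFullYear()); // 1999

try {
  start.custom = true;            // ordinary properties, by contrast, are locked
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```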
I heard that there are many other things that we might benefit from dedicated funding for. For example, we previously talked about having a transcriptionist; it seems like even with the automated transcriptions, note-taking remains burdensome. Another would be some kind of professional typesetting help for the final specification in its PDF form. I think these would also be reasonable things to ask for, but the request from the exec committee was to get a clear signal from the committee about whether these are requests from the committee; then later, in the GA, we can discuss the overall budget. So let's open it up to the queue.
I am a massive supporter of Ecma International finding ways to prop up the MDN documentation. It is vital to JavaScript developers, and if we don't have JavaScript developers, no one's going to use our standard. So there's a clear, black-and-white line between what we do and what MDN provides. However, it's a little unclear to me what the structure here would be. How would we use the funding? Would we be hiring a person ourselves? Would this involve giving money to an organization like the Mozilla Foundation? I am a big fan of asking Ecma to utilize our dues in a way that supports the committee, and a big fan of supporting MDN.
I'm pretty confident that if we had some sort of structured plan, we could bring that back to the GA and say: this is how we're going to actually execute on what you previously budgeted for us.
-YSV: So Dan is coming from some feedback that he got from the GA and Waldemar is expressing his concern about his ability to make a decision right now based of the information. +YSV: So Dan is coming from some feedback that he got from the GA and Waldemar is expressing his concern about his ability to make a decision right now based of the information. WH: The answers I’ve gotten have been rather evasive. I want to see actual Swiss Franc amounts. @@ -435,10 +436,9 @@ DE: I agree completely with what Michael said and I think there's a lot of diffe YSV: And Waldemar? - WH: Yeah, when asked to decide such things I feel like I have a fiduciary duty to ask how much. -DE: so should we think he belongs to the Ecma management and general assembly. +DE: so should we think he belongs to the Ecma management and general assembly. WH: Look, let's not confuse things by saying you're deferring it to the Ecma General Assembly. The first thing that the General Assembly will ask is how much you want. @@ -460,7 +460,7 @@ MLS: Yeah. I'm not sure I fully understand both the Waldemar comment, but I thin DE: That's a great answer that question based on my discussions with the Ecma management the just tried to discuss budget trade-offs with them and they said actually what we'd like to hear from the technical committees is what services are you? Are you interested in and then then we can see whether we can fit into the budget and get and get back to you the about this particular request my understanding from them was that it will not be difficult to fit into the budget, but that's something that we'll have to be, you know Revisited in more detail based on our on our feedback. So I think that this is my understanding and we can all you know run run for positions in Ecma management to be involved more, I think I want to respect they said that. we can leave these things to Ecma management and focus on raising our interests. -YSV: Okay. So in the interest of time, I'm going to move on to the second question. So this first question is we would be presenting MDN as a contribution 20,000 francs for GA as a per item basis thing. The second question is do we want to draft a list of items with priorities and the amount that they would cost and present that as a holistic item for Ecma to review. So this is the second temperature check. I have a temperature check screenshot of the first temperature check, please feel free to give your thoughts on a holistic list of items and their priorities that would be presented to Ecma at at some point +YSV: Okay. So in the interest of time, I'm going to move on to the second question. So this first question is we would be presenting MDN as a contribution 20,000 francs for GA as a per item basis thing. The second question is do we want to draft a list of items with priorities and the amount that they would cost and present that as a holistic item for Ecma to review. So this is the second temperature check. I have a temperature check screenshot of the first temperature check, please feel free to give your thoughts on a holistic list of items and their priorities that would be presented to Ecma at at some point AKI: Okay, well people contemplate that question and decide their opinions. I think we are so far over time on this and we need to call it to an end. I would love to see this come back and a little bit more of a concrete form. I made my opinion clear. I am hugely supportive of it conceptually speaking. I would love to see a little bit more structure. We super duper need to move on. 
@@ -469,10 +469,11 @@ YSV: Alright that concludes this topic item and then I will send you the tempera DE: Thank you. ## Continuation: Grouped Accessors and Auto-Accessors + Presenter: Ron Buckton (RBN) -- [proposal]() -- [slides]() +- proposal +- slides RBN: So when we left off the discussion on grouped and auto accessor properties, there was some debate between myself and Daniel Ehrenberg about how this would affect the decorators proposal. I wanted to discuss this with Daniel and offline a bit and I wanted to present some of these to see if we can move past that discussion and continued discussing whether or not we could look for stage 1. So one thing that I wanted to point out is my intention with this proposal is not to block decorators. I do not believe this proposal should be considered blocking. it proposes a new feature that I could not necessarily bring up in the context of decorators on its own as that would be out of scope for the decorators proposal is something that I hope to be able to Leverage. As part of the decorators proposal but not necessarily blocker forces specific decision within that group. This is again something that we have discussed in the decorators call which is why I'm bringing this up to to the contrary. I believe that in general there's value for this proposal even with implicit conversion fields. grouped accessors and auto accessories provide more decoration targets and more flexibility, which I hope I was able to show in some of the earlier flow earlier slides in addition. One of the things that we discussed was migration path and what the migration path is for existing implementations of stage one decorators that go through transpilers such as TypeScript or Babel and that the current proposal does not actually have a clearer migration. Have for decorators on accessors that use get and set based on the current stage one semantics to provide the descriptor the way around this is extremely complex and cumbersome and by being able to provide this you have that flexibility of being able to still have something that decorates the combination of get and set. if we do choose explicit conversion of fields for decorators using a prefix keyword, for example, we have multiple options. users would have the ability to freely choose between using that keyword or the get set syntax the prefix keyword would obviously be shorter for many scenarios for auto accessors. However, allow you to declare the get or set independently and give you a place to actually decorate independently, which you would not be able to do with a field that gets to convert it into an accessor. So it does give you some more expressivity. And auto-accessors still are one of the core features that I want to try to be able to provide with this assuming we can find a syntax that everyone is comfortable with is the ability to succinctly be able to define a public debt and private set type scenario. Another possibility is that since Waldemar mentioned he was concerned about the syntax possibly. Being up too much of possible class in Tech space is that if we did find a perfect keyword that we liked we could theoretically apply it to both scenarios where we could say that keyword whatever keyword we look at says that this actually is an accessor. 
We could expand that out to be more specific, so that it has those individual get and set branches, and they could theoretically decorate the individual getters and setters if we so choose. That is essentially what I show here in this slide: the internal translation of what these things mean. That keyword field is essentially the same as a property with a get/set initializer, which is essentially the same as declaring some private named field with a getter and a setter that wrap it. One of the things that I would like to be able to do with this depends on how quickly this proposal advances; it may or may not, but it does provide an additional option that the decorators proposal can look at and consider, one that is more expressive than a simple keyword could provide - again, not to block that proposal but to give it more options. With that, I'd like to go to the queue and address any discussion topics that are there.
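To make that translation concrete, here is a minimal sketch. The `accessor`-style keyword in the comment is only a stand-in for whatever keyword the proposal might settle on (an assumption, not the proposal's final syntax); the desugared class below it is plain, runnable JavaScript.

```js
// Strawperson (assumed) auto-accessor form:
//
//   class Counter {
//     accessor count = 0;   // property backed by a hidden field,
//   }                       // with a generated getter/setter pair
//
// Hand-written desugaring with today's syntax (private field + get/set):
class Counter {
  #count = 0;            // backing storage for the auto-accessor
  get count() {          // generated getter reads the backing field
    return this.#count;
  }
  set count(value) {     // generated setter writes the backing field
    this.#count = value;
  }
}

const c = new Counter();
c.count = 41;
console.log(c.count + 1); // 42
```

The getter and setter in the desugared form are exactly the "individual get and set branches" that a decorator could target separately.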
So people made bundlers going to talk about [interruption] - So API improvements themselves don't seem to be enough to make bundlers not needed and I'll explain why. So we have bundlers that make a bunch of modules into one big script or some number of scripts or modules to reduce the overhead. That means that JavaScript module semantics are emulated. so when after Serma gave his presentation about module blocks, many people said oh, this is great. Module blocks will solve the bundling problem. So I don't know can they? Well, you can have a module block in a local variable. You can import it and then use it locally. Problem is, this falls over if you have multiple module blocks that want to import each other, because they don't close over the same outerscope and they're not present in the module map. So they just have no way to actually reference each other. What we really want instead if we wanted to bundle modules together is to have some kind of shared space where these modules exist. So for example, maybe this could be the module map. Maybe you have a declaration where you declare that things are present in the module map and then they would be able to import from each other. So if we want to proceed with a JS specific bundling solution then we have these JavaScript module bundles as a declarative way of putting multiple modules in one in one file. So in an environment like HTML or or node js. They could be interpreted as inserting entries into the module map. And then once the bundle is loaded then from anywhere in the realm you're using the same module map and you can import those modules, but actually this proposal alone would leave some aspects of loading performance on the table that we get better from individual resources. It would also leave some privacy and security improvements to be desired. So I'm going to talk about these aspects and especially how they relate to existing scripts and modules and bundlers. DE: So there's a bunch of different things that affect loading performance. One you could you could call the waterfall effect. So you really want to start loading the critical resources that are necessary for the page to run as soon as possible. You could think about this like you want a waterfall rather than rapids or a staircase that are going down gradually because you really want everything to start loading in parallel. If you have a module graph by default you'll have each module references from other modules that it uses and you're loading these things one by one you can opt into prefetching to improve the loading performance, but that can often be difficult. Bundlers handle this by default by just putting everything in the bundle. Load the bundle and you have all the modules. It's also kind of per-resource overhead, each resource that you load has cost. Maybe this is especially bad - I don't want to call out a certain operating system that has kind of slow access to open files on the file system, but this can be slow. And even with HTTP2 and HTTP3 there's overhead. There's less but there's still some so fewer resources mean less overhead, win for bundlers. There can also be caused from emulation now to be fair to Modern bundlers. There are a lot of Advanced Techniques that are used such as bundling CSS rather than putting it in JavaScript that can decrease the cost. But if you try to emulate one file type inside of JavaScript, it can be especially slow to decode because it requires parsing as a string and then interpreting that data again afterwards. 
It's not even visible to the browser what kind of data that string is until it gets through JavaScript logic that exposes it to the browser. So even for JavaScript modules, emulating JavaScript modules with CJS or something like that may not be much slower, but it's not spec compliant either. It doesn't tend to have all of the features, like temporal dead zones or live bindings, completely accurately, because those would cause extra performance overhead. Binary formats are especially bad when emulated because you might need to put them in base64. Emulation cost from bundling makes me sad.
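To make the emulation cost concrete (this is an illustration, not part of DE's slides; the payload and the commented-out file name are made up), a browser-oriented sketch of what inlining a binary resource into a JS bundle implies:

```js
// Inlined-in-the-bundle emulation: the bytes arrive as a base64 string
// inside JavaScript, so they must be decoded and copied before the
// browser can even tell what kind of data they are.
const wasmBase64 = "AGFzbQEAAAA="; // just the 8-byte Wasm header, as a stand-in payload
const bytes = Uint8Array.from(atob(wasmBase64), c => c.charCodeAt(0));
console.log(bytes.length); // 8

// The individually addressed alternative lets the platform stream,
// decode, and compile directly (shown for contrast only):
// const { instance } = await WebAssembly.instantiateStreaming(fetch("./lib.wasm"));
```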
the origin model is easy too, the bundle can only represent things in the same HTTP origin not a different HTTP origin. The term origin is a little complicated. So I'm glossing over that a little bit in terms of URLs semantics. We could build an enforcement mechanism here that would be optional for browsers. So my understanding is that Brave would be interested in this enforcement mechanism and other browsers might not implement it but some other people have said it would be okay to be as an optional step in the specification: the browsers may decide to do use resources either from the batch or fetching individually. They could do this either by offline analysis, or they could do it by online validating the fetch to the underlying resource. If you have personalized contents, that's just out of scope for this proposal. for [?] verifying things as well as sending uncredentialed requests would help and for Content blocking efficiency. The idea is that browsers that want to do content blocking would use their subsetting step to only request the parts of the bundle that they're interested in so it would look just like if it's in the cache so looking at those performance factors if it works. I mean it's a big proposal, but if it works then resource batch preloading could get kind of the best of all worlds in terms of these different factors. And when it comes to things like chunking, because the set of resources is articulated at the site of importing it, it's a completely kind of dynamic way of splitting up things into chunks. So for something like compression, which is hurt by chunking code if it into small chunks here. We get more dynamic chunking and compression that's more optimal. So, you know this proposal is kind of complicated enough, but as further things in this area, one thing I'm especially interested in is using this kind of resource batch format as a convention in building and serving tools. I think this is this is important for being able to deploy this because we're asking the servers to do a couple different things at once, to do the subsetting of the resource batch and pre compression as well as to serve the individual files the resource batch, but I do think it would have other benefits like making it easier to configure servers in serving different HTTP headers. There's ongoing work from Google from (?) Swierski and Yoav Weis about streaming module graph execution. So part of this is getting the parallel compilation. Another part would be considering actually executing the module graph as a potentially non-atomic operation that we can do as we get the modules. That's something that we can consider once the earlier bottlenecks are solved. There's also a possibility that we could think about schemas for efficient downloading of new versions. So you have this upgrade problem where you have an old version of the site and new version, but you may be incrementally pulling down pieces, so this can be solved some extent through this kind of cache busting like putting a hash in the in the URL but that sometimes complicated and non-optimal, maybe we could use the resource batch itself as this unit of atomicity to build a nicer mechanism on top of. Finally the dep cache proposal by Guy Bedford could give a more kind of centralized automatic way to identify the resources list. I think these are all important and but in this presentation, I've kind of focused on the core web side. 
-DE: So to go into the resource patches in bundling and serving which I've had just about the idea is that have resource badge represents the whole static part of a site Frameworks and could output resource batches or authors could directly with the resources list being unfilled then it would be up to bundlers or similar build tools to fill in the resources list based on their whole inter-module understand of what's needed. This could also be used as a file format by minifiers or other kinds of optimizers to translate one batch to another batch. And I hope this could have the potential to reduce the need for configuration, not eliminate, but allow some additional commonalities. Then servers could use the resource batches and input to both serve the substance of the batch and serve the individual responses. I think if we get this right it could improve interoperability between tools and reduce the amount of configuration required. +DE: So to go into the resource patches in bundling and serving which I've had just about the idea is that have resource badge represents the whole static part of a site Frameworks and could output resource batches or authors could directly with the resources list being unfilled then it would be up to bundlers or similar build tools to fill in the resources list based on their whole inter-module understand of what's needed. This could also be used as a file format by minifiers or other kinds of optimizers to translate one batch to another batch. And I hope this could have the potential to reduce the need for configuration, not eliminate, but allow some additional commonalities. Then servers could use the resource batches and input to both serve the substance of the batch and serve the individual responses. I think if we get this right it could improve interoperability between tools and reduce the amount of configuration required. DE: So for discussion, my biggest point is I think resource batch pre-loading would be a more general and more useful construct than JS module bundles, but I do really want to consider JS multiple bundles as well. They've been considered in this committee in the past es6 cycle and I think they're a valid design point. They're a bit overlapping but I also could see us having both in the broader platform. So I want to ask what do you think about that proposition? Does this whole idea seem worthy of further investigation? So one question is, is it important to fund all non JS resources or should the focus really be on just JS bundling? I've. talked about a number of different optimizations or performance factors, and I'm wondering what you think about those. Maybe I'm weighing these in a way that you disagree with. There are also complexity and complexity trade-offs. Maybe this could be solved through other layers and wondering what extensions or applications you're interested in? So this isn't going for a stage. It's a good system safe in and people want to respond because I just said, thank you. -KM: Yes, maybe I don't know the details of how the batch resources would work but one concern that we had internally at Apple over. like web bundles and things like that in the past is that a lot of - if you look at the kind of module graph used by large web applications, they're on the order of like tens of thousands of modules and whence you start having that many resources in the system that just the memory overhead and the overhead of having that many individual resources becomes problematic in and of itself. 
So that was a reason, internally, that we're thinking of going for inline modules. I don't know whether that's actually the case with the batched resources, but I don't know the details of how it would be implemented under the hood. So I don't have an answer right now; I need to think about that.
But I think this really addresses a lot of the concerns that we had with the initial bundles proposal, mainly breaking the tie between what's in the bundle and - let me say that differently. Preventing the bundle from controlling the page itself removes a bunch of the kind of URL games that we were concerned about. I can go into more reasons why this proposal seems appealing, but just in the ways that the larger bundles proposal seemed concerning, I think there are a lot of nice things that make this really compatible and play well with the kinds of bundling tools that are commonly used in the wild, and it seems really appealing from that perspective.