harden as a new integrity level #1912
I made a mistake in my original reading of the proxy steps (I missed the IsCompatiblePropertyDescriptor steps). They do enforce that own property descriptors are stable, and the checks are actually more stringent than the object invariants. I updated the description to reflect this.
IMO, no. With "yes", freezing an almost unrelated object Y elsewhere can change the semantics of frozen object X w.r.t. the proposed semantic changes. It also makes these semantic changes much more expensive when many objects are already frozen. In fact, for a large graph that is almost frozen, you'd either have to do the transitive check each time, or cache the failure of the check, in which case you'd have an unpleasant cache-invalidation problem. With "no", nothing is significantly more expensive than the status quo. Semantic changes are more local, understandable, and intentional. But OTOH it perhaps makes the proxy questions harder.
First, I agree with your subtext here: We should propose to add to the object invariants the stability of non-configurable accessors. I suspect that no implementation is currently in violation of that rule, so it might be an easy sell. Given that the answer to the previous question is "no" (my preference), then …
Given my suggestion above that …
(I have not yet absorbed the new material there.)
Why is it important that proxy objects cannot observe that they are being tested on their hardened status? Proxies can intercept … In line with @erights's first option above (#1912 (comment)), I think the consistent design is to have … If "hardened" is presented as a new integrity level, then it makes sense to only let … You can draw parallels with the other integrity levels if you squint hard enough ;-) For instance: …
As I mentioned, currently a sealed / frozen check can only be triggered directly by a predicate the user code explicitly invokes. I think it would be surprising if an operation like stamping a private field triggered a trap. Currently that does not trigger any user code, and changing it would likely result in security bugs in JS engines.
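For illustration, a hedged sketch of the return-override pattern behind that concern (the class name and helpers are mine): stamping a private field installs it directly on the receiving object, even a proxy, without firing any trap.

```js
// The base constructor returns `obj`, so the derived class's field
// initializer installs #stamped directly on `obj`. Even when `obj`
// is a proxy, no trap fires for the stamping itself.
class Stamper extends class { constructor(obj) { return obj; } } {
  #stamped = true;
  static stamp(obj) { return new Stamper(obj); }
  static isStamped(obj) { return #stamped in obj; }
}

const p = new Proxy({}, {}); // handler never observes the stamp
Stamper.stamp(p);
Stamper.isStamped(p); // true
```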
That's the thing though: throwing from a trap that can supposedly only answer according to its target is surprising, as a non-exotic target would not throw. Since I'm hearing the revoked proxy argument coming, this is another thing that bothers me. A hardened plain object that has a property whose value is a revoked (previously hardened) proxy, should it still be considered hardened? What conceptual property do we want harden to convey if a hardened object can stop being inspectable all of a sudden?
Are we certain about that? A quick look at the spec shows several places where it calls the IsExtensible operation beyond the Object.isExtensible user-facing predicate: https://tc39.es/ecma262/#sec-isextensible-o (even as part of variable binding semantics, it seems).

If you treat hardening as 'deep-freezing' an object, then just like a deep-frozen object can still refer to an object with stateful behavior that can change over time, I see nothing exceptional about a deep-frozen/hardened object holding onto a ref to a revoked proxy that will throw when accessed (isn't this analogous to hardened objects with accessor properties that throw?).

It is only in contexts where user code can ensure that a graph of objects is both non-exotic and deep-frozen that one can make additional assumptions about its behavior (e.g. when the set of objects was created through unmarshalling or some other vetted process that is guaranteed not to generate exotics).
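As a concrete illustration of an internal IsExtensible call (a sketch using nested proxies): when a proxy has an ownKeys trap, the spec's invariant check itself performs IsExtensible on the proxy's target, with no call to Object.isExtensible anywhere.

```js
// The inner proxy logs its isExtensible trap. Object.keys on the outer
// proxy runs the ownKeys invariant check, which internally calls
// IsExtensible(target), where the target here is the inner proxy.
const inner = new Proxy({}, {
  isExtensible(target) {
    console.log('IsExtensible called internally');
    return Reflect.isExtensible(target);
  },
});
const outer = new Proxy(inner, {
  ownKeys(target) { return Reflect.ownKeys(target); },
});
Object.keys(outer); // logs: IsExtensible called internally
```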
A freeze guarantees that the object's own properties will not change. Harden conceptually extends that to the properties' values. That said, a frozen proxy can throw on own property lookups, and I suppose that is similar.
Defensive code worried about reentrancy can avoid invoking accessors, but it cannot avoid proxy traps. I suppose I was trying to avoid more cases where proxy traps can surprisingly trigger, but maybe I have to resign myself to hardening not providing that. I am still considering a "make inert" operation that would effectively replace the proxy with its target, disabling all traps (maybe except call/construct, and maybe get/has/set/delete for non-own properties).
Presumably that would require such defensive code to work at the level of inspecting and using property descriptors rather than direct property access. When going through such pains one can, with similar effort, avoid proxy traps by querying the object for its own property descriptors ahead of time and then interacting with the object only through the resulting descriptors. In fact I just reviewed the implementation for … That said, if you're arguing for … If the desired end goal is to rid an object graph of all exotic behavior, keep in mind that this would also break transparent interposition with membranes.
I'm not sure this is a good idea: replacing a proxy with its target would effectively "dissolve" any defensive membranes wrapped around the target, and potentially leak access to otherwise encapsulated state. I guess what you really want is to create a kind of "safe copy": essentially query the object for its own property descriptors again and re-construct a plain object from this data. This would satisfy both the needs of the defensive code (which wants to protect against exotic proxy objects) and any membrane code (which wants to protect target objects against untrusted clients).
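A minimal sketch of that "safe copy" idea (the function name and details are mine, not a settled API):

```js
// Snapshot the object's own property descriptors in one pass, then
// rebuild a plain, non-exotic object from the snapshot. Subsequent
// interactions go through ordinary semantics, so no traps can run.
function safeCopy(obj) {
  const descriptors = Object.getOwnPropertyDescriptors(obj);
  const proto = Reflect.getPrototypeOf(obj); // note: may itself be exotic
  return Object.defineProperties(Object.create(proto), descriptors);
}
```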
We discussed this further and arrived at the following design: …
Responding to a few open questions here:
Stabilization only stabilizes the object's "own" properties, but some proxy traps (most notably get/set/has) virtualize the entire prototype chain. I think we should apply the principle: "what operations still make sense for a proxy to a frozen object to intercept?"
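To make that principle concrete, a small hedged example: even today, the invariants of a frozen target pin only the answers about own properties, while traps for inherited names remain unconstrained.

```js
const frozenTarget = Object.freeze({ own: 1 });
const p = new Proxy(frozenTarget, {
  get(target, key, receiver) {
    console.log('get trap:', key);
    return Reflect.get(target, key, receiver);
  },
});
p.own;      // trap runs, but must report 1 (non-writable own data property)
p.toString; // trap runs and is unconstrained: toString is only inherited
```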
I think yes, in order to preserve membrane transparency (if a membrane proxy wraps an object that dynamically transitioned from being non-hardened to being hardened, then in order for the proxy to "deny" the private field stamping, it first needs to be able to intercept "isStable" to update its shadow target before answering …).

A final observation: while I like clear terminology during the design phase to distinguish current "freezing" from the new "stabilizing", let's be mindful that the committee (and the JS community at large) may not like the proliferation of integrity-level terminology. (We already have non-extensible, sealed, frozen. Now add 'stable'. There is no natural analogy to hint that "stable" is stronger than "frozen".) Mathieu's earlier API design of passing an optional flag to …
I am not sure I follow. From our discussion, we want the stabilize traps not to trigger if the proxy is stable. The question is whether to consider the proxy stable based solely on its target, or based on a previous answer from its traps (which was verified against the target). I am wondering about the use cases where the target would be marked stable without the proxy's knowledge, and whether the proxy would expect to have a chance to discover it. All other traps would remain triggered in this proposal, so I'm not sure how relevant the prototype chain reference is.
Yeah, that's what I expected. That also means we couldn't remove the risk of user code running during private field stamping, which we've identified as a potential implementation security hazard.
I agree 100%, and likely would still want to propose this as a freeze option to avoid intrinsic proliferation as well (we'd have the new Reflect intrinsics anyway).
My bad, I thought you meant that a proxy whose target is stable would no longer be able to intercept certain operations. For the specific case of membranes that use a shadow target, if the shadow target is already stable at proxy-creation time, then indeed it would not be strictly necessary to trigger the …

The only situation where a membrane proxy must be able to trap is when it is born wrapping a non-stabilized object which then later gets stabilized during its lifetime. In that setup, the membrane proxy would be initialized with a non-stabilized shadow target, and so when …
I'm not sure I understand. Perhaps there is still some confusion due to the ambiguity of the "target" terminology. I'll speak below purely in terms of "shadow" and "emulated object". The proxy itself only knows about the shadow (what is called "target" in the spec). A handler implementing a membrane also knows what object it is emulating (what is unfortunately called the "actual target" in the membrane literature).

Let's start with extensibility, as our precedent for an explicit and shallow integrity level. In fact we decided that both …

First, there is no distinction between "the proxy is non-extensible" and "the proxy's shadow is non-extensible". As with many other state queries about proxies, the shadow is the bookkeeping mechanism for tracking the actual state. The proxy has extremely little state of its own beyond a pointer to its shadow and a pointer to its handler. Even for a revocable proxy, the state change can be represented by nulling these two pointers. https://tc39.es/ecma262/multipage/ordinary-and-exotic-objects-behaviours.html#sec-proxy-object-internal-methods-and-internal-slots

If the shadow is extensible, then both … If the shadow is already non-extensible, then … I propose we would have lost no functionality we actually care about if …

An important edge case is a revoked proxy. Whether or not the shadow was non-extensible before the proxy became revoked, once it is revoked the proxy has no memory of which it was, since it has no other state. On a revoked proxy, both …

I propose that …
As an example of how beautiful it is to use the shadow for the proxy-state bookkeeping, we do not need any additional checks in the proxy mechanism to ensure that the handler can only claim the proxy is stable if it would also claim that the proxy is non-extensible, and would claim that all own properties are non-configurable and non-writable. These checks are already enforced on any attempt to stabilize the shadow, which is all the mechanism we need to enforce these invariants on the proxy. Does this agree with what everyone was saying above?
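A hedged sketch of that shadow-based bookkeeping, using today's extensibility machinery (membrane details heavily simplified; the function name is mine):

```js
// The handler keeps the shadow in sync with the emulated object. The
// engine's invariant checks run against the shadow's real state, so
// once the shadow is non-extensible the handler cannot claim otherwise.
function makeMembraneProxy(emulated) {
  const shadow = {};
  return new Proxy(shadow, {
    preventExtensions(shadowTarget) {
      // Sync the emulated object's own properties onto the shadow,
      // then lock both, so the proxy's answer is backed by the shadow.
      Object.defineProperties(
        shadowTarget,
        Object.getOwnPropertyDescriptors(emulated),
      );
      Reflect.preventExtensions(emulated);
      return Reflect.preventExtensions(shadowTarget);
    },
    isExtensible(shadowTarget) {
      // No proxy-level state needed: just ask the shadow.
      return Reflect.isExtensible(shadowTarget);
    },
  });
}
```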
For …
This follows my second option. However, there is a subtlety with checking stability …
The subtlety is when the shadow is also a proxy. If we trap, we should trap the outermost proxy first, and leave the trap of the target to be triggered either by the handler or by the invariant check. Aka this would be a …
Yeah, for other reasons I am not yet satisfied with harden being an emergent property. But I also can't see how an explicit atomic integrity level on a set of objects could be reconciled with membrane transparency.
@erights Yes it does. As for revocable proxies, there's another option we might consider:

Going back to non-extensibility first: what if we had specified originally that …

Similarly, for …

Returning a boolean would probably be a more pleasant behavior not only for developer-facing APIs, but would also keep a revoked proxy from "un-hardening" an otherwise hardened subgraph. (similarly, …)
I had an idea based on shared context and hooks as part of the hardening process. I drafted an (untested) pseudo-implementation here: https://gist.github.com/mhofman/3c85b5d82f7ec9245336ddd0e38da870

Regarding revocability, I kept the throwing behavior around for now. As an explicit integrity level, there is no regression once hardened. It simply means a hardened state does not guarantee non-throwing behavior on a walk of the object, but in the face of proxies, it never did.

Edit: I just realized the hook system may provide a convoluted way to test whether an object is in the proxy target chain of another object. Not sure if this would be considered a problem for proxy transparency.
Mark and I discussed, and we reached the following conclusions: …
Mark remarked that with these stable semantics, a … We also discussed the possibility of a non-prototype-recursive harden (#1686), and how that meshes with an emergent harden integrity level. To summarize the problem: …
It transpires that the pre-hardened prototype check would also apply to harden pre-lockdown, with the exemption of intrinsics used as prototypes, which would be hardened during lockdown. The justification is that a userland prototype can always be explicitly hardened before or during an instance hardening, even if that prototype was defined by a library unaware of harden. Explicitly hardening prototypes before instances is good code hygiene, and the current behavior of automatically hardening prototypes has been masking cases of prototypes mistakenly not hardened.

The problem is with the definition of intrinsics, and how to virtualize that (aka how to support shims). A shim (or a harden-aware shim adapter) would need to "register the shim's intrinsics" for them to be exempted pre-lockdown. This effectively extends the scope of the "Get Intrinsic" proposal to be a mutable registry, which is worrisome. All registered intrinsics would be hardened at lockdown, which does provide a basis for integrating trusted shims with lockdown. For now, in our non-standardized implementation of harden, we can skip the virtualization concern and require shims (or their adapter) to harden any of their objects used as prototypes.
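A sketch of the pre-hardened-prototype rule described above; every name here (isHardened, registeredIntrinsics, hardenInstance) is assumed for illustration, not a settled API:

```js
const registeredIntrinsics = new WeakSet(); // hypothetical shim registry
const hardenedSet = new WeakSet();          // hypothetical hardened mark
const isHardened = (obj) => hardenedSet.has(obj);

function hardenInstance(obj) {
  const proto = Reflect.getPrototypeOf(obj);
  const exempt = proto === null
    || isHardened(proto)                 // explicitly hardened already
    || registeredIntrinsics.has(proto);  // will be hardened at lockdown
  if (!exempt) {
    throw new TypeError('harden the prototype before hardening instances');
  }
  // ...then transitively freeze obj itself, without walking the proto.
}
```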
During our most recent SES call, we agreed that if going for a Stabilize that makes proxies inert, we cannot break existing proxy usage that expects to remain in a position to observe or cause side effects. One option is to require an explicit opt-in through the presence of a custom stabilize trap.

There is still a concern of erosion of proxy transparency with this capability. The original sin is that the power to stabilize (or even freeze / re-configure) an object is an implicit power held by any holder of the object reference, instead of the object creator.
Good summary Mathieu. I'll take the opportunity to document some of the arguments in more detail, and also propose a potentially safer and less problematic design for adding the ability to make proxies inert, below.

hardening own objects vs hardening untrusted objects

Kris made an important observation when he indicated that … But with the new proposal of having … We should consider whether these opposite uses are both equally served by the same function and the same behavior w.r.t. making proxies inert. I would argue that in the first case (hardening your own objects), chances are you want any proxies to stay active, because they're usually there for a reason! (a simple/benign case could be for logging/tracing purposes).

stabilize trap: the revocability issue

As mentioned, from the proxy creator's point of view, allowing … With a stabilize trap having to give explicit consent, the proxy creator does retain some control. But there is a wrinkle for the simple case of revocation, which in the current API design is supported as a built-in feature by calling … This gives the caller a … In today's API, the authority to revoke is not dependent on the trap logic of the … With the proposed design, a party calling … We discussed this in the meeting, where the argument was made that from a software-engineering point of view the party calling …

A safer stabilizable Proxy API design?

Here's an alternative API that makes "consent" of a proxy creator to make the proxy inert more explicit: introduce a new Proxy constructor … This creates a proxy that can be made inert through … This design has a number of benefits compared to a new …
Note 1: as stabilization implies freezing, a stabilizable proxy would still be able to reject freezing through the …

Note 2: I dislike the name …
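For concreteness, a sketch of the shape such an API could take. The name Proxy.stabilizable and the returned record are assumptions mirroring today's Proxy.revocable, not a settled design:

```js
// Hypothetical API (name assumed): like Proxy.revocable, the creator
// receives a separate `stabilize` capability, so only the creator (or
// whoever it shares that capability with) can make the proxy inert.
const target = Object.freeze({ answer: 42 });
const handler = { get(t, key) { console.log('get', key); return t[key]; } };
const { proxy, stabilize } = Proxy.stabilizable(target, handler);

proxy.answer; // traps still active: logs "get answer"
stabilize();  // explicit consent from the creator side
proxy.answer; // proxy now behaves like direct access to its frozen target
```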
What is the Problem Being Solved?
Currently harden is defined as transitively freezing the object, its properties (whether data values or accessor functions), and each of their prototypes. This can be considered a new integrity level.

However we would like to attach new behavior to this integrity level: …
Currently JavaScript supports 2 integrity levels: Sealed and Frozen. The specification defines those as a non-extensible object (non-extensibility is specified as a flag on the object) plus checks on the own property descriptors (non-configurable only for sealed; also non-writable for data properties of frozen objects).
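For reference, a quick sketch of how those two existing levels behave:

```js
const o = { x: 1 };
Object.seal(o);
Object.isExtensible(o); // false: sealing implies non-extensibility
o.x = 2;                // still allowed: sealed data properties stay writable
Object.isFrozen(o);     // false
Object.freeze(o);
Object.isFrozen(o);     // true: own data properties are now also non-writable
```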
The specification does not itself check for the integrity level state of an object outside of the 2 intrinsics Object.isSealed and Object.isFrozen. While these checks can be memoized / cached for regular objects, they are observable by Proxy exotic objects (the traps are called during the check).
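To make that observability concrete, a small illustrative sketch:

```js
// Object.isFrozen on a proxy runs TestIntegrityLevel, which calls the
// isExtensible, ownKeys, and getOwnPropertyDescriptor traps, so the
// proxy observes (and could interfere with) every frozenness probe.
const p = new Proxy(Object.freeze({ x: 1 }), {
  getOwnPropertyDescriptor(target, key) {
    console.log('probed', key);
    return Reflect.getOwnPropertyDescriptor(target, key);
  },
});
Object.isFrozen(p); // logs `probed x`, then returns true
```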
Description of the Design
To make harden efficient, and to allow the spec to check an object for its hardened state as a side effect of other operations, we would like to make the hardened integrity level an explicitly cached state of objects, and disallow exotic proxy objects from observing when that state is checked.
This state can only be applied atomically to a set of objects once they all have been transitively frozen.
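A userland approximation of that two-phase shape (all names are mine, and a real design would cache the bit inside the engine rather than in a WeakSet):

```js
const hardened = new WeakSet(); // stand-in for an engine-level cached flag

function hardenAll(...roots) {
  const toCommit = new Set();
  const walk = (value) => {
    const isObj = (typeof value === 'object' && value !== null)
      || typeof value === 'function';
    if (!isObj || hardened.has(value) || toCommit.has(value)) return;
    toCommit.add(value);
    Object.freeze(value);
    const descs = Object.getOwnPropertyDescriptors(value);
    for (const key of Reflect.ownKeys(descs)) {
      const desc = descs[key];
      walk(desc.value); walk(desc.get); walk(desc.set);
    }
    walk(Reflect.getPrototypeOf(value));
  };
  roots.forEach(walk);
  // Phase 2: everything reachable is now frozen; commit the mark atomically.
  for (const obj of toCommit) hardened.add(obj);
}
```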
Open questions

- isHardened() predicate? … harden() was applied on it (directly or indirectly) … isHardened() check post-lockdown, but not before). Reverting integrity levels during lockdown does not seem appropriate.
- … harden() to build a weak list of the prototypes encountered, and have lockdown() fail if not all prototypes have been hardened by then, to ensure that lockdown does not roll back the claimed integrity level. More details in Terminate CapTP in non-hardened Realm without SES shim #1686 (comment)