Multiple proxies added #1

Draft: wants to merge 9 commits into base: master
130 changes: 124 additions & 6 deletions pallets/collective-proxy/src/lib.rs
@@ -40,11 +40,36 @@ mod benchmarking;
pub mod weights;
pub use weights::WeightInfo;

/// The parameters under which a particular account has a proxy relationship with some other
/// account.
#[derive(
Encode,
Decode,
Clone,
Copy,
Eq,
PartialEq,
Ord,
PartialOrd,
MaxEncodedLen,
TypeInfo,
)]
pub struct ProxyDefinition<AccountId, CallFilter> {
/// The account which may act on behalf of another.
pub proxy: AccountId,
/// A value defining the subset of calls that it is allowed to make.
pub filter: CallFilter,
}

#[frame_support::pallet]
pub mod pallet {
use super::*;

/// The current storage version.
pub const STORAGE_VERSION: StorageVersion = StorageVersion::new(1);

#[pallet::pallet]
#[pallet::storage_version(STORAGE_VERSION)]
pub struct Pallet<T>(_);

// TODO: The pallet is intentionally very basic. It could be improved to handle more origins, more aliases, etc.
@@ -68,11 +93,21 @@ pub mod pallet {
/// Origin that can act on behalf of the collective.
type CollectiveProxy: EnsureOrigin<<Self as frame_system::Config>::RuntimeOrigin>;

/// Account representing the collective treasury.
type ProxyAccountId: Get<Self::AccountId>;

/// Filter to determine whether a call can be executed or not.
type CallFilter: InstanceFilter<<Self as Config>::RuntimeCall> + Default;
type CallFilter: InstanceFilter<<Self as Config>::RuntimeCall>
+ Member
+ Clone
+ Ord
+ PartialOrd
+ Encode
+ Decode
+ MaxEncodedLen
+ TypeInfo
+ Default;

/// The maximum amount of proxies allowed for a single account.
#[pallet::constant]
type MaxProxies: Get<u32>;

/// Weight info
type WeightInfo: WeightInfo;
@@ -85,6 +120,25 @@ pub mod pallet {
CollectiveProxyExecuted { result: DispatchResult },
}

#[pallet::error]
pub enum Error<T> {
/// There are too many proxies registered
TooManyProxies,
/// Proxy registration not found.
NotFound,
}

/// The set of account proxies.
#[pallet::storage]
pub type Proxies<T: Config> = StorageValue<
_,
BoundedVec<
ProxyDefinition<T::AccountId, T::CallFilter>,
T::MaxProxies,
>,
ValueQuery,
>;

Reviewer: Would you consider using a StorageMap, as Dino suggested? It would enforce uniqueness and efficient lookups, and could simplify the logic that ensures no redundant filters are stored.

Author: Yes, reworked.

#[pallet::call]
impl<T: Config> Pallet<T> {
/// Executes a call on behalf of an aliased account.
@@ -98,19 +152,22 @@
})]
pub fn execute_call(
origin: OriginFor<T>,
filter: Option<T::CallFilter>,
Reviewer: Interesting argument choice - why not choose T::AccountId instead?

Author: It's a design choice I had to make. As I said in the PR description, I tried to make minimal changes to the pallet that would add multiple-proxy support while keeping its current design as much as possible (since I don't know the details of the use case). Based on the existing design, I assumed proxy calls would be selected by the needed filter. It would be straightforward to rework it the opposite way (selecting the proxy by the needed proxy account).

Reviewer: The purpose of the pallet was to allow non-EoA accounts to act on behalf of a custom origin type. Practically, this is used for pallet-collective origins when implementing governance. You can check this in one of the runtimes, e.g. shibuya-runtime.

My comment here was more about why the filter is the argument rather than the account Id. It seems strange to specify a call filter from which the proxy account is derived - IMO it would be cleaner if the user directly specified the account Id on whose behalf they want to execute a call. Based on the account Id argument, you check the call filter and decide whether the call is allowed or not. That's how pallet-proxy does it anyway.

Author: Thanks for the context! I looked through the runtime and the usage is clearer now. I reworked to AccountId.

call: Box<<T as Config>::RuntimeCall>,
) -> DispatchResult {
// Ensure origin is valid.
T::CollectiveProxy::ensure_origin(origin)?;

let def = Self::find_proxy(filter)?;

// Account authentication is ensured by the `CollectiveProxy` origin check.
let mut origin: T::RuntimeOrigin =
frame_system::RawOrigin::Signed(T::ProxyAccountId::get()).into();
frame_system::RawOrigin::Signed(def.proxy).into();

// Ensure custom filter is applied.
origin.add_filter(move |c: &<T as frame_system::Config>::RuntimeCall| {
let c = <T as Config>::RuntimeCall::from_ref(c);
T::CallFilter::default().filter(c)
def.filter.filter(c)
});

// Dispatch the call.
Expand All @@ -121,5 +178,66 @@ pub mod pallet {

Ok(())
}

/// Register a proxy account, together with a call filter, for the collective.
///
/// The dispatch origin for this call must pass the `CollectiveProxy` origin check.
///
/// Parameters:
/// - `proxy`: The account to register as a proxy.
/// - `filter`: The call filter applied to the proxy.
#[pallet::call_index(1)]
#[pallet::weight(T::WeightInfo::add_proxy(T::MaxProxies::get()))]
pub fn add_proxy(

Reviewer: What happens when the ProxyAdmin tries to overwrite an existing proxy with a stricter filter? (e.g. a temporary restriction during maintenance or an emergency)

Author: Reworked to a map.

origin: OriginFor<T>,
proxy: T::AccountId,
filter: T::CallFilter,
) -> DispatchResult {
T::CollectiveProxy::ensure_origin(origin)?;
Reviewer: The collective proxy origin can register an arbitrary account. Security-wise, what do you think about this?

Author: Yes, true - it's a security flaw. AFAIU the current pallet design also has it, right? Addressing it would require a major change to the pallet. The proxy pallet uses an announcement mechanism to provide the ability to cancel "proxying". So it's a tradeoff: keep the current minimalist design, or make the pallet more secure. I assumed my task was scoped to the first approach.

Reviewer: Maybe we're not thinking about the same flaw :)

The current pallet's custom CollectiveProxy origin filter is hardcoded in the runtime. A runtime upgrade is a heavily privileged operation and cannot be done easily.

In this new extrinsic, on the other hand, the collective proxy origin can register any new filter it wants. E.g. it could create a proxy to my own account and execute any action it wants (based on the filter). No one would like that 🙂.

Author: True. So, bringing announcements into the design? Are there any other means to prevent it?

Reviewer: An announcement delay is a good feature. Keep in mind, though, that this pallet is currently used only with private accounts, so even with announcements the current situation wouldn't improve.

I have two suggestions for improving this:

  1. Keeping your approach, introduce a special origin type that can register a new proxy - e.g. make it root-only.
  2. Taking a different approach, expand the initial pallet code to allow registering (or defining) multiple account Ids and filters - e.g. a type that implements Get<Vec<(Origin, AccountId, CallFilter)>>. That way the same security level as now is kept.

Author: Implemented the first approach. A corresponding test was also added.

Proxies::<T>::try_mutate(|proxies| -> Result<(), DispatchError> {
let proxy_def = ProxyDefinition {
proxy: proxy.clone(),
filter: filter.clone(),
};
proxies.try_push(proxy_def).map_err(|_| Error::<T>::TooManyProxies)?;
Ok(())
})
}
Reviewer: What happens if there are multiple Proxies for the same account Id?

Author: They differ by call filter, and the first one found with a suitable filter is used. I based this logic on the base proxy pallet.

Reviewer: Yes, I understand it's implemented like that.

But since CollectiveProxy can register any filter it wants, for any account it wants, what is the use case for having multiple ProxyDefinition objects with e.g. the same account Id but different privileges? If one privilege is a superset of the other, why would the proxy ever use the less privileged one?

To follow up on my previous comment - one thing to consider here is the data structure used to store the values. As it is now, we can have a vector full of essentially the same values. Is this good for the functionality? Would you suggest a change here?

Author: Ah, I see your concern. I think it can be addressed with superset validation that would prevent the case you're describing (change added).

Reviewer: That would help, yes, but you could also replace the existing vector with a map-like collection. E.g. if the key is the account Id, then duplicates become impossible.

Author: Yeah. I experimented with such structures (the ordering leftovers were caused by them) but decided the storage costs weren't worth it and eventually used a simple vec. The added code will prevent any redundant duplication.

Reviewer: I see!

Out of curiosity, how much less performant was the BTreeMap (I'm assuming you tried using that 🙂)?

Back to my suggestion above - performance-wise, this should be equivalent to or faster than the vec approach:

    #[pallet::storage]
    pub type Proxies<T: Config> = StorageMap<
        _,
        Blake2_128Concat,
        (T::CollectiveProxy, T::AccountId),
        T::CallFilter,
        ValueQuery,
    >;

The T::CollectiveProxy would be extended to be a parameter-like type. The T::CallFilter would follow your approach.

This way you can support more than one collective proxy type, and each proxy can delegate to multiple accounts, but only one delegation per pair is possible.

Anyway, just something off the top of my head - I haven't tried or implemented it 🙂

Thank you for all the replies and the effort. Except for the question at the start of this comment, no more questions from me :)

Author: I was considering the standard BoundedBTreeMap. I don't think there is any significant performance difference for such small sets of data (and adding the codec's complexity to the equation makes things even more complicated). My concern was mostly the additional storage cost of the tree map. I don't remember exactly, but there was a general optimization hint to use compact storage structures as much as possible.

Thank you for your comments and questions! They helped me a lot to better understand the use case.


/// Unregister a proxy account.
///
/// The dispatch origin for this call must pass the `CollectiveProxy` origin check.
///
/// Parameters:
/// - `proxy`: The proxy account to remove.
/// - `filter`: The call filter that was registered for the proxy.
#[pallet::call_index(2)]
#[pallet::weight(T::WeightInfo::remove_proxy(T::MaxProxies::get()))]
pub fn remove_proxy(
origin: OriginFor<T>,
proxy: T::AccountId,
filter: T::CallFilter,
) -> DispatchResult {
T::CollectiveProxy::ensure_origin(origin)?;
Proxies::<T>::try_mutate(|proxies| -> Result<(), DispatchError> {
let proxy_def = ProxyDefinition {
proxy: proxy.clone(),
filter: filter.clone(),
};
proxies.retain(|def| def != &proxy_def);
Ok(())
})
}
}

impl<T: Config> Pallet<T> {
pub fn find_proxy(
filter: Option<T::CallFilter>,
) -> Result<ProxyDefinition<T::AccountId, T::CallFilter>, DispatchError> {
let f = |x: &ProxyDefinition<T::AccountId, T::CallFilter>| -> bool {
filter.as_ref().map_or(true, |y| &x.filter == y)
};
Ok(Proxies::<T>::get().into_iter().find(f).ok_or(Error::<T>::NotFound)?)
}
}
}
40 changes: 32 additions & 8 deletions pallets/collective-proxy/src/mock.rs
@@ -108,24 +108,48 @@ ord_parameter_types! {
pub const CollectiveProxyManager: AccountId = PRIVILEGED_ACCOUNT;
}

#[derive(Default)]
pub struct MockCallFilter;
#[derive(
Copy,
Clone,
Eq,
PartialEq,
Ord,
PartialOrd,
std::fmt::Debug,
parity_scale_codec::Encode,
parity_scale_codec::Decode,
parity_scale_codec::MaxEncodedLen,
scale_info::TypeInfo,
)]
pub enum MockCallFilter {
Any,
JustTransfer
}
impl Default for MockCallFilter {
fn default() -> Self {
Self::Any
}
}
impl InstanceFilter<RuntimeCall> for MockCallFilter {
fn filter(&self, c: &RuntimeCall) -> bool {
matches!(
c,
RuntimeCall::Balances(pallet_balances::Call::transfer_allow_death { .. })
| RuntimeCall::System(frame_system::Call::remark { .. })
)
match self {
MockCallFilter::Any => true,
MockCallFilter::JustTransfer => {
matches!(
c,
RuntimeCall::Balances(pallet_balances::Call::transfer_allow_death { .. })
)
},
}
}
}

impl pallet_collective_proxy::Config for Test {
type RuntimeEvent = RuntimeEvent;
type RuntimeCall = RuntimeCall;
type CollectiveProxy = EnsureSignedBy<CollectiveProxyManager, AccountId>;
type ProxyAccountId = ProxyAccountId;
type CallFilter = MockCallFilter;
type MaxProxies = ConstU32<2>;
type WeightInfo = ();
}

15 changes: 15 additions & 0 deletions pallets/collective-proxy/src/tests.rs
@@ -27,6 +27,7 @@ fn execute_call_fails_for_invalid_origin() {
assert_noop!(
CollectiveProxy::execute_call(
RuntimeOrigin::signed(1),
None,
Box::new(RuntimeCall::Balances(BalancesCall::transfer_allow_death {
dest: 2,
value: 10
@@ -42,9 +43,16 @@ fn execute_call_filters_not_allowed_call() {
ExtBuilder::build().execute_with(|| {
let init_balance = Balances::free_balance(COMMUNITY_ACCOUNT);

assert_ok!(CollectiveProxy::add_proxy(
RuntimeOrigin::signed(PRIVILEGED_ACCOUNT),
COMMUNITY_ACCOUNT,
MockCallFilter::JustTransfer
));

// Call is filtered, but `execute_call` succeeds.
assert_ok!(CollectiveProxy::execute_call(
RuntimeOrigin::signed(PRIVILEGED_ACCOUNT),
Some(MockCallFilter::JustTransfer),
Box::new(RuntimeCall::Balances(BalancesCall::transfer_keep_alive {
dest: 2,
value: 10
@@ -74,8 +82,15 @@ fn execute_call_succeeds() {
let init_balance = Balances::free_balance(COMMUNITY_ACCOUNT);
let transfer_value = init_balance / 3;

assert_ok!(CollectiveProxy::add_proxy(
RuntimeOrigin::signed(PRIVILEGED_ACCOUNT),
COMMUNITY_ACCOUNT,
MockCallFilter::JustTransfer
));

assert_ok!(CollectiveProxy::execute_call(
RuntimeOrigin::signed(PRIVILEGED_ACCOUNT),
Some(MockCallFilter::JustTransfer),
Box::new(RuntimeCall::Balances(BalancesCall::transfer_allow_death {
dest: 2,
value: transfer_value
46 changes: 46 additions & 0 deletions pallets/collective-proxy/src/weights.rs
@@ -50,6 +50,8 @@ use core::marker::PhantomData;
/// Weight functions needed for pallet_collective_proxy.
pub trait WeightInfo {
fn execute_call() -> Weight;
fn add_proxy(p: u32, ) -> Weight;
fn remove_proxy(p: u32, ) -> Weight;
}

/// Weights for pallet_collective_proxy using the Substrate node and recommended hardware.
@@ -62,6 +64,28 @@ impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
// Minimum execution time: 7_732_000 picoseconds.
Weight::from_parts(7_950_000, 0)
}
fn add_proxy(p: u32, ) -> Weight {
// Proof Size summary in bytes:
// Measured: `161 + p * (37 ±0)`
// Estimated: `4706`
// Minimum execution time: 21_495_000 picoseconds.
Weight::from_parts(22_358_457, 4706)
// Standard Error: 1_606
.saturating_add(Weight::from_parts(64_322, 0).saturating_mul(p.into()))
.saturating_add(T::DbWeight::get().reads(1_u64))
.saturating_add(T::DbWeight::get().writes(1_u64))
}
fn remove_proxy(p: u32, ) -> Weight {
// Proof Size summary in bytes:
// Measured: `161 + p * (37 ±0)`
// Estimated: `4706`
// Minimum execution time: 21_495_000 picoseconds.
Weight::from_parts(22_579_308, 4706)
// Standard Error: 2_571
.saturating_add(Weight::from_parts(62_404, 0).saturating_mul(p.into()))
.saturating_add(T::DbWeight::get().reads(1_u64))
.saturating_add(T::DbWeight::get().writes(1_u64))
}
}

// For backwards compatibility and tests
@@ -73,4 +97,26 @@ impl WeightInfo for () {
// Minimum execution time: 7_732_000 picoseconds.
Weight::from_parts(7_950_000, 0)
}
fn add_proxy(p: u32, ) -> Weight {
// Proof Size summary in bytes:
// Measured: `161 + p * (37 ±0)`
// Estimated: `4706`
// Minimum execution time: 21_495_000 picoseconds.
Weight::from_parts(22_358_457, 4706)
// Standard Error: 1_606
.saturating_add(Weight::from_parts(64_322, 0).saturating_mul(p.into()))
.saturating_add(RocksDbWeight::get().reads(1_u64))
.saturating_add(RocksDbWeight::get().writes(1_u64))
}
fn remove_proxy(p: u32, ) -> Weight {
// Proof Size summary in bytes:
// Measured: `161 + p * (37 ±0)`
// Estimated: `4706`
// Minimum execution time: 21_495_000 picoseconds.
Weight::from_parts(22_579_308, 4706)
// Standard Error: 2_571
.saturating_add(Weight::from_parts(62_404, 0).saturating_mul(p.into()))
.saturating_add(RocksDbWeight::get().reads(1_u64))
.saturating_add(RocksDbWeight::get().writes(1_u64))
}
}