Intercept HTLC forwards for JIT channels #1835

260 changes: 244 additions & 16 deletions lightning/src/ln/channelmanager.rs
@@ -92,8 +92,8 @@ use core::ops::Deref;
pub(super) enum PendingHTLCRouting {
Forward {
onion_packet: msgs::OnionPacket,
/// The SCID from the onion that we should forward to. This could be a "real" SCID, an
/// outbound SCID alias, or a phantom node SCID.
/// The SCID from the onion that we should forward to. This could be a real SCID or a fake one
/// generated using `get_fake_scid` from the scid_utils::fake_scid module.
short_channel_id: u64, // This should be NonZero<u64> eventually when we bump MSRV
},
Receive {
@@ -207,6 +207,24 @@ impl Readable for PaymentId {
Ok(PaymentId(buf))
}
}

/// An identifier used to uniquely identify an intercepted HTLC to LDK.
/// (C-not exported) as we just use [u8; 32] directly
Review comment:

From my understanding, InterceptId is unique but non-random. I.e., a legitimate HTLC sender with knowledge of the routing hint could replay it (even inadvertently) multiple times? I think we have protection against duplicate intercepted payments at L5243. However, this routing hint could leak to an unauthorized third party, who could reuse that knowledge to probe for the presence of a pending HTLC in some kind of deanonymization attack (e.g. "identify that LSP X is the offline sender of Alice"). I wonder if we should do some linear-time processing of duplicated intercepted payments to mask this behavior, where we would automatically fail_intercepted_htlc() after one minute.

Another direction could be to be stricter about the client-side processing of the incoming_shared_secret by suggesting some client salting. Or document that any leak of incoming_shared_secret opens the door to a deanonymization attack, or even worse?

Note, I don't remember the general rules on processing invoices, and offers improve on that, so these concerns might already exist with leaks of ordinary invoices.

Contributor Author:

I'm a bit confused how an attacker would get the incoming_shared_secret without having the node's privkey? I agree that if the attacker had that piece of info + an intercept scid, they may be able to probe for pending intercepts via some timing attack, though note that the error returned is identical to the "don't have available channel for forwarding" error.

Review comment:

I agree this would assume either a leak of the sender's secret key or the routing hop's secret key. In both cases, I think we can assume you're opening the door to a few privacy leaks or worse. As the InterceptId is based on the shared secret, an entity with a leaked intercept id shouldn't be able to hit the Occupied entry at L5256 in channelmanager.

#[derive(Hash, Copy, Clone, PartialEq, Eq, Debug)]
pub struct InterceptId(pub [u8; 32]);

impl Writeable for InterceptId {
fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
self.0.write(w)
}
}

impl Readable for InterceptId {
fn read<R: Read>(r: &mut R) -> Result<Self, DecodeError> {
let buf: [u8; 32] = Readable::read(r)?;
Ok(InterceptId(buf))
}
}
/// Tracks the inbound corresponding to an outbound HTLC
#[allow(clippy::derive_hash_xor_eq)] // Our Hash is faithful to the data, we just don't have SecretKey::hash
#[derive(Clone, PartialEq, Eq)]
@@ -666,6 +684,8 @@ pub type SimpleRefChannelManager<'a, 'b, 'c, 'd, 'e, M, T, F, L> = ChannelManage
// `total_consistency_lock`
// |
// |__`forward_htlcs`
// | |
// | |__`pending_intercepted_htlcs`
// |
// |__`pending_inbound_payments`
// | |
@@ -751,6 +771,11 @@ pub struct ChannelManager<M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
pub(super) forward_htlcs: Mutex<HashMap<u64, Vec<HTLCForwardInfo>>>,
#[cfg(not(test))]
forward_htlcs: Mutex<HashMap<u64, Vec<HTLCForwardInfo>>>,
/// Storage for HTLCs that have been intercepted and bubbled up to the user. We hold them here
/// until the user tells us what we should do with them.
///
/// See `ChannelManager` struct-level documentation for lock order requirements.
pending_intercepted_htlcs: Mutex<HashMap<InterceptId, PendingAddHTLCInfo>>,
Review comment:

This is an internal map whose growth is driven by an entity external to the node. If the intercepted HTLCs are not drained by the user with forward_intercepted_htlc() or fail_intercepted_htlc() (especially if they were not expected by the application module and there is no automatic timeout for them), it could constitute a DoS vector. Should we implement a hard map bound (MAX_PENDING_INTERCEPTED_HTLC) in LDK, at the price of lower processing capability, or make the documentation of HTLCIntercepted precise that forward_intercepted_htlc or fail_intercepted_htlc MUST be called before a timeout expires, and suggest a value for it (e.g. 10 min)? We could also flush the map after X blocks are received.

What I'm worried about is a third-party HTLC sender exploiting a naive implementation of an HTLC interceptor because the implementation guidelines we're providing are too loose. What do you think? How else could things go wrong with pending_intercepted_htlcs?

Contributor Author:

We could also flush the map after X blocks reception.

Let me know if this isn't resolved, since as you mention below we have ee70d50. My opinion is that that commit plus the config knob is sufficient for now, but we could consider a MAX_PENDING_INTERCEPTED_HTLC in the future.

@ariard (Nov 17, 2022):

Sure, I'm good with 5a543a0 for now. However, could you add a follow-up issue with a reference to MAX_PENDING_INTERCEPTED_HTLC? It can be a source of significant DoS and I would like to be sure we end up implementing it.

Also, I think it would be worthwhile to document accept_intercept_htlcs with an explicit warning to users that it could be a source of DoS or deanonymization attacks, depending on their intercepting modules. Even if the default is currently false, we could have a comment like "The current intercepting API is still experimental and shenanigans from the HTLC sender can be expected. Be conservative in your usage of it".

Collaborator:

I don't see why this is a DoS vector at all - it can't grow to more than 400-something entries per channel, entries which can always exist in our channels. It's some relatively small constant-factor increase over our existing per-HTLC usage (and probably not the biggest memory usage you can get per HTLC).

Review comment:

Note, the new map is global at the ChannelManager level, and PendingAddHTLCInfo -> PendingHTLCInfo -> PendingHTLCRouting -> msgs::OnionPacket, whose current size is 1336 bytes. As of today, the biggest Lightning nodes have something like ~2000 channels. So 2000 * 483 * 1336 = 1290576000, i.e. more than 1 GB; this starts to be a lot of memory if you can find other internal maps to DoS and combine that with memory-DoSing the associated Bitcoin Core node. Let me know if you have another concrete estimate of the DoS risk?

I think we should also be aware that the 483 bound might be modified in the future by lightning/bolts#873 (option_dusty_htlcs_uncounted). The number of pending inbound HTLCs might change by an order of magnitude; however, I agree we're likely to revisit a few more internal maps.
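The back-of-the-envelope bound above can be checked mechanically. A minimal sketch, using the figures quoted in the comment (~2000 channels, the 483 per-channel pending-HTLC bound, a 1336-byte onion packet) and ignoring struct and map overhead:

```rust
// Worst-case size of the global pending-intercepts map, per the figures
// quoted in this review thread (all three numbers are the comment's
// estimates, not constants from the codebase).
fn intercept_map_upper_bound() -> u64 {
    let channels: u64 = 2_000;           // "biggest Lightning node" estimate
    let htlcs_per_channel: u64 = 483;    // per-channel pending-HTLC bound
    let onion_packet_bytes: u64 = 1_336; // quoted size of msgs::OnionPacket
    channels * htlcs_per_channel * onion_packet_bytes
}

fn main() {
    let bytes = intercept_map_upper_bound();
    // 1_290_576_000 bytes, i.e. roughly 1.29 GB
    println!("{} bytes (~{:.2} GB)", bytes, bytes as f64 / 1e9);
}
```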

Collaborator:

Right, mostly we need to drop the onion packet from the pending add info, as that also makes the monitor updates big. That's separate from this, though.

@ariard (Nov 30, 2022):

Can we really drop the onion packet from the pending add info? If the intercepted HTLC proceeds with its forward, we need to send the remaining onion, minus our payload, to the next hop, so I'm not sure we're in sync here. Though I agree this is not new to this PR, as we might already have traditional HTLCs bloating up our maps.

Review comment:

Tracked with #1888


/// Map from payment hash to the payment data and any HTLCs which are to us and can be
/// failed/claimed by the user.
@@ -1566,6 +1591,7 @@ impl<M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelManager<M, T, K, F
pending_outbound_payments: Mutex::new(HashMap::new()),
forward_htlcs: Mutex::new(HashMap::new()),
claimable_htlcs: Mutex::new(HashMap::new()),
pending_intercepted_htlcs: Mutex::new(HashMap::new()),
id_to_peer: Mutex::new(HashMap::new()),
short_to_chan_info: FairRwLock::new(HashMap::new()),

@@ -2206,8 +2232,11 @@ impl<M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelManager<M, T, K, F
let forwarding_id_opt = match id_option {
None => { // unknown_next_peer
// Note that this is likely a timing oracle for detecting whether an scid is a
// phantom.
if fake_scid::is_valid_phantom(&self.fake_scid_rand_bytes, *short_channel_id, &self.genesis_hash) {
// phantom or an intercept.
Comment on lines 2234 to +2235
Contributor:

Not sure I understand what "this" is referring to in this comment.

Contributor Author:

Ah, it's the fact that we have some checks that only apply if the next hop is a "real" channel, so it takes less time to check fake-channel-forwards

if (self.default_configuration.accept_intercept_htlcs &&
fake_scid::is_valid_intercept(&self.fake_scid_rand_bytes, *short_channel_id, &self.genesis_hash)) ||
fake_scid::is_valid_phantom(&self.fake_scid_rand_bytes, *short_channel_id, &self.genesis_hash)
{
None
} else {
break Some(("Don't have available channel for forwarding as requested.", 0x4000 | 10, None));
@@ -3023,6 +3052,102 @@ impl<M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelManager<M, T, K, F
Ok(())
}

/// Attempts to forward an intercepted HTLC over the provided channel id and with the provided
/// amount to forward. Should only be called in response to an [`HTLCIntercepted`] event.
///
/// Intercepted HTLCs can be useful for Lightning Service Providers (LSPs) to open a just-in-time
Review comment:

Grepping, this is the first time we mention LSP in the codebase. I think it could be opportune to make an entry in GLOSSARY.md, at least to document how "we" LDK contributors understand it. The reality behind the term varies a lot across the ecosystem, and it would be good to have a well-defined meaning to make sense of APIs like this one.

Contributor Author:

Not opposed but sounds a bit bikeshed-y, would prefer to punt for now

/// channel to a receiving node if the node lacks sufficient inbound liquidity.
///
/// To make use of intercepted HTLCs, set [`UserConfig::accept_intercept_htlcs`] and use
/// [`ChannelManager::get_intercept_scid`] to generate short channel id(s) to put in the
/// receiver's invoice route hints. These route hints will signal to LDK to generate an
/// [`HTLCIntercepted`] event when it receives the forwarded HTLC, and this method or
/// [`ChannelManager::fail_intercepted_htlc`] MUST be called in response to the event.
///
/// Note that LDK does not enforce fee requirements in `amt_to_forward_msat`, and will not stop
/// you from forwarding more than you received.
///
/// Errors if the event was not handled in time, in which case the HTLC was automatically failed
Review comment:

Could say "not handled in time (HTLC_FAIL_BACK_BUFFER)"

Contributor Author:

HTLC_FAIL_BACK_BUFFER isn't pub, though

Review comment:

Yeah, neither is do_chain_event(). Okay, I still find it nice to give the user hints about where to look in case of issues.

/// backwards.
///
/// [`UserConfig::accept_intercept_htlcs`]: crate::util::config::UserConfig::accept_intercept_htlcs
/// [`HTLCIntercepted`]: events::Event::HTLCIntercepted
// TODO: when we move to deciding the best outbound channel at forward time, only take
// `next_node_id` and not `next_hop_channel_id`
pub fn forward_intercepted_htlc(&self, intercept_id: InterceptId, next_hop_channel_id: &[u8; 32], _next_node_id: PublicKey, amt_to_forward_msat: u64) -> Result<(), APIError> {
Contributor:

just a question to check my understanding:

Is this basically the procedure for an LDK user?

  1. Calls ChannelManager::get_intercept_scid
  2. Generates invoice with intercept scid in route hints for end-user
  3. Receives an HTLCIntercepted event when the HTLC is intercepted
  4. At this point the LDK user can create a channel
  5. Then call this method to forward over that JIT channel

@ViktorTigerstrom (Nov 30, 2022):

Yes, that's my understanding, where the LDK user is the LSP in your example (nodes[1] in the test) and the end-user is nodes[2] in the test. I guess step (2) in your example doesn't necessarily need to happen on the LDK user's side, as long as the intercept scid is passed to the end-user.

@dunxen (Nov 30, 2022):

Ah, I should have read the test.

Yeah I think I phrased that weirdly for (2). End user generates invoice but just shoves the scid they got from the LSP in the route hints.
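The five-step procedure above can be sketched with a self-contained mock. None of the types below are LDK's actual API (the real entry points are `ChannelManager::get_intercept_scid`, `Event::HTLCIntercepted`, and `ChannelManager::forward_intercepted_htlc` / `fail_intercepted_htlc`); this only models the state transitions on the LSP side:

```rust
use std::collections::HashMap;

// Toy stand-ins for LDK types; names and shapes here are illustrative only.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct InterceptId(u64);

#[derive(Debug)]
struct HtlcIntercepted {
    requested_next_hop_scid: u64,
    expected_outbound_amount_msat: u64,
}

#[derive(Default)]
struct MockLsp {
    next_scid: u64,
    pending: HashMap<InterceptId, HtlcIntercepted>,
    forwarded: Vec<(InterceptId, [u8; 32], u64)>, // (id, channel_id, amt_msat)
}

impl MockLsp {
    // Step 1: hand out a fake scid for the receiver's route hint.
    fn get_intercept_scid(&mut self) -> u64 {
        self.next_scid += 1;
        self.next_scid
    }
    // Step 3: an HTLC arrives on an intercept scid; surface an event and
    // park the HTLC in the pending map until the user decides.
    fn intercept(&mut self, scid: u64, amt_msat: u64) -> InterceptId {
        let id = InterceptId(self.pending.len() as u64 + 1); // toy id scheme
        self.pending.insert(id, HtlcIntercepted {
            requested_next_hop_scid: scid,
            expected_outbound_amount_msat: amt_msat,
        });
        id
    }
    // Step 5: after opening the JIT channel (step 4), forward over it.
    // Forwarding removes the pending entry, so a second call errors.
    fn forward_intercepted_htlc(&mut self, id: InterceptId, chan_id: [u8; 32], amt: u64)
        -> Result<(), String>
    {
        self.pending.remove(&id)
            .ok_or_else(|| format!("intercept {:?} not found", id))?;
        self.forwarded.push((id, chan_id, amt));
        Ok(())
    }
}

fn main() {
    let mut lsp = MockLsp::default();
    let scid = lsp.get_intercept_scid();   // 1. fake scid
    // 2. end-user puts `scid` in their invoice route hints (out of band)
    let id = lsp.intercept(scid, 100_000); // 3. HTLCIntercepted event
    let jit_channel_id = [0u8; 32];        // 4. LSP opens a JIT channel
    lsp.forward_intercepted_htlc(id, jit_channel_id, 99_000).unwrap(); // 5.
    assert!(lsp.pending.is_empty());
}
```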

let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(&self.total_consistency_lock, &self.persistence_notifier);

let next_hop_scid = match self.channel_state.lock().unwrap().by_id.get(next_hop_channel_id) {
Some(chan) => {
if !chan.is_usable() {
return Err(APIError::APIMisuseError {
Review comment:

We have APIError::ChannelUnavailable, in fact; it could be used here.

err: format!("Channel with id {:?} not fully established", next_hop_channel_id)
})
}
chan.get_short_channel_id().unwrap_or(chan.outbound_scid_alias())
},
None => return Err(APIError::APIMisuseError {
err: format!("Channel with id {:?} not found", next_hop_channel_id)
})
};

let payment = self.pending_intercepted_htlcs.lock().unwrap().remove(&intercept_id)
.ok_or_else(|| APIError::APIMisuseError {
err: format!("Payment with intercept id {:?} not found", intercept_id.0)
})?;

let routing = match payment.forward_info.routing {
PendingHTLCRouting::Forward { onion_packet, .. } => {
PendingHTLCRouting::Forward { onion_packet, short_channel_id: next_hop_scid }
},
_ => unreachable!() // Only `PendingHTLCRouting::Forward`s are intercepted
};
let pending_htlc_info = PendingHTLCInfo {
outgoing_amt_msat: amt_to_forward_msat, routing, ..payment.forward_info
};

let mut per_source_pending_forward = [(
payment.prev_short_channel_id,
payment.prev_funding_outpoint,
payment.prev_user_channel_id,
vec![(pending_htlc_info, payment.prev_htlc_id)]
)];
self.forward_htlcs(&mut per_source_pending_forward);
Ok(())
}

/// Fails the intercepted HTLC indicated by intercept_id. Should only be called in response to
/// an [`HTLCIntercepted`] event. See [`ChannelManager::forward_intercepted_htlc`].
///
/// Errors if the event was not handled in time, in which case the HTLC was automatically failed
/// backwards.
///
/// [`HTLCIntercepted`]: events::Event::HTLCIntercepted
pub fn fail_intercepted_htlc(&self, intercept_id: InterceptId) -> Result<(), APIError> {
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(&self.total_consistency_lock, &self.persistence_notifier);

let payment = self.pending_intercepted_htlcs.lock().unwrap().remove(&intercept_id)
.ok_or_else(|| APIError::APIMisuseError {
Review comment:

Mutating this line does not break intercepted_payment

err: format!("Payment with InterceptId {:?} not found", intercept_id)
})?;

if let PendingHTLCRouting::Forward { short_channel_id, .. } = payment.forward_info.routing {
let htlc_source = HTLCSource::PreviousHopData(HTLCPreviousHopData {
short_channel_id: payment.prev_short_channel_id,
outpoint: payment.prev_funding_outpoint,
htlc_id: payment.prev_htlc_id,
incoming_packet_shared_secret: payment.forward_info.incoming_shared_secret,
phantom_shared_secret: None,
});

let failure_reason = HTLCFailReason::Reason { failure_code: 0x4000 | 10, data: Vec::new() };
Review comment:

Should this choice of onion error message be standardized? Otherwise, if Eclair uses PERM|1 (invalid_realm) and LND uses UPDATE|20 (channel_disabled), we could have an implementation fingerprint in the support of offline receive.

Do you think it would be worthwhile to introduce a new PERM|23 (custom_routing_policy_unsatisfied) at the spec level as a catch-all for HTLC-intercepting applications?

Contributor Author:

I think there are easier ways to fingerprint implementations as-is, but it would be good for the LSP spec (which this feature is based on) to specify what error should be returned.

Review comment:

If you can, add a comment suggesting the error could be a custom one, as a reminder to update it if/when the LSP spec adopts one.

Collaborator:

I don't think it makes sense to do anything else here, and it definitely doesn't make sense to have a custom error. The error we return here absolutely should be the same error as if we didn't have intercepts enabled, which makes unknown_next_peer correct.

Review comment:

After understanding that this PR is only about JIT channels, I think we effectively shouldn't introduce a custom error. We can revisit in the future, when offline receive at the receiver's LSP, or things like "delay my HTLC", introduce HTLC delay processing and it makes sense to return new onion errors.

let destination = HTLCDestination::UnknownNextHop { requested_forward_scid: short_channel_id };
self.fail_htlc_backwards_internal(htlc_source, &payment.forward_info.payment_hash, failure_reason, destination);
} else { unreachable!() } // Only `PendingHTLCRouting::Forward`s are intercepted

Ok(())
}

/// Processes HTLCs which are pending waiting on random forward delay.
///
/// Should only really ever be called in response to a PendingHTLCsForwardable event.
@@ -5067,28 +5192,82 @@ impl<M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelManager<M, T, K, F
fn forward_htlcs(&self, per_source_pending_forwards: &mut [(u64, OutPoint, u128, Vec<(PendingHTLCInfo, u64)>)]) {
for &mut (prev_short_channel_id, prev_funding_outpoint, prev_user_channel_id, ref mut pending_forwards) in per_source_pending_forwards {
let mut forward_event = None;
let mut new_intercept_events = Vec::new();
let mut failed_intercept_forwards = Vec::new();
if !pending_forwards.is_empty() {
let mut forward_htlcs = self.forward_htlcs.lock().unwrap();
if forward_htlcs.is_empty() {
forward_event = Some(Duration::from_millis(MIN_HTLC_RELAY_HOLDING_CELL_MILLIS))
}
for (forward_info, prev_htlc_id) in pending_forwards.drain(..) {
match forward_htlcs.entry(match forward_info.routing {
PendingHTLCRouting::Forward { short_channel_id, .. } => short_channel_id,
PendingHTLCRouting::Receive { .. } => 0,
PendingHTLCRouting::ReceiveKeysend { .. } => 0,
}) {
let scid = match forward_info.routing {
PendingHTLCRouting::Forward { short_channel_id, .. } => short_channel_id,
PendingHTLCRouting::Receive { .. } => 0,
PendingHTLCRouting::ReceiveKeysend { .. } => 0,
};
// Pull this now to avoid introducing a lock order with `forward_htlcs`.
let is_our_scid = self.short_to_chan_info.read().unwrap().contains_key(&scid);

let mut forward_htlcs = self.forward_htlcs.lock().unwrap();
let forward_htlcs_empty = forward_htlcs.is_empty();
match forward_htlcs.entry(scid) {
hash_map::Entry::Occupied(mut entry) => {
entry.get_mut().push(HTLCForwardInfo::AddHTLC(PendingAddHTLCInfo {
prev_short_channel_id, prev_funding_outpoint, prev_htlc_id, prev_user_channel_id, forward_info }));
},
hash_map::Entry::Vacant(entry) => {
entry.insert(vec!(HTLCForwardInfo::AddHTLC(PendingAddHTLCInfo {
prev_short_channel_id, prev_funding_outpoint, prev_htlc_id, prev_user_channel_id, forward_info })));
if !is_our_scid && forward_info.incoming_amt_msat.is_some() &&
fake_scid::is_valid_intercept(&self.fake_scid_rand_bytes, scid, &self.genesis_hash)
{
let intercept_id = InterceptId(Sha256::hash(&forward_info.incoming_shared_secret).into_inner());
let mut pending_intercepts = self.pending_intercepted_htlcs.lock().unwrap();
match pending_intercepts.entry(intercept_id) {
hash_map::Entry::Vacant(entry) => {
new_intercept_events.push(events::Event::HTLCIntercepted {
requested_next_hop_scid: scid,
payment_hash: forward_info.payment_hash,
inbound_amount_msat: forward_info.incoming_amt_msat.unwrap(),
expected_outbound_amount_msat: forward_info.outgoing_amt_msat,
intercept_id
});
entry.insert(PendingAddHTLCInfo {
prev_short_channel_id, prev_funding_outpoint, prev_htlc_id, prev_user_channel_id, forward_info });
},
hash_map::Entry::Occupied(_) => {
log_info!(self.logger, "Failed to forward incoming HTLC: detected duplicate intercepted payment over short channel id {}", scid);
let htlc_source = HTLCSource::PreviousHopData(HTLCPreviousHopData {
short_channel_id: prev_short_channel_id,
outpoint: prev_funding_outpoint,
htlc_id: prev_htlc_id,
incoming_packet_shared_secret: forward_info.incoming_shared_secret,
phantom_shared_secret: None,
});

failed_intercept_forwards.push((htlc_source, forward_info.payment_hash,
HTLCFailReason::Reason { failure_code: 0x4000 | 10, data: Vec::new() },
Contributor:

Not for this PR, but I've wondered whether we should collect the failure-code consts somewhere so it's easy to see what each one is. I can create an issue if we'd want that, unless I missed some prior reasoning not to.

Contributor Author:

Yeah, I think that would be nice to have!

HTLCDestination::InvalidForward { requested_forward_scid: scid },
));
}
}
} else {
// We don't want to generate a PendingHTLCsForwardable event if only intercepted
// payments are being processed.
if forward_htlcs_empty {
Review comment:

nit: "We don't want to generate a PendingHTLCsForwardable event if only intercepted payments are being processed. Each intercepted HTLC generates its own Event::HTLCIntercepted. However a single directly forwardable HTLC should generate a PendingHTLCsForwardable."

Contributor Author:

Code comment best practices tend to prescribe not duplicating what the code is doing, which is how those additions read to me IMO

forward_event = Some(Duration::from_millis(MIN_HTLC_RELAY_HOLDING_CELL_MILLIS));
}
entry.insert(vec!(HTLCForwardInfo::AddHTLC(PendingAddHTLCInfo {
prev_short_channel_id, prev_funding_outpoint, prev_htlc_id, prev_user_channel_id, forward_info })));
}
}
}
}
}

for (htlc_source, payment_hash, failure_reason, destination) in failed_intercept_forwards.drain(..) {
self.fail_htlc_backwards_internal(htlc_source, &payment_hash, failure_reason, destination);
}

if !new_intercept_events.is_empty() {
let mut events = self.pending_events.lock().unwrap();
events.append(&mut new_intercept_events);
}

match forward_event {
Some(time) => {
let mut pending_events = self.pending_events.lock().unwrap();
@@ -5690,6 +5869,23 @@ impl<M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelManager<M, T, K, F
}
}

/// Gets a fake short channel id for use in receiving intercepted payments. These fake scids are
/// used when constructing the route hints for HTLCs intended to be intercepted. See
/// [`ChannelManager::forward_intercepted_htlc`].
///
/// Note that this method is not guaranteed to return unique values, you may need to call it a few
/// times to get a unique scid.
pub fn get_intercept_scid(&self) -> u64 {
let best_block_height = self.best_block.read().unwrap().height();
let short_to_chan_info = self.short_to_chan_info.read().unwrap();
loop {
let scid_candidate = fake_scid::Namespace::Intercept.get_fake_scid(best_block_height, &self.genesis_hash, &self.fake_scid_rand_bytes, &self.keys_manager);
// Ensure the generated scid doesn't conflict with a real channel.
if short_to_chan_info.contains_key(&scid_candidate) { continue }
return scid_candidate
}
}
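The redraw loop in `get_intercept_scid` above can be modeled with a stubbed generator. Everything here is a stand-in (the real derivation is `fake_scid::Namespace::Intercept.get_fake_scid`, which is keyed and pseudorandom; a counter is used instead so the collision case is observable):

```rust
use std::collections::HashSet;

// Stand-in for the keyed fake-scid derivation; deterministic for the sketch.
fn fake_scid_candidate(counter: &mut u64) -> u64 {
    *counter += 1;
    *counter
}

// Mirrors the loop shown in the diff: redraw until the candidate does not
// collide with the scid of a real channel (LDK checks short_to_chan_info).
fn pick_intercept_scid(real_scids: &HashSet<u64>, counter: &mut u64) -> u64 {
    loop {
        let candidate = fake_scid_candidate(counter);
        if real_scids.contains(&candidate) { continue }
        return candidate
    }
}

fn main() {
    // Channels already occupy scids 1..=3, so the first free candidate is 4.
    let real: HashSet<u64> = [1u64, 2, 3].into_iter().collect();
    let mut counter = 0u64;
    println!("intercept scid: {}", pick_intercept_scid(&real, &mut counter));
}
```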

/// Gets inflight HTLC information by processing pending outbound payments that are in
/// our channels. May be used during pathfinding to account for in-use channel liquidity.
pub fn compute_inflight_htlcs(&self) -> InFlightHtlcs {
@@ -6073,7 +6269,6 @@ where
if height >= htlc.cltv_expiry - HTLC_FAIL_BACK_BUFFER {
let mut htlc_msat_height_data = byte_utils::be64_to_array(htlc.value).to_vec();
htlc_msat_height_data.extend_from_slice(&byte_utils::be32_to_array(height));

timed_out_htlcs.push((HTLCSource::PreviousHopData(htlc.prev_hop.clone()), payment_hash.clone(), HTLCFailReason::Reason {
failure_code: 0x4000 | 15,
data: htlc_msat_height_data
@@ -6083,6 +6278,29 @@
});
!htlcs.is_empty() // Only retain this entry if htlcs has at least one entry.
});

let mut intercepted_htlcs = self.pending_intercepted_htlcs.lock().unwrap();
intercepted_htlcs.retain(|_, htlc| {
if height >= htlc.forward_info.outgoing_cltv_value - HTLC_FAIL_BACK_BUFFER {
let prev_hop_data = HTLCSource::PreviousHopData(HTLCPreviousHopData {
short_channel_id: htlc.prev_short_channel_id,
htlc_id: htlc.prev_htlc_id,
incoming_packet_shared_secret: htlc.forward_info.incoming_shared_secret,
phantom_shared_secret: None,
outpoint: htlc.prev_funding_outpoint,
});

let requested_forward_scid /* intercept scid */ = match htlc.forward_info.routing {
PendingHTLCRouting::Forward { short_channel_id, .. } => short_channel_id,
_ => unreachable!(),
};
timed_out_htlcs.push((prev_hop_data, htlc.forward_info.payment_hash,
HTLCFailReason::Reason { failure_code: 0x2000 | 2, data: Vec::new() },
HTLCDestination::InvalidForward { requested_forward_scid }));
log_trace!(self.logger, "Timing out intercepted HTLC with requested forward scid {}", requested_forward_scid);
false
} else { true }
});
}

self.handle_init_event_channel_failures(failed_channels);
@@ -6991,8 +7209,15 @@ impl<M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> Writeable for ChannelMana
_ => {},
}
}

let mut pending_intercepted_htlcs = None;
let our_pending_intercepts = self.pending_intercepted_htlcs.lock().unwrap();
if our_pending_intercepts.len() != 0 {
pending_intercepted_htlcs = Some(our_pending_intercepts);
}
write_tlv_fields!(writer, {
(1, pending_outbound_payments_no_retry, required),
(2, pending_intercepted_htlcs, option),
Contributor:

Is this even because we want old versions reading this to fail?

Contributor Author:

Yup! Otherwise, the pending intercept HTLCs would never be handled and channels would force close

(3, pending_outbound_payments, required),
(5, self.our_network_pubkey, required),
(7, self.fake_scid_rand_bytes, required),
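The exchange above hinges on the BOLT TLV "it's OK to be odd" rule: a reader must skip unknown odd types but fail on unknown even types, which is why the pending intercepts are written under even type 2. A toy illustration (not LDK's actual TLV machinery; stream entries here are just (type, value) pairs with no real wire encoding):

```rust
// Unknown odd TLV types are ignored (forward compatible); unknown even
// types are "required", so a reader that doesn't understand them must
// refuse the whole stream. Writing pending intercepts under even type 2
// therefore makes old readers fail to deserialize rather than silently
// drop the HTLCs.
fn read_tlv_stream(stream: &[(u64, Vec<u8>)], known: &[u64]) -> Result<Vec<u64>, String> {
    let mut understood = Vec::new();
    for (typ, _value) in stream {
        if known.contains(typ) {
            understood.push(*typ);
        } else if typ % 2 == 1 {
            continue; // unknown odd type: skip it
        } else {
            return Err(format!("unknown required (even) TLV type {}", typ));
        }
    }
    Ok(understood)
}

fn main() {
    let stream = vec![(1u64, vec![]), (2u64, vec![]), (3u64, vec![])];
    // An "old" reader that only knows odd types rejects the stream.
    println!("{:?}", read_tlv_stream(&stream, &[1, 3, 5, 7]));
}
```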
@@ -7306,12 +7531,14 @@ impl<'a, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
// pending_outbound_payments_no_retry is for compatibility with 0.0.101 clients.
let mut pending_outbound_payments_no_retry: Option<HashMap<PaymentId, HashSet<[u8; 32]>>> = None;
let mut pending_outbound_payments = None;
let mut pending_intercepted_htlcs: Option<HashMap<InterceptId, PendingAddHTLCInfo>> = Some(HashMap::new());
let mut received_network_pubkey: Option<PublicKey> = None;
let mut fake_scid_rand_bytes: Option<[u8; 32]> = None;
let mut probing_cookie_secret: Option<[u8; 32]> = None;
let mut claimable_htlc_purposes = None;
read_tlv_fields!(reader, {
(1, pending_outbound_payments_no_retry, option),
(2, pending_intercepted_htlcs, option),
(3, pending_outbound_payments, option),
(5, received_network_pubkey, option),
(7, fake_scid_rand_bytes, option),
@@ -7534,6 +7761,7 @@ impl<'a, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
inbound_payment_key: expanded_inbound_key,
pending_inbound_payments: Mutex::new(pending_inbound_payments),
pending_outbound_payments: Mutex::new(pending_outbound_payments.unwrap()),
pending_intercepted_htlcs: Mutex::new(pending_intercepted_htlcs.unwrap()),

forward_htlcs: Mutex::new(forward_htlcs),
claimable_htlcs: Mutex::new(claimable_htlcs),