# Logbook 2025 H1
- So what we want to do, as the final outcome, is to reduce the memory footprint of the hydra-node.
- There are a couple of ADRs related to persisting a stream of events and having different sinks that can read from the stream.
- Our API needs to become one of these event sinks.
- The first step is to prevent history output by default, as the history can grow pretty large and it is all kept in memory.
- We need to remove the ServerOutput type and map all missing fields to the StateChange type, since that is what we will use to persist changes to disk.
- I understand that we will keep the existing projections, but they will work on the StateChange type, and each change will be forwarded to any existing sinks as the state changes over time.
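  Roughly, a projection folds events into a queryable model; a sketch of the shape I have in mind (names are assumptions, not the actual hydra-node definitions):

  ```haskell
  import Control.Concurrent.STM (STM, modifyTVar', newTVarIO, readTVar)

  -- Assumed shape of a projection handle: it folds events into a model
  -- that the API can query for its latest value.
  data Projection event model = Projection
    { getLatest :: STM model
    , update :: event -> STM ()
    }

  -- Build a projection from an initial model and a step function.
  mkProjection :: model -> (model -> event -> model) -> IO (Projection event model)
  mkProjection initial step = do
    tv <- newTVarIO initial
    pure
      Projection
        { getLatest = readTVar tv
        , update = modifyTVar' tv . flip step
        }
  ```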
- We already have the `PersistenceIncremental` type that appends to disk; can we use a similar handle? Most probably yes, but we need to pick the most performant function to write/read to/from disk.
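  For context, the handle is roughly of this shape (a sketch from memory; the actual hydra-node definition may differ):

  ```haskell
  -- Rough shape of an append-only persistence handle; exact fields and
  -- constraints in hydra-node may differ.
  data PersistenceIncremental a m = PersistenceIncremental
    { append :: a -> m () -- append one event at the end of the file
    , loadAll :: m [a]    -- replay everything persisted so far
    }
  ```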
- Seems like we currently use `eventPairFromPersistenceIncremental` to set up the event stream/sink. What we do is load all events from disk. We also have a `TVar` holding the event id. Ideally we would like to output every new event in our API server. I should take a look at our projections to see how we output individual messages.
- Ok, yeah, projections are displaying the last message, but looking at this code I am realizing how complex everything is. We should strive for simplicity here.
- Another thought: would it help us to use Servant, at least to separate the routing and handlers? I think it could help, but OTOH Servant can get crazy complex really fast.
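  If we went that way, the split would look roughly like this minimal sketch (all names hypothetical, not our actual API):

  ```haskell
  {-# LANGUAGE DataKinds #-}
  {-# LANGUAGE DeriveGeneric #-}
  {-# LANGUAGE TypeOperators #-}

  import Data.Aeson (ToJSON)
  import Data.Proxy (Proxy (..))
  import GHC.Generics (Generic)
  import Network.Wai.Handler.Warp (run)
  import Servant

  -- Hypothetical payload, purely for illustration.
  newtype HeadStatus = HeadStatus {status :: String} deriving (Generic)

  instance ToJSON HeadStatus

  -- Routing lives in a type ...
  type HydraAPI = "head" :> Get '[JSON] HeadStatus

  -- ... and handlers are ordinary functions checked against it.
  server :: Server HydraAPI
  server = pure (HeadStatus "Open")

  main :: IO ()
  main = run 8080 (serve (Proxy :: Proxy HydraAPI) server)
  ```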
- So after looking at the relevant code and the issue https://github.com/cardano-scaling/hydra/issues/1618, I believe the most complex thing would be this: "Websocket needs to emit this information on new state changes." But even this is not hard, I believe, since we have control over what we do when setting up the event source/sink pair.
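  Since we control the sink setup, the websocket part boils down to registering one more sink; a sketch under assumed handle shapes (not the actual hydra-node definitions):

  ```haskell
  -- Sketch: an event sink is just a callback invoked for every new event.
  newtype EventSink e m = EventSink
    { putEvent :: e -> m ()
    }

  -- Hypothetical wiring: whatever broadcasts to connected websocket clients
  -- becomes one more sink, next to the persistence one.
  apiServerSink :: (e -> IO ()) -> EventSink e IO
  apiServerSink broadcast = EventSink {putEvent = broadcast}
  ```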
- Streaming events using `conduit` makes us buy into the `unliftio` and `resourcet` environment. Does this go well with our `MonadThrow` et al classes?
- When using conduits in `createHydraNode`, the `runConduitRes` requires a `MonadUnliftIO` context. We have an `IOSim` usage of this though, and it's not clear whether there can even be a `MonadUnliftIO (IOSim s)` instance.
- We are not only loading `[StateEvent]` fully into memory, but also `[ServerOutput]`.
- Made `mkProjection` take a conduit, but then we are running it once per projection (3 times). Should do something with `fuseBoth` or a zip-like conduit combination (sketch below).
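  A zip-like combination would let all projections consume the stream in a single pass; a minimal sketch using conduit's `ZipSink` (toy folds standing in for `mkProjection`):

  ```haskell
  import Conduit

  -- A hedged sketch (not the actual hydra-node code): fan a single event
  -- stream out to several sinks in one pass using the ZipSink applicative,
  -- instead of re-running the source once per projection.
  countAndSum :: Monad m => ConduitT () Int m () -> m (Int, Int)
  countAndSum events =
    runConduit $
      events .| getZipSink ((,) <$> ZipSink lengthC <*> ZipSink sumC)

  -- For example, countAndSum (yieldMany [1 .. 10]) evaluates to (10, 55).
  ```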
- Started simplifying the `hydra-explorer` and wanted to get rid of all `hydra-node`, `hydra-tx` etc. dependencies, because they include most of the cardano ecosystem. However, on the observer API we will need to refer to cardano specifics like `UTxO` and some hydra entities like `Party` or `HeadId`, so a dependency onto `hydra-tx` is most likely needed.
- Shouldn't these hydra-specific types be in an actual `hydra-api` package? The `hydra-tx` or a future `hydra-client` could then depend on that.
- When defining the observer API I was reaching for the `OnChainTx` data type, as it has JSON instances and enumerates the things we need to observe. However, this would mean we need to depend on `hydra-node` in the `hydra-explorer`.
- Could use the `HeadObservation` type, but that one is maybe a bit too low-level and does not have JSON instances?
- `OnChainTx` is really the level of detail we want (instantiated for cardano transactions, but not corrupted by cardano-internal specifics).
- Logging in the main entry point of `Hydra.Explorer` depends on `hydra-node` anyway. Could we explore something different to get rid of this? Got https://hackage.haskell.org/package/Blammo recommended to me.
- Got everything to compile (with a cut-off `hydra-chain-observer`). Now I want an end-to-end integration test for `hydra-explorer` that does not concern itself with individual observations, but rather checks that the (latest) `hydra-chain-observer` can be used with `hydra-explorer`. That, plus some (golden) testing against the `openapi` schemas, should be enough test coverage.
- Modifying the `hydra` and `hydra-explorer` repositories to integration-test the new HTTP-based reporting.
- Doing so offline from a plane is a bit annoying, as both `nix` and `cabal` would be pulling dependencies from the internet.
- Working around it using an alias to the `cabal`-built binary:

```
alias hydra-chain-observer=../../hydra/dist-newstyle/build/x86_64-linux/ghc-9.6.6/hydra-chain-observer-0.19.0/x/hydra-chain-observer/build/hydra-chain-observer/hydra-chain-observer
```
- `cabal repl` is not picking up the `alias`; maybe I need to add it to `PATH`?
- Adding an `export PATH=<path to binary>:$PATH` to `.envrc` is quite convenient.
- After connecting the two servers via a bounded queue, the test passes, but sub-processes are not gracefully stopped.
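  The connection is essentially a producer/consumer over a bounded queue; a minimal sketch of the idea (all names hypothetical):

  ```haskell
  import Control.Concurrent.STM (TBQueue, atomically, newTBQueueIO, readTBQueue, writeTBQueue)

  -- Hypothetical stand-in for whatever the chain observer reports.
  newtype Observation = Observation String

  -- Shared queue with bounded capacity, giving us back-pressure.
  newObservationQueue :: IO (TBQueue Observation)
  newObservationQueue = newTBQueueIO 100

  -- Observer side: push observations, blocking when the queue is full.
  pushObservation :: TBQueue Observation -> Observation -> IO ()
  pushObservation q = atomically . writeTBQueue q

  -- Explorer side: pop and handle observations forever.
  consumeObservations :: TBQueue Observation -> (Observation -> IO ()) -> IO ()
  consumeObservations q handle = loop
   where
    loop = atomically (readTBQueue q) >>= handle >> loop
  ```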
- I created a relevant issue to track this new feature request to enable stake certificates on the L2 ledger.
- Didn't plan on working on this right away, but wanted to explore a problem with `PPViewHashesDontMatch` when trying to submit a new tx on L2.
- This happens both when obtaining the protocol-parameters from the hydra-node and when querying them from the cardano-node (the latter is expected to fail on L2, since we reduce the fees to zero).
- I added a line to print the protocol-parameters in our tx printer, and it seems like `changePParams` is not setting the protocol-parameters correctly for whatever reason:

```haskell
changePParams :: PParams (ShelleyLedgerEra Era) -> TxBodyContent BuildTx -> TxBodyContent BuildTx
changePParams pparams tx =
  tx{txProtocolParams = BuildTxWith $ Just $ LedgerProtocolParameters pparams}
```
- There is `setTxProtocolParams` that I should probably use instead.
- No luck; how come this didn't work? I don't see why setting the protocol-parameters like this doesn't work....
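  For the record, what I mean is something like this (hedged sketch; the signature of `setTxProtocolParams` is assumed from cardano-api and should be verified against the version in use, same imports as the snippet above):

  ```haskell
  -- Equivalent to the record update above, but going through the
  -- cardano-api setter instead.
  changePParams :: PParams (ShelleyLedgerEra Era) -> TxBodyContent BuildTx -> TxBodyContent BuildTx
  changePParams pparams =
    setTxProtocolParams (BuildTxWith (Just (LedgerProtocolParameters pparams)))
  ```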
- I even compared the protocol-parameters loaded into the hydra-node with the ones I get back from the hydra-node API, and they are the same, as expected.
- Running out of ideas.
- Why do I get a mismatch between the pparams on L2?
- It is because we start the hydra-node in a separate temp directory from the test driver, so I got rid of the problem by querying the hydra-node to obtain the L2 protocol-parameters.
- The weird issue I get is that the budget is overspent, and it seems bumping the `ExecutionUnits` doesn't help at all.
- When pretty-printing the L2 tx I noticed that cpu and memory for the cert redeemer are both zero, so that must be the culprit.
- Adding the cert redeemer separately fixed the issue, but I am now back to `PPViewHashesDontMatch`.
- Not sure why this happens, since I am querying the hydra-node for its protocol parameters and using those to construct the transaction.
- Note that even if I don't change the protocol-parameters, the error is the same.
- This whole chunk of work is to register a script address as a stake certificate, and I still need to try to withdraw zero once this is working.
- One thing I wanted to do is use the dummy script as the provided Data in the cert redeemers - is this even possible?
- When trying to align the `aiken` version in our repository with what is generated into `plutus.json`, I encountered errors in the `hydra-tx` tests, even with the same aiken version as claimed.
- Error: `Expected the B constructor but got a different one`
- Seems to originate from `plutus-core` when it tries to run the builtin `unBData` on data that is not a B (bytestring).
- The full error in the `hydra-tx` tests actually includes what it tried to `unBData`:

```
Caused by: unBData (Constr 0 [ Constr 0 [ List [ Constr 0 [ Constr 0 [ B #7db6c8edf4227f62e1233880981eb1d4d89c14c3c92b63b2e130ede21c128c61 , I 21 ] , Constr 0 [ Constr 0 [ Constr 0 [ B #b0e9c25d9abdfc5867b9c0879b66aa60abbc7722ed56f833a3e2ad94 ] , Constr 1 [] ] , Map [(B #, Map [(B #, I 231)])] , Constr 0 [] , Constr 1 [] ] ] , Constr 0 ....
```

  This looks a lot like a script context. Maybe something is off with the validator arguments?
- How can I inspect the UPLC of an aiken script?
- It must be the "compile-time" parameter of the initial script, which expects the commit script hash. If we use it unapplied on the transaction, the script context trips up the validator code.
- How was the `initialValidatorScript` used on master such that these tests / usages pass?
- Ahh.. someone applied the commit script parameter and stored the resulting script in the `plutus.json`! Most likely using `aiken blueprint apply -v initial` and passing the output of `aiken blueprint hash -v commit` into that.
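  Presumably the stored script was regenerated with something along these lines (exact CLI usage is my guess, not verified):

  ```
  # hash of the commit validator, used as the compile-time parameter ...
  aiken blueprint hash -v commit
  # ... applied to the initial validator, updating plutus.json
  aiken blueprint apply -v initial <commit-script-hash>
  ```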
- Realized that the `plutus.json` blueprint would have said that a script has `parameters`.