Cap Explorer is the client interface for Cap, an open internet service to store transaction history for NFTs/Tokens.
Learn more about Cap by reading the original spec here.
- Requirements
- Getting Started
- Build the distribution app
- Configure NPM for Github Package Registry
- Tests
- Profiler
- Contribution guideline
## Requirements

- Nodejs
- Yarn or NPM
- The Plug extension
- The DFX SDK to run the CLI
- Configure NPM for Github Package Registry
## Getting Started

The project is split into different packages and uses Lerna, a tool to manage multiple packages in a monorepo.

Use the Lerna bootstrap command to install and link the packages and their dependencies:

```sh
lerna bootstrap
```
To launch the main Dashboard UI for development, run:

```sh
yarn dev:dashboard-dev
```

For local development, however, the CAP Service is required, unless you're running in the MOCKUP environment, which skips the Service by providing mock data.
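As a sketch, assuming MOCKUP is read as a plain environment variable by the dev script (the exact variable handling is an assumption), running with mock data could look like:

```sh
# hypothetical invocation: MOCKUP's presence switches the client to mock data
MOCKUP=1 yarn dev:dashboard-dev
```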
Optionally, you might want to develop against mainnet data:

```sh
yarn dev:dashboard-prod
```
During development, CAP Explorer runs against the CAP canister in the local replica network; as such, CAP is included as a submodule (if you're already running the Service separately on your own, feel free to skip these steps).

After you clone the CAP Explorer repository, pull the CAP submodule content as follows:

```sh
yarn cap:init
```

You only need to do this once, for example, right after cloning the CAP Explorer repository.
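For reference, cap:init presumably wraps the standard Git submodule initialisation; the manual equivalent would be (a sketch, assuming the submodule is registered in .gitmodules):

```sh
# initialise and fetch the CAP submodule recorded in .gitmodules
git submodule update --init --recursive
```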
Note: Make sure you have the DFX SDK installed to run the DFX CLI; otherwise, visit the Dfinity documentation for instructions.
Afterwards, launch the local replica in the foreground (you're advised to do so, to monitor the service; otherwise, feel free to add the --background flag):

```sh
dfx start --clean
```

Once the local replica is running, start the CAP Service (you need to initialise it in a separate terminal session/window if the local replica is running in the foreground):

```sh
yarn cap:start
```

💡 If errors occur, run the following command before starting the local replica (dfx start) and cap:start:

```sh
yarn cap:reset
```
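Putting the steps above together, a typical local session uses two terminals:

```sh
# terminal 1: local replica in the foreground
dfx start --clean

# terminal 2: the CAP Service (run `yarn cap:reset` first if it errors)
yarn cap:start
```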
From then on, the Service is available and you can follow up with your development work. While contributing, though, you'll need to understand how to manage the CAP Service version you're creating a feature against.

When committing to CAP Explorer, the CAP submodule will be detached from the latest commit, or HEAD. This means that your contributions are pinned to a particular CAP commit; otherwise, changes in the CAP submodule could break your changes!

To update the CAP submodule to the latest commit, you can use the convenient command:

```sh
yarn cap:update
```
Alternatively, you can check out any particular commit in the CAP submodule:

```sh
# go to the CAP submodule repo
cd cap

# pull latest or check out a particular commit
git pull origin main
git checkout <commit-hash>

# go back to the parent repo
cd ..
```
In any case, you'll have to commit your changes from the CAP Explorer project root directory (see above 👆 that you have run cd ..). Here's an example:

```sh
git add cap
git commit -m "chore: 🤖 switched to commit of production version"
```
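To verify which CAP commit your working tree is pinned to before committing, you can inspect the submodule state (standard Git, not project-specific):

```sh
# prints the recorded commit hash and ref for the cap submodule
git submodule status cap
```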
During development, use the CAP Service mock generator to get some data for testing by executing the command from the project root:

```sh
yarn cap:generate-mocks
```

The command will generate 20 mocks by default; optionally, a number can be passed to set the generator count. Here's an example for 16 records:

```sh
yarn cap:generate-mocks 16
```

The process handles the creation of the Root bucket canister for the Router canister and the insertion of transactions, which, while sounding trivial, is a long process at the time of writing.

💡 If you'd like to add more token contracts, call cap:generate-mocks again.
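Since each run adds more mock token contracts, repeated invocations accumulate data; for example (counts chosen arbitrarily):

```sh
yarn cap:generate-mocks 8
yarn cap:generate-mocks 4
```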
## Tests

Run the unit tests for the Dashboard UI by:

```sh
yarn test:dashboard-unit
```
## Build the distribution app

When building the distribution version of the application, which is client facing, you need to consider the environment it'll be targeting. Here are the available environments:

- Local, when you'd like to test locally
- Staging, to host it remotely and make calls to a mainnet canister used for tests
- Production, to host it remotely and make calls to the mainnet production canister

Assuming that you have run through the application startup guide, you'd run:

```sh
yarn build:local
```
Similarly, in a Cloud environment you'd have to set up a few environment variables, such as:

- NODE_ENV, the target environment (staging or production)
- IC_HISTORY_ROUTER_ID, the mainnet canister id (for staging or production)
- MOCKUP, if present, the presented data will be fake (dynamically generated on the client side; this can be useful for a staging or testing environment)
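In a shell-based cloud runner, these could be exported before the build step; the values below are placeholders, not real settings:

```sh
# placeholder values: substitute your real target settings
export NODE_ENV=staging
export IC_HISTORY_ROUTER_ID="<your-mainnet-canister-id>"
# optional: present dynamically generated fake data on the client side
export MOCKUP=1
```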
In the Cloud runner, after installing all the dependencies, you'd run:

```sh
yarn build:staging
```

We've used the staging environment as an example; production is also available:

```sh
yarn build:production
```
Bear in mind that Psychedelic keeps packages in the GitHub Registry, so you need to make sure you have set up a Personal Access Token to read from the GitHub registry. If you haven't set one up, or you're required to build in the cloud for staging or production, find the instructions here.

The Staging URL can be found here and should be in sync with the latest successful build and deploy of the HEAD of the develop branch:

https://zx4fi-ryaaa-aaaad-qa4ya-cai.ic.fleek.co
## Configure NPM for Github Package Registry

You'll need to have the @Psychedelic GitHub Package Registry set up; if you haven't done so for other projects, find out how here.

To pull and install from @Psychedelic via the NPM CLI, you'll need:

- A personal access token (you can create a personal access token here)
- The personal access token with the correct scopes (repo, org repositories and read:packages) to be granted access to the GitHub Package Registry
- Authentication via npm login, using your GitHub username for the username and the personal access token as your password
Once you have those ready, run:

```sh
npm login --registry=https://npm.pkg.github.com --scope=@psychedelic
```

Note: You only need to configure this once to install the package! On npm login, provide your GitHub email as your username and the Personal Access Token as the password.
You can also set up your npm global settings to fetch from the GitHub registry every time it finds a @Psychedelic package:

```sh
npm set //npm.pkg.github.com/:_authToken "$PAT"
```

Alternatively, the token can be set directly in your .npmrc as //npm.pkg.github.com/:_authToken=xxxxxx, but we'd like to keep it clean and tokenless. Here's an example where the PAT is an environment variable:

```
@psychedelic:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${PAT}
```
Whichever option is best for you, you'll then be able to install packages from @psychedelic, such as @psychedelic/cap-js.
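For example, once the registry is configured, installing cap-js is the usual one-liner:

```sh
yarn add @psychedelic/cap-js
# or, with npm:
npm install @psychedelic/cap-js
```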
Our stylesheets are created with Stitches, a CSS-in-JS library with best-in-class developer experience. Our icons are managed with the Icomoon Web App, which generates the icon fonts for us; they're placed in-component by our font toolkit, Fleekon.
## Profiler

Generate statistics about the modules for the dashboard by running the profiler command:

```sh
yarn profiler:dashboard
```

The webpack bundle analyzer will start and, if your OS allows it, open the dashboard in your default browser. Optionally, find the webpack-stats.json in packages/dashboard and upload it to webpack analyse.
## Contribution guideline

Create branches from the main branch and name them in accordance with conventional commits here, or follow the examples below:

```
test: 💍 Adding missing tests
feat: 🎸 A new feature
fix: 🐛 A bug fix
chore: 🤖 Build process or auxiliary tool changes
docs: ✏️ Documentation only changes
refactor: 💡 A code change that neither fixes a bug nor adds a feature
style: 💄 Markup, white-space, formatting, missing semi-colons...
```
The following example demonstrates how to branch out from main, creating a test/a-test-scenario branch and committing two changes:

```sh
git checkout main
git checkout -b test/a-test-scenario
git commit -m 'test: verified X equals Z when Foobar'
git commit -m 'refactor: input value changes'
```
Here's an example of a refactor of a hypothetical address-panel:

```sh
git checkout main
git checkout -b refactor/address-panel
git commit -m 'fix: font-size used in the address description'
git commit -m 'refactor: simplified markup for the address panel'
```
Once you're done with your feat, chore, test, docs, or task:

- Push to the remote origin
- Create a new PR targeting the base main branch (there might be cases where you need to target a different branch, in accordance with your use-case)
- Use the naming convention described above, for example a PR named test: some scenario or fix: scenario amend x
- On approval, make sure you have rebased to the latest in main, fixing any conflicts and preventing any regressions
- Complete by selecting Squash and Merge
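The rebase step above can be sketched with standard Git commands (assuming origin is the canonical remote):

```sh
# bring main up to date and replay your branch on top of it
git fetch origin
git rebase origin/main
# after resolving any conflicts, update the PR branch:
# git push --force-with-lease
```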
If you have any questions, get in touch!