This is a guide for those who are new to the project, so that everyone starts from the same place. Follow these steps and reach out with any questions.
Take time to read through SSH_SETUP.md. Make sure you set up, and understand the relationship between, your local machine, the SSH key you generate on it, the keychain (for convenience), GitHub, and pushing from your local machine to our GitHub repository.
In the repository you plan to work on, click 'Fork'.
By default, the fork will be given the same name as the repository you are forking.
Go ahead and click 'Create fork'.
You should see a page similar to this.
The repository name will be different from mine if you are working on another repository.
Click 'Code', then the copy symbol. Make sure to choose the 'SSH' option.
Create the local folder that you will be working in.
I created the folder 'Nerf' on my Desktop.
It doesn't have to be on the Desktop; put it wherever you prefer.
Open your terminal and go to the directory you just created.
In my case, I am now in the Nerf folder.
Run `git clone` followed by the link you copied from the fork page in Step 2.
Since in my case I cloned vidtonerf,
I will now have a 'vidtonerf' folder inside the 'Nerf' directory.
If your fork has a different name, you cloned into a folder with that name instead.
Go into that folder (in my case, vidtonerf)
and run the commands `git fetch` and `git pull`.
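The clone-and-sync steps above can be sketched as one terminal session (the folder names follow my example, and the fork URL is a placeholder for your own):

```shell
# Create and enter the working folder (mine is 'Nerf' on the Desktop)
mkdir -p ~/Desktop/Nerf
cd ~/Desktop/Nerf

# Clone your fork using the SSH link copied from the 'Code' button
# (<your-username> is a placeholder for your GitHub username)
git clone git@github.com:<your-username>/vidtonerf.git

# Enter the cloned folder and make sure it is up to date
cd vidtonerf
git fetch
git pull
```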
An important part of this project is knowing how to use Docker.
First, install Docker Desktop and make sure you can start it up and that it says 'Running'.
If you are using a Mac, check whether you have an Intel chip or an Apple silicon chip; there is a separate installer for each.
In the configuration screen within the installer,
check 'Use WSL 2 instead of Hyper-V (recommended)',
click OK, then Install.
When the installation finishes, click Close and log out of your computer.
Log back in and reboot the system (recommended).
After the reboot, log in again and open Docker Desktop.
Click Accept on the agreement, and you will see it says 'Docker Desktop is starting...' (click [done installation]).
[I have an issue with the installation] | [done installation]
These are the installation issues we know of right now. Please reach out with any further issues so they can be documented.
In this case, the problem is likely due to an out-of-date WSL (WSL and Docker share the same hypervisor).
First, make sure you are on WSL 2, which you can check by entering `wsl -l -v` in your terminal.
If you don't have WSL, open up PowerShell as an administrator and run `wsl --install`.
If the command shows that your distribution is on version 1, update WSL with `wsl --update` and convert the distribution from 1 to 2 with `wsl --set-version <distro name> 2`.
Try `wsl --update` if you are on version 2 and having this issue as well, as WSL could be out of date.
If you are still having the issue, please let us know.
If you need more info about installing WSL, check out installing.
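Collected in one place, the WSL checks above look like this (run in PowerShell as administrator; `<distro name>` is whatever `wsl -l -v` lists, e.g. Ubuntu):

```shell
wsl -l -v                          # list distributions and their WSL versions
wsl --install                      # install WSL if you don't have it at all
wsl --update                       # update WSL itself (helps on version 1 and 2)
wsl --set-version <distro name> 2  # convert a distribution from WSL 1 to WSL 2
```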
[ready to install Docker again] | [done installation]
In this case,
[ready to install Docker again] | [done installation]
...
Below is a quick overview of other possible cases for Windows.
Docker can use either Windows Subsystem for Linux (WSL 2) or Hyper-V as its backend (WSL 2 is recommended).
The current version of Docker Desktop only works on the 64-bit edition of Windows. Both Windows 10 and Windows 11 are supported.
Requirements for Windows:
Windows 10 - Home/Pro 21H1 (build 19043) or higher, or Enterprise/Education 20H2 (build 19042) or higher
Windows 11 - Home/Pro 21H2 or higher, or Enterprise/Education 21H2 or higher
The WSL 2 feature must be installed and enabled.
The Linux kernel update package for WSL 2 must be installed.
You need a 64-bit CPU with second-level address translation (SLAT) enabled.
You need 4 GB of RAM.
[ready to install Docker again] | [done installation]
When you are done with the installation and open Docker,
you might see 'Docker Desktop Starting' for a few seconds.
Next, it asks you to start the tutorial, but you can skip it.
Instead, click 'Sign in' and log in to your account, or
create a new Docker account and sign in.
After that, you will see a screen like mine.
Installation completed, Docker successfully started.
Once Docker is started, pull up the terminal again.
Make sure you are in the same cloned (working) directory.
This time, run the command `docker compose build`.
This will download all of the components and dependencies needed to work on our project.
It will take some time; expect 10 minutes minimum even on a reasonably fast laptop.
(Make sure you have enough time.)
This will take a while the first time, but once the dependencies are installed it will boot quickly.
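Putting the last few steps together (the path follows my example folder layout; yours may differ):

```shell
cd ~/Desktop/Nerf/vidtonerf   # your cloned working directory
docker compose build          # download and build all project dependencies
```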
You have successfully set up Docker and are ready to work on our project.
Before working on the project, you might have wondered what it was all about while following 'Installing Docker', 'Starting Docker', or 'Downloading Docker Image', if this was your first time hearing about Docker.
[I have enough knowledge on Docker, I can skip this part.]
We recommend taking the time to understand Docker;
otherwise, it will be hard to see why we run Docker every time.
We have a [Wiki Page] on our project page for understanding Docker, where you can ask questions about Docker Desktop.
You can also read through other people's questions to catch anything you might be missing.
Keeping this page active helps other newcomers as well, so please don't hesitate to post any questions about Docker.
A short introduction to what Docker is:
(Docker is a tool that lets you package your application and all of its dependencies,
including the code, libraries, and system tools needed to run it,
into a single, portable container that you can develop and deploy easily ...)
Think of it like a shipping container that you can easily move from one place to another. In our case, the cargo is our application.
Think of that boxed application as an 'image'. It is a delivery package that can be shipped anywhere
(the image can run on any computer, whatever dependencies that computer might have).
Where we once had to download each dependency separately and hit installation issues with each one,
with Docker all of the dependencies are in the 'image' and Docker runs them for us.
In short, Docker creates a unified development environment that is consistent across all of our developers' machines.
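If you want to try the image idea on something tiny before our project, Docker's standard hello-world image (not part of our project) is a safe experiment:

```shell
# Pull a prebuilt image from Docker Hub and run it in a container;
# the same image runs identically on any machine with Docker installed
docker run hello-world
```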
[the official Docker guide] | [official Docker Manual]
If you prefer searching on YouTube over reading lengthy documents,
check out these videos based on your preference or the time you have.
[What is Docker? Easy way] | [Image Guide on What is Docker] | [Beginner Course (1 hour)] | [Complete Course (3 hours)]
First, open up the terminal and make sure you are in your working directory.
Then run the command `docker compose up`.
Next, read through the following paragraph.
Base Knowledge:
With the command `docker compose up`, a container was started for each service: web-server, sfm-worker, rabbitmq, and mongodb.
One is for the web server,
one is for the sfm-worker,
another is for RabbitMQ, our scheduling service,
and the last is for MongoDB, our database.
web-server and sfm-worker are our code; the other two are dependencies we rely on.
The web-server's code lives in the web-server directory,
and the sfm-worker's code lives in the colmap folder.
Basically, sfm-worker is all the code in the colmap folder, which is for figuring out where the user's camera was.
Each time the web-server and sfm-worker are started (run),
they copy their directories (source code) into the containers and run `python main.py`.
Pretty much, the containers with our code just automatically run `main.py` every time they are started.
Since we commit and push our work to GitHub, each time this source code is run,
the application runs with all the work that was pushed up to just before we started it.
On top of this, volumes are shared between the host machine and the Docker container,
meaning anything saved in the web-server container will also be saved in the web-server directory.
These containers are built to totally mirror the host machine.
So, you can work in them as you would on your local machine, without worrying about handling dependencies.
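You can observe the shared volume yourself once the containers are running. This is a sketch: `docker compose exec` runs a command inside a running service's container, and the `/app` path is an assumption about where the source is mounted, so check docker-compose.yml for the real mount point.

```shell
# Create a file from inside the running web-server container...
# (the /app path is an assumption; see docker-compose.yml)
docker compose exec web-server touch /app/hello.txt
# ...then check the web-server directory on your host machine
ls web-server/
```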
In summary: start.
Every time you type `docker compose up` into your terminal of choice, it starts up all of our project's services and applications.
`docker compose up {service name}` will start a specific container.
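For example, using the service names from our compose file:

```shell
docker compose up               # start all services: web-server, sfm-worker, rabbitmq, mongodb
docker compose up sfm-worker    # start only the sfm-worker service (and its dependencies)
```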
In summary: start (in detached mode).
Detached mode is a running mode for Docker containers that allows them to run in the background and continue running even if the terminal session is closed or the user logs out, allowing you to continue using the terminal for other tasks. The container continues to run in the background, detached from the terminal session.
This means that the containers will be started in the background and you will not see the output of the containers on your terminal.
Detached mode is useful for running long-running containers, such as web servers or databases, that do not require user interaction or console input. Running containers in detached mode also allows you to manage them easily, since you can start, stop, and view their logs independently of the terminal session.
However, you can still view the logs of the containers by running the `docker compose logs` command.
By default, it will show the logs of all containers defined in the docker-compose.yml file.
You can also specify the name of a specific service or container to view only its logs.
The `docker logs container_name` command is used to view the logs of a single container.
You can also stop a container running in detached mode using the `docker stop` command followed by the container ID or name.
For example, the command `docker stop container_name` would stop the container named container_name.
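A typical detached-mode session, using the commands described above (`docker compose down`, which stops and removes everything `up` started, is included for completeness):

```shell
docker compose up -d             # start all services in the background
docker compose logs              # view logs of all services
docker compose logs web-server   # view logs of one service only
docker stop <container_name>     # stop a single container by name or ID
docker compose down              # stop and remove everything started by 'up'
```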
In summary: update & start (in detached mode).
If other developers push updated code to the GitHub project and you want to incorporate those changes into your local Docker environment, you will need to rebuild the Docker images to ensure that the latest changes are included in the containers that you start.
In this case, you should use the `docker compose up --build -d` command to rebuild the images and start the containers in detached mode. This will ensure that the latest changes from the GitHub project are included in your Docker environment.
So, whenever you pull updated code from the GitHub project, you should rebuild the Docker images by running the `docker compose up --build -d` command. After that, you can use `docker compose up -d` for subsequent runs as long as you have not made any changes to the configuration files.
Additionally, you can run the `docker compose build` and `docker compose up` commands separately instead of `docker compose up --build` to build the Docker images and then start the containers.
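So the day-to-day update routine looks like:

```shell
git pull                        # grab teammates' latest changes from GitHub
docker compose up --build -d    # rebuild images and start containers detached
# on later runs, with no further changes:
docker compose up -d
```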
In summary: start without dependencies.
In a Docker Compose project, you can define multiple services in the docker-compose.yml file. These services can have dependencies on other services, meaning that these services rely on other containers being started before they can be started.
When you use the `docker compose up` command to start the containers for the services, Docker Compose will automatically start all the dependencies of a service before starting the service itself. This ensures that all the required containers are running and ready to use when a service starts.
However, there may be cases where you want to start a service without starting its dependencies. For example, the sfm-worker developers may want to start the sfm-worker service and its dependencies to test the worker's integration, or they may want to start only the sfm-worker service locally without starting its dependencies.
To start a service without starting its dependencies, you can use the `--no-deps` flag with the `docker compose up` command. For example, to start only the sfm-worker service without its dependencies, you can use the following command:
`docker compose up sfm-worker --no-deps`
This command will start only the sfm-worker container and not start any of its dependencies. This can be useful for testing or development purposes when you want to isolate a specific service and its functionality.
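To confirm what actually started, you can follow up with `docker compose ps`, which lists the project's running containers:

```shell
docker compose up sfm-worker --no-deps -d   # start only sfm-worker, detached
docker compose ps                           # only the sfm-worker container should be listed
```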