Apollo Credentials and other configuration #14

Open
RichardBruskiewich opened this issue Dec 5, 2018 · 11 comments

@RichardBruskiewich
Contributor

Aside from perhaps overriding the docker image (through an extra Dockerfile), it is unclear how one applies Apollo configuration options. One gets the impression that this is normally done using the apollo-config.groovy file.

Beyond that, the simple question arises as to what exactly are the default Apollo admin credentials for this system?

Sorry for my relative ignorance about this.

@abretaud
Member

abretaud commented Dec 5, 2018

Aside from perhaps overriding the docker image (through an extra Dockerfile), it is unclear how one applies Apollo configuration options. One gets the impression that this is normally done using the apollo-config.groovy file.

Yes, I think that if you want to modify the apollo-config.groovy file, you need to rebuild the image. I remember I tried to make it more flexible, but I don't remember if I found a way or gave up (probably the second).
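
If you'd rather not maintain a fork of the image itself, one possible approach (untested, just a sketch) is to override the upstream image with a local build in docker-compose.yml, where ./apollo/ is a directory you create yourself containing a Dockerfile that extends the upstream Apollo image and COPYs your modified apollo-config.groovy in before the WAR is rebuilt:

    services:
      apollo:
        # replaces the upstream "image:" line with a local build; the ./apollo
        # directory, its Dockerfile, and the service name "apollo" are
        # assumptions to adapt to this repository's compose file
        build: ./apollo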

Beyond that, the simple question arises as to what exactly are the default Apollo admin credentials for this system?

There's a default local Apollo admin user predefined: login = [email protected], password = password
It is only used internally by the Galaxy tools.
However, when you log in from your web browser, you are authenticated using REMOTE_USER authentication, which for dockerized-gmod-deployment means that you're using your Galaxy account, thanks to the Galaxy cookie proxy (an Apollo account is automatically created on first login).
I've added some documentation for this in the README.

abretaud added a commit that referenced this issue Dec 5, 2018
@RichardBruskiewich
Contributor Author

RichardBruskiewich commented Dec 5, 2018

Thanks for the quick response to this, @abretaud.

Alas, I must be doing something wrong because despite creating a Galaxy account, I am not automatically logged into Apollo... the Login dialog box still comes up on the screen and refuses to accept any plausible credentials that I throw at it.

I should confess to a minor deployment complexity, though: given a scarcity of public IP addresses, we are actually running all our traffic through a front-end NGINX proxy on another machine, which itself targets the 2nd NGINX proxy of the dockerized-gmod-deployment system. Double jeopardy, I guess, especially given that the Galaxy cookie proxy does some fancy tunnelling through @erasche's https://github.com/erasche/gx-cookie-proxy.

Aliasing the site directly to its own public IP is not completely out of the question but I'd like to avoid this, if feasible. I'll see if I can wrap my head around the gx-cookie-proxy and figure out if I need to do something different here.

@RichardBruskiewich
Contributor Author

RichardBruskiewich commented Mar 18, 2019

@erasche @abretaud These past few days, I've been taking a fresh run at using this project, but am still a bit stuck on the Apollo integration. When I access the /apollo path link, I fail to get the site but instead get the following error message:

                Error: dial tcp: lookup target on 127.0.0.11:53: no such host

Do you have any idea what is going on (where I should look to fix this)?

Hmm... Googling this error message suggests that it is a docker (or docker-compose) related issue. I'll see if I can figure out if I broke something in my build somewhere... but if you have any idea what is going on, please let me know.

@hexylena
Member

hexylena commented Mar 18, 2019

Ah yeah, I'm afraid this is definitely a docker/host issue that we can't really help debug :( That port is DNS, something isn't right there. It shouldn't be a problem with any of the containers. Maybe try reproducing on another machine?

@hexylena
Member

I'm afraid I'd missed your previous message

I should confess to a minor deployment complexity, though: given a scarcity of public IP addresses, we are actually running all our traffic through a front-end NGINX proxy on another machine, which itself targets the 2nd NGINX proxy of the dockerized-gmod-deployment system. Double jeopardy, I guess, especially given that the Galaxy cookie proxy does some fancy tunnelling through @erasche's erasche/gx-cookie-proxy.

That should be fine, you shouldn't have to setup a separate domain or IP address for this. I completely understand the restriction, I worked at a place where you got one public domain and one public IP and no subdomains. It was unpleasant for sure. :(

You can think of the cookie proxy as just another layer in the proxying: it reads the Cookie header and sets a REMOTE_USER header. It must be the last proxying step, but an extra NGINX on top should not be an issue at all. It may just mean more work for you to figure out the proxying configuration for your NGINX server.
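
Concretely, the relevant bit of docker-compose.yml looks roughly like this (only the links entry and the GXC_BACKEND_URL value are quoted verbatim later in this thread; the image name and the other GXC_* settings the proxy needs to decode the Galaxy session cookie are omitted here):

    services:
      remoteuser:                          # erasche/gx-cookie-proxy
        links:
          - apollo:target                  # makes "target" resolve to the apollo container
        environment:
          GXC_BACKEND_URL: "target:8080"   # the backend that receives the REMOTE_USER header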

@RichardBruskiewich
Contributor Author

RichardBruskiewich commented Mar 18, 2019

Ah yeah, I'm afraid this is definitely a docker/host issue that we can't really help debug :( That port is DNS, something isn't right there. It shouldn't be a problem with any of the containers. Maybe try reproducing on another machine?

See mattermost/mattermost-docker#368

I figured out that, in my zeal to deprecate the Compose v3.x "links" field, I had commented out:

links:
 - apollo:target

but the 'target' alias was still used just above, in the remoteuser service configuration, i.e.

GXC_BACKEND_URL: "target:8080"

I don't know if the use of this ad hoc domain name is necessary, but I'd like to try changing it back to 'apollo' in the service and see if that works. Yep... changing the 'target' hostname to 'apollo' in the remoteuser service's GXC_BACKEND_URL environment variable seemed to get the Apollo service running again. However, I haven't tested the Galaxy-integrated user authentication yet, so...
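
For the record, the working change looks roughly like this (sketch only; the rest of the remoteuser service definition is unchanged):

    services:
      remoteuser:
        environment:
          # was: GXC_BACKEND_URL: "target:8080"  (which relied on `links: - apollo:target`)
          GXC_BACKEND_URL: "apollo:8080"   # compose service names resolve on the default network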

@hexylena
Member

It isn't needed, but you'll need to change

GXC_BACKEND_URL: "target:8080"

to use apollo instead of target

@RichardBruskiewich
Contributor Author

RichardBruskiewich commented Mar 18, 2019

Yep, @erasche, I did that and it worked :-).

However, with Apollo now running, I've tried logging into Galaxy and accessing the /apollo path (which comes up), but it presents the Apollo login dialog, which is refractory to my Galaxy user account.

BTW, I did rename the admin user account and change the password (from the Galaxy 'preferences'), but perhaps that is a perilous thing to do (as in Tripal).

I did try registering a second non-admin account, but that one also failed to authenticate. Here's the docker compose log snippet, which might be informative for diagnosing the failure conditions:

remoteuser_1           | time="2019-03-18T16:57:37Z" level=debug msg="Connection headers" headers=[close]
remoteuser_1           | time="2019-03-18T16:57:37Z" level=debug msg="Cache hit" hit=false
remoteuser_1           | time="2019-03-18T16:57:37Z" level=debug msg="Session Key Decoded" sk=e88B
remoteuser_1           | time="2019-03-18T16:57:37Z" level=info msg="Invalid session key / cookie"
remoteuser_1           | time="2019-03-18T16:57:37Z" level=info msg="Unauthenticated request"
remoteuser_1           | 2019/03/18 16:57:37 Graphite: requests.unauthenticated 1 2019-03-18 16:57:37
apollo_1               | 2019-03-18 16:57:37,475 [http-nio-8080-exec-4] WARN  authenticator.RemoteUserAuthenticatorService  - Remote user found [null]
apollo_1               | 2019-03-18 16:57:37,475 [http-nio-8080-exec-4] WARN  authenticator.RemoteUserAuthenticatorService  - No remote user passed in header!
apollo_1               | 2019-03-18 16:57:37,484 [http-nio-8080-exec-4] ERROR authenticator.UsernamePasswordAuthenticatorService  - Problem authenticating: org.apache.shiro.authc.UnknownAccountException: No account found for user [[email protected]]
apollo_1               | 2019-03-18 16:57:37,484 [http-nio-8080-exec-4] WARN  apollo.PermissionService  - Failed to authenticate user
proxy_1                | 75.156.98.4 - - [18/Mar/2019:16:57:37 +0000] "POST /apollo/Login?clientToken=19921453922115755004 HTTP/1.1" 200 68 "https://sunflower.divseekcanada.ca/apollo/annotator/index" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36" "-" 

@hexylena
Member

hexylena commented Mar 18, 2019

Invalid session key / cookie

That is the key one. Is Apollo beneath the Galaxy path? If not, it must be (e.g. /galaxy and /galaxy/apollo), or you need to scope your Galaxy cookie to /.
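
If you go the cookie-scoping route, something along these lines should work, assuming the galaxy service honours the GALAXY_CONFIG_* environment-variable convention for setting galaxy.yml options (the service name and that convention are assumptions; check your image's documentation):

    services:
      galaxy:
        environment:
          GALAXY_CONFIG_COOKIE_PATH: "/"   # scope the galaxysession cookie to the whole site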

@RichardBruskiewich
Contributor Author

Thanks for the quick reply. I'm just using the standard paths of the original configuration, I think, so my Galaxy is at path '/' and Apollo is at path '/apollo', so I would expect this error to crop up. But since I'm now attempting the "galaxy" and "tripal" path swap again, I'll pay close attention to this.

@RichardBruskiewich
Contributor Author

@erasche, @abretaud I have managed to swap the Galaxy and Tripal site paths at https://sunflower.divseekcanada.ca. The system does build and run (see the test deployment at https://sunflower.divseekcanada.ca).

However, my attempts to apply your advice to make the system resolve the Galaxy-Apollo user authentication have not been successful. In some cases, the configuration changes badly broke the Apollo path, so I simply reverted (for now) to the original Apollo configuration.

I am wondering if you could possibly have a quick look at our configurations from your end? We forked your dockerized-gmod-deployment code to https://github.com/DivSeek-Canada/divseek-canada-portal and created a special development branch called divseek-canada-build (which is the default branch on our forked repo).

As a matter of explanation, we've made some changes to the docker-compose.yml file to parameterise it a bit (see template.env for the applicable environment variables, which get copied into .env (dot-env), and the main README). We've also tried to follow Compose 3.x standards and remove fields (e.g. links, volumes_from, etc.) that are deprecated or forbidden in that Compose release. I am hopeful that most of the build remains functional despite these changes (but this needs validation).
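
To illustrate the pattern (the real variable names live in template.env; HOSTNAME and VIRTUAL_HOST below are purely hypothetical examples), docker-compose substitutes ${...} references in docker-compose.yml from the .env file sitting next to it:

    # .env (copied from template.env)
    #   HOSTNAME=sunflower.divseekcanada.ca
    #
    # docker-compose.yml
    services:
      proxy:
        environment:
          VIRTUAL_HOST: "${HOSTNAME}"   # filled in from .env at `docker-compose up` time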

We've also split the nginx/default.conf into two parts: the server definitions and the (included) server.locations. The reason for doing this is explained in our expanded README; it makes it easier to apply Certbot SSL to the system and thus configure https:// access with relative ease.

There are probably some other oddities you may want explained; just ask me about them.

I've run out of time at my end to work on this right now; other urgent professional fires beckon to be extinguished. If you wish, you can branch off, apply the necessary changes, and issue a pull request back to our fork.

BTW, let me know if the modified README (on our branch) makes sense. I'm targeting a slightly less Docker-savvy crowd with it, hence the extra excruciating detail. Using it, you can fire up the modified system at your end to poke around and offer feedback on how to repair things.

I hope you can help.
