
Render blocking question #24

Closed
onezerodigits opened this issue Jan 22, 2016 · 4 comments

Comments

@onezerodigits

I'm liking this as a system quite a bit, but I have mixed feelings (or maybe just confusion) about the impact of taking a content site (as opposed to an app) and blocking immediate render. Is there a workaround for people who share this concern?

[Screenshot: screen shot 2016-01-22 at 11 29 55 am]

@ghost

ghost commented Jan 25, 2016

Because of the way it works, those resources have to load first: they are what fetches the content. After the initial load you could cache the JS files so the user doesn't have to re-download them every time. Still, the blocking is somewhat unavoidable, since this uses hash-based routing, so you technically never leave the same page.

Essentially, Google's warning is misleading here because it assumes your content is in the HTML page, when in fact it is loaded by the very JS they are telling you to defer. It's just a different way of doing things, much more like an app than a traditional website.
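To make that concrete, here's a rough sketch of what hash-based routing looks like. This is an illustration only, not the actual CMS.js source; the `posts/` folder and the `#app` element are assumptions on my part.

```javascript
// Rough illustration of hash-based routing (not the actual CMS.js source;
// the posts/ folder and the #app element are assumptions).
function renderRoute() {
  // "#/hello-world" -> "hello-world"; an empty hash shows the post list
  var slug = window.location.hash.replace(/^#\/?/, '');
  var app = document.getElementById('app');
  if (!slug) {
    app.innerHTML = '<p>Post list would render here.</p>';
    return;
  }
  // The content only exists after this request finishes, which is why the
  // JS can't simply be deferred out of the critical rendering path.
  fetch('posts/' + slug + '.md')
    .then(function (res) { return res.text(); })
    .then(function (markdown) {
      // The real project converts the Markdown to HTML at this point.
      app.textContent = markdown;
    });
}

window.addEventListener('hashchange', renderRoute);
window.addEventListener('DOMContentLoaded', renderRoute);
```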

@onezerodigits
Author

Thanks @barryanders, that makes sense. So should cms.js also be able to generate a sitemap for crawling? https://support.google.com/webmasters/answer/183668?hl=en&ref_topic=4581190
I'm a little unclear on whether sites generated by cms.js can be crawled as-is.

@ghost

ghost commented Jan 25, 2016

Generating a sitemap is a good idea for sure, although it looks like including hash links won't do any good. Direct links to the content would have to go into the sitemap, because Google counts every hash link as the same page (e.g. index.html#page1 and index.html#page2). My guess is that listing the Markdown files in a sitemap would have no effect either, which is also what I replied to someone else's issue regarding SEO. This project does an excellent job as an app engine, but may not be the best fit for a website because of that issue.

I'm still thinking about how SEO could be achieved without completely changing everything. You could generate static pages for each piece of content, but then it wouldn't be purely JavaScript anymore and would require server-side code (not necessarily a bad thing, it just doesn't seem to be the original author's goal).
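For what it's worth, here's a rough sketch of how that could look as a small build step. This is illustration only; the `posts/` layout, the example.com domain, the output paths, and the `marked` dependency are assumptions on my part, not anything CMS.js actually ships.

```javascript
// Rough sketch of the "generate static pages" idea above. The posts/ folder,
// output paths, example.com domain, and the `marked` package are assumptions
// for illustration; none of this is part of CMS.js itself.
const fs = require('fs');
const path = require('path');
const { marked } = require('marked');

const SITE = 'https://example.com';
const postsDir = path.join(__dirname, 'posts');
const outDir = path.join(__dirname, 'static');
fs.mkdirSync(outDir, { recursive: true });

const urls = [];
for (const file of fs.readdirSync(postsDir).filter((f) => f.endsWith('.md'))) {
  const slug = path.basename(file, '.md');
  const html = marked.parse(fs.readFileSync(path.join(postsDir, file), 'utf8'));
  // Each post becomes its own crawlable URL instead of index.html#/slug,
  // which a crawler would otherwise treat as one and the same page.
  fs.writeFileSync(
    path.join(outDir, slug + '.html'),
    '<!doctype html><html><body>' + html + '</body></html>'
  );
  urls.push('  <url><loc>' + SITE + '/static/' + slug + '.html</loc></url>');
}

// Sitemap of direct URLs (not hash links), one entry per generated page.
fs.writeFileSync(
  path.join(outDir, 'sitemap.xml'),
  [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
  ].concat(urls, ['</urlset>']).join('\n')
);
```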

@chrisdiana
Owner

Going to close this issue out for now, since the main issue regarding search engine crawling/SEO will be tracked in #93. @BarryMode @nsteiner

@BarryMode is right: hash links will be a difficult hurdle to overcome without some larger rewrites. Generating the content server-side or offline would move away from the project's goal of being a completely client-side solution, and there are plenty of great solutions already available for that approach.

Still working through how we can give crawlers the ability to read content generated by CMS.js...
