Making AJAX applications crawlable without backend control
I've built a tool that leverages Ember.js and GitHub Pages to create a blogging application rendered in-browser. It uses JavaScript to fetch Markdown files and render them into the body of the application. Because the content is fetched via AJAX requests, I'm not sure of the best way to make that content crawlable by Google, etc.
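Roughly, the fetching logic looks something like this (renderMarkdown, the posts/ path, and the #post-body element are just placeholders for whatever the app actually uses):

// Fetch a Markdown file over AJAX and render it into the page.
// renderMarkdown stands in for the Markdown converter the app uses;
// the posts/ path and #post-body element are made-up placeholders.
function loadPost(slug) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'posts/' + slug + '.md');
  xhr.onload = function () {
    if (xhr.status === 200) {
      document.getElementById('post-body').innerHTML = renderMarkdown(xhr.responseText);
    }
  };
  xhr.send();
}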
I've read many articles that suggest using PhantomJS to handle _escaped_fragment_ requests, but since the content is hosted on GitHub Pages, there's no way to run anything server-side.

Is there a possible work-around (such as rendering ahead-of-time before pushing content to GitHub), or is this simply one of the shortcomings of JavaScript applications?
The question is: can Googlebot do basic JavaScript?
If not, then no. As I read it, your app requires JS support to render the page, which leaves you without a bot-friendly access method.
If yes, then yes:
Because JavaScript can access URL parameters via location.search, you can create plausible URLs for Google to fetch in href attributes that are interpreted by the JS app, and overridden for users in onclick attributes:
<a href="/?a=my-blog-post" onclick="somefunc(this.href);return false;">
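A handler along these lines could back that onclick; the "a" parameter name comes from the link above, and loadPost is a hypothetical fetch-and-render helper, not something defined here:

// Intercept the click: pull the post slug out of the href's query
// string and load it client-side instead of navigating away.
// "a" matches the parameter used in the link above; loadPost is a
// stand-in for whatever the app uses to fetch and render a .md file.
function somefunc(href) {
  var query = href.split('?')[1] || '';
  var match = /(?:^|&)a=([^&]*)/.exec(query);
  if (match) {
    loadPost(decodeURIComponent(match[1]));
  }
}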
This is paired with code in the app's onload that looks at location.search and fetches whatever .md file appears in the designated URL parameter (after parsing the query string), in the hope that Google runs that onload and indexes the specified content. It's a variant of the domain.com/#!ajax/path style of pathing many sites use. Both are handled client-side, but the query string variant signals to Googlebot that the page is worth fetching as a distinct URL.
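A minimal sketch of that onload counterpart, again assuming the "a" parameter and the same hypothetical loadPost helper:

// On page load, check whether the URL carries the designated parameter
// (e.g. /?a=my-blog-post) and, if so, fetch and render that post.
window.onload = function () {
  var match = /(?:^|[?&])a=([^&]*)/.exec(location.search);
  if (match) {
    loadPost(decodeURIComponent(match[1]));
  }
};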
You may be able to test this at http://google.com/webmasters, which has a "Fetch as Googlebot" feature.