Internal links on a website are a critical organic ranking factor for Google. Links help Google both discover pages and assign rankings based on the quantity and placement of the links pointing to a page. A page with 100 internal links pointing to it is presumably a higher priority than a page with a single link.
But neither purpose is possible if Googlebot cannot crawl the links. Crawling can fail in three primary ways:
- Links behind JavaScript. Google can usually render and crawl links generated by JavaScript, such as those inside tabs and collapsible sections. But not always, especially when a link does not exist in the HTML until a script runs (see the first sketch after this list).
- Links present on the desktop version but not the mobile. Google indexes a site’s mobile version by default. However, mobile sites are often downsized desktop versions with far fewer links, preventing Google from discovering and indexing the excluded pages (see the second sketch).
- Links with a nofollow attribute or meta tag. Google claims it can follow links with nofollow attributes, but there’s no way to know whether it did. And the meta tag blocks crawls only if Googlebot honors it. Moreover, many site owners are unaware of active nofollow attributes or meta tags, especially if they use a plugin such as Yoast, which adds them with a single click (see the third sketch).
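To make the first pitfall concrete, here is a minimal sketch (the URLs and class names are hypothetical) contrasting a link Googlebot can read from the raw HTML with two JavaScript-dependent patterns it may miss:

```html
<!-- Crawlable: the destination is a real href in the HTML Googlebot receives. -->
<a href="/products/widgets">Widgets</a>

<!-- Risky: there is no <a href>; the destination exists only when the click
     handler runs, so Googlebot may never see a link here at all. -->
<span class="nav-item" onclick="window.location.href='/products/widgets'">Widgets</span>

<!-- Also risky: the href is a placeholder until a script rewrites it from the
     data attribute, which requires rendering to complete first. -->
<a href="#" data-target="/products/widgets">Widgets</a>
```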
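The second pitfall typically appears on sites that serve separate desktop and mobile templates. A hypothetical sketch of the mismatch:

```html
<!-- Desktop footer: links Google can use to discover deeper pages. -->
<footer>
  <a href="/guides/sizing">Sizing guide</a>
  <a href="/guides/care">Care instructions</a>
  <a href="/wholesale">Wholesale inquiries</a>
</footer>

<!-- Mobile footer: trimmed to save space. Since Google indexes the mobile
     version, the three pages above lose these internal links and may never
     be discovered. -->
<footer>
  <a href="/contact">Contact</a>
</footer>
```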
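The third pitfall comes in two forms, sketched below: a nofollow attribute on an individual link, and a page-level robots meta tag of the kind a plugin such as Yoast can enable with one click.

```html
<!-- Link level: rel="nofollow" asks Google not to follow this one link. -->
<a href="/members/archive" rel="nofollow">Member archive</a>

<!-- Page level: this meta tag asks Google not to follow ANY link on the page,
     cutting off every page it links to. -->
<meta name="robots" content="nofollow">
```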
Even if a page is indexed, you can never be sure the links to or from that page are crawlable and thus pass link equity.
Here are three ways to ensure Googlebot can crawl links on your website.
Tools to Inspect Links
Google’s text cache. The text-only...
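As a rough sketch, the text-only cache of a page is typically reachable at a URL of this form, where example.com/page stands in for the URL to inspect and the strip parameter removes CSS and JavaScript:

```
https://webcache.googleusercontent.com/search?q=cache:example.com/page&strip=1
```

Any link missing from that stripped view is one Googlebot may not be able to crawl.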