SEO Factors Affecting Website Ranking


Over time, SEO (search engine optimization) has evolved into one of the most exciting disciplines in digital marketing, and yet it remains one of its least understood and least transparent aspects. It involves on-page, off-page, and technical factors, including HTTPS, site speed, robots.txt, sitemaps, URLs and redirects, duplicate content, canonical tags, structured data, status codes, meta tags, link building, and content quality. Fixing the SEO factors affecting your website can quickly increase its online traffic.

Carry out an audit of your entire site to find these issues. It should cover every resource, both internal and external: HTML, CSS, JavaScript, text, videos, images, and more. Use SEO tools that crawl the website the way a search engine does, following robots.txt instructions. I have found SEO PowerSuite, Smart SEO Auditor, and SEMrush to be very helpful tools. They are simple to use and cover all aspects of SEO: technical audits, on-page SEO, and off-page SEO.

Some of the most common SEO issues and their solutions are explained below:

  • Crawlability and indexing issues
  • Improper redirects
  • Encoding and technical factors
  • Unsuitable URLs
  • Improperly built links
  • Images that bots cannot read
  • On-page factors
  • Website localization

Crawlability and Indexing

Crawlability and indexing are important technical SEO factors affecting website ranking. The two terms are related: crawlability is a search engine's ability to access the pages of your website, while indexing is the process of analyzing those collected pages and adding them to the search index. Keep your site free of crawlability issues so that it is visible in search results. To get the site crawled, you can invite the crawlers (spiders) directly to access your website content, or crawlers can reach your content by following internal links between your own pages or links from other sites to yours. Broken links and dead ends create crawlability issues and leave the search engine unable to access specific content on your site.

What affects crawlability and indexing?

  • Site Structure
  • Internal link structure
  • Looped redirects
  • Server errors
  • Unsupported scripts and other technology factors
  • Blocking web crawler access

How can you help web crawlers crawl your website?

  • Submit a sitemap to Google
  • Strengthen internal linking
  • Regularly update and add new content
  • Avoid duplicate content
  • Speed up your page load time

We understand this sounds intimidating, but rest assured there are a number of tools available, such as SEO PowerSuite, SEMrush, and Google Search Console, to help fix the SEO factors affecting website ranking. Google Search Console (a free and useful service) is the place to submit your sitemap and review the web crawler's coverage of your website.

Crawlability and indexing problems and their effects on SEO

1. Broken links, which generate a 404 error code

A 404 often points to a problem on a website. For example, if you have a broken link on a page and a visitor clicks it, they will see a 404 error. It's important to monitor these errors regularly and investigate their causes, because they create a poor experience and lower the site's authority in the user's eyes.

Note: In some cases (especially with older and slower websites), your pages and resources may return 4xx and 5xx status codes simply because the server was unable to handle the requests while the site was being crawled. To make sure crawls do not overwhelm your server's bandwidth, you can limit the crawl rate in Google Search Console's settings.

2. Server errors with 5xx error codes

5xx error messages are sent when the server itself encounters a problem or error. As with 4xx errors, monitor them regularly and investigate their causes, because they may have a negative impact on user experience and lower the site's authority in the user's eyes.


3. Resources restricted from indexing

If your unique and useful content is blocked from crawling in your robots.txt file, or excluded by a noindex X-Robots-Tag HTTP header or a noindex meta tag, search engines will not be able to index it, and that useful content will be missing from search results.

This happens when a crawler is not allowed to visit a page or resource on your site: it does not follow the page's links, and nothing gets indexed. So make sure your useful pages are allowed to be crawled and indexed.

4. 404 page not set up correctly

A customized, informative 404 error page can help you keep users on your website. It tells visitors that the page they are looking for doesn't exist and, most importantly, it should return an actual 404 response code. So make sure it is set up correctly.

According to Google, if a non-existent page returns a code other than 404 or 410, search engines assume there is a real page at that URL. As a result, that URL may be crawled and its content indexed. The time Googlebot spends on non-existent pages also eats into your crawl budget, which may delay the crawling of your unique URLs and hurt your crawl coverage.
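For illustration, on an Apache server (an assumption; other web servers have equivalent directives) a custom error page can be wired up so that it is served with a genuine 404 status code, along these lines:

    # .htaccess (Apache) - a minimal sketch; assumes a /404.html page exists
    # Apache serves /404.html for missing URLs while keeping the 404 status code
    ErrorDocument 404 /404.html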

5. Robots.txt file

The robots.txt file is a set of directives that tells robots which parts of the site they may or may not crawl. These are not absolute rules, however, so there is no guarantee that a disobedient robot will not look at content you have disallowed. You can read the rules at robotstxt.org.

Note: Disallowing secret or sensitive content in robots.txt should not be used as a way to lock it away from the public. There is no guarantee it will stay private.
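As a minimal sketch, a robots.txt file might look like this (the sitemap URL and the /private/ directory are placeholders, not taken from the article):

    # robots.txt - illustrative sketch only; paths and URL are placeholders
    User-agent: *
    # ask all crawlers to skip this directory
    Disallow: /private/

    # point crawlers to the XML sitemap
    Sitemap: https://www.example.com/sitemap.xml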

Redirects

A redirect forwards traffic from one URL to another when the original page no longer exists. If no redirect is in place and someone lands on a page that has been moved or deleted, they see an error, which is a bad user experience. So always use a redirect to forward traffic to a URL that exists. This will improve the SEO factors affecting your website's ranking, and visitors will end up where they wanted to go.

You need to use redirects when:

  • You move the URL of a web page (from URL A to URL B).
  • You delete a page.
  • You move your website to a new domain name.
  • You merge two or more websites.
  • You migrate from HTTP to HTTPS.
  • You need to prevent duplicate content between www and non-www URLs.
  • You redevelop your website and its structure changes.

Redirect problems and their effects on SEO

1. Fixed www and non-www versions

Websites are usually available both with and without “www” in the domain name. This is quite common, and people link to both the www and non-www versions. Fixing this prevents search engines from indexing two versions of the same website.

Although such indexation won’t cause a penalty, setting one version as the priority is best practice, especially because it consolidates the link juice from both the www and non-www links onto one common version.
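As an illustration, the www-to-non-www consolidation is often handled with a permanent redirect such as the following (a sketch assuming an Apache server with mod_rewrite enabled; example.com stands in for your domain):

    # .htaccess (Apache, mod_rewrite assumed) - send www traffic to the non-www version
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
    RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]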

2. Issues with the HTTP/HTTPS site versions

Using secure encryption is highly recommended for many websites (for instance, those processing transactions or collecting sensitive user information). However, in many cases, webmasters face technical issues when installing SSL certificates and setting up the HTTP and HTTPS versions of the website.

If you’re using an invalid certificate (for example, an untrusted or expired one), most web browsers will prevent users from visiting your site by showing them an “insecure connection” warning.

If the HTTP and HTTPS versions of your website are not set up properly, both of them can get indexed by search engines and cause duplicate content issues that may undermine your website’s ranking.

3. Pages with 302 redirects

302 redirects are temporary, so they don’t pass link juice. If you use them instead of 301s, search engines might continue to index the old URL and disregard the new one as a duplicate, or they might divide the link popularity between the two versions, hurting your search ranking.

If you are permanently moving a page or website, we recommend using a 301 instead of a 302. A 301 redirect will preserve link juice and, as a result, avoid duplicate content issues.
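A permanent redirect for a single moved page might look like this (a sketch for Apache; the paths are placeholders):

    # .htaccess (Apache) - permanently redirect the old URL to its new location
    Redirect 301 /old-page.html https://example.com/new-page/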

4. Pages with 301 redirects

301 redirects are permanent and are usually used to solve problems with duplicate content, or to retire URLs that are no longer necessary. The use of 301 redirects is absolutely legitimate, and it’s good for SEO because a 301 redirect funnels link juice from the old page to the new one. Just make sure you redirect old URLs to the most relevant pages.

5. Pages with long redirect chains

In certain cases, either because of a bad .htaccess setup or because of deliberate measures, a page may end up behind two or more chained redirects. It is strongly recommended to avoid redirect chains longer than two redirects, since they can cause multiple issues:

  • There is a high risk that the page will not be indexed, as Googlebot does not follow more than 5 redirects.
  • Too many redirects will slow down your page speed. Every additional redirect may add several seconds to the page load time.
  • High bounce rate: users are not willing to stay on a page that takes more than 3 seconds to load.

6. Pages with meta refresh

Basically, meta refresh may be seen as a violation of Google’s Quality Guidelines and therefore is not recommended from an SEO point of view.

As one of Google’s representatives points out: “In general, we recommend not using meta-refresh type redirects, as this can cause confusion with users (and search engine crawlers, who might mistake that for attempted redirects)… This is currently not causing any problems with regards to crawling, indexing, or ranking, but it would still be a good idea to remove that.” So stick to the permanent 301 redirects instead.

7. Pages with rel="canonical"

If the same products appear in two categories under two different URLs and both need to stay live, you should specify which page search engines should treat as the priority with the help of rel="canonical" tags (where only one version needs to stay live, a 301 redirect is the better option). The canonical link should be correctly implemented within the <head> tag of the page and point to the main page version that you want to rank in search engines. Alternatively, if you can configure your server, you can indicate the canonical URL using rel="canonical" HTTP headers.
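For illustration, both forms of the canonical hint might look like this (a sketch; the domain and paths are placeholders):

    <!-- In the <head> of the duplicate page: point to the preferred URL -->
    <link rel="canonical" href="https://example.com/widgets/blue-widget/" />

    <!-- Equivalent server-side hint, sent as an HTTP response header:
         Link: <https://example.com/widgets/blue-widget/>; rel="canonical" -->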

Encoding and Technical Factors

This refers to website and server optimization. Search engine spiders can only crawl and index your site if it is technically ready to receive them. This is another important group of technical SEO factors affecting website ranking.

Encoding and technical factors and their effects on SEO

1. Mobile-friendliness

According to Google, the mobile-friendly algorithm affects mobile searches in all languages worldwide and has a significant impact on Google rankings. The algorithm works on a page-by-page basis; it is not a matter of how mobile-friendly your pages are, but simply whether each page is mobile-friendly or not. It is based on criteria such as font sizes, tap targets and links, readable content, the viewport configuration, and so on.

To address this SEO factor, your website must be responsive and adjust itself automatically to the size of the user's screen.
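A responsive page normally starts with a viewport declaration in the <head>, for example:

    <!-- tell browsers to scale the page to the device width -->
    <meta name="viewport" content="width=device-width, initial-scale=1">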

2. Site loading speed

Search engines prefer sites that load quickly and reward them with better rankings; page speed is an important ranking signal. You can improve site speed by:

  • Using a better hosting service, e.g. SiteGround, BlueHost, or Hostwinds (look for a host that offers speed, reliability, security, and automatic SSL setup).
  • Minimizing HTTP requests: use as few scripts and plugins as possible.
  • Using one CSS stylesheet instead of multiple stylesheets.
  • Serving images at small, appropriate dimensions.
  • Compressing your web pages with GZIP (see the sample server configuration sketched after this list).
  • Minifying your site’s code (Google explains this on its Minify Resources page).
  • Using a fast DNS (Domain Name System) provider.
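As a hedged example of the GZIP point above, compression is often enabled on an Apache server with mod_deflate (an assumption; Nginx and other servers use their own directives):

    # .htaccess (Apache, mod_deflate assumed) - compress text-based responses
    <IfModule mod_deflate.c>
      AddOutputFilterByType DEFLATE text/html text/css text/plain
      AddOutputFilterByType DEFLATE application/javascript application/json
    </IfModule>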

3. HTTPS pages with mixed content issues

Using secure encryption is highly recommended for many websites (for instance, those processing transactions or collecting sensitive user information). However, webmasters often face technical issues when switching their websites from HTTP to HTTPS, and one of these is so-called mixed content, i.e. when your secure HTTPS pages include insecure content served over HTTP.

If the HTTPS page includes content retrieved through regular, clear text HTTP, this weakens the security of the entire page as the unencrypted content is accessible to sniffers and can be modified by man-in-the-middle attackers. For this reason, in many modern browsers, such content might get blocked from loading or load with an “insecure connection” warning.

According to Google, there are two types of mixed content: active and passive. Passive mixed content is content that doesn’t interact with the rest of the page, so an attacker who intercepts or changes it is limited in what they can do; it includes images, video, and audio, along with other resources that cannot interact with the rest of the page. Active mixed content interacts with the page as a whole and allows an attacker to do almost anything with it; it includes scripts, stylesheets, iframes, Flash resources, and other code that the browser can download and execute.
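For example, an insecure reference on an HTTPS page, and its fix, might look like this (the file names are placeholders):

    <!-- Mixed content: an HTTP resource loaded from an HTTPS page -->
    <img src="http://example.com/images/logo.png" alt="Company logo">

    <!-- Fixed: the same resource served over HTTPS -->
    <img src="https://example.com/images/logo.png" alt="Company logo">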

4. Pages with multiple canonical URLs

Multiple canonical URLs specified for a single page frequently happen in conjunction with SEO plugins that insert a default rel="canonical" link, often without the knowledge of the webmaster who installed the plugin. Double-checking the page’s source code and your server’s rel="canonical" HTTP header configuration will help correct the issue.

5. Pages with Frames

Frames allow more than one HTML document to be displayed in the same browser window. As a result, text and hyperlinks (the most important signals for search engines) appear to be missing from such documents.

If you use Frames, search engines will fail to properly index your valuable content, and won’t rank your website high in the SERP.

6. Pages with W3C HTML errors and warnings

The validation is usually performed via the W3C Markup Validation Service. Although it’s not obligatory and has no direct SEO effect, bad code may be the reason Google does not index your important content properly. We recommend checking your website pages for broken code to avoid issues with search engine spiders.

7. Pages with W3C CSS errors and warnings

The validation is usually performed via the W3C CSS Validation Service (W3C stands for World Wide Web Consortium).

CSS styles control the design and formatting of the page and separate the styles from the structure, which ultimately makes the page load faster. CSS errors may not matter much to search engines, but they can cause your page to display incorrectly for visitors, which in turn may affect your conversion and bounce rates. So make sure the page renders as intended across all browsers (including mobile ones) that matter to you.

8. Pages that are too big

Naturally, there’s a direct correlation between the size of a page and its loading speed, which in turn is one of the many ranking factors: heavy pages take longer to load. That’s why the general rule of thumb is to keep your page size under 3 MB.

Of course, this is not always possible. For example, if you have an e-commerce website with a large number of images, you may have to push this up by a few megabytes, but that can significantly increase page loading time for users on a slow connection.

Use this template when analyzing the pages:

S. No. | URL | Size

Template to analyze the pages

URL Issues

URL Factors and their effects on SEO

1. Dynamic URLs

The URL is an important on-page SEO factor that impacts SERP ranking. URLs that contain dynamic characters like “?” and “_” and long parameter strings are not user-friendly; they are not descriptive and are harder to remember. To increase your pages’ chances to rank, it’s best to rewrite dynamic URLs so that they are descriptive and include keywords rather than numeric parameters.

Google’s webmaster guidelines likewise recommend keeping URL structures as simple and readable as possible rather than relying on long strings of dynamic parameters.
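As a sketch (assuming an Apache server with mod_rewrite; the parameter and path names are placeholders), a dynamic URL can be served under a descriptive, keyword-rich address like this:

    # .htaccess (Apache, mod_rewrite assumed)
    # serve /product.php?id=123 under the friendly URL /products/blue-widget/
    RewriteEngine On
    RewriteRule ^products/blue-widget/?$ /product.php?id=123 [L]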

2. Too-long URLs

URLs shorter than 115 characters are easier for end users and search engines to read and help keep the website user-friendly. The following template will help you gather longer URLs and analyze which ones do not follow this rule.

S. No. | URL | Title | Length

Template to analyze too-long URLs

Note: URL should not be more than 115 characters in length.

Link Problems

Link Factors and Their Effects on SEO

1. Broken links

Broken outgoing links can be a bad quality signal to both search engines and users. If a site has many broken links, it is logical to conclude that it has not been updated for some time, and its ranking may be downgraded as a result. Check your website regularly and fix any broken links; this is an important on-page SEO factor affecting ranking.

2. Pages with an excessive number of links

This is another factor relating to the links on your website. To make sure your linking is in good shape, you should also check your site for pages with an excessive number of outgoing links and review your dofollow external links.

According to Matt Cutts (former head of Google’s Webspam team), “…there’s still a good reason to recommend keeping to under a hundred links or so. If you’re showing well over 100 links per page, you could be overwhelming your users and giving them a bad experience. A page might look good to you until you put on your “user hat” and see what it looks like to a new visitor.” So the rule is simple: the fewer links on a page, the fewer problems with its rankings.

3. Pages with Dofollow external links

While there is nothing wrong with linking to other sites via dofollow links, if you link extensively to irrelevant or low-quality sites, search engines may conclude your site sells links or participates in other link schemes, and it can get penalized.

Simply put, dofollow links are links without the rel="nofollow" attribute. Such links are followed by search engines and pass PageRank (note that links can also be restricted from being followed in bulk via the nofollow robots <meta> tag).
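For illustration, the two link types and the page-wide robots meta tag look like this (URLs are placeholders):

    <!-- Dofollow (the default): the link is followed and passes PageRank -->
    <a href="https://example.com/partner/">Partner site</a>

    <!-- Nofollow: search engines are asked not to follow or pass PageRank -->
    <a href="https://example.com/ad/" rel="nofollow">Sponsored link</a>

    <!-- Page-wide: ask robots not to follow any link on this page -->
    <meta name="robots" content="nofollow">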

Image Problems

Image factors and their effects on SEO

1. Broken images

An image is considered broken if it returns a 4xx or 5xx status code. Fix problematic images by removing or replacing them. Broken images hurt the user experience and may result in visitors bouncing away from the site without completing their goals.

Missing images may also impede the site’s crawling and indexation, wasting its crawl budget and making it harder for search engine bots to crawl some of the site’s important content.

2. Empty alt text

Search engines can’t read the text inside images, so alt attributes (also known as “alternative text”) help them understand what your images portray. The best practice is to write alt text for each image, using relevant keywords where possible, to help search engines better understand your pages’ content and, hopefully, rank your site higher in search results.
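For example (the file name and wording are placeholders):

    <!-- descriptive, keyword-relevant alt text -->
    <img src="/images/red-running-shoes.jpg" alt="Red lightweight running shoes, side view">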

On-page SEO Problems

On-page problems and Their Effects on SEO

1. Empty title tags

If a page doesn’t have a title, or the title tag is empty (i.e. it looks like this in the code: <title></title>), Google and the other search engines will decide for themselves what text to show on the results page. So if the page ranks for a keyword and someone sees it in the search results, they may not click on it simply because the generated snippet is unappealing. No webmaster wants this, because it means you cannot control what people see on Google when they find your page. Therefore, every time you create a webpage, add a meaningful title that will attract people; it should be unique and keyword-rich.
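For example (the wording is a placeholder):

    <head>
      <!-- a unique, descriptive, keyword-rich title, kept under ~70 characters -->
      <title>Handmade Leather Wallets | Free UK Delivery | Example Store</title>
    </head>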

2. Duplicate titles

The page title is the most important on-page element in SEO. It is a strong relevancy signal for search engines because it tells them what the page is really about. Pages with duplicate titles do not rank well, because search engines can’t determine which of the website’s pages is relevant for a given query. It is of course important that the title includes your most important keyword, but beyond that, every page should have a unique title so that search engines have no trouble telling your pages apart.

3. Too long titles

Titles longer than 70 characters get truncated by search engines and look unappealing in search results. If the title is shortened and incomplete, it won’t attract as many clicks as your webpage deserves.

4. Empty meta description

Although meta descriptions don’t have a direct influence on ranking, they are still important because they form the snippet people see in search results. The description should therefore “sell” the webpage to the searcher and encourage them to click through. A meta description that includes your keywords gives better results.

If the meta description is empty, search engines will decide for themselves what to include in the snippet, most often the first sentence on the page. Such snippets may be unappealing and irrelevant. Write a meta description for each of your website’s pages (especially for landing pages) and include marketing copy that encourages users to click.
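For example (the wording is a placeholder):

    <head>
      <!-- a unique, keyword-aware description of roughly 150-160 characters -->
      <meta name="description" content="Shop handmade leather wallets with free UK delivery. Slim, durable designs backed by a two-year guarantee.">
    </head>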

5. Duplicate meta descriptions

Some of your website’s pages may contain identical (duplicate) descriptions. These are problematic; rewrite the descriptions to make them unique for every page.

According to Matt Cutts, it is better to have unique meta descriptions, or even no meta description at all, than to show duplicate meta descriptions across pages. Ensure that your most important pages have unique, optimized descriptions.

6. Too long meta descriptions

As noted above, meta descriptions don’t directly influence rankings, but they form the snippet people see in search results and should “sell” the webpage to the searcher. If the meta description is too long, it will get cut off by the search engine and may look unappealing to users.

Localization Problems

Localization Factors and Their Effects on SEO

This is an important SEO factor that many people ignore.

1. Language Versions

If you have a multi-language website with different regional versions of a page, hreflang elements are the way to tell search engines about these localized variations. Check Google’s guidelines for multi-language websites to learn how to use hreflang elements properly.

You can set up multiple language and region variants of your pages via the XML sitemap; SEO PowerSuite’s WebSite Auditor can help with this under the Pages > Website Tools > Sitemap menu. Alternatively, create HTML hreflang tags with the help of the Pages > Website Tools > Localization menu.

2. Verify Pages with hreflang elements

Check the hreflang elements used on your domain’s pages for incorrect usage and invalid values.

Set up multiple language and region variants of your pages via XML Sitemap or by using Website Auditor.

3. Incorrect language codes

Always verify the values of all hreflang elements found on your pages. One of the most common mistakes is to use incorrect language-country values in hreflang annotations, for example “en-UK” instead of “en-GB”. Language codes should comply with ISO 639-1, and, if a country is also indicated, the country codes should comply with ISO 3166-1 Alpha-2. You can also use the “x-default” value for pages that do not have another better-suited language/region version.

You can set up multiple language and region variants of your pages via XML Sitemap or by using Website Auditor.

4. Invalid URLs

Make sure to use fully qualified URLs for each language/region version of a page, including the http:// or https:// protocol. Relative URLs are not allowed in hreflang attributes.

You can set up multiple language and region variants of your pages via the XML sitemap or by using Website Auditor.

5. Missing return links

Each language version of a page should have hreflang attributes that point to the page itself and also to other language/region versions of a page. It is also required that all language/region versions should point to each other in their hreflang attribute.

Example: You have a page in English (en) with 2 other language variations – Russian (ru) and German (de). The head section of the English version (en) should contain 3 hreflang elements; one pointing to the English version itself and 2 others pointing to Russian (ru) and German (de) pages. Both German (de) and Russian (ru) variants should also have 3 hreflang elements that point back to the English version, to each other, and also list themselves in hreflang attributes. Otherwise, if pages do not point back to each other their hreflang attributes will be ignored by search engines.
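In HTML, the three reciprocal annotations for this example might look like the following, placed identically in the <head> of all three pages (the URLs are placeholders):

    <!-- in the <head> of the en, ru, and de versions alike -->
    <link rel="alternate" hreflang="en" href="https://example.com/en/page/" />
    <link rel="alternate" hreflang="ru" href="https://example.com/ru/page/" />
    <link rel="alternate" hreflang="de" href="https://example.com/de/page/" />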

You can set up multiple language and region variants of your pages via XML Sitemap or by using Website Auditor.

6. Conflicting hreflang elements

A page language variant should have only one hreflang language value assigned to itself (e.g. a page should not declare itself as both “en” and “de”). It is, however, possible to use the same language with different regions, such as en-US and en-GB.

You can set up multiple language and region variants of your pages via XML Sitemap or by using Website Auditor.

7. Non-canonical pages with hreflang elements

This issue occurs when a page has hreflang attributes but at the same time uses a canonical element that points to some other page. Such a combination can confuse search engines because it suggests different URLs for indexation. The best practice is either to remove the canonical tag or to point the canonical element to the page itself. You can set up multiple language and region variants of your pages via the XML sitemap or by using Website Auditor.

8. Missing “x-default” values

Here you check for pages that use hreflang elements but lack the “x-default” value. The “x-default” value specifies the fallback page to serve users whose language and region have not been defined through your other hreflang attributes. You can set up multiple language and region variants of your pages via the XML sitemap or by using Website Auditor.

9. Incomplete hreflang values

This check analyzes all pages that use language-region hreflang values and verifies that these pages also have an “x-default” value and provide a generic URL for geographically unspecified users of the same language.

Example: A page may have specific variants for English-speaking users from Canada (en-CA) and Australia (en-AU), but you also need to specify a generic URL that will be used for English-speaking users from other countries.
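A hedged sketch of what that might look like (the URLs are placeholders):

    <!-- country-specific variants plus a generic English fallback -->
    <link rel="alternate" hreflang="en-CA" href="https://example.com/en-ca/page/" />
    <link rel="alternate" hreflang="en-AU" href="https://example.com/en-au/page/" />
    <link rel="alternate" hreflang="en" href="https://example.com/en/page/" />
    <link rel="alternate" hreflang="x-default" href="https://example.com/en/page/" />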

10. Pages without hreflang elements

Pages that have no hreflang elements in the head section, in the XML sitemap, or in the HTTP header fall under this category. Setting up language and region variants for every page of a domain can be time-consuming and laborious; if you have a big website, you may start by localizing just the main pages and directories. It is also possible to omit specific languages or regions, because Google will still be able to process the versions that point to each other.

To conclude, SEO is about improving the relevance and authority of your pages through optimization. Optimized webpages give a better user experience and help spiders crawl the website effectively. Technical, on-page, and off-page SEO factors all have a cumulative impact that pushes the website higher in the SERP. Use tools like SEO PowerSuite to get the desired results.

Keep your webpages free from errors.