Error analysis
Every page on a website should return a 200 status code from the server. A 200 response means that everything is working correctly and that you have the site under control.
What other codes can I find, and how can I identify and fix them?
301 redirects: within this block we find the famous 301, the code returned when one page permanently redirects to another. Redirects add extra requests and increase loading times, so we should eliminate them wherever possible, pointing internal links straight to the final URL.
You may also find 302 redirects, which we use to indicate that a redirect is temporary and that the original page will be active again in the future. This way the page does not lose its authority.
404 errors: who has never had a 404 error? When a page does not exist, the server returns a 404 code, creating a bad user experience. The solution is to redirect that page to a related one and to fix the internal links that still point to the now non-existent page.
500 errors: this code warns you that the server could not process the page.
When a page returns a 500 error it is usually obvious, either from the typical blank page or from the message it displays: 500 Internal Server Error.
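The codes above can be turned into a small triage helper for an audit. This is an illustrative sketch: the `triage` function and the `crawl_results` data are our own invention, not output from any real crawler.

```python
# Sketch of a status-code triage helper for an SEO audit.
# The crawl results below are hypothetical sample data.

def triage(status):
    """Map an HTTP status code to the action suggested in the audit."""
    if status == 200:
        return "OK - nothing to do"
    if status == 301:
        return "Permanent redirect - update internal links to the final URL"
    if status == 302:
        return "Temporary redirect - confirm the page will return"
    if status == 404:
        return "Not found - redirect to a related page and fix internal links"
    if status >= 500:
        return "Server error - the server could not process the page"
    return "Review manually"

crawl_results = {            # hypothetical crawl output
    "/": 200,
    "/old-pricing": 301,
    "/deleted-post": 404,
    "/checkout": 500,
}

for url, code in crawl_results.items():
    print(f"{url}: {code} -> {triage(code)}")
```

A real audit would feed this function the status codes reported by your crawler of choice instead of a hand-written dictionary.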
Sitemap and Robots
Configuring these two files is fundamental: they tell Google which pages make up the site and which areas we do not want Googlebot to crawl.
The sitemap is an XML file that contains the structure of all the pages that make up the website.
Only the pages that we want indexed, and that return the 200 code we discussed earlier, should appear in this file.
We can submit the sitemap through Search Console. Other optimizations we can apply to our sitemap include the following:
If the site has many URLs, we can split the sitemap by sections of the site: categories, articles, products, etc.
If the site is multi-language, we can create several sitemaps, one for each language version.
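The splitting described above is done with a sitemap index, a file that lists the individual sitemaps. A minimal sketch, with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sitemap index splitting the site by section (URLs are placeholders) -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://mydomain.com/sitemap-categories.xml</loc></sitemap>
  <sitemap><loc>https://mydomain.com/sitemap-articles.xml</loc></sitemap>
  <sitemap><loc>https://mydomain.com/sitemap-products.xml</loc></sitemap>
</sitemapindex>
```

Each file listed in the index is itself an ordinary sitemap containing `<url>` entries.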
As for the robots.txt file, in it we define which parts of our site we want crawled or not. Using the Disallow directive, we can block the sections that we don't want the bot to crawl.
If we want to see the robots.txt, we can access it on any website at the following URL: mydomain.com/robots.txt.
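A minimal robots.txt sketch, with placeholder paths, showing the Disallow directive described above plus the common practice of declaring the sitemap location:

```text
# Example robots.txt (paths are placeholders)
User-agent: *
Disallow: /wp-admin/
Disallow: /cart/

Sitemap: https://mydomain.com/sitemap.xml
```

`User-agent: *` applies the rules to all crawlers; a specific bot can be targeted by name instead.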
So far we have covered the main technical SEO aspects to review during this audit.
Next, we will analyze all the points related to the Contents.
Content quality
Content is another of the most important parts of this branch of SEO, since it is both users and Google who will interpret the text. The content you offer therefore has to be well optimized.
Does it meet the needs of your potential users? Is the information well structured and relevant?
These are some of the questions you should ask yourself when determining the quality of a text.
Of course, keywords play a fundamental role in this content, since search engines use them to determine when your page should appear in Google.
Doing keyword research is the first step before you start writing.
In this research you will discover how many times a specific term is searched for, as well as new opportunities you may not have considered.
[Image: keyword research in Google Keyword Planner]
Other factors that fall within this optimization and we recommend you analyze are the following:
Duplicate content: if you have repeated fragments of text on different pages, you can run into duplicate content problems and, as a consequence, Google can penalize you for it.
Density: the number of times the keyword appears in the text matters here. We should write as naturally as possible to avoid falling into over-optimization, also known as keyword stuffing.
Thin content: also called poor-quality content. Every page that is relevant for SEO should have a minimum text length so that Google does not consider it low quality. Pages that contribute nothing at the SEO level are best removed from the Google index.
Cannibalization: does your website have several very similar pages that target the same search intent? If so, you may be facing cannibalization, that is, several of your internal pages competing with each other for the same keyword.
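The density check above can be approximated with a few lines of code. This is a sketch under our own assumptions: the `keyword_density` function and the sample text are illustrative, and "density" here simply means occurrences of the keyword per 100 words.

```python
# Sketch: keyword density as occurrences per 100 words.
# Function name and sample text are our own, for illustration only.
import re

def keyword_density(text, keyword):
    """Return how often `keyword` appears per 100 words of `text`."""
    words = re.findall(r"\w+", text.lower())
    hits = len(re.findall(re.escape(keyword.lower()), text.lower()))
    return round(100 * hits / len(words), 2) if words else 0.0

text = "Running shoes for trails. Our running shoes are light. Buy running shoes."
print(keyword_density(text, "running shoes"))  # 3 hits in 12 words -> 25.0
```

There is no universally "correct" density figure; the point of measuring it is to spot pages that repeat a keyword far more often than natural writing would.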
Meta Tags
All the information Google shows in its search results comes from meta tags. The best way to indicate the content or topic of a page is to include these HTML tags: on the one hand the title tag, and on the other the description tag.
Some recommendations for creating these tags:
For meta titles:
The title must be unique and not repeated on other pages.
It should always include the keyword, as close to the beginning of the title as possible.
The title must not exceed 65 characters.
For meta descriptions:
They should not be repeated on more than one page either.
The keyword should be included, although here its position is not as relevant.
It should include a call to action to attract the user's attention and thus increase the CTR.
The size of the description must not exceed 155 characters.
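The recommendations above lend themselves to an automated check. A minimal sketch, assuming the 65- and 155-character limits from this section; the function name and sample page are our own:

```python
# Sketch of a meta-tag check applying the limits recommended above.

TITLE_MAX = 65         # recommended maximum for meta titles
DESCRIPTION_MAX = 155  # recommended maximum for meta descriptions

def check_meta(title, description, keyword):
    """Return a list of warnings for a page's title and description."""
    warnings = []
    if len(title) > TITLE_MAX:
        warnings.append(f"Title is {len(title)} chars (max {TITLE_MAX})")
    if len(description) > DESCRIPTION_MAX:
        warnings.append(f"Description is {len(description)} chars "
                        f"(max {DESCRIPTION_MAX})")
    if keyword.lower() not in title.lower():
        warnings.append("Keyword missing from title")
    if keyword.lower() not in description.lower():
        warnings.append("Keyword missing from description")
    return warnings

# Hypothetical page data, for illustration.
print(check_meta(
    title="SEO Audit Checklist: 10 Steps to Review Your Site",
    description="Learn how to run an SEO audit step by step. "
                "Download our free checklist and start today!",
    keyword="SEO audit",
))
```

Running this over every page of a crawl quickly surfaces titles and descriptions that break the length or keyword recommendations.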
Images
How do we inform Google of the content of an image?
With the ALT attribute we can assign a description that matches the image, using the keywords we are targeting.
[Image: adding an ALT tag to images]
Remember that Google also has a dedicated image search engine, and your page could receive traffic from there.
In terms of loading, one of the first optimizations to pursue with images is reducing their file size and resolution as much as possible, adapting them to the dimensions of the space where they will be displayed.
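Both points come together in the image tag itself. A sketch in which the filename, ALT text, and dimensions are placeholders:

```html
<!-- Illustrative example: filename, ALT text, and sizes are placeholders -->
<img src="/images/red-running-shoes-400.jpg"
     alt="red running shoes for trail running"
     width="400" height="300">
```

Declaring `width` and `height` also lets the browser reserve the space before the image loads, which helps avoid layout shifts.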