How To Fix Excluded Pages (Continues...)
Submitted URL not selected as Canonical:
Google has found that the same content appears on several of your pages, so it has chosen a different URL as the canonical version of that content.
For that reason, the submitted pages have been excluded from its index.
Pro advice: If some of your pages contain duplicate content, make sure to check the "Noindex" and "Nofollow" meta tag boxes for all of those pages, and leave the pages you want Google to index unchecked.
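If you set these tags by hand rather than through a plugin's checkboxes, this is what the meta tag looks like (a minimal sketch; it goes inside the page's <head>):

```html
<!-- Tells search engines not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```

Only add this tag to the duplicate pages; pages you want indexed should not carry it.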
Blocked by robots.txt:
These are pages blocked by robots.txt from getting crawled.
Carefully review these pages and separate the ones that should stay blocked from the ones that shouldn't.
If every blocked page is blocked intentionally, robots.txt is doing its job and there is nothing to worry about.
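One quick way to double-check is to run your robots.txt rules against a few sample URLs. Here is a small sketch using Python's standard-library robots.txt parser; the rules and paths are placeholders, so substitute your own:

```python
# Sketch: verify which paths a robots.txt file actually blocks
# before relying on it. Rules and URLs below are placeholders.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /duplicates/
Allow: /blog/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Paths you intend to block vs. paths you want crawlable
for path in ["/duplicates/old-post", "/blog/new-post"]:
    allowed = parser.can_fetch("Googlebot", "https://example.com" + path)
    print(f"{path}: {'crawlable' if allowed else 'blocked'}")
```

If a page you want indexed shows up as "blocked", loosen the matching Disallow rule before worrying about the report.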
Duplicate page without Canonical tag:
These are duplicate pages that declare no canonical URL.
Two situations are possible here:
Having a number of duplicate pages isn't necessarily a problem. If you don't mind them staying out of the index, there is nothing to worry about.
If you do want to keep Google from indexing your duplicate pages, block them with robots.txt, and leave the pages you want Google to index unblocked.
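Since this section is about missing canonical tags, it's also worth knowing what the tag itself looks like. A minimal sketch (the example.com URL is a placeholder for your preferred page):

```html
<!-- In the <head> of each duplicate, point to the preferred URL -->
<link rel="canonical" href="https://example.com/preferred-page/">
```

Adding this tag to duplicates tells Google which version you want indexed, instead of leaving it to guess.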
This is part of how you can leverage the Index Coverage report to improve the quality of your pages in Google. I hope you find it helpful!
Jenny