A Question on robots.txt Fetch
Sorry for not asking this as a question, but I needed to include an image, so I had to make it a blog post instead.
I had the following message in my GWT display, saying that the spiders had problems crawling my site due to this error: Robots.txt Fetch.
What can I do to sort this out?
Any ideas?
PS The image does not seem to be displayed here either lol... nightmare!!!
Hi,
Here are examples of what your robots.txt file should say to get the results you want.
To allow full access
User-agent: *
Disallow:
To block all access
User-agent: *
Disallow: /
To block one folder
User-agent: *
Disallow: /folder/
Replace "folder" with the name of the folder. If you have more folders to block, add another Disallow: line for each one beneath it.
To block one file
User-agent: *
Disallow: /file.html
Replace "file.html" with the name of the file. If you have more files to block, add another Disallow: line for each one beneath it.
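If you want to sanity-check rules like the ones above before uploading them, Python's standard urllib.robotparser can parse a robots.txt and report which URLs are blocked. A small sketch (the folder and file names here are just placeholders, not anything from the poster's site):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking one folder and one file for all crawlers
rules = """\
User-agent: *
Disallow: /private/
Disallow: /secret.html
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# can_fetch(useragent, url) returns True if that crawler may fetch the URL
print(parser.can_fetch("*", "https://example.com/index.html"))        # allowed
print(parser.can_fetch("*", "https://example.com/private/page.html")) # blocked
print(parser.can_fetch("*", "https://example.com/secret.html"))       # blocked
```

Running a quick check like this catches typos (a missing colon, a wrong path) before a crawler ever sees the file.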
Hope this helps.
Hi,
To be honest, I do not see a problem with this.
The # global line is just a comment and is ignored.
It then instructs all search engines to ignore that one page.
It may be that the file was saved incorrectly.
It must be saved in the web server's root directory. Also,
the filename of robots.txt is case sensitive: use "robots.txt", not "Robots.TXT".
Hope that helps. If not, you may want to check with Google, or whoever is having problems, and see if they have any suggestions.
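On the root-directory point above: crawlers always fetch robots.txt from the root of the host, never from a subfolder. A small sketch that derives the expected robots.txt URL for any page on a site (example.com is a placeholder):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Return the canonical robots.txt location for the host serving page_url."""
    parts = urlsplit(page_url)
    # Crawlers look only at the host root, so the page's own path is irrelevant.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("https://example.com/blog/post.html"))
# https://example.com/robots.txt
```

If opening that URL in a browser does not show the file, it is in the wrong place (or the wrong case), and that alone can cause a fetch error.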
Hey there... I really appreciate your help ... I have learned something from you there...
Also, I contacted support to see if it was a hosting issue, so I'm waiting on their input too.
Thanks again, I really appreciate your time! :)
Chris
Followed you too :)
I wish I could say something about this, but I am clueless. I did learn something though.
Thanks for sharing, Chris :)