Scanning "search engine"-protected websites
A website, say http://abcxyz.com, has disabled search engine crawlers via a robots.txt file on its site. I know the whole point of that protection would be defeated if we could simply browse its files.
However, is it possible to scan the website for its contents somehow? How? The site I am trying to scan has also disabled directory browsing.
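For context, robots.txt provides no real access control: it is an advisory file at a fixed, public URL that well-behaved crawlers fetch and honor voluntarily. A minimal Python sketch of how a compliant crawler consults it, using the question's placeholder URL http://abcxyz.com and a hypothetical path:

    import urllib.robotparser

    # robots.txt is advisory: compliant crawlers fetch it and skip the
    # listed paths voluntarily; the file itself is publicly readable.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://abcxyz.com/robots.txt")  # placeholder URL from the question
    rp.read()

    # Ask whether a generic user agent ("*") may fetch a hypothetical path.
    print(rp.can_fetch("*", "http://abcxyz.com/some/page"))

Note that reading the Disallow lines in the file directly also reveals which paths the site owner asked crawlers to avoid, which is precisely why robots.txt cannot hide content on its own.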