robots.txt is a file placed in a website's root directory that communicates with web crawler robots. It defines which directories or sub-paths of the website should not be crawled. For example, if you do not want a subdirectory such as http://webpricecalculator.com/blog to be scanned by web robots, you can specify that in robots.txt.
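As a sketch, a robots.txt that blocks the /blog subdirectory mentioned above could look like this (the wildcard User-agent line, which addresses all robots, is just one common choice):

```
User-agent: *
Disallow: /blog/
```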
First, choose the default value for the search engine robots. There are two default values:
Allowed: all links may be scanned with no restriction.
Refused: the restricted links you list will not be scanned by search engine robots.
Then choose a crawl delay in seconds if you want to slow down crawling.
Enter your website's sitemap URL in the textbox (this is optional; you can leave it blank).
You can then choose separate options for each search engine robot.
Then enter each restricted directory path you want blocked.
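Putting the steps above together, the generated file might look something like the sketch below. The crawl delay, paths, bot name, and sitemap URL are all hypothetical examples, not output from the tool:

```
User-agent: *
Crawl-delay: 10
Disallow: /admin/
Disallow: /tmp/

User-agent: Googlebot
Disallow: /private/

Sitemap: http://webpricecalculator.com/sitemap.xml
```

Each `User-agent` group applies only to the named robot, which is how the per-robot options from the earlier step end up in the file.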
Verify the captcha as shown in the image.
Click the green “Create Robot.txt” button, then sit back and collect your generated robots.txt content from the result.
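If you want to double-check the generated rules before uploading them, Python's standard `urllib.robotparser` module can parse robots.txt content directly. A minimal sketch, using illustrative sample rules rather than real tool output:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules, similar to what the generator might produce.
rules = """User-agent: *
Crawl-delay: 10
Disallow: /blog
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())  # parse() takes the file as a list of lines

# Check whether a given URL may be fetched by a given robot.
print(rp.can_fetch("*", "http://webpricecalculator.com/blog"))   # False
print(rp.can_fetch("*", "http://webpricecalculator.com/about"))  # True
print(rp.crawl_delay("*"))                                       # 10
```

This is a quick sanity check that the paths you restricted really are blocked and everything else remains crawlable.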