WebLight is a web crawler that lets you efficiently maintain XML sitemaps and find markup, CSS, and link problems, so you can keep your sites error-free and fully indexed without sacrificing productivity. At the heart of WebLight is a fast web crawler that makes validating and mapping entire sites, even very large ones, easy. You simply tell WebLight where to start and use rules, like those in robots.txt, to control the resources it validates and the links it follows. You can use as many starting URLs and rules as necessary to scan all of the resources you want and none that you don't. Unlike most web crawlers, which only scan public HTML pages, WebLight can scan the most commonly used web resources - CSS, (X)HTML, Atom, RSS, and XML sitemaps - on local disks and on public and private web sites. If you can browse it, WebLight can analyze it.
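Crawl scoping of this kind typically follows the Robots Exclusion Protocol (robots.txt). As a sketch, the paths below are illustrative, a minimal rules file that keeps a crawler out of one directory while allowing everything else looks like:

```text
# Illustrative robots.txt - applies to all crawlers
User-agent: *
# Keep crawlers out of administrative pages
Disallow: /admin/
# Everything else may be crawled
Allow: /
```

Rules like these let you exclude whole sections of a site from validation without listing every URL individually.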
WebLight is like a link checker that validates CSS, HTML, news feeds, and sitemaps while it crawls. But WebLight doesn't just find non-standard code and broken links like other validators: it lets you categorize classes of validation problems, so you can find real problems without being distracted by noise from non-standard code you use intentionally, such as browser-specific CSS properties and WAI-ARIA attributes, or from other problems you choose not to fix. Maintaining XML sitemaps that help search engines efficiently index your sites is easy with WebLight. Its scanner finds all of your documents linked from news feeds, sitemaps, and other documents. You can then customize your sitemap, setting the priority, change frequency, and last modified date for URLs using WebLight's spreadsheet-like interface. Finally, when your sitemap is ready, WebLight will ping the search engines so they can index your new pages as soon as possible.
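The priority, change frequency, and last modified fields described above correspond to the standard elements of the sitemaps.org protocol. As a sketch, the URL and values are illustrative, a minimal sitemap entry looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- The page's canonical address -->
    <loc>https://example.com/</loc>
    <!-- When the page last changed (W3C date format) -->
    <lastmod>2024-01-15</lastmod>
    <!-- How often the page is expected to change -->
    <changefreq>weekly</changefreq>
    <!-- Relative importance within this site, 0.0 to 1.0 -->
    <priority>0.8</priority>
  </url>
</urlset>
```

All three child elements besides `<loc>` are optional in the protocol; setting them per URL is what a spreadsheet-style editor makes convenient.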