
Friday, December 26, 2008

Google optimization tips to get more traffic

  • Insert keywords in the right places in your meta tags (see the HTML sketch after this list).
  • Name your images with descriptive titles and alt text.
  • Use relevant text on your website.
  • Use AdSense to assess overall content relevance.
  • Optimize your title and meta tags.
  • Use keyword-rich anchor text in links.
  • Submit your website to search engines and directories.
  • Update your content regularly.
  • Build a sound internal linking structure.
  • Do link building (one-way, two-way, and three-way).
  • Post articles.
  • Publish press releases.
  • Post in forums.
  • Advertise.
  • Create and set up a Google sitemap.
  • Create blogs.
  • Make some videos and submit them to the larger video-sharing sites like YouTube.
  • Take part in social bookmarking sites. These can bring some good traffic quickly (don't spam).
  • Pay for direct advertising on sites that cover the same subject as yours.
  • Take part in PPC advertising, including Google AdWords, Yahoo! Search Marketing, and Microsoft adCenter.
  • Create an affiliate program for your website.
  • Do viral marketing by producing something with your site's link attached.
  • Include your site's link in the signature of any emails you send.
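
A minimal HTML sketch of the on-page items above: a keyworded title and meta tags, a descriptive image file name with alt text, and keyword-rich anchor text. The page topic, keywords, and URL are invented for illustration.

    <head>
      <title>Handmade Leather Wallets | Example Store</title>
      <meta name="description" content="Handmade leather wallets, crafted to order and shipped worldwide.">
      <meta name="keywords" content="handmade leather wallets, leather goods">
    </head>
    <body>
      <!-- descriptive file name and alt text instead of img001.jpg -->
      <img src="brown-leather-wallet.jpg" alt="Brown handmade leather wallet">
      <!-- the anchor text carries the target page's keyword, not "click here" -->
      <a href="http://www.example.com/leather-wallets.html">handmade leather wallets</a>
    </body>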

Monday, December 15, 2008

Crawling vs. Indexing


Crawling means sucking up content without processing the results. Crawlers are rather dumb processes that fetch the content Web servers supply in answer to (HTTP) requests for URIs, and deliver those contents to other processes, e.g. crawling caches or directly to indexers. Crawlers get their URIs from a crawling engine that’s fed from different sources, including links extracted from previously crawled Web documents, URI submissions, foreign Web indexes, and whatnot.

Indexing means making sense of the retrieved contents and storing the processing results in a (more or less complex) document index. Link analysis is a way to measure a URI’s importance, popularity, trustworthiness, and so on. Link analysis is often just a helper within the indexing process, sometimes an end in itself, but it is traditionally a task of the indexer, not the crawler (highly sophisticated crawling engines do use link data to steer their crawlers, but that has nothing to do with link analysis in document indexes).

A crawler directive like “disallow” in robots.txt can direct crawlers, but means nothing to indexers.
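
For example, a robots.txt rule like the following keeps compliant crawlers out of /private/, yet says nothing to an indexer; a URI under /private/ can still show up in an index if other pages link to it (the path is invented for illustration):

    User-agent: *
    Disallow: /private/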

An indexer directive like “noindex” in an HTTP header, an HTML document’s HEAD section, or even a robots.txt file can direct indexers, but it means nothing to crawlers, because a crawler has to fetch the document in the first place to enable the indexer to obey those (inline) directives.
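
Here is the “noindex” directive in its usual places; the path is invented, and note that the robots.txt form has only ever been an unofficial, experimental extension, not a standard:

    In an HTTP response header:
        X-Robots-Tag: noindex

    In an HTML document's HEAD section:
        <meta name="robots" content="noindex">

    In robots.txt (unofficial, experimental):
        Noindex: /private/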

So when a Web service offers an indexer directive like “noindex” to keep particular content out of its index, but doesn’t offer a crawler directive like

    User-agent: SEOservice
    Disallow: /

then this Web service doesn’t crawl; it merely processes content fetched by somebody else’s crawler.

That’s not about semantics, that’s about Web standards.