Introduction to SEO

Search Systems

Search Systems are what you typically refer to as Search Engines (Google, Bing, DuckDuckGo, etc.). They are massively complex systems that tackle some of the biggest challenges in technology history.

Search Systems have four main responsibilities:

  1. Crawling – the process of traversing the Web and parsing the content of websites. This is a massive task, as there are over 350 million registered domains.
  2. Indexing – storing the data gathered during the crawling stage so it can be accessed when a query comes in.
  3. Rendering – executing resources on the page, such as JavaScript, that enhance its features and enrich its content. This process doesn't happen for every crawled page: sometimes rendering occurs before the content is indexed, and sometimes after, if no resources are available to perform the task at the time.
  4. Ranking – querying the indexed data to craft relevant results pages based on user input. This is where a search system applies its ranking criteria to give users the best answer for their intent. A toy pipeline sketched after this list ties these stages together in code.
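To make these stages concrete, here is a minimal crawl → index → rank sketch, not how a production search system works: the seed URL is a placeholder, HTML is parsed with regular expressions rather than a real parser, the rendering stage is skipped entirely (no JavaScript is executed), and pages are ranked by simple term frequency as a stand-in for the many signals real engines combine.

```ts
// Toy search pipeline (Node 18+, run as an ES module): crawl -> index -> rank.

// Inverted index: term -> (url -> occurrence count).
type InvertedIndex = Map<string, Map<string, number>>;
const index: InvertedIndex = new Map();

// Crawling: fetch a page, index its text, then follow its links.
async function crawl(url: string, depth: number, seen = new Set<string>()): Promise<void> {
  if (depth === 0 || seen.has(url)) return;
  seen.add(url);

  let html: string;
  try {
    html = await (await fetch(url)).text();
  } catch {
    return; // skip pages that fail to load
  }

  // Indexing: strip tags and record each term's frequency for this URL.
  const text = html.replace(/<[^>]*>/g, " ").toLowerCase();
  for (const term of text.match(/[a-z]{3,}/g) ?? []) {
    const postings = index.get(term) ?? new Map<string, number>();
    postings.set(url, (postings.get(url) ?? 0) + 1);
    index.set(term, postings);
  }

  // Follow absolute links found on the page.
  for (const [, link] of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
    await crawl(link, depth - 1, seen);
  }
}

// Ranking: score each indexed page by summed query-term frequency.
function rank(query: string): [string, number][] {
  const scores = new Map<string, number>();
  for (const term of query.toLowerCase().split(/\s+/)) {
    for (const [url, count] of index.get(term) ?? []) {
      scores.set(url, (scores.get(url) ?? 0) + count);
    }
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}

// Usage: crawl two levels from a seed URL, then query the index.
await crawl("https://example.com", 2);
console.log(rank("example domain"));
```

Running it crawls two levels of links from the seed, builds the inverted index as it goes, and prints URLs scored against the query, highest first.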

In the next section, we will learn more specifically how Googlebot works. Googlebot is Google's web crawler, the part of the search system that gathers all the information needed to build the massive database of content used to serve search results.
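One concrete detail worth knowing before then: Googlebot announces itself through the User-Agent request header, so a server can tell when it is being crawled. Below is a minimal sketch using Node's built-in http module. Note that the header can be spoofed, so Google recommends verifying a crawler's IP via reverse DNS as well, which this sketch omits.

```ts
// Minimal sketch: detect requests whose User-Agent claims to be Googlebot.
import { createServer } from "node:http";

createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  const isGooglebot = /Googlebot/i.test(ua); // header alone can be spoofed
  res.end(isGooglebot ? "Hello, Googlebot" : "Hello, visitor");
}).listen(3000);
```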


Quick Review

What happens to a page's data after it has been crawled?