Achieving and keeping a long-term ranking among the top 10 Google search results requires continuous work on your website. This tutorial is intended to give you a first insight into search engine optimization and to help you better assess your own needs.
What is search engine optimization (SEO)?
Search engine optimization (SEO) is a subfield of search engine marketing and refers to measures that help websites achieve higher positions in the organic, unpaid search results (natural listings).
This tutorial is based on the current state of knowledge in the field of search engine optimization, as well as the guidelines for webmasters provided by Google. All recommendations refer to organic search on Google, as distinct from paid ads:
As of April 2020, the global market share of the search engine Google is about 86 percent of all search queries. Its competitors Bing and Yahoo together serve only 9.56 percent of search queries worldwide.
Therefore, the recommendations in this tutorial focus especially on Google's search algorithms. Bing, and Yahoo, which uses Bing's search technology, basically use the same technologies with a slightly different focus and are one to two years behind Google in terms of technology and data depth. Therefore, all optimization measures implemented for Google usually also have a positive effect on visibility in Bing and Yahoo.
What is a search engine?
A search engine is, generally speaking, a program for searching documents stored on a computer or in a computer network such as the World Wide Web. Internet search engines, such as Google or Bing, have their origins in information retrieval systems.
They create a keyword index for the document base in order to answer search queries using keywords with a hit list sorted by relevance. After entering a search term, a search engine delivers a list of references to possibly relevant documents, usually displayed with title and a short excerpt of the respective document. Meanwhile, Google also integrates vertical search results, such as images, videos, news, shopping results or even author information in the so-called universal search.
Learn more: How Google Search works
Display of the search results
The search results displayed, the so-called “snippets”, contain important elements of the ranking page and are intended to give the visitor an impression of the topic and content of the respective page. As a result, these snippets contribute significantly to the so-called “click-through rate”, i.e. the ratio between impressions and actual clicks on the corresponding result.
The snippet basically always consists of the elements:
- URL or breadcrumb trail
- Page title (space for about 55 characters)
- Description (if sufficiently relevant, the meta-description is used here)
Example of a snippet
Increasingly, snippets can also contain meta-information, such as ratings, dates, lists, author information and the like. If users are very likely to click through to a domain, the snippet can also contain so-called sitelinks, i.e. additional short links below it with further relevant entry points for the user. These display forms, known as “rich snippets”, generally increase the click rate in the search results and should therefore be used wherever possible.
SEO as a continuing process
On the search engine side, different search methods and algorithms can be applied at any time, which are subject to constant change and development. Therefore all recommendations for search engine optimization are only valid as long as the search engines do not make any further changes to their algorithms.
Search engine optimization involves the examination of current search engine techniques, which are not disclosed by the search engine operators and are frequently changed in order to make abuse more difficult and provide the user with relevant results. The techniques that are not known and kept secret are examined for possibilities by “reverse engineering” the search results.
This means that there can never be any truly reliable findings and therefore no guarantee of success. All changes are made at your own risk.
Search engine optimization often makes it necessary to make small changes on parts of the website. If you look at these changes on their own, they may only bring about small improvements. But when combined with the other optimization measures, they can have a significant effect on organic search performance.
The user at the centre of action
Although this tutorial often contains the word “search engine”, I would like to point out that optimization decisions should first and foremost focus on what is best for the human visitors to your website.
After all, they are the true consumers of your content and only use search engines to find it. If you focus too much on being at the top of organic search results without delivering the desired result to the visitor, you will not create sustainable added value or a positive search experience.
Search engine optimization is all about improving the website in terms of search engine visibility, but the overall goal must always be to attract converting users to the website.
The 4 elements of search engine optimization
OnPage optimization includes all adjustments to your own website. This covers the optimization of the page content in terms of quality, formatting, headlines, etc., but also technical aspects such as headers and tags, as well as the internal link structure of the site.
In order to be able to optimize a website, it is essential to know its keywords, i.e. search terms that describe your products and services and which users search for in search engines.
Keyword research and the selection of the relevant keywords should therefore be carried out at the beginning of each analysis, as they form the basis for many further measures.
What exactly should be optimized? The individual components are examined in more detail below:
1. Page title
The HTML <title> tag within the <head> section describes to both users and search engines what a page is about. Choose short and meaningful titles.
The title of the home page should include the name of the website or your company, as well as other important information, such as the company's location or some of its focal points and offers.
When your page appears in search results, the content of the title tag usually appears as the first line of the result. Words in the title are in bold if they appear in the user’s search query or are very similar to them (word parts, synonyms, numerals, etc.). This is intended to help users determine how relevant a particular web page is to their particular search and therefore has a large impact on the click-through rate of the entry.
Each page must have a unique title tag.
This helps the search engines to recognize that the page is different from others on the same website. In addition, pages with the same titles often obstruct each other in the ranking, because the search engine cannot recognize which of the pages is now the right one and so the pages cannibalize each other.
- Use of titles that effectively communicate the content of the respective page including the relevant keywords
- Use of unique titles for each page
- Use of short and meaningful titles
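Applied to a fictitious example company, such a title could look like this (company name, wording and focus are invented for illustration):

```html
<head>
  <!-- Company name, location and service focus within roughly 55 characters -->
  <title>Miller Plumbing Berlin | Bathroom &amp; Heating Service</title>
</head>
```

The name, location and core offer all fit within the roughly 55 visible characters that Google typically displays.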
2. Meta description

The meta-description is a summary of a page that can be displayed in directories or search engines as the explanatory text in the hit lists. This is frequently the case in Google's results.
However, if the search engine finds that the meta-description does not match the search query, for example because the word searched for is not included, a relevant excerpt from the “normal” text of the page around this word is often displayed instead of the description.
Adding a meta-description is always recommended in case Google cannot find a good text excerpt for the snippet.
Words that appear in the user's search query are printed in bold in the snippet, so it is recommended to include keywords in the meta-description. But be careful not to use too many keywords. A meaningful text in the result display that addresses the visitor directly and ideally already leads him to action (call-to-action) will entice more interested visitors to click than senseless keyword stuffing in a better position.
Google itself provides a guideline for the optimal length of the description in the result display:
Each hit description consists of about 160–170 characters, depending on the width of the characters, with a line break roughly in the middle. In practice, this means the meta-description can contain one to two sentences or a short paragraph. The meta-description should not only contain the essential keywords in the first part (the first 70–80 characters), but should also encourage clicks. After a break (period, comma, exclamation mark), the second part can contain the less important keywords, the USP, branding or image phrases, or possibly the main keyword again.
As with the title, it is recommended to use unique meta-descriptions for each page.
- Use of unique meta-description describing the content of the page to improve the snippets.
- Meta-Description is intended to inform and arouse interest!
- Use of meaningful keywords in the meta-description.
- For the optimization of the meta-description it is recommended to transfer the experience from the AdWords campaigns with their respective ad texts. Targeted testing is also possible here.
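Putting these recommendations together, a meta-description could look like this (domain, wording and offers are invented for illustration):

```html
<meta name="description" content="Bathroom renovation and heating service in Berlin. Certified plumbers, fixed prices, 24h emergency hotline. Request your free quote today!">
```

The main keywords sit in the first 70–80 characters, followed by USPs and a call-to-action, and the whole text stays within the roughly 160-character limit.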
3. URL structure
The use of expressive categories and file names on a web page can not only help to organize the page better, but could also contribute to more effective crawling of documents by search engines. In addition, it can create more easily assignable, “friendlier” URLs for those who want to link to you.
Potential visitors could be deterred by extremely long and meaningless URLs with only a few comprehensible words. In addition, some users link to your site with the URL as anchor text. If the URL contains a relevant text, it gives the search engine more information about the content of the linked page.
The URL is also displayed as part of the Google search results. In addition, words from the search query are displayed in bold in the URL.
By the way, the mere occurrence of the keyword in the URL is not a ranking factor! It can therefore be more reasonable to keep existing descriptive URLs instead of forcing search terms into them.
- Use descriptive and meaningful URLs.
- Use a simple directory structure so that the user can easily see where he is on the website.
- Use a single URL version for each document.
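One way to enforce a single URL version is a server-side 301 redirect. A sketch for Apache's .htaccess is shown below; the host name is a placeholder and must be adapted before use:

```apache
# .htaccess: redirect every request to the canonical https://www host (301)
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC,OR]
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

This way, www/non-www and http/https variants all resolve to one canonical address.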
Information architecture describes the basic structure of information systems, whether a library, an intranet or a normal website. Every website has such an architecture. Often it arises unplanned and grows organically as the website and its offering expand.
An optimal information architecture ensures that both the user and the search engine crawler can find their way around optimally at all times.
Although Google ranks each individual page, it is advantageous for optimal indexing to know what role a page plays in the overall structure of the website.
All websites have a home, root or start page. It is usually the most frequently visited page and serves as the basis for navigation within the site for many visitors. If your website contains more than a handful of pages, you should consider how visitors move from this general home page to pages with more specific content.
The optimal internal linking of your pages is a compromise between two paradigms of usability:
- Short distances for visitors and the crawler: the more pages a user has to “click through” to reach his destination, the more dissatisfied he usually is.
- Easy decisions: the decision for the next click must be as simple as possible, which means each page should only offer a manageable number of links.

The first rule alone would lead to a start page with far too many links to all sub-pages, thus breaking the second rule; the second rule, taken to the extreme, leads to pages with only one link each, so that the decision is trivial but far too many clicks are needed to reach the target.
In practice, the middle ground between these two paradigms is the ideal.
According to the motto: “As little as possible, as much as necessary”, each page should contain, if possible, only those links that the user needs from exactly this page to find his desired destination.
- Create organic, traceable hierarchies that guide users from general to specific content as easily as possible.
- Don’t overdo it with the number of links offered per page, this not only confuses users but also weakens the transmission of relevance within your domain.
- Use text for navigation and avoid dropdown menus, graphics and animations.
Link text is the clickable wording of a link. It is located within the tag <a href="…"></a>. This text tells users and Google something about the content of the linked page. Whether for internal links or external links to other sites: the better the link text, the easier it is for the user to navigate and the better Google understands what the linked page is about.
In contrast to external backlinks, there is no over-optimization penalty or devaluation for overly exact link texts in internal linking!
- Suitable, descriptive link texts help to convey the linked contents.
- Use of short but expressive link texts.
- Integrate the keywords of the target page into the link text!
- Choose a formatting that highlights the linked text so that the user can distinguish this link from the normal text.
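The difference between weak and descriptive link text can be shown with a small example (the URL and wording are invented):

```html
<!-- Weak: the link text says nothing about the target page -->
<a href="/services/bathroom-renovation">Click here</a>

<!-- Better: short, descriptive, contains the target page's keyword -->
<a href="/services/bathroom-renovation">Bathroom renovation services</a>
```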
In the first step, before the results page is delivered, a search engine forms a list of all websites that contain the search terms at all or that are linked with matching link text. Only then does the search engine sort the list according to its secret ranking factors.
For an optimal placement in the search results it is essential to use the targeted terms and term combinations on the respective page!
Never write a text from a search engine perspective, but always for the reader.
You can then ensure that the key terms are well integrated into the text and, if necessary, extend or modify some of the wording to ensure integration. Logical division of the text into headings and body text with meaningful paragraphs helps Google to better understand the content of the page.
Heading 1 should therefore contain your most important keywords; synonyms and variants can be used in the subheadings. If the user finds his search query directly on the page, he has the feeling that he is in the right place and can find the content he is looking for.
Contrary to persistent rumors, the pure keyword density is not a ranking factor for search engines!
A statement about the keyword density of a document, as an isolated value, does not make sense from several points of view. This can be illustrated with a simple example: A completely meaningless dummy text in which the search term occurs several times would already be relevant from the perspective of keyword density.
The keyword density alone cannot therefore provide a reliable statement about the quality of the text.
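The point can be illustrated with a short sketch (the helper name and example texts are invented for illustration):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Share of words in `text` that equal `keyword` (case-insensitive)."""
    words = re.findall(r"\w+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

# A meaningless dummy text reaches a high density without being relevant:
spam = "bank bank offers bank best bank deals bank"
# A genuinely useful text about the same topic has a much lower density:
article = ("Our bank branch advises customers on accounts, "
           "loans and long-term financing options.")
```

Here `keyword_density(spam, "bank")` is far higher than `keyword_density(article, "bank")`, yet only the second text has any value for the reader.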
Length of text
The texts should also represent a real added value for the user and be of a certain length so that they are considered relevant by Google. The more content and thus added value you offer, the better.
In concrete terms, this means that articles should have a minimum length of 1,000 words; 1,500 words would be ideal, according to a study by SerpIQ. According to Medium.com, the ideal reading time for an article is 7 minutes, which corresponds to about 1,600 words. Product and category pages can fall below this value, as they serve mainly for orientation; once the user has made his selection, he stays longer on the product detail page anyway.
Good texts with relevant information also let the user stay longer on your site. Ideally, they will find exactly the information they are looking for and will not return to the search results. This positive sign about the quality of your site can make you appear more relevant to Google and achieve better rankings.
Latent semantic optimization and WDF * IDF
Recently there has been a real hype in the SEO world about the small formula WDF * P * IDF. Explanation: this function makes it possible to mathematically determine the ratio of certain words within a text document in relation to all other relevant documents. Thus, the search engine can determine whether a text really deals intensively with a topic or whether the keyword merely occurs in an otherwise irrelevant text. It can also be used to determine whether the text sheds broad light on the topic or only provides superficial information. Even meaningless texts, over-optimization and spam can be identified relatively reliably by means of term frequency analyses.
In a semantic optimization, the content of the website is textually designed in such a way that words are used which are generally mentioned in connection with the topic of this website, mostly also on other websites, and are thus quasi expected.
In principle, there is nothing wrong with this approach when creating text, but the focus of your efforts must be on readability and the benefit for the reader, not on fitting the text to a curve. Another problem with many WDF * IDF tools is that they only use the top 10 ranking pages for comparison, whereas the actual formula works with a complete corpus of all relevant documents. This results in a too superficial view. But even with more sophisticated tools, the analysis remains very technical and mathematical, and the output values often do not really help to write a better text in practice.
The so-called proof keywords, i.e. the terms that frequently occur in the most relevant and well-ranked documents, indicate which terms the search engine simply expects for a certain topic. For example, the proof keywords financing, account and money for the keyword bank indicate that it is not about a park bench to sit on, but about a financial institution.
If you research a topic extensively, illuminate it in a helpful and useful way for the reader, and then check the use of the correct keywords (proof keywords) with a good TF * IDF tool, you will achieve a well readable and also statistically ideal text.
In practice, I find it easier to start with a W-question tool. Such a tool finds frequently asked questions that users search for in connection with your keyword. This way you can create really good content, answer the needs and questions of your visitors, and easily find out what searchers are really interested in for a given term. With these questions in mind, it is usually much easier to answer them in the text and to cover a topic comprehensively.
Afterwards you can check with a WDF * IDF tool whether you have used the important proof keywords in the text.
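The idea behind such tools can be sketched with a plain, smoothed TF * IDF over a toy corpus. This is only an illustration: the function name, smoothing and example texts are invented, and WDF * IDF additionally dampens the term frequency logarithmically.

```python
import math
import re

def tokenize(text: str) -> list[str]:
    return re.findall(r"\w+", text.lower())

def tf_idf(term: str, doc: str, corpus: list[str]) -> float:
    """Plain TF * IDF: term frequency in `doc`, weighted by how rare
    the term is across the whole corpus (smoothed to avoid zeros)."""
    words = tokenize(doc)
    tf = words.count(term.lower()) / len(words)
    df = sum(1 for d in corpus if term.lower() in tokenize(d))
    idf = math.log((1 + len(corpus)) / (1 + df)) + 1  # smoothed IDF
    return tf * idf

corpus = [
    "The bank offers accounts, loans and long-term financing.",
    "Sit down on a bench in the park and relax.",
    "Financing a house requires a bank account and some money.",
]
# "financing" is a proof keyword for the banking topic: it occurs in the
# relevant documents but not in the park-bench text.
```

A document about the financial topic scores above zero for "financing", while the park text scores zero, which is exactly the separation the proof-keyword idea relies on.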
- First and foremost, write good texts with real added value for your users!
- Try to cover all aspects of a topic within the article.
- Avoid sloppy texts with spelling and grammar errors.
- Avoid text in images. Search engines cannot read it.
- Structure the text logically with headings and paragraphs.
- If possible, use keywords in the page title, the URL and headings 1 and 2.
- Important keywords should be evenly distributed throughout the text. This ensures that the search engine does not consider just part of the text as relevant content.
- Also use synonyms and variations of the keywords as well as different declensions and conjugations.
- Avoid over-optimized texts that are no longer readable!
Pictures in search engines
Google's goal is to provide its users with the best and most relevant search results in both image and web searches. By following the best practices listed below, as well as Google's standard webmaster guidelines, you will increase the likelihood that your images will appear in these search results. You can provide Google with additional details about your images, including the URLs of images that the search engine might not otherwise detect, by adding information to an image sitemap.
Avoid embedding important text, such as page headers and menu items, in graphics, as not all users will have technical access to them. Use regular HTML to allow users to access important text content as freely as possible.
In general, the rule for image SEO is: provide as much image information as possible. Give your images precise, meaningful file names. The file name can point Google to the subject of the image, so try to express what the image shows in its file name. For example, “My-new-black-kitten.jpg” is much more meaningful than “IMG00023.JPG”.
Meaningful file names are also helpful for users: If the search engine cannot find suitable text on the page where it found the image, it uses the file name as an excerpt of the image in the search results.
Create suitable alternative text!
The alt attribute is used to describe the content of an image file. This attribute is important for two reasons:
- Through alternative text Google receives useful information about the subject of the picture. Google cannot read images, so it is essential to tell Google what is shown in the image. This information is used to determine the best image as a result for a user’s query in the image search.
- Many users – for example, visually impaired users or users who use speech-enabled applications or have low-bandwidth connections – may not be able to view images on the web pages. A meaningful alternative text provides these users with important information about the intended image.
Not helpful (empty alt text):

<img src="puppy.jpg" alt=""/>

Better:

<img src="puppy.jpg" alt="puppy"/>

Best (precise description):

<img src="playing-dalmatian-puppy.jpg" alt="Playing Dalmatian puppy">
What you should avoid:
<img src="puppy.jpg" alt="puppy dog small dog pup pups puppies"/>
Filling search terms with too many alternative properties (“superfluous keywords”) reduces the user experience and can lead to your website being classified as spam.
Instead, fill your website with useful and informative content and use appropriate search terms that fit the context.
- Do not embed important text in images.
- Give your images accurate, meaningful file names.
- Use the Alt attribute to describe the content of an image file.
Technology plays a central role in search engine optimization. Ensuring the “crawlability” of content is therefore a top priority. In addition, it is important to keep the crawler away from pages that you do not want in the index anyway and, at the same time, not to overload the index with useless pages.
HTML source code
The use of the semantic elements offered by the HTML standard can help the search engine to better understand the most important terms of the respective page. In particular, the headings <h1>, <h2>, <h3>, etc. should be used.
The element <h1>, i.e. the first-order heading, should, if possible, appear only once on each page and match the page title of the respective page.
In addition, particularly important page elements and terms can be highlighted for the search engine with the tags <strong> and <em>.
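A page skeleton following these rules might look like this (all content is invented for illustration):

```html
<h1>Bathroom Renovation in Berlin</h1>
<h2>Planning and costs</h2>
<p>
  A complete renovation usually takes <strong>two to three weeks</strong>
  and starts at <em>around 8,000 euros</em>.
</p>
```

The single <h1> matches the page title, subheadings structure the content, and emphasis tags highlight the key facts.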
Recurring page elements
From a semantic point of view, recurring elements, such as the boxes in the sidebar should not contain <hx> headings, as these do not represent sub-items of the respective page and should therefore not be given any relevance within the current document.
HTTP Status Codes
The HTTP standard defines numerous status codes as the response of the web server to requests from a browser (client). The most important ones from an SEO point of view are:
- 200 OK – The page loads correctly and is included in the search engine index
- 301 permanent redirect – A permanent redirect that passes on PageRank
- 302 temporary redirect – A temporary redirect that does not pass on PageRank (avoid it!)
- 404 error – The URL could not be retrieved and is therefore not indexed
- 503 maintenance – The URL is temporarily unavailable and will not be deindexed (recommended for maintenance purposes)
As a site operator, the most important things are that error pages return the 404 status code, redirects are always implemented as 301s, and regular pages respond with 200 OK.
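The list above can be condensed into a small helper. This is a sketch, not crawler code, and the function name and return strings are invented:

```python
def seo_action(status: int) -> str:
    """Map an HTTP status code to its effect on indexing, per the list above."""
    if status == 200:
        return "indexed"
    if status == 301:
        return "redirect, PageRank is passed on"
    if status == 302:
        return "redirect, PageRank is NOT passed on - avoid"
    if status == 404:
        return "not indexed"
    if status == 503:
        return "temporarily unavailable, stays in the index"
    return "unhandled status"
```

Such a mapping is handy when auditing a crawl log: any URL whose status does not resolve to "indexed" or a 301 deserves a closer look.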
Duplicate content and keyword cannibalization
The most important principle in URL design is: All content may only be accessed under a single URL. This URL is also called the canonical URL. If several URLs lead to the same content, a problem arises through so-called duplicate content and keyword cannibalization.
Cannibalization in marketing occurs when different products of a company are in direct competition with each other, i.e. they work the same market and rob each other of market share.
In terms of SEO, cannibalization at the keyword level means that several pages of the same website compete with each other in the rankings because they are too similar and target the same terms. This often has negative effects: none of the pages really ranks well, because the search engine cannot decide which is the “right” or best page for this term. As a precaution, Google may then simply rank none of the pages.
Duplicate content or near-duplicate content occurs when two or more pages (different URLs) show the same or almost the same content. All content should only be findable or indexable under a single URL.
Duplicate content can have a negative effect on the search engine ranking of a website, as search engines filter duplicate content to prevent users from seeing content-identical or very similar hits that would reduce the quality of the search result pages (SERP).
Common causes of duplicate content
- Content can be accessed with and without “www”
- Content can be accessed via both “http” and “https”
- Content can be accessed both with and without a trailing slash
- Content can be accessed with and without the index file displayed
- Content can also be accessed with tracking parameters
- Pages can also be accessed with session IDs
- Content can be accessed in upper- and lower-case variants
- Content can be accessed via several views
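Most of these variants can be collapsed programmatically. The sketch below uses only the Python standard library; the function name, parameter list and rules are invented for illustration and must be adapted to your site:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Tracking parameters that do not change the content (extend as needed).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "gclid", "sessionid"}

def canonicalize(url: str) -> str:
    """Reduce common duplicate-content variants to one canonical URL."""
    parts = urlsplit(url.lower())                 # upper/lower case variants
    host = parts.netloc
    if not host.startswith("www."):               # with vs without "www"
        host = "www." + host
    path = parts.path.rstrip("/") or "/"          # trailing slash
    if path.endswith("/index.html"):              # visible index file
        path = path[: -len("index.html")].rstrip("/") or "/"
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING_PARAMS])
    return urlunsplit(("https", host, path, query, ""))  # http vs https
```

For example, `canonicalize("http://Example.com/Blog/?utm_source=x")` yields `https://www.example.com/blog`, so all listed variants map to one URL.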
The solution to this problem is the so-called “Canonical Tag”:
The comparatively new “Canonical Tag” (also called ‘Canonical Link’, ‘Canonical URL’ or ‘URL Canonicalization’) is another step of the big search engine operators (especially Google) to get rid of ‘Duplicate Content’.
Web sites use different methods for page numbering. For example, many sites divide very long articles into several shorter pages. Product websites and online stores usually divide the list of items in a long product category into several pages. Similarly, discussion forums divide long threads into consecutive URLs.
If you use page numbering for content on your site and want it to appear optimally in search results, Google recommends one of the following methods:
- Do nothing. Content with numbered pages occurs very often. Google is good at delivering the most relevant results to users, regardless of whether content is divided into multiple pages.
- Specify a total-view page. Searchers generally prefer to see an entire article or category on a single page. If it can be assumed that the searcher prefers this, Google tries to display the total-view page in search results. You can also add a rel="canonical" link on the component pages to tell Google to display the full-view version in search results.
However, you should prefer a third variant, shown in the example below, as it rules out misinterpretations. It is also important to note:
On paginated pages, the canonical tag must NOT point to a different URL; otherwise the links on these pages will not be considered by Google at all, which massively hinders internal linking.
Example for paginated pages
First page: http://www.example.com/?page=1 corresponds to http://www.example.com/
<link rel="canonical" href="http://www.example.com/" />
<meta name="robots" content="index,follow">
Second and each subsequent page: http://www.example.com/?page=2
<link rel="canonical" href="http://www.example.com/?page=2" />
<meta name="robots" content="noindex,follow">
URL parameters often create so-called duplicate content. If Google detects duplicate content, this can be disadvantageous for the ranking of a web page. A Google algorithm groups the duplicated URLs into a cluster and selects the URL that probably best represents the cluster in the search results, but this automatic selection does not always work reliably. Google often cannot find all URLs in a cluster, or the automatically selected page is not the best landing page or desired destination page for the user. In addition, Google may even consider duplicate content a scam if too many pages with the same content keep appearing in the index.
In Google Search Console (formerly Google Webmaster Tools) you can specify how URL parameters should be handled during crawling. This helps Google crawl the site more efficiently.
Page load time
Google already announced in 2010 on the official Webmaster Central Blog that the loading speed of websites is a ranking factor, so slow pages can damage the positioning in the search results.
Weak points can be uncovered with the Google PageSpeed Insights tool.
Robots.txt

A robots.txt is a text file that is stored in the root directory of the domain and contains instructions for spiders/robots/crawlers. Here you can, for example, block content for crawlers so that this data cannot be read by search engines.
Blocking via robots.txt does NOT prevent a page from being indexed!
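A minimal robots.txt might look like this (domain and paths are invented examples):

```text
# https://www.example.com/robots.txt
User-agent: *
Disallow: /internal-search/
Disallow: /checkout/

Sitemap: https://www.example.com/sitemap.xml
```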
To exclude pages from indexing, use the following meta tag:
<meta name="robots" content="noindex" />
Sitemaps

Sitemaps are an easy way to tell search engines about all the pages on your website that are available for search. In its simplest form, a Sitemap is an XML file that lists the URLs of a website along with additional metadata about each URL (such as the date of the last update, the frequency of changes and the importance of the URL compared to other URLs on the site), allowing search engines to crawl websites more intelligently.
Web crawlers typically crawl pages based on links within the site and across other sites. Sitemaps supplement this data to allow crawlers to include all URLs in the sitemap and determine information about those URLs based on the linked metadata. Using the Sitemap protocol does not guarantee that Web pages will be listed in search engines, but it does provide clues for Web crawlers to help them better index your site when they crawl it.
The common Sitemaps formats are supported by Google, Yahoo, and Microsoft (Bing).
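A minimal sitemap in the common XML format looks like this (URL, date and values are invented examples):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-04-28</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>
```

One `<url>` entry per page; only `<loc>` is mandatory, the other fields are optional hints for the crawler.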
SEO checklist for your ranking
- Keyword research to find the terms that users and potential customers are looking for, that have problems you can solve or needs you can satisfy.
- Analyze the search results to see what Google thinks is relevant.
- Create a trusted site with the content that best helps users achieve their goals.
- Is your URL crawlable and indexable? Can Google easily parse and index the content?
- Do you have an appealing title, description and URL for an optimal click-through rate?
- Do you have the best content using meaningful secondary keywords and proof keywords?
- Do you use meaningful tagging of entities using structured data?
- Do you have short loading times, ideal display on all devices and secure encryption?
- Find supporters and multipliers who help to spread the content.