The semantic core of a site, with examples: how to build a semantic core from scratch and what to do with it

Many web editions and publications talk about the importance of the semantic core.

There are similar texts on our Chto Delat website. However, they often cover only the general theoretical side of the question, while the practice remains unclear.

All experienced webmasters say that you need to form the basis for promotion, but only a few explain how to use it in practice. To remove the veil of secrecy from this issue, we decided to highlight the practical side of using the semantic core.

Why do we need a semantic core

It is, first of all, the basis and plan for further filling and promoting the site. The semantic basis, mapped onto the structure of the web resource, serves as a set of signposts for the systematic and purposeful development of the site.

If you have such a basis, you do not have to think about the topic of each next article, you just need to follow the list items. With the core, site promotion moves much faster. And the promotion acquires clarity and transparency.

How to use the semantic core in practice

To begin with, it is worth understanding how the semantic basis is generally compiled. In fact, this is a list of key phrases for your future project, supplemented by the frequency of each request.

It will not be difficult to collect such information using the Yandex Wordstat service:

http://wordstat.yandex.ru/

or any other special service or program. In this case, the procedure will be as follows ...

How to make a semantic core in practice

1. Collect in a single file (Excel, Notepad, Word) all queries on your key topic taken from statistics data. Also include phrases "from your head" - logically valid phrases, morphological variants (the way you yourself would search for your topic), and even variants with typos!

2. Sort the queries by frequency: from the highest-frequency requests down to the least popular ones.

3. Remove from the semantic basis all junk queries that do not match the subject or direction of your site. For example, if you teach people about washing machines for free but do not sell them, do not use words like:

  • "buy"
  • "wholesale"
  • "delivery"
  • "order"
  • "cheap"
  • "video" (if there are no videos on the site) ...

Meaning: Do not mislead users! Otherwise, your site will receive a huge number of bounces, which will affect its rankings. And this is important!

4. Once the main list has been cleared of unnecessary phrases and queries and includes a sufficient number of items, you can start using the semantic core in practice.

IMPORTANT: a semantic list can never be considered completely ready and complete. In any subject, you will have to update and supplement the core with new phrases and queries, periodically tracking innovations and changes.

IMPORTANT: the number of articles on the future site will depend on the number of items in the list. Consequently, this will also affect the volume of the necessary content, the working time of the author of the articles, and the duration of filling the resource.
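To make steps 1-4 above concrete, here is a minimal Python sketch of the same workflow. The file names, column layout and stop-word list are assumptions chosen only for illustration; any spreadsheet or plain text file works just as well.

```python
import csv

STOP_WORDS = {"buy", "wholesale", "delivery", "order", "cheap", "video"}

def is_junk(phrase: str) -> bool:
    # Step 3: a phrase is junk if it contains any stop word as a separate token.
    return any(word in phrase.lower().split() for word in STOP_WORDS)

# Step 1: queries collected from Wordstat or another service into a CSV
# with assumed columns "phrase" and "frequency".
with open("queries.csv", newline="", encoding="utf-8") as f:
    rows = [(r["phrase"], int(r["frequency"])) for r in csv.DictReader(f)]

# Step 2: sort from the highest-frequency queries down to the least popular ones.
rows.sort(key=lambda item: item[1], reverse=True)

# Step 3: drop junk queries; step 4: what remains is the working core.
core = [row for row in rows if not is_junk(row[0])]

with open("semantic_core.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["phrase", "frequency"])
    writer.writerows(core)
```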

Overlaying the semantic core on the site structure

In order to get a sense out of the entire list received, you need to distribute requests (depending on frequency) according to the structure of the site. It is difficult to name specific figures here, since the scale and frequency difference can be quite significant for different projects.

If, for example, you take a query with a million impressions per month as a basis, even a phrase with 10,000 requests will look like an average one.

On the other hand, when your main query has 10,000 impressions, the average frequency will be about 5,000 requests per month. That is, everything is relative:

"High - Mid - Low" (HF - MF - LF)

But in any case (even visually) you need to divide the whole core into 3 categories:

  1. high-frequency queries (HF - short phrases with the maximum frequency);
  2. low-frequency queries (LF - rarely requested phrases with low frequency);
  3. mid-frequency queries (MF - all the average queries that sit in the middle of your list).
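As an illustration of this relative split, here is a small Python sketch. The 10% and 1% thresholds and the example frequencies are assumptions made up for the example; in practice the boundaries are picked by eye for each project.

```python
def classify(freq: int, max_freq: int) -> str:
    """Rough relative split into HF / MF / LF against the project's top query."""
    if freq >= 0.10 * max_freq:
        return "HF"
    if freq >= 0.01 * max_freq:
        return "MF"
    return "LF"

phrases = {
    "website promotion": 100_000,                    # candidate for the main page
    "website promotion with articles": 8_000,
    "inexpensive website promotion with links": 300,
}
top = max(phrases.values())
for phrase, freq in phrases.items():
    print(f"{phrase}: {classify(freq, top)}")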

In the next step, 1 or more (maximum 3) queries are reserved for the main page. These phrases should have the highest possible frequency. High-frequency queries go on the main page!

Further, following the general logic of the semantic core, it is worth highlighting several main key phrases from which the sections (categories) of the site will be created. Here you could also use high-frequency phrases with a lower frequency than the main one, or better yet, mid-frequency queries.

The remaining low-frequency phrases are sorted into the created sections and categories and become topics for future site publications. But it is easier to understand with an example.

EXAMPLE

A good example of using the semantic core in practice:

1. Main page (HF) - a high-frequency query - "website promotion".

2. Section pages (MF) - "website promotion to order", "self-promotion", "website promotion with articles", "website promotion with links". Or simply (if adapted for the menu):

Section No. 1 - "to order"
Section No. 2 - "on your own"
Section No. 3 - "article promotion"
Section No. 4 - "link promotion"

All this is very similar to the data structure in your computer: logical drive (main) - folders (partitions) - files (articles).

3. Pages of articles and publications (LF) - "quick promotion of the site for free", "promotion to order is cheap", "how to promote the site with articles", "promotion of a project on the Internet to order", "inexpensive promotion of the site with links", etc.

This list will contain the largest number of different phrases, and it is from them that you will create further site publications.

How to use a ready-made semantic core in practice

Using the query list is essentially internal content optimization. The point is to optimize (adjust) each page of the web resource for the corresponding core item. That is, you take a key phrase and write the most relevant article and page possible for it. A special relevance assessment service will help you, available at the link:

To have at least some guidance in your SEO work, it is better to pre-check the relevance of sites from the TOP results for specific queries.

For example, if you write text for the low-frequency phrase “inexpensive website promotion with links”, then first just enter it in the search and evaluate the TOP-5 sites in the search results using the relevance assessment service.

If the service showed that sites from the TOP-5 for the query “inexpensive website promotion with links” have relevance from 18% to 30%, then you need to focus on the same percentages. Even better is to create unique text with keywords and about 35-50% relevance. By slightly beating competitors at this stage, you will lay a good foundation for further promotion.

IMPORTANT: the use of the semantic core in practice implies that one phrase corresponds to one unique resource page. The maximum here is 2 requests per article.

The more fully the semantic core is revealed, the more informative your project will be. But if you are not ready for long-term work and thousands of new articles, you do not need to take on wide thematic niches. Even a narrow specialized area, 100% open, will bring more traffic than an unfinished large site.

For example, you could take as the basis of the site not the high-frequency key “site promotion” (where there is tremendous competition), but a phrase with a lower frequency and narrower specialization - “article site promotion” or “link promotion”, but expand this topic to the maximum in all articles of the virtual platform! The effect will be higher.

Useful information for the future

Further use of your semantic core in practice will only consist in:

  • correcting and updating the list;
  • writing optimized texts with high relevance and uniqueness;
  • publishing articles on the site (1 query - 1 article);
  • increasing the usefulness of the material (editing finished texts);
  • improving the quality of articles and the site as a whole, keeping an eye on competitors;
  • marking in the core list those queries that have already been used;
  • supplementing optimization with other internal and external factors (links, usability, design, usefulness, videos, online help tools).

Note: All of the above is a very simplified version of activities. In fact, on the basis of the core, sublevels, deep nested structures, and branches to forums, blogs, and chats can be created. But the principle will always be the same.

GIFT: a useful tool for collecting the core in the Mozilla Firefox browser -

Often novice webmasters, faced with the need to create a semantic core, do not know where to start. Although there is nothing complicated in this process. Simply put, you need to collect a list of key phrases that Internet users use to search for information on your site.

The more complete and accurate it is, the easier it is for a copywriter to write good text and the easier it is for you to get high positions in search for the right queries. How to correctly compose large, high-quality semantic cores, and what to do with them afterwards so that the site reaches the top and collects a lot of traffic, is what this material will discuss.

The semantic core is a set of key phrases grouped by meaning, where each group reflects one need or desire of the user (intent). That is, what a person thinks about when he enters his request into the search bar.

The whole process of creating a core can be represented in 4 steps:

  1. We face a task or problem;
  2. We formulate in our head how to find its solution through search;
  3. We type the query into Yandex or Google. Besides us, other people do the same;
  4. The most frequent variants of these queries end up in analytics services and become the key phrases that we collect and group by need. The result of all these manipulations is the semantic core.

Is it necessary to select key phrases or can you do without it?

Previously, semantics was compiled in order to find the most frequent keywords on a topic, fit them into the text and get good visibility for them in search. Over the last 5 years, search engines have been moving to a model where the relevance of a document to a query is assessed not by the number of keywords and the variety of their variations in the text, but by how well the intent is covered.

For Google, it started in 2013 with the Hummingbird algorithm, for Yandex in 2016 and 2017 with Palekh and Korolev technologies, respectively.

Texts written without a semantic core will not be able to fully cover the topic, which means it will not be possible to compete with the TOP for high-frequency and mid-frequency queries. Betting only on low-frequency queries makes no sense either - they bring too little traffic on their own.

If you want to successfully promote yourself or your product on the Internet in the future, you need to learn how to compose the right semantics that fully reveal the needs of users.

Search query classification

Let's analyze 3 types of parameters by which keywords are evaluated.

By frequency:

  • High-frequency (HF) - phrases that define the topic. They consist of 1-2 words. On average, the number of searches starts at 1000-3000 per month and can reach hundreds of thousands of impressions, depending on the topic. Most often, the main pages of sites are optimized for them.
  • Mid-frequency (MF) - separate directions within the topic. They mostly contain 2-3 words, with an exact frequency of 500 to 1000. Usually these are categories of a commercial site or topics for large informational articles.
  • Low-frequency (LF) - queries related to the search for a specific answer to a question. As a rule, 3-4 words or more. These can be a product card or an article topic. On average, 50 to 500 people search for them per month.
  • When analyzing metrics or statistics counter data, one more type can be found - micro-low-frequency keys. These are phrases that are often asked only once in a search. It makes no sense to optimize a page for them; it is enough to be in the top for the low-frequency queries that include them.



By competitiveness:

  • Highly competitive (VC);
  • Medium competitive (SC);
  • Low competitive (NK).

According to need:

  • Navigational. Express the desire of the user to find a specific Internet resource or information on it;
  • Informational. Characterized by the need to obtain information as a response to a request;
  • Transactional. Directly related to the desire to make a purchase;
  • Fuzzy or general. Those for which it is difficult to accurately determine the intent.
  • Geo-dependent and geo-independent. Reflect the need to search for information or make a transaction in your city or without a regional reference.


Depending on the type of site, the following recommendations can be given when selecting key phrases for the semantic core.

  1. Information resource. The main focus should be on finding topics for articles in the form of mid-frequency and low-frequency queries with low competition. It is recommended to cover the topic broadly and deeply, optimizing each page for a large number of low-frequency keys.
  2. Online store or commercial site. We collect HF, MF and LF keys, segmenting as clearly as possible so that all phrases are of a transactional type and belong to the same cluster. We focus on finding well-converting low-frequency, low-competition (NK) keywords.

How to correctly compose a large semantic core - step by step instructions

We have moved on to the main part of the article, where I will sequentially analyze the main stages that you need to go through to build the core of the future site.
To make the process clearer, all steps are given with examples.

Search for basic phrases

Work on the SEO core begins with selecting a primary list of basic words and phrases (HF) that best characterize the topic and are used in a broad sense. They are also called markers.

These can be the names of directions, types of products, or popular queries from the topic. As a rule, they consist of 1-2 words and have tens, and sometimes hundreds of thousands of impressions per month. It is better not to take very broad keywords, so as not to drown in negative keywords at the expansion stage.

It is most convenient to select marker phrases using Yandex Wordstat. By typing a query into it, we see in the left column the phrases that contain it, and in the right column - similar queries, among which you can often find suitable directions for expansion. The service also shows the basic frequency of the phrase, that is, how many times it was asked per month in all word forms and with any words added to it.

By itself, such a frequency is of little interest, so to get more accurate values, you need to use operators. Let's figure out what it is and why you need it.

Operators Yandex Wordstat:

1) "..." - quotes. A query in quotation marks allows you to track how many times a phrase with all its word forms was searched in Yandex, but without adding other words (tails).

2) ! - Exclamation point. Using it before each word in the query, we fix its form and get the number of impressions in the search for the key phrase only in the specified word form, but with a tail.

3) "!… !… !…" - quotation marks and an exclamation point before each word. The most important operator for the optimizer. It allows you to understand how many times a keyword is requested per month strictly for a given phrase, in the form as it is written, without adding any words.

4) +. Yandex Wordstat ignores prepositions and pronouns in queries. If you want it to take them into account, put a plus sign in front of them.

5) -. The second most important operator. With its help, words that do not fit are quickly eliminated. To apply it, after the analyzed phrase, put a minus and a stop word. If there are several of them, repeat the procedure.

6) (…|…). If you need to get data from Yandex Wordstat for several phrases at once, enclose them in brackets and separate them with a vertical bar. In practice, this method is rarely used.
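If you prepare query lists in bulk, the operators above are easy to generate programmatically. Below is a small Python sketch that only formats the strings you would paste into Wordstat (it does not call any API; phrase and minus-word examples are assumptions):

```python
def exact_form(phrase: str) -> str:
    """Operator 3: quotes plus '!' before each word - strict phrase, fixed word forms."""
    return '"' + " ".join("!" + word for word in phrase.split()) + '"'

def with_minus_words(phrase: str, stop_words: list[str]) -> str:
    """Operator 5: append minus-words to exclude unwanted refinements."""
    return phrase + " " + " ".join("-" + word for word in stop_words)

print(exact_form("website promotion"))                  # "!website !promotion"
print(with_minus_words("website promotion", ["free"]))  # website promotion -free
```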

For the convenience of working with the service, I recommend installing a special browser extension "Wordstat Assistant". It is installed on Mozilla, Google Chrome, Y. Browser and allows you to copy phrases and their frequencies with one click of the “+” or “Add All” icon.


Let's say we decided to make our own SEO blog. Let's choose 7 basic phrases for it:

  • semantic core;
  • optimization;
  • copywriting;
  • promotion;
  • monetization;
  • Direct

Search for synonyms

When formulating a query to search engines, users can use words that are close in meaning, but different in spelling.

For example, "car" and "car".

It is important to find as many synonyms as possible for the main words in order to increase the coverage of the future semantic core. If this is not done, then during parsing we will miss a whole layer of key phrases that reveal the needs of users.

What we use:

  • Brainstorm;
  • Right column Yandex Wordstat;
  • Requests typed in Cyrillic;
  • Special terms, abbreviations, slang expressions from the subject;
  • The "searched together with [query]" blocks in Yandex and Google;
  • Snippets of competitors.

As a result of all actions for the selected topic, we get the following list of phrases:


Expanding Basic Queries

Let's parse these keywords to identify the basic needs of people in this area.
The most convenient way to do this is in the Key Collector program, but if you do not want to pay 1800 rubles for a license, use its free analogue - Slovoeb.

In terms of functionality, it is certainly weaker, but it is suitable for small projects.
If you do not want to delve into desktop programs, you can use the Just-Magic and Rush Analytics services. But it is still better to spend a little time and master the software.

I will show the principle of work in the Key Collector, but if you work with Slovoeb, then everything will be clear too. The program interface is similar.

Procedure:

1) Add the list of basic phrases to the program and collect their base and exact frequencies. If we are planning promotion in a particular region, we specify it. For informational sites this is most often not necessary.


2) Parse the left column of Yandex Wordstat by the added words to get all requests from our topic.


3) As a result, we got 3374 phrases. We collect the exact frequency for them, as in step 1.


4) Check if there are any keys with zero base frequency in the list.


If there are, remove them and move on to the next step.

Negative keywords

Many people neglect the procedure for collecting negative keywords, replacing it with the removal of phrases that do not fit. But later you will realize that it is convenient and really saves time.

Open the Key Collector tab Data -> Analysis. Select grouping by individual words and scroll through the list of keys. If we see a phrase that does not fit, we press the blue icon and add the unsuitable word, with all its word forms, to the stop words.


In Slovoeb, work with stop words is implemented in a more simplified version, but you can also make your own list of phrases that do not fit and apply them to the list.

Do not forget to use sorting by Base frequency and number of phrases. This option helps to quickly reduce the list of initial phrases or weed out rare ones.


After we have compiled a list of stop words, we apply them to our project and proceed to collect search suggestions.

Tip parsing

When entering a query in Yandex or Google, search engines offer their own options for continuing it from the most popular phrases that Internet users drive in. These keywords are called search suggestions.

Many of them do not fall into Wordstat, so when building a semantic one, it is necessary to collect such queries.

Key Collector, by default, parses them with enumeration of endings, Cyrillic and Latin alphabets, and with a space after each phrase. If you are ready to sacrifice quantity in order to significantly speed up the process, check the box "Collect only TOP hints without enumeration and a space after the phrase."


Often among the search tips you can find phrases with good frequency and competition ten times lower than in Wordstat, so in narrow niches I recommend collecting as many words as possible.
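The enumeration trick described above can also be imitated in a few lines of Python. In this sketch, `fetch_suggestions` is a hypothetical placeholder (an assumption, not a real API): a working version would query a search engine's suggest endpoint or a ready-made parser, within its terms of use.

```python
import string

def fetch_suggestions(query: str) -> list[str]:
    """Hypothetical helper that returns search suggestions for a query.
    Replace with a real suggest-endpoint call or a parser of your choice."""
    raise NotImplementedError

def expand_with_suggestions(seed: str, alphabet: str = string.ascii_lowercase) -> set[str]:
    # Enumerate endings: "seed a", "seed b", ... - this surfaces far more
    # suggestions than the bare seed phrase returns, at the cost of extra requests.
    found = set(fetch_suggestions(seed))
    for letter in alphabet:
        found.update(fetch_suggestions(f"{seed} {letter}"))
    return found
```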

The hint parsing time directly depends on the number of simultaneous requests to the search engine servers. The maximum Key Collector supports 50 threads.
But in order to parse requests in this mode, you will need the same number of proxies and accounts in Yandex.

For our project, after collecting suggestions, we got 29595 unique phrases. The whole process took a little more than 2 hours on 10 threads; with 50 threads we would have finished in about 25 minutes.


Determination of base and exact frequencies for all phrases

For further work, it is important to collect the base and exact frequency and filter out all the zeros. Queries with a small number of impressions are kept if they are targeted.
This will help you better understand the intent and create articles with a more complete structure than those in the top.

Before collecting the frequencies, we first filter out everything unnecessary:

  • word repetitions
  • keys with other characters;
  • duplicate phrases (via the Implicit Duplicates Analysis tool)


For the remaining phrases, we determine the exact and basic frequency.

a) for phrases up to 7 words:

  • We select through the filter "The phrase consists of no more than 7 words"
  • Open the "Collection from Yandex.Direct" window by clicking on the "D" icon;
  • If necessary, specify the region;
  • Select the guaranteed impressions mode;
  • We set the collection period - 1 month and tick the necessary types of frequencies;
  • Click "Get Data".


b) for phrases from 8 words:

  • We set a filter for the “Phrase” column - “consists of at least 8 words”;
  • If you need to promote in a particular city, indicate the region below;
  • Click on the magnifying glass and select "Collect all types of frequencies."


Cleaning up keywords

After we have received information about the number of impressions for our keywords, we can start weeding out those that are not suitable.

Let's take a look at the steps:

1. Go to Key Collector's "Group Analysis" and sort the keys by the number of words they contain. The task is to find non-target but frequent words and add them to the stop-word list.
We do everything the same way as in the "Negative keywords" section.


2. We apply all the found stop words to our list of phrases and look through it so as not to lose target queries. After checking, click "Delete marked phrases".


3. Weed out dummy phrases that are rarely searched in their exact form but have a high base frequency. To do this, in the Key Collector settings, in the "KEY & SERP" section, insert the calculation formula KEY 1 = (YandexWordstatBaseFreq) / (YandexWordstatQuotePointFreq) and save the changes.


4. We calculate KEY 1 and delete those phrases for which this parameter turned out to be 100 or more.
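The same dummy filter can be reproduced outside Key Collector. A pandas sketch using the formula above (the CSV layout and column names are assumptions):

```python
import pandas as pd

df = pd.read_csv("keywords.csv")  # assumed columns: phrase, base_freq, exact_freq

# KEY 1 = base frequency / exact ("!...") frequency; drop zero exact frequency first.
df = df[df["exact_freq"] > 0].copy()
df["key1"] = df["base_freq"] / df["exact_freq"]

# Phrases that are almost never asked in their exact form (ratio >= 100) are dummies.
df = df[df["key1"] < 100]
df.to_csv("keywords_filtered.csv", index=False)
```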


The remaining keys need to be grouped by landing pages.

Clustering

The distribution of queries into groups begins with clustering the phrases by the search top using the free program "Majento Clusterer". I recommend KeyAssort, a paid analogue with wider functionality and faster speed, but the free one is quite enough for a small core. The only caveat is that to work in either of them you will need to buy XML limits. The average price is 5 rubles per 1000 requests, so processing an average core of 20-30 thousand keys will cost 100-150 rubles. The address of the service I use is shown in the screenshot below.


The essence of clustering keys by this method is to combine into groups those phrases that have, in the Yandex top 10:

  • URLs shared with each other (Hard);
  • URLs shared with the most frequent query in the group (Soft).

Depending on the number of such matches for different sites, clustering thresholds are distinguished: 2, 3, 4 ... 10.

The advantage of the method is the grouping of phrases according to the needs of people, and not only according to synonymous relationships. This allows you to immediately understand which keywords can be used on one landing page.
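A simplified sketch of the Soft variant of this logic in Python. Here `serp_top10` is assumed to map each phrase to the set of URLs in its Yandex top 10 (collected via XML limits or a SERP parser), with phrases pre-sorted by frequency; the Hard variant would additionally require pairwise URL overlap inside each group.

```python
def soft_clusters(serp_top10: dict[str, set[str]], threshold: int = 3) -> list[set[str]]:
    """Greedy Soft clustering: each phrase must share at least `threshold`
    top-10 URLs with the most frequent (seed) phrase of its group."""
    clusters: list[set[str]] = []
    used: set[str] = set()
    for phrase, urls in serp_top10.items():  # most frequent phrases first
        if phrase in used:
            continue
        group = {phrase}
        for other, other_urls in serp_top10.items():
            if other not in used and other != phrase and len(urls & other_urls) >= threshold:
                group.add(other)
        used |= group
        clusters.append(group)
    return clusters
```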

Suitable for informational sites:

  • Soft with a threshold of 3-4, followed by cleaning by hand;
  • Hard with a threshold of 3, followed by merging clusters by meaning.

Online stores and commercial sites, as a rule, are promoted with the Hard method and a clustering threshold of 3. The topic is voluminous, so I will analyze it later in a separate article.

For our project, grouping by the Hard method with a threshold of 3 produced 317 groups.


Competition check

There is no point in promoting highly competitive queries: it is difficult to get into the top, and without the top there will be no traffic to the article. To understand which topics are profitable to write about, we use the following approach:

We look at the total exact frequency of the group of phrases for which the article is written and at the Mutagen competition score. For informational sites, I recommend taking on topics that have a total exact frequency of 300 or more and a competition score from 1 to 12 inclusive.

In commercial topics, be guided by the marginality of a product or service and how competitors in the top 10 are doing. Even 5-10 targeted queries per month can be a reason to make a separate page for it.

How to check competition on demand:

a) manually, by entering the appropriate phrase in the service itself or through bulk tasks;


b) in batch mode through the Key Collector program.


Topic selection and grouping

Consider each of the resulting groups for our project after clustering and select topics for the site.
Majento, unlike KeyAssort, does not let you export the number of impressions for each phrase, so you will have to collect them additionally through Key Collector.

Instruction:

1) Export all groups from Majento in CSV format;
2) Concatenate the phrases in Excel using the "group:key" mask;
3) Load the resulting list into Key Collector. In the settings, the "Group:Key" import mode must be enabled, and monitoring the presence of phrases in other groups must be off;


4) Collect the base and exact frequency for the keywords in the newly created groups. (If you are using KeyAssort, this is not necessary - the program can work with additional columns.)
5) Look for clusters with a unique intent that contain at least 3 phrases and more than 300 impressions in total for all queries. Then check the 3-4 most frequent of them for competition with Mutagen. If among these phrases there are keys with competition below 12, we take the topic to work;

6) Look through the rest of the groups. If there are phrases that are close in meaning and should be covered on the same page, we combine them. For groups containing new meanings, we assess the total frequency of their phrases: if it is less than 150 per month, we set the group aside until we have gone through the entire core - perhaps it can be combined with another cluster and reach 300 exact impressions, the minimum at which an article is worth taking on. To speed up manual grouping, use the auxiliary tools: the quick filter and the frequency dictionary. They will help you quickly find suitable phrases in other clusters;


Attention! How do you know whether clusters can be combined? Take 2 frequent keys from those selected in step 5 for the landing page and 1 query from the new group.
Add them to Arsenkin's "Upload Top 10" tool, specifying the desired region if necessary. Then look at the number of colored intersections of the 3rd phrase with the other two. If there are 3 or more, we combine the groups. If there are no matches or only one, the groups cannot be merged - the intents are different; with 2 intersections, check the search results manually and use logic.

7) After grouping the keys, we get a list of promising topics for articles and semantics for them.


Deleting other content type requests

When compiling a semantic core, it is important to understand that commercial queries are not needed for blogs and information sites, just as online stores do not need informational ones.

We go over each group and clean out everything superfluous; if we cannot accurately determine the intent of a query, we compare the search results or use these tools:

  • Commercialization check from Pixel Tools (free, but with a daily limit of checks);
  • Just-Magic service, clustering with a checkmark to check the commerciality of the request (for a fee, the cost depends on the tariff)

After that, we move on to the last step.

Phrase optimization

We optimize the semantic core so that it is convenient to work with it in the future for an SEO specialist and a copywriter. To do this, we will leave in each group key phrases that fully reflect the needs of people and contain as many synonyms as possible for the main phrases.

Action algorithm:

  • Sort keywords in Excel or Key Collector alphabetically from A to Z;
  • We will choose those that reveal the topic from different angles and in different words. Other things being equal, we leave phrases with a higher exact frequency or with a lower key 1 indicator (the ratio of the base frequency to the exact one);
  • We delete keywords with the number of impressions per month less than 7, which do not carry new meanings and do not contain unique synonyms.
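A small pandas sketch of the mechanical part of this algorithm (column names are assumptions; deciding which wordings actually add new meaning still has to be done by hand):

```python
import pandas as pd

df = pd.read_csv("cluster.csv")  # assumed columns: phrase, base_freq, exact_freq

df = df[df["exact_freq"] >= 7].copy()            # drop phrases with fewer than 7 exact impressions
df["key1"] = df["base_freq"] / df["exact_freq"]  # lower key1 = the phrase stands on its own more often

# Alphabetical order puts near-duplicate wordings next to each other,
# which speeds up the manual pass of keeping only phrases that add new meaning.
df.sort_values("phrase").to_csv("cluster_for_review.csv", index=False)
```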

An example of what a well-composed semantic core looks like -

In red, I marked the phrases that do not match the intent. If you ignore my recommendations on manual grouping and do not check compatibility, the page will end up optimized for incompatible keywords, and you can forget about high positions for the promoted queries.

Final checklist

  1. We select the main high-frequency queries that set the topic;
  2. We are looking for synonyms for them using the left and right columns of Wordstat, competitor sites and their snippets;
  3. We expand the received requests by parsing the left column of Wordstat;
  4. We prepare a list of stop words and apply them to the received phrases;
  5. We parse Yandex and Google hints;
  6. We collect the base and exact frequencies;
  7. We expand the list of negative keywords and clean out garbage and dummy queries;
  8. We do clustering through Majento or KeyAssort. For informational sites - Soft mode with a threshold of 3-4; for commercial Internet resources - the Hard method with a threshold of 3.
  9. We import data into the Key Collector and determine the competition of 3-4 phrases for each cluster with a unique intent;
  10. We select topics and determine landing pages for requests based on an assessment of the total number of accurate impressions for all phrases from one cluster (from 300 for informants) and competition for the most frequent of them according to Mutagen (up to 12).
  11. For each eligible page, we look for other clusters with similar user needs. If we can consider them on one page, we combine them. When the need is not clear or there are suspicions that another type of content or page should be the answer to it, we check it by the issuance or through the Pixel Tools or Just-Magic tools. For content sites, the core should consist of information requests, for commercial sites, transactional ones. We remove the excess.
  12. We sort the keys in each group alphabetically and leave those that describe the topic from different angles and in different words. Other things being equal, priority is given to those queries that have a lower ratio of base frequency to exact frequency and a higher number of precise impressions per month.

What to do with the SEO core after it is created

We made a list of keys, gave it to the author, and he wrote an excellent article that fully covers all the meanings. Ah, dreaming again... A sensible text will come out only if the copywriter clearly understands what you want from him and how he can check himself.

Let's analyze 4 components which, worked through properly, will guarantee you a lot of targeted traffic to the article:

Good structure. We analyze the queries selected for the landing page and identify what needs people have in this topic. Next, we write an article plan that fully answers them. The task is to make sure that people who visit the site receive a voluminous and exhaustive answer on the semantics that you have compiled. This will give good behavioral and high relevance to the intent. After you have made a plan, look at the sites of competitors by driving the main promoted query into the search. You need to do it exactly in this order. That is, first we do it ourselves, then we look at what others have and, if necessary, we refine it.

Turnkey optimization. We optimize the article itself for 1-2 of the most frequent keys with Mutagen competition up to 12. Another 2-3 mid-frequency phrases can be used as subheadings, but in a diluted form - that is, with additional words not directly related to the topic, synonyms, and word forms. From low-frequency phrases we pull out the unique part - the tail - and work it evenly into the text. The search engines will find and glue everything together themselves.

Synonyms for basic queries. We write them out separately from our semantic core and set the task for the copywriter to use them evenly throughout the text. This will help to reduce the density of our main words and at the same time the text will be optimized enough to get to the top.

Thematic phrases. By themselves, LSIs do not promote the page, but their presence indicates that most likely the written text belongs to the “pen” of an expert, and this is already a plus for the quality of the content. To search for thematic phrases, we use the tool "Terms of Reference for a Copywriter" from Pixel Tools.


An alternative method of keyword selection using competitor analysis services

There is a quick approach to creating a semantic core that is applicable to both beginners and experienced users. The essence of the method is that we initially select keys not for the entire site or category, but specifically for the article, landing page.

It can be implemented in 2 ways, which differ in how we choose topics for the page and how deep we expand the key phrases:

  • by parsing the main keys;
  • based on competitor analysis.

Each of them can be implemented at a simple and more complex level. Let's take a look at all the options.

Without the use of programs

A copywriter or webmaster often does not want to deal with the interface of a large number of programs, but they need good themes and key phrases under them.
This method is just for beginners and those who do not want to bother. All actions are performed without the use of additional software, using simple and understandable services.

What you need:

  • Keys.so service for competitor analysis - 1500 rubles. By promo code "altblog" - 15% discount;
  • Mutagen. Checking the competition of requests - 30 kopecks, collecting basic and exact frequency - 2 kopecks for 1 check;
  • Bookvarix - free version or business account - 995 rubles. (now with a discount of 695 r)

Option 1. Selecting a topic through parsing basic phrases:

  1. We select the main keys from the topic in a broad sense, using brainstorming and the left and right columns of Yandex Wordstat;
  2. Next, we look for synonyms for them, the methods of which were mentioned earlier;
  3. We enter all the collected marker queries into Bookvarix (a paid tariff is required) in the advanced mode "Search by keyword list";
  4. Set the filter: "!exact !frequency" from 50, number of words from 3;
  5. Upload the entire list to Excel;
  6. We select all the keywords and send them for grouping to the Kulakov Clusterer service. If the site is regional, select the desired city. We leave the clustering threshold for information sites at 2, for commercial sites we set 3;
  7. After grouping, we select topics for articles by looking at the resulting clusters. We take those with 3 or more phrases and a unique intent. It is better to understand people's needs by analyzing the URLs of the top sites in the "Competitors" column (on the right in the Kulakov service table). Also, do not forget to check competition with Mutagen: we check 2-3 queries from the cluster, and if all of them are above 12, the topic is not worth taking;
  8. We decided on the name of the future landing page, it remains to choose key phrases for it;
  9. From the “Competitors” field, copy 3 URLs with the appropriate type of pages (if the site is informational, we take links to articles, if commercial, then to stores);
  10. We insert them sequentially into keys.so and unload all key phrases for them;
  11. We combine them in Excel and remove duplicates;
  12. Service data alone is not enough, so you need to expand it. Let's use Bookvarix again;
  13. The resulting list is sent for clustering to the "Kulakov clusterer";
  14. We select groups of requests that are suitable for the landing page, focusing on the intent;
  15. We collect the base and exact frequency through Mutagen in the "Mass tasks" mode;
  16. We upload a list with updated data on the number of impressions in Excel. We delete nulls for both types of frequencies;
  17. Also in Excel, we add the formula for the ratio of the base frequency to the exact one and leave only those keys for which this ratio is less than 100;
  18. Delete requests of another type of content;
  19. We leave phrases that fully and in different words reveal the main intent;
  20. We repeat all the same actions on points 8-19 for other topics.

Option 2. Choose a topic through competitor analysis:

1. We are looking for top sites in our topic by driving in high-frequency queries and viewing the results through Arsenkin's "Top-10 Analysis" tool. It is enough to find 1-2 suitable resources.
If we promote the site in a particular city, we indicate the regionality;
2. Go to the keys.so service and enter the urls of the sites that you found into it and see which pages of competitors bring the most traffic.
3. We check the 3-5 queries with the highest exact frequency for competition. If it is above 12 for all phrases, it is better to look for another, less competitive topic.
4. If you need to find more sites for analysis, open the "Competitors" tab and set the parameters: similarity - 3, thematic - 10. Sort the data in descending traffic order.
5. After we have chosen a topic, we drive its name into the search results and copy 3 urls from the top.
6. Next, repeat steps 10-19 from the 1st option.

Using Key Collector or Slovoeb

This method will differ from the previous one only by the use of the Key Collector program for some operations and by a deeper expansion of the keys.

What you need:

  • Key Collector program - 1800 rubles;
  • all the same services as in the previous method.

"Advanced - 1"

  1. We parse the left and right columns of Yandex for the entire list of phrases;
  2. We collect the exact and base frequencies through Key Collector;
  3. We calculate the KEY 1 indicator;
  4. We delete empty queries and those with KEY 1 > 100;
  5. Then we do everything the same as in paragraphs 18-19 of option 1.

"Advanced - 2"

  1. We do steps 1-5, as in option 2;
  2. We collect keys for each url in keys.so;
  3. Delete duplicates in Key Collector;
  4. Repeat Steps 1-4 as in the Advanced -1 method.

Now let's compare the number of keys obtained and their total exact frequency when collecting the semantic core by different methods:

As you can see from the table, the best result was shown by the alternative methods of creating a core for a page - "Advanced 1" and "Advanced 2". They produced 34% more target keys, and at the same time the total traffic in the cluster turned out to be 51% higher than with the classical method.

Below in the screenshots, you can see what the finished core looks like in each case. I took phrases with an exact number of impressions of 7 or more per month, so that the quality of the keywords can be judged. See the full semantics in the table at the "View" link.

A)


B)


C)

Now you know that the most common approach - doing it the way everyone else does - is not always the most correct one, but you should not give up the other methods either. Much depends on the topic itself. For commercial sites, where there are not so many keys, the classic version is quite enough. You can also get excellent results on informational sites if you correctly compose the terms of reference for the copywriter, build a good structure and do the SEO optimization. We will talk about all of this in detail in the following articles.

3 common mistakes when creating a semantic core

1. Collecting phrases only superficially. It is not enough to parse Wordstat to get a good result!
More than 70% of the queries that people enter are asked rarely or periodically and do not get into it at all. But among them there are often key phrases with good conversion and really low competition. How not to miss them? Be sure to collect search suggestions and combine them with data from different sources (site counters, statistics services and databases).

2. Mixing informational and commercial queries on one page. We have already discussed that key phrases differ by type of need. If a visitor who wants to make a purchase comes to your site and sees a page with an article as the answer to his request, do you think he will be satisfied? No! Search engines think the same way when ranking a page, which means you can immediately forget about the top for mid-frequency and high-frequency phrases. Therefore, if you are in doubt about the type of a query, check the search results or use the Pixel Tools or Just-Magic tools to determine commerciality.

3. Choosing very competitive queries to promote. Positions for high-frequency, highly competitive phrases depend 60-70% on behavioral factors, and to earn those you need to get into the top. The more applicants, the longer the queue and the higher the requirements for sites. Everything is as in life or sports: becoming a world champion is much harder than getting the same title in your city.
Therefore, it is better to go into a quiet niche rather than an overheated one.

It used to be even harder to get to the top: positions were held on a first-come-first-served basis. The leaders occupied the first places and could be displaced only by accumulating behavioral factors. And how do you get those if you are on the second or third page... Yandex broke this vicious circle in the summer of 2015 by introducing the "multi-armed bandit" algorithm. Its essence is precisely to randomly raise and lower the positions of sites in order to see whether more worthy candidates for the top have appeared.

How much money do you need to start?

To answer this question, let's calculate the cost of the necessary arsenal of programs and services needed to prepare and group key phrases for 100 articles.

The bare minimum (suitable for the classic version):

1. Slovoeb - free
2. Majento clusterer - free
3. For captcha recognition — 30 rubles.
4. Xml limits - 70 rubles.
5. Checking the competition of the request for Mutagen - 10 checks per day for free
6. If you are not in a hurry and are ready to spend 20-30 hours on parsing, you can do without a proxy.
—————————
The result is 100 rubles. If you enter captchas yourself, and get xml limits in exchange for those transferred from your site, then it’s really possible to prepare the core for free. You just need to spend another day setting up and mastering the programs, and another 3-4 days waiting for the results of parsing.

Standard semantic set (for advanced and classic methods):

1. Key Collector - 1900 rubles
2. KeyAssort - 1700 rubles
3. Bookvarix (business account) - 650 rubles.
4. Competitor analysis service keys.so - 1500 rubles.
5. 5 proxies - 350 rubles per month
6. Anticaptcha - about 30 rubles.
7. Xml limits - about 80 rubles.
8. Checking competition with Mutagen (1 check = 30 kopecks) - let's budget 200 rubles.
———————-
The total is 6410 rubles. Of course, you can do without KeyAssort, replacing it with the Majento clusterer, and use Slovoeb instead of Key Collector. Then 2810 rubles is enough.

Is it worth entrusting the development of the core to a "pro", or is it better to figure it out and do it yourself?

If a person regularly does what he loves and keeps improving at it, then, logically, his results should be better than those of a beginner in the field. But with keyword selection, it often turns out exactly the opposite.

Why in 90% of cases a beginner does better than a professional?

It's all about approach. The task of a semanticist is not to assemble the best core for you, but to complete his work in the shortest possible time and so that its quality suits you.

If you do everything yourself according to the algorithms that were mentioned earlier, the result will be an order of magnitude higher for two reasons:

  • You understand the topic. This means that you know the needs of your customers or site users and at the initial stage you will be able to expand the marker queries for parsing as much as possible by using a large number of synonyms and specific words.
  • Interested in doing everything well. The owner of a business or an employee of the company in which he works, of course, will approach the issue more responsibly and will try to do everything to the maximum. The more complete the core and the more low-competitive requests in it, the more targeted traffic will be collected, which means that the profit with the same investments in content will be higher.

How do you find the remaining 10% who will build the core better than you would?

Look for companies where keyword research is a core competency. And immediately discuss what result you want, like everyone else or the maximum. In the second case, it will be 2-3 times more expensive, but in the long run it will pay off many times over. For those who want to order a service from me, all the necessary information and conditions. I guarantee quality!

Why is it so important to fully work out the semantics

Here, as in any field, the principle of “good and bad choices” works. What is its essence?
Every day we are faced with what we choose:

  • date someone who seems fine but does not excite you, or figure yourself out and build a harmonious relationship with the person you really need;
  • do a job you don't like, or find what your soul lies in and make it your profession;
  • rent a room for a store in a low-traffic spot, or wait until a suitable location becomes free;
  • hire the best salesperson for the team, or the one who merely performed best at today's interview.

Everything seems clear. But look at it from the other side, treating each choice as an investment in the future - this is where the fun begins!

You saved 3-5 thousand on the semantic core. Happy as can be! But what does this lead to?

a) for information sites:

  • Traffic losses are at least 1.5 times with the same investments in content. Comparing different methods for obtaining key phrases, we have already found out empirically that the alternative method allows you to collect 51% more;
  • The project sags faster in the search results. It is easy for competitors to bypass us by giving a more complete answer on intent.

b) for commercial projects:

  • Fewer leads or a higher cost per lead. If our semantics is like everyone else's, we are promoting on the same queries as our competitors. A large number of offers against constant demand reduces each one's share of the market;
  • Low conversion. Specific queries convert into sales better. By saving on the semantic core, we lose the most converting keys;
  • Harder to move on. There are many who want to be in the top - the requirements for each of the candidates are higher.

I wish you to always make the good choice and invest only in the plus!

P.S. Bonus "How to write a good article with bad semantics", as well as other life hacks for promotion and making money on the Internet, read in my group

At the moment, factors such as content and structure play the most important role in search promotion. But how do you know what to write the text about and what sections and pages to create on the site? On top of that, you need to find out exactly what the target visitor of your resource is interested in. To answer all these questions, you need to assemble a semantic core.

The semantic core is a list of words or phrases that fully reflects the theme of your site.

In this article I will tell you how to collect it, clean it and break it down into a structure. The result will be a complete structure with queries clustered by page.

Here is an example of a semantic core broken down into a structure:


By clustering, I mean splitting your search queries across separate pages. This approach is relevant for promotion in both Yandex and Google. In this article I will describe a completely free way of creating a semantic core, but I will also show options with various paid services.

By reading this article, you will learn

  • Choose the right queries for your topic
  • Collect the most complete core of phrases
  • Clean from uninteresting requests
  • Group and create structure

Having collected the semantic core, you can

  • Create a meaningful structure on the site
  • Create layered menu
  • Fill pages with texts and write meta descriptions and titles on them
  • Collect positions of your site for queries from search engines

Collection and clustering of the semantic core

Proper compilation for Google and Yandex begins with determining the main key phrases of your subject. For example, I will demonstrate its compilation on a fictitious online clothing store. There are three ways to collect the semantic core:

  1. Manual. Using the Yandex Wordstat service, you enter your keywords and manually select the phrases you need. This method is fast enough if you need to collect keys for one page, however, there are two drawbacks.
    • The accuracy of the method leaves much to be desired. You can always miss some important words if you use this method.
    • You will not be able to assemble a semantic core for a large online store, although you can use the Yandex Wordstat Assistant plugin to simplify it - this will not solve the problem.
  2. Semi-automatic. In this method, a program is used to collect the core, which is then manually broken down into sections, subsections, pages, and so on. This method of compiling and clustering the semantic core is, in my opinion, the most effective and has a number of advantages:
    • Maximum coverage of all topics.
    • Quality breakdown
  3. Automatic. Nowadays there are several services offering fully automatic core collection or clustering of your queries. I do not recommend the fully automatic option, because the quality of collection and clustering of the semantic core is currently quite low. Automatic query clustering is gaining popularity and has its place, but you will still need to combine some pages by hand, because the system does not provide a perfect off-the-shelf solution. And in my opinion, you will just get confused and will not be able to dive into the project.

To compile and cluster a full-fledged correct semantic core for any project, in 90% of cases I use a semi-automatic method.

So, we will do the following steps in order:

  1. Selection of queries for topics
  2. Collecting the kernel by request
  3. Purging of non-targeted requests
  4. Clustering (we break phrases into structure)

I showed an example of selecting a semantic core and grouping it into a structure above. I remind you that our example is an online clothing store, so let's start with point 1.

1. Selection of phrases for your subject

At this stage, we need the Yandex Wordstat tool, your competitors and logic. In this step, it is important to collect a list of phrases that are thematic high-frequency queries.

How to select queries to collect semantics from Yandex Wordstat

Go to the service, select the city(s)/region(s) you need, type in what you consider the "fattest" queries and look at the right column. There you will find the thematic words you need, both for other sections and high-frequency synonyms of the entered phrase.

How to select queries before compiling a semantic core with the help of competitors

Enter the most popular queries in the search engine and select one of the most popular sites, many of which you most likely already know.

Pay attention to the main sections and save yourself the phrases you need.

At this stage, it is important to do the right thing: to cover all kinds of words from your subject as much as possible and not miss anything, then your semantic core will be as complete as possible.

Applied to our example, we need to make a list of the following phrases/keywords:

  • clothing
  • Shoes
  • Boots
  • Dresses
  • T-shirts
  • Underwear
  • Shorts

Which phrases are pointless to enter: "women's clothing", "buy shoes", "prom dresses", etc. Why? These phrases are "tails" of the queries "clothes", "shoes", "dresses" and will be added to the semantic core automatically at the 2nd stage of collection. That is, you can add them, but it would be pointless double work.

Which keys do you need to enter? "Half boots" and "boots" are not the same thing. It is the word form that matters, not whether the words share the same root.

Someone will have a long list of key phrases, and for someone it consists of one word - do not be alarmed. For example, for an online store of doors, the word “doors” is quite possibly enough to compose the semantic core.

And so, at the end of this step, we should have a similar list.

2. Collection of queries for the semantic core

For the correct full-fledged collection, we need a program. I will show an example simultaneously on two programs:

  • On a paid one - KeyCollector. For those who have or want to buy.
  • On the free - Slovoeb. Free program for those who are not ready to spend money.

Opening the program

Create a new project and name it, for example, Mysite

Now, to further collect the semantic core, we need to do a few things:

Create a new Yandex mail account (it is not recommended to use your old one, since it can be banned for making too many requests). So, you have created an account, for example ivan.ivanov@yandex.ru with the password super2018. Now you need to specify this account in the settings as ivan.ivanov:super2018 and click the "save changes" button below. More details in the screenshots.

We select a region for compiling the semantic core. You need to select only those regions in which you are going to promote, and click save. The frequency of the queries, and whether they get into the collection at all, will depend on this.

All the settings are done; it remains to add our list of key phrases prepared in the first step and click the "start collection" button.

The process is fully automatic and quite long. You can make coffee for now, and if the topic is wide, for example, like the one we collect, then it’s for a few hours 😉

As soon as all the phrases are collected, you will see something like this:

And this stage is over - proceed to the next step.

3. Cleaning the semantic core

First, we need to remove requests that are not of interest to us (non-targeted):

  • Associated with another brand, such as "gloria jeans", "ekko"
  • Information queries, e.g. "I wear boots", "Jean size"
  • Similar topics, but not related to your business, for example, “used clothes”, “wholesale clothes”
  • Requests that are not related to the topic in any way, for example, “Sims dresses”, “Puss in Boots” (there are quite a lot of such requests after selection in the semantic core)
  • Requests from other regions, metro, districts, streets (it doesn’t matter which region you collected requests for - another region still comes across)

Cleaning must be done manually as follows:

We enter a word and press "Enter"; if the phrases found in our semantic core are indeed ones we do not need, we select what was found and press delete.

I recommend entering not the whole word but its stem, without prepositions and endings - that way all word forms of the unwanted word are found at once, while typing one exact form would miss the others.

Thus, you need to go through all the points and remove the queries you do not need from the semantic core. This can take a significant amount of time, and you may end up removing most of the collected queries, but the result will be a complete, clean and correct list of queries you can promote your site for.

Now export all your queries to Excel.

You can also bulk-remove non-target queries from the semantics, provided you have a list of stop words; this is easy to do for typical groups such as cities, metro stations and streets. You can download the list of such words that I use at the bottom of the page.
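
If you are comfortable with a little scripting, the same bulk cleanup can be done outside the program. Below is a minimal Python sketch, assuming your queries and stop words are saved as plain text files (queries.txt and stopwords.txt are invented names for the example):

```python
# Bulk-remove non-target queries by stop-word (stem) matching.
# queries.txt holds one query per line; stopwords.txt holds stems
# such as "spb", "wholesale", "used" - both file names are invented.

def load_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

queries = load_lines("queries.txt")
stopwords = load_lines("stopwords.txt")

def is_target(query, stems):
    q = query.lower()
    # Substring matching on stems, mirroring the advice to search by
    # "glori" rather than the full form "gloria".
    return not any(stem.lower() in q for stem in stems)

clean = [q for q in queries if is_target(q, stopwords)]
print(f"kept {len(clean)} queries, removed {len(queries) - len(clean)}")

with open("queries_clean.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(clean))
```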

4. Clustering the semantic core

This is the most important and interesting part - we need to divide our queries into pages and sections, which together will form the structure of your site. A bit of theory first - what to be guided by when splitting queries:

  • Competitors. Pay attention to how the semantic core of your competitors from the TOP is clustered and do the same, at least with the main sections. Also see which pages appear in the search results for low-frequency queries. For example, if you are not sure whether or not to make a separate section for "red leather skirts", type the phrase into a search engine and look at the results. If the results contain sites with such sections, it makes sense to make a separate page.
  • Logic. Do the whole grouping of the semantic core using logic: the structure should be understandable and form, in your head, a clear tree of pages with categories and subcategories.

And a couple more tips:

  • It is not recommended to put less than 3 queries per page.
  • Do not make too many levels of nesting, try to make sure that there are 3-4 of them (site.ru/category/subcategory/sub-subcategory)
  • Do not make long URLs. If you have many levels of nesting when clustering the semantic core, try to shorten the URLs of categories high in the hierarchy, i.e. instead of "your-site.ru/zhenskaya-odezhda/palto-dlya-zhenshin/krasnoe-palto" use "your-site.ru/zhenshinam/palto/krasnoe" (a small sketch of this follows the list).
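
To make the URL-shortening tip concrete, here is a tiny Python sketch; the slug map is invented purely for illustration:

```python
# Shorten category URLs by mapping long segments to short slugs.
# The slug map below is purely illustrative.
SLUGS = {
    "zhenskaya-odezhda": "zhenshinam",
    "palto-dlya-zhenshin": "palto",
    "krasnoe-palto": "krasnoe",
}

def short_url(*path_parts):
    # Replace each long segment with its short slug when one exists.
    return "/" + "/".join(SLUGS.get(part, part) for part in path_parts)

print(short_url("zhenskaya-odezhda", "palto-dlya-zhenshin", "krasnoe-palto"))
# -> /zhenshinam/palto/krasnoe
```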

Now to practice

Clustering the core by example

To begin with, we will divide all requests into main categories. Looking at the logic of competitors, the main categories for a clothing store will be: men's clothing, women's clothing, children's clothing, as well as a bunch of other categories that are not tied to gender / age, such as just “shoes”, “outerwear”.

We group the semantic core with the help of Excel. Open the file and proceed:

  1. Divide it into main sections
  2. Take one section and break it into subsections

I will show this on the example of one section - men's clothing and its subsections. To separate some keys from the others, select the entire sheet and click Conditional Formatting -> Highlight Cells Rules -> Text that Contains.

Now, in the window that opens, type the stem for men's queries (in the Russian original it is "муж", the stem of "мужская") and press Enter.

Now all of our menswear keys are highlighted. It is enough to use the filter to separate the selected keys from the rest of our collected semantic core.

So let's turn on the filter: select the column with the queries and click Sort & Filter -> Filter.

And now let's sort

Create a separate sheet. Cut the highlighted lines and paste them there. In this way you will keep breaking down the core further.
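
If you would rather script this step than repeat the conditional-formatting and filter routine by hand, here is a pandas sketch of the same grouping; the file name, the "query" column and the category stems are assumptions for illustration:

```python
# Group the cleaned queries into category sheets by stem matching,
# mirroring the Excel "conditional formatting + filter" workflow.
# File name, sheet names and stems are illustrative assumptions.
import pandas as pd

df = pd.read_excel("semantic_core.xlsx")   # expects a "query" column

# Order matters with plain substring matching: "women" contains "men",
# so the women's stems are checked first.
CATEGORY_STEMS = [
    ("Women's Clothing", ["women", "ladies"]),
    ("Men's Clothing", ["men", "male"]),
    ("Children's Clothing", ["kids", "child", "boy", "girl"]),
]

def category_for(query):
    q = str(query).lower()
    for name, stems in CATEGORY_STEMS:
        if any(stem in q for stem in stems):
            return name
    return "All Queries"   # anything unmatched stays in the common sheet

df["category"] = df["query"].apply(category_for)

with pd.ExcelWriter("semantic_core_clustered.xlsx") as writer:
    for name, group in df.groupby("category"):
        # Excel limits sheet names to 31 characters.
        group.drop(columns="category").to_excel(writer, sheet_name=name[:31], index=False)
```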

Rename this sheet to "Men's Clothing" and the sheet with the rest of the semantic core to "All Queries". Then create another sheet, name it "Structure" and make it the very first one. On the structure page, build a tree. You should end up with something like this:

Now we need to divide the large menswear section into sub-sections and sub-subsections.

For ease of use and navigation through your clustered semantic core, put links from the structure to the corresponding sheets. To do this, right-click on the desired item in the structure and do as in the screenshot.

And now you need to methodically separate the queries by hand, deleting along the way anything you failed to notice and remove at the core-cleaning stage. Ultimately, thanks to clustering, you should end up with a structure similar to this one:

So. What we have learned to do:

  • Select the base queries needed to collect the semantic core
  • Collect all possible phrases for these queries
  • Clean up "garbage"
  • Cluster and create structure

What you can do next by creating such a clustered semantic core:

  • Create a website structure
  • Create a menu
  • Write texts, meta descriptions, titles
  • Collect positions to track the dynamics of requests

Now a little about programs and services

Programs for collecting the semantic core

Here I will describe not only programs, but also plug-ins and online services that I use

  • Yandex Wordstat Assistant is a plugin that makes it convenient to select queries from Wordstat. Great for quickly compiling a core for a small site or a single page.
  • KeyCollector (Slovoeb is the free version) is a full-fledged program for collecting and clustering a semantic core. It is very popular and offers a huge amount of functionality beyond its main purpose: selecting keys from a bunch of other systems, auto-clustering, collecting positions in Yandex and Google, and much more.
  • Just-magic is a multifunctional online service for compiling the core, automatic splitting, checking text quality and other functions. The service is freemium: for full use you need to pay a monthly fee.

Thanks for reading the article. Thanks to this step-by-step manual, you will be able to compose the semantic core of your site for promotion in Yandex and Google. If you have any questions - ask in the comments. Below are bonuses.

In our article, we told what a semantic core is and gave general recommendations on how to compose it.

It's time to break down this process in detail, building the semantic core for your site step by step. Stock up on pencils and paper, and most importantly, time. And join...

We compose the semantic core for the site

Let's take the site http://promo.economsklad.ru/ as an example.

Field of activity of the company: warehouse services in Moscow.

The site was developed by the specialists of our service, and its semantic core was developed in stages, in 6 steps:

Step 1. We make a primary list of keywords.

After conducting a survey of several potential customers, having studied three sites that are close to us in terms of subject matter, and using our own brains, we have compiled a simple list of keywords that, in our opinion, reflect the content of our site: warehouse complex, warehouse rental, storage services, logistics, storage space rental, warm and cold warehouses.

Task 1: Review competitors' websites, consult with colleagues, brainstorm and write down all the words that you think describe YOUR website.

Step 2. Expanding the list.

Let's use the service http://wordstat.yandex.ru/. In the search bar, enter each of the words in the primary list one by one:


Copy the refined queries from the left column into an Excel spreadsheet, then look through the associative queries in the right column, select those relevant to our site, and enter them into the table as well.

After analyzing the phrase "Warehouse rental", we received a list of 474 refined and 2 associative queries.

After a similar analysis of the rest of the words from the primary list, we received a total of 4,698 refined and associative queries that were entered by real users in the past month.

Task 2: Collect a complete list of queries for your site by running each of the words of your primary list through the Yandex.Wordstat query statistics.
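
If you save each seed word's Wordstat results to its own file, a short script can merge them and drop duplicates before cleaning. This is only a sketch - the file names and the "query"/"shows" columns are assumed:

```python
# Merge the Wordstat exports collected for each seed phrase into one
# table and drop duplicate queries. File and column names are
# illustrative; adjust them to however you saved your exports.
import pandas as pd

files = ["warehouse_rental.csv", "storage_services.csv", "logistics.csv"]
frames = [pd.read_csv(f) for f in files]           # each with "query" and "shows" columns
all_queries = pd.concat(frames, ignore_index=True)

# Keep the highest-frequency row for each unique query.
all_queries = (all_queries.sort_values("shows", ascending=False)
                          .drop_duplicates(subset="query")
                          .reset_index(drop=True))

print(len(all_queries), "unique queries collected")
all_queries.to_csv("all_queries.csv", index=False)
```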

Step 3. Cleaning

First, we remove all phrases with an impression frequency below 50: "how much does it cost to rent a warehouse" - 45 impressions, "warehouse rental 200 m" - 35 impressions, etc.

Secondly, we remove phrases that are not related to our site, for example, "warehouse rental in St. Petersburg" or "warehouse rental in Yekaterinburg", as our warehouse is located in Moscow.

The phrase " warehouse lease agreement download» - this sample may be present on our website, but actively promoted on this request it makes no sense, since a person who is looking for a sample contract is unlikely to become a client. Most likely, he has already found a warehouse or is the owner of the warehouse himself.

After you remove all unnecessary requests, the list will be significantly reduced. In our case with “warehouse rental”, out of 474 refined queries, 46 were left relevant to the site.

And when we cleaned up the full list of refined queries (4,698 phrases), we got the Semantic Core of the site, consisting of 174 key queries.

Task 3: Clean up the list of refined queries created earlier, excluding low-frequency ones with less than 50 impressions and phrases that are not related to your site.
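
The same cleanup can be expressed in a few lines of code. The sketch below assumes the merged table from the previous step with "query" and "shows" columns; the 50-impression threshold and the region list come from the example above:

```python
# Step 3 as code: drop phrases below 50 impressions and phrases that
# mention regions we do not serve. The threshold and the region list
# come from the example above; adapt both to your own project.
import pandas as pd

df = pd.read_csv("all_queries.csv")        # "query", "shows" columns from the previous step
df = df[df["shows"] >= 50]                 # remove low-frequency phrases

OTHER_REGIONS = ["petersburg", "spb", "yekaterinburg"]
is_local = df["query"].str.lower().apply(
    lambda q: not any(city in q for city in OTHER_REGIONS)
)
df = df[is_local]

print(len(df), "queries left after cleaning")
df.to_csv("semantic_core.csv", index=False)
```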

Step 4. Refinement

Since you can use 3-5 different keywords on each page, we won't need all 174 queries.

Considering that the site itself is small (4 pages at most), we choose 20 queries from the complete list that, in our opinion, most accurately describe the company's services.

Here they are: warehouse rental in Moscow, warehouse rental, warehouse and logistics, customs services, safekeeping warehouse, warehouse logistics, logistics services, office and warehouse rental, safekeeping of goods and so on….

These key phrases include low-frequency, mid-frequency and high-frequency queries.

Note that this list is significantly different from the primary one made off the top of our heads. And it is definitely more accurate and effective.

Task 4: Reduce the list of remaining words to 50, leaving only those that, in your experience and opinion, are the most optimal for your site. Don't forget that the final list should contain queries of varying frequency.

Conclusion

Your semantic core is ready, now is the time to put it into practice:

  • review the texts of your site, maybe they should be rewritten.
  • write a few articles on your topic using the selected key phrases, publish them on the site, and after the search engines index them, submit them to article directories. Read One Unusual Approach to Article Promotion.
  • pay attention to search ads. Now that you have a semantic core, the effect of advertising will be much higher.

The semantic core is a scary name that SEOs have come up with to refer to a fairly simple thing. We just need to select the key queries for which we will promote our site.

And in this article, I will show you how to properly compose a semantic core so that your site quickly reaches the TOP, and does not stagnate for months. Here, too, there are "secrets".

And before we move on to compiling the semantic core, let's look at what it is and what we should end up with.

What is the semantic core in simple words

Oddly enough, the semantic core is an ordinary Excel file containing a list of key queries for which you (or your copywriter) will write articles for the site.

For example, here is how my semantic core looks like:

I have marked in green those key queries for which I have already written articles. Yellow - those for which I am going to write articles in the near future. And the colorless cells mean those queries will come a little later.

For each key query, I have determined the frequency and competition and come up with a "catchy" title. You should end up with roughly the same kind of file. Right now my semantic core consists of 150 keywords, which means I am supplied with "material" for at least 5 months ahead (even if I write one article a day).

A little lower we will talk about what to expect if you decide to order the collection of a semantic core from specialists. Briefly: they will give you the same kind of list, only with thousands of "keys". However, in a semantic core it is not quantity that matters but quality, and that is what we will focus on.

Why do we need a semantic core at all?

But really, why do we need this torment? You can, in the end, just write high-quality articles just like that, and attract an audience with this, right? Yes, you can write, but you can’t attract.

The main mistake of 90% of bloggers is just writing high-quality articles. I'm not kidding, they have really interesting and useful materials. But search engines don't know about it. They are not psychics, but just robots. Accordingly, they do not put your article in the TOP.

There is another subtle point here, concerning the title. Say you have a very high-quality article on the topic "How to do business in the 'muzzle book'" (a slangy nickname for Facebook). There you describe everything about Facebook in great detail and professionally, including how to promote communities there. Your article is the most useful and interesting piece on this topic on the Internet - nothing else even comes close. But it still won't help you.

Why quality articles fly out of the TOP

Imagine that your site was visited not by a robot but by a live checker (assessor) from Yandex. He realized that you have the coolest article and manually put you in first place in the search results for the query "Community promotion on Facebook".

Do you know what will happen next? You will be out of there very soon, because no one will click on your article, even in first place. People enter the query "Community promotion on Facebook", and your headline is "How to do business in the 'muzzle book'". Original, fresh, funny, but... not what was asked for. People want to see exactly what they were looking for, not your creative take.

Accordingly, your article will only be taking up a spot in the TOP of the results for nothing. And the living assessor, an ardent admirer of your work, can beg his superiors for as long as he likes to keep you at least in the TOP-10 - it won't help. All the first places will be occupied by articles as empty as sunflower-seed husks, copied from one another by yesterday's schoolchildren.

But these articles will have the correct "relevant" title - "Community promotion on Facebook from scratch" (step by step, in 5 steps, from A to Z, free, etc.). Is it a shame? Of course it is. So let's fight the injustice and make a competent semantic core, so that your articles take the first places they deserve.

Another reason to start compiling the semantic core right now

There is one more thing that for some reason people don't think much about. You need to write articles often - at least every week, and preferably 2-3 times a week to get more traffic and faster.

Everyone knows this, but almost no one does it. And all because they have “creative stagnation”, “they can’t force themselves”, “just laziness”. But in fact, the whole problem is precisely in the absence of a specific semantic core.

I entered one of my basic keys - "smm" - into the search field, and Yandex immediately gave me a dozen suggestions about what else might interest people who search for "smm". I just have to copy these keys into a notepad. Then I check each of them in the same way and collect the suggestions for them as well.

After the first stage of collecting the semantic core, you should end up with a text document containing 10-30 broad base keys, which we will work with further.

Step #2 - Parsing Basic Keys in SlovoEB

Of course, if you write an article for the query "webinar" or "smm", then a miracle will not happen. You will never be able to reach the TOP for such a broad query. We need to break the base key into many small queries on this topic. And we will do this with the help of a special program.

I use KeyCollector, but it's paid. You can use its free analogue - the SlovoEB program, which you can download from the official site.

The most difficult thing about working with this program is setting it up correctly. How to properly set up and use Slovoeb, I show in a separate article. But there I focus on selecting keys for Yandex Direct.

Here, let's look step by step at how to use this program to compile a semantic core for SEO.

First we create a new project and name it according to the broad key you want to parse.

I usually give the project the same name as my base key so I don't get confused later. And yes, I will warn you against another mistake: don't try to parse all your base keys at the same time. Otherwise it will be very difficult to separate the "empty" key queries from the golden grains. Parse one key at a time.

After creating the project, we carry out the basic operation - we actually parse the key through Yandex Wordstat. To do this, click on the "Wordstat" button in the program interface, enter your base key, and click "Start collecting".

For example, let's parse the base key for my blog "contextual advertising".

After that, the process will start, and after a while the program will give us the result - up to 2000 key queries that contain "contextual advertising".

Also, next to each request there will be a “dirty” frequency - how many times this key (+ its word forms and tails) was searched per month through Yandex. But I do not advise you to draw any conclusions from these figures.

Step #3 - Gathering the exact frequency for the keys

The "dirty" frequency tells us nothing. If you rely on it, don't be surprised later when a key with 1,000 queries doesn't bring you a single visitor a month.

We need the exact frequency. To get it, first select all the found keys with checkmarks, then click the "Yandex Direct" button and start the process again. Now Slovoeb will find the exact monthly frequency for each key.

Now we have an objective picture - how many times what request was entered by Internet users over the past month. Now I propose to group all key queries by frequency, so that it would be more convenient to work with them.

To do this, click on the "filter" icon in the Frequency "!" column and set it to show keys with a value "less than or equal to 10".

Now the program will show only those queries whose frequency is less than or equal to 10. You can delete them or move them to another group of keywords for later. Less than 10 is very low; writing articles for these queries is a waste of time.
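
If you export the keys from the program, the same filtering takes a couple of lines. In this sketch the file name and the "query"/"exact_freq" columns are assumptions, and the LF/MF/HF boundaries are illustrative, not fixed rules:

```python
# Drop keys whose exact ("!") frequency is 10 or less, then bucket the
# rest into rough LF / MF / HF groups. The file and column names are
# assumptions, and the bucket boundaries are illustrative, not rules.
import pandas as pd

df = pd.read_csv("slovoeb_export.csv")     # "query", "exact_freq" columns
df = df[df["exact_freq"] > 10]             # <= 10 searches/month: not worth an article

def bucket(freq):
    if freq < 100:
        return "LF"   # low-frequency
    if freq < 1000:
        return "MF"   # mid-frequency
    return "HF"       # high-frequency

df["group"] = df["exact_freq"].apply(bucket)
print(df["group"].value_counts())
df.to_csv("keys_with_freq.csv", index=False)
```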

Now we need to choose those keywords that will bring us more or less good traffic. And for this we need to find out one more parameter - the level of competition of the request.

Step #4 - Checking Query Competition

All "keys" in this world are divided into 3 types: high-frequency (HF), mid-frequency (MF), low-frequency (LF). And they can also be highly competitive (VC), medium competitive (SC) and low competitive (NC).

As a rule, HF queries are also HC: if a query is searched often, there are a lot of sites that want to rank for it. But this is not always the case - there are happy exceptions.

The art of compiling a semantic core lies precisely in finding queries with high frequency and low competition. Determining the level of competition manually is very difficult.

You can look at indicators such as the number of home pages in the TOP-10, the length and quality of the texts, and the trust level of the sites in the TOP of the results for the query. All of this will give you some idea of how tough the competition for positions is for this particular query.

But I recommend using the Mutagen service. It takes into account all the parameters I mentioned above, plus a dozen more that neither you nor I have probably even heard of. After the analysis, the service gives an exact value - the level of competition for this query.

Here I checked the query "setting up contextual advertising in google adwords". Mutagen showed that this key has a competition level of "more than 25" - the maximum value it displays. And the query gets only 11 searches per month. So it does not suit us.

We can copy all the keys we picked up in Slovoeb and do a mass check in Mutagen. After that, we only have to look through the list and take those queries that have high frequency and a low level of competition.

Mutagen is a paid service, but you can do 10 checks per day for free. Besides, the cost of a check is very low: in all the time I have worked with it, I have not yet spent even 300 rubles.

By the way, about the level of competition: if you have a young site, it is better to choose queries with a competition level of 3-5. If you have been promoting for more than a year, you can take 10-15.
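
Once you have a mass-check result from Mutagen, the final selection is a simple merge-and-filter. The sketch below assumes two CSV files with "query", "exact_freq" and "competition" columns - the export format is hypothetical, but the competition threshold follows the advice above and the frequency cut-off is an illustrative choice:

```python
# Pick keys worth writing about: decent exact frequency plus low
# competition. The Mutagen export format here is hypothetical; the
# competition threshold follows the advice above (3-5 for a young
# site, 10-15 for an established one), the frequency cut-off is
# an illustrative choice.
import pandas as pd

keys = pd.read_csv("keys_with_freq.csv")          # "query", "exact_freq"
competition = pd.read_csv("mutagen_results.csv")  # "query", "competition" (assumed columns)

df = keys.merge(competition, on="query", how="inner")

MAX_COMPETITION = 5    # young site; raise to 10-15 for an older one
MIN_FREQUENCY = 100    # illustrative threshold

shortlist = df[(df["competition"] <= MAX_COMPETITION) & (df["exact_freq"] >= MIN_FREQUENCY)]
print(shortlist.sort_values("exact_freq", ascending=False).head(20))
```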

And about query frequency: we now need to take the final step, which will let you attract a lot of traffic even with low-frequency queries.

Step #5 - Collecting "tails" for the selected keys

As has been proven and verified many times, your site will receive the bulk of traffic not from the main keys, but from the so-called “tails”. This is when a person enters strange key queries into the search box, with a frequency of 1-2 per month, but there are a lot of such queries.

To see the "tail" - just go to Yandex and enter your chosen key query in the search bar. Here's what you'll see.

Now you just need to write out these additional words in a separate document and use them in your article. Note that you do not have to always place them next to the main key; otherwise, search engines will see "over-optimization" and your articles will drop in the search results.

Just use them in different places in your article, and then you will receive additional traffic from them as well. I would also recommend that you try to use as many word forms and synonyms as possible for your main key query.

For example, we have a request - "Setting up contextual advertising". Here's how you can reformulate it:

  • Setup = set up, make, create, run, launch, enable, host…
  • Contextual advertising = context, direct, teaser, YAN, adwords, kms…

You never know exactly how people will look for information. Add all these additional words to your semantic core and use them when writing texts.
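
A tiny sketch can expand those synonym lists into phrase variants to keep next to the main key; the word lists below are copied from the example, and the combinations are only raw material, not ready-made queries:

```python
# Expand the synonym lists into phrase variants - raw "tail" material
# to sprinkle through the article. The word lists mirror the example
# above; extend or translate them to match how your audience searches.
from itertools import product

setup_words = ["setup", "set up", "make", "create", "run", "launch", "enable"]
ad_words = ["contextual advertising", "context", "direct", "teaser", "YAN", "adwords"]

variants = [f"{a} {b}" for a, b in product(setup_words, ad_words)]
print(len(variants), "variants, for example:", variants[:5])
```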

So, we collect a list of 100-150 keywords. If you are compiling a semantic core for the first time, it may take you several weeks.

Or maybe not strain your eyes over it? Maybe you can delegate the compilation of the semantic core to specialists who will do it better and faster? Yes, such specialists exist, but it is not always worth using their services.

Is it worth ordering a semantic core from specialists?

By and large, specialists in compiling a semantic core will only do steps 1-3 of our scheme for you. Sometimes, for a large additional fee, they will also do steps 4-5 (collecting tails and checking query competition).

After that, they will give you several thousand key queries with which you will need to work further.

And the question here is whether you are going to write the articles yourself or hire copywriters for this. If you want to focus on quality rather than quantity, you need to write them yourself. But then it won't be enough for you to just get a list of keys - you will need to choose the topics you understand well enough to write a quality article.

And here the question arises - why do we actually need semantic core specialists then? You must agree that parsing the base key and collecting the exact frequencies (steps 1-3) is not difficult at all - it will take you literally half an hour.

The most difficult thing is to choose high-frequency queries with low competition. And on top of that, as it turns out, you need HF-LC queries on which you can write a good article. This is exactly what takes up 99% of the work on the semantic core, and no specialist will do it for you. Well, is it worth spending money on such services?

When the services of semantic core specialists are useful

Another thing is if you initially plan to attract copywriters. Then you do not need to understand the subject of the request. Your copywriters will also not understand it. They will simply take a few articles on this topic and compile “their” text from them.

Such articles will be empty, miserable, almost useless. But there will be many. On your own, you can write a maximum of 2-3 quality articles per week. And the army of copywriters will provide you with 2-3 shitty texts a day. At the same time, they will be optimized for requests, which means they will attract some kind of traffic.

In this case, yes, go ahead and hire semantic core specialists. Let them also draw up the briefs for the copywriters at the same time. But, as you understand, that will also cost money.

Summary

Let's go over the main ideas in the article again to consolidate the information.

  • The semantic core is just a list of keywords for which you will write articles on the site for promotion.
  • It is necessary to optimize the texts for exact key queries, otherwise even your highest-quality articles will never reach the TOP.
  • The semantic core is like a content plan for social networks. It keeps you from falling into a "creative block" and lets you always know exactly what you will write about tomorrow, the day after tomorrow and in a month.
  • To compile the semantic core, it is convenient to use the free Slovoeb program - it is all you need.
  • Here are the five steps of compiling the semantic core: 1 - selecting basic keys; 2 - parsing basic keys; 3 - collecting the exact frequency of queries; 4 - checking the competition of keys; 5 - collecting "tails".
  • If you want to write the articles yourself, it is better to make the semantic core yourself, for yourself. Specialists in compiling semantic cores will not be able to help you here.
  • If you want to go for quantity and use copywriters to write the articles, then it is entirely possible to delegate compiling the semantic core as well - as long as there is enough money for everything.

I hope this guide was helpful to you. Save it to your favorites so as not to lose it, and share it with your friends. Don't forget to download my book. There I show you the fastest way from zero to the first million on the Internet (squeezed from personal experience over 10 years =)

See you later!

Your Dmitry Novoselov
