Brutka IT: Our Services

Web Scraping


Automated web data extraction (also known as web scraping or parsing) is the process of collecting data from websites for subsequent processing and analysis. It is used when a large amount of information needs to be retrieved and processed. A program that performs such browsing and data extraction is called a parser.

A typical example of parsing is copying a contact list from a web directory. Manually extracting data from a web page and saving it to an Excel spreadsheet, however, only works for small amounts of data and takes considerable time. Processing large amounts of data requires automation, and this is where web parsers come in.

A web parser scans web pages, downloads content, extracts the necessary data from it, and then saves it in files or a database.
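
For illustration, here is a minimal sketch of that scan, download, extract, and save cycle in Python, using the requests and BeautifulSoup libraries. The URL, CSS selectors, and output file name are hypothetical and would depend on the actual target site:

    import csv
    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/directory"        # hypothetical page with a contact list

    response = requests.get(URL, timeout=10)     # download the page content
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Extract the necessary data: here, every entry of an assumed contact listing.
    rows = []
    for card in soup.select("div.contact-card"): # selector assumed for illustration
        name = card.select_one("h3")
        phone = card.select_one("span.phone")
        rows.append({
            "name": name.get_text(strip=True) if name else "",
            "phone": phone.get_text(strip=True) if phone else "",
        })

    # Save the result to a file (a CSV here; writing to a database works the same way).
    with open("contacts.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "phone"])
        writer.writeheader()
        writer.writerows(rows)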

Parsing is not the same as using an API. A company may open access to an API so that other systems can interact with its data, but the quality and quantity of data available through an API is usually lower than what can be obtained by parsing. In addition, parsing provides more up-to-date information than an API and is easier to adapt to the structure of the data you actually need.
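
As a rough illustration, the sketch below contrasts the two approaches in Python; both URLs and all selectors are hypothetical placeholders, not real endpoints:

    import requests
    from bs4 import BeautifulSoup

    # Via an API: convenient, but limited to the fields the provider chooses to expose.
    api_item = requests.get("https://example.com/api/products/42", timeout=10).json()
    print(api_item.get("name"), api_item.get("price"))

    # Via parsing: anything visible on the page can be extracted, whether or not an
    # API exposes it.
    page = requests.get("https://example.com/products/42", timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    name = soup.select_one("h1.product-title")   # selectors assumed for illustration
    price = soup.select_one("span.price")
    print(name.get_text(strip=True), price.get_text(strip=True))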

What is site parsing used for?

Parsing can be used to automate all sorts of data collection tasks. Web parsers, together with other programs, can do almost anything a person can do in a browser, and much more. They can automatically order your favorite food, buy concert tickets the moment they go on sale, or periodically scan e-commerce sites and send you a text message when the price of a product you are interested in drops.
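
As one concrete example, here is a minimal sketch of the price-drop monitor just mentioned; the product URL, price selector, threshold, and notification step are all hypothetical placeholders:

    import time
    import requests
    from bs4 import BeautifulSoup

    PRODUCT_URL = "https://example.com/products/42"   # hypothetical product page
    TARGET_PRICE = 100.0                              # notify when price reaches this level
    CHECK_INTERVAL = 3600                             # check once per hour

    def notify(price):
        # Placeholder: a real monitor might send an SMS or e-mail here.
        print(f"Price dropped to {price}")

    while True:
        page = requests.get(PRODUCT_URL, timeout=10)
        soup = BeautifulSoup(page.text, "html.parser")
        tag = soup.select_one("span.price")           # selector assumed for illustration
        if tag:
            price = float(tag.get_text(strip=True).lstrip("$"))
            if price <= TARGET_PRICE:
                notify(price)
                break
        time.sleep(CHECK_INTERVAL)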

The legality of parsing

Information posted on publicly accessible websites is publicly available, since there is no legislation restricting access to such information. Accordingly, it is not prohibited to copy prices and other information from an online store.

Parsing sites is legal as long as no statutory prohibitions are violated in the process. Automated collection of information must therefore comply with applicable law.

Key limitations to keep in mind:

  • Copyright and related rights must not be violated.
  • Unauthorized access to legally protected computer information is prohibited.
  • Information constituting a trade secret must not be collected by unlawful means.
  • Deliberately unfair exercise of civil rights (abuse of law) is not allowed.
  • Civil rights may not be exercised in a way that restricts competition.

It follows from these prohibitions that an organization may carry out automated collection of information (site parsing) published in the public domain on the Internet, provided that the following conditions are met:

  • The information is in the public domain and is not protected by copyright or related rights.
  • The automated collection is carried out by lawful means.
  • The automated collection does not disrupt the operation of the sites being accessed.
  • The automated collection does not restrict competition.

In summary, the main recommendations to follow when parsing are:

  • The content retrieved must not be copyrighted.
  • The parsing process must not interfere with the operation of the site being parsed.
  • Parsing must not violate the site's terms of use.
  • The parser must not retrieve users' personal data.
  • The parsed content must comply with fair use standards.

Our company's parser operates in single-threaded mode, so it does not place a significant load on the target site, and it follows the directives of the robots.txt file, which rules out downloading content the site owner has excluded from crawling.
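
To show what this kind of polite, single-threaded behaviour can look like in general terms (this is a generic sketch, not our production parser), the example below checks robots.txt before each request and pauses between fetches; the site, user-agent string, and delay are placeholder values:

    import time
    import urllib.robotparser
    import requests

    BASE = "https://example.com"       # hypothetical target site
    USER_AGENT = "ExampleParser"       # assumed user-agent string for the sketch
    DELAY = 2.0                        # pause between requests, in seconds

    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(BASE + "/robots.txt")
    robots.read()

    for url in [BASE + "/catalog", BASE + "/catalog/page2", BASE + "/private/report"]:
        if not robots.can_fetch(USER_AGENT, url):
            print("Skipped (disallowed by robots.txt):", url)
            continue
        response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
        print(url, response.status_code)
        time.sleep(DELAY)              # one request at a time, with a pause in between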

Parsing results are exported in whatever format is convenient for you: Microsoft Excel (.xlsx), delimited file (.csv), XML file (.xml), Microsoft Access database (.accdb), SQL, or NoSQL.
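
For a sense of what those exports can look like programmatically, here is a hedged sketch using pandas and sqlite3 (one possible toolchain, not necessarily the one we use, and covering only some of the formats listed):

    import sqlite3
    import pandas as pd

    rows = [                            # example parsed data, not real results
        {"name": "Item A", "price": 10.5},
        {"name": "Item B", "price": 7.0},
    ]
    df = pd.DataFrame(rows)

    df.to_csv("results.csv", index=False)       # delimited file (.csv)
    df.to_excel("results.xlsx", index=False)    # Microsoft Excel (.xlsx), needs openpyxl
    df.to_xml("results.xml", index=False)       # XML file (.xml), needs lxml

    with sqlite3.connect("results.db") as conn: # SQL (SQLite used here as an example)
        df.to_sql("results", conn, if_exists="replace", index=False)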