How to Avoid Website Downtime and Prevent Financial Loss: A Real Solution for Businesses

A Common Problem Many Face

Your website is not just a web page; it’s a full-fledged business tool. But what if it keeps going down? An unstable virtual server, Apache crashing due to memory shortages, a competitor’s DDoS attack, hosting space running out at the worst possible moment – the result is a website that is unavailable for hours or even days. And that’s not all:

  • Some pages may return 404 errors.
  • The website is running but does not respond correctly to requests.
  • You only learn about failures from clients or when sales suddenly drop.

The consequences of such problems are catastrophic:

  • Wasted advertising budgets.
  • Lost clients and orders.
  • Lower search engine rankings due to poor site availability.

Why Don’t Regular Monitoring Services Help?

There are many services that “ping” your website every few minutes to check its availability. However, they have several drawbacks:

  • They often produce false alarms.
  • They do not properly account for status codes such as 303 redirects or other critical conditions.
  • They do not always detect partial website inaccessibility.

Our Solution: Monitoring with ELK and a Telegram Bot

We approached the problem comprehensively and implemented the following:

  • Deployed an ELK server (Elasticsearch, Logstash, Kibana). This powerful tool collects and analyzes logs, allowing us to track all website requests in real time.
  • Sent all access.log records to ELK. Now, we have a complete picture of the server’s activity.
  • Configured a Telegram bot that analyzes logs and reports issues. If recent logs contain errors (404, 500, etc.), the bot instantly sends a notification to a dedicated chat.
  • Added additional logic: if the website receives no requests at all within 5 minutes, the bot sends a warning. This helps detect major outages (such as server crashes or hosting issues); a minimal sketch of such a check is shown below.
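
As an illustration only (not the exact production script), a minimal shell sketch of such a check could look like the following. The index pattern access-*, the response field, the bot token, and the chat id are placeholders that depend on the actual Logstash configuration:

    #!/bin/bash
    # Minimal sketch: count recent errors and recent traffic in Elasticsearch
    # and alert a Telegram chat. Index pattern, field names, token and chat id
    # are assumptions/placeholders.
    ES="http://localhost:9200/access-*/_count"
    TOKEN="123456:YOUR_BOT_TOKEN"      # Telegram bot token (placeholder)
    CHAT_ID="-1001234567890"           # alert chat id (placeholder)

    notify() {
        curl -s "https://api.telegram.org/bot${TOKEN}/sendMessage" \
             --data-urlencode "chat_id=${CHAT_ID}" \
             --data-urlencode "text=$1" > /dev/null
    }

    # 404/500 responses logged in the last 5 minutes
    errors=$(curl -s "$ES" -H 'Content-Type: application/json' -d '{
      "query": { "bool": { "filter": [
        { "range": { "@timestamp": { "gte": "now-5m" } } },
        { "terms": { "response": [404, 500] } } ] } } }' | jq -r .count)

    # total requests logged in the last 5 minutes
    total=$(curl -s "$ES" -H 'Content-Type: application/json' -d '{
      "query": { "range": { "@timestamp": { "gte": "now-5m" } } } }' | jq -r .count)

    [ "$errors" -gt 0 ] && notify "Found $errors error responses (404/500) in the last 5 minutes"
    [ "$total" -eq 0 ] && notify "No incoming requests for 5 minutes - check the server"

Such a script can then simply be run from cron every few minutes.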

What Did This Do for Businesses?

  • ✅ Instant error response. Now, 404 errors and other issues are detected in real time rather than after hours or days.
  • ✅ Full website control. We always know when failures occur and can quickly resolve them.
  • ✅ Minimized financial losses. Downtime is reduced from several hours to just 10 minutes, meaning advertising budgets are used efficiently, and clients don’t turn to competitors.

Conclusion

Automated monitoring via ELK and a Telegram bot is a modern and effective solution for website availability control. It allows timely detection and resolution of issues, preventing financial losses and reputational damage.

Want to implement a similar system? Contact us – we’ll help make your business more stable and secure!

How We Increased Paid Knitting Masterclass Sales by 7 Times

Problem

A client, the owner of a popular knitting website, approached us. The platform sells both knitting supplies and access to exclusive masterclasses held in private Telegram channels. However, a serious issue arose: customers were sharing links to paid channels with their acquaintances. This allowed people who hadn’t paid to access the content, causing the client to lose a significant portion of potential revenue.

Additionally, a new paid channel was created every two weeks, while old channels also needed to continue functioning. Therefore, a solution was required to ensure reliable protection against link leaks, flexible access management, and convenience for customers.

Our Solution

1. We developed a comprehensive system to protect against unauthorized link distribution, which not only eliminated leaks but also significantly increased sales. Here’s what we did:

  • Closed Telegram channels. All previously public channels became private, eliminating the possibility of free distribution.
  • Created a Telegram admin bot. This bot is added to the channel as an admin and is responsible for issuing one-time entry links. Telegram allows configuring the expiration time of such links, providing flexibility and access control.
  • Developed a customer data encryption system. After purchase, the client receives an individual link containing encrypted information about their order. Each link is tied to a specific user and cannot be reused.

2. Created a second Telegram bot integrated with Bitrix. It performs several key tasks:

  • Automatically authorizes the customer based on encrypted data from the link.
  • Checks if the latest paid orders include access to the desired channel.
  • Requests a one-time link from the first bot and sends it to the customer. This way, the client receives a personalized, time-limited access link (a minimal sketch of such a request is shown after this list).
  • In addition, we developed a link issuance monitoring system that tracks link usage and prevents abuse. The client can now see who was issued which links and when, and can quickly revoke access if necessary.
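
For reference, a one-time, time-limited invite link can be requested through the Telegram Bot API with the createChatInviteLink method. The sketch below is illustrative only; BOT_TOKEN and CHANNEL_ID are placeholders, and the exact parameters used in the client’s bots may differ:

    # Request a single-use invite link that expires in 24 hours.
    curl -s "https://api.telegram.org/bot${BOT_TOKEN}/createChatInviteLink" \
         --data-urlencode "chat_id=${CHANNEL_ID}" \
         --data-urlencode "member_limit=1" \
         --data-urlencode "expire_date=$(date -d '+24 hours' +%s)"

The response contains an invite_link field, which the second bot forwards to the authorized customer.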

Additional Benefits

Apart from content protection and preventing leaks, our system brought several other significant advantages:

  • ✅ A 7x increase in masterclass access sales. After implementing the system, the client stopped losing money from illegal link distribution, leading to a significant profit boost.
  • ✅ Automation of the access issuance process. Previously, administrators manually sent links to customers. Now this process is fully automated, reducing staff workload and eliminating human errors.
  • ✅ Additional channel for marketing and repeat sales. The client can now send personalized messages to customers via the Telegram bot. When a customer receives a link, the chat with the bot remains open, enabling continued communication with updates on new courses, discounts, and special offers.
  • ✅ Flexible access settings. Thanks to the Telegram API, we implemented different access levels depending on the type of purchase. This allowed for differentiated offers and subscription-based access to specific channels.

Conclusion

We helped the client protect their business from losses and unlock new growth opportunities. Now their exclusive knitting masterclasses are protected from free distribution, and the access issuance system works automatically and without failures.

Do you want to protect your content and increase profits? Contact us, and we will develop a custom solution to help you earn more while eliminating losses!

Features of Searching in Tender Applications and What Elastic Search Has to Do with It

In today’s world, the efficiency and speed of product search on a website play a key role in meeting customer needs and increasing business competitiveness. This is especially relevant for companies participating in tenders, where the specific way product names and article numbers are written can significantly complicate the search for necessary items. Let’s explore how the implementation of advanced technologies helped one such company optimize the search process on its website and achieve impressive results.

Problem: Inefficient product search on the website

The company operates a website on the Bitrix platform, offering more than 8,000 types of specialized equipment. The product database is constantly updated, requiring an efficient tool for fast and accurate searches. However, the standard site search faced a serious issue: in tender applications, article numbers often use Russian letters replaced with similar-looking English ones. For example, the article number “СВ-13457” could be written with the English letter “C” and the Russian letter “В”. The standard search could not handle such variations, forcing users to manually try different symbol combinations, making the search process lengthy and inconvenient.

Solution: Integration of the ELK system

To solve this problem, the company decided to implement the ELK system (Elasticsearch, Logstash, Kibana) — a powerful tool for data search and analytics. For non-specialists, ELK is a set of programs that allows for efficient collection, processing, and visualization of large amounts of data, ensuring fast and accurate information retrieval.

Implementation steps:

  1. ELK server setup: A separate server was allocated for deploying the ELK system, ensuring the necessary performance and independence from the main website infrastructure.
  2. Product indexing: All products and their attributes were uploaded into the Elasticsearch index with pre-processed data markup. This allowed the system to understand the data structure and provide more accurate search results (an illustrative index definition is sketched after this list).
  3. Regular data updates: A system for regularly updating and removing products in the index was set up, ensuring information remains relevant and matches the current product database.
  4. Development of a custom search module: Instead of the standard Bitrix search module, a custom one was created that sends queries directly to the ELK server. The retrieved product IDs are then processed by standard Bitrix commands. This approach significantly accelerated the search process: a single query is processed in about 50 milliseconds.
  5. Caching frequent queries: To provide an instant response for the most popular queries, caching was implemented, allowing users to receive search results almost instantly.
  6. Monitoring and search efficiency analysis: A system for tracking user queries and evaluating their effectiveness was implemented. Now it is possible to monitor whether users navigate to a product page after searching, helping to further improve search algorithms.
  7. Integration with the client’s analytics system: All data on search queries and user behavior was exported to the client’s analytics system, providing valuable insights for making informed business decisions.
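
To give a concrete idea of how look-alike characters can be handled in Elasticsearch, below is an illustrative index definition (not the client’s actual configuration) with a mapping char_filter that replaces visually identical Cyrillic letters with their Latin counterparts, so that "СВ-13457" and "CB-13457" are analyzed identically. The index name and field name are assumptions:

    curl -s -X PUT "http://localhost:9200/products" \
         -H 'Content-Type: application/json' -d '{
      "settings": {
        "analysis": {
          "char_filter": {
            "cyr_to_lat": {
              "type": "mapping",
              "mappings": ["А=>A", "В=>B", "С=>C", "Е=>E", "К=>K", "М=>M",
                           "Н=>H", "О=>O", "Р=>P", "Т=>T", "Х=>X"]
            }
          },
          "analyzer": {
            "article_analyzer": {
              "tokenizer": "standard",
              "char_filter": ["cyr_to_lat"],
              "filter": ["lowercase"]
            }
          }
        }
      },
      "mappings": {
        "properties": {
          "article": { "type": "text", "analyzer": "article_analyzer" }
        }
      }
    }'

Because the same analyzer is applied at search time, a query typed with a mix of alphabets still matches the indexed article number.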

Results and conclusions:

  • Improved search: The system can now handle complex queries, considering different variations of article numbers and product names, which is especially important for tender applications.
  • Reduced database load: The load on the main database decreased by 30%, positively impacting overall website performance and speed.
  • Understanding customer interests: The client can now analyze which products are most interesting to visitors and adjust stock availability according to current demand.
  • Increased search speed and accuracy: Users can now find the necessary products faster and with greater precision, enhancing their satisfaction and loyalty to the company.
  • Expanded analytical capabilities: Integration with the analytics system allowed the client to gain deeper insights into user behavior on the site and make informed decisions for business development.

The implementation of the ELK system and the development of a custom search module not only solved existing product search issues but also significantly improved the user experience, increased website efficiency, and provided valuable data for strategic planning. This approach demonstrates how modern technologies can be successfully integrated into business processes to achieve tangible results and strengthen market positions.

Implementation of Inventory Management for Multiple Offline Stores

How We Implemented Inventory Management for ITKKIT Considering Multiple Offline Stores

In today’s world of e-commerce, it is important not only to efficiently manage an online store but also to consider the needs of offline retail locations. We faced an interesting and complex challenge: to develop a system that would allow managers of two ITKKIT offline stores to see up-to-date inventory levels not only in their own stores but also in other warehouses, including the online store. This solution would help increase sales and improve logistics within the company.

Initial Data

The client, ITKKIT, has:

  • An online store with high traffic and a large number of products.
  • Two offline stores, each with its own warehouse.
  • An accounting system in 1C, already used for inventory management.

Main Project Challenges

  1. High website load. The online store’s database was already operating at its limit, so any bulk operations could lead to reduced performance and failures.
  2. Data caching. The website actively used caching, making it impossible to retrieve real-time inventory data.

These limitations forced us to look for efficient solutions that would not overload the database while ensuring data accuracy.

Solution Implementation

1. Integration of Data from 1C

The first step was to establish interaction with the 1C accounting system, where inventory tracking was already in place for all warehouses. Together with the client’s specialist, we configured the export of two new properties for each product:

  • Stock levels in the first offline store.
  • Stock levels in the second offline store.

Thus, we obtained structured data that could be loaded into the online store’s database.

2. Optimized Data Updates

Since the website’s database was under heavy load, we could not simply update all stock data at once. Instead, we developed a gradual update method:

  • Every 5 minutes, a small batch of requests covering 10 products was sent.
  • Over a week, the system updated data for all products and their sizes without causing excessive server load.

This allowed us to bring up-to-date data onto the site without harming website performance; an illustrative update schedule is shown below.
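
As an illustration only, such a gradual update can be driven by a cron entry that calls an update script every 5 minutes. The script path below is hypothetical; the script itself is expected to process the next batch of 10 products and remember where it stopped:

    # hypothetical path; runs every 5 minutes and logs its output
    */5 * * * * php -f /var/www/html/local/cron/update_offline_stock.php >> /var/log/stock_update.log 2>&1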

3. Asynchronous Data Display System

The next step was to create a user-friendly interface for store managers. Since real-time data retrieval was impossible due to caching, we implemented an asynchronous request system:

  • When a manager opens a product page, the system sends a background request to the server.
  • The server retrieves stock data from all warehouses.
  • The data is loaded on the page without reloading, ensuring high-speed operation.

This way, managers could instantly see up-to-date inventory information without creating additional server load (a sketch of such a background request follows).
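
Conceptually, the background request simply calls a small server-side script that bypasses the page cache and returns current stock for all warehouses as JSON. The endpoint name and response format below are hypothetical, not the actual ITKKIT implementation:

    # hypothetical endpoint and product id
    curl -s "https://example.com/ajax/stock.php?product_id=12345"
    # example response: {"online": 3, "store_1": 1, "store_2": 0}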

4. Automated Inventory Deduction and Reporting

To make the system fully functional, we implemented two key features:

  • Automatic inventory deduction. When a purchase was made, the product was reserved and deducted from the appropriate warehouse.
  • Inventory reporting. We configured a reporting system that allowed sales and inventory data analysis in a convenient format.

Project Results

As a result of our work, ITKKIT’s inventory management system reached a new level. Now, managers in each store:

  • Can see not only their store’s inventory but also stock levels in the other offline store and the online store.
  • Can quickly check product availability and direct customers to the right location.
  • Work in a convenient interface without delays or system overloads.

This solution not only improved employee efficiency but also increased sales through optimal inventory distribution.

Our team successfully completed the task despite technical limitations and helped ITKKIT take inventory management to the next level!

Harness the power of ChatGPT: a new assistant for developers

In today’s fast-paced world of software development, the need for reliable assistance and guidance is paramount. Developers often face complex challenges, seeking solutions that are both efficient and accurate. Enter ChatGPT, a groundbreaking language model developed by OpenAI. With its vast knowledge base and natural language processing capabilities, reaching out to ChatGPT for assistance offers a multitude of benefits for developers.

1. Expansive Knowledge: ChatGPT is built upon a vast corpus of information spanning many domains. With a knowledge cutoff in 2021, it has been trained on a diverse range of topics, making it a valuable resource for developers seeking guidance on programming languages, frameworks, algorithms, and more. Its ability to tap into this wealth of information allows for swift and accurate responses to queries.

2. Quick and Responsive: ChatGPT operates in real-time, ensuring developers receive prompt assistance when they need it the most. It is available 24/7, eliminating the constraints of time zones and allowing developers from around the globe to benefit from its expertise. Whether it’s a late-night bug hunt or a pressing deadline, ChatGPT is there to provide support, helping developers overcome obstacles and move forward with their projects.

3. Tailored Solutions: ChatGPT’s adaptability is a key advantage. It can understand and respond to natural language queries, allowing developers to express their problems in a way that feels natural and intuitive. This flexibility ensures that developers can receive personalized assistance, tailored to their specific needs. Whether it’s debugging code, understanding complex concepts, or exploring best practices, ChatGPT can provide insightful guidance.

4. Learning on the Go: ChatGPT’s responses improve over time as OpenAI refines its models with feedback gathered from real-world usage. As more developers reach out for assistance, that feedback helps incorporate new information and a better understanding of developer challenges, so the model keeps pace with the ever-changing landscape of software development and continues to provide relevant guidance.

5. Developer Community Empowerment: ChatGPT serves as a catalyst for collaboration and knowledge sharing within the developer community. By providing assistance and resolving queries, it encourages developers to share their experiences and insights, fostering a vibrant ecosystem of learning and growth. Developers can not only seek assistance but also contribute to the collective knowledge by sharing their expertise, further enhancing the value of the platform.

In conclusion, reaching out to ChatGPT for developer assistance offers a range of benefits. Its extensive knowledge, responsiveness, tailored solutions, continual learning, and community-driven nature make it an invaluable tool in the developer’s arsenal. By harnessing the power of ChatGPT, developers can overcome challenges, accelerate their projects, and unlock new levels of productivity. Embrace the future of developer assistance and tap into the potential of ChatGPT today.

rsync – fast and easy way to copy files to another server

Before making changes, it is helpful to save the files somewhere safe so that you can restore them later. The rsync utility helps with this task: it minimizes traffic by copying only the changed parts of files.

First of all, let’s install the package on both servers if it is not already there. We use the following command:

sudo apt-get install rsync (for CentOS use yum instead of apt-get)

To copy from a remote server, we naturally need access to it: we will be asked for a password before copying starts. In our example, data is copied from the directory /remote/source to /local/destination. If the destination directory does not exist, it will be created, and files that are already there and unchanged will simply be skipped:

rsync -avzP --stats user@remote_host:/remote/source/ /local/destination/
  • -a enables archive mode, preserving timestamps, symlinks, and file permissions
  • -z compresses data during transfer
  • -v increases the verbosity of messages during program operation
  • -P combines --progress (show progress while copying) and --partial (keep partially transferred files so copying can resume if the connection drops)

If you want to make sure that everything will go well, you can additionally add the --dry-run option: the utility will simulate the copy and log what it would do, but no files will actually be transferred.
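
For example, the same copy as above in simulation mode:

    rsync -avzP --stats --dry-run user@remote_host:/remote/source/ /local/destination/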

To copy to a remote server, swap the source and destination, pointing at the directory on the remote machine where the copy should be placed:

rsync -avzP --stats /local/source/ user@remote_host:/remote/destination/

Note about the trailing slash:

When specifying the path to a source directory, pay attention to the trailing slash – the / character at the end of the directory name. A trailing slash means the inner contents: if /source/ ends with a slash, rsync copies the contents of /source/ into /destination/. If there is no slash on /source, rsync creates a source directory inside /destination/ and copies everything into /destination/source/. However, the presence or absence of a trailing slash on the destination directory does not matter.

Sources:

  • https://help.ubuntu.com/community/rsync
  • https://www.servers.ru/knowledge/linux-administration/how-to-copy-files-between-linux-servers-using-rsync

Bulk reduce the size of images with ImageMagick

With the rapid growth of a resource, it can be difficult to keep track of correct image processing: images get uploaded haphazardly, which hurts page loading speed and takes up extra space on the server. You can fix this with the ImageMagick utility. Its toolkit is very extensive, but we will focus on the most important points. It is also worth repeating that, just in case, you should back up the images before starting processing.

  1. Install the package on the production server: sudo apt install imagemagick (use yum instead of apt on CentOS)
  2. To make sure the installation was successful, run identify -version and check the ImageMagick version

The most effective way to reduce the weight of images here is to lower their quality with the -quality option; we will set quality to 50% of the original. Keep in mind that this approach is not well suited to PNG images, so we tell find to select only JPEG files: -type f \( -name "*.jpg" -o -name "*.jpeg" \).

  1. Move to the directory with the images and run the processing; the full command looks like this: find . -type f \( -name "*.jpg" -o -name "*.jpeg" \) -execdir mogrify -quality 50 {} +
  2. After the command completes, check the images. It also processes files in nested directories.

Please note: when copying the above commands, make sure the quotes are plain straight quotes, not curly ones.

We have covered the specific case of quality reduction, but there are many more tools: -resize is used for resizing, -crop for cropping, -format for changing the format, and so on. A complete list of ImageMagick features can be found in the official documentation.
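
For example (the size and quality values here are only illustrative), shrinking every JPEG larger than 1920 px on its longest side and recompressing it in one pass could look like this:

    find . -type f \( -name "*.jpg" -o -name "*.jpeg" \) -execdir mogrify -resize "1920x1920>" -quality 75 {} +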

Preparing a VDS server for Bitrix

Starting tasks on a bare VDS

Tools

  1. PuTTY + configured root connection
  2. Hosting + account
  3. WinSCP + configured connection
  4. Notepad++

User preparation

In Ubuntu, working under the root account is highly discouraged, so first we create our own user with sudo rights.

  1. List all users: nano /etc/passwd
  2. Create a new user with a shell: sudo useradd -s /bin/bash username
  3. Set a password for the user: sudo passwd username
  4. Grant sudo rights: sudo usermod -aG sudo username
  5. Log in as username.
  6. Create a home directory: sudo mkdir /home/username
  7. Reconnect as the new user.
  8. Add the user to the www-data group: sudo usermod -a -G www-data username

Software installation and system update

  1. Get information about the latest package versions: sudo apt-get update
  2. Install MC: sudo apt-get install mc
  3. Install tasksel: sudo apt-get install tasksel
  4. Install git: sudo apt-get install git

Installing LAMP

  1. Run the installation: sudo tasksel install lamp-server
  2. Generate a MySQL password using http://www.onlinepasswordgenerator.ru/ – 10 characters including special characters
  3. Record the password on the “Project Information” board
  4. Set the MySQL root password in the console GUI
  5. Wait for the installation to complete
  6. Restart Apache: sudo /etc/init.d/apache2 restart

Apache setup

  1. Check availability by IP: open the server’s IP address in a browser. If everything is OK, the default Apache page will be shown
  2. Create a copy of the Apache configuration file
    sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/yoursite.conf
  3. Connect to the server via the root account using WinSCP
  4. Create a site folder via WinSCP or the console
  5. Create a test index.html in the site folder
  6. Edit /etc/apache2/sites-available/yoursite.conf so that it contains a virtual host along these lines (the e-mail address is a placeholder):

    <VirtualHost *:80>
        ServerName yoursite.ru
        ServerAdmin admin@yoursite.ru
        DocumentRoot /var/www/yoursite
        ErrorLog /var/www/yoursite_error.log
        CustomLog /var/www/yoursite_access.log combined
    </VirtualHost>
  7. Deactivate the old site: sudo a2dissite 000-default
  8. Activate the new site: sudo a2ensite yoursite
  9. Restart Apache: sudo service apache2 restart
  10. If everything is OK, our test page will be displayed when you open the IP address.
  11. In /etc/apache2/apache2.conf, in the <Directory /var/www/> section, set:

    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted

Configuring Apache Modules

  1. sudo apt-get install php-mbstring
  2. sudo phpenmod mbstring
  3. sudo phpenmod mcrypt
  4. sudo a2enmod rewrite
  5. sudo a2enmod ssl
  6. sudo service apache2 restart

Installing phpmyadmin

  1. sudo apt-get install phpmyadmin php-mbstring php-gettext
  2. sudo service apache2 restart

MySQL setup

  1. In a configuration file under /etc/mysql/conf.d/ add:
    [mysqld]
    sql_mode=ONLY_FULL_GROUP_BY,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
  2. sudo service mysql restart

Setting permissions

  1. Set the owner and group for the web root (and everything inside it): sudo chown -R www-data:www-data /var/www
  2. Remove access to the web-root contents for everyone except the owner (www-data): sudo chmod go-rwx /var/www
  3. Then allow users from the same group (and ‘other’) to enter the /var/www directory: sudo chmod go+x /var/www
  4. Change the group of all directories and files in the web root to www-data: sudo chgrp -R www-data /var/www
  5. Use chmod to make the content accessible to the owner only: sudo chmod -R go-rwx /var/www
  6. Allow any user in the www-data group to read and execute directories and files in the web root: sudo chmod -R g+rx /var/www
  7. Personally, I also give the group write permission – this is needed for users who edit content: sudo chmod -R g+rwx /var/www

PHP setup

Edit /etc/php/7.0/apache2/php.ini; after making the changes, restart Apache and check via phpinfo.php

  1. short_open_tag = On
  2. mbstring.internal_encoding = UTF-8
  3. mbstring.func_overload = 2

Site transfer to Bitrix. Preparing to deploy a backup.

Often the server is named after the domain that will be hosted on it. This creates the following problem: from the VDS, the domain resolves to 127.0.0.1, while to download a copy of the site you need the domain name to point to the old hosting’s IP address.

Edit the /etc/hosts file on the VDS server and add a line like
87.236.16.31 yoursite.ru

And comment the line like
127.0.1.1 yoursite.ru yoursite

to make it look like this:
# 127.0.1.1 yoursite.ru yoursite

Save the file and run a check:
ping yoursite.ru

After deploying the backup, revert these changes! Otherwise, any further edits will end up on the “old” production site.

Preparing a backup on the production site

  1. Check free space on the production hosting; if there is not enough, temporarily increase the disk quota
  2. Create a full Bitrix backup on the production site
  3. Download the restore.php file from there
  4. Place it in the root of the site folder on the VDS
  5. Open it by IP: http://yourip/restore.php
  6. Follow the prompts.
  7. Wait for the deployment to finish.

Preparing the migrated site

  1. Go to Bitrix admin panel by IP in VDS
  2. Check http://yourip/bitrix/admin/site_checker.php?lang=ru
  3. If everything is OK, proceed.

Import/Export Database via SSH

It works faster and more reliably than going through phpmyadmin, and handles databases of any size.

On large databases, in order not to overload the server, we use this set of options:
mysqldump -u USER -p --single-transaction --quick --lock-tables=false DATABASE | gzip > OUTPUT.gz

  1. Connect to the client’s server via SSH
  2. Test the MySQL connection: mysql -u [DB_username] -p (it will ask for the password)
  3. Check that this is the right user (list the databases): show databases;
  4. Check the free space available on the client’s hosting (if possible)
  5. Go to the directory where we want to put the dump
  6. Dump the database: mysqldump -u [Username] -p [DBname] > [filename].sql (it will ask for the password and then silently start working; wait until the shell prompt reappears – that means the file is ready)
  7. Copy the created file to the development server in the folder /var/www/html/storage/db
  8. Go to phpmyadmin on the development server
  9. If the target database already exists, CAREFULLY rename it via the Operations section, appending the index _1, _2, or the next free one to its name
  10. Create a database user (as in hosting)
  11. When creating a user, check the box “Create database with same name and grant all privileges.”
  12. Create a new database with the desired name (home section) – the encoding of the new database must be the same as that of the copied one!!
  13. Go to the created database in the Privileges section and check that the user with the name of this database has full access to it
  14. SSH into the development server
  15. Go to the folder /var/www/html/storage/db
  16. Run the command mysql -u [DBUsername] -p [DBName] < [filename].sql; if everything is OK, wait for the prompt to reappear, as in step 6
  17. Checking the database with phpmyadmin
  18. Checking site performance
  19. Remove the renamed database from step 9.

Source: http://qaru.site/questions/114074/how-can-i-slow-down-a-mysql-dump-as-to-not-affect-current-load-on-the-server

What is mining and where did all the video cards go?

In simple words: what is mining and where did all the video cards go

You have probably heard in the news that all video cards have disappeared from stores. You may even have learned who bought them all – the miners, who “mine” cryptocurrency on their “farms”. I am sure you have also heard about the most famous cryptocurrency – Bitcoin.

But I also suspect that you don’t really understand why this is happening right now, what exactly this mining is, and why there is so much noise around these strange “electronic candy wrappers” in general. Maybe if everyone is mining, you should too? Let’s get to the bottom of what is going on.

Blockchain

Let’s start with a bit of bitcoin and blockchain basics. You can read more about this in our other article, and here I will write very briefly.

Bitcoin is decentralized virtual money. That is, there is no central authority, no one trusts anyone, but nevertheless, payments can be safely organized. Blockchain helps with this.

Blockchain technology, in my opinion, is the new internet. It’s an idea on the same level as the internet.

Herman Gref

Blockchain is a kind of Internet ledger: a sequential chain of blocks, each of which contains transactions – who transferred how many bitcoins and to whom. In English it is called a ledger, and that is essentially what it is, with a couple of important features.

The first key feature of the blockchain is that all full-fledged participants in the Bitcoin network store the entire block chain with all transactions for all time. And they constantly add new blocks to the end. I repeat, the entire blockchain is stored by each user in its entirety – and it is exactly the same as that of all other participants.

The second key point: the blockchain is based on cryptography (hence the “crypto” in the word cryptocurrency). The correct operation of the system is guaranteed by mathematics, and not by the reputation of any person or organization.

Those who create new blocks are called miners. As a reward for each new block, its creator now receives 12.5 bitcoins. At the exchange rate as of July 1, 2017, this is approximately $30,000. A little later, we will talk about this in more detail.

By the way, block rewards are the only way to issue bitcoin. That is, all new bitcoins are created with the help of mining.

A new block is created only once every 10 minutes. There are two reasons for this.

Firstly, this was done for stable synchronization – in order to have time to distribute the block throughout the Internet in 10 minutes. If blocks were created continuously by everyone, then the Internet would be filled with different versions, and it would be difficult to understand which of these versions everyone should eventually add to the end of the blockchain.

Secondly, these 10 minutes are spent on making the new block “beautiful” from a mathematical point of view. Only a correct and beautiful block is added to the end of the blockchain ledger.

Why blocks should be “beautiful”

The correct block means that everything is correct in it, everything is according to the rules. The basic rule: the one who transfers the money really has that much money.

And a beautiful block is one whose hash has many zeros at the beginning (a hash is the result of a mathematical transformation of the block; you can read more about it in our other article). For our purposes the details do not matter. The important thing is that to get a beautiful block, you need to “shake” it: slightly change the block and then check whether it has become beautiful.

Each miner continuously “shakes” candidate blocks, hoping to be the first to shake out a beautiful one that gets added to the end of the blockchain and so to collect the reward of roughly $30,000.

At the same time, if there are suddenly ten times more miners, the blockchain automatically requires a new block to be ten times more “beautiful” before it is accepted. The rate at which new blocks appear is thus preserved – one block still appears every 10 minutes – but the probability that any particular miner earns the reward drops tenfold.
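
To make the idea of “shaking” concrete, here is a toy illustration (nothing like real Bitcoin mining, which hashes a binary block header with double SHA-256 at an enormously higher difficulty): keep changing a number appended to the block until the hash starts with several zeros.

    # Toy proof-of-work: find a nonce whose SHA-256 hash starts with four zeros.
    block="Alice pays Bob 10 BTC"
    nonce=0
    while true; do
        hash=$(printf '%s%s' "$block" "$nonce" | sha256sum | cut -d' ' -f1)
        if [ "${hash:0:4}" = "0000" ]; then
            echo "Found nonce $nonce, hash $hash"
            break
        fi
        nonce=$((nonce + 1))
    done

Requiring more leading zeros makes a “beautiful” block exponentially harder to find, which is exactly how the difficulty adjusts when more miners join.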

Now we are ready to answer the question why blocks should be beautiful. This is done so that some conditional Vasya cannot take and simply rewrite the entire history of transactions.

Vasya will not be able to say: “No, I did not send Misha 10 bitcoins, in my version of the story there is no such thing – believe me.” In this fake version of the story the blocks would still have to be beautiful, and as we know, shaking out even one such block takes all the miners working for about 10 minutes – something Vasya alone cannot possibly manage.

Miners

The concept is clear, now let’s take a closer look at the miners.

In 2009, when only enthusiasts (or rather, mostly its creators) knew about Bitcoin and it cost five cents apiece, mining was easy. There were few miners – say, a hundred. That means that, on average, a given miner – call him Innokenty – got lucky at least once a day, shaking out a block and receiving the reward.

By 2013, when the price of Bitcoin rose to hundreds of dollars apiece, there were already so many enthusiastic miners that one could wait months for such luck. Miners began to unite in “pools” – cartels that all shake the same candidate block together and then share the reward fairly, in proportion to the effort expended.

Then special devices appeared – ASICs. These are chips designed to perform one specific task; in this case, they are narrowly focused on shaking Bitcoin blocks as efficiently as possible.

The mining power of ASICs is incomparably greater than that of a conventional, general-purpose computer. In China, Iceland, Singapore, and other countries, huge “farms” of ASIC systems began to be built. It is advantageous to locate a farm underground in a mine, because it is cold there, and even more profitable to build it next to a hydroelectric power station, where electricity is cheaper.

The result of this arms race was that it was completely unjustified to mine bitcoins at home.

Altcoin mining or why video cards disappeared right now

Bitcoin is the first and most popular cryptocurrency. But with the advent of the popularity of cryptocurrencies as a phenomenon, competitors began to appear like mushrooms. Now there are about a hundred alternative cryptocurrencies – the so-called altcoins.

No altcoin creator wants mining their coin to become very difficult and expensive right away, so they come up with new criteria for block “beauty”, ideally ones for which creating specialized devices (ASICs) is hard or at least takes as long as possible.

Everything is designed so that any fan of the altcoin can take an ordinary computer, make a tangible contribution to the total power of the network, and receive a reward. For the “shaking” in this case a regular video card is used – it just so happens that video cards are well suited to these calculations. Keeping mining accessible in this way helps increase the altcoin’s popularity.

Ethereum deserves special attention. It is a relatively new cryptocurrency (it appeared in 2015), but one with special features. In short, Ethereum’s main innovation is the ability to include in the blockchain not only static records of payments but also interactive objects – smart contracts – that operate according to programmed rules.

Why this caused such a stir, we will discuss in a separate article. For now, it is enough to say that Ethereum’s new properties attracted great interest from “crypto-investors” and, as a result, its exchange price grew rapidly: if at the beginning of 2017 one “ether” cost $8, by June 1 the rate had broken through the $200 mark.

It has become especially profitable to mine Ethereum, which is why miners bought up video cards.

A Gigabyte video card made specifically for mining – without “unnecessary” extras such as a monitor output. Source

What happens if miners stop mining

Suppose mining becomes unprofitable (the reward no longer covers the cost of equipment and electricity), and miners stop mining or switch to some other currency. What then? Is it true that if miners stop mining, Bitcoin will stop working or become too slow?

No. As we found out above, the blockchain constantly adapts the “beauty” criteria so that, on average, blocks keep appearing at a constant rate. If there are 10 times fewer miners, a new block will simply require 10 times less “shaking”, and the blockchain will continue to perform its functions in full.

So far, the growth of the exchange rate has more than compensated for the falling block reward, and someday the main income will come from transaction fees, which miners also collect. They will not be left without work or reward.

Conclusion

We have figured out what mining really is, why it is needed, who profits from it and when, where all the video cards from the stores have gone, and why some manufacturers now release video cards with no monitor outputs at all.

The material was taken from: https://blog.kaspersky.ru/