Home
SQL Index Manager has been designed to efficiently maintain indexes within the current versions of Microsoft SQL Server and Azure SQL Database.
Paid tools from Devart and Redgate were taken as the main competitors during development. At the same time, the goal was never to thoughtlessly copy their functionality, but to build a genuinely convenient product that both beginners and experienced users could work with.
To use SQL Index Manager, simply compile the app from source or download the latest release, unzip the archive, and run it. Administrator rights are not required to start the application.
Note that the application executable is not digitally signed, so you may need to add it to your antivirus exclusion list (this problem has been encountered with Norton Internet Security).
When you run the application, a connection window always opens first, where you specify the server to connect to. The list of local SQL Server instances is read from the registry; remote servers must be specified manually.
Supported versions: Microsoft SQL Server 2019/2017/2016/2014/2012/2008R2 and Azure SQL Database.
On the first connection to a server, a query is sent to check the SQL Server version and the required permissions. The application then attempts to retrieve the list of databases available for scanning.
If the user has sysadmin rights, extended information about the current state of the databases is displayed, and the header shows the free space on all available local drives.
Next, mark the databases you want to scan; the selected databases will then be described sequentially, taking the specified settings into account.
When designing the describe mechanism, the scanning methods used in competitors' solutions were deliberately not taken as a basis, since they were far from optimal. The main goal of the custom describe is to speed up scanning by reducing network and disk waits.
The problem with Redgate's tool is its very simple approach: one query retrieves the list of user tables and views, a second returns all indexes in the selected database, and then, in a loop, a separate request is sent for each index partition to determine its size and fragmentation level. At the end of the scan, indexes below the entry threshold are discarded on the client.
The disadvantages of this approach are the large number of identical requests, and the fact that no check is made before each request whether the current partition contains rows, which would exclude empty partitions from the scan.
In addition, the number of requests to the server is roughly equal to the total number of rows returned from sys.partitions. Given that real databases can contain tens of thousands of partitions, such an implementation can send an impressive number of identical requests to the server. When the database is remote, the scanning time grows even longer, because every request, even the simplest one, incurs network latency.
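A minimal sketch of that per-partition pattern (an illustration of the approach described above, not Redgate's actual code; the `@` variables are placeholders filled in on each loop iteration):

```sql
-- Hypothetical illustration: after listing tables and indexes,
-- a query like this is issued once per partition, in a loop.
SELECT page_count, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), @object_id, @index_id, @partition_number, 'LIMITED');
```

With tens of thousands of partitions, this pattern turns into tens of thousands of round trips to the server.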
Devart's product retrieves the information in one big request and then filters out the excess on the client. This eliminates the main problem of the previous example, the flood of similar requests.
The drawback of this implementation is that no filtering parameters are passed to the sys.dm_db_index_physical_stats function, which could exclude obviously unneeded indexes from the scan. In practice, this causes unnecessary disk load during the first scan.
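The single-request pattern can be sketched roughly as follows (an illustration of the approach, not Devart's actual query):

```sql
-- Passing NULL for object_id / index_id / partition_number scans
-- every index, including ones that will later be discarded on the
-- client, which produces avoidable physical reads on the first pass.
SELECT [object_id], index_id, partition_number,
       page_count, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED');
```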
The peculiarity here is that the data returned by sys.dm_db_index_physical_stats is not permanently cached in the buffer pool, so minimizing physical reads when obtaining index fragmentation information was one of the priority tasks during development.
After several experiments, we managed to combine the approaches described above and implement a so-called partial describe. First, the list of included and excluded objects is formed; then a preliminary size estimate filters out indexes that are too small or too large.
Only partitions that actually contain data are returned to the client, to avoid unnecessary reads from empty indexes.
Depending on the settings, only the index types the user wants to analyze are retrieved. The current implementation supports heaps, clustered and nonclustered B-Tree indexes, and columnstore indexes. In addition, at the initial stage of scanning, various index properties and usage statistics are collected.
The fragmentation level for small indexes is then obtained within a single query, while fragmentation for large indexes is deferred until after the main query has completed.
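The partial describe can be sketched in two steps (a simplified illustration; the thresholds and `@` variables are assumed placeholders, not the app's actual values):

```sql
-- Step 1: build a candidate list from cached metadata only, so no
-- physical reads occur yet. Empty partitions and indexes outside the
-- size range are filtered out before fragmentation is ever requested.
SELECT p.[object_id], p.index_id, p.partition_number, ps.used_page_count
FROM sys.partitions p
JOIN sys.dm_db_partition_stats ps ON ps.partition_id = p.partition_id
WHERE p.[rows] > 0                                  -- skip empty partitions
  AND ps.used_page_count BETWEEN @MinPages AND @MaxPages;

-- Step 2: request fragmentation only for the survivors. Small indexes
-- are handled within the main query; for large ones, a call like this
-- is deferred until after the main query has completed.
SELECT avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), @object_id, @index_id, @partition_number, 'LIMITED');
```

One reason the large-index calls are issued separately is that sys.dm_db_index_physical_stats does not accept correlated column references as arguments, so it cannot simply be joined via APPLY to the candidate list.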
This approach to query generation solved the scanning performance problems observed in the competitors' apps.