
# SQL bulk copy log code
Recently, we were asked to start pulling data daily from a number of sources (e.g., several REST APIs), aggregating it, and saving it to a database to be used for generating reports. As usual, we want to make sure the application is easy to test (we need to make sure those stats are correct!), but we also need to ensure it performs well, because every time this job runs we will be adding possibly hundreds of thousands of rows to a number of different tables. We were worried that the bottleneck in this application would be running all those insert statements against our MS SQL Server database.

.NET has a handy feature called DataTables. A DataTable is basically an in-memory representation of an MS SQL Server table. DataTables allow you to create the table in memory, add rows to it, edit values in specific columns of a row, and so on, until all the data is exactly what you want. Once the DataTable is ready, it is just a simple SqlBulkCopy call to insert all the data at once. So rather than hundreds of thousands of insert statements, it is just one bulk copy, and rather than taking minutes or longer to run, it takes just seconds to dump all the data into MS SQL Server. Also, because the data is all in memory, it is very easy to test all of our stats: we simply pass in the data we would receive and assert on the values in the DataTables. The following is a simple example where we are saving daily sales figures for each salesperson. You can download the code from GitHub here.

Tests have been run using the LINEITEM table of the TPC-H 10 GB test database. The uncompressed table size is around 8.8 GB, with 59,986,052 rows. The source database was a SQL Server 2017 VM running on Azure, and the target was Azure SQL Hyperscale Gen8 with 8 vCores. Smart Bulk Copy was running on the same virtual machine that hosted the source database. Both the VM and the Azure SQL database were in the same region.
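The DataTable-plus-SqlBulkCopy pattern described above can be sketched as follows. This is a minimal illustration, not the downloadable sample itself: the table name `DailySales`, its columns, and the row values are all hypothetical, and the actual bulk copy is shown commented out so the snippet runs without a database connection.

```csharp
using System;
using System.Data;
// Microsoft.Data.SqlClient provides SqlBulkCopy; the call is shown
// commented out below because it needs a real connection string.

// Build the table entirely in memory.
var sales = new DataTable("DailySales");
sales.Columns.Add("SalesPersonId", typeof(int));
sales.Columns.Add("SaleDate", typeof(DateTime));
sales.Columns.Add("Amount", typeof(decimal));

sales.Rows.Add(1, new DateTime(2020, 1, 6), 1250.50m);
sales.Rows.Add(2, new DateTime(2020, 1, 6), 980.00m);

// Values stay editable right up until the copy.
sales.Rows[1]["Amount"] = 1010.00m;

Console.WriteLine(sales.Rows.Count); // 2

// One round trip instead of thousands of INSERT statements:
// using var bulk = new SqlBulkCopy(connectionString)
// {
//     DestinationTableName = "dbo.DailySales",
//     BatchSize = 10_000
// };
// bulk.WriteToServer(sales);
```

Because the DataTable is just an in-memory object, unit tests can assert directly on its rows before anything touches the server.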
There are a couple of exceptions to what was just described: Azure SQL Hyperscale always provides 100 MB/sec of maximum log throughput, no matter the number of vCores, and the M-series can do up to 256 MB/sec of log throughput. Of course, if you are using a small number of cores on Hyperscale, other factors (for example, sorting when inserting into a table with indexes) could come into play and prevent you from reaching the mentioned 100 MB/sec. In this case, move to a higher SKU for the duration of the bulk load.
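One way to tell whether the log throughput cap, rather than something like sorting, is what is holding the load back is to look at the standard Azure SQL resource-usage DMV while the copy runs. This is a hedged sketch; the view and columns below are regular Azure SQL DMVs, not something shown in the original text.

```sql
-- Recent resource usage for the current Azure SQL database,
-- sampled roughly every 15 seconds. avg_log_write_percent close
-- to 100 means the log throughput governor is the bottleneck.
SELECT TOP (20)
    end_time,
    avg_log_write_percent,
    avg_cpu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

If log write percent is pegged at 100 while CPU stays low, a higher SKU (or M-series, per the note above) is the lever to pull.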


Smart Bulk Copy splits the source table into logical partitions based on the physical position of each row, using a query of this shape:

```sql
SELECT * FROM  WHERE ABS(CAST(%%PhysLoc%% AS BIGINT)) % =
```

PLEASE NOTE that the physical position of a row may change at any time if there is any activity on the database (updates, index reorgs, etc.), so it is recommended that this approach is used only in three cases:

- You're absolutely sure there is no activity of any kind on the source database, or
- You're using a database snapshot as the source database, or
- You're using a database set in READ_ONLY mode.

## Heaps, Clustered Rowstores, Clustered Columnstores

From version 1.7, Smart Bulk Copy will smartly copy tables with no clustered index (heaps) and tables with a clustered index (rowstore or columnstore, it doesn't matter). A couple of notes for tables with a Clustered Columnstore index: Smart Bulk Copy will always use a Batch Size of at least 102,400 rows, no matter what is specified in the configuration, as per best practices. If you have a columnstore, it is generally recommended to increase the value to 1,048,576 in order to maximize compression and reduce the number of rowgroups. When copying a Columnstore table, you may see very low values (
