Hi there,
I have a large table (~300 GB) that is built from multiple smaller tables, each containing a subset of the rows.
Each of these tables is written to disk from Julia as a CSV file of roughly 1 GB.
This data must ultimately end up as a single table in a SQL Server database.
I can write a function that issues a BULK INSERT for each file.
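Something along these lines, as a minimal sketch. It assumes ODBC.jl is used for the connection; the connection string, table name, and file paths are placeholders, and FORMAT = 'CSV' requires SQL Server 2017 or later:

```julia
using ODBC, DBInterface

# Sketch: BULK INSERT each CSV file into the target table.
# `conn_str`, `table`, and `files` are placeholders for illustration.
function bulkinsert_csvs(conn_str::String, table::String, files::Vector{String})
    conn = DBInterface.connect(ODBC.Connection, conn_str)
    try
        for file in files
            # BULK INSERT runs on the server, so `file` must be a path
            # the SQL Server service account can read (a known pitfall
            # when the client and server are different machines).
            sql = """
                BULK INSERT $table
                FROM '$file'
                WITH (FORMAT = 'CSV', FIRSTROW = 2, TABLOCK);
            """
            DBInterface.execute(conn, sql)
        end
    finally
        DBInterface.close!(conn)
    end
end

# Hypothetical usage:
# bulkinsert_csvs(
#     "Driver={ODBC Driver 17 for SQL Server};Server=myserver;Database=mydb;Trusted_Connection=yes;",
#     "dbo.BigTable",
#     readdir("/data/csv_chunks"; join = true),
# )
```

FIRSTROW = 2 assumes each file has a header row, and TABLOCK is there to speed up the load; both would need tuning to the actual files.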
Would such a function be a suitable addition to this package?
Are you aware of any pitfalls/obstacles to this approach?
Is there a better way?
Cheers,
Jock