How to speed up pandas to_sql

I am trying to upload data to a Microsoft Azure SQL database using pandas to_sql, and it takes forever. I often have to start the job before going to bed; when I wake up in the morning it is done, but it has taken several hours, and if an error occurs overnight I cannot deal with it. Here is my code:

import urllib.parse

from sqlalchemy import create_engine, event

params = urllib.parse.quote_plus(
    'Driver=%s;' % driver +
    'Server=%s,1433;' % server +
    'Database=%s;' % database +
    'Uid=%s;' % username +
    'Pwd={%s};' % password +
    'Encrypt=yes;' +
    'TrustServerCertificate=no;'
)

conn_str = 'mssql+pyodbc:///?odbc_connect=' + params
engine = create_engine(conn_str)

@event.listens_for(engine, 'before_cursor_execute')
def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
    # Enable pyodbc's fast_executemany for bulk executemany() inserts
    if executemany:
        cursor.fast_executemany = True
        cursor.commit()

connection = engine.connect()

Then I run this command to load the data into SQL:

master_data.to_sql('table_name', engine, chunksize=500, if_exists='append', method='multi', index=False)

I have been fiddling with the chunksize, and the sweet spot seems to be 100, which is not fast enough considering I usually upload 800,000 to 2,000,000 records at a time. If I go above that, I start getting errors that seem to be related only to the chunk size:

OperationalError: (pyodbc.OperationalError) ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]Communication link failure (0) (SQLExecDirectW)')
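One hard limit worth keeping in mind here (the column count below is an assumed example, since the question does not give it): SQL Server allows at most 2100 parameters in a single statement, and with method='multi' every row in a chunk contributes one parameter per column, so the usable chunksize is capped at roughly 2100 divided by the number of columns. A quick sketch of the arithmetic:

n_columns = 20                     # assumed column count, not from the question
max_chunksize = 2100 // n_columns  # -> 105 rows per INSERT with method='multi'
print(max_chunksize)

A sweet spot around 100 would be consistent with a table of about 20 columns.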
Answer:

Not sure whether you have resolved this already, but I did want to provide an answer here with information specific to the Azure SQL Database libraries for Python, plus some useful resources for investigating and mitigating this issue, where applicable.

An example of querying an Azure SQL database directly with pyodbc: Quickstart: Use Python to query Azure SQL Database Single Instance & Managed Instance

An example using a pandas DataFrame: How to read and write to an Azure SQL database from a Pandas dataframe

main.py

"""Read write to Azure SQL database from pandas"""
import pyodbc
import pandas as pd
import numpy as np
from sqlalchemy import create_engine

# 1. Constants
AZUREUID = 'myuserid'                                    # Azure SQL database userid
AZUREPWD = '************'                                # Azure SQL database password
AZURESRV = 'shareddatabaseserver.database.windows.net'   # Azure SQL database server name (fully qualified)
AZUREDB = 'Pandas'                                      # Azure SQL database name (if it does not exit,pandas will create it)
TABLE = 'DataTable'                                      # Azure SQL database table name
DRIVER = 'ODBC Driver 13 for SQL Server'                 # ODBC Driver

def main():
"""Main function"""

# 2. Build a connectionstring
connectionstring = 'mssql+pyodbc://{uid}:{password}@{server}:1433/{database}?driver={driver}'.format(
    uid=AZUREUID,password=AZUREPWD,server=AZURESRV,database=AZUREDB,driver=DRIVER.replace(' ','+'))

# 3. Read dummydata into dataframe 
df = pd.read_csv('./data/data.csv')

# 4. Create SQL Alchemy engine and write data to SQL
engn = create_engine(connectionstring)
df.to_sql(TABLE,engn,if_exists='append')

# 5. Read data from SQL into dataframe
query = 'SELECT * FROM {table}'.format(table=TABLE)
dfsql = pd.read_sql(query,engn)

print(dfsql.head())


if __name__ == "__main__":
    main()
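One adjustment worth trying on top of this sample (a sketch, assuming SQLAlchemy 1.3 or newer, which accepts a fast_executemany flag in create_engine for the mssql+pyodbc dialect; the connection details below are placeholders): enabling pyodbc's fast_executemany usually speeds up large to_sql appends dramatically.

import urllib.parse

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection details -- substitute your own.
params = urllib.parse.quote_plus(
    'Driver=ODBC Driver 17 for SQL Server;'
    'Server=myserver.database.windows.net,1433;'
    'Database=mydb;Uid=myuser;Pwd=mypassword;'
    'Encrypt=yes;TrustServerCertificate=no;'
)

# SQLAlchemy 1.3+ forwards fast_executemany to the pyodbc cursor,
# so no before_cursor_execute event listener is needed.
engine = create_engine(
    'mssql+pyodbc:///?odbc_connect=' + params,
    fast_executemany=True,
)

df = pd.read_csv('./data/data.csv')

# method=None (the default) issues one parameterized INSERT per chunk
# through executemany(), the code path that fast_executemany batches.
df.to_sql('DataTable', engine, if_exists='append',
          chunksize=100000, method=None, index=False)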

Finally, the following resources should help with comparing specific implementations and their performance. The Stack Overflow thread is probably the best resource here, while the monitoring and performance tuning documentation is useful for investigating and mitigating server-side performance problems and the like:

Speeding up pandas.DataFrame.to_sql with fast_executemany of pyODBC
Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance

Regards, Mike

Answer:
params = urllib.parse.quote_plus(
    'Driver=%s;' % driver +
    'Server=%s,1433;' % server +
    'Database=%s;' % database +
    'Uid=%s;' % username +
    'Pwd={%s};' % password +
    'Encrypt=yes;' +
    'TrustServerCertificate=no;'
)

conn_str = 'mssql+pyodbc:///?odbc_connect=' + params
engine = create_engine(conn_str)

@event.listens_for(engine, 'before_cursor_execute')
def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
    if executemany:
        cursor.fast_executemany = True
        cursor.commit()

connection = engine.connect()

The following line completes the database ingestion. I ran into trouble with the chunksize earlier, but fixed it by adding the method and index arguments:

ingest_data.to_sql('db_table_name', engine, if_exists='append', chunksize=100000, method=None, index=False)
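A note on why this works (my reading of the pandas and pyodbc behavior, not something stated explicitly above): method='multi' folds many rows into one large INSERT ... VALUES (...), (...) statement that is run with a single execute() call, so the executemany branch of the event listener never fires and fast_executemany never engages; method=None instead sends one parameterized INSERT per chunk through executemany(), which is exactly the path fast_executemany accelerates. A hypothetical timing harness for comparing the two (the table name and DataFrame are placeholders):

import time

def timed_to_sql(df, engine, method, chunksize):
    """Time one to_sql append with the given method and chunksize."""
    start = time.perf_counter()
    df.to_sql('db_table_name', engine, if_exists='append',
              chunksize=chunksize, method=method, index=False)
    return time.perf_counter() - start

# Example comparison (requires a real engine and DataFrame):
# print('method=None  :', timed_to_sql(ingest_data, engine, None, 100000))
# print("method='multi':", timed_to_sql(ingest_data, engine, 'multi', 100))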