
SDE to CSV slow transfer speeds


I'm trying to export a polyline feature class from our network SDE (enterprise geodatabase) to a CSV file on the local machine (or to a file geodatabase, but slow speed seems to be the theme regardless of output format).



import arcpy, csv
import datetime
from arcpy import env

def tableToCSV(input_tbl, csv_filepath):
    fld_list = arcpy.ListFields(input_tbl)
    fld_names = [fld.name for fld in fld_list]
    with open(csv_filepath, 'wb') as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(fld_names)
        with arcpy.da.SearchCursor(input_tbl, fld_names) as cursor:
            for row in cursor:
                writer.writerow(row)
    print csv_filepath + " CREATED"

env.workspace = r"Q:\SDE\Direct Connection to delivery.sde"
desiredFC = arcpy.ListFeatureClasses("", "", "Topography")

for fc in desiredFC:
    if fc == 'Contours':
        print 'Starting to Process ' + fc
        startTime = datetime.datetime.now()
        out_csv = r"C:\Users\CHOK\Desktop\Extract Data Method/" + fc[14:] + '.csv'
        tableToCSV(fc, out_csv)
        endTime = datetime.datetime.now()
        print 'Finished Processing ' + fc
        print 'It took ' + str(endTime - startTime) + ' to process ' + fc

raw_input('Press Enter to exit')


When I run the above script, transferring 3.6 GB of data from the SDE to a CSV (or to a file geodatabase using arcpy copy methods) takes 3 to 4 hours or more.



I can see in Task Manager that the network utilisation is usually between 0.3 Mbps and 2 Mbps, though it sometimes jumps to 10-20 Mbps, and other feature classes transfer at around 5-7 Mbps.



Is there something I can try to improve the speed at which data stored in the network SDE is copied to a local drive, or is this a lost cause given the network currently in place?










arcpy enterprise-geodatabase

asked Apr 3 at 5:17 by Andy (a new contributor)
edited Apr 3 at 11:19 by Vince

  • That's probably because your HDD is thrashing while writing to CSV from a feature class - open Resource Monitor to see how hard your drive is being pushed. I would think a file geodatabase would be better to write to; a shapefile is definitely not an option due to size. You could perhaps create a replica. What is the underlying database for your SDE? What else is running on that server? When was it last restarted? Have you tried running the export on the server instead of on a remote workstation to negate the network speed?

    – Michael Stimson
    Apr 3 at 5:23












  • The disk writing is at 1%, so I don't think it's a hard drive issue. Attempts to write to a new file GDB also seem to yield similar speeds. Unfortunately I don't have authority over the actual SDE (it's located in a different location and managed by a different team).

    – Andy
    Apr 3 at 5:31











  • That often happens. Have you tried creating a replica (resources.arcgis.com/en/help/main/10.2/index.html#//…)? It should be optimized for this sort of thing, but it may also create a version, which you may not have permission to do. Otherwise, Copy Features (resources.arcgis.com/en/help/main/10.2/index.html#//…) or Feature Class to Feature Class (resources.arcgis.com/en/help/main/10.2/index.html#//…) should be faster. If not, then it's a server/network issue - contact the team that administers the database and see what's up.

    – Michael Stimson
    Apr 3 at 5:37
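
A minimal sketch of the copy-tool route this comment suggests, using the paths and names from the question; C:\Temp\local.gdb is a hypothetical pre-created local file geodatabase:

import arcpy

sde_fc = r"Q:\SDE\Direct Connection to delivery.sde\Topography\Contours"
local_gdb = r"C:\Temp\local.gdb"  # hypothetical local file geodatabase

# Option 1: straight copy into the local file geodatabase.
arcpy.CopyFeatures_management(sde_fc, local_gdb + r"\Contours")

# Option 2: Feature Class to Feature Class, which also accepts a where clause,
# so a large layer can be pulled across the network in chunks.
arcpy.FeatureClassToFeatureClass_conversion(sde_fc, local_gdb, "Contours_copy")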











  • A few years back I needed to use ASCII to export and import several hundred million rows. I used arcpy.da.SearchCursor and csv, and the export to 68Gb took 20 minutes and the import two hours. 4Gb is a drop in that bucket and should only take minutes, which makes me wonder if you're using an IO-limited resource like an RDS instance for the Enterprise geodatabase (there is no longer anything called SDE).

    – Vince
    Apr 3 at 11:21
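
For reference, a minimal sketch of the arcpy.da.SearchCursor-plus-csv pattern this comment describes, written to skip the geometry BLOB and buffer the writes; the paths are placeholders taken from the question:

import csv

import arcpy

fc = r"Q:\SDE\Direct Connection to delivery.sde\Topography\Contours"

# Select only non-geometry fields; fetching the Shape BLOB over the wire is
# usually the expensive part of the export.
fields = [f.name for f in arcpy.ListFields(fc)
          if f.type not in ('Geometry', 'Blob', 'Raster')]

with open(r"C:\Temp\Contours.csv", 'wb') as out:  # Python 2: csv wants binary mode
    writer = csv.writer(out)
    writer.writerow(fields)
    with arcpy.da.SearchCursor(fc, fields) as cursor:
        writer.writerows(cursor)  # consumes the cursor in a single buffered pass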












  • I think this is an XY Problem, because the root issue is poor performance of the data itself, not the access methodology. If you intersect your contours with a coarse fishnet, then Dissolve (or dissolve on the attribute used for rendering), the feature count will be reduced and the spatial index made more effective. The fact that the data is in a remote data center may be a contributing factor, as may fragmentation and an uncompressed version tree, but for contours you have an additional issue.

    – Vince
    Apr 3 at 11:54
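
A hypothetical sketch of the fishnet-and-dissolve pre-processing this comment proposes, assuming a coarse fishnet polygon feature class already exists locally and the contours carry an ELEVATION attribute used for rendering:

import arcpy

arcpy.env.workspace = r"C:\Temp\work.gdb"  # placeholder local workspace

# Cut the long contour polylines against a coarse fishnet...
arcpy.Intersect_analysis(["Contours", "CoarseFishnet"], "Contours_cut")

# ...then dissolve on the attribute used for rendering, reducing the feature
# count and making the spatial index more effective.
arcpy.Dissolve_management("Contours_cut", "Contours_dissolved",
                          dissolve_field="ELEVATION")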

















1 Answer
I believe this is happening because you are using cursors for your data-fetching operation; I have encountered the same behavior in the past. I would suggest using plain SQL statements for the query. Use the pseudo Python code below. Once you get the result set, you can iterate through it and save it to CSV.




ArcSDESQLExecute.execute(sql_statement)

Since the result set comes back in plain format, you can perform the write operation much faster. See the documentation for ArcSDESQLExecute.



import arcpy

# Use a connection file to create the connection
db = r'Database Connections\SDEConnection_pointing_to_your_version.sde'
egdb_conn = arcpy.ArcSDESQLExecute(db)

table_name = 'YourFeatureClassName'
field_name = 'FieldName'

# Don't select the Shape field until it is necessary: Shape is stored in a
# BLOB field, which takes time to process.
sql = '''
SELECT * FROM {0}
'''.format(table_name)

egdb_return = egdb_conn.execute(sql)
for i in egdb_return:
    print('{}: {}'.format(*i))
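
The answer's code prints the rows; to complete its suggestion of saving the result set to CSV, here is a minimal sketch. The column names and output path are hypothetical, and the isinstance check is a defensive assumption, since execute can return True or a scalar for statements that do not produce multiple rows:

import csv

import arcpy

egdb_conn = arcpy.ArcSDESQLExecute(
    r'Database Connections\SDEConnection_pointing_to_your_version.sde')
rows = egdb_conn.execute('SELECT OBJECTID, LOAD_DATE FROM YourFeatureClassName')

# execute returns a list of rows for multi-row results; guard before writing.
if isinstance(rows, list):
    with open(r'C:\Temp\export.csv', 'wb') as out:  # Python 2: binary mode for csv
        writer = csv.writer(out)
        writer.writerow(['OBJECTID', 'LOAD_DATE'])  # header for the selected columns
        writer.writerows(rows)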





answered Apr 3 at 5:44 by Bhaskar Singh
edited Apr 3 at 12:33

  • A simple database cursor is not going to be version-aware, so your answer should mention this. The Python code is pretty basic, so giving a C# example seems confusing.

    – Vince
    Apr 3 at 11:16












  • Agree, I will update my answer.

    – Bhaskar Singh
    Apr 3 at 12:10











  • Thanks for the help. I specified my db and have table_name = 'Topography\Contour_Other' and field_name = 'load_date'; when I run the rest of the code, the sql is shown as '\nSELECT * FROM Topography\Contour_Other\n' and egdb_return = egdb_conn.execute(sql) returns this error: AttributeError: ArcSDESQLExecute: StreamPrepareSQL ArcSDE Extended error -202 [Informix][Informix ODBC Driver][Informix] An illegal character has been found in the statement.

    – Andy
    Apr 3 at 21:59











  • Are you storing your user name and password in the SDE file along with version information? If not, you must specify the username and password explicitly: "ArcSDESQLExecute(server, instance, database, user, password)". You must also set "env.workspace"; after that, when you execute a SQL statement against your table/feature class, pass only the feature class or table name, not the full path.

    – Bhaskar Singh
    2 days ago
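
A minimal sketch of the explicit-credential form this comment mentions; every value below is a placeholder, and the exact instance string depends on your DBMS (the error above suggests Informix):

import arcpy

# ArcSDESQLExecute(server, instance, database, user, password) connects
# without a .sde file; all values here are placeholders.
egdb_conn = arcpy.ArcSDESQLExecute('gisserver', 'sde:informix', '',
                                   'myuser', 'mypassword')

egdb_return = egdb_conn.execute('SELECT COUNT(*) FROM Contour_Other')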











