Improve database and results files read performance
When opening an ICM network for the first time (i.e. when there are no network files in the local "Working" folder), the WDS reads through the database file in 4 kB chunks to retrieve the requested version of the network. This read process appears to be limited to a disk queue depth of 1 and is therefore very sensitive to drive latency. In a cloud environment, drive latency can be in the single-digit milliseconds; at, say, 2.5 ms per I/O, that is only about 400 IOPS, so the read process might achieve 400 x 4 kB = 1.6 MB per second. If the database file is large, it can take many minutes for the network to open. A similar process appears to take place when producing graphs or statistical reports from results files. Would it be possible to re-engineer the database read process so that a larger queue depth or I/O size could be used, to achieve a significant improvement in read times?
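The arithmetic above can be sketched as a small model (the helper name and the 2.5 ms example latency are illustrative, not taken from any InfoWorks documentation): with each I/O waiting for the previous one to complete, throughput is simply I/Os-per-second times I/O size.

```python
def throughput_mb_per_s(io_size_kb: float, latency_ms: float, queue_depth: int = 1) -> float:
    """Latency-bound read throughput in MB/s.

    Assumes every I/O must wait its full latency before the next one
    is issued (queue depth 1 unless stated), and ignores per-byte
    transfer time, so large I/O sizes overstate real throughput.
    """
    ios_per_second = queue_depth * 1000.0 / latency_ms
    return ios_per_second * io_size_kb / 1024.0

# 4 kB reads at 2.5 ms latency, queue depth 1 -> ~1.56 MB/s
print(throughput_mb_per_s(4, 2.5))
# Same latency with 1 MB reads or queue depth 8: dramatically higher ceiling
print(throughput_mb_per_s(1024, 2.5))
print(throughput_mb_per_s(4, 2.5, queue_depth=8))
```

The model shows why either lever the request asks for (larger I/O size or deeper queue) would help: both multiply the amount of data in flight per unit of latency.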
Tom Walker commented
Fully support this idea. The InfoWorks application uses the SMB protocol to read/write to a file share. Transferring a 2 GB file, we see network speeds of 40 Mbps with InfoWorks. Simulating the same 2 GB file transfer with Windows Explorer, we see transfer speeds of up to 800 Mbps, i.e. 20 times faster. A detailed Wireshark trace shows that InfoWorks negotiates a 4 kB block length for the file transfer, whereas Windows Explorer negotiates a 1 MB block length, which is significantly more efficient for large file transfers.
Can Innovyze tune the InfoWorks application to make better use of the latest SMB protocol (version 3.1.1) for large file transfers?
Gary Richards commented
We have observed this issue with our ICM deployment on Azure.