-
My Integrated MIKE 21 HD/SW terminated prematurely with the following message in the run log file:
"Cannot flush files".
There were four sets of parallel runs and the CPU was fully used. The hard disk had adequate space for the output files. However, I did have a fifth, shorter run started; the crash above occurred while this shorter run was ongoing, and the shorter run completed successfully after the crash.
What could cause this problem and how could I fix it?
Thanks and regards,
Say Chong
-
I would love to hear about the solution as well. I have had a similar experience running MIKE 21 SW and often have to re-run.
Say Chong
-
A couple of other things that you could try include:
1) If using the directionally decoupled formulation, increase the steepness-related breaking parameter (to, say, 2-5).
2) If using the fully spectral formulation, use coupled air-sea interaction instead of uncoupled.
3) If using the fully spectral formulation, reduce the white-capping parameter (though this is not generally recommended).
Say Chong
-
Hi Adam,
Now that the 2015 version comes with multi-GPU capability, is your Titan Z problem resolved? And how is its performance, if you would like to share your experience?
Any views from others are welcome too!
Thanks and regards,
Say Chong
-
Thanks, smw13, that's a great tip. I was looking for the same tool, but for extracting a boundary from a larger 3D model for a smaller local 3D model. You saved the day. And it works the same way in MIKE 2014 too.
Say Chong
-
Thanks, Poul. Yes, I read in the Online Help that in the further MIKE 3 farfield calculations only the vertical position of the outlet is changed, according to the results of the nearfield calculation. However, I did not realize that the jet solution output file is specified in the Source menu and not under the Main Output menu. Thanks for that.
A follow-up question: would the jet solution implicitly be more accurate (hence less conservative) than specifying a standard source with an estimated exit velocity? In the latter case the nearfield dispersion, just like its farfield counterpart, would be based on the specified dispersion-coefficient approach, even with a fine resolution of the outfall configuration to resolve the flow/turbulence regime.
Would the above be a fair statement?
Say Chong
-
Has anyone tested the jet flow facility in MIKE 3 FM in a seawater recirculation study? I'm interested in the initial dilution results from a shoreline outfall and how they compare with CORMIX results. Would the use of this facility be considered a coupled nearfield/farfield heated-effluent dispersion modeling tool?
Thanks and regards,
Say Chong
-
That is great advice, Nilor. Thanks a million.
Say Chong
-
Thanks, Jonas, for the help, and apologies for the late response. I have been running with an amended version of the run setup file (to change the model start time in an add-on module such as Transport coupled to the HD), but I need to run it in DOS using a batch file.
Anyway, I will try your recommended approach and see whether that helps keep the run time manageable.
And yes, we have to launch the GPU-enabled runs locally at the workstation, as the Remote Access facility that we use does not work.
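For reference, the batch-file approach looks roughly like the sketch below. The installation path, the MzLaunch.exe launcher location, and the setup filenames are illustrative assumptions for a typical MIKE Zero installation, not my actual setup, so adjust them to your environment; whether the launcher blocks until a run completes may depend on its options, so check the MIKE Zero documentation.

```bat
@echo off
rem Sketch of a batch file that launches MIKE simulations from the command line.
rem The install path below is an assumption -- adjust to your MIKE version and bin folder.
set MZLAUNCH="C:\Program Files (x86)\DHI\2014\bin\x64\MzLaunch.exe"

rem Hypothetical setup files: the amended run (changed model start time) first,
rem then the main coupled HD/Transport run.
%MZLAUNCH% "C:\Models\setup_amended.m21fm"
%MZLAUNCH% "C:\Models\setup_main.m21fm"
```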
Say Chong
-
Thanks, Jonas.
I read the run log file for a completed GPU-enabled HD simulation, and it has the following at the beginning:
======================= Computing Environment ========================
Computer name : [BLANKed out]
Number of processors: 32
======================================================================
==================== CUDA GPU Device Information =====================
Number of GPU devices : 1
GPU device number : 1
GPU device name : Quadro K5000
CUDA Compute Capability : 3.0
selected_device_number : 1 (default)
number_of_threadsPerBlock : 128 (default)
======================================================================
and at the end:
============================ Performance =============================
Number of threads on CPU: 8
Double precision GPU calculations
======================================================================
Clearly, double-precision computation has been invoked. However, I was not able to find where to disable double-precision computation in the run setup. Can you advise? Thanks.
Say Chong