-
I know that in FEFLOW 6.0+ you can add .pow files as time series. The #number is the index of the time series. You then assign the imported .pow time series to each boundary condition.
You should be able to link them all up by assigning a few BCs to a time series, exporting it, changing them all (easily) in spreadsheet software, and importing them again.
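For reference, a .pow file is just plain text; from memory it looks roughly like the sketch below (one #index header per curve followed by time/value pairs, but the exact keywords may differ slightly, so export one from FEFLOW first and copy its layout rather than trusting mine):

#  1
0.0     10.0
5.0     10.5
10.0    11.2
END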
Adam
-
You can do this through the IFM Manager by first installing MS Visual Studio and then the FEFLOW SDK.
I'm not certain what the exact commands would be to restart the model (you might need the command line for that), but you can modify your hydraulic-head boundary conditions pre- and post-simulation, and you can export head values programmatically via a file stream.
API Index:
http://www.feflow.info/html/help/default.htm?turl=HTMLDocuments%2Fifm%2Fifm_api%2Fapi_index.htm
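For the export part, the core of it is only a few lines. Here's a rough sketch from memory -- double-check the function names and signatures against the API index above, and hook it into whatever post-simulation (or post-time-step) callback your plugin template provides:

// Rough sketch only; assumes the usual plugin headers (e.g. stdifm.h) are included
// and that pDoc is the IfmDocument handle passed to the callback.
#include <fstream>

void ExportHeads(IfmDocument pDoc)
{
    std::ofstream out("heads.txt");                  // output file name is just my example
    const int nNodes = IfmGetNumberOfNodes(pDoc);    // total number of mesh nodes
    for (int node = 0; node < nNodes; ++node)
    {
        // hydraulic head at this node at the current time
        out << node << "\t" << IfmGetResultsFlowHeadValue(pDoc, node) << "\n";
    }
}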
Adam
-
I'm not sure if there's an IFM function that gives you a textbox prompt, but as a workaround I've imported a .txt file representing my user input and stored it in vectors pre-simulation...
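Roughly like this (the file name and two-column layout are just my example; the function would be called from whatever pre-simulation callback your plugin template provides):

#include <fstream>
#include <vector>

std::vector<double> times, values;        // whatever your "user input" represents

void LoadUserInput()
{
    std::ifstream in("user_input.txt");   // example name; one "time value" pair per line
    double t, v;
    while (in >> t >> v)                  // stops at end of file or at a malformed line
    {
        times.push_back(t);
        values.push_back(v);
    }
}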
I'd also like to know if this is possible.
Adam
-
Okay, I did some research and found that BiCGSTAB is 65% parallelized... I also found that SAMG, on the other hand, gets diminishing returns for more than about 5 cores; an implementation in CUDA or OpenCL would be very beneficial for GPGPU. I believe the setup time for SAMG skewed my results; with a 1M+ node, 2M+ element model I only got 75% parallelization, which I'm sure would improve with larger models.
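(For context, and this is just my own back-of-the-envelope Amdahl's law estimate, not a vendor figure: with a parallel fraction p, the speedup on n cores is 1 / ((1 - p) + p/n). At p = 0.65 that gives about 1 / (0.35 + 0.1625), roughly 1.95x on 4 cores, and at best 1 / 0.35, about 2.86x, no matter how many cores you add; at p = 0.75 the ceiling is still only 4x. That's why extra cores stop paying off so quickly.)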
So Intel's Devil's Canyon Core i7-4790K would be the best choice to maximize single-threaded performance, since all the solvers reach their maximum theoretical speedup at around 4-5 cores, which most processors have nowadays anyway. Xeons are definitely not worth the cost; 16 threads on a CPU is overkill.
Adam
-
Hi,
I'd like to know the best way to increase performance for unsaturated-saturated transient flow and mass-transport models that contain 1M+ nodes and 2M+ elements and use shock capturing. Currently our run times go over 40 days, which makes it hard to meet deadlines in some cases. Due to stability issues it's typically not possible to use large time steps, so we end up needing approximately 30,000 time steps for a transient model.
I'm wondering what sort of setup would be ideal to maximize performance. Currently I'm eyeing Intel's Devil's Canyon Core i7-4790K, but I'm wondering whether it would even be worth going for a Xeon of that generation paired with a Xeon Phi co-processor -- although I don't think multi-threading is even possible in transient models? I'm pretty sure GPUs are out of the question.
I've run some tests on steady-state models (for the sake of time) using the BiCGSTABP solver, and multi-threading does reduce run time, but I'm wondering if that's just because steady-state simulations can be parallelized, unlike transient simulations? Obviously single-threaded performance is important, but how important is multi-threading for transient simulations? It's especially hard to pinpoint the bottleneck because I think the solvers have both parallel and serial sections...? Also, are the solvers memory-bound or CPU-bound?
Regards,
Adam
-
Use a seepage face BC to represent springs (it's a constrained form of the constant-head BC).
-
To run a steady-state flow simulation you need at least one flow boundary condition. You can run a steady-state flow simulation for unsaturated models, but the solution depends on the initial conditions (if you have a bad initial condition, you'll likely get a bad steady-state solution). I'm not sure what you mean by completely dry: no groundwater at all, or no water in the pore space (e.g. a column of dry sand)? If you set the residual saturation to 0, you'll have a completely dry model once you run a steady state.
Adam
-
Any arbitrary value equal to or lower than your lowest elevation should work, I think. Setting everything to -11 m would give you weird suction pressures -- it would be best to run a steady state with just one BC, let that set the initial conditions, and hopefully end up with a mass balance of 0? My mind doesn't work well with negative elevation heads, sorry!
-
I would recommend against putting many constant-head BCs across your model. Since the default setup for a model has no-flow boundaries all around the domain, you can get away with setting just one head BC to a value of 0 (it doesn't matter where) and setting the initial condition for hydraulic head to 0. With more BCs you might run into convergence/mass-balance issues.
I'm under the assumption you currently have no flow BCs in your model.
Adam
-
The software is the biggest difference between the Tesla, Titan, and Quadro lines (and so are the prices).
As of right now, Titan Blacks likely have the best cost/benefit for MIKE 21, but their drivers don't support remote desktop use (unlike the Tesla TCC drivers, which let you dedicate the card to specific applications). Titans also lack the ECC memory that Quadros and Teslas have, which matters for scientific computing, although I think iterative solvers should recover from any large convergence issues caused by bit flips??? An added benefit of the Titan line is that a second GPU can be packed onto one card (see the Titan Z), but for the most part Teslas are far more scalable for scientific-computing purposes.
Bottom line: get a Titan over a Quadro or Tesla for a workstation. If you're going the server route and have a lot of cash lying around, you can probably get more performance out of Teslas thanks to their scalability, assuming you can use multiple GPUs. I don't think MIKE 21 currently supports multiple GPUs, though, so that's probably not the ideal route.