Monthly Archives: April 2014

VPN from Linux Home PC to CU network (i.e access to license servers or HPCs)

To connect to the campus network you have to install openconnect, the open-source client for the Cisco AnyConnect VPN.

For Ubuntu/Debian I do:

apt-get install openconnect

or for RedHat/Fedora

yum install openconnect

Once installed, run in a terminal:

sudo openconnect --user=ax3333@coventry.ac.uk --no-dtls --authgroup='VPN' --no-cert-check anyconnect.coventry.ac.uk

where ax3333@coventry.ac.uk is your username on the CU network. You will be asked for your password (your University one); enter it.

Leave this terminal window running in the background. There are also OpenConnect plug-ins for NetworkManager, but the CLI method described here is much simpler (AND WORKS!).
Then use the CU Intranet or whatever you need (e.g. access license servers on the CU private network).
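To check that the VPN is up, you can look for the tunnel interface. This is just a quick sketch: openconnect normally creates a tun device, but the exact interface name (tun0 here) may differ on your system.

ip addr show tun0

If the interface is listed with an IP address, traffic to the CU private network should now go through the VPN.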

GUI method (will not require sudo privileges to setup):

Install OpenConnect add-on for Network Manager (https://launchpad.net/ubuntu/+source/network-manager-openconnect) for Ubuntu:

apt install network-manager-openconnect
or
yum install NetworkManager-openconnect
for RedHat/CentOS.

Then you can add a new VPN connection via the NetworkManager GUI tool. As before, the VPN server must be “anyconnect.coventry.ac.uk”, Group “VPN” (default), “accept all certificates”; use your University username/password to connect to the VPN.

Alex

EC STRUCTURAL SIMULATION HPC SUITE

by Oliver Grimes
——————

MS-WORD *.doc version of this document.

EC_FEA_Run_Script

TAKE CARE AS MOST PARAMETERS ARE CASE SENSITIVE

The purpose of this script is to enable quick solver choice and settings for FEA runs on Coventry University HPC “ZEUS”. The following document details how to use and submit the script.

Currently Registered Solvers are:

LS-DYNA, OPTISTRUCT(_NL), RADIOSS, LS-OPT, LS-TASC, DIGIMAT_DYNA or MADYMO

You can find the most up to date EC_FEA_Run_Script.sh in your $HOME/RUN_SCRIPTS

HOW TO SUBMIT THE SCRIPT

  1. Ensure the script is in your run directory (ideally a clean directory)
  2. Place your input file (solver input deck) and any include folders in this directory.
  3. Set your preferred solver and settings as detailed in the following pages.
  4. The job is then ready to submit in either of the following ways:
  • In a terminal window enter
    ./EC_FEA_Run_Script.sh
  • In a terminal window enter
    sh EC_FEA_Run_Script.sh

Note: Wildcards can be used to save time, i.e. ./EC_FE* or sh EC* etc.

  5. Any errors or warnings on submitting the job will appear in the standard output of the terminal window. It is important to read these warnings and errors, as the script may adjust the solver settings if they are incorrect.
  6. On submission an incrementally numbered SOLUTION_#### folder and SOLUTION_####.out file will be produced. These contain your job solution and output log summary (which is useful to monitor whilst the job is initialising to check it doesn’t error terminate). The SOLUTION_#### names (folder and .out log file) automatically increment if the job is resubmitted in the same directory, which can be useful when debugging a job and keeping a record of changes.
  7. Check the job is running using the “squeue -u username” command in a terminal (see the example session after the note below).

IMPORTANT: Do not change the SOLUTION_#### names until the job has finished!!
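A typical submission session looks something like the following (a sketch only: the directory, input file and username are placeholders, and the SOLUTION numbering starts from whatever is next in your run directory):

cd ~/runs/crash_v1                             # clean run directory
cp $HOME/RUN_SCRIPTS/EC_FEA_Run_Script.sh ./   # latest copy of the run script
# place your input deck and any include folders here, then edit the parameters
./EC_FEA_Run_Script.sh                         # submit the job (or: sh EC_FEA_Run_Script.sh)
squeue -u ab1234                               # confirm the job is queued/running
tail -f SOLUTION_0001.out                      # watch the output log while the job initialises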

USEFUL COMMANDS

squeue 

– View the job queue

squeue -u username

– View just your user jobs (replace username with yours)

scancel JOB_ID

– Cancel a job by JOB_ID (found in the squeue)

scancel -u username

– Cancel all your user jobs (replace username with yours)

sinfo

– View queue partitions status

sbatch

– Manually submit a SUBMISSION_SCRIPT
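For example (the username and job ID below are placeholders):

squeue -u ab1234      # list only your jobs; note the JOBID column
scancel 123456        # cancel the job with JOBID 123456
sinfo                 # check the state of the queue partitions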

RUN SCRIPT Parameters and Settings

The following section details the run script settings.

Input Filename:

Enter your filename (including the file extension) in the input parameter. It must not contain spaces, and neither must the run directory.

Eg.

input="filename.ext"

Jobname:

Entry of the jobname is not essential but is very useful. The name entered under this parameter will appear as the name of the job in the job queue.

Eg.

jobname="JOB_123"

Solver Choice:

Enter the FEA solver you require from the list of registered solvers. This parameter is case sensitive so be careful to copy it directly as in the list.

Currently Registered Solvers are:

LS-DYNA, OPTISTRUCT(_NL), RADIOSS, LS-OPT, LS-TASC, DIGIMAT_DYNA or MADYMO

Eg.

solver=OPTISTRUCT

Number of Nodes for the job:

Enter the number of compute nodes required for the job. Most nodes on ZEUS contain 2 x quad-core processors (8-core nodes), so two nodes provide 16 processors. Remember, more processors does not necessarily improve the runtime; some solvers don’t scale well. DO NOT WASTE HPC RESOURCES!! When running multiple jobs, running more jobs simultaneously on fewer processors will overall be quicker than stacking jobs sequentially on more nodes.

Eg.

NUM_NODES=3 

(must be “default” or an integer)

Note: Unless the more_nodes_password is supplied, the maximum number of nodes may be capped. If the specified number of nodes is over the cap, NUM_NODES will automatically be set to the maximum allowed and a warning will be issued.

Walltime:

The walltime is the maximum time you think the job will take to run. If unsure, be overly cautious, as the job will be terminated when the walltime is reached even if it hasn’t finished. However, when the HPC queue is busy, specifying a lengthy walltime will deprioritise the job and it will take longer to start. Try to keep this realistic.

Eg.

walltime=24:00:00

(for 24 hours)
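Putting the basic parameters together, the top of a filled-in run script might look like this (an illustrative sketch only; the filenames and values are placeholders, so check your own copy of EC_FEA_Run_Script.sh for the exact layout):

input="crash_model.key"     # solver input deck; no spaces in the name or path
jobname="CRASH_V1"          # name shown in the job queue
solver=LS-DYNA              # must match the registered solver list exactly (case sensitive)
NUM_NODES=2                 # "default" or an integer
walltime=24:00:00           # hh:mm:ss; the job is terminated when this is reached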

ADVANCED Options

The second half of EC_FEA_Run_Script.sh contains more settings for advanced users. Leave these parameters as default unless you know what they do.

For specific information on how these parameters relate to each solver see the APPENDIX section of these instructions.

#####################################################################
#####                                                           #####
#####                   ADVANCED OPTIONS                        #####
#####                                                           #####
#####################################################################

More Nodes Password:

Specify the more nodes password here if you have been given it by your supervisor. This will remove the student cap on the maximum number of nodes you can launch the job on.

Eg.

more_nodes_password="password"

 

Number of CPUs:

This option specifies the number of processes (domains/MPI processes etc.) the entire job requires. If set as default, the most efficient configuration for the specified solver will be chosen.

Eg.

NUM_NCPU=default

(must be “default” or an integer)

 

Number of Threads per process:

This option specifies the number of threads per process (specified above). For example, a Hybrid job runs multiple SMP processes using MPI; each SMP process will likely have multiple threads running on several cores. In that case this parameter specifies the number of threads/sub-processes per SMP process. Only set this if you know what you are doing, as the job will run extremely slowly/inefficiently if it is set incorrectly. If set as default, the most efficient configuration for the specified solver will be chosen.

Eg.

NUM_THREADS=default

(must be “default” or an integer)
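For example, a Hybrid job with NUM_NCPU=4 MPI processes and NUM_THREADS=4 threads per process occupies 4 x 4 = 16 cores, i.e. two 8-core nodes.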

 

Memory for the job:

Specifies the required memory for the job. The units of this input are solver dependent (could be words, megawords or megabytes). Ideally leave as default unless needed.

Eg.

memory=default

(must be “default” or an integer)

Queue Partition:

To set a specific SLURM queue partition, enter the required partition name using this parameter. Usually set as default, which chooses the most relevant queue for the solver choice (usually the “all” queue).

Eg.

queue=default

(must be “default” or an existing queue partition name)

Parallel Processing Type:

This option lets you choose the parallel processing type for the job (assuming the corresponding solver is compatible and installed). The options are default, SMP, MPP or Hybrid (using multiple SMP). Leave as default unless you know the job will be faster/more efficient using a different processing type.

Eg.

parallel=default

Precision Type:

Sets the solver to process in either single or double precision (floating point length). Only required for some simulation types. Double precision normally costs approximately 20-30% extra runtime. The default option is usually single precision.

Eg.

precision=default

(must be “default”, single or double)

 

Solver Version:

A different version of the solver can be called by setting this parameter. The specific option relates to the version folder name in /share/apps/EC_STRUCT_SIM/%solver%/%version folder%. Leave as default unless required.

Eg.

version=default

(must be “default”, previous, latest, or a specific version (e.g. 971_R5.1.1))

Other Solver Command Line Options/Arguments:

If the user wants to provide any additional solver options/arguments then this parameter will add them to the end of the default command line.

Eg.

OTHER_ARGS="-option argument=1"

DO NOT EDIT THE SCRIPT BEYOND THE ABOVE PARAMETER

 

APPENDIX – SOLVER SPECIFIC SETTINGS

The following appendix details how the above settings relate to solver-specific settings.

APPENDIX A – LS-DYNA Run Script Settings.

Location:

/share/apps/EC_STRUCT_SIM/RUN_SCRIPTS/MASTER_SCRIPT/LS-DYNA_Script

Required Modules:

######################## LOAD REQUIRED MODULES ########################
module purge
module load /share/apps/EC_STRUCT_SIM/modulefiles/LS-DYNA/v971
#######################################################################

Default Options:

###################### SET LS-DYNA RUN PARAMETERS #####################
DEFAULT_NUM_NODES=2

DEFAULT_QUEUE=all

DEFAULT_NUM_NCPU_SMP=1

DEFAULT_NUM_NCPU_MPP=8

DEFAULT_NUM_NCPU_Hybrid=2

DEFAULT_NUM_THREADS_SMP=8

DEFAULT_NUM_THREADS_MPP=1

DEFAULT_NUM_THREADS_Hybrid=4

DEFAULT_MEMORY=777777777

DEFAULT_PARALLEL="MPP"

DEFAULT_PRECISION="single"

SOLVER_VERSIONS="971_5.1.1 971_6.1.1 971_6.1.2"

DEFAULT_VERSION=971_6.1.2

LATEST_VERSION=971_6.1.2

PREVIOS_VERSION=971_5.1.1

EXECUTABLE_DIR="/share/apps/EC_STRUCT_SIM/LS-DYNA"

Solver Notes:

SMP – Runs one process (Shared memory parallel) using multiple threads.

  • Must be run on 1 x node only (will auto default to 1 otherwise)

MPP – Runs one serial process per core (Massively Parallel Processing) using HPMPI.

  • Total cores = NUM_NODES * NUM_NCPU

Hybrid – Runs parallel SMP processes (which can each run parallel threads) using HPMPI.

  • No. of SMP processes = NUM_NODES * NUM_NCPU / NUM_THREADS
  • No. of threads per SMP process = NUM_THREADS (best to use a whole physical processor = 4)
  • Total cores = No. of SMP processes * No. of threads per SMP process
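As a worked example, reading NUM_NCPU here as the cores per node (as the MPP formula above uses it): with NUM_NODES=2, NUM_NCPU=8 and NUM_THREADS=4,

No. of SMP processes = 2 * 8 / 4 = 4
Threads per SMP process = 4
Total cores = 4 * 4 = 16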

APPENDIX B – LS-OPT Run Script Settings.

Location:

/share/apps/EC_STRUCT_SIM/RUN_SCRIPTS/MASTER_SCRIPT/LS-OPT_Script

Required Modules:

######################## LOAD REQUIRED MODULES ########################
module purge
module load /share/apps/EC_STRUCT_SIM/modulefiles/LS-DYNA/v971
#######################################################################

Default Options:

####################### SET LS-TASC RUN PARAMETERS #######################
LS_OPT_PATH=/share/apps/EC_STRUCT_SIM/LS-OPT/v5.0_84950
####################### SET LS-DYNA RUN PARAMETERS ####################### 

input=DynaOpt.inp

jobname=$(basename `pwd`)

DEFAULT_NUM_NODES=1

DEFAULT_QUEUE=all

DEFAULT_NUM_NCPU_SMP=1

DEFAULT_NUM_NCPU_MPP=8

DEFAULT_NUM_NCPU_Hybrid=2

DEFAULT_NUM_THREADS_SMP=8

DEFAULT_NUM_THREADS_MPP=1

DEFAULT_NUM_THREADS_Hybrid=4

DEFAULT_MEMORY=777777777

DEFAULT_PARALLEL="MPP"

DEFAULT_PRECISION="single"

SOLVER_VERSIONS="971_5.1.1 971_6.1.1 971_6.1.2"

DEFAULT_VERSION=971_6.1.2

LATEST_VERSION=971_6.1.2

PREVIOS_VERSION=971_5.1.1

EXECUTABLE_DIR="/share/apps/EC_STRUCT_SIM/LS-DYNA"

APPENDIX C – LS-TASC Run Script Settings.

Location:

/share/apps/EC_STRUCT_SIM/RUN_SCRIPTS/MASTER_SCRIPT/LS-TASC_Script

Required Modules:

######################## LOAD REQUIRED MODULES ########################
module purge
module load /share/apps/EC_STRUCT_SIM/modulefiles/LS-DYNA/v971
#######################################################################

Default Options:

####################### SET LS-TASC RUN PARAMETERS #######################
LS_OPT_PATH=/share/apps/EC_STRUCT_SIM/LS-TASC/lstasc_21_86715_x64_rhel6
####################### SET LS-DYNA RUN PARAMETERS #######################

input=DynaOpt.inp

jobname=$(basename `pwd`)

DEFAULT_NUM_NODES=1

DEFAULT_QUEUE=all

DEFAULT_NUM_NCPU_SMP=1

DEFAULT_NUM_NCPU_MPP=8

DEFAULT_NUM_NCPU_Hybrid=2

DEFAULT_NUM_THREADS_SMP=8

DEFAULT_NUM_THREADS_MPP=1

DEFAULT_NUM_THREADS_Hybrid=4

DEFAULT_MEMORY=777777777

DEFAULT_PARALLEL="MPP"

DEFAULT_PRECISION="single"

SOLVER_VERSIONS="971_5.1.1 971_6.1.1 971_6.1.2"

DEFAULT_VERSION=971_6.1.2

LATEST_VERSION=971_6.1.2

PREVIOS_VERSION=971_5.1.1

EXECUTABLE_DIR="/share/apps/EC_STRUCT_SIM/LS-DYNA"

APPENDIX D – DIGIMAT_DYNA Run Script Settings.

Location:

/share/apps/EC_STRUCT_SIM/RUN_SCRIPTS/MASTER_SCRIPT/DIGIMAT_DYNA_Script

Required Modules:

######################## LOAD REQUIRED MODULES ########################
module purge
module load /share/apps/EC_STRUCT_SIM/modulefiles/LS-DYNA/v971
module load /share/apps/EC_STRUCT_SIM/modulefiles/digimat/v5.0.1
#######################################################################

Default Options:

 

####################### SET DIGIMAT RUN PARAMETERS #######################

DEFAULT_DIGI_DYNA_VERSION=V611

LATEST_DIGI_DYNA_VERSION=V611

PREVIOS_DIGI_DYNA_VERSION=V511

DIGIMAT2CAE="/share/apps/EC_STRUCT_SIM/DIGIMAT/v5.0.1/Digimat/Digimat2CAE/5.0.1"

DIGIMAT_BIN="/share/apps/EC_STRUCT_SIM/DIGIMAT/v5.0.1/Digimat/Digimat/5.0.1/exec"

####################### SET LS-DYNA RUN PARAMETERS #######################

DEFAULT_NUM_NODES=2

DEFAULT_QUEUE=all

DEFAULT_NUM_NCPU_SMP=1

DEFAULT_NUM_NCPU_MPP=8

DEFAULT_NUM_NCPU_Hybrid=2

DEFAULT_NUM_THREADS_SMP=8

DEFAULT_NUM_THREADS_MPP=1

DEFAULT_NUM_THREADS_Hybrid=4

DEFAULT_MEMORY=777777777

DEFAULT_PARALLEL="MPP"

DEFAULT_PRECISION="single"

SOLVER_VERSIONS="971_5.1.1 971_6.1.1"

DEFAULT_VERSION=971_6.1.1

LATEST_VERSION=971_6.1.1

PREVIOS_VERSION=971_5.1.1

EXECUTABLE_DIR="/share/apps/EC_STRUCT_SIM/LS-DYNA"

APPENDIX E – MADYMO Run Script Settings.

Location:

/share/apps/EC_STRUCT_SIM/RUN_SCRIPTS/MASTER_SCRIPT/MADYMO_Script

Required Modules:

######################## LOAD REQUIRED MODULES ########################
module purge
module load /share/apps/EC_STRUCT_SIM/modulefiles/madymo/R7.5
#######################################################################

Default Options:

 

######################## SET MADYMO RUN PARAMETERS #######################

DEFAULT_NUM_NODES=2

DEFAULT_QUEUE=all

DEFAULT_NUM_NCPU_SMP=8

DEFAULT_NUM_NCPU_MPP=8

DEFAULT_NUM_NCPU_Hybrid=2

DEFAULT_NUM_THREADS_SMP=8

DEFAULT_NUM_THREADS_MPP=1

DEFAULT_NUM_THREADS_Hybrid=4

DEFAULT_MEMORY=777777777

DEFAULT_PARALLEL="MPP"

DEFAULT_PRECISION="single"

SOLVER_VERSIONS="madymo_75"

DEFAULT_VERSION=madymo_75

LATEST_VERSION=madymo_75

PREVIOS_VERSION=madymo_75

EXECUTABLE_DIR="/share/apps/EC_STRUCT_SIM/MADYMO"

APPENDIX F – OPTISTRUCT Run Script Settings.

Location:

/share/apps/EC_STRUCT_SIM/RUN_SCRIPTS/MASTER_SCRIPT/OPTISTRUCT_Script

Required Modules:

######################## LOAD REQUIRED MODULES ########################
module purge
module load /share/apps/EC_STRUCT_SIM/modulefiles/optistruct/opt_12.0.1
#######################################################################

Default Options:

 

##################### SET OPTISTRUCT RUN PARAMETERS #####################

DEFAULT_NUM_NODES=1

DEFAULT_QUEUE=all

DEFAULT_NUM_NCPU_SMP=1

DEFAULT_NUM_NCPU_MPP=4

DEFAULT_NUM_NCPU_Hybrid=4

DEFAULT_NUM_THREADS_SMP=4

DEFAULT_NUM_THREADS_MPP=4

DEFAULT_NUM_THREADS_Hybrid=4

DEFAULT_MEMORY=

DEFAULT_PARALLEL="SMP"

DEFAULT_PRECISION="double"

SOLVER_VERSIONS="12.0 12.0.210"

DEFAULT_VERSION=12.0.210

LATEST_VERSION=12.0.210

PREVIOS_VERSION=12.0

SOLVER=$solver

EXECUTABLE_DIR="/share/apps/EC_STRUCT_SIM/HYPERWORKS/v12.0.1/altair/scripts"

TMP_DIR="/tmp"

Solver Notes:

SMP – Runs one process (Shared memory parallel) using multiple threads (1 x node max)

  • Must be run on 1 x node only (will auto default to 1 otherwise)

MPP – Runs the same as SMP mode (there is no MPP mode in OPTISTRUCT)

Hybrid – Natively (to Altair/OPTISTRUCT) known as SPMD (Single Program Multi Domain)

  • Minimum of 3 processes (1 x manager, 2 x run jobs)
  • Runs multiple SMP processes (best with 4 threads/cores per SMP process).
  • NUM_NODES = (No. of loadcases + 1 manager process) / 2 (two processes per node)

Parallel run speed-up with OPTISTRUCT can only be gained if the job has multiple loadcases, i.e. each loadcase is run on an SMP process in parallel.
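As a worked example, reading the node formula above as NUM_NODES = (No. of loadcases + 1) / 2: a model with 5 loadcases needs 5 + 1 = 6 processes (one per loadcase plus the manager), i.e. 6 / 2 = 3 nodes at two processes per node.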

APPENDIX G – RADIOSS Run Script Settings.

Location:

/share/apps/EC_STRUCT_SIM/RUN_SCRIPTS/MASTER_SCRIPT/RADIOSS_Script

Required Modules:

######################## LOAD REQUIRED MODULES ########################
module purge
module load /share/apps/EC_STRUCT_SIM/modulefiles/radioss/rad_12.0.1
#######################################################################

Default Options:

 

####################### SET RADIOSS RUN PARAMETERS ####################### 

DEFAULT_NUM_NODES=1

DEFAULT_QUEUE=all

DEFAULT_NUM_NCPU_SMP=1

DEFAULT_NUM_NCPU_MPP=4

DEFAULT_NUM_NCPU_Hybrid=4

DEFAULT_NUM_THREADS_SMP=4

DEFAULT_NUM_THREADS_MPP=4

DEFAULT_NUM_THREADS_Hybrid=4

DEFAULT_MEMORY=

DEFAULT_PARALLEL="SMP"

DEFAULT_PRECISION="double"

SOLVER_VERSIONS="12.0 12.0.210"

DEFAULT_VERSION=12.0.210

LATEST_VERSION=12.0.210

PREVIOS_VERSION=12.0

SOLVER=$solver

EXECUTABLE_DIR="/share/apps/EC_STRUCT_SIM/HYPERWORKS/v12.0.1/altair/scripts"

TMP_DIR="/tmp"

Using StarCCM+ on zeus from your Desktop

This guide describes how to submit a StarCCM+ job on the ZEUS HPC and connect to the StarCCM+ server from your local Windows installation of StarCCM+.

  1. Assuming you have StarCCM+ installed on your Windows PC, say I have “STAR-CCM+ 8.04.007-R8 for Windows 64” installed from ld-fs-01ECSoftwareStudent (network share)
  2. Assuming you have a full installation of the SSH client PuTTY on your Windows PC; “full” means you have not just the “putty.exe” executable, but also “plink.exe” and “puttygen.exe” for command-line connections and for generating an RSA key for password-less connection to the cluster. If you don’t have a full install of PuTTY: http://the.earth.li/~sgtatham/putty/latest/x86/putty-0.63-installer.exe
  3. First let’s create ssh keys for PuTTY (this step is not compulsory; if you don’t do it you can still enter passwords when connecting in steps 7-8): (a) launch PuTTYgen, (b) generate a key, (c) save the public and private keys, say in your home folder as “C:\Users\ag7634\ssh\public.pub” and “C:\Users\ag7634\ssh\private.ppk”. Copy the OpenSSH key string from the top of the PuTTYgen window and insert it into the “authorized_keys” file on ZEUS in your home folder, “/home/ag7634/.ssh/authorized_keys”. See, e.g., the procedure described here in more detail, or just Google for “passwordless putty connection”.
  4. Now we are ready to submit the StarCCM+ job on the HPC:
    • Transfer your simulation files to your home folder on zeus; say I copied my test *.sim file into “~/test/casesim.sim” on zeus (where ~/ stands for your home folder, i.e. /home/ag7634). I use WinSCP for copying files to/from the HPC. Another possibility for copying files is discussed here.
    • Log in to the HPC with PuTTY (hopefully you can now log in without entering a password, using the key – see step 3).
    • In the PuTTY console window change to your working folder; I do “cd test” to get to my sim file.
    • If you haven’t yet set up a job submission script on ZEUS, use the template from here: /share/slurm_scripts/starccm_submit.slurm, i.e. to copy it to your working folder do: “cp /share/slurm_scripts/starccm_submit.slurm ./” (a sketch of what such a script might contain is shown after this list).
    • Change the details in this SLURM submission script — open it in a text editor (or edit it from a WinSCP session). Save it.
    • Load the starccm+ module of the needed version, i.e. I do: “module load starccm/8.04.007”. To list all available modules do “module avail”.
    • Submit the job — from your working folder do: “sbatch starccm_submit.slurm”
    • This will start the starccm+ server process on one of the compute nodes. The list of all nodes you reserved is in “hostfile.txt” in your working folder. Normally the starccm+ server process runs on the 1st node in the list, on port 47827.
  5. Now we have to connect to this starccm+ server process in order to start iterations or pre/post processing. For this we connect, from the StarCCM+ installation on your local PC, to the target node on which the starccm+ server process is running.
  6. Say the first node in the “hostfile.txt” list of reserved nodes is zeus81. Then there is a good chance that the server process is running on this node.
  7. To establish the tunnel to this node from your Desktop PC we create a small script file on your Windows PC. Open Notepad and put this in it (change the path to plink.exe if yours is different):
    start \\coventry.ac.uk\CSV\StudentsShared\EC\STUDENT\HPC\putty\plink.exe -ssh -2 -l %USERNAME% -noagent -i %USERPROFILE%\ssh\private.ppk -N -L 47827:%1:47827 zeus
    You can skip the “-i %USERPROFILE%\ssh\private.ppk” part if you did not bother to set up an ssh key and password-less connection as described above.

    Note that the path above points to plink.exe from a PuTTY installation; check that the file is there, or adjust the path to your own installation if not.

    There is also the path to the private ssh-key file we generated with puttygen.exe in step 3.

    %USERPROFILE% points Windows to your home folder, e.g. C:\Users\ag7634. “C:\Users\ag7634\ssh\private.ppk” is where the private key file was saved.

    Change also “ag7634” to your username on the HPC.

    Save this text file as, say, “tunnel.cmd” in a location “visible” to Windows.

    I tend to store such files in a “bin” directory in my home folder (create it if needed), i.e. in “C:\Users\ag7634\bin”. To make “bin” visible from any location I added it to the PATH variable: right-click on My Computer -> Properties -> Advanced system settings -> Environment Variables -> New user variable -> name: PATH, value: %USERPROFILE%\bin

  8. The result that we want after saving “tunnel.cmd” is this: press WinKey+R (or Run from the Start menu), type there

    tunnel zeus81

    and hit Enter. This will open a command window with a tunnel to the zeus81 node.

    Leave this window running! This is your tunnel to the node!

  9. Now start StarCCM+ on your Desktop PC. Menu: File -> Connect to Server. Host: localhost, Port: 47827 (default). DO NOT CHECK “connect through SSH tunnel”. DO NOT CLICK “Scan Host”. These options will not work. Just click OK. You should now see your sim filename in the list of StarCCM+ servers on the left, which you can click and then do whatever you intended to do. This StarCCM+ session runs on your Desktop Windows PC, while the processes are executed on the ZEUS HPC.
  10. If you just close this StarCCM+ session with “x”, it may terminate the server process. So to disconnect (say you launched iterations and now need to wait 30 hours), do File -> Disconnect first and then close the StarCCM+ window.
  11. Later you may connect again to the same StarCCM+ server to check progress. You can close the PuTTY tunnel command window; this will terminate the tunnel. Later you can again do step 8, i.e. launch the tunnel to the same node, and open a local StarCCM+ session.
  12. Don’t forget to terminate the job on the HPC (your starccm+ server process in the queue) when you have finished.
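For reference, here is a minimal sketch of the kind of content a StarCCM+ submission script such as /share/slurm_scripts/starccm_submit.slurm might contain. The node counts, walltime, module version and file names are illustrative assumptions only; always start from the template actually provided on ZEUS, which has the correct settings for the cluster.

#!/bin/bash
#SBATCH --job-name=casesim          # name shown in squeue (illustrative)
#SBATCH --nodes=2                   # number of compute nodes to reserve
#SBATCH --ntasks-per-node=8         # processes per node (assumes 8-core nodes)
#SBATCH --time=30:00:00             # walltime

module load starccm/8.04.007        # load the required StarCCM+ version

# Record the reserved nodes so you know where the server is running (see step 4)
srun hostname > hostfile.txt

# Start the StarCCM+ server on the reserved nodes, listening on the default port 47827
starccm+ -server -machinefile hostfile.txt -np $SLURM_NTASKS casesim.sim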

 

Alex.

 

 
