Submitting TLVMieS Jobs

Before submitting jobs, the tlvmie-simulation software package must be installed somewhere on your (Linux) machine or FSL supercomputer account. Download the tlvmie-simulation.tar.gz file and place it somewhere convenient, such as your home directory, then extract it via:

tar -xzvf tlvmie-simulation.tar.gz

NOTE: This package REQUIRES the Python package bokeh (https://pypi.org/project/bokeh/) for generating interactive plots of the resulting viscosity prediction. It can be installed via pip3 install --user bokeh. On FSL, this must be preceded by loading the python3 module via module load python/3.8. The Python modules provided by FSL may change over time, after which module load python/3.X would be used with the current version.
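On FSL, the full sequence is:

module load python/3.8
pip3 install --user bokeh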

Jobs can be submitted via the following syntax:

tlvmie-simulation/run.sh /path/to/jobfile.sjob

The script starts by verifying that the parameters in the *.sjob file are valid, then prepares the directory structure for the simulation replicates in the specified compute_dir. Jobs are then queued with hardware requirements according to the hwtype option.
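The full set of *.sjob parameters is defined by the package itself, but the options referenced in this guide (a job name, compute_dir, hwtype, and the NPT/NVT replicate counts) suggest a jobfile along the following lines. The key names and layout here are illustrative assumptions, not the verified format:

jobname     = ...    # names the output directory (compute_dir/jobname/)
compute_dir = ...    # where the replicate directories are prepared
hwtype      = ...    # hardware requirements for the queued jobs
# ...plus the NPT and NVT replicate counts described in the job list below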

Simulations proceed without any user intervention, and upon completion results are written to compute_dir/jobname/EMD/csvdata/. Users can monitor the progress of these jobs with SLURM's squeue command, as shown below. Interpretation of the results is discussed in the next section.
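For example, to list only your own queued and running jobs:

squeue -u $USER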

The run.sh script submits jobs as follows:

  1. setup_NPT_* - These jobs prepare the LAMMPS input files, using the packmol and moltemplate software packages to build the simulation box. (very fast)
  2. NPT_* - These jobs run the LAMMPS simulation of the NPT (density) portion. There will be N of these jobs, where N is the number of NPT replicates specified in the .sjob file. (slower)
  3. parse_NPT_* - This job parses the density prediction from the NPT replicates and prepares the NVT (viscosity portion) simulation box at that density. (very fast)
  4. EMD-TD_* - These jobs run the LAMMPS simulation of the NVT ensemble (the viscosity portion). There will be N of these jobs, where N is the number of NVT replicates specified in the .sjob file. (slowest)
  5. parse_EMD_* - This job determines the memory and other resources required for the data processing in the next step and queues those jobs. (very fast)
  6. pbin-ac-int_* - These jobs first use the DJC_pdata2bin software package to compress the large plain-text pressure.data file into a binary representation (pressure.bin), saving space and speeding up the data-processing step. They then use the DJC_autocorr_threads software package to calculate the autocorrelation functions from the simulation's pressure.bin file. Results are written to an autocorr*.csv file, which the TD_integrate.py script integrates to produce the Green-Kubo integral, stored in an integral*.csv file (see the first sketch after this list). There will be N of these jobs, where N is the number of NVT replicates specified in the .sjob file. (slow)
  7. sort-plot* - This job waits until all pbin-ac-int jobs have completed, then pulls the integral*.csv from every simulation replicate, bootstraps the uncertainty across these replicates (see the second sketch after this list), and writes a final determination to results.txt, visc.html, and hist.png. The interpretation of these results is discussed in a following section. (slow)
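
For intuition about the pbin-ac-int step: the Green-Kubo relation estimates shear viscosity as the running time-integral of the pressure-tensor autocorrelation function, eta(t) = V/(kB*T) * integral from 0 to t of <P_ab(0) P_ab(t')> dt'. The Python sketch below shows that integration step under the assumption that autocorr*.csv holds two columns (lag time, ACF); it is illustrative only, not the actual TD_integrate.py.

import numpy as np

def green_kubo_running_integral(time, acf, volume, temperature):
    """Running Green-Kubo viscosity estimate from a pressure-tensor ACF.
    Assumes consistent units; unit conversion is omitted in this sketch."""
    time = np.asarray(time, dtype=float)
    acf = np.asarray(acf, dtype=float)
    kB = 1.380649e-23  # Boltzmann constant, J/K
    # Cumulative trapezoidal integration of the ACF over lag time
    running = np.concatenate(([0.0],
        np.cumsum(0.5 * (acf[1:] + acf[:-1]) * np.diff(time))))
    return (volume / (kB * temperature)) * running

# Hypothetical usage, assuming a two-column (time, acf) CSV:
# time, acf = np.loadtxt("autocorr_1.csv", delimiter=",", unpack=True)
# eta_running = green_kubo_running_integral(time, acf, V, T)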
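
The bootstrapping in the sort-plot step can likewise be pictured as resampling the per-replicate viscosity estimates with replacement and taking the spread of the resampled means as the uncertainty. A minimal sketch, assuming one plateau value has been extracted from each replicate's integral*.csv (again, not the package's actual implementation):

import numpy as np

def bootstrap_uncertainty(replicate_values, n_boot=10000, seed=0):
    """Bootstrap the mean and its standard error across replicates."""
    rng = np.random.default_rng(seed)
    vals = np.asarray(replicate_values, dtype=float)
    # Resample the replicates with replacement and average each resample
    means = rng.choice(vals, size=(n_boot, vals.size), replace=True).mean(axis=1)
    return vals.mean(), means.std(ddof=1)

# Hypothetical usage with one viscosity estimate per NVT replicate:
# mean_visc, err = bootstrap_uncertainty([3.1e-4, 2.9e-4, 3.3e-4])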