The workflow is: you are working on an R script, and every line from the top of the file (or from the `first.line` parameter) to the current line (or to `last.line`) is sent to the cluster to run a full-blown model on the HPC. Then adjust the R script and run this function after the last line you want to send to the cluster (or anywhere outside the range from `first.line` to `last.line`).

slurm(
  r.file = "job.r",
  sh.file = paste0(basename(r.file), ".sh"),
  job.name = as.character(r.file),
  time = "1:00:00",
  ntask = 1,
  partition = "fuchs",
  nodes = 1,
  home = "/home/fuchs/fias/knguyen/",
  conda.env = "kinh",
  working.dir = ".",
  submit = FALSE,
  monitor = FALSE,
  user = "knguyen",
  iteration = 1,
  first.line = 1,
  last.line = Inf,
  shift.line = 2
)
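
The workflow above can be sketched as a minimal script. This is a hypothetical example; the model code is illustrative, and only the lines before the `slurm()` call are shipped to the cluster because the call itself sits after `last.line`:

```r
# job.r -- everything above the slurm() call is sent to the cluster
fit <- lm(mpg ~ wt, data = mtcars)   # stand-in for a heavy model
saveRDS(fit, "fit.rds")

# Placed after the last line to submit; adjust arguments to your cluster.
slurm(r.file = "job.r", submit = FALSE)
```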

Details

Adjust the SLURM settings (`partition`, `time`, `nodes`, `ntask`) and the conda environment (`conda.env`, `home`) to match your own cluster before submitting.
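
For orientation, a batch file produced from the arguments shown above might look roughly like the following. This is a hedged sketch, not the function's exact output; the directives are standard SLURM options, and the name of the extracted R script is hypothetical:

```sh
#!/bin/bash
#SBATCH --job-name=job.r
#SBATCH --partition=fuchs
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1

# Activate the conda environment, then run the extracted R code
source activate kinh
Rscript job_extract.r   # hypothetical name for the extracted line range
```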