TerraSlave and TerraDispatcher

Applies to TerraScan and TerraSlave v020.006 and later. A TerraSlave User Guide is available from Terrasolid. View a short tutorial on using TerraSlave and TerraDispatcher.


  • Windows executable for batch processing TerraScan, TerraPhoto and TerraMatch tasks
  • An alternative method for running processes that TerraScan/TerraPhoto/TerraMatch can run themselves
  • Advantages:
    • Run batch processes without tying up CAD software
    • Run multiple instances on one computer to speed up a task
    • Distribute processing to multiple computers to speed up a task


  • Master computer is the computer where a human user initiates a task with TerraScan, TerraPhoto or TerraMatch.
  • Slave computer is a computer that processes tasks initiated from another computer. A slave computer can run without a human operator, and a master computer can also act as a slave computer for tasks initiated from other computers.
  • Working segment is a set of data to be processed. A working segment can be, for example:
    • one TerraScan project block
    • one TerraPhoto image
  • Distributed processing is computation involving multiple computers: one computer acts as the master computer and the others act as slave computers.
  • Single computer processing is computation taking place on one computer only. It can still run multiple program instances concurrently on that computer.

Technical Requirements for Distributed Processing

  • 64-bit Windows operating system – Windows 10 or later
  • Launching computer must:
    • Have \terra64 folder shared for read access
    • Have data folder shared for read/write (source data, result data, macro)
    • Have read/write access to \terra64\tslave folder of participating computers
  • Participating computers must:
    • Have \terra64\tslave folder shared for read/write
    • Have read/write access to \terra64\tslave folder of launching computer
    • Have read/write access to data
  • The software automatically converts local paths such as “e:\jyvaskyla\laser01” to UNC paths such as “\\TYOASEMA\jyvaskyla\laser01”
  • Use UNC paths yourself to verify that everything is properly shared
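Although the conversion is automatic, it can help to see what such a rewrite does. Below is a minimal Python sketch of the local-to-UNC conversion described above; the `to_unc` function and the share mapping are illustrative stand-ins, not TerraScan's actual implementation.

```python
def to_unc(local_path, host, shares):
    """Rewrite a local path using the first matching shared folder.

    `shares` maps a local root folder to its Windows share name.
    This mapping is hypothetical; the software derives the real one
    from the shares configured on the launching computer.
    """
    for local_root, share_name in shares.items():
        if local_path.lower().startswith(local_root.lower()):
            rest = local_path[len(local_root):].lstrip("\\")
            unc = f"\\\\{host}\\{share_name}"
            return f"{unc}\\{rest}" if rest else unc
    return local_path  # no share covers this path, leave it unchanged

# Example using the paths from the text (share layout assumed):
shares = {"e:\\jyvaskyla": "jyvaskyla"}  # e:\jyvaskyla shared as "jyvaskyla"
print(to_unc("e:\\jyvaskyla\\laser01", "TYOASEMA", shares))
# -> \\TYOASEMA\jyvaskyla\laser01
```

If a path prints unchanged, no share covers it, which is exactly the situation the manual UNC check above is meant to catch.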

Note: For more complex dispatched and distributed processing, with more user control, tracking, and reporting, please see the GeoCue Workflow Management System.

Software Components Involved

  • TerraScan / TerraPhoto / TerraMatch launches a task to be processed by TerraSlave
    • Run a macro on each project block
    • Compute feature points for every image
  • TerraDispatcher dispatches working segments to participating computers
    • For master computer: TerraDispatcher starts TerraSlave with a segment assigned
    • For slave computer: TerraDispatcher writes segment assignment as a file for TerraSlaveService
  • TerraSlaveService is a service running on a slave computer. It checks regularly for segment assignment files appearing in \terra64\queue. If found, it launches TerraSlave. TerraSlaveService has no user interface.
  • TerraSlave is an executable which processes one working segment at a time. TerraSlave.exe has no user interface.
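The queue-polling behaviour described for TerraSlaveService can be sketched roughly as follows. The function name and the launch callback are hypothetical, and the real service also manages progress and report files; this only illustrates the check-folder-then-launch pattern.

```python
import os

def poll_queue_once(queue_dir, launch):
    """Process every segment-assignment file currently in the queue folder.

    `launch` stands in for starting TerraSlave on the assignment; here it
    is just a callable receiving the assignment file path.
    """
    handled = []
    for name in sorted(os.listdir(queue_dir)):
        path = os.path.join(queue_dir, name)
        if os.path.isfile(path):
            launch(path)        # real service would start TerraSlave here
            os.remove(path)     # assignment has been consumed
            handled.append(name)
    return handled

# A real service repeats this check at a regular interval, e.g.:
#   while True:
#       poll_queue_once(r"c:\terra64\queue", launch_terraslave)
#       time.sleep(5)
```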

TerraSlave Licensing

  • A separate TerraSlave license must be purchased for computers with no TerraScan/TerraPhoto/TerraMatch license
  • Computer with TerraScan license can run TerraScan tasks in TerraSlave
  • Computer with TerraPhoto license can run TerraPhoto tasks in TerraSlave
  • Computer with TerraMatch license can run TerraMatch tasks in TerraSlave

Computer Hardware Recommendations

  • TerraSlave has the same hardware requirements as TerraScan: 32 GB RAM
    • 32 GB is enough for running 1-2 instances of TerraSlave with a 100 million point block size
  • You can make good use of high core count processors with TerraSlave
    • Highly threaded task: one TerraSlave instance can use many cores
    • Single threaded task: you can dispatch many instances of TerraSlave
  • To run many instances of TerraSlave, you should have more RAM
    • Add 8 – 16 GB RAM for each additional instance when using 100 million point block size
  • For distributed processing: make sure computers have fast connection to the data
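The RAM guidance above can be turned into a rough estimator. This sketch assumes the 32 GB baseline covers the first two instances and uses 12 GB (the midpoint of the stated 8-16 GB range) per additional instance at a 100 million point block size; the function is illustrative, not an official sizing formula.

```python
def ram_estimate_gb(instances, per_extra_gb=12):
    """Rough RAM estimate for running N TerraSlave instances.

    Assumes the 32 GB baseline covers the first two instances and that
    each further instance adds 8-16 GB (12 GB midpoint used here).
    """
    extra = max(0, instances - 2)
    return 32 + extra * per_extra_gb

print(ram_estimate_gb(2))  # 32
print(ram_estimate_gb(4))  # 56
```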

Master Computer Software Setup

  • Run normal TerraScan/TerraPhoto/TerraMatch setup
[Screenshot: Terra Setup]
  • This installs TerraDispatcher and TerraSlave as part of the TerraScan setup
  • There is no need to install TerraSlave separately unless you want to use the same computer as a slave computer (i.e., to run tasks initiated from other computers)

Slave Computer Software Setup

  • Run the TerraSlave setup on every computer that will act as a slave computer
  • This installs TerraSlave and TerraSlaveService
  • TerraSlaveService needs a user name and password for an account that has appropriate read/write access to the shared folders
[Screenshot: TerraSlave Setup v020.006 and later]

If TerraSlaveService does not install correctly using setup.exe, install the service manually.

TerraSlave Folder Structure

  • TerraSlave relies on a fixed folder structure under the installation folder (default c:\terra64)
[Screenshot: New TSlave folder structure]

Slave Computer List in TerraScan/Photo

  • The user defines a list of computers which participate in computation tasks
  • The default list is just the local computer
[Screenshot: TerraScan -> Settings -> Slave Computers]

TerraDispatcher User Interface

  • TerraDispatcher user interface lets you:
    • Pause or Resume automatic dispatching
    • Abort task
    • Move computers up or down in priority order
  • When automatic dispatching is paused, you can:
    • Remove working segments from the list
    • Reset working segments to pending status
    • Dispatch working segments to computer selected in lower list
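The segment operations listed above imply a simple state model: a working segment is pending until it is dispatched to a computer, and it can be reset back to pending or removed. A toy sketch of that model (class and method names are illustrative, not TerraDispatcher's API):

```python
class SegmentList:
    """Toy model of working-segment states in a dispatch queue."""

    def __init__(self, names):
        # status is "pending" or the name of the computer it was sent to
        self.status = {name: "pending" for name in names}

    def dispatch(self, name, computer):
        """Assign a pending segment to a computer."""
        if self.status.get(name) == "pending":
            self.status[name] = computer
            return True
        return False  # already dispatched or removed

    def reset(self, name):
        """Reset a segment back to pending status."""
        if name in self.status:
            self.status[name] = "pending"

    def remove(self, name):
        """Remove a segment from the list."""
        self.status.pop(name, None)

segs = SegmentList(["block_001", "block_002"])
segs.dispatch("block_001", "SLAVE-PC1")
segs.reset("block_001")   # back to pending, can be dispatched again
```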

TerraSlave Preferences File

  • TerraSlave has no user interface
  • You can modify user preferences by editing c:\terra64\tslave\tslave.upf text file
Application     TerraSlave – do not modify
LicDir          Location of license files
LicUseServer    If non-zero, requests a license from the license server*
LicServer       License server computer name
LicAccess       Access code for license server
RunTasks        Historical – do not modify
MaxThreads      Maximum number of threads; a value of 0 or -1 means all processor cores

*Note: TerraSlave license self-checkout is not currently working when tslave.exe exists in the tslave folder.
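The MaxThreads convention (0 or -1 meaning all processor cores) can be expressed as a small helper; the function name is hypothetical and only illustrates how such a value would be interpreted.

```python
import os

def resolve_max_threads(setting, cores=None):
    """Interpret a MaxThreads preference value: 0 or -1 means
    use all processor cores, any other value is taken literally."""
    if cores is None:
        cores = os.cpu_count() or 1  # fall back to 1 if undetectable
    return cores if setting in (0, -1) else setting

print(resolve_max_threads(4, cores=8))   # 4
print(resolve_max_threads(0, cores=8))   # 8  (all cores)
print(resolve_max_threads(-1, cores=8))  # 8  (all cores)
```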

If you don’t have a tslave.upf from the old TerraSlave installation, copy the contents of the following to a text file, adjust to fit your needs, and save as tslave.upf in your %terra%\tslave folder.

[Terra preferences]

Task Manager Instructions

  • Use Task Manager to see how much processor power and how much memory is in use

Note: If the processor has hyperthreading on, Task Manager shows about 50% processor usage even when almost all of the processing power is in use

  • Never let memory usage reach 100%
  • Aim for about 50% processor usage for single threaded tasks
    • Single threaded tasks (ground, hard surface, height from ground) typically access memory non-linearly and hyperthreading gives no speed improvement
  • Aim for 50% – 100% processor usage for multithreaded tasks
    • Nicely multithreaded tasks (classify surface points, compute normal vectors) have a lot of linear memory access and hyperthreading speeds up the task
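The reason roughly 50% reads as "fully busy" for single-threaded work is that Task Manager reports usage over logical cores, and hyperthreading doubles the logical core count. A small illustration, assuming two logical cores per physical core:

```python
def task_manager_pct(busy_threads, physical_cores, hyperthreading=True):
    """Percentage Task Manager would show: busy threads divided by
    logical cores (twice the physical cores when hyperthreading is on)."""
    logical = physical_cores * (2 if hyperthreading else 1)
    return 100.0 * min(busy_threads, logical) / logical

# 8 single-threaded TerraSlave instances on an 8-core machine:
# every physical core is busy, yet Task Manager reads only 50%.
print(task_manager_pct(8, 8))         # 50.0
print(task_manager_pct(8, 8, False))  # 100.0
```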

How Number of Instances Affects Execution Time

[Chart: Execution time vs. number of instances]


6 thoughts on “TerraSlave and TerraDispatcher”

  1. Saxon says:

    Hi Don,
    I wrote to Terrasolid in November 2020 about this issue and their response is seen below. In the current release there is still no indication of which tasks are multithreaded.
    “If classification is your final goal, the computation such as normal vectors is highly threaded, so your classifying routine will be faster.

    Some tasks, we are able to multithread in Terrasolid software applications, but some are not easy for implementation, such as produce contours in TerraModeler.
    Therefore, for example produce contours is single task.

    Unfortunately, we do not have a list of single and multithreaded tasks in our current version of user guide. Anyway, thank you for notice, we will consider to add it in the future.

    In general, when using TerraSlave/TerraDispatcher you will benefit during classification task by using more instances (recommended best number is 12), see the video which explains that from 18th minute:

    [Embedded video]

    and in addition, see the slide which comes with performed test during 24th minute.”

    Saxon: Sorry I could not be of greater assistance.

  2. Saxon says:

    When using TerraDispatcher, a number of blocks (2 km x 2 km blocks with up to 35 million points, LAS 1.4) are failing to save. This seems to occur most often when neighbours are set to 30 m, such as when running a ground routine. With TerraDispatcher, unless you monitor the dispatcher window you cannot determine which blocks have failed, as they are simply moved to the reports folder. One workaround is for me to search the reports folder for “failed”, open the report, identify the block, select the block and rerun the macro on selected blocks.

    Is there a way to get TerraDispatcher to rerun the failed blocks at the end of the task, or to hold the task open so that failed blocks can be reset in the dispatcher and rerun? I suspect the failed-to-save status is a conflict writing to the drive whilst trying to read and process a neighbouring block. The blocks are sorted by name, which creates a north-to-south order to ensure the blocks can read at least some of their neighbours, rather than trying to randomise the block order by sorting by point count. It seems of little use to have to monitor the dispatcher window, as the whole idea of distributed processing is to improve speed and reduce human intervention time in the process.

    1. DonMarsh says:

      I have this same issue. On my huge corridor project, ground classification of blocks in TerraDispatcher fails to do the job for about 20% of the blocks. Dispatcher does not report them as ‘failed’, nor does the information in the TSlave reports. I have tested different numbers of instances and time intervals and can find no acceptable solution. The issue crops up on the edges of the corridor where, if there are 3 TScan blocks across the corridor width, only 1 or 2 properly process the ground. I am left with a ‘stair step’ of blocks that are not classified. After the whole set of blocks processes, I load the data subsampled to inspect the result, selecting the blocks that did not classify. I then select the blocks, identify them in the TScan project block list and re-run them. This mostly works, but not for all blocks. This is very frustrating. Any suggestions would be welcome. Don

  3. Saxon says:

    Using TerraDispatcher, I have found on numerous occasions that a report gets stuck in the progress folder Terra64/tslave/progress, as below, where the status is Success


    Block APLACE_2020-C1-ELL_5XXX_55_0002_0002.las
    Loaded 28906967 points from active block.
    FnScanClassifyClass(999,1,0) returned 28 906 967
    Saved 28906967 points


    It should automatically be moved to the tslave/reports folder but it does not. This then stops the next macro in the queue from running. Why might this be the case, and how can it be resolved?

  4. Saxon says:

    In the section Task Manager Instructions there is a distinction between single-threaded and multithreaded tasks. Is there a list of tasks of each type available from Terrasolid?
