FFAStrans Farm Setup
Posted: Fri Jan 23, 2026 1:00 pm
by SheldonABS
Hi Team,
I'm currently planning to set up a farm environment with the systems I have available, and I was hoping to get some advice on the best way to set it all up.
The plan is to have 2 local Windows systems with 1-2TB SATA SSDs as hosts (expanding to 3+ systems in the future), with a NAS server holding all the watch folders for the various workflows in use, alongside all the output files. All work will be done on a 10 Gb network, if that's relevant, and I'll be using the latest version of FFAStrans.
To my understanding, setting this up would involve storing an instance of FFAStrans in a shared location that each host has access to. Is it that simple? Is there also an 'optimal' way of setting this all up?
I'm not sure if there is anything else that would be useful to note but do let me know and I can provide any additional info!
Thanks!
Re: FFAStrans Farm Setup
Posted: Fri Jan 23, 2026 3:18 pm
by SheldonABS
Hi,
One thing I forgot to mention is that I also want to be able to use the web interface to monitor the progress of all the files going through the workflows, instead of the in-app status monitor. Is this still possible with the farm setup?
Thanks!
Re: FFAStrans Farm Setup
Posted: Fri Jan 23, 2026 3:58 pm
by emcodem
Sure, from the webint perspective it makes no difference whether you have a farm or not.
Note that you only need a single webinterface installation, AND the webint files (server.exe etc.) should always be local; never install the service from a UNC drive.
Just as a heads-up, the best setup is to run the webinterface on the same host that serves the FFAStrans share and configure webint to access the FFAStrans files from a local drive instead of UNC. Of course, that only works if you share the FFAStrans install files from a Windows host and not from a central NAS running Linux or similar.
But that's very much optional; it will work either way.
Optimal farm setup:
1) Serve the FFAStrans install files from a dedicated Windows host (e.g. a small VM) which does NOT have the FFAStrans services installed, only webint.
2) Do not use the same network path to access the FFAStrans install files as you do for the media file shares (e.g. have a 1 Gbit control network to the share with the install files and a 10 Gbit media network to the shares with the media files).
Again, this is optional; it will work either way.
The reason for separate network links for control and media traffic is that the latency to the FFAStrans DB files can rise to many tens or even hundreds of seconds if you transfer big media over the same link you use to access the small DB files. Especially in webint this can lead to "timeout" error messages and very slow job updates.
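If you want to sanity-check whether your control path is being starved by media traffic, a quick probe like the one below can help. This is a hedged sketch, not an FFAStrans tool: `SHARE_PATH` is an assumption and should point at the share holding your FFAStrans install/DB files (e.g. `\\controlhost\ffastrans`).

```python
# Minimal sketch: measure small-file round-trip latency on the share
# that holds the FFAStrans DB files. On a healthy control network this
# stays in the low milliseconds; values climbing into seconds suggest
# media transfers are saturating the same link.
import os
import tempfile
import time

# Assumption: replace with your install share, e.g. r"\\controlhost\ffastrans"
SHARE_PATH = tempfile.gettempdir()

def db_latency_ms(path: str, payload: bytes = b"x" * 1024) -> float:
    """Write and read back a ~1 KB file, returning round-trip time in ms."""
    probe = os.path.join(path, "_latency_probe.json")
    start = time.perf_counter()
    with open(probe, "wb") as f:
        f.write(payload)
    with open(probe, "rb") as f:
        f.read()
    os.remove(probe)
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    print(f"DB-file round trip: {db_latency_ms(SHARE_PATH):.1f} ms")
```

Running it periodically during a heavy transcode batch would show whether separating the control and media networks is worth the effort in your setup.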
Re: FFAStrans Farm Setup
Posted: Tue Feb 03, 2026 6:05 pm
by SheldonABS
Hi!
Thank you for the quick response, and apologies for the extremely slow reply! We wanted to start this project sooner, but other projects came up that took priority in the office.
This information has been useful, but we are still a bit confused about the proper way of setting this up. Do you think it would be possible to get a step-by-step guide on how to set up each individual component, i.e. the web interface, the various hosts, the use of our NAS server within the farm, etc.?
That would be greatly appreciated.
Thanks!
Re: FFAStrans Farm Setup
Posted: Wed Feb 04, 2026 10:13 am
by emcodem
Hey Sheldon,
No problem, I think we have too little documentation about the farm setup. I've updated the wiki and added a drawing at the bottom of the System Architecture page:
https://ffastrans.com/wiki/doku.php?id= ... yment_farm
Let me know if you have any questions; I'm happy to provide in-depth detail about any aspect you need.
Re: FFAStrans Farm Setup
Posted: Wed Feb 04, 2026 12:56 pm
by SheldonABS
Hi Em,
Thank you for this! I'll go through it all, see how far I get, and come back with any questions I have, if any at all!
Re: FFAStrans Farm Setup
Posted: Thu Mar 12, 2026 3:16 pm
by SheldonABS
Hi Team,
We have managed to successfully set up our farm and it has been working well; thanks again for the help!
We're currently aiming to start failsafe testing to see how the farm behaves in a variety of 'fail' scenarios, generally issues such as a worker node getting knocked off the network mid-transcode, etc.
We just wanted to ask how the farm is intended to react in a couple of scenarios:
If we are using a shared cache within the farm, and during a batch transcode something happens to one or more of the worker nodes that affects the workflow process, what is intended to happen to the files currently working through the workflow on the affected machines? Are these files shifted to the other available worker nodes in the farm? And is the behaviour the same if we use local caches for each worker node instead of a shared cache?
Additionally, if we are using a shared cache and that cache is on the NAS as shown in the diagram, and something happens to the NAS (the machine crashing, shutting down, etc.) while the worker nodes are processing files, what is intended to happen to the files currently processing? Will they need to be re-processed once the NAS is up and running again? Are there any failsafes, such as the files being cached locally at any point?
We'll be testing these scenarios on our side, but it would be great to know how things are supposed to work so we can make sure we've got everything set up right. And if there is anything else we should know about how the farm is supposed to behave in these kinds of scenarios, please do let us know.
Thanks!
Re: FFAStrans Farm Setup
Posted: Thu Mar 12, 2026 9:53 pm
by admin
Hi SheldonABS
Ok, so there are two different kinds of data sharing when it comes to an FFAStrans farm. The ideal is to separate the location of the installation share and the work share. One of them has better protection than the other: the install location will usually be self-healing. You should be able to reboot the host having the share, and the other hosts should reconnect automatically once it's back up and running. However, if that happens in the middle of an ongoing job, the story is a bit different.
Both the install share and the work share are dependencies, so you will see various failures. If the share host goes down or is rebooted, the job should automatically retry on the failing host and node once it's back up and running. In the event that a working host goes offline, the system will detect a stale job and retry. However, again, the retry only applies to the host that initially failed or went offline.
If the work share goes offline, the job will likely fail, and you will have to retry it manually.
Either way, both are dependencies causing a bit of trouble if one of them goes offline.
Make sure your dependencies are online.
With the current FFAStrans design, there's no way to guarantee 100% success on failing infrastructure.
-steinar
Re: FFAStrans Farm Setup
Posted: Fri Mar 13, 2026 10:35 am
by emcodem
What I can add is that you might want to have at least two forms of disaster recovery ready and tested:
0) Both methods described below require a full stop/kill of all FFAStrans processes on all hosts; it's good to know how to do that:
https://ffastrans.com/wiki/doku.php?id=system:processes
1) Full disaster recovery means restoring the FFAStrans installation files (especially the Processors/db folder) from a backup.
2) Partial disaster recovery means stopping all running FFAStrans processes on all hosts and ensuring there are no .json files in the db/cache/tickets and db/cache/monitor folders.
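The partial-recovery cleanup above could be scripted along these lines. This is only a sketch: `INSTALL_ROOT` is an assumption (point it at your shared install folder), the exact location of `db/cache` relative to the install root may differ in your version (the full-recovery step mentions a Processors/db folder), and it must only run after ALL FFAStrans processes on ALL hosts have been stopped.

```python
# Hedged sketch of the "partial disaster recovery" cleanup: remove
# leftover .json files from db/cache/tickets and db/cache/monitor.
# Run this ONLY after all FFAStrans processes on all hosts are stopped.
import glob
import os

# Assumption: adjust to your shared FFAStrans install folder
INSTALL_ROOT = r"C:\FFAStrans"

def clear_cache_jsons(install_root: str) -> list:
    """Delete .json files from db/cache/tickets and db/cache/monitor,
    returning the paths that were removed."""
    removed = []
    for sub in ("tickets", "monitor"):
        pattern = os.path.join(install_root, "db", "cache", sub, "*.json")
        for path in glob.glob(pattern):
            os.remove(path)
            removed.append(path)
    return removed

if __name__ == "__main__":
    for path in clear_cache_jsons(INSTALL_ROOT):
        print("removed", path)
```

Printing what was removed gives you a record to cross-check against stuck jobs after the hosts come back up.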
The biggest problem is actually watchfolders, or at least watches that are normally not empty and require FFAStrans to remember which files have already been processed. A backup will not contain the most recent information about that. If you have such watchfolders, it might be a good idea to stop all watches after a full (scenario 1) disaster recovery and manually "rebuild" the file history in the corresponding workflow.
So in short, it is a good idea to have an overview of which workflows/watchfolders require which actions after disaster recovery, plus some guide/documentation of the steps to execute full and partial recovery.