Status monitor - disabling for some workflows?

Questions and answers on how to get the most out of FFAStrans
veks
Posts: 79
Joined: Fri Oct 25, 2019 6:51 am

Status monitor - disabling for some workflows?

Post by veks »

Hi all!

I want to disable monitoring for some workflows I have.
I don't want every file that is processed by that workflow to be listed in the status monitor or the web GUI.
This workflow only makes a list of the files being watched inside a folder, so there is no need for those files to be listed again in the status monitor.

How can I disable it?

Thanks!
emcodem
Posts: 1631
Joined: Wed Sep 19, 2018 8:11 am

Re: Status monitor - disabling for some workflows?

Post by emcodem »

Hi veks, good to see you again!
You raise a valid question, but this is not foreseen in FFAStrans and it would be a hassle to implement too. I am not sure we will solve that, but we will discuss it for sure.

Anyway, that was the reason why I implemented the scheduled jobs in the webinterface. Unfortunately one needs some scripting knowledge. I could easily provide you with a script that fits your needs, but for that you would need to describe exactly what your workflow does (it does not only "list" the files in a folder).

1) Make sure your webinterface is running with the same credentials as FFAStrans does.
2) On the webinterface, add a new scheduled job, hit the "edit condition" button and insert the script below.
3) Change the first two lines to your watchfolder and to the log file where the current folder contents will be logged whenever there are changes.
4) Save and close the script and set the frequency of the scheduled job to e.g. 1 minute.

Let me know if you can live with such a solution, and if you need something different than just logging the filenames.

Code:

var watchfolder = "c:\\temp";
var logfile = "c:\\dev\\log.txt";

//do not change this code unless you know what you do
var fs = require("fs");

//load the folder listing remembered from the previous run, if any
var tempfile = process.env["APPDATA"] + "\\store.json";
var oldfiles = [];
try { oldfiles = JSON.parse(fs.readFileSync(tempfile, "utf8")); } catch (ex) {}

//read the current folder listing
var newfiles = fs.readdirSync(watchfolder);

//diff in both directions: files that appeared and files that disappeared
var difference = newfiles
                 .filter(x => !oldfiles.includes(x))
                 .concat(oldfiles.filter(x => !newfiles.includes(x)));

//remember the current listing for the next run
fs.writeFileSync(tempfile, JSON.stringify(newfiles), "utf8");

//append a timestamp and the full listing to the log only if the folder content changed
if (difference.length != 0) {
    fs.appendFileSync(logfile, "\n" + new Date() + " " + watchfolder + "\n", "utf8");
    fs.appendFileSync(logfile, JSON.stringify(newfiles, null, 4), "utf8");
}
emcodem, wrapping since 2009 you got the rhyme?
veks
Posts: 79
Joined: Fri Oct 25, 2019 6:51 am

Re: Status monitor - disabling for some workflows?

Post by veks »

emcodem wrote: Fri Feb 07, 2020 3:28 pm Hi veks, good to see you again!
You raise a valid question, but this is not foreseen in FFAStrans and it would be a hassle to implement too. I am not sure we will solve that, but we will discuss it for sure.

Anyway, that was the reason why I implemented the scheduled jobs in the webinterface. Unfortunately one needs some scripting knowledge. I could easily provide you with a script that fits your needs, but for that you would need to describe exactly what your workflow does (it does not only "list" the files in a folder).

1) Make sure your webinterface is running with the same credentials as FFAStrans does.
2) On the webinterface, add a new scheduled job, hit the "edit condition" button and insert the script below.
3) Change the first two lines to your watchfolder and to the log file where the current folder contents will be logged whenever there are changes.
4) Save and close the script and set the frequency of the scheduled job to e.g. 1 minute.

Let me know if you can live with such a solution, and if you need something different than just logging the filenames.

Code:

var watchfolder = "c:\\temp";
var logfile = "c:\\dev\\log.txt";

//do not change this code unless you know what you do
var fs = require("fs");

//load the folder listing remembered from the previous run, if any
var tempfile = process.env["APPDATA"] + "\\store.json";
var oldfiles = [];
try { oldfiles = JSON.parse(fs.readFileSync(tempfile, "utf8")); } catch (ex) {}

//read the current folder listing
var newfiles = fs.readdirSync(watchfolder);

//diff in both directions: files that appeared and files that disappeared
var difference = newfiles
                 .filter(x => !oldfiles.includes(x))
                 .concat(oldfiles.filter(x => !newfiles.includes(x)));

//remember the current listing for the next run
fs.writeFileSync(tempfile, JSON.stringify(newfiles), "utf8");

//append a timestamp and the full listing to the log only if the folder content changed
if (difference.length != 0) {
    fs.appendFileSync(logfile, "\n" + new Date() + " " + watchfolder + "\n", "utf8");
    fs.appendFileSync(logfile, JSON.stringify(newfiles, null, 4), "utf8");
}
Hi, thanks for this!
What I'm trying to achieve is a way to know when a file is fully transcoded and its size isn't growing anymore.
So, in the first/main workflow the transcoding is done -> then it needs to wait until all profiles have been delivered to the folder, and only then (an old problem that I still have) call the python script, exactly once.

A possible solution would be to have another (second) workflow that watches (with a growing-file check) for new files being delivered to the folder where the transcoded files go.
It would then make a list (with a filename like dd-mm-yyyy.txt) of all new files from that day.
As the watchfolder only triggers when a file isn't growing anymore, it would put the filename into that .txt list, which I'd then check in the main workflow like this (see the sketch after this list):
1. If all profiles (ID_profileXX.mp4) and the .smil file are in that .txt list, continue the workflow (call the python script, once).
2. If some file is missing, print an error saying which files are missing, or (if that's possible in FFAStrans) go back to the transcoding part and retry transcoding and delivering only that profile.
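Just to make the idea concrete, here is a minimal python sketch of what that check could look like; the list folder, the asset ID and the expected profile numbers are only placeholders for illustration, not anything FFAStrans provides:

Code:

import datetime
import sys
from pathlib import Path

# Placeholders: adjust to the real list folder, asset ID and expected profile numbers.
LIST_DIR = Path(r"\\storage\delivered-lists")
ASSET_ID = "ID"
EXPECTED = [f"{ASSET_ID}_profile{n:02d}.mp4" for n in (1, 2, 3)]
EXPECTED.append(ASSET_ID + ".smil")

# Read the filenames that the second workflow has logged for today (dd-mm-yyyy.txt).
list_file = LIST_DIR / (datetime.date.today().strftime("%d-%m-%Y") + ".txt")
delivered = set()
if list_file.exists():
    delivered = {line.strip() for line in list_file.read_text().splitlines() if line.strip()}

# Report anything that has not shown up yet.
missing = [name for name in EXPECTED if name not in delivered]
if missing:
    print("missing deliveries: " + ", ".join(missing))
    sys.exit(1)
print("all profiles and the .smil file are delivered")

A non-zero exit code would let the main workflow branch between "call the python script" and "report which files are missing".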

For that, I'd need a way to keep the files from the second workflow out of the status monitor, as they would only add useless information.

This makes me think it would be awesome if there were a way to separate the web GUI per workflow, for example workflow 1 at port 3001, workflow 2 at port 3002 and so on.

Can you please describe what exactly your script does? I'm trying to understand it, but I'm unsure whether it would log the files from, let's say, that second workflow every minute, or never.

Thanks a lot!
emcodem
Posts: 1631
Joined: Wed Sep 19, 2018 8:11 am

Re: Status monitor - disabling for some workflows?

Post by emcodem »

Hey!
If I were you, I would try to do this without the use of a .txt list.
Automatic retry of only a certain part of a workflow is always a huge hassle, as it needs a lot of testing of all possible cases. Typically, instead of implementing such workarounds, one concentrates on the actual issue: the reason why sometimes not all bitrates are transcoded correctly.
The problem is that one can only work around known errors, but it looks to me like you want to work around unknown errors. As unknown errors cannot be tested, the workaround typically does not work as intended either.

Are you sure this is worth the effort? Are you aware that there are some problems left that you have not yet thought about?

If you really want to continue down this path of working around unknown errors, I suggest a totally different approach than the one you are thinking about right now. But you will need to use version 1 for it, as your workflow is already pretty big.

1) Stay in one workflow, don't split off into another one.
2) Write a python script that checks the existence and validity of the mp4 files and the .smil file, and execute it after all deliveries are done. There is no need to check with python whether the files are still growing, since all deliveries will already be finished, but you will need to check e.g. the duration of all videos (a rough sketch follows after this list).
3) For a start, concentrate on retrying the whole workflow by moving the source file back to the watchfolder (rename it if necessary). It will be a LOT of work to retry only a part of the workflow.
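To make point 2 a bit more concrete, here is a rough python sketch of such a check; it assumes ffprobe is available in the PATH, and the delivery folder, the expected filenames and the reference duration are placeholders you would fill in from your workflow:

Code:

import subprocess
import sys
from pathlib import Path

# Placeholders: fill these in from your workflow (e.g. via command line arguments).
DELIVERY_DIR = Path(r"\\storage\finished")
EXPECTED = ["ID_profile01.mp4", "ID_profile02.mp4", "ID_profile03.mp4", "ID.smil"]
REFERENCE_DURATION = 120.0   # duration of the source in seconds
TOLERANCE = 1.0              # allowed deviation in seconds

def duration(path):
    """Read the container duration in seconds with ffprobe."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        str(path),
    ])
    return float(out.decode().strip())

errors = []
for name in EXPECTED:
    path = DELIVERY_DIR / name
    if not path.exists():
        errors.append("missing: " + name)
    elif path.suffix == ".mp4":
        try:
            if abs(duration(path) - REFERENCE_DURATION) > TOLERANCE:
                errors.append("wrong duration: " + name)
        except (subprocess.CalledProcessError, ValueError):
            errors.append("unreadable: " + name)

# Non-zero exit code so the workflow can branch into the retry path.
if errors:
    print("; ".join(errors))
    sys.exit(1)
print("all deliveries look valid")

If the script exits non-zero, the workflow can branch into the retry path from point 3 (move the source back to the watchfolder).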

Does what I'm saying make sense, and would it probably work for you?

Regarding multiple instances of the webinterface on a per-workflow basis: you are mainly talking about the job monitor part, which should then only show the files from a certain workflow, correct?
emcodem, wrapping since 2009 you got the rhyme?
veks
Posts: 79
Joined: Fri Oct 25, 2019 6:51 am

Re: Status monitor - disabling for some workflows?

Post by veks »

emcodem wrote: Mon Feb 10, 2020 9:24 am Hey!
If I were you, I would try to do this without the use of a .txt list.
Automatic retry of only a certain part of a workflow is always a huge hassle, as it needs a lot of testing of all possible cases. Typically, instead of implementing such workarounds, one concentrates on the actual issue: the reason why sometimes not all bitrates are transcoded correctly.
The problem is that one can only work around known errors, but it looks to me like you want to work around unknown errors. As unknown errors cannot be tested, the workaround typically does not work as intended either.

Are you sure this is worth the effort? Are you aware that there are some problems left that you have not yet thought about?

If you really want to continue down this path of working around unknown errors, I suggest a totally different approach than the one you are thinking about right now. But you will need to use version 1 for it, as your workflow is already pretty big.

1) Stay in one workflow, don't split off into another one.
2) Write a python script that checks the existence and validity of the mp4 files and the .smil file, and execute it after all deliveries are done. There is no need to check with python whether the files are still growing, since all deliveries will already be finished, but you will need to check e.g. the duration of all videos.
3) For a start, concentrate on retrying the whole workflow by moving the source file back to the watchfolder (rename it if necessary). It will be a LOT of work to retry only a part of the workflow.

Does what I'm saying make sense, and would it probably work for you?

Regarding multiple instances of the webinterface on a per-workflow basis: you are mainly talking about the job monitor part, which should then only show the files from a certain workflow, correct?
Hi,
The python part you described is already being done in that script.
As I mentioned, the problem is how to be sure that a file has been fully delivered from the localized copy back to the network storage folder.
We had a problem where all profiles were transcoded and delivered, but for some reason one profile was still being delivered when the python script was called; the script renamed all the other profiles correctly, but that one profile wasn't renamed because it was still being delivered.
So we added a sleep/wait node of 60 seconds, which seems to have solved this issue for now.
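In case the fixed 60-second sleep ever turns out to be too short, an alternative I'm considering is to poll the file size until it stops changing, roughly like this python sketch (the path, interval and timeout are placeholders):

Code:

import os
import time

def wait_until_stable(path, interval=10, checks=2, timeout=600):
    """Return True once the file size stays the same for `checks`
    consecutive polls, or False if `timeout` seconds pass first."""
    deadline = time.monotonic() + timeout
    last_size, stable = -1, 0
    while time.monotonic() < deadline:
        try:
            size = os.path.getsize(path)
        except OSError:           # file not there yet or temporarily unreadable
            size = -1
        if size >= 0 and size == last_size:
            stable += 1
            if stable >= checks:
                return True
        else:
            stable = 0
        last_size = size
        time.sleep(interval)
    return False

# Example: block until the delivered profile stops growing (placeholder path).
if wait_until_stable(r"\\storage\finished\ID_profile01.mp4"):
    print("delivery finished, safe to rename")
else:
    print("file still growing after timeout")

Of course a stable size doesn't strictly prove the copy has finished, so a generous timeout (or combining this with the existing sleep) would still be needed.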

Ty
veks
Posts: 79
Joined: Fri Oct 25, 2019 6:51 am

Re: Status monitor - disabling for some workflows?

Post by veks »

Hi,
I've been testing a workflow.
It seems that, for some reason, one of the profiles doesn't wait at the hold/sync node and the job just goes through to the python script, which then runs, let's say, at least 2 minutes before that profile has been delivered to the "finished" folder.
Any idea why "hold" isn't waiting for just that profile?

I've checked all connections between the nodes and they seem fine, so there is no obvious reason why this should be happening.

Thanks!
emcodem
Posts: 1631
Joined: Wed Sep 19, 2018 8:11 am

Re: Status monitor - disabling for some workflows?

Post by emcodem »

We need to see your Workflow, please check your PM :-)
emcodem, wrapping since 2009 you got the rhyme?
emcodem
Posts: 1631
Joined: Wed Sep 19, 2018 8:11 am

Re: Status monitor - disabling for some workflows?

Post by emcodem »

Ay, thanks for your PM with the workflow.
I checked and you are correct: the Synchronize option does not work - apparently it does not work at all in the 0.9.4 version that you can currently download from the FFAStrans main page. In fact it behaves as if the "Hold" node (with synchronize) was just not there at all.
I also checked with the latest beta, and there it works. Are you sure you cannot run the 7.1 beta? :-) - the 1.0 release should come in the next few days.
Currently I am checking with the others whether we have a simple solution for you without upgrading, stay tuned.

Maybe it is best if you just do sequential transcoding and delivery instead of parallel (until you have upgraded to 1.0). As you said, you only have 8 cores (which means 4 physical cores without Hyperthreading), so it should not take much longer, because every single transcode will utilize 100% CPU anyway.
emcodem, wrapping since 2009 you got the rhyme?
veks
Posts: 79
Joined: Fri Oct 25, 2019 6:51 am

Re: Status monitor - disabling for some workflows?

Post by veks »

emcodem wrote: Tue Feb 11, 2020 9:29 am Ay, thanks for your PM with the workflow.
I checked and you are correct: the Synchronize option does not work - apparently it does not work at all in the 0.9.4 version that you can currently download from the FFAStrans main page. In fact it behaves as if the "Hold" node (with synchronize) was just not there at all.
I also checked with the latest beta, and there it works. Are you sure you cannot run the 7.1 beta? :-) - the 1.0 release should come in the next few days.
Currently I am checking with the others whether we have a simple solution for you without upgrading, stay tuned.

Maybe it is best if you just do sequential transcoding and delivery instead of parallel (until you have upgraded to 1.0). As you said, you only have 8 cores (which means 4 physical cores without Hyperthreading), so it should not take much longer, because every single transcode will utilize 100% CPU anyway.
That's great to hear that this is the problem :D
Maybe one solution would be to do what I've described before:
have another workflow watch for growing files and print the filenames into a .txt file,
while the first/main workflow waits for all the filenames to appear in that .txt.
But for that I'd also need the hold option - or is there another way?

Or, if you say that 1.0.0 will be released soon, I can wait for it too.
But I'd need some explanation and help on how to upgrade to it without much hassle.

As for sequential transcoding (in serial, not parallel), I've tried that too, but there were other problems then, the main one being what happens here:
- the source gets transcoded by the first encoding job
- the next job takes the previously transcoded profile and transcodes the already transcoded file
- which makes the last transcode useless, as it uses the previous transcoded file that has a lower resolution, and it also ends up out of sync.

There should be an option inside the custom FFMpeg node for choosing "use source file" so that the next node doesn't take the previously transcoded file, which is a problem.

Thanks!

P.S. So, HOLD is fixed in the latest beta?
Can I sideload the beta while having the stable version installed too, but with its workflows turned off?

EDIT 1: Also, I can't switch to the beta yet because of this?
viewtopic.php?f=8&t=1006#p4812
admin wrote: Wed Feb 05, 2020 9:50 pm Importing legacy xml workflow packs will be available in the final 1.0.0 version, scheduled for release on 14 February.

-steinar
emcodem
Posts: 1631
Joined: Wed Sep 19, 2018 8:11 am

Re: Status monitor - disabling for some workflows?

Post by emcodem »

veks wrote: Tue Feb 11, 2020 10:10 am Maybe one solution would be to do what I've described before.
For sure you can work around it, but it will take a lot of time to make it work.
veks wrote: Tue Feb 11, 2020 10:10 am But I'd need some explanation and help on how to upgrade to it without much hassle.
There is a "how to upgrade" thread in the 1.0 forum section.
veks wrote: Tue Feb 11, 2020 10:10 am There should be an option inside the custom FFMpeg node for choosing "use source file" so that the next node doesn't take the previously transcoded file, which is a problem.
You could do the following: use a "Populate vars" processor before each and every transcode and set s_source to %s_pickup_file%.
veks wrote: Tue Feb 11, 2020 10:10 am P.S. So, HOLD is fixed in the latest beta?
Can I sideload the beta while having the stable version installed too, but with its workflows turned off?
Yes and yes.
veks wrote: Tue Feb 11, 2020 10:10 am EDIT 1: Also, I can't switch to the beta yet because of this?
Yes you can, just follow the "how to upgrade" procedure (copy your whole FFAStrans installation and upgrade the copy only, don't touch your productive installation) - but I fear you cannot run two FFAStrans instances on one machine.
emcodem, wrapping since 2009 you got the rhyme?