If you are trying to use the scripts on a Synology NAS, you will need to enable SSH and navigate to the Docker folder you want to use the nzbToMedia scripts in.
Ex. Your Docker folder is /volume1/docker/sabnzbd
SSH into your Synology with PuTTY or WinSCP, logging in as the root user.
cd to the folder where you know your Docker files are, e.g. /volume1/docker/sabnzbd.
You can also get the path from the shared folder's properties in the GUI.
I am now in /volume1/docker/sabnzbd.
Next, run the command git init from the PuTTY command window (this step is optional, since git clone creates its own repository).
Then run the command to download nzbToMedia from the Git repo: git clone https://github.com/clinton-hall/nzbToMedia.git
You will now see a new folder called nzbToMedia containing all the files needed to use the scripts.
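Put together, the shell steps look roughly like this (a minimal sketch; the NAS address and the /volume1/docker/sabnzbd path are just placeholders from the example above):

```sh
# connect to the NAS as root (replace with your NAS IP or hostname)
ssh root@192.168.1.10

# change to the Docker share you use for SABnzbd
cd /volume1/docker/sabnzbd

# clone nzbToMedia; this creates a nzbToMedia subfolder with the scripts
git clone https://github.com/clinton-hall/nzbToMedia.git
```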
You will then be able to follow the wiki's original install instructions and further guidance: https://github.com/clinton-hall/nzbToMedia/wiki/autoProcessMedia.cfg
If you are using a hybrid setup of Docker programs and regular Synology packages, please note this counts as running a remote host, and you should treat any references to remote hosts in the setup instructions as applying to you.
Ex. You have Medusa or SickBeard running in a Docker container, but your SABnzbd is a regular Synology package.
If either of your programs runs in Docker, post-processing will not work even if the config and API are set up correctly on both Medusa/SickBeard and SABnzbd. This is because the following has to be configured in your autoProcessMedia.cfg for them to talk to each other as remote hosts.
Your SickBeard, Sonarr, etc. section must also be filled out correctly with the remote host options enabled so it can talk to, and knows to use, the mount points you created (example below).
Example with an anime category added as well:
[SickBeard]
#### autoProcessing for TV Series
#### tv - category that gets called for post-processing with SB
[[tv]]
enabled = 1
host = localhost
port =
apikey =
username = ""
password = ""
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
web_root = ""
ssl = 0
fork = auto
delete_failed = 0
# Enable/Disable linking for Torrents
Torrent_NoLink = 0
keep_archive = 1
process_method = ""
# force processing of already processed content when running a manual scan.
force = 0
# tell SickRage/Medusa to delete all source files after processing.
delete_on = 0
# tell Medusa to ignore check for associated subtitle check when postponing release
ignore_subs = 0
extract = 1
nzbExtractionBy = Downloader
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
# Enable/Disable deleting ignored files (samples and invalid media files)
delete_ignored = 0
##### Enable if SickBeard is on a remote server for this category
**remote_path = 1**
##### Set to path where download client places completed downloads locally for this category
**watch_dir = "/volume1/downloads/complete/"**
##### Set the recursive directory permissions to the following (0 to disable)
chmodDirectory = 0
[[anime]] <<<<<<<<<<<<<<<<<< Optional
enabled = 1
host = localhost
port =
apikey =
username =
password =
web_root = ""
ssl = 0
fork = auto
delete_failed = 0
Torrent_NoLink = 0
keep_archive = 1
process_method = ""
force = 0
extract = 1
nzbExtractionBy = Downloader
minSize = 0
delete_ignored = 0
**remote_path = 1**
**watch_dir = "/volume1/downloads/complete/"**
chmodDirectory = 0
The watch directory is filled in with the local directory where the downloader (SABnzbd or NZBGet) puts the completed downloads.
The [Network] section in the autoProcessMedia.cfg file must be configured with mount points that map the paths where the completed downloads live in your Docker instance and in your Synology package instance.
[Network]
# Enter Mount points as LocalPath,RemotePath and separate each pair with '|'
# e.g. MountPoints = /volume1/Public/,E:\|/volume2/share/,\\NAS\
mount_points = /volume1/downloads/complete, /downloads/complete/
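With this mapping, a completed download that the downloader reports locally as, say, /volume1/downloads/complete/Some.Release.S01E01 is translated to /downloads/complete/Some.Release.S01E01 before being handed to the Docker-side program (the release name here is made up, purely to illustrate the local-to-remote translation).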
Once you have set up your config file and filled in all the sections above, save your autoProcessMedia.cfg.
At this point you can set up the relevant nzbToMedia scripts in SABnzbd or NZBGet for your processor of choice, following the guide https://github.com/clinton-hall/nzbToMedia/wiki, and begin testing.
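For example (a sketch; the scripts path is just the clone location from earlier, adjust it to wherever your downloader can actually see the folder): in SABnzbd, go to Config > Folders and point the Scripts Folder at /volume1/docker/sabnzbd/nzbToMedia, then under Config > Categories assign nzbToSickBeard.py as the script for your tv (and anime) categories.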
If you receive any 400 errors, confirm you have entered the correct API key in your autoProcessMedia.cfg, and/or check whether SABnzbd or NZBGet is running on an HTTPS port that needs to be set correctly in your autoProcessMedia.cfg.
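A quick way to sanity-check the host, port, and API key from the NAS shell is to hit the classic SickBeard-style v1 API with curl (a sketch; the port, key, and exact API path are assumptions and can differ between forks, e.g. Medusa serves this under /api/v1/):

```sh
# should return a JSON "success"/"Pong" result if host, port and API key are right
curl "http://localhost:8081/api/YOUR_API_KEY/?cmd=sb.ping"
```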
If you are receiving Python errors, open a bash prompt in your Docker instance and run which python to confirm that your Docker instance is using a Python version compatible with nzbToMedia.
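For example (a sketch, assuming your downloader container is named sabnzbd; substitute your own container name):

```sh
# open a shell inside the container (use /bin/bash if the image has it)
docker exec -it sabnzbd /bin/sh

# inside the container: see which Python is on the PATH and its version
which python3 || which python
python3 --version || python --version
```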
If you are receiving folder-not-found errors, confirm you have mapped a Docker folder to the same download path that your SABnzbd or NZBGet uses.
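For example, when creating the Medusa/SickBeard container you would map the Synology download share into the container (a sketch; the image name, port, and paths are assumptions, adjust them to your setup):

```sh
# map the Synology downloads share to /downloads inside the container
docker run -d --name medusa \
  -p 8081:8081 \
  -v /volume1/downloads:/downloads \
  linuxserver/medusa
```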
A failure from SABnzbd may look something like this:
[08:11:55] [POSTPROCESS]::SICKBEARD: SUCCESS: The download succeeded, sending a post-process request
[08:11:55] [POSTPROCESS]::**SICKBEARD: Post-processing attempted but directory doesn't exist: /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].3**
**[08:11:55] [ERROR]::MAIN: A problem was reported in the /volume1/@appstore/sabnzbd/var/scripts/nzbToMedia/nzbToSickBeard.py script.**
Changing to directory: /volume1/@appstore/sabnzbd/var/scripts/nzbToMedia
This is SABnzbd output from before adding a mapping in Docker to the completed folder,
ex. /volume1/downloads/complete >>> /volume1/downloads/complete
Together with our network mount points from before, mapping the local path to the remote path, this fixes the issue:
[Network]
# Enter Mount points as LocalPath,RemotePath and separate each pair with '|'
# e.g. MountPoints = /volume1/Public/,E:\|/volume2/share/,\\NAS\
mount_points = /volume1/downloads/complete, /downloads/complete/
Once we added the correct local path and mount point in Docker on both sides and enabled the remote host options in our config, we get what we want:
The example output below is SABnzbd running as a package on my local Synology talking to Medusa in a Docker container, which is technically walled off even though they are on the same system, so the remote host setup applies here.
[08:22:54] [INFO]::MAIN: Loading config from [/volume1/@appstore/sabnzbd/var/scripts/nzbToMedia/autoProcessMedia.cfg]
[08:22:54] [INFO]::MAIN: Python v3.7 will reach end of life in 1038 days.
[08:22:54] [INFO]::MAIN: Checking database structure...
[08:22:54] [INFO]::MAIN: Checking if git needs an update
[08:22:55] [INFO]::MAIN: No update needed
[08:22:55] [INFO]::MAIN: nzbToMedia Version:f5e4ec0981ab2f6fd97b14b24aa363a393396073 Branch:master (Linux 4.4.59+)
[08:22:55] [WARNING]::MAIN: Failed to locate ffprobe. Video corruption detection disabled!
[08:22:55] [WARNING]::MAIN: Install ffmpeg with x264 support to enable this feature ...
[08:22:55] [INFO]::MAIN: #########################################################
[08:22:55] [INFO]::MAIN: ## ..::[nzbToMedia.py]::.. ##
[08:22:55] [INFO]::MAIN: #########################################################
[08:22:55] [INFO]::MAIN: Script triggered from SABnzbd Version 3.0.1.
[08:22:55] [INFO]::MAIN: Auto-detected SECTION:SickBeard
[08:22:55] [INFO]::MAIN: Calling SickBeard:anime to post-process:[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p]
[08:22:55] [INFO]::MAIN: Attempting to auto-detect anime fork
[08:22:55] [INFO]::MAIN: SickBeard:anime fork auto-detection found custom params {'force': None, 'type': None, 'is_priority': None, 'delete_on': None, 'proc_dir': None, 'process_method': None}
[08:22:55] [INFO]::MAIN: SickBeard:anime fork set to custom
[08:22:55] [WARNING]::EXCEPTION: No name replacement determined for d2d772dc2cca446283e74007dad48f7e
[08:22:55] [INFO]::MAIN: FLATTEN: Flattening directory: /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].4
[08:22:55] [POSTPROCESS]::SICKBEARD: SUCCESS: The download succeeded, sending a post-process request
[08:22:56] [POSTPROCESS]::SICKBEARD: Processing path: /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].4
[08:22:56] [POSTPROCESS]::SICKBEARD: Processing folder: /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].4
[08:22:56] [POSTPROCESS]::SICKBEARD: Post-processing files: ['[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].mkv']
[08:22:56] [POSTPROCESS]::SICKBEARD: Post-processing video files: ['[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].mkv']
[08:22:56] [POSTPROCESS]::SICKBEARD: Processing /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].4/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].mkv
[08:22:56] [POSTPROCESS]::SICKBEARD: Show information parsed from file name
[08:22:56] [POSTPROCESS]::SICKBEARD: Retrieving episode object for Boruto: Naruto Next Generations S01E162
[08:22:56] [POSTPROCESS]::SICKBEARD: The episode file has a quality in it, using that: 720p HDTV
[08:22:56] [POSTPROCESS]::SICKBEARD: This episode was found in history as SNATCHED.
[08:22:56] [POSTPROCESS]::SICKBEARD: Snatch in history: True
[08:22:56] [POSTPROCESS]::SICKBEARD: Manually snatched: True
[08:22:56] [POSTPROCESS]::SICKBEARD: Info hash: None
[08:22:56] [POSTPROCESS]::SICKBEARD: NZB: True
[08:22:56] [POSTPROCESS]::SICKBEARD: Current quality: 720p HDTV
[08:22:56] [POSTPROCESS]::SICKBEARD: New quality: 720p HDTV
[08:22:56] [POSTPROCESS]::SICKBEARD: Proper: False
[08:22:56] [POSTPROCESS]::SICKBEARD: This episode is a priority download: True
[08:22:56] [POSTPROCESS]::SICKBEARD: New file: /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].4/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].mkv
[08:22:56] [POSTPROCESS]::SICKBEARD: New size: 705.01 MB
[08:22:56] [POSTPROCESS]::SICKBEARD: There is no existing file so there's no worries about replacing it
[08:22:56] [POSTPROCESS]::SICKBEARD: This download is marked a priority download so I'm going to replace an existing file if I find one
[08:22:56] [POSTPROCESS]::SICKBEARD: Found release name [HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p]
[08:22:56] [POSTPROCESS]::SICKBEARD: Destination folder for this episode: /mnt/Anime/Boruto Naruto Next Generations
[08:22:56] [POSTPROCESS]::SICKBEARD: Copying file from /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].4/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].mkv to /mnt/Anime/Boruto Naruto Next Generations/Boruto Naruto Next Generations - 162 - Escaping the Tightening Net.mkv
[08:22:56] [POSTPROCESS]::SICKBEARD: Processing succeeded for /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].4/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].mkv
[08:22:56] [POSTPROCESS]::SICKBEARD: Post-processing completed.
[08:22:56] [INFO]::CLEANDIRS: Directory /volume1/downloads/complete/[HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p].4 still contains 1 unprocessed file(s), skipping ...
[08:22:56] [INFO]::MAIN: The /volume1/@appstore/sabnzbd/var/scripts/nzbToMedia/nzbToSickBeard.py script completed successfully.
Changing to directory: /volume1/@appstore/sabnzbd/var/scripts/nzbToMedia
-- Cleaning bytecode --
b'Removing __pycache__/\n'
b'Removing __pycache__/\n'
-- Cleaning folders: ['libs', 'core'] --
No folders to clean
Returning to directory: /volume1/@appstore/sabnzbd/share/SABnzbd
-- Cleanup finished --
SickBeard: Successfully post-processed [HorribleSubs].Boruto-Naruto.Next.Generations-162.[720p]!
SAB and Sick were able to confirm the paths to the completed download folder on both ends because of the mount points we set up earlier.
In a nutshell we told the script: "Hey, this is where the completed download folder is on my Synology locally, and this is where the download folder is in my Docker container."
This allows the post-processing to work normally at that point; as long as you have set up your Docker container with read and write access to the folder, it should work.
When you set up your Docker container you should use the PUID and PGID (user and group IDs) of your primary Synology user.
These can be obtained by running the following command while SSH'd into your Synology: id yourusernamehere
Example: id Max. The output will spit out the uid and gid of your Synology user.
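For example (a sketch; the username, the ID numbers, and the PUID/PGID environment variables used by linuxserver-style images are placeholders for illustration):

```sh
# look up the user and group IDs of your main Synology user
id Max
# example output (your numbers will differ):
# uid=1026(Max) gid=100(users) groups=100(users),101(administrators)

# pass them to the container so it reads/writes the mapped folders as that user
docker run -d --name medusa \
  -e PUID=1026 -e PGID=100 \
  -v /volume1/downloads:/downloads \
  linuxserver/medusa
```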
Additionally, check in the Users section of the Synology Control Panel that your user has read/write access to the folders you are going to map to Docker.