Automate build and load of Hugo sites to Amazon S3 using Rclone with Python
This was the build and load flow for my Hugo site before I gave up on Hugo and moved back to WordPress. The site was generated with Hugo and pushed to Amazon S3 using rclone.
The code assumes the hugo and rclone executables and the Hugo root directory are all in the same folder.
In this example the Hugo directory is called codepearls.xyzio.com.
import os, shutil

# S3 auth info
s3SecretAccessKey = 's3_secret_key'
s3AccessKeyId = 's3_access_key'

# Hugo folder and bucket name. sitepath is the path to the Hugo public folder.
path = 'codepearls.xyzio.com'
sitepath = os.path.join(path, 'public')

# Remove the files from the previous build by deleting the public folder
if os.path.exists(sitepath):
    shutil.rmtree(sitepath)

# Run hugo from the root directory
cmd = 'hugo.exe -s ' + path
os.system(cmd)

# Run rclone from the root directory to sync /public to S3, with the
# appropriate args to encrypt the files and enable public read
cmd = 'rclone.exe sync ' + sitepath + ' s3:' + path
cmd += ' -v --s3-secret-access-key ' + s3SecretAccessKey
cmd += ' --s3-access-key-id ' + s3AccessKeyId
cmd += ' --s3-acl public-read'
cmd += ' --s3-server-side-encryption AES256'
cmd += ' --s3-provider AWS'
cmd += ' --s3-region us-west-2'
os.system(cmd)
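As a variation, the same flow can be written with subprocess.run instead of os.system. This is a hedged sketch, not the original script: passing the arguments as a list sidesteps shell-quoting issues, and check=True aborts the deploy if hugo or rclone exits with an error. The bucket name, credentials, and executable names mirror the placeholders above and would need to be adjusted for a real site.

```python
import os
import shutil
import subprocess


def build_rclone_cmd(sitepath, bucket, access_key_id, secret_access_key):
    """Assemble the rclone sync command as an argument list.

    Using a list (rather than one concatenated string) means no manual
    quoting is needed even if paths contain spaces.
    """
    return [
        'rclone', 'sync', sitepath, 's3:' + bucket,
        '-v',
        '--s3-access-key-id', access_key_id,
        '--s3-secret-access-key', secret_access_key,
        '--s3-acl', 'public-read',
        '--s3-server-side-encryption', 'AES256',
        '--s3-provider', 'AWS',
        '--s3-region', 'us-west-2',
    ]


def deploy(path='codepearls.xyzio.com'):
    """Rebuild the Hugo site and sync the public folder to S3."""
    sitepath = os.path.join(path, 'public')

    # Remove the output of the previous build
    if os.path.exists(sitepath):
        shutil.rmtree(sitepath)

    # Build the site; check=True raises CalledProcessError if hugo fails,
    # so a broken build never gets pushed to the bucket
    subprocess.run(['hugo', '-s', path], check=True)

    # Sync the generated files to S3 (placeholder credentials)
    subprocess.run(
        build_rclone_cmd(sitepath, path, 's3_access_key', 's3_secret_key'),
        check=True,
    )
```

One further design choice worth noting: rclone can also read any flag from an environment variable, so the credentials could be supplied via the environment rather than on the command line, keeping them out of the process list.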