Amazon EC2, S3, s3fs, f4fpackager, and limits
Hello,
I have run into some limitations with the AMS (formerly FMS) EC2 AMIs and how they work with content on S3.
The bundled s3fs (S3 filesystem) on the Adobe AMIs does not read from S3 properly. I have mounted an S3 bucket at /hds-vod per the instructions, but the entire source file is read from S3 on every player request for a dynamic segment/fragment. In other words, there is no caching at all. It is unclear whether it is s3fs, FUSE, or httpd that fails to cache. And even if caching worked, how would the instance hold many TB of cached data (for concurrent fragmentation)? It seems the whole "AMS AMI on AWS" approach is a bit of a non-starter?
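For reference, the mount looks roughly like this (bucket name and credentials file path are placeholders; exact option names may differ on the older s3fs build that ships with the AMI). This is a setup fragment, not something runnable outside an instance with s3fs installed and valid AWS credentials:

```shell
# Mount the VOD bucket at the path AMS expects for HDS VOD content.
# "my-vod-bucket" is hypothetical; substitute your own bucket.
s3fs my-vod-bucket /hds-vod \
  -o passwd_file=/etc/passwd-s3fs \
  -o allow_other \
  -o use_cache=/mnt/s3fs-cache   # local disk cache dir; without this, every read hits S3
```

The `use_cache` option is documented in s3fs; whether the Adobe AMI's bundled build honors it is exactly what I can't confirm, since every fragment request still pulls the whole file from S3.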
Potential solutions, and why they fail:
- The same problem occurs on both the FMS 4.5.1 and AMS 5.0.x AMIs.
- Serving content from local instance storage or EBS is not possible, since we have many TB of data; the cost of doing so would be prohibitive, and EBS volumes top out at 1 TB anyway.
- Update the instance (with "yum update")? No; updating the old s3fs on the Adobe AMI makes no difference. Content is still never cached.
Does anyone have any good ideas? I really like the idea of AMS on EC2 with content in S3 and dynamic packaging. Can it be done?
Thanks,
Hans
